Sunday, March 30, 2008

Playing with Firefox 3 Beta 4

I've pretty much standardized on Firefox as my default browser (though not the default Windows browser).  I had been an Opera bigot for years, but the tipping point was when Google released their Browser Sync add-on, which made it easy to keep my bookmarks and other settings in sync between work and home.

I've been hearing good things about the latest beta (beta 4) of Firefox 3, so I decided to give it a shot.  I found a good article on the Hack-A-Day site about running Firefox 3 in tandem with Firefox 2, so it seemed safe to run them together.  My first impression is that it's fast, much faster than Firefox 2 and probably faster than Opera.  Faster is good; I look for things that make me go, and this is one of them.

Our Google Overlords have not updated the Google Browser Sync add-on to work with Firefox 3.  There have been quite a few requests for Firefox 3 support; I'm surprised that a company known for keeping products in beta for years isn't supporting the Firefox 3 beta.

I decided to take a look to see if anyone else had a browser sync tool that supported Firefox 3.  As it turns out, Foxmarks has a beta that supports the Firefox 3 beta.  I installed it into both my Firefox 2 and Firefox 3 installations and synced my Firefox 2 bookmarks up to the Foxmarks server.  I then tried to sync from Firefox 3, and it repeatedly crashed.

I assumed that it was the fault of the Foxmarks plugin, so I took a look around their site.  On their wiki, I read the following:

On initial sync (or regular sync of large change sets), Firefox sometimes crashes. (We believe this is a Firefox bug; if you experience this, please make sure you allow Firefox to submit a crash report to Mozilla.) [This is an open bug in Firefox being investigated by Mozilla.]

That tells me it's a Firefox bug and that I have a workaround.  Since Firefox 3 is puking on large change sets, the solution is to reduce the size of the change set.  I synced up Firefox 2 and then exported the bookmarks to a file.  I then imported that file into Firefox 3, and it was able to sync up changes after that.

Another add-on that I like is GMail Manager, and it too doesn't support Firefox 3.

Friday, March 28, 2008

When will the Visual Studio IDE catch up to the Delphi IDE?

Sometimes it's just the little things that annoy you.  If I add a button to a Delphi form, by default it is named Button1.  If I double-click on Button1, a Click event handler named Button1Click is added and wired up to the button's OnClick event.  Visual Studio does pretty much the same thing.  Where they differ is when you edit the button's name.

If I rename Button1 to something more meaningful like DestroyAllMonsters, the Delphi IDE will rename the event handler to DestroyAllMonstersClick.  When you rename the button in the Visual Studio IDE, it doesn't touch the event handler.  That's annoying.

Another difference is how Delphi handles event handlers when you delete their code or never add any.  When you double-click on the button, the new event handler is automagically created and wired up to the control, but it contains no code.  If you save the file at that point, Delphi removes the empty handler, under the assumption that you didn't mean to keep it, and cleans up the code for you.  Likewise, if you delete the code out of a handler and save the file, Delphi will remove the handler.
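
To make it concrete, here's a minimal sketch of what the Delphi IDE generates (the form and control names are my own invention):

    type
      TMainForm = class(TForm)
        DestroyAllMonsters: TButton;  { the IDE renamed this from Button1 }
        procedure DestroyAllMonstersClick(Sender: TObject);  { and this from Button1Click }
      end;

    procedure TMainForm.DestroyAllMonstersClick(Sender: TObject);
    begin
      { if this body is still empty when you save, Delphi removes the whole handler }
    end;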

This makes it easy to clean up code when you remove a control that you didn't really want or need any more.  Visual Studio doesn't do that for you, or at least not in C#.  When you spend your day working in both IDEs at the same time, those types of annoyances really add up.

Wednesday, March 26, 2008

Miss Bimbo is not welcome in my house

There have been a few articles posted on the Internet about a web site called www.missbimbo.com.  It's supposed to be a fashion game for young girls, aimed at ages 9 to 16, but it sounds pretty trashy.  It has been widely condemned because users are encouraged to compete against each other to become the "hottest, coolest, most famous bimbo in the whole world" by buying things like chest implants.  CNN had a good write-up of it; here's a direct quote:

The provocatively named "Miss Bimbo" Web site launched in the UK last month and is described as a "virtual fashion game for girls."

Girls are encouraged to compete against each other to become the "hottest, coolest, most famous bimbo in the whole world."

When a girl signs up, they are given a naked virtual character to look after and pitted against other girls to earn "bimbo" dollars so they can dress her in sexy outfits and take her clubbing.

They are told "stop at nothing," even "meds or plastic surgery," to ensure their dolls win.

Users are given missions, including securing plastic surgery at the game's clinic to give their dolls bigger breasts, and they have to keep her at her target weight with diet pills, which cost 100 bimbo dollars.

Breast implants sell at 11,500 bimbo dollars and net the buyer 2,000 bimbo attitudes, making her more popular on the site.

And bagging a billionaire boyfriend is the most desirable way to earn the all important "mula" or bimbo dollars.

As the father of two girls aged 7 and 5, I'm appalled by the site, and I'm not going to let them anywhere near it.  We have a family PC that the girls can use, and we do let them visit certain sites on the Internet.  I've already taken steps to prevent them from seeing that site in our household.  I've been a fan of OpenDNS.org for a couple of years for their speedy DNS lookups.  OpenDNS can also do some basic filtering, and it's trivial to get it to block entire domains.  I just logged into my OpenDNS account and added missbimbo.com as a blocked domain.

If you have never heard of OpenDNS, it's a free service that does speedy DNS lookups.  When you type www.cnn.com into your web browser, a DNS server takes that domain name and converts it to the actual IP address of the site so that it can be loaded into your browser.  Your Internet provider usually runs its own DNS servers, but the OpenDNS servers tend to be faster.
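
If you're curious, you can watch a lookup happen by querying the OpenDNS resolvers directly (208.67.222.222 and 208.67.220.220 are their public server addresses):

    nslookup www.cnn.com 208.67.222.222

Once a domain is blocked in your account, lookups for it resolve to an OpenDNS block page instead of the real site.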

I'm probably overreacting and my girls will never even hear of that site, but it doesn't hurt to be safe.

Tuesday, March 25, 2008

Enabling users with ESX 3.5.0

I've been playing admin on our shiny new ESX server, and it's been a struggle trying to get the user accounts configured.  I wanted to give our QA staff enough rights to log in through the VMware Web Access portal so that they could start and stop their virtual machines.  I figured I would just set up their accounts and put them in the "Virtual Machine User" role.  None of the accounts could log in.  They kept getting the dreaded "Login failed due to a bad username or password."

I dug around a bit and started searching the VMware forums.  Apparently everyone was using Active Directory (AD) to authenticate their user accounts.  I hadn't planned on using AD because we are going to be changing domains in the near future, but since nothing else was working, I figured it couldn't hurt.  This message led me to a very useful post by Geert Baeke on how to integrate Active Directory with ESX 3.  There was a lot of useful stuff in that post, but the part that I needed came down to this:

esxcfg-auth --enablead --addomain=domain.com --addc=domain.com


The VMware document uses the FQDN of a domain controller for the --addc parameter, but you can use the FQDN of the domain. That way, DNS is used to find domain controllers and use one of those. The command above modifies a few files like /etc/krb5.conf and also the system-auth file in /etc/pam.d. The ESX firewall is also automatically configured to open the needed ports for AD authentication.

Before you can logon with an AD account, you need to create a console user on the ESX box that has the same name as your AD account. For example, if you have an AD account domain\esxadmin, you need to add a user to the ESX console called esxadmin. The command to use is useradd esxadmin. You can also use VI Client to create the user. You can now logon with the account and use the AD password. I tested this with ESX 3.0.1 servers against Windows 2000 and Windows 2003 domains and it worked as advertised.

I did it and it worked like a charm.  Life is good.

Monday, March 17, 2008

Link rot and the ascendance of Wikipedia

As usual, Steve Tibbets hits the nail directly on the head with his post about link rot and Wikipedia.  I would have to say that the domain I link to the most is Wikipedia.  Most of the time I do it because it's convenient, but link rot (there!  I just did it) happens a lot.

Microsoft is pretty bad at this.  There have been too many times when I tried to follow a link into MSDN, only to find that MSDN has been reorganized and all of the links have changed.

If I link to another blog, I look for the permalink; that usually indicates that the link will be around for a while.  I also try to find links on multiple domains.  That way, it's less likely that you will get the dreaded 404 page when you follow a link that has rotted away.

Steve posted his rules for linking, and they just make sense:

    • If what I’m linking to has a top-level domain, then I will link to it.
    • If I’m linking to someone’s words (say, a blog post or magazine article), then I will link to that.
    • Otherwise, I’m linking to Wikipedia.

Friday, March 14, 2008

Two of my favorite geek toys (PowerShell and VMware) playing together

I can't wait to get the VI Toolkit for Windows.  It's a set of PowerShell cmdlets that let you manage VMware.  To get an idea of what you can do with PowerShell and ESX Server, check out the Automating VMware with PowerShell Lab Manual from the VMworld Europe 2008 conference.  You will be able to monitor the status of VMs on the ESX server and control the VMs remotely.  I'm curious to see if you will be able to take a running VM offline and move it to another storage volume.  That would make my life easier.
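
Judging from the lab manual, a session would look something like this (a sketch based on the beta materials; the cmdlet names could still change before release, and the server and VM names here are made up):

    # connect to the ESX server (or VirtualCenter)
    Connect-VIServer -Server esx01.example.com

    # list every VM and its power state
    Get-VM | Select-Object Name, PowerState

    # start and stop a VM remotely
    Get-VM -Name "QA-WinXP" | Start-VM
    Get-VM -Name "QA-WinXP" | Stop-VM -Confirm:$false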

The VI PowerShell Blog mentioned that the VI Toolkit will go into beta this month; I can't wait to get a hold of it.

Tuesday, March 11, 2008

The Lost Art of TSR Programming

Scott Allen had an amusing post, "Talks You Won’t See At the Local Code Camp", on his blog.  One of the talks was "The Lost Art of TSR Programming".  That shook some memories out of the cranial storage device.  I used to write TSR programs, more formally known as Terminate and Stay Resident programs.

This takes me back to the days of DOS, when giants like dBase and Lotus walked the land.  Your network was Novell, and you feared the Bindery.  You could only run one program at a time, and the 640K limitation was real, not just a saying commonly misattributed to Bill Gates.

When I was at Stochos, we wrote software to do statistical process control (SPC) for the manufacturing industry.  We would collect data from measurement devices or from the manufacturing hardware to monitor the process of making whatever was running.  Back in the late '80s, we had hardened PCs running DOS and our software right on the factory floor.  One of my tasks was to write the code to collect the data from the machines and get it into the PC.

Early on, I had decided that the data collection code would run separately from the SPC application.  This allowed the user to exit our app and run other apps while still collecting data.  The way I did that was to write the data collector as a TSR.

The whole TSR corner of DOS programming was pretty much an accident.  The DOS print spooler gave DOS the ability to print in the background while your application was running; it was the first TSR.  A TSR would do its setup and then call INT 27h to terminate while staying resident in memory.  Once loaded, the TSR would typically insert itself into the chain of handlers that receive hardware and/or software interrupts.  If the TSR wasn't careful, it could wreak havoc with the interrupt chain.
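
A minimal skeleton looked something like this (a from-memory sketch in DOS-era Microsoft C; real TSRs needed far more care around DOS re-entrancy and stack switching, and _dos_keep wraps the newer INT 21h function 31h rather than INT 27h itself):

    #include <dos.h>

    void (_interrupt _far *old_timer)(void);
    volatile unsigned ticks = 0;

    void _interrupt _far timer_hook(void)
    {
        ticks++;                  /* do only tiny, interrupt-safe work here */
        _chain_intr(old_timer);   /* pass control down the interrupt chain */
    }

    int main(void)
    {
        old_timer = _dos_getvect(0x1C);  /* user timer tick interrupt */
        _dos_setvect(0x1C, timer_hook);  /* insert ourselves into the chain */
        _dos_keep(0, 256);               /* stay resident; 256 paragraphs = 4K */
        return 0;                        /* never reached */
    }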

I would write TSRs that would hook into the serial port or parallel port interrupts.  Some of the time, all I needed to do was capture data as it came in and write it to a file.  Usually, the hardware would have some sort of protocol, and I would have to implement it in my code.  One of the odder ones was for a machine that printed foil packets, the kind used for condiments at fast food restaurants.  This machine did not support logging its data, but I found a way in.  It had a PC that functioned as an operator console, with a color screen and keyboard.  Usually the screen would be displaying the current process settings and readings from the machine's own measuring devices, but not always.  The PC was connected to the machine over the serial port and was basically being used as a terminal.

I wrote a TSR that would periodically scan the screen in memory.  Since the PC was running in text mode, it's fairly easy to read the screen from memory.  If I saw a certain sequence of characters at a specific location, I knew that I had the main console screen.  I then scanned different locations on the screen and wrote out a text file, logging each set of values as an attribute for our SPC application.
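
The scanning itself comes down to peeking at video memory (a sketch; this assumes 80-column color text mode, which lives at segment B800h with two bytes per cell, character then attribute):

    /* return the character displayed at (row, col) in 80x25 color text mode */
    char screen_char(int row, int col)
    {
        char _far *screen = (char _far *)0xB8000000L;  /* segment B800h, offset 0 */
        return screen[(row * 80 + col) * 2];           /* even bytes hold the characters */
    }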

The TSR would read a template file that listed which attributes to look for and at which locations on the screen to expect them.  The setup of the template was done by trial and error, but once it was set, the client never had to touch it.  It even allowed for character translation.  For some odd reason, the console display didn't use the number "0"; it used the letter "O".  That took more time to track down than you would have expected.

To keep the size of the TSR down, I used a library named CodeRunner with Microsoft C.  The CodeRunner library had the housekeeping code for doing the INT 27h stuff and managing interrupts.  It also took many of the standard runtime routines and replaced them with hand-coded assembler optimized for space over performance.  It also had the ability to run most of the TSR out of EMS memory, greatly reducing the footprint in the lower 640K space.  This particular TSR took about 6000 bytes of conventional memory.  I remember talking to Ratko Tomic, the engineer who wrote the CodeRunner library.  He was a genius at squeezing every extra byte out of TSR code.  It's a lost art.

Monday, March 10, 2008

I like having a build box

This morning I came across a blog by Landon Dyer called Dadhacker.  He got linked by BoingBoing for an entertaining post that he wrote about working on the Donkey Kong cartridge for Atari.  I started reading his other posts, and it turns out that I agree with nearly all of his opinions.  Except for build boxes.  He wrote:

The best thing you can do for your productivity when you’re tempted to set up that spare machine to do extra work for you is to ditch the thing.

If you are the only person using that build machine, I can see his point.  Almost.  I would argue that it's worth the time to have a second machine in case the first one goes to the land where DOS is eternally blessed.  For a team of programmers, a dedicated build box is a must-have.  No more guessing which programmer set which option to build which executable; they always get built the same way.  Plus, it offloads the build processing from your development environment, and that is always a good thing.

The other advantage of having a dedicated build box is that you can have it do everything.  We use FinalBuilder, and it's the kitchen sink of automated build tools.  We build about a half dozen shrink-wrap-ready applications on our build box, and they all pretty much follow the same pattern:

  1. Get the latest code from source control.
  2. Read an .ini file to get the version number and other build resources; the build tool bakes the version number into the compiled binaries and the installer.
  3. Compile the application.
  4. Collect all of the bits and put them in a folder for the install builder.
  5. Authenticode sign anything remotely executable that we compiled.
  6. Create the installer from the bits compiled in the previous step, plus the required pieces (ActiveX controls, assemblies, help files, etc.).
  7. Copy the installer to a deployment folder for QA to test.
  8. Send an email to QA and other interested parties that a new build is available, including a change list.

There are other minor tasks that get performed, but that's the gist of it.  And it works for Delphi Win32 and .NET assemblies, with error handling.  The time our department saves with that level of build automation clearly outweighs the maintenance time on the build box.
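
None of this is specific to FinalBuilder; the same pattern can be sketched as a plain script (purely illustrative; the tool names and paths here, like svn and Inno Setup, are stand-ins rather than what we actually run):

    # 1. get the latest code from source control
    svn update C:\build\MyApp
    # 2. read the version number from the .ini file
    $version = (Select-String -Path build.ini -Pattern '^Version=').Line.Split('=')[1]
    # 3-5. compile, then Authenticode sign the binaries
    dcc32 -B MyApp.dpr
    signtool sign /f codesign.pfx /p $certPassword MyApp.exe
    # 6. build the installer with the version baked in
    iscc /DVersion=$version MyApp.iss
    # 7. copy it to where QA can pick it up (step 8, the email, is left out here)
    Copy-Item Output\MyAppSetup.exe \\server\builds\$version\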