Wednesday, December 01, 2010

One final migration from Vista to Windows 7

Over Thanksgiving weekend, I upgraded our last Vista PC to Windows 7. We have a family PC that we all share.  It’s nothing too fancy: a three year old Dell Inspiron 530 that came with Windows Vista Home Edition preinstalled.  A Core 2 Duo 4300, running at 1.8 GHz, and 2 GB of RAM. 

We use it mainly for web browsing, word processing, and email.  The kids use it for games, mainly web based games.  My oldest likes to play “The Sims 3” on it.  Something in how “The Sims 3” was installed just killed the performance of the machine, even when the game wasn’t running.  There was an app that checks for updates for “The Sims 3” and it just took forever to run.

Over the Thanksgiving break, I decided to clean up the machine and bring back the performance.  I wanted to repave the machine with Windows 7.  I had a spare Windows 7 Ultimate disk (overkill, but I had it available) and I was not afraid to use it.  It wasn’t that Vista was horrible, but Windows 7 is better.

I decided to install on a new hard drive and keep the existing hard drive around, just in case something went seriously wrong.  I have Windows Home Server, in the shape of an HP MediaSmart EX495, so I could restore the machine back to yesterday’s backup.  While I could have done that, it’s still easier having access to the old drive in case I needed to get something off of it.

The first step was to inventory the installed hardware and software on the machine.  The hardware was pretty easy, it’s basically a stock Dell box.  I had added a Microsoft webcam and a Logitech mouse, but the rest was stock Dell.

The software was a little trickier.  We have four users on this machine: myself, my wife Anne, and my daughters, Kathryn and Laura.  The ladies all had local email accounts set up with Thunderbird; I needed to migrate those over.  They all have iPod Nanos and I needed to get their iTunes data across.  Plus all of their documents.

I couldn’t find the serial number for “The Sims 3”, so I copied it from the registry.  With 32-bit editions of Windows, the serial number is the default value stored in the following key:

HKEY_LOCAL_MACHINE\SOFTWARE\Electronic Arts\Sims\The Sims 3\ergc

For 64-bit Windows, look in:

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Electronic Arts\Sims\The Sims 3\ergc
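A quick sketch of how that lookup could be scripted, assuming Python’s standard winreg module (Windows only); the function names here are mine, made up for illustration:

```python
import platform

HKLM = "HKEY_LOCAL_MACHINE"

def sims3_key_path(windows_is_64bit: bool) -> str:
    """Return the registry key holding the serial number (the default value).

    On 64-bit Windows, 32-bit software keys are redirected under
    Wow6432Node, which is why the path differs."""
    node = r"SOFTWARE\Wow6432Node" if windows_is_64bit else "SOFTWARE"
    return HKLM + "\\" + node + r"\Electronic Arts\Sims\The Sims 3\ergc"

def read_serial() -> str:
    """Read the serial number itself - Windows only, via the stdlib winreg."""
    import winreg  # only importable on Windows
    subkey = sims3_key_path(platform.machine().endswith("64")).split("\\", 1)[1]
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
        value, _ = winreg.QueryValueEx(key, None)  # None reads the default value
        return value

if __name__ == "__main__":
    print(sims3_key_path(windows_is_64bit=True))
```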

After storing the serial number, I then went into iTunes and deauthorized the computer.  We use a single Apple ID account for all of our iTunes purchases and we have it on a few PCs.  Apple only allows 5 PCs to be used with any account.  When you reformat or decommission a PC, you want to make sure that it’s no longer authorized to your Apple account.  If you forget this step, you can deauthorize all of your PCs.  You would then have to authorize each PC again, and Apple only lets you do this once a year.

By installing the OS on a new drive and keeping the old OS on a mounted drive, it would make it easier to bring over files and folders.  I had a spare 250 GB Maxtor drive that used to be part of a RAID 5 array in my home development machine.  I had three identical drives in the array and I replaced them with “normal” drives after I got the MediaSmart server.  Being able to do bare metal restores from the server trumped the protection that I was getting from RAID 5.

So I installed the drive, Vista saw it, and I did a quick format.  I then rebooted the Dell and switched the boot order of the drives so that it would see the Maxtor drive first.  I then booted from the Windows 7 disk.

Now Microsoft has come a long way with OS installs.  Windows 7 installs fairly quickly and without bothering you too much.  But for some reason, it wouldn’t see the Maxtor drive.  It came up fine in the BIOS and Vista had no problem with it.  It was just invisible to Windows 7.  I spent a few hours playing with cables and boot order and BIOS tinkering to no avail.

It was now Saturday, the day after Black Friday.  Probably the best weekend of the year to buy electronic stuff.  The local Best Buy had Western Digital Caviar Black 1 TB drives on sale for the ridiculous price of $59.99.  For $60 I could get a faster drive with four times the capacity of the drive that came with the machine.  I went down and bought two of them.  I would use one in the family PC and the other would go in my development machine or into the MediaSmart server. 

I installed that drive and Windows 7 saw it without any problems.  The installer did its thing and 20 minutes later, I was running the 64-bit edition of Windows 7.  It had installed drivers for all of the onboard hardware and even the webcam.  Being a freshly paved Windows install, Windows Update needed to be run.  Installing the updates took longer than installing the OS, but that’s normal.  I did pick a new name for the PC; I wanted to make sure that this machine appeared as a new machine to the rest of the network.  Life is much easier that way.

Now that the OS was up and running, it was time to bring over the software.  First up: create the users.  You need to log in as each user at least once to get the folders all set up.

The first app was email.  We use Thunderbird as the desktop client.  My wife uses an account on this machine and the girls use our personal domain GMail accounts.  With the girls, I just had to fire up Thunderbird and add their email accounts.  Their GMail accounts are set to use IMAP with all of their mail stored in the Google Cloud.  Their existing email and settings came over automagically.  With Anne’s email, the messages were stored locally.  So I configured Anne’s email settings and then closed down Thunderbird.  I then copied the contents of her Thunderbird profile folder from the old hard drive to the appropriate location on the new drive.  The profile folder will have a random name and will be located in the “%APPDATA%\Thunderbird\Profiles” folder.
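Thunderbird records its profile folders in a profiles.ini file next to them, so the copy step can be sketched along these lines (a rough sketch with function names of my own, not Mozilla’s migration tooling):

```python
import configparser
import shutil
from pathlib import Path

def find_profile_dirs(thunderbird_root: Path) -> list:
    """Parse profiles.ini to locate the randomly-named profile folders."""
    ini = configparser.ConfigParser()
    ini.read(thunderbird_root / "profiles.ini")
    dirs = []
    for section in ini.sections():
        if ini.has_option(section, "Path"):
            rel = ini.get(section, "Path")
            # IsRelative=1 means the path is relative to the Thunderbird root
            if ini.get(section, "IsRelative", fallback="1") == "1":
                dirs.append(thunderbird_root / rel)
            else:
                dirs.append(Path(rel))
    return dirs

def migrate_profiles(old_root: Path, new_root: Path) -> None:
    """Copy each profile folder from the old drive to the new location."""
    for profile in find_profile_dirs(old_root):
        shutil.copytree(profile, new_root / profile.name, dirs_exist_ok=True)
```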

Next up was iTunes. To keep things easy to manage, we try to keep all of our music files in a “C:\mp3” folder.  So I copied that folder from the old drive to the new drive.  Apple also likes to place files in the “%HOMEPATH%\My Music\iTunes” folder.  I installed iTunes and then copied that folder for each account.  I then started up iTunes and authorized the PC.

I then copied over the documents for each user, plus some shared folders.  I then installed “The Sims 3” and the serial number worked.  The installer made an updater app start with Windows, but I ran MSConfig and fixed that.  To keep the saved games, I had copied over a few folders from “%HOMEPATH%\Documents\Electronic Arts\The Sims 3” and all was good.

Next, I installed the usual suspects: Microsoft Office, then Microsoft Security Essentials, then the Windows Home Server client.  The WHS client allows quick access to the server, plus enables the nightly backups.  I switched to Microsoft Security Essentials last year and I have been very pleased with it.  I can’t imagine dealing with the bloated offerings from Symantec or McAfee these days.  MSE does the job and doesn’t bog down the machine like the big boys do.

That being said, I consider the PC’s antivirus solution to be just one part of the protection.  Even with daily updates, a 0-day attack could still get your machine.  With the Windows Home Server, I can easily do a bare metal restore of the OS.

After getting everything back up and running, I added another 2 GB of RAM to the system.  It was cheap and with fast user switching, the more memory the better.  I still need to install a few utilities here and there, but for the most part the machine is back in service.  It’s much snappier.  Between removing 3 years of accumulated crap and Windows 7 being faster than Vista, it’s like having a new machine.

Sunday, October 17, 2010

How TweeVo survived the OAuthcalypse

Last month, I spent a few evenings adding OAuth support to Brian Peek's TweeVo application.  TweeVo is a little WPF based application that runs in the background and logs what your TiVo has recorded to a specified Twitter account.  I’ve been running it on and off as @AnotherTiVo. Brian keeps a Twitter list of known TweeVo accounts as tweevousers.

It's a good learning tool for showing how to query a web server and how to post to Twitter using the Twitter API, all wrapped up as a WPF application.  What it does is very clever and Brian did a nice article about it on the Coding4Fun site.

The web server is the built-in HTTP server that runs on the TiVo box.  You can use that web server to get a list of everything that has been recorded by your TiVo.  Brian wrote a nice, clean application that queries the selected TiVo units on your home network.  TweeVo polls each unit and checks the "Now Playing" list to see which shows were recorded since the last check by TweeVo.  It then posts the name of the show, plus a link, to the specified Twitter account.  The Zap2It link will list some information about the show, plus a link to tell your TiVo to record that show.
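The polling logic boils down to a set difference plus a length-limited status message.  A minimal sketch of that idea (the data shapes and function names are my assumptions, not TweeVo’s actual code):

```python
from datetime import datetime

def new_recordings(now_playing, last_check):
    """Pick out shows recorded since the previous poll.

    `now_playing` stands in for the entries parsed from the TiVo's
    Now Playing list: (title, recorded_at) tuples."""
    return [(title, ts) for title, ts in now_playing if ts > last_check]

def format_tweet(title, url, limit=140):
    """Compose the status text, trimming the title to fit the limit."""
    room = limit - len(url) - 1  # one space between title and link
    if len(title) > room:
        title = title[: room - 3] + "..."
    return title + " " + url

shows = [("NOVA", datetime(2010, 10, 16, 21, 0)),
         ("Mythbusters", datetime(2010, 10, 17, 3, 0))]
fresh = new_recordings(shows, last_check=datetime(2010, 10, 17, 0, 0))
print([t for t, _ in fresh])  # → ['Mythbusters']
```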

The original version of TweeVo posted to Twitter using the username and password for the account.  This was called Basic Authentication, or just Basic Auth.  The user’s credentials were stored in a config file by TweeVo and they were encrypted so nothing else could read them.  Brian released it a while back and it was a lot of fun for the people who used it.  Then came the OAuthcalypse.

Twitter supported two forms of authentication, Basic Auth and OAuth.  Twitter announced in the Spring that support for Basic Auth was being phased out and everyone using the Twitter API needed to implement OAuth.

With OAuth, the application requests Twitter access with an application key.  To get the application key, you would request one for your application from Twitter.  If they approved your request, you would get a consumerKey and a consumerSecret.  The user would be presented with a web dialog asking if they wanted to allow the application access to their account, and they would be prompted for their user name and password. 

If they allowed it, Twitter would send back an access token and the application would use that token and its own API key to access the Twitter API.  The web dialog would redirect back to the calling web application and life was good for the user.

That’s an oversimplification of the process, but it describes the basic mechanism for allowing a web application to post to your Twitter timeline.  There are a few advantages to using OAuth.  Since the application uses an access token, you could change your password without having to update the application.  Plus you could revoke the access token at any time from your Twitter web page.

For desktop applications, it was a little trickier.  You still needed to present the web dialog from Twitter to request access.  Since you couldn’t get back to the desktop application from a web page, the user would be presented with a PIN from the web dialog.  He would then manually type the PIN into an entry field provided by the desktop application.  The app would then request the access token from Twitter by providing the application key and the PIN.

While this mechanism keeps the user’s credentials away from the application, it’s annoying to use.  Plus you have now introduced a point of failure where the user types in the PIN.  A more streamlined approach called XAuth was made available by Twitter for desktop applications.

XAuth works by consolidating a few of the steps.  The user provides the user name and password to the application.  The application then requests the access token by passing the credentials and the application key.  This skips over the access request dialog and sends back the access token.  For the end user, this is a much simpler process. 
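Under the hood, OAuth and XAuth requests are signed the same way: the request parameters are normalized into a base string and HMAC-SHA1 signed.  A rough sketch of just the signing step, in Python (my own simplified version, not the code TweeVo uses):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def percent_encode(s: str) -> str:
    # OAuth 1.0a requires strict RFC 3986 encoding (only unreserved chars kept)
    return quote(s, safe="")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build the HMAC-SHA1 signature for an OAuth 1.0a request.

    For XAuth, `params` would include x_auth_username, x_auth_password,
    and x_auth_mode=client_auth along with the usual oauth_* fields."""
    # 1. Percent-encode and sort all parameters, then join key=value with '&'
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    normalized = "&".join(k + "=" + v for k, v in pairs)
    # 2. Signature base string: METHOD&encoded-url&encoded-params
    base = "&".join([method.upper(), percent_encode(url), percent_encode(normalized)])
    # 3. Signing key: consumer secret & token secret (empty until a token exists)
    key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```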

The original shutoff date for Basic Auth was June 30th, 2010.  This date became commonly known as the OAuthcalypse.  Due to heavy Twitter usage around World Cup activity, the Twitter team pushed the OAuthcalypse date to August.  On August 16th, Basic Auth usage would start getting rate limited, and the final shutoff occurred on August 30th.

The OAuthcalypse basically prevented TweeVo from posting to Twitter.  Brian was a little busy in September and I offered to help add OAuth/XAuth support to TweeVo.  I did some reading and played around with some .NET implementations of XAuth.

We implemented XAuth with TweeVo and tested it in late September.  Much of the XAuth code was based on code that had been posted in a set of blog posts by Shannon Whitely.  I made a few changes to Shannon’s code, but his implementation was sound and it saved me quite a bit of time.  That allowed me to spend more time reworking the TweeVo code to use XAuth and do more testing.  If you were using TweeVo 1.0, now is the time to get version 1.1.

Tuesday, September 28, 2010

Care and feeding of your wireless router

This morning I went to check my email from my iPad and had an unpleasant surprise.  No Internet.  I have the Wi-Fi only version of the iPad and it usually has a nice solid connection to my wireless router.  On the iPad, I fired up the Settings app and sure enough, under “Wi-Fi”, it displayed “not connected”.  I tapped “Wi-Fi” and let it scan for networks.  No sign of mine.  I tried my iPod Touch, same thing.

That pretty much rules out the iPad as being the problem, time to move up the chain of command.  My router is an ancient (by home router standards) Linksys, the WRT54GS.  It’s your basic workhorse router with 802.11b (11 Mbit/s) and 802.11g (54 Mbit/s) support.  I run a 3rd party firmware on it called DD-WRT.  DD-WRT greatly enhances the functionality of the router and lets you do a lot of cool things.  It’s about 99.9% cool and 0.1% flakey.  The flakey bit means that I need to reboot it every now and then.  I don’t know if it’s a memory leak, or something is getting confused, but a reboot every once in a while clears the cobwebs out.

From my PC with a wired connection to the router, I was able to access both the Internet and the administrative web pages on the router.  So the router wasn’t completely borked, just the wireless part.  From the admin pages on the router, I could get the router to do a wireless scan.  That’s where the router goes looking to see what other wireless hotspots are nearby.  This is very useful for picking the right channel.

With 802.11b and 802.11g, the router uses a narrow range of frequencies around 2.4 GHz.  In the US, this range is divided up into 11 channels.  It’s kind of fuzzy how they divided the channels: they are only 5 MHz apart, but each signal is roughly 22 MHz wide, so each channel bleeds into its neighbors on both sides.  In practice, that leaves only 3 channels that don’t interfere with each other: 1, 6, and 11.  Most routers default to channel 6, so in a crowded neighborhood you should see less traffic on 1 or 11 than on 6.
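The arithmetic behind that channel advice is easy to check, assuming 5 MHz channel spacing and a signal roughly 22 MHz wide:

```python
def channel_center_mhz(channel: int) -> int:
    """2.4 GHz Wi-Fi channel centers: channel 1 is 2412 MHz, 5 MHz apart."""
    return 2407 + 5 * channel

def channels_overlap(a: int, b: int, signal_width_mhz: int = 22) -> bool:
    """Two channels interfere when their centers are closer than one signal width."""
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < signal_width_mhz

# Adjacent channels collide, but 1, 6, and 11 are 25 MHz apart - in the clear:
print(channels_overlap(1, 2))   # → True
print(channels_overlap(1, 6))   # → False
print(channels_overlap(6, 11))  # → False
```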

So I unplugged the router and waited about 20 seconds.  It came back up and my PC could hit the Internet.  But still no Wi-Fi.  Since a power cycle didn’t fix it, I moved up to the nuclear option: the 30/30/30 reset.  The 30/30/30 reset is the best way to clear out all of the router settings to the firmware default values.  When you upgrade the firmware on the router, you should do the 30/30/30 before you flash the router and one more time again after you have flashed the router to the new firmware.

There are 3 steps to the 30/30/30 reset:

  1. Press and hold the reset button on the router and wait 30 seconds.
  2. While keeping the reset button pressed, unplug the power from the router and wait 30 more seconds.
  3. Plug the power back in while keeping the reset button pressed and wait for another 30 seconds.

Before you do the reset, you’ll want to write down any custom settings that you have performed. I use the following features added by DD-WRT:

  1. Configured the DynDNS client.  This assigns the current IP address of the router to a domain name provided by DynDNS.  When my ISP changes the IP address assigned to the router, DD-WRT passes the new IP address to DynDNS and they update the record for that domain name.  This makes it easy to use my VPN to connect to my home network.
  2. Configured the VPN server.  DD-WRT provides both PPTP and OpenVPN.  PPTP is simpler to configure and works for me, so that’s what I use.  I’ll use the VPN if I’m at a location that blocks sites that I want to access from my iPad.  I can also use the VPN if I need to open a Remote Desktop (RDP) connection to my home PC.
  3. Changed the default login account on the router.  Most routers let you change just the admin password, DD-WRT lets you change the login name as well.
  4. Set the router to use OpenDNS.  This isn’t a DD-WRT feature, most routers will let you set the DNS servers.  I use OpenDNS for their fast DNS servers and for their content filtering.
  5. Set the router to reboot itself once a week.  In a perfect world, your router would never need to be rebooted.  In this world, it doesn’t hurt for the router to get rebooted automatically once a week.  Mine is set to reboot every Wednesday at 5:30am.  Not much is going on at that hour.
  6. Enable UPnP.  I missed that one at first.  Windows Home Server uses UPnP to punch a secure hole through your firewall to allow remote access to your network through Windows Home Server.  I have an HP MediaSmart Server and I access my home network from work by going through its Remote Access.

After the reset, I set the Wi-Fi settings and after a few seconds, my iPad was back on the Internet.  Since I had already reset the settings, I decided to update the DD-WRT version on my router.  The DD-WRT web site has an interactive Router Database, where you can pick your router and it will tell you which versions of DD-WRT will work with your router.  It’s pretty handy; there are multiple versions of DD-WRT available, and different builds of each version.

Unless you have a large pile of routers to play with, you really want to use the suggested version of DD-WRT.  If you install the wrong version or get an unstable development build, you could brick the router.  I grabbed the standard set and installed it without blowing anything up.  Then I put back my custom settings.

So now I have full Wi-Fi back, but I don’t know why it failed.  It was working the evening before, but stopped sometime last night.  My WRT54GS is about 4 years old and has been on 24/7 the entire time.  It could be just wearing out.  My iPad supports 802.11n, so it may be time to upgrade to a newer router.  The Buffalo Technology Nfiniti Wireless-N High Power Router (WZR-HP-G300NH) works very well with DD-WRT and the people at SmallNetBuilder like it.  Jeff Atwood just picked one up too, and he’s a programmer with the hardware tweak bit set to on.  It would also be nice to have a router that supports gigabit speed WAN connections for the small set of devices that I own that support gigabit. 

The Nfiniti is a single band router; everything is broadcast over the 2.4 GHz range.  I may look at a dual band router, like the Netgear WNR3500L.  With a dual band router, you can have 802.11g devices on the 2.4 GHz band and the 802.11n devices on the 5 GHz band.  As far as I can tell, my iPad supports 802.11a/b/g/n on either band.  It may be worth it to get the heavy duty Netgear router over the Buffalo one.

Some additional notes:
Resetting the router evidently disabled my Windows Home Server’s Remote Access functionality.  I was able to repair that by performing the following steps:

  1. Launch the Windows Home Server Console
  2. Click on “Settings”
  3. Click on “Remote Access”
  4. Click on the “Repair…” button.  This launched the “Repairing Remote Access Configuration” wizard, which told me that UPnP was not enabled on the router.  Sure enough, I went into the UPnP page (under NAT / QoS) in DD-WRT and saw that UPnP was not enabled.  I enabled UPnP, rebooted the router, and let the wizard repair the Remote Access settings. 

The fun part was that I did this from my office PC.  I opened a VPN connection to my router and changed the UPnP settings remotely.

Monday, September 20, 2010

There was a problem sending the command to the program

I hate error messages like that.  It’s both detailed and vague at the same time.  What command was being sent and what was the problem?  Let me back up a few steps.  A family member bought a new PC running Windows 7 for his home office.  He works from home and accesses his work email account through Internet Explorer.  His email is based on Domino Web Access, which I’m assuming is the web front end for Lotus Notes email.

Whenever he was sent a document like an Excel file or Word file as an email attachment, he was unable to open the file.  He would double-click on the icon for the file, and the Domino web page would spit out “There was a problem sending the command to the program”.  He has Office 2007 installed and we verified that it was working just fine.

If he tried to save the file from Domino, he would get prompted for a folder to store the file and he would try to save it in his documents folder.  It went through the motions of saving the file.  No error messages, but the file wasn’t there.  I repeat, there was no error message.  I took a peek at the file system and found the file in an odd location.  All the files that he had been trying to launch were in “c:\users\hisname\AppData\Local\Temp\Low\Domino Web Access\80\”.  The “Low” part of the folder name tells us that Internet Explorer was redirected by Windows.

Starting with Vista, IE 7 runs with “low” privileges.  The temporary files, cookies, and history folders are now in “low privilege” folders.  Access to protected locations (root folder, documents, “my programs”, etc.) is redirected by the operating system to the %LocalAppData%\Temp\Low folder.  The folder virtualization that Vista/Win7 uses is pretty transparent to the application.  Unless the application checks for the file after it writes it (or knows about folder virtualization), it will not know that the file is in a different location.  That explains why the files were not in the right location, but we still needed to figure out why they couldn’t be launched by the browser.

After a bit of searching, I figured out the problem. He just needed to add the webmail server site as a “Trusted Site” in the Internet Explorer security options. With Windows 7, he needed to do the following steps in Internet Explorer 8:

  1. From “Tools” menu, select “Internet Options”.
  2. On the “Security” tab, click “Trusted sites” in the “Select a zone to view or change security settings.” panel.
  3. Click the “Sites” button.
  4. Under the “Add this website to the zone:”, enter the URL for the web mail site and click the “Add” button.
  5. If the URL starts with HTTP instead of HTTPS, make sure that the “Require server verification (https:) for all sites in this zone.” check box is cleared.
  6. Press the “Close” button.
  7. Press the “OK” button.

That will tell Internet Explorer that it’s safe to launch binary applications from the web mail application.  It sounds annoying, but it’s just Microsoft trying to keep rogue web sites from running nasty programs.
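Those dialog steps end up as values under the Internet Settings ZoneMap keys in the registry, so the change can also be scripted.  A hedged sketch that just computes the key and value (zone 2 is the Trusted sites zone; actually writing it would need winreg on Windows, and the function name is mine):

```python
from urllib.parse import urlsplit

TRUSTED_SITES = 2  # IE zone ids: 1=Local intranet, 2=Trusted sites, 3=Internet, 4=Restricted

def zonemap_entry(url: str):
    """Return the HKCU subkey, value name, and DWORD value that would add
    the site to the Trusted sites zone - the scripted twin of the dialog."""
    parts = urlsplit(url)
    subkey = (r"Software\Microsoft\Windows\CurrentVersion"
              r"\Internet Settings\ZoneMap\Domains" + "\\" + parts.hostname)
    # The value name is the URL scheme; the data is the zone number.
    return subkey, parts.scheme, TRUSTED_SITES

print(zonemap_entry("https://webmail.example.com/"))
```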

Friday, September 17, 2010

All about “F# and You”

Last night at our monthly Tech Valley .NET User Group (TVUG) meeting, we had Rick Minerich come in and do a presentation on F#.  It was a very good presentation.  Rick was enthusiastic and knows F# cold.  One of the cool things that he showed in his presentation were examples in both F# and C#.  It looked like you could replace every 5-10 lines of C# code with far fewer lines of F#.

F# isn’t for everyone, but if you are doing serious number crunching and want to process data in parallel, then you seriously want to look at using F#.  It’s a full-fledged member of Visual Studio 2010; it’s not just something bolted on to the architecture.

With F#, asynchronous programming is much simpler.  This is useful for performing operations that require asynchronous I/O.  A common example would be collecting data from multiple, unrelated web pages.  With F#’s asynchronous workflows, you define a set of operations to be performed in parallel.  The following example from MSDN shows one way that you can implement this.

open System.Net
open Microsoft.FSharp.Control.WebExtensions

let urlList = [ "Microsoft.com", "http://www.microsoft.com/"
                "MSDN", "http://msdn.microsoft.com/"
                "Bing", "http://www.bing.com" ]

let fetchAsync(name, url:string) =
    async {
        try
            let uri = new System.Uri(url)
            let webClient = new WebClient()
            let! html = webClient.AsyncDownloadString(uri)
            printfn "Read %d characters for %s" html.Length name
        with
            | ex -> printfn "%s" (ex.Message)
    }

let runAll() =
    urlList
    |> Seq.map fetchAsync
    |> Async.Parallel
    |> Async.RunSynchronously
    |> ignore

runAll()


This code will process each URL in urlList in parallel, and will wait until every download has been processed before continuing.  The wait state management and thread housekeeping are handled by F#; the programmer doesn’t have to worry about that at all.

If you want to know more about F#, Rick is a great source.  In addition to his web site, you can find him on Twitter as @Rickasaurus.

Monday, August 30, 2010

Resolving "Navigation to the webpage was canceled" with Compiled HTML Help files (.chm)

I'm working with an SDK from a vendor that we have partnered with.  They provide the SDK as a download that I grabbed over FTP with Internet Explorer.  The SDK has a .NET assembly to use, some sample code, and documentation in a .chm help file.  It's all neatly bundled in a .zip file, nothing too esoteric.

Since Windows directly supports .zip files, I used Windows Explorer (Windows 7) to copy the files from the .zip file to a new folder.  I then launched the help file to examine some new functions the vendor had added for me.  The help file loaded up, but I couldn’t access anything; every topic displayed “Navigation to the webpage was canceled”.


At first I thought the file was corrupt, but then I realized what was going on.  In Windows Explorer, I right-clicked on the .chm file and selected “Properties”.


In the properties dialog, you’ll see the text “This file came from another computer and might be blocked to help protect this computer.”  With Windows XP SP2 and later operating systems, the file’s zone information is stored with the file as a stream.  A stream is a separate resource stored with the file, just not exactly in the file.  Separate resource streams are a feature of the NTFS file system.  Since the .zip file had been downloaded with Internet Explorer, the .chm file was treated as if it had been downloaded directly. 

This is actually a good thing.  By default, Internet Explorer will not let you run content from your local disk without your express acceptance.  Since the Internet Explorer rendering engine is used to render the pages of the .chm file, it’s going to block pages that came from the Internet Zone.

You have a couple of ways of fixing this.  One way would be to disable the blocking of local content.  I don’t think that’s a safe way to operate, so I’m not going to describe how to do that.  In the file Properties dialog, there is an “Unblock” button.  Click that button and you can remove the Zone block.

Another way would be to use a command line tool and remove the Zone.Identifier resource stream.  Since NTFS file streams are pretty much invisible to the casual eye, you can grab a free tool to strip that data out for you.  Mark Russinovich’s Sysinternals collection of utilities includes a nice little gem called streams.  It’s a handy little utility: it will list what streams are associated with a file or folder and lets you delete them.  Recursively and with wild cards too.  One of the things I like about Sysinternals command line tools is that you can run them without a parameter to get a brief description of what it does and how to use it:

Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals -

usage: \utils\SysInternals\streams.exe [-s] [-d] <file or directory>
-s     Recurse subdirectories
-d     Delete streams

When I ran streams on my .chm file, I saw the following:

streams.exe SomeSdk.chm

Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals -

:Zone.Identifier:$DATA       26

You can also get a listing of the resource streams if you use the “/R” parameter with the DIR command. To see the contents of the stream, you can open it with notepad with syntax like this:

notepad MySdk.chm:Zone.Identifier

That would display something like this:

[ZoneTransfer]
ZoneId=3
Any value of 3 or higher would be considered a remote file.  So I ran streams one more time, just with the -d parameter, and got this:

streams.exe -d SomeSdk.chm

Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals -

Deleted :Zone.Identifier:$DATA

Once I did that, my help file was unblocked and ready to be used.
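As the notepad output shows, the Zone.Identifier stream is just INI-style text, so the “would Windows block this?” check is easy to sketch (a helper of my own, not part of the streams tool):

```python
import configparser

URLZONE_INTERNET = 3  # zones 3 (Internet) and 4 (Restricted) mark a file as remote

def is_blocked(zone_identifier_text: str) -> bool:
    """Parse Zone.Identifier stream contents and report whether Windows
    would treat the file as coming from another computer."""
    ini = configparser.ConfigParser()
    ini.read_string(zone_identifier_text)
    zone = ini.getint("ZoneTransfer", "ZoneId", fallback=0)
    return zone >= URLZONE_INTERNET

print(is_blocked("[ZoneTransfer]\nZoneId=3\n"))  # → True
```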

A Disaster Recovery Plan is useless if you don’t verify that it works.

I was just reading a Computerworld article about how American Eagle Outfitters just went through an eight day web outage (originally covered by StorefrontBacktalk).  It started when some hardware failed, then the backup hardware failed, then the software designed to restore the data to the replacement hardware failed, and finally their disaster recovery site wasn’t ready.  They were doing the right things: backups, backups of backups, and an alternate site in case their main site was dead in the water.  It just didn’t work when it was needed.  They were flat out down for four days, and then only had minimal functionality for another four days.

Being down or not fully functional for 8 days is a huge amount of time if you are an online retailer, but it would be catastrophic for just about any company to have their network down or seriously hobbled for over a week.  How effective would your company be today if all you had were isolated computers, without any Internet access?  On the plus side, that means no Farmville, which would actually be a productivity gain.  But life without email, that’s another story.  For most companies, email is a basic tool of business and you can’t get by without it.

You need to have a disaster recovery (DR) plan.  You have to plan on the basis that a meteor has taken out your building one night, and all your business tools are gone.  You need to backup the key assets of your network.  If it’s on a computer and you need it, then it should be backed up.

Those backups need to be off site.  If your office complex is under 8 feet of water, those backup tapes are going to be under 8 feet of water too.  Your IT staff needs to be taking those backups offsite.  It could be as simple as taking the backups to a safe deposit box, or a live backup of your system to an alternate location.

You also need a disaster recovery site.  You need backup network equipment that you can bring online with your current data.  It could be a dedicated hosting facility or a rack of servers at another location if your company has more than one office.  The important thing is that it’s periodically tested.  A DR site is no good if it doesn’t work.  That was the biggest failure for American Eagle Outfitters: their final point of protection wasn’t ready and had never been tested.

You also need to plan for the human resources.  If a disaster strikes your office, you need to have a plan to contact all of the employees and arrange for alternate office facilities.  If your DR site is up and running, it won’t do you any good if none of your employees can access it.

Being prepared with a DR plan is not a one time task or expense.  IT departments need to have the support and resources to keep the plan updated.  And they need to be able to test it on a periodic basis.  A plan that works today with X amount of data could be utterly useless next year when you have Y amount of data.  You need to be able to validate that your backups worked and that they can be restored in a reasonable amount of time.

That’s the hard sell: with companies looking to keep their costs down, it’s hard to keep items like this in the IT budget.  I look at what happened to American Eagle Outfitters as a cautionary tale of what happens if you skimp on an IT budget. 

The money quote for why the DR site was not fully functional:

“I know they were supposed to have completed it with Oracle Data Guard, but apparently it must have fallen off the priority list in the past few months and it was not there when needed.”

Penny wise, pound foolish.

Friday, August 27, 2010

DynDNS is making changes to their free Dynamic DNS accounts

I use the free version of the dynamic DNS names provided by DynDNS.  It gives me an easy way to connect to my home VPN.  They provide a domain name, usually for a home network.  Since most ISPs will change the IP addresses handed out to home users, you need to periodically update the IP address associated with your domain name.

Many home routers have the ability to automatically update your DynDNS account when they detect an IP change.  I run a custom firmware called DD-WRT on my Linksys router; DD-WRT has the ability to update DynDNS (and other providers).  If your router doesn’t have that capability, you can always find a small app or script to run on a PC and update the DynDNS account automatically for you.
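If you go the script route, the update itself is just an authenticated HTTP GET.  Here is a minimal Python sketch based on DynDNS’s published v2 update protocol; the host name, account details, and user-agent string are placeholders, not real credentials:

```python
import base64
import urllib.request

def build_update_request(username, password, hostname, myip):
    """Build the HTTP request that updates a DynDNS host record.

    Follows the DynDNS v2 update protocol: a GET to /nic/update
    with HTTP Basic authentication and a descriptive User-Agent.
    """
    url = ("https://members.dyndns.org/nic/update"
           "?hostname={0}&myip={1}".format(hostname, myip))
    creds = base64.b64encode(
        "{0}:{1}".format(username, password).encode()).decode()
    request = urllib.request.Request(url)
    request.add_header("Authorization", "Basic " + creds)
    # DynDNS asks update clients to identify themselves; this agent
    # string is a placeholder.
    request.add_header("User-Agent", "home-updater/1.0 admin@example.com")
    return request

# urllib.request.urlopen(build_update_request(...)) would send the update;
# the service replies with "good <ip>" on success or "nochg" if unchanged.
```

A real updater would also remember the last IP it sent and only call the service when the address actually changes, since DynDNS rate-limits redundant updates.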

It looks like DynDNS has made some changes to their Dynamic DNS accounts.  Previously, you could have up to five free names, chosen from a long list of 88 domains.  From now on, new accounts get two free names, from a list of 18 domains.  If you had more than two names on the free account, you get to keep them for as long as you keep them active.  If you fail to keep them updated (every 30 days), they will be dropped until you reach the free limit.  If you have the paid version, DynDNS Pro, then you are not affected by the change.

It’s a small price to pay for the free service.  I’ve been using it for over 5 years and have never had a problem with it.  Over the years, I had managed to collect six domain names.  Most of them were used for testing; I had completely forgotten that I had them.  I just pruned the list down to two, but I’m really only using one.  With the VPN, I can securely log into my home network while I’m on the road and have full access to my network.

A change at TVUG

For the last couple of years, Griff Townsend has been the President of the Tech Valley .NET Users Group (TVUG), here in Albany, NY.  Griff has put in many hours with TVUG activities and has done many presentations for us.  Unfortunately for us, Griff is moving on to bigger and better things in another state.  Griff will be missed here and we wish him well as he continues his career in a warmer climate.

As Vice-President of TVUG, I will be stepping up to the position of President and continuing in Griff’s footsteps.  Griff will be moving before the next TVUG meeting in September; I’ll be at that one.  We will be holding elections for the executive board officer positions at the end of the year, during the December meeting.  Our bylaws are posted here.

Griff is an experienced architect and teacher with a few Microsoft certifications under his belt.  While he’s leaving this area code, you can still keep up with him.  Griff has a blog, “Bloggin from my Noggin”.  You can also follow him on Twitter at @vidiotz.  And Griff’s LinkedIn profile is Griffith Townsend.

Monday, August 23, 2010

You should have WinPatrol on your system

You really should have WinPatrol installed on your system.  It’s a service type of application that monitors changes to your system.  For example, if an app tries to register a web browser toolbar, WinPatrol will warn you and give you a chance to block it.  There’s a free version and a paid version.  The free version is very good, but you’ll want the paid version.  It’s very affordable and will keep your machine from being bogged down with crapware and suspicious processes.  WinPatrol is written and supported by Bill Pytlovany, a well known Windows security professional.

I just installed the MyHeritage Family Tree Builder desktop application on my main development box.  I’ve been using FTB for a few years on our shared family PC.  It’s a nice genealogy application that I have used to publish my mother’s family tree online.  The technology is very cool and I will get back to describing it in more depth.

When I installed the FTB app, the installer asked if I wanted to change the default search provider to one provided by MyHeritage and to install a MyHeritage toolbar into Internet Explorer.  I declined both options.  I have IE set to use Bing as the default search provider and I didn’t want to change it.  I also did not want to install any toolbars into IE.

I avoid IE toolbars like they are the plague.  They eat up screen real estate, slow down the browsing experience, are the root cause of 70% of the browser crashes, and cause cancer in lab rats.  So I declined that option and installed FTB.  And the installer ignored my choices and tried to change the search provider and install their toolbar anyways. I don’t know if that was sloppy coding and testing on their part or it was intentional.  Either way, that wasn’t what I wanted.

How did I know this?  Because WinPatrol was doing its job and warned me about each change.  I saw a dialog that looked remarkably like this:


Scotty (the mascot and public face of WinPatrol) caught the installer’s attempt to register a new toolbar.  The “New Program Alert” dialog displays enough information about the pending change to your system that you can usually make a quick and informed decision on whether or not to block it.  If you see something you don’t recognize, clicking the “PLUS Info…” button will take you to a WinPatrol web page that displays more information about the object being installed.

Without WinPatrol, I would not have caught either change until the next time I started Internet Explorer.  With the MyHeritage stuff, it wasn’t malicious code, but it was code that I didn’t want to run.  And thanks to WinPatrol, it wasn’t going to run. I was able to prevent changes being made to IE, and that’s worth the price of admission.

Monitoring changes to IE is not the only thing in WinPatrol’s arsenal.  It gives you an easy way to see which apps are set to start when the computer boots up, and the means to block them.  If your computer seems to be running slower and slower each day, the odds are you picked up some processes that run in the background.  Most of them are pretty harmless, but when you start adding them up, they will slow down your PC.  WinPatrol has an online database that can identify most of them and tell you if you should keep them running or block them.

Sunday, August 22, 2010

Why are there randomly named folders with mpavdlta.vdm files on my C: drive?

I was looking for a folder on a PC at home (Vista, 32 bit) when I saw a bunch of folders with oddly formed filenames.  There were 13 of them, with names and dates like this:
05/15/2010  03:34 AM    <DIR>          2f934881647646785dbf842f86e91ec9
11/01/2009  03:24 AM    <DIR>          3b9e7b6e4c58a68b7e71c5e3
11/03/2009  04:18 AM    <DIR>          54693b59d80daf1421b7dda39a
10/31/2009  03:16 AM    <DIR>          56d6fe71d579ef79995fee64834082

They all had files with the name “mpavdlta.vdm” and every time I tried to open one of the folders with Windows Explorer, I would get the following dialog:

You don't currently have permission to access this folder

I would press the “Continue” button and would have to answer “Continue” to the UAC dialog that would pop up on the screen.

Ok, so what are these files, what are they doing here, and can I remove them? 

After a bit of searching, I found that they are definition files from the Microsoft Security Essentials antivirus application.  That answered the first question.  More precisely, they are the delta files for the antivirus definitions.  There is also an mpavbase.vdm, which is the base signature file.  The mpavdlta.vdm file has all of the changes since the last mpavbase.vdm was downloaded.  Gilham Consulting has a nice blog post that describes the various AV definition files that come with MSE.

As for how they got there, it appears to be a bug or design flaw in MSE.  The last randomly named folder from MSE was dated 5/26/2010, a good three months ago.  I fired up the MSE console and it displayed that the virus definitions were current as of 8/21/2010.


My first guess was that whatever had been causing the .vdm files and folders to be created all over my C: drive had since been addressed.  With Windows Explorer, I went in and was able to delete most, but not all, of the folders.  It appears that MSE is still doing the random folder thing, but I was able to clear out most of them.  So it looks like this is a bug in the current release.  From the various posts in the MSE forums, it appears that Microsoft is aware of the problem, but nothing official has been posted about a resolution.
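If you want to count the stragglers without answering a permission prompt for each folder, a few lines of script can do the survey.  This is a rough Python sketch; it assumes the folders sit directly under the drive root as they did on my machine, and it only reports, it doesn’t delete anything:

```python
import os

def find_definition_folders(root):
    """Return the subfolders of `root` that contain the MSE delta
    definition file mpavdlta.vdm."""
    hits = []
    for name in os.listdir(root):
        folder = os.path.join(root, name)
        try:
            if os.path.isdir(folder) and "mpavdlta.vdm" in os.listdir(folder):
                hits.append(folder)
        except PermissionError:
            # Some of these folders need elevation just to list;
            # skip them rather than crash.
            continue
    return hits
```

Run it against `C:\` from an elevated prompt and you get the list of candidate folders in one shot.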

I think it’s a bit odd that MSE is storing the AV definition files in this manner.  I’ve been pretty happy with how MSE is protecting my PC from virus attacks.  I wouldn’t call it perfect, but it’s more than good enough for my needs.  It’s a much lighter load on the system than the commercial AV solutions.  I can put up with a few randomly named folders for the protection that it provides, but I would be more comfortable if the files had been shoved in a folder under %ALLUSERSPROFILE% as a default location.  I’ll file this under “Nothing to see here, move along”.

Tuesday, August 10, 2010

New Windows Live Writer Plug-in Submission Process

I just received an email from Microsoft about changes to the submission and hosting of Windows Live Writer plug-ins.  It goes into effect on September 10, 2010 (one month from now).  Live Writer is a great blogging tool and it has a nice API for extending its functionality.  If you use Live Writer, you should check out the plug-ins at Windows Live Gallery.

I submitted a SmugMug plug-in a couple of years ago.  It will need to be resubmitted.  Actually, I’ll probably rewrite it.  I’m sure it’s obsolete by now.  Here are the contents of the email.

Dear Windows Live Writer plug-in authors,

On behalf of the Windows Live Writer team and all of our customers, thank you for the valuable contributions your plug-ins have made to the Writer experience.

We’re writing to let you know that Writer’s plug-in hosting and submission processes are changing.  Note: Existing plug-ins currently hosted on Windows Live Gallery will need to be resubmitted using the new process outlined below.  In the future, should you wish to provide additional plug-ins for Writer, we request that you also submit your plug-ins using this process.

We hope that you find the new plug-in submission process and hosting solution simple and lightweight. New plug-in submission process:
  1. Author uploads plug-in MSI installer to Windows Live SkyDrive using his/her Windows Live ID (email address).
  2. Author emails (Windows Live Essentials Plug-ins) including the following information:
    • Author name
    • Author Windows Live ID (that will host the plug-in MSI)
    • Author contact email address
    • Plug-in name
    • Plug-in description
    • Plug-in category (pick only one):
      • Formatting/clipboard
      • Post publishing
      • Pictures
      • Buttons
      • Other content
      • Miscellaneous
    • URL to plug-in MSI on SkyDrive
  3. Writer team verifies that the plug-in works as described.
  4. Writer team updates public list of Writer plug-ins that will include information on the plug-in and a link to the installer that is hosted on the author’s SkyDrive.
  5. Writer team notifies plug-in author that plug-in has been listed.
We value your efforts and want to ensure that your plug-ins will continue to be accessible for the many people interested in them.  In order to do this, we need you to resubmit any existing plug-ins on Windows Live Gallery using this new process by September 10, 2010. You will be able to add more plug-ins after this date, but we need you to move existing plug-ins by then.

Please email us with any questions or concerns about the new plug-in submission process at Other support requests should be directed to the Windows Live Solution Center.

The Windows Live Writer Team

How to shoot yourself in the foot writing installer upgrades

I’m in the middle of a development cycle for one of our products when QA logged an unexpected bug.  When you upgraded an existing version of the product to the new one, only some of the settings were being preserved.  This was a new bug; something I did recently broke the installer.

A little background on the product.  It’s a set of data collection services that allow our Onscreen product to work with various 3rd party GPS vendors.  I have a generic collection service, plus a handful of services for vendors that can’t work with the generic service.  Each collector is a service written in C# for the .NET Framework.  Most of the settings for each collector are stored in the service’s app.config file.

I have written the installer so that it does a full install each time.  If it detects a previous version of the product, it caches the service status (running, disabled, etc) and the app.config file for each service.  It puts the cached files in a temp folder that will get purged at the end of the installation. Then it does a silent uninstall of the previous version and installs the new version.

After the files for the new version have been installed, the installer restores the service status for each collector service and retrieves the cached settings.  I don’t copy the old file over the new one; I retrieve a set of settings and update the default values in the new app.config with the cached values from the previous one.  You don’t want to blindly copy over the app.config; you would wipe out new or changed settings that are required for the new version.

I wrote a simple command line app that gets called by the installer.  It reads the new app.config, the cached copy, and a set of rules in XML format.  For each new file, it looks for the matching file in the temp folder created by the installer.  It then does an XPath search and replace for each rule.  The rules are a little flexible: a rule can be directed to replace the default with the cached value or replace it with a new default value.
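The actual tool is a C# command line app, but the rule-driven replace can be sketched in a few lines of Python.  The rule fields and element paths here are illustrative, not the product’s real rule format:

```python
import xml.etree.ElementTree as ET

def apply_rules(new_config, cached_config, rules):
    """Carry settings from a cached app.config into the freshly
    installed one, driven by a list of rules.

    Each rule is a dict with an "xpath" to the element, the "attr"
    to update, and an "action": "carry" copies the cached value over
    the new default, "default" forces a new value from the rule.
    """
    new_tree = ET.parse(new_config)
    cached_tree = ET.parse(cached_config)
    for rule in rules:
        target = new_tree.find(rule["xpath"])
        if target is None:
            continue  # the setting no longer exists in the new version
        if rule["action"] == "carry":
            source = cached_tree.find(rule["xpath"])
            if source is not None:
                target.set(rule["attr"], source.get(rule["attr"]))
        else:  # "default": overwrite with the value supplied by the rule
            target.set(rule["attr"], rule["value"])
    new_tree.write(new_config)
```

Because the new file is the starting point, any setting introduced by the new version and not mentioned in a rule keeps its shipped default, which is exactly the behavior you want from an upgrade.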

When I need to do some file manipulation, I usually write a small command line app instead of trying to have the installer code do it.  There are a couple of reasons for this.  By using a separate app, I get to code and test the file manipulation outside of the installer environment.  Having an installer locate and enumerate a set of files, and then update them based on the contents of others, would be difficult to write and maintain.

Getting back to the bug at hand: the installer installs five services by default.  Three of them were getting their settings persisted across upgrades, two were not.  That threw me; for this type of activity, usually everything works or everything breaks.  I spent some time tracing through the code, which is always fun with an installer.  I ended up spending quality time with a VM of Server 2008 and had my command line app write detailed information to the Windows Event Log.  Crude, but effective.

As it turns out, I created the problem at the start of this development cycle.  I had renamed the executables as part of a rebranding exercise.  When the code went to locate the cached copy of the service’s settings, the new name didn’t match the cached name.  Yes sir, I had shot myself in the foot with a device of my own making.

I had to tweak my command line app a bit.  As it processed each app.config file, I added some additional code so that it would make two attempts to load the cached copy.  The first attempt would be by the current name.  If that fails, it then uses the previous file name.  After that, we were back in business.
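The two-attempt lookup amounts to a few lines.  A hypothetical sketch (the file names and rename map are made up for illustration, not taken from the product):

```python
import os

# Maps the current (post-rebranding) config name to the old name.
# These names are purely illustrative.
PREVIOUS_NAMES = {
    "AcmeCollector.exe.config": "OldCollector.exe.config",
}

def find_cached_config(cache_dir, current_name):
    """Find a cached app.config: try the current file name first,
    then fall back to the name used before the rebranding."""
    for candidate in (current_name, PREVIOUS_NAMES.get(current_name)):
        if candidate:
            path = os.path.join(cache_dir, candidate)
            if os.path.isfile(path):
                return path
    return None  # fresh install: nothing cached, keep the new defaults
```

Checking the current name first matters: on the next upgrade, the cache will hold the new name, and the fallback should never fire again.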

Friday, August 06, 2010

Notes on installing FinalBuilder 7

VSoft Technologies just released a new version of their build automation tool, FinalBuilder.  Version 7 gets a new look to their IDE and you can finally have multiple projects open at the same time.  Among other things, they added support for Hg and Git, plus full support for Visual Studio 2010.  It also supports Team Foundation Build 2010.


Along with FinalBuilder, you get a single user license for FinalBuilder Server.  FinalBuilder Server provides a nice web frontend so that you can remotely start a build process from a web browser.  We use this a lot. This tool does a lot and it will save you time.

After installing FinalBuilder, I loaded in an existing build project that had been created with FinalBuilder 6.  It complained about not being able to find the user variables.  User variables are variables defined within FinalBuilder and are global to all projects.  We use them to define some settings shared by all our projects.  Within the FinalBuilder IDE, you can cut and paste variables from one project to another, but it only works within the same version of the IDE.  I could cut and paste variables from a project loaded in one instance of FinalBuilder 6 to another project loaded in a separate instance of FinalBuilder 6, but I could not copy variables from FinalBuilder 6 to FinalBuilder 7.

Which is really odd, because the properties of the variables in FinalBuilder 7 are a superset of what is available in FinalBuilder 6.  Fortunately, this is easy to fix.  The FinalBuilder 6 user variables are stored in a file named FBUserVariables.ini, located in %USERPROFILE%\Application Data\FinalBuilder6.  FinalBuilder 7 follows the same pattern, so all I needed to do was copy that file to the %USERPROFILE%\Application Data\FinalBuilder7 folder.  Once I did that, the variables were all defined.

User variables are specific to the Windows user account running FinalBuilder.  When we set up our build machine, we created a user account specific to that machine.  All of the compilers and component sets are installed as that user.  FinalBuilder Server is set to run the projects as that user.  This makes life much simpler.  We completely avoid the issue of stuff getting installed under one user and not being available to another user.

I did have to replace a deprecated action with its replacement.  Actions are what FinalBuilder uses to implement a single task.  Copy a file, compile a project, get from source control: each would be considered a distinct action.  A build project is a series of actions, with some flow control.

The “Text Replace” action has been replaced with the “Text Find / Replace” action.  Not a big deal, but it threw me when I tried running the script.  I had run the “Batch Project Upgrade…” option from the IDE, but that only copies a previous version project as a new version project; it does not replace deprecated actions.  I can understand not changing the actions, but it would be helpful if deprecated actions could be identified.  It was a quick change to make, took about 30 seconds.

I found another odd glitch with FinalBuilder 7.  A really useful feature of FinalBuilder is that it can update the AssemblyInfo.cs files for every project in a .NET C# solution.  This makes it very easy to set the version number and other attributes for every assembly.  I built a multiple project solution with FinalBuilder 7 and only some of the projects had their AssemblyInfo.cs file updated.  After some peeking and poking, I saw that it was only updating the AssemblyInfo.cs files located in the project’s Properties folder.  If the file was in the project root folder, then it didn’t get updated.  I reported this as a bug on the FinalBuilder forums and moved the offending files to the Properties folder.  Built the project again and everything was updated.  Double rainbows all around for everyone. [Update: After reporting this bug, they fixed it a few hours later.]

I had one more glitch, this time with FinalBuilder Server.  I added a project to the server and it threw a “type initializer exception” error.  This is a known issue with FinalBuilder 7.  FinalBuilder Server uses PowerShell for some of its tasks.  If PowerShell is not installed, you will get the “type initializer exception” error.  The solution was easy: just install PowerShell.  Windows Update wanted to push it down anyways.  I did suggest to VSoft that they add a prerequisite check for PowerShell to the FinalBuilder installers.  This is easy to do and is well documented:

  • To check if any version of PowerShell is installed, check for the following value in the registry:
    • Key Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1
    • Value Name: Install
    • Value Type: REG_DWORD
    • Value Data: 0x00000001 (1)
  • To check whether version 1.0 or 2.0 of PowerShell is installed, check for the following value in the registry:
    • Key Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellEngine
    • Value Name: PowerShellVersion
    • Value Type: REG_SZ
    • Value Data: <1.0 | 2.0>

Version 2 of PowerShell is backwards compatible with version 1; as long as you have either version, FinalBuilder Server will be happy.  Even with these glitches, I really like using FinalBuilder.  If you are looking for an automated build tool with a decent IDE and debugging, you should consider FinalBuilder.

Tuesday, July 27, 2010

Wifi password cracking in the clouds

Bruce Schneier posted an article on his blog about WPA Cracker, a service for cracking WPA and other passwords.  Basically, you send them a dump of network traffic from a WiFi network and they will use a brute force algorithm to guess the password.  With a massive dictionary and brute force computing, they claim to be able to crack most WPA passwords in 20 minutes.  Once they have the WPA password, your home network is wide open to the outside world.

Some say that with enough monkeys typing at random on a keyboard, eventually you would get the complete works of Shakespeare.  Actually, you wouldn’t (the universe would come to an end first), but that is the general idea of a brute force password attack.  First you use a large dictionary of words, then after exhausting that list you go for random series of letter and number combinations.  This company is using a 400 CPU cluster and a database of 135 million words, and claims about 20 minutes to crack a password.  That works out to roughly 17,000 words a minute per CPU, working from their dictionary.

Even at that rate, it takes a lot of computing horsepower to crack WPA passwords.  WPA passwords are case sensitive, which means each letter in a password could be either upper or lower case.  An 8 letter word alone has 2^8 = 256 upper and lower case combinations.  It can take a while to work through the iterations, but with enough CPU power, you’ll work through most common words.

The way it works is that someone can park in front of your house and use commonly available software to capture the wireless data being broadcast through your house.  If you are using WPA encryption, all of that data will be encrypted.  They can then send a copy of the data to the wonderful people at WPA Cracker and pay them $35.  They will get back your WPA password, if the WPA Cracker people were able to crack it.

You can make it much, much harder for the password to be cracked.  Just use long passwords and mix numbers in with the letters.  WPA passwords are case sensitive, so if you limit the characters to just the upper and lower case letters, plus the numbers, you have 62 possible choices for each character in the password.  If you pick an eight character password, you would have 62^8 possible passwords.  That’s a pretty big number.  With 16 characters, you get 62^16, which is roughly 4.8 followed by 28 zeros.  That’s a number beyond big.  A brute force attack with today’s hardware would take centuries to process.
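The arithmetic is easy to sanity check.  A quick Python sketch of the numbers above:

```python
# Search space for case-sensitive alphanumeric passwords:
# 26 lowercase + 26 uppercase + 10 digits = 62 choices per character.
ALPHABET = 62

def search_space(length):
    """Number of possible passwords of a given length."""
    return ALPHABET ** length

def worst_case_years(length, guesses_per_second):
    """Worst-case brute force time in years at a given guess rate."""
    seconds = search_space(length) / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

print("%.2e" % search_space(8))    # 62^8  is about 2.18e+14
print("%.2e" % search_space(16))   # 62^16 is about 4.77e+28
```

Doubling the length from 8 to 16 characters squares the search space, which is why even an absurdly fast guessing rig gets nowhere against the longer password.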

I use a 32 character password; brute force attempts will fail on that one.  But I cheat a little.  I’m not going to remember a 32 character sequence, and it would just take forever when someone visited my home and we had to type in a random 32 character sequence.  What I did was make up an 8 character sequence using a 4 letter family name and 4 digits, and then repeat that sequence 4 times.  It’s easy to remember and easy to type.  Something like “Doug2112”.  It won’t show up in the dictionary and it’s not going to get cracked.  When I’m letting someone on our network, I just have them open up Notepad, type in the 8 characters, and then copy and paste it 4 times into the WPA password dialog.

If you want to quickly test a password to see how long it would take a desktop PC to crack it, there are web sites that will estimate it for you.  One such site estimated that a desktop PC would crack the “Doug2112” password in 252 days.  For that phrase repeated 4 times, it came up with 32 octillion years.  Take that with a grain of salt, but it shows how much harder it is to crack longer passwords.

Thursday, July 22, 2010

The Delmar Kid Chaser

I’ve been following the story of the Delmar Kid Chaser.  If you haven’t been following this one, it made the local news a few days ago.  Four teenagers raised a ruckus outside a local family’s home around 10pm on a Saturday night.  They banged on the back door and then rang the front door bell.  The homeowner chased the boys and caught one of them.  He brought the 14 year old into his home and then called the police.

The parents of the boy were asked by the police if they wanted to press charges against the homeowner, which they did.  The boy suffered some bumps and scrapes when he was caught, so the homeowner has been charged with endangering the welfare of a child and harassment.

That sounds a bit extreme to me.  If the boys had just rung the doorbell and run away, like typical teenage morons, nothing else would have happened.  The pounding on the back door, that changes things a bit.  That’s not normal “Ding-Dong-Ditch” behavior.  Two small children and the homeowner’s wife were sleeping upstairs when this happened.  The teenagers took the prank one step too far.

When you hear someone pounding on your back door at 10pm, your Homeland Security Advisory System goes from green to orange in about two heartbeats.  I can see why the homeowner gave chase.  A dumb idea, but I can see where his motivation came from.

The story has had a lot of play on the Times Union web site.  The father of the arrested boy has a blog on the site and was formerly the anonymous blogger behind the Albany Eye.  He has a fair amount of notoriety attached to his name, based on his blog postings.  Except in this case, he’s being quiet about the actions of his son and their consequences, except for being interviewed by the paper that runs his blog.

In the words of the boy’s father (as reported by the Times Union): “I’m very unhappy with my son’s behavior Saturday night. I don’t condone his actions under any circumstances and we’ll deal with this in the harshest possible way.”

I guess the “harshest possible way” also extends to the homeowner that his son terrorized that night.  He is facing a misdemeanor charge and now has to go to court.  The boys?  Too young to be prosecuted, they walk away from this.  Drop the charges, this is ridiculous.

Thursday, June 24, 2010

Enjoy some Bar-B-Que for a good cause

There’s a flyer floating around in the office and it’s worth sharing the details with the rest of the world (the part of the world that lives near Latham, NY).  Brooks Bar-B-Q of Oneonta will be participating in a fund raiser for The Coins Foundation on July 20, 2010.  They will be selling dinners to go, with the proceeds going to help build homes, schools, and hospitals in areas where they are needed the most.

This will be from 3:30 to 6:30pm and will be located at the COINS building at 6 Airport Park Blvd, Latham NY 12110.  I ordered dinners for my family at last year’s fund raiser and the food is really good.


You can preorder tickets starting on July 6th or buy them at the door.  Call COINS at 518-242-7200 for more information.

Available for pre-order:

Available the day of the event (while supplies last):

*Dinners include half chicken or pork spareribs, barbequed over charcoal, and served with a baked potato, coleslaw, dinner roll, beverage, and dessert.

The Bar-B-Q is being hosted by COINS USA to raise funds for The COINS Foundation.  The COINS Foundation is a registered 501(c)(3) corporation that partners with communities and other organizations to build homes, schools, and hospitals in the areas where they are needed the most.  The Foundation is committed to building 50 homes in Haiti with monies raised by the construction industry, primarily through the COINS 3 Peaks Challenge.


Brooks Bar-B-Que is located in Oneonta, NY and has been serving great food for the last 50 years.

Wednesday, May 26, 2010

Using robocopy with Visual Studio 2008 Post-build events

I have a solution with some 15 projects in it.  It’s for an application that has multiple optional services with shared assemblies, hence the 15 projects.  I have an installer that lets the user pick which of the optional bits to install.  For ease of maintenance, I version the installer with multiple folders: each version gets a folder for the installer source and a folder for the installable bits.

With Visual Studio, the default way of building a solution is to put the compiled assemblies and dependent files inside a bin folder for each project.  Each bin folder can contain multiple folders for different build options (debug, release, platform, etc).  I wanted a simple way of consolidating the files from each bin folder into a common folder for the installer.

I didn’t want to set the output path from the Build tab on the project options page.  While I could direct all of the compiled output to a single folder, that would send symbol and vshost files over as well.  I didn’t want those files; I wanted only what I actually needed for deployment.

I decided on using the Post-build event for each project and copy only the files that I wanted.  I had already created a “deploy” folder from the solution root folder.  So I defined a build event with the following command line:

robocopy $(TargetDir) $(SolutionDir)Deploy *.exe *.dll

If you don’t know robocopy, it’s a powerful file and folder copy command line tool.  It’s been around for years in one Microsoft SDK or another and has been included as part of the OS since at least Vista.  In its simplest form, you specify the source folder and the destination folder.  It has a slew of options.

$(TargetDir) is a macro that represents the complete path of the folder that the compiled units are created in, like bin\debug\ or bin\release\.  The $(SolutionDir) macro is the path to the solution folder.

I went to build one of the projects in the solution and it failed on the Post-build event.  Error 1, it said.  I copied the robocopy command line from the error message and ran it from a cmd prompt.  It executed without any errors and did exactly what it was supposed to do.  After a little digging, I found out that robocopy returns 1 as an exit code to indicate success.  Since the dawn of time (January 1, 1970), command line programs have returned 0 for the exit code to indicate success and any other value to indicate an error.  It’s a bizarre flaw in an otherwise very useful tool.
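For the record, robocopy’s exit code is a bitmask: 1 means files were copied, 2 means extra files or directories were detected, 4 means mismatches, and only values of 8 or higher mean a copy actually failed.  If you wrap robocopy in a script, a small helper can make that convention explicit (a sketch of the convention, not part of my build):

```python
def robocopy_succeeded(exit_code):
    """Interpret a robocopy exit code.

    Unlike most command line tools, robocopy returns a bitmask:
      1 = files copied, 2 = extra files/dirs found, 4 = mismatches,
      8 and up = at least one copy failed.
    Anything below 8 counts as a successful run.
    """
    return 0 <= exit_code < 8

print(robocopy_succeeded(1))   # True: "1" just means files were copied
print(robocopy_succeeded(8))   # False: at least one copy failed
```
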

Visual Studio 2008 does not appear to have any way of ignoring the exit code.  Well that’s just ducky.  So what I did was to bury the exit code in an unmarked grave so that Visual Studio 2008 wouldn’t be able to see or complain about it.  I created a batch file in the solution root folder and named it robopip.cmd.  Robopip has the following contents:

robocopy %1 %2 %3 %4 %5 %6 %7 %8 /R:1 /XF *vshost*
dir %TEMP% >nul

The first line executes the robocopy command with up to 8 parameters passed in by the build process.  The “/R:1” parameter basically says try once and then die.  The “/XF *vshost*” tells robocopy to ignore any file with “vshost” in the name.  The second line is crucial: it’s basically a low impact command to clear the last exit code.  We ask for the directory listing of the user’s temp folder and then toss away the results.  There is probably a more elegant way of clearing the error code; this was the first one that worked for me.

So now, I can use the following Post-build event command with each project.

$(SolutionDir)robopip $(TargetDir) $(SolutionDir)Deploy *.exe *.dll

I get the power of RoboCopy without Visual Studio squawking about the exit code.

Thursday, April 29, 2010

Suppressing a repeated column value in SQL

I was asked by one of my co-workers for some SQL help.  He needed a SQL statement that would suppress repeated column values in the result set.  Basically, the value would be shown for the first row and blanked for each successive row that had the same value.  Typically you would handle this in the application code, but we had a case where we had to pass data to another application and we needed to do this within a single SQL select statement.

For example if we have the values:

username             Category
-------------------- ----------
Brian                cs
Tom                  cs
Joe                  cs
Allen                cs
Bill                 ts
Steven               ts
Fred                 ts
Ted                  ts

We would want to return this as the output

username             Category
-------------------- ----------
Brian                cs
Tom
Joe
Allen
Bill                 ts
Steven
Fred
Ted
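In application code the transformation is trivial, which is why it usually lives there.  Just as a point of reference, here is a short Python sketch (names are mine) of the output we are after; the challenge in our case was producing the same result from a single SQL statement:

```python
# Sample data: (username, category) pairs, already grouped by category.
rows = [("Brian", "cs"), ("Tom", "cs"), ("Joe", "cs"), ("Allen", "cs"),
        ("Bill", "ts"), ("Steven", "ts"), ("Fred", "ts"), ("Ted", "ts")]

def suppress_repeats(rows):
    """Show each category value on its first row, blank it on the rest."""
    seen = set()
    out = []
    for user, cat in rows:
        out.append((user, "" if cat in seen else cat))
        seen.add(cat)
    return out
```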

Using the following table structure:

create table test(id int, cat varchar(10), username varchar(20))

We can make a query like

select t.username,
  case when t.id = (select top 1 id
                    from test t3
                    where t3.cat = t.cat
                    order by t3.cat, t3.username) then t.cat
  else ''
  end as Category
from test t
order by t.cat, t.username

What the case expression is doing is a sub-select against the same table, using top 1 to match only the first row for each category.  If the current row matches, we use the category value; otherwise we use an empty string.  This is not very efficient: you are doing the sub-select for each row of the query.  We needed to do this because the situation only allowed a single SQL statement to be executed.  We were working with a small set of records and this executed without any delay.

If you can call a stored procedure or execute a batch of SQL, you can split this up and gain a performance increase for larger sets of data.  Instead of doing the sub-select on each row, populate a table variable with the first row for each category.  Then do a left join from the main table to the table variable.  The combined SQL would look something like this:

declare @q table(cat varchar(10), username varchar(20))
insert into @q(cat, username)
select t.cat, MIN(t.username)
from test t
group by t.cat

select t.username, COALESCE(q.cat,'') as Category
from test t
left join @q q on t.cat = q.cat and t.username = q.username
order by t.cat, t.username
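The same two-step idea can be sketched in Python (again just an illustration, not our production code): build the first-username-per-category lookup once, then blank every row that doesn’t match it.  Note that using MIN(username) means the alphabetically first username in each category keeps the value, so with the sample data Allen, not Brian, carries “cs”:

```python
rows = [("Brian", "cs"), ("Tom", "cs"), ("Joe", "cs"), ("Allen", "cs"),
        ("Bill", "ts"), ("Steven", "ts"), ("Fred", "ts"), ("Ted", "ts")]

# Step 1: the "table variable" -- the MIN(username) for each category.
first = {}
for user, cat in rows:
    if cat not in first or user < first[cat]:
        first[cat] = user

# Step 2: the "left join" -- keep the category only on the matching row.
result = [(user, cat if first[cat] == user else "") for user, cat in rows]
```

The lookup is built in one pass instead of once per row, which is where the performance win over the correlated sub-select comes from.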

Another way to get this effect is to use a Common Table Expression (CTE) as part of the query.  This behaves like the table variable, but you have just a single select statement.  That is useful for reporting tools where you can only specify a single SQL statement to retrieve the data.

Using the above example data, the new select statement would look like this

with cte as
(
  select cat, min(username) as username
  from test
  group by cat
)
select t.username, COALESCE(c.cat,'') as Category
from test t
left join cte c on t.username = c.username and t.cat = c.cat
order by t.cat, t.username

This query should perform as well as (if not better than) the query with the table variable.

Friday, April 23, 2010

What to do when Firefox displays multiple versions of the Java Console in the Add-ons list

I upgraded to Firefox 3.6.3 the other day and at the same time, upgraded one of the add-ons I use (Xmarks).  I love Xmarks, but that’s for another time.  When Firefox upgrades an add-on, it displays the Add-ons list, which is a dialog box that lists all of the add-ons that are registered with Firefox.  You can use that dialog to enable or to disable an add-on or see if a newer version of an add-on is available. 

Firefox has two terms for add-ons.  They refer to them as either an add-on or an extension.  They mean pretty much the same thing with Firefox and I’m going with the term that appears in their own documentation, which is extension.

Something changed in Firefox with the 3.6 release.  I saw a bunch of extensions that were not there before.  I was seeing multiple versions of something called the “Java Console”.

Add-ons with Java Console

This add-on is a Java development tool.  You can use it to display error messages from Java applets running on a page.  I never use it and I didn’t want every old version in that list.  If you search on “firefox java console multiple” with your favorite search site, you’ll get over 100,000 hits.  So I figured that it wasn’t me, it was Java causing the problem.  One of the higher ranked hits took me to an article about Java in the knowledge base run by Mozilla (the people behind Firefox).

It appears that when the Java installer upgrades from a previous version of Java, it’s smart enough to remove or overwrite the previous version of the Java runtime, but not smart enough to remove the previous version of the Java Console extension.  The KB article about Java has a link explaining how to manually uninstall an add-on.  You can do it manually or from within Firefox.  I was unable to remove it through Firefox, so I decided to remove it manually, with extreme prejudice.

You can register an extension with Firefox in multiple ways.  With Java, the extensions are stored in the following folder:  “%ProgramFiles%\Mozilla Firefox\extensions”.

On my XP machine, a listing of that folder displayed the following:

Volume in drive C is JOHNSONWAX     Serial number is 00a4:443c
Directory of  C:\Program Files\Mozilla Firefox\extensions\*

 4/23/2010   9:42         <DIR>    .
4/23/2010   9:42         <DIR>    ..
4/16/2010  10:57         <DIR>    {972ce4c6-7e08-4474-a285-3208198ce6fd}
11/18/2008  12:37         <DIR>    {CAFEEFAC-0016-0000-0010-ABCDEFFEDCBA}
2/18/2009  16:52         <DIR>    {CAFEEFAC-0016-0000-0011-ABCDEFFEDCBA}
5/28/2009  10:31         <DIR>    {CAFEEFAC-0016-0000-0013-ABCDEFFEDCBA}
8/10/2009  14:35         <DIR>    {CAFEEFAC-0016-0000-0015-ABCDEFFEDCBA}
11/23/2009  10:29         <DIR>    {CAFEEFAC-0016-0000-0017-ABCDEFFEDCBA}
3/31/2010  13:57         <DIR>    {CAFEEFAC-0016-0000-0019-ABCDEFFEDCBA}
4/22/2010  11:33         <DIR>    {CAFEEFAC-0016-0000-0020-ABCDEFFEDCBA}
10/09/2009  14:17              49  {E0B8C461-F8FB-49b4-8373-FE32E9252800}
             49 bytes in 1 file and 10 dirs    4,096 bytes allocated
20,867,198,976 bytes free

From this listing, the multiple Java Console entries jump out because of the multiple folder names that start with “{CAFEEFAC-0016”.  The folder names are GUIDs, and it looks like Sun has embedded the version number into the GUID name.

You typically will not see the name of the extension in the folder name or in any of the file names in the extension folder.  You need to check the folder and read either the chrome.manifest or the install.rdf file, typically found in an extension folder.  When I opened the install.rdf (this file tells Firefox how to register the extension) in the {CAFEEFAC-0016-0000-0010-ABCDEFFEDCBA} folder, this is what I saw.

<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:name>Java Console</em:name>

The fourth line down lists the extension name as the value of the em:name element, and two lines below that you’ll see the version number as the value of the em:version element.  Since I could tell that the “{CAFEEFAC-0016” folders belong to the Java Console, all I had to do was delete all of the “{CAFEEFAC-0016” folders except for the last one.  Once I did that, all I needed to do was restart Firefox and the extra extensions were gone.
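The pruning logic is easy to script if you have to do this more than once.  A minimal Python sketch, assuming (as the folder listing suggests) that the version number embedded in the GUID lets a plain string sort put the newest folder last; actually deleting the folders is left to the reader:

```python
# Prefix shared by every Java Console extension folder.
JAVA_CONSOLE_PREFIX = "{CAFEEFAC-0016"

def stale_console_dirs(folder_names):
    """Return the Java Console folders to delete: all but the newest one.

    The version number is embedded in the GUID, so a plain string sort
    puts the newest folder last.
    """
    consoles = sorted(n for n in folder_names if n.startswith(JAVA_CONSOLE_PREFIX))
    return consoles[:-1]
```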

This appears to be a bug in the Java installer when you upgrade over an existing version.  If it’s smart enough to remove the previous runtime, it should be able to remove the previous version of the console.  It would not have been that hard: all they had to do was look at all the extension folders that start with “{CAFEEFAC-0016-0000” and remove the ones that are not the current version.

Sunday, April 18, 2010

Twitter? Time Warner Cable gets it

I have a love/hate relationship with Twitter.  Some days, I think it’s the greatest time waster since they invented the meeting.  Other days, it’s an endless source of amusement.  And on some days, it’s actually useful.  Yesterday was one of the useful days.

At about 9pm last night, my 10 year old daughter came running into my home office.  Kathryn was convinced that she had broken the TV.  Since the destruction of electronic devices is her sister’s department, I went in to the bedroom to take a look.  Kathryn was watching a show on TLC HD and it was not coming in well.  The audio was distorted and the image had tiling artifacts.  I assured my daughter that it wasn’t her fault and sent her off to read a book.

I checked a few channels and sure enough, it wasn’t just TLC HD that was having problems.  Most of the channels had the same problem.  And yet, a small number of channels were just fine.  My decidedly non-scientific search had determined that the SDV channels were having the problem and the non-SDV channels were just fine. 

SDV, or Switched Digital Video, is a technical hack that allows cable companies to provide more channels than they actually have the physical bandwidth to carry to every home.  What they do is only provide a channel when a viewer switches to it.  A cable node will supply somewhere between 1,000 and 2,000 homes in a neighborhood.  If the cable company provides 200 channels, only a fraction of them are actually being watched at any given time.

The “Switched” part of SDV means that the frequency a channel goes out on is not fixed; it can change based on usage.  When you select an SDV channel from your set top box, the channel request goes back to the cable office.  They check to see if any other box on your node is watching that channel.  If it’s already in use, the cable office sends back the frequency that the channel is on, and you get to watch “Matlock”.

It’s a clever hack, but it’s still a hack.  If enough people on your node try to watch different channels, it will fall apart like a cheap suit.  While I doubt that would actually happen, it’s technically feasible.  The alternative is to spend the money running fiber optic to each node and making the last run to the house over copper.  Running fiber to each house is very expensive; just ask Verizon how much it costs to wire a neighborhood for FiOS.

If no one else is currently watching that channel, the cable company assigns the channel to an open frequency for your node and sends that frequency back down to your set top box.  If you have a TiVo, the process works the same way.  Your cable company provides a device called a Tuning Adapter, that manages the frequency negotiation between the TiVo and the cable office.

I have both a set top box and a TiVo HD with a Tuning Adapter.  The set top box was having the audio distortion and video artifacts.  The TiVo wasn’t even displaying the SDV channels.  Since the non-SDV channels were crystal clear, I guessed that it wasn’t a signal problem and that it was more likely an issue at the cable office.  Since I could reproduce the problem with just the cable company’s equipment, I decided to call their tech support line.

My cable provider is Time Warner Cable of Albany, NY.  I went to their web site to look up their support number.  I called it and heard a recorded message explaining that there would be a long wait time for technical support.  TWC Albany is pretty well staffed; that only happens when something bad has happened.

While I was in the hold queue, I saw an option for chat support on their web site.  I tried that and the best thing that I can say about that experience was that it occupied 45 minutes of my time without any resolution.   The sum of the support rep’s skill set was to have me reboot the set top box and when that did not resolve the issue, he then scheduled a service call for a time slot that was 9 hours in the past.  Since my cable package did not include any time travel options, I wrapped up that chat session and tried calling the Albany support number again.  This was about 10pm.

While I was in a never ending on hold queue, I decided to see who from Time Warner Cable was on Twitter at that hour. As it turns out, the corporate office of Time Warner Cable has a bunch of people on Twitter.  Their main support account is @TWCableHelp and I could see on Tweetdeck that they were active at this late hour.  I tweeted the following message:

Can @TWCableHelp help with local issues? My SDV channels are unwatchable and I made the mistake of trying TWC online chat

The message was a little snarky, but 45 minutes of chatting with a CSR in India will do that to you.  Within two minutes, I got the following response:

@anotherlab Can certainly take a look into your issue. Had eChat already rebooted the box? ^BP

We then traded a few tweets and Bryan (the ^BP in the message) ended up calling me.  He was very professional and had a pretty good sense of humor.  We immediately ruled out time travel as an option and he was able to remotely check the signal strength coming into my set top box.  Everything looked normal.  While we were discussing what the root cause could be, the problem cleared up.  What ever had been going on at the local office, the problem appeared to have been resolved. 

We checked the channels on the set top box and on the TiVo.  From his end, he could see that about 50 channels were being sent over SDV on my node.  So we picked a few channels that were not being sent at that moment.  Everything was normal.

We chatted a bit about my TiVo and I mentioned that a few people had been complaining about the latest software update that went out to the DVRs that TWC supplies.  He wanted to know more about those complaints.  I told him that I didn’t know too much about those problems, but I had read a few complaints on some local sites.  I gave him a few links (Kristi Gustafson at the Albany Times Union and Albany HDTV) and he checked them out while we were talking.

Bryan then asked if I had any other questions.  So I asked him about the CCI Flag.  Almost all of the digital channels (except for local stations) carried by Time Warner Albany have the CCI Flag set to 0x02, which is usually referred to as “Copy Once”.  “Copy Once” means that the show can be recorded to a DVR, but it can not be copied from the DVR.  The DVR must respect the CCI Flag, otherwise it would not be allowed to use a CableCARD, which is needed to access the digital content.

This is not an issue with the DVR that the cable company supplies, but TiVo can do quite a bit more than a cable company DVR.  If you have multiple TiVo DVRs in your house, you can copy shows from one to another.  If you have young children like I do, that is a very handy feature.  We can also copy the shows to a PC for storage and transfer them back at a later date.  The video files are copy protected, it’s not like you can just upload them to the great unwashed masses on the Internet.

I have a couple of TiVo boxes and it’s been handy to watch a show recorded on one TiVo from another.  When the CCI flag is set to “Copy Once”, the only copy allowed is the one on the first TiVo; it just won’t copy to another TiVo in the house.  Many cable providers set most of the channels (excluding Pay Per View and premium channels like HBO) to 0x00, which is usually called “Copy Freely”.

Bryan told me that it was a local cable office decision.  I told him that the local cable office had made it clear to me that the decision was mandated by Time Warner Corporate.  He said that he would look into it and get back to me.  I think that he will find that it’s a dead end, but it was worth a shot.

All in all, I was very satisfied with the support provided by Time Warner Cable through their Twitter account.  But please change the CCI flag.