Monday, August 30, 2010

Resolving "Navigation to the webpage was canceled" with Compiled HTML Help files (.chm)

I'm working with an SDK from a vendor that we have partnered with.  They provide the SDK as a download that I grabbed over FTP with Internet Explorer.  The SDK has a .NET assembly, some sample code, and documentation in a .chm help file.  It's all neatly bundled in a .zip file, nothing too esoteric.

Since Windows directly supports .zip files, I used Windows Explorer (Windows 7) to copy the files from the .zip file to a new folder.  I then launched the help file to examine some new functions the vendor had added for me.  The help file loaded up, but I couldn’t access anything.  It looked like this:

[Screenshot: the blocked .chm help file]

At first I thought the file was corrupt, but then I realized what was going on.  In Windows Explorer, I right-clicked the .chm file and selected “Properties”.

[Screenshot: the .chm file’s Properties dialog]

If you look at the section that is highlighted in green, you’ll see the text “This file came from another computer and might be blocked to help protect this computer.”  With Windows XP SP2 and later operating systems, the file’s zone information is stored with the file as a stream.  A stream is a separate resource attached to the file, just not exactly in the file.  Alternate data streams are a feature of the NTFS file system.  Since the .zip file had been downloaded with Internet Explorer, the .chm file inside it was treated as if it had been downloaded directly.
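
If you’re curious, you can read that stream from code as well.  Here’s a minimal C# sketch; it P/Invokes CreateFile because the .NET Framework file APIs reject the file:stream path syntax, and the file name is just an example:

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class ZonePeek
{
    // CreateFile is used directly because FileStream/File.* in the .NET
    // Framework will not accept a path like "SomeSdk.chm:Zone.Identifier".
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFile(
        string fileName, uint access, uint share, IntPtr security,
        uint creationDisposition, uint flags, IntPtr template);

    const uint GENERIC_READ = 0x80000000;
    const uint FILE_SHARE_READ = 0x00000001;
    const uint OPEN_EXISTING = 3;

    static void Main(string[] args)
    {
        // Pass the blocked file on the command line, e.g. SomeSdk.chm
        string streamPath = args[0] + ":Zone.Identifier";
        using (SafeFileHandle handle = CreateFile(streamPath, GENERIC_READ,
            FILE_SHARE_READ, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (handle.IsInvalid)
            {
                Console.WriteLine("No Zone.Identifier stream found.");
                return;
            }

            using (var reader = new StreamReader(new FileStream(handle, FileAccess.Read)))
            {
                // Prints the [ZoneTransfer] section with the ZoneId value.
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}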

Storing the zone with the file is actually a good thing.  By default, Internet Explorer will not let you run content from your local disk that is flagged as coming from the Internet zone without your explicit acceptance.  Since the Internet Explorer rendering engine is used to render the pages of the .chm file, it’s going to block pages that came from the Internet zone.

You have a couple of ways of fixing this.  One way would be to disable the blocking of local content.  I don’t think that’s a safe way to operate, so I’m not going to describe how to do that.  Instead, note that the file’s Properties dialog has an “Unblock” button.  Click that button and you can remove the zone block.

Another way would be to use a command line tool to remove the Zone.Identifier stream.  Since NTFS file streams are pretty much invisible to the casual eye, you can grab a free tool to strip that data out for you.  Mark Russinovich’s Sysinternals collection of utilities includes a nice little gem called Streams.  It’s a handy little utility: it will list the streams associated with a file or folder and lets you delete them, recursively and with wildcards too.  One of the things I like about the Sysinternals command line tools is that you can run them without any parameters to get a brief description of what the tool does and how to use it:

Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals - www.sysinternals.com

usage: \utils\SysInternals\streams.exe [-s] [-d] <file or directory>
-s     Recurse subdirectories
-d     Delete streams

When I ran streams on my .chm file, I saw the following:

streams.exe SomeSdk.chm

Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals - www.sysinternals.com

C:\dev\SomeSdk.chm:
:Zone.Identifier:$DATA       26

You can also get a listing of the resource streams if you use the “/R” parameter with the DIR command (available in Vista and later).  To see the contents of the stream, you can open it in Notepad with syntax like this:

notepad SomeSdk.chm:Zone.Identifier

That would display something like this:

[ZoneTransfer]
ZoneId=3

Any value of 3 or higher means the file is treated as coming from a remote location.  So I ran Streams one more time, this time with the –d parameter, and got this:

streams.exe -d SomeSdk.chm

Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals - www.sysinternals.com

C:\dev\SomeSdk.chm:
Deleted :Zone.Identifier:$DATA

Once I did that, my help file was unblocked and ready to be used.
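
If you ever need to unblock a whole folder of downloaded files and don’t have streams.exe handy, the same thing can be done with a few lines of C#.  This is just a sketch: it calls the Win32 DeleteFile function directly because the managed File.Delete will not accept the file:stream syntax on the .NET Framework:

using System;
using System.IO;
using System.Runtime.InteropServices;

class Unblocker
{
    // DeleteFile can remove a named alternate data stream, leaving the
    // file itself (and its main data) intact.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool DeleteFile(string fileName);

    static void Main(string[] args)
    {
        // Pass the folder you unzipped the SDK into, e.g. C:\dev\SomeSdk
        foreach (string file in Directory.GetFiles(args[0], "*", SearchOption.AllDirectories))
        {
            // Quietly skips files that have no Zone.Identifier stream.
            if (DeleteFile(file + ":Zone.Identifier"))
            {
                Console.WriteLine("Unblocked " + file);
            }
        }
    }
}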

A Disaster Recovery Plan is useless if you don’t verify that it works.

I was just reading a Computerworld article about how American Eagle Outfitters just went through an eight day web outage (originally covered by StorefrontBacktalk).  It started when some hardware failed, then the backup hardware failed, then the software designed to restore the data to the replacement hardware failed, and finally their disaster recovery site wasn’t ready.  They were doing the right things: backups, backups of backups, and an alternate site in case their main site was dead in the water.  It just didn’t work when it was needed.  They were flat out down for four days, and then had only minimal functionality for another four days.

Being down or not fully functional for eight days is a huge amount of time if you are an online retailer, but it would be catastrophic for just about any company to have its network down or seriously hobbled for over a week.  How effective would your company be today if all you had were isolated computers, without any Internet access?  On the plus side, that means no Farmville, which would actually be a productivity gain.  But life without email, that’s another story.  For most companies, email is a basic tool of business and you can’t get by without it.

You need to have a disaster recovery (DR) plan.  You have to plan on the basis that a meteor has taken out your building one night and all your business tools are gone.  You need to back up the key assets of your network.  If it’s on a computer and you need it, then it should be backed up.

Those backups need to be off site.  If your office complex is under eight feet of water, those backup tapes are going to be under eight feet of water too.  Your IT staff needs to be taking those backups offsite.  It could be as simple as taking the backups to a safe deposit box, or as involved as a live backup of your system to an alternate location.

You also need a disaster recovery site.  You need backup network equipment that you can bring online with your current data.  It could be a dedicated hosting facility, or a rack of servers at another location if your company has more than one office.  The important thing is that it’s periodically tested.  A DR site is no good if it doesn’t work.  That was the biggest failure for American Eagle Outfitters: their final point of protection wasn’t ready and had never been tested.

You also need to plan for the human resources.  If a disaster strikes your office, you need to have a plan to contact all of the employees and arrange for alternate office facilities.  If your DR site is up and running, it won’t do you any good if none of your employees can access it.

Being prepared with a DR plan is not a one-time task or expense.  IT departments need to have the support and resources to keep the plan updated.  And they need to be able to test it on a periodic basis.  A plan that works today with X amount of data could be utterly useless next year when you have Y amount of data.  You need to be able to validate that your backups worked and that they can be restored in a reasonable amount of time.

That’s the hard sell: with companies looking to keep their costs down, it’s hard to keep items like this in the IT budget.  I look at what happened to American Eagle Outfitters as a cautionary tale of what happens if you skimp on an IT budget.

The money quote for why the DR site was not fully functional:

“I know they were supposed to have completed it with Oracle Data Guard, but apparently it must have fallen off the priority list in the past few months and it was not there when needed.”

Penny wise, pound foolish.

Friday, August 27, 2010

DynDNS is making changes to their free Dynamic DNS accounts

I use the free version of the dynamic DNS names provided by DynDNS.  It gives me an easy way to connect to my home VPN.  They provide a host name under one of their domains, usually for a home network.  Since most ISPs will change the IP addresses handed out to home users, you need to periodically update the IP address associated with your host name.

Many home routers have the ability to automatically update your DynDNS account when they detect an IP change.  I run a custom firmware called DD-WRT on my Linksys router, and DD-WRT can update DynDNS (and other providers) for you.  If your router doesn’t have that capability, you can always find a small app or script to run on a PC and update the DynDNS account automatically.
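
If you ever want to roll your own updater, the update call itself is just an authenticated HTTP GET.  The sketch below is based on the dyndns2 update protocol as I understand it, so check the DynDNS developer documentation for the current endpoint and parameters; the host name, credentials, and IP address are placeholders:

using System;
using System.Net;

class DynDnsUpdater
{
    static void Main()
    {
        // Placeholders - substitute your own host name, account, and current IP.
        string host = "myhome.dyndns.org";
        string currentIp = "203.0.113.10";

        using (var client = new WebClient())
        {
            // The update endpoint uses HTTP basic authentication.
            client.Credentials = new NetworkCredential("username", "password");
            // DynDNS asks update clients to identify themselves.
            client.Headers.Add("user-agent", "HomeUpdater/1.0 (me@example.com)");

            string url = "https://members.dyndns.org/nic/update?hostname=" + host +
                         "&myip=" + currentIp;

            // The response is a short status string such as "good" or "nochg".
            Console.WriteLine(client.DownloadString(url));
        }
    }
}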

It looks like DynDNS has made some changes to their Dynamic DNS accounts.  Previously, you could have up to five free host names, chosen from a long list of 88 domain names.  From now on, new accounts get two free names, chosen from a list of 18 domains.  If you already have more than two names on a free account, you get to keep them for as long as you keep them active.  If you fail to keep them updated (every 30 days), they will be dropped until you are down to the free limit.  If you have the paid version, DynDNS Pro, then you are not affected by the change.

It’s a small price to pay for the free service.  I’ve been using it for over 5 years and have never had a problem with it.  Over the years, I had managed to collect six domain names.  Most of them were used for testing; I had completely forgotten that I had them.  I just pruned the list down to two, but I’m really only using one.  With the VPN, I can securely log into my home network while I’m on the road and have full access to my network.

A change at TVUG

For the last couple of years, Griff Townsend has been the President of the Tech Valley .NET Users Group (TVUG) here in Albany, NY.  Griff has put in many hours with TVUG activities and has done many presentations for us.  Unfortunately for us, Griff is moving on to bigger and better things in another state.  Griff will be missed here and we wish him well as he continues his career in a warmer climate.

As Vice-President of TVUG, I will be stepping up to the position of President and continuing in Griff’s footsteps.  Griff will be moving before the next TVUG meeting in September, but I’ll be at that one.  We will be holding elections for the executive board officer positions at the end of the year, during the December meeting.  Our bylaws are posted here.

Griff is an experienced architect and teacher with a few Microsoft certifications under his belt.  While he’s leaving this area code, you can still keep up with him.  Griff has a blog, “Bloggin from my Noggin”, at http://grifftownsend.blogspot.com/.  You can also follow him on Twitter at @vidiotz, and Griff’s LinkedIn profile is under Griffith Townsend.

Monday, August 23, 2010

You should have WinPatrol on your system

You really should have WinPatrol installed on your system.  It’s a background application that monitors changes to your system.  For example, if an app tries to register a web browser toolbar, WinPatrol will warn you and give you a chance to block it.  There’s a free version and a paid version.  The free version is very good, but you’ll want the paid version.  It’s very affordable and will keep your machine from being bogged down with crapware and suspicious processes.  WinPatrol is written and supported by Bill Pytlovany, a well-known Windows security professional.

I just installed the MyHeritage Family Tree Builder (FTB) desktop application on my main development box.  I’ve been using FTB for a few years on our shared family PC.  It’s a nice genealogy application that I have used to publish my mother’s family tree online.  The technology is very cool and I’ll get back to describing it in more depth another time.

When I installed the FTB app, the installer asked if I wanted to change the default search provider to one provided by MyHeritage and to install a MyHeritage toolbar into Internet Explorer.  I declined both options.  I have IE set to use Bing as the default search provider and I didn’t want to change it.  I also did not want to install any toolbars into IE.

I avoid IE toolbars like they are the plague.  They eat up screen real estate, slow down the browsing experience, are the root cause of 70% of the browser crashes, and cause cancer in lab rats.  So I declined that option and installed FTB.  And the installer ignored my choices and tried to change the search provider and install their toolbar anyways.  I don’t know if that was sloppy coding and testing on their part or if it was intentional.  Either way, that wasn’t what I wanted.

How did I know this?  Because WinPatrol was doing its job and warned me about each change.  I saw a dialog that looked remarkably like this:

[Screenshot: WinPatrol’s “New Program Alert” dialog]

Scotty (the mascot and public face of WinPatrol) caught the installer’s attempt to register a new toolbar.  The “New Program Alert” dialog displays enough information about the pending change to your system that you can usually make a quick and informed decision on whether or not to block it.  If you see something you don’t recognize, clicking the “PLUS Info…” button will take you to a WinPatrol web page that displays more information about the object being installed.

Without WinPatrol, I would not have caught either change until the next time I started Internet Explorer.  With the MyHeritage stuff, it wasn’t malicious code, but it was code that I didn’t want to run.  And thanks to WinPatrol, it wasn’t going to run. I was able to prevent changes being made to IE, and that’s worth the price of admission.

Monitoring changes to IE is not the only thing in WinPatrol’s arsenal.  It gives you an easy way to see which apps are set to start when the computer boots up, and the means to block them.  If your computer seems to be running slower and slower each day, the odds are you have picked up some processes that run in the background.  Most of them are pretty harmless, but when you start adding them up, they will slow down your PC.  WinPatrol has an online database that can identify most of them and tell you if you should keep them running or block them.

Sunday, August 22, 2010

Why are there randomly named folders with mpavdlta.vdm files on my C: drive?

I was looking for a folder on a PC at home (Vista, 32-bit) when I saw a bunch of folders with oddly formed names.  There were 13 of them, with names and dates like this:
05/15/2010  03:34 AM    <DIR>          2f934881647646785dbf842f86e91ec9
11/01/2009  03:24 AM    <DIR>          3b9e7b6e4c58a68b7e71c5e3
11/03/2009  04:18 AM    <DIR>          54693b59d80daf1421b7dda39a
10/31/2009  03:16 AM    <DIR>          56d6fe71d579ef79995fee64834082

They all had files with the name “mpavdlta.vdm”, and every time I tried to open one of the folders with Windows Explorer, I would get the following dialog:


You don't currently have permission to access this folder


I would press the “Continue” button and then have to answer “Continue” to the UAC prompt that popped up on the screen.

Ok, so what are these files, what are they doing here, and can I remove them? 

After a bit of searching, I found that they are definition files from the Microsoft Security Essentials (MSE) antivirus application.  That answered the first question.  More precisely, they are the delta files for the antivirus definitions.  There is also an mpavbase.vdm, which is the base signature file; mpavdlta.vdm has all of the changes since the last mpavbase was downloaded.  Gilham Consulting has a nice blog post that describes the various AV definition files that come with MSE.
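
If you want to see how many copies of the delta file are scattered around, you can search the drive for it by name from a command prompt (run it elevated, since the stray folders have restricted permissions and you may otherwise just get “Access is denied” for each one):

dir C:\mpavdlta.vdm /s /b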

As for how they got there, it appears to be a bug or design flaw in MSE.  The last randomly named folder from MSE was dated 5/26/2010, a good three months ago.  I fired up the MSE console and it displayed that the virus definitions were current as of 8/21/2010.

[Screenshot: the Microsoft Security Essentials console]


My first guess was that whatever had been causing the .vdm files and folders to be created all over my C: drive had since been addressed.  With Windows Explorer, I went in and was able to delete most, but not all, of the folders, so it appears that MSE is still doing the random folder thing.  It looks like this is a bug in the current release.  From the various posts in the MSE forums, it appears that Microsoft is aware of the problem, but nothing official has been posted about a resolution.

I think it’s a bit odd that MSE is storing the AV definition files in this manner.  I’ve been pretty happy with how MSE is protecting my PC from virus attacks.  I wouldn’t call it perfect, but it’s more than good enough for my needs, and it’s a much lighter load on the system than the commercial AV solutions.  I can put up with a few randomly named folders for the protection that it provides, but I would be more comfortable if the files had been kept in a folder under %ALLUSERSPROFILE% as a default location.  I’ll file this under “Nothing to see here, move along”.

Tuesday, August 10, 2010

New Windows Live Writer Plug-in Submission Process

I just received an email from Microsoft about changes to the submission and hosting of Windows Live Writer plug-ins.  The changes go into effect on September 10, 2010 (one month from now).  Live Writer is a great blogging tool and it has a nice API for extending its functionality.  If you use Live Writer, you should check out the plug-ins at Windows Live Gallery.

I submitted a SmugMug plug-in a couple of years ago.  It will need to be resubmitted.  Actually, I’ll probably rewrite it.  I’m sure it’s obsolete by now.  Here are the contents of the email.

Dear Windows Live Writer plug-in authors,

On behalf of the Windows Live Writer team and all of our customers, thank you for the valuable contributions your plug-ins have made to the Writer experience.

We’re writing to let you know that Writer’s plug-in hosting and submission processes are changing.  Note: Existing plug-ins currently hosted on Windows Live Gallery will need to be resubmitted using the new process outlined below.  In the future, should you wish to provide additional plug-ins for Writer, we request that you also submit your plug-ins using this process.

We hope that you find the new plug-in submission process and hosting solution simple and lightweight. New plug-in submission process:
  1. Author uploads plug-in MSI installer to Windows Live SkyDrive using his/her Windows Live ID (email address).
  2. Author emails wleplugins@microsoft.com (Windows Live Essentials Plug-ins) including the following information:
    • Author name
    • Author Windows Live ID (that will host the plug-in MSI)
    • Author contact email address
    • Plug-in name
    • Plug-in description
    • Plug-in category (pick only one):
      • Formatting/clipboard
      • Post publishing
      • Pictures
      • Buttons
      • Other content
      • Miscellaneous
    • URL to plug-in MSI on SkyDrive
  3. Writer team verifies that the plug-in works as described.
  4. Writer team updates public list of Writer plug-ins that will include information on the plug-in and a link to the installer that is hosted on the author’s SkyDrive.
  5. Writer team notifies plug-in author that plug-in has been listed.
We value your efforts and want to ensure that your plug-ins will continue to be accessible for the many people interested in them.  In order to do this, we need you to resubmit any existing plug-ins on Windows Live Gallery using this new process by September 10, 2010. You will be able to add more plug-ins after this date, but we need you to move existing plug-ins by then.

Please email us with any questions or concerns about the new plug-in submission process at wleplugins@microsoft.com. Other support requests should be directed to the Windows Live Solution Center.

Thanks,
The Windows Live Writer Team

How to shoot yourself in the foot writing installer upgrades

I’m in the middle of a development cycle for one of our products when QA logged an unexpected bug.  When you upgraded an existing version of the product to the new one, only some of the settings were preserved.  This was a new bug; something I had done recently broke the installer.

A little background on the product.  It’s a set of data collection services that allow our Onscreen product to work with various 3rd party GPS vendors.  I have a generic collection service, plus a handful of services for vendors that can’t work with the generic service.  Each collector is a service written in C# for the .NET Framework.  Most of the settings for each collector are stored in the service’s app.config file.

I have written the installer so that it does a full install each time.  If it detects a previous version of the product, it caches the service status (running, disabled, etc) and the app.config file for each service.  It puts the cached files in a temp folder that will get purged at the end of the installation. Then it does a silent uninstall of the previous version and installs the new version.

After the files for the new version have been installed, the installer restores the service status for each collector service and retrieves the cached settings.  I don’t copy the old file over the new one; I just retrieve a set of settings and update the default values in the new app.config with the cached values from the previous one.  You don’t want to blindly copy over the app.config; you would wipe out new or changed settings that are required for the new version.

I wrote a simple command line app that gets called by the installer.  It reads the new app.config, the cached copy, and a set of rules in XML format.  For each new file, it looks for the matching file in the temp folder created by the installer.  It then does an XPath search and replace for each rule.  The rules are a little flexible: a rule can be directed to replace the default with the cached value or with a new default value.
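
The heart of the tool is nothing more than XmlDocument plus an XPath lookup per rule.  The file layout, rule, and “PollingInterval” key below are made up for illustration (the real tool and its rule format are internal), but the mechanics look roughly like this:

using System;
using System.Xml;

class ConfigMerger
{
    // Applies one rule: copy the value selected by the XPath expression from
    // the cached app.config into the freshly installed one.
    static void ApplyRule(XmlDocument newConfig, XmlDocument cachedConfig, string xpath)
    {
        XmlNode target = newConfig.SelectSingleNode(xpath);
        XmlNode source = cachedConfig.SelectSingleNode(xpath);
        if (target != null && source != null)
        {
            // For appSettings-style entries the interesting part is the value attribute.
            target.Attributes["value"].Value = source.Attributes["value"].Value;
        }
    }

    static void Main(string[] args)
    {
        var newConfig = new XmlDocument();
        newConfig.Load(args[0]);            // freshly installed app.config
        var cachedConfig = new XmlDocument();
        cachedConfig.Load(args[1]);         // copy cached by the installer

        // One hard-coded rule for the sketch; the real rules come from an XML file.
        ApplyRule(newConfig, cachedConfig,
            "/configuration/appSettings/add[@key='PollingInterval']");

        newConfig.Save(args[0]);
    }
}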

When I need to do some file manipulation, I usually write a small command line app instead of having the installer code try to do it.  There are a couple of reasons for this.  By using a separate app, I get to code and test the file manipulation outside of the installer environment.  Having an installer locate and enumerate a set of files, and then update them based on the contents of others, would be difficult to write and maintain.

Getting back to the bug at hand: the installer installs five services by default.  Three of them were getting their settings persisted across upgrades, two were not.  That threw me; with this type of activity, usually everything works or everything breaks.  I spent some time tracing through the code, which is always fun with an installer.  I ended up spending quality time with a VM of Server 2008 and had my command line app write detailed information to the Windows Event Log.  Crude, but effective.

As it turns out, I created the problem at the start of this development cycle.  I had renamed the executables as part of a rebranding exercise.  When the code went to locate the cached copy of a service’s settings, the new name didn’t match the cached name.  Yes sir, I had shot myself in the foot with a device of my own making.

I had to tweak my command line app a bit.  As it processes each app.config file, it now makes two attempts to load the cached copy: the first attempt is by the current name, and if that fails, it uses the previous file name.  After that, we were back in business.
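
The fallback itself is trivial.  It looks something like this sketch, where “Acme” and “OldBrand” are stand-ins for our real product names:

using System.IO;

static class CachedConfigLocator
{
    // Looks for the cached app.config under the current executable name first,
    // then under the pre-rebranding name the older installer would have cached.
    public static string Find(string cacheFolder, string serviceExeName)
    {
        // e.g. serviceExeName = "Acme.GpsCollector.exe" -> "Acme.GpsCollector.exe.config"
        string current = Path.Combine(cacheFolder, serviceExeName + ".config");
        if (File.Exists(current))
            return current;

        // Fall back to the old file name from before the rebranding exercise.
        string legacyName = serviceExeName.Replace("Acme.", "OldBrand.");
        string legacy = Path.Combine(cacheFolder, legacyName + ".config");
        return File.Exists(legacy) ? legacy : null;
    }
}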

Friday, August 06, 2010

Notes on installing FinalBuilder 7

VSoft Technologies just released a new version of their build automation tool, FinalBuilder.  Version 7 gives the IDE a new look, and you can finally have multiple projects open at the same time.  Among other things, they added support for Hg and Git, plus full support for Visual Studio 2010 and Team Foundation Build 2010.

[Screenshot: the FinalBuilder 7 IDE]

Along with FinalBuilder, you get a single-user license for FinalBuilder Server.  FinalBuilder Server provides a nice web frontend so that you can remotely start a build process from a web browser.  We use this a lot.  This tool does a lot and it will save you time.

After installing FinalBuilder 7, I loaded an existing build project that had been created with FinalBuilder 6.  It complained about not being able to find the user variables.  User variables are variables defined within FinalBuilder that are global to all projects.  We use them to define some settings shared by all our projects.  Within the FinalBuilder IDE, you can copy and paste variables from one project to another, but it only works within the same version of the IDE.  I could copy and paste variables from a project loaded in one instance of FinalBuilder 6 to another project loaded in a separate instance of FinalBuilder 6, but I could not copy variables from FinalBuilder 6 to FinalBuilder 7.

Which is really odd, since the properties of the variables in FinalBuilder 7 are a superset of what is available in FinalBuilder 6.  Fortunately, this is easy to fix.  The FinalBuilder 6 user variables are stored in a file named FBUserVariables.ini, located in %USERPROFILE%\Application Data\FinalBuilder6.  FinalBuilder 7 follows the same pattern, so all I needed to do was copy that file to the %USERPROFILE%\Application Data\FinalBuilder7 folder.  Once I did that, the variables were all defined.
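
In practice that was a single copy command (%APPDATA% points at the same Application Data folder; adjust the paths if your FinalBuilder folders differ):

copy "%APPDATA%\FinalBuilder6\FBUserVariables.ini" "%APPDATA%\FinalBuilder7"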

User variables are specific to the Windows user account running FinalBuilder.  When we set up our build machine, we created a user account specific to that machine.  All of the compilers and component sets are installed as that user.  FinalBuilder Server is set to run the projects as that user.  This makes life much simpler.  We completely avoid the issue of stuff getting installed under one user and not being available to another user.

I did have to replace a deprecated action with its replacement.  Actions are what FinalBuilder uses to implement a single task.  Copy a file, compile a project, get from source control: each would be considered a distinct action.  A build project is a series of actions, with some flow control.

The “Text Replace” action has been replaced with the “Text Find / Replace” action.  Not a big deal, but it threw me when I tried running the script.  I had run the “Batch Project Upgrade…” option from the IDE, but that only copies a previous-version project as a new-version project; it does not replace deprecated actions.  I can understand not changing the actions, but it would be helpful if deprecated actions could be flagged.  It was a quick change to make; it took about 30 seconds.

I found another odd glitch with FinalBuilder 7.  A really useful feature of FinalBuilder is that it can update the AssemblyInfo.cs files for every project in a .NET C# solution.  This makes it very easy to set the version number and other attributes for every assembly.  I built a multiple project solution with FinalBuilder 7 and only some of the projects had their AssemblyInfo.cs file updated.  After some peeking and poking, I saw that it was only updating the AssemblyInfo.cs files located in the project’s Properties folder.  If the file was in the project root folder, then it didn’t get updated.  I reported this as a bug on the FinalBuilder forums and moved the offending files to the Properties folder.  Built the project again and everything was updated.  Double rainbows all around for everyone. [Update: After reporting this bug, they fixed it a few hours later.]

I had one more glitch, this time with FinalBuilder Server.  I added a project to the server and it threw a “type initializer exception” error.  This is a known issue with FinalBuilder 7.  FinalBuilder Server uses PowerShell for some of its tasks, and if PowerShell is not installed, you will get the “type initializer exception” error.  The solution was easy: just install PowerShell.  Windows Update wanted to push it down anyways.  I did suggest to VSoft that they add a prerequisite check for PowerShell to the FinalBuilder installers.  This is easy to do and well documented:

  • To check if any version of PowerShell is installed, check for the following value in the registry:
    • Key Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1
    • Value Name: Install
    • Value Type: REG_DWORD
    • Value Data: 0x00000001 (1)
  • To check whether version 1.0 or 2.0 of PowerShell is installed, check for the following value in the registry:
    • Key Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellEngine
    • Value Name: PowerShellVersion
    • Value Type: REG_SZ
    • Value Data: <1.0 | 2.0>
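
If you wanted to do the same check from managed code, say in a custom installer action, a minimal sketch looks like this (the class and method names are mine; the key paths and value names are the ones from the list above):

using Microsoft.Win32;

static class PowerShellDetector
{
    // Returns the installed PowerShell engine version ("1.0", "2.0", ...) or
    // null if PowerShell is not installed.
    public static string GetInstalledVersion()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\PowerShell\1"))
        {
            if (key == null || !Equals(key.GetValue("Install"), 1))
                return null;

            using (RegistryKey engine = Registry.LocalMachine.OpenSubKey(
                @"SOFTWARE\Microsoft\PowerShell\1\PowerShellEngine"))
            {
                return engine == null
                    ? null
                    : engine.GetValue("PowerShellVersion") as string;
            }
        }
    }
}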

Version 2 of PowerShell is backwards compatible with version 1; as long as you have either version installed, FinalBuilder Server will be happy.  Even with these glitches, I really like using FinalBuilder.  If you are looking for an automated build tool with a decent IDE and debugging, you should consider FinalBuilder.