Friday, November 12, 2010

Extracting Icons with the Windows SDK

This is an article about the technical aspects of dealing with Windows icons, including icon resources and HICON handles. But first, let me share with you how I feel about icons. I hate icons. I hate creating them. I hate working with them. I hate rendering them. I hate converting them. I hate dealing with transparency. About the only thing I like about icons is the artistry, and that's sadly lacking lately with the one- and two-letter icons now popularized by Adobe, Microsoft, Pandora, and Facebook. And let's not even talk about the new dark blue Visual Studio icon, which is pretty much invisible on the task bar.
Life got a little better about four years ago when I bought Axialis IconWorkshop. That's one handy piece of software, with free upgrades for life. If you are in the business of writing software, IconWorkshop will save you a lot of headaches.

But I digress.

Let me start out by pointing out the "official" documentation on icons. See the article on MSDN titled simply, Icons. Note the date of the article: 1995. Things have changed a lot in the last 15 years.

In the beginning, there were 16-color (4-bit) icons, with one color reserved for transparency, so there were really 15 colors available for use. As computers became more powerful, Microsoft introduced 256-color icons, then 24-bit truecolor icons, then 32-bit icons with 24-bit color and 8-bit transparency. Then, as a final coup de grâce, Vista introduced 32-bit icons sized at 256x256 pixels. These took up so much space that they were stored in a completely different format (PNG). Ouch!

A good case study is the application icon for Outlook Express/Windows Mail in the file MSOERES.DLL, which has existed for most of the life of Microsoft Windows and has evolved as operating system support (and video card support) for icons has improved. 
  • Vista and Windows 7: eight different sizes (256 pixel, 64, 48, 40, 32, 24, 22, 16) at three bit depths (32-bit, 8-bit and 4-bit.)
  • Windows XP SP3: three sizes (48 pixel, 32, 16) at three bit depths (32-bit, 8-bit and 4-bit.)
  • Windows 2000 SP4: three sizes (48 pixel, 32, 16) at three bit depths (32-bit, 8-bit and 4-bit.)
  • Windows 95: two sizes (32 pixel and 16 pixel) at one bit depth (4-bit.)

The only surprise for me in this list is that the Windows 2000 DLL included 32-bit icons, which would have transparency. However, that DLL was part of Service Pack 4 in 2002 and so was probably shared with Windows XP. I'm not aware that Windows 2000 was able to handle transparent icons. Also, you might wonder why I list "dead" operating systems like Windows 95. The reason is that there are some APIs that are stuck in the days of Win9x and never improved, notably the toolbar code in MFC. The common controls also behave differently depending on whether Common Controls 6 is enabled, either explicitly or with a manifest.
Here's an important point about dealing with icons that took me quite a while to understand. First, there's an icon resource within a DLL or EXE that contains the icon in multiple sizes and multiple bit depths. On the other hand, an HICON contains only a single one of those formats. If you want a different size or a different bit depth, you need to load a new HICON.

The most basic icon-handling function is LoadIcon. It loads an icon from a particular HINSTANCE whose "size conforms to the SM_CXICON and SM_CYICON system metric values." So let's say you call this function and get an icon back. What bit depth? I have no idea. Presumably the same as your desktop, but even that's a hazy concept, because your bit depth can change on the fly, especially when you start a Remote Desktop session. And there's no such thing as a 32-bit desktop - there's no transparency on your desktop. So a 32-bit icon kind of/sort of matches a 24-bit and a 16-bit desktop. But if the icon matches the bit depth of your desktop, how would the icon keep its transparency bits? Again, I don't know.

The documentation for LoadIcon says that "This function has been superseded by the LoadImage function."  The LoadImage function takes the desired x and y size as parameters. That's helpful, now I can load those 256x256 bitmaps. But there's still no documentation on the bit depth.

What led me down the path to this point is that I'm trying to extract some application icons for a web page that is being created for the local user. Those icons need to be in PNG format to preserve transparency, so the goal was to load them from the original EXE as 32-bit icons, then save them as compressed PNG files.

Let me summarize my success for this project: Total Fail.

The first task was to get the icon for a particular file extension, such as ".doc". This problem is easy to solve and well documented - use SHGetFileInfo, which returns the HICON in either small or large (however those happen to be defined.) I spent several hours trying to convert that icon to a PNG file. The popular strategy seemed to be using OleCreatePictureIndirect, as described by DanRollins in the comments of this article and by neilsolent in this article. The problem I saw was that my PNG file was always 16 colors. In later comments on that article, neilsolent said that he was seeing a similar problem, which no one was able to answer. I wasn't able to solve that problem either.

Note that the algorithm offered in the initial question in that article is utterly wrong. An HICON is not an HGLOBAL and you can't access it using GlobalLock.

My fallback strategy was to just write a .ico file instead of automatically converting to a .png file. With a .ico file, converting to .png is trivial using IconWorkshop.

This was nowhere near as simple as I'd hoped. Remember that an icon resource contains multiple sizes and multiple bit depths? A .ico file is really a container for managing all of those formats, and there is no Windows API for writing that file. Some documentation is given in that 1995 article I mentioned earlier, but it's a non-trivial problem to solve. There is a sample from Microsoft called IconPro (run the executable, open the .chm help file, look for IconPro, and it will give you the option to extract the sample code.) IconPro hasn't been updated in fifteen years. It knows how to write .ico files, so I was hoping that I could make some minor modifications and feed it the HICON from SHGetFileInfo. No dice. IconPro only reads the original EXE or DLL file.
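For reference, here's a sketch of the on-disk header structures a .ico writer has to emit, transcribed from that 1995 article: one ICONDIR, then idCount ICONDIRENTRY records, then the image data. The #pragma pack(2) is required because the entries are not 4-byte aligned in the file.

```cpp
#include <cstdint>

#pragma pack(push, 2)
// Header at the start of a .ico file.
struct ICONDIR {
    uint16_t idReserved;    // must be 0
    uint16_t idType;        // 1 for icons, 2 for cursors
    uint16_t idCount;       // number of images in the file
};

// One directory entry per image; the pixel data follows later in the file.
struct ICONDIRENTRY {
    uint8_t  bWidth;        // width in pixels; 0 means 256
    uint8_t  bHeight;       // height in pixels; 0 means 256
    uint8_t  bColorCount;   // colors in palette; 0 if >= 8 bits per pixel
    uint8_t  bReserved;     // must be 0
    uint16_t wPlanes;       // color planes
    uint16_t wBitCount;     // bits per pixel
    uint32_t dwBytesInRes;  // size of the image data
    uint32_t dwImageOffset; // file offset of the image data
};
#pragma pack(pop)

static_assert(sizeof(ICONDIR) == 6, "ICONDIR is 6 bytes on disk");
static_assert(sizeof(ICONDIRENTRY) == 16, "ICONDIRENTRY is 16 bytes on disk");
```

Getting these sizes right is exactly the sort of fiddly detail that makes writing a .ico file by hand so unpleasant.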

No problem, I thought. I'd just retrieve a pointer to the original DLL or EXE and pass that information to the appropriate routine in IconPro.

Unfortunately, it's not that easy. In fact, it's really ugly. There's no Windows API that will automatically tell you the filename and index to retrieve an icon. SHGetFileInfo only returns an index to the system image list (aka the Icon Cache.) There's no hint of information about where the icon came from. I was surprised because I thought that this problem was simple - just look up the DefaultIcon key in the registry. Turns out that this is just one way of specifying icons. To even begin to understand the other methods, you have to delve into the shell namespace and the IExtractIcon interface. I didn't care that much.

I spent several more hours searching for a method that could take an HICON and write it to a .ico file. I found numerous other people asking this same question, but I found no answers that worked and would create a .ico file with 32-bit color depth.

At this point I gave up on an automated solution. I'll use IconWorkshop to manually extract the resources that I need from the relevant EXE and DLL files.

Tuesday, November 2, 2010

Norton Antivirus: The Clear Choice for SSDs

Today I walked up to my computer and noticed that all eight cores were pegged. I thought that was odd, since the computer should have been doing absolutely nothing. Then I noticed the "Norton Antivirus" logo in the bottom right. "Could it be?" I thought to myself.  Is there actually a consumer product on the market that is smart enough to use more than one core?!?

I tapped the keyboard, the Norton Antivirus banner disappeared, and all of the cores dropped back to idle.

I'm really impressed by this. I was trying to remember the last time I was impressed by antivirus software. I'm pretty sure the answer is "never."

Normally an antivirus application is limited by your hard drive's random access performance. Antivirus software processes files in a directory sequentially, but the files often aren't laid out on the disk in that order. This means that the drive is mostly being accessed randomly, not sequentially, which is why it can take your antivirus software twelve hours to scan your hard drive, even with a fast hard drive and processor.

However, with an SSD, the system can maintain transfer rates in excess of 100MB/second, even with random access behavior. Crunching that much data per second is going to take more than one core, and, I'm impressed to say, Norton Antivirus appears to be up to the task.

Saturday, September 11, 2010

Lightning Fast Builds with Visual Studio 2010 and an SSD

I reduced my Visual Studio 2010 C++ build time from 21 minutes to 5 minutes! You can too. Here's how.

I'm a build performance junkie. If there's one thing I really hate in life, it's sitting around waiting for builds to complete. Fifteen years ago, the very first article I published was titled Speeding Up Visual C++. It was all about making Visual C++ 1.51 go faster on what was then a state of the art computer - an ISA bus Gateway DX2/50. Woo hoo! My recommendations were:
  1. Use Precompiled Headers.
  2. Upgrade to 16MB of memory.
  3. Use 4 megabytes of Disk Cache.
  4. Upgrade your Hard Drive to Fast SCSI or Enhanced IDE.
  5. Turn off Browse Info.
  6. 32 Bit File Access.
Today computers are thousands of times faster, but rotating platter disk drives are still desperately slow. The seek time of 7200RPM drives has changed very little in the last ten years, although the transfer rate for sequential files has risen dramatically. That problem, combined with Visual Studio's desire to create hundreds of .tlog temporary files, quarter gigabyte .sdf files, and project size bloat means that the average build may be even slower today than it was fifteen years ago.

Historically, your CPU would be sitting idle most of the time waiting for the hard disk to keep up. Linking performance is based almost entirely on your disk's random access read/write performance. The highly rated Western Digital Caviar Black can only perform about 100 random access IOPS (I/O Operations Per Second.) It takes multiple I/O operations per OBJ file, so a significant fraction of the link time is waiting for the hard drive to do its job.

Enter the latest generation of SSDs driven by the Sandforce Controller, such as the OCZ Vertex 2. These drives can do over 40,000 IOPS - 400 times faster than a Caviar Black. And Visual Studio 2010 build performance is phenomenal. In fact, these drives are so fast that their impact on build time is negligible. This SSD will easily hit over 50MB/sec of 4KB random writes. In contrast, the ultra-zippy 10,000 RPM VelociRaptor can only do about 3.5MB/sec. (Note that disk striping or mirroring has minimal impact on build performance because the linker isn't smart enough to use scatter/gather to force the queue depth high enough to let the command queuing on the drive work its magic.)

Now that the hard disk performance no longer matters, our next bottleneck is the CPU. You can tell your boss you need one of those monster hyper-threaded quad core i7 processors such as the 875k or, for the money-is-no-object crowd, the hyper-threaded six core 980X. Visual Studio 2010 automatically uses parallel builds. My 875k pegs all eight cores simultaneously at 100%. Compiles proceed eight files at a time and the CPUs stay pegged until the compile is finished. I've never seen a project build so fast.

The next bottleneck is probably your RAM. If you have 4GB RAM running on a 32-bit OS, you are severely limited and you probably won't be able to run eight compiles (much less 24 compiles if you are using parallel MSBuild tasks, as I explain in Part 2.) Upgrade to Windows 7 64-bit with 8GB of RAM.

So it's interesting that the types of recommendations for build performance haven't changed much. Here is my updated list:
  1. Use Precompiled Headers.
  2. Upgrade to 8GB of memory (formerly 16MB).
  3. Upgrade to 64-bit Windows (formerly: use 4 megabytes of disk cache).
  4. Upgrade your hard drive to a Sandforce-based SSD (formerly Fast SCSI or Enhanced IDE).
  5. Turn off Browse Info. (This is still true. Browse Info is different than Intellisense.)
  6. Check your motherboard's SATA implementation (formerly: 32 Bit File Access). At SSD speeds, not all controllers are created equal.
In Part 2 of this article, I'll talk about how to tune your build system to keep all that hardware busy.

The system used for this article was:
  • ASUS P7P55D-E Pro motherboard.
  • Intel Core i7-875K processor.
  • OCZ Vertex 2 120GB SSD.
  • 8GB RAM.
  • Thermaltake MUX-120 heatsink.
Do not use the SATA 3 Marvell controller on the ASUS motherboard. The standard ICH10 controller is much faster with SSDs.

The prior system that took 21 minutes was a Core 2 Duo 2.4 GHz with ASUS P5B Deluxe and Caviar Black 750GB hard drive.

Project is permanently out of date

I've seen several cases with Visual Studio 2010 and later where my project would be permanently "out of date." I'd build the project, run it, and immediately be told that the project was out of date. Solving this through trial and error is tedious at best. It is commonly caused by one of two problems:

Missing file in project

The most common cause is a file that's in your project that you've deleted from the disk. These are easy to find in smaller projects by just looking through the project for a file with an X next to it. If you can't find the file causing the problem, enable detailed Diagnostic output in MSBuild. In Visual Studio, open Tools | Options, select Projects and Solutions | Build and Run, and set MSBuild project build output verbosity to Detailed.

Alternatively, here's a handy Python script that will examine a .vcxproj file and look for any missing files:
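Here is a minimal sketch of such a script. It assumes source files are listed as Include attributes on items like ClCompile and ClInclude, resolved relative to the project directory; real projects may also use wildcards or property macros, which this doesn't handle.

```python
import os
import sys
import xml.etree.ElementTree as ET

# MSBuild project files live in this XML namespace.
NS = "{http://schemas.microsoft.com/developer/msbuild/2003}"

# Item types that typically reference files on disk.
ITEM_TYPES = ("ClCompile", "ClInclude", "ResourceCompile", "None", "Text")

def find_missing_files(vcxproj_path):
    """Return project-relative paths referenced by the project but absent on disk."""
    project_dir = os.path.dirname(os.path.abspath(vcxproj_path))
    tree = ET.parse(vcxproj_path)
    missing = []
    for item_type in ITEM_TYPES:
        for item in tree.iter(NS + item_type):
            include = item.get("Include")
            if not include:
                continue
            # MSBuild paths are relative to the project file and use backslashes.
            full_path = os.path.join(project_dir, include.replace("\\", os.sep))
            if not os.path.exists(full_path):
                missing.append(include)
    return missing

if __name__ == "__main__" and len(sys.argv) > 1:
    for path in find_missing_files(sys.argv[1]):
        print(path)
```

Run it with the .vcxproj path as the only argument; anything it prints is a file the project references that no longer exists.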

I've also seen this problem when I renamed a file. The old filename is still embedded in the compiler's temporary files, even though the name no longer appears in the project. The solution is to delete all intermediate files in the project and rebuild.

Project upgraded from earlier version of Visual Studio

The StackOverflow page above also details a bug where MSBuild says, "Forcing rebuild of all source files due to a change in the command line since the last build."

The most likely cause is if you used /MP in the Additional Options box in project Properties | Configuration Properties | C/C++ | Command Line. This was often done in Visual Studio 2005. My project started building properly after I fixed this (although it was tricky, because I had set /MP1 on certain files due to bugs in multiprocessor support in VC2005.) This tip came from

The next problem I saw in the MSBuild log was the use of an old PDB filename. I was using Visual Studio 2013, which uses vc120.pdb. However, some of my project files used the name vc80.pdb, which was the wrong name, so the file was never found. In the property page for the project, set it to look something like this (the part in bold is what you see after you set the value for the first time; the next time you reopen the page, it will show the value that the compiler will use):
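Here is a sketch of what the corresponding setting looks like in the .vcxproj itself. The element name ProgramDataBaseFileName is real; the macro-based value shown is my assumption of the versioned default, not a value taken from the property page:

```xml
<ClCompile>
  <!-- Replaces a stale hard-coded name such as vc80.pdb -->
  <ProgramDataBaseFileName>$(IntDir)vc$(PlatformToolsetVersion).pdb</ProgramDataBaseFileName>
</ClCompile>
```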

Unhelpful recommendations

After enabling Detailed output for MSBuild, I saw this error. It didn't appear to affect the problem, but it should probably be fixed to prevent confusion in the future:
Project file contains ToolsVersion="4.0". This toolset may be unknown or missing, in which case you may be able to resolve this by installing the appropriate version of MSBuild, or the build may have been forced to a particular ToolsVersion for policy reasons. Treating the project as if it had ToolsVersion="12.0". For more information, please see

Another suggestion that didn't work was that intermediate directories should use a relative path, but I use an absolute path in my project and it works fine.

Sunday, September 5, 2010

Fixing Permissions on an External Hard Drive

Today I pulled a hard drive from my old computer and hooked it up to my new computer, planning to move the data to the new drive and then use the old drive as a backup disk. Mostly this plan worked well, except that I wasn't allowed to delete many files and folders, even though I was an Administrator. Curious.

The problem turned out to be that I was on a workgroup, not a domain, and so the systems didn't have any common notion of "Administrator". Although Explorer knows how to request elevated permissions, this still isn't enough. You have to "Take Ownership" of the files in order to delete them, and there's no way to do this from Explorer.

I found a solution in the winmatrix forum, but it only works for files, not directories. You can't set the Full Control permission on a directory, so you end up being locked out of directories if you try and use these commands:

takeown /f filepath /r
icacls filepath /grant yourusername:f /t

Note that I've added /r and /t to the commands, which is required for them to operate recursively.

Instead, I did the steps below. These steps assume that the new drive is on E:.
  1. Open a Command Prompt with Run As Administrator.
  2. Run this command: takeown /f e:\ /r
  3. Right-Click on the root of the copied drive.
  4. Select Properties.
  5. Click the Security tab.
  6. Click the Edit button.
  7. Select Authenticated Users.
  8. Click the checkbox under Allow for Full Control.
  9. Run this command: icacls e:\*.* /reset /t
    This command will force all permissions to mirror the permissions on the root of the drive that you set in step 8. You must have the *.* or the root directory will be reset, which you don't want.
After these commands completed, I was able to delete all of the desired files. Executing these commands can take quite a while if you have many files on your disk.

Friday, September 3, 2010

Windows 7 Network Performance

A few years back, after I installed Vista, I spent quite a bit of time trying to fix my GigE Ethernet performance with Windows Vista talking to Windows Server 2003 R2. My file copy performance hovered around 15 to 18 MB/sec, which was pretty dismal.

I've just built myself a new PC with a Core i7 CPU and Windows 7. I tried copying a file from the old Vista box (Core 2 Duo 2.4GHz on ASUS P5B Deluxe mobo) to the new Windows 7 box. The copy performance over the Ethernet went straight up to 50MB/sec! This was with a single WD Caviar Black drive on the Vista system, no RAID or striping.

When I tried copying from a Windows Server 2008 R2 system to the new Windows 7 system, I peaked at 112MB/sec for data that was cached, then backed off to 50 MB/sec for data that wasn't cached. Windows Server 2008 R2 was installed on the same hardware that used to be running Windows Server 2003 R2, so the performance increase was solely due to the OS upgrade.

I'm seeing similar performance gains over the internet. Talking to our corporate server from the Vista system, I maxed out at 1.5 MB/s.  Under Windows 7 with i7, I'm able to max out the connection at 5 MB/s.

All of this leads me to believe that the GigE networking performance of Windows Server 2003 R2 was awful, given that I'm running Windows Server 2008 R2 on the exact same hardware with a 5x performance increase.

Net result: (no pun intended) I'm very happy with Windows 7.

Sunday, July 25, 2010

Safely using erase() in STL

There are few features of STL that have caused me more aggravation than the erase() functions. The problems with erase() are, first, that it takes an iterator as an argument and, second, that erase() sometimes invalidates all iterators. This makes for a catch-22 situation if you don't know "the secret."

Consider what happens if you want to erase each element of a collection that meets some condition. Deleting a single element is not an issue because you don't need the iterator afterward. Deleting all elements is similarly easy because you can simply call clear(). Deleting an arbitrary number of elements is generally written this way by most beginners:

set<int> coll;
/* ... */
for (set<int>::iterator itr=coll.begin(); itr!=coll.end(); ++itr)
   if (*itr % 2 == 0)
      coll.erase(itr);      // bug: itr is now invalid
Predictably, this code doesn't work. The iterator is invalidated when erase() is called and so the ++itr statement in the for() loop fails. Most modern versions of STL have debugging code to warn you if you use an invalidated iterator, but no such training wheels existed in the early days of STL. I got a lot of bug reports because of this mistake.

My first attempted workaround was for the vector class. I wanted to ignore the iterator completely and just step through the collection with an index:

for (unsigned int i=0; i<coll.size(); ++i)

When I tried to call coll.erase(), I discovered that I needed an iterator, which is what I was trying to avoid in the first place. Years after I first tried to make this solution work, I finally learned the trick. Here's an example that deletes all even values using an index-based for loop:

vector<int> coll;
/* ... */
for (unsigned int i=0; i<coll.size(); ++i)
    if (coll[i]%2 == 0)
        coll.erase(coll.begin() + i--);

coll.begin() is always a valid iterator (even if it equals end()), and you are allowed to use array arithmetic on iterators for vector<> and for deque<>. Although this solution works, it's a bit of a hack, especially because you have to "fix up" i after the call to erase() so that the loop doesn't skip the element that slides into position i. This strategy also won't work for collections such as list<> and set<> that can't be indexed by a scalar.
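For vector<> there's also the standard erase-remove idiom, which sidesteps the invalidation problem entirely by doing all the work in one pass. This is the textbook alternative, not something from my original code:

```cpp
#include <algorithm>
#include <vector>

// Erase every even value from a vector in one pass using erase-remove.
void erase_evens(std::vector<int>& coll)
{
    // remove_if() shifts the elements we keep toward the front and returns
    // an iterator to the new logical end; erase() then trims the tail.
    coll.erase(std::remove_if(coll.begin(), coll.end(),
                              [](int n) { return n % 2 == 0; }),
               coll.end());
}
```

No iterators survive the call, so there's nothing to invalidate behind your back.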

So I was back to figuring out how to get a "valid" iterator after calling erase().

In some STL collections, the erase() function returns an iterator to the next element. Therefore, even though the original iterator was invalidated, you were given a new iterator that would work, so there was no conflict. I thought the code would be easy to write, once I learned about that return value:

set<int> coll;
/* ... */
for (set<int>::iterator itr=coll.begin(); itr!=coll.end(); ++itr)
   if (*itr%2 == 0)
      itr = coll.erase(itr);

I liked this solution. No tricks, easy to write, easy to read. But it crashed. Why? Because erase() could return coll.end(), which the for loop would try to increment, which wasn't allowed. So the solution looks like this:

set<int> coll;
/* ... */

set<int>::iterator itr=coll.begin();
while ( itr!=coll.end() )
{
   if (*itr%2 == 0)
      itr = coll.erase(itr);
   else
      ++itr;
}


The Microsoft version of STL implements map::erase() by returning an iterator, but this is non-standard. I therefore ran into trouble when I switched to STLport and map::erase() no longer returned an iterator. I found the solution buried in the STL source code, where I learned that the iterators of some collections are stable if they are incremented before erase() is called, which leads to code like this:

set<int> coll;
/* ... */
set<int>::iterator itr=coll.begin();
while ( itr!=coll.end() )
{
   if (*itr%2 == 0)
   {
      set<int>::iterator next = itr;
      ++next;               // advance the copy before erase() invalidates itr
      coll.erase(itr);
      itr = next;
   }
   else
      ++itr;
}

Later I learned that the code could be simplified:

set<int> coll;
/* ... */

set<int>::iterator itr=coll.begin();
while ( itr!=coll.end() )
{
   if (*itr%2 == 0)
      coll.erase(itr++);    // itr is incremented before erase() executes
   else
      ++itr;
}

Note that we can't use the "hack" from the beginning of this article, which would allow us to write coll.erase(itr--) and then increment the iterator in a surrounding for loop. The problem is that a scalar can decrement below zero, but an iterator cannot decrement before begin().
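Here's the erase(itr++) pattern packaged as a complete, compilable function (the function name is mine, for illustration):

```cpp
#include <set>

// Erase every even value from a set, incrementing the iterator
// before erase() invalidates it.
void erase_even_values(std::set<int>& coll)
{
    std::set<int>::iterator itr = coll.begin();
    while (itr != coll.end())
    {
        if (*itr % 2 == 0)
            coll.erase(itr++);  // itr advances before erase() runs
        else
            ++itr;
    }
}
```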

Personally, I don't like the autoincrement code above as much as when erase() returns a value. It relies on the reader understanding the "trick" and it makes it more difficult for collection classes to know in advance what to do. For example, this technique can't be used with a vector, which is why vector does return an iterator. This creates a confusing inconsistency. (Which is why the Microsoft version of map::erase() is non-standard, but much more consistent.)

The question is, why does this trick work? If you've spent much time in C++, you've inevitably learned about the dangers of code like this:

int i = 5;
i = i++;

The code could end with i equal to either 5 or 6 - the result is undefined because the C++ standard only guarantees that ++ will be executed after i is read and before the semi-colon. It could be incremented before or after the assignment happens. (If you are wondering why, remember that assignments in C++ can be embedded in expressions, unlike most languages.)

To reword the guarantee in the language of the C++ Standard, the only guarantee is that ++ will be executed after i is read and before the next sequence point. The sequence point is the key to understanding our call to erase().

In the expression above, the semicolon is the sequence point. Now look at our code example:

coll.erase(itr++);
If you apply the same evaluation rules, the undefined timing of the execution of ++ would mean that the iterator could be evaluated after erase() returns and the iterator had been invalidated. This would crash, but experience shows that it doesn't. The reason is that the function call to erase() introduces a new sequence point. The code is guaranteed to be evaluated as:
  1. The value of itr is loaded and pushed on the stack for erase().
  2. itr is incremented.
  3. erase() is called.
All of which gives the desired result. QED.

Friday, July 16, 2010

"Invalid pointer" error in Visual Studio 2010 in Configuration Manager

I've been fighting against a problem I've seen in most of my project files that were upgraded from VS2005 or VS2008. Any attempt to delete a Configuration or a Platform results in the error "The operation could not be completed. Invalid pointer."

Today I was finally able to track down the problem, which is caused by some seemingly benign lines in the .vcxproj file. To fix the problem, open your .vcxproj file in Notepad, go to the end of the file, and you'll find some lines that look like this:

    <UserProperties RESOURCE_FILE="Dot.rc" />

Remove those lines, reload the project, and everything will start working fine. Those lines do not appear in project files that are created by the Visual Studio 2010 wizard, so I believe that they are not needed.
My complete bug report can be found on Microsoft Connect:

Thursday, July 1, 2010

Debug "Just My Code" for C++ and MFC

One of my biggest annoyances with debugging MFC code is constantly stepping through CString functions and COM object functions. In Visual C++ 6 there was some functionality in autoexp.dat for handling this, but I never got it to work to my satisfaction.

This week I was able to solve the problem. Here's how (note that the 10.0 is for Visual Studio 2010. The value will be 9.0 for VS2008. For earlier versions, see the link at the end of this article.)
  1. Open RegEdit
  2. On 32-bit Windows, go to:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\NativeDE\StepOver

    On 64-bit Windows, go to:

    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\NativeDE\StepOver

    At least for Visual Studio 2010, there are several default functions defined.
  3. Add a new String value. Name it CStringT.
  4. Edit the key and set the value to:

    ATL\:\:CStringT.*
    The colons must be backslashed as shown. CStringT is the template class used for CString, CStringA and CStringW. The CStringT class is shared with ATL, so that's why it's in the ATL namespace, even if you are using MFC. The "dot star" at the end matches anything for the rest of the line.
  5. Here are a couple of other examples:

    Key: CSimpleString
    Value: ATL\:\:CSimpleStringT.*

    Key: CComPtrBase
    Value: ATL\:\:CComPtrBase.*

    Key: vector
    Value: std\:\:vector.*
Update 7/26/2010: There are additional features documented by Andy Pennell at Microsoft.

Friday, May 28, 2010

Configuring Mozy Pro for Visual Studio

Earlier this week I revised my review of Mozy based on version 2.0 of the MozyPro client software. In short, MozyPro went from something I found acceptable to something that I am quite happy with.

One of my frustrations in earlier versions of MozyPro was figuring out how to exclude Visual Studio temporary files. In Visual Studio 2010, the IntelliSense data (the .sdf file) is 70MB to 300MB for each project. Since Visual Studio automatically rebuilds this file as necessary, it's a waste of time and money to back it up. There are numerous other temporary files, such as .ncb, .sbr, .bsc, and others, all of which are unnecessary to back up. This article tells how to set up MozyPro so that files with those extensions will not be backed up.

I found it best to create a global rule instead of modifying one of the predefined Backup Sets. Although MozyPro has a predefined Backup Set named "Visual Studio Projects", I've created other Backup Sets based on projects and customers. By creating a global rule, the temporary files will be ignored by all other projects.

To create a rule for Visual Studio, go to the Backup Sets page in the MozyPro configuration. Create a new backup set and call it something like Excluded Files. In the Backup Set Editor, put a checkmark next to C: so that this rule applies to the entire drive. Next check the box in the top right labeled "Files matching this set will be EXCLUDED." Under the Rules, the first rule should read:

Include / File Type / sdf pch ncb idb tlog sbr res dep obj ilk ipch bsc

This will exclude temporary and intermediate files for Visual Studio 2005 through 2010, as well as Virtual PC undo files. If you run Visual Studio 2010, also click the plus sign (+) on the right and create a second rule that reads:

Or / Include / Folder name / is / ipch / Files and Folders

Finally, if you use Virtual PC, create a third rule:

Or / Include / File Type / vud vsv

Note that the "Include" command in these rules really means "Exclude" because you checked the box "Files matching this set will be EXCLUDED..."

Wednesday, May 19, 2010

Understanding ReadDirectoryChangesW - Part 2

The longest, most detailed description in the world of how to successfully use ReadDirectoryChangesW.

This is Part 2 of 2. Part 1 describes the theory and this part describes the implementation.

Go to the GitHub repo for this article or just download the sample code.

Getting a Handle to the Directory

Now we'll look at the details of implementing the Balanced solution described in Part 1. When reading the declaration for ReadDirectoryChangesW, you'll notice that the first parameter is to a directory, and it's a HANDLE. Did you know that you can get a handle to a directory? There is no OpenDirectory function and the CreateDirectory function doesn't return a handle. Under the documentation for the first parameter, it says “This directory must be opened with the FILE_LIST_DIRECTORY access right.” Later, the Remarks section says, “To obtain a handle to a directory, use the CreateFile function with the FILE_FLAG_BACKUP_SEMANTICS flag.” The actual code looks like this:

HANDLE hDir = ::CreateFile(
    strDirectory,           // pointer to the file name
    FILE_LIST_DIRECTORY,    // access (read/write) mode
    FILE_SHARE_READ         // share mode
        | FILE_SHARE_WRITE
        | FILE_SHARE_DELETE,
    NULL,                   // security descriptor
    OPEN_EXISTING,          // how to create
    FILE_FLAG_BACKUP_SEMANTICS // file attributes
        | FILE_FLAG_OVERLAPPED,
    NULL);                  // file with attributes to copy

The FILE_LIST_DIRECTORY access right isn't even mentioned in the CreateFile() documentation. It's discussed in File Security and Access Rights, but not in any useful way.

Similarly, FILE_FLAG_BACKUP_SEMANTICS has this interesting note, "Appropriate security checks still apply when this flag is used without SE_BACKUP_NAME and SE_RESTORE_NAME privileges." In past dealings with this flag, it had been my impression that Administrator privileges were required, and the note seems to bear this out. However, attempting to enable these privileges on a Windows Vista system by adjusting the security token does not work if UAC is enabled. I'm not sure if the requirements have changed or if the documentation is simply ambiguous. Others are similarly confused.

The sharing mode also has pitfalls. I saw a few samples that left out FILE_SHARE_DELETE. You'd think that this would be fine since you do not expect the directory to be deleted. However, leaving out that permission prevents other processes from renaming or deleting files in that directory. Not a good result.

Another potential pitfall of this function is that the referenced directory itself is now “in use” and so can't be deleted. To monitor files in a directory and still allow the directory to be deleted, you would have to monitor the parent directory and its children.

Calling ReadDirectoryChangesW

The actual call to ReadDirectoryChangesW is the simplest part of the whole operation. Assuming you are using completion routines, the only tricky part is that the buffer must be DWORD-aligned.

The OVERLAPPED structure is supplied to indicate an overlapped operation, but none of the fields are actually used by ReadDirectoryChangesW. However, a little-known secret of using Completion Routines is that you can supply your own pointer to the C++ object. How does this work? The documentation says that, "The hEvent member of the OVERLAPPED structure is not used by the system, so you can use it yourself." This means that you can put in a pointer to your object. You'll see this in my sample code below:

void CChangeHandler::BeginRead()
{
    ::ZeroMemory(&m_Overlapped, sizeof(m_Overlapped));
    m_Overlapped.hEvent = this;         // our object pointer; unused by Windows

    DWORD dwBytes = 0;

    BOOL success = ::ReadDirectoryChangesW(
        m_hDirectory,                   // handle from CreateFile
        &m_Buffer[0],                   // DWORD-aligned buffer
        m_Buffer.size(),                // buffer size in bytes
        FALSE,                          // monitor children?
        FILE_NOTIFY_CHANGE_LAST_WRITE,  // filter conditions
        &dwBytes,                       // bytes returned (synchronous calls only)
        &m_Overlapped,                  // overlapped structure with our pointer
        &NotificationCompletion);       // completion routine
}

Since this call uses overlapped I/O, m_Buffer won't be filled in until the completion routine is called.

Dispatching Completion Routines

For the Balanced solution we've been discussing, there are only two ways to wait for Completion Routines to be called. If everything is being dispatched using Completion Routines, then SleepEx is all you need. If you need to wait on handles as well as to dispatch Completion Routines, then you want WaitForMultipleObjectsEx. The Ex version of the functions is required to put the thread in an “alertable” state, which means that completion routines will be called.

To terminate a thread that's waiting using SleepEx, you can write a Completion Routine that sets a flag in the SleepEx loop, causing it to exit. To call that Completion Routine, use QueueUserAPC, which allows one thread to call a completion routine in another thread.

Handling the Notifications

The notification routine should be easy. Just read the data and save it, right? Wrong. Writing the Completion Routine also has its complexities.

First, you need to check for and handle the error code ERROR_OPERATION_ABORTED, which means that CancelIo has been called, this is the final notification, and you should clean up appropriately. I describe CancelIo in more detail in the next section. In my implementation, I used InterlockedDecrement to decrease cOutstandingCalls, which tracks my count of active calls, then I returned. My objects were all managed by the MFC mainframe and so did not need to be deleted by the Completion Routine itself.

You can receive multiple notifications in a single call. Make sure you walk the data structure and check for each non-zero NextEntryOffset field to skip forward.

ReadDirectoryChangesW is a "W" routine, so it does everything in Unicode. There's no ANSI version of this routine. Therefore, the data buffer is also Unicode. The string is not NULL-terminated, so you can't just use wcscpy. If you are using the ATL or MFC CString class, you can instantiate a wide CString from a raw string with a given number of characters like this:

CStringW wstr(fni.FileName, fni.FileNameLength / sizeof(wchar_t));

Finally, you have to reissue the call to ReadDirectoryChangesW before you exit the completion routine. You can reuse the same OVERLAPPED structure. The documentation specifically says that the OVERLAPPED structure is not accessed again by Windows after the completion routine is called. However, you have to make sure that you use a different buffer than your current call or you will end up with a race condition.

One point that isn't clear to me is what happens to change notifications in between the time that your completion routine is called and the time you issue the new call to ReadDirectoryChangesW.

I'll also reiterate that you can still "lose" notifications if many files are changed in a short period of time. According to the documentation, if the buffer overflows, the entire contents of the buffer are discarded and the lpBytesReturned parameter contains zero. However, it's not clear to me if the completion routine will be called with dwNumberOfBytesTransfered equal to zero, and/or if there will be an error code specified in dwErrorCode.

There are some humorous examples of people trying (and failing) to write the completion routine correctly. My favorite is a forum post where, after insulting the person asking for help, the author presents his example of how to write the routine and concludes with, "It's not like this stuff is difficult." His code is missing error handling, he doesn't handle ERROR_OPERATION_ABORTED, he doesn't handle buffer overflow, and he doesn't reissue the call to ReadDirectoryChangesW. I guess it's not difficult when you just ignore all of the difficult stuff.

Using the Notifications

Once you receive and parse a notification, you need to figure out how to handle it. This isn't always easy. For one thing, you will often receive multiple duplicate notifications about changes, particularly when a long file is being written by its parent process. If you need the file to be complete, you should process each file after a timeout period has passed with no further updates. [Update: See the comment below by Wally The Walrus for details on the timeout.]
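One way to implement that timeout is a simple debouncer. In this sketch the names and the one-second quiet period are my own assumptions; it tracks the last change time per file and only reports files that have gone quiet:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical debouncer: report a file only after QuietPeriodMs of silence.
class CFileDebouncer
{
    std::map<std::wstring, unsigned long long> m_LastChange; // name -> tick (ms)
public:
    static const unsigned long long QuietPeriodMs = 1000;

    // Call for every change notification about a file.
    void OnChange(const std::wstring& name, unsigned long long nowMs)
    {
        m_LastChange[name] = nowMs;
    }

    // Call periodically (e.g., on a timer). Returns files that have been
    // quiet long enough and removes them from the pending set.
    std::vector<std::wstring> Flush(unsigned long long nowMs)
    {
        std::vector<std::wstring> done;
        for (std::map<std::wstring, unsigned long long>::iterator it =
                 m_LastChange.begin(); it != m_LastChange.end(); )
        {
            if (nowMs - it->second >= QuietPeriodMs)
            {
                done.push_back(it->first);
                m_LastChange.erase(it++);  // safe: increment before erase
            }
            else
                ++it;
        }
        return done;
    }
};
```

Each new notification for a file resets its clock, so a long-running write is reported once, after the writer goes quiet.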

An article by Eric Gunnerson points out that the documentation for FILE_NOTIFY_INFORMATION contains a critical comment: If there is both a short and long name for the file, the function will return one of these names, but it is unspecified which one. Most of the time it's easy to convert back and forth between short and long filenames, but that's not possible if a file has been deleted. Therefore, if you are keeping a list of tracked files, you should probably track both the short and long filename. I was unable to reproduce this behavior on Windows Vista, but I only tried on one computer.

You will also receive some notifications that you may not expect. For example, even if you set the parameters of ReadDirectoryChangesW so you aren't notified about child directories, you will still get notifications about the child directories themselves. Let's assume you have two directories, C:\A and C:\A\B. You move the file info.txt from the first directory to the second. You will receive FILE_ACTION_REMOVED for the file C:\A\info.txt and you will receive FILE_ACTION_MODIFIED for the directory C:\A\B. You will not receive any notifications about C:\A\B\info.txt.

There are some other surprises. Have you ever used hard links in NTFS? Hard links allow you to have multiple filenames that all reference the same physical file. If you have one reference in a monitored directory and a second reference in a second directory, you can edit the file in the second directory and a notification will be generated in the first directory. It's like magic.

On the other hand, if you are using symbolic links, which were introduced in Windows Vista, then no notification will be generated for the linked file. This makes sense when you think it through, but you have to be aware of these various possibilities.

There's yet a third possibility, which is junction points linking one partition to another. In that case, monitoring child directories won't monitor files in the linked partition. Again, this behavior makes sense, but it can be baffling when it's happening at a customer site and no notifications are being generated.

Shutting Down

I didn't find any articles or code (even in open source production code) that properly cleaned up the overlapped call. The documentation on MSDN for canceling overlapped I/O says to call CancelIo. That's easy. However, my application then crashed when exiting. The call stack showed that one of my third party libraries was putting the thread in an alertable state (which meant that Completion Routines could be called) and that my Completion Routine was being called even after I had called CancelIo, closed the handle, and deleted the OVERLAPPED structure.

As I was searching various web pages with sample code that called CancelIo, I found this page that included the code below:


if (!HasOverlappedIoCompleted(&pMonitor->ol))
    SleepEx(5, TRUE);


This looked promising. I faithfully copied it into my app. No effect.

I re-read the documentation for CancelIo, which makes the statement that "All I/O operations that are canceled complete with the error ERROR_OPERATION_ABORTED, and all completion notifications for the I/O operations occur normally." Decoded, this means that all Completion Routines will be called at least one final time after CancelIo is called. The call to SleepEx should have allowed that, but it wasn't happening. Eventually I determined that waiting for 5 milliseconds was simply too short. Maybe changing the "if" to a "while" would have solved the problem, but I chose to approach the problem differently since this solution requires polling every existing overlapped structure.

My final solution was to track the number of outstanding requests and to continue calling SleepEx until the count reached zero. In the sample code, the shutdown sequence works as follows:
  1. The application calls CReadDirectoryChanges::Terminate (or simply allows the object to destruct.)
  2. Terminate uses QueueUserAPC to send a message to CReadChangesServer in the worker thread, telling it to terminate.
  3. CReadChangesServer::RequestTermination sets m_bTerminate to true and delegates the call to the CReadChangesRequest objects, each of which calls CancelIo on its directory handle and closes the directory handle.
  4. Control is returned to the CReadChangesServer::Run function. Note that nothing has actually terminated yet.
void Run()
{
    while (m_nOutstandingRequests || !m_bTerminate)
    {
        DWORD rc = ::SleepEx(INFINITE, true);
    }
}

  5. CancelIo causes Windows to automatically call the Completion Routine for each CReadChangesRequest overlapped request. For each call, dwErrorCode is set to ERROR_OPERATION_ABORTED.
  6. The Completion Routine deletes the CReadChangesRequest object, decrements m_nOutstandingRequests, and returns without queuing a new request.
  7. SleepEx returns due to one or more APCs completing. m_nOutstandingRequests is now zero and m_bTerminate is true, so the function exits and the thread terminates cleanly.
In the unlikely event that shutdown doesn't proceed properly, there's a timeout in the primary thread when waiting for the worker thread to terminate. If the worker thread doesn't terminate in a timely fashion, we let Windows kill it during termination.

    Network Drives

    ReadDirectoryChangesW works with network drives, but only if the remote server supports the functionality. Drives shared from other Windows-based computers will correctly generate notifications. Samba servers may or may not generate notifications, depending on whether the underlying operating system supports the functionality. Network Attached Storage (NAS) devices usually run Linux, so they won't support notifications. High-end SANs are anybody's guess.

    ReadDirectoryChangesW fails with ERROR_INVALID_PARAMETER when the buffer length is greater than 64 KB and the application is monitoring a directory over the network. This is due to a packet size limitation with the underlying file sharing protocols.
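A defensive sketch of that limit follows. The local default size here is an arbitrary choice; detecting a network path can be done with GetDriveTypeW returning DRIVE_REMOTE, which is kept out of this function so the sizing logic stays testable:

```cpp
// Choose the ReadDirectoryChangesW buffer size based on whether the
// directory lives on a network drive.
unsigned ChooseBufferSize(bool isNetworkPath)
{
    const unsigned kLocalDefault = 256u * 1024u; // assumed local default
    const unsigned kNetworkMax   = 64u * 1024u;  // hard limit over the network
    return isNetworkPath ? kNetworkMax : kLocalDefault;
}
```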


    If you've made it this far in the article, I applaud your can-do attitude. I hope I've given you a clear picture of the challenges of using ReadDirectoryChangesW and why you should be dubious of any sample code you see for using the function. Careful testing is critical, including performance testing.

    Go to the GitHub repo for this article or just download the sample code.

    Understanding ReadDirectoryChangesW - Part 1

    The longest, most detailed description in the world of how to successfully use ReadDirectoryChangesW.

    This is Part 1 of 2. This part describes the theory and Part 2 describes the implementation.

    Go to the GitHub repo for this article or just download the sample code.

    I have spent this week digging into the barely-documented world of ReadDirectoryChangesW and I hope this article saves someone else some time. I believe I've read every article I could find on the subject, as well as numerous code samples. Almost all of the examples, including the one from Microsoft, either have significant shortcomings or have outright mistakes.

    You'd think that this problem would have been a piece of cake for me, having been the author of Multithreading Applications in Win32, where I wrote a chapter about the differences between synchronous I/O, signaled handles, overlapped I/O, and I/O completion ports. Except that I only write overlapped I/O code about once every five years, which is just about long enough for me to forget how painful it was the last time. This endeavor was no exception.

    Four Ways to Monitor Files and Directories

    First, a brief overview of monitoring directories and files. In the beginning there was SHChangeNotifyRegister. It was implemented using Windows messages and so required a window handle. It was driven by notifications from the shell (Explorer), so your application was only notified about things that the shell cared about - which almost never aligned with what you cared about. It was useful for monitoring things that the user did in Explorer, but not much else.

    SHChangeNotifyRegister was fixed in Windows Vista so it could report all changes to all files, but it was too late - there are still several hundred million Windows XP users and that's not going to change any time soon.

    SHChangeNotifyRegister also had a performance problem, since it was based on Windows messages. If there were too many changes, your application would start receiving roll-up messages that just said "something changed" and you had to figure out for yourself what had really happened. Fine for some applications, rather painful for others.

    Windows 2000 brought two new interfaces, FindFirstChangeNotification and ReadDirectoryChangesW. FindFirstChangeNotification is fairly easy to use but doesn't give any information about what changed. Even so, it can be useful for applications such as fax servers and SMTP servers that can accept queue submissions by dropping a file in a directory. ReadDirectoryChangesW does tell you what changed and how, at the cost of additional complexity.

    Similar to SHChangeNotifyRegister, both of these new functions suffer from a performance problem. They can run significantly faster than shell notifications, but moving a thousand files from one directory to another will still cause you to lose some (or many) notifications. The exact cause of the missing notifications is complicated. Surprisingly, it apparently has little to do with how fast you process notifications.

    Note that FindFirstChangeNotification and ReadDirectoryChangesW are mutually exclusive. You would use one or the other, but not both.

    Windows XP brought the ultimate solution, the Change Journal, which could track in detail every single change, even if your software wasn't running. Great technology, but equally complicated to use.

    The fourth and final solution is to install a File System Filter, which is the approach used in the popular SysInternals FileMon tool. There is a sample of this in the Windows Driver Kit (WDK). However, this solution is essentially a device driver and so potentially can cause system-wide stability problems if not implemented exactly correctly.

    For my needs, ReadDirectoryChangesW was a good balance of performance versus complexity.

    The Puzzle

    The biggest challenge to using ReadDirectoryChangesW is that there are several hundred possibilities for combinations of I/O mode, handle signaling, waiting methods, and threading models. Unless you're an expert on Win32 I/O, it's extremely unlikely that you'll get it right, even in the simplest of scenarios. (In the list below, when I say "call", I mean a call to ReadDirectoryChangesW.)

    A. First, here are the I/O modes:
    1. Blocking synchronous
    2. Signaled synchronous
    3. Overlapped asynchronous
    4. Completion Routine (aka Asynchronous Procedure Call or APC)
    B. When calling the WaitForXxx functions, you can:
    1. Wait on the directory handle.
    2. Wait on an event object in the OVERLAPPED structure.
    3. Wait on nothing (for APCs.)
    C. To handle notifications, you can use:
    1. Blocking
    2. WaitForSingleObject
    3. WaitForMultipleObjects
    4. WaitForMultipleObjectsEx
    5. MsgWaitForMultipleObjectsEx
    6. I/O Completion Ports
    D. For threading models, you can use:
    1. One call per worker thread.
    2. Multiple calls per worker thread.
    3. Multiple calls on the primary thread.
    4. Multiple threads for multiple calls. (I/O Completion Ports)
    Finally, when calling ReadDirectoryChangesW, you specify flags to choose what you want to monitor, including file creation, last modification date change, attribute changes, and other flags. You can use one flag per call and issue multiple calls, or you can use multiple flags in one call. Multiple flags is always the right solution. If you think you need to use multiple calls with one flag per call to make it easier to figure out what to do, then you need to read more about the data contained in the notification buffer returned by ReadDirectoryChangesW.

    If your head is now swimming in information overload, you can easily see why so many people have trouble getting this right.

    Recommended Solutions

    So what's the right answer? Here's my opinion, depending on what's most important:

    Simplicity - A2C3D1 - Each call to ReadDirectoryChangesW runs in its own thread and sends the results to the primary thread with PostMessage. Most appropriate for GUI apps with minimal performance requirements. This is the strategy used in CDirectoryChangeWatcher on CodeProject. This is also the strategy used by Microsoft's FWATCH sample.

    Performance - A4C6D4 - The highest performance solution is to use I/O completion ports, but, as an aggressively multithreaded solution, it's also a very complex solution that should be confined to servers. It's unlikely to be necessary in any GUI application. If you aren't a multithreading expert, stay away from this strategy.

    Balanced - A4C5D3 - Do everything in one thread with Completion Routines. You can have as many outstanding calls to ReadDirectoryChangesW as you need. There are no handles to wait on, since Completion Routines are dispatched automatically. You embed the pointer to your object in the callback, so it's easy to keep callbacks matched up to their original data structure.

    Originally I had thought that GUI applications could use MsgWaitForMultipleObjectsEx to intermingle change notifications with Windows messages. This turns out not to work because dialog boxes have their own message loop that's not alertable, so a dialog box being displayed would prevent notifications from being processed. Another good idea steamrolled by reality.

    Wrong Techniques

    As I was researching this solution, I saw a lot of recommendations that ranged from dubious to wrong to really, really wrong. Here's some commentary on what I saw.

    If you are using the Simplicity solution above, don't use blocking calls, because the only way to cancel one is with the undocumented technique of closing the handle or the Vista-only function CancelSynchronousIo. Instead, use the Signaled Synchronous I/O mode by waiting on the directory handle. Also, to terminate threads, don't use TerminateThread, because that doesn't clean up resources and can cause all sorts of problems. Instead, create a manual-reset event object that is used as the second handle in the call to WaitForMultipleObjects. When the event is set, exit the thread.

    If you have dozens or hundreds of directories to monitor, don't use the Simplicity solution. Switch to the Balanced solution. Alternatively, monitor a common root directory and ignore files you don't care about.

    If you have to monitor a whole drive, think twice (or three times) about this idea. You'll be notified about every single temporary file, every Internet cache file, every  Application Data change - in short, you'll be getting an enormous number of notifications that could slow down the entire system. If you need to monitor an entire drive, you should probably use the Change Journal instead. This will also allow you to track changes even if your app is not running. Don't even think about monitoring the whole drive with FILE_NOTIFY_CHANGE_LAST_ACCESS.

    If you are using overlapped I/O without using an I/O completion port, don't wait on handles. Use Completion Routines instead. This removes the 64 handle limitation, allows the operating system to handle call dispatch, and allows you to embed a pointer to your object in the OVERLAPPED structure. My example in a moment will show all of this.

    If you are using worker threads, don't send results back to the primary thread with SendMessage.  Use PostMessage instead. SendMessage is synchronous and will not return if the primary thread is busy. This would defeat the purpose of using a worker thread in the first place.

    It's tempting to try and solve the issue of lost notifications by providing a huge buffer. However, this may not be the wisest course of action. For any given buffer size, a similarly-sized buffer has to be allocated from the kernel non-paged memory pool. If you allocate too many large buffers, this can lead to serious problems, including a Blue Screen of Death. Thanks to an anonymous contributor in the MSDN Community Content.

    Jump to Part 2 of this article.

    Go to the GitHub repo for this article or just download the sample code.

    Monday, May 17, 2010

    Using MsgWaitForMultipleObjects in MFC

    One of the problems that I didn't solve in my Multithreading book was how to use MsgWaitForMultipleObjectsEx with MFC. I have always felt guilty about this because it was an important problem, but I simply ran out of time to do the implementation. Recently I finally had a need to solve the problem. MFC has come a long way since 1996 and it's clear that this problem was planned for. With MFC in Visual Studio 2008 and 2010, I was able to solve the problem in minutes instead of days.

    In short, the solution is to replace MFC's call to GetMessage with your own call to MsgWaitForMultipleObjectsEx. This will allow you to dispatch messages, handle signaled objects, and dispatch Completion Routines. Here's the code, which goes in your MFC App object:

    // virtual
    BOOL CMyApp::PumpMessage()
    {
        HANDLE hEvent = ...;
        HANDLE handles[] = { hEvent };
        DWORD const res = ::MsgWaitForMultipleObjectsEx(
            _countof(handles),
            handles,
            INFINITE,
            QS_ALLINPUT,
            MWMO_INPUTAVAILABLE | MWMO_ALERTABLE);
        switch (res)
        {
            case WAIT_OBJECT_0 + 0:
                // the event object was signaled...
                return true;
            case WAIT_OBJECT_0 + _countof(handles):
                return __super::PumpMessage();
            case WAIT_IO_COMPLETION:
                // a completion routine ran; nothing more to do
                break;
        }
        return TRUE;
    }

    There are several obscure points in this code worth mentioning. Getting any of these wrong will cause it to break:
    • The MWMO_INPUTAVAILABLE is required to solve race conditions with the message queue.
    • The MWMO_ALERTABLE is required in order for Completion Routines to be called.
    • WAIT_IO_COMPLETION does not require any action on your part. It just indicates that a completion routine was called.
    • The handle array contains only your own handles. You do NOT wait on the directory handle. Doing so will prevent your completion routine from being called.
    • MsgWaitForMultipleObjectsEx indicates that a message is available (as opposed to a signaled object) by returning WAIT_OBJECT_0 plus the handle count.
    • The call to __super::PumpMessage() is for MFC when this is running in your application object. Outside of MFC, you should replace it with your own message loop.
    Note that this strategy breaks whenever a dialog box is displayed because dialog boxes use their own message loop.

    Wednesday, May 12, 2010

    Visual Studio 2010 Tab Key Not Working

    Several days ago, my tab key stopped working when editing C++ code in Visual Studio 2010. Everything else worked fine, even Ctrl-Tab. But pressing Tab on the keyboard would not insert a tab into the file.  I tried disabling add-ins. No change. I reset the keyboard in Tools / Options / Environment / Keyboard. Still no change. I tried running devenv /safemode. That didn't work either.

    Finally I looked at the other command line switches for devenv.exe, where I saw devenv /ResetSettings. That finally fixed the problem.

    Wednesday, April 28, 2010

    Static Library Dependencies in Visual Studio 2010

    I've been porting my Visual Studio 2008 C++ application to Visual Studio 2010. One rough spot I've hit is the change in how static library dependencies work. In VS2005 and VS2008, we completely gave up on automated static library dependencies because they only worked in the IDE, not from automated builds with VCBUILD. VS2010 was supposed to fix this problem. (Note that MSBUILD replaces VCBUILD for command line builds in VS2010.)

    The problem manifested itself during Batch Build with errors indicating that Debug libraries were being linked into Release builds. For example, these errors happen if you are using STL:

    Core.lib(HttpHelpers.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '2' doesn't match value '0' in Activation.obj

    MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _strcspn already defined in libcmt.lib(strcspn.obj)

    I've also seen these errors for vanilla MFC projects:

    msvcrtd.lib(ti_inst.obj) : error LNK2005: "private: __thiscall type_info::type_info(class type_info const &)" (??0type_info@@AAE@ABV0@@Z) already defined in libcmt.lib(typinfo.obj)

    msvcrtd.lib(ti_inst.obj) : error LNK2005: "private: class type_info & __thiscall type_info::operator=(class type_info const &)" (??4type_info@@AAEAAV0@ABV0@@Z) already defined in libcmt.lib(typinfo.obj)

    Visual Studio 2010 has multiple ways of specifying dependencies. The actual effect of each setting, and the settings' interdependence, does not appear to be officially documented, although some partial information is available elsewhere.

    The first way of specifying dependencies continues from previous versions of Visual Studio in Project | Project Dependencies. However, unlike earlier versions of Visual Studio, this only affects project build order, not dependency library linking. Also, this only affects project build order in the IDE. It does not affect project build order in MSBUILD. Therefore, you need to configure that page, but it's useless for command line builds. Changes on this page are reflected in the .sln file.

    The real action is now on the Framework and References tab in the project properties. This tab used to be exclusively for .Net dependencies, but now it's been expanded for C++ dependencies. When I looked at this page for my project, it looked like this:

    There are a couple of things to notice. First, the Configuration in the top left is grayed out. The Debug/Release settings do not apply to the Framework and References page. Second, you'll notice that the Full Path in the properties on the right points to the Debug build. In other words, the Debug library is always linked, even in the Release build. I've blogged about this problem in the past with .Net libraries, but this is pretty fundamental for C++ libraries.

    I first thought that the solution was to set Reference Assembly Output to False. However, after reading the blog of one of the Microsoft Program Managers and following up with him, he said that the Reference Assembly Output option is only for managed libraries and does not do anything for native libraries. He said that the behavior I'm seeing is anomalous and that it shouldn't be happening.

    [Update 7/18/2010] I have confirmed that this problem only happens with Batch Build. The problem doesn't happen if you explicitly choose a configuration, nor does the problem happen when running MSBUILD from the command line.

    I have opened a bug report on Microsoft Connect. Please vote for it at:

    [Update 7/18/2010] Microsoft has reproduced the problem, but has decided not to fix it. If you are reading this, PLEASE let them know that fixing this problem is important to you!

    Thursday, April 22, 2010

    Visual C++ 2010 Apps Don't Support Windows 2000

    One of the rather important things missing in the list of Breaking Changes for Visual C++ in Visual Studio 2010 is that applications generated by VS2010 no longer support Windows 2000.  Usage of W2K has fallen under 0.5%, but that's still a very large number of users. This was pointed out by Martin Richter in the Community Content. However, the actual story is a little more complex.

    I built a C++ application to see what functions are actually being called. When examined in DEPENDS under Windows 2000, these are the functions that are undefined:
    • DecodePointer
    • EncodePointer
    • ReleaseActCtx
    • CreateActCtxW
    • ActivateActCtx
    • DeactivateActCtx
    The interesting thing is that DecodePointer and EncodePointer are documented as only being available in Windows XP Service Pack 2 or later. This means that the minimum system requirement for an application generated by Visual C++ 2010 is WinXP SP2, or Windows Server 2003 SP1 for server versions of Windows.

    In addition, DUMPBIN shows that the required version of Windows has been bumped from 5.00 to 5.01, which is Windows XP.

    What's really annoying is that none of these calls are required under Windows 2000. These calls appear to be an artificial limitation purely for the purposes of preventing end-of-life versions of Windows from working properly.

    Tuesday, April 13, 2010

    Targeting C++/CLR v2.0 with Visual Studio 2010

    I just spent far too many hours recovering from trying to change the target .Net Framework version for a C++/CLR project in Visual Studio 2010 (final release.) I hope to save someone else the same frustration. If you don't need the features in .Net 4.0, there's a big advantage to targeting the old version since it is much more likely that your customers will have the old version installed.

    If you go into the Framework and References page for a C++/CLR project in Visual Studio 2010, you will see that the Platform droplist is grayed out. There's nothing you can do to ungray it, so don't bother trying.

    Underneath it says, "Targeted framework: .NETFramework, Version=4.0" There's no button you can click to change that. If you RTFM, the documentation says that, "The IDE does not support modifying the targeted framework, but you can change it manually." So that's what I tried to do. I went into the vcxproj file, changed the header to say ToolsVersion="2.0" and reloaded the project. Big mistake. The updated project file wouldn't load. I reverted back to the original file from source control and that wouldn't load either.

    The first error I had to fix was:

    The imported project "C:\Microsoft.Cpp.Default.props" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    This turned out to be relatively easy to solve. I went into the vcxproj file and did a "Replace All" changing UserRootDir to VCTargetsPath.

    The second error was something about ClCompile and not finding AssemblyInfo.cpp. I'm still not entirely clear what caused this because now I can't reproduce the problem. In the end, I removed the project from the solution, closed the solution, closed Visual Studio, deleted the .sdf file, deleted all of the build directories (Debug, Release, etc.), deleted the ipch directory, restarted Visual Studio, added the project back into the solution, and it worked.

    None of which got me any closer to solving the original problem.

    I created a new C++/CLR project from scratch, which pointed me at the answer. The project file I was using had been created by the Upgrade Wizard in Visual Studio 2010 and it turns out that an important line was left out. Underneath this line:

    <PropertyGroup Label="Globals">

    there should be another line that says:

    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>

    Change it from v4.0 to v2.0 and everything will magically switch to v2.0. The value v3.5 is also supported. There's some discussion of this halfway down the page under Known issues for conversion in VS2010.

    Note that using v2.0 forces Visual Studio to use the Visual Studio 2008 compiler. Visual Studio 2010 can only compile v4.0 code.

    Update 6/18/2010: To debug your code, you may have to explicitly tell Visual Studio 2010 that the .Net 2.0 CLR should be used. I ran into this when my EXE was native and my DLL was C++/CLR. Create a file named after your executable, with a .exe.config extension, and put the following information in the file:
            <supportedRuntime version="v2.0.50727"/>
    More information can be found in this article on the Visual Studio Debugger Team Blog. Note that their sample did not work for me under Windows Vista until I removed the line that said:
    <?xml version="1.0"?>
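
    Assembled into a complete file, the .exe.config might look like this (a sketch; the <configuration>/<startup> wrapper is the standard .NET convention, and the <?xml?> declaration is left out per the note above):

    ```xml
    <configuration>
      <startup>
        <supportedRuntime version="v2.0.50727"/>
      </startup>
    </configuration>
    ```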

    Thursday, March 18, 2010

    Managing multiple configurations using Virtual PC

    We use Virtual PC extensively for testing in our lab. We have about 12 different configurations for each version of Windows we test. We've spent a lot of time learning how to create these configurations so that they work reliably and to create a reasonable balance between updates of individual configurations versus a global rebuild as the security updates and service packs pile up.

    This blog entry describes how we create the VHDs and configure them in Virtual PC.

    Layer Overview

    Our VHD tree is built as a set of defined layers. Layer 1 consists of a single root VHD for each version of Windows, such as WinXP SP3. Layer 2 is built from differencing VHDs that use the root VHD as the parent. This means that the root VHD needs to be configured as far as possible, without requiring lower-layer VHDs to "undo" any configuration. Here is the VHD structure we use:

    1. Root VHD - Contains base Windows install and common configurations
    2. Configuration specific VHD - Includes apps, service packs, etc. that are different from the root. This is where configuration-dependent changes are done.
    3. Working VHD - Includes temporary changes that are used over the course of several tests.
    4. Undo file.

    Layers are defined by how often they are changed, because when a layer changes, everything underneath it is invalidated. Layer 1 changes every six to eighteen months, usually when the number of security updates becomes overwhelming. Layer 2 changes every few months. Layer 3 is only used when a customized configuration needs to be tested for several days (or when you hit the wrong button and say "Commit changes".) Layer 4 changes on a minute-to-minute basis. It is discarded whenever a user selects "Discard changes" when a VPC is closed.


    Creating the VHD hierarchy is tricky. There are several problems we've seen that must be solved by making the changes in the correct order. Some of these problems include:
    • Duplicate Ethernet MAC addresses.
    • Duplicate computer names.
    • Domain disconnects.
    • Accidental overwriting of a parent vhd file.
    • Windows Update automatically overwriting desired configurations.
    The MAC address will be updated automatically by Virtual PC as long as you create a new .vmc file from the menu. In other words, use the New Virtual Machine Wizard; don't make a copy of your existing .vmc file. If you insist on copying your .vmc file, you can manually remove the MAC address from the XML file and it will automatically be recreated.
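    For reference, the address is stored in the .vmc XML in an element like this (a sketch from memory of the Virtual PC 2007 format; the address value shown is made up, though 00-03-FF is the Virtual PC vendor prefix). Emptying the element causes Virtual PC to generate a fresh address on the next boot:

    ```xml
    <!-- Before: the generated address copied along with the .vmc file -->
    <ethernet_card_address type="bytes">0003FF123456</ethernet_card_address>

    <!-- After: cleared, so Virtual PC recreates it automatically -->
    <ethernet_card_address type="bytes"></ethernet_card_address>
    ```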
    Many people are also concerned about "SID duplication" and want to use something like NewSID or SysPrep. Recent reports indicate that this is no longer a concern. Please see NewSID Retirement and the Machine SID Duplication Myth.

    Create the Layer 1 Root VPC

    The first task in creating the hierarchy is to create the Layer 1 VHD, which contains the OS install and will be common to all dependent virtual machines.
    1. Create the .vhd file using the Disk Wizard. Set to dynamic size, 20 to 32GB is usually a good size.
    2. Create the Virtual PC .vmc using the New Virtual Machine Wizard under the File menu. Make sure you give it enough RAM; the defaults are almost always too small.
    3. Install Windows. Install a version of Windows that includes the desired service pack. Set the system name to be something like RootWinXp so you know when you've forgotten to change it. Do not join a domain during installation.
    4. Log in as an Administrator and activate Windows, if necessary. Don't do this in a child virtual machine or you'll need an additional activation key for every child.
    5. Shut down Windows and make a backup of the VHD. Name it "Level 1 - Clean". This is a "clean machine" that's been activated. Use this VHD to create completely new configuration hierarchies.

    Configure the Layer 1 Root VPC

    1. Boot the virtual machine.
    2. Install the Virtual PC Additions.
    3. Set the vpc to be part of a workgroup, if it's not already. Do NOT join the root vhd to a domain! That must be done in a lower layer.
    4. Install desired Windows service pack, if any.
    5. For Windows XP, go to Windows Update in Internet Explorer and enable Microsoft Update.
    6. Install latest security updates through Windows Update. Pay attention to the following:
      • For Windows XP, go to Windows Update in Internet Explorer and check High Priority and Optional updates. The task bar "Update" icon does not show these updates.
      • Under Optional, I recommend you install Update for Root Certificates.
      • Also under Optional, install Microsoft .NET Framework, if needed.
      • Windows Update may automatically upgrade Internet Explorer if you don't do a "Custom" update and uncheck Internet Explorer. OTOH, IE8 for Windows XP is under Optional.
      • After each reboot, go back to Windows Update and check again. You'll probably need to run through Windows Update several times to get all updates.
    7. In Control Panel, set Automatic Updates to "Notify me", not to automatically download or to automatically install. Otherwise you end up fighting Windows Update every time you boot a configuration.
    8. Disable the requirement for Ctrl-Alt-Del. This can save a lot of mousing around when running the virtual PC in a window. For Windows XP, this can be found under Control Panel / User Accounts / Advanced (or use Group Policy.)
    9. Create any desired local users (as opposed to domain users.) Make sure you set them as Admin or Limited, appropriately. At a minimum, I recommend creating a "Limited" user for testing.
    10. Disable the Internet Connection Wizard if you have Windows XP virtual machines. In Group Policy, it's under User Configuration\Administrative Templates\Windows Components\Internet Explorer\Internet Settings\Advanced Settings\Internet Connection Wizard\
    11. Set firewall exceptions in Control Panel | Windows Firewall | Exceptions:
      • File and Printer Sharing must be enabled to allow access to the system by name on the network.
      • Remote Desktop must be enabled to access the system remotely.
    12. If desired, enable Remote Desktop under My Computer | Properties | Remote. Note that Remote Desktop is not supported in any of the "Home" versions of Windows. Local Administrators will automatically have rights to access the system via Remote Desktop.
    13. If you use Remote Debugging in Visual Studio, start up MSVSMON.EXE and tell it to unblock the firewall for your subnet. You will discover that this adds numerous exceptions to the firewall. MSVSMON.EXE can be found in:

      C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger\x86\msvsmon.exe
    14. Modify options in My Computer | Tools | Folder Options | View, such as "Display the contents of system folders", "Show hidden files and folders", "Hide extensions", and "Hide protected operating system files". This can also be done in the domain logon script with:

      reg add hkcu\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced /v Hidden /t REG_DWORD /d 1 /f

      reg add hkcu\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced /v HideFileExt /t REG_DWORD /d 0 /f

      reg add hkcu\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced /v WebViewBarricade /t REG_DWORD /d 1 /f
    15. Share any desired folders.
    16. Shut down the root vpc using "Shut Down" from the Start menu.
    17. Commit all changes if you are using Undo disks.
    18. Set the VHD to be read-only.
    19. Create a backup of the file. Name it "Level 1 - Common".
    20. Optionally, go to the Virtual PC Console and remove this virtual machine so you don't accidentally try to use it. If you don't delete it, make sure you enable Undo disks so you don't accidentally change it (Virtual PC will helpfully remove the read-only attribute.)
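    Steps 18 and 19 are easy to script from a command prompt; for example (hypothetical file names and backup location, matching the "Level 1 - Common" label from Step 19):

    ```
    attrib +r "WinXP SP3 Root.vhd"
    copy "WinXP SP3 Root.vhd" "D:\Backups\Level 1 - Common.vhd"
    ```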

    Create a Layer 2 VPC

    1. Use disk wizard to create a differencing hard disk to the root.
    2. Make a backup of this vhd file. Name it "Level 2 - Empty".
    3. Create the Virtual PC .vmc using the menus. Again, make sure you give it enough RAM. Turn off Undo for now.
    4. Boot the new virtual PC.
    5. Log in as an administrator.
    6. Change the system name.
    7. Join a domain, if desired. This can be done from the command line with the NetDom command, which is available in the Windows XP Support Tools. (This should all be on one line.)

      netdom join myvpc /domain:<domain name> /userd:<domain admin user> /passwordd:<domain admin password>
    8. Give any desired domain users or groups permission to login via Remote Desktop.
    9. Log in as any domain users you plan to use frequently. This does the one-time configuration of the VPC for that user. Change Folder Options again for these users.
    10. Install any configuration-dependent software.
    11. Install updates and service packs for the software you just installed.
    12. Map network shares with a domain logon script. If a computer on a domain must map a drive on a non-domain computer, make sure the non-domain computer has an account with the same name and password as the domain computer. Alternatively, you can run this command once to save the username/password on the domain computer, then say "net use x: \\acme\share" in the domain logon script:

      net use x: \\acme\share /savecred /persistent:yes
    13. Check Windows Update one more time. On Vista and Windows 7, make sure you explicitly tell it to check again.
    14. Once everything is configured to your satisfaction, shut down Windows from the Start menu so it shuts down cleanly.
    15. Set the VHD to be read-only.
    16. Create a backup of the file. Name it "Level 2 - Configured".

    Create a Layer 3 VPC

    1. Create a third vhd, differencing, whose parent is the vhd you just set to be read-only. I normally name it the same as the parent with the word "Child" appended. This is what I called the "Working VHD" earlier in this document. You will use this when you need to reuse a configuration a few times, then discard the configuration. Normally this vhd is empty.
    2. Change the settings in Virtual PC for your virtual machine to point to this working file.
    3. Enable Undo for this vhd.
    4. Backup this third VHD. Label it "Level 3 - Empty". You'll need the backup whenever you discard your current working configuration.
    At this point you should have five backup files. The files labeled "Empty" in the list below should be small, between 44KB and 106KB:
    • Level 1 - Clean root VHD, with a clean Windows install.
    • Level 1 - Common root VHD, with a mostly configured Windows install.
    • Level 2 - Empty configuration VHD, which is an empty differencing disk whose parent is the Level 1 vhd.
    • Level 2 - Configured VHD, which is your fully configured test environment.
    • Level 3 - Empty VHD.

    Create Remaining Configurations

    1. Copy your Level 2 - Empty vhd to a new file that will be the new configuration. Under Windows 7, you can create all of them at once using the DISKPART utility. (DiskPart requires elevated privileges, so it must be run from a command prompt started with Run as administrator.) For example:

      DISKPART < Parts.txt

      where Parts.txt contains one or more lines similar to this:
      create vdisk file="c:\vpc\ParentDisks\WinXP Office 2007.vhd" parent="c:\vpc\ParentDisks\WinXP SP3 Root.vhd"
    2. Go to Step 3 under Create a Layer 2 VPC and continue through Create a Layer 3 VPC.
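    The Parts.txt in Step 1 can contain one create vdisk line per configuration, so a whole family of Layer 2 disks can be stamped out from the same root in a single run (file names are hypothetical):

    ```
    create vdisk file="c:\vpc\ParentDisks\WinXP Office 2007.vhd" parent="c:\vpc\ParentDisks\WinXP SP3 Root.vhd"
    create vdisk file="c:\vpc\ParentDisks\WinXP Office 2010.vhd" parent="c:\vpc\ParentDisks\WinXP SP3 Root.vhd"
    create vdisk file="c:\vpc\ParentDisks\WinXP Domain.vhd" parent="c:\vpc\ParentDisks\WinXP SP3 Root.vhd"
    ```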


    Licensing is also an issue. Strictly speaking, it appears that you need one Windows license per virtual PC. In practice, we have a dozen different variants of each root virtual PC, with only minor changes between each, such as whether it's a member of a workgroup or a member of a domain, or whether it's Service Pack 2 of WinXP or Service Pack 3. Since we rebuild these fairly routinely from scratch, it is hard to believe that we are supposed to own thirty different licenses and rebuy all of them every few months. Our solution for Windows has been to use the volume licensing versions of Windows, which are available to Microsoft Certified Partners on MSDN Downloads.

    Tuesday, March 9, 2010

    Create separate references for Debug and Release DLLs in C++/CLR

    I'm one of the crazy people who has worked with C++/CLR to make native C++ code coexist with .Net code. It runs really well, with a minimum of frustration.

    Except for one thing. We rely on some components supplied by a third party. These components have both Debug and Release versions. You'll notice that the References are listed under "Common Properties" in the Properties window, so you can't create separate references for Debug and Release by simply switching to the appropriate property set.

    This has been annoying me now for at least two years, and I finally found a solution, thanks to Marco Beninca in this post:

    The solution is to use #using in your source code instead of defining your assembly references on the "Framework and References" page. Then you can go to the General page for C/C++ and set different directories for Debug and for Release.
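    As a sketch (the assembly and namespace names are hypothetical), the reference moves out of the project settings and into the source file:

    ```
    // Reference the assembly directly in source instead of on the
    // "Framework and References" page. The directory searched is set under
    // C/C++ -> General -> "Resolve #using References", which can be given
    // different values for the Debug and Release configurations.
    #using <ThirdPartyComponents.dll>

    using namespace ThirdPartyComponents;
    ```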

    The other advantage to this solution is that you don't have to reset all of your references when you get a new version of the components.

    One disadvantage to this strategy is that the compiler no longer takes care of copying the appropriate DLLs to the appropriate directories. This can easily cause you to build with a different set of DLLs than you are running against, which will certainly cause a crash. My solution was to create a Pre-Link step that copies the DLLs to $(OutDir).
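    Such a pre-link step might look like this (a sketch; the ThirdParty directory layout is hypothetical, while $(SolutionDir), $(Configuration), and $(OutDir) are standard Visual Studio macros):

    ```
    xcopy /y /d "$(SolutionDir)ThirdParty\$(Configuration)\*.dll" "$(OutDir)"
    ```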

    Monday, February 15, 2010

    Online Backups - The Good, The Bad, and the Ugly

    This is the third and last part in my series on doing backups for home and small office environments.

    Online backup has come a long way in the past couple of years. Service has become more reliable and faster Internet connections have made backing up large quantities of data more palatable. My Internet connection is 5Mbps (30 MB/min) upstream, which is more than fast enough for overnight backup.

    In spite of these advances, my opinion is that online backup is still best suited for last-ditch disaster recovery. Any online backup strategy should be paired with a primary backup strategy that includes an image and/or file backup to a USB drive or network drive. Buying 100 GB of online storage is quite expensive, let alone 1 TB. The "unlimited" plans are attractive, but companies make significant compromises to make these unlimited plans affordable, including lower backup speeds, lack of geographic redundancy, and subpar technical support. My research indicates that cheap and reliable are mutually exclusive for online backup - the lower the price, the less reliable the service.

    It should also be noted that it's the recovery that's most important, not the backup. Numerous people reported that Mozy could take several days (or more!) to prepare to restore a large number of files. Ordering a DVD was even worse. For me, this isn't acceptable.

    Here's my experience with the various services.

    Acronis. My first attempt at online backup was with Acronis, who introduced online backup in October 2009. My experience with it has been total failure. Acronis TrueImage 11 has never succeeded in creating a backup on my Windows Vista system. Acronis recently increased my backup space from 25GB to 250GB, all for $50/year. This seems like a safe choice for them, since I've never been able to backup more than 5GB to their servers.

    Mozy. I've been using Mozy Pro for about two months now. They charge 50 cents/GB/month for their Pro version. On the plus side, the nightly backup works quite reliably and I was easily able to select what I wanted to back up.

    However, there are several downsides to Mozy. First, backup speed is only thirty to forty percent of the possible maximum. Mozy compresses data, then sends it. During compression, Mozy usually isn't transmitting data. When Mozy is transmitting, it only manages to max out my connection part of the time. So there's a lot of room for improvement in backup speed. [Updated 5/27/2010] The performance problems seem to have been resolved in Mozy Pro v2.0. I'm consistently maxing out my upstream connection at 5Mbps.

    Second, numerous people have reported problems restoring data from Mozy, even from their Pro service. Former employees of Mozy confirm that Mozy has been having problems managing rapid growth.

    Third, Mozy does not allow me to exclude specific extensions. While this isn't an issue for most people, it's a real problem when backing up development directories containing projects built by Microsoft Visual Studio. [Updated 5/27/2010] Mozy Pro v1.6 did allow excluding extensions, but it was difficult to use and the documentation was useless. Mozy Pro v2.0 makes it somewhat easier. See my article Configuring Mozy Pro for Visual Studio.

    My final issue with Mozy is that it doesn't support multiple historical revisions. This is a feature that I consider very important in case local backups become corrupted (which, as I mentioned earlier, has been all too frequent with Acronis TrueImage.) [Updated 5/27/2010] Multiple historical revisions are definitely supported in Mozy Pro v2.0, but again the documentation is lacking. To view the historical revisions, go to My Computer, open MozyPro Remote Backup, open one of the drives, right-click on a folder, and select Change Time.

    Carbonite. I didn't try Carbonite, although I know several people who are happy with it.

    Iron Mountain. I will be trying Iron Mountain next. They are the most expensive, but, compared to the cost of losing the data, the price is cheap. They support multiple historical revisions, incremental block updates, Windows 7, and many other features. Iron Mountain makes backup systems for large companies, so I'm hopeful that their technology is more reliable. I'll keep you posted.

    To recap, although online backup has been around for several years, most of the solutions still have significant shortcomings, especially with reliability. My research indicates that you get what you pay for, with no exceptions.