Friday, May 28, 2010

Configuring Mozy Pro for Visual Studio

Earlier this week I revised my review of Mozy based on version 2.0 of the MozyPro client software. In short, MozyPro went from something I found acceptable to something that I am quite happy with.

One of my frustrations in earlier versions of MozyPro was figuring out how to exclude Visual Studio temporary files. In Visual Studio 2010, the IntelliSense data (the .sdf file) is 70MB to 300MB for each project. Since Visual Studio automatically rebuilds this file as necessary, it's a waste of time and money to back it up. There are numerous other temporary files, such as .ncb, .sbr, .bsc, and others, all of which are unnecessary to back up. This article shows how to set up MozyPro so that files with those extensions will not be backed up.

I found it best to create a global rule instead of modifying one of the predefined Backup Sets. Although MozyPro has a predefined Backup Set named "Visual Studio Projects", I've created other Backup Sets based on projects and customers. By creating a global rule, the temporary files are excluded no matter which Backup Set a project falls under.

To create a rule for Visual Studio, go to the Backup Sets page in the MozyPro configuration. Create a new backup set and call it something like Excluded Files. In the Backup Set Editor, put a checkmark next to C: so that this rule applies to the entire drive. Next check the box in the top right labeled "Files matching this set will be EXCLUDED." Under the Rules, the first rule should read:

Include / File Type / sdf pch ncb idb tlog sbr res dep obj ilk ipch bsc

This will exclude temporary and intermediate files for Visual Studio 2005 through 2010. If you run Visual Studio 2010, also click the plus sign (+) on the right and create a second rule that reads:

Or / Include / Folder name / is / ipch / Files and Folders

Finally, if you use Virtual PC, create a third rule:

Or / Include / File Type / vud vsv

Note that the "Include" command in these rules really means "Exclude" because you checked the box "Files matching this set will be EXCLUDED..."

Wednesday, May 19, 2010

Understanding ReadDirectoryChangesW - Part 2

The longest, most detailed description in the world of how to successfully use ReadDirectoryChangesW.

This is Part 2 of 2. Part 1 describes the theory and this part describes the implementation.

Go to the GitHub repo for this article or just download the sample code.

Getting a Handle to the Directory

Now we'll look at the details of implementing the Balanced solution described in Part 1. When reading the declaration for ReadDirectoryChangesW, you'll notice that the first parameter is to a directory, and it's a HANDLE. Did you know that you can get a handle to a directory? There is no OpenDirectory function and the CreateDirectory function doesn't return a handle. Under the documentation for the first parameter, it says “This directory must be opened with the FILE_LIST_DIRECTORY access right.” Later, the Remarks section says, “To obtain a handle to a directory, use the CreateFile function with the FILE_FLAG_BACKUP_SEMANTICS flag.” The actual code looks like this:

HANDLE hDir = ::CreateFile(
    strDirectory,           // pointer to the directory name
    FILE_LIST_DIRECTORY,    // access (read/write) mode
    FILE_SHARE_READ         // share mode
        | FILE_SHARE_WRITE
        | FILE_SHARE_DELETE,
    NULL,                   // security descriptor
    OPEN_EXISTING,          // how to create
    FILE_FLAG_BACKUP_SEMANTICS // file attributes
        | FILE_FLAG_OVERLAPPED,
    NULL);                  // file with attributes to copy
The FILE_LIST_DIRECTORY access right isn't even mentioned in the CreateFile() documentation. It's discussed in File Security and Access Rights, but not in any useful way.

Similarly, FILE_FLAG_BACKUP_SEMANTICS has this interesting note, "Appropriate security checks still apply when this flag is used without SE_BACKUP_NAME and SE_RESTORE_NAME privileges." In past dealings with this flag, it had been my impression that Administrator privileges were required, and the note seems to bear this out. However, attempting to enable these privileges on a Windows Vista system by adjusting the security token does not work if UAC is enabled. I'm not sure if the requirements have changed or if the documentation is simply ambiguous. Others are similarly confused.

The sharing mode also has pitfalls. I saw a few samples that left out FILE_SHARE_DELETE. You'd think that this would be fine since you do not expect the directory to be deleted. However, leaving out that permission prevents other processes from renaming or deleting files in that directory. Not a good result.

Another potential pitfall of this function is that the referenced directory itself is now “in use” and so can't be deleted. To monitor files in a directory and still allow the directory to be deleted, you would have to monitor the parent directory and its children.

Calling ReadDirectoryChangesW

The actual call to ReadDirectoryChangesW is the simplest part of the whole operation. Assuming you are using completion routines, the only tricky part is that the buffer must be DWORD-aligned.
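One simple way to satisfy that alignment requirement is to allocate the buffer in DWORD-sized units rather than bytes. Here is a minimal sketch (written portably, with std::uint32_t standing in for the Windows DWORD typedef; the helper name is mine, not from the sample code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Allocating in 32-bit units guarantees the buffer's start address is
// DWORD-aligned; hand buffer.data() and the byte count to
// ReadDirectoryChangesW's lpBuffer and nBufferLength parameters.
std::vector<std::uint32_t> MakeNotifyBuffer(std::size_t bytes)
{
    assert(bytes % sizeof(std::uint32_t) == 0);
    return std::vector<std::uint32_t>(bytes / sizeof(std::uint32_t));
}
```

The same trick works for a member buffer: declare it as an array of DWORDs and cast to a byte pointer at the call site.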

The OVERLAPPED structure is supplied to indicate an overlapped operation, but none of the fields are actually used by ReadDirectoryChangesW. However, a little known secret of using Completion Routines is that you can supply your own pointer to the C++ object. How does this work? The documentation says that, "The hEvent member of the OVERLAPPED structure is not used by the system, so you can use it yourself." This means that you can put in a pointer to your object. You'll see this in my sample code below:

void CChangeHandler::BeginRead()
{
    ::ZeroMemory(&m_Overlapped, sizeof(m_Overlapped));
    m_Overlapped.hEvent = this; // our pointer, unused by the system

    DWORD dwBytes = 0;

    BOOL success = ::ReadDirectoryChangesW(
        m_hDirectory,                   // handle from CreateFile
        &m_Buffer[0],                   // DWORD-aligned buffer
        m_Buffer.size(),                // buffer size in bytes
        FALSE,                          // monitor children?
        FILE_NOTIFY_CHANGE_LAST_WRITE,  // filter conditions
        &dwBytes,                       // bytes returned (sync calls only)
        &m_Overlapped,                  // overlapped buffer
        &NotificationCompletion);       // completion routine
}
Since this call uses overlapped I/O, m_Buffer won't be filled in until the completion routine is called.

Dispatching Completion Routines

For the Balanced solution we've been discussing, there are only two ways to wait for Completion Routines to be called. If everything is being dispatched using Completion Routines, then SleepEx is all you need. If you need to wait on handles as well as to dispatch Completion Routines, then you want WaitForMultipleObjectsEx. The Ex version of the functions is required to put the thread in an “alertable” state, which means that completion routines will be called.

To terminate a thread that's waiting using SleepEx, you can write a Completion Routine that sets a flag in the SleepEx loop, causing it to exit. To call that Completion Routine, use QueueUserAPC, which allows one thread to call a completion routine in another thread.

Handling the Notifications

The notification routine should be easy. Just read the data and save it, right? Wrong. Writing the Completion Routine also has its complexities.

First, you need to check for and handle the error code ERROR_OPERATION_ABORTED, which means that CancelIo has been called, this is the final notification, and you should clean up appropriately. I describe CancelIo in more detail in the next section. In my implementation, I used InterlockedDecrement to decrease cOutstandingCalls, which tracks my count of active calls, then I returned. My objects were all managed by the MFC mainframe and so did not need to be deleted by the Completion Routine itself.

You can receive multiple notifications in a single call. Make sure you walk the data structure and check for each non-zero NextEntryOffset field to skip forward.

ReadDirectoryChangesW is a "W" routine, so it does everything in Unicode. There's no ANSI version of this routine. Therefore, the data buffer is also Unicode. The string is not NULL-terminated, so you can't just use wcscpy. If you are using the ATL or MFC CString class, you can instantiate a wide CString from a raw string with a given number of characters like this:

CStringW wstr(fni.FileName, fni.FileNameLength / sizeof(wchar_t));
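To illustrate both points together (walking NextEntryOffset and constructing each string by character count), here is a sketch of the parsing loop. Since FILE_NOTIFY_INFORMATION comes from windows.h, this portable version declares a struct with the same layout and uses std::wstring in place of CStringW; on Windows you would use the real struct and fni.FileName / fni.FileNameLength directly.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Mirrors the layout of FILE_NOTIFY_INFORMATION from winnt.h so this
// sketch can compile without windows.h; on Windows, use the real struct.
struct NotifyInfo {
    std::uint32_t NextEntryOffset;  // bytes to the next record, 0 if last
    std::uint32_t Action;           // FILE_ACTION_* value
    std::uint32_t FileNameLength;   // length of FileName in BYTES
    wchar_t       FileName[1];      // NOT null-terminated
};

// Walk every record in a buffer filled by ReadDirectoryChangesW.
std::vector<std::wstring> ParseNotifications(const char* pBuffer)
{
    std::vector<std::wstring> names;
    const char* p = pBuffer;
    for (;;)
    {
        const NotifyInfo* fni = reinterpret_cast<const NotifyInfo*>(p);
        // The length is in bytes, not characters, and the name is not
        // null-terminated, so construct with an explicit count.
        names.emplace_back(fni->FileName,
                           fni->FileNameLength / sizeof(wchar_t));
        if (fni->NextEntryOffset == 0)
            break;                     // last record in this buffer
        p += fni->NextEntryOffset;     // offset is relative to this record
    }
    return names;
}
```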

Finally, you have to reissue the call to ReadDirectoryChangesW before you exit the completion routine. You can reuse the same OVERLAPPED structure; the documentation specifically says that the OVERLAPPED structure is not accessed again by Windows after the completion routine is called. However, you must use a different buffer than the one from your current call or you will end up with a race condition.

One point that isn't clear to me is what happens to change notifications in between the time that your completion routine is called and the time you issue the new call to ReadDirectoryChangesW.

I'll also reiterate that you can still "lose" notifications if many files are changed in a short period of time. According to the documentation, if the buffer overflows, the entire contents of the buffer are discarded and the lpBytesReturned parameter contains zero. However, it's not clear to me if the completion routine will be called with dwNumberOfBytesTransfered equal to zero, and/or if there will be an error code specified in dwErrorCode.

There are some humorous examples of people trying (and failing) to write the completion routine correctly. My favorite is a forum post where, after insulting the person asking for help, the author presents his example of how to write the routine and concludes with, "It's not like this stuff is difficult." His code is missing error handling, he doesn't handle ERROR_OPERATION_ABORTED, he doesn't handle buffer overflow, and he doesn't reissue the call to ReadDirectoryChangesW. I guess it's not difficult when you just ignore all of the difficult stuff.

Using the Notifications

Once you receive and parse a notification, you need to figure out how to handle it. This isn't always easy. For one thing, you will often receive multiple duplicate notifications about changes, particularly when a long file is being written by its parent process. If you need the file to be complete, you should process each file after a timeout period has passed with no further updates. [Update: See the comment below by Wally The Walrus for details on the timeout.]
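As a sketch of that timeout idea (illustrative only, not the author's sample code): keep a map from filename to the time of its most recent notification, and only hand a file onward once a quiet period has elapsed with no further updates.

```cpp
#include <cassert>
#include <chrono>
#include <map>
#include <string>
#include <vector>

using Clock = std::chrono::steady_clock;

// Coalesces duplicate notifications: a file is released only after a
// quiet period passes with no further updates to it.
class ChangeDebouncer {
public:
    explicit ChangeDebouncer(std::chrono::milliseconds quiet)
        : m_quiet(quiet) {}

    // Call on every notification for a path; restarts that path's timer.
    void OnNotify(const std::wstring& path, Clock::time_point now)
    { m_pending[path] = now; }

    // Returns the paths that have been quiet long enough, removing them
    // from the pending set.
    std::vector<std::wstring> TakeSettled(Clock::time_point now)
    {
        std::vector<std::wstring> settled;
        for (auto it = m_pending.begin(); it != m_pending.end(); )
        {
            if (now - it->second >= m_quiet)
            {
                settled.push_back(it->first);
                it = m_pending.erase(it);
            }
            else
                ++it;
        }
        return settled;
    }

private:
    std::chrono::milliseconds m_quiet;
    std::map<std::wstring, Clock::time_point> m_pending;
};
```

A real implementation would drive TakeSettled from a timer in the worker thread; the quiet period depends on how your writers behave.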

An article by Eric Gunnerson points out that the documentation for FILE_NOTIFY_INFORMATION contains a critical comment: If there is both a short and long name for the file, the function will return one of these names, but it is unspecified which one. Most of the time it's easy to convert back and forth between short and long filenames, but that's not possible if a file has been deleted. Therefore, if you are keeping a list of tracked files, you should probably track both the short and long filename. I was unable to reproduce this behavior on Windows Vista, but I only tried on one computer.

You will also receive some notifications that you may not expect. For example, even if you set the parameters of ReadDirectoryChangesW so that you aren't notified about child directories, you will still get notifications about the child directories themselves. Let's assume you have two directories, C:\A and C:\A\B. You move the file info.txt from the first directory to the second. You will receive FILE_ACTION_REMOVED for the file C:\A\info.txt and FILE_ACTION_MODIFIED for the directory C:\A\B. You will not receive any notifications about C:\A\B\info.txt.

There are some other surprises. Have you ever used hard links in NTFS? Hard links allow you to have multiple filenames that all reference the same physical file. If you have one reference in a monitored directory and a second reference in a second directory, you can edit the file in the second directory and a notification will be generated in the first directory. It's like magic.

On the other hand, if you are using symbolic links, which were introduced in Windows Vista, then no notification will be generated for the linked file. This makes sense when you think it through, but you have to be aware of these various possibilities.

There's yet a third possibility, which is junction points linking one partition to another. In that case, monitoring child directories won't monitor files in the linked partition. Again, this behavior makes sense, but it can be baffling when it's happening at a customer site and no notifications are being generated.

Shutting Down

I didn't find any articles or code (even in open source production code) that properly cleaned up the overlapped call. The documentation on MSDN for canceling overlapped I/O says to call CancelIo. That's easy. However, my application then crashed when exiting. The call stack showed that one of my third party libraries was putting the thread in an alertable state (which meant that Completion Routines could be called) and that my Completion Routine was being called even after I had called CancelIo, closed the handle, and deleted the OVERLAPPED structure.

As I was searching various web pages with sample code that called CancelIo, I found one page that included the code below:


if (!HasOverlappedIoCompleted(&pMonitor->ol))
    SleepEx(5, TRUE);


This looked promising. I faithfully copied it into my app. No effect.

I re-read the documentation for CancelIo, which makes the statement that "All I/O operations that are canceled complete with the error ERROR_OPERATION_ABORTED, and all completion notifications for the I/O operations occur normally." Decoded, this means that all Completion Routines will be called at least one final time after CancelIo is called. The call to SleepEx should have allowed that, but it wasn't happening. Eventually I determined that waiting for 5 milliseconds was simply too short. Maybe changing the "if" to a "while" would have solved the problem, but I chose to approach the problem differently since this solution requires polling every existing overlapped structure.

My final solution was to track the number of outstanding requests and to continue calling SleepEx until the count reached zero. In the sample code, the shutdown sequence works as follows:
  1. The application calls CReadDirectoryChanges::Terminate (or simply allows the object to destruct.)
  2. Terminate uses QueueUserAPC to send a message to CReadChangesServer in the worker thread, telling it to terminate.
  3. CReadChangesServer::RequestTermination sets m_bTerminate to true and delegates the call to the CReadChangesRequest objects, each of which calls CancelIo on its directory handle and closes the directory handle.
  4. Control is returned to the CReadChangesServer::Run function. Note that nothing has actually terminated yet.
void Run()
{
    while (m_nOutstandingRequests || !m_bTerminate)
        ::SleepEx(INFINITE, true);
}

  5. CancelIo causes Windows to automatically call the Completion Routine for each CReadChangesRequest overlapped request. For each call, dwErrorCode is set to ERROR_OPERATION_ABORTED.
  6. The Completion Routine deletes the CReadChangesRequest object, decrements m_nOutstandingRequests, and returns without queuing a new request.
  7. SleepEx returns due to one or more APCs completing. m_nOutstandingRequests is now zero and m_bTerminate is true, so the function exits and the thread terminates cleanly.
In the unlikely event that shutdown doesn't proceed properly, there's a timeout in the primary thread when waiting for the worker thread to terminate. If the worker thread doesn't terminate in a timely fashion, we let Windows kill it during termination.

    Network Drives

    ReadDirectoryChangesW works with network drives, but only if the remote server supports the functionality. Drives shared from other Windows-based computers will correctly generate notifications. Samba servers may or may not generate notifications, depending on whether the underlying operating system supports the functionality. Network Attached Storage (NAS) devices usually run Linux, so they won't support notifications. High-end SANs are anybody's guess.

    ReadDirectoryChangesW fails with ERROR_INVALID_PARAMETER when the buffer length is greater than 64 KB and the application is monitoring a directory over the network. This is due to a packet size limitation with the underlying file sharing protocols.


    If you've made it this far in the article, I applaud your can-do attitude. I hope I've given you a clear picture of the challenges of using ReadDirectoryChangesW and why you should be dubious of any sample code you see for using the function. Careful testing is critical, including performance testing.

    Go to the GitHub repo for this article or just download the sample code.

    Understanding ReadDirectoryChangesW - Part 1

    The longest, most detailed description in the world of how to successfully use ReadDirectoryChangesW.

    This is Part 1 of 2. This part describes the theory and Part 2 describes the implementation.

    Go to the GitHub repo for this article or just download the sample code.

    I have spent this week digging into the barely-documented world of ReadDirectoryChangesW and I hope this article saves someone else some time. I believe I've read every article I could find on the subject, as well as numerous code samples. Almost all of the examples, including the one from Microsoft, have either significant shortcomings or outright mistakes.

    You'd think that this problem would have been a piece of cake for me, having been the author of Multithreading Applications in Win32, where I wrote a chapter about the differences between synchronous I/O, signaled handles, overlapped I/O, and I/O completion ports. Except that I only write overlapped I/O code about once every five years, which is just about long enough for me to forget how painful it was the last time. This endeavor was no exception.

    Four Ways to Monitor Files and Directories

    First, a brief overview of monitoring directories and files. In the beginning there was SHChangeNotifyRegister. It was implemented using Windows messages and so required a window handle. It was driven by notifications from the shell (Explorer), so your application was only notified about things that the shell cared about - which almost never aligned with what you cared about. It was useful for monitoring things that the user did in Explorer, but not much else.

    SHChangeNotifyRegister was fixed in Windows Vista so it could report all changes to all files, but it was too late - there are still several hundred million Windows XP users and that's not going to change any time soon.

    SHChangeNotifyRegister also had a performance problem, since it was based on Windows messages. If there were too many changes, your application would start receiving roll-up messages that just said "something changed" and you had to figure out for yourself what had really happened. Fine for some applications, rather painful for others.

    Windows 2000 brought two new interfaces, FindFirstChangeNotification and ReadDirectoryChangesW. FindFirstChangeNotification is fairly easy to use but doesn't give any information about what changed. Even so, it can be useful for applications such as fax servers and SMTP servers that can accept queue submissions by dropping a file in a directory. ReadDirectoryChangesW does tell you what changed and how, at the cost of additional complexity.

    Similar to SHChangeNotifyRegister, both of these new functions suffer from a performance problem. They can run significantly faster than shell notifications, but moving a thousand files from one directory to another will still cause you to lose some (or many) notifications. The exact cause of the missing notifications is complicated. Surprisingly, it apparently has little to do with how fast you process notifications.

    Note that FindFirstChangeNotification and ReadDirectoryChangesW are mutually exclusive. You would use one or the other, but not both.

    Windows XP brought the ultimate solution, the Change Journal, which could track in detail every single change, even if your software wasn't running. Great technology, but equally complicated to use.

    The fourth and final solution is to install a File System Filter, the technique used in the popular SysInternals FileMon tool. There is a sample of this in the Windows Driver Kit (WDK). However, this solution is essentially a device driver, so it can potentially cause system-wide stability problems if not implemented exactly correctly.

    For my needs, ReadDirectoryChangesW was a good balance of performance versus complexity.

    The Puzzle

    The biggest challenge to using ReadDirectoryChangesW is that there are several hundred possibilities for combinations of I/O mode, handle signaling, waiting methods, and threading models. Unless you're an expert on Win32 I/O, it's extremely unlikely that you'll get it right, even in the simplest of scenarios. (In the list below, when I say "call", I mean a call to ReadDirectoryChangesW.)

    A. First, here are the I/O modes:
    1. Blocking synchronous
    2. Signaled synchronous
    3. Overlapped asynchronous
    4. Completion Routine (aka Asynchronous Procedure Call or APC)
    B. When calling the WaitForXxx functions, you can:
    1. Wait on the directory handle.
    2. Wait on an event object in the OVERLAPPED structure.
    3. Wait on nothing (for APCs.)
    C. To handle notifications, you can use:
    1. Blocking
    2. WaitForSingleObject
    3. WaitForMultipleObjects
    4. WaitForMultipleObjectsEx
    5. MsgWaitForMultipleObjectsEx
    6. I/O Completion Ports
    D. For threading models, you can use:
    1. One call per worker thread.
    2. Multiple calls per worker thread.
    3. Multiple calls on the primary thread.
    4. Multiple threads for multiple calls. (I/O Completion Ports)
    Finally, when calling ReadDirectoryChangesW, you specify flags to choose what you want to monitor, including file creation, last modification date change, attribute changes, and other flags. You can use one flag per call and issue multiple calls, or you can use multiple flags in one call. Multiple flags in one call is always the right solution. If you think you need multiple calls with one flag each to make it easier to figure out what happened, then you need to read more about the data contained in the notification buffer returned by ReadDirectoryChangesW.
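For example, monitoring creations, renames/deletes, and content changes takes just one call with the filter flags ORed together. The constants below are the values from winnt.h, redefined here so the fragment compiles outside Windows:

```cpp
#include <cassert>
#include <cstdint>

// Values from winnt.h, redefined so this sketch compiles anywhere.
const std::uint32_t FILE_NOTIFY_CHANGE_FILE_NAME  = 0x00000001; // creates, deletes, renames
const std::uint32_t FILE_NOTIFY_CHANGE_DIR_NAME   = 0x00000002; // directory creates/deletes/renames
const std::uint32_t FILE_NOTIFY_CHANGE_LAST_WRITE = 0x00000010; // content written
const std::uint32_t FILE_NOTIFY_CHANGE_CREATION   = 0x00000040; // creation time changed

// One call with every flag you need, passed as dwNotifyFilter.
const std::uint32_t dwNotifyFilter =
      FILE_NOTIFY_CHANGE_FILE_NAME
    | FILE_NOTIFY_CHANGE_DIR_NAME
    | FILE_NOTIFY_CHANGE_LAST_WRITE
    | FILE_NOTIFY_CHANGE_CREATION;
```

Each FILE_NOTIFY_INFORMATION record's Action field then tells you which kind of change occurred, so separate calls per flag buy you nothing.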

    If your head is now swimming in information overload, you can easily see why so many people have trouble getting this right.

    Recommended Solutions

    So what's the right answer? Here's my opinion, depending on what's most important:

    Simplicity - A2C3D1 - Each call to ReadDirectoryChangesW runs in its own thread and sends the results to the primary thread with PostMessage. Most appropriate for GUI apps with minimal performance requirements. This is the strategy used in CDirectoryChangeWatcher on CodeProject. It is also the strategy used by Microsoft's FWATCH sample.

    Performance - A4C6D4 - The highest performance solution is to use I/O completion ports, but, as an aggressively multithreaded solution, it's also a very complex solution that should be confined to servers. It's unlikely to be necessary in any GUI application. If you aren't a multithreading expert, stay away from this strategy.

    Balanced - A4C5D3 - Do everything in one thread with Completion Routines. You can have as many outstanding calls to ReadDirectoryChangesW as you need. There are no handles to wait on, since Completion Routines are dispatched automatically. You embed the pointer to your object in the callback, so it's easy to keep callbacks matched up to their original data structure.

    Originally I had thought that GUI applications could use MsgWaitForMultipleObjectsEx to intermingle change notifications with Windows messages. This turns out not to work because dialog boxes have their own message loop that's not alertable, so a dialog box being displayed would prevent notifications from being processed. Another good idea steamrolled by reality.

    Wrong Techniques

    As I was researching this solution, I saw a lot of recommendations that ranged from dubious to wrong to really, really wrong. Here's some commentary on what I saw.

    If you are using the Simplicity solution above, don't use blocking calls, because the only way to cancel one is with the undocumented technique of closing the handle or the Vista-only CancelSynchronousIo. Instead, use the Signaled Synchronous I/O mode by waiting on the directory handle. Also, to terminate threads, don't use TerminateThread, because it doesn't clean up resources and can cause all sorts of problems. Instead, create a manual-reset event object that is used as the second handle in the call to WaitForMultipleObjects. When the event is set, exit the thread.

    If you have dozens or hundreds of directories to monitor, don't use the Simplicity solution; switch to the Balanced solution. Alternatively, monitor a common root directory and ignore the files you don't care about.

    If you have to monitor a whole drive, think twice (or three times) about the idea. You'll be notified about every single temporary file, every Internet cache file, every Application Data change - in short, you'll receive an enormous number of notifications that could slow down the entire system. If you need to monitor an entire drive, you should probably use the Change Journal instead, which will also let you track changes made while your app isn't running. Don't even think about monitoring the whole drive with FILE_NOTIFY_CHANGE_LAST_ACCESS.

    If you are using overlapped I/O without using an I/O completion port, don't wait on handles. Use Completion Routines instead. This removes the 64 handle limitation, allows the operating system to handle call dispatch, and allows you to embed a pointer to your object in the OVERLAPPED structure. My example in a moment will show all of this.

    If you are using worker threads, don't send results back to the primary thread with SendMessage.  Use PostMessage instead. SendMessage is synchronous and will not return if the primary thread is busy. This would defeat the purpose of using a worker thread in the first place.

    It's tempting to try and solve the issue of lost notifications by providing a huge buffer. However, this may not be the wisest course of action. For any given buffer size, a similarly-sized buffer has to be allocated from the kernel non-paged memory pool. If you allocate too many large buffers, this can lead to serious problems, including a Blue Screen of Death. Thanks to an anonymous contributor in the MSDN Community Content.

    Jump to Part 2 of this article.

    Go to the GitHub repo for this article or just download the sample code.

    Monday, May 17, 2010

    Using MsgWaitForMultipleObjects in MFC

    One of the problems that I didn't solve in my Multithreading book was how to use MsgWaitForMultipleObjectsEx with MFC. I have always felt guilty about this because it was an important problem, but I simply ran out of time to do the implementation. Recently I finally had a need to solve it. MFC has come a long way since 1996 and it's clear that this problem was planned for. With MFC in Visual Studio 2008 and 2010, I was able to solve the problem in minutes instead of days.

    In short, the solution is to replace MFC's call to GetMessage with your own call to MsgWaitForMultipleObjectsEx. This will allow you to dispatch messages, handle signaled objects, and dispatch Completion Routines. Here's the code, which goes in your MFC App object:

    // virtual
    BOOL CMyApp::PumpMessage()
    {
        HANDLE hEvent = ...;   // whatever handle you need to wait on
        HANDLE handles[] = { hEvent };
        DWORD const res = ::MsgWaitForMultipleObjectsEx(
            _countof(handles), handles, INFINITE,
            QS_ALLINPUT, MWMO_INPUTAVAILABLE | MWMO_ALERTABLE);
        switch (res)
        {
            case WAIT_OBJECT_0 + 0:
                // the event object was signaled...
                return TRUE;
            case WAIT_OBJECT_0 + _countof(handles):
                // a message is available in the message queue
                return __super::PumpMessage();
            case WAIT_IO_COMPLETION:
                // a completion routine was called; nothing more to do
                break;
        }
        return TRUE;
    }

    There are several obscure points in this code worth mentioning. Getting any of these wrong will cause it to break:
    • The MWMO_INPUTAVAILABLE is required to solve race conditions with the message queue.
    • The MWMO_ALERTABLE is required in order for Completion Routines to be called.
    • WAIT_IO_COMPLETION does not require any action on your part. It just indicates that a completion routine was called.
    • The handle array must NOT contain the directory handle. Waiting on the directory handle will prevent your completion routine from being called.
    • MsgWaitForMultipleObjectsEx indicates that a message is available (as opposed to a signaled object) by returning WAIT_OBJECT_0 plus the handle count.
    • The call to __super::PumpMessage() is for MFC when this is running in your application object. Outside of MFC, you should replace it with your own message loop.
    Note that this strategy breaks whenever a dialog box is displayed because dialog boxes use their own message loop.

    Wednesday, May 12, 2010

    Visual Studio 2010 Tab Key Not Working

    Several days ago, my tab key stopped working when editing C++ code in Visual Studio 2010. Everything else worked fine, even Ctrl-Tab, but pressing Tab on the keyboard would not insert a tab into the file. I tried disabling add-ins. No change. I reset the keyboard in Tools / Options / Environment / Keyboard. Still no change. I tried running devenv /safemode. That didn't work either.

    Finally I looked at the other command-line switches for devenv.exe, where I saw devenv /ResetSettings. That fixed the problem.