This is Part 2 of 2. Part 1 describes the theory and this part describes the implementation.
Go to the GitHub repo for this article or just download the sample code.
Getting a Handle to the Directory
Now we'll look at the details of implementing the Balanced solution described in Part 1. When reading the declaration for ReadDirectoryChangesW, you'll notice that the first parameter is a directory, and it's passed as a HANDLE. Did you know that you can get a handle to a directory? There is no OpenDirectory function, and the CreateDirectory function doesn't return a handle. The documentation for the first parameter says, “This directory must be opened with the FILE_LIST_DIRECTORY access right.” Later, the Remarks section says, “To obtain a handle to a directory, use the CreateFile function with the FILE_FLAG_BACKUP_SEMANTICS flag.” The actual code looks like this:
HANDLE hDir = ::CreateFile(
    strDirectory,                   // pointer to the file name
    FILE_LIST_DIRECTORY,            // access (read/write) mode
    FILE_SHARE_READ                 // share mode
        | FILE_SHARE_WRITE
        | FILE_SHARE_DELETE,
    NULL,                           // security descriptor
    OPEN_EXISTING,                  // how to create
    FILE_FLAG_BACKUP_SEMANTICS      // file attributes
        | FILE_FLAG_OVERLAPPED,
    NULL);                          // file with attributes to copy
The second parameter, FILE_LIST_DIRECTORY, isn't even mentioned in the CreateFile() documentation. It's discussed in File Security and Access Rights, but not in any useful way.
Similarly, FILE_FLAG_BACKUP_SEMANTICS has this interesting note, "Appropriate security checks still apply when this flag is used without SE_BACKUP_NAME and SE_RESTORE_NAME privileges." In past dealings with this flag, it had been my impression that Administrator privileges were required, and the note seems to bear this out. However, attempting to enable these privileges on a Windows Vista system by adjusting the security token does not work if UAC is enabled. I'm not sure if the requirements have changed or if the documentation is simply ambiguous. Others are similarly confused.
The sharing mode also has pitfalls. I saw a few samples that left out FILE_SHARE_DELETE. You'd think that this would be fine since you do not expect the directory to be deleted. However, leaving out that permission prevents other processes from renaming or deleting files in that directory. Not a good result.
Another potential pitfall of this function is that the referenced directory itself is now “in use” and so can't be deleted. To monitor files in a directory and still allow the directory to be deleted, you would have to monitor the parent directory and its children.
Calling ReadDirectoryChangesW
The actual call to ReadDirectoryChangesW is the simplest part of the whole operation. Assuming you are using completion routines, the only tricky part is that the buffer must be DWORD-aligned.

The OVERLAPPED structure is supplied to indicate an overlapped operation, but none of the fields are actually used by ReadDirectoryChangesW. However, a little-known secret of using Completion Routines is that you can supply a pointer to your own C++ object. How does this work? The documentation says, "The hEvent member of the OVERLAPPED structure is not used by the system, so you can use it yourself." This means that you can put in a pointer to your object. You'll see this in my sample code below:
void CChangeHandler::BeginRead()
{
    ::ZeroMemory(&m_Overlapped, sizeof(m_Overlapped));
    m_Overlapped.hEvent = this;

    DWORD dwBytes = 0;
    BOOL success = ::ReadDirectoryChangesW(
        m_hDirectory,
        &m_Buffer[0],
        m_Buffer.size(),
        FALSE,                          // monitor children?
        FILE_NOTIFY_CHANGE_LAST_WRITE
            | FILE_NOTIFY_CHANGE_CREATION
            | FILE_NOTIFY_CHANGE_FILE_NAME,
        &dwBytes,
        &m_Overlapped,
        &NotificationCompletion);
}
Since this call uses overlapped I/O, m_Buffer won't be filled in until the completion routine is called.
Dispatching Completion Routines
For the Balanced solution we've been discussing, there are only two ways to wait for Completion Routines to be called. If everything is being dispatched using Completion Routines, then SleepEx is all you need. If you need to wait on handles as well as to dispatch Completion Routines, then you want WaitForMultipleObjectsEx. The Ex versions of these functions are required to put the thread in an “alertable” state, which means that completion routines will be called.

To terminate a thread that's waiting using SleepEx, you can write a Completion Routine that sets a flag checked by the SleepEx loop, causing it to exit. To call that Completion Routine, use QueueUserAPC, which allows one thread to call a completion routine in another thread.
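To make the dispatching loop concrete, here is a minimal sketch of a worker thread that does nothing but dispatch completion routines until it is told to stop. The names (s_bTerminate, TerminationProc, WorkerThreadProc) are mine, not from the sample project; treat this as an illustration of the SleepEx/QueueUserAPC pattern rather than the actual implementation.

// Illustrative only: a worker thread that dispatches completion routines until told to stop.
static bool s_bTerminate = false;

static void CALLBACK TerminationProc(ULONG_PTR /*param*/)
{
    s_bTerminate = true;    // runs on the worker thread, so no locking is needed
}

static unsigned __stdcall WorkerThreadProc(void* /*arg*/)
{
    while (!s_bTerminate)
        ::SleepEx(INFINITE, TRUE);    // TRUE = alertable, so queued APCs and completion routines run here
    return 0;
}

// From another thread, given the worker's thread handle:
//     ::QueueUserAPC(TerminationProc, hWorkerThread, 0);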
Handling the Notifications
The notification routine should be easy. Just read the data and save it, right? Wrong. Writing the Completion Routine also has its complexities.

First, you need to check for and handle the error code ERROR_OPERATION_ABORTED, which means that CancelIo has been called, this is the final notification, and you should clean up appropriately. I describe CancelIo in more detail in the next section. In my implementation, I used InterlockedDecrement to decrease cOutstandingCalls, which tracks my count of active calls, then I returned. My objects were all managed by the MFC mainframe and so did not need to be deleted by the Completion Routine itself.
You can receive multiple notifications in a single call. Make sure you walk the buffer and use each entry's NextEntryOffset field to skip forward to the next entry; an offset of zero marks the last entry.
ReadDirectoryChangesW is a "W" routine, so it does everything in Unicode. There's no ANSI version of this routine. Therefore, the data buffer is also Unicode. The string is not NULL-terminated, so you can't just use wcscpy. If you are using the ATL or MFC CString class, you can instantiate a wide CString from a raw string with a given number of characters like this:
FILE_NOTIFY_INFORMATION* fni = (FILE_NOTIFY_INFORMATION*)buf;
CStringW wstr(fni->FileName, fni->FileNameLength / sizeof(wchar_t));
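Putting those pieces together, a completion routine might look roughly like the sketch below. This is not the sample project's code: CChangeHandler, m_Buffer and BeginRead are stand-ins for whatever your own class provides, while the callback signature and FILE_NOTIFY_INFORMATION are the documented Win32 types.

VOID CALLBACK CChangeHandler::NotificationCompletion(
    DWORD dwErrorCode, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped)
{
    // Recover the object pointer smuggled through hEvent, as described above.
    CChangeHandler* pThis = reinterpret_cast<CChangeHandler*>(lpOverlapped->hEvent);

    if (dwErrorCode == ERROR_OPERATION_ABORTED)
    {
        // CancelIo was called; this is the final notification for this request.
        return;
    }

    if (dwNumberOfBytesTransfered != 0)
    {
        BYTE* pEntry = &pThis->m_Buffer[0];
        for (;;)
        {
            FILE_NOTIFY_INFORMATION& fni =
                *reinterpret_cast<FILE_NOTIFY_INFORMATION*>(pEntry);

            // The filename is Unicode and NOT null-terminated.
            CStringW wstrFilename(fni.FileName, fni.FileNameLength / sizeof(wchar_t));
            // ... queue { fni.Action, wstrFilename } somewhere for processing ...

            if (fni.NextEntryOffset == 0)
                break;                        // last entry in this batch
            pEntry += fni.NextEntryOffset;    // offset is relative to the current entry
        }
    }
    // A zero byte count usually means the buffer overflowed; rescan the directory if that matters to you.

    pThis->BeginRead();    // reissue the request (see the note below about buffers)
}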
Finally, you have to reissue the call to ReadDirectoryChangesW before you exit the completion routine. You can reuse the same OVERLAPPED structure. The documentation specifically says that the OVERLAPPED structure is not accessed again by Windows after the completion routine is called. However, if you reissue the call before you have finished processing the data, you have to make sure that you use a different buffer than your current call or you will end up with a race condition.
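One common way to satisfy that requirement is to copy the completed data to a second buffer, reissue the read immediately on the original buffer, and then parse the copy at leisure. A sketch, assuming m_Buffer and m_BackupBuffer are equal-sized std::vector<BYTE> members and ParseBuffer is your own parsing helper (none of these names come from the sample code):

void CChangeHandler::ProcessAndRestart(DWORD dwBytes)
{
    memcpy(&m_BackupBuffer[0], &m_Buffer[0], dwBytes);   // preserve the completed data
    BeginRead();                                         // m_Buffer is immediately handed back to Windows
    ParseBuffer(&m_BackupBuffer[0], dwBytes);            // walk the copy without racing the new read
}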
One point that isn't clear to me is what happens to change notifications in between the time that your completion routine is called and the time you issue the new call to ReadDirectoryChangesW.
I'll also reiterate that you can still "lose" notifications if many files are changed in a short period of time. According to the documentation, if the buffer overflows, the entire contents of the buffer are discarded and the lpBytesReturned parameter contains zero. However, it's not clear to me whether the completion routine will be called with dwNumberOfBytesTransfered equal to zero, and/or whether an error code will be supplied in dwErrorCode.
There are some humorous examples of people trying (and failing) to write the completion routine correctly. My favorite is found on stackoverflow.com, where, after insulting the person asking for help, he presents his example of how to write the routine and concludes with, "It's not like this stuff is difficult." His code is missing error handling, he doesn't handle ERROR_OPERATION_ABORTED, he doesn't handle buffer overflow, and he doesn't reissue the call to ReadDirectoryChangesW. I guess it's not difficult when you just ignore all of the difficult stuff.
Using the Notifications
Once you receive and parse a notification, you need to figure out how to handle it. This isn't always easy. For one thing, you will often receive multiple duplicate notifications about changes, particularly when a long file is being written by its parent process. If you need the file to be complete, you should process each file after a timeout period has passed with no further updates. [Update: See the comment below by Wally The Walrus for details on the timeout.]

An article by Eric Gunnerson points out that the documentation for FILE_NOTIFY_INFORMATION contains a critical comment: If there is both a short and long name for the file, the function will return one of these names, but it is unspecified which one. Most of the time it's easy to convert back and forth between short and long filenames, but that's not possible if a file has been deleted. Therefore, if you are keeping a list of tracked files, you should probably track both the short and long filename. I was unable to reproduce this behavior on Windows Vista, but I only tried on one computer.
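One simple way to implement that timeout is to record when each file was last mentioned in a notification and only process it once it has been quiet for a while. A sketch, where the names and the 30-second figure are my own choices (see Wally's comment below for his timing experience):

#include <map>
#include <string>

void ProcessFile(const std::wstring& path);                 // hypothetical: your real handler

static std::map<std::wstring, ULONGLONG> g_lastTouched;     // path -> tick of last notification
static const ULONGLONG kSettleMs = 30 * 1000;

void OnNotification(const std::wstring& path)
{
    g_lastTouched[path] = ::GetTickCount64();    // (re)start this file's quiet timer
}

void OnTimer()    // called periodically, e.g. once a second
{
    ULONGLONG now = ::GetTickCount64();
    for (auto it = g_lastTouched.begin(); it != g_lastTouched.end(); )
    {
        if (now - it->second >= kSettleMs)
        {
            ProcessFile(it->first);              // the file has been quiet long enough
            it = g_lastTouched.erase(it);
        }
        else
            ++it;
    }
}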
You will also receive some notifications that you may not expect. For example, even if you set the parameters of ReadDirectoryChangesW so you aren't notified about child directories, you will still get notifications about the child directories themselves. Let's assume you have two directories, C:\A and C:\A\B. You move the file info.txt from the first directory to the second. You will receive FILE_ACTION_REMOVED for the file C:\A\info.txt and you will receive FILE_ACTION_MODIFIED for the directory C:\A\B. You will not receive any notifications about C:\A\B\info.txt.
There are some other surprises. Have you ever used hard links in NTFS? Hard links allow you to have multiple filenames that all reference the same physical file. If you have one reference in a monitored directory and a second reference in a second directory, you can edit the file in the second directory and a notification will be generated in the first directory. It's like magic.
On the other hand, if you are using symbolic links, which were introduced in Windows Vista, then no notification will be generated for the linked file. This makes sense when you think it through, but you have to be aware of these various possibilities.
There's yet a third possibility, which is junction points linking one partition to another. In that case, monitoring child directories won't monitor files in the linked partition. Again, this behavior makes sense, but it can be baffling when it's happening at a customer site and no notifications are being generated.
Shutting Down
I didn't find any articles or code (even in open source production code) that properly cleaned up the overlapped call. The documentation on MSDN for canceling overlapped I/O says to call CancelIo. That's easy. However, my application then crashed when exiting. The call stack showed that one of my third party libraries was putting the thread in an alertable state (which meant that Completion Routines could be called) and that my Completion Routine was being called even after I had called CancelIo, closed the handle, and deleted the OVERLAPPED structure.

As I was searching various web pages with sample code that called CancelIo, I found this page that included the code below:
CancelIo(pMonitor->hDir);

if (!HasOverlappedIoCompleted(&pMonitor->ol))
{
    SleepEx(5, TRUE);
}

CloseHandle(pMonitor->ol.hEvent);
CloseHandle(pMonitor->hDir);
This looked promising. I faithfully copied it into my app. No effect.
I re-read the documentation for CancelIo, which makes the statement that "All I/O operations that are canceled complete with the error ERROR_OPERATION_ABORTED, and all completion notifications for the I/O operations occur normally." Decoded, this means that all Completion Routines will be called at least one final time after CancelIo is called. The call to SleepEx should have allowed that, but it wasn't happening. Eventually I determined that waiting for 5 milliseconds was simply too short. Maybe changing the "if" to a "while" would have solved the problem, but I chose to approach the problem differently since this solution requires polling every existing overlapped structure.
My final solution was to track the number of outstanding requests and to continue calling SleepEx until the count reached zero. In the sample code, the shutdown sequence works as follows:
- The application calls CReadDirectoryChanges::Terminate (or simply allows the object to destruct.)
- Terminate uses QueueUserAPC to send a message to CReadChangesServer in the worker thread, telling it to terminate.
- CReadChangesServer::RequestTermination sets m_bTerminate to true and delegates the call to the CReadChangesRequest objects, each of which calls CancelIo on its directory handle and closes the directory handle.
- Control is returned to CReadChangesServer::Run function. Note that nothing has actually terminated yet.
void Run()
{
    while (m_nOutstandingRequests || !m_bTerminate)
    {
        DWORD rc = ::SleepEx(INFINITE, true);
    }
}
- CancelIo causes Windows to automatically call the Completion Routine for each CReadChangesRequest overlapped request. For each call, dwErrorCode is set to ERROR_OPERATION_ABORTED.
- The Completion Routine deletes the CReadChangesRequest object, decrements m_nOutstandingRequests, and returns without queuing a new request.
- SleepEx returns due to one or more APCs completing. m_nOutstandingRequests is now zero and m_bTerminate is true, so the function exits and the thread terminates cleanly.
Network Drives
ReadDirectoryChangesW works with network drives, but only if the remote server supports the functionality. Drives shared from other Windows-based computers will correctly generate notifications. Samba servers may or may not generate notifications, depending on whether the underlying operating system supports the functionality. Network Attached Storage (NAS) devices usually run Linux, so they won't support notifications. High-end SANs are anybody's guess.

ReadDirectoryChangesW fails with ERROR_INVALID_PARAMETER when the buffer length is greater than 64 KB and the application is monitoring a directory over the network. This is due to a packet size limitation in the underlying file sharing protocols.
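If your code may be pointed at either local or remote directories, one defensive approach is to pick the buffer size based on whether the path looks remote. A sketch, using PathIsNetworkPath from Shlwapi as a best-effort test; the specific sizes are my own choices, not part of the API:

#include <shlwapi.h>    // PathIsNetworkPathW; link with Shlwapi.lib
#include <vector>

DWORD cbBuffer = 256 * 1024;                  // generous buffer for local volumes
if (::PathIsNetworkPathW(strDirectory))       // UNC path or a drive mapped to a network share
    cbBuffer = 60 * 1024;                     // stay comfortably below the ~64 KB network limit
std::vector<BYTE> buffer(cbBuffer);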
Summary
If you've made it this far in the article, I applaud your can-do attitude. I hope I've given you a clear picture of the challenges of using ReadDirectoryChangesW and why you should be dubious of any sample code you see for using the function. Careful testing is critical, including performance testing.

Go to the GitHub repo for this article or just download the sample code.
Thanks for the info. Indeed a very comprehensive description. One thing I was curious about is your use of completion routines. I have used ReadDirectoryChangesW without completion routines, and it seems that using them complicates the code, and requires you to handle more edge cases, like the situation with CancelIo. Have you tried using ReadDirectoryChangesW without completion routines? Are there any benefits to using them?
Hi Ben,
The only advantage to using ReadDirectoryChangesW without completion routines is simplicity. There are several downsides.
The biggest downside is that you'll need one thread per call to ReadDirectoryChangesW, which equates to one thread per directory. This is fine if you have one or two directories to monitor, but bad if you have a lot of directories. Threads are relatively lightweight (compared to processes) but they still have a significant amount of overhead for stack space, kernel structures, etc.
With completion routines, you can monitor as many directories as you want with a single thread. That's what I show in the sample code.
Also, if you are waiting on handles, you need to use one of the WaitForMultipleObjects functions. These functions are limited to a maximum of 64 handles. Trying to monitor an arbitrary number of directories with this strategy would be complicated because you'd have to partition calls out to multiple threads with no more than 64 per thread. Ugh.
The use of completion routines also makes it much simpler to do interthread communication. The normal strategy for interthread communication is to allocate a structure, put the structure on a queue, update the interlock mechanism (typically a semaphore), and then undo all of this on the receiving thread. With completion routines, the calling thread makes a single call to QueueUserAPC and the operating system does everything else. The sample code uses this technique to shut down the thread.
Finally, Microsoft documentation discourages waiting on file handles. There are many different things that can signal a file handle, not all of which you are interested in. That's why you want to use the event object that's embedded in the OVERLAPPED structure - when it's signaled, it's guaranteed to be because of the call to ReadDirectoryChangesW.
In any case, I have embedded the complexity of the Balanced solution into a single interface in the sample code for this article. That sample code is actually being used in shipping production code.
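To illustrate the QueueUserAPC style of interthread messaging Jim describes, here is a rough sketch of handing a heap-allocated request to the worker thread. The struct and function names are illustrative, not the sample code's actual interface:

#include <string>

struct AddDirectoryRequest            // hypothetical message payload
{
    std::wstring directory;
    DWORD        notifyFilter;
};

void CALLBACK AddDirectoryProc(ULONG_PTR param)    // runs on the worker thread
{
    AddDirectoryRequest* req = reinterpret_cast<AddDirectoryRequest*>(param);
    // ... open the directory handle and issue ReadDirectoryChangesW here ...
    delete req;                                    // the receiving thread owns the allocation
}

// On the calling thread:
void PostAddDirectory(HANDLE hWorkerThread, const std::wstring& dir, DWORD filter)
{
    AddDirectoryRequest* req = new AddDirectoryRequest{ dir, filter };
    if (!::QueueUserAPC(AddDirectoryProc, hWorkerThread, reinterpret_cast<ULONG_PTR>(req)))
        delete req;                                // queueing failed; don't leak the request
}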
Thanks for the response, Jim. And thanks again for the great article.
If you feel like writing more on this topic, I would really appreciate a high-level description of how your code works, maybe with a block diagram. I know you talk about it in various places in your article, but it's a little hard to see how the whole thing is tied together.
Hi Jim,
Thanks for the excellent description.
Could it be that the filter criteria (m_dwFlags in CReadChangesRequest) is not evaluated?
See you,
Martin
Martin,
I'm not sure what your question is in reference to - the sample code? Something in the blog?
Hi Jim,
Sorry for my unclear description.
I meant m_dwFlags in CReadChangesRequest in the sample code.
Martin
Hi Jim,
Several years ago I wrote a File System Listener for my company based on ReadDirectoryChangesW with a Completion Routine. I used one worker thread to listen to the whole directory tree, filtering out irrelevant notifications.
After handling the current notification buffer and posting the information to the main thread, my completion routine reissues the ReadDirectoryChangesW call with the FILE_NOTIFY_CHANGE_FILE_NAME, FILE_NOTIFY_CHANGE_DIR_NAME and FILE_NOTIFY_CHANGE_LAST_WRITE flags.
It all went OK until recently, when our customers began to complain about how the Listener works with network drives. After some investigation I found that the reissued call to ReadDirectoryChangesW sometimes fails with the ERROR_TOO_MANY_CMDS (56) error. This error is described as "The network BIOS command limit has been reached".
Are you familiar with the problem, and can you give me advice on what to do about it?
Thanks,
Ilia Faingold,
Iliaf@Cimatron.co.il
Ilia,
I haven't seen that problem myself, although I've spent very little time trying to monitor network drives. I only found one other person asking about this error, and that was in 2003. He was seeing this failure after seven calls to FindFirstChangeNotification. No one had any responses.
I'd recommend you open an incident with Microsoft Professional Advisory Services. They have the ability to go back to the Windows Kernel team to ask questions.
Jim,
Thank you for the response. I would like to add just one detail.
I am not surprised by the existence of some network limitation. I am surprised that the limit is reached even though my program produces (as far as I understand the issue) only one monitoring request at a time.
My Completion routine does not call directly to ReadDirectoryChangesW. Instead, before exiting, it queues an APC, which does the call. Thus, a new call to ReadDirectoryChangesW is done after the previous one should be completely finished.
So, a limit on concurrent network requests could be reached in my case only if this limit were equal to zero. Or am I missing something?
Thanks,
Ilia.
iliaf@cimatron.co.il
Interesting. I'd do two things. First, log every call you make and every call to the completion routine, along with the error code for each. Include a counter that counts up for each call to ReadDirectoryChangesW and down for each call to the APC.
Second, what servers are the customers using? Are they Windows Servers, 3rd party SANs, Samba servers?
OK, I created a request counter for each monitored drive. As expected, it takes only two values: 0 and 1.
I even succeeded in reproducing the bug under the debugger. I created a network drive mapped to a shared folder on my colleague's computer running XP. My own computer runs W7.
When the bug occurred, the respective counter was equal to 1, and only two drives were in the monitoring list: my local drive C: and the network drive K:.
I also made a workaround. When the call to ReadDirectoryChangesW fails with an error other than ERROR_INVALID_FUNCTION, I re-issue the call (by queuing the respective APC). It helps: after about 600 failures (which take 2-3 sec.), the next call succeeds.
Can you understand this? I can't.
Ilia.
A small final remark.
I can imagine only one explanation for the ERROR_TOO_MANY_CMDS error: namely, that it has no relation to the number of simultaneous requests to the ReadDirectoryChangesW API.
In our system the error occurs when the user saves a group of documents to a network location at once. Saving one document internally includes copying a file from the local drive to the network destination, as well as creating, removing and renaming the network file. Thus, saving a group of documents can easily lead to several dozen network file operations in a very small time interval.
It seems that the ERROR_TOO_MANY_CMDS error relates somehow to the number of these file operations, which ReadDirectoryChangesW must report in one call. If this is so, the only solution is to look for a way of increasing the default limit, along with the workaround I mentioned in the previous post.
Ilia.
I have implemented this successfully using WaitForMultipleObjectsEx and an overlapped completion routine, but I have a very simple question (I apologize if I overlooked this already being answered).
If you are watching multiple directories, and get a file notification, how do you tell which directory it was for? I only seem to receive relative filenames, so if I am watching c:\a and c:\b, the notification "file1.cpp" could be ambiguous.
I feel like I am missing something very simple, but I have not found it so far.
The answer is in the article but it's not obvious. Use the hEvent member in OVERLAPPED to keep a handle to your own tracking object.
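A tiny sketch of what that looks like inside the completion routine's parsing loop: recover your object from hEvent and prepend its directory. CWatch and m_strDirectory are assumed names, not the sample's.

CWatch* pWatch = reinterpret_cast<CWatch*>(lpOverlapped->hEvent);
CStringW relative(fni.FileName, fni.FileNameLength / sizeof(wchar_t));
CStringW fullPath = pWatch->m_strDirectory + L"\\" + relative;    // e.g. c:\a\file1.cpp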
Great article Jim - got my DirWatch working just great - thanks a lot!
Probably should point out that the completion routine will not get called at all with WaitForSingleObject - it has to be WaitForSingleObjectEx (with the alertable flag) or WaitForMultipleObjectsEx. Helps n00bs like me who like to roll their own :)
Hi Jim,
Very insightful article indeed.
I am facing a problem similar to the one Ilia faced. I am not using a Completion Routine as I need to watch only a top-level directory.
My code goes like this:
(This is in a thread other than Main thread)
//OVERLAPPED structure contains an event in hEvent
ReadDirectoryChanges(); //initial call
while (1)
{
//one of the handle below is the handle of event in OVERLAPPED, other being used to signal the thread to stop
dwRet = WaitForMultipleObjects(2,pHandleArray, FALSE, 100);
if(dwRet == WAIT_OBJECT_0+2) // this was handle for OVERLAPPED event
{
BOOL bRet = GetQueuedCompletionStatus(); // last parameter is 0, don't wait
if(bRet)// successfully dequeued an IO packet
{
HandleNotification();
//Reissue the command with the same buffer
BOOL bSuc = ReadDirectoryChangesW();
// here bSuc is FALSE and GetLastError says it is 56 "The network BIOS command limit has been reached. "
}
}
}
Since, I only reissue a ReadDirectoryChangesW command after successfully returning from GetQueuedCompletionStatus(), that means previous call to ReadDirectoryChangesW has finished. So, there is only one request pending at any moment.
I am getting the error only when there is a sudden burst of file activities (adding/deleting/updating). After reading the documentation I thought of getting the buffer overflow situation, but stuck with this problem.
Looking for some help.
-Anil Padia
Anil,
It looks like you have the exact same problem as Ilia, and we never found a solution to that problem. My recommendation is that you open an incident with Microsoft Professional Advisory Services:
http://support.microsoft.com/gp/advisoryservice#tab0
If you want to detect if a file is added to a watched directory and then read the contents of the file to determine what type of notification to send back to the client, how can you know when Windows is done processing the file? Is it after FILE_ACTION_ADDED?
Based on your description above, I have to assume you cannot always predict how many times your completion function will be called when a set of files is added or deleted from a watched directory?
Bryan,
Your assumption is correct. Your completion function will be called multiple times. However, you need to pay attention to FILE_ACTION_MODIFIED, not FILE_ACTION_ADDED. After you receive FILE_ACTION_ADDED, the file is often still empty.
My recommendation for handling this is in the article, where I said, "If you need the file to be complete, you should process each file after a timeout period has passed with no further updates."
There is no perfect solution to this because the behavior of the process writing the file is unpredictable (in the general case.) Some people have tried things like opening the file exclusive, which will (usually) fail if the other process has it open, but this can cause nasty things to happen if the process closes a file and reopens it.
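A sketch of that exclusive-open probe, with the caveats Jim mentions (it is a heuristic, not a guarantee):

bool LooksComplete(LPCWSTR path)
{
    // Ask for the file with no sharing; if another process still has it open,
    // this usually fails with ERROR_SHARING_VIOLATION.
    HANDLE h = ::CreateFileW(path, GENERIC_READ,
                             0,                  // no sharing = demand exclusive access
                             NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return false;                            // probably still being written (or already gone)
    ::CloseHandle(h);
    return true;
}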
Jim, since you seem to be in the know of these things, do you know if this (Notification occurs before file is completely written) is true for FindFirstChangeNotification as well?
Also, if I were to monitor FILE_ACTION_MODIFIED to determine when a file is completely written before processing for client notification, it seems like I may have a problem telling the difference between a new file that was added and then modified and a file that already exists and was modified, unless I kept a state element in the monitoring code for each file (possibly in your completion routine)?
I appreciate your knowledgeable feedback; as you said, this stuff is not easy.
You are looking for a solution where none exists. Some applications (like Outlook) will keep a file open for weeks.
There are only two ways to solve this cleanly. First, you know the application will write the file, close it, and be done with it. In this case you can use exclusive locking to check for this. The second case is you just put up with the fact that changes happen all the time and either snapshot the volume (*very* expensive, potentially taking minutes in some cases) or you open the file with DENY_NONE for sharing and grab whatever data is there. Both of those solutions will still fail in some cases.
And yes, you have to maintain the state yourself.
You may be better off using the change journal.
Fundamentally, this problem is impossible to solve in the general case. Without knowing what you are trying to do, why, the reliability level, the expected writers, and a few other things, there is no "right" answer. I can work through all of that for a consulting fee, but not on this blog.
Sorry if I got too detailed in my questioning. I am not looking to have you solve my problem on this forum. It is just so rare to find someone who knows anything about these APIs at all, and the Microsoft documentation often leaves a lot to be desired. As a clarification to my previous question, the software for the app doing the monitoring and the apps doing the writing are both written by the same team, but they will definitely exist in separate processes.
I guess my main interest was trying to better understand the detailed interplay of how the OS works with the various monitoring APIs, as you have worked on describing in your blog.
Anyway, I appreciate your help and I apologize again for getting too specific in my requests.
The simplest solution for what you are doing is for the writer to write the file to another directory, then when the file is complete, use MoveFile to put it into the monitored directory. This gets around the problem completely. The reader process can then use the simpler FindFirstChangeNotification call to monitor the shared folder for notifications.
Make sure that the intermediate directory is on the same physical drive as the destination directory, otherwise you can have a race condition. Alternatively, use MoveFileTransacted (but it's not supported in XP or Win2K3 Server.)
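The writer side of that pattern is only a couple of lines; the paths below are placeholders:

// ... write and close D:\staging\report.tmp somewhere outside the watched folder ...
if (!::MoveFileW(L"D:\\staging\\report.tmp", L"D:\\watched\\report.csv"))
{
    DWORD err = ::GetLastError();    // e.g. the destination already exists; handle as appropriate
}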
Excellent article. I had gone through most of the pain you described before I stumbled on this, but better late than never.
I'm monitoring a directory using callbacks on a single alertable thread, which works fine; my main problem has been the multiple duplicate notifications, even for single-character changes in a file.
I was hoping that someone knew what caused it or had a workaround, but no joy yet.
Regarding shutdown, did you not consider CancelIoEx? CancelIo only cancels operations started in the calling thread; CancelIoEx cancels all outstanding IO on the handle (in the process) - may help you in your shutdown - see below
//Make non-blocking call to see if IO pending
BOOL res=GetOverlappedResult(hDir_,&overlapped_,&byteCount,FALSE);
if(!res && ERROR_IO_INCOMPLETE==GetLastError()) {
//IO pending, cancel and make blocking call to GetOverlappedResult
CancelIoEx(hDir_,NULL);
res=GetOverlappedResult(hDir_,&overlapped_,&byteCount,TRUE);
}
Wonderful article Jim!
ReplyDeleteRegarding reliability, have you tried the method descibed in http://social.msdn.microsoft.com/forums/en-US/netfxbcl/thread/4465cafb-f4ed-434f-89d8-c85ced6ffaa8/
with FlushFileBuffers? Does it give 100% reliability? Do you think if I wrote a FileSystemFilter driver, I would get 100%? I am interested only in one folder, but both change journals and drivers would only let me monitor volumes, and I have to ignore the others, which is not ideal when I am interested only in one folder.
Would highly appreciate your quick response
Thanks a lot!
Journaling is the only way to get 100% accuracy. As noted in your link, the documentation specifically says you can lose information with this call. Any hack to try and fix this could break with different drivers, different versions of the OS, different service packs, etc.
FlushFileBuffers is not 100% guaranteed because some hard drives lie for performance reasons. To get a 100% guarantee, you need hardware and software that's certified to handle flushing properly, including the controller and the hard drive's firmware (even the version of the firmware!) You would buy such hardware when you are building a SQL Server, because it's required for ACID compliance.
Thanks for your response Jim! If you have tried or come across good links for a sample implementation of journaling, please post them.
One more question: whenever I modify a file (e.g., open Notepad and edit a txt file), I receive two modified notifications. Are these duplicate notifications expected?
Thanks!
Yes, multiple notifications are expected. Remember that what you see as "one" modification is really numerous calls to the Win32 API. Since there is no transaction to bucket all of the calls together, the filesystem is forced to send out multiple notifications for various kernel calls. (The way Windows buffers file changes prevents a notification storm if the app is writing (for example) one character at a time.)
I've never looked into the journaling API, so I can't help you there.
Got it, THANKS again Jim!
I looked into journaling as well; it does not quite suit my purpose of real-time monitoring for a particular folder alone. Right now I am wondering if FindFirstChangeNotification is 100% reliable, giving notifications for all files even if I copy 10000 files into the folder. If so, I can try some logic with that and enumerating the folder. Getting 100% reliability is most critical for my purposes :( If you or anyone else reading this post knows of a way, through logic too, not necessarily APIs, please post it - Thanks a lot in advance!!
Great article! I'm working on an implementation of this for a commercial application. We suffer from the 'monitoring many different directories' issue, so your solution seems ideal. During a conversation with a co-worker, he mentioned BindIoCompletionCallback() function as a way of simplifying the code. As always, the documentation is not very clear, so I'm not sure if there is a limitation on using a thread pool which I'm not getting.
Thoughts?
Please read Part I of the article, where I go into completion ports in some detail. I aggressively disagree with your co-worker. Taking a single threaded problem and making it multithreaded increases the complexity by an order of magnitude. Remember, the first rule is "optimize last." If performance problems show that you need to process notifications faster, then you can look at completion ports.
On the other hand, if your application already uses completion ports and you have already gone through the pain and agony of making them production-ready, then integrating change notifications into your existing architecture is probably the right thing to do.
Hi Jim
Just a few comments, having written an application using ReadDirectoryChangesW and had it on the market for a couple of years.
1. Monitoring SAMBA servers (eg NAS) will actually work in many cases. The more modern Linux operating systems upon which these are based, and the more modern versions of SAMBA, will all support notifications. It can be hit and miss. I have found a number of NAS boxes where it works OK.
2. I have found that the buffer which is populated needs to be declared as a DWORD array (this gets the alignment right) and then the size needs to be < 16384 DWORDs. It can be larger - but this breaks for network drives. Making it about 16380 DWORDs seems to be the sweet spot for local and network drives.
3. Sometimes ReadDirectoryChangesW will (when not using overlap or completion) return error 58 (bad network response). The reason for this is completely unknown, but things seem to recover if you just ignore that error code and keep on processing. Sometimes even when that code is returned the buffer seems to have valid data in it.
4. Your comments about timing out file operations are spot on. The timeouts you need can be larger than might first seem sensible - 15 to 30 seconds seems to work. This is the idle period, not the period from the first operation coming in! It can be interesting watching the way something like Word or Excel saves a file that was previously existing: There will be a creation of a temp file, a number of writes, a close, a rename of the wanted file name to the backup file name, and a rename of the temp file name to the wanted file name. Putting this together into a big picture is exciting.
5. On modern PC's using the simple method you describe (many threads) seems to work ok - I allow up to about 10 - 20 waiting threads and there seems to be no detrimental effects on performance. I'd be uncomfortable going above that, though. If doing this, for performance, I don't use SendMessage or PostMessage. I pass things from the watching thread(s) to a data aggregating thread which actually does the work of tracking the file operations. This minimises the non-watching time of the watching threads.
6. As far as I can tell, when you are not in the call to ReadDirectoryChangesW, the changes are still being tracked for you and will be presented on the next call. I have little evidence for this apart from observation, but some experiments inserting Sleep() calls will tell the true story pretty fast for anyone who really must know the answer.
Wally,
Thanks very much for your detailed feedback. I updated the article about the Samba server and your timeout recommendations.
For Wally's #6 and your [Jim] initial questions about changes between calls, MSDN documentation does state that changes made between calls to ReadDirectoryChangesW will be tracked.
ReplyDelete"When you first call ReadDirectoryChangesW, the system allocates a buffer to store change information. This buffer is associated with the directory handle until it is closed and its size does not change during its lifetime. Directory changes that occur between calls to this function are added to the buffer and then returned with the next call. If the buffer overflows, the entire contents of the buffer are discarded and the lpBytesReturned parameter contains zero."
From this I'm theorizing that the actual changes are recorded first in the matching kernel buffer until it either overflows or you make some call to read out the data (another ReadDirectoryChangesW, GetQueuedCompletionStatus, etc...) at which point the data is copied out to your buffer and the kernel buffer pointers are reset to the start of the buffer. I've not actually watched the contents of the passed in buffer to verify this but I can't think of any other reason why they would otherwise create their own buffer...
My own experience with this API seems to confirm that in fact the changes are being buffered between calls. I've had a Windows service using this API for years (using single thread, I/O Completion Port, and GetQueuedCompletionStatus) and I've not missed picking up any file dropped into the monitored directory yet.
I've not run into the case where I've overflowed the buffer. I have code that just enumerates the directory and then calls ReadDirectoryChangesW again, but it's never been exercised.
Hi Chris
I have much the same experience - the buffer returned can have multiple file entries in it. I have never seen more than 1 - but I do have a draining thread sitting there sucking events out as quickly as I can.
I am however noticing something strange. Even though I do not set the flag to see updates in the last access time of a file, it looks a lot like events ARE delivered for last access time changes. NTFS also can defer the updates to last access time of a file "by up to an hour" according to MSDN.
This might be another little "gotcha" that is lurking. It's certainly causing me a great deal of pain at the moment.
Hello Jim,
Great article! Can I convert this code into a Windows service (C++)?
"Unknown",
This code is already appropriate for a service. It is written in C++ and does not use a message queue.
Hi Jim. Thank you for this article and code. I'm trying to use your library, and sometimes in CReadChangesRequest::NotificationCompletion the assert "_ASSERTE(dwNumberOfBytesTransfered >= offsetof(FILE_NOTIFY_INFORMATION, FileName) + sizeof(WCHAR));" fires. Could you please help, what does it mean? Is it critical? I don't know the specific conditions; it happens not on my machine, but I could provide all necessary information if needed.
Thanks,
Andrey
If you look at the next line in the source code, it says, “This might mean overflow? Not sure.” That comment applies when dwNumberOfBytesTransfered is zero. The assert you mentioned would trigger in that case, which would have let me know that I should investigate why dwNumberOfBytesTransfered was zero.
- There is no need to use CancelIo. If you just close the directory handle, the overlapped operation will simply complete, and it will either report ERROR_OPERATION_ABORTED or just success without any events.
- There is never a need to spin using HasOverlappedIoCompleted. People can use GetOverlappedResult with the bWait parameter set to TRUE to accomplish the same.
Hi Bert,
Thanks so much for joining in.
I believe I discussed closing the handle. It's an undocumented "feature" and using it in the fashion you describe is not recommended given that there's a documented way to do what you want. (At least this was the case when I wrote the blog entry.)
I hadn't considered using GetOverlappedResult that way, but even so, you need to use care doing so because there's no timeout and so your application could hang at shutdown. The solution presented in the article times out if everything does not shut down cleanly.
Jim
Hello Jim,
Can't thank you enough for this comprehensive tutorial.
Something I discovered:
From my testing it appears that it is actually possible to delete a watched directory; it does not seem to be "in use" and inaccessible as you described. An error code 5 "ERROR_ACCESS_DENIED" is sent to the callback, with dwNumBytesTransfered set to 0, and reissuing ReadDirectoryChangesW may or may not succeed right thereafter (perhaps depending on whether the delete operation has 'finished' or not), although it will definitely fail eventually.
I am currently trying to analyze this more so I can figure out how to correctly handle the case when a watched directory is deleted.
Thanks again!
-regedit
To clarify, an external process can't delete the directory. Whether or not the same process can delete the directory is something I didn't test.
Actually I was even successful in deleting the watched folder from another process too. I tried manually right-clicking & deleting the folder in Windows Explorer - success. I also tried deleting a watched folder from a command prompt - also successful.
-regedit
1) In CThreadSafeQueue.push:
push_back( c );
lock.Unlock(); << !!!
if (!::ReleaseSemaphore(m_hSemaphore, 1, NULL))
{
pop_back(); << !!!
2) In CReadChangesRequest::BeginRead() - dont use m_dwFlags
Fixed. Thank you!
Congratulations for this amazing article. I've been using some of the techniques you explain here but, as you mention, ReadDirectoryChangesW may fail when used on network directories, depending on the remote server OS.
Is there any documented and reliable technique I could use for watching directories on network drives?
Thank you.
José,
Since most of the functionality relies on the remote OS, it's impossible to have a reliable function in Windows. Also, the concept of a "reliable network" is an oxymoron. Networks are not reliable and things go wrong with them all the time, even on a LAN.
Thank you Jim. I guess I won't be able to design a monitor based on notifications due to those limitations.
Keep up the good work.
First, thank you for the great article. Regarding "I saw a few samples that left out FILE_SHARE_DELETE. You'd think that this would be fine since you do not expect the directory to be deleted. However, leaving out that permission prevents other processes from renaming or deleting files in that directory". This is not the behavior I see on Win7. I am able to create/rename child entries just fine with only FILE_SHARE_WRITE but I cannot delete/rename the monitored directory itself.
Rich,
Thanks for your feedback. That's similar to what Anonymous said on July 17. Unfortunately I haven't had the time to follow up and test things in more detail. I know I originally wrote this article on Windows 7. Are you using NTFS or FAT32 on the drive you are testing?
Hi Jim,
This is an NTFS drive, Win7 Enterprise SP1 x64
Hi Jim,
Nice article, but I am not getting child folder delete or rename notifications.
I have set the notification flags as const DWORD dwNotificationFlags =
FILE_NOTIFY_CHANGE_LAST_WRITE
| FILE_NOTIFY_CHANGE_CREATION
| FILE_NOTIFY_CHANGE_FILE_NAME |FILE_NOTIFY_CHANGE_DIR_NAME|FILE_NOTIFY_CHANGE_SECURITY;
I'm not clear from your comment whether you are running my code or running your own code. Grab one of the sample programs (not mine, they're available elsewhere) that demonstrate use of this API and log all of the notifications for you.
If you are using your own code, are you handling the case of multiple notifications in a single callback?
Finally, if you are monitoring a LAN drive, then all bets are off.
This bug should be fixed now. The latest source is on GitHub.
Thanks for all your hard work. It's appreciated. It's also further evidence of just how poor MSFT's documentation usually is (.NET is even worse). Forcing programmers to constantly experiment to figure out how things work is not how you go about publishing an API. Unfortunately, MSFT is guilty of this on a regular basis (an error prone and frustrating/expensive reality).
ReplyDelete"However, you have to make sure that you use a different buffer than your current call or you will end up with a race condition."
This statement gives the impression that some race condition is possible because of the way the system processes the call and the callback.
Whereas it actually depends on how you process the returned buffer, i.e. if you process it in a different thread and try to give the system the same buffer before your processing of the current buffer completes, you will get a race condition.
But provided you're done with the buffer before you reissue the API call, it's OK to use the same buffer again.
I think one can use ReadDirectoryChangesW in a much simpler async way, using the lpCompletionRoutine param:
ReplyDeleteFolder handle (hF) must include FILE_FLAG_OVERLAPPED,
OVERLAPPED structure (Ov) needs a new event (Ev),
and you need a MyCompletionRoutine, which can queue the incoming notifications thread-safe way (e.g. using critical sections, etc.)
The loop:
ReadDirectoryChangesW(hF1,....,Ov, MyCompletionRoutine)
ReadDirectoryChangesW(hF2,....,Ov, MyCompletionRoutine)
...
[Empty notification queue thread-safe way ] WaitForSingleObjectEx(Ev,60000,true)
The timeout should be a fairly large value (1-2 secs might cause skipping in data). I think you should empty the queue BEFORE WaitForSingleObjectEx, to keep delay before reissuing ReadDirectoryChangesW-s at minimum, but I'm not really sure.
For a graceful thread shutdown use SetEvent(Ev) before exiting the application.
What I would recommend is to keep monitored folders at minimum, using only top-levels with bWatchSubtree = true, if possible (and if needed)
Hi Jim
ReplyDeleteAlthough I'd managed to get a general understanding of how this works elsewhere your article cleared up some of the portions that were less than clear and added info I wasn't aware of at all. So thank you for that!
My question relates to what changes are required to a routine which is fully functional w/o a completion routine to have it actually call a completion routine. I don't need to know what needs to be handled after that routine is called or the cleanup on exit etc. I just need to know how to get it to actually call the routine.
My CreateFile & ReadDirectoryChangesW flags appear to match your examples and it does work w/o the completion routine and the only change that appears to be needed to cause the call to occur is to add the pointer to the routine to the end of ReadDirectoryChangesW but my routine still doesn't get called.
I do notice that adding the pointer stops the non-completion routine code from detecting the change so I presume the function is at least acknowledging the pointer even though it doesn't actually make the call.
It sounds like you need to use the alerting form of your Wait function, such as SleepEx, MsgWaitForMultipleObjectsEx , etc.
DeleteI had the same question when I wrote the article. The answer is that the notifications are queued (possibly because of the open handle on the directory?) So, the result is that it works as desired, although exactly how and why is a question.
I implemented ReadDirectoryChangesW() using alertable I/O with a socket, but it can watch only one directory; I want to watch a whole directory tree.
If I use CancelIoEx and am using a completion routine, is it OK if I call CloseHandle on hDirectory soon after I call CancelIoEx, or should I call CloseHandle on hDirectory in the callback when dwErrorCode is ERROR_OPERATION_ABORTED?
Thanks again for such an awesome article; without this I couldn't have accomplished this.
Hey, in case we are monitoring a USB drive, i.e. we have opened a handle to the USB device and, instead of safely removing it, we pull it out directly, the call to ReadDirectoryChangesW() still goes on. Can you suggest a solution?
Sorry, I've never tried to use this call with a removable drive.
Great article. Do you know how to achieve the same in C#?
Sorry, but I'm not able to help with that. All of the C# implementations I've seen were very basic (although it's been years since I last looked.)
Hi Jim,
Thank you very much for your detailed explanation. I have a concern -->
How to detect file moves with ReadDirectoryChangesW? I can receive the delete/create events when a file is moved but I cannot figure out a way to determine whether it is simply two separate files or a single file being moved.
I am monitoring only one directory and its subtree.
Thank you in advance.
~Manish.
Sorry, I wrote this article five years ago and I simply don't remember anymore.
OK. Anyway, thank you.
DeleteI was able to get this to work to detect directory changes also. However, I have to modify the filter conditions in the ReadDirectoryChangesW call inside of the BeginRead method of the ReadDirectoryChangesPrivate class to include FILE_NOTIFY_CHANGE_DIR_NAME.
Nice classes! This saved me a ton of time trying to figure this out by trial and error.
This was fixed when I closed Bug #1. (The filter criteria was being ignored.) Now you can pass in that flag.
DeleteCReadDirectoryChanges doesn't use user's filter criterias in the attached sample. Change ReadDirectoryChangesPrivate.cpp, line 105 from "FILE_NOTIFY_CHANGE_LAST_WRITE|FILE_NOTIFY_CHANGE_CREATION|FILE_NOTIFY_CHANGE_FILE_NAME," to "m_dwFlags,".
Thank you!
Thanks for the great tutorial! BTW, what software license do you use for your code examples?
The license information is included in all source files. It is the MIT License.
Jim, thanks for this great, comprehensive tutorial.
I'm new to Windows programming and I initially thought I could use ReadDirectoryChangesW. However, I realized that the fact I need to open the top root directory makes it less useful, as I am trying to make the mechanism transparent to the user. In this case, even if I lock the whole drive with CreateFile on Drive:\, an end-user may notice that (e.g. while trying to safe-remove a USB memory stick).
Do you know any alternative for file modification detection that would overcome this problem (other than a kernel driver)?
Thanks,
Marek
Thanks for the great tutorial!
I found a bug. When monitoring a Linux share to which files are uploaded via FTP, the completion routine of ReadDirectoryChangesW
(CReadChangesRequest::NotificationCompletion) receives a dwNumberOfBytesTransfered parameter of 0.
Sir,
In this piece of code, you explained how to assign an object to the hEvent parameter.
void CChangeHandler::BeginRead()
{
::ZeroMemory(&m_Overlapped, sizeof(m_Overlapped));
m_Overlapped.hEvent = this;
DWORD dwBytes=0;
BOOL success = ::ReadDirectoryChangesW(
m_hDirectory,
&m_Buffer[0],
m_Buffer.size(),
FALSE, // monitor children?
FILE_NOTIFY_CHANGE_LAST_WRITE
| FILE_NOTIFY_CHANGE_CREATION
| FILE_NOTIFY_CHANGE_FILE_NAME,
&dwBytes,
&m_Overlapped,
&NotificationCompletion);
}
Can I ask how to use this in the NotificationCompletion callback method? I am new to C++ Windows programming.
Thanks Jim, Great Article! Your sample is one of the few that actually works. When I monitor a local drive your sample works fine, but when I monitor a Network Drive it detects the first file, but then in NotificationCompletion the second call to BeginRead gives the dreaded: 1450 "Insufficient system resources exist to complete the requested service" and the monitoring no longer works. Do you know of any workaround for this?
Thank you!
Michael
Thank you very much.
Your kind article is still helpful in 2019.
Hi Jim,
This was the most elaborate explanation. Thank you so much.
I have a task at hand where I require the name of a new file that has been added to the folder or changed. ReadDirectoryChangesW, with the help of the FILE_NOTIFY_INFORMATION structure, gives the filename as a non-null-terminated Unicode string. How can I get the ASCII or readable format of the filename?