Wednesday, December 17, 2008

Calibrating Monitors with the Eye-One LT
including dual monitors and dual LUTs

I've spent the last four years staring at two monitors side by side that do not have matching colors. After a while, a minor irritant turned into a major irritant, and I bought the X-Rite Eye-One LT (aka the i1) to fix the problem.

As numerous others have noticed, the documentation leaves a great deal to be desired, so here is my addendum.

1. Don't bother installing from the CD. It's all out of date. Install the latest versions of the software from:

http://www.xrite.com/product_overview.aspx?ID=789&Action=Support

The only software required for calibration is i1Match, which is Vista-compatible as of v3.6.2. (You do not need to install the download labeled 32 & 64-Bit Drivers for Win2000, XP, and Vista. The drivers are included with i1Match.)

2. Install the software before you plug in the USB cord. The manual is wrong when it says to do it the other way around.

3. Understand what your monitor is capable of. For the purposes of calibration, there are three classes of LCD monitors:

  • Monitors that do not allow you to adjust the contrast or whitepoint. My 5-year-old Samsung 191T fell into this category. Eye-One works with these monitors, but the results are suboptimal. In general, it is almost impossible to calibrate these older monitors to match other monitors.
  • Monitors that allow you to manually adjust the contrast, whitepoint, or RGB values. i1 walks you through making adjustments. The quality of the result depends on the number of adjustments available on the monitor.
  • Newer monitors where Eye-One is able to automatically control the monitor to set contrast, RGB and whitepoint. Such monitors are the easiest to calibrate and will have the highest level of success. Note that Eye-One only has built-in support for automatically controlling a small number of monitors. For other monitors, you'll need to manually make the adjustments.

4. Ambient light matters. The ambient light in your room has a significant impact on the result. Make sure you calibrate your monitor in the light where it's normally used. Don't do what I did and calibrate your monitor at night when the room is lit up with incandescent bulbs. The result is not satisfactory. (Note that the i1Match software actually warned me about this when I turned on "measure ambient light". The measurement bars showed that the light in my room was inappropriate for successful color correction.)

If you have Vista, you may also run into the problem of your color correction settings being lost every few minutes. Install Vista Service Pack 1.

5. Run the calibration. This is generally fairly painless. Make sure you read the help panel on the right in i1Match - it has useful information. Make sure you do the Contrast adjustment, if i1Match asks you to.

If you are calibrating an LCD monitor, you should use the counterweight instead of the suction cups built into the sensor. I found that the suction cups wouldn't stick to the matte screen of my LCD monitor.

6. Understand LUTs. The LUT, or "LookUp Table," is a feature of your video card that automatically performs color correction at all times. Without a video LUT, color correction only works in software like Photoshop and PaintShop Pro that can perform software-based color correction. Virtually all video cards made in recent years contain a hardware LUT.

You can verify that your video card has LUT support by downloading the LUT Tester.

When you run color calibration software such as i1Match, the result is an ICC file. ICC stands for International Color Consortium. The file contains information on how RGB values need to be adjusted to display properly on your monitor.

Here's the annoying part. Windows, even Windows Vista, does not understand how to load the hardware LUT with the ICC file. So Windows has all of the infrastructure to track ICC profiles for each monitor, but Windows doesn't actually do anything with that information!

The solution is software that loads the LUT when Windows starts. Such software reads the ICC profile for each monitor and loads the appropriate LUT. The most common example is Adobe Gamma Loader, but the Eye-One includes the Logo Calibration Loader, which should always be used to load the LUTs.
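For the curious, here's a minimal sketch of the GDI call that such loaders ultimately make. This is my own illustration, not the Logo Calibration Loader's actual code: a real loader computes the ramp from the ICC profile's vcgt data, whereas this example just builds a plain gamma curve.

#include <windows.h>
#include <math.h>

// Load a simple gamma curve into the video card's LUT for the given monitor DC.
// A real calibration loader would fill the ramp from the ICC profile instead.
bool LoadGammaLut(HDC hdcMonitor, double gamma)
{
    WORD ramp[3][256];
    for (int i = 0; i < 256; ++i)
    {
        // Map 0..255 through the gamma curve into the 16-bit range the LUT expects.
        double v = pow(i / 255.0, 1.0 / gamma) * 65535.0;
        WORD w = (WORD)(v + 0.5);
        ramp[0][i] = ramp[1][i] = ramp[2][i] = w;   // same curve for R, G, and B
    }
    return ::SetDeviceGammaRamp(hdcMonitor, ramp) != FALSE;
}

To target a specific monitor on a dual-LUT card, you would pass an HDC created for that display device (for example, with CreateDC on the device name) rather than the primary screen DC.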

7. Dual monitors and Dual LUTs. This one is the 900-pound gorilla. The problem with dual monitors is that you need dual LUTs. Every monitor requires a unique calibration, even two monitors that are the identical model. Many video cards today have connections for two monitors. In order to show the correct colors in both of them, each must be calibrated separately AND your video card(s) must have one LUT for each monitor. Most low-end video cards (and most pre-2006 video cards) only have a single LUT, which means that either you need a second video card for your second monitor or you need to upgrade to a card with dual outputs and dual LUTs.

It can be tricky to determine if your system will be able to calibrate two monitors. There are several variables:

  • Operating system: If you are running Windows Vista SP1, then the OS includes the required support. If you are running Windows XP, then you need to install extra software.
  • Eye-One software. The i1Display 2 software apparently has built-in support for multiple monitors. The i1Display LT software does not support multiple monitors, but see my workaround below.
  • Dual LUTs. To determine whether your video card has dual LUTs, run through the calibration on the primary monitor. On the final screen, use the Calibration On/Off button to see how calibration affects your monitor. If you have dual LUTs, then only the primary monitor will be affected by the Calibration On/Off button. If you only have a single LUT, then both monitors will be affected.

I have verified that the Logo Calibration Loader supports multiple LUTs. It is intelligent enough to read the ICC profile for each monitor and set the appropriate LUT.

My system has an nVidia 7950GT video card. I was very happy to discover that this card has dual LUTs.

8. Remove conflicting software. When you install i1Match, your Startup group is updated to include the Logo Calibration Loader. This software updates the LUT(s) in your video card. Unfortunately, there are several other applications that try to do the same thing and must be removed. The most common is the Adobe Gamma Loader, which is installed with Photoshop.

9. Advanced users. If you want to know more about what your monitor can do, download the free software HCFR. Although this software will not create or manipulate ICC profiles, it will give you copious information about ambient light, your monitor's color space, and more. Just be prepared to spend some time figuring out how it works. You'll also need to copy EyeOne.dll into HCFR's installation directory. Make sure you enable Eye-One support in HCFR.

10. Finding your ICC profiles in Vista. The ICC profiles are managed by Windows. To view them in Vista, go to Control Panel | Color Management. Note that this information is maintained on a per-monitor basis and you can switch between monitors using the droplist at the top of the window. Important! If you change the default ICC profile for a monitor, you'll notice that your screen colors do not change. You must run the Logo Calibration Loader to update your LUTs after you change the ICC profile. You can run this Loader from the Start menu or from C:\Program Files\GretagMacbeth\i1\Eye-One Match 3\CalibrationLoader\CalibrationLoader.exe.

11. Calibrating your second monitor with Eye-One LT. The cheaper LT version of Eye-One does not include built-in support for multiple monitors, but the workaround is easy. (This is for Vista; I haven't tested this in XP.) This process assumes that you have already completed the calibration for your primary monitor and that you've verified that you have dual LUTs. Here is how to calibrate your second monitor:
  • Open Control Panel
  • Open Personalization
  • Open Display Settings
  • Left-click the big #2.
  • Check the box labeled "This is my main monitor."
  • Left-click the big #1.
  • Uncheck the box labeled "Extend the desktop onto this monitor."
  • Click Apply
  • Your primary monitor is now disabled. You should be able to run i1Match and calibrate the second monitor without difficulty. The ICC profile for the second monitor will be set to the new ICC profile and the ICC profile for the first monitor should be unchanged.
  • Reenable your primary monitor by reversing the steps above for the "big #1" and the "big #2."
  • Run the Logo Calibration Loader to reload the LUTs for both monitors. This step is required.

Other Thoughts

I'm quite happy with my Eye-One. It calibrated my desktop PC, my laptop, and my Mac. It's considered to be the best colorimeter on the market that's "affordable." (The Spyder hardware also appears to score well, but Amazon Reviews are littered with complaints about their software.)

Another alternative is the Huey, which is half the price of the Eye-One. However, the Huey cannot calibrate luminance, which is critical for calibrating today's excessively bright LCD monitors. Also, luminance calibration is required in dual-monitor setups to make both monitors appear the same.

After final calibration, I was not able to get good results from my Samsung 191T or 191T+. However, both of these monitors are over four years old, both have over 15,000 hours on their backlights, and neither has an adjustable whitepoint, so the poor results are to be expected. (Both monitors looked better after calibration than before, so there was definite improvement.) On the other hand, my brand new HP 2475w ended up with near-perfect calibration. After creating the ICC profile and loading the LUTs with Eye-One, I switched to HCFR to graph the results, and they were excellent: luminance was a perfect match to the theoretical curve, and RGB was corrected to within 5% for all values.

[Update 2/25/2009] I tested the latest version of i1Match (3.6.2) under Windows 7 Beta and it worked correctly. The Logo Calibration Loader also worked. One small hiccup was that it couldn't find the position of the sensor, but I just told it to continue and it worked fine.

[Update 5/21/2009] I upgraded the T42/p from Win7 Beta to Win7 RC. I copied the color profile from the Beta to the RC, set it to be the default, and the screen magically updated - even though I didn't start the Logo Calibration Loader. This means that, as of Windows 7, Windows finally includes support for automatically loading the LUTs based on the color profile. Great news! (This means that you can remove the Logo Calibration Loader from your Startup group if you have Windows 7.) Note that I was not able to test dual monitors to see if Windows 7 could handle multiple LUTs.

Monday, November 3, 2008

VMware Server 2.0 Review - What a disaster

I've been a big fan of VMware Server, as I've described in earlier posts. Yesterday I tried to install VMware Server 2.0 and I have to say that they've screwed up the product so badly that they are making Microsoft Virtual Server look good in comparison.

First of all, the new VMware Server 2.0 is a 500MB download. Yes, that's HALF A GIGABYTE for a virtualization product. In contrast, Microsoft's download is a paltry 29MB.

Many installations of Windows can't handle an MSI file that large, so the product won't install. The instructions in the Release Notes, which Google doesn't index, describe a fix that can't be applied on a Primary Domain Controller, so I spent two hours trying to find an alternative. (The server is just a development test bed, so there's no issue with VMware hurting performance.)

When I tried the link on my desktop for connecting to a virtual machine, I got a 404 error. That's it. There's no link anywhere to documentation or support. Turns out that this 500MB download included Apache, Tomcat, and other resource-hogging applications. This is a far cry from the sleek install of the 1.0 product.

I did a little more reading and learned that all connections to VMware Server 2.0 are done through the browser. This is not good news. That's how Microsoft's product works, and it's miserable. One of the best features of VMware was its client interface, which was lightweight and fast.

To add insult to injury, the installation of VMware Server 2.0 broke my network connectivity. The network interface won't accept any incoming connections. File sharing, remote desktop - everything is broken.

So my review of VMware Server 2.0 is two thumbs down.

Soon I'll need to install Windows Server 2008, which isn't supported by VMware Server. As an alternative, I'll probably investigate Microsoft's new Hyper-V offering.

Postscript: I spent three hours trying to restore network connectivity for the server and finally gave up. I wiped the drive and restored from an image-level backup. Acronis TrueImage saves the day again. Happily, this had the side effect of reverting to the prior installation of VMware server.

Thursday, October 23, 2008

DreamWeaver CS4 - First Look

I finally broke down and bought DreamWeaver CS4 today. This is the first time I've used Dreamweaver since 1997. It works a lot better on a 4GB Core 2 Duo machine than it did on a Pentium II 350 :-)

Anyway, here are my initial impressions.

The Good

  1. It doesn't crash! In the past eight years I don't think I ever used GoLive for more than 40 minutes straight without crashing. DW hasn't crashed yet.
  2. Works under Windows Vista.
  3. The upgrade registration was painless. I entered my version of GoLive and my GoLive serial #. No problems.
  4. DW supports sftp.
  5. I/O to the remote sftp server is fast, in direct contrast to every other web dev tool I've tried.
  6. Template page updates are fast. GoLive was always very slow.
  7. Live View allows you to see the browser view without going to the browser. Pretty slick.
  8. Dual Monitor mode.

The Bad

  1. It took me a long time to figure out how to activate the Import Golive Site command. It's not in any of the menus. Looking for "import golive" in Dreamweaver Help brings up numerous old articles about DW CS3, which worked completely differently. I think I finally found the answer by searching on "golive to dreamweaver cs4" (without the quotes) but the first several search results were still for CS3. The answer can be found at Dreamweaver for GoLive users, http://www.adobe.com/devnet/dreamweaver/golive_migration/
  2. After going through the effort to find and install it, the GoLive import tool didn't work at all. It refused to open the site file. My workaround was to use the GL2DW tool from CS3, which is installed in GoLive instead of DW. GL2DW crashed halfway through, but it did enough that I was able to determine what to do myself.
  3. I got stuck on how to open an existing site. Turns out you use "New Site..." even for an existing site. From what I've heard, this will just be the first in a long list of Dreamweaver-isms that I'll run into.
  4. Where's the "remove whitespace" command? Every other tool I've ever used, even Microsoft Expression, does this automatically when the site is uploaded.
  5. Only Subversion is supported for source control, which is a little strange since SourceSafe is supported as a test server. Not a big deal, but SourceSafe integration would have been handy.

Friday, October 17, 2008

How to load MSHTML with data

Download the sample code for this article.

This week I needed to write code to parse HTML pages and find the IMG tags to extract the images. I saw that the IHTMLDocument2 interface includes the get_images() function, so I figured that this would be a trivial problem to solve, especially since MSHTML supports running outside of a browser window.

This "trivial problem" has now taken me the better part of a week to solve. Most of the solutions I found are buggy, wrong or incomplete, some of which are from Microsoft technical articles.

For this project, I was starting with a raw buffer that contained HTML data. The buffer was multibyte, not Unicode, but the encoding of the buffer was not known in advance, so I couldn't convert the data to Unicode without parsing the page to extract the charset tag. Since parsing is what MSHTML was supposed to do for me in the first place, this made the Unicode calls useless.

1. IMarkupServices

The first article I found was from CodeProject, titled How to identify the different elements in the selection of web pages? Although the article didn't solve my exact problem, it did lead me to IMarkupServices and ParseString(), with supporting information from CodeGuru in Lightweight HTML Parsing Using MSHTML.

Unfortunately, I was never able to make this work. As it turned out, this solution wouldn't have worked anyway, because ParseString requires a Unicode string and I couldn't provide one. [Note: I now believe that I needed a message loop to make this work, as in #3 below, but I didn't verify this.] Update 12/21/2010: I was finally able to make IMarkupServices work properly using sample code from MSDN; a rough sketch is below. IMarkupServices does not require a message loop, and you can read HTML in any character set by using ParseGlobal instead of ParseString. However, IMarkupServices does have some strange drawbacks. The IHTMLDocument2 object it returns is not fully functional: you cannot QI for IPersistStreamInit to serialize it, and IHTMLDocument3::get_documentElement simply fails. I've also found comments saying that IMarkupServices will simply fail on many web pages.
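For reference, here is roughly what the ParseGlobal route looks like. This is a sketch based on my reading of the MSDN sample, not production code; error handling is abbreviated, and the assumption that the caller keeps ownership of the HGLOBAL should be verified against the sample.

#include <mshtml.h>
#include <atlbase.h>

HRESULT ParseHtmlBytes(const BYTE* pData, SIZE_T cbData, IHTMLDocument2** ppResult)
{
    *ppResult = NULL;

    // Create a host document and get its IMarkupServices.
    CComPtr<IHTMLDocument2> pHostDoc;
    HRESULT hr = pHostDoc.CoCreateInstance(CLSID_HTMLDocument, NULL, CLSCTX_INPROC_SERVER);
    if (FAILED(hr)) return hr;
    CComQIPtr<IMarkupServices> pMarkup(pHostDoc);
    if (!pMarkup) return E_NOINTERFACE;

    // ParseGlobal takes raw bytes, so any character set works - no Unicode conversion needed.
    HGLOBAL hGlobal = ::GlobalAlloc(GMEM_MOVEABLE, cbData);
    if (!hGlobal) return E_OUTOFMEMORY;
    memcpy(::GlobalLock(hGlobal), pData, cbData);
    ::GlobalUnlock(hGlobal);

    CComPtr<IMarkupPointer> pStart, pEnd;
    pMarkup->CreateMarkupPointer(&pStart);
    pMarkup->CreateMarkupPointer(&pEnd);

    CComPtr<IMarkupContainer> pContainer;
    hr = pMarkup->ParseGlobal(hGlobal, 0, &pContainer, pStart, pEnd);
    ::GlobalFree(hGlobal);   // assumption: ownership stays with the caller
    if (FAILED(hr)) return hr;

    // The markup container is the (limited) document object described above.
    return pContainer->QueryInterface(IID_IHTMLDocument2, (void**)ppResult);
}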

2. IHtmlDocument2::write()

My next try was to use IHtmlDocument2::write(). I found this in another article on CodeProject titled, Loading and parsing HTML using MSHTML. 3rd way. This was the easiest of the various solutions to use because it worked the first time and required no message loop (more on this in #3.) However, the write() function also requires the HTML to be passed as Unicode, which meant that this solution had the same problem as #1.

The only challenge to using IHtmlDocument2::write() is properly setting up the SAFEARRAY. Although the sample code in MSDN shows how to do this, it's complicated and easy to break.
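For what it's worth, here is the shape of that setup - a minimal sketch rather than the exact MSDN code, with error handling trimmed:

#include <mshtml.h>
#include <atlbase.h>

// Parse Unicode HTML into an already-created MSHTML document via write().
HRESULT WriteHtml(IHTMLDocument2* pDoc, const wchar_t* pszHtml)
{
    // write() takes a SAFEARRAY of VARIANTs; the single element holds a BSTR.
    SAFEARRAY* psa = ::SafeArrayCreateVector(VT_VARIANT, 0, 1);
    if (!psa) return E_OUTOFMEMORY;

    VARIANT* pvar = NULL;
    HRESULT hr = ::SafeArrayAccessData(psa, (void**)&pvar);
    if (SUCCEEDED(hr))
    {
        pvar->vt = VT_BSTR;
        pvar->bstrVal = ::SysAllocString(pszHtml);
        ::SafeArrayUnaccessData(psa);

        hr = pDoc->write(psa);          // parse the HTML into the document
        if (SUCCEEDED(hr))
            hr = pDoc->close();         // close the stream opened by write()
    }

    // Destroying the array also frees the BSTR, since OLE knows the element type.
    ::SafeArrayDestroy(psa);
    return hr;
}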

3. IPersistStreamInit::Load()

My third try was to use IPersistStreamInit::Load(). This is the solution recommended by Microsoft in the article Loading HTML content from a Stream. This function caused me no end of aggravation. No matter what I tried, I couldn't get it to work. My call to Load() would return success, but my data wouldn't appear in the document.

I found other people with the same problem. It turns out that a few important details were omitted in the MSDN article. The first hint I found was in a Usenet post on microsoft.public.windows.inetexplorer.ie5.programming.components.webbrowser_ctl, where Mikko Noromaa recommended using CoInitializeEx(NULL,COINIT_MULTITHREADED) instead of CoInitialize(NULL).

My initial results with this change were perfect; for the first time my data could actually be retrieved from IHTMLDocument2. However, when I tried to change the data, I started seeing really weird crashes. Examining the call stack showed that my calls were being marshaled across COM apartments, which shouldn't have been necessary. Eventually, I determined that MSHTML wants to live in an STA (Single Thread Apartment.) When I declared my thread to be an MTA, COM automatically created a new thread to host MSHTML and marshaled my calls to that thread. Something was going wrong in those cross-thread calls and I didn't have the inclination to debug it.

I finally found a blog post by Scott Hanselman that described how to make Load() work properly. Recent versions of MSHTML require a message loop. Apparently, older versions of MSHTML did not. Without a message loop, Load() returns success but the actual work of loading the HTML is performed asynchronously. I had to add this code after Load() in order to make it work:
for (;;)
{
    CComBSTR bstrReady;
    hr = pDoc->get_readyState(&bstrReady);
    // i.e., keep pumping until readyState leaves "loading" (on its way to "complete")
    if (bstrReady != L"loading")
        break;

    MSG msg;
    while (::PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE))
    {
        if (!AfxGetApp()->PumpMessage())
            return;
    }
}

Unfortunately, after going through all this time and effort to make Load() work, I discovered a small fact that made all of this effort useless: there's a bug in the MIME-sniffing code used by IPersistStreamInit::Load(). The bug is documented in this MSDN KB article:
BUG: PersistStreamInit::Load() Displays HTML Files as Text. You can read the discussion in Scott Hanselman's blog entry to understand why this is an issue. Update 12/21/2010: This bug was fixed in IE7 and the KB article is now marked as "retired content."

4. IPersistFile

My next try was to use IPersistFile, like this:

CComQIPtr<IPersistFile> pPersist(pDoc);
hr = pPersist->Load(L"C:\\abc.html", STGM_READ);
pPersist.Release();

As in #3, IPersistFile::Load() requires a message loop to function properly.

Unfortunately, using this function was not optimal for my situation because I already had the HTML document in memory. I didn't want to write it out to disk again and slow things down.

5. IPersistMoniker

I finally found the correct solution in a post by Craig Monro in microsoft.public.inetsdk.programming.html_objmodel.

The solution is to use IPersistMoniker to feed the stream to MSHTML. By using IPersistMoniker, you avoid the MIME sniffing bug, you don't need a Unicode buffer, and you can use in-memory data.

There is one problem with the solution posted by Mr. Monro. The SetHTML function in his example takes a Unicode string for the HTML data, but this isn't a requirement to use IPersistMoniker with MSHTML. I changed the function to use a byte buffer instead of a Unicode buffer and it worked fine. I also used SHCreateMemStream() to avoid having to make a second copy of the data buffer.
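To give a feel for the approach, here is a condensed sketch. CMemoryMoniker is a hypothetical helper (not shown) that implements IMoniker and hands the supplied IStream back from BindToStorage(); the real implementation details are in Mr. Monro's post. As with the other load paths, you still need to pump messages until readyState leaves "loading".

#include <mshtml.h>
#include <shlwapi.h>   // SHCreateMemStream
#include <atlbase.h>

HRESULT LoadHtmlBytes(IHTMLDocument2* pDoc, const BYTE* pData, UINT cbData)
{
    // Wrap the raw (possibly non-Unicode) buffer in a stream without copying it again.
    CComPtr<IStream> pStream;
    pStream.Attach(::SHCreateMemStream(pData, cbData));
    if (!pStream) return E_OUTOFMEMORY;

    // Hypothetical helper; assumed to be created with a reference count of one.
    CComPtr<IMoniker> pMoniker;
    pMoniker.Attach(new CMemoryMoniker(pStream));

    CComQIPtr<IPersistMoniker> pPersistMk(pDoc);
    if (!pPersistMk) return E_NOINTERFACE;

    CComPtr<IBindCtx> pBindCtx;
    HRESULT hr = ::CreateBindCtx(0, &pBindCtx);
    if (FAILED(hr)) return hr;

    // MSHTML sniffs the character set from the bytes itself, so no conversion is needed,
    // and the MIME-sniffing bug described in #3 is avoided.
    return pPersistMk->Load(TRUE, pMoniker, pBindCtx, STGM_READ);
}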

Preventing Execution [Added 12/21/2010]

According to the documentation in Microsoft's WalkAll sample, "If the loaded document is an HTML page which contains scripts, Java Applets and/or ActiveX controls, and those scripts are coded for immediate execution, MSHTML will execute them by default." This is very important to understand if the HTML code you are loading is not trusted because you begin executing the page as soon as it's loaded. This applies to all forms of loading described above except IMarkupServices.

The solution to this problem is not shown in my sample code, but it is shown in the WalkAll sample. Look in the comments at the beginning of the WalkAll sample for "IOleClientSite and IDispatch".

Updating the Document [Added 12/30/2010]

Once you've loaded the HTML document, you often want to update it and save the result. I found it necessary to set designMode to "On" in order to make changes "stick." Otherwise the change would appear to work, but would be discarded when I tried to save the document. You must set designMode to "On" after the document is loaded, because loading a document resets the designMode value.

If you want to save the document, QI for IPersistStreamInit and call Save(). (This doesn't work for documents loaded with IMarkupServices.) However, be aware that MSHTML cannot faithfully recreate the document that you loaded. In other words, if you do a load/save cycle, you can't diff the new file with the original and get a reasonable result. MSHTML normalizes tags, removes newlines, and makes many other changes. There does not appear to be any way to force MSHTML to save a byte-perfect form of the document.
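A short sketch of that sequence, with error handling omitted (the memory stream is just one illustrative destination; pDoc is the IHTMLDocument2* you loaded earlier):

// Must be set after the document is loaded; loading resets designMode.
CComBSTR bstrOn(L"On");
pDoc->put_designMode(bstrOn);

// ... make your DOM changes here ...

// Save the (normalized) document to a memory stream.
CComPtr<IStream> pOut;
::CreateStreamOnHGlobal(NULL, TRUE, &pOut);
CComQIPtr<IPersistStreamInit> pPersist(pDoc);
if (pPersist)
    pPersist->Save(pOut, TRUE);   // TRUE = clear the dirty flag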

As an alternative to IPersistStreamInit, you can get a pointer to the document element with IHTMLDocument3::get_documentElement(), and then call get_outerHTML(), but this strategy is imperfect for the following reasons:
  • Any DOCTYPE or xml declaration at the beginning of the document is discarded.
  • Any attributes on the BODY tag are discarded.
  • Character encoding is lost because you always end up with wide character Unicode. Even worse, any CHARSET declaration in the HEAD is preserved, so non-English documents can display as garbage.


Performance [Added 12/30/2010]

One of my concerns was the performance that would be offered by the MSHTML control, which is hardly a lightweight control. To better understand the behavior, I ran a series of performance tests on a 3GHz Core i7 processor. What I learned is that the time required to parse HTML is dwarfed by the time required to create an MSHTML object. In my tests, if the MSHTML object is created once and then reused, a thread can load a 2K file about 1000 times in one second. If the MSHTML object is created and destroyed on each iteration, performance drops to 25 loads per second, a 40x performance hit. The lesson is that MSHTML should be created once and reused. The reuse strategy is the fundamental reason IPersistStreamInit exists as a separate interface from IPersistStream.


Conclusion

This is one of the more difficult problems I've worked on lately. Between bad examples, bad documentation, bad 3rd party advice, and a plethora of different strategies, finding this solution was far more difficult than I expected. I hope this article saves some others from the same frustration.

Download the sample code for this article.

Sunday, October 12, 2008

GigE File Sharing Performance - 96MB/sec!

I wrote numerous blog entries last year about my difficulties getting GigE (Gigabit Ethernet) to work properly. Yesterday I upgraded both my Vista client and my Windows 2003 Server to 4GB RAM (see the end of this article for hardware configurations.) Suddenly my resource-constrained systems had lots of room to play. I reran my performance tests over my Gigabit Ethernet and came up with some very unexpected results.

First I used the DOS "copy" command on the Vista client to copy a 256MB file from the server to the client. The file was not cached on the server, and it transferred at about 15MB/sec. This was the same performance I was getting before the RAM upgrade.

Next I repeated the same copy. The file was completely cached on the server and the file transferred at about 25MB/sec.

Finally, I again used the DOS "copy" command on the Vista client to copy the file from the client back to the server. The transfer peaked at 96MB/sec!! (A 256MB file copies VERY quickly at that speed.)

On the one hand, this is a contrived test - in the real world we almost never have the luxury of copying a file that's already cached in RAM. However, the tests lead me to several useful conclusions:

  1. The tests show the peak performance of Windows file sharing when you take the disks out of the equation. At 96MB/sec, that's 85% of the practical maximum of 112MB/sec. Considering that the filesharing protocol itself has overhead, we are actually running at somewhere between 90% and 95% of the theoretical max. That's fantastic.

  2. The file sharing protocol's latency is negligible. If it were significant, I wouldn't have reached the above performance numbers. Instead, the transfer rate of the hard drives is the primary performance constraint. Both test systems have SATA 7200RPM drives. I expect that if I had 10,000 RPM RAID 5 drives, my maximum performance when copying non-cached files would improve dramatically.

  3. Something very strange is happening when copying from the server back down to the client. Why this operation is peaking at 25MB/sec is not clear.

  4. Jumbo frames are completely unnecessary for peak performance. They might cut down on the CPU load, but even that's debatable when interrupt coalescing is enabled on the Ethernet card.

One set of data points still to be measured is rerunning the tests from the command prompt on the server. In prior tests, it mattered which system initiated the file copy.

Finally, I accidentally performed the above tests with Virtual PC 2007 running on the Vista client. Virtual PC cut the transfer rates by 30 to 60%. The final copy peaked at 40MB/sec instead of 96MB/sec. Oddly enough, VMware Server was running on the Windows 2003 Server for all of the tests, and it had two virtual machines active. So while Virtual PC had a significant impact on network performance, it appears that VMware Server had no impact.


Client Configuration
Vista SP1
P5B Deluxe Wifi
Core 2 Duo 2.4 GHz
On-board Marvell Yukon Ethernet Card
  - Interrupt coalescing enabled
  - Jumbo frames disabled
4GB RAM
GigE registry update applied

Windows 2003 x64 Server Configuration
P5B Deluxe Wifi
Core 2 Duo 2.13 GHz
On-board Marvell Yukon Ethernet Card
  - Interrupt coalescing enabled
  - Jumbo frames disabled
4GB RAM
Ethernet adapters on the VMware virtual machines were running in Bridged mode.

Saturday, October 11, 2008

Passive (PASV) ftp in Windows 7

I was controlling a customer's computer and trying to upload a file to our corporate ftp server. I could connect and log in, but any other command would cause the ftp client to hang. I was using the ftp command built in to XP because I didn't have privileges to install a 3rd party ftp client. It was frustrating, but I didn't have time to debug the problem.

Today I had the same problem on a system in the office when trying to connect to a new server. However, this time I was running a different ftp server (vsftpd) that gave me a helpful error message:

200 PORT command successful. Consider using PASV.

I tried the obvious, which was to restart ftp from the command prompt and then type PASV, but this gave the error "Invalid command." I found several posts that said that the ftp client built in to Windows doesn't support passive mode. The good news is that they are wrong: this is fixed in newer versions of Windows.

With Windows 7, enter this command to enter passive mode:

QUOTE PASV

With vsftpd, I was rewarded with this response, and everything started working:

227 Entering Passive Mode

 
As others have noted in the comments, this does not work on Windows XP.


Monday, September 29, 2008

CString and Path functions in shlwapi.dll

The "Path" functions in shlwapi.dll are examples of obscure APIs that are incredibly useful. There are dozens of useful functions in there, including PathAppend, PathFileExists, PathFindExtension, and PathFindFileName.

The only problem is that these functions don't play nicely with CString without going through the contortions of GetBuffer/ReleaseBuffer.

Turns out that Visual Studio 2005 and 2008 both include the atlpath.h header file, which includes wrappers for all of the Path functions to make them easy to use with CString. This makes those functions much easier (and much safer) to use.
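For example, here's a hypothetical snippet of what that looks like in practice (the paths are made up):

#include <atlpath.h>   // CPath wrappers over the shlwapi Path* functions
#include <atlstr.h>

CPath path(_T("C:\\Program Files"));
path.Append(_T("MyApp\\settings.ini"));   // wraps PathAppend
if (path.FileExists())                    // wraps PathFileExists
{
    CString str = path;                   // converts straight back to CString
    // ...
}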

Sunday, September 28, 2008

Visual Studio 2008 C/C++ applications under Win9x

One of the disadvantages of Visual Studio 2008 is that it does not create applications that support Windows 95/98/98SE/ME or Windows NT. Although Windows XP has been shipping for seven years, there are still people out there who have never seen any reason to upgrade and are still running the old operating systems. (As of August 2008, www.w3counter.com shows that the combined market share of the Windows 9x variants is less than 0.9%.)

Windows 95 support was dropped in Visual Studio 2005, but working around the problem was easy.

Visual Studio 2008 is a thornier problem. VS2008 dropped support for all variants of Win9x and the C run-time depends on several functions that require Windows 2000 or later. Fortunately, a third party company has created a product named Legacy Extender that provides a drop-in solution. The caveat is that it only works with the statically linked C runtime libraries, not with the DLLs.

For a modest $29.95/developer, including unlimited upgrades, Legacy Extender is a great solution to a difficult problem.

Saturday, September 13, 2008

Screenshots for technical support

The bane of every support technician's existence is trying to interpret what a customer is saying. I'm launching a new site named www.SendMyScreen.com that allows users to send technical support people a screenshot without requiring any expertise.

Check it out.

Monday, September 8, 2008

Remote Debugging Managed Code

Last year I blogged about how to get Remote Debugging working. That article was for Visual Studio 2005, but it works fine for Visual Studio 2008 with some minor modifications to directory and file names.

Now I've had to take the next step, which is remote debugging managed code instead of native code. Turns out that debugging managed code is even more complicated than native code.

One of the key problems I originally had was that I was never able to get authenticated debugging working because one computer was on a workgroup and one was on a domain. After reading several articles, I was able to solve the problem by creating an account on the remote computer that had the same username and password as the account on the development computer. If your development system isn't on a domain, make sure that the remote account is created on the local computer and not the domain. You may need to use the "NET USER" command from the command line to create such an account.
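For example, something like this from an elevated command prompt on the remote machine (the account name and password here are placeholders):

NET USER devuser MyP@ssw0rd /ADD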

If you are running Windows Firewall, the remote debug monitor msvsmon.exe will automatically unblock the right ports for you. At least that was easy.

The next trick is to convince .Net to run your application from a network drive. This technique is described in the .Net Security Blog.

By the way, the Remote Debugger Configuration Wizard was useless for me.

Tuesday, September 2, 2008

Testing Google APIs

I blogged a little while ago about the problems with Google documentation. I've now completed my code that uses these APIs and I'm writing stress tests for the system. Except there's one problem: The server can generate all sorts of errors that aren't under your control, and there's no way to test these error conditions.

These errors are different from communication errors. It's easy to simulate a "link down" problem or similar TCP/IP failure. The problem is that the Google Data API servers can fail in strange and bizarre ways.

One example is the simple "Server Busy" error. The documentation says to "wait and retry." Unfortunately, there's no way to force this scenario, so there's no way to test this error.

Another example is error 403 "Operation could not be completed". What do you do with that? Even worse, if you are performing batch operations, you could have partial failure. Again, there's no way to test this scenario.

Please vote for my bug report asking Google to fix this problem:

http://code.google.com/p/gdata-issues/issues/detail?id=737

Monday, August 4, 2008

Visual Studio 2008 Feature Pack: Manifest Problems

If you install the Visual Studio 2008 Feature Pack, be aware that the manifest handling in the release is buggy, to put it politely. It's very easy to end up with the manifest requiring both the RTM DLLs (9.00.21022.8) and the Feature Pack DLLs (9.00.30411.0). This can cause weird, untraceable errors at runtime. I blogged about this problem before, but in that case it was caused by changing from a Beta build to the Release build.

There's a helpful blog post about the problem in the Visual C++ Team Blog.

The important point is that there are two #defines that force the manifest to point to the newer DLLs:

#define _BIND_TO_CURRENT_CRT_VERSION 1
#define _BIND_TO_CURRENT_MFC_VERSION 1


There's a single #define that does the same thing, but it's broken in the Feature Pack. [9/7/2008 - It's fixed in Service Pack 1. Thanks "anonymous"!]

#define _BIND_TO_CURRENT_VCLIBS_VERSION 1


The blog points out that defining the explicit BIND macros is not always the right thing to do.

This problem bit me because I was following the directions in this blog post, which call for making copies of certain directories from Debug_NonRedist. In a system where the CRT and MFC DLLs have been installed, there are pointers in the registry from the old versions of the DLLs to the new versions, so everything "just works." However, when using the Debug_nonRedist directories, these redirect pointers don't exist. Since the manifest was calling for the old versions of the DLLs, the application wouldn't open.

If you get error c101008a, either Rebuild All or delete all of your "xxx.exe.embed.manifest" files and build the solution.

Friday, July 25, 2008

Installing a Code Signing Certificate

WARNING: if you are getting your code signing certificate from VeriSign or Thawte, DO NOT use a Windows Vista or Win7 computer to get your certificate. If you do, you will not be able to export the private key and so you won't be able to sign code on any other computer and you won't be able to back up your certificate. If this happens to you, Thawte will give you a free reissue (Thanks Thawte!) See https://www.thawte.com/ssl-digital-certificates/technical-support/ Make sure you do not use a Vista or Win7 computer for the reissue! [Added 7/13/2008]. View my earlier post to see why.

Every time I try and install a code signing certificate, I forget how I did it last time. You'd think that there would be a guide somewhere on how to do it, but if there is, I haven't found it. Both VeriSign and Thawte give tantalizing hints scattered among dozens of knowledgebase articles, but overall, it's rather poorly documented.

So here's how to do it: (Note: If you are using Comodo and saving to the CSP, you should skip to Step 3, then skip to Step 7. In this case, you don't use the PVK or SPC file.)

  1. Prerequisites: You must have a PVK file and an SPC file. From VeriSign and Thawte, these are normally named mycert.spc and mykey.pvk. If you don't have both of these files, this article won't help you. You'll also need the password for the PVK file.
  2. Install PVKIMPRT from Microsoft. You can download it here.
  3. Remove your old certificate. If you are renewing an existing certificate, then keeping the old certificates installed isn't usually useful, and having multiple certificates will break SIGNTOOL if signtool is searching the certificate store. Go to Control Panel / Internet Options / Content, click Certificates, select your old certificate, and click Remove. The old certificate will probably be on the Personal page if you allowed PVKIMPRT to decide where to put it.
  4. Import the certificate. Run PVKIMPRT to load the certificate into your cert store, like this (should all be entered on one line):

    C:\Windows\PVKIMPRT.EXE -import c:\mycert.spc c:\mykey.pvk

    You'll be prompted for your password, which you should already know. You'll also be asked which certificate store. You can let PVKIMPRT decide.
  5. Verify the installation. Go to Control Panel / Internet Options / Content, click Certificates, select your newly imported certificate, and click View in the bottom right. The certificate will probably be on the Personal page if you allowed PVKIMPRT to decide where to put it.
  6. Install the intermediate certificate. When you view the certificate information, you'll probably get a message that says something like "Windows does not have enough information to verify this certificate."
  7. Don't panic! This is easily solved by installing the intermediate certificate. For Thawte, download the Root Certificates. This package also contains the intermediate certificates. Extract the ZIP file. Double click the file named "Thawte Code Signing CA.cer". You should see the Certificate Information. Click Install Certificate. Now go back to the Certificates Page on Internet Options and view your certificate. You should see the complete Certificate Information.

  • Make sure you have the private key. Go back to the Certificates Page on Internet Options and view your certificate. At the end of the information, right under the "Valid from" dates, you should see something that says "You have a private key that corresponds to this certificate." If this isn't there, delete the certificate, and repeat this procedure starting from Step 3. This happened to me when the intermediate certificate wasn't installed.
  • Export the PFX file. The PFX file can be used by SIGNTOOL. One advantage is that PFX files can be created without a password, which is handy in automated builds if you are using SIGNTOOL. You can see the complete process with pretty pictures at PentaWare. Here's the abridged version:
    1. Go back to the Certificates Page on Internet Options
    2. Select your certificate
    3. Click Export.
    4. Click Next.
    5. Select "Yes, export the private key."
    6. Click Next.
    7. Select "Personal Information Exchange - PKCS #12 (.PFX)." (If this option is grayed out, then your private key was not imported).
    8. Check the box labeled "Include all certificates in the certification path if possible." THIS IS VERY IMPORTANT.
    9. Click Next.
    10. On Windows XP, you can leave the password information blank if you plan to use this PFX file for automated builds. On Windows Vista, you must provide a password.
    11. Click Next.
    12. Enter a filename.
    13. Click Next.
    14. Click Finish.
  • Sign your code! If you need some hints on this, see my earlier post or the example command below.
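Once you have the PFX file, a typical SIGNTOOL invocation looks something like this (the file names, password, and description are placeholders; the timestamp URL is the one Verisign/Thawte published at the time):

    signtool sign /f mycert.pfx /p MyPassword /t http://timestamp.verisign.com/scripts/timstamp.dll /d "My Application" MyApp.exe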
    Thursday, July 24, 2008

    Visual Studio 2008 C++ Redistributable Components

    I've wasted two hours hunting down files that should be obvious, so I'm hoping this post helps someone else.

    If you need to redistribute components from Visual Studio 2008, such as the C runtime or the MFC DLLs, there are four ways to do it:

    1. Put the DLLs in your installer and install them into Windows\System32 directory yourself.

    DON'T EVEN THINK OF DOING THIS. It's almost impossible to do it right because of WinSxS. If you want complete control over your installation, use option #2.

    2. Use the redistributable directories Microsoft provides.

    These directories provide copies of the DLLs that are only available to your application. The directories should go in the same install directory as your EXE. This means that they are subdirectories of wherever your EXE is installed. [Updated 11/19/2008] I no longer recommend copying the directory itself. Instead, copy the contents of the redistributable directory into the same directory as your app. Make sure you include the manifest file. I made this change because msvcm90.dll will not bind to msvcr90.dll if they are in a subdirectory. As a bonus, this change makes this option work with Windows 2000.

    Assuming you've installed Visual Studio 2008, you can find the directories at:

    C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86

    Important: If you do this with Visual Studio 2008 SP1, make sure you put the following in your precompiled header:

    #define _BIND_TO_CURRENT_VCLIBS_VERSION 1


    Advantage: Doesn't require admin privileges. Works with XCOPY. Your app won't break if the system global version is updated by Microsoft (but you won't benefit from security fixes either.)
    Disadvantage: Not viable if your EXEs and DLLs are installed across multiple directories.

    3. Microsoft Visual C++ 2008 Redistributable Package:

    Original distribution.

    SP1 distribution. See caveats.

    [Update 6/22/2009] Both of these distributions cause the Windows 7 Logo Toolkit Beta to generate FAIL errors.

    Advantage: All you have to do is run it. Permanently installs the components in the proper locations.
    Disadvantage: Includes everything, so it's larger than the individual merge modules. If you are building your own installer, the user experience is not as clean as with the merge modules.

    4. Use the Merge Modules.

    Assuming you've installed Visual Studio 2008, you can find the files at:

    C:\Program Files\Common Files\Merge Modules

    Advantage: Best user experience. Smallest download.
    Disadvantage: Installation isn't permanent - the DLLs may be uninstalled when your application is uninstalled. Requires you to use Windows Installer. (This is only a disadvantage for a minority of developers. Windows Installer is a logo requirement for Vista.)

    Wednesday, July 16, 2008

    Google API Documentation

    One of my annoyances of dealing with many Open Source projects is the lack of documentation. nHibernate is a prime example - it's a fantastic technology with minimal documentation. The only book on the subject is a year late (and counting.)

    I no longer find it entertaining to beat my head against a wall for days (or weeks) trying to figure something out that should have been documented in the first place. I don't mind paying for answers anymore. $200 is a small price to pay to get an answer that saves me a day of work.

    I was therefore terribly disappointed by the tremendously poor quality of the documentation for the .Net library for the Google API. In the best tradition of F/OSS, the documentation for it is desperately lacking. Most functions have little more than a terse one-liner description, with no additional remarks. If it weren't for the fact that source code was included, trying to determine how to use the API might well be impossible.

    A typical example is Captcha handling. The Google API will sometimes request the user to solve a Captcha to make sure that a human is really present. The problem is that there is no C# source code to demonstrate the correct response. Captcha is not handled by the sample code or the unit tests. It's mentioned in the newsgroups, but none of the responses actually contain code that works.

    A more pathological example is the documentation on the Contacts namespace. Try clicking on GroupMembershipCollection. No dice - broken link. Try looking at the code in the Samples directory. No help there; there's no sample code for contacts. Try looking at the Java library. Nope, it works differently from the .Net library. (That's not a dig in this case. Both libraries are crafted appropriately for the style of each language.)

    There's also no one you can call if you have a problem, even if you are willing to pay for the answer. Say what you will about Microsoft, but when they support something, they mean it. Monitored newsgroups, books, tech support, consultants, 3rd party training - you name it, the resources go on and on.

    Google has a lot to learn about providing services to the enterprise market.

    Friday, July 11, 2008

    Fixing "ambiguous symbol" in a C++/CLR project

    I was trying to use a CString in a managed project today. This is supposed to be easy, but it wasn't working. [Note: Microsoft factored CString out of MFC several years ago, so CString can now be used standalone. Super handy class.]

    I started with what I thought was a reasonable statement:

    #include "cstringt.h"

    This gave me numerous errors, starting with:

    C:\Program Files\Microsoft SDKs\Windows\v6.0\Include\ocidl.h(6238) : error C2872: 'IServiceProvider' : ambiguous symbol
    C:\Program Files\Microsoft Visual Studio 8\VC\atlmfc\include\atlcomcli.h(370) : error C2872: 'DISPPARAMS' : ambiguous symbol


    With a little research, I determined I was using the wrong include file. So I switched to:

    #include "atlstr.h"

    When I tried to do that, I received similar errors:

    C:\Program Files\Microsoft SDKs\Windows\v6.0\Include\ocidl.h(6238) : error C2872: 'IServiceProvider' : ambiguous symbol
    C:\Program Files\Microsoft Visual Studio 8\VC\atlmfc\include\atlcomcli.h(370) : error C2872: 'DISPPARAMS' : ambiguous symbol

    Several other posts inquiring about the same errors received no answers.

    The problem occurs because the Microsoft SDK (aka Platform SDK) and mscorlib both have definitions for these classes.

    The solution is to take the #include file that caused the problem and make sure it is placed before any "using namespace" statement. In my case, that meant moving the #include near the top of stdafx.h.
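    A minimal sketch of what that looks like in stdafx.h (the ordering is the important part; the rest is illustrative):

    // stdafx.h
    #include "atlstr.h"        // native ATL/SDK includes come first
    // ... other native includes ...

    using namespace System;    // managed namespaces only after the native includes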

    Tuesday, May 20, 2008

    Debugging .Net Framework Code

    When I started trying to write C# code, I spent most of my time in Reflector trying to figure out what the .Net Framework was doing under the hood. My standard routine was: start the debugger, see unexpected results, read the source code in Reflector, repeat. I spent a lot of time searching for functions in Reflector.

    I spent years doing MFC programming. MFC shipped with complete source code from the very early days. Without that source code, it would have been almost impossible to solve many of the problems I ran into. (See my earlier blog about fixing problems with MFC symbols in Visual Studio 2005.)

    I was tickled to discover today that Microsoft has made the source code of the Framework available for debugging purposes - even specialized code like Asp.Net. You must be running Visual Studio 2008. Take a look at Shawn Burke's Blog showing how to set it up.

    Make sure you install the Visual Studio hotfix (QFE)!

    Thursday, May 15, 2008

    What do developers want for working conditions?

    There's a great article by Joel Spolsky titled The Joel Test: 12 Steps to Better Code. This article talks about how to write better code, but it's equally relevant to building an environment where software developers want to work.

    The article hits home for me because, in the mid-90s, I worked for a company that was developing software for Microsoft. By requirement, we adopted methodologies that Joel talks about, like the "zero defect methodology." This had an almost miraculous effect on our ability to ship stable software on time. Even today, I can tell with frightening accuracy when a project will ship by doing nothing more than looking at the bug database statistics. The last person to argue with me on the subject was the president of a $50M company. I said they'd ship in 2 to 3 years - if they were lucky. He said I had no idea what I was talking about because I was just a lowly engineer and he was certain that they'd ship in three months. Two years later, I was still waiting for the initial release.

    We also adopted other Microsoft techniques, such as the 1 to 3 ratio of testers to developers. This change also had a dramatically positive effect, both on the ship schedule and on the morale of the developers. Consider that developers spend 80% of their time on the "normal" case. Testers spend 80% of their time on the boundary cases, thereby acting like customers who abuse the software right out of the box. If you really want to make a developer unhappy, tell him to spend 80% of his time testing (as opposed to developing) obscure boundary cases, like whether the software correctly handles 260 character filenames on multibyte character systems. Most developers will just read Slashdot instead.

    Joel also talks about quiet working conditions, which is one of my pet peeves. The last five companies I worked for all had overhead intercom systems that would be used to page people throughout the day. Every time one of those pages came on, it interrupted six to twenty developers and broke them out of "the zone". Aggravated me no end.

    The final point that Joel talks about is Netscape. Everyone says that Microsoft put Netscape out of business. However, Netscape did far more damage to themselves than Microsoft ever did. Netscape released version 4.5 in October 1998. They didn't release a stable upgrade (7.0) until five long years later. This means that, for all intents and purposes, Netscape completely sat out the second half of the DotCom era. They tried to start fresh on an entirely new project, but their cadre of 20-something programmers simply didn't have the project management and large-scale software development experience to pull it off.

    In summary, Joel's article is now eight years old, but many of the points discussed in it, such as continuous builds, are now considered de rigueur for any self-respecting software development effort. It's worth your time to read.

    Friday, May 2, 2008

    Fixing "PRJ0050 Failed: Failed to register output"

    I'm in the process of converting a COM DLL over to .Net using C++/CLR and I've been plagued with this error:

    RegAsm : error RA0000 : An error occurred while writing the registration information to the registry. You must have administrative credentials to perform this task. Contact your system administrator for assistance
    Project : error PRJ0050: Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions.


    I used Google to try and find a solution and found more confusion than solutions. The correct solution (for me) didn't appear anywhere. So here's my guide to solving this error.

    There are three possible causes. Diagnosis is usually straightforward. Run Regsvr32 and try to manually register the DLL. If you see "entry-point DllRegisterServer was not found" then jump down to Disable Registration. If you see "access denied" then jump down to User Account Control. Finally, if you get "missing dependency," jump down to Missing Dependency.

    User Account Control

    This error is most likely to happen on Vista when you try to register a COM DLL. The reason is that, even if you are running as Administrator, you aren't really an Administrator because of User Account Control (UAC). This problem is easy to test. Enable per-user redirection in the project properties under Linker / General. Force the project to relink (make a minor change to a source file) and see what happens. If the error goes away, then you've found the problem. Amazingly enough, PRJ0050 is specifically mentioned in the documentation for the Linker Property Pages.

    Other more intrusive solutions:
    • Close Visual Studio and restart it as an Administrator by right-clicking the Visual Studio icon and selecting Run as Administrator. The screen should blink and you should see the UAC prompt.
    • Always run Visual Studio as Administrator as follows. Right-click the Visual Studio icon, select Properties, go to the Compatibility tab, and enable "Run this program as an administrator."
    • Disable UAC (not recommended.)
    • Ignore the problem. It's not hurting anything.
    No matter which solution you choose, make sure you test the registration of the DLL on a clean machine when you build your installer.

    Disable Registration

    This was the solution that applied to my problem. My project used to be a COM object written using MFC. The COM object required registration and so the project was set to register the DLL after linking. My fancy new .Net object did not require registration, so there was no "DllRegisterServer" entry point in the DLL. I fixed the problem in the project properties under Linker / General by setting Register Output to No. Make sure you do this for both Release and Debug builds.

    Missing Dependency

    This problem happens because your DLL is dependent on another DLL that can't be located. It's easy for this problem to happen if your DLL compiles to one directory and a dependent DLL compiles to another directory.

    To diagnose this problem, use the Depends utility. Don't use the version that ships with Visual Studio; it's out of date. Download the latest version of Depends from http://www.dependencywalker.com/. When you run Depends, make sure you open the DLL that's in the target directory where the linker put it. Now look down the list in the middle pane and see what's marked with a yellow question mark. Ignore any modules with an hourglass; they are delay-loaded and don't cause a problem if they are missing.

    If the missing DLL is one of your DLLs, then update your path to include that DLL. Problem solved.

    If the missing DLL is one of the MFC or C Runtime DLLs (which start with MSVC or MFC, respectively), then life is much more complicated. You might be having a problem with WinSxs, which means there's an error in your manifest. These problems can be nasty to fix, as I described in this post.

    Finally, there was a red herring for my project when I ran Depends. The file MSVCR90D.DLL was marked as missing. I'm not sure what was causing this, but the DLL ran and loaded properly, so I'm suspicious that the problem was with Depends.

    Tuesday, April 29, 2008

    ReadyBoost - The Final Chapter?

    I'm very happy to say that Windows Vista SP1 finally cured the remainder of the problems with ReadyBoost that I'd been seeing. For the last 18 months I've been unable to put my system to sleep at night because it would kill the ReadyBoost drive, without which performance would suffer. This means that SP1 has fixes for USB above and beyond the USB patch from last year.


    For those wondering why I care so much about putting my system to sleep at night - it's the principle of the matter. Leaving lights on is throwing money down the drain. Leaving your computer on is too.

    Gigabit Ethernet and Vista SP1

    I've complained at length about the performance problems with Windows Vista and Gigabit Ethernet. In the most common scenario, Windows Vista severely throttles Gigabit Ethernet performance if you are playing audio or video in an application such as WinAmp or Windows Media Player. The problem is even worse if you have multiple network cards (including wireless cards).

    Service Pack 1 for Vista contains a workaround for the problem, but you have to modify the registry directly with RegEdit - there's no user interface to control the throttling. You can find a complete description of the change in SP1 at AnandTech, including instructions:
    http://www.anandtech.com/systems/showdoc.aspx?i=3233&p=2
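    For what it's worth, the value involved is, to the best of my recollection, the MMCSS NetworkThrottlingIndex setting; treat the snippet below as an assumption and verify the exact key and value against the AnandTech instructions before applying it. A .reg file to disable the throttling entirely would look roughly like this:

        Windows Registry Editor Version 5.00

        ; Assumed key - confirm against the AnandTech article before applying.
        ; The default value is 10; ffffffff turns the multimedia network
        ; throttling off completely.
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile]
        "NetworkThrottlingIndex"=dword:ffffffff

    You'll need to reboot for the change to take effect.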

    Thursday, March 20, 2008

    Entrepreneur Conference

    I attend a couple of meetings of entrepreneur groups each month. I've learned a tremendous amount from these meetings, particularly those run by TIE. It's a hardcore business group that's light on socializing and heavy on running and funding your business. Presentations I've heard at TIE events have dramatically influenced the way I've built my business and market products.

    Today I attended the Lucky Napkin conference, the first of its kind, by and for entrepreneurs. This conference was unabashedly pro-entrepreneur, a breath of fresh air in a world that generally considers entrepreneurs to be on par with "crazy inventors." While the overwhelming majority of attendees were focused on consumer goods, it was great talking to other people who were also facing the challenges of running businesses with a lean budget and fewer than a handful of employees.

    The founders of Lucky Napkin also consulted with Donny Deutsch, the host of "The Big Idea" on CNBC. It's a great show, as long as you remember that Donny features the success stories and that there are a lot more failures than successes.

    DVDs of today's Lucky Napkin conference are available from their web site.

    Sunday, March 16, 2008

    Choosing Web Development Software

    For the past eight years I've been using Adobe GoLive. I got started with GoLive because it was recommended by people I trusted and it was a lot faster than Dreamweaver. GoLive also did a good job of editing PHP and had a nice, simple template tool that allowed me to do WYSIWYG editing without having to hand-maintain dozens of files.

    Times have changed. The Windows version of GoLive is now horribly unstable and Adobe simply removes features instead of fixing them. I've started looking for a new tool. Our site has been PHP for several years, so it isn't worth the effort to switch it to ASP.net.

    I found Visual Studio 2005 to be okay for quick HTML edits, but it's rather clunky for editing HTML compared to more sophisticated tools. Out of the box, it also can't handle PHP. I tried VS.php, which is inexpensive, but the VS.php debugger never worked properly and their tech support was useless. After a year of trying, I gave up on VS.php in Sep 2007.

    Instead I settled on PHPed from NuSphere. It is a fantastic tool that I'd recommend to any PHP developer. Its debugger is very reliable. However, PHPed is purely a PHP development tool and doesn't do HTML editing.

    So I'm still looking for an HTML tool. Although Dreamweaver is the obvious choice, it's expensive to buy, expensive to support, and expensive to upgrade. So it wasn't my first choice. OTOH, I'm a Microsoft Certified Partner and so can use all of their development tools for free. When I saw Expression Web at the Professional Developers Conference in 2005, I initially had high hopes, but then I learned that it was restricted to Microsoft languages. Oh well.

    In February, Microsoft shipped Expression Web 2 Beta. Among other things, this update supports PHP. It's also easily the fastest tool I've used. Unlike Visual Studio 2005 and Adobe GoLive, you don't need to take a coffee break to start it up and shut it down.

    My initial reaction to Expression Web 2 was quite favorable. It's a beta, so the WebDAV support doesn't work with Apache and there are other teething problems, but those are okay. However, the deal killer is that you can't define the root directory, so root-relative URLs don't work. Microsoft has indicated they don't plan to fix this problem in this release. WTF?

    Concurrent to all of this, I tried iWeb on the Mac. It's a high-level tool designed for non-web designers. It generates completely unmaintainable code, but it works and the results look good. It can also do handy things like rotate images, add drop shadows, and perform other image operations that historically have taken twenty steps in an external image editing application such as Paint.NET or Photoshop. iWeb will also natively import Photoshop and Illustrator files. This is a huge win compared to most tools under Windows. iWeb also has some pretty good templates, in stark contrast to any software I've ever seen under Windows. So I've seen a vision of what things could be like with the best of both worlds, but I haven't seen a solution.

    So now I'm back to square one. I'll probably have to buy Dreamweaver. It's $1,000 for Creative Suite (CS3) Web Standard version or $1,600 for CS3 Web Premium. Ouch. To add insult to injury, there's no way to upgrade from GoLive to Dreamweaver, even though they're both made by Adobe. Thanks Adobe!


    Update 5/13/2008: Adobe has formally announced that GoLive has been discontinued. The good news is that they are offering an upgrade price to switch to Dreamweaver as well as a migration kit for converting GoLive projects. Read the details at:
    http://www.adobe.com/products/golive/pdfs/golive_eol_faq.pdf

    Thursday, February 28, 2008

    ReadyBoost with lots of RAM

    There's been quite a bit of discussion as to whether ReadyBoost does anything for you if you already have 2GB of RAM. On my system, I'm running 2GB of RAM and a 4GB thumb drive for ReadyBoost, and I believe ReadyBoost makes a dramatic difference.

    I'm hardly a typical user. Normally I have open: Outlook 2007, Visual Studio 2005, Virtual PC 2007 running Windows XP, and Internet Explorer with a dozen pages open. The Performance tab in Task Manager currently shows I have 1GB cached, which isn't much considering how much is running.

    Earlier this week my system started churning. Everything was taking a lot longer than it had the day before. It took me a while, but I finally realized that the lights weren't flashing on the ReadyBoost drive anymore. I rebooted, reset the ReadyBoost drive, and performance returned.

    So I don't have any concrete, measurable results that I can hold up as evidence, but the subjective experience is pretty clear. Many reviewers have tried to do synthetic benchmarks, such as loading five applications in a row. These tests don't do a good job of measuring what ReadyBoost is doing. Loading apps measures SuperFetch, not ReadyBoost. IMHO, ReadyBoost seems to shine when the system is under heavy memory load and you switch between applications.

    Monday, January 7, 2008

    Visual Studio 2008 Unusable with C++



    Update 8/12/2008: Microsoft has released Service Pack 1 for Visual Studio 2008, which resolves these problems. You can download it from:
    http://www.microsoft.com/downloads/details.aspx?FamilyID=27673c47-b3b5-4c67-bd99-84e525b5ce61

    Update 3/30/2008: Microsoft has finally released two hotfixes that address these problems. See the last post on this page:
    http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2799211&SiteID=1



    I've now spent a couple of weeks using Visual Studio 2008 for C++ development, and my conclusion is that it's unusable. Those of you who follow this blog know that I'm generally forgiving of Microsoft's errors, but this time there's no excuse.

    Our project is about 250,000 lines of code, a relatively small project by some measures. Every few builds, you get this error:

    LINK : fatal error LNK1000: Internal error during IncrBuildImage

    And the build crashes. If you rebuild, it succeeds, but this doesn't help much for automated builds that rely on a deterministic compile result. The problem only appears if you have incremental linking turned on, but disabling incremental linking makes debugging unbearably slow and painful.
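    For reference, the setting is Enable Incremental Linking under Linker / General in the project properties, which maps to the /INCREMENTAL linker switch. So the trade-off is:

        /INCREMENTAL      fast debug links, but occasionally dies with LNK1000
        /INCREMENTAL:NO   stable, but every debug link is a full link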

    I'm not the only one having this problem:
    https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=311732
    If you are reading this blog, please go to that page and register your vote for an importance of five stars.

    So I've just been living with that problem, since we're largely in a testing phase for VS2008 and automated builds aren't yet needed. Today I ran into another problem, and this one's a showstopper. The first time the compiler encounters a file that fails to compile because of a particular set of errors, the vc80.pdb program database file is corrupted and it's impossible to build any more files until you do a Rebuild All. Except that rebuilding everything recompiles the offending file, which corrupts the PDB all over again. Even if you try to compile that file by itself (using Build/Compile), the PDB file is still corrupted.

    This problem was not only known, it was reported during the beta process, yet it was never fixed:
    https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=308775
    https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=309462

    Both of the above problems have been open for 30 to 60 days. Perhaps a severity of "entire build fails catastrophically" isn't as serious as I think it is.