How to enable MaxCDN for a WordPress web site

A necessary disclosure: some of the links in this article contain our affiliate information. This means that if you follow such links and order a service from the company, we may receive a small commission for sending them a customer (you). Keep in mind, however, that we don’t give recommendations just because of the commission: we only recommend our partners because they are extremely good at what they do. We use their services ourselves!

If you have a WordPress-powered web site and you want to finally jump on that CDN bandwagon everyone is talking about, it’s very easy to set up.

If you have lived under a rock during the last few years and don’t know why you would need a CDN (Content Delivery Network), here is an elevator pitch: a CDN speeds up the delivery of static files (such as images, CSS, and JavaScript) and makes your web site respond faster. And a faster web site is good not only for your human visitors, but also for SEO (the faster your web site, the better it will rank in the Google search results).

One of the more popular CDN providers is MaxCDN, which we’ve been using for our primary web site for about a year now, and are very happy with. It offers a control panel that you can use to set everything up manually, but if your web site is running on WordPress, there is a WordPress plugin that does all the necessary configuration for you. Here is what to do, step by step:

0. (Optional) If you are familiar with HTML and want to see exactly what the CDN will do to the HTML code of your web pages, you may want to save the page source of your main page, to be able to compare it with the new code after enabling the CDN. Here is how the HTML source of this web site looked before we enabled the CDN for it:

The page source before enabling MaxCDN

1. Open an account with MaxCDN. It’s easy and free, and they offer a free trial. There is also a 25% off offer for new customers, if you want to take advantage of that.

2. Install the W3 Total Cache plugin into your WordPress site. After you activate it, it will speed up your site out of the box, and one of the options it offers on its menu is CDN integration:

Total Cache plugin offers the CDN option on its menu

3. Select CDN from the menu on the left, and Total Cache should ask you to authorize access to your MaxCDN account:

Authorize access to the MaxCDN account

Press the Authorize button, and you will be taken directly to a special web page at MaxCDN with your authorization key pre-formatted. Copy it, go back to your WordPress admin panel, and paste it there. Press the Save all settings button, and on to the final step:

Create and select a pull zone to serve your files from the CDN

4. Create a pull zone (by pressing the Create pull zone button, of course):

Create a pull zone

What is a pull zone, you might ask? Think of it as a special CDN service that pulls files from your main server and delivers them to the end user when the user opens your web page. MaxCDN also supports push zones, which work in much the same way as FTP servers, but we won’t use them here for now.

Anyway, for this web site I have entered softblog as the name for the pull zone, to distinguish it from the other pull zones I had previously created for other web sites. After the zone is created, make sure it is selected on the CDN page of the Total Cache plugin, as shown above. Press the Save all settings button once again, and your CDN is now set up!

If something is wrong, look at the top of your WordPress admin page: Total Cache is good about detecting problems and telling you exactly what to do to fix them. If no problems are reported, congratulations, all done, your WordPress site now uses MaxCDN!

5. (Optional) Look at the page source of your web site again; ours has the following:

Page source after MaxCDN is configured and enabled

Compare it with the page source you saved before enabling MaxCDN. See the difference? The CSS file is now served from the CDN pull zone, and the same should be true for other static files and images. From now on, if someone from the other half of the globe opens your web page, they don’t need to wait for the static files to travel all the way around the world to reach them: MaxCDN will deliver the files to the user’s browser from the closest location.
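For illustration, the kind of change to look for in the page source is a rewritten URL on the static files. The host names below are made up; your actual pull zone domain depends on your MaxCDN settings:

```html
<!-- Before enabling the CDN: the stylesheet comes from your own server -->
<link rel="stylesheet" href="http://www.example.com/wp-content/themes/mytheme/style.css" />

<!-- After: W3 Total Cache rewrites the URL to point at the pull zone -->
<link rel="stylesheet" href="http://cdn.example.com/wp-content/themes/mytheme/style.css" />
```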

Happy word pressing!

Software Marketing Glossary from DP Directory

If you have not seen it yet, check out the Software Marketing Glossary created by Al Harberg, who runs the press release distribution company DP Directory. It is an impressive collection of short (and some not-so-short) articles describing the different terms used in the software marketing business, neatly organized and illustrated. It also has quite a few reviews of marketing books. I thought that after being in the software business for so many years I knew everything related to software marketing, yet while browsing the Software Marketing Glossary I discovered quite a few things I didn’t know about. If you are a software developer who is just starting to market your products, this glossary could be a good starting point to become familiar with all the different things involved in marketing your digital products.

By the way, I’ve been using DP Directory to distribute our press releases for more than a decade now, and Al’s service has always been top-notch. (And no, Al does not pay me a commission for this recommendation; I’m just a satisfied customer.) In addition to press release distribution, Al also offers several other marketing services, including web site reviews, copywriting, SEO, and so on. I’ve used some of them as well and was very satisfied with the results. Check it out!

Getting ready for ISVCon 2012

In less than two months I will be at the ISVCon conference in Reno, Nevada. If you are an independent software developer (or just thinking about becoming one) and have not registered for ISVCon yet, it’s not too late: go to the ISVCon registration web page and use the coupon code AB2012 to get a 10% discount. (No, I don’t get a commission; I’m just a happy participant of past such conferences.) Note: better not to delay the registration; do it before the end of June, because in July the registration fee increases quite considerably.

During my career as an independent software developer I’ve been to quite a few such events in the past, and I’ve always found the presentations informative and useful. This year’s schedule is no exception and I’m looking forward to attending quite a few of the sessions, on many different topics, including marketing, mobile development, modern trends in the software business, and so on.

I’m also looking forward to the social events, and to the opportunity to meet other developers face to face. It’s one thing to read what other people write on their blogs or in the ASP forums, but it’s an entirely different thing to actually talk to them! Besides, if the conference is within driving distance, my wife and I always take the opportunity to take a few days off and make the drive to and from the conference fun, too.

Here are a few random photos from the past software events I attended:

The ice sculpture that Tucows put up at their Software Industry Conference booth in 2000, in Florida:

Bob Ostrander brought his potato bazooka to the shareware schmooze in 1999 in Columbus, OH:

The women of software are beautiful (Software Industry Conference in Denver, 2005):

Software developers can be fun (Software Industry Conference in Denver, 2005):

Yours truly, “standing on the corner in Winslow, Arizona”, on the way back from the Software Industry Conference in Denver, 2007:

Once again, if you have not registered for ISVCon yet, don’t miss your chance: go to the ISVCon registration web page and use the coupon code AB2012 to get 10% discount. See you there!

Solution for the slow network for Virtual PC on Windows 7

(Note: the following is a long version of the story. If you want the short version, just the solution, skip to the last paragraph below.)

As I wrote before, I use virtual machines extensively to do work on my computer. I had used Windows XP Pro as the host OS for several years, and it worked quite well. I skipped Vista and kept using XP, mainly because it did not seem like Vista would add any significant benefits to my host computer. However, a few months ago Windows 7 became available, and while testing my products with it, I was so impressed with its speed, stability, and look and feel that I decided it was time to upgrade my main host computer to Windows 7.

So, a week or so ago, after backing everything up, I took the plunge and installed a fresh copy of Windows 7 Ultimate. It went well, and I was happy. Of course, the first application I added was Virtual PC, because I needed it to run the virtual machines that do the real work for me. That also went well, except for a few surprises, such as the new user interface of the Virtual PC console, which looked like a regular folder rather than a separate program. It also upgraded the integration components of my virtual machines and, as a result, started using Remote Desktop to display the virtual machine desktop. That added the ability for the virtual machines to recognize USB drives attached to the host, but at the same time it downgraded the display capability of the virtual machines to 16-bit colors only, which caused the fonts on the virtual displays not to be anti-aliased quite as nicely as before.

Those were minor things, though, and after trying my virtual machines for a couple of days, I decided I could live with the new version of Virtual PC. One thing did bother me, though: when I tried to browse the shared network folders from within the virtual machines, the browsing was quite slow. Literally, it took a few seconds just to navigate from a directory to a subdirectory. It was especially bad if the directories contained a lot of files. Copying files over the network was painfully slow, too. However, the network was slow only when using it from within the virtual machines. Outside, the network was as fast as it was before: I could browse the virtual machines from my host computer, and connect to other “real” computers from it, the speed was as usual.

I searched the web and found a few reports from people describing the same problem, but no real solution. The only suggestion was to replace the new version of Virtual PC software with the previous one, Virtual PC 2007. Although Microsoft does not officially support Virtual PC 2007 on Windows 7, a few posts I found suggested that it was quite possible to install and use VPC 2007 on Windows 7.

After contemplating it a bit, I decided that having fast network access from within the virtual machines was worth the trouble and did just that: uninstalled the new Virtual PC, and installed the previous VPC 2007 with SP1. The good news was that even though the virtual machines were previously updated with the new version of the integration components, they kept working well with VPC 2007, as before: the full 32-bit colors of the display were available, the old console window was back, and most importantly, the network access was as fast as before.

I was happy for a couple of days, until I noticed a strange problem happening: after using the virtual machines for some time, while switching back and forth between them and the host computer, at some point the virtual machines would stop accepting the TAB and ESC keys. That was a new problem that I did not experience before. Again, I started searching the web for a solution, and found a couple of suggestions, such as the one about creating a local security policy for the file VPCKeyboard.dll, but none of the suggestions worked: after several minutes, the TAB and ESC keys would stop working in the virtual machines (all of them at once), and the only way to restore them was to shut down all virtual machines, and restart the Virtual PC console application. Then work for a few minutes and do the restart again. Needless to say, that was extremely annoying.

Having spent two days trying every possible thing I could come up with, including searching for updated drivers, reinstalling the virtual machine additions, trying alternative ALT+TAB managers, turning the Aero theme on and off, and so on, I decided that having the slow network was not as bad as it seemed after all. I removed VPC 2007 and reinstalled the new version of the Virtual PC software. The TAB/ESC keys problem went away, the slow network access returned, and I started searching the web for a solution again. By accident, I came across an old Microsoft support article that applied to Virtual PC 2004 and Virtual Server 2005 and mentioned a solution to a slow network access problem similar to what I had experienced. Out of desperation, I decided to give it a try, and … it worked! I guess this problem was fixed in VPC 2007, but resurrected in the new version of Virtual PC (old bugs die hard, it seems). Anyway, here is what solved the slow network access in Virtual PC for me (from the Microsoft support article, Method 2):

– On the host computer, back up the Registry, just in case, and then:

– Run Registry Editor (regedit.exe) and select the following key:


– Create a new DWORD value named DisableTaskOffload and set its value to 1.

– Restart the computer.

After I did the above, the network speed from within the virtual machines became as fast as it was before. Hope this helps someone!
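If you’d rather not edit the Registry by hand, the steps above can be captured in a .reg file and imported by double-clicking it. The key path below is the standard TCP/IP parameters key where the DisableTaskOffload value is documented to live; verify it against the Microsoft support article before importing:

```reg
Windows Registry Editor Version 5.00

; Disable TCP task offloading on the host's network stack
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"DisableTaskOffload"=dword:00000001
```

A restart of the host is still required for the change to take effect.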

Did you know? Our USB Encryption Software can protect any USB flash drive or any other external drive with a password and strong encryption. Try it free for 30 days! Read more…

Getting online in New Zealand

It’s March of 2009 and I’m in New Zealand, on a month-long expedition, driving from town to town, ferrying from island to island, trying to understand what kind of country it is. (I like it so far! Except for the sand flies.) As you can imagine, staying connected to the Internet is critical for my kind of business: I need to get online at least once or twice a day, or face a wave of angry messages from my customers. My trip is coming to an end soon, and since I’ve been through quite a few different places, I thought I would share my findings here; hopefully someone will find them useful when traveling to NZ.

Most cafes and hotels I stayed at had computers in their lobbies with an Internet connection. If your online needs are modest (you need to check your Gmail account, browse the web, or talk to someone via Skype), such computers should do just fine. Many have coin slots attached, and for about NZ$2 per 20 or 30 minutes you can get online. The problem with such computers is that usually quite a few people want to use them, and you may spend some time waiting in line. The big advantage, though, is that you don’t need to bring your laptop with you, which could be important if you are traveling light.

Myself, I needed to bring my laptop with me, because it’s an integral part of my business: I need it to access my customer database, and I also need at least a light version of my development environment in case a need for a quick bug fix arises. This means that taking the easy road and using the online computers described above was not an option: I needed full wireless access to the Internet.

Most hotels did offer a wi-fi access option, but not all: you’d better ask specifically about wireless Internet availability before committing to a place. Keep in mind that even if a hotel advertises the wi-fi option, it does not necessarily mean it is actually available: in more than one place the wi-fi router was online but had trouble accepting connections. More than once I had to ask for a refund of the fee I had paid and go searching for a better wi-fi signal in the nearby cafes.

The common wireless access providers that I’ve encountered in more than one place were as follows:

Zenbu

This provider charges for the bandwidth rather than for the connection time. At first I was not sure how much bandwidth I would need; the usual 50MB that you could buy for NZ$5 did not seem like a lot. It turned out to be the most economical access that I encountered. To minimize the bandwidth, I turned off automatic Windows and anti-virus updates, shut down various programs that maintain a constant Internet connection (Skype, RSS reader, etc.), and kept active only the programs I actually needed at the time. In that mode, I was consuming less than 10MB per hour doing just email checking and light web browsing. However, when I needed to talk to someone via Skype, the consumption quickly rose to about 1MB per minute. Still, it was pretty cheap.

One thing to keep in mind about Zenbu is that if you buy your access ticket at the reception, you can use that ticket only at that specific place, you cannot roll it over to another location. However, if you pay directly to Zenbu via their web site, you can use that bandwidth at any location.

The Internet Access Company, The Internet Kiosk, SiteWifi

These providers charge for the time used (they also have bandwidth limits, but I never reached them). The important thing when using them is not to forget to log off from the wireless access. That way, you can keep the remaining minutes and use them at another place serviced by the same provider. If you forget to log off, the minutes may keep rolling away, even if you are no longer connected to the Internet. These were not as economical as Zenbu; their prices were in the range of NZ$3 to NZ$8 per hour.


iPass

This was a complete waste of time. Before going on this trip, I tried to prepare myself: I searched the Internet for the available Internet providers in NZ, and I came across the iPass service. From their web site it appeared they had a lot of access points throughout New Zealand. They offered a Global Wi-Fi account for US$45.00/month with no commitment requirement, and it looked like a good thing to have. So the day before departure I opened an iPass account.

I had the first (and only) opportunity to use my iPass account in Auckland, at the Hyatt Regency hotel. I powered up my laptop, opened the web browser, and indeed there was an option to log in with my iPass account. Unfortunately, my login name and password were not accepted. After trying a few times, I gave up and went to a nearby cafe to connect via some other provider.

That happened to be the only place I encountered that offered the iPass login option. None of the other places I stayed at had it. So I contacted iPass about canceling the account and getting a refund. It took one email message and two phone calls, but I did receive the refund. Still, the iPass experience turned out to be just a waste of my time and energy. I guess I should take it easy and not get ‘overprepared’ next time.

Private providers

Some places offered private connection options; that is, the time you buy there can be used only at that specific location and nowhere else. These were obviously the least flexible options, but what can you do if there is nothing else available? The prices were usually between NZ$3 and NZ$6 per hour, and one place offered 24-hour access for NZ$8, which was pretty good.

Even if the reception confirms that they have a working wireless access point, it’s better not to hurry to buy a ticket at the reception. Before doing that, I would usually try to connect from the laptop and see what kind of access was available at that particular place. Chances were, the access would be provided by one of the common companies (see above) with which I had already opened an account at one of the previous places. If so, I could use the leftover minutes before paying for more time. Only if there was no option to connect and pay via the web browser would I go and buy the access time at the reception.

Happy connecting!


Getting through the Google “winter”

Winter came early to the usually sunny and warm south-west Utah this year. First, an early frost came in the beginning of October, killing the tomatoes my wife was so fond of, as well as quite a few of her other outside plants. Then, at about the same time, I got an email from Google stating: “Removal from Google’s Index … In order to preserve the quality of our search engine, pages from are scheduled to be removed temporarily from our search results for at least 30 days.” WHAT??? It took me a few seconds to understand what it was about: Google had decided to penalize my main business web site for something that violated its terms of service. Considering that Google was a significant source of visitors to the web site, my heart skipped a few beats. It looked like we were up for a long, cold winter this year.

I went to review the source pages of the web site right away, to see what exactly had triggered the Google penalty. It turned out I had a paragraph of text on the page enclosed in the tags <div class=”description”>…</div>. Nothing bad by itself, until you pull up the style.css file and see that the definition for the class “description” was: .description { display: none; } In other words, the text between the tags was not displayed on the page. Yet it was visible to the googlebot. A clear violation of Google’s terms, indeed!

How did I come to have that piece of hidden text on the web page? It happened a few years ago, when I was doing the last redesign of the web site. I was playing with different style sheets, one designed to display the web page on the screen, and another to be used when the page was sent to the printer. I was using the display: none; attribute in the printing style sheet to suppress the printing of non-essential elements (like menus, which have no use when printed on paper). While playing with the different layouts, I forgot about one invisible piece of text I had left there. And now, several years later, Google finally figured out that the text was hidden and decided to penalize me.
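For reference, a safer way to get printer-specific styling, one that leaves no invisible text in the screen view for the googlebot to find, is to confine display: none rules to a stylesheet loaded only for printing via the media attribute (file names here are hypothetical):

```html
<!-- Screen stylesheet: no hidden elements here -->
<link rel="stylesheet" media="screen" href="style.css" />

<!-- Print stylesheet: it is fine to hide menus and other
     non-essential elements when the page is printed -->
<link rel="stylesheet" media="print" href="print.css" />
```

With this setup, print.css can contain rules like .menu { display: none; } without affecting what visitors or search engines see on screen.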

Once I realized what the problem was, I removed the hidden text from all pages and submitted a reconsideration request to Google. They replied: “Please allow several weeks for the reconsideration request.” Several weeks! As if the stock market crisis was not enough bad news at the time.

Then, after a few days passed, the frost was gone and the usual warm fall returned to Utah. One morning, while checking the Google search results for the N-th time, I saw my web site back in the results. Hurray, Google had lifted its punishment and allowed my web site back into its index! It turned out that instead of “at least 30 days” or “several weeks,” the penalty lasted only about 6 days. (Of course, I’m not complaining!)

Looking at the web site statistics, here is how the Google penalty affected the traffic:

Google "winter"

The decrease in traffic was noticeable, but not devastating. Why? Because the “organic” Google traffic, although significant, was not the primary source of visitors to my web site. What was the primary source then, you might ask? Sorry, I’m not telling (I have a lot of competition!) Let me just mention that “word of mouth” traffic, that is, people telling their friends and colleagues about my products, web sites recommending my products to their users, and other similar sources, plays a very prominent role in keeping my business alive.

Even though the Google punishment was light, I promised myself that I would pay better attention to the web pages in the future and make sure I don’t do something stupid that would trigger the Google “winter” again. It may not be so short next time.



Is GPL software free as in “free love”?

I was contemplating the other day how to extend AB Commander to make it work with several third-party software products. For example, it would be cool to add some support for 7z files, which are created by the file compressor 7-Zip. It shouldn’t be too difficult, I thought: 7-Zip is an open source project, so I could examine its source code to see how exactly the 7z files are handled, and maybe reuse some of its code in AB Commander?

Not so fast. 7-Zip is licensed under the GPL license, which strictly forbids the re-use of its source code in non-GPL’d projects. If I want to use some of the GPL’d code in my own software, I need to convert my whole project to the GPL license and open its entire source code, something I don’t want to do.

After thinking some more, I came up with an idea: I’d make a separate module (a plug-in) that would serve as a bridge between my closed-source project and the GPL’d project. I would make the plug-in GPL’d (because it would need to use some of the GPL’d code), and publish its source code.  My closed source project would link to the plug-in, the plug-in would link to the other GPL’d project’s code, and everyone would be happy: the users of AB Commander would get additional software functionality, the plug-in would be GPL’d, and I would keep the source code of AB Commander closed. Pretty smart, huh?

Again, not so fast. Let’s browse through the GPL FAQ, which explains the issues related to linking between proprietary software and GPL’d software (or, as they put it, “free” software, where “free” is as in “free speech”). For example, the answer to the question Can I release a non-free program that’s designed to load a GPL-covered plug-in? reads: “In order to use the GPL-covered plug-ins, the main program must be released under the GPL or a GPL-compatible free software license…”

Huh? I could understand the requirement to GPL any code derived from GPL’d code; it’s their code and they are free to restrict its re-use any way they want. But to require two separate modules dynamically linking to each other to both be covered by the GPL if only one of them is GPL’d? That seems like asking a bit too much. Especially considering that GPL programs have no reservations about linking to proprietary software: take any software for Windows, GPL or not, and it’s linking to Microsoft’s proprietary system libraries. It looks like the philosophy of the “free software” movement is that it’s OK to take advantage of the evil and dirty proprietary software, as long as that software is not trying to link back and take some advantage of the GPL’d software.

Not fair!

Update: Apparently I am not the only one who thinks so:

Another update: Just found the following article with a legal analysis of this very problem, that confirms the “absurdity” of the GPL’s approach to the software plugins:

Vista Elevator 2.0

(This post is moved here from the TweakUAC web site, where it was first published on February 27, 2007)

Vista Elevator 2.0 is an updated version of the sample application Vista Elevator that uses a different approach to solving the problem of starting a non-elevated task from an elevated one.

The first version of Vista Elevator used a trick that created a non-elevated process by programming Vista Task Scheduler to start such a process immediately upon its registration. It worked even if the parent process was elevated. However, there were a few problems with that:

  1. It worked well when the process was started by an administrator (that is, by an account with a “split token”). However, if the account was of a standard user (or a Guest account) it did not work as expected: the secondary non-elevated process was created by Task Scheduler to be executed in the administrator’s context, rather than in the original context of the standard user account. The task would launch not when it was registered, but later on, when the administrator logged on to the system.
  2. The target machine could have Task Scheduler disabled. In such a case, this method would fail to start the secondary non-elevated task at all.

To solve these problems, a different approach is necessary. An obvious method of achieving the goal would be to have a separate helper executable that would help the main application launch a non-elevated task, when necessary. Specifically, it would work as follows:

  1. When a user wants to run the application (main.exe), s/he would start by launching the helper executable (helper.exe) first.
  2. The helper process would start non-elevated, but it would launch main.exe and request it to start elevated (for example, by using the RunElevated() function).
  3. After the administrator approved the launch of main.exe, the user would work with it as usual. Helper.exe would keep running non-elevated, waiting for a signal from main.exe.
  4. When main.exe would need to start a non-elevated task, it would send a signal to helper.exe, using some sort of inter-process communication, and helper.exe would start a non-elevated process on main.exe’s behalf.

Such an approach would solve both problems described above: it would not require Task Scheduler to be running on the target system, and it would launch the non-elevated task in the context of the original user, whether it is an administrator, a standard user, or a guest.

What is not good about this approach is that it requires a separate helper process to be running all the time, wasting CPU cycles. It also requires designing a communication protocol between the helper and the main executable, which is not a trivial task and is subject to errors. Wouldn’t it be better if we could use some other non-elevated process already running on the target system to launch a non-elevated process on main.exe’s behalf? Let’s see… There actually is a process that is guaranteed to run all the time while the user is logged on to the system: the Windows shell! And it runs non-elevated, just what we need. It seems like a perfect candidate for our helper process. But how can we ask the Windows shell to launch a process on our behalf? Simply calling ShellExecute() or CreateProcess() would not work, because they would be executed by our process, not by the shell. What we need to do is inject our code into the shell process and make it launch a process on our behalf!

So, the plan of the attack could be as follows:

  1. Our process would find a window that belongs to the shell and that is guaranteed to be available at any time. A good window for this purpose is Progman, which is responsible for displaying the desktop. We can call the FindWindow() API to obtain a handle to this window.
  2. Our process would call the RegisterWindowMessage() API to register a unique message that we would use to communicate with the shell’s window. It must be unique to avoid possible conflicts and side effects if we accidentally picked a message that is already used by the shell for some purpose.
  3. Our elevated process would call SetWindowsHookEx() API to install a global hook, to be invoked when a windows message gets processed by any process running on the system.
  4. Once the hook is installed we would send our unique message to the shell’s window, and that would make our hook procedure to get invoked. (That’s how we inject our code into the shell’s process!)
  5. When the hook procedure is called (in the context of the shell process), it would call ShellExecute() API to launch the non-elevated process that we need. The process would start non-elevated because the shell’s process is not elevated, and our process would inherit the shell’s elevation level.
  6. Finally, we would remove the hook, as we no longer need it and it should no longer be called and waste system resources and CPU cycles.
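The six steps above could be sketched in C++ roughly as follows. This is an illustrative sketch only, not the actual implementation: error handling, the EXE/DLL boundary, and the shared data are simplified, and names such as HookProc, RunNonElevatedSketch, and the message string are invented for this example.

```cpp
// Illustrative sketch only: the real, error-checked implementation is the
// RunNonElevated() function in VistaTools.cxx.
#include <windows.h>
#include <shellapi.h>

// In the real application these live in a data section shared between
// processes; here they are plain globals to keep the sketch short.
HHOOK g_hHook = NULL;
UINT  g_msg   = 0;

// Lives in the helper DLL; Windows loads the DLL into the shell's process.
LRESULT CALLBACK HookProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    CWPSTRUCT* pcwp = (CWPSTRUCT*)lParam;
    if (nCode == HC_ACTION && pcwp->message == g_msg)
    {
        // We are now executing inside the shell's (non-elevated) process,
        // so the launched program inherits the shell's elevation level.
        ShellExecute(NULL, NULL, TEXT("notepad.exe"), NULL, NULL, SW_SHOWNORMAL);
    }
    return CallNextHookEx(g_hHook, nCode, wParam, lParam);
}

// Called from the elevated EXE; hDll is the handle of the helper DLL.
void RunNonElevatedSketch(HINSTANCE hDll)
{
    // 1. Find a window guaranteed to belong to the shell.
    HWND hShellWnd = FindWindow(TEXT("Progman"), NULL);

    // 2. Register a unique message so we don't clash with the shell's own.
    g_msg = RegisterWindowMessage(TEXT("VistaElevator.RunNonElevated"));

    // 3. Install the hook. The post describes a global hook (dwThreadId = 0);
    //    scoping it to the shell's thread, as here, injects only into the shell.
    DWORD dwShellThread = GetWindowThreadProcessId(hShellWnd, NULL);
    g_hHook = SetWindowsHookEx(WH_CALLWNDPROC, HookProc, hDll, dwShellThread);

    // 4./5. Send the message; HookProc runs inside the shell and launches
    //       the program on our behalf.
    SendMessage(hShellWnd, g_msg, 0, 0);

    // 6. Remove the hook so it no longer wastes resources.
    UnhookWindowsHookEx(g_hHook);
}
```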

That’s the plan that is implemented as the RunNonElevated() function in the VistaTools.cxx file that VistaElevator 2.0 uses. To make it work, the design of the VistaElevator application had to be changed significantly:

Firstly, in order to be able to install a global hook, the hook procedure must reside in a DLL. This means that we can no longer ship a single executable; we must create a DLL to go with it as well.

Secondly, in order to be able to pass data from our process to our code injected into the shell’s process, we must set up a special data section that is shared between the processes.
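With the Microsoft compiler, such a shared section can be declared along these lines. This is a sketch: the section name and the variable names are illustrative, not the actual ones used in VistaElevator, and the data_seg pragma is MSVC-specific.

```cpp
#include <windows.h>

// Variables placed between the data_seg pragmas land in a named section.
// They must be explicitly initialized, or the compiler puts them into the
// ordinary uninitialized-data section instead.
#pragma data_seg(".shared")
HHOOK g_hHook = NULL;   // hook handle, visible to every process loading the DLL
UINT  g_msg   = 0;      // our registered window message
#pragma data_seg()

// Tell the linker to make the section Readable, Writable, and Shared.
#pragma comment(linker, "/SECTION:.shared,RWS")
```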

Finally, to be able to use this method on both the 32-bit and 64-bit versions of Vista, we must produce two separate builds, one 32-bit and one 64-bit. The reason is that on 64-bit Vista the shell is a native 64-bit process, and in order to hook it, we need to use 64-bit code, too.

To see the details, use the download links below:


(the compiled executables only, without the source code)

(the source code, a Visual Studio 2005 project)

Note: If you want to compile the source code on your own, make sure you have the latest Windows SDK (see for more information).


The first bug in Vista UAC?

(This post is moved here from the TweakUAC web site, where it appeared first on January 1st, 2007).

I believe I’ve stumbled upon the first bug in Vista UAC (in the final release of Vista, not in a beta version).

It’s very easy to see the bug in action:

  • Login to your computer with the Guest account. (You may need to enable the Guest account first, using the Control Panel).
  • Download any digitally signed program (such as TweakUAC), save it to the default download folder (C:\Users\Guest\Downloads).
  • Now run the file you’ve just downloaded, and take a look at the elevation prompt displayed:

As you can see, UAC cannot recognize that the file contains a valid digital signature, and it warns you that the program is “unidentified”. This is a bug, because you can check that the digital signature of the file is actually valid:

This problem is not limited to the TweakUAC file: any other digitally signed executable (such as the installation utilities of most software packages) will produce the same effect. All you need to do to reproduce this bug is log in to Vista with the Guest account and run a digitally signed file from the Guest\Downloads folder. Note that if you copy the executable into the C:\Program Files folder and run it from there, its digital signature magically becomes recognizable by UAC! Move the file to the root folder C:\, and it again becomes unidentified to UAC.

Is this bug dangerous? Yes, it is! The whole idea behind UAC is to shift the responsibility of distinguishing the bad programs from the good ones to the end user (you!). The only tool that UAC gives you in this regard is the digital signature information, and it turns out it’s broken! How are you supposed to make the decision whether to trust a certain program or not if UAC does not provide you with the correct information? (Nevermind, it’s a rhetorical question).

Andrei Belogortseff

WinAbility Software Corp.


21 Responses to “The first bug in Vista UAC?”

  1. Myria Says:

    I believe that this is because there is an NTFS fork on the directory that says that anything in that directory shouldn’t be trusted. This is similar to XP in how it knows that a file was downloaded recently by IE.

  2. Soumitra Says:

    Hi Andrei,

    Have a quick question about TweakUAC. Can I suppress UAC messages only for a single application using TWeak? Or does it suppress all UAC messages, system wide?




  3. Andrei Belogortseff Says:

    Hi Soumitra,

    > Can I suppress UAC messages only for a single application using TWeak?

    No, it’s impossible.

    > Or does it suppress all UAC messages, system wide?

    Yes, that’s how it works.


  4. Andrei Belogortseff Says:

    Hi Myria,

    > I believe that this is because there is an NTFS fork on the directory that says that anything in that directory shouldn’t be trusted. This is similar to XP in how it knows that a file was downloaded recently by IE.

    It may very well be so, but it does not make it any less of a bug. If a file contains a valid digital signature, Windows should not misrepresent it as coming from an unidentified publisher.


  5. Chris Says:

    How did you take a screenshot of the UAC? I can’t get Print Screen to copy it to the clipboard, and the snipping tool isn’t working either.

  6. Andrei Belogortseff Says:

    Hi Chris,

    > How did you take a screenshot of the UAC? I can’t get Print Screen to copy it to the clipboard, and the snipping tool isn’t working either.

    Those tools don’t work because UAC displays its messages on the secure desktop, to which the “normal” user tools have no access. To solve this problem, I’ve changed the local security policy to make the UAC prompts appear on the user’s desktop. After that, I used the regular Print Screen key to capture the screenshots.
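    For reference, that policy maps (assuming Vista’s standard registry layout) to a single registry value; the one-liner below is a sketch, to be run elevated and at your own risk, and setting the value back to 1 restores the secure desktop:

```shell
:: Policy: "User Account Control: Switch to the secure desktop when
:: prompting for elevation". 0 = show UAC prompts on the normal desktop,
:: where Print Screen and the snipping tool can reach them.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v PromptOnSecureDesktop /t REG_DWORD /d 0 /f
```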

    Hope this helps,


  7. Matthew Bragg Says:

    Hi Andrei,

    I sell software to a *very* non-technical customer base. My setup procedure includes installation of an .ocx file into the \windows\system32 folder and registration of it using regsvr32. In order to copy anything into the \windows\system32 folder under Vista I have to turn off UAC. I would like to be able to do this automatically, programmatically, so I don’t have to make my users mess with UAC. I’d like to be able to turn off UAC for a second or two programmatically, then turn it back on. Will your software enable me to do that?


  8. Andrei Belogortseff Says:

    Hi Matthew,

    > I’d like to be able to turn off UAC for a second or two programmatically, then turn it back on.

    Unfortunately it’s impossible: if you enable or disable UAC, Windows must be restarted before the change takes effect.

    To solve your problem:

    > In order to copy anything into the \windows\system32 folder under Vista I have to turn off UAC.

    It looks like your setup process is executing non-elevated; that’s why it cannot do that. You may want to try starting it elevated and see if that solves the problem without turning off UAC.
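    For completeness, the usual way to make a setup program start elevated is to mark it with an application manifest that requests elevation; here is an illustrative fragment (the enclosing assembly element and the rest of the manifest are omitted):

```xml
<!-- In the application manifest embedded in the setup executable -->
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
  <security>
    <requestedPrivileges>
      <!-- Makes Windows show the UAC prompt before the program starts -->
      <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
    </requestedPrivileges>
  </security>
</trustInfo>
```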



  9. caz Says:

    don’t use TweakUAC because this program makes your Vista unsafe!

  10. Herbys Says:

    > How are you supposed to make the decision whether to trust a certain program or not if UAC does not provide you with the correct information? (Nevermind, it’s a rhetorical question).

    The answer is you are not. A guest should not be allowed to make any decision about installing software. If you log on as a valid user, the prompt works just fine. If you log on as a guest, you shouldn’t be installing software, so any dire warning is fine.

    Yes, this might be unintended behavior (or perhaps it is not), but its impact is null.

  11. Andrei Belogortseff Says:

    Hi Herbys, you wrote:

    > The answer is you are not. A guest should not be allowed to make any decision about installing software.

    Sorry, but you are missing the point: the UAC displays this information for the _administrator_ to use and to make a decision, not for the guest user. The administrator is supposed to review the information and enter his or her password to approve the action. Take a look at the screenshot and see for yourself.

    > If you log on as a guest, you shouldn’t be installing software

    Why shouldn’t I? What if I want to install a program for use by the guests only? For example, I use only one web browser (IE), but I never know what browser a guest may want to use. So, being a good host :-) I want to install also Firefox and Opera, but I don’t want them to clutter my desktop, etc., I want them to be used by the guests only. To achieve that, I would log in to the guest account and install the additional browsers from there.

    > so any dire warning is fine.


    > Yes, this might be unintended behavior (or perhaps it is not), but its impact is null.

    May be, may be not. In any case, it does not make it any less of a bug!

  12. Timothy Conner Says:

    Is there any plan to adapt your program into a Control Panel Applet? I think that would be very clever.

  13. Andrei Belogortseff Says:

    Hi Timothy, you wrote:

    > Is there any plan to adapt your program into a Control Panel Applet?

    No, we don’t have such plans at this time, sorry.

  14. Mike Says:

    Anyone know why Vista won’t let me rename any new folder?

    The permissions are all checked for me as administrator, still I get an error message, “folder does not exist”. I can put things in folder and move it, but can’t rename it?

  15. Bob Says:

    To be honest, I have always thought that digitally signing was merely a way of generating more revenue. It doesn’t offer you any more security and windows will always moan at you regardless of an application having a signature or not.

    Even if your application has the “all powerful” and completely unnecessary Windows Logo certification, it still offers nothing to you as a user other than the reassurance that the person/s developing the software has a lot of spare cash.

  16. Thomas Says:

    Bob Said:

    > To be honest, I have always thought that digitally signing was merely a way of generating more revenue.

    I have to agree. I’ve heard the argument of how it’s all designed to protect users from malicious software, and that’s all well and good as far as that goes — but since Vista, and most mobile OSes, don’t offer a way for users to say “okay, I understand the risk, I accept full responsibility, please go ahead and run this unsigned application without restrictions, and never bother me again when I try to run this application”… That makes it pretty clear it’s just a racket initiated by VeriSign and the like, and happily endorsed by Microsoft.

  17. Andrew Says:

    This is not a bug.

    The first screen shot shows that Windows doesn’t trust the identity contained in the certificate. In other words, “I can read this, but I don’t know if I should trust the person who wrote it.”

    The second screen shot just shows that the certificate is well-formed, that Windows can understand the information contained within it. It says nothing about what Windows will do with that information.

    Who did you pay to sign the certificate for you? If they’re not someone with a well-established reputation, then I don’t WANT my computer to automatically trust them.

    It’s just like how web browsers automatically trust SSL certificates signed by Thawte or Verisign, but will ask you before accepting a certificate from Andy’s Shady Overnight Certificate Company. As always, it’s a balancing act between usability and security.

  18. Andrei Belogortseff Says:

    Andrew, you wrote:

    > This is not a bug.

    OK, there is a fine line between a bug and a feature, so let’s assume for a moment that it’s a feature rather than a bug. If so, what benefit is this feature supposed to provide? As the second screen shows, the file is digitally signed, and Vista can detect that. Yet it shows the publisher as “unidentified” on the first screen. Note also (as I mentioned in the post) that if you move the file to one of a few specific folders (such as C:\Program Files), Vista magically begins to recognize the publisher. Move the file to some other folder, and it’s unidentified again.

    If you can explain why they designed it that way, I would agree with you. Until then, it’s a bug. Guilty until proven innocent!

    > Who did you pay to sign the certificate for you?

    That particular file was signed with a Verisign certificate, but the same problem occurs with _any_ file, signed with _any_ certificate. Try it yourself and you will see.

  19. Farthen Says:

    I think that it occurs because of IE7 protected mode – see, it’s a guide to run Firefox in protected mode, and it explains very well how protected mode works… the prompt appears because the “Download” folder is a protected folder (level “low”), and I think it is only displayed for the guest because Windows forces UAC to display the certificate in normal user mode in “low” level folders, but NOT for the MUCH MORE RESTRICTED “guest” account. This would be my explanation.

    It doesn’t mean that I like how Microsoft handles this, but it would eventually explain WHY the warning appears sometimes and sometimes not.

  20. Farthen Says:

    sorry, in my last post there is a comma in the link, the correct link is:

  21. Andrei Belogortseff Says:

    Farthen: thank you for the information and explanation!

A book review: “Conversation Marketing” by Ian Lurie

Summary: If you are a small business owner who has a web site but is unsure what to do next, where to invest your hard-earned money to make the web site actually work for your business, this book will serve you well as a good introduction to Internet marketing.

Although anyone can read this book online for free, I ordered a paper version of it about a year ago, after seeing it mentioned by Bob Walsh in his blog. I was not in a marketing mood then, and I did not read it. These days, I’m getting closer to releasing a new software product (stay tuned for the big announcement here, any day now :-) ), and I’m preparing to switch from programming to marketing mode for some time, so I decided it was time to read it now. I’m not a novice in Internet matters (I created the first web site for my business back in 1994, before Google, Yahoo!, and MSN even existed), but I still found this book to be of good value.

Not that it took long to read the book: it’s only 93 pages, including the Table of Contents and Acknowledgments. That was my first surprise when I got the book: “Can a book this thin be any good?” It was, although not without some shortcomings, the biggest of which were the rather awkward analogies used throughout the book. For example, the book begins with a description of an imaginary Farmer’s Market that is neat, shiny, and visited by a lot of people, but that happens not to have any lettuce on its shelves. This analogy is used to illustrate a poorly designed web site that does not do a good job of delivering what the visitors are looking for. To me, the analogy is poor: the problem of the missing lettuce was probably caused by a one-time misjudgment of the market’s management, and is easily fixed (by ordering the lettuce!). Problems with web site design and navigation are not so easy to correct.

Another example of a poor analogy comes further in the book, when the author explains that you must use a double opt-in method of subscribing people to your email list. “Don’t sign them up and then ask them to unsubscribe!” writes the author. “That’s just rude, like eating the last piece of cake and then asking if anyone wants it.” Sure, eating the last piece of cake is rude, but it does not illustrate the rudeness of subscribing someone to an email list without asking for permission first. A better analogy, IMHO, would be a situation where you are standing at a bus stop, waiting for your bus, and a taxi driver suddenly stops by you, pushes you into the car, and starts driving, yelling “I’ll take you wherever you want to go; if you don’t want that, I can drop you off at the next corner!” Now that’s what getting on an unwanted email list feels like, if you ask me.

Anyway, those are minor things, which fortunately did not diminish the value of the book itself for me. What I really liked about the book is the practical advice the author gives, taking an imaginary small business as a case study. Too many books on Internet marketing give advice that is too abstract, advice that’s difficult to apply in reality. “Choose the right keyphrases”, “implement good site navigation”, “start a blog”: all that sounds fine in theory, but when it comes down to reality, the question “how do I apply that to my particular situation, to my specific business?” often remains hard to answer. What the author of “Conversation Marketing” did was illustrate the advice he was giving by applying it to the specific situation of a specific small business, step by step, taking it from a regular “brick and mortar” business to a business with a strong Internet presence and a solid Internet strategy for the future.

The author does not go into the technical details too deeply, and that’s probably why the book turned out so thin in the end. But that’s a good thing, in my opinion: you don’t need to allocate a lot of time for reading it, just a few hours would be enough. Of course, when you start applying the advice to your own business and web site, you will want to revisit the pages, to make sure you’re not missing anything.

To summarize, if you are looking for a good review of the current state of the art and practical advice on doing business on the Internet, get this book. (Just ignore the analogies it has or come up with your own :-) )