The Wayback Machine - https://web.archive.org/web/20100124201807/http://blogs.msdn.com:80/ieinternals/

Why doesn’t Flash/Silverlight work in my .NET Application?

Over the past few months, I’ve run across a number of developers who have reported problems where their .NET application fails to render Flash or Silverlight content within a Web Browser Control.

The most common reason for this problem is that .NET, by default, compiles with a target of AnyCPU, which means that your application will run as a 64bit application if the user is running a 64bit version of Windows. This, in turn, means that the Web Browser Control is the 64bit version, and that, in turn, means that it will attempt to load the 64bit version of Flash or Silverlight. And there’s the problem—Adobe and Microsoft don’t currently ship 64bit versions of these controls. Unfortunately, most HTML content won’t detect that the browser is 64bit and will instead simply direct the user to install the 32bit ActiveX control (which won’t help).

Obviously, the same problem will occur with all other ActiveX controls which are only available in 32bit flavors, but Flash and Silverlight are the most common culprits. Until ActiveX control vendors ship 64bit versions of their controls, the only real workarounds are to either avoid using HTML content that requires the controls, or force your application to run in 32bit mode. To accomplish the latter using Visual Studio 2008:

  • Open the Solution Explorer
  • Right-click your project
  • Choose Properties
  • Click the Build tab
  • Change the Platform Target from AnyCPU to x86.

If you don’t have the source to a .NET application and are encountering this issue, you can tweak the executable file to force it to run in x86 mode using CorFlags.exe.
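The CorFlags syntax is simple. Here's a sketch (MyApp.exe is a placeholder for your executable; CorFlags.exe ships with the Windows SDK):

```
rem Force the assembly to run as a 32-bit process:
CorFlags.exe MyApp.exe /32BIT+

rem Revert to the default (AnyCPU) behavior:
CorFlags.exe MyApp.exe /32BIT-
```

Note that modifying a strong-name-signed assembly this way invalidates its signature; CorFlags offers a /Force switch for signed assemblies, but the result will fail strong-name validation until re-signed.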

-Eric

Posted by EricLaw | 7 Comments

In-Place Shell Navigation with the WebBrowser Control on Windows 7

Because the WebBrowser Control (WebOC) can be used to display a wide range of content (HTML, Office Documents, PDFs, the local file-system, etc) it is often integrated into applications as a somewhat generic object hosting surface. For Windows 7, a small change was made that will impact applications that use the WebOC to allow the user to explore the local file system.

By way of example, here’s a trivial little WebOC host which displays the Windows folder:

Web Browser control showing C:\Windows folder

On Windows Vista and below, the user may double-click on a folder to navigate the WebOC to that folder, like so:

Web Browser control showing C:\Windows\AppPatch subfolder

However, on Windows 7, double-clicking on the folder will open a Windows Explorer window instead:

Windows Explorer launched by WebOC

The change in behavior exists because of a small change made in the Windows 7 Shell. Specifically, the filesystem viewing object will not navigate in-place unless the host container supports SID_SInPlaceBrowser, which is defined in the Windows 7 SDK (see shlguid.h). By default, the WebBrowser control’s QueryService implementation does not support SID_SInPlaceBrowser, so the filesystem viewing object will launch a new Windows Explorer instance when the user double-clicks on a folder in the WebOC.

For WebOC-hosting applications that are impacted by this change, two workarounds are available.

Workaround #1: Switch to use the ExplorerBrowser object (recommended)

Windows Vista’s Shell introduced a new control which implements the IExplorerBrowser interface; this is the recommended method of hosting a Windows Shell filesystem view within your application. Developers building applications using .NET can use the wrapped version of the ExplorerBrowser control available in the Windows API CodePack for .NET.

Please note that this interface is only available on Windows Vista and later. If your application needs to run on earlier Windows versions, you will need to fall back to the old WebOC implementation on those platforms.

Workaround #2: Handle SID_SInPlaceBrowser

As noted in the previous section, using the ExplorerBrowser control is the supported and recommended method for hosting a filesystem view within your application.

Having said that, you may be able to make a small change to your application to enable the filesystem object to navigate in-place within the WebOC when running on Windows 7. To do so, your hosting application will implement the IServiceProvider interface, and hand back the WebBrowser control’s SID_SShellBrowser when asked for SID_SInPlaceBrowser:

// IServiceProvider
IFACEMETHODIMP QueryService(__in REFGUID guidService, __in REFIID riid, __deref_out void **ppv)
{
    *ppv = NULL;
    HRESULT hr = E_NOINTERFACE;
    if (guidService == SID_SInPlaceBrowser)
    {
        // Satisfy the request with the WebOC's own SShellBrowser service
        hr = IUnknown_QueryService(_spBrowser, SID_SShellBrowser, riid, ppv);
    }
    return hr;
}

By doing this, the filesystem viewer object will believe its host supports SID_SInPlaceBrowser and will navigate in place as the user double-clicks on folders.

Happy New Year, and thanks for reading! IEInternals is now just over seven months old, and this is post #63. I’m confident that next year, I’ll have even more to share. :-D

-Eric

Posted by EricLaw | 4 Comments

AES is not a valid cipher for SSLv3

A Windows 7 user of Fiddler encountered an interesting error this morning, and it reminded me of an HTTPS compatibility problem we first found in the Windows Vista timeframe.

The user is trying to visit https://www.atsenergo.ru with Fiddler running in HTTPS-decryption mode. Fiddler uses the SslStream class to communicate with upstream servers. As in IE itself, by default, the SSLv3 and TLSv1 protocols are enabled.

He finds that when he tries to use Fiddler to connect to this site, the following error is thrown in Fiddler:

System.Security.Authentication.AuthenticationException: A call to SSPI failed, see inner exception. ---> System.ComponentModel.Win32Exception: The client and server cannot communicate, because they do not possess a common algorithm

--- End of inner exception stack trace ---
at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken message, AsyncProtocolRequest asyncRequest, Exception exception)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
at Fiddler.Pipe.Connect(Boolean bCreateConnectTunnel, IPEndPoint remoteEP, Boolean bSecureTheSocket, String sCertCN, String sClientCertificateFilename, String sPoolingKey)
at Fiddler.Pipe.Connect(IPEndPoint remoteEP, Boolean bSecureTheSocket, String sCertCN, String sClientCertificateFilename, String sPoolingKey)
at Fiddler.Session.ExecuteHTTPSConnect()

Any time I encounter a problem with low-level HTTPS handshakes, my next stop is Netmon, which allows me to see what’s going out over the wire. The capture shows a standard TLS ClientHello, which offers up the standard set of cipher suites, including TLSCipherSuites: TLS_RSA_WITH_AES_128_CBC_SHA. It then shows the server responding with a standard SSLv3 ServerHello, selecting SSLCipherSuite: TLS_RSA_WITH_AES_128_CBC_SHA.

At this point the connection fails with the exception message. Now, the exception text is clearly a bit misleading: the client and the server actually selected exactly the same cipher algorithm. The real problem is that AES ciphers are not valid choices for SSLv3, although some servers will incorrectly try to use them.

We first encountered this problem during compat-testing of IE7 on Windows Vista back in 2006—because AES isn’t a supported cipher in SSLv3, SChannel rejects the choice of cipher. Today, that rejection leads to the exception in .NET’s SslStream class.

Now, in the WinHTTP and WinINET HTTPS stacks, we have special code to handle this problem—WinINET (and thus IE) simply falls back to SSLv3 when talking to the server.

To avoid performance-impacting fallback logic, server HTTPS implementations should be updated to send a TLS ServerHello if they select an AES cipher; if they send an SSLv3 ServerHello, they should choose a cipher defined for SSLv3.
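The rule SChannel effectively enforces can be sketched in a few lines. This is an illustration of the version/cipher-suite relationship, not actual SChannel code; the suite’s code point, 0x002F, is the one assigned by RFC 3268:

```python
# Cipher-suite code points are shared between SSLv3 and TLS, which is how
# a server can (incorrectly) pick an AES suite in an SSLv3 ServerHello.
SSL3_VERSION = (3, 0)   # record-layer version for SSLv3
TLS1_VERSION = (3, 1)   # record-layer version for TLS 1.0

TLS_RSA_WITH_AES_128_CBC_SHA = 0x002F  # defined for TLS (RFC 3268), not SSLv3

def server_choice_is_valid(version, cipher_suite):
    """Sketch of the constraint: AES suites are only legal when the
    negotiated protocol is TLS 1.0 or later."""
    if cipher_suite == TLS_RSA_WITH_AES_128_CBC_SHA:
        return version >= TLS1_VERSION
    return True

print(server_choice_is_valid(SSL3_VERSION, TLS_RSA_WITH_AES_128_CBC_SHA))  # False
```

This is also why forcing SSLv3 upstream sidesteps the failure: an SSLv3-only handshake never offers the AES suites, so the server cannot make the invalid choice.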

Note that when the client (Fiddler, IE, etc) is running on Windows XP, this problem doesn’t occur, because SChannel does not support AES on that platform, so the client never offers AES when making the TLS connection.

Fiddler users running on Windows Vista or Windows 7 can work around this problem in one of two ways: either manually disable use of AES by SChannel using Group Policy (generally a bad choice) or write a little bit of FiddlerScript to force Fiddler to use only SSLv3 upstream (which prevents sending of the TLS ClientHello that offers the AES cipher).

To update your FiddlerScript, click Rules > Customize Rules. Scroll down to the Main() function and add the following line within the function:

CONFIG.oAcceptedServerHTTPSProtocols = System.Security.Authentication.SslProtocols.Ssl3;

This will force Fiddler to offer only SSLv3 when connecting to secure servers, and that, in turn, will resolve the problem.

-Eric

Posted by EricLaw | 0 Comments

Understanding Certificate Name Mismatches

Recently, I received a query from the Windows Mobile team-- they had observed that visiting https://gmail.com triggers a certificate name mismatch error on IEMobile, but doesn’t seem to trigger any error on Windows 7 when using the desktop versions of Internet Explorer or Firefox.

Now, long-time readers know that I love a good mystery, so I was excited to take a look at what was going on here. I first verified the original problem: IE on Windows Mobile 6.5 does indeed show the name mismatch warning, and desktop IE doesn’t show any warning at all. My next step was to watch the Desktop’s traffic with Fiddler, which, when configured to perform HTTPS decryption, will warn about any certificate errors by default. I was intrigued to find that Fiddler does, in fact, warn about the certificate when visiting https://gmail.com. As you can see, the certificate presented is for “mail.google.com” instead of “gmail.com”:

Name Mismatch Warning in Fiddler

This name mismatch triggers a warning in Fiddler and should be triggering a similar warning within the browser. My next step was to try opening the site using IE6 inside Windows XP Mode. There, I found that IE6 reported the same certificate error.

Name Mismatch Warning in IE6 

So, what’s going on here? A security bug introduced in the newer desktop versions of Firefox and IE that prevents proper name matching? Unlikely.

No, actually, something more interesting is going on, and by now I had a hunch about what it was. I tried IE8 running on a Windows XP machine and saw the expected certificate error page. I then switched back to my Windows 7 machine and unticked the “Use TLS 1.0” option inside of IE8’s Advanced Internet Options and revisited the site. This time, I also got a certificate error:

Name Mismatch Warning in IE8

So, the site always yields a certificate error on Windows XP, and on Windows 7 if TLS is disabled. But why?

By now, some of you have probably jumped ahead to solve the case, but I wanted to be very sure. My next tool of choice was Netmon, a packet-monitor that allows me to easily examine the HTTPS handshakes in both the Certificate-Error and No-Certificate-Error cases. I was able to quickly determine that in the No-Certificate-Error case, the GMail site was returning the “CN=gmail.com” certificate while in the Certificate-Error case, it was returning the “CN=mail.google.com” certificate.

More importantly, however, I was able to see the reason why: In the No-Certificate-Error case, the browser was sending the Server Name Indication TLS extension. I first blogged about the SNI extension back in the fall of 2005 when it was introduced in IE7 on Windows Vista. As I explained back then:

When a web browser initiates a HTTPS handshake with a web server, the server immediately sends down a digital certificate.  The hostname of the server is listed inside the digital certificate, and the browser compares it to the hostname it was attempting to reach. If these hostnames do not match, the browser raises an error. 

The matching-hostnames requirement causes a problem if a single-IP is configured to host multiple sites (sometimes known as “virtual-hosting”). Ordinarily, a virtual-hosting server examines the HTTP Host request header to determine what HTTP content to return.  However, in the HTTPS case, the server must provide a digital certificate before it receives the HTTP headers from the browser.  SNI resolves this problem by listing the target server’s hostname in the SNI extension field of the initial client handshake with the secure server. A virtual-hosting server may examine the SNI extension to determine which digital certificate to send back to the client.
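The hostname comparison described above can be sketched in a few lines of Python. This is a deliberately simplified illustration (real validators follow RFC 6125 and also handle subjectAltNames, IP addresses, and internationalized names):

```python
def hostname_matches(cert_name: str, hostname: str) -> bool:
    """Simplified certificate name check: case-insensitive exact match,
    or a single left-most wildcard label (e.g. "*.google.com")."""
    cert_name = cert_name.lower()
    hostname = hostname.lower()
    if cert_name == hostname:
        return True
    if cert_name.startswith("*."):
        # The wildcard may match exactly one label, not several.
        suffix = cert_name[1:]                      # ".google.com"
        return (hostname.endswith(suffix)
                and "." not in hostname[:-len(suffix)])
    return False

# The case in this post: the cert says "mail.google.com" but the user
# asked for "gmail.com", so the check fails and the browser warns.
print(hostname_matches("mail.google.com", "gmail.com"))  # False
```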

The GMail server is configured to select a certificate to return based on the SNI sent by the client; the only problem is that pre-Vista versions of IE don’t send the SNI at all, and neither does any browser in which SSLv2 is enabled or TLS is disabled. (Even if TLS is enabled, having SSLv2 enabled prevents sending of the TLS extensions, because the SSLv2 handshake format cannot carry them.)

Unfortunately, SNI support isn’t available on Windows XP, even in IE8. IE relies on SChannel for the implementation of all of its HTTPS protocols. SChannel is an operating system component, and it was only updated with support for TLS extensions on Windows Vista and later. The Google folks could avoid the name mismatch problem for downlevel clients by returning a certificate containing multiple hostnames (e.g. “SubjectCN=mail.google.com; SubjectAltNames=DNS Name=gmail.com”), but apparently doing so is problematic because they have so many hostnames in use on their load-balanced servers.

Case closed.

-Eric

PS: I’m especially glad I investigated this case because it uncovered a bug in Fiddler. Fiddler shouldn’t have encountered this problem (when running on Windows 7) because it should have been using an SSLv3-format handshake, in which case the TLS extensions would have been sent. The bug I found is that Fiddler was incorrectly allowing SSLv2 connections to upstream servers, which forced use of the v2-format handshake, which had the effect of disabling TLS extensions. That bug is now fixed in Fiddler v2.2.7.9.

Of course, when running Fiddler on pre-Vista versions of Windows, it makes no difference: the .NET Framework’s SslStream class also relies on SChannel, and hence TLS extensions aren’t available to .NET applications running on Windows XP either.

Posted by EricLaw | 4 Comments

Understanding the Protected Mode Elevation Dialog

Internet Explorer 7 introduced Protected Mode, a feature which helps ensure that the browser and its add-ons run with a minimal set of permissions. Code running inside the “Low Rights” process doesn’t have permission to write to your user-profile’s folders or registry keys, which helps to constrain the damage if a bad guy manages to find a vulnerability within the browser or its add-ons.

To help ensure compatibility, Protected Mode employs a system of virtualization to help ensure that code that runs within Protected Mode will continue to work even when its permissions are restricted.

In some cases, virtualization can lead to surprising outcomes, some of which Mark Russinovich describes in his blog post The Case of the Phantom Desktop Files. Beyond such surprises, some functions just cannot be virtualized effectively—for instance, if you want to offer a feature that sets the current user’s Desktop wallpaper, your code simply must write to their user-profile.

How does IE resolve the tradeoff between security and functionality? The answer is “by using brokers.” The idea is that Internet Explorer (and some add-ons like Flash and Java) will run a broker process with “Medium Rights” that can use the current user’s permissions to take actions that would otherwise be prohibited when rendering content inside the Protected Mode sandbox. A broker process must be carefully designed to accept untrusted input (since its caller could be malicious code trying to escape the sandbox), sanitizing data and confirming any security-sensitive actions with the user directly before making changes.

When an add-on running inside Protected Mode attempts to launch a broker process (or any other program), the ElevationPolicy registry key (HKLM\Software\Microsoft\Internet Explorer\Low Rights\ElevationPolicy) is checked to determine how the process should be launched. One of four policy values may be specified:

  Policy 0: Protected Mode prevents the process from launching.
  Policy 1: Protected Mode silently launches the broker as a low-integrity process.
  Policy 2: Protected Mode prompts the user for permission to launch the process. If permission is granted, the process is launched as a medium-integrity process.
  Policy 3: Protected Mode silently launches the broker as a medium-integrity process.

The problem arises when a broker process fails to properly register an elevation policy. If no ElevationPolicy is specified, then the default policy is #2, and the user sees a prompt for permission to launch the process. In the case of a broker process, this can lead to a very confusing user-experience. For instance, if Flash’s Broker’s elevation policy is missing from the registry, any page that uses Flash will trigger the following prompt:

Protected Mode Elevation Prompt

Now, keeping in mind that average users (and most super users) don’t have any idea what a broker is or why they’re seeing this dialog, it’s understandable that they might click either “Allow” or “Don’t allow” just to get rid of it. However, the next time the add-on attempts to launch the broker process, the user will be presented with the same prompt.  As you might imagine, they will quickly get tired of this!

Users tired of banging the “Don’t allow” button (not really understanding what the broker is and why it exists) are likely to try checking the “Do not show me the warning for this program again” box before clicking the “Don’t allow” button.

Protected Mode Elevation Prompt with Don't Ask, Deny Always

Unfortunately, this exercise is doomed—the “Do not show” checkbox only takes effect when you push the “Allow” button—you cannot automatically deny access for a given process.

Why not? Because it would break things unexpectedly, and there would be no way for a normal person to figure out what went wrong and subsequently fix it. An add-on that tried to launch its broker would always fail, and might try repeatedly (hanging the browser). Worse still, there’s no way for the user to go back and change their mind—there was no reasonably affordable way to build a UI that would allow for such reversals.

Add-on developers should take care to ensure that the ElevationPolicy for their broker process is properly set at install time (and may wish to confirm that it’s set properly if the broker ever fails to launch due to an Access Denied error, and notify the user accordingly).
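Registering a policy at install time amounts to writing a single registry key. Here's a sketch of what an installer might merge (the GUID and “ContosoBroker” names are hypothetical; AppName, AppPath, and Policy are the value names the ElevationPolicy key uses):

```
Windows Registry Editor Version 5.00

; Hypothetical broker registration: Policy 3 = silent medium-integrity launch
[HKEY_LOCAL_MACHINE\Software\Microsoft\Internet Explorer\Low Rights\ElevationPolicy\{D3E54907-1111-2222-3333-444455556666}]
"AppName"="ContosoBroker.exe"
"AppPath"="C:\\Program Files\\Contoso"
"Policy"=dword:00000003
```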

End-users encountering unexpected Protected Mode Elevation prompts should consider either reinstalling whatever add-on is triggering the prompt (it’s often obvious) or disabling any unrecognized or unwanted add-ons. Beyond reducing attack surface and prompts, disabling unwanted add-ons will often improve browser performance.

-Eric

Posted by EricLaw | 7 Comments

The JVM Install Prompt

Many years ago, Microsoft developed an implementation of a Java Virtual Machine to run Java content. Internet Explorer 5 included code that would download and install the JVM (if needed) when a user encountered Java content on the web. After some time, support was discontinued for the Microsoft JVM, and no further updates were made available. The Microsoft JVM should no longer be used, as security patches are no longer released for it-- installation is blocked on Vista and Windows 7.

To help ensure that Internet Explorer users are still able to recognize when a page requires a JVM, the existing Microsoft JVM install code in IE was replaced with a dialog box that helps direct the user toward an available JVM (namely, Sun Microsystems’ implementation).

That dialog box looks like this:

Install Java Prompt

If you click the “More Info” button, you are taken to a web page explaining how to install the Sun Java Virtual Machine.

When you check the “Do not show this message again” box, Internet Explorer stores this preference in the registry. It does so by creating a registry string named {08B0e5c0-4FCB-11CF-AAA5-00401C608501} inside the HKCU\Software\Microsoft\Active Setup\Declined Install On Demand IEv5\ branch.
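In other words, ticking the box is equivalent to merging a .reg file like the following sketch (the string's data appears not to matter; the presence of the value is what suppresses the prompt):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Active Setup\Declined Install On Demand IEv5]
"{08B0e5c0-4FCB-11CF-AAA5-00401C608501}"=""
```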

If you decide not to install a JVM, you may quickly grow tired of this modal dialog box and thus tick the “Do not show this message again” box. Subsequently, IE will never show this prompt again.

Unfortunately, depending on how pages using Java Applets are constructed, this may result in a confusing user-experience. Consider, for instance, this National Ice Center page which requires Java. When you visit this page without a JVM installed, you will see the following information bar:

Misleading Information Bar

The text of this information bar is misleading—the page doesn’t use an ActiveX control—the prompt is merely a side-effect of how Applet support was built into IE. Unfortunately, there’s no indication that this prompt is really related to Java. If you choose “Install This Add-on” from the Information bar’s menu, you’ll see another misleading dialog box:

Misleading Authenticode Dialog

Fortunately, the National Ice Center page also includes some fallback text in the Applet tag so that if the Applet cannot be rendered, the page itself will explain that Java is required:

<APPLET>
<PARAM></PARAM> <PARAM></PARAM><PARAM></PARAM>
<b>You must install Java to use this page!</b>
</APPLET>

Additionally, if you develop your page using an OBJECT tag with an APPLET tag embedded within, Internet Explorer will show only the “You need Java” dialog, and will not display the misleading ActiveX information bar.

-Eric

Posted by EricLaw | 1 Comments

Troubleshooting Authentication with Fiddler

Over the last few weeks, I’ve been exchanging mail with a webmaster (Vladimir) in Russia who reported that his customers were having problems using IE8 on Windows 7 to log into his website. His site uses HTTP Basic Authentication, so users are prompted to enter their credentials using the following dialog:

CredUI HTTP Authentication Prompt

I asked the webmaster to submit some HTTP Traffic Logs collected by the lightweight network traffic capture tool known as FiddlerCap. He obliged, and I used Fiddler to take a look at the captured .SAZ traffic log.

Fiddler includes an “Auth” Inspector that allows you to easily look at the HTTP Authentication credentials sent for a given request. I opened the .SAZ file captured with FiddlerCap. In the failing case, he had entered the username test and the password ABCDEFG. The Auth inspector, however, showed that the password wasn’t being sent:

Fiddler Auth Inspector view of HTTP Authentication; password blank

As you can see, the base64-obfuscated string is quite short, and the decoded username:password string contains only the username and the colon, but no password at all.
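Decoding the header yourself is trivial, which is also a reminder that Base64 is an encoding, not encryption, and Basic credentials should only ever travel over HTTPS. A quick Python sketch (the sample header values match the captures described here):

```python
import base64

def decode_basic_auth(header_value: str) -> str:
    """Decode the value of an 'Authorization: Basic xxxx' header."""
    assert header_value.startswith("Basic ")
    return base64.b64decode(header_value[len("Basic "):]).decode("latin-1")

# The failing capture: only the username and colon were sent -- no password.
print(decode_basic_auth("Basic dGVzdDo="))          # test:

# A healthy capture includes the password after the colon.
print(decode_basic_auth("Basic dGVzdDpBQkNERUZH"))  # test:ABCDEFG
```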

Now, I didn’t have ready access to the customer’s test page, but wanted to try to reproduce the problem myself. I didn’t have a server requiring Basic auth handy, but Fiddler makes it simple to simulate scenarios such as this. I simply used the AutoResponder tab to create a rule that responds to any request for a URL containing the string “AUTH” with a HTTP/401 response that demands Basic authentication:

Using Fiddler's AutoResponder to demand HTTP Authentication

Fiddler includes about a dozen sample responses like the 401_AuthBasic.dat, and you can easily use Fiddler to capture other responses, or even create your own using any text editor.
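For reference, the challenge such a rule plays back looks roughly like this (a sketch of a minimal Basic-auth challenge, not the exact bytes of 401_AuthBasic.dat):

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Fiddler Test"
Content-Type: text/html
Content-Length: 41

<html>Please authenticate yourself</html>
```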

With this AutoResponder rule in place, I can request any invented URL containing the word "auth" and get an authentication prompt in response. I tried http://www.example.com/auth and received the expected authentication prompt. I typed in some credentials, submitted them, and took a look at the request using the Auth inspector. I found that the credentials were submitted perfectly:

Fiddler Auth Inspector view of HTTP Authentication

As you can see, the base64-obfuscated string is longer, and the decoded username:password string contains both the username and password, split by the colon. So, I wasn’t able to reproduce the behavior reported by the web developer, despite trying a number of different reproduction cases. However, he was fortunately quite persistent and did some additional research, determining that the problem only existed when the password was pasted from the clipboard.

This was an interesting finding, and narrowed down the problem substantially.

First, a bit of background. In Windows 7, WinINET was updated to call the CredUIPromptForWindowsCredentials function to collect HTTP Authentication credentials. The function shows a new CredUI dialog that offers an improved UI over the legacy password prompt, and its use is recommended on Windows Vista and later (although IE8 only uses it when running on Windows 7).

Now, experienced developers in the audience know that any time anything is changed, there’s always a chance of regression, so, coupled with the fact that the problem was narrowed down to just the password-paste case, we had some leads. At first, I thought it likely that the problem was related to the Russian version of Windows 7, because I didn’t have any problems with pasting in the password with CTRL+V on the English OS. So, I asked Vladimir to collect screenshots of all of the different formats on his clipboard. This is easily done with a free little utility called ClipSpy. I suspected that perhaps there was a codepage-related problem where the password characters were perhaps being mangled because the system codepage was Cyrillic. However, the output of the ClipSpy tool didn’t reveal anything interesting; Vladimir's clipboard's bytes looked just like my clipboard's.

At this point, I was stumped. I wasn’t able to reproduce this problem in-house, and had tried on many different Windows 7 computers, using a variety of different sites and passwords. Then, Vladimir saved the day by forwarding along a posting on a message board where a customer complained of exactly the same problem. The response from “auggy” indicated that the user should try using CTRL+V to paste rather than using the context menu.

I had always been pasting with CTRL+V and had never tried using the context menu; Vladimir's repro steps hadn't mentioned the context menu, and it didn't even occur to me that it could make a difference-- typically the context menu and CTRL+V behave identically.

After playing with the context menu’s Paste option, I was very quickly able to reproduce the problem. It turns out that there is a tiny bug in the CredUI dialog on Windows 7. If you use the CredUI dialog's context menu to paste into the username or password dialog box without pressing any key in the box (e.g. tab, CTRL+V, CTRL+A, etc) then, while the text appears to be updated, the internal data structures are not updated. The internal username or password data remains unchanged from its original value until a key is pressed. In the failing cases, the boxes started out empty, so when the user used the context menu to paste into the password box, the password data was never updated away from the blank value.

I’ve passed this bug along to the CredUI team for further investigation. I’d like to thank Vladimir for his patience as we hunted down the core problem and for his willingness to provide network captures and other information.

Hopefully, this post has shown you a few ways that you can use Fiddler to find the root cause of problems (and eliminate confounding variables from the repro) and reiterates the value of providing painstaking detail when sharing bug repro steps.

-Eric

PS: My "Debugging with Fiddler" talk at the Microsoft PDC is now available for online viewing.

Posted by EricLaw | 3 Comments

Inline AutoComplete

Internet Explorer 8 removed support for one of my favorite browser features: Inline AutoComplete (IAC) for the address bar. This feature was off-by-default, but for almost a decade the first thing I did when setting up a new computer was enable IAC using the checkbox Tools > Internet Options > Advanced > Use inline AutoComplete. 

For IE8, we introduced a new Smart Address Bar which offers a bunch of improvements including better and more relevant suggestions in the new flyout window. The feature also includes keyboard tips, which show how to take advantage of keyboard combos to open pages in new tabs, background tabs, etc. Unfortunately, as a consequence of the rewrite, we lost the legacy AutoComplete behavior provided by the Shell. The consensus was that, while IAC had some vocal proponents (myself especially), the fact that it was off-by-default and most users didn't have it enabled meant that it was a reasonable sacrifice when compared to the benefits brought by the new address bar. The most important improvement for keyboard lovers was the SHIFT+Enter hotkey, which navigates to the "best match" in the results list; there have long been complaints and debates about whether the default behavior of IAC was suboptimal. With the relevance engine added to IE8, we have good reason to believe that SHIFT+Enter is a great feature for most folks to more quickly get to the best result.

Nevertheless, I expected that we'd hear from vocal proponents of IAC during the IE8 beta cycles. The initial blog post announcing the change had a few heated comments, and one bug with a meager 16 votes was filed on Connect, but we didn't receive nearly the level of feedback I was expecting. After two betas and one release-candidate which were used by many millions of users, I could only count a handful of supporters for IAC. Since we shipped the final version of IE8, I've received more mail asking why IAC was removed. The gist of much of the feedback was "You already had the feature, it wouldn't have cost you anything to keep it." Unfortunately, that's simply not true-- IE8 is no longer using the standard controls that support AutoComplete, and even if it was, the "free" AutoComplete behavior wouldn't work as expected with the matches in the Smart Address Bar's dropdown.

IE8 has been my default browser for quite a while now, and I've largely adjusted to the change. Beyond getting used to the SHIFT+Enter shortcut, I also heavily use SlickRun, a keyboard-lovers' utility I wrote a long time ago which makes heavy use of command aliasing and offers Inline AutoComplete.

As we build future versions of IE, I encourage you to provide feedback early and often. We've already received some great suggestions from the web developers out there, but we're very interested in UI suggestions as well!

thanks!

-Eric

Posted by EricLaw | 2 Comments

Security Intelligence Report Volume 7 Released

Security researchers at Microsoft release a biannual "Intelligence Report" containing statistics about software-related security incidents over the preceding six months. This report is called the SIR, and the latest version can be found here. There are many interesting charts and data points in the report, but I have two favorites from the latest edition.

As browser code quality improves, add-ons become a more appealing target:

Most browser attacks are against 3rd party (addon) code

Here's a chart of the types of malicious downloads that SmartScreen has blocked over the last six months:

SmartScreen blocks a wide range of malicious downloads

 

Microsoft remains committed to helping protect you on the web-- IE8's SmartScreen and Microsoft Security Essentials are making a significant impact against the bad guys.

-Eric

Posted by EricLaw | 0 Comments
Filed under: ,

Using Meddler to Simulate Web Traffic

As mentioned back in July, IE8’s new lookahead downloader has a number of bugs which cause it to issue incorrect speculative download requests.

The “BASE Bug” caused the speculative downloader to only respect the <BASE> element for the first speculatively downloaded script file. Subsequent relative SCRIPT SRCs would be combined without respecting the specified BASE, which resulted in spurious requests being sent to the server. Eventually, the main parser would catch up and request the proper URLs, but the spurious requests waste bandwidth and could cause problems for some servers.
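To make the bug concrete, here's a small Python sketch (not IE code; the URLs are hypothetical, loosely based on the Meddler test case below) contrasting correct BASE-relative resolution with the spurious URLs the speculative downloader requested:

```python
from urllib.parse import urljoin

# Hypothetical URLs for illustration only.
page_url = "http://ipv4.fiddler:8088/hammer"     # the page's own URL
base_href = "http://ipv4.fiddler:8088/pass/"     # value of <base href="...">
scripts = ["inc/1.js", "inc/2.js", "inc/3.js"]

# Correct behavior: every relative SRC resolves against the BASE.
correct = [urljoin(base_href, src) for src in scripts]

# The BASE Bug: only the first speculative request respected the BASE;
# later relative SRCs were resolved against the page's own URL instead.
buggy = [urljoin(base_href, scripts[0])] + \
        [urljoin(page_url, src) for src in scripts[1:]]

print(correct)   # all requests land under /pass/inc/
print(buggy)     # spurious requests outside /pass/
```

The spurious requests in the `buggy` list are the ones Fiddler shows in red in the screenshots below.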

When first investigating the speculative downloader problems, I decided to use the Meddler HTTP Traffic Generation tool to build some test cases. Meddler is a simple little tool that allows you to write JavaScript.NET scripts to emulate a web server. Meddler allows for precisely timed delivery of responses, and includes classes to enable basic fuzzing scenarios. The best part of Meddler is that you can use a single MeddlerScript (.ms) file to contain an entire test case, even if that test case is made up of multiple pages, images, scripts, and other resources. These .ms files can be shared with others, run across multiple operating systems, and attached to bugs or test harnesses for future regression testing. The test machine only requires the .NET Framework and Meddler installed, and does not need IIS, Apache, Perl, ASP.NET, etc.

Because the issue was so simple, I was able to quickly build a small MeddlerScript which demonstrates the BASE Bug. If you’d like, you can follow along using my MeddlerScript: PreParserBaseBug.ms.

The test script generates the following sample HTML:

<html><head><base href="http://ipv4.fiddler:8088/pass/"></base>
<script type="text/javascript" src="inc/1.js"></script>
<script type="text/javascript" src="inc/2.js"></script>
<script type="text/javascript" src="inc/3.js"></script>
<script type="text/javascript" src="inc/4.js"></script>
<script type="text/javascript" src="inc/5.js"></script>
<script type="text/javascript" src="inc/6.js"></script>
<script type="text/javascript" src="inc/7.js"></script>
<script type="text/javascript" src="inc/8.js"></script>
<script type="text/javascript" src="inc/9.js"></script>
</head>
<body> Test page.</body></html>

Note that I plan to watch the network traffic with Fiddler, and because traffic sent to localhost isn’t proxied, I will use “ipv4.fiddler” as an alias to 127.0.0.1.

When visiting the Meddler test page, the traffic from IE is as follows:

Screenshot of original incorrect network traffic

As you can see, there are spurious download requests containing the wrong path; these are shown in red as the MeddlerScript is designed to return failure for such requests. Later, the correct URLs are downloaded as the main parser encounters the script tags and correctly combines the URLs.

Today's IE8 Cumulative Update (KB974455) fixes the BASE Bug. After installing the update, loading the sample HTML results in no spurious requests-- each script URL is correctly relative to the specified BASE.

Screenshot of corrected network traffic

Please note that while the BASE bug is fixed, the “4k Bug” is not fixed by this update. If you want to view that bug in action, try this script: PreParser4kBug.ms. As it is a timing issue, you may need to reload the “hammer” page a few times to encounter the problem.

While Meddler is rather simplistic, it can be very useful for sharing test cases and simulating the behavior of web servers. You can use Meddler to build reduced test cases that reliably generate problematic HTTP responses.

Until next time,

-Eric

Capturing Crash Dumps for Analysis

Sometimes, folks report crashes to the IE team that we are unable to reproduce internally. That’s usually because, as mentioned often, most crashes are caused by buggy browser add-ons.

In some cases, however, crashes occur even when running with browser add-ons off, and if we cannot reproduce the problem, the next best thing is a crash dump file from the affected machine.

Collecting crash dumps isn’t hard:

  1. Install WinDBG from http://www.microsoft.com/whdc/devtools/debugging/installx86.mspx#ERB
  2. Configure WinDBG to run whenever a crash occurs: in an elevated command prompt, run WinDBG with the -I (case-sensitive) parameter. For instance:

    C:\debuggers\windbg.exe -I
  3. When the crash occurs, WinDBG opens. Type the following command to generate a .DMP file:

    .dump /ma %USERPROFILE%\Desktop\IECrash.dmp

Dump files tend to be dozens to hundreds of megabytes in size, so they typically cannot be readily passed around via email (although they often compress well). If a DMP file is requested, the person asking for the file will typically tell you how to return the file to them.

If you allowed the "Watson" Windows Error Reporting system to upload a crash report, you can help us find "your" crash by letting us know the "bucket number." All Windows Error Reporting logs appear in the Event Viewer with Event ID 1001. After the crash report is sent, open Computer Management, drill down to Event Viewer\Windows Logs\Application, and find the Event ID 1001 entry that corresponds to the time of the crash; open it, and the failure bucket ID will be listed there.

Depending on the problem reported, we may also want to get a network traffic log or a Process Monitor log.

-Eric

Update: The Visual Studio team just posted a blog on capturing dumps with Visual Studio or Task Manager.

Posted by EricLaw | 0 Comments

Understanding DEP/NX

Despite being one of the crucial security features of modern browsers, Data Execution Prevention / No Execute (DEP/NX) is not well understood by most users, even technical experts without a security background.

In this post, I’ll try to provide some insight into how DEP/NX works, explain why you might encounter a DEP/NX crash, and convince you that turning off DEP/NX is almost never the right decision.

More than anything else, I hope you take away two important facts from reading this post:

  • In many cases where you encounter a DEP/NX crash, the browser would have crashed anyway.
  • The vast majority of DEP/NX crashes are caused by browser add-ons. If you run IE in No Add-ons Mode, it’s very unlikely that you will encounter a DEP/NX crash.

Background

I’ll begin by providing some background information on DEP/NX and how the browser makes use of it.

What is DEP/NX?

DEP/NX is a feature of modern CPUs that allows marking of memory pages as Executable or non-Executable. This allows the CPU to help prevent execution of malicious data placed into memory by an attacker. If the CPU detects that it is about to jump to (begin execution of) data which is in a memory page which is not marked as Executable, the CPU will raise an exception which results in termination of the process.

Stated another way, if DEP/NX determines that a potentially dangerous jump is about to be made, the process is intentionally “safely crashed” to prevent a potential security exploit.
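As a mental model (this is a toy simulation for illustration, not how the CPU is actually implemented), you can think of each memory page as carrying an "executable" flag that is checked before control transfers into it:

```python
# Toy model of DEP/NX: pages carry an "executable" flag, and a jump into
# a non-executable page raises an exception instead of running the data.
class Page:
    def __init__(self, data, executable=False):
        self.data = data
        self.executable = executable

class DEPViolation(Exception):
    """Models the CPU exception that terminates the process."""

def jump_to(page):
    if not page.executable:
        raise DEPViolation("attempted to execute a non-executable page")
    return "executing: " + page.data

code_page = Page("mov eax, 1; ret", executable=True)   # legitimate code
data_page = Page("attacker-supplied bytes")            # e.g. heap data

print(jump_to(code_page))      # runs normally
try:
    jump_to(data_page)         # the process is "safely crashed"
except DEPViolation as e:
    print("process terminated:", e)
```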

Checking Your Protection

You can see which processes are protected by DEP/NX using Task Manager’s Processes tab. On Windows XP, you need to use Process Explorer instead. In either case, ensure that the “Data Execution Prevention” box is checked in the View > Select Columns menu, and a column in the process list will show the DEP/NX protection status.

Process Explorer showing DEP Permanent for iexplore.exe

As mentioned last year, Internet Explorer 8 enables DEP/NX protection by default. In IE7 and earlier, DEP/NX was disabled by default due to compatibility concerns that were resolved in IE8.

Opting-in to DEP/NX

Internet Explorer 8 uses the SetProcessDEPPolicy() API to enable DEP/NX. This provides the following benefits versus using the /NXCOMPAT linker flag:

  • It allows us to offer an Internet Control Panel checkbox and Group Policy option to disable DEP/NX if desired.
  • It enables DEP/NX on Windows XP SP3. The Windows XP loader does not check the NX Compatible bit.
  • It ensures that ATL_THUNK_EMULATION, an important compatibility feature, works properly.

Note: New applications without 3rd-party code compatibility concerns, targeted for use on Vista and later, should simply use the /NXCOMPAT linker flag.

Recognizing a DEP/NX Crash in Internet Explorer

When Internet Explorer 8 recovers from a DEP/NX-induced crash, it will not automatically recover the current tabs. This is a security measure designed to help prevent a malicious site from having multiple attempts to exploit a vulnerability. Instead of reloading the tabs, the browser will show the following error page:

Error page for DEP/NX Crash Recovery

Unfortunately, the nature of DEP/NX crashes makes it infeasible for the browser to “pin the blame” on the specific add-on that is responsible for the problem.

Why do DEP/NX Crashes Occur in the Real World?

Now, let’s take a look at why users encounter DEP/NX crashes in the real-world.

When the CPU is about to jump to a non-Executable memory page, there are three possible types of data in that page: malicious code, non-malicious code, and garbage data. I’ll discuss each of these in the following sections.

Jump Target: Malicious code

This is the scenario where DEP/NX shines. In this scenario, an attacker has put malicious data in memory that will be executed as x86 instructions if he can get the CPU to jump to it. The attacker then exploits some vulnerability to induce the CPU to jump to his data, typically using a memory-related vulnerability in an add-on or the browser itself.

In this scenario, the CPU notes that the attacker’s code is not in an executable memory page and prevents the interpretation of the attacker-supplied data as instructions. The attack is foiled and the user’s machine is protected. If not for DEP/NX, the attacker would have been able to execute his instructions and potentially infect the user’s machine with malware, steal their data, or achieve some other nefarious goal.

Now, the obvious next question is: What if the attacker can somehow get his data marked as executable?

The answer is that doing so is intentionally difficult. IE8 blocks the best known trick used to get the attacker’s data in an executable page. That means that the attacker must find some other way to get the memory page containing his instructions marked as executable.

The obvious choice would be for the attacker to call VirtualProtect() directly, passing PAGE_EXECUTE_READ as the flNewProtect flag. However, thanks to Address Space Layout Randomization (ASLR) it is difficult for the attacker to guess where the VirtualProtect function is in memory. If he guesses wrong (and he almost always will), the process will crash and not execute his attack instructions.

Jump Target: Non-malicious code

In this scenario, a browser add-on is designed in such a way that it expects to be able to execute data from memory pages which are not marked as executable, or otherwise makes a bad assumption.

There are a number of possible cases where this may happen.

Case #1: Code Generation

In the first case, the add-on (or the technology it is built upon) depends on the ability to execute dynamically generated instructions at runtime. Examples of this are the Java Virtual Machine (JVM) and the Active Template Library (ATL). These frameworks generate (“JIT compile”) executable code at runtime and jump to it. Older versions of these frameworks did not mark the memory pages containing the generated code as executable and would hence crash when DEP/NX was enabled. The Java team fixed this problem in the JVM years ago, and the ATL team also fixed this problem several versions ago.

Because ATL is so commonly used to build Internet Explorer add-ons, additional work was done to allow Windows to “emulate” the ATL Thunk code which violated DEP/NX, so that even if an add-on was compiled against an ancient version of ATL, ATL Thunk Emulation will ensure that the code runs properly inside Internet Explorer with DEP/NX enabled.

Case #2: Code Rewriting

In another common case, the add-on depends on “thunking” or modifying an existing Internet Explorer API or Windows function at runtime by rewriting the instructions in the existing function’s memory page. In order to accomplish this, the add-on uses VirtualProtect() to change the memory protection of the target page to allow Write and then update the memory with new instructions that point to some code that the add-on would like to have run inside the target function.

If the add-on fails to subsequently call VirtualProtect() to revert the memory protection back to allow Execute, the process will crash with a DEP/NX violation the next time that function is called.

More commonly, the add-on will later change the memory protection back to allow Execute, but the developer ignores the fact that it’s entirely unsafe to perform modification of shared code while any other threads are executing. While an add-on thread is modifying the code in a memory page, if any thread attempts to call any function in the same memory page, the process will crash. Internet Explorer makes extensive use of threads, so such crashes are likely if an add-on uses thunking.

Because timing is a critical factor here, the add-on may seem to “work fine” on one machine (e.g. a slower single-core machine) and always crash on another (e.g. a fast multi-core machine). This problem is just one of the major reasons why function thunking by Add-ons is not supported and is strongly discouraged.

Jump Target: Garbage data

In this scenario, inadvertent memory corruption has occurred such that the CPU is about to jump to arbitrary data somewhere in memory. This scenario is probably the most common source of DEP/NX crashes, particularly when the crash occurs at a seemingly random time, or when a browser tab is closed.

This arbitrary data isn't usually chosen by an attacker, and usually doesn’t even represent sensible x86 instructions. For instance, the jump may be to an address near 0x000000 where no code is loaded (near-null jump), if a virtual function was called off an object pointer which has been nulled. Or, the jump may be to some other address where code used to exist (stale pointer) but that memory was later freed and potentially reused for another purpose.

In this “garbage data” scenario, the process will almost always crash, even if DEP/NX were not enabled. That’s because the CPU is very unlikely to reliably execute arbitrary data as sensible x86 instructions. Most likely, the process will crash within a microsecond with an exception like “Access Violation”, “Invalid Instruction”,  “Divide by 0” or similar.

Attackers look for this type of memory corruption to use as an entry point in their attacks; they may, for instance, “spray the heap” with many copies of their malicious data, then trigger the memory corruption vulnerability with the hope that the CPU will jump into a copy of their malicious code.

Resolving DEP/NX Problems

Your best bet to resolve DEP/NX problems in Internet Explorer is to first confirm that the problem is caused by a buggy browser add-on. You can do this by running IE in No Add-ons Mode. After confirming that the problem is related to an add-on, you should use the browser’s Manage Add-Ons feature to disable unwanted add-ons and find updated versions of any add-ons that you wish to keep.

If you find that you’re encountering DEP/NX crashes in multiple software applications, it’s possible that you have malicious or buggy system software installed (e.g. malware or a buggy anti-virus product). You should check your system for malware and ensure that you install the latest updates for your system software.

Frequent DEP/NX crashes also suggest that your computer might have a hardware problem (e.g. bad system memory). To help rule out hardware failure, you can use the Windows Memory Diagnostic.

Conclusion

DEP/NX provides an important defense against malicious websites that may try to exploit vulnerabilities in your add-ons or web browser. By ensuring that you are running the latest version of add-ons and system software, you can improve your security and minimize the incidence of DEP/NX crashes. If you're currently using an older version (6 or 7) of Internet Explorer that does not have DEP/NX protections enabled by default, you should upgrade to IE8 as soon as possible.

Thanks for reading!

-Eric

DotNet UserControls Restricted in IE8

In the past, Internet Explorer supported a really easy way to host .NET UserControls in HTML. These controls worked much like ActiveX controls, but because they ran with limited permissions, sandboxed by the .NET Framework, they would download and run without security prompts.

It was a very cool technology, but didn’t see much use in the real-world, partly because the .NET Framework wasn’t broadly deployed when the feature was introduced. Later, ClickOnce, WPF, and other technologies took center stage, leaving this relic around, mostly unused beyond developer demonstration pages and tutorials.

Until the summer of 2008, that is. At BlackHat 2008, security researchers Dowd and Sotirov revealed that the loader for UserControls enabled bypass of memory-protection mechanisms, meaning that browser vulnerabilities could be exploited with improved reliability.

While Protected Mode and other features are useful to constrain the impact of vulnerabilities, DEP/NX and ASLR memory protection are a very important part of the overall mitigation strategy. After investigating the options, crawling the web to examine use “in the wild,” and consulting with the .NET team, we elected to disable UserControls in the Internet Zone by default for IE8.

Since the UserControls feature was first introduced, IE’s security settings have allowed disabling ".NET Framework-reliant components," but the existing settings were overly broad: they controlled not only UserControls, but also out-of-process features like ClickOnce. Because out-of-process use of .NET is not a vector for memory-protection bypass in the browser, we chose to create a new URLAction that restricts only the use of UserControls.

IE8 introduced the URLACTION_DOTNET_USERCONTROLS setting, which allows .NET UserControls to load only from Intranet and Trusted pages by default. On Internet pages, the controls are blocked as if they had failed to download. This setting is not exposed in the Internet Options dialog or in the Group Policy editor; it can only be controlled via the registry.

Reducing attack surface by removing an extensibility feature was a painful decision, but ultimately a good one. Not long after we made this change, the new URLAction cleanly blocked exploitation of a browser vulnerability that was unveiled at the CanSecWest security conference.

IE8 includes a number of important security features and defense-in-depth changes that raise the bar against the bad guys. If you haven’t upgraded yet, you should do so today!

thanks,

-Eric

Posted by EricLaw | 2 Comments
Filed under:

The User-Agent String: Use and Abuse

When I first joined the IE team five years ago, I became responsible for the User-Agent string. While I’ve owned significantly more “important” features over the years, on a byte-for-byte basis, few have proved as complicated as the “simple” UA string.

I (and others) have written a lot about the UA string over the years. This post largely assumes that you’re familiar with what the user-agent string is and what it’s commonly (mis)used for. 

In this post, I’ll try to summarize why the UA string causes so many problems (beyond browser version sniffing), and expose the complex tradeoff between compatibility and extensibility.

Background

First things first-- you can check the UA string currently sent by your browser using my User-Agent string test page.

Do you see anything in there that you weren’t expecting?

Changing the User-Agent String at Runtime

For IE8, we fixed significant bugs in the UrlMkSetSessionOption API, which allows setting of the User-Agent for the current process. Before IE8, calling this API inside IE would (depending on timing) set the User-Agent sent to the server by WinINET, or set the User-Agent property in the DOM, but never properly set both.

I developed a simple User-Agent Picker Add-on for IE8 that allows you to change your User-Agent string to whatever you like. You can then easily see how websites react to various UA strings. For instance, try sending the GoogleBot UA string to MSDN to see how that site is optimized for search.

Internally, the add-on simply exercises the URLMon API:

UrlMkSetSessionOption(URLMON_OPTION_USERAGENT, szNewUA, strlen(szNewUA), 0)

Alternatively, Web Browser Control hosts can change the User-Agent string sent by hyperlink navigations by overriding the OnAmbientProperty method for DISPID_AMBIENT_USERAGENT. However, the overridden property is not used when programmatically calling the Navigate method, and it will not impact the userAgent property of the DOM's navigator or clientInformation objects.

Extending the User-Agent String in the Registry

It’s trivial to add tokens to the User-Agent string using simple registry modifications. Tokens added to the registry keys are sent by all requests from Internet Explorer and other hosts of the Web Browser control. These registry keys have been supported since IE5, meaning that all currently supported IE versions will send these tokens.
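If memory serves, the keys in question live under the Internet Settings\User Agent key; a .reg fragment adding a hypothetical token might look like the following (the "Post Platform" subkey name is from my recollection of the documentation-- verify it before depending on this):

```reg
Windows Registry Editor Version 5.00

; Adds the hypothetical token "SampleToken 1.0" to the platform section
; of IE's User-Agent string. The value name is the token; the data is ignored.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform]
"SampleToken 1.0"=""
```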

Other browsers (Firefox, Chrome, etc) do not offer the same degree of ease in extending the UA string, so it’s uncommon for software to extend the UA string in non-IE browsers.

The Fiasco

Unfortunately, the ease of extending IE’s UA string means that it’s a very common practice. That, in turn, leads to a number of major problems that impact normal folks who don’t even know what a UA string is. 

A few of the problems include:

  1. Many websites will return only error pages upon receiving a UA header over a fixed length (often 256 characters).
  2. In IE7 and below, if the UA string grows to over 260 characters, the navigator.userAgent property is incorrectly computed.
  3. Poorly designed UA-sniffing code may be confused and misinterpret tokens in the UA.
  4. Poorly designed browser add-ons are known to misinterpret how the registry keys are used, and shove an entire UA string into one of the tokens, resulting in a “nested” UA string.
  5. Because UA strings are sent for every HTTP request, they entail a significant performance cost. In degenerate cases, sending the UA string might consume 50% of the overall request bandwidth.
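A quick back-of-the-envelope sketch of problem #5, using a made-up (but realistically bloated) UA string and an assumed 300 bytes for the rest of the request:

```python
# Illustrating problem #5: a bloated UA can approach half of the upstream
# bytes for a small request. All sizes here are assumptions for illustration.
ua = ("Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; "
      "SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; "
      "InfoPath.2; OfficeLiveConnector.1.3; Zune 3.0; GTB6; chromeframe)")

other_request_bytes = 300    # request line, Host, Accept, cookies, etc. (assumed)
ua_bytes = len("User-Agent: ") + len(ua) + 2    # header name + value + CRLF

share = ua_bytes / (ua_bytes + other_request_bytes)
print(f"UA header: {ua_bytes} bytes ({share:.0%} of this request's bytes)")
```

And remember that this cost is paid on every single HTTP request, including tiny ones for images and scripts.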

Two real-world examples:

My bank has problem #1. They have security software on their firewall looking for “suspicious” requests, and the developers assumed that they’d never see a UA over 256 bytes.

Some major sites use super-liberal UA-parsing code (problem #3) to detect mobile browsers. For instance, Creative Labs adds the token "Creative AutoUpdate" to the UA string. Naive server code sees the characters "pda" inside that token (within the word "AutoUpdate") and decides that the user must be on a mobile browser. The server might then return WML content that the desktop browser will not even render, or provide an otherwise degraded experience. Worse still, some sites don't send a Vary: User-Agent response header when returning the mobile content, meaning that network proxies will sometimes start sending everyone content designed for mobile devices.
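Here's a sketch of the kind of naive sniffing described above; the marker list is an assumption, but the substring match against "Creative AutoUpdate" is real:

```python
# Naive mobile detection: substring checks over the whole UA string,
# rather than parsing individual tokens. (Marker list is an assumption.)
MOBILE_MARKERS = ["pda", "wap", "symbian"]

def looks_mobile(ua):
    ua = ua.lower()
    return any(marker in ua for marker in MOBILE_MARKERS)

desktop_ua = ("Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; "
              "Trident/4.0; Creative AutoUpdate)")

# "AutoUpdate" contains the letters "pda", so this desktop browser is
# misclassified as a mobile device.
print(looks_mobile(desktop_ua))    # True, incorrectly
```

Token-aware parsing (splitting the parenthesized section on "; " and comparing whole tokens) would avoid this false positive.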

Ultimately, the problem is what economists call the Tragedy of the Commons, although personally I prefer the visual representation. You might remember that the extensibility of the Accept header leads to the same problem, although that header is sent so unreliably that no sane website would depend upon it.

Standards

It’s tempting to look to the standards for restrictions on the UA string. Unfortunately, the RFC for HTTP has little to say on the topic:

14.43 User-Agent

The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations. User agents SHOULD include this field with requests. The field can contain multiple product tokens (section 3.8) and comments identifying the agent and any subproducts which form a significant part of the user agent. By convention, the product tokens are listed in order of their significance for identifying the application.

User-Agent = "User-Agent" ":" 1*( product | comment )

Example:

User-Agent: CERN-LineMode/2.15 libwww/2.17b3

Notably, the RFC does not define a maximum length for the header value, and does not provide much guidance into what “subproducts which form a significant part of the user agent” means. It suggests a few broad uses of the UA string on the server-side, without discussion of what problems such usage might introduce.

Motivations for UA Modification

OEMs and ISVs have a number of motivations for adding to the UA string.

  1. Metrics. Every server on the web can easily tell if your software is installed.
  2. Client capability detection. JavaScript can easily detect if your (ActiveX control / Protocol Handler / Client application / etc) is available.
  3. User Tracking. I don’t know of any current offenders, but at some point in the past some software would add a GUID token to the UA string. This token would effectively act as an invisible “super-cookie” that would be sent to every site the user ever visited.

Now, scenario #3 is clearly evil, and we have no desire to support it. Scenarios #1 and #2 aren’t inherently bad—but advertising to every site in the world that a given piece of software is available on the client is probably the wrong design.

Known UA Tokens

Here are some explanations of common tokens found in real-world IE UA strings.

  • SV1: Security Version 1; indicates that XP SP2 was installed. Removed from IE7.
  • SLCC1: Software Licensing Commerce Client; indicates the Vista+ AnyTime Upgrade component is available.
  • MS-RTC LM 8: Microsoft Real Time Conferencing; Live Meeting version 8
  • InfoPath.2: InfoPath XML MIME Filter
  • GTB6: Google Toolbar
  • Creative AutoUpdate: Creative AutoUpdate software
  • Trident/4.0: IE8 version of the HTML renderer is installed
  • Zune 3.0: Zune Software client
  • Media Center PC 6.0: It's a Media Center PC
  • Tablet PC 2.0: It's a Tablet PC
  • .NET CLR 3.5.30729: The .NET Common Language Runtime
  • chromeframe: Google ChromeFrame add-on
  • fdm: FreeDownloadManager.org add-on
  • Comcast Install 1.0: Comcast High-speed Internet installer
  • OfficeLiveConnector.1.3: Office Connector
  • OfficeLivePatch.0.0: ??
  • WOW64: Running in 32bit IE on 64bit Windows
  • Win64; x64: Running in 64bit IE
  • msn OptimizedIE8: Installed with MSN branding and services
  • yie8: Installed with Yahoo! branding and services

Alternatives to UA Modification

In many cases, allowing client-side script to detect a capability without forcing the browser to send that information to the server would be sufficient. While new APIs might be proposed for this purpose, we need an alternative that already works in all versions of IE.

You probably know that Conditional Comments can be used to detect the IE version, but they can also be used to detect custom information about any component listed in the registry’s version vector key. For instance, Windows 7 uses the new WindowsVersion entry to allow script to detect the OperatingSystemSKU.

To expose your capabilities via conditional comments, simply create a REG_SZ inside HKLM\SOFTWARE\Microsoft\Internet Explorer\Version Vector. The new entry should be named uniquely (e.g. EricLawSampleAddon, matching the name used in the conditional comments below) and contain a string in the format x.xxxx (e.g. 1.0002).
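For instance, a .reg fragment creating the hypothetical entry used in the conditional comments below might look like this:

```reg
Windows Registry Editor Version 5.00

; Registers a hypothetical add-on version for conditional-comment detection.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Version Vector]
"EricLawSampleAddon"="1.0002"
```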

You can then detect the version (or absence) of your component using conditional comments:

<!--[if !EricLawSampleAddon]><script>alert("You don’t have my IE add-on yet. Go install it!");</script><![endif]-->
<!--[if lt EricLawSampleAddon 1.0002]><b>You have an outdated version. Go upgrade!</b><![endif]-->

These conditional comments are hidden from non-IE browsers, and will work properly in IE5 and above.

Conclusions?

Extensibility is an important aspect for any major software project, but can also be the source of severe compatibility problems that are extremely painful to fix in the future. As we increase the power of the web platform, we need to find ways to ensure that extension points and the tragedy of the commons don’t destroy the user’s experience.

Until next time,

-Eric

Good News: Microsoft Security Essentials Released

Microsoft Security Essentials (MSE), Microsoft’s new anti-virus / anti-malware realtime scanner, is now available as a free download. Installing MSE, a traditional signature-based scanner, alongside IE8’s URL-reputation-based SmartScreen Filter yields comprehensive protection to help keep your computers safe from malicious software.

There are a few things I like about MSE over other scanners:

  1. You won’t see advertisements trying to “upsell” you to a professional version.
  2. You won’t see “scareware” style warnings trying to convince you that MSE is providing value-- “oh my gosh, we found a cookie! Panic!”
  3. Signature updates are free—there’s no “subscription” that will expire and leave you unprotected.
  4. The product doesn’t install a bunch of 3rd party toolbars or other such nonsense— unfortunately, a common business model for other “free” products.

The product has been getting some great reviews. I’ll definitely be installing this on my parents’ computer the next time I’m home. :-)

-Eric

Posted by EricLaw | 0 Comments
Filed under: ,