AJAX Performance Measurement Methodology for Internet Explorer 8 Beta 2
The AJAX Subsystems
In AJAX applications, data is retrieved asynchronously using the XMLHttpRequest object. An AJAX scenario can be visualized as an activity passing through a pipeline of logical subsystems within the browser stack. In order to understand the performance characteristics of AJAX applications, we need to understand what each of these subsystems do and how they interact with each other. Figure 1 shows the AJAX subsystems for Internet Explorer 8 Beta 2:
“Some of the tests we have done show pure JScript performance improvements up to 2.5 times. Key gains are in strings. We also measured the performance gains on common Gmail operations, like loading the inbox (24%), opening a conversation (35%) and opening a thread (25%) compared to IE7" - Greg Badros, Senior Director of Engineering, Google
Figure 1: The AJAX subsystems in Internet Explorer.
- Network: Whenever a user types in a URL to load a webpage, the browser communicates with the server over the network, and waits for a response from the server. The network is also responsible for asynchronous data exchange between the Web client and the server.
- Parsers: When data is received from the server, the parsers read, analyze, and convert it (HTML, CSS, XML, etc.) into the corresponding native object model format.
- Layout: Internet Explorer’s layout subsystem takes input from the parsers and computes the layout of the various components, which form the webpage.
- Rendering: Internet Explorer’s rendering engine does the final painting of the page (and any subsequent updates that are required).
- Native OM (or DOM): The DOM is the object representation of the website’s HTML and CSS content. The DOM also acts as a layer for communication between different browser components.
- JScript Engine: The JScript engine is Microsoft’s implementation of the ECMAScript language, based on the ECMA-262 3rd Edition standard. It contains the basic primitives (functions, objects, types, etc.) for performing various language operations.
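To make the pipeline concrete, the sketch below walks a typical asynchronous request through the subsystems listed above, with comments marking which subsystem handles each stage. The URL, element id, and payload shape are hypothetical placeholders, and the payload parser is a simplified stand-in for what a real application would do.

```javascript
// Sketch of an IE8-era AJAX round trip, annotated with the subsystem
// (from Figure 1) that handles each stage. URL and element id are made up.
function loadMessages(url, targetId) {
  // IE7+ exposes a native XMLHttpRequest; older IE needs ActiveXObject.
  var xhr = (typeof XMLHttpRequest !== "undefined")
    ? new XMLHttpRequest()
    : new ActiveXObject("Microsoft.XMLHTTP");
  xhr.open("GET", url, true);          // Network: asynchronous request to the server
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Parsers / JScript engine: turn the response text into script objects
      var data = parsePayload(xhr.responseText);
      // Native OM, then Layout and Rendering: the DOM write triggers
      // a reflow and repaint of the affected part of the page
      document.getElementById(targetId).innerHTML = data.html;
    }
  };
  xhr.send(null);                      // JScript engine drives all of the above
}

// Pure helper: extract the fields this sketch needs from a JSON payload.
// Uses eval because native JSON support was not yet universal in 2008.
function parsePayload(text) {
  var obj = eval("(" + text + ")");
  return { html: obj.html || "", count: obj.count || 0 };
}
```

Note that a single line such as the `innerHTML` assignment can touch three subsystems (Native OM, Layout, Rendering), which is why per-subsystem profiling is needed to see where the time actually goes.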
Several micro-benchmarks (for instance, SunSpider, Celtic Kane, RockStarApps, ZIMBRA) are commonly cited today to compare browser performance on AJAX applications. These micro-benchmarks typically measure the speed of narrow, isolated operations.
Micro-benchmarks are simple to create, easy to run, and provide a quick way for developers to check for regressions before making check-ins. But micro-benchmarks have limitations:
- They consist of simple operations that run several thousand iterations, which is generally not representative of real world applications.
- They can be written to exaggerate a particular behavior in a browser and browsers can be tuned to run certain micro-benchmarks very well.
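The first limitation above can be seen in a minimal micro-benchmark of the kind the suites named earlier contain. This is an illustrative sketch, not taken from any of those suites: it times a single primitive (string concatenation) in a tight loop, which exercises one corner of the JScript engine while leaving the parser, layout, rendering, and DOM subsystems idle.

```javascript
// Minimal micro-benchmark sketch: one primitive, many iterations.
// A good score here need not predict real AJAX application speed,
// because real pages mix all of the subsystems, not just one.
function benchStringConcat(iterations) {
  var start = new Date().getTime();
  var s = "";
  for (var i = 0; i < iterations; i++) {
    s += "x";                 // the single operation being exaggerated
  }
  var elapsed = new Date().getTime() - start;
  return { length: s.length, ms: elapsed };
}
```

An engine can also be tuned specifically for a loop like this (for example, by optimizing repeated string appends) and climb the benchmark rankings without getting any faster on real applications, which is the second limitation above.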
Microsoft’s goal in Internet Explorer 8 Beta 2 was to improve end-user perceivable performance so we were careful not to use micro-benchmarks as a singular metric to focus our engineering effort. Instead, we took a more balanced approach and used real world code in addition to micro-benchmarks to drive end-user visible AJAX performance improvement.
We use three different measurements to represent a more holistic view of AJAX performance:
- AJAX subsystem measurements
- Real World Code (RWC) measurements
- Micro-benchmark measurements
AJAX Subsystem Measurements
We measured the time profile of the AJAX subsystems using the ETW (Event Tracing for Windows) infrastructure in Internet Explorer 8 Beta 2. Using ETW events that mark off the AJAX subsystems, we can accurately measure the time spent within each subsystem, including CPU and elapsed time. This data can be collected while running any scenario. A select set of thirty popular websites was run to collect this data. The process in each case is to clear the cache, launch Internet Explorer 8 Beta 2 with a blank page, navigate to the site, wait for 45 seconds (to let any animation/layout settle down), and exit. Each test is run three times; the subsystem profiles are averaged, and the run closest to the average is used for the analysis. Figure 2 shows the subsystem times for each of these thirty sites.
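The run-selection rule above can be sketched as a small script: average the measured times across the three runs, then pick the run whose time is closest to that average. The sample values in the test are made up; times are assumed to be in milliseconds.

```javascript
// Sketch of the run-selection rule: average the measured times,
// then return the run closest to the average for analysis.
function pickRepresentativeRun(times) {
  var sum = 0;
  for (var i = 0; i < times.length; i++) {
    sum += times[i];
  }
  var avg = sum / times.length;
  var best = 0;
  for (var j = 1; j < times.length; j++) {
    // Keep the run with the smallest distance from the average.
    if (Math.abs(times[j] - avg) < Math.abs(times[best] - avg)) {
      best = j;
    }
  }
  return { average: avg, index: best, time: times[best] };
}
```

Picking the run closest to the average, rather than the average itself, keeps the analyzed profile internally consistent: all of its subsystem times come from one real run instead of a blend of three.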
Figure 2: CPU times per site/per AJAX subsystem (June 10 2008 Internet Explorer 8 build).
As part of the early access program to get customer feedback, we have been working with a few customers that develop AJAX applications. Part of this engagement includes getting comparative performance data on scenarios in their applications. We worked closely with the Google Gmail product team and focused on making engineering fixes that directly resulted in improved end-user performance. As a result of these efforts, we were able to speed up commonly used Gmail operations by 15% to 25% compared to Internet Explorer 7. We believe Gmail is quite representative of the current generation of AJAX applications in how it exercises the AJAX subsystems, so we expect other AJAX applications to see similar improvements.
Measuring real-world code is challenging in many ways. The scenarios are hard to automate and to replicate consistently. Isolating a scenario is difficult, and it is very easy to pick up “noise” in the measurements that leads to misleading performance data. Investment is needed to add the right level of instrumentation to get consistent and accurate measurements. Working closely with large AJAX customers has been our way of solving some of these challenges, and we continue to develop and hone this process.
AJAX performance measurement is a complex problem and we adopted a structured approach in Internet Explorer 8 Beta 2 to drive targeted performance improvements. Be sure to read the follow-up article in this issue, “Performance Improvements in Internet Explorer 8” that describes the actual changes.
By: Shreesh Dubey
For more information, check out these resources:
Internet Explorer Developer Center - http://msdn.microsoft.com/ie
Internet Explorer Team Blog - http://blogs.msdn.com/ie/
JScript Team Blog - http://blogs.msdn.com/JScript/
The micro-benchmarks referenced in this article are listed below:
Celtic Kane: http://celtickane.com/webdesign/jsspeed2007.php