The road to economic competitiveness - ten years of I/O reduction for Exchange

It’s important not to get too carried away when you listen to the representatives of any company wax lyrical about the wonders of the company’s products. Unless of course you’re a member of a cult, in which case it’s probably a good idea to show an appropriate level of enthusiasm when being briefed about some new development or other.

Microsoft holds that the reduction in I/O demanded by Exchange over the last three releases is a very good thing. Indeed, the claim made at the recent Microsoft Exchange Conference (MEC) is that Exchange 2013 is 99% more efficient than Exchange 2003 when it comes to I/O operations generated by the Information Store. Many jokes were made that Exchange will soon be in the position of giving I/O back to its customers, an odd situation should it ever come about.

But as I listened to the claims, a horrible feeling came over me that the achievement of Exchange 2013 is simply a case of Microsoft catching up with Google in terms of operational economics. Let me explain why.

When Google launched Gmail on April 1, 2004, they enjoyed the luxury of not having to deal with an installed base. They then took advantage of being able to continually tweak a product that seemed to be in an almost-perpetual beta before removing the beta label in July 2009 in an attempt to make Gmail more acceptable to businesses. More importantly, Google enjoyed enormous economic advantages from the start by building Gmail on top of commodity hardware, running on Google’s own version of Linux and the Google File System. Sure, Google had to pay for the datacenters, power and cooling, operations, and so on to support Gmail (a large investment in itself), but it offset these costs with the income derived from Search and other products.

Measured against Gmail’s economics, Exchange left Microsoft in no position to compete. The version then in use (Exchange 2003) was tied to very expensive Storage Area Networks (SANs) for large-scale deployments, as these were the only platform capable of delivering the I/O capacity required to support large numbers of connected users. Its support for automation was weak or non-existent. Like Google, Microsoft controlled the operating system and application, but the servers it used were similar to those commercially available from mainline vendors such as HP and Dell, not the cut-down, purpose-designed, low-cost hardware used by Google. Hosted Exchange existed in 2004, but not at the scale of Gmail.

I’m not sure that Microsoft took Gmail very seriously at first. The folks running Hotmail were probably more concerned than the Exchange engineering group, but the success of Gmail soon made it a competitive influence, especially in terms of the multi-gigabyte mailbox offered by Gmail from 2005 onwards and its compelling price point (zero). Hindsight is a wonderful teacher and looking back now it seems clear that Gmail provided Exchange with a new baseline for performance and price to which Microsoft had to respond.

Exchange 2007 came along too soon for much of a response. We saw the first attempt to reduce I/O and cost through many tweaks to the Store as well as the beginning of what has become a very good high availability story with the introduction of Cluster Continuous Replication (CCR). More importantly, Exchange embraced PowerShell as the foundation for its management interfaces and as a route to automation. Microsoft based its first hosted service, Business Productivity Online Services (BPOS), on Exchange 2007 and rapidly discovered many inadequacies that had to be fixed in order to improve reliability and drive down cost.

Exchange 2010 is much more representative of Microsoft’s engineering response to Gmail. The Store was overhauled with a new schema and internal layout, high availability was dramatically enhanced through the Database Availability Group, the Mailbox Replication Service (MRS) provided a seamless method of moving mailboxes in the background, and Remote PowerShell delivered the foundation for remote automated management of many servers. With Exchange 2010, Microsoft was confident enough in the performance of the Store to recommend deployment on low-cost disks, a major change in the economic calculations around any Exchange deployment. Collectively, these changes enabled Microsoft to launch a very successful and cost-competitive Exchange Online service as part of Office 365.

And soon we will have Exchange 2013, a version that completes the transition of Microsoft’s engineering focus from on-premises enterprise deployments to the cloud. Alongside it we have Windows Server 2012, an operating system built with automation in mind, designed to take administrators’ fingers off keyboards connected directly to physical servers in favour of a much more distributed management model.

It would be nice to achieve a 99% reduction in I/O in production environments, but I suspect that few customer deployments will be capable of measuring such a reduction, if only because sufficient data from the past is unavailable. However, I’m not fixated on that 99% headline figure. What’s much more important is to consider just how far Exchange has come since Exchange 2003 and how the presence of a formidable competitor has “encouraged” Microsoft to improve the economics of Exchange over the last decade. Every Exchange customer has gained from this effort.

It’s often said that competition drives innovation and improvement. In this case, I certainly think that the adage is true.

Follow Tony @12Knocksinna
