Exchange 2013 is a resource hog - no surprise there

Each time a new version of Exchange appears, we have a debate about the escalating hardware requirements. Everyone gets very upset that the new software demands more memory, more CPU, more disk, and possibly more knowledge on the part of those who work with the product. The same cycle of realization, debate, and condemnation (of Microsoft, because their software costs so much money to deploy) has happened since Exchange 4.0 appeared in 1996. I guess I shouldn’t have been surprised to hear people complaining that Exchange 2013 wallows in memory, seizes every CPU cycle that comes its way, and spreads its ample posterior across vast expanses of disk.

Exchange 2013 is guilty as charged. If you haven’t yet gone through the joyous task of designing suitable server configurations for an Exchange 2013 deployment, you might be shocked to see the kind of production servers used for Exchange 2013. Take whatever you used for Exchange 2010 and add 50% more CPU cycles and 50% more RAM. Disk is not so much of an issue because costs keep dropping and volumes keep expanding, but you’ll need more of that too.
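As a rough illustration of that rule of thumb, a back-of-the-envelope calculation might look like the sketch below. The baseline figures are purely hypothetical, not drawn from any real deployment, and the 50% uplift is the rule of thumb above rather than official sizing guidance.

```python
# Back-of-the-envelope sizing: apply the "Exchange 2010 + 50%" rule of thumb.
# The baseline figures are illustrative assumptions, not measured values.
ex2010_baseline = {"cpu_cores": 8, "ram_gb": 48}

GROWTH_FACTOR = 1.5  # roughly 50% more CPU and RAM for Exchange 2013

ex2013_estimate = {
    resource: round(value * GROWTH_FACTOR)
    for resource, value in ex2010_baseline.items()
}

print(ex2013_estimate)  # {'cpu_cores': 12, 'ram_gb': 72}
```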

Before you rush to pen a note of complaint to the powers-that-be in Redmond, let me ask: why would you be so surprised? The era of machine code programming is long gone and I don’t think that many developers worry about the few extra bytes required here and there or the couple of additional cycles needed to execute some code that could be tightened if time allows. Registry hacks are still common and pragmatic get-the-job-done coding is practised. Not always, but enough to create a swelling code base for most products.

You could argue that Microsoft should do a better job of tuning its software to the nth degree. But they won’t, because that’s not the way software engineering happens today. Remember, we march to the cadence of the cloud and its ceaseless demand for new features and updates. Time doesn’t wait and code has to be done if users are to be kept happy. And of course, to provide material for commentators to write about.

Exchange 2013 has a lot of new code. The Information Store was rewritten; the management console discarded in favour of a web-friendly interface; MSSearch ended up being dumped because the Search Foundation allows for commonality across Exchange and SharePoint; the CAS isn’t the same kind of CAS it once was; Managed Availability exerts a pervasive influence across all components; Outlook Web App was rewritten (again) to accommodate tablets, smartphones, and the cool new design language (whatever it’s called today). A lot of change happened and many new features were introduced and quite a few dropped, some of which have returned belatedly. And the churn continues every quarter as each cumulative update appears.

The bottom line is that you cannot compare the hardware resources demanded by Exchange 2007 or Exchange 2010 with the bill presented by Exchange 2013. The comparison does not compute. We should therefore accept the beast that we have to tame and provide appropriate hardware resources for an Exchange 2013 deployment to be successful.

Those who provide planning tools like the Excel spreadsheet from hell (Microsoft’s Exchange 2013 server role requirements tool) do their level best to come up with reasonable server configurations. However, such tools are imperfect because they miss two major influences on real-life Exchange servers. The first factor is third-party products – every customer deployment that I have ever seen uses some third-party products alongside Exchange – and these products don’t run on air. The second factor is you. Or rather, how Exchange is managed within your company. It won’t be perfect (nothing ever is) and production requirements will introduce their own complexities.

The output from planning tools is therefore the first step along a path that eventually leads to a server order being given to a grateful sales representative. Use the recommendations as a point of discussion rather than a definitive answer and be prepared to add CPU, RAM, and disk to deal with your own circumstances.
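To make that concrete, here is a minimal sketch of how you might pad the calculator’s figures for third-party agents and operational overhead before placing that server order. The headroom percentages and input figures are assumptions for illustration only, not Microsoft guidance.

```python
# Pad the sizing calculator's output with headroom for third-party products
# (backup, anti-malware, monitoring agents) and day-to-day operational slack.
# Percentages and inputs below are illustrative assumptions, not recommendations.
def pad_calculator_output(cpu_cores: int, ram_gb: int, disk_tb: float,
                          third_party_headroom: float = 0.15,
                          operational_headroom: float = 0.10) -> dict:
    factor = 1 + third_party_headroom + operational_headroom
    return {
        "cpu_cores": round(cpu_cores * factor),
        "ram_gb": round(ram_gb * factor),
        "disk_tb": round(disk_tb * factor, 1),
    }

# Example: figures from a hypothetical run of the sizing spreadsheet.
print(pad_calculator_output(cpu_cores=12, ram_gb=96, disk_tb=8.0))
# {'cpu_cores': 15, 'ram_gb': 120, 'disk_tb': 10.0}
```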

Looking at some hardware configurations that have been put into production, some who planned large deployments of Exchange 2013 have told me that the numbers for a virtualized platform simply didn’t stack up. The combination of heavier hardware requirements plus the overhead imposed by the hypervisor in terms of added cost, performance, and complexity tipped the balance toward using physical servers. The same outcome happened for both Hyper-V and VMware.

Microsoft doesn’t have to pay for software for its own deployments in the same way as other commercial entities do. (I imagine that some internal accounting cost is levied.) Even so, the team who designed and deployed Exchange Online inside Office 365 also elected to use physical servers. Office 365 is not a good example to cite in some ways because no one else will deploy at the same scale. Nevertheless, I find it interesting that virtualization is not used. It’s just one of the quirks of the Office 365 configuration alongside their use of single-role Exchange servers and a ruthless (but understandable) dedication to standardization.

There is no doubt that a shift has occurred in the economics of Exchange over the last ten years. The success that Microsoft has had in swapping slow disk I/O for fast cached memory and the introduction of high availability in the form of the Database Availability Group has permitted a move away from expensive Storage Area Networks (SANs) to low-cost JBOD.

The SAN once occupied the center of any discussion about large-scale deployments, an understandable situation when single database copies were the rule and nasty -1018 errors lurked in the undergrowth. A reasonable amount of SAN technology is still used with Exchange, especially where companies seek to deploy a unified storage architecture to serve multiple applications rather than taking the application-centric view into which the debate around JBOD for Exchange often descends.

SAN or JBOD, the fact remains that disk is cheaper today than ever before. Coupled with multiple database copies and features such as autoreseed and background mailbox moves, the cheapness of disk encourages hardware designs based on fast CPUs, lots of memory (to cache all that data), and scads of cheap disks that will be discarded without the blink of an eye should a problem occur. And that recipe is exactly what many Exchange 2013 deployments have decided to use.

I expect that the same conversation will happen after Microsoft releases Exchange 2016 in late 2015 (a prediction fully justified by the recent release history). We will moan about the 256GB RAM requirement for mailbox servers, decry the 64-core CPUs required to make the management tools wake up, and condemn the petabyte of free disk required to install the product. The shape of the argument doesn’t change – only the numbers. So put some lipstick on Exchange 2013 and give the server enough resources to do its work. You know it makes sense!

Follow Tony @12Knocksinna
