The appearance of "Ask the Perf Guy: Sizing Exchange 2013 Deployments" on the EHLO blog on May 6 is a very good thing because it represents the first formal word from Microsoft about how to approach Exchange 2013 server sizing. There's lots of good information in the document (its length makes it much more than a normal blog post) and the content is well worth reading, if only to help you understand how you might begin to configure servers for your own company. Unfortunately, no tool accompanies the paper to make its formulae more approachable for those who do not spend their entire time thinking about server sizing. The absence of sizing tools some eight months after Exchange 2013 achieved RTM is curious, is it not?
We were spoilt by the comprehensive nature of the sizing tools provided by Microsoft to guide the planning for Exchange 2010 deployments, including the world’s most complex Excel spreadsheet (the Exchange 2010 mailbox server role requirements calculator). Some of the fundamental underpinnings of the Exchange 2010 calculator need to be modified because of changes to the Exchange 2013 architecture such as the reduction to 50 from 100 for the maximum number of databases that can be mounted on a mailbox server.
Other big changes that affect sizing models are the introduction of the “Managed Store” and a new ESE memory allocation model, the introduction of the Managed Availability framework, and the switch to the Search Foundation for indexing content in mailbox databases. I suppose the fact that modern public folders are now stored in mailbox databases might also influence matters, but only slightly.
Nevertheless, you would imagine that change and evolution are second nature to software engineers and that this should not delay an update for the sizing tools, especially if Microsoft wants to encourage customers to migrate from now-legacy versions of Exchange to Exchange 2013 - and to do so at a faster rate than was achieved with previous migrations such as Exchange 2003 to Exchange 2007.
You might also wonder why Microsoft has not used the mass of performance data that it has gathered through the use of Exchange 2013 in the form of Exchange Online in Office 365. The on-premises and cloud versions of Exchange now share the same code base, so you’d assume that they share the same performance characteristics. But then you’d miss the point that Microsoft uses highly standardized server configurations in its datacenters. In other words, if you use exactly the same server type, storage configuration, and memory that Microsoft deploys for its Exchange Online servers, then that data would be very helpful to you. On the other hand, using cloud platform data to create a generalized tool that is capable of handling many different kinds of on-premises server configurations is not without its challenges, so Microsoft's current information is based on their "Dogfood" environment, which might or might not be similar to your environment.
My guess is that the hyper-active nature of email within Microsoft is a characteristic shared by few other companies, but I could be wrong. On the other hand, sizing based on an average message size of 75KB seems right on the money to me, unless your users send even more graphic-rich PowerPoint presentations to each other than is the norm. The point is that the formulae are there for you to make sense of in your own context. You, after all, should have a fair appreciation of the usage patterns, peaks and troughs, and user habits that pertain within your company, and so should be able to take the generalized approach described by Microsoft and use it to make a fair guess as to what kind of Exchange 2013 servers to deploy.
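To make the "your own context" point concrete, here is a minimal sketch of the kind of back-of-envelope arithmetic the sizing formulae involve. Only the 75KB average message size comes from the guidance itself; the per-user message profile is an assumption you would replace with figures from your own environment, and this is an illustration of the approach, not Microsoft's actual calculator.

```python
# Back-of-envelope mailbox growth estimate. The message profile below
# is a hypothetical example value, not a figure from Microsoft's paper.

AVG_MESSAGE_KB = 75       # average message size cited in the sizing guidance
MESSAGES_PER_DAY = 100    # assumed per-user send/receive profile (replace with yours)

def daily_growth_mb(messages_per_day: int = MESSAGES_PER_DAY,
                    avg_message_kb: int = AVG_MESSAGE_KB) -> float:
    """Rough daily mailbox content growth in MB, before deleted-item
    retention, single-instance effects, or database overhead."""
    return messages_per_day * avg_message_kb / 1024

if __name__ == "__main__":
    print(f"Estimated growth: {daily_growth_mb():.1f} MB per mailbox per day")
```

Plug in your own message counts and sizes and the same arithmetic gives a first approximation of how quickly mailbox databases will grow, which in turn feeds the storage and memory calculations described in the paper.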
In terms of sizing tools, we continue to wait until the white smoke appears over Redmond. Or something like that. In the meantime, let me predict that the new models will emphasize servers equipped with more memory than used by Exchange 2010. The logic is inescapable. Microsoft has steadily traded disk I/O for memory since Exchange 2003 to make Exchange perform well with low-cost storage, an economic benefit for both on-premises customers and those who deploy massive cloud-based services. Caching data in memory rather than going to disk is a very good thing, but you need servers equipped with sufficient memory to make everything work.
It’s not just the Information Store either. Other processes such as the Search Foundation enjoy wallowing in memory too (all of those important thoughts contained in email have to be indexed and kept ready for consultation) and new features like Managed Availability take their toll too. So be prepared for Exchange 2013 servers to want lots of memory. Or maybe more than lots.
Follow Tony @12Knocksinna