The notion of “best practice” has long been an elastic commodity when applied to Exchange architecture and design. Consultants and consulting companies have professed to have their own magic method to ensure the successful deployment of Exchange since the first version appeared nearly twenty years ago. There’s nothing harmful in following a successful recipe. Problems only appear when people refuse to change the recipe to reflect new developments.
Over the years, attempts have been made to crystallize “best practice,” only to find that it is very difficult to pin down when new knowledge brings new insights on an ongoing basis. Microsoft made the Exchange Best Practices Analyzer (ExBPA) available to help. ExBPA applies rules to assess aspects of a deployment against guidelines set by Microsoft and does a good job of telling people where divergences exist. The struggle with ExBPA is keeping its rules updated to reflect new knowledge and new developments in the software. Although a version of ExBPA is available for Exchange 2013, Microsoft hasn’t formally announced whether they will update it for Exchange 2016.
Apart from publishing blog posts on different aspects of best practice and recommending that customers run ExBPA to validate their deployments, until recently, Microsoft seemed happy enough to let customers do their own thing when it came to Exchange deployments. The upshot is that many weird and wonderful designs have appeared in production. Good reasons lay behind the choices that drove the designs, but invariably some deployments ran into problems that could be linked back to choices made in the design. Often, these problem environments end up with Microsoft Premier Support or a consulting company, who have to figure out how to reverse-engineer the deployment to bring it back to a supportable state.
But that was then and this is now. With the release of Exchange 2016, Microsoft is becoming more prescriptive about the way that they want to see customers deploy Exchange and have described their views in the Exchange 2016 Preferred Architecture, their “best practice recommendation for what we believe is the optimum deployment architecture for Exchange 2016”.
Of course, everyone is entitled to their opinion, especially paying customers, and you do not have to deploy Exchange 2016 according to the product group’s wishes if you don’t want to. Microsoft admits that there are other ways to deploy Exchange 2016 but makes their view clear by saying “While there are other supported deployment architectures, they are not recommended.” You might take this to mean that system architects had better have a very good reason if they wish to deviate from the preferred architecture. However, support for a deployment of Exchange 2016 is not tied to the use of the preferred architecture, and your calls won’t be declined if your organization uses a different approach.
What Microsoft is saying is that:
- They have invested a lot of time to develop the preferred architecture and would like to see customers use it whenever possible.
- The preferred architecture inherits and uses many of the lessons learned from the operation of Exchange Online within Office 365.
Accordingly, if you need to deploy Exchange across multiple datacenters, it makes good sense to use the preferred architecture as the basis for the design because of the experience drawn from Exchange Online, especially when it comes to namespace design and datacenter pairing. The preferred architecture makes eminent sense in both categories.
Things get a little trickier when server design comes into play. The product group doesn’t recommend virtual servers, saying “Virtualization adds an additional layer of management and complexity, which introduces additional recovery modes that do not add value, particularly since Exchange provides that functionality.” The folks running Hyper-V and VMware farms might beg to differ, but it is hard to argue against the point that virtualization does imply more complexity in an environment, which is one of the reasons why it’s not used inside Office 365. As in Office 365, the preferred architecture is based around the use of physical servers because physical servers are easier to manage.
The distaste exhibited by the product group for virtual servers does not mean that you should never virtualize Exchange. Remember, this is a preferred architecture and choices have had to be made, just like you would for any system architecture. If you opt to virtualize Exchange for your own good reasons, just be sure that the servers are virtualized according to the guidelines of the product group and don’t attempt anything funky.
Microsoft recommends commodity servers. I don’t think they mean that you should white-box the servers. Rather, it’s a case of settling on a server design that exhibits the following characteristics:
- 2U, dual socket servers (20-24 cores)
- up to 96 GB of memory
- a battery-backed write cache controller
- 12 or more large form factor drive bays within the server chassis (a RAID1 pair for the binaries, transport database, and protocol and other logs; the rest are large-capacity 7.2K RPM serially attached SCSI (SAS) disks)
You can choose Dell, HP, or whatever server vendor you want. The point is that you should have a simple, straightforward server configuration that is capable of running Windows 2012 R2, participating in a Database Availability Group (DAG), and presenting JBOD storage for the mailbox databases. Disks used by mailbox databases are formatted with ReFS (with the integrity option disabled) and the DAG is configured so that AutoReseed uses ReFS when it reseeds databases onto spare disks. Disks are protected with BitLocker, so the server has to include a Trusted Platform Module (version 2.0).
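To make the storage configuration concrete, here is a minimal Exchange Management Shell sketch of the steps just described. The drive letter, volume label, and DAG name (DAG1) are hypothetical, chosen for illustration only:

```shell
# Format a mailbox database disk with ReFS and the integrity option
# disabled, per the preferred architecture guidance:
Format-Volume -DriveLetter F -FileSystem ReFS -SetIntegrityStreams $false `
    -NewFileSystemLabel "ExVol1"

# Tell the DAG that AutoReseed should use ReFS when it formats spare
# disks to reseed database copies:
Set-DatabaseAvailabilityGroup -Identity DAG1 -FileSystem ReFS

# Protect the data volume with BitLocker (the server's TPM 2.0 protects
# the OS volume; data volumes get a recovery password protector):
Enable-BitLocker -MountPoint "F:" -RecoveryPasswordProtector
```

These commands assume an Exchange 2016 server, so treat them as a sketch of the sequence rather than a copy-and-paste script.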
Of course, these are hefty servers and your budget and user base might not extend to servers like these, so you can certainly adjust downward based on guidance from the Exchange 2013/2016 Server Role Sizing Calculator.
The preferred architecture uses the DAG as the fundamental building block. A DAG can extend from one member to sixteen members, but as the architecture envisages Exchange 2016 being deployed across multiple datacenters to serve reasonably large organizations, there’s room for adjustment here too. For instance, the preferred architecture uses four copies of each database, one of which is a lagged copy. This is because Microsoft uses Native Data Protection and doesn’t take backups, just as they do inside Exchange Online. You might take a more conservative view and wish to continue taking backups; in that case, you might eliminate the lagged copy on the basis that backups provide equivalent or better protection against logical corruption. On the other hand, all available evidence indicates that Exchange Online successfully uses Native Data Protection, so perhaps the changeover to Exchange 2016 is the time to consider changing a backup regime that might date back to the last century?
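The four-copy layout can be sketched in the Exchange Management Shell as follows. The database and server names (DB01, EX2–EX4) are hypothetical; the seven-day replay lag matches the value Microsoft uses for the lagged copy in the preferred architecture:

```shell
# Three highly available copies plus one lagged copy of database DB01.
# The first copy already exists on the server where DB01 was created.
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX2 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX3 -ActivationPreference 3

# The fourth copy is lagged: log replay is held back by seven days so the
# copy can be used to recover from logical corruption.
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX4 -ActivationPreference 4 `
    -ReplayLagTime 7.00:00:00
```

If you decide to keep taking backups and drop the lagged copy, you would simply omit the final command.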
Apart from the option of placing the DAG witness server in Azure, there’s no mention of using Azure to host any part of the preferred architecture. Better performance and payback can be gained by running Exchange in your own datacenters. This isn’t at all surprising because the economics of Azure-based Exchange are still very unproven.
Although the preferred architecture takes some account of Office Online Server (needed for online viewing of attachments), it does not cover any third-party products that you might use alongside Exchange. For instance, no account is taken of backup, anti-malware, reporting and monitoring, or any of the other common third-party software that often builds out an Exchange environment. It’s entirely understandable that this should be the case, as the product group obviously doesn’t want to get into the complexities of testing and validating third-party software against Exchange 2016.
Microsoft doesn’t require you to use their preferred architecture for Exchange 2016, but a strong hint is present that using the architecture gives organizations a better chance of deploying Exchange 2016 right. That notion might be correct, but any IT architecture must take business and other operational requirements into account before settling on “the right answer”. With this in mind, perhaps it’s best to use the preferred architecture as your starting point and work from there. You might decide to use Microsoft’s recommendations or you might not, but at least you’ll have debated the issues and understand why your deployment differs from the way that Microsoft would do things.
Follow Tony @12Knocksinna