Scaling Out OWA

Guidelines for implementing OWA in a large environment

More and more organizations are turning to Outlook Web Access (OWA) as an Exchange 2000 Server client, thanks to OWA's almost invisible desktop footprint, roaming capability, and steadily improving functionality and usability. If you're considering OWA as your company's client of choice, you might benefit from the insight I gained designing and testing an institutional, ISP-style, data-center-based Exchange 2000 messaging environment for 350,000 users.

Sizing
The first step in the process was determining a realistic number of active users. Knowing that the messaging system needed to support 350,000 users was only the start of the sizing process.

The first point that the design team needed to clarify was whether 350,000 users meant 350,000 users logged on and sending email at the same time. In our institutional environment, only 40,000 workstations were available at any one time, so the ratio of total users to concurrently logged-on users (i.e., mailboxes to logged-on users) was about 9:1. (This figure is generally accurate for most ISPs as well.) Furthermore, we could safely assume that not all the workstations would be occupied at the same time. And just because a user was logged on to a workstation didn't mean that the user was using the email system. Even a user connected to his or her Exchange 2000 mailbox might not be an active user (i.e., a user who has executed an Exchange transaction such as sending, opening, or deleting a message within the past 10 minutes). After applying these reductions to the initial figure of 350,000 users, the design team established that we needed to scale the system to support 20,000 active users.
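As a sanity check, the concurrency arithmetic is simple enough to sketch in a few lines of Python. The occupancy and activity fractions below are illustrative assumptions (chosen so the result lands near our 20,000 figure), not measured values:

# Rough concurrency estimate for sizing (illustrative assumptions only)
total_mailboxes = 350000       # provisioned mailboxes
workstations = 40000           # seats available at any one time

mailbox_to_seat_ratio = total_mailboxes / workstations   # roughly 9:1

# Assumed fractions (hypothetical): not every seat is occupied, and not every
# logged-on user has driven an Exchange transaction in the past 10 minutes.
seat_occupancy = 0.7
active_fraction = 0.7

active_users = workstations * seat_occupancy * active_fraction
print(round(mailbox_to_seat_ratio, 1), round(active_users))   # ~8.8 and ~19,600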

Scaling
When discussing any computer application, you often hear the terms vertical scalability, or scaling up, and horizontal scalability, or scaling out. We needed to consider both forms of scalability.

We wanted to implement a platform that would support 20,000 users. From a vertical-scalability perspective, we probably could have supported that many users on one server—a solution that would offer the highest possible level of consolidation, which is an industry trend today. But what about redundancy? We probably wouldn't want to have all 20,000 users on just one server, no matter how powerful that server might be. Distributing the workload across several systems is the more common approach because it limits the impact of any one server's downtime and spreads the load.

From a horizontal-scalability standpoint, we needed to provide disk storage for 350,000 users, even though only 20,000 of them might be reading their email at any one time. You could argue that we simply needed to hook up a big enough Storage Area Network (SAN) to one vertically scalable server, but doing so would burden us with some rather large databases and storage groups (SGs) with long backup and restore times to match.

We concluded that our architecture needed to encompass aspects of both vertical and horizontal scalability. The result was a design with several high-performance, high-memory mailbox servers in a server-farm arrangement. We thought of this platform as comprising units of service. To extend the service to more users, we could simply add more units—that is, more server-and-storage sets.

Designing the Infrastructure
Now that we had a general idea of the approach we wanted to take, we began to design the infrastructure. First, we turned our attention to the storage requirements for the mailbox servers.

Each mailbox had a storage allocation of 10MB, which is a small allocation for a corporate environment but common for an ISP-style environment. We needed to determine how best to distribute that storage load for 350,000 users across several servers. We used the common formula

[(mailbox quota × number of mailboxes) / single-instance storage ratio]
× (1 + Deleted Items cache percentage)

and adhered to the following guiding principles:

  • Aim to keep each database at approximately 40GB so that backup times—and more important, restore times—are reasonable. (With today's streaming backup devices, you can restore a 40GB database in about 30 minutes.)
  • Never max out the number of SGs or databases on a server. This rule simplifies the expansion of individual servers in the future. We aimed for no more than three SGs per server and no more than four databases per SG.
  • Consider the effect of single-instance storage. Even with so many users and relatively few databases, we couldn't expect the single-instance storage ratio to be very high, given the likely random nature of user placement and email behavior. We decided to play it safe and stick with a low ratio of 1.2.
  • Consider the effect of the Deleted Items retention cache, if you plan to enable it. Setting even 7 days' worth of retention can result in database growth of 25 to 30 percent.

With these guidelines in mind, we used Microsoft Excel to develop a simple model from which to derive figures such as users per server, as Figure 1 shows. We ended up with about 2625 active users at any one time on any one server—well within the capabilities of an average Exchange 2000 server configuration. Figure 2 shows the resulting configuration of each mailbox server.
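The Excel model itself isn't reproduced here, but the following Python sketch captures the same arithmetic under the assumptions stated above (10MB quota, a 1.2 single-instance ratio, roughly 30 percent Deleted Items overhead, and three SGs of four databases per server). Treat the output as illustrative, not as a reference implementation:

# Sketch of the mailbox-server sizing model (same arithmetic as the Excel sheet)
mailbox_quota_mb = 10          # per-mailbox quota
total_mailboxes = 350000
servers = 8                    # back-end mailbox servers
sgs_per_server = 3
dbs_per_sg = 4
single_instance_ratio = 1.2    # deliberately conservative
deleted_items_overhead = 0.30  # roughly 7 days of retention

mailboxes_per_server = total_mailboxes / servers                       # 43,750
raw_per_server_mb = mailbox_quota_mb * mailboxes_per_server
sized_per_server_mb = (raw_per_server_mb / single_instance_ratio) * (1 + deleted_items_overhead)
db_size_gb = sized_per_server_mb / (sgs_per_server * dbs_per_sg) / 1024

print(int(mailboxes_per_server), round(sized_per_server_mb / 1024), round(db_size_gb, 1))
# -> 43750 mailboxes, ~463GB per server, ~38.6GB per database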

Now that we'd decided on sizing for each mailbox server, we needed to calculate the servers' optimal configuration. We needed to consider the nature of the load that users would place on the servers. Ideally, you'll have detailed knowledge—based on historical, recorded data—of how your users will use the system. For a new environment, though, you'll need to provide an estimate of what the load might look like. For Messaging API (MAPI)-style clients, you can use a benchmarking tool such as the Microsoft Exchange Load Simulator (LoadSim) with a realistic benchmark profile such as MAPI Messaging Benchmark 2 (MMB2). However, you can't use LoadSim to model OWA clients. We needed a different tool: the Exchange Stress and Performance (ESP) tool, which you can download from http://www.microsoft.com/exchange/downloads/2000/esp.asp. (For more information about LoadSim and ESP, see "Load Testing Exchange 2000: Analysis and ESP," September 2002, http://www.winnetmag.com, InstantDoc ID 25963, and "Load Testing Exchange 2000," May 2002, InstantDoc ID 24126.)

ESP uses a script to mimic the behavior of an OWA client. (ESP can also simulate workloads from POP3 and IMAP clients.) You can edit the script to reflect the kind of workload that you want to model. Typically, you define several operations that a client might perform (e.g., sending, reading, and deleting messages; making calendar appointments). To reflect real-life behavior, ESP usually separates the operations with pauses. Then, you use the ESP client to run multiple instances of these scripts on one workstation (separate from the Exchange server). You need to carefully monitor the performance utilization on the workstation, making sure that the system doesn't become saturated in terms of CPU, memory usage, or disk I/O. (Otherwise, the workstation becomes a bottleneck and can't accurately reflect the number of users that you're simulating.) Typically, you can simulate as many as 1500 users per workstation, running ESP and an OWA script. You can use multiple PCs to run multiple instances of ESP, so with three PCs you can easily simulate a workload for about 3000 OWA clients. (Although ESP lets you simulate OWA user load, it won't set up or initialize the mailboxes on the server for the test. You need to use scripts or LoadSim to create the test accounts and populate the mailboxes with messages and other content for use during the ESP run. If you use LoadSim to populate the mailboxes, be ready for the content conversion that takes place as messages are converted from MAPI format to MIME format. Such conversion results in higher CPU usage and increases I/O subsystem activity.)

In our performance benchmarking tests, we characterized an OWA user population and configured each ESP script to exhibit heavy client behavior. For example, one set of tests that we ran simulated the following activities within a 30-minute period for each simulated user: Open fifteen 5KB text messages, create and send five 2KB text messages to one recipient, create and send five 10KB text messages to five recipients, and create two calendar appointments. (Light use runs closer to 3 or 4 sent messages and 15 received messages per day.)
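ESP has its own script format, which I won't reproduce here. The Python sketch below simply encodes that heavy profile and shows how a driver might pace the operations across the 30-minute window with randomized pauses; the perform callback and operation names are hypothetical placeholders, since ESP issues the actual OWA requests inside its own scripts:

import random, time

# Heavy OWA profile per simulated user per 30-minute window (from the tests above)
PROFILE = [
    ("open_message",        15, {"size_kb": 5}),
    ("send_message",         5, {"size_kb": 2,  "recipients": 1}),
    ("send_message",         5, {"size_kb": 10, "recipients": 5}),
    ("create_appointment",   2, {}),
]
WINDOW_SECONDS = 30 * 60

def run_user(perform):
    """Spread one user's operations across the window with randomized pauses.
    perform(op, params) is a hypothetical callback that would issue the actual
    OWA (HTTP) requests; ESP does that part for real in its own scripts."""
    ops = [(op, params) for op, count, params in PROFILE for _ in range(count)]
    random.shuffle(ops)
    pause = WINDOW_SECONDS / len(ops)          # roughly 67 seconds between operations
    for op, params in ops:
        perform(op, params)
        time.sleep(random.uniform(0.5 * pause, 1.5 * pause))

In a real run, you'd launch hundreds of these loops concurrently per driver workstation (which is what ESP does for you) while watching the workstation's own CPU, memory, and disk counters so that the driver itself doesn't become the bottleneck.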

We ran these tests continuously for hours and observed the behavior of the mailbox server while under load. To ensure that the system didn't become saturated, we used Performance Monitor to watch system performance in several areas:

  • The system needed to maintain aggregate CPU usage of less than 80 percent to leave sufficient headroom for growth. We made sure that the Processor Queue Length counter didn't exceed two per processor.
  • In general, Exchange 2000 shouldn't page, so we wanted to maintain low levels of Memory page reads and page writes per second. If we couldn't maintain low levels, we would have considered adding more memory to the Exchange server.
  • For the MSExchangeIS Mailbox Performance Monitor object, we observed the Send Queue size. Sustained values are OK, as long as no queue-length growth is evident over time.

We weren't too concerned with queues to the transaction logs or database volumes. Sustained queue lengths on the database volumes are OK as long as latency for the I/O operations is small. (For reads, latency below 15 milliseconds (ms) is good, latency in the 30ms to 40ms range is OK, and latency above 60ms is bad. For writes, latency should always be below 10ms and write-back caching should be turned on, especially for the transaction log drives.)
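If you want a scripted second pair of eyes on those counters, a minimal sketch along the following lines works. It shells out to the typeperf command-line utility (shipped with later Windows versions; on Win2K you can log the same counters with Performance Monitor instead), and the ceilings simply restate the guidelines above—the paging numbers in particular are illustrative, and instance names may differ on your hardware:

import csv, io, subprocess

# Ceilings restating the guidelines above; the paging figures are illustrative
# (the real goal is simply "low"), and instance names may differ on your system.
THRESHOLDS = {
    r"\Processor(_Total)\% Processor Time":       80.0,   # headroom for growth
    r"\System\Processor Queue Length":             8.0,   # 2 per processor on a 4-way server
    r"\Memory\Page Reads/sec":                    20.0,   # Exchange shouldn't page much
    r"\Memory\Page Writes/sec":                   20.0,
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Read":   0.040, # reads: 30ms to 40ms is acceptable
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Write":  0.010, # writes: keep below 10ms
}
# You'd also watch MSExchangeIS Mailbox\Send Queue Size for growth over time.

def sample_once():
    """Take one typeperf sample of each counter and flag anything over its ceiling."""
    out = subprocess.run(["typeperf", *THRESHOLDS, "-sc", "1"],
                         capture_output=True, text=True, check=True).stdout
    rows = [r for r in csv.reader(io.StringIO(out)) if len(r) == len(THRESHOLDS) + 1]
    values = rows[-1][1:]              # last matching row is the data; first column is a timestamp
    for (path, ceiling), value in zip(THRESHOLDS.items(), values):
        if not value.strip():
            continue
        flag = "OVER" if float(value) > ceiling else "ok"
        print(f"{flag:4} {path} = {value}")

if __name__ == "__main__":
    sample_once()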

We also needed to confirm that the intervening network between the PCs running ESP and the Exchange server didn't cause a bottleneck, which would seriously affect the accuracy of our test results. Always run these tests on a LAN (preferably running at 100Mbps, switched). Also, we confirmed that the workload we created on the ESP PCs was actually carried out on the server in the allocated time period. For example, if our workload script dictated that we would send five messages per client for 3000 clients in a 30-minute period, at the end of the test run we confirmed that 15,000 messages had been sent.

Always look for possible ESP problems before starting your tests. Check your ESP logs to make sure the scripts run fine in the first instance, then scale up the tests and run them from multiple ESP workstations. For more information about Exchange 2000 performance and sizing, I suggest reading Pierre Bijaoui's Scaling Microsoft Exchange 2000 (Digital Press, 2001).

Implementing the Server Hardware
Based on our heavy user workload and the satisfactory completion of our benchmark tests, we elected to use 8 four-processor, 1.2GHz Xeon back-end mailbox servers with 3GB of memory and 540GB of usable SAN-based storage. In addition, our implementation included 4 two-processor, 2.8GHz Xeon front-end servers with 2GB of memory and 18GB of usable Direct Attached Storage (DAS), ample for hosting the OS and applications. The front-end servers let us normalize the OWA namespace and proxy connection requests from OWA clients to the appropriate back-end servers, which hosted the OWA clients' mailboxes. Figure 3 shows our overall server platform. (Note that our ESP simulations exercised the front-end servers as well as the back-end servers. Also, you must use Exchange 2000 Enterprise Edition to make use of the front-end/back-end server architecture.)

Notice that all our Exchange servers had more than 1GB of physical memory. In such cases, be sure to modify the server's boot.ini file with the /3GB switch to ensure that Windows 2000 Advanced Server sets aside 3GB of user-mode space. For details about this requirement, see the Microsoft article "XGEN: Exchange 2000 Requires /3GB Switch with More Than 1 Gigabyte of Physical RAM" (http://support.microsoft.com/?kbid=266096).
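For reference, the relevant line in boot.ini ends up looking something like the following; the ARC path and description will differ on your systems, so edit your existing entry rather than copying this one:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect /3GB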

When you increase the user-mode virtual address space available, also increase the size of the Store database cache (i.e., the cache that buffers transactions before Exchange writes them to the Store files on disk). By default, this cache is approximately 858MB, but Microsoft suggests that you increase the size to 1.2GB when you use the /3GB switch. To make this change, follow these steps:

  1. Use ADSI Edit (or a tool such as LDP) to navigate to the Configuration Naming Context within Active Directory (AD). For information about ADSI Edit, see "Introducing the ADSI Edit Utility," July 2000, http://www.exchangeadmin.com, InstantDoc 8901.
  2. Navigate to CN=Information Store, CN=<servername>, CN=Servers, CN=<AdminGroupname>, CN=Administrative Groups, CN=<orgname>, CN=Microsoft Exchange, CN=Services, CN=Configuration.
  3. Right-click the Information Store object and select Properties from the context menu.
  4. Select Both from the Select which properties to view drop-down list.
  5. Select msExchESEParamCacheSizeMax and set its new value to 307200 (pages, which is approximately 1.2GB).
  6. Wait for AD replication to take place throughout your organization, then restart the Information Store service.

The Store process uses log buffers to cache information in memory before committing it to the transaction logs. The default value is 84 (pages), which in general is quite small for large back-end servers such as the ones we were using. On systems running Exchange 2000 Service Pack 3 (SP3), you need to change the value to 500. The procedure for doing so is similar to the procedure (in Step 5) for increasing the Store database cache, except that you select the msExchESEParamLogBuffers item instead of msExchESEParamCacheSizeMax.
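If you'd rather script both of these changes than click through ADSI Edit, a minimal python-ldap sketch looks something like the following. The directory server, credentials, and distinguished name are placeholders for your own environment, and ADSI Edit remains the documented route:

import ldap  # python-ldap

# Placeholders: substitute your own DC, credentials, server, admin group, and org names
store_dn = (
    "CN=Information Store,CN=MBX01,CN=Servers,"
    "CN=First Administrative Group,CN=Administrative Groups,"
    "CN=ExampleOrg,CN=Microsoft Exchange,CN=Services,"
    "CN=Configuration,DC=example,DC=com"
)

conn = ldap.initialize("ldap://dc01.example.com")
conn.simple_bind_s("EXAMPLE\\administrator", "password")

# 307,200 4KB pages is approximately 1.2GB of Store database cache; 500 log buffers as above
conn.modify_s(store_dn, [
    (ldap.MOD_REPLACE, "msExchESEParamCacheSizeMax", [b"307200"]),
    (ldap.MOD_REPLACE, "msExchESEParamLogBuffers",   [b"500"]),
])
conn.unbind_s()
# Remember to wait for AD replication, then restart the Information Store service.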

Global Catalog Servers
Exchange 2000 has substantial requirements for Global Catalog (GC) servers: for the provision of Global Address List (GAL) services to clients, for front-end server mailbox lookup, and for mail routing within and between back-end servers. Because of this dependency, you must carefully plan the number of GC servers that you'll provide for use by your Exchange servers. In general, you should implement one GC processor for every four Exchange 2000 processors.

In our environment, with 8 four-processor back-end servers and 4 two-processor front-end servers, we required 10 GC processors. We decided to use two-processor, 2.8GHz Xeon GC systems with 3GB of memory and 108GB of usable DAS. (That might seem like a lot of storage for 350,000 or so AD objects, but the driver was the number of spindles. More spindles are better from a performance perspective; the high capacity is simply a welcome side effect.) According to the processor-ratio guideline, we required five GC servers, but we implemented only four Exchange-dedicated GC servers because our performance benchmarks indicated that four would suffice. (Always be prepared to fine-tune the number of GC servers that you set aside specifically for Exchange, according to production usage.)
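Restated as a quick calculation (these are our server counts; substitute your own):

# One GC processor for every four Exchange 2000 processors (the general guideline)
exchange_cpus = 8 * 4 + 4 * 2        # back-end plus front-end processors = 40
gc_cpus = exchange_cpus / 4          # 10 GC processors
gc_servers = gc_cpus / 2             # two-processor GC systems -> 5 (we ran 4 after benchmarking)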

Considering the logical placement of your GC servers in relation to your Exchange servers is also important. Exchange 2000 and other consumers (e.g., the Win2K logon process) can use GC servers. In many cases, dedicating these servers to Exchange makes sense. To do so, place your Exchange 2000 servers and GC servers in a separate Win2K site, as Figure 4 shows. That way, all non-Exchange traffic from other sites will use other GC servers.

In a large environment such as ours, in which Exchange servers might regularly look up many user objects, you might want to increase the size of the DSAccess cache, which is the cache that Exchange servers use when performing queries against a GC server. By default, this cache is 50MB, with 25MB set aside for caching user objects. Microsoft recommends that you increase the size of the cache when you have more than 50,000 user objects in AD. To increase the cache to a recommended maximum size of approximately 94MB, set the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeDSAccess\Instance0 registry subkey's MaxMemoryUser value (of type REG_DWORD) to 0x000170A3.
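You can make this change with regedit, of course; for completeness, here's a minimal sketch using Python's winreg module, run on the Exchange server itself with administrative rights:

import winreg

# DSAccess user-object cache set to 0x000170A3 (approximately 94MB), per the recommendation above
key_path = r"SYSTEM\CurrentControlSet\Services\MSExchangeDSAccess\Instance0"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MaxMemoryUser", 0, winreg.REG_DWORD, 0x000170A3)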

Closing Thoughts
This article serves as an introduction to the design and implementation of a large Exchange 2000 OWA environment and offers some guidelines for sizing mailbox servers and associated AD systems to provide a scalable and extendable data-center platform. Need to support more users? Simply add another back-end server and more front-end servers or GC servers.

You might not need to design an environment for 350,000 users every day, but the information in this article can apply to a variety of environments. In a follow-up article, I'll describe the recommended storage configurations that we used with this implementation, including DAS, SAN, RAID virtualization, and the merits of booting systems from a SAN.
