The Role of Storage in Server Consolidation

Three years ago, Intel ran much of its mail system on Microsoft Mail (MS Mail). When the company decided to move to a richer messaging platform, it began a large Microsoft Exchange pilot program at Intel's Folsom, California, plant. Intel chose to reduce the server count by installing the latest and greatest (at that time, quad-processor 550MHz Pentium III Xeon servers) and to build a mission-critical application. Today, at the halfway point in its life cycle, the Exchange system serves about 27,000 users worldwide; when the system is fully loaded, it will serve about 45,000 users. A white paper about the project was posted on Intel's Web site in January but withdrawn shortly thereafter for revision.

I spoke last week with Bill Kirkpatrick, the Intel engineer who oversaw the design and initial implementation of the project, and he detailed the critical role of the consolidated storage system in this implementation. The system was designed (in retrospect, Bill would say "over-designed") to address the heaviest mail periods. (Historical data identified a period early in the day on Mondays—30 minutes out of the 40-hour work week—as the peak.) Each processor was tasked to support 600 users generating 720 I/Os per second; the designers assumed that a Quad 550 server could support slightly fewer than four times that number of users.

The first lesson Intel learned was how closely you need to know your application. Exchange 5.x and Exchange 2000 read and write in 4KB increments, so the specification called for disks formatted with this block size to maximize utilization. (Historically, most messages were 5KB or less.) The following specification was developed for storage vendors who might design the system's storage: support for 3,000 random I/Os per second, a 4KB block-size format, and a read/write ratio of 73:27 (measured from data logs). The specification had to be met across an 80GB disk. The total system capacity was specified as 15 servers requiring 45,000 I/Os per second.
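The sizing figures above hang together as simple arithmetic. Here is a back-of-the-envelope check (the per-processor and per-server numbers come from the column; the rounding interpretation is my own reading):

```python
# Back-of-the-envelope check of the column's Exchange sizing figures.
iops_per_processor = 720      # random 4KB I/Os per second, per CPU
processors_per_server = 4     # quad-processor "Quad 550" servers
servers = 15

raw_per_server = iops_per_processor * processors_per_server
print(raw_per_server)         # 2880 -- the vendor spec rounds this up to 3,000

spec_per_server = 3000        # figure given to the storage vendors
print(spec_per_server * servers)  # 45000 -- matches the total-system spec
```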

Intel contacted several vendors, but most declined the challenge. EMC got the project, and eventually 15 servers running Windows NT 4.0 were attached and dual-ported through SCSI to 30 of the 32 available ports of a 3930 series Symmetrix. Each server was connected to a single logical 120GB drive and configured in a RAID 0/1 arrangement (striped and mirrored). The system used 18GB 10,000 RPM drives, with 14 spindles for each primary data set and 14 more spindles used as mirrors.
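A rough per-spindle load estimate helps explain the 14-plus-14 spindle choice. The sketch below assumes (my assumptions, not stated in the column) that mirrored reads can be satisfied from either copy of the data, while every write must land on both copies:

```python
# Rough per-spindle load for one server's RAID 0/1 set.
# Assumption: reads are balanced across primary and mirror spindles,
# and each logical write costs one physical write on each copy.
target_iops = 3000                     # per-server specification
read_ratio, write_ratio = 0.73, 0.27   # measured from Intel's data logs
data_spindles, mirror_spindles = 14, 14

reads = target_iops * read_ratio       # 2190 read I/Os per second
writes = target_iops * write_ratio     # 810 write I/Os per second

backend_iops = reads + 2 * writes      # 3810 physical I/Os per second
per_spindle = backend_iops / (data_spindles + mirror_spindles)
print(round(per_spindle, 1))           # 136.1
```

Roughly 136 random I/Os per second per spindle is a plausible sustained figure for a 10,000rpm drive of that era, which suggests the 28-spindle layout was sized close to the physical limits of the disks.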

After some initial glitches, the system has run as designed for almost a year without problems. Intel considers this project to be both a success and a model for server consolidation (scaling up as opposed to scaling out).

What else has Intel learned? By over-designing the system, Intel has been able to support more users than the company expected, and the system will last longer than the 2 years it was designed for. Intel also discovered that less disk space was required than expected, because Exchange's single-instance storage ratio rises as more users share the system (Exchange keeps one copy of a message sent to multiple recipients and uses pointers to it, reducing redundancy and disk consumption).

Also, Intel learned that although Fibre Channel gets a lot of hype, SCSI is much faster when small file transfers are involved. (Which SCSI board you use makes a difference; Intel found that one combination of board and drivers was 25 percent faster than another.) However, the lack of additional ports on the Symmetrix prevented a centralized backup scheme, so each server is backed up locally. If they had it to do over, the designers would design the system for centralized backup. Because the storage is centralized, the application servers are abstracted and can be replaced; that abstraction extends the system's lifetime and builds in flexibility for redesign over time.

It's a shame that Intel's messaging system project isn't better known. The revised white paper will probably be reposted on EMC's Web site in about 2 months.

I need to clarify two things from my September 18 column about Network Appliance's (NetApp's) new systems. First is my comment about the ability of NetApp's boxes to scale. The gist of what I was trying to say is that here's a company with a reputation for boxes that don't scale, but its new offerings are large enterprise-class boxes. NetApp is about to swim with the (IBM) Sharks and the (EMC) Syms. Award NetApp an "S" for size.

Second, we inadvertently designated the NetApp F840 system "F480." We regret any inconvenience that might have caused.

The column netted a couple of interesting responses from system administrators who had seriously evaluated NetApp and bemoaned the fact that you can't do good virus prevention or quota management on NetApp boxes. I asked NetApp about these two features. Rod Mathews, marketing manager, responded:

About virus software: "Users and administrators currently do virus scans using standard antivirus software running on each client accessing the filer. To more fully meet the requirement of antivirus scanning, we're developing an on-access virus-scanning feature for a future version of Data ONTAP."

Rod asserts that snapshots can be used for recovery without a restore, a serious benefit.

About disk quotas: "We have native quotas built into Write Anywhere File Layout (WAFL) and Data ONTAP. And WAFL has something called a qtree that can also be used to enforce quotas. Our quotas let administrators track either the total number of files used or the total amount of space used, specified either by user or by group. We have a new tool called the Secure Share Quota Manager, which is a Windows 2000/Windows NT Microsoft Management Console (MMC) snap-in that can be used to manage quotas on NetApp filers in a Win2K or NT environment."
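For readers unfamiliar with the mechanism Rod describes, quotas on a filer are driven by an /etc/quotas file. The fragment below is an illustrative sketch based on my understanding of Data ONTAP's documented format, not something taken from the column; the user, group, and qtree names are hypothetical:

```
# Quota target        type              disk   files
jsmith                user@/vol/vol0    100M   10K     # limit one user's space and file count
engineering           group@/vol/vol0   2G     -       # limit a group's total space
/vol/vol0/projects    tree              5G     -       # qtree-wide limit, as Rod mentions
```

The Secure Share Quota Manager MMC snap-in Rod describes gives Win2K and NT administrators a GUI over these same entries.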
