
Benchmark Standards Mark Storage's Coming of Age

The benchmarks that the Storage Performance Council recently unveiled mark a coming of age for storage and might be able to supply data that IT managers have wanted for quite a while. As storage has become an increasingly significant part of the IT budget, IT managers have come to see the storage infrastructure's performance as an important factor in the performance of enterprise-level IT initiatives. In fact, data availability—getting the right piece of data to the right place at the right time—might be the most crucial part of delivering a quality end-user experience with enterprise applications.

IT managers often can use manufacturers' generally imperfect specifications to gauge the relative performance of a piece of technology (e.g., a CPU). However, the situation is much murkier when the performance of a piece of hardware or software depends as much on the context in which someone uses it as on its precise specifications—clearly the case with storage and particularly with Storage Area Networks (SANs).

The Storage Performance Council, founded in 1997, currently has 12 member companies including Adaptec, Compaq, Dell, Hitachi Data Systems (HDS), IBM, and VERITAS Software. The council's mission is to define, create, and disseminate storage benchmarks; relevant, verifiable performance data; testing tools; and price-performance comparisons among vendors. Companies with multivendor storage implementations are the council's target audience.

The council designed the first benchmark, dubbed the SPC-1, to have broad appeal in server-class systems environments that run common business applications. According to the council, companies can use SPC-1 to evaluate virtually any storage configuration, from a simple Just a Bunch of Disks (JBOD) setup to a sophisticated SAN with multiple multivendor array controllers, virtualization appliances, and host computers.

The idea behind industry benchmarks is to supply more data than manufacturers' specifications provide. To achieve that goal, SPC-1 tests the technology within the context of an entire system. In designing the SPC-1 benchmark, the Storage Performance Council identified two kinds of environments as important for evaluating storage subsystem performance.

The first environment has multiple systems with many simultaneous application threads that can saturate the throughput of the storage subsystem. Performance in this environment depends on the storage subsystem's ability both to manage a large number of I/O requests and to provide an acceptable response time to the underlying application. For these kinds of environments, the council developed the I/Os per second (IOps) metric, which measures I/O request throughput capacity. An airline reservations system represents a typical application in which IOps would be appropriate.
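SPC-1's actual workload is defined by the council's specification, which this article doesn't detail. As a rough, hypothetical sketch of what an IOps-style throughput measurement involves, the following Python fragment drives a test file with concurrent random reads; the file name, block size, thread count, and run length are invented for the example:

```python
# Hypothetical sketch (POSIX): counting completed I/O requests per second
# under concurrent load, in the spirit of a throughput (IOps) metric.
# TEST_FILE is assumed to be pre-created and larger than BLOCK_SIZE.
import os
import threading
import time

TEST_FILE = "testfile.dat"   # illustrative name, not from SPC-1
BLOCK_SIZE = 4096            # 4 KB random reads
THREADS = 16                 # simultaneous "application threads"
DURATION = 10                # seconds

completed = 0
lock = threading.Lock()

def worker(stop_time):
    global completed
    fd = os.open(TEST_FILE, os.O_RDONLY)
    blocks = os.fstat(fd).st_size // BLOCK_SIZE
    n = 0
    seed = threading.get_ident()
    while time.time() < stop_time:
        # Pseudo-random block offset; a real benchmark would follow a
        # defined access distribution rather than this simple generator.
        seed = (seed * 1103515245 + 12345) % (2**31)
        os.pread(fd, BLOCK_SIZE, (seed % blocks) * BLOCK_SIZE)
        n += 1
    os.close(fd)
    with lock:
        completed += n

stop = time.time() + DURATION
threads = [threading.Thread(target=worker, args=(stop,)) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Throughput: {completed / DURATION:.0f} IOps across {THREADS} threads")
```

A real benchmark run would also bypass the operating system's cache; the point here is only that throughput is measured as completed requests per unit time while many requests are outstanding.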

The second environment consists of business-critical applications that issue thousands of serial I/O requests, each one starting only after the previous one completes, and whose performance therefore depends on minimizing wall-clock completion time. Performance here hinges on the storage subsystem's ability to respond efficiently to each individual I/O request. To measure the storage subsystem's ability to provide minimal I/O request response times in this environment, the council developed the SPC-1 Least Response Time metric. Rebuilding a large database is a typical operation for which this metric would be appropriate.
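By contrast, a response-time-oriented measurement issues requests strictly one at a time and records each request's latency. The following hypothetical Python sketch (again with an invented file name and parameters, not the council's methodology) illustrates the difference:

```python
# Hypothetical sketch (POSIX): per-request response time under a strictly
# serial I/O stream, in the spirit of a least-response-time metric.
# TEST_FILE is assumed to be pre-created.
import os
import time

TEST_FILE = "testfile.dat"   # illustrative name, not from SPC-1
BLOCK_SIZE = 65536           # 64 KB sequential reads
REQUESTS = 1000

latencies = []
fd = os.open(TEST_FILE, os.O_RDONLY)
offset = 0
for _ in range(REQUESTS):
    start = time.perf_counter()
    data = os.pread(fd, BLOCK_SIZE, offset)   # the next request is issued
    latencies.append(time.perf_counter() - start)  # only after this returns
    if len(data) < BLOCK_SIZE:
        offset = 0                             # wrap around at end of file
    else:
        offset += BLOCK_SIZE
os.close(fd)

print(f"min response time:  {min(latencies) * 1000:.3f} ms")
print(f"mean response time: {sum(latencies) / len(latencies) * 1000:.3f} ms")
print(f"wall-clock total:   {sum(latencies) * 1000:.1f} ms for {REQUESTS} serial requests")
```

Because each request waits for the previous one to complete, total wall-clock time is simply the sum of the individual response times, which is why minimizing per-request latency dominates this kind of workload.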

Although the Storage Performance Council's benchmarks appear to be solid efforts to generate reliable information, their usefulness will have to be proven over time. The council insists that every configuration tested demonstrate several characteristics: data persistence, repeatability, sustainability, equal access to host systems, and support for general-purpose applications. Those requirements reflect the industry's long experience with companies that try to manipulate benchmark results by conducting tests under conditions that don't resemble real-world scenarios. The council also requires that a test sponsor report the capacity of the tested storage configuration. This figure includes both the storage addressable by the host system during execution of the benchmark and the resources required to support that capacity (e.g., parity disks, hot spares).
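The capacity requirement is easiest to see with numbers. As a hypothetical illustration (the disk sizes and counts below are invented, not drawn from any SPC-1 result), a RAID-5 configuration with a hot spare reports more capacity than the host can address:

```python
# Hypothetical RAID-5 configuration; all numbers are invented for illustration.
disk_gb = 36
data_disks = 7        # disks holding host-addressable data
parity_disks = 1      # parity overhead required to support that capacity
hot_spares = 1        # standby disk, also counted as a supporting resource

addressable = data_disks * disk_gb                          # 252 GB seen by the host
total = (data_disks + parity_disks + hot_spares) * disk_gb  # 324 GB reported
print(f"addressable: {addressable} GB, total configured: {total} GB")
```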

Another obstacle the council's benchmarks must overcome to win industry acceptance is that neither EMC nor Network Appliance, the heavyweights in SAN and Network Attached Storage (NAS), respectively, has joined the Storage Performance Council. As long as EMC and Network Appliance remain outside the council, we can expect a great deal of conflicting information to emerge during the next several months as the council releases the initial SPC-1 results.

Even successful benchmarks can serve only as a starting point in the product-specification process. No actual IT infrastructure precisely matches testing configurations. So, as the financial community says, past results don't always predict future performance.
