
What About the Little Guy?

Greetings,
As I've discussed the ongoing saga of TPC-C benchmark leapfrogging, I've received reader comments like the following: "I want comparative TPC-C benchmarks on comparable mainstream platforms—for example, Oracle versus SQL Server on comparable dual-processor servers with comparable RAID arrays. The very high end is fascinating, of course, and I enjoy imagining such systems. But . . ." Another reader asked, "What about the little guy? I've never worked in a shop that had the 128-way or 96-way processing power that Microsoft, IBM, and Oracle use to reach these benchmarks. My colleague suggested that the manufacturers be forced to have a limit of, say, $50,000 for their hardware and software budgets. Then run the tests and see who comes out on top."

What's this world coming to when the average SQL Server DBA doesn't have access to a $10 million server? Forget universal health care or world peace! The United Nations shouldn't rest until the world's decent, hardworking DBAs have access to 32-CPU Unisys ES7000 servers back-ended by EMC arrays!

All kidding aside, I couldn't agree more with these sentiments. A maximum server-cost threshold would eliminate the benchmark games all vendors play and give us a fighting chance to compare numbers in an "apples-to-apples" manner. Let's assume that most modern database features are commodities, readily available from all the major vendors in one form or another. Let's further assume that the ugly reality of fixed, limited IT budgets constrains most database server purchases. Then the only two questions that really matter are: How much does it cost, and how fast will it run? Configure the server any way you want, sprinkle fairy dust on it, paint it your favorite color, or hang a good-luck charm from its on/off switch. It doesn't matter. When push comes to shove, the only thing that matters to most of the IT world is getting the most bang for the buck.

Unfortunately, the major vendors don't publish TPC-C scores for systems that cost $50,000 or even $100,000. TPC-C scores have genuine technical value, but it's impossible to ignore the huge marketing tie-in. Sane vendors don't publish TPC-C numbers that make them look bad. For some time now, Microsoft has dominated the price/performance race, with SQL Server holding the top 45 TPC-C price/performance scores. All database vendors, including Microsoft, have license clauses that prohibit the publication of their benchmark numbers without explicit approval. So it's unlikely that we'll see comparative "low-end" benchmarks from other vendors competing in Microsoft's bread-and-butter market.

Having said all that, I'm convinced that high-end benchmarks still have value for "Joe Average DBA" because of the ever-present danger that customers will base their database purchases on future, imagined, just-in-case scalability needs. I've had too many customer conversations that go something like this: "It's clear that SQL Server's shipping version can meet all of my current and foreseeable performance needs. But what if flying pink elephants attack from outer space, and I suddenly need more processing power than I could possibly imagine under any conceivable business scenario? I know database XYZ costs a lot more than SQL Server, but those pink elephants can be scary. I'd better use database XYZ because I know it can scale to meet my needs." High-end benchmarks are important for Joe Average DBAs and the companies they support because high-end benchmarks provide that warm, fuzzy feeling that the platform will scale to meet their needs—whatever those needs turn out to be.
