
Scaling Up vs. Scaling Out - 10 May 2000

The latest Transaction Processing Performance Council (TPC) TPC-C benchmarks for SQL Server have opened a new chapter in the scale-up vs. scale-out debate. IS professionals typically add capacity to computer systems by scaling up: when response time starts to degrade under additional workload or a growing database, the straightforward answer to the immediate performance problem is bigger, faster hardware.

Extrapolating from Moore's Law, popularly paraphrased as a doubling of hardware performance every 18 months, you might conclude that scaling up is an adequate way to handle growth for the foreseeable future. However, you'll soon discover that Murphy's Law trumps Moore's Law.

Although the current 8-way SMP systems equipped with high-speed Storage Area Network (SAN) storage arrays provide tremendous scalability, they also bring several scalability problems of their own to light. First, beyond a certain point, further scaling up becomes prohibitively expensive. Second, even with Moore's Law in full effect, you can't scale past the largest system a vendor sells, at least not until the next generation of hardware arrives. Even beyond the hardware problems, you'll probably encounter software hurdles when you try to scale up. Software systems such as databases have internal mechanisms that handle locking and other multiuser issues. These structures have limited efficiency, and their limits, rather than raw processor power, typically become the real impediment to continued upward scalability. Thus, SMP performance graphs don't continue to climb linearly as you add processors; at some point, the curve always begins to flatten. At the upper reaches of that curve, very expensive hardware upgrades buy very small performance improvements.
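
The column doesn't name it, but Amdahl's Law, a standard model of parallel speedup, captures this flattening: if any fraction of the workload is serialized (by a lock manager, for example), additional processors pay diminishing returns. Here's a minimal Python sketch; the 5 percent serial fraction is a hypothetical figure chosen only for illustration:

    # Amdahl's Law: maximum speedup on N processors when a fixed
    # fraction of the work cannot be parallelized.
    def amdahl_speedup(processors: int, serial_fraction: float) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

    # Even a modest 5% serial portion flattens the curve quickly.
    for n in (2, 4, 8, 16, 32):
        print(f"{n:2d} processors -> {amdahl_speedup(n, 0.05):.2f}x speedup")

With a 5 percent serial fraction, 8 processors deliver roughly a 5.9x speedup and 32 deliver only about 12.5x; no amount of added hardware can push the curve past 20x (1 divided by 0.05).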

Scaling out can provide an effective answer to the problems of the scale-up scenario. The SQL Server systems used in the TPC-C benchmarks implemented a scale-out architecture consisting of 12 8-way Compaq systems joined in a shared-nothing configuration. Essentially, shared-nothing means that each system operates independently: each system in the cluster maintains its own CPU, memory, and disk storage, which the other systems can't directly access. To address capacity issues by scaling out, you add more hardware, not bigger hardware. This approach addresses the cost problem of scaling up because adding several smaller systems is typically far less expensive than upgrading a mainframe-class system, and the size and speed of a single system no longer limits total capacity. Shared-nothing architecture also skirts the software bottleneck: because each system has its own concurrency-control mechanisms and the workload is divided among the servers, total software capacity increases.
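
To make the shared-nothing idea concrete, here's a minimal Python sketch of the routing layer such a cluster needs. This is purely illustrative, not the benchmark's actual configuration: the node names are hypothetical, and hash partitioning is just one common way to divide a key space so that exactly one node owns each key's data:

    # Shared-nothing routing sketch: each key maps to exactly one node,
    # and that node alone owns the key's CPU, memory, and disk.
    NODES = [f"sqlnode{i:02d}" for i in range(12)]  # e.g., twelve 8-way servers

    def owner_node(customer_id: int) -> str:
        return NODES[hash(customer_id) % len(NODES)]

    # Scaling out means adding a node, not a bigger box. (In practice,
    # data for some keys must then be rebalanced onto the new node.)
    NODES.append("sqlnode12")

    print(owner_node(42))  # every caller agrees on the same owner

Because each node runs its own lock manager over only its own partition, the concurrency bottleneck described earlier is multiplied across the cluster instead of concentrated in a single system.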

Although scaling out answers the inherent limitations of scale-up architecture, this method is no stranger to Murphy's Law, either. At this point in the technology lifecycle, scaling out imposes management overhead that can offset much of the performance gain it delivers. Even so, scaling out might be a viable solution for database implementations that have reached the limits of SMP scalability.
