I used Dynameasure 1.0 by Bluecurve to generate my test load and record the results for "Microsoft SQL Server 6.5 Scaleability," page 80. Dynameasure puts a controlled, end-to-end stress on a SQL Server system by directing the workload of a number of PC clients and recording how much work was performed within a predetermined period. Dynameasure uses Open Database Connectivity (ODBC) as the communications methodology between the PC clients and the test server. The test bed and test results are stored on a separate SQL Server system, and a central management console directs and monitors all tests. (See John Enck, "Dynameasure by Bluecurve: Born to Measure," November 1996, for more information on Dynameasure.)
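
Dynameasure's client internals aren't published, but to give you a feel for the ODBC round trip each simulated user performs, here is a minimal sketch against the ODBC 2.x C API. The data source name TestServer, the sa login, and the query are hypothetical stand-ins; on NT you'd link with odbc32.lib.

    /* Minimal ODBC 2.x round trip: connect, run one read
       transaction, fetch the results, and disconnect. */
    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>
    #include <stdio.h>

    int main(void)
    {
        HENV    henv;   /* environment handle */
        HDBC    hdbc;   /* connection handle  */
        HSTMT   hstmt;  /* statement handle   */
        RETCODE rc;

        SQLAllocEnv(&henv);
        SQLAllocConnect(henv, &hdbc);

        /* Connect to the test server through the driver manager */
        rc = SQLConnect(hdbc, (UCHAR *)"TestServer", SQL_NTS,
                        (UCHAR *)"sa", SQL_NTS, (UCHAR *)"", SQL_NTS);
        if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        {
            printf("connect failed\n");
            return 1;
        }

        /* Issue one lightweight read and pull the rows back */
        SQLAllocStmt(hdbc, &hstmt);
        SQLExecDirect(hstmt,
                      (UCHAR *)"SELECT balance FROM account WHERE id = 42",
                      SQL_NTS);
        while (SQLFetch(hstmt) == SQL_SUCCESS)
            ;   /* a real test client would time each transaction here */

        SQLFreeStmt(hstmt, SQL_DROP);
        SQLDisconnect(hdbc);
        SQLFreeConnect(hdbc);
        SQLFreeEnv(henv);
        return 0;
    }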

Dynameasure lets you mix read, write, and mixed read/write SQL database transactions to tune the test environment to your user environment, or to approximate various system behaviors and see how your client/server system will perform and scale under heavy loads. To measure CPU scalability, I used Dynameasure's single-read transaction mix (frequent but lightweight transactions) to minimize disk I/O. A read/write transaction mix gave me a feel for overall system performance without isolating any one subsystem. I used Dynameasure's 500MB test data set for each server.
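
To make the idea of a tunable mix concrete, here is a small illustration (mine, not Dynameasure's code) of how a load generator might pick each motor's next transaction from weighted categories; the 70/20/10 weights are hypothetical.

    #include <stdlib.h>

    /* Hypothetical weights: out of every 100 transactions, how many
       are reads, writes, or mixed read/write. Tune these to mirror
       your own user community. */
    enum txn_type { TXN_READ, TXN_WRITE, TXN_MIXED };

    static const int read_wt = 70, write_wt = 20, mixed_wt = 10;

    /* Pick the next transaction type in proportion to the weights. */
    enum txn_type next_txn(void)
    {
        int r = rand() % (read_wt + write_wt + mixed_wt);
        if (r < read_wt)            return TXN_READ;
        if (r < read_wt + write_wt) return TXN_WRITE;
        return TXN_MIXED;
    }

    int main(void)
    {
        int counts[3] = { 0 };
        int i;
        for (i = 0; i < 1000; i++)
            counts[next_txn()]++;
        /* counts[] now lands near the 70/20/10 split */
        return 0;
    }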

I used default settings for transaction weights and test duration and changed only the think time for each user. A typical user has a 10-second think time, but I reduced it to 5 seconds to increase the stress I could place on the server with a limited number of PC clients. Dynameasure lets you control the number of simulated users, or motors, that execute the transactions on the PC clients. My tests ran from 50 users to 300 users in 50-user increments, and each test run took about two hours. At the conclusion of each run, Dynameasure's Analyzer module reports the transactions per second (TPS) and average response time (ART) measured from the client to the server and back. The graphs on pages 82 and 83 summarize my results.
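
Dynameasure's Analyzer derives both figures for you, but a quick back-of-the-envelope sketch shows how they relate; the counts and timings below are invented for illustration.

    #include <stdio.h>

    int main(void)
    {
        /* Invented numbers: 21,600 transactions completed in a
           7,200-second (two-hour) window, with 10,800 seconds of
           total client wait time summed across all transactions. */
        double transactions = 21600.0;
        double elapsed_sec  = 7200.0;
        double wait_sec     = 10800.0;

        double tps = transactions / elapsed_sec;  /* 3.0 TPS      */
        double art = wait_sec / transactions;     /* 0.5-sec ART  */

        printf("TPS = %.2f, ART = %.2f seconds\n", tps, art);
        return 0;
    }

The following paragraphs describe the test equipment environment.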

Network
I conducted all the tests on a 100Mbit-per-second (Mbps) Fast Ethernet network running TCP/IP. The core component in this network was a 100Mbps Compaq switching hub. Non-switching Compaq and Cogent hubs let me funnel the workstations into the main switching hub.

Workstations
I used 15 Pentium-class systems as the workhorses to generate my PC client load. Each system hosted 20 motors. These systems included

  • Telos minitower systems with 120MHz Pentium CPUs, 32MB of RAM, 1GB IDE hard drives, 4X CD-ROM drives, and 3Com 100TX Fast Ethernet controllers
  • Compaq Deskpro XL 5100 minitower systems, all with 100MHz Pentium processors, 32MB of RAM, Netelligent 10/100 NICs, 1GB IDE drives, and 4X CD-ROMs
  • Two Dell Optiplex GMXT 5166s and one 5133 (166MHz and 133MHz Pentiums, respectively), each with 32MB of RAM, a 1GB drive, and a CD-ROM drive
  • An Innova Pro 5400ST from Canon with a 133MHz Pentium processor

Test Management Systems
A Compaq ProLiant 4500 with dual 166MHz Pentium CPUs (2MB of independent Level 2 cache per CPU), 196MB of RAM, and two 4.3GB Fast and Wide SCSI-2 drives functioned as the SQL control server and housed the test results. A Micron Promagnum workstation with a 200MHz Pentium Pro CPU (256KB on-chip Level 2 cache), 64MB of RAM, a 2GB SCSI-2 hard drive, an 8X CD-ROM drive, and a Matrox Millennium video card was the management console. I used a Digital Prioris HX 5133DP with two 133MHz Pentium CPUs and 64MB of RAM as the domain controller.

Servers
I tested several different server systems. All the systems used the same NT settings (optimized for background network applications and a 500MB pagefile) and the same SQL Server settings. (For more information on optimizing settings, see "More Easy SQL Server Performance Tips," on page 88.) I also equipped every system with 384MB of system RAM and spread the disk I/O across multiple devices. I used

  • two Compaq systems (the ProLiant 4500 and the ProLiant 5000), each with 10 disk drives: The 4500 used 2.1GB Fast and Wide SCSI-2 disks, and the 5000 used 4.3GB Fast and Wide SCSI-2 disks (which are slightly faster, but this speed had no visible effect on disk I/O performance). Both systems used the Compaq SMART-2/P Array Controller, and the disks were evenly distributed across both available SCSI channels, with the data volume stripe sets spanning the channels.
  • an NEC ProServa with seven 2.1GB Fast and Wide disk drives and a Mylex 960 DACP-2 RAID controller. Disk I/O was spread out on this system, too.