SCSI Host Adapter Performance Shootout

After I finished my article comparing SCSI and IDE technologies ("SCSI and IDE: Defining the Differences," June 1997), several readers wrote asking me to review SCSI host adapter performance. Representatives from several SCSI host adapter vendors also asked for such a review. I've wanted to do this type of head-to-head comparison to test how the performance and efficiency of the current crop of SCSI cards stack up. So, I set off to obtain a cross section of SCSI host adapters representing some of the most popular and outstanding cards for Windows NT systems. I'll show you the results of my SCSI performance shootout and provide a few tips to enhance your SCSI host adapter's performance under NT, regardless of which product you own. This article focuses on regular SCSI host adapters rather than hardware RAID or caching controllers that incorporate SCSI. Therefore, this article is most appropriate for workstations and low to midrange servers rather than enterprise-level servers.

Rounding 'Em Up
I reviewed products from five SCSI host adapter vendors. The vendors are Adaptec, Advanced System Products (AdvanSys), Mylex/BusLogic, QLogic, and Symbios Logic. These manufacturers supply most SCSI host adapters in use on NT systems around the world. They also supply most embedded SCSI controller chips on system motherboards and RAID controllers.

Each vendor provided an Ultra SCSI-capable host adapter. Some vendors could provide only single-channel or narrow SCSI adapters, so I decided to use narrow Ultra SCSI hard disks (a Seagate Medalist Pro ST-52160N 2GB 5400RPM Ultra SCSI disk for the single-drive test and two Cheetah 4LP ST-34501N disks for the multi-drive test) to represent a least common denominator for the adapters in this review. All the SCSI host adapters I reviewed use single-ended termination. For specific information about each host adapter, see the sidebar, "The Contenders," page 187.

For my test system, I configured a 200MHz Pentium-based system with 48MB of RAM and a 512KB L2 cache, running NT Workstation 4.0 with Service Pack (SP) 3. My benchmarking software was U Software's Bench32 version 1.21, an excellent system benchmarking utility that provides comprehensive yet intuitive configuration options for disk, CPU, memory, and video performance benchmarking. For the tests in the benchmark suites, I used a 4KB block size and a 1MB test file size (Screen 1 shows Bench32's disk test configuration screen). In addition, I disabled all nonessential services on the test system and rebooted the system before each test to ensure a common system memory load. To ensure optimal SCSI bus performance and eliminate the possibility of SCSI bus contention interfering with the tests, I attached the test hard disks as the only devices on one channel of each Ultra SCSI host adapter. To further eliminate possible contention with the test drives, I used a separate SCSI channel and controller for the drive containing the system/boot partition.
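
Bench32's internals aren't public, but the basic sequential-throughput measurement this kind of benchmark performs can be sketched in a few lines. The following Python sketch is my own illustration (not Bench32 code): it times writing and reading a 1MB test file in 4KB blocks, matching the block and file sizes I used, and reports the resulting MBps figures.

```python
import os
import tempfile
import time

BLOCK_SIZE = 4 * 1024        # 4KB block size, as configured in the tests
FILE_SIZE = 1 * 1024 * 1024  # 1MB test file size, as configured in the tests

def sequential_throughput(path):
    """Write, then read, a test file in fixed-size blocks; return (write, read) MBps."""
    block = b"\0" * BLOCK_SIZE
    blocks = FILE_SIZE // BLOCK_SIZE

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to the disk, not just the OS cache
    write_secs = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK_SIZE):
            pass
    read_secs = time.perf_counter() - start

    mb = FILE_SIZE / (1024 * 1024)
    return mb / write_secs, mb / read_secs

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        write_mbps, read_mbps = sequential_throughput(path)
        print(f"write: {write_mbps:.2f} MBps, read: {read_mbps:.2f} MBps")
    finally:
        os.remove(path)
```

A real benchmark also mixes in random-access and multi-threaded patterns, but the measure-bytes-over-time principle is the same.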

Are Newer Drivers Faster?
In addition to testing the performance of various Ultra SCSI host adapters, I wanted to test the potential benefits of using the latest drivers for these cards. NT 4.0 ships with drivers for most SCSI host adapters, and I wanted to find out whether you can improve performance by using newer versions of the same drivers obtained from the manufacturers. For each card I reviewed, I tested the native, shipping version of the SCSI host adapter driver (if available) and the latest version of the driver from the vendor's Web or FTP site. Table 1, page 188, lists the SCSI vendors for the cards that participated in this review and the URLs for obtaining the latest drivers, and, where applicable, Flash BIOS updates for their products. The results tables later in this article provide head-to-head comparisons of original versus latest driver performance. By the way, to update a SCSI adapter's driver, you have to access the SCSI Adapters control panel in NT, as you see in Screen 2.

Test 1:
Single-Drive Test
For the first performance comparison, I executed a straightforward test. I placed an Ultra SCSI hard disk (a Seagate Medalist 5400RPM Ultra SCSI 2GB drive) by itself on one channel of each SCSI adapter and ran a disk test suite on an NTFS-formatted 2GB disk partition. I designed this test to show what, if any, performance differences exist when you use different Ultra SCSI host adapters with a one-disk configuration. I hoped to find out which host adapters best maximized the test disk's performance potential. Although all host adapters adhere to the same specifications in theory (Ultra SCSI compliance, 20MBps signaling rate) and should therefore deliver approximately the same results, I had a feeling that this test would show some performance differences among the various adapters. Table 2, page 190, shows the results of this disk test.

The test measures drive throughput in MBps and assigns an overall disk score in terms of average MBps and a Bench32 DiskMark score. As you can see from the table, the scores were far from identical. Although my test disk topped out at a maximum score of about 169 DiskMarks and a 3.76MBps average transfer rate, some SCSI host adapters were unable to exploit this potential.
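
Bench32's DiskMark formula is proprietary, but the average-MBps figure is simply the mean of the per-subtest transfer rates. The Python snippet below illustrates the rollup; the subtest figures are hypothetical, not measured results from this review.

```python
from statistics import mean

# Hypothetical per-subtest transfer rates in MBps (e.g., sequential
# read/write and random read/write) -- not figures from the review.
subtest_mbps = [4.1, 3.9, 3.5, 3.5]

average_mbps = mean(subtest_mbps)
print(f"overall average: {average_mbps:.2f} MBps")  # prints 3.75
```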

In particular, I was surprised to see that connecting the test disk to the Adaptec AHA-2940U adapter caused the disk to achieve roughly two-thirds the total performance it was capable of when connected to other adapters. This result was especially surprising given Adaptec's popularity in the SCSI adapter market. The stark difference in results prompted me to re-run the test several times to ensure that I hadn't obtained any anomalous results. Successive tests (even with different cards and firmware revisions of the card) yielded very similar results. Adaptec has a strong reputation for compatibility and reliability; however, the company apparently lags behind other vendors in terms of SCSI host adapter performance. The fact that the Adaptec AHA-2940U continues to fetch a high price compared with its competitors also means that this adapter has a fairly low price/performance ratio (something to keep in mind if you're looking to buy the most bang for your SCSI buck).

The first test also revealed some interesting differences in driver performance. Although most cards exhibited only minor or insignificant performance increases when I installed the newer drivers, one SCSI host adapter, the Mylex/BusLogic FlashPoint LT, gained significant performance when I used the latest driver (from 150 DiskMark points to 169 DiskMark points, an increase of nearly 13 percent). As you can see from Table 2, page 190, none of the adapters I reviewed showed a decrease in performance going from the original drivers to the newer drivers.

Bench32 version 1.21

Contact: U Software 303-280-3198
Email: [email protected]
Price: $35

AHA-2940U Ultra SCSI Host Adapter

Contact: Adaptec 408-945-8600
Price: $300

ABP-980U Ultra SCSI Host Adapter

Contact: Advanced System Products (AdvanSys) 800-525-7443
Price: $399

FlashPoint LT Ultra SCSI Host Adapter

Contact: Mylex (BusLogic was acquired by Mylex) 800-776-9539
Price: $119 kit that includes drivers and cables

QLA-1040 Ultra SCSI Host Adapter

Contact: QLogic 800-662-4471
Price: Unavailable

SYM8751SP Ultra SCSI Host Adapter

Contact: Symbios Logic 800-677-6011 ext. 3047
Price: $140

Testing Multi-Drive Performance
Although my initial test provided some interesting and enlightening data, it didn't test the adapters' performance in the presence of multiple devices. After all, most SCSI host adapters are likely to have more than one physical SCSI device attached (e.g., multiple hard disks or a hard disk and a CD-ROM drive). Therefore, testing the efficiency of the adapters when multiple active devices are simultaneously on the SCSI bus is important.

To test the adapters with multiple active devices, I configured an 8GB RAID 0 stripe set on a testbed NT workstation with two Seagate Cheetah 4LP 10,000RPM Ultra SCSI 4.3GB hard disks. I formatted the partition with NTFS (with a 4KB cluster size), and I conducted the same battery of tests that I used for the previous comparison. However, this time I wanted to look at more than just raw disk and controller performance metrics.
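
A RAID 0 stripe set interleaves fixed-size stripes across its member disks, which is why both drives stay active simultaneously during a large transfer. The sketch below shows the address mapping in Python; the 64KB stripe size and two-disk count are illustrative assumptions for this test configuration, not NT's internals.

```python
STRIPE_SIZE = 64 * 1024  # bytes per stripe; illustrative value
NUM_DISKS = 2            # two Cheetah 4LP drives in the test stripe set

def locate(logical_offset):
    """Map a logical byte offset in the stripe set to (disk index, offset on that disk)."""
    stripe = logical_offset // STRIPE_SIZE
    within = logical_offset % STRIPE_SIZE
    disk = stripe % NUM_DISKS                              # stripes alternate across disks
    disk_offset = (stripe // NUM_DISKS) * STRIPE_SIZE + within
    return disk, disk_offset
```

Because consecutive stripes land on alternating disks, a sequential transfer keeps both spindles (and the SCSI bus) busy at once, which is what makes the multi-drive test a good stress of adapter efficiency.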

Host Adapter Efficiency: What Price Performance?
Although my tests focused on high performance, I wanted to look at another important factor: efficiency. The efficiency of a SCSI host adapter takes into account not only its performance during the tests, but the corresponding resource utilization on the host computer. As you may remember from my June article, one of SCSI's biggest advantages over IDE disk subsystems is its relatively low CPU utilization during data transfers. This factor is a result of SCSI's efficient architecture, which accommodates CPU-freeing Bus Master direct memory access (DMA) data transfers, device independence, and device concurrency in addition to high data transfer rates.

Although SCSI is more efficient than IDE in this respect, you still have to pay attention to CPU utilization, which varies widely among different brands and models of SCSI host adapters. In addition, efficiently coded drivers help lower CPU utilization, and inefficiently coded drivers use more CPU time. To measure the efficiency of each Ultra SCSI host adapter, I monitored several factors during the tests, including CPU utilization, interrupts per second, and context switches per second.

CPU utilization. CPU utilization represents the amount of CPU time (expressed as a percentage) that the Ultra SCSI host adapter consumed during a particular test. The CPU utilization of any device on an NT system is important, because any CPU time consumed on behalf of the device (in this case, the Ultra SCSI host adapter) to service device I/O operations is time when the CPU is unavailable to applications and other system processes.

Although higher CPU utilization is generally undesirable, it can be justifiable if you achieve a higher level of performance (i.e., the performance gain pays you back for the CPU time). Ideally, you want to balance CPU usage and controller performance, keeping CPU usage relatively low to leave the processor free for other tasks.

Interrupts per second. Interrupts per second represents the number of times per second that a device (such as a SCSI host adapter) interrupts the processor with a request. Fewer interrupts mean more efficient adapter performance because the system is transferring more data with fewer interruptions of the CPU. As with CPU usage, a low interrupts-per-second value is desirable. A high value can signify that the hardware or driver is operating inefficiently, continually interrupting the CPU to service I/O requests.

Context switches per second. Context switches per second represents the rate of change from one process thread to another. Context switches can occur either within one process or across multiple processes. Two situations can result in thread switching: One thread asks another for information, or a higher-priority thread that's ready to run preempts the thread currently running. Although low-level device driver I/O operations always incur a certain number of thread context switches, an excessive number wastes CPU time and slows performance. Together with CPU utilization and interrupts per second, the number of thread context switches per second can give you a good indication of how efficient a SCSI controller's firmware and driver are, and what kinds of system resources the adapter is using behind the scenes to produce its performance level.
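
These counters can be folded into a rough efficiency figure: throughput delivered per unit of host-resource cost. The Python sketch below computes a metric of my own devising (not one Bench32 reports), and the sample figures are hypothetical numbers in the shape of the results tables, not measured data.

```python
def efficiency(avg_mbps, cpu_percent, interrupts_per_sec):
    """Return throughput per CPU percentage point and per 1,000 interrupts/sec.

    Higher values mean the adapter delivers more data for the same
    host-resource cost; this is an illustrative metric, not Bench32's.
    """
    return {
        "mbps_per_cpu_point": avg_mbps / cpu_percent,
        "mbps_per_kilo_interrupt": avg_mbps / (interrupts_per_sec / 1000),
    }

# Hypothetical figures in the shape of the results tables -- not measured data.
sample = efficiency(avg_mbps=3.76, cpu_percent=12.0, interrupts_per_sec=900)
print(sample)
```

An adapter that posts a slightly lower DiskMark score but a much better ratio here may still be the more attractive choice for a loaded server.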

Fortunately, the benchmarking utility (Bench32) I used for this review lets me track these and other counters during disk test suites. This capability provides a Performance Monitor-like picture of system performance, but one that's specific to individual disk tests, and tells me exactly what kind of resource impact each test had on the system.

Test 2:
Multi-Drive Test
The second test measured the performance of the Ultra SCSI host adapters in a multi-drive environment. Test 2 yielded a lot of interesting results and, in some cases, changed the performance rankings for certain adapters (Table 3, page 190, shows the results of the second test).

Adaptec's AHA-2940U regained its footing somewhat, closing the first test's wide margin between the Adaptec and the other adapters I reviewed. The Adaptec adapter still ranked last in the multi-drive test, but this time by only a razor-thin margin behind the next competitor, rather than the 33 percent performance gap I saw during the first test.

The Adaptec card benefited highly from the use of the latest NT driver, which increased its overall DiskMark score by 11 points (from 150 to 161). This finding may be of particular interest to Adaptec customers who are using this adapter and NT with the default driver; depending on your environment, you may gain a significant performance boost simply by installing the newer version of the driver for this adapter.

While I'm on the subject of driver-induced performance increases, Symbios Logic's adapter also went through a Cinderella-style transformation after I installed the newer driver. The Symbios Logic Ultra SCSI host adapter started at the bottom of the ranks after the default driver tests, but raced past the other adapters I reviewed after the newer driver test, to finish with the top score for raw performance.

In fairness, the relatively low scores and high interrupt rate of the default driver test most likely arise because NT does not ship a driver designed specifically for the chipset on this card. In the absence of such a driver, I used a device driver for an earlier chipset (the only one shipping with NT). This driver appeared to work fine, but it was not optimized for the 875 chipset on the test card.

Although the QLogic QLA-1040 adapter finished a close second behind the Symbios Logic adapter, it also incurred a lower CPU utilization rate than the Symbios Logic adapter. This result brought the overall efficiencies of the two adapters to a veritable tie. The QLogic adapter seemed to derive little performance benefit from its driver update. I assume this finding reflects that QLogic had already highly optimized the default NT driver for this adapter and therefore had little room for increasing its performance.

Another strong performer was the Mylex/BusLogic FlashPoint LT card. With its latest driver scores, it finished third overall. However, one interesting note is that this card's CPU interrupt generation level was far higher than that of most other adapters, as you can see in the average interrupts per second column of Table 3. In fact, the number of CPU interrupts for this adapter during the tests was approximately three times as high as for other adapters. However, the CPU utilization scores were not significantly higher than for other adapters (and were lower in several cases). This finding indicates that although the adapter and driver are interrupt-intensive, the numbers don't translate into any heavy taxing of CPU time. Finally, the AdvanSys adapter seemed to falter somewhat from its stellar performance in the single-drive test, achieving scores that placed it with the Adaptec 2940U at the lower end of the scoring range.

Wrapping It Up
In general, all the adapters fell into the same range of processor utilization, which confirms SCSI's CPU-friendliness. This result also confirms the reasoning that SCSI is the best choice for servers and high-end workstations with significant levels of disk activity, regardless of which brand or model of adapter you choose.

After all, if your disk I/O is sapping all your CPU time, you won't have as much of this time available for applications and services running on your machine.

In future articles, I'll look at the performance of some of the new and upcoming standards, including Ultra2 SCSI and Fibre Channel, in both single-drive and multi-drive RAID configurations. In addition, I'll evaluate the efficiency improvements of the new generation of I2O-aware motherboards, adapters, and drivers as they become available. I2O (Intelligent I/O) is an initiative that various industry vendors support; it calls for offloading system I/O processing from the host system's CPU to a dedicated I/O processor such as the Intel i960 RISC chip. This technology promises to deliver a new level of I/O performance and efficiency to network servers. I'll test these products on some NT servers to separate what's hip from what's hype. Until then, be sure to grab the latest driver from your SCSI vendor's Web or FTP site, and watch that CPU utilization!