The Gateway ALR 9200 Server is the newest addition to the ALR Server series, combining technologies from Gateway 2000 and Advanced Logic Research (ALR), now a Gateway business unit. The ALR 9200 is Gateway's enterprise quad Xeon server system for transaction-intensive applications. At only 18" high, 12.25" wide, and 25.5" deep, the ALR 9200 fits both office and computer-room environments.
Inside the System
The ALR 9200's cabinet design isn't tool-less but only requires loosening three screws on the system's back to remove the side panel and gain access to card slots and memory. Molded foam inserts hold the cooling fans in place and direct the airflow within the system. The system I tested had five cooling fans (with room for three more), which provided ample cooling for the system components. The Seagate Cheetah 10,000rpm hard disks in hot-swappable disk bays are equipped with heat sinks to promote the necessary cooling for extended hard disk life.
Removing two additional screws lets both the front hard disk cage and the rear electronics cage pivot out and provide access to cable connectors on the motherboard. In this ALR 9200 system, a RAID controller connected five disks, which left one hot-swappable hard disk bay unused. First, I chose to configure all the disks for fault tolerance. For the boot volume, I configured two of the disks for RAID 1 (mirroring). Then, I configured the other three hard disks for RAID 5, which created an 8.5GB data volume. Because the ADAC Ultra 2 model S466 RAID controller doesn't incorporate battery-backed cache, I configured it for data security with the write-through and direct I/O options. (The Gateway's model S438 three-channel controller, which is based on AMI technology and customized to Gateway's specifications, has battery backup for the cache, so I would feel more comfortable configuring this controller for performance with the write-back option.) For more information about controller configuration, see the sidebar "RAID Performance Configuration," page 160. To accommodate attachment to the Lab's four benchmark network segments, I replaced the 3Com 10/100 Ethernet card with a four-port Cogent 10/100 Ethernet card. Access to memory slots in the ALR 9200 is painless because the memory card, which has 16 DIMM sockets, slides easily out of the cabinet.
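The usable capacities that the RAID configuration above yields follow directly from the array types. The sketch below is illustrative arithmetic only; the per-drive figure of roughly 4.25GB is an assumption (the formatted capacity of the nominally 4GB Cheetah drives), chosen because it matches the 8.5GB RAID 5 data volume reported above.

```python
# Usable-capacity arithmetic for the arrays described above.
# The 4.25GB formatted per-drive capacity is an assumption.

def raid1_capacity(disk_gb: float) -> float:
    """RAID 1 mirrors every block, so usable space equals one disk."""
    return disk_gb

def raid5_capacity(disk_gb: float, disks: int) -> float:
    """RAID 5 spreads one disk's worth of parity across the array,
    leaving (disks - 1) disks of usable space."""
    return disk_gb * (disks - 1)

disk_gb = 4.25                       # assumed formatted capacity per drive
print(raid1_capacity(disk_gb))       # boot mirror: one disk's worth of space
print(raid5_capacity(disk_gb, 3))    # data volume: 8.5GB, as reported
```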
I installed Windows NT Server 4.0 with Service Pack 3 (SP3) without incident to a 2GB hard disk volume on the mirror set. The installation required minimal reference to the system manuals. In addition to the User's Guide, which covers the system's basic operation, Gateway includes Maintaining and Troubleshooting the Gateway ALR 9200 Server, which thoroughly covers BIOS and hardware configuration and how to use the system utilities.
The ALR 9200 has two features that support remote systems management: the Intel Server Control (ISC) software and the Emergency Management Port (EMP) Console. You'll find both the ISC and the EMP Console on the System Utility CD-ROM.
ISC. ISC uses the server's implementation of Desktop Management Interface (DMI) 2.0 to provide current operational status information for server hardware components. In addition, you can use ISC in conjunction with the supported enterprise management platforms (i.e., HP OpenView Network Node Manager, Intel LANDesk Server Manager, and Computer Associates Unicenter TNG) to generate alerts. Because Intel wrote ISC as an ActiveX control, in standalone mode you can access ISC through a Web browser that supports ActiveX or a container application, such as the Microsoft Management Console (MMC).
I chose to use the MMC to test ISC. ISC's ActiveX control first presents a list of managed servers by OS. For each server on the list, ISC displays icons for two components: the ISC console and the Intel DMI Explorer. The DMI Explorer lets you view or set attributes for each DMI-compliant component on the system. The DMI Explorer presents the attributes as mostly undefined numeric codes, which provide little value without a fundamental understanding of the data.
The ISC console provides the real functionality. Screen 1 displays a hierarchical tree of servers and monitored components in the ISC's left pane and the selected folder's configuration information in the right pane. When you expand the server folder to show the monitored component categories, an indicator of the server's overall health appears at the top. (I would rather have the health icon appear beside the collapsed server folders to provide a compact visual representation of the health of several servers.) The icon is green in a healthy state and turns red (i.e., critical) or yellow (i.e., noncritical) in unhealthy states. You can select the health item to display a summary of all sensors reporting error conditions, which lets you quickly drill down into the subsystem that is reporting a problem.
Not all the information available through the ISC console is the result of measurement sensors. You can expand the System Information folder to reveal a wealth of nonsensor information, such as BIOS versions, processor types and versions, field replaceable units (FRUs) and serial numbers, and system slot status. You can also obtain a summary of the DMI hardware system event log, which lists system boots and events triggered when a sensor value falls outside its predefined acceptable range. This information is valuable when you're planning a system upgrade or troubleshooting.
Screen 1 shows components that sensors monitor. When you select the Sensor Settings tab, the ISC's right pane displays the current sensor readings, a count of historical errors, and the current thresholds for events. The Alert Actions tab provides configuration options for selecting actions to take place when an alert occurs. Notification options include logging the action to disk (i.e., the system event log), displaying and broadcasting a message, and sounding the system speaker. In addition to notification options, a variety of shutdown and power control actions are also available. Shutdown options include shutting down the OS, immediately rebooting, and powering off the server. The Sensor Information tab displays details about the sensor itself, including its measurement range and accuracy and the normal values for its readings.
EMP Console. The EMP Console utility runs on an NT Server, NT Workstation, or Windows 9x system and connects to the server's COM 2 port via either a modem or a direct connection. This utility's primary function is remote support, which lets Value Added Resellers (VARs) and system administrators view DMI status information (e.g., the DMI system event log, sensor data, FRU data) and reset a hung server remotely.
The EMP Console utility is significantly less useful and less user-friendly than the ISC. The Sensor Data Record view that the utility provides is largely undefined hexadecimal data, and the FRU viewer shows only baseboard information. The software lets you simulate the system's power and reset buttons but doesn't provide an option to shut down the OS. However, the EMP Console communicates directly with server firmware, so it works as long as the system has a power connection, even if the system is powered down.
To test the server's performance with a file server application, I ran the Standard File Service test in Bluecurve's Dynameasure benchmarking suite. I ran each test case three times. Table 1 shows the results, which are the average of the three test runs.
Dynameasure's Standard File Service test consists of 50 percent read transactions and 50 percent write transactions. These read and write transactions occur between the test server's disk subsystem and the load-generating client computer systems' cache. I generated a 5GB test-data set to minimize the chance that data cached at the server would artificially improve read-transaction performance.
I ran two test configurations. The first test used the original fault-tolerant RAID 5 configuration of three hard disks. For the second test, I reconfigured the three hard disks into a non-fault-tolerant RAID 0 configuration. The difference between these tests demonstrates the performance cost of fault tolerance. As Table 1 shows, this difference in performance (primarily because of the additional parity calculation processing that a RAID 5 write requires) is significant. In this case, RAID 5 throughput is only 40.7 percent of the RAID 0 throughput. During the maximum-throughput part of the tests, CPU utilization was in the 40 to 45 percent range, which suggests significant excess processing capacity. At the same time, the disk I/O queue length averaged 11, indicating that a disk I/O bottleneck limited throughput in this test.
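The measured gap is consistent with RAID 5's well-known small-write penalty: each logical write requires reading the old data and old parity, then writing new data and new parity (four physical I/Os), where RAID 0 needs one. A back-of-envelope model for the test's 50/50 read/write mix, illustrative only (real throughput also depends on striping, caching, and queue depth):

```python
# Rough model of the RAID 5 small-write penalty for a
# 50% read / 50% write workload.

def relative_throughput(read_frac: float, write_cost: int) -> float:
    """Average physical I/Os per logical I/O, inverted to give
    throughput relative to a one-I/O-per-operation baseline."""
    ios_per_op = read_frac * 1 + (1 - read_frac) * write_cost
    return 1.0 / ios_per_op

raid0 = relative_throughput(0.5, 1)  # a RAID 0 write costs one physical I/O
raid5 = relative_throughput(0.5, 4)  # read-modify-write: 2 reads + 2 writes
print(raid5 / raid0)                 # 0.4, close to the measured 40.7 percent
```

The simple model predicts RAID 5 delivering 40 percent of RAID 0's throughput, in line with the 40.7 percent observed.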
During my earlier testing of a Dell PowerEdge 6300 Server configured with a RAID 0 array, average throughput for the same Dynameasure benchmark test was 7983KBps, which is 8.6 percent better than the ALR 9200. I attribute the difference to the I/O throughput capacity advantage that the Dell's seven-disk RAID 0 array has over the ALR 9200's three-disk configuration because both arrays use Seagate Cheetah 10,000rpm hard disks.
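The Dell figure and the stated percentage advantage imply the ALR 9200's RAID 0 throughput, which can be back-calculated as a quick sanity check:

```python
# Back-calculating the ALR 9200's implied RAID 0 throughput from the
# Dell comparison stated above.
dell_kbps = 7983.0   # Dell PowerEdge 6300, seven-disk RAID 0
advantage = 0.086    # Dell is 8.6 percent better

alr_kbps = dell_kbps / (1 + advantage)
print(round(alr_kbps))  # roughly 7351 KBps
```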
The ALR 9200 is compact and has adequate internal expansion capacity. System performance and fault tolerance features are similar to those of other quad Xeon systems that the Lab has reviewed. The onboard systems management tools, as standalone tools, are less robust than the tools of other servers in this class. Yet, in conjunction with the supported enterprise management platforms, the server provides solid application management. The ALR 9200 is a respectable entry into the quad Xeon server market.
Gateway ALR 9200 Server
Contact: Gateway * 800-315-2536
Price: $28,201 for tested configuration
System Configuration: Four 400MHz Pentium II Xeon processors with 1MB of Level 2 cache, Intel 450NX chipset with 100MHz front system bus, 2GB Error-Correcting Code DRAM, Five Seagate Cheetah Ultra 2 SCSI Low Voltage Differential 4GB hard disks, IDE CD-ROM drive, Gateway ADAC Ultra 2 model S466 single-channel PCI RAID controller with 16MB cache, 3Com 10/100 PCI Ethernet card, Integrated 2MB Cirrus Logic GD5480 video adapter, Integrated Symbios narrow SCSI controller, Integrated Symbios wide dual-channel Low Voltage Differential Ultra/Ultra 2 SCSI controller, Six PCI expansion slots and one shared PCI/ISA expansion slot, Ultra-Direct Memory Access PCI IDE interface, Three 400-watt power supplies