
Business Server Development and NT 5.0

Innovations to Improve Your NT Server Platform

Since the Pentium Pro processor's introduction in late 1995, the Intel server platform hasn't changed much. Innovations such as the adoption of Ultra Wide SCSI have introduced evolutionary, rather than revolutionary, change. In 1997 (a sleepy time for business server development), the primary server platform remained Pentium Pro-based and ran Windows NT Server 4.0. Late 1998, however, will bring significant change to the business server universe as major advances in the Intel server platform and NT Server 5.0 chart a direct course for your network. In this article, I'll examine some important server hardware changes coming in late 1998 and the key features of NT Server 5.0. I'll look at how these hardware and operating system (OS) changes will affect the typical NT server platform and suggest some ways you can choose among them to customize your system.

Processor Technology
The fastest Intel processors are in the Pentium II family; the exception to this generalization is 4-way servers, which still use the Pentium Pro processor. Besides clock speed, the main difference between the Pentium Pro and Pentium II is the speed and size of the secondary, or Level 2, cache. The Pentium Pro offers up to 1MB of Level 2 cache that runs at the same speed as the processor; the Pentium II is limited to 512KB of Level 2 cache that runs at half the processor's speed. On desktops, the higher clock speed of a Pentium II compensates for the slower cache. On multiprocessor servers, however, where cache performance is mission critical, a Pentium II's higher clock speed does not compensate for the slower cache.

The next generation of Pentium II technology, the Pentium II Xeon, targets the problem of limited cache speed by offering full-speed caches of up to 2MB per processor. The new Xeon processors start at 400MHz and will offer even faster speeds by the end of the year. As processor speed increases, other system components must become faster as well. Therefore, the new generation of Pentium II processors increases memory bus bandwidth from 528 MB per second (MBps) to 800MBps by increasing the bus frequency from 66MHz to 100MHz. These increases are essential to enable 4-way systems to scale well.
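
The bandwidth figures follow from simple arithmetic: the Pentium Pro/Pentium II system bus moves 64 bits (8 bytes) per clock, so peak bandwidth is just bus width times bus clock. The short Python calculation below, included purely for illustration, reproduces both numbers.

# Peak memory bus bandwidth = bus width (bytes) x bus clock (MHz).
BUS_WIDTH_BYTES = 8  # the P6 system bus carries 64 bits per clock

for clock_mhz in (66, 100):
    print(f"{clock_mhz}MHz bus: {BUS_WIDTH_BYTES * clock_mhz}MBps peak")
# 66MHz bus: 528MBps peak
# 100MHz bus: 800MBps peak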

Intel's introduction of the Pentium Pro processor and its supporting chipsets allowed the basic design of 4-way Intel servers to become standardized. The next generation of Pentium II processors will likely play a similar crucial role in standardizing the design of 8-way servers. Intel recently moved toward controlling the 8-way market by buying both Corollary's 8-way ProFusion design and NCR's OctaScale design. Axil, the only other developer of 8-way Xeon technology, recently closed its doors. One consequence might be premium prices for 8-way servers.

Eight-way systems are not the end of the line, though. Unisys recently announced its intent to market a 32-way symmetric multiprocessing (SMP) server that uses Pentium II Xeon technology on a crossbar system architecture and measures memory bandwidth in GBps. (This monster won't be available in the near future, however.) To learn more about the new generation of Pentium II processors, go to Intel's Web site (http://www.developer.intel.com/design/PentiumII).

Storage and I/O Subsystems
Server storage has been largely SCSI-based for almost a decade, and performance has increased incrementally over that time. SCSI has leapt from 5MBps to a whopping 40MBps of bandwidth. Ten years ago, Intel servers used a 386 processor, usually supported no more than 64MB of memory, and came with a 100MB hard disk. By the end of 1998, a server will have four next-generation Pentium II processors, as much as 8GB of memory, and hundreds of GBs of disk storage. Clearly, storage I/O subsystem design must change if the next generation of servers is not to be bound by I/O. Two technologies will emerge in 1998 to relieve the looming I/O bottleneck: Ultra2 SCSI and fibre channel storage subsystems.

Ultra2 SCSI. Today's Ultra Wide SCSI interface presents several problems to server designers. The most serious problem is performance: 40MBps may seem like a lot, but four modern disks running at full speed can still saturate it. A less widely understood problem is that every time SCSI speed has doubled, the maximum total length of the cabling from controller to disk has been cut in half. In addition, although you can put several SCSI channels on one PCI board, the size of the connectors means you can't easily offer more than two channels per board to external devices. These restrictions in cable length and slot availability make adding large amounts of storage to servers difficult.
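
To see why four drives are enough to saturate the bus, consider the rough calculation below; the per-drive transfer rate is an assumed ballpark for a fast late-1990s disk, not a quoted specification.

# Rough saturation check for an Ultra Wide SCSI channel (Python sketch).
BUS_BANDWIDTH_MBPS = 40    # Ultra Wide SCSI peak transfer rate
DRIVE_SUSTAINED_MBPS = 10  # assumed sustained rate for one fast drive

drives_to_saturate = BUS_BANDWIDTH_MBPS // DRIVE_SUSTAINED_MBPS
print(f"Drives needed to saturate the bus: {drives_to_saturate}")  # -> 4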

Ultra2 SCSI (also known as Low Voltage Differential Signaling--LVDS--SCSI) addresses both of these limitations. First, Ultra2 increases bandwidth to 80MBps. Second, because Ultra2 uses differential signaling (+5 and -5 volts rather than +5 and 0 volts), it allows the use of longer cables than current SCSI implementations allow. Ultra2 SCSI is available today; unfortunately, it's not widespread. You can learn more about Ultra2 technology at http://www.adaptec.com/technology/whitepapers/futureofscsi.html.

Fibre channel storage subsystems. Fibre channel fundamentally changes the way we connect devices to computers. It does so by using a high-speed serial interface instead of the parallel interface SCSI uses. (A serial bus can be faster than a parallel bus because driving a serial bus at very high frequencies is much easier.) In their initial implementations, fibre channel storage subsystems will enable transfer speeds of 100MBps; fibre channel's future upgrade path offers speeds of up to 400MBps. This serial transmission mode addresses the problem of signal degradation in long cables, allowing the use of cables whose lengths are measured in kilometers. The name fibre channel is somewhat misleading, because it implies the use of fiber-optic cables. Actually, many fibre channel implementations will use copper wiring, which is cheaper than fiber-optic cabling and still allows very long (hundreds of meters) cable lengths. Longer cables let you locate a server in a building separate from the server's storage, a particularly useful feature in clustered environments, in which separating a server from its storage facilitates disaster recovery and heightens security.

Fibre channel supports as many as 126 devices per channel. However, users must still choose between performance and expansion, because peak performance will occur with around 30 to 40 devices on a single controller. You can implement fibre channel with a great deal of fault tolerance by running dual-fibre loops from one disk to two systems, each with its own disk controller, to provide redundancy, as Figure 1 shows. You can find out more about fibre channel by visiting the Fibre Channel Association Web site (http://www.fibrechannel.com). To read a preview of fibre channel performance, see Dean Porter, "Fibre Channel, SCSI, and You," September 1997.

PCI Advances
At its introduction in 1994, the PCI bus, with a peak bandwidth of 133MBps, represented a major advance over the EISA and ISA bus technologies. Today, the PCI bus looks increasingly like a serious bottleneck as the computer industry looks forward to advances like fibre channel (100MBps) and Gigabit Ethernet (80MBps to 100MBps). Fortunately, the designers of PCI anticipated the problem and designed PCI to be scalable. The next generation of servers will offer a 64-bit PCI bus capable of 267MBps peak bandwidth when running at 33MHz (current PCI bus implementations are 32-bit at 33MHz).
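
The PCI numbers follow the same width-times-clock arithmetic as the memory bus figures. The short sketch below also shows why 64-bit PCI is usually quoted as 267MBps rather than 264MBps: the PCI clock is nominally 33MHz but actually 33.33MHz.

# Peak PCI bandwidth = bus width (bytes) x bus clock (MHz).
PCI_CLOCK_MHZ = 33.33  # the nominal "33MHz" PCI clock

for width_bits in (32, 64):
    peak_mbps = (width_bits // 8) * PCI_CLOCK_MHZ
    print(f"{width_bits}-bit PCI: {peak_mbps:.0f}MBps peak")
# 32-bit PCI: 133MBps peak
# 64-bit PCI: 267MBps peak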

Speed isn't the only advance coming to the PCI bus. Several vendors already offer hot-plug PCI slots in their systems. With hot-plug, you can replace a failed PCI card without a system reboot. Hot-plug PCI combined with fault-tolerant technology such as fibre channel means you'll never again have to say you're sorry because your system is down.

High-Speed Dedicated Clustering Interconnects
The current 2-way failover cluster that NT Server, Enterprise Edition (NTS/E) supports is rather primitive compared with more mature clustered environments, such as Digital Equipment's VMS clustering, which support load balancing across many more systems in a cluster. Microsoft plans to change this situation with a future release of NTS/E that supports not only failover but also load balancing between cluster members. However, when you begin load balancing between cluster members, you generate housekeeping traffic among the servers in the cluster.

In theory, Microsoft could use a standard protocol such as TCP/IP to implement failover and load balancing over a conventional network such as 100Base-T. However, that approach introduces a problem with network latency. Networks and networking protocols are designed for general-purpose traffic, where a delay of hundreds of milliseconds between the sending of a packet and acknowledgment of its receipt is acceptable. In intracluster communications, however, latency of that magnitude would prevent even a 2-node cluster from scaling effectively: adding nodes to the cluster would increase data-relay time with each additional node, eventually affecting performance unacceptably. (A high-speed interconnect that can overcome the data-relay bottleneck would consist of a software layer--an OS component--and a hardware component. Microsoft will likely base the software component in NT 5.0 on the Virtual Interface--VI--architecture it is developing jointly with Intel and Compaq. You can find the technical details describing VI on Intel's Web site--http://www.intel.com/solutions/tech/via.htm.)
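
A toy model illustrates the scaling problem. If every node must exchange housekeeping messages with every other node, the number of messages grows with the square of the cluster size, so per-message latency that is harmless in a 2-node cluster quickly becomes crippling. (This is a back-of-envelope illustration of my own, not Microsoft's clustering algorithm.)

# Back-of-envelope model: one full round of pairwise housekeeping
# messages costs nodes * (nodes - 1) one-way latencies.
def housekeeping_ms(nodes: int, latency_ms: float) -> float:
    return nodes * (nodes - 1) * latency_ms

for n in (2, 4, 8, 16):
    print(f"{n:2d} nodes: {housekeeping_ms(n, 100.0):8.0f} ms per round")
# at 100 ms per message: 2 nodes -> 200 ms, 16 nodes -> 24,000 ms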

Several companies are working on dedicated high-speed clustering interconnect technologies. Compaq and Tandem collaborated on ServerNet, which Tandem originally developed for its proprietary Himalaya systems. ServerNet is now available to other vendors as an NT cluster interconnect solution (http://www.tandem.com). Dolphin Interconnect Solutions has designed an interconnect for UNIX clustering solutions that implements the Scalable Coherent Interface (SCI) standard. This product is now available to vendors as an NT solution (http://www.dolphinics.com/dolphin2/interconnect). HAL Computer Systems recently announced its entry into the high-speed interconnect marketplace with its Synfinity Interconnect Architecture. This product addresses a variety of clustering interconnect needs, including VIA and CC-NUMA (http://hal.com/fjst/). All these cluster interconnect solutions share several features, including performance in the GB-per-second range combined with very low latency. This feature set means cluster members can communicate with one another at very high speeds, allowing them to scale efficiently as new members join the cluster.

I2O Intelligent I/O Devices
Although Intelligent Input/Output (I2O) will not have a big impact on server performance in 1998, it is nonetheless an important technology. In theory, I2O devices can turn your computer from a master talking to slave I/O cards into a collective of intelligent devices that cooperate to get work done.

Figure 2 shows the data transfer path from a disk drive to a network controller. The system must read the data into system memory from the I/O subsystem, process the data, and then write it back to the network interface. I2O, however, will let the system transfer data directly from the disk subsystem to the network, as Figure 3 shows. Because the data travels directly from the disk controller to the network controller, the only additional traffic is messages passing between the OS and the controllers to direct the data.
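
The contrast between Figures 2 and 3 can be summarized in a few lines of Python; the classes and method names here are illustrative stand-ins, not the actual I2O messaging interface.

# Illustrative sketch of the two data paths (not the real I2O API).
class Disk:
    def read(self):
        return b"disk block"         # DMA into a host memory buffer

    def transfer_to(self, peer):
        peer.receive(b"disk block")  # peer-to-peer move, no host copy

class Nic:
    def receive(self, data):
        print(f"NIC sent {len(data)} bytes")

def classic_path(disk, nic):
    buf = disk.read()      # 1. data lands in system memory
    nic.receive(buf)       # 2. host copies it back out to the NIC

def i2o_style_path(disk, nic):
    disk.transfer_to(nic)  # OS only exchanges control messages

classic_path(Disk(), Nic())
i2o_style_path(Disk(), Nic())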

Because I2O requires new motherboards, adapters, and modifications to the OS, predicting when it might become a common solution is impossible. However, Microsoft is adding support for I2O to NT 5.0, and the computer industry might get an early look at that support if Microsoft offers I2O as a service pack for NT 4.0. For more information about I2O, check out Intel's Web page (http://www.intel.com/procs/servers/i2otech/index.htm).

Universal Serial Bus
One technology that will appear in the Intel server platform this year is the Universal Serial Bus (USB). USB supplements the serial ports today's systems use and provides a common interface for a variety of low-bandwidth devices such as modems, keyboards, mouse devices, and scanners. The immediate utility of USB on servers is hard to see, because few devices that typically attach to servers are available with a USB interface. However, this situation is likely to change.

There is some question about whether USB on a server is a good idea, particularly in secure environments. One of the key features of USB is that it accepts hot-plug devices, so the system recognizes a new device without user intervention. Many secure environments run servers without a keyboard, and the ability to simply plug one in is not desirable in such situations.

Advanced Configuration and Power Interface
ACPI is a standard that Microsoft, Intel, and several other vendors developed to improve the ease of use and management of Intel-based hardware platforms. NT 5.0 requires ACPI to fully implement advanced power-management features such as Wake-on-LAN, in which a server can remain in sleep mode until a network packet wakes it up. In the past, Intel-based systems implemented fairly inflexible power management in the BIOS. ACPI will let the OS completely control how much power goes to which devices. Full implementation of ACPI is useful for desktops; in servers, however, it is crucial: without an ACPI-compliant system, you won't get the benefits of hot-plug PCI.

Although Microsoft touts ACPI as an important component of NT 5.0, the current crop of 4-way Intel servers will never fully support ACPI because of hardware limitations. These limitations don't mean that NT 5.0 won't run on non-ACPI-compliant hardware, only that the hardware won't implement all ACPI features. If you want to know more about ACPI, you can find the full ACPI specifications at http://www.teleport.com/~acpi/spec.htm.

New Features in NT Server 5.0
Improvements in the Intel server platform are only part of the coming changes to the business-server development equation. Equally significant are changes in software: Microsoft has big plans for NT Server 5.0. The primary purpose of NT 5.0 is to lower total cost of ownership (TCO) and improve scalability, security, and administration. NT 5.0 will accomplish these goals with a variety of features, including Plug and Play (PnP), IntelliMirror, media management services, Active Directory (AD), and Microsoft Management Console (MMC). You can read about the new features in NT 5.0 and find links to additional information at http://www.microsoft.com/ntserver/basics/future/windowsnt5/features.asp. To learn how you can get your enterprise ready for NT 5.0 today, see Sean Daily, "10 Steps to Prepare for NT 5.0 Now," February 1998.

PnP. Today, NT's support for PCI is largely static. If you change a device, you have to reboot the system and perhaps reload drivers--even on systems with hot-pluggable PCI slots. NT 5.0 will be the first release of NT that fully uses the PnP capabilities of the PCI bus. For most users, this capability means simpler installation of new devices. In addition, for servers with hot-plug PCI and redundant network or disk interfaces such as fibre channel, this capability will enable a higher level of fault tolerance. With NT 4.0, if your disk controller fails, the system will go down even if it is protected by RAID. However, if you have a pair of fibre channel controllers attached to a dual-ported drive and your disk controller fails, your system can stay up while you replace the failed controller.

IntelliMirror. Many UNIX companies have spent a lot of time criticizing NT because of the amount of work it takes to administer a large NT network. Those companies have a point; in fact, the only way to get a truly zero-administration PC is to take out the power supply. This situation will change when enterprises roll out NT Server 5.0 and NT Workstation 5.0, both of which will support the client and server portions of a technology called IntelliMirror, an important component in Microsoft's Zero Administration for Windows (ZAW) initiative. IntelliMirror will automatically mirror data, applications, and settings from a PC back to a central server. It will increase user mobility by letting users sit down in front of any PC on their network and easily access their data and applications. It will also simplify PC replacement: because all of a user's data, applications, and customized settings are mirrored on the server, replacing a PC will be as simple as plugging in a new PC with NT 5.0 installed. To learn more about ZAW and IntelliMirror, see Mark Minasi, "Zero Administration for Windows," December 1997.

Media management services. NT servers support more disk space every year, which makes solving the problem of availability particularly important. Today, you must reboot an NT server after adding a disk or changing the layout of partitions. As NT becomes a crucial part of a network, such rebooting becomes problematic. The new media management services in NT 5.0 will let administrators add and rearrange disk space as well as expand file systems without a reboot. Screen 1 shows the new Disk Management utility on the MMC in NT 5.0 beta 1. Other features of NT 5.0 media management services include improved handling of removable media such as Zip drives and dynamic resizing of file systems. Currently, you can format removable media devices only as FAT. This limitation is not a problem with a Zip drive but can be serious for a device such as a Jaz drive supporting 2GB of storage.

NT directory services. Perhaps the biggest change coming in NT Server 5.0 is AD. Currently, various logical objects in NT, such as users, storage, and applications, share no unified naming convention. For example, users exist either in a local or a domainwide structure, file shares are named for the machine they reside on, and applications have no naming convention at all. NT layers these varying addressing constructs on either the Domain Name System (DNS) or Microsoft's NetBIOS naming. The resulting multilayered and noninteroperable structure complicates managing very large NT networks.

AD will simplify this picture, imposing a unified naming convention based on the International Organization for Standardization (ISO) X.500 standard on all objects on the network. AD will also build lookup operations on top of a Lightweight Directory Access Protocol (LDAP) service. LDAP is an increasingly common network standard that most vendors support. With AD, you'll be able to move or duplicate resources without affecting users. You can read an excellent white paper discussing the fundamentals of AD at http://www.microsoft.com. To learn how AD will change the way NT functions, see Mark Minasi's series of Inside Out columns spotlighting AD, November and December 1997, and January and February 1998.
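
To give a concrete flavor of what an LDAP lookup looks like, here is a minimal sketch using the Python ldap3 library; the server name, credentials, and directory tree are hypothetical, and the point is only the shape of the query, not AD's eventual API.

# Hypothetical LDAP lookup against a directory server (Python + ldap3).
from ldap3 import Server, Connection

server = Server('dc01.example.com')  # hypothetical directory server
conn = Connection(server, user='EXAMPLE\\admin',
                  password='secret', auto_bind=True)

# Find a user anywhere in the tree, regardless of which machine the
# object lives on--the payoff of a unified namespace.
conn.search('dc=example,dc=com',
            '(&(objectClass=user)(sAMAccountName=jsmith))',
            attributes=['cn', 'mail'])
print(conn.entries)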

MMC. Just as existing versions of NT have no common naming convention, NT has no common interface for administrative tasks. In NT 5.0, you will perform all administrative tasks through the MMC, which Screen 2 shows. If you've been using Internet Information Server (IIS) 4.0, you've already seen an early version of MMC, which manages IIS. MMC is built around a plug-in architecture based on ActiveX. This structure has two major advantages. First, any vendor can add a proprietary plug-in to control its application. Second, because plug-ins are relatively easy to write, backup and database applications are likely to adopt MMC quickly as their administrative interface. Because MMC is built on the foundations of ActiveX, extending the interface to work over the Web is possible, although Microsoft does not guarantee that all MMC objects will be Web-accessible.

MMC will simplify administration by presenting one tool for all administrative tasks, including tasks that Disk Administrator, User Manager, and Performance Monitor currently perform. To read more about MMC fundamentals, go to http://www.microsoft.com. To learn more about MMC's design and functionality, see Keith Pleas and Dean Porter, "Microsoft Management Console," February 1997.

Putting It All Together
As you anticipate using NT 5.0 and new server technology in your organization, you need a clear picture of which features are most important for your system. The primary criterion in your evaluation of these features needs to be reliability, rather than performance. After all, if you need a car to get you to work in all weather with minimum fuss and expense and maximum dependability, you buy a Ford--not a Ferrari. The same logic applies to your network servers.

Be aware that the new features in NT 5.0 can strain your network and storage subsystems. Technologies such as IntelliMirror require more disk space on the server and consume more network bandwidth. What's more important, however, is that these new technologies make your network server a single point of failure for all the desktops on your LAN. Large organizations with deep pockets can overcome the single-point-of-failure challenge with clustered NT systems. If your enterprise is on a less capacious budget, it can meet that challenge with high-availability features: RAID, redundant network interfaces, and other solutions that keep downtime to a minimum need to be high on your list of priorities.

If you want to take full advantage of NT 5.0's media management services, you'll need a hot-pluggable disk subsystem. Avoiding downtime is hard if you have to shut down your system every time you add a disk drive, so you'll also want servers with hot-swap drive bays. You might consider RAID controllers that support the addition of new disks to existing logical volumes.

MMC doesn't require hardware support, but now is a good time to start asking your software and hardware suppliers whether their management tools will be integrated into MMC or continue as standalone applications. Like MMC, AD doesn't require special hardware support, but AD will rely on your servers to resolve directory lookup requests quickly and reliably.
