NT News Analysis - 01 Jun 1998

Future of NT SMP Gets Murkier

Several major hardware vendors are testing the Windows NT symmetric multiprocessing (SMP) server waters. Axil Computer, HP, and Data General have formed the Crossbar Coalition, a loosely aligned group dedicated to promoting 8-way server technology based on Axil's Adaptive Memory Crossbar standard. With this quad-to-quad, shared-memory interconnect, enterprises can use Socket 8 and Pentium Pro processor components to build cost-effective, hybrid 8-way platforms. All the coalition members currently offer products based on the Adaptive Memory Crossbar standard.

The Adaptive Memory Crossbar technology was becoming the de facto standard--that is, until an Intel subsidiary, Corollary, jumped in and muddied the NT SMP server waters. Corollary, an interconnect solution competitor, announced that it would bypass Socket 8 entirely for its 8-way SMPs, choosing instead to deliver a more robust design based on the Deschutes Slot 2 series of Pentium II CPUs. Servers based on the Adaptive Memory Crossbar standard don't have a Slot 2 upgrade path.

Vendors' reactions to Corollary's announcement are further clouding the issue. NCR is reportedly dumping its OctaSCALE architecture in favor of Corollary's design. Even coalition member HP is reportedly considering the Corollary design for its future servers, a prospect that does little to instill confidence in the coalition's solutions.

Corollary's announcement and vendors' reactions are causing some IS planners to reconsider their 8-way purchasing decisions. Should they buy the coalition's solutions because they are currently available, even though they risk getting stuck using Socket 8 and Pentium Pro processor components? Or should they wait for the release of Corollary's Slot 2 solution this fall?

Given the disappointing scalability of existing 8-way designs (according to the Transaction Processing Performance Council--TPC, they are only 46 percent better than top-end 4-way boxes) and the incredible pull that Intel wields within the industry, the smart move might be to avoid the coalition's 8-way platforms. Without a Slot 2 upgrade path, these 8-way designs will find themselves outclassed by next-generation Corollary-based implementations. Unless enterprises absolutely need that extra 46 percent throughput (and are willing to pay handsomely to get it), IS planners would do well to wait for systems based on the Intel-sponsored standard.
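
To see why that 46 percent figure disappoints, a quick back-of-the-envelope calculation helps; the 4-way baseline below is a hypothetical placeholder, not a published TPC result.

```python
# Rough scaling check: doubling the CPU count "should" double throughput.
four_way_tpm = 10_000                  # assumed throughput of a top-end 4-way box
eight_way_tpm = four_way_tpm * 1.46    # "only 46 percent better" per the TPC comparison

scaling_efficiency = (eight_way_tpm / four_way_tpm) / 2.0
print(f"8-way throughput: {eight_way_tpm:,.0f} transactions per minute")
print(f"Efficiency versus perfect 2x scaling: {scaling_efficiency:.0%}")  # 73%
```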

Exchange 5.5's Dynamic Buffer Allocation Provides a Worthwhile Tradeoff
If you browse through the Internet mailing list devoted to Microsoft Exchange (email [email protected] with the word subscribe in the body of the message), you'll see many notes from people worried that Exchange 5.5 is using all available memory on their servers. Although Exchange 5.5 uses more memory than previous versions, administrators usually attain better server performance with it. Here's why.

The Information Store is the heart of Exchange. It is a multithreaded process implemented as a single executable (store.exe), which runs as a Windows NT service. As more users connect to the Information Store, the number of threads increases, which causes memory demand to increase. You can adjust the Information Store's system parameters to reflect the expected demand.
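
The sketch below is not Exchange code; it simply illustrates why a thread-per-connection service's memory footprint tracks its user count. The per-session buffer size is an arbitrary stand-in.

```python
import threading

PER_SESSION_BUFFER_KB = 64    # arbitrary stand-in for per-connection working memory

def session_worker(stop_event: threading.Event) -> None:
    buffer = bytearray(PER_SESSION_BUFFER_KB * 1024)   # memory held for this session
    stop_event.wait()                                  # pretend to service the user
    del buffer                                         # released when the session ends

stop = threading.Event()
sessions = [threading.Thread(target=session_worker, args=(stop,)) for _ in range(200)]
for thread in sessions:
    thread.start()
print(f"{len(sessions)} sessions -> ~{len(sessions) * PER_SESSION_BUFFER_KB} KB of session buffers")
stop.set()
for thread in sessions:
    thread.join()
```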

Before Exchange 5.5, you used the Performance Optimizer to make this adjustment. However, the Performance Optimizer sets the system parameters based on a static snapshot of system load combined with some historical information. If load increases or decreases dramatically after you run the Performance Optimizer, the system is left coping with settings tuned for the old load. As load increases, additional memory requests result in excessive paging; as load decreases, Exchange does not release its memory reserve to accommodate other applications, which also results in excessive paging.

Exchange 5.5's Dynamic Buffer Allocation feature takes Information Store adjustments to a higher level. This self-tuning capability ensures that store.exe uses the appropriate amount of memory at all times, taking into account the relative need for memory for other active processes in the system. Dynamic Buffer Allocation continually measures user load against system load. As load increases, Exchange requests more memory from NT; as load decreases, Exchange releases memory, making it available to other applications.
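
Conceptually, the feature behaves like the feedback loop sketched below. This is not Exchange's actual algorithm or any documented API--just an illustration of self-tuning: grow the buffer pool while demand is high and the system has memory to spare, and shrink it when demand falls or other processes need the memory back. All thresholds are invented for the example.

```python
class DynamicBufferPool:
    """Illustrative self-tuning buffer pool (not Exchange's implementation)."""

    def __init__(self, min_mb=64, max_mb=1024, step_mb=16):
        self.size_mb = min_mb
        self.min_mb, self.max_mb, self.step_mb = min_mb, max_mb, step_mb

    def retune(self, own_load, free_system_mb):
        # Grow while our own load is high and the system still has headroom...
        if own_load > 0.8 and free_system_mb > 256:
            self.size_mb = min(self.size_mb + self.step_mb, self.max_mb)
        # ...and hand memory back when load drops or the system is squeezed.
        elif own_load < 0.4 or free_system_mb < 64:
            self.size_mb = max(self.size_mb - self.step_mb, self.min_mb)

pool = DynamicBufferPool()
for load, free_mb in [(0.9, 512), (0.95, 480), (0.3, 300), (0.2, 700)]:
    pool.retune(load, free_mb)
    print(f"load={load:.2f} free={free_mb}MB -> buffer pool {pool.size_mb}MB")
```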

To free up memory for Exchange, Dynamic Buffer Allocation causes NT to page out parts of the GUI subsystem. As a result, the server can appear to be unresponsive the next time a user accesses the GUI because NT must page the interface back into memory before NT can display the GUI. This unresponsiveness is most obvious when the Information Store databases or transaction logs are on the same disks as the page file. In this case, you're trading a slight slowdown in GUI responsiveness for better memory utilization and better overall system performance. (However, a significant GUI slowdown is a good indication that the server is under severe load and is a candidate for hardware upgrade or replacement.)

Dynamic Buffer Allocation is not a fix for servers that do not have enough memory. Rather, Dynamic Buffer Allocation helps reduce paging and thus increases the performance of many servers, especially those that manage multiple high-demand applications, such as Exchange, SQL Server, and Systems Management Server (SMS).

The imminent release of Windows 98, coupled with ongoing delays in the Windows NT 5.0 development cycle, has left many enterprise IS planners with tough decisions. Should they upgrade their companies' client base to Windows 98 or wait for NT 5.0? Is NT 4.0 a viable alternative? Or should they pursue the path of least resistance and simply wait to make a decision?

Microsoft has clearly expressed where it stands on the issue: NT is the future of Microsoft client computing, and Win98 is a dead end. Microsoft representatives repeatedly made this point to attendees of the Windows Hardware Engineering Conference (WinHEC) in Orlando, Florida. Bill Gates and company demonstrated a variety of cutting-edge hardware technologies--including Universal Serial Bus (USB) and IEEE Std 1394-1995 (i.e., the FireWire standard)--all running under a slick new build of NT Workstation 5.0. (For more information about USB and FireWire, see the news story "Microsoft Gets Under the PC's Hood," page 41.)

Still, NT 5.0 is a long way off. Current estimates place the final ship date in the second quarter of 1999. That estimate doesn't include the inevitable phase in which you wait for others to find bugs so that Microsoft can fix them with the first service pack. If IS organizations want a relatively bug-free NT 5.0, they will have to wait more than a year and a half.

Another factor to consider in choosing a client operating system (OS) is that major first- and second-tier hardware vendors (such as Compaq Computer, HP, and Micron Electronics) are preinstalling NT Workstation 4.0 on midrange Pentium- and Pentium II-class PCs. The benefits of purchasing such NT clients are numerous. IS planners receive a reliable and manageable system with certified device drivers and single-source support from vendor engineers who understand NT configuration and maintenance.

Although these benefits are important, they aren't the primary impetus for purchasing NT clients. The ability to leverage new Microsoft client technologies in the future is the main reason why many IS planners are implementing NT systems in their companies today.

These IS planners believe that, by adopting NT Workstation 4.0 now, their companies will be in a better position to migrate directly to NT 5.0 when Microsoft finally releases it. Most of the migration's groundwork will have already been laid, including the specification of NT-compatible hardware and software configurations as corporate standards. Thus, the companies can quickly and easily adopt key NT 5.0 client technologies, such as IntelliMirror, without the trauma of a major OS platform shift.

Although some IS planners are rolling out NT Workstation 4.0 to prepare for the NT 5.0 migration, the Year 2000 problem is halting many planners' migration schemes. Many companies simply don't have the manpower or financial resources to tackle both a mass migration from Windows 3.x or Win9x and the Year 2000 certification and verification process. For these companies, staying put is perhaps the safest course.

SQL Server 7.0 Aims for the Enterprise
The SQL Server development team has been busy over the past 2 years. At a Microsoft briefing, SQL Server product managers previewed the major features of the long-awaited 7.0 release of SQL Server. The development team has made many significant changes. Some of these changes--such as improvements in SQL Server 7.0's scalability, basic database architecture, and administration--clearly signal Microsoft's desire to grab a bigger piece of the enterprise database pie.

To better support high-end scalability, the team increased SQL Server's maximum database size from 1TB in version 6.5 to 1,048,516TB in version 7.0. SQL Server 7.0 can hold more than 2 billion objects per database. This improved support for very large databases (VLDBs) will help make Microsoft a viable contender in the enterprise database market.

This VLDB support wouldn't be possible without significant enhancements in SQL Server's basic database architecture. The development team significantly increased the database's maximum row size (from 1962 bytes in version 6.5 to 8060 bytes in version 7.0), maximum number of columns per table (from 256 columns to 1024 columns), and maximum size of character and binary columns (from 255 bytes to 8000 bytes). In conjunction with these and other basic capacity enhancements, the team redesigned SQL Server 7.0's query engine. The engine now supports single- and multiprocessor parallel query execution and the use of multiple indexes in queries.
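
The payoff of parallel query execution can be illustrated generically (this is not SQL Server internals): partition a scan across worker threads and merge the partial results. A real database engine spreads the work across processors; in CPython the global interpreter lock limits true parallelism, so treat this purely as a sketch of the idea.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical table of (order_id, amount) rows.
rows = [(i, i % 97) for i in range(1_000_000)]

def scan_partition(partition):
    """Scan one slice of the table and return a partial aggregate."""
    return sum(amount for _, amount in partition if amount > 50)

def parallel_scan(table, workers=4):
    chunk = len(table) // workers
    partitions = [table[i * chunk:(i + 1) * chunk] for i in range(workers - 1)]
    partitions.append(table[(workers - 1) * chunk:])   # last worker takes the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(scan_partition, partitions))

print(parallel_scan(rows) == scan_partition(rows))     # same answer, split across workers
```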

New backup capabilities will also help make SQL Server a viable enterprise-level contender. Version 7.0 supports incremental backups, concurrent backups (i.e., backups that run while users update the database), and parallel backups to multiple tape drives. Microsoft's in-house tests showed that running a concurrent backup caused only a 5 percent drop in throughput.

Another improvement that enterprises will benefit from is SQL Server 7.0's support for row-level locking. SQL Server 6.5 supported only page-level locking, in which each database lock restricted all users from updating any data on the same page. SQL Server 7.0 uses a predefined cost index to dynamically determine the locking level at runtime. Matching the locking mechanism to the type of application can result in significant performance benefits.
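
The idea of picking a lock level from a cost estimate can be sketched generically (the costs below are invented, not SQL Server's actual cost model): take row locks when a statement touches few rows, and escalate to a coarser lock when tracking thousands of individual locks would cost more than locking the larger unit.

```python
ROWS_PER_PAGE = 80      # assumed page fill; real values depend on row size
ROW_LOCK_COST = 1       # invented relative bookkeeping costs
PAGE_LOCK_COST = 12
TABLE_LOCK_COST = 400

def choose_lock_level(estimated_rows, table_rows):
    """Pick the cheapest locking granularity for an estimated row count."""
    pages_touched = max(1, estimated_rows // ROWS_PER_PAGE)
    row_cost = estimated_rows * ROW_LOCK_COST
    page_cost = pages_touched * PAGE_LOCK_COST
    if estimated_rows >= table_rows or TABLE_LOCK_COST <= min(row_cost, page_cost):
        return "table"
    return "page" if page_cost < row_cost else "row"

print(choose_lock_level(10, 1_000_000))       # "row": fine-grained locks stay cheap
print(choose_lock_level(100_000, 1_000_000))  # "table": cheaper than holding 100,000 row locks
```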

Enterprise administrators will like SQL Server 7.0's enhanced administration tools. The development team transformed SQL Server 6.5's SQL Enterprise Manager into a Microsoft Management Console (MMC) snap-in. With MMC, administrators can use one console to manage all Microsoft BackOffice products. Administrators can't use MMC to manage SQL Server 6.5 servers, but MMC can coexist with SQL Server 6.5's SQL Enterprise Manager.

One of the best improvements in SQL Server administration is what's no longer present: predefined chunks of file space, or devices. Instead of using devices, SQL Server 7.0 stores each database in its own operating system files. The use of separate files lets SQL Server's disk consumption increase and decrease dynamically with demand, a process Microsoft refers to as dynamic space management.
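
The autogrow idea behind dropping devices can be sketched generically (this is not SQL Server's file manager): rather than carving space out of a fixed, preallocated device, the data file is simply extended in increments whenever a write would run past its current size.

```python
import os

GROWTH_INCREMENT = 1024 * 1024      # grow the data file 1MB at a time (illustrative)

class AutoGrowFile:
    """Data file that grows on demand instead of living in a fixed device."""

    def __init__(self, path, initial_size=GROWTH_INCREMENT):
        self.path, self.capacity, self.used = path, initial_size, 0
        with open(path, "wb") as f:
            f.truncate(initial_size)                 # reserve the initial extent

    def append(self, record: bytes) -> None:
        while self.used + len(record) > self.capacity:
            self.capacity += GROWTH_INCREMENT        # autogrow: extend by one increment
        with open(self.path, "r+b") as f:
            f.truncate(self.capacity)                # grow the file on disk
            f.seek(self.used)
            f.write(record)
        self.used += len(record)

db = AutoGrowFile("demo_data.db")
db.append(b"x" * (1536 * 1024))                      # a 1.5MB write forces one growth step
print(os.path.getsize("demo_data.db"))               # 2097152 bytes (grew from 1MB to 2MB)
```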

Proving that free lunches don't exist, SQL Server 7.0 has one major gotcha. Its new internal database format requires that you reload all your databases. However, Microsoft recognizes this problem and provides a high-performance migration wizard that offers several different modes of database conversion.

SQL Server 7.0 has been in beta 2 testing since January 1998. Microsoft plans to release SQL Server 7.0 to the general public during the second half of this year.

"The Adaptive Memory Crossbar technology was becoming the de facto standard--that is, until an Intel subsidiary, Corollary, jumped in and muddied the NT SMP server waters."

NT in Hackers' Crosshairs
A rash of highly publicized hacker attacks on Windows NT systems has left many IS planners questioning the integrity of the operating system (OS) as a secure computing platform. Failures at NASA, in particular, have sparked concerns and rumors about possible security holes in NT's TCP/IP (tcpip.sys) implementation.

Fortunately, the truth is less sensational than the industry tabloid headlines. The attack that crashed NASA's systems (and many other NT and Windows 95 systems at various government and educational sites) was not new. In fact, Microsoft had released a hotfix for the breach (a modified teardrop attack) and posted public notices on its Web site's security page (http://www.microsoft.com/security) a month earlier.

If NASA had implemented the hotfix, the attack would have been thwarted. To make matters worse, NASA didn't protect its systems with firewalls, so the systems were directly exposed to the Internet. NASA's systems were sitting ducks in the hackers' crosshairs.

Although NASA's security policies weren't sound, this incident points to a serious flaw in Microsoft's support mechanism: Most organizations don't have the time or wherewithal to keep track of every new patch and hotfix coming out of Redmond. With Microsoft delaying the release of Service Pack 4 (SP4) because of a lack of developer resources, the number of minor hotfixes published on the Microsoft FTP site (ftp://ftp.microsoft.com) is growing at an alarming rate. The site listed more than 30 post-SP3 hotfixes at press time.
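
Keeping score starts with knowing what each server already has. The sketch below assumes the NT 4.0 convention of recording installed hotfixes as subkeys under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Hotfix (one subkey per Knowledge Base article); adjust the path if your hotfixes register elsewhere.

```python
import winreg    # Windows-only standard library module

HOTFIX_KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Hotfix"

def installed_hotfixes():
    """Enumerate the hotfix subkeys recorded on this machine."""
    fixes = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HOTFIX_KEY) as key:
            index = 0
            while True:
                try:
                    fixes.append(winreg.EnumKey(key, index))
                    index += 1
                except OSError:          # no more subkeys
                    break
    except FileNotFoundError:            # key absent: no hotfixes recorded
        pass
    return fixes

for fix in installed_hotfixes():
    print(fix)
```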

Expecting even the most diligent IS organizations to keep up with the numerous hotfixes is unrealistic. Microsoft tries to alert users to the more serious bugs by posting entries to its Knowledge Base and messages on the NTBugtraq mailing list, but neither information source is high profile. The NASA incident illustrates that a show-stopping problem can go unnoticed. Microsoft needs a more visible information source to disseminate critical NT hotfixes.

Until Microsoft creates such a source, you must visit Microsoft's security page and FTP site regularly. And, if you can stand wading through lots of spam, Internet newsgroups are a good source of current information on NT security issues.

Microsoft Gets Under the PC's Hood
Microsoft wants to get under the hood of your PC. This software giant is on a mission to redefine the PC architecture. Specifically, Microsoft wants to remove many of the legacy devices that it believes are holding back progress in the industry.

Microsoft started this mission with the debut of the PC99 specification at the Windows Hardware Engineering Conference (WinHEC) in Orlando, Florida. Under PC99, Universal Serial Bus (USB) and IEEE Std 1394-1995 (i.e., the FireWire standard) will be the primary interfaces to peripherals. Gone are the serial and parallel ports of yesteryear, along with another piece of legacy PC baggage: the ISA peripheral bus. According to Microsoft, the industry needs to dump the ISA bus before the Plug and Play (PnP) configuration architecture can achieve its true potential. In Microsoft's view, ISA PnP never worked properly and is holding everyone back from autoconfiguration nirvana.

But discarding ISA and replacing it with an all-PCI platform could be a painful proposition for many customers. PCI-based multimedia cards are scarce, and the prospect of losing backward compatibility with the bulk of legacy devices will make PC99 a hard sell. But even more painful will be Microsoft's move to a higher-bandwidth system bus. Microsoft is targeting the current PCI system bus for a major overhaul. Microsoft's motivation is IEEE-1394. "The PCI bus can't deliver the necessary bandwidth to 1394 devices," said Carl Stork, general manager for Windows hardware platforms at Microsoft. "It's out of gas." However, a new bus design will likely require at least 2 years to develop. In the meantime, customers will have to make do with the current PCI system bus, which is scheduled for an upgrade to 100MHz as part of the Intel Slot 2 400MHz Pentium II architecture.

No doubt about it, PC99 is an ambitious road map for PC architecture. PC99's success or failure will hinge on Microsoft's ability to organize industry support within the hardware community. This situation raises the question: Does Microsoft really have the clout to drive PC hardware standards?

A few years back, the answer would have been unequivocally no. PC hardware simply advances too quickly for a software company, even a dominant one without any competition in the OS space, to dictate future directions. You only need to recall the ill-fated multimedia PC specification to get a feel for Microsoft's track record in this area.

Microsoft might have a chance to succeed because PC99 is sufficiently forward thinking. Most hardware vendors agree with Microsoft that USB and FireWire are the proper paths to future peripheral connectivity. But dumping ISA and the classic I/O interfaces might be harder for hardware vendors to accept, so you'll likely see a mixture of PC99 concepts and legacy components in the future.
