
NT News Analysis - 01 May 1998

NT's Poor Scalability Performance Might Not Be Caused by an Inability to Scale

Conventional wisdom has long held that Windows NT doesn't scale as well as UNIX. In benchmark after benchmark, multiprocessor NT systems have lagged behind their multiprocessor UNIX counterparts. Observers have offered many theories to explain NT's poor multiprocessor showing against UNIX, including the relative immaturity of the NT code base and the generic nature of the x86 hardware platform.

However, NT's poor scalability performance might result from the level of abstraction in the operating system (OS) rather than from an inherent inability to scale. "NT is at a disadvantage against UNIX because Microsoft must maintain a single binary image for a broad range of platforms," said Kitrick Sheets, the chief technical officer at MCSB Technology, maker of AutoPilot P/SA (Performance/Scalability Accelerator). In contrast, UNIX hardware vendors have more control over their particular hardware implementations and can tune the OS in ways that NT hardware vendors can't, such as modifying the memory-management and thread-scheduling functions.

This analysis could go a long way toward explaining the disappointing scalability numbers reported by vendors of early 8-way NT servers. Sheets gave the example of one vendor's Transaction Processing Performance Council Benchmark C (TPC-C) results for its 8-way symmetric multiprocessing (SMP) system. "The vendor's problem was really hardware related. The 8-way SMP system had a memory partition scheme that separated the two processor quads, a scheme that NT wasn't aware of. Without custom tuning the NT scheduler to maintain proper processor affinity, there was no way that SMP system was going to achieve linear results." (Processor affinity is the association between a processor and a thread.)
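
NT does give software a handle on this problem through the Win32 API, which is the hook that affinity-management tools build on. The fragment below is a minimal sketch, not anything MCSB's product actually ships: it assumes a hypothetical 8-way box whose first quad is CPUs 0 through 3 and simply pins the calling process and thread there so their memory traffic stays on one side of the partition.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* CPUs 0 through 3 form the first quad in this hypothetical 8-way layout. */
    DWORD_PTR quadZeroMask = 0x0F;

    /* Confine the whole process, and then the current thread, to quad 0 so
       that its memory references stay within that quad's partition. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), quadZeroMask))
        printf("SetProcessAffinityMask failed: error %lu\n", GetLastError());

    if (SetThreadAffinityMask(GetCurrentThread(), quadZeroMask) == 0)
        printf("SetThreadAffinityMask failed: error %lu\n", GetLastError());
    else
        printf("Thread bound to CPUs 0 through 3\n");

    return 0;
}

A tool such as AutoPilot would, of course, compute and adjust such masks dynamically from the machine's real topology and load rather than hard-coding them.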

Products such as MCSB's AutoPilot can address this problem by continuously monitoring various environmental and quantitative factors within the NT Executive. The products then dynamically tune the OS's thread-scheduling and memory-management functions to compensate for idiosyncrasies in the underlying hardware (such as those in the vendor's 8-way design).

Such approaches can help vendors get more out of NT's architecture. MCSB points to commissioned benchmarks that show significant performance improvements from better thread and processor affinity management, especially in OEM configurations. What about scaling beyond 8-way? According to Sheets, "The situation is only going to get worse as vendors try to extend NT past eight CPUs."

This MCSB executive is not alone in his assessment. Most experts agree that, as SMP systems scale beyond four processors, system bus contention becomes a major problem. Not enough bus bandwidth exists to accommodate memory activity from all the CPUs. The problem is particularly acute in x86-based designs, in which the system bus often runs at less than 100MHz.
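
The arithmetic behind that claim is simple enough to sketch in a few lines of C. The 64-bit, 66MHz figures below are typical of the era's x86 servers rather than any specific product, so treat the output as an illustration of the trend, not a benchmark.

#include <stdio.h>

int main(void)
{
    const double busWidthBytes = 8.0;    /* 64-bit data bus */
    const double busClockMHz = 66.0;     /* a sub-100MHz system bus */
    const int cpuCounts[] = { 2, 4, 8 };
    const double peakMBps = busWidthBytes * busClockMHz;  /* about 528MBps total */
    int i;

    /* Every CPU on a shared bus competes for the same peak bandwidth. */
    for (i = 0; i < (int)(sizeof(cpuCounts) / sizeof(cpuCounts[0])); i++)
        printf("%d CPUs: roughly %.0fMBps of bus bandwidth apiece\n",
               cpuCounts[i], peakMBps / cpuCounts[i]);

    return 0;
}

At eight processors, each CPU is left with well under 100MBps of the shared budget, which is why designers turn to quad partitioning and faster interconnects.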

One solution is to design a faster, shorter, more efficient bus. But as Digital Equipment has discovered in its 8- and 12-way Alpha systems (which employ a true, single-bus SMP design), delivering such an implementation is expensive because it requires custom Application Specific Integrated Circuits (ASICs) and highly tolerant signaling materials.

The high cost associated with true-SMP scaling leads many analysts to believe that hybrid designs using interconnected 4-way quads are the wave of the future. In fact, such hybrid solutions are already in use at the ultra-high-end of the UNIX server market. Many technology leaders in the multiprocessing field, including Sequent Computer Systems and Data General, are basing their 16- and 32-way UNIX configurations on the radical nonuniform memory architecture (NUMA), a design that uses separate 4-way SMP system boards linked via a high-speed interconnect switch.

Microsoft has publicly voiced its opposition to NUMA-based designs, pointing to the difficulty that some companies have in adapting applications designed for SMP scaling to the disjointed, quad-based environment. Microsoft advocates the continued development of true-SMP designs and the use of clustering as alternatives to the more radical interconnect approach.

Many analysts agree, noting that Intel's forthcoming Deschutes architecture and vendors' development of high-speed, low-cost system bus architectures will help alleviate the SMP performance bottleneck. In the meantime, companies deploying current-generation 8-way designs need to be aware of the architectural compromises being made in the name of reducing production costs. In the case of systems based on Intel's Corollary and similar quad-based architectures, SMP isn't always SMP. Companies might need to invest in additional software performance technologies before they can achieve near-linear scalability.

Driver Drought Threatens Mobile NT
Customers who want to use Windows NT on mobile computing platforms are finding it hard to locate supporting device drivers. Although NT supports many Microsoft and third-party devices, some advanced combo cards are still Windows 95-specific. An example is 3Com's EtherLink III LAN +33.6 Modem PC Card. A quick search of the 3Com Web site at press time failed to yield an NT 4.0-compatible driver.

Users will find little relief in the current generation of third-party mobile computing add-ons from vendors such as Softex and SystemSoft. Once thought to be a promising option for notebook users, these products do little to address the NT driver shortage.

For example, consider the Softex PC Card Controller for NT, which requires you to use custom device drivers that are compatible with Softex's hot-swap and power-management features. Softex ships a limited set of generic drivers on the installation disks. However, these drivers address only a small subset of the myriad PC Card devices on the market.

Some customers have pointed to NT 5.0 as the ultimate solution for NT-based mobile computing. However, early experience with the NT 5.0 betas has not supported this prediction. NT 5.0's Plug and Play (PnP) support (an integral part of PC Card functionality) will require a new set of device drivers based on the Win32 Driver Model. Microsoft has made the controversial decision not to support the Advanced Power Management (APM) standard. Microsoft will instead mandate that NT 5.0-compatible notebook systems implement the newer Advanced Configuration and Power Interface (ACPI) architecture. Customers with APM-based systems, which represent the vast majority of NT-capable notebook computers, will be left out in the cold. Maybe customers still need those third-party APM and PC Card vendors after all.

Microsoft Tweaks Early Adopter and Service Pack Programs
Microsoft is expanding its Early Adopter Program and rethinking its Windows NT Service Pack program. Microsoft is revisiting these programs in an effort to have faster time-to-market cycles for new operating system (OS) releases.

The Early Adopter Program originated with the Exchange Server 5.0 beta program. The program has since been carried over to the NT 5.0 beta, with several sites already deploying the OS in a production setting. Microsoft has traditionally limited the program to a select group of its largest customers; however, the company is hinting at plans to expand the program in the near future. Under the current Early Adopter Program, Microsoft gives enterprise customers early access to key OS technologies and encourages them to deploy these releases into production environments. Microsoft then monitors the sites and works closely with the customers to isolate and correct potential problems. By exposing new products to real-world testing early in the development cycle, Microsoft gains valuable field data and the customers get to experience the new release firsthand.

In addition, Microsoft is hinting at plans to create a Service Pack team. This team would test and verify incremental NT updates to dramatically increase the speed with which Microsoft releases new Service Packs. Currently, Microsoft releases a new Service Pack, which includes various bug fixes and incremental upgrades to core NT services, every quarter. However, the latest NT Service Pack, SP4, has been delayed until the second quarter of 1998 because Microsoft redirected resources to the NT 5.0 project.

Many analysts point to the SP4 delay as justification for creating a Service Pack team. As of early March, Microsoft's NT FTP site listed 29 post-SP3 hotfixes. (Hotfixes are problem-specific patches released by Microsoft to address issues that arise between Service Packs. The FTP site is at ftp://ftp.microsoft.com.) According to analysts, if Microsoft had had a Service Pack team in place to oversee testing, Microsoft would have likely made SP4's original target date of December 1997.

However, before customers raise their glasses to the new Service Pack team, they might want to consider just what an accelerated release schedule will mean to them in terms of support costs. Many enterprise customers already complain that Microsoft is trying to do too much with the current Service Pack program. These customers point to the number of add-ons (new features that force systems administrators to treat each new Service Pack as an OS upgrade) that have been bundled into the latest releases.

You could argue that all the add-ons in SP1 through SP4 constitute an entire point upgrade to NT. This ongoing abuse of the Service Pack concept (Service Packs were supposed to be just maintenance releases) has left customers leery of any plans to improve the program. The increasing number of Microsoft applications relying on new functionality introduced with the latest Service Pack is not helping IS planners keep up.

Perhaps Microsoft is using the wrong justification (i.e., faster time-to-market cycles) to gain support for the new Service Pack team. First, Microsoft needs to convince customers that a dedicated testing team will mean better quality Service Packs, with fewer add-ons that might disable existing systems.

NT Now Coming to a Theater Near You
You might have heard the rumors regarding Windows NT's involvement with the motion picture "Titanic." Although the accounts vary widely, NT has made significant inroads into the digital media marketplace. Whether because of low cost, ease of use, or reliability, NT is winning the hearts and minds of digital authors.

Just ask Grant Boucher, lead systems architect at Station X Studios. This digital media company in Los Angeles, California, is betting the rendering farm on NT. I caught up with Grant recently and asked him about his firm's choice to go with an all-NT solution.

The digital effects (FX) industry has long been the domain of high-end UNIX boxes, yet your team has built a company on NT. What were the driving forces behind the decision to dump UNIX in favor of NT?

Reliability, low maintenance cost and systems administration overhead, performance, and price were the driving factors. We have battle-tested NT as an operating system (OS) for more than 3 years now and have found it to be 100 percent reliable in a production environment. That reliability has not been our experience with any UNIX flavor or derivative. NT used to be the revolution; now it is the standard. We have all the applications we need (and then some) available in robust implementations under NT.

Describe the lab architecture at Station X Studios. What constitutes a typical client/server configuration, and what are the core applications that drive the Station X machine?

Our hardware platform consists of Digital Equipment's Alpha workstations running at 500MHz to 600MHz using the 21164 CPU. Each system has 256MB to 512MB of RAM and a 3DPRO-based OpenGL graphics accelerator, such as AccelGraphics' AccelECLIPSE or Digital's 4D30T. In terms of software, we run NewTek's LightWave 3D, eyeon Software's Digital Fusion Post, Silicon Grail's Chalice, Adobe Systems' Photoshop (via Digital's FX!32), and Microsoft's Internet Explorer (IE), Outlook Express, Word, Excel, and Visual C++.

As a veteran of many UNIX-based projects, how would you rate NT's performance as an authoring and rendering platform compared with that of UNIX?

Two years ago, NT was a stretch from a hardware and software perspective. Now, NT products lead the market at all levels and in all categories.

What is the biggest remaining stumbling block for digital media companies as they evaluate the NT platform? Is it a lack of hardware, software, scalability, or tools?

It's the employees. Many employees grew up in a visual FX industry dominated by script-driven programmers who were trying to be artists. The possibility of using faster, easier, more productive ways of doing effects is thoroughly terrifying to them because nonprogrammers can create a majority of photo-real FX work. The inertia and fears of these employees will hold the rest of the industry back. As a result, NT will make its largest strides initially as a low-cost, high-performance rendering alternative.

What's next for Station X Studios? Should people be looking for the Designed-for-Windows-NT logo on the bottom of a hideous creature's foot or the wing of that hot new animated spacecraft?

In the past 2 months, our artists, supervisors, and programmers have pioneered several revolutionary industrywide technologies. And they all run under NT and Alpha. Our challenge to the rest of the visual effects industry is: Catch us if you can.

A New Kid on the OLAP Block
In what is undoubtedly a carefully orchestrated PR campaign, Microsoft has been teasing online analytical processing (OLAP) enthusiasts with details about its new OLAP server, code-named Plato, which will accompany SQL Server 7.0. Although Microsoft has yet to reveal how it will bundle Plato, what the price will be, or even the server's official name, OLAP enthusiasts, vendors, and industry analysts are already reacting to the upcoming release, which marks the company's entry into the OLAP market.

Because Microsoft's entry into any market always changes the competitive landscape, existing OLAP vendors have been trying to grab market share. For example, in January, Informix released a new version of its MetaCube ROLAP (relational online analytical processing) Option for the Informix Dynamic Server. And in February, IBM released its first OLAP product, DB2 OLAP Server 1.0, based on Arbor's Essbase 5.0.

Meanwhile, analysts have been speculating on the effect that Plato will have on the industry. For example, The OLAP Report (http://www.olapreport.com) predicts that Plato will transform the OLAP industry. "It will be distributed on a mammoth scale, at prices that are a fraction of those charged for current OLAP servers. It will have all the ease-of-use features that are expected in any Microsoft product. It will have unrivaled client-tool support and will probably be enthusiastically adopted by legions of Microsoft Solution Providers (MSPs) and specialist-application builders."

Microsoft released a Sphinx beta 2 version of Plato in January. (Sphinx is the codename for SQL Server 7.0.) The beta 2 server includes several wizards, including one that helps you construct an OLAP data store that can contain data from any OLE DB or Open Database Connectivity (ODBC) source. Another wizard analyzes a proposed model for sparsity to help you make decisions about the number of dimensions and precalculations to use. Because Plato is a true multidimensional database, you can conduct dynamic, multidimensional analyses of your enterprise data.
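
To see why that sparsity analysis matters, consider how quickly a cube's potential cell count outruns the data that actually fills it. The following quick calculation uses invented dimension sizes and an invented fact-table row count; it sketches the kind of density figure the wizard reports, not Microsoft's actual algorithm.

#include <stdio.h>

int main(void)
{
    /* Hypothetical cube: 1,000 products x 500 stores x 730 days. */
    const double dimensionSizes[] = { 1000.0, 500.0, 730.0 };
    const double factRows = 2500000.0;   /* rows actually present in the fact table */
    double totalCells = 1.0;
    double density;
    int i;

    /* The cube's potential size is the product of its dimension cardinalities. */
    for (i = 0; i < (int)(sizeof(dimensionSizes) / sizeof(dimensionSizes[0])); i++)
        totalCells *= dimensionSizes[i];

    density = factRows / totalCells;

    printf("Potential cells: %.0f\n", totalCells);
    printf("Density: %.2f%%  Sparsity: %.2f%%\n",
           density * 100.0, (1.0 - density) * 100.0);

    return 0;
}

With more than 99 percent of the cells empty in this example, blindly precalculating every aggregation would waste enormous amounts of storage, which is exactly the trade-off the wizard helps you weigh when choosing dimensions and precalculations.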

If the 1.0 release is as good as the beta 2 version, The OLAP Report's prediction might hold true: The Microsoft OLAP server will transform the OLAP industry. (For an exclusive report on OLAP and Microsoft's OLAP server, see Windows NT Magazine's Web site at http://www.winntmag.com.)

Microsoft Targets Technical Market
At an analyst briefing in Redmond, Washington, Microsoft executives unveiled the firm's most ambitious plan yet to penetrate the technical workstation market: the Migration Assistance Program (MAP). Through a combination of new UNIX integration utilities and an aggressive independent software vendor (ISV) recruiting program, Microsoft is hoping to convince developers of high-end technical applications (e.g., 3-D modeling) to switch to Windows NT Workstation as their primary delivery platform.

Vendors such as Silicon Graphics and Sun Microsystems have traditionally dominated the technical workstation environment with their proprietary UNIX-based hardware and software solutions. Microsoft plans to provide better UNIX integration capabilities for NT to encourage high-end developers to more readily accept the NT platform.

At the heart of this initiative is the new Windows NT Services for UNIX, a combination Microsoft and third-party enhancement package that adds NFS and Telnet client/server support to existing NT systems. The package lets users perform a single logon to both UNIX and Microsoft network environments, a feature UNIX-NT shops have sought for years.

Pricing for the new package was unavailable at press time; however, analysts agree that even an aggressive price might not be enough to win diehard UNIX users. Microsoft's biggest hurdle is lack of end-user experience: Most high-end software engineers use UNIX-based technical applications, and many of these engineers are reluctant to learn a new platform.

For the number-crunchers, however, Microsoft's message of high-end workstation functionality in a PC-based, Office- or BackOffice-compatible form is welcome. UNIX workstations have long been troublesome for mainstream network administrators. Windows NT Services for UNIX and MAP could provide a way for administrators to wean UNIX gurus off their expensive, proprietary RISC boxes.

Third-party vendors of UNIX-NT integration products (such as Hummingbird Software and Intergraph) have much to lose if Microsoft successfully enters this market. They will have to find a new way to differentiate their products.

IBM Rehashes Eagle Initiative
In a rehash of its stalled Software Servers (i.e., Eagle) initiative, IBM unveiled a new lineup of three BackOffice-style application suites. Eagle, a strategy of combining various network infrastructure, database, and development resources into one package, was IBM's ill-fated attempt to challenge Microsoft on its own turf: Windows NT Server. And IBM's latest attempt is faring no better. Initial customer reaction has been tepid, and the NT enterprise community has been skeptical.

IBM has targeted the small business suite (code-named Emerald), departmental suite (code-named Rodin), and enterprise suite (code-named Bartholdi) at different segments of the NT Server market. IBM has designed Emerald for small to midsized businesses. This suite includes the Domino Intranet Starter Pack, DB2, several customized business templates, and a fax server. The suite comes bundled with a simplified installation routine that lets Value Added Resellers (VARs) create simple turnkey client/server solutions.

Rodin builds on Emerald, adding a beefed-up DB2 Workgroup product, eNetwork Communications Server software, ADSTAR Distributed Storage Manager software, and Tivoli Management Environment. IBM has designed Rodin for organizations with 25 to 100 users.

Bartholdi further extends Rodin. Bartholdi includes IBM Transaction Series, IBM MQSeries messaging middleware, and an NT-to-host DB2 connector.

IBM has committed to full NT 5.0 support before the product ships in early 1999. This support includes integration with the forthcoming Active Directory (AD). "We are part of the first wave of testing and are working to design and be totally available when NT 5.0 comes out," said Jon Prial, director of NT solutions marketing at IBM.

IBM's commitment to NT 5.0 and introduction of the three new suites demonstrate that the company is making a strong play for a piece of the NT Server market. IBM's ultimate success, however, will likely depend on factors outside its control. For example, the forthcoming SQL Server 7.0 is a watershed release for Microsoft. If Microsoft delivers a stable product on time, IBM might have trouble convincing enterprise NT customers to choose a non-Microsoft suite.

Regardless of IBM's ultimate success, the fact that it is fielding competitive suites is good news for NT customers. Competition is what drives innovation, and from the looks of IBM's new suite offerings, Microsoft's BackOffice developers will be facing stiff competition.

Windows for Mainframes?
Mainframe vendor Hitachi sent a tremor through the host systems marketplace when company officials announced a controversial plan to develop future mainframe designs around Windows NT and Intel's forthcoming IA-64 CPU architecture (code-named Merced). As part of a new development alliance with Digital Equipment, Hitachi will work to develop advanced clustering solutions based on symmetric multiprocessing (SMP) systems incorporating Intel's IA-64 or Digital's Alpha processors. These virtual mainframes will help Digital and Hitachi realize their goal of moving enterprise IS shops away from traditional big iron environments dominated by IBM's S/390 series.

The new alliance, however, raises some questions. For example, how will Digital executives reconcile their firm's clustering initiatives with those of Compaq, Digital's new parent company? Compaq already has a clustering architecture, E2000, which is based on the ServerNet technology Compaq acquired from Tandem Computers.

In terms of reliability, how will Hitachi convince enterprise customers that NT-powered clusters are ready to replace mainframes? Microsoft is still struggling to deliver UNIX-caliber scalability and availability on the NT platform and is a long way from delivering anything close to the 99.99 percent uptime that enterprise-level shops expect. Until it does, the Hitachi-Digital alliance is nothing more than smoke on the horizon.

AEC II Begins to Take Shape
Microsoft's and Digital Equipment's Alliance for Enterprise Computing (AEC II) is a much more productive partnership than its predecessor, AEC I. (For information about AEC I and II, see Craig Barth, "Digital and Microsoft Renew Their Vows," April 1998.) The first fruits of AEC II are already hitting the market: Digital's DIGITAL Visual Batch, DIGITAL Commander, and AltaVista Process Flow packages. These three packages might give a much needed boost to Digital's bottom line. The company's lagging Alpha sales and muddled x86 strategy have left it scrambling to contain the red ink.

These packages will also benefit Microsoft by adding mainframe-level functionality to NT. Running atop Microsoft Cluster Server (MSCS), DIGITAL Visual Batch provides mainframe-style batch process scheduling and execution management for NT. DIGITAL Commander, a Microsoft Exchange add-on, lets systems administrators direct server alerts and error messages to a variety of different systems management consoles. AltaVista Process Flow is part of a much larger business process reengineering effort that centers on a modular-workflow architecture designed to maximize application code reuse. All three packages will go a long way toward proving the NT platform's viability as a serious enterprise solution.
