
Making the Case for Server Consolidation

Is a server consolidation project in your network's future?

Remember when distributed computing was a hot new IT industry term? The distributed computing model dictates that an organization spread its network resources among many PC servers that run affordable network OS software (e.g., Windows NT, UNIX) instead of relying on one expensive, high-powered midrange or mainframe system. The idea is that ubiquitous, PC-based network servers reduce hardware costs and increase overall network availability. Not long ago, Microsoft championed NT as the premier tool in the move toward distributed computing. Microsoft designed NT, which provides traditional file and print services and strong application-serving capabilities, for use in a distributed environment. (And let's not forget that a distributed computing model requires more server licenses, a fact that never escapes Microsoft.) However, for several reasons and with surprising results, the industry is starting to use consolidation tools (which are available for a variety of budgets) to move away from the distributed model.

Back to the Future
Until recently, distributed computing's premises remained largely unchallenged. However, computing philosophies, like fashion, seem to go in and out of style. The tide of industry events has slowly turned, and several once-abandoned concepts have come back into vogue. Take the industry's renewed interest in thin-client computing, in which the major contenders are Citrix MetaFrame, Microsoft Windows 2000 Server Terminal Services, and Windows NT Server 4.0, Terminal Server Edition (WTS). The thin-client computing model doesn't contradict the principles of distributed computing (especially when you consider that Citrix designed MetaFrame to support load balancing and server farms). However, thin-client computing requires more powerful—and more expensive—servers than traditional PC resource computing, and this requirement makes a thin-client server less of a commodity and more of a major capital investment. (For more information about thin-client computing, see Christa Anderson, "Windows Terminals vs. Network Computers," May 1999.)

The second retro-trend I've noticed of late is the IT industry's increasing shift away from the "servers everywhere" idea that has dominated the industry during the past 5 years. The IT industry is discovering that a distributed arrangement isn't perfect. First on the list of troubles: Purchasing many medium-powered servers (even when the servers are PC-based) doesn't automatically translate to huge hardware savings. Second, all servers need to be fault-resilient—even in a distributed computing environment. To implement an acceptable level of fault tolerance, you must implement redundancy measures on various subsystems (e.g., power supplies, disks, network connections) independently at each server. The bottom line is that this implementation often requires a pricey duplication of hardware.

As if the hardware costs weren't enough, most software companies license applications (e.g., database applications, backup and antivirus software) on a per-server basis. As a result, more servers mean increased resource costs. Several other major disadvantages to distributed computing exist: When you add servers, you increase potential single points of network failure and wasted capacity on each server. Increased system complexity leads to an increase in IT staff technician-hours required to manage and maintain equipment. And many enterprise customers with distributed environments haven't achieved the levels of scalability for which the organizations had hoped.

Causes for Consolidation
Many companies have started waking up to these problems. Recent studies by various industry research groups (including GartnerGroup, Forrester, and Computer Economics Inc. [CEI]) provide some eye-opening figures about the costs of server ownership and the benefits of reducing the number of servers in your organization. One GartnerGroup study reports that increasing the average number of users per server from 75 to 300 reduced server total cost of ownership (TCO) by a whopping 44 percent and reduced LAN TCO by 15 percent. (For more details, see "Lowering TCO with NT: Fact or Fiction?" at http://www.info-edge.com/tco2.htm.) You might be curious about how often consolidation projects truly meet these lower TCO goals. CEI addresses this question in a survey of businesses that recently completed server consolidation projects. The survey asked each business about the rate of return for that company's consolidation project. An overwhelming 69 percent of responding businesses stated that the company experienced a positive rate of return after completing the project, 24 percent stated that the company broke even, and only 7 percent cited a negative rate of return.

As a result of such server TCO revelations, server consolidation is experiencing a massive surge of popularity. However, cost reduction isn't the only reason that your organization might want to consider a server consolidation project. The following list provides additional reasons to consolidate:

  • Reduced network complexity—Fewer servers translate to a less complex network architecture that you can manage more easily in the long term.
  • Reduced application management and deployment burdens—With fewer servers, you can deploy crucial applications or updates across the enterprise more quickly and can manage the overall software environment more easily.
  • Centralized support staff—Although server consolidation isn't an excuse to cut IT jobs, server consolidation lets organizations centralize and refocus IT personnel away from mundane server-maintenance tasks and toward more important jobs that contribute to the company's bottom line.
  • Reorganization following an acquisition or merger—Organizational changes (e.g., two companies merge and consolidate similar departments) often require you to merge server resources.
  • Server retirement—When the time comes to put server hardware out to pasture, you might consider consolidating the retired system's contents with an existing server rather than replacing the machine one-to-one. A similar situation might be the end of an equipment lease because the organization still needs the server's resources but will no longer have access to the leased equipment.

The Windows 2000 Factor
Windows 2000 (Win2K) is also poised to play a significant role in further fueling the present server consolidation trend. As you might have heard, Win2K Active Directory (AD) lets organizations implement flatter domain structures than were practical or possible under NT 4.0. Because Win2K's domain architecture doesn't inherit NT's domain-size limits, replication-bandwidth demands, administrative model, or per-domain account database, many organizations are flattening and consolidating their NT domains. (For more information about domain flattening and consolidation, see Robert Pierce, "Flatten Your NT Domain Structure," September 1999.) Most domain restructuring projects involve either consolidating account and resource domains or collapsing resource domains into account domains. In either case, numerous opportunities exist to simultaneously consolidate servers. For example, AD's scalable and efficient design makes many existing site domain controllers unnecessary in a Win2K network. When these controllers also contain local resources, the controllers are good candidates for consolidation with other site servers. Keep in mind that Win2K has significantly more stringent hardware requirements than NT. As a result, introducing Win2K within an organization will likely render many existing systems obsolete. To deal with this situation, organizations will have to replace existing servers or consolidate their resources onto higher-powered machines. In the long term, the latter choice might well represent the most cost-effective solution for many companies.

Remember that from a reliability standpoint, Win2K is the first version of NT that makes server consolidation feasible. Although Microsoft won't readily admit it, many of us who have toiled with NT over the years know that the need for fault tolerance is one of the reasons that NT's multiserver distributed computing model is a good idea. In many cases, NT hasn't achieved the level of reliability required in some network roles. For many organizations, using multiple NT servers in various network roles amounts to hedging bets—when one server goes down, another server can always take its place. However, if Win2K lives up to the promise of its newfound reliability (and experience with the product thus far seems to indicate that it will), then the idea of merging server resources becomes far more practical.

A final note about Win2K and server consolidations: Many organizations are already loudly complaining that, despite Win2K's substantially improved management and reliability features, the costs associated with a Win2K migration are too high. Software licensing costs and the aforementioned need to enhance or replace existing equipment to meet Win2K hardware requirements make demonstrating a positive return on investment (ROI) for a Win2K migration difficult or impossible. NT works fairly well for these organizations (typically thanks to third-party management products and add-ons), and the high costs that a Win2K upgrade can incur don't make the prospect of migrating an enticing one. If this situation sounds familiar, the server consolidation solution might help you convince your organization to approve a Win2K migration. By using server consolidations as part of Win2K migration projects, many organizations will be able to reverse negative ROI projections and turn Win2K upgrades into win-win scenarios. If you're a frustrated IT manager who longs for the enhanced management capabilities and greater reliability of Win2K, this approach might be your ace in the hole.

Consolidation Tools and Techniques
The idea of consolidating system roles and resources certainly isn't new. However, NT organizations haven't had a great deal of incentive to combine server resources. The rewards simply haven't seemed worth the risks associated with potential server downtime. Also, the logistics involved in facilitating a consolidation project can be daunting. In addition to worrying about moving data and devices, you might also need to move NT networking services (e.g., WINS, DHCP, RAS), Microsoft BackOffice, or third-party applications. If you've ever embarked upon this kind of project, you've noticed the scarcity of available tools to migrate data from one server to another. Although plenty of tools copy data between points A and B, these tools don't deal with problems (e.g., moving shares between servers, WAN bandwidth usage during transfers) that are especially important to large enterprises seeking to consolidate servers.

As a result of the relative unavailability of migration tools, most companies home-grow consolidation projects by using built-in tools, custom scripting, and lots of IT elbow grease. However, this difficult and time-consuming process requires tedious attention to every aspect of the relocation. To illustrate, I'll briefly describe some of the problems that you need to consider when planning a consolidation project.

Choice of consolidation candidates. The first step in any consolidation project is selecting eligible machines. Typically, this selection involves analyzing each server's capacity usage and gathering information about each system's long-term trends. Because volume space is usually the most important resource in a server consolidation project, you want to verify that you've budgeted adequately for present and future disk-volume space requirements. In addition, you might want to consider using third-party tools to help you perform disk analysis, cleanup (aka disk grooming), and management activities. HighGround Systems' Storage Resource Manager (SRM) is an excellent tool for these tasks. This product provides long-term disk-usage trend information and a management interface to perform disk grooming across all your network servers. (For more information about SRM, see the sidebar "Server Consolidation Resources.")
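If a tool such as SRM isn't handy, even a short script can give you a first-cut picture of how full each candidate's volumes are. The following Python sketch shows the idea; the server and share names are placeholders I made up, and the script assumes you can reach each volume through an administrative share from the machine you run it on.

# Quick disk-usage survey of consolidation candidates.
# The server and share names below are hypothetical examples.
import shutil

CANDIDATE_VOLUMES = {
    "FILESRV01": r"\\FILESRV01\D$",
    "FILESRV02": r"\\FILESRV02\D$",
    "FILESRV03": r"\\FILESRV03\E$",
}

def survey(volumes):
    """Print total, used, and free space for each candidate volume."""
    for server, path in volumes.items():
        usage = shutil.disk_usage(path)
        pct = usage.used / usage.total * 100
        print(f"{server:10} {path:18} "
              f"total {usage.total / 2**30:7.1f}GB  "
              f"used {usage.used / 2**30:7.1f}GB ({pct:4.1f}%)  "
              f"free {usage.free / 2**30:7.1f}GB")

if __name__ == "__main__":
    survey(CANDIDATE_VOLUMES)

Run a survey like this periodically and keep the output, and you'll also build up the long-term trend data you need to spot which servers are growing and which are coasting.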

Adequacy of target/destination server resources. You need to ensure that each consolidation group's destination machines have sufficient resources (e.g., disk space, CPU, memory, network capacity) to handle the increased load from the source servers. In the case of simple file-server consolidations, you primarily need to make sure that the target systems' disk subsystems can hold the amount of data migrating from the source servers and that the target server's other subsystems can handle the increased client load. However, for application server consolidations, other considerations (e.g., CPU and memory resources on the target server) might be more important.
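For a simple file-server consolidation, the arithmetic is straightforward, and a back-of-the-envelope check keeps you honest about growth headroom before you commit to a target. The figures in this small sketch are made-up examples:

# Back-of-the-envelope target capacity check. All sizes are illustrative.
SOURCE_USED_GB = {"FILESRV01": 34.0, "FILESRV02": 58.5, "FILESRV03": 21.2}
TARGET_FREE_GB = 180.0
GROWTH_FACTOR = 1.5   # allow 50 percent growth over the consolidation's lifetime

required = sum(SOURCE_USED_GB.values()) * GROWTH_FACTOR
print(f"Required with headroom: {required:.1f}GB; target free: {TARGET_FREE_GB:.1f}GB")
print("Target volume is adequate" if required <= TARGET_FREE_GB
      else "Target volume is too small")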

Manual migration of services, applications, and roles. Unfortunately, you can't easily move some server resources (e.g., NT networking services and functions, such as WINS, DHCP, RAS, and Microsoft IIS) from one server to another; you must manually install and configure these services on the target server. The same problem is true of BackOffice applications, such as Microsoft Exchange Server and Microsoft SQL Server, although many of these products have built-in features and tools to facilitate migrating data between servers. You also need to manually move and reinstall most third-party software applications (e.g., backup, antivirus, network fax) from server to server. And you have to perform a reinstallation to change a system that was previously an NT 4.0 member server (i.e., non-domain controller) to a domain controller or vice versa.
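Because you can't script these moves away entirely, the best you can do is build an accurate checklist of everything installed on each source server before you pull the plug. A small sketch along the following lines can enumerate the installed services for you; the server name is a placeholder, and the script simply wraps the built-in sc.exe utility, so it needs administrative rights on the source server.

# Inventory the services on a source server so that nothing gets forgotten
# when you reinstall by hand on the target. The server name is hypothetical.
import subprocess

def list_services(server=r"\\FILESRV01"):
    """Return the service names that 'sc query' reports on the given server."""
    result = subprocess.run(["sc", server, "query", "state=", "all"],
                            capture_output=True, text=True, check=True)
    return [line.split(":", 1)[1].strip()
            for line in result.stdout.splitlines()
            if line.strip().startswith("SERVICE_NAME")]

if __name__ == "__main__":
    for name in sorted(list_services()):
        print(name)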

Impact on participating servers and network traffic. Even when the bulk of your consolidation work involves migrating simple data files, you need to consider network availability and how migration activities will affect the servers and network (LAN and WAN) bandwidth usage. If you're fortunate enough to be able to perform the consolidation in an offline mode (e.g., tape backup and restore, direct data copy from server to server) when users don't require access to the servers involved, network availability probably won't be a problem. However, larger organizations will likely want to perform consolidations on many servers across slower WAN links, which introduces the possibility that these activities might interfere with network availability. The bottom line: You need to make sure that your data and application migrations don't take users offline during productive hours.
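If you have no choice but to move data over a slow WAN link while users are working, you can at least cap the rate at which the copy consumes bandwidth. Here's a rough sketch of the idea; the rate and chunk sizes are arbitrary placeholders, and a real migration would also need to carry over ACLs and cope with open files.

# Sketch of a bandwidth-throttled file copy for migrations across slow WAN
# links. Copies in small chunks and sleeps between them to cap throughput.
import shutil
import time

def throttled_copy(src, dst, max_bytes_per_sec=512 * 1024, chunk=64 * 1024):
    """Copy src to dst while keeping average throughput under the cap."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            data = fin.read(chunk)
            if not data:
                break
            fout.write(data)
            time.sleep(len(data) / max_bytes_per_sec)   # pace the transfer
    shutil.copystat(src, dst)   # preserve timestamps on the destination copy

Better still, schedule the heavy copies to kick off after hours so that users never notice them at all.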

Effect on client workstations. With file shares mapped through logon scripts, you can easily handle changes in the server-name portion of a resource (i.e., Uniform Naming Convention—UNC—pathname, such as \\bigserver\bigshare) by simply changing logon scripts to reflect the relocation of the data. However, some applications tend to embed resources' backing UNC pathnames in a variety of places, including the Registry or program-specific configuration files. Local printer mappings also present this problem. To restore proper functionality to network clients, you might have some postconsolidation cleanup work on your hands. Depending on the applications and environment, the problem can be catastrophic. Don't underestimate the magnitude of effort involved in finding and changing these pathnames. Always use machines that reflect your production environment to conduct trial lab migrations. That way, you'll be prepared and will know ahead of time exactly how pulling the plug on a particular server will affect that server's clients.
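The logon scripts themselves are the easy part: a search-and-replace pass over the script files handles the server rename. The sketch below shows the idea; the old and new server names and the script directory are placeholders, and you should run it against copies of the scripts before you touch the real NETLOGON share.

# Rewrite references to an old server in logon scripts after a consolidation.
# The names and directory below are hypothetical; test on copies first.
import re
from pathlib import Path

OLD, NEW = r"\\bigserver", r"\\consolidated01"
SCRIPT_DIR = Path(r"C:\migration\logon-scripts")

pattern = re.compile(re.escape(OLD), re.IGNORECASE)
for script in SCRIPT_DIR.glob("*.bat"):
    text = script.read_text()
    updated, count = pattern.subn(NEW, text)
    if count:
        script.write_text(updated)
        print(f"{script.name}: replaced {count} reference(s) to {OLD}")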

Finally, certain tools can help you automate the client-side changes that you require to complete the consolidation project. Stale Registry data that points to old server names causes many problems during consolidation, so you might want to employ a network Registry management tool to automate the correction of this Registry data. These utilities can help you make search-and-replace-style changes to Registry data on several machines. Although you can theoretically use NT's System Policy Editor (SPE) to assist in this effort, it isn't the best tool for this type of job. A better idea is to use third-party tools designed for this kind of task, such as Aelita Software Group's MultiReg or Steven J. Hoek Software Development's Registry Search + Replace. Using this type of utility to search for specific server names in workstations' Registries before you begin the consolidation project is a good idea. That way, you can estimate how widespread the effects of a server name change will be.
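If a commercial Registry tool isn't in the budget, you can still do the before-the-fact reconnaissance with a read-only script. The following sketch (the old server name is a placeholder) walks a workstation's HKEY_LOCAL_MACHINE\SOFTWARE hive and reports every string value that still mentions the old server; it changes nothing, so it's safe to run broadly.

# Read-only scan of HKLM\SOFTWARE for string values that mention an old
# server name. The name below is a hypothetical example. Windows-only.
import winreg

OLD_SERVER = "bigserver"

def scan(root, path):
    """Return (key path, value name, data) for every matching string value."""
    hits = []
    try:
        key = winreg.OpenKey(root, path)
    except OSError:
        return hits          # skip keys we can't open
    with key:
        i = 0
        while True:          # enumerate the values under this key
            try:
                name, data, _ = winreg.EnumValue(key, i)
            except OSError:
                break
            if isinstance(data, str) and OLD_SERVER.lower() in data.lower():
                hits.append((path, name, data))
            i += 1
        i = 0
        while True:          # recurse into subkeys
            try:
                sub = winreg.EnumKey(key, i)
            except OSError:
                break
            hits.extend(scan(root, path + "\\" + sub))
            i += 1
    return hits

if __name__ == "__main__":
    for key_path, value_name, data in scan(winreg.HKEY_LOCAL_MACHINE, "SOFTWARE"):
        print(f"HKLM\\{key_path}\\{value_name} = {data}")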

By the way, organizations that have deployed NT's Dfs might have an easier time during consolidation than organizations that haven't deployed Dfs. Dfs root names are independent of any particular server, so changes to the servers don't usually affect the client (assuming that the client is pointing to the Dfs root name and not to individual servers). The ability to easily consolidate resources is one of many benefits of and reasons to consider using Dfs.

DM/Consolidator: The Swiss Army Knife of Data Consolidation
If this whole process sounds a bit daunting, then you'll be happy to know that at least one company is coming to your rescue. FastLane Technologies, makers of the DM/Suite of enterprise directory management and network administration products, recently announced a tool that specifically targets companies seeking to migrate and consolidate server data. FastLane originally developed DM/Consolidator for Microsoft's internal Information Technology Group's (ITG's) server-consolidation projects, but FastLane realized that it had a good product and made the utility available to the general public. Microsoft and FastLane announced in January 2000 that Microsoft has licensed DM/Consolidator's technology, which the company will incorporate in the Microsoft File Migration Utility (MSFMU). MSFMU will be part of Microsoft's upcoming Services for NetWare 5.0 product, which the company has scheduled for release shortly after Win2K. MSFMU facilitates file migration from Novell's NetWare servers to Win2K servers.

SERVER CONSOLIDATION RESOURCES
DM/CONSOLIDATOR
FastLane Technologies * 902-421-5353 or 800-947-6752
http://www.fastlane.com
DOMAIN MIGRATION WIZARD (DMW), MULTIREG
Aelita Software Group * 614-336-9223 or 800-263-0036
http://www.aelita.com
REGISTRY SEARCH + REPLACE
Steven J. Hoek Software Development
http://www.iserv.net/~sjhswdev
STORAGE RESOURCE MANAGER (SRM)
HighGround Systems * 508-460-5152 or 800-395-9385
http://www.highground.com
From the outset, FastLane designed DM/Consolidator to handle the migration and synchronization of live server data with minimal impact on bandwidth and users. The software can handle both small- and large-scale server consolidations. The program provides a multitiered architecture involving primary and (optionally) secondary administrative consoles (either of which you can use to direct the consolidation process), source computers, and target computers. The real magic is DM/Consolidator's WAN-savvy approach to data migration. Using the administrative console, which Screen 1 shows, the software lets you replicate data using an intelligent multistep process that minimizes server and bandwidth utilization. In the first step, you schedule a System Snapshot of the consolidation's source and target systems. This process constructs a database that is a snapshot of crucial account and file-system information about each computer, including file-system directory structure, ACL data, and information about local shares and groups. In the next step, the Structure Replicator uses information from the System Snapshot to create on the target computer a duplicate of the source computer's directory structure. Finally, the Data Replication module duplicates data files between servers. FastLane also designed DM/Consolidator so that you don't need to use the Data Replication agent to perform the final data migration step; you can use a different method for this step, such as a tape backup and restore.
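To make the structure-then-data idea concrete, here's a rough two-phase sketch in Python. To be clear, this is my illustration of the concept, not FastLane's implementation: the paths are placeholders, and the sketch ignores the ACL, share, and local-group information that DM/Consolidator's System Snapshot captures.

# Two-phase illustration of structure-first replication: recreate the source
# directory tree on the target before copying any file data. Paths are
# hypothetical; NTFS ACLs and share definitions are not handled here.
import shutil
from pathlib import Path

SOURCE = Path(r"\\FILESRV01\D$\Shares")
TARGET = Path(r"\\CONSOLIDATED01\E$\Shares\FILESRV01")

def replicate_structure(src, dst):
    """Phase one: build the empty directory skeleton on the target."""
    dst.mkdir(parents=True, exist_ok=True)
    for d in (p for p in src.rglob("*") if p.is_dir()):
        (dst / d.relative_to(src)).mkdir(parents=True, exist_ok=True)

def replicate_data(src, dst):
    """Phase two: fill the skeleton with the files themselves."""
    for f in (p for p in src.rglob("*") if p.is_file()):
        shutil.copy2(f, dst / f.relative_to(src))

if __name__ == "__main__":
    replicate_structure(SOURCE, TARGET)
    replicate_data(SOURCE, TARGET)

You could just as easily perform the second phase with a tape backup and restore or an off-hours batch copy, which is exactly the flexibility that skipping the Data Replication agent gives you.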

One of the things I like best about this tool is that it offers flexible scheduling features for each stage of the process, so you can schedule intensive tasks during off-hours when network usage is low. Another benefit is that DM/Consolidator's replication process minimizes the impact on source servers by placing most of the processing on the target servers. If DM/Consolidator is outside your budget, check out the sidebar "Poor Man's Consolidation Toolkit" for tools that provide similar functionality at a fraction of the cost.

Although Win2K's new reliability reduces the fault-resilience-related problems associated with consolidation, you'll probably want to take additional steps to minimize the risks involved. If you've been using less expensive midrange PC-server hardware, you might consider using more powerful and scalable hardware with the maximum fault-tolerance features you can afford (e.g., redundant power supplies, hardware RAID controllers with redundancy on each volume, redundant network connections, lots of UPS battery backup power on each system). Also consider clustering products that further increase these machines' availability. These steps will maximize the number of users that each server can support and help mitigate the increased risks you'll inherit by putting more eggs in fewer baskets.

New Methods for the New Millennium
Be creative when it comes to how your organization deploys servers and network resources and how you might leverage Win2K to ease your administrative burdens. Aside from its highly touted array of new features, Win2K includes an equally new and important level of reliability. This newfound reliability renders obsolete some of the distributed computing precepts that exist under NT. When deploying servers in your organization, remember the KISS principle—Keep It Simple, Silly—and avoid needlessly deploying five servers where two can do the job. You'll save your organization money and yourself from headaches.
