
Planning for Implementing Microsoft Cluster Server

Prepare your network

The release of Microsoft Cluster Server (MSCS--formerly code-named Wolfpack) as part of Windows NT Server, Enterprise Edition (NTS/E) 4.0 is a welcome addition to Microsoft's base NT Server offering. I work with several clients who have been waiting impatiently for Microsoft to provide a solution that supports the high availability, scalability, and ease of management that clustering has brought them on their OpenVMS and Tandem systems for more than 10 years. For better or for worse, more than a year of speculation and waiting is over.

The lack of shrink-wrapped, high availability support from Microsoft has sent many users looking for third-party data replication and mirroring products such as Qualix Group's OctopusHA+ and Vinca's StandbyServer. Other users wanting a shared storage solution similar to MSCS have turned to Digital Equipment's Digital Clusters for Windows NT and NCR's LifeKeeper for Windows NT. Still other users with fault tolerance requirements have used Marathon Technologies' Endurance 4000, and those with pressing out-of-box scalability needs have looked at Convoy by Valence Research.

Articles that describe clustering, explain why you might want it, and outline ways you might use it in your environment have already set the stage for this article (for a list of related articles and clustering product reviews, see "Related Articles in Windows NT Magazine," page 165). In this article, I'll describe how to plan for getting MSCS up and running in your environment. To begin, you need to know how to prepare your network environment for MSCS.

Getting Ready Is the Hard Part
Installing the MSCS software is straightforward. However, setting up the cluster is a different matter. A cluster is much more than software: It's a carefully planned, assembled, and tested collection of software and hardware that's only part of a high availability computing environment. Many new MSCS users will initially purchase their production clusters as new, preconfigured systems. That approach is a good one, because the time these users save on hardware configuration, validation, and support by purchasing a turnkey MSCS solution is better spent making their environment cluster ready.

Configuring your environment for MSCS takes a fair amount of network-related planning beyond the hardware configuration and software installation work. You have to address these planning issues before you have several thousand dollars of cluster parts sitting in a corner of your office and a CFO breathing down your neck.

TCP/IP Connectivity
All workstation clients accessing data on the cluster do so via TCP/IP. As a result, TCP/IP connectivity is important to the planning and day-to-day operation of your cluster. Although this point is trivial at some sites, it can be a nightmare at others. Before you even begin to think about deploying MSCS, you need to ensure that your site can run TCP/IP as the only transport. You can save a lot of grief by testing client-to-server NetBIOS over TCP/IP (NetBT) connectivity with the NBTSTAT utility before you start digging into MSCS. If you can't verify NetBT connectivity, you have bigger problems to address first (for information on using NetBIOS over TCP/IP, see "Related Articles in Windows NT Magazine," page 165).
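For a quick sanity check, you can run a few standard commands from a workstation against an existing NT server. The server name BANCU01 here is only an example; substitute one of your own servers:

PING BANCU01
NBTSTAT -a BANCU01
NBTSTAT -c

PING confirms basic TCP/IP reachability by name, NBTSTAT -a lists the server's NetBIOS name table over TCP/IP, and NBTSTAT -c displays the workstation's NetBIOS name cache so you can see which names it has already resolved to TCP/IP addresses.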

Most enterprise TCP/IP connectivity problems revolve around name resolution, so you will also want to be sure that you configure your existing NT servers and clients to use the Windows Internet Naming Service (WINS) for NetBIOS name resolution and the Domain Name System (DNS) for host name resolution. (For information on WINS and DNS, see "Related Articles in Windows NT Magazine," page 165.)
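If you're not sure how a client is set up for name resolution, two commands tell you most of what you need to know (again, the server name is only an example):

IPCONFIG /ALL
NSLOOKUP BANCU01

IPCONFIG /ALL shows the node type and the WINS and DNS server addresses the client is using, and NSLOOKUP confirms that your DNS servers can resolve the host name. If NSLOOKUP fails but NBTSTAT succeeds, the client is resolving names through WINS or broadcasts rather than DNS.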

The Address Monster (aka the Virtual Server)
After you confirm TCP/IP connectivity across your network, you have to plan how you want to assign the MSCS TCP/IP addresses. MSCS eats TCP/IP addresses. You need five static addresses just to set up a typical two-node cluster. Two of these addresses are for connecting your clients, two are for cluster-node-to-cluster-node communications (i.e., intracluster communications), and one is for cluster management. A validated (and therefore supported) two-node MSCS configuration requires at least four network interfaces, so five TCP/IP addresses seem reasonable. However, you haven't met the real address monster of MSCS--the virtual server.
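To make that count concrete, here's one way the five initial addresses might break out for a two-node cluster; the addresses are made up purely for illustration:

Node A, client network adapter        192.168.1.11
Node B, client network adapter        192.168.1.12
Node A, private interconnect adapter  10.0.0.1
Node B, private interconnect adapter  10.0.0.2
Cluster management (cluster name)     192.168.1.10

The two client-network addresses and the cluster management address sit on your production subnet; the two interconnect addresses only need to be reachable between the nodes themselves.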

MSCS uses virtual servers to organize the applications and services that you want to provide failover capability for. From a networking standpoint, each virtual server simulates an NT node and needs a static TCP/IP address and a network name.

At the Microsoft Professional Developers Conference '97 in California last September, Tom Phillips, Microsoft's MSCS program manager, stressed the concept of virtual servers in his "Writing a Cluster (Wolfpack)-Aware Application" presentation. Phillips also sounded the virtual server horn for Exchange administrators at the Microsoft Exchange Conference '97, held in California the same month. The MSCS developers are serious about sharing this concept with all parts of the NT community. Their concern stems from the effect virtual servers have on the network namespace and the applications that rely on it.

Table 1: NetBIOS Machine Names and Usage
Name             Suffix(h)  Type    Usage
BANCU01          00         UNIQUE  Workstation Service
BANCU01          20         UNIQUE  File Server Service
BANCU            00         GROUP   Domain Name
BANCU            1C         GROUP   Domain Controllers
BANCU            1B         UNIQUE  Domain Master Browser
BANCU            1E         GROUP   Browser Service Elections
BANCU            1D         UNIQUE  Master Browser
..__MSBROWSE__.  01         GROUP   Master Browser
INet~Services    1C         GROUP   IIS
IS~BANCU01.....  00         UNIQUE  IIS
BANCU01          6A         UNIQUE  Microsoft Exchange IMC
BANCU01          87         UNIQUE  Microsoft Exchange MTA
BANCU01          03         UNIQUE  Messenger Service
BANCU01          1F         UNIQUE  NetDDE Service

In the past, server-based applications and services interacted with the local computer name. The problem with this approach was that it tied applications to a particular machine. For example, the Server service exported the computer name to the network as a NetBIOS name with the <20> suffix in the 16th position so that workstations could make file service connections. Microsoft Exchange services took this approach even further by exporting the computer name with five or more NetBIOS suffixes. Table 1 combines the output from the command

NBTSTAT -a BANCU01

which queries an NT server (BANCU01) for its NetBIOS names, with the suffix descriptions from the Microsoft Knowledge Base article on NetBIOS suffixes (http://premium.microsoft.com/support/kb/articles/q163/4/09.asp). The server in question is an NT 4.0 system running as a Primary Domain Controller (PDC) and hosting Exchange 5.0 and Internet Information Server (IIS) 3.0.

MSCS shifts this paradigm by tying applications to a virtual server instead of to a physical server and its computer name. With MSCS, you install applications on virtual servers, which binds each application to the virtual server's network name and TCP/IP address resources. This approach gives applications independence from any one machine, and you can move the virtual server among cluster members. Two-node clusters and even a single-node cluster can support multiple virtual servers.
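To make the idea concrete, here's a rough sketch of how you might define the network pieces of a virtual server from the command line with the cluster.exe utility that MSCS installs alongside Cluster Administrator. The group name, resource names, and address are invented for illustration, and the exact switches can vary by release, so treat this as a sketch rather than a recipe (you can accomplish the same thing interactively in Cluster Administrator):

CLUSTER GROUP "Accounting" /CREATE
CLUSTER RESOURCE "Accounting IP" /CREATE /GROUP:"Accounting" /TYPE:"IP Address"
CLUSTER RESOURCE "Accounting IP" /PRIV Address=192.168.1.21 SubnetMask=255.255.255.0
CLUSTER RESOURCE "Accounting Name" /CREATE /GROUP:"Accounting" /TYPE:"Network Name"
CLUSTER RESOURCE "Accounting Name" /PRIV Name=ACCT01
CLUSTER GROUP "Accounting" /ONLINE

In practice, you also make the Network Name resource depend on the IP Address resource (and make your file-share or application resources depend on the Network Name) before you bring the group online, so the pieces come up in the right order.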

Imagine a two-node cluster that supports file services for four departments and hosts Microsoft Exchange, SQL Server, Microsoft Transaction Server (MTS), and IIS. Designing this cluster requires spreading the applications across multiple virtual servers and assigning network names and addresses to these servers. The addresses you assign are in addition to the initial five TCP/IP addresses you need to set up the cluster. A typical design might involve creating a virtual server for each department, each requiring its own address. With the initial five addresses plus one address for each department, you've used nine addresses. You can then assign departmental file and print services to these departmental virtual servers and distribute the load across the two physical servers that MSCS supports today (in the future, you will be able to redistribute the load to other cluster nodes as Microsoft adds support for them). This approach has the added benefit of letting each department think it has its own server. You need four more addresses for the virtual servers that Exchange, SQL Server, MTS, and IIS will execute in. Figure 1 shows the configuration for this setup. You've just used 13 TCP/IP addresses. Don't worry about the number 13; even if it is unlucky, you won't be stopping here for long.
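To recap the arithmetic for this hypothetical cluster:

Initial cluster setup (client access, interconnect, management)  5 addresses
Departmental file-and-print virtual servers (4 departments)      4 addresses
Application virtual servers (Exchange, SQL Server, MTS, IIS)     4 addresses
Total                                                            13 addresses

Every one of these must be a static address, so reserve the whole block with whoever manages TCP/IP addresses at your site before you begin the installation.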

Table 2: Address Allocation for Private Internets (RFC 1918)
Class       Address Range
Class A/8   10.0.0.0 to 10.255.255.255
Class B/16  172.16.0.0 to 172.31.255.255
Class C/24  192.168.0.0 to 192.168.255.255

Fortunately, as outlined in Request for Comments (RFC) 1918, the Internet Assigned Numbers Authority (IANA) has set aside several IP networks that are not routed on the Internet and that you can use within your organization. Table 2 lists available address classes as set forth by RFC 1918. Microsoft notes that these addresses are useful for setting up a private network for intracluster communications. I like to use these addresses throughout the organization, especially at larger sites that want to connect to the Internet but don't want to implement routing internally or that want to SuperNet (the reverse process of subnetting) the network using class C (/24) addresses that an Internet Service Provider (ISP) assigns. By adding a switch and using one of the RFC 1918 class A (/8) or B (/16) networks, you can gain a lot of flexibility and performance because you don't have to worry about routing. You can then use a proxy or Network Address Translator for Internet connectivity.
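As an example of this approach (and it is only an example), a plan along these lines might look like:

Production LAN (entire organization)  10.0.0.0 (RFC 1918 class A)
Cluster private interconnect          192.168.100.0 (RFC 1918 class C)
Internet connectivity                 proxy server or address translator using ISP-assigned addresses

Keeping the interconnect on its own RFC 1918 network makes it obvious which adapters carry intracluster traffic and keeps that traffic off the production subnet.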

Domain Membership and Roles
MSCS nodes must belong to the same NT domain. I prefer to install them as member servers in an existing domain. MSCS doesn't do anything to enhance the availability of domain controllers, so why burden these systems with that role? This approach simplifies installation and management, reduces the overhead on these systems, and eases the movement of MSCS nodes between domains. Keep in mind that domain controllers must maintain the entire user accounts database in nonpaged pool memory. You don't want to burden an application server (i.e., your MSCS node) running BackOffice and other server-based applications with this additional memory consumption and the overhead of validating logons.

However, if you need them to, your MSCS nodes can fill any server role. The MSCS documentation says that you can pair cluster nodes in almost any domain controller role, either in an existing NT domain or in a domain of their own. What the documentation doesn't mention is that you can also install MSCS cluster nodes in a mixed domain controller and member server configuration. As with all domain role decisions, you need to make this choice carefully.

Time for a Reinstall Already?
One final consideration when planning your network for MSCS is that you need to lock down the names and TCP/IP addresses you're going to assign to your cluster nodes. If you change these names or addresses later, your cluster will stop functioning and you'll have to reinstall the MSCS software. Because of this constraint, you can't get away with setting up your cluster under temporary names and addresses and then changing them just before you put the cluster into production.

Now you know what to consider when planning your network for MSCS. In a future article, I'll tell you what you need to consider when selecting your cluster hardware.

Corrections to this Article:

  • In "Planning for Implementing Microsoft Cluster Server", the values in the last row of Figure 1 were incorrect. The correct values are 3 through 11.