
MSCS Update

Most systems that use Microsoft Cluster Server (MSCS--formerly code-named Wolfpack) come from one of Microsoft's early adopter partners. Microsoft worked with several vendors to develop MSCS, including Digital Equipment, Compaq/Tandem, HP, Data General, NCR, IBM, Fujitsu, Sequent, Siemens Nixdorf, and Dell Computer. Microsoft validated these vendors' products as capable of supporting MSCS. Vendors prepackage these clustered solutions, with additional storage options available. Prepackaged solutions include servers; Windows NT Server, Enterprise Edition (NTS/E) with MSCS; dual NICs; and shared and non-shared storage. Most of these preconfigured systems use an internal EIDE boot drive with NTS/E loaded on it and some type of external chassis for the shared SCSI storage devices (including the Quorum Resource drive or volume).

The Quorum Resource is the cluster's database; it contains all the information on the cluster and cluster resources. The Quorum Resource is typically a dedicated volume or drive that belongs to the first node to form the cluster. If the first node fails, the Quorum Resource transfers to the surviving node. In MSCS Phase 2, you can designate a hierarchy of nodes to own the Quorum Resource if a failover occurs. The Quorum Resource is part of the shared-storage bus (typically SCSI), and it shouldn't store cluster-aware application data or NTS/E files.
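
To make the Quorum Resource's role concrete, here is a minimal sketch (not from the article) that asks a cluster which resource currently holds the quorum, using the Win32 Cluster API that accompanies NTS/E. It assumes a C compiler, the clusapi.h header from the Platform SDK, and linking against clusapi.lib; the program itself is hypothetical.

/* Hypothetical sketch: report the cluster's current Quorum Resource.
 * Assumes clusapi.h and clusapi.lib from the NTS/E Platform SDK. */
#include <windows.h>
#include <clusapi.h>
#include <stdio.h>

int main(void)
{
    WCHAR resName[MAX_PATH], devName[MAX_PATH];
    DWORD cchRes = MAX_PATH, cchDev = MAX_PATH, maxLogSize = 0;
    DWORD status;
    HCLUSTER hCluster;

    /* NULL opens the cluster that the local node belongs to. */
    hCluster = OpenCluster(NULL);
    if (hCluster == NULL) {
        fprintf(stderr, "OpenCluster failed: %lu\n", GetLastError());
        return 1;
    }

    /* Ask which resource and device hold the quorum log. */
    status = GetClusterQuorumResource(hCluster, resName, &cchRes,
                                      devName, &cchDev, &maxLogSize);
    if (status == ERROR_SUCCESS)
        wprintf(L"Quorum resource: %ls on device %ls\n", resName, devName);
    else
        fprintf(stderr, "GetClusterQuorumResource failed: %lu\n", status);

    CloseCluster(hCluster);
    return (status == ERROR_SUCCESS) ? 0 : 1;
}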

Microsoft-validated systems are based on specific configurations (e.g., CPU, BIOS, Fibre Channel-to-SCSI, host bus adapter--HBA), so the vendors have already worked through the numerous configuration-control issues associated with clustering. Vendors supply these systems with optimal-length cabling and active terminators. The systems often include redundant power supplies and other critical components. The systems are fully configured, with few variables to introduce. End users must load and administer only the applications they want to run on the cluster. This situation is ideal because making changes to shared SCSI storage environments can be tricky.

Vendors such as Digital StorageWorks, CLARiiON, and IBM offer validated storage options that are compatible with certified MSCS clusters. (For resources on storage and clusters, see the Storage and Cluster Resource List, page 138.) These storage offerings are typically external RAID or just a bunch of disks (JBOD) systems that have shared SCSI or Fibre Channel (FC) interconnects and can support comprehensive clustering environments that require protected storage and high performance.

Despite the emphasis on prepackaged or preconfigured systems, many end users, vendors, and resellers will want to upgrade existing systems to NTS/E and MSCS. Whether you use a preconfigured system or develop your own, you must focus on several storage-related areas. These issues include the shared-SCSI bus, advanced MSCS storage options, and MSCS storage management.

Shared SCSI Pragmatics and Pitfalls
Shared SCSI clusters use multiple initiators (i.e., SCSI HBAs) that arbitrate over the bus to access and control targets (i.e., drives). Digital and Compaq/Tandem pioneered this technique's use on high-end clustering systems using initiators and targets that they designed for this purpose.

Two challenges in shared SCSI are powering up the drives simultaneously via their respective owners and keeping initiators from locking preferred owners out of shared drives (i.e., unequal access). These problems often coexist, with one problem masking the other. Screen 1 shows how these startup problems appear under the Cluster Administrator: Disk U is failed, and disk V is offline. You can often correct this problem by manually transferring drive ownership back and forth until you clear the Failure/Offline flag. This workaround is easy, but it isn't possible during unattended failback.
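
For reference, the manual workaround of transferring ownership and forcing the disk back online can also be scripted against the Cluster API. The following is a hedged sketch only; the group name "Disk Group 1", resource name "Disk U:", and node name "NODE1" are placeholders, not values taken from this configuration.

/* Hypothetical sketch: move a disk group to its preferred node and
 * bring the disk resource online, mirroring the Cluster Administrator
 * workaround described above. Names are placeholders. */
#include <windows.h>
#include <clusapi.h>
#include <stdio.h>

int main(void)
{
    HCLUSTER  hCluster;
    HGROUP    hGroup;
    HNODE     hNode;
    HRESOURCE hDisk;
    DWORD     status;

    hCluster = OpenCluster(NULL);                 /* local cluster */
    if (hCluster == NULL)
        return 1;

    hGroup = OpenClusterGroup(hCluster, L"Disk Group 1");
    hNode  = OpenClusterNode(hCluster, L"NODE1");
    hDisk  = OpenClusterResource(hCluster, L"Disk U:");
    if (hGroup == NULL || hNode == NULL || hDisk == NULL) {
        fprintf(stderr, "Could not open group, node, or resource\n");
        return 1;
    }

    /* Hand the whole group to the preferred node... */
    status = MoveClusterGroup(hGroup, hNode);
    if (status != ERROR_SUCCESS && status != ERROR_IO_PENDING)
        fprintf(stderr, "MoveClusterGroup failed: %lu\n", status);

    /* ...then ask the disk resource to come online on that node. */
    status = OnlineClusterResource(hDisk);
    if (status != ERROR_SUCCESS && status != ERROR_IO_PENDING)
        fprintf(stderr, "OnlineClusterResource failed: %lu\n", status);

    CloseClusterResource(hDisk);
    CloseClusterNode(hNode);
    CloseClusterGroup(hGroup);
    CloseCluster(hCluster);
    return 0;
}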

SCSI hubs and switches. Digital and GigaLabs have announced the availability of several hubs and switches you can use with conventional SCSI interconnects. These hubs and switches behave much like their networking counterparts and resolve several shared-bus problems. The devices overcome SCSI (especially Ultra SCSI) cable-length limitations, enable disaster-tolerant multiple-node clustering, and provide support for remote vaulting and mirroring. Hubs and switches eliminate the need to use Y cables or tri-link connectors with active terminators or SCSI devices that feature dual ports. Figure 1 shows a two-node cluster with conventional SCSI architecture, Figure 2 shows hub architecture, Figure 3, page 137, shows switching architecture, and Figure 4, page 137, shows a conventional SCSI architecture, also with non-shared storage.

Storage and Cluster Resource List
CLARiiON * 800-672-7729
Web: http://www.clariion.com
Computer Associates' Cheyenne Division * 516-342-5224 or 800-243-9462
Web: http://www.cheyenne.com
Digital Equipment * 800-786-7967
Web: http://www.storage.digital.com
ECCS * 800-322-7462
Web: http://www.eccs.com
GigaLabs * 408-481-3030
Web: http://www.gigalabs.com
IBM * 770-863-1234 or 800-426-4968
Web: http://www.storage.ibm.com/adsm
Legato Systems * 650-812-6000
Web: http://www.legato.com
Microsoft * 425-882-8080
Web: http://www.microsoft.com/ntserverenterprise and http://www.microsoft.com/hwtest/hcl
Seagate Software * 408-438-6550
Web: http://www.seagatesoftware.com
Symbios Logic * 619-677-3135 or 888-677-3135
Web: http://www.symbios.com

SCSI hubs and switches simplify multiple-host, shared-bus configurations so that more nodes can access a particular shared SCSI storage peripheral. SCSI hubs permit radial rather than serial connectivity, which facilitates longer total SCSI buses. The hubs support bus isolation when power is cycled on and off by host nodes or their HBAs or if a node fails. This feature solves most problems associated with multiple-initiator lockout. Hubs and switches let you take shared devices offline without rebooting the rest of the cluster, which is an important but often overlooked component of routine MSCS maintenance.

Multiple-initiator lockout is a SCSI phenomenon in which multiple initiators (i.e., HBAs) can't simultaneously take ownership of a SCSI device or Logical Unit Number (LUN). Instead, an initiator locks on a device, performs its operations, and then releases the device for other initiators to access.

You need to use external RAID to protect the Quorum Resource. If you lose the Quorum Resource and crash the cluster, you need to start over because the backup file will be out of sync with the current cluster file. You can start over from your last backup, or you can reinstall MSCS on the nodes. Neither option is desirable. Screen 2, page 136, shows the contents of the Quorum Resource folder.

Tips and trips for upgraders. If you develop your own MSCS clusters, you'll need some guidance. The following list presents tips and possible trip-ups.

*Before installation, set the SCSI ID on the adapter for Node-1 to 7, and set the ID for Node-2 to 6. Disable the boot-time SCSI reset operation on each controller before you install MSCS, or each node will hang at startup.

*When you initially set up your shared drives (before you set up MSCS), use Disk Administrator on Node-1 to assign drive letters and descriptions to the shared disks. During setup, MSCS will copy these letters and descriptions to Node-2. You must choose sequential letters that don't conflict with existing letters on either node.

*Unless you specify otherwise, the first drive and associated letter in this series of shared drives is the default MSCS chooses for the Quorum Resource. If you're using RAID to protect the Quorum, you need to set up RAID before you assign drive letters. Use a 1GB hardware RAID 5 configuration for the Quorum Resource, because the software RAID supplied with NTS/E can't handle a boot disk crash. Don't make the drive larger than 1GB, or you might be tempted to use it for application or data storage. Use this drive to store only cluster data (the files MSCS keeps in the MSCS folder on the Quorum Resource drive or volume). See the sketch after this list for designating the Quorum disk programmatically.

*If you partition shared drives, you must do so before you install MSCS. You must also format the drives for compressed or uncompressed NTFS. You can't use disk partitions to make up a RAID or mirror set. The partitions are separate resources, and you need to manage and move them accordingly.

*You must place only NTS/E, system data, and paging files on the EIDE boot drive. Shared disks and NTS/E files aren't compatible.

*All SCSI buses must be identical. For example, you can't mix Wide and Narrow or Fast and Ultra. You need to correctly terminate both ends of the bus with active-type terminators. Use high-quality cables of the correct length (matched from one node to the other) for the given speed and type of SCSI.
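
As noted in the Quorum tip above, here is a hedged sketch of designating a dedicated, RAID-protected disk resource as the Quorum Resource through the Cluster API. The resource name "Disk Q:" is a placeholder; passing NULL for the device name and 0 for the log size accepts the cluster's defaults.

/* Hypothetical sketch: point the quorum at a dedicated disk resource.
 * "Disk Q:" is a placeholder resource name. */
#include <windows.h>
#include <clusapi.h>
#include <stdio.h>

int main(void)
{
    HCLUSTER  hCluster;
    HRESOURCE hQuorumDisk;
    DWORD     status;

    hCluster = OpenCluster(NULL);
    if (hCluster == NULL)
        return 1;

    hQuorumDisk = OpenClusterResource(hCluster, L"Disk Q:");
    if (hQuorumDisk == NULL) {
        CloseCluster(hCluster);
        return 1;
    }

    /* Designate this disk resource as the Quorum Resource; NULL and 0
     * keep the default quorum path and log size. */
    status = SetClusterQuorumResource(hQuorumDisk, NULL, 0);
    if (status != ERROR_SUCCESS)
        fprintf(stderr, "SetClusterQuorumResource failed: %lu\n", status);

    CloseClusterResource(hQuorumDisk);
    CloseCluster(hCluster);
    return (status == ERROR_SUCCESS) ? 0 : 1;
}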

High-Performance Options for NT Clusters
Whether you buy a preconfigured MSCS cluster or build your own, you have several high-performance storage subsystem options. Microsoft-validated clusters are available in a variety of interconnect and configuration flavors. These high-capacity, high-performance systems are available with SCSI, Serial Storage Architecture (SSA), or FC storage interconnects. Table 1, page 138, provides a brief overview of the three competing interconnects. Table 2, page 138, shows the models Microsoft summarizes in its latest MSCS Hardware Compatibility List (HCL--February 1998). Microsoft will soon add various vendors' FC offerings, many of which are already on the NT 4.0 HCL. These FC offerings feature cluster-friendly storage managers that systems administrators can monitor and administer as easily as the clusters they reside on.

MSCS Storage Management
Clusters present special challenges in storage management. You must be able to back up, restore, and manage boot disks (e.g., EIDE) and shared SCSI disks that span multiple nodes or servers. The storage manager you choose must have well-defined behavior during failover and failback. Your solution must support the cluster's numerous virtual servers and the cluster's public network clients.

Microsoft promised a special NT backup utility in its first NTS/E release, but the company hasn't released the utility yet. In the Managing MSCS section of the MSCS administrator's guide, Microsoft recommends that you use NT Backup to back up the Registry and the boot and system drives. The company also suggests you use Rdisk.exe to back up the Registry on an Emergency Repair Disk (ERD). Microsoft also recommends that you use each node to create a hidden administration share on the shared disk drives where you want to back up data. You can then use NT Backup to back up these shares. However, clustering presents a larger challenge than these recommendations address.

Backing up a cluster is more involved than backing up a standalone server. A cluster has shared data (on a common bus) and non-shared data (on the NTS/E boot drive and a separate SCSI bus) that you must carefully protect. You need to choose a reliable solution that works regardless of the individual nodes' status. A local, non-cluster-aware backup solution backs up and restores only the data on its own buses (e.g., SCSI and EIDE) and the data it currently owns on the shared bus. It can't directly access drives or buses on other nodes to perform backups or restores. A backup solution must handle situations in which the primary node is no longer available and the shared-data drives belong to the surviving node.
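
To illustrate what cluster-aware means in practice, the sketch below (an assumption, not any vendor's implementation) enumerates the cluster's resources and reports which node currently owns each one: exactly the check a backup tool must make before it touches shared drives after a failover.

/* Hypothetical sketch: list each cluster resource, its state, and its
 * current owner node, using the documented Cluster API enumeration. */
#include <windows.h>
#include <clusapi.h>
#include <stdio.h>

int main(void)
{
    HCLUSTER  hCluster;
    HCLUSENUM hEnum;
    DWORD     i, type, status, cchRes;
    WCHAR     resName[MAX_PATH];

    hCluster = OpenCluster(NULL);
    if (hCluster == NULL)
        return 1;

    /* Walk every resource defined in the cluster. */
    hEnum = ClusterOpenEnum(hCluster, CLUSTER_ENUM_RESOURCE);
    if (hEnum == NULL) {
        CloseCluster(hCluster);
        return 1;
    }

    for (i = 0; ; i++) {
        HRESOURCE hRes;
        WCHAR nodeName[MAX_PATH], groupName[MAX_PATH];
        DWORD cchNode = MAX_PATH, cchGroup = MAX_PATH;
        CLUSTER_RESOURCE_STATE state;

        cchRes = MAX_PATH;
        status = ClusterEnum(hEnum, i, &type, resName, &cchRes);
        if (status == ERROR_NO_MORE_ITEMS)
            break;
        if (status != ERROR_SUCCESS)
            continue;

        hRes = OpenClusterResource(hCluster, resName);
        if (hRes == NULL)
            continue;

        /* Returns the resource's state plus its owner node and group. */
        state = GetClusterResourceState(hRes, nodeName, &cchNode,
                                        groupName, &cchGroup);
        wprintf(L"%ls: state %d, owned by %ls (group %ls)\n",
                resName, (int)state, nodeName, groupName);
        CloseClusterResource(hRes);
    }

    ClusterCloseEnum(hEnum);
    CloseCluster(hCluster);
    return 0;
}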

Fortunately for MSCS's early adopters, Legato Systems, Seagate Software, and Computer Associates' Cheyenne Division (CA Cheyenne) developed enterprise editions of their backup products. Legato's NetWorker 4.4 for Windows NT, Power Edition supports all facets of NTS/E, including clustering. Seagate's Backup Exec for Windows NT, Enterprise Edition 7.0 supports NT for the enterprise with Component Object Model (COM) compatibility and FC storage support. CA Cheyenne's ARCserve 6.5 for Windows NT is tightly integrated with the cluster-aware Unicenter TNG, with support for FC storage.

Currently, Legato's NetWorker is the only product that supports clustering's backup, restore, and management requirements. Considering IBM's legacy of clustering and support of MSCS, the company will probably release a cluster-aware version of ADSTAR Distributed Storage Manager (ADSM) in the near future. As MSCS and its storage-management specifics become more widely implemented, vendors will offer fully comprehensive, cluster-aware support.

Putting Your Knowledge to Work
Storage and storage management for NT clusters are complicated matters. You must use a cluster-friendly approach when planning for, deploying, and administering MSCS and other clustering products. Whether you buy a preconfigured cluster or storage subsystem or develop your own, you must methodically tackle these systems' challenges.

When MSCS's early adopter partners begin to self-certify clusters and peripherals, the market for upgrades such as CPUs and storage will open up quickly. Thus, end users with existing assets will be able to protect their investments. However, users will want to carefully observe storage rules and regulations when they adopt new hardware and software.
