
Make the Most of Your SAN with iSCSI

Add your Windows server storage to your Fibre Channel network

Executive Summary:
Fibre Channel is still the transport of choice for many data centers with high bandwidth and high availability requirements. However, iSCSI is a mature storage technology and is being deployed for small departmental operations as well as data center applications. Combining iSCSI and Fibre Channel Storage Area Network (SAN) technologies helps administrators bring all server assets into a common storage infrastructure. Microsoft iSCSI Software Initiator and Microsoft iSNS Server are free software applications that let Windows servers participate in a combined SAN.

Modern data centers typically run their most mission-critical business applications on Fibre Channel SANs. Fibre Channel has a proven track record in enabling fast performance and high availability of application data, as well as established best practices for data backup and disaster recovery. Not all business applications, however, require the bandwidth of 4Gbps Fibre Channel, and large data centers might have hundreds of second-tier standalone rack-mounted servers still using direct-attached storage. It's hard to justify a $1,000 Fibre Channel host bus adapter (HBA) for a server that itself costs less than $3,000. On the other hand, standalone servers incur more administrative overhead per server, particularly for backup operations.

Until the advent of iSCSI, there were few options for economically integrating all application, Web-hosting, and file servers into the data center SAN. iSCSI and iSCSI gateways, however, now provide the means to streamline the management and backup of second-tier servers and integrate these servers into the Fibre Channel SAN. This integration extends data center best practices to all server assets and can amortize the substantial investment in a data center SAN over a much larger population of attached devices.

Microsoft offers new iSCSI-enabling software, making it possible to bring Windows servers into the data center SAN cost-effectively. Let's look at the steps required to make this happen and the factors you need to consider. First, a little background on iSCSI.

iSCSI Essentials
Like traditional parallel SCSI, the iSCSI protocol enables reads and writes of data in high-performance block format. However, by serializing SCSI commands, status, and data, iSCSI overcomes the distance limitations of parallel SCSI cabling and simplifies deployment and maintenance. Because iSCSI runs over TCP/IP, it can be transported over conventional Gigabit Ethernet networks and wide-area IP networks. Figure 1 illustrates how conventional SCSI is wrapped in TCP/IP for transport.
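
To make "serializing" concrete, the following Python sketch packs a SCSI READ command into the 48-byte Basic Header Segment that RFC 3720 defines for iSCSI PDUs; the resulting header simply rides inside an ordinary TCP stream. This is an illustration of the framing, not a working initiator, and the field handling (notably the LUN encoding) is simplified:

import struct

def build_scsi_read_pdu(lun: int, task_tag: int, cmd_sn: int,
                        exp_stat_sn: int, cdb: bytes,
                        expected_data_len: int) -> bytes:
    # Pack the 48-byte Basic Header Segment of an iSCSI SCSI Command PDU
    # (RFC 3720 layout); LUN encoding is simplified to single-level LUNs.
    opcode = 0x01                       # SCSI Command
    flags = 0x80 | 0x40 | 0x01          # Final + Read + ATTR=Simple
    return struct.pack(
        ">BBHB3s8sIIII16s",
        opcode, flags,
        0,                              # bytes 2-3: reserved for this opcode
        0,                              # TotalAHSLength: no extra headers
        (0).to_bytes(3, "big"),         # DataSegmentLength: no immediate data
        struct.pack(">Q", lun << 48),   # LUN field (simplified encoding)
        task_tag,                       # Initiator Task Tag
        expected_data_len,              # Expected Data Transfer Length
        cmd_sn, exp_stat_sn,
        cdb.ljust(16, b"\x00")[:16])    # SCSI CDB padded to 16 bytes

# READ(10) for one 512-byte block at LBA 0.
read10 = bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0])
pdu = build_scsi_read_pdu(lun=0, task_tag=1, cmd_sn=1, exp_stat_sn=1,
                          cdb=read10, expected_data_len=512)
assert len(pdu) == 48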

Using economical Gigabit Ethernet interface cards and Gigabit Ethernet switches keeps the iSCSI per-server attachment cost low and works fine in many situations. Some vendors do provide iSCSI HBAs that optimize iSCSI processing via TCP offload engines (TOEs) and onboard iSCSI processing logic. iSCSI HBAs are required for boot from SAN applications, and they're suitable for applications that require high bandwidth, but they increase per-server attachment costs. In this article, I assume standard Gigabit Ethernet NICs. With the faster 10 Gigabit Ethernet, you lose most of the cost advantage over Fibre Channel.

For Windows storage management, an iSCSI target appears as just another storage resource that can be assigned a drive letter, formatted, and used for applications and data. Instead of being housed inside the server or connected by parallel cabling, though, the iSCSI storage resource can be anywhere in an IP-routed network. Because iSCSI is a block storage protocol, the latency of long-distance connections over a WAN might have a serious negative effect on performance or cause timeouts. Typically, iSCSI is best deployed within a data center, campus, or metro environment.
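
To see why latency dominates, consider some rough arithmetic (the round-trip figures here are illustrative assumptions, not measurements). Synchronous block I/O at queue depth 1 pays a full network round trip per operation, so round-trip time, not link bandwidth, caps the I/O rate:

def max_serial_iops(round_trip_ms: float) -> float:
    # Each serialized I/O costs at least one network round trip.
    return 1000.0 / round_trip_ms

print(max_serial_iops(0.2))    # ~5,000 IOPS on a data center LAN (~0.2ms RTT)
print(max_serial_iops(20.0))   # ~50 IOPS across a 20ms WAN link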

Microsoft iSCSI Support
Microsoft's introduction of iSCSI initiator and Internet Storage Name Service (iSNS) software provides an economical means to bring even low-cost Windows servers and workstations into the data center SAN infrastructure. Microsoft iSCSI Software Initiator enables connection of a Windows host to an external iSCSI storage array. Microsoft iSNS Server provides the discovery service that lets initiators locate targets on an iSCSI network.

As of this writing, iSCSI Software Initiator 2.04 is available free on the Microsoft Download Center and requires Windows Server 2003 or later, Windows XP Professional SP1 or later, or Windows 2000 SP3 or later. Download it at http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en. Microsoft iSNS Server is also available as a free download and requires Windows Server 2003 or Windows 2000 SP4. Download it at http://www.microsoft.com/downloads/details.aspx?familyid=0dbc4af5-9410-4080-a545-f90b45650e20&displaylang=en.

Microsoft has included some attractive features in iSCSI Software Initiator, including multipathing, security, and support for server clustering to iSCSI targets. Multipathing with the Microsoft Multipath I/O (MPIO) driver included in iSCSI Software Initiator provides for higher availability through failover and better performance through load balancing. Secure connections between iSCSI initiators and storage targets are supported with Challenge Handshake Authentication Protocol (CHAP) and IPsec for data-payload encryption. Authentication and encryption might be required when storage data traverses an untrusted network segment. Support for clustering enables iSCSI storage to be used for Microsoft Exchange Server or Microsoft SQL Server clusters. For the configurations discussed below, the Exchange or SQL Server data can be managed centrally and protected on the SAN, while clustering provides high availability of applications to end users.
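
For a sense of how lightweight CHAP is, here's the response computation RFC 1994 specifies: an MD5 digest over the one-byte message identifier, the shared secret, and the challenge the authenticator sends. The secret below is an invented example:

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier byte + shared secret + challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier, secret, challenge, received) -> bool:
    # The target recomputes the digest and compares it with what it received.
    return chap_response(identifier, secret, challenge) == received

secret = b"example-chap-secret"   # invented; must be shared out of band
challenge = os.urandom(16)        # target issues a fresh random challenge
resp = chap_response(1, secret, challenge)
print(chap_verify(1, secret, challenge, resp))   # True

Note that the shared secret itself never crosses the wire; only the digest does, which is why a fresh challenge per session matters.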

iSNS Server isn't mandatory, but it does simplify iSCSI deployment by enabling automatic discovery of iSCSI target resources. It can be run on a dedicated server or coexist with other server applications. Essentially, iSNS Server combines the capabilities of DNS with conventional discovery services provided by the Simple Name Server (SNS) of Fibre Channel fabrics. In Fibre Channel switches and directors, for example, the SNS contains information about all storage assets in the SAN. As a storage array or tape subsystem is attached to the SAN, it registers with the SNS. When Fibre Channel initiators connect to the fabric, they query the SNS for available storage resources. The resources that are reported to a specific initiator can be filtered by use of zoning and LUN masking. This prevents initiators from accessing unauthorized storage assets (e.g., stopping a Windows server from binding to a UNIX storage array).
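
The registration-and-filtered-query pattern is easy to model. This toy Python sketch mimics how a name server (iSNS or a Fibre Channel SNS) reports only the targets that share a discovery domain or zone with the querying initiator; all names and addresses are invented:

registry = {}    # target name -> portal address
domains = {}     # domain name -> set of member node names

def register_target(name, portal, domain):
    registry[name] = portal
    domains.setdefault(domain, set()).add(name)

def query(initiator, domain):
    # Report only targets that share a discovery domain with the initiator.
    members = domains.get(domain, set())
    if initiator not in members:
        return {}
    return {n: p for n, p in registry.items() if n in members}

register_target("iqn.1991-05.com.example:array1", "10.0.1.20:3260", "win-hosts")
domains["win-hosts"].add("iqn.1991-05.com.microsoft:srv01")  # enroll initiator
print(query("iqn.1991-05.com.microsoft:srv01", "win-hosts"))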

The iSCSI Gateway
An iSCSI gateway provides protocol conversion between iSCSI initiators and Fibre Channel–attached storage targets. An iSCSI gateway effectively proxies for each side, presenting a virtual Fibre Channel initiator to the real Fibre Channel target and a virtual iSCSI target to the real iSCSI initiator, as Figure 2 shows. Consequently, when setting up an iSCSI gateway, you must follow the respective rules of both protocols.

Because Fibre Channel connections today are typically 2Gbps or 4Gbps and iSCSI is typically 1Gbps, you can aggregate more iSCSI servers per Fibre Channel storage port on an iSCSI gateway than you can Fibre Channel servers. In conventional business application environments running at 1Gbps end to end, a typical ratio of servers to storage ports (known as the fan-in ratio) might be 7:1. An iSCSI gateway that provides 1Gbps port connections for iSCSI initiators and 4Gbps connections for storage ports can enable a much higher fan-in ratio of 18:1 or greater. For iSCSI initiators, you implement the higher fan-in ratio by attaching multiple iSCSI servers to a Gigabit Ethernet switch, which in turn provides a 1Gbps connection to the iSCSI gateway for every fan-in group. An iSCSI gateway that offers four 1Gbps Ethernet ports and several 4Gbps Fibre Channel ports can support 70 or more iSCSI initiators concurrently.
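
The arithmetic behind those ratios is simple enough to sketch. The utilization figures below are assumptions you'd replace with measured averages:

def fan_in(storage_port_gbps: float, server_link_gbps: float,
           avg_utilization: float) -> int:
    # Servers one storage port can absorb if each server link runs at the
    # given average utilization (assumed here; measure before committing).
    return int(storage_port_gbps / (server_link_gbps * avg_utilization))

print(fan_in(1.0, 1.0, 1 / 7))       # 7:1 at 1Gbps end to end
print(fan_in(4.0, 1.0, 0.22))        # 18:1 with a 4Gbps storage port
print(4 * fan_in(4.0, 1.0, 0.22))    # four GbE gateway ports -> 72 initiators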

The other factor to consider when scoping fan-in ratios is the maximum number of concurrent iSCSI sessions per gateway port that the storage vendor has certified. An iSCSI gateway might support up to 50 iSCSI sessions per Gigabit Ethernet port, whereas the storage vendor might certify only a more conservative 20 sessions per port. Each storage vendor does its own certification and testing of iSCSI gateway products and sets its own supported limit for each.
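
In other words, the effective per-port fan-in is the lowest of several independent ceilings; the figures below mirror the hypothetical numbers above:

bandwidth_fan_in = 18          # from the link-speed arithmetic above
gateway_session_max = 50       # gateway's own per-port session limit
vendor_certified = 20          # storage vendor's certified sessions per port

effective_fan_in = min(bandwidth_fan_in, gateway_session_max, vendor_certified)
print(effective_fan_in)        # 18: here bandwidth, not certification, binds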

Bringing iSCSI Servers into the SAN
As you plan to integrate iSCSI-attached Windows servers into your SAN, identify the collective storage capacity required for all the newly attached iSCSI servers, the average storage traffic generated by the second-tier applications running on them, and the initial fan-in ratio that best suits the aggregate traffic load; these figures size both the SAN and the iSCSI gateway requirements. It might be fairly easy to identify how much storage capacity each second-tier server needs, but it's usually more difficult to characterize storage traffic patterns and loads, particularly for "bursty" applications. It's best, then, to start with a fairly conservative fan-in ratio (e.g., 7:1 or lower) and gradually increase the number of iSCSI servers per iSCSI gateway port until you reach the optimum fan-in for your situation.
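
One way to ground that starting point is to work backward from measured per-server traffic while holding headroom for bursts. Every number in this sketch is invented:

def servers_per_port(port_mbps, avg_server_mbps, burst_headroom=2.0):
    # Servers one gateway port can carry, reserving headroom for bursts.
    return max(1, int(port_mbps / (avg_server_mbps * burst_headroom)))

measured = [12, 30, 8, 45, 20]               # avg Mbps per second-tier server
avg = sum(measured) / len(measured)          # 23 Mbps average
print(servers_per_port(1000, avg))           # ~21 per 1Gbps port, 2x headroom
print(min(servers_per_port(1000, avg), 7))   # but start at 7:1 and grow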

Deploying second-tier iSCSI servers into an existing Fibre Channel SAN requires three basic steps: configuring the existing Fibre Channel storage array for additional hosts, setting up the iSCSI gateway for both virtual Fibre Channel initiator and virtual iSCSI target connections, and installing the Microsoft iSCSI initiator and iSNS (if desired) software for host connection. No one step is particularly difficult, but the process might require collaboration between server administrators and SAN administrators if those functions aren't combined in your environment.

Step 1: Configuring SAN storage for new iSCSI hosts. Because you're using an iSCSI gateway to integrate additional servers, no special process is required to configure additional storage capacity. From the SAN administrator's standpoint, the new LUNs are being configured for traditional Fibre Channel initiators, which in fact have a virtual existence within the iSCSI gateway. Consequently, you create additional LUNs with the desired capacity as usual by using the storage vendor's configuration utility and connect the appropriate number of new storage ports (determined by the fan-in ratio) to the SAN fabric.

Although an iSCSI gateway platform might allow direct connection between the gateway and SAN storage, data center administrators might prefer to drive all storage connections through Fibre Channel directors or switches. In this case, you connect both storage ports and iSCSI gateway Fibre Channel ports to the fabric and configure zoning or LUN masking at the fabric level. Each new storage port is represented by a unique World Wide Name (WWN), which you use to configure zoning and connectivity to the iSCSI gateway.
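
Conceptually, zoning is just a membership test on WWNs. This toy check shows the relationship you're configuring; the WWNs are invented examples:

zones = {
    "iscsi_gw_zone": {
        "10:00:00:05:1e:00:00:01",    # gateway's virtual FC initiator WWN
        "50:06:01:60:00:00:00:aa",    # storage array port WWN
    },
}

def can_access(initiator_wwn, target_wwn):
    # The fabric reports a target only if both WWNs share a zone.
    return any(initiator_wwn in z and target_wwn in z for z in zones.values())

print(can_access("10:00:00:05:1e:00:00:01", "50:06:01:60:00:00:00:aa"))  # True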

Every storage vendor provides its own management utility for creating LUNs from the total capacity of the storage array. Typically, these utilities are GUI-based and fairly simple to configure. Likewise, individual fabric switch vendors provide utilities for configuring switch ports, zone groups, and LUN masking. It's important to remember that although you're configuring SAN resources to connect iSCSI initiators, the storage arrays and fabric see only Fibre Channel initiators proxied by the iSCSI gateway.

Step 2: Setting up the iSCSI gateway. The iSCSI gateway configuration has two basic components. You configure and bind the iSCSI initiators to their respective virtual iSCSI targets. And, likewise, you configure and bind the real Fibre Channel targets to their respective virtual Fibre Channel initiators. Typically, the configuration utility provided by the iSCSI gateway vendor streamlines this dual process so that when you configure an iSCSI initiator, the proxy Fibre Channel initiator is created automatically.

You register iSCSI initiators by iSCSI identifiers and register SAN resources by WWNs and Fibre Channel IDs (FCIDs) on the iSCSI gateway. You must determine these respective identifiers in advance to properly configure the iSCSI gateway. In Figure 3, the configuration utility for an iSCSI gateway (in this example, a Brocade M2640) shows an iSCSI initiator defined by iSCSI identifier and alias, IP address, and proxied WWNs.
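
It can help to picture the gateway's configuration as one table holding both bindings. The sketch below is a toy representation, not any vendor's actual schema, and every identifier in it is invented:

bindings = {
    "iqn.1991-05.com.microsoft:srv01": {
        "alias": "srv01",
        "ip_address": "10.0.1.11",
        "proxy_fc_initiator_wwn": "21:00:00:e0:8b:00:00:01",
        "virtual_iscsi_target": "iqn.2000-01.com.example.gw:array1",
        "real_fc_target_wwn": "50:06:01:60:00:00:00:aa",
        "lun_mask": [0, 1],              # LUNs this host may see
    },
}

def fabric_identity(initiator_iqn):
    # What the Fibre Channel fabric sees in place of the iSCSI host.
    return bindings[initiator_iqn]["proxy_fc_initiator_wwn"]

print(fabric_identity("iqn.1991-05.com.microsoft:srv01"))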

The iSCSI gateway might include additional utilities for implementing CHAP or IPsec for security. As with general address information, you should determine any CHAP parameters or IPsec addressing in advance to simplify gateway installation.

Because each iSCSI gateway vendor provides its own unique utility for configuring iSCSI hosts and SAN targets, I can't provide a step-by-step example for gateway configuration. The common requirements, though, are to configure iSCSI initiator properties, configure proxied targets, and define LUN masking parameters for the target volumes.

Step 3: Configuring the iSCSI hosts. Along with its free iSCSI Software Initiator, Microsoft provides detailed installation instructions in a downloadable users' guide. Once you've installed the software on a Windows server, the basic steps are to assign an iSCSI initiator node name for the server, configure any desired security features, discover (via iSNS) or define targets available for the server, and bind the iSCSI host to the appropriate targets.
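
The initiator also installs a command-line tool, iscsicli.exe, which the users' guide documents alongside the GUI; that makes the setup scriptable when you're repeating it across many second-tier servers. This Python sketch assumes an invented gateway portal address and target name, and you should verify the exact iscsicli syntax against the guide:

import subprocess

def iscsicli(*args):
    # Shell out to the initiator's CLI; raises if the command fails.
    result = subprocess.run(["iscsicli", *args], capture_output=True,
                            text=True, check=True)
    return result.stdout

iscsicli("QAddTargetPortal", "10.0.1.50")   # point at the iSCSI gateway
print(iscsicli("ListTargets"))              # targets the portal reports
iscsicli("QLoginTarget", "iqn.2000-01.com.example.gw:array1")  # log on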

After you've set the initiator parameters on the General tab of the iSCSI Initiator Properties dialog box, use the Discovery tab to either discover targets through iSNS or manually enter the IP address of intended targets. If you've installed iSNS Server on a LAN-attached server, the initiator will periodically query it for any newly registered iSCSI targets; in this example, those targets are presented by the iSCSI gateway. Alternatively, click Add in the Target Portals area of the Discovery tab to manually identify targets.

After you've defined targets, use the Targets tab to select and log on to the proxied iSCSI targets. As Figure 4 shows, the logon window also enables you to select whether a target is persistent and whether multipathing is used for this connection. Click Advanced in the logon window to configure cyclical redundancy check (CRC), CHAP, and IPsec settings for this connection.

Once the logon session between the iSCSI initiator and proxied iSCSI target is active, you can configure the iSCSI storage volume via the Windows Disk Management utility, assign it a drive letter, and format it for use.

A Dedicated IP SAN
Compared with a messaging LAN (i.e., a LAN that carries application traffic as opposed to storage traffic), a Fibre Channel SAN is inherently a separate network, with its own cabling scheme, protocols, and fabric infrastructure. On a properly designed Fibre Channel SAN, congestion should be minimal, and redundant pathing between initiators and targets enhances availability.

One of the most heavily marketed aspects of iSCSI is that it can run over common LAN infrastructure by using relatively cheap Gigabit Ethernet switches. This means that storage and messaging traffic coexist on the same LAN. Certainly there are no significant technical barriers to prevent this. However, Microsoft and, in particular, storage vendors typically advise against combining storage and messaging traffic on the same network. Messaging traffic can withstand wide fluctuations in latency, congestion, and packet loss and recovery; storage traffic can't. Consequently, the Ethernet network between the iSCSI gateway and the complex of iSCSI initiators it serves should be a dedicated IP SAN, as Figure 5 shows.

Designing a dedicated IP SAN from the start takes advantage of low-cost per-server connections and commodity Gigabit Ethernet switches, and it lets you scale the IP SAN over time to accommodate additional servers without significantly impacting (or being impacted by) the corporate LAN.

iSCSI is now a mature storage technology and is being deployed for small departmental operations as well as data center applications. Today, Fibre Channel is still the transport of choice for many data centers with high-bandwidth and high-availability requirements. Combining iSCSI and Fibre Channel SAN technologies helps administrators bring all server assets into a common storage infrastructure and extend best-practices handling to all corporate data.
