The iSCSI protocol is becoming increasingly popular in enterprise environments of all sizes. If you haven't heard of iSCSI, it's simply an adaptation of the tried-and-true SCSI protocol used to connect servers to high-performance disk enclosures and CD-ROM drives. Instead of dedicated cables, it uses TCP/IP over standard networks. (Remember all those thick cables and huge plugs with delicate pin-outs?)
In recent months, iSCSI has become wildly popular, in part because of the rise of virtualization, especially virtualization technology that lets running virtual machines (VMs) be moved between servers so that maintenance can be carried out on a host server without affecting the availability of its guest VMs. The only interconnectivity required between the servers and the disks in the storage subsystem is an IP network, which lets many servers share the same storage subsystem. With older technologies, such as SCSI DAS, at most two servers could access a shared storage subsystem. Fibre Channel (FC) SANs can be built to let multiple servers access the same storage subsystem, but they're extremely expensive, and servers need special HBAs installed before they can connect to the specialized FC switches that front the SAN.
Using an IP network, iSCSI costs far less than the alternatives and offers more flexibility. You might already be using NAS in your environment. NAS uses common IP-based protocols, such as Web-based Distributed Authoring and Versioning (WebDAV), NFS, and Server Message Block (SMB)/Common Internet File System (CIFS), but these are file-level protocols: they're generally not up to the task of providing access to extremely large files, nor are they suited to high-performance workloads such as virtualization or databases. The block-level iSCSI protocol avoids the performance and other problems associated with file-level protocols, including file locking.
A number of vendors offer a wide range of iSCSI solutions today, including NAS systems with iSCSI support. Microsoft recently released Windows Storage Server 2008 R2, which is an optimized version of Windows Server 2008 R2, with iSCSI support. Available in storage products from vendors like Dell, HP, and others, Storage Server boasts an impressive range of features that allow you to build storage subsystems. Those subsystems can be used to host the virtual disks of your VMs, host huge databases for your database servers, and do pretty much anything else you can think to throw at them. Storage Server also includes easy-to-use management features and deduplication software to minimize storage requirements. Best of all, Storage Server integrates seamlessly into your Windows network and also offers support for typical NAS protocols.
In this article, I'll describe how to set up and test Storage Server Enterprise Edition in your environment. Storage Server comes in Workgroup, Standard, and Enterprise editions; an overview of the differences between the editions is available on Microsoft's TechNet website. Although Storage Server is available for production use only in products from OEMs, if you have a TechNet subscription, you can download Storage Server as an update to Server 2008 R2 and test it in a physical or virtual environment. The TechNet site also has more details about downloading and using Storage Server.
Preparing for and Performing Initial Setup
Before deploying Storage Server, you need to prepare your environment. You should allocate a static IPv4 address for Storage Server, and if you use IPv6, allocate an IPv6 address. The IPv4 and/or IPv6 address will be used to remotely connect to Storage Server to manage it, and for the Storage Server to communicate with domain controllers (DCs), DNS servers, and other equipment. You don’t need to join a Storage Server system to a domain, but I recommend doing so for ease of management. It’s possible to run both iSCSI and other network traffic over the same network using a single NIC on your member servers and the Storage Server machine in low-volume scenarios, but I wouldn’t recommend this configuration in a production environment. Typically, you'll dedicate a NIC on each member server just for communicating with Storage Server on a network dedicated to iSCSI traffic. When using dedicated NICs, allocate IPv4 and IPv6 addresses for Storage Server that will be used by iSCSI clients, which are also known as iSCSI Software Initiators. You can use RFC 1918 non-public IPv4 addresses in the following ranges for your iSCSI network: 10.x.x.x, 172.16.x.x – 172.31.x.x, and 192.168.x.x. Later, I’ll discuss additional security considerations.
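If you prefer the command line, you can assign the static addresses with netsh. This is a minimal sketch; the interface name, IP addresses, and subnet mask are placeholder examples that you'd replace with values from your own addressing plan:

```
rem Rename the NIC dedicated to iSCSI traffic (example interface name)
netsh interface set interface name="Local Area Connection 2" newname="iSCSI"

rem Assign a static IPv4 address from an RFC 1918 range (example address)
netsh interface ipv4 set address name="iSCSI" static 192.168.50.10 255.255.255.0

rem Optionally assign a static IPv6 address (example address)
netsh interface ipv6 add address interface="iSCSI" address=fd00:50::10
```

Renaming the interface isn't required, but it makes it obvious at a glance which NIC carries iSCSI traffic.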
When you first start Storage Server and log on using the default username and password from the OEM that built your server, you’ll be presented with an Initial Configuration Tasks (ICT) screen customized for Storage Server and similar to that shown in Figure 1. Each OEM can configure the ICT screen—for example, to add tasks for creating and managing two-node Storage Server clusters for fault tolerance and resiliency. Your first steps should be to configure the time zone, IP addresses, and host name, and to domain-join Storage Server. Next, download and install updates. I recommend that you not enable automatic updating for production environments because you don’t want your Storage Server system to reboot in an uncontrolled fashion after installing updates, making the shares and volumes it serves unavailable.
Provisioning and Serving NAS Using SMB and NFS
From the ICT screen, you can click Provision a volume to launch the Provision Storage Wizard, which Figure 2 shows. You'll have the option to provision storage on one or more disks attached to the server that are online, initialized, and have unallocated space. If your disks are offline or haven't yet been initialized, you'll have to use Disk Management under the Storage node in Server Manager to bring them online and initialize them before the wizard will recognize them. You'll also have the opportunity to provision storage on storage subsystems, such as SANs, if Storage Server fronts a storage subsystem. Stepping through the wizard lets you select the unallocated storage, choose the amount you want to allocate, decide where to mount the storage when allocated, and select the options to use when formatting it. When you create storage, you can choose to launch the Provision a Shared Folder Wizard, which takes you through the steps of creating a new folder, setting permissions on it, and sharing it using SMB and (if you have Server for NFS installed on Storage Server) NFS.
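If you'd rather not click through Disk Management, diskpart can bring a disk online and initialize it from the command line. A sketch, assuming the new disk appears as disk 1 (check the output of list disk first):

```
diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert gpt
```

The convert gpt command initializes the disk with a GPT partition table; use convert mbr instead if you need MBR.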
Figure 3 shows the new Microsoft Management Console (MMC) Share and Storage Management snap-in in Storage Server, available from the Administrative Tools folder on the Start menu. From the snap-in, you can launch the Provision Storage Wizard and the Provision a Shared Folder Wizard. You can also manage open sessions and files and configure NFS if it's enabled. You don't need to use these wizards to manage disks, volumes, and shares: you can perform all those tasks from Disk Management in Server Manager and from Windows Explorer, and you can use the command line and Windows PowerShell as you would on Server 2008 R2. The MMC snap-in and wizards simply make the tasks easier for inexperienced administrators.
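For example, once a volume is mounted (here as E:), you can create and share a folder entirely from the command line; the folder path and group name below are placeholders:

```
rem Create the folder and grant an example group modify rights on it
mkdir E:\Projects
icacls E:\Projects /grant "CONTOSO\File Users":(OI)(CI)M

rem Share the folder over SMB with change permissions for the same group
net share Projects=E:\Projects /GRANT:"CONTOSO\File Users",CHANGE
```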
Storage Server and iSCSI
Storage Server Enterprise Edition includes iSCSI target software. With this software, you can serve iSCSI clients using the iSCSI protocol. Managing iSCSI on Storage Server isn't as easy as provisioning and serving NAS using SMB and NFS. Before you start working with iSCSI in Storage Server, you’ll need to understand how it works. When Storage Server is used as a NAS server, you share folders, but when you use iSCSI, Storage Server serves whole drives to clients. The drives themselves aren't physical drives on Storage Server. Rather, they're virtual disks implemented as Virtual Hard Disk (VHD) files stored on volumes across one or more physical drives. This separation of iSCSI virtual disks, volumes, and physical drives isn't unique to Storage Server and can be found in other iSCSI server implementations.
This separation does come with many advantages. You can create iSCSI virtual disks that are the right size for the clients that will use them, freeing you from creating iSCSI disks that are the same size as physical drives. A physical drive can hold one or more volumes each with one or more VHDs. Alternatively, a volume can span multiple physical drives and hold one or more VHDs. Another advantage is that you can move the VHDs representing the virtual disks served by Storage Server to different volumes or even to different servers as needed.
Virtual Disk Management
You create a VHD file in Storage Server using the MMC Microsoft iSCSI Software Target snap-in, which Figure 4 shows. You can find this snap-in in the Administrative Tools folder on the Start menu. Right-click the Devices node, and select Create Virtual Disk from the context-sensitive menu to launch the Create Virtual Disk Wizard. As you step through the wizard, you’ll be asked for the location and name of the VHD file representing the virtual disk, the size of the virtual disk in megabytes, and a description of the virtual disk. You’ll also be prompted to specify the iSCSI targets that can access the virtual disk. You can specify the iSCSI targets later.
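The target software can also be scripted: it ships with a WMI provider, so a virtual disk can be created from PowerShell. The WT_Disk class and NewWTDisk method shown below reflect my understanding of that provider's interface, so verify the class and method names against the documentation for your installed target version; the VHD path and size are placeholders:

```
# Assumption: the iSCSI Software Target WMI classes live in the root\wmi namespace
$wtDisk = [wmiclass]"\\.\root\wmi:WT_Disk"

# Create a 10GB (10,240MB) virtual disk backed by a VHD file (example path)
$wtDisk.NewWTDisk("D:\iSCSIVirtualDisks\Data1.vhd", 10240)

# List the virtual disks the target software knows about
Get-WmiObject -Namespace root\wmi -Class WT_Disk | Format-Table DevicePath, Size
```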
On Storage Server, you can mount the virtual disks, and initialize, format, and assign a drive letter to them through Windows Server's standard Disk Management snap-in. This allows you to prepare the virtual disks before serving them to iSCSI clients. Mounted disks can also be backed up using your standard backup software. To mount a disk, select the virtual disk in the Microsoft iSCSI Software Target snap-in, right-click it, select Disk Access, and click Mount Read/Write. You can dismount a virtual disk by right-clicking the disk, selecting Disk Access, and clicking Dismount.
You can create snapshots of virtual disks using the Microsoft iSCSI Software Target snap-in by right-clicking a virtual disk and selecting Create Snapshot. Snapshots can be mounted just like virtual disks. You can also roll back a virtual disk to a snapshot and export it. When you export a snapshot, you make it available to iSCSI clients just like a regular virtual disk. You can delete snapshots when they're no longer required.
It’s possible to configure Storage Server to take regular snapshots using the Schedule Snapshot Wizard, which you can launch by right-clicking the Schedules node under the Snapshots node. A schedule can perform one of two actions: snapshot virtual disks, or snapshot virtual disks and mount the snapshots locally on the server. A snapshot schedule can snapshot all virtual disks or just the virtual disks you select, and snapshots can be taken on a daily, weekly, monthly, or one-time-only basis. Within each period, you can select the day and time of the snapshot.
Serving Virtual Disks to iSCSI Clients
Once you've created virtual disks, you need to specify iSCSI targets, which are used by iSCSI clients to mount the disks locally. An iSCSI target is associated with a virtual disk. To create an iSCSI target, right-click the iSCSI Targets node, and select Create iSCSI Target to launch the Create iSCSI Target Wizard. After the introduction page in the wizard, the first step asks you to specify a name for the iSCSI target and an optional description. The target name is used by iSCSI clients to specify and connect to the target. The iSCSI target name can contain only letters and numbers without spaces.
The second step asks you to enter the iSCSI Qualified Name (IQN) of the iSCSI initiator (the iSCSI client) that will connect to the target. On Windows Server–based iSCSI clients, you can find the IQN by running the iSCSI Initiator program from the Administrative Tools folder on the Start menu. Figure 5 shows you where to find the IQN. You can list multiple IQNs for a target. You can also specify iSCSI clients by Fully Qualified Domain Name (FQDN) and by IP address instead. Once a target is created, associate a virtual disk by right-clicking it and selecting either the option to create a virtual disk or the option to add an existing virtual disk. You can add multiple disks to a target if you wish. Each virtual disk you add to a target is given a unique Logical Unit Number (LUN), just like a regular SCSI subsystem. The iSCSI client references each virtual disk using its LUN.
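You can also read the IQN without opening the GUI. Running iscsicli with no arguments prints the local node name, and a sketch using the initiator's WMI class (MSiSCSIInitiator_MethodClass, which I believe exposes the node name) looks like this:

```
# Print the local initiator's IQN (its iSCSI node name)
(Get-WmiObject -Namespace root\wmi -Class MSiSCSIInitiator_MethodClass).iSCSINodeName
```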
To access an iSCSI target, an iSCSI client is configured using its iSCSI initiator. Before you can add a target, you must discover it by clicking the Discovery tab, clicking the Add Portal button, and specifying the IP address of Storage Server. This is the IP address of the NIC on the dedicated iSCSI network, as Figure 6 shows. Once a target is discovered, it's listed on the Targets tab and shown as Inactive. Select an inactive target and click the Log On button to connect to it. You can specify that the connection to the target should be restored every time the computer starts.
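The same discovery and logon steps can be performed with the iscsicli command-line tool on the client; the portal address and target IQN below are examples:

```
rem Discover targets on the Storage Server portal (example address)
iscsicli QAddTargetPortal 192.168.50.10

rem List the targets the initiator has discovered
iscsicli ListTargets

rem Log on to a discovered target (example IQN)
iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage1-data1-target
```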
On both the iSCSI target and the initiator, you can specify additional configuration options, such as the source IP address, which is typically the IP address of a NIC on a dedicated iSCSI IP network. Unless you run into difficulty connecting an initiator to a target, I recommend you simply accept the default options.
When the iSCSI initiator logs on to a target, the disks are made available and visible in the Disk Management snap-in. The disks will be offline by default and must be brought online. Unless you mounted the disks on Storage Server and prepared them, you’ll need to initialize and format the disks on the iSCSI client before you can assign them to a drive letter or mount them onto an empty folder on an NTFS volume.
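On the client, diskpart can bring the new iSCSI disk online, partition it, and format it in one sitting; assume list disk shows the new disk as disk 2:

```
diskpart
DISKPART> select disk 2
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label="iSCSIData"
DISKPART> assign letter=F
```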
In production environments you probably go to great lengths to physically secure your hosts and data. When you use iSCSI, you introduce potential risks. If malicious insiders or (even worse) hackers outside your network gain access to the iSCSI network, they can potentially eavesdrop on iSCSI traffic and obtain sensitive data. They might also be able to connect to an iSCSI target, mount an iSCSI virtual disk onto their machine, and access it, bypassing any file system controls, such as DACLs, if they know how.
Fortunately, there are several steps you can take to minimize the risks. First, as I described earlier, you should create a dedicated iSCSI network, and you should ensure that no traffic can be routed to or from this network. All routers, switches, and other equipment should be dedicated to the iSCSI network. Second, it’s possible to use the Challenge Handshake Authentication Protocol (CHAP), which uses a shared secret known to the targets and the initiators to restrict access. You can also use mutual CHAP, in which a pair of secrets must be known to the initiator and the target. For authentication, it’s also possible to use RADIUS, which is stronger than CHAP alone.
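With iscsicli, a one-way CHAP secret can be supplied at logon time. The Microsoft initiator expects CHAP secrets to be 12 to 16 characters long; the target IQN, username, and secret here are all examples:

```
rem Log on to a target using one-way CHAP (example IQN, username, and secret)
iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage1-data1-target chapuser Secret12Chars
```

The same secret must, of course, be configured on the target's properties before the logon will succeed.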
While it’s possible to restrict access to a target to only those initiators who know the shared secret, CHAP won't prevent an eavesdropper from gathering sensitive data by simply monitoring communications between initiators and targets. The Microsoft implementation of iSCSI Initiator and Target supports IPsec using a shared secret, which can be employed to encrypt traffic between initiators and targets. Note that there is a performance overhead when using IPsec, which might make it less appealing for demanding environments.
Tip of the Iceberg
I've only scratched the surface of the features that Storage Server provides. Storage Server is an enterprise-ready NAS and iSCSI solution that integrates seamlessly into your Windows networks. Its deduplication technology helps you save disk space, and it can be deployed in fault-tolerant configurations to provide highly reliable solutions. Although it might seem daunting at first, once you've installed Storage Server and started using iSCSI, you'll find it amazingly flexible and almost second nature to use.