
Spinnaker Networks SpinServer 3300

NAS and multivendor storage management

Spinnaker Networks' SpinServer 3300 is a highly scalable Network Attached Storage (NAS) server that incorporates and manages third-party Fibre Channel RAID storage products and supports common protocols, such as Windows Common Internet File System (CIFS) and UNIX NFS. The system has two core components: the SpinServer NAS server and the SpinStor storage enclosure, which is available in Just a Bunch of Disks (JBOD) or Fibre Channel-based RAID configurations. One SpinServer 3300 cluster can scale to as many as 512 servers and as much as 11,000TB of data. Each SpinServer 3300 system comes with two Pentium III 1.26GHz processors, 4GB of memory, and Spinnaker's Linux-based SpinFS distributed file system (DFS). Each 4U (7") rack-mount server is equipped with four Fibre Channel ports for storage attachment and four switch-attached Gigabit Ethernet connections—two for user-data access and two for server-to-server communication. The SpinServer 3300 is designed to work with any storage enclosure that includes a Fibre Channel host interface and presents storage as LUNs, including enclosures that house Fibre Channel, SCSI, Serial ATA, or Parallel ATA disk drives. Pricing starts at $49,100 for one server and the base software. The SpinServer 3300 is designed to allow full management of attached storage—including moving data to a new location—while users continue to access and update the data.

SpinServer Architecture
The SpinServer architecture includes a Virtual File System (VFS), virtual servers, virtual interfaces, and storage pools. SpinServer groups servers into units called SpinClusters—global file systems that can span hundreds of servers, regardless of those servers' locations. This architecture lets users access data from any client port within the cluster, lets you manage all your servers as one global resource, and separates user access from file-system management. When a server receives a data request, SpinFS uses the server's server-communication Gigabit Ethernet connections (the cluster network) to locate and retrieve the data from servers anywhere within the cluster, then uses the user-data-access Gigabit Ethernet connections to relay the data back to the user. The data's location is transparent to the user.
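
The following minimal Python sketch models that two-network flow. The cluster map, function names, and messages are my invention for illustration only; they don't represent SpinFS internals:

    # Illustrative model of SpinFS request routing; all names are invented.
    CLUSTER_OWNERSHIP = {"server1": {"poolA"}, "server2": {"poolB"}}

    def locate_owner(pool):
        # SpinFS first resolves which server owns the pool holding the data.
        for server, pools in CLUSTER_OWNERSHIP.items():
            if pool in pools:
                return server
        raise LookupError(pool)

    def handle_request(entry_server, pool, path):
        owner = locate_owner(pool)
        if owner == entry_server:
            return f"{entry_server}: read {path} locally"
        # Otherwise the owning server relays the data over the cluster
        # network, and the entry server returns it on the user-access network.
        return f"{owner} -> cluster net -> {entry_server}: read {path}"

    print(handle_request("server1", "poolB", "/vfs1/report.doc"))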

The SpinServer VFS is a universal storage container—a set of files or directories that you can view and manipulate as one unit. You can assign each VFS to a user, group of users, or application. SpinFS routes user data requests to the server that controls the VFS. SpinFS lets you combine multiple storage units, each of which must be accessible through one of the server's host bus adapters (HBAs), to create a storage pool—a logical map of the physical storage. When you create a VFS, you ask SpinFS to allocate (i.e., reserve) a certain amount of space in the storage pool (the minimum quota) and specify the largest amount of space the VFS can use (the maximum quota). After you define several VFSs in one storage pool, you can allocate additional space to any of them, up to each VFS's maximum quota. If you have Windows clients, you can create a CIFS virtual server and specify an existing storage pool to hold the root of the server's file system. Currently, CIFS virtual servers can be members of any Windows domain functional level, from Windows NT 4.0 through Windows Server 2003 Interim; support for the Windows 2003 functional level is planned for fourth quarter 2003. CIFS virtual servers can use Kerberos or NT LAN Manager (NTLM) authentication. You can create a storage hierarchy by joining one or more VFSs to a virtual server. Within the storage hierarchy, you can then define a share name for one or more VFSs and create user access points that are visible in the client's Network Neighborhood.
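
To make the quota semantics concrete, here's a hypothetical Python model of a storage pool; the class and method names are my invention, not Spinnaker's API. Only the minimum quota is reserved up front, and the maximum quota simply caps later growth:

    # Hypothetical model of storage-pool quota accounting.
    class StoragePool:
        def __init__(self, size_mb):
            self.size_mb = size_mb
            self.reserved_mb = 0   # sum of minimum quotas across all VFSs

        def create_vfs(self, name, min_quota_mb, max_quota_mb):
            # The minimum quota is reserved immediately; the maximum quota
            # is only a ceiling on how large the VFS can later grow.
            if self.reserved_mb + min_quota_mb > self.size_mb:
                raise ValueError("pool cannot reserve the minimum quota")
            self.reserved_mb += min_quota_mb
            return {"name": name, "min_mb": min_quota_mb, "max_mb": max_quota_mb}

    pool = StoragePool(size_mb=100_000)
    vfs1 = pool.create_vfs("vfs1", min_quota_mb=5_000, max_quota_mb=30_000)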

Exchange and SQL Support
SpinServer supports Microsoft SQL Server, although the product hasn't yet earned Microsoft's certification. SpinServer doesn't support Microsoft Exchange Server, however; the current SpinFS release offers only file-mode (CIFS and NFS) access, whereas Exchange requires a block-mode interface. A Spinnaker spokesperson told me that the company plans to offer a block-mode interface in the future. (For more information about using Exchange with NAS and Storage Area Network (SAN) devices, see the Microsoft article "XADM: Exchange 2000 Server and Network-Attached Storage" at http://support.microsoft.com/?kbid=317173.)

Systems Management
You manage SpinFS through Command Line Interface (CLI) commands as well as through SpinManager, the Web browser-based GUI that Figure 1 shows. SpinServer accepts CLI commands from a serial console that's connected to the server and through Telnet or Secure Shell (SSH) connections. SpinManager integrates with HP OpenView Network Node Manager (NNM): it discovers SpinServers on the network, manages those servers through the NNM interface, and receives SpinServer event notification through NNM. The flexible SpinFS event-notification system, which can use email, SNMP traps, and user-defined Java classes, lets you create notifications according to the event severity level (e.g., warning, critical) that one or more SpinFS subsystem modules report, or based on specific events in one module.
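
Conceptually, such notification rules might look like the short Python sketch below. The severity levels mirror the ones the article mentions, but the rule format and handler names are purely illustrative assumptions, not SpinFS's actual configuration:

    # Sketch of severity- and module-based event routing; handler names
    # and the rule format are invented for illustration.
    SEVERITY = {"info": 0, "warning": 1, "critical": 2}

    def notify_email(event):
        print("email:", event)

    def notify_snmp_trap(event):
        print("snmp trap:", event)

    RULES = [
        # Route by severity threshold across all modules...
        (lambda e: SEVERITY[e["severity"]] >= SEVERITY["critical"], notify_snmp_trap),
        # ...or by specific events in one module.
        (lambda e: e["module"] == "nfs" and e["severity"] == "warning", notify_email),
    ]

    def dispatch(event):
        for matches, handler in RULES:
            if matches(event):
                handler(event)

    dispatch({"module": "storage", "severity": "critical", "msg": "array degraded"})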

You'll experience a learning curve when you start using the SpinServer architecture, as you would with any system that offers a significant amount of configuration flexibility. I found the process to be straightforward, however, especially with the help of Spinnaker's PDF-based documentation and some willing and knowledgeable people in the company's technical support organization. I started with the Spinnaker SpinServer Architecture white paper, a 15-page document that provides a useful overview of the system's key components and capabilities. The 35-page Quick Start Guide guided me through the initial SpinServer configuration. The 252-page Installation and Configuration Guide and the 260-page Administration Guide expand on the Quick Start Guide. The 642-page Administration Reference details the CLI commands and provides a more granular look at the SpinServer architecture.

Testing SpinServer 3300
The system I tested consisted of two SpinServer 3300 NAS heads and two SpinStor RAID enclosures. The list price for the configuration I tested was $231,250. At $10,650 per server for each product, licenses for SpinHA (server failover) and SpinMirror (asynchronous mirroring) software contributed $42,600 to the total. Spinnaker provided two American Power Conversion (APC) UPSs because each SpinServer requires a UPS to ensure that system-state information is successfully written to nonvolatile RAM (NVRAM) before system shutdown. The company also provided a Dell Gigabit Ethernet switch configured for three Virtual LANs (VLANs).

Although Spinnaker can preconfigure systems as an optional service, I chose the do-it-yourself route. Using the Quick Start Guide, I easily assembled the system and connected the power, storage, and network cables. I configured and implemented the SpinServer in stages, first installing one server with one RAID enclosure, then adding the second RAID enclosure, and finally adding the second server to the cluster configuration in high-availability mode.

The initial stages of the SpinServer software installation required a console attached to the SpinServer's serial port. I used HyperTerminal on a Windows XP notebook computer as a console. After booting from the supplied CD-ROM, I used the serial console to configure the server's time zone and to assign an IP address to the dedicated 10/100Base-T Ethernet port. The rest of the SpinServer software installation was automatic, and the system rebooted.

Using Microsoft Internet Explorer (IE), I connected to the administrative IP address so that I could use SpinManager to configure the system. The Quick Start Guide includes a useful set of planning worksheets that I completed in advance; the configuration steps weren't difficult. I defined a logical cluster and provided IP addresses for the server cluster's network ports. With one RAID enclosure powered up, I created several RAID 5 arrays. Using two of the arrays (which Spinnaker calls storage units), I created a striped storage pool. Next, I created a virtual server by using the storage pool I just created to host the virtual server's file-system root. I then created two virtual interfaces, specifying IP addresses on the user-access network and initially assigning the IP addresses to the two data-network Gigabit Ethernet ports on the SpinServer. I bound the two virtual interfaces to the virtual server, establishing data-network communication.
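
The ordering of those steps matters because each object builds on the previous one. The outline below recaps the dependencies as a Python data structure; every name and address is a placeholder of mine, not a value SpinManager requires:

    # Placeholder recap of the build order, bottom layer first.
    cluster = {"name": "cluster1", "server_ports": ["10.0.0.11", "10.0.0.12"]}
    storage_units = ["raid5_array1", "raid5_array2"]   # RAID 5 arrays
    pool1 = {"stripe_across": storage_units}           # striped storage pool
    vs1 = {"root_pool": "pool1"}                       # virtual server
    virtual_interfaces = [                             # user-access network
        {"address": "192.168.10.21", "port": "data-eth0", "bound_to": "vs1"},
        {"address": "192.168.10.22", "port": "data-eth1", "bound_to": "vs1"},
    ]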

The SpinServer began to communicate on the network that hosted my domain controller (DC), and I was ready to create a share that Windows users could access. First, I described a domain by entering the name of my Win2K domain, the name and IP address of the PDC emulator, and the address of a WINS server (an optional step). Within this domain, I used the Microsoft Management Console (MMC) Active Directory Users and Computers snap-in to create a computer account named SPINVS1 that I also enabled for use with OSs earlier than Win2K. I used SpinManager to create a CIFS server, assigning the NetBIOS name SPINVS1 to the virtual server and joining the virtual server to the domain. I then created a VFS within the virtual server I had created earlier and mounted it as a directory named vfs1 off the virtual server's root directory, with an initial allocation of 5000MB and a maximum quota of 30,000MB. I specified options in the VFS, such as a domain user and group to receive default rights to the file system, as Figure 2 shows.

A final step—sharing the VFS I mounted at /vfs1 as SPINVFS1—made the space visible to and accessible by Windows users. A look at Network Neighborhood on my XP system showed that the server SPINVS1 was a member of the domain. I mapped a drive letter to the share SPINVFS1 and wrote to the disk.

Systems Administration Tasks
Protecting data and managing space are two important systems administration tasks. SpinServer has tools that support both tasks. SpinServer's first line of data protection is its use of fault-tolerant RAID arrays. For data backup, SpinServer supports Network Data Management Protocol (NDMP) 3.0 and NDMP 2.0, NDMP local mode, and VERITAS Software's NetBackup DataCenter 4.5 and NetBackup DataCenter 3.4. SpinServer also offers several data-preservation methods. As the name implies, a copy duplicates all the data in a VFS to the same or a different storage pool (which a different server might own) in the same virtual server. A mirror is a read-only copy in a different storage pool, replicated by schedule or on demand.

Like Windows 2003's Volume Shadow Copy Service (VSS), SpinServer's SpinShot software lets you schedule VFS point-in-time snapshots. Because SpinShot preserves a record of which storage blocks are in use and ensures that subsequent write activity to the VFS doesn't overwrite those storage blocks, SpinShot snapshots are fast and use little additional disk storage. Users see the point-in-time copy as a SpinShot directory at the same level at which the VFS is mounted in the share. When you create a SpinShot snapshot, you decide how many earlier versions of the snapshot to make available to users. Each version becomes a directory under the SpinShot directory. A clone is similar to a SpinShot snapshot except that you create a clone manually rather than on a schedule, and a clone isn't automatically visible to users.
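
A toy copy-on-write model in Python shows why such snapshots are fast and space-efficient. This is a generic illustration of the technique, not SpinShot's actual data structures:

    # Toy copy-on-write model: a snapshot records block references, and
    # later writes go to freshly allocated blocks instead of overwriting.
    blocks = {1: b"AAAA", 2: b"BBBB"}    # live block map: number -> data
    live = {"file1": [1, 2]}             # file -> list of block numbers
    snapshot = {name: refs[:] for name, refs in live.items()}  # refs only

    def write(file, index, data):
        # Allocate a fresh block rather than touching one the snapshot uses.
        new_block = max(blocks) + 1
        blocks[new_block] = data
        live[file][index] = new_block

    write("file1", 0, b"CCCC")
    assert blocks[snapshot["file1"][0]] == b"AAAA"  # snapshot sees old data
    assert blocks[live["file1"][0]] == b"CCCC"      # live file sees new data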

At some point, even the best-designed file server needs maintenance. Disks fill up and usage patterns change, resulting in pockets of poor performance. Managing the SpinServer storage pool is a basic administrative function for which Spinnaker offers several tools. First, expanding a storage pool is as easy as designating additional storage units (RAID arrays) to be part of the pool. SpinMove lets you move a VFS from one storage pool to another anywhere in the cluster. Spinnaker lets you perform these actions while users continue to access the network without disruption; you don't need to take the server offline to reconfigure it. When a VFS becomes obsolete, you can delete it, and it simply disappears from the virtual server's directory structure. Similarly, you can delete a storage pool after you delete all VFSs and mirrors that use space in the pool. Reconfiguring RAID arrays is a little more restricted: you must delete arrays created within the same SpinStor cabinet in reverse order of their creation, which could mean emptying all the storage pools in a cabinet so that you can reconfigure the disks into a different set of arrays. When this step is necessary, SpinMove lets you relocate the affected VFSs without disrupting user access (see my test results below).
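
The reverse-order rule means the arrays in a cabinet behave like a stack: last created, first deleted. A minimal Python sketch of that constraint, with array names invented:

    # The reverse-order deletion rule behaves like a stack (LIFO).
    cabinet = ["array1", "array2", "array3"]   # arrays in creation order

    def delete_array(name):
        if cabinet[-1] != name:
            raise RuntimeError("must delete arrays in reverse creation order")
        cabinet.pop()

    delete_array("array3")    # OK: most recently created
    # delete_array("array1")  # would raise: array2 still exists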

Security and Access Authentication
SpinFS supports Kerberos and NTLM user authentication for Windows clients and Kerberos and Network Information Service (NIS) authentication for NFS clients. When you create a VFS, you can specify only one domain user and one group to receive default access rights to the VFS. When you specify a Windows CIFS user and group, the specified user receives Full Control rights and becomes the default owner of the directory, whereas members of the group receive Read access only. By design, the specified user then uses a standard Windows file security interface (e.g., the Security tab in a Windows Explorer Properties page) to tailor access control rights as needed.

Mixed Windows and UNIX Environments
Because SpinServer is a Linux-based system, it supports users in mixed Windows and UNIX environments. From a user-authentication perspective, SpinServer supports NIS domains and both Microsoft and non-Microsoft Kerberos realms. SpinServer offers facilities to map Windows users or groups to an NFS equivalent and to map Kerberos identities to NIS names. In both instances, you can specify simple names or generic substitutions in the form of standard UNIX expressions. Windows-only environments don't require name mapping. SpinServer will use your NT 4.0 or Active Directory (AD) domain logons directly.
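
If those "standard UNIX expressions" behave like ordinary regular expressions (an assumption on my part), the mapping facility might resemble this Python sketch, with made-up domain names and patterns:

    # Illustrative regex-based mapping of Windows identities to NFS names;
    # Python's re module stands in for whatever syntax the product accepts.
    import re

    MAPPINGS = [
        (r"^MYDOMAIN\\(\w+)$", r"\1"),           # MYDOMAIN\jsmith -> jsmith
        (r"^(\w+)@CORP\.EXAMPLE\.COM$", r"\1"),  # Kerberos principal -> NIS name
    ]

    def map_identity(name):
        for pattern, replacement in MAPPINGS:
            if re.match(pattern, name):
                return re.sub(pattern, replacement, name)
        return name   # Windows-only shops need no mapping at all

    print(map_identity(r"MYDOMAIN\jsmith"))   # jsmith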

Although I didn't test alternative authentication methods, I did test basic NFS access to files written to a CIFS share. From a SuSE Linux Professional 8.2 desktop computer, I created a mount point and mounted the VFS root. From the desktop, I performed standard file manipulations (e.g., copy, delete) and played a .wav file using Kaboodle, a utility supplied with SuSE Linux, without any problems.

Testing Failover
After I completed some basic single-server testing, I installed the second SpinServer. When I completed the basic software installation, I used SpinManager to join the new server to the cluster and to define a virtual interface that corresponded to each of the server's physical data-network interfaces.

Configuring the cluster to allow component failover required a little additional work. First, I configured each virtual interface with a list of alternative physical interfaces to use in case the active physical interface failed. Spinnaker supports failover to interfaces on other servers in the cluster as well as to the other data interfaces on the same server. Next, for each storage pool, I configured an alternative server to take over when the primary server for the storage pool became unavailable. The last step—enabling high-availability mode—is automatic when your cluster consists of three or more servers. With my two-server cluster, I enabled high-availability mode by selecting the option from SpinManager's Cluster Status screen.
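
Conceptually, each virtual interface carries an ordered failover list, and the first healthy physical interface wins. This Python sketch restates that behavior; the interface and server names are placeholders, not SpinManager syntax:

    # Conceptual sketch of virtual-interface failover; names are invented.
    PHYS_UP = {"server1:eth2": False, "server1:eth3": True, "server2:eth2": True}

    VIRTUAL_IF = {
        "address": "192.168.10.21",
        "home": "server1:eth2",
        "alternatives": ["server1:eth3", "server2:eth2"],
    }

    def active_interface(vif):
        # Prefer the home interface; otherwise walk the alternatives in order.
        for phys in [vif["home"]] + vif["alternatives"]:
            if PHYS_UP[phys]:
                return phys
        raise RuntimeError("no healthy interface for " + vif["address"])

    print(active_interface(VIRTUAL_IF))   # server1:eth3 after eth2 fails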

To test failover, I used an application that provided immediate feedback when access to application data was interrupted—Windows Media Player (WMP) 9. The fact that I was able to enjoy Jimi Hendrix music during testing was just an added bonus. I configured the virtual server that hosted the music files to use several virtual interfaces. To test properly, I needed to know which interface WMP 9 used, so I used the virtual interface's IP address to map a drive to the CIFS share. SpinManager's Interface Status screen displayed the server and physical interface that hosted the virtual interface, so I was able to observe when SpinManager reported that the failover was complete.

To break the data-network interface for the IP address I mapped to, I unplugged the interface cable. WMP 9 stopped a few seconds later and didn't resume playing the next song in the list for more than 1 minute, even though SpinManager reported that the failover had completed after only a few seconds. Because the failure of the network interface terminated the current TCP session, the workstation service had to reestablish a session with the new interface before the application could continue. I also found that although WMP 9 needed a little more than 1 minute to restore the connection, within about 10 seconds after unplugging the cable, I was able to use Windows Explorer to browse the directory structure through the same IP address.

SpinFS completes failover of the virtual interface within a few seconds; after the failover, the application is responsible for restoring the TCP connection. I tested several variations on this theme with the same result. When the connection to a network interface was restored (i.e., when I plugged in the cable), SpinFS didn't automatically fail the virtual interface back to the connection; this process protects against intermittent failures that could cause the system to repeatedly fail over and back. To restore the original configuration, you can use either the SpinManager GUI or the corresponding CLI command to manually migrate the virtual interface back to the original physical interface. I saw the same minute-long interruption in WMP 9's ability to access files whenever I migrated a virtual interface to another physical interface.
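
The pattern generalizes beyond this product: when a failover moves an IP address, any client whose TCP session died must reconnect, and the client's retry timers, not the failover itself, dominate the visible outage. A generic Python sketch (not SpinFS code) of such a reconnect loop:

    # Generic illustration: reconnecting to a service after an IP failover.
    import socket
    import time

    def connect_with_retry(host, port, attempts=10, delay=5.0):
        for attempt in range(attempts):
            try:
                return socket.create_connection((host, port), timeout=5)
            except OSError:
                time.sleep(delay)   # each failed attempt lengthens the pause
        raise ConnectionError(f"{host}:{port} unreachable")

    # conn = connect_with_retry("192.168.10.21", 445)  # CIFS runs over TCP 445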

I designed the next set of tests to see what happened when the server that owned a storage pool experienced a problem. A storage pool is essentially a collection of RAID arrays striped or concatenated together; the OS views a hardware-based RAID array as one physical device owned by one server. While I listened to Hendrix through an interface on the server (SERVER1) that I knew would stay up, I used the Linux poweroff command to shut down SERVER2, the storage-pool-owning server. The music paused for only a second or two, then "Catfish Blues" rocked on. After resetting the original configuration, I unplugged all the Fibre Channel cables from the server that owned the storage pool, SERVER2. The music paused for about 3 seconds, then SERVER1 took over as the primary owner of the storage pool. From SpinManager, after reconnecting the Fibre Channel cables to SERVER2, I used the Failover command to force the storage pool to return to its original primary server, SERVER2. Another 3-second pause, and I was back in business. In each case, the other server needed only a few seconds to take control of the storage pool and start serving data. Because the TCP session between WMP 9 and SERVER1 wasn't interrupted by the induced failures, the application didn't play a role in the recovery; SpinFS did all the work.

The cluster network is an important part of the SpinFS architecture. When a user connects to an interface on one server and requests data on a storage pool that another server hosts, the data-hosting server uses the cluster network to relay the data to the user-connected server. What happens when communications across the cluster network fail? Accessing the cluster as described, I unplugged power from the Dell Gigabit switch that supported the cluster network. The music stopped for a little more than 1 minute before it resumed. My observation of Ethernet switch activity, which SpinManager confirmed, showed that SpinFS failed over the IP address I connected through (the virtual interface) to the server that hosted the storage pool, as Figure 3 shows.

I also tested Spinnaker's claim that you can move data from one storage location to another without disrupting user access to that data. While I used WMP 9 to play a long song, I moved the VFS to a storage pool in the other SpinStor enclosure. Moving the approximately 2.5GB of data took about 90 seconds (a sustained rate of roughly 28MB/sec) without any interruption in the music flow.

SpinServer Advantages
SpinServer 3300 combines the advantages of Ethernet-based NAS with the power of a global DFS. It's flexible and easy to manage. I wasn't able to test the product in a mixed-vendor storage environment, but my results suggest that the SpinServer architecture can provide storage-virtualization advantages for such environments. If you manage large storage farms and want to position yourself for growth, give this product serious consideration.



Contact Information
Product: SpinServer 3300
Company: Spinnaker Networks * 412-968-7746
Web: http://www.spinnakernet.com
Price: Entry level: $49,100; as tested: $231,250
Decision Summary
Pros: Flexible multivendor storage management
Cons: Priced for enterprises that require its advanced features