More on Fast Pipes

Every time I turn around, another new wrinkle appears in the storage I/O area. Storage I/O is, of course, of critical importance to storage practitioners, and given the number of possible solutions today, uncertainty abounds in the marketplace. The past few months have brought Adaptec announcements about Fibre Channel and Ultra160 SCSI, IBM announcements about InfiniBand specification developments, and word that a new Fibre Channel speed announcement should be forthcoming. I've also read several recent articles about gigabit and multigigabit Ethernet.

Recently, Giganet announced new gigabit Ethernet boards that work over existing Ethernet networks (one of Giganet's initial development partners is Network Appliance). These boards, which should be available soon, boost server I/O over the most widely used networking infrastructure to a couple of gigabits per second, a range that makes this technology practical for storage. Just over the horizon (18 months to 2 years out) is 10Gbps Ethernet, and I know of a group working to specify a 100Gbps Ethernet standard. This Ethernet technology has the potential to transform standard networks and let us work in new ways; it offers the kind of throughput that multimedia work requires, for example.

Giganet's Virtual Interface (VI) Architecture boards write data from memory on one computer directly to memory on another computer without requiring much work from either computer's processor (chips on the Giganet boards do the processing). Giganet VI messages are differentiated from standard IP packets through extra addressing in the packet header. VI traffic is embedded in a standard TCP/IP-based frame and runs over the same wire and through the same switches. A VI system lets you double your application throughput while using about 20 percent of the computer resources that would be required to move the same amount of data using standard OS calls. (I'm told that in instances where applications would require 3000 programming calls, Giganet's boards can reduce system overhead to about 50 to 60 programming calls.) Typically, system overhead for high-speed data transfer using the new boards is about 7 percent of server processing, a significant reduction in networking overhead.
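To make the kernel-bypass idea concrete, here is a minimal C sketch of the pattern a VI-style interface follows: the application registers a buffer once, then posts send descriptors that the board completes without further per-I/O OS calls. The vi_* names, the "clan0" device string, and the stub bodies below are hypothetical stand-ins I've made up for illustration; they are not Giganet's actual programming interface.

/*
 * Hypothetical sketch of a VI-style (kernel-bypass) send path.
 * The vi_* names are illustrative stand-ins, NOT Giganet's actual API;
 * the stubs simply simulate the flow so the sketch compiles and runs.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char device[16]; } vi_nic;           /* VI-capable NIC handle  */
typedef struct { char *remote_mem; } vi_conn;         /* simulated peer memory  */
typedef struct { void *buf; size_t len; } vi_memhdl;  /* pre-registered buffer  */

static vi_nic *vi_open(const char *device) {
    vi_nic *nic = malloc(sizeof *nic);
    snprintf(nic->device, sizeof nic->device, "%s", device);
    return nic;
}

static vi_conn *vi_connect(vi_nic *nic, size_t remote_len) {
    (void)nic;
    vi_conn *c = malloc(sizeof *c);
    c->remote_mem = calloc(1, remote_len);  /* pretend: peer's registered memory */
    return c;
}

/* Register once; a real VI NIC would pin and map this buffer for DMA. */
static vi_memhdl vi_register_memory(void *buf, size_t len) {
    vi_memhdl h = { buf, len };
    return h;
}

/* Post a send descriptor; the board's chips, not a kernel call,
   would move the data into the peer's registered memory. */
static void vi_post_send(vi_conn *conn, vi_memhdl *mem, size_t len) {
    memcpy(conn->remote_mem, mem->buf, len);  /* simulated memory-to-memory write */
}

int main(void) {
    vi_nic  *nic  = vi_open("clan0");
    vi_conn *conn = vi_connect(nic, 64);

    char buf[64] = "payload";
    vi_memhdl mem = vi_register_memory(buf, sizeof buf);

    /* One descriptor post replaces many per-I/O OS calls. */
    vi_post_send(conn, &mem, strlen(buf) + 1);
    printf("peer now sees: %s\n", conn->remote_mem);

    free(conn->remote_mem);
    free(conn);
    free(nic);
    return 0;
}

The point of the pattern is visible in where the work happens: registration and connection setup are paid once, after which each transfer is a descriptor post that hardware completes, which is why the per-transfer host overhead drops so sharply.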

Giganet has been involved in some of the largest clustering and load-balancing systems available today (for more information, see "World's Largest NT Cluster Goes Live," Windows 2000 Magazine, August 1999, InstantDoc ID 7149). At Cornell University's computer center, the AC3 Velocity cluster was built with 64 quad-processor 500MHz Pentium III Dell PowerEdge servers, Dell PowerVault storage, and Giganet's cLAN host adapters, running Windows NT Server 4.0. Giganet's VI technology is the glue that holds together many of the largest Windows and Linux clusters, which makes it a strategic technology.

Giganet's new products deliver today the kind of performance that InfiniBand can only promise for the next year or so. Giganet has not announced prices, but if the company can price the products affordably, VI technology may become widespread. Then the question will be whether mainstream server vendors will embrace it.

I've never understood why anyone who didn't have to would want to build two separate data network infrastructures, one for storage and one for server I/O traffic. Giganet's boards essentially double your network throughput for a modest upgrade, and they are particularly valuable in point-to-point situations such as those involving Network Attached Storage (NAS) or backup.
