The Grid Heats Up

Grid computing has been on the IT hype agenda for a while, with IBM, HP, and Oracle, among others, pushing the concept hard. The idea, for readers who might have missed the release of Oracle Database 10g, is to harness the unused processing cycles of thousands of inexpensive computers to deliver the computational power of large SMP servers or mainframe computers at a fraction of the cost. The long-term vision of grid computing is to use a utility model to deliver computing power. Under this vision, users can turn on and pay for exactly the amount of computer processing they need at exactly the moment they need it, much like people can turn on and off water or electricity.

The ideas behind grid computing have been put to work in some scientific computing settings. Perhaps the most high-profile example is the SETI@home project, which analyzes radio waves to search for intelligent extraterrestrial life. SETI@home's global network of 3 million computers averaged about 14 trillion floating-point operations per second (FLOPS) and generated more than 500,000 years of processing time in a year and a half. Despite success in the scientific arena, many observers believe that grid computing won't have widespread commercial application for years to come.

Nevertheless, as is often the case with IT, where there is smoke, there is often a little fire as well. Two deals announced in fourth quarter 2003 have signaled that grid computing won't be just about roping together thousands of servers but will have a significant impact on storage.

In November, Network Appliance (NetApp) snapped up Spinnaker Networks for $300 million in what NetApp executives described as a "software technology acquisition." As Dave Hitz, cofounder and executive vice president of NetApp, put it, "Behind every computer grid is the need for a storage grid." NetApp was particularly interested in Spinnaker's global distributed file system (DFS) technology, SpinFS, which enables sharing files across storage servers--the key technology for a storage grid.

And in December, Red Hat announced that it would purchase Sistina Software, which offers a global DFS technology that's geared to the Linux marketplace. Many observers believe that Linux servers will be the workhorses of most grid infrastructures. Interestingly, just weeks before the Red Hat acquisition, SAP Ventures (SAP's venture-capital arm) had invested in Sistina, indicating its interest in developing enterprise Linux applications.

The move toward grid computing is likely to pressure the storage infrastructure to develop along three different--and not necessarily compatible--trajectories simultaneously. First, building a grid might mean that storage subsystems will be smaller, simpler, and closely tied to specific computing resources. If that's the case, however, the storage infrastructure will still need to be managed as a single image. In other words, many small, far-flung storage devices will have to be coordinated centrally.

Second, some industry observers suggest that storage subsystems will become larger and function as peer servers in the grid, potentially simplifying storage management. And third, the concepts that underlie grid computing might also be applied to storage. Instead of investing in large central storage hardware or even costly Storage Area Networks (SANs), companies could figure out a way to use all those half-full 80GB hard drives sitting on people's desktops. Of course, managing the interactions between two separate grids would raise the level of complexity another notch.
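To make the storage-grid idea concrete, here's a toy sketch--not any vendor's implementation, and all names are hypothetical--of how spare capacity scattered across many small disks could behave like one logical store: split a file into blocks and place each block on a node chosen by hashing the block's key, so any client can later find and reassemble the blocks deterministically.

```python
import hashlib

class StorageGrid:
    """Toy illustration: pool many small 'disks' into one logical store."""

    def __init__(self, node_names):
        # Each "node" stands in for a desktop's spare disk space.
        self.nodes = {name: {} for name in node_names}

    def _place(self, key):
        # Deterministic placement: hash the block key to pick a node,
        # so readers and writers agree on locations without a lookup table.
        digest = hashlib.sha256(key.encode()).hexdigest()
        names = sorted(self.nodes)
        return names[int(digest, 16) % len(names)]

    def put(self, filename, data, block_size=4):
        # Split the file into fixed-size blocks and scatter them across nodes.
        for i in range(0, len(data), block_size):
            key = f"{filename}:{i // block_size}"
            self.nodes[self._place(key)][key] = data[i:i + block_size]

    def get(self, filename):
        # Reassemble by recomputing each block's location in order.
        blocks, i = [], 0
        while True:
            key = f"{filename}:{i}"
            node = self._place(key)
            if key not in self.nodes[node]:
                break
            blocks.append(self.nodes[node][key])
            i += 1
        return b"".join(blocks)

grid = StorageGrid([f"desktop-{n}" for n in range(8)])
grid.put("report.txt", b"grid storage pools spare capacity")
assert grid.get("report.txt") == b"grid storage pools spare capacity"
```

A real storage grid would of course add replication for failed nodes, rebalancing when nodes join or leave, and coherent locking--which is precisely the hard part that global DFS technologies such as SpinFS aim to solve.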

The Spinnaker and Sistina deals are a sign that the industry is beginning to address these issues. In fact, Joaquin Ruiz, vice president of marketing and product management at Sistina, has argued that the basic building blocks for grid computing and the storage infrastructure to support it already are in place. The key requirements, he said, are sufficient network bandwidth, low-cost hardware and software, and good interconnect technology. According to Ruiz, those already exist; what's still lacking is sufficient planning and a good understanding of data flows. A grid won't fulfill its potential if it's riddled with bottlenecks.

Although years might pass before grid computing is common in most enterprises, cutting-edge companies are starting to move forward with impressive results. For example, last summer Acxiom, a company that manages and enhances customer data for enterprises, deployed a grid that Company Leader Charles Morgan claimed increased data processing throughput by a factor of 40 while cutting capital equipment expenditures by 70 percent. And Yahoo!, NetApp's Hitz pointed out, uses grid technology to run its email service for 150 million users.

Although small, the grid-computing fire is starting to burn hotter. Grid computing isn't just about increased, low-cost processing power. It will also have a significant impact on the storage infrastructure.
