Building a data center remains one of the most costly aspects of any IT infrastructure, and none is more expensive than the sprawling, power-sucking behemoths that power the web services of the likes of Google, Microsoft, and Facebook. While Google and Microsoft have been relatively tight-lipped about their data center approach and philosophy, Facebook has decided to open its data center design philosophy to the world. In the process it has also announced the launch of the Open Compute Project, a community initiative designed to encourage the sharing of data center design best practices.
Facebook's computing needs are vast, and few companies on the globe require so many computing resources. Operating data centers at this scale has led Facebook into uncharted territory and resulted in some surprising design decisions. In a statement announcing the news, Facebook claims that its data center design approach has "delivered a 38 percent increase in energy efficiency at 24 percent lower cost."
According to the Open Compute Project Facebook page, a team of engineers worked for more than two years to develop the Open Compute Project specs, research that culminated in the construction of an all-new Facebook data center in Prineville, Oregon. Along the way, Facebook designed its own battery backup systems, servers, server racks, and power supplies.
Jonathan Heiliger, Vice President of Technical Operations at Facebook, points out the following positive features of their data center approach:
- Uses a 480-volt electrical distribution system to reduce energy loss.
- Removes anything in our servers that doesn't contribute to efficiency.
- Reuses hot aisle air in winter to both heat the offices and the outside air flowing into the data center.
- Eliminates the need for a central uninterruptible power supply.
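The first bullet is worth a moment of back-of-the-envelope physics. For a fixed power draw, conduction loss in a wire scales with the square of the current (P = I²R), and current falls as distribution voltage rises, so moving from a conventional 208-volt feed to 480 volts sharply cuts resistive losses. The sketch below is my own illustration of that relationship, not Facebook's published math, and the 208-volt baseline is an assumption for comparison:

```python
def relative_conduction_loss(v_old: float, v_new: float) -> float:
    """Ratio of I^2*R conduction loss at v_new vs. v_old, assuming the
    same delivered power and the same conductors (so R is unchanged).
    Current scales as 1/V, so loss scales as (v_old / v_new)^2."""
    return (v_old / v_new) ** 2

# Stepping distribution up from a hypothetical 208 V to 480 V:
print(relative_conduction_loss(208, 480))  # ~0.19, i.e. roughly 80% less wire loss
```

In practice a higher distribution voltage also lets a design eliminate intermediate transformation stages, each of which wastes a few percent of the power passing through it, which is likely where much of the real-world saving comes from.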
So how did Facebook achieve such impressive cost and energy savings? Partly by paring its expenses down to the bare essentials. Online shopping giant Amazon.com is well-known for its spartan, "why pay for something the customer never sees" approach to cost containment, and Facebook follows the same mantra. Servers and server racks are nondescript and use the smallest amount of hardware possible. Here's an excerpt from the release:
"Servers use a vanity-free design with no paint, logos, stickers, or front panel – and are free of all non-essential parts. This saves more than 6 pounds of materials per server. In a typical data center, this would save more than 120 tons of material from being manufactured, transported, and, ultimately, discarded."
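The release doesn't say how many servers a "typical data center" holds, but the arithmetic is easy to check: at 6 pounds saved per server, 120 tons implies a facility on the order of 40,000 servers. That server count is my inference, not a figure from Facebook:

```python
# Sanity-checking the release's numbers (the implied server count is
# an inference from the quote, not a published Facebook figure).
POUNDS_SAVED_PER_SERVER = 6
POUNDS_PER_TON = 2000
TONS_SAVED = 120

servers_implied = TONS_SAVED * POUNDS_PER_TON / POUNDS_SAVED_PER_SERVER
print(servers_implied)  # 40000.0 -- the quote implies roughly 40,000 servers
```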
Facebook has also published specs and designs for all of its Open Compute Project components, including server and battery cabinets, power supplies, and motherboards, as well as mechanical and electrical specs for constructing the data center itself. AMD, Dell, HP, and Intel all helped Facebook develop its data center technology, and Dell has announced that it will offer servers based on Facebook's Open Compute Project specs.
While all of this is fascinating to see, it remains to be seen what sort of impact the Open Compute Project will have on the specialized (and often expensive) data center consulting services that hardware vendors like Dell, HP, and IBM offer their larger customers. I attended the recent unveiling of a new 50,000-square-foot HP data center research facility just down the street from the Business Technology and Windows IT Pro editorial offices in Fort Collins, CO.
At that event HP executives and researchers gave us a glimpse of future data center designs and touted the services provided by HP Critical Facilities Services, a division of HP that helps large customers design, build, and maintain mission-critical IT infrastructures. If Facebook is open-sourcing the secret sauce of how it puts together a cost-effective, energy-efficient data center that serves billions of web pages, I imagine the sales job of expensive data center consultants from Dell, HP, and IBM just got an order of magnitude more difficult.
Will you take advantage of Facebook's efforts and apply some of their learnings to your own data center deployments? Let me know what you think by commenting on this blog post or following me on Twitter.