As data center operators try to squeeze every ounce of efficiency out of their facilities, many of them are facing increasing workload requirements from big data, machine learning, and related technologies.
As a result, many companies operating data centers are pushing their server racks to work harder by packing more computing power into each one. The power density inside each rack cabinet – and across the data center as a whole – has risen significantly in recent years, several operators say.
Data center workloads have risen dramatically, “thus demand for power density at the rack level has also increased,” said Craig Broadbent, VP of customer solutions and services at Virtual Power Systems, a data center software vendor.
In the past, data center operators “have chosen to throw more low-density racks at the issue, creating rack sprawl to handle the increase in IT requirements,” but that’s changing as data centers focus on hyper-scale workloads while limiting the amount of space they occupy, he said.
Toward the end of the last decade, a server rack’s power density was typically in the 3kW to 5kW range. Many operators say that number has risen steadily, even though there’s no consensus about what today’s range is. Some operators say a typical rack’s power density is now in the 5kW to 13kW range, while others say some racks can consume 25kW or even 40kW.
For colocation providers, the typical power density in a data hall has risen from 150 watts per square foot to 250 or 300 watts per square foot, with some providers going up to 400 watts, said Mike Kilkeary, associate principal engineer at Southland Engineering, an engineering services provider.
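A rough conversion ties those floor-level figures back to the per-rack numbers quoted above. The sketch below assumes roughly 30 square feet of data hall floor allocated per rack, including aisle space (a figure the article does not give, so treat it as illustrative):

```python
# Rough conversion between the two density metrics quoted in the article.
# The square footage allocated per rack is an illustrative assumption.
SQFT_PER_RACK = 30.0  # assumed floor area per rack, including aisle space

for watts_per_sqft in (150, 250, 300, 400):
    rack_kw = watts_per_sqft * SQFT_PER_RACK / 1000  # convert W to kW
    print(f"{watts_per_sqft} W/sq ft ≈ {rack_kw:.1f} kW per rack")
```

At that assumed footprint, 250 to 400 watts per square foot works out to roughly 7.5kW to 12kW per rack, broadly consistent with the per-rack ranges operators cite.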
Consolidation, New Workloads
Several factors are driving rack power density higher, said Joseph Reele, VP of data center solution architects at Schneider Electric. Organizations want to “reduce IT spend and optimize the IT environment,” he said. “This is typically achieved through technology consolidation, virtualization, cloud services, and eliminating any wasted or underutilized resources, such as zombie servers (servers that don’t run any computing workload but stay on).”
By moving to a virtualized environment, organizations can consolidate servers, he said. So a business could cut the number of servers it uses from five running at 25 percent of capacity to just two running at 85 to 90 percent.
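The consolidation example is simple capacity arithmetic: total load stays constant while target utilization rises. A minimal sketch, using the utilization figures from the example above (the function name is ours):

```python
import math

def consolidated_server_count(current_servers, avg_utilization, target_utilization):
    """Estimate how few servers can carry the same workload at a higher utilization."""
    total_load = current_servers * avg_utilization  # work, in server-equivalents
    return math.ceil(total_load / target_utilization)

# Five servers at 25% utilization carry 1.25 servers' worth of work;
# at an 85% target utilization that fits on two machines.
print(consolidated_server_count(5, 0.25, 0.85))  # → 2
```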
Another factor driving higher density is hyper-scale computing, with data centers specializing in these high-performance workloads capturing market share from other data centers, Broadbent said.
Hyper-scale applications – many using big data, machine learning, or artificial intelligence – require a higher power density per rack, he said. However, large businesses are also embracing higher densities as a way to manage capital expenses and reduce operational costs, he added.
More Powerful Hardware
In addition, servers need more power as CPU makers work to keep up with performance targets like Moore’s Law, added Scott Tease, executive director of high-performance computing and AI at the Lenovo Data Center Group.
Newer servers are also packed with more I/O cards, more storage devices, more power-hungry memory, and more accelerators, such as GPUs and FPGAs [field programmable gate arrays], Tease noted. “All of these components have a power cost associated with them,” he said.
The upside is that many new server components have become “much more efficient for the same amount of equivalent work,” said Jonathan Halstuch, co-founder and COO of RackTop Systems, a provider of high-performance software-defined storage. Meanwhile, the adoption of technologies like SSDs helps data center operators pack more punch into the same amount of server space.
A data center can fit more than 700TB of SSD storage in a 4U rack, compared with 288TB of hard drive storage, he noted.
“There are more components and items being put into that same space envelope,” Halstuch said. “The SSDs will use more power per gigabyte, but they will grant you better space density.”
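Halstuch's 4U comparison can be reproduced with assumed drive counts and capacities. The article gives only the totals, so the 24 drive bays, 12TB hard drives, and 30.72TB SSDs below are illustrative assumptions that happen to match his figures:

```python
# Illustrative storage-density comparison for a 4U chassis.
# Drive counts and capacities are assumptions, not figures from the article.
HDD_BAYS, HDD_TB = 24, 12        # assumed: 24 x 12TB 3.5-inch hard drives
SSD_BAYS, SSD_TB = 24, 30.72     # assumed: 24 x 30.72TB 2.5-inch SSDs

hdd_total = HDD_BAYS * HDD_TB    # 288 TB
ssd_total = SSD_BAYS * SSD_TB    # roughly 737 TB

print(f"HDD: {hdd_total} TB, SSD: {ssd_total:.0f} TB")
```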
Cooling High Density
The rise in rack power density has implications for data center design, particularly the cooling technologies needed for these tightly packed racks.
High-density data centers may need containment systems that separate the cold supply airflow from hot air exhaust, said Schneider’s Reele. The rise in density creates the need for cooling technologies to be located next to the compute load, rather than on the perimeter, as was the case in many legacy data centers, he said.
Some data centers are moving to water cooling systems, added Southland’s Kilkeary.
“As server densities increase, airside ducting becomes very expensive and complex to deliver the large volumes of air required to cool the increased heat loads,” he said. “Alternatively, waterside cooling becomes a more efficient way to deliver cooling to the data hall.”
Still, water cooling has its own risks, “as water and server equipment don’t mix well,” he added. “The systems required for waterside cooling are more complex and require greater levels of care to separate the cooling system – and water – from the data hall.”
Some data center operators are also experimenting with submerging servers in dielectric liquids.
As a result, data center technicians’ jobs are becoming more complicated, Kilkeary said.
“As rack and data hall power densities increase, the waterside systems required to cool the increased heat output of the servers are of increased complexity and require more components,” he said. “For operators, this requires a higher level of knowledge of the systems and increases the number of components that need to be monitored and routinely maintained.”
Higher Density Can Mean More Risk
Generally, higher power density means “more work and more worries” for technicians, added Arthur Valhuerdi, senior VP of operations at DataGryd Data Centers.
High-density racks can overheat in less than a minute if there’s a cooling failure, whereas a 4kW rack gives technicians more time to diagnose and fix the problem, he said.
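The less-than-a-minute claim can be sanity-checked with a back-of-envelope air-heating model. The sealed air volume, the allowable temperature rise, and the neglect of equipment and building thermal mass are all illustrative assumptions, so treat the results as order-of-magnitude only:

```python
# Back-of-envelope ride-through time after a total cooling failure.
# Assumes the rack's heat dumps into a fixed, sealed volume of air with no
# thermal mass from equipment or walls (which makes the estimates conservative).
AIR_DENSITY = 1.2          # kg/m^3, air at roughly room conditions
AIR_SPECIFIC_HEAT = 1005   # J/(kg*K)
air_volume_m3 = 10.0       # assumed air volume around one rack
delta_t = 15.0             # assumed allowable temperature rise, in K

heat_capacity_j_per_k = AIR_DENSITY * air_volume_m3 * AIR_SPECIFIC_HEAT

for rack_kw in (4, 25, 40):
    seconds = heat_capacity_j_per_k * delta_t / (rack_kw * 1000)
    print(f"{rack_kw} kW rack: ~{seconds:.0f} s to rise {delta_t:.0f} K")
```

Under these assumptions a 4kW rack has about 45 seconds of ride-through, while a 40kW rack has only about five, which is in line with the warning about high-density racks overheating in under a minute.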
“As these higher density loads come on, having the appropriate redundancies in place becomes critical,” Valhuerdi added. “The response time from a service-interrupting event to an outage becomes ever shorter, so the forethought and preplanning is critical as well as having contingency plans laid out and set up.”