Suppose a service outage at a data center were an unsolved crime in a detective novel — a good one. You’d expect the book to contain all the clues you’d need to piece together the chain of events and solve the case.
No database of data center event logs is as interesting as a crime novel, but it should contain all the data pertaining to the cause of an incident. However, the sheer volume of data often makes the cause difficult to identify. Still, the more data you have, the more likely it is that an analytics function, or even an artificial intelligence (AI) algorithm, will spot the culprit.
Here’s the challenge: How much data must be pooled from all the facilities before we can determine not only a pattern but a formula for incidents, one that can be used to diagnose them before they occur? The answer may lie in first determining just how many similarities there are from one data center to another, an amount that appears to be shrinking.
Patterns and Motives
“Like any maturing industry, data center is becoming rapidly commoditized,” says Peter Gross, member of the Executive Committee of the Data Center Incident Reporting Network (DCIRN) and former VP of mission critical systems at Bloom Energy. “The major components, whether they’re cooling systems, power systems, fire protection, EPO [emergency power off], architectural components, are the same types. The architecture (the configuration and topology of the data center, the electrical, mechanical) might be slightly different, but you see commonality in...