Jonathan Feldman, chief information officer for the city of Asheville, N.C., admits that over a decade ago, a worry was keeping him up at night. “We weren’t confident enough that we could get business up and running in a reasonable time, should a disaster take out our main data center, which was just a couple blocks away,” he says.
Indeed, Feldman’s concerns were justified: natural disasters like Hurricane Katrina were unfolding at the time, and their dangerous consequences served as abrupt reminders of IT’s vulnerabilities. Fueling his fears, Feldman says, was a background in public safety that hardwired him to think about worst-case scenarios. “When I came into this role, the present solution to disaster recovery was that Asheville had a disaster recovery (DR) center, but I knew it was too close,” Feldman says.
Meanwhile, in what could be called either a coincidence or a blessing in disguise, the city’s proposed new data center, planned as an add-on to a fire station just 12 miles away, eventually fell through. “I began to think, maybe what we really need is a data center not just a few miles away, but nowhere near us,” Feldman explains.
The answer, Feldman believed, was the cloud. The team’s first instinct was to try to script their own solution, but they quickly decided it was too complicated and that they could be spending their time more effectively on other matters. “It felt like we were building a fragile artifact that could quickly break if there were changes in the environment,” Feldman says. As a result, they began looking into vendors, and after finding one they were comfortable with, proceeded slowly, step-by-step, by first deploying one low-risk server, then one mid-risk server, and then a mid-risk n-tier application.
In a nutshell, Feldman enlisted automation software to do real-time syncing of production systems to cloud storage. This involved paying for the software and the storage, but no compute, until it was needed. “One of the things that really appealed to me was having the automated process working all the time to update the server in the cloud,” Feldman says. “When you go to fail over, that server image is only minutes old, not hours old, which is fantastic.”
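The setup Feldman describes, continuously replicating server state to cloud storage while paying for compute only when a failover actually happens, is often called a “pilot light” pattern. The article doesn’t name the vendor or its mechanism, so as a purely hypothetical illustration, the core idea of change-only replication might be sketched like this:

```python
import hashlib
import shutil
from pathlib import Path

def sync_changed(source: Path, replica: Path) -> list[str]:
    """Hypothetical sketch: copy to the replica only files whose
    content differs, so each pass is cheap and the replica stays
    only minutes behind production."""
    replica.mkdir(parents=True, exist_ok=True)
    changed = []
    for src in sorted(source.rglob("*")):
        if not src.is_file():
            continue
        dst = replica / src.relative_to(source)
        # Transfer only if the destination is missing or its content differs.
        if (not dst.exists()
                or hashlib.sha256(dst.read_bytes()).digest()
                != hashlib.sha256(src.read_bytes()).digest()):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # preserves file metadata
            changed.append(str(src.relative_to(source)))
    return changed
```

Run on a schedule (every few minutes), a loop like this keeps the standby copy nearly current while incurring only storage costs; compute is spun up from the replica only when disaster strikes. Real DR tooling replicates at the block or image level rather than per file, but the economics Feldman cites are the same.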
The journey, however, certainly wasn’t smooth from start to finish, Feldman admits. He says the biggest challenge was getting the staff to think differently and embrace the hybrid cloud solution. In fact, there were two looming fears. One was openly discussed: concerns about security in the cloud. The other was largely unstated: the team feared that outsourcing to the cloud could make their own positions less relevant. To ease staff anxiety, Feldman assured employees that neither fear was warranted. He also proposed taking a slow approach and bringing in an auditor to evaluate how things were going. He promised, “If the external system in the cloud fails the audit, I’ll apologize to all of you and we’ll all go our merry way without talking about the cloud anymore.”
Once the first audit was finished, problems were indeed found, but not with the cloud-based solution. The issues were only with the city’s internal systems, Feldman explains.
Today, Feldman says, if there’s an outage at 9 a.m., the system is back up and running by, say, 9:15, instead of downtime dragging on for hours or most of the business day. As for the cost savings, Feldman didn’t specify, but gave us a hint: “Well, we didn’t have to build a new DR data center, so that’s a pretty big deal,” he says. “It’s been a good run for sure.”
Renee Morad is a freelance writer and editor based in New Jersey. Her work has appeared in The New York Times, Discovery News, Business Insider, Ozy.com, NPR, MainStreet.com, and other outlets. If you have a story you would like profiled, contact her at [email protected]
The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.