
Disaster-Recovery Study Points to Trends in the Cloud, Virtualization, and Testing

I recently spoke with Symantec's Dan Lamorena, director of storage and availability management, on the occasion of his company's sixth annual Symantec Disaster Recovery Study. Lamorena has worked on the study for five of those six years. This year's results demonstrate the growing challenge of managing disparate virtual, physical, and cloud resources, a mix that adds complexity for organizations protecting and recovering mission-critical applications and data. The study also shows that virtual systems still aren't properly protected. Lamorena pointed to three areas in the findings that deserve particular attention: virtualization/cloud, the downtime and recovery gap, and disaster-recovery testing.


Virtualization/Cloud

The study highlights that nearly half (44 percent) of data on virtual systems isn't regularly backed up, and only one in five respondents use replication and failover technologies to protect virtual environments. Respondents also indicated that 60 percent of virtualized servers aren't covered in their current disaster-recovery (DR) plans, up significantly from the 45 percent reported in last year's study.

Respondents state that 82 percent of backups occur only weekly or less frequently, rather than daily. Resource constraints, lack of storage capacity, and incomplete adoption of advanced and more efficient protection methods hamper rapid deployment of virtual environments. In particular:

  • 59 percent of respondents identified resource constraints (people, budget, and space) as the top challenge when backing up virtual machines.
  • Respondents cited a lack of available primary storage (57 percent) and backup storage (60 percent) as obstacles to protecting mission-critical data.
  • Only 50 percent of respondents use advanced, clientless methods to reduce the impact of virtual machine backups.

In terms of cloud computing, respondents reported that their organizations run approximately 50 percent of mission-critical applications in the cloud. Two-thirds of respondents (66 percent) said security is their main concern about putting applications in the cloud. Even so, the biggest challenge respondents face when implementing cloud computing and storage is controlling failovers and making resources highly available (55 percent).


The Downtime and Recovery Gap

The study showed that recovering from an outage takes more than twice as long as respondents expect. When asked how quickly they could be up and running if a significant disaster destroyed their main data center, respondents estimated two hours of downtime per outage. (This is an improvement from 2009, when they estimated four hours.) The actual median downtime per outage in the last 12 months was five hours, more than double the two-hour expectation. Organizations also experienced an average of four downtime incidents in the past 12 months, so the gap compounds: four outages at five hours each is roughly 20 hours of unplanned downtime a year, against the eight hours respondents would expect.

When asked what caused their organization to experience downtime over the past five years, respondents pointed mainly to system upgrades, power outages and failures, and cyberattacks.


Disaster-Recovery Testing

The study also showed a gap between the organizations that experience power outages and failures and those that have assessed their impact: surprisingly, only 26 percent of respondents’ organizations have conducted an impact assessment for power outages and failures.

“While organizations are adopting new technologies to reduce costs, they are adding more complexity to their environment and leaving mission critical applications and data unprotected,” said Lamorena. “While we expect to see further consolidation in the industry of these tools, data center managers should simplify and standardize so they can focus on fundamental best practices that help reduce downtime.”


Tips

  • Treat all environments the same—Ensure that mission-critical data and applications are treated the same across environments (virtual, cloud, physical) in terms of DR assessments and planning.
  • Use integrated tool sets—Using fewer tools that manage physical, virtual, and cloud environments will help organizations save time and training costs, and help them to better automate processes.
  • Simplify data protection processes—Embrace low-impact backup methods and deduplication to ensure that mission-critical data in virtual environments is backed up and efficiently replicated off campus.
  • Plan and automate to minimize downtime—Prioritize planning activities and tools that automate processes and minimize downtime during system upgrades.
  • Identify issues earlier—Implement solutions that detect issues, reduce downtime, and recover faster, bringing recovery times closer to expectations; a small automation sketch follows this list.
  • Don’t cut corners—Organizations should implement basic technologies and processes that protect them in case of an outage, and not take shortcuts that could have disastrous consequences.
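One way to act on the "identify issues earlier" tip, given the study's finding that 82 percent of backups happen weekly or less often, is to automate a freshness check on backups. The following is a minimal sketch, not anything from Symantec's product line: it assumes a hypothetical layout in which each virtual machine writes timestamped backup files under /backups/<vm-name>/, and it flags any VM whose newest backup is more than a day old.

    #!/usr/bin/env python3
    """Minimal stale-backup checker (assumes a hypothetical /backups layout)."""

    from datetime import datetime, timedelta
    from pathlib import Path

    BACKUP_ROOT = Path("/backups")  # assumption: one subdirectory per VM
    MAX_AGE = timedelta(hours=24)   # daily-backup policy

    def latest_backup(vm_dir: Path) -> datetime | None:
        """Return the timestamp of the newest file in vm_dir, or None if empty."""
        files = [p for p in vm_dir.iterdir() if p.is_file()]
        if not files:
            return None
        newest = max(f.stat().st_mtime for f in files)
        return datetime.fromtimestamp(newest)

    def main() -> None:
        if not BACKUP_ROOT.is_dir():
            raise SystemExit(f"backup root {BACKUP_ROOT} does not exist")
        now = datetime.now()
        for vm_dir in sorted(BACKUP_ROOT.iterdir()):
            if not vm_dir.is_dir():
                continue
            last = latest_backup(vm_dir)
            if last is None:
                print(f"{vm_dir.name}: no backups found")
            elif now - last > MAX_AGE:
                print(f"{vm_dir.name}: stale, last backup {now - last} ago")

    if __name__ == "__main__":
        main()

In production the report would feed a monitoring system rather than stdout, and the freshness data would come from the backup product's catalog rather than file timestamps, but the principle is the one the tips describe: make a coverage gap like the 44 percent figure above visible within a day instead of discovering it during a recovery.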