
Virtualization 2.0: Beyond Server Consolidation

Is there such a thing as "Virtualization 2.0"? Advanced virtualization was a hot topic at the Next Generation Data Center conference.

Do we need another "2.0" technology? "Virtualization 2.0" was the title of a session Tuesday at the IDG Next Generation Data Center conference in San Francisco. The panelists didn't lay out any revolutionary definitions; the main point of agreement was that it's essentially virtualization 1.0, only used in production rather than in testing or quality assurance.

But the discussion highlighted how virtualization's benefits for the data center can extend beyond current uses. "Most people who talk about virtualization are talking about server consolidation," said Jonah Paransky, vice president of marketing for StackSafe. "We see an opportunity to address deep problems in the IT world in a new way."

That message is slowly convincing companies that once equated virtualization with server consolidation to explore it for databases, security and storage, as well as advanced testing, he said.

It's slow going thus far, but the panel predicted that implementation of virtualization beyond consolidation will be driven by challenges in the data center.

"We're looking at the data center being transformed into an unmanageable collection of bits in VMs (virtual machines) instead of boxes," said Albert Lee, chief strategy officer of xkoto, which makes GridScale software to virtualize databases for maximum scalability. "When you look at the maturity of the virtualization market, there's a lot of areas where it will have to grow up. Virtualization has taken care of server sprawl, but now you have new challenges that hypervisors introduce."

The largest enterprises are developing sophisticated uses of virtualization, as evidenced by the Tuesday keynote by Jeffrey Birnbaum, managing director and chief technology architect of Merrill Lynch, who discussed "stateless computing" and Merrill's plans to deploy a centralized virtual infrastructure.

"Stateless computing isn't about not having state, it's about where the state is stored," said Birnbaum, who sees it moving from the desktop to a cloud-based enterprise file system (EFS), in which applications are managed by a "placement engine" that seamlessly allocates virtual machines and applications to hardware based on policies that set priorities for resource usage.

The biggest problem: the placement engine doesn't exist yet, although Birnbaum said Merrill is working with several vendors. "We haven't found the nirvana piece of software yet, but we're working on it."
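For readers curious what such a policy engine might look like in the abstract, here is a minimal, purely hypothetical sketch in Python. It assumes made-up Host and VM classes and a simple greedy policy that places higher-priority workloads first on the host with the most free capacity; it illustrates the idea, not Merrill Lynch's design, which, as Birnbaum noted, has yet to be built.

```python
# Hypothetical sketch of a policy-driven "placement engine": given hosts with
# finite capacity and VMs tagged with a priority, place higher-priority
# workloads first on the roomiest host that can hold them. All names and
# policies here are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Host:
    name: str
    cpu_free: int      # available CPU cores
    mem_free: int      # available memory in GB
    placed: list = field(default_factory=list)

    def fits(self, vm):
        return self.cpu_free >= vm.cpu and self.mem_free >= vm.mem

    def place(self, vm):
        self.cpu_free -= vm.cpu
        self.mem_free -= vm.mem
        self.placed.append(vm.name)


@dataclass
class VM:
    name: str
    cpu: int
    mem: int
    priority: int  # lower number = more important


def place_all(vms, hosts):
    """Greedy, priority-ordered placement: a stand-in for a real policy engine."""
    unplaced = []
    for vm in sorted(vms, key=lambda v: v.priority):
        # Policy: prefer the host with the most free CPU that can fit the VM.
        candidates = [h for h in hosts if h.fits(vm)]
        if not candidates:
            unplaced.append(vm.name)
            continue
        best = max(candidates, key=lambda h: h.cpu_free)
        best.place(vm)
    return unplaced


if __name__ == "__main__":
    hosts = [Host("esx-01", cpu_free=16, mem_free=64),
             Host("esx-02", cpu_free=8, mem_free=32)]
    vms = [VM("trading-app", cpu=8, mem=32, priority=1),
           VM("batch-report", cpu=8, mem=16, priority=3),
           VM("web-frontend", cpu=4, mem=8, priority=2)]

    leftovers = place_all(vms, hosts)
    for h in hosts:
        print(h.name, "->", h.placed)
    if leftovers:
        print("unplaced:", leftovers)
```

A production placement engine would also need live migration, affinity rules and continuous rebalancing against changing policies, which is the part Birnbaum says no vendor has delivered yet.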

Few companies are contemplating the kind of virtualization Birnbaum envisions for Merrill Lynch. But members of the "Virtualization 2.0" panel predicted that the challenges of managing data centers will prompt more companies to look beyond server consolidation.

"Pain is the best driver for change," said Lee. "We respond to pain."

"I don't think we have the luxury of objecting to change anymore," said Paransky. "We have to find ways to adapt to change and still maintain availability."

"It really comes down to incremental changes versus wholesale changes," said Larry Stein, Engineering VP of Scalent. "You really have to take steps incrementally."
