There is no single way to explain the process of implementing and operating a virtualized environment. Every situation is unique; each deployment is its own creation. But there are frameworks and guidelines that various vendors and consultants tend to use. While there is no “right” or “wrong” virtual server lifecycle, the six-step approach described here offers some real benefits. It maps nicely onto other deployment and operations frameworks, so it will feel familiar to most IT organizations, and it yields quantifiable products and services at each of the six steps, which enables lifecycle measurement and reporting.
The six steps are:
Hardware provisioning. When you receive a brand-new server and rack it up, it has no initial purpose. The physical hardware (“bare metal” at this point) must be enumerated, identified, and allocated to one or more purposes. In an integrated hardware and software virtual management system, this is done through a single software interface.
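The enumerate–identify–allocate flow can be sketched with a small inventory model. This is a minimal illustration, not any particular vendor’s API; the `BareMetalHost` and `HardwareInventory` names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BareMetalHost:
    """A newly racked physical server, not yet assigned a purpose."""
    serial: str
    cpus: int
    ram_gb: int
    roles: list = field(default_factory=list)  # purposes allocated to this host

class HardwareInventory:
    """Single interface for enumerating, identifying, and allocating bare metal."""
    def __init__(self):
        self.hosts = {}

    def enumerate_host(self, serial, cpus, ram_gb):
        # Identification: the serial number becomes the host's unique identity.
        self.hosts[serial] = BareMetalHost(serial, cpus, ram_gb)

    def allocate(self, serial, role):
        # A host may be allocated to one or more purposes.
        self.hosts[serial].roles.append(role)

inventory = HardwareInventory()
inventory.enumerate_host("SN-1001", cpus=32, ram_gb=256)
inventory.allocate("SN-1001", "hypervisor")
inventory.allocate("SN-1001", "storage-node")
```

The key point is that a host carries no role at enumeration time; purposes are attached afterward, through the same interface.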
Virtual workload provisioning. You’re already familiar with the concept of virtual workloads. Like new hardware, workloads must also be provisioned so the virtual server management infrastructure knows where to send the work for processing.
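“Knowing where to send the work” amounts to a placement decision. The sketch below uses a simple best-fit rule (pick the host with the most free capacity); real management systems weigh many more factors, and the function and field names here are hypothetical.

```python
def place_workload(workload, hosts):
    """Assign a provisioned workload to the host with the most free CPU capacity.

    Returns the chosen host's name, or None if no host can fit the workload.
    """
    candidates = [h for h in hosts if h["free_cpus"] >= workload["cpus"]]
    if not candidates:
        return None  # no capacity: a trigger to provision more hardware
    best = max(candidates, key=lambda h: h["free_cpus"])
    best["free_cpus"] -= workload["cpus"]  # reserve the capacity
    return best["name"]

hosts = [{"name": "host-a", "free_cpus": 8}, {"name": "host-b", "free_cpus": 16}]
print(place_workload({"name": "web-tier", "cpus": 4}, hosts))  # → host-b
```

Note that a failed placement (`None`) feeds back into the previous step: it is a signal that more bare metal needs to be provisioned.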
Operating system and software deployment, patching, and state management. Right-sizing resources is a key benefit of any virtualized server environment. New virtual machines and software instances are instantiated, managed, and destroyed all the time, based on the management system’s analysis of performance against required service levels, as well as on the hardware and virtual workload provisioning described above. Physical machines also require some level of management, such as maintenance of the hypervisor software or BIOS updates, so they are managed within the lifecycle as well.
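The instantiate/destroy decision driven by performance versus service levels can be reduced to a right-sizing rule. This is a deliberately simplified sketch; the utilization thresholds are assumed values, not figures from the text.

```python
def rightsize(current_instances, avg_utilization, target_low=0.30, target_high=0.75):
    """Return an adjusted instance count given average utilization (0.0-1.0).

    Scale out when load exceeds the target band; scale in when it falls below,
    keeping at least one instance running.
    """
    if avg_utilization > target_high:
        return current_instances + 1   # instantiate a new VM
    if avg_utilization < target_low and current_instances > 1:
        return current_instances - 1   # destroy a surplus VM
    return current_instances           # within service level: no change
```

In practice this loop runs continuously, which is why instances are described as being created and destroyed “all the time.”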
Performance and health monitoring. Once a virtual server is deployed, it must be constantly monitored and managed. This part of the lifecycle is crucial to the load-balancing benefits of centralized virtualization management. For example, when a virtual server becomes unresponsive and has potentially crashed, the management system moves its workload to other systems, possibly provisions additional resources, and alerts IT while remediating the issue.
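The detect–migrate–alert sequence for an unresponsive server might look like the following heartbeat check. This is a toy sketch: the data shapes and the pick-the-first-healthy-host rule are assumptions, where a real system would load-balance the migrated work.

```python
import time

def check_and_remediate(servers, heartbeat_timeout=30, now=None):
    """Find servers with stale heartbeats, migrate their workloads to a
    healthy server, and return alert messages for IT."""
    now = time.time() if now is None else now
    healthy = [s for s in servers if now - s["last_heartbeat"] <= heartbeat_timeout]
    alerts = []
    for s in servers:
        if s not in healthy and s["workloads"] and healthy:
            target = healthy[0]  # naive choice; real systems balance the load
            target["workloads"].extend(s["workloads"])
            s["workloads"] = []
            alerts.append(f"{s['name']} unresponsive; workloads moved to {target['name']}")
    return alerts
```

Remediation and alerting happen together, matching the text: IT is told about the failure, but the workload has already been moved by the time anyone reads the alert.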
Disaster recovery. Being able to dynamically shift workloads and provision hardware and software enables flexible disaster recovery. Having a remote mirror site come online within seconds of a primary site catastrophe was once possible only with enormous amounts of money and an army of IT professionals. Today that same level of disaster recovery is part of any good lifecycle process.
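At its core, the mirror-site scenario is a failover decision: if the primary is down, promote the standby. The sketch below shows only that decision; the site structure and field names are hypothetical, and real failover also involves replication, DNS or routing changes, and health re-checks.

```python
def failover(primary, mirror):
    """Return the name of the site that should serve traffic.

    If the primary is up, nothing changes; if it is down, the mirror
    is promoted from standby to active.
    """
    if primary["up"]:
        return primary["name"]
    mirror["role"] = "active"  # the remote mirror comes online
    return mirror["name"]

primary = {"name": "dc-east", "up": False}
mirror = {"name": "dc-west", "role": "standby"}
print(failover(primary, mirror))  # → dc-west
```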
Backup. Protecting data is always a critical function of IT. Virtual servers change the dynamics of this function somewhat, as the data no longer lives on a single monolithic hard disk with a tape drive attached to the same computer. Data storage is also virtualized, giving wider flexibility in deployment and utilization. This means you don’t stop planning backups in a virtual environment; you just change how they work.
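One way backups “change how they work” in a virtualized environment is snapshot-style backup of virtual volumes rather than tape dumps of physical disks. The sketch below illustrates the idea with content hashing so unchanged volumes are skipped; the volume/catalog shapes are invented for illustration.

```python
import hashlib
import json
import time

def snapshot(volume, catalog):
    """Record a point-in-time snapshot of a virtual volume in a backup catalog.

    Hashing the volume contents lets unchanged volumes be skipped entirely.
    Returns True if a snapshot was taken, False if nothing changed.
    """
    digest = hashlib.sha256(
        json.dumps(volume["blocks"], sort_keys=True).encode()
    ).hexdigest()
    last = catalog.get(volume["id"])
    if last and last["digest"] == digest:
        return False  # identical to the last snapshot: skip
    catalog[volume["id"]] = {"digest": digest, "taken_at": time.time()}
    return True
```

Because storage is virtualized, the catalog can live anywhere, and no tape drive needs to be attached to the machine whose data is being protected.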
These six steps are not a one-shot deal; they are part of an ongoing process of IT management. The steps are repeated often, and some, like performance monitoring, never end. But looking at them in this linear fashion does help give a sense of one way to approach virtual server lifecycles.