Converged infrastructure (CI) is helping organizations make their data centers more agile by integrating and simplifying the management of core components: compute, storage and networking. More recently, hyper-converged infrastructure (HCI) emerged, which integrates those individual components even more tightly through software. It can bring faster deployments and lower-risk architectures to organizations. Generally speaking, both CI and HCI benefit VDI deployments by creating building blocks that scale quickly. Yet there are differences to consider between the two technologies.
Let’s create a scenario: you’re an organization running 1,000 VDI instances, and within that environment you’re also hosting virtual applications. Here are the options:
Traditional. In this scenario, we’re deploying VDI in the most traditional data center fashion: a hypervisor running on a physical server. The hypervisor manages the VMs and acts as the VDI provisioning point. Data for those VMs and physical servers is then stored on attached storage, such as a NAS or SAN.
Converged Infrastructure (CI). Here, you begin to couple core resources like storage into the compute layer to remove operational silos. Now, your storage environment is directly integrated into the physical servers. You’ll see flash allocated for high-performance virtual desktops and applications, or used for caching.
One of the biggest benefits of CI is the ability to design and deploy pre-configured, pre-validated components. This speeds the deployment of virtual machines by creating an architecture in which the building blocks of server, storage, networking, and even virtualization technology simply “snap” into place. With CI, all the components are designed to work together and can support a highly agile virtual ecosystem; you don’t have to configure the individual components separately.
Hyper-Converged Infrastructure (HCI). This architecture contains a virtual storage controller that runs as a service within the hyper-converged cluster. The controller handles storage functions that would otherwise be relegated to a physical SAN or NAS platform. Essentially, you’re introducing software-defined storage (SDS) directly into the convergence layer. From there, SDS can spread storage resources across all nodes in the HCI environment, creating a policy-driven, software-based control architecture that is no longer dependent on specific hardware.
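Whichever of the three options you choose, the 1,000-desktop scenario above ultimately comes down to how many building blocks you need. The sketch below is a back-of-envelope sizing calculation; every per-desktop figure and per-node capacity in it is an illustrative assumption, not vendor guidance, and real sizing must account for boot storms, failover and workload profiling.

```python
# Back-of-envelope sizing for the 1,000-desktop VDI scenario above.
# All per-desktop and per-node figures are illustrative assumptions.

DESKTOPS = 1000
VCPU_PER_DESKTOP = 2           # assumed knowledge-worker profile
GB_RAM_PER_DESKTOP = 4
IOPS_PER_DESKTOP = 15          # steady state; boot storms run far higher
CPU_OVERSUBSCRIPTION = 4       # assumed vCPU : physical-core ratio

def size_cluster(cores_per_node=32, gb_ram_per_node=512, iops_per_node=50_000):
    """Return the node count each resource dimension demands."""
    def ceil_div(a, b):
        return -(-a // b)
    cores_needed = DESKTOPS * VCPU_PER_DESKTOP // CPU_OVERSUBSCRIPTION
    return {
        "cpu": ceil_div(cores_needed, cores_per_node),
        "ram": ceil_div(DESKTOPS * GB_RAM_PER_DESKTOP, gb_ram_per_node),
        "iops": ceil_div(DESKTOPS * IOPS_PER_DESKTOP, iops_per_node),
    }

sizing = size_cluster()
nodes = max(sizing.values()) + 1   # +1 node for N+1 failover headroom
print(sizing, "->", nodes, "nodes")
```

With these assumed numbers, CPU is the binding constraint (16 nodes before headroom), which is typical for dense VDI: the point of running the calculation per dimension is to see which resource you actually run out of first.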
If you’re looking to remove your existing storage environment and let HCI manage those resources, this is a great way to deploy VDI. However, if you’re leveraging existing investments in compute and storage, HCI may not integrate as well. In those cases, you might look to a technology like Atlantis for truly hardware-agnostic software-defined storage.
In deploying VDI, it’s critical to understand your current environment and where your business is headed. If you’re doing a complete data center refresh, converged systems play an important role in how you design your data center footprint, deliver virtual workloads and manage critical resources. If you want to leverage existing infrastructure investments, CI might be the way to go, especially if you already have solid storage and network management in place.
Hyper-converged infrastructure introduces new concepts around storage and compute management: you can remove management from the compute and storage layers and move those capabilities into the virtual layer.
Underwritten by HPE, NVIDIA and VMware.