Cloud and VDI Performance Metrics: What to Keep an Eye On

Effective VDI workload monitoring can help catch performance issues before they affect end users.

The modern business has become a distributed environment in which many users access a wide range of resources. IT engineers are no longer concerned only with users at a single site; they must now monitor thousands of users actively accessing the corporate environment 24/7/365. These users connect from everywhere on a variety of devices: iPads, Android tablets, Macs, PCs, and more. Effective VDI workload monitoring can help catch performance issues before they affect end users. By knowing how a VDI environment is behaving, engineers can deliver a better cloud-computing experience to the end user.

Cloud and VDI Performance and Metrics

As with any large environment, engineers must actively gather and log VDI and cloud server performance metrics. This is especially important because most servers hosting VDI workloads are VMs that require specific amounts of dedicated resources.

For example, a company may host its own private cloud that delivers thousands of corporate virtual desktops as part of its VDI. Engineers must know how much RAM their backend cloud servers have, what their storage requirements are, and what their general CPU usage looks like. Over- or under-allocating resources to cloud servers can become costly. To avoid this, proper planning and workload management should take place before any major rollout.
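
As a rough illustration of that planning step, the sketch below works through a back-of-the-envelope RAM estimate for a desktop pool. Every figure in it (pool size, per-desktop allocation, overcommit ratio, overhead, host count) is an assumption chosen purely for illustration, not a recommendation.

```python
# Rough VDI host RAM sizing sketch (all figures are illustrative assumptions).
desktops = 2000              # assumed number of virtual desktops in the pool
ram_per_desktop_gb = 4       # assumed RAM allocation per desktop
ram_overcommit_ratio = 1.25  # assumed modest memory overcommit
hypervisor_overhead = 0.10   # assumed 10% overhead for hypervisor/management
hosts = 16                   # assumed number of hosts in the cluster

raw_ram_gb = desktops * ram_per_desktop_gb
effective_ram_gb = raw_ram_gb / ram_overcommit_ratio
total_ram_gb = effective_ram_gb * (1 + hypervisor_overhead)

print(f"Total cluster RAM needed: {total_ram_gb:.0f} GB")
print(f"RAM needed per host:      {total_ram_gb / hosts:.0f} GB")
```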

Using performance monitoring tools can help as well. Several solutions can gather performance metrics at both the server and the endpoint level. By understanding how your VDI servers are operating and what end users require, administrators can make better decisions about how to size the physical infrastructure that supports their virtual instances.
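
For a sense of what such tooling collects, here is a minimal sketch of a per-server sampler built on the Python psutil library. The sampling interval and the simple print-style logging are arbitrary choices for illustration; real monitoring suites ship these metrics to a central store and add endpoint-side data.

```python
# Minimal per-server metric sampler (illustrative sketch, not a monitoring product).
import time
import psutil

def sample():
    """Collect one snapshot of the core resources discussed below."""
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_pct": psutil.cpu_percent(interval=1),   # averaged over 1 second
        "ram_pct": psutil.virtual_memory().percent,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_sent_bytes": net.bytes_sent,
        "net_recv_bytes": net.bytes_recv,
    }

if __name__ == "__main__":
    # Log a sample every 60 seconds; a real deployment would ship these
    # snapshots to a central store for trending and capacity planning.
    while True:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), sample())
        time.sleep(60)
```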

When gathering performance metrics surrounding specific servers running dedicated workloads, engineers need to evaluate the following details:

  • CPU: Remember, your VDI and cloud ecosystem will be a combination of virtual and physical resources. Administrators must look at specific machines (virtual and physical) and see how users are consuming CPU resources. With numerous users launching desktops or applications from the cloud, careful consideration must be given to how many dedicated cores each server requires.
  • GPU: Always include graphics resource planning as a key design consideration for successful VDI deployments and end-user acceptance. The key here is to identify the applications that use rich graphics and allocate resources to them based on each user's graphics requirements. For example, high-end designers and engineers will need high-end vGPUs to ensure a consistently positive user experience.
  • RAM: There will be instances where VDI workloads are very RAM intensive. By monitoring the workload on a specific server, administrators can gauge how much RAM needs to be allocated. The key here is to plan for fluctuation without over-allocating resources, which is where workload monitoring becomes very important. By looking at RAM utilization over a period of time, administrators can tell when spikes occur and where RAM allocations should be set (see the RAM sizing sketch after this list).
  • Storage: Sizing considerations are always important when working with cloud and VDI workloads. User settings and workload location will all require space, but I/O is another consideration that must be examined. For example, a boot storm or massive spike in usage can cripple a SAN not prepared for such an event. By monitoring I/O and controller metrics, administrators can make decisions that determine the performance characteristics of their storage system (see the boot-storm estimate after this list). Oftentimes, SSDs or onboard flash cache may need to be used to help absorb spikes in I/O.
  • Network: As mentioned earlier, numerous components comprise a good VDI and cloud ecosystem. With that in mind, it's critical to understand that networking and its corresponding architecture play a very important role in a cloud-facing workload. Monitoring network throughput inside the data center as well as in the cloud will help determine specific speed requirements. Uplinks from the servers into the SAN through a fabric switch providing 10Gb connectivity can help reduce bottlenecks and help both VDI and cloud workloads perform better.
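
To turn the RAM point above into a number, one common approach is to size to a high percentile of observed utilization plus a safety margin rather than to the absolute peak. The sample series, percentile choice, and margin below are assumptions for illustration; in practice the samples would come from a monitoring tool's history.

```python
# Size RAM to a high percentile of observed utilization (illustrative sketch).
import statistics

# Assumed hourly RAM-used samples in GB for one VDI host.
samples_gb = [180, 192, 201, 188, 240, 310, 205, 198, 260, 222, 199, 330, 210]

p95 = statistics.quantiles(samples_gb, n=20)[18]   # 95th percentile cut point
peak = max(samples_gb)
headroom = 1.15                                    # assumed 15% safety margin

print(f"95th percentile usage: {p95:.0f} GB")
print(f"Observed peak:         {peak} GB")
print(f"Suggested allocation:  {p95 * headroom:.0f} GB")
```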
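
And to make the boot-storm concern concrete, the short estimate below compares steady-state IOPS with a worst case in which every desktop boots at once. The per-desktop IOPS figures are assumptions for illustration; real values depend heavily on the desktop image and the OS.

```python
# Rough boot-storm IOPS estimate (all figures are illustrative assumptions).
desktops = 2000               # assumed pool size
boot_iops_per_desktop = 50    # assumed IOPS per desktop while booting
steady_iops_per_desktop = 10  # assumed steady-state IOPS per desktop

steady_iops = desktops * steady_iops_per_desktop
peak_iops = desktops * boot_iops_per_desktop      # worst case: all boot at once

print(f"Steady-state IOPS:         {steady_iops:,}")
print(f"Worst-case boot-storm IOPS: {peak_iops:,}")
print(f"Boot storm is ~{peak_iops / steady_iops:.0f}x steady state")
```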

Too often we get caught up in the “virtual” world of application, desktop, and content delivery. Through it all, it’s very important to remember that the modern data center has become the home to all virtualization and cloud environments. That means we must take special care to manage and control the underlying physical systems that ultimately help support new levels of virtualization and business enablement.

This content is underwritten by HPE, NVIDIA and VMware.
