The rise of the OpenStack cloud management framework has served to increase usage of the Kernel-based Virtual Machine (KVM) hypervisor on which the framework was first built. In recognition of that shift in the hypervisor landscape, Cirba has added support for KVM hypervisors to its IT infrastructure analytics application.
Cirba CTO Andrew Hillier says that while Cirba already supports VMware ESX, IBM PowerVM, Microsoft Hyper-V and Red Hat Enterprise Virtualization, the company is now adding support for KVM hypervisors deployed in OpenStack environments that are gaining ground in both public and private clouds.
In particular, Hillier says that Cirba is now beginning to see OpenStack adoption increase within internal IT organizations that are looking to replace commercial hypervisors with an open source platform.
The Cirba management platform consists of a Control Console that identifies ways to increase efficiency, while also helping to mitigate application performance problems created by IT infrastructure capacity issues. Specifically, Cirba uses analytics to eliminate the need to manually determine where workloads should be placed within an IT environment. The Cirba Reservation Console then automates the entire process of selecting the optimal hosting environment for any given workload based on the available amount of compute and storage capacity.
Hillier says the rise of more standard application programming interfaces (APIs) is making it easier to apply analytics across a broad spectrum of IT infrastructure. That data in turn is then being used to automate IT operations.
“You can’t automate what you can’t see,” says Hillier. “Now we can use APIs to monitor the infrastructure.”
Of course, not every IT organization is equally comfortable with either analytics or automation. Some may appreciate the analytics, but not necessarily the level of automation. While the IT industry as a whole has reached a new level of industrialization, many IT administrators worry that IT automation could just as easily propagate errors at the same scale at which fixes are applied. Those errors could then have a cascading effect that winds up taking entire applications offline.
At the same time, it’s equally apparent that data centers can’t scale on the backs of the manual processes implemented by IT administrators or even the custom scripts they might write. In that context, reliance on more IT automation is almost inevitable. But before any of that automation gets embraced, most IT organizations are first going to want a lot more visibility into exactly what is occurring across their entire IT infrastructure environment.