One of the most challenging aspects of scaling a workload is ensuring that the transition occurs seamlessly. Organizations certainly don’t want to replace recently purchased and deployed infrastructure because the workload’s hardware requirements start to exceed what everyone thought was a hypothetical upper limit.
When organizations spec out the hardware to host a workload, they make assumptions about capacity requirements. Sometimes these assumptions are based on experience; other times they are nothing more than educated guesses. In any case, what organizations don’t often do is figure out, from the beginning, what they will do if the workload’s requirements do end up exceeding capacity.
Indeed, all too often the question of “What do we do if our upper limit is too conservative?” doesn’t get addressed until the workload’s requirements are rapidly approaching, or have slammed right into, the infrastructure’s capacity limit.
Even developing a hypothetical upper capacity limit for a workload is a challenge. Unless they have substantial experience with identical workloads, most IT professionals don’t know what the capacity requirements of a new workload really are. When do they tend to find out? Once the workload is deployed in production.
Another issue is that server administrators and application developers are typically under pressure to be conservative in their estimates: when speccing out hardware, they need to ensure that they aren’t spending money on capacity that will never be required.
All of this is why things get very challenging, very quickly, when the workload reaches the capacity limit that no one ever thought it would hit.
One of the big advantages of converged architecture is that it’s designed so capacity can easily be scaled out as required. Rather than going to the expense of deploying new infrastructure to handle increased requirements, organizations can take a more incremental approach: moving away from IT silos and toward blocks of scalable infrastructure that can be treated as pools of resources and quickly adjusted to meet changing needs.
This makes converged architecture a more agile, and therefore safer and more efficient, bet than an infrastructure with a fixed upper limit on capacity.
How does your company estimate capacity requirements? What are the biggest challenges in “getting it right”? We welcome your insights, questions, and answers in the space below.
Underwritten by Hewlett Packard Enterprise.
Converged Infrastructure through the new HPE Composable Infrastructure: Enables IT to operate like a cloud provider to lines of business and the extended enterprise. It maximizes the speed, agility, and efficiency of core infrastructure and operations to consistently meet SLAs and provide the predictable performance needed to support core workloads—for both today and tomorrow.
HPE ConvergedSystems: Integrates compute, storage, and networking resources to deploy pre-validated, factory-tested configurations, from HPE, in weeks instead of months. HPE ConvergedSystem gives you lower cost of ownership and greater flexibility to meet more business demands.
HPE Converged Architecture: A flexible and validated reference architecture, 100% fulfilled through HPE channel partners. Based on best-in-class HPE compute, storage, and HPE OneView infrastructure management software, this architecture delivers interoperability with a choice of third-party top-of-rack networking switches and hypervisors from leading industry vendors.