Top Four Enterprise Infrastructure Mistakes

Building out the optimal set of enterprise IT infrastructure components can be a challenging undertaking. Every business runs different workloads and has different resource requirements. For many businesses, this can be a trial-and-error process where things don’t always work out the first time. That said, when you’re building your enterprise IT infrastructure, there are a few key mistakes you need to avoid.

Undersizing your servers – The biggest mistake you can make is buying servers that are not sized adequately for the workloads you need to run. Most of today's on-premises servers run as virtualization hosts that must support multiple concurrent workloads. While it's always good to maximize your ROI, to meet your SLAs it's important to have enough headroom to handle resource spikes and still deliver the required performance. In addition, many modern workloads like ERP applications run better when they have access to more cores and higher amounts of RAM. Finally, some server applications like SQL Server can take advantage of in-memory technologies to significantly improve application performance. However, for them to do so you need to make sure the required cores and memory are there for them to use.
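To make the headroom point concrete, here is a minimal sizing sketch. The function, the 30% headroom figure, the 4:1 vCPU-to-core consolidation ratio, and the per-VM demands are all illustrative assumptions, not sizing recommendations for any particular platform.

```python
def size_host(vm_vcpus, vm_ram_gb, headroom=0.30, vcpu_per_core=4):
    """Estimate physical cores and RAM for a virtualization host.

    vm_vcpus / vm_ram_gb: per-VM steady-state demands (assumed figures).
    headroom: fraction of capacity held back for resource spikes.
    vcpu_per_core: assumed vCPU-to-physical-core consolidation ratio.
    """
    total_vcpus = sum(vm_vcpus)
    total_gb = sum(vm_ram_gb)
    # Divide by (1 - headroom) so steady-state load leaves spare capacity.
    cores = total_vcpus / vcpu_per_core / (1 - headroom)
    ram_gb = total_gb / (1 - headroom)
    return cores, ram_gb

# Ten hypothetical VMs, each needing 4 vCPUs and 16 GB of RAM:
cores, ram = size_host([4] * 10, [16] * 10)
print(f"~{cores:.0f} cores, ~{ram:.0f} GB RAM")
```

The key design point is that headroom is applied as a divisor on total demand, so a host sized this way still has roughly 30% of its capacity free when every VM is at its steady-state load.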

Not buying enterprise-level equipment – There are vast differences in the server capabilities available today. Enterprise-level server features like out-of-band management, firmware error correction and integrated remote access and support can go a long way toward increasing the uptime of mission-critical servers. If your servers need to support multiple mixed workloads like OLTP and data warehousing, then technologies like hardware partitioning can enable you to separate those workloads, providing the same levels of performance as running them on different servers.

Not planning for adequate network bandwidth – One area that's easy to underestimate is network bandwidth – especially in a highly virtualized environment. It can be tempting to think that because the workloads running on your VMs don't normally consume a high percentage of the available NIC bandwidth, you can funnel all of the network traffic through far fewer NICs than you would have in a physical installation. While this might be partly true, you need to be careful because this can also result in network bottlenecks. It doesn't matter how fast your processors are or how much memory you have in the server if the client network traffic can't get to the server resources. Plus, it's important to remember that in most cases you'll also want to separate your storage, management and live migration traffic from your production client workloads.
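A quick back-of-the-envelope check can show whether the consolidated NICs are actually enough for peak client traffic. This sketch uses an assumed 70% utilization cap and illustrative per-VM peak rates; it covers client traffic only, on the premise that storage, management and live migration traffic get their own separate links.

```python
import math

def nics_needed(vm_peak_gbps, nic_gbps=10, utilization_cap=0.7):
    """Minimum NICs so peak client traffic stays under a utilization cap.

    vm_peak_gbps: assumed per-VM peak rates in Gb/s.
    nic_gbps: link speed of each NIC (10 GbE assumed here).
    utilization_cap: fraction of a NIC you allow peaks to consume.
    """
    total = sum(vm_peak_gbps)
    return math.ceil(total / (nic_gbps * utilization_cap))

# Twelve hypothetical VMs each peaking at 1.5 Gb/s (18 Gb/s aggregate):
print(nics_needed([1.5] * 12))  # prints 3
```

Summing peak rather than average rates is deliberately conservative: bottlenecks show up when bursts coincide, not under typical load.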

Not planning for future growth – Even if you've avoided the first three mistakes, it might not be enough to accommodate future processing requirements. With data volumes for most organizations growing at 30-50% per year and the emergence of new workloads like IoT, Big Data, mobility and increased analytics, it's easy to see how a server that's sized perfectly for today's workloads might not be able to handle your requirements in a year. You need to make sure your server platform has the scalability to grow and adapt to your anticipated future needs.
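The 30-50% annual growth figure compounds quickly, which is easy to see with a simple projection. The starting capacity of 100 TB below is an arbitrary example.

```python
def projected_tb(current_tb, annual_growth, years):
    """Project data volume forward under compound annual growth."""
    return current_tb * (1 + annual_growth) ** years

# 100 TB today, projected out three years at the two ends of the range:
for growth in (0.30, 0.50):
    print(f"{growth:.0%}/yr: {projected_tb(100, growth, 3):.0f} TB in 3 years")
```

At 30% annual growth, 100 TB becomes roughly 220 TB in three years; at 50%, nearly 340 TB - more than triple the original footprint, which is why scalability has to be designed in rather than bolted on.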

Underwritten by HPE and Microsoft
