Data is unquestionably the lifeblood of today's digital organization. Storage solutions remain a top priority in IT budgets precisely because the integrity, availability, and protection of data are vital to business productivity and success. Still, the most important point to remember is that every environment is unique: virtualization, compute, and storage ecosystems must be deployed with careful consideration.
When it comes to setting up a virtual workplace, creating a smart storage environment is one of the more challenging tasks. Do you use spinning disks? Do you go flash-optimized? Where do you need better redundancy and resiliency? In reality, there isn’t one right answer. Rather, it’s critical to look at your use cases, your desktop ecosystem, and how users will access the environment. From a storage planning perspective, getting the design right is essential: the VDI experience suffers heavily if the wrong type of storage ecosystem is deployed.
With all of this in mind, consider the following points around storage design and deployment for a virtual ecosystem:
- Capacity. Assess current conditions to gauge where the existing environment is lacking or needs additional resources. With VDI, data is stored at a central location and distributed to other data centers or points within the infrastructure, so capacity must be sized for both current needs and future growth.
- Performance. Storage array performance is always important, and the type of controller used will dictate the performance of the virtual environment. Consider the types of databases in use, the applications being delivered, and how many users will access the environment. I/O and throughput requirements must be planned out before undertaking any virtualization project.
- Scalability. Data agility and the ability of the VDI environment to evolve with the needs of the business are important storage design considerations. Working with vendors whose platforms scale seamlessly, allowing workloads to migrate between controllers, matters not only for minimizing downtime but also for infrastructure growth.
- Availability and reliability. Start with a simple question: how mission-critical are your applications? When sizing the right controller for a virtualization project, it’s important to know how well a given device handles availability and data recovery, and how well it works with your specific applications. Good sizing includes researching an environment’s application set to see how it performs on a specific controller.
- Data protection. A carefully researched storage solution will thoroughly address the protection of vital data sets: look for features that migrate data between disk tiers, for backup capabilities, and even for site-to-site recovery strategies. Data protection also means data efficiency; how a controller stores and applies data within the VDI environment matters, so look for intelligent deduplication, compression, and good use of environment snapshots.
- IT staff and resource availability. Planning out and purchasing a controller for a virtualization project is only step one; internal IT staff must also learn how to use and operate this powerful tool. In many situations, it’s worth bringing in a partner or knowledgeable party to help the team get acquainted with the new technology. Missed optimization features or misconfigurations can have negative impacts on your entire environment.
- Budget concerns. Although budgeting is always a concern, saving money on storage means choosing the right components for the current environment and for the future one as well. If future virtualization or VDI expansion isn’t taken into account, IT managers may find themselves paying more in the long term to resolve performance or scalability issues. Furthermore, when looking at all-flash technologies, don’t assume that flash always comes at a premium: virtualization and VDI costs aren’t just upfront costs, and all-flash arrays are now far more affordable and provide excellent value for the entire ecosystem.
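The capacity and performance points above ultimately come down to arithmetic. The following is a minimal back-of-the-envelope sketch; every per-user figure in it (desktop size, IOPS, deduplication ratio, growth headroom) is an illustrative assumption, not vendor guidance, and should be replaced with measurements from your own desktop ecosystem:

```python
# Rough VDI storage sizing sketch. All per-user numbers below are
# hypothetical placeholders -- measure your own environment before
# making any purchasing decisions.

def vdi_storage_estimate(users, gb_per_desktop=40.0, steady_iops=10,
                         boot_iops=50, growth_factor=1.3,
                         dedup_ratio=0.35):
    """Return (usable TB needed, steady-state IOPS, boot-storm IOPS)."""
    raw_gb = users * gb_per_desktop
    # Deduplication/compression shrink largely identical desktop images.
    effective_gb = raw_gb * dedup_ratio
    # Headroom for future onboarding (the scalability point above).
    usable_tb = effective_gb * growth_factor / 1024
    return usable_tb, users * steady_iops, users * boot_iops

tb, steady, boot = vdi_storage_estimate(500)
print(f"Usable capacity needed: {tb:.1f} TB")
print(f"Steady-state IOPS: {steady}, boot-storm IOPS: {boot}")
```

Even a crude model like this makes the trade-offs concrete: the boot-storm figure is what typically pushes VDI deployments toward flash, while the deduplication ratio is what makes centralized desktop images cheaper to store than they first appear.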
There has been a resurgence in the deployment of virtual delivery systems. Organizations are working hard to onboard more users and provide a truly positive workforce experience, and the only way to accomplish this is with intelligent data center systems that optimize the entire ecosystem. When it comes to VDI, storage will always be a critical consideration. There are options around hybrid arrays, all-flash, and even cloud-based storage systems. Always understand your use cases, how your users will interact with their virtual desktops, and which resources they’ll need. All of this will help you select the right type of storage platform and create a positive user experience.
This content is underwritten by HP, NVIDIA and VMware.