3 Quick Ways to Optimize Storage - Without Adding Any Hardware
There must be better ways to improve storage performance without adding more disk. Too often, administrators resort to more shelves or controllers to resolve storage challenges. Here are three quick ways to optimize storage without needing to add hardware.
May 11, 2015
During a recent walk through a customer's data center, I had a very interesting conversation. The topic revolved around deploying a series of new, quite I/O-intensive applications. The saddened storage architect told me he was being asked to do the last thing he really wanted to do: add more hardware.
It's not that he didn't want to add another amazing controller; he was just tired of adding shelf after shelf without doing much real optimization. Sure, there was some. However, direct integration with applications, and even with the cloud layer, certainly wasn't happening. So, with that in mind, what are some ways you can ask your storage environment to do more for you without adding more disk? Here are three quick ways to help you optimize storage.
Using your hypervisor. Your hypervisor is a lot more powerful than you think! Technologies like XenServer and VMware offer a lot of great controls over your storage architecture. You can now create flash-optimized storage architectures that deliver extremely high performance with consistently fast response times. Here's another one: thin provisioning from the hypervisor uses virtualization to give the appearance of having more physical resources than are actually available, allowing space to be easily allocated to servers on an as-needed, scale-as-you-go basis. And yet another example is storage monitoring and alerting. Proactively monitoring your storage resources allows you to find issues before they become real problems. These alerts can be set for thresholds around performance, capacity, read/write access, and much more.
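The threshold-based alerting idea above can be sketched in a few lines. To be clear, the metric names, values, and thresholds below are hypothetical illustrations, not tied to any particular hypervisor or array API:

```python
# Minimal sketch of threshold-based storage alerting.
# Metric names, values, and thresholds are hypothetical examples.
def check_storage(metrics, thresholds):
    """Return an alert string for every metric that exceeds its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {name} at {value} exceeds threshold {limit}")
    return alerts

# Example: capacity used (%) and read/write latency (ms)
metrics = {"capacity_pct": 87, "read_latency_ms": 4.2, "write_latency_ms": 11.0}
thresholds = {"capacity_pct": 80, "read_latency_ms": 10, "write_latency_ms": 10}

for alert in check_storage(metrics, thresholds):
    print(alert)
```

In practice the metrics would come from your hypervisor's monitoring interface rather than a hard-coded dictionary; the point is that the alerting logic itself is simple once thresholds are defined.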
Look for hidden features. Have you taken a look at your new policies? What about the latest feature releases and updates? EMC, NetApp, and other major vendors completely understand that optimization is now the top priority for storage architects. They've built in powerful new features that allow for greater data agility and control. New ways to compress data, and to control both block- and file-level storage, can significantly reduce how much storage is actually needed. Don't look only at hidden features, either; out-of-the-box configurations absolutely need to be reviewed as well. For example, what is your current deduplication rate? Is it 40%? Maybe 80% on some volumes? When was the last time you looked at the efficiency of your deduplicated volumes? Rarely examined features, and existing features that impact storage, should all be reviewed. As storage requirements change, your policies and configurations have to adapt as well.
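Checking those deduplication rates comes down to comparing logical (pre-dedup) capacity against physical (post-dedup) capacity. A rough sketch, using made-up volume names and capacity numbers:

```python
# Hedged sketch: deduplication savings per volume, computed as the fraction
# of logical data eliminated. Volume names and sizes are illustrative only.
def dedup_savings(logical_gb, physical_gb):
    """Fraction of logical capacity eliminated by deduplication (0.0-1.0)."""
    if logical_gb <= 0:
        return 0.0
    return 1.0 - (physical_gb / logical_gb)

volumes = {
    "vm_datastore": (4000, 2400),  # 4 TB logical stored in 2.4 TB physical
    "user_shares": (1000, 200),    # 1 TB logical stored in 200 GB physical
}

for name, (logical, physical) in volumes.items():
    pct = dedup_savings(logical, physical) * 100
    print(f"{name}: {pct:.0f}% deduplicated")
# vm_datastore: 40% deduplicated
# user_shares: 80% deduplicated
```

Real arrays report these numbers directly, but running the math yourself per volume makes it obvious which datasets are pulling their weight and which policies need revisiting.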
Use the cloud! New ways to extend into the cloud have allowed data and storage engineers to do great things with their environments. You can literally set policies where certain thresholds immediately point new users to a cloud-based environment. Dynamic load balancing and data replication mechanisms allow for transparent data migration, and OpenStack, CloudStack, and Eucalyptus all provide powerful connection mechanisms for storage extension. This way, you can specify exactly how much storage you want to keep internal and outsource the rest. Over the years, pay-as-you-grow models have become a lot more attractive from both a pricing and a technology perspective: APIs are more powerful, it's easier to extend your environment, and hybrid cloud models are more popular than ever. Cloud providers now allow you to pay for only the space you actually use. This is great for storage bursts, for offloading a piece of your storage architecture, and even for creating new, completely cloud-based solutions for your business.
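A threshold-driven placement policy like the one described could be sketched as follows. The 80% cutoff, the capacity figure, and the tier names are assumptions for illustration; they don't reflect any particular vendor's policy engine:

```python
# Hedged sketch of a capacity-threshold cloud tiering policy: once local
# utilization would cross a limit, new allocations are directed to cloud
# storage instead. Threshold and capacity values are assumptions.
LOCAL_CAPACITY_GB = 10_000
CLOUD_THRESHOLD = 0.80  # start offloading past 80% local utilization

def placement(used_gb, new_gb):
    """Decide where a new allocation lands under the threshold policy."""
    if (used_gb + new_gb) / LOCAL_CAPACITY_GB <= CLOUD_THRESHOLD:
        return "local"
    return "cloud"

print(placement(7000, 500))  # stays under 80% -> local
print(placement(7900, 500))  # would exceed 80% -> cloud
```

The appeal of the pay-as-you-grow model is exactly this: the "cloud" branch costs nothing until the policy actually sends data there.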
Consider this: the latest Cisco Global Cloud Index report indicates direct growth in global data center and cloud traffic. For example:
Annual global data center IP traffic will reach 8.6 zettabytes (715 exabytes [EB] per month) by the end of 2018.
Overall data center workloads will nearly double (1.9-fold) from 2013 to 2018; however, cloud workloads will nearly triple (2.9-fold) over the same period.
By 2018, 53 percent (2 billion) of the consumer Internet population will use personal cloud storage, up from 38 percent (922 million users) in 2013.
Globally, consumer cloud storage traffic per user will be 811 megabytes per month by 2018, compared to 186 megabytes per month in 2013.
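As a quick sanity check on those figures, assuming the report's decimal units (1 ZB = 1,000 EB), the annual and monthly traffic numbers line up, and the per-user growth is striking:

```python
# Sanity-checking the cited Cisco figures (decimal units: 1 ZB = 1000 EB).
annual_zb = 8.6
monthly_eb = annual_zb * 1000 / 12
print(f"{monthly_eb:.0f} EB/month")  # ~717, consistent with the cited 715 EB

# Per-user consumer cloud storage traffic, 2013 -> 2018
growth = 811 / 186
print(f"{growth:.1f}x growth")  # roughly 4.4x in five years
```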
All of this data will have to live somewhere, and for the most part, that somewhere is your data center. More applications, more mobility, and ever-evolving users have created new demands around storage resource utilization. Fortunately for us, virtual and software-defined technologies are making it much easier for storage and data to do great things.
With all of that in mind, there is still such a diverse array of applications and storage solutions that it's incredibly difficult to nail down just three ways to optimize storage. New ways to deliver efficiency, better physical storage controls, and software-based optimizations are all designed to make your storage work better for you. Remember, as you build out your own storage environment, there are often great ways to optimize your data control methodology beyond just adding hardware. Using the cloud, or even your existing features, is a really great way to quickly optimize your storage environment.
Read more about: Data Center Knowledge