Flash Storage: A Guide to Best Practices

There are circumstances when an organization should use the faster but more expensive flash storage, and circumstances when it shouldn't. Here's a look at when to go all-flash, when to stick with disk or tape, and when to mix and match.

Karen D. Schwartz, Contributor

February 17, 2021


“Flash will solve all of your storage problems.” “Flash is the new hard disk drive.” “Flash is cheaper, better, faster.” Nearly every IT professional has heard these claims repeatedly over the past few years. If we are to believe the vendors, flash storage can do everything but slice bread.

For organizations with mission-critical workloads and performance- and time-sensitive service-level agreements (SLAs), flash has become a lifeline over the past several years. It is about 100 times faster than hard drives, resulting in very low latency, and can accommodate a lot of capacity in a small form factor. At the same time, it is much more expensive than other types of storage.

Over the years, however, flash storage has become more affordable, thanks in part to its evolution from a single bit per cell to quad-level cell (QLC), at four bits per cell. This has essentially improved density by a factor of four. Yet flash today is still as much as 15 times more expensive than traditional hard drives. If you’re dealing with 50 petabytes of data, 2 cents versus 20 cents per gigabyte makes a world of difference.
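The scale of that gap is easy to check with back-of-the-envelope arithmetic, using the article's illustrative per-gigabyte prices (not current market quotes):

```python
# Illustrative cost comparison at 50 PB, using the article's example
# prices of $0.02/GB for hard disk and $0.20/GB for flash.
capacity_gb = 50 * 1_000_000          # 50 PB expressed in gigabytes
disk_cost = capacity_gb * 0.02       # $1,000,000
flash_cost = capacity_gb * 0.20      # $10,000,000

print(f"Disk:  ${disk_cost:,.0f}")   # Disk:  $1,000,000
print(f"Flash: ${flash_cost:,.0f}")  # Flash: $10,000,000
```

At 50 petabytes, an 18-cent-per-gigabyte difference becomes a nine-million-dollar difference, which is why capacity-heavy tiers so often stay on disk.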

There are many types of flash storage available today, from basic storage arrays, to SSD flash drives and all-flash arrays, to NVMe-based flash, to hybrid systems that combine traditional hard disk drives with SSD flash drives.

Finding the right balance between more expensive flash storage and traditional hard disk storage can be tricky. Here are some points to consider:

If performance and low latency are your top priorities, you probably do need all-flash storage. If sub-millisecond latency is important for your critical applications, DevOps or analytics, flash is the answer. That's even more the case today, with the growth of NVMe and NVMe over Fabrics (NVMe-oF), which make flash more accessible to workloads and applications and add performance headroom.

“Before, if you had an environment that was 80% utilized, you didn’t want to run additional queries or insights on your data because you were worried about the production apps,” said Scott Sinclair, a senior analyst at Enterprise Strategy Group. “With flash and NVMe, you can run at 40% utilization and have plenty of headroom to load on queries.”

If you are prioritizing digital business initiatives such as automation and AI, you probably also want to go all-flash. Data is growing at warp speed, and instead of simply storing that data, companies are actually using it to gain insights, improve productivity, boost customer service and bolster cybersecurity. As organizations spend more on artificial intelligence and automation, the majority are turning to on-premises all-flash object storage to power heavy workloads of unstructured data, according to a recent report.

For very large data sets, especially those that aren't mission-critical or performance-sensitive, consider hard disk and maybe even some tape. When you're talking about storing tens or hundreds of petabytes, figuring out a way to use disk, and even some tape, can dramatically change the equation, Sinclair said. Innovation in disk technology has continued, and today, there are massive hard drives with modern features. One example is Western Digital's 18TB Ultrastar DC HC550, which contains nine platters, each capable of storing 2TB of data. The same is true with tape. Recently, for example, IBM demonstrated magnetic tape technology with a capacity of 580TB, representing 32 times the capacity of LTO-Ultrium, the current standard.

Don’t be afraid to mix and match. Even if you do choose disk and/or tape for some storage options, you can mix that with at least a little flash. For example, using an NVMe-based flash device as a cache to hard drives is an increasingly popular way to go.
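As a rough illustration of that caching pattern, here is a minimal sketch (all names and data are made up; the dictionaries simply stand in for the flash and disk tiers) of a read cache that keeps recently accessed blocks on the fast device:

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: a small fast tier (standing in for NVMe
    flash) in front of a large slow tier (standing in for hard disk)."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store      # dict simulating the disk tier
        self.capacity = capacity          # blocks that fit in "flash"
        self.cache = OrderedDict()        # block_id -> data, LRU order

    def read(self, block_id):
        if block_id in self.cache:        # cache hit: serve from flash
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing[block_id]     # cache miss: go to disk
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

disk = {i: f"block-{i}" for i in range(100)}
cache = ReadCache(disk, capacity=4)
for b in [1, 2, 3, 1, 1, 2]:
    cache.read(b)
print(list(cache.cache))  # hot blocks now resident: [3, 1, 2]
```

Production caching layers are far more sophisticated, but the principle is the same: the working set lives on flash while the bulk of the data stays on cheaper disk.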

Another option is tiering. “Within each given workload, there are times when the data has to be hot and when the data cools down over time,” said Jeff Baxter, a senior director at NetApp. “When data is first created or being accessed, it’s usually hot and needs the performance of flash. So now these systems have tiering capabilities, so you can have data on flash when it’s hot or being heavily accessed and then the data transparently tiers down to systems based on disk drives.”
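The tiering behavior Baxter describes can be sketched as a simple age-based placement policy (the threshold and function names here are hypothetical; real arrays track data "heat" in far more granular ways):

```python
import time

HOT_WINDOW = 3600  # seconds; data untouched longer than this "cools down"

def assign_tier(last_access_ts, now=None):
    """Place data on flash while it is hot, on disk once it cools."""
    now = now if now is not None else time.time()
    age = now - last_access_ts
    return "flash" if age <= HOT_WINDOW else "disk"

now = 10_000
print(assign_tier(9_500, now))   # accessed 500 s ago  -> flash
print(assign_tier(2_000, now))   # accessed 8,000 s ago -> disk
```

The key property Baxter highlights is that this movement is transparent: applications keep reading the same data while the system shifts it between tiers underneath.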

Evaluate cost carefully; things have changed. The cost of flash has definitely declined over time, but it's still higher than that of other forms of storage. Even so, purchase price is just one factor in the decision-making process. There is plenty else to consider, including performance as well as capex and opex. It's also important to evaluate "softer" costs that can make a difference. For example, flash storage is often software-defined, while other storage formats may require additional software. Hardware also tends to fail after some number of years and must be maintained, and flash can often extend the depreciation period.

Flash systems also typically apply inline data compression, resulting in space and power savings. Finally, flash vendors tend to make a point of offering easy management and flexibility.

“It used to be that if you wanted to use flash, you had to buy a device and plug it into your server,” said Surya Varanasi, CTO of StorCentric. “Today, the devices are remote-attached to the server, so you can use it where needed and then repurpose it for something else, all through software.”

As flash technology continues to become more dense, it will allow for even more storage in fewer rack units. While QLC-based flash is the norm today, some vendors are already talking about penta-level cell (PLC) flash, which could result in flash drives that reach up to 20TB. As that happens, flash should become even more cost-effective from a dollar-per-gigabyte standpoint. At the same time, hard drives will continue to become more dense and reach a lower dollar-per-gigabyte cost.

“It’s an evolution for everything in this realm,” Baxter said. “For now, it’s about combining these technologies together for any next-generation data center, at least for the next several years.” 

About the Author

Karen D. Schwartz


Karen D. Schwartz is a technology and business writer with more than 20 years of experience. She has written on a broad range of technology topics for publications including CIO, InformationWeek, GCN, FCW, FedTech, BizTech, eWeek and Government Executive.

