
The Promise of Non-volatile Random-access Memory for Storage

Non-volatile random-access memory, or NVRAM, is increasingly a technology to watch. Learn about how NVRAM works, use cases, and more.

With data growing so rapidly and becoming a critical component of decision-making, it’s unsurprising that organizations want faster access to data stores -- even large ones -- and better throughput. They also want lower latency and lower costs, and they don’t want to worry about issues like power interruptions causing delays or access problems.

Enter non-volatile random-access memory, or NVRAM.

What Is Non-volatile Random-access Memory?

Until recently, most memory technologies were volatile, including dynamic random-access memory (DRAM). If power is lost, so is the data stored in registers, caches, and memory.

NVRAM, also called storage-class memory or persistent memory, doesn’t lose data when a system loses power. NVRAM is typically very fast (although slightly slower than DRAM) and offers high capacity and significant scalability. It’s also less expensive than DRAM and has lower latency than flash storage.

“Many storage systems require caching and checkpointing for acceleration, which has typically been done through flash,” explained Arthur Sainio, co-chair of the SNIA Persistent Memory and NVDIMM Special Interest Group and a director at SMART Modular Technologies. “By moving caching and checkpointing into persistent memory, you are speeding up the process, increasing throughput and reducing latency.”
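The checkpointing idea can be sketched in a few lines. The example below writes a counter into a memory-mapped region and flushes it in place; on real persistent-memory hardware the region would live on a pmem device (for example, a DAX-mounted file), but here an ordinary file -- the name checkpoint.bin is just a placeholder -- stands in so the sketch runs anywhere:

```python
# Hedged sketch: checkpointing state into a memory-mapped region.
# An ordinary file stands in for a persistent-memory region here;
# the file name and layout are illustrative only.
import mmap
import os
import struct

REGION = "checkpoint.bin"   # stand-in for a persistent-memory region
SIZE = 4096

def open_region(path):
    """Map a fixed-size region; reopening it restores prior contents."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, SIZE)
    return fd, mmap.mmap(fd, SIZE)

def checkpoint(mem, counter):
    # Write the state directly into the mapped region, then flush it.
    # On pmem hardware this flush would be a cache-line flush + fence.
    struct.pack_into("<Q", mem, 0, counter)
    mem.flush()

def restore(mem):
    return struct.unpack_from("<Q", mem, 0)[0]

fd, mem = open_region(REGION)
checkpoint(mem, 42)
print(restore(mem))  # -> 42
mem.close()
os.close(fd)
```

Because the state is updated in place and flushed, a restart can simply map the same region and pick up where it left off, rather than replaying a log from slower storage.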

Non-volatile random-access memory can also enable a powered-down system to come back up quickly because the system doesn’t have to move saved data from storage back to memory.

What Is NVRAM Used For?

The benefits of persistent memory translate into interesting use cases in the storage arena.

For example, storage devices are often too slow for applications that require data persistence, but moving such applications from a storage device to persistent memory can resolve many issues. Take the case of a time-series database: typically, it spills to a solid-state drive (SSD) when memory requirements become too great. Persistent memory can reduce that latency, which in turn improves performance.

Another example is a data orchestration service with a tiered cache, providing the option to store cache data in DRAM and SSDs. With the option to cache in persistent memory, performance improves significantly.

In other cases, data structures or applications must reside in memory because they require fast access. But sometimes these applications can run out of memory or become too expensive to run. Using persistent memory can solve the problem by improving performance and lowering costs. For example, an in-memory database that uses persistent memory requires fewer nodes to store the data because each system has a larger pool of memory.
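As a rough illustration of that larger memory pool, the sketch below keeps a fixed-size record table in a memory-mapped region rather than in ordinary DRAM-resident structures. The file name and record layout are invented for the example; on a system with persistent memory, mapping a file on a DAX mount would let loads and stores reach the media directly:

```python
# Hedged sketch: a fixed-size (key, value) record table kept in a
# memory-mapped region. "table.pmem" and the record layout are
# illustrative, not from any real database.
import mmap
import os
import struct

RECORD = struct.Struct("<Q d")        # one (key, value) pair per slot
PATH, NRECS = "table.pmem", 1024

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, RECORD.size * NRECS)
table = mmap.mmap(fd, RECORD.size * NRECS)

def put(slot, key, value):
    # Store the record directly in the mapped region.
    RECORD.pack_into(table, slot * RECORD.size, key, value)

def get(slot):
    return RECORD.unpack_from(table, slot * RECORD.size)

put(0, 101, 3.5)
print(get(0))   # -> (101, 3.5)
```

Because the table lives in the mapped region rather than per-process DRAM, it persists across restarts and is bounded by the (larger, cheaper) persistent-memory pool instead of each node’s DRAM.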

Persistent memory also is a key enabler of data portability, which has become more important in recent years, said Ed Fiore, vice president and chief architect for platforms at NetApp.

Persistent memory technologies enable file system metadata to be separated from other types of metadata. “Typically, all of the data and metadata in a file is written to a storage device, but going forward, people are seeing the value of breaking these up to take better advantage of file system metadata,” Fiore explained. “So, you’re not just using the cache as a way to store data quickly but as a way to actually hold metadata. That means you can start searching the data and adding value to it.”

Types of Persistent Memory

One of the most prominent types of persistent memory is non-volatile dual in-line memory module (NVDIMM). Essentially, NVDIMM combines non-volatile NAND flash memory with DRAM. Because it has very low latency, it is commonly used in storage servers for write acceleration and commit logs. And because data has been saved in the NVDIMM, it is very useful for data recovery operations.

An increasingly popular form of persistent memory is magnetic random-access memory (MRAM), which stores data in magnetic states rather than the electrical charges used by static random-access memory (SRAM) and DRAM. This approach yields power savings and smaller chip sizes.

Over time, MRAM could replace embedded SRAM and NOR flash because of its lower power requirements, lower cost, and higher density, said Tom Coughlin of Coughlin Associates, a digital storage consulting and analyst firm. MRAM has real potential for storage, especially for caches and buffers. IBM, for example, uses MRAM in some of its flash-based systems and as a cache in its SSDs.

Yet another up-and-coming option is phase-change random-access memory (PCRAM), which stores data by switching the state of its material between amorphous and crystalline phases. PCRAM is considered less expensive and more scalable than conventional flash memory, making it a viable option for replacing parts of the DRAM-based DIMM and some SSDs. PCRAM made a big splash in 2015, when Intel and Micron based their 3D XPoint technology -- sold under Intel’s Optane brand -- on it.

Two other persistent memory technologies could become more important over time:

  • Resistive RAM (ReRAM) stores data by switching a material between different levels of electrical resistance, using the movement of charged atoms (ions) rather than electrons. It is particularly useful for high-density applications.
  • Ferroelectric RAM (sometimes called FeRAM, F-RAM, or FRAM) shares some similarities with DRAM, but it uses electric polarization of a ferroelectric layer to achieve non-volatility. It isn’t affected by power disruption or magnetic interference, making it particularly reliable, and it has faster write performance and much higher maximum read/write endurance than flash. At the same time, it has much lower storage density, lower capacity, and higher cost than flash.

Conclusion: Non-volatile Memory Market Still Maturing

While non-volatile memory offers many benefits for storage applications, the field is far from mature. In fact, many of the early players, like Intel and Micron, which first came to market with proprietary 3D XPoint/Optane technology, now appear to be embracing the open CXL (Compute Express Link) standard. CXL is an interconnect standard that coherently links processors, accelerators, smart NICs, and memory devices. Eventually, CXL may make it possible to have both volatile and non-volatile memory in the same environment, Coughlin said.

While it’s still relatively early for non-volatile random-access memory and storage, the goal is clear. “At the end of the day, what the industry needs in general is a non-volatile storage device as large as it can get and as close to the processor as possible, and we’re getting there,” Fiore said.

It’s not too early to dip a toe into the persistent memory waters, Coughlin added. “It makes sense to look at persistent storage now -- even though there are new architectures still being developed,” he said. “Even a little bit of persistence can be very useful.”
