The In-Memory Revolution

Learn about SQL Server 2014's In-Memory OLTP database engine

If you look back at the major performance enhancements that Microsoft has delivered in its recent SQL Server releases, it's clear that in-memory technologies have defined the company's trajectory for the product. The trend began with the VertiPaq technology that Microsoft first released with Power Pivot and SQL Server 2008 R2. Power Pivot used in-memory compression to let Microsoft Excel perform business intelligence (BI) analysis over multimillion-row data sets.

In-Memory Evolution

That in-memory technology later evolved into the columnstore index that Microsoft released with SQL Server 2012. Unlike the original VertiPaq engine, which was hosted in Excel, the SQL Server 2012 columnstore index is hosted in the SQL Server instance itself. Designed for data-warehousing workloads, the columnstore index delivered anywhere from 10x to 100x performance improvements by compressing column data and moving it into memory. With SQL Server 2014, Microsoft continues to evolve its in-memory technologies by applying them to the online transaction processing (OLTP) performance problem. The new In-Memory OLTP database engine, code-named Hekaton, adds an all-new processing engine to SQL Server 2014.
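
To make the evolution concrete, here's a minimal sketch of what the SQL Server 2012 feature looks like in T-SQL; the fact table and column names are hypothetical:

-- Hypothetical data-warehouse fact table. In SQL Server 2012, a nonclustered
-- columnstore index compresses the listed columns into column segments
-- (and leaves the table read-only until the index is dropped or disabled).
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount, SalesQuantity);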

Several factors are driving this in-memory revolution. First, memory has become far less expensive. Companies can afford to add large amounts of RAM to their servers, and today's 64-bit platforms can address as much as 4TB of RAM. Second, the number of CPU cores has increased dramatically. Quad-core CPUs are now the norm, and eight-core and twelve-core CPUs are not uncommon; more cores means more threads that the system can run at any given time. Third, raw CPU clock speed has plateaued: although core counts keep climbing, maximum clock speed hasn't increased over the past several CPU generations. Considering these factors, you can see why Microsoft concluded that the biggest performance gains would come from taking better advantage of memory.

Not as Easy as It Seems

However, taking advantage of memory isn't as simple as merely moving your databases into RAM. As David DeWitt, technical fellow at Microsoft, pointed out in his keynote address at PASS Summit 2013, the latches and locks that the database engine must use to maintain data consistency limit how much data, and how many threads, the engine can process. Simply moving a database into memory doesn't change the mechanisms it uses to maintain data consistency. What was really needed to take advantage of in-memory technology was an all-new lockless engine design, and that is exactly what the new In-Memory OLTP database engine is. It resides inside a SQL Server instance alongside the classic relational engine and the newer columnstore engine (formerly code-named Apollo), essentially giving SQL Server 2014 three separate database engines.
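
Because the new engine lives inside the same instance and the same databases, getting started is mostly a matter of preparing an existing database for memory-optimized objects. As a rough sketch (the database name and container path below are placeholders), you add a memory-optimized filegroup and a container to it:

-- Add a filegroup that will hold memory-optimized data, plus a container.
-- SalesDB and the file path are illustrative placeholders.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = N'SalesDB_InMemory_Container',
              FILENAME = N'C:\Data\SalesDB_InMemory_Container')
    TO FILEGROUP SalesDB_InMemory;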

The Microsoft Approach

The In-Memory OLTP database engine was designed from the ground up around a lock-free architecture: it uses no locks, and there are no internal latches. Instead, it relies on a new optimistic multi-version concurrency control mechanism. When a row in a shared buffer is modified, the engine creates an entirely new, timestamped version of that row. The process is very fast because it's done entirely in memory. The engine then analyzes and validates any updated rows before committing them.
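
To give a feel for how this surfaces to the developer, here's a minimal sketch of a memory-optimized table declaration in SQL Server 2014; the table, columns, and bucket count are illustrative assumptions, not part of any particular workload:

-- A memory-optimized table; the nonclustered hash index supports the
-- lock- and latch-free, multi-versioned row access described above.
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Specifying DURABILITY = SCHEMA_AND_DATA keeps the data recoverable after a restart; SCHEMA_ONLY trades durability for even faster, purely transient storage.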

This design is faster and more scalable because there are no locks or other waits to stop the processor from running at full speed. However, it leaves a number of discarded row versions in memory, so Microsoft implemented a new lock-free garbage-collection process that periodically cleans up the unneeded rows.

In conjunction with this lock-free design, Microsoft also introduced a compilation process that takes interpreted T-SQL and compiles it into native code. The goal is to reduce the number of instructions that the CPU must execute to process a query; a shorter code path equates to faster-executing code. In his keynote, DeWitt stated that compilation cuts the instructions the in-memory engine must execute by roughly a factor of 2.5. The In-Memory OLTP engine can also execute interpreted queries against Hekaton tables, but not as quickly as natively compiled code. DeWitt estimated that, in many cases, the In-Memory OLTP database engine can deliver a 10x to 30x performance improvement. SQL Server 2014 will also include wizards to help you identify which tables to migrate to the In-Memory OLTP engine and which stored procedures to compile natively.
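
As a rough sketch of what native compilation looks like in T-SQL, here is a hypothetical natively compiled procedure that inserts into the illustrative dbo.ShoppingCart table shown earlier:

-- A natively compiled stored procedure (compiled to machine code when it's
-- created). SCHEMABINDING, an EXECUTE AS context, and the ATOMIC block are
-- required for native compilation in SQL Server 2014.
CREATE PROCEDURE dbo.usp_AddCartItem
    @CartId INT,
    @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, SYSDATETIME());
END;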

Looking Forward

It’s clear that in-memory technologies are the future of performance in the database world. The new In-Memory OLTP database engine clearly positions SQL Server 2014 as the enterprise database market leader in in-memory technologies.
