
How to improve OLTP with SQL Server 2014 In-Memory

Basit Farooq explains how to improve the performance of OLTP applications using SQL Server 2014 In-Memory. He details what makes SQL Server 2014 In-Memory different from other in-memory database engines and how to use that to your advantage.

The SQL Server 2014 in-memory OLTP engine -- code name Hekaton -- is one of the key new performance-related architectural enhancements to the SQL Server database engine. It is designed to boost the performance of OLTP and data-warehouse workloads and to reduce processing times. The SQL Server 2014 in-memory OLTP engine is supported on the 64-bit Enterprise, Developer and Evaluation editions of SQL Server 2014.
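Before designing around the feature, it is worth confirming that the instance you are targeting actually includes the in-memory OLTP engine. A minimal check using the standard SERVERPROPERTY function:

-- Returns 1 in the IsXTPSupported column when the edition includes
-- the in-memory OLTP (Hekaton) engine.
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('Edition')        AS Edition,
    SERVERPROPERTY('IsXTPSupported') AS IsXTPSupported;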

Fully integrated into SQL Server

Unlike other vendors' in-memory database engines, the SQL Server 2014 in-memory OLTP engine is fully integrated into SQL Server. This means that a database can have both memory-optimized tables (Hekaton tables) and disk-based tables. The memory-optimized tables can be accessed using Transact-SQL in the same way as disk-based SQL Server tables. Transact-SQL queries can reference both memory-optimized tables and regular disk-based tables, and data in both types of tables can be updated simultaneously.
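The sketch below shows what that integration looks like in practice; the database name, file path and table names are illustrative. A memory-optimized table requires a filegroup with the MEMORY_OPTIMIZED_DATA attribute, after which it can be created and queried alongside ordinary disk-based tables:

-- Add a memory-optimized filegroup and container to an existing database.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
    TO FILEGROUP SalesDB_mod;
GO

-- A memory-optimized (Hekaton) table; SQL Server 2014 requires at least
-- one index, declared inline, on every memory-optimized table.
USE SalesDB;
GO
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Ordinary T-SQL joins the memory-optimized table to a disk-based table.
SELECT c.CustomerName, sc.CartId, sc.CreatedDate
FROM dbo.ShoppingCart AS sc            -- memory-optimized
JOIN dbo.Customers    AS c             -- disk-based
    ON c.CustomerId = sc.CustomerId;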

Figure 1. A high-level overview of the integration between Hekaton (SQL Server 2014 In-Memory OLTP) and SQL Server.

High concurrency by eliminating latches and locks

As with disk-based tables, transactions on memory-optimized tables are fully atomic, consistent, isolated and durable (ACID). The in-memory OLTP engine handles high-concurrency situations with data structures that are entirely lock- and latch-free, meaning there are no latches or spinlocks on performance-critical paths in the system. Instead, it uses an optimistic multi-version concurrency control (MVCC) technique to provide transaction isolation semantics, which avoids interference among transactions while retaining full ACID support. Any user process can therefore access any row in a table without acquiring latches or locks. This eliminates common scalability bottlenecks, providing frictionless scale-up and high concurrency to increase the performance of OLTP and data-warehousing workloads.
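One practical consequence, sketched below with illustrative names, is that transactions touching memory-optimized tables must run under a row-versioning isolation level. The MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT database option and the SNAPSHOT table hint are two standard ways to get there:

-- Elevate READ COMMITTED access to memory-optimized tables to SNAPSHOT
-- so existing queries do not need per-table hints.
ALTER DATABASE SalesDB
    SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
GO

-- Alternatively, request snapshot semantics explicitly per table.
SELECT CartId, CreatedDate
FROM dbo.ShoppingCart WITH (SNAPSHOT)
WHERE CustomerId = 42;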

T-SQL compiled to machine code

Stored procedures that reference only memory-optimized tables can be compiled into highly efficient machine code for further performance improvements. The compilation is performed by a C code generator and the Visual C compiler: the code generator converts the stored procedure into a C program, and the compiler builds that program into a DLL of machine code. Stored procedures that have been compiled to native machine code are referred to as "natively compiled stored procedures." A key difference between interpreted (disk-based) stored procedures and natively compiled stored procedures is that interpreted procedures are compiled at first execution, while natively compiled procedures are compiled when they are created. Because of this, many error conditions, such as arithmetic overflow, type conversion and some divide-by-zero conditions, can be caught at create time. In addition, natively compiled stored procedures maximize runtime performance for certain OLTP and data-warehousing workloads because the generated machine code contains only the instructions needed to run the request, and nothing more.
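A minimal natively compiled procedure against the illustrative ShoppingCart table above might look like this; the NATIVE_COMPILATION, SCHEMABINDING and BEGIN ATOMIC clauses are all required for this class of procedure:

-- Compiled to a DLL at CREATE time; it may reference only
-- memory-optimized tables and must declare an atomic block.
CREATE PROCEDURE dbo.usp_AddCartItem
    @CartId INT, @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId, CreatedDate)
    VALUES (@CartId, @CustomerId, SYSDATETIME());
END;
GO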

Figure 2. The architecture of the Hekaton (SQL Server 2014 In-Memory OLTP) compiler.

Optimized indexes for main memory

The indexes on memory-optimized tables are designed for data that resides entirely in main memory. Memory-optimized range indexes are structured as Bw-trees; the Bw-tree is a high-performance, latch-free B-tree variant. Index operations on memory-optimized indexes are not logged in the transaction log. Instead, SQL Server ensures durability by logging and checkpointing the table data to external storage, and it rebuilds the indexes in memory when the database is recovered.

Figure 3. Bw-tree architecture.

There are two types of memory-optimized indexes: non-clustered hash indexes and non-clustered range indexes. Non-clustered hash indexes are optimized for index seeks on equality predicates and also support full index scans; queries that use hash indexes return results in an unsorted order. Hash indexes do not have pages and are always fixed in size. Non-clustered range indexes are suitable for range scans and ordered scans. They support everything that hash indexes support, plus seek operations on inequality predicates, and they maintain sort order, so queries that use them return results in sorted order.
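The following sketch declares both index types on one illustrative memory-optimized table, together with the query shapes each one favors:

-- In SQL Server 2014, indexes on memory-optimized tables are declared
-- inline in CREATE TABLE and cannot be added afterwards.
CREATE TABLE dbo.OrderDetail
(
    OrderId   INT       NOT NULL,
    ProductId INT       NOT NULL,
    OrderDate DATETIME2 NOT NULL,

    -- Hash index: fixed-size bucket array, best for equality seeks.
    CONSTRAINT PK_OrderDetail PRIMARY KEY
        NONCLUSTERED HASH (OrderId, ProductId) WITH (BUCKET_COUNT = 2000000),

    -- Range (non-clustered) index: Bw-tree, supports inequality and ordering.
    INDEX IX_OrderDetail_OrderDate NONCLUSTERED (OrderDate)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Equality predicate on the full hash key: resolved by the hash index,
-- rows come back in no particular order.
SELECT ProductId
FROM dbo.OrderDetail
WHERE OrderId = 1001 AND ProductId = 7;

-- Range predicate plus ORDER BY: a fit for the non-clustered range index.
SELECT OrderId, ProductId
FROM dbo.OrderDetail
WHERE OrderDate >= '20140801'
ORDER BY OrderDate;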

For more information on memory-optimized indexes, see Introduction to Indexes on Memory-Optimized Tables. Figures are drawn from Hekaton: SQL Server's Memory-Optimized OLTP Engine.

This was first published in August 2014
