In-memory SQL Server: The implications of integration

How is in-memory SQL Server different from SQL Server with a bunch of RAM? SQL Server guru Serdar Yegulalp explains what to know before you install.

It's easy to be misled by terminology. When I first heard the term "in-memory database" applied to SQL Server, I made the same mistake many other people have. I thought, "How is that different from an instance of SQL Server with tons of RAM and its buffer pooling turned up to 11?" But in-memory SQL Server -- a major change, codenamed Hekaton and planned for the next iteration of the product -- isn't like that. And why it isn't has implications for how such an instance of SQL Server would fit in with the rest of your setup.

First, the basics. The label conveys the fundamental idea behind an in-memory database pretty accurately: It's a database system where both the engine and as much of the data as possible are kept directly in RAM. That dramatically increases the speed of transactions, but it's only possible in one of two circumstances:

  1. The database and engine are small enough to fit into RAM by default
  2. The system contains enough RAM to hold both database and engine

With memory getting cheaper all the time and the average server sporting more of it, scenario #2 now comes up far more often than scenario #1 ever did. The sheer size of almost any professional SQL Server deployment all but guarantees that scenario #1 rarely happens.
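
If you want a rough sense of where your own databases fall, SQL Server's catalog views can tell you. The following T-SQL is a minimal sketch of that comparison -- my own illustration, not anything from Microsoft's announcements. It sums the current database's data file sizes and sets them against the machine's physical RAM, ignoring engine overhead.

    -- Rough sizing check: could this database's data files fit in physical RAM?
    -- sys.master_files reports size in 8 KB pages; engine overhead is ignored.
    SELECT SUM(CAST(size AS bigint)) * 8 / 1024 AS db_size_mb
    FROM sys.master_files
    WHERE database_id = DB_ID()
      AND type_desc = 'ROWS';

    SELECT total_physical_memory_kb / 1024 AS server_ram_mb
    FROM sys.dm_os_sys_memory;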

But it isn't just memory that's grown cheaper, faster and more plentiful. CPUs have also become dramatically faster and more parallel, and in order to keep that silicon from just sitting around doing nothing, more of what the database does is being moved into memory whenever possible.

The forthcoming in-memory enhancements to SQL Server are being engineered to take advantage of all of these trends. In a TechNet blog post titled "The coming in-memory database tipping point," Microsoft's David Campbell lays out some of the changes, including how the database itself is stored differently in memory (via a columnar, rather than a row-based, model).

Microsoft has in fact already adopted some of these methods for the PowerPivot add-on for Microsoft Excel. "In SQL Server 2012," Campbell explains, "this ships as the xVelocity in-memory analytics engine as part of SQL Server Analysis Services." In the long run, these components are going to be used in other parts of SQL Server, so that the power unleashed by in-memory databases can be used outside of the highly vertical solutions that currently exploit it most, such as data warehousing or analysis.
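
On the database engine side, the same column-oriented approach already surfaces in SQL Server 2012 as the columnstore index. As a minimal sketch (the fact table and its columns here are hypothetical), the index below keeps a compressed, column-at-a-time copy of the listed columns instead of storing whole rows side by side:

    -- Hypothetical warehouse fact table, stored row-by-row as usual
    CREATE TABLE dbo.FactSales (
        SaleDate  date  NOT NULL,
        ProductID int   NOT NULL,
        Quantity  int   NOT NULL,
        Amount    money NOT NULL
    );

    -- A nonclustered columnstore index (the xVelocity format) keeps a
    -- column-oriented, compressed copy of these columns for fast scans.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_cs
        ON dbo.FactSales (SaleDate, ProductID, Quantity, Amount);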

This brings up the first question I've been building toward: Would all of your SQL Server installations need to be upgraded to take full advantage of in-memory processing? The short answer is "probably," with three caveats:

1. In-memory database solutions do require some re-architecting of your existing database

The columnar (rather than row-based) layout that in-memory databases use will require some work on your part to set up. In one of the case studies Microsoft released about SQL Server 2012's existing in-memory features, the company noted that while some changes to the database were needed, they amounted to little more than "chang[ing] some metadata values." Depending on how your data is already set up, the amount of work may be minimal, but don't assume it will be zero.
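
For a concrete taste of that rework: in SQL Server 2012, a table becomes read-only while a nonclustered columnstore index is active, so an existing load job has to be restructured around it. A sketch, reusing the hypothetical table from above:

    -- dbo.FactSales is read-only while its columnstore index is active,
    -- so a load job must disable the index first...
    ALTER INDEX ix_FactSales_cs ON dbo.FactSales DISABLE;

    INSERT INTO dbo.FactSales (SaleDate, ProductID, Quantity, Amount)
    VALUES ('20121201', 42, 3, 29.97);

    -- ...and rebuild it once the load finishes.
    ALTER INDEX ix_FactSales_cs ON dbo.FactSales REBUILD;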

2. The larger the database, and the more it fits the scenario, the bigger the payoff

You're likely to see the biggest improvements -- the ones most worth the effort of retooling everything to use in-memory processing -- from the largest workloads that are analysis- and data-warehouse-centric. Campbell notes that the columnar structures of in-memory databases "are not optimal for transaction processing workloads" -- the conventional CRUD (create, read, update, delete) workloads most of us learned about in DBMS 102. He does note that in time, in-memory technology will be expanded to cover these more conventional scenarios, but it won't happen right away.
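
The difference is easy to see in the shape of the queries themselves. Sticking with the hypothetical table from earlier: the aggregation below scans two columns across every row, which is exactly what a columnar layout accelerates, while the single-row update underneath gains nothing from it (and under SQL Server 2012's columnstore would in fact require the disable-and-rebuild dance shown above).

    -- Analysis-style workload: a scan-and-aggregate over two columns,
    -- the case where column-at-a-time storage and compression pay off.
    SELECT ProductID, SUM(Amount) AS TotalSales
    FROM dbo.FactSales
    GROUP BY ProductID;

    -- OLTP-style CRUD workload: touches one row at a time -- the kind of
    -- work Campbell says columnar structures "are not optimal" for.
    UPDATE dbo.FactSales
    SET Quantity = 4
    WHERE ProductID = 42 AND SaleDate = '20121201';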

3. Consequently, not everything can -- or needs to -- be moved to an in-memory solution at once

The biggest and most processor-intensive workloads, such as data analytics, are the place to start; moving them to an in-memory arrangement should get priority. Even if you haven't started formally planning a migration to another version of SQL Server, it helps to get up to speed now on how the technology works and to start thinking about how you'll re-architect your databases when that migration comes.
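
As for identifying those workloads, the server already keeps score. One rough way to rank candidates -- my own suggestion, not something the article or Microsoft prescribes -- is to pull the most CPU-hungry cached plans from the sys.dm_exec_query_stats DMV:

    -- Rank cached query plans by total CPU time to spot the processor-
    -- intensive workloads worth evaluating for in-memory treatment first.
    SELECT TOP (10)
        qs.total_worker_time / 1000 AS total_cpu_ms,
        qs.execution_count,
        st.text AS query_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;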

This was first published in December 2012
