Microsoft has incorporated in-memory technologies into its database platform since SQL Server 2008 R2. The first was the VertiPaq in-memory columnstore engine, the driving force behind PowerPivot for Excel and SharePoint. The next generation of that engine, the xVelocity in-memory analysis engine, brought support for the SQL Server Analysis Services (SSAS) tabular model. SQL Server 2012 also introduced the memory-optimized columnstore index, a boon for data warehouses and analytic queries. The most recent addition is In-Memory Online Transaction Processing (OLTP), a memory-optimized engine within the database engine that is tailored for highly concurrent OLTP workloads.
VertiPaq in-memory columnstore engine
SQL Server made its first foray into in-memory technologies in SQL Server 2008 R2 with the release of PowerPivot for Excel and SharePoint. Both used the VertiPaq in-memory columnstore engine to manage large data sets. The VertiPaq engine supports columnar storage, in-memory caching, advanced compression and sophisticated aggregation algorithms. These capabilities enabled information workers to build reports based on millions of rows of data drawn from multiple heterogeneous sources.
With PowerPivot for Excel, a user can import large data sets into an Excel worksheet from SQL Server databases, Microsoft Access files, third-party database management systems, data feeds, text files, Excel workbooks, SSAS cubes and SQL Server Reporting Services reports. PowerPivot also provides tools for manipulating data and creating comprehensive reports based on Excel's built-in features.
Users can store PowerPivot workbooks in SharePoint content databases. PowerPivot for SharePoint works with Excel Services in SharePoint Server to provide an infrastructure that lets users view and modify PowerPivot workbooks. Analysis Services manages the data in the workbooks and makes it available to other components. Analysis Services also interfaces with the PowerPivot System Service, which communicates with Excel Calculation Services and the PowerPivot Web service.
xVelocity in-memory analysis engine
With SQL Server 2012, Microsoft renamed the VertiPaq in-memory columnstore engine the xVelocity in-memory analysis engine. Microsoft enhanced the engine's capabilities and added support for the SSAS tabular model, one of the two models available in the BI Semantic Model (BISM) framework. The BISM framework provides the structure necessary to implement analytical solutions in SSAS. These can be based on the tabular model or on the more traditional multidimensional model.
Like its Excel counterpart, the tabular model lets developers import data from multiple, diverse sources and use the Data Analysis Expressions language to apply business logic to that data. The tabular model is based on a structure similar to a relational database, with tables and columns rather than the cubes, measures and dimensions of the multidimensional model.
The xVelocity engine plays a key role in the tabular model and can deliver scan rates of billions of rows per second. Although columnstore was dropped from the engine name, the data is still stored in columns. Furthermore, state-of-the-art compression algorithms help reduce storage requirements and allow more data to fit into memory, which boosts performance.
Memory-optimized columnstore index
The xVelocity engine was not the only new in-memory feature in SQL Server 2012. The release also introduced the xVelocity memory-optimized columnstore index. Unlike traditional SQL Server indexes, which store the indexed data in rows, a columnstore index stores the data in columns, with each column in its own segment. The segments are then organized into groups that provide quick and easy access to the data.
As with an SSAS tabular database, the data in a columnstore index is highly compressed, both on disk and in memory when it is retrieved. The greater the compression, the more data fits into memory; and the more data in memory, the better the performance. Queries also benefit from the columnar format because only the columns needed for the query are read into memory, whereas a traditional row-based index pulls every column of each qualifying row into memory.
Columnstore indexes are usually best suited to fact tables within a data warehouse that conforms to a dimensional model, typically a star or snowflake schema. But it's not just the table structure that makes the difference. Columnstore indexes also work best on databases that primarily support read-only queries returning large sets of aggregated data, the type of query typical of a data warehouse.
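To make this concrete, here is a sketch of how a SQL Server 2012-style nonclustered columnstore index might be defined on a fact table. The table and column names are hypothetical, not taken from the article.

```sql
-- Hypothetical star-schema fact table (illustrative names only).
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_columnstore
ON dbo.FactSales (OrderDateKey, ProductKey, StoreKey, SalesAmount, Quantity);

-- An aggregate query of the kind a data warehouse typically runs.
-- Only the segments for OrderDateKey and SalesAmount need to be
-- read into memory; the other columns are never touched.
SELECT OrderDateKey, SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY OrderDateKey;
```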
However, when columnstore indexes were first introduced, they came with a number of restrictions, most notably the inability to update a columnstore index or to create a clustered one. That changed in SQL Server 2014: you can now create updateable clustered columnstore indexes, although most of the other restrictions still apply.
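As a sketch of the SQL Server 2014 syntax, again with hypothetical object names: a clustered columnstore index is created without a column list, because it becomes the storage format for the entire table.

```sql
-- The clustered columnstore index replaces the table's row-based
-- storage entirely, so no column list is specified.
CREATE CLUSTERED COLUMNSTORE INDEX ix_FactSales_ccs
ON dbo.FactSales;

-- Unlike the read-only 2012 nonclustered version, the table
-- remains updateable after the index is created.
INSERT INTO dbo.FactSales (OrderDateKey, ProductKey, StoreKey, SalesAmount, Quantity)
VALUES (20140601, 1001, 25, 199.99, 2);
```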
In-Memory OLTP engine
With SQL Server 2014 came In-Memory OLTP, a memory-optimized engine that works with memory-optimized tables. When you declare a table as memory-optimized, the OLTP engine manages the table and maintains its data in memory. Like traditional tables in a SQL Server database, however, memory-optimized tables are fully transactional and durable. You can issue queries against them as you would any other table.
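A minimal sketch of such a declaration, assuming a database that already has a memory-optimized filegroup; all object names are illustrative.

```sql
-- DURABILITY = SCHEMA_AND_DATA makes the table fully durable, as
-- described above. Memory-optimized tables use nonclustered (here,
-- hash) indexes rather than clustered ones.
CREATE TABLE dbo.ShoppingCart
(
    CartId    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT       NOT NULL,
    CreatedAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Queried like any other table:
SELECT CartId, CreatedAt FROM dbo.ShoppingCart WHERE UserId = 42;
```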
The OLTP engine uses latch-free data structures and optimistic, multiversion concurrency control to support the extremely high concurrency rates often driven by a large, scaled-out middle application tier. Microsoft claims that, under such conditions, a database can see transactional performance gains of up to 30 times over the traditional table and database engine, although gains of five to 20 times are more the norm.
Memory-optimized tables vs. disk-based tables: Which is better?