This is the second of two articles detailing important features of the upcoming SQL Server 2014. This one focuses on updating partitioned indexes and memory-optimized tables. The first looked at extending the buffer pool and implementing clustered columnstore indexes in SQL 2014.
The first public Community Technology Preview (CTP) of SQL Server 2014 is out. Many of the new features that will be available in the final product are in the CTP, which you can get from the TechNet Evaluation Center for testing SQL 2014 yourself. This article digs into two new capabilities available in the CTP: updating partitioned indexes and managing memory-optimized tables.
Updating partitioned indexes in SQL 2014
SQL 2014 also makes it possible to rebuild partitioned columnstore indexes. For example, suppose we partition the ResellerSales table. To do so, we can use the Create Partition Wizard in SQL Server Management Studio (SSMS). Although a complete walk-through of the wizard is beyond the scope of this article, I've included a couple of screenshots to show how I partitioned the table on my system. Figure 1 shows the Set Boundary Values dialog box, which you access through the wizard's Map Partitions page. (I created the partition based on the OrderDate column.)
After I set the values in the Set Boundary Values dialog box, I returned to the Map Partitions page and selected a filegroup for each partition. For this example, I used PRIMARY for all my partitions. I then clicked the Estimate storage button to populate the rest of the grid. Figure 2 shows the final results on my system.
I'm not suggesting you set up your partitions in this way, but the following examples are based on this configuration, so you might find this information helpful.
After creating the partitions, you can use the sys.partitions catalog view to retrieve information about the partitions created for the ResellerSales table, as shown in the following SELECT statement:
SELECT * FROM sys.partitions
WHERE object_id = OBJECT_ID('ResellerSales');
Figure 3 shows the results returned by this query. Notice that there are five partitions, although only the first four contain data. Notice too that the data_compression_desc column in each row shows the compression type as COLUMNSTORE, which is what we want to see.
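If you want just the key details rather than every column, you can narrow the query. The following is a sketch that selects only the partition number, row count and compression type, all of which are standard columns in the sys.partitions catalog view:

```sql
-- Return the partition number, row count and compression type
-- for each partition of the ResellerSales table.
SELECT partition_number, rows, data_compression_desc
FROM sys.partitions
WHERE object_id = OBJECT_ID('ResellerSales')
ORDER BY partition_number;
```

The data_compression_desc column is where you would expect to see COLUMNSTORE for each partition.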
In SQL Server 2014, we can use a single T-SQL statement to rebuild either all partitions of a clustered columnstore index or an individual partition. To update all partitions, we can once again use the ALTER INDEX statement, only this time, we modify the REBUILD clause to include the PARTITION option, as shown in the following example:
ALTER INDEX ix_clustered
ON ResellerSales
REBUILD PARTITION = ALL;
As you can see, we've set the PARTITION option to ALL. However, if we want to update an individual partition, we instead set the option to a specific number. (You can use the sys.partitions catalog view to retrieve the partition numbers.) For example, the following ALTER INDEX statement updates partition 2:
ALTER INDEX ix_clustered
ON ResellerSales
REBUILD PARTITION = 2;
As handy as it is to be able to rebuild individual partitions in a clustered columnstore index, you cannot perform online updates. In other words, the data in the underlying table cannot be queried or updated during the index rebuild operation.
However, that's not true for all partitioned indexes in SQL Server 2014. For example, suppose we use a SELECT…INTO statement to create the ResellerSales2 table:
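The original statement isn't reproduced here, but a SELECT…INTO of this kind might look like the following sketch, assuming we simply copy every row from the existing ResellerSales table:

```sql
-- Create and populate ResellerSales2 by copying all rows
-- from the existing ResellerSales table.
SELECT *
INTO ResellerSales2
FROM ResellerSales;
```

The exact column list depends on your source table; SELECT * copies them all.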
As is to be expected, the statement adds 60,855 rows to the new table. Next, we partition the table just as we partitioned the ResellerSales table. Finally, we use the following CREATE CLUSTERED INDEX statement to create a regular (non-columnstore) clustered index:
CREATE CLUSTERED INDEX ix_orderdate
ON ResellerSales2 (OrderDate);
We can now update the indexes associated with a specific partition while the table is still online. To do so, we need to modify our ALTER INDEX statement once more to include a WITH clause, as shown in the following example:
ALTER INDEX ix_orderdate
ON ResellerSales2
REBUILD PARTITION = 2
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
(MAX_DURATION = 10, ABORT_AFTER_WAIT = BLOCKERS)));
Our WITH clause must first set the ONLINE option to ON and then specify additional related options. In this case, the WAIT_AT_LOW_PRIORITY option tells SQL Server to wait to build the index until there are no blocking operations on the table. The MAX_DURATION option indicates the number of minutes to wait for those blocking operations. And the ABORT_AFTER_WAIT option determines what to do after waiting. For this example, I've specified BLOCKERS, which tells SQL Server to kill any transactions that block the index rebuild.
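BLOCKERS is the most aggressive choice; ABORT_AFTER_WAIT also accepts SELF (cancel the rebuild itself) and NONE (keep waiting at normal priority). The following sketch shows the gentler alternative, in which the rebuild, rather than the blocking transactions, is the operation that gets canceled:

```sql
-- Wait up to 10 minutes; if blocking operations remain,
-- cancel the index rebuild itself instead of killing the blockers.
ALTER INDEX ix_orderdate
ON ResellerSales2
REBUILD PARTITION = 2
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
(MAX_DURATION = 10, ABORT_AFTER_WAIT = SELF)));
```

SELF is the safer option on a busy system where killing user transactions is unacceptable.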
Creating memory-optimized tables
The In-Memory online transaction processing engine (code-named Hekaton) moves OLTP into new territory in SQL Server 2014. The storage engine is integrated into the core database engine, but uses advanced in-memory technologies to support large-scale OLTP workloads. However, to take advantage of this new feature, your database must include "memory-optimized" filegroups and tables; that is, filegroups and tables configured to use the Hekaton technology. Fortunately, SQL Server 2014 makes this a very straightforward process.
To demonstrate how this works, let's create a database named TestHekaton, then add a memory-optimized filegroup to that database, as shown in the following T-SQL code:
CREATE DATABASE TestHekaton;
ALTER DATABASE TestHekaton
ADD FILEGROUP HekatonFG CONTAINS MEMORY_OPTIMIZED_DATA;
Notice that the ADD FILEGROUP clause in the ALTER DATABASE statement includes the name of the filegroup (HekatonFG) and the keywords CONTAINS MEMORY_OPTIMIZED_DATA, which instruct SQL Server to create the type of filegroup necessary to support the In-Memory OLTP engine. To confirm that the filegroup has been created, you can access the Filegroups page of the database's properties in SSMS, as shown in Figure 4.
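Alternatively, you can confirm the filegroup from T-SQL by querying the sys.filegroups catalog view, as in this sketch:

```sql
-- Confirm the memory-optimized filegroup exists, without using SSMS.
USE TestHekaton;
SELECT name, type_desc
FROM sys.filegroups
WHERE name = 'HekatonFG';
```

The type_desc column should report the filegroup as a memory-optimized data filegroup.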
The next step is to add a file to the filegroup, which you can do by running a second ALTER DATABASE statement. The following example adds a new file to the HekatonFG filegroup:
ALTER DATABASE TestHekaton
ADD FILE
(
NAME = 'HekatonFile',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLSRV2014CTP1\MSSQL\DATA\HekatonFile'
)
TO FILEGROUP [HekatonFG];
Notice that, in the ADD FILE clause, we simply provide a friendly name for our file along with path and file name. And in the TO FILEGROUP clause, we specify the name of our new filegroup. We can then go to the Files page of the database's properties to view the file we've just added, as shown in Figure 5.
It's also worth noting that we could have used the database properties interface to add our filegroup and file directly. It all depends on your preferred method. The advantage to scripts is that you can save and rerun them.
After we've set up our database with the necessary filegroup and file, we can create our memory-optimized tables. When defining the table, we specify its "durability." A memory-optimized table can be either durable or nondurable. A durable table is one in which the data is stored in memory and also saved to the memory-optimized filegroup. With a nondurable table, the data is stored in memory only, so if the system crashes or is restarted, the data is lost. Durable tables are the default in SQL Server 2014 and the type you're most likely to create, so let's take a closer look at how to build one.
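For comparison, a nondurable table sets its DURABILITY option to SCHEMA_ONLY: the table definition survives a restart, but the data does not. The following sketch uses hypothetical names (SessionCache, SessionID, SessionData) to illustrate the pattern:

```sql
-- Sketch of a nondurable memory-optimized table:
-- the schema persists across restarts, the data does not.
CREATE TABLE SessionCache
(
SessionID INT NOT NULL
PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
SessionData NVARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

SCHEMA_ONLY tables suit transient data such as session state or staging rows, where losing the contents on restart is acceptable.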
When you define a durable, memory-optimized table, you must also define a primary key that is based on a nonclustered hash index. In a hash index, data is accessed through an in-memory hash table, rather than through fixed-size pages. Hash indexes are the only type of indexes supported in a memory-optimized table.
In addition to defining the primary key in your table definition, you must also configure the table as memory-optimized, as shown in the following CREATE TABLE statement:
CREATE TABLE Reseller
(
ResellerID INT NOT NULL
PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
ResellerName NVARCHAR(50) NOT NULL,
ResellerType NVARCHAR(20) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
INSERT INTO Reseller
VALUES (1, 'A Bike Store', 'Value Added Reseller');
As you can see, the ResellerID column definition includes the primary key, which is defined as a nonclustered hash. Notice that you must include a WITH clause that specifies the BUCKET_COUNT setting, which indicates the number of buckets that should be created in the hash index. (Each bucket is a slot that holds a set of key-value pairs.) Microsoft suggests that the bucket count be about one-to-two times the number of unique index keys you expect the table to include.
The table definition ends with a second WITH clause. Here you specify that the MEMORY_OPTIMIZED option be set to ON and the DURABILITY option be set to SCHEMA_AND_DATA, which is the option used for a durable table. We then insert a row in the table so we can try it out.
That's all there is to creating a memory optimized table. Everything else happens behind the scenes. Keep in mind, however, SQL Server 2014 places a number of limitations on these tables. For example, they do not support foreign keys or check constraints, nor do they support IDENTITY columns or data manipulation language (DML) triggers. Most importantly, running out of memory causes write activity to stop.
On the other hand, memory-optimized tables support natively compiled stored procedures, as long as those stored procedures reference only memory-optimized tables. In such cases, the stored procedures can be converted to native code, which tends to run faster and require less memory than a typical stored procedure.
In addition to referencing only memory-optimized tables, a natively compiled stored procedure must be schema-bound and run within a specified execution context. Plus, each natively compiled stored procedure must consist of exactly one atomic block.
The following CREATE PROCEDURE statement defines a natively compiled stored procedure that retrieves data from the Reseller table created in the previous example:
CREATE PROCEDURE GetResellerType (@id INT)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
SELECT ResellerName, ResellerType
FROM dbo.Reseller
WHERE ResellerID = @id;
END;
After we define our parameter, we include a WITH clause that specifies the NATIVE_COMPILATION option. Notice, however, that the clause also includes the SCHEMABINDING option and EXECUTE AS option, with OWNER specified as the execution context. The WITH clause takes care of three of our requirements for implementing a natively compiled stored procedure.
To address the atomic block requirement, we specify ATOMIC after the BEGIN keyword, followed by another WITH clause that includes the transaction isolation level and language. For transactions accessing memory-optimized tables, you can use SNAPSHOT, REPEATABLE READ or SERIALIZABLE as your isolation level. And for the language, you must use one of the available languages or language aliases.
That's all you need to include when defining your stored procedure. Once you create it, you can test it by running an EXECUTE statement, as shown in the following example:
EXEC GetResellerType 1;
As you'd expect, the statement returns the reseller name and type, which in this case are A Bike Store and Value Added Reseller, respectively.
Moving ahead with SQL Server 2014
As these two articles have demonstrated, working with buffer pool extensions, clustered columnstore indexes, partitioned indexes and memory-optimized tables is a straightforward process in SQL Server 2014. However, we've touched only lightly on each of these topics. You should refer to SQL 2014 documentation for more details before you consider implementing any of them. In addition, what we've covered by no means represents all the new features and enhancements in SQL 2014. Like any SQL Server release, there have been many changes, some of which are not apparent on the surface (such as improvements to the query engine and the new resource governor for I/O). But what we've covered here should give you a chance to start seriously testing some of the new features so you can see for yourself whether SQL Server 2014 might be right for you.
About the author
Robert Sheldon is a technical consultant and the author of numerous books, articles and training material related to Microsoft Windows, various relational database management systems and business intelligence design and implementation. You can find more information at his website.