Designing SQL Server non-clustered indexes for query optimization

Non-clustered indexes improve SQL Server query performance by focusing queries on a subset of data. In this tip, you'll find guidelines for designing non-clustered indexes, including how to choose index fields, when to use multiple single-column indexes instead of a compound index and how statistics influence non-clustered indexes.


Non-clustered indexes act as bookmarks, giving SQL Server shortcuts to the data you're searching for. They are important because they allow you to focus queries on a specific subset of the data instead of scanning the entire table. We'll address this critical topic by first covering the basics, such as how clustered indexes interact with non-clustered indexes, how to pick fields, when to use compound indexes and how statistics influence non-clustered indexes.

The basics of non-clustered indexes in SQL Server

A non-clustered index consists of the chosen fields plus the clustered index value. If the clustered index is not defined as unique, SQL Server appends a uniqueness value (a "uniquifier") to the clustered key it stores. Always define your clustered indexes as unique -- if they are in fact unique -- because it results in smaller clustered and non-clustered indexes. If your unique clustered index consists of an int and you create a non-clustered index on a year column defined as a smallint, then your non-clustered index will contain an int and a smallint for every row in the table. The size grows with the data types chosen, so the smaller the clustered and non-clustered index data types are, the smaller the resulting index will be and the less maintenance it will require.
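As a minimal sketch of that point (the table, column and index names below are hypothetical, not from this tip), each row of the non-clustered index stores the smallint key column plus the int clustered key:

-- Hypothetical table with a unique clustered index on an int identity column
CREATE TABLE dbo.SalesYear
(
    SaleID   int IDENTITY(1,1) NOT NULL,
    SaleYear smallint NOT NULL,
    Amount   money NOT NULL,
    CONSTRAINT PK_SalesYear PRIMARY KEY CLUSTERED (SaleID)  -- unique clustered index
);

-- Each row of this index carries the smallint SaleYear plus the int SaleID
-- (the clustered key), roughly 6 bytes of key data per row
CREATE NONCLUSTERED INDEX IX_SalesYear_SaleYear
ON dbo.SalesYear (SaleYear);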

Choosing fields for non-clustered indexes

The first rule is to never explicitly include the clustered index key fields in a non-clustered index. The clustered key is already stored in every non-clustered index as the row locator, so it is always available to queries that use the index. The only time it makes sense to include a clustered index key column in a non-clustered index is when the clustered index is a compound index and the query references the second, third or higher field of that compound index.

Assume you have the following table:

ID (identity, unique clustered index), DateFrom, DateTo, Amt, DateInserted, Description
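A possible definition of this table is sketched below; the data types are assumptions for illustration only, since the tip lists just the column names:

CREATE TABLE dbo.tbl
(
    ID           int IDENTITY(1,1) NOT NULL,
    DateFrom     datetime NOT NULL,
    DateTo       datetime NOT NULL,
    Amt          money NOT NULL,
    DateInserted datetime NOT NULL,
    Description  varchar(200) NULL,
    CONSTRAINT PK_tbl PRIMARY KEY CLUSTERED (ID)  -- unique clustered index on ID
);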

Now assume you always run queries such as:

Example 1:

SELECT *
FROM tbl t
WHERE t.DateFrom = '12/12/2006'
  AND t.DateTo = '12/31/2006'
  AND t.DateInserted = '12/01/2006'

At this point it makes sense to define a single compound non-clustered index on DateFrom, DateTo and DateInserted, since that combination will always give the most selective results.
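Such a compound index could be created as follows (the index name is illustrative):

CREATE NONCLUSTERED INDEX IX_tbl_DateFrom_DateTo_DateInserted
ON dbo.tbl (DateFrom, DateTo, DateInserted);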

Now assume you run multiple queries such as:

Example 2:

SELECT *
FROM tbl t
WHERE t.DateFrom = '12/12/2006'
  AND t.DateInserted = '12/01/2006'

SELECT *
FROM tbl t
WHERE t.DateFrom = '12/12/2006'

SELECT *
FROM tbl t
WHERE t.DateTo = '12/31/2006'

SELECT *
FROM tbl t
WHERE t.DateInserted = '12/01/2006'

SELECT *
FROM tbl t
WHERE t.DateTo = '12/31/2006'
  AND t.DateInserted = '12/01/2006'

SELECT *
FROM tbl t
WHERE t.ID = 5
  AND t.DateTo = '12/31/2006'
  AND t.DateInserted = '12/01/2006'

Many people, at this point, would be tempted to create the following non-clustered indexes:

  1. DateFrom
  2. DateTo
  3. DateInserted
  4. DateTo and DateInserted
  5. DateFrom and DateInserted
  6. ID, DateTo and DateInserted

As you would expect, index size increases dramatically with this approach: DateFrom is stored in two separate locations, DateTo in three and DateInserted in four. On top of this, the clustered index key is stored in seven locations. All of that extra storage increases I/O for insert, update and delete operations (also known as IUD operations): each change must first be written to the clustered index data row, and then every affected non-clustered index must be updated as well.

You should routinely ask yourself these questions:

  • Is the cost of additional I/O for IUD operations and maintenance worth the improved query time?
  • Will the additional I/O and increased maintenance time outweigh any performance boost I get on the queries?
  • What will give me the most selective results with the least possible overhead?

In this case, the best solution is three non-clustered indexes:

  1. DateFrom
  2. DateTo
  3. DateInserted

Each field in this scenario is stored only once, except for the clustered index key, which is carried in all three non-clustered indexes. As a result, the total index size is much smaller and requires less I/O and less maintenance. Depending on the criteria in a given query, SQL Server can seek each of the non-clustered indexes and then hash the results together. While this is not as efficient as the compound index in Example 1, it is far more efficient than defining the six separate non-clustered indexes listed above. Real-world query workloads tend to look more like Example 2 than Example 1.
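Those three single-column indexes could be created as follows (index names are illustrative):

CREATE NONCLUSTERED INDEX IX_tbl_DateFrom     ON dbo.tbl (DateFrom);
CREATE NONCLUSTERED INDEX IX_tbl_DateTo       ON dbo.tbl (DateTo);
CREATE NONCLUSTERED INDEX IX_tbl_DateInserted ON dbo.tbl (DateInserted);

-- ID (the clustered key) is carried in each of these indexes automatically,
-- so it does not need to be listed as an index column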

SQL Server statistics

Statistics tell SQL Server how many rows are likely to match a given value. They give SQL Server an idea of how "unique" a value is, and the optimizer uses that information to decide whether to use an index. By default, SQL Server automatically updates statistics whenever it estimates that approximately 20% of the records have changed. In SQL Server 2000, this sampling is done synchronously with the IUD operation, delaying its completion while the rows are sampled. In SQL Server 2005, the sampling can run either synchronously with the IUD operation or asynchronously after it completes. The asynchronous approach is better and causes less blocking because locks are released sooner.

I recommend turning off the database setting "Auto Update Statistics," because it tends to increase server load at the worst possible times. Instead of letting SQL Server keep statistics up to date automatically, create a job that runs the UPDATE STATISTICS command during your slowest period. You can pick your own sampling ratio depending on how accurate you want the statistics to be.
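A sketch of that approach is shown below; the database name is a placeholder and the sampling ratio is just an example:

-- Turn off automatic statistics updates for the database
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS OFF;

-- In a scheduled job that runs during the slowest period,
-- refresh statistics with a sampling ratio of your choosing
UPDATE STATISTICS dbo.tbl WITH SAMPLE 50 PERCENT;

-- Or, for the most accurate statistics, sample every row
UPDATE STATISTICS dbo.tbl WITH FULLSCAN;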

Statistics are only kept on the first column of any non-clustered index. What does this mean for compound non-clustered indexes? It means SQL Server uses only the first field to decide whether the index should be used. Even if the second field in the compound index matches 50% of the rows, that field must still be traversed to return the results (see Example 3). If the compound index were instead split into two non-clustered indexes, SQL Server might choose to use index 1 but not index 2, because the statistics on index 2 may show that it would not benefit the query (see Example 4).

Example 3

Assume you have a compound, non-clustered index defined on DateFrom and Amt.

Statistics would only be kept on the DateFrom field within the index, and SQL Server would have to seek (or scan) across both DateFrom and Amt. Since SQL Server has to traverse more data, the query will be slower.

Example 4

Assume you have two non-clustered indexes: The first is defined on DateFrom and the second is defined on Amt.

Statistics would be kept on both fields because they are separate indexes. SQL Server will examine the statistics on DateFrom and decide to use that index. It will then examine the Amt column and may decide -- based on the statistics -- that the index is not unique enough and should be ignored. At this point, SQL Server would only need to traverse the DateFrom field, rather than both DateFrom and Amt, resulting in a faster query.
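The two scenarios can be compared with definitions like the ones below (index names are illustrative); DBCC SHOW_STATISTICS displays the statistics for each index, and the histogram is built on the leading column only:

-- Example 3: one compound index; statistics are kept on DateFrom only
CREATE NONCLUSTERED INDEX IX_tbl_DateFrom_Amt ON dbo.tbl (DateFrom, Amt);
DBCC SHOW_STATISTICS ('dbo.tbl', 'IX_tbl_DateFrom_Amt');

-- Example 4: two separate single-column indexes, each with its own statistics
-- (a single-column DateFrom index was already suggested earlier in this tip)
CREATE NONCLUSTERED INDEX IX_tbl_Amt ON dbo.tbl (Amt);
DBCC SHOW_STATISTICS ('dbo.tbl', 'IX_tbl_Amt');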

By using non-clustered indexes in SQL Server, you can focus queries on a subset of the data. Use the guidelines described in this tip to decide whether it's best to create multiple single-column non-clustered indexes or one compound non-clustered index, and keep in mind how statistics influence that choice.



ABOUT THE AUTHOR
Matthew Schroeder is a senior software engineer who works on SQL Server database systems ranging in size from 2 GB to 3+ TB and handling between 2,000 and 40,000+ transactions per second. He specializes in OLTP/OLAP DBMS systems as well as highly scalable processing systems written in .NET. Matthew is a Microsoft Certified IT Professional (MCITP): Database Developer, holds a master's degree in computer science and has more than 12 years of experience with SQL Server and Oracle. He can be reached at cyberstrike@aggressivecoding.com.

This was first published in February 2008
