Appropriate indexes can make a world of difference in performance. SQL Server supports only two index types for most data types: clustered and nonclustered. SQL Server also supports full-text indexes and XML indexes, but those are relevant only for specific data types.
It is crucial to choose the appropriate column or set of columns for your clustered index. The reason is that the table's data is physically sorted by the values in the clustered index column or columns. You can create only a single clustered index on each table. Nonclustered indexes reference the clustered index keys (data values) to determine the physical location of each record.
It is recommended that you create the clustered index on columns that do not change often, are highly selective and have lean data types. In many cases, a clustered index on an identity column is the best choice: identity values are highly selective (each record has a unique identity value), they are never updated, and they use compact data types such as SMALLINT, INT or BIGINT.
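As a minimal sketch of this pattern, assuming a hypothetical dbo.Orders table, the identity column can double as the clustered primary key, giving a lean, ever-increasing, never-updated key:

```sql
-- Hypothetical table: the IDENTITY column serves as the clustered
-- primary key -- compact, unique and never updated.
CREATE TABLE dbo.Orders (
    OrderID    INT IDENTITY(1,1) NOT NULL,
    CustomerID INT               NOT NULL,
    OrderDate  DATETIME          NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);
```

Because the identity values only ever increase, new rows are appended at the end of the table, which also minimizes page splits during inserts.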
However, it is not uncommon to find a table that is never queried based on its identity column. If so, carefully consider how the data is commonly retrieved, perhaps by a foreign key to another table or by a character column. Often, you can improve performance by creating the clustered index on the column or set of columns that is most frequently used for retrieving the data.
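If the data is almost always retrieved by another column, the clustered index can go there instead. A sketch, again using the hypothetical dbo.Orders table and assuming queries filter by CustomerID:

```sql
-- Cluster on the column used for retrieval; keep the identity
-- column as a nonclustered primary key for uniqueness.
CREATE TABLE dbo.Orders (
    OrderID    INT IDENTITY(1,1) NOT NULL,
    CustomerID INT               NOT NULL,
    OrderDate  DATETIME          NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
);

-- The clustered index need not be unique; SQL Server adds an
-- internal uniquifier to duplicate key values.
CREATE CLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);
```

With this layout, all orders for a given customer sit physically together, so range queries by customer read far fewer pages.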
Some developers like to create composite clustered indexes. These span several columns, a combination of which uniquely identifies each
record. This might sound like a good practice because the identity column has no business meaning, whereas other columns — such as hire date, department name and vehicle identification number — definitely translate into something immediately known by application users. However, from a performance perspective, you should avoid composite clustered indexes.
Once again, the leaner the index, the faster SQL Server can scan or seek through it. You might find that for a small data set, composite indexes perform relatively well. But as the number of users grows, you're bound to encounter problems.
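To illustrate why a wide composite clustered key hurts, here is a hypothetical example: every nonclustered index on the table must carry the full clustered key, so a wide key bloats every other index as well.

```sql
-- Composite clustered key on wide business columns: each
-- nonclustered index on this table must store all three columns.
CREATE TABLE dbo.Assignments (
    VIN            VARCHAR(17) NOT NULL,
    DepartmentName VARCHAR(50) NOT NULL,
    HireDate       DATETIME    NOT NULL,
    CONSTRAINT PK_Assignments
        PRIMARY KEY CLUSTERED (VIN, DepartmentName, HireDate)
);

-- Leaner alternative: cluster on a surrogate key and enforce the
-- business rule with a nonclustered unique constraint instead.
-- CONSTRAINT PK_Assignments PRIMARY KEY CLUSTERED (AssignmentID),
-- CONSTRAINT UQ_Assignments UNIQUE (VIN, DepartmentName, HireDate)
```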
After you see performance benefits from building appropriate indexes, you might think your work is finished. But as data is added, modified and removed from tables, the respective indexes become fragmented. The higher the fragmentation, the less effective your indexes become. Now you'll need to implement a plan for removing fragmentation from your indexes to ensure they remain effective.
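Fragmentation can be measured before deciding to act. A sketch using the sys.dm_db_index_physical_stats dynamic management function introduced in SQL Server 2005 (dbo.Orders is a placeholder table name):

```sql
-- Report fragmentation for every index on a given table.
SELECT  i.name,
        ps.index_id,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID('dbo.Orders'),
            NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON  i.object_id = ps.object_id
        AND i.index_id  = ps.index_id;
```

A common rule of thumb is to reorganize indexes with moderate fragmentation and rebuild those that are heavily fragmented, but the right thresholds depend on your workload.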
With prior versions of SQL Server, removing fragmentation from large indexes (tables with many millions of rows) often required downtime. Fortunately, SQL Server 2005 supports online index rebuilds that make your life considerably easier. Keep in mind, however, that rebuilding indexes still requires system resources and space in the tempdb database. If possible, schedule index maintenance during periods of minimal user activity.
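A sketch of both maintenance options (index and table names are hypothetical; online rebuilds require Enterprise Edition in SQL Server 2005):

```sql
-- Rebuild online so the table remains available to users.
ALTER INDEX PK_Orders ON dbo.Orders
    REBUILD WITH (ONLINE = ON);

-- Reorganize: a lighter-weight, always-online alternative
-- suitable for moderate fragmentation.
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
```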
This tip was excerpted from Designing databases for performance.
SQL Server database architects and administrators have many choices to make if they want their applications to perform well from the start. To ensure successful database performance, it's important to make good choices during the design phase. In this edition of SQL Server INSIDER, expert Baya Pavliashvili discusses how to make database design decisions that will optimize performance.
ABOUT THE AUTHOR
Baya Pavliashvili is a database consultant helping his customers develop highly available and scalable applications with SQL Server and Analysis Services. Throughout his career he has managed database administrator teams and databases of terabyte caliber. Baya's primary areas of expertise are performance tuning, replication and data warehousing. He can be reached at email@example.com.