
Troubleshoot SQL Server database performance with T-SQL

Basit Farooq provides eight tips for optimizing T-SQL code to prevent SQL Server database performance bottlenecks and to tune database applications.

Troubleshooting database performance bottlenecks is not an easy task, because a variety of factors can cause them. Poorly written Transact-SQL (T-SQL) code is at the root of many SQL Server performance problems, because the core database application logic is expressed in T-SQL statements, whether they are issued directly or through an API. Optimizing the underlying T-SQL code is therefore the best way to tune the performance of a SQL Server database application.

Here are eight tips for optimizing T-SQL code and avoiding SQL Server database performance bottlenecks.

Avoid the use of DISTINCT and UNION clauses

Where possible, avoid using DISTINCT and UNION clauses in T-SQL queries. Both can hurt query performance because they require a sort (or hash) operation to identify and remove duplicate rows from the result set. When duplicates are acceptable, or you know the inputs cannot overlap, use UNION ALL instead of UNION to skip that work.
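As a minimal sketch (the table names here are hypothetical), the only difference between the two set operators is the duplicate-elimination step:

```sql
-- UNION sorts/hashes the combined result to remove duplicates.
-- UNION ALL returns the rows as-is, avoiding that extra operation,
-- which is safe when the two sources cannot produce the same row.
SELECT CustomerID FROM dbo.OnlineOrders
UNION ALL
SELECT CustomerID FROM dbo.StoreOrders;
```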

Use column_list for your SELECT and INSERT statements

Although the column_list parameter is optional in SELECT and INSERT statements, it is always a good idea to provide the full column list. When it is omitted, SQL Server must resolve the column list each time the statement executes, and the statement may generate errors or behave incorrectly if the underlying table schema changes. Listing only the columns you actually need in a SELECT statement also reduces network traffic and improves overall query performance.
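A short sketch of both habits, using a hypothetical dbo.Customers table:

```sql
-- Explicit column lists keep the statements valid even if columns
-- are later added to the table, and avoid shipping unneeded data.
INSERT INTO dbo.Customers (CustomerID, FirstName, LastName)
VALUES (42, N'Ada', N'Lovelace');

SELECT CustomerID, FirstName, LastName   -- not SELECT *
FROM dbo.Customers
WHERE CustomerID = 42;
```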

Use stored procedures, not heavy-duty queries

When possible, use stored procedures instead of sending ad hoc T-SQL batches, because the code in a stored procedure executes as a single unit on the server. This greatly reduces network traffic, since the individual T-SQL statements contained in the stored procedure do not travel over the network; only the name of the stored procedure and its parameters are transmitted.

SQL Server caches stored procedure execution plans in an area of memory called the procedure cache. Provided there is enough memory available and the procedure was not created or executed with the WITH RECOMPILE option, the plan remains in the procedure cache and is reused, so SQL Server does not have to parse, optimize and compile the procedure each time it runs. Plans created for ad hoc T-SQL statements, by contrast, are not always cached and reused.
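A minimal sketch of a parameterized procedure (the procedure and table names are hypothetical); the whole batch runs server-side and its plan is cached for subsequent calls:

```sql
CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count messages over the network
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

-- Only the procedure name and parameter value cross the network:
EXEC dbo.usp_GetCustomerOrders @CustomerID = 42;
```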

Use TRUNCATE TABLE instead of DELETE statement

When removing all the rows from a table, try using the TRUNCATE TABLE statement instead of the DELETE statement. The TRUNCATE TABLE statement is much faster than a DELETE statement without a WHERE clause, because it uses fewer system and transaction log resources. Unlike the DELETE statement, TRUNCATE TABLE does not log individual row deletions in the transaction log; SQL Server logs only the page deallocations that result from the operation.
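Both statements below empty a hypothetical staging table, but with very different logging costs:

```sql
-- Fully logged: every row deletion is written to the transaction log.
DELETE FROM dbo.StagingOrders;

-- Minimally logged: only page deallocations are recorded. Note that
-- TRUNCATE TABLE also resets any IDENTITY seed and cannot be used on
-- a table referenced by a foreign key constraint.
TRUNCATE TABLE dbo.StagingOrders;
```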

Keep transactions small

Running several large transactions simultaneously increases the likelihood of a deadlock, so keep your transactions as short as possible. Because exclusive and update locks are held for the duration of a transaction, long-running transactions can block other activity and lead to deadlocks. Try to break larger transactions into several small ones and execute them in batches: each batch commits and releases its locks quickly, reducing blocking and letting other sessions proceed sooner.
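The batching pattern can be sketched as a loop of small deletes (the table and retention period are hypothetical); each iteration is its own transaction, so locks are released between batches:

```sql
-- Purge old rows in batches of 5,000 instead of one huge DELETE.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE LoggedAt < DATEADD(YEAR, -1, GETDATE());

    SET @rows = @@ROWCOUNT;  -- loop ends when nothing is left to delete
END;
```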

Reduce transaction time

Cut down on transaction time by making sure you are not performing the same reads over and over again. If your application needs to read the same data more than once, cache it in variables, temporary tables or table variables, and reread it from that cache. This helps reduce the lock time on the actual resources. To reduce lock time further, make sure the application acquires locks at the latest possible moment and releases them at the earliest possible time.
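One way to cache a repeated read is a table variable (the table and threshold here are hypothetical):

```sql
-- Read dbo.Orders once, then serve later reads from the cached copy
-- instead of re-reading (and re-locking) the base table.
DECLARE @CustomerTotals TABLE (CustomerID INT PRIMARY KEY, TotalDue MONEY);

INSERT INTO @CustomerTotals (CustomerID, TotalDue)
SELECT CustomerID, SUM(TotalDue)
FROM dbo.Orders
GROUP BY CustomerID;

-- Subsequent reads hit the variable, not dbo.Orders:
SELECT CustomerID FROM @CustomerTotals WHERE TotalDue > 10000;
```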

Avoid cursors where possible

Avoid cursors as much as possible. Instead, use a set-based approach to update or insert data from one table into another. The SELECT statement in a cursor definition follows the same locking rules as any other SELECT statement, so when you do use a cursor, make sure the correct isolation level or locking hint is specified for its SELECT statement. When SQL Server is running in explicit or implicit transaction mode, the locks acquired by a cursor's SELECT statement are held until the transaction completes.
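As a sketch of the set-based alternative (table names are hypothetical), a single UPDATE replaces a row-by-row cursor loop:

```sql
-- One set-based statement touches all qualifying rows at once,
-- instead of fetching and updating them one at a time in a cursor.
UPDATE p
SET    p.ListPrice = p.ListPrice * 1.10
FROM   dbo.Products  AS p
JOIN   dbo.PriceRises AS r
       ON r.ProductID = p.ProductID;
```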

Choose the right data types for your tables, stored procedures, and variables

The data type determines the kind of data that can be stored in a database table column. When creating a table, you have to pick the data type for each column definition; data types are also used to define variables and stored procedure input and output parameters. For each column and variable, pick a data type appropriate to the data stored in it, and consider storage requirements so that you choose types that store the data efficiently. Choosing the right data types for your tables, stored procedures and variables also improves data integrity by ensuring that only the correct kind of data is stored in the database.
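A hypothetical table definition illustrating the idea of sizing types to the data they actually hold:

```sql
CREATE TABLE dbo.Events
(
    EventID   INT           NOT NULL PRIMARY KEY, -- not BIGINT unless the range is needed
    EventDate DATE          NOT NULL,             -- 3 bytes, vs. 8 for DATETIME
    EventCode CHAR(4)       NOT NULL,             -- fixed-width code
    EventNote NVARCHAR(200) NULL                  -- bounded length, not NVARCHAR(MAX)
);
```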

Next Steps

Check out these five tips to avoid SQL Server bottlenecks and database design issues

Learn how to improve SQL Server performance by streamlining your data

See how SQL Server database performance monitoring tools are saving time
