
The most dangerous game: Hunting down bad SQL query performance

SQL Server is faster when the queries behind your applications are faster. Learn how to spot bad SQL query performance, and then work with developers so you can improve throughput.

Once you’ve fine-tuned your databases’ indexes, maxed out your hardware and gotten the fastest disks money can buy, you’ll hit a limit on how much performance you can squeeze from your SQL Server machines.

At some point you have to focus on fine-tuning the applications rather than beefing up SQL Server itself. That takes you into the somewhat tricky world of query analysis, where you try to identify database queries that -- because of the way they are written -- aren’t performing as well as they could be.

SQL Server comes with an excellent tool, SQL Profiler, that’s designed to capture traces. Think of these as similar to a network packet capture: You’re actually capturing the raw queries being fed to SQL Server, along with information about their execution time. Using this raw data, you can spot bad SQL query performance, and then offer advice to the application developers on how to improve them.

Actually improving performance depends on being able to change the applications themselves, so this approach generally isn’t an option for prepackaged applications whose source code you can’t modify. Instead, it’s usually limited to in-house applications. With these, you or someone you work with can get at the code to make alterations and improvements.

When you open SQL Profiler, you’ll start by creating a new trace. Part of the trace definition is a list of the events that you want to capture. You’ll usually want to capture remote procedure call (RPC) events as well as Transact-SQL events, as these two event types represent the two ways queries can be submitted to SQL Server or stored procedures can be executed. I usually include the following event classes in my traces:

  • RPC:Completed. This is generated after a stored procedure is executed from an RPC and includes information about such parameters as the duration of the execution, the CPU utilization and the name of the stored procedure.
  • SP:StmtCompleted. This is fired whenever a statement within a stored procedure finishes running and also includes data on metrics such as execution duration and CPU use.
  • SQL:BatchStarting. You’ll see this whenever Transact-SQL batches begin, including those inside and outside stored procedures.
  • SQL:BatchCompleted. This occurs when a Transact-SQL batch finishes; it provides data similar to the RPC and stored-procedure examples listed above.
  • Showplan XML. This gets you a graphical execution plan for a query—key to understanding how the query was executed and spotting performance problems.
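If you’d rather not leave the Profiler GUI running, the same kind of trace can be scripted as a server-side trace with the SQL Trace stored procedures. The sketch below captures RPC:Completed and SQL:BatchCompleted with the Duration and TextData columns; the file path is a placeholder, and the event and column ID numbers shown in the comments follow SQL Server’s documented SQL Trace numbering (verify them against your version’s documentation before relying on them).

```sql
-- Sketch: minimal server-side trace to a file (path is a placeholder).
DECLARE @TraceID int;
DECLARE @maxsize bigint;
SET @maxsize = 100;  -- max trace file size in MB

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\slow_queries', @maxsize;

-- sp_trace_setevent takes (trace ID, event ID, column ID, on/off).
EXEC sp_trace_setevent @TraceID, 10, 13, 1;  -- RPC:Completed, Duration
EXEC sp_trace_setevent @TraceID, 10, 1,  1;  -- RPC:Completed, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, 1;  -- SQL:BatchCompleted, Duration
EXEC sp_trace_setevent @TraceID, 12, 1,  1;  -- SQL:BatchCompleted, TextData

EXEC sp_trace_setstatus @TraceID, 1;  -- 1 = start the trace
```

Stopping and closing the trace later is done with `sp_trace_setstatus @TraceID, 0` followed by `sp_trace_setstatus @TraceID, 2`.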

Once your trace is set up, start capturing data. You’ll want to capture representative data, and often that means running the trace during production workloads. Be sure to capture to a file or SQL Server table that’s on a machine other than the one you’re analyzing so that the analysis itself doesn’t affect performance.

You’ll need to tell SQL Profiler which data columns you want to view; I usually start with this list:

  • Duration
  • ObjectName
  • TextData
  • CPU
  • Reads
  • Writes
  • DatabaseName
  • ApplicationName
  • StartTime
  • EndTime
  • EventSequence

These columns give me good insight into how long each query took to run, and I can often just skim through the Duration column looking for especially large values. You’ll want to focus on longer-running queries to see if you can improve execution time. The duration is shown in milliseconds (although it’s stored internally in microseconds), so don’t be alarmed if all of the values seem large at first.

I’ll also scan through the CPU column, since a query can run quickly but consume a lot of CPU time. Heavy-CPU queries will often bog down when the server is especially busy and can’t devote a lot of CPU capacity to them; as a result, rewriting queries so that they’re a bit less CPU-hungry can result in better performance. Profiler lets you create filters, and I’ll often start by creating a filter that hides anything taking less than 5,000 milliseconds, just so I can focus on the longer-running queries.
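Because trace files store duration in microseconds, that same 5,000 ms threshold can also be applied after the fact when reading a saved trace file back into a query window with the built-in fn_trace_gettable function (the file path here is a placeholder):

```sql
-- Read a saved trace file; Duration in the file is in microseconds,
-- so 5,000 ms = 5,000,000 microseconds.
SELECT  TextData,
        Duration / 1000 AS DurationMs,  -- convert to milliseconds
        CPU, Reads, Writes, StartTime
FROM    fn_trace_gettable(N'C:\Traces\slow_queries.trc', DEFAULT)
WHERE   Duration >= 5000000
ORDER BY Duration DESC;
```

This is handy for sorting and aggregating captured queries with ordinary T-SQL rather than scrolling through the Profiler grid.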

All in all, there are a number of things to look for, though most of the fixes for these problems will have to be implemented by the application developer:

  • Ad hoc SQL queries that are run outside of a stored procedure. Stored procedures almost always offer better performance because SQL Server can cache their execution plans; ad hoc queries should, whenever feasible, be converted to stored procedures.
  • Long-running or CPU-heavy queries in execution plans. Table scan operations indicate the lack of a suitable index, and putting an index in place to eliminate the table scan can have an immediate and positive effect on performance.
  • Queries that include a large number of joins. Joins take time, and while SQL Server is obviously designed to handle them, a large number of joins can really slow things down. The general rule of thumb I use is seven joins; if you have more than that, you have to start looking at ways to cut back.
  • A slow-running query that always runs slowly. This is a query that could perhaps be rewritten to perform better. A query that runs slowly some of the time is one that’s likely being affected by outside factors, such as locks or resource contention.
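As a sketch of the first fix above, here’s what converting an ad hoc query into a parameterized stored procedure looks like. The table and column names are hypothetical, purely for illustration:

```sql
-- Hypothetical ad hoc query the application currently sends:
--   SELECT OrderID, Total FROM dbo.Orders WHERE CustomerID = 42
-- Converted to a stored procedure so SQL Server can cache and reuse the plan:
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, Total
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
```

The application then calls `EXEC dbo.GetOrdersByCustomer @CustomerID = 42;` instead of sending the raw SQL text.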

SQL query-performance tuning is as much art as science, and really, it belongs to the realm of application developers. The goal for database folks is to identify those slow-running or CPU-intensive queries, gather evidence and then work with developers to find ways of improving them.

Other good tools for lousy queries

Although SQL Profiler is a great tool, it doesn’t actually show you slow queries; you have to examine the data Profiler captures and figure out which queries are “slow” on your own. If query troubleshooting becomes a big part of your daily life, then you might want to move beyond Profiler and into a dedicated query analysis tool.

These tools, written by third-party software vendors, are designed to capture data in much the same way that Profiler does (some actually accept a Profiler capture file as input), and then identify poorly performing queries for you. In many cases, these tools can tell you why a query is performing poorly and even suggest changes that would help improve performance.

Vendors in this space include SQL Sentry, Red Gate, Idera, Quest Software, DBSophic, and more. Look for tools that can interface with, or even completely replace, SQL Profiler, and that offer automated query analysis and prescriptive advice.

If you start talking to vendors about a potential purchase, be sure to ask how much, if any, impact the product will have on SQL query performance when analyzing a production server. Some vendors have techniques to minimize or even eliminate production impact, and that’s always a good thing.

About the author:
Don Jones is a co-founder of Concentrated Technology LLC, the author of more than 30 books on IT and a speaker at technical conferences worldwide. He can be reached through his website.

Next Steps

Find out how statements can help SQL query performance

Learn to predict SQL query performance on larger sets
