It's something of a truism that a complex SQL Server setup needs to be tuned to run properly. The good news is that most of the tools you need for the job either ship with SQL Server or can be downloaded for nothing. What matters most is not the tools themselves, though, but a good understanding of what they will and will not do for you, and how to get the most out of them even when they seem to have a fairly limited scope.
The first and most obvious tool to work with is Query Analyzer (unless you're using SQL Server 2005, but more on that later). Query Analyzer's (QA's) most useful performance-analysis tools fall into two basic categories: trace tools and optimization tools. The former can be used to bolster the latter. One common trace tool, Show Execution Plan, returns a flowchart that shows you the relative cost of each step of the query you've run. This way, you can figure out which single step of the whole query is the slowest and get hints about what might be done about it. Sometimes that means creating an index where there wasn't one before (which the Index Tuning Wizard can help with); other times it means restructuring the query from the ground up and performing the work in multiple steps or across multiple queries.
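Besides the graphical Show Execution Plan option in QA, you can ask SQL Server for a textual plan from any query window. As a minimal sketch (the table, column and date values here are hypothetical):

```sql
-- Return the estimated plan for the batch instead of executing it
SET SHOWPLAN_ALL ON;
GO

SELECT o.OrderID, c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2006-01-01';
GO

-- Switch back to normal execution
SET SHOWPLAN_ALL OFF;
GO
```

The output lists each plan operator with its estimated cost, which is the same data the graphical flowchart renders visually.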
Sometimes nothing can be done at all, since what looks like a bottleneck is just the way QA reports the data back. For instance, if one part of the query accounts for 98% of the query cost but the query as a whole runs fast, then that 98% is not a bottleneck per se. On the other hand, you may have a query that takes minutes on end to return a table, with a particular join or seek operation gobbling up most of that time. In this case, you can focus on that operation and figure out whether you need to add more indexes or subdivide the query into a number of smaller, more atomic operations. QA itself, however, will not be able to tell you which of these is the case; you'll need to approach it with some understanding of what you're working on and how it might best be optimized.
Since indexes are one of the cheapest ways to accelerate many common queries, it's no surprise that QA contains the Index Tuning Wizard. Run it against a given query in QA, or against a trace table or workload file, and you'll get a report that tells you which, if any, indexes could be created to optimize the query in question. Another tool that's used in conjunction with the Tuning Wizard is SQL Profiler.
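The wizard's output boils down to ordinary index-creation statements you can review before applying. A typical suggestion looks something like this (the table, column and index names are hypothetical):

```sql
-- A nonclustered index of the kind the Index Tuning Wizard
-- commonly recommends for a frequent range filter
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON Orders (OrderDate);
```

Reviewing the generated script before running it also gives you a record of exactly what was changed, which matters later when you're testing changes one at a time.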
This tool captures events in real time as they take place in SQL Server and records them to a log (either a file on disk or a table in the database itself). The resulting log can be fed to the Tuning Wizard and used to derive suggestions about how to optimize the system even further. The trace operation doesn't have to run for all that long to generate useful data, either. On my own SQL Server-driven website, which generally has about 50 simultaneous users at any one time, I only needed to capture SQL activity for about five minutes (about 10 MB of trace data) to get a useful analysis.
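A trace file saved by Profiler can also be inspected directly with T-SQL before handing it to the wizard. As a sketch (the file path and table name are hypothetical, and the columns shown must have been selected when the trace was defined):

```sql
-- Load a saved Profiler trace file into a table for analysis
SELECT *
INTO dbo.WebPeakTrace
FROM ::fn_trace_gettable('C:\traces\web_peak.trc', DEFAULT);

-- Find the longest-running statements in the capture
SELECT TOP 20 TextData, Duration, CPU, Reads
FROM dbo.WebPeakTrace
ORDER BY Duration DESC;
```

Five minutes of this kind of data is often enough to surface the handful of statements worth tuning first.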
Be warned that the number-crunching for this optimization analysis can be pretty CPU-intensive. The more tables included in the analysis and the bigger the resulting dataset, the longer it'll take to return any kind of results. Running the analysis against a trace file is generally less burdensome than running it against a live query, since the actual load on SQL Server itself isn't as heavy; and if you have a multicore or multisocket server, the work will be handled all the more efficiently.
QA's limitations make it a little primitive compared to some other tools out there, but if you're a relative newcomer to SQL Server and you're just learning the ropes, it's actually a good place to start. Likewise, Query Analyzer is a fine arena in which to practice good database-tuning hygiene. Start with a small-scale database project, if possible, and identify its bottlenecks or performance-hampering queries one at a time; tackle more than that at once and you might get in over your head.
Later, you can gather broader swaths of data via trace files to determine what to optimize. Make changes individually, so you can tell whether any one change generates bad cross-interactions. The breadth of a single change may vary: if you are creating indexes for two tables at once because of a JOIN query, for instance, that should be considered one action. But multiple indexes for multiple tables that are queried in parallel should not be changed at the same time. To be on the safe side, change one thing, test, check any delta in performance, then move on.
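Measuring that delta can be done right in QA with the statistics switches. A minimal sketch of a before-and-after check (the query itself is hypothetical):

```sql
-- Report elapsed/CPU time and logical reads for each statement
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT o.OrderID, o.OrderDate
FROM Orders o
WHERE o.CustomerID = 42;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

Run it once before the change and once after; if the reads and elapsed time didn't improve, roll the change back before trying the next one.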
The one big downside to most of the free analysis tools for SQL Server is the implementation. It's not always obvious how to get the best possible results, or where to start digging. The wizard interfaces for things like the Index Tuning Wizard do help (adding indexes almost always yields a big performance boost), but applying their findings requires some knowledge: the admin must work out what to look for and in what context. If you don't know which specific query is the problem, for instance, you have to narrow that down first, which in turn means going back to Query Analyzer. Each tool covers only one specific aspect of performance tuning, so to do a really comprehensive analysis you need to switch back and forth between them. Also, if you have multiple physical files and you keep your indexes segregated from the actual tables, you'll need to ensure the indexes have been created in the correct physical files after the Index Tuning Wizard has implemented its recommendations (that is, if it has any).
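If you do segregate indexes, the fix is usually just to re-create the wizard's index with an explicit filegroup clause. A sketch, with the index and filegroup names being hypothetical:

```sql
-- Re-create the recommended index on the dedicated index filegroup
-- rather than letting it land on the default filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON Orders (CustomerID)
ON INDEXES_FG;  -- hypothetical filegroup reserved for indexes
```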
One really painful scenario is when you've "inherited" someone else's SQL Server setup: you don't know anything about it, and you can't find any documentation on how it works or what specifically might be causing problems. In such a case, you might want to start by creating a trace log during peak hours, then analyzing it to see what improvements can be put into effect and where things might be deadlocking or hanging up. Also make sure the problems you want to subject to analysis are in fact database issues. Let's say you've got a web application that's timing out on a given query, but the same query runs fine when executed from Query Analyzer. In that case, the problem lies either in the connection from the front end to the database (not enough threads in the connection pool?) or in the front end itself.
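While a trace is running, a quick way to spot live blocking on an unfamiliar server is to query the system process table. A sketch, using the documented columns of `master..sysprocesses`:

```sql
-- Any row returned is a session being blocked; the 'blocked'
-- column holds the SPID of the session holding it up
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0;
```

If this consistently comes back empty while the application still times out, that's one more hint the problem lives outside the database.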
Another reason to grab a more upscale performance-analysis tool (or suite of tools) is if you're not working on just one database at a time. If you have a cluster, or a whole slew of database machines working together, it can be very hard to figure anything out by inspecting them one at a time. For that you need something that gives you a bird's-eye view and can parallelize testing across multiple servers.
This is especially true if the only way to get realistic performance analyses is by testing in parallel. One good tool for gathering and diagnosing performance information about multiple instances of SQL Server is Quest's Spotlight on SQL Server Enterprise, or their Performance Analysis for SQL Server tool. Quest makes a whole suite of SQL Server-specific tools, including an application-specific load-test tool called Benchmark Factory to test the performance of databases as they manifest in a specific application. Many of their products have free trial versions available, so you can download one and give it a whirl in your organization to see if it's a good fit.
In SQL Server 2005, the Query Analyzer has been replaced with a new tool, the Query Editor (found in SQL Server Management Studio). The Index Tuning Wizard has been eclipsed by the Database Engine Tuning Advisor, which includes goodies like the ability to graphically display resource contentions that result in deadlocks. The exact ways these tools are used may differ from the older implementations, but the basic approach is the same: when you tune for performance, don't change more than one thing at a time, even if you're given advice to do more. If you're worried about the scope of the changes, make a backup of the database before attempting anything. You can also invest in a utility that lets you do snapshots and version control for SQL Server databases so you can roll back gracefully.
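Taking that pre-tuning backup is a one-statement safety net. As a sketch (the database name and backup path are hypothetical):

```sql
-- Full backup taken immediately before applying tuning
-- recommendations, so any change can be rolled back cleanly
BACKUP DATABASE MyAppDB
TO DISK = 'D:\backup\MyAppDB_pretune.bak'
WITH INIT;  -- overwrite any prior backup set in this file
```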
I can think of two other Microsoft performance tools worth mentioning, although they focus more on analyzing the performance of the underlying system than SQL Server itself. They are SQLIOSIM and Read80Trace / OSTRESS. SQLIOSIM (short for "SQL Server I/O Simulator") is designed to simulate the I/O loads that SQL Server imposes on a given system, without actually running SQL Server itself. This is useful if you're building a system or have one you want to add SQL Server to, and are trying to determine if the hardware configuration (for instance, the disk layout) is optimal without actually making it into a production system.
The program was known in earlier incarnations as SQLIOStress, but has since been rewritten to more accurately simulate the way SQL Server 7.0, 2000 and 2005 behave. Read80Trace processes SQL Server 2000 trace files to generate RML (Replay Markup Language) files. RML files can then be fed to OSTRESS to "replay" the same connections and simulate server stress, typically for the sake of adjusting other parameters (e.g., network links, number of SQL Server threads, etc.).
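As an illustration, an OSTRESS replay run looks something like the following. The server name, database and file name here are hypothetical, and you should verify the exact flags against the documentation shipped with your version of the RML utilities:

```shell
# Replay a processed RML file against a test server using
# Windows authentication (-E) and 25 simulated connections
ostress -Stestserver -E -dMyAppDB -ireplay.rml -n25
```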
ABOUT THE AUTHOR:
Serdar Yegulalp has been writing about Windows and related technologies for more than 10 years and is a regular contributor to various sections of TechTarget as well as other publications. He hosts the Web site WindowsInsider.com, where he posts regularly about Windows and has an ongoing feature guide to Vista for emigrants from Windows XP.