Statistics and diagnostic data are the lifeblood of any SQL troubleshooting effort. You can’t find out what’s wrong with a given database’s performance without hard numbers backing up your theories. That bottleneck in a table might not be due to an indexing issue; maybe it’s the way database files are allocated. This is where Pssdiag and Sqldiag Manager comes in.
Microsoft bundles a command-line tool with SQL Server, called Sqldiag, for collecting diagnostic information from both SQL Server instances and the Windows system they run on. It can harvest quite a breadth of data: SQL Server Profiler traces, blocking information, Windows performance counters and event logs, and SQL Server configuration information.
Sqldiag’s greatest disadvantage is that it isn’t very interactive. It’s a command-line tool driven by an XML configuration file, which must be hand-edited. Although you can create multiple XML files for harvesting different batches of statistics, that’s still not an easy way to work with the tool. This is even more true if you’re running Sqldiag as a service, which is handy for nonstop background collection of data but again requires a good deal of manual setup to be useful. If you find yourself constantly tweaking the data-collection parameters, you want a tool that’s easy to set up and reconfigure repeatedly, not just once.
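For reference, a typical hand-driven Sqldiag session looks something like the following. The switches shown are documented by Microsoft; the configuration file and output folder names are placeholders:

```shell
:: Run a one-off collection using a hand-edited XML configuration file
:: (MyConfig.xml and C:\DiagOutput are placeholder names)
SQLDIAG /I MyConfig.xml /O C:\DiagOutput

:: Register Sqldiag as a Windows service for continuous background collection
SQLDIAG /R /I MyConfig.xml /O C:\DiagOutput

:: Unregister the service when you're done
SQLDIAG /U
```

Every change to what gets collected means editing the XML file by hand and rerunning the tool, which is exactly the friction P&SM is meant to remove.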
Enter Pssdiag and Sqldiag Manager, an open source project that provides Sqldiag with a graphical interface. It’s not just for show: It makes it easier to create profiles for Sqldiag, and to package the program and its attendant files so that it can be run easily on multiple servers.
Pssdiag and Sqldiag Manager (P&SM) works with all versions of SQL Server since version 7.0, including 2000, 2005 and 2008 (support for SQL Server 2012 is planned as well). It also supports all SQL Server processor platforms – x86/x64 and Itanium – so it doesn’t matter which CPU platform your installation runs on. Note that if you’re using SQL Server 2000 or earlier, the necessary tools may have to be downloaded separately, since they were not included with those editions of SQL Server.
P&SM works by compiling a package: a CAB file that contains the custom data-collection parameters you’ve specified. When you run P&SM, you build a package by specifying an instance of SQL Server by name (the default is “*”, meaning all instances on the current machine) and choosing the performance counters, log events and diagnostic data to collect, both for the machine as a whole and for individual SQL Server instances. These parameters are compiled into a CAB file, which you then extract and run on the target machine.
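As a sketch, deploying a compiled package on a target server might look like this. The CAB and folder names are placeholders, and the name of the startup script inside the package can vary by P&SM version, so check the extracted files:

```shell
:: Extract the compiled package to a local folder on the target server
:: (pssd.cab and C:\pssdiag are placeholder names)
expand pssd.cab -F:* C:\pssdiag

:: Start collection with the script included in the package
:: (commonly a .cmd file such as pssdiag.cmd; verify the name after extracting)
cd /d C:\pssdiag
pssdiag.cmd
```

Because everything the collector needs travels inside the CAB file, the same package can be copied to and run on any number of servers without rebuilding it.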
By default, the most common and useful diagnostics are preselected, although depending on the kinds of problems you’re looking for, you may want to narrow things down. The project recommends using trace filtering to limit the scope of what’s captured, but it also notes that filtering reduces I/O and disk space usage at the cost of CPU; in essence, it trades one problem for another. The authors offer specific recommendations for avoiding such issues: for example, filtering on integer columns instead of text columns, and not capturing high-frequency events, which may not provide useful insights in the first place.
One of the most important things about P&SM is that it allows you to create custom data-collection methods. This is useful if the data you’re trying to collect isn’t being gleaned from one of the sources exposed through the P&SM interface. You could use the custom collectors to run T-SQL commands, batch scripts, PowerShell commands or third-party command-line utilities. The files produced by P&SM can be analyzed by hand or fed to another utility by the same author, SQL Nexus.
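For instance, a custom collector that captures wait statistics could be as simple as a one-line sqlcmd call. The server name, output path and query here are illustrative, and sys.dm_os_wait_stats requires SQL Server 2005 or later:

```shell
:: Hypothetical custom collector: dump cumulative wait statistics to a text file
:: (-S . targets the local default instance; -E uses Windows authentication)
sqlcmd -S . -E -Q "SELECT wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC;" -o C:\pssdiag\output\waitstats.txt
```

Dropping a script like this into a package means the extra data gets gathered alongside the standard diagnostics on every server where the package runs.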
There are a couple of other things worth keeping in mind about packages created by P&SM. First, the trace package should, whenever possible, be run on the same machine as the SQL Server instance being monitored. That’s partly because the SQL Server instance itself has to do some of the capture work; gathering those statistics can’t be offloaded to another machine or instance. You can run the trace package from another machine only if you’re not capturing a SQL Server Profiler trace, but the authors say the benefits of doing so are minimal anyway, so it’s probably best to run the trace from the system in question and save yourself some hassle.
In the same vein, you might be tempted to run the trace package from a network drive as a way to save yourself some trouble. For example, you could put the various trace packages in subfolders on the same shared drive and run them from each SQL Server instance. Again, this is not recommended. The way I see it, the network traffic back and forth to the drive in question may create an artificial performance drain and throw off the accuracy of the trace.
ABOUT THE AUTHOR
Serdar Yegulalp has been writing about computers and IT for more than 15 years for a variety of publications, including InformationWeek and Windows Magazine.