Doing any kind of performance tuning on SQL Server can be tricky. In fact, I wrote a whole book about it back in 2000, called The Definitive Guide to SQL Server Performance Tuning and Optimization. Shortly after that, I was excited to see Microsoft Operations Manager come out, along with the introduction of SQL Server Management Packs.
Why? Because management packs take the raw data that you can see in Performance Monitor (PerfMon) and add a layer of intelligence to help you differentiate between data that looks good and data that should be cause for concern. If you’re not using MOM or its successors and just want a better idea of what’s going on under the hood, you can still pull out Performance Monitor and take a look at your SQL Server.
Here are a few key counters to watch out for:
Page Splits/sec. A page split occurs when SQL Server needs to insert data into an 8 KB page that’s already full – such as inserting a new entry into a full index. In order to insert the entry into the right place, SQL Server splits the page by copying half the contents to a new page at the end of the file and inserting the new entry into the freed-up space on the old page.
Page splits take time and cause fragmentation, especially in indexes. Anything over 100 page splits per second is generally considered high, though the threshold could be even higher on a really fast disk system, or a bit lower on a slow one.
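The split described above can be sketched in a few lines. This is a toy model, not SQL Server's actual storage engine: the page here holds at most 8 sorted keys (a real page is 8 KB of row data), but the mechanics are the same – when a full page receives an insert, half its contents move to a brand-new page at the end of the file, and the new entry goes into whichever half it belongs to.

```python
# Hypothetical sketch of a page split, assuming a toy page that holds
# at most 8 sorted keys (real SQL Server pages are 8 KB of row data).
PAGE_CAPACITY = 8

def insert_with_split(page, key, new_pages):
    """Insert a key into a sorted page; split the page if it's full."""
    if len(page) < PAGE_CAPACITY:
        page.append(key)
        page.sort()
        return False  # no split needed
    page.sort()
    half = PAGE_CAPACITY // 2
    # Copy the upper half to a new page "at the end of the file"...
    new_page = page[half:]
    del page[half:]
    # ...then insert the key into the freed-up space on the right half.
    target = page if key < new_page[0] else new_page
    target.append(key)
    target.sort()
    new_pages.append(new_page)
    return True  # a split occurred

pages = []
page = [10, 20, 30, 40, 50, 60, 70, 80]  # already full
split = insert_with_split(page, 35, pages)
print(split, page, pages)
# → True [10, 20, 30, 35, 40] [[50, 60, 70, 80]]
```

Note that the extra copying and the new page allocation are exactly where the time and fragmentation costs come from.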
SQL Server knows when disk access is slow, so it tries to cache key pieces of information in RAM. You can look at the Cache Size counter to see how many 8 KB pages are cached in memory (multiply the figure shown by 8 KB for an actual memory utilization figure). Generally, this number should be close to the amount of physical RAM in the server, minus some RAM for Windows itself and any other services running on the server. If the SQL Server is under load and the cache number is too small, someone may have configured the server to throttle its use of RAM.
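The pages-to-memory arithmetic is simple; here's a small sketch with a made-up counter reading to show the conversion:

```python
# Convert a Cache Size reading (a count of 8 KB pages) into megabytes.
# The 786,432-page figure below is a hypothetical example reading.
PAGE_SIZE_KB = 8

def cache_mb(cached_pages):
    """Megabytes of buffer cache represented by a count of 8 KB pages."""
    return cached_pages * PAGE_SIZE_KB / 1024

print(cache_mb(786_432))
# → 6144.0  (about 6 GB of RAM in use for cache)
```

Compare that result against the server's physical RAM, less what Windows and other services need, to judge whether the cache looks healthy.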
Batch requests/sec. Batch requests tell you how many requests SQL Server is processing per second, and should generally follow your server’s CPU utilization. As you start hitting 1,000 on this counter, your server is getting very busy. If you’re not seeing max CPU utilization at the same time, then be on the lookout for disk, memory, or network bottlenecks.
As a reference, a gigabit Ethernet connection can handle about 30,000 batch requests per second. Avoid the Transactions/sec counter, as it’s not as good an indicator of overall activity.
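One practical wrinkle when reading this counter outside of PerfMon: in SQL Server's own counter view (for example, the sys.dm_os_performance_counters DMV), Batch Requests/sec is exposed as a cumulative total, so you compute the per-second rate yourself from two samples. A minimal sketch, with made-up sample values:

```python
# Hedged sketch: computing a per-second rate from two cumulative counter
# samples, as you would for Batch Requests/sec read from a DMV. The
# sample numbers below are illustrative, not real server output.
def rate_per_sec(first_sample, second_sample, interval_sec):
    """Per-second rate from two cumulative counter readings."""
    return (second_sample - first_sample) / interval_sec

# Two hypothetical samples taken 10 seconds apart:
print(rate_per_sec(5_000_000, 5_012_500, 10))
# → 1250.0  (past the 1,000 mark – this server is getting busy)
```

PerfMon does this delta calculation for you, which is why the graph already shows a rate.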
SQL Compilations/sec. SQL Server has to compile T-SQL statements before it can execute them, but the server tries to reuse already-compiled execution plans whenever possible. If you’re seeing a high number – generally, over 100 per second – then you’ve got a lot of overhead being spent on compilations. Try to give SQL Server more cache memory to work with.
Miss a tip? Check out the archive page for Don's series, SQL Server for the Reluctant DBA.
ABOUT THE AUTHOR:
Don Jones is a co-founder of Concentrated Technology LLC, the author of more than 30 IT books and a speaker at technical conferences worldwide. Contact him through his website at www.ConcentratedTech.com.