Hardware for SQL Server: Optimizing performance
A comprehensive collection of articles, videos and more, hand-picked by our editors
As a DBA, much of your focus is on performance tuning SQL Server. But have you spent time tuning the hardware
supporting your SQL Server system? Are you using the optimal disk array configuration? Are the disk partitions aligned? This tip discusses how to get your SQL Server hardware performance in top shape – whether the system is already in operation or it's a new setup.
With the massive amount of raw horsepower available in today's server-class hardware, it's easy to skip over the hardware when it comes to performance tuning the SQL Server database. After all, with so much power available, who cares if something takes a few extra milliseconds to complete? Is anyone really going to notice?
But what happens when an operation takes 10 extra milliseconds to complete, and it runs 100 times an hour, for a year? All of a sudden, those 10 milliseconds turn into 2.4 hours. If you perform that operation 1,000 times an hour -- which isn't unheard of in a smaller OLTP database -- you are now looking at more than 24 hours of wasted time.
In my particular environment, we run the same stored procedure at least 2,000 times per minute. If that stored procedure takes an extra 10 milliseconds to complete, we are looking at eight hours of lost time daily, or 121 days of lost time per year.
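The arithmetic behind these figures is simple to sketch. The helper below is illustrative (the function name is mine, not from the article); it converts an extra per-run delay into cumulative lost time:

```python
MS_TO_HOURS = 1 / 1000 / 3600  # one millisecond expressed in hours

def lost_hours(extra_ms, runs_per_hour, hours):
    """Total hours lost when every run takes extra_ms longer than it should."""
    return extra_ms * runs_per_hour * hours * MS_TO_HOURS

# 10 extra ms, 100 runs/hour, over a year (8,760 hours) -> about 2.4 hours lost
yearly = lost_hours(10, 100, 8760)

# The article's stored procedure: 2,000 runs/minute, over one day -> 8 hours lost
daily = lost_hours(10, 2000 * 60, 24)
```

Running the same numbers as the article confirms its claims: roughly 2.4 hours per year at 100 runs an hour, and 8 hours per day (about 121 days per year) at 2,000 runs a minute.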
Tune SQL Server by tuning disk arrays
There are a few places to check hardware components when tuning your SQL Server system. The easiest components to check are the disk arrays, which are typically where the most time is spent waiting for something to happen. There are a couple of ways to tune the disks to improve SQL Server performance. First, make sure your disk array has enough spindles to handle the workload that will be placed on it. Second, make sure the disk arrays are at the correct RAID level to offer the best support for the database.
While it is true that RAID 10 offers better write performance, in most cases RAID 10 isn't required for the data files. That said, you should use RAID 10 for your transaction logs and tempdb database, as their workloads are mostly writes. The reason I say not to use RAID 10 for all database files is that RAID 10 is very costly to implement at large capacities: for each spindle used for data, a second spindle is used for redundancy.
Finding out if you need more spindles on an existing system is easy. Open Performance Monitor on the server and add the "Physical Disk" object with the "Current Disk Queue Length" counter. Some queuing is OK; however, there is a tipping point. To find where "OK queuing" ends and "too much queuing" begins, take the number of disks in the array and multiply it by two. If the maximum value recorded in Performance Monitor is greater than that result, you have too much queuing. When we talk about the number of disks, we're referring to the number of disks that are actively working with data. In a RAID 10 array, this is half the number of disks in the array.
"Number of Disks" x 2 = Maximum Allowable Queuing
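This rule of thumb can be sketched as a quick check. The function names are mine; the logic is just the article's formula, with RAID 10 halving the count of active data spindles:

```python
def max_allowable_queue(data_spindles):
    """The article's rule of thumb: active data disks x 2."""
    return data_spindles * 2

def too_much_queuing(observed_max_queue, total_disks, raid10=False):
    """Compare Performance Monitor's max Current Disk Queue Length to the limit."""
    # In RAID 10, only half the disks in the array actively serve data.
    data_spindles = total_disks // 2 if raid10 else total_disks
    return observed_max_queue > max_allowable_queue(data_spindles)
```

For example, a 12-disk RAID 5 array tolerates a queue of 24, so an observed maximum of 25 signals too much queuing; the same 12 disks in RAID 10 only tolerate a queue of 12.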
How to configure the disk array on your new SQL Server system
When working on a new system without any load on it, making sure you configure your disk array correctly is a little more challenging. If you have another system with the same amount of load on it, you can use that system as a guide. However, if this is the first large system in your environment, then getting it correct can be a bit harder.
You'll need to look at your database system and the expected transactions per second, and make an educated guess on how many spindles you'll need. When dealing with high-end drives, expect each drive to deliver about 100 to 120 IOPS in an OLTP environment. When dealing with SATA drives, expect each drive to deliver about 60 to 80 IOPS in an OLTP environment. Those numbers will go up in an OLAP environment, because OLAP databases put a different kind of load on the disk system: a more sequential load, whereas OLTP databases put a random load on the disks.
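The educated guess can be reduced to a quick estimate. This sketch uses the midpoints of the per-disk OLTP figures quoted above; the function and dictionary names are mine, and real sizing should account for RAID write penalties and peak rather than average load:

```python
import math

# Midpoints of the article's per-disk OLTP figures (assumption: midpoint is representative).
IOPS_PER_DISK = {"high_end": 110, "sata": 70}

def spindles_needed(target_iops, disk_type="high_end"):
    """Estimate how many data spindles an OLTP workload needs."""
    return math.ceil(target_iops / IOPS_PER_DISK[disk_type])
```

For a workload expected to generate 1,000 IOPS, this suggests roughly 10 high-end drives or 15 SATA drives of active data spindles.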
Disk partition alignment improves SQL Server performance
Once you set up your disk array, you'll want to make sure the partition you create is correctly aligned. By default, when Windows (or any other operating system, for that matter) creates a partition on a disk or array, the partition is not aligned for peak performance. Disk drives are made up of 1K blocks, and the physical disks like to do all their operations in 64-block chunks called clusters. Conveniently, SQL Server likes to do all its operations in 64K chunks as well: there are eight 8K pages in each extent, and SQL Server does its reads one extent at a time. When the partition is created, the boot sector is placed at the beginning of the partition. This boot sector is 32K in size, causing each 64K operation to be spread across two 64K clusters. Each logical operation then takes twice as many physical operations as needed.
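The cost of misalignment can be demonstrated with a little arithmetic. This sketch (names mine) counts how many 64K-aligned units a 64K read touches, given the partition's starting byte offset:

```python
STRIPE = 64 * 1024  # the 64K cluster size the disks operate in

def stripe_units_touched(partition_offset, read_offset=0, read_size=STRIPE):
    """Count how many 64K-aligned units one read crosses on the physical disk."""
    start = partition_offset + read_offset
    end = start + read_size - 1
    return end // STRIPE - start // STRIPE + 1

# 32K offset (the default misalignment): every 64K read spans two clusters.
# 64K offset (aligned): every 64K read fits in exactly one cluster.
```

With a 32K partition offset, `stripe_units_touched(32 * 1024)` returns 2; with a 64K offset it returns 1, which is exactly the doubling of physical operations described above.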
You can see your current alignment offset by using the diskpart application. Open a command prompt and run diskpart.exe. When you are prompted with a DISKPART> prompt, type SELECT DISK n where n is the disk number you want to look at -- the command LIST DISK will give you a list of disks in the machine. After selecting the disk, type in "LIST PARTITION" to get the partition information, including the offset.
Figure 1: Run the DISKPART application to view disk alignment.
To create the partition, use the CREATE PARTITION command within diskpart after selecting the disk. The full command is CREATE PARTITION PRIMARY ALIGN=64. This creates the new partition with a 64K offset, aligning the partition for maximum performance.
Check out part two in this tip series, which covers configuring memory and CPU for improved SQL Server performance.
ABOUT THE AUTHOR
Denny Cherry has over a decade of experience managing SQL Server, including MySpace.com's installation, which serves more than 175 million users and is one of the largest in the world. Denny's areas of expertise include system architecture, performance tuning, replication and troubleshooting. He currently holds several Microsoft certifications related to SQL Server and is a Microsoft MVP.
Check out his blog: SQL Server with Mr. Denny.