Building a new SQL Server system can be tricky. SQL Server is a product that really utilizes hardware, and its performance is dependent on how you configure your server -- and in particular on how you configure your server's storage subsystems. With that in mind, here are some of the top mistakes people make when purchasing hardware for SQL Server:
- Going the DIY route. Don't build your own SQL Server computer from off-the-shelf parts, unless it's just meant to be a nonproduction development machine. Servers in general, and SQL Server computers specifically, need tightly matched parts: processors, chipsets, memory, controller cards and the like. You need components that will hold up to high heat, for example, and that have been designed to work together. That isn't to say it's impossible to build your own server -- but it's far easier to buy one that's been fully integrated and will be supported by the manufacturer.
- Having no performance expectations. You simply can't build a SQL Server system properly unless you know what kind of load it's going to be under. Well, you can -- but you'll either underbuild or overbuild, and either one is going to be expensive. When you underbuild, you're essentially setting your server up to run out of power sometime in the future, meaning you'll be forced to spend money upgrading (and depending on the server's initial configuration, upgrading may not even be possible). When you overbuild, you're spending more than you need or ever anticipate needing. Use existing databases, applications or even vendor benchmarks to get a realistic estimate of how many transactions per second you'll need to process, and size the hardware accordingly.
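To make "size the hardware accordingly" concrete, here is a minimal back-of-the-envelope sketch. Every figure in it (I/Os per transaction, peak multiplier, headroom fraction) is an illustrative assumption, not a vendor number -- substitute measurements from your own existing databases or benchmarks.

```python
# Back-of-the-envelope I/O sizing from an expected transaction rate.
# All parameter defaults are illustrative assumptions.

def required_disk_iops(tps, ios_per_txn=4, peak_factor=2.0, headroom=0.7):
    """Estimate the disk IOPS a workload needs.

    tps          -- expected transactions per second (measured from an
                    existing database, application, or vendor benchmark)
    ios_per_txn  -- assumed physical I/Os per transaction
    peak_factor  -- multiplier for peak load vs. the average
    headroom     -- run the array at no more than this fraction of capacity
    """
    return tps * ios_per_txn * peak_factor / headroom

# Example: a workload expected to run 500 transactions per second.
print(round(required_disk_iops(500)))  # 5714
```

The point is not the exact answer but that a number like "roughly 5,700 IOPS at peak" gives you something to compare against drive and array specifications before you buy.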
- Buying disk size, not disk performance. Yes, SQL Server often needs tons of disk space. But all that space is useless if the disk technology isn’t fast. Tossing a handful of drives into a RAID 5 array might get you the space and redundancy you want, but if that array can’t move the bits on and off the platters with some serious speed, it’s going to be a major performance bottleneck for your system. If you can’t afford fast disks in the size you need, then you can’t afford SQL Server.
Ideally, database files and transaction logs should be on different disks (or arrays), and SQL Server should be accessing them through different channels, such as disk controller cards or storage area network (SAN) connections. The tempdb system database may need its own disk or array as well if it’s heavily used.
- Choosing the wrong RAID option. RAID 5 is slow at writing data to disk. Period. Most RAID controllers attempt to overcome this handicap by caching data in on-controller memory (which is typically battery backed up for safety), but a busy SQL Server database can fill that cache and hit a bottleneck. RAID 10 is the way to go. It's more expensive than RAID 5, but it combines disk mirroring with data striping, and it offers higher redundancy and faster reads and writes.
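The RAID 5 write cost can be put in numbers. A small random write on RAID 5 typically costs four physical I/Os (read data, read parity, write data, write parity), while on RAID 10 it costs two (one write to each mirror). A quick sketch, with an assumed per-spindle IOPS figure:

```python
# Effective random-write IOPS for the same spindles under RAID 5 vs. RAID 10.
# DISK_IOPS is an illustrative assumption for a fast rotational drive.

DISK_IOPS = 180           # assumed IOPS per spindle
RAID5_WRITE_PENALTY = 4   # read data + read parity + write data + write parity
RAID10_WRITE_PENALTY = 2  # each write lands on both mirrors

def write_iops(n_disks, penalty, disk_iops=DISK_IOPS):
    # Raw IOPS across all spindles, divided by the per-write I/O cost.
    return n_disks * disk_iops // penalty

print(write_iops(8, RAID5_WRITE_PENALTY))   # 360
print(write_iops(8, RAID10_WRITE_PENALTY))  # 720
```

Same eight disks, double the sustained write rate -- which is why a write-heavy database outruns a RAID 5 controller cache long before it outruns a RAID 10 array.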
- Buying too few drives. If you need X gigabytes or terabytes of storage space, you want it delivered across as many physical disks as possible in order to get the fastest throughput. That's because having more disks -- whether small or large in capacity -- is better than going with fewer, bigger ones. With striping (supported by both RAID 5 and RAID 10), every extra disk improves SQL Server's performance just a bit more. If, for example, you have the option of buying five 1 TB drives or twenty 250 GB drives, the twenty drives (assuming they're configured in a stripe array and feature the same speed and transfer rate) will almost always outperform the five.
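The five-versus-twenty comparison works because a stripe fans each read and write out across every member, so aggregate throughput scales roughly with spindle count. A tiny sketch, using an assumed per-drive throughput figure:

```python
# Same total capacity, different spindle counts: aggregate stripe throughput.
# PER_DRIVE_MBPS is an assumption; both drive models are assumed equally fast.

PER_DRIVE_MBPS = 120  # assumed sequential throughput per drive, MB/s

def stripe_throughput(n_drives, per_drive=PER_DRIVE_MBPS):
    # In a stripe, transfers are spread across all members in parallel.
    return n_drives * per_drive

print(stripe_throughput(5))   # five 1 TB drives   -> 600 MB/s
print(stripe_throughput(20))  # twenty 250 GB drives -> 2400 MB/s
```

Both options hold 5 TB, but the twenty-drive array has roughly four times the aggregate bandwidth (and four times the IOPS) under this simple model.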
- Using disk controllers without batteries. If you’re relying on disk controllers to cache write instructions -- say, to a RAID 5 array -- make sure there are batteries on board. Plan to monitor the server’s power-on self-test (POST) screen from time to time to make sure those batteries (usually lithium watch batteries) continue to hold a charge.
- Blindly trusting the SAN. A SAN is not the perfect answer to storage in all cases. You have to make sure it's built for fast throughput and that SQL Server isn't sharing it with so many other servers and applications that it has to compete for bandwidth and throughput. SQL Server needs fast storage access -- it's the biggest performance bottleneck for most SQL Server computers. Make sure you know the configuration of the SAN (RAID 5 versus RAID 10, for example, with the above mistakes in mind), its throughput and other details -- just as you would want to know for direct-attached storage.
- Going 32-bit. Not so much in the hardware, which is mostly all 64-bit these days, but in the software. On a 32-bit copy of Windows, it's harder for SQL Server to utilize more than 3 GB of memory -- it has to use Address Windowing Extensions (AWE), a paging mechanism that isn't as efficient as just having raw access to tons of memory. If you've got 64-bit hardware, run a 64-bit operating system on it. Besides, Windows Server 2008 R2 -- and later versions of Windows -- are only available in 64-bit versions.
Many of these mistakes seem to be storage-related, don't they? Definitely. Storage for SQL Server is the one area where people tend to focus too much on size and not enough on other factors, such as throughput. That's especially true with SANs, where storage becomes something like "a service of our private cloud" -- a big magic box in the sky where data lives.
Of course, there’s more to SQL Server performance than just storage, such as processor architecture and server memory capacity. Details matter, and performance counts. Avoid these mistakes when purchasing hardware for SQL Server and you’ll have a healthier, happier -- and above all, faster -- machine.
ABOUT THE AUTHOR
Don Jones is a co-founder of Concentrated Technology LLC, the author of more than 30 books on IT and a speaker at technical conferences worldwide. He can be reached through his website at www.ConcentratedTech.com.