Let’s face it: If you haven’t already virtualized SQL Server instances in your environment, you’re going to do so eventually, if not “real soon now.”
SQL Server and virtualization are made for each other, and the situation is getting better all the time. Not just for workload management and consolidation either, but also for high availability. A new breed of technologies can now run SQL Server virtual machines on multiple hosts, keeping the instances in lockstep with one another and enabling failover with effectively no downtime in the event that one instance goes down.
But to make it all happen you’re going to need hardware, and buying hardware for a virtualization host that will be running SQL Server is a bit different from selecting hardware for SQL Server itself. You also have to plan your SQL Server instances. Busy instances that handle big databases might go onto virtual machines (VMs) all their own, while smaller instances might be teamed up within a single VM.
Remember that a VM becomes your basic unit of management: You can move VMs to different hosts, fail them over and so forth -- but every instance within each VM goes along for the ride. Focus on creating VMs that need as few virtual processors as possible to do their job; that will make each VM more granular in terms of the workload it handles, and it will make it easier for those VMs to co-exist with other VMs on the same host.
When outfitting that host, there are three things to consider: disk throughput, memory and processors. Your money is best spent, initially, on processors. Ignore blade servers and compact 1U servers for SQL Server hosts: You’ll squeeze more processor sockets and cores into a 4U chassis, and that chassis will often run with lower cooling and power requirements than an equivalent number of 1U or 2U servers.
Find the “sweet spot” for processor speed -- where you’re getting the best performance for your dollar -- rather than simply buying the fastest chip available. A few extra megahertz won’t deliver a vast performance improvement. Do focus on server-class processors, though. If you’re tempted to build a server from off-the-shelf Centrino-based motherboards, abandon that idea when it comes to SQL Server virtualization hosts.
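That sweet-spot comparison is simple arithmetic: divide a benchmark score by the price and see where the value curve bends. A minimal sketch -- the SKU names, prices and benchmark scores below are invented for illustration, not real parts:

```python
# Back-of-the-envelope "sweet spot" comparison for processor SKUs.
# All prices and benchmark scores are hypothetical examples.
skus = [
    {"name": "mid-range", "price": 1200, "benchmark_score": 30000},
    {"name": "high-end",  "price": 2800, "benchmark_score": 34000},
    {"name": "flagship",  "price": 4500, "benchmark_score": 36000},
]

# Compute performance per dollar for each SKU.
for sku in skus:
    sku["score_per_dollar"] = sku["benchmark_score"] / sku["price"]

best = max(skus, key=lambda s: s["score_per_dollar"])
print(f"Best value: {best['name']} "
      f"({best['score_per_dollar']:.1f} benchmark points per dollar)")
```

With these made-up numbers, the flagship part costs nearly four times as much as the mid-range part for a 20% benchmark gain -- exactly the "few extra megahertz" trap the article warns about.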
Memory is the next expense. The more, the merrier. Modern hypervisors typically let you overcommit memory, meaning you can configure your VMs to use more memory, in total, than the host actually has. Many environments do well with a 50% overcommit, but SQL Server is a real memory hog. Analyze your SQL Server instances to see how much memory they typically consume, plan your overcommit accordingly, and avoid filling a host entirely with SQL Server VMs, because SQL Server instances tend to max out whatever memory they’re allocated.
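Planning an overcommit ratio is a matter of tallying each VM's configured memory against the host's physical RAM. A minimal sketch, using hypothetical VM sizes and host capacity:

```python
# Sketch: sanity-check memory overcommit for a planned virtualization host.
# The host capacity and per-VM memory figures are hypothetical examples.
host_physical_gb = 128

# Configured (not necessarily consumed) memory for each planned VM, in GB.
vm_configured_gb = [32, 32, 24, 24, 16, 16, 16, 16]

total_configured = sum(vm_configured_gb)
overcommit_pct = (total_configured - host_physical_gb) / host_physical_gb * 100

print(f"Configured: {total_configured} GB on a {host_physical_gb} GB host")
print(f"Overcommit: {overcommit_pct:.0f}%")

# SQL Server tends to consume whatever memory it's given, so overcommit
# that looks safe on paper can turn into real contention on an all-SQL host.
if overcommit_pct > 0:
    print("Warning: overcommitted -- risky if every VM runs SQL Server")
```

Here the eight planned VMs total 176 GB against 128 GB of physical RAM -- a roughly 38% overcommit that might be fine for mixed workloads but, per the advice above, is asking for trouble if every one of those VMs hosts SQL Server.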
Bear in mind that SQL Server, more than many other server applications, will try to use whatever memory the operating system is willing to give it -- so if Windows thinks it has 12 GB of memory, SQL Server will often make its best effort to utilize that. That behavior can make overcommit tricky, so proceed with caution. In fact, most experienced database administrators don’t like to use memory overcommit at all when they’re virtualizing SQL Server.
That said, the amount of memory is the one thing you can skimp on when buying a server. That’s because you can add more later -- provided you put the largest memory modules possible in your server, leaving free slots for future expansion. Don’t cheap out on the memory you do buy, however. Get error-correcting memory that’s speed-matched to the server’s motherboard. In other words, buy whatever your chosen server vendor recommends for your server, and ideally buy the memory from that same vendor. After all, that vendor is most likely to offer you support if you have problems with it.
Disk is last, and in most SQL Server cases you’ll be building a storage area network (SAN) rather than relying heavily on internal storage within the servers. (You might build a mirrored set of internal hard drives to run SQL Server and Windows themselves, not to store data.) In order of priority, build your SAN for fault tolerance, speed and size; if you think you need a 10-terabyte SAN, size is the last thing you price out.
First, make sure you can afford enough redundancy for that storage to survive the failure of a handful of physical disks; second, build it fast enough to support SQL Server. SQL Server’s most common bottleneck is storage speed, so it’s almost impossible to build a SAN that’s “too fast.” That’s especially true with virtualization, which adds its own I/O overhead as data is written to virtual disk images.
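One common way to put a number on "fast enough" is a spindle-count estimate: take the workload's read and write IOPS, apply the RAID level's write penalty, and divide by what a single disk can deliver. A rough sketch -- the workload figures and per-disk IOPS below are invented planning inputs, and real SANs (with caching and controller overhead) will differ:

```python
# Rough spindle-count estimate for a SAN supporting SQL Server.
# Workload numbers and the per-disk IOPS figure are hypothetical inputs.
import math

read_iops = 4000          # estimated read IOPS for the workload
write_iops = 1000         # estimated write IOPS for the workload
raid10_write_penalty = 2  # RAID 10 issues 2 physical writes per logical write
iops_per_disk = 180       # e.g., a 15K RPM drive; vendor figures vary

# Back-end IOPS the physical disks must actually service:
backend_iops = read_iops + write_iops * raid10_write_penalty
disks_needed = math.ceil(backend_iops / iops_per_disk)

print(f"Back-end IOPS: {backend_iops}")
print(f"Disks needed (RAID 10, performance only): {disks_needed}")
```

Note that this sizes the array for speed, not capacity -- which is exactly the priority order the article recommends. Thirty-four spindles of even modest drives will dwarf a 10-terabyte capacity requirement, so size really does take care of itself last.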
While SQL Server is perfectly capable of being run in a virtual machine, buying hardware for SQL Server virtualization hosts and configuring the virtual machines requires a specialized approach. Simply moving your existing SQL Server instances into poorly configured virtual machines, or to poorly provisioned hosts, can significantly degrade performance. There’s no need to take that risk: Keep these tips in mind and you’ll have an efficient virtualization infrastructure that’s SQL Server-ready.
ABOUT THE AUTHOR
Don Jones is a senior partner and principal technologist at strategic consulting firm Concentrated Technology. Contact him through his website, www.ConcentratedTech.com.
This was first published in June 2011