
Four tips on boosting SQL Server scalability

SQL Server scalability is high on the list of system properties, and scaling up is easier than scaling out. Follow these four tips and you’ll be on your way to optimum performance.


Scaling out a SQL Server environment across multiple systems can be a difficult and complicated project, involving partitioned databases, federation and more. So, when it comes to SQL Server scalability, most organizations prefer to scale individual systems up as much as possible before trying to tackle the out option. Here are four tips for making the scale-up process easier and more effective.

Maximize SQL Server performance components. Every server’s performance comes down to four basic components: memory, disk storage, network adapters and CPU. Your first step will be to maximize all four of these components in an existing server.

Start with memory – it can have the biggest impact and it’s usually the most easily expanded piece of a server. There is just one limitation: On servers running a 32-bit version of Windows, there’s no reason to have more than 4 GB of total RAM installed, because the operating system can’t make use of more than that. On 64-bit machines running 64-bit versions of Windows and SQL Server, install as much RAM as the server can hold to get the best performance from SQL Server.

In fact, if you have any SQL Server instances running on 32-bit versions of Windows, migrate them to a 64-bit machine as your very first step, because enabling your systems to address that additional memory is one of the biggest performance improvements you can make. And don’t be cheap on memory; buy whatever your server manufacturer recommends, which is often more expensive, error-correcting, high-speed memory. It’s worth the expense.
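Before planning a memory upgrade, it helps to confirm what the instance can actually see. The following sketch uses documented catalog views and `sp_configure` settings; it checks whether the build is 64-bit, how much physical RAM the operating system reports, and the current memory cap (by default effectively unlimited):

```sql
-- Is this a 64-bit build? SERVERPROPERTY('Edition') ends in "(64-bit)" on x64.
SELECT SERVERPROPERTY('Edition') AS edition;

-- Physical memory visible to SQL Server (sys.dm_os_sys_memory, SQL Server 2008+).
SELECT total_physical_memory_kb / 1024     AS total_physical_memory_mb,
       available_physical_memory_kb / 1024 AS available_physical_memory_mb
FROM sys.dm_os_sys_memory;

-- Current cap on the buffer pool; the default (2147483647 MB) means "take it all".
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
```

If the edition string lacks "(64-bit)", no amount of added RAM past 4 GB will help until the migration described above is done.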

Storage is the next thing you’ll look at. It’s a big topic, and it takes up its own tip later in this article. Suffice to say that faster storage performance is always a good thing.

Network connectivity is third. Plenty of SQL Server computers get performance-bound at the network adapter level. If you can afford them, multiple network adapters will provide multiple paths to your network. Gigabit Ethernet (GbE) adapters should be a minimum for SQL Server computers, and if your network can support 10 GbE, then go for it. It’s especially important to dedicate a separate network, with at least one network adapter, to each major type of traffic.

For example, if you’re relying on iSCSI for storage communications, data transfers should be happening over a dedicated network interface controller (NIC) and a dedicated network, not over networks that are shared by client traffic.
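One way to verify that client traffic is arriving on the NIC you intended is to inspect live connections. This sketch queries the documented `sys.dm_exec_connections` view; if the `local_net_address` values include the address you reserved for the storage network, client and storage traffic are sharing a path:

```sql
-- Which server-side address and protocol is each client connection using?
SELECT session_id,
       net_transport,       -- TCP, Shared memory, Named pipe, ...
       local_net_address,   -- the server NIC address the connection arrived on
       local_tcp_port,
       client_net_address   -- where the client is coming from
FROM sys.dm_exec_connections;
```

Note that iSCSI storage traffic itself won’t appear here — this view shows only database connections — which is exactly the point: the storage NIC’s address should never show up in this list.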

Finally, examine your server’s processors. They are last on this list for a reason: It’s rare that you can upgrade them in a cost-effective fashion. That’s because of the way processors are matched to their motherboards, which typically are designed for a specific family and generation of CPUs. To install significantly faster processors, you often have to get a new motherboard – and that usually means new memory and new everything else. In other words, a whole new server.

Adding more processors, if your server has room for them, will add some kick – with upgrades, more is always preferable to slightly faster. But here again, your options might be limited: Most servers are purchased with fully populated sockets, leaving no room for additional processors.   
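Before pricing out a processor upgrade, inventory what the server already has. This sketch reads the documented `sys.dm_os_sys_info` view (SQL Server 2005 and later) to show logical CPUs, an estimate of populated sockets, and the schedulers SQL Server is actually running:

```sql
-- Processor inventory as SQL Server sees it.
SELECT cpu_count,                                   -- logical processors
       hyperthread_ratio,                           -- logical CPUs per physical package
       cpu_count / hyperthread_ratio AS physical_sockets,  -- populated sockets (approx.)
       scheduler_count                              -- schedulers SQL Server is using
FROM sys.dm_os_sys_info;
```

If `physical_sockets` already matches the motherboard’s socket count, there is no room to add processors and the upgrade conversation becomes a new-server conversation.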

Virtualize when it makes sense. Believe it or not, virtualization can be a clever way to bring about performance benefits from SQL Server. It seems counterintuitive; after all, the point of virtualization is to run multiple workloads on a single host computer. So, by dedicating a single computer entirely to SQL Server, wouldn’t you get better SQL Server performance?

Generally speaking, you would. If you have one SQL Server database that can keep an entire physical server completely occupied, then do just that. Many organizations, however, load their SQL Server systems with multiple databases or even multiple instances. There’s nothing wrong with that – it’s what SQL Server was designed for, in fact – but it doesn’t give you much flexibility for matching workloads with the available hardware resources.

Instead, move those databases that need less than an entire physical server into a virtual machine. SQL Server is a great virtual-machine guest application, and by separating your databases across multiple SQL Server instances, each hosted in different virtual machines, you gain flexibility. Using live migration technologies, you can quickly move virtual machines from host to host, rearranging the virtual machines to best utilize available hardware based on current workload demands.

You can pull off similar tricks with clustering technologies, including Windows Failover Clustering. Partition your databases into different clustered SQL Server instances (a SQL Server instance is, after all, a form of virtualization). You can then move those instances around at will, with very little interruption in database availability. If Database A needs to be scaled up one afternoon, shift other instances off of that database’s cluster node, freeing up resources for use by the instance that’s getting hammered. This kind of dynamic scaling can produce impressive performance results, although it does require your organization to develop a mature performance monitoring and response model.
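A quick way to see whether an instance is clustered, and where it could move, is to query the documented server properties and cluster DMV. This is a sketch; `sys.dm_os_cluster_nodes` simply returns no rows on a standalone server:

```sql
-- Is this instance clustered, and which node is it on right now?
SELECT SERVERPROPERTY('IsClustered')                  AS is_clustered,
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS')  AS current_node;

-- All nodes this instance can fail over to (empty on a standalone server).
SELECT NodeName
FROM sys.dm_os_cluster_nodes;
```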

If you’re going to be spending big money on new server hardware, it makes sense to do so in conjunction with some kind of shared-resources scheme, such as clustering or virtualization. That way you’ll have your expensive new hardware working as close as possible to its maximum capacity at all times while still delivering the performance your users require.

Upgrade storage performance. The size of your storage systems is driven by the capacity needs of your databases; the speed of those storage systems is too often overlooked. In the SQL Server world, disk storage speed is king: SQL Server is most likely to become I/O-bound before memory, network adapters or processors become an issue. You want storage that’s fast. That means putting storage area networks (SANs) on the other end of high-speed interconnects and communicating with your disks over fast SAN protocols such as Fibre Channel or iSCSI.

Pay close attention to the exact kind of workload your database handles and match storage technologies such as RAID to that workload. RAID 5, for example, offers recoverability in the event of a device failure, but it can slow down write times because the parity information, which enables data recovery, must be computed and written along with each update.

Fast disk-controller electronics can help resolve that issue by caching data as fast as the server sends it, and then quickly dumping the data to platters. Every element of the storage subsystem plays a crucial role in performance – platter rotational speed, raw device I/O, average seek times, communications media (copper or fiber). Work with an experienced storage vendor to put together a system that offers the best performance for SQL Server.
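To find out which files are actually I/O-bound before shopping for faster storage, you can compute average read and write latency per database file. This sketch uses the documented `sys.dm_io_virtual_file_stats` function; dividing cumulative stall time by operation counts gives average latency in milliseconds since the instance started:

```sql
-- Average I/O latency per database file since instance startup.
-- Files with high averages are the first candidates for faster storage.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id     = mf.file_id
ORDER BY avg_write_ms DESC;
```

The `NULLIF` guards avoid divide-by-zero on files that have seen no reads or writes yet.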

If money is an issue, skimp on processor speed, and even memory, before you skimp on storage. Storage performance will go a long way toward scaling SQL Server up, up, up.

Invest in new servers. At some point, you’re going to look at your existing servers and realize that you can’t squeeze any more performance out of them. Your storage I/O is as fast as it’s going to get. You’ve maxed out the memory. Every processor slot is full. The boxes are bristling with 10 GbE NICs, and they have enough cooling fans to power a hovercraft. In those cases, it’s easy to talk yourself into new servers (although your chief financial officer might still argue the point with you).

What’s tougher is to look at a server that clearly has room for expansion and decide to ditch it for a new one. When you’re looking at empty memory slots, available CPU sockets and desolate PCI backplanes, it seems much more cost-effective to start filling in those holes with new, performance-enhancing components.

Sometimes that’s a great idea. Sometimes you need to resist the urge.

First of all, never spend a dime upgrading a 32-bit computer. Replace it with a 64-bit one and install 64-bit versions of Windows and SQL Server. Load the new server with RAM aplenty, four or more multicore processors, and fast new NICs.

One last thought on SQL Server scalability: Any server more than four or five years old should be replaced, not upgraded. The cost to add memory or processors to an older server is often too close to what a brand-new server would cost, and a new server comes with lots of performance upgrades you couldn’t buy a la carte: faster BIOS circuits, faster chipsets, faster memory bridges. That old server can still find useful life as a file server, or it can handle some other, less-intense workload than running SQL Server.

ABOUT THE AUTHOR
Don Jones
is a senior partner and principal technologist at strategic consulting firm Concentrated Technology. Contact him via www.ConcentratedTech.com.

This was last published in March 2011
