Scaling is the process of making a computing service capable of handling larger workloads, or in other words, making it bigger and better. With SQL Server 2008, there are two distinct paths you can take to scaling: up and out. Both are designed to provide increased overall performance for a SQL Server-based application, but each provides unique advantages and utilizes very different techniques.
Scaling up with SQL Server
Scaling up is the most straightforward scaling method, as it simply involves making a single SQL Server computer able to handle greater workloads. You get faster processors, more processor cores and sockets, lots more memory and potentially more disk space when you scale up. This can take you pretty far in theory, but in practice how many of us are able to equip 64-way machines with a terabyte or so of RAM? Besides, past a certain point (depending on the type of applications you're running), memory and processor resources stop being your major bottleneck, and other issues, like network connectivity, take over. Unfortunately, simply adding more network cards to the computer won't necessarily increase your connectivity capabilities.
Scaling up is also limited because it involves only a single computer. In other words, you're not distributing the data. In many cases, performance can be improved by having multiple copies of your data, potentially on cheaper and less-powerful machines. This approach is often used to scale a Web farm, and by using less-expensive, often "commodity" hardware, it can cost less to scale out than to scale up.
Go out if you can
Scaling out with SQL Server involves significant challenges. It's not like a Web server, where you can simply copy static webpage files in bulk to create a second, identical-looking Web server. With SQL Server, your data is constantly changing.
Many people look first to SQL Server 2008's built-in replication capabilities to help with scaling out. SQL Server offers several modes of replication, but let's focus primarily on those that offer the lowest possible latency -- meaning they do the best job of keeping every replica of the data as up-to-date as possible. These are transactional and merge replication.
Both of these techniques start off by manually creating an exact replica of your current data using a mechanism called a snapshot – which is, for all intents and purposes, a photocopy of your existing database. Both techniques continue by analyzing the SQL Server transaction log and transmitting new transactions to the replica of the data. Because every database change goes through the transaction log (in a normal production database, that is), grabbing those transactions and sending them to the replica allows the replica to reconstruct the data -- giving it a continuous, exact copy.
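To make this concrete, here's a minimal sketch of setting up a transactional publication with SQL Server's system stored procedures. The database, publication, table and subscriber names (Sales, SalesPub, Orders, REPLICA1) are placeholders, and this assumes the server is already configured as a publisher with a distribution database:

```sql
USE Sales;
GO
-- Create a transactional publication that streams transactions continuously
EXEC sp_addpublication
    @publication = N'SalesPub',
    @repl_freq   = N'continuous',   -- send transactions as they occur
    @sync_method = N'concurrent';   -- generate the snapshot without locking tables

-- Add the Orders table as an article in the publication
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_object = N'Orders',
    @source_owner  = N'dbo';

-- Subscribe a second server to receive the replicated transactions
EXEC sp_addsubscription
    @publication       = N'SalesPub',
    @subscriber        = N'REPLICA1',
    @destination_db    = N'Sales',
    @subscription_type = N'push';
```

In practice, most administrators drive this through the Replication wizards in Management Studio, but the scripted form makes clear how few moving parts are involved: a publication, its articles, and a subscription.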
Both forms of replication are inherently one-way, though: they're designed to keep a replica updated, not to keep two writable copies of the data in sync. What you can do is set up two one-way replications between two copies of the database. You will then have to deal with conflicting changes. Transactional replication does so very simply: the most recent change, based on timestamp, wins.
Merge replication, which is more expressly designed for multiple-replica-sync scenarios, allows you to create custom merge handlers that deal with conflicting changes. I've seen merge replication work well in scenarios involving up to three or four busy servers, assuming that the servers have high-quality connectivity between them, such as a private T1 line. When connectivity slows, however, so does the ability of merge replication to maintain accurate copies on all involved servers. I've spent many hours troubleshooting replication problems and failures, and I have to say that I'm not a huge fan of it as a scale-out mechanism.
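For reference, registering one of the conflict resolvers that ship with SQL Server looks like the sketch below. The publication and table names are placeholders, and the "later wins" resolver shown is just one of the stock resolvers -- a custom COM-based or stored-procedure resolver can be plugged in the same way via @article_resolver:

```sql
-- Create a merge publication (assumes the database is already enabled for merge replication)
EXEC sp_addmergepublication
    @publication = N'SalesMergePub',
    @sync_mode   = N'native';

-- Add the Orders table, resolving conflicts in favor of the later OrderDate
EXEC sp_addmergearticle
    @publication      = N'SalesMergePub',
    @article          = N'Orders',
    @source_object    = N'Orders',
    @source_owner     = N'dbo',
    @article_resolver = N'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
    @resolver_info    = N'OrderDate';  -- column the resolver compares
```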
Where replication does work well is when most of your changes are made to a single "master" replica, and your other replicas are primarily read-only. Replication handles this wonderfully, but it's not a true scale out scenario; that implies every copy of the database is an equal citizen, handling both reads and writes.
While we're on the subject, it's important to keep in mind that log shipping and database mirroring in SQL Server 2008 are explicitly not intended for scale out scenarios. Both of these features are intended to improve SQL Server reliability by providing a "hot spare." Neither offers two-way replication, nor do they provide low enough latency to be considered scale out options.
Scale out – differently
The complexity of maintaining multi-master replication in SQL Server means that folks looking to scale out often look elsewhere. One technique is to partition the data. For example, in a large order-entry database, you might keep customer information on Server 1, product information on Server 2 and order information on Server 3. Order information might be further divvied up so that orders from North America are on Server 3, while orders from Europe are on Server 4. All the data can remain linked, and special SQL Server objects called federated views can be created so that client applications see a single giant database rather than three or four different ones. Because no two servers contain the same data, there's no need for replication. To a degree, federated views can even reduce the amount of client application reprogramming that needs to be done to implement this scenario.
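A federated view of this kind is, at heart, a view that joins tables living on different linked servers. A minimal sketch, assuming linked servers named SERVER1 through SERVER3 are already defined and each hosts one piece of a Sales database (all names here are placeholders):

```sql
-- Client applications query this view as if everything were local;
-- SQL Server fetches each table from the server that owns it.
CREATE VIEW dbo.CustomerOrders
AS
SELECT c.CustomerID,
       c.CustomerName,
       o.OrderID,
       o.OrderDate
FROM   SERVER1.Sales.dbo.Customers AS c
JOIN   SERVER3.Sales.dbo.Orders    AS o
       ON o.CustomerID = c.CustomerID;
```

The client simply issues SELECT statements against dbo.CustomerOrders, unaware that the underlying tables live on separate machines.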
This approach has downsides, however: every client needs good connectivity to each server, which might not be possible. This is especially true if you have clients in smaller countries where private WAN connectivity is expensive and difficult to find. Another problem is that scaling out further becomes more complex, since you have to re-divide the data and move it around.
The type of partitioning I've described here is called vertical partitioning because you're distributing entire SQL Server tables. A second technique, horizontal partitioning, is more complicated to implement but overcomes some of these downsides. With this technique, each server contains every table the application needs – but those tables don't contain every row. A column is used as a partitioning value to determine which rows of data live on which servers.
All of the orders taken in Europe, for example, might wind up in the Orders table on Server 3, which is physically located in the European office. Orders from North America would then go to the Orders table of Server 4, which is in a U.S.-based office. This helps keep data physically close to the people who are most likely to need it, but still offers every user the possibility of getting to all the data – albeit more slowly, depending on your WAN connectivity.
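In T-SQL, horizontal partitioning like this is typically built from member tables whose CHECK constraints pin each one to a single region, topped by a distributed partitioned view. A sketch, with placeholder server, database and column names, and with the partitioning column included in the primary key (a requirement if the view is to be updatable):

```sql
-- On SERVER3 (Europe); SERVER4 holds the same table with CHECK (Region = 'NA')
CREATE TABLE dbo.Orders (
    OrderID   int      NOT NULL,
    Region    char(2)  NOT NULL CHECK (Region = 'EU'),  -- partitioning column
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderID, Region)
);

-- The distributed partitioned view spans both servers. Because the CHECK
-- constraints are mutually exclusive, SQL Server can route a query such as
-- WHERE Region = 'EU' to only the member table that can satisfy it.
CREATE VIEW dbo.AllOrders
AS
SELECT * FROM SERVER3.Sales.dbo.Orders
UNION ALL
SELECT * FROM SERVER4.Sales.dbo.Orders;
```

The CHECK constraints are what make this more than a plain UNION: they let the query optimizer eliminate remote servers from a query entirely when the partitioning column is constrained.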
The last option is to ditch your SQL Servers and put your head – or your data – in the clouds. The difficulty with scaling out is precisely why Microsoft created Windows Azure and the family of Azure services. With Azure, you rewrite your backend logic as a set of Web services, which your client applications talk to in order to read and write data. Your data lives on special, Azure-flavored SQL Server computers.
In this scenario, all users access the application via Internet connectivity, and Microsoft makes sure -- magically -- that copies of your data are physically distributed everywhere they're needed. It also ensures that sufficient computing resources are available to handle all user requests promptly. It's a very different way of approaching scalability, but it offers nearly infinite scaling potential – and you only pay for what you use.
ABOUT THE AUTHOR
Don Jones is a co-founder of Concentrated Technology LLC, the author of more than 30 IT books and a speaker at technical conferences worldwide. Contact him through his website at www.ConcentratedTech.com.
This was first published in April 2009