Preparing for your SQL Server scale-out

It's never too early to prepare for a SQL Server scale-out. Read this tip from Roman Rehak to learn three popular ways to scale out SQL Server.

One of the greatest challenges in scaling out SQL Server is selecting the right option when a database needs to scale. When a database or set of databases behind an application comes under heavier use, database architects and developers need to prepare well in advance to make sure performance doesn't suffer.

Scaling out is, in a nutshell, spreading data across multiple servers. In most cases the data is accessed by a Web application with thousands or even hundreds of thousands of concurrent users. Very often this data could theoretically reside in one database on a single server while use is light. As a website becomes more popular and its load grows, the architecture team needs to strategize quickly about distributing the data across multiple servers.

There are many ways of scaling out, and describing all of them in detail is beyond the scope of this article. This article focuses on three relatively easy approaches that require only moderate implementation effort. The data still resides within a single database; the idea is to make it available on multiple servers in read-only form. There are three ways of implementing this strategy -- restoring a backup on one or more servers, creating a snapshot over a mirrored database, or using the SQL Server 2012 feature called "AlwaysOn" to read data from a replica. These techniques have limited scalability and are mainly suitable for medium-sized database loads, but if you find yourself in a situation where your database needs to scale, they might buy you enough time to implement a more advanced and complex scale-out solution.


As a database gets bigger and busier, you will likely run into performance problems when mixing a transactional load, such as data updates, with a decision support load (reports, building of online analytical processing cubes). As you start mixing these two types of loads, queries scanning the ever-growing historical data slow down the Web applications because the database server gets very busy.

A simple solution is to restore a backup of the database on a reporting server and use it as a back end for history, data mining and other decision support loads. This is very easy to implement and, if needed, you can even restore on multiple servers. In my experience, you can restore a 500 GB database in less than 30 minutes on good hardware, especially if you use the built-in backup compression. If your reporting database does not need to be available 24/7, you can simply restore a new backup over the existing database. If you need round-the-clock availability, you can keep two copies of the database: restore into the second copy, then "swap" the two databases by renaming them.

You can use this reporting database not only for reports and data mining, but also from a front-end application, provided you do not need the most recent data. For example, if you refresh the database once a day, you could code your application to update data in the main database in real time, read data less than 24 hours old from the main database, and read older data from the restored copy. The beauty of this solution is that it is very simple to implement, yet it gives you a lot of options and can greatly offload your main database server.
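The restore-and-swap technique can be sketched in T-SQL roughly as follows. The database names (Sales_Staging, Sales_Reporting, Sales_Old), logical file names and file paths here are hypothetical placeholders -- substitute your own, and verify the logical file names with RESTORE FILELISTONLY first:

```sql
-- Restore the latest backup into a staging copy alongside the live
-- reporting database (REPLACE overwrites a previous staging copy).
RESTORE DATABASE Sales_Staging
FROM DISK = N'D:\Backups\Sales.bak'
WITH MOVE N'Sales_Data' TO N'D:\Data\Sales_Staging.mdf',
     MOVE N'Sales_Log'  TO N'D:\Data\Sales_Staging.ldf',
     REPLACE, RECOVERY;

-- "Swap" the freshly restored copy in place of the old reporting
-- database by renaming, then drop the stale copy.
ALTER DATABASE Sales_Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE Sales_Reporting MODIFY NAME = Sales_Old;
ALTER DATABASE Sales_Staging   MODIFY NAME = Sales_Reporting;
ALTER DATABASE Sales_Reporting SET MULTI_USER;
DROP DATABASE Sales_Old;
```

The brief SINGLE_USER step kicks out active reporting connections so the rename can proceed; clients simply reconnect to the new copy under the old name.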

Another option is to make data from a production database available on another server using a combination of database mirroring and database snapshots. When you mirror a database to a different server, the mirror copy is not directly accessible because it is in the same state as a restored database that has not been "recovered," or brought online. However, if you build a database snapshot on top of the mirrored database, you do get a read-only copy of the data. It is essentially the same as building a snapshot on top of a regular database: you end up with a read-only, point-in-time copy. This implementation lets you use the mirror server to offload the primary server, just as in the backup-restore scenario, for reporting, data mining and so on. The underlying mirrored database keeps receiving updates, but your snapshot stays the same. Once your threshold for data staleness is reached, you can delete the snapshot and create a new one. The downside of this implementation is that you can only use it with the Enterprise Edition of SQL Server.
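Creating and refreshing a snapshot over the mirror might look like the sketch below. The database name (Sales), snapshot name (Sales_Snap) and file path are hypothetical; the logical file name must match the source database's data file:

```sql
-- On the mirror server: create a read-only, point-in-time snapshot
-- on top of the mirrored (unrecovered) database.
CREATE DATABASE Sales_Snap
ON ( NAME = Sales_Data,   -- logical name of the source data file
     FILENAME = N'D:\Snapshots\Sales_Snap.ss' )
AS SNAPSHOT OF Sales;

-- When the data is considered stale, drop the snapshot and
-- recreate it to pick up a newer point in time.
DROP DATABASE Sales_Snap;
```

Because clients connect to the snapshot by name, one common pattern is to alternate between two snapshot names (or repoint a synonym) so readers are never left without a copy during the refresh.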

The third and final option for easily scaling out the primary database is the AlwaysOn technology introduced in SQL Server 2012. It is a high-availability feature that ships data in near real time to up to four readable replicas on other servers. Just like the two previous implementations, these replicas can reduce the load on the primary server. Moreover, AlwaysOn has one advantage over the previous two methods: the databases on the replicas are updated in almost real time, and if there are no network issues, the data on the replicas should be only seconds behind the primary database. This means that in addition to reporting, you can use the replicas from your front-end applications, as long as you only read data from them (only the primary database is read/write; the replicas are read-only). You should, however, expect small network hiccups and not assume that the replicas will always be up to date. You could code your application to connect to the primary database to update data and retrieve the most recent data, and retrieve anything older than 10 to 15 minutes from a replica.

AlwaysOn provides automatic redirection for the main database: if the primary node dies, one of the replicas becomes the new primary. It also provides redirection for replicas. For example, if your primary server is Server A and the replicas are Server B and Server C, and you use Server B for reporting, AlwaysOn can automatically redirect your reporting applications to Server C if Server B becomes unavailable. The downside is that AlwaysOn not only requires the Enterprise Edition of SQL Server, but also needs Windows Server Failover Clustering installed and configured on each node.
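The replica redirection described above relies on read-only routing, which you configure per availability group. As a sketch, assuming a hypothetical availability group SalesAG on servers ServerA (primary), ServerB and ServerC, the routing setup could look like this (each secondary also needs its own READ_ONLY_ROUTING_URL defined, shown here only for ServerB):

```sql
-- Advertise where read-intent connections to ServerB should land.
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'ServerB' WITH
( SECONDARY_ROLE ( ALLOW_CONNECTIONS = READ_ONLY,
                   READ_ONLY_ROUTING_URL = N'TCP://ServerB.corp.local:1433' ) );

-- When ServerA is primary, send read-intent connections to ServerB
-- first, falling back to ServerC if ServerB is unavailable.
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'ServerA' WITH
( PRIMARY_ROLE ( READ_ONLY_ROUTING_LIST = (N'ServerB', N'ServerC') ) );
```

A reporting application then opts into this routing by connecting through the availability group listener with read intent, for example: `Server=tcp:SalesAG-Listener,1433;Database=Sales;ApplicationIntent=ReadOnly;`.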

You have now learned three different ways of easily spreading data from a primary database across multiple servers -- backup/restore, database snapshot over a mirrored database and finally, utilizing AlwaysOn replicas. As you can see, AlwaysOn provides options and flexibility for providing replicas of a primary database on other servers, but it is also the most complex and costly to implement.

This was first published in January 2013
