
Scalability options for really big SQL Server databases

When a database outgrows its server, DBAs have the option of either scaling it out across multiple systems or moving to a more powerful server. Here are the factors to consider.

When your database becomes too big for the server it’s on or is handling too much traffic, what can you do?

The first option is to scale out, which means spreading your database across multiple servers. This may involve using replication (which can get complicated) or a partitioned database (also really complex). In fact, there’s almost no way to scale a database out without doing some kind of redesign on the database itself, which will probably mean changes to whatever applications are talking to the database.

That redesign might not be needed for every application in the world, but scaling out always involves a great deal of planning, analysis and work.

Some of the major scale-out options for SQL Server include:

Replication. The idea is to give each of several servers an identical copy of the database so the workload can be split evenly among them. This usually involves merge replication, although transactional replication with updating subscribers is also an option.

There is always latency in every form of SQL Server replication, so you can’t rely on every copy of the database to be precisely identical at all times. Replication can also be tricky to maintain in the long term, especially if the network links between servers aren’t absolutely reliable.
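
To give a rough sense of what merge replication involves on the publisher side, here is a minimal T-SQL sketch. The database, publication and table names (SalesDB, SalesPub, dbo.Orders) are invented for illustration, and a real deployment also needs a distributor, a snapshot agent and subscribers configured.

    -- Enable the (hypothetical) SalesDB database for merge publication.
    EXEC sp_replicationdboption
        @dbname  = N'SalesDB',
        @optname = N'merge publish',
        @value   = N'true';

    -- Create a merge publication and add one table to it.
    EXEC sp_addmergepublication
        @publication = N'SalesPub',
        @description = N'Merge publication of the Orders table';

    EXEC sp_addmergearticle
        @publication   = N'SalesPub',
        @article       = N'Orders',
        @source_owner  = N'dbo',
        @source_object = N'Orders';

Each subscriber then gets its own subscription (via sp_addmergesubscription), and the merge agent handles synchronization -- which is exactly where the latency described above comes from.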

Third-party replication. This is replication provided by third parties rather than by SQL Server itself. Expect complexity, application redesigns and more than a little cost with this method. If you're not a SQL Server expert, it's time to find one.

Distributed, partitioned databases. The idea here is to split the database's data across multiple servers so that some tables -- or even portions of some tables -- are spread out. Database objects like distributed partitioned views (DPVs) provide a way of accessing the data as if it all lived on one server. For amazingly well-written applications that only access data through views and stored procedures, you can often distribute the database without changing the application: you just create distributed views that provide the same functionality as the non-distributed ones the application was already using.
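
As a sketch of what that can look like, here is a hypothetical distributed partitioned view over a Customers table that has been split by CustomerID range across three servers (the server, database and table names are made up for this example):

    -- Each member table holds one CustomerID range, enforced by a
    -- CHECK constraint on the partitioning column so the optimizer
    -- can skip the remote servers it doesn't need.
    CREATE VIEW dbo.Customers
    AS
    SELECT * FROM dbo.Customers_Range1                     -- local range
    UNION ALL
    SELECT * FROM Server2.SalesDB.dbo.Customers_Range2     -- linked server
    UNION ALL
    SELECT * FROM Server3.SalesDB.dbo.Customers_Range3;    -- linked server
    GO

    -- The application queries the view as if all rows were local:
    SELECT CustomerID, CompanyName
    FROM dbo.Customers
    WHERE CustomerID = 150432;

If the view keeps the same name and shape as the table or view the application used before, the application itself doesn't have to change.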

The second -- and easier -- option is to scale up. Simply put, this involves getting:

  • a bigger server
  • more memory (lots more)
  • 64-bit processors so you can access all that memory
  • more processor sockets and cores
  • fewer other things running on the same server and competing for resources
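
Before writing the check, it's worth seeing what the current server already has and what SQL Server is allowed to use. A minimal sketch (the exact DMV columns vary a little between SQL Server versions, so treat this as illustrative):

    -- How many logical processors does SQL Server see?
    SELECT cpu_count, hyperthread_ratio
    FROM sys.dm_os_sys_info;

    -- How much memory is SQL Server currently allowed to use?
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';

If those numbers are nowhere near the hardware's limits, there may be cheap headroom left before any new servers are ordered.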

When Microsoft does formal benchmarks of SQL Server, it uses max scale-up hardware: 16-socket, 96-core 64-bit servers with a terabyte of memory. Yeah! Such a system can push over 2,000 transactions per second against an 8 TB database, and it costs something in the neighborhood of a small Caribbean island. But in most cases, it's still far easier to spend money scaling up than scaling out.

Miss a tip? Check out the archive page for Don's series, SQL Server for the Reluctant DBA.

ABOUT THE AUTHOR:
Don Jones is a co-founder of Concentrated Technology LLC, the author of more than 30 IT books and a speaker at technical conferences worldwide. Contact him through his website at www.ConcentratedTech.com.

This was last published in June 2010
