SQL Server load distribution can be accomplished in many different ways. Here is a list of some of the most common methods:
- Merge replication
- Transactional replication
- Log shipping
- Database mirroring
- SSRS n-tier deployment
- SSAS server farm deployment
Let's take a closer look at each of these methods, one at a time.
Merge replication

Merge replication can be used to better utilize geographically distributed resources. For example, if you ran a grocery store chain with a supply system, you would likely want to distribute the load to a regional SQL Server. You could then use merge replication on a nightly basis -- preferably after hours -- to synchronize the changes with a centralized supply server.
This type of configuration is ideal, since the regional system would be queried/updated heavily throughout the day, while supply information might be updated hundreds or thousands of times prior to the one update that gets passed up to the centralized system. A system such as this would keep network latency to a minimum as well as decrease the network load required for a functional system. Response time in this scenario should be excellent.
Keep in mind that merge replication imposes certain requirements on database design, so implementing it successfully requires either an architecture built with merge replication in mind or schema modifications.
Please also review the distribution db/server optimization notes in the "transactional replication" section, since they apply to merge replication as well.
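As a rough sketch, a nightly merge publication like the one described above might be configured on the central server with SQL Server's replication stored procedures. The database, article, and server names below are hypothetical placeholders:

```sql
-- Run in the publication database on the central (publisher) server.
-- Enable the supply database for merge publication.
EXEC sp_replicationdboption
    @dbname  = N'CentralSupply',
    @optname = N'merge publish',
    @value   = N'true';

-- Create the merge publication.
EXEC sp_addmergepublication
    @publication = N'SupplyMerge',
    @description = N'Nightly merge of regional supply changes';

-- Publish the inventory table as an article.
EXEC sp_addmergearticle
    @publication   = N'SupplyMerge',
    @article       = N'Inventory',
    @source_object = N'Inventory';

-- Subscribe a regional server; the merge agent job would then be
-- scheduled to run nightly, after hours.
EXEC sp_addmergesubscription
    @publication       = N'SupplyMerge',
    @subscriber        = N'REGION01',
    @subscriber_db     = N'RegionalSupply',
    @subscription_type = N'push';
```

The key scheduling decision is the merge agent's run time, since that is what confines synchronization traffic to off-hours.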
Transactional replication

Transactional replication can be used to push transactions to a secondary reporting or analytical server. Bidirectional communication can be enabled here to provide low-latency updates across SQL Server systems. Note that this approach is significantly more intensive on both the network and the involved servers than merge replication, since every DML operation is replicated rather than just the current state at the time of synchronization.
If this approach is utilized, you can further distribute the load. Transactional replication requires that a distribution database/server be set up, which then controls the distribution of all transactions to be replicated from the publisher to the n number of subscribers. Transactions are not cleared from the published database's transaction log until they have been transferred to the distribution database/server and a transaction log backup occurs. In most transactional replication deployments, the distribution database/server is actually part of the publishing server, which increases the CPU, I/O and other load on the publisher.
A best practice is to create a separate server with a distribution database. In this configuration, the publisher transfers transactions that need to be replicated to the database on the distribution server. Once the publisher has transferred the transactions, the next transaction log dump can clear them. This scenario reduces the risk of processing delays causing the published database's transaction log to fill up. It also reduces the CPU, I/O and memory load on the server that holds the publisher -- especially if there are multiple subscribers involved.
The risk in either configuration is that replicated transactions are not cleared by a transaction log backup until they have reached the distribution database/server, so if distribution falls behind, the publisher's transaction log can fill up over time, resulting in a down production system.
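A minimal sketch of pointing a publisher at a dedicated distribution server follows, using SQL Server's standard distribution stored procedures. Server names and the password are placeholders:

```sql
-- 1) On the dedicated distribution server:
EXEC sp_adddistributor
    @distributor = N'DISTSRV',
    @password    = N'<strong_password>';

-- Create the distribution database on that server.
EXEC sp_adddistributiondb
    @database = N'distribution';

-- Register the publishing server with this distributor.
EXEC sp_adddistpublisher
    @publisher       = N'PUBSRV',
    @distribution_db = N'distribution';

-- 2) On the publishing server, point at the remote distributor:
EXEC sp_adddistributor
    @distributor = N'DISTSRV',
    @password    = N'<strong_password>';
```

With this in place, the publisher hands transactions off to DISTSRV, and the distribution agents running there carry the cost of delivering them to each subscriber.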
Log shipping

Log shipping involves taking the transaction log backups from the originating server, transferring them to the destination system, and restoring them automatically on a database that is in a restoring state on the receiving server. The benefit of this approach is that there is no distribution database/server involved, so there is no risk to the production server when processing backups as there is in the replication scenarios.
The database is marked as restoring on the destination system, but it can also be placed in standby mode, allowing it to be used as a read-only server for reporting or analytical purposes. It's safest to require that users be disconnected during the transaction log restores; otherwise the restores cannot be applied while users are connected, and the transaction log backups can accumulate faster than they are applied. In the replication scenarios, by contrast, all databases can be updated as needed, even the standby server(s).
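One cycle of log shipping can be sketched manually as below; in practice SQL Server Agent jobs automate the backup, copy, and restore steps. The database, share, and file names are placeholders:

```sql
-- On the primary server: back up the transaction log.
BACKUP LOG Sales
    TO DISK = N'\\FileShare\LogShip\Sales_001.trn';

-- (The copy job then moves the .trn file to the secondary server.)

-- On the secondary server: apply the log backup, leaving the
-- database in read-only STANDBY mode so it can serve reports
-- between restores.
RESTORE LOG Sales
    FROM DISK = N'\\FileShare\LogShip\Sales_001.trn'
    WITH STANDBY = N'D:\LogShip\Sales_undo.tuf';
```

The STANDBY option is what keeps the database readable; restoring WITH NORECOVERY instead would leave it inaccessible until failover.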
Database mirroring

Database mirroring can be used to mirror transactions to one standby server. In this case, the database on the standby server is in a restoring state, much like in the log shipping scenario. If you have enough space, however, you can create a database snapshot of the mirrored database at a moment in time, creating a read-only copy that you can use for reporting and analytics. Keep in mind that this scenario limits you to one standby server, whereas replication and log shipping allow n number of standby servers.
This approach is ideal for scenarios where the reporting data can be snapshot nightly, but less so when that data needs to be up to date, or reasonably close to it. While log shipping delays the reporting data by the interval of your transaction log backups, replication provides a near real-time solution -- assuming the system does not become backed up.
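Creating the read-only snapshot on the mirror looks roughly like this; the database, logical file, and path names are placeholders:

```sql
-- Run on the mirror (standby) server. The snapshot exposes the
-- mirrored database as it existed at this moment in time.
CREATE DATABASE Sales_Snapshot_20100101
ON ( NAME = Sales_Data,   -- logical file name of the mirrored database
     FILENAME = N'D:\Snapshots\Sales_20100101.ss' )
AS SNAPSHOT OF Sales;

-- Reports can now query Sales_Snapshot_20100101 while mirroring
-- continues to apply transactions to Sales underneath it.
```

For a nightly reporting cadence, a scheduled job would drop the previous snapshot and create a fresh one, since each snapshot is frozen at its creation time.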
SSRS n-tier deployment
With many SQL Server Reporting Services (SSRS) deployments, SSRS is installed on the server it's intended to run on. The problem with this approach is that SSRS can consume substantial CPU and memory resources depending on the formulas in the reports, the length of the reports, usage patterns and more. As a result, SSRS can quickly start contending with the host SQL Server services for CPU and memory, resulting in a sluggish server.
SSRS can be installed on any server separate from a SQL Server instance, and you can keep the SQL Server instances consolidated on central servers. SSRS requires metadata databases to run, but those can be located on any servers separate from the SSRS services themselves.
In other words, Server1 may run the SSRS metadata databases and be a consolidated SQL Server, while Server2 could be a consolidated SQL Server holding the target data for some of the SSRS reports. Server3 may then be an SSRS service box with virtually no storage -- but with plenty of CPU/RAM -- that talks to Server1 for its metadata and to Server2 for the actual running of the reports.
SSAS server farm deployment
SQL Server Analysis Services (SSAS) can be even more CPU- and memory-intensive than SSRS, depending on the activities occurring. SSAS can be installed on multiple servers distributed behind a load balancer in a server farm configuration.
Third-party load balancing solutions are viable here and typically offer a rich, reliable feature set. Microsoft's solution is the Network Load Balancing (NLB) feature, which is part of the Windows Server operating system and has received generally mixed reviews. Once an NLB cluster is created, incoming requests are balanced among all the servers in the server farm. This distributes the processing, provides excellent redundancy in case of server failure, and offers good scalability as load increases.
In this configuration, performance will generally vary from one node in the cluster to another, since each node will keep its own separate cache.
SQL Server can be scaled out in many different ways, and I have just barely begun to scratch the surface with this article. When you consider a SQL Server solution to a given business problem, be sure to review all the approaches I have outlined above as well as SQL Server Service Broker.
The particular business problem and tools at your disposal will determine what scalability requirements exist for your situation. Keep in mind that in many scenarios you will want to utilize more than one of these techniques, so don't be afraid to mix and match. Finally, please feel free to comment with other load balancing ideas, along with your network load balancer (NLB) of choice.
ABOUT THE AUTHOR
Matthew Schroeder is a senior software engineer who works on SQL Server database systems ranging in size from 2 GB to 3+ TB, with between 2k and 40+k trans/sec. He specializes in OLTP/OLAP DBMS systems as well as highly scalable processing systems written in .NET. Matthew holds the Microsoft Certified IT Professional (MCITP): Database Developer certification, a master's degree in computer science and more than 12 years of experience in SQL Server/Oracle. He can be reached at [email protected]