
Optimize merge replication performance

Improve merge replication performance with methods explained in this tip and foster a highly scalable SQL Server environment. SQL Server expert and Microsoft MVP Hilary Cotter explains the tuning process and helps you determine if merge replication is right for you.

Merge replication has been a feature of SQL Server since SQL Server 7.0. It is designed for clients who are frequently offline and need to bi-directionally replicate with a publisher.

Merge replication works by logging changes that occur on the publisher or subscriber(s) in your merge replication topology between synchronizations. During a synchronization, the publisher and subscriber compare what has changed since the last synchronization, then use stored procedures to apply the publisher's changes to the subscriber and the subscriber's changes to the publisher. If changes occur to the same row(s) on both sides between synchronizations, they are logged as conflicts during synchronization, and either the publisher's or the subscriber's change is persisted on both sides, depending on how you choose to resolve these conflicts.
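To see what this conflict logging looks like in practice, you can inspect the conflict tables after a synchronization. The following is a minimal sketch, not an example from a real topology; the publication name SalesPub is hypothetical, and conflict tables follow the MSmerge_conflict_<publication>_<article> naming pattern:

    -- Run at the publisher, in the publication database.
    -- Lists the articles in a (hypothetical) publication that have conflicts.
    EXEC sp_helpmergearticleconflicts @publication = N'SalesPub';

    -- Shows the losing rows logged in one conflict table.
    EXEC sp_helpmergeconflictrows
        @publication    = N'SalesPub',
        @conflict_table = N'MSmerge_conflict_SalesPub_Orders';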

Microsoft has published an article on how to improve merge replication performance, which is a helpful reference.

Here are some additional tips to consider:

Is merge replication the best replication choice?
Merge replication is not the only bi-directional replication solution Microsoft offers. When deciding which to use, weigh the following factors:

  • Does it modify the underlying schema?
  • Does it scale?
  • Is there added latency?
  • Is there a limitation on where the majority of the DML (Data Manipulation Language) should occur?
  • How does it handle conflicts?
  • Does it replicate DDL (Data Definition Language) statements, i.e., can you modify the schema without breaking replication?
  • Are there any version/edition limitations?

The following comparison illustrates the choices and their limitations:

Merge Replication
  • Schema changes: Adds a GUID column.
  • Scale: Yes; you may need hierarchies.
  • Latency: Adds latency to each DML operation; sync times can be lengthy.
  • DML limitations: None.
  • Conflict handling: Rich; logs conflicts and allows you to roll them back.
  • Replicates DDL: Yes.
  • Version/edition limitations: SQL 2005 limits the number of subscribers depending on your edition. MSDE can act as a publisher; Express can only be a subscriber.

Bi-directional Transactional Replication
  • Schema changes: None.
  • Scale: Very good performance, but does not scale well beyond 2 to 3 subscribers.
  • Latency: Excellent performance.
  • DML limitations: None.
  • Conflict handling: None.
  • Replicates DDL: No.
  • Version/edition limitations: No version dependencies. Not supported on SQL Server Express or MSDE.

Immediate Updating
  • Schema changes: Adds a GUID column.
  • Scale: Scales up to 10 subscribers. It is critical that the link be very stable; if there is any network interruption, the subscribers become read-only.
  • Latency: Latency is added to all DML originating at the subscriber.
  • DML limitations: The majority of the DML should originate on the publisher.
  • Conflict handling: Conflicts are logged.
  • Replicates DDL: Yes.
  • Version/edition limitations: No version dependencies. Not supported on MSDE or SQL Server Express.

Queued Updating
  • Schema changes: Adds a GUID column.
  • Scale: Scales up to 10 subscribers.
  • Latency: Adds some latency to all DML originating at the subscriber.
  • DML limitations: The majority of the DML should originate on the publisher.
  • Conflict handling: Conflicts are logged.
  • Replicates DDL: Yes.
  • Version/edition limitations: No version dependencies. Not supported on MSDE or SQL Server Express.

Peer to Peer
  • Schema changes: None.
  • Scale: Very good performance, but does not scale well beyond 10 subscribers.
  • Latency: Excellent performance.
  • DML limitations: None, although all updates should occur on a single node to limit possible update conflicts.
  • Conflict handling: None.
  • Replicates DDL: Yes, but all nodes should be quiesced while DDL changes occur.
  • Version/edition limitations: SQL 2005 only; supported on the Enterprise Edition.

RDA
  • Schema changes: None.
  • Scale: Can offer better synchronization performance on PDAs than merge replication.
  • Latency: Sync times can be shorter than merge.
  • DML limitations: None.
  • Conflict handling: None.
  • Replicates DDL: No.
  • Version/edition limitations: Works on PDAs running SQL CE only.

Evaluate whether some of the other bi-directional replication options will work for you, despite the limitations.

Choose the appropriate profile
Most of the fine-tuning of merge replication topologies is done through the agent properties. Microsoft has bundled collections of these properties together in groups called profiles, with each profile dedicated to a specific topology. For example, SQL 2005 has the following profiles:

  • Default agent
  • High volume server-to-server
  • Rowcount and checksum validation
  • Rowcount validation
  • Slow link agent
  • Verbose history agent
  • Windows Synchronization Manager

Choose the appropriate profile for each subscriber. Only use the Rowcount and checksum validation, Rowcount validation, and Verbose history agent profiles while you are debugging or performance tuning. If you are doing server-to-server merge replication, use the High volume server-to-server profile instead of the Default agent profile.

Similarly, if you are replicating to clients over low-bandwidth links -- like phone lines -- use the Slow link agent profile for these subscribers.
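Profiles are assigned at the Distributor. As a rough sketch (the publication name and the IDs below are hypothetical; use the values your own server returns), you can list the built-in merge profiles and point a given merge agent at one of them:

    -- Run at the Distributor, in the distribution database.
    -- Agent type 4 = Merge Agent; lists the merge agent profiles and their IDs.
    EXEC sp_help_agent_profile @agent_type = 4;

    -- Find the ID of the merge agent you want to change.
    SELECT id, name FROM dbo.MSmerge_agents WHERE publication = N'SalesPub';

    -- Point that agent at the chosen profile, e.g. High volume server-to-server.
    EXEC sp_update_agent_profile
        @agent_type = 4,
        @agent_id   = 1,  -- id returned by MSmerge_agents
        @profile_id = 6;  -- profile_id returned by sp_help_agent_profile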

Minimize conflicts
A conflict occurs during a synchronization when:

  • A row inserted on the publisher has the same primary key value as a row on the subscriber.
  • An attempt is made to delete a row on the publisher which has been updated on the subscriber.
  • An attempt is made to delete a row on the subscriber which has been updated on the publisher.
  • An attempt is made to update a row on the publisher which has been deleted on the subscriber.
  • An attempt is made to update a row on the subscriber which has been deleted on the publisher.
  • The same column is updated on both the publisher and subscriber and you are using column-level tracking.
  • The same row is updated on both the publisher and subscriber and you are using row-level tracking.

During a synchronization, the merge process attempts to merge all changes from both sides. Depending on the type of conflict, a retry attempt is made after each batch is processed. These retries and the subsequent conflict logging are somewhat expensive, and anything you can do to minimize conflicts will shorten synchronization times. In some cases the merge agent will fail after a conflict and only succeed during the next synchronization.

Partitioning your data to avoid conflicts will increase the efficiency of your synchronizations.
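One common way to partition is a parameterized row filter, so that each subscriber receives only its own rows and overlapping updates become rare. A minimal sketch, assuming a hypothetical SalesPub publication whose subscribers supply their SalesPersonID through HOST_NAME():

    -- Run at the publisher when defining the publication's articles.
    -- Each subscriber receives only the rows matching its HOST_NAME() value,
    -- which the Merge Agent can override to carry a SalesPersonID.
    EXEC sp_addmergearticle
        @publication         = N'SalesPub',
        @article             = N'Client',
        @source_owner        = N'dbo',
        @source_object       = N'Client',
        @subset_filterclause = N'SalesPersonID = CONVERT(int, HOST_NAME())';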

Retention period
Merge replication has to do extensive tracking to know what it has sent to and received from each subscriber, and to know what it must send to each new subscriber in addition to the snapshot. On high volume systems, this tracking data (or replication metadata) can become very large and can increase the amount of time required to synchronize. This is especially acute with mobile subscribers. Drop the retention period to the smallest value possible: a window within which you know all subscribers will synchronize. Subscribers that fail to synchronize within the retention period must be reinitialized.
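For example, to drop the retention period to seven days (the publication name is hypothetical; the value is in days by default):

    -- Run at the publisher, in the publication database.
    -- Metadata is kept for 7 days; subscribers that go longer than this
    -- without synchronizing will have to be reinitialized.
    EXEC sp_changemergepublication
        @publication = N'SalesPub',
        @property    = N'retention',
        @value       = N'7';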

Snapshot
When synchronization times become so lengthy that they exceed the time required to generate and apply a new snapshot, consider sending a new snapshot to these subscribers instead. This is especially acute with SQL CE subscribers.
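Reinitializing a lagging subscriber is straightforward. A sketch with hypothetical server and database names; @upload_first = 'true' uploads the subscriber's pending changes before the new snapshot is applied, so they are not lost:

    -- Run at the publisher, in the publication database.
    EXEC sp_reinitmergesubscription
        @publication   = N'SalesPub',
        @subscriber    = N'SUBSERVER1',
        @subscriber_db = N'SalesSubDB',
        @upload_first  = N'true';  -- upload subscriber changes first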

Hierarchies
When you have large numbers of subscribers, there will be considerable locking when the merge agents run. In some cases it makes sense to create hierarchies, in which groups of subscribers synchronize with one publisher, which in turn synchronizes with another publisher further upstream. The topology might consist of a top-level publisher replicating to four downstream subscriber/publishers, with 20 or more subscribers replicating to each of those four downstream subscriber/publishers.

Creating such hierarchies will improve the overall performance of all nodes in the topology.

Join filters
Join filters are a merge-only feature that treats related data as a unit, or partition. Consider a case where you have a salesman table with a related client table, joined by a common SalesPersonID key, and an order table related to the client table by a ClientID key. If clients were assigned to a new salesperson and you had a filter on SalesPersonID, then when you updated the SalesPersonID from, say, one to two, the rows in the client table would move to the subscription that matched the filter value, but the order rows corresponding to those client rows would not move. To extend that partition to the orders table, you must either add a SalesPersonID column to the order table and update its value there as well, or use join filters.
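A join filter for this example might look like the following sketch (the publication, article and filter names are hypothetical):

    -- Run at the publisher. Extends the SalesPersonID partition on Client
    -- down to Orders: an order follows whatever partition its client is in.
    EXEC sp_addmergefilter
        @publication       = N'SalesPub',
        @article           = N'Orders',
        @filtername        = N'Client_Orders',
        @join_articlename  = N'Client',
        @join_filterclause = N'[Client].[ClientID] = [Orders].[ClientID]',
        @join_unique_key   = 1;  -- ClientID is unique in Client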

Join filters require a lot of processing during synchronization to figure out which rows should be added to or removed from a partition. In SQL 2005, partitions can be pre-computed so they do not have to be dynamically built during the synchronization process. However, pre-computed partitions add even more latency to all DML originating at the publisher or subscriber, so measure whether the faster synchronizations outweigh the added DML cost.
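Pre-computed partitions are enabled per publication, provided the publication meets the requirements (for example, no circular join filter paths). A sketch with a hypothetical publication name:

    -- Run at the publisher, in the publication database.
    EXEC sp_changemergepublication
        @publication = N'SalesPub',
        @property    = N'use_partition_groups',
        @value       = N'true';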

Make your join filters as shallow as possible. In our case, our join filters were two levels deep. You will get dramatic performance improvements by moving from four or five levels down to one or two. This will require re-architecting your schema, however, and considerable de-normalization of your tables.

Another point about join filters, and filters in general: ensure that all columns you are filtering on have an index on them.
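For the example above, that means indexing the row filter and join filter keys, for instance:

    -- Support the SalesPersonID row filter and the ClientID join filter.
    CREATE NONCLUSTERED INDEX IX_Client_SalesPersonID ON dbo.Client (SalesPersonID);
    CREATE NONCLUSTERED INDEX IX_Orders_ClientID ON dbo.Orders (ClientID);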

Summary
In this article we looked at some not-so-obvious methods of improving merge replication performance. It is important to evaluate whether merge replication is the best solution; I find it is frequently implemented for server-to-server replication as a DR solution when bi-directional transactional replication is a better fit. Properly tuned merge replication is highly scalable and performs exceptionally well.

ABOUT THE AUTHOR
Hilary Cotter has been involved in IT for more than 20 years as a Web and database consultant. Microsoft first awarded Cotter the Microsoft SQL Server MVP award in 2001. Cotter received his bachelor of applied science degree in mechanical engineering from the University of Toronto and studied economics at the University of Calgary and computer science at UC Berkeley. He is the author of a book on SQL Server transactional replication and is currently working on books on merge replication and Microsoft search technologies.
Copyright 2007 TechTarget
