Is there a way to get replication to commit records in batches instead of all at once? I am in a 24/7 shop, and some of my updates end up being thousands of rows, which sometimes locks the subscriber table for a few minutes. If I could get it to commit, say, every 1,000 rows, it might give me some relief in this area.
Or am I thinking about this wrong? If this is possible, would it help at all?
Transactional replication adheres to the ACID properties, so if you apply 10,000 commands in one transaction at the publisher, the Distribution Agent will apply the same commands in one transaction at the subscriber.
You have a couple options:
1. Commit your commands at the publisher in smaller batches. That means if your update statement will affect 100k rows, break it up by updating 1,000 rows at a time until all 100k are completed.
2. Use the Log Reader Agent parameter -MaxCmdsInTran. This breaks the ACID properties but allows you to do what you want: it splits a large publisher transaction into smaller transactions at the distributor. It has other performance impacts, but if you're committing thousands and thousands (or millions) of rows in a single transaction, it can offer some relief. You can find more information about this parameter in Books Online.
I'd recommend #1 before #2.
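Option 1 can be sketched in T-SQL as a simple batching loop. This is a minimal illustration, not the poster's actual code; the table name `dbo.MyTable` and the `Status` column are hypothetical placeholders, and the WHERE predicate must naturally exclude rows that have already been updated so the loop terminates:

```sql
-- Sketch: commit a large update in batches of 1,000 rows so each
-- batch replicates as its own small transaction at the subscriber.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) dbo.MyTable      -- hypothetical table
    SET    Status = 'Processed'
    WHERE  Status = 'Pending';         -- predicate excludes finished rows

    SET @rows = @@ROWCOUNT;            -- 0 when nothing is left to update
END
```

Because each `UPDATE` statement runs in its own autocommit transaction, the Distribution Agent delivers each batch as a separate transaction, so subscriber locks are held for seconds rather than minutes.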
Thanks for that response. It was what I was expecting to hear, but I just wanted to check. My servers typically only get updated once a day with major updates, and the updates can happen over time since they are not mission critical. The servers are only content related, so as long as they get updated they are happy.
Thanks again for the answer.