Friday, March 30, 2012

Replication to multiple subscribers question

We have an application that runs on MS SQL 2000 (SP3a) and is used for
tracking information from multiple sources. We are shortly going to set up
replication of this database to a secondary location, with a third location
planned by mid-year, and we know that more locations will be added over the
next few years. Our concern is that if we let the snapshot expire, we will
have to regenerate it and reapply it to ALL subscribers before we can add a
new subscriber. This is a problem for two reasons. First, the foreign key
relationships between many of our tables appear to require that we delete
the database on the subscriber before we can apply the snapshot. Second,
the nature of our subscribers means that most of them will be disconnected
for long periods, and as the number of subscribers grows it becomes
increasingly unlikely that they will all be connected at the same time.
Our other option appears to be to never let the snapshot expire, but MS
warns that this may cause performance problems, and I haven't seen anything
that quantifies those problems.
Can anyone tell me what the "right" way of doing this is? What sort of
performance problems are we looking at if we don't let the snapshot expire?
Our database consists of about 100 user tables and currently holds roughly
300 MB of data.
TIA
Ron L.
Ron,
I suspect the performance problems being talked about are proportional to
the amount of changes made to the data after the snapshot has been
generated. E.g. if your snapshot contained only static lookup tables, then
having a long-lived snapshot file wouldn't really be an issue. However, if
your snapshot was of a product-listing table containing 1 million rows, and
product prices were changed on a regular basis (my case), then after
applying the snapshot you might find your merge agent (or distribution
agent) struggling to send down the accumulated changes. BTW, in this case
the problem would be smallest for merge replication, since rows that are
changed repeatedly are sent as a net change, and merge also avoids the
issue you would otherwise have of not being able to run the distribution
clean-up agent.
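On SQL 2000 the expiration you're describing is the publication's retention setting, so one thing worth looking at is adjusting it directly. A sketch using the standard system stored procedures - the publication name is hypothetical and the values are illustrative only:

```sql
-- Inspect the current settings (including retention, in days)
-- for a merge publication; 'TrackingPub' is a hypothetical name.
EXEC sp_helpmergepublication @publication = N'TrackingPub';

-- Lengthen the retention period, or set it to 0 so subscriptions
-- never expire (the trade-off is ever-growing merge metadata).
EXEC sp_changemergepublication
    @publication = N'TrackingPub',
    @property    = N'retention',
    @value       = N'60';   -- days; '0' = never expire
```

The metadata growth is exactly where the performance warning comes from: the longer the retention, the more change-tracking rows have to be kept and scanned.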
HTH,
Paul Ibison

Paul,
Thanks for the response. My data tends to be less about changes and more
about additions - we have logging records from text logs and records
tracking the movement of parts within systems. At the moment our largest
table has just over 100,000 records, and the next largest is in the
20,000-record range.
One other option we were looking at was to have separate publications
for each additional server (or small group of servers), so that
regenerating a snapshot has less impact. Do you have any feeling as to
whether this is a better approach?
Thanks,
Ron L

Ron,
I agree that this approach gives you more granularity and is therefore more
versatile for your needs. As long as the publications are distinct units it
should be OK - e.g. if there are PK-FK relationships across publications
then it's a no-go, so it depends on how related the tables are.
Regards,
Paul
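If it helps, the per-group publications could be scripted along these lines, assuming merge replication - all names here are hypothetical and the retention value is illustrative:

```sql
-- Hypothetical: one merge publication per group of subscribers,
-- each publishing a self-contained set of tables.
EXEC sp_addmergepublication
    @publication = N'TrackingPub_SiteGroupA',
    @retention   = 14;   -- days before subscriptions expire

-- Add each table as an article; repeat per published table.
EXEC sp_addmergearticle
    @publication   = N'TrackingPub_SiteGroupA',
    @article       = N'PartMovements',
    @source_object = N'PartMovements';
```

With this layout, letting the snapshot for one publication go stale only forces a regeneration for that group of subscribers, not for all of them.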

Paul,
What we are thinking of is multiple publications covering the same set of
tables. Is this not going to work?
Ron L

Ron,
I've not heard of this arrangement before, but it sounds feasible with merge
replication.
I guess the only downside is increased maintenance.
Cheers,
Paul
