Here's what the DBA did, as I understand it. This is his procedure for changing 
the replication set. The Slony version is slony1-2.0.3-rc, with Postgres 8.3.9 
on the Master. I may be misusing some of the terminology, as I'm not a DBA and 
this is my first foray into Slony.
1. Kill the slons running on the Master and Slave; leave the slon on the remote 
host running.
2. Use PgAdmin to remove the Slony schema (the replication cluster) from the 
Master and Slave.
3. Regenerate the set records using a shell script on the Master.
4. Use slonik to initialize the cluster on the Master and Slave with the set 
records from above.
5. Start the slon on the Master.
6. Start the slon on the Slave with the -a option.
7. Use slonik to subscribe the Slave to the Master.

On the remote Postgres server, the value of at_counter in the 
_cluster1.sl_archive_tracking table is 338005, but the newly shipped logs 
appear to have started over at sequence number 1.
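That mismatch can be seen by comparing the tracking counter with the shipped 
archive file names, along these lines (connection parameters are placeholders, 
and the example file name is illustrative):

```shell
# Check the log-shipping tracking counter on the remote node.
psql -h remotehost -U postgres -d appdb \
     -c "SELECT at_counter FROM _cluster1.sl_archive_tracking;"

# Then compare against the sequence numbers embedded in the newly
# shipped archive files, e.g. slony1_log_2_00000000000000000001.sql --
# a restart at 1 means the new archives no longer line up with
# at_counter (338005 here), so the remote node will refuse to apply them.
ls /var/lib/slony/archive/
```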

Does that description make sense? This can't be the optimal way to add a table 
to the replication set!

Mike

-----Original Message-----
From: Steve Singer [mailto:[email protected]] 
Sent: Thursday, October 04, 2012 5:39 PM
To: Mike James
Cc: [email protected]
Subject: Re: [Slony1-general] offline log shipping errors

On 12-10-04 04:11 PM, Mike James wrote:
> OK, I was mistaken. The slon on the subscriber that ships the logs was 
> restarted with the -a option. The slon on the Master was not started with the 
> -a option.

Then I'm a bit confused about which steps were done that resulted in log 
shipping getting out of sync.

If you:
1. Set up log shipping and started slon with -a on the slave
2. Added some tables to replication

I would expect log shipping to continue to work.



_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
