Newbie alert...  Ok, I'm following the online docs 
(http://slony.info/documentation/2.0/logshipping.html#AEN1572). I started the 
subscriber slon with the -a option. I ran the slony1_dump.sh script and 
captured the output to a file. Copied the file to the remote server. 

So, my next steps are:
1. Stop slon on the remote server, which is out of sync anyway.
2. Do I need to drop the out-of-sync database from the remote server?
3. What psql command line do I need to run on the remote server? I realize this 
is pretty basic, but as noted, I'm not the DBA. He's on vacation this week. :(
4. If I then restart slon on the remote server, will it catch up?
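
From the log-shipping docs, my guess at step 3 is something like the following 
(the database name and file names here are just placeholders for whatever 
slony1_dump.sh produced):

    createdb replicadb
    psql -d replicadb -f slony_dump.sql
    # then apply each shipped log file in sequence, e.g.
    psql -d replicadb -f slony1_log_2_00000000000000000001.sql

Is that roughly right?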

Thanks for your patience! 
Mike


-----Original Message-----
From: Steve Singer [mailto:[email protected]] 
Sent: Friday, October 05, 2012 9:38 AM
To: Mike James
Cc: [email protected]
Subject: Re: [Slony1-general] offline log shipping errors

On 12-10-05 08:37 AM, Mike James wrote:
> Here's what the DBA did as I understand it. This is his procedure for 
> changing the replication set. Slony version is slony1-2.0.3-rc. Postgres 
> 8.3.9 on the Master. I might be mis-using some of the terminology, as I'm not 
> a DBA and this is my 1st foray with slony.
> 1. Kill the slon running on the master and slave. Leave the slon on the 
> remote host running.
> 2. He uses pgAdmin to remove the slony schema (slony replication cluster) 
> from Master and Slave.
> 3. Regenerate the set records using a shell script on the Master.
> 4. Use slonik to initialize the cluster with the set records from above on 
> Master and Slave.
> 5. Start the slon on the Master.
> 6. Start the slon on the Slave with the -a option.
> 7. Use slonik to subscribe the Slave to the Master.
>
> On the remote postgres server, the value of at_counter in the 
> _cluster1.sl_archive_tracking table is 338005. But it looks like the new logs 
> being shipped started over at sequence number 1.
>
> Does that description make sense? This can't be the optimal way to add a 
> table to the replication set!
>
What you describe makes perfect sense.  Step 2, removing the slony schema, 
uninstalls slony from both the master and the slave.  When you reinstall slony 
in step 4, you are creating a brand-new slony cluster.  The offline node isn't 
in sync with the new cluster; it was in sync with the old one.  You will need 
to rebuild your offline node with slony1_dump.sh.
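
Roughly, that looks like this (database, file, and host names below are 
placeholders; check the header of slony1_dump.sh for its exact arguments):

    # on a subscriber of the *new* cluster:
    slony1_dump.sh replicadb > slony_dump.sql
    scp slony_dump.sql remote:/tmp/
    # on the remote server, load it into a freshly created database:
    psql -d offlinedb -f /tmp/slony_dump.sql

After that, the log files shipped by the new cluster's slon -a will apply 
cleanly on top of the dump.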

The proper way to add tables to a replication set is described here 
http://www.slony.info/documentation/2.0/administration.html#ADDTHINGS

If you're using the altperl tools, you can look at slonik_create_set, 
slonik_subscribe_set, and slonik_merge_sets.
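
For example, something along these lines (set and node numbers are 
illustrative, and they assume the new set is already defined in 
slon_tools.conf; argument order can vary by version, so check each tool's 
usage output first):

    slonik_create_set 2 | slonik       # new set holding the added table(s)
    slonik_subscribe_set 2 2 | slonik  # subscribe node 2 to set 2
    slonik_merge_sets 1 1 2 | slonik   # once caught up, merge set 2 into set 1

That way you never have to drop and recreate the whole cluster just to add a 
table.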


_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
