Hi

 

Thanks for the advice.

 

With regard to the question on moving the Master role back to N1 after it is 
fully synced with N2: does the move_set command pass the Master role back 
completely?

 

Assuming the roles are switched back to the initial setup, i.e. N1 is the 
Master and N2 is the Slave, and assuming we have to stop both servers due to 
an unforeseen event, then at the point of starting the slon daemon on both servers:

1.      Is it the same as before (e.g. N1 - slon_start 1, N2 - slon_start 2)?
2.      Will the Master role still be with N1?

 

Regards,

Lawrence Giam

..................................................................................................
Lawrence Giam | Global IT Creations Pte Ltd |  Network Administrator  
website: http://www.globalitcreations.com
phone: +65 6836 4768 ext 115| fax: + 65 6836 4736 | mobile: + 65 9758 7448 

-----Original Message-----
From: Filip Rembialkowski [mailto:[email protected]] 
Sent: Tuesday, 29 September 2009 12:44 AM
To: Lawrence Giam
Cc: [email protected]
Subject: Re: [Slony1-general] Failover and Failback

 

 

2009/9/28 Lawrence Giam <[email protected]>

Hi All,

 

I am testing failover and failback with Slony and documenting the process to 
better understand how it can be done. I have set up 2 test nodes 
(Master and Slave).

 

N1 - Master

N2 - Slave

 

Cluster Setup

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

# INIT CLUSTER
cluster name = testrepl;
node 1 admin conninfo = 'host=db01 dbname=testdb user=postgres port=5432 password=xxx';
node 2 admin conninfo = 'host=db02 dbname=testdb user=postgres port=5432 password=xxx';
init cluster (id = 1, comment = 'Node 1 - tes...@db01');

# STORE NODE
store node (id = 2, event node = 1, comment = 'Node 2 - tes...@db02');
echo 'Set up replication nodes';

# STORE PATH
echo 'Next: configure paths for each node/origin';
store path (server = 1, client = 2, conninfo = 'host=db01 dbname=testdb user=postgres port=5432 password=xxx');
store path (server = 2, client = 1, conninfo = 'host=db02 dbname=testdb user=postgres port=5432 password=xxx');

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

In normal operation, N2 is subscribed to N1. Suppose I issue the failover 
command "slonik_failover n1 n2" so that N2 becomes the Master node, operation 
runs smoothly, and I then issue slonik_drop_node to fully clear N1 from the 
cluster config.
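
For reference, that failover step can be sketched in raw slonik (a sketch only; 
node IDs taken from the setup above, using the slonik FAILOVER and DROP NODE 
commands):

```
# Promote node 2 to origin after node 1 has failed
failover (id = 1, backup node = 2);

# Once N2 is confirmed as the new origin, clear N1 from the cluster config
drop node (id = 1, event node = 2);
```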

 

Next, I restored N1 to operational status and want to put it back into the 
cluster and sync it with N2.

1.      What do I need to do to introduce it back into the cluster? (sample 
config or command)


Re-add N1 to the cluster and subscribe it to the existing replication set:
 
slonik_store_node node1
slonik_subscribe_set set1 node1
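
For comparison, a raw-slonik sketch of the same re-add (node and set IDs 
assumed from the setup above; the conninfo strings are the ones from the 
original cluster script):

```
# Re-create node 1, announcing the event through the surviving node 2
store node (id = 1, event node = 2, comment = 'Node 1 - re-added');

# Re-establish the communication paths in both directions
store path (server = 1, client = 2, conninfo = 'host=db01 dbname=testdb user=postgres port=5432 password=xxx');
store path (server = 2, client = 1, conninfo = 'host=db02 dbname=testdb user=postgres port=5432 password=xxx');

# Subscribe node 1 to set 1, with node 2 as the provider
subscribe set (id = 1, provider = 2, receiver = 1, forward = yes);
```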
 

2.      After N1 is fully synced with N2, what do I need to do to 
switch the Master role back to N1? (sample config or command)

Move the origin of the replication set to N1:

slonik_move_set set1 node2 node1 
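
In raw slonik the move looks roughly like this (a sketch; set and node IDs 
assumed from the setup above). Unlike FAILOVER, MOVE SET is a controlled 
switchover, so the current origin is locked against writes first:

```
# Briefly block application writes on the current origin (node 2)
lock set (id = 1, origin = 2);

# Hand the origin role back to node 1; the former origin becomes a subscriber
move set (id = 1, old origin = 2, new origin = 1);
```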

3.      Steps in sequence to execute the above scenario.

         


(The above was written assuming that you use the slon_tools Perl utilities.)




-- 
Filip Rembiałkowski
JID,mailto:[email protected]
http://filip.rembialkowski.net/


_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
