The work-around for old Slony versions:

replication_wait() {
   echo "Waiting until all slaves are in sync with the master...";
   echo "
       `slonik_print_preamble`
        # Hack for old Slony: DROP PATH with SERVER = CLIENT is a dummy
        # operator; it only generates a SYNC event whose ID is saved for
        # the wait below, nothing more.
        DROP PATH (SERVER = $MASTER_NODE_ID, CLIENT = $MASTER_NODE_ID);
        WAIT FOR EVENT (
            ORIGIN = ALL,
            CONFIRMED = ALL,
            WAIT ON = $MASTER_NODE_ID
        );
   " | slonik
   echo "All slaves are in sync.";
}

This script waits until all slaves are in sync with the master.
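For example, a DDL run can be wrapped so that the call returns only after
propagation. A minimal sketch, assuming the same slonik_print_preamble
helper and $MASTER_NODE_ID variable as above; execute_ddl and the file
path are illustrative names, and the set id matches the example quoted
below:

execute_ddl() {
    # Hypothetical wrapper: run the DDL through EXECUTE SCRIPT on the
    # master (the event node), then block until all slaves have
    # confirmed the resulting events.
    local ddl_file="$1"
    echo "
        `slonik_print_preamble`
        EXECUTE SCRIPT (
            SET ID = 1,
            FILENAME = '$ddl_file',
            EVENT NODE = $MASTER_NODE_ID
        );
    " | slonik
    replication_wait
}

execute_ddl '/tmp/my_ddl.sql'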


On 6/1/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:

Hello.

It seems that when I use EXECUTE
SCRIPT <http://slony.info/documentation/stmtddlscript.html> and slonik
reports PGRES_TUPLES_OK, the updates may NOT yet be finished on all
slaves.
I ran a long ALTER TABLE statement (about 3 minutes): the master was
updated immediately after I saw PGRES_TUPLES_OK, but the slave only 10 or
more minutes later.

So, the questions are:

1. THE MAIN question: is it possible to ask slonik to wait until all
schema changes have been propagated to all slaves after a slonik call?

2. If slonik updates the slaves not immediately but via event creation,
why does it still need to know about ALL the database hosts, and not only
about the master database? I have to enumerate all the slave hosts in
slonik calls:

cluster name = my_cluster;
node 1 admin conninfo='host=host1 dbname=m user=slony port=5432 password=**';
node 2 admin conninfo='host=host2 dbname=m user=slony port=5432 password=**';
node 3 admin conninfo='host=host3 dbname=m user=slony port=5432 password=**';
...
execute script (
    set id = 1,
    filename = '/tmp/e0H7Aa03Fh',
    event node = 1
);

But if schema changes are propagated via events, theoretically we only
need to know the master's address...
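
Regarding question 1 above: on newer Slony versions the wait can be
expressed directly in slonik, without the DROP PATH hack shown at the top.
A minimal sketch (preceded by the usual preamble of cluster name and admin
conninfo lines), assuming node 1 is the master as in the example:

    # Generate a SYNC event on node 1 (the master), then block until
    # every node has confirmed it.
    SYNC (ID = 1);
    WAIT FOR EVENT (
        ORIGIN = 1,
        CONFIRMED = ALL,
        WAIT ON = 1
    );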
