Hi Slony community.

I am trying to set up replication between PostgreSQL 8.3.5
and 8.4.1 using Slony1 v. 2.0.3-rc2. I followed the same
procedure with v. 1.2.17-rc2 and it worked fine:

1. Install Slony on the DB from source with the following
   configure command:

   for 8.3.5:
$ ./configure --prefix=/usr/site/postgresql-8.3.5 \
    --with-pgconfigdir=/usr/site/postgresql-8.3.5/bin \
    --with-pgbindir=/usr/site/postgresql-8.3.5/bin \
    --with-pgincludedir=/usr/site/postgresql-8.3.5/include \
    --with-pgincludeserverdir=/usr/site/postgresql-8.3.5/include/server \
    --with-pglibdir=/usr/site/postgresql-8.3.5/lib \
    --with-pgpkglibdir=/usr/site/postgresql-8.3.5/lib \
    --with-pgsharedir=/usr/site/postgresql-8.3.5/share \
    --with-perltools

   for 8.4.1:
$ ./configure --prefix=/usr/site/postgresql-8.4.1 \
    --with-pgconfigdir=/usr/site/postgresql-8.4.1/bin \
    --with-pgbindir=/usr/site/postgresql-8.4.1/bin \
    --with-pgincludedir=/usr/site/postgresql-8.4.1/include \
    --with-pgincludeserverdir=/usr/site/postgresql-8.4.1/include/server \
    --with-pglibdir=/usr/site/postgresql-8.4.1/lib \
    --with-pgpkglibdir=/usr/site/postgresql-8.4.1/lib \
    --with-pgsharedir=/usr/site/postgresql-8.4.1/share \
    --with-perltools

Followed by a "make" and "make install". The files in ./bin ./lib
are updated.
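
As a sanity check (paths here are just the install prefixes from my
configure lines above), I confirmed that the 2.0.3 binaries and the
slony1_funcs shared object landed in each PostgreSQL tree:

    $ /usr/site/postgresql-8.3.5/bin/slon -v
    $ /usr/site/postgresql-8.4.1/bin/slon -v
    $ ls /usr/site/postgresql-8.3.5/lib/slony1_funcs*
    $ ls /usr/site/postgresql-8.4.1/lib/slony1_funcs*

Both slon binaries report version 2.0.3.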

2. Set up the replication using the altperl scripts:

/usr/site/postgresql-8.3.5/bin/slonik_init_cluster | /usr/site/postgresql-8.3.5/bin/slonik
/usr/site/postgresql-8.3.5/bin/slon_start node1
/usr/site/postgresql-8.3.5/bin/slon_start node2
/usr/site/postgresql-8.3.5/bin/slonik_create_set set1 | /usr/site/postgresql-8.3.5/bin/slonik
/usr/site/postgresql-8.3.5/bin/slonik_subscribe_set set1 node2 | /usr/site/postgresql-8.3.5/bin/slonik
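
For reference, the slonik script the subscribe step feeds into slonik
boils down to roughly the following (a sketch from memory; the exact
preamble and options are generated from slon_tools.conf, so details
like the forward setting may differ on your setup):

    cluster name = rt;
    node 1 admin conninfo = 'host=localhost dbname=rt38 user=postgres port=5432';
    node 2 admin conninfo = 'host=192.168.0.2 dbname=rt38replica user=postgres port=5432';
    subscribe set (id = 1, provider = 1, receiver = 2);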

The nodes are defined as follows:

# The 8.3.5 node
    add_node(node     => 1,
             host     => 'localhost',
             dbname   => 'rt38',
             port     => 5432,
             user     => 'postgres',
             password => undef);

# The 8.4.1 node
    add_node(node     => 2,
             host     => '192.168.0.2',
             dbname   => 'rt38replica',
             port     => 5432,
             user     => 'postgres',
             password => undef);

The initial copy takes place, but after that replication appears to
stall. This is the log for the node1 slon process:

2009-09-22 10:12:41 CDT CONFIG main: slon version 2.0.3 starting up
2009-09-22 10:12:41 CDT INFO   slon: watchdog process started
2009-09-22 10:12:41 CDT CONFIG slon: watchdog ready - pid = 29617
2009-09-22 10:12:41 CDT CONFIG main: Integer option vac_frequency = 3
2009-09-22 10:12:41 CDT CONFIG main: Integer option log_level = 0
2009-09-22 10:12:41 CDT CONFIG main: Integer option sync_interval = 1000
2009-09-22 10:12:41 CDT CONFIG main: Integer option sync_interval_timeout = 10000
2009-09-22 10:12:41 CDT CONFIG main: Integer option sync_group_maxsize = 20
2009-09-22 10:12:41 CDT CONFIG main: Integer option desired_sync_time = 60000
2009-09-22 10:12:41 CDT CONFIG main: Integer option syslog = 0
2009-09-22 10:12:41 CDT CONFIG main: Integer option quit_sync_provider = 0
2009-09-22 10:12:41 CDT CONFIG main: Integer option quit_sync_finalsync = 0
2009-09-22 10:12:41 CDT CONFIG main: Integer option sync_max_rowsize = 8192
2009-09-22 10:12:41 CDT CONFIG main: Integer option sync_max_largemem = 5242880
2009-09-22 10:12:41 CDT CONFIG main: Integer option remote_listen_timeout = 300
2009-09-22 10:12:41 CDT CONFIG main: Boolean option log_pid = 0
2009-09-22 10:12:41 CDT CONFIG main: Boolean option log_timestamp = 1
2009-09-22 10:12:41 CDT CONFIG main: Boolean option cleanup_deletelogs = 0
2009-09-22 10:12:41 CDT CONFIG main: Real option real_placeholder = 0.000000
2009-09-22 10:12:41 CDT CONFIG main: String option cluster_name = rt
2009-09-22 10:12:41 CDT CONFIG main: String option conn_info = host=localhost dbname=rt38 user=postgres port=5432
2009-09-22 10:12:41 CDT CONFIG main: String option pid_file = [NULL]
2009-09-22 10:12:41 CDT CONFIG main: String option log_timestamp_format = %Y-%m-%d %H:%M:%S %Z
2009-09-22 10:12:41 CDT CONFIG main: String option archive_dir = [NULL]
2009-09-22 10:12:41 CDT CONFIG main: String option sql_on_connection = [NULL]
2009-09-22 10:12:41 CDT CONFIG main: String option lag_interval = [NULL]
2009-09-22 10:12:41 CDT CONFIG main: String option command_on_logarchive = [NULL]
2009-09-22 10:12:41 CDT CONFIG main: String option syslog_facility = LOCAL0
2009-09-22 10:12:41 CDT CONFIG main: String option syslog_ident = slon
2009-09-22 10:12:41 CDT CONFIG main: String option cleanup_interval = 10 minutes
2009-09-22 10:12:41 CDT CONFIG slon: worker process created - pid = 29620
2009-09-22 10:12:41 CDT CONFIG main: local node id = 1
2009-09-22 10:12:41 CDT INFO   main: main process started
2009-09-22 10:12:41 CDT CONFIG main: launching sched_start_mainloop
2009-09-22 10:12:41 CDT CONFIG main: loading current cluster configuration
2009-09-22 10:12:41 CDT CONFIG storeNode: no_id=2 no_comment='Node 2 - [email protected]'
2009-09-22 10:12:41 CDT CONFIG storePath: pa_server=2 pa_client=1 pa_conninfo="host=192.168.0.2 dbname=rt38replica user=postgres port=5432" pa_connretry=10
2009-09-22 10:12:41 CDT CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
2009-09-22 10:12:41 CDT CONFIG storeSet: set_id=1 set_origin=1 set_comment='Set 1 for rt'
2009-09-22 10:12:41 CDT CONFIG main: last local event sequence = 5000001685
2009-09-22 10:12:41 CDT CONFIG main: configuration complete - starting threads
2009-09-22 10:12:41 CDT INFO   localListenThread: thread starts
2009-09-22 10:12:41 CDT CONFIG version for "host=localhost dbname=rt38 user=postgres port=5432" is 80305
NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=29177
NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=29184
NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=29193
2009-09-22 10:12:41 CDT CONFIG enableNode: no_id=2
2009-09-22 10:12:41 CDT INFO   remoteWorkerThread_2: thread starts
2009-09-22 10:12:41 CDT INFO   remoteListenThread_2: thread starts
2009-09-22 10:12:41 CDT CONFIG cleanupThread: thread starts
2009-09-22 10:12:41 CDT CONFIG cleanupThread: bias = 35383
2009-09-22 10:12:41 CDT INFO   syncThread: thread starts
2009-09-22 10:12:41 CDT INFO   main: running scheduler mainloop
2009-09-22 10:12:41 CDT CONFIG version for "host=localhost dbname=rt38 user=postgres port=5432" is 80305
2009-09-22 10:12:41 CDT CONFIG remoteWorkerThread_2: update provider configuration
2009-09-22 10:12:41 CDT CONFIG version for "host=192.168.0.2 dbname=rt38replica user=postgres port=5432" is 80401
2009-09-22 10:12:41 CDT CONFIG version for "host=localhost dbname=rt38 user=postgres port=5432" is 80305
2009-09-22 10:12:41 CDT CONFIG version for "host=localhost dbname=rt38 user=postgres port=5432" is 80305

If I kill and restart either PostgreSQL 8.4.1 on node2 or the slon
processes, _rt.sl_status on node2 updates exactly once and then stops
again. Does anyone have any ideas? What does "update provider
configuration" mean? Both nodes are running Slony1 2.0.3-rc2, and the
same procedure works without a problem under 1.2.17-rc2.
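
In case it helps anyone reproduce this: the way I am watching the lag
is roughly the query below, run on the origin (rt38). The column names
are the standard ones from the Slony-I sl_status view:

    rt38=# SELECT st_received, st_last_event, st_last_received,
                  st_lag_num_events, st_lag_time
             FROM _rt.sl_status;

After the initial copy, st_last_received stops advancing while
st_last_event keeps climbing.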

Regards,
Ken
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
