With no more information, I gave up on this method, and we are now using the drop node + store node + subscribe approach.

Cyril Scetbon wrote:
Another important piece of information: we are using Slony 1.2.15 (the documentation says Slony has been compatible with PostgreSQL 8.3 since version 1.2.13).

Cyril Scetbon wrote:
FYI,

I did the pg_dump/reload another time and used repair config on this node. Now the provider is stuck on RESET_CONFIG:

2009-08-11 16:54:01 CEST DEBUG1 main: running scheduler mainloop
2009-08-11 16:54:01 CEST DEBUG1 remoteListenThread_104: connected to 'host=slave-db02.profiles.qualif.pns.b3.p.fti.net dbname=pns_voila_preprod user=pns_slony_voila_preprod port=5432 password=pns_v0!la_sl0ny_pr3pr0d'
2009-08-11 16:54:01 CEST DEBUG1 remoteListenThread_103: connected to 'host=master-db02.profiles.qualif.pns.b3.p.fti.net dbname=pns_voila_preprod user=pns_slony_voila_preprod port=5432 password=pns_v0!la_sl0ny_pr3pr0d'
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_103: queue event 103,477 SYNC
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: Received event 103,477 SYNC
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: SYNC 477 processing
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: no sets need syncing for this event
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: forward confirm 101,113409 received by 104
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: forward confirm 101,112143 received by 102
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: forward confirm 101,114448 received by 103
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: forward confirm 103,207 received by 104
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_103: forward confirm 104,58 received by 103
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 103,477 SYNC
2009-08-11 16:54:01 CEST DEBUG2 remoteWorker_event: event 103,477 ignored - duplicate
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,59 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,60 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteWorkerThread_104: Received event 104,59 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,61 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,62 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,63 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,64 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,65 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,66 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,67 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,68 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,69 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,70 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,71 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,72 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,73 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,74 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,75 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,76 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,77 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,78 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,79 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,80 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,81 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 remoteListenThread_104: queue event 104,82 RESET_CONFIG
2009-08-11 16:54:01 CEST DEBUG2 slon: child terminated status: 11; pid: 24444, current worker pid: 24444
2009-08-11 16:54:01 CEST DEBUG1 slon: restart of worker in 10 seconds

and I really don't know what's happening! The other nodes are waiting for a sync of the sets (which are being updated):

2009-08-11 16:54:49 CEST DEBUG2 calc sync size - last time: 1 last length: 6217 ideal: 3 proposed size: 3
2009-08-11 16:54:49 CEST DEBUG2 remoteWorkerThread_103: SYNC 482 processing
2009-08-11 16:54:49 CEST DEBUG2 remoteWorkerThread_103: no sets need syncing for this event
2009-08-11 16:54:49 CEST DEBUG2 syncThread: new sl_action_seq 1 - SYNC 223
2009-08-11 16:54:51 CEST DEBUG2 localListenThread: Received event 104,223 SYNC
2009-08-11 16:54:54 CEST DEBUG2 remoteListenThread_101: LISTEN
2009-08-11 16:54:54 CEST DEBUG2 remoteWorkerThread_101: forward confirm 103,482 received by 101
2009-08-11 16:54:59 CEST DEBUG2 syncThread: new sl_action_seq 1 - SYNC 224
2009-08-11 16:55:00 CEST DEBUG2 remoteListenThread_103: queue event 103,483 SYNC
2009-08-11 16:55:00 CEST DEBUG2 remoteListenThread_103: UNLISTEN
2009-08-11 16:55:00 CEST DEBUG2 remoteWorkerThread_103: Received event 103,483 SYNC
2009-08-11 16:55:00 CEST DEBUG2 calc sync size - last time: 1 last length: 11011 ideal: 1 proposed size: 1
2009-08-11 16:55:00 CEST DEBUG2 remoteWorkerThread_103: SYNC 483 processing
2009-08-11 16:55:00 CEST DEBUG2 remoteWorkerThread_103: no sets need syncing for this event
2009-08-11 16:55:01 CEST DEBUG2 localListenThread: Received event 104,224 SYNC
2009-08-11 16:55:03 CEST DEBUG2 remoteWorkerThread_101: forward confirm 103,483 received by 101
2009-08-11 16:55:04 CEST DEBUG2 remoteListenThread_103: LISTEN
2009-08-11 16:55:09 CEST DEBUG2 remoteListenThread_103: queue event 103,484 SYNC
2009-08-11 16:55:09 CEST DEBUG2 remoteListenThread_103: UNLISTEN
2009-08-11 16:55:09 CEST DEBUG2 remoteWorkerThread_103: Received event 103,484 SYNC
2009-08-11 16:55:09 CEST DEBUG2 calc sync size - last time: 1 last length: 8989 ideal: 2 proposed size: 2
2009-08-11 16:55:09 CEST DEBUG2 remoteWorkerThread_103: SYNC 484 processing
2009-08-11 16:55:09 CEST DEBUG2 remoteWorkerThread_103: no sets need syncing for this event
2009-08-11 16:55:09 CEST DEBUG2 syncThread: new sl_action_seq 1 - SYNC 225
2009-08-11 16:55:11 CEST DEBUG2 localListenThread: Received event 104,225 SYNC
2009-08-11 16:55:13 CEST DEBUG2 remoteListenThread_103: LISTEN
2009-08-11 16:55:13 CEST DEBUG2 remoteWorkerThread_101: forward confirm 103,484 received by 101
2009-08-11 16:55:19 CEST DEBUG2 remoteListenThread_103: queue event 103,485 SYNC
2009-08-11 16:55:19 CEST DEBUG2 remoteListenThread_103: UNLISTEN
2009-08-11 16:55:19 CEST DEBUG2 remoteWorkerThread_103: Received event 103,485 SYNC
2009-08-11 16:55:19 CEST DEBUG2 calc sync size - last time: 1 last length: 10003 ideal: 1 proposed size: 1
2009-08-11 16:55:19 CEST DEBUG2 remoteWorkerThread_103: SYNC 485 processing
2009-08-11 16:55:19 CEST DEBUG2 remoteWorkerThread_103: no sets need syncing for this event
2009-08-11 16:55:19 CEST DEBUG2 syncThread: new sl_action_seq 1 - SYNC 226
2009-08-11 16:55:21 CEST DEBUG2 localListenThread: Received event 104,226 SYNC
2009-08-11 16:55:23 CEST DEBUG2 remoteListenThread_101: LISTEN
2009-08-11 16:55:24 CEST DEBUG2 remoteWorkerThread_101: forward confirm 103,485 received by 101

Any help?

Should I really use the update function on the upgraded node? On every node?

Cyril Scetbon wrote:
I had to submit it to the same node it was applied on. I'm still getting some errors and investigating...

Glyn Astill wrote:
I'm not sure why repair config isn't working for you; however, if you're looking for the bare-metal function, it is updatereloid(set, node).
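For reference, an invocation would look something like the following. The cluster schema name here is borrowed from the sl_table query elsewhere in this thread, and the (set, node) argument order follows the description above, so verify both against your installation before running anything:

```sql
-- Hypothetical call: remap the stored OIDs for set 1 on node 103.
-- The schema name and argument order are assumptions; check your cluster.
SELECT _pns_slony_voila_preprod_1.updatereloid(1, 103);
```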





----- Original Message ----
From: Cyril Scetbon <[email protected]>
To: chris <[email protected]>; [email protected]
Sent: Tuesday, 11 August, 2009 10:48:18
Subject: Re: [Slony1-general] Replacing PostgreSQL 8.2 by Pg 8.3 does not work

Ouch, I can't dump the table OIDs (I had confused table OIDs with column OIDs). I'll look at the slonik code for the repair command.

Cyril Scetbon wrote:
You are right!

mydb=# select oid,relname from pg_class where relname = 't_512';
 oid  | relname
-------+---------
69187 | t_512
(1 row)

mydb=# select * from _pns_slony_voila_preprod_1.sl_table where tab_relname='t_512';
 tab_id | tab_reloid | tab_relname | tab_nspname | tab_set | tab_idxname | tab_altered |             tab_comment
--------+------------+-------------+-------------+---------+-------------+-------------+-------------------------------------
    512 |      24638 | t_512       | public      |       1 | t_512_pkey  | t           | Table public.t_512 with primary key
(1 row)
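So pg_class now reports OID 69187 for t_512, while sl_table still carries the pre-reload OID 24638. Conceptually, the repair boils down to rewriting tab_reloid from the current catalog. A sketch of that remapping, assuming the cluster schema name shown above (the supported route remains REPAIR CONFIG / updatereloid, and sl_sequence would need the same treatment for sequences):

```sql
-- Sketch only: refresh each replicated table's stored OID from pg_class.
-- The schema name is assumed from the query above.
UPDATE _pns_slony_voila_preprod_1.sl_table t
SET tab_reloid = c.oid
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = t.tab_relname
  AND n.nspname = t.tab_nspname;
```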

But repair does not work correctly, and I can't debug it (I tried looking in the PostgreSQL query log, but found nothing).
I'll try dumping/reloading with OIDs. Any idea why the repair command does not work correctly? I don't see any updates on the node I want repaired. I used the following script:
echo > /var/tmp/repair.sql
slonik_print_preamble --config /etc/slony1/slon_tools_mydb.conf >> /var/tmp/repair.sql
for set in `seq 1 31`
do
echo "REPAIR CONFIG (SET ID = $set, EVENT NODE = 101, EXECUTE ONLY ON = 103);" >> /var/tmp/repair.sql
done
slonik < /var/tmp/repair.sql

I got no errors, but nothing seems to have been done.

thx

chris wrote:
Cyril Scetbon writes:
Alan Hodgson wrote:
On Monday 10 August 2009, Cyril Scetbon wrote:
However, when slony is started with pg 8.3, it does not see new events from its provider (still in pg 8.3).
If we restart our pg 8.2 with slony, it works!

Do you know what we are missing ?

thx
You need to modify all the table and sequence OIDs stored in the
slony configuration tables to reflect the new table and sequence
OIDs.
I don't think OIDs are used, and the table IDs were not modified.
No, Alan's quite right.

If you look at sl_table and sl_sequence, you'll find "reloid" columns,
which are indeed relevant.
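A quick way to spot the stale entries is to compare the stored reloids against the current catalog. A hedged sketch, where _mycluster stands in for the actual Slony schema name:

```sql
-- List replicated tables whose stored OID no longer matches pg_class
-- (_mycluster is a placeholder for the real cluster schema name).
SELECT t.tab_id, t.tab_nspname, t.tab_relname, t.tab_reloid, c.oid AS current_oid
FROM _mycluster.sl_table t
JOIN pg_namespace n ON n.nspname = t.tab_nspname
JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = t.tab_relname
WHERE t.tab_reloid <> c.oid;
```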

You need to update the slony functions through the appropriate slon commands.
Really?
The slonik "repair config" command should be useful.



"Resets the name-to-oid mapping of tables in a replication set, useful
for restoring a node after a pg_dump."
-- Cyril SCETBON
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general







