Hi,

We are trying to upgrade our PG from 8.3 to 9.3 with Slony 2.2.3.

The complete database (21,000 tables) is now being synchronized with Slony,
and I would like to stop replication so that another team can test the
application against the new 9.3 replica. They will need several hours to
test the application. Once I get their green light, I will repeat the
process and replicate the database from scratch.

I'm trying to unsubscribe a receiver from the master set, but I'm getting an
"out of shared memory" error:

cat unsubscribe2.sl
cluster name = slcluster;
node 1 admin conninfo = 'dbname=FiscalWeb host=192.168.23.10 user=slonyo';
node 2 admin conninfo = 'dbname=FiscalWeb host=192.168.23.11 user=slonyn';
unsubscribe set ( id = 1 , receiver = 2);

slonik < unsubscribe2.sl
<stdin>:4: WARNING:  out of shared memory
CONTEXT:  SQL statement "drop trigger "_slcluster_logtrigger" on
"8359_wsn"."tbpgdas01502""
PL/pgSQL function altertabledroptriggers(integer) line 47 at EXECUTE
statement
SQL statement "SELECT "_slcluster".alterTableDropTriggers(v_tab_row.tab_id)"
PL/pgSQL function unsubscribeset(integer,integer,boolean) line 49 at PERFORM
<stdin>:4: PGRES_FATAL_ERROR lock table "_slcluster".sl_event_lock,
"_slcluster".sl_config_lock;select "_slcluster".unsubscribeSet(1,
2,false);  - ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
CONTEXT:  SQL statement "drop trigger "_slcluster_logtrigger" on
"8359_wsn"."tbpgdas01502""
PL/pgSQL function altertabledroptriggers(integer) line 47 at EXECUTE
statement
SQL statement "SELECT "_slcluster".alterTableDropTriggers(v_tab_row.tab_id)"
PL/pgSQL function unsubscribeset(integer,integer,boolean) line 49 at PERFORM
[postgres@00002-NfseNet-SGDB reimer]$ ^C

max_locks_per_transaction was set to 255. I have changed it to 4096 and will
restart the database during the nightly change window.
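For reference, my understanding of the lock arithmetic (assuming standard
PostgreSQL lock accounting; max_connections = 100 is only an illustrative
value, not our actual setting):

-- Checked the current settings from psql:
SHOW max_locks_per_transaction;
SHOW max_connections;

-- The shared lock table holds roughly
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- entries for all sessions combined. unsubscribeSet() drops the log trigger
-- on every one of the 21,000 tables inside a single transaction, so that one
-- transaction needs on the order of 21,000 lock slots at once. With
-- 255 * (100 + 0) = 25,500 slots shared by all sessions, that is marginal;
-- 4096 * 100 = 409,600 should leave plenty of headroom.

-- postgresql.conf (the change only takes effect after a server restart):
max_locks_per_transaction = 4096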

I tried to repeat the process, and now I always get the same error:
[postgres@00002-NfseNet-SGDB reimer]$ slonik < unsubscribe2.sl
<stdin>:4: PGRES_FATAL_ERROR lock table "_slcluster".sl_event_lock,
"_slcluster".sl_config_lock;select "_slcluster".unsubscribeSet(1,
2,false);  - ERROR:  deadlock detected
DETAIL:  Process 15366 waits for AccessExclusiveLock on relation 29564 of
database 16384; blocked by process 14994.
Process 14994 waits for RowExclusiveLock on relation 84222 of database
16384; blocked by process 15366.
HINT:  See server log for query details.
CONTEXT:  SQL statement "lock table "7481_spunico"."sincdc" in access
exclusive mode"
PL/pgSQL function altertabledroptriggers(integer) line 42 at EXECUTE
statement
SQL statement "SELECT "_slcluster".alterTableDropTriggers(v_tab_row.tab_id)"
PL/pgSQL function unsubscribeset(integer,integer,boolean) line 49 at PERFORM
[postgres@00002-NfseNet-SGDB reimer]$

The two pids reported by the deadlock message are probably temporary
processes created by Slony, as I did not find them in the system.

Will increasing max_locks_per_transaction and restarting the server fix this
issue?

Thank you!

-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
_______________________________________________
Slony1-general mailing list
Slony1-general@lists.slony.info
http://lists.slony.info/mailman/listinfo/slony1-general
