2007-03-01 16:43:10 COT FATAL Do you already have a slon running
against this node?
2007-03-01 16:43:10 COT FATAL Or perhaps a residual idle backend
connection from a dead slon?
Well, do you? Running ps axwww as your postmaster user (typically
postgres) should show you what is connected. If you followed best
practices and created a separate slony user for your slons to run as,
the stray connection should be very easy to spot.
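A rough sketch of what I'd run (just a sketch; the host/port placeholders
are the same elided ones from your log, and the pg_stat_activity column
names below are the 8.0-era ones, procpid rather than pid):

  # any leftover slon processes on the box?
  ps axwww | grep slon

  # which backends are still connected to the replica database?
  psql -h 192.168.x.y -p xxxx -U postgres -d transportenet \
    -c "SELECT procpid, usename, current_query FROM pg_stat_activity;"

  # which backend pid is still holding the node lock?
  psql -h 192.168.x.y -p xxxx -U postgres -d transportenet \
    -c 'SELECT * FROM "_cluster_transportenet".sl_nodelock;'

  # if that pid belongs to a dead slon's leftover backend, kill that
  # backend on the database server, then restart slon
  kill <pid>

Once the stale backend is gone, the cleanupNodelock()/insert that slon
runs at startup should go through and the duplicate key error should
disappear.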
Andrew
On 3/1/07, Zagato <[EMAIL PROTECTED]> wrote:
> Hello.
>
> I have a problem starting up my slon process on my backup node. It was
> working fine, but after a power failure it gives me:
>
> 2007-03-01 16:43:09 COT CONFIG main: slon version 1.2.2 starting up
> 2007-03-01 16:43:10 COT DEBUG2 slon: watchdog process started
> 2007-03-01 16:43:10 COT DEBUG2 slon: watchdog ready - pid = 6155
> 2007-03-01 16:43:10 COT DEBUG2 slon: worker process created - pid = 6156
> 2007-03-01 16:43:10 COT CONFIG main: local node id = 2
> 2007-03-01 16:43:10 COT DEBUG2 main: main process started
> 2007-03-01 16:43:10 COT CONFIG main: launching sched_start_mainloop
> 2007-03-01 16:43:10 COT CONFIG main: loading current cluster configuration
> 2007-03-01 16:43:10 COT CONFIG storeNode: no_id=1 no_comment='Master Node'
> 2007-03-01 16:43:10 COT DEBUG2 setNodeLastEvent: no_id=1 event_seq=185271
> 2007-03-01 16:43:10 COT CONFIG storePath: pa_server=1 pa_client=2
> pa_conninfo="dbname=xxx host=192.168.x.y port=xxxx user=xxxx" pa_connretry=
> 10
> 2007-03-01 16:43:10 COT CONFIG storeListen: li_origin=1 li_receiver=2
> li_provider=1
> 2007-03-01 16:43:10 COT CONFIG storeSet: set_id=1 set_origin=1
> set_comment='All pgbench tables'
> 2007-03-01 16:43:10 COT WARN remoteWorker_wakeup: node 1 - no worker
> thread
> 2007-03-01 16:43:10 COT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads +
> worker signaled)
> 2007-03-01 16:43:10 COT CONFIG storeSubscribe: sub_set=1 sub_provider=1
> sub_forward='t'
> 2007-03-01 16:43:10 COT WARN remoteWorker_wakeup: node 1 - no worker
> thread
> 2007-03-01 16:43:10 COT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads +
> worker signaled)
> 2007-03-01 16:43:10 COT CONFIG enableSubscription: sub_set=1
> 2007-03-01 16:43:10 COT WARN remoteWorker_wakeup: node 1 - no worker
> thread
> 2007-03-01 16:43:10 COT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads +
> worker signaled)
> 2007-03-01 16:43:10 COT DEBUG2 main: last local event sequence = 184505
> 2007-03-01 16:43:10 COT CONFIG main: configuration complete - starting
> threads
> 2007-03-01 16:43:10 COT DEBUG1 localListenThread: thread starts
> 2007-03-01 16:43:10 COT DEBUG4 version for "dbname=transportenet
> user=postgres host=192.168.x.y port=xxxx" is 80003
> NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=3567
> 2007-03-01 16:43:10 COT FATAL localListenThread: "select
> "_cluster_transportenet".cleanupNodelock(); insert into
> "_cluster_transportenet".sl_nodelock values ( 2, 0,
> "pg_catalog".pg_backend_pid()); " - ERROR: duplicate key
> violates unique constraint "sl_nodelock-pkey"
>
> 2007-03-01 16:43:10 COT FATAL Do you already have a slon running against
> this node?
> 2007-03-01 16:43:10 COT FATAL Or perhaps a residual idle backend connection
> from a dead slon?
> 2007-03-01 16:43:10 COT DEBUG2 slon_abort() from pid=6156
> 2007-03-01 16:43:10 COT DEBUG1 slon: shutdown requested
> 2007-03-01 16:43:10 COT DEBUG2 slon: notify worker process to shutdown
> 2007-03-01 16:43:30 COT DEBUG1 slon: child termination timeout - kill child
> 2007-03-01 16:43:30 COT DEBUG2 slon: child terminated status: 9; pid: 6156,
> current worker pid: 6156
> 2007-03-01 16:43:30 COT DEBUG1 slon: done
> 2007-03-01 16:43:30 COT DEBUG2 slon: exit(0)
>
> Can someone tell me how to unlock the process?
>
> I really appreciate any help... Thanks :D
>
> --
> Farewell.
> ruby << __EOF__
> puts [ 111, 116, 97, 103, 97, 90 ].collect{|v| v.chr}.join.reverse
> __EOF__
_______________________________________________
Slony1-general mailing list
[email protected]
http://gborg.postgresql.org/mailman/listinfo/slony1-general