François Girault <[EMAIL PROTECTED]> writes:
> I am having trouble with a simple two-node cluster. I have a set, id=3,
> that cannot be copied to a slave node because of this error:
>
> 2007-09-13 17:07:23 CEST DEBUG1 copy_set 3
> 2007-09-13 17:07:23 CEST ERROR remoteWorkerThread_1: node -1 not
> found in runtime configuration
> 2007-09-13 17:07:23 CEST WARN remoteWorkerThread_1: data copy for
> set 3 failed - sleep 60 seconds
>
> The strangest thing: even when I drop set 3, the log message
> persists! I don't know what to do, and I can't uninstall everything
> and rebuild the cluster.
Generally, this has to do with the slon process being a bit confused
as to the runtime configuration *in memory.*
Restart the slon, and this ought to alleviate the problem.
It strikes me that maybe we should have logic in the remoteWorker
thread which notices "Oh, it's looking for node -1, which *obviously*
doesn't exist. Maybe I should fall over and let another slon take
over..."
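Just to illustrate the shape of such a check (a standalone sketch, not
the actual slon source; the function names here are made up), the remote
worker could validate the node id it is handed and bail out when it sees
something impossible, letting the watchdog restart it with a fresh
in-memory configuration:

    /*
     * Hypothetical sketch only -- not the real Slony-I code.  It shows
     * the kind of sanity check suggested above: if a remote worker is
     * handed an obviously invalid node id (such as -1), log it and
     * exit so the watchdog can restart the slon.
     */
    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder for a lookup into the runtime configuration. */
    static int node_exists_in_runtime_config(int no_id)
    {
        /* A real slon would consult its in-memory node list here. */
        return no_id > 0;   /* node ids are positive in a sane config */
    }

    static void remote_worker_check_node(int no_id)
    {
        if (!node_exists_in_runtime_config(no_id))
        {
            fprintf(stderr,
                    "remoteWorkerThread: node %d not found in runtime "
                    "configuration - exiting so the watchdog can restart\n",
                    no_id);
            exit(EXIT_FAILURE);
        }
    }

    int main(void)
    {
        remote_worker_check_node(3);    /* plausible node id: passes    */
        remote_worker_check_node(-1);   /* bogus node id: forces a stop */
        return 0;
    }

Falling over like that is cruder than repairing the configuration in
place, but it at least avoids the current behaviour of retrying the same
doomed copy every 60 seconds.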
--
output = ("cbbrowne" "@" "cbbrowne.com")
http://cbbrowne.com/info/slony.html
E.V.A., pod 5, launching...