Hi Jan,
Unfortunately, it was down for around ten days, and there's a lot of data
being replicated (some of which includes potentially large blobs).
Sorry, I can't tell how many events it is behind, as I don't have remote
access to the site.
At present I'm treating this as a learning experience and will be
putting monitoring in place to ensure there is plenty of noise if the
slon processes are not running...
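One way to generate that noise is a small watchdog that checks whether any slon processes are running. A minimal sketch in Python, assuming a Unix-like system where `ps -eo pid,comm` lists process names (the alerting action is just a placeholder print; a real deployment would page or email):

```python
import subprocess

def slon_pids(ps_output: str) -> list:
    """Extract PIDs of slon processes from `ps -eo pid,comm` output."""
    pids = []
    for line in ps_output.splitlines():
        parts = line.split()
        # First column is the PID, second is the command name.
        if len(parts) >= 2 and parts[1] == "slon":
            pids.append(int(parts[0]))
    return pids

def check_slon():
    """Run ps and make noise if no slon process is found."""
    out = subprocess.run(["ps", "-eo", "pid,comm"],
                         capture_output=True, text=True).stdout
    if not slon_pids(out):
        print("ALERT: no slon processes running")  # replace with real alerting
```

Run `check_slon()` from cron every few minutes; anything that pages on the alert line would have caught the ten-day outage within minutes.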
Thanks,
Peter
Jan Wieck wrote:
On 12/7/2005 9:23 PM, Peter Davie wrote:
Hi All,
Using Slony1 version 1.1.0 at a customer site, the slon daemons fell
over on one of the customer's slave servers (and nobody noticed!). On
restarting the slon processes, an error is now generated because slon
attempts to malloc memory to record all of the outstanding transactions
and runs out of memory. Is there any way to resolve this, or will I
just have to uninstall the slave and resubscribe (which is my current
plan)?
This node must have been down for quite some time. A SYNC event in the
remote_worker queue takes roughly 200 bytes. How many million events is
this node behind? You can tell by looking at sl_status.
And don't forget to VACUUM FULL ANALYZE that database after you've
dropped that node.
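Jan's 200-bytes-per-SYNC figure allows a quick back-of-envelope check on whether catching up is even feasible before restarting slon. A minimal sketch (the per-event size comes from Jan's estimate above; the 5-million-event count is a hypothetical placeholder, to be replaced with the real lag, which sl_status reports in its event-lag column):

```python
SYNC_EVENT_BYTES = 200  # approximate per-event cost in the remote_worker queue

def backlog_memory_mb(events_behind: int) -> float:
    """Rough memory slon needs to queue `events_behind` SYNC events."""
    return events_behind * SYNC_EVENT_BYTES / (1024 * 1024)

# e.g. a node 5 million events behind (hypothetical number):
print(round(backlog_memory_mb(5_000_000)))  # -> 954 (MB)
```

If the estimate lands anywhere near the machine's available memory, dropping the node and resubscribing, as Peter plans, is likely the faster path.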
Jan
--
Relevance... because results count
Unit 11, Cnr Brookes & Heffernan Sts, Mitchell ACT 2911
A.B.N. 86 063 403 690
Phone: +61 (0)2 6241 6400
Fax: +61 (0)2 6241 6422
Mobile: +61 (0)417 265 175
E-mail: [EMAIL PROTECTED]
Web: http://www.relevance.com.au
_______________________________________________
Slony1-general mailing list
[email protected]
http://gborg.postgresql.org/mailman/listinfo/slony1-general