Hello.


I am working with a SuSE machine (cat /etc/issue: SUSE Linux Enterprise Server
11 SP1 (i586)) running PostgreSQL 8.1.3 and the Slony-I replication system
(slon version 1.1.5).  We have a working replication setup between the
databases on this server, which generates the log shipping files to be sent to
the remote machines we are tasked to maintain.  The cluster itself is set up
around three nodes (Node 1, Node 2, Node 3) - Node 1 is the master, and Node 3
the slave.  Node 2 appears to be a leftover from the original build done by my
predecessor; that database is blank.
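
For reference, the log shipping side is driven by the slon daemon's archive
option; the origin node's slon is started roughly like the sketch below (the
cluster name, archive path, and conninfo are placeholders, not our real
values):

    # Sketch of a slon invocation with log shipping enabled; -a names the
    # directory where the log shipping files get written.  The cluster name,
    # path, and connection string are stand-ins for our real configuration.
    slon -a /var/lib/slony/archive mycluster \
         "dbname=mydb host=localhost user=slony"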



This morning, we ran into a problem with this setup.



For a while now, we've had strange memory problems on this machine - the
OOM killer seems to strike even when there is plenty of free memory left.
That set the stage for our current issue: last night we ran a massive update
on our system while replication was turned off.  Now, as things stand, we
cannot replicate the changes out - slon is attempting to compile all the
changes into a single massive log shipping file, and after about half an hour
of running it trips the OOM killer, which kills and restarts the replication
daemon.  Since it then starts rebuilding that same file from scratch, it
never gets anywhere.
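
In case it matters for diagnosis: since this is a 32-bit (i586) kernel, my
unconfirmed guess is that we're exhausting lowmem or tripping the overcommit
heuristics rather than truly running out of memory.  I've been watching it
with roughly the following:

    # Kernel overcommit policy; with the default heuristic mode the OOM
    # killer can fire even while MemFree looks healthy.
    sysctl vm.overcommit_memory vm.overcommit_ratio

    # On a 32-bit kernel, lowmem can run dry while highmem is still free.
    grep -E 'MemFree|LowFree|HighFree' /proc/meminfo

    # The kills themselves show up in the kernel log.
    dmesg | grep -i 'oom\|out of memory'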



My first question is this: is there a way to cap the size of Slony log
shipping files, so that slon writes out no more than 'X' bytes (or KB, or MB,
etc.) and, after going over that size, closes the current log shipping file
and starts a new one?  With fair regularity we get to about four megabytes
before the OOM killer hits, so if I could cap the files there, I could at
least start generating the smaller files and hopefully, eventually, get
through this.
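
The closest knob I've found so far is slon's sync group size; my unverified
understanding is that shrinking it should shrink each shipped file, since
fewer SYNC events get bundled per file - something like:

    # Restart the origin's slon with a minimal SYNC group size (-g 1), so
    # each log shipping file covers one SYNC instead of a batch.  Names and
    # paths are placeholders, as above.
    slon -g 1 -a /var/lib/slony/archive mycluster \
         "dbname=mydb host=localhost user=slony"

Though if the whole update landed in a single SYNC while the slons were down,
I suspect no amount of grouping will split it - hence the question.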



My second question, I guess, is this: does anyone have a better solution for
this issue than the one I'm asking about?  It's quite possible I'm getting
tunnel vision looking at the problem, and all I really need is *a* solution,
not necessarily *my* solution.