Richard Yen <[EMAIL PROTECTED]> writes:
> Thanks for the tips.
>
> Ran the vacuumdb a few times, and looks like the sl_log_1 table is
> the one bloating.
>
> Looks like sl_log_1 has just about as many dead rows as live rows
> over a 10 minute period (in the 20K range). sl_confirm gets about a
> thousand rows, and sl_seqlog gets ~60 live rows, and just about as
> many dead rows during this period.
>
> Like you said, perhaps the confirms aren't getting through, but I'm
> not sure how to remedy that.

Actually, based on the output of the vacuums, I don't see much that
needs to be rectified.

There is NOT a problem with confirms not getting through; if that were
the case, then you'd see sl_log_1 increasing steadily in size, with NO
dead tuples to be cleaned out.  The fact that sl_log_1 has tens of
thousands of dead rows, and the vacuum is removing stuff, means that,
at least in that regard, all is well.
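
If you want to double-check the confirmations anyway, something along
these lines (a sketch only; substitute your actual cluster schema for
"_mycluster", and note that the n_live_tup/n_dead_tup/last_vacuum
columns only exist in newer PostgreSQL releases) will show the newest
event each receiver has confirmed, plus the stats collector's view of
the Slony tables:

    -- Newest confirmation from each receiver, per origin node
    SELECT con_origin, con_received,
           max(con_seqno)     AS last_confirmed_event,
           max(con_timestamp) AS last_confirmed_at
      FROM _mycluster.sl_confirm
     GROUP BY con_origin, con_received
     ORDER BY con_origin, con_received;

    -- Live/dead tuple counts and last vacuum time for the Slony tables
    SELECT schemaname, relname, n_live_tup, n_dead_tup, last_vacuum
      FROM pg_stat_all_tables
     WHERE relname IN ('sl_log_1', 'sl_confirm', 'sl_seqlog');

If the max(con_seqno) values keep advancing for every receiver, the
confirms are definitely getting through.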

It's plausible that more frequent runs of the *entire* cleanup loop
would draw the tables down a bit smaller, but based on the vacuum
output, it doesn't look like anything is really wrong.
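
If you do want to draw the tables down a bit between Slony's own
cleanup cycles, a manual vacuum of the housekeeping tables is
harmless.  A sketch, again assuming the cluster schema is called
"_mycluster":

    -- Vacuum/analyze the Slony housekeeping tables by hand;
    -- replace "_mycluster" with your actual cluster schema name
    VACUUM ANALYZE _mycluster.sl_log_1;
    VACUUM ANALYZE _mycluster.sl_confirm;
    VACUUM ANALYZE _mycluster.sl_seqlog;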

The only thing that "smells funny" is that the vacuums are taking
~200s, which looks more like I/O starvation to me; it's possible that
you simply need more disk hardware to get good performance out of
your application.
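
One thing worth ruling out before buying more disk: if cost-based
vacuum delay is turned on (PostgreSQL 8.0 and later), vacuums get
deliberately throttled and will look slow even when the disks are
fine.  A quick check, purely as a sketch:

    -- A non-zero vacuum_cost_delay means vacuum is intentionally
    -- throttled, which can stretch out vacuum times on its own
    SHOW vacuum_cost_delay;
    SHOW vacuum_cost_limit;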

I know Vivek Khera ran into this scenario; he had a system that was so
nearly "pegged" without replication that the added load of replication
was just too much.  Replication isn't infinitely expensive, but it
certainly does add some load to the system...
-- 
let name="cbbrowne" and tld="ca.afilias.info" in String.concat "@" [name;tld];;
<http://dba2.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)