On Wednesday 07 February 2007 10:03 am, Andrew Sullivan wrote:
> Running a vacuum analyse on a weekly basis not linked to replacing all
> those rows will indeed make performance crawl.  I actually suggest you
> look at autovacuum, but if you don't like that option, I think you
> need to look at which tables are getting changed a lot, and vacuum
> them _way_ more often.  In particular, given the number of rows you
> change in each transaction, why not schedule a vacuum of the relevant
> tables between each COMMIT; BEGIN in your inventory procedure?
>
> A
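
(For anyone following along, Andrew's suggestion would look roughly like the 
sketch below, where "inventory" is just a stand-in name for whichever table 
churns the most.  Note that VACUUM can't run inside a transaction block, so 
it has to sit between the COMMIT and the next BEGIN:)

    COMMIT;                     -- end of one batch
    -- vacuum only the heavily-churned table, not the whole database;
    -- VACUUM must be issued outside a transaction block
    VACUUM ANALYZE inventory;
    BEGIN;                      -- start the next batch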

        Well, we're talking about a table that presently has 114+ million rows.  
I believe it takes quite a while to run each vacuum, which may greatly 
hinder our inventory system.  As for autovacuum, I remember that causing 
some issues, especially when we're rolling out a big change: sometimes it's 
forgotten, and we run into the problem of having to kill the site when 
Slony starts trying to take its exclusive lock while the vacuum blocks 
it... 
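
(When that bites, a quick look at pg_locks shows the ungranted lock request 
queued up behind the vacuum.  A sketch; the join to pg_class is only there 
to get readable relation names:)

    -- lock requests still waiting (not granted), e.g. Slony's exclusive
    -- lock stuck behind a long-running VACUUM
    SELECT l.pid, l.mode, c.relname
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE NOT l.granted;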

        Actually, I may try doing some time trials on the slave database, just 
to see how much time it'll take to vacuum the big table several times in a 
row.  I recall the vacuums running much faster when they're run very 
frequently, since each pass has far fewer dead rows to clean up.
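
(If I do the trials, psql's \timing makes it trivial; again, "inventory" is 
a stand-in name.  The first pass should do the bulk of the work, and the 
immediate repeats should report far lower times:)

    \timing
    VACUUM ANALYZE inventory;  -- first pass: clears the backlog of dead rows
    VACUUM ANALYZE inventory;  -- repeat passes: little left to do
    VACUUM ANALYZE inventory;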

-- 
Best Regards,


Dan Falconer
"Head Geek",
AvSupport, Inc. (http://www.partslogistics.com)