--- [EMAIL PROTECTED] wrote:
> I gather you mean, out-of-the-ordinary for most
> apps, but not for this client?

Actually, no.  The normal activity is to delete 3-5%
of the rows per day, followed by a VACUUM ANALYZE.
Then over the course of the day (in multiple
transactions) about the same number of rows are
INSERTed, with each transaction followed by a VACUUM
ANALYZE on just the updated table.  So deleting 75%
of the rows is out of the ordinary for this app.  It
is, however, a legitimate action for the client to
take on occasion, and it would be nice if they didn't
have to remember to run REINDEX or CLUSTER just on
those occasions.
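
For concreteness, each day looks roughly like this
(the table name and WHERE clause here are made up;
note that VACUUM has to run outside the transaction
block):

    BEGIN;
    -- deletes roughly 3-5% of the rows
    DELETE FROM big_table
        WHERE created < now() - interval '30 days';
    COMMIT;
    VACUUM ANALYZE big_table;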
 
> In case nobody else has asked: is your max_fsm_pages
> big enough to handle all
> the deleted pages, across ALL tables hit by the
> purge? If not, you're
> haemorrhaging pages, and VACUUM is probably warning
> you about exactly that.

This parameter is most likely set incorrectly, so it
could well be causing problems.  Could it be the
culprit for the index bloat, though?
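
For what it's worth, a quick way to check (on servers
that have this GUC; the exact wording of the output
varies by version):

    SHOW max_fsm_pages;

    -- A database-wide VACUUM VERBOSE ends with an INFO
    -- summary of free-space-map usage: how many page
    -- slots are stored versus how many are needed.
    VACUUM VERBOSE;

If the "needed" figure exceeds max_fsm_pages, the fix
is to raise it in postgresql.conf and restart the
server, since the FSM lives in shared memory.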

> If that's not a problem, you might want to consider
> partitioning the data.
> Take a look at inherited tables. For me, they're a
> good approximation of
> clustered indexes (sigh, miss'em) and equivalent to
> table spaces.
> 
> My app is in a similar boat to yours: up to 1/3 of a
> 10M-row table goes away
> every day. For each of the child tables that is a
> candidate to be dropped, there
> is a big prologue txn, which moves (INSERT then
> DELETE) the good rows into a
> child table that is NOT to be dropped. Then BANG
> pull the plug on the tables you
> don't want. MUCH faster than DELETE: the dropped
> tables' files' disk space goes
> away in one shot, too.
> 
> Just my 2c.
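
For my own notes, a sketch of that scheme (untested,
and all names here are hypothetical):

    -- Parent plus inherited children.
    CREATE TABLE events (id integer, status text);
    CREATE TABLE events_old  () INHERITS (events);
    CREATE TABLE events_keep () INHERITS (events);

    -- Prologue txn: move the good rows out of the
    -- child that is about to be dropped.
    BEGIN;
    INSERT INTO events_keep
        SELECT * FROM events_old WHERE status = 'open';
    DELETE FROM events_old WHERE status = 'open';
    COMMIT;

    -- Then pull the plug: the child's disk files go
    -- away in one shot, much faster than a DELETE.
    DROP TABLE events_old;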

Thanks.

Bill 
