Vacuum is not being triggered on a heavily modified big table even when its dead tuples exceed the configured threshold.

This is because, at the end of a vacuum, the table's dead-tuple count is reset to zero, so any tuples that became dead while the vacuum was running are lost from the statistics. To trigger the next vacuum on the same table, the configured threshold number of dead tuples has to accumulate all over again. That next vacuum then takes longer because of the larger number of dead tuples; as this repeats, it leads to table and index bloat.

To handle the above case, instead of directly resetting the dead-tuple count to zero, how about subtracting the exact number of dead tuples removed by the vacuum from the table stats? With this approach vacuum gets triggered more frequently, which reduces the bloat. A patch for the same is attached to this mail.
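To illustrate the idea, here is a minimal standalone sketch (not the actual patch; the struct and function names are made up, not PostgreSQL's pgstat structures). It contrasts the current reset-to-zero behaviour with the proposed subtract-what-was-removed behaviour:

```c
#include <assert.h>

/* Simplified stand-in for a table's stats entry; hypothetical name,
 * not the real PostgreSQL statistics structures. */
typedef struct TableStats
{
	long	n_dead_tuples;	/* dead tuples known to the stats system */
} TableStats;

/* Current behaviour: vacuum ends by resetting the counter to zero,
 * discarding any tuples that died while the vacuum was running. */
static void
vacuum_end_reset(TableStats *stats)
{
	stats->n_dead_tuples = 0;
}

/* Proposed behaviour: subtract only the tuples this vacuum actually
 * removed, so concurrently created dead tuples remain counted and the
 * next vacuum can be triggered sooner. */
static void
vacuum_end_subtract(TableStats *stats, long tuples_removed)
{
	stats->n_dead_tuples -= tuples_removed;
	if (stats->n_dead_tuples < 0)
		stats->n_dead_tuples = 0;
}
```

For example, if 1000 tuples were dead at vacuum start and 200 more die during the vacuum, the reset approach reports 0 dead tuples afterwards, while the subtract approach correctly reports the 200 that the vacuum could not remove.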

Please let me know if there is any problem with this approach.

Regards,
Hari babu.

Attachment: vacuum_fix_v1.patch
Description: vacuum_fix_v1.patch

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
