On 12/5/2012 2:00 PM, Robert Haas wrote:
Maybe it'd be sensible to relate the retry time to the time spent
vacuuming the table.  Say, if the amount of time spent retrying
exceeds 10% of the time spent vacuuming the table, with a minimum of
1s and a maximum of 1min, give up.  That way, big tables will get a
little more leeway than small tables, which is probably appropriate.
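
If I understand the proposal correctly, the clamp would boil down to
something like this (just a sketch; the helper name and the millisecond
units are made up):

#include <stdint.h>

/*
 * Hypothetical helper, not actual code: allow retrying for up to 10%
 * of the time already spent vacuuming, but never less than 1 second
 * and never more than 1 minute.
 */
static int64_t
vacuum_retry_budget_ms(int64_t vacuum_elapsed_ms)
{
    int64_t     budget = vacuum_elapsed_ms / 10;

    if (budget < 1000)              /* 1 second floor */
        budget = 1000;
    if (budget > 60 * 1000)         /* 1 minute ceiling */
        budget = 60 * 1000;

    return budget;
}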

That sort of "dynamic" approach would indeed be interesting, but I fear it is going to be complex at best. The amount of time spent scanning depends heavily on the visibility map. The initial vacuum scan of a table can take hours or more, but it updates the visibility map even if the vacuum itself is aborted later. The next vacuum may then scan the same table in almost no time at all, because it skips every block that is already marked "all visible".
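
For illustration, the reason that second scan is nearly free is roughly
this (grossly simplified, not the real vacuumlazy.c loop, and
scan_and_prune_page is made up):

for (blkno = 0; blkno < nblocks; blkno++)
{
    /* bit may have been set by an earlier vacuum, even an aborted one */
    if (visibilitymap_test(onerel, blkno, &vmbuffer))
        continue;               /* all visible: nothing to do here */

    scan_and_prune_page(onerel, blkno);     /* hypothetical per-page work */
}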

So the total time the "scan" takes is not a yardstick I would use here.


Jan

--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin


