Gregory Stark wrote:
"Bruce Momjian" <[EMAIL PROTECTED]> writes:

I agree it index cleanup isn't > 50% of vacuum.  I was trying to figure
out how small, and it seems about 15% of the total table, which means if
we have bitmap vacuum, we can conceivably reduce vacuum load by perhaps
80%, assuming 5% of the table is scanned.

Actually no. A while back I did experiments to see how fast reading a file
sequentially was compared to reading the same file sequentially but skipping
x% of the blocks randomly. The results were surprising (to me) and depressing.
The breakeven point was about 7%.
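
For anyone who wants to repeat that kind of test, a rough sketch is below.
It is my own untested guess at the setup, not Greg's actual harness; the
8 kB block size and the skip logic are assumptions. Time a run with
SKIP_PCT = 0 against runs with higher percentages, dropping the OS cache
in between.

/*
 * Untested sketch: read the given file front to back in 8 kB blocks,
 * skipping roughly SKIP_PCT percent of the blocks at random.  Compare
 * elapsed time against a run with SKIP_PCT = 0.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define BLCKSZ 8192

int
main(int argc, char **argv)
{
    char        buf[BLCKSZ];
    struct stat st;
    double      skip_pct;
    long        nblocks, blkno, blocks_read = 0;
    int         fd;

    if (argc != 3)
    {
        fprintf(stderr, "usage: %s FILE SKIP_PCT\n", argv[0]);
        return 1;
    }

    skip_pct = atof(argv[2]);
    fd = open(argv[1], O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0)
    {
        perror(argv[1]);
        return 1;
    }
    nblocks = st.st_size / BLCKSZ;

    srandom(time(NULL));

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        if (random() % 10000 < (long) (skip_pct * 100.0))
        {
            /* skip this block; everything else stays sequential */
            lseek(fd, BLCKSZ, SEEK_CUR);
            continue;
        }
        if (read(fd, buf, BLCKSZ) != BLCKSZ)
            break;
        blocks_read++;
    }

    printf("read %ld of %ld blocks\n", blocks_read, nblocks);
    close(fd);
    return 0;
}

Plotting elapsed time against the skip percentage should show where the
breakeven sits on a given setup.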

Note that with uniformly random updates, you will have dirtied practically
every page of the table by the time you get anywhere near 5% dead space.
So we have to assume a non-uniform distribution of updates for the DSM
(dead space map) to be of any help.
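
To put a rough number on that (my assumptions, not measurements: about
100 live tuples per page and each update hitting a uniformly random
tuple, so after U updates a given page stays clean with probability about
(1 - 1/N)^U, i.e. roughly e^(-U/N)):

/*
 * Back-of-the-envelope estimate, not a measurement.  With uniformly
 * random updates, U/N = dead_fraction * tuples_per_page, so the fraction
 * of pages still clean is about e^(-dead_fraction * tuples_per_page).
 */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double  tuples_per_page = 100.0;    /* assumed; depends on row width */
    double  dead_fractions[] = {0.01, 0.02, 0.05};
    int     i;

    for (i = 0; i < 3; i++)
    {
        double  clean = exp(-dead_fractions[i] * tuples_per_page);

        printf("%2.0f%% dead space -> ~%5.2f%% of pages still clean\n",
               dead_fractions[i] * 100.0, clean * 100.0);
    }
    return 0;
}

At 5% dead space that leaves well under 1% of the pages clean, so in the
uniform case the DSM has almost nothing to skip.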

And if we assume a non-uniform distribution, it's a good bet that the
blocks that need vacuuming are also not randomly distributed. In fact,
they might very well all be in one cluster, so that scanning that cluster
is indeed sequential I/O.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
