Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
Well, if a table has 10 rows and we keep the current threshold of 1000,
then this table must accumulate 1002 dead tuples (99% dead: 1002 dead +
10 live) before being vacuumed.  This seems wasteful: long before that
point, say with 500 dead tuples and only 10 live ones, every scan must
already wade through all the dead tuples.

On the other hand, with the threshold dropped all the way to 0, another
small table with 100 tuples would be vacuumed on almost every iteration,
even if there are just two dead tuples.  So you are right -- maybe
dropping it all the way to 0 is too much.  But is a small value of 10
reasonable?  That would make the 10-tuple table be vacuumed when there
are 10 dead tuples (50% dead), and the 100-tuple table when there are 11
(11% dead).  The trigger fraction decreases quickly toward the scale
factor as tables grow (2%, or do we want to decrease it to 1%?)
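
For reference, autovacuum's trigger condition for a table is:

    dead tuples > base threshold + scale factor * reltuples

Here is a quick sketch of that arithmetic in Python, using the values
proposed above (base threshold 10, scale factor 1%); the function name
is just illustrative, not anything from the server code:

    def vacuum_threshold(reltuples, base_threshold=10, scale_factor=0.01):
        # Dead-tuple count at which autovacuum vacuums the table.
        return base_threshold + scale_factor * reltuples

    for live in (10, 100, 10_000, 1_000_000):
        dead = vacuum_threshold(live)
        pct = dead / (dead + live)
        print(f"{live:>9,} live tuples: vacuum at ~{dead:,.0f} dead "
              f"({pct:.0%} of the table)")

which prints 50% for the 10-tuple table, about 10% for the 100-tuple
one, and converges on the 1% scale factor for large tables.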

I think it's probably fine.  The optimal number for base_threshold is
probably dependent on the width of the row: for a very narrow row,
where many tuples fit on the same page, 20 or 50 might be right, but
for a very wide table a smaller number might be optimal.  However, I
suspect it doesn't matter much either way.
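
To make the row-width point concrete, here's a rough sketch assuming
the standard 8 kB heap page and roughly 28 bytes of per-tuple overhead
(~24-byte header plus a 4-byte line pointer); the row widths are made
up:

    PAGE_SIZE = 8192   # standard PostgreSQL heap page
    OVERHEAD = 28      # approx. tuple header + line pointer

    for width in (8, 100, 1000):   # bytes of user data per row
        per_page = PAGE_SIZE // (width + OVERHEAD)
        print(f"{width:>5}-byte rows: ~{per_page} tuples per page")

A few dozen dead tuples on a narrow table may all sit on one page,
while on a wide table they spread across several, which is roughly why
the best base_threshold could differ -- and also why, at these sizes,
it hardly matters.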

Reducing the default to 10 seems fine, and perhaps we should even
remove it as a tuning knob entirely.  There are too many autovacuum
knobs already, and they confuse people.  Is it too late to remove this
GUC altogether?
                                
