On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:
> I believe that the underlying problem here can be summarized in this
> way: just because I'm OK with 2MB of bloat in my 10MB table doesn't
> mean that I'm OK with 2TB of bloat in my 10TB table. One reason for
> this is simply that I can afford to waste 2MB much more easily than I
> can afford to waste 2TB -- and that applies both on disk and in
> memory.
I don't find that convincing.  Why is 2TB of wasted space in a 10TB
table worse than 2TB of wasted space in 100 tables of 100GB each?

> Another reason, at least in existing releases, is that at some
> point index vacuuming hits a wall because we run out of space for dead
> tuples. We *most definitely* want to do index vacuuming before we get
> to the point where we're going to have to do multiple cycles of index
> vacuuming.

That is more convincing.  But do we need a GUC for that?  What about
making a table eligible for autovacuum as soon as the number of dead
tuples reaches 90% of what you can hold in "autovacuum_work_mem"?

Yours,
Laurenz Albe
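PS: A back-of-the-envelope sketch of that 90% rule, assuming the
pre-v17 dead-tuple representation where each dead tuple costs one
6-byte TID (ItemPointerData) in vacuum's collector array.  The
function name and the 90% fill factor below are illustrative, not
any existing API:

```python
TID_BYTES = 6  # sizeof(ItemPointerData) in existing releases

def autovacuum_dead_tuple_threshold(autovacuum_work_mem_kb: int,
                                    fill_factor: float = 0.9) -> int:
    """Dead-tuple count at which a table would become eligible for
    autovacuum under the proposed rule (hypothetical helper)."""
    budget_bytes = autovacuum_work_mem_kb * 1024
    return int(fill_factor * budget_bytes / TID_BYTES)

# With 64MB (the maintenance_work_mem default, which
# autovacuum_work_mem = -1 falls back to), the table would become
# eligible at roughly ten million dead tuples:
print(autovacuum_dead_tuple_threshold(64 * 1024))
```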