On Fri, May 17, 2024 at 5:55 AM Andres Freund <and...@anarazel.de> wrote:
>
> Hi,
>
> In the subthread at [1] I needed to trigger multiple rounds of index vacuuming
> within one vacuum.
>
> It turns out that with the new dead-tuple implementation, that actually got
> somewhat expensive. Particularly if all tuples on all pages get deleted, the
> representation is just "too dense". Normally that's obviously very good, but
> for testing, not so much:
>
> With the minimum setting of maintenance_work_mem=1024kB, a simple table with
> narrow rows, where all rows are deleted, the first cleanup happens after
> 3697812 dead tids. The table for that has to be > ~128MB.
>
> Needing a ~128MB table to be able to test multiple cleanup passes makes it
> much more expensive to test and consequently will lead to worse test coverage.
>
> I think we should consider lowering the minimum setting of
> maintenance_work_mem to the minimum of work_mem.

+1 for lowering the minimum value of maintenance_work_mem. I've faced
the same situation.

Even if a shared tidstore is empty, TidStoreMemoryUsage() returns
256kB because that is the minimum DSA segment size,
i.e. DSA_MIN_SEGMENT_SIZE. So, from a vacuum perspective, we could
lower the minimum maintenance_work_mem down to 256kB.
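
For context, a rough back-of-envelope check of the ~128MB figure quoted
above (the 8kB page size is the PostgreSQL default; ~226 tuples per page
is my assumption for a minimal-width row, e.g. a single int4 column, and
is not from the thread):

```python
# Sketch: why ~3.7M dead TIDs imply a table of roughly 128MB,
# assuming the default 8kB heap pages and narrow rows.
HEAP_PAGE_SIZE = 8192   # default BLCKSZ
TUPLES_PER_PAGE = 226   # assumption: single-int-column rows
                        # (~36 bytes each incl. line pointer)

dead_tids = 3_697_812   # first index-cleanup trigger reported above
pages = dead_tids / TUPLES_PER_PAGE
table_bytes = pages * HEAP_PAGE_SIZE

print(f"pages: {pages:.0f}, table size: {table_bytes / 1024**2:.0f} MB")
```

which comes out to roughly 128MB, matching the report.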

Regards,

-- 
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

