> On Feb 21, 2023, at 10:48, Brad White <b55wh...@gmail.com> wrote:
> 
> Running the table_bloat_check query from here 
> https://github.com/pgexperts/pgx_scripts/blob/master/bloat/table_bloat_check.sql
>  
> shows some tables with over 20MB and over 20% bloat while my threshold is set 
> to 0.1.

Apples-to-oranges comparison.  That query attempts to estimate the percentage of 
the overall table filespace that is not occupied by live data.  A perfectly 
vacuumed table containing only live tuples can (and probably will) still show 
bloat, since free space is counted as bloat.  (It's only an approximation, so 
its changing after a vacuum isn't particularly surprising.)
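
If you want hard numbers instead of an estimate, the pgstattuple contrib 
extension (assuming you can install it) reports dead-tuple space and free space 
separately.  A sketch, with 'mytable' standing in for whichever table you're 
checking:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- Exact breakdown (note: this scans the whole table, so it isn't free):
    SELECT tuple_percent,        -- live data
           dead_tuple_percent,   -- space vacuum can still reclaim
           free_percent          -- free space, which the bloat query counts as bloat
    FROM pgstattuple('mytable');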

The autovacuum threshold doesn't use the same calculations as the bloat query.  
Autovacuum triggers when n_dead_tup exceeds autovacuum_vacuum_threshold + 
pg_class.reltuples * autovacuum_vacuum_scale_factor.  If there are 21,651 or 
more rows in the table, 2215 dead tuples is below that trigger point, and 
autovacuum won't run on the table.
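
A rough sketch of that arithmetic against the live stats (this uses the global 
settings; per-table storage parameters, if you've set any, would override them):

    -- How close each table is to its autovacuum trigger point.
    SELECT s.relname,
           s.n_dead_tup,
           current_setting('autovacuum_vacuum_threshold')::int
             + current_setting('autovacuum_vacuum_scale_factor')::float8
               * c.reltuples AS autovacuum_trigger
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    ORDER BY s.n_dead_tup DESC;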

The output from vacuum didn't have anything in it that would indicate that it 
couldn't recover dead tuples (like a long-running transaction or something), so 
there's no reason that autovacuum wouldn't run on the table in the future when 
the number of dead tuples reaches the threshold.
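
If you want to rule that out next time, something like this (a sketch; pick 
whatever age cutoff makes sense for your workload) will show transactions old 
enough to hold back vacuum:

    SELECT pid, state, xact_start, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE xact_start < now() - interval '1 hour'
    ORDER BY xact_start;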

By the way, you really should check whether all those indexes are being used.  
That's a *lot* of indexes, which will greatly slow down inserts, take up 
filespace, and slow down autovacuum (since it has to scan each index before it 
can reclaim space in the heap).
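
A quick first pass, assuming the statistics haven't been reset recently (and 
remembering that stats on a replica are tracked separately, and that 
unique/primary-key indexes enforce constraints even if they're never scanned):

    -- Indexes never used for a scan since the stats were last reset.
    SELECT schemaname, relname, indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY pg_relation_size(indexrelid) DESC;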
