On Mon, 2020-06-15 at 16:42 +0300, Kristjan Mustkivi wrote:
> > You should schedule down time and run a VACUUM (FULL) on that table.
> > That will rewrite the table and get rid of the bloat.
>
> But in order to avoid the situation happening again (as it will with
> the current settings), I should [...]
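
The one-off fix quoted above is a full rewrite of the table and its
TOAST data. A minimal sketch of what that looks like, with my_table as
a hypothetical stand-in for the actual table name:

    -- 'my_table' is a placeholder for the real table.
    -- VACUUM (FULL) takes an ACCESS EXCLUSIVE lock and rewrites the
    -- table from scratch, so it needs scheduled downtime on a busy table.
    VACUUM (FULL, VERBOSE) my_table;

    -- Compare the on-disk size (including TOAST) before and after.
    SELECT pg_size_pretty(pg_total_relation_size('my_table'));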

On Mon, 2020-06-15 at 13:47 +0300, Kristjan Mustkivi wrote:
> Still, pgstattuple reveals that the table size is 715MB while live
> tuple len is just 39MB and 94% of the table is vacant. I do not have
> much experience in interpreting this, but it would seem that it is
> still getting bloated. Should [...]
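
The figures above come from the pgstattuple extension, which reports
the physical space usage of a relation. A minimal sketch of the query
involved, again using the hypothetical my_table:

    -- pgstattuple ships with PostgreSQL as a contrib extension.
    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- table_len is the physical size, tuple_len the space taken by
    -- live tuples, free_percent the share of the table that is empty.
    SELECT table_len, tuple_len, dead_tuple_len, free_percent
    FROM pgstattuple('my_table');

The 715MB and 39MB quoted would correspond to table_len and tuple_len,
and the 94% presumably to free_percent.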

On Mon, 2020-06-15 at 11:51 +0300, Kristjan Mustkivi wrote:
Dear all,

I have a table which contains a "json" column and it gets heavily
updated. Before introducing toast.autovacuum_vacuum_scale_factor=0.05
and toast.autovacuum_vacuum_cost_limit=1000 this table bloated to
nearly 1TB in a short while. Now the n_dead_tup value is nicely under
control but still [...]
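
The settings mentioned here are per-table storage parameters that tune
autovacuum for the table's TOAST relation. A minimal sketch of how they
would have been applied, assuming the same hypothetical my_table:

    -- Vacuum the TOAST table once 5% of it is dead tuples (the default
    -- scale factor is 0.2), and raise the cost limit so each autovacuum
    -- round does more work between cost-based sleeps.
    ALTER TABLE my_table SET (
        toast.autovacuum_vacuum_scale_factor = 0.05,
        toast.autovacuum_vacuum_cost_limit = 1000
    );

    -- Dead-tuple counts for TOAST tables are visible in pg_stat_all_tables.
    SELECT relname, n_dead_tup
    FROM pg_stat_all_tables
    WHERE schemaname = 'pg_toast';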