Hi all,

Now that autovacuum has multiple workers, the semantics of
autovacuum_cost_limit also need to be redefined.

Currently, autovacuum_cost_limit is the accumulated cost at which a
single vacuuming worker goes to sleep; it restricts the I/O consumption
of one vacuum worker.  With N workers, the total I/O consumed by
autovacuum can grow N times larger, so this semantics of
autovacuum_cost_limit makes the overall I/O consumption unpredictable
when multiple workers are running.

One simple idea is to set the cost limit of every worker to
autovacuum_cost_limit / max_autovacuum_workers.  But when fewer workers
are active, this is obviously unfair to the active ones.  A better way
is to set the cost limit of every active worker to
autovacuum_cost_limit / autovacuum_active_workers, which keeps the total
I/O consumption of autovacuum stable.
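To illustrate with made-up numbers: if autovacuum_cost_limit is 200 and
max_autovacuum_workers is 4, the simple scheme gives every worker a
limit of 50 even when only one worker is running, so that worker uses a
quarter of the budget it could safely use.  Dividing by the number of
active workers instead gives a lone worker the full 200, two concurrent
workers 100 each, and so on, so the total stays at 200 in every case.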

Each worker can be extended to have its own cost_limit in shared memory.
When a worker is brought up or has finished its work, the launcher
recalculates:

   worker_cost_limit = (autovacuum_cost_limit / autovacuum_active_workers)

and sets the new value for each active worker.
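Here is a minimal standalone sketch of what the launcher-side
rebalancing could look like.  WorkerSlot, av_shared and
rebalance_cost_limits are made-up names for illustration, not the actual
data structures; in a real patch the slots would live in the autovacuum
shared memory area and be updated under a suitable lock:

#include <stdbool.h>

#define AV_MAX_WORKERS 8            /* stand-in for max_autovacuum_workers */

typedef struct WorkerSlot
{
    bool    active;                 /* is this worker currently vacuuming? */
    int     cost_limit;             /* per-worker limit set by the launcher */
} WorkerSlot;

static WorkerSlot av_shared[AV_MAX_WORKERS];    /* pretend shared memory */

/* Called by the launcher whenever a worker starts or finishes. */
static void
rebalance_cost_limits(int autovacuum_cost_limit)
{
    int     nactive = 0;

    for (int i = 0; i < AV_MAX_WORKERS; i++)
        if (av_shared[i].active)
            nactive++;

    if (nactive == 0)
        return;                     /* nothing to rebalance */

    for (int i = 0; i < AV_MAX_WORKERS; i++)
        if (av_shared[i].active)
            av_shared[i].cost_limit = autovacuum_cost_limit / nactive;
}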

The above approach requires that the launcher can change the cost
settings of workers on the fly.  This can be achieved by making VACUUM
re-read the cost settings from its worker's shared memory at every
vacuum_delay_point.
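Continuing the hypothetical WorkerSlot sketch above, the worker side
could look roughly like this: instead of consulting a backend-local copy
of the limit, the worker re-reads its slot at every delay point, so a
value the launcher changed mid-vacuum takes effect at the next check.
worker_delay_point is a simplified stand-in for the real
vacuum_delay_point logic:

static WorkerSlot *my_slot;             /* this worker's slot, set at startup */
static int  vacuum_cost_balance = 0;    /* cost accumulated since last sleep */

static void
worker_delay_point(void)
{
    int     limit = my_slot->cost_limit;    /* pick up the latest value */

    if (limit > 0 && vacuum_cost_balance >= limit)
    {
        /* sleep for the configured cost delay here, then start over */
        vacuum_cost_balance = 0;
    }
}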

Any comments or suggestions?

Best Regards

Galy Lee
[EMAIL PROTECTED]
NTT Open Source Software Center
