On 02/24/2016 08:54 AM, Alvaro Herrera wrote:
> Joe Conway wrote:
>
>> In my experience it is almost always best to run autovacuum very often
>> and very aggressively. That generally means tuning scaling factor and
>> thresholds as well, such that there are never more than say 50-100k dead
>> rows. [...]

Joe Conway wrote:
> In my experience it is almost always best to run autovacuum very often
> and very aggressively. That generally means tuning scaling factor and
> thresholds as well, such that there are never more than say 50-100k dead
> rows. Then running vacuum with no delays or limits runs qu[...]

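For concreteness, a minimal sketch of the kind of tuning being described
here; the table name and numbers are illustrative assumptions, not values
from the thread. Autovacuum fires once dead tuples exceed
autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples,
so zeroing the scale factor turns the threshold into an absolute cap:

  -- Hypothetical example: fire autovacuum on a large table once ~50k dead
  -- rows accumulate, rather than at the default 20% of reltuples.
  ALTER TABLE big_table SET (
      autovacuum_vacuum_scale_factor = 0,     -- ignore table size
      autovacuum_vacuum_threshold    = 50000  -- absolute dead-row trigger
  );

  -- "No delays or limits" for a manual VACUUM: cost-based throttling is
  -- disabled when vacuum_cost_delay is 0 (already the default for manual
  -- runs).
  SET vacuum_cost_delay = 0;
  VACUUM VERBOSE big_table;
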
On 02/23/2016 10:23 PM, Robert Haas wrote:
> On Tue, Jan 12, 2016 at 6:12 PM, Andres Freund wrote:
>> right now the defaults for autovacuum cost limiting are so low that they
>> regularly cause problems for our users. It's not exactly obvious that
>> any installation above a couple gigabytes definitely needs to change
>> autovacuum_vacuum_cost_delay &
>> autovacuum_vacuum_cost_limit/vacuum_cost_limit. [...]

On 1/12/16 6:42 AM, Andres Freund wrote:
> Somehow computing the speed in relation to the cluster/database size is
> probably possible, but I wonder how we can do so without constantly
> re-computing something relatively expensive?

ISTM relpages would probably be good enough for this, if we take the [...]

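A sketch of what that might look like; this is one reading of the idea, not
code from the thread. relpages is already maintained by VACUUM/ANALYZE, so
summing it over pg_class gives a cheap, if somewhat stale, size estimate
without touching the filesystem:

  -- Approximate database size from statistics already kept in pg_class.
  SELECT pg_size_pretty(sum(relpages)::bigint
                        * current_setting('block_size')::bigint) AS approx_size
  FROM pg_class
  WHERE relkind IN ('r', 'i', 't', 'm');  -- tables, indexes, toast, matviews
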
Hi,

right now the defaults for autovacuum cost limiting are so low that they
regularly cause problems for our users. It's not exactly obvious that
any installation above a couple gigabytes definitely needs to change
autovacuum_vacuum_cost_delay &
autovacuum_vacuum_cost_limit/vacuum_cost_limit. Especially [...]

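To put numbers on "so low": with the defaults shipped at the time
(vacuum_cost_limit = 200, autovacuum_vacuum_cost_delay = 20ms, and
vacuum_cost_page_hit/miss/dirty = 1/10/20), autovacuum accrues 200 cost
units per 20ms sleep, i.e. 10,000 units/s. At 8kB pages that caps it at
roughly 80MB/s of buffer hits, 8MB/s of reads, or 4MB/s of dirtied pages,
and correspondingly less for any mix of the three.

A sketch of loosening this cluster-wide; the values are illustrative
assumptions for the example, not a recommendation from the thread:

  ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
  ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;
  SELECT pg_reload_conf();  -- both settings take effect on reload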