On 25 November 2013 21:51, Peter Geoghegan <p...@heroku.com> wrote:
> On Sun, Nov 24, 2013 at 9:06 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> VACUUM uses 6 bytes per dead tuple. And autovacuum regularly removes
>> dead tuples, limiting their numbers.
>>
>> In what circumstances will the memory usage from multiple concurrent
>> VACUUMs become a problem? In those circumstances, reducing
>> autovacuum_work_mem will cause more passes through indexes, dirtying
>> more pages and elongating the problem workload.
>
> Yes, of course, but if we presume that the memory for autovacuum
> workers to do everything in one pass simply isn't there, it's still
> better to do multiple passes.

That isn't clear to me. It seems better to wait until we have the memory.
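To put rough numbers on that (figures invented purely for
illustration): at 6 bytes per dead-tuple TID, collecting 200 million
dead tuples in a single pass needs about 1.2GB. With
autovacuum_work_mem at 128MB a worker can hold roughly 22 million
TIDs, so it would have to scan every index on the table around nine
times over, dirtying the same index pages again and again.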

My feeling is that this parameter is a fairly blunt instrument for
the problem of memory pressure on autovacuum and other maintenance
tasks, and I am worried that it will not solve that problem
effectively. I don't wish to block the patch; I wish to get to an
effective solution to the problem.

A better approach to handling memory pressure would be to coordinate
workers globally so that we don't oversubscribe memory, with each
worker allocating from a shared global pool.
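Something like the following is the shape I have in mind. This is a
minimal sketch with invented names (VacuumMemPool, pool_reserve,
pool_release), using a plain pthread mutex where the real thing would
live in shared memory behind an LWLock:

/*
 * Sketch only: workers reserve dead-tuple memory from one shared
 * budget instead of each sizing its own allocation, so the sum of
 * reservations never exceeds the configured limit.
 */
#include <pthread.h>
#include <stddef.h>

typedef struct VacuumMemPool
{
    pthread_mutex_t lock;
    size_t          total;      /* global budget */
    size_t          reserved;   /* currently handed out to workers */
} VacuumMemPool;

/* e.g. a 512MB global budget shared by all workers */
static VacuumMemPool pool = { PTHREAD_MUTEX_INITIALIZER,
                              512 * 1024 * 1024, 0 };

/*
 * Reserve up to 'want' bytes, but never less than 'min'.  Returns
 * the grant, which may be smaller than 'want' when other workers
 * hold part of the budget; the caller sizes its dead-tuple array
 * from the grant.  A return of 0 means: wait and retry later.
 */
static size_t
pool_reserve(VacuumMemPool *p, size_t want, size_t min)
{
    size_t grant = 0;

    pthread_mutex_lock(&p->lock);
    size_t avail = p->total - p->reserved;
    if (avail >= min)
    {
        grant = (avail < want) ? avail : want;
        p->reserved += grant;
    }
    pthread_mutex_unlock(&p->lock);
    return grant;
}

/* Return a grant to the pool when the worker finishes its table. */
static void
pool_release(VacuumMemPool *p, size_t grant)
{
    pthread_mutex_lock(&p->lock);
    p->reserved -= grant;
    pthread_mutex_unlock(&p->lock);
}

The point is that a worker asking for memory while the pool is
drained waits (or takes a smaller grant) rather than pushing the
system into oversubscription, and releases its share as soon as its
table is done.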

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

