Robert Haas wrote:

> Also, I think that we might actually want to add an
> additional GUC to prevent the parallel query system from consuming the
> entire pool of processes established by max_worker_processes.  If
> you're doing anything else with worker processes on your system, you
> might well want to say, well, it's OK to have up to 20 worker
> processes, but at most 10 of those can be used for parallel queries,
> so that the other 10 are guaranteed to be available for whatever other
> stuff I'm running that uses the background process facility.  It's
> worth remembering that the background worker stuff was originally
> invented by Alvaro to allow users to run daemons, not for parallel
> query.

Agreed -- things like pglogical and BDR rely on background workers to do
their jobs.  Many other users of bgworkers have popped up as well, so I
think it would be a bad idea to let parallel queries monopolize all the
available slots.
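
For anyone following along who hasn't written one: an extension claims
one of the max_worker_processes slots from its _PG_init() when loaded
via shared_preload_libraries, roughly like the sketch below.  This is
only an illustration with made-up names ("my_extension",
"my_worker_main"), not code lifted from pglogical or BDR:

    #include "postgres.h"
    #include "fmgr.h"
    #include "postmaster/bgworker.h"

    PG_MODULE_MAGIC;

    void _PG_init(void);

    void
    _PG_init(void)
    {
        BackgroundWorker worker;

        memset(&worker, 0, sizeof(worker));
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        worker.bgw_restart_time = BGW_NEVER_RESTART;
        snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
        snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
        snprintf(worker.bgw_name, BGW_MAXLEN, "my extension: supervisor");

        /* each static registration consumes one bgworker slot for the
         * life of the postmaster */
        RegisterBackgroundWorker(&worker);
    }

Every registration like that takes a slot out of the same pool that
parallel query would be drawing from, which is exactly the contention
Robert describes.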

> So I think in the long run we should have three limits:
> 
> 1. Cluster-wide limit on number of worker processes for all purposes
> (currently, max_worker_processes).
> 
> 2. Cluster-wide limit on number of worker processes for parallelism
> (don't have this yet).
> 
> 3. Per-operation limit on number of worker processes for parallelism
> (currently, max_parallel_degree).
> 
> Whatever we rename, there needs to be enough semantic space between #1
> and #3 to allow for the possibility - I think the very likely
> possibility - that we will eventually also want #2.

max_background_workers sounds fine to me for #1, and I propose to add #2
in 9.6 rather than wait.  max_total_parallel_query_workers, perhaps?  I
already presented my proposal for #3, which, as you noted, nobody
endorsed.
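
To make that concrete, the kind of configuration I have in mind would
look like this.  The first two names are only the proposals in this
thread (today's max_worker_processes and a GUC that doesn't exist yet);
the 20/10 values come from Robert's example upthread and the 4 is
arbitrary:

    max_background_workers = 20             # 1: all worker processes, any purpose
    max_total_parallel_query_workers = 10   # 2: cluster-wide cap for parallel query
    max_parallel_degree = 4                 # 3: per-operation cap

With something like that, extensions are always guaranteed at least the
difference between the first two settings, no matter how busy parallel
query gets.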

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
