On 1/5/15, 9:21 AM, Stephen Frost wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
> > I think it's right to view this in the same way we view work_mem.  We
> > plan on the assumption that an amount of memory equal to work_mem will
> > be available at execution time, without actually reserving it.
>
> Agreed- this seems like a good approach for how to address this.  We
> should still be able to end up with plans which use fewer than the max
> possible parallel workers, though, as I pointed out somewhere up-thread.
> This is also similar to work_mem- we certainly have plans which don't
> expect to use all of work_mem and others that expect to use all of it
> (per node, of course).
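
To make the per-node work_mem comparison concrete, here's a minimal
illustration (table names made up): every sort or hash node in a plan may
use up to work_mem, and nothing is actually reserved at plan time.

    SET work_mem = '64MB';
    EXPLAIN SELECT * FROM orders o JOIN customers c USING (customer_id)
            ORDER BY o.ship_date;
    -- The Hash node and the Sort node may each use up to 64MB; the
    -- planner assumes that memory will be available at execution time
    -- but never reserves it.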

I agree, but we should try to warn the user if they set
parallel_seqscan_degree close to max_worker_processes, or at least give
some indication of what's going on. Otherwise this is something you could
end up beating your head against, wondering why it's not working.
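
Even something as simple as this would go a long way (hypothetical
message; nothing emits it today):

    SET parallel_seqscan_degree = 14;
    WARNING:  parallel_seqscan_degree (14) is close to max_worker_processes (16);
    plans may assume more workers than can actually be launched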

Perhaps we could have EXPLAIN throw a warning if a plan is likely to get
fewer than parallel_seqscan_degree workers.
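
Roughly like this (hypothetical output, just to sketch the idea; the
table name is made up):

    EXPLAIN SELECT count(*) FROM big_table;
    WARNING:  plan calls for 8 parallel workers, but only 3 are likely to be available
                             QUERY PLAN
    --------------------------------------------------------------
     Aggregate  (cost=...)
       ->  Parallel Seq Scan on big_table  (cost=... workers: 8)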
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

