Amit,

* Amit Kapila (amit.kapil...@gmail.com) wrote:
> On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:
> > I agree, but we should try to warn the user if they set
> > parallel_seqscan_degree close to max_worker_processes, or at least give
> > some indication of what's going on. This is something you could end up
> > beating your head against, wondering why it's not working.
> 
> Yet another way to handle the case when enough workers are not
> available is to let the user specify the desired minimum percentage of
> requested parallel workers with a parameter like
> PARALLEL_QUERY_MIN_PERCENT.  For example, if you specify
> 50 for this parameter, then at least 50% of the parallel workers
> requested for any parallel operation must be available in order for
> the operation to succeed; otherwise it will give an error.  If the value
> is set to null, then all parallel operations will proceed as long as at
> least two parallel workers are available for processing.
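
If I'm reading that right, the execution-time check would boil down to
something like this (a standalone sketch with made-up names, not actual
backend code):

    /*
     * Hypothetical sketch of the proposed parallel_query_min_percent
     * check.  Returns the number of workers to run with, or -1 to mean
     * "refuse the operation and raise an error".
     */
    int
    workers_to_use(int requested, int available, int min_percent)
    {
        if (min_percent <= 0)
        {
            /*
             * Parameter unset: the proposal says to proceed whenever at
             * least two workers are available; what happens below that
             * isn't spelled out.
             */
            return (available >= 2) ? available : -1;
        }

        /* Compare available/requested against the threshold in integers. */
        if (available * 100 >= requested * min_percent)
            return available;

        return -1;
    }

So with the parameter at 50, a query that asked for eight workers would
error out unless at least four could actually be started.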

Ugh.  I'm not a fan of this.  Based on how we're talking about modeling
this, if we decide to parallelize at all, then we expect it to be a win.
I don't like the idea of throwing an error if, at execution time, we end
up not being able to get the number of workers we want; instead, we
should degrade gracefully all the way back to serial, if necessary.
Perhaps we should send a NOTICE or something along those lines to let
the user know we weren't able to get the level of parallelization that
the plan originally asked for, but I really don't like just throwing an
error.
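
Roughly, what I have in mind is more like the following (again just a
standalone sketch with made-up names): run with however many workers we
actually managed to launch, fall back to serial at zero, and at most
tell the user about the shortfall:

    #include <stdio.h>

    /*
     * Hypothetical sketch of graceful degradation: use whatever workers
     * were actually launched, all the way down to serial, and at most
     * emit a NOTICE (printf here stands in for ereport(NOTICE, ...)).
     */
    int
    effective_workers(int planned, int launched)
    {
        if (launched < planned)
            printf("NOTICE:  planned %d parallel workers, only %d available\n",
                   planned, launched);

        return launched;    /* zero means just run the plan serially */
    }

The point being that the plan stays executable with any number of
workers, including none.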

Now, for debugging purposes, I could see such a parameter being
available, but it should default to 'off/never-fail'.

        Thanks,

                Stephen
