On 2017-04-03 22:13:18 -0400, Robert Haas wrote:
> On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund <and...@anarazel.de> wrote:
> > Hm.  I'm not really convinced by the logic here.  Wouldn't it be better
> > to try to compute the minimum total cost across all workers for
> > 1..#max_workers for the plans in an iterative manner?  I.e. try to map
> > each of the subplans to 1 (if non-partial) or N workers (partial) using
> > some fitting algorithm (e.g. always choosing the worker(s) that currently
> > have the least work assigned).  I think the current algorithm doesn't
> > lead to useful #workers for e.g. cases with a lot of non-partial,
> > high-startup plans - imo a quite reasonable scenario.
> 
> Well, that'd be totally unlike what we do in any other case.  We only
> generate a Parallel Seq Scan plan for a given table with one # of
> workers, and we cost it based on that.  We have no way to re-cost it
> if we changed our mind later about how many workers to use.
> Eventually, we should probably have something like what you're
> describing here, but in general, not just for this specific case.  One
> problem, of course, is to avoid having a larger number of workers
> always look better than a smaller number, which with the current
> costing model would probably happen a lot.

I don't think the parallel seqscan is comparable in complexity to the
parallel append case.  Each worker there does the same kind of work, and
if one of them falls behind, it'll just do less.  But correctly sizing
the number of workers matters more for parallel append, because with
non-partial subplans the work is absolutely *not* uniform.
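To make the fitting idea above a bit more concrete, here's a rough
standalone sketch (hypothetical structs and made-up costs, not actual
planner code): for each candidate worker count, each non-partial subplan
goes to the currently least-loaded worker, partial subplans are assumed
to divide evenly, and we keep the worker count whose slowest worker
finishes earliest.

/*
 * Hypothetical sketch of the greedy "least-loaded worker" fitting
 * described above.  SubplanCost, choose_num_workers and the example
 * costs are all made up for illustration.
 */
#include <stdio.h>
#include <stdbool.h>

typedef struct SubplanCost
{
    double  total_cost;     /* estimated cost of running the subplan */
    bool    is_partial;     /* can the subplan be split across workers? */
} SubplanCost;

static int
choose_num_workers(const SubplanCost *subplans, int nsubplans, int max_workers)
{
    int     best_workers = 1;
    double  best_makespan = -1.0;

    if (max_workers > 64)
        max_workers = 64;       /* keep the fixed-size array simple */

    for (int nworkers = 1; nworkers <= max_workers; nworkers++)
    {
        double  load[64] = {0.0};   /* per-worker accumulated cost */
        double  makespan = 0.0;

        for (int i = 0; i < nsubplans; i++)
        {
            if (subplans[i].is_partial)
            {
                /* partial subplan: assume its work divides evenly */
                for (int w = 0; w < nworkers; w++)
                    load[w] += subplans[i].total_cost / nworkers;
            }
            else
            {
                /* non-partial subplan: assign to the least-loaded worker */
                int     argmin = 0;

                for (int w = 1; w < nworkers; w++)
                    if (load[w] < load[argmin])
                        argmin = w;
                load[argmin] += subplans[i].total_cost;
            }
        }

        /* the slowest worker determines when the append finishes */
        for (int w = 0; w < nworkers; w++)
            if (load[w] > makespan)
                makespan = load[w];

        if (best_makespan < 0.0 || makespan < best_makespan)
        {
            best_makespan = makespan;
            best_workers = nworkers;
        }
    }
    return best_workers;
}

int
main(void)
{
    /* two expensive non-partial subplans plus one partial one */
    SubplanCost subplans[] = {
        {1000.0, false}, {900.0, false}, {400.0, true}
    };

    printf("workers = %d\n", choose_num_workers(subplans, 3, 8));
    return 0;
}

With costs like the example above, adding workers beyond the point where
each expensive non-partial subplan has its own worker stops helping,
which is exactly the kind of result the current heuristic wouldn't give
you for high-startup, non-partial plans.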

Greetings,

Andres Freund

