(picking up the thread again too)

>> Five minutes?! That's not been my experience. Not claiming parallelism is
>> perfect yet, but there are plenty of parallel performance savings under
>> the five minute mark.

> Absolutely, I've seen 1 second queries go to 200ms with parallelism of 2.
> The problem isn't about making that query faster in isolation, the problem
> is that every single one of those means a new connection

I feel that this 1-second example, and the subsequent testing with a
"SELECT 1;", are strawmen. I was replying to your claim of five minutes as a
cutoff, so a challenge to that would be showing whether a query taking 4:45
sees an overall benefit from going parallel. I maintain that at the 4:45
mark it's a net win 99.99% of the time.
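
To be concrete, the comparison I'd want to see is that same multi-minute
query timed with parallelism allowed and then forcibly disabled, something
like the psql sketch below (the query itself is just a placeholder):

  \timing on
  SET max_parallel_workers_per_gather = 0;   -- force the serial plan
  SELECT ... ;                               -- the ~4:45 query goes here
  SET max_parallel_workers_per_gather = 2;   -- allow a parallel plan again
  SELECT ... ;                               -- same query, ideally warm cache
  -- compare the two wall-clock times reported by \timing, running each a
  -- few times so caching doesn't unfairly favor one side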

> The other issue is that when you have a nonzero
> max_parallel_workers_per_gather, Postgres tries to launch parallel workers
> and if you've exhausted max_parallel_workers, it falls back to a standard
> plan.  There's no good way for a user to really understand the behavior
> here, and having max_parallel_workers_per_gather enabled adds overhead
> across the entire cluster.

I'm not entirely clear on the problem here - it seems things are working as
designed? One could make this argument about almost any of our planner
GUCs, as they each have their own tradeoffs and hard-to-measure,
hard-to-explain effects at a distance.
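
For what it's worth, the planned-versus-launched gap is at least visible in
EXPLAIN ANALYZE. A rough sketch, with big_table standing in for anything
large enough to get a parallel seq scan:

  SET max_parallel_workers_per_gather = 2;
  SET max_parallel_workers = 0;     -- simulate an exhausted worker pool
  EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;
  -- the Gather node reports something like
  --   Workers Planned: 2
  --   Workers Launched: 0
  -- i.e. the parallel plan silently ran serially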

> Other than picking an arbitrary value (i.e. 5000), any thoughts about how
> to build a case around a specific value?

Do you have actual examples of queries / situations that are harmed by the
current settings? Let's start there. I've not seen any indications in the
field that our current defaults are all that bad, but am open to being
persuaded (ideally with real data).
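
If anyone wants to gather that kind of evidence, auto_explain is probably
the lowest-friction way to catch real-world queries where the parallel plan
loses. A rough session-level sketch (thresholds are just placeholders; it
can also be preloaded cluster-wide via shared_preload_libraries):

  LOAD 'auto_explain';
  SET auto_explain.log_min_duration = '1s';  -- log plans slower than this
  SET auto_explain.log_analyze = on;         -- include actual rows/timings
  -- then run the workload and look for Gather nodes where "Workers Launched"
  -- lags "Workers Planned", or where the parallel plan is slower than the
  -- serial equivalent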

Tom wrote:
>> BTW, I would say largely the same things about JIT

Yeah, that would change this from a few people conversing over tea into a
large angry mob bearing pitchforks.

Cheers,
Greg

--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support
