On Tue, May 30, 2017 at 6:50 AM, Ashutosh Bapat
<ashutosh.ba...@enterprisedb.com> wrote:
> Increasing that number would require increased DSM which may not be
> available. Also, I don't see any analysis as to why 6553600 is chosen?
> Is it optimal? Does that work for all kinds of work loads?
Picky, picky. The point is that Rafia has discovered that a large increase can sometimes significantly improve performance. I don't think she's necessarily proposing that (or anything else) as a final value that we should definitely use, just getting the conversation started. I did a little bit of brief experimentation on this same topic a long time ago and didn't see an improvement from boosting the queue size beyond 64k, but Rafia is testing Gather Merge rather than Gather and, as I say, my test was very brief.

I think it would be a good idea to try to get a complete picture here. Does this help on any query that returns many tuples through the Gather? Only the ones that use Gather Merge? Some queries but not others with no obvious pattern? Only this query? Blindly adding a GUC because we found one query that would be faster with a different value is not the right solution. If we don't even know why a larger value is needed here and (maybe) not elsewhere, then how will any user possibly know how to tune the GUC? And do we really want the user to have to keep adjusting a GUC before each query to get maximum performance? I think we need to understand the whole picture here, and then decide what to do. Ideally this would auto-tune, but we can't write code for that without a more complete picture of the behavior.

BTW, there are a couple of reasons I originally picked 64k here. One is that making it smaller was very noticeably terrible in my testing, while making it bigger didn't help much. The other is that I figured 64k was small enough that nobody would care about the memory utilization. I'm not sure we can assume the same thing if we make this bigger. It's probably fine to use a 6.4M tuple queue for each worker if work_mem is set to something big, but maybe not if work_mem is set to the default of 4MB.
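For scale, here's a quick back-of-the-envelope sketch of the shared-memory cost being discussed. The 65536-byte figure is the existing per-worker queue size and 6553600 is the proposed value from the thread; the helper function name and the worker counts are purely illustrative, not anything from the patch:

```python
# Back-of-the-envelope: DSM consumed by parallel tuple queues.
# Each parallel worker gets its own queue, so total cost scales with
# both the queue size and the number of workers.

WORK_MEM_DEFAULT = 4 * 1024 * 1024  # 4MB, the default work_mem

def queue_memory(queue_size_bytes, nworkers):
    """Total tuple-queue memory: one queue per worker."""
    return queue_size_bytes * nworkers

for queue_size in (65536, 6553600):  # current value vs. proposed value
    for nworkers in (2, 8):
        total = queue_memory(queue_size, nworkers)
        print(f"{queue_size:>8} bytes x {nworkers} workers = "
              f"{total / (1024 * 1024):.2f}MB "
              f"({total / WORK_MEM_DEFAULT:.2f}x default work_mem)")
```

At 64k per worker the queues are noise, but at 6.4M even a modest worker count exceeds the default work_mem several times over, which is the concern above.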
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers