Greetings,

* Jeff Davis (pg...@j-davis.com) wrote:
> On Wed, 2020-07-08 at 10:00 -0400, Stephen Frost wrote:
> > That HashAgg previously didn't care that it was going wayyyyy over
> > work_mem was, if anything, a bug.
>
> I think we all agree about that, but some people may be depending on
> that bug.
That's why we don't make these kinds of changes in a minor release and
instead have major releases.

> > Inventing new GUCs late in the
> > cycle like this under duress seems like a *really* bad idea.
>
> Are you OK with escape-hatch GUCs that allow the user to opt for v12
> behavior in the event that they experience a regression?

The enable_* options aren't great, and the one added for this is even
stranger since it's an 'enable' option for a particular capability of a
node rather than just a costing change for a node, but I feel like
people generally understand that they shouldn't be messing with the
enable_* options and that they're not really intended for end users.

> The one for the planner is already there, and it looks like we need one
> for the executor as well (to tell HashAgg to ignore the memory limit
> just like v12).

No, ignoring the limit set was, as agreed above, a bug, and I don't
think it makes sense to add some new user tunable for this. If folks
want to let HashAgg use more memory then they can set work_mem higher,
just the same as if they want a Sort node or a HashJoin to use more
memory. Yes, that comes with potential knock-on effects of other nodes
(possibly) using more memory, but that's pretty well understood for all
the other cases, and I don't think it makes sense to have a special
case for HashAgg when the only justification is "well, you see, it used
to have this bug, so...".

Thanks,

Stephen
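[Editor's note: the work_mem alternative described above can be sketched as follows. This is an illustrative fragment, not part of the original message; the value and query are made up, and SET / SET LOCAL are standard PostgreSQL commands.]

```sql
-- Give the session (or a single transaction) more memory per sort/hash
-- operation, instead of relying on a HashAgg-specific escape hatch.
SET work_mem = '256MB';                 -- session-wide; value is illustrative

-- Narrower scope: reverts automatically at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT col, count(*)                    -- hypothetical aggregate-heavy query
FROM big_table
GROUP BY col;
COMMIT;
```

As the message notes, raising work_mem affects every memory-consuming node in the session's plans, not just HashAgg, so the narrower SET LOCAL form limits that exposure to one transaction.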