Greetings,

* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> On 2020-Jun-25, Andres Freund wrote:
> 
> > >My point here is that maybe we don't need to offer a GUC to explicitly
> > >turn spilling off; it seems sufficient to let users change work_mem so
> > >that spilling will naturally not occur.  Why do we need more?
> > 
> > That's not really a useful escape hatch, because it'll often lead to
> > other nodes using more memory.
> 
> Ah -- other nodes in the same query -- you're right, that's not good.

That's exactly how the system has operated for, basically, forever, for
every node type.  Yes, it would be good to have a way to manage the
overall amount of memory that a query is allowed to use, but that's a
huge change, and inventing some new 'hash_mem' or similar GUC doesn't
strike me as a move in the right direction: are we going to have
sort_mem next?  What if one large hash table for aggregation would be
fine, but letting another aggregate in the same query use that much
memory would run the system out of memory?  Yes, we need to do better,
but inventing new node_mem GUCs isn't the direction to go in.
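
To make the per-node point concrete, here's a rough sketch (table and
column names are made up) of how a single query can consume several
multiples of work_mem, since the limit applies to each sort or hash
node independently:

    -- work_mem is a per-node limit, not a per-query one, so raising it to
    -- keep one HashAgg in memory also raises the budget for every other
    -- sort/hash node in the same plan.
    SET work_mem = '512MB';

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT o.region, count(*)
    FROM   orders o
    JOIN   customers c USING (customer_id)  -- hash join: one work_mem budget
    GROUP  BY o.region;                     -- hash aggregate: another budget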

That HashAgg previously didn't care that it was going way over work_mem
was, if anything, a bug.  Inventing new GUCs late in the cycle like
this, under duress, seems like a *really* bad idea.  Yes, people are
going to have to adjust work_mem if they want these queries to keep
using far more memory than the planner thought they would need.  But in
many of the kinds of cases I think you're worrying about, the stats
aren't actually that far off, and people had already increased work_mem
to get the HashAgg plan in the first place.
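
For what it's worth, a hedged sketch of how someone could keep the old
in-memory behaviour for one specific query without touching the
cluster-wide setting (the size here is made up):

    BEGIN;
    -- SET LOCAL confines the change to this transaction, so other
    -- sessions and queries don't pick up the larger per-node budget.
    SET LOCAL work_mem = '1GB';
    SELECT ...;   -- the aggregate-heavy query that used to run in memory
    COMMIT;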

I'm also in favor of having enable_hashagg_disk default to true, just
like all of the other enable_* GUCs.

Thanks,

Stephen
