On Wed, Jun 24, 2020 at 12:36:24PM -0700, Andres Freund wrote:
> Hi,

> On 2020-06-24 15:28:47 -0400, Robert Haas wrote:
> > On Wed, Jun 24, 2020 at 3:14 PM Andres Freund <and...@anarazel.de> wrote:
> > > FWIW, my gut feeling is that we'll end up having to separate the
> > > "execution time" spilling from using plain work_mem, because it'll
> > > trigger spilling too often. E.g. if the plan isn't expected to spill,
> > > only spill at 10 x work_mem or something like that.  Or we'll need
> > > better management of temp file data when there's plenty of memory
> > > available.

> > So, I don't think we can wire in a constant like 10x. That's really
> > unprincipled and I think it's a bad idea. What we could do, though, is
> > replace the existing Boolean-valued GUC with a new GUC that controls
> > the size at which the aggregate spills. The default could be -1,
> > meaning work_mem, but a user could configure a larger value if desired
> > (presumably, we would just treat a value smaller than work_mem as
> > work_mem, and document the same).

> To be clear, I wasn't actually thinking of hard-coding 10x, but having a
> config option that specifies a factor of work_mem. A factor seems better
> because it'll work reasonably for different values of work_mem, whereas
> a concrete size wouldn't.
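
For comparison, a factor-based variant would derive the limit from work_mem
at run time. Again, hashagg_spill_multiplier is a made-up name, used only to
sketch the idea, not an existing or proposed GUC:

#include <stdio.h>

static int    work_mem = 4096;                  /* kB */
static double hashagg_spill_multiplier = 10.0;  /* hypothetical: spill at 10 x work_mem */

int
main(void)
{
    int spill_limit = (int) (work_mem * hashagg_spill_multiplier);

    printf("spill limit = %d kB\n", spill_limit);   /* 40960 kB when work_mem = 4MB */
    return 0;
}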


I'm not quite convinced we need/should introduce a new memory limit.
It's true that keeping it equal to work_mem by default makes this less of
an issue, but it's still another moving part that users will need to learn
how to use.

But if we do introduce a new limit, I very much think it should be a
plain limit, not a factor. A factor just makes it even more complicated,
and we don't have any limit defined as a multiple of another GUC yet.


> > I think that's actually pretty appealing. Separating the memory we
> > plan to use from the memory we're willing to use before spilling seems
> > like a good idea in general, and I think we should probably also do it
> > in other places - like sorts.

> Indeed. And then perhaps we could eventually add some reporting /
> monitoring infrastructure for the cases where the plan-time memory
> estimate and the execution-time usage differ widely.


I wouldn't mind something like that in general - not just for hashagg,
but for various other nodes.
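
Just to illustrate the separation being discussed (the memory the planner
assumed vs. the memory a node may use before spilling), here is a tiny
standalone C sketch; the struct and names are invented for this example and
don't correspond to any executor code:

#include <stdbool.h>
#include <stdio.h>

typedef struct NodeMemoryBudget
{
    int planned_kB;       /* what the planner assumed, e.g. work_mem */
    int spill_limit_kB;   /* what the executor tolerates before spilling */
} NodeMemoryBudget;

static bool
must_spill(const NodeMemoryBudget *budget, int used_kB)
{
    return used_kB > budget->spill_limit_kB;
}

int
main(void)
{
    /* Plan assumed 4MB, execution is allowed up to 40MB before spilling. */
    NodeMemoryBudget budget = { .planned_kB = 4096, .spill_limit_kB = 40960 };

    printf("8MB used  -> spill? %d\n", must_spill(&budget, 8192));    /* 0 */
    printf("64MB used -> spill? %d\n", must_spill(&budget, 65536));   /* 1 */

    /*
     * A large gap between planned_kB and actual usage is the kind of thing
     * the reporting/monitoring infrastructure mentioned above could flag.
     */
    return 0;
}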


regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

