Re: Organize working memory under per-PlanState context

2025-08-20 Thread Andrei Lepikhov
On 20/8/2025 19:00, Jeff Davis wrote: On Wed, 2025-08-20 at 09:22 +0200, Andrei Lepikhov wrote: I'm not sure I understand your reasoning clearly. How do you know that the current subtree will not be rescanned with the same parameter set? Building a hash table repeatedly may be pretty costly, no?

Re: Organize working memory under per-PlanState context

2025-08-20 Thread Jeff Davis
On Wed, 2025-08-20 at 09:22 +0200, Andrei Lepikhov wrote: > I'm not sure I understand your reasoning clearly. How do you know that the current subtree will not be rescanned with the same parameter set? Building a hash table repeatedly may be pretty costly, no? We can check the eflags for EXEC_FLAG_REWIND.

Re: Organize working memory under per-PlanState context

2025-08-20 Thread Tom Lane
Jeff Davis writes: > On Wed, 2025-08-20 at 09:22 +0200, Andrei Lepikhov wrote: >> Building a hash table repeatedly may be pretty costly, no? > We can check the eflags for EXEC_FLAG_REWIND. That might not be the only condition we need to check, but we should know at plan time whether a subtree will be rescanned.

Re: Organize working memory under per-PlanState context

2025-08-20 Thread Andrei Lepikhov
On 20/8/2025 01:34, Jeff Davis wrote: It doesn't do much yet, but it creates infrastructure that will be useful for subsequent patches to make the memory accounting and enforcement more consistent throughout the executor. Does this mean that you are considering flexible memory allocation during execution?

Re: Organize working memory under per-PlanState context

2025-08-20 Thread Andrei Lepikhov
On 20/8/2025 07:38, Chao Li wrote: I know some memory must be retained until the entire query finishes. But those per-node memories, such as a hash table, might be destroyed immediately after a node finishes. I'm not sure I understand your reasoning clearly. How do you know that the current subtree will not be rescanned with the same parameter set?

Re: Organize working memory under per-PlanState context

2025-08-19 Thread Chao Li
> On Aug 20, 2025, at 07:34, Jeff Davis wrote: > > > I understand this is not a final patch, so I would focus on the design: 1. This patch adds ps_WorkMem to PlanState and makes it the parent of the other per-node memory contexts, so that all memory usage within that node is grouped and measurable.

Organize working memory under per-PlanState context

2025-08-19 Thread Jeff Davis
Right now, the work_mem limit is tracked and enforced ad-hoc throughout the executor. Different nodes tally the total memory usage differently, may use different internal data structures (each of which can consume work_mem independently), and decide when to spill based on different criteria, etc.