On Mon, Aug 14, 2023 at 8:22 AM Andrey Lepikhov
<a.lepik...@postgrespro.ru> wrote:
>
> Really, the current approach with the final value of consumed memory
> smooths out peaks of memory consumption. I recall examples like massive
> million-element arrays or reparameterization with many partitions where
> the optimizer consumes much additional memory during planning.
> Ideally, to dive into the planner issues, we should have something like
> vacuum's progress reporting, reporting memory consumption at each
> subquery and join level. But that looks like too much for typical queries.

The planner usually finishes within a second. When partitioning is
involved it might take a few dozen seconds, but that is still within a
minute, and we are working to reduce that further, to a couple hundred
milliseconds at most. Tracking memory usage over such a short interval
may not be worth it: the tracking itself might make planning less
efficient, and we might still miss spikes in memory allocation if they
are very short-lived. If the planner runs for more than a few minutes,
maybe we could add some tracking.
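
To make the tradeoff concrete, here is a minimal sketch (not the actual
patch) of the before/after delta style of measurement discussed above,
using MemoryContextMemAllocated(). The wrapper name
plan_query_with_mem_delta and the mem_consumed out parameter are made up
for illustration; the point is that a net delta cannot see allocations
the planner frees before it returns.

#include "postgres.h"

#include "nodes/params.h"
#include "tcop/tcopprot.h"
#include "utils/memutils.h"

/*
 * Hypothetical wrapper: plan a query and report the net memory growth of
 * the context the planner runs in.  Only the final value is sampled, so a
 * large transient allocation that is freed before planning ends never
 * shows up in the reported number (the "smoothed peaks" effect).
 */
static PlannedStmt *
plan_query_with_mem_delta(Query *query, const char *query_string,
                          int cursorOptions, ParamListInfo boundParams,
                          int64 *mem_consumed)
{
    MemoryContext ctx = CurrentMemoryContext;
    int64       before;
    int64       after;
    PlannedStmt *plan;

    /* bytes allocated in this context and its children, before planning */
    before = (int64) MemoryContextMemAllocated(ctx, true);

    plan = pg_plan_query(query, query_string, cursorOptions, boundParams);

    /* ... and after; the delta is the final value of consumed memory */
    after = (int64) MemoryContextMemAllocated(ctx, true);
    *mem_consumed = after - before;

    return plan;
}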

-- 
Best Wishes,
Ashutosh Bapat
