On Fri, Sep 09, 2022 at 12:34:15PM -0400, Stephen Frost wrote:
> > While we are at it, what do you think about also recording the max memory
> > allocated by a backend? (could be useful and would avoid sampling for which
> > there is no guarantee to sample the max anyway).

FYI, that's already kind-of available from getrusage:

$ psql ts -c "SET log_executor_stats=on; SET client_min_messages=debug;
SELECT a, COUNT(1) FROM generate_series(1,999999) a GROUP BY 1;" |wc
LOG:  EXECUTOR STATISTICS
...
!       194568 kB max resident size

Note that max RSS also counts memory allocated outside postgres itself
(shared libraries mapped into the process, for example).
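
For a rough comparison, the allocations postgres itself tracks for the
current backend can be summed from pg_backend_memory_contexts (v14+).
That's a point-in-time snapshot of context allocations rather than a
resident-memory high-water mark, so it isn't directly comparable to max
RSS, but it gives an idea of how much is postgres's own:

SELECT pg_size_pretty(sum(total_bytes)) AS tracked_by_contexts
FROM pg_backend_memory_contexts;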

> What would you do with that information..?  By itself, it doesn't strike
> me as useful.  Perhaps it'd be interesting to grab the max required for
> a particular query in pg_stat_statements or such but again, that's a
> very different thing.

log_executor_stats output is only shown at "debug"-level
client_min_messages, so it's not great to enable it even for a single
session, and painful to think about enabling it globally.  Recording the
max directly, as proposed, would be a lot friendlier.

Storing the maxrss per backend somewhere would be useful (and would
avoid the issue of "sampling" with top), and I agree that it ought to be
exposed in a view.  For example, it might help to determine whether (and
which!) backends are using a large multiple of work_mem, and then
whether work_mem can be increased.  If/when we have a "memory budget
allocator", this would help to determine how to set its GUCs, maybe to
see "which backends are using the work_mem that's precluding this other
backend from using an efficient query plan".
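
Just as a sketch of what I mean (assuming a hypothetical max_rss column
in pg_stat_activity, which of course doesn't exist today):

-- max_rss here is hypothetical, not an existing column
SELECT pid, pg_size_pretty(max_rss) AS max_rss, query
FROM pg_stat_activity
WHERE max_rss > 4 * pg_size_bytes(current_setting('work_mem'));

..to flag backends whose peak memory use exceeds some multiple of
work_mem (4 being arbitrary).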

I wonder if it would be better to combine these two threads into one.
The 0001 patch can of course be considered independently of the 0002
patch, as usual.  Right now, there are different parties on the two
threads ...

-- 
Justin

