On Monday, June 26, 2023 23:03:48 CEST, Andres Freund wrote:
> Hi,
> 
> On 2023-06-26 08:38:35 +0200, Ronan Dunklau wrote:
> > I hope what I'm trying to achieve is clearer that way. Maybe this patch is
> > not the best way to go about this, but since the memory allocator
> > behaviour can have such an impact, it's a bit sad we have to leave half
> > the performance on the table because of it when there are easily
> > accessible knobs to avoid it.
> I'm *quite* doubtful this patch is the way to go.  If we want to more
> tightly control memory allocation patterns, because we have more
> information than glibc, we should do that, rather than try to nudge glibc's
> malloc in a random direction.  In contrast to a generic malloc()
> implementation, we can have much more information about memory lifetimes
> etc. due to memory contexts.

Yes, this is probably more appropriate, but it is also a much larger change 
with a greater risk of regression, especially as we would have to make sure 
we're not overfitting our own code to a specific malloc implementation, to the 
detriment of others. Unless you were hinting we should write our own allocator 
directly instead?

> 
> We e.g. could keep a larger number of memory blocks reserved
> ourselves. Possibly by delaying the release of additionally held blocks
> until we have been idle for a few seconds or such.

I think keeping work_mem worth of memory around after it has been used a 
couple of times makes sense. This is the memory a user is willing to dedicate 
to operations, after all.
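
For illustration, here is a minimal sketch of what that delayed release could 
look like (hypothetical names and layout, none of this exists in PostgreSQL 
today): instead of free()ing a spare block immediately, park it on a list and 
only release it once it has sat unused past an idle threshold.

#include <stdlib.h>
#include <time.h>

#define SPARE_KEEP_SECONDS 5	/* release blocks idle longer than this */

typedef struct SpareBlock
{
	struct SpareBlock *next;
	size_t		size;			/* allocated size, for accounting */
	time_t		freed_at;		/* when the block became unused */
} SpareBlock;

static SpareBlock *spare_blocks = NULL;

/*
 * Instead of returning a block to malloc right away, keep it around for
 * reuse.  Assumes the block is at least sizeof(SpareBlock) bytes.
 */
static void
park_block(void *block, size_t size)
{
	SpareBlock *sb = (SpareBlock *) block;

	sb->size = size;
	sb->freed_at = time(NULL);
	sb->next = spare_blocks;
	spare_blocks = sb;
}

/*
 * Called once the backend has been idle for a while: actually free()
 * blocks that have been parked longer than the threshold.
 */
static void
trim_spare_blocks(void)
{
	time_t		now = time(NULL);
	SpareBlock **prev = &spare_blocks;

	while (*prev != NULL)
	{
		SpareBlock *sb = *prev;

		if (now - sb->freed_at >= SPARE_KEEP_SECONDS)
		{
			*prev = sb->next;
			free(sb);
		}
		else
			prev = &sb->next;
	}
}

The hard part is of course the policy: per-context vs. backend-wide lists, 
what counts as "idle", and how much to keep (work_mem seems like a natural 
cap).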

> 
> 
> WRT the difference in TPS in the benchmark you mention - I suspect that
> we are doing something bad that needs to be improved regardless of the
> underlying memory allocator implementation.  Due to the lack of detailed
> instructions I couldn't reproduce the results immediately.

I re-attached the simple script I used. I've run this script with different 
values for glibc_malloc_max_trim_threshold. 
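
For reference, the knob the script varies ultimately boils down to glibc's 
trim/mmap thresholds; on the C side, a GUC like this would presumably map 
onto something like the following (a sketch only, glibc-specific and not 
portable):

#include <malloc.h>

static void
set_glibc_trim_threshold(int bytes)
{
	/*
	 * Free memory at the top of the heap is only given back to the
	 * kernel once it exceeds this threshold, so a higher value makes
	 * glibc keep memory around for reuse instead of trimming it.
	 */
	mallopt(M_TRIM_THRESHOLD, bytes);

	/*
	 * Often adjusted together: allocations larger than this bypass the
	 * heap and use mmap() directly, and are thus always returned to the
	 * kernel on free().  Note that setting either parameter explicitly
	 * disables glibc's dynamic adjustment of both.
	 */
	mallopt(M_MMAP_THRESHOLD, bytes);
}

The same thresholds can also be set from outside the server via the 
MALLOC_TRIM_THRESHOLD_ and MALLOC_MMAP_THRESHOLD_ environment variables, 
which is convenient for benchmarking without a patched binary.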

Best regards,

--
Ronan Dunklau

Attachment: bench.sh
Description: application/shellscript
