Re: Add GUC to tune glibc's malloc implementation.

2023-08-01 Thread Daniel Gustafsson
> On 29 Jun 2023, at 00:31, Andres Freund  wrote:
> On 2023-06-28 07:26:03 +0200, Ronan Dunklau wrote:
>> I see it as a way to have *some* sort of control over the malloc
>> implementation we use, instead of tuning our allocation patterns on top of
>> it while treating it entirely as a black box. As for the tuning, I proposed
>> earlier to replace this parameter, currently expressed in terms of size,
>> with a "profile" (greedy / conservative) to make it easier to pick a
>> sensible value.
> 
> I don't think that makes it very usable - we'll still have idle connections
> use up a lot more memory than now in some cases but not others, even when it
> doesn't help. And it will be very heavily dependent on the OS and glibc
> version.

Based on the comments in this thread, and since no update has been posted
addressing the objections, I will mark this returned with feedback.  Please
feel free to resubmit to a future CF.

--
Daniel Gustafsson





Re: Add GUC to tune glibc's malloc implementation.

2023-06-28 Thread Andres Freund
Hi,

On 2023-06-28 07:26:03 +0200, Ronan Dunklau wrote:
> I see it as a way to have *some* sort of control over the malloc
> implementation we use, instead of tuning our allocation patterns on top of it
> while treating it entirely as a black box. As for the tuning, I proposed
> earlier to replace this parameter, currently expressed in terms of size, with
> a "profile" (greedy / conservative) to make it easier to pick a sensible
> value.

I don't think that makes it very usable - we'll still have idle connections
use up a lot more memory than now in some cases but not others, even when it
doesn't help. And it will be very heavily dependent on the OS and glibc
version.


> On Tuesday, 27 June 2023 at 20:17:46 CEST, Andres Freund wrote:
> > > Unless you were hinting we should write our own directly instead?
> > > > We e.g. could keep a larger number of memory blocks reserved
> > > > ourselves. Possibly by delaying the release of additionally held blocks
> > > > until we have been idle for a few seconds or such.
> > >
> > > I think keeping work_mem around after it has been used a couple of times
> > > makes sense. This is the memory a user is willing to dedicate to
> > > operations, after all.
> >
> > The biggest overhead of returning pages to the kernel is that that triggers
> > zeroing the data during the next allocation. Particularly on multi-node
> > servers that's surprisingly slow.  It's most commonly not the brk() or
> > mmap() themselves that are the performance issue.
> >
> > Indeed, with your benchmark, I see that most of the time, on my dual Xeon
> > Gold 5215 workstation, is spent zeroing newly allocated pages during page
> > faults. That microarchitecture is worse at this than some others, but it's
> > never free (or cache friendly).
>
> I'm not sure I see the practical difference between those, but that's
> interesting. Were you able to reproduce my results?

I see a bit smaller win than what you observed, but it is substantial.


The runtime difference between the "default" and "cached" malloc are almost
entirely in these bits:

cached:
-8.93%  postgres  libc.so.6 [.] __memmove_evex_unaligned_erms
   - __memmove_evex_unaligned_erms
  + 6.77% minimal_tuple_from_heap_tuple
  + 2.04% _int_realloc
  + 0.04% AllocSetRealloc
0.02% 0x56281094806f
0.02% 0x56281094e0bf

vs

uncached:

-   14.52%  postgres  libc.so.6 [.] __memmove_evex_unaligned_erms
 8.61% asm_exc_page_fault
   - 5.91% __memmove_evex_unaligned_erms
  + 5.78% minimal_tuple_from_heap_tuple
0.04% 0x560130a2900f
0.02% 0x560130a20faf
  + 0.02% AllocSetRealloc
  + 0.02% _int_realloc

+3.81%  postgres  [kernel.vmlinux]  [k] native_irq_return_iret
+1.88%  postgres  [kernel.vmlinux]  [k] __handle_mm_fault
+1.76%  postgres  [kernel.vmlinux]  [k] clear_page_erms
+1.67%  postgres  [kernel.vmlinux]  [k] get_mem_cgroup_from_mm
+1.42%  postgres  [kernel.vmlinux]  [k] cgroup_rstat_updated
+1.00%  postgres  [kernel.vmlinux]  [k] get_page_from_freelist
+0.93%  postgres  [kernel.vmlinux]  [k] mtree_range_walk

None of the latter are visible in a profile in the cached case.

I.e. the overhead is encountering page faults and individually allocating the
necessary memory in the kernel.


This isn't surprising; I just wanted to make sure I entirely understand.


Part of the reason this code is a bit worse is that it's using generation.c,
which doesn't cache any part of the context. Not that aset.c's level of
caching would help a lot, given that it caches the context itself, not later
blocks.


> > FWIW, in my experience trimming the brk()ed region doesn't work reliably
> > enough in real world postgres workloads to be worth relying on (from a
> > memory usage POV). Sooner or later you're going to have longer lived
> > allocations placed that will prevent it from happening.
>
> I'm not sure I follow: given that our workload is clearly split at query and
> transaction boundaries, and that we release memory at those points, I've
> assumed (and noticed in practice, albeit not on a production system) that
> most memory at the top of the heap would be trimmable, as we don't keep much
> around in between queries / transactions.

That's true for very simple workloads, but once you're beyond that you just
need some longer-lived allocation to happen. E.g. some relcache / catcache
miss during the query execution, and there's no extant memory in
CacheMemoryContext, so a new block is allocated.
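
A standalone toy program (glibc-specific, not PostgreSQL code, sizes picked
arbitrarily) shows how little it takes: one small allocation carved above a
large freed region is enough to keep free() from trimming the heap top, so the
large region stays attached to the process:

#include <malloc.h>
#include <stdlib.h>

int
main(void)
{
    char *big;
    char *small;

    /* Keep the large allocation on the brk() heap rather than in an mmap(). */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024 * 1024);

    big = malloc(32 * 1024 * 1024);     /* think: a work_mem-sized buffer */
    small = malloc(64 * 1024);          /* think: a catcache/relcache block */
    if (big == NULL || small == NULL)
        return 1;
    big[0] = 1;
    small[0] = 1;

    /*
     * "small" was carved from the top chunk after "big", so it normally sits
     * above it in the heap.  Freeing "big" therefore cannot shrink the heap
     * with brk(); the 32MB stays in malloc's free lists ("system bytes"
     * stays high in the statistics below) until "small" is gone as well.
     */
    free(big);
    malloc_stats();

    free(small);
    return 0;
}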

Greetings,

Andres Freund




Re: Add GUC to tune glibc's malloc implementation.

2023-06-27 Thread Ronan Dunklau
On Tuesday, 27 June 2023 at 20:17:46 CEST, Andres Freund wrote:
> > Yes, this is probably much more appropriate, but it is a much larger change
> > with greater risk of regression, especially as we have to make sure we're
> > not overfitting our own code to a specific malloc implementation, to the
> > detriment of others.
> 
> I think your approach is fundamentally overfitting our code to a specific
> malloc implementation, in a way that's not tunable by mere mortals. It just
> seems like a dead end to me.

I see it as a way to have *some* sort of control over the malloc
implementation we use, instead of tuning our allocation patterns on top of it
while treating it entirely as a black box. As for the tuning, I proposed
earlier to replace this parameter, currently expressed in terms of size, with
a "profile" (greedy / conservative) to make it easier to pick a sensible
value.

> 
> > Unless you were hinting we should write our own directly instead?
> 
> I don't think we should write our own malloc - we don't rely on it much
> ourselves. And if we replace it, we need to care about mallocs performance
> characteristics a whole lot, because various libraries etc do heavily rely
> on it.
> 
> However, I do think we should eventually avoid using malloc() for aset.c et
> al. malloc() is a general allocator, but at least for allocations below
> > maxBlockSize aset.c doesn't do allocations in a way that really benefits
> from that *at all*. It's not a lot of work to do such allocations on our
> own.
> > > We e.g. could keep a larger number of memory blocks reserved
> > > ourselves. Possibly by delaying the release of additionally held blocks
> > > until we have been idle for a few seconds or such.
> > 
> > I think keeping work_mem around after it has been used a couple of times
> > makes sense. This is the memory a user is willing to dedicate to
> > operations, after all.
> 
> The biggest overhead of returning pages to the kernel is that that triggers
> zeroing the data during the next allocation. Particularly on multi-node
> servers that's surprisingly slow.  It's most commonly not the brk() or
> mmap() themselves that are the performance issue.
> 
> Indeed, with your benchmark, I see that most of the time, on my dual Xeon
> Gold 5215 workstation, is spent zeroing newly allocated pages during page
> faults. That microarchitecture is worse at this than some others, but it's
> never free (or cache friendly).

I'm not sure I see the practical difference between those, but that's 
interesting. Were you able to reproduce my results?

> FWIW, in my experience trimming the brk()ed region doesn't work reliably
> enough in real world postgres workloads to be worth relying on (from a
> memory usage POV). Sooner or later you're going to have longer lived
> allocations placed that will prevent it from happening.

I'm not sure I follow: given that our workload is clearly split at query and
transaction boundaries, and that we release memory at those points, I've
assumed (and noticed in practice, albeit not on a production system) that most
memory at the top of the heap would be trimmable, as we don't keep much around
in between queries / transactions.

> 
> I have played around with telling aset.c that certain contexts are long
> lived and using mmap() for those, to make it more likely that the libc
> malloc/free can actually return memory to the system. I think that can be
> quite worthwhile.

So if I understand your different suggestions, we should:
 - use mmap ourselves for what we deem to be "one-off" allocations, to make
sure that memory is not hanging around after we are done with it;
 - keep some pool allocated which will not be freed in between queries, but
reused the next time we need it.

Thank you for looking at this problem.

Regards,

--
Ronan Dunklau







Re: Add GUC to tune glibc's malloc implementation.

2023-06-27 Thread Andres Freund
Hi,

On 2023-06-27 08:35:28 +0200, Ronan Dunklau wrote:
> On Monday, 26 June 2023 at 23:03:48 CEST, Andres Freund wrote:
> > Hi,
> >
> > On 2023-06-26 08:38:35 +0200, Ronan Dunklau wrote:
> > > I hope what I'm trying to achieve is clearer that way. Maybe this patch is
> > > not the best way to go about this, but since the memory allocator
> > > behaviour can have such an impact it's a bit sad we have to leave half
> > > the performance on the table because of it when there are easily
> > > accessible knobs to avoid it.
> > I'm *quite* doubtful this patch is the way to go.  If we want to more
> > tightly control memory allocation patterns, because we have more
> > information than glibc, we should do that, rather than try to nudge glibc's
> > malloc in random direction.  In contrast a generic malloc() implementation
> > we can have much more information about memory lifetimes etc due to memory
> > contexts.
>
> Yes, this is probably much more appropriate, but it is a much larger change
> with greater risk of regression, especially as we have to make sure we're not
> overfitting our own code to a specific malloc implementation, to the
> detriment of others.

I think your approach is fundamentally overfitting our code to a specific
malloc implementation, in a way that's not tunable by mere mortals. It just
seems like a dead end to me.


> Unless you were hinting we should write our own directly instead?

I don't think we should write our own malloc - we don't rely on it much
ourselves. And if we replace it, we need to care about mallocs performance
characteristics a whole lot, because various libraries etc do heavily rely on
it.

However, I do think we should eventually avoid using malloc() for aset.c et
al. malloc() is a general allocator, but at least for allocations below
maxBlockSize aset.c doesn't do allocations in a way that really benefits from
that *at all*. It's not a lot of work to do such allocations on our own.


> > We e.g. could keep a larger number of memory blocks reserved
> > ourselves. Possibly by delaying the release of additionally held blocks
> > until we have been idle for a few seconds or such.
>
> I think keeping work_mem around after it has been used a couple of times
> makes sense. This is the memory a user is willing to dedicate to operations,
> after all.

The biggest overhead of returning pages to the kernel is that that triggers
zeroing the data during the next allocation. Particularly on multi-node
servers that's surprisingly slow.  It's most commonly not the brk() or mmap()
themselves that are the performance issue.

Indeed, with your benchmark, I see that most of the time, on my dual Xeon Gold
5215 workstation, is spent zeroing newly allocated pages during page
faults. That microarchitecture is worse at this than some others, but it's
never free (or cache friendly).


> > WRT the difference in TPS in the benchmark you mention - I suspect that
> > we are doing something bad that needs to be improved regardless of the
> > underlying memory allocator implementation.  Due to the lack of detailed
> > instructions I couldn't reproduce the results immediately.
>
> I re-attached the simple script I used. I've run this script with different
> values for glibc_malloc_max_trim_threshold.

FWIW, in my experience trimming the brk()ed region doesn't work reliably
enough in real world postgres workloads to be worth relying on (from a memory
usage POV). Sooner or later you're going to have longer lived allocations
placed that will prevent it from happening.

I have played around with telling aset.c that certain contexts are long lived
and using mmap() for those, to make it more likely that the libc malloc/free
can actually return memory to the system. I think that can be quite
worthwhile.
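
For readers following along, the mechanics of that look roughly like the
sketch below (a standalone illustration, not the experimental aset.c change;
the function names are made up).  Blocks for such contexts come straight from
mmap() and go back with munmap(), so they never land on the brk() heap where a
long-lived block could pin the heap top:

#include <stddef.h>
#include <sys/mman.h>

/* Allocate a block for a long-lived context directly from the kernel. */
static void *
longlived_block_alloc(size_t size)
{
    void *block = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    return (block == MAP_FAILED) ? NULL : block;
}

/* Release it straight back to the kernel, independent of malloc's heap. */
static void
longlived_block_free(void *block, size_t size)
{
    munmap(block, size);
}

int
main(void)
{
    void *block = longlived_block_alloc(1024 * 1024);

    if (block != NULL)
        longlived_block_free(block, 1024 * 1024);
    return 0;
}

The trade-off is that each such block costs a syscall plus page faults on
first use, which is why it only pays off for contexts that really are long
lived.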

Greetings,

Andres Freund




Re: Add GUC to tune glibc's malloc implementation.

2023-06-27 Thread Ronan Dunklau
On Tuesday, 27 June 2023 at 08:35:28 CEST, Ronan Dunklau wrote:
> I re-attached the simple script I used. I've run this script with different
> values for glibc_malloc_max_trim_threshold.

I forgot to add that it was using default parameters except for work_mem, set
to 32MB, and max_parallel_workers_per_gather set to zero.







Re: Add GUC to tune glibc's malloc implementation.

2023-06-27 Thread Ronan Dunklau
On Monday, 26 June 2023 at 23:03:48 CEST, Andres Freund wrote:
> Hi,
> 
> On 2023-06-26 08:38:35 +0200, Ronan Dunklau wrote:
> > I hope what I'm trying to achieve is clearer that way. Maybe this patch is
> > not the best way to go about this, but since the memory allocator
> > behaviour can have such an impact it's a bit sad we have to leave half
> > the performance on the table because of it when there are easily
> > accessible knobs to avoid it.
> I'm *quite* doubtful this patch is the way to go.  If we want to more
> tightly control memory allocation patterns, because we have more
> information than glibc, we should do that, rather than try to nudge glibc's
> malloc in random direction.  In contrast a generic malloc() implementation
> we can have much more information about memory lifetimes etc due to memory
> contexts.

Yes, this is probably much more appropriate, but it is a much larger change
with greater risk of regression, especially as we have to make sure we're not
overfitting our own code to a specific malloc implementation, to the detriment
of others. Unless you were hinting we should write our own directly instead?

> 
> We e.g. could keep a larger number of memory blocks reserved
> ourselves. Possibly by delaying the release of additionally held blocks
> until we have been idle for a few seconds or such.

I think keeping work_mem around after it has been used a couple of times makes
sense. This is the memory a user is willing to dedicate to operations, after
all.

> 
> 
> WRT the difference in TPS in the benchmark you mention - I suspect that
> we are doing something bad that needs to be improved regardless of the
> underlying memory allocator implementation.  Due to the lack of detailed
> instructions I couldn't reproduce the results immediately.

I re-attached the simple script I used. I've run this script with different 
values for glibc_malloc_max_trim_threshold. 

Best regards,

--
Ronan Dunklau


bench.sh
Description: application/shellscript


Re: Add GUC to tune glibc's malloc implementation.

2023-06-26 Thread Andres Freund
Hi,

On 2023-06-26 08:38:35 +0200, Ronan Dunklau wrote:
> I hope what I'm trying to achieve is clearer that way. Maybe this patch is not
> the best way to go about this, but since the memory allocator behaviour can
> have such an impact it's a bit sad we have to leave half the performance on
> the table because of it when there are easily accessible knobs to avoid it.

I'm *quite* doubtful this patch is the way to go.  If we want to more tightly
control memory allocation patterns, because we have more information than
glibc, we should do that, rather than try to nudge glibc's malloc in random
direction.  In contrast a generic malloc() implementation we can have much
more information about memory lifetimes etc due to memory contexts.

We e.g. could keep a larger number of memory blocks reserved
ourselves. Possibly by delaying the release of additionally held blocks until
we have been idle for a few seconds or such.
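
As a rough sketch of that idea (purely illustrative; the block size, cache
size and timeout below are invented, and this is not a proposal for the actual
mechanism), freed blocks could first be parked in a small per-process cache
and only handed back to libc once they have sat there unused for a while:

#include <stdlib.h>
#include <time.h>

#define BLOCK_SIZE      (8 * 1024 * 1024)
#define MAX_CACHED      8
#define RELEASE_AFTER   5           /* seconds of idleness before release */

typedef struct CachedBlock
{
    void   *mem;
    time_t  freed_at;
} CachedBlock;

static CachedBlock cache[MAX_CACHED];
static int ncached = 0;

static void *
block_alloc(void)
{
    if (ncached > 0)
        return cache[--ncached].mem;    /* reuse a cached block */
    return malloc(BLOCK_SIZE);
}

static void
block_free(void *mem)
{
    if (ncached < MAX_CACHED)
    {
        cache[ncached].mem = mem;
        cache[ncached].freed_at = time(NULL);
        ncached++;
    }
    else
        free(mem);                      /* cache full, return it to libc */
}

/* Would be called when the backend notices it has been idle for a while. */
static void
block_cache_prune(void)
{
    time_t now = time(NULL);
    int i = 0;

    while (i < ncached)
    {
        if (now - cache[i].freed_at >= RELEASE_AFTER)
        {
            free(cache[i].mem);
            cache[i] = cache[--ncached];    /* swap the last entry in */
        }
        else
            i++;
    }
}

int
main(void)
{
    void *block = block_alloc();

    block_free(block);      /* parked in the cache, not free()d yet */
    block = block_alloc();  /* comes straight back out of the cache */
    block_free(block);
    block_cache_prune();    /* would free() it once RELEASE_AFTER seconds
                             * have elapsed since block_free() */
    return 0;
}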


WRT the difference in TPS in the benchmark you mention - I suspect that we
are doing something bad that needs to be improved regardless of the underlying
memory allocator implementation.  Due to the lack of detailed instructions I
couldn't reproduce the results immediately.

Greetings,

Andres Freund




Re: Add GUC to tune glibc's malloc implementation.

2023-06-26 Thread Andres Freund
Hi,

On 2023-06-22 15:35:12 +0200, Ronan Dunklau wrote:
> This can cause problems. Let's take for example a table with 10k rows, and 32 
> columns (as defined by a bench script David Rowley shared last year when 
> discussing the GenerationContext for tuplesort), and execute the following 
> query, with 32MB of work_mem:
> 
> select * from t order by a offset 10;
>
> I attached the results of the 10k rows / 32 columns / 32MB work_mem benchmark 
> with different values for glibc_malloc_max_trim_threshold.

Could you provide instructions for the benchmark that don't require digging
into the archive to find an email by David?

Greetings,

Andres Freund




Re: Add GUC to tune glibc's malloc implementation.

2023-06-26 Thread Ronan Dunklau
On Friday, 23 June 2023 at 22:55:51 CEST, Peter Eisentraut wrote:
> On 22.06.23 15:35, Ronan Dunklau wrote:
> > The thing is, by default, those parameters are adjusted dynamically by
> > glibc itself. It starts with quite small thresholds, and raises them when
> > the program frees some memory, up to a certain limit. This patch proposes
> > a new GUC allowing the user to adjust those settings according to their
> > workload.
> > 
> > This dynamic adjustment can cause problems. Let's take, for example, a
> > table with 10k rows and 32 columns (as defined by a bench script David
> > Rowley shared last year when discussing the GenerationContext for
> > tuplesort), and execute the following query, with 32MB of work_mem:

> I don't follow what you are trying to achieve with this.  The examples
> you show appear to work sensibly in my mind.  Using this setting, you
> can save some of the adjustments that glibc does after the first query.
> But that seems only useful if your session only does one query.  Is that
> what you are doing?

No, not at all: glibc does not do the right thing, and we don't "save" it.
I will try to rephrase that.

In the first test case I showed, we see that glibc adjusts its threshold, but
to a suboptimal value: repeated executions of a query needing the same amount
of memory keep releasing that memory back to the kernel and moving the brk
pointer again, and glibc never adjusts the threshold further. On the other
hand, by manually adjusting the thresholds, we can set them to a higher value,
which means that the memory will be kept in malloc's freelist for reuse by the
next queries. As shown in the benchmark results I posted, this can have quite
a dramatic effect, going from 396 tps to 894. For ease of benchmarking, it is
a single query being executed over and over again, but the same thing would be
true if different queries allocating memory were executed by a single backend.

The worst part is that it is unpredictable: depending on past memory
allocation patterns, glibc will end up in different states, and exhibit
completely different performance for all subsequent queries. In fact, this is
what Tomas noticed last year (see [0]), which led to the investigation into
this.

I also tried to show that in certain cases glibc's behaviour can, on the
contrary, be too greedy, and hold on to too much memory if we need that memory
only once and never allocate as much again.

I hope what I'm trying to achieve is clearer that way. Maybe this patch is not 
the best way to go about this, but since the memory allocator behaviour can 
have such an impact it's a bit sad we have to leave half the performance on 
the table because of it when there are easily accessible knobs to avoid it.

[0] 
https://www.postgresql.org/message-id/bcdd4e3e-c12d-cd2b-7ead-a91ad416100a%40enterprisedb.com






Re: Add GUC to tune glibc's malloc implementation.

2023-06-23 Thread Peter Eisentraut

On 22.06.23 15:35, Ronan Dunklau wrote:

The thing is, by default, those parameters are adjusted dynamically by glibc
itself. It starts with quite small thresholds, and raises them when the
program frees some memory, up to a certain limit. This patch proposes a new
GUC allowing the user to adjust those settings according to their workload.

This dynamic adjustment can cause problems. Let's take, for example, a table
with 10k rows and 32 columns (as defined by a bench script David Rowley shared
last year when discussing the GenerationContext for tuplesort), and execute
the following query, with 32MB of work_mem:


I don't follow what you are trying to achieve with this.  The examples 
you show appear to work sensibly in my mind.  Using this setting, you 
can save some of the adjustments that glibc does after the first query. 
But that seems only useful if your session only does one query.  Is that 
what you are doing?






Re: Add GUC to tune glibc's malloc implementation.

2023-06-22 Thread Tom Lane
Ronan Dunklau  writes:
> On Thursday, 22 June 2023 at 15:49:36 CEST, Tom Lane wrote:
>> Aren't these same settings controllable via environment variables?
>> I could see adding some docs suggesting that you set thus-and-such
>> values in the postmaster's startup script.  Admittedly, the confusion
>> argument is perhaps still raisable; but we have a similar docs section
> discussing controlling Linux OOM behavior, and I've not heard many
> complaints about that.

> Yes they are, but controlling them via an environment variable for the whole 
> cluster defeats the point: different backends have different workloads, and 
> being able to make sure for example the OLAP user is memory-greedy while the 
> OLTP one is as conservative as possible is a worthwhile goal.

And what is going to happen when we switch to a thread model?
(I don't personally think that's going to happen, but some other
people do.)  If we only document how to adjust this cluster-wide,
then we won't have a problem with that.  But I'm not excited about
introducing functionality that is both platform-dependent and
unsupportable in a threaded system.

regards, tom lane




Re: Add GUC to tune glibc's malloc implementation.

2023-06-22 Thread Ronan Dunklau
On Thursday, 22 June 2023 at 15:49:36 CEST, Tom Lane wrote:
> This seems like a pretty awful idea, mainly because there's no way
> to have such a GUC mean anything on non-glibc platforms, which is
> going to cause confusion or worse.

I named the GUC glibc_malloc_max_trim_threshold; I hope this is enough to
clear up the confusion. We already have at least event_source, which is
Windows-specific even if that's not clear from the name.

> 
> Aren't these same settings controllable via environment variables?
> I could see adding some docs suggesting that you set thus-and-such
> values in the postmaster's startup script.  Admittedly, the confusion
> argument is perhaps still raisable; but we have a similar docs section
> discussing controlling Linux OOM behavior, and I've not heard many
> complaints about that.

Yes they are, but controlling them via an environment variable for the whole 
cluster defeats the point: different backends have different workloads, and 
being able to make sure for example the OLAP user is memory-greedy while the 
OLTP one is as conservative as possible is a worthwhile goal.  Or even a
specific backend may want to raise its work_mem and adapt glibc behaviour
accordingly, then get back to being conservative with memory until the next 
such transaction. 

Regards,

--
Ronan Dunklau






Re: Add GUC to tune glibc's malloc implementation.

2023-06-22 Thread Tom Lane
Ronan Dunklau  writes:
> Following some conversation with Tomas at PGCon, I decided to resurrect this 
> topic, which was previously discussed in the context of moving tuplesort to 
> use GenerationContext:
> https://www.postgresql.org/message-id/8046109.NyiUUSuA9g%40aivenronan

This seems like a pretty awful idea, mainly because there's no way
to have such a GUC mean anything on non-glibc platforms, which is
going to cause confusion or worse.

Aren't these same settings controllable via environment variables?
I could see adding some docs suggesting that you set thus-and-such
values in the postmaster's startup script.  Admittedly, the confusion
argument is perhaps still raisable; but we have a similar docs section
discussing controlling Linux OOM behavior, and I've not heard much
complaints about that.

regards, tom lane




Add GUC to tune glibc's malloc implementation.

2023-06-22 Thread Ronan Dunklau
Hello, 

Following some conversation with Tomas at PGCon, I decided to resurrect this 
topic, which was previously discussed in the context of moving tuplesort to 
use GenerationContext:
https://www.postgresql.org/message-id/8046109.NyiUUSuA9g%40aivenronan

The idea for this patch is that the behaviour of glibc's malloc can be 
counterproductive for us in some cases. To summarise, glibc's malloc offers 
(among others) two tunable parameters which greatly affect how it allocates
memory. From the mallopt manpage:

   M_TRIM_THRESHOLD
  When the amount of contiguous free memory at the top of
  the heap grows sufficiently large, free(3) employs sbrk(2)
  to release this memory back to the system.  (This can be
  useful in programs that continue to execute for a long
  period after freeing a significant amount of memory.)  

   M_MMAP_THRESHOLD
  For allocations greater than or equal to the limit
  specified (in bytes) by M_MMAP_THRESHOLD that can't be
  satisfied from the free list, the memory-allocation
  functions employ mmap(2) instead of increasing the program
  break using sbrk(2).

The thing is, by default, those parameters are adjusted dynamically by glibc
itself. It starts with quite small thresholds, and raises them when the
program frees some memory, up to a certain limit. This patch proposes a new
GUC allowing the user to adjust those settings according to their workload.
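
(For reference, adjusting these settings from inside a process is just a
couple of mallopt(3) calls; the snippet below is a standalone illustration
with made-up values, not the patch itself.)

#include <malloc.h>
#include <stdlib.h>

int
main(void)
{
    /*
     * Pin both thresholds to 64MB.  Setting them explicitly also disables
     * glibc's dynamic adjustment of the thresholds.
     */
    mallopt(M_TRIM_THRESHOLD, 64 * 1024 * 1024);    /* keep freed memory */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024 * 1024);    /* no mmap() below 64MB */

    /* A work_mem-sized allocation... */
    void *buf = malloc(32 * 1024 * 1024);

    free(buf);
    /* ...can now be reused from the freelist instead of being trimmed back
     * to the kernel and faulted in again on the next execution. */
    buf = malloc(32 * 1024 * 1024);
    free(buf);
    return 0;
}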

This dynamic adjustment can cause problems. Let's take, for example, a table
with 10k rows and 32 columns (as defined by a bench script David Rowley shared
last year when discussing the GenerationContext for tuplesort), and execute
the following query, with 32MB of work_mem:

select * from t order by a offset 10;

On unpatched master, attaching strace to the backend and grepping on brk|mmap, 
we get the following syscalls:

brk(0x55b00df0c000) = 0x55b00df0c000
brk(0x55b00df05000) = 0x55b00df05000
brk(0x55b00df28000) = 0x55b00df28000
brk(0x55b00df52000) = 0x55b00df52000
mmap(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbc49254000
brk(0x55b00df7e000) = 0x55b00df7e000
mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbc48f7f000
mmap(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbc48e7e000
mmap(NULL, 200704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbc4980f000
brk(0x55b00df72000) = 0x55b00df72000
mmap(NULL, 2101248, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbc3d56d000

Using systemtap, we can hook into glibc malloc's static probes to log whenever
it adjusts its values. During the above queries, glibc's malloc raised its
thresholds:

347704: New thresholds: mmap: 2101248 bytes, trim: 4202496 bytes


If we run the query again, we get:

brk(0x55b00dfe2000) = 0x55b00dfe2000
brk(0x55b00e042000) = 0x55b00e042000
brk(0x55b00e0ce000) = 0x55b00e0ce000
brk(0x55b00e1e6000) = 0x55b00e1e6000
brk(0x55b00e216000) = 0x55b00e216000
brk(0x55b00e416000) = 0x55b00e416000
brk(0x55b00e476000) = 0x55b00e476000
brk(0x55b00dfbc000) = 0x55b00dfbc000

This time, our allocations are below the new mmap_threshold, so malloc gets us 
our memory by repeatedly moving the brk pointer. 

When running with the attached patch, and setting the new GUC:

set glibc_malloc_max_trim_threshold = '64MB';

We now get the following syscalls for the same query, for the first run:

brk(0x55b00df0c000) = 0x55b00df0c000
brk(0x55b00df2e000) = 0x55b00df2e000
brk(0x55b00df52000) = 0x55b00df52000
brk(0x55b00dfb2000) = 0x55b00dfb2000
brk(0x55b00e03e000) = 0x55b00e03e000
brk(0x55b00e156000) = 0x55b00e156000
brk(0x55b00e186000) = 0x55b00e186000
brk(0x55b00e386000) = 0x55b00e386000
brk(0x55b00e3e6000) = 0x55b00e3e6000

But for the second run, the allocated memory is kept in malloc's freelist
instead of being released to the kernel, generating no memory-related syscalls
at all. This brings us a significant performance improvement, up to twice the
tps, at the cost of more memory being used by the idle backend.

On the other hand, the default behaviour can also be a problem if a backend 
makes big allocations for a short time and then never needs that amount of 
memory again.

For example, running this query: 

select * from generate_series(1, 100);

We allocate some memory. The first time it's run, malloc will use mmap to
satisfy it. Once it's freed, it will raise its threshold,