On Wed, Oct 16, 2019 at 6:30 PM Alexander Pyhalov <a...@sfedu.ru> wrote:
> I see that at some point several postgresql backends start consuming about 16
> GB RAM. If we account for shared_buffers, it means 4 GB RAM for private
> backend memory. How can we achieve such numbers? I don't see any long-running
> (or complex) queries (however, there could be long-running transactions and
> queries to large partitioned tables). But how could they consume 512 *
> work_mem memory?

I'm not sure they are consuming 512 times work_mem. There is a whole
lot of stuff a process can allocate besides work_mem, and understanding
it requires digging into the process memory map (something I'm not good
at!).
For sure, a single process (backend) can allocate up to work_mem for
each memory-hungry node (e.g., Sort or Hash) in a query plan, so one
query can consume multiple times the work_mem value.
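As a rough way to see how much private (non-shared) memory a backend really holds on Linux, one could sum the Private_Clean/Private_Dirty fields of /proc/<pid>/smaps; private memory excludes the shared_buffers mapping, which is shared across backends. A minimal sketch (here using this Python process's own PID as a stand-in for a backend PID):

```python
import os

def private_kb(pid):
    """Sum Private_Clean + Private_Dirty (in kB) over all mappings
    of the given process, as reported by /proc/<pid>/smaps."""
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith(("Private_Clean:", "Private_Dirty:")):
                # Lines look like: "Private_Dirty:      123 kB"
                total += int(line.split()[1])
    return total

print(private_kb(os.getpid()), "kB private")
```

Running this against an actual backend PID (from pg_stat_activity.pid) gives a better picture than RSS alone, since RSS counts touched shared_buffers pages too.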
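To make that concrete, here is a toy worst-case estimate; all the numbers are hypothetical, and real usage depends on whether each node actually fills its budget:

```python
# All values hypothetical, for illustration only.
work_mem_mb = 64            # work_mem setting, in MB
sort_or_hash_nodes = 4      # memory-hungry nodes in one plan (Sort, Hash, ...)
concurrent_backends = 16    # backends running such plans at the same time

# Each node may use up to work_mem, so one backend's worst case is:
per_backend_mb = work_mem_mb * sort_or_hash_nodes
# and across all backends:
total_mb = per_backend_mb * concurrent_backends

print(per_backend_mb)  # 256
print(total_mb)        # 4096
```

This is why the docs warn that work_mem is a per-operation limit, not a per-backend or per-cluster one.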

Luca
