Wow. Thanks for the prompt answer.
As a follow-up, I was wondering whether there is a way to tell Postgres
NOT to try to plan/execute a query (and instead throw an error) if its
memory usage would exceed some limit X.
Thanks again.
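[Editor's note: as far as I know, stock PostgreSQL has no setting that refuses a query based on a memory cap, but the "fail with an error instead of being OOM-killed" behavior can be approximated at the OS level by capping the backend's address space (ulimit -v / RLIMIT_AS). A minimal sketch of the idea in Python, with arbitrary limit and allocation sizes:]

```python
import resource

# Cap this process's virtual address space at ~1 GiB (soft and hard).
# Under such a cap, an oversized allocation fails immediately (ENOMEM /
# MemoryError) instead of the process being killed later by the OOM killer.
limit = 1 * 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

try:
    buf = bytearray(2 * 1024 ** 3)  # try to allocate ~2 GiB, over the cap
    outcome = "allocated"
except MemoryError:
    outcome = "refused"

print(outcome)
```

On a system that honors RLIMIT_AS (Linux does), this prints "refused". The trade-off is that the limit applies to the whole process, not to one query, so the backend may fail on an unrelated allocation once it is near the cap.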
Greig
greigwise writes:
> So, I decided to try an experiment. I wrote 2 queries as follows:
> 1 ) select pg_sleep(100) ;
> 2 ) with q (s1, s2) as (select pg_sleep(100), 1)
> select * from q where s2 in (1, <comma-delimited numbers>)
>
> It looks to me like the connection
I had an issue today where the OOM killer terminated one of my postgres
processes.
On my server I have 8 GB of RAM, shared_buffers is 1 GB and work_mem is
24 MB.
I have connection pooling which limits us to 25 connections. Even if I'm
maxed out there, I'm still only using 1.6 GB of RAM of my 8 GB.
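[Editor's note: the worst-case arithmetic above works out as follows. This is a back-of-the-envelope sketch; it ignores per-connection overhead, and a single query can use several work_mem-sized areas at once, so the true ceiling can be higher.]

```python
# Worst case sketched in the message: shared_buffers plus one
# work_mem allocation for each pooled connection.
shared_buffers_mb = 1024   # 1 GB
work_mem_mb = 24
max_connections = 25       # enforced by the connection pooler

worst_case_mb = shared_buffers_mb + max_connections * work_mem_mb
print(worst_case_mb)       # 1624 MB, i.e. about 1.6 GB
```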