> In the meantime, you might think about switching over to some process
> that doesn't create and drop the same table constantly.  Perhaps
> along the lines of

> create temp table if not exists tt_preTally (...) on commit delete rows;

> if (tt_preTally contains no rows) then
>  insert into tt_preTally select ...
> end if;

> This'd have the advantage of reducing catalog churn in other catalogs
> besides pg_statistic, too.
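
A minimal sketch of that pattern wrapped in a PL/pgSQL function might look like the
following.  The function name, column list, and source query are placeholders, since
the suggestion above elides them with "...":

    CREATE OR REPLACE FUNCTION refresh_pre_tally() RETURNS void
    LANGUAGE plpgsql AS
    $$
    BEGIN
        -- Reuse one temp table across calls instead of CREATE/DROP each time;
        -- ON COMMIT DELETE ROWS empties it at the end of every transaction.
        CREATE TEMP TABLE IF NOT EXISTS tt_preTally (
            id    bigint,   -- placeholder columns: the real definition
            tally bigint    -- was elided ("...") in the suggestion above
        ) ON COMMIT DELETE ROWS;

        -- Repopulate only when the table is empty in this transaction.
        IF NOT EXISTS (SELECT 1 FROM tt_preTally) THEN
            INSERT INTO tt_preTally (id, tally)
            SELECT s.id, count(*)
            FROM some_source_table AS s   -- hypothetical source query
            GROUP BY s.id;
        END IF;
    END;
    $$;

Because the table is created once per session and only truncated at commit, the
catalog rows for it are not rewritten on every call.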
Just to close the loop: doing the CREATE TEMP TABLE IF NOT EXISTS etc. helped a
lot.  There is still a tiny increase in memory over time, but this is much, much
better.  Thanks for the help, and I'm amazed at the responsiveness of Mr. Lane and
this community!

Tom Carroll

On Monday, December 10, 2018, 4:59:47 PM EST, Laurenz Albe <laurenz.a...@cybertec.at> wrote:
 Thomas Carroll wrote:
> We have a situation where a long-persistent Postgres connection consumes more
> and more memory.  If ignored, we eventually get “Cannot allocate memory”
> errors in the Postgres log.  If still ignored, the box will eventually crash.
> This takes about 3 weeks to happen.  It issues a call to a single function
> about once every 15 seconds.  What can I do to prevent this outcome?

There should be a memory context dump in the log file.
What does it look like?

Yours,
Laurenz Albe
--
Cybertec | The PostgreSQL Company

Thanks for your reply.  There are many memory context dumps