We have a situation where a long-lived Postgres connection consumes more 
and more memory.  If ignored, we eventually get “Cannot allocate memory” errors 
in the Postgres log.  If still ignored, the box eventually crashes.  This 
takes about 3 weeks to happen.  The connection issues a call to a single 
function about once every 15 seconds.  What can I do to prevent this outcome?

What the function does: The purpose of the function is to refresh a cache in an 
application, not to make meaningful updates.  It does write to the database, but 
only to a temporary table.  If I remove the temporary table (just as an 
experiment; it makes the function useless), the memory consumption does not 
occur.
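
For context, the call pattern looks roughly like the sketch below.  The 
database and function names are placeholders I've made up for illustration, 
not the real ones:

    # Rough sketch of the workload, not the real code.  "appdb" and
    # "refresh_app_cache" are placeholder names.  Inside the function,
    # the refresh boils down to filling a temp table, something like
    #   CREATE TEMP TABLE cache_tmp AS SELECT ...;
    while true; do
        psql -d appdb -c "SELECT refresh_app_cache();"
        sleep 15
    done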

There are no transactions left hanging open and no locks holding resources for 
long periods of time.  The temporary table is about half a megabyte in size, 
roughly 5,500 rows.

The memory usage is measured by totaling the lines beginning with “Private” in 
/proc/1234/smaps, where 1234 is the process ID of the backend for this 
connection.  The memory consumption starts out at under 20 MB but swells to 
hundreds of megabytes over the three weeks.  I have been able to reproduce the 
issue on my own Linux workstation on an accelerated schedule.
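
For reference, the number I'm tracking comes from something like this 
one-liner (1234 again standing in for the backend's PID):

    # Total of all Private_* lines (in kB) for one backend:
    awk '/^Private/ {sum += $2} END {print sum " kB"}' /proc/1234/smaps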

Other loads: None; this is a dedicated Postgres server.

Postgres version: 10.5.  work_mem: 4 MB, shared_buffers: 800 MB, typically 
around 30-40 connections.

Linux: kernel 3.10 on CentOS 7 in production; kernel 4.19 on openSUSE 
Tumbleweed on the workstation where I recreate the issue.

Server: an AWS EC2 t2.medium instance; in other words, 2 vCPUs and 4 GB of 
memory.  Not big, but we do have a bunch of them.

Workaround: We monitor the process and bounce the connection periodically.  I 
don't love this approach.  We could rewrite the function to avoid the temporary 
table, but it would shame me as a DBA to ask a developer to do that :).
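
For what it's worth, the bounce amounts to something like the sketch below.  
The application_name, threshold, and connection details are illustrative, not 
our exact setup:

    #!/bin/sh
    # Illustrative sketch of the workaround: find the cache-refresh backend,
    # total its private memory from smaps, and terminate the connection if it
    # has grown past a threshold (the application then reconnects).
    PID=$(psql -At -c "SELECT pid FROM pg_stat_activity
                       WHERE application_name = 'cache_refresher' LIMIT 1")
    [ -n "$PID" ] || exit 0
    PRIV_KB=$(awk '/^Private/ {sum += $2} END {print sum}' /proc/"$PID"/smaps)
    if [ "$PRIV_KB" -gt 200000 ]; then   # ~200 MB, an arbitrary cutoff
        psql -c "SELECT pg_terminate_backend($PID)"
    fi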

Thanks for any insight!

Tom
