On 11/02/2013 07:47 PM, John R Pierce wrote:
On 11/2/2013 11:03 AM, Grzegorz Tańczyk wrote:

Is there any way to limit total memory usage by postgres while keeping the maximum connection limit? The postgresql.conf settings are the defaults for 8.3.23. I need to have 100 connections in the pool.

the size of your connection pool shouldn't be much more than 2-3 times the CPU core count on the server for optimal throughput... 100 queries running at once will grind ANY non-monster server to a standstill
In fact that's what happened when the tsearch2 problem occurred, even though there were only a few queries running at once. A group of idle connections was using resources, and that's the part I don't understand. Did the tsearch2 dictionary caching implementation improve after 8.3 in this respect?

Making the connection pool small will help, but how small should it be? A maximum of 1 connection and a minimum of 0? Connections get closed after the application code releases them, but there will still be a group of postmaster processes, and how can I be sure that none of them grows to 1 GB of system memory? I have no control over this (other than grepping ps output and manually running pg_cancel_backend on them once they grow too much).
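For illustration, the manual workaround I mean looks roughly like this (just a sketch; the 1 GB RSS cutoff is arbitrary, and on 8.3 only pg_cancel_backend is available, so it cancels the backend's current query rather than terminating the process):

    # find postgres backends whose resident set size exceeds ~1 GB
    # (ps reports RSS in KB, so 1048576 KB = 1 GB)
    for pid in $(ps -C postgres -o pid=,rss= | awk '$2 > 1048576 {print $1}'); do
        # cancel the running query of that backend (requires superuser)
        psql -U postgres -c "SELECT pg_cancel_backend($pid);"
    done

Obviously that is a blunt instrument, which is why I'd rather limit the memory in the first place.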

Thanks!

--
Regards,
  Grzegorz


