Erik Jones wrote:

max_connections = 2400

That is WAY too high. Get a real pooler, such as pgpool, and drop that down to 1000 and test from there. I see you mentioned 500 concurrent connections. Is each of those connections actually doing something? My guess is that once you cut down on the number of actual connections you'll find that each connection can get its work done faster, and you'll see that number drop significantly.

It's not an issue for me - I'm expecting *never* to top 100 concurrent connections, and many of those will be idle, with the usual load being closer to 30 connections. Big stuff ;-)
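Not that I need it here, but for anyone who does want to cut connection counts without deploying pgpool straight away, even a client-side pool helps. A rough sketch using psycopg2's built-in pool (the DSN and pool sizes are made up, and a real pgpool/pgbouncer in front of the server is still the better answer when many different clients are involved):

# Client-side pooling sketch with psycopg2 (illustration only).
from psycopg2.pool import ThreadedConnectionPool

# Keep 2 idle connections around, never more than 20 to the server
# (made-up numbers - tune to your workload).
pool = ThreadedConnectionPool(2, 20, "dbname=test user=craig")

conn = pool.getconn()
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchone())
finally:
    pool.putconn(conn)  # hand the connection back rather than closing it

pool.closeall()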

However, I'm curious about what an idle backend really costs.

On my system each backend has an RSS of about 3.8MB, and a psql process tends to be about 3.0MB. However, much of that will be shared library mappings and the like. The real cost per psql instance plus its associated backend appears to be about 1.5MB (measured with 10 connections by watching the change in free system RAM). If I use a little Python program to open 50 connections, free system RAM drops by ~45MB and rises by the same amount when the Python process exits and the backends die, so each backend presumably uses less than 1MB of real, unshared RAM.
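For the curious, the test was nothing fancier than something along these lines (a rough reconstruction, assuming psycopg2 and Linux's /proc/meminfo; the DSN is made up):

# Open N idle connections and watch free system RAM (sketch only).
import time
import psycopg2

DSN = "dbname=test user=craig"   # made-up connection string
N_CONNS = 50

def free_kb():
    # MemFree as reported by the kernel, in kB. Strictly you'd want to
    # account for buffers/cache too, but this was close enough for a
    # rough number.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemFree:"):
                return int(line.split()[1])

before = free_kb()
conns = [psycopg2.connect(DSN) for _ in range(N_CONNS)]
time.sleep(2)   # let the backends settle
after = free_kb()

print("%d idle backends cost ~%d kB (~%d kB each)"
      % (N_CONNS, before - after, (before - after) / N_CONNS))

for c in conns:
    c.close()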

Presumably the backends will grow if they perform some significant queries and are then left idle. I haven't checked that.
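It'd be easy enough to check with something like this (an untested sketch; assumes psycopg2 and Linux's /proc, and the query is just an arbitrary one that forces a big sort):

# Watch a single backend's RSS before and after a memory-hungry query.
import psycopg2

conn = psycopg2.connect("dbname=test user=craig")  # made-up DSN
cur = conn.cursor()

cur.execute("SELECT pg_backend_pid()")
pid = cur.fetchone()[0]

def backend_rss_kb():
    # VmRSS of the backend process, from /proc/<pid>/status
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

print("RSS before: %s kB" % backend_rss_kb())

# Something with a big sort so the backend chews through work_mem
cur.execute("SELECT count(*) FROM"
            " (SELECT g FROM generate_series(1, 1000000) g ORDER BY g DESC) s")
cur.fetchone()

print("RSS while idle again: %s kB" % backend_rss_kb())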

At 1MB of RAM per backend that's not a trivial cost, but it's far from earth-shattering, especially allowing for the OS swapping out backends that are idle for extended periods.

So ... what else does an idle backend cost? Is it reducing the amount of shared memory available for use on complex queries? Are there some lists PostgreSQL must scan for queries that get more expensive to examine as the number of backends rises? Are there locking costs?

--
Craig Ringer

