On Wed, May 25, 2011 at 10:58 PM, Craig Ringer
<cr...@postnewspapers.com.au> wrote:
> There might be a very cheap and simple way to help reduce the number of
> people running into problems because they set massive max_connections values
> that their server cannot cope with, instead of using pooling.
>
> In the default postgresql.conf, change:
>
> max_connections = 100                   # (change requires restart)
> # Note:  Increasing max_connections costs ~400 bytes of shared memory
> # per connection slot, plus lock space (see max_locks_per_transaction).
>
> to:
>
> max_connections = 100                   # (change requires restart)
> # WARNING: If you're about to increase max_connections above 100, you
> # should probably be using a connection pool instead. See:
> #     http://wiki.postgresql.org/max_connections
> #
> # Note:  Increasing max_connections costs ~400 bytes of shared memory
> # per connection slot, plus lock space (see max_locks_per_transaction).
> #
>
>
> ... where wiki.postgresql.org/max_connections (which doesn't yet exist)
> explains the throughput costs of too many backends and the advantages of
> configuring a connection pool instead.
>
> Sure, this somewhat contravenes the "users don't read - ever" principle, but
> we can hope that _some_ people will read a comment immediately beside the
> directive they're modifying.

+1 on this idea, although I'm not so sure it's a good idea to point to
the wiki.  Also, all the other .conf explanations are in the standard
docs, so maybe this one should be too.
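
As an aside, for anyone following along who hasn't set up a pool before,
the kind of thing Craig is suggesting people be pointed at is roughly a
pgbouncer config like the sketch below (database name, paths and sizes
are just illustrative, not anything from this thread):

; pgbouncer.ini -- clients connect to pgbouncer on 6432, which keeps a
; small pool of real backends open to PostgreSQL on 5432
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000      ; many application connections...
default_pool_size = 20      ; ...funneled into a few real backends

Applications then point at port 6432 instead of 5432, and PostgreSQL
itself can keep max_connections modest.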

merlin
