On 04/28/2014 12:39 PM, Andres Freund wrote:
On 2014-04-28 10:48:30 +0300, Heikki Linnakangas wrote:
On 04/26/2014 09:27 PM, Andres Freund wrote:
I don't think we need to decide this without benchmarks proving the
benefits. I basically want to know whether somebody has an actual
use case - even if I really, really can't think of one - for setting
max_connections even remotely that high. If there's something
fundamental out there that'd make changing the limit impossible, doing
benchmarks wouldn't be worthwhile.

It doesn't seem unreasonable to have a database with tens of thousands of
connections. Sure, performance will suffer, but if the connections sit idle
most of the time so that the total load is low, who cares? You could use a
connection pooler, but it's even better if you don't have to.

65k connections will be absolutely *disastrous* for performance because
of the size of the PGPROC array and the other per-backend structures.

Well, often that's still good enough.

The main reason I want to shrink it is that I want to make buffer
pin/unpin lockless, and all the solutions I can come up with for that
require the flags to be in the same uint32 as the refcount. For
performance it would also be beneficial if the usage count fit in there.
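
Just to make sure I understand the idea, something like the sketch below is
what you're after: one atomic word holding the refcount, usage count and
flags, updated with a compare-and-swap loop in pin/unpin instead of taking
the buffer header spinlock. This is purely illustrative - the bit widths,
names and flag values are invented - and IIUC the shared refcount only needs
room for one pin per backend, which is why lowering the backend limit buys
the bits back:

/*
 * Purely a sketch, not actual code: pack refcount, usage count and flags
 * into a single uint32 so that pinning can be a CAS loop with no spinlock.
 * All bit widths and names here are made up for illustration.
 */
#include <stdatomic.h>
#include <stdint.h>

#define SKETCH_REFCOUNT_ONE     1U
#define SKETCH_REFCOUNT_MASK    ((1U << 18) - 1)   /* 18 bits: >= max backends */
#define SKETCH_USAGECOUNT_ONE   (1U << 18)
#define SKETCH_USAGECOUNT_MASK  (0xFU << 18)       /* 4 bits */
#define SKETCH_USAGECOUNT_MAX   5
#define SKETCH_FLAG_LOCKED      (1U << 22)         /* example flag bit */

typedef struct SketchBufferDesc
{
    _Atomic uint32_t state;     /* flags | usage count | refcount */
} SketchBufferDesc;

/* Pin: bump refcount (and usage count) with a CAS loop, no spinlock. */
static void
SketchPinBuffer(SketchBufferDesc *buf)
{
    uint32_t oldstate = atomic_load(&buf->state);

    for (;;)
    {
        uint32_t newstate;

        /* If another backend has "locked" the header, re-read and retry. */
        if (oldstate & SKETCH_FLAG_LOCKED)
        {
            oldstate = atomic_load(&buf->state);
            continue;
        }

        newstate = oldstate + SKETCH_REFCOUNT_ONE;
        if (((oldstate & SKETCH_USAGECOUNT_MASK) >> 18) < SKETCH_USAGECOUNT_MAX)
            newstate += SKETCH_USAGECOUNT_ONE;

        /* On failure, oldstate is reloaded with the current value; loop. */
        if (atomic_compare_exchange_weak(&buf->state, &oldstate, newstate))
            break;
    }
}

/* Unpin: just drop the refcount; flags and usage count are untouched. */
static void
SketchUnpinBuffer(SketchBufferDesc *buf)
{
    atomic_fetch_sub(&buf->state, SKETCH_REFCOUNT_ONE);
}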

Would it be enough to put only some of the flags in the same uint32?
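What I have in mind is something like this (again just a sketch with made-up
names): keep only the flags that pin/unpin actually needs to see in the
atomically-updated word, and leave the rest in a separate field still
protected by the buffer header spinlock, which would leave more bits free
for the refcount:

/* Sketch of a split layout (names invented): only the flags pin/unpin must
 * see share the atomic word with the refcount; everything else stays behind
 * the buffer header spinlock as today. */
typedef struct SketchSplitBufferDesc
{
    _Atomic uint32_t pin_state;   /* refcount | usage count | pin-related flags */
    uint16_t         other_flags; /* remaining flags, spinlock-protected */
    /* ... tag, spinlock, etc. as today ... */
} SketchSplitBufferDesc;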

- Heikki

