On 04/26/2014 09:27 PM, Andres Freund wrote:
I don't think we need to decide this without benchmarks proving the
benefits. I basically want to know whether somebody has an actual
usecase - even if I really, really, can't think of one - of setting
max_connections even remotely that high. If
On Mon, Apr 28, 2014 at 7:37 AM, Andres Freund and...@2ndquadrant.com wrote:
Well, often that's still good enough.
That may be true for 2-4k max_connections, but 65k? That won't even
*run*, not to speak of doing something, in most environments because of
the number of processes required.
Robert Haas robertmh...@gmail.com writes:
I think the fact that making 20k connections might crash your computer
is an artifact of other problems that we really ought to also fix
(like per-backend memory utilization, and lock contention on various
global data structures) rather than baking it
On 2014-04-28 10:03:58 -0400, Tom Lane wrote:
What I find much more worrisome about Andres' proposals is that he
seems to be thinking that there are *no* other changes to the buffer
headers on the horizon.
Err. I am not thinking that at all. I am pretty sure I never made that
argument. The
On Fri, Apr 25, 2014 at 11:15 PM, Andres Freund and...@2ndquadrant.com wrote:
Since there's absolutely no sensible scenario for setting
max_connections that high, I'd like to change the limit to 2^16, so we
can use a uint16 in BufferDesc->refcount.
Clearly there's no sensible way to run 64k
Andres Freund and...@2ndquadrant.com writes:
On 2014-04-26 11:52:44 +0100, Greg Stark wrote:
But I don't think it's beyond the realm of possibility
that we'll reduce the overhead in the future with an eye to being able
to do that. Is it that helpful that it's worth baking in more
dependencies
Andres Freund and...@2ndquadrant.com writes:
On 2014-04-26 05:40:21 -0700, David Fetter wrote:
Out of curiosity, where are you finding that a 32-bit integer is
causing problems that a 16-bit one would solve?
Save space? For one, it allows shrinking some structs (into one
cacheline!).
And
On 04/26/2014 11:06 AM, David Fetter wrote:
I know we allow for gigantic numbers of backend connections, but I've
never found a win for 2x the number of cores in the box, which at
least in my experience so far tops out in the 8-bit (in extreme cases
unsigned 8-bit) range.
For my part, I've
On 2014-04-26 13:16:38 -0700, Josh Berkus wrote:
However, I agree with Tom that Andres should show his hand before we
decrease MAX_BACKENDS by 256X.
I just don't want to invest time in developing and benchmarking
something that's not going to be accepted anyway. Thus my question.
Greetings,
On Sat, Apr 26, 2014 at 11:20:56AM -0400, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
What I think it's necessary for is at least:
* Move the buffer content lock inline into the buffer descriptor,
while still fitting into one cacheline.
* lockless/atomic Pin/Unpin
Noah Misch n...@leadboat.com writes:
On Sat, Apr 26, 2014 at 11:20:56AM -0400, Tom Lane wrote:
While I agree with you that it seems somewhat unlikely we'd ever get
past 2^16 backends, these arguments are not nearly good enough to
justify a hard-wired limitation.
I'm satisfied with the
On Sat, Apr 26, 2014 at 1:30 PM, Noah Misch n...@leadboat.com wrote:
Sure, let's not actually commit a patch to impose this limit until the first
change benefiting from doing so is ready to go. There remains an opportunity
to evaluate whether that beneficiary change is better done a different
On Sat, Apr 26, 2014 at 1:58 PM, Peter Geoghegan p...@heroku.com wrote:
The 2Q paper also suggests a correlated reference period.
I withdraw this. 2Q in fact does not have such a parameter, while
LRU-K does. But the other major system I mentioned very explicitly has
a configurable delay that
On 4/26/14, 1:27 PM, Andres Freund wrote:
I don't think we need to decide this without benchmarks proving the
benefits. I basically want to know whether somebody has an actual
usecase - even if I really, really, can't think of one - of setting
max_connections even remotely that high. If there's
Hi,
Currently the maximum for max_connections (+ bgworkers + autovacuum) is
defined by
#define MAX_BACKENDS 0x7fffff
which unfortunately means that some things like buffer reference counts
need a full integer to store references.
Since there's absolutely no sensible scenario for setting
max_connections that high, I'd like to change the limit to 2^16, so we
can use a uint16 in BufferDesc->refcount.