On Fri, May 27, 2011 at 7:19 PM, Jeff Davis wrote:
> On Thu, 2011-05-26 at 09:31 -0500, Merlin Moncure wrote:
>> Where they are most helpful is for masking I/O if
>> a page gets dirtied more than once before it's written out to the heap
>
> Another possible benefit of higher shared_buffers is that it may reduce
> WAL flushes.
On 05/27/2011 07:30 PM, Mark Kirkwood wrote:
Greg, having an example with some discussion like this in the docs
would probably be helpful.
If we put that example into the docs, two years from now there will be
people showing up here saying "I used the recommended configuration from
the docs"
On Thu, 2011-05-26 at 09:31 -0500, Merlin Moncure wrote:
> Where they are most helpful is for masking I/O if
> a page gets dirtied more than once before it's written out to the heap
Another possible benefit of higher shared_buffers is that it may reduce
WAL flushes. A page cannot be evicted from shared_buffers until the WAL up
to that page's LSN has been flushed, so a larger cache can mean fewer
forced WAL flushes.
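For anyone who wants to check whether that effect matters on their own box,
a rough sketch: snapshot pg_stat_bgwriter before and after a test run and
compare how many dirty buffers are written outside of checkpoints. Column
names below are from the 8.3-9.x view; newer releases have rearranged some
of these counters, so treat this as a sketch rather than a recipe.

-- Dirty pages evicted by backends or the bgwriter (rather than written at
-- checkpoint time) are the ones that can force an extra WAL flush up to
-- the page's LSN. A rising share of buffers_backend/buffers_clean after
-- shrinking shared_buffers is a sign this effect is in play.
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend,
       buffers_alloc
FROM pg_stat_bgwriter;

-- Optionally reset the counters between runs (requires superuser):
SELECT pg_stat_reset_shared('bgwriter');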
On 27/05/11 11:10, Greg Smith wrote:
OK, so the key thing to do is create a table whose primary key index is
larger than shared_buffers, then UPDATE that table furiously. This will
page constantly out of the buffer cache to the OS one, doing work that
could be avoided.
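If someone wants to try that at home, a sketch of the setup being described
could look like the following. The table name, row count, and loop size here
are made up; scale them until pg_relation_size() of the index comfortably
exceeds your shared_buffers setting, and use pgbench with a custom script if
you want concurrent clients rather than this crude single-session loop.

SHOW shared_buffers;

-- Build a table whose primary key index is larger than shared_buffers.
CREATE TABLE sb_test (id bigint PRIMARY KEY, payload text);
INSERT INTO sb_test
  SELECT g, repeat('x', 100)
  FROM generate_series(1, 10000000) AS g;

SELECT pg_size_pretty(pg_relation_size('sb_test_pkey')) AS pk_index_size;

-- "UPDATE that table furiously": random single-row updates.
-- Note the whole DO block runs in one transaction, so this is only a
-- stand-in for a real multi-client benchmark.
DO $$
BEGIN
  FOR i IN 1..100000 LOOP
    UPDATE sb_test
       SET payload = payload
     WHERE id = 1 + floor(random() * 10000000)::bigint;
  END LOOP;
END $$;

-- A low hit fraction here suggests index pages are cycling between
-- shared_buffers and the OS cache, the double buffering described above.
SELECT idx_blks_hit, idx_blks_read
FROM pg_statio_user_indexes
WHERE indexrelname = 'sb_test_pkey';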
On Fri, May 27, 2011 at 9:24 PM, Maciek Sakrejda wrote:
> Another +1. While I understand that this is not simple, many users
> will not look outside of standard docs, especially when first
> evaluating PostgreSQL. Merlin is right that the current wording does
> not really mention a downside to cr
>> After failing to get even basic good recommendations for
>> checkpoint_segments into the docs, I completely gave up on focusing there as
>> my primary way to spread this sort of information.
>
> Hmm. That's rather unfortunate. +1 for revisiting that topic, if you
> have the energy for it.
An
On Fri, May 27, 2011 at 2:47 PM, Greg Smith wrote:
> Any attempt to make a serious change to the documentation around performance
> turns into a bikeshedding epic, where the burden of proof to make a change
> is too large to be worth the trouble to me anymore. I first started
> publishing tuning
On Fri, May 27, 2011 at 1:47 PM, Greg Smith wrote:
> Merlin Moncure wrote:
>>
>> That's just plain unfair: I didn't challenge your suggestion nor give
>> you homework.
>
> I was stuck either responding to your challenge, or leaving the impression I
> hadn't done the research to back the suggestions I make if I didn't.
Merlin Moncure wrote:
That's just plain unfair: I didn't challenge your suggestion nor give
you homework.
I was stuck either responding to your challenge, or leaving the
impression I hadn't done the research to back the suggestions I make if
I didn't. That made it a mandatory homework assignment.
Scott Carey wrote:
And there is an OS component to it too. You can actually get away with
shared_buffers at 90% of RAM on Solaris. Linux will explode if you try
that (unless recent kernels have fixed its shared memory accounting).
You can use much larger values for shared_buffers on Solaris
Scott Carey wrote:
> So how far do you go? 128MB? 32MB? 4MB?
Under 8.2 we had to keep shared_buffers less than the RAM on our BBU
RAID controller, which had 256MB -- so it worked best with
shared_buffers in the 160MB to 200MB range. With 8.3 we found that
anywhere from 512MB to 1GB performed well.
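A related sanity check when deciding where in that range to land: the
pg_buffercache contrib module shows what is actually occupying
shared_buffers, so you can see whether your working set even fills the
setting you've picked. A minimal sketch, assuming the contrib module is
available (CREATE EXTENSION needs 9.1+; on older releases install the
contrib SQL script instead):

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top relations by space currently held in shared_buffers.
-- The 8192 assumes the default block size.
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;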
So how far do you go? 128MB? 32MB? 4MB?
Anecdotal and an assumption, but I'm pretty confident that on any server
with at least 1GB of dedicated RAM, setting it any lower than 200MB is not
even going to help latency (assuming checkpoint and log configuration is
in the realm of sane, and connecti
On Thu, May 26, 2011 at 6:10 PM, Greg Smith wrote:
> Merlin Moncure wrote:
>>
>> So, the challenge is this: I'd like to see repeatable test cases that
>> demonstrate regular performance gains > 20%. Double bonus points for
>> cases that show gains > 50%.
>
> Do I run around challenging your suggestions and giving you homework?
On Thu, May 26, 2011 at 4:10 PM, Greg Smith wrote:
>
> As for figuring out how this impacts more complicated cases, I hear
> somebody wrote a book or something that went into pages and pages of detail
> about all this. You might want to check it out.
>
>
I was just going to suggest that there wa
Merlin Moncure wrote:
So, the challenge is this: I'd like to see repeatable test cases that
demonstrate regular performance gains > 20%. Double bonus points for
cases that show gains > 50%.
Do I run around challenging your suggestions and giving you homework?
You have no idea how much eye rolling
Merlin Moncure wrote:
> Kevin Grittner wrote:
>> Merlin Moncure wrote:
>>
>>> So, the challenge is this: I'd like to see repeatable test cases
>>> that demonstrate regular performance gains > 20%. Double bonus
>>> points for cases that show gains > 50%.
>>
>> Are you talking throughput, maximum latency, or some other metric?
On Thu, May 26, 2011 at 11:37 AM, Claudio Freire wrote:
> On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure wrote:
>> The point is what we can prove, because going through the
>> motions of doing that is useful.
>
> Exactly, and whatever you can "prove" will be workload-dependent.
> So you can't prove anything "generally", since no single setting is
> best for all.
On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure wrote:
> The point is what we can prove, because going through the
> motions of doing that is useful.
Exactly, and whatever you can "prove" will be workload-dependent.
So you can't prove anything "generally", since no single setting is
best for all.
On Thu, May 26, 2011 at 10:45 AM, Claudio Freire wrote:
> On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure wrote:
>> Point being: cranking buffers
>> may have been the bee's knees with, say, the 8.2 buffer manager, but
>> present and future improvements may have rendered that change moot or
>> even counterproductive.
On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure wrote:
> Point being: cranking buffers
> may have been the bee's knees with, say, the 8.2 buffer manager, but
> present and future improvements may have rendered that change moot or
> even counterproductive.
I suggest you read the docs on how shared
On Thu, May 26, 2011 at 10:10 AM, Kevin Grittner wrote:
> Merlin Moncure wrote:
>
>> So, the challenge is this: I'd like to see repeatable test cases
>> that demonstrate regular performance gains > 20%. Double bonus
>> points for cases that show gains > 50%.
>
> Are you talking throughput, maximum latency, or some other metric?
Merlin Moncure wrote:
> So, the challenge is this: I'd like to see repeatable test cases
> that demonstrate regular performance gains > 20%. Double bonus
> points for cases that show gains > 50%.
Are you talking throughput, maximum latency, or some other metric?
In our shop the metric we tu
Hello performers, I've long been unhappy with the standard advice
given for setting shared buffers. This includes the stupendously
vague comments in the standard documentation, which suggest certain
settings in order to get 'good performance'. Performance of what?
Connection negotiation speed? N