On Wed, May 23, 2012 at 2:09 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
> On Wed, May 23, 2012 at 10:33 AM, Amit Kapila <amit.kap...@huawei.com> wrote:
>>> I don't think there is a clear picture yet of what benchmark to use
>>> for testing changes here.
>> I will first try to generate such a scenario (benchmark); I have not
>> fully thought it through yet.  The idea in my mind is a scenario where
>> the buffer list is heavily operated upon: shared buffers are much
>> smaller than the data on disk, and the operations are distributed such
>> that they need to access most of that on-disk data randomly.
>
> If most buffer reads actually have to go to disk, that will throttle
> your throughput so much that nothing else will be relevant.  You need
> shared_buffers to be much smaller than RAM, and almost all of the
> "disk" data resident in RAM but not in shared_buffers.
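
To make that concrete, a setup along these lines should produce the
scenario Amit is after.  The sizes are only illustrative (I'm assuming
a machine with around 64GB of RAM, and the usual rule of thumb that one
pgbench scale unit is roughly 15MB of data):

    # postgresql.conf: keep shared_buffers well below RAM
    shared_buffers = 2GB

    # ~30GB of pgbench data: small enough to stay resident in the OS
    # cache, far too big for shared_buffers
    createdb bench
    pgbench -i -s 2000 bench

    # warm the OS cache, so that subsequent reads miss shared_buffers
    # but never actually touch the disk
    cat $PGDATA/base/*/* > /dev/null

That way nearly every buffer read exercises the buffer replacement
machinery, but the read itself is served from memory, so the locking
and list-manipulation costs aren't hidden behind disk waits.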

But this is pretty common, since we advise people to set
shared_buffers relatively low compared to physical memory.  The
problem is visible in the graph I posted here:

http://rhaas.blogspot.com/2012/03/performance-and-scalability-on-ibm.html

When the scale factor gets large enough to exceed shared_buffers,
performance peaks in the 36-44 client range.  When it's small enough
to fit in shared_buffers, performance continues to increase through 64
clients and even a bit beyond.  Note that the absolute *performance*
is not much worse with the larger scale factor, if you have only one
client.  It's the *scalability* that goes out the window.
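
Those numbers come from pgbench; a client-count sweep of that general
shape looks something like this (read-only runs shown; the exact flags
and durations here are illustrative, not necessarily what that run
used):

    for c in 1 8 16 32 44 64; do
        pgbench -S -c $c -j $c -T 300 bench
    done

Setting -j to match -c keeps the pgbench client itself from becoming
the bottleneck at the higher client counts.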

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
