On 07/25/2011 09:23 AM, Robert Haas wrote:
> At some point, we also need to sort out the scale factor limit issues,
> so you can make these things bigger.
I had a patch to improve that whole situation, but it hasn't nagged at
me recently. I forget why it seemed less important, but I doubt I'll
make it another six months without coming to some resolution there.
The two systems I have in for benchmarking right now have 128GB and
192GB of RAM in them, so large scale factors can actually get tested.
Unfortunately, it looks like the real-world limiting factor on doing
lots of tests at big scales is how long it takes to populate the data
set. For example, here's pgbench creation time on a big server (48
cores, 128GB RAM) with a RAID10 array, at scale=20000 (292GB):
real 174m12.055s
user 17m35.994s
sys 0m52.358s
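
Those are wall-clock numbers from wrapping the standard initialization
step in time(1). A minimal sketch of the invocation, assuming a
database named pgbench has already been created:

  time pgbench -i -s 20000 pgbench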
And here's the same server putting the default tablespace (but not the
WAL) on [much faster flash device I can't talk about yet]:
Creating new pgbench tables, scale=20000
real 169m59.541s
user 18m19.527s
sys 0m52.833s
I was hoping for a bigger drop here; maybe I needed to use unlogged
tables? (ha!) I think I need to start looking at the pgbench data
generation stage as its own optimization problem. Given how expensive
systems this large are, I never get them for very long before they're
rushed into production. People don't like hearing that just generating
the data set for a useful test is going to take three hours; that tends
to limit how many runs I can schedule.
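
For anyone who wants to try that experiment for real, the 9.1 pgbench
does have a switch for it; same sketch as above, just with the tables
created unlogged:

  time pgbench -i -s 20000 --unlogged-tables pgbench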
And, yes, I'm going to try to sneak in some time to test fast-path
locking on one of these before they head into production.
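
The obvious way to do that is a select-only run at a high client count,
which is the sort of workload where fast-path locking should show up.
A sketch, with client/thread counts that are just placeholders until I
see what this hardware likes:

  pgbench -S -c 64 -j 64 -T 300 pgbench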
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us