Kevin Grittner wrote:
"Kevin Grittner" <kevin.gritt...@wicourts.gov> wrote:
Performance tests to follow in a day or two.
I'm looking to beg another week or so on this to run more tests. What
I can have by the end of today is pretty limited, mostly because I
decided it made the most sense to test this with big complex
databases, and it just takes a fair amount of time to throw around
that much data.  (This patch didn't seem likely to make a significant
difference on smaller databases.)
My current plan is to test this on a web server class machine and a
distributed application class machine.  Both database types have over
300 tables, with widely ranging row counts, widths, and index counts.
It would be hard to schedule the requisite time on our biggest web
machines, but I assume an 8 core 64GB machine would give meaningful
results.  Any sense of what numbers of parallel jobs I should use for
the tests?  I would be tempted to try 1 (with the -1 switch), 8, 12,
and 16 -- maybe keep going if 16 beats 12.  My plan here would be to
keep the dump on one machine, run pg_restore there, and push the data
to a database on another machine over the LAN on a 1Gb connection.
(This seems most likely to be what we'd be doing in real life.)  I
would run each test with the CVS trunk tip with and without the patch
applied.  The database is currently 1.1TB.
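For reference, the invocations I have in mind would look roughly like the
following (host name, database name, and dump file path are just
placeholders; the parallel runs need a custom-format archive):

  pg_dump -Fc -f bigdb.dump bigdb                  # custom-format archive (required for -j)
  pg_restore -1 -h dbhost -d bigdb bigdb.dump      # serial baseline, single transaction
  pg_restore -j 8 -h dbhost -d bigdb bigdb.dump    # 8 parallel jobs; repeat with -j 12, -j 16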


You need to be careful here - in my latest round of benchmarking I actually tested with the workload generator on the same box, because on fast boxes we can easily achieve a total load rate of >100MB/s these days. At those load rates you are very close to, or over, the practical limits of GigE...
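Back-of-envelope numbers for that, assuming an otherwise idle link:

  1Gbit/s = 125MB/s of raw bandwidth
  minus TCP/IP and Ethernet framing overhead -> roughly 110-118MB/s of usable payload

so a sustained load rate above 100MB/s is already close to saturating the
wire, and the network rather than pg_restore becomes the bottleneck.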


Stefan
