On 05/09/2011 04:32 PM, Chris Hoover wrote:
So, does anyone have any suggestions/experiences in benchmarking storage when the storage is smaller than 2x memory?

If you do the Linux trick to drop its caches already mentioned, you can start a database test with zero information in memory. In that situation, whether or not everything could fit in RAM doesn't matter as much; you're starting with none of it in there. In that case, you can benchmark things without having twice as much disk space. You just have to recognize that the test becomes less useful the longer you run it, and measure the results accordingly.
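For reference, the cache-dropping trick mentioned above is a sketch like the following (Linux only, requires root; the `drop_caches` sysctl is the standard kernel interface for this):

```shell
# Flush dirty pages to disk first, so the drop is complete and safe
sync
# 3 = drop page cache, dentries, and inodes (1 = page cache only)
echo 3 | sudo tee /proc/sys/vm/drop_caches
```

Stop the PostgreSQL server before doing this if you also want its shared_buffers empty; dropping the OS cache does nothing to memory the database itself holds.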

A test starting from that state will start out showing you random I/O speed on the device, slowly moving toward in-memory cached speeds as the benchmark runs for a while. You really need to capture the latency data for every transaction and graph it over time to make any sense of it. If you look at "Using and Abusing pgbench" at http://projects.2ndquadrant.com/talks , starting on P33 I have several slides showing such a test, done with pgbench and pgbench-tools. I added a quick hack to pgbench-tools around then to make it easier to run this specific type of test, but to my knowledge no one else has ever used it. (I've had talks about PostgreSQL in my yard that were better attended than that session, for which I blame Jonah Harris for doing a great talk in the room next door concurrent with it.)
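The per-transaction latency capture can be done with pgbench's own logging. A hedged sketch (the client count, run length, and database name here are placeholders; the log format shown is the one pgbench of that era writes with -l, one line per transaction to pgbench_log.<pid>):

```shell
# -l logs every transaction; -c clients, -T seconds are just example values
pgbench -c 8 -T 600 -l pgbench_db

# Each log line is: client_id xact_no latency_us file_no epoch_sec epoch_usec
# Turn that into "elapsed-seconds latency-us" pairs, ready to graph:
awk 'NR==1 {t0=$5} {print $5 - t0, $3}' pgbench_log.* > latency_over_time.txt
```

Plotting latency_over_time.txt with gnuplot or a spreadsheet is enough to see the transition from disk-bound to cached speeds as the run progresses.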

--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance