On 12/19/12 6:30 PM, Jeff Davis wrote:
> The idea is to prevent interference from the bgwriter or autovacuum.
> Also, I turn off fsync so that it's measuring the calculation overhead,
> not the effort of actually writing to disk.

With my test server issues sorted, what I did was set up a single 7200RPM drive with a battery-backed write cache card, so that fsync doesn't bottleneck things. I too realized that limit had to be cracked before anything useful could be done. Having the BBWC card is a bit better than fsync=off, because we'll get something closer to a production workload out of it: I/O will be realistic, but limited to what a single drive can pull off.

> Without checksums, it takes about 1000ms. With checksums, about 2350ms.
> I also tested with checksums but without the CHECKPOINT commands above,
> and it was also 1000ms.

I think we need to use a lower checkpoint_segments to try to trigger more checkpoints. My 10-minute pgbench-tools runs will normally have at most 3 checkpoints. I would think something like 10 would be more useful, to make sure we're spending enough time seeing the extra WAL writes.
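As a rough sanity check (my sketch, not anything from pgbench-tools itself), you can estimate how many xlog-triggered checkpoints a run will see by dividing total WAL volume by checkpoint_segments worth of 16MB segments; the numbers in the example are made up:

```python
# Back-of-envelope estimate of xlog-triggered checkpoints per run,
# assuming the pre-9.5 checkpoint_segments model: a checkpoint is
# requested roughly every checkpoint_segments * 16MB of WAL written.
# This ignores time-based checkpoints (checkpoint_timeout).
SEGMENT_BYTES = 16 * 1024 ** 2  # default WAL segment size

def estimated_checkpoints(wal_bytes, checkpoint_segments):
    """Approximate number of WAL-triggered checkpoints for a run."""
    return wal_bytes // (checkpoint_segments * SEGMENT_BYTES)

# Hypothetical 10-minute run writing 2GB of WAL with checkpoint_segments=3
print(estimated_checkpoints(2 * 1024 ** 3, 3))  # 42
```

By this estimate, raising checkpoint_segments shrinks the checkpoint count proportionally, which is why a small value is needed to get more than a handful of checkpoints into a short run.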

> This test is more plausible than the other two, so it's more likely to
> be a real problem. So, the biggest cost of checksums is, by far, the
> extra full-page images in WAL, which matches our expectations.

What I've done with pgbench-tools is measure the amount of WAL generated from the start to the end of the test run. To analyze it you need to scale it a bit; computing "WAL bytes / commit" seems to work.
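To show the arithmetic (a sketch, not the actual pgbench-tools code): sample pg_current_xlog_location() before and after the run, take the difference, and divide by the number of commits. The LSN values below are hypothetical:

```python
# Sketch of the "WAL bytes / commit" calculation.  The X/Y location
# strings would come from pg_current_xlog_location(); these are made up.

def lsn_to_bytes(lsn):
    """Convert an X/Y WAL location string to a linear byte position.

    Uses the simple hi * 2**32 + lo interpretation; this ignores the
    skipped-segment quirk of pre-9.3 servers, which is close enough
    for scaling benchmark results.
    """
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def wal_bytes_per_commit(start_lsn, end_lsn, commits):
    """WAL volume generated during the run, scaled per commit."""
    return (lsn_to_bytes(end_lsn) - lsn_to_bytes(start_lsn)) / commits

# Hypothetical run: ~1.5GB of WAL over 200,000 commits
print(wal_bytes_per_commit("2/10000000", "2/70000000", 200000))  # ~8053
```

Comparing that per-commit figure between checksums on and off is what isolates the extra full-page-image traffic from ordinary run-to-run throughput variation.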

pgbench-tools also launches vmstat and iostat in a way that makes it possible to graph their values later. The interesting results I'm seeing are when the disk is about 80% busy versus when it's 100% busy.

--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers