I have done a bunch of benchmarking. It was not easy to get consistent numbers, so I picked one workload and ran the same thing over and over.
I'm running Slackware 13.1 on a desktop computer:

  Linux storm 2.6.35.7-smp #1 SMP Sun Oct 10 21:43:07 CDT 2010 i686
  AMD Athlon(tm) 7850 Dual-Core Processor AuthenticAMD GNU/Linux

The database lives on:

  /dev/sda2 on /pub type ext4 (rw,noatime)

I started with stock, unpatched pg 9.1 and ran pgbench. I used several
scales and always set the number of connections to half the scale (so
scale 20 used 10 connections). I ran every test for 180 seconds,
autovacuum was always off, and I ran "vacuumdb -z" between pgbench runs.

Each block of numbers has these columns: scale, test 1, test 2, test 3,
avg. So the first line below:

  6, 96, 105, 102, 101

means I ran:

  pgbench -i -s 6
  pgbench -c 3 -T 180
  vacuumdb -z
  pgbench -c 3 -T 180
  vacuumdb -z
  pgbench -c 3 -T 180

and the three runs came back at 96, 105 and 102 tps, for an average of 101.

The LOGS test is importing 61+ million rows of apache logs. It's a perl
script that uses COPY over many, many files, with a separate commit for
each file.

Settings for the first round:

  checkpoint_segments = 7
  shared_buffers = 512MB
  effective_cache_size = 1024MB
  autovacuum off
  fsync on
  synchronous_commit on
  full_page_writes on
  bgwriter_lru_maxpages 100

180 second tests:

  scale, test 1, test 2, test 3, avg
  6, 96, 105, 102, 101
  20, 120, 82, 76, 93
  40, 73, 42, 43, 53
  80, 50, 29, 35, 38

synchronous_commit off:

  6, 239, 676, 614, 510
  20, 78, 47, 56, 60
  40, 59, 35, 41, 45
  80, 53, 30, 35, 39

  LOGS: ~3,900 ins/sec (I didn't record this well, it's sort of a guess)

synchronous_commit off, full_page_writes off:

  6, 1273, 1344, 1287, 1301
  20, 1323, 1307, 1313, 1314
  40, 1051, 872, 702, 875
  80, 551, 206, 245, 334

  LOGS (got impatient and killed it):
  Total rows:    20,719,095
  Total seconds: 5,279.74
  Total ins/sec: 3,924.25

fsync off, synchronous_commit off, full_page_writes off,
bgwriter_lru_maxpages 0:

  6, 3622, 2940, 2879, 3147
  20, 2860, 2952, 2939, 2917
  40, 2204, 2143, 2349, 2232
  80, 1394, 1043, 1085, 1174

  LOGS (this is a full import):
  Total rows:    61,467,489
  Total seconds: 1,240.93
  Total ins/sec: 49,533.37

------- Apply unlogged patches, recompile, re-initdb -------

I patched pgbench to run with either normal or unlogged tables.

  fsync on
  synchronous_commit on
  full_page_writes on
  bgwriter_lru_maxpages 100

180 second tests, normal tables:

  6, 101, 102, 108, 103
  20, 110, 71, 90, 90
  40, 83, 45, 49, 59
  80, 50, 34, 30, 38

  LOGS (partial import):
  Total rows:    24,754,871
  Total seconds: 6,058.03
  Total ins/sec: 4,086.28

unlogged tables:

  6, 2966, 3047, 3007, 3006
  20, 2767, 2515, 2708, 2663
  40, 1933, 1311, 1464, 1569
  80, 837, 552, 579, 656

  LOGS (full import):
  Total rows:    61,467,489
  Total seconds: 1,126.75
  Total ins/sec: 54,552.60

After all this... there are too many numbers for me. I have no idea
what it all means.

-Andy
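P.S. In case anyone wants to repeat the pgbench half of this, here is
roughly the shape of the loop I ran, reconstructed from the description
above rather than my exact script (it assumes the default database and
that the tps numbers are read off pgbench's output by hand):

  #!/bin/sh
  # For each scale: initialize, then three 180-second runs,
  # with a vacuum+analyze between runs. Clients = scale / 2.
  for SCALE in 6 20 40 80; do
      CLIENTS=$((SCALE / 2))
      pgbench -i -s $SCALE
      pgbench -c $CLIENTS -T 180
      vacuumdb -z
      pgbench -c $CLIENTS -T 180
      vacuumdb -z
      pgbench -c $CLIENTS -T 180
  done

The perl script for the LOGS import isn't included here; the only thing
that matters about it for these numbers is the COPY-per-file,
commit-per-file pattern described above.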