Forgot to say: this is with the new suggested values applied (see the attached postgresql.conf) and the ARC cache size set to 32GB.

Sébastien
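A sketch of how a 32GB ARC cap is typically applied on FreeBSD (the attached postgresql.conf is binary data and is not reproduced here, so treat this as an assumption rather than the actual file contents):

    # /boot/loader.conf -- loader tunable, takes effect at boot
    vfs.zfs.arc_max="34359738368"    # 32GB expressed in bytes (32 * 1024^3)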
On Wed, Sep 12, 2012 at 9:16 PM, Sébastien Lorion <s...@thestrangefactory.com> wrote:
> I recreated the DB and WAL pools and launched pgbench -i -s 10000. Here
> are the stats during the load (still running):
>
> *iostat (xbd13-14 are the WAL zpool)*
> device     r/s    w/s    kr/s     kw/s  qlen  svc_t  %b
> xbd8       0.0  471.5     0.0  14809.3    40   67.9  84
> xbd7       0.0  448.1     0.0  14072.6    39   62.0  74
> xbd6       0.0  472.3     0.0  14658.6    39   61.3  77
> xbd5       0.0  464.7     0.0  14433.1    39   61.4  76
> xbd14      0.0    0.0     0.0      0.0     0    0.0   0
> xbd13      0.0    0.0     0.0      0.0     0    0.0   0
> xbd12      0.0  460.1     0.0  14189.7    40   63.4  78
> xbd11      0.0  462.9     0.0  14282.8    40   61.8  76
> xbd10      0.0  477.0     0.0  14762.1    38   61.2  77
> xbd9       0.0  477.6     0.0  14796.2    38   61.1  77
>
> *zpool iostat (db pool)*
>           capacity      operations     bandwidth
> pool    alloc   free   read   write   read   write
> db      11.1G   387G      0   6.62K      0   62.9M
>
> *vmstat*
> procs     memory     page                     disks     faults       cpu
> r b w    avm    fre  flt re pi po    fr  sr ad0 xb8    in    sy    cs us sy id
> 0 0 0  3026M    35G  126  0  0  0 29555   0   0 478  2364 31201 26165 10  9 81
>
> *top*
> last pid:  1333;  load averages:  1.89,  1.65,  1.08   up 0+01:17:08  01:13:45
> 32 processes:  2 running, 30 sleeping
> CPU: 10.3% user,  0.0% nice,  7.8% system,  1.2% interrupt, 80.7% idle
> Mem: 26M Active, 19M Inact, 33G Wired, 16K Cache, 25M Buf, 33G Free
>
>
> On Wed, Sep 12, 2012 at 9:02 PM, Sébastien Lorion <s...@thestrangefactory.com> wrote:
> >
> > One more question: I could not set wal_sync_method to anything other
> > than fsync. Is that expected, or should other choices also be
> > available? I am not sure how SSD cache flushing is handled on EC2,
> > but I hope the whole cache is flushed on every sync. As a side note,
> > when I first ran my tests I got corrupted databases (errors about
> > pg_xlog directories not being found, etc.), and I suspect it was
> > because of vfs.zfs.cache_flush_disable=1, though I cannot prove it
> > for sure.
> >
> > Sébastien
> >
> > On Wed, Sep 12, 2012 at 8:49 PM, Sébastien Lorion <s...@thestrangefactory.com> wrote:
> >>
> >> Is dedicating 2 drives to the WAL too much? Since my whole RAID is
> >> made up of SSD drives, should I just put the WAL in the main pool?
> >>
> >> Sébastien
> >>
> >> On Wed, Sep 12, 2012 at 8:28 PM, Sébastien Lorion <s...@thestrangefactory.com> wrote:
> >>>
> >>> Ok, makes sense. I will update that as well and report back. Thank
> >>> you for your advice.
> >>>
> >>> Sébastien
> >>>
> >>> On Wed, Sep 12, 2012 at 8:04 PM, John R Pierce <pie...@hogranch.com> wrote:
> >>>>
> >>>> On 09/12/12 4:49 PM, Sébastien Lorion wrote:
> >>>>>
> >>>>> You set shared_buffers way below what is suggested in Greg
> >>>>> Smith's book (25% or more of RAM). What is the rationale behind
> >>>>> that rule of thumb? Other values are more or less what I set,
> >>>>> though I could lower effective_cache_size and vfs.zfs.arc_max
> >>>>> and see how it goes.
> >>>>
> >>>> I think those 25% rules were typically created when RAM was no
> >>>> more than 4-8GB.
> >>>>
> >>>> For our highly transactional workload, at least, too large a
> >>>> shared_buffers seems to slow us down, perhaps due to the higher
> >>>> overhead of managing that many 8k buffers. I've heard other
> >>>> read-mostly workloads, such as data warehousing, can take
> >>>> advantage of larger buffer counts.
> >>>>
> >>>> --
> >>>> john r pierce                            N 37, W 122
> >>>> santa cruz ca                         mid-left coast
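On the wal_sync_method question quoted above: PostgreSQL ships a small contrib utility, pg_test_fsync, that probes which sync methods actually work on a given filesystem and reports unsupported ones as n/a. A minimal sketch, assuming the WAL zpool is mounted at the hypothetical path /wal:

    # Probe the sync methods against a file on the WAL filesystem. On
    # FreeBSD of that era, fdatasync and open_datasync were typically
    # unavailable, which would leave fsync as the only working choice.
    pg_test_fsync -f /wal/pg_test_fsync.out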
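One possible layout for the two dedicated WAL drives discussed above, using the xbd13/xbd14 devices that sit idle in the iostat output; mirroring them is an assumption (a striped pair would trade redundancy for capacity):

    # Build a dedicated, mirrored WAL pool from the two SSDs and mount it
    zpool create wal mirror xbd13 xbd14
    zfs set mountpoint=/wal wal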
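To make the shared_buffers discussion concrete, a sketch of the conservative direction John describes; every value below is illustrative and not taken from the attached file:

    # postgresql.conf -- illustrative values only
    shared_buffers = 4GB             # well below the 25%-of-RAM rule
    effective_cache_size = 32GB      # roughly track the ZFS ARC cap
    wal_sync_method = fsync          # the only method available on this host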
Attachment: postgresql.conf (binary data)