On Dec 28, 2009, at 12:40 PM, Brad wrote:

"Try an SGA more like 20-25 GB. Remember, the database can cache more
effectively than any file system underneath. The best I/O is the I/O
you don't have to make."

We'll be turning up the SGA size from 4GB to 16GB.
The ARC size will be lowered from 8GB to 4GB.

This doesn't make sense to me. You've got 32 GB, why not use it?
Artificially limiting the memory use to 20 GB seems like a waste of
good money.
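
For reference, the two changes Brad describes would look something
like this (a sketch only; sga_target and zfs_arc_max are my guesses
at the knobs involved, and the values are Brad's planned ones, not
recommendations):

    -- Oracle side: raise the SGA target (takes effect after restart)
    SQL> ALTER SYSTEM SET sga_target=16G SCOPE=SPFILE;

    # Solaris side: cap the ZFS ARC at 4 GB in /etc/system (reboot required)
    # 0x100000000 bytes = 4 GiB
    set zfs:zfs_arc_max = 0x100000000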

"This can be a red herring. Judging by the number of IOPS below,
it has not improved. At this point, I will assume you are using
disks that have NCQ or CTQ (eg most SATA and all FC/SAS drives).
If you only issue one command at a time, you effectively disable
NCQ and thus cannot take advantage of its efficiencies."

Here's another sample of the data, taken at a different time after the number of concurrent I/Os was changed from 10 to 1. We're using Seagate Savvio 10K SAS drives...I could not find information on whether the drives support NCQ. What's the recommended value for concurrent I/Os?

   r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
1402.2 7805.3    2.7   36.2  0.2 54.9    0.0    6.0   0 940 c1
  10.8    1.0    0.1    0.0  0.0  0.1    0.0    7.0   0   7 c1t0d0
 117.1  640.7    0.2    1.8  0.0  4.5    0.0    5.9   1  76 c1t1d0
 116.9  638.2    0.2    1.7  0.0  4.6    0.0    6.1   1  78 c1t2d0
 116.4  639.1    0.2    1.8  0.0  4.6    0.0    6.0   1  78 c1t3d0
 116.6  638.1    0.2    1.7  0.0  4.6    0.0    6.1   1  77 c1t4d0
 113.2  638.0    0.2    1.8  0.0  4.6    0.0    6.1   1  77 c1t5d0
 116.6  635.3    0.2    1.7  0.0  4.5    0.0    6.0   1  76 c1t6d0
 116.2  637.8    0.2    1.8  0.0  4.7    0.0    6.2   1  79 c1t7d0
 115.3  636.7    0.2    1.8  0.0  4.4    0.0    5.8   1  77 c1t8d0
 115.4  637.8    0.2    1.8  0.0  4.5    0.0    5.9   1  77 c1t9d0
 114.8  635.0    0.2    1.8  0.0  4.3    0.0    5.7   1  76 c1t10d0
 114.9  639.9    0.2    1.8  0.0  4.7    0.0    6.2   1  78 c1t11d0
 115.1  638.7    0.2    1.8  0.0  4.4    0.0    5.9   1  77 c1t12d0
   1.6  140.0    0.0   15.1  0.0  0.6    0.0    4.4   0   8 c1t13d0
   1.3    9.1    0.0    0.1  0.0  0.0    0.0    1.0   0   0 c1t14d0
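
(For reference, per-device output in this format, with Mr/s and Mw/s
columns and the device name last, is what Solaris iostat prints with
flags along these lines; the exact invocation isn't shown above, so
this is a guess:)

    iostat -xnM 5    # extended per-device stats, MB/s units, 5-second interval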

SAS drives will have CTQ, which is basically the same thing as
NCQ on SATA disks. You can see here that you're averaging about
4.6 I/Os queued at the disks (the actv column) and the response
time is quite good. Meanwhile, each disk is handling more than
700 IOPS with less than 10 ms response time.  Not bad at all for
HDDs, but not a level you can expect to sustain, either. Here we
see more than 600 small write IOPS per disk. These will be
sequential (as in contiguous blocks, not as in large blocks), so
they get buffered and efficiently written by the disk.  When
your workload returns to read-mostly random activity, the IOPS
will go down.

As to what the magic number is? It is hard to say.  In this case,
more than 4 is good.  Remember, the default of 35 (the
zfs_vdev_max_pending tunable) is as much of a guess as anything.
For HDDs, 35 might be a little too much, but for a RAID array,
something more like 1,000 might be optimal.  Keeping an eye on
the actv column of iostat can help you make that decision.
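
A minimal sketch of how one might experiment with this on Solaris,
assuming zfs_vdev_max_pending is the tunable behind that default of
35; the value 10 below is an example, not a recommendation:

    # set the per-vdev queue depth on a live system (0t = decimal)
    echo "zfs_vdev_max_pending/W0t10" | mdb -kw

    # or make it persistent across reboots in /etc/system
    set zfs:zfs_vdev_max_pending = 10

    # then watch the actv column while the workload runs
    iostat -xnM 5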
 -- richard
