On Wed, Nov 25, 2009 at 7:54 AM, Paul Kraus <pk1...@gmail.com> wrote:
>> You're peaking at 658 256KB random IOPS for the 3511, or ~66
>> IOPS per drive.  Since ZFS will max out at 128KB per I/O, the disks
>> see something more than 66 IOPS each.  The IOPS data from
>> iostat would be a better metric to observe than bandwidth.  These
>> drives are good for about 80 random IOPS each, so you may be
>> close to disk saturation.  The iostat data for IOPS and svc_t will
>> confirm.
>
> But ... if I am saturating the 3511 with one thread, then why do I get
> many times that performance with multiple threads ?

I'm having trouble making sense of the iostat data (I can't tell how
many threads were running at any given point), but I do see many
intervals where asvc_t * reads lands in the range of 850 ms to 950 ms
per second.  In other words, that is about as fast as a single-threaded
app with a little bit of think time can issue reads (100 reads * 9 ms
svc_t + 100 reads * 1 ms think time = 1 sec).  The %busy shows that
90+% of the time there is an I/O in flight (100 reads * 9 ms = 900/1000
= 90%).  However, %busy says nothing about how many I/Os could have
been in flight simultaneously.
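One way to see that directly, instead of inferring it from %busy: the
actv column of iostat -xn is the average number of commands actively
being serviced on the device.  A rough sketch of what I would run while
the single-threaded test is going (the one-second interval is just an
arbitrary choice):

    # Per-device stats once a second, skipping idle devices (-z).
    # If 'actv' never climbs much past 1, the workload is serialized
    # on a single outstanding I/O rather than limited by the disks.
    iostat -xnz 1

If actv stays pinned near 1 for the single-threaded run and climbs with
the thread count, that pretty much confirms the queueing story above.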

When you fire up more threads, you are able to have more I/Os in
flight concurrently.  I don't believe that IOPS per drive is really the
limiting factor in the single-threaded case, since the spec sheet for
the 3511 says it has 1 GB of cache per controller.  Your working set is
small enough that it is somewhat likely many of those random reads are
being served from cache.  A DTrace analysis of just how random the
reads are would be interesting; I think hotspot.d from the DTrace
Toolkit would be a good starting place (see the sketch below).
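For a quick and dirty look before (or alongside) hotspot.d, something
like this rough sketch using the io provider would show the
distribution of read offsets per device (untested, and it traces every
device on the box, so expect noise from other activity):

    # Histogram of read block offsets, one per device.  A tight,
    # clustered distribution means the reads keep hitting a small
    # region the 3511's cache can hold; a flat one means the reads
    # really are random across the LUN.
    dtrace -n 'io:::start
        /args[0]->b_flags & B_READ/
        { @[args[1]->dev_statname] = quantize(args[0]->b_blkno); }'

Hit Ctrl-C after the test run and compare the histograms against the
size of your working set.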

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
