I was just on the phone with Andy Bowers. He clarified that
our SATA device drivers need some work: we basically do not
have the necessary I/O concurrency at this stage. So the
write_cache actually makes a good substitute for tag queuing.
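
To make that concrete, here is a rough back-of-envelope model
in Python; the drive parameters (8 KB writes, 7200 RPM, ~60 MB/s
media rate) are made-up illustrative numbers, not measurements
of any particular disk:

    # Rough model: with the write cache off and only one outstanding
    # command, every write waits on the media; with the cache on, the
    # drive acks from cache and drains it in large, mostly sequential
    # batches, so throughput approaches the media rate.

    IO_SIZE     = 8 * 1024    # bytes per write (assumed)
    ROT_LATENCY = 0.0042      # s, avg rotational latency at 7200 RPM
    MEDIA_RATE  = 60e6        # bytes/s sustained media rate (assumed)

    # Cache disabled, queue depth 1: each write pays latency + transfer.
    per_io = ROT_LATENCY + IO_SIZE / MEDIA_RATE
    tput_no_cache = IO_SIZE / per_io

    # Cache enabled: the drive coalesces cached writes, so the drain
    # runs near the sustained media rate.
    tput_cache = MEDIA_RATE

    print("queue depth 1, cache off: %.1f MB/s" % (tput_no_cache / 1e6))
    print("cache on (coalesced):     %.1f MB/s" % (tput_cache / 1e6))

Real drives will land between those two extremes, but it shows
why a single outstanding write with no tag queuing is latency
bound, and why tag queuing would buy us the same overlap without
relying on the cache.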

So that explains why we get more throughput _on SATA_ drives
from the write_cache; and I guess the other bug explains why
ZFS is still not able to benefit from it.

-r

Jonathan Edwards writes:
 > 
 > On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering  
 > wrote:
 > 
 > > Naively I'd think a write_cache should not help a throughput
 > > test, since the cache should fill up, after which you should
 > > still be throttled by the physical drain rate. You clearly show
 > > that it helps; does anyone know why/how a cache helps throughput?
 > 
 > 7200 RPM disks are typically IOP bound - so the write cache (which
 > can be up to 16MB on some drives) should be able to buffer enough
 > IO to deliver more efficiently on each IOP and also reduce head seeks.
 > Not sure which vendors implement write-through when the cache fills,
 > or how detailed the drive cache algorithms on SATA can go...
 > 
 > Take a look at PSARC 2004/652:
 > http://www.opensolaris.org/os/community/arc/caselog/2004/652/
 > 
 > .je
