> When running the card in copyback write cache mode, I got horrible
> performance (with zfs), much worse than with copyback disabled
> (which I believe should mean it does write-through), when tested
> with filebench.

When I benchmark my disks, I also find that the system is slower with
WriteBack enabled.  I would not call it "much worse," though; I'd estimate
about 10% worse.  This, naturally, is counterintuitive.  I do have an
explanation, however, which is partly conjecture:  with the WriteBack enabled, when the
OS tells the HBA to write something, it seems to complete instantly.  So the
OS will issue another, and another, and another.  The HBA has no knowledge
of the underlying pool data structure, so it cannot consolidate the smaller
writes into larger sequential ones.  It will brainlessly (or
less-brainfully) do as it was told, and write the blocks to precisely the
addresses it was instructed to write, even if those are many small writes
scattered throughout the platters.  ZFS is smarter than that.  It's
able to consolidate a zillion tiny writes, as well as some larger writes,
all into a larger sequential transaction.  ZFS has flexibility in choosing
precisely how large a transaction it will create before sending it to disk.
One of the variables used to decide how large the transaction should be
is this:  is the disk busy writing, right now?  If the disks are still
busy, ZFS might as well wait a little longer and continue building up the
next sequential block of data to write.  If the disk appears to have
completed the previous transaction already, there's no need to wait any
longer.  Don't let the disks sit idle.  Just send another small write to
the disk.
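That heuristic can be sketched roughly like this.  This is hypothetical
illustration code, not actual ZFS internals; the class name, the batch
threshold, and the disk_busy flag are all made up for the example:

```python
# Toy model of write coalescing: accumulate small writes while the disk
# is busy, then flush them as one mostly-sequential batch.
class WriteCoalescer:
    def __init__(self, max_batch=1024 * 1024):
        self.max_batch = max_batch
        self.pending = []          # (offset, data) pairs not yet on disk
        self.pending_bytes = 0

    def write(self, offset, data, disk_busy):
        self.pending.append((offset, data))
        self.pending_bytes += len(data)
        # If the disk is still busy, keep building up the batch; if it
        # has gone idle (or the batch is large enough), flush now so the
        # disk never sits there doing nothing.
        if not disk_busy or self.pending_bytes >= self.max_batch:
            return self.flush()
        return None

    def flush(self):
        # Sort by offset so scattered small writes go out as one largely
        # sequential pass instead of many random seeks.
        batch = sorted(self.pending)
        self.pending, self.pending_bytes = [], 0
        return batch
```

The HBA's WriteBack cache can't do this, because it sees only raw block
addresses; the coalescing step above depends on knowledge the filesystem
has and the controller doesn't.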

Long story short, I think ZFS simply does a better job of write buffering
than the HBA could possibly do.  So you benefit by disabling the WriteBack
and letting ZFS handle that instead.
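For reference, on LSI MegaRAID-family controllers the cache policy can
typically be inspected and switched with the MegaCli utility.  This is
just one common controller family; the exact tool and flags vary, so
check your HBA's documentation:

```shell
# Assumes an LSI MegaRAID controller with MegaCli installed; the -LAll
# and -aAll selectors mean "all logical drives on all adapters".

# Show the current cache policy:
MegaCli -LDGetProp -Cache -LAll -aAll

# Switch to WriteThrough, i.e. disable the WriteBack cache:
MegaCli -LDSetProp WT -LAll -aAll
```

(Configuration commands against real hardware; run them only on a
controller you intend to change.)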

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
