On Thu, Feb 12 at 19:43, Toby Thain wrote:
> ^^ Spec compliance is what we're testing for... We wouldn't know if this special variant is working correctly either. :)

Time the difference between NCQ reads with and without FUA in the
presence of overlapped cached write data.  The FUA reads should show a
significant performance penalty compared to a device that simply
services the reads from its volatile buffer cache.
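
A rough way to script that comparison (untested sketch, not a recipe:
it assumes Linux, sg3_utils, root, a scratch /dev/sdX you can trash,
and -- the big if -- that the READ(16) FUA bit actually survives your
HBA's and the kernel's SCSI-ATA translation and reaches the drive as
an NCQ FUA read):

#!/usr/bin/env python3
# Timing harness for FUA vs. non-FUA reads of data that should still be
# sitting, unflushed, in the drive's write cache.  All device names and
# sizes below are placeholders.
import mmap, os, struct, subprocess, time

DEV = "/dev/sdX"                 # hypothetical scratch device -- edit first
BLOCK = 512                      # logical block size in bytes
NBLOCKS = 8                      # 4 KiB per command
LBAS = range(0, 8192, NBLOCKS)   # 1024 commands, ~4 MiB of cached data

def read16_cdb(lba, nblocks, fua):
    # SCSI READ(16): opcode 0x88, FUA is bit 3 of byte 1, 8-byte LBA,
    # 4-byte transfer length, group number, control.
    return struct.pack(">BBQIBB", 0x88, 0x08 if fua else 0x00,
                       lba, nblocks, 0, 0)

def dirty_lbas():
    # Land unflushed writes in the drive's cache: O_DIRECT bypasses the
    # host page cache, and we deliberately skip fsync().
    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, NBLOCKS * BLOCK)   # page-aligned, as O_DIRECT needs
    buf[:] = os.urandom(NBLOCKS * BLOCK)
    for lba in LBAS:
        os.pwrite(fd, buf, lba * BLOCK)
    os.close(fd)

def time_reads(fua):
    t0 = time.monotonic()
    for lba in LBAS:
        cdb = ["%02x" % b for b in read16_cdb(lba, NBLOCKS, fua)]
        subprocess.run(["sg_raw", "-r", str(NBLOCKS * BLOCK), DEV] + cdb,
                       check=True, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL)
    return time.monotonic() - t0

if __name__ == "__main__":
    dirty_lbas()
    # Per-command process overhead is identical in both passes, so only
    # the delta between the two totals is interesting.
    print("reads without FUA: %6.2f s" % time_reads(fua=False))
    print("reads with    FUA: %6.2f s" % time_reads(fua=True))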

FYI, there are semi-commonly-available power control units that take a
serial port or USB connection as input, and have a whole bunch of SATA
power connectors on them.  These are the sorts of things that drive
vendors use to bounce power unexpectedly in their testing; if you need
to perform that same validation, it makes sense to invest in that bit
of infrastructure.

Something like this:
http://www.ulinktech.com/products/hw_power_hub.html

or just roll your own in a few days like this guy did for his printer:
http://chezphil.org/slugpower/
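
If you do roll your own, the software side can be almost nothing.
Assuming you wire a relay (or solid-state switch) on the drive's power
rails to a serial port's DTR line -- the wiring is the hard part and
entirely your problem, and your relay board's polarity may be inverted
-- something like this would bounce the drive:

#!/usr/bin/env python3
# Minimal "roll your own" power bouncer in the spirit of the slugpower
# link above.  Assumes pyserial and a DTR-driven relay on the drive's
# power rails; both are assumptions, not anything from a real product.
import sys, time
import serial

def bounce(port, off_seconds=5.0):
    ser = serial.Serial(port)   # opening the port gives us DTR control
    ser.dtr = False             # drop DTR -> relay opens -> drive loses power
    time.sleep(off_seconds)
    ser.dtr = True              # restore power
    ser.close()

if __name__ == "__main__":
    bounce(sys.argv[1] if len(sys.argv) > 1 else "/dev/ttyUSB0")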


It should be pretty trivial to perform a few thousand cached writes,
issue a FLUSH CACHE EXT, and turn off power immediately after that
command completes.  Then go back and figure out how many of those
writes actually made it to the media, as the device claimed they had.
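
Something like this sketch (untested; assumes Linux, Python 3, a
scratch block device you can destroy -- /dev/sdX is a placeholder --
and that fsync() on the raw device gets turned into FLUSH CACHE EXT by
the block layer):

#!/usr/bin/env python3
# Write a few thousand sequence-numbered, checksummed records, flush,
# announce that the flush was acknowledged (cut power there), then after
# reboot count how many of the "durable" records actually survived.
import hashlib, mmap, os, struct, sys

DEV = "/dev/sdX"           # hypothetical scratch device -- edit before use
RECORD_SIZE = 4096
NUM_RECORDS = 4096         # "a few thousand cached writes"

def record_for(seq):
    payload = struct.pack("<Q", seq).ljust(RECORD_SIZE - 32, b"\xaa")
    return payload + hashlib.sha256(payload).digest()

def write_phase():
    # O_DIRECT bypasses the host page cache so every write really reaches
    # the drive (and its volatile write cache) before the final flush.
    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, RECORD_SIZE)      # page-aligned, as O_DIRECT needs
    for seq in range(NUM_RECORDS):
        buf[:] = record_for(seq)
        assert os.pwrite(fd, buf, seq * RECORD_SIZE) == RECORD_SIZE
    os.fsync(fd)                          # the flush cache ext step
    os.close(fd)
    print("flush acknowledged for %d records -- cut power now" % NUM_RECORDS)

def verify_phase():
    # Run after power comes back; a cold page cache means these reads
    # reflect what actually made it to the media.
    fd = os.open(DEV, os.O_RDONLY)
    good = 0
    for seq in range(NUM_RECORDS):
        rec = os.pread(fd, RECORD_SIZE, seq * RECORD_SIZE)
        payload, digest = rec[:-32], rec[-32:]
        if hashlib.sha256(payload).digest() == digest and \
           struct.unpack("<Q", payload[:8])[0] == seq:
            good += 1
    os.close(fd)
    print("%d of %d flushed records survived" % (good, NUM_RECORDS))

if __name__ == "__main__":
    write_phase() if sys.argv[1:] == ["write"] else verify_phase()

Run it with "write", pull the plug when it prints the prompt, then run
it again with no argument after the reboot to count the survivors.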

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

