According to the hard disk drive guide at 
http://www.storagereview.com/guide2000/ref/hdd/index.html, a whopping 
36% of data loss is due to human error, while 49% is due to hardware 
or system malfunction.  With proper pool design, ZFS addresses most of 
the 49% caused by hardware malfunction.
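
As a sketch of what proper pool design means here: redundancy at the 
vdev level (mirror or raidz) is what lets ZFS detect and repair bad 
data on its own.  The device names below are made up:

   $ zpool create mypool mirror c0t0d0 c0t1d0
   $ zpool status mypool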

You can do as much MTTDL (mean time to data loss) analysis as you 
want based on drive reliability and read failure rates, but it still 
only addresses that 49% of data loss.
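
As a rough illustration, the textbook first-order estimate for a 
two-way mirror, with made-up numbers for drive MTBF and repair time, 
works out to:

   MTTDL ~= MTBF^2 / (2 x MTTR)
         ~= (1,000,000 h)^2 / (2 x 24 h)
         ~= 2.1e10 hours, or roughly 2.4 million years

Numbers like that look wonderful on paper, but they say nothing about 
the 36% caused by human error.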

ZFS makes it very easy for human error to destroy data.  For example:

   $ zpool destroy mypool

   $ zfs destroy mypool/mydata

These commands complete almost instantaneously and are much faster 
than the classic:

   $ rm -rf /mydata

or

   % newfs /dev/rdsk/c0t0d0s6 < /dev/null
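
One small consolation: unlike newfs, zpool destroy only marks the 
pool as destroyed rather than overwriting it, so a freshly destroyed 
pool can often be recovered as long as its devices have not been 
reused:

   $ zpool import -D          # list destroyed pools still visible
   $ zpool import -D mypool   # bring one of them back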

Most problems we hear about on this list are due to one of these 
issues:

  * Human error

  * Beta-level OS software

  * System memory error (particularly non-ECC memory)

  * Wrong pool design

ZFS is a tool that can deliver exceptional reliability.  Some forms 
of human error can be limited by facilities such as snapshots, but 
system administrator error remains a major factor.
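
For instance, a snapshot taken before risky work gives a cheap way 
back; a minimal sketch, reusing the dataset name from the example 
above and assuming the default mountpoint:

   $ zfs snapshot mypool/mydata@before-cleanup
   $ rm -rf /mypool/mydata/*                      # oops
   $ zfs rollback mypool/mydata@before-cleanup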

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
