With ZFS, we can set the copies property to 1, 2, or 3 to configure how many 
copies of each data block are stored.  With copies=2 or more, in theory an 
entire disk can return read errors and the dataset still works, since ZFS 
tries to place the extra copies on different disks where it can.  
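For example, turning this on is just a property set (the pool/dataset names 
here are made up, and note the property only affects blocks written after it 
is set):

  zfs set copies=2 tank/data
  zfs get copies tank/data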

The unfortunate part here is that the redundancy lives in the dataset, not in 
the vdev as with raidz or mirroring.  So if a disk goes missing from a plain 
striped pool, the pool is missing a top-level vdev and becomes unavailable.  
If a single disk in a raidz vdev goes missing, the pool merely becomes 
degraded and is still usable.  Now, with a non-redundant stripe the missing 
disk can't be replaced, but with copies=2 all the data should still be 
present on the surviving disks.  Is there no way to force the zpool online, 
or to prevent it from faulting itself?
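To make the contrast concrete, here is a rough sketch of the two layouts 
(the pool and disk names are made up):

  # plain stripe: no vdev-level redundancy; losing any disk faults the pool
  zpool create tank c0t0d0 c0t1d0 c0t2d0

  # raidz: vdev-level redundancy; losing one disk leaves the pool DEGRADED
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

  # check pool and vdev state after a failure
  zpool status -v tank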

One of the key benefits of the extra metadata copies is that if a single 
block fails, the filesystem is still navigable, so you can pull out whatever 
data remains reachable.
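If anyone wants to verify where the extra copies actually land, zdb can dump 
the block pointers for a file; at high verbosity each block shows one DVA per 
copy (the dataset name and object number here are made up):

  # dump block pointers for object 8 in tank/data; with copies=2 each
  # block should show two DVAs, ideally on different top-level vdevs
  zdb -ddddd tank/data 8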