Personally, I find ZFS to be fantastic; it's missing only three
features from my ideal filesystem:
1) The ability to easily recover the portions of a filesystem that are
still intact after a catastrophic failure.  (It looks like zpool scrub
can do this as long as the damaged pool can still be imported, so this
is almost there, or at least hackable today if a bit of drive
information has been kept around; see the sketch just after this
list.)

2) The ability to push the data off a device and safely remove it from
a non-mirrored pool.  (Marked as a future feature.)

3) Filesystem-level mirroring across devices, rather than
device-level mirroring.  I'd like to raise this one for discussion
below: pros, cons, ideas, or "not worth the effort".
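
Before diving into 3), a quick sketch of the recovery in 1) as it
stands today.  This assumes the damaged pool can still be imported;
'tank' is a hypothetical pool name:

   zpool import -f tank     # force-import the damaged pool
   zpool scrub tank         # walk all data/metadata, repair what it can
   zpool status -v tank     # -v lists files with permanent errors;
                            # anything not listed should be intact

Everything that zpool status -v doesn't flag can then be copied off
normally.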

On point 3: it would be fantastic if ZFS could support another option
for copies that **guarantees** the copies are written to different
devices.  If it cannot (due to free-space constraints or a
failing/failed device), it would fall back to writing them to the
same device, but raise an error/warning that could be checked in
zpool status or similar, much like a RAID5 array that has lost a
disk: still workable, simply degraded.
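
For reference, here is what the existing knob looks like today (the
dataset name is hypothetical, and of course the degraded-copies
warning is exactly the part that doesn't exist yet):

   zfs set copies=2 tank/test2
   zfs get copies tank/test2    # confirm the setting took
   zpool status -x tank         # where a 'copies degraded' warning
                                # could surface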

zpool scrub strikes me as the perfect tool for enforcing the copies=X
attribute: it could bring the entire filesystem into line with the
current setting and ensure that existing data meets the requirement,
rather than the setting affecting only newly written data.
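
As far as I know, the only way today to apply a new copies value to
data that already exists is to rewrite it, crudely, file by file:

   zfs set copies=2 tank/test2
   # Blocks written before the property change keep their old copy
   # count; only a rewrite picks up the new value:
   cp somefile somefile.tmp && mv somefile.tmp somefile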
One issue I immediately see with the scrub approach: it might need to
move data from one disk to another in order to free up space for
replication across devices, which is likely non-trivial.

-Tim

Miles Nordin wrote:
>>>>>> "tr" == Timothy Renner <timothy.ren...@gmail.com> writes:
>
>     tr> zfs set copies=2 zfspool/test2
>
> 'copies=2' says things will be written twice, but regardless of
> discussion about where the two copies are written, copies=2 says
> nothing at all about being able to *read back* your data if one of the
> copies disappears.  It only promises that the two copies will be
> written.  This does you no good at all if you can't import the pool,
> which is probably what will happen to anyone who has relied on
> copies=2 for redundancy.
>
> The discussion about *where* the copies tend to be written is really
> impractical and distracting, IMO.
>
> The chance that the copies won't be written to separate vdevs is not
> where the problem comes from.  You can't import a pool unless it has
> enough redundancy at vdev level to get all your data, so copies=2
> doesn't add much.  The best copies=2 will do is give you a slightly
> better shot at evacuating the data from a slowly-failing drive.  If
> anyone at all should be using it, it certainly shouldn't be someone
> with more than one drive.
