Dick Davies wrote:
> For the sake of argument, let's assume:
> 1. disk is expensive
> 2. someone is keeping valuable files on a non-redundant zpool
> 3. they can't scrape enough vdevs to make a redundant zpool
>    (remembering you can build vdevs out of *flat files*)
Given those assumptions, I think that the proposed feature is the
perfect solution. Simply put those files in a filesystem that has copies>1.
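If the feature ships as proposed, that really is a one-liner per
dataset. A minimal sketch, assuming the proposed 'copies' property
(values 1-3) and hypothetical pool/dataset names:

   zfs create -o copies=2 tank/photos   # new fs, two copies of every block
   zfs set copies=3 tank/photos         # raise it later; only affects data
                                        # written after the change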
Also note that using files to back vdevs is not a recommended solution.
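(For context on point 3 above: a file-backed pool is built something
like the sketch below. Fine for experiments, but not for real data;
the paths and pool name here are made up.)

   mkfile 64m /var/tmp/v0 /var/tmp/v1     # backing files, >= 64MB each
   zpool create testpool mirror /var/tmp/v0 /var/tmp/v1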
> If the user wants to make sure the file is 'safer' than others, he
> can just make multiple copies: to a USB disk or flash drive, CD-RW,
> DVD, an FTP server, whatever.
It seems to me that asking the user to solve this problem by manually
making copies of all his files puts all the burden on the
user/administrator and is a poor solution.
For one, they have to remember to do it fairly often. For another,
when they do experience data loss, they have to reconstruct the files
manually! They could end up with one file that has part of it missing
from copy A and a different part missing from copy B. I'd hate to have
to reconstruct that by hand from two different files, but the proposed
feature would do it transparently.
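To be concrete about "transparently": every block is checksummed, so a
bad copy is detected on read or during a scrub and rewritten from the
surviving copy with no user involvement. The commands are the existing
ones; the pool name is hypothetical:

   zpool scrub tank       # walk the pool, verifying every checksum
   zpool status -v tank   # list anything that could not be repaired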
> The redundancy you're talking about is what you'd get from
> 'cp /foo/bar.jpg /foo/bar.jpg.ok', except it's hidden from the user
> and causes headaches for anyone trying to comprehend, port, or extend
> the codebase in the future.
Whether it's hard to understand is debatable, but this feature reuses
the ditto-block mechanism ZFS already uses to store multiple copies of
its metadata, so it integrates very smoothly with the existing
infrastructure and wouldn't cause any trouble when extending or
porting ZFS.
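One sign of that smooth fit: under the proposal, copies would be an
ordinary inheritable dataset property, administered just like
compression or checksum (dataset names hypothetical):

   zfs get -r copies tank           # show the setting across the hierarchy
   zfs inherit copies tank/photos   # revert to the parent's value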
> I'm afraid I honestly think this greatly complicates the conceptual
> model (not to mention the technical implementation) of ZFS, and I
> haven't seen a convincing use case.
Just for the record, these changes are pretty trivial to implement:
fewer than 50 lines of code changed.
--matt