On Tue, 16 Feb 2010, Christo Kutrovsky wrote:

Back to the original question/idea: the goal was to do "damage control" in a disk-failure scenario involving data loss.

Which would you prefer: to lose a couple of datasets, or to lose a little bit of every file in every dataset?

This ignores the fact that ZFS is based on complex hierarchical data structures which support the user data. When a pool breaks, it is usually because one of these complex data structures has failed, and not because user data has failed.

It seems easiest to support your requirement by simply creating another pool.
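For illustration, a minimal sketch of that approach, assuming two sets of mirrored disks are available (the device names and pool names here are placeholders, not from the original discussion):

```shell
# Sketch: keep critical data in its own pool so that a metadata
# failure in one pool cannot take the other pool down with it.
# Device names (c1t0d0, etc.) are hypothetical; substitute your own.

# Bulk-data pool
zpool create tank mirror c1t0d0 c1t1d0

# Separate pool for data that should survive a tank-wide failure
zpool create critical mirror c2t0d0 c2t1d0

# Each pool maintains its own independent on-disk metadata tree;
# check both with:
zpool status tank critical
```

Since each pool has entirely separate metadata, corruption of one pool's structures leaves the other pool importable and intact.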

The vast majority of complaints to this list are about pool-wide problems and not lost files due to media/disk failure.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
