On Fri, Oct 10, 2008 at 9:14 PM, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> Note: even in a single-device pool, ZFS metadata is replicated via
> ditto blocks at two or three different places on the device, so that
> a localized media failure can be both detected and corrected.
> If you have two or more devices, even without any mirroring
> or RAID-Z, ZFS metadata is mirrored (again via ditto blocks)
> across those devices.

And in the event that you have a pool that is mostly not very
important but some of it is, you can have data mirrored on a
per-dataset level via copies=n.
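For example (a sketch -- the dataset name is hypothetical, and note
that the copies property only applies to blocks written after it is
set):

```shell
# Store two copies of every data block in this dataset only,
# without mirroring the rest of the pool.
zfs set copies=2 rpool/home

# Confirm the setting.
zfs get copies rpool/home
```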

If we can avoid losing an entire pool by rolling back a txg or two,
the biggest source of data loss and frustration is taken care of.
Ditto blocks for metadata should take care of most other cases that
would result in widespread loss.  Normal bit rot that causes you to
lose blocks here and there is somewhat likely to take out only a small
minority of files and spit warnings along the way.  If there are some
files that are more important to you than others (e.g. losing files in
rpool/home may have more impact than losing files in rpool/ROOT),
copies=2 can help there.

And for those places where losing a txg or two is a mortal sin, don't
use flaky hardware, and let ZFS handle a layer of redundancy.

This gets me thinking that it may be worthwhile to have a small (<100
MB x 2) rescue boot environment with copies=2 (as well as rpool/boot/)
so that "pkg repair" could be used to deal with cases that prevent
your normal (>4 GB) boot environment from booting.
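A minimal sketch of what setting up such a rescue environment might
look like (the dataset name rpool/rescue and the sizes are
hypothetical; the exact steps to populate a bootable image would
depend on the distribution):

```shell
# Hypothetical: a small, space-capped rescue boot environment whose
# blocks are all stored twice, so it survives localized media damage
# even on a single-device pool.
zfs create -o copies=2 -o quota=100m rpool/rescue

# ...populate rpool/rescue with a minimal boot image; booting from it
# would then let "pkg repair" fix the damaged primary boot environment.
```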

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss