>>>>> "mb" == Martin Blom <[EMAIL PROTECTED]> writes:

    mb> if I'm risking it more than usual when the procedure is done?

yeah, that is my opinion: when the procedure is done, using ZFS
without a backup is risking the data more than using UFS or ext3
without a backup.  Is that a clear statement?


I can ramble on, but maybe that's all you care to hear.

ZFS or not, I've become a big believer that filesystem-level backup is
always important for data that must last a decade; RAID or snapshots
alone aren't enough.  That doesn't mean don't use ZFS---it means this
is the time to start building proper backup into your budget.

With huge amounts of data, you can get into a situation where you need
to make a copy of the data, you've nowhere to put it and no time or
money to create a space for it, and you find yourself painted into a
corner.  This is less of a problem if all you've bought is a single
big external drive---you can afford to buy another drive, and the new
drive works instantly---but with big RAIDs you have to save, budget,
and build to make space to copy them.

Why would you suddenly need a copy?  Well, maybe you need to carry the
data with you to deliver it to someone else.  You risk damaging the
copy you're physically transporting, so you should always have a
stationary copy too.  Maybe you need to change the raid stripe
arrangement (I'm repeating myself, so maybe just reread my other
post)---for example, widen the stripe when you add a fourth disk,
otherwise you end up stuck with raidz(3disk) * 2 when you could have
had the same capacity and more protection with raidz2(6disk) * 1.
Maybe you need to remove a slog, or work around a problem by importing
the pool on an untrustworthy SXCE release or hacked zfs code that
might destroy everything, or you want to test-upgrade the pool to a
new version to see if it fixes a problem while knowing you might want
to go back to the old zpool version if you run into other problems.
Without a copy, you will be so fearful of every reasonable step that
you will make overcautious decisions and function slowly.

The possible exception to needing backups might be the two-level
filesystems like GlusterFS or googlefs or maybe samfs.  These are
mid-layer filesystems that are backed by ordinary filesystems beneath
them, not block devices, and they replicate the data across those
ordinary filesystems.  Some of them may have worst-case recovery
schemes that are pretty good, especially if you have a few big files
rather than many tiny ones, so you can live with getting just the
insides of the files back, the way fsck gives them to you in
lost+found.  And they don't use RAID-like parity/FEC schemes; rather,
they only make mirror-like copies of the files, and they usually have
the ability to evacuate an underlying filesystem, so you're less
likely to be painted into a corner like I described---you always own
enough disk for two or three complete copies.  But I don't have
experience here---that's my point.  Maybe these still need backups,
maybe not, but at least for all block-backed filesystems, my
experience says you need a backup, because within a decade you'll make
a mistake, hit a corruption bug, or need a copy.
