On Wed, Feb 17, 2010 at 11:37:54PM -0500, Ethan wrote:
> > It seems to me that you could also use the approach of 'zpool replace' for
> That is true. It seems like it would then have to rebuild from parity
> for every drive, though, which I think would take rather a long while,
> wouldn't it?

No longer than copying - plus, it will only resilver active data, so
unless the pool is close to full it could save some time.  Certainly
it will save some hassle, and reduce the risk of error from plugging
and swapping drives between machines repeatedly.  As a further benefit,
all this work will count towards a qualification cycle for the current
hardware setup.

I would recommend using replace, one drive at a time. Since you still
have the original drives to fall back on, you can do this now (before
making more changes to the pool with new data) without being overly
worried about a second failure killing your raidz1 pool.  Normally,
when doing replacements like this on a singly-redundant pool, it's a
good idea to run a scrub after each replace, making sure everything
you just wrote is valid before relying on it to resilver the next
disk. 
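The replace-then-scrub loop might look something like this sketch (pool
and device names are made up for illustration; the `run` helper just
prints each command as a dry run, so you can swap `echo` for real
execution once you've checked the names):

```shell
#!/bin/sh
# Hypothetical pool and device names - substitute your own.
POOL=tank
OLD_DISKS="c0t0d0 c0t1d0 c0t2d0"
NEW_DISKS="c0t4d0 c0t5d0 c0t6d0"

# Dry-run helper: prints the command instead of running it.
run() { echo "+ $*"; }

set -- $NEW_DISKS
for old in $OLD_DISKS; do
  new=$1; shift
  run zpool replace "$POOL" "$old" "$new"
  # In reality, poll 'zpool status' until the resilver completes
  # before moving on.
  run zpool status "$POOL"
  # Scrub after each replace to verify what was just written
  # before relying on it for the next resilver.
  run zpool scrub "$POOL"
done
```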

If you're keen on copying, I'd suggest doing it over the network; that
way your write target is a system that knows the target partitioning
and there's no (mis)calculation of offsets.
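For a network copy, a send/receive pipeline sidesteps partitioning
entirely. A rough dry-run sketch, where the host `newbox`, the pool
names, and the snapshot name are all placeholders:

```shell
#!/bin/sh
# Hypothetical names: 'tank' is the source pool, 'newtank' the pool
# on the receiving host 'newbox'.
SNAP=tank@migrate

# Dry-run helper: prints the command instead of running it.
run() { echo "+ $*"; }

# Take a recursive snapshot, then stream it to the remote pool.
# -R sends the whole dataset tree; -F rolls back the target if needed,
# -d maps dataset names under the target, -u skips mounting on receive.
run zfs snapshot -r "$SNAP"
run "zfs send -R $SNAP | ssh newbox zfs receive -Fdu newtank"
```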

--
Dan.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
