Hi Malcolm,

As far as I'm aware (someone please correct me if I'm wrong, it's been
a while since I've played with this) you can't mirror at the vdev level, so
you have to replace disks at the individual disk level.

E.g. you have a 6-disk/LUN raidz configuration at 50 GB per disk.
You've calculated that you actually want 100 GB per disk.
If you want to grow this on the fly, you'll need to replace the disks
individually with 6 new LUNs using "# zpool replace <zpool> <olddisk>
<newdisk>", waiting for each resilver to finish before moving on.
Once all the disks have been replaced, set the autoexpand property on the
zpool to on with "# zpool set autoexpand=on <zpool>".
Then run "# zpool online -e <zpool> <disk>" against each new disk to soak up
all the new space.
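Putting that together, a rough sketch (the pool name "tank" and the disk
names c1t0d0/c2t0d0 etc. are made up for illustration, substitute your own):

  # zpool replace tank c1t0d0 c2t0d0      (repeat for each of the 6 disks,
  # zpool status tank                      waiting for each resilver to finish)
  # zpool set autoexpand=on tank
  # zpool online -e tank c2t0d0            (repeat for each new disk)
  # zpool list tank                        (check the extra capacity shows up)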

The drawback is that you can't change the number of disks; you can only
replace them with equal-sized or bigger drives.
This is how I've done it in the past; if there's a better way to do it
online now, I'm all for it :).

Otherwise, if you're willing to take the outage, snapshots and zfs
send/receive will also work. I've done it this way too, and found that
taking a base snapshot for the initial copy while everything was still
online, and then a second one at cutover time, reduced downtime
significantly.
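For what it's worth, that two-pass copy looks roughly like this (the pool
names "tank" and "newtank" are made up, adjust the receive options to suit):

  # zfs snapshot -r tank@base
  # zfs send -R tank@base | zfs receive -Fdu newtank
  ... at cutover, stop anything writing to the old pool ...
  # zfs snapshot -r tank@cutover
  # zfs send -R -i tank@base tank@cutover | zfs receive -Fdu newtank

Only the changes made since @base go across in the second send, which is
where the downtime saving comes from.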

Cheers,
Leigh Maddock
UNIX Sys. Admin.


On Wed, Dec 11, 2013 at 3:23 PM, Malcolm Herbert <[email protected]> wrote:

> Hey all - I've got a raidz zpool that I wish to migrate to a new set of
> LUNs. What's the best way to do this?
>
> Can I add the new set of LUNs as a separate raidz set to make a mirror,
> let them sync and then detach the original raidz set? Without offlining
> the zpool?
>
> I suspect the answer is 'no' but I'd be curious to know whether it's
> even possible - there are some documents that outline a similar
> process, but only for pools made up of mirrors.
>
> How would others handle this?  My current plan is to create a new pool,
> then use zfs send/receive to copy the bulk of the data between the two
> prior to the cutover.
>
> Regards,
> Malcolm
>
> --
> Malcolm Herbert
> [email protected]
>
_______________________________________________
msosug mailing list
[email protected]
http://mexico.purplecow.org/m/listinfo/msosug
