For this type of migration, downtime is required. However, it can be reduced to anywhere from a few hours down to a few minutes, depending on how much change needs to be synced.
I have done this many times on a NetApp filer, but the same approach applies to ZFS as well. The first thing to consider is doing the migration only once, so you don't need two downtimes. Let's talk about the migration first:

1. You will need a recent enough ZFS to support zfs send and zfs receive.
2. Create your destination pool (there are things you can do here to avoid migrating back).
3. Create your destination volume.
4. Create snapshot snap1 of the source volume (zfs snapshot).
5. Use zfs send <vol...@snapshot1> | zfs receive <dstvol>. This will sync most of the 11 TB and may take days.
6. Create snapshot snap2 of the source volume.
7. Incrementally sync the snapshots with zfs send -i <vol...@snapshot1> <vol...@snapshot2> | zfs receive <dstvol>. This should be faster. Repeat steps 6 and 7 as needed until the sync time is about the allowed downtime.
8. ** DOWNTIME ** Turn off the Windows servers.
9. zfs unmount the source volume to ensure no more changes to the volume.
10. Create snapshot final of the source volume.
11. Incrementally sync the final snapshot.
12. Rename the source volume to a backup volume (you can rename pools via export/import).
13. Rename the destination volume to production.
14. Mount the destination volume (reconfigure what you need for COMSTAR).
15. Turn on the Windows servers.
16. You need some way of verifying the migration, and of backing out if needed. Once verified, enable the Windows services. ** END OF DOWNTIME **
17. You should have a backup of the old volume before destroying the old pool.
18. Destroy the old pool.
19. Add the now-spare disks to the new pool.

No-downtime is not possible, because you need to switch pools and ZFS doesn't currently support features like LVM's pvmove, vgsplit, vgmerge, and vgreduce.

-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
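The send/receive steps above can be sketched roughly as follows. The pool and volume names (tank/vol, newtank/vol, tank/vol-backup) are hypothetical placeholders, not anything from your setup, and this is an illustrative command sequence to adapt, not a drop-in script:

```shell
#!/bin/sh
# Hypothetical names; substitute your own. Run as root.
SRC=tank/vol            # source volume (the 11 TB one)
DST=newtank/vol         # destination volume on the new pool

# Steps 4-5: snapshot, then full initial sync (may take days).
zfs snapshot "${SRC}@snap1"
zfs send "${SRC}@snap1" | zfs receive "$DST"

# Steps 6-7: incremental sync of only the changes since snap1.
# Repeat with fresh snapshot names until the delta syncs fast enough.
zfs snapshot "${SRC}@snap2"
zfs send -i "${SRC}@snap1" "${SRC}@snap2" | zfs receive "$DST"

# --- DOWNTIME (steps 8-11): stop clients, freeze the source, final sync ---
zfs unmount "$SRC"
zfs snapshot "${SRC}@final"
zfs send -i "${SRC}@snap2" "${SRC}@final" | zfs receive "$DST"

# Steps 12-13: individual datasets are renamed with zfs rename;
# whole pools are renamed by exporting and importing under a new name.
zfs rename "$SRC" tank/vol-backup
# zpool export newtank && zpool import newtank production

# Step 14: bring the new volume online, then reconfigure COMSTAR as needed.
zfs mount "$DST"
```

The key design point is that each incremental send only transfers blocks changed between the two snapshots, which is why repeating steps 6 and 7 shrinks the window until the final sync fits inside the allowed downtime.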