We're moving to a new SAN, and the old and new LUNs will not be accessible at the same time.

Thanks for the several replies I've received. It sounds like the dd-to-tape mechanism is broken for zfs send, unless someone knows otherwise or has some trick?

I'm just going to try a tar to tape instead (maybe via dd), as I don't have any extended attributes/ACLs. Would appreciate any suggestions for block sizes for an LTO5 tape drive writing to LTO4 tapes (what I have).
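For anyone following along, the pipeline I have in mind looks like the sketch below. It's a demo under stand-in paths: on the real run OUT would be the non-rewinding tape device (e.g. /dev/rmt/0n) and SRC the zfs mountpoint, both of which are assumptions here. 256 KB is only a reasonable starting block size for an LTO drive, not a recommendation; dd fixes the record size the drive sees, and larger records mean fewer inter-record gaps on tape.

```shell
SRC=/tmp/tar_demo_src
OUT=/tmp/tar_demo_tape.img        # stand-in for /dev/rmt/0n
mkdir -p "$SRC"
echo "sample data" > "$SRC/file.txt"
# tar streams the tree; dd reblocks it into fixed 256 KB records
tar cf - -C /tmp tar_demo_src | dd of="$OUT" bs=256k 2>/dev/null
# read back the archive's table of contents to verify it's intact
tar tf "$OUT"
```

On a real tape you'd use the same `tar tf` read-back (against the rewound device) before trusting the copy.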

Might send it across the (Gigabit Ethernet) network to a server that's already on the new SAN, but I was trying to avoid bogging down the network or hogging the other server's NIC.

I've seen examples online for sending via the network; it involves piping zfs send over ssh to zfs receive, right? Could I maybe use rsh instead, if I enable it temporarily between the two hosts?
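The commonly posted form is sketched below. All names are hypothetical (pool bigpool/fs, snapshot @migrate, host newhost, destination pool newpool), and the commands are printed rather than executed here, since they need live pools on both ends; zfs send operates on a snapshot, so one is taken first.

```shell
# Hypothetical names throughout; printed, not run, because the
# pipeline needs real ZFS pools on both the sending and receiving host.
PIPELINE='zfs snapshot bigpool/fs@migrate
zfs send bigpool/fs@migrate | ssh newhost zfs receive -d newpool'
printf '%s\n' "$PIPELINE"
```

With rsh the middle becomes `rsh newhost zfs receive -d newpool`, which skips the cipher overhead on a trusted LAN; some people also put a buffering tool such as mbuffer on each side to keep the stream moving, but the plain ssh pipe is the usual example.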

Thanks again, all.

--
David Strom

On 1/11/2011 11:43 PM, Ian Collins wrote:
On 01/12/11 04:15 AM, David Strom wrote:
I've used several tape autoloaders during my professional life. I
recall that we can use ufsdump or tar or dd with at least some
autoloaders where the autoloader can be set to automatically eject a
tape when it's full & load the next one. Has always worked OK whenever
I tried it.

I'm planning to try this with a new Quantum Superloader 3 with LTO5
tape drives and zfs send. I need to migrate a Solaris 10 host on a
V440 to a new SAN. There is a 10 TB zfs pool & filesystem that is
made up of 3 LUNs of different sizes put in the zfs pool, and it's
almost full. Rather than copying the various-sized LUNs from the old
SAN storage unit to the new one & getting ZFS to recognize the pool, I
thought it would be cleaner to dump the zfs filesystem to the tape
autoloader & restore it to a single 10 TB LUN. The users can live
without this zfs filesystem for a few days.


Why can't you just send directly to the new LUN? Create a new pool, send
the data, export the old pool and rename.
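[Spelled out, that suggestion would look roughly like the sketch below. Pool and device names are hypothetical (old pool "tank", new 10 TB LUN c5t0d0, staging pool "newpool"), and the commands are printed rather than executed, since they need the real LUNs; a pool is "renamed" by exporting it and importing it under a new name.]

```shell
# Hypothetical pool/device names; printed, not run, because the steps
# need the actual SAN LUNs attached to the host.
STEPS='zpool create newpool c5t0d0
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F -d newpool
zpool export tank
zpool export newpool
zpool import newpool tank'
printf '%s\n' "$STEPS"
```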

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
