Hi Dave,

Until ZFS/flash support is integrated into an upcoming Solaris 10
release, I don't think we have an easy way to clone a root pool/dataset
from one system to another, because system-specific information is still
maintained.

Your manual solution sounds plausible, but it probably won't work because
of that system-specific info.

Here are some options:

1. Wait for the ZFS/flash support in an upcoming Solaris 10 release.
You can track CR 6690473 for this support.

2. Review these interim solutions, which involve UFS to ZFS migration but
might give you some ideas:

http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs
http://blogs.sun.com/scottdickson/entry/a_much_better_way_to

3. Do an initial installation of your new server with a two-disk mirrored
root pool. Set up a separate pool for data/applications. Snapshot the data
on the E450 and send/receive it over to the data/app pool on the new
server.
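For option 3, the data migration might look something like this sketch.
The pool, snapshot, and host names here are made-up examples, not
anything from your actual setup:

```shell
# On the E450: snapshot all data filesystems recursively
# (example pool name: datapool).
zfs snapshot -r datapool@migrate

# Stream the whole hierarchy to the data/app pool on the new server.
# -R sends descendant filesystems, snapshots, and properties;
# -F on the receiving side forces a rollback of the destination;
# -d names received datasets after the sent paths minus the pool name.
zfs send -R datapool@migrate | ssh newserver zfs receive -Fd datapool
```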

Cindy

Dave Ringkor wrote:
So I had an E450 running Solaris 8 with VxVM encapsulated root disk.  I 
upgraded it to Solaris 10 ZFS root using this method:

- Unencapsulate the root disk
- Remove VxVM components from the second disk
- Live Upgrade from 8 to 10 on the now-unused second disk
- Boot to the new Solaris 10 install
- Create a ZFS pool on the now-unused first disk
- Use Live Upgrade to migrate root filesystems to the ZFS pool
- Add the now-unused second disk to the ZFS pool as a mirror
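For reference, those steps roughly correspond to this command sequence.
This is only a sketch -- the disk devices, boot environment names, and
media path are examples, not the ones I actually used:

```shell
# From the running Solaris 8 system: create a new boot environment on
# the (now-unused) second disk, then upgrade it to Solaris 10 from media.
lucreate -n s10be -m /:/dev/dsk/c0t1d0s0:ufs
luupgrade -u -n s10be -s /cdrom/cdrom0
luactivate s10be
init 6

# From the running Solaris 10 system: build a root pool on the first
# disk, migrate root to ZFS with Live Upgrade, and boot into it.
zpool create rpool c0t0d0s0
lucreate -n s10zfsbe -p rpool
luactivate s10zfsbe
init 6

# Finally, attach the second disk to form the mirror.
zpool attach rpool c0t0d0s0 c0t1d0s0
```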

Now my E450 is running Solaris 10 5/09 with ZFS root, and all the same users, 
software, and configuration that it had previously.  That is pretty slick in 
itself.  But the server itself is dog slow and more than half the disks are 
failing, and maybe I want to clone the server on new(er) hardware.

With ZFS, this should be a lot simpler than it used to be, right? A new
server has new hardware: new disks with different names and different
sizes. But that doesn't matter anymore.

There's a procedure in the ZFS manual to recover a corrupted server by
using zfs receive to reinstall a copy of the boot environment into a newly
created pool on the same server. But what if I used zfs send to save a
recursive snapshot of my root pool on the old server, booted my new server
(same architecture) from the DVD in single-user mode, created a ZFS pool
on its local disks, and did zfs receive to install the boot environments
there? The filesystems don't care about the underlying disks, the pool
hides the disk specifics, and there's no vfstab to edit.
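[Modulo the system-specific info issue, the proposed procedure would look
roughly like the root pool recovery steps in the ZFS Administration
Guide. This is a sketch only -- the device, pool, boot environment, and
NFS host names are illustrative examples:

```shell
# On the old E450: save a recursive snapshot of the root pool to a file
# (an NFS share is assumed here as the transfer medium).
zfs snapshot -r rpool@clone
zfs send -R rpool@clone > /net/nfshost/export/rpool.clone

# On the new server, booted single-user from the Solaris 10 DVD:
# create the new root pool and receive the saved stream into it.
zpool create -f -o failmode=continue -R /a -m legacy rpool c0t0d0s0
zfs receive -Fdu rpool < /net/nfshost/export/rpool.clone

# Mark which dataset to boot from and install the SPARC boot block.
zpool set bootfs=rpool/ROOT/s10be rpool
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t0d0s0
```
]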
Off the top of my head, the only thing I can think of that would have to
change is the network interfaces. And that change is as simple as
"cd /etc ; mv hostname.hme0 hostname.qfe0" or whatever. Is there anything
else I'm not thinking of?

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss