Thanks Edward, you understood me perfectly.

Your suggestion sounds very promising. I like the idea of letting the 
installation CD set everything up; that way some hardware/drivers could be 
updated along the way and things would still work. On top of a bare-metal 
recovery, I would like to leverage the incredible power of ZFS snapshots; 
I love the way zfs send / receive works. It's the complexities of the root 
pool and boot environments (BEs) that worry me.

My ideal solution would be to have the data accessible from the backup media 
(an external HDD) and also usable for a full system restore. Below is what I 
would consider ideal (rough commands follow the list):

1.) Create a pool on an external HDD called backup-pool.
2.) Send the whole rpool (all filesystems within) to the backup pool.
3.) Be able to browse the backup pool starting from /backup-pool.
4.) Be able to export the backup pool and import it on PC2 to browse the 
files there.
5.) Be able to create another snapshot of rpool and send only the increment 
to the backup pool/drive, i.e. "zfs send -i rpool@first-snapshot 
rpool@next-snapshot" received into backup-pool/rpool.
6.) Be able to browse the latest snapshot data on the backup drive, whilst 
still being able to clone an older snapshot.
7.) Be able to 'zfs send' the latest backup snapshot back onto a fresh 
installation, thus getting it back to exactly how it was before the disaster.
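For the record, here's roughly what I've been running for steps 1-6. It's 
only a sketch of my own setup: the device name c5t0d0 and the clone name 
home-old are placeholders, and I'm assuming 'zfs receive -u' (skip mounting 
on receive, so the backup's mountpoints don't clash with the live system's) 
is available in b134:

  # 1) Create the backup pool on the external disk
  zpool create backup-pool c5t0d0

  # 2) Recursive snapshot of the root pool, then send the lot;
  #    send -R carries all descendant filesystems, snapshots and properties
  zfs snapshot -r rpool@first-snapshot
  zfs send -R rpool@first-snapshot | zfs receive -u backup-pool/rpool

  # 3) Re-point the mountpoint so everything browses under /backup-pool
  #    (children that came over with an explicit mountpoint, e.g.
  #    /export, may need the same treatment)
  zfs set mountpoint=/backup-pool/rpool backup-pool/rpool

  # 4) Move the drive between machines
  zpool export backup-pool
  zpool import backup-pool      # run this one on PC2

  # 5) Later on: new snapshot, send only the increment; -F rolls back
  #    any stray changes made on the backup side since the last receive
  zfs snapshot -r rpool@next-snapshot
  zfs send -R -i rpool@first-snapshot rpool@next-snapshot | \
      zfs receive -Fu backup-pool/rpool

  # 6) The latest data is browsable in place, and an older snapshot can
  #    be cloned alongside it without disturbing anything
  zfs clone backup-pool/rpool/export/home@first-snapshot backup-pool/home-old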

At the moment I have successfully achieved 1-4 and I'm very impressed. I am 
currently trying to get 5-6 working and I'm mildly confident it will: I've 
done it in part, but I got errors with the /export/home filesystem and the 
pool subsequently failed to import/export. It's just copying over again now 
after I wiped the backup pool and started over. I hope build 134 is a good 
build to test this on.

However, it's step 7 that I have no idea whether it will work. Edward, your 
post gives me hope; 90% confidence is a good start.
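In case anyone can sanity-check my thinking, this is the rough shape I have 
in mind for step 7, loosely based on the root pool recovery procedure in the 
ZFS Administration Guide. None of it is tested yet, and the BE name 
rpool/ROOT/opensolaris and the disk device are placeholders:

  # Boot from the installation CD into a shell; the installer (or zpool
  # create) provides a fresh rpool, then pull in the backup pool
  zpool import backup-pool

  # Because the backup lives at backup-pool/rpool, 'zfs receive -d rpool'
  # would only strip the pool name and recreate it as rpool/rpool, so I
  # plan to receive the top-level filesystems individually
  zfs send -R backup-pool/rpool/ROOT@next-snapshot | zfs receive -Fu rpool/ROOT
  zfs send -R backup-pool/rpool/export@next-snapshot | zfs receive -Fu rpool/export

  # Point the pool at the restored boot environment and reinstall GRUB
  zpool set bootfs=rpool/ROOT/opensolaris rpool
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0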

Watch this space for my results.