> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Arjun YK
> 
> Trying to understand how to back up the mirrored ZFS boot pool 'rpool' to
> tape, and restore it in case the disks are lost.
> Backup would be done with an enterprise tool like TSM, Legato, etc.

Backup/restore of a bootable rpool to tape with a 3rd party application like
Legato is difficult, because if you need to do a bare-metal restore, how are
you going to do it?  The root of the problem is that you need a running OS
with Legato installed in order to restore the OS.  It's a catch-22.  It is
much easier if you can restore the rpool from some storage that doesn't
require the 3rd party tool to access it ...

I might suggest:  Use "zfs send" to back up rpool to a file in the data
pool, and then use Legato etc. to back up the data pool.  If you need to do
a bare-metal restore some day, you would just install a new OS, install
Legato or whatever, and restore your data pool.  Then you could boot to a
command prompt from the installation disc and restore (overwrite) the rpool
from the rpool backup file.
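
A minimal sketch of that approach (the snapshot name, the /datapool path,
the device name and the boot-environment name are all just examples; adjust
them to your own layout):

  # on the running system: dump a recursive snapshot of rpool into the data pool
  zfs snapshot -r rpool@backup
  zfs send -R rpool@backup > /datapool/backup/rpool.zfs

  # bare-metal restore, after booting the install media and importing datapool:
  zpool create rpool c0t0d0s0                        # example device
  zfs receive -Fdu rpool < /datapool/backup/rpool.zfs
  zpool set bootfs=rpool/ROOT/myBE rpool             # example boot environment
  # finally reinstall the boot blocks (installgrub on x86, installboot on SPARC)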

But I hope you can abandon the 3rd party backup software and tapes entirely.
Some people can, and others cannot.  By far the fastest way to back up ZFS
is to use zfs send | zfs receive onto another system or a set of removable
disks.  zfs send has the major advantage that it doesn't need to crawl the
whole filesystem scanning for changes: it already knows which blocks have
changed since the last snapshot and fetches only those.
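
For example (the hostname, pool names and snapshot names here are made up),
a nightly update of a remote copy can look like this:

  # first time: send a full replication stream to the backup machine
  zfs snapshot -r datapool@night1
  zfs send -R datapool@night1 | ssh backuphost zfs receive -F backuppool/datapool

  # every night after that: only the blocks changed between snapshots travel
  zfs snapshot -r datapool@night2
  zfs send -R -i datapool@night1 datapool@night2 | \
      ssh backuphost zfs receive -F backuppool/datapool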


> 1. Is it possible to back up 'rpool' as a single entity, or do we need to
> back up each filesystem, volume, etc. within rpool separately?

You can do it either way you like.  Specify a single filesystem, or send it
recursively with all of its children.  See man zfs send.
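
For example (the snapshot name is illustrative, and rpool/export/home is
just the usual sort of child dataset):

  zfs send rpool/export/home@today      # just this one filesystem
  zfs send -R rpool@today               # rpool plus all descendent filesystems,
                                        # volumes and snapshots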


> 2. How do we back up the whole ZFS structure (pool, filesystem, volume,
> snapshot, etc.) along with all its property settings, not just the actual
> data stored within?

Regarding pool and filesystem properties, I believe this changed at some
point.  There was a time when I would run "zpool get all mypool" and "zfs
get all mypool" and store the output as text files alongside the backup.
But if you check the man page for zfs send, I believe a replication (-R)
stream now carries the dataset properties automatically.

No matter what, you'll have to create a pool before you can restore.  So
you'll just have to take it upon yourself to remember your pool architecture
... striping, mirroring, raidz, cache & log devices etc.
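
Capturing that is just a matter of saving a few command outputs next to the
backup (the filenames and paths are examples):

  # dataset properties ride along in a replication stream, but the vdev layout
  # and zpool-level properties do not, so keep a plain-text copy of them
  zpool status rpool        > /datapool/backup/rpool-layout.txt
  zpool get all rpool       > /datapool/backup/rpool-poolprops.txt
  zfs get -r all rpool      > /datapool/backup/rpool-fsprops.txt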

Incidentally, when you do an incremental zfs send, you have to specify the
"from" and "to" snapshots, so there must be at least one identical snapshot
on both the sending and receiving systems (or else your only option is a
complete full send).  Point is:  You can take a lot of baby steps, keeping
every intermediate snapshot if you wish, or you can jump straight from the
oldest matching snapshot to the latest one.  The jump completes somewhat
faster, but you lose granularity in the backups.
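
In zfs send terms (snapshot and pool names made up), the two styles are:

  # jump: one stream covering everything from @jan to @apr, nothing kept in between
  zfs send -i datapool@jan datapool@apr | zfs receive backuppool/datapool

  # baby steps: -I also replicates every intermediate snapshot, keeping granularity
  zfs send -I datapool@jan datapool@apr | zfs receive backuppool/datapool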


> 3. If the whole structure cannot be backed up using enterprise backup, how
> do we save and restore the ZFS structure in case the disks are lost?  I
> have read about 'zfs send receive ...'.  Is this the only recommended way?

For anything other than rpool, you can use any normal backup tool you like:
NetBackup, Legato, tar, cpio, whatever.  (For rpool I wouldn't really trust
those - I recommend zfs send for rpool.)  You can also use zfs send &
receive for data pools.  You gain performance (potentially a backup window
that is orders of magnitude shorter) if zfs send & receive are acceptable in
your environment.  But it's not suitable for everyone, for several reasons:
You can't exclude anything from a zfs send, and you can't do a selective zfs
receive - it's the whole filesystem or nothing.  Also, a single bit of
corruption renders the whole stream unusable, so it's not recommended to
store a "zfs send" data stream for later use.  The recommendation is to pipe
zfs send directly into zfs receive, which implies disk-to-disk, no tape.
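
So for the data pools, if you can, keep a second pool (on another host or a
set of removable disks - the names here are examples) and receive into it
directly:

  # the receiving end verifies the stream as it arrives, so corruption is
  # caught at backup time rather than at restore time
  zfs send -R datapool@today | zfs receive -F backuppool/datapool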

