Comments inline:
 
On Wednesday, March 03, 2010, at 06:35PM, "Svein Skogen" <sv...@stillbilde.net> 
wrote:
>
>However trying to wrap my head around solaris and backups (I'm used to 
>FreeBSD) is now leaving me with a nasty headache, and still no closer to a 
>real solution. I need something that on regular intervals pushes this zpool:
>
>storage  4.06T  1.19T  2.87T    29%  1.00x  ONLINE  -
>
>onto a series of tapes, and I really want a solution that allows me to have 
>something resembling a one-button-disaster recovery, either via a cd/dvd 
>bootdisc, or a bootusb image, or via writing a bootblock on the tapes. 
>Preferably a solution that manages to dump the entire zpool, including zfses 
>and volumes and whatnot. If I can dump the rpool along with it, all the 
>better. (basically something that allows me to shuffle a stack of tapes into 
>the safe, maybe along with a bootdevice, with the effect of making me sleep 
>easy knowing that ... when disaster happens, I can use a similar-or-better 
>specced box to restore the entire server to bring everything back on line).
>
>are there ... ANY good ideas out there for such a solution?
>-- 
Only limited by your creativity.  Out of curiosity, why tape for disaster 
recovery?  That strikes me as more work, not to mention much more complicated 
for disaster recovery, since LTO drives aren't usually found as standard kit on 
most machines.  As a quick idea, how about the following:

Boot your system from a USB key (or portable HD), and dd the key to a spare 
that's kept in the safe, updated whenever you do anything substantial.  That way 
you recover not just a bootable system but any system-level customization you've 
done.  This does, however, require downtime for the duplication.
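
A minimal sketch, assuming the live key shows up as c1t0d0 and the spare as 
c2t0d0 (device names are placeholders - check with format or rmformat first):

    # block-for-block copy of the boot key onto the spare;
    # run with the system quiesced so the copy is consistent
    dd if=/dev/rdsk/c1t0d0p0 of=/dev/rdsk/c2t0d0p0 bs=1024k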

For the data, rather than fight with tapes, I'd go buy a dual-bay disk 
enclosure and pop in two 2 TB drives.  Attach that to the server (USB/eSATA, 
whatever's convenient) and use zfs send/recv to copy snapshots over into a fully 
exploitable copy.  Put that in the safe with the USB key and you have a 
completely mobile solution that needs only a computer.  Assuming you don't 
fill up your current 4 TB of storage, you can keep a number of snapshots to 
replace the incremental copies done to tape in the old-fashioned world. Better 
yet, do this to two destinations and rotate one off-site.
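
A rough sketch of that cycle, assuming the external pool is called backup and 
using dated snapshot names (all of these names are examples):

    # one-time: build the backup pool on the enclosure's drives
    zpool create backup mirror c3t0d0 c3t1d0

    # seed the first full copy; -R replicates the whole hierarchy
    # (filesystems, volumes, properties), -u skips mounting the copies
    zfs snapshot -r storage@bk-20100303
    zfs send -R storage@bk-20100303 | zfs recv -Fdu backup

    # later runs send only the delta between two snapshots
    zfs snapshot -r storage@bk-20100310
    zfs send -R -I storage@bk-20100303 storage@bk-20100310 | zfs recv -Fdu backup

The -R stream covers the "zfses and volumes and whatnot" part of the question.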

That would be the best as far as disaster-recovery convenience goes, but it does 
still require the legwork of attaching the backup disks, running the send/recv, 
exporting the pool, and putting it back in the safe. Using a second machine 
somewhere and sending across the network is more easily scalable (but possibly 
more expensive).
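
The network variant is the same pipeline with ssh in the middle (the host name 
backuphost and the pool name are placeholders):

    # push the latest delta to a pool on another box
    zfs send -R -I storage@bk-20100303 storage@bk-20100310 | \
        ssh backuphost zfs recv -Fdu backup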

Remember that by copying to another zpool you have a fully exploitable 
backup copy.  I don't think that copying zfs send streams to tape is a 
reasonable approach to backups - way too many failure points and 
dependencies. And with a zpool copy, testing your backup is easy - just import 
the pool and scrub.  Testing against tape adds wear and tear to the tapes, 
needs room to restore into, is time-consuming, and is a general PITA (but it's 
essential!)
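
Verifying the disk copy is a couple of commands (pool name as above; the 
altroot keeps its mountpoints out of the live system's way):

    zpool import -R /mnt backup   # with the enclosure attached
    zpool scrub backup
    zpool status backup           # check for errors once the scrub completes
    zpool export backup           # detach cleanly before it goes back in the safe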

If you want to stick with a traditional approach, Amanda is a good choice, and 
OpenSolaris does include an NDMP service, although I haven't looked at it yet.

This kind of design depends on your RTO, RPO, administrative constraints, data 
retention requirements, budget and your definition of a disaster...

IMHO, disk-to-disk with zfs send/recv offers a very flexible and practical 
solution to many backup and restore needs. Your storage media can be wildly 
different - small, fast SAS for production going to fewer, bigger SATA drives - 
with asymmetric snapshot retention policies: keep a week in production and as 
many as you want on the bigger backup drives. Then do file-level dumps to tape 
from the backup volumes for archival purposes; those can be restored onto any 
filesystem.
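
A hedged sketch of that last step, reading from a snapshot on the backup pool 
so the dump is consistent (the mountpoint under the altroot and the tape device 
/dev/rmt/0 are assumptions; adjust to your layout):

    # with the backup pool imported under /mnt as above,
    # archive one snapshot's files to the default tape drive
    cd /mnt/backup/.zfs/snapshot/bk-20100310
    tar cf /dev/rmt/0 .

Because it's plain tar, the archive can be restored later onto UFS, ZFS, or 
anything else.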

Cheers,

Erik