You can also select which snapshots you'd like to copy, and egrep away what
you don't need.
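
For instance, to preview which snapshots a given tag would select while
skipping the swap/dump volumes (a sketch using the same filter pattern as
the initial dump below; TAGNEW is set in the prepare step):

# list the snapshots a given tag would select, minus swap/dump volumes
zfs list -H -t snapshot -o name | grep "@$TAGNEW" | egrep -vi 'swap|rpool/dump'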

Here's what I did to back up some servers to a filer (as compressed ZFS
snapshots stored into files, for simple further deployment on multiple
servers as well as offsite rsyncing of said files). The example below is a
framework from our scratchpad docs; adapt it to the specific server's
environment.

Such sending and receiving commands (see below) can also be piped together
directly on a local system, without the use of intermediate files (or gzip,
ssh, whatever).
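
For instance, a one-off local copy of a dataset's snapshot into a second
pool might look like this (a minimal sketch, assuming a spare local pool
named backuppool; names are examples):

# full stream piped straight into another local pool, no intermediate file
zfs send pool/zones@"$TAGNEW" | zfs recv -Fvd backuppool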

##### ZFS snapshot dumps

# prepare: name the previous and the new snapshot tags, then take
# a new recursive snapshot of the whole tree
TAGPRV='20090427-01'
TAGNEW='20090430-01-running'
zfs snapshot -r pool/zones@"$TAGNEW"

# incremental dump over NFS (needs set TAGNEW/TAGPRV)
cd /net/back-a/export/DUMP/manual/`hostname` && \
for ZSn in `zfs list -t snapshot | grep "$TAGNEW" | awk '{ print $1 }'`; do 
ZSp=`echo $ZSn | sed "s/$TAGNEW/$TAGPRV/"`; Fi="`hostname`%`echo $ZSn | sed 
's/\//_/g'`.incr.zfsshot.gz"; echo "=== `date`"; echo "===== prev: $ZSp"; echo 
"===== new: $ZSn"; echo "===== new: incr-file: $Fi"; /bin/time zfs send -i 
"$ZSp" "$ZSn" | /bin/time pigz -c - > "$Fi"; echo "   res = [$?]"; done

# incremental dump over ssh (needs TAGNEW/TAGPRV set; remote path
# hardcoded at the end)
for ZSn in `zfs list -t snapshot | grep "$TAGNEW" | awk '{ print $1 }'`; do
    ZSp=`echo $ZSn | sed "s/$TAGNEW/$TAGPRV/"`
    Fi="`hostname`%`echo $ZSn | sed 's/\//_/g'`.incr.zfsshot.gz"
    echo "=== `date`"
    echo "===== prev: $ZSp"
    echo "===== new: $ZSn"
    echo "===== new: incr-file: $Fi"
    /bin/time zfs send -i "$ZSp" "$ZSn" | /bin/time pigz -c - | \
        ssh back-a "cat > /export/DUMP/manual/`hostname`/$Fi"
    echo "   res = [$?]"
done

All in all, these loops send the incremental stream between $TAGPRV and
$TAGNEW for each dataset into per-snapshot files under a per-server
directory. The streams are compressed quickly with pigz (parallel gzip)
before being written.
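
Note that "zfs send -i" fails if the previous snapshot is missing on a
dataset, so it may be worth verifying the chain before sending (a sketch
using the same tag variables):

# sanity check: every dataset tagged TAGNEW should also carry a TAGPRV snapshot
for ZSn in `zfs list -H -t snapshot -o name | grep "$TAGNEW"`; do
    ZSp=`echo $ZSn | sed "s/$TAGNEW/$TAGPRV/"`
    zfs list "$ZSp" > /dev/null 2>&1 || echo "MISSING previous snapshot: $ZSp"
done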

First of all, you'd of course need an initial dump (a full dump of some starting snapshot):

# Initial dump of everything except swap and dump volumes
zfs list -H -t snapshot | egrep -vi 'swap|rpool/dump' | grep "@$TAGPRV" | \
awk '{ print $1 }' | while read Z; do
    F="`hostname`%`echo $Z | sed 's/\//_/g'`.zfsshot"
    echo "`date`: $Z > $F.gz"
    time zfs send "$Z" | pigz -9 > "$F.gz"
done
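
As an optional sanity check, the resulting archives can be test-decompressed
before you rely on them (pigz output is plain gzip format):

# verify that each dump file decompresses cleanly
for F in *.zfsshot.gz; do gzip -t "$F" || echo "BAD: $F"; done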

Now, if your snapshots are named in an incrementing manner (like the
timestamped examples above), you'll end up with a directory of files named
like this (it's assumed that the incremental snapshots all form a valid
chain):

servername%p...@20090214-01.zfsshot.gz
servername%pool_zo...@20090214-01.zfsshot.gz
servername%pool_zo...@20090405-03.incr.zfsshot.gz
servername%pool_zo...@20090427-01.incr.zfsshot.gz
servername%pool_zones_gene...@20090214-01.zfsshot.gz
servername%pool_zones_gene...@20090405-03.incr.zfsshot.gz
servername%pool_zones_gene...@20090427-01.incr.zfsshot.gz
servername%pool_zones_general_...@20090214-01.zfsshot.gz
servername%pool_zones_general_...@20090405-03.incr.zfsshot.gz
servername%pool_zones_general_...@20090427-01.incr.zfsshot.gz
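
Per-dataset files like these also make the offsite copy mentioned above
trivial (a sketch; the offsite host and target path are examples):

# push the dump directory to an offsite host (names are examples)
rsync -av --partial /export/DUMP/manual/`hostname`/ offsite-host:/backup/`hostname`/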

The last one is a large snapshot of the actual zone (ns4), while the first
ones are small datasets which merely form nodes in the hierarchical tree;
there are usually lots of those :)

You can then simply import these files into a zfs pool with a script like:

# for F in *.zfsshot.gz; do echo "=== $F"; gzcat "$F" | time zfs recv -Fvd pool; done

It's probably better to run "zfs recv -nFvd" first (dry-run verbose mode, as
opposed to the actual import above) to be certain about your write targets
and about overwriting stuff: "zfs recv -F" would destroy any snapshots newer
than the incoming one, so you can first check which ones those are and
possibly clone/rename them beforehand.
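
For instance, to see what's already on the receiving pool, dry-run a stream,
and preserve a newer snapshot that a forced receive would otherwise destroy
(a sketch; the file and snapshot names are examples):

# list what already exists on the receiving pool
zfs list -r -t snapshot pool

# dry-run: show what the stream would do, write nothing
gzcat somefile.zfsshot.gz | zfs recv -nFvd pool

# keep a newer snapshot out of harm's way before the forced receive
zfs rename pool/zones/somefs@20090501-01 pool/zones/somefs@20090501-01-keep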

// HTH, Jim Klimov