Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-14 Thread Carl Brewer
This is what I've done, but am still a bit stuck, as it doesn't quite work!

I scan the zpool list for the drive (I created backup1/data and backup2/data on 
the two USB drives)

/usr/sbin/zpool import backup2
/usr/sbin/zfs snapshot -r rpool@20090715033358
/usr/sbin/zfs destroy rpool/swap@20090715033358
/usr/sbin/zfs destroy rpool/dump@20090715033358
/usr/sbin/zfs send -R rpool@20090715033358 | /usr/sbin/zfs recv -d -F backup2/dump
/usr/sbin/zfs unmount -f /backup2   # one of the rpool bits is shared; if I don't do this it refuses to export
/usr/sbin/zpool export backup2


The send/recv bit isn't working.  It moans:

/usr/sbin/zfs send -R rpool@20090715033358 | /usr/sbin/zfs recv -d -F backup2/dump
cannot receive new filesystem stream: destination has snapshots (eg. backup2/dump@zfs-auto-snap:monthly-2009-06-29-12:47)
must destroy them to overwrite it

I get dozens of auto-snapshots in there and I'm not sure how they got there.  I've not got Time Slider set to create anything in backup/ - I think they're being created when the send/recv runs?

When I try again after deleting all the snapshots on backup2/ (it would be nice if zfs destroy took multiple arguments!), it seems to cheerfully recreate all those snapshots, but I really only want it to grab the one I took.

zfs list shows:

zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
backup2                          15.4G   898G    23K  /backup2
backup2/dump                     15.4G   898G  84.5K  /backup2/dump
backup2/dump/ROOT                15.1G   898G    21K  legacy
backup2/dump/ROOT/b118           15.0G   898G  11.3G  /
backup2/dump/ROOT/opensolaris    37.4M   898G  5.02G  /
backup2/dump/ROOT/opensolaris-1  88.2M   898G  11.2G  /
backup2/dump/cashmore             194K   898G    22K  /backup2/dump/cashmore
backup2/dump/export               232M   898G    23K  /export
backup2/dump/export/home          232M   898G   737K  /export/home
backup2/dump/export/home/carl     228M   898G   166M  /export/home/carl
rpool                            17.4G   896G  84.5K  /rpool
rpool/ROOT                       15.1G   896G    19K  legacy
rpool/ROOT/b118                  15.0G   896G  11.3G  /
rpool/ROOT/opensolaris           37.7M   896G  5.02G  /
rpool/ROOT/opensolaris-1         88.4M   896G  11.2G  /
rpool/cashmore                    199K   896G    22K  /rpool/cashmore
rpool/dump                       1018M   896G  1018M  -
rpool/export                      232M   896G    23K  /export
rpool/export/home                 232M   896G   736K  /export/home
rpool/export/home/carl            228M   896G   166M  /export/home/carl
rpool/swap                       1018M   897G   101M  -

Now, if I try again to send to it, is there a magic incantation for recv that says "incremental update"?  Is this the right way to do what I'm trying to do?
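
(For reference: "zfs send -R" combined with -i builds exactly that kind of incremental replication stream - see Darren's man page excerpt further down.  A minimal sketch, with @PREVIOUS standing in for whatever earlier snapshot already exists on both rpool and the backup pool:)

# Only the changes between @PREVIOUS and the new snapshot are sent.
# @PREVIOUS is a placeholder for an earlier snapshot present on both sides.
zfs send -R -i rpool@PREVIOUS rpool@20090715033358 | zfs recv -d -F backup2/dump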


Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-13 Thread Carl Brewer
Last question, I promise! :)

Google's not helped me much, but that's probably my keyword-ignorance.

I have two USB HDDs that I want to swap over, so there's always one off-site and one plugged in; they get swapped weekly.  Not perfect, but sufficient for this site's risk assessment.  They'll have the relevant ZFS snapshots sent to them.  I assume that if I plug one of these drives into another box that groks ZFS, it'll see a filesystem and be able to access the files.

I formatted the two drives (1TB each).

The tricky bit, I think, is swapping them.  I can mount one and then send/recv to it, but what's the best way to automate the swap?  A human has to physically switch them on and off and plug them in, but what's the process on the ZFS side?

Does each drive need a separate mountpoint?  In the old UFS days I'd have just mounted them from an entry in (v)fstab in a cron job and they'd look the same as far as everything else was concerned, but with ZFS I'm a little confused.

Can anyone here outline the procedure for this, assuming the USB drives will always be plugged into the same USB port?  (The server will be in a cabinet and the backup drives outside of it, so nobody has to open the cabinet and bump things that don't like to be bumped!)
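
(A sketch of one possible rotation approach - an assumption, not from this thread - with two pools named backup1 and backup2, importing whichever drive happens to be attached by pool name rather than by mountpoint:)

#!/bin/sh
# Import whichever backup pool is currently plugged in (the other import fails quietly),
# receive the latest snapshots into it, then export it so the drive can be unplugged.
for pool in backup1 backup2; do
    if /usr/sbin/zpool import "$pool" 2>/dev/null; then
        # ... zfs send | zfs recv into $pool goes here ...
        /usr/sbin/zpool export "$pool"
        break
    fi
done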

Thankyou again for everyone's help.


Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-09 Thread Jim Klimov
You can also select which snapshots you'd like to copy - and egrep away what you
don't need.

Here's what I did to back up some servers to a filer (as compressed ZFS snapshot streams stored into files, for simple further deployment on multiple servers as well as offsite rsyncing of said files).  The example below is a framework from our scratchpad docs; adapt it to a specific server's environment.

Of course, such sending and receiving examples (see below) can be piped together without the use of files (and gzip, ssh, and so on) within a local system.
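
For example, a local incremental replication (a sketch, using the same $TAGPRV/$TAGNEW tags as below and a hypothetical local pool named backup to receive into) is just:

# Pipe the incremental stream straight into a local receive - no files, no compression.
# Assumes backup/zones already holds the @$TAGPRV snapshot from an earlier full receive.
zfs send -i pool/zones@"$TAGPRV" pool/zones@"$TAGNEW" | zfs recv -d backup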

# ZFS snapshot dumps

# prepare
TAGPRV='20090427-01'
TAGNEW='20090430-01-running'
zfs snapshot -r pool/zones@"$TAGNEW"

# incremental dump over NFS (needs set TAGNEW/TAGPRV)
cd /net/back-a/export/DUMP/manual/`hostname` && \
for ZSn in `zfs list -t snapshot | grep "$TAGNEW" | awk '{ print $1 }'`; do
  ZSp=`echo $ZSn | sed "s/$TAGNEW/$TAGPRV/"`
  Fi="`hostname`%`echo $ZSn | sed 's/\//_/g'`.incr.zfsshot.gz"
  echo "=== `date`"; echo "= prev: $ZSp"; echo "= new: $ZSn"; echo "= new: incr-file: $Fi"
  /bin/time zfs send -i "$ZSp" "$ZSn" | /bin/time pigz -c - > "$Fi"
  echo "   res = [$?]"
done

# incremental dump over ssh (needs set TAGNEW/TAGPRV; paths hardcoded in the end)
for ZSn in `zfs list -t snapshot | grep "$TAGNEW" | awk '{ print $1 }'`; do
  ZSp=`echo $ZSn | sed "s/$TAGNEW/$TAGPRV/"`
  Fi="`hostname`%`echo $ZSn | sed 's/\//_/g'`.incr.zfsshot.gz"
  echo "=== `date`"; echo "= prev: $ZSp"; echo "= new: $ZSn"; echo "= new: incr-file: $Fi"
  /bin/time zfs send -i "$ZSp" "$ZSn" | /bin/time pigz -c - | ssh back-a "cat > /export/DUMP/manual/`hostname`/$Fi"
  echo "   res = [$?]"
done

All in all, these loops send the incremental streams between $TAGPRV and $TAGNEW into per-snapshot files in per-server directories.  They are compressed with pigz (parallel gzip) before being written.

First of all you'd of course need an initial dump (a full dump of any snapshot):

# Initial dump of everything except swap volumes
zfs list -H -t snapshot | egrep -vi 'swap|rpool/dump' | grep "@$TAGPRV" | awk '{ print $1 }' | \
while read Z; do
  F="`hostname`%`echo $Z | sed 's/\//_/g'`.zfsshot"
  echo "`date`: $Z > $F.gz"
  time zfs send "$Z" | pigz -9 > $F.gz
done

Now, if your snapshots are named in an incrementing manner (like the timestamped examples above), you end up with a directory of files named like this (it's assumed that the incremental snapshots all form a valid chain):

servername%pool@20090214-01.zfsshot.gz
servername%pool_zones@20090214-01.zfsshot.gz
servername%pool_zones@20090405-03.incr.zfsshot.gz
servername%pool_zones@20090427-01.incr.zfsshot.gz
servername%pool_zones_general@20090214-01.zfsshot.gz
servername%pool_zones_general@20090405-03.incr.zfsshot.gz
servername%pool_zones_general@20090427-01.incr.zfsshot.gz
servername%pool_zones_general_ns4@20090214-01.zfsshot.gz
servername%pool_zones_general_ns4@20090405-03.incr.zfsshot.gz
servername%pool_zones_general_ns4@20090427-01.incr.zfsshot.gz

The last one is a large snapshot of the zone (ns4), while the first ones are small datasets which simply form nodes in the hierarchical tree.  There are usually lots of these :)

You can simply import these files into a zfs pool by a script like:

# for F in *.zfsshot.gz; do echo "=== $F"; gzcat "$F" | time zfs recv -nFvd pool; done

It's probably better to use "zfs recv -nFvd" first (no-write, verbose mode) to be certain about your write targets and about what would be overwritten - "zfs recv -F" destroys any newer snapshots on the destination, so the dry run lets you check which ones would go and possibly clone/rename them first.  Drop the -n to actually receive.

// HTH, Jim Klimov


Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Daniel Carosone
> Thankyou!  Am I right in thinking that rpool
> snapshots will include things like swap?  If so, is
> there some way to exclude them? 

Hi Carl :)

You can't exclude them from the send -R with something like --exclude, but you 
can make sure there are no such snapshots (which aren't useful anyway) before 
sending, as noted.

As well as deleting them, another way to handle this is to not create them in the first place.  If you use the snapshots created by Tim's zfs-auto-snapshot service, that service observes a per-dataset property that excludes a dataset from having snapshots taken.
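
(A sketch, assuming the com.sun:auto-snapshot property that the zfs-auto-snapshot service checks:)

# Opt the swap and dump volumes out of automatic snapshots.
zfs set com.sun:auto-snapshot=false rpool/swap
zfs set com.sun:auto-snapshot=false rpool/dump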

There are convenient hooks in that service that you can use to facilitate the 
sending step directly once the snapshots are taken, and to use incremental 
sends of the snapshots as well.

You might consider your replication schedule, too - for example, keep "frequent" and maybe even "hourly" snapshots only on the internal pool, and replicate "daily" and coarser snapshots to the external drive ready for removal.  If you arrange your drive-swapping schedule well enough that a drive returns from offsite storage and is reconnected while the most recent snapshot it contains is still present in rpool, then catching it up on the week of snapshots it missed while offsite can be quick.


Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Lori Alt

On 07/08/09 15:57, Carl Brewer wrote:

Thankyou!  Am I right in thinking that rpool snapshots will include things like 
swap?  If so, is there some way to exclude them?  Much like rsync has --exclude?
  
By default, "zfs send -R" will send all the snapshots, including those of swap and dump.  But you can do the following after taking the snapshot:


# zfs destroy rpool/dump@mmddhh
# zfs destroy rpool/swap@mmddhh

and then do the "zfs send -R".  You'll get messages about the missing snapshots, but they can be ignored.

In order to re-create a bootable pool from your backup, there are 
additional steps required.  A full description of a procedure similar to 
what you are attempting can be found here:


http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery


Lori




Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Richard Elling

Carl Brewer wrote:

Thankyou!  Am I right in thinking that rpool snapshots will include things like 
swap?  If so, is there some way to exclude them?  Much like rsync has --exclude?
  


No.  Snapshots are a feature of the dataset, not the pool, so you would have separate snapshot policies for each file system (e.g. rpool) and volume (e.g. swap and dump).
-- richard



Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Carl Brewer
Thankyou!  Am I right in thinking that rpool snapshots will include things like 
swap?  If so, is there some way to exclude them?  Much like rsync has --exclude?


Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Darren J Moffat

Carl Brewer wrote:

G'day,
I'm putting together a LAN server with a couple of terabyte HDDs as a mirror 
(zfs root) on b117 (updated 2009.06).

I want to back up snapshots of all of rpool to a removable drive on a USB port - 
simple & cheap backup media for a two week rolling DR solution - ie: once a 
week a HDD gets swapped out and kept offsite.  I figure ZFS snapshots are perfect 
for local backups of files, it's only DR that we need the offsite backup for.

I created and formatted one drive on the USB interface (hopefully this will 
cope with drives being swapped in and out?), called it 'backup' to confuse 
things :)

zfs list shows:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
backup                     114K   913G    21K  /backup
rpool                     16.1G   897G    84K  /rpool
rpool/ROOT                13.7G   897G    19K  legacy
rpool/ROOT/opensolaris    37.7M   897G  5.02G  /
rpool/ROOT/opensolaris-1  13.7G   897G  10.9G  /
rpool/cashmore             140K   897G    22K  /rpool/cashmore
rpool/dump                1018M   897G  1018M  -
rpool/export               270M   897G    23K  /export
rpool/export/home          270M   897G   736K  /export/home
rpool/export/home/carl     267M   897G   166M  /export/home/carl
rpool/swap                1.09G   898G   101M  -

I've tried this :

zfs snapshot -r rpool@mmddhh
zfs send rpool@mmddhh | zfs receive -F backup/data

eg :

carl@lan2:/backup# zfs snapshot -r rpool@2009070804
carl@lan2:/backup# zfs send rpool@2009070804 | zfs receive -F backup/data


You are missing a -R for the 'zfs send' part.

What you have done there is create snapshots of all the datasets in rpool called 2009070804, but you only sent the one for the top-level rpool dataset.


 -R

 Generate a replication stream  package,  which  will
 replicate  the specified filesystem, and all descen-
 dant file systems, up to the  named  snapshot.  When
 received, all properties, snapshots, descendent file
 systems, and clones are preserved.

 If the -i or -I flags are used in  conjunction  with
 the  -R  flag,  an incremental replication stream is
 generated. The current  values  of  properties,  and
 current  snapshot and file system names are set when
 the stream is received. If the -F flag is  specified
 when  this  stream  is  received, snapshots and file
 systems that do not exist on the  sending  side  are
 destroyed.
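
Putting that together, a corrected invocation for your example would look something like this (a sketch using your snapshot name and target):

# -R replicates rpool and every descendant dataset at the named snapshot.
zfs send -R rpool@2009070804 | zfs receive -F backup/data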



--
Darren J Moffat


[zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Carl Brewer
G'day,
I'm putting together a LAN server with a couple of terabyte HDDs as a mirror 
(zfs root) on b117 (updated 2009.06).

I want to back up snapshots of all of rpool to a removable drive on a USB port 
- simple & cheap backup media for a two week rolling DR solution - ie: once a 
week a HDD gets swapped out and kept offsite.  I figure ZFS snapshots are 
perfect for local backups of files, it's only DR that we need the offsite 
backup for.

I created and formatted one drive on the USB interface (hopefully this will 
cope with drives being swapped in and out?), called it 'backup' to confuse 
things :)

zfs list shows:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
backup                     114K   913G    21K  /backup
rpool                     16.1G   897G    84K  /rpool
rpool/ROOT                13.7G   897G    19K  legacy
rpool/ROOT/opensolaris    37.7M   897G  5.02G  /
rpool/ROOT/opensolaris-1  13.7G   897G  10.9G  /
rpool/cashmore             140K   897G    22K  /rpool/cashmore
rpool/dump                1018M   897G  1018M  -
rpool/export               270M   897G    23K  /export
rpool/export/home          270M   897G   736K  /export/home
rpool/export/home/carl     267M   897G   166M  /export/home/carl
rpool/swap                1.09G   898G   101M  -

I've tried this :

zfs snapshot -r rpool@mmddhh
zfs send rpool@mmddhh | zfs receive -F backup/data

eg :

carl@lan2:/backup# zfs snapshot -r rpool@2009070804
carl@lan2:/backup# zfs send rpool@2009070804 | zfs receive -F backup/data

Now I'd expect to see the drive light up and to see some activity, but not much 
seems to happen.

zpool status shows :
# zpool status
  pool: backup
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
backup   ONLINE   0 0 0
  c10t0d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  mirrorONLINE   0 0 0
c8d1s0  ONLINE   0 0 0
c9d1s0  ONLINE   0 0 0

errors: No known data errors

and zfs list -t all:

zfs list -t all | grep back
backup                                            238K   913G    23K  /backup
backup@zfs-auto-snap:frequent-2009-07-08-23:30     18K      -    21K  -
backup/data                                        84K   913G    84K  /backup/data
backup/data@2009070804                               0      -    84K  -

So nothing much is getting copied onto the USB drive as far as I can tell. 
Certainly not a few GB of stuff.  Can anyone tell me what I've missed or 
misunderstood?  Does snapshot -r not get all of rpool?

Thankyou!

Carl