Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher

I managed to correct the problem by writing a script inspired
by Chris Gerhard's blog that did a zfs send | zfs recv.  Now
that things are back up, I have a couple of lingering questions:


1) I noticed that the filesystem size information is not the
   same between the src and dst filesystem sets.  Is this
   an expected behavior?


[EMAIL PROTECTED] zfs list -r tank/projects/sac
NAME                               USED  AVAIL  REFER  MOUNTPOINT
tank/projects/sac                 49.0G   218G  48.7G  /export/sac
tank/projects/[EMAIL PROTECTED]    104M      -  48.7G  -
tank/projects/[EMAIL PROTECTED]   96.7M      -  48.7G  -
tank/projects/[EMAIL PROTECTED]   74.3M      -  48.7G  -
tank/projects/[EMAIL PROTECTED]   18.7M      -  48.7G  -

[EMAIL PROTECTED] zfs list -r tank2/projects/sac
NAME                                USED  AVAIL  REFER  MOUNTPOINT
tank2/projects/sac                 49.3G   110G  48.6G  /export2/sac
tank2/projects/[EMAIL PROTECTED]    99.7M      -  48.6G  -
tank2/projects/[EMAIL PROTECTED]    92.3M      -  48.6G  -
tank2/projects/[EMAIL PROTECTED]    70.1M      -  48.6G  -
tank2/projects/[EMAIL PROTECTED]    70.7M      -  48.6G  -
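
(To compare the two copies in exact bytes rather than the rounded
figures above, something like the following works - a sketch using
the dataset names from the listings; -p asks for parseable, exact
values:

   zfs get -rHp used,referenced tank/projects/sac
   zfs get -rHp used,referenced tank2/projects/sac
)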

2) Following Chris's advice to do more with snapshots, I
   played with his cron-triggered snapshot routine:
   http://blogs.sun.com/chrisg/entry/snapping_every_minute

   Now, after a couple of days, zpool history shows almost
   100,000 lines of output (from all the snapshots and
   deletions...)

   How can I purge or truncate this log (which has got to be
   taking up several MB of space, not to mention the
   ever-increasing sluggishness of the command...)?
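
   For scale, a quick line count is enough to watch it grow (pool
   name as in the listings above):

      zpool history tank | wc -l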


  -John

Oh, here's the script I used - it contains hardcoded zpool
and zfs info, so it must be edited to match your specifics
before it is used!  It can be rerun safely; it only sends
snapshots that haven't already been sent, so I could do the
initial, time-intensive copies while the system was still in
use and only needed a faster resync while down in single-user
mode.

It isn't pretty (it /is/ a perl script) but it worked :-)
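
At its core it just automates the usual full-then-incremental
send/recv pattern, roughly like this (a sketch with hypothetical
snapshot names, not the exact commands the script builds):

   # initial full copy of the oldest snapshot
   zfs send tank/projects/sac@snap1 | zfs recv -d tank2
   # later, an incremental catch-up between two snapshots
   zfs send -i snap1 tank/projects/sac@snap2 | zfs recv tank2/projects/sac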

--

#!/usr/bin/perl
# John Plocher - May, 2007
# ZFS helper script to replicate the filesystems+snapshots in
# SRCPOOL onto a new DSTPOOL that was a different size.
#
#   Historical situation:
# + zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
# + zfs create tank/projects
# + zfs set mountpoint=/export tank/projects
# + zfs set sharenfs=on tank/projects
# + zfs create tank/projects/...
# ... fill up the above with data...
# Drive c1t3d0 FAILED
# + zpool offline tank c1t3d0
# ... find out that replacement drive is 10,000 sectors SMALLER
# ... than the original, and zpool replace won't work with it.
#
# Usage Model:
#   Create a new (temp) pool large enough to hold all the data
#   currently on tank
# + zpool create tank2 c2t2d0 c2t3d0 c2t4d0
# + zfs set mountpoint=/export2 tank2/projects
#   Set a baseline snapshot on tank
# + zfs snapshot -r [EMAIL PROTECTED]
#   Edit and run this script to copy the data + filesystems from tank to
#   the new pool tank2
# + ./copyfs
#   Drop to single user mode, unshare the tank filesystems,
# + init s
# + zfs unshare tank
#   Shut down apache, cron and sendmail
# + svcadm disable svc:/network/http:cswapache2
# + svcadm disable svc:/system/cron:default
# + svcadm disable svc:/network/smtp:sendmail
#   Take another snapshot,
# + zfs snapshot -r [EMAIL PROTECTED]
#   Rerun script to catch recent changes
# + ./copyfs
#   Verify that the copies were successful,
# + dircmp -s /export/projects /export2/projects
# + zpool destroy tank
# + zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
#   Modify script to reverse transfer and set properties, then
#   run script to recreate tank's filesystems,
# + ./copyfs
#   Reverify that content is still correct
# + dircmp -s /export/projects /export2/projects
#   Re-enable cron, http and mail
# + svcadm enable svc:/network/http:cswapache2
# + svcadm enable svc:/system/cron:default
# + svcadm enable svc:/network/smtp:sendmail
#   Go back to multiuser
# + init 3
#   Reshare filesystems.
# + zfs share tank
#   Go home and get some sleep
#

$SRCPOOL = "tank";
$DSTPOOL = "tank2";

# Set various properties once the initial filesystem is recv'd...
# (Uncomment these when copying the filesystems back to their original pool)
# $props{"projects"} = ();
# push( @{ $props{"projects"} }, ("zfs set mountpoint=/export tank/projects"));
# push( @{ $props{"projects"} }, ("zfs set sharenfs=on tank/projects"));
# $props{"projects/viper"} = ();
# push( @{ $props{"projects/viper"} }, ("zfs set sharenfs=rw=bk-test:eressea:scuba:sac:viper:caboose,root=sac:viper:caboose,ro tank/projects/viper"));

sub getsnapshots(@) {
    my (@filesystems) = @_;
    my @snaps;
    my @snapshots;
    foreach my $fs ( @filesystems ) {
        chomp($fs);
        next if ($fs eq $SRCPOOL);
        # print "Filesystem: $fs\n";
        # Get a list of all snapshots in 

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread eric kustarz


2) Following Chris's advice to do more with snapshots, I
   played with his cron-triggered snapshot routine:
   http://blogs.sun.com/chrisg/entry/snapping_every_minute

   Now, after a couple of days, zpool history shows almost
   100,000 lines of output (from all the snapshots and
   deletions...)

   How can I purge or truncate this log (which has got to be
   taking up several MB of space, not to mention the
   ever-increasing sluggishness of the command...)?




You can check out the comment at the head of spa_history.c:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/spa_history.c


The history is implemented as a ring buffer (where the size is
MIN(32MB, 1% of your capacity)):
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/spa_history.c#105
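
So on any reasonably sized pool the log tops out at 32MB.  A rough way
to check the bound for a given pool (a sketch, not taken from the ZFS
code; it assumes bash and GNU numfmt are available to decode the
human-readable size):

   #!/bin/bash
   # Estimate the zpool history ring-buffer cap: MIN(32MB, 1% of capacity)
   POOL=tank                                  # pool name from the thread
   cap=$(zpool list -H -o size "$POOL")       # e.g. "408G"
   cap_bytes=$(numfmt --from=iec "$cap")      # human-readable -> bytes
   one_pct=$(( cap_bytes / 100 ))
   limit=$(( one_pct < 33554432 ? one_pct : 33554432 ))
   echo "history buffer capped at roughly $limit bytes"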


We specifically didn't allow the admin the ability to truncate/prune
the log, as then it becomes unreliable - "oops, I made a mistake, I
better clear the log and file the bug against zfs..."


eric



Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher

eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/prune the
log, as then it becomes unreliable - "oops, I made a mistake, I better
clear the log and file the bug against zfs..."



I understand - auditing means never getting to blame someone else :-)

There are things in the log that are (IMHO, and In My Particular Case)
more important than others.  Snapshot creations & deletions are noise
compared with filesystem creations, property settings, etc.

This seems especially true when there is closure on actions - the set of
   zfs snapshot foo/[EMAIL PROTECTED]
   zfs destroy  foo/[EMAIL PROTECTED]
commands is (except for debugging zfs itself) a no-op.

Looking at history.c, it doesn't look like there is an easy
way to mark a set of messages as unwanted and compress the log
without having to take the pool out of service first.

Oh well...

  -John




Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread Nicolas Williams
On Fri, Jun 01, 2007 at 02:09:55PM -0700, John Plocher wrote:
 eric kustarz wrote:
 We specifically didn't allow the admin the ability to truncate/prune the
 log, as then it becomes unreliable - "oops, I made a mistake, I better
 clear the log and file the bug against zfs..."
 
 I understand - auditing means never getting to blame someone else :-)
 
 There are things in the log that are (IMHO, and In My Particular Case)
 more important than others.  Snapshot creations & deletions are noise
 compared with filesystem creations, property settings, etc.

But clone creation == filesystem creation, and since you can only clone
snapshots you'd want snapshotting included in the log, at least the ones
referenced by live clones.  Or if there was a pivot and the old fs and
snapshot were destroyed you might still want to know about that.

I think there has to be a way to truncate/filter the log, at least by
date.
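
(Filtering the read side by date is already a one-liner - a sketch,
assuming the YYYY-MM-DD.HH:MM:SS timestamps zpool history prefixes
each entry with:

   # show only entries from June 2007 on; NR > 1 skips the header line
   zpool history tank | awk 'NR > 1 && $1 >= "2007-06-01.00:00:00"'

though that of course doesn't shrink the on-disk log itself.)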

 This seems especially true when there is closure on actions - the set of
 zfs snapshot foo/[EMAIL PROTECTED]
 zfs destroy  foo/[EMAIL PROTECTED]
 commands is (except for debugging zfs itself) a noop

Yes, but it could be very complicated:

zfs snapshot foo/[EMAIL PROTECTED]
zfs clone foo/[EMAIL PROTECTED] foo/bar-then
zfs clone foo/[EMAIL PROTECTED] foo/bar-then-again
zfs snapshot foo/[EMAIL PROTECTED]
zfs clone foo/[EMAIL PROTECTED] foo/bar-then-and-then
zfs destroy -r foo/[EMAIL PROTECTED]


Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread Mark J Musante
On Fri, 1 Jun 2007, John Plocher wrote:

 This seems especially true when there is closure on actions - the set of
  zfs snapshot foo/[EMAIL PROTECTED]
  zfs destroy  foo/[EMAIL PROTECTED]
 commands is (except for debugging zfs itself) a noop

Note that if you use the recursive snapshot and destroy, only one line is
entered into the history for all filesystems.
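
For example, a rolling per-minute snapshot done recursively logs one
history line per command rather than one per descendant filesystem
(a sketch with a hypothetical snapshot naming scheme):

   min=`date +%M`
   # ignore the error the destroy produces on the very first run
   zfs destroy  -r tank/projects@minute-$min 2>/dev/null
   zfs snapshot -r tank/projects@minute-$min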


Regards,
markm


Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread eric kustarz


On Jun 1, 2007, at 2:09 PM, John Plocher wrote:


eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/prune
the log, as then it becomes unreliable - "oops, I made a mistake, I
better clear the log and file the bug against zfs..."



I understand - auditing means never getting to blame someone else :-)


:)



There are things in the log that are (IMHO, and In My Particular Case)
more important than others.  Snapshot creations & deletions are noise
compared with filesystem creations, property settings, etc.

This seems especially true when there is closure on actions - the set of
   zfs snapshot foo/[EMAIL PROTECTED]
   zfs destroy  foo/[EMAIL PROTECTED]
commands is (except for debugging zfs itself) a no-op.

Looking at history.c, it doesn't look like there is an easy
way to mark a set of messages as unwanted and compress the log
without having to take the pool out of service first.


Right, you'll have to do any post-processing yourself (something like
a script + cron job).
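
For instance, a hedged sketch of that post-processing, keeping a copy
of the history with the per-minute snapshot churn stripped out (the
output file and pool name are just examples):

   zpool history tank | egrep -v ' zfs (snapshot|destroy) ' \
       > /var/tmp/tank-history-filtered.txt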


eric



Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher

Mark J Musante wrote:

Note that if you use the recursive snapshot and destroy, only one line is
entered into the history for all filesystems.


My problem (and it really is /not/ an important one) was that
I had a cron job that every minute did

   min=`date +%M`
   snap=$pool/[EMAIL PROTECTED]
   zfs destroy $snap
   zfs snapshot $snap

and, after a couple of days (at 1,440 minutes/day), the
pool's history log seemed quite full (but not at capacity...)

There were no clones to complicate things...

  -John
