I managed to correct the problem by writing a script, inspired
by Chris Gerhard's blog, that did a zfs send | zfs recv.  Now
that things are back up, I have a couple of lingering questions:


1) I noticed that the filesystem size information is not the
   same between the src and dst filesystem sets.  Is this
   expected behavior?


user@host> zfs list -r tank/projects/sac
NAME                        USED  AVAIL  REFER  MOUNTPOINT
tank/projects/sac          49.0G   218G  48.7G  /export/sac
tank/projects/sac@<snap1>   104M      -  48.7G  -
tank/projects/sac@<snap2>  96.7M      -  48.7G  -
tank/projects/sac@<snap3>  74.3M      -  48.7G  -
tank/projects/sac@<snap4>  18.7M      -  48.7G  -

user@host> zfs list -r tank2/projects/sac
NAME                         USED  AVAIL  REFER  MOUNTPOINT
tank2/projects/sac          49.3G   110G  48.6G  /export2/sac
tank2/projects/sac@<snap1>  99.7M      -  48.6G  -
tank2/projects/sac@<snap2>  92.3M      -  48.6G  -
tank2/projects/sac@<snap3>  70.1M      -  48.6G  -
tank2/projects/sac@<snap4>  70.7M      -  48.6G  -
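
   (In case anyone wants to compare the raw numbers, the exact
   byte counts on both sides can be pulled with something like
   the following -- the property list is just my guess at what
   matters here:)

user@host> zfs get -rHp used,referenced tank/projects/sac
user@host> zfs get -rHp used,referenced tank2/projects/sac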

2) Following Chris's advice to do more with snapshots, I
   played with his cron-triggered snapshot routine (a sketch
   of the idea is below):
   http://blogs.sun.com/chrisg/entry/snapping_every_minute

   Now, after a couple of days, zpool history shows almost
   100,000 lines of output (from all the snapshot creations
   and deletions...)

   How can I purge or truncate this log?  It must be taking
   up several MB of space by now, and the command itself gets
   more sluggish every day.
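
   (For anyone who hasn't read Chris's entry: the idea is a
   crontab line along these lines -- my paraphrase, not his
   actual script, which also cleans up the old snapshots:)

   # snapshot every minute, tagged minute_HHMM
   * * * * * /usr/sbin/zfs snapshot tank/projects/sac@minute_`date +\%H\%M`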


  -John

Oh, here's the script I used.  It contains hardcoded zpool
and zfs names, so it must be edited to match your specifics
before use!  It can be rerun safely: it only sends snapshots
that haven't already been sent, so I could do the initial,
time-intensive copies while the system was still in use and
then only needed a faster "resync" while down in single-user
mode.
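
In other words, the first run does full sends and any rerun
only sends what is missing as incrementals; roughly (dataset
names match my pools, the snapshot tags are placeholders):

  # first run: the filesystem isn't on tank2 yet -> full send
  zfs send tank/projects/sac@<snap1> | zfs recv tank2/projects/sac
  # rerun: just the snapshot taken since then, as an incremental
  zfs send -i tank/projects/sac@<snap1> tank/projects/sac@<snap2> | zfs recv tank2/projects/sac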

It isn't pretty (it /is/ a Perl script), but it worked :-)

--------------------------

#!/usr/bin/perl
# John Plocher - May, 2007
# ZFS helper script to replicate the filesystems+snapshots in
# SRCPOOL onto a new DSTPOOL that was a different size.
#
#   Historical situation:
#     + zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
#     + zfs create tank/projects
#     + zfs set mountpoint=/export tank/projects
#     + zfs set sharenfs=on tank/projects
#     + zfs create tank/projects/...
#     ... fill up the above with data...
#     Drive c1t3d0 FAILED
#     + zpool offline tank c1t3d0
#     ... find out that replacement drive is 10,000 sectors SMALLER
#     ... than the original, and zpool replace won't work with it.
#
# Usage Model:
#   Create a new (temp) pool large enough to hold all the data
#   currently on tank
#     + zpool create tank2 c2t2d0 c2t3d0 c2t4d0
#     + zfs set mountpoint=/export2 tank2/projects
#   Set a baseline snapshot on tank
#     + zfs snapshot -r tank@<baseline>
#   Edit and run this script to copy the data + filesystems from tank to
#   the new pool tank2
#     + ./copyfs
#   Drop to single user mode, unshare the tank filesystems,
#     + init s
#     + zfs unshare tank
#   Shut down apache, cron and sendmail
#     + svcadm disable svc:/network/http:cswapache2
#     + svcadm disable svc:/system/cron:default
#     + svcadm disable svc:/network/smtp:sendmail
#   Take another snapshot,
#     + zfs snapshot -r tank@<final>
#   Rerun script to catch recent changes
#     + ./copyfs
#   Verify that the copies were successful,
#     + dircmp -s /export/projects /export2/projects
#     + zpool destroy tank
#     + zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
#   Modify script to reverse transfer and set properties, then
#   run script to recreate tank's filesystems,
#     + ./copyfs
#   Reverify that content is still correct
#     + dircmp -s /export/projects /export2/projects
#   Re-enable  cron, http and mail
#     + svcadm enable svc:/network/http:cswapache2
#     + svcadm enable svc:/system/cron:default
#     + svcadm enable svc:/network/smtp:sendmail
#   Go back to multiuser
#     + init 3
#   Reshare filesystems.
#     + zfs share tank
#   Go home and get some sleep
#

$SRCPOOL="tank";
$DSTPOOL="tank2";

# Set various properties once the initial filesystem is recv'd...
# (Uncomment these when copying the filesystems back to their original pool)
# $props{"projects"} = ();
# push( @{ $props{"projects"} }, ("zfs set mountpoint=/export tank/projects"));
# push( @{ $props{"projects"} }, ("zfs set sharenfs=on tank/projects"));
# $props{"projects/viper"} = ();
# push( @{ $props{"projects/viper"} }, ("zfs set 
sharenfs=rw=bk-test:eressea:scuba:sac:viper:caboose,root=sac:viper:caboose,ro 
tank/projects/viper"));

sub getsnapshots($@) {
    my ($pool, @filesystems) = @_;
    my @snaps;
    my @snapshots;
    foreach my $fs ( @filesystems ) {
        chomp($fs);
        next if ($fs eq $pool);  # skip the pool's root dataset
        # print "Filesystem: $fs\n";
        # Get a list of all snapshots in this filesystem
        @snapshots = split /^/, `zfs list -Hr -t snapshot -o name -s creation $fs`;
        foreach my $dataset ( @snapshots ) {
            chomp($dataset);
            my ($dpool, $dsnapshot) = split(/\//, $dataset, 2);
            my ($dfs, $dtag) = split(/@/, $dsnapshot, 2);
            next if ($fs ne "$dpool/$dfs");  # keep only this filesystem's own snaps
            next if ($dtag =~ /^minute_/);   # ignore the every-minute snapshots
            # print "    Dataset=$dataset, P=$dpool, S=$dsnapshot, FS=$dfs, TAG=$dtag\n";
            push (@snaps, ($dataset));
        }
    }
    return @snaps;
}

# Get a list of all snapshots of the filesystems in the SRC pool
@src_snaps = &getsnapshots($SRCPOOL, split /^/, `zfs list -Hr -t filesystem -o name $SRCPOOL`);
# Get a list of all snapshots of the filesystems in the DST pool
@dst_snaps = &getsnapshots($DSTPOOL, split /^/, `zfs list -Hr -t filesystem -o name $DSTPOOL`);

# Mark snapshots that have already been sent...
foreach my $dataset ( @dst_snaps  ) {
    ($pool, $snapshot) = split(/\//, $dataset,2);
    ($fs, $tag) = split(/@/, $snapshot,2);
    $last{$fs} = $snapshot;  # keep track of the last one sent
    $dst{$fs}{$tag} = $pool;
}

# only send snaps that have not already been sent
foreach $dataset ( @src_snaps  ) {
    ($pool, $snapshot) = split(/\//, $dataset,2);
    ($fs, $tag) = split(/@/, $snapshot,2);
    if (!defined($dst{$fs}{$tag})) {
        push (@snaps, ($dataset));
    }
}

# do the work...
if ($#snaps == -1) {
    print("Warning: No uncopied snapshots found in pool $SRCPOOL\n");
} else {
    # copy them over to the new pool
    $last_fs = "";
    foreach $dataset ( @snaps  ) {
        ($pool, $snapshot) = split(/\//, $dataset,2);
        ($fs, $tag) = split(/@/, $snapshot,2);
        if ($fs ne $last_fs) {
            $last_snapshot = undef;
            print "Filesystem: $fs\n";
            $last_fs = $fs;
        }
        # print "accepted: P=$pool, FS=$fs, TAG=$tag\n";
        @cmd = ();
        if ( !defined($last_snapshot) ) {
            if ( defined($last{$fs} ) ) {
                push(@cmd, ("zfs send -i $SRCPOOL/$last{$fs} $dataset | zfs recv 
$DSTPOOL/$fs"));
            } else {
                push(@cmd, ("zfs send $dataset | zfs recv $DSTPOOL/$fs"));
            }
            # If any properties need to be set on this filesystem, do so
            # after the initial dataset has been copied over...
            if ( defined($props{$fs} ) ) {
                foreach my $c ( @{ $props{$fs} }) {
                    push(@cmd, ($c));
                }
            }
        } else {
            push(@cmd, ("zfs send -i $last_snapshot $dataset | zfs recv 
$DSTPOOL/$fs"));
        }
        foreach $cmd ( @cmd ) {
            print "    + $cmd\n";
            system($cmd);
        }
        $last_snapshot = $dataset;
    }
}

----------------------------



