For disaster recovery, it's good to have copies of your database dumps
that you can easily and conveniently access, stored outside the data
center where the database lives.  Since we do a weekly full dump and
use binary logs for "incrementals", I also wanted copies of our binary
logs in the same place.
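
For reference, a weekly full dump along those lines might look
something like this (paths and options here are an illustration, not
necessarily exactly what we run); --flush-logs rotates to a fresh
binary log at dump time, so the binlog series picks up right where
the dump ends:

  mysqldump --all-databases --single-transaction --flush-logs \
    --master-data=2 | gzip > /backup/weekly/full-$(date +%Y%m%d).sql.gz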

However, binary logs for an active server can be very big.  It'd be
nice to gzip them for faster transfers, lower bandwidth charges, less
disk space used on the backup host, etc.  And they compress well: in
my experience, usually to about 1/10th of the original size.
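
If you want to check the ratio on your own logs, gzip -l prints the
compressed and uncompressed sizes of an already-compressed copy (the
filename below is just an example):

  gzip -l /backup/mysqllogs/binlog.000042.gz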

Unfortunately, if you gzip the files at the destination, you can't
easily use rsync to make the backups, since it won't correctly
identify which files need to be copied or deleted.  So I wrote this
script, which syncs one directory to another, gzip'ing the results -
but only for files whose names match a regex you set at the beginning.
It knows that each file in the source dir corresponds to a file with
the same name plus .gz in the destination dir, and correctly figures
out which ones to copy over and which ones to delete.

I posted the generic version at: http://thwip.sysadmin.org/dirsyncgz

Here it is, with variables set for typical mysql binary log use:

----------------------------------------------------------------------
#!/usr/bin/perl
#
# $Id: dirsyncgz,v 1.1 2007/05/03 04:15:35 cos Exp $
#
# syncs files w/names matching a regex from srcdir to destdir, and gzips
#
# only files whose modification time is more recent than the
# corresponding gzip'ed file will be copied, and if a file has been
# deleted from the srcdir, the corresponding gzip'ed file will be
# deleted from the destdir

use strict;
use warnings;

my $srcdir   = "/var/lib/mysql";
my $destdir  = "/backup/mysqllogs";
my $basename = qr/^binlog\.\d+$/;   # compiled regex; note the escaped dot

opendir my $src_dh, $srcdir or die "$0: can't open directory $srcdir: $!\n";

foreach my $file
  ( sort grep { /$basename/ && -f "$srcdir/$_" } readdir($src_dh) )
{
  # a destination file that doesn't exist yet stats to undef,
  # so treat it as mtime 0 and always copy
  my $src_mtime  = (stat("$srcdir/$file"))[9];
  my $dest_mtime = (stat("$destdir/$file.gz"))[9] || 0;
  next unless $src_mtime > $dest_mtime;

  print "Copying $srcdir/$file to $destdir\n";

  # list form of system() sidesteps shell quoting of filenames
  system("cp", "-p", "$srcdir/$file", $destdir) == 0
    or warn "$0: cp -p $srcdir/$file $destdir failed: $?\n"
    and next;
  system("gzip", "-f", "$destdir/$file") == 0
    or warn "$0: gzip -f $destdir/$file failed: $?\n";
}
closedir $src_dh;

# now delete from the backup dir any logs deleted from the srcdir

opendir my $dest_dh, $destdir or die "$0: can't open directory $destdir: $!\n";

foreach my $savedfile
  ( sort grep { -f "$destdir/$_" } readdir($dest_dh) )
{
  # dest entries carry a .gz suffix (or none, if an earlier gzip
  # failed); strip it, then keep only names we'd have copied over
  (my $base = $savedfile) =~ s/\.gz$//;
  next unless $base =~ /$basename/;
  next if -f "$srcdir/$base";

  print "Deleting $savedfile from $destdir\n";
  unlink "$destdir/$savedfile"
    or warn "$0: error deleting $savedfile: $!\n";
}
closedir $dest_dh;
----------------------------------------------------------------------
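
One way to run it is from cron; the path and schedule here are just
an example, assuming you saved the script as /usr/local/bin/dirsyncgz:

  # /etc/crontab entry: sync the binlogs once an hour
  17 * * * *  root  /usr/local/bin/dirsyncgz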

You can sync the logs to a remotely mounted filesystem, and/or use
the script's destination directory as the source directory for your
rsync.
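
For instance (host and paths here are hypothetical), since the
destination directory now only changes when a log is added or
removed, a plain rsync behaves as expected:

  rsync -av --delete /backup/mysqllogs/ backuphost:/backup/mysqllogs/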

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
  It's been said that if a sysadmin does his job perfectly, he's the
  fellow that people wonder what he does and why the company needs him,
  until he goes on vacation.          -- comp.unix.admin FAQ
