On 9/22/10 7:03 PM, Dustin J. Mitchell wrote:
On Wed, Sep 22, 2010 at 10:07 AM, Jon LaBadie <j...@jgcomp.com> wrote:
Any thoughts on how desirable you feel a separate
copy of the Amanda data would be, and other approaches?
This comes up often, and I've never found a solution I'm happy enough
with to make the "official" solution.

Gene's approach is the obvious one, but has a few limitations:

  - What do you do if you run out of space on that tape?  Start a new
tape?  How do you reflect the use of that new tape in the catalog?

  - How does recovery from that metadata backup work?  There's a
chicken-and-egg problem here, too - you'll need an Amanda config to
run any Amanda tools other than amrestore.

Let's break down "metadata" into its component parts, too:
  1. configuration
  2. catalog (logdir, tapelist)
  3. indexes
  4. performance data (curinfo)
  5. debug logs

Configuration (1) can be backed up like a normal DLE.  The catalog (2)
should technically be recoverable from a scan of the tapes themselves,
although the tool to do this is still awaiting a happy hacker to bring
it to life.  Indexes (3) are only required for amrecover, and if your
Amanda server is down, you likely want whole DLEs restored, so you
only need amfetchdump.  Performance data (4) will automatically
regenerate itself over subsequent runs, so there's no need to back it
up.  Similarly, debug logs (5) can get quite large, and generally need
not be backed up.
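
For instance (this entry is only an illustration; the path and the dumptype name are assumptions, so substitute whatever tar-based dumptype your amanda.conf actually defines), a single disklist line on the server covers the configuration:

  # back up the Amanda config directory like any other DLE
  localhost  /usr/local/etc/amanda  comp-user-tar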

So, to my mind, the only component that needs special handling is the
catalog, and we have a menu of ways to handle that:

  - append a copy of the entire catalog to the last tape in each run
(hey, what is the "last tape" now that we have multiple simultaneous
tapers?)

  - append a copy of only the catalog for that run to the last tape in each run

  - finally get around to writing 'amrecatalog'

  - rsync the entire catalog to another machine nightly

I just stuck that last one in because it was my technique back when I
managed a fleet of Amanda servers.  Each would simply rsync its config
and catalog to the other servers.  Since they were all backing up to
shared storage (a SAN), I could do a restore / amfetchdump / recovery
of any dump on any server without trouble.  It's a very
non-Amanda-centric solution, but it's *very* effective.
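
As a rough sketch of that last approach (the hostname and destination paths here are placeholders, and the source paths depend on where your install keeps its config and catalog):

  #!/bin/ksh
  # push this server's Amanda config and catalog to a peer server each night
  PEER=otherserver.example.edu
  rsync -az --delete /usr/local/etc/amanda/ ${PEER}:/var/amanda-replicas/$(hostname)/etc/
  rsync -az --delete /var/lib/amanda/ ${PEER}:/var/amanda-replicas/$(hostname)/lib/

Run it from cron after amdump has finished for the night.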

The last one is something like what I do. I don't use rsync, because I want multiple backup copies going back over the last week. I have a cron job that launches in the morning and waits for the previous night's Amanda backups to complete. When they appear to be done, it proceeds with a backup to a local archive partition on another spindle that is normally mounted read-only. Then it tars all of that up and scp's it to another server. So, if my drive fails and I need to recover, I have the Amanda stuff on another drive in the same computer. If the whole machine dies, I have it on another computer, and from there it also gets backed up to tape. The net effect is several departments with Amanda backup servers backing up one another's Amanda configurations and catalogs.

Just in case anyone is interested, I put the script at the end. It's not particularly parameterized for general use, but is pretty simple and easy to modify. Watch out for email line wrap.

--
---------------

Chris Hoogendyk

-
   O__  ---- Systems Administrator
  c/ /'_ --- Biology & Geology Departments
 (*) \(*) -- 140 Morrill Science Center
~~~~~~~~~~ - University of Massachusetts, Amherst

<hoogen...@bio.umass.edu>

---------------

Erdös 4



#!/bin/ksh
#
# Daily-amanda backup script - copies critical amanda configuration and indexes.
# Based on Daily-office backup script copied from marlin 04/24/2007 Chris H.
#
# Script used for daily archives of particular directories to /archive
# (which is a separate partition on a different drive).
# Script should be cronned as
#   45 7 * * 2-6 /usr/local/adm/backups/daily-amanda
# should wait until amanda backup for the day is done.
#
# Tape backups then catch all this weekly, of course.
#
# ---------------------------------------------------

# CAREFUL -- basename must be unique for all these!
DIRLIST="/usr/local/etc/amanda";
ARCH=/archive;
DAY=`date '+%a'`;

# wait until we are sure amanda is no longer running
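# (the loop polls every 25 minutes and will wait indefinitely if an amdump process hangs)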
RUNNING=1;
while (( ${RUNNING} == 1 )); do
  PROCS=`ps -ef | grep amanda | grep amdump | wc -l`;
  if (( ${PROCS} >= 1 )); then
    sleep 1500;
  else
    RUNNING=0;
  fi
done

# set archive partition to read/write
mount -o remount,rw,logging ${ARCH};

case "$DAY" in
  Sun|Mon|Tue|Wed|Thu|Fri)
    echo "\nDaily incremental backup of Amanda configuration and indexes for 
Biology";
    echo 
"------------------------------------------------------------------------\n";;
  Sat)
    echo "\nSaturday full backup of Amanda configuration and indexes for 
Biology";
    echo 
"--------------------------------------------------------------------\n";;
esac

for DIR in ${DIRLIST}
do
  BASE=`basename ${DIR}`;
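  # one archive subdirectory per day of the week gives a rolling week of copies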
  ADIR=${ARCH}/${BASE}/${DAY};
  if cd ${ADIR} ; then
    if cd ${DIR} ; then
      /usr/ucb/echo -n "Backing up ${DIR} to ${ADIR}:   ";
      case "$DAY" in
#      # removed the 3 day incremental and added Sun,Mon,Tue to 1 day incrementals
#      # Chris H. 7/11/2007
#      Tue)
#        # delete previous contents and then do incrementals from Saturday
#        rm -r ${ADIR}; mkdir ${ADIR};
#        find . -mtime -3 | cpio -oa 2>/dev/null | ( cd ${ADIR} && cpio -imd );;
      Sun|Mon|Tue|Wed|Thu|Fri)
        # delete previous contents and then do incrementals
        rm -r ${ADIR}; mkdir ${ADIR};
        find . -mtime -1 | cpio -oa 2>/dev/null | ( cd ${ADIR} && cpio -imd );;
      Sat)
        # delete previous contents and then do full copies
        rm -r ${ADIR}; mkdir ${ADIR};
        find . | cpio -oa 2>/dev/null | ( cd ${ADIR} && cpio -imd );;
      *)
        echo "Wha? DAY=\"${DAY}\" in mormyrid daily office chron.";;
      esac
    else
      echo "Wha? unable to cd to ${DIR} in mormyrid daily office chron.";
    fi
  else
    echo "Wha? unable to cd to ${ADIR} in mormyrid daily office chron.";
  fi
done

echo "\n";
df -k ${ARCH};

# tar it up & send it to marlin
cd ${ARCH};
echo "\n\nMaking tar file of ${ARCH}/amanda/${DAY}/";
tar -cf amanda-${DAY}.tar amanda/${DAY};
rm -f amanda-${DAY}.tar.gz;   # remove last week's copy, if any
gzip amanda-${DAY}.tar;
chown amanda:amanda amanda-${DAY}.tar.gz;
echo "\nscp'ing amanda-${DAY}.tar.gz to marlin:/usr/local/etc/amanda/mormyrid/";
su - amanda -c "scp -i /usr/local/etc/amanda/.ssh/id_rsa_dailyconfig -o BatchMode=yes \
 ${ARCH}/amanda-${DAY}.tar.gz marlin.bio.mor.nsm:/usr/local/etc/amanda/mormyrid/" \
 > /dev/null;

# for some reason, Solaris 9 doesn't allow `mount -o remount,ro /mountpoint`
# set it back to read only with umount and then mount with the -r option.
# unfortunately, the umount could fail if someone has a process in that directory.
# that won't hurt the backups, but could leave the partition in rw mode.
cd /;
umount ${ARCH};
mount -r ${ARCH};


