> On Fri, 2005-09-16 at 11:13 -0500, J French wrote:
> > How are most people backing up to tape with Debian (or linux in
> > general)?  I need a robust backup because this will be a production
> > server.  Advice is appreciated.

I ran a Linux lab for a while.  Ended up with one of those 200G tape
drives running off a NetApp fileserver.  Since the main filesystem was
on RAID 5, I only did a weekly tape dump and stored the tapes in my
apartment.  It worked fine as long as we only had 200G of data, but
manually changing tapes is an enormous hassle!  I can't imagine anyone
running a large site with a single-tape drive.  Do you have the
hardware angle under control?

I started out with a nice full/incremental system using dump, but I
believe that while everything still fit on one tape I was just using
tar and doing a full backup each time (everything ran unattended
except picking up the tape on Friday and swapping the new one in, so
that was fine by me).
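
If it helps, that tar setup really was about this simple--the device
name and paths here are assumptions, of course:

    #!/bin/sh
    # Weekly full backup to tape, run from cron (e.g. Friday night).
    # /dev/st0 is the usual first SCSI tape device; adjust to taste.
    mt -f /dev/st0 rewind
    tar -cpf /dev/st0 /home /etc /var
    mt -f /dev/st0 rewoffl   # rewind and take the drive offline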

Reasons not to do that:

1) If you have more data than will fit on a single tape (or you're in
   a rush), you will want to do full/incremental backups (there's a
   dump sketch after this list).  The incrementals had better each
   fit on one tape, but the fulls won't, of course.  And swapping
   tapes is really, really tedious!!!

   However, if you can afford a robotic tape drive, such as those
   from Tadpole (as I recall they start at around $10k), you might be
   very happy with tapes.

   The problem is still remembering to take the tapes offsite in case
   the building burns down.

2) If you ever want to restore less than the whole filesystem, tapes
   make it hard to find what you need (and then there's the whole
   offsite thingy).  I used tapes because the backup was meant only
   as a measure against catastrophic failure--the NetApp handled
   daily accidental deletions, disk failures, etc. perfectly.  But if
   you don't have such a nice fileserver, you'll care more!
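
As promised, the dump scheme in sketch form--the device and
filesystem names are assumptions, and real schedules vary:

    #!/bin/sh
    # Sunday night: level-0 (full) dump; other nights: level-1
    # incrementals, i.e. everything changed since the last level 0.
    # -u records each run in /etc/dumpdates so later levels know
    # where to start.  Assumes the C locale for the weekday name.
    if [ "$(date +%a)" = "Sun" ]; then
        dump -0u -f /dev/st0 /home
    else
        dump -1u -f /dev/st0 /home
    fi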

My current solution for my personal computer is an external USB 2.0
disk plus faubackup.  Not offsite :( but very cost-effective, and
faubackup basically pulls the same stunt the NetApp did (at the file
level rather than the inode level, so not as sophisticated or
efficient).  So I have nightly backups for a week, weekly for a
month, monthly for three months, annual forever--or whatever schedule
you like--of everything I need, at my fingertips.  200G drives are
now <$200, and you can easily add more.  As I recall, 100G tapes were
$100, and that's not counting the (then) $4000 tape drive.  For a
company, a big cheap fileserver would be more appropriate, but you
get the idea.  Oh, by the way, last I checked (over a year ago)
faubackup was terribly, terribly slow and needed work--I'm just
throwing it out there as an idea.
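
If the stunt in question isn't familiar: you can see the idea with
rsync's --link-dest, which hard-links unchanged files against the
previous snapshot (this is rsync, not faubackup's interface--just the
same trick sketched by hand, with invented paths):

    #!/bin/sh
    # Nightly snapshot: files unchanged since yesterday become hard
    # links to yesterday's copies, so each snapshot looks complete
    # but only changed files take new space.
    TODAY=$(date +%F)
    YEST=$(date -d yesterday +%F)
    rsync -a --link-dest=/backup/$YEST /home/ /backup/$TODAY/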

It so happens that I can get away with no offsite copy without
feeling too guilty, by using Unison to sync my desktop to my laptop,
which tends to live offsite.  For a real company, though, this would
be an interesting "solution."

But what seems best to me is a mutual arrangement with someone
offsite, along the lines of each party saying "Here's x bytes of
network-accessible storage and a login account for you."  Then rdist
your filesystem to the remote site (or ideally do something cleverer
with CVS/Subversion/etc.).  Why don't more people do this?  I'm
thinking of setting this up for my current lab--any words of warning?
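
By "rdist your filesystem" I mean something along these lines--
sketched here with rsync over ssh, which does the same job; the
hostname, account, and path are made up:

    #!/bin/sh
    # Push a copy of /home to the swap partner's box over ssh.
    rsync -az --delete -e ssh /home/ \
        backup@partner.example.org:/srv/swap/ben/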

May all your best data be immortalised!

-Ben

--
Ben Pearre          http://hebb.mit.edu/~ben       PGP: CFDA6CDA
         Free music at http://hebb.mit.edu/FreeMusic
Don't let Bush read your email!             http://www.gnupg.org

