Re: Backing up filesystem too large for one tape

2001-08-07 Thread Jens Bech Madsen

Kris Boulez <[EMAIL PROTECTED]> writes:

> What do people think about my proposed solution to this problem?
> 
> Problem
> ---
> A filesystem needs to be backed up using Amanda. Level 0 doesn't fit on a
> single tape anymore (after compression)
> 
> Solutions
> -
> a) Buy a tape unit which can hold more data (expensive)
> 
> b) Wait for multi-tape support in amanda (ETA ?)
> 
> c) Split the filesystem on the server
> 
> d) Let Amanda believe that the filesystem is split
> - create two dumptypes, each with a separate exclude list
> - add two entries to the disklist with separate dumptypes (will
>   Amanda back up the same partition twice?)
> - (if the above step doesn't work) create a new /dev/rdsk/.. entry
>   pointing to the same device but with a different name
> 
> 
> Do you think d) can be made to work?

It can. It works OK here. It takes a little work before the backup is
run, but it is fairly straightforward, especially if you use a version
of Amanda that lets you define dumptypes in the disklist (2.4.2?).
It also requires you to use tar, since Amanda doesn't support exclude
lists with dump (even if the dump program itself does).
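
As a rough sketch, the two dumptypes in amanda.conf could look
something like the following (the dumptype names and exclude-list
paths are purely illustrative, and the syntax is as I remember it
from recent 2.4.x versions):

  define dumptype big-fs-part1 {
      comment "first half of the oversized filesystem"
      program "GNUTAR"        # tar is needed for exclude lists
      compress client fast
      exclude list "/usr/local/etc/amanda/exclude.part1"
  }

  define dumptype big-fs-part2 {
      comment "second half of the oversized filesystem"
      program "GNUTAR"
      compress client fast
      exclude list "/usr/local/etc/amanda/exclude.part2"
  }

Each exclude file lists the top-level directories that belong to the
other half, so the two entries together cover the whole filesystem
exactly once.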

> 
> Kris,

/Jens
-- 
Jens Bech Madsen
The Stibo Group, Denmark



Re: Backing up filesystem too large for one tape

2001-08-06 Thread John R. Jackson

>b) Wait for multi-tape support in amanda (ETA ?)

I'm reasonably (but not 100%) sure it will be done this millennium :-).

>c) Split the filesystem on the server

Here's what we (I) do (and it's not at all pleasant):

  * Have enough holding disk space to handle a file system that does
not fit, preferably a lot more space.

  * Added a small hack to driver so that it fails, without even trying,
any file system larger than the tapetype "length" parameter.

  * File systems that fail fall into two categories: those that would
fit with software compression (which we don't normally use) and
those that won't fit no matter what.

  * File systems that would fit with software compression are run
through "amrestore -chp  > out.xxx".  Then the original
images are removed, the "out.xxx" image is renamed, and amflush is run
to write the compressed image to tape.  Hardware compression is
manually disabled for that one tape.

Restoring from these tapes is transparent -- Amanda just does the
right thing.
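
A rough sketch of that recompression pass (the config name, holding
directory, and image file names here are hypothetical, not the real
ones):

  cd /amanda/holding/20010806                         # holding disk area for that run
  amrestore -chp client.example.com._var.0 > out.xxx  # re-read and software-compress the image
  rm client.example.com._var.0                        # remove the original uncompressed image
  mv out.xxx client.example.com._var.0                # rename the compressed image into place
  amflush Daily                                       # flush the holding disk to tape

with hardware compression on the drive switched off by hand before
the flush.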

  * For the rest, I have the holding disk chunksize set to ~90% of
tape capacity.  So when one of these fails to go to tape, I have one
big image and one smaller one.  I move the smaller one up a directory
level so amflush won't process it, then run amlink (I posted that
a while ago) to reset the header on the big chunk so it does not
look like it continues on to the smaller chunk.  Then amflush is
used to write that chunk to tape.  The smaller chunk is written to
a non-Amanda tape with dd.
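
A sketch of the two pieces involved, with example sizes, paths, and
device names only (the chunk file name in particular is illustrative):

  # amanda.conf: holding disk with chunks at ~90% of tape capacity
  holdingdisk hd1 {
      directory "/amanda/holding"
      use 30 Gb
      chunksize 18 Gb        # ~90% of the tapetype "length"
  }

  # after amflush has written the big chunk, the leftover small chunk
  # goes to a non-Amanda tape with plain dd:
  dd if=/amanda/holding/20010806/client._var.0.1 of=/dev/nst0 bs=32k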

Restoring these file systems is a PITA.  First, they have to be
recognized as being split onto two tapes.  Then the pieces have to be
brought back into the holding disk in two chunks as though they had
just been dumped there.  Then amlink is run to link them together.
Finally, a small patch (already in CVS) to amindexd (or amrecover,
I forget which) makes amrecover see the holding disk instead of the
tape version and the rest works OK.

>d) Let Amanda believe that the filesystem is split
>- create two dumptypes, each with a separate exclude list

You can do this in the disklist without separate dumptypes:

  hostname  diskname {
    ... dumptype stuff ...
    ... dumptype stuff ...
    ... dumptype stuff ...
  } [ spindle [ interface ]]

See the amanda(8) man page.

>- add two entries to the disklist with separate dumptypes (will
>  Amanda back up the same partition twice?)

Amanda will not allow the same client/disk to be mentioned more than once.
The usual trick is to use mount point names (e.g. "/var") instead of
disk names and add on a trailing "/.", e.g.:

   /var  ...
   /var/.  ...

I'm not 100% certain if this works with dump.  I do know it will work
with GNU tar, and since you're talking about exclusion lists, you're
already committed to that anyway.
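
Put together, the disklist entries might look roughly like this (host
name, dumptype name, and exclude-list paths are invented), using the
inline-dumptype form shown above:

  client.example.com  /var {
      comp-user-tar
      exclude list "/usr/local/etc/amanda/exclude.var-part1"
  }
  client.example.com  /var/. {
      comp-user-tar
      exclude list "/usr/local/etc/amanda/exclude.var-part2"
  }

The two exclude files split the top-level directories of /var between
the entries, so each dump image stays small enough to fit on one tape.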

>Do you think d) can be made to work?

Other people have done it this way.

>Kris

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Backing up filesystem too large for one tape

2001-08-06 Thread Kris Boulez

What do people think about my proposed solution to this problem?

Problem
---
A filesystem needs to be backed up using Amanda. Level 0 doesn't fit on a
single tape anymore (after compression)

Solutions
-
a) Buy a tape unit which can hold more data (expensive)

b) Wait for multi-tape support in amanda (ETA ?)

c) Split the filesystem on the server

d) Let Amanda believe that the filesystem is split
- create two dumptypes, each with a separate exclude list
- add two entries to the disklist with separate dumptypes (will
  Amanda back up the same partition twice?)
- (if the above step doesn't work) create a new /dev/rdsk/.. entry
  pointing to the same device but with a different name


Do you think d) can be made to work?

Kris,
-- 
Kris Boulez Tel: +32-9-241.11.00
AlgoNomics NV   Fax: +32-9-241.11.02
Technologiepark 4   email: [EMAIL PROTECTED]
B 9052 Zwijnaarde   http://www.algonomics.com/



Re: Backing up filesystem too large for one tape

2001-08-06 Thread Johannes Niess

Kris Boulez <[EMAIL PROTECTED]> writes:

> What do people think about my proposed solution to this problem?
> 
> Problem
> ---
> A filesystem needs to be backed up using Amanda. Level 0 doesn't fit on a
> single tape anymore (after compression)
> 
> Solutions
> -
> a) Buy a tape unit which can hold more data (expensive)
> 
> b) Wait for multi-tape support in amanda (ETA ?)
> 
> c) Split the filesystem on the server
> 
> d) Let Amanda believe that the filesystem is split
> - create two dumptypes, each with a separate exclude list
> - add two entries to the disklist with separate dumptypes (will
>   Amanda back up the same partition twice?)
> - (if the above step doesn't work) create a new /dev/rdsk/.. entry
>   pointing to the same device but with a different name
> 
> 
> Do you think d) can be made to work?

Until b) works (i.e. is past the gamma phase :-) ), the standard
workaround is

e) use tar on subdirectories as disklist entries. For a large home
partition you could move from /home/username1 to
/home/workgroup1/username1 and back up /home/workgroup[1..X]. It gets
tricky when you regularly add large subdirectories to a common
directory (e.g. large data sets from experiments).
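
A minimal disklist sketch of that layout (host name and dumptype name
are only examples):

  client.example.com  /home/workgroup1   user-tar
  client.example.com  /home/workgroup2   user-tar
  client.example.com  /home/workgroup3   user-tar

Each entry is then a separate, smaller dump that Amanda can schedule
and write to tape on its own.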

HTH,

Johannes Niess