Amanda 3.3.4

Hi,

I'm guessing the answer is no since I haven't read about this, but maybe...

I'm hoping amanda might be able to automatically split DLEs into
sub-DLEs of an approximate size, say 500GB.

My understanding is this:

1) If I have multiple DLEs in my disklist and tell amdump to perform a
level 0 dump of the complete config, each DLE gets written to tape as a
separate dump/tar image, split into parts if the image is larger than
part-size (see the tapetype sketch below). Is that right?
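
For reference, by "part-size" I mean the part splitting configured in
the tapetype, something like this (values made up for illustration, not
my real config):

    define tapetype LTO5 {
        comment "illustrative only"
        length 1500 gbytes
        part-size 100 gbytes    # write each dump image in <=100GB parts
    }

    runtapes 4    # allow one amdump run to continue onto further tapes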

2) If multiple DLEs are processed in a single level 0 amdump run, with
each DLE much smaller than the tape size, then as many as can fit will
be written to a single tape, possibly spilling onto the next tape. In
any case it won't be one DLE per tape. Is that right? That matches what
I've observed so far.

3) I had figured that when restoring, amrestore has to read through a
complete dump/tar image before it can extract even a single file. So if
I have a single DLE of ~2TB that fits (in multiple parts) on a single
tape, then to restore one file, amrestore has to read the whole tape.
HOWEVER, I'm now testing the restore of a single file from a large
2.1TB DLE: the file has already been restored, yet the amrecover
operation keeps running, long after the file appeared. Why might that
be?

The recover log shows this on the client doing the recovery:

[root@cfile amRecoverTest_Feb_27]# tail -f
/var/log/amanda/client/jet1/amrecover.20140227135820.debug
Thu Feb 27 17:23:12 2014: thd-0x25f1590: amrecover: stream_read_callback:
data is still flowing

3a) Where does amrecover write the recovered dump image? I can't see
space being used for it on either the server or the client. Is it
streaming and untarring in memory, writing only the requested files to
disk?
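
My mental model is something conceptually like the pipeline below on
the client. I'm not claiming this is literally what amrecover runs, and
the config name, date and file path are placeholders:

    # conceptual illustration only: stream the image and let tar extract
    # just the wanted path, so the full dump never lands on disk
    amfetchdump -p DailySet1 cfile /data 20140227 \
        | tar -xpf - ./amRecoverTest_Feb_27/somefile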

4) To restore from a single DLE's dump/tar image that is smaller than
the tape size and sits on a tape alongside several other small DLE
images, amrestore can seek to that particular DLE's image and only has
to read that one file. Is that right?
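
In other words, I'm picturing something like this being possible, with
only the matching image being read off the tape (device and patterns
are made up for the example):

    # sketch: skip to the one image matching host/disk and list its contents
    amrestore -p /dev/nst0 cfile /data | tar -tf -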

So assuming all the above is true, it'd be great if amdump could
automatically break large DLEs into smaller DLEs, to end up with
smaller dump images and faster restores of individual files. Maybe it
would happen only at level 0, so that incremental dumps would still use
the same sub-DLEs as the most recent level 0 dump.

The issue I have is that with 30TB of data, there'd be a lot of manual
fragmenting of data directories to get easily restorable DLEs of, say,
500GB each. Some top-level dirs on my main data drive hold 3-6TB each,
while many others hold only 100GB or so. Manually breaking these into
smaller DLEs once is fine, but since data gets regularly moved, added
and deleted, things would quickly change and upset my smaller DLEs.
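
For what it's worth, the manual splitting I mean is carving one
physical directory into several DLEs with include lists, something like
this in the disklist (names illustrative; syntax as I read the amanda
dumptype man page):

    # two sub-DLEs carved out of one physical directory
    cfile /data/big-part1 /data/big {
        user-tar
        include "./dir[a-m]*"
    }
    cfile /data/big-part2 /data/big {
        user-tar
        include "./dir[n-z]*"
    }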

Any thoughts on how I can approach this? If amanda can't do it, I
thought I might try a script that creates DLEs of a desired size based
on disk usage, then run the script every time I want to do a new level
0 dump. That of course would mean telling amanda when to do level 0s,
rather than letting amanda control it.
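
Something along these lines is what I had in mind (untested; the host,
path and dumptype names are placeholders):

    #!/usr/bin/env python3
    # Sketch only (untested): measure each top-level directory with du,
    # pack them first-fit into ~500GB bins, and print one disklist entry
    # per bin, using include lists so amanda still sees one real
    # directory. TOP, HOST and DUMPTYPE stand in for my real setup.

    import os
    import subprocess

    TOP = "/data"            # big directory to split up
    HOST = "cfile"           # client hostname
    DUMPTYPE = "user-tar"    # existing dumptype to reference
    TARGET = 500 * 1024**3   # ~500GB per sub-DLE

    def dir_size(path):
        """Bytes used under path, via GNU du -sb."""
        out = subprocess.check_output(["du", "-sb", path])
        return int(out.split()[0])

    # Measure every top-level subdirectory.
    entries = []
    for name in sorted(os.listdir(TOP)):
        full = os.path.join(TOP, name)
        if os.path.isdir(full) and not os.path.islink(full):
            entries.append((name, dir_size(full)))

    # First-fit decreasing: biggest dirs first, into the first bin with
    # room. A dir bigger than TARGET gets a bin to itself (this script
    # doesn't recurse any deeper).
    bins = []  # each bin is [total_bytes, [dir names]]
    for name, size in sorted(entries, key=lambda e: -e[1]):
        for b in bins:
            if b[0] + size <= TARGET:
                b[0] += size
                b[1].append(name)
                break
        else:
            bins.append([size, [name]])

    # Print disklist entries: unique logical diskname, shared
    # diskdevice, include lines limiting each DLE to its bin's dirs.
    for i, (total, names) in enumerate(bins, 1):
        print(f"# bin {i}: ~{total / 1024**3:.0f}GB")
        print(f"{HOST} {TOP}-part{i} {TOP} {{")
        print(f"    {DUMPTYPE}")
        for name in sorted(names):
            print(f'    include append "./{name}"')
        print("}")

I'd then paste the emitted entries into the disklist in place of the
single big DLE before forcing the next level 0.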

Thanks for reading this long post!

-M
