>... On this server I've got a large partition - approx. 25 GB. I set 
>etimeout to 6000 and dtimeout to 18000 - I thought this should be 
>enough.

Assuming the message you're getting is from planner ("Request to XXX
timed out"), only etimeout would have any effect.  Dtimeout is the "data
timeout" for when the dump is actually run.

>In log on client side (sendsize.debug) it is written as directory was
>finished, but in my amanda e-mail report I've got Request timed out 
>anyway.

Look at the first and last lines of that file.  How long did the
estimate take?
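If you want a quick way to eyeball that, something like this sketch works
(the /tmp/amanda path is the usual client debug location, but yours may
differ):

```shell
# Print the first and last lines of a sendsize debug file; the
# timestamps at the start of each line bracket the estimate run.
show_span() {
    head -n 1 "$1"
    tail -n 1 "$1"
}

# e.g.: show_span /tmp/amanda/sendsize.debug
```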

What version of GNU tar are you using ("gtar --version")?

>I think that this might be result of one of two problems:
>       - simply tar of directory was made, but couldn't be sent to
>         server in time. This could be solved with setting even 
>         higher etimeout value

Yes.

>       - problems are with holding disks ...

The holding disks don't have anything to do with the estimates.

>         ... Does amanda know what holding disk to take, or 
>         should I configure it manually.

It depends on what version of Amanda you're using.  If you set a chunksize
to break the images up into pieces in the holding disk and you are using
a new version of Amanda (e.g. 2.4.2p2), the chunks will be scattered
around all of your holding disks.  Older versions of Amanda would put
all the chunks (or the whole image) in whatever disk they started out in.

Note, however, that Amanda uses the estimated size to guess at which
disk has enough space (in older versions).
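For reference, chunksize is set per holding disk in amanda.conf.  A
minimal sketch (the names, paths and sizes here are made up, not taken
from your config):

```
# amanda.conf -- hypothetical holding disk definitions with chunking
holdingdisk hd1 {
    directory "/dumps/amanda1"   # where the chunks land
    use 8 Gb                     # space Amanda may use here
    chunksize 1 Gb               # split images into 1 GB chunks
}
holdingdisk hd2 {
    directory "/dumps/amanda2"
    use 8 Gb
    chunksize 1 Gb
}
```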

>Does Amanda have any limitations with etimeout, dtimeout, max. 
>directory sizes?

Amanda does not have limits on the timeouts.  As to the directory size,
that's not an Amanda problem, it's a GNU tar one.

>I read about manually backup partition for the first time, and then
>configure to only incremental backup it every time - do you think
>this could be the solution?  ...

I'd keep trying to find out why tar is so slow.  You might have an old
version.  Or there could be some really deep directories (or bajillions
of files), etc.

Another possibility would be to back up the individual top level
directories of that big file system, if that's not too much hassle.
You'll be splitting up the load that way so each estimate (and backup)
will be a smaller chunk.
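A disklist split along top level directories might look like this
(the hostname, paths and dumptype name are placeholders, not your
actual configuration):

```
# disklist -- hypothetical split of one big file system
client.example.com /big/home     comp-user-tar
client.example.com /big/projects comp-user-tar
client.example.com /big/archive  comp-user-tar
```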

If you decide to upgrade to a new version of GNU tar, make sure you
get the latest from alpha.gnu.org and make sure it is 1.13.19 or later.
Do **not**, under any circumstances, run anything older than that (e.g.
plain 1.13) -- there were lots of bugs in those.
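A rough sketch of that version check in shell.  It only classifies the
1.13.x series discussed here; anything outside that series is assumed
to be fine:

```shell
# Return success if the given GNU tar version is 1.13.19 or later.
tar_ok() {
    case "$1" in
        1.13|1.13.[0-9]|1.13.1[0-8]) return 1 ;;  # older than 1.13.19
        *) return 0 ;;                            # assumed new enough
    esac
}

# Usage against the installed binary:
#   ver=$(gtar --version | head -n 1 | awk '{print $NF}')
#   if tar_ok "$ver"; then echo "ok: $ver"; else echo "upgrade: $ver"; fi
```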

>Jure

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
