This makes sense, because when I ran my initial Amanda dump on that host I
had no holding-disk defined, and it did back up the filesystem at level 0,
and that filesystem has over 24GB of data on it, albeit mostly small
.c files and the like.  I am left wondering, then, how chunksize fits into
the equation.  It was my understanding that this is exactly what chunksize
was for.
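for reference, my understanding is that chunksize is a holding-disk
parameter, so it only takes effect when a holding disk is actually in use.
something like this in amanda.conf (directory and sizes here are just
placeholders, not my real setup) should keep each chunk on the holding
disk under the 2GB file limit:

```
holdingdisk hd1 {
    # where dump images are spooled before going to tape
    directory "/amanda/holding"
    # how much of this disk Amanda may use
    use 20 Gb
    # split each dump image into files no larger than this
    chunksize 1 Gb
}
```

if no holdingdisk block is defined, dumps go straight to tape and
chunksize never comes into play.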

well, i just started another amdump right now without the holding-disk in
the amanda.conf file.  let's see what happens.

-edwin 

-----Original Message-----
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: ["data write: File too large"]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
> But, I think Edwin doesn't have this problem, meaning he says he doesn't
> have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a single file as a
whole, leading to a huge file; the same happens with tar. The problem only
occurs when a holding-disk is used. Continuously writing a stream of
unlimited size to tape is no problem, but as soon as you try to do this
onto a filesystem, you run into whatever limits you have, usually a 2GB
limit on a single file.
No holding-disk -> no big file -> no problem. (Well, the tape might have
to stop more often because of interruptions in the data flow.)

Regards,
        Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Suttgart            Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/
