RE: data write: File too large

2003-11-18 Thread Dana Bourgeois
I had this error when trying to put dump files larger than 2 GB on an OS
without large-file support (RedHat 7.0).  I had chunksize set correctly but
forgot about what would happen when I used the file: device to write the
coalesced dump files to "tapes" on disk.

Got a modern kernel and the problem went away.


Dana Bourgeois


 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Byarlay, Wayne A.
 Sent: Monday, November 17, 2003 7:54 AM
 To: [EMAIL PROTECTED]
 Subject: data write: File too large
 
 
 Hi all,
 
 I checked the archives on this problem... but they all 
 suggested adjusting the chunksize of the holdingdisk section 
 in my amanda.conf. However, I have ver. 2.4.1, and there's no 
 holdingdisk section IN my amanda.conf! Is the chunksize the 
 problem? I've got filesystems MUCH larger than this one going 
 to AMANDA... but if so, how do I adjust my chunksize?
 
 Here's from my error log:
 /-- xxx/services lev 0 FAILED [data write: File too large]
 sendbackup: start [xxx:/services level 0]
 sendbackup: info BACKUP=/bin/tar
 sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 \
 
 The example.conf for the latest version has some section 
 like: holdingdisk hd1 {
   comment blah
   directory /dumps/amanda
   use 256 Mb
   chunksize 1Gb
   }
 
 BUT my amanda.conf has NO section similar to this. I just have:
 
 diskdir /dumps/disk1
 disksize 4000 mb
 
 So I guess my questions for the gurus are: Is the chunksize 
 the problem? If so, how do I change it with this version? If 
 not, is it just some really huge file? (I am going to check 
 on this after sending this e-mail).
 
 
 Wayne Byarlay
 Purdue Libraries ITD 
 [EMAIL PROTECTED]   
 765-496-3067 
 
 



RE: data write: File too large

2003-11-18 Thread Byarlay, Wayne A.
Thanks for the responses...

Upgrading at this time would be GREAT. Unfortunately this is not going
to happen anytime soon. I have another hard drive with RH9 and 2.4.4 on
it, but it's only half-configured, and it took a while to get to that
point.

As for the current, iridium-dust, ancient setup, it's Debian, a really
old one... Woody? Again, upgrading is on my list of stuff To Do.

So, chunksize is the problem, and nobody's sure of the syntax for 2.4.1?
I'll experiment... It seemed unlikely that it was merely the SIZE of the
filesystem I was trying to back up, because I've got another which is
really huge, and I had assumed it was even huger than this one. You know
what they say about assumptions.

Again thanks all for the help.

-wab

Chunk: "You guys'll never believe me. There was 2 cop cars, okay? And
they were chasing this 4-wheel deal, this real neat ORV, and there were
bullets flying all over the place. It was the most amazing thing I ever
saw!" -The Goonies



RE: data write: File too large

2003-11-18 Thread Byarlay, Wayne A.
Never Mind,

I was under the impression that, for some reason, the holdingdisk {}
section of amanda.conf could not exist in my version. But I put it
there, with an appropriate chunksize, and ... amcheck did not complain
at all. In fact it reported that the 4096 size requested was plenty.
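
For the archive, the block that worked was presumably along these lines
(directory taken from the old diskdir setting; the sizes are guesses from
the amcheck output):

holdingdisk hd1 {
    comment "replaces diskdir/disksize"
    directory /dumps/disk1
    use 4096 Mb
    chunksize 1 Gb
    }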

If you don't hear from me again on this issue, consider it solved!

wab




Re: data write: File too large

2003-11-17 Thread Gene Heskett
On Monday 17 November 2003 10:54, Byarlay, Wayne A. wrote:
Hi all,

I checked the archives on this problem... but they all suggested
adjusting the chunksize of the holdingdisk section in my amanda.conf.
However, I have ver. 2.4.1, and there's no holdingdisk section IN
my amanda.conf! Is the chunksize the problem? I've got filesystems
MUCH larger than this one going to AMANDA... but if so, how do I
adjust my chunksize?


Wow, talk about ancient history, 2.4.1 has iridium dust on it!
And it may be that it's a later option.

Here's from my error log:
/-- xxx/services lev 0 FAILED [data write: File too large]
sendbackup: start [xxx:/services level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\

The example.conf for the latest version has some section like:
holdingdisk hd1 {
   comment blah
   directory /dumps/amanda
   use 256 Mb
   chunksize 1Gb
   }

BUT My amanda.conf has NO section similar to this. I just have:

diskdir /dumps/disk1
disksize 4000 mb

So I guess my questions for the gurus are: Is the chunksize the
 problem? If so, how do I change it with this version? If not, is it
 just some really huge file? (I am going to check on this after
 sending this e-mail).


Wayne Byarlay
Purdue Libraries ITD
[EMAIL PROTECTED]
765-496-3067

Run, don't walk, to the amanda.org web page, pull nearly all the way 
to the bottom and find the latest snapshots link, download and build 
2.4.4p1.  Really, a lot has been improved in the last 4 or 5 years, 
hopefully without breaking any backwards compatibility.

I use a script to simplify the build, and it has been posted to this group 
many times, as recently as a couple of weeks ago.  You might find it 
useful with suitable mods.

-- 
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
99.27% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: data write: File too large

2003-11-17 Thread Paul Bijnens
Byarlay, Wayne A. wrote:

Hi all,

I checked the archives on this problem... but they all suggested
adjusting the chunksize of the holdingdisk section in my amanda.conf.
However, I have ver. 2.4.1, and there's no holdingdisk section IN my
amanda.conf! Is the chunksize the problem? I've got filesystems MUCH
larger than this one going to AMANDA... but if so, how do I adjust my
chunksize?
While 2.4.1 is not exactly a dinosaur, it's from long ago...
Is there any possibility of upgrading?  At least 2.4.1p2 was much more
stable.  Currently we're on 2.4.4p1.
It's not the size of the filesystem that is the problem, but the
maximum size of one file on that filesystem.  For older Unix/Linux
versions this is often 2 Gbyte, even when the filesystem could
be larger.
That one file is the dump image.  Specifying chunksize instructs
amanda to chunk it up into manageable pieces on disk; while writing
to tape, all chunks are concatenated again.

The example.conf for the latest version has some section like:
holdingdisk hd1 {
comment blah
directory /dumps/amanda
use 256 Mb
chunksize 1Gb
}
BUT my amanda.conf has NO section similar to this. I just have:

diskdir /dumps/disk1
disksize 4000 mb
These last two parameters are now deprecated.  The program still
recognizes them for backward compatibility, but you had better use
the holdingdisk format as above, if it works in your ancient
version.  The disksize parameter is now called use, but with more
configurability (such as negative sizes).
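
For example (a sketch only; the negative size is the usual idiom for "use
all free space except the last 100 MB", as seen elsewhere in this thread):

holdingdisk hd1 {
    directory /dumps/disk1
    use -100 Mb
    chunksize 1 Gb
    }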

So I guess my questions for the gurus are: Is the chunksize the problem?
Yes.

--
Paul Bijnens, Xplanation                            Tel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUM    Fax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



Re: [data write: File too large]

2002-06-23 Thread Paul Bijnens



Pedro Aguayo wrote:
 
 OK, I didn't before, but I think I do now.
 Basically, when amanda writes to the holding disk, it writes it to a flat file
 on the file system, and if that flat file is larger than 2GB then you might
 encounter a problem if your filesystem has a limitation where it can only
 support files under 2GB.
 But if you write directly to tape you will avoid this problem because you are
 bypassing the filesystem.

And that's why the parameter chunksize for the holding disk can be set
to e.g. 1 Gbyte.

And, when everything else fails, read the manual pages.  :-)

 
 Right Adrian?
 
 Hope I got it right, but this makes sense.

-- 
Paul Bijnens, Lant Tel  +32 16 40.51.40
Interleuvenlaan 15 H, B-3001 Leuven, BELGIUM   Fax  +32 16 40.49.61
http://www.lant.com/   email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



Re: [data write: File too large]

2002-01-17 Thread John R. Jackson

This makes sense because when I ran my initial Amanda dump on that host, I
had no holding-disk defined, and it did back up the filesystem at level 0,
and that filesystem has over 24GB of data on it, albeit they are all small
.c files and such.  I am left wondering then how chunksize fits into the
equation.  It was my understanding that this is what the chunksize was for.

If you didn't have a holding disk defined, Amanda went straight to tape.

When you do have a holding disk defined, Amanda will try (given enough
space, etc) to write the whole image into it, then when that is done,
write the image to tape.

Without chunksize, the holding disk image will be one monolithic file.
With chunksize, the image will be broken up into pieces when put into
the holding disk and then recombined when written to tape.  Chunksize was
put in to support systems with holding disks that only handled individual
files smaller than 2 GBytes.  You don't have that problem on AIX 4+.

Another possibility for the "File too large" error is your Amanda user
running into ulimit.  If you do this:

  su - user -c "ulimit -a"

does it have a file(blocks) limit?  If so, you can use smitty to
change that.
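
For instance (a sketch; "amanda" stands in for whatever your dump user is
called):

  su - amanda -c "ulimit -a"    # look for a file(blocks) line with a hard number
  su - amanda -c "ulimit -f"    # the file-size limit alone, in blocks on most shells

If that prints a number instead of "unlimited", the ulimit rather than the
filesystem may be what the write is hitting.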

-edwin 

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: [data write: File too large]

2002-01-15 Thread Adrian Reyer

On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote:
 Could be that your holding disk space is too small, or are you trying to back up a
 file that is larger than 2 gigs?

Perhaps I misunderstand something here, but...
The holding disk AFAIK holds the entire dump of the filesystem you are
trying to back up, to make it one large file that can get onto tape
faster once completed.
So if your partition has more than 2GB in use, that file might be
bigger than 2GB and you run into a filesystem limit.
I had that problem with an older Linux installation; turning off the
holding-disk and dumping directly to tape works fine in that case.

Regards,
Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread Pedro Aguayo

That's a great idea; I'm going to save this one.

But, I think Edwin doesn't have this problem, meaning he says he doesn't
have a file larger than 2gb.

Could be hidden, or maybe you mounted over a directory that had a huge file,
just digging here.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Adrian Reyer
Sent: Tuesday, January 15, 2002 4:03 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote:
 Could be that your holding disk space is too small, or are you trying to back up
 a file that is larger than 2 gigs?

Perhaps I misunderstand something here, but...
The holding disk AFAIK holds the entire dump of the filesystem you are
trying to back up, to make it one large file that can get onto tape
faster once completed.
So if your partition has more than 2GB in use, that file might be
bigger than 2GB and you run into a filesystem limit.
I had that problem with an older Linux installation; turning off the
holding-disk and dumping directly to tape works fine in that case.

Regards,
Adrian Reyer
--
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/




Re: [data write: File too large]

2002-01-15 Thread Adrian Reyer

On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file; same with tar. The problem only occurs
when the holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesystem, you run into whatever limits you have, usually a 2GB limit on a
single file.
No holding-disk -> no big file -> no problem. (Well, the tape might have
to stop more often because of interruptions in data flow.)

Regards,
Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread Pedro Aguayo

Ahh! "I see," said the blind man.

Pedro

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file; same with tar. The problem only occurs
when the holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesystem, you run into whatever limits you have, usually a 2GB limit on a
single file.
No holding-disk -> no big file -> no problem. (Well, the tape might have
to stop more often because of interruptions in data flow.)

Regards,
Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread KEVIN ZEMBOWER

"I see!" said the blind carpenter, who picked up his hammer and saw.
-My ninth grade science teacher, Brother Paul, a terrible
punner.

-Kevin

 Pedro Aguayo [EMAIL PROTECTED] 01/15/02 09:45AM 
Ahh! "I see," said the blind man.

Pedro

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED] 
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he
doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file; same with tar. The problem only occurs
when the holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesystem, you run into whatever limits you have, usually a 2GB limit on a
single file.
No holding-disk -> no big file -> no problem. (Well, the tape might have
to stop more often because of interruptions in data flow.)

Regards,
Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/



Re: [data write: File too large]

2002-01-15 Thread Hans Kinwel


On 15-Jan-2002 Adrian Reyer wrote:

 No holding-disk -> no big file -> no problem. (Well, the tape might have
 to stop more often because of interruptions in data flow.)


Why not define a chunksize of 500 Mb on your holdingdisk?  That's what
I did.  Backups go faster and there's less wear and tear on my tapestreamer.

-- 
  |Hans Kinwel
  | [EMAIL PROTECTED]



RE: [data write: File too large]

2002-01-15 Thread Pedro Aguayo

OK, I didn't before, but I think I do now.
Basically, when amanda writes to the holding disk, it writes it to a flat file
on the file system, and if that flat file is larger than 2GB then you might
encounter a problem if your filesystem has a limitation where it can only
support files under 2GB.
But if you write directly to tape you will avoid this problem because you are
bypassing the filesystem.


Right Adrian?

Hope I got it right, but this makes sense.

Pedro

-Original Message-
From: Wayne Richards [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 10:12 AM
To: Adrian Reyer
Cc: Pedro Aguayo
Subject: Re: [data write: File too large]


I don't understand the problem.  When amanda encounters a filesystem larger
than the holding disk, she AUTOMATICALLY resorts to direct tape write.
Quoting from the amanda.conf file:
# If no holding disks are specified then all dumps will be written directly
# to tape.  If a dump is too big to fit on the holding disk then it will be
# written directly to tape.  If more than one holding disk is specified then
# they will all be used round-robin.

We routinely back up filesystems larger than our holding disks, and many
files > 4GB.
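
A sketch of the multiple-holding-disk case that comment describes (paths
hypothetical; the disks are then used round-robin):

holdingdisk hd1 {
    directory /dumps/amanda1
    use -100 Mb
    chunksize 1 Gb
    }
holdingdisk hd2 {
    directory /dumps/amanda2
    use -100 Mb
    chunksize 1 Gb
    }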


 On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
  But, I think Edwin doesn't have this problem, meaning he says he doesn't
  have a file larger than 2gb.

 I had none, either, but the filesystem was dumped into a file as a
 whole, leading to a huge file; same with tar. The problem only occurs
 when the holding-disk is used. Continuously writing a stream of unlimited
 size to a tape is no problem, but as soon as you try to do this onto a
 filesystem, you run into whatever limits you have, usually a 2GB limit on a
 single file.
 No holding-disk -> no big file -> no problem. (Well, the tape might have
 to stop more often because of interruptions in data flow.)

 Regards,
   Adrian Reyer
 --
 Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
 LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
 Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
 Linux, Netzwerke, Consulting & Support   http://lihas.de/



---
Wayne Richards  e-mail: [EMAIL PROTECTED]





Re: [data write: File too large]

2002-01-15 Thread Paul Bijnens



Pedro Aguayo wrote:
 
 OK, I didn't before, but I think I do now.
 Basically, when amanda writes to the holding disk, it writes it to a flat file
 on the file system, and if that flat file is larger than 2GB then you might
 encounter a problem if your filesystem has a limitation where it can only
 support files under 2GB.
 But if you write directly to tape you will avoid this problem because you are
 bypassing the filesystem.

And that's why the parameter chunksize for the holding disk can be set
to e.g. 1 Gbyte.

And, when everything else fails, read the manual pages.  :-)

 
 Right Adrian?
 
 Hope I got it right, but this makes sense.

-- 
Paul Bijnens, Lant Tel  +32 16 40.51.40
Interleuvenlaan 15 H, B-3001 Leuven, BELGIUM   Fax  +32 16 40.49.61
http://www.lant.com/   email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



RE: [data write: File too large]

2002-01-15 Thread Rivera, Edwin

This makes sense because when I ran my initial Amanda dump on that host, I
had no holding-disk defined, and it did back up the filesystem at level 0,
and that filesystem has over 24GB of data on it, albeit they are all small
.c files and such.  I am left wondering then how chunksize fits into the
equation.  It was my understanding that this is what the chunksize was for.

well, i just started another amdump right now without the holding-disk in
the amanda.conf file.  let's see what happens.

-edwin 

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file; same with tar. The problem only occurs
when the holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesystem, you run into whatever limits you have, usually a 2GB limit on a
single file.
No holding-disk -> no big file -> no problem. (Well, the tape might have
to stop more often because of interruptions in data flow.)

Regards,
Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread Rivera, Edwin

is there a way, in the amanda.conf file, to specify *NOT* to use the
holding-disk for a particular filesystem?

for example, if i use amanda to back up 8 filesystems on one box and i want 7
to use the holding-disk, but one not to.. is that possible?

just curious..

-edwin

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file; same with tar. The problem only occurs
when the holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesystem, you run into whatever limits you have, usually a 2GB limit on a
single file.
No holding-disk -> no big file -> no problem. (Well, the tape might have
to stop more often because of interruptions in data flow.)

Regards,
Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/



Re: [data write: File too large]

2002-01-15 Thread Gene Heskett

On Tuesday 15 January 2002 09:56 am, Rivera, Edwin wrote:
is there a way, in the amanda.conf file, to specify *NOT* to use
 the holding-disk for a particular filesystem?

for example, if i use amanda to backup 8 filesystems on one box
 and i want 7 to use the holding-disk, but one not to.. is that
 possible?

just curious..

Down toward the end of amanda.conf is just such a 'dumptype'; 
edit it to suit your circumstances.

[...]
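
For the archive, the stanza in question looks roughly like this (names are
illustrative; comp-user-tar stands in for whatever dumptype the entry
already uses):

define dumptype nohold-tar {
    comp-user-tar      # inherit the usual options
    holdingdisk no     # send this filesystem straight to tape
    }

The one disklist entry that should bypass the holding disk then names
nohold-tar instead of its current dumptype.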

-- 
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
98.3+% setiathome rank, not too shabby for a hillbilly



Re: [data write: File too large]

2002-01-14 Thread Adrian Reyer

On Mon, Jan 14, 2002 at 10:03:57AM -0500, Rivera, Edwin wrote:
 my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
 filesystems at level 0 anymore, it did it only once (the first amdump run).
 here is the error:
 aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
 large]
 here is the entry in my amanda.conf file
 chunksize 1536 mbytes
 can you suggest anything?  thanks in advance.

2GB limit on files perhaps? I'm not sure about AIX 4.2.1 support for files
bigger than 2GB; I quit AIX at version 3.2.5.  It might be the
holding-disk file.

Regards,
Adrian Reyer
-- 
Adrian Reyer                             Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart           Fax:  +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR          Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support   http://lihas.de/



RE: [data write: File too large]

2002-01-14 Thread Pedro Aguayo

Could be that your holding disk space is too small, or are you trying to back up a
file that is larger than 2 gigs?

That's all I can come up with.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Rivera, Edwin
Sent: Monday, January 14, 2002 10:04 AM
To: [EMAIL PROTECTED]
Subject: [data write: File too large]


hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).

here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]

here is the entry in my amanda.conf file
chunksize 1536 mbytes


can you suggest anything?  thanks in advance.

Highest Regards,

Edwin R. Rivera
UNIX Administrator
Tel:  +1 305 894 4609
Fax:  +1 305 894 4799    [EMAIL PROTECTED]




RE: [data write: File too large]

2002-01-14 Thread Pedro Aguayo

You sure you don't have a huge file in that filesystem?

Pedro

-Original Message-
From: Rivera, Edwin [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:55 AM
To: Pedro Aguayo; Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


my holding disk is 4GB and i have it set to use -100Mbytes

i'm confused on this one..


-Original Message-
From: Pedro Aguayo [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:53 AM
To: Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


Could be that your holding disk space is too small, or are you trying to back up a
file that is larger than 2 gigs?

That's all I can come up with.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Rivera, Edwin
Sent: Monday, January 14, 2002 10:04 AM
To: [EMAIL PROTECTED]
Subject: [data write: File too large]


hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).

here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]

here is the entry in my amanda.conf file
chunksize 1536 mbytes


can you suggest anything?  thanks in advance.

Highest Regards,

Edwin R. Rivera
UNIX Administrator
Tel:  +1 305 894 4609
Fax:  +1 305 894 4799    [EMAIL PROTECTED]




RE: [data write: File too large]

2002-01-14 Thread Rivera, Edwin

yeah, it's a CVMC filesystem with a ton of small flat files, no large
individual files.. only .c and .h files and things like that.

-Original Message-
From: Pedro Aguayo [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 11:00 AM
To: Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


You sure you don't have a huge file in that filesystem?

Pedro

-Original Message-
From: Rivera, Edwin [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:55 AM
To: Pedro Aguayo; Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


my holding disk is 4GB and i have it set to use -100Mbytes

i'm confused on this one..


-Original Message-
From: Pedro Aguayo [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:53 AM
To: Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


Could be that your holding disk space is too small, or are you trying to back up a
file that is larger than 2 gigs?

That's all I can come up with.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Rivera, Edwin
Sent: Monday, January 14, 2002 10:04 AM
To: [EMAIL PROTECTED]
Subject: [data write: File too large]


hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).

here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]

here is the entry in my amanda.conf file
chunksize 1536 mbytes


can you suggest anything?  thanks in advance.

Highest Regards,

Edwin R. Rivera
UNIX Administrator
Tel:  +1 305 894 4609
Fax:  +1 305 894 4799    [EMAIL PROTECTED]



Re: data write: File too large ???

2001-08-14 Thread Joshua Baker-LePain

On Tue, 14 Aug 2001 at 9:05am, Katrinka Dall wrote

 /-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
 sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
 sendbackup: info BACKUP=/bin/tar
 sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 \

   Now, I know that this isn't an issue of not having enough space on the
 tape or holding disk, both are in excess of 35G.  Some of the things I
 have tried are, upgrading tar on the server that is failing, upgrading
 the backup server from RedHat 6.2 to 7.1, and using every available
 version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
 that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
 machine.

It looks like you may be hitting Linux's filesize limitation.  What is
your chunksize set to?  Try setting it to something smaller than 2GB minus
32KB (for the header).  There's no performance penalty, so why not try 1GB.
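
In the holdingdisk block that is just one line, e.g. (path hypothetical):

holdingdisk hd1 {
    directory /dumps/amanda
    use -100 Mb
    chunksize 1 Gb    # comfortably under the 2GB minus 32KB ceiling
    }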

That may not be the issue, though, as amanda 2.4.2p2 on RedHat 7.1
should have large file support.  It seems to compile with the right
flags -- has anybody confirmed whether it works or not?

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: data write: File too large ???

2001-08-14 Thread Ragnar Kjørstad

On Tue, Aug 14, 2001 at 09:05:25AM -0500, Katrinka Dall wrote:
  FAILED AND STRANGE DUMP DETAILS:
  
 /-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]

Does it fail after backing up 2 Gigabyte?

It sounds like you don't have Large File Support (LFS).

 sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
 sendbackup: info BACKUP=/bin/tar
 sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 \ 
 
   Now, I know that this isn't an issue of not having enough space on the
 tape or holding disk, both are in excess of 35G.  Some of the things I
 have tried are, upgrading tar on the server that is failing, upgrading
 the backup server from RedHat 6.2 to 7.1, and using every available
 version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
 that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
 machine.

RedHat 7.1 should include a kernel, libraries and utilities with LFS.
Did you install some of the utilities manually, or were they all from
RedHat 7.1? (e.g. amanda?)

If so, you need to recompile them on your RH7.1 system to make them
support > 2GB files.
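
A sketch of such a rebuild, assuming the standard getconf large-file
switches and typical configure options (newer Amanda configure scripts may
detect LFS on their own):

  CFLAGS="$(getconf LFS_CFLAGS)" LDFLAGS="$(getconf LFS_LDFLAGS)" \
      ./configure --with-user=amanda --with-group=disk
  make
  make install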


-- 
Ragnar Kjorstad



RE: data write: File too large ???

2001-08-14 Thread Anthony Valentine

Hello Katrinka,

Have you looked at your chunksize setting in your holding disk config?  I
believe that Linux has a 2GB limit on file sizes that may be causing your
problem.  Try setting this to just below 2GB (like 1999 MB) and see if that
helps.

Hope this was helpful!

Anthony Valentine


-Original Message-
From: Katrinka Dall [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 14, 2001 6:05 AM
To: [EMAIL PROTECTED]
Subject: data write: File too large ???


Hello,


I must say that I'm completely stumped, trying everything I can
possibly think of, I've decided to post this here in hopes that one of
you can help me out.  Recently I had to migrate our backup server from a
Solaris 2.5.1 machine to a Linux 6.2 machine.  In the process of doing
this, I found that I was unable to get one of the Linux machines that we
had been backing up to work properly.  I keep getting this error
whenever I try to do a dump on this machine/filesystem:


 FAILED AND STRANGE DUMP DETAILS:
 
/-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\ 

Now, I know that this isn't an issue of not having enough space on the
tape or holding disk, both are in excess of 35G.  Some of the things I
have tried are, upgrading tar on the server that is failing, upgrading
the backup server from RedHat 6.2 to 7.1, and using every available
version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
machine.

If you have any suggestions I'd greatly appreciate them.  Oh, and one
more thing, when we were running these backups from the Solaris machine,
they did not produce this error, and I had about a quarter of the space
on tape and holding disk that I have now.

Thanks in advance,

Katrinka



Re: data write: File too large ???

2001-08-14 Thread Christoph Sold

Katrinka Dall wrote:

Hello,


   I must say that I'm completely stumped, trying everything I can
possibly think of, I've decided to post this here in hopes that one of
you can help me out.  Recently I had to migrate our backup server from a
Solaris 2.5.1 machine to a RedHat 6.2 Linux machine.  In the process of doing
this, I found that I was unable to get one of the Linux machines that we
had been backing up to work properly.  I keep getting this error
whenever I try to do a dump on this machine/filesystem:


 FAILED AND STRANGE DUMP DETAILS:
 
/-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\ 

   Now, I know that this isn't an issue of not having enough space on the
tape or holding disk, both are in excess of 35G.  Some of the things I
have tried are, upgrading tar on the server that is failing, upgrading
the backup server from RedHat 6.2 to 7.1, and using every available
version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
machine.

If you have any suggestions I'd greatly appreciate them.  Oh, and one
more thing, when we were running these backups from the Solaris machine,
they did not produce this error, and I had about a quarter of the space
on tape and holding disk that I have now.


Seems your Linux version can handle only files up to 2GB in size.  Either 
define chunksize smaller than that, or install an OS which handles 
bigger files.

HTH
-Christoph Sold





Re: data write: File too large ???

2001-08-14 Thread Joshua Baker-LePain

On Tue, 14 Aug 2001 at 11:30am, Joshua Baker-LePain wrote

 On Tue, 14 Aug 2001 at 9:05am, Katrinka Dall wrote

  /-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
  sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
  sendbackup: info BACKUP=/bin/tar
  sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end
  \
 
  Now, I know that this isn't an issue of not having enough space on the
  tape or holding disk, both are in excess of 35G.  Some of the things I
  have tried are, upgrading tar on the server that is failing, upgrading
  the backup server from RedHat 6.2 to 7.1, and using every available
  version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
  that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
  machine.

 It looks like you may be hitting Linux's filesize limitation.  What is
 your chunksize set to?  Try setting it to something smaller than 2GB minus
 32KB (for the header).  There's no performance penalty, so why not try 1GB.

Private email seems to indicate that this isn't the issue.  Is the holding
disk space changing during the backup, perhaps?

 That may not be the issue, though, as amanda 2.4.2p2 on RedHat 7.1
 should have large file support.  It seems to compile with the right
 flags -- has anybody confirmed whether it works or not?

Responding to myself, it does.  I just created a 6GB holding disk image on
ext2 with no problem.
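
For anyone who wants to repeat the test, a rough equivalent is (path
hypothetical):

  dd if=/dev/zero of=/dumps/amanda/lfs-test bs=1024k count=6144  # try to write ~6 GB
  ls -l /dumps/amanda/lfs-test   # did it get past 2 GB?
  rm /dumps/amanda/lfs-test

If dd stops around the 2 GB mark with "File too large", the holding disk
side lacks large file support.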

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: data write: File too large

2001-03-11 Thread John R. Jackson

We just ran across this error, because we hit the 2GB filesize limit ...

However, the behavior after this error seems a bit odd.  Amanda flushed the
other disk files on the holding disk to tape ok, but then also left them on
the holdingdisk.  amverify confirms they're all on tape, but ls shows
they're all still on the holdingdisk, too.  Is this normal?  ...

By sheer (and extremely annoying :-) coincidence, one of my systems did
the same thing last night, but behaved "properly", i.e. all the images
smaller than 2 GBytes were flushed and removed from the holding disk.
So my best guess is this is a problem that's fixed with a more recent
version of Amanda (you didn't say what you're running).

Carey

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



RE: data write: File too large

2001-03-11 Thread Carey Jung

 
 By sheer (and extremely annoying :-) coincidence, one of my systems did
 the same thing last night, but behaved "properly", i.e. all the images
 smaller than 2 GBytes were flushed and removed from the holding disk.
 So my best guess is this is a problem that's fixed with a more recent
 version of Amanda (you didn't say what you're running).
 

We're running 2.4.2 with no patches.