Re: TAPE-ERROR 00-00001 [writing file: File too large]

2007-05-02 Thread Igor V. Ruzanov

On Wed, 2 May 2007, Olivier Nicole wrote:


Hi,


I run Amanda under FreeBSD-4.11, and when Amanda tries to create a dump file
larger than 4GB (exactly, 2**32-1 bytes), I can see in


May I suggest that you ask the question on a FreeBSD list :)

Olivier




Not exactly, Olivier :)
I was just trying to find a way to let Amanda (the taper phase of the
backup process) create files while avoiding the Large File Problem (LFP). For
example, I can do it using the following command under FreeBSD-4.x:


ssh MyHost dump -0 -u -a -f - /MyBigBigBigSlice > MyHost.MyBigBigBigSlice.0.dump

Amanda does the same thing, but only in the dumper phase. When the taper
begins to move chunks from the holding disk to a virtual tape, the LFP
appears again unless we split the tape into chunks, the method offered by
Jon.
Anyway, I now run Amanda under FreeBSD-5.x without the LFP. I think
it's a good solution for today :)


+----------------------------------------------+
! CANMOS ISP Network                           !
+----------------------------------------------+
! Best regards                                 !
! Igor V. Ruzanov, network operational staff   !
! e-Mail: [EMAIL PROTECTED]                    !
+----------------------------------------------+



Re: TAPE-ERROR 00-00001 [writing file: File too large]

2007-05-02 Thread Olivier Nicole
 I was just trying to find a way to let Amanda (the taper phase of the 
 backup process) create files while avoiding the Large File Problem (LFP). For 
 example, I can do it using the following command under FreeBSD-4.x:
 
 ssh MyHost dump -0 -u -a -f - /MyBigBigBigSlice > MyHost.MyBigBigBigSlice.0.dump
 
 Amanda does the same thing, but only in the dumper phase. When the taper

Got it now.

I defined small chunks on the holding disks, typically 1GB, because I
think that if the dumper hits a full disk, there is less of an issue
with a small chunk than with a big one (less data lost and less of the
dump to be restarted).

So I assumed that your virtual tapes were always MUCH bigger than your
holding disk chunks, so the holding disk would never hit the LFP.

Olivier


Re: TAPE-ERROR 00-00001 [writing file: File too large]

2007-05-01 Thread Olivier Nicole
Hi,

 I run Amanda under FreeBSD-4.11, and when Amanda tries to create a dump file 
 larger than 4GB (exactly, 2**32-1 bytes), I can see in 

May I suggest that you ask the question on a FreeBSD list :)

Olivier



Re: TAPE-ERROR 00-00001 [writing file: File too large]

2007-04-28 Thread Igor V. Ruzanov

On Fri, 27 Apr 2007, Jon LaBadie wrote:


On Fri, Apr 27, 2007 at 11:33:59AM +0400, Igor V. Ruzanov wrote:

On Thu, 26 Apr 2007, Jon LaBadie wrote:


On Thu, Apr 26, 2007 at 08:52:24PM +0400, Igor V. Ruzanov wrote:

Hello!
I run Amanda under FreeBSD-4.11, and when Amanda tries to create a dump
file larger than 4GB (exactly, 2**32-1 bytes), I can see the following
messages in log/amdump.*:


...

That is due to a filesystem constraint in FreeBSD-4.x, where the file size
is kept in a 32-bit representation, declared in sys/stat.h with an int32_t
integer type.
Is it possible to change some of the Amanda code so it can write dumps into
files larger than 4GB under FreeBSD-4.x (for example, by way of stdout)? Where
in the Amanda code are the conditions that check the maximum length of the
file being written?


Are you taping to virtual tapes on hard disk?  If so, and your version
of amanda is recent, use the tape splitting feature that writes in chunks
that can, but don't have to, span several tapes.



Yes, I'm taping to hard disk (tapedev=file:/). Actually the problem arises
when the dump itself is finished and all the chunks (in my config, chunksize =
256MB) start being sent from the holding disk to a file in the storage
directory. When this file reaches 4GB, Amanda drops errors such as
[writing file: File too large] into the log. Maybe there is some way to
solve the problem under a FreeBSD-4.x system?
I have Amanda version 2.5.1p3.



As I suggested, look into the tape splitting/spanning features of amanda.conf.
These can be used to limit the size of the on-tape files.


--
Jon H. LaBadie  [EMAIL PROTECTED]
JG Computing
4455 Province Line Road(609) 252-0159
Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Okay, I got the feature that splits dumps across virtual tapes working.
Thanks a lot to Paul and Jon for the help. But I would prefer to store the
dumps as single files, because of the ability to apply the `/sbin/restore'
command to the Amanda dumps (after cutting off the 32K dump header, of
course). So probably I will run Amanda under FreeBSD-5.x.
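
For reference, stripping that 32K header can be done on the fly so that
/sbin/restore reads straight from the image; the vtape file name below is
only a placeholder:

  dd if=/vtapes/slot1/00001.MyHost.MyBigBigBigSlice.0 bs=32k skip=1 | /sbin/restore -ivf -

The same pipe with `restore -tvf -' just lists the contents first.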



+----------------------------------------------+
! CANMOS ISP Network                           !
+----------------------------------------------+
! Best regards                                 !
! Igor V. Ruzanov, network operational staff   !
! e-Mail: [EMAIL PROTECTED]                    !
+----------------------------------------------+



Re: TAPE-ERROR 00-00001 [writing file: File too large]

2007-04-27 Thread Igor V. Ruzanov

On Thu, 26 Apr 2007, Jon LaBadie wrote:


On Thu, Apr 26, 2007 at 08:52:24PM +0400, Igor V. Ruzanov wrote:

Hello!
I run Amanda under FreeBSD-4.11, and when Amanda tries to create a dump
file larger than 4GB (exactly, 2**32-1 bytes), I can see the following
messages in log/amdump.*:

taper: writing end marker. [Sunday2 ERR kb 4194272 fm 1]
driver: state time 2560.825 free kps: 3 space: 81451306 taper: writing
idle-dumpers: 1 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle:
not-idle
driver: interface-state time 2560.825 if default: free 1 if local: free
1 if lnc0: free 1
driver: hdisk-state time 2560.825 hdisk 0: free 81451306 dumpers 0
driver: result time 2560.825 from taper: TAPE-ERROR 00-1 [writing
file: File too large]

That is due to a filesystem constraint in FreeBSD-4.x, where the file size
is kept in a 32-bit representation, declared in sys/stat.h with an int32_t
integer type.
Is it possible to change some of the Amanda code so it can write dumps into
files larger than 4GB under FreeBSD-4.x (for example, by way of stdout)? Where
in the Amanda code are the conditions that check the maximum length of the
file being written?


Are you taping to virtual tapes on hard disk?  If so, and your version
of amanda is recent, use the tape splitting feature that writes in chunks
that can, but don't have to, span several tapes.

If using a holding disk, use the chunk size feature to limit the maximum
size of a file on holding disk.

--
Jon H. LaBadie  [EMAIL PROTECTED]
JG Computing
4455 Province Line Road(609) 252-0159
Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Yes, I'm taping to hard disk (tapedev=file:/). Actually the problem arises
when the dump itself is finished and all the chunks (in my config, chunksize =
256MB) start being sent from the holding disk to a file in the storage
directory. When this file reaches 4GB, Amanda drops errors such as
[writing file: File too large] into the log. Maybe there is some way to
solve the problem under a FreeBSD-4.x system?
I have Amanda version 2.5.1p3.


+----------------------------------------------+
! CANMOS ISP Network                           !
+----------------------------------------------+
! Best regards                                 !
! Igor V. Ruzanov, network operational staff   !
! e-Mail: [EMAIL PROTECTED]                    !
+----------------------------------------------+



Re: TAPE-ERROR 00-00001 [writing file: File too large]

2007-04-27 Thread Jon LaBadie
On Fri, Apr 27, 2007 at 11:33:59AM +0400, Igor V. Ruzanov wrote:
 On Thu, 26 Apr 2007, Jon LaBadie wrote:
 
 On Thu, Apr 26, 2007 at 08:52:24PM +0400, Igor V. Ruzanov wrote:
 Hello!
 I run Amanda under FreeBSD-4.11, and when Amanda tries to create a dump
 file larger than 4GB (exactly, 2**32-1 bytes), I can see the following
 messages in log/amdump.*:
 
...
 That is due to a filesystem constraint in FreeBSD-4.x, where the file size
 is kept in a 32-bit representation, declared in sys/stat.h with an int32_t
 integer type.
 Is it possible to change some of the Amanda code so it can write dumps into
 files larger than 4GB under FreeBSD-4.x (for example, by way of stdout)? Where
 in the Amanda code are the conditions that check the maximum length of the
 file being written?
 
 Are you taping to virtual tapes on hard disk?  If so, and your version
 of amanda is recent, use the tape splitting feature that writes in chunks
 that can, but don't have to, span several tapes.
 
 
 Yes, I'm taping to hard disk (tapedev=file:/). Actually the problem arises
 when the dump itself is finished and all the chunks (in my config, chunksize =
 256MB) start being sent from the holding disk to a file in the storage
 directory. When this file reaches 4GB, Amanda drops errors such as
 [writing file: File too large] into the log. Maybe there is some way to
 solve the problem under a FreeBSD-4.x system?
 I have Amanda version 2.5.1p3.
 

As I suggested, look into the tape splitting/spanning features of amanda.conf.
These can be used to limit the size of the on-tape files.
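
In 2.5-era amanda.conf terms, the splitting setup is roughly this (a sketch
only; the dumptype name and sizes are illustrative):

  define dumptype user-tar-split {
      user-tar                           # inherit whatever dumptype is already in use
      tape_splitsize 1 Gb                # write each dump to the vtape in 1 GB parts
      split_diskbuffer "/holding/split"  # scratch area used while assembling each part
      fallback_splitsize 64 Mb           # in-memory buffer if the disk buffer is unavailable
  }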


-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


TAPE-ERROR 00-00001 [writing file: File too large]

2007-04-26 Thread Igor V. Ruzanov

Hello!
I run Amanda under FreeBSD-4.11, and when Amanda tries to create a dump
file larger than 4GB (exactly, 2**32-1 bytes), I can see the following
messages in log/amdump.*:


taper: writing end marker. [Sunday2 ERR kb 4194272 fm 1]
driver: state time 2560.825 free kps: 3 space: 81451306 taper: writing 
idle-dumpers: 1 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
driver: interface-state time 2560.825 if default: free 1 if local: free 
1 if lnc0: free 1
driver: hdisk-state time 2560.825 hdisk 0: free 81451306 dumpers 0
driver: result time 2560.825 from taper: TAPE-ERROR 00-1 [writing file: File 
too large]

That is due to a filesystem constraint in FreeBSD-4.x, where the file size
is kept in a 32-bit representation, declared in sys/stat.h with an int32_t
integer type.
Is it possible to change some of the Amanda code so it can write dumps into
files larger than 4GB under FreeBSD-4.x (for example, by way of stdout)? Where
in the Amanda code are the conditions that check the maximum length of the
file being written?


Thank you!

+----------------------------------------------+
! CANMOS ISP Network                           !
+----------------------------------------------+
! Best regards                                 !
! Igor V. Ruzanov, network operational staff   !
! e-Mail: [EMAIL PROTECTED]                    !
+----------------------------------------------+


RE: data write: File too large

2003-11-18 Thread Dana Bourgeois
I had this error when trying to put dump files larger than 2 Gig on a
non-BigFile supporting OS (RedHat 7.0).  I had chunk size set correctly but
forgot about what would happen when I used the file: device to write the
coalesced dump files to disk tapes.

Got a modern kernel and the problem went away.


Dana Bourgeois


 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Byarlay, Wayne A.
 Sent: Monday, November 17, 2003 7:54 AM
 To: [EMAIL PROTECTED]
 Subject: data write: File too large
 
 
 Hi all,
 
 I checked the archives on this problem... but they all 
 suggested to adjust the chunksize of my holdingdisk section 
 in my amanda.conf. However, I have ver. 2.4.1, and there's no 
 holdingdisk section IN my amanda.conf! Is the chunksize the 
 problem? I've got filesystems MUCH larger than this one going 
 to AMANDA... but if so, How do I adjust my chunksize?
 
 Here's from my error log:
 /-- xxx/services lev 0 FAILED [data write: File too large]
 sendbackup: start [xxx:/services level 0]
 sendbackup: info BACKUP=/bin/tar
 sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 \
 
 The example.conf for the latest version has some section 
 like: holdingdisk hd1 {
   comment blah
   directory /dumps/amanda
   use 256 Mb
   chunksize 1Gb
   }
 
 BUT My amanda.conf has NO section similar to this. I just have:
 
 diskdir /dumps/disk1
 disksize 4000 mb
 
 So I guess my questions for the gurus are: Is the chunksize 
 the problem? If so, how do I change it with this version? If 
 not, is it just some really huge file? (I am going to check 
 on this after sending this e-mail).
 
 
 Wayne Byarlay
 Purdue Libraries ITD 
 [EMAIL PROTECTED]   
 765-496-3067 
 
 



RE: data write: File too large

2003-11-18 Thread Byarlay, Wayne A.
Thanks for the responses...

Upgrading at this time would be GREAT. Unfortunately this is not going
to happen anytime soon. I have another hard drive with RH9 and 2.4.4 on
it, but it's only half-configured, and it took a while to get to that
point.

As per the current, iridium-dust, ancient setup, it's Debian, a really
old one... Woody? Again, upgrading is on my list of stuff To Do.

So, chunksize is the problem, and nobody's sure of the syntax for 2.4.1?
I'll experiment... It seemed unlikely that it was merely the SIZE of the
filesystem I was trying to back up, because I've got another which is
really huge, and I had assumed it was even huger than this one. You know
what they say about assumptions.

Again thanks all for the help.

-wab

Chunk: You guys'l never believe me. There was 2 cop cars, okay? And
they were chasing this 4-wheel deal, this real neat ORV, and there were
bullets flying all over the place. It was the most amazing thing I ever
saw! -The Goonies



RE: data write: File too large

2003-11-18 Thread Byarlay, Wayne A.
Never Mind,

I was under the impression that, for some reason, the holdingdisk {}
section of amanda.conf could not exist in my version. But I put it
there, with an appropriate Chunksize, and ... amcheck did not complain
at all. In fact it said 4096 size requested, that's plenty.
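
A quick way to confirm that the new holdingdisk block (and its chunksize) is
being picked up is the server-side check; the config name here is just a
placeholder for whatever is normally passed to amdump:

  amcheck -s DailySet1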

If you don't hear from me again on this issue, consider it solved!

wab




data write: File too large

2003-11-17 Thread Byarlay, Wayne A.
Hi all,

I checked the archives on this problem... but they all suggested to
adjust the chunksize of my holdingdisk section in my amanda.conf.
However, I have ver. 2.4.1, and there's no holdingdisk section IN my
amanda.conf! Is the chunksize the problem? I've got filesystems MUCH
larger than this one going to AMANDA... but if so, How do I adjust my
chunksize?

Here's from my error log:
/-- xxx/services lev 0 FAILED [data write: File too large]
sendbackup: start [xxx:/services level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\

The example.conf for the latest version has some section like:
holdingdisk hd1 {
comment blah
directory /dumps/amanda
use 256 Mb
chunksize 1Gb
}

BUT My amanda.conf has NO section similar to this. I just have:

diskdir /dumps/disk1
disksize 4000 mb

So I guess my questions for the gurus are: Is the chunksize the problem?
If so, how do I change it with this version? If not, is it just some
really huge file? (I am going to check on this after sending this
e-mail).


Wayne Byarlay
Purdue Libraries ITD 
[EMAIL PROTECTED]   
765-496-3067 



Re: data write: File too large

2003-11-17 Thread Gene Heskett
On Monday 17 November 2003 10:54, Byarlay, Wayne A. wrote:
Hi all,

I checked the archives on this problem... but they all suggested to
adjust the chunksize of my holdingdisk section in my amanda.conf.
However, I have ver. 2.4.1, and there's no holdingdisk section IN
 my amanda.conf! Is the chunksize the problem? I've got filesystems
 MUCH larger than this one going to AMANDA... but if so, How do I
 adjust my chunksize?


Wow, talk about ancient history, 2.4.1 has iridium dust on it!
And it may be that it's a later option.

Here's from my error log:
/-- xxx/services lev 0 FAILED [data write: File too large]
sendbackup: start [xxx:/services level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\

The example.conf for the latest version has some section like:
holdingdisk hd1 {
   comment blah
   directory /dumps/amanda
   use 256 Mb
   chunksize 1Gb
   }

BUT My amanda.conf has NO section similar to this. I just have:

diskdir /dumps/disk1
disksize 4000 mb

So I guess my questions for the gurus are: Is the chunksize the
 problem? If so, how do I change it with this version? If not, is it
 just some really huge file? (I am going to check on this after
 sending this e-mail).


Wayne Byarlay
Purdue Libraries ITD
[EMAIL PROTECTED]
765-496-3067

Run, don't walk, to the amanda.org web page, pull nearly all the way 
to the bottom and find the latest snapshots link, download and build 
2.4.4p1.  Really, a lot has been improved in the last 4 or 5 years, 
hopefully without breaking any backwards compatibility.

I use a script to simplify the build, and it has been posted to this group 
many times, as recently as a couple of weeks ago.  You might find it 
to be useful for you with suitable mods.

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.27% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: data write: File too large

2003-11-17 Thread Paul Bijnens
Byarlay, Wayne A. wrote:

Hi all,

I checked the archives on this problem... but they all suggested to
adjust the chunksize of my holdingdisk section in my amanda.conf.
However, I have ver. 2.4.1, and there's no holdingdisk section IN my
amanda.conf! Is the chunksize the problem? I've got filesystems MUCH
larger than this one going to AMANDA... but if so, How do I adjust my
chunksize?
While 2.4.1 is not exactly a dinosaur, it's from long ago...
Isn't there a possibility to upgrade?  At least 2.4.1p2 was much more
stable.  Currently we're on 2.4.4p1.
It's not the size of the filesystem that is the problem, but the
maximum size of one file on that filesystem.  For older Unix/Linux
versions this is often 2 Gbyte, even when the filesystem could
be larger.
That one file is the dump image.  Specifying chunksize instructs
amanda to chunk it up into manageable pieces on disk; while writing
to tape, all chunks are concatenated again.

The example.conf for the latest version has some section like:
holdingdisk hd1 {
comment blah
directory /dumps/amanda
use 256 Mb
chunksize 1Gb
}
BUT My amanda.conf has NO section similar to this. I just have:

diskdir /dumps/disk1
disksize 4000 mb
These last two parameters are now deprecated.  The program still
recognizes them for backward compatibility, but you'd better use
the holdingdisk format as above, if it works in your ancient
version.  The disksize parameter is now called use, but with more
configurability (like negative sizes).
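
Roughly, the two old lines map onto something like this (a sketch; the
negative use value is the newer trick mentioned above, meaning "use
everything except that much free space"):

  holdingdisk hd1 {
      directory /dumps/disk1    # the old diskdir
      use 4000 Mb               # the old disksize; -500 Mb would mean "all but 500 MB"
      chunksize 1 Gb            # keep each holding-disk file well under the 2 GB limit
  }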

So I guess my questions for the gurus are: Is the chunksize the problem?
Yes.

--
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



File too large problem

2002-08-20 Thread Stanislav Malyshev

I have recently installed amanda 2.4.2p2 and I discovered that it no 
longer supports a negative chunksize parameter, which I used to direct big 
dumps (some 10G+ filesystems) to tape directly. Now, when I try to do a 
level 0 dump of such a filesystem, it gives me an error: data write: File 
too large. 
What could be a solution to this problem? I don't really want to split 
these filesystems into parts - they have a lot of subdirs, and it's a pain 
to maintain an up-to-date list. Is there a way to make dump go directly 
to tape as before? Or does my problem lie in something else? (BTW, the 
tape should be big enough to keep the dumps - it's a 35M tape, and I did a 
successful level 0 dump on it with an older amanda version). 

TIA for any help,
-- 
Stanislav Malyshev, Zend Products Engineer   
[EMAIL PROTECTED]  http://www.zend.com/ +972-3-6139665 ext.115





Re: File too large problem

2002-08-20 Thread Jon LaBadie

On Tue, Aug 20, 2002 at 12:08:52PM +0300, Stanislav Malyshev wrote:
 I have recently installed amanda 2.4.2p2 and I discovered that it no 
 longer supports negative chunksize parameter, which I used to direct big 
 dumps (some 10G+ filesystems) to tape directly. Now, when I try to du 
 level 0 dump of such a filesystem, it gives me an error: data write: File 
 too large. 
 What could be a solution of this problem? I don't really want to split 
 these filesystems in parts - they have a lot of subdirs, and it's a pain 
 to maintain the up-to-date list. Is there a way to make dump go directly 
 on tape as before? Or does my problem lie in something else? (BTW, the 
 tape should be enough to keep the dumps - it's 35M tape, and I did 
 successfull level 0 dump on it with older amanda version). 

I do not recall a meaning for a negative chunksize.
Are you sure you do not mean a negative value for use?

File too large sounds like the max file size for the file system.
Generally a chunksize of 1GB would prevent that.  An unspecified
chunksize could allow chunks to grow too large.  Perhaps the
negative chunksize is interpreted as illegal and thus unspecified.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Re: [data write: File too large]

2002-06-23 Thread Paul Bijnens



Pedro Aguayo wrote:
 
 Ok, I didn't, but I think I do now.
 Basically, when amanda writes to the holding disk, it writes it to a flat file
 on the file system, and if that flat file is larger than 2gb then you might
 encounter a problem if your filesystem has a limitation where it can only
 support files < 2gb.
 But if you write directly to tape you will avoid this problem because you are
 bypassing the filesystem.

And that's why the parameter chunksize for the holding disk can be set
to e.g. 1Gbyte. 

And, when everything else fails, read the manual pages.  :-)

 
 Right Adrian?
 
 Hope I got it right, but this makes sense.

-- 
Paul Bijnens, Lant Tel  +32 16 40.51.40
Interleuvenlaan 15 H, B-3001 Leuven, BELGIUM   Fax  +32 16 40.49.61
http://www.lant.com/   email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



Re: file too large

2002-03-07 Thread Mike Cathey

Joshua,

The annoying thing about using this hack is that if you have to use 
amflush (something goes wrong), then it (amflush) gives you errors about 
removing cruft files for all of the files that split creates.  An 
ingenious hack nonetheless... :)

If you have the disk space, you could switch the holding partition to 
ReiserFS or maybe ext3 (it supports files larger than 2GB right?)

I'm assuming his amanda server is running on linux...

Cheers,

Mike

-- 

Mike Cathey - http://www.mikecathey.com/
Network Administrator
RTC Internet - http://www.catt.com/

Joshua Baker-LePain wrote:

 On Wed, 6 Mar 2002 at 4:32pm, Charlie Chrisman wrote
 
 
/-- countach.i /dev/sda3 lev 0 FAILED [data write: File too large]

I get this for two of my clients?  what does this mean?


 FAQ.  Set your chunksize to something less then 2GB-32Kb.  1GB is fine -- 
 there's no performance penalty.
 
 





Re: file too large

2002-03-07 Thread Joshua Baker-LePain

On Thu, 7 Mar 2002 at 12:30pm, Mike Cathey wrote

 The annyoing thing about using this hack is that if you have to use 
 amflush (something goes wrong), then it (amflush) gives you errors about 
 removing cruft files for all of the files that split creates.  An 
 ingenious hack nonetheless... :)
 
 If you have the disk space, you could switch the holding partition to 
 ReiserFS or maybe ext3 (it supports files larger than 2GB right?)

ext2 supports >2GB files.  The problem lies in glibc and the kernel being 
compiled with the proper support.  RH7.0 wasn't compiled this way, 7.1+ 
were.  So another FS won't help.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: file too large

2002-03-07 Thread Frank Smith

I've never had amflush complain about chunks being cruft files.  It writes
them to tape and removes them.  If amdump dies you might end up with cruft
files (both chunks and full files), but an amcleanup followed by an amflush
generally clears things out.

Frank

--On Thursday, March 07, 2002 12:30:54 -0500 Mike Cathey [EMAIL PROTECTED] wrote:

 Joshua,

 The annyoing thing about using this hack is that if you have to use amflush 
(something goes wrong), then it (amflush) gives you errors about removing cruft files 
for all of the files that split creates.  An ingenious hack nonetheless... :)

 If you have the disk space, you could switch the holding partition to ReiserFS or 
maybe ext3 (it supports files larger than 2GB right?)

 I'm assuming his amanda server is running on linux...

 Cheers,

 Mike

 --

 Mike Cathey - http://www.mikecathey.com/
 Network Administrator
 RTC Internet - http://www.catt.com/

 Joshua Baker-LePain wrote:

 On Wed, 6 Mar 2002 at 4:32pm, Charlie Chrisman wrote


 /-- countach.i /dev/sda3 lev 0 FAILED [data write: File too large]

 I get this for two of my clients?  what does this mean?


 FAQ.  Set your chunksize to something less then 2GB-32Kb.  1GB is fine --
 there's no performance penalty.






--
Frank Smith[EMAIL PROTECTED]
Systems Administrator Voice: 512-374-4673
Hoover's Online Fax: 512-374-4501



file too large

2002-03-06 Thread Charlie Chrisman

/-- countach.i /dev/sda3 lev 0 FAILED [data write: File too large]

I get this for two of my clients?  what does this mean?

-- 
Charlie Chrisman
Business Development Director
(859) 514-7600
(859) 514-7601 Fax

http://www.intelliwire.net/
"The Intelligent Way to Work"


Sent using the Entourage X Test Drive.




Re: file too large

2002-03-06 Thread Joshua Baker-LePain

On Wed, 6 Mar 2002 at 4:32pm, Charlie Chrisman wrote

 /-- countach.i /dev/sda3 lev 0 FAILED [data write: File too large]
 
 I get this for two of my clients?  what does this mean?
 
FAQ.  Set your chunksize to something less than 2GB-32Kb.  1GB is fine -- 
there's no performance penalty.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: file too large

2002-03-06 Thread Frank Smith


--On Wednesday, March 06, 2002 16:32:01 -0500 Charlie Chrisman 
[EMAIL PROTECTED] wrote:

 /-- countach.i /dev/sda3 lev 0 FAILED [data write: File too large]

 I get this for two of my clients?  what does this mean?

Probably that your Amanda server has a file size limit smaller than the
size of the filesystem you are trying to back up on the client.
   Try setting chunksize in your amanda.conf to something your server can
handle ( try 1 GB ).

Frank

--
Frank Smith[EMAIL PROTECTED]
Systems Administrator Voice: 512-374-4673
Hoover's Online Fax: 512-374-4501



file too large, linux 2.4.2

2002-01-28 Thread Matthew Boeckman

Hi there list.
  I'm running amanda 2.4.2 on a RH7.1 box with 2.4.2-2. I recently read 
in the archives about making chunksizes 1Gb instead of 2 due to amanda 
tacking on stuff at the end, making those files too large. Too late, I'm 
afraid, as I'm trying to restore a set of files from a level0 of a sun 
box. I was able to get the archive off of the tape medium, and its size 
is 2468577280. restore on the linux box in question fails with File too 
large (version 0.4b21). I was also able to get the file to the sun box 
it was backed up from, but ufsrestore complains that Volume is not in 
dump format, which I assume is because it is a file made by dump, not 
ufsdump.

So the question is: WHAT CAN I DO? Is there any way to get this 
directory extricated from this honking big 2 gig file? Second, I 
_thought_ that the 2.4 kernel was supposed to do away with the 2gb file 
size limitations. Am I misinformed?

-- 
Matthew Boeckman(816) 777-2160
Manager - Systems Integration   Saepio Technologies
== 
==
Public Notice as Required by Law: Any Use of This Product, in
Any Manner Whatsoever, Will Increase the Amount of Disorder in the Universe.
Although No Liability Is Implied Herein, the Consumer Is Warned That This
Process Will Ultimately Lead to the Heat Death of the Universe.




Re: file too large, linux 2.4.2

2002-01-28 Thread Joshua Baker-LePain

On Mon, 28 Jan 2002 at 10:38am, Matthew Boeckman wrote

   I'm running amanda 2.4.2 on a RH7.1 box with 2.4.2-2. I recently read 
 in the archives about making chunksizes 1Gb instead of 2 due to amanda 
 tacking on stuff at the end, making those file too large. Too late, I'm 
 afraid, as I'm trying to restore a set of files from a level0 of a sun 
 box. I was able to get the archive off of the tape medium, and it's size 
 is:2468577280. restore on the linux box in question fails with File too 
 large (version 0.4b21). I was also able to get the file to the sun box 

Newer versions of dump/restore should be compiled with large file support 
-- try upgrading.  You could also compile the newest version yourself, 
adding in the appropriate flags.  The kernel/glibc combo on RH7.1 *can* 
handle large files -- you just have to make sure that the app can.
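
Assuming the usual autoconf build of the dump/restore package, the
appropriate flags are the standard large-file ones, roughly:

  CFLAGS="-O2 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure
  make && make install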

 it was backed up from, but ufsrestore complains that Volume is not in 
 dump format, which I assume is because it is a file made by dump, not 
 ufsdump.

No.  For every filesystem, AMANDA runs the appropriate backup utility.  
Also, dumps are FS specific -- ext2 dump won't be able to dump a UFS 
filesystem, and vice versa for ufsdump.  Whether or not a 
particular restore can read another dump's format is often hit-and-miss.

So, if that file was indeed from a Sun filesystem using dump, it should 
indeed be a ufsdump archive.  Perhaps it's getting truncated somewhere?

 So the question is: WHAT CAN I DO? Is there any way to get this 
 directory extricated from this honking big 2 gig file? Second, I 

Your options are:

1) Use amrecover from the Sun box.  This will automatically do everything 
in pipes and avoid the whole large file issue.

2) Upgrade dump/restore to a version compiled with large file support.

3) Pull the file off the tape with amrestore or dd, and pipe the output 
straight to restore, again avoiding a 2GB disk file.
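
Option 3 sketched out (device, host and disk names are placeholders;
amrestore -p strips the 32 KB Amanda header and writes the raw image to
stdout):

  amrestore -p /dev/nst0 sunhost /dev/sda1 | restore -ivf -

  # or, for an image already sitting on disk, skip the header by hand:
  dd if=sunhost._dev_sda1.0 bs=32k skip=1 | restore -ivf -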

 _thought_ that the 2.4 kernel was supposed to do away with the 2gb file 
 size limitations. Am I misinformed?
 
As mentioned above, it's a kernel/glibc thing, *and* the app must be 
configured appropriately.  IIRC, for whatever reason, the dump/restore 
shipped with RH7.1 wasn't.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: [data write: File too large]

2002-01-17 Thread John R. Jackson

This makes sense because when I ran my initial Amanda dump on that host, I
had no holding-disk defined, and it did backup the filesystem at level 0,
and that filesystem has over 24GB of data on it, albeit, they are all small
.c files and the such.  I am left wondering then how chunksize fits into the
equation.  It was my understanding that this is what the chunksize was for.

If you didn't have a holding disk defined, Amanda went straight to tape.

When you do have a holding disk defined, Amanda will try (given enough
space, etc) to write the whole image into it, then when that is done,
write the image to tape.

Without chunksize, the holding disk image will be one monolithic file.
With chunksize, the image will be broken up into pieces when put into
the holding disk and then recombined when written to tape.  Chunksize was
put in to support systems with holding disks that only handled individual
files smaller than 2 GBytes.  You don't have that problem on AIX 4+.

Another possibility for the File too large error is your Amanda user
running into ulimit.  If you do this:

  su - user -c "ulimit -a"

does it have a file(blocks) limit?  If so, you can use smitty to
change that.
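
For instance (the user name is a placeholder; on AIX the per-user limit
lives in /etc/security/limits, fsize is counted in 512-byte blocks, and -1
means unlimited):

  su - amanda -c "ulimit -a"    # look at the file(blocks) line
  lsuser -a fsize amanda        # show the current setting
  chuser fsize=-1 amanda        # lift the limit (or edit /etc/security/limits)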

-edwin 

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: [data write: File too large]

2002-01-15 Thread Adrian Reyer

On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote:
 Could be that your holding disk space is to small, or you trying to backup a
 file that is larger than 2 gigs?

Perhaps I misunderstand something here, but...
The holding disk, afaik, holds the entire dump of the filesystem you are
trying to back up as a single file, so that it can get onto tape faster
once completed.
So if your partition has more than 2GB in use, that file might be
bigger than 2GB and you run into a filesystem limit.
Had that problem with an older Linux installation; turning off the
holding-disk and dumping directly to tape worked fine in that case.

Regards,
Adrian Reyer
-- 
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting  Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread Pedro Aguayo

That's a great Idea, I'm going to save this one.

But, I think Edwin doesn't have this problem, meaning he says he doesn't
have a file larger than 2gb.

Could be hidden, or maybe you mounted over a directory that had a huge file,
just digging here.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Adrian Reyer
Sent: Tuesday, January 15, 2002 4:03 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote:
 Could be that your holding disk space is to small, or you trying to backup
a
 file that is larger than 2 gigs?

Perhaps I misunderstand something here, but...
The holding disk afaik holds the entire dump of the filesystem you try
and backup to make it one last file that is able to get faster onto
tape once completed.
So if your partition has more than 2GB in use, that file might be
bigger than 2GB and you run into a filesystem limit.
Had that problem with an older Linux installation, turning off
holding-disk and dumping directly to tape works fine in that case.

Regards,
Adrian Reyer
--
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting  Support   http://lihas.de/




Re: [data write: File too large]

2002-01-15 Thread Adrian Reyer

On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file, same with tar. The problem only occurs
when the holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesystem, you run into whatever limits you have, mostly 2GB limits on a
single file.
No holding-disk - no big file - no problem. (well, tape might have
to stop more often because of interruption in data-flow)

Regards,
Adrian Reyer
-- 
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting  Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread Pedro Aguayo

Ahh! I see said the blind man.

Pedro

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file, same with tar. The problem only occurs
as holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesytem, you run in whatever limits you have, mostly 2GB-limits on a
single file.
No holding-disk - no big file - no problem. (well, tape might have
to stop more often because of interruption in data-flow)

Regards,
Adrian Reyer
-- 
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting  Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread KEVIN ZEMBOWER

I see! said the blind carpenter, who picked up his hammer and saw.
-My ninth grade science teacher, Brother Paul, a terrible
punner.

-Kevin

 Pedro Aguayo [EMAIL PROTECTED] 01/15/02 09:45AM 
Ahh! I see said the blind man.

Pedro

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED] 
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he
doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file, same with tar. The problem only occurs
as holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesytem, you run in whatever limits you have, mostly 2GB-limits on a
single file.
No holding-disk - no big file - no problem. (well, tape might have
to stop more often because of interruption in data-flow)

Regards,
Adrian Reyer
-- 
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED] 
Linux, Netzwerke, Consulting  Support   http://lihas.de/



Re: [data write: File too large]

2002-01-15 Thread Hans Kinwel


On 15-Jan-2002 Adrian Reyer wrote:

 No holding-disk - no big file - no problem. (well, tape might have
 to stop more often because of interruption in data-flow)


Why not define a chunksize of 500 Mb on your holdingdisk?  That's what
I did.  Backups go faster and there's less wear and tear on my tapestreamer.

-- 
  |Hans Kinwel
  | [EMAIL PROTECTED]



RE: [data write: File too large]

2002-01-15 Thread Pedro Aguayo

Ok, I didn't, but I think I do now.
Basically, when amanda writes to the holding disk, it writes it to a flat file
on the file system, and if that flat file is larger than 2gb then you might
encounter a problem if your filesystem has a limitation where it can only
support files < 2gb.
But if you write directly to tape you will avoid this problem because you are
bypassing the filesystem.


Right Adrian?

Hope I got it right, but this makes sense.

Pedro

-Original Message-
From: Wayne Richards [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 10:12 AM
To: Adrian Reyer
Cc: Pedro Aguayo
Subject: Re: [data write: File too large]


I don't understand the problem.  When amanda encounters a filesystem larger
than the holding disk, she AUTOMATICALLY resorts to direct tape write.
Quoting from the amanda.conf file:
# If no holding disks are specified then all dumps will be written directly
# to tape.  If a dump is too big to fit on the holding disk than it will be
# written directly to tape.  If more than one holding disk is specified then
# they will all be used round-robin.

We routinely back up filesystems larger than our holding disks and many files > 4GB.


 On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
  But, I think Edwin doesn't have this problem, meaning he says he doesn't
  have a file larger than 2gb.

 I had none, either, but the filesystem was dumped into a file as a
 whole, leading to a huge file, same with tar. The problem only occurs
 as holding-disk is used. Continuously writing a stream of unlimited
 size to a tape is no problem, but as soon as you try to do this onto a
 filesytem, you run in whatever limits you have, mostly 2GB-limits on a
 single file.
 No holding-disk - no big file - no problem. (well, tape might have
 to stop more often because of interruption in data-flow)

 Regards,
   Adrian Reyer
 --
 Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
 LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
 Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
 Linux, Netzwerke, Consulting  Support   http://lihas.de/



---
Wayne Richards  e-mail: [EMAIL PROTECTED]





Re: [data write: File too large]

2002-01-15 Thread Paul Bijnens



Pedro Aguayo wrote:
 
 Ok, I didn't, but I think I do now.
 Basically, when amanda writes to the holding disk, it writes it to a flat file
 on the file system, and if that flat file is larger than 2gb then you might
 encounter a problem if your filesystem has a limitation where it can only
 support files < 2gb.
 But if you write directly to tape you will avoid this problem because you are
 bypassing the filesystem.

And that's why the parameter chunksize for the holding disk can be set
to e.g. 1Gbyte. 

And, when everything else fails, read the manual pages.  :-)

 
 Right Adrian?
 
 Hope I got it right, but this makes sense.

-- 
Paul Bijnens, Lant Tel  +32 16 40.51.40
Interleuvenlaan 15 H, B-3001 Leuven, BELGIUM   Fax  +32 16 40.49.61
http://www.lant.com/   email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



RE: [data write: File too large]

2002-01-15 Thread Rivera, Edwin

This makes sense because when I ran my initial Amanda dump on that host, I
had no holding-disk defined, and it did back up the filesystem at level 0,
and that filesystem has over 24GB of data on it, albeit they are all small
.c files and such.  I am left wondering then how chunksize fits into the
equation.  It was my understanding that this is what the chunksize was for.

Well, I just started another amdump right now without the holding-disk in
the amanda.conf file.  Let's see what happens.

-edwin 

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file, same with tar. The problem only occurs
as holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesytem, you run in whatever limits you have, mostly 2GB-limits on a
single file.
No holding-disk - no big file - no problem. (well, tape might have
to stop more often because of interruption in data-flow)

Regards,
Adrian Reyer
-- 
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting  Support   http://lihas.de/



RE: [data write: File too large]

2002-01-15 Thread Rivera, Edwin

is there a way, in the amanda.conf file, to specify *NOT* to use the
holding-disk for a particular filesystem?

for example, if i use amanda to backup 8 filesystems on one box and i want 7
to use the holding-disk, but one not to.. is that possible?

just curious..

-edwin

-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]


On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
 But, I think Edwin doesn't have this problem, meaning he says he doesn't
 have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file, same with tar. The problem only occurs
as holding-disk is used. Continuously writing a stream of unlimited
size to a tape is no problem, but as soon as you try to do this onto a
filesytem, you run in whatever limits you have, mostly 2GB-limits on a
single file.
No holding-disk - no big file - no problem. (well, tape might have
to stop more often because of interruption in data-flow)

Regards,
Adrian Reyer
-- 
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting  Support   http://lihas.de/



Re: [data write: File too large]

2002-01-15 Thread Gene Heskett

On Tuesday 15 January 2002 09:56 am, Rivera, Edwin wrote:
is there a way, in the amanda.conf file, to specify *NOT* to use
 the holding-disk for a particular filesystem?

for example, if i use amanda to backup 8 filesystems on one box
 and i want 7 to use the holding-disk, but one not to.. is that
 possible?

just curious..

Down toward the end of amanda.conf is just such a 'dumptype', 
edit to suit your circumstances.
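
Something like this, as a sketch (the base dumptype name is illustrative;
the disklist line reuses the host and filesystem from Edwin's report):

  define dumptype comp-user-direct {
      comp-user            # inherit the options the other entries use
      holdingdisk no       # this entry skips the holding disk and goes straight to tape
  }

  # disklist:
  aix421.us.lhsgroup.com  /home4/bscs_fam  comp-user-direct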

[...]

-- 
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
98.3+% setiathome rank, not too shabby for a hillbilly



[data write: File too large]

2002-01-14 Thread Rivera, Edwin

hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).

here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]
   
here is the entry in my amanda.conf file
chunksize 1536 mbytes


can you suggest anything?  thanks in advance.

Highest Regards,

Edwin R. Rivera
UNIX Administrator
Tel:  +1 305 894 4609
Fax:  +1 305 894 4799[EMAIL PROTECTED]



Re: [data write: File too large]

2002-01-14 Thread Adrian Reyer

On Mon, Jan 14, 2002 at 10:03:57AM -0500, Rivera, Edwin wrote:
 my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
 filesystems at level 0 anymore, it did it only once (the first amdump run).
 here is the error:
 aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
 large]
 here is the entry in my amanda.conf file
 chunksize 1536 mbytes
 can you suggest anything?  thanks in advance.

2GB limit on files perhaps? Not sure about AIX 4.2.1 support for files
bigger than 2GB; I quit AIX at version 3.2.5. It might be the
holding-disk file.

Regards,
Adrian Reyer
-- 
Adrian Reyer Fon:  +49 (7 11) 2 85 19 05
LiHAS - Servicebuero SuttgartFax:  +49 (7 11) 5 78 06 92
Adrian Reyer  Joerg Henner GbR  Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting  Support   http://lihas.de/



RE: [data write: File too large]

2002-01-14 Thread Pedro Aguayo

Could be that your holding disk space is too small, or you're trying to back
up a file that is larger than 2 gigs?

Thats all I can come up with.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Rivera, Edwin
Sent: Monday, January 14, 2002 10:04 AM
To: [EMAIL PROTECTED]
Subject: [data write: File too large]


hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).

here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]

here is the entry in my amanda.conf file
chunksize 1536 mbytes


can you suggest anything?  thanks in advance.

Highest Regards,

Edwin R. Rivera
UNIX Administrator
Tel:  +1 305 894 4609
Fax:  +1 305 894 4799[EMAIL PROTECTED]




RE: [data write: File too large]

2002-01-14 Thread Pedro Aguayo

You sure you don't have a huge file in that filesystem?

Pedro

-Original Message-
From: Rivera, Edwin [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:55 AM
To: Pedro Aguayo; Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


my holding disk is 4GB and i have it set to use -100Mbytes

i'm confused on this one..


-Original Message-
From: Pedro Aguayo [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:53 AM
To: Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


Could be that your holding disk space is to small, or you trying to backup a
file that is larger than 2 gigs?

Thats all I can come up with.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Rivera, Edwin
Sent: Monday, January 14, 2002 10:04 AM
To: [EMAIL PROTECTED]
Subject: [data write: File too large]


hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).

here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]

here is the entry in my amanda.conf file
chunksize 1536 mbytes


can you suggest anything?  thanks in advance.

Highest Regards,

Edwin R. Rivera
UNIX Administrator
Tel:  +1 305 894 4609
Fax:  +1 305 894 4799[EMAIL PROTECTED]




RE: [data write: File too large]

2002-01-14 Thread Rivera, Edwin

yeah, it's a CVMC filesystem with a ton of small flat files, no large
individual files.. only .c and .h files and things like that.

-Original Message-
From: Pedro Aguayo [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 11:00 AM
To: Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


You sure you don't have a huge file in that filesystem?

Pedro

-Original Message-
From: Rivera, Edwin [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:55 AM
To: Pedro Aguayo; Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


my holding disk is 4GB and i have it set to use -100Mbytes

i'm confused on this one..


-Original Message-
From: Pedro Aguayo [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:53 AM
To: Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]


Could be that your holding disk space is to small, or you trying to backup a
file that is larger than 2 gigs?

Thats all I can come up with.

Pedro

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Rivera, Edwin
Sent: Monday, January 14, 2002 10:04 AM
To: [EMAIL PROTECTED]
Subject: [data write: File too large]


hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).

here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]

here is the entry in my amanda.conf file
chunksize 1536 mbytes


can you suggest anything?  thanks in advance.

Highest Regards,

Edwin R. Rivera
UNIX Administrator
Tel:  +1 305 894 4609
Fax:  +1 305 894 4799[EMAIL PROTECTED]



File too large?

2001-10-10 Thread Dave Brooks

What does file too large mean?  Example (from the email report):

FAILURE AND STRANGE DUMP SUMMARY:
  localhost  /hermes lev 0 FAILED [data write: File too large]

I can't figure out what's up with that... my spooling disk is 40GB (and it's
empty), and my tapes are 40GB compressed.  Any ideas as to what that means?

-Dave

-- 
david a. brooks
* systems administrator
* stayonline.net
* voice: .. 770/933-0600 x217
* email: .. [EMAIL PROTECTED]
* :wq!



Re: File too large?

2001-10-10 Thread Maarten Vink

Maybe you are using an OS (linux for example) that has a filesize-limit of 2
Gb? There's an option to limit filesizes in amanda.conf.

Maarten Vink

- Original Message -
From: Dave Brooks [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 10, 2001 6:47 PM
Subject: File too large?


 What does file to large mean?  Example (from the email report:)

 FAILURE AND STRANGE DUMP SUMMARY:
   localhost  /hermes lev 0 FAILED [data write: File too large]

 I cant figure out whats up with that... my spooling disk is 40GB (and its
 empty), and my tapes are 40GB compressed.  Any ideas as to what that
means?

 -Dave

 --
 david a. brooks
 * systems administrator
 * stayonline.net
 * voice: .. 770/933-0600 x217
 * email: .. [EMAIL PROTECTED]
 * :wq!





Re: File too large?

2001-10-10 Thread Dave Brooks

I've got my chunksize set to 2Gb as it is, right now.  I'm using linux with
a 2.2.14-5 kernel.  Would it cause problems if the chunksize was right at 2Gb?
Perhaps I should lower it.

-Dave

Maarten Vink writes:
Maybe you are using an OS (linux for example) that has a filesize-limit of 2
Gb? There's an option to limit filesizes in amanda.conf.

Maarten Vink

- Original Message -
From: Dave Brooks [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 10, 2001 6:47 PM
Subject: File too large?


 What does file to large mean?  Example (from the email report:)

 FAILURE AND STRANGE DUMP SUMMARY:
   localhost  /hermes lev 0 FAILED [data write: File too large]

 I cant figure out whats up with that... my spooling disk is 40GB (and its
 empty), and my tapes are 40GB compressed.  Any ideas as to what that
means?

 -Dave

 --
 david a. brooks
 * systems administrator
 * stayonline.net
 * voice: .. 770/933-0600 x217
 * email: .. [EMAIL PROTECTED]
 * :wq!




-- 
david a. brooks
* systems administrator
* stayonline.net
* voice: .. 770/933-0600 x217
* email: .. [EMAIL PROTECTED]
* :wq!



Re: File too large?

2001-10-10 Thread Joshua Baker-LePain

On Wed, 10 Oct 2001 at 3:20pm, Dave Brooks wrote

 I've got my chunksize set to 2Gb as it is, right now.  I'm using linux with
 a 2.2.14-5 kernel.  Would it cause problems if the chunksize was right at 2Gb?
 Perhaps I should lower it.

Yep, that's an issue.  Amanda adds on 32KB for the header.  There's
no performance penalty for smaller chunksizes, which is why 1GB is the
default.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: File too large?

2001-10-10 Thread Bernhard R. Erdmann

 I've got my chunksize set to 2Gb as it is, right now.  I'm using linux with
 a 2.2.14-5 kernel.  Would it cause problems if the chunksize was right at 2Gb?
 Perhaps I should lower it.

Oh yes, set it to something below 2GB-32KB: a good value is 2000 MB



Re: data write: File too large ???

2001-08-14 Thread Joshua Baker-LePain

On Tue, 14 Aug 2001 at 9:05am, Katrinka Dall wrote

 /-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
 sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
 sendbackup: info BACKUP=/bin/tar
 sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 \

   Now, I know that this isn't an issue of not having enough space on the
 tape or holding disk, both are in excess of 35G.  Some of the things I
 have tried are, upgrading tar on the server that is failing, upgrading
 the backup server from RedHat 6.2 to 7.1, and using every available
 version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
 that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
 machine.

It looks like you may be hitting Linux's filesize limitation.  What is
your chunksize set to?  Try setting it to something smaller than 2GB-32KB
for header.  There's no performance penalty, so why not try 1GB.

That may not be the issue, though, as amanda 2.4.2p2 on RedHat 7.1
should have large file support.  It seems to compile with the right
flags -- has anybody confirmed whether it works or not?

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: data write: File too large ???

2001-08-14 Thread Ragnar Kjørstad

On Tue, Aug 14, 2001 at 09:05:25AM -0500, Katrinka Dall wrote:
  FAILED AND STRANGE DUMP DETAILS:
  
 /-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]

Does it fail after backing up 2 Gigabyte?

It sounds like you don't have Large File Support (LFS).

 sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
 sendbackup: info BACKUP=/bin/tar
 sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 \ 
 
   Now, I know that this isn't an issue of not having enough space on the
 tape or holding disk, both are in excess of 35G.  Some of the things I
 have tried are, upgrading tar on the server that is failing, upgrading
 the backup server from RedHat 6.2 to 7.1, and using every available
 version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
 that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
 machine.

Redhat 7.1 should include a kernel, libraries and utilities with LFS.
Did you install some of the utilities manually, or were they all from
RedHat 7.1 (e.g. amanda)?

If you built any of them yourself, you need to recompile them on your
RH71 system to make them support files larger than 2GB.
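
One way to do that (the getconf call just asks glibc which flags to use;
the configure options are only examples and should match however amanda
was originally built):

  # ask glibc which compiler flags enable large file support
  getconf LFS_CFLAGS                     # typically -D_FILE_OFFSET_BITS=64
  # rebuild amanda with those flags
  CFLAGS="`getconf LFS_CFLAGS`" ./configure --with-user=amanda
  make && make install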


-- 
Ragnar Kjorstad



RE: data write: File too large ???

2001-08-14 Thread Anthony Valentine

Hello Katrinka,

Have you looked at your chunksize setting in your holding disk config?  I
believe that Linux has a 2GB limit on file sizes that may be causing your
problem.  Try setting this to just below 2GB (like 1999 MB) and see if that
helps.

Hope this was helpful!

Anthony Valentine


-Original Message-
From: Katrinka Dall [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 14, 2001 6:05 AM
To: [EMAIL PROTECTED]
Subject: data write: File too large ???


Hello,


I must say that I'm completely stumped.  Having tried everything I can
possibly think of, I've decided to post this here in hopes that one of
you can help me out.  Recently I had to migrate our backup server from a
Solaris 2.5.1 machine to a RedHat Linux 6.2 machine.  In the process of doing
this, I found that I was unable to get one of the Linux machines that we
had been backing up to work properly.  I keep getting this error
whenever I try to do a dump on this machine/filesystem:


 FAILED AND STRANGE DUMP DETAILS:
 
/-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\ 

Now, I know that this isn't an issue of not having enough space on
the
tape or holding disk, both are in excess of 35G.  Some of the things I
have tried are, upgrading tar on the server that is failing, upgrading
the backup server from RedHat 6.2 to 7.1, and using every available
version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
machine.

If you have any suggestions I'd greatly appreciate them.  Oh, and one
more thing, when we were running these backups from the Solaris machine,
they did not produce this error, and I had about a quarter of the space
on tape and holding disk that I have now.

Thanks in advance,

Katrinka



Re: data write: File too large ???

2001-08-14 Thread Christoph Sold

Katrinka Dall wrote:

Hello,


   I must say that I'm completely stumped, trying everything I can
possibly think of, I've decided to post this here in hopes that one of
you can help me out.  Recently I had to migrate our backup server from a
Solaris 2.5.1 machine to a Linux 6.2 machine.  In the process of doing
this, I found that I was unable to get one of the Linux machines that we
had been backing up to work properly.  I keep getting this error
whenever I try to do a dump on this machine/filesystem:


 FAILED AND STRANGE DUMP DETAILS:
 
/-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\ 

   Now, I know that this isn't an issue of not having enough space on the
tape or holding disk, both are in excess of 35G.  Some of the things I
have tried are, upgrading tar on the server that is failing, upgrading
the backup server from RedHat 6.2 to 7.1, and using every available
version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
machine.

If you have any suggestions I'd greatly appreciate them.  Oh, and one
more thing, when we were running these backups from the Solaris machine,
they did not produce this error, and I had about a quarter of the space
on tape and holding disk that I have now.


It seems your Linux version can only handle files up to 2GB in size.
Either set chunksize smaller than that, or install an OS which handles
bigger files.

HTH
-Christoph Sold





Re: data write: File too large ???

2001-08-14 Thread Joshua Baker-LePain

On Tue, 14 Aug 2001 at 11:30am, Joshua Baker-LePain wrote

 On Tue, 14 Aug 2001 at 9:05am, Katrinka Dall wrote

  /-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
  sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
  sendbackup: info BACKUP=/bin/tar
  sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end
  \
 
  Now, I know that this isn't an issue of not having enough space on the
  tape or holding disk, both are in excess of 35G.  Some of the things I
  have tried are, upgrading tar on the server that is failing, upgrading
  the backup server from RedHat 6.2 to 7.1, and using every available
  version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
  that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
  machine.

 It looks like you may be hitting Linux's filesize limitation.  What is
 your chunksize set to?  Try setting it to something smaller than 2GB-32KB
 for header.  There's no performance penalty, so why not try 1GB.

Private email seems to indicate that this isn't the issue.  Is the holding
disk space changing during the backup, perhaps?

 That may not be the issue, though, as amanda 2.4.2p2 on RedHat 7.1
 should have large file support.  It seems to compile with the right
 flags -- has anybody confirmed whether it works or not?

Responding to myself, it does.  I just created a 6GB holding disk image on
ext2 with no problem.
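
A quick way to run the same check on your own holding disk (the path is
illustrative, and the test file can be removed straight away):

  # try to write ~3 GB of zeros; "File too large" at 2 GB means no LFS
  dd if=/dev/zero of=/amanda/holding/lfs-test bs=1024k count=3000
  ls -l /amanda/holding/lfs-test
  rm /amanda/holding/lfs-test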

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: [amrestore] file too large

2001-05-04 Thread Pierre Volcke


 
 On May  3, 2001, Pierre Volcke [EMAIL PROTECTED] wrote:
 
  amrestore:   4: restoring planeto._home_zone10.20010502.0
  amrestore: write error: File too large


Alexandre Oliva wrote:
 
 You seem to be bumping into file size limits of the host filesystem.
 You may want to pipe the output of amrestore -p directly to tar, or to
 bsplit.
 

Bort Paul wrote:
 Perhaps your operating system cannot handle files over 2Gb? Most Linuxes
 can't. I don't know about *BSD or Solaris/SunOS.


That seems to be the problem: the host is a Linux RH7 box with kernel
2.2.16.
 
 thanks,

Pierre.



[amrestore] file too large

2001-05-03 Thread Pierre Volcke


 hello,

 I get the following message when restoring one of my
 full dumps with amrestore:

[tape-serveur]# /usr/local/amanda/sbin/amrestore /dev/nst0 planeto
amrestore:   0: skipping start of tape: date 20010502 label TAPE4
amrestore:   1: skipping jupiter._home_zone1.20010502.2
amrestore:   2: skipping jupiter._home_zone2.20010502.1
amrestore:   3: skipping jupiter._home_zone3.20010502.0
amrestore:   4: restoring planeto._home_zone10.20010502.0
amrestore: write error: File too large

The target disk is big enough to handle the whole taped archive.

According to the mail reports, the amount of data saved
on this tape is 17.8Gb. I use 20Gb/40Gb DLT Tapes.

Could it be a problem with my tapetype config or with
the tape hardware itself?

Of course, a tar xvf on this archive will fail at the
end of file:

[tape-server]#cat planeto._home_zone10.20010502.0 | tar xvf - 
...
tar: 511 garbage bytes ignored at end of archive
tar: Unexpected EOF in archive
tar: Error exit delayed from previous errors
...

any advice? 
thanks!

Pierre.



Re: [amrestore] file too large

2001-05-03 Thread Alexandre Oliva

On May  3, 2001, Pierre Volcke [EMAIL PROTECTED] wrote:

 amrestore:   4: restoring planeto._home_zone10.20010502.0
 amrestore: write error: File too large

You seem to be bumping into file size limits of the host filesystem.
You may want to pipe the output of amrestore -p directly to tar, or to
bsplit.
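
For instance, with the device, host and disk names from your output
(rewind first if the tape has moved, and pipe to restore instead of tar
if the image is a dump rather than a gnutar archive):

  mt -f /dev/nst0 rewind
  amrestore -p /dev/nst0 planeto _home_zone10 | tar xpvf -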

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist*Please* write to mailing lists, not to me



RE: [amrestore] file too large

2001-05-03 Thread Bort, Paul

Perhaps your operating system cannot handle files over 2Gb? Most Linuxes
can't. I don't know about *BSD or Solaris/SunOS.



-Original Message-
From: Pierre Volcke [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 03, 2001 3:01 PM
To: [EMAIL PROTECTED]
Subject: [amrestore] file too large



 hello,

 I get the following message when restoring one of my
 full dumps with amrestore:

[tape-serveur]# /usr/local/amanda/sbin/amrestore /dev/nst0 planeto
amrestore:   0: skipping start of tape: date 20010502 label TAPE4
amrestore:   1: skipping jupiter._home_zone1.20010502.2
amrestore:   2: skipping jupiter._home_zone2.20010502.1
amrestore:   3: skipping jupiter._home_zone3.20010502.0
amrestore:   4: restoring planeto._home_zone10.20010502.0
amrestore: write error: File too large

The target disk is big enough to handle the whole taped archive.

According to the mail reports, the amount of datas saved 
on this tape is 17.8Gb. I use 20Gb/40Gb DLT Tapes.

Could it be a problem with my tapetype config or with
the tape hardware itself?

Of course, a tar xvf on this archive will fail at the
end of file:

[tape-server]#cat planeto._home_zone10.20010502.0 | tar xvf - 
...
tar: 511 garbage bytes ignored at end of archive
tar: Unexpected EOF in archive
tar: Error exit delayed from previous errors
...

any advice? 
thanks!

Pierre.



Re: FATAL: data write: File too large

2001-03-15 Thread Mark L. Chang

On 15 Mar 2001, Alexandre Oliva wrote:

  and is:
  chunksize 2gb
  the right way to limit it?

 Nope.  Use 2000mb.  That's a little bit less than 2Gb, so it won't
 bump into the limit.

Confusing, but understood. I will try that, thanks.
Mark

-- 
http://www.mchang.org/
http://decss.zoy.org/





RE: FATAL: data write: File too large

2001-03-15 Thread Carey Jung

 
  Nope.  Use 2000mb.  That's a little bit less than 2Gb, so it won't
  bump into the limit.

 Confusing, but understood. I will try that, thanks.

I think the amanda man page discusses this a bit, or else it's in the
default amanda.conf.  To allow for header information on the dump file,
etc., you can't set chunksize to exactly 2GB (= 2048MB); it has to be a bit
less.   2000MB is a nice, round, easy-to-remember number.
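
The numbers, in kbytes:

  2 GB limit:           2 * 1024 * 1024 = 2097152 kbytes
  minus 32 KB header:   2097152 - 32    = 2097120 kbytes
  chunksize 2000 MB:    2000 * 1024     = 2048000 kbytes  (well below)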

Carey




FATAL: data write: File too large

2001-03-14 Thread Mark L. Chang

Just a quick sanity check ...
server is linux w/DLT7000 and 25g free on holding disk
brain (client) is solaris 8 with 15g partition (7 gig used) to back up for
the first time.

I get this error in the raw log:
FAIL dumper brain c1t1d0s0 0 ["data write: File too large"]
  sendbackup: start [brain:c1t1d0s0 level 0]
  sendbackup: info BACKUP=/usr/local/gnome/bin/gtar
  sendbackup: info RECOVER_CMD=/usr/local/bin/gzip -dc
|/usr/local/gnome/bin/gtar -f... -
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end

and this in amdump.1:
driver: hdisk-state time 153.095 hdisk 0: free 21616884 dumpers 1
driver: result time 3319.648 from dumper0: FAILED 00-1 ["data write:
File too large"]
dumper: kill index command

Is this all because of the 2g file limit on basic kernels on Linux?

and is:
chunksize 2gb

the right way to limit it? Or should I have a "chunksize 1950 Mb"?

Cheers,
Mark

-- 
http://www.mchang.org/
http://decss.zoy.org/





Re: FATAL: data write: File too large

2001-03-14 Thread Alexandre Oliva

On Mar 15, 2001, "Mark L. Chang" [EMAIL PROTECTED] wrote:

 Is this all because of the 2g file limit on basic kernels on Linux?

Yep.

 and is:
 chunksize 2gb

 the right way to limit it?

Nope.  Use 2000mb.  That's a little bit less than 2Gb, so it won't
bump into the limit.

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist*Please* write to mailing lists, not to me



Re: data write: File too large

2001-03-11 Thread John R. Jackson

We just ran across this error, because we hit the 2GB filesize limit ...

However, the behavior after this error seems a bit odd.  Amanda flushed the
other disk files on the holding disk to tape ok, but then also left them on
the holdingdisk.  amverify confirms they're all on tape, but ls shows
they're all still on the holdingdisk, too.  Is this normal?  ...

By sheer (and extremely annoying :-) coincidence, one of my systems did
the same thing last night, but behaved "properly", i.e. all the images
smaller than 2 GBytes were flushed and removed from the holding disk.
So my best guess is this is a problem that's fixed in a more recent
version of Amanda (you didn't say what you're running).

Carey

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



RE: data write: File too large

2001-03-11 Thread Carey Jung

 
 By sheer (and extremely annoying :-) coincidence, one of my systems did
 the same thing last night, but behaved "properly", i.e. all the images
 smaller than 2 GBytes were flushed and removed from the holding disk.
 So my best guess is this a problem that's fixed with a more recent
 version of Amanda (you didn't say what you're running).
 

We're running 2.4.2 with no patches. 



data write: File too large

2001-03-10 Thread Carey Jung

We just ran across this error, because we hit the 2GB filesize limit on ext2
filesystems.  After reading the man pages, we've cranked down the
holdingdisk chunksize to 2000MB, which should alleviate the problem in the
future.

However, the behavior after this error seems a bit odd.  Amanda flushed the
other disk files on the holding disk to tape ok, but then also left them on
the holdingdisk.  amverify confirms they're all on tape, but ls shows
they're all still on the holdingdisk, too.  Is this normal?  I would have
expected amanda to flush all the files to tape, remove them from the holding
disk, and then retry the level 0 backup that failed on the next run.

I've forced a level 0 of the failed disk for the next run (probably
unnecessary, but what the hey), but I guess I need to run amflush again on
the holding disk.
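
That is, something like the following, run as the amanda user (the
config name is just a placeholder); amflush asks which holding-disk
runs to write out and removes the images once they are safely on tape:

  su - amanda -c "amflush DailySet1"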

Regards,
Carey