> I just tried to find some ways to let Amanda (the taper phase of the
> backup process) create files that avoid the Large Files Problem (LFP).
> For example, I can do it with the following command under FreeBSD-4.x:
>
> ssh MyHost dump -0 -u -a -f - /MyBigBigBigSlice > MyHost.MyBigBigBigSlice.0.dump
On Wed, 2 May 2007, Olivier Nicole wrote:

> Hi,
>
> > I run Amanda under FreeBSD-4.11, and when Amanda tries to make a dump
> > file larger than 4GB (exactly, 2**32-1), I can see in the log:
>
> May I suggest that you ask the question on a FreeBSD list :)
>
> Olivier

Not exactly, Olivier :)

I just tried to find some ways to let Amanda create files that avoid the
LFP...
> Yes, I'm taping to hard disk (tapedev=file:/). Actually the problem
> arises when the dump itself is finished and all chunks (in my config
> chunksize = 256MB) begin to be sent from the holding disk to a file in
> the storage directory. When this file reaches 4GB, Amanda drops errors
> such as [writing file: File too large] into the log. Maybe there are
> some ways to solve the problem under a FreeBSD-4.x system?
> I have Amanda version 2.5.1p3.

As I suggested, look into...
driver: hdisk-state time 2560.825 hdisk 0: free 81451306 dumpers 0
driver: result time 2560.825 from taper: TAPE-ERROR 00-1 "[writing
file: File too large]"
That is due to a filesystem constraint in FreeBSD-4.x, where the size of
a file is given in a 32-bit representation and declared in sys/stat.h
with an int32_t integer type.

Is it possible to change some of Amanda's code to do the...
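A quick way to test that limit outside Amanda (my own sketch, not from
this thread; the /storage path is illustrative) is to ask dd to seek
past the 4GB mark and see whether the filesystem accepts it:

    # Create a sparse file whose end lies near 5GB. If the filesystem
    # or libc cannot represent offsets that large, dd fails with the
    # same "File too large" error Amanda reports.
    dd if=/dev/zero of=/storage/bigtest bs=1m count=1 seek=5000
    ls -l /storage/bigtest    # on success, shows a ~5GB sparse file
    rm /storage/bigtest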
Never mind,

I was under the impression that, for some reason, the holdingdisk {}
section of amanda.conf could not exist in my version. But I put it
there, with an appropriate chunksize, and ... amcheck did not complain
at all. In fact it said "4096 size requested, that's plenty."

If you don't hear...
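For reference, a minimal sketch of the holdingdisk section being
described (the directory and sizes are illustrative, not from this
message):

    holdingdisk hd1 {
        directory "/dumps/amanda"   # where the chunk files are written
        use -100 mb                 # use all but 100 MB of the filesystem
        chunksize 1 gb              # keep each chunk under the 2 GB limit
    }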
Thanks for the responses...

Upgrading at this time would be GREAT. Unfortunately this is not going
to happen anytime soon. I have another hard drive with RH9 and 2.4.4 on
it, but it's only half-configured, and it took a while to get to that
point.

As per the current, iridium-dust, ancient setup, ... away.
Dana Bourgeois
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Byarlay, Wayne A.
> Sent: Monday, November 17, 2003 7:54 AM
> To: [EMAIL PROTECTED]
> Subject: data write: File too large
>
> Hi all,
Byarlay, Wayne A. wrote:

Hi all,

I checked the archives on this problem... but they all suggested
adjusting the chunksize in the holdingdisk section of my amanda.conf.
However, I have ver. 2.4.1, and there's no "holdingdisk" section IN my
amanda.conf! Is the chunksize the problem? I've got filesystems MUCH
larger than this one going to AMANDA... but if so, how do I adjust my
chunksize?

Here's from my error log:

/-- xxx/services lev 0 FAILED ["data write: File too large"]
sendbackup: start [xxx:/services level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
I have recently installed amanda 2.4.2p2 and I discovered that it no
longer supports the "negative chunksize" parameter, which I used to
direct big dumps (some 10G+ filesystems) to tape directly. Now, when I
try to do a level 0 dump of such a filesystem, it gives me an error:
"data write: File too large". What could be a solution to this problem?
I don't really want to split these filesystems into parts - they have a
lot of subdirs, and it's a pain to maintain...
Pedro Aguayo wrote:
>
> Ok, I didn't, but I think I do now.
> Basically, when amanda writes to the holding disk, it writes to a flat
> file on the file system, and if that flat file is larger than 2gb then
> you might encounter a problem if your filesystem has a limitation where
> it can only support files up to 2gb.
On Thu, 7 Mar 2002 at 12:30pm, Mike Cathey wrote
> The annoying thing about using this hack is that if you have to use
> amflush (something goes wrong), then it (amflush) gives you errors about
> removing cruft files for all of the files that split creates. An
> ingenious hack nonetheless...
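For context, the hack being discussed, as I understand it (a sketch with
illustrative paths, not the poster's exact command): pipe the dump image
through split(1) so that no single file on the holding disk ever exceeds
2 GB:

    # each output piece is capped at 1 GB; amflush later mistakes the
    # pieces for cruft, which is the complaint above
    dump -0uf - /dev/sda3 | split -b 1024m - /holding/countach._dev_sda3.0.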
--On Wednesday, March 06, 2002 16:32:01 -0500 Charlie Chrisman
<[EMAIL PROTECTED]> wrote:

> /-- countach.i /dev/sda3 lev 0 FAILED ["data write: File too large"]
>
> I get this for two of my clients; what does this mean?

Probably that your Amanda server has a file size limit...
On Wed, 6 Mar 2002 at 4:32pm, Charlie Chrisman wrote

> /-- countach.i /dev/sda3 lev 0 FAILED ["data write: File too large"]
>
> I get this for two of my clients; what does this mean?

FAQ. Set your chunksize to something less than 2GB-32Kb. 1GB is fine --
there...
/-- countach.i /dev/sda3 lev 0 FAILED ["data write: File too large"]

I get this for two of my clients; what does this mean?

--
Charlie Chrisman
Business Development Director
(859) 514-7600
(859) 514-7601 Fax
<http://www.intelliwire.net/>
"The Intelligent Way to Work"
Hi there list,

I'm running amanda 2.4.2 on a RH7.1 box with kernel 2.4.2-2. I recently
read in the archives about making chunksizes 1GB instead of 2 due to
amanda tacking on stuff at the end, making those files too large. Too
late, I'm afraid, as I'm trying to restore a set of files f...
...holding disks that only handled individual files smaller than 2
GBytes. You don't have that problem on AIX 4+.

Another possibility for the "File too large" error is your Amanda user
running into a ulimit. If you do this:

su - -c "ulimit -a"

does it have a "file(blocks)" limit?
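To make that concrete (a sketch; the "amanda" user name is an assumption
on my part):

    # run the check as the backup user, not as root
    su - amanda -c "ulimit -a | grep file"
    # a finite "file(blocks)" value caps the files Amanda can write;
    # "unlimited" means ulimit is not the culprit

If the value is finite, it can usually be raised with "ulimit -f
unlimited" in the Amanda user's login script, provided the hard limit
allows it.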
On Tuesday 15 January 2002 09:56 am, Rivera, Edwin wrote:

> is there a way, in the amanda.conf file, to specify *NOT* to use
> the holding-disk for a particular filesystem?
>
> for example, if i use amanda to backup 8 filesystems on one box
> and i want 7 to use the holding-disk, but one not to... is
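What I believe answers this (a sketch against the 2.4-era syntax; the
dumptype names are illustrative, so check amanda.conf(5) for your
version): give that one filesystem a dumptype that disables the holding
disk, and keep the other seven on a normal dumptype.

    define dumptype comp-user-tar-nohold {
        comp-user-tar       # inherit an existing dumptype's settings
        holdingdisk no      # write this DLE straight to tape
    }

Then point that one disklist entry at comp-user-tar-nohold.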
Cc: Pedro Aguayo
Subject: Re: ["data write: File too large"]

I don't understand the problem. When amanda encounters a filesystem
larger than the holding disk, she AUTOMATICALLY resorts to direct tape
write. Quoting from the amanda.conf file:

# If no holding disks are specified then all dumps will be written
# directly to tape.
On 15-Jan-2002 Adrian Reyer wrote:
>
> No holding-disk -> no big file -> no problem. (Well, the tape might
> have to stop more often because of interruptions in the data flow.)

Why not define a chunksize of 500 Mb on your holdingdisk? That's what
I did. Backups go faster and there's less wear and tear on the drive.
Ahh! I see, said the blind man.

Pedro

-----Original Message-----
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: ["data write: File too large"]
On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:

> But, I think Edwin doesn't have this problem, meaning he says he doesn't
> have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file; same with tar. The problem only...
On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote:

> Could be that your holding disk space is too small, or you're trying to
> back up a file that is larger than 2 gigs?

Perhaps I misunderstand something here, but...

The holding disk afaik holds the entire dump of the filesystem you try
and dump...
You sure you don't have a huge file in that filesystem?

Pedro

-----Original Message-----
From: Rivera, Edwin [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:55 AM
To: Pedro Aguayo; Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: ["data write: File too large"]

my holding disk is 4GB and i have it set to use -100Mbytes

i'm confused on this one...
> ...FAILED ["data write: File too large"]
>
> here is the entry in my amanda.conf file:
> chunksize 1536 mbytes
>
> can you suggest anything? thanks in advance.

2GB limit on files perhaps? Not sure about AIX 4.2.1 support for files
bigger than 2GB; I quit AIX with version 3.2.5. Might...
hello again,

my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore; it did it only once (the first amdump
run). here is the error:

aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED ["data write: File too
large"]

here is the entry in my amanda.conf file:
chunksize 1536 mbytes
> I've got my chunksize set to 2Gb as it is, right now. I'm using linux
> with a 2.2.14-5 kernel. Would it cause problems if the chunksize was
> right at 2Gb? Perhaps I should lower it.

Oh yes, set it to something below 2GB-32KB: a good value is 2000 MB.
On Wed, 10 Oct 2001 at 3:20pm, Dave Brooks wrote

> I've got my chunksize set to 2Gb as it is, right now. I'm using linux
> with a 2.2.14-5 kernel. Would it cause problems if the chunksize was
> right at 2Gb? Perhaps I should lower it.

Yep, that's an issue. Amanda adds on a 32KB header...
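Worked out (my arithmetic, not from the thread): the OS limit is 2 GiB =
2,097,152 KiB, and each chunk file carries a 32 KiB Amanda header, so the
data in one chunk must stay at or below 2,097,152 - 32 = 2,097,120 KiB.
A round setting comfortably inside that:

    chunksize 2000 mb    # 2,048,000 KiB of data + 32 KiB header < 2 GiB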
> Does your filesystem have a limit of 2Gb? There's an option to limit
> filesizes in amanda.conf.
>
> Maarten Vink
>
> ----- Original Message -----
> From: "Dave Brooks" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Wednesday, October 10, 2001 6:47 PM
> Subject: File too large?
What does "file too large" mean? Example (from the email report):

FAILURE AND STRANGE DUMP SUMMARY:
localhost /hermes lev 0 FAILED ["data write: File too large"]

I can't figure out what's up with that... my spooling disk is 40GB (and
it's empty), and my tapes are 40GB compressed...
-----Original Message-----
From: Katrinka Dall [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 14, 2001 6:05 AM
To: [EMAIL PROTECTED]
Subject: "data write: File too large" ???

Hello,

I must say that I'm completely stumped; trying everything I can
possibly think of, I've...
On Tue, Aug 14, 2001 at 09:05:25AM -0500, Katrinka Dall wrote:

> FAILED AND STRANGE DUMP DETAILS:
>
> /-- xx.p /dev/sdb1 lev 0 FAILED ["data write: File too large"]

Does it fail after backing up 2 Gigabytes?
It sounds like you don't have Large File Support (LFS)...
On Tue, 14 Aug 2001 at 9:05am, Katrinka Dall wrote

> /-- xx.p /dev/sdb1 lev 0 FAILED ["data write: File too large"]
> sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
> sendbackup: info BACKUP=/bin/tar
> sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
... of doing this, I found that I was unable to get one of the Linux
machines that we had been backing up to work properly. I keep getting
this error whenever I try to do a dump on this machine/filesystem:

FAILED AND STRANGE DUMP DETAILS:

/-- xx.p /dev/sdb1 lev 0 FAILED ["data write: File too large"]
Perhaps your operating system cannot handle files over 2Gb? Most Linuxes
can't. I don't know about *BSD or Solaris/SunOS.

-----Original Message-----
From: Pierre Volcke [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 03, 2001 3:01 PM
To: [EMAIL PROTECTED]
Subject: [amrestore] file too large
On May 3, 2001, Pierre Volcke <[EMAIL PROTECTED]> wrote:

> amrestore: 4: restoring planeto._home_zone10.20010502.0
> amrestore: write error: File too large

You seem to be bumping into file size limits of the host filesystem.
You may want to pipe the output of amrestore -p directly...
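The usual shape of that (a sketch; the tape device is illustrative, and
restore(8) assumes the image was made with dump -- use "tar xf -" for a
tar image instead):

    # -p sends the selected image to stdout instead of writing a file,
    # so the oversized file never lands on the local filesystem
    amrestore -p /dev/nst0 planeto _home_zone10 | restore -rvf -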
amrestore: 2: skipping jupiter._home_zone2.20010502.1
amrestore: 3: skipping jupiter._home_zone3.20010502.0
amrestore: 4: restoring planeto._home_zone10.20010502.0
amrestore: write error: File too large

The target disk is big enough to handle the whole taped archive.
According to the mail reports, the...
> > Nope. Use 2000mb. That's a little bit less than 2Gb, so it won't
> > bump into the limit.
>
> Confusing, but understood. I will try that, thanks.

I think the amanda man page discusses this a bit, or else it's in the
default amanda.conf. To allow for header information on the dump file, ...
On 15 Mar 2001, Alexandre Oliva wrote:
> > and is:
> > chunksize 2gb
> > the right way to limit it?
>
> Nope. Use 2000mb. That's a little bit less than 2Gb, so it won't
> bump into the limit.
Confusing, but understood. I will try that, thanks.
Mark
--
http://www.mchang.org/
http://decss.zoy.
On Mar 15, 2001, "Mark L. Chang" <[EMAIL PROTECTED]> wrote:
> Is this all because of the 2g file limit on basic kernels on Linux?
Yep.
> and is:
> chunksize 2gb
> the right way to limit it?
Nope. Use 2000mb. That's a little bit less than 2Gb, so it won't
bump into the limit.
--
Alexandre
Just a quick sanity check ...

server is linux w/DLT7000 and 25g free on holding disk
brain (client) is solaris 8 with a 15g partition (7 gig used) to back up
for the first time.

I get this error in the raw log:

FAIL dumper brain c1t1d0s0 0 ["data write: File too large"]
sendbackup...
> By sheer (and extremely annoying :-) coincidence, one of my systems did
> the same thing last night, but behaved "properly", i.e. all the images
> smaller than 2 GBytes were flushed and removed from the holding disk.
> So my best guess is this is a problem that's fixed with a more recent
> version...
We just ran across this error, because we hit the 2GB filesize limit on
ext2 filesystems. After reading the man pages, we've cranked down the
holdingdisk chunksize to 2000MB, which should alleviate the problem in
the future.

However, the behavior after this error seems a bit odd. Amanda flushed
the other disk files on the holding disk to tape ok, but then also left
them on the holdingdisk. amverify confirms they're all on tape, but ls
shows...