On Sun, 17 Apr 2011 13:54:28 -0400
Jean-Louis Martineau wrote:
> amrestore -r
Thank you.
Then it means that before concatenating all the parts together, we
need to skip the first 32k block?
Sincerely,
Gour
--
“In the material world, conceptions of good and bad are
all mental speculations…”
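A minimal sketch of the step being described, with hypothetical part
names; amrestore -r leaves the 32k Amanda header on each part, and
dd's bs=32k skip=1 drops it before the parts are joined:

  # part file names are hypothetical
  for part in fbsd._usr.20110417.1 fbsd._usr.20110417.2; do
      dd if=$part bs=32k skip=1
  done > fbsd._usr.20110417.whole
  # the joined stream can then be handed to tar or restore as usual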
… and now amrestore reports:
amrestore on fbsd on compressed .gz tape (on linux) failed with:
ERROR: /usr/bin/gzip exited with status 1.
Any hint how to overcome it?
(It's multi-tapes backup and I need to amrestore first and then to
concatenate all the parts.)
Sincerely,
Gour
Hi,
just wanted to send you an update on this issue. Switching to
auth=bsdtcp completely solved my problem.
The working line from /etc/inetd.conf (for openbsd-inetd, with the
Amanda user being "backup") is:
amanda stream tcp nowait backup /usr/lib/amanda/amandad amandad
-auth=bsdtcp amdump amindex
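For xinetd instead of openbsd-inetd, an equivalent service file would
look roughly like this (a sketch under the same assumptions: user
"backup", amandad in /usr/lib/amanda; not tested here):

  service amanda
  {
      socket_type = stream
      protocol    = tcp
      wait        = no
      user        = backup
      server      = /usr/lib/amanda/amandad
      server_args = -auth=bsdtcp amdump amindex
      disable     = no
  }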
On Wed, Apr 21, 2010 at 11:04 AM, Volker Pallas wrote:
> Is auth=bsdtcp mandatory?
If you want to switch to bsdtcp, then yes. You'll also need to change
your (x)inetd configuration accordingly. The amanda-auth(7) manpage
may be of use to you in figuring the whole thing out.
Dustin
Is auth=bsdtcp mandatory?
Thank you,
Volker
Volker Pallas wrote:
> Gunnarsson, Gunnar wrote:
>
>> Switching to tcp instead of using udp cured those problems.
>>
>> Hi,
>>
>> I'm having a bit of a problem on *some* servers concerning failed
>> backups with the error message:
>> lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]
>>
>
> Gunnar had a similar problem - maybe his experience…
FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2
On Mon, Apr 12, 2010 at 4:48 AM, Volker Pallas wrote:
> Hi,
>
> I'm having a bit of a problem on *some* servers concerning failed
> backups with the error message:
> lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]
Gunnar had a similar problem - maybe his experience…
Hi,
I'm having a bit of a problem on *some* servers concerning failed
backups with the error message:
lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]
Usually these failed backups are "successfully retried", but sometimes I
get the same error twice and the backup fo…
On Thu, Apr 23, 2009 at 3:18 PM, Darin Perusich
wrote:
In my continued testing of amsuntar I am intermittently seeing this
"/opt/csw/bin/gzip: dup2 err: Bad file number" error during amdump.
While it appears to be random I have seen this occur with certain
partitions more than others; I've been changing up the disklist to try
and recreate…
Good luck at your new place. How do you like it? Was/is it hard to move your
consulting practice so far?
--Ian
On Saturday 26 July 2008 23:17:44 Jon LaBadie wrote:
> On Sat, Jul 26, 2008 at 10:55:56PM -0400, Ian Turner wrote:
> > Jon,
> >
> > I thought you were in Princeton. Did you move?
> >
>
On Sat, Jul 26, 2008 at 10:55:56PM -0400, Ian Turner wrote:
> Jon,
>
> I thought you were in Princeton. Did you move?
>
> --Ian
>
Yes I did Ian. Moved at the beginning of the year to
Reston, which for those unfamiliar with the area is
about 20 miles west of Washington, D.C.
Jon
--
Jon H. LaBadie
…as smaller, and it was taking
> longer to get to us).
>
> So I increased the inparallel parameter, which of course ramped up
> the load on the clients even further.
>
> The question of which version of Gzip to run arose, we had a fairly
> old version and there is a newer-Sun
…the load on the clients even further.
The question of which version of Gzip to run arose; we had a fairly
old version and there is a newer Sun version available, and we just
didn't know how version-sensitive we were. I know the version of gzip
(which we use on some partitions on these clients) is ve…
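A quick way to probe that version sensitivity is to cross-check the two
binaries directly; the install paths here are hypothetical:

  /opt/old/gzip -c /etc/hosts > /tmp/a.gz && /usr/local/bin/gzip -t /tmp/a.gz && echo new-reads-old
  /usr/local/bin/gzip -c /etc/hosts > /tmp/b.gz && /opt/old/gzip -t /tmp/b.gz && echo old-reads-new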
On 2006-03-11 14:17, Kai Zimmer wrote:
Hi all,
has anybody on the list experience with hardware gzip accelerator cards
(e.g. from indranetworks)? Are they of any use for amanda - or is the
disk-i/o the limiting factor? And how much are those (generally
pci-based) cards?
Never heard, and…
Hi all,
has anybody on the list experience with hardware gzip accelerator cards
(e.g. from indranetworks)? Are they of any use for amanda - or is the
disk-i/o the limiting factor? And how much are those (generally
pci-based) cards?
thanks,
Kai
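One rough way to answer the disk-i/o question before buying hardware:
time a raw read of a representative dump against a compression pass over
the same data (the sample path is hypothetical):

  time dd if=/dumps/sample of=/dev/null bs=1024k   # disk read throughput
  time gzip -c /dumps/sample > /dev/null           # compression throughput
  # if the gzip pass takes far longer, the CPU, not disk i/o, is the limit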
Greg Troxel wrote:
I'm using 2.4.5p1 on NetBSD with Kerberos encryption and
authentication.
I tried to verify some tapes and found that 'gzip -t' failed on the
restored files. On investigation, after adding some better
diagnostics to gzip (NetBSD's own), I found that the problem was that
th…
NetBSD's gzip currently warns about output files > 4 GB, because the
gzip format can't store such lengths. Also, it sets the exit status
to 1 and prints EOPNOTSUPP, which is just plain wrong. I'm discussing
how to fix this with other NetBSD people. I think the real issue is
w…
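Since the gzip trailer's length field wraps at 4 GB, one sanity check
that sidesteps the bogus exit status is to decode the whole stream and
look at the byte count instead (image name hypothetical):

  gzip -dc image.gz | wc -c   # a complete decode with the expected size
                              # suggests the length warning, not corruption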
On Wed, 2005-06-29 at 11:12 -0600, Michael Loftis wrote:
> Then do client side compression? Is there really a reason as to why you're
> not?
Client side compression gives me around 3-4 MB / sec data transfers.
Server side gives me around 10-15 MB / sec (with the current CPU in the
AMANDA server)
On Wed, 2005-06-29 at 13:18 -0400, Jon LaBadie wrote:
> "fast" rather than "best" might make a big difference
Oh, can you specify compress-fast as well as srvcompress? That
definitely would help.
> Wishlist item: allow for compress "normal" as well as best and fast.
> It often strikes a good balance…
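For reference: Amanda's "fast" runs gzip --fast (-1) and "best" runs
gzip --best (-9), while plain gzip defaults to -6 — the "normal" middle
ground the wishlist item asks for. Easy to compare locally on a
representative file (path hypothetical):

  for lvl in 1 6 9; do
      time gzip -$lvl -c /dumps/sample > /dev/null
  done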
--On June 29, 2005 10:58:07 AM -0600 Graeme Humphries
<[EMAIL PROTECTED]> wrote:
Ahhh, that makes sense then. Alright, I've got to beef up my AMANDA
server, because it's struggling along with just those 4 gzips, and I
want to have 4 dumpers going simultaneously all the time.
Then do client side compression? Is there really a reason as to why you're not?
On Wed, 2005-06-29 at 10:18 -0600, Michael Loftis wrote:
> Nope it isn't. One is for the index, one for the data. I had the same
> 'huh?!' question (sort of) a while back since I do client side compression
> and still had gzip's running ;)
Ahhh, that makes sense then. Alright, I've got to beef…
>  9685 ?  S   0:01  \_ /usr/lib/amanda/driver weekly
>  9686 ?  S   4:24      \_ taper weekly
>  9687 ?  S   0:59      |   \_ taper weekly
>  9699 ?  S   9:45      \_ dumper0 weekly
> 10629 ?  S  96:19      |   \_ /bin/gzip --fast
--On June 29, 2005 9:57:48 AM -0600 Graeme Humphries
<[EMAIL PROTECTED]> wrote:
Now, why oh why is it doing *two* gzip operations on each set of data!?
It looks like the gzip --best isn't actually getting that much running
time, so is there something going on here that's fa…
 9686 ?  S   4:24  \_ taper weekly
 9687 ?  S   0:59  |   \_ taper weekly
 9699 ?  S   9:45  \_ dumper0 weekly
10629 ?  S  96:19  |   \_ /bin/gzip --fast
10630 ?  S   0:00  |   \_ /bin/gzip --best
 9700 ?  S   6:52  \_ dumper1
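One way to confirm which process is which, using the PIDs from the
listing above: the --fast gzip should have the dump stream (holding
disk) open, while the --best one should be writing an index file.

  lsof -p 10629,10630 | egrep 'holding|index'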
…using dd off the tape and then running
the gzip wrapper script, I now have a dump or a tar archive.
I've looked through the list archives and others appeared to have this same
problem but I didn't see a solution. I've changed the redirect in the
script from:
${gzip_prog} ${gzip_f
This may be related to our backup problems described earlier:
I just noticed that during a dump running just now, I have
# ps -f -C gzip
UID     PID   PPID  C STIME TTY    TIME     CMD
amanda  3064   769  0 17:18 pts/5  00:00:00 /bin/gzip --best
amanda  3129   773  0 17:44 pts/5  00:00…
On Wed, Oct 20, 2004 at 12:52:12PM -0400, Eric Siegerman wrote:
> echo $* >/securedirectory/sum$$ &
> md5sum >/securedirectory/sum$$ &
Oops: the "echo" command shouldn't have an "&".
--
Eric Siegerman, Toronto, Ont.  [EMAIL PROTECTED]
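The idea being corrected above seems to be a wrapper that logs its
arguments and records a checksum of the stream in passing. A minimal
sketch under that assumption (not Eric's actual script; the use of tee
and the directory name are mine):

  #!/bin/bash
  # log the wrapper's arguments, checksum the stream in flight,
  # then hand the data on to the real gzip
  echo "$@" >/securedirectory/sum$$
  tee >(md5sum >>/securedirectory/sum$$) | /bin/gzip "$@"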
On Wed, Oct 20, 2004 at 01:18:45PM +0200, Toralf Lund wrote:
> Other possible error sources that I think I have eliminated:
> [ 0. gzip ]
> 1. tar version issues [...]
> 2. Network transfer issues [...]
> 3. Problems with a specific amanda version [...]
> 4. Problems w
from Paul Bijnens <[EMAIL PROTECTED]> -
From: Paul Bijnens <[EMAIL PROTECTED]>
To: Toralf Lund <[EMAIL PROTECTED]>
Cc: Amanda Mailing List <[EMAIL PROTECTED]>
Subject: Re: Multi-Gb dumps using tar + software compression (gzip)?
Date: Wed, 20 Oct 2004 13:59:31 +0200
Toralf Lund wrote:
Other possible error sources that I think I have eliminated:
1. tar version issues - since gzip complains even if I just uncompress
and send the data to /dev/null, or use the -t option.
2. Network transfer issues. I get errors even with server
compression, and I
…anything but itself. (And I'm not sure that 1.13
could even recover its own output!)
I hate to be boring and repetitive, but there are those here *now*
who did not go thru that period of hair removal that 1.13 caused.
Yep.
But how about gzip? Any known issues there? I think I've rul…
I've tried both. In fact, I've tested just about every combination of
tar, gzip, filesystems, hosts, recovery sources (tape, disk dump,
holding disk...) etc. I could think of, and I always get the same result.
I'm thinking this can't possibly be a tar problem, though, or at least…
Hi Toralf,
I've had nearly the same problem this week.
I found out that this was a problem of my tar.
I backed up with GNUTAR and "compress server fast".
AMRESTORE restored the file but TAR (on the server!) gave some horrible
messages like yours.
I transferred the file to the original machine ("client") and all worked
fine.
Joshua Baker-LePain wrote:
> I think that OS and utility (i.e. gnutar and gzip) version info would be
> useful here as well.
True, forgot that. I'm on Linux 2.4.19 (Debian woody), using GNU tar
1.13.25 and gzip 1.3.2. I have never had problems recovering files from
huge d…
…nce when the tape was failing. I'll send you my amanda.conf
> privately. BTW which version are you using? I'm at version
> 2.4.4p1-20030716.
I think that OS and utility (i.e. gnutar and gzip) version info would be
useful here as well.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
Since I'm still having problems gunzip'ing my large dumps - see separate
thread, I was just wondering:
Some of you people out there are doing the same kind of thing, right? I
mean, have
1. Dumps of directories containing several Gbs of data (up to roughly
   20Gb compressed in my case.)
2. Use dumptype GNUTAR.
3. Compress data using "compress client fast" or "compress server fast".
If you do, what exactly are your amanda.conf settings? A…
…I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have a
fairly old version, 1.2.4.
That rings a bell somewhere. Hasn't there once been a report on this
list from someone whose zipped backups got corrupted at every (other)
GB mark?
Alexander Jolk wrote:
Toralf Lund wrote:
[...] I get the same kind of problem with harddisk dumps as well as
tapes, and as it now turns out, also for holding disk files. And the
disks and tape drive involved aren't even on the same chain.
Actually, I'm starting to suspect that gzip
…s. I do seem to remember that I took care to make sure it wouldn't be
used, when I installed Amanda.
I've installed the freeware version a while ago (GNU tar) 1.13.25
without an itch along with /usr/sbin/gzip.
Both incarnations of gzip return the same version string as the one you
included
On Thu, 14 Oct 2004, Toralf Lund wrote:
> Gene Heskett wrote:
> > Also, the gzip here is 1.3.3, dated in 2002. There may have been fixes to
> > it, probably in the >2GB file sizes areas.
> >
> Ahem. If >2GB data is or has been a problem, then I'm definitely d
Jean-Francois Malouin wrote:
[ snip ]
Actually, I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have a
fairly old version, 1.2.4, I think (that's the latest one supplied by
SGI, unless they have upgraded it ver
…unpacking the
harddisk dump in a more direct manner:
# dd if=/dumps/mirror/d4/data/00013.fileserv._scanner2_Hoyde.6 bs=32k skip=1 | tar -xvpzkf -
[ file extract info skipped ]
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
37800+0 records in
37800+0 records out
gzip: stdin: invalid compressed data--crc error
tar: Child returned status 1
tar: Error exit delayed from previous errors
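To separate gzip's complaint from tar's, the tar stage can be dropped
and the stream decompressed straight to /dev/null — the check described
elsewhere in this thread:

  dd if=/dumps/mirror/d4/data/00013.fileserv._scanner2_Hoyde.6 bs=32k skip=1 \
      | gzip -dc > /dev/null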
Hi
I wanted to enable gzip compression on my backups since I want to
accommodate lots of data on a 40GB tape. At present I am using
define dumptype root-tar {
    global
    program "GNUTAR"
    comment "root partitions dumped with tar"
    compress none
    index
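A sketch of the change being asked about — a variant of the quoted
dumptype with compression switched on (the name is made up; "server"
instead of "client" also works if the clients are short on CPU):

  define dumptype root-tar-gz {
      root-tar
      comment "root partitions dumped with tar, gzipped"
      compress client fast
  }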
OK, so all my clients are compressing. I have 13 clients and about 5 of
them are Solaris using dump; the rest are using tar. Could someone explain
why the dumpers are also spawning a 'gzip --best' process? They only use 5
or 6 seconds of CPU so they are not doing much, but I don'…
Just a note that I have experienced a similar problem, but with
Redhat and Mandrake rather than Debian Linux. The dump format
is GNU tar with gzip compressed on the client side, written to
a large holding disk then flushed to tape. The archives on
holding disk verify OK, but the problem is
On Tue, Jul 15, 2003 at 12:10:27PM -0400, Kurt Yoder wrote:
> However, I
> was able to duplicate the problem simply by gzipping a big file to
> my ATA/IDE holding disk. So I'm certain it's not a scsi problem.
Is it repeatable? I.e. if you gzip the *same* file five times,
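Spelling out the suggested test (file and output paths hypothetical):
identical checksums mean gzip is deterministic here; any odd one out
points at flaky hardware or memory rather than gzip itself.

  for i in 1 2 3 4 5; do
      gzip -c /var/tmp/bigfile > /holding/trial$i.gz
  done
  md5sum /holding/trial*.gz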
Niall O Broin said:
> What's your backup device? If it's a SCSI tape then I'd say your
> problem is most likely SCSI cabling termination. I had this a long
> time ago and it drove me nuts. I eventually found that the SCSI chain
> wasn't terminated correctly. Just like you, I would only e…
On Tuesday 15 July 2003 16:07, Kurt Yoder wrote:
> they seem to go fine. However, upon verifying the backups, I notice
> gzip errors. I get two different kinds of errors: "crc" errors and
> "format violated" errors. The errors don't happen on all dump
> images
Hello list
I've been having a problem with amanda and gzip on my debian backup
servers for a while now. I do my backups with gzip compression, and
they seem to go fine. However, upon verifying the backups, I notice
gzip errors. I get two different kinds of errors: "crc" errors and
"format violated" errors.
Hi --
> Any tips or tricks or other thoughts? Is this the Linux dump/restore
> problem I've seen talked about on the mailing list? I don't
> understand how the gzip file could be corrupted by a problem internal
> to the dump/restore cycle.
Answering my own question after a week of testing…
…particularly unusual in the amrecover debug file on the client side.
The corresponding amidxtaped debug file on the tape host side seemed
to be running normally, then terminating on a gzip error:
amidxtaped: time 10.959: Ready to execv amrestore with:
path = /usr/local/sbin/amrestore
argv[0] =
On Wed, 15 Jan 2003 at 12:31pm, Orion Poplawski wrote
> Just noticed that at least one of my amanda disk dumps is being run
> through gzip on client and on the server. The details:
I'm pretty sure that the gzip on the server is compressing the index file,
*not* the dump.
Just noticed that at least one of my amanda disk dumps is being run
through gzip on client and on the server. The details:
disklist:
lewis  /export/lewis3  comp-best-user-tar
amanda.conf:
define dumptype root-tar {
    global
    program "GNUTAR"
    comment "…
Does anybody else have additional feedback on this reply from Greg? This
regards a secure backup scheme for amanda whereby the backups are
passed to a gzip wrapper that encrypts the data with gpg and then
forwards it to the real gzip for further compression. I'd wondered
abou…
Hello,
I'm currently in the testing phase for switching our amanda backups over
to Judith Freeman's secure scheme, using gpg and a gzip wrapper
(http://security.uchicago.edu/tools/gpg-amanda.) Everything's working
great with our test computers, and, so far, I'm pre
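A sketch of what such a wrapper looks like, following the scheme exactly
as described above (encrypt with gpg, then forward to the real gzip);
this is not the actual script from security.uchicago.edu, and
"backup-key" is a hypothetical gpg recipient:

  #!/bin/sh
  REAL_GZIP=/bin/gzip
  case "$*" in
      *-d*)   # restore path: undo the compression, then decrypt
          $REAL_GZIP -d | gpg --batch --quiet --decrypt ;;
      *)      # backup path: encrypt, then compress
          gpg --batch --quiet --encrypt --recipient backup-key | $REAL_GZIP "$@" ;;
  esac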
Howdy,
I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a Linux box
acting as server. On the server, there is a "gzip --best" process running
even though I have "compress none" in the "global" configuration. Is this
normal?
--Dave Chin
[EMAIL PROTECTED]
Hello,
I am still trying to get gzip/gpg working. I did not receive any replies
to my last two mails, so let me try again with a narrower question.
If someone is able to answer this, that would be awesome:
As I understand the process, the data should be written to tape with gzip,
not dump. But…
* John R. Jackson <[EMAIL PROTECTED]> (Mon, Feb 05, 2001 at 11:59:56PM -0500)
>>2. quoting a colocation facility's website:
>>"We use bzip2 instead of gzip for data compression. ...
> This comes up here about once a month :-). There was a lengthy discussion
> last…
>1. I have all of my gzips set to fast instead of best but whenever amdump is
>running there will be a gzip --fast and gzip --best for every file that is
>in my holding disk. What are the reasons behind this?
The --best one is doing the index files, not the data stream.
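On Linux this is easy to confirm directly; with a hypothetical PID for
the gzip --best process, its stdout should resolve to an index file
rather than a dump image:

  readlink /proc/12345/fd/1   # hypothetical PID; expect a path under .../index/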
I have 2 questions relating to gzip.
1. I have all of my gzips set to fast instead of best but whenever amdump is
running there will be a gzip --fast and gzip --best for every file that is
in my holding disk. What are the reasons behind this?
2. quoting a colocation facility's website:
"W
On Wed, 15 Nov 2000, Sandra Panesso wrote:
> Hi Kevin
>
> I want to know if you have tried to run amanda on Mac OS X Beta. If you
> did, please tell me how it went. My question is because I am testing
> amanda on Mac OS X Beta but I found some problems when I tried to
> compile it. I use…
…disabled ktrace debugging of the kernel so there wasn't
> much I could do to figure out where the problem is. However, recently, I
> decided to do a set of dumps with compression turned off. It turns out
> that's where the slowdown is occurring. For some reason, the compression
>
On Thu, 9 Nov 2000, Mitch Collinsworth wrote:
> Have you tried compress client fast yet or are you still doing client
> best?
Yes, actually, I had been using client fast for all my backups. Maybe I
would do better with client best :) Still, the thing that irks me most
about it is not that the
> >... gzip is just really, really slow when used with AMANDA under Mac
> >OS X Server. Command line issued tar/gzip pipes seem to work reasonably
> >fast on the OS X Server.
>
> Well, if one of these boxes ever drops in my lap (and I have time),
> I guess I can ta
…on that machine (a 400MHz G4 machine with a Gig of
> RAM). So it's not merely an issue of gzip compression adding time to the
> backups. gzip is just really, really slow when used with AMANDA under Mac
> OS X Server. Command line issued tar/gzip pipes seem to work reasonably
> fast on t…