ERROR: /usr/bin/gzip exited with status 1

2011-04-17 Thread Gour-Gadadhara Dasa
Hello,

during my migration from Linux to FreeBSD I lost Amanda's logs and
cannot use amrecover. In an attempt to re-do the backup I wanted to
amrestore the content from the tapes, but one set of tapes was dumped with
software compression enabled, and now amrestore reports:

amrestore on FreeBSD of a compressed (.gz) tape written on Linux failed with:
ERROR: /usr/bin/gzip exited with status 1.

Any hint on how to overcome it?

(It's a multi-tape backup and I need to amrestore first and then
concatenate all the parts.)

Sincerely,
Gour


-- 
“In the material world, conceptions of good and bad are
all mental speculations…” (Sri Caitanya Mahaprabhu)

http://atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810






Re: ERROR: /usr/bin/gzip exited with status 1

2011-04-17 Thread Jean-Louis Martineau

amrestore -r

Jean-Louis
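
For anyone hitting the same wall, here is a minimal sketch of the full
recovery path that -r enables. The tape device, host/disk names and file
names are examples, not taken from Gour's setup, and the exact output file
naming of amrestore varies a little between Amanda versions.

#!/bin/sh
# Recover the raw pieces with "amrestore -r" (no automatic decompression,
# the 32 KiB Amanda header is kept), strip the header from each piece,
# concatenate them in order, then decompress by hand.
TAPE=/dev/nrst0          # example non-rewinding tape device
HOST=myclient            # example client name as written on the tape
DISK=_usr                # example disk name as amrestore writes it

# run this once per tape of the set, in order
amrestore -r $TAPE $HOST $DISK

# strip the 32 KiB header from every piece and glue them together;
# make sure the pieces are listed in the right order (a plain glob
# sorts part 10 before part 2)
for f in $HOST.$DISK.*; do
    dd if="$f" bs=32k skip=1
done > restored.gz

# unpack (a GNU tar dump in this example)
gzip -dc restored.gz | tar -xvf -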

Gour-Gadadhara Dasa wrote:

Hello,

during my migration from Linux to FreeBSD I lost Amanda's logs and
cannot use amrecover. In the attempt to re-do backup I wanted to
amrestore content from the tapes, but one set of tapes was dumped with
software compression enabled and now amrestore reports:

amrestore on fbsd on compressed .gz tape (on linux) failed with:
ERROR: /usr/bin/gzip exited with status 1. 


Any hint how to overcome it?

(It's multi-tapes backup and I need to amrestore first and then to
concatenate all the parts.)

Sincerely,
Gour


  




Re: ERROR: /usr/bin/gzip exited with status 1

2011-04-17 Thread Gour-Gadadhara Dasa
On Sun, 17 Apr 2011 13:54:28 -0400
Jean-Louis Martineau martin...@zmanda.com wrote:

 amrestore -r

Thank you.

Does that mean that before concatenating all the parts together, we
need to skip the first 32k block?


Sincerely,
Gour


-- 
“In the material world, conceptions of good and bad are
all mental speculations…” (Sri Caitanya Mahaprabhu)

http://atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810






Re: SV: FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

2010-05-03 Thread Volker Pallas
Hi,

just wanted to send you an update on this issue. Switching to
auth=bsdtcp completely solved my problem.
The working line from /etc/inetd.conf (for openbsd-inetd, and the
amanda-user being backup) is:

amanda stream tcp nowait backup /usr/lib/amanda/amandad amandad
-auth=bsdtcp amdump amindexd amidxtaped
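
For anyone running xinetd instead of openbsd-inetd, a roughly equivalent
service entry might look like the sketch below; the file location
(/etc/xinetd.d/amanda), the user name and the paths are assumptions carried
over from the inetd line above, so adjust them to your installation.

service amanda
{
    socket_type     = stream
    protocol        = tcp
    wait            = no
    user            = backup
    group           = backup
    server          = /usr/lib/amanda/amandad
    server_args     = -auth=bsdtcp amdump amindexd amidxtaped
    disable         = no
}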

During the last 7 days there hasn't been a single failed backup on any
of my previously affected systems.

Thank you for your support!

Volker





Re: SV: FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

2010-04-21 Thread Volker Pallas
Hello again,

unfortunately my dup2 problems reappeared after that one night when it
was working; I still have the same error on my machines.

So far I have only changed the entry in /etc/inetd.conf (I'm using
openbsd-inetd) from:
amanda dgram udp wait backup /usr/lib/amanda/amandad amandad -auth=bsd
amdump amindexd amidxtaped
to:
amanda dgram tcp wait backup /usr/lib/amanda/amandad amandad -auth=bsd
amdump amindexd amidxtaped
and of course restarted the inetd process.

Are there any other places I need to adapt when completely switching
amanda to tcp?
Is auth=bsdtcp mandatory?

Thank you,

Volker

Volker Pallas wrote:
 Gunnarsson, Gunnar wrote:
   
 Switching to tcp instead of using udp cured those problems.
 
  Hi,

 I'm having a bit of a problem on *some* servers concerning failed 
 backups with the error message:
 lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]
 
   
 Gunnar had a similar problem - maybe his experience will help?

   http://www.mail-archive.com/amanda-users@amanda.org/msg42119.html

 Dustin
   
 


Re: SV: FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

2010-04-21 Thread Dustin J. Mitchell
On Wed, Apr 21, 2010 at 11:04 AM, Volker Pallas ama...@sqmail.de wrote:
 Is auth=bsdtcp mandatory?

If you want to switch to bsdtcp, then yes.  You'll also need to change
your (x)inetd configuration accordingly.  The amanda-auth(7) manpage
may be of use to you in figuring the whole thing out.
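
As a hedged illustration only (the dumptype name below is made up, and your
existing config will differ), the server side then has to request the same
mechanism, typically via the auth parameter in the dumptype used by the
affected DLEs:

define dumptype client-tar-bsdtcp {
    program "GNUTAR"
    auth "bsdtcp"        # must match what amandad is started with on the client
    compress client fast
}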

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Re: SV: FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

2010-04-16 Thread Volker Pallas
Thank you Gunnar and Dustin!
I switched to tcp on one host yesterday and it worked fine so far. I
will observe this for the next couple of days and report back to you.

Thank you again for your fast response,
Volker

Gunnarsson, Gunnar wrote:
 Switching to tcp instead of using udp cured those problems.
  Hi,

 I'm having a bit of a problem on *some* servers concerning failed 
 backups with the error message:
 lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]
 

 Gunnar had a similar problem - maybe his experience will help?

   http://www.mail-archive.com/amanda-users@amanda.org/msg42119.html

 Dustin
   


SV: FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

2010-04-14 Thread Gunnarsson, Gunnar
Switching to tcp instead of using udp cured those problems.

-- GG 

-Ursprungligt meddelande-
Från: owner-amanda-us...@amanda.org [mailto:owner-amanda-us...@amanda.org] För 
Dustin J. Mitchell
Skickat: den 13 april 2010 18:11
Till: Volker Pallas
Kopia: amanda-users@amanda.org
Ämne: Re: FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

On Mon, Apr 12, 2010 at 4:48 AM, Volker Pallas ama...@sqmail.de wrote:
  Hi,

 I'm having a bit of a problem on *some* servers concerning failed 
 backups with the error message:
 lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]

Gunnar had a similar problem - maybe his experience will help?

  http://www.mail-archive.com/amanda-users@amanda.org/msg42119.html

Dustin

--
Open Source Storage Engineer
http://www.zmanda.com




Re: FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

2010-04-13 Thread Dustin J. Mitchell
On Mon, Apr 12, 2010 at 4:48 AM, Volker Pallas ama...@sqmail.de wrote:
  Hi,

 I'm having a bit of a problem on *some* servers concerning failed
 backups with the error message:
 lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]

Gunnar had a similar problem - maybe his experience will help?

  http://www.mail-archive.com/amanda-users@amanda.org/msg42119.html

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor] on 2.6.1p2

2010-04-12 Thread Volker Pallas
 Hi,

I'm having a bit of a problem on *some* servers concerning failed
backups with the error message:
lev # FAILED [spawn /bin/gzip: dup2 out: Bad file descriptor]

usually these failed backups are successfully retried, but sometimes I
get the same error twice and the backup for the day fails completely.

This only happens on servers backing up windows clients via
smb-share/smbclient which are running amanda version 2.6.1p2. Servers
with Linux clients are not affected and other versions of amanda backing
up windows clients are also not affected. Additionally, as I said, this
does not happen all the time and not always with the same clients.

I verified the amanda user's access to /dev/null, and I would rule out that the
file systems are suddenly corrupt on all these different servers.

I would like to know what this error message means and how to fix it or
maybe you can point me in the right direction.

If you need any more information, please tell me.
Thank you in advance,
Volker

some detailed output from sendbackup.*.debug (this time failed and then
retried successfully):
1270767389.972013: sendbackup: start: serverfqdn://ipaddr/backupshare$ lev 1
1270767389.972046: sendbackup: pipespawnv: stdoutfd is 50
1270767389.972078: sendbackup: Spawning /bin/gzip /bin/gzip --fast in pipeline
1270767389.972196: sendbackup: gnutar: pid 24511: /bin/gzip
1270767389.972213: sendbackup: pid 24511: /bin/gzip --fast
1270767389.972347: sendbackup: critical (fatal): error [spawn /bin/gzip:
dup2 out: Bad file descriptor]
1270767389.972928: sendbackup: gnutar: backup of \\ipaddr\backupshare$
1270767389.973070: sendbackup: pipespawnv: stdoutfd is 6
1270767389.973188: sendbackup: Spawning /usr/bin/smbclient smbclient ipaddr\\backupshare$ -U username -E -d0 -Tqcg - in pipeline
1270767389.976787: sendbackup: Started index creator: /bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'
1270767389.979324: sendbackup: gnutar: /usr/bin/smbclient: pid 24513
1270767389.979343: sendbackup: Started backup

software versions on affected servers:
the OS is Debian Lenny
amanda 2.6.1p2
gzip 1.3.12-6+lenny1
tar 1.20-1
smbclient 3.2.5-4lenny9


sendbackup: error [spawn /opt/csw/bin/gzip: dup2 err: Bad file number]

2009-04-23 Thread Darin Perusich
In my continued testing of amsuntar I am intermittently seeing this
/opt/csw/bin/gzip: dup2 err: Bad file number error during amdump.
While it appears to be random, I have seen this occur with certain
partitions more than others; I've been changing up the disklist to try
to recreate it with more frequency.

Does anyone know what causes this type of error and how to debug it further?

-- 
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darin...@cognigencorp.com


Re: sendbackup: error [spawn /opt/csw/bin/gzip: dup2 err: Bad file number]

2009-04-23 Thread Dustin J. Mitchell
On Thu, Apr 23, 2009 at 3:18 PM, Darin Perusich
darin.perus...@cognigencorp.com wrote:
 In my continued testing of amsuntar I am intermittently seeing this
 /opt/csw/bin/gzip: dup2 err: Bad file number error during amdump.
 While it appears to be random I have seen this occur with certain
 partitions more then others, I've been changing up the disklist to try
 and recreate it with more frequency.

I don't know the exact circumstances, but basically this means that
gzip was trying to duplicate a file descriptor, and gave a bad file
descriptor either for the source or target argument.  If you have some
context, I may be able to give more detail -- how is gzip being
invoked?  Can you use truss or the like to figure out what it's trying
to do?
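
A rough sketch of the truss idea, under the assumption that the dup2 happens
somewhere in the amandad/sendbackup process tree on the Solaris client; the
PID and output paths are placeholders:

# find the amandad process that inetd started for the current run
pgrep amandad

# attach with -f so the system calls of every child it spawns
# (sendbackup, gzip, ...) end up in the same trace
truss -f -o /var/tmp/amandad.truss -p <amandad-pid>

# after the failure, look for the dup2 call that returned an error
grep dup2 /var/tmp/amandad.truss | grep 'Err#'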

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Re: performance tuning, gzip

2008-07-28 Thread Ian Turner
Good luck at your new place. How do you like it? Was/is it hard to move your 
consulting practice so far?

--Ian

On Saturday 26 July 2008 23:17:44 Jon LaBadie wrote:
 On Sat, Jul 26, 2008 at 10:55:56PM -0400, Ian Turner wrote:
  Jon,
 
  I thought you were in Princeton. Did you move?
 
  --Ian

 Yes I did Ian.  Moved at the beginning of the year to
 Reston, which for those unfamiliar with the area is
 about 20 miles west of Washington, D.C.

 Jon
-- 
Zmanda: Open Source Backup and Recovery.
http://www.zmanda.com


Re: performance tuning, gzip

2008-07-26 Thread Ian Turner
Jon,

I thought you were in Princeton. Did you move?

--Ian

On Friday 25 July 2008 14:03:52 Jon LaBadie wrote:
 On Fri, Jul 25, 2008 at 01:18:40PM -0400, Brian Cuttler wrote:
  We have a Solaris E250 amanda server backing up two T1000 servers,
  also Solaris, hosting Lotus Notes.
 
  Over time, we decided on HW compression, runs where long but they
  completed pretty reliably at the same time every day.
 
  We tried an experiment, since we hadn't really tried SW compression
  since we upgraded the client systems, we used SW-client compression
  and removed the HW compression. Runs jumped to 22+ hours, but we where
  not seeing the work area filled (the data was smaller, and it was taking
  longer to get to us).
 
  So I increased the inparallel parameter, which of course ramped up
  the load on the clients even further.
 
  The question of which version of Gzip to run arose, we had a fairly
  old version and there is a newer-Sun version available, just didn't
  know how version sensitive we where. I know version of gzip (which
  we use on some partitons on these clients) is very version specific.
 
  Is there a list of tested/approved gzip versions ? I didn't see one
  but may not have dug deep enough.
 
  Current gzip
  $ /usr/local/bin/gzip -V
  gzip 1.2.4 (18 Aug 93)
 
  Proposed gzip
  $ /usr/bin/gzip -V
  gzip 1.3.5
  (2002-09-30)
  Copyright 2002 Free Software Foundation

 One thing to check is whether you have specified best for your
 compression.  Gzip allows you to select from 9 levels of compression,
 trading cpu time (and wall time) for extra compression.  Amanda
 allows you to select fastest (aka level 1), best (level 9) or
 default which is level 6.

 I just ran a quick test on an 11MB text only file.  Level 9 took
 three times as long as level 1.  Yet level 1 gave 83% of the compression
 of level 9.  I like default level 6 which took 1.8 times as long
 as level 1 and gave 97% of the compression of level 9.

 BTW I also ran bzip2 on the same file.  It did 60% better than gzip
 level 9, but took nearly 22 times as long as gzip level 1.
-- 
Wiki for Amanda documentation: http://wiki.zmanda.com/


performance tuning, gzip

2008-07-25 Thread Brian Cuttler

We have a Solaris E250 amanda server backing up two T1000 servers,
also Solaris, hosting Lotus Notes.

Over time, we decided on HW compression; runs were long but they
completed pretty reliably at the same time every day.

We tried an experiment: since we hadn't really tried SW compression
since we upgraded the client systems, we used SW-client compression
and removed the HW compression. Runs jumped to 22+ hours, but we were
not seeing the work area filled (the data was smaller, and it was taking
longer to get to us).

So I increased the inparallel parameter, which of course ramped up
the load on the clients even further.

The question of which version of gzip to run arose; we had a fairly
old version and there is a newer Sun version available, and we just didn't
know how version sensitive we were. I know gtar (which
we use on some partitions on these clients) is very version sensitive.

Is there a list of tested/approved gzip versions ? I didn't see one
but may not have dug deep enough.

Current gzip
$ /usr/local/bin/gzip -V
gzip 1.2.4 (18 Aug 93)

Proposed gzip
$ /usr/bin/gzip -V
gzip 1.3.5
(2002-09-30)
Copyright 2002 Free Software Foundation 


thanks,

Brian
---
   Brian R Cuttler [EMAIL PROTECTED]
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







Re: performance tuning, gzip

2008-07-25 Thread Brian Cuttler
On Fri, Jul 25, 2008 at 01:18:40PM -0400, Brian Cuttler wrote:
 
 We have a Solaris E250 amanda server backing up two T1000 servers,
 also Solaris, hosting Lotus Notes.
 
 Over time, we decided on HW compression, runs where long but they
 completed pretty reliably at the same time every day.
 
 We tried an experiment, since we hadn't really tried SW compression
 since we upgraded the client systems, we used SW-client compression
 and removed the HW compression. Runs jumped to 22+ hours, but we where
 not seeing the work area filled (the data was smaller, and it was taking
 longer to get to us).
 
 So I increased the inparallel parameter, which of course ramped up
 the load on the clients even further.
 
 The question of which version of Gzip to run arose, we had a fairly
 old version and there is a newer-Sun version available, just didn't
 know how version sensitive we where. I know version of gzip (which
 we use on some partitons on these clients) is very version specific.

- I meant to say that gtar is very version sensitive.
* "where" should be rewritten as "were" in several places.

 Is there a list of tested/approved gzip versions ? I didn't see one
 but may not have dug deep enough.
 
 Current gzip
 $ /usr/local/bin/gzip -V
 gzip 1.2.4 (18 Aug 93)
 
 Proposed gzip
 $ /usr/bin/gzip -V
 gzip 1.3.5
 (2002-09-30)
 Copyright 2002 Free Software Foundation 
 
 
   thanks,
 
   Brian
 ---
Brian R Cuttler [EMAIL PROTECTED]
Computer Systems Support(v) 518 486-1697
Wadsworth Center(f) 518 473-6384
NYS Department of HealthHelp Desk 518 473-0773
 
 
 
 
 
---
   Brian R Cuttler [EMAIL PROTECTED]
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







Re: performance tuning, gzip

2008-07-25 Thread Jon LaBadie
On Fri, Jul 25, 2008 at 01:18:40PM -0400, Brian Cuttler wrote:
 
 We have a Solaris E250 amanda server backing up two T1000 servers,
 also Solaris, hosting Lotus Notes.
 
 Over time, we decided on HW compression, runs where long but they
 completed pretty reliably at the same time every day.
 
 We tried an experiment, since we hadn't really tried SW compression
 since we upgraded the client systems, we used SW-client compression
 and removed the HW compression. Runs jumped to 22+ hours, but we where
 not seeing the work area filled (the data was smaller, and it was taking
 longer to get to us).
 
 So I increased the inparallel parameter, which of course ramped up
 the load on the clients even further.
 
 The question of which version of Gzip to run arose, we had a fairly
 old version and there is a newer-Sun version available, just didn't
 know how version sensitive we where. I know version of gzip (which
 we use on some partitons on these clients) is very version specific.
 
 Is there a list of tested/approved gzip versions ? I didn't see one
 but may not have dug deep enough.
 
 Current gzip
 $ /usr/local/bin/gzip -V
 gzip 1.2.4 (18 Aug 93)
 
 Proposed gzip
 $ /usr/bin/gzip -V
 gzip 1.3.5
 (2002-09-30)
 Copyright 2002 Free Software Foundation 
 

One thing to check is whether you have specified best for your
compression.  Gzip allows you to select from 9 levels of compression,
trading cpu time (and wall time) for extra compression.  Amanda
allows you to select fastest (aka level 1), best (level 9) or
default which is level 6.

I just ran a quick test on an 11MB text only file.  Level 9 took
three times as long as level 1.  Yet level 1 gave 83% of the compression
of level 9.  I like default level 6 which took 1.8 times as long
as level 1 and gave 97% of the compression of level 9.

BTW I also ran bzip2 on the same file.  It did 60% better than gzip
level 9, but took nearly 22 times as long as gzip level 1.
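
A small sketch that reproduces this kind of comparison on your own data; the
sample file name is an example, and the ratios will of course depend on the
input:

#!/bin/sh
# time gzip at levels 1, 6 and 9 (and bzip2) on one sample file
# and compare the resulting sizes
F=sample.txt
for level in 1 6 9; do
    echo "gzip -$level:"
    time gzip -c -$level "$F" > "$F.$level.gz"
    ls -l "$F.$level.gz"
done
echo "bzip2 -9:"
time bzip2 -c -9 "$F" > "$F.bz2"
ls -l "$F.bz2"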

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


hardware gzip accelerator cards

2006-03-11 Thread Kai Zimmer

Hi all,

does anybody on the list have experience with hardware gzip accelerator cards 
(e.g. from indranetworks)? Are they of any use for amanda - or is the 
disk I/O the limiting factor? And how much are those (generally 
PCI-based) cards?


thanks,
Kai


Re: hardware gzip accelerator cards

2006-03-11 Thread Jon LaBadie
On Sat, Mar 11, 2006 at 02:17:50PM +0100, Kai Zimmer wrote:
 Hi all,
 
 has anybody on the list experience with hardware gzip accelerator cards 
 (e.g. form indranetworks)? Are they of any use for amanda - or is the 
 disk-i/o the limiting factor? And how much are those (generally 
 pci-based) cards?


Had not heard of such a beast.
Did a simple google, found another one from ?Comtech? at aha.com.
If I read the two pages correctly, indranetwork claims 450MB/sec
while aha claims 3000MB/sec.

Seems like they supply an alternative gzip, ahagzip,
that interfaces to their hardware driver.

I suspect you would have to either replace gzip with their
version, or recompile amanda pointing it to the accelerated
version of gzip.  If you go the latter route I'd do the
recompile with a configure pointing to a standin for either,
like /usr/local/libexec/amgzip.  It could start out being
a simple copy of /bin/gzip until you're certain amanda is working.
Then replace it with the accelerated version for testing.
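
A sketch of that standin idea, assuming the vendor's binary ends up somewhere
like /usr/local/bin/ahagzip and that your Amanda build was configured to call
/usr/local/libexec/amgzip as its compression program (check
./configure --help for the exact option in your version):

# step 1: start with the standin being plain gzip and confirm amanda still works
cp /bin/gzip /usr/local/libexec/amgzip

# step 2: once backups verify cleanly, keep a fallback and swap in the
# accelerated binary
cp /usr/local/libexec/amgzip /usr/local/libexec/amgzip.plain
cp /usr/local/bin/ahagzip /usr/local/libexec/amgzip

# step 3: if anything misbehaves, swap the plain copy back
# cp /usr/local/libexec/amgzip.plain /usr/local/libexec/amgzip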

I'll bet you could contact the companies and tell them you
are a contributor to the amanda mailing list, and that you would
like to do an evaluation of their product to confirm its
value and suitability for use in amanda installations.  Promise
to write an evaluation that you will publish on the list and
forward a copy to them.

Gee Kai, I'd like for you to be a guinea pig for the list :))

jl

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: hardware gzip accelerator cards

2006-03-11 Thread Michael Loftis



--On March 11, 2006 2:17:50 PM +0100 Kai Zimmer [EMAIL PROTECTED] wrote:


Hi all,

has anybody on the list experience with hardware gzip accelerator cards
(e.g. form indranetworks)? Are they of any use for amanda - or is the
disk-i/o the limiting factor? And how much are those (generally
pci-based) cards?

thanks,
Kai



Depends on the machine; most machines are disk I/O limited.  For those that 
aren't, unless the card accelerates the gzip command it's worthless. 
Usually they require special APIs to be implemented in a special (apache) 
module in order to work.  That's not to say you couldn't write a gzip 
implementation using the card.  It might not be any faster though; in fact 
it might be slower.  Modern CPUs are pretty damned fast.  And because of 
the nature of compression, you need a GP proc to run it, and it's not very 
likely you'll get anything faster than a newer Athlon or P4 on one of these 
cards.  Add to that the fact that you have to load data to/from main memory, 
over whatever bus (especially a slow PCI bus), and you might actually be *slower* 
running one of these cards.


They're meant to accelerate systems that are being used pretty heavily for 
other things, by freeing the main processor to run the intensive Java apps 
or ASP.Net apps.


That said, you might see an improvement if you can get the command-line 
gzip accelerated, or whatever your dump/tar/gtar equivalent uses.





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


gzip trailing garbage

2006-02-21 Thread Greg Troxel
I'm using 2.4.5p1 on NetBSD with Kerberos encryption and
authentication.

I tried to verify some tapes and found that 'gzip -t' failed on the
restored files.  On investigation, after adding some better
diagnostics to gzip (NetBSD's own), I found that the problem was that
the last 32K block was padded with zeros.

Unflushed dumps in the holding directory have this problem for remote
dumps (krb encrypted), but not local ones.

On an older amanda install, not using krb4, I don't have this problem.

Is anyone else seeing this?

Does GNU gzip ignore trailing NULLs?   The NetBSD implementation goes
back to looking for MAGIC0 after reading the saved length.
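
A quick sketch for checking a restored image for this kind of padding; the
file name is an example:

#!/bin/sh
F=dump.gz

# does this gzip consider the image valid?
gzip -t "$F" && echo "gzip -t: OK" || echo "gzip -t: failed, status $?"

# look at the tail: a padded image ends in a long run of NUL bytes
tail -c 128 "$F" | od -c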

-- 
Greg Troxel [EMAIL PROTECTED]


Re: gzip trailing garbage

2006-02-21 Thread Kevin Till

Greg Troxel wrote:

I'm using 2.4.5p1 on NetBSD with Kerberos encryption and
authentication.

I tried to verify some tapes and found that 'gzip -t' failed on the
restored files.  On investigation, after adding some better
diagnostics to gzip (NetBSD's own), I found that the problem was that
the last 32K block was padded with zeros.

Unflushed dumps in the holding directory have this problem for remote
dumps (krb encrypted), but not local ones.

On an older amanda install, not using krb4, I don't have this problem.

Is anyone else seeing this?


Hi Greg,

Yes, I have seen it with the new data encryption in Amanda 2.5. gzip 
will ignore the trailing zeros and give an advisory about trailing 
garbage, while bzip2 does not ignore trailing zeros. I haven't yet found 
out what part of the Amanda code is responsible for the trailing zeros, though.


--
Thank you!
Kevin Till

Amanda documentation: http://wiki.zmanda.com
Amanda forums:http://forums.zmanda.com


warnings from NetBSD gzip about 4GB saved files

2005-11-02 Thread Greg Troxel
NetBSD's gzip currently warns about output files > 4 GB, because the
gzip format can't store such lengths.  Also, it sets the exit status
to 1 and prints EOPNOTSUPP, which is just plain wrong.  I'm discussing
how to fix this with other NetBSD people.  I think the real issue is
whether gzip should warn about this condition.

I'd like to know if others have seen this error, how GNU gzip behaves,
and if anyone has wisdom about what the right behavior is from the
amanda viewpoint.
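
For what it's worth, the underlying format limit is easy to demonstrate: the
gzip trailer's ISIZE field holds the uncompressed length modulo 2^32
(RFC 1952), so a gzip implementation can write such a stream but the recorded
size wraps around.  A small sketch (the zero-filled input compresses to
almost nothing, though generating 5 GB of it takes a while):

dd if=/dev/zero bs=1048576 count=5120 | gzip -c > big.gz
gzip -l big.gz     # the "uncompressed" column shows 5 GiB modulo 4 GiB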

FAILED AND STRANGE DUMP DETAILS:

/-- [redacted].ir.b wd0e lev 0 STRANGE
sendbackup: start [[redacted].ir.bbn.com:wd0e level 0]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/sbin/restore -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
|   DUMP: Found /dev/rwd0e on /usr in /etc/fstab
|   DUMP: Date of this level 0 dump: Wed Nov  2 00:47:19 2005
|   DUMP: Date of last level 0 dump: the epoch
|   DUMP: Dumping /dev/rwd0e (/usr) to standard output
|   DUMP: Label: none
|   DUMP: mapping (Pass I) [regular files]
|   DUMP: mapping (Pass II) [directories]
|   DUMP: estimated 22480798 tape blocks.
|   DUMP: Volume 1 started at: Wed Nov  2 00:47:22 2005
|   DUMP: dumping (Pass III) [directories]
|   DUMP: dumping (Pass IV) [regular files]
|   DUMP: 4.33% done, finished in 1:50
|   DUMP: 8.60% done, finished in 1:46
|   DUMP: 12.90% done, finished in 1:41
|   DUMP: 16.85% done, finished in 1:38
|   DUMP: 21.78% done, finished in 1:29
|   DUMP: 24.95% done, finished in 1:30
|   DUMP: 30.30% done, finished in 1:20
|   DUMP: 34.04% done, finished in 1:17
|   DUMP: 42.42% done, finished in 1:01
|   DUMP: 54.08% done, finished in 0:42
|   DUMP: 65.94% done, finished in 0:28
|   DUMP: 77.79% done, finished in 0:17
|   DUMP: 89.03% done, finished in 0:08
|   DUMP: 99.92% done, finished in 0:00
|   DUMP: 22484268 tape blocks
|   DUMP: Volume 1 completed at: Wed Nov  2 01:57:29 2005
|   DUMP: Volume 1 took 1:10:07
|   DUMP: Volume 1 transfer rate: 5344 KB/s
|   DUMP: Date of this level 0 dump: Wed Nov  2 00:47:19 2005
|   DUMP: Date this dump completed:  Wed Nov  2 01:57:29 2005
|   DUMP: Average transfer rate: 5344 KB/s
|   DUMP: level 0 dump on Wed Nov  2 00:47:19 2005
|   DUMP: DUMP IS DONE
? gzip: input file size >= 4GB cannot be saved: Operation not supported
? error [compress returned 1]
? dumper: strange [missing size line from sendbackup]
? dumper: strange [missing end line from sendbackup]
\



-- 
Greg Troxel [EMAIL PROTECTED]


multiple gzip on same data!?

2005-06-29 Thread Graeme Humphries
Hi guys,

I've got my configuration mostly sorted out now, so it's doing what I
want it to. However, I've got a question about some weird behavior I'm
seeing on my AMANDA server. I'm using the srvcompress option because the
servers I'm backing up from are rather slow, and when backing up, I see
the following on the AMANDA server:

9675 ?S  0:00  \_ /USR/SBIN/CRON
 9676 ?Ss 0:00  \_ /bin/sh /usr/sbin/amdump weekly
 9685 ?S  0:01  \_ /usr/lib/amanda/driver weekly
 9686 ?S  4:24  \_ taper weekly
 9687 ?S  0:59  |   \_ taper weekly
 9699 ?S  9:45  \_ dumper0 weekly
10629 ?S 96:19  |   \_ /bin/gzip --fast
10630 ?S  0:00  |   \_ /bin/gzip --best
 9700 ?S  6:52  \_ dumper1 weekly
10086 ?S149:32  |   \_ /bin/gzip --fast
10087 ?S  0:21  |   \_ /bin/gzip --best
 9701 ?S  0:00  \_ dumper2 weekly
 9702 ?S  0:00  \_ dumper3 weekly

Now, why oh why is it doing *two* gzip operations on each set of data!?
It looks like the gzip --best isn't actually getting that much running
time, so is there something going on here that's faking me out, and it
isn't *actually* gzipping everything twice? :)

Graeme

-- 
Graeme Humphries ([EMAIL PROTECTED])
Linux Administrator
VCom Inc.
(306) 955-7075 ext 485

My views and comments do not necessarily reflect the views of my
employer.



Re: multiple gzip on same data!?

2005-06-29 Thread Michael Loftis



--On June 29, 2005 9:57:48 AM -0600 Graeme Humphries 
[EMAIL PROTECTED] wrote:



Now, why oh why is it doing *two* gzip operations on each set of data!?
It looks like the gzip --best isn't actually getting that much running
time, so is there something going on here that's faking me out, and it
isn't *actually* gzipping everything twice? :)


Nope it isn't.  One is for the index, one for the data.  I had the same 
'huh?!' question (sort of) a while back since I do client side compression 
and still had gzip's running ;)



--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: multiple gzip on same data!?

2005-06-29 Thread Jon LaBadie
On Wed, Jun 29, 2005 at 09:57:48AM -0600, Graeme Humphries wrote:
 Hi guys,
 
 I've got my configuration mostly sorted out now, so it's doing what I
 want it to. However, I've got a question about some weird behavior I'm
 seeing on my AMANDA server. I'm using the srvcompress option because the
 servers I'm backing up from are rather slow, and when backing up, I see
 the following on the AMANDA server:
 
 9675 ?S  0:00  \_ /USR/SBIN/CRON
  9676 ?Ss 0:00  \_ /bin/sh /usr/sbin/amdump weekly
  9685 ?S  0:01  \_ /usr/lib/amanda/driver weekly
  9686 ?S  4:24  \_ taper weekly
  9687 ?S  0:59  |   \_ taper weekly
  9699 ?S  9:45  \_ dumper0 weekly
 10629 ?S 96:19  |   \_ /bin/gzip --fast
 10630 ?S  0:00  |   \_ /bin/gzip --best
  9700 ?S  6:52  \_ dumper1 weekly
 10086 ?S149:32  |   \_ /bin/gzip --fast
 10087 ?S  0:21  |   \_ /bin/gzip --best
  9701 ?S  0:00  \_ dumper2 weekly
  9702 ?S  0:00  \_ dumper3 weekly
 
 Now, why oh why is it doing *two* gzip operations on each set of data!?
 It looks like the gzip --best isn't actually getting that much running
 time, so is there something going on here that's faking me out, and it
 isn't *actually* gzipping everything twice? :)


A dump is two streams, the data being backed up and the index.
Each is gzipped separately.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: multiple gzip on same data!?

2005-06-29 Thread Graeme Humphries
On Wed, 2005-06-29 at 10:18 -0600, Michael Loftis wrote:
 Nope it isn't.  One is for the index, one for the data.  I had the same 
 'huh?!' question (sort of) a while back since I do client side compression 
 and still had gzip's running ;)

Ahhh, that makes sense then. Alright, I've got to beef up my AMANDA
server, because it's struggling along with just those 4 gzips, and I
want to have 4 dumpers going simultaneously all the time.

-- 
Graeme Humphries ([EMAIL PROTECTED])
Linux Administrator
VCom Inc.
(306) 955-7075 ext 485

My views and comments do not necessarily reflect the views of my
employer.



Re: multiple gzip on same data!?

2005-06-29 Thread Michael Loftis



--On June 29, 2005 10:58:07 AM -0600 Graeme Humphries 
[EMAIL PROTECTED] wrote:



Ahhh, that makes sense then. Alright, I've got to beef up my AMANDA
server, because it's struggling along with just those 4 gzips, and I
want to have 4 dumpers going simultaneously all the time.


Then do client side compression?  Is there really a reason as to why you're 
not?  Unless your clients are all extremely slow, that's what I would 
suggest.





Re: multiple gzip on same data!?

2005-06-29 Thread Jon LaBadie
On Wed, Jun 29, 2005 at 10:58:07AM -0600, Graeme Humphries wrote:
 On Wed, 2005-06-29 at 10:18 -0600, Michael Loftis wrote:
  Nope it isn't.  One is for the index, one for the data.  I had the same 
  'huh?!' question (sort of) a while back since I do client side compression 
  and still had gzip's running ;)
 
 Ahhh, that makes sense then. Alright, I've got to beef up my AMANDA
 server, because it's struggling along with just those 4 gzips, and I
 want to have 4 dumpers going simultaneously all the time.

fast rather than best might make a big difference

Wishlist item:  allow for compress normal as well as best and fast.
It often strikes a good balance between the slight extra compression
of best and its markedly greater cpu usage.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: multiple gzip on same data!?

2005-06-29 Thread Graeme Humphries
On Wed, 2005-06-29 at 13:18 -0400, Jon LaBadie wrote:
 fast rather than best might make a big difference

Oh, can you specify compress-fast as well as srvcompress? That
definitely would help.
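
In case it helps, in the newer dumptype syntax server-side fast compression
is spelled out explicitly; a minimal sketch (the dumptype name is an example,
not from Graeme's config):

define dumptype comp-server-fast {
    program "GNUTAR"
    compress server fast    # server-side gzip --fast
}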

 Wishlist item:  allow for compress normal as well as best and fast.
 It often strikes a good balance between the slight extra compression
 of best and its markedly greater cpu usage.

-- 
Graeme Humphries ([EMAIL PROTECTED])
Linux Administrator
VCom Inc.
(306) 955-7075 ext 485

My views and comments do not necessarily reflect the views of my
employer.



Re: multiple gzip on same data!?

2005-06-29 Thread Graeme Humphries
On Wed, 2005-06-29 at 11:12 -0600, Michael Loftis wrote:
 Then do client side compression?  Is there really a reason as to why you're 
 not?

Client side compression gives me around 3-4 MB / sec data transfers.
Server side gives me around 10-15 MB / sec (with the current CPU in the
AMANDA server). Uncompressed FTP dumps get around 30-40 MB / sec. I have
600 GB to back up. ;)

   Unless your clients are all extrmely slow that's what I would 
 suggest.

They're pretty slow... :)

-- 
Graeme Humphries ([EMAIL PROTECTED])
Linux Administrator
VCom Inc.
(306) 955-7075 ext 485

My views and comments do not necessarily reflect the views of my
employer.



gzip wrapper script for encrypted backups and restore not working

2005-03-31 Thread Oscar Ricardo Silva
I have set up encrypted backups using the script found at:
http://security.uchicago.edu/tools/gpg-amanda/
and backups appear to work.  The problem comes when I attempt to restore 
files using amrecover.  Once the restore starts I get a message saying that 
what's found is not a tar archive:

Load tape daily28 now
Continue [?/Y/n/s/t]? Y
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: ./log/ldap.1: Not found in archive
tar: ./log/messages: Not found in archive
tar: Error exit delayed from previous errors
extract_list - child returned non-zero status: 2
Continue [?/Y/n/r]? n
or that it's not a dump tape:
Load tape daily28 now
Continue [?/Y/n/s/t]? Y
restore: Tape is not a dump tape
extract_list - child returned non-zero status: 1
Continue [?/Y/n/r]? Y
If I restore the entire dump or tar archive using dd off the tape then run 
the gzip wrapper script, I now have a dump or a tar archive.

I've looked through the list archives and others appeared to have this same 
problem but I didn't see a solution.   I've changed the redirect in the 
script from:

${gzip_prog} ${gzip_flags} >/tmp/amanda/gpg.debug
to
${gzip_prog} ${gzip_flags} 2>/tmp/amanda/gpg.debug

Any thoughts on what I'm doing wrong?  The only thing changed in the script 
is to add my gpg keys.  In my dumptype I have compress fast turned on so 
that gzip will be called.




Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-11-05 Thread Toralf Lund
Toralf Lund wrote:
Paul Bijnens wrote:
Toralf Lund wrote:
Other possible error sources that I think I have eliminated:
  1. tar version issues - since gzip complains even if I just uncopress
 and send the data to /dev/null, or use the -t option.
  2. Network transfer issues. I get errors even with server
 compression, and I'm assuming gzip would produce consistent output
 even if input data were garbled  due to network problems.
  3. Problems with a specific amanda version. I've tried 2.4.4p1 and
 2.4.4p3. Results are the same.
  4. Problems with a special disk. I've tested more than one, as target
 for file dumps as well as holding disk.

5. Hardware errors, e.g. in bad RAM (on a computer without ECC), or
disk controller, or cables.
If one single bit is flipped, then gzip produces complete garbage from
that point on.

Good point. The data isn't completely garbled, though; closer 
inspection reveals that the uncompressed data actually has valid tar 
file entries after the failure point. In other words, it looks like 
only limited sections within the file are corrupted.

Also, I believe it's not the disk controller, since I've even tried 
dumping to NFS volumes (but maybe that raises other issues.)

Maybe you're only seeing it in such large backups with gzip, but it 
happens (less often) in other cases too.
Any tools available to test the hardware?

I have one of those stand-alone test software packages... Yes. Maybe I 
should run it. I can't just take the server down right now, though ;-(
Yes, the problem was most probably caused by a memory error. Faults were 
reported when testing the RAM thoroughly, and we have not been able to 
reproduce the gzip issues after replacing the memory!

- Toralf



Re: Unexpected 'gzip --best' processes

2004-10-21 Thread Joshua Baker-LePain
On Thu, 21 Oct 2004 at 6:19pm, Toralf Lund wrote

 This may be related to our backup problems described earlier:
 
 I just noticed that during a dump running just now, I have
 
 # ps -f -C gzip
 UIDPID  PPID  C STIME TTY  TIME CMD
 amanda3064   769  0 17:18 pts/500:00:00 /bin/gzip --best
 amanda3129   773  0 17:44 pts/500:00:00 /bin/gzip --best

*snip*

 Any ideas why I get these processes?

That's the indexes (indices?) getting compressed.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Toralf Lund
Michael Schaller wrote:
Hi Toralf,
I'v had nearly the same problem this week.
I found out that this was a problem of my tar.
I backed up with GNUTAR and compress server fast.
AMRESTORE restored the file but TAR (on the server!) gave some 
horrible messages like yours.
I transferred the file to the original machine (client) and all 
worked fine.
I guess this is a problem of different tar versions ...

Do you made your tests on the client or on the server??
If the answer is server then transfer the restored archive to your 
client and untar there!!
I've tried both. In fact, I've tested just about every combination of 
tar, gzip, filesystems, hosts, recovery sources (tape, disk dump, 
holding disk...) etc. I could think of, and I always get the same result.

I'm thinking this can't possibly be a tar problem, though, or at least 
not only that, since gzip reports errors, too. I get

dd if=00010.raid2._scanner4.7 bs=32k skip=1 | gzip -t
124701+0 records in
124701+0 records out
gzip: stdin: invalid compressed data--crc error
gzip: stdin: invalid compressed data--length error

Greets
Michael
Toralf Lund schrieb:
Since I'm still having problems gunzip'ing my large dumps - see 
separate thread, I was just wondering:

Some of you people out there are doing the same kind of thing, right? 
I mean, have

  1. Dumps of directories containing several Gbs of data (up to roughly
 20Gb compressed in my case.)
  2. Use dumptype GNUTAR.
  3. Compress data using compress client fast or compress server 
fast.

If you do, what exactly are your amanda.conf settings? And can you 
actually extract *all* files from the dumps?

- Toralf





Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Toralf Lund
Gene Heskett wrote:
On Tuesday 19 October 2004 11:10, Paul Bijnens wrote:
 

Michael Schaller wrote:
   

I found out that this was a problem of my tar.
I backed up with GNUTAR and compress server fast.
AMRESTORE restored the file but TAR (on the server!) gave some
horrible messages like yours.
I transferred the file to the original machine (client) and all
worked fine.
I guess this is a problem of different tar versions ...
 

That's strange and freightening!  Tar is supposed to be a portable
format!  Especially gnutar  -- there are indeed differences with
normal OS-supplied tar formats, but only to overcome limits in
filesize, path name length etc.; but the same version of gnutar on
different architectures should be able to read each others files.
I'm not 100% sure what happens if you compile tar on an architecture
without largefile support on and try to restore a file exceeding
such a limit.
Are you sure you used the correct version of tar. I've called mine
gtar to avoid confusion with the OS-supplied tar (actually, amanda
even uses amgtar, which is a link to the correct version, or a
wrapper that does some pre/post processing if needed on e.g.
database DLE's).
   

We probably should point out to the new bees here, that tar-1.13 is 
indeed broken.  In other words, if your tar --version doesn't 
report that its at least 1.13-19, it may not, and probably is not, 
compatible with anything but itself.  (and I'm not sure that 1.13 
could even recover its own output!)

I hate to be boreing and repetitive, but there are those here *now* 
who did not go thru that period of hair removal that 1.13 caused.
 

Yep.
But how about gzip? Any known issues there? I think I've ruled out 
problems with one particular gzip version since I've tried server as 
well as client compression, where the client has a different gzip 
version from the server (and I've tried using both for recovery, too), 
but if a range of releases has a problem...

Other possible error sources that I think I have eliminated:
  1. tar version issues - since gzip complains even if I just uncompress
 and send the data to /dev/null, or use the -t option.
  2. Network transfer issues. I get errors even with server
 compression, and I'm assuming gzip would produce consistent output
 even if input data were garbled  due to network problems.
  3. Problems with a specific amanda version. I've tried 2.4.4p1 and
 2.4.4p3. Results are the same.
  4. Problems with a special disk. I've tested more than one, as target
 for file dumps as well as holding disk.
- Toralf




Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Toralf Lund
Paul Bijnens wrote:
Toralf Lund wrote:
Other possible error sources that I think I have eliminated:
  1. tar version issues - since gzip complains even if I just uncompress
 and send the data to /dev/null, or use the -t option.
  2. Network transfer issues. I get errors even with server
 compression, and I'm assuming gzip would produce consistent output
 even if input data were garbled  due to network problems.
  3. Problems with a specific amanda version. I've tried 2.4.4p1 and
 2.4.4p3. Results are the same.
  4. Problems with a special disk. I've tested more than one, as target
 for file dumps as well as holding disk.

5. Hardware errors, e.g. in bad RAM (on a computer without ECC), or
disk controller, or cables.
If one single bit is flipped, then gzip produces complete garbage from
that point on.
Good point. The data isn't completely garbled, though; closer 
inspection reveals that the uncompressed data actually has valid tar 
file entries after the failure point. In other words, it looks like only 
limited sections within the file are corrupted.

Also, I believe it's not the disk controller, since I've even tried 
dumping to NFS volumes (but maybe that raises other issues.)

Maybe you're only seeing it in such large backups with gzip, but it 
happens (less often) in other cases too.
Any tools available to test the hardware?
I have one of those stand-alone test software packages... Yes. Maybe I 
should run it. I can't just take the server down right now, though ;-(

- Toralf


Re: [paul.bijnens@xplanation.com: Re: Multi-Gb dumps using tar + software compression (gzip)?]

2004-10-20 Thread Toralf Lund
Patrick Michael Kane wrote:
If you restore a dump file to disk someplace and run file on it,
what type of file does it tell you it is?
 

Do you mean a normal amrestore'd file, or a raw recovery?
Actually, I have examples of both:
#  file fileserv._scanner2_Hoyde.20041008.6
fileserv._scanner2_Hoyde.20041008.6: GNU tar archive
# file fileserv._scanner2_Hoyde.20041006.0
fileserv._scanner2_Hoyde.20041006.0: AMANDA  dump file, DATE 20041006 
fileserv /scanner2/Hoy

But of course the output would be what you expected for valid dump 
files, since they are *mostly* OK. Like I said earlier, tar extract (or 
list) on the files starts off right, and if I look at the (uncompressed) 
files starting at the end, I also find valid tar file entries. It looks 
like the files have section(s) of corrupt data in the middle, however. 
I don't know any way to find out exactly where the error occurs, or what 
is wrong with the data. I do know where tar gets into trouble for each 
of the files, of course, but I don't know how to find the corresponding 
compressed data, or its offset within the dump.
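
One rough way to narrow that down, as a sketch only (the file name and block
number are examples, and skip=1 drops the 32 KiB Amanda header): GNU tar's
-R/--block-number option prints the position inside the archive next to each
complaint, and since tar records are 512 bytes you can then dump the
surrounding uncompressed bytes with dd and od.

F=fileserv._scanner2_Hoyde.20041006.0

# 1. note the block numbers tar reports next to its complaints
#    (listing goes to /dev/null, diagnostics stay on the terminal)
dd if="$F" bs=32k skip=1 | gzip -dc 2>/dev/null | tar -tRf - > /dev/null

# 2. dump the records around a reported block number
BLOCK=240000     # example value taken from step 1
dd if="$F" bs=32k skip=1 | gzip -dc 2>/dev/null \
  | dd bs=512 skip=$BLOCK count=16 2>/dev/null | od -c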

- Toralf

- Forwarded message from Paul Bijnens [EMAIL PROTECTED] -
From: Paul Bijnens [EMAIL PROTECTED]
To: Toralf Lund [EMAIL PROTECTED]
Cc: Amanda Mailing List [EMAIL PROTECTED]
Subject: Re: Multi-Gb dumps using tar + software compression (gzip)?
Date: Wed, 20 Oct 2004 13:59:31 +0200
Message-ID: [EMAIL PROTECTED]
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.1) Gecko/20040707
Toralf Lund wrote:
 

Other possible error sources that I think I have eliminated:
 1. tar version issues - since gzip complains even if I just uncopress
and send the data to /dev/null, or use the -t option.
 2. Network transfer issues. I get errors even with server
compression, and I'm assuming gzip would produce consistent output
even if input data were garbled  due to network problems.
 3. Problems with a specific amanda version. I've tried 2.4.4p1 and
2.4.4p3. Results are the same.
 4. Problems with a special disk. I've tested more than one, as target
for file dumps as well as holding disk.
   

5. Hardware errors, e.g. in bad RAM (on a computer without ECC), or
disk controller, or cables.
If one single bit is flipped, then gzip produces complete garbage from
that point on.  Maybe you're only seeing it in such large backups with 
gzip, but it happens (less often) in other cases too.
Any tools available to test the hardware?


 




Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Eric Siegerman
On Wed, Oct 20, 2004 at 01:18:45PM +0200, Toralf Lund wrote:
 Other possible error sources that I think I have eliminated:
 [ 0. gzip ]  
   1. tar version issues [...]
   2. Network transfer issues [...]
   3. Problems with a specific amanda version [...]
   4. Problems with a special disk [...]

Of course it might well be hardware, as Paul suggested; but in
case it isn't, have you tried removing various of these pieces
from the pipeline entirely, e.g.:
  - create a multi-GB file on the client, gzip it, and see if it
gunzip's ok

  - then ftp the .gz to the server and see if it gunzip's ok
there too

  - then ftp the uncompressed version to the server, and both
gzip and gunzip it there

  - or use netcat instead of ftp so that you can put the various
gzips and gunzips in a pipeline with the network transfer,
thus more closely mimicking what Amanda does.  (Of course
this won't make any difference -- but the whole point is to
question assumptions like the one that begins this sentence!)
See the netcat sketch after this list.

  - run gtar manually with the same options as Amanda would run
it with, and see if you can untar the results

  - write a gtar wrapper that computes the MD5 of the tarball on
its way through -- something like this (untested) script, the
interesting parts of which are the use of tee(1) and a FIFO:
mknod /securedirectory/FIFO$$ p

echo $* > /securedirectory/sum$$ &
md5sum < /securedirectory/FIFO$$ >> /securedirectory/sum$$ &

real-gtar ${1+"$@"} | tee /securedirectory/FIFO$$
rm /securedirectory/FIFO$$

Run Amanda with that wrapper installed on the client in place
of the real gtar, with compression turned *off* for the DLE
in question; then compare the MD5 of the tarball on tape with
that computed by the tar wrapper.  (As someone (Paul?)
alluded to, compression tends to make small errors noticeable
because it magnifies them; this is a more dependable way to
catch them, while removing one of the prime suspects -- the
compression itself -- from the loop.)

  - etc, etc, etc.
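
A sketch of the netcat variant mentioned in the list above (traditional
netcat option syntax; some versions want "nc -l 9999" without -p, and the
host, port and file name are examples):

# on the server: receive the stream and test it on the fly
nc -l -p 9999 | gzip -t && echo "stream arrived intact"

# on the client: compress a large file straight into the network pipe,
# roughly mimicking Amanda's dumper/sendbackup pipeline
gzip -c /path/to/bigfile | nc backupserver 9999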

--

|  | /\
|-_|/ Eric Siegerman, Toronto, Ont.[EMAIL PROTECTED]
|  |  /
The animal that coils in a circle is the serpent; that's why so
many cults and myths of the serpent exist, because it's hard to
represent the return of the sun by the coiling of a hippopotamus.
- Umberto Eco, Foucault's Pendulum


Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Eric Siegerman
On Wed, Oct 20, 2004 at 12:52:12PM -0400, Eric Siegerman wrote:
   echo $* > /securedirectory/sum$$ &
   md5sum < /securedirectory/FIFO$$ >> /securedirectory/sum$$ &

Oops: the echo command shouldn't have an "&".

--

|  | /\
|-_|/ Eric Siegerman, Toronto, Ont.[EMAIL PROTECTED]
|  |  /
The animal that coils in a circle is the serpent; that's why so
many cults and myths of the serpent exist, because it's hard to
represent the return of the sun by the coiling of a hippopotamus.
- Umberto Eco, Foucault's Pendulum


Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Toralf Lund
Since I'm still having problems gunzip'ing my large dumps - see separate 
thread, I was just wondering:

Some of you people out there are doing the same kind of thing, right? I 
mean, have

  1. Dumps of directories containing several Gbs of data (up to roughly
 20Gb compressed in my case.)
  2. Use dumptype GNUTAR.
  3. Compress data using compress client fast or compress server fast.
If you do, what exactly are your amanda.conf settings? And can you 
actually extract *all* files from the dumps?

- Toralf


Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Alexander Jolk
Toralf Lund wrote:
1. Dumps of directories containing several Gbs of data (up to roughly
   20Gb compressed in my case.)
2. Use dumptype GNUTAR.
3. Compress data using compress client fast or compress server fast.
 
 If you do, what exactly are your amanda.conf settings? And can you
 actually extract *all* files from the dumps?

Yes, I'm doing this, and I've never had problems recovering all files,
just once when the tape was failing.  I'll send you my amanda.conf
privately.  BTW which version are you using?  I'm at version
2.4.4p1-20030716.

(I'm doing roughly 500GB a night on two sites, one of them has dumps up
to 80GB compressed, and takes a little less than 24h to finish, after my
exclude lists have been adapted.)

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Toralf Lund
Alexander Jolk wrote:
Toralf Lund wrote:
 

  1. Dumps of directories containing several Gbs of data (up to roughly
 20Gb compressed in my case.)
  2. Use dumptype GNUTAR.
  3. Compress data using compress client fast or compress server fast.
If you do, what exactly are your amanda.conf settings? And can you
actually extract *all* files from the dumps?
   

Yes, I'm doing this, and I've never had problems recovering all files,
just once when the tape was failing.
Good...
 I'll send you my amanda.conf
privately.
OK. Thanks. I don't right away see any significant differences from 
what I'm doing, but I'll study it closer...

Oh, there is one thing, by the way: I notice that you use chunksize 
1Gb - and so do I, right now, but for a while the holding disk data 
wasn't split into chunks at all, and I've been wondering if that may 
have been the problem.

 BTW which version are you using?  I'm at version
2.4.4p1-20030716.
 

I've used the release version of 2.4.4p1 for some time, but I'm 
testing 2.4.4p3 right now.

(I'm doing roughly 500GB a night on two sites, one of them has dumps up
to 80GB compressed, and takes a little less than 24h to finish, after my
exclude lists have been adapted.)
Alex
 




Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Joshua Baker-LePain
On Tue, 19 Oct 2004 at 11:35am, Alexander Jolk wrote

 Toralf Lund wrote:
 1. Dumps of directories containing several Gbs of data (up to roughly
20Gb compressed in my case.)
 2. Use dumptype GNUTAR.
 3. Compress data using compress client fast or compress server fast.
  
  If you do, what exactly are your amanda.conf settings? And can you
  actually extract *all* files from the dumps?
 
 Yes, I'm doing this, and I've never had problems recovering all files,
 just once when the tape was failing.  I'll send you my amanda.conf
 privately.  BTW which version are you using?  I'm at version
 2.4.4p1-20030716.

I think that OS and utility (i.e. gnutar and gzip) version info would be 
useful here as well.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Alexander Jolk
Joshua Baker-LePain wrote:
 I think that OS and utility (i.e. gnutar and gzip) version info would be
 useful here as well.

True, forgot that.  I'm on Linux 2.4.19 (Debian woody), using GNU tar
1.13.25 and gzip 1.3.2.  I have never had problems recovering files from
huge dumps.

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Michael Schaller
Hi Toralf,
I've had nearly the same problem this week.
I found out that this was a problem of my tar.
I backed up with GNUTAR and compress server fast.
AMRESTORE restored the file but TAR (on the server!) gave some horrible 
messages like yours.
I transferred the file to the original machine (client) and all worked 
fine.
I guess this is a problem of different tar versions ...

Did you do your tests on the client or on the server?
If the answer is server, then transfer the restored archive to your 
client and untar it there!

Greets
Michael
Toralf Lund schrieb:
Since I'm still having problems gunzip'ing my large dumps - see separate 
thread, I was just wondering:

Some of you people out there are doing the same kind of thing, right? I 
mean, have

  1. Dumps of directories containing several Gbs of data (up to roughly
 20Gb compressed in my case.)
  2. Use dumptype GNUTAR.
  3. Compress data using compress client fast or compress server fast.
If you do, what exactly are your amanda.conf settings? And can you 
actually extract *all* files from the dumps?

- Toralf




Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Toralf Lund
Alexander Jolk wrote:
Joshua Baker-LePain wrote:
 

I think that OS and utility (i.e. gnutar and gzip) version info would be
useful here as well.
   

True, forgot that.  I'm on Linux 2.4.19 (Debian woody), using GNU tar
1.13.25 and gzip 1.3.2.  I have never had problems recovering files from
huge dumps.
 

I'm using Red Hat Linux 9 with kernel version 2.4.20 on the server, and 
I have clients running Linux and SGI IRIX (version 6.5.16f). tar version 
is 1.13.25 on both platforms; gzip is 1.3.3 on Linux, 1.2.4a on IRIX. 
I'm mainly having problems with IRIX clients since that's where the 
large filesystems are connected. These get corrupted with server as well 
as client compression, i.e. I've tried both gzip versions.

Alex
 




Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Paul Bijnens
Michael Schaller wrote:
I found out that this was a problem of my tar.
I backed up with GNUTAR and compress server fast.
AMRESTORE restored the file but TAR (on the server!) gave some horrible 
messages like yours.
I transferred the file to the original machine (client) and all worked 
fine.
I guess this is a problem of different tar versions ...
That's strange and frightening!  Tar is supposed to be a portable
format!  Especially gnutar  -- there are indeed differences with normal
OS-supplied tar formats, but only to overcome limits in filesize, path
name length etc.; but the same version of gnutar on different 
architectures should be able to read each others files.

I'm not 100% sure what happens if you compile tar on an architecture 
without largefile support on and try to restore a file exceeding such
a limit.

Are you sure you used the correct version of tar. I've called mine
gtar to avoid confusion with the OS-supplied tar (actually, amanda
even uses amgtar, which is a link to the correct version, or a
wrapper that does some pre/post processing if needed on e.g. database
DLE's).
--
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Gene Heskett
On Tuesday 19 October 2004 11:10, Paul Bijnens wrote:
Michael Schaller wrote:
 I found out that this was a problem of my tar.
 I backed up with GNUTAR and compress server fast.
 AMRESTORE restored the file but TAR (on the server!) gave some
 horrible messages like yours.
 I transferred the file to the original machine (client) and all
 worked fine.
 I guess this is a problem of different tar versions ...

That's strange and freightening!  Tar is supposed to be a portable
format!  Especially gnutar  -- there are indeed differences with
 normal OS-supplied tar formats, but only to overcome limits in
 filesize, path name length etc.; but the same version of gnutar on
 different architectures should be able to read each others files.

I'm not 100% sure what happens if you compile tar on an architecture
without largefile support on and try to restore a file exceeding
 such a limit.

Are you sure you used the correct version of tar. I've called mine
gtar to avoid confusion with the OS-supplied tar (actually, amanda
even uses amgtar, which is a link to the correct version, or a
wrapper that does some pre/post processing if needed on e.g.
 database DLE's).

We probably should point out to the newbies here that tar-1.13 is 
indeed broken.  In other words, if your tar --version doesn't 
report that it's at least 1.13-19, it may not, and probably is not, 
compatible with anything but itself.  (and I'm not sure that 1.13 
could even recover its own output!)

I hate to be boring and repetitive, but there are those here *now* 
who did not go thru that period of hair removal that 1.13 caused.

-- 
Cheers, Gene
There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order.
-Ed Howdershelt (Author)
99.27% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message
by Gene Heskett are:
Copyright 2004 by Maurice Eugene Heskett, all rights reserved.


Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-15 Thread Toralf Lund
Toralf Lund wrote:
Alexander Jolk wrote:
Toralf Lund wrote:
[...] I get the same kind of problem with harddisk dumps as well as 
tapes, and as it now turns out, also for holding disk files. And the 
disks and tape drive involved aren't even on the same chain.

Actually, I'm starting to suspect that gzip itself is causing the 
problem. Any known issues, there? The client in question does have a 
fairly old version, 1.2.4,

That rings a bell somewhere.  Hasn't there been once a report on this 
list from someone whose zipped backups got corrupted at every (other) 
GB mark?  Something with chunks on the holding disk having a header 
that didn't get stripped off when writing to tape?  

That would of course explain a lot. I wasn't able to find anything on 
this in the mailing list archives, though, and I haven't been able to 
identify header data (except for the one at the start) within the dump 
files, but of course, these are a bit difficult to work with, and I 
may have looked in the wrong place or for the wrong thing.
I've now tested a bit more, and while it may be too early to draw 
conclusions, I'm starting to suspect that it's actually *not* splitting 
up into chunks that leads to trouble. I've now changed from no chunksize 
specification to chunksize 1Gb, and everything looks more promising. 
Actually, I did get a crc error from gzip when trying to unpack a dump 
file created via this setup, too, but that was after all files had been 
extracted (according to the amanda index), and there was no tar error.

BTW, the manual seems to be wrong in this respect; it says that default 
value for chunksize is 1 Gb, but it actually is 0 (meaning no max size.)

Question: Do you know if I can get gzip to report where exactly the 
mismatch is found?
I'd still like to know that.
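
One crude way to get part of that answer (just a sketch; "dumpfile" is a 
placeholder for a holding-disk image with the usual 32k Amanda header in 
front): let gzip decompress until it hits the error and count how many 
bytes came out. That gives the position in the uncompressed tar stream, 
not in the compressed file, but it at least shows how far you got:

  dd if=dumpfile bs=32k skip=1 | gzip -dc | wc -c
  # wc -c prints the number of bytes gzip produced before reporting the error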

Anybody remember which version had this problem, and whether that 
gave the same symptoms?

Alex





Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-14 Thread Toralf Lund
Gene Heskett wrote:
On Wednesday 13 October 2004 11:07, Toralf Lund wrote:
 

Jean-Francois Malouin wrote:
   

[ snip ]
 

Actually, I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have
a fairly old version, 1.2.4, I think (that's the latest one
supplied by SGI, unless they have upgraded it very recently.)
   

Honestly, I missed the earlier post in this thread...
Which version of tar are you using? I've used the SGI provided gzip
for a long time and never experienced that behaviour...
#~ /usr/sbin/gzip -h
gzip 1.2.4 (18 Aug 93)
#~ uname -R
6.5 6.5.19f
[...snip...]
 

The fun part here is that I have two different tars and two
different gzips - the ones supplied with the OS and SGI freeware
variants installed on /usr/freeware (dowloaded from
http://freeware.sgi.com/)
Both incarnations of gzip return the same version string as the one
you included above
/usr/freeware/bin/tar is
tar (GNU tar) 1.13.25
Not sure how to get version string from /usr/bin/tar, but I have
# uname -R
6.5 6.5.16f
Based on the dump file headers, I would assume that /usr/freeware
variants are used for both tar and gzip. Actually, maybe that one is
*required* by Amanda, since it wants GNU tar, and the one on
/usr/bin is not, as far as I can tell. Perhaps there wasn't really
any point in installing freeware version of gzip, or will Amanda
make assumptions about binary locations?
   

- T
   

jf
 

You can tell amanda in the ./configure options given,
where the 
correct tar actually lives and it will be hardcoded into it.  And to 
get the version from the other tar, /usr/bin/tar --version should 
work,

Nope. This is not GNU tar at all. But I'm fairly sure it isn't used...
and if its not 1.13-19 or newer, use the other one that is -25 
on your system.

Also, the gzip here is 1.3.3, dated in 2002.  There may have been 
fixes to it, probably in the 2GB file sizes areas.
 

Ahem. If 2GB data is or has been a problem, then I'm definitely doomed, 
since Amanda dumps tend to get a lot larger than that (on our systems, 
and I would assume, also in general.)


Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-14 Thread Geert Uytterhoeven
On Thu, 14 Oct 2004, Toralf Lund wrote:
 Gene Heskett wrote:
  Also, the gzip here is 1.3.3, dated in 2002.  There may have been fixes to
  it, probably in the 2GB file sizes areas.
   
 Ahem. If 2GB data is or has been a problem, then I'm definitely doomed, since
 Amanda dumps tend to get a lot larger than that (on our systems, and I would
 assume, also in general.)

If gzip has problems in the > 2 GiB file size areas, usually it's related to
displaying statistics (try `gzip -l' on a large compressed file).

I never heard of problems with the actual compression, which is stream based
and thus shouldn't suffer from such limitations.

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [EMAIL PROTECTED]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say programmer or something like that.
-- Linus Torvalds


Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-14 Thread Toralf Lund


 

The fun part here is that I have two different tars and two different 
gzips - the ones supplied with the OS and SGI freeware variants 
installed on /usr/freeware (dowloaded from http://freeware.sgi.com/)
   

Do not use the OS supplied tar! You'll hit a bug.
 

Yes. I do seem to remember that I took care to make sure it wouldn't be 
used, when I installed Amanda.

I installed the freeware version (GNU tar 1.13.25) a while ago
without a hitch, along with /usr/sbin/gzip.
 

Both incarnations of gzip return the same version string as the one you 
included above

/usr/freeware/bin/tar is
tar (GNU tar) 1.13.25
Not sure how to get version string from /usr/bin/tar, but I have
# uname -R
6.5 6.5.16f
Based on the dump file headers, I would assume that /usr/freeware 
variants are used for both tar and gzip. Actually, maybe that one is 
*required* by Amanda, since it wants GNU tar, and the one on /usr/bin is 
not, as far as I can tell. Perhaps there wasn't really any point in 
installing freeware version of gzip, or will Amanda make assumptions 
about binary locations?
   

Check your amandad debug files and look at the paths and defs
 

Good idea. I should also be able to find the actual build setup, but 
this one seems easier... It looks like /usr/freeware/bin/tar and 
/usr/freeware/bin/gzip are used - see below.

Notice, however, that I've now tried some dumps with compress server, 
and I can actually reproduce the problem on those, so if it's a gzip 
problem, it must be one that's common to multiple versions and platforms.

The one on the server is gzip-1.3.3-9 from Red Hat.
and check for GNUTAR=/usr/freeware/bin/tar ,
COMPRESS_PATH=/usr/sbin/gzip and UNCOMPRESS_PATH=/usr/sbin/gzip
Mine has:
amandad: paths: bindir=/opt/amanda/amanda2/bin
amandad:sbindir=/opt/amanda/amanda2/sbin
amandad:libexecdir=/opt/amanda/amanda2/libexec
amandad:mandir=/opt/amanda/amanda2/man
amandad:AMANDA_TMPDIR=/tmp/amanda-conf2
amandad:AMANDA_DBGDIR=/tmp/amanda-conf2
amandad:CONFIG_DIR=/opt/amanda/amanda2/etc/amanda
amandad:DEV_PREFIX=/dev/dsk/ RDEV_PREFIX=/dev/rdsk/
amandad:DUMP=/sbin/dump RESTORE=/sbin/restore VDUMP=UNDEF
amandad:VRESTORE=UNDEF XFSDUMP=/sbin/xfsdump
amandad:XFSRESTORE=/sbin/xfsrestore VXDUMP=UNDEF VXRESTORE=UNDEF
amandad:SAMBA_CLIENT=UNDEF GNUTAR=/usr/freeware/bin/tar
amandad:COMPRESS_PATH=/usr/sbin/gzip
amandad:UNCOMPRESS_PATH=/usr/sbin/gzip LPRCMD=/usr/bsd/lpr
amandad:MAILER=/usr/sbin/Mail
amandad: listed_incr_dir=/opt/amanda/amanda2/var/amanda/gnutar-lists
amandad: defs:  DEFAULT_SERVER=bullcalf DEFAULT_CONFIG=stk_40-conf2
amandad:DEFAULT_TAPE_SERVER=bullcalf
amandad:DEFAULT_TAPE_DEVICE=/hw/tape/tps12d2nrnsv HAVE_MMAP
amandad:HAVE_SYSVSHM LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
amandad:AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
amandad:CLIENT_LOGIN=amanda FORCE_USERID HAVE_GZIP
amandad:COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
amandad:COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
 

Mine is:
amandad: paths: bindir=/usr/freeware/bin sbindir=/usr/freeware/bin
amandad:libexecdir=/usr/freeware/libexec
amandad:mandir=/usr/freeware/man AMANDA_TMPDIR=/tmp/amanda
amandad:AMANDA_DBGDIR=/tmp/amanda
amandad:CONFIG_DIR=/usr/freeware/etc/amanda
amandad:DEV_PREFIX=/dev/dsk/ RDEV_PREFIX=/dev/rdsk/
amandad:DUMP=/sbin/dump RESTORE=/sbin/restore
amandad:XFSDUMP=/sbin/xfsdump XFSRESTORE=/sbin/xfsrestore
amandad:GNUTAR=/usr/freeware/bin/tar
amandad:COMPRESS_PATH=/usr/freeware/bin/gzip
amandad:UNCOMPRESS_PATH=/usr/freeware/bin/gzip
amandad:MAILER=/usr/sbin/Mail
amandad:listed_incr_dir=/usr/freeware/var/lib/amanda/gnutar-lists
amandad: defs:  DEFAULT_SERVER=localhost DEFAULT_CONFIG=DailySet1
amandad:DEFAULT_TAPE_SERVER=localhost
amandad:DEFAULT_TAPE_DEVICE=/dev/null HAVE_MMAP HAVE_SYSVSHM
amandad:LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
amandad:AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
amandad:CLIENT_LOGIN=amanda FORCE_USERID HAVE_GZIP
amandad:COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
amandad:COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
amandad: time 0.002: got packet:
HTH
jf
 




Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-14 Thread Toralf Lund
Alexander Jolk wrote:
Toralf Lund wrote:
[...] I get the same kind of problem with harddisk dumps as well as 
tapes, and as it now turns out, also for holding disk files. And the 
disks and tape drive involved aren't even on the same chain.

Actually, I'm starting to suspect that gzip itself is causing the 
problem. Any known issues, there? The client in question does have a 
fairly old version, 1.2.4,

That rings a bell somewhere.  Hasn't there been once a report on this 
list from someone whose zipped backups got corrupted at every (other) 
GB mark?  Something with chunks on the holding disk having a header 
that didn't get stripped off when writing to tape?  
That would of course explain a lot. I wasn't able to find anything on 
this in the mailing list archives, though, and I haven't been able to 
identify header data (except for the one at the start) within the dump 
files, but of course, these are a bit difficult to work with, and I may 
have looked in the wrong place or for the wrong thing.

Question: Do you know if I can get gzip to report where exactly the 
mismatch is found?

Anybody remember which version had this problem, and whether that gave 
the same symptoms?

Alex




tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Toralf Lund
I'm having serious problems with full restore of a GNUTAR dump. Simply 
put, if I do amrestore, then tar xvf dump file, tar will exit with

tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: Error exit delayed from previous errors
after extracting most, but not all, files - if the amanda index is 
anything to go by. There is a significant delay between the two first 
and last error messages, which leads me to believe that there is more 
data, but tar just doesn't understand it. I get the same behaviour if I 
extract files using amrecover instead of amrestore + tar. Also, I 
actually have two dumps of the filesystem in question: One on tape and 
one on harddisk (well, I have more than one tape, too, but the others 
are stored elsewhere.) Both fail in the manner described above, but at 
different points. Also notice the following behaviour when unpacking the 
harddisk dump in a more direct manner:

# dd if=/dumps/mirror/d4/data/00013.fileserv._scanner2_Hoyde.6 bs=32k 
skip=1 | tar -xvpzkf -

[ file extract info skipped ]
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
37800+0 records in
37800+0 records out
gzip: stdin: invalid compressed data--crc error
tar: Child returned status 1
tar: Error exit delayed from previous errors
All this with amanda 2.4.4p1 with server on Linux and clients on SGI 
IRIX as well as Linux; I've tried unpacking on both platforms.

HELP
- Toralf



Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Alexander Jolk
Toralf Lund wrote:
 tar: Skipping to next header
 tar: Archive contains obsolescent base-64 headers
 37800+0 records in
 37800+0 records out
 
 gzip: stdin: invalid compressed data--crc error
 tar: Child returned status 1
 tar: Error exit delayed from previous errors

I've had the same message from tar on what were apparently erroneous
backups.  I believe the `obsolescent base-64 header' message is what you
get whenever tar's input is corrupted, which seems to be confirmed by
gzip's `crc error'.  I'd venture to say your backups are hosed; if that
arrives systematically, you might want to investigate your SCSI chain. 
(Sacrificed a goat lately?)

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Toralf Lund
Alexander Jolk wrote:
Toralf Lund wrote:
 

tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
37800+0 records in
37800+0 records out
gzip: stdin: invalid compressed data--crc error
tar: Child returned status 1
tar: Error exit delayed from previous errors
   

I've had the same message from tar on what were apparently erroneous
backups.  I believe the `obsolescent base-64 header' message is what you
get whenever tar's input is corrupted, which seems to be confirmed by
gzip's `crc error'.
I was hoping you wouldn't say that ;-( But yes, I guess it's likely that 
the actual backup is corrupted.

 I'd venture to say your backups are hosed; if that
arrives systematically, you might want to investigate your SCSI chain. 
 

Well, yes, I've now found that I do get this for different dump files 
(but not all of them), so I guess some serious problem with the setup is 
likely. I don't think SCSI-issues is likely to be the cause, though, 
since I get the same kind of problem with harddisk dumps as well as 
tapes, and as it now turns out, also for holding disk files. And the 
disks and tape drive involved aren't even on the same chain.

Actually, I'm starting to suspect that gzip itself is causing the 
problem. Any known issues, there? The client in question does have a 
fairly old version, 1.2.4, I think (that's the latest one supplied by 
SGI, unless they have upgraded it very recently.)

Also, to my great relief, it turned out that I could actually extract 
most of the files I wanted by skipping over the error point via dd 
skip=... tar still gave me warnings when I did this (unsurprisingly 
enough since the data was probably not aligned to a file boundary after 
the skip), but this time it was actually able to find files after it 
did. Based on this, I'm thinking that the original tar should at least 
have been able to resync itself. Any ideas why it didn't?


(Sacrificed a goat lately?)
 

Apparently not...
- T


Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Jean-Francois Malouin
* Toralf Lund [EMAIL PROTECTED] [20041013 09:43]:
 Alexander Jolk wrote:
 
 Toralf Lund wrote:
  
 
 tar: Skipping to next header
 tar: Archive contains obsolescent base-64 headers
 37800+0 records in
 37800+0 records out
 
 gzip: stdin: invalid compressed data--crc error
 tar: Child returned status 1
 tar: Error exit delayed from previous errors

 
 
 I've had the same message from tar on what were apparently erroneous
 backups.  I believe the `obsolescent base-64 header' message is what you
 get whenever tar's input is corrupted, which seems to be confirmed by
 gzip's `crc error'.
 
 I was hoping you wouldn't say that ;-( But yes, I guess it's likely that 
 the actual backup is corrupted.
 
  I'd venture to say your backups are hosed; if that
 arrives systematically, you might want to investigate your SCSI chain. 
  
 
 Well, yes, I've now found that I do get this for different dump files 
 (but not all of them), so I guess some serious problem with the setup is 
 likely. I don't think SCSI-issues is likely to be the cause, though, 
 since I get the same kind of problem with harddisk dumps as well as 
 tapes, and as it now turns out, also for holding disk files. And the 
 disks and tape drive involved aren't even on the same chain.
 
 Actually, I'm starting to suspect that gzip itself is causing the 
 problem. Any known issues, there? The client in question does have a 
 fairly old version, 1.2.4, I think (that's the latest one supplied by 
 SGI, unless they have upgraded it very recently.)

Honestly, I missed the earlier post in this thread...
Which version of tar are you using? I've used the SGI provided gzip
for a long time and never experienced that behaviour...

#~ /usr/sbin/gzip -h
gzip 1.2.4 (18 Aug 93)

#~ uname -R
6.5 6.5.19f

[...snip...]

 - T

jf
-- 


Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Toralf Lund
Jean-Francois Malouin wrote:
[ snip ]
Actually, I'm starting to suspect that gzip itself is causing the 
problem. Any known issues, there? The client in question does have a 
fairly old version, 1.2.4, I think (that's the latest one supplied by 
SGI, unless they have upgraded it very recently.)
   

Honestly, I missed the earlier post in this thread...
Which version of tar are you using? I've used the SGI provided gzip
for a long time and never experienced that behaviour...
#~ /usr/sbin/gzip -h
gzip 1.2.4 (18 Aug 93)
#~ uname -R
6.5 6.5.19f
[...snip...]
 

The fun part here is that I have two different tars and two different 
gzips - the ones supplied with the OS and SGI freeware variants 
installed on /usr/freeware (dowloaded from http://freeware.sgi.com/)

Both incarnations of gzip return the same version string as the one you 
included above

/usr/freeware/bin/tar is
tar (GNU tar) 1.13.25
Not sure how to get version string from /usr/bin/tar, but I have
# uname -R
6.5 6.5.16f
Based on the dump file headers, I would assume that /usr/freeware 
variants are used for both tar and gzip. Actually, maybe that one is 
*required* by Amanda, since it wants GNU tar, and the one on /usr/bin is 
not, as far as I can tell. Perhaps there wasn't really any point in 
installing freeware version of gzip, or will Amanda make assumptions 
about binary locations?

- T
   

jf
 




Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Gene Heskett
On Wednesday 13 October 2004 11:07, Toralf Lund wrote:
Jean-Francois Malouin wrote:
 [ snip ]

Actually, I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have
 a fairly old version, 1.2.4, I think (that's the latest one
 supplied by SGI, unless they have upgraded it very recently.)

Honestly, I missed the earlier post in this thread...
Which version of tar are you using? I've used the SGI provided gzip
for a long time and never experienced that behaviour...

#~ /usr/sbin/gzip -h
gzip 1.2.4 (18 Aug 93)

#~ uname -R
6.5 6.5.19f

[...snip...]

The fun part here is that I have two different tars and two
 different gzips - the ones supplied with the OS and SGI freeware
 variants installed on /usr/freeware (dowloaded from
 http://freeware.sgi.com/)

Both incarnations of gzip return the same version string as the one
 you included above

/usr/freeware/bin/tar is

tar (GNU tar) 1.13.25


Not sure how to get version string from /usr/bin/tar, but I have

# uname -R
6.5 6.5.16f

Based on the dump file headers, I would assume that /usr/freeware
variants are used for both tar and gzip. Actually, maybe that one is
*required* by Amanda, since it wants GNU tar, and the one on
 /usr/bin is not, as far as I can tell. Perhaps there wasn't really
 any point in installing freeware version of gzip, or will Amanda
 make assumptions about binary locations?

- T

jf
You can tell amanda in the ./configure options given, where the 
correct tar actually lives and it will be hardcoded into it.  And to 
get the version from the other tar, /usr/bin/tar --version should 
work, and if its not 1.13-19 or newer, use the other one that is -25 
on your system.

Also, the gzip here is 1.3.3, dated in 2002.  There may have been 
fixes to it, probably in the 2GB file sizes areas.

-- 
Cheers, Gene
There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order.
-Ed Howdershelt (Author)
99.27% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message
by Gene Heskett are:
Copyright 2004 by Maurice Eugene Heskett, all rights reserved.


Re: Question about gzip on the server

2003-11-13 Thread Paul Bijnens
Dana Bourgeois wrote:

OK, so all my clients are compressing.  I have 13 clients and about 5 of
them are Solaris using dump, the rest are using tar.  Could someone explain
why the dumpers are also spawning a 'gzip --best' process?  They only use 5
or 6 seconds of CPU so they are not doing much but I don't see why they
start.
The index is always compressed on the server with the '--best' option.

Another question is the status 'no-bandwidth'.  I have been assuming this is
...

I wonder too.  If you try to follow the sequence in amdump, I sometimes
notice the real reason is something else (usually client constraints
in my case).
--
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



Question about gzip on the server

2003-11-12 Thread Dana Bourgeois
OK, so all my clients are compressing.  I have 13 clients and about 5 of
them are Solaris using dump, the rest are using tar.  Could someone explain
why the dumpers are also spawning a 'gzip --best' process?  They only use 5
or 6 seconds of CPU so they are not doing much but I don't see why they
start.

Another question is the status 'no-bandwidth'.  I have been assuming this is
network bandwidth.  I am running 89 DLEs, 13 clients, 6 dumpers (also tried
4 and 13) and amstatus reports network free as high as 10200 and as low as
1200.  With 4 dumpers, all dumpers ran without a break.  With 13 dumpers, by
the middle of the run 11 were idle for no bandwidth.  I raised my netusage
from 1200 to 4200 and tonight with 6 dumpers, by the last third of the run,
one dumper idled with no bandwidth yet network usage was 10200.  I'm not
sure I'm reading this right.  10200 free would suggest that something like
1200 is being used and I thought that bandwidth limiting wouldn't happen
until the next dumper to start would push network usage ABOVE 4200.  I also
assume that a usage of 4200 would show up as something like network free of
about 7000 yet I have a network free of 10200.  What am I missing here?


Dana Bourgeois




amanda + gzip errors on debian?

2003-07-15 Thread Kurt Yoder
Hello list

I've been having a problem with amanda and gzip on my debian backup
servers for a while now. I do my backups with gzip compression, and
they seem to go fine. However, upon verifying the backups, I notice
gzip errors. I get two different kinds of errors: crc errors and
format violated errors. The errors don't happen on all dump
images, usually just the bigger ones (which means level 0's, which
means I'm screwed for restores!). I've had them crop up from 300 MB
into the image to 10 GB into it, and anywhere in between. At least
one gzipped image fails on every backup.

I installed a second Debian backup server, and it had the exact same
problem right after installation. I re-did my backup server on a
different hard disk, and still had the problem. I had been using a
security-updated gzip, so I tried the pre-security gzip, but had the
same problem.

I tried manually gzipping and unzipping images on the backup server
without using amanda at all, and had the same problem. Aha! you
say, it must not be amanda then. Well... maybe. I tried a fresh
debian install with just the base system on it which includes gzip.
I created a 20 GB tar file, gzipped and gunzipped it, and had no
errors. The only link between all these boxes with gzip problems is
that the Debian amanda-server package was installed.

Anyone else noticed this problem? And fixed it?

-- 
Kurt Yoder
Sport  Health network administrator



Re: amanda + gzip errors on debian?

2003-07-15 Thread Niall O Broin
On Tuesday 15 July 2003 16:07, Kurt Yoder wrote:

 they seem to go fine. However, upon verifying the backups, I notice
 gzip errors. I get two different kinds of errors: crc errors and
 format violated errors. The errors don't happen on all dump
 images, usually just the bigger ones (which means level 0's, which
 means I'm screwed for restores!). I've had them crop up from 300 MB
 into the image to 10 GB into it, and anywhere in between. At least
 one gzipped image fails on every backup.

.
.
.

 Anyone else noticed this problem? And fixed it?

What's your backup device? If it's a SCSI tape then I'd say your problem is 
most likely SCSI cabling termination. I had this a long time ago and it drove 
me nuts. I eventually found that the SCSI chain wasn't terminated correctly. 
Just like you, I would only encounter the problems on big backups (because it 
was only producing occasional errors, and the bigger the file, the more 
likely I was to encounter it). The way to fix the problem is to make sure that 
all SCSI cable connections are well made and that termination is correct. 
Using good quality SCSI cables is a big help too.

If OTOH you're not using SCSI tape then I'm afraid I'm all out of clue.


Kindest regards,


Niall  O Broin



Re: amanda + gzip errors on debian?

2003-07-15 Thread Kurt Yoder

Niall O Broin said:

snipped

 What's your backup device ? If it's a SCSI tape then I'd say your
 problem is
 most likely SCSI cabling termination. I had this a long time ago and
 it drove
 me nuts. I eventually found that the SCSI chain wasn't terminated
 correctly.
 Just like you, I would only encounter the problems on big backups
 (because it
 was only producing occasional errors, and the bigger the file, the
 more
 likely I was to encounter it. The way to fix the problem is to make
 sure that
 all SCSI cable connections are well made and that termination is
 correct.
 Using good quality SCSI cables is a big help too.

 If OTOH you're not using SCSI tape then I'm afraid I'm all out of
 clue.

I had a similar thought when I first had this problem. However, I
was able to duplicate the problem simply by gzipping a big file to
my ATA/IDE holding disk. So I'm certain it's not a scsi problem.

-- 
Kurt Yoder
Sport  Health network administrator



Re: amanda + gzip errors on debian?

2003-07-15 Thread Eric Siegerman
On Tue, Jul 15, 2003 at 12:10:27PM -0400, Kurt Yoder wrote:
 However, I
 was able to duplicate the problem simply by gzipping a big file to
 my ATA/IDE holding disk. So I'm certain it's not a scsi problem.

Is it repeatable?  I.e. if you gzip the *same* file five times,
do you get the same error five times, at the same location?  Or
do you get five different errors?  Or maybe a mix of errors and
successes?

The former would point to a bug in gzip; the latter to hardware
problems of some sort (or kernel, but I'd bet on the hardware).

Try that test on your freshly installed Debian box, that you say
works fine.  Then install (but don't use) Amanda and try it
again.

Divide and conquer :-)
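
Something along these lines would do it (only a sketch; the tar file and 
holding-disk path are placeholders):

  # compress the same file five times and integrity-check each result;
  # the same failure at the same spot every time points at gzip (or the
  # kernel), a different failure each time points at hardware
  for i in 1 2 3 4 5; do
      gzip -c /holding/bigfile.tar > /holding/test$i.gz
      gzip -t /holding/test$i.gz && echo "pass $i: OK" || echo "pass $i: FAILED"
  done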

--

|  | /\
|-_|/ Eric Siegerman, Toronto, Ont.[EMAIL PROTECTED]
|  |  /
When I came back around from the dark side, there in front of me would
be the landing area where the crew was, and the Earth, all in the view
of my window. I couldn't help but think that there in front of me was
all of humanity, except me.
- Michael Collins, Apollo 11 Command Module Pilot



Re: amanda + gzip errors on debian?

2003-07-15 Thread C. Chan
Just a note that I have experienced a similar problem, but with
Redhat and Mandrake rather than Debian Linux.  The dump format
is GNU tar with gzip compressed on the client side, written to
a large holding disk then flushed to tape.  The archives on
holding disk verify OK, but the problem is retrieving them
from tape.  So I suspect it is a hardware, SCSI cabling or kernel
problem when writing to tape.  It's mostly dumps > 8-10GBs that
have problems, and then only about 1 out of 3 dumps.

However, the dumps done without using gzip have had no problems
during restores, and I've done a lot more of those than
restores of the gzip'd dumps, which I only used on clients
with fast CPUs which don't run batch jobs at night.  I've turned
on hardware compression and turned off client side compression as
an acceptable workaround.  Still trying to find a way
to resurrect corrupt gzip tar archives.

Also Sprach Eric Siegerman:

 On Tue, Jul 15, 2003 at 12:10:27PM -0400, Kurt Yoder wrote:
  However, I
  was able to duplicate the problem simply by gzipping a big file to
  my ATA/IDE holding disk. So I'm certain it's not a scsi problem.
 
 Is it repeatable?  I.e. if you gzip the *same* file five times,
 do you get the same error five times, at the same location?  Or
 do you get five different errors?  Or maybe a mix of errors and
 successes?
 
 The former would point to a bug in gzip; the latter to hardware
 problems of some sort (or kernel, but I'd bet on the hardware).
 
 Try that test on your freshly installed Debian box, that you say
 works fine.  Then install (but don't use) Amanda and try it
 again.
 
 Divide and conquer :-)
 
 --
 
 |  | /\
 |-_|/ Eric Siegerman, Toronto, Ont.[EMAIL PROTECTED]
 |  |  /
 When I came back around from the dark side, there in front of me would
 be the landing area where the crew was, and the Earth, all in the view
 of my window. I couldn't help but think that there in front of me was
 all of humanity, except me.
   - Michael Collins, Apollo 11 Command Module Pilot
 


C. Chan [EMAIL PROTECTED] 
GPG Public Key registered at pgp.mit.edu 


Re: amrecover failure, corrupted gzip file?

2003-03-29 Thread Gene Heskett
On Fri March 28 2003 23:32, Gene Heskett wrote:
On Fri March 28 2003 12:46, Mike Simpson wrote:
Hi --

 Any tips or tricks or other thoughts?  Is this the Linux
 dump/restore problem I've seen talked about on the mailing
 list? I don't understand how the gzip file could be corrupted
 by a problem internal to the dump/restore cycle.

Answering my own question after a week of testing ... I think
 I've discovered a bug in Amanda 2.4.4.  This is what I've
 deciphered:

 (1) Restores of backup sets that compressed to < 1 gb worked
 fine. Backup sets that, when compressed, were > 1 GB blew up
 every time with gzip corruption error messages.  This was
 consistent across OS's (Solaris 8, RedHat 7.x), filesystem types
 (ufs, vxfs, ext2/3), and backup modes (DUMP, GNUTAR).

 (2) The gzip corruption message always occurred at the same spot,
 i.e.

  gzip: stdin: invalid compressed data--format violated
  Error 32 (Broken pipe) offset 1073741824+131072, wrote 0


My recovered test file wasn't gzipped, and it didn't contain a block 
of zeros at that offset.  khexedit had a high old time loading 
that, took about 10 minutes to load it, and about 15 to find the 
exact offset as the scroller went about 1000x too fast per pixel it 
moved.
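
For the record, you can also jump straight to a reported offset instead of 
scrolling in a hex editor. A sketch, with the file name a placeholder and 
the offset simply taken from the error above (1073741824 + 131072 = 1073872896):

  # seek to the byte offset and dump a few hundred bytes around it
  dd if=recovered.image bs=1 skip=1073872896 count=256 2>/dev/null | od -A d -c

On a regular file dd just seeks to that point, so this is quick even on a 
multi-GB image.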

And I just checked my amanda.conf, and I am indeed using a 1G chunk 
size.  So everything seems to match except it wasn't gzipped, and 
the recovered file is good.  So, while I didn't find anything, this 
is one more clue to plug into the detectives data.  I could change 
that disklist entry to make it run it thru gzip, but it would 
probably expand quite a bit as its my cd collection, ogg-encoded.  
ISTR the one time I tried that it compressed to 134%.  But I will 
try it for the next dumpcycle.  But dumpcycle is 5 days too, so 
that answer won't be instant.

In the meantime, if you figure it out, be sure to post how you did 
it.

[...]

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.25% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: amrecover failure, corrupted gzip file?

2003-03-29 Thread Jean-Louis Martineau
Hi Mike,

Thanks for your good description of the problem.

You found a bug in the way the taper reads a file from the holding disk
if blocksize > 32k.

There are two possible workarounds (untested).

1. Set your chunksize to '32k + n * blocksize' where n is an integer.
2. Set file-pad to false.

Setting chunksize to an arbitrarily big value will not work if a
dump must be written to two different holding disks; you should set
it according to workaround 1.
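
For example, a quick way to compute a conforming value in the shell (the 
128 kbyte blocksize and the n below are only an illustration, not a 
recommendation):

  # chunksize must be 32k + n * blocksize; with blocksize 128k and n = 8192
  # this gives 1048608 kbytes, i.e. roughly 1 GByte plus 32 KBytes
  blocksize_kb=128; n=8192
  echo $((32 + n * blocksize_kb))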

Jean-Louis

On Fri, Mar 28, 2003 at 11:46:11AM -0600, Mike Simpson wrote:
 Hi --
 
  Any tips or tricks or other thoughts?  Is this the Linux dump/restore 
  problem I've seen talked about on the mailing list?  I don't 
  understand how the gzip file could be corrupted by a problem internal 
  to the dump/restore cycle.
 
 Answering my own question after a week of testing ... I think I've 
 discovered a bug in Amanda 2.4.4.  This is what I've deciphered:
 
 (1) Restores of backup sets that compressed to < 1 gb worked fine.
 Backup sets that, when compressed, were > 1 GB blew up every time
 with gzip corruption error messages.  This was consistent across
 OS's (Solaris 8, RedHat 7.x), filesystem types (ufs, vxfs, 
 ext2/3), and backup modes (DUMP, GNUTAR).
 
 (2) The gzip corruption message always occurred at the same spot, i.e.
 
   gzip: stdin: invalid compressed data--format violated
   Error 32 (Broken pipe) offset 1073741824+131072, wrote 0
 
 which is 1024^3 bytes + 128k.  I note that in my Amanda 
 configuration, I had chunksize defined to 1 gbyte and 
 blocksize set to 128 kbytes (the chunksize was just for
 convenience, the blocksize seems to maximize my write 
 performance).
 
 (3) I used dd to retrieve one of the compressed images that was 
 failing.  At the 1 gb mark in the file, the more-or-less random
 bytes of the compressed stream were interrupted by exactly 32k of
 zeroed bytes.  I note that 32k is Amanda's default blocksize.
 
 (4) For last night's backups, I set chunksize to an arbitrarily
 high number, to prevent chunking, which works fine in my setup
 because I use one very large ext3 partition for all of my Amanda
 holding disk, which nullifies concerns about filesystem size and
 max file size.  The restores I've done this morning have all 
 worked fine, including the ones that had previously shown the
 corruption.
 
 I'm not enough of a C coder to come up with a real patch to fix this. 
 I'm hoping the above gives enough clues to let someone who _is_ a 
 real C coder do so.
 
 If this should be posted to the amanda-hackers list, please feel free 
 to do so, or let me know and I'll do it.  Also, if any other 
 information would be helpful, just ask.
 
 Thanks,
 
 -mgs
 
 

-- 
Jean-Louis Martineau email: [EMAIL PROTECTED] 
Departement IRO, Universite de Montreal
C.P. 6128, Succ. CENTRE-VILLETel: (514) 343-6111 ext. 3529
Montreal, Canada, H3C 3J7Fax: (514) 343-5834


Re: amrecover failure, corrupted gzip file?

2003-03-29 Thread Gene Heskett
On Sat March 29 2003 12:04, Jean-Louis Martineau wrote:
Hi Mike,

Thanks for your good description of the problem.

You found a bug in the way the taper reads a file from the holding
 disk if blocksize > 32k.

There are two possible workarounds (untested).

1. Set your chunksize to '32k + n * blocksize' where n is an
 integer.
2. Set file-pad to false.

Setting chunksize to an arbitrarily big value will not work if a
dump must be written to two different holding disks; you should set
it according to workaround 1.

Jean-Louis

Humm.  While looking for the file-pad variable, right above it is 
this statement, which seems like a rather odd way of saying that the 
blocksize as used is actually fixed at 32768 bytes
---
blocksize int
   Default: 32 kbytes.  How much data will be written in each  tape
   record.   The minimum blocksize value is 32 KBytes.  The maximum
   blocksize value is 32 KBytes.
---
Please comment or discuss this.

On Fri, Mar 28, 2003 at 11:46:11AM -0600, Mike Simpson wrote:
 Hi --

  Any tips or tricks or other thoughts?  Is this the Linux
  dump/restore problem I've seen talked about on the mailing
  list?  I don't understand how the gzip file could be corrupted
  by a problem internal to the dump/restore cycle.

 Answering my own question after a week of testing ... I think
 I've discovered a bug in Amanda 2.4.4.  This is what I've
 deciphered:

 (1) Restores of backup sets that compressed to < 1 gb worked
 fine. Backup sets that, when compressed, were > 1 GB blew up
 every time with gzip corruption error messages.  This was
 consistent across OS's (Solaris 8, RedHat 7.x), filesystem types
 (ufs, vxfs, ext2/3), and backup modes (DUMP, GNUTAR).

 (2) The gzip corruption message always occurred at the same spot,
 i.e.

  gzip: stdin: invalid compressed data--format violated
  Error 32 (Broken pipe) offset 1073741824+131072, wrote 0

 which is 1024^3 bytes + 128k.  I note that in my Amanda
 configuration, I had chunksize defined to 1 gbyte and
 blocksize set to 128 kbytes (the chunksize was just for
 convenience, the blocksize seems to maximize my write
 performance).

 (3) I used dd to retrieve one of the compressed images that
 was failing.  At the 1 gb mark in the file, the more-or-less
 random bytes of the compressed stream were interrupted by
 exactly 32k of zeroed bytes.  I note that 32k is Amanda's
 default blocksize.

 (4) For last night's backups, I set chunksize to an
 arbitrarily high number, to prevent chunking, which works fine
 in my setup because I use one very large ext3 partition for all
 of my Amanda holding disk, which nullifies concerns about
 filesystem size and max file size.  The restores I've done this
 morning have all worked fine, including the ones that had
 previously shown the corruption.

 I'm not enough of a C coder to come up with a real patch to fix
 this. I'm hoping the above gives enough clues to let someone who
 _is_ a real C coder do so.

 If this should be posted to the amanda-hackers list, please feel
 free to do so, or let me know and I'll do it.  Also, if any
 other information would be helpful, just ask.

 Thanks,

 -mgs

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.25% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: amrecover failure, corrupted gzip file?

2003-03-28 Thread Mike Simpson
Hi --

 Any tips or tricks or other thoughts?  Is this the Linux dump/restore 
 problem I've seen talked about on the mailing list?  I don't 
 understand how the gzip file could be corrupted by a problem internal 
 to the dump/restore cycle.

Answering my own question after a week of testing ... I think I've 
discovered a bug in Amanda 2.4.4.  This is what I've deciphered:

(1) Restores of backup sets that compressed to < 1 gb worked fine.
Backup sets that, when compressed, were > 1 GB blew up every time
with gzip corruption error messages.  This was consistent across
OS's (Solaris 8, RedHat 7.x), filesystem types (ufs, vxfs, 
ext2/3), and backup modes (DUMP, GNUTAR).

(2) The gzip corruption message always occurred at the same spot, i.e.

gzip: stdin: invalid compressed data--format violated
Error 32 (Broken pipe) offset 1073741824+131072, wrote 0

which is 1024^3 bytes + 128k.  I note that in my Amanda 
configuration, I had chunksize defined to 1 gbyte and 
blocksize set to 128 kbytes (the chunksize was just for
convenience, the blocksize seems to maximize my write 
performance).

(3) I used dd to retrieve one of the compressed images that was 
failing.  At the 1 gb mark in the file, the more-or-less random
bytes of the compressed stream were interrupted by exactly 32k of
zeroed bytes.  I note that 32k is Amanda's default blocksize.  (A
rough sketch of this check follows the list.)

(4) For last night's backups, I set chunksize to an arbitrarily
high number, to prevent chunking, which works fine in my setup
because I use one very large ext3 partition for all of my Amanda
holding disk, which nullifies concerns about filesystem size and
max file size.  The restores I've done this morning have all 
worked fine, including the ones that had previously shown the
corruption.
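
A rough sketch of the check described in (3), with the image name a 
placeholder; it dumps three 32k blocks bracketing the 1 GB mark (note that 
the od offsets are relative to the start of the piped data, i.e. to 1 GB 
minus 32k, not to the start of the file):

  dd if=dumpimage.gz bs=32k skip=32767 count=3 2>/dev/null | od -A d -x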

I'm not enough of a C coder to come up with a real patch to fix this. 
I'm hoping the above gives enough clues to let someone who _is_ a 
real C coder do so.

If this should be posted to the amanda-hackers list, please feel free 
to do so, or let me know and I'll do it.  Also, if any other 
information would be helpful, just ask.

Thanks,

-mgs





Re: amrecover failure, corrupted gzip file?

2003-03-28 Thread Gene Heskett
On Fri March 28 2003 12:46, Mike Simpson wrote:
Hi --

 Any tips or tricks or other thoughts?  Is this the Linux
 dump/restore problem I've seen talked about on the mailing list?
  I don't understand how the gzip file could be corrupted by a
 problem internal to the dump/restore cycle.

Answering my own question after a week of testing ... I think I've
discovered a bug in Amanda 2.4.4.  This is what I've deciphered:

(1) Restores of backup sets that compressed to < 1 gb worked fine.
Backup sets that, when compressed, were > 1 GB blew up every
 time with gzip corruption error messages.  This was consistent
 across OS's (Solaris 8, RedHat 7.x), filesystem types (ufs, vxfs,
 ext2/3), and backup modes (DUMP, GNUTAR).

(2) The gzip corruption message always occurred at the same spot,
 i.e.

   gzip: stdin: invalid compressed data--format violated
   Error 32 (Broken pipe) offset 1073741824+131072, wrote 0

which is 1024^3 bytes + 128k.  I note that in my Amanda
configuration, I had chunksize defined to 1 gbyte and
blocksize set to 128 kbytes (the chunksize was just for
convenience, the blocksize seems to maximize my write
performance).

(3) I used dd to retrieve one of the compressed images that was
failing.  At the 1 gb mark in the file, the more-or-less
 random bytes of the compressed stream were interrupted by exactly
 32k of zeroed bytes.  I note that 32k is Amanda's default
 blocksize.

(4) For last night's backups, I set chunksize to an arbitrarily
high number, to prevent chunking, which works fine in my setup
because I use one very large ext3 partition for all of my
 Amanda holding disk, which nullifies concerns about filesystem
 size and max file size.  The restores I've done this morning have
 all worked fine, including the ones that had previously shown the
 corruption.

Well, after making a blithering idiot out of myself with the last 2 
replies, (I've been doing too much work in hex lately) this does 
sound as if you have nailed it.  I've no idea how big your tapes 
are, but if they handled a huge chunksize ok, then a retry at 2 
gigs might be in order to confirm this, or maybe even half a gig 
which should give a confirming result pretty quickly.

I don't recall running into that here as the huge majority of my 
stuff is broken into subdirs that rarely exceed 800 megs.  I also 
didn't use an even chunk size, but is set nominally to 1/4 of a 
DDS2 tape or 900something megs.

Interesting.  Sounds like Jean-Louis or JRJ might want to look into 
this one.  Like you, I know just enough C to be dangerous, I'd 
druther code in assembly, on a smaller machine...

Are you using the last snapshot from Jean-Louis's site at umontreal?
If not, maybe this has already been fixed.  The latest one is dated 
20030318.  (Or was an hour ago :)  I just checked the ChangeLog, 
but didn't spot any references to something like this from now back 
to about the middle of November last.

I'm not enough of a C coder to come up with a real patch to fix
 this. I'm hoping the above gives enough clues to let someone who
 _is_ a real C coder do so.

If this should be posted to the amanda-hackers list, please feel
 free to do so, or let me know and I'll do it.  Also, if any other
 information would be helpful, just ask.

Thanks,

-mgs

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.25% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



amrecover failure, corrupted gzip file?

2003-03-21 Thread Mike Simpson
Hi --

Running Amanda 2.4.4 servers and clients, using a RedHat 7.3 tape 
host, backing up using DUMP method (dump/restore) ext2 filesystems on 
a RedHat 7.2 client host:

I tried to do an amrecover on the /home filesystem (~8 GB), which
recovered all of the directories (as expected) and about 2/3's of the
actual data before terminating with a message asking if I wanted to 
change volumes (which I stupidly forgot to cut and paste, and can't 
find in any of my scrollback buffers, sorry).  Then prompted for 
aborting, then whether or not to dump core, then terminated.  Nothing 
particularly unusual in the amrecover debug file on the client side.

The corresponding amidxtaped debug file on the tape host side seemed 
to be running normally, then terminating on a gzip error:

  amidxtaped: time 10.959: Ready to execv amrestore with:
  path = /usr/local/sbin/amrestore
  argv[0] = amrestore
  argv[1] = -p
  argv[2] = -h
  argv[3] = -l
  argv[4] = LSX-13
  argv[5] = -f
  argv[6] = 2
  argv[7] = /dev/tapex
  argv[8] = ^whisper$
  argv[9] = ^/home$
  argv[10] = 20030315
  amrestore:   2: restoring whisper._home.20030315.0

  gzip: stdin: invalid compressed data--format violated
  Error 32 (Broken pipe) offset -2147483648+131072, wrote 0
  amrestore: pipe reader has quit in middle of file.
  amidxtaped: time 3606.244: amrestore terminated normally with status: 2
  amidxtaped: time 3606.244: rewinding tape ...
  amidxtaped: time 3623.767: done
  amidxtaped: time 3623.768: pid 5140 finish time Thu Mar 20 11:11:19 2003

I was able to recover the raw file from tape using dd, i.e.

  dd if=/dev/tapex of=./label-x bs=128k count=1

which recovered:

  AMANDA: TAPESTART DATE 20030315 TAPE LSX-13

Then:

  mt -f /dev/tapex asf 2
  dd if=/dev/tapex of=./label-2 bs=128k count=1

which recovered:

  AMANDA: FILE 20030315 whisper /home lev 0 comp .gz program /sbin/dump
  To restore, position tape at start of file and run:
  dd if=<tape> bs=128k skip=1 | /usr/bin/gzip -dc | sbin/restore -f... -

I did that, and was successful in recovering the file from tape:

  -rw-r--r--1 msimpson msimpson 2872049664 Mar 20 13:20 whisper_home_0.gz

I tried to do the pipe to restore, with a failure similar to the 
above.  The gzip file looks like it's become corrupted:

  $ file whisper_home_0.gz
  whisper_home_0.gz: gzip compressed data, from Unix, max speed

  $ file -z whisper_home_0.gz
  whisper_home_0.gz: new-fs dump file (little endian), This dump Sat Mar 15 20:03:59 
2003, Previous dump Wed Dec 31 18:00:00 1969, Volume 1, Level zero, type: tape header, 
Label none, Filesystem /home, Device /dev/datavg/homelv, Host whisper.doit.wisc.edu, 
Flags 1 (gzip compressed data, from Unix, max speed)

_but_:

  $ gzip -l /projects/archives/whisper_home_0.gz
  compressed  uncompressed  ratio  uncompressed_name
2872049664   0   0.0% whisper_home_0

and when I try to unzip it, even using the trick I found at 
www.gzip.org to avoid the 4gb file limit that's apparently a problem 
on some versions of gzip, I get the same error as in the debug file:

  $ gunzip < whisper_home_0.gz > whisper_home_0

  gunzip: stdin: invalid compressed data--format violated

  $ ls -l whisper_home*
  -rw-r--r--1 msimpson msimpson 2872049664 Mar 20 13:20 whisper_home_0.gz
  -rw-r--r--1 msimpson msimpson 6030524416 Mar 20 15:23 whisper_home_0

Any tips or tricks or other thoughts?  Is this the Linux dump/restore 
problem I've seen talked about on the mailing list?  I don't 
understand how the gzip file could be corrupted by a problem internal 
to the dump/restore cycle.

Thanks for any help,

-mgs




Bug? - gzip running on client AND server

2003-01-15 Thread Orion Poplawski
Just noticed that on at least one of my amanda disk dumps, it is being run 
through gzip on the client and on the server.  The details:

disklist:
lewis   /export/lewis3  comp-best-user-tar

amanda.conf:
define dumptype root-tar {
   global
   program GNUTAR
   comment root partitions dumped with tar
   compress none
   index
   exclude list /usr/local/lib/amanda/exclude.gtar
   priority low
}
define dumptype user-tar {
   root-tar
   comment user partitions dumped with tar
   priority medium
   comprate 0.70
}
define dumptype comp-best-user-tar {
   user-tar
   compress client best
}

on sever:
UIDPID  PPID  C STIME TTY  TIME CMD
amanda6995  6994  0 Jan14 ?00:00:00 /bin/sh /usr/sbin/amdump 
Data
amanda7004  6995  0 Jan14 ?00:00:01 /usr/lib/amanda/driver Data
amanda7005  7004  0 Jan14 ?00:03:09 taper Data
amanda7006  7004  2 Jan14 ?00:21:09 dumper0 Data
amanda7010  7005  0 Jan14 ?00:02:48 taper Data
amanda7200  7006  0 02:31 ?00:00:01 /bin/gzip --best

lsof -p 7200:
COMMAND  PID   USER   FD   TYPE DEVICESIZE  NODE NAME
gzip7200 amanda0u  IPv4  89527   TCP 
matchbox.colorado-research.com:1187-lewis.colorado-research.com:5603 
(ESTABLISHED)
gzip7200 amanda1w   REG8,5   49152 50218 
/var/lib/amanda/Data/index/lewis/_export_lewis3/20030114_0.gz.tmp
gzip7200 amanda2w   REG8,5   21860 40187 
/var/lib/amanda/Data/amdump

grep lewis amdump:
DUMP lewis 34cbfe811f01 /export/lewis3 20030114 1 0 1970:1:1:0:0:0 
13255125 9312 1 2003:1:8:5:7:24 11532986 51563
driver: send-cmd time 27114.674 to dumper0: FILE-DUMP 00-00011 
/var/amanda/Data/20030114/lewis._export_lewis3.0 lewis 34cbfe811f01 
/export/lewis3 NODEVICE 0 1970:1:1:0:0:0 1073741824 GNUTAR 13255200 
|;bsd-auth;compress-best;index;exclude-list=/usr/local/lib/amanda/exclude.gtar;


On client:
UIDPID   PPID  CSTIME TTY TIME CMD
 amanda  90664  93121  0 02:31:55 ?   5:38 
/usr/freeware/libexec/sendbackup
 amanda  93116  93121  0 02:31:55 ?  584:12 /usr/sbin/gzip 
--best
 amanda  93120  90664  0 02:31:55 ?   0:01 sed -e s/^\.//
 amanda  93121  1  0 02:31:55 ?   0:00 
/usr/freeware/libexec/sendbackup
 amanda  93139  93120  0 02:31:55 ?   2:12 
/usr/freeware/bin/tar -tf -

Has anyone seen this before?

TIA,

 Orion





Re: Bug? - gzip running on client AND server

2003-01-15 Thread Joshua Baker-LePain
On Wed, 15 Jan 2003 at 12:31pm, Orion Poplawski wrote

 Just noticed that on at least one of my amanda disk dumps, it is being run 
 through gzip on the client and on the server.  The details:

I'm pretty sure that the gzip on the server is compressing the index file, 
*not* the dump contents.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Bug? - gzip running on client AND server

2003-01-15 Thread Orion Poplawski
Joshua Baker-LePain wrote:


On Wed, 15 Jan 2003 at 12:31pm, Orion Poplawski wrote

 

Just noticed that on at least one of my amanda disk dumps, it is being run 
through gzip on the client and on the server.  The details:
   


I'm pretty sure that the gzip on the server is compressing the index file, 
*not* the dump contents.

 

Ah, duh!  A close look at the lsof output confirms.






Re: Bug? - gzip running on client AND server

2003-01-15 Thread Gerhard den Hollander
* Orion Poplawski [EMAIL PROTECTED] (Wed, Jan 15, 2003 at 12:31:44PM -0700)
 Just noticed that on at least one of my amanda disk dumps, it is being run 
 through gzip on the client and on the server.  The details:

 lsof -p 7200:
 COMMAND  PID   USER   FD   TYPE DEVICESIZE  NODE NAME
 gzip7200 amanda0u  IPv4  89527   TCP 
 matchbox.colorado-research.com:1187-lewis.colorado-research.com:5603 
 (ESTABLISHED)
 gzip7200 amanda1w   REG8,5   49152 50218 
 /var/lib/amanda/Data/index/lewis/_export_lewis3/20030114_0.gz.tmp
  

It's compressing the index file.

 Has anyone seen this before?

Yup, happens all the time, it's what it's supposed to do if you enable
indexing.

Currently listening to: Metallica - Nothing Else Matters

Gerhard,  @jasongeo.com   == The Acoustic Motorbiker ==   
-- 
   __O  If your watch is wound, wound to run, it will
 =`\,  If your time is due, due to come, it will
(=)/(=) Living this life, is like trying to learn latin
in a chines firedrill




Re: Speed of backups under amanda with gpg and gzip wrapper?

2002-01-31 Thread Jennifer Peterson

Does anybody else have additional feedback on this reply from Greg?  We 
are discussing a secure backup scheme for amanda whereby the backups are 
passed to a gzip wrapper that encrypts the data with gpg and then 
forwards it to the real gzip for further compression.  I'd wondered 
about the utility of sending an encrypted stream to gzip, but kept that 
part in because of general laziness and admitted ignorance.  It 
shouldn't hurt anything except speed, but, since speed is now perhaps an 
issue, I might take the final gzip step out unless somebody can tell me 
why I'd want to keep it.  The documentation in question resides at 
http://security.uchicago.edu/tools/gpg-amanda
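
For anyone who would rather measure than guess, something along these lines
should show whether the trailing gzip earns its keep (sample.dump and the
recipient key are placeholders, not part of the gpg-amanda setup itself):

  time sh -c 'gzip --fast < sample.dump | gpg --batch -e -r backup@example.com > /dev/null'
  time sh -c 'gzip --fast < sample.dump | gpg --batch -e -r backup@example.com | gzip --best > /dev/null'

Redirect to files instead of /dev/null if you also want to compare sizes.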

Thanks,

Jenn

Greg Troxel wrote:

 Looking at the docs, it seems that this is
 gzip | gpg | gzip
 
 This seems odd.  The output of gpg will be quite incompressible.
 I think gpg by default will do compression of some kind, perhaps
 gzip-like.
 So you might try omitting the gzips, or the second one and config
 compression off in gpg.  I suspect things will speed up a lot and not
 be any bigger.  IIRC gzip on hard-to-compress data is pretty slow.
 
 I'd appreciate hearing back on the output of my suggestion; this is
 something I've been wanting to do for a while but never gotten around
 to.
 
 Greg Troxel [EMAIL PROTECTED]
 





Speed of backups under amanda with gpg and gzip wrapper?

2002-01-30 Thread Jennifer Peterson

Hello,

I'm currently in the testing phase for switching our amanda backups over 
to Judith Freeman's secure scheme, using gpg and a gzip wrapper 
(http://security.uchicago.edu/tools/gpg-amanda).  Everything's working 
great with our test computers, and, so far, I'm pretty psyched about it. 
  However, I would like to ask anybody who's actually done a full-scale 
changeover from regular amanda to this secure amanda whether or not 
they experienced significant slowdowns as a result.  The small-scale 
testing that I'm doing seems to be taking quite a long time.  It's not a 
huge deal if our backups take longer as a result of this added security, 
but if it's going to take double or triple the time, then we might need to add 
another amanda server into the mix.

If I missed this aspect of the secure amanda discussion in the archives, 
then please point me to them.

Thanks for your insight.

Jenn




Re: gzip running when compress none

2001-10-28 Thread Chris Dahn

On Wednesday 24 October 2001 10:47 am, David Chin wrote:
 Howdy,

 I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a Linux box
 acting as server.  On the server, there is a gzip --best process running
 even though I have compress none in the global configuration.  Is this
 normal?

 --Dave Chin
   [EMAIL PROTECTED]


  From dumper.c:
    switch(compresspid=fork()) {
    case 0:
        aclose(outpipe[0]);
        /* child acts on stdin/stdout */
        if (dup2(outpipe[1],1) == -1)
            fprintf(stderr, "err dup2 out: %s\n", strerror(errno));
        if (dup2(tmpfd, 0) == -1)
            fprintf(stderr, "err dup2 in: %s\n", strerror(errno));
        for(tmpfd = 3; tmpfd <= FD_SETSIZE; ++tmpfd) {
            close(tmpfd);
        }
        /* now spawn gzip -1 to take care of the rest */
        execlp(COMPRESS_PATH, COMPRESS_PATH,
               (srvcompress == srvcomp_best ? COMPRESS_BEST_OPT
                                            : COMPRESS_FAST_OPT),
               (char *)0);
        error("error: couldn't exec %s.\n", COMPRESS_PATH);
    }

  It looks like it always spawns off a child to run gzip.



gzip running when compress none

2001-10-24 Thread David Chin


Howdy,

I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a Linux box 
acting as server.  On the server, there is a gzip --best process running 
even though I have compress none in the global configuration.  Is this 
normal?

--Dave Chin
  [EMAIL PROTECTED]




Re: gzip running when compress none

2001-10-24 Thread Mitch Collinsworth

 
 I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a Linux box 
 acting as server.  On the server, there is a gzip --best process running 
 even though I have compress none in the global configuration.  Is this 
 normal?

If you are indexing, yes.  The indexes are compressed.
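
If you want to see it for yourself, the finished product lands under your
index directory; the path below is only an example layout, so adjust it for
your config, host and disk names:

  zcat /var/lib/amanda/Data/index/somehost/_some_disk/20011024_0.gz | head

It is just a gzipped list of the file names in that dump, one per line.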

-Mitch




RE: gzip running when compress none

2001-10-24 Thread Amanda Admin

Are you indexing your backups?

Amanda compresses the index files stored on the server. Amanda may also
compress other process-oriented (not backup) files on the server; indexes
are the only ones I'm certain of, though.

HTH

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of David Chin
 Sent: Wednesday, October 24, 2001 8:48 AM
 To: [EMAIL PROTECTED]
 Subject: gzip running when compress none



 Howdy,

 I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a
 Linux box
 acting as server.  On the server, there is a gzip --best
 process running
 even though I have compress none in the global configuration.
  Is this
 normal?

 --Dave Chin
   [EMAIL PROTECTED]





Still trying to get gzip/gpg

2001-10-18 Thread ahall

Hello,

I am still trying to get gzip/gpg working.  I did not receive any replies
from my last two mails, so let me try again with a narrower question.

If someone might be able to answer this, that would be awesome:

As I understand the process, the data should be written to tape with gzip,
not dump.  But what is weird is that it appears that amanda is attempting
to back up the data with dump, but restore it with gzip.

sendbackup: start [localhost:sda1 level 0]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/var/backups/bin/gzip -dc |/sbin/restore
-f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end


Why is the BACKUP command /sbin/dump, and the RECOVER_CMD gzip?  How can I
change this?
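
For what it's worth, my reading is that RECOVER_CMD simply spells out the
inverse pipeline: dump created the archive and your configured gzip wrapper
post-processed it, so a by-hand restore of a retrieved image would look
roughly like this (the image name is a placeholder, and pick the restore
flags you actually want):

  /var/backups/bin/gzip -dc < backup-image | /sbin/restore -if -

That is a sketch of my understanding, not a definitive explanation of the
sendbackup report wording.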


BTW, when I get this all figured out I will write a doc covering exactly
how to do this.


TIA.

Andrew




gzip

2001-02-05 Thread Ryan Williams

I have 2 questions relating to gzip.

1. I have all of my gzips set to fast instead of best but whenever amdump is
running there will be a gzip --fast and gzip --best for every file that is
in my holding disk. What are the reasons behind this?

2. quoting a colocation facility's website:
"We use bzip2 instead of gzip for data compression. Unlike gzip, bzip2
compresses data in blocks, which means that in the unlikely event that a
small part of the backup is corrupted, only the affected block is lost. All
other data is still recoverable."

Is this true, and if so, is there a way to use bzip2 instead of gzip?  Has
anyone ever looked into this?



Thanks,

Ryan Williams





Re: gzip

2001-02-05 Thread John R. Jackson

1. I have all of my gzips set to fast instead of best but whenever amdump is
running there will be a gzip --fast and gzip --best for every file that is
in my holding disk. What are the reasons behind this?

The --best one is doing the index files, not the data stream.

2. quoting a colocation facilitys website:
"We use bzip2 instead of gzip for data compression.  ...

This comes up here about once a month :-).  There was a lengthy discussion
last November.  Quoting Alexandre Oliva:

  ... people who tried to use bzip2 for backup compression ended
  up finding out it was just too slow.

And then Jonathan F. Dill after some (non-Amanda) timing tests:

  In summary, bzip2 gave me 3.84% more compression at a cost of a more
  than fourfold increase in the time that it took to run the compression.
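
That kind of comparison is easy to reproduce on your own data if you are
curious (sample.tar is a stand-in for a typical dump image):

  time gzip --fast < sample.tar > sample.tar.gz
  time bzip2       < sample.tar > sample.tar.bz2
  ls -l sample.tar sample.tar.gz sample.tar.bz2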

Even so, there is something called the "FILTER API" that should allow
arbitrary programs to be inserted in the data stream (compression,
encryption, random number :-) and this would be the logical place to
put this effort.
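
Until something like that exists, one workaround in the same spirit as the
gpg-amanda wrapper mentioned elsewhere on the list would be to build Amanda
with its compression path pointed at a small wrapper script.  A hypothetical
bzip2 version is little more than the following, glossing over details such
as the .gz suffix Amanda records and server-side restores:

  #!/bin/sh
  # pretend to be gzip: decompress when called with -d..., compress otherwise
  case "$1" in
      -d*) exec /usr/bin/bzip2 -dc ;;
      *)   exec /usr/bin/bzip2 ;;
  esac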

Ryan Williams

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Mac OS X Server problems w/ gzip

2000-11-09 Thread Kevin M. Myer

On Wed, 8 Nov 2000, Chris Karakas wrote:

 When I first used AMANDA, I used it without compression. Then I upgraded
 the tape drivers (which used an inherend "block" compression,
 transparent to the user), the new ones did not support any inherent
 compression, so I had to use the usual "client" compression. I was
 amazed to see how much longer it took. Where AMANDA used to take 2-3
 hours to finish, now it took 6-8! 

That's all fine and good, but my AMANDA server backs up 13 servers.  3 run
Solaris, 8 run Linux and 1 runs OS X Server.  I can do full backups of all
the servers in less than three hours, with client-side compression, with
the exception of the OS X Server.  It takes over 8 hours to compress and
backup 400 Mb of data on that machine (a 400MHz G4 machine with a Gig of
RAM).  So it's not merely an issue of gzip compression adding time to the
backups.  gzip is just really, really slow when used with AMANDA under Mac
OS X Server.  Command line issued tar/gzip pipes seem to work reasonably
fast on the OS X Server.
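
To make that a fair comparison, it may be worth timing the same shape of
pipe sendbackup runs, i.e. gnutar streaming straight into gzip --fast (the
tar path and directory are only examples):

  time /usr/local/bin/gtar --create --file - /Users 2>/dev/null | gzip --fast > /dev/null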

It's not a big deal - I've moved all compression to the backup server at
this point, but it is an oddity I was hoping to figure out.

Kevin

-- 
Kevin M. Myer
Systems Administrator
Lancaster-Lebanon Intermediate Unit 13
(717)-560-6140






Re: Mac OS X Server problems w/ gzip

2000-11-09 Thread Kevin M. Myer

On Thu, 9 Nov 2000, Mitch Collinsworth wrote:

 Have you tried compress client fast yet or are you still doing client
 best?

Yes, actually, I had been using client fast for all my backups.  Maybe I
would do better with client best :)  Still, the thing that irks me most
about it is not that the backup is slow - it's that Apple has made it nigh
on impossible to debug anything under Mac OS X Server.  If I could just
run ktrace on a running backup, I'm sure it would shed some light on the
matter.  But like I said, it's not that big an issue - as long as we have
the network bandwidth and tape space and/or can do the compression on the
backup server, things will be fine.

Kevin

-- 
Kevin M. Myer
Systems Administrator
Lancaster-Lebanon Intermediate Unit 13
(717)-560-6140