Re: Mac OS X Server problems w/ gzip

2000-11-09 Thread Kevin M. Myer

On Wed, 8 Nov 2000, Chris Karakas wrote:

 When I first used AMANDA, I used it without compression. Then I upgraded
 the tape drivers: the old ones used an inherent "block" compression,
 transparent to the user, but the new ones did not support any inherent
 compression, so I had to use the usual "client" compression. I was
 amazed to see how much longer it took. Where AMANDA used to take 2-3
 hours to finish, it now took 6-8!

That's all well and good, but my AMANDA server backs up 13 servers: 3 run
Solaris, 8 run Linux and 1 runs OS X Server.  I can do full backups of all
the servers in less than three hours, with client-side compression, with
the exception of the OS X Server.  It takes over 8 hours to compress and
back up 400 Mb of data on that machine (a 400 MHz G4 with a gigabyte of
RAM).  So it's not merely an issue of gzip compression adding time to the
backups.  gzip is just really, really slow when used with AMANDA under Mac
OS X Server.  Command-line tar/gzip pipes seem to run reasonably
fast on the OS X Server.

It's not a big deal - I've moved all compression to the backup server at
this point, but it is an oddity I was hoping to figure out.

Kevin

-- 
Kevin M. Myer
Systems Administrator
Lancaster-Lebanon Intermediate Unit 13
(717)-560-6140






sendbackup error with tar: 'file changed as we read it'

2000-11-09 Thread Edwin Chiu

Hi,

Is it possible to get GNU tar to ignore these errors:

e.g.

/-- host/ lev 0 FAILED [/bin/gtar returned 2]
sendbackup: start [merlin:/ level 0]
sendbackup: info BACKUP=/bin/gtar
sendbackup: info RECOVER_CMD=/bin/gtar -f... -
sendbackup: info end
? gtar: /var/log/slapd.log: file changed as we read it: Invalid argument

| Total bytes written: 4019077120
? gtar: Error exit delayed from previous errors
sendbackup: error [/bin/gtar returned 2]
\

Regards,
Edwin




Re: sendbackup error with tar: 'file changed as we read it'

2000-11-09 Thread Alexandre Oliva

On Nov  9, 2000, Edwin Chiu [EMAIL PROTECTED] wrote:

 Is it possible to get GNU tar to ignore these errors:

 ? gtar: /var/log/slapd.log: file changed as we read it: Invalid argument

I *hope* GNU tar 1.13.17 or newer will ignore this error when called
with --ignore-failed-read, as Amanda does.

If you don't feel like using such an experimental version of GNU tar,
there's a #define somewhere in sendbackup*.c that controls whether to
ignore the exit status of GNU tar or something like that.  Note that you
might miss other, more important error messages, though.  Not that
I've ever seen an error message that couldn't be safely ignored.
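If you'd rather check whether the GNU tar you already have knows the flag before upgrading, a quick probe of its help output works (a sketch; on your system the binary may be installed as gtar rather than tar):

```shell
# Probe the installed GNU tar for --ignore-failed-read support by
# scanning its help output (binary name "tar" is an assumption; it
# may be installed as gtar on your system).
if tar --help 2>&1 | grep -q 'ignore-failed-read'; then
    echo "tar supports --ignore-failed-read"
else
    echo "tar does NOT support --ignore-failed-read"
fi
```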

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist*Please* write to mailing lists, not to me



Re: Using an old tape

2000-11-09 Thread David Wolfskill

Date: Thu, 09 Nov 2000 15:47:48 -0300
From: Gonzalo Arana [EMAIL PROTECTED]

If I configure a 4-tape cycle (full backup each, no incrementals), and the last
dump is on tape 3, how may I retrieve the dump on tape 2?
Sorry if my question is quite basic, but I haven't found anything in the
amrecover manpage.
Waiting for answers,

Use the "setdate" command within amrecover (to specify the date as of
which you want the restore done).

If you are not sure of the date, use the "history" command to see the
correspondence between dates and tape labels.

You will probably be well-advised to perform the "mt fsf" command (to
forward space to the proper file number) before telling amrecover to use
the tape in question.
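Put together, such a session might look like this (a sketch only; the device name, config name, dates and file number are hypothetical, not actual output):

```
# On the tape server (device, config and dates are hypothetical):
$ mt -f /dev/nst0 fsf 3          # forward space to the proper file number
$ amrecover MyConfig
amrecover> history               # see which dates map to which tape labels
amrecover> setdate 2000-11-02    # restore as of the run that went to tape 2
amrecover> add some/file
amrecover> extract
```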

Cheers,
david
-- 
David Wolfskill  [EMAIL PROTECTED]   UNIX System Administrator
Desk: 650/577-7158   TIE: 8/499-7158   Cell: 650/759-0823



Re: Mac OS X Server problems w/ gzip

2000-11-09 Thread Kevin M. Myer

On Thu, 9 Nov 2000, Mitch Collinsworth wrote:

 Have you tried compress client fast yet or are you still doing client
 best?

Yes, actually, I had been using client fast for all my backups.  Maybe I
would do better with client best :)  Still, the thing that irks me most
about it is not that the backup is slow - it's that Apple has made it nigh
on impossible to debug anything under Mac OS X Server.  If I could just
run ktrace on a running backup, I'm sure it would shed some light on the
matter.  But like I said, it's not that big an issue - as long as we have
the network bandwidth and tape space and/or can do the compression on the
backup server, things will be fine.

Kevin

-- 
Kevin M. Myer
Systems Administrator
Lancaster-Lebanon Intermediate Unit 13
(717)-560-6140




Re: one large partition

2000-11-09 Thread John R. Jackson

I am trying to back up one (~75GB) partition.  Amanda will compress the
entire partition and try to put it on tape.  We only have the compressed
50GB tapes.  ...

Note that you should **not** use both hardware and software compression.
Pick one or the other.  Trying to put an already (software) compressed
image on tape via hardware compression may not even give you the
non-compressed capacity listed for the device.

We are using a tape changer, but amanda can't split one
file.  Is there any way to get around this problem?

The standard solution is to use GNU tar and logically (to Amanda) break
up the partition into individually backed up pieces.  There are tricks
to auto-manage this and group things, but they still use this basic plan.

Sherrill

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Different locations for tar

2000-11-09 Thread John R. Jackson

 For now, how about:
 
   ln -s /bin/tar /usr/local/bin/gtar-for-amanda
 
 on that client?

Yeah, that's not a problem.  It's just that we would have to recompile
amanda on that client, which its owner would rather avoid.

I thought the symlink would avoid the recompile since both names would
be available.  However I think I got the problem mixed up.  The problem is
that amverify **on the server** sees /bin/tar in the header and doesn't
deal with it because it doesn't match what it thinks tar is supposed
to be.

And since you gave the server a "strange" name for tar (the two basenames
do not match), I don't think 2.4.2 is going to help either.

In the doonefile() function in amverify, there is a long string of if/elif
statements that figure out what program to use.  I think you'll need to
add another one like this before the first reference to $DUMP:

elif [ X"$TAR" != X"" -a X"$2" = X"/bin/tar" ]; then
        CMD=$TAR
        ARGS="tf -"

Nate Eldredge

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: your mail

2000-11-09 Thread Rob Simmons

doh.  I'm sorry about that.

-- 
Robert Simmons
Systems Administrator
http://www.wlcg.com/




Re: one large partition

2000-11-09 Thread John R. Jackson

...  I am not very familiar with the setup you are
talking about.  Do you know where I can find some docs and references to
this kind of setup?  ...

I don't think it's really documented anyplace.

The idea is that if your file system is named /some/big/filesystem and it
contains subdirectories sub1, sub2 and sub3, then instead of just listing
/some/big/filesystem in disklist you would enter each one:

  the-client  /some/big/filesystem/sub1   the-dumptype
  the-client  /some/big/filesystem/sub2   the-dumptype
  the-client  /some/big/filesystem/sub3   the-dumptype

The idea/hope is that the individual subdirectories are smaller than a
tape and so Amanda will not have to split them.

Note that if you have "too many" file systems, you may overflow the UDP
packet size Amanda uses.  That can be changed in the source from 8 KBytes
to 64 KBytes in common-src/dgram.h (symbol MAX_DGRAM).  You'll need to
do a complete rebuild of both client and server.  The value is already
changed to this at 2.4.2.

You might also look at:

  ftp://gandalf.cc.purdue.edu/pub/amanda/gtar-wrapper.*

for another way of breaking up a large parent area.

Note that you have to use GNU tar for this, and that has "issues", like
the access time for all backed up files will be altered.  You should
also probably avoid version 1.13.* as it's known to have lots of problems
with Amanda.  Use 1.12 and apply the patches at www.amanda.org.

Sherrill

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Different locations for tar

2000-11-09 Thread Nate Eldredge

On Thu, 9 Nov 2000, John R. Jackson wrote:

  For now, how about:
  
ln -s /bin/tar /usr/local/bin/gtar-for-amanda
  
  on that client?
 
 Yeah, that's not a problem.  It's just that we would have to recompile
 amanda on that client, which its owner would rather avoid.
 
 I thought the symlink would avoid the recompile since both names would
 be available.  However I think I got the problem mixed up.  The problem is
 that amverify **on the server** sees /bin/tar in the header and doesn't
 deal with it because it doesn't match what it thinks tar is supposed
 to be.
 
 And since you gave the server a "strange" name for tar (the two basenames
 do not match), I don't think 2.4.2 is going to help either.
 
 In the doonefile() function in amverify, there is a long string of if/elif
 statements that figure out what program to use.  I think you'll need to
 add another one like this before the first reference to $DUMP:
 
 elif [ X"$TAR" != X"" -a X"$2" = X"/bin/tar" ]; then
         CMD=$TAR
         ARGS="tf -"

Yeah.  Actually, I was planning something on the order of a substring
match for `tar'.
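A substring match along those lines could look like this (a sketch of the idea only; the exact pattern to accept is an assumption, not what amverify actually does):

```shell
# Sketch: match any dump program from the image header whose name ends
# in "tar" (/bin/tar, /usr/local/bin/gtar, gtar-for-amanda would not
# match this particular pattern -- the pattern choice is an assumption).
prog="/bin/tar"
case "$prog" in
*tar)
    echo "would use \$TAR with ARGS=\"tf -\""
    ;;
*)
    echo "not a tar image"
    ;;
esac
```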

-- 
Nate Eldredge
HMC CS Staff
[EMAIL PROTECTED]




RFC: change to negative chunksize semantics

2000-11-09 Thread John R. Jackson

Up until 2.4.2, a negative value for chunksize (other than -1) caused
images estimated to be larger than the absolute value to go direct to
tape.  For instance, "-1024 Mb" would cause anything larger than 1 GByte
to go direct to tape and anything smaller to go through the holding disk
(if there is enough space, etc).

Before I conned the other administrators here into providing obscene
amounts of holding disk space :-), I used to set this to slightly less
than half the holding disk space to prevent Amanda from going into what
I call "ping pong" mode with lots of large dumps.  Since there was not
enough space for the two largest images, but was enough for one of each
of the several largest, it would spend a long time dumping into the
holding disk with no tape activity, then a long time writing to tape
with no other dump activity, then go back to dumping.  Very un-parallel.

And there might be other uses for forcing direct to tape.

With 2.4.2, backup images may be split across multiple holding disks
(yeah!) and that makes using chunksize for this direct to tape feature
seem like not such a good idea since it's not really related to each
specific holding disk.  So we're looking for input:

  * Does anyone use negative chunksize (other than -1) to force direct
to tape?

  * The suggested change would be to add a new "maxholding" parameter
in the general (non-holdingdisk) area to do the same thing.  Would
that be acceptable?

FYI, 2.4.2 will complain if it sees a negative chunksize (other than -1)
to help catch anyone who forgets.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Not Getting Estimates

2000-11-09 Thread David Lloyd


I forgot to say that I'm running:

* RedHat Linux 6.2
 - vanilla except for important updates

* Latest Stable Release of dump for Linux
 - I got an RPM from sourceforge...

dsl

-- 
Do you believe in life after love?
 I can feel something inside me, say
 I really don't think you're strong enough



Re: HP1557A and chg-zd-mtx pb...

2000-11-09 Thread Joe Rhett

MTX -f should point to your changer device, not your tape device.

 mtx -f /dev/scsi/changer/c0t0d1

On Thu, Nov 09, 2000 at 01:35:52PM +0100, Yann PURSON wrote:
 
 Hi,
 
 I have an HP1557A tape autoloader (6 tapes) and I'm trying to configure
 amanda using this device.  I'm using mtx and the chg-zd-mtx script, but
 it seems I have a problem with mtx.  Each time I use it, it returns an
 error message, followed by the correct information, for example:
 
 mtx -f /dev/rmt/0n status
 mtx: I/O error
   Storage Changer /dev/rmt/0n:1 Drives, 6 Slots ( 0 Import/Export )
 Data Transfer Element 0:Empty
   Storage Element 1:Full
   Storage Element 2:Full
   Storage Element 3:Full
   Storage Element 4:Full
   Storage Element 5:Full
   Storage Element 6:Full
 
 This "I/O error" message also causes a problem with chg-zd-mtx; each
 time I launch it, it returns this:
 
 bash-2.02# /usr/local/libexec/chg-zd-mtx -info
 mtx: I/O error
 /usr/local/libexec/chg-zd-mtx: test: argument expected
 
 
 The last line is generated by this line in the script :
 
   used=`$MTX status |
 sed -n 's/Data Transfer Element:.Empty/-1/p;s/Data Transfer
 Element:.Full (Storage Element \(.\) Loaded)/\1/p'`
 
 
 I'm not sure, but I think that the error is due to the I/O error line
 from MTX...
 
 Can someone help me about this???
 
 Thanks to Tony Traylor and Joe Rhett for their help.
 
 --
 Yann PURSON - Assistant chef de projet
 ADNTIC - 93, rue du Hocquet - 80.000 AMIENS
 Téléphone : 03.22.22.27.32
 

-- 
Joe Rhett Chief Technology Officer
[EMAIL PROTECTED]  ISite Services, Inc.

PGP keys and contact information:  http://www.noc.isite.net/Staff/
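One thing worth noting about the script line quoted above: if the sed pattern never matches, `$used` ends up empty, and an unquoted `test` on an empty variable produces exactly the "argument expected" error shown. Guarding the variable sidesteps that regardless of the root cause (a sketch, not the actual chg-zd-mtx fix; the default value -1 follows the script's own convention for "empty"):

```shell
# Reproduce the parse with the sample status line from the post.  Note the
# script's pattern expects "Data Transfer Element:.Empty" while the drive
# line actually reads "Data Transfer Element 0:Empty", so $used can come
# back empty; defaulting it keeps later tests from blowing up.
status='Data Transfer Element 0:Empty'
used=`echo "$status" | sed -n 's/Data Transfer Element.*Empty/-1/p'`
if [ "X$used" = "X" ]; then
    used=-1
fi
echo "$used"   # prints "-1"
```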



How many dumpers active?

2000-11-09 Thread Robert L. Harris


How do I know how many dumpers are active?  I just changed my config to
allow 2 while I figure out this problem with the disk.  It said both
dumpers were active, but there were no dumper processes on the one client
box in my disklist file.

Robert


--

:wq!
---
Robert L. Harris|  Micros~1 :
Unix System Administrator   |For when quality, reliability
  at Agency.com |  and security just aren't
\_   that important!
DISCLAIMER:
  These are MY OPINIONS ALONE.  I speak for no-one else.
FYI:
 perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'






Re: one large partition

2000-11-09 Thread David Lloyd


Hmm!

I thought AMANDA only backed up whole "disks" or "partitions", i.e.:

[lloy0076] % df
/dev/hda /usr
/dev/hdb /tmp

And I would only be able to back up ALL of hda or ALL of hdb, but not just
/usr/bin without getting into GNUTAR exclusions and such... maybe I'm
wrong...

DL



Re: one large partition

2000-11-09 Thread John R. Jackson

And I would only be able to back up ALL of hda or ALL of hdb, but not just
/usr/bin without getting into GNUTAR exclusions and such... maybe I'm
wrong...

You're wrong :-).

If you put a directory name in your disklist (instead of something like
"hdb" or "/dev/hdb"), that just gets passed to GNU tar as is, so it's
pretty easy to just do a subdirectory.

Now the problem is if you also list the top level mount point as well.
That would cause it and the subdirectory to both be dumped.  That's where
you would need an exclusion to ignore the subdirectory when doing the
whole partition.

But if you don't have very many subdirectories and list each one by
itself and never mention the top level, it will work just fine.
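Following DL's df output above, a disklist along these lines (a sketch; the hostname and dumptype name are hypothetical, and the dumptype is assumed to use GNU tar) backs up two subdirectories without ever mentioning the /usr mount point:

```
# disklist sketch: subdirectories only, top-level mount point omitted
lloy0076-host  /usr/bin    comp-user-tar
lloy0076-host  /usr/local  comp-user-tar
```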

DL

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: How many dumpers active?

2000-11-09 Thread David Wolfskill

Date: Thu, 09 Nov 2000 18:42:16 -0500
From: "John R. Jackson" [EMAIL PROTECTED]

How do I know how many dumpers are active?  ...

Amanda will start the number you tell it to.

Up to MAX_DUMPERS (which in my copy of server-src/driverio.c is 63).

The real question is,
how well are they being used.  The easiest way to tell is with amstatus.
In particular, running it against a completed amdump.nn file.

amplot is handy for this, as well (and for checking on various resource
constraints in general).

For instance, I have this as part of my "run-amanda" script right
after amdump (the syntax may be different at 2.4.1p1 -- I forget):

  amstatus ${config} --summary --file amdump.1

Since the most recent amdump.nn file is always amdump.1, this gives
me a summary of what happened during this run.

Part of the output looks like this (although I just noticed it's broken
with 2.4.2 -- sigh :-):

(What's "broken" about it...?)

Cheers,
david
-- 
David Wolfskill  [EMAIL PROTECTED]   UNIX System Administrator
Desk: 650/577-7158   TIE: 8/499-7158   Cell: 650/759-0823



Re: How many dumpers active?

2000-11-09 Thread John R. Jackson

Up to MAX_DUMPERS (which in my copy of server-src/driverio.c is 63).

True.

amplot is handy for this, as well ...

Also true.

(What's "broken" about it...?)

It doesn't generate any of that output I posted :-(.  I'm guessing it
might be because the dump was finished, so it skipped over doing some
things that might show up while it's still running.

david

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Not Getting Estimates

2000-11-09 Thread John R. Jackson

Yet I'm getting estimate failures.  ...

What does amcheck have to say?

What's in /tmp/amanda/sendsize*debug on adlcds1.nci.com.au?

DL

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



RESULTS MISSING

2000-11-09 Thread Tom Hudak

I have just ironed out my config, set up my clients and gotten amcheck to run
without problems.  After setting the "record no" global option, running amdump
spits out this:
FAILURE AND STRANGE DUMP SUMMARY:
  smb/home RESULTS MISSING
  smb/var/data/kalisa RESULTS MISSING
  homer  /dev/hda1 RESULTS MISSING
  homer  /dev/md0 RESULTS MISSING


STATISTICS:
  Total   Full  Daily
      
Dump Time (hrs:min)0:00   0:00   0:00   (0:00 start, 0:00
idle) 
Output Size (meg)   0.00.00.0
Original Size (meg) 0.00.00.0
Avg Compressed Size (%) -- -- --
Tape Used (%)   0.00.00.0
Filesystems Dumped0  0  0
Avg Dump Rate (k/s) -- -- --
Avg Tp Write Rate (k/s) -- -- --

^L
NOTES:
  driver: WARNING: got empty schedule from planner
  taper: tape Daily-05 kb 0 fm 0 [OK]

^L
DUMP SUMMARY:
  DUMPER STATS  TAPER
STATS 
HOSTNAME  DISK   L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS KB/s
homer /dev/hda1 MISSING

homer /dev/md0  MISSING

smb   /home MISSING

smb   -r/data/kalisaMISSING


(brought to you by Amanda version 2.4.1p1)
My question relates to the lines containing RESULTS MISSING (although I'm not
sure how the -r got in there; the disklist contains the full path: "smb --  
-r/data/kalisa --").  I have checked amdump.debug and selfcheck.debug and
looked for other .debug files, but found none on either the server or the
client.  Neither the client nor the server .debug files contain anything
about errors or problems, yet the error continues.  What causes this?  Am I
not looking at the right debug files?  I have found MANY submissions like
this to the hackers list, but none of them seem to fit my current setup.
This is a brand spankin' new config and I *believe* it's correct.  Any info
would be MUCH appreciated.




Re: RESULTS MISSING

2000-11-09 Thread Tom Hudak

On Thu, Nov 09, 2000 at 06:26:54AM -0600, Tom Hudak wrote:
I have a modification: it turns out the dumpers program was not u+x, so it
choked.  Now I'm getting "request to HOST timed out."  I don't expect a
reasonable resolution before leaving work tonight, but if I get any more
info in the next day or so, I'll repost.



Re: Not Getting Estimates

2000-11-09 Thread David Lloyd


John!

 What does amcheck have to say?

Everything is OK (except I don't have any spare tapes for it to do a
tape write test, but I've fixed that problem at last).

 What's in /tmp/amanda/sendsize*debug on adlcds1.nci.com.au?

I'm not certain, but I'll have a look asap :-)

DL
-- 
Do you believe in life after love?
 I can feel something inside me, say
 I really don't think you're strong enough



Help with FAILED AND STRANGE DUMP

2000-11-09 Thread Jason Ingham

Not sure how to get past this problem; the last three backups have failed
for a certain volume.  Any clues greatly appreciated.  Here are the emailed
symptoms, with irrelevant stuff snipped:

Thanks!
~Jason


FAILURE AND STRANGE DUMP SUMMARY:
  platt.w sda3 lev 0 FAILED [/sbin/dump returned 3]

STATISTICS:
  Total   Full  Daily
      
Dump Time (hrs:min)4:33   3:23   0:24   (0:10 start, 0:37
idle)
Output Size (meg)3726.6 3126.9  599.7
Original Size (meg)  6275.7 4992.7 1283.0
Avg Compressed Size (%)56.5   62.6   21.6
Tape Used (%)  19.1   15.63.5   (level:#disks ...)
Filesystems Dumped   48  1 47   (1:44 2:3)
Avg Dump Rate (k/s)   371.2  624.1  119.2
Avg Tp Write Rate (k/s)   280.4  262.8  430.7


FAILED AND STRANGE DUMP DETAILS:

/-- platt.w sda3 lev 0 FAILED [/sbin/dump returned 3]
sendbackup: start [platt.ourdomain.com:sda3 level 0]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/sbin/restore -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
|   DUMP: Date of this level 0 dump: Thu Nov  9 02:35:51 2000
|   DUMP: Date of last level 0 dump: the epoch
|   DUMP: Dumping /dev/sda3 (/) to standard output
|   DUMP: mapping (Pass I) [regular files]
|   DUMP: mapping (Pass II) [directories]
|   DUMP: estimated 3581362 tape blocks.
|   DUMP: dumping (Pass III) [directories]
|   DUMP: dumping (Pass IV) [regular files]
|   DUMP: 3.71% done, finished in 2:09
|   DUMP: 7.76% done, finished in 1:58
|   DUMP: 11.79% done, finished in 1:52
|   DUMP: 16.28% done, finished in 1:42
|   DUMP: 19.92% done, finished in 1:40
|   DUMP: 23.89% done, finished in 1:35
|   DUMP: 28.46% done, finished in 1:27
|   DUMP: 33.23% done, finished in 1:20
|   DUMP: 38.26% done, finished in 1:12
|   DUMP: 42.27% done, finished in 1:08
|   DUMP: 45.83% done, finished in 1:05
|   DUMP: 51.04% done, finished in 0:57
|   DUMP: 56.04% done, finished in 0:50
|   DUMP: 60.77% done, finished in 0:45
|   DUMP: 65.90% done, finished in 0:38
|   DUMP: 72.13% done, finished in 0:30
|   DUMP: 75.79% done, finished in 0:27
|   DUMP: 78.37% done, finished in 0:24
|   DUMP: 81.73% done, finished in 0:21
|   DUMP: 85.07% done, finished in 0:17
|   DUMP: 89.36% done, finished in 0:12
|   DUMP: 93.05% done, finished in 0:08
|   DUMP: 98.17% done, finished in 0:02
?   DUMP:   DUMP:   DUMP: bread: lseek fails
?   DUMP: bread: lseek fails
?   DUMP: short read error from /dev/sda3: [block -454628770]: count=1024,
got=0
?   DUMP: bread: lseek2 fails!
?   DUMP: short read error from /dev/sda3: [sector -454628770]: count=512,
got=0
?   DUMP: bread: lseek2 fails!
?   DUMP: short read error from /dev/sda3: [sector -454628769]: count=512,
got=0
?   DUMP: bread: lseek fails
?   DUMP: short read error from /dev/sda3: [block -958474146]: count=1024,
got=0
?   DUMP: bread: lseek2 fails!
?   DUMP: short read error from /dev/sda3: [sector -958474146]: count=512,
got=0
?   DUMP: bread: lseek2 fails!
?   DUMP: short read error from /dev/sda3: [sector -958474145]: count=512,
got=0
?   DUMP: bread: lseek fails
?   DUMP: short read error from /dev/sda3: [block -497100606]: count=1024,
got=0
?   DUMP: bread: lseek2 fails!
?   DUMP: short read error from /dev/sda3: [sector -497100606]: count=512,
got=0
?   DUMP: bread: lseek2 fails!
?   DUMP: short read error from /dev/sda3: [sector -497100605]: count=512,
got=0
?   DUMP: bread: lseek fails
?   DUMP: short read error from /dev/sda3: [block -657276198]: count=1024,
got=0
?   DUMP: bread: lseek2 fails!

[Lots of these snipped]

?   DUMP: short read error from /dev/sda3: [sector -657276197]: count=512,
got=0
?   DUMP: bread: lseek fails
?   DUMP: short read error from /dev/sda3: [block -354001310]: count=1024,
got=0
?   DUMP: More than 32 block read errors from 134553888
?   DUMP: This is an unrecoverable error.
?   DUMP: fopen on /dev/tty fails: Device not configured
|   DUMP: The ENTIRE dump is aborted.
sendbackup: error [/sbin/dump returned 3]
\

NOTES:
  driver: platt.ourdomain.com sda3 0 [dump to tape failed, will try again]
  taper: tape DailySet15 kb 6065760 fm 49 [OK]

DUMP SUMMARY:
  DUMPER STATS  TAPER
STATS
HOSTNAME  DISK   L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS  
KB/s
-- --
--
platt. sda3   000   -- 0:000.0  117:33 
318.8





Re: Help with FAILED AND STRANGE DUMP

2000-11-09 Thread David Lloyd


The seek errors you're seeing generally occur because some other process
is attempting to use the blocks that AMANDA (rather, dump) is attempting
to back up.  I've noticed that if I get enough of them it just trashes my
backup and I have to do it again :-)

DL
-- 
Do you believe in life after love?
 I can feel something inside me, say
 I really don't think you're strong enough



Re: backup on linux/win98 client

2000-11-09 Thread Urte Fuerst


Hi Nate,

 dump doesn't support vfat, which you'd expect (dump has to be intimate
 with the physical filesystem, and linux dump is written for ext2).  You
 can still use tar, though, which is explained in the docs.
Yes, that solved my problem; /win is now backed up as desired.  Thanks
for your quick help.  Have a nice day. :-)

Urte

-- 
   \|/
   @ @
---oOO-(_)-OOo-

Urte Fuerst
_/_/  _/ _/_/_/ German Aerospace Center DLR
   _/  _/_/ _/_/   
  _/   _/   _/ _/_/ Institute of Aeroelasticity
 _/_/  _/ _/ _/_/  Bunsenstrasse 10
_/_/  _/ _/_/D - 37073 Goettingen
   _/_/  _/ _/  _/Phone:  +49 (0)551 709 2432
  _/   _/   _/ _/_/   Fax:+49 (0)551 709 2862
 _/_/_/_/_/_/_/_/ _/  _/  e-mail: [EMAIL PROTECTED]