"could not access" using tar

2001-08-15 Thread peanut butter

Hi, because the home partition on one of my machines is pushing the
capacity of my tapes, I want to break it into smaller parts using tar.
In this case, that essentially means placing a couple of large
subdirectories of a user's home directory into their own disklist
entries while excluding those same directories from the backup of the
whole partition.  The subdirectories are "data" and "James" in the
following home directory:

drwx------   51 akritsuk users   40960 Aug 11 18:34 /home/akritsuk

and with the following privileges:

drwxr-xr-x    4 akritsuk users   32768 Dec 18  2000 /home/akritsuk/James/
drwxr-xr-x    4 akritsuk users   12288 Jan 19  2001 /home/akritsuk/data/

My amanda user is in group 'disk', which is the group with access to the
device, yet amcheck gives the following errors, and only for the two
subdirectories of /home/akritsuk:

ERROR: akpc: [could not access /home/akritsuk/data (/home/akritsuk/data): Permission denied]
ERROR: akpc: [could not access /home/akritsuk/James (/home/akritsuk/James): Permission denied]

Having used Amanda for quite a while in several different circumstances,
I've never run into this problem before.  Any ideas?  My only guess
is that amanda is denied access because this home directory grants
permissions only to the user, but I had come to believe that as long as
the amanda user is in the group with access to the device, this
shouldn't matter.  Note that no such errors were given for this
machine's disklist entry for "/home" itself.
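
For reference, the kind of split I have in mind looks roughly like this (the
dumptype names are made up, and the exclude syntax is just my reading of the
2.4.2 docs, so treat it as a sketch):

# disklist
akpc  /home                  comp-user-tar-exclude
akpc  /home/akritsuk/data    comp-user-tar
akpc  /home/akritsuk/James   comp-user-tar

# amanda.conf
define dumptype comp-user-tar {
    global
    program "GNUTAR"
    compress client fast
}

define dumptype comp-user-tar-exclude {
    comp-user-tar
    # pattern is relative to the top of the /home entry; as far as I can
    # tell, 2.4.2 takes a single pattern, so an "exclude list" file would
    # be needed to skip ./akritsuk/James as well
    exclude "./akritsuk/data"
}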

The machine is Red Hat 6.2 and Amanda is version 2.4.2p2.

Thanks for any help.

-- 
Paul Yeatman   (858) 534-9896[EMAIL PROTECTED]
 ==
 ==Proudly brought to you by Mutt==
 ==



Re: Solaris 8 Server hangs during backup

2001-08-15 Thread Paul . Haldane


On Tue, 14 Aug 2001, Eva Freer wrote:

> We have a highly subnetted configuration of Solaris 8 and 2.6 boxes, mostly
> E220R's. The subnets are connected via firewalls. Each subnet has its own
> Amanda server with an Exabyte Mammoth tape drive. We use hardware
> compression only. The Amanda is 2.4.2p1 on most nodes.
...

We very occasionally (twice in months of running Amanda) see
something which _may_ be related to your problem.  We're running a
mixture of Solaris 7 and 8 (Amanda server is 7) [as well as some
RedHat Linux and MacOS X clients].

Twice, one of the Solaris 7 Amanda clients (the same one both times) has
locked up during the estimate phase of the backup run (this is using
ufsdump).  When this happens, access to one or more filesystems blocks
and the system clogs up with jammed processes.  This is a mail server,
and sendmail stops accepting new mail once the load gets too high, so
I've managed to recover both times by killing off the amanda
processes.  Next time this happens I plan to be less flustered :-> and
hopefully will have better data about what's causing the blockage.

Paul
-- 
Paul Haldane
Computing Service
University of Newcastle





Re: Problems in dumping to holding disk

2001-08-15 Thread John R. Jackson

>... Any ideas on what to do next would be much appreciated.  ...

Upgrade to Amanda 2.4.2p2.  The "socket ignored" message is handled
(ignored) there.

I'm not so sure about the /dev/fd problem.  You may have to explicitly
exclude that in the dumptype.
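
Something along these lines in the dumptype ought to do it (just a sketch --
the dumptype name is made up, and the pattern is relative to the top of the
filesystem being backed up):

define dumptype root-gnutar {
    global
    program "GNUTAR"
    exclude "./dev/fd"
}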

>Nupur

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Problems in dumping to holding disk

2001-08-15 Thread Nupur Pande


Hello,

I just tried dumping to my holding disk, but got the following report from
amdump.  Any ideas on what to do next would be much appreciated.  Is it a
problem with tar?  I have downloaded the newer version of tar (tar-1.13.19).

Thanks,
Nupur


*** A TAPE ERROR OCCURRED: [no tape online].
Some dumps may have been left in the holding disk.
Run amflush to flush them to tape.
The next tape Amanda expects to use is: a new tape.

FAILURE AND STRANGE DUMP SUMMARY:
  spindletop / lev 0 FAILED [/usr/local/bin/tar returned 2]


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:03
Run Time (hrs:min)         0:38
Dump Time (hrs:min)        0:00       0:00       0:00
Output Size (meg)           0.0        0.0        0.0
Original Size (meg)         0.0        0.0        0.0
Avg Compressed Size (%)      --         --         --
Filesystems Dumped            0          0          0
Avg Dump Rate (k/s)          --         --         --

Tape Time (hrs:min)        0:00       0:00       0:00
Tape Size (meg)             0.0        0.0        0.0
Tape Used (%)               0.0        0.0        0.0
Filesystems Taped             0          0          0
Avg Tp Write Rate (k/s)      --         --         --


FAILED AND STRANGE DUMP DETAILS:

/-- spindletop / lev 0 FAILED [/usr/local/bin/tar returned 2]
sendbackup: start [spindletop:/ level 0]
sendbackup: info BACKUP=/usr/local/bin/tar
sendbackup: info RECOVER_CMD=/usr/local/bin/tar -f... -
sendbackup: info end
? gtar: ./dev/fd: Cannot savedir: Function not implemented
? gtar: ./dev/fd: Warning: Cannot savedir: Function not implemented
| gtar: ./dev/printer: socket ignored
| gtar: ./tmp/.TTX_128.83.166.6_0: socket ignored
| gtar: ./tmp/.Xsgishmsrv0: socket ignored
| gtar: ./tmp/.cam_server: socket ignored
| gtar: ./tmp/.camcacdb_sock: socket ignored
| gtar: ./tmp/.camcicdb_sock: socket ignored
| gtar: ./tmp/.cas_socket: socket ignored
| gtar: ./tmp/.cclconf_cdb.uds: socket ignored
| gtar: ./tmp/.ccms_cdb.uds: socket ignored
| gtar: ./tmp/.cmond_cdb.uds: socket ignored
| gtar: ./tmp/.cmond_client.uds: socket ignored
| gtar: ./tmp/.eventmond.cmd.sock: socket ignored
| gtar: ./tmp/.eventmond.events.sock: socket ignored
| gtar: ./tmp/.eventmond.info.sock: socket ignored
| gtar: ./tmp/.famAAAa000AS: socket ignored
| gtar: ./tmp/.famRAAa000AS: socket ignored
| gtar: ./tmp/.famSAAa000AS: socket ignored
| gtar: ./tmp/.famUAAa000AS: socket ignored
| gtar: ./tmp/.famVAAa000AS: socket ignored
| gtar: ./tmp/.fam_socket: socket ignored
| gtar: ./tmp/.imd_socket-6097-\:0.0: socket ignored
| gtar: ./tmp/.mediad_socket: socket ignored
| gtar: ./tmp/.rtmond_socket: socket ignored
| gtar: ./tmp/espdb.sock: socket ignored
| gtar: ./tmp/.X11-unix/X0: socket ignored
| gtar: ./tmp/.arraysvcs/lclsrvr.5434: socket ignored
| gtar: ./var/spool/lp/CMDSOCK: socket ignored
| Total bytes written: 4386416640 (4.1GB, 2.0MB/s)
? gtar: Error exit delayed from previous errors
sendbackup: error [/usr/local/bin/tar returned 2]
\


NOTES:
  planner: Forcing full dump of spindletop:/ as directed.


DUMP SUMMARY:
                                       DUMPER STATS               TAPER STATS
HOSTNAME     DISK        L  ORIG-KB  OUT-KB COMP%  MMM:SS   KB/s  MMM:SS   KB/s
--------------------------- ----------------------------------- --------------
spindletop   /           0   FAILED ---------------------------------------

(brought to you by Amanda version 2.4.2)





RE: taper: FATAL syncpipe_get: w: unexpected EOF

2001-08-15 Thread Charles

Hi John,

It is a 120M; it's HP's "8GB" tape for the DAT8i model we're using.

Thanks,
Charles

-Original Message-
From: John R. Jackson [mailto:[EMAIL PROTECTED]]
Sent: Monday, August 13, 2001 1:20 PM
To: Charles
Cc: [EMAIL PROTECTED]
Subject: Re: taper: FATAL syncpipe_get: w: unexpected EOF 


>Nope I'm sure we're using DDS-2 tapes, as they're purchased with the DAT
>drive.

What do they say on them?  For instance, 60M, 90M, 120M, etc?  As I
understand it (I deduced this just this weekend after you brought it
up) only 120M are DDS-2.

>I hope Amanda does not have a bug in 2.4.1p1 that treats a DDS-2 tape as a
>DDS-1?

Amanda has no idea what it's writing to.  It just issues standard write()
calls until the OS/kernel reports an error.  There is absolutely no
magic going on.
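
A side note: the tapetype length in amanda.conf only guides the planner; the
actual end of tape is whatever error comes back from write().  A DDS-2 entry
might look something like this (the numbers are only illustrative -- the
tapetype test program in the Amanda source tree can measure real ones for
your drive):

define tapetype HP-DDS2 {
    comment "HP DDS-2 120m cartridge, roughly 4 GB native"
    length 3900 mbytes
    filemark 100 kbytes
    speed 360 kbytes
}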

>Charles

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]




Re: FW: DailySet1 AMANDA MAIL REPORT FOR August 14, 2001

2001-08-15 Thread John R. Jackson

>...  below is the report sent by amanda, and the amdump.log.  ...

I didn't quite understand the E-mail report.  I don't see anything in
it about *why* /usr failed to write to tape.  And the amdump. file
you sent is truncated.  Did the run abort or was the machine rebooted?
Maybe what you got was the result of an amcleanup done afterward?

>I am not able to find the actual *.tar.gz files.  ...

What *.tar.gz files are you expecting to find?  Amanda does not store
images that way.  You might want to read:

  http://www.backupcentral.com/amanda.html

>I get a tape read error when trying to view
>the contents of the tape.  I tried tar tvf /dev/rmt/0bn .

The tapes Amanda writes are not tar format.  Again, read the URL
mentioned above.
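
If you really want to poke at the tape by hand, each dump is a separate tape
file with a 32 KB Amanda header in front of it, so something like the
following works for an uncompressed GNUTAR image (a sketch only -- amrestore
is the supported tool, and the device name is whatever yours is):

  mt -f /dev/rmt/0bn rewind
  dd if=/dev/rmt/0bn bs=32k count=1             # the Amanda tape label
  mt -f /dev/rmt/0bn fsf 1                      # move to the first dump image
  dd if=/dev/rmt/0bn bs=32k skip=1 | tar tvf -  # skip the header, list the image

The header in front of each image even spells out the restore command to use.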

>Brandon

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: undesired level 0 in degraded mode

2001-08-15 Thread peanut butter

->>In response to a message from John R. Jackson<<-

Hi, thanks for the reply.

I answer all your questions below but, first, the best clue that I
could find in the amdump file "causing" the problem is the following:

PROMOTING DUMPS IF NEEDED, total_lev0 0, balanced_size 3465982...
  promote: checking up to 82 days ahead
  promote: checking 1 day now
  promote: checking 2 days now
  promote: ics:/pulsar/wc too big: new size 8071860 total 8071860, bal size 3465982 thresh 173299
  promote: moving sabre:/biff up, total_lev0 3390062, total_size 3574492
analysis took 0.379 secs

This precedes any check for a tape and the switch to degraded mode.

> Look for "reserving ... for degraded-mode dumps" in your amdump file.
> How much did it reserve?  What does "amgetconf  reserve" say
> (just to confirm your grep).

reserving 8335017 out of 8335017 for degraded-mode dumps

cass150>amgetconf hea_daily reserve
100

> 
> Look for "result ... from taper:".  It should have reported the drive
> offline problem **before** driver said "dump of driver schedule after
> start degraded mode" and certainly before any FILE-DUMP started.
> 
> Looking at that degraded mode schedule, what does it say about sabre:/biff?

It shows it as zero.  I guess this is what needs to be figured out:

driver: result time 237.285 from taper: TAPE-ERROR [no tape online]
dump of driver schedule before start degraded mode:

  ics        /pulsar lv 4 t 1 s 2882 p 1
  ics        /pulsar/dg lv 1 t 1 s 222 p 1
  tgs5       /tom lv 1 t 14 s 551 p 1
  ics        /pulsar/wc lv 5 t 17 s 6132 p 1
  uz         /uzko lv 1 t 18 s 173 p 1
  sabre      /john lv 1 t 22 s 333 p 1
  cass20     / lv 1 t 26 s 15294 p 1
  pjs5       /pete lv 1 t 31 s 187 p 1
  wcfields   /rick lv 1 t 60 s 4135 p 1
  hexcalibur /phil lv 1 t 100 s 551 p 1
  cass20     /usr/local lv 1 t 1191 s 18905 p 1
  sabre      /biff lv 0 t 3104 s 3390094 p 1
  neutronsta /neutronstar lv 1 t 12650 s 134969 p 1

dump of driver schedule after start degraded mode:

  sabre      /biff lv 7 t 0 s 551 p 1
  ics        /pulsar lv 4 t 1 s 2882 p 1
  ics        /pulsar/dg lv 1 t 1 s 222 p 1
  tgs5       /tom lv 1 t 14 s 551 p 1
  ics        /pulsar/wc lv 5 t 17 s 6132 p 1
  uz         /uzko lv 1 t 18 s 173 p 1
  sabre      /john lv 1 t 22 s 333 p 1
  cass20     / lv 1 t 26 s 15294 p 1
  pjs5       /pete lv 1 t 31 s 187 p 1
  wcfields   /rick lv 1 t 60 s 4135 p 1
  hexcalibur /phil lv 1 t 100 s 551 p 1
  cass20     /usr/local lv 1 t 1191 s 18905 p 1
  neutronsta /neutronstar lv 1 t 12650 s 134969 p 1

Of note, the following immediately preceded the above:

driver: start time 237.284 inparallel 4 bandwidth 2000 diskspace 8335017 dir OBSOLETE datestamp 20010813
driver: drain-ends tapeq LFFO big-dumpers 1

The 20010813 backup is one I aborted due to this "level 0 to
holding disk" problem.  Possibly this is what the "OBSOLETE"
refers to.

-- 
Paul Yeatman   (858) 534-9896[EMAIL PROTECTED]
 ==
 ==Proudly brought to you by Mutt==
 ==



Re: undesired level 0 in degraded mode

2001-08-15 Thread peanut butter

Whoops.  I noticed belatedly that the level stated for sabre:/biff
in the "after start degraded mode" schedule is level 7.  There is a "0"
elsewhere in that row, and looking too quickly I mistook it for the dump
level column.  I noticed this after discovering that the file created on
the holding disk for that dump was "sabre._biff.7", which I learned by
grepping for "biff" in the entire amdump log file:

setting up estimates for sabre:/biff
setup_estimate: sabre:/biff: command 0, options:
got result for host sabre disk /biff: 0 -> 3390062K, 6 -> 3300252K, 7 -> 519K
  8: sabre  /biff
pondering sabre:/biff... next_level0 2 last_level 6 (not due for a full dump, picking an incr level)
  sabre /biff pri 1 lev 7 size 519
  promote: moving sabre:/biff up, total_lev0 3390062, total_size 3574492
sabre /biff 1 0 1970:1:1:0:0:0 3390062 3104 7 2001:8:10:18:36:32 519 0
  sabre      /biff lv 0 t 3104 s 3390094 p 1
  sabre      /biff lv 7 t 0 s 551 p 1
driver: send-cmd time 237.289 to dumper0: FILE-DUMP 00-1 /hea_holding_disk/new/20010813/sabre._biff.7 sabre /biff 7 2001:8:10:18:36:32 1073741824 DUMP 608 |;bsd-auth;index;

So why did amstatus report it as level 0?  If I had let the amdump
finish, I would be curious to know what level would have actually been
performed and what the final report would have stated.

Notice the "not due for a full dump, picking an incr level"
parenthetical statement.  Also note the "promote: moving
sabre:/biff up, total_lev0" line.

Paul



Re: undesired level 0 in degraded mode

2001-08-15 Thread John R. Jackson

>   promote: moving sabre:/biff up, total_lev0 3390062, total_size 3574492

That would not affect degraded mode.

>reserving 8335017 out of 8335017 for degraded-mode dumps
>
>cass150>amgetconf hea_daily reserve
>100

OK, that all fits and says no level 0's should be done in degraded mode.

>Shows it as zero.  ...

No, it shows it as level 7.  The first list is the queue before it
recomputes for degraded mode (which does show level 0).  The second list
is after.  In the "after" list, it says:

>  sabre  /biff lv 7 t 0 s  551 p 1

So it should have done a level 7.

So the next question is, what does the FILE-DUMP line look like for
this disk?  And where is it in the file in relation to the "start
degraded mode"?
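
Something like this against the amdump file will show the ordering (the file
name is whatever your run left behind):

  grep -n -e 'FILE-DUMP.*biff' -e 'start degraded mode' amdump.1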

Are we sure it's really doing (did) a level 0?  This all might be a silly
amstatus reporting bug rather than Amanda actually doing the wrong thing.

>...  Possibly this is what the "OBSOLETE" refers to.

The "OBSOLETE" is a placeholder in that log line for a parameter that is
no longer there.  It's just to make old parsers happy and may be ignored.

>Paul Yeatman

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



undesired level 0 in degraded mode

2001-08-15 Thread peanut butter

Hi, I'm trying to figure out why I still get level 0 dumps when in
degraded mode (no tape available).

At first I took the default of 100% for the "reserve" parameter, since
the resulting behavior, as I understand it, is what I want.  When Amanda
still ran level 0's on some machines in degraded mode, I explicitly set
reserve to 100.  I'm still getting level 0 backups to my holding disk.
Currently, no tape is loaded, yet one of the disks being dumped right now
is running at level 0.  (I don't want over 3 GB of a level 0 going to my
holding disk.)  Relevant commands and their output follow.

I'm running version 2.4.2p2, BTW.

Thanks, Paul

>grep -i reserve /usr/local/etc/amanda/hea_daily/amanda.conf
# incremental backups in this case, i.e., it will reserve 100% of the
# However, if you specify a different value for the `reserve'
# non-reserved portion of the holding disk.
# reserve 30 # percent
reserve 100 # percent

>grep -i tapedev /usr/local/etc/amanda/hea_daily/amanda.conf
# Some tape changers require tapedev to be defined; others will use
# tapedev "/dev/rmt/0bn"# the no-rewind tape device to be used
tapedev "/dev/nrst28"   # the no-rewind tape device to be used

>mt -f /dev/nrst28 status
/dev/nrst28: no tape loaded or drive offline

>amdump hea_daily &

>amstatus hea_daily
Using /home/amanda/hea/amdump from Mon Aug 13 13:01:24 PDT 2001

cass20:/                 1   15262k wait for dumping
cass20:/usr/local        1   18873k wait for dumping
hexcalibur:/phil         1     519k wait for dumping
ics:/pulsar              4    2850k wait for dumping
ics:/pulsar/dg           1     192k dump done (14:05:28), wait for writing to tape
ics:/pulsar/wc           5    6100k wait for dumping
neutronstar:/neutronstar 1  134937k dumping 32k (  0.02%) (14:05:27)
pjs5:/pete               1     155k wait for dumping
sabre:/biff              0 3390062k dumping 32k (  0.00%) (14:05:27)
sabre:/john              1     301k wait for dumping
tgs5:/tom                1     519k dumping 32k (  6.17%) (14:05:27)
uz:/uzko                 1     141k dumping 32k ( 22.70%) (14:05:29)
wcfields:/rick           1    4103k wait for dumping

-- 
Paul Yeatman   (858) 534-9896[EMAIL PROTECTED]
 ==
 ==Proudly brought to you by Mutt==
 ==



Re: Solaris 8 Server hangs during backup

2001-08-15 Thread John R. Jackson

>We have a highly subnetted configuration of Solaris 8 and 2.6 boxes, mostly
>E220R's. The subnets are connected via firewalls. Each subnet has its own
>Amanda server with an Exabyte Mammoth tape drive.  ...

Do the servers reach across the firewalls to back up clients "on the
other side"?  Or is that the point of having a tape drive in each subnet,
so backups stay inside a given firewall?

>We use hardware compression only. The Amanda is 2.4.2p1 on most nodes.
>...
>Originally, we seemed to have a problem with only one subnet, with a Solaris
>2.6 server, 2 Solaris clients, and 1 Solaris 8 client. The server would hang
>during the backup and required a poweroff reboot.  ...

Please believe me that I'm not just trying to pass the buck :-), but
Amanda cannot be the root of this problem.  Put another way, anything
you do to Amanda that gets this going is, at best, a workaround and
the real problem will still be there, waiting to bite you at the worst
possible time.

Amanda is pure application level code.  Any program that generates the
same set of circumstances (e.g. high network load, particular data
patterns, etc) would trigger the same problem.  If you have systems
crashing or hanging, something else (hardware or OS) is wrong.

>... Messages in the logs (not from amanda) indicate
>that the system is very busy (e.g. sendmail won't run the queue because the
>load average is too high.)  ...

How high is the load average getting?  Amanda is I/O bound, especially
on the server.  It should not be generating significant load (w.r.t.
"load average").  Are you certain nothing else was going on at the time?
Do you have "top" to see what the heavy hitters are when it starts to
go wrong?  Or there are other tools (even just a "ps") that do roughly
the same thing.

What kind of netstat numbers are you seeing during the bad times?  Any
high error/collision counts or excessive packets?

Are all your systems up to reasonably recent Solaris patch levels?

Have you tried doing several ftp's of roughly dump image size from
the client to the server (they can go to /dev/null on the server as an
initial test)?

What is maxdumps set to?  That would control how many backups were
running at one time on the client, which, in turn, would control how
many data streams were coming into the server.

How about inparallel?  That will also throttle how many dumpers are
active.
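
To spell those two out, they look like this (the values are only an example):

  # amanda.conf, global section
  inparallel 4          # how many dumpers the server runs at once

  # in the dumptype used for the busy client
  maxdumps 1            # backups run one at a time on any single client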

Is anything special about the two subnets with the problem?
Any particular type of network card, connection, media or topology?

>Eva Freer

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: IBM Autoloader amanda compatible ?

2001-08-15 Thread Thomas Hepper

Hi,
On Tue, Aug 14, 2001 at 06:36:19PM -0400, Vijay Parthasarathy wrote:
> Hi, 
> 
> I am planning to buy an IBM LTO Ultrium Autoloader.  Does amanda support
> this autoloader?
It depends on which OS you are using for it; as long as either mtx or the
Amanda built-in chg-scsi changer works on the OS you want to use, it should work.
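
With the built-in chg-scsi, for example, the server side ends up looking
something like this (device names and paths are only placeholders; chg-scsi
reads the drive and changer details from its own config file):

tpchanger "chg-scsi"
changerdev "/dev/sg1"
changerfile "/usr/local/etc/amanda/DailySet1/chg-scsi.conf"
tapedev "0"        # with chg-scsi this selects a config entry, not a device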


Thomas
-- 
  ---
  |  Thomas Hepper[EMAIL PROTECTED] |
  | ( If the above address fail try   ) |
  | ( [EMAIL PROTECTED])|
  ---



regenerating an amanda report

2001-08-15 Thread David Carter

Hello all,

I didn't receive a mail report this morning from amanda, although I can see
that the backups completed successfully, and the tape contents look good. 

Is there a way I can re-generate the amdump report that I usually get via
email?

David Carter
McLeodUSA Information Systems
[EMAIL PROTECTED]
281-465-1835




Re: reiserfs and tar slow / fail

2001-08-15 Thread Christopher McCrory

Hello...

 This is what I got last night:


file -- runtar.20010815001028.debug
runtar: debug 1 pid 920 ruid 33 euid 0 start time Wed Aug 15 00:10:28 2001
/bin/tar: version 2.4.2p2
running: /bin/tar: /bin/tar --create --file /dev/null --directory /www --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/eeyore_www_0.new --sparse --ignore-failed-read --totals .

file -- runtar.20010815010829.debug
runtar: debug 1 pid 1280 ruid 33 euid 0 start time Wed Aug 15 01:08:29 2001
gtar: version 2.4.2p2
running: /bin/tar: gtar --create --file - --directory /www --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/eeyore_www_0.new --sparse --ignore-failed-read --totals .


And the report showed: ... 0 [data timeout] (1:38:28)

? ? ?

color me confused
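
For what it's worth, the "[data timeout]" seems to point at dtimeout (the
dumper's data timeout) rather than etimeout, so one thing I may try next is
simply raising it -- something like this in amanda.conf (the values are just
a guess on my part):

etimeout 7200     # estimate timeout, already raised
dtimeout 7200     # data timeout; the default is much lower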


Christopher McCrory wrote:

> Hello...
> 
> first, thanks to all the helpful people on this list.  I rarely have 
> to post anything, I just look through the list archives and my questions 
> are already answered. :)
> 
> 
> I have a problem trying to back up a reiserfs partition with gnutar.
> I use dump for all the other systems.  The system is Red Hat Linux 7.1,
> with three filesystems, only one of which is reiserfs.
> 
> reiserfs: checking transaction log (device 08:11) ...
> Using tea hash to sort names
> reiserfs: using 3.5.x disk format
> ReiserFS version 3.6.25
> (no reiserfs debug)
> 
> tar-1.13.19-4
> amanda-client-2.4.2p2-1
> amanda-2.4.2p2-1
> 
> 
> The exact same data is also on other servers, so in the past I just
> skipped this filesystem.  Now I would like to convert some other
> filesystems to reiserfs, so I have to get the backups working.  The
> reiserfs filesystem isn't huge, but it does have lots of small files.
> (Try http://www.pricegrabber.com , see the pictures?  They might be
> coming off this reiserfs system.  Look around... there are a lot of
> images ;).  If I put it in my disklist, the estimate takes about an hour
> (etimeout 7200).  Then the backups start, the other filesystems work,
> but this one fails every time.
> 
> Looking back through my mailing list archives, I saw a thread regarding
> 'calcsize' and using that instead of GNUTAR to do estimates.  Is there
> another way to get faster gnutar performance?  Is there an amanda.conf
> switch to use calcsize, or do I use Gerhard's patch?
> 
> Any other ideas?
> 
> I'm going to put it back in for tonight's run to get all the debug info.
> 
> 
> 
> define dumptype my-global {
> index yes
> record yes
> priority high
> maxdumps 1
> compress client fast
> holdingdisk yes
> dumpcycle 2
> }
> 
> define dumptype my-type {
> my-global
> }
> 
> define dumptype my-type-reiserfs {
> my-global
> program "GNUTAR"
> }
> 
> 
> 
> bash-2.04$ cat runtar.20010806020712.debug
> runtar: debug 1 pid 31211 ruid 33 euid 0 start time Mon Aug  6 02:07:12 
> 2001
> gtar: version 2.4.2p2
> running: /bin/tar: gtar --create --file - --directory /www --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/eeyore_www_0.new --sparse --ignore-failed-read --totals .
> 
> 
> TIA
> 



-- 
Christopher McCrory
"The guy that keeps the servers running"
[EMAIL PROTECTED]
http://www.pricegrabber.com

I don't make jokes in base 13. Anyone who does should get help. 
--Douglas Adams




Re: taper: FATAL syncpipe_get: w: unexpected EOF

2001-08-15 Thread Marc SCHAEFER

Charles <[EMAIL PROTECTED]> wrote:
> It is a 120M, it is HP's "8GB" tape for the DAT8i model we're using.

Maybe check that you have disabled software compression: if you compress
both in software and in hardware, the result may end up too large for the
tape.
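
On the Amanda side that just means no software compression in the dumptype,
e.g. (a sketch; the dumptype name is invented):

define dumptype hw-comp-only {
    global
    compress none      # leave compression to the drive
}

How to turn hardware compression on or off at the drive itself depends on
your OS and device naming, so check the st/mt documentation for that side.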



Re: append-patch

2001-08-15 Thread Marc SCHAEFER

Yannick LeBlanc <[EMAIL PROTECTED]> wrote:
> i want to know if someone use the amanda-2.4.2p2-append-patch.bz2

Yes I do: two customer sites (hopefully three in a while), and a test site.

They use amcompare/amrio, the autochanger-capable amrecover, the sparse
fix, and the rest (from the Debian package), which has some additional
goodies.

If you are unsure, you can set can_append to 0 in the config file, and
then you have stock Amanda.
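
i.e. something like this in the patched amanda.conf:

can_append 0       # stock-Amanda behaviour, no appending to tapes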