Re: SunOS 5.8 global/user profile and amanda

2005-04-19 Thread Peter Mueller
Hi!

...

ld.so.1: /usr/local/libexec/sendsize: fatal: libgcc_s.so.1: open 
failed: No such file or directory

I had similar problems, not with amanda but with other packages, when
porting them from 5.6 (Solaris 2.6) to 5.8 (Solaris 8).
Playing around with the order of entries in LD_LIBRARY_PATH solved the
problem then.
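
For example, something like this (untested - /usr/local/lib is just an
assumption; find the real location of the library first, e.g. with
gcc -print-file-name=libgcc_s.so.1):

  # put the directory holding libgcc_s.so.1 in front
  LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
  export LD_LIBRARY_PATH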

Bye, Peter
WOTLmade


Re: Amanda vs Homegrown

2005-04-21 Thread Peter Mueller
Hi!
Mitch Collinsworth wrote:
On Thu, 21 Apr 2005, Mark Lidstone wrote:
It would still be worth pointing out what a huge security risk the rcp
command is, and if they insist on using their scripts at least get them
to remove the r* accounts setup stuff and use something like rsync over
an encrypted channel (why bother protecting the file on the disk if
you're going to potentially transfer it in plain text over the network).
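
For instance, a minimal sketch of the rsync-over-ssh idea (host and
paths are placeholders):

  rsync -az -e ssh /var/backups/dump.img backupuser@backuphost:/backups/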

So have you modified amanda to encrypt your network transfers?
It doesn't do that out of the box you know.
Don't know if there is an Amanda wish list somewhere, but using ssh, ssl
or something similar for the client-server connection would be a nice
goodie.

Btw. is there any concept for adding disaster recovery - even if it's
platform-specific and involves the usage of non-amanda tools?
Any papers or how-tos concerning this?

Bye, Peter
WOTLmade


Another "dump bigger then tape" question"

2005-04-23 Thread Peter Mueller
Hi!
I know this matter has been discussed here several times before,
but scanning the archive and reading the online docs I could not
find answers to my specific problem.
My situation:
Due to a non-optimal configuration I got into this:
.) I thought I used software compression only, but maybe
  the DDS3 tape drive tried to compress the data too.
.) My tapedrive configuration in amanda.conf is this:
define tapetype DDS3 {
   comment "HP C1537A"
   length 11738 mbytes
   filemark 392 kbytes
   speed 1018 kbytes
   lbl-templ "/etc/amanda/D1/HP-DAT-Elimpex.ps"
}
  As I understand it, this specifies only the uncompressed size
  of the DDS3 tapes, which I use...
.) I managed to produce a dump of this size in the
  holding disk:
uschi:/silo1/amanda/20050422 # du -ks *
12055028        sonja._silo1.0
.) Amanda dump and flush try to write this to the tape, and
  fail, because it is too big.
Ok. It IS TOO BIG, but now my questions:
.) Why did this dump even get created?
.) How do I remove it - I know I have to split this area to
  back it up with amanda, but I don't want to break the whole system
  and start from scratch...
.) What about switching DDSx DAT tapes from compressed to
  uncompressed? Somebody mentioned a script I can't find.
  I tried this procedure (/dev/nrmt0 is a HP DDS3 drive on Linux):
  mt -f /dev/nrmt0 datcompression 0    # turn off hardware compression
  mt -f /dev/nrmt0 rewind
  dd if=/dev/nrmt0 of=label.dat bs=32k count=1    # save the amanda label
  mt -f /dev/nrmt0 erase
  mt -f /dev/nrmt0 rewind
  dd if=label.dat of=/dev/nrmt0 bs=32k count=1    # write the label back, now uncompressed
  mt -f /dev/nrmt0 offline
  Will this be sufficient to switch the tapes to uncompressed ones?
.) Is there any way to insert a command in amanda that is executed
  "just before the tape is accessed", e.g. to switch the drive
  to uncompressed?
.) Why does amanda produce "empty" tapes if a dump is too big?
  It would be nice if the tape would at least be reused for the
  other dumps waiting to get taped ...
And now to something completely different:
Is there any way to execute a command "just before and after" a
"disk" is backed up? This would be nice e.g. to shut down and restart
servers which may change this data in the meantime ...
Hopefully somebody is able to help with this.
My amanda keeps producing empty tapes and I am running out of
holding disk space.
Peter Mueller
WOTLmade


Re: Another "dump bigger then tape" question"

2005-04-23 Thread Peter Mueller
Hi!
Thanks to Jon for the quick answer.
Jon LaBadie wrote:
Ok. It IS TOO BIG, but now my questions:
.) Why did this dump even get created?
   

A 12.1GB dump with a stated tape capacity of 11.7GB.
Not a large difference.  The pre-dump estimates are just that,
estimates.  Plus the capacity is not an 'absolute' number.
Now, if the estimate had been 24GB, you would have gotten a
message in your report about "way too big".  I don't know
what the fudge factor is and where amanda considers it too big.
 

That's what confused me. But amflush also tries to put it on tape,
and at this point, the 12GB > 11.7GB comparison is evident!
Furthermore, it does not try to put on the smaller dumps resting on
the holding disk, resulting from the follow-up amdump the
night after the disaster ...
Maybe the planner etc. could be made even more clever to
recognise this situation.
.) How do I remove it - I know I have to split this area to
 back it up with amanda, but I don't want to break the whole system
 and start from scratch...
   

Not certain, but probably 'rm' it and then run amcleanup.
Others may chime in here with alternatives.
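
Something like this (untested; the path comes from the du listing
above, the config name is a placeholder):

  rm /silo1/amanda/20050422/sonja._silo1.0
  amcleanup CONFIG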
 

Ok. I will move it somewhere else for the first try.
At the moment there is another amflush running which most
probably will not manage to put it on tape .. let's see what happens
if I stop this one.
For the split, you will have to delete the current DLE, or add
some indication to never back it up.  Then create two or more
new DLEs.  The new DLEs will have to get level 0 dumps the
first time amanda encounters them.  Perhaps introduce them on
different days.
 

Of course I did this with several other areas before ... even
this DLE is only part of the real /silo1 ...
.) What about switching DDSx DAT tapes from compressed to
 uncompressed? Somebody mentioned a script I can't find.
   

Do you have reason to believe your tapes have already been
written with hardware compression turned on?  That is the
time it is needed.
 

Definitely YES
Look for past-postings by Gene Heskett for the script.
 

Ok, now I found it - I had missed the "write some MBs to force a
buffer flush" part.
...  And there
is no way to stick in a command to turn it off after that.

That's what I was looking for.
.) Why does amanda produce "empty" tapes if a dump is too big?
 It would be nice if the tape would at least be reused for the
 other dumps waiting to get taped ...
   

Thinking it "might just fit" (only a bit too big) amanda writes
as much as it can, hits the end and fails.  Since there is not a
"complete" dump on the tape, that part of the tape is "wasted".
 

But that's the problem. As it was the only dump to be put on tape,
it is obvious now that this dump will NEVER fit on a tape, so I think
it should be marked to be left out on future runs of amdump or amflush.
Otherwise it keeps blocking amanda from writing anything to tape!
And now to something completely different:
Is there any way to execute a command "just before and after" a
"disk" is backed up? This would be nice e.g. to shut down and restart
servers which may change this data in the meantime ...
   

See above comment on crontab.
 

Not exactly, since there are several DLEs and I would prefer to be
triggered just before and after the single DLE is touched. Furthermore,
starting and stopping some servers has to be done on the client side,
so being triggered by the amanda client would be easier, since the
triggered code is then called on the relevant system already...
Or wrappers.  Replace amdump with a script.  Or replace the backup
program(s) you use, gnutar or dump, with a script.
If you don't want to turn things off except just before dumping,
i.e. not during estimates or during dumps of other DLEs,
you can add code to do that too.
  if (output device is /dev/null and not the tape device)
      this is an estimate, call the backup program itself
  else if (this is not a DLE of interest)
      call the backup program itself
  else
      the action: stop the service, call the backup program,
      save the return status, start the service, exit with
      the saved status
 

Ok. I think I got it. But does the amanda client honor the PATH
variable in its environment? This would be a way to replace gnutar
with a wrapper - it's not acceptable on my systems to replace the
general one, since other programs and users need the original ...
... seems to be time for some experiments and research; a first,
untested sketch of such a wrapper is below.
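
A minimal sketch (untested; the path of the real gnutar, the service
name, and the way estimates and the DLE are recognised are all
assumptions):

  #!/bin/sh
  # gnutar wrapper: stop a service while one particular DLE is dumped
  REAL_TAR=/bin/tar                  # assumed location of the real gnutar

  case "$*" in
  *"--file /dev/null"*)              # estimate run: no service handling
      exec "$REAL_TAR" "$@" ;;
  esac

  case "$*" in
  *silo1*)                           # the DLE of interest (placeholder)
      /etc/init.d/someservice stop   # hypothetical service
      "$REAL_TAR" "$@"
      status=$?
      /etc/init.d/someservice start
      exit $status ;;
  *)
      exec "$REAL_TAR" "$@" ;;
  esac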
Thanks again for the answers,
Peter
WOTLmade


Re: Is it possible to configure amanda and inetd just for localhost?

2005-04-25 Thread Peter Mueller
Hi from Austria too!
[EMAIL PROTECTED] wrote:
...
Don't do it!!!
...
Subject: Is it possible to configure amanda and inetd just for
localhost?
 

Is it a Linux box? Define some iptables rules or similar to
block access from other IP addresses.
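
For example (a sketch only; 10080/udp is the traditional amanda
service port - adjust if yours differs):

  # allow the amanda port from localhost only, drop everything else
  iptables -A INPUT -p udp --dport 10080 -s 127.0.0.1 -j ACCEPT
  iptables -A INPUT -p udp --dport 10080 -j DROP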
Bye, Peter
WOTLmade


Re: Pre/Post backup scripts on clients ?

2005-05-06 Thread Peter Mueller
Hi Bert!

Amanda – Control other processes - Mini How-To 
Nice thing, since most amanda admins come to this point sooner
or later ...
...
Step 2: Rebuild Amanda so that it uses your newly created script.
Download the sources, untar them to a directory. I'm sure there are
lots of documents already available on how to do this, so I won't go
into too much detail.

...
I think this is the main design flaw!
Why is the full path to tools like tar compiled into amanda?
It should be configurable where tar is placed, how it is called, etc.!
I don't like to compile amanda on my own. This is why I pay for a
Linux distribution like SuSE, where they take care of fiddling around
with all the cross-dependencies of hundreds of great Open Source
projects to make them work as a consistent system.
OK, this may not be the problem of the maintainer of a single
project, but anyway I think the path and name of a program used
by an executable, but NOT part of the project itself, should be
configurable by the administrator without compiling ...
Bye, Peter
WOTLmade


Re: very interesting little problem.

2005-05-09 Thread Peter Mueller
Hi!

Sysadmin #1 started amdump this morning to correct a failed dump from
last night. Sysadmin #2 needed to restore a file, and started
amrecover on the backup machine.  Sysadmin #2 was unaware of the
running backup process and used amtape to change slots.  OOPS, both
sysadmins got really weird errors, not quite descriptive of the
actual problem (that you can't dump and restore at the same time).
Maybe a check could be ...

Actually, I have two tapedrives (used with chg-multi) and I do sometimes
restore while dumping or flushing...  It is possible to dump and 
restore at the same time, when not using the same tapedrive.

So there could (or should?) be a "device lock"?
Maybe another entry for the Amanda wish list!?!
As my site is not that big, I didn't have such problems until now.
But I can easily imagine bigger installations and bigger
sysadmin groups where such problems may show up frequently.
Bye, Peter
WOTLmade


Re: Amanda backup of Read-Only NFS shares

2005-06-01 Thread Peter Mueller

Hi Mark!


I have a read-only NFS share provided by a Windows 2000 server machine 
and I want to back it up with my AMANDA cycle.  I keep getting these 
errors though:


...
? gtar: ./foo/bar: Warning: Cannot stat: Permission denied
? gtar: ./foo/baz: Warning: Cannot stat: Permission denied
...

Is there anything I can do to effectively back up these files without
getting these warnings, a tar parameter perhaps (I couldn't find one in
the man page for tar)? Setting the NFS share to be read-write isn't an
option.


As you already wrote, it's read-only, so tar can't set inode dates etc.
So you won't be able to do proper incremental backups.
You could maybe do full backups every amanda run, but as I don't use
this, I am not aware how to properly configure it.
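
A minimal, untested sketch of what such a dumptype might look like
(the "strategy noinc" option should force a level 0 every run; the
name and the rest of the options are placeholders):

  define dumptype nfs-ro-full {
     global
     program "GNUTAR"
     comment "always level 0 for the read-only NFS share"
     strategy noinc
  }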


Bye, Peter
WOTLmade



Re: out-of-tape on dump errors

2005-06-14 Thread Peter Mueller

Hi Mike!



Does anyone know why Amanda might generate an out of tape error in a
situation where it blatantly isn't anywhere even close to the end of the
tape?

 

As I understand it, Amanda interprets any non-ok result from the tape
device as "out of tape" - as most backup and tape handling software does.


Bye, Peter

WOTLmade



Re: out-of-tape on dump errors

2005-06-22 Thread Peter Mueller

Hi Shaun!


If you think your hard disk is the problem, you should check it.

Did you run an fsck (file system check) or similar
(depending on the filesystem and OS you are using)?

Did you find any errors in your system log?
(hard disk read or write errors, controller errors etc.)

Furthermore you could run a NON-destructive hard disk test.
How to do this also depends on your OS capabilities - I don't remember
your initial mail - did you write which OS and filesystem you
are using?
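
On Linux, for example (device names are placeholders; smartmontools
and badblocks assumed to be installed):

  # long SMART self-test of the whole drive
  smartctl -t long /dev/hda
  # or a non-destructive read-write surface scan (unmount the fs first)
  badblocks -nsv /dev/hda1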


Bye, Peter
WOTLmade


Shaun Feeley wrote:


Hi Peter,

I have started getting the out of tape error on a fairly large (~90GB)
and important partition.  After reading this thread and doing some
testing I do think there is actually something wrong with the partition.
I need to get this backing up again.  I am wondering if you have any
ideas as to what I could do to pinpoint what is wrong?

Thanks for your help, Shaun


On Tue, 2005-06-14 at 11:22 +0200, Peter Mueller wrote:
...


--
Peter Mueller
WOTLmade



Re: Unable to Flush Held Backup Jobs

2005-06-22 Thread Peter Mueller

Hi!



...
They may be incomplete dumps.  If there is a problem before a dump
completes you end up with a partial dump image in your holdingdisk
that never gets cleaned up. (To the developers: Since amflush seems
to know which ones are complete, can't it remove the incomplete ones?
Also, why does amflush mark a tape as used even when it doesn't write
any new data to it?)

 


Anyone know how to just delete the held backup jobs?
   



'rm' works for me.  Just make sure they aren't complete (try a verify of the
dump image) or are older than your rotation before removing them.
...
 



This would be very interesting. I had the situation before that
amflush could not flush anything to a tape, but it was marked as
"used" anyway - very irritating, and tape-consuming.

Furthermore it would be nice to have a "cancel" command to cleanly
remove dumps that can't be flushed - e.g. because they are bigger than
a tape. I used rm and reconfigured amanda to make the "disks" smaller,
but didn't know if I would mess up the whole thing ...


Bye, Peter

WOTLmade



Re: runtar: error [must be invoked by amanda]

2005-07-14 Thread Peter Mueller

Hi!


As this tends to become a general "where should amanda development go"
discussion, I throw in my opinion too:

.) The "localhost" issue comes up on a daily basis here.
  I agree with others here that there "has to be" a solution for how
  precompiled packages, not only on Linux, can be configured so that
  backup & recovery work properly without compiling anything!
  --> The localhost issue must be reconsidered more generally!
  (I think the strategy of recovery must be reconsidered from the
  ground up to make a simple "localhost" configuration usable, and
  maybe the operator has to enter the missing host identification at
  recovery time ??!??)

  I tend to use precompiled RPMs on production systems - faster, more
  reliable installs, easier to upgrade - and that's what I pay the
  distributor for: to do all the nasty fiddling around with autoconf
  and dependencies etc.

  Imagine collecting and compiling all the little bits and pieces that
  form a present-day Linux distribution by hand every time you set up
  a Linux box! Nobody expects to recompile ls if the hostname changes!

.) Things like the binding to binaries (dump, tar etc.) must be
  configurable without recompiling -> config files!

.) Client-side "plugins" that may be run before and after each "disk"
  must be easily configurable - wrappers are a hack! (and need
  recompiling - see above)

  A short-term solution would be to add standard wrappers to the
  tarball and the precompiled packages, which may then be changed by
  the operator ...

.) Furthermore I am missing a standard procedure for coping with a
  failed disk backup, bigger than the available tapes, sticking around
  in your holding disk - maybe only a documentation problem - but I
  think amadmin should have something like a cancel command?

.) And there is the long-promised "multi tape" solution for disks
  larger than tapes ...
  Be honest - disks grow ten times faster than tapes - we have to
  accept this - and furthermore most of us use some kind of
  virtualisation of disk space, which makes the filesystems to be
  managed independent of available disk sizes ...



Bye, Peter
WOTLmade


Re: VXA-V23 tape difficulties

2005-12-16 Thread Peter Mueller

Hi!



Paul Bijnens wrote:


...
The tapes with servo-tracks ARE able to detect EOT: that information
is found in their servo-tracks.  (I'm not sure how/if a damaged servo
track can result in premature End Of Tape.)

The helical scan type drives detect End Of Tape as a hard write error
on the same spot.  They cannot distinguish the real end of tape from
a hard write error.  Some of the drives seem to guess the end of tape
by measuring the spindle speed of one or both the reels, but can only
approximately indicate "near" end of tape.



Where are the old days, when ANSI tapes had these metal patches to indicate
BOT and EOT   ;-)


Bye, Peter
WOTLmade



Re: VXA-V23 tape difficulties

2005-12-16 Thread Peter Mueller

Hello!


Gene Heskett wrote:


Where are the old days, when ANSI tapes had these metal patches to
indicate BOT and EOT   ;-)

   

Dunno, but FYI, the DDS tapes have about 1/16" diameter holes for 
optical detection of BOT & EOT, about a foot back from the leader 
splice on either end.


I saw that when it failed and ripped the tape in two on one drive back
in Jurassic times.
 


So who's throwing away the information?
The tape drive, not sensing these holes?
The low-level driver, not recognising or not asking for the info?
The general Linux/Unix tape system, not recognising or asking for the info?


Bye, Peter
WOTLmade



Re: VXA-V23 tape difficulties

2005-12-19 Thread Peter Mueller

Hello!



I saw that when it failed and ripped the tape in two on one drive
back in Jurassic times.



So who's throwing away the information?
The tape drive, not sensing these holes?
The low-level driver, not recognising or not asking for the info?
The general Linux/Unix tape system, not recognising or asking for the
info?



I have the same question.  But no answer.
I can only say that my previous AIT-1 tape drive reported an IO-error
only 4 times in its lifetime (it's dead now), while several other
times it claimed "No space on device", not even near the end, while
rewriting the same tape a few days later could put much more data on it.

That tape drive was connected to Linux, but previously to a Solaris
machine, and both had the same view of IO-error versus 'No space on
device.'   I even made a graph of it with gnuplot:



I had similar experiences with several flavours of *ix,
Linux, Ultrix, OSF1/DigitalUnix/..., HP-UX, ... to name only a few.

It seems to be a conceptual thing of the tape interface NOT to
differentiate between EOT and read/write errors - or is it a
long-lasting bug carried from one implementation to the other ...

I seem to remember that VAX/VMS was much more clever in handling
tapes ... not only the old ANSI beasts, but also the "modern"
streaming ones.

You could even boot a mini-system from there!
(Not that I would want to do this now, it took ages ...)


Bye, Peter
WOTLmade



Intelligence of amflush

2006-02-23 Thread Peter Mueller

Hi!


It happened again that a "disk" grew over the length of a tape, and
so its backup got stuck on the holding disk.

And as always, it happened when I was out of the office
for some days...

I know it's my job as the sysadmin to split it into smaller
pieces ...

But it would be "nice" if amflush could recognise this
situation and would try to flush the other backups on the holding
disk once it could not flush the big one, instead of trying to flush
the one that gave an error last time again and again ... producing
empty tapes, while amdump fills up the holding disk with level 2 and 3
backups every night ...


Bye, Peter
WOTLmade


Re: Intelligence of amflush

2006-02-23 Thread Peter Mueller

Hi Paul!


Thanks for the answer!


Paul Bijnens wrote:


... What version of amanda is that?


Amanda-2.4.4p3


Did you specify "taperalgo largestfit" in the amanda.conf? Or any other?


I will check that; probably not, because I don't remember this parameter.


How many "runtapes" do you have?


1, a DDS3 DAT drive with DDS3 DAT tapes.


I guess that if you have only one runtape, and autoflush, then amanda
starts a flush while doing the estimates for the current run.  It
could well be that amanda does indeed start the large, probably
not-fitting image (because that is the only one to choose from, even
when largestfit is chosen), before the first nightly dump image is
finished.  In that case, it will indeed fill up the only tape.


Partly, yes. My holding disk is big enough, so the amdump runs after
the one that produced the "too big" image produced 2nd and 3rd level
dumps, but they all stayed on the disk, because of the one that's too
big and always chosen to be flushed first.

My colleague, who was at least trained to read the amanda mails, swap
tapes and start an amflush by WebMin commands (home-made), did what
he was trained to do and tried to flush after he got the "out of tape"
message, but this didn't work ... see above.


When there is only one image to choose from, Amanda will take that one,
even if it will probably not fit.  Changing that will make many other
users of Amanda unhappy, e.g. those using hardware compression because
their tape length is just a wild guess of the truth, or even me, because
I underestimate the tape length a bit, so that amanda has a few percent
margin with estimates being smaller than real dumps (that's why I get
tapes filled with 106%).


That's perfectly OK. But in my case there were about 25 images to
choose from.

I was thinking about an "error recovery strategy": by "remembering"
that this image introduced an error last time, so if there are several
others, skip it first and give the others a try, so that they get a
chance to find their way to the tape ...


But anyway, I have already changed the disklist, so hopefully in two
or three days the new split-up "disks" will be backed up ... until it
strikes next time.



Bye, Peter
WOTLmade



Re: Intelligence of amflush

2006-02-23 Thread Peter Mueller

Hi again!



Did you specify "taperalgo largestfit" in the amanda.conf? Or any other?



I will check that, probably not, because I dont remember this parameter.


It was missing, which means "first" - I changed this to "smallest" to
give the 2nd and 3rd level images a chance to get through ..
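
(i.e. one line in amanda.conf:

  taperalgo smallest

- the other documented values, as far as I know, are first, firstfit,
largest, largestfit and last.)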

I taped the "too big" image by hand.

But now I am stuck with a question I didn't get an answer to when I
posted it here last time:

What's the correct way to remove an image from the holding disk that
can't be backed up by using amflush (because it's too big!?!):

-) Is it sufficient to remove it with unix rm?
-) Is there any command, e.g. with amadmin, to remove it?
-) Is there any way to tell amanda that this image does not exist, or
  that it's backed up (by hand to tapes not labeled by amanda in my
  case)?


Maybe one of the amanda gurus could give me a pointer to where in the
docs this is discussed ...


Bye, Peter
WOTLmade


Re: Intelligence of amflush

2006-02-23 Thread Peter Mueller

Hi Paul!


Paul Bijnens wrote:


On 2006-02-23 14:18, Peter Mueller wrote:



Did you specify "taperalgo largestfit" in the amanda.conf? Or any 
other?




I will check that; probably not, because I don't remember this
parameter.



It was missing, which means "first" - I changed this to "smallest" to
give the 2nd and 3rd level images a chance to get through ..



The "largestfit" reverts to "smallest" when nothing fits (because
that one has the most chance of fitting in the unknown gap near the
end of tape).

When start taping "smallest" in the beginning, you got only large
ones left near the end of a tape, and that means you loose a lot of
the tape capacity, because the failed image has to be started all
over again on the next tape.


OK, I will try largestfit, but in normal situations all my dumps fit
on one tape anyway, as my setup is rather simple, without a changer,
and optimised to use this one tape each night ...
Normally I tend to split up disks so that they are definitely smaller
than the tape ... but if users change usage policy without notifying
me - it fails ...

But now I am stuck with a question I didn't get an answer to when I
posted it here last time:

What's the correct way to remove an image from the holding disk that
can't be backed up by using amflush (because it's too big!?!):

-) Is it sufficient to remove it with unix rm?



Yes. And in that case you'd better schedule a level 0 again for
that DLE.

-) Is there any command, e.g. with amadmin, to remove it?



no, but it would be a nice addition.


-) Is there any way to tell amanda that this image does not exist, or
  that it's backed up (by hand to tapes not labeled by amanda in my
  case)?



no. but would be nice.


Thanks!


Bye, Peter
WOTLmade



Re: Intelligence of amflush

2006-02-23 Thread Peter Mueller

Hi Paul!


Paul Bijnens wrote:


...
The problem here is: when you had a real tape error, and then try
to flush to a new tape, amanda would refuse to flush that one image,
because it got an error last time?  Not good.  So how do you see if
you got a real error or just EOT?  Explain it to me, because I surely
want to know.
AFAICT most drivers do not distinguish between a write error and EOT.
(At least that is my experience.)


I think we discussed EOT versus real tape errors here some weeks ago ...

OK, maybe I expect a little too much guessing - my thought was that the
image that failed last time gets scheduled at the end, not considered
in the choosing and ordering process for each flush, so when the flush
is done, the others have their chance.

If a tape error was the reason why the flush failed, the following
run - expecting the next tape is OK - will work anyway, only
the order of the images on tape is changed. But if the image was the
reason why the flush failed, all the other images may move to tape, and
the erroneous one will probably trigger its error again.

Size may not be the only reason why an image fails to tape - there may
be a hardware error, e.g. disk or controller, which prevents it from
taping - a bad thing anyway - but especially in such situations it
would be fine if at least the rest is taped.

Another problem in the flush strategy is mentioned in a reply by Matt
Hyclak:

The top-level criterion for ordering the images to flush seems to be
the date.

As in my case, where one big image was left over from an amdump run:
all following amdump runs produce 2nd and 3rd level backups to fit
into the remaining holding disk.

The flush runs, whether triggered by amdump or manually, continue to
try the one big image first, because it's from an older amdump run ...

Bye, Peter
WOTLmade



Re: Out of space problem

2008-05-06 Thread Peter Mueller

Hi Nigel!

Nigel Allen wrote:

I'm experiencing an odd problem with a USB DAT drive that keeps 
running out of space. Apologies for the length of the post.


The drive is supposed to be 36 / 72 GB.


...


Output Size (meg)          33927.1    33927.1       0.0
Original Size (meg)        50710.1    50710.0       0.0
Avg Compressed Size (%)       66.9       66.9      10.0   (level:#disks ...)



...


define tapetype HP-DAT72 {
comment "HP DAT72 USB with hardware compression on"
length 72 G
}



...


define dumptype custom-compress {
   global
   program "GNUTAR"
   comment "Dump with custom client compression"
   exclude list "/etc/amanda/exclude.gtar"
   compress client custom
   client_custom_compress "/usr/bin/bzip2"
}



...

mail.airsolutions.com.au  mapper/VolGroup00-LogVol00  custom-compress

mail.airsolutions.com.au  sda1  custom-compress



Hardware & Software - compression together is allways a bad combination!

bzipped 34G would fit on d 36G tape but the hardware-commpression of the
tape will blow them up to much more so you run out of tape.

Disable hardware-compression - see howto's and wikis - or dont compress 
your data

and rely on hardware-compression only (not recommended).
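
For example (a sketch only - the device name is a placeholder, and the
mt syntax is the same one I used on my Linux box earlier in this list):

  mt -f /dev/nst0 datcompression 0

and then a tapetype with the native capacity:

  define tapetype HP-DAT72-nocomp {
  comment "HP DAT72 USB, hardware compression off"
  length 36 G
  }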


Bye, Peter
WOTLmade