Re: Barcode support outside of chg-scsi

2001-02-01 Thread Jason Hollinden

That's essentially the same thing I had to do: the timeout and the change to
the wording returned from mtx.  The only other thing I had to change was the
case test that read:

[$firstslot-$lastslot])
  load=$1
  ;;

Because of the way that [...] pattern is worded, the highest slot number you
can have is 9.  Since I have 30, I had to change it to the following:

   # Match the slot argument numerically, then range-check it by hand.
   $[${whichslot}])
           if [ $whichslot -gt $lastslot ] || [ $whichslot -lt $firstslot ]; then
                   echo "0 Slot $whichslot is out of range ($firstslot - $lastslot)"
                   exit 1
           else
                   loadslot=$whichslot
           fi
           ;;


This was because if your $lastslot > 9, the pattern doesn't work right.  For me,
$lastslot is 30, so the pattern expanded to "[1-30])", which in shell pattern
matching means "one character from 1-3, or 0".  It took me forever to notice
this, and I pulled some hair out wondering why I couldn't load a tape higher
than slot 3.
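
To see the pitfall in isolation, here is a minimal sketch (slot values picked
arbitrarily) contrasting the bracket pattern with an explicit numeric check:

    #!/bin/sh
    # Minimal sketch of the case-pattern pitfall; values are arbitrary.
    firstslot=1
    lastslot=30

    for whichslot in 3 7 10 30; do
        case $whichslot in
        [$firstslot-$lastslot])   # expands to [1-30]: one char from {0,1,2,3}
            echo "pattern: slot $whichslot accepted";;
        *)
            echo "pattern: slot $whichslot rejected";;
        esac

        # An arithmetic test handles any number of digits.
        if [ "$whichslot" -ge "$firstslot" ] && [ "$whichslot" -le "$lastslot" ]; then
            echo "test:    slot $whichslot accepted"
        fi
    done

Only slot 3 gets past the bracket pattern; the arithmetic test accepts all four.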

Thanks...

On Thu, 01 Feb 2001, Doug Munsinger wrote:

> Jason -
> 
> having just been through "changer hell" I'm copying some mail I just sent 
> to Joe Rhett re: chg-zd-mtx and barcodes -
> MAYBE it will help resolve what you are going through now.
> 
> I fought with chg-scsi and chg-zd-mtx for about a week before dropping back 
> and getting chg-manual to fully work first.
> 
> Here's what I just finished:
> Hope this helps.
> 
> --doug
> 
> ___
> 
> 
> Joe -
> 
> I downloaded the improved mtx script from the link you gave in this post -
> MUCH THANKS, as the site also gave specific instructions for placing it
> at /client-src/chg-zd-mtx.sh.in
> and then running configure.  This came very close to working as is.
> 
> The timing worked well as I was already re-installing 2.4.2p1 upgrades 
> anyway...
> 
> I found two flaws in the code for my specific tape drive and changer.
> I have an Ecrix VXA autopak library with a single Ecrix VXA tape unit 
> installed - with a barcode reader.
> 
> I managed to get chg-manual running consistently and getting good backups 
> first, which let me know that the tape drive and amanda were doing well, 
> before attempting for a second time to get the changer to cooperate.
> I also installed mtx 1.2.10 and tested and verified this would work... So 
> at least I could get an e-mail that the tape needed changing and then log 
> in and accomplish that from home...
> 
> What I changed to make the chg-zd-mtx script work was to add a TIMEOUT 
> variable to use in a sleep command in loadslot as follows:
>  # Now rewind and check
>  echo " -> rewind $loadslot" >> $DBGFILE
>  echo " -> sleeping TIMEOUT: $TIMEOUT seconds..." >> $DBGFILE
>  sleep $TIMEOUT
>  echo " -> rewind..." >> $DBGFILE
>  $MT $MTF $tape rewind
>  echo "$loadslot" > $slotfile
>  echo "$loadslot $tape"
>  exit 0
> 
> Otherwise I would consistently get
> "/dev/nst0: Input/output error"
> on any rewind or tape change by the script
> 
> This MOSTLY allowed the script to work, except -
> I have a barcode reader.
> The result of an mtx status command with a barcode reader is different than 
> without - at least when the tapes have barcodes -
> here's what the mtx -f /dev/sg1 status result looks like (with barcode)
> 
> Data Transfer Element 0:Full (Storage Element 11 Loaded):VolumeTag = 09
>    Storage Element 1:Full :VolumeTag=00
>    Storage Element 2:Full :VolumeTag=01
>    Storage Element 3:Full :VolumeTag=02
>    Storage Element 4:Full :VolumeTag=03
>    Storage Element 5:Full
>    Storage Element 6:Full
>    Storage Element 7:Full
>    Storage Element 8:Full
>    Storage Element 9:Full
>    Storage Element 10:Full
>    Storage Element 11:Empty
>    Storage Element 12:Full
>    Storage Element 13:Full
>    Storage Element 14:Full
>    Storage Element 15:Full
> 
> which caused the sed command to give back:
> [amanda@ford amanda]$ mtx -f /dev/sg1 status | sed -n 's/Data Transfer Element 0:Empty/-1/p;s/Data Transfer Element 0:Full (Storage Element \(.\) Loaded)/\1/p'
> 1:VolumeTag = 00
> which won't work...
> 
> here's the change:
> readstatus() {
>     # "Full" or "Empty" field from the Data Transfer Element line
>     EMP_FULL=`$MTX status | grep "Data Transfer Element" | awk '{ print $4 }' | awk -F: '{ print $2 }'`
>     if [ $EMP_FULL = "Empty" ]; then
>         usedslot="-1"
>     fi
>     if [ $EMP_FULL = "Full" ]; then
>         # slot number from "(Storage Element NN Loaded)"
>         usedslot=`$MTX status | grep "Data Transfer Element" | awk '{ print $7 }'`
>     fi
> }
> 
> 
> I'll include the full script below this.
> 
> I thought this might come in handy and might also be something you would 
> want to include in the upkeep of chg-zd-mtx.
> 
> --doug
> 
> Doug Munsinger
> 
> egenera, Inc.
> [EMAIL PROTECTED]
> 563 Main Street, Bol

Problems with amdump

2001-02-01 Thread Christopher Wargaski

Hello folks--

Problems with amdump from HP-UX 11.0 server to BSD/OS 4.2
client (named ritchie).

The 'amcheck' command succeeds.  The 'amdump' connects
and it looks like the index is started, but that is about it.  After 10
minutes, the 'amdump' quits.

On the client, the sendbackup.debug file says after the runtar
command:

sendbackup [pid]: index tee cannot write [Broken pipe]
sendbackup [pid]: error [/usr/local/bin/tar got signal 13]

I read in the archives that this error is usually caused by bad
tapes, write protection, etc.  I know that the tape is in good condition,
because I tar'd to it before labeling.

The /var/adm/amanda/RitchieWeekly/index/ritchie/_ directory
does not actually contain an index.

The amdump.1 file has the following errors:

driver: result time 548.663 from dumper0: FAILED 00-1 [data read: Connection reset by peer]
dumper: kill index command

This is my first backup over the network, so I am a little clueless.

cjw





Re: Barcode support outside of chg-scsi

2001-02-01 Thread Doug Munsinger

Jason -

having just been through "changer hell" I'm copying some mail I just sent 
to Joe Rhett re: chg-zd-mtx and barcodes -
MAYBE it will help resolve what you are going through now.

I fought with chg-scsi and chg-zd-mtx for about a week before dropping back 
and getting chg-manual to fully work first.

Here's what I just finished:
Hope this helps.

--doug

___


Joe -

I downloaded the improved mtx script from the link you gave in this post -
MUCH THANKS, as the site also gave specific instructions for placing it
at /client-src/chg-zd-mtx.sh.in
and then running configure.  This came very close to working as is.

The timing worked well as I was already re-installing 2.4.2p1 upgrades 
anyway...

I found two flaws in the code for my specific tape drive and changer.
I have an Ecrix VXA autopak library with a single Ecrix VXA tape unit 
installed - with a barcode reader.

I managed to get chg-manual running consistently and getting good backups 
first, which let me know that the tape drive and amanda were doing well, 
before attempting for a second time to get the changer to cooperate.
I also installed mtx 1.2.10 and tested and verified this would work... So 
at least I could get an e-mail that the tape needed changing and then log 
in and accomplish that from home...

What I changed to make the chg-zd-mtx script work was to add a TIMEOUT 
variable to use in a sleep command in loadslot as follows:
 # Now rewind and check
 echo " -> rewind $loadslot" >> $DBGFILE
 echo " -> sleeping TIMEOUT: $TIMEOUT seconds..." >> $DBGFILE
 sleep $TIMEOUT
 echo " -> rewind..." >> $DBGFILE
 $MT $MTF $tape rewind
 echo "$loadslot" > $slotfile
 echo "$loadslot $tape"
 exit 0

Otherwise I would consistently get
"/dev/nst0: Input/output error"
on any rewind or tape change by the script
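
A possibly more robust variant, sketched here with the script's own $MT, $MTF,
$tape, $TIMEOUT and $DBGFILE variables (untested), would be to poll until the
drive answers instead of sleeping for a fixed time:

 # Sketch: wait until the drive is ready, up to roughly $TIMEOUT seconds.
 waited=0
 until $MT $MTF $tape rewind > /dev/null 2>&1; do
         waited=`expr $waited + 1`
         if [ $waited -ge $TIMEOUT ]; then
                 echo " -> drive still not ready after $TIMEOUT seconds" >> $DBGFILE
                 break
         fi
         sleep 1
 done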

This MOSTLY allowed the script to work, except -
I have a barcode reader.
The result of an mtx status command with a barcode reader is different than 
without - at least when the tapes have barcodes -
here's what the mtx -f /dev/sg1 status result looks like (with barcode)

Data Transfer Element 0:Full (Storage Element 11 Loaded):VolumeTag = 09
   Storage Element 1:Full :VolumeTag=00
   Storage Element 2:Full :VolumeTag=01
   Storage Element 3:Full :VolumeTag=02
   Storage Element 4:Full :VolumeTag=03
   Storage Element 5:Full
   Storage Element 6:Full
   Storage Element 7:Full
   Storage Element 8:Full
   Storage Element 9:Full
   Storage Element 10:Full
   Storage Element 11:Empty
   Storage Element 12:Full
   Storage Element 13:Full
   Storage Element 14:Full
   Storage Element 15:Full

which caused the sed command to give back:
[amanda@ford amanda]$ mtx -f /dev/sg1 status | sed -n 's/Data Transfer Element 0:Empty/-1/p;s/Data Transfer Element 0:Full (Storage Element \(.\) Loaded)/\1/p'
1:VolumeTag = 00
which won't work...

here's the change:
readstatus() {
    # "Full" or "Empty" field from the Data Transfer Element line
    EMP_FULL=`$MTX status | grep "Data Transfer Element" | awk '{ print $4 }' | awk -F: '{ print $2 }'`
    if [ $EMP_FULL = "Empty" ]; then
        usedslot="-1"
    fi
    if [ $EMP_FULL = "Full" ]; then
        # slot number from "(Storage Element NN Loaded)"
        usedslot=`$MTX status | grep "Data Transfer Element" | awk '{ print $7 }'`
    fi
}
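
For anyone who would rather stay with sed, a sketch of a pattern that tolerates
multi-digit slot numbers and the trailing VolumeTag field (untested against
other mtx versions) would be:

    $MTX status | sed -n \
        -e 's/^Data Transfer Element 0:Empty.*/-1/p' \
        -e 's/^Data Transfer Element 0:Full (Storage Element \([0-9][0-9]*\) Loaded).*/\1/p'

The trailing .* throws away the ":VolumeTag = ..." part, and the bracket
expression captures "11" as well as "1".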


I'll include the full script below this.

I thought this might come in handy and might also be something you would 
want to include in the upkeep of chg-zd-mtx.

--doug

Doug Munsinger

egenera, Inc.
[EMAIL PROTECTED]
563 Main Street, Bolton, MA  01740
Tel: 508-786-9444 ext. 2612
OR 508-481-5493
fax: 978 779-9730





At 03:54 PM 1/31/2001 -0800, Joe Rhett wrote:
>Are you using the latest MTX version?
>
>Is the problem mtx itself? (Can you run "mtx load x", "mtx unload x" etc?
>
>Or is the problem with the changer script? Are you using the latest
>version? You can get it at
> http://www.noc.isite.net/?Projects
>
>On Wed, Jan 31, 2001 at 07:32:44PM -, [EMAIL PROTECTED] 
>wrote:
> > I'm new to amanda and really can use some help installing a new
> > changer. The new unit is a Overland Minilibrary (15-slot) with 1
> > DLT-7000 drive. Our old unit is working fine but our filesystems have
> > grown a lot. The new unit is a model 7115.
> >
> > The problem appears to be my mtx configuration. Any help is greatly
> > appreciated!!!
> >
> > Sam Lauro
>
>--
>Joe Rhett Chief Technology Officer
>[EMAIL PROTECTED]  ISite Services, Inc.
>
>PGP keys and contact information:  http://www.noc.isite.net/Staff/


Full revised chg-zd-mtx script:

#!/bin/sh
#
# Exit Status:
# 0 Alles Ok
# 1 Illegal Request
# 2 Fatal Error
#
# Contributed by Eric DOUTRELEAU <[EMAIL

Re: Need some help with a new changer

2001-02-01 Thread Doug Munsinger

Joe -

I downloaded the improved mtx script from the link you gave in this post -
MUCH THANKS, as the site also gave specific instructions for placing it
at /client-src/chg-zd-mtx.sh.in
and then running configure.  This came very close to working as is.

The timing worked well as I was already re-installing 2.4.2p1 upgrades 
anyway...

I found two flaws in the code for my specific tape drive and changer.
I have an Ecrix VXA autopak library with a single Ecrix VXA tape unit 
installed - with a barcode reader.

I managed to get chg-manual running consistently and getting good backups 
first, which let me know that the tape drive and amanda were doing well, 
before attempting for a second time to get the changer to cooperate.
I also installed mtx 1.2.10 and tested and verified this would work... So 
at least I could get an e-mail that the tape needed changing and then log 
in and accomplish that from home...

What I changed to make the chg-zd-mtx script work was to add a TIMEOUT 
variable to use in a sleep command in loadslot as follows:
 # Now rewind and check
 echo " -> rewind $loadslot" >> $DBGFILE
 echo " -> sleeping TIMEOUT: $TIMEOUT seconds..." >> $DBGFILE
 sleep $TIMEOUT
 echo " -> rewind..." >> $DBGFILE
 $MT $MTF $tape rewind
 echo "$loadslot" > $slotfile
 echo "$loadslot $tape"
 exit 0

Otherwise I would consistently get
"/dev/nst0: Input/output error"
on any rewind or tape change by the script

This MOSTLY allowed the script to work, except -
I have a barcode reader.
The result of an mtx status command with a barcode reader is different than 
without - at least when the tapes have barcodes -
here's what the mtx -f /dev/sg1 status result looks like (with barcode)

Data Transfer Element 0:Full (Storage Element 11 Loaded):VolumeTag = 09
   Storage Element 1:Full :VolumeTag=00
   Storage Element 2:Full :VolumeTag=01
   Storage Element 3:Full :VolumeTag=02
   Storage Element 4:Full :VolumeTag=03
   Storage Element 5:Full
   Storage Element 6:Full
   Storage Element 7:Full
   Storage Element 8:Full
   Storage Element 9:Full
   Storage Element 10:Full
   Storage Element 11:Empty
   Storage Element 12:Full
   Storage Element 13:Full
   Storage Element 14:Full
   Storage Element 15:Full

which caused the sed command to give back:
[amanda@ford amanda]$ mtx -f /dev/sg1 status | sed -n 's/Data Transfer Element 0:Empty/-1/p;s/Data Transfer Element 0:Full (Storage Element \(.\) Loaded)/\1/p'
1:VolumeTag = 00
which won't work...

here's the change:
readstatus() {
    # "Full" or "Empty" field from the Data Transfer Element line
    EMP_FULL=`$MTX status | grep "Data Transfer Element" | awk '{ print $4 }' | awk -F: '{ print $2 }'`
    if [ $EMP_FULL = "Empty" ]; then
        usedslot="-1"
    fi
    if [ $EMP_FULL = "Full" ]; then
        # slot number from "(Storage Element NN Loaded)"
        usedslot=`$MTX status | grep "Data Transfer Element" | awk '{ print $7 }'`
    fi
}


I'll include the full script below this.

I thought this might come in handy and might also be something you would 
want to include in the upkeep of chg-zd-mtx.

--doug

Doug Munsinger

egenera, Inc.
[EMAIL PROTECTED]
563 Main Street, Bolton, MA  01740
Tel: 508-786-9444 ext. 2612
OR 508-481-5493
fax: 978 779-9730





At 03:54 PM 1/31/2001 -0800, Joe Rhett wrote:
>Are you using the latest MTX version?
>
>Is the problem mtx itself? (Can you run "mtx load x", "mtx unload x" etc?
>
>Or is the problem with the changer script? Are you using the latest
>version? You can get it at
> http://www.noc.isite.net/?Projects
>
>On Wed, Jan 31, 2001 at 07:32:44PM -, [EMAIL PROTECTED] 
>wrote:
> > I'm new to amanda and really can use some help installing a new
> > changer. The new unit is a Overland Minilibrary (15-slot) with 1
> > DLT-7000 drive. Our old unit is working fine but our filesystems have
> > grown a lot. The new unit is a model 7115.
> >
> > The problem appears to be my mtx configuration. Any help is greatly
> > appreciated!!!
> >
> > Sam Lauro
>
>--
>Joe Rhett Chief Technology Officer
>[EMAIL PROTECTED]  ISite Services, Inc.
>
>PGP keys and contact information:  http://www.noc.isite.net/Staff/


Full revised chg-zd-mtx script:

#!/bin/sh
#
# Exit Status:
# 0 Alles Ok
# 1 Illegal Request
# 2 Fatal Error
#
# Contributed by Eric DOUTRELEAU <[EMAIL PROTECTED]>
# This is supposed to work with Zubkoff/Dandelion version of mtx
#
# Modified by Joe Rhett <[EMAIL PROTECTED]>
# to work with MTX 1.2.9 by Eric Lee Green http://mtx.sourceforge.net
#
# modified 010201 by doug munsinger ([EMAIL PROTECTED]) to
#   1) add timeout for loadslot routine at rewind command to
#   remove "/dev/nst0: Input/output error" see TIMEOUT below to set
#   2) re-write the readstatus "sed" c

Re: amrecover: cannot connect to host (connection refused)

2001-02-01 Thread John R. Jackson

After looking at this some more, I can't figure out why we are not
using the Amanda stream_client function which already does all the right
things (and does **not** bind specific host addresses).  So if someone
who's having this problem could give the following a try instead of the
previous attempt and let me know, I'd appreciate it.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]

 amrecover.diff


Re: Speeding up the dumpprocess ?

2001-02-01 Thread Johannes Niess

Gerhard den Hollander <[EMAIL PROTECTED]> writes:

> * Johannes Niess <[EMAIL PROTECTED]> (Thu, Feb 01, 2001 at 04:01:03PM +0100)
> 
> > My first thought was to disable compression, but that's already
> > done. What's your network technology? 2 MBytes/sec is not the optimum
> 
> They're all local disks
> (Wide scsi 2, so that's 80 Mbaud IIRC)
> 
> > On the other hand disk speed might be the bottle neck. I'd run bonnie
> > on the clients and the server holding disk. I assume you've checked
> > SCSI settings/errors/DMA for IDE controllers etc. of the holding disk.

Let's do the math:

We need 10 M/s sustained transfer rate from holding disk to the tape
and 10 M/s from data disk to holding disk. An 80 M/s SCSI bus should be
able to do that.

20 M/s with a lot of seeks is quite a load for a single holding
disk. Depending on your holding disk hardware, I'd guesstimate 3 M/s as a
reasonable throughput under these circumstances. Two simultaneous runs
of bonnie (read/write) on the same disk could give reasonably good
values. How does amanda determine the block size of writes/reads? I
hope that there are large buffers on all sides between the client, server
and tape-writing parts. RAID in striping (or concatenation) mode
might help on the holding disks. Different physical holding disks
might do the same trick. What exactly is meant by "round robin" of
holding disks? It looks like you need two holding disks for maximum
performance: one for writing to, one for simultaneous reads. In your
case I'd set maxdumps=1, too. The extended formula is N = D + 1, where
N is the optimum number of independent holding disks, D is the
number of simultaneous dumpers, and the 1 is for the taper reading data.
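
As an illustration of the N = D + 1 idea with a single dumper, a hypothetical
amanda.conf excerpt (directory paths and sizes are invented) might look like:

    # two independent holding disks; Amanda spreads dump files across them
    holdingdisk hd1 {
        directory "/dumps/hd1"    # invented mount point
        use 20000 Mb
    }
    holdingdisk hd2 {
        directory "/dumps/hd2"    # invented mount point
        use 20000 Mb
    }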

Nobody asked about seek problems on your data disks. It definitely
kills performance to dump two partitions of the same disk at the same
time. Amanda's default value of the spindle parameter is not an
intuitive one:

(man amanda, disk list parameters)

  spindle
      Default: -1.  A number used to balance backup load
      on a host.  Amanda will not run multiple backups at
      the same time on the same spindle, unless the
      spindle number is -1, which means there is no
      spindle restriction.

This assumption might date from the time when disks were too small
to be split into pieces. IMHO 1 is a more reasonable default, as it
allows only one dump at a time on a host.
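
For example, a disklist entry takes an optional spindle number after the
dumptype; giving two partitions of the same physical disk the same positive
spindle keeps them from being dumped at the same time (the hostname, paths and
dumptype below are only examples):

    # disklist
    james  /whopper/home  comp-user-tar  1    # same physical disk ->
    james  /whopper/data  comp-user-tar  1    # same spindle number
    james  /scratch       comp-user-tar  2    # different disk, may run in parallel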

HTH,

Johannes Niess

P.S: Feel free to make a faq-o-matic section on performance out of
this. What about a weekly post of the FAQ to this list?



unsubscribe

2001-02-01 Thread Malinda Stremmel

unsubscribe



Barcode support outside of chg-scsi

2001-02-01 Thread Jason Hollinden

After several days of searching, reading, and cussing, I am posting my woes
in the hope that someone has some insight into using barcodes.

Just for background, I have a Dell Poweredge-2400 with RedHat 6.2, running 
amanda-2.4.2p1.  My tape library is an ADIC Scalar 100 hanging off a dedicated 
Adaptec 2944UW.  All my tapes have been labeled using amlabel.  I only have
8 tapes (and a cleaning tape resting in the last slot), as you will see in the
attached configs, as it takes way too long to index 30 tapes.

That said, I'm having trouble with several different paths.  I have about 60%
success with chg-scsi, and 90% success with chg-zd-mtx (using mtx-1.2.10-1).

chg-scsi problems:

Using this glue script, I can do all of the "tape movement only" functions,
such as amtape's eject, reset, and slot commands.  Where it fails is
any activity where it has to load tapes and 'quickly' read from them, such as 
amtape's update, label, and show commands.  The problem is that amtape is not 
giving my library enough time to load up the tape, despite whatever the sleep 
setting in my chg-scsi.conf file is set to.  I've pretty much confirmed this
with the fact that I can run the same command again after the tape is 
completely loaded, and the amlabel shows up fine.  Here's an example:

### Ran with no tape in the drive

bash-2.03$ /usr/sbin/amtape Reubens current
amtape: scanning current slot in tape-changer rack:
slot 0: rewinding tape: Input/output error

### This loads a tape from slot 0, but doesn't wait for it to finish winding
### Next is the exact same command, after it's done winding

bash-2.03$ /usr/sbin/amtape Reubens current
amtape: scanning current slot in tape-changer rack:
slot 0: date Xlabel Reubens01

###

Because of this, I can't get the barcodes from the tapes to do much of
anything.  Chg-scsi will populate my barcode file with just the tape loaded,
but when I need to index all the tapes with 'amtape Reubens update' it has no
data, since it doesn't sleep until the tape is ready.  My sleep setting has
been everything from 120 to 1200 seconds, but I don't believe that is what it
is supposed to do anyway.

chg-zd-mtx problems:

Actually, I have very few problems with this, except that there is no barcode
work done with it.  I had to modify it to work with > 9 slots, and can post
that if someone wants.  The main problem is no barcode support.  The version of
mtx I have reports the barcodes, so modifying this glue script some more to use
them shouldn't be too much trouble.

This is where I finally start getting to my point...

How does Amanda utilize barcodes?  Are there any Amanda commands that directly
mess with barcodes, or is the only thing that has anything to do with them the
changer glue script, and it just so happens that only chg-scsi looks at them
right now?  If it's the latter, I'm a few days away from writing a
chg-zd-mtx variant with an "internal" barcode matching system.

I'm attaching about everything I've seen on this group as a usable file for
cases like this, and if there is anything I've missed, please tell me.  If, 
after looking through my stuff, it looks like a simple fix for chg-scsi on my 
part for it to work correctly, that's great.

A quick list of attachments:

- amanda.conf
- ADIC.conf   (chg-scsi script)
- status-all.txt   (running chg-scsi -status all)

# below 3 were done with no tape in the drive, and 8 in the slots
- chg-scsi.debug.update   (generated after running 'amtape Reubens update')
- amtape.debug.update   (generated after running 'amtape Reubens update')
- update.txt   (what 'amtape Reubens update' spat to the screen)

# outputs from 'amtape Reubens slot current' with tape in drive
- chg-scsi.debug.current
- amtape.debug.current
- current.txt   (paste from the screen)

--
   Jason Hollinden

   SMG Systems Admin

 amanda-problems.tar.gz


RE: driver: dumper0 died while dumping

2001-02-01 Thread Chris Herrmann

just a thought, that may not be the answer...

try setting your chunksize to something like 1Gb (or 200M, if that works for
you). This means that if it's backing up 2.5G, no individual file in the
holding disk area will be bigger than that chunksize. We needed it because
our backup was trying to write chunks 3 or 4G big, and failing.  e2fs can't
currently hold a file bigger than 2G; I can't provide any advice on other
filesystems, however.
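
For reference, chunksize is a per-holding-disk setting in amanda.conf; a rough
example (path and sizes invented) of keeping chunks below the 2G limit:

    holdingdisk hd1 {
        directory "/dumps/amanda"   # invented path
        use 30000 Mb
        chunksize 1 Gb              # no holding-disk file grows past ~1 GB
    }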

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Robinson David F Dr
DLVA
Sent: Thursday, 1 February 2001 08:07
To: '[EMAIL PROTECTED]'
Subject: RE: driver: dumper0 died while dumping


This problem goes away if I specify a much lower value for
the 'use' parameter on the holdingdisk.  The original
value specified was 30Gb.  This would be more than enough
space for the backup (approximately 2.5Gb).  If I switch
this value to 200Mb the backup works.

If I use amstatus to watch the dump as it proceeds, it
appears that the entire dump is being written to the
holding disk.  The dump is apparently dying when trying
to move the file to the tape.

Any suggestions as to why this behaves this way?

David






Re: amanda-2.4.1-p1 and VxFS 3.3.3

2001-02-01 Thread John R. Jackson

>Since upgrading VxFS (Veritas Filesystem) to v 3.3.3 on our Solaris 2.6 servers,
>amanda (2.4.1-p1) is complaining about strange dump results.  ...

I think this is fixed in 2.4.2, and that the following is the appropriate
patch.

>Kris,

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]

 sendbackup-dump.diff


A Perl-based changer (perhaps FreeBSD-centric)

2001-02-01 Thread Christopher Masto

I've been meaning to submit this for about two years.. it's probably
not as useful anymore, but who knows.

When we set up Amanda here, ISTR we couldn't get the chg-scsi thing to
work happily with our FreeBSD systems and an Exabyte EHB-10h.  So I
threw together a Perl module and changer script to do the job.  It's
been taking care of our daily backups since April '99.

Maybe someone else having trouble with the changer stuff will be able
to use it, or at least bend it to their needs.  I find straightforward
Perl easier to modify than a complicated C program slinging pointers
around and core dumping.

It can be found at:

http://www.masto.com/dist/nm-changer/

Enjoy, and thanks to the Amanda team for a really impressive piece of
software.
-- 
Christopher Masto Senior Network Monkey  NetMonger Communications
[EMAIL PROTECTED][EMAIL PROTECTED]http://www.netmonger.net

Free yourself, free your machine, free the daemon -- http://www.freebsd.org/



Re: Speeding up the dumpprocess ?

2001-02-01 Thread Gerhard den Hollander

Thanks to all who delivered suggestions for this:

>>> To really know how fast your dumps are going, look down the KB/s
>>> column.  Your numbers look pretty good to me.  You've got better than
>>> 1 MB/s on all your big dumps. 

>> True.
>> The point is though that 2.3M/s (which Im getting at the 0 dump of the
>> biggest one) translates to about 10G/hr, which means it takes 11 hours to
>> dump the largest slice.

> Wow, that's one big partition.  You _could_ cut it up into smaller
> partitions or use the GNU tar top level directory trick but you
> probably have your reasons why those solutions won't work for you.

OK,
maybe a description of the setup might help.

We've got 1 (one) big server (Sun UE450).
It's got 2 SCSI controllers, and we stuck in an additional 2 dual SCSI cards,
giving us 6 SCSI channels.

They're all Ultra LVD (that's 160 Mbaud unless I'm deeply mistaken).

Controller 0
-> 4 internal 4 G disks
   1 disk split up into different slices
   the other 3 disks striped into a 12 Gb volume
Controller 1
-> CD-ROM drive
-> Mammoth tape drive
-> Mammoth tape drive

Controller 2
-> empty

Controller 3
-> 2 18G disks
-> 1 36G disk partitioned into 2 18G disks

Controller 4
-> LTO tape drive

Controller 5
-> Zero downtime RAID array (420 G)
[this puppy rules, see below for rant ;) ]

For the 420G RAID array I use gnutar (and a patched sendsize that uses
calcsize to calculate the estimates, see my previous posts on this)
to back up the top-level subdirs.

The RAID array is raid 5 (w/ hotspare).


The holding disk is on that same RAID array (and on the same partition).
I've checked, and it's not the SCSI controller bandwidth that's the
limiting factor.

[That is, if amdump is dumping to holding disk, I can start moving data to
and from that disk without the amdump performance dropping noticeably.]



* Johannes Niess <[EMAIL PROTECTED]> (Thu, Feb 01, 2001 at 08:01:39PM +0100)

> Let's do the math:

> We need 10 M/s sustained transfer rate from holding disk to the tape
> and 10 M/s from data disk to holding disk. A 80 M/s SCSI bus should be
> able to do that.

> 20 M/s with a lot of seeks are quite a load for a single holding
> disk. Depending on you holding disk hardware I'd guestimate 3 M/s a
> reasonable throughput under these circumstances. 

I'm getting 7 M/s to tape, while the simultaneous write to the holding disk
is around 3 M/s.
Even if there are 2 dumpers dumping (giving 6 M/s -> holding disk) I easily
get 7 M/s to tape.


I upped maxdumps to 4
and rearranged the top-level dir layout on the big disk (I split the 110G
dir into 2 dirs of ~ 55G each).

Let's see how well that performs ...

If upping maxdumps to 4 is indeed bottlenecking the disk
(which I can easily see if the dumper speed average is dropping instead of
staying more or less the same) I can lower the number again.

> Nobody asked about seek problems on your data disks. It definitely
> kills performance to dump two partitions of the same disk at the same
> time. Amanda's default value of the spindle parameter is not at an
> intuitive value:

Been there, tried that, sent the postcard ;)
dumping a single disk at once gives me roughly the same performance as
dumping 2 disks simultaneously, even to the same holding disk.

> Johannes Niess

> P.S: Feel free to make a faq-o-matic section on performace out of
> this. What about a weekly post of the FAQ to this list?

I am actually keeping notes on this, and I plan to put it all together into
some larger document.

I'm probably gonna stick it on my webpage (along with all the other useful
hints and tips I've found about amanda so far).

I'll happily make it into a faq-o-matic or write it up in a bit more detail
for inclusion with the amanda docs.

Whatever is most appropriate.

(But currently it's not yet finished,
and I'm still experimenting.)


Gerhard,  <@jasongeo.com>   == The Acoustic Motorbiker ==   
-- 
   __O  Standing above the crowd, he had a voice so strong and loud
 =`\<,  we'll miss him
(=)/(=) Ranting and pointing his finger, At everything but his heart
we'll miss him




Re: modification

2001-02-01 Thread John R. Jackson

>I did a modification in the amstatus script; it looked for the amdump file by
>default, but amanda creates amdump.1 as the newer file, so if I typed
>"amstatus Diario" (Diario is where I have the amanda.conf), it said that
>$logdir/amdump doesn't exist, so you had to type the command with the file
>option.

The amstatus command is normally used while amdump is running to
find out how far along it is, so going after "amdump" is probably the
right default.  I doubt if many people (although I'm one of them) use
it to post-process a completed run.  It seems to me that adding "--file
amdump.1" to the command line is a reasonable requirement to do this.

You do realize you only have to enter "amdump.1", not the full path to
the file, right?

Of course, one of the beauties of open source is that if you disagree,
you're free to make it do whatever you want :-).

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Planner - balance suggestions?

2001-02-01 Thread Jean-Louis Martineau

On Thu, Feb 01, 2001 at 08:46:53AM -0600, Bill Carlson wrote:
> On 31 Jan 2001, Alexandre Oliva wrote:
> 
> > On Jan 30, 2001, Bill Carlson <[EMAIL PROTECTED]> wrote:
> >
> > > That isn't a problem until 2 biggies are due on the same day,
> > > resulting in backup running until noon.  :)
> >
> > Why don't you just force a full backup of one of the biggies half-way
> > through the dumpcycle?
> >
> >
> 
> This works, but only temporarily, because my dumpcycle is low (5) for the
> large file systems. Since the dumps aren't done until they are due, if one
> happens to fall due on a weekend, by Monday there are 2 "biggies" due on
> the same day again.

Why did you set your dumpcycle to 5?
Do you want a full every 5 or 7 days?

Maybe you need a dumpcycle of 7 and a runspercycle of 5?
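
In amanda.conf terms that would be roughly:

    dumpcycle 7 days     # every filesystem gets a full at least once a week
    runspercycle 5       # amdump runs 5 times in that week (e.g. weekdays)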

Jean-Louis
-- 
Jean-Louis Martineau email: [EMAIL PROTECTED] 
Departement IRO, Universite de Montreal
C.P. 6128, Succ. CENTRE-VILLE     Tel: (514) 343-6111 ext. 3529
Montreal, Canada, H3C 3J7         Fax: (514) 343-5834



Re: Speeding up the dumpprocess ?

2001-02-01 Thread Mitch Collinsworth


On Thu, 1 Feb 2001, Gerhard den Hollander wrote:

> * Mitch Collinsworth <[EMAIL PROTECTED]> (Thu, Feb 01, 2001 at 09:31:52AM -0500)
> 
> >> The avg dump rate is listed as 2M/s
> >> The avg tape rate is listed as 10M/s
> >> ...
> >> is there any way to speed up the dump process ?
> 
> > If you take a closer look at the numbers you'll see these are actually
> > averages over the individual file systems' dump rates, without taking
> > into account amount of data dumped for each data point in the average.
> > Put more plainly, these numbers are really bogus.
 
Oh crap.  Maybe I should look more closely before shooting my mouth off.
The numbers are clearly correct.  I must have _assumed_ this was some
other problem that I used to know about but can no longer recall even
though I seem to remember the details.  :-/  My apologies to you and
to whoever's work I wrongly maligned.

> > To really know how fast your dumps are going, look down the KB/s
> > column.  Your numbers look pretty good to me.  You've got better than
> > 1 MB/s on all your big dumps. 
> 
> True.
> The point is though that 2.3M/s (which Im getting at the 0 dump of the
> biggest one) translates to about 10G/hr, which means it takes 11 hours to
> dump the largest slice.

Wow, that's one big partition.  You _could_ cut it up into smaller
partitions or use the GNU tar top level directory trick but you
probably have your reasons why those solutions won't work for you.

The next step seems to be to optimize the hardware.  Is the holding
disk on the same SCSI controller as the filesystems being backed up
and if so are you using all the controller's bandwidth?  Maybe another
controller and faster holding disks would help?  Maybe putting the
110 GB filesystem on it's own controller?

-Mitch




Re: Planner - balance suggestions?

2001-02-01 Thread Bill Carlson

On Thu, 1 Feb 2001, Jean-Louis Martineau wrote:

> Why did you set your dumpcycle to 5?
> You want a full every 5 or 7 days?
>
> Maybe you need a dumpcycle of 7 and a runspercycle of 5?

I have the dumpcycle of 5 set mainly to affect the balance calculation; in
most of my dumptypes (except for the "biggies") it is pushed out to 14. The goal
is to get amanda to promote the "biggies" rather than discard them until
they are due based on the balance size. Since the "biggies" also tend to
be the important file systems, I'd rather have the lower dump cycle for
them.

Eventually I could work towards patching planner to consider the dump
cycle of each file system rather than using the default value (yes I know
the balance calc actually uses an estimated runspercycle currently).


Bill Carlson
-- 
Systems Programmer[EMAIL PROTECTED]|  Opinions are mine,
Virtual Hospital  http://www.vh.org/|  not my employer's.
University of Iowa Hospitals and Clinics|




Speeding up the dumpprocess ?

2001-02-01 Thread Gerhard den Hollander

see attached amreport.

The avg dump rate is listed as 2M/s
The avg tape rate is listed as 10M/s

now I've upped maxdumps to 4 (which means I should expect to get 4 dumps at
2 M/s, so ~ 8 M/s to holding disk,
and from there 10 M/s to tape).

The question is,
is there any way to speed up the dump process ?


- Forwarded message from Super-User  -

> Subject: lto AMANDA MAIL REPORT FOR January 31, 2001
> These dumps were to tape LTO017.
> The next tape Amanda expects to use is: a new tape.

>                                   Total      Full     Daily
>                                --------  --------  --------
> Estimate Time (hrs:min)            0:42
> Run Time (hrs:min)                17:50
> Dump Time (hrs:min)               18:14     17:17      0:57
> Output Size (meg)              137513.8  133340.4    4173.5
> Original Size (meg)            137513.8  133340.4    4173.5
> Avg Compressed Size (%)             --        --        --   (level:#disks ...)
> Filesystems Dumped                   19         2        17  (1:17)
> Avg Dump Rate (k/s)              2144.8    2194.7    1242.4
> 
> Tape Time (hrs:min)                3:44      3:32      0:13
> Tape Size (meg)                137514.4  133340.4    4174.0
> Tape Used (%)                      89.5      86.8       2.7  (level:#disks ...)
> Filesystems Taped                    19         2        17  (1:17)
> Avg Tp Write Rate (k/s)         10459.4   10758.6    5539.2

> NOTES:
>   planner: Incremental of james:/whopper/home bumped to level 2.
>   taper: tape LTO017 kb 140814784 fm 19 [OK]
> 

> DUMP SUMMARY:
>                                 DUMPER STATS                TAPER STATS
> HOSTNAME DISK     L    ORIG-KB     OUT-KB COMP%  MMM:SS    KB/s  MMM:SS    KB/s
> --------------------------------------------------------------------------------
> james-md/dsk/d10  1      47776      47776   --     2:35   308.9    0:07  7136.3
> james-hopper/big  1       6304       6304   --     1:04    98.0    0:04  1634.8
> james-/charybdis  1        832        832   --     0:20    42.0    0:03   248.6
> james-opper/data  0   20650400   20650400   --   220:23  1561.7   48:04  7160.1
> james-er/distrib  1        512        512   --     0:18    29.2    0:04   135.7
> james-opper/home  0  115890144  115890144   --   816:30  2365.6  163:27 11816.8
> james-hopper/pd1  1        128        128   --     0:03    43.5    0:04    37.2
> james-hopper/rap  1       2240       2240   --     1:07    33.7    0:04   613.5
> james-per/scylla  1       2144       2144   --     1:00    35.6    0:04   571.8
> james-opper/thea  1        288        288   --     0:07    42.6    0:04    79.1
> james/whopper/ts  1       1920       1920   --     0:52    37.0    0:04   503.5
> james-per/volume  1       1056       1056   --     0:29    37.0    0:03   364.8
> jamesc0t0d0s0     1      14592      14592   --     0:18   797.7    0:04  3318.0
> jamesc0t0d0s3     1    1170656    1170656   --     6:14  3126.2    2:22  8251.4
> jamesc0t0d0s5     1        160        160   --     0:12    13.2    0:03    57.1
> jamesc0t0d0s6     1       9344       9344   --     0:22   426.9    0:04  2491.3
> jamesc2t1d0s2     1      23488      23488   --     0:41   572.9    0:05  4907.4
> jamesc2t3d0s0     1      62816      62816   --     8:28   123.6    0:13  4847.4
> jamesc2t3d0s1     1    2929376    2929376   --    33:11  1471.4    9:20  5234.0


Kind regards,
 --
Gerhard den Hollander   Phone +31-10.280.1515
Technical Support Jason Geosystems BV   Fax   +31-10.280.1511
   (When calling please note: we are in GMT+1)
[EMAIL PROTECTED]  POBox 1573
visit us at http://www.jasongeo.com 3000 BN Rotterdam  
JASON...#1 in Reservoir CharacterizationThe Netherlands

  This e-mail and any attachment is/are intended solely for the named
  addressee(s) and may contain information that is confidential and privileged.
   If you are not the intended recipient, we request that you do not
 disseminate, forward, distribute or copy this e-mail message.
  If you have received this e-mail message in error, please notify us
   immediately by telephone and destroy the original message.



Re: why | ufsrestore?

2001-02-01 Thread Marc W. Mengel



On Tue, 30 Jan 2001, John R. Jackson wrote:
> 
> I think what he meant was he changed the 'b' flag value on the dump,
> which increases the size of the write() call (and possibly some network
> ioctl sizes).

Yes.   (Sorry, I've been down with some Evil bio-virus the last two days)

Note that you only see the difference on faster ethernets (i.e. Gb stuff).

Marc





Re: Speeding up the dumpprocess ?

2001-02-01 Thread Gerhard den Hollander

* Johannes Niess <[EMAIL PROTECTED]> (Thu, Feb 01, 2001 at 04:01:03PM +0100)
> Gerhard den Hollander <[EMAIL PROTECTED]> writes:

> My first thought was to disable compression, but that's already
> done. What's your network technology? 2 MBytes/sec is not the optimum

They're all local disks
(Wide scsi 2, so that's 80 Mbaud IIRC)

> On the other hand disk speed might be the bottle neck. I'd run bonnie
> on the clients and the server holding disk. I assume you've checked
> SCSI settings/errors/DMA for IDE controllers etc. of the holding disk.

Yup.
using ufsdump (of the holding disk, or any of the other disks) straight to
tape, I notice it's the tape unit that's the bottleneck.

> Tar might be an alternative sub program for the backups, but I doubt
> it's faster. 

I use both tar and ufsdump
(the biggest disk is 420G, you cannot get that on a 150G LTO tape with
ufsdump and amanda ;) )

> To me it looks like a classical search for the bottle neck.

Yup.

The dump of the 110G slice takes 10 hours;
during the last 9 that was the only thing amanda was doing, just dumping
that one slice to holding disk, and that was ~ 2.5 MB/s.

> P.S: As your company can afford that tape drive, it can afford Gigabit
> ethernet :-)

Heh heh, yeah, if only ;)

> P.P.S: What about fully switched 100TP for the clients and a 1000T?? 
> port for the tape server?

See above, this was all local to the tape server.


PPS thanks for all the suggestions ;)

Gerhard,  (@jasongeo.com)   == The Acoustic Motorbiker ==   
-- 
   __O  One day a king will rise with the sun, the moon and the stars
 =`\<,  And you are he and you must die,
(=)/(=) To be born again, come again, live again,
once more be again the king




Re: Speeding up the dumpprocess ?

2001-02-01 Thread Johannes Niess

Gerhard den Hollander <[EMAIL PROTECTED]> writes:

> see attached amreport.
> 
> The avg dump rate is listed as 2M/s
> The avg tape rate is listed as 10M/s
> 
> now I've upped maxdumps to 4 (which measn I should expect to get 4 dumps at
> 2 M/s so ~ 8 M/s to holding disk,
> and from there with 10 M/s to tape.
> 
> The question is,
> is there any way to speed up the dump process ?

Gerhard,

My first thought was to disable compression, but that's already
done. What's your network technology? 2 MBytes/sec is not the optimum
performance of a quiet 100 Mbit twisted pair ethernet. How fast can
you do repeated ftp transfers of large files from the clients to
/dev/null on the server? This is a good estimate of the maximum
throughput of the network interfaces involved. 100BT maxes out at
approx. 5 MByte/sec. If you do multiple dumps at the same time, things
get worse. Collisions need repeat of data packets and slow things
down. You can get a nice overview via amplot. maxdumps=1 is worth
trying.

On the other hand disk speed might be the bottle neck. I'd run bonnie
on the clients and the server holding disk. I assume you've checked
SCSI settings/errors/DMA for IDE controllers etc. of the holding disk.

Tar might be an alternative sub program for the backups, but I doubt
it's faster. 

To me it looks like a classical search for the bottle neck.

HTH,

Johannes Nieß

P.S: As your company can afford that tape drive, it can afford Gigabit
ethernet :-)

P.P.S: What about fully switched 100TP for the clients and a 1000T?? 
port for the tape server?




Re: Planner - balance suggestions?

2001-02-01 Thread Bill Carlson

On 31 Jan 2001, Alexandre Oliva wrote:

> On Jan 30, 2001, Bill Carlson <[EMAIL PROTECTED]> wrote:
>
> > That isn't a problem until 2 biggies are due on the same day,
> > resulting in backup running until noon.  :)
>
> Why don't you just force a full backup of one of the biggies half-way
> through the dumpcycle?
>
>

This works, but only temporarily, because my dumpcycle is low (5) for the
large file systems. Since the dumps aren't done until they are due, if one
happens to fall due on a weekend, by Monday there are 2 "biggies" due on
the same day again.


Bill Carlson
-- 
Systems Programmer[EMAIL PROTECTED]|  Opinions are mine,
Virtual Hospital  http://www.vh.org/|  not my employer's.
University of Iowa Hospitals and Clinics|





Re: Speeding up the dumpprocess ?

2001-02-01 Thread Gerhard den Hollander

* Mitch Collinsworth <[EMAIL PROTECTED]> (Thu, Feb 01, 2001 at 09:31:52AM -0500)

>> The avg dump rate is listed as 2M/s
>> The avg tape rate is listed as 10M/s
>> ...
>> is there any way to speed up the dump process ?

> If you take a closer look at the numbers you'll see these are actually
> averages over the individual file systems' dump rates, without taking
> into account amount of data dumped for each data point in the average.
> Put more plainly, these numbers are really bogus.

> To really know how fast your dumps are going, look down the KB/s
> column.  Your numbers look pretty good to me.  You've got better than
> 1 MB/s on all your big dumps. 

True.
The point is though that 2.3M/s (which I'm getting at the 0 dump of the
biggest one) translates to about 10G/hr, which means it takes 11 hours to
dump the largest slice.

Dumping the results from that from holding disk to tape takes less than 3
hours.

In other words, it's fast, but I'd like it even faster ;).

I guess splitting that slice into a bunch of smaller slices would help.


Gerhard,  <@jasongeo.com>   == The Acoustic Motorbiker ==   
-- 
   __O  I'm preparing to fly, under my own steam
 =`\<,  I'm preparing to fly, into the dream
(=)/(=) I'm back in the saddle, I'm out in the clear
I've got no regrets, I've got no fear




Re: Speeding up the dumpprocess ?

2001-02-01 Thread Mitch Collinsworth


On Thu, 1 Feb 2001, Gerhard den Hollander wrote:

> The avg dump rate is listed as 2M/s
> The avg tape rate is listed as 10M/s
> ...
> is there any way to speed up the dump process ?
 
If you take a closer look at the numbers you'll see these are actually
averages over the individual file systems' dump rates, without taking
into account amount of data dumped for each data point in the average.
Put more plainly, these numbers are really bogus.

To really know how fast your dumps are going, look down the KB/s
column.  Your numbers look pretty good to me.  You've got better than
1 MB/s on all your big dumps.  The slower ones are all small
incrementals.  Incrementals are generally slower due to dump having
to hunt through the filesystem looking for stuff to dump.  With fulls
you don't have to do that, you just dump everything.  I normally look
at the KB/s column and only take notice of the level 0 speeds.  If
they're up to snuff I'm happy and pretty much ignore the incrementals.

-Mitch




Re: Need some help with a new changer

2001-02-01 Thread Joe Rhett

Are you using the latest MTX version?

Is the problem mtx itself? (Can you run "mtx load x", "mtx unload x", etc.?)

Or is the problem with the changer script? Are you using the latest
version? You can get it at
http://www.noc.isite.net/?Projects

On Wed, Jan 31, 2001 at 07:32:44PM -, [EMAIL PROTECTED] wrote:
> I'm new to amanda and really can use some help installing a new 
> changer. The new unit is an Overland Minilibrary (15-slot) with 1 
> DLT-7000 drive. Our old unit is working fine but our filesystems have 
> grown a lot. The new unit is a model 7115.
> 
> The problem appears to be my mtx configuration. Any help is greatly 
> appreciated!!!
> 
> Sam Lauro 

-- 
Joe Rhett Chief Technology Officer
[EMAIL PROTECTED]  ISite Services, Inc.

PGP keys and contact information:  http://www.noc.isite.net/Staff/



Re: Client constrained ?

2001-02-01 Thread Gerhard den Hollander

* Alexandre Oliva <[EMAIL PROTECTED]> (Wed, Jan 31, 2001 at 08:23:27PM -0200)
> On Jan 31, 2001, Gerhard den Hollander <[EMAIL PROTECTED]> wrote:

>> (client constrained she tells me).
>> Can anyone tell me how I can tell amanda to use more dumpers at once ?

> Increase maxdumps in some dumptype of that host.

That's what I like about this list:
rapid turnaround time, and to-the-point answers that correctly solve the
problem ;)

Try finding that level of support for a commercial product ;)

Gerhard,  <@jasongeo.com>   == The Acoustic Motorbiker ==   
-- 
   __O  Standing above the crowd, he had a voice so strong and loud
 =`\<,  we'll miss him
(=)/(=) Ranting and pointing his finger, At everything but his heart
we'll miss him




please help me - amrestore

2001-02-01 Thread Sandra Panesso

hello everybody;

I'm sorry if my last question was not clear,

so I'm going to try again.

I am trying to use amrestore to restore data that I have in my holding
disk area.  How can I do that?

I did "amrestore -p
/amanda_holding/20001215/duchamp.tomandandy.com._Local_Users.0.1", but it
didn't work;
in other words, nothing happened.
I think it is important to mention that I'm using tar.

Also, I would like to know how a cont_dumpfile
compares to a dumpfile.

Can anybody help me?

Please, I have been working on the same thing all day without any results.

thanks a lot

Sandra




modification

2001-02-01 Thread Monserrat Seisdedos Nuñez

Hello everybody:
I did a modification in the amstatus script; it looked for the amdump file by
default, but amanda creates amdump.1 as the newer file, so if I typed
"amstatus Diario" (Diario is where I have the amanda.conf), it said that
$logdir/amdump doesn't exist, so you had to type the command with the file
option.
bye





stctl question

2001-02-01 Thread Adams, Christopher

Is there anyone that is using the 'stctl' driver for their changer who has ever seen this problem before:

I have already installed the stctl driver, and when I issue the 'stc status' command I get a report of my changer "not" having any tapes in it.  This is not true; I have it loaded full with tapes.

I'm just curious as to whether anyone has ever seen this before and might know what to do in this case.

Thanks all!

Christopher A.
Los Angeles, Ca.





Fastsor DLT4000 and linux, how?

2001-02-01 Thread Martin Koch

Hello,

I am using amanda 2.4.2 on a Linux SuSE 7.0 system (amanda is compiled by
hand, no rpm).

I am now trying to get my FastStor DLT4000 7-tape changer to work.

I am using chg-zd-mtx, but I am still having problems like...

tux06:/tmp/amanda # tail -f changer.debug
Args -> -info
 -> info   1
Args -> -slot current
 -> loaded 1
Args -> -slot next
 -> loaded 1
Args -> -slot clean
 -> loaded 1
 -> load   7
 -> status 1
 -> resmtx: Storage Element 7 is Empty
 -> load   2
 -> status 0
 -> resLoading Storage Element 2 into Data Transfer Element...done
 -> rew 2
/bin/dd: /dev/nst0: Input/output error
0+0 records in
0+0 records out
Args -> -slot next
 -> loaded 2
 -> unload 2
 -> status 1
 -> resUnloading Data Transfer Element into Storage Element 2...mtx: 
Request Sense: 70 00 05 00 00 00 00 0E 00 00 00 00 3B 90 00 00 00 56 00 00

I changed chg-zd-mtx to add $MT $MTF $tape offline calls in front of every
$MTX call, as the changer was blocking the mtx commands otherwise.
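
Roughly, that modification amounts to something like the following in front of
each changer operation (sketched with the script's $MT, $MTF, $tape, $MTX and
$loadslot variables; untested):

 $MT $MTF $tape offline > /dev/null 2>&1   # free the drive first
 $MTX load $loadslot                       # ...then let the changer move the tape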

tux06:/tmp/amanda # mtx -f /dev/sg1 inquiry
Vendor ID: 'ADIC', Product ID: 'FastStor DLT', Revision: '0119'

Any comments?

-- 
Martin Koch  Systemadministration

MOSAIC SOFTWARE AGFeldstrasse 8 D-53340 Meckenheim
Tel. 02225/882-0   Fax. 02225/882-201




Re: please help me - amrestore

2001-02-01 Thread John R. Jackson

>I did
> amrestore -p \
>  /amanda_holding/20001215/duchamp.tomandandy.com._Local_Users.0.1 \
>  | tar xvf duchamp.tomandandy._Local_Users.0.1

In case it wasn't clear from Alexandre's reply, the first problem is
that you are using the ".1" holding disk file.  The file you want should
be named:

  duchamp.tomandandy.com._Local_Users.0

The second problem is that you gave the file name to tar as well as
to amrestore.  It should be something like this:

  amrestore -p \
   /amanda_holding/20001215/duchamp.tomandandy.com._Local_Users.0.1 \
   | tar xvf -

which tells tar to read from stdin and restore everything in the current
directory.

>Sandra

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: please help me - amrestore

2001-02-01 Thread John R. Jackson

Sorry.  That should have been:

  amrestore -p \
   /amanda_holding/20001215/duchamp.tomandandy.com._Local_Users.0 \
   | tar xvf -

JJ



Re: amrestore problems

2001-02-01 Thread Alexandre Oliva

On Jan 31, 2001, Sandra Panesso <[EMAIL PROTECTED]> wrote:

> /amanda_holding/20001215/duchamp.tomandandy._Local_Users.0.1 | tar xvf
> duchamp.tomandandy._Local_Users.0.1  but I got  two errors :
> amrestore: 0: skipping cont dumpfile: date 20001215 host
> duchamp.tomandandy.com disk /Local/Users lev 0 comp .gz

This is ok

> tar: duchamp.tomandandy.com._Local_Users.0.1: Not found in archive

This means this is not the name of the file you want to restore.
I.e., it's not part of the archive, it's the name of the archive
itself.

But this is just one of the chunks of a multi-chunk backup.  You have
to start from the first one.

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist*Please* write to mailing lists, not to me



amanda-2.4.1-p1 and VxFS 3.3.3

2001-02-01 Thread Kris Boulez

Since upgrading VxFS (Veritas Filesystem) to v 3.3.3 on our Solaris 2.6 servers,
amanda (2.4.1-p1) is complaining about strange dump results. An excerpt
from a mail follows.

Kris,


/-- appel  /dev/dsk/c3t5d0s4 lev 0 STRANGE
sendbackup: start [appel:/dev/dsk/c3t5d0s4 level 0]
sendbackup: info BACKUP=/usr/sbin/vxdump
sendbackup: info RECOVER_CMD=/usr/local/bin/gzip -dc
|/usr/sbin/vxrestore -f...
+-
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
? vxfs vxdump: Date of this level 0 dump: Wed Jan 31 21:47:52 2001
? vxfs vxdump: Date of last level 0 dump: the epoch  
? vxfs vxdump: Dumping /dev/rdsk/c3t5d0s4 (/gcg/bin) to stdout
? vxfs vxdump: mapping (Pass I) [regular files]
? vxfs vxdump: mapping (Pass II) [directories]
? vxfs vxdump: estimated 858690 blocks (419.28MB) on 0.01 tape(s).
? vxfs vxdump: dumping (Pass III) [directories]
? vxfs vxdump: dumping (Pass IV) [regular files]
| vxfs vxdump: vxdump: 429673 tape blocks
? vxfs vxdump: level 0 dump on Wed Jan 31 21:47:52 2001
? vxfs vxdump: vxdump is done
sendbackup: size 429673
sendbackup: end
\




Re: BSDI & changers

2001-02-01 Thread Mitch Collinsworth


On Wed, 31 Jan 2001, Rick Meidinger wrote:

>  - What changer script(s) have you tried?
> chg-multi

chg-multi is probably not what you want to use.  I haven't tried it
myself but it sounds like it can be made to work if you configure your
Treefrog to operate in gravity mode.  The Treefrog documentation might
call this by a different name such as sequential.


> Would have loved to try chg-scsi, but it doesn't compile on my system.
 
Have you posted the pertinent information?  The author is usually
listening and might be interested in getting it to build on BSDI.
He got it working with FreeBSD so it shouldn't(?) take much more
effort to get it going on BSDI.


>  - Have you determined the device name for the changer?
> sg0
> 
>  - Does BSDI have chio?  If yes, are you able to control the changer with it?
> Nope.
> 
>  - If no chio, have you tried mtx?
> Yes, but I haven't gotten anywhere with it.

Doesn't BSDI come with _any_ SCSI media changer control software?  If
not I think I would complain to the company about that.  If they
really don't have a recommended solution I would try to get mtx, chio,
or chg-scsi to build and run.  In the mean time chg-multi should at
least allow you to run in gravity/sequential mode.

The thing to realize here is that amanda does not have built-in changer
support.  Amanda allows for changer support through the use of a
changer script that calls out to whatever software is needed to
control the changer you have.  This is a flexible design that allows
for new devices to be supported by the addition of an appropriate
script, but it means you first have to have the low-level software
to control the changer itself.  (Unless you can get chg-scsi to
work.)

Some sort of SCSI media changer software typically comes with the OS
these days, and then there are applications like mtx that have been
around for a while in various versions.  The changer scripts that
come with amanda are samples that you can use with common existing
changer software but they are just a starting place.  If you need
something different or more exotic it's easy to swap in something
else or modify an existing script to do what you need.
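
As a rough sketch of the interface such a script has to speak (the option names
and output format here are from memory, so treat it as a starting point rather
than a reference):

    #!/bin/sh
    # Skeleton of an Amanda tape-changer glue script (sketch, from memory).
    # Amanda invokes it with one option at a time and reads "slot device"
    # (or "current_slot number_of_slots backwards_flag" for -info) from stdout.
    # Exit 0 = ok, 1 = positioning problem, 2 = fatal error.
    tape=/dev/nst0            # placeholder drive device

    case "$1" in
    -slot)
        slot=$2
        # ... call chio/mtx/vendor tool here to load $slot ...
        echo "$slot $tape"
        exit 0
        ;;
    -info)
        echo "0 10 1"         # current slot, number of slots, can go backwards
        exit 0
        ;;
    -reset|-eject)
        # ... unload the drive / park the changer ...
        echo "0 $tape"
        exit 0
        ;;
    *)
        echo "0 unknown option $1"
        exit 2
        ;;
    esac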

A few years ago I needed to set up a changer with an OS that came
with its own media changer software that amanda didn't have an
existing changer script for, and chg-scsi didn't exist yet.  By
looking at the docs and the existing scripts I was able to write my
own in a few days with normal interruptions.  Uninterrupted it would
have taken a day at most, probably less.

-Mitch





Re: BSDI & changers

2001-02-01 Thread Mitch Collinsworth


On Wed, 31 Jan 2001, Rick Meidinger wrote:

> Is there anyone out there that is using BSDI 4.2 with amanda and a
> tapechanger?  I have a Spectra Logic Treefrog changer with a Sony AIT2
> drive.  Amanda works great with the drive, but I can't get it to recognize
> the changer.  I've searched the archives, and haven't found anything
> specific on this subject.  I've read the TAPE.CHANGERS file, and that
> hasn't helped too much either.  I'm running amanda-2.4.2.  Thanks,
 
 
I'm not using BSDI but I imagine it's probably similar to FreeBSD.
Let's start with a few questions:

- What changer script(s) have you tried?

- Have you determined the device name for the changer?

- Does BSDI have chio?  If yes, are you able to control the changer with it?

- If no chio, have you tried mtx?

-Mitch




Re: Can't Find Clients

2001-02-01 Thread Chris Marble

Wilkerson, Scott wrote:
> 
> greener pastures a few months ago.  Since then, each time our remaining
> system manager has upgraded a sun system to Solaris 8 it has begun failing
> out of the backup set.  I have made sure that I can rsh from our backup
> 
> amcheck gives this error:
>  WARNING: gsbmkt.uchicago.edu: selfcheck request timed out.  Host down?

I expect that the amanda entries are no longer in /etc/inetd.conf after
the update.  You will need to add them back and may need to add appropriate
entries to /etc/services too.
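
For reference, the usual entries look something like this (the paths and the
amanda user name vary from install to install, and the index services are only
needed on the tape server):

    # /etc/services
    amanda          10080/udp
    amandaidx       10082/tcp
    amidxtape       10083/tcp

    # /etc/inetd.conf
    amanda    dgram  udp wait   amanda /usr/local/libexec/amandad    amandad
    amandaidx stream tcp nowait amanda /usr/local/libexec/amindexd   amindexd
    amidxtape stream tcp nowait amanda /usr/local/libexec/amidxtaped amidxtaped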
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager