Re: Does anybody have a LTO4 tapetype ?

2008-05-13 Thread Chris Marble
[EMAIL PROTECTED] wrote:
 
 Hello,
 
 I used amtapetype to get the LTO4 constants; it took a long time,
 from 13:00 until 21:30, but here it is:
 
 define tapetype LTO4 {
     comment "Dell LTO4 800Go - Compression Off"
     length 802816 mbytes
     filemark 0 kbytes
     speed 52616 kps
 }

Just about identical to mine:
define tapetype LTO4 {
    comment "just produced by tapetype prog (hardware compression off)"
    length 802816 mbytes
    filemark 0 kbytes
    speed 94975 kps
}

Last I looked Amanda did not make use of the speed parameter (but that
was many years ago).  Not sure why my speed number is so much higher.
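As a sanity check on the two speed figures, here is the streaming time each one implies for a full 802816 mbyte tape (shell arithmetic only, using the numbers from the tapetype entries above):

```shell
# seconds to write the whole tape at each measured rate
len_kb=$((802816 * 1024))   # tapetype length, converted to KB
echo $((len_kb / 52616))    # 15624 s, about 4.3 hours
echo $((len_kb / 94975))    # 8655 s, about 2.4 hours
```

The 4.3-hour figure is at least consistent with the 13:00-21:30 amtapetype run above, since amtapetype makes more than one pass over the tape.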
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.


Re: HELP ! Setting up a RAID-0 holding disk on centos 4.4

2007-04-13 Thread Chris Marble
Guy Dallaire wrote:
 
 I finally found the right FM in order to RTFM and proceeded to create the
 RAID-0 holding disk with 2 250Gb SATA drives in order to try to fix the
 throughput problem with the tape drive. The raidtools have been replaced by
 the mdadm command.
 
 I'm back to square one. Even if I use a RAID-0 striped (64k stripes) array,
 I still can't feed the tape drive adequately when amanda is writing to the
 holding disk while it's being dumped to tape.
 
 I really wish there was an easy way to instruct amanda to wait for all the
 client files before it starts dumping the holding disk to tape, but there
 isn't.

Just run the backup without a tape in the drive and then amflush the backup
sometime later.
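A hedged sketch of that workflow as a tape-server crontab (the config name Daily, the install paths, and the schedule are illustrative, not from this thread; amflush's -b flag runs it in batch mode without prompting):

```
# weeknights: amdump runs with no tape loaded, so dumps stay on the
# holding disk; Friday morning: flush everything to tape in one batch
0 1 * * 1-5  /usr/local/sbin/amdump Daily
0 9 * * 5    /usr/local/sbin/amflush -b Daily
```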
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.


Re: Problems backing root partition with dump on CentOS

2005-11-15 Thread Chris Marble
Jon LaBadie wrote:
 
 On Thu, Oct 27, 2005 at 10:25:43AM -0700, Chris Marble wrote:
  Jon LaBadie wrote:
   
   On Wed, Oct 26, 2005 at 11:53:16AM -0700, Chris Marble wrote:
  
 the /dev/sdb2 is mounted on the / partition
 but instead of using /dev/sdb2 it uses /dev/root for the backup
 here are the permissions of the two devices
 
 brw-------  1 root root 8, 18 avr  1 16:48 /dev/root
 brw-rw----  1 root disk 8, 18 avr  1 16:48 /dev/sdb2

Did you ever get a solution to this problem?  I've done a chgrp disk
and chmod g+r on /dev/root but that only helps until the next reboot.
The other partitions are fine.
Client is 2.4.5 and server is 2.4.4p1
Client OS is CentOS 4.2
   
   On my FC3 /dev/root is a symbolic link to the root partition.
   Might that be persistent across reboot?
   
  My /dev/root isn't a sym link but a normal device file.  No LVM in use
  either.  I'm trying to figure out why amanda's backing up /dev/root instead
  of simply /.  Here's the lines from disklist:
  Sakai   /   comp-root   1
  Sakai   /boot   comp-root   1
  Sakai   /usr/local  comp-root   2
  Sakai   /var        comp-root   2
  
  Lastly lines from sendsize and sendbackup showing successes and failures:
  sendsize.2005102602.debug:sendsize[3765]: time 0.003: calculating for 
  device '/dev/root' with 'ext3'
  sendsize.2005102602.debug:sendsize[3765]: time 0.003: running 
  /sbin/dump 0Ssf 1048576 - /dev/root
  sendsize.2005102602.debug:sendsize[3765]: time 0.028:   DUMP: Cannot 
  open /dev/root
  sendsize.2005102702.debug:sendsize[3634]: time 0.007: calculating for 
  device '/dev/root' with 'ext3'
  sendsize.2005102702.debug:sendsize[3634]: time 0.007: running 
  /sbin/dump 0Ssf 1048576 - /dev/root
  sendbackup.20051027001751.debug:sendbackup: time 0.098: dumping device 
  '/dev/root' with 'ext3'
  sendbackup.20051027001751.debug:sendbackup: argument list: dump 0usf 
  1048576 - /dev/root
  sendbackup.20051027001751.debug:sendbackup: time 0.104:  93:  normal(|):   
  DUMP: Dumping /dev/root (an unlisted file system) to standard output
  
  You see that amanda is asking for information on /dev/root rather than 
  simply /
 
 Just as a workaround you might try the device name in
 your disklist rather than the starting directory.  Amanda
 will still have to do some mapping, but it will be
 device-directory rather than the current directory-device.
 Shouldn't be needed, but might work better in this situation.

Thanks for the suggestion.
That's become my solution: backing up /dev/sda2 (or whatever)
instead of /.  The line in /etc/fstab is:
LABEL=/   /   ext3    defaults        1 1

This reminds me of a similar problem many years ago with amanda where
people had to install the advfs patch to handle that syntax in /etc/fstab.
This happens to me with both CentOS and RedHat EL 4 installations.
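If it helps, the workaround looks like this in disklist form (a sketch: /dev/sda2 as the device behind / is this host's layout; check mount or /etc/fstab on yours):

```
# name the block device for / so sendsize hands dump a real device
# instead of resolving the mount point to /dev/root
Sakai   /dev/sda2   comp-root   1
Sakai   /boot       comp-root   1
```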
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.


Problems backing root partition with dump on CentOS

2005-10-27 Thread Chris Marble
Jon LaBadie wrote:
 
 On Wed, Oct 26, 2005 at 11:53:16AM -0700, Chris Marble wrote:

   the /dev/sdb2 is mounted on the / partition
   but instead of using /dev/sdb2 it uses /dev/root for the backup
   here are the permissions of the two devices
   
   brw-------  1 root root 8, 18 avr  1 16:48 /dev/root
   brw-rw----  1 root disk 8, 18 avr  1 16:48 /dev/sdb2
  
  Did you ever get a solution to this problem?  I've done a chgrp disk
  and chmod g+r on /dev/root but that only helps until the next reboot.
  The other partitions are fine.
  Client is 2.4.5 and server is 2.4.4p1
  Client OS is CentOS 4.2
 
 On my FC3 /dev/root is a symbolic link to the root partition.
 Might that be persistent across reboot?
 
 Under /etc, where I'd expect nearly anything related to devices
 and booting, the only references I find to /dev/root are under
 /etc/selinux.  Do you have secure linux enabled and might there
 be some setting for that system that is recreating /dev/root?

My /dev/root isn't a sym link but a normal device file.  No LVM in use
either.  I'm trying to figure out why amanda's backing up /dev/root instead
of simply /.  Here's the lines from disklist:
Sakai   /   comp-root   1
Sakai   /boot   comp-root   1
Sakai   /usr/local  comp-root   2
Sakai   /var        comp-root   2

[10:00am] [EMAIL PROTECTED] (/dev): ll sda1 sda2 sda root
brw-r-----  1 root disk 8, 2 Oct 26 04:34 root
brw-rw----  1 root disk 8, 0 Oct 26 04:34 sda
brw-rw----  1 root disk 8, 1 Oct 26 04:34 sda1
brw-rw----  1 root disk 8, 2 Oct 26 04:34 sda2

Lastly lines from sendsize and sendbackup showing successes and failures:
sendsize.2005102602.debug:sendsize[3765]: time 0.003: calculating for 
device '/dev/root' with 'ext3'
sendsize.2005102602.debug:sendsize[3765]: time 0.003: running /sbin/dump 
0Ssf 1048576 - /dev/root
sendsize.2005102602.debug:sendsize[3765]: time 0.028:   DUMP: Cannot open 
/dev/root
sendsize.2005102702.debug:sendsize[3634]: time 0.007: calculating for 
device '/dev/root' with 'ext3'
sendsize.2005102702.debug:sendsize[3634]: time 0.007: running /sbin/dump 
0Ssf 1048576 - /dev/root
sendbackup.20051027001751.debug:sendbackup: time 0.098: dumping device 
'/dev/root' with 'ext3'
sendbackup.20051027001751.debug:sendbackup: argument list: dump 0usf 1048576 - 
/dev/root
sendbackup.20051027001751.debug:sendbackup: time 0.104:  93:  normal(|):   
DUMP: Dumping /dev/root (an unlisted file system) to standard output

You see that amanda is asking for information on /dev/root rather than simply /
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.


Re: problems backuping root partition with dump on FC3

2005-10-26 Thread Chris Marble
Eric Doutreleau wrote:
 
 
 I cannot back up the / partition with dump on my FC3 machine.
 
 
 I'm using 
 amanda-2.4.4p3-1 on both server and client
 
 Steps to Reproduce:
 I have configured my amanda server to back up the / partition on my FC3
 computer. I use the mountpoint / in my disklist file
   
 I got the following report:
 sonde  / lev 0 FAILED [disk /, all estimate failed]
 
 
 here is some information from the sendsize debug file on the client:
 sendsize[23903]: time 0.021: calculating for amname '/', dirname '/',
 spindle 1
 sendsize[23903]: time 0.021: getting size via dump for / level 0
 sendsize[23903]: time 0.021: calculating for device '/dev/root' with
 'ext3'
 sendsize[23903]: time 0.021: running /sbin/dump 0Ssf 1048576
 - /dev/root
  
 the /dev/sdb2 is mounted on the / partition
 but instead of using /dev/sdb2 it uses /dev/root for the backup
 here are the permissions of the two devices
 
 brw-------  1 root root 8, 18 avr  1 16:48 /dev/root
 brw-rw----  1 root disk 8, 18 avr  1 16:48 /dev/sdb2

Did you ever get a solution to this problem?  I've done a chgrp disk
and chmod g+r on /dev/root but that only helps until the next reboot.
The other partitions are fine.
Client is 2.4.5 and server is 2.4.4p1
Client OS is CentOS 4.2
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.


Building amanda on Mac OS X

2005-07-07 Thread Chris Marble
I'm trying to build the amanda client on a new Mac 10.4.1 box.

./configure --prefix=/usr/local/amanda-2.4.5 --with-user=amanda 
--with-group=wheel --with-config=hmcis --without-server 
--host=powerpc-apple-darwin

Then a simple make which ends with:
/bin/sh ../libtool --mode=link gcc  -g -O2 -o libamanda.la -rpath 
/usr/local/amanda-2.4.5/lib -release 2.4.5 alloc.lo amflock.lo clock.lo 
debug.lo dgram.lo error.lo file.lo fileheader.lo amfeatures.lo match.lo 
protocol.lo regcomp.lo regerror.lo regexec.lo regfree.lo security.lo statfs.lo 
stream.lo token.lo util.lo versuff.lo version.lo pipespawn.lo sl.lo  -lm  
-ltermcap
gcc -dynamiclib -flat_namespace -undefined suppress -o 
.libs/libamanda-2.4.5.dylib  .libs/alloc.o .libs/amflock.o .libs/clock.o 
.libs/debug.o .libs/dgram.o .libs/error.o .libs/file.o .libs/fileheader.o 
.libs/amfeatures.o .libs/match.o .libs/protocol.o .libs/regcomp.o 
.libs/regerror.o .libs/regexec.o .libs/regfree.o .libs/security.o 
.libs/statfs.o .libs/stream.o .libs/token.o .libs/util.o .libs/versuff.o 
.libs/version.o .libs/pipespawn.o .libs/sl.o  -lm -ltermcap -install_name  
/usr/local/amanda-2.4.5/lib/libamanda-2.4.5.dylib
/usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0) file: -lm 
is not an object file (not allowed in a library)
/usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0) file: 
-lSystem is not an object file (not allowed in a library)
make[1]: *** [libamanda.la] Error 1
make: *** [all-recursive] Error 1

I did a make distclean and tried it with
 ./configure --prefix=/usr/local/amanda-2.4.5 --with-user=amanda 
--with-group=wheel --with-config=hmcis --without-server
and the make fails with the same message.

Suggestions?  My amanda server is 2.4.4p1 so I can use an older version.
I just wanted the latest on the new client.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.


Odin /home lev 1 STRANGE

2003-02-21 Thread Chris Marble
Server is Linux 2.4.20, Amanda 2.4.4b1.  The troublesome client is
Linux 2.4.18, Amanda 2.4.4b1.  108 other DLEs on other systems are
working fine.  I'm using dump for my backups.

/-- Odin   /home lev 1 STRANGE
sendbackup: start [Odin:/home level 1]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/sbin/restore -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
|   DUMP: Date of this level 1 dump: Fri Feb 21 04:00:01 2003
|   DUMP: Date of last level 0 dump: Sat Feb  8 02:00:00 2003
|   DUMP: Dumping /dev/sdc1 (/home) to standard output
|   DUMP: Added inode 8 to exclude list (journal inode)
|   DUMP: Added inode 7 to exclude list (resize inode)
|   DUMP: Label: /rhome
|   DUMP: mapping (Pass I) [regular files]
|   DUMP: mapping (Pass II) [directories]
|   DUMP: estimated 8397639 tape blocks.
|   DUMP: Volume 1 started with block 1 at: Fri Feb 21 04:02:55 2003
|   DUMP: dumping (Pass III) [directories]
|   DUMP: dumping (Pass IV) [regular files]
|   DUMP: 7.26% done at 2032 kB/s, finished in 1:03
|   DUMP: 15.24% done at 2133 kB/s, finished in 0:55
? sendbackup: index tee cannot write [Broken pipe]
|   DUMP: Broken pipe
|   DUMP: The ENTIRE dump is aborted.
? index returned 1
??error [compress got signal 24, /sbin/dump returned 3]? dumper: strange [missing size 
line from sendbackup]
? dumper: strange [missing end line from sendbackup]
\


This DLE was backing up fine until 2 weeks ago.  Tape is 50Gb AIT II.
Troublesome DLE has 26Gb used.  Dumptype for that DLE is:
define dumptype comp-u-s4 {
    comp-user
    comment "User partitions on reasonably fast machines, start 4am"
    starttime 400
}

define dumptype comp-user {
    global
    comment "Non-root partitions on reasonably fast machines"
    compress client fast
    priority medium
}

define dumptype global {
    comment "Global definitions"
    # dumptype name.
    # You may want to use this for globally enabling or disabling
    # indexing, recording, etc.  Some examples:
    compress client fast
    index yes
    # record no
}



I'll be working on an include list to split up that DLE RSN.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.


Re: Amanda 2.4.4b1 fails on Linux

2003-02-20 Thread Chris Marble
Jean-Louis Martineau wrote:
 
 Could you send a backtrace?
 In gdb, type 'where' after the Segmentation fault.
 
 Could you also try to run amcheck with the MALLOC_CHECK_ variable set.
 For more details on it, see the man page for malloc.
 
 export MALLOC_CHECK_=2

[9:23am] operator@Chris (~): setenv MALLOC_CHECK_ 2
[9:24am] operator@Chris (~): amcheck -c hmcis
Abort
[9:24am] operator@Chris (~): gdb amcheck
GNU gdb 19991004
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-redhat-linux...
(gdb) run -c hmcis
Starting program: /usr/local/pkg/amanda/sbin/amcheck -c hmcis

Program received signal SIGABRT, Aborted.
0x400d1d21 in __kill () from /lib/libc.so.6
(gdb) where
#0  0x400d1d21 in __kill () from /lib/libc.so.6
#1  0x400d1996 in raise (sig=6) at ../sysdeps/posix/raise.c:27
#2  0x400d30b8 in abort () at ../sysdeps/generic/abort.c:88
#3  0x4010ee83 in free_check (mem=0x40028a23, caller=0x4003c3c5)
at malloc.c:4520
#4  0x4010cf17 in __libc_free (mem=0x40028a23) at malloc.c:2991
#5  0x4003c3c5 in debug_newstralloc (s=0x40028a18 conffile.c, l=2318, 
oldstr=0x40028a23 , newstr=0x4002cf00 ruffy) at alloc.c:366
#6  0x4001eaf7 in get_simple (var=0x4002cb3c, seen=0x4002ce30, type=STRING)
at conffile.c:2318
#7  0x4001c3b9 in read_confline () at conffile.c:1068
#8  0x4001bc15 in read_conffile_recursively (
filename=0x8053658 /usr/local/pkg/amanda-2.4.4b1/etc/amanda/hmcis/amanda.conf) 
at conffile.c:884
#9  0x4001a9d0 in read_conffile (
filename=0x8053658 /usr/local/pkg/amanda-2.4.4b1/etc/amanda/hmcis/amanda.conf) 
at conffile.c:339
#10 0x804a1ef in main (argc=1, argv=0xbadc) at amcheck.c:210


[9:25am] operator@Chris (~): unsetenv MALLOC_CHECK_
[9:25am] operator@Chris (~): amcheck -c hmcis
Segmentation fault
[9:25am] operator@Chris (~): gdb amcheck
GNU gdb 19991004
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-redhat-linux...
(gdb) run -c hmcis
Starting program: /usr/local/pkg/amanda/sbin/amcheck -c hmcis

Program received signal SIGSEGV, Segmentation fault.
chunk_free (ar_ptr=0x464c457f, p=0x40028a1b) at malloc.c:3049
3049malloc.c: No such file or directory.
(gdb) where
#0  chunk_free (ar_ptr=0x464c457f, p=0x40028a1b) at malloc.c:3049
#1  0x4010cf9a in __libc_free (mem=0x40028a23) at malloc.c:3023
#2  0x4003c3c5 in debug_newstralloc (s=0x40028a18 conffile.c, l=2318, 
oldstr=0x40028a23 , newstr=0x4002cf00 ruffy) at alloc.c:366
#3  0x4001eaf7 in get_simple (var=0x4002cb3c, seen=0x4002ce30, type=STRING)
at conffile.c:2318
#4  0x4001c3b9 in read_confline () at conffile.c:1068
#5  0x4001bc15 in read_conffile_recursively (
filename=0x80535a0 /usr/local/pkg/amanda-2.4.4b1/etc/amanda/hmcis/amanda.conf) 
at conffile.c:884
#6  0x4001a9d0 in read_conffile (
filename=0x80535a0 /usr/local/pkg/amanda-2.4.4b1/etc/amanda/hmcis/amanda.conf) 
at conffile.c:339
#7  0x804a1ef in main (argc=1, argv=0xbaec) at amcheck.c:210



Does this say it's looking at my amanda.conf file and blowing up?
When I tried installing 2.4.4b1 on the server I first went with my
old amanda.conf file.  Then I took the latest example file and
started putting my customizations into it.  It's running with the
latter file now.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Amanda 2.4.4b1 fails on Linux

2003-02-19 Thread Chris Marble
I've been using Amanda 2.4.2p2 for several years.  Server is Linux
2.4.20 and clients are Linux, Solaris, HP-UX, IRIX and Tru-64.

Had a DLE that was too large so it was time for gnu tar and an exclude list.
Installed Amanda 2.4.4b1 on the troublesome client, Linux 2.4.18, and it's
fine there.  Installed 2.4.4b1 on the server and it won't even pass amcheck.
Just dies with a Segmentation fault.  So I fire up gdb:

gdb amcheck
GNU gdb 19991004
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-redhat-linux...
(gdb) run -c hmcis
Starting program: /usr/local/pkg/amanda/sbin/amcheck -c hmcis

Program received signal SIGSEGV, Segmentation fault.
chunk_free (ar_ptr=0x464c457f, p=0x40028a1b) at malloc.c:3049
3049malloc.c: No such file or directory.


Any ideas?  The one client's fine with 2.4.4b1 and it's very similar to
the server (same hardware, same original RedHat install and the like).
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda 2.4.4b1 fails on Linux

2003-02-19 Thread Chris Marble
Joshua Baker-LePain wrote:
 
 On Wed, 19 Feb 2003 at 2:46pm, Chris Marble wrote
 
  I've been using Amanda 2.4.2p2 for several years.  Server is Linux
  2.4.20 and clients are Linux, Solaris, HP-UX, IRIX and Tru-64.
  
  Had a DLE that was too large so it was time for gnu tar and an exclude list.
  Installed Amanda 2.4.4b1 on the troublesome client, Linux 2.4.18, and it's
 
 You may already know this, but exclude lists work with 2.4.2p2.  It's 
 *include* lists that need 2.4.3 or greater.
 
  fine there.  Installed 2.4.4b1 on the server and it won't even pass amcheck.
  Just dies with a Segmentation fault  So I fire up gdb:
 *snip*
  Any ideas?  The one client's fine with 2.4.4b1 and it's very similar to
  the server (same hardware, same original RedHat install and the like).
 
 What RH version?

Both machines are antique RedHat 6.2 installs with glibc-2.1.3
On the client box I specified --without-server --without-restore
So there's no amcheck binary I can copy over.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Upgraded tape server, now lot's of broken pipes

2003-02-04 Thread Chris Marble
stan wrote:
 
 I'm in the process of upgrading my Amanda machine from an older HP-UX
 machine to a new faster FreeBSD machine. Lots of machines that used to
 back up fine are now failing with the following:
 
 sendbackup: spawning /usr/sbin/dump in pipeline
 sendbackup: argument list: dump 0usf 1048576 - /dev/rdsk/cEd4s0
 index tee cannot write [Broken pipe]
 error [/usr/sbin/dump returned 3]

I have been getting the same problem:
  Odin   /home lev 0 FAILED [compress got signal 24, /sbin/dump returned 3]

I investigated it for a long time and the best guess was that I was exceeding
some CPU time limit.  When the system got less busy at 2am the problem would
go away.  It's back again, so I'm switching to gnu tar and splitting the
disklist entry for this one 35Gb of data into 2 separate ones.
Check your process limits and see if you find anything.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



chg-multi

2003-01-31 Thread Chris Marble
Jon LaBadie wrote:
 
 chg-multi?
 I thought that was generally used for multiple drives.
 Not for multiple tapes in a changer.

I've been using chg-multi for several years with a changer.
Started using it with a gravity-feed DDS stacker and stayed with
it when I moved to an AIT-2 4-tape changer.  Still works fine.
Ejects tape and moves to next when I do amcheck.  Does multiple-tape
backups when needed.  Why would I want to switch to something else?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Want more aggressive tape use

2002-11-21 Thread Chris Marble
Orion Poplawski wrote:
 
 Just started testing with using software compression on the server 
 (trying not to impact the clients too much), and got the following result:

An advantage to doing some compression on the clients is that it saves you
network bandwidth.

 A lot of file systems were not backed up that could have been because 
 the planner rejected them.  I'd like to be able to operate in a mode 
 where it kept dumping until the tape was full, but then threw away 
 whatever was left (i.e. - no amflush - inefficient use of another tape).

If you crank down dumpcycle in amanda.conf the program will do its best
to fit more on each tape.  If you crank it down too far you'll get daily
complaints about it having to delay backups but it'll still do what it can.
That's what I've done to maximize tape usage in my setup: a single 50Gb AIT
tape per day, 300Gb to back up, and a dumpcycle telling it to manage full
backups of everything over 4 days.
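A rough sketch of the planner arithmetic that setup implies (assuming the ~300Gb and 4-day figures above; compression and the incremental mix shift the real numbers):

```shell
# ~300 GB of level 0 data on a 4-day dumpcycle means the planner schedules
# roughly this many GB of full dumps per nightly run, before compression
echo $((300 / 4))   # 75, against a 50 GB tape: compression makes up the gap
```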
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Advice IDE Tape Drive

2002-11-21 Thread Chris Marble
Robert wrote:
 
  Please advice me which type IDE-Tape drive will be better to buy for
 15 Gb storage.

A quick look at Pricewatch suggests that a Travan TR5 10/20Gb drive
would be a good match, at about $200:
Seagate Hornet 10/20Gb Travan 5 internal IDE tape backup drive, p/n stt22a

With the cost of tapes these days it's tempting to go with removable
hard drives.  But your needs are so minimal that tape's probably cheaper.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Adding new clients

2002-11-14 Thread Chris Marble
Owain Pritchard wrote:
 
 I have added a new client into the disklist and it's access password 
 in the amandapass file.
 
 When I run amcheck on the new configuration, amcheck throws up 
 the following error appears:-
 
 ERROR: info file 
 /var/lib/amanda/BackupMonth/curinfo/Neli/__nlserver_d$/info: not 
 readable

That could be a variant of the message saying that amanda couldn't find the
index files for this new client.  If so, it'll all get taken care of
after the next amdump.  But usually that message is different: it says
INFO, not ERROR.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Amanda without holddisk

2002-11-14 Thread Chris Marble
Hélio Dubeux wrote:
 
 Can I use Amanda without a holding disk?  I want to back up from a share
 directly to a tape device.

Sure, no problem.  Just don't specify any holding disk in amanda.conf.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Appending to tapes

2002-11-12 Thread Chris Marble
Nicolas Cartron wrote:
 
 I want to install Amanda on a production environment,
 backup server on FreeBSD and clients running Linux, Solaris or FreeBSD.
 
 I read in the section dedicated to Amanda (in 'Unix Backup & Recovery',
 O'Reilly) the following lines:
 
 AMANDA currently starts a new tape for each run and does not provide a
 mechanism to append a new run to the same tape as a previous run

If you wanted to keep one week's worth of history and ran backups 5 days
a week, then you would have to have 5 tapes.  A single day's tape may have
a mix of full and incremental backups on it, but they will all be from the
same Amanda run.
With careful use of a holding (spool) disk you can alter this behavior.
Write backups for 2 or 3 days to disk and then use amflush to write them
all to a single tape.
Hope I've answered some of your questions.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: More than one dump per server

2002-11-06 Thread Chris Marble
Du-Wayne Rood wrote:
 
 I was wondering if someone can tell me how to configure amanda to run
 more than one dump per server?

Do you want Amanda to run multiple data streams from a client to the
tape server?  Then increase
maxdumps 4  # Max number of concurrent dumps to run on the client.
in your amanda.conf file
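For reference, the relevant amanda.conf parameters side by side (the values here are illustrative, not recommendations):

```
inparallel 10   # server-wide cap on simultaneous dumpers
maxdumps 4      # max concurrent dumps from any one client
```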
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Server down for 2 days

2002-11-03 Thread Chris Marble
Wayne Byarlay wrote:
 
 Greetings, quick easy question: My amanda machine was down for 2 consecutive
 backup days. What can I expect when I bring it back online? Will Amanda
 automatically compensate? or do I need to manually do the last 2 tapes?

Amanda will whine but everything will be fine after a couple of days.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: parse of reply message failed

2002-10-22 Thread Chris Marble
Chris Marble wrote:
 
 Jean-Louis Martineau wrote:
  
  Amanda-2.4.3b4 on client is broken and will not talk correctly with
  older server, upgrade your client to 2.4.3
 
 Okay, upgrade done.  I'll find out tonight.

Dumps appear to have gone successfully.  If only I'd asked before spending
as much time on it as I did.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: parse of reply message failed

2002-10-21 Thread Chris Marble
Jean-Louis Martineau wrote:
 
 Amanda-2.4.3b4 on client is broken and will not talk correctly with
 older server, upgrade your client to 2.4.3

Okay, upgrade done.  I'll find out tonight.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Amanda won't dump one particular partition

2002-07-09 Thread Chris Marble

John R. Jackson wrote:
 
 2 of the strace files ended with:
 
 --- SIGXCPU (CPU time limit exceeded) ---
 +++ killed by SIGXCPU +++
 
 Well, *that* would certainly do it :-).
 
 Not sure how we got from XCPU to signal 24, but whatever.
 
 I've been using Amanda for several years and usually answer my share
 of questions on the mailing list.
 
 And believe me, that is **greatly** appreciated.
 
 Assuming you get this figured out, make sure you post an FAQ item.
 I'm pretty sure this symptom has shown up before, and a CPU time limit
 never occurred to me.

I waited a month or so then re-enabled compression on the troublesome
partitions.  No problems in the past 2 months (and I've even done the
occasional file restore).  I didn't change anything on the system or on
the Amanda server.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Amanda won't dump one particular partition

2002-06-23 Thread Chris Marble

John R. Jackson wrote:
 
 Okay, script gzip.test put in place and amanda run with a disklist
 just for the one troublesome client.  5 partitions.  3 work, 2 fail.
 The gzip.log files don't give me any great hints:
 
 There's a bit more information there.  The two that failed returned a
 status of 152 - 0x98 - 0x80 + 0x18 - 0x80 + 24.  Freely translated,
 that's still our SIGTSTP signal (24), but it also says gzip dropped core
 (0x80).
 
 Take a look in /tmp/amanda and see if there are any core files from
 gzip.

Nope, no core files at all.

 You might try changing the script to call gzip like this:
 
   /usr/bin/strace -o /tmp/gzip.strace.$$ /bin/gzip $@

Okay, I got some output that may have identified the problem for me.
Now I have to figure out where this is getting set for this user.
His shell is bash and I find nothing in .bashrc or /etc/bashrc
2 of the strace files ended with:

--- SIGXCPU (CPU time limit exceeded) ---
+++ killed by SIGXCPU +++

 This is fast becoming a gzip or Linux debugging session rather than an
 Amanda one.

It seems to have become such.  I appreciate all your help on this one.
I've been using Amanda for several years and usually answer my share
of questions on the mailing list.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: most efficient use of holding disk

2002-06-23 Thread Chris Marble

Martin Oehler wrote:
 
 I use amanda on a solaris 7 box with a DLT drive (20 GB)
 attached. My dumpcycle is 4 weeks with 20 runs per cycle. 
 
 Because the size of one incremental backup is only
 between 2-4 GB I don't want to change the tape each day.

You could crank your dumpcycle down to 3 (or whatever number would get
Amanda to use most of your tape capacity).
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda won't dump one particular partition

2002-06-23 Thread Chris Marble

Chris Marble wrote:
 
 John R. Jackson wrote:
  
  That may be what it says (--with-user=backup), but it's not what's built
  into the runtar binary.  The binary is running as though you had said
  --with-user=operator.
 
 You're right.  After a make distclean, reconfigure, and reinstall, the gnutar works.
 I can't believe I had a build lying around that wasn't what I'd installed.
 Thanks so much.  At least I can run backups with compression once again.

Drat, the tar backups fail with compression the same as the dump did.
I also tried removing the troublesome partitions via amadmin and then
re-adding them.  That didn't help with the gzip error.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Using Multiple Tape Sizes

2002-05-02 Thread Chris Marble

Brook Hurd wrote:
 
 We currently use 20 8 gig tapes for our backup in
 amanda.  We have obtained some 20 gig tapes and we
 would like to add these to the backup schedule.  Do we
 need to replace the 8 gig tapes with the 20's or can
 we intermix them?

You could set up a separate configuration to use the 20Gb tapes.
Set record=no, and you could set a really short dumpcycle so Amanda would
try to fit as many full backups as possible on the new tapes.
The reason for record=no is to avoid messing anything up in the
existing backup configuration.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Amcheck errors with ADIC FastStor (dlt8000)

2002-04-12 Thread Chris Marble

Joshua Hamor wrote:
 
 Amanda Tape Server Host Check
 -
 Holding disk /dumps/amanda: 14044688 KB disk space available, that's plenty
 amcheck-server: slot 2: date 20020318 label C5102 (active tape)
 amcheck-server: slot error: no tape online
 amcheck-server: fatal slot mt:: /dev/nrst0: Input/output error

Do you actually have a changer or just a single tape drive?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: corrupt tape label

2002-03-09 Thread Chris Marble

Brad Tilley wrote:
 
 But I was wondering if I should have just tried to run amlabel on the
 tape (in hopes that a new label would fix the problem) instead of
 removing the tape first.

IIRC (If I Recall Correctly) an amlabel makes whatever data was on
the tape unavailable.  You could have skipped over the label and
then used dd to recover savesets from tape if necessary.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Backup stop time?

2002-02-28 Thread Chris Marble

Jeffery Smith wrote:
 
 Can any one tell me if there is a way to set a stop time for backups?  I
 am not thrilled about this idea, but some of my backups do run a bit
 long and I am getting complaints some mornings when there are many level
 0 dumps.

No way that I can think of.  You could start backups earlier.  You
could increase maxdumps.  You could lengthen your dumpcycle so Amanda won't
have to fit as many level 0s on a tape.
Would you really want unfinished backups to abort?  You could do that
with some kill commands in cron.
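If a hard stop really were wanted, a cron entry along these lines (times, user, and config name all hypothetical) is about the crudest form of it; after killing a run you would normally tidy up with amcleanup:

```
# amanda user's crontab: abort any backup still running at 07:00
0 7 * * * pkill -u amanda amdump; amcleanup Daily
```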
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: hardware vs software compression

2002-02-25 Thread Chris Marble

Carl Wilhelm Soderstrom wrote:
 
 I've got an AIT-2 tape drive, which supposedly will hold 50-100GB.
 the tapetype that I found, seems to say that it's 43GB:
 
 define tapetype AIT2 {
 comment AIT-2 with 230m tapes
 length 43778 mbytes
 filemark 3120 kbytes
 speed 5371 kps
 }

Here's what I'm using with my AIT-2 drive:
define tapetype AIT-2 {
comment SDX-500C
length 46700 mbytes # This is a safer uncompressed length
filemark 1541 kbytes
speed 2920 kps  # Not used with the current Amanda
lbl-templ /usr/local/pkg/amanda/example/3hole.ps
}

I've cranked the length down gradually - lessening it 300Mb or so when
I hit EOT.  I haven't hit EOT in the past 30 tapes so this number will
work for me.  I use a mix of client-side and server-side gzipping.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: most efficient use of holding disk

2002-02-18 Thread Chris Marble

Martin Oehler wrote:
 
 Chris Marble wrote:
  
  Martin Oehler wrote:
  
   I use amanda on a solaris 7 box with a DLT drive (20 GB)
   attached. My dumpcycle is 4 weeks with 20 runs per cycle.
  
   Because the size of one incremental backup is only
   between 2-4 GB I don't want to change the tape each day.
  
  You could crank your dumpcycle down to 3 (or whatever number would get
  Amanda to use most of your tape capacity).
 
 Hmmm, I didn't want to use a constant value because it
 can happen that there is a 8 GB or 10 GB incremental backup.
 
 Is there a way I can calculate the needed size on tape (approx.)
 before the backup process starts?

amadmin config balance
will try and figure out how much needs to go on each tape in a dumpcycle.
I'm suggesting that you lower dumpcycle so you might get a full backup of
one partition every day and incrementals of the others.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: gnutar-lists problem

2002-02-18 Thread Chris Marble

Brandon Moro wrote:
 
 It seems to be expecting an entry in the gnutar-lists file, and didn't find
 it.  
 Can someone give me a better explanation concerning what exactly
 the gnutar-lists do? (what function it serves?)

See if those files get created once you've got the backups running.  I may
have touched files in those directories to quiet things down - don't remember.
Amanda will populate the files.  They seem to be what it uses to figure out
what belongs on an incremental backup.
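The mechanism can be seen with GNU tar directly; a small sketch (the paths are throwaway temp files, not Amanda's real gnutar-lists entries):

```shell
set -e
work=$(mktemp -d)
mkdir "$work/data"
echo one > "$work/data/a"
# level 0: tar records each file's state (inode, mtime) in the snapshot file
tar --create --file "$work/lev0.tar" --listed-incremental="$work/snap0" \
    --directory "$work" data
cp "$work/snap0" "$work/snap1"      # level 1 starts from the level-0 state
echo two > "$work/data/b"           # only this file is new since level 0
tar --create --file "$work/lev1.tar" --listed-incremental="$work/snap1" \
    --directory "$work" data
tar --list --file "$work/lev1.tar"  # lists data/ and data/b, not unchanged data/a
```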

 I have the /use and the /export/home partitions set to be backed up as
 separate
 partitions (two lines in the disklist file).  Why is the error for the
 /export/home
 backup listed in the /usr report?  There was no entry in the reports last
 night
 for the /export/home.  I assume it was not backed up.
 
 
 /-- host.local. /usr lev 0 STRANGE
 sendbackup: start [host.local.site.corp:/usr level 0]
 sendbackup: info BACKUP=/bin/gtar
 sendbackup: info RECOVER_CMD=/bin/gtar -f... -
 sendbackup: info end
 ? gtar:
 ./local/var/amanda/gnutar-lists/host.loca.site.corp_export_home_0.new:
 Warning: Cannot stat: No such file or directory
 | Total bytes written: 1148590080 (1.1GB, 602kB/s)
 sendbackup: size 1121670
 sendbackup: end
 \

This was just a warning.  You'd been running a backup on export/home
which apparently died while this backup of /usr was happening.  This report
was on a file /usr/local/var/amanda/gnutar-lists/host.loca.site.corp_export_home_0.new
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: most efficient use of holding disk

2002-02-17 Thread Chris Marble

Martin Oehler wrote:
 
 I use amanda on a solaris 7 box with a DLT drive (20 GB)
 attached. My dumpcycle is 4 weeks with 20 runs per cycle. 
 
 Because the size of one incremental backup is only
 between 2-4 GB I don't want to change the tape each day.

You could crank your dumpcycle down to 3 (or whatever number would get
Amanda to use most of your tape capacity).
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: inparallell option in amanda.conf

2002-02-17 Thread Chris Marble

Don Potter wrote:
 
 I'm trying this out... I have more than enough holding area and network 
 bandwidth... so I was wondering how many users actually adjust this 
 value. I have been up to 8, but I have usually run into 
 spindle contention.

I run mine at 20.  I max out a pair of 100Mb cards at times.
I tweak the spindle numbers on some client machines to set the max on
them to 1 or 2.
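Spindle is the optional fourth disklist field; entries on the same host that share a positive spindle number are never dumped at the same time.  A sketch with hypothetical hosts and dumptypes:

```
# hostname   diskdevice   dumptype    spindle
client1      /dev/sda1    comp-user   1
client1      /dev/sda2    comp-user   1    # same spindle: dumped one at a time
client1      /dev/sdb1    comp-user   2    # different spindle: may run in parallel
```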
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Best Schedule

2002-02-16 Thread Chris Marble

[EMAIL PROTECTED] wrote:
 
 I have 17 ~50G tapes...
 
 I also have approximately 50G to backup.

If it's going to be 50Gb after compression then you can run full
backups every day with dumpcycle=0.  If it won't fit you could set
dumpcycle=2.  Set your weekly backups to no-reuse until they're a
month old.  Then put them back in the cycle and set the no-reuse
on whatever tape you call the monthly backup.
Set your tapecycle down to 7 so you can reuse tapes every week if
needed.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: next tape and tapelist

2002-02-16 Thread Chris Marble

Juanjo wrote:
 
 After doing some checks, I'm gonna let amanda do a first backup, about 180
 Gigs of data. 
 Well, after those checks, the pointer is at tape 3, how can I tell amanda
 to start with tape 1? 
 
 I've tried launching: 
 amrmtape confname tapelabel
 
 but it says something about preserving original database, and after
 running amcheck, next due tape is 3 still...

When I want to reuse a tape out of sequence I do an amrmtape and then edit
the tape back into tapelist with an appropriately old usage date.
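A sketch of that edit (tapelist lines are `datestamp label reuse`; the labels here are made up).  Giving the tape an ancient datestamp makes it the oldest reusable tape and hence next in line:

```shell
tl=$(mktemp)                  # stand-in for the real tapelist file
cat > "$tl" <<'EOF'
20020215 DAILY-05 reuse
20020214 DAILY-04 reuse
20020213 DAILY-03 reuse
EOF
# after "amrmtape config DAILY-04", add the tape back with an old date
sed -i 's/^20020214 DAILY-04/19700101 DAILY-04/' "$tl"
cat "$tl"
```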
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Missing Tape has Backups Screwed

2002-02-08 Thread Chris Marble

Stephen Carville wrote:
 
 One of my tapes did not come back from offsite in time for today's
 backup (Some winidiot sent it off for four weeks instead of one) so
 today's backup failed.  How can I get amanda to just skip that tape
 and go on to the next one in the sequence?  When I try amcheck, amanda
 thinks all of the tapes in the changer are active tapes and will not
 write to them.  Even if I load the next tape in the sequence manually,
 amcheck rejects it. How can I tell which tapes it will write to?
 
 dumpcycle 7 days
 runspercycle 5
 tapecycle 15 tapes
 runtapes 2
 
 So far, backups have only needed one tape per run so it seems to me
 that tapes from two weeks ago should no longer be active but amanda
 thinks they are.

You need to decrease tapecycle.  That says how many tapes you've got
in rotation.  You could lower it to 10 which would say that it can't
rewrite a tape for 2 weeks.
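The numbers behind this, as a sketch (shell arithmetic; the variable names are ours, not Amanda options):

```shell
runspercycle=5; runtapes=2; tapecycle=15
used_per_cycle=$((runspercycle * runtapes))   # up to 10 tapes written per week
echo "tapes written per dumpcycle: $used_per_cycle"
# a tape only leaves the active set once tapecycle-1 other tapes have been
# used after it, so with 15 tapes and up to 10 writes a week there is only
# half a week of slack before every tape in the changer is still "active"
echo "spare tapes beyond one cycle: $((tapecycle - used_per_cycle))"
```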
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda won't dump one particular partition

2002-02-01 Thread Chris Marble

John R. Jackson wrote:
 
 Okay, script gzip.test put in place and amanda run with a disklist
 just for the one troublesome client.  5 partitions.  3 work, 2 fail.
 The gzip.log files don't give me any great hints:
 
 There's a bit more information there.  The two that failed returned a
 status of 152 = 0x98 = 0x80 + 0x18 = 0x80 + 24.  Freely translated,
 that's still our SIGTSTP signal (24), but it also says gzip dropped core
 (0x80).
 
 Take a look in /tmp/amanda and see if there are any core files from
 gzip.

Nope, no core files at all.

 You might try changing the script to call gzip like this:
 
   /usr/bin/strace -o /tmp/gzip.strace.$$ /bin/gzip "$@"

Okay, I got some output that may have identified the problem for me.
Now I have to figure out where this is getting set for this user.
His shell is bash and I find nothing in .bashrc or /etc/bashrc
2 of the strace files ended with:

--- SIGXCPU (CPU time limit exceeded) ---
+++ killed by SIGXCPU +++

 This is fast becoming a gzip or Linux debugging session rather than an
 Amanda one.

Seems have become such.  I appreciate all your help on this one.
I've been using Amanda for several years and usually answer my share
of questions on the mailing list.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda won't dump one particular partition

2002-01-31 Thread Chris Marble

John R. Jackson wrote:
 
/sbin/dump 0sf 1048576 - /dev/sda5 | /bin/gzip -dc | cat > /dev/null
 
 That command fails with
 
 gzip: stdin: not in gzip format
 
 Sorry.  You're right that the 'd' should not have been there.
 
 ...  At least I can run backups with compression once again.
 
 So, where are you at?  Do you still have a problem or not?

Yeah, still have a problem.  The tar backups fail if I turn on
compression too.  So I'm adding an extra hour to my backups and
using an extra 10Gb on the tape because I can't compress these 2
partitions.

 If you do, I think the next thing I'd try (and it's icky) is put
 the appended script (after you test it -- I did some but not enough)
 someplace.  Then rebuild, but set the GZIP environment variable before
 running ./configure so Amanda uses the script:
 
   make distclean
   GZIP=/path/to/the/test/script ./configure ...
   # Make sure ./configure found the right gzip
   make
    su -c "make install"
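The appended script itself didn't survive in this archive; a minimal reconstruction in its spirit (file name and log path are guesses based on the gzip.log excerpts elsewhere in the thread):

```shell
# write a stand-in logging wrapper to /tmp (name and path hypothetical)
cat > /tmp/gzip.test <<'EOF'
#!/bin/sh
# log every invocation, run the real gzip, record its exit status
log=/tmp/gzip.log.$$
echo "=== $(date): start $$: $0 $*" >> "$log"
/bin/gzip "$@"
status=$?
echo "=== $(date): done: status = $status" >> "$log"
exit $status
EOF
chmod +x /tmp/gzip.test
echo hello | /tmp/gzip.test --fast | gzip -dc    # round-trips: prints "hello"
```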

Am I doing this on the troublesome client or on the tape host?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda won't dump one particular partition

2002-01-31 Thread Chris Marble

John R. Jackson wrote:
 
 Yeah, still have a problem.  ...
 
 Nuts!  I was hoping your problem would just go away on its own :-).

It did go away by itself when the same thing happened 6-8 months ago.
Each time it showed up when I cranked down dumpcycle and amanda had
to start delaying things.  The other time it went away after I cranked
dumpcycle up and got an incremental of Odin:/home.  I did my best to
do the same with no luck now.  I've even deleted the partition and
re-added it in case something was messed up in the database.

 Am I doing this on the troublesome client or on the tape host?
 
 The client.

Okay, script gzip.test put in place and amanda run with a disklist
just for the one troublesome client.  5 partitions.  3 work, 2 fail.
The gzip.log files don't give me any great hints:

gzip.log.8026
::
=== Thu Jan 31 12:37:29 PST 2002: start 8026: /usr/local/src/gzip.test --fast
UIDPID  PPID  C STIME TTY  TIME CMD
backup8025 1  0 12:37 ?00:00:00 /usr/local/pkg/amanda-test/libex
backup8026  8025  0 12:37 ?00:00:00 /bin/bash /usr/local/src/gzip.te
=== Thu Jan 31 12:37:37 PST 2002: done: status = 0
::
gzip.log.8075
::
=== Thu Jan 31 12:37:44 PST 2002: start 8075: /usr/local/src/gzip.test --fast
UIDPID  PPID  C STIME TTY  TIME CMD
backup8074 1  0 12:37 ?00:00:00 /usr/local/pkg/amanda-test/libex
backup8075  8074  0 12:37 ?00:00:00 /bin/bash /usr/local/src/gzip.te
=== Thu Jan 31 12:38:30 PST 2002: done: status = 0
::
gzip.log.8148
::
=== Thu Jan 31 12:37:59 PST 2002: start 8148: /usr/local/src/gzip.test --fast
UIDPID  PPID  C STIME TTY  TIME CMD
backup8147 1  0 12:37 ?00:00:00 /usr/local/pkg/amanda-test/libex
backup8148  8147  0 12:37 ?00:00:00 /bin/bash /usr/local/src/gzip.te
=== Thu Jan 31 13:06:15 PST 2002: done: status = 152
::
gzip.log.8227
::
=== Thu Jan 31 12:38:30 PST 2002: start 8227: /usr/local/src/gzip.test --fast
UIDPID  PPID  C STIME TTY  TIME CMD
backup8226 1  0 12:38 ?00:00:00 /usr/local/pkg/amanda-test/libex
backup8227  8226  0 12:38 ?00:00:00 /bin/bash /usr/local/src/gzip.te
=== Thu Jan 31 12:40:23 PST 2002: done: status = 0
::
gzip.log.8545
::
=== Thu Jan 31 12:40:24 PST 2002: start 8545: /usr/local/src/gzip.test --fast
UIDPID  PPID  C STIME TTY  TIME CMD
backup8544 1  0 12:40 ?00:00:00 /usr/local/pkg/amanda-test/libex
backup8545  8544  0 12:40 ?00:00:00 /bin/bash /usr/local/src/gzip.te
=== Thu Jan 31 12:55:26 PST 2002: done: status = 152

The last part of the amdump.1 file from the tape host:

driver: finished-cmd time 572.015 taper wrote Odin:/var
driver: state time 572.016 free kps: 190313 space: 69736536 taper: idle idle-dumpers: 
18 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 86400 driver-idle: not-idle
driver: interface-state time 572.016 if : free 45313 if ENET100: free 4 if ENET10: 
free 8 if LOCAL: free 25000
driver: hdisk-state time 572.016 hdisk 0: free 69736536 dumpers 2
driver: result time 1459.563 from dumper0: FAILED 02-8 [compress returned 152, 
/sbin/dump returned 3]
driver: state time 1474.593 free kps: 192638 space: 71257176 taper: idle idle-dumpers: 
19 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 86400 driver-idle: not-idle
driver: interface-state time 1474.593 if : free 47638 if ENET100: free 4 if 
ENET10: free 8 if LOCAL: free 25000
driver: hdisk-state time 1474.593 hdisk 0: free 71257176 dumpers 1
driver: result time 2113.522 from dumper1: FAILED 00-4 [compress returned 152, 
/sbin/dump returned 3]
driver: state time 2128.548 free kps: 195000 space: 73553528 taper: idle idle-dumpers: 
20 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 86400 driver-idle: not-idle
driver: interface-state time 2128.548 if : free 5 if ENET100: free 4 if 
ENET10: free 8 if LOCAL: free 25000
driver: hdisk-state time 2128.548 hdisk 0: free 73553528 dumpers 0
driver: QUITTING time 2128.548 telling children to quit

And sendbackup.20020131123759.debug from the client:

sendbackup: debug 1 pid 8147 ruid 50 euid 50 start time Thu Jan 31 12:37:59 2002
/usr/local/pkg/amanda-test/libexec/sendbackup: version 2.4.2p2
sendbackup: got input request: DUMP /home 3 2002:1:31:10:0:0 OPTIONS 
|;bsd-auth;compress-fast;index;
  parsed request as: program `DUMP'
 disk `/home'
 lev 3
 since 2002:1:31:10:0:0
 opt `|;bsd-auth;compress-fast;index;'
sendbackup: try_socksize: send buffer size is 65536
sendbackup: stream_server: waiting for connection: 0.0.0.0.46346
sendbackup: stream_server: waiting for connection: 0.0.0.0.46347
sendbackup: stream_server: waiting for connection: 0.0.0.0.46348
  waiting for connect on 46346, then 46347, then 46348
sendbackup: stream_accept: connection from 

Re: Dump disk size and backup failure

2002-01-30 Thread Chris Marble

Wayne Richards wrote:
 
 We have been experiencing a lot of failures on full backup with a large dump 
 disk.
 
 Our dump disk devices are:
 
 /dev/dsk/c0t13d0s0   83143932708 8145398 1%/usr13
 /dev/dsk/c2t1d0s317413250  10 17239108 1%/usr14
 
 The holdingdisk definitions are:
 
 holdingdisk hd1 {
 comment main holding disk
 directory /usr13/amanda_holding_disk  # where the holding disk is
 use -2500 mb# how much space can we use on it
 chunksize 0 mb
 }
 
 holdingdisk hd2 {
 comment main holding disk
 directory /usr14/amanda_holding_disk  # where the holding disk is
 use -2500 mb# how much space can we use on it
 chunksize 0 mb
 }
 
 All backups work fine using hd1, but when using hd2, the disk fills; backups 
 go into degraded mode and several fail.  Is there some limit on the size of 
 holding disk that amanda can manage?

Not that I've encountered.  I'm using a 75Gb drive as my holding disk.
Backing up 50Gb a night to AIT-II tape from about 200Gb across 80 disks.

Try setting chunksize to 2000Mb (not 2Gb) and see if that fixes things.
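The suggested change, sketched against the quoted hd2 definition (keeping each holding-disk file safely under a 2 GByte file-size limit):

```
holdingdisk hd2 {
    comment "main holding disk"
    directory /usr14/amanda_holding_disk
    use -2500 mb
    chunksize 2000 mb    # split holding files into chunks under 2 GBytes
}
```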
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: re-doing bad tapes

2002-01-28 Thread Chris Marble

Ben Elliston wrote:
 
 On Friday, amdump wrote my dump images to a tape successfully, but a 
 subsequent amverify showed that the tape is defective.  Now, my dumps have 
 been removed from the holding disk, but I don't think the images on the 
 tape are usable.
 
 Is there any way to re-do this backup onto a good tape or am I hosed?

Do an amrmtape config tape-label
Mark the tape so you don't re-use it.
amlabel a new tape with the same tape-label.
Rerun the backup and you should get about the same levels as before.
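As a concrete sequence (config and label names are placeholders, and this needs a working Amanda installation, so it is a sketch rather than something to paste blindly):

```
amrmtape Daily DAILY-07    # forget the defective tape's contents
                           # (and physically mark that cartridge "bad")
amlabel Daily DAILY-07     # write the same label onto a fresh cartridge
amdump Daily               # rerun; the planner schedules similar levels
```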
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: nervous over amrecover

2002-01-28 Thread Chris Marble

Albert Hopkins wrote:
 
 Testing a new amanda server install (RH Linux 7.2, amanda 2.4.2p2, DLT
 7000). I first ran a successful backup and then attempted to recover one
 (small) file.  The first time I extract I get
 
 amrecover: Can't read file header
 extract_list - child returned non-zero status: 1
 
 But then, without exiting I add and extract the file again.  This
 time it works.  But this would have been pretty scary if this were a
 real-life situation.  I'm just wondering why this happened and what
 could be done to prevent it.  I had not ejected or rewound the tape
 between backup and recovery.  Could this have been the issue?

You need to rewind the tape before doing the recover.
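With a non-rewinding device node (the kind Amanda writes through), the drive is left positioned past the data just written; something like this, using the device name seen in other posts in this thread, rewinds it first:

```
mt -f /dev/nrst0 rewind
```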
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: amandad

2002-01-28 Thread Chris Marble

Davidson, Brian wrote:
 
 I have Amanda 2.4.3b1 installed on a intel computer running BSDI 4.1.  I
 have amanda in the /etc/services file and in the /etc/inetd.conf file.
 amandad does not start when the computer is rebooted but no error messages
 are reported in /var/log/messages. I can run amandad from the command line
 as user amanda and it will create the /tmp/amanda/debug.amanda directory
 and debug file before timing out. 

That sounds like the appropriate behavior.  Amandad doesn't run except
when a connection comes in on port 10080.  Running it by hand should do
what you describe.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: ok to insert correct tapes during amdump run?

2002-01-23 Thread Chris Marble

Jon LaBadie wrote:
 
 Last night I realized I had forgotten to change cartridges
 when the dump was about 1/2 way through.  Some disklist entries
 done, others in progress, others not started.
 
 What would happen IF I put in the correct cartridge at that time?

Amanda already knows that there was no tape in the drive.  It redid
its schedule to account for that (fewer full dumps and the like).
All dumps would still get written to the holding disk.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda won't dump one particular partition

2002-01-18 Thread Chris Marble

John R. Jackson wrote:
 
  This would be a more accurate test:
  
   dump 0sf 1048576 - /dev/sda5 | (restore -tvf - ; cat > /dev/null)
 
 Surprisingly that ran fine:
 
 Not terribly surprising, but it shows the dump - restore pipeline is
 not the problem.  Now try this:
 
   /sbin/dump 0sf 1048576 - /dev/sda5 | /bin/gzip -dc | cat > /dev/null

That command fails with

gzip: stdin: not in gzip format


Taking out the d from the gzip command:

/sbin/dump 0sf 1048576 - /dev/sda5 | /bin/gzip -c | cat > /dev/null
  DUMP: Date of this level 0 dump: Fri Jan 18 16:19:31 2002
  DUMP: Dumping /dev/sda5 (/usr) to standard output
  DUMP: Added inode 7 to exclude list (resize inode)
  DUMP: Label: none
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 3037697 tape blocks.
  DUMP: Volume 1 started with block 1 at: Fri Jan 18 16:19:41 2002
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: 17.00% done at 1720 kB/s, finished in 0:24
  DUMP: 34.60% done at 1751 kB/s, finished in 0:18
  DUMP: 52.80% done at 1782 kB/s, finished in 0:13
  DUMP: 71.24% done at 1803 kB/s, finished in 0:08
  DUMP: 89.07% done at 1803 kB/s, finished in 0:03
  DUMP: 100.00% done at 1811 kB/s, finished in 0:00
  DUMP: Volume 1 completed at: Fri Jan 18 16:49:41 2002
  DUMP: Volume 1 3261670 tape blocks (3185.22MB)
  DUMP: Volume 1 took 0:30:00
  DUMP: Volume 1 transfer rate: 1812 kB/s
  DUMP: 3261670 tape blocks (3185.22MB)
  DUMP: finished in 1800 seconds, throughput 1812 kBytes/sec
  DUMP: Date of this level 0 dump: Fri Jan 18 16:19:31 2002
  DUMP: Date this dump completed:  Fri Jan 18 16:49:41 2002
  DUMP: Average transfer rate: 1812 kB/s
  DUMP: DUMP IS DONE


 What's in a sendbackup*debug file corresponding to one of these errors?

sendbackup: debug 1 pid 7505 ruid 50 euid 50 start time Fri Jan 18 15:46:30 2002
/usr/local/pkg/amanda-2.4.2p2/libexec/sendbackup: version 2.4.2p2
sendbackup: got input request: DUMP /usr 0 1970:1:1:0:0:0 OPTIONS 
|;bsd-auth;compress-fast;index;
  parsed request as: program `DUMP'
 disk `/usr'
 lev 0
 since 1970:1:1:0:0:0
 opt `|;bsd-auth;compress-fast;index;'
sendbackup: try_socksize: send buffer size is 65536
sendbackup: stream_server: waiting for connection: 0.0.0.0.53073
sendbackup: stream_server: waiting for connection: 0.0.0.0.53074
sendbackup: stream_server: waiting for connection: 0.0.0.0.53075
  waiting for connect on 53073, then 53074, then 53075
sendbackup: stream_accept: connection from 134.173.32.73.44160
sendbackup: stream_accept: connection from 134.173.32.73.44161
sendbackup: stream_accept: connection from 134.173.32.73.44162
  got all connections
sendbackup: spawning /bin/gzip in pipeline
sendbackup: argument list: /bin/gzip --fast
sendbackup-gnutar: pid 7506: /bin/gzip --fast
sendbackup: spawning /sbin/dump in pipeline
sendbackup: argument list: dump 0usf 1048576 - /dev/sda5
sendbackup: started index creator: /sbin/restore -tvf - 2>&1 | sed -e '
s/^leaf[]*[0-9]*[   ]*\.//
t
/^dir[  ]/ {
s/^dir[ ]*[0-9]*[   ]*\.//
s%$%/%
t
}
d
'
index tee cannot write [Broken pipe]
sendbackup: pid 7507 finish time Fri Jan 18 15:59:03 2002
error [compress got signal 24, /sbin/dump returned 3]
sendbackup: pid 7505 finish time Fri Jan 18 15:59:03 2002


 ...  I've got working large
 partitions on some other systems - including the tape host:
 
 Are those other systems with large partitions the same type of OS as
 the problem client?

Yes, the tape host (Chris) and the problem client (Odin) are both RedHat
Linux boxes.  kernel 2.4.12, dump 0.4b25, gzip 1.3.2.
The tape host is using libext2fs 1.22 of 22-Jun-2001
The client: libext2fs 1.19 of 13-Jul-2000
I also get the failure on the /usr partition on the same client.  That's
a 4Gb partition with 3Gb used.  That's the /dev/sda5 I dumped above.

   runtar: error [must be invoked by operator]
  ...
 My config.status file on the troublesome client says:
 
 # ./configure  --prefix=/usr/local/pkg/amanda-2.4.2p2 --with-user=backup --wit
 h-config=hmcis --with-group=sys
 
 That may be what it says (--with-user=backup), but it's not what's built
 into the runtar binary.  The binary is running as though you had said
 --with-user=operator.

You're right.  After a make distclean, reconfigure, and reinstall, gnutar works.
I can't believe I had a build lying around that wasn't what I'd installed.
Thanks so much.  At least I can run backups with compression once again.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: amanda newbie question

2002-01-18 Thread Chris Marble

Robert Kearey wrote:
 
 It seems that the setup I've inherited has no indexing enabled, and
 I'm at a loss to understand why given the advantages. Is there some
 compelling reason to have indexing off? Can I just turn it on?

I have indexing enabled but I never make use of it.  I'm either restoring
an entire disk or someone's delete mail inbox.  In both cases I use
amrestore.  You need indexing if you want to use amrecover.
I just figure out what tape I need from my printouts.
Only disadvantage to indexing is that it takes some diskspace on the
amanda server.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda won't dump one particular partition

2002-01-17 Thread Chris Marble

Joshua Baker-LePain wrote:
 
 On Wed, 16 Jan 2002 at 9:40pm, Chris Marble wrote
 
  ? sendbackup: index tee cannot write [Broken pipe]
  |   DUMP: Broken pipe
  |   DUMP: The ENTIRE dump is aborted.
  ? index returned 1
  sendbackup: error [/sbin/dump returned 3, compress got signal 24]
 
 Hmmm.  It seems like it could be a gzip problem.  Have you tried running 
 the dump/gzip command line (found in sendbackup*debug) by hand?

The /dev/sda5 (/usr) partition misbehaves in the same way, so I did:

dump 0usf 1048576 - /dev/sda5 | restore -tvf -

It ran, spit stuff out and then ended with:

dir 478493  ./kerberos/man/man1
leaf478494  ./kerberos/man/man1/sclient.1
dir 478495  ./kerberos/man/man8
leaf478496  ./kerberos/man/man8/sserver.8
leaf480966  ./kerberos/man/whatis
dir 478497  ./kerberos/sbin
leaf478498  ./kerberos/sbin/sserver
leaf   176  ./tmp
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.

The /usr/tmp directory is a symbolic link to ../var/tmp/
When I run it on /home the final lines are:

leaf   6291807  ./tars/btilahun.tgz
leaf   6291808  ./tars/jtobiska.tgz
leaf   6291809  ./tars/jwang.tgz
leaf30  ./quota.user
leaf31  ./quotas
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.


So the problem isn't with Amanda but with fairly ordinary dump/restore
commands.  It fails without the s and 1048576 parameters too.  Thanks
for the pointer.  The dump command by itself works fine, it's the restore
part that's giving the error.
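The failure mode is plain SIGPIPE: once the reading `restore -t` exits, the next write from dump hits a closed pipe.  A self-contained demonstration (bash; `seq`/`head` stand in for dump/restore, and `PIPESTATUS` exposes the producer's exit status):

```shell
# consumer exits early: producer is killed by SIGPIPE
seq 1 10000000 | head -n 1 > /dev/null
echo "producer status: ${PIPESTATUS[0]}"    # 141 = 128 + 13 (SIGPIPE)
# draining the rest of the stream (as in "restore ... ; cat > /dev/null")
# keeps the pipe open until the producer finishes normally
seq 1 10000000 | { head -n 1; cat; } > /dev/null
echo "producer status: ${PIPESTATUS[0]}"    # 0
```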

  sendsize: argument list: /bin/gtar --create --file /dev/null --directory /home 
--one-file-system --listed-in
  cremental /usr/local/pkg/amanda-2.4.2p2/var/amanda/gnutar-lists/Odin_home_0.new 
--sparse --ignore-failed-rea
  d --totals .
  runtar: error [must be invoked by operator]
   
 What does 'ls -l /usr/local/libexec/runtar' (or wherever you installed 
 runtar) say?  IIRC, runtar needs to be owned by root and setuid.  Also, 
 what does your (x)inetd entry for amandad look like?  runtar isn't getting 
 run by the right user.

Amanda runs as user operator on the tape server and user backup on this client.
Everything's been running fine with dump for a year.

I've got a symlink on both machines pointing /usr/local/pkg/amanda
to /usr/local/pkg/amanda-2.4.2p2.  Makes updating versions easier.

From /etc/inetd.conf on the client:
amanda  dgram   udp wait    backup  /usr/local/pkg/amanda/libexec/amandad   
amandad

And the runtar:
-rwsr-x---1 root sys 86403 Dec 19 13:45 
/usr/local/pkg/amanda-2.4.2p2/libexec/runtar*

On the tape host:
amanda  dgram  udp wait   operator /usr/local/pkg/amanda/libexec/amandad amandad

and the runtar there:
-rwsr-x---1 root disk86403 Apr 19  2001 
/usr/local/pkg/amanda-2.4.2p2/libexec/runtar*

-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Amanda won't dump one particular partition

2002-01-17 Thread Chris Marble

John R. Jackson wrote:
 
 This would be a more accurate test:
 
   dump 0sf 1048576 - /dev/sda5 | (restore -tvf - ; cat  /dev/null)

Surprisingly that ran fine:

dir 478497  ./kerberos/sbin
leaf478498  ./kerberos/sbin/sserver
leaf   176  ./tmp

  DUMP: 81.09% done at 8174 kB/s, finished in 0:01
  DUMP: Volume 1 completed at: Thu Jan 17 12:56:51 2002
  DUMP: Volume 1 3248140 tape blocks (3172.01MB)
  DUMP: Volume 1 took 0:06:42
  DUMP: Volume 1 transfer rate: 8079 kB/s
  DUMP: 3248140 tape blocks (3172.01MB)
  DUMP: finished in 402 seconds, throughput 8079 kBytes/sec
  DUMP: Date of this level 0 dump: Thu Jan 17 12:50:03 2002
  DUMP: Date this dump completed:  Thu Jan 17 12:56:51 2002
  DUMP: Average transfer rate: 8079 kB/s
  DUMP: DUMP IS DONE


 As to your original problem, I agree with Joshua that it sounds like a
 gzip problem.  Note the following:
 
   compress got signal 24
 
 What is signal number 24 on your OS (grep -w 24 /usr/include/sys/signal.h)?

I thought the compress getting signal 24 was from the dump returning
signal 3.  I'm getting these signal numbers from the signal man page.
Signal 3 is Quit from keyboard
Signal 24 is Stop typed at tty
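The man page lists several numbers per signal because they vary by architecture; on Linux/x86 signal 24 is actually SIGXCPU rather than SIGTSTP, which is consistent with the SIGXCPU result the strace run turned up elsewhere in this thread.  The shell can confirm:

```shell
kill -l 24    # print the name of signal 24; XCPU on Linux/x86
```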

 My next guess would be that you've hit a 2 GByte boundary someplace.
 How much data are you dumping on the file systems that fail?  Are there
 some that are under 2 GBytes that work?

I've got gzip 1.3.2 on both client and server.  I've got working large
partitions on some other systems - including the tape host:
/dev/sda1  4386344   2418076   1745368  58% /home

  runtar: error [must be invoked by operator]
 
 As Joshua said, this has got to be a problem for doing backups of that
 client.  When you built Amanda for that client (however that happened),
 it was told it would be run by operator.  That isn't happening and
 runtar is failing.
 
 This would only affect the backups done with GNU tar (/home), but it is
 fatal for that one.  You either need to rebuild Amanda for that client
 and set --with-user to backup or start running amandad as operator
 on that client.

My config.status file on the troublesome client says:

# ./configure  --prefix=/usr/local/pkg/amanda-2.4.2p2 --with-user=backup 
--with-config=hmcis --with-group=sys

The files in /usr/local/pkg/amanda/libexec are owned by a mixture of
backup:sys and root:sys.  I've got a .amandahosts file that's been
fine for the dumps.
Amanda's running as operator on the tape host.  Do I have to use the
same user on both server and client if I'm using gnutar?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Amanda won't dump one particular partition

2002-01-16 Thread Chris Marble

I've been a proponent of dump for years but I'm looking to move to
gnu tar for one disk.  Amanda fails with dump on just this disk when
it tries to do a level 0 backup.

/-- Odin   /home lev 1 FAILED [/sbin/dump returned 3, compress got signal 24]
sendbackup: start [Odin:/home level 1]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/sbin/restore -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
|   DUMP: Date of this level 1 dump: Sat Jan  5 02:00:00 2002
|   DUMP: Date of last level 0 dump: Wed Dec 12 02:00:00 2001
|   DUMP: Dumping /dev/sdb1 (/home) to standard output
|   DUMP: Added inode 7 to exclude list (resize inode)
|   DUMP: Label: none
|   DUMP: mapping (Pass I) [regular files]
|   DUMP: mapping (Pass II) [directories]
|   DUMP: estimated 7985435 tape blocks.
|   DUMP: Volume 1 started with block 1 at: Sat Jan  5 02:01:14 2002
|   DUMP: dumping (Pass III) [directories]
|   DUMP: dumping (Pass IV) [regular files]
|   DUMP: 7.89% done at 2100 kB/s, finished in 0:58
|   DUMP: 16.43% done at 2186 kB/s, finished in 0:50
? sendbackup: index tee cannot write [Broken pipe]
|   DUMP: Broken pipe
|   DUMP: The ENTIRE dump is aborted.
? index returned 1
sendbackup: error [/sbin/dump returned 3, compress got signal 24]
\

If I take out compression for this disk all is fine (I'm doing the gzip
on the client, the machine with the disk).  It's an Amanda problem of some
sort, but I have given up trying to fix it.  It started happening when I
cranked down the dumpcycle to try to squeeze a set of archival backups onto
fewer tapes; it had happened once months before but went away by itself.

So I switch the dumptype on this one disk to:

define dumptype tcomp-u-s2 {
global
comment User partitions on reasonably fast machines, start 2am, gnutar
priority medium
program GNUTAR
starttime 200
}
 
amcheck runs fine.  But when I do the amdump I don't get an estimate.
Here's the lines from the amdump file:

setting up estimates for Odin:/home
setup_estimate: Odin:/home: command 0, options:
got result for host Odin disk /home: 0 -> -1K, 1 -> -1K, -1 -> -1K
  0: Odin   /home
planner: FAILED Odin /home 0 [disk /home offline on Odin?]


And sendsize.20020116210824.debug on the client:
/usr/local/pkg/amanda-2.4.2p2/libexec/sendsize: version 2.4.2p2
calculating for amname '/', dirname '/'
calculating for amname '/boot', dirname '/boot'
sendsize: getting size via dump for / level 0
calculating for amname '/home', dirname '/home'
calculating for amname '/usr', dirname '/usr'
sendsize: getting size via dump for /boot level 0
sendsize: running /sbin/dump 0Ssf 1048576 - /dev/sda8
sendsize: running /sbin/dump 0Ssf 1048576 - /dev/sda1
running /usr/local/pkg/amanda-2.4.2p2/libexec/killpgrp
running /usr/local/pkg/amanda-2.4.2p2/libexec/killpgrp
sendsize: getting size via dump for /usr level 0
sendsize: getting size via gnutar for /home level 0
sendsize: running /sbin/dump 0Ssf 1048576 - /dev/sda5
running /usr/local/pkg/amanda-2.4.2p2/libexec/killpgrp
calculating for amname '/var', dirname '/var'
sendsize: spawning /usr/local/pkg/amanda-2.4.2p2/libexec/runtar in pipeline
sendsize: argument list: /bin/gtar --create --file /dev/null --directory /home
--one-file-system --listed-incremental
/usr/local/pkg/amanda-2.4.2p2/var/amanda/gnutar-lists/Odin_home_0.new
--sparse --ignore-failed-read --totals .
runtar: error [must be invoked by operator]
 
.
(no size line match in above gnutar output)
.



I'm running amanda 2.4.2p2 on both client and server.  OSs are Linux,
kernels are 2.4.12.  gnu tar is 1.13.25, dump is 0.4b25 on both.
I'm backing up 30 machines and 75 disks.  Client OSs are Solaris,
HP-UX, IRIX and Linux.  The only thing I changed when the dump on
the one disk started failing was dumpcycle.  Since amanda can handle
things if I tell that one disk NOT to compress that tells me that it's
an amanda problem rather than dump.

Help appreciated, whether getting dump working with compression again
or getting gnutar to work.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Suggestions on how to proceed ...

2002-01-07 Thread Chris Marble

Malcolm Herbert wrote:
 
 The first question is how to easily stagger 0-level dumps across a
 dump cycle where some disks need to be backed up at different frequency
 to others.

You can use different dump types for different disks.  In these special
dumptypes set the desired dumpcycle for that disk.  I used to do that and
it worked fine.  I like a short dumpcycle (fewer tapes for restores) but
I had a 36Gb disk that was fairly static.  I upped the dumpcycle for just
that one disk.
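
For reference, a per-disk dumpcycle override is just a dumptype that sets
its own dumpcycle; here is a sketch (the dumptype name and values are
invented for illustration):

```
# amanda.conf -- hypothetical dumptype for a mostly-static disk
define dumptype comp-user-static {
    global
    comment "large, mostly-static disk: full dumps less often"
    dumpcycle 4 weeks    # full backup at most every 4 weeks for this disk
}
```

Then reference comp-user-static for just that one entry in the disklist;
everything else keeps the short global dumpcycle.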
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: stopping estimates if machine goes off-line

2001-12-20 Thread Chris Marble

Eric Nelson wrote:
 
 Does anyone know how to 'tell' amanda to stop gathering estimates of a PC
 if it goes off-line/crashes?  Overall, I would like for amanda to bypass
 the particular PC if the estimates are taking too long and continue with
 the rest of the dump.

You could shorten etimeout in your amanda.conf file.
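
That is a one-line change in the global section of amanda.conf; the value
below is only illustrative, and the negative-value behavior is from memory,
so check your version's man page:

```
# amanda.conf -- seconds to wait for each filesystem's estimate;
# a negative value is (if memory serves) a total budget per client
etimeout 300
```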
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: 1st time amanda user

2001-12-16 Thread Chris Marble

[EMAIL PROTECTED] wrote:
 
 have a
 Quantum DLT8000-40 which holds ten tapes for which I found the description
 with length, filemark, and speed. My question is do I use chg-multi for the
 tpchanger parameter, and do I need to use the changerfile parameter?  Also
 do you need a changer.conf file?  thanks.

I'm using chg-multi with a 4-tape AIT-2 changer.  From amanda.conf:

runtapes 2 # number of tapes to be used in a single run of amdump
tpchanger chg-multi   # the tape-changer glue script
tapedev /dev/ntape# the no-rewind tape device to be used
rawtapedev /dev/null  # the raw device to be used (ftape only)
changerfile chg-multi.conf
changerdev /dev/ntape
 

And my chg-multi.conf file:

multieject 0
gravity 0
needeject 1
ejectdelay 60
statefile changer-status
firstslot 1
lastslot 4
slot 1 /dev/ntape
slot 2 /dev/ntape
slot 3 /dev/ntape
slot 4 /dev/ntape


As you can see my box uses the same tape device for each slot.  Might not be
the same for your setup.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Help needed, problems with amrecover: weird no index messages on the tape server and never connects from Linux client

2001-12-16 Thread Chris Marble

José Vicente Núñez Zuleta wrote:
 
 Error #1: I ran amrecover on the tape server:
 # /home/amanda/sbin/amrecover NEWBREAK
 AMRECOVER Version 2.4.2. Contacting server on
 lenbkx0001 ...
 220 lenbkx0001 AMANDA index server (2.4.2) ready.
 200 Access OK
 Setting restore date to today (2001-12-06)
 200 Working date set to 2001-12-06.
 200 Config set to NEWBREAK.
 501 No index records for host: lenbkx0001. Invalid?
 Trying lenbkx0001 ...
 501 No index records for host: lenbkx0001. Invalid?
 Trying lenbkx0001.newbreak.com ...
 501 No index records for host:
 lenbkx0001.newbreak.com. Invalid?
 amrecover>

Do you know whether you have index files?  Do you have
 index yes
in the appropriate dumptype in your amanda.conf file?
I got caught on one amanda setup when we didn't have indexing enabled
for the particular dumptype we were using.

Do you have index files in NEWBREAK/index?
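
For anyone hitting the same "No index records" message, the relevant bit of
the dumptype looks like this (a sketch; the dumptype name is made up):

```
# amanda.conf -- amrecover can only browse dumps made with indexing on
define dumptype comp-user-indexed {
    global
    compress client fast
    index yes    # without this, amrecover reports "No index records"
}
```

Note that turning it on only indexes future dumps; existing dumps stay
unbrowsable.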
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: xinetd.d

2001-12-16 Thread Chris Marble

Thomas Beer wrote:
 
 during boot the following errors occur:
 Should I place every service started in an extra file?

That's what works for me (separate files).
My amanda file just has:
service amanda
 {
socket_type = dgram
protocol= udp
wait= yes
user= amanda
group   = disk
server  = /usr/lib/amanda/amandad
disable = no
 }

repeat as required for the other services.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Dump reports negative disk size

2001-12-16 Thread Chris Marble

Ross George wrote:
 
 When I run amdump, the following message appears (presumably due to the 
 hard drive being too big):
 The system is FreeBSD 4.0, with Amanda 2.4.2p2.
 
 FAILED AND STRANGE DUMP DETAILS:
 
 /-- xxx /exports/mail lev 0 FAILED [/sbin/dump returned 3]
 sendbackup: start [xxx.xxx.net.au:/exports/mail level 0]
 sendbackup: info BACKUP=/sbin/dump
 sendbackup: info RECOVER_CMD=/sbin/restore -f... -
 sendbackup: info end
 |   DUMP: Date of this level 0 dump: Wed Nov 28 12:42:34 2001
 |   DUMP: Date of last level 0 dump: the epoch
 |   DUMP: Dumping /dev/rmlxd0c (/exports/mail) to standard output
 |   DUMP: mapping (Pass I) [regular files]
 |   DUMP: mapping (Pass II) [directories]
 |   DUMP: estimated 22194864 tape blocks.
 |   DUMP: dumping (Pass III) [directories]
 |   DUMP: dumping (Pass IV) [regular files]
 |   DUMP: 1.60% done, finished in 5:07
 ?   DUMP: read error from /dev/rmlxd0c: Invalid argument: [block 
 -225809704]: count=7168
 ?   DUMP: read error from /dev/rmlxd0c: Invalid argument: [sector 
 -225809704]: count=512
 ?   DUMP: read error from /dev/rmlxd0c: Invalid argument: [sector 
 -225809703]: count=512
 ?   DUMP: read error from /dev/rmlxd0c: Invalid argument: [sector 
 -225809702]: count=512
 ?   DUMP: read error from /dev/rmlxd0c: Invalid argument: [sector 
 -225809701]: count=512
 ?   DUMP: read error from /dev/rmlxd0c: Invalid argument: [sector 
 -225809700]: count=512

I've got a 75Gb disk with over 22Gb of data on it:

/-- Odin   /home lev 0 FAILED [compress got signal 24, /sbin/dump returned 3]
sendbackup: start [Odin:/home level 0]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/sbin/restore -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
|   DUMP: Date of this level 0 dump: Sun Dec 16 02:00:00 2001
|   DUMP: Dumping /dev/sdb1 (/home) to standard output
|   DUMP: Added inode 7 to exclude list (resize inode)
|   DUMP: Label: none
|   DUMP: mapping (Pass I) [regular files]
|   DUMP: mapping (Pass II) [directories]
|   DUMP: estimated 22479401 tape blocks.
|   DUMP: Volume 1 started with block 1 at: Sun Dec 16 02:01:21 2001
|   DUMP: dumping (Pass III) [directories]
|   DUMP: dumping (Pass IV) [regular files]
|   DUMP: 1.97% done at 1473 kB/s, finished in 4:09
|   DUMP: 4.35% done at 1631 kB/s, finished in 3:39
|   DUMP: 6.51% done at 1626 kB/s, finished in 3:35
? sendbackup: index tee cannot write [Broken pipe]
|   DUMP: Broken pipe
|   DUMP: The ENTIRE dump is aborted.


I'm running RedHat Linux 6.2 with kernel 2.4.12.
This disk usually backs up fine but amanda started complaining when I
shortened the dumpcycle (so I had to pull fewer tapes for an archive).
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Linux dump

2001-12-13 Thread Chris Marble

José Vicente Núñez Zuleta wrote:
 
 Also another of my problems was using Linux dump. It is
 simply useless with amanda; stick with tar (not very
 useful if you have to back up time-sensitive files such
 as Rational Clearcase VOB's).

I'm doing my backups with Linux dump, Solaris ufsdump, HP-UX dump
and IRIX xfsdump.  No problems.  I did update the Linux dump to
0.4b23 from sourceforge.net.  What problems was Linux dump causing
for you?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Who has Configset for Autoloader under Linux?

2001-12-13 Thread Chris Marble

Joerg Klaas wrote:
 
 So, does someone have a set of working conf-files for me, to be able to
 start an backup using:
 
 Linux Kernel 2.2x
 SONY TSL-11000 DAT Tapechanger   (8Tapes)
 
 I already found 3 devicefiles which seem to be ok, for me side here.
 They are
 /dev/sg0
 /dev/sg1
 /dev/nst0
 but I don't know which are the right ones to use

/dev/nst0 should be the non-rewinding tape device.  I'm using a 4-tape AIT-2
changer on a Linux 2.4.12 system.  A few lines from my amanda.conf file:

runtapes 2 # number of tapes to be used in a single run of amdump
tpchanger chg-multi   # the tape-changer glue script
tapedev /dev/ntape# the no-rewind tape device to be used
rawtapedev /dev/null  # the raw device to be used (ftape only)
changerfile chg-multi.conf
changerdev /dev/ntape
 
chg-multi is in the same directory and I don't think I had to do anything to it.
chg-multi.conf contains:

multieject 0
gravity 0
needeject 1
ejectdelay 60
statefile changer-status
firstslot 1
lastslot 4
slot 1 /dev/nst0
slot 2 /dev/nst0
slot 3 /dev/nst0
slot 4 /dev/nst0

-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: skipping start of tape......

2001-11-20 Thread Chris Marble

Damith Coomasaru wrote:
 
 I am new to amanda. I tried to restore but I got the following error: 
 
 amanda:~# /usr/sbin/amrestore /dev/st0 
 
 amrestore:   0: skipping start of tape: date 2002 label HISS0
 amrestore:   1: restoring amanda.ruh.ac.lk._var_backups.2002.0
...
 amrestore:  14: skipping start of tape: date 2002 label HISS0
 amrestore:  15: restoring amanda.ruh.ac.lk._var_backups.2002.0 
 
 It is endless. Have you any idea? 

You didn't tell it what it was supposed to restore.
Here's the syntax I use as the amanda user on my tape server to do an
interactive restore of files on /home of another machine named Odin:

amrestore -p /dev/st0 Odin '/home$' | ssh odin -l root restore -if -
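
The trailing '$' matters: amrestore treats the host and disk arguments as
regular expressions, so an unanchored /home would also match /home2. A quick
illustration of the anchoring, with grep standing in for amrestore's
matching:

```shell
# Anchored pattern matches /home but not /home2
printf '%s\n' /home /home2 | grep '/home$'
# prints only /home
```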
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Problems with amverify and amrecover - more info

2001-10-26 Thread Chris Marble

Michael Sobik wrote:
 
 amrestore correctly lists all the dump images on the tape.  However, if I
 try and restore with:
 
 amrestore -p /dev/device <real hostname> <real disk name> > /dev/null
 
 I get:
 
 amrestore:  3: restoring realhostname.diskimage.0

Why not try a more real restoration scenario?
amrestore -p /dev/device <real hostname> <real disk name> | restore -if -

(or ufsrestore if that's what you're using).
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: HELP: server fast compression in a loop?

2001-10-26 Thread Chris Marble

Dan Smith wrote:
 
 I've got a machine that is trying to backup a directory that is about
 4GB.  I've got the dumptype set with server fast compression.  It's
 been going for at least 8 hours (on this one disklist entry).  The
 machine is very fast, so it's not just lagging.  Other disklist
 entries went fine and quick.  the holding area shows:
 
 host.disk.0.tmp
 host.disk.0.1.tmp
 host.disk.0.2.tmp
 ...
  Each of these is 500M (one is constantly growing towards 500).  Then,
  if I look a bit later, they're all gone and it is starting at the
  beginning.

It looks like that one backup to holding disk is failing and restarting.
I'd kill the last amdump process on the tape server and see if that at
least gets the backup process to complete.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: More than one page worth of dumps on report

2001-10-25 Thread Chris Marble

Stan Brown wrote:
 
 I'm backing up 99 filesystems. I am using one of the postscript forms
 (forget which one) to print out a report at the end of the run. Clearly 99
 won't fit in one page.

I'm using example/3hole.ps and my 81 filesystems fit on 2 sides of a single
piece of paper.  I don't think I've modified the file from what John Jackson
wrote several years ago.  It's easy enough to shrink fonts to fit more
onto a page (the postscript file's quite readable).
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: every 10 days amanda fails

2001-10-23 Thread Chris Marble

gloria eraña wrote:
 
 I've been noticing that our amanda backup fails every 10 days and stop to
 backup our main server. Can someone point me why I'm getting this? I'm not
 very familiar with amanda and we rely on someone's script for our backup
 which makes it more complicated. I would appreciate for any help. Thanks.

Is your dumpcycle 10?  Does your disklist specify a 10-day cycle for
any particular drives (actually a special dumptype specified in amanda.conf)?
I expect there's some disk that's getting a full backup every 10 days and
there's a problem with it or with the connectivity to that computer.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: NAK: amandad busy from unfinishing selfchecks

2001-10-12 Thread Chris Marble

peanut butter wrote:
 
 Hi, I'm using version 2.4.2p2 of Amanda.  For a particular client on
 which I use tar with Amanda to back up a single directory, this entry
 has worked fine until I aborted an amdump several days ago (at least I
 think these are connected).  The next amdump (or one very soon
 afterward) would show a no estimate with an amstatus for this machine
 and finally a FAILED for the email report.  Amchecks started coming
 back with NAK:  amandad busy for this machine.  Investigating this, I
 noticed that two processes from the amanda user were running on the
 client, an amandad and (likely, in retro) a
 /usr/local/libexec/amanda/2.4.2p2/sendsize.  I killed these but an
 amcheck only started two new ones--amandad and
 /usr/local/libexec/amanda/2.4.2p2/selfcheck--which would continue to
 run until I would kill them.  One time I let them run over a
 night or two to see if they would ever finish or be cleaned up.  Alas,
 it would seem they would run forever if I let them.  If I kill them, an
 amcheck will start them again and subsequent amchecks will give me the
 dreaded NAK:  amandad busy message.

I've got a mix of 2.4.2 and 2.4.2p2 on Linux, Solaris, HP-UX and
IRIX systems.  I'm using variants of dump everywhere.  I've never had
a problem with amanda processes restarting once I got them killed.
Have you run an amcleanup on your server?  Is there anything left
around that you need to amflush?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: ERROR [can not access /dev/hdg1 (/dev/hdg1): Permission denied]

2001-06-09 Thread Chris Marble

Denise Ives wrote:
 
 I still get access Errors on the partitions for my newly
 added clients. I have tried using both hdg1 and /dev/hdg1 in the disk
 list.
 
 Amanda Backup Client Hosts Check
 
 
 ERROR: sunny3.neptune.com: [can not access /dev/hdg1
 (/dev/hdg1): Permission denied]
 ERROR: sunny2.neptune.com: [can not access /dev/hda1
 (/dev/hda1): Permission denied] 

The amanda user on the clients (sunny[23]) needs to have read access
to the raw disk devices.  Usually making the amanda user a member of
the disk group is okay.  On some machines I have to
 chmod g+r /dev/sdb1
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: reuse tape issue

2001-05-16 Thread Chris Marble

[EMAIL PROTECTED] wrote:
 
 This morning I needed to overwrite an old tape.  I looked in the tapelist
 file and my tape is listed as reuse.  If I try to flush data to that tape
 I get the error cannot overwrite active tape.  Must I run an amrmtape
 before I can reuse a tape?  I feel I am missing some piece of this puzzle,
 so if one could shed some light.  Thanks again in advance for your help.

In your amanda.conf file you specify tapecycle.  Using that and dumpcycle
and runspercycle Amanda figures out how often you should reuse a tape.
If you lower tapecycle then you should be able to reuse this tape without
problems.
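
As a sketch of how the three parameters relate (the numbers here are
illustrative, not a recommendation):

```
# amanda.conf -- Amanda will refuse to overwrite a tape until the pool
# has cycled; roughly, tapecycle should exceed runspercycle * runtapes
dumpcycle 7 days      # every disk gets a full dump within a week
runspercycle 7        # amdump runs per dumpcycle
tapecycle 10 tapes    # pool size; lower it (or use amrmtape) to free a tape sooner
```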
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: No such file or directory

2001-04-19 Thread Chris Marble

Tal Ovadia wrote:
 
 Server Linux RH6.2 Client Sun E450 Solaris 2.7
 I get a strange error from amcheck on the non system disks on this
 specific machine:
 
 ERROR: myserver.mydomain.com: [can not access /disc2 (/disc2): No such
 file or directory]
 ERROR: myserver.mydomain.com: [can not access /disc1 (/disc1): No such
 file or directory]
 
 The disks are mounted (here is the mount results):
 # mount
 /disc1 on /dev/dsk/c0t1d0s6 read/write/setuid/largefiles on Tue Apr 17
 15:55:33 2001
 /disc2 on /dev/dsk/c0t2d0s6 read/write/setuid/largefiles on Tue Apr 17
 15:55:33 2001

Is there a problem with /etc/vfstab?  I thought you posted it the
other day and I noticed that you had /dev/dsk/c0t1d0s6 one place where
you should have had /dev/rdsk/c0t1d0s6
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: how do you backup up 5 days to a single tape?

2001-04-19 Thread Chris Marble

Patrick Presto wrote:
 
 Is this the only way to do it you think?  I was hoping I could use the tapes
 and just append to the last backup taken (somehow??). I would prefer not to
 use the holding disk if possible.  

By design Amanda will not append to a tape.

 If I did backup five days to the holding disk and then flushed the data to a
 tape, would I be able to use amrecover to restore data from any of the last
 5 days?  Would I have to do anything special to recover?

I don't think I've ever had to recover from any of my amflush'ed tapes
so I can't speak from experience.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: could not connect, problems

2001-04-07 Thread Chris Marble

Randy Gibson wrote:
 
 We have one system on our LAN that seldom gets all partitions backed up.
 
 FAILURE AND STRANGE DUMP SUMMARY:
   kazoo /u1 lev 1 FAILED [could not connect to kazoo]
 
 Where another partition on the same machine does make it.  I've seen this
 on other nodes on the LAN once in a while, but this one seldom gets all
 partitions completed.  This is a server that has several connections all
 day long with no connection problems.

My tape server used to have a problem where it would lose network connectivity.
When the IDE spool disk and the SCSI tape were busy then the ethernet card
would go south.  Buying a Promise 100 card and moving the IDE drive to that
solved the problem for me.  Could you be having something similar where the
client box has connectivity problems under high disk load?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: ((((( hard or soft )))))

2001-03-02 Thread Chris Marble

radar wrote:
 
 I have an HP C1537A DDS3 125m 4mm tape drive
 that can hold 12 gig uncompressed and claims it can
 compress up to 24; I think you all know it.
 My thought was: hey, let the hardware do the compression
 so your system is not under such a heavy load.

I was going that way for a while - tape drive doing the compression.
But I have one computer with 30Gb of data that's mostly already been
gzipped.  To avoid expanding this data I chose to go with software
compression.  I don't compress this one disk's worth of data.  I
spread the compression among my clients and the tape server.
Going with software compression means I don't have to guess how
compressible my data is.  Amanda will learn the compression ratio
for the different disk partitions and start doing an excellent job of
making use of the tape.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Questions and Answers and Thanks

2001-02-21 Thread Chris Marble

Gerhard den Hollander wrote:
 
 Since it's a linux box , you may want to look into using reiserFS.
 I found that file access with Reiser is faster than ext2 (although nothing
 shocking) esp. when going through a lot of smaller files in one swoop.

I formatted up the holding disk with a large blocksize and small inode
count.  It's never going to have over 100 files on it so I doubt
ReiserFS is worth the hassle.  I do all my backups with dump, ufsdump
or xfsdump now.  I'd just as soon avoid adding in gnu tar too.
Though since I've got one Digital Unix box where the vfsdump doesn't
work...
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: tweaking the schedule

2001-02-19 Thread Chris Marble

Ben Elliston wrote:
 
 I have a couple of machines that are slow and poorly connected to my
 tape server.  In every instance, I happen to *know* that if these
 machines were to appear first in the backup schedule, the backup would
 finish more quickly.

Can you put in a delay time for all the rest of the backups?
Write up a dumptype and set a value for starttime.  Use that dumptype
for all the backups but the ones you want to have start first.
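
Concretely, something like this in amanda.conf (a sketch; the dumptype name
is invented):

```
# amanda.conf -- hold most dumps until 02:00 so the slow, poorly
# connected hosts (which keep a dumptype without starttime) go first
define dumptype comp-user-delayed {
    global
    starttime 200    # HHMM: do not start these dumps before 02:00
}
```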
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: Can't Find Clients

2001-02-01 Thread Chris Marble

Wilkerson, Scott wrote:
 
 greener pastures a few months ago.  Since then, each time our remaining
 system manager has upgraded a sun system to Solaris 8 it has begun failing
 out of the backup set.  I have made sure that I can rsh from our backup
 
 amcheck gives this error:
  WARNING: gsbmkt.uchicago.edu: selfcheck request timed out.  Host down?

I expect that the amanda entries are no longer in /etc/inetd.conf after
the update.  You will need to add them back and may need to add appropriate
entries to /etc/services too.
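
For a stock Amanda 2.4 inetd setup those entries usually look like the
following; the port numbers and daemon names are standard, but the libexec
path is a guess, so match it to your install:

```
# /etc/services -- standard Amanda ports
amanda          10080/udp       # amandad (backups)
amandaidx       10082/tcp       # index server (amrecover)
amidxtape       10083/tcp       # tape server (amrecover)

# /etc/inetd.conf -- then send inetd a HUP to reread it
amanda    dgram  udp wait   amanda /usr/local/libexec/amandad    amandad
amandaidx stream tcp nowait amanda /usr/local/libexec/amindexd   amindexd
amidxtape stream tcp nowait amanda /usr/local/libexec/amidxtaped amidxtaped
```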
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Amanda and Exabyte (Sorry)

2001-01-20 Thread Chris Marble

Tanniel Simonian wrote:
 
 Currently, I am running Amanda 2.4.2, with the Exabyte Mammoth 2 EZ17 
 loader, on a Dell Poweredge 2300, etc..
 My exact configuration is as follows.
 
 In my amanda.conf file i have:
 snip
 tpchanger "/usr/amanda/libexec/chg-zd-mtx"
 tapedev "/dev/nst0"
 changerdev "/dev/nst0"
 
 tapetype M2-AME225 (thanks to an earlier post from someone that went 
 through the boredom of tapetype)

Have you tried just using the changer as a single tape drive?
Comment out the tpchanger and changerdev lines and then see if you
can get Amanda working.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: Never-ending dump

2001-01-19 Thread Chris Marble

John R. Jackson wrote:
 
 I've been running an `amdump' for about 16 hours now -- the following dump
 (a level 0, as it happens) keeps happening over and over again.  When I
 reach 100%, it starts again!
 
 Is this 2.4.2?  If so, there's a known bug that we think is fixed in the
 latest CVS sources.  If you can't or don't want to get them, there will
 be a 2.4.2p1 shortly, or ask me offline and I'll send you the patch.

Will I only have to update my 2.4.2 installation on my Amanda server
or will the clients need the new version too?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: testing mail to:

2000-10-31 Thread Chris Marble

Denise Ives wrote:
 
 the mailto: entry in the amanda.conf file -
 
 Is there a command besides amdump to test this?
 
 org "daily" # your organization name for reports
 mailto "[EMAIL PROTECTED]"  #space separated list of operators at your site 

amcheck will send e-mail on a failure if you pass it the "-m" switch.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



Re: wrong tape problem

2000-10-24 Thread Chris Marble

Henning Sprang wrote:
 
 I have a problem with amanda's tapes.
 5 days ago i got the last mail with a successful backup message:
 
 --- snip ---
 These dumps were to tape net-daily4.
 Tonight's dumps should go onto 1 tape: a new tape.
 --- snap ---
 
 the day after that, the tape was changed to the one numbered one higher,
 but the message was:
 
 --- snip ---
 *** A TAPE ERROR OCCURRED: [cannot overwrite active tape net-daily5].
 *** PERFORMED ALL DUMPS TO HOLDING DISK. 

My guess would be that a couple of tapes in the net-daily sequence got
skipped.  Since Amanda hasn't written to a tapecycle worth of them yet
it doesn't want to overwrite a newer tape with data.
Check the contents of the tapelist file and you should see what tapes
it's used and what got skipped.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager