Re: How to fix annoying break in tape sequence?

2003-12-02 Thread Dave Ewart
On Monday, 01.12.2003 at 12:51 -0500, Brian Cuttler wrote:

> > Remember to set a calendar reminder to reactivate it.
>
> We always run amcheck -m from cron - if the tapes are out of sequence
> we notice (also someone reads the amdump output which lists both
> current and next tapes).

Already doing both of the above  :-)

Dave.
-- 
Dave Ewart
[EMAIL PROTECTED]
Computing Manager, Epidemiology Unit, Oxford
Cancer Research UK
PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370



Re: How to fix annoying break in tape sequence?

2003-12-02 Thread Dave Ewart
On Monday, 01.12.2003 at 20:11 +0100, Gerhard den Hollander wrote:

> > We have a four-week, five days per week cycle:
> 
> Good :)
> 
> > These twenty tapes are named OurName-A-Mon, OurName-A-Tue, ...,
> > OurName-A-Fri, OurName-B-Mon, ..., OurName-D-Fri (in other words,
> > the letters A, B, C, D refer to weeks in the four-week cycle):
> 
> Bad idea.
> 
> Why not simply name the tapes YourName001 -> YourName020

Yes, I know you're right.  To be honest, that is something I'm
considering changing.  I have a method sorted out to gradually rename
all the tapes.
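
(One way to do that sort of gradual relabelling -- only a sketch, and
the config name "Daily" plus the new label are placeholders -- is to
retire each tape as it comes up for reuse and give it its new name:

    amrmtape Daily OurName-A-Mon    # make Amanda forget the old label
    amlabel -f Daily OurName001     # write the new label on that same tape

then widen labelstr in amanda.conf to match both patterns until the
changeover is complete.)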

> And then every day you have a cronjob (say at 15:00 and at 16:00) that
> does an amcheck -m YourConfig

We do that anyway, just in case we've forgotten to load the tape.
"Knowing" what the next tape is, is no more/less easy regardless of
whether you use YourName001 -> YourName020, or OurName-A-Mon etc.
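
For anyone setting that up from scratch, the cron entries are trivial;
a sketch, assuming the config is called "Daily" and amcheck lives in
/usr/local/sbin (both placeholders):

    # amanda user's crontab: mail the operators if the wrong tape (or no tape) is loaded
    0 15 * * 1-5  /usr/local/sbin/amcheck -m Daily
    0 16 * * 1-5  /usr/local/sbin/amcheck -m Daily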

> When you come to Xmas, you will either have to go to work to replace
> the tapes, or have to live with a shifted week.

Actually, this one's easy, because we have a clean two-week shutdown at
Christmas.  :-)

Dave.
-- 
Dave Ewart
[EMAIL PROTECTED]
Computing Manager, Epidemiology Unit, Oxford
Cancer Research UK
PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370



Tapetype Overland LoaderExpress with HP Ultrium 1 (LXL1U11)

2003-12-02 Thread Postaremczak Bernd
Hello,

Here is the tapetype definition for the Overland LoaderExpress with an
HP Ultrium 1 drive (11-slot loader).  It's the output from the tapetype
run (after 33 hours :-().


define tapetype OVERLAND-LXL1U11 {
comment "OVERLAND AutoLoader HP Ultrium 1"
length 104251 mbytes
filemark 537 kbytes
speed 1736 kps
}

The loader runs fine with the mtx tools and configuration.

Greetings

Bernd



Re: Tapetype Overland LoaderExpress with HP Ultrium 1 (LXL1U11)

2003-12-02 Thread Paul Bijnens
Postaremczak Bernd wrote:
> Hello,
>
> Here is the tapetype definition for the Overland LoaderExpress with an
> HP Ultrium 1 drive (11-slot loader).  It's the output from the tapetype
> run (after 33 hours :-().

It would have been much faster (about 6-7 hours) if you had used the
amtapetype from 2.4.4 and specified a reasonable estimate, just like
in its manpage.  Something like:  amtapetype -e 100g

> define tapetype OVERLAND-LXL1U11 {
> comment "OVERLAND AutoLoader HP Ultrium 1"
> length 104251 mbytes
> filemark 537 kbytes
> speed 1736 kps
> }

Capacity is correct, but speed is a little slow (due to not giving
a realistic estimate).
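
For reference, the full command line would be something along these
lines (the device path is just an example; see the 2.4.4 amtapetype
manpage for the options):

    amtapetype -e 100g -f /dev/nst0 -t OVERLAND-LXL1U11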
--
Paul @ Home


RE: Tapetype Overland LoaderExpress with HP Ultrium 1 (LXL1U11)

2003-12-02 Thread Postaremczak Bernd
> Postaremczak Bernd wrote:
> > Hello,
> > 
> > here is the Tapetype-Definition for the Overland 
> LoaderExpress with HP
> > Ultrium 1 device 11-Slot-Loader. It's the output from the 
> tapetype-Run
> > (after 33 hours :-().
> 
> It would have been much faster (about 6-7 hours) if you used the
> amtapetype from 2.4.4 and specified a reasonable estimate, just like
> in it's manpage. Something like:  amtapetype -e 100g

I'll try it next time.

> > define tapetype OVERLAND-LXL1U11 {
> > comment "OVERLAND AutoLoader HP Ultrium 1"
> > length 104251 mbytes
> > filemark 537 kbytes
> > speed 1736 kps
> > }
> 
> Capacity is correct, but speed is a little slow (due to not giving
> a realistic estimate).

Maybe it is the SCSI controller.  It only runs at 20 MBytes/sec.

Have you got better definitions for it?

Bernd


SendSize CoreDmp on Aix 4.3.3

2003-12-02 Thread Didierjean Fabrice
Hello,

I am using amanda 2.4.4p1 on an AIX 4.3.3 IBM RS6000. I have compiled it
with both the IBM compiler and gcc 2.95.3. Both gave me an unusable
sendsize command: it ends in a core dump. For some obscure reason the
executable crashes on the call to amroflock; I don't know why (it seems
to be a link error ...).
Well, after a day I found a solution: I recompiled all of amanda with
the --disable-shared option, and after that everything seems to work
normally.
Hope it can help others ...

Regards

Fabrice




Re: SendSize CoreDmp on Aix 4.3.3

2003-12-02 Thread Nicolas Ecarnot
Didierjean Fabrice wrote:
> Hello,
>
> I am using amanda 2.4.4p1 on an AIX 4.3.3 IBM RS6000. I have compiled it
> with both the IBM compiler and gcc 2.95.3. Both gave me an unusable
> sendsize command: it ends in a core dump. For some obscure reason the
> executable crashes on the call to amroflock; I don't know why (it seems
> to be a link error ...).
>
> Well, after a day I found a solution: I recompiled all of amanda with
> the --disable-shared option, and after that everything seems to work
> normally.
>
> Hope it can help others ...

It sure does!
I had been hoping for months to be able to use amanda on my old AIX
server, but I had the same problem as you.
I scrapped my old amanda build, retrieved a fresh copy, compiled it
with the --disable-shared option, and after some tests it works
very nicely.

Thank you very much for that information.

--
Nicolas Ecarnot



RE: Help - recovering without amanda

2003-12-02 Thread Rebecca Pakish Crum
> I always restore to a temp space, just in case.

I should have done this...I never do direct restores...ugh
> 
> > # mt rewind
> > # mt fsf 1
> > # dd if=/dev/rmt/0hn bs=1 skip=1 | /usr/local/bin/tar -xf -
>   ^
> Bad.  It should be bs=32k, as above (and as in the header you 
> got running 
> the first command).  Also, an even safer way is to not do the pipe:
> 
> dd if=/dev/rmt/0hn bs=32k skip=1 of=output.file
> 
> Then you can do 'tar t' on the output file to get a table of 
> contents and *really* make sure it's what you want.

The bs=1 was a typo...I can't cut and paste because this box is on a
test LAN that's not even getting out the door...my bad. But skipping the
pipe is a good suggestion.
> 
> Here's a brief summary.  Amanda stores several files on a 
> tape.  The first 
> is the tape header.  That's what you skip over with 'mt fsf 
> 1'.  The next 
> file is the first dump image.  The next is the second image, 
> etc.  Each 
> dump image has a 32k amanda header, and then the image.
> 
> The dd command with with 'bs=32k skip=1' reads the whole 
> file, skipping 
> over the first 32k -- the amanda header.  It stops when it 
> hits EOF of 
> that dump image.  If you run the exact same dd command again, 
> you'll grab 
> the next backup image.
> 

Thanks...I'll keep plugging away...
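
For the archives, here is the whole no-amanda restore sequence from
this thread pulled together into one sketch (device name and tar path
are the ones used above; the output file and scratch directory are
placeholders):

    mt -f /dev/rmt/0hn rewind
    mt -f /dev/rmt/0hn fsf 1                         # skip the Amanda tape header file
    dd if=/dev/rmt/0hn bs=32k skip=1 of=/tmp/image1  # drop the 32k dump header, save the raw image
    /usr/local/bin/tar -tf /tmp/image1               # check the table of contents first
    cd /some/scratch/dir && /usr/local/bin/tar -xf /tmp/image1

Running the same dd command again (without rewinding) grabs the next
dump image on the tape.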



Testing Tape Drive

2003-12-02 Thread Josiah Ritchie
I'd like to run a new tape drive through its paces to make sure it is working
fully before I start trying to learn AMANDA on it. What are some commands that I
can use to write, read and generally get familiar with it and be sure it is
working as desired? It is a Compaq SDT-9000 drive on Gentoo Linux (which
requires devfs), so it probably is in a non-standard location.

I've been reading around and have taken a lot in, but seem to be having trouble
organizing the information in my head. If someone would be willing to just punch
out some commands to help my brain stitch together what it's picked up I'd
greatly appreciate it. I know mt and tar should be used. 

Thanks, 
JSR/

-- 
System Administrator
Washington Bible College/Capital Bible Seminary
http://www.bible.edu


Re: SendSize CoreDmp on Aix 4.3.3

2003-12-02 Thread Jon LaBadie
On Tue, Dec 02, 2003 at 11:39:39AM +0100, Didierjean Fabrice wrote:
> Hello,
> 
> I am using amanda 2.4.4p1 on a AIX 4.3.3 IBM RS6000. I have compiled it
> with both, the ibm coompiler en gcc 2.95.3. The both gave me an unusable
> sendsize command. This command end in a coredmp. For some obscure raison
> the executable crash on the call to amroflock, dont know why (seem to be
> a link error ..)
> 
> Well after 1 day, i found a solution. I recompiled all amanda with
> --disable-shared option, and after that all seems to work normaly

That sounds like a shared library is seen at compile-time
but not at run-time.  The "link error" you note (without the
--disable-shared option) is probably the run-time "dynamic
linker" not finding the library that the compiler did find.

Shared libraries are preferable due to better use of memory
and disk space.  One copy of the shared library for all programs
that use it as opposed to a separate, private copy, in each
program.  Think about how many C programs use "printf" and what
a waste it would be if each has a separate, private copy.

I scanned some AIX documentation online - meaning I could have,
and most likely did, miss a lot :((  I'll relate some of what
I found to Solaris which I know better.  Maybe someone who knows
AIX better can correct any omissions or errors.

Most compilers and linkers use a "library search path" to locate
the libraries they need.  There is generally a default set of
directories to search and this can be extended in several ways.
One way, generally considered the preferable way, is with compile/
linker options.  On Solaris some relevant options include:

   -l    # a library that will be needed
   -L    # a directory for the compiler to search in
   -R    # a directory for the runtime linker to search

Note, on Solaris the search path is extended separately for the
compiler and the runtime linker.  My scan of the AIX documentation
suggests it only has the "-L" option and that it may serve both purposes.
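
A Solaris-style example (the program and directory names are made up
for illustration; with gcc you would spell the last option -Wl,-R...):

    cc -o myprog myprog.c -L/opt/amanda/lib -R/opt/amanda/lib -lamanda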

A second, less preferred way to extend the library search path
is with an environment variable.  On Solaris the appropriate
variable is "LD_LIBRARY_PATH".  My reading of the AIX docs suggests
the comparable variable there is "LIBPATH".  The setting of this
variable could be a source of your troubles when using shared libs.
It may be set one way during compile, but when amanda runs from
cron it may be different.  That is one reason for preferring to
set the compile and runtime paths in the executable with the -L
(and maybe -R) options.

Solaris also has one command for which I did not find a comparable
one on AIX.  That is the "ldd" command.  It looks at an executable
file and lists the shared libraries it needs and where it finds
them (or doesn't find them).  Here is an example:

  $ ldd firebird-bin
libdl.so.1 =>        /usr/lib/libdl.so.1
libglib-1.2.so.0 =>  /usr/sfw/lib/libglib-1.2.so.0
libc.so.1 =>         /usr/lib/libc.so.1
libnsl.so.1 =>       /usr/lib/libnsl.so.1
libmozjs.so =>       (file not found)
libX11.so.4 =>       /usr/lib/libX11.so.4
libsocket.so.1 =>    /usr/lib/libsocket.so.1
libm.so.1 =>         /usr/lib/libm.so.1
libCstd.so.1 =>      /usr/lib/libCstd.so.1
libxpcom.so =>       (file not found)
libpthread.so.1 =>   /usr/lib/libpthread.so.1
libgtk-1.2.so.0 =>   /usr/sfw/lib/libgtk-1.2.so.0

This output says most of the libs will be found at run-time.
But two libs (libmozjs.so and libxpcom.so) were found properly
during compile, but at run-time they will not be found.
This executable may run ok until something in the "not found"
libraries is needed.

If I've compiled this executable I go back and set the appropriate
options so the run-time linker finds the libraries.  Otherwise,
as above where I did not compile the program, I run it from a
wrapper shell that first sets LD_LIBRARY_PATH (your LIBPATH)
appropriately and then calls the actual executable.
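
As a concrete sketch of such a wrapper (the paths are only examples,
and renaming the real binary to sendsize.real is my illustration, not
anything amanda itself does):

    #!/bin/sh
    # extend the run-time library search path, then run the real program
    LIBPATH=/opt-net/amanda/AIX/lib:$LIBPATH    # LD_LIBRARY_PATH on Solaris
    export LIBPATH
    exec /opt-net/amanda/AIX/libexec/sendsize.real "$@"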

As I noted, I did not find a program comparable to ldd in my
scan of the AIX docs.  There probably is one.  It would be a good
diagnostic tool to put into your toolkit.

jl
-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: Testing Tape Drive

2003-12-02 Thread Jon LaBadie
On Tue, Dec 02, 2003 at 09:34:23AM -0500, Josiah Ritchie wrote:
> I'd like to run an new tape drive through its paces to make sure it is working
> full before I start trying to learn AMANDA on it. What are some commands that i
> can use to write, read and generally get familiar with it and be sure it is
> working as desired? It is a Compaq SDT-9000 drive on Gentoo Linux (which
> requires devfs) and so it probably is in a non-standard area.
> 
> I've been reading around and have taken a lot in, but seem to be having trouble
> organizing the information in my head. If someone would be willing to just punch
> out some commands to help my brain stitch together what it's picked up I'd
> greatly appreciate it. I know mt and tar should be used. 

Unless that is a changer you pretty much have it.  Write a
tar archive to the tape and be sure you can recover it.
Then write multiple archives to the tape using the "no rewind"
version of the tape device.  Use mt to see if you can position
at a specific archive and recover just that one.

You might want to use dd with a blocksize of 32K to be sure
the default amanda block size works ok.  Similarly you might
want to explore the mt commands to set compression on/off,
block size, check status, set defaults, ...

If it is a changer, also explore the mtx command.
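
For the block-size check, something along these lines does the job
(Linux st device names assumed, since the drive is on Gentoo; under
devfs the nodes may live at a different path):

    mt -f /dev/nst0 rewind
    dd if=/dev/zero of=/dev/nst0 bs=32k count=1024   # write 32 MB in 32k blocks
    mt -f /dev/nst0 rewind
    dd if=/dev/nst0 of=/dev/null bs=32k              # read it back at the same block size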

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


su hangs vs 2.6.x kernels, problem found & fixed

2003-12-02 Thread Gene Heskett
Greetings everybody;

As the transition to the 2.6 kernel family is imminent, there is one
minor gotcha that I can tell you how to fix.

Doing an 'su amanda' in order to do the install occasionally hangs the
su process.  If you search through the process table tree, you can see
that su has started another copy of bash, and that bash has a copy of
stty linked to it.  Everything above stty is sleeping, waiting on stty
to init, I guess.  However, stty is shown as stopped, so the shell is
well and truly hung.  Note that this doesn't seem to affect the
'su amanda -c "command"' form of su; only the form that actually
returns you a shell with amanda rights seems to be affected.

It turns out there is a missing define in the ready-made versions of
bash being shipped with most distros.  It's called PGRP_PIPE in the
bash config.

The fedora release of redhat contains a bash rpm that fixes this.
It installed with no hiccups here on this RH8.0 system, and solves 
that particular problem.

The test to see if the define is missing in your installed version of
bash is:
#> strings /bin/bash | grep pgrp_pipe
If this returns nothing, then it's missing and you'll need to find a
bash that has it, or rebuild yours (shudder) after editing the file
config.h to add this line:
-
#define PGRP_PIPE 1
-
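
If you do end up rebuilding, the sequence is the usual one; a rough
sketch (the version number is only an example):

    tar xzf bash-2.05b.tar.gz && cd bash-2.05b
    ./configure
    # edit config.h and add:  #define PGRP_PIPE 1
    make
    strings ./bash | grep pgrp_pipe   # re-run the test above on the new binary
    make install                      # installs under /usr/local by default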

I hope this helps the smooth transition to 2.6.

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.27% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: Testing Tape Drive

2003-12-02 Thread Josiah Ritchie
Jon LaBadie scripted ::

>Unless that is a changer you pretty much have it.  Write some
>write a tar archive to the tape and be sure you can recover it.
>Then write multiple archives to the tape using the "no rewind"
>version of the tape device.  Use mt to see if you can position
>at a specific archive and recover just the one.
>
>You might want to use dd with a blocksize of 32K to be sure
>the default amanda block size works ok.  Similarly you might
>want to explore the mt commands to set compression on/off,
>block size, check status, set defaults, ...
>
>If it is a changer, also explore the mtx command.

It isn't a changer so I can skip the mtx curve. :-) I'll probably need to get
into that later though cause I'll eventually need one.

I can't play around on this server too much for fear of messing things up. Can I
pass some commands by you guys for sanity checking?

mt -f /dev/st0 status # works fine (If I know what I'm reading)
mt -f /dev/st0 retension # works
mt -f /dev/st0 erase # works; I need to remember an & at the end next time,
since backgrounding it over ssh (^z, then bg) won't work for some odd reason

tar -cf /dev/st0 /etc #? records /etc to tape in /dev/st0

JSR/


Re: SendSize CoreDmp on Aix 4.3.3

2003-12-02 Thread Didierjean Fabrice
Jon LaBadie wrote:

> That sounds like a shared library is seen at compile-time
> but not at run-time.  The "link error" you note (without the
> --disable-shared option) is probably the run-time "dynamic
> linker" not finding the library that the compiler did find.

There was no error at run time ... only a core dump. My system does
have the ldd command, and when I use it on the executable it says it
found all the libs.

I launched gdb on the executable, and in fact at line 92 of the file
amandates.c it calls the function amroflock:

rc = amroflock(fileno(amdf), "amandates");

The function amroflock is in the file amflock.c between lines 319
and 314, yet the call (traced with gdb) falls at line 398, which is
the last line of the function amfunlock ...

I have tried with both gcc and the IBM compiler ... the two
give the same error.

Here is the result of an execution :
bash-2.04# pwd
/opt-net/amanda/AIX/libexec
bash-2.04# ldd sendsize
/opt-net/amanda/AIX/lib/libamclient.a(libamclient-2.4.4p1.so)
/usr/lib/libintl.a(shr.o)
/opt-net/amanda/AIX/lib/libamanda.a(libamanda-2.4.4p1.so)
/usr/lib/libc.a(pse.o)
/usr/lib/libtli.a(shr.o)
/usr/lib/libpthreads.a(shr.o)
/usr/lib/libpthreads_compat.a(shr.o)
/usr/lib/libnsl.a(shr.o)
/usr/lib/libpthreads.a(shr_xpg5.o)
/usr/lib/libpthreads.a(shr_comm.o)
sendsize
/usr/lib/libcrypt.a(shr.o)
/usr/lib/libc.a(shr.o)
bash-2.04# ./sendsize
Segmentation fault (core dumped)
bash-2.04# gdb ./sendsize
GNU gdb 5.0
Copyright 2000 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "powerpc-ibm-aix4.3.2.0"...
(gdb) run
Starting program: /opt-net/amanda/AIX/libexec/./sendsize
Program received signal SIGSEGV, Segmentation fault.
0xd0dee728 in amroflock () at 
/projets/adminsys/src/amanda-2.4.4p1/common-src/amflock.c:398
398 }
(gdb) bt
#0  0xd0dee728 in amroflock () at 
/projets/adminsys/src/amanda-2.4.4p1/common-src/amflock.c:398
#1  0xd0dece4c in start_amandates (open_readwrite=0)
   at /projets/adminsys/src/amanda-2.4.4p1/client-src/amandates.c:92
#2  0x10003940 in main (argc=1, argv=0x2ff22ba8)
   at /projets/adminsys/src/amanda-2.4.4p1/client-src/sendsize.c:159
#3  0x11dc in __start ()
(gdb) quit
The program is running.  Exit anyway? (y or n) y
bash-2.04#




[snip]
 

regards

Fabrice




Backup taking more than 2 days to complete, help!

2003-12-02 Thread Fernando Costa de Almeida
Hi,

Since one week ago, my daily backups have been taking more than 2 days
to complete! I don't know why yet, but just one directory on a client
machine (host2:/var/vpopmail) is taking a lot of time, as seen in the
amstatus output below. Here are the amanda processes running right now
on the client machine:

amanda   32319  0.0  0.1  1552  436  ??  SMon07AM   0:12.50
/usr/local/amanda/libexec/sendbackup
amanda   32320  0.0  0.1  1552  352  ??  SMon07AM   4:47.62
/usr/local/amanda/libexec/sendbackup
amanda   32322  0.0  0.0   6280  ??  IW   - 0:00.00 sh -c
/usr/local/bin/tar -tf - 2>/dev/null | sed -e 's/^\\.//'
amanda   32323  0.0  0.1  1068  372  ??  SMon07AM   2:47.97
/usr/local/bin/tar -tf -
amanda   32324  0.0  0.1   972  368  ??  SMon07AM   0:10.05 sed -e
s/^\\.//

I'm not using compression on either the server or the client.
host2:/var/vpopmail contains 2.4 GB of data, with 168577 files.

The strange thing is that it started happening for no apparent reason ...
My guess is that GNU tar is eating all the time...

AMSTATUS OUTPUT:

host1:/etc                  0     1184k finished (0:09:56)
host1:/root                 0       32k finished (0:09:10)
host1:/usr/adm              0    29440k finished (0:10:48)
host1:/usr/home             0       32k finished (0:09:28)
host1:/usr/local            0   169920k finished (0:14:27)
host1:/var/qmail            0     1184k finished (0:09:44)
host2:/etc                  0     1120k finished (0:11:01)
host2:/root                 0    91936k finished (1:06:02)
host2:/usr/home             0       32k finished (0:10:00)
host2:/usr/local            0   701824k finished (8:50:05)
host2:/usr/share            0    94560k finished (2:00:00)
host2:/var/qmail/control    0       32k finished (0:09:25)
host2:/var/qmail/supervise  0       32k finished (0:09:14)
host2:/var/qmail/users      0       32k finished (0:09:40)
host2:/var/vpopmail         0  4473120k finished (2+5:47:06)

SUMMARY          part      real  estimated
                           size       size
partition       :  15
estimated       :  15             5961992k
failed          :   0        0k   (  0.00%)
wait for dumping:   0        0k   (  0.00%)
dumping to tape :   0        0k   (  0.00%)
dumping         :   0        0k        0k (  0.00%) (  0.00%)
dumped          :  15  5564480k  5961992k ( 93.33%) ( 93.33%)
wait for writing:   0        0k        0k (  0.00%) (  0.00%)
writing to tape :   0        0k        0k (  0.00%) (  0.00%)
failed to tape  :   0        0k        0k (  0.00%) (  0.00%)
taped           :  15  5564480k  5961992k ( 93.33%) ( 93.33%)
4 dumpers idle  : not-idle
taper idle
network free kps:      1900
holding space   : 11918382k (100.00%)
 dumper0 busy   : 2+4:36:05  ( 98.08%)
 dumper1 busy   :   0:00:00  (  0.00%)
   taper busy   :   1:16:16  (  2.37%)
 0 dumpers busy :   1:01:52  (  1.92%)            not-idle:   1:00:56  ( 98.50%)
                                                start-wait:   0:00:55  (  1.50%)
 1 dumper busy  : 2+4:36:06  ( 98.08%)            not-idle: 1+20:05:42 ( 83.83%)
                                        client-constrained:   8:27:35  ( 16.08%)
                                              no-bandwidth:   0:02:00  (  0.06%)
                                                start-wait:   0:00:47  (  0.03%)
 2 dumpers busy :   0:00:00  (  0.00%)





Re: Backup taking more than 2 days to complete, help!

2003-12-02 Thread Frank Smith
--On Tuesday, December 02, 2003 15:46:57 -0200 Fernando Costa de Almeida <[EMAIL 
PROTECTED]> wrote:

> Hi,
> 
> since one week ago, my daily backups are taking more than 2 days to
> complete! I dont know why yet, but just one directory in a client
> machine (host2:/var/vpopmail) is taking a lot of time, as seen in the
> output of the amstatus below. Here is the amanda proccess right now
> running in the client machine:
> 
> amanda   32319  0.0  0.1  1552  436  ??  SMon07AM   0:12.50
> /usr/local/amanda/libexec/sendbackup
> amanda   32320  0.0  0.1  1552  352  ??  SMon07AM   4:47.62
> /usr/local/amanda/libexec/sendbackup
> amanda   32322  0.0  0.0   6280  ??  IW   - 0:00.00 sh -c
> /usr/local/bin/tar -tf - 2>/dev/null | sed -e 's/^\\.//'
> amanda   32323  0.0  0.1  1068  372  ??  SMon07AM   2:47.97
> /usr/local/bin/tar -tf -
> amanda   32324  0.0  0.1   972  368  ??  SMon07AM   0:10.05 sed -e
> s/^\\.//
> 
> Im not using compression neither in the server or the client.
> host2:/var/vpopmail contains 2.4 GB of data, with 168577 files.
> 
> The strange is that it begins to happen with no reason 
> My guess is that gnu tar is eating all the time...
> 
> AMSTATUS OUTPUT:
> 
> host1:/etc01184k finished (0:09:56)
> host1:/root   0  32k finished (0:09:10)
> host1:/usr/adm0   29440k finished (0:10:48)
> host1:/usr/home   0  32k finished (0:09:28)
> host1:/usr/local  0  169920k finished (0:14:27)
> host1:/var/qmail  01184k finished (0:09:44)
> host2:/etc  01120k finished (0:11:01)
> host2:/root 0   91936k finished (1:06:02)
> host2:/usr/home 0  32k finished (0:10:00)
> host2:/usr/local0  701824k finished (8:50:05)
> host2:/usr/share0   94560k finished (2:00:00)
> host2:/var/qmail/control 0  32k finished (0:09:25)
> host2:/var/qmail/supervise 0  32k finished (0:09:14)
> host2:/var/qmail/users 0  32k finished (0:09:40)
> host2:/var/vpopmail 0 4473120k finished (2+5:47:06)

If host2 is not your Amanda server, my bet would be a network
problem, probably a duplex mismatch.  Can you ftp a large file
from host2 to your Amanda server and get a reasonable transfer
rate?
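
A quick way to check without setting up ftp (the file size and
hostname are just examples):

    # on host2: create a ~100 MB test file and time the copy to the server
    dd if=/dev/zero of=/tmp/nettest bs=32k count=3200
    time scp /tmp/nettest amanda-server:/tmp/

On a healthy 100 Mbit link that should take a matter of seconds; a
duplex-mismatched link often collapses to tens of KB/sec under
sustained load, which is roughly what your dump rates show.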

Frank

> 
> SUMMARY  part real estimated
>   size  size
> partition   :  15
> estimated   :  155961992k
> failed  :   0  0k   (  0.00%)
> wait for dumping:   0  0k   (  0.00%)
> dumping to tape :   0  0k   (  0.00%)
> dumping :   00k0k (  0.00%) (  0.00%)
> dumped  :  15  5564480k  5961992k ( 93.33%) ( 93.33%)
> wait for writing:   00k0k (  0.00%) (  0.00%)
> writing to tape :   00k0k (  0.00%) (  0.00%)
> failed to tape  :   00k0k (  0.00%) (  0.00%)
> taped   :  15  5564480k  5961992k ( 93.33%) ( 93.33%)
> 4 dumpers idle  : not-idle
> taper idle
> network free kps: 1900
> holding space   : 11918382k (100.00%)
>  dumper0 busy   : 2+4:36:05  ( 98.08%)
>  dumper1 busy   :  0:00:00  (  0.00%)
>taper busy   :  1:16:16  (  2.37%)
>  0 dumpers busy :  1:01:52  (  1.92%)not-idle:  1:00:56  (
> 98.50%)
>start-wait:  0:00:55  ( 
> 1.50%)
>  1 dumper busy  : 2+4:36:06  ( 98.08%)not-idle: 1+20:05:42 
> ( 83.83%)
> client-constrained:  8:27:35  (
> 16.08%)
>   no-bandwidth:  0:02:00  ( 
> 0.06%)
> start-wait:  0:00:47  ( 
> 0.03%)
>  2 dumpers busy :  0:00:00  (  0.00%)
> 
> 



-- 
Frank Smith  [EMAIL PROTECTED]
Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501



Re: Testing Tape Drive

2003-12-02 Thread Frank Smith
--On Tuesday, December 02, 2003 12:35:06 -0500 Josiah Ritchie <[EMAIL PROTECTED]> 
wrote:

> Jon LaBadie scripted ::
> 
>> Unless that is a changer you pretty much have it.  Write some
>> write a tar archive to the tape and be sure you can recover it.
>> Then write multiple archives to the tape using the "no rewind"
>> version of the tape device.  Use mt to see if you can position
>> at a specific archive and recover just the one.
>> 
>> You might want to use dd with a blocksize of 32K to be sure
>> the default amanda block size works ok.  Similarly you might
>> want to explore the mt commands to set compression on/off,
>> block size, check status, set defaults, ...
>> 
>> If it is a changer, also explore the mtx command.
> 
> It isn't a changer so I can skip the mtx curve. :-) I'll probably need to get
> into that later though cause I'll eventually need one.
> 
> I can't play around on this server too much for fear of messing things up. Can I
> pass some commands by you guys for sanity checking?
> 
> mt -f /dev/st0 status # works fine (If I know what I'm reading)
> mt -f /dev/st0 retension # works
> mt -f /dev/st0 erase # works I need to remember a & at the end next time, ssh
> won't background it for some odd reason. (^z bg)
> 
> tar -cf /dev/st0 /etc #? records /etc to tape in /dev/st0

Also make sure you can then
mt -f /dev/st0 rewind
tar -xf /dev/st0  (first make sure you are using GNU tar and are in
a temp directory so you don't clobber your /etc)

As Jon pointed out, you need to make sure you can use the proper
non-rewinding device:

mt -f /dev/nst0 rewind
tar -cf /dev/nst0 /some/dir
tar -cf /dev/nst0 /another/dir
mt -f /dev/nst0 rewind
tar -tf /dev/nst0  (should list /some/dir)
mt -f /dev/nst0 fsf 1  (moves to beginning of next file)
tar -tf /dev/nst0  (should list /another/dir)
mt -f /dev/st0 rewind

If you're planning on using tar for backups:

tar --version

Make sure it is at least 1.13.19, I think 1.13.25 is the current
version. Older versions will appear to back up fine but you will
have problems restoring properly.

> 
> JSR/



-- 
Frank Smith  [EMAIL PROTECTED]
Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501



Re: Backup taking more than 2 days to complete, help!

2003-12-02 Thread Jon LaBadie
On Tue, Dec 02, 2003 at 12:22:26PM -0600, Frank Smith wrote:
> --On Tuesday, December 02, 2003 15:46:57 -0200 Fernando Costa de Almeida <[EMAIL 
> PROTECTED]> wrote:
> 
> > Hi,
> > 
> > since one week ago, my daily backups are taking more than 2 days to
> > complete! I dont know why yet, but just one directory in a client
> > machine (host2:/var/vpopmail) is taking a lot of time, as seen in the
> > output of the amstatus below. Here is the amanda proccess right now
> > running in the client machine:

It is not unique in its backup rate.  It is just large and thus takes
a lot of time.  But you have 4 DLE's on that host with greater than
2 MB.  Each has the same backup rate, about 28 KB/sec.  So it is not
that directory; most likely, as Frank suggests, it is a network problem.

> > 
> > Im not using compression neither in the server or the client.
> > host2:/var/vpopmail contains 2.4 GB of data, with 168577 files.

Maybe now, but it had 4.4GB when it was backed up.

> > 
> > The strange is that it begins to happen with no reason 
> > My guess is that gnu tar is eating all the time...

Doubtful.  Well, maybe if it is a cygwin client on a PC.

> > 
> > AMSTATUS OUTPUT:
...
> > 
> > host2:/root 0   91936k finished (1:06:02)
> > host2:/usr/local0  701824k finished (8:50:05)
> > host2:/usr/share0   94560k finished (2:00:00)
> > host2:/var/vpopmail 0 4473120k finished (2+5:47:06)

These are the DLE's, all with the same dump rate.
Likely network limited.

> 
> If host2 is not your Amanda server, my bet would be a network
> problem, probably a duplex mismatch.  Can you ftp a large file
> from host2 to your Amanda server and get reasonable a transfer
> rate?
> 
> Frank


jl
-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Is it a bad idea to share the /holdingdisk/ area among several concurrent dump sets?

2003-12-02 Thread Mark_Conty
Hi --

[I'm _finally_ getting this posted!  I've been trying for days to submit 
it through Yahoo, but it failed every time.  Thanks for telling me about 
the Yahoo interface being broken and reminding me about the mailing 
list, Jon L!! :-]

I did a search through the FAQ-O-Matic and through the Yahoo egroup for 
'share holding' and didn't find anything specific in response to this 
question, so I think it's safe to ask:

Is it a bad idea to have a central /holdingdisk/ area in use by
multiple concurrent dump sets?

But before you answer that, maybe I should ask this:

Is it a bad idea to run multiple concurrent dump sets in the first 
place?

(While searching the egroup archive for an answer to this, I came across 
comments from some of the Amanda veterans that led me to believe that I 
should be using a _single_ backup set, instead of running multiple 
concurrent ones.  Am I reading that correctly?  If so, then the original 
question is moot...)
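
(For reference, pointing each config at its own subdirectory of the
shared area is at least easy to express in each amanda.conf; a sketch,
with placeholder names and sizes, using the 2.4.x holdingdisk syntax:

holdingdisk hd1 {
    comment "shared spindle, per-config subdirectory"
    directory "/holdingdisk/daily4"
    use -2000 Mb        # use everything except the last ~2 GB
    chunksize 1 Gb
}

Whether sharing the spindle that way is sane at all is the question
above.)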

I've attached one of my configuration files (amanda4.conf.txt) with all 
its include files, the chg-scsi.conf(.txt) file, and its disklist 
(disklist4.txt).  (I had to rename all the files to end with ".txt" so 
that they show up as text files.  *sigh*)  This particular dump set 
configuration is different from the other three I run concurrently.  For
each of those, a full dump is about 20gb, but this one is for some much
larger filesystems.  I've done a one-time full backup, and now I'm
trying to set it up to do only incrementals, but even doing that, I
think it's going to take 2-3 35gb tapes to hold each night's incremental
dumps.

But it isn't working anyway.  It gets to a certain point and then
_everything_ stops.  I don't see any of the filehandle offsets
incrementing (checked via 'lsof') -- not taper's tape device filehandle
nor any of the sendbackup->dumper pipes.

(If I do need to consolidate the four dump sets into one, I wonder what 
it'll take to make this one play nicely with the other three...)


Separate question:  I started to look at 2.4.4p1, but I ran into an 
issue right off the bat.  With 2.4.4, 'configure' reported "fcntl 
locking works... yes", but with 2.4.4p1, I get this:

checking whether posix fcntl locking works... no
checking whether flock locking works... no
checking whether lockf locking works... no
checking whether lnlock locking works... no
configure: WARNING: *** No working file locking capability found!
configure: WARNING: *** Be VERY VERY careful.

(I get the same results whether I build with ANSI C or with GCC.)

I don't see any mention in the NEWS or ChangeLog files of any changes in 
this area.  I do have some degree of parallelism in my configuration, 
but I can't tell if there've been any situations up to now where file 
locking has been needed and used, so I'm reluctant to go any further 
with 2.4.4p1.  Has anyone else run into this?

Thanks!
-- 
Mark Conty
Cargill, Inc.
# Configuration file for chg-scsi tape changer program.
#
number_configs  4

#emubarcode 0
autoinv 1
havebarcode 1

debuglevel  9:0
eject   0
sleep   1
changerdev  /dev/picker
labelfile   /opt/amanda/etc/labelfile
usagecount  /opt/amanda/etc/totaltime

# Drive-specific configurations:
#
config          0
drivenum        0
dev             /dev/rmt/c4t1d0NOCOMPn  # aka /dev/rmt/1mn
startuse        0   # chg-scsi uses 0-based numbering
enduse          9
statfile        /opt/amanda/etc/daily1/tape-slot
tapestatus      /opt/amanda/etc/daily1/tape-status
cleanfile       /opt/amanda/etc/daily1/tape-clean

config          1
drivenum        1
dev             /dev/rmt/c4t2d0NOCOMPn  # aka /dev/rmt/2mn
startuse        10
enduse          19
statfile        /opt/amanda/etc/daily2/tape-slot
tapestatus      /opt/amanda/etc/daily2/tape-status
cleanfile       /opt/amanda/etc/daily2/tape-clean

config          2
drivenum        2
dev             /dev/rmt/c5t3d0NOCOMPn  # aka /dev/rmt/3mn
startuse        20
enduse          29
statfile        /opt/amanda/etc/daily3/tape-slot
tapestatus      /opt/amanda/etc/daily3/tape-status
cleanfile       /opt/amanda/etc/daily3/tape-clean

config          3
drivenum        3
dev             /dev/rmt/c5t4d0NOCOMPn  # aka /dev/rmt/4mn
startuse        30
enduse          47
statfile        /opt/amanda/etc/daily4/tape-slot
tapestatus      /opt/amanda/etc/daily4/tape-status
cleanfile       /opt/amanda/etc/daily4/tape-clean
# dumptypes
#
# These are referred to by the disklist file.  The dumptype specifies
# certain parameters for dumping including:
#   auth- authentication scheme to use between server and client.
# Valid values are "bsd" and "krb4".  Default: [auth bsd]
#   comment - just a comment string
#   comprate- set default compression rate.  Should be followed by one or
# two numbers, optionally separated by a comma.  The 1st is
#