amverify doesn't run

2000-11-29 Thread Randolph Cordell

Hello again!

I had a successful backup run last night, as shown in the log.20001128.0 file.
8-)  When I run amverify, it returns:

   Using device /dev/nst0
   Waiting for device to go ready...

which never happens.  It sits there like that until I Ctrl-C it.  What might
be causing this?

Is there an easier way to install the client piece than the whole
configure/make/make install process?

Randy





Re: configure error: cannot find output from lex; giving up

2000-11-29 Thread Alexandre Oliva

On Nov 29, 2000, Tom Hofmann [EMAIL PROTECTED] wrote:

 Can someone help me with that?  Which package do I need, and where can I get
 it from?

Try flex.  It's available at ftp.gnu.org.

Or try Amanda 2.4.2, which, unless I'm mistaken, is not supposed to
require lex at all.

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicamp   oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist   *Please* write to mailing lists, not to me



Re: Completely Stuck :-(

2000-11-29 Thread Alexandre Oliva

On Nov 29, 2000, John Cartwright [EMAIL PROTECTED] wrote:

 See, it's not (LISTEN), which means inetd has disabled the service

 I hate to disagree with you, but in my experience Solaris always
 reports UDP ports as idle.

I stand corrected.  Thank you and JJ for pointing out my mistake.

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicamp   oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist   *Please* write to mailing lists, not to me



Backing up an oracle database, part II

2000-11-29 Thread Fredrik Persson P (QRA)

Hi all!

Thanks to those of you who helped me get a backup of an oracle database
working.

Now, the solution to copy the db files to a temp storage and then have
amanda back up from there was good - it works! There is a problem however,
and that is that the copy command consumes a LOT of system resources (mainly
disk of course) and my server almost looks dead from time to time. I've tried
nice -n 15 to make the cp command a bit nicer, but it doesn't seem to
work...

Any suggestions? This is a RedHat Linux 6.2 system.
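One thing worth noting: nice(1) only adjusts CPU scheduling priority, and the bottleneck here is disk I/O, so it can't help much. A possible workaround is to pace the copy itself. The helper below is a sketch, not from the thread; the function name, chunk size, and pause interval are all invented for illustration:

```shell
# Copy a file in 1 MB chunks, pausing between chunks so the disk is
# never saturated for long.  throttled_cp, the chunk size, and the
# sleep interval are illustrative values, not Amanda tooling.
throttled_cp() {
    src=$1; dst=$2; bs=1048576
    size=$(wc -c < "$src")
    blocks=$(( (size + bs - 1) / bs ))
    i=0
    : > "$dst"                  # start with an empty destination
    while [ "$i" -lt "$blocks" ]; do
        dd if="$src" of="$dst" bs="$bs" skip="$i" seek="$i" count=1 \
           conv=notrunc 2>/dev/null
        i=$((i + 1))
        sleep 0.2               # give other disk I/O a turn
    done
}
```

Lengthening the sleep trades a slower copy for a more responsive server.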

Thanks!

/Fredrik Persson



unsubscribe

2000-11-29 Thread Claudia Tran

unsubscribe



RE: missing result for...

2000-11-29 Thread Joe Prochazka

I don't receive any errors when running amcheck -c config

Amanda Backup Client Hosts Check

Client check: 1 host checked in 0.303 seconds, 0 problems found

Here are the other debug files:


========== sendsize.debug ==========

sendsize: debug 1 pid 24669 ruid 11 euid 11 start time Wed Nov 29 00:45:01 2000
/usr//libexec/sendsize: version 2.4.2


========== selfcheck.debug ==========

selfcheck: debug 1 pid 24046 ruid 11 euid 11 start time Tue Nov 28 16:00:01 2000
/usr//libexec/selfcheck: version 2.4.2
checking disk /var/log: device /var/log: OK
checking disk /etc: device /etc: OK
checking disk /home: device /home: OK
selfcheck: pid 24046 finish time Tue Nov 28 16:00:01 2000


========== Amcheck.debug ==========

amcheck: debug 1 pid 24042 ruid 11 euid 0 start time Tue Nov 28 16:00:00 2000
amcheck: pid 24042 finish time Tue Nov 28 16:00:18 2000


========== Amandad.debug ==========

amandad: debug 1 pid 24667 ruid 11 euid 11 start time Wed Nov 29 00:45:00 2000
amandad: version 2.4.2
amandad: build: VERSION="Amanda-2.4.2"
amandad:BUILT_DATE="Mon Nov 27 11:31:27 EST 2000"
amandad:BUILT_MACH="Linux foo.bar.com 2.2.16-22 #1 Tue Aug 22 16:16:55 EDT 2000 i586 unknown"
amandad:CC="gcc"
amandad: paths: bindir="/usr//bin" sbindir="/usr//sbin"
amandad:libexecdir="/usr//libexec" mandir="/usr//man"
amandad:AMANDA_TMPDIR="/tmp/amanda" AMANDA_DBGDIR="/tmp/amanda"
amandad:CONFIG_DIR="/usr//etc/amanda" DEV_PREFIX="/dev/"
amandad:RDEV_PREFIX="/dev/" DUMP="/sbin/dump"
amandad:RESTORE="/sbin/restore" SAMBA_CLIENT="/usr/bin/smbclient"
amandad:GNUTAR="/bin/gtar" COMPRESS_PATH="/usr/bin/gzip"
amandad:UNCOMPRESS_PATH="/usr/bin/gzip" MAILER="/usr/bin/Mail"
amandad:listed_incr_dir="/usr//var/amanda/gnutar-lists"
amandad: defs:  DEFAULT_SERVER="foo.bar.com"
amandad:DEFAULT_CONFIG="DailySet1"
amandad:DEFAULT_TAPE_SERVER="foo.bar.com"
amandad:DEFAULT_TAPE_DEVICE="/dev/null" HAVE_MMAP HAVE_SYSVSHM
amandad:LOCKING=FLOCK SETPGRP_VOID DEBUG_CODE BSD_SECURITY
amandad:USE_AMANDAHOSTS CLIENT_LOGIN="operator" FORCE_USERID
amandad:HAVE_GZIP COMPRESS_SUFFIX=".gz" COMPRESS_FAST_OPT="--fast"
amandad:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc"
got packet:

Amanda 2.4 REQ HANDLE 000-78790708 SEQ 975476700
SECURITY USER operator
SERVICE sendsize
OPTIONS maxdumps=1;hostname=localhost;
GNUTAR /var/log 0 1970:1:1:0:0:0 -1 exclude-list=/usr/local/lib/amanda/exclude.gtar
GNUTAR /home 0 1970:1:1:0:0:0 -1 exclude-list=/usr/local/lib/amanda/exclude.gtar


sending ack:

Amanda 2.4 ACK HANDLE 000-78790708 SEQ 975476700


bsd security: remote host localhost.localdomain user operator local user operator
amandahosts security check passed
amandad: running service "/usr//libexec/sendsize"
sendsize: reading /etc/amandates: Is a directory
amandad: sending REP packet:

Amanda 2.4 REP HANDLE 000-78790708 SEQ 975476700



====================================

Thanks again for your time.




Re: amrecover

2000-11-29 Thread David Lloyd


Brian and Alexandre!

  but once the process is finished, I get back to the amrecover
  prompt, and I cannot find the stuff I wanted to be restored.
 
 Note that stuff will be restored into a directory tree that mirrors
 the tree of the backed-up filesystem.  So, if you restore bar/baz that
 was originally in /foo, where /foo is the root of a filesystem (or a
 subdirectory of / listed in the disklist), amrecover will get you
 `bar/baz', not just `baz'.

The question is, does ANYTHING at all appear?  Perhaps you could use
another virtual terminal to do an `ls' after the recover returns a
prompt.  You could also use amrestore /dev/st0 hostname diskname and see
what that turns up.  Naturally you'd substitute the values I put in...

DL
-- 
Don't you find it rather touching to behold
The OS that came in from the cold
Seen for what it is: religion, plus finesse
Countries, creeds, mean nothing - only Linux...



Re: Client install

2000-11-29 Thread Eric Wadsworth

I agree, this topic is largely uncovered in the documentation. I spent
several days trying to figure out how to set things up, until I realized
that amanda had to be installed in full on the client machines as well. I
had incorrectly assumed that amanda used some kind of UNIX networking to
suck the data from the client computers, but I was confused as to why I
never had to specify any authentication to be able to access those
computers.

As for massive overkill: it installs less than a meg of binaries on
each client.  Not too bad.  Compare that with Windows bloatware, and it's
microscopic. :)

 Eric Wadsworth

On Wed, 29 Nov 2000, Harri Haataja wrote:

 On Tue, 28 Nov 2000, Randolph Cordell wrote:
 
  How is installing for the clients different than for the server?  That is not
  evident in anything I've read (README, INSTALL and the entire chapter online
  at www.amanda.org).  Do I need to do the whole ./configure, make, make
  install process for each client?  It seems that's massive overkill.
 
 You can configure them --without-server. Otherwise, it's pretty much the
 same.
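Spelled out, a client-only build might look like the sketch below; the user and group values are examples (not from the thread), and option names vary by version, so check ./configure --help first.

```
./configure --without-server \
            --with-user=amanda \
            --with-group=disk
make
make install        # as root, on each client
```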
 
 
 




Re: [Fwd: Problems with amanda and Red-Hat 7.0]

2000-11-29 Thread Casile Antonino

Hi,
your suggestion partially worked.  I added a line like:
ALL : localhost 127.0.0.1 128.197.61.90

in hosts.allow.  (The fact is that, with the release of xinetd, you can
filter packets based on the requested service either with the hosts.allow
file or with the xinetd configuration files in /etc/xinetd.d, which I
think is quite confusing!)
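For comparison with the hosts.allow route, a typical xinetd entry for amanda would look something like the sketch below (paths and the user name are assumptions; adjust them to your install). Note wait = yes: a UDP datagram service misconfigured so that xinetd keeps finding data on the socket and respawning the server is the classic way to trip xinetd's "looping" protection, which deactivates the service.

```
service amanda
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = amanda
        server          = /usr/local/libexec/amandad
        disable         = no
}
```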

Here is the output I get running "amcheck -c DailySet1":

Amanda Backup Client Hosts Check

protocol packet receive: Connection refused
protocol packet receive: Connection refused
WARNING: raffaello: selfcheck request timed out.  Host down?
Client check: 1 host checked in 29.997 seconds, 1 problem found.
 
(brought to you by Amanda 2.4.1p1)

Going through the /var/log/messages file, I found the following records:

Nov 29 11:41:56 raffaello xinetd[16241]: xinetd Version 2.1.8.9pre11 started with
Nov 29 11:41:56 raffaello xinetd[16241]: libwrap
Nov 29 11:41:56 raffaello xinetd[16241]: options compiled in.
Nov 29 11:41:56 raffaello xinetd[16241]: Started working: 8 available services
Nov 29 11:41:59 raffaello xinetd: xinetd startup succeeded
Nov 29 11:42:04 raffaello xinetd[16241]: amanda service was deactivated because of looping
Nov 29 11:42:04 raffaello xinetd[16241]: recv: Bad file descriptor (errno = 9)

Does anybody know what a "looping" is in this context?
Thank you
Bye, Antonino Casile



DUMPed /home trumps TARred /home/tim in index (fwd)

2000-11-29 Thread Tape Backup



I'm still tinkering with AMANDA.
I've pretty much committed to GNUTAR, because I'm on an HP
with logical volumes and can't get dump to work on the clients.

At one point, I tried dump on /home on the server and it worked.
Since then, I've been using TAR on /home/tim.  

Today I deleted a file and tried to restore it, but
according to amrecover it had not been backed up:
amrecover insisted that setdisk be /home, which hadn't been
backed up lately.  The file ~had~ been backed up in the normal tar run.


Do I need to wipe out my past flirtations with DUMP
from the index before I can get TAR to work? 
Is fidelity critical?  
Can I get that DUMP expunged from my record?

BTW, what was the final verdict on the recent
"--listed-incremental" vs. "--incremental" flurry?


amflush question

2000-11-29 Thread Sandra Panesso

Hi everyone:

I have a question about amflush.  If I have a lot of data in my holding disk
and one tape is not enough, how can I use amflush with more than one tape?

Please, any help would be appreciated.

Sandra




Re: amflush question

2000-11-29 Thread John R. Jackson

I have a question about amflush.  If I have a lot of data in my holding disk
and one tape is not enough, how can I use amflush with more than one tape?

I assume you have multiple dump images in the holding disk and it's the
total size of them that's large, rather than a single huge image that
won't fit on a tape, right?

Run amflush once and let it process what it can.  Then run it again.
And again.  Eventually it will get the disk cleaned out, one tape worth
at a time.
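That repeat-until-empty procedure can be sketched as a small loop. The config name and holding-disk path below are placeholders, and amflush's -f flag (run in the foreground) is assumed to be available in your version:

```shell
# Flush the holding disk one tape's worth at a time until no dump
# images remain.  Arguments are your config name and holding directory.
flush_until_empty() {
    config=$1; holding=$2
    while [ -n "$(find "$holding" -type f 2>/dev/null | head -n 1)" ]; do
        amflush -f "$config" || return 1    # stop if a flush fails
        # (load the next tape here before the loop comes around)
    done
}
```

For example, flush_until_empty DailySet1 /var/amanda/holding, changing tapes between passes.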

Sandra

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: tapetype data for OnStream ADR50

2000-11-29 Thread John R. Jackson

... So here are the data:

Please go to www.sourceforge.net and post your results to the Amanda
FAQ so others can find them in the future.

Olaf

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



unsubscribe

2000-11-29 Thread Susan Fleming

unsubscribe




Re: missing result for...

2000-11-29 Thread John R. Jackson

sendsize: reading /etc/amandates: Is a directory

I swear I'm going to get rid of that damned thing :-).

/etc/amandates is supposed to be a file, not a directory.  Do this:

  # rm -fr /etc/amandates
  # touch /etc/amandates
  # chown amanda-user /etc/amandates

Just out of curiosity, did you create (mkdir) /etc/amandates?  If so, why?

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



unsubscribe

2000-11-29 Thread Bagni, Marco

unsubscribe
 


The information in this email is confidential and may be legally privileged.
It is intended solely for the addressee.  Access to this email by anyone
else is unauthorized. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful.
If you have received this mail in error, please contact the sender or delete
the message.




Re: Filesystem offline in cluster ENV

2000-11-29 Thread John R. Jackson

Here is the info from sendsize.debug (client):

sendsize: getting size via dump for /u14 level 0
sendsize: running "/opt/amanda/libexec/rundump (/usr/sbin/ufsdump) 0Ssf 1048576 - /u14"
running /opt/amanda/libexec/killpgrp
  DUMP: `/u14' is not on a locally mounted filesystem
  DUMP: The ENTIRE dump is aborted.

It's been a while since I've used 2.4.1p1, but I think the combination of
"/u14" being passed to rundump and the error from dump itself both point
to a system configuration problem, e.g. /etc/vfstab.  Amanda takes the
file system name and converts it to a /dev entry via the standard system
calls (e.g. getmntent).  It doesn't appear to have been able to do that.

Take a look at the lines for some of your other file systems that do
work and see if this isn't true.

Assuming it is, you'll need to get that fixed before Amanda (actually,
the dump program) can be called correctly.

Or, if you cannot do this (because of whatever clustering means), you
might change your disklist entry to use the /dev name instead of /u14.
But that may still not let Amanda call the right program (vxdump).

Ruksana Siddiqui

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



odd 'selfcheck request timed out' problem

2000-11-29 Thread Frank Smith

After installing and using Amanda without any problems that a little
RTFM'ing couldn't solve, I'm now stuck.  My server is a recently upgraded
Solaris box running 2.4.2.  All my working clients are also Solaris,
but running the 2.4.1p1 version.
   The problem child is a new RH Linux 6.2 box on which I just compiled
and installed the same 2.4.2 source.  It appears to be making the
connection but is not completing for some reason.  The selfcheck.debug
and amandad.debug from the client are included below.  There is a
firewall in between, but it is currently set to allow all UDP traffic
between the two hosts, and it is not logging any rejected packets, so
I don't think it is the problem.
   Any clues would be appreciated.

Thanks,
Frank

selfcheck.debug:
/usr/local/libexec/selfcheck: version 2.4.2
checking disk /etc: device /etc: OK
selfcheck: pid 26303 finish time Wed Nov 29 11:59:35 2000

amandad.debug:
amandad: debug 1 pid 26302 ruid 999 euid 999 start time Wed Nov 29 11:59:35 2000
amandad: version 2.4.2
amandad: build: VERSION="Amanda-2.4.2"
amandad:BUILT_DATE="Wed Nov 29 10:46:57 CST 2000"
amandad:BUILT_MACH="Linux p4.hoovers.com 2.2.14-5.0 #1 Tue Mar 7 21:07:39 EST 2000 i686 unknown"
amandad:CC="gcc"
amandad: paths: bindir="/usr/local/bin" sbindir="/usr/local/sbin"
amandad:libexecdir="/usr/local/libexec" mandir="/usr/local/man"
amandad:AMANDA_TMPDIR="/tmp/amanda" AMANDA_DBGDIR="/tmp/amanda"
amandad:CONFIG_DIR="/usr/local/etc/amanda" DEV_PREFIX="/dev/"
amandad:RDEV_PREFIX="/dev/" GNUTAR="/bin/gtar"
amandad:COMPRESS_PATH="/bin/gzip" UNCOMPRESS_PATH="/bin/gzip"
amandad:MAILER="/usr/bin/Mail"
amandad:listed_incr_dir="/usr/local/var/amanda/gnutar-lists"
amandad: defs:  DEFAULT_SERVER="p4.hoovers.com" DEFAULT_CONFIG="normal"
amandad:DEFAULT_TAPE_SERVER="clone"
amandad:DEFAULT_TAPE_DEVICE="/dev/rmt/0bn" HAVE_MMAP HAVE_SYSVSHM
amandad:LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE BSD_SECURITY
amandad:USE_AMANDAHOSTS CLIENT_LOGIN="backup" FORCE_USERID HAVE_GZIP
amandad:COMPRESS_SUFFIX=".gz" COMPRESS_FAST_OPT="--fast"
amandad:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc"
got packet:

Amanda 2.4 REQ HANDLE 007-0005C6E8 SEQ 975520772
SECURITY USER backup
SERVICE selfcheck
OPTIONS ;
GNUTAR /etc 0 OPTIONS |;bsd-auth;index;exclude-list=/export/home/backup/exclude.gtar;


sending ack:

Amanda 2.4 ACK HANDLE 007-0005C6E8 SEQ 975520772


bsd security: remote host clone.hoovers.com user backup local user backup
amandahosts security check passed
amandad: running service "/usr/local/libexec/selfcheck"
amandad: sending REP packet:

Amanda 2.4 REP HANDLE 007-0005C6E8 SEQ 975520772
OPTIONS ;
OK /etc
OK /usr/local/libexec/runtar executable
OK /bin/gtar executable
OK /etc/amandates read/writable
OK /usr/local/var/amanda/gnutar-lists/. read/writable
OK /dev/null read/writable
OK /tmp/amanda has more than 64 KB available.
OK /tmp/amanda has more than 64 KB available.
OK /etc has more than 64 KB available.


amandad: got packet:

Amanda 2.4 REQ HANDLE 007-0005C6E8 SEQ 975520772
SECURITY USER backup
SERVICE selfcheck
OPTIONS ;
GNUTAR /etc 0 OPTIONS |;bsd-auth;index;exclude-list=/export/home/backup/exclude.gtar;


amandad: It's not an ack
amandad: sending REP packet:

Amanda 2.4 REP HANDLE 007-0005C6E8 SEQ 975520772
OPTIONS ;
OK /etc
OK /usr/local/libexec/runtar executable
OK /bin/gtar executable
OK /etc/amandates read/writable
OK /usr/local/var/amanda/gnutar-lists/. read/writable
OK /dev/null read/writable
OK /tmp/amanda has more than 64 KB available.
OK /tmp/amanda has more than 64 KB available.
OK /etc has more than 64 KB available.


amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, giving up!
amandad: pid 26302 finish time Wed Nov 29 12:00:35 2000

--
Frank Smith  [EMAIL PROTECTED]
Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501



Re: uns*bscr*be

2000-11-29 Thread Paul Tomblin

Quoting Dan Wilder ([EMAIL PROTECTED]):
 On Wed, Nov 29, 2000 at 02:13:32PM -0500, Paul Tomblin wrote:
  Can somebody please explain this recent rash of people posting uns*bscr*be
  messages to the list?  What's the matter, don't people pay *any* attention
 
 The list server can no doubt be set up to divert such traffic.
 Unfortunately that increases the workload of the list maintainer.

As a list maintainer myself, I'm well aware of that.  That's why I
obfuscated the word "s*bscr*be".

 On the lists I maintain, there are unsubscribe instructions in the footer
 of each post.  That doesn't entirely eliminate such requests being

Yes, I do that as well, ever since I switched to using mailman instead of
majordomo.  It seems to work fairly well, even with such perpetually
computer-clueless people as pilots.

-- 
Paul Tomblin [EMAIL PROTECTED], not speaking for anybody
God is real, unless declared as an integer.



Re: Reset history/data for a particular host

2000-11-29 Thread John R. Jackson

One of my hosts, "merlin", is not being backed up consistently:
amanda/planner complains about no estimate or historical data for a
planned level 1 dump, even though a level 0 dump was done just the other day...

My guess is the estimates are taking longer than Amanda is willing
to wait.  Take a look at the first and last lines of sendsize*debug in
/tmp/amanda on the client, calculate the amount of time it took, then
look at the etimeout variable in the amanda(8) man page.
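To put a number on it, the elapsed time can be computed straight from the debug file's first and last lines. A sketch assuming GNU date (for its -d parsing) and the sendsize.debug timestamp format shown elsewhere in this digest:

```shell
# Print the number of seconds between the "start time" stamp on the
# first line of a sendsize debug file and the "finish time" stamp on
# the last line.  Requires GNU date.
estimate_seconds() {
    f=$1
    start=$(sed -n 's/.*start time //p' "$f" | head -n 1)
    finish=$(sed -n 's/.*finish time //p' "$f" | head -n 1)
    echo $(( $(date -d "$finish" +%s) - $(date -d "$start" +%s) ))
}
```

Compare the result against etimeout in amanda.conf (if memory serves, the default is 300 seconds per filesystem).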

Note that getting an incremental estimate almost always takes longer
than getting an estimate for a full dump because of the extra decision
making involved.

Edwin

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: uns*bscr*be

2000-11-29 Thread Mitch Collinsworth

On Wed, 29 Nov 2000, Dan Wilder wrote:

 The list server can no doubt be set up to divert such traffic.
 Unfortunately that increases the workload of the list maintainer.

Not necessarily.  One can simply divert administrative requests
to /dev/null.  Or auto-respond with a canned message.

-Mitch




Re: Backing up an oracle database, part II

2000-11-29 Thread Colin Smith

On Wed, 29 Nov 2000, Fredrik Persson P (QRA) wrote:

 Hi all!
 
 Thanks to those of you who helped me get a backup of an oracle database
 working.
 
 Now, the solution to copy the db files to a temp storage and then have
 amanda back up from there was good - it works! There is a problem however,

Ouch. You'll need a lot of space, which will sit wasted while it isn't
holding the copied data files. I haven't seen part I, but why not simply
put all of the tablespaces into hot backup mode, back up the data files
and archivelogs normally, and then back up the control files last (or,
alternatively, back the control files up to trace)?

Just make sure you have plenty of archivelog space for transactions which
hit the database during the backup.

In addition, you should do a database export: you're more likely to have a
developer drop a table by mistake than to have a disk fail, and it's much
easier to get data back from an export.

Now, if you want some redundancy *and* faster backups, you could put
the database on a RAID 1 mirror. When it comes time to do the backup,
break the mirror and mount the detached half somewhere. Back up the
detached half rather than the live database. Once the backup is done,
unmount and resync the mirror. That makes use of all the wasted temp
space. I don't think the Linux md driver is up to this, but some of the
newer stuff might be.

 and that is that the copy command consumes a LOT of system resources (mainly
 disk of course) and my server almost looks dead from time to time. I've tried
 nice -n 15 to make the cp command a bit nicer, but it doesn't seem to
 work... 
 
 Any suggestions? This is a RedHat Linux 6.2 system.
 
 Thanks!
 
 /Fredrik Persson
 

-- 




Re: Backing up an oracle database, part II

2000-11-29 Thread John R. Jackson

Just make sure you have plenty of archivelog space for transactions which 
hit the database during the backup.

As I understand it, this was a major reason we don't do this and instead
use the "big backup area" approach (which I proposed in "part I").

I'm not an Oracle type, but the guy who put this together here is
very good.  And we've "tested" (well, OK, the thing crashed and we had
to recover :-) the result -- several times.

Now, if you want some redundancy *and* faster backups then you could put
the database on a raid 1 mirror.  When it comes to do the backup, break the
mirror and mount the broken mirror somewhere.  ...

How does that guarantee the disks in the mirror are up to date, i.e.
logically consistent from Oracle's point of view?

This technique comes up once in a while for normal backups, too, and
it's never made any sense to me.  It won't work any better than what dump
(or tar, in some cases) does now.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]