Re: does not support multiple exclude

2004-04-20 Thread Jon LaBadie
On Wed, Apr 21, 2004 at 12:41:37AM -, [EMAIL PROTECTED] wrote:
> I have just switched to tar (been using dump for years) but I am now
> getting an error message I do not understand:
> 
> $ amcheck -c Daily
> 
> Amanda Backup Client Hosts Check
> 
> WARNING: wookie.internal.americom.com:/ does not support multiple exclude
> Client check: 9 hosts checked in 0.428 seconds, 0 problems found
> 
> (brought to you by Amanda 2.4.4p1)
> 
> I thought I must have had an older tar on wookie, so I updated to the newest
> version, but it made no difference.
> 
> Can anyone explain what the likely problem is and how to fix it?
> 
> My exclude list in amanda.conf is:
> 
> exclude "./tmp" "./var/qmail/queue" "./var/log" "*/logs/*"
> 

The error message appears twice in the source, and both occurrences seem
related to querying the client about whether it supports multiple excludes.

Your server seems fairly current.
What about an older amanda on "wookie"?
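
A couple of things that might help narrow it down (untested sketches; the
paths and file names below are only examples).  The client daemons usually
record their version near the top of each debug file, so on wookie, as the
amanda user:

  ls -lt /tmp/amanda/
  grep -i version /tmp/amanda/amandad*debug /tmp/amanda/sendsize*debug

If the client does turn out to be older, one possible workaround is to put
the patterns in a single exclude file on the client and point at it with
"exclude list" instead of listing several patterns inline, e.g. in the
dumptype:

  exclude list "/usr/local/etc/amanda/exclude.gtar"

where /usr/local/etc/amanda/exclude.gtar (an example path) holds one pattern
per line:

  ./tmp
  ./var/qmail/queue
  ./var/log
  */logs/*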

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road    (609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


does not support multiple exclude

2004-04-20 Thread rwk
I have just switched to tar (been using dump for years) but I am now
getting an error message I do not understand:

$ amcheck -c Daily

Amanda Backup Client Hosts Check

WARNING: wookie.internal.americom.com:/ does not support multiple exclude
Client check: 9 hosts checked in 0.428 seconds, 0 problems found

(brought to you by Amanda 2.4.4p1)

I thought I must have had an older tar on wookie, so I updated to the newest
version, but it made no difference.

Can anyone explain what the likely problem is and how to fix it?

My exclude list in amanda.conf is:

exclude "./tmp" "./var/qmail/queue" "./var/log" "*/logs/*"

Thanks,
Dick


Re: access not allowed

2004-04-20 Thread Jon LaBadie
On Tue, Apr 20, 2004 at 06:58:12PM +0200, Pablo Quinta Vidal wrote:
> Hi all!!
> I get these errors when running amcheck:
> 
> ERROR: irixoa06.des.udc.es: [access as inspqv00 not allowed from 
> [EMAIL PROTECTED] open of /home/alumnos/inspqv00/.amandahosts failed
> 
> The problem is that I first installed AMANDA with --with-user=inspqv00, then
> I uninstalled it and reinstalled with --with-user=amanda, but it seems
> something did not change.
> In inetd.conf the user is amanda; I don't know why AMANDA uses the old user.

Some things are "compiled in".  You will have to do a make distclean, then
configure again using --with-user=amanda, then remake and reinstall.
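
Roughly like this (a sketch; add back whatever other ./configure options you
used the first time, and adjust the group to your setup):

  cd amanda-2.4.4p1            # your unpacked source tree
  make distclean               # wipe the old compiled-in user
  ./configure --with-user=amanda --with-group=backup   # plus your other options
  make
  make install                 # as root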

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road    (609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


access not allowed

2004-04-20 Thread Pablo Quinta Vidal
Hi all!!
I get these errors when running amcheck:
ERROR: irixoa06.des.udc.es: [access as inspqv00 not allowed from 
[EMAIL PROTECTED] open of /home/alumnos/inspqv00/.amandahosts failed

The problem is that I first installed AMANDA with --with-user=inspqv00, then I
uninstalled it and reinstalled with --with-user=amanda, but it seems something
did not change.
In inetd.conf the user is amanda; I don't know why AMANDA uses the old user.

Thanks!

_
Reparaciones, servicios a domicilio, empresas, profesionales... Todo en la 
guía telefónica de QDQ. http://qdq.msn.es/msn.cfm



Re: Problems while amanda is running !?

2004-04-20 Thread Joshua Baker-LePain
On Tue, 20 Apr 2004 at 1:11pm, Oliver Simon wrote

> Maybe someone can help me solve this problem, or has had the same problem?
> After running amanda for quite a long time without any problems, the
> following messages crop up in my mail every morning.
> Can anyone help?
> The only thing we changed was to put a new switch online that has
> gigabit capabilities.
> The mail every morning really looks shi$&: of the 272 DLEs, not even half
> of them can be done without problems.
> 
> -
> 
> hal      /var   lev 1 FAILED 20040420 [could not connect to hal]
> hydra    /opt   lev 0 FAILED 20040420 [could not connect to hydra]
> helena1  /boot  lev 0 FAILED 20040420 [could not connect to helena1]
> 
> -
> 
> Would be great if anyone had a hint!

What do the debug logs on the clients (in /tmp/amanda) have to say about 
these failed attempts?  This certainly sounds like network issues to me...
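
For example, on one of the clients that failed last night (exact debug file
names vary between amanda versions):

  ls -lt /tmp/amanda/                  # newest debug files first
  tail -40 /tmp/amanda/amandad*debug   # the client's side of the failed connection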

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University



Re: Amdump failures in shared cluster

2004-04-20 Thread Joshua Baker-LePain
I moved this discussion to amanda-users, since that's really where it 
belongs.

On Mon, 19 Apr 2004 at 5:37pm, Jeff Mulliken wrote

> 'disklist' file, and the '.amandahosts' file.  The problem persists, 

By "the problem persists", I'm assuming that you're still getting 'request 
to $FOO timed out'?  Does this setup pass amcheck?

> So, can someone tell me how I get rid of the 'localhost' in the 
> DEFAULT_TAPE_SERVER line?

That's not a big deal.  All that means is that you didn't specify a 
default tape server when you did "./configure".  It comes into play, e.g., 
when you run amrecover, and you can override the default with a command 
line switch (explained in the amrecover man page).

> Secondarily, I suspect that amanda may be colliding with some other software 
> that I have running on this server, that makes extensive use of the internal 
> network.  Is there a way to affect the ports that amanda uses?

Look for the tcpportrange and udpportrange (or some such) options to 
./configure (run './configure --help' to see all the options).
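
Something along these lines, though the exact option names may differ between
releases, so trust './configure --help' over this sketch (the port numbers are
only examples):

./configure --with-udpportrange=850,854 --with-tcpportrange=50000,50100   # plus your usual options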

But I'm not convinced that's the problem.  As long as UDP 10080 is open 
(that's where incoming requests go), amanda should pick open ports to use 
for the data transfer.  Try this:

tcpdump -i lo
(In another terminal:)
amcheck -c $CONFIG

And show us the output of the tcpdump.  I assumed in that example that 
your server and client are the same machine.  If they're not, you'll need 
to sniff on eth0 on the server and filter out all the other traffic.
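
For instance (the host name is a placeholder, and the interface should be
whichever one faces the client):

tcpdump -i eth0 host client.example.com     # everything to/from that client
tcpdump -i eth0 udp port 10080              # or just the initial amanda requests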

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


Problem configuring full dump

2004-04-20 Thread Gertjan van Oosten
Hello,

I'm having a problem configuring Amanda-2.4.4p2 (on sun-sparc-solaris8)
to always do a full dump of a filesystem.  The filesystem in question is
over 100 GByte, and no matter what I try, when Amanda starts, Amanda's
planner asks for a size estimate for the level 0 dump of this filesystem
(good), and then for a size estimate for a level 1 dump of this
filesystem (not good).  The level 1 estimate takes too long, so Amanda
aborts with the (in)famous:

FAILURE AND STRANGE DUMP SUMMARY: 
  datahost  /data  lev 0 FAILED [Estimate timeout from datahost]


In my disklist I have:

  datahost /data always-full


Excerpt from my amanda.conf:

  inparallel 4
  dumporder "Ssss"
  netusage  1 Kbps

  dumpcycle 1 day
  runspercycle 5
  tapecycle 5 tapes

  bumpsize 20 Mb
  bumpdays 1
  bumpmult 4

  etimeout 300
  dtimeout 1800
  ctimeout 30

  tapebufs 30

  runtapes 1
  tpchanger "chg-zd-mtx"
  tapedev "/dev/rmt/0bn"
  rawtapedev "/dev/null"
  changerfile "/opt/amanda-2.4.4p2/etc/amanda/tapehost/changer"
  changerdev "/dev/scsi/changer/c3t0d0"

  maxdumpsize -1

  tapetype HP-LTO-2
  define tapetype HP-LTO-2 {
  comment "HP LTO-2 Ultrium (hardware compression off)"
  length 201216 mbytes
  filemark 0 kbytes
  speed 23663 kps
  }

  define dumptype global {
  comment "Global definitions"
  index yes
  }

  define dumptype always-full {
  global
  comment "Full dump of this filesystem always"
  compress none
  priority high
  dumpcycle 0
  strategy noinc
  }


What I want is simple: Amanda should do a level 0 dump of that
filesystem, and not try to find out how large a level 1 dump of this
filesystem would be (it wastes a *HUGE* amount of time, almost an hour).
I want a level 0 or nothing at all.  Is that possible, and if so, how?
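
For completeness, one variation I have not tried yet is adding "skip-incr yes"
to the dumptype, along these lines (just a sketch; the documentation describes
skip-incr as skipping the disk when an incremental is due, so I am not sure it
affects the estimate phase at all):

  define dumptype always-full {
      global
      comment "Full dump of this filesystem always"
      compress none
      priority high
      dumpcycle 0
      strategy noinc
      skip-incr yes     # untested: may or may not suppress the level 1 estimate
  }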

By the way, even though the above specifies:

  etimeout 300  # number of seconds per filesystem for estimates.
  dtimeout 1800 # number of idle seconds before a dump is aborted.

the estimate is not aborted after 300 seconds but only after 1800
seconds.  That doesn't seem right, does it?

Furthermore, even after the planner decides to timeout and cut the
connection, the amandad/sendsize/ufsdump processes keep on running on
the client (until they're done, in fact, but they have nowhere to send
the data to).  Shouldn't they be terminated as well?

Kind regards,
-- 
-- Gertjan van Oosten, [EMAIL PROTECTED], West Consulting B.V., +31 15 2191 600


Problems while amanda is running !?

2004-04-20 Thread Oliver Simon
Hi group !

Maybe someone can help me solve this problem, or has had the same problem?
After running amanda for quite a long time without any problems, the following
messages crop up in my mail every morning.
Can anyone help?
The only thing we changed was to put a new switch online that has gigabit capabilities.
The mail every morning really looks shi$&: of the 272 DLEs, not even half of them
can be done without problems.

-

hal      /var   lev 1 FAILED 20040420 [could not connect to hal]
hydra    /opt   lev 0 FAILED 20040420 [could not connect to hydra]
helena1  /boot  lev 0 FAILED 20040420 [could not connect to helena1]

-

Would be great if anyone had a hint!

Greetings, ...oliver