Re: incremental backup overwriting last full backup

2020-04-30 Thread Jukka Salmi
Hello

Debra S Baddorf --> amanda-users (2020-04-25 17:11:23 +):
> IMHO, you should have at least one multiple of “dumpcycle”  in number of tapes
> “tapecycle”. IE  minimum  dumpcycle=2  if you only have 4 tapes.
> Maybe even  dumpcycle=1  if you are limited to 4 tapes.  That’s the only
> way to prevent writing over your only level-0.
> I use dumpcycle=7 (a week)  but I keep 10 times that many tapes, 70,
> so that I have ten weeks of data being stored.
> 
> And, if you want to run more than one backup per day,  having a large multiple
> of tapes is even more important.

Sure, I would (and do) use different settings for a production
environment, too.  I was just trying to point out that the [1]"Basic
Configuration" from the [2]"Getting Started with Amanda" pages seems not
to work.
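
(For the record, settings of the kind Debra describes above would look
roughly like this; the numbers are made up, they are neither hers nor the
GSWA ones:)

  dumpcycle 7 days      # every DLE gets a full backup at least once a week
  runspercycle 7        # one amdump run per day
  runtapes 1
  tapecycle 15 tapes    # well over one dumpcycle's worth of tapes, so a
                        # still-needed level 0 is not the next one reused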

However, in the meantime it turned out that if amdump is run only once a
day instead of every few seconds/minutes as I did during my testing,
then a full backup is created after dumpcycle (3) runs at the latest,
as documented in amanda.conf(5) ("Each disk will get a full backup at
least this often"):

  $ amadmin MyConfig find
  
  date                host      disk lv storage  pool     tape or file file part status
  2020-04-26 23:05:01 localhost /etc  0 MyConfig MyConfig MyData01     1    1/1  OK
  2020-04-27 23:05:01 localhost /etc  1 MyConfig MyConfig MyData02     1    1/1  OK
  2020-04-28 23:05:01 localhost /etc  1 MyConfig MyConfig MyData03     1    1/1  OK
  2020-04-29 23:05:01 localhost /etc  0 MyConfig MyConfig MyData04     1    1/1  OK

So the issue I described seems to be an issue of my testing, not of how
Amanda works. (It would be nice if Amanda would _never_ overwrite a full
dump which is needed for restoring, but that's another story...).


> Dumpcycle DOES default to  unit=days, but I think you can specify 
> otherwise.
> If you want to run 4 backups per day,  you might be able to specify 
> dumpcycle=6 hours
> but I’m not at all certain about that.

This seems not to work:

$ amcheck MyConfig
'/etc/amanda/MyConfig/amanda.conf', line 14: end of line is expected
ERROR: errors processing config file
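
(For reference, the line I had tried, next to a form which amanda.conf(5)
does accept; as Debra notes, the unit defaults to days:)

  dumpcycle 6 hours   # rejected with "end of line is expected"
  dumpcycle 1         # accepted; means one day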


>If you are only running extra backups during testing, then make sure to FORCE
> an extra level 0 sometime before you hit your tape limit.
> AMADMIN <config> FORCE  DLE-NAME
True, thanks for the hint.
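
(Spelled out in lowercase, with the config and DLE from my test setup, that
would be something like:)

  $ amadmin MyConfig force localhost /etc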


> (Use lower case;  my computer keeps trying to correct the spelling, unless I 
> use
> uppercase)

Maybe you should ask your computer to be less picky.  I mean, you only
have lowercase and uppercase, so if your computer one day starts doing
funny things when you type uppercase too, you'll have run out of options
already.

;)


Cheers, Jukka

[1] https://wiki.zmanda.com/index.php/GSWA/Build_a_Basic_Configuration
[2] https://wiki.zmanda.com/index.php/Getting_Started_with_Amanda

> > On Apr 25, 2020, at 5:55 AM, Jukka Salmi  wrote:
> > 
> > Hello
> > 
> > I just installed Amanda 3.5.1 on a Debian 10.3 (buster) system and am
> > following the [1]"GSWA/Build a Basic Configuration" example.
> > 
> >  $ amgetconf MyConfig tapecycle
> >  4
> >  $ amgetconf MyConfig dumpcycle
> >  3
> > 
> > Running amdump a few times seemed to be successful, but then I noticed
> > that while the first run created a full backup...
> > 
> >  $ amadmin MyConfig find
> > 
> >  date                host      disk lv storage  pool     tape or file file part status
> >  2020-04-25 10:41:40 localhost /etc  0 MyConfig MyConfig MyData01     1    1/1  OK
> > 
> > ...all subsequent runs created incremental backups...
> > 
> >  $ amadmin MyConfig find
> > 
> >  date                host      disk lv storage  pool     tape or file file part status
> >  2020-04-25 10:41:40 localhost /etc  0 MyConfig MyConfig MyData01     1    1/1  OK
> >  2020-04-25 10:46:06 localhost /etc  1 MyConfig MyConfig MyData02     1    1/1  OK
> >  2020-04-25 10:46:13 localhost /etc  1 MyConfig MyConfig MyData03     1    1/1  OK
> >  2020-04-25 10:46:21 localhost /etc  1 MyConfig MyConfig MyData04     1    1/1  OK
> > 
> > ...and the fifth run overwrote the first vtape which contained the full
> > backup...
> > 
> >  $ amadmin MyConfig find
> > 
> >  date                host      disk lv storage  pool     tape or file file part status
> >  2020-04-25 10:46:06 localhost /etc  1 MyConfig MyConfig MyData02     1    1/1  OK
> >  2020-04-25 10:46:13 localhost /etc  1 MyConfig MyConfig MyData03     1    1/1  OK
> >  2020-04-25 10:46:21 localhost /etc  1 MyConfig MyConfig MyData04     1    1/1  OK
> > [...]

incremental backup overwriting last full backup

2020-04-25 Thread Jukka Salmi
Hello

I just installed Amanda 3.5.1 on a Debian 10.3 (buster) system and am
following the [1]"GSWA/Build a Basic Configuration" example.

  $ amgetconf MyConfig tapecycle
  4
  $ amgetconf MyConfig dumpcycle
  3

Running amdump a few times seemed to be successful, but then I noticed
that while the first run created a full backup...

  $ amadmin MyConfig find
  
  date                host      disk lv storage  pool     tape or file file part status
  2020-04-25 10:41:40 localhost /etc  0 MyConfig MyConfig MyData01     1    1/1  OK

...all subsequent runs created incremental backups...

  $ amadmin MyConfig find
  
  date                host      disk lv storage  pool     tape or file file part status
  2020-04-25 10:41:40 localhost /etc  0 MyConfig MyConfig MyData01     1    1/1  OK
  2020-04-25 10:46:06 localhost /etc  1 MyConfig MyConfig MyData02     1    1/1  OK
  2020-04-25 10:46:13 localhost /etc  1 MyConfig MyConfig MyData03     1    1/1  OK
  2020-04-25 10:46:21 localhost /etc  1 MyConfig MyConfig MyData04     1    1/1  OK

...and the fifth run overwrote the first vtape which contained the full
backup...

  $ amadmin MyConfig find
  
  date                host      disk lv storage  pool     tape or file file part status
  2020-04-25 10:46:06 localhost /etc  1 MyConfig MyConfig MyData02     1    1/1  OK
  2020-04-25 10:46:13 localhost /etc  1 MyConfig MyConfig MyData03     1    1/1  OK
  2020-04-25 10:46:21 localhost /etc  1 MyConfig MyConfig MyData04     1    1/1  OK
  2020-04-25 10:47:33 localhost /etc  1 MyConfig MyConfig MyData01     1    1/1  OK

...thus rendering the whole backup useless.

What am I missing?  Is the dumpcycle (3 _days_ in this case) to be taken
literally, i.e. should I just not run amdump more often than once _per
day_ (per Amanda config)?  And if so, how can I configure Amanda not to
overwrite a tape containing a full backup which the other incremental
backups still depend on, no matter how often or in what period it is run?


TIA & cheers, Jukka

[1] https://wiki.zmanda.com/index.php/GSWA/Build_a_Basic_Configuration

-- 
This email fills a much-needed gap in the archives.


Re: amcheckdump(8) failure and other issues

2013-05-14 Thread Jukka Salmi
Jean-Louis Martineau --> amanda-users (2013-05-14 12:21:28 -0400):
> On 05/14/2013 12:01 PM, Jukka Salmi wrote:
> 
> >$ grep AMANDA_COMPONENTS 
> >/usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Constants.pm
> >$AMANDA_COMPONENTS = " ndmp";
> That's the problem, it should be like:
> $AMANDA_COMPONENTS = " server restore client amrecover ndmp"
> 
> Xfer.pm load XferServer only if "server" is included in $AMANDA_COMPONENTS.

Ok, I adjusted Constants.pm to list all components in AMANDA_COMPONENTS
(and reverted my changes to the other Perl modules).  Now amdump(8)
succeeds, but amcheckdump(8) still fails:

$ amcheckdump Test
[...]

[1] http://salmi.ch/~jukka/Amanda/amcheckdump.20130514213840.debug

-- 
This email fills a much-needed gap in the archives.


Re: amcheckdump(8) failure and other issues

2013-05-14 Thread Jukka Salmi
Jean-Louis Martineau --> amanda-users (2013-05-14 12:21:28 -0400):
> On 05/14/2013 12:01 PM, Jukka Salmi wrote:
> 
> >$ grep AMANDA_COMPONENTS 
> >/usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Constants.pm
> >$AMANDA_COMPONENTS = " ndmp";
> That's the problem, it should be like:
> $AMANDA_COMPONENTS = " server restore client amrecover ndmp"
> 
> Xfer.pm load XferServer only if "server" is included in $AMANDA_COMPONENTS.

Ok, thanks.  In this case it seems to me that the Amanda pkgsrc packages
need to be fixed.

Thanks for your help!


Cheers, Jukka

-- 
This email fills a much-needed gap in the archives.


Re: amcheckdump(8) failure and other issues

2013-05-14 Thread Jukka Salmi
Hello

Sorry, I only noticed your email after I sent my previous one...

Jean-Louis Martineau --> amanda-users (2013-05-14 10:39:15 -0400):
> On 05/14/2013 02:22 AM, Jukka Salmi wrote:
> >critical (fatal): Can't locate object method "new" via package 
> >"Amanda::Xfer::Dest::Taper::Splitter" (perhaps you forgot to load 
> >"Amanda::Xfer::Dest::Taper::Splitter"?) at 
> >/usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Taper/Scribe.pm line 731.
> >
> >Hmm, looks similar to the problem mentioned above...  Any hints?
> >
> 
> It looks like your installation is not complete, or you have
> multiple versions installed?

No, only Amanda 3.3.1 is installed from pkgsrc:

$ pkg_info -a | grep ^amanda
amanda-common-3.3.1nb2 Common libraries and binaries for Amanda
amanda-server-3.3.1nb1 Server part of Amanda, a network backup system
amanda-client-3.3.1nb1 Client part of Amanda, a network backup system


> Do you have the following file:
>   /usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/XferServer.pm

Yes:

$ ls -l /usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/XferServer.pm
-rw-r--r--  1 root  wheel  4174 May 13 11:03 
/usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/XferServer.pm


> What's the output of the following command:
>grep AMANDA_COMPONENTS
> /usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Constants.pm

$ grep AMANDA_COMPONENTS 
/usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Constants.pm
$AMANDA_COMPONENTS = " ndmp";


Cheers, Jukka

-- 
This email fills a much-needed gap in the archives.


Re: amcheckdump(8) failure and other issues

2013-05-14 Thread Jukka Salmi
Hello

Jukka Salmi --> amanda-users (2013-05-14 08:22:16 +0200):
> Jukka Salmi --> amanda-users (2013-05-13 22:37:12 +0200):
> > Hello
> > 
> > I'm currently updating Amanda on some NetBSD/amd64 5.2_STABLE systems from
> > 2.5.2p1 to 3.3.1; Amanda has been built from pkgsrc.  So far the Amanda 
> > server
> > and one Amanda client have been updated.  The first thing I tried after
> > the update was to check the latest backup (created by the "old" Amanda),
> > but this failed:
> > 
> > $ amcheckdump Daily
> > You will need the following volume: DAILY07
> > Press enter when ready
> > Validating image foo.salmi.ch:/usr dumped 20130512032500 level 1
> > amcheckdump: Can't locate object method "new" via package 
> > "Amanda::Xfer::Source::Recovery" (perhaps you forgot to load 
> > "Amanda::Xfer::Source::Recovery"?) at 
> > /usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Recovery/Clerk.pm line 551.
> > 
> > I haven't run amdump(8) yet, thus e.g.  $logdir/amdump.$n is still from an
> > Amanda 2.5.2p1 run.  Could this be the reason for the failure and I should 
> > just
> > run amdump(8)?
> 
> In the meantime I created a test configuration and ran amdump(8).
> However, this particular problem still exists.

The first attached patch seems to fix this problem.  Note that I don't
really understand Perl...


> More specifically, now I get
> 
> foo.salmi.ch:/0 44979k failed: process terminated while waiting for 
> dumping 
> foo.salmi.ch:/etc 1 4k failed: killed while writing to tape (7:45:12)
> bar.salmi.ch:/0 22788k failed: process terminated while waiting for 
> dumping 
> bar.salmi.ch:/etc 1 5k dump done (7:45:12), process terminated while 
> waiting for writing to tape
> 
> (bar is the Amanda server)
> 
> The taper logfile reveals:
> 
> [...]
> Amanda::Changer::compat initialized with script 
> /usr/pkg/libexec/amanda/chg-disk, temporary directory /etc/pkg/amanda/Test
> Amanda::Taper::Scan::traditional stage 1: search for oldest reusable volume
> Amanda::Taper::Scan::traditional no oldest reusable volume
> Amanda::Taper::Scan::traditional stage 2: scan for any reusable volume
> Amanda::Changer::compat: invoking /usr/pkg/libexec/amanda/chg-disk with -info
> Amanda::Changer::compat: Got response '8 8 1' with exit status 0
> Amanda::Changer::compat: invoking /usr/pkg/libexec/amanda/chg-disk with -slot 
> current
> Amanda::Changer::compat: Got response '8 file:/var/amanda/vtapes/Test' with 
> exit status 0
> Slot 8 with label TEST08 is usable
> Amanda::Taper::Scan::traditional result: 'TEST08' on 
> file:/var/amanda/vtapes/Test slot 8, mode 2
> Amanda::Taper::Scribe preparing to write, part size 0, using LEOM (falling 
> back to holding disk as cache) (splitter)  (LEOM supported)
> critical (fatal): Can't locate object method "new" via package 
> "Amanda::Xfer::Dest::Taper::Splitter" (perhaps you forgot to load 
> "Amanda::Xfer::Dest::Taper::Splitter"?) at 
> /usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Taper/Scribe.pm line 731.

The second attached patch seems to fix this problem.

However, with those two patches applied amdump(8) succeeds.  Manual
verification shows that the dumps are ok, but amcheckdump(8) fails:

$ amcheckdump Test
[...] bs=32k skip=1 | [...] | /sbin/restore -xpGf - ...

At least NetBSD's restore(8) has no -p and no -G option.


Cheers, Jukka

[1] http://salmi.ch/~jukka/Amanda/amcheckdump.20130514140608.debug

-- 
This email fills a much-needed gap in the archives.
--- Amanda/Recovery/Clerk.pm.orig   2013-05-13 11:03:18.0 +0200
+++ Amanda/Recovery/Clerk.pm2013-05-14 13:15:17.0 +0200
@@ -23,6 +23,7 @@ use warnings;
 use Carp;
 
 use Amanda::Xfer qw( :constants );
+use Amanda::XferServer;
 use Amanda::Device qw( :constants );
 use Amanda::Header;
 use Amanda::Holding;
--- Amanda/Taper/Scribe.pm.orig 2013-05-13 11:03:18.0 +0200
+++ Amanda/Taper/Scribe.pm  2013-05-14 13:56:07.0 +0200
@@ -427,6 +427,7 @@ use warnings;
 use Carp;
 
 use Amanda::Xfer qw( :constants );
+use Amanda::XferServer;
 use Amanda::Device qw( :constants );
 use Amanda::Header;
 use Amanda::Debug qw( :logging );


Re: amcheckdump(8) failure and other issues

2013-05-13 Thread Jukka Salmi
Jukka Salmi --> amanda-users (2013-05-13 22:37:12 +0200):
> Hello
> 
> I'm currently updating Amanda on some NetBSD/amd64 5.2_STABLE systems from
> 2.5.2p1 to 3.3.1; Amanda has been built from pkgsrc.  So far the Amanda server
> and one Amanda client have been updated.  The first thing I tried after
> the update was to check the latest backup (created by the "old" Amanda),
> but this failed:
> 
> $ amcheckdump Daily
> You will need the following volume: DAILY07
> Press enter when ready
> Validating image foo.salmi.ch:/usr dumped 20130512032500 level 1
> amcheckdump: Can't locate object method "new" via package 
> "Amanda::Xfer::Source::Recovery" (perhaps you forgot to load 
> "Amanda::Xfer::Source::Recovery"?) at 
> /usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Recovery/Clerk.pm line 551.
> 
> I haven't run amdump(8) yet, thus e.g.  $logdir/amdump.$n is still from an
> Amanda 2.5.2p1 run.  Could this be the reason for the failure and I should 
> just
> run amdump(8)?

In the meantime I created a test configuration and ran amdump(8).
However, this particular problem still exists.

More specifically, now I get

foo.salmi.ch:/0 44979k failed: process terminated while waiting for 
dumping 
foo.salmi.ch:/etc 1 4k failed: killed while writing to tape (7:45:12)
bar.salmi.ch:/0 22788k failed: process terminated while waiting for 
dumping 
bar.salmi.ch:/etc 1 5k dump done (7:45:12), process terminated while 
waiting for writing to tape

(bar is the Amanda server)

The taper logfile reveals:

[...]
Amanda::Changer::compat initialized with script 
/usr/pkg/libexec/amanda/chg-disk, temporary directory /etc/pkg/amanda/Test
Amanda::Taper::Scan::traditional stage 1: search for oldest reusable volume
Amanda::Taper::Scan::traditional no oldest reusable volume
Amanda::Taper::Scan::traditional stage 2: scan for any reusable volume
Amanda::Changer::compat: invoking /usr/pkg/libexec/amanda/chg-disk with -info
Amanda::Changer::compat: Got response '8 8 1' with exit status 0
Amanda::Changer::compat: invoking /usr/pkg/libexec/amanda/chg-disk with -slot 
current
Amanda::Changer::compat: Got response '8 file:/var/amanda/vtapes/Test' with 
exit status 0
Slot 8 with label TEST08 is usable
Amanda::Taper::Scan::traditional result: 'TEST08' on 
file:/var/amanda/vtapes/Test slot 8, mode 2
Amanda::Taper::Scribe preparing to write, part size 0, using LEOM (falling back 
to holding disk as cache) (splitter)  (LEOM supported)
critical (fatal): Can't locate object method "new" via package 
"Amanda::Xfer::Dest::Taper::Splitter" (perhaps you forgot to load 
"Amanda::Xfer::Dest::Taper::Splitter"?) at 
/usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Taper/Scribe.pm line 731.

Hmm, looks similar to the problem mentioned above...  Any hints?


> Another potential issue I noticed is with amstatus(8):
[...]

It seems that this problem was caused by reading an amdump output file
created by 2.5.2p1 amdump with 3.3.1 amstatus; amstatus(8) now seems to
work fine.


TIA & cheers,

Jukka

-- 
This email fills a much-needed gap in the archives.


amcheckdump(8) failure and other issues

2013-05-13 Thread Jukka Salmi
Hello

I'm currently updating Amanda on some NetBSD/amd64 5.2_STABLE systems from
2.5.2p1 to 3.3.1; Amanda has been built from pkgsrc.  So far the Amanda server
and one Amanda client have been updated.  The first thing I tried after
the update was to check the latest backup (created by the "old" Amanda),
but this failed:

$ amcheckdump Daily
You will need the following volume: DAILY07
Press enter when ready
Validating image foo.salmi.ch:/usr dumped 20130512032500 level 1
amcheckdump: Can't locate object method "new" via package
"Amanda::Xfer::Source::Recovery" (perhaps you forgot to load
"Amanda::Xfer::Source::Recovery"?) at
/usr/pkg/lib/perl5/vendor_perl/5.16.0/Amanda/Recovery/Clerk.pm line 551.

I haven't run amdump(8) yet, thus e.g. $logdir/amdump.$n is still from an
Amanda 2.5.2p1 run.  Could this be the reason for the failure and I should
just run amdump(8)?

Another potential issue I noticed is with amstatus(8):

[...]

** (process:13775): WARNING **: Use of uninitialized value $gdatestamp in hash 
element at /usr/pkg/sbin/amstatus line 451,  line 82.

** (process:13775): WARNING **: Use of uninitialized value $datestamp in 
concatenation (.) or string at /usr/pkg/sbin/amstatus line 1428,  line 
106.

** (process:13775): WARNING **: Use of uninitialized value $datestamp in hash 
element at /usr/pkg/sbin/amstatus line 1429,  line 106.

** (process:13775): WARNING **: Use of uninitialized value $datestamp in 
concatenation (.) or string at /usr/pkg/sbin/amstatus line 1428,  line 
107.

[...]

Using /var/amanda/Daily/amdump.1
From Sun May 12 03:25:00 CEST 2013

bar.salmi.ch:/   0 14166k dump done (3:27:58), process 
terminated while waiting for writing to tape
bar.salmi.ch:/   0no estimate
bar.salmi.ch:/etc0  1748k dump done (3:27:29), process 
terminated while waiting for writing to tape
bar.salmi.ch:/etc0no estimate
bar.salmi.ch:/usr0 3k dump done (3:27:22), process 
terminated while waiting for writing to tape
bar.salmi.ch:/usr1no estimate
bar.salmi.ch:/var0 72858k dump done (3:32:11), process 
terminated while waiting for writing to tape
bar.salmi.ch:/var0no estimate
[...]

The rest of the amstatus(8) output looks ok.  Are those Perl warnings
and the "process terminated [...]" messages to be expected because
amdump(8) has not been run since Amanda was updated, or is this a
serious issue which should be fixed before running amdump(8)?


TIA & cheers,

Jukka

-- 
This email fills a much-needed gap in the archives.


Re: 501 Could not read config file /etc/amanda/amindexd/amanda.conf!

2009-04-03 Thread Jukka Salmi
Jean-Louis Martineau --> amanda-users (2009-04-03 12:43:36 -0400):
> Change it to:
> amandaidx stream tcp nowait backup /usr/sbin/tcpd /usr/lib/amanda/amindexd
> amidxtape stream tcp nowait backup /usr/sbin/tcpd /usr/lib/amanda/amidxtaped
>
> Other arguments should only be used for amandad.

Indeed, works fine now, thanks a lot!

Hmm, I wonder why the Debian package adds those arguments to the
amandaidx and amidxtape services...  Seems to be a bug, doesn't it?


Regards, Jukka

> Jean-Louis
>
> Jukka Salmi wrote:
>> Jean-Louis Martineau --> amanda-users (2009-04-03 11:49:53 -0400):
>>   
> amrecover from 2.5.2p1 and 2.4.4p3 use different protocols to communicate
>>> communicate  with the server.
>>>
>>> What's the xinetd configuration for the amindexd and amidxtaped 
>>> network  services on the server?
>>> 
>>
>> (The systems in question use obsd inetd instead of xinetd, but this shouldn't
>> matter AFAICT...)
>>
>> amandaidx stream tcp nowait backup /usr/sbin/tcpd /usr/lib/amanda/amindexd 
>> amindexd -auth=bsd amdump amindexd amidxtaped
>> amidxtape stream tcp nowait backup /usr/sbin/tcpd /usr/lib/amanda/amidxtaped 
>> amidxtaped -auth=bsd amdump amindexd amidxtaped
>>
>>
>> Regards, Jukka
>>
>>   
>>> Jean-Louis
>>>
>>> Jukka Salmi wrote:
>>> 
>>>> Hello,
>>>>
>>>> I'm having a problem with amrecover(8) on Linux systems.  The backup
>>>> server runs Amanda 2.5.2p1 on Debian lenny, the backup client runs
>>>> Amanda 2.4.4p3 on Debian sarge.  Running amrecover(8) on the server
>>>> works fine:
>>>>
>>>> $ amrecover -C myconf -s srv -t srv
>>>> AMRECOVER Version 2.5.2p1. Contacting server on srv ...
>>>> 220 srv AMANDA index server (2.5.2p1) ready.
>>>> [...]
>>>>
>>>> But running it on the client does not:
>>>>
>>>> $ amrecover -C myconf -s srv -t srv
>>>> AMRECOVER Version 2.4.4p3. Contacting server on srv ...
>>>> 501 Could not read config file /etc/amanda/amindexd/amanda.conf!
>>>>
>>>> The relevant amindexd.*.debug on the server then looks like this:
>>>>
>>>> amindexd: debug 1 pid 28092 ruid 34 euid 34: start at Fri Apr  3 17:31:45 
>>>> 2009
>>>> amindexd: version 2.5.2p1
>>>> could not open conf file "/etc/amanda/amindexd/amanda.conf": No such file 
>>>> or directory
>>>> amindexd: time 0.003: < 501 Could not read config file 
>>>> /etc/amanda/amindexd/amanda.conf!
>>>> amindexd: time 0.003: < 220 srv AMANDA index server (2.5.2p1) ready.
>>>> amindexd: time 0.003: ? read error: Connection reset by peer
>>>> amindexd: time 0.003: pid 28092 finish time Fri Apr  3 17:31:45 2009
>>>>
>>>> That file (/etc/amanda/amindexd/amanda.conf) indeed does not exist on
>>>> the server, but I can't find any references to it in the documentation.
>>>> What am I missing?
>>>>
>>>>
>>>> TIA, Jukka
>>>>   
>>
>>   
>

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: 501 Could not read config file /etc/amanda/amindexd/amanda.conf!

2009-04-03 Thread Jukka Salmi
Jean-Louis Martineau --> amanda-users (2009-04-03 11:49:53 -0400):
> amrecover from 2.5.2p1 and 2.4.4p3 use different protocols to communicate
> with the server.
>
> What's the xinetd configuration for the amindexd and amidxtaped network  
> services on the server?

(The systems in question use obsd inetd instead of xinetd, but this shouldn't
matter AFAICT...)

amandaidx stream tcp nowait backup /usr/sbin/tcpd /usr/lib/amanda/amindexd 
amindexd -auth=bsd amdump amindexd amidxtaped
amidxtape stream tcp nowait backup /usr/sbin/tcpd /usr/lib/amanda/amidxtaped 
amidxtaped -auth=bsd amdump amindexd amidxtaped


Regards, Jukka

> Jean-Louis
>
> Jukka Salmi wrote:
>> Hello,
>>
>> I'm having a problem with amrecover(8) on Linux systems.  The backup
>> server runs Amanda 2.5.2p1 on Debian lenny, the backup client runs
>> Amanda 2.4.4p3 on Debian sarge.  Running amrecover(8) on the server
>> works fine:
>>
>> $ amrecover -C myconf -s srv -t srv
>> AMRECOVER Version 2.5.2p1. Contacting server on srv ...
>> 220 srv AMANDA index server (2.5.2p1) ready.
>> [...]
>>
>> But running it on the client does not:
>>
>> $ amrecover -C myconf -s srv -t srv
>> AMRECOVER Version 2.4.4p3. Contacting server on srv ...
>> 501 Could not read config file /etc/amanda/amindexd/amanda.conf!
>>
>> The relevant amindexd.*.debug on the server then looks like this:
>>
>> amindexd: debug 1 pid 28092 ruid 34 euid 34: start at Fri Apr  3 17:31:45 
>> 2009
>> amindexd: version 2.5.2p1
>> could not open conf file "/etc/amanda/amindexd/amanda.conf": No such file or 
>> directory
>> amindexd: time 0.003: < 501 Could not read config file 
>> /etc/amanda/amindexd/amanda.conf!
>> amindexd: time 0.003: < 220 srv AMANDA index server (2.5.2p1) ready.
>> amindexd: time 0.003: ? read error: Connection reset by peer
>> amindexd: time 0.003: pid 28092 finish time Fri Apr  3 17:31:45 2009
>>
>> That file (/etc/amanda/amindexd/amanda.conf) indeed does not exist on
>> the server, but I can't find any references to it in the documentation.
>> What am I missing?
>>
>>
>> TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


sendbackup: broken pipe

2008-11-24 Thread Jukka Salmi
Hello,

every now and then I see this error on an otherwise perfectly working
Amanda system (server 2.4.4p3, client 2.5.1p1):

FAILURE AND STRANGE DUMP SUMMARY:
  host.staso / lev 0 FAILED [data timeout]


The client's sendbackup log for such a failed run:

sendbackup: debug 1 pid 14726 ruid 34 euid 34: start at Fri Nov 21 22:46:00 2008
sendbackup: version 2.5.1p1
Could not open conf file "/etc/amanda/amanda-client.conf": No such file or 
directory
  sendbackup req: 
  parsed request as: program `GNUTAR'
 disk `/'
 device `/'
 level 0
 since 1970:1:1:0:0:0
 options 
`|;bsd-auth;no-record;index;exclude-list=.backup.exc;exclude-optional;'
sendbackup: start: host.stasoft.ch:/ lev 0
sendbackup-gnutar: time 0.006: doing level 0 dump as listed-incremental to 
'/var/lib/amanda/gnutar-lists/host.stasoft.ch__0.new'
sendbackup-gnutar: time 0.010: doing level 0 dump from date: 1970-01-01  
0:00:00 GMT
sendbackup: time 0.015: spawning /usr/lib/amanda/runtar in pipeline
sendbackup: argument list: runtar NOCONFIG gtar --create --file - --directory / 
--one-file-system --listed-incremental 
/var/lib/amanda/gnutar-lists/host.stasoft.ch__0.new --sparse 
--ignore-failed-read --totals --exclude-from 
/tmp/amanda/sendbackup._.20081121224600.exclude .
sendbackup: time 0.078: started index creator: "/bin/tar -tf - 2>/dev/null | 
sed -e 's/^\.//'"
sendbackup-gnutar: time 0.082: /usr/lib/amanda/runtar: pid 14729
sendbackup: time 0.083: started backup
sendbackup: time 2996.696: index tee cannot write [Broken pipe]
sendbackup: time 2996.718: pid 14728 finish time Fri Nov 21 23:35:57 2008
sendbackup: time 2996.697: 118: strange(?): sendbackup: index tee cannot write 
[Broken pipe]
sendbackup: time 2996.751: 118: strange(?): sed: couldn't flush stdout: Broken 
pipe
sendbackup: time 2996.779:  47:size(|): Total bytes written: 16497745920 
(16GiB, ?/s)
sendbackup: time 2996.780: 118: strange(?): gtar: -: Wrote only 4096 of 10240 
bytes
sendbackup: time 2996.781: 118: strange(?): gtar: Error is not recoverable: 
exiting now
sendbackup: time 2996.781: parsed backup messages
sendbackup: time 2996.781: pid 14726 finish time Fri Nov 21 23:35:57 2008


On the server side, the amdump log contains:

[...]
driver: send-cmd time 2015.907 to dumper0: FILE-DUMP 01-00032 
/amholdingdisk/20081121/host.stasoft.ch._.0 host.stasoft.ch feff9ffeff / 
NODEVICE 0 1970:1:1:0:0:0 1048576 GNUTAR 49498848 
|;bsd-auth;no-record;index;exclude-list=.backup.exc;exclude-optional;
[...]
driver: result time 6812.702 from dumper0: FAILED 01-00032 [data timeout]
[...]


Hmm, but most of the time I do _not_ see this problem.  Any hints on what
could cause the problem or how to debug this further?


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: `include list' question

2008-10-13 Thread Jukka Salmi
Paul Bijnens --> amanda-users (2008-10-13 14:18:51 +0200):
> On 2008-10-13 13:55, Jukka Salmi wrote:
>> Hello,
>>
>> while reading amanda.conf(5) I noticed the following comment about the
>> `include' dumptype option:
>>
>>   All include expressions are expanded by Amanda, concatenated in one
>>   file and passed to GNU-tar as a --files-from argument. They must
>>   start with "./" and contain no other "/".
>>
>> I've been using include lists for a long time now, but some of them
>> do contain lines with slashes after the starting `./'; this seems to
>> work fine, though.
>>
>> So, why are slashes forbidden?
>
> Amanda needs to pass these strings to the gnutar option "--files-from"
> and gnutar does not do globbing expansion on those strings (as opposed to
> the --exclude option, where gnutar will do globbing).
> Therefore Amanda does the globbing before passing the result of the glob
> to gnutar.
> And for this globbing to work (and still be efficient), Amanda restricts
> itself to the toplevel directory (./some*thing) only.
> However, as a boundary case, if the path contains more than one slash,
> then the toplevel globbing will not work, but Amanda will pass these strings
> unmodified to gnutar; and there you can get away with it.
> As long as you do not expect that pathnames having more than one slash
> will glob correctly, there is no problem.

I see. Thanks a lot for the explanation. (This should probably be
mentioned in amanda.conf(5)...)
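
(A made-up include list to illustrate the distinction; only the first kind
of entry is globbed by Amanda, the second is handed to gtar as-is:)

  ./src*          # toplevel pattern: expanded by Amanda before gtar sees it
  ./home/jukka    # contains another slash: passed to gtar unmodified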


Regards, Jukka

> This dark corner of Amanda will probably change in future releases,
> so it's best to not rely on this too much.

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


`include list' question

2008-10-13 Thread Jukka Salmi
Hello,

while reading amanda.conf(5) I noticed the following comment about the
`include' dumptype option:

  All include expressions are expanded by Amanda, concatenated in one
  file and passed to GNU-tar as a --files-from argument. They must
  start with "./" and contain no other "/".

I've been using include lists for a long time now, but some of them
do contain lines with slashes after the starting `./'; this seems to
work fine, though.

So, why are slashes forbidden?


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: connection problem during `sendbackup' stage

2008-09-12 Thread Jukka Salmi
Jukka Salmi --> amanda-users (2008-09-10 14:40:29 +0200):
[...]
> Hmm, EINTR. I'll try to reproduce this with another version of NetBSD
> (trying with 4.0_STABLE ATM) before debugging any further...

It seems that I hit a known problem:

http://mail-index.netbsd.org/pkgsrc-users/2008/06/14/msg007388.html

At least after setting net.inet6.ip6.v6only=0 on the client system
Amanda works fine now...
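
(I.e., as root on the client, something along these lines; making the
setting permanent would go through /etc/sysctl.conf:)

  $ sysctl -w net.inet6.ip6.v6only=0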


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: connection problem during `sendbackup' stage

2008-09-10 Thread Jukka Salmi
Dustin J. Mitchell --> amanda-users (2008-09-09 09:58:15 -0400):
> On Tue, Sep 9, 2008 at 8:50 AM, Jukka Salmi <[EMAIL PROTECTED]> wrote:
> > c:10080 -> s:846 udp Amanda 2.5 REP HANDLE ... CONNECT DATA 56639 MESG 
> > 56638 INDEX 56637 ...
> > s:846 -> c:10080 udp Amanda 2.4 ACK HANDLE ...
> >
> > s:50029 -> c:56639 tcp SYN
> > c:56639 -> s:50029 tcp SYN,RST
> >
> > Oops. IIUC this TCP connection is supposed to transfer the actual data
> > to back up. Hmm, why could it be reset by the client?
> 
> Most likely, the port is closed by the time the server tries to
> contact it, because something has gone wrong on the client.  Note that
> the index tee uses 'sed'.  Is that sed invocation failing on NetBSD?
> Check the sendbackup debug logs.

Hmm, sed seems not to be the problem here AFAICT.

Some of the sendbackup logs on the client look ok:

[...]
sendbackup: time 0.002: spawning /usr/pkg/libexec/runtar in pipeline
sendbackup: time 0.002: argument list: runtar NOCONFIG gtar --create [...]
sendbackup-gnutar: time 0.002: /usr/pkg/libexec/runtar: pid 4766
sendbackup: time 0.002: started backup
sendbackup: time 0.008: started index creator: "/usr/pkg/bin/gtar -tf - 
2>/dev/null | sed -e 's/^\.//'"
sendbackup: time 0.015:  47:size(|): Total bytes written: 10240 (10KiB, 
33MiB/s)
sendbackup: time 0.017: index created successfully
sendbackup: time 0.017: parsed backup messages
sendbackup: time 0.017: pid 10092 finish time Tue Sep  9 17:32:15 2008

while some don't:

[...]
sendbackup: time 0.002: spawning /usr/pkg/libexec/runtar in pipeline
sendbackup: time 0.002: argument list: runtar NOCONFIG gtar --create [...]
sendbackup-gnutar: time 0.002: /usr/pkg/libexec/runtar: pid 21997
sendbackup: time 0.002: started backup
sendbackup: time 0.004: started index creator: "/usr/pkg/bin/gtar -tf - 
2>/dev/null | sed -e 's/^\.//'"
sendbackup: time 195.078: index tee cannot write [Broken pipe]
sendbackup: time 195.078: pid 4821 finish time Tue Sep  9 17:35:30 2008

The amandad log reveals:

[...]
SERVICE sendbackup
[...]
CONNECT DATA 65311 MESG 65310 INDEX 65309
OPTIONS features=9ffe00;
>>>>>
amandad: time 30.206: dgram_send_addr(addr=0x8056120, dgram=0xbbba3e04)
amandad: time 30.206: (sockaddr_in *)0x8056120 = { 2, 844, 192.168.12.15 }
amandad: time 30.206: dgram_send_addr: 0xbbba3e04->socket = 0
amandad: time 30.207: dgram_recv(dgram=0xbbba3e04, timeout=0, 
fromaddr=0x3df0)
amandad: time 30.207: (sockaddr_in *)0x3df0 = { 2, 844, 192.168.12.15 }
amandad: time 30.207: received ACK pkt:
<<<<<
>>>>>
amandad: time 30.214: stream_accept: select() failed: Interrupted system call
amandad: time 60.247: stream_accept: timeout after 30 seconds
amandad: time 60.247: security_stream_seterr(0x8076000, can't accept new stream 
connection: No such file or directory)
amandad: time 60.247: stream 0 accept failed: unknown protocol error
amandad: time 60.247: security_stream_close(0x8076000)
amandad: time 90.310: stream_accept: timeout after 30 seconds
amandad: time 90.311: security_stream_seterr(0x807f000, can't accept new stream 
connection: No such file or directory)
amandad: time 90.311: stream 1 accept failed: unknown protocol error
amandad: time 90.311: security_stream_close(0x807f000)
amandad: time 120.387: stream_accept: timeout after 30 seconds
amandad: time 120.387: security_stream_seterr(0x8088000, can't accept new 
stream connection: No such file or directory)
amandad: time 120.387: stream 2 accept failed: unknown protocol error
amandad: time 120.387: security_stream_close(0x8088000)
amandad: time 120.387: security_close(handle=0x8056100, driver=0xbbba1f20 (BSD))
amandad: time 120.387: pid 14131 finish time Tue Sep  9 17:28:59 2008

Hmm, EINTR. I'll try to reproduce this with another version of NetBSD
(trying with 4.0_STABLE ATM) before debugging any further...

Any hints?

TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: connection problem during `sendbackup' stage

2008-09-09 Thread Jukka Salmi
Jukka Salmi --> amanda-users (2008-09-05 14:51:31 +0200):
> Hello,
> 
> I'm trying to add a NetBSD system running Amanda 2.5.2p1 as a backup
> client to an existing backup server running Amanda 2.4.4p3 on Linux.
> 
> Running amcheck(8) shows no problems, and both the `noop' and `sendsize'
> stages during an amdump(8) run seem to be fine. But then, during the
> `sendbackup' stage, some strange connection problem occurs: running
> amstatus(8) on the server shows lines like
> 
>   myclient:/  0  27800k wait for dumping driver: (aborted:[request timeout])
> 
> for all disks from that new client, and the Amanda mail report contains
> lines like
> 
>   myclient / lev 0 FAILED 20080904[could not connect to myclient]
> 
> for all of those. On the client system, syslogd receives several
> messages about
> 
>   sendbackup[n]: index tee cannot write [Broken pipe]
> 
> during the amdump run.
> 
> Both hosts involved are connected to the same IP subnet, without any
> packet filtering done in between.
> 
> 
> Any hint about what could be the problem here?

Sniffing network traffic between the Amanda server and client shows
this (s:n is server port n, c:n is client port n):

s:851 -> c:10080 udp Amanda 2.4 REQ HANDLE ... SERVICE noop ...
c:10080 -> s:851 udp Amanda 2.5 ACK HANDLE ...
s:851 -> c:10080 udp Amanda 2.4 REP HANDLE ...
c:10080 -> s:851 udp Amanda 2.5 ACK HANDLE ...
s:851 -> c:10080 udp Amanda 2.4 REQ HANDLE ... SERVICE selfcheck ...
c:10080 -> s:851 udp Amanda 2.5 ACK HANDLE ...
c:10080 -> s:851 udp Amanda 2.5 REP HANDLE ...
s:851 -> c:10080 udp Amanda 2.4 ACK HANDLE ...

s:846 -> c:10080 udp Amanda 2.4 REQ HANDLE ... SERVICE noop ...
c:10080 -> s:846 udp Amanda 2.5 ACK HANDLE ...
c:10080 -> s:846 udp Amanda 2.5 REP HANDLE ...
s:846 -> c:10080 udp Amanda 2.4 ACK HANDLE ...
s:846 -> c:10080 udp Amanda 2.4 REQ HANDLE ... SERVICE sendsize ...
c:10080 -> s:846 udp Amanda 2.5 ACK HANDLE ...
c:10080 -> s:846 udp Amanda 2.5 REP HANDLE ...
s:846 -> c:10080 udp Amanda 2.4 ACK HANDLE ...
s:846 -> c:10080 udp Amanda 2.4 REQ HANDLE ... SERVICE sendbackup ...
c:10080 -> s:846 udp Amanda 2.5 ACK HANDLE ...
c:10080 -> s:846 udp Amanda 2.5 REP HANDLE ... CONNECT DATA 56639 MESG 56638 
INDEX 56637 ...
s:846 -> c:10080 udp Amanda 2.4 ACK HANDLE ...

s:50029 -> c:56639 tcp SYN
c:56639 -> s:50029 tcp SYN,RST

Oops. IIUC this TCP connection is supposed to transfer the actual data
to back up. Hmm, why could it be reset by the client?

TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


connection problem during `sendbackup' stage

2008-09-05 Thread Jukka Salmi
Hello,

I'm trying to add a NetBSD system running Amanda 2.5.2p1 as a backup
client to an existing backup server running Amanda 2.4.4p3 on Linux.

Running amcheck(8) shows no problems, and both the `noop' and `sendsize'
stages during an amdump(8) run seem to be fine. But then, during the
`sendbackup' stage, some strange connection problem occurs: running
amstatus(8) on the server shows lines like

  myclient:/  0  27800k wait for dumping driver: (aborted:[request timeout])

for all disks from that new client, and the Amanda mail report contains
lines like

  myclient / lev 0 FAILED 20080904[could not connect to myclient]

for all of those. On the client system, syslogd receives several
messages about

  sendbackup[n]: index tee cannot write [Broken pipe]

during the amdump run.

Both hosts involved are connected to the same IP subnet, without any
packet filtering done in between.


Any hint about what could be the problem here?

TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: amreport: ERROR unexpected log line: ...

2008-06-29 Thread Jukka Salmi
Dustin J. Mitchell --> amanda-users (2008-06-29 14:22:16 -0400):
> I would remove the "#ifdef HAVE_ALLOCA_H" from amanda.h and just
> unconditionally include alloca.h.

Hmm, the problem was that there is no alloca.h on my systems, so this
won't work ;-)

The attached patch fixes the problem for me, but I doubt it's the
correct way to do it...


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~
--- config/gnulib/alloca.m4.orig2007-05-04 13:39:05.0 +0200
+++ config/gnulib/alloca.m4 2008-06-28 19:00:37.0 +0200
@@ -39,10 +39,6 @@ AC_DEFUN([gl_FUNC_ALLOCA],
 ALLOCA_H=alloca.h
   fi
   AC_SUBST([ALLOCA_H])
-
-  AC_DEFINE(HAVE_ALLOCA_H, 1,
-[Define HAVE_ALLOCA_H for backward compatibility with older code
- that includes <alloca.h> only if HAVE_ALLOCA_H is defined.])
 ])
 
 # Prerequisites of lib/alloca.c.
--- configure.orig  2007-06-07 01:22:45.0 +0200
+++ configure   2008-06-28 19:02:47.0 +0200
@@ -8174,9 +8174,6 @@ _ACEOF
 
 
 
-cat >>confdefs.h <<\_ACEOF
-#define HAVE_ALLOCA_H 1
-_ACEOF
 
 
 


Re: amreport: ERROR unexpected log line: ...

2008-06-29 Thread Jukka Salmi
Dustin J. Mitchell --> amanda-users (2008-06-28 13:53:43 -0400):
> Actually, it's not quite "wrong."  Gnulib is a little weird.  What the
> rest of that macro does is to *check* for alloca.h, and if it's not
> found in the system, create a local copy of the file.  I think that
> the compiler wasn't finding it because the amflock tests didn't
  ^^
I assume with "it" you mean the "local copy of the file", and not the
alloca.h "in the system" (which doesn't exist on my system), correct?


> provide all of the CFLAGS that are eventually present when source
> files are actually compiled.  This was one of a few places in the
> configure script that used the rather unorthodox technique of
> compiling full *.c files, rather than constructing minimal
> "conftest.c" files directly (the normal technique).  This was part of
> the reason that this code was replaced in 2.6.0.

Nice to hear :)

Hmm, what fix would you advise for 2.5? Sync the CFLAGS? (I want to
fix the Amanda package in [1]pkgsrc which is still at 2.5.)


> 1. This problem won't recur in 2.6.0 because configure won't try to
> build any files that #include  during the configure process.
> 
> 2. We don't use alloca() in the Amanda codebase, so I've added a
> ticket to remove the #include from amanda.h.
> 
> 3. Gnulib's vasnprintf *does* use alloca, so the gnulib m4 and source
> files will remain in the codebase.  I don't think this will cause a
> problem due to point 1.
> 
> Nice job investigating this, by the way!  If you want to stick around
> and hack on Amanda, we'd love to have you.

Thanks; you'll probably hear from me when I reconfigure my Amanda
systems to use Kerberos ;-)


Regards, Jukka

[1] http://www.pkgsrc.org/

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: amreport: ERROR unexpected log line: ...

2008-06-28 Thread Jukka Salmi
Dustin J. Mitchell --> amanda-users (2008-06-27 13:56:06 -0400):
> On Thu, Jun 26, 2008 at 11:34 AM, Jukka Salmi <[EMAIL PROTECTED]> wrote:
> > This is on NetBSD/i386 where at least fcntl, flock and lockf are
> > available; config.log reveals that the record locking function tests
> > failed because HAVE_ALLOCA_H was defined but there's no alloca.h on
> > my system.
> 
> We had a bit of this sort of trouble in 2.5.2, and the code has been
> re-worked in 2.6.0.  I imagine that some rather rough edits to the
> source (removing #include <alloca.h> from amanda.h) will fix the
> problem.

I'm not familiar with the autotools and with m4, but AFAICT the source
of this problem is in config/gnulib/alloca.m4. The resulting `configure'
script unconditionally defines HAVE_ALLOCA_H which is obviously wrong.
Removing that part (see attached patch) fixes the problem for me:

[...]
checking for working alloca.h... no
checking for alloca... yes
[...]
checking whether posix fcntl locking works... yes
[...]

This code seems to be in 2.6 as well.


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~
--- config/gnulib/alloca.m4.orig2007-05-04 13:39:05.0 +0200
+++ config/gnulib/alloca.m4 2008-06-28 19:00:37.0 +0200
@@ -39,10 +39,6 @@ AC_DEFUN([gl_FUNC_ALLOCA],
 ALLOCA_H=alloca.h
   fi
   AC_SUBST([ALLOCA_H])
-
-  AC_DEFINE(HAVE_ALLOCA_H, 1,
-[Define HAVE_ALLOCA_H for backward compatibility with older code
- that includes <alloca.h> only if HAVE_ALLOCA_H is defined.])
 ])
 
 # Prerequisites of lib/alloca.c.
--- configure.orig  2007-06-07 01:22:45.0 +0200
+++ configure   2008-06-28 19:02:47.0 +0200
@@ -8174,9 +8174,6 @@ _ACEOF
 
 
 
-cat >>confdefs.h <<\_ACEOF
-#define HAVE_ALLOCA_H 1
-_ACEOF
 
 
 


Re: amreport: ERROR unexpected log line: ...

2008-06-26 Thread Jukka Salmi
Jean-Louis Martineau --> amanda-users (2008-06-26 10:12:11 -0400):
> Amanda lock the file while writing to it.
> What is the output of: amadmin vv version | grep LOCKING
> Which filesystem is used, is it NFS mounted?

Hmm, amadmin reveals that

LOCKING=**NONE**

on my system, which probably explains the problem... Furthermore,
reading ./configure output more carefully, I see

[...]
checking for working alloca.h... no
checking for alloca... yes
[...]
checking whether posix fcntl locking works... no
checking whether flock locking works... no
checking whether lockf locking works... no
checking whether lnlock locking works... no
configure: WARNING: *** No working file locking capability found!
configure: WARNING: *** Be VERY VERY careful.
[...]

This is on NetBSD/i386 where at least fcntl, flock and lockf are
available; config.log reveals that the record locking function tests
failed because HAVE_ALLOCA_H was defined but there's no alloca.h on
my system.

I'll debug this further tomorrow since I just ran out of time ;-)


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: amreport: ERROR unexpected log line: ...

2008-06-26 Thread Jukka Salmi
Dustin J. Mitchell --> amanda-users (2008-06-26 10:11:54 -0400):
[...]
> Unfortunately, that file is basically the intermingled freeform stderr
> of just about every process spawned by amdump/amflush, so there's no
> single interface through which we can funnel all notifications.  The
> long-term plan is to replace the whole "fleet" of Amanda processes
> with a single process, using the transfer architecture to juggle
> multiple concurrent data transfers.  Your suggestion seems a good
> short-term fix.  I don't have any immediate ideas for a mid-term fix,
> but I'm open to suggestions.

What about using record locking on a file in the logdir (maybe on the
logfile itself)? I.e. making sure that all involved processes always
acquire an exclusive lock, then write the log and afterwards release
the lock?
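
(Just to illustrate the idea in shell terms - this is not Amanda code, and
flock(1) and the variable names are merely stand-ins for whatever primitive
the C code would use:)

  (
      flock -x 9                       # block until we hold the exclusive lock
      printf '%s\n' "$logline" >&9     # append one complete log line
  ) 9>>"$logdir/log"

Every writer takes the lock, appends a complete line, and releases it again.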


> It's interesting that we haven't seen this happen more often.

Indeed.


> Do you want to send along a patch?

Maybe I'll find some time next week to have a look at it. But first
let's decide on how to fix it ;-)


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: amreport: ERROR unexpected log line: ...

2008-06-26 Thread Jukka Salmi
Jukka Salmi --> amanda-users (2008-06-25 12:37:15 +0200):
> Hello,
> 
> since I upgraded an Amanda installation from 2.4.4p4 to 2.5.2p1, backup
> reports always contain lines like these:
> 
> FAILURE AND STRANGE DUMP SUMMARY:
>   amreport: ERROR unexpected log line: 20080625 2 [sec 45.227 kb 41655 kps 
> 921.7]
>   amreport: ERROR unexpected log line: 20080625 0 [sec 35.747 kb 67281 kps 
> 1883.0]
> 
> The log file which causes these warnings contains amongst others the
> following two lines:
> 
> grouper.salmi.ch /var/spool/imap 20080625 2 [sec 45.227 kb 41655 kps 921.7]
> grouper.salmi.ch /home 20080625 0 [sec 35.747 kb 67281 kps 1883.0]
> 
> (grouper is the hostname of the host running amdump.)
> 
> 
> Any hints about what could be wrong here?

I just had a closer look at this problem and it seems to be caused by
a race condition between programs started by Amanda's driver program:
all of these programs (dumper, chunker, taper) and the driver program
itself write to the same log file. Since these programs run
simultaneously, their calls to log_add() may result in interleaved
writes. And this is exactly what happens here almost daily: e.g.
yesterday's logfile contains

SUCCESS dumper grouper.salmi.ch /home 20080625 0 [...]
SUCCESS chunker STATS driver estimate grouper.salmi.ch /home 20080625 0 [...]
grouper.salmi.ch /home 20080625 0 [...]
SUCCESS taper grouper.salmi.ch /home 20080625 0 [...]

The chunker and the driver wrote to the log file at the same time...

To fix this problem correctly, the writing of the log file should be
synchronised. As a hack, the two calls to fullwrite() in log_add()
could be replaced by a single call; this would probably cause the
problem to occur less often. I'd certainly prefer the correct
solution...

Any comments?


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


amreport: ERROR unexpected log line: ...

2008-06-25 Thread Jukka Salmi
Hello,

since I upgraded an Amanda installation from 2.4.4p4 to 2.5.2p1, backup
reports always contain lines like these:

FAILURE AND STRANGE DUMP SUMMARY:
  amreport: ERROR unexpected log line: 20080625 2 [sec 45.227 kb 41655 kps 
921.7]
  amreport: ERROR unexpected log line: 20080625 0 [sec 35.747 kb 67281 kps 
1883.0]

The log file which causes these warnings contains amongst others the
following two lines:

grouper.salmi.ch /var/spool/imap 20080625 2 [sec 45.227 kb 41655 kps 921.7]
grouper.salmi.ch /home 20080625 0 [sec 35.747 kb 67281 kps 1883.0]

(grouper is the hostname of the host running amdump.)


Any hints about what could be wrong here?

TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: how to repeat last backup?

2006-06-17 Thread Jukka Salmi
Joshua Baker-LePain --> amanda-users (2006-06-17 06:53:42 -0400):
> On Sat, 17 Jun 2006 at 10:33am, Jukka Salmi wrote
> 
> >to backup some client systems I use an Amanda config containing
> >
> > dumpcycle 0 days
> > tapecycle 4 tapes
> >
> >which I run manually from time to time. The config uses the file driver
> >(in case this matters).
> >
> >Because on some clients lots of files were moved during the last amdump
> >run, I'd like to repeat it - but I'd like it to overwrite the tape
> >which was used in the "failed" run. How can I achieve this? Simply
> >running amdump again uses the next tape...
> 
> amrmtape $CONFIG $TAPEYOUWANTTORERUN
> amlabel $CONFIG $TAPEYOUWANTTORERUN
> amdump $CONFIG

Thanks a lot!

For the archives: I needed to specify the slot number to amlabel
(`amlabel $CONFIG $TAPETORERUN slot $slot'), otherwise it tried to
label one of the active tapes.


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


how to repeat last backup?

2006-06-17 Thread Jukka Salmi
Hello,

to backup some client systems I use an Amanda config containing

dumpcycle 0 days
tapecycle 4 tapes

which I run manually from time to time. The config uses the file driver
(in case this matters).

Because on some clients lots of files were moved during the last amdump
run, I'd like to repeat it - but I'd like it to overwrite the tape
which was used in the "failed" run. How can I achieve this? Simply
running amdump again uses the next tape...


TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: Tar archive size limitations was Preventing yum or up2date to upd ate gnu tar

2005-06-03 Thread Jukka Salmi
Lengyel, Florian --> owner-amanda-users (2005-06-03 11:18:05 -0400):
> This business about tar reminds me of a not-terribly well documented problem
> with tar: under some operating systems, tar has a 2 gigabyte limit on the
> size of 
> the tar archive! The tar that came with red hat 7.3 had this limitation, I
> believe. 
> This is listed on the web somewhere--perhaps it should be part of the FAQ.
> 
> Even splitting DLEs into 10 gig chunks won't help if the tar archive is
> silently truncated...  

I don't use Linux anymore, but IIRC tar is not to blame here: it's
probably a [1]LFS problem.


Cheers, Jukka

[1] http://www.suse.de/~aj/linux_lfs.html

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: amverify looping

2005-06-03 Thread Jukka Salmi
Hi,

Jean-Louis Martineau --> amanda-users (2005-06-03 10:10:19 -0400):
> Could you try the attached patch.

thanks. After applying it amverify doesn't loop endlessly anymore:

[ ... successful checks ... ]
Checked host1._opt.20050603.1
** Error detected (host1._opt.20050603.1)
amrestore: WARNING: not at start of tape, file numbers will be offset
amrestore:   0: restoring host1._opt.20050603.1
gzip: truncated input
Error 32 (Broken pipe) offset 10485760+32768, wrote 0
amrestore: pipe reader has quit in middle of file.
Level 1 dump of /opt on host1:/dev/rwd0m
Label: none
64+0 in
64+0 out
[ ... same message six times ... ]
Too many errors.
Rewinding...
Errors found: 
DAILY02 (host1._opt.20050603.1):
amrestore: WARNING: not at start of tape, file numbers will be offset
amrestore:   0: restoring host1._opt.20050603.1
gzip: truncated input
Error 32 (Broken pipe) offset 10485760+32768, wrote 0
amrestore: pipe reader has quit in middle of file.
Level 1 dump of /opt on host1:/dev/rwd0m
Label: none
64+0 in
64+0 out
[ ... same message six times ... ]


> If it doesn't work, try the following commands:
> mt -f <tape-device> rewind
> mt -f <tape-device> fsf 12
> dd if=<tape-device> bs=32k count=1
> 
> Send me the output of the dd command.

I'm using the file driver and chg-disk. Would you like to see the first
32k of the dump file in question?


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: amverify looping

2005-06-03 Thread Jukka Salmi
Jukka Salmi --> amanda-users (2005-06-03 13:14:09 +0200):
> Hi,
> 
> on an Amanda 2.4.4p4 system which ran fine for some months I noticed today
> amverify was still running after about 5 hours; normally it only takes some
> minutes to complete. I killed the process, and then received the so far
> missing verify report (which was about 3MB...). For some reason a certain
> file system was checked over and over again:
> 
> - AMANDA VERIFY REPORT ---
> Tapes:  DAILY02 
> Errors found:
> aborted! 
>
> amverify Daily
> Fri Jun  3 03:50:13 CEST 2005
>   
> Loading current slot...
> Using device file:/var/amanda/vtapes/Daily
> Volume DAILY02, Date 20050603
> Checked host1._pkgbuild_etc.20050603.1
> Checked host2._.20050603.1 
> Checked host1._pkgbuild_home.20050603.0
> Checked host2._usr.20050603.1
> Checked host1._src.20050603.1
> Checked host2._var.20050603.1
> Checked host2._etc.20050603.0
> Checked host1._.20050603.1
> Checked host1._etc.20050603.0
> Checked host1._var_spool_imap.20050603.1
> Checked host2._home.20050603.0
> Checked host1._opt.20050603.1
> Checked host1._opt.20050603.1
> Checked host1._opt.20050603.1
> Checked host1._opt.20050603.1
> Checked host1._opt.20050603.1
> [ ... about 7 identical lines skipped ... ]
> Checked host1._opt.20050603.1
> Checked host1._opt.20050603.1
> Checked host1._opt.20050603.1
> aborted!
> - END OF AMANDA VERIFY REPORT -
> 
> This is 100% reproducible. Unfortunately I can't find the source of the
> problem. Hints are welcome!

After adding some lines to the amverify script I was able to see the
following error message in amrestore.out:

amrestore: WARNING: not at start of tape, file numbers will be offset
amrestore:   0: restoring host1._opt.20050603.1
gzip: truncated input
Error 32 (Broken pipe) offset 10485760+32768, wrote 0
amrestore: pipe reader has quit in middle of file.

The dump file in question is not gzipped, so I would have expected a
message like

gzip: input not gziped (MAGIC0)

which I can see is printed for other dump files before amverify starts to
loop endlessly at this point.


Any hints what could cause this problem?

TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


amverify looping

2005-06-03 Thread Jukka Salmi
Hi,

on an Amanda 2.4.4p4 system which ran fine for some months I noticed today
amverify was still running after about 5 hours; normally it only takes some
minutes to complete. I killed the process, and then received the so far
missing verify report (which was about 3MB...). For some reason a certain
file system was checked over and over again:

- AMANDA VERIFY REPORT ---
Tapes:  DAILY02 
Errors found:
aborted! 
   
amverify Daily
Fri Jun  3 03:50:13 CEST 2005
  
Loading current slot...
Using device file:/var/amanda/vtapes/Daily
Volume DAILY02, Date 20050603
Checked host1._pkgbuild_etc.20050603.1
Checked host2._.20050603.1 
Checked host1._pkgbuild_home.20050603.0
Checked host2._usr.20050603.1
Checked host1._src.20050603.1
Checked host2._var.20050603.1
Checked host2._etc.20050603.0
Checked host1._.20050603.1
Checked host1._etc.20050603.0
Checked host1._var_spool_imap.20050603.1
Checked host2._home.20050603.0
Checked host1._opt.20050603.1
Checked host1._opt.20050603.1
Checked host1._opt.20050603.1
Checked host1._opt.20050603.1
Checked host1._opt.20050603.1
[ ... about 7 identical lines skipped ... ]
Checked host1._opt.20050603.1
Checked host1._opt.20050603.1
Checked host1._opt.20050603.1
aborted!
- END OF AMANDA VERIFY REPORT -

This is 100% reproducible. Unfortunately I can't find the source of the
problem. Hints are welcome!


TIA, Jukka

P.S.: this is on NetBSD 2.0 with Amanda built from pkgsrc, in case this
matters.

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: dumpcycle / runspercycle / tapecycle best practices?

2004-11-04 Thread Jukka Salmi
Eric Siegerman --> amanda-users (2004-11-04 14:02:13 -0500):
> On Wed, Nov 03, 2004 at 10:53:59PM +0100, Jukka Salmi wrote:
> > Hmm, when setting dumpcycle to zero, to what value should runspercycle
> > be set if amdump runs once a day? Zero ("same as dumpcycle") or one?
> 
> Both settings are equivalent.  From planner.c:
> if (runs_per_cycle <= 0) {
> runs_per_cycle = 1;
> }
> 
> So pick the one that looks nicer :-)

Fine, thanks!

Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: dumpcycle / runspercycle / tapecycle best practices?

2004-11-03 Thread Jukka Salmi
Hi,

Erik Anderson --> amanda-users (2004-11-03 14:44:55 -0600):
[...]
> full backup every night.
[...]
> dumpcycle 1 day

According to the amanda man page you should set dumpcycle to zero to
get full dumps each run.

Hmm, when setting dumpcycle to zero, to what value should runspercycle
be set if amdump runs once a day? Zero ("same as dumpcycle") or one?
Does it matter at all?


TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: how to automate tape changing

2004-10-18 Thread Jukka Salmi
Hello,

Paul Bijnens --> amanda-users (2004-10-18 22:14:10 +0200):
> Before the chg-disk tape changer was written, I used the chg-multi
> changer with the file-driver.  It's a little more complicated
> to configure, but the advantage is that it finds and load automatically
> the vtapes.

To what extent (with regard to functionality) does this differ from
using chg-disk and setting amrecover_changer to the same value as
tapedev?
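
(I.e., roughly this kind of setup; the paths are just examples:)

  tpchanger "chg-disk"
  changerfile "/etc/amanda/Daily/changer"
  tapedev "file:/var/amanda/vtapes/Daily"
  amrecover_changer "file:/var/amanda/vtapes/Daily"   # same value as tapedev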


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: how to automate tape changing

2004-10-18 Thread Jukka Salmi
Toralf Lund --> amanda-users (2004-10-18 16:07:48 +0200):
> Jukka Salmi wrote:
> 
> >Hi,
> >
> >I'm using the chg-disk tape changer. When restoring files using
> >amrecover, after adding some files and issuing the extract command,
> >amrecover tells me what tapes are needed, and asks me to "Load tape
> ><label> now". I load the needed tape using amtape, and tell amrecover
> >to continue. After a while I'm prompted to load another tape, and
> >so on...
> >
> >Is it possible to automate the process of loading the needed tapes?
> > 
> >
> Yes. I'm using the following tape device setup to do this:
> 
> tpchanger "chg-zd-mtx"
> tapedev "/dev/nrtape"
> rawtapedev "/dev/tape"
> changerfile "chg-mtx"
> changerdev "/dev/changer"
> 
> amrecover_changer "/dev/nrtape"
> amrecover_do_fsf yes
> amrecover_check_label yes
> 
> It's amrecover_changer that does the trick; essentially it tells
> amrecover to use the tape changer if the tape device is set to /dev/nrtape.

That's exactly what I was looking for. Thank you!


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


how to automate tape changing

2004-10-16 Thread Jukka Salmi
Hi,

I'm using the chg-disk tape changer. When restoring files using
amrecover, after adding some files and issuing the extract command,
amrecover tells me what tapes are needed, and asks me to "Load tape
 now". I load the needed tape using amtape, and tell amrecover
to continue. After a while I'm promted to load another tape, and
so on...

Is it possible to automate the process of loading the needed tapes?
That's not very important, but maybe "nice to have".


TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: How to get rid of unflushed dumps?

2004-08-27 Thread Jukka Salmi
Hi,

Greg Troxel --> amanda-users (2004-08-27 08:38:13 -0400):
> rm'ing the holding dir should work fine.  run amadmin config find
> before and after and you'll note that the disks are recorded as being
> in the holding dir, but not assigned to a tape.

Worked fine, thanks!
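
(For the archives, with made-up config name and holding disk path, the whole
thing boils down to:)

  $ amadmin Daily find          # the dumps show up with no tape assigned
  $ rm -r /var/amanda/holding/20040826
  $ amadmin Daily find          # compare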


> The only issue I can see is getting amanda's dump levels and
> /etc/dumpdates out of sync, but if these are old and you have more
> recent level 0s, that won't matter.  I would guess that's the case or
> you wouldn't want to delete them...

Exactly. (This particular system does daily level 0 backups exclusively,
so unflushed dumps are obsolete after one day.)


Cheers, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


How to get rid of unflushed dumps?

2004-08-27 Thread Jukka Salmi
Hi,

if for some reason dumps have been left on the holding disk, Amanda
recommends running amflush. That works fine, but sometimes - especially
if those dumps are already very old - I don't want to flush them to
tape, because I don't need them anymore. What should I do in such a
situation? Is it safe to just delete the appropriate folder on the
holding disk (named +%Y%m%d)?


TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: tapecycle <= runspercycle

2004-06-23 Thread Jukka Salmi
Paul Bijnens --> amanda-users (2004-06-23 14:53:04 +0200):
> In that case, yes, just ignore the note.

OK, I'll try that... But hmm, why does one get warned if
tapecycle <= runspercycle? How could that be a problem?


> Oops.  I was wrong.  The note is there indeed.
> (why didn't I see that this morning?)

You probably tried out the advice you gave me ;-)


Regards, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


Re: tapecycle <= runspercycle

2004-06-23 Thread Jukka Salmi
Paul Bijnens --> amanda-users (2004-06-23 13:14:45 +0200):
> Jukka Salmi wrote:
> 
> >To achieve this I set dumpcycle and runspercycle to 0, and tapecycle
> >and runtapes to 1.
> >
> >This seems to work so far, except for the planner being discontent:
> >
> >NOTES:
> >  planner: tapecycle (1) <= runspercycle (1)
> 
> If I try this in amanda 2.4.4p3, then planner does not warn.

Strange. I had a short look at planner.c, seems that part was not changed
between 2.4.4p2 and p3 (see line 336 ff.).


> Have you really set runspercycle=0 and not 1, as the errors message
> suggests?

Yes. "runspercycle 0" means "same as dumpcycle"; dumpcycle is 0 which
means "full backup each run". At least that's how I understand amanda(8).


Cheers, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~


tapecycle <= runspercycle

2004-06-23 Thread Jukka Salmi
Hello,

using Amanda 2.4.4p2, I'd like to do daily full backups, no matter
what happens. I.e.: There are about 20 tapes, and humans (as opposed
to Amanda) are responsible for the correct tape being loaded before
each amdump run. If one forgets to change tapes the next run should
overwrite the tape which is still inserted.

To achieve this I set dumpcycle and runspercycle to 0, and tapecycle
and runtapes to 1.
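
Spelled out in amanda.conf:

  dumpcycle 0           # full backup on every run
  runspercycle 0        # "same as dumpcycle"
  tapecycle 1 tapes
  runtapes 1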

This seems to work so far, except for the planner being discontent:

NOTES:
  planner: tapecycle (1) <= runspercycle (1)


Does my setup make sense? I'm quite new to Amanda. Should I just
ignore the planner note?

Hints are appreciated!


TIA, Jukka

-- 
bashian roulette:
$ ((RANDOM%6)) || rm -rf ~