amcheckdump & xfsrestore

2021-12-19 Thread Bernhard Erdmann
Hello,

amcheckdump does not work well with xfsrestore on Linux / CentOS 7.

Two lines from
/tmp/amanda/server/be-weekly/amcheckdump.20211219163126.debug:

So Dez 19 16:31:31.502440996 2021: pid 5771: thd-0xf4c400: amcheckdump:
 spawning: '/sbin/xfsrestore' '-t' '-v' 'silent'

So Dez 19 16:31:31.524325240 2021: pid 5771: thd-0xf4c400: amcheckdump:
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:582:info:4900018
application stdout: /sbin/xfsrestore: ERROR: no source file(s) specified

I guess that a single "-" is missing when xfsrestore is called.

"man xfsrestore" states:

   -f source [ -f source ... ]
          Specifies a source of the dump to be restored. This can be the
          pathname of a device (such as a tape drive), a regular file or
          a remote tape drive (see rmt(8)). This option must be omitted
          if the standard input option (a lone - preceding the dest
          specification) is specified.
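
Putting those together, the call that amcheckdump makes should presumably end
with a lone "-" so that xfsrestore reads the image from standard input; a
sketch of the expected invocation (with the image piped in by amcheckdump):

$ /sbin/xfsrestore -t -v silent - < dump_image

("dump_image" is only a stand-in here for the data amcheckdump feeds on stdin.)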


$ amcheckdump be-weekly
1 volume(s) needed for restoration
The following volumes are needed: BE-weekly-01

Validating image james:/var dumped 20211219122951 level 0
Reading volume BE-weekly-01 file 1
application stderr: /sbin/xfsrestore: usage: xfsrestore [ -a  ... ]
application stderr: [ -b  ]
application stderr: [ -c   ]
application stderr: [ -e (don't overwrite existing files) ]
application stderr: [ -f  ... ]
application stderr: [ -h (help) ]
application stderr: [ -i (interactive) ]
application stderr: [ -m (force usage of minimal rmt) ]
application stderr: [ -n  (restore only if newer than) ]
application stderr: [ -o (restore owner/group even if not root) ]
application stderr: [ -p  ]
application stderr: [ -q  ]
application stderr: [ -r (cumulative restore) ]
application stderr: [ -s  ... ]
application stderr: [ -t (contents only) ]
application stderr: [ -v  ]
application stderr: [ -w (use small tree window) ]
application stderr: [ -A (don't restore extended file attributes) ]
application stderr: [ -B (restore root dir owner/permissions) ]
application stderr: [ -D (restore DMAPI event settings) ]
application stderr: [ -E (don't overwrite if changed) ]
application stderr: [ -F (don't prompt) ]
application stderr: [ -I (display dump inventory) ]
application stderr: [ -J (inhibit inventory update) ]
application stderr: [ -K (force use of format 2 generation numbers) ]
application stderr: [ -L  ]
application stderr: [ -O  ]
application stderr: [ -Q (force interrupted session completion) ]
application stderr: [ -R (resume) ]
application stderr: [ -S  ]
application stderr: [ -T (don't timeout dialogs) ]
application stderr: [ -X  ... ]
application stderr: [ -Y  ]
application stderr: [ - (stdin) ]
application stderr: [  ]
1024 kb
/sbin/xfsrestore exited with status 1
17 images not validated.
So Dez 19 16:31:26.734000527 2021: pid 5771: thd-0xf4c400: amcheckdump: pid 
5771 ruid 33 euid 33 version 3.5.1: start at Sun Dec 19 16:31:26 2021
So Dez 19 16:31:26.734079805 2021: pid 5771: thd-0xf4c400: amcheckdump: 
Arguments: be-weekly
So Dez 19 16:31:26.734396952 2021: pid 5771: thd-0xf4c400: amcheckdump: reading 
config file /var/lib/amanda/be-weekly/amanda.conf
So Dez 19 16:31:26.734512841 2021: pid 5771: thd-0xf4c400: amcheckdump: reading 
config file /var/lib/amanda/amanda.conf.main
So Dez 19 16:31:26.736554294 2021: pid 5771: thd-0xf4c400: amcheckdump: pid 
5771 ruid 33 euid 33 version 3.5.1: rename at Sun Dec 19 16:31:26 2021
So Dez 19 16:31:26.737519355 2021: pid 5771: thd-0xf4c400: amcheckdump: 
beginning trace log: /var/lib/amanda/be-weekly/log/log.20211219163126.0
So Dez 19 16:31:26.756621520 2021: pid 5771: thd-0xf4c400: amcheckdump: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1303:success:492 1 volume(s) 
needed for restoration
The following volumes are needed: BE-weekly-01

So Dez 19 16:31:26.756999059 2021: 

Re: Backing up remote server

2020-10-21 Thread Bernhard Erdmann
On 21.10.20 at 18:24, Robert Wolfe wrote:
> This would be my assumption as well, but I am having issues on finding a
> working xinetd file for that (running under RHEL 7.x and 8.x).

From my second CentOS 6.10 amanda server using amanda-3.5.1:

$ head -n20 /etc/xinetd.d/am*
==> /etc/xinetd.d/amanda <==
service amanda
{
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = amanda
        group       = disk
        groups      = yes
        server      = /opt/amanda/libexec/amanda/amandad
        server_args = -auth=bsdtcp amdump amindexd amidxtaped
}

==> /etc/xinetd.d/amandaidx <==
service amandaidx
{
        disable     = yes
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = amanda
        group       = disk
        groups      = yes
        server      = /opt/amanda/libexec/amanda/amindexd
}

==> /etc/xinetd.d/amidxtape <==
service amidxtape
{
        disable     = yes
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = amanda
        group       = disk
        groups      = yes
        server      = /opt/amanda/libexec/amanda/amidxtaped
}

From a general CentOS 7.8.2003 server running amanda-3.4.5 (which only gets
backed up):

$ cat /etc/xinetd.d/amanda
service amanda
{
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = amanda
        group       = disk
        groups      = yes
        server      = /opt/amanda/libexec/amanda/amandad
        server_args = -auth=bsdtcp amdump
}
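
To activate such a file on RHEL/CentOS 7, a minimal sketch (assuming xinetd is
installed and the amanda service/port is already defined in /etc/services)
would be:

$ systemctl restart xinetd
$ amcheck -c CONF        # client-only checks, run from the Amanda server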


Re: restore from vtapes written by amvault

2020-10-18 Thread Bernhard Erdmann
Hello,

On 18.10.20 at 18:08, Nathan Stratton Treadway wrote:
> So, when you ran your perl patching on the log files, were there any
> lines other than the "DONE taper" lines that got changed?  

I checked the last two logfiles. Only lines beginning with 'DONE taper
"ST:vtape" ' were changed, e.g.

-DONE taper "ST:vtape" rs6000-1 /usr 2003011600 1 0 [sec 61.00 bytes 661422080 kps 10588.852459 orig-kb 0]
+DONE taper "ST:vtape" rs6000-1 /usr 20030116 1 0 [sec 61.00 bytes 661422080 kps 10588.852459 orig-kb 0]
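
(That check can be reproduced with a plain diff against the backup copies,
assuming the unpatched logs were saved to ../log_backup as in the next
message, e.g.:

$ diff -u ../log_backup/log.20201018130312.0 log.20201018130312.0

which should list only the 'DONE taper "ST:vtape"' lines as changed.)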


Re: restore from vtapes written by amvault

2020-10-18 Thread Bernhard Erdmann
Hello,

On 07.10.20 at 23:50, Nathan Stratton Treadway wrote:
> That is, I would make a copy of the original log.20201004123343.0 file
> into some other directory, then used a txt editor to edit that
> particular DONE line to remove the "00" at the end of the
> datetimestamp field... and then run the "amadmin ... find" command again
> to see if that edit allowed it to start finding the vaulted copies of the
> dumps.

I ended up patching the logfiles written by amvault, e.g. for

$ fgrep 20020908 ../tapelist
20020908 BE-full-43 reuse
$ amvault --dest-storage vtape be-full \* \* 20020908

I get log.20201018130312.0 afterwards. Then I do

$ cp -p log.20201018130312.0 ../log_backup
$ perl -p -i -e 's/ 2002090800 / 20020908 /g' log.20201018130312.0

and then amvaulted dump images written to vtapes can be located by
amadmin find:

$ amadmin be-full find indigo-2 | egrep "date|2002-09"
date   host disk lv storage pooltape or file file part status
2002-09-08 indigo-2 / 0 vtape   vtape   vBE-full-014   54  1/1 OK
2002-09-08 indigo-2 / 0 be-full be-full BE-full-43 54 1/-1 OK

$ amfetchdump -ostorage=vtape be-full indigo-2 / 20020908
1 volume(s) needed for restoration
The following volumes are needed: vBE-full-014

Press enter when ready


Reading label 'vBE-full-014' filenum 54
FILE: date 20020908 host indigo-2 disk / lev 0 comp N program /sbin/xfsdump
1819872 kb
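
With more than one amvault log to fix, the same edit could be scripted; a
rough sketch (the filename glob and datestamp pattern are illustrative, and
the logs should be copied away first as above):

$ for f in log.202010*.0; do
>   cp -p "$f" ../log_backup/
>   perl -p -i -e 's/^(DONE taper "ST:vtape" \S+ \S+ )(\d{8})00 /$1$2 /' "$f"
> done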


Re: restore from vtapes written by amvault

2020-10-11 Thread Bernhard Erdmann
Hello,

On 11.10.20 at 00:50, Nathan Stratton Treadway wrote:
> So, does the following command work any better?:
>   $ amfetchdump -ostorage=vtape -d vtape be-full svr '^/$' 2119

great idea, many thanks!

$ amfetchdump -ostorage=vtape -d vtape be-full svr '^/$' 2119
1 volume(s) needed for restoration
The following volumes are needed: vBE-full-001

Press enter when ready
^C

It even works without "-d vtape":

$ amfetchdump -ostorage=vtape be-full svr '^/$' 2119
1 volume(s) needed for restoration
The following volumes are needed: vBE-full-001

Press enter when ready


Reading label 'vBE-full-001' filenum 2
FILE: date 2119 host svr disk / lev 0 comp .gz program /sbin/dump
13312 kb
$ file svr._.2119.0
svr._.2119.0: gzip compressed data, from Unix, last modified: Wed
Jan 19 14:10:55 2000, max speed
$ zcat  ->
)>
So Okt 11 11:09:18.898993261 2020: pid 1453: thd-0x13ba550: amfetchdump:
Final linkage:  -(PULL_BUFFER)->
 -(WRITEFD)-> 
So Okt 11 11:09:18.899008369 2020: pid 1453: thd-0x13ba550: amfetchdump:
setup_impl: 3, 2
So Okt 11 11:09:18.899104583 2020: pid 1453: thd-0x13ba550: amfetchdump:
xfer_queue_message: MSG:  version=0>
So Okt 11 11:09:18.899163214 2020: pid 1453: thd-0x13ba550: amfetchdump:
Amanda::Recovery::Clerk: starting recovery
So Okt 11 11:09:18.899179840 2020: pid 1453: thd-0x2886b70: amfetchdump:
pull_and_write
So Okt 11 11:09:18.900882824 2020: pid 1453: thd-0x13ba550: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/FetchDump.pm:165:info:333 Reading
label 'vBE-full-001' filenum 2
FILE: date 2119 host svr disk / lev 0 comp .gz program /sbin/dump
So Okt 11 11:09:18.901234409 2020: pid 1453: thd-0x13ba550: amfetchdump:
Amanda::Recovery::Clerk: reading file 2 on 'vBE-full-001'
So Okt 11 11:09:19.036676900 2020: pid 1453: thd-0x2886b70: amfetchdump:
Device file:/var/lib/vtape/be-full/slot1 error = 'EOF'
So Okt 11 11:09:19.036718143 2020: pid 1453: thd-0x2886b70: amfetchdump:
xfer_queue_message: MSG:  version=0>
So Okt 11 11:09:19.036764427 2020: pid 1453: thd-0x2886b70: amfetchdump:
xfer_queue_message: MSG:  version=0>
So Okt 11 11:09:19.037318386 2020: pid 1453: thd-0x13ba550: amfetchdump:
source_crc: 54b643b1:13631488
So Okt 11 11:09:19.037422653 2020: pid 1453: thd-0x13ba550: amfetchdump:
Amanda::Recovery::Clerk: done reading file 2 on 'vBE-full-001'
So Okt 11 11:09:19.037842519 2020: pid 1453: thd-0x2886b70: amfetchdump:
sending XMSG_CRC message 0x28bf820
So Okt 11 11:09:19.037891165 2020: pid 1453: thd-0x2886b70: amfetchdump:
pull_and_write CRC: 54b643b1  size 13631488
So Okt 11 11:09:19.037905382 2020: pid 1453: thd-0x2886b70: amfetchdump:
xfer_queue_message: MSG:  version=0>
So Okt 11 11:09:19.037948843 2020: pid 1453: thd-0x2886b70: amfetchdump:
xfer_queue_message: MSG:  version=0>
So Okt 11 11:09:19.038328991 2020: pid 1453: thd-0x13ba550: amfetchdump:
dest_crc: 54b643b1:13631488
So Okt 11 11:09:19.038662664 2020: pid 1453: thd-0x13ba550: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1913:info:4900012 13312 kb
So Okt 11 11:09:19.045728617 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_utime   : 0
So Okt 11 11:09:19.045764790 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_stime   : 0
So Okt 11 11:09:19.045770951 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_maxrss  : 29596
So Okt 11 11:09:19.045776603 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_ixrss   : 0
So Okt 11 11:09:19.045782046 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_idrss   : 0
So Okt 11 11:09:19.045787304 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_isrss   : 0
So Okt 11 11:09:19.045792361 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_minflt  : 7906
So Okt 11 11:09:19.045797472 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_majflt  : 1
So Okt 11 11:09:19.045802565 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_nswap   : 0
So Okt 11 11:09:19.045807644 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_inblock : 26929
So Okt 11 11:09:19.045812727 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_oublock : 26696
So Okt 11 11:09:19.045817778 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_msgsnd  : 0
So Okt 11 11:09:19.045822739 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_msgrcv  : 0
So Okt 11 11:09:19.045827694 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_nsignals: 0
So Okt 11 11:09:19.045832775 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_nvcsw   : 646
So Okt 11 11:09:19.045837966 2020: pid 1453: thd-0x13ba550: amfetchdump:
ru_nivcsw  : 458
So Okt 11 11:09:19.052730514 2020: pid 1453: thd-0x13ba550: amfetchdump:
pid 1453 finish time Sun Oct 11 11:09:19 2020



Re: restore from vtapes written by amvault

2020-10-09 Thread Bernhard Erdmann
Hello,

On 09.10.20 at 01:31, Nathan Stratton Treadway wrote:
> It looks like amfetchdump should be creating a
> "$logdir/fetchdump.$timestamp" log file.  If so, does that include any
> mention of opening the vtape changer and/or detecting storage names?

A logfile like log.20201008195900.0 is created, but no fetchdump.$timestamp
file.

$ cat log.20201008195900.0
INFO amfetchdump fetchdump pid 4503

There is another logfile:
/tmp/amanda/server/be-full/amfetchdump.20201009083605.debug

$ cat /tmp/amanda/server/be-full/amfetchdump.20201009083605.debug
Fr Okt 09 08:36:05.594766041 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
pid 2215 ruid 33 euid 33 version 3.5.1: start at Fri Oct  9 08:36:05 2020
Fr Okt 09 08:36:05.594871851 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
Arguments: -d vtape be-full svr ^/$ 2119
Fr Okt 09 08:36:05.595963551 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
reading config file /var/lib/amanda/be-full/amanda.conf
Fr Okt 09 08:36:05.596291148 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
reading config file /var/lib/amanda/amanda.conf.main
Fr Okt 09 08:36:05.605085245 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
pid 2215 ruid 33 euid 33 version 3.5.1: rename at Fri Oct  9 08:36:05 2020
Fr Okt 09 08:36:05.608700988 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
beginning trace log: /var/lib/amanda/be-full/log/log.20201009083605.0
Fr Okt 09 08:36:05.624978043 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
chg-disk: Dir /var/lib/vtape/be-full
Fr Okt 09 08:36:05.625018252 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
chg-disk: Using statefile '/var/lib/vtape/be-full/state'
Fr Okt 09 08:36:05.839215832 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1303:success:492 1
volume(s) needed for restoration
The following volumes are needed: BE-full-00

Fr Okt 09 08:36:08.174733052 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
Amanda::Recovery::Clerk: loading volume 'BE-full-00'
Fr Okt 09 08:36:08.174965611 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
find_volume labeled 'BE-full-00'
Fr Okt 09 08:36:08.195919769 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
parse_inventory: load slot 6
Fr Okt 09 08:36:08.196089927 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Recovery/Scan.pm:420:info:120 slot 6
Fr Okt 09 08:36:08.23707 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
dir_name: /var/lib/vtape/be-full/slot6/
Fr Okt 09 08:36:08.201052481 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
Device file:/var/lib/vtape/be-full/slot6 error = 'File 0 not found'
Fr Okt 09 08:36:08.201109545 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
Device file:/var/lib/vtape/be-full/slot6 setting status flag(s):
DEVICE_STATUS_VOLUME_UNLABELED
Fr Okt 09 08:36:08.203808430 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Recovery/Scan.pm:471:error:122
File 0 not found
Fr Okt 09 08:36:08.204280888 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
new Amanda::Changer::Error: type='fatal', message='File 0 not found'
Fr Okt 09 08:36:08.221645777 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
parse_inventory: load slot 7
Fr Okt 09 08:36:08.221763588 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Recovery/Scan.pm:420:info:120 slot 7
Fr Okt 09 08:36:08.225548944 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
dir_name: /var/lib/vtape/be-full/slot7/
Fr Okt 09 08:36:08.226464125 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
Device file:/var/lib/vtape/be-full/slot7 error = 'File 0 not found'
Fr Okt 09 08:36:08.226503803 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
Device file:/var/lib/vtape/be-full/slot7 setting status flag(s):
DEVICE_STATUS_VOLUME_UNLABELED
Fr Okt 09 08:36:08.229226167 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Recovery/Scan.pm:471:error:122
File 0 not found
Fr Okt 09 08:36:08.229572359 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
new Amanda::Changer::Error: type='fatal', message='File 0 not found'
Fr Okt 09 08:36:14.765428839 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
new Amanda::Changer::Error: type='fatal', message='Aborted by user'
Fr Okt 09 08:36:14.765978065 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1398:error:4900045 Aborted
by user
Fr Okt 09 08:36:14.766222642 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:2174:error:4900068 Aborted
by user
Fr Okt 09 08:36:14.767321030 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
ru_utime   : 0
Fr Okt 09 08:36:14.767354700 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
ru_stime   : 0
Fr Okt 09 08:36:14.767361174 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
ru_maxrss  : 29200
Fr Okt 09 08:36:14.767382936 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
ru_ixrss   : 0
Fr Okt 09 08:36:14.767388566 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
ru_idrss   : 0
Fr Okt 09 08:36:14.767393604 2020: pid 2215: thd-0x1bab4f0: amfetchdump:
ru_isrss   : 0
Fr Okt 

Re: restore from vtapes written by amvault

2020-10-09 Thread Bernhard Erdmann
Hello,

On 08.10.20 at 20:41, Nathan Stratton Treadway wrote:
> (What does
>   $ amadmin be-full config | grep -i storage
> show right now?)

$ amadmin be-full config |grep -i storage
ACTIVE-STORAGEbe-full
STORAGE   be-full
VAULT-STORAGE ""
DEFINE STORAGE vtape {
DEFINE STORAGE be-full {

On 09.10.20 at 01:31, Nathan Stratton Treadway wrote:
> It looks like amfetchdump should be creating a
> "$logdir/fetchdump.$timestamp" log file.  If so, does that include any
> mention of opening the vtape changer and/or detecting storage names?

A logfile like log.20201008195900.0 is created, but no fetchdump.$timestamp
file.

$ cat log.20201008195900.0
INFO amfetchdump fetchdump pid 4503


Re: restore from vtapes written by amvault

2020-10-08 Thread Bernhard Erdmann
Hello,

On 07.10.20 at 23:50, Nathan Stratton Treadway wrote:
> Hmmm, the one thing that seems a little strange is the extra zeros at
> the end of the datetimestamp string on the DONE line

your first reading was correct.

Original logfile:

$ grep "svr / " log.20201004123343.0
PART taper "ST:vtape" vBE-full-001 2 svr / 2119 1/-1 0 [sec
43.408733 bytes 13631488 kps 306.666403]
DONE taper "ST:vtape" svr / 211900 1 0 [sec 44.00 bytes
13631488 kps 302.545455 orig-kb 0]

$ amadmin be-full find svr /

date   host disk lv storage pooltape or file file part status
2000-01-19 svr  / 0 be-full be-full BE-full-00  2 1/-1 OK


Modified logfile:

$ grep "svr / " log.20201004123343.0
PART taper "ST:vtape" vBE-full-001 2 svr / 2119 1/-1 0 [sec
43.408733 bytes 13631488 kps 306.666403]
DONE taper "ST:vtape" svr / 2119 1 0 [sec 44.00 bytes 13631488
kps 302.545455 orig-kb 0]

$ amadmin be-full find svr /

date   host disk lv storage pooltape or file file part status
2000-01-19 svr  / 0 vtape   vtape   vBE-full-0012  1/1 OK
2000-01-19 svr  / 0 be-full be-full BE-full-00  2 1/-1 OK


But amfetchdump still does not know about tape vBE-full-001:

$ amfetchdump -d vtape be-full svr '^/$' 2119
1 volume(s) needed for restoration
The following volumes are needed: BE-full-00

Press enter when ready

File 0 not found

Insert volume labeled 'BE-full-00' in vtape
and press enter, or ^D to abort.

File 0 not found

Insert volume labeled 'BE-full-00' in vtape
and press enter, or ^D to abort.
Aborted by user

$ amfetchdump be-full svr '^/$' 2119
1 volume(s) needed for restoration
The following volumes are needed: BE-full-00

Press enter when ready

Source Volume 'BE-full-00' not found

Insert volume labeled 'BE-full-00' in
chg-multi:{/dev/nst0,/dev/nst1,/dev/nst2}
and press enter, or ^D to abort.
Aborted by user


Re: restore from vtapes written by amvault

2020-10-07 Thread Bernhard Erdmann
Hello,

On 06.10.20 at 21:54, Debra S Baddorf wrote:
> Maybe try adding the exact time stamp too?   (In your orig mail)

$ /opt/amanda/sbin/amfetchdump -d vtape be-full svr '^/$' 20201004123343
No matching dumps found

> Oh, and ….  I got confused in your orig email about  be-full  and BE-full.  I 
> hope amanda isn’t confused too.
> Maybe try the capital letter version?  But probably I just didn’t read it 
> carefully enough.

The config name is "be-full" in lowercase. The tapes of Jan. 2000
are labeled BE-full-00 to BE-full-04 (labelstr "^BE-full-[0-9][0-9]*$").
The vtape nowadays is labeled vBE-full-001 (LABELSTR "vBE-full-[0-9]*").

On 07.10.20 at 00:05, Nathan Stratton Treadway wrote:
> (I assume you are running Amanda v3.5, right?)

amanda-3.5.1 has been running since 29 Feb 2020.

> Looking at the source code for the "find" command, it seems that Amanda
> looks through the log.* files based on the data stamps pulled out of the
> tapelist file...  so in your case, what does
>
>   grep 20201004123343 tapelist
>
> show (for the /etc/amanda/be-full/tapelist file)?

$ grep 20201004123343 ../tapelist
20201004123343 vBE-full-001 reuse BLOCKSIZE:32 POOL:vtape STORAGE:vtape
CONFIG:be-full

$ grep 2119 ../tapelist
2119 BE-full-04 reuse
2119 BE-full-03 reuse
2119 BE-full-02 reuse
2119 BE-full-01 reuse
2119 BE-full-00 reuse

> Also, what do you get when you grep log.20201004123343.0 for "srv /"?
> (That should give you all the taper lines related to writing the
> "missing" dump for srv / .)

$ grep "svr / " log.20201004123343.0
PART taper "ST:vtape" vBE-full-001 2 svr / 2119 1/-1 0 [sec
43.408733 bytes 13631488 kps 306.666403]
DONE taper "ST:vtape" svr / 211900 1 0 [sec 44.00 bytes
13631488 kps 302.545455 orig-kb 0]

$ grep "svr / " log.2119.7
SUCCESS dumper svr / 2119 0 [sec 40.837 kb 13312 kps 326.0 orig-kb
30860]
SUCCESS taper svr / 2119 0 [sec 27.941 kb 13344 kps 477.6 {wr:
writers 417 rdwait 0.000 wrwait 25.417 filemark 2.237}]



Re: restore from vtapes written by amvault

2020-10-06 Thread Bernhard Erdmann
Dear Deb,

On 06.10.20 at 19:47, Debra S Baddorf wrote:
> To access the vaulted files on the hard disk,   I had to look for the date on 
> which I had amvaulted.
> Try repeating this command:
>  $ /opt/amanda/sbin/amfetchdump -d vtape  be-full svr '^/$’ 2119
> using the date of the vault,   20201004

$ /opt/amanda/sbin/amfetchdump -d vtape be-full svr '^/$' 20201004
No matching dumps found


restore from vtapes written by amvault

2020-10-06 Thread Bernhard Erdmann
Hello,

recently I used amvault to copy dumps from 20-year-old DDS tapes to vtapes on
hard disk. Now I would like to restore from these vtapes.

"amadmin CONF find" does not locate the dumps on vtape (labeled
vBE-full-001), only on original tapes (labeled BE-full-00 to BE-full-04).

$ amadmin be-full find svr /

date   host disk lv storage pooltape or file file part status
2000-01-19 svr  / 0 be-full be-full BE-full-00  2 1/-1 OK

$ ll /var/lib/vtape/be-full/slot1
total 7643100
-rw--- 1 amanda disk  32768 Oct  4 12:34 0.vBE-full-001
[...]
-rw--- 1 amanda disk   13664256 Oct  4 12:35 2.svr._.0
[...]

The logdir contains the original logfiles of 19 Jan 2000 as well as the
logfile log.20201004123343.0 describing the amvaulting to vBE-full-001.

$ /opt/amanda/sbin/amfetchdump -d vtape  be-full svr '^/$' 2119
1 volume(s) needed for restoration
The following volumes are needed: BE-full-00

Press enter when ready

File 0 not found

Insert volume labeled 'BE-full-00' in vtape
and press enter, or ^D to abort.

File 0 not found

Insert volume labeled 'BE-full-00' in vtape
and press enter, or ^D to abort.
Aborted by user

Some snippets from amanda.conf:

tpchanger "chg-multi:{/dev/nst0,/dev/nst1,/dev/nst2}"
property "changerfile" "chg-multi.state"
runtapes 2
autoflush yes

define changer vtape {
  tpchanger "chg-disk:/var/lib/vtape/be-full"
  property "num-slot" "200"
  property "auto-create-slot" "yes"
}

define taperscan taper_lexical {
comment "lexical"
plugin "lexical"
}

DEFINE STORAGE vtape {
  TPCHANGER   "vtape"
  LABELSTR"vBE-full-[0-9]*"
  AUTOLABEL   "vBE-full-%%%"
  TAPEPOOL"vtape"
  RUNTAPES1
  TAPERSCAN   "taper_lexical"
  TAPERALGO   FIRST
  AUTOFLUSH   yes
  FLUSH-THRESHOLD-DUMPED 100
  FLUSH-THRESHOLD-SCHEDULED 100
}
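
(As the later messages in this thread show, once the datestamp in the amvault
log was fixed, the vaulted copies could be read by naming this storage
explicitly, e.g.:

$ amfetchdump -ostorage=vtape be-full svr '^/$' 2119

so the definitions above did not need to change.)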


Re: Out of memory: Killed process 707, UID 0, (amrecover). / Amanda-3.4.4

2017-06-05 Thread Bernhard Erdmann

Well done, Jean-Louis!

Many thanks for the very quick patch. This patch solved my problem.

Now, for the same recovery procedure as before (same backup date, same
directory to recover), the largest Amanda process (amidxtaped) did not grow
beyond 233 MB of virtual memory.



On 05.06.17 at 18:00, Jean-Louis Martineau wrote:

Bernhard,

Thanks for reporting this new issue.
I committed the attached patch

Jean-Louis


Out of memory: Killed process 707, UID 0, (amrecover). / Amanda-3.4.4

2017-06-04 Thread Bernhard Erdmann

Hello,

after an upgrade to Amanda-3.4.4, I discovered that amrecover uses far too
much memory. When trying to recover a single directory from a huge GNU tar
image (900 GB), amrecover gets killed because of memory constraints.


The system has been running CentOS 5.11 x86_64 with 8 GB RAM and 24 GB swap
for several years, using Amanda to back up to a 90-slot vtape changer
attached via iSCSI.


Downgrading to Amanda-3.4.3 helped. The same recovery procedure (same backup
date, same directory to recover) works fine with that version.


Last lines of amidxtaped.20170603173611.debug (timezone is CEST (GMT+2)):

Sat Jun 03 17:13:55.770129000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1699:info:490 198642624 kb
Sat Jun 03 17:14:20.07357 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1699:info:490 198645984 kb
Sat Jun 03 17:15:14.519889000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1699:info:490 198646752 kb
Sat Jun 03 17:21:39.862161000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1699:info:490 198647680 kb
Sat Jun 03 17:21:40.388673000 2017: pid 720: thd-0x20d85690: amidxtaped: 
xfer_cancel_with_error: Error writing to fd 8: Broken pipe
Sat Jun 03 17:21:40.388729000 2017: pid 720: thd-0x20d85690: amidxtaped: 
xfer_queue_message: MSG:  version=0>
Sat Jun 03 17:21:40.38890 2017: pid 720: thd-0x20d85690: amidxtaped: 
xfer_queue_message: MSG:  version=0>
Sat Jun 03 17:21:40.845451000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
Cancelling  -> 
)>
Sat Jun 03 17:21:40.922716000 2017: pid 720: thd-0x20d85690: amidxtaped: 
sending XMSG_CRC message 0x20d83010
Sat Jun 03 17:21:40.922777000 2017: pid 720: thd-0x20d85690: amidxtaped: 
pull_and_write CRC: 1b42fe0e  size 203415191552
Sat Jun 03 17:21:40.922793000 2017: pid 720: thd-0x20d85690: amidxtaped: 
xfer_queue_message: MSG:  version=0>
Sat Jun 03 17:21:40.922818000 2017: pid 720: thd-0x20d85690: amidxtaped: 
xfer_queue_message: MSG:  version=0>
Sat Jun 03 17:21:41.313146000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
dest_crc: 1b42fe0e:203415191552
Sat Jun 03 17:21:41.674875000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1893:info:4900012 198647680 kb
Sat Jun 03 17:21:42.462789000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:1921:error:4900055 Error 
writing to fd 8: Broken pipe
Sat Jun 03 17:21:42.958237000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
/opt/amanda/lib/amanda/perl/Amanda/Restore.pm:2138:error:4900068 Error 
writing to fd 8: Broken pipe
Sat Jun 03 17:21:43.406247000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
user_message feedback: Error writing to fd 8: Broken pipe
Sat Jun 03 17:21:44.597549000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
CTL >> MESSAGE Error writing to fd 8: Broken pipe
Sat Jun 03 17:21:46.940293000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
exiting with 1
Sat Jun 03 17:21:46.990525000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_utime   : 352
Sat Jun 03 17:21:46.990566000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_stime   : 463
Sat Jun 03 17:21:46.990576000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_maxrss  : 38984
Sat Jun 03 17:21:46.990595000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_ixrss   : 0
Sat Jun 03 17:21:46.990604000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_idrss   : 0
Sat Jun 03 17:21:46.990611000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_isrss   : 0
Sat Jun 03 17:21:46.990619000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_minflt  : 13614
Sat Jun 03 17:21:46.990627000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_majflt  : 2372
Sat Jun 03 17:21:46.990634000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_nswap   : 0
Sat Jun 03 17:21:46.990641000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_inblock : 0
Sat Jun 03 17:21:46.990649000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_oublock : 0
Sat Jun 03 17:21:46.990656000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_msgsnd  : 0
Sat Jun 03 17:21:46.990663000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_msgrcv  : 0
Sat Jun 03 17:21:46.99067 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_nsignals: 0
Sat Jun 03 17:21:46.990678000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_nvcsw   : 1477628
Sat Jun 03 17:21:46.990685000 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
ru_nivcsw  : 4970121
Sat Jun 03 17:21:47.01036 2017: pid 720: thd-0x1f6147f0: amidxtaped: 
pid 720 finish time Sat Jun  3 17:21:47 2017


Lines from /var/log/messages:


Re: Trouble with libUtil.so

2015-02-11 Thread Bernhard Erdmann

Hello Jean-Louis,
thanks for your patch. It helped me to get amanda 3.3.7 up and running 
on an RHEL 5.11 machine. Without your patch, I had the same problem as Jens.



On 26.01.15 at 16:14, Jean-Louis Martineau wrote:

Jens,

Try the attached patch.

Jean-Louis

On 01/26/2015 10:00 AM, Jens Berg wrote:

Hi list,

I compiled amanda 3.3.7 from source which completed successfully.
However, each time I run a command which utilizes libUtil.so, e.g.
amlabel or amcheck I get the following error message:
/usr/bin/perl: symbol lookup error:
/usr/local/share/perl/5.14.2/auto/Amanda/Util/libUtil.so: undefined
symbol: struct_file_lock_lock

/usr/bin/perl -v tells me it's a 5.14.2
Looks like some SWIG weirdness to me but I'm not experienced enough to
figure out what the real problem is. Do I miss a library or the path
to it?
I already did make clean; make; sudo make install but without success.
The system I run amanda on is the same machine I used for compiling.
It's a Debian 7.8.0 32-bit (3.2.0-4-686-pae #1 SMP Debian
3.2.65-1+deb7u1 i686 GNU/Linux) with Perl 5.14.2 and gcc (Debian
4.7.2-5) 4.7.2

Any ideas what could be wrong?

Jens







Re: Deleted /var/lib/amanda/gnutar-lists directory

2012-05-23 Thread Bernhard Erdmann


Quoting Szakolczi Gábor szakolczi.ga...@lib.pte.hu on Wed, 23 May  
2012 09:52:54 +0200 (CEST):



Hi!

I deleted the /var/lib/amanda/gnutar-lists directory unfortunately,  
but I have the backup files.
Is it possible to recreate the files in the gnutar-lists directory  
from the backup files?



Hello Gábor,

this is the cron job I run on every host using gnutar, so that the
listed-incremental state survives accidental deletion or modification:

# Backup of --listed-incremental files for GNU tar
32 15 * * * tar cf gnutar-lists_amandates.tar amandates gnutar-lists


Just once, before the first run:
(umask 077; touch gnutar-lists_amandates.tar)


For your case: restore the gnutar-lists directory from your backup, and
Amanda / GNU tar will continue backing up from that state.
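
A sketch of the restore itself, assuming the cron job runs in the directory
that holds amandates and gnutar-lists (e.g. /var/lib/amanda), so the paths
inside the archive are relative:

$ cd /var/lib/amanda
$ tar xf gnutar-lists_amandates.tar gnutar-lists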






Re: Fit algorithm revisited

2012-03-28 Thread Bernhard Erdmann

Hi Jean-Louis,

the second medium-sized DLE (53 GB) has been promoted from 5 days  
ahead. Looks much better now.


$ amadmin be balance
 due-date  #fs    orig kB     out kB    balance
------------------------------------------------
 3/28 Wed    2   49318040   49318040     -12.4%
 3/29 Thu    1   73811810   73811810     +31.1%
 3/30 Fri    2   40733720   40733720     -27.6%
 3/31 Sat    2   55579040   55579040      -1.3%
 4/01 Sun    1  126538220  126538220    +124.8%
 4/02 Mon    1   44029300   44029300     -21.8%
 4/03 Tue    2   67804280   67804280     +20.5%
 4/04 Wed    2   48122090   48122090     -14.5%
 4/05 Thu    2   55214820   55214820      -1.9%
 4/06 Fri    2   58847690   58847690      +4.6%
 4/07 Sat    2   47259350   47259350     -16.0%
 4/08 Sun    1   39843350   39843350     -29.2%
 4/09 Mon    1   41888900   41888900     -25.6%
 4/10 Tue    2   58437479   58437479      +3.8%
 4/11 Wed    2   55687590   55687590      -1.1%
 4/12 Thu    0          0          0        ---
 4/13 Fri    2   56478270   56478270      +0.3%
 4/14 Sat    2   61045722   58645042      +4.2%
 4/15 Sun    1   44408210   44408210     -21.1%
 4/16 Mon    2   56126655   51227393      -9.0%
 4/17 Tue    3   65676196   53594814      -4.8%
 4/18 Wed    2   64966230   64966230     +15.4%
 4/19 Thu    4   81805259   55246746      -1.8%
 4/20 Fri   10   67815815   54983817      -2.3%
 4/21 Sat    3  104465218  104465218     +85.6%
------------------------------------------------
TOTAL       54 1465903254 1407131419   56285256
  (estimated 25 runs per dumpcycle)



Quoting Jean-Louis Martineau martin...@zmanda.com on Tue, 27 Mar  
2012 13:44:26 -0400:



Bernhard,

Use the attached patch for 3.3

Let me know if it improve the balancing, or if some dle get promoted  
too often.


Jean-Louis

On 03/27/2012 01:27 PM, Bernhard Erdmann wrote:

Hi Jean-Louis,

will your patch apply to Amanda version 3.3.1?

For several months I have had a similar ongoing problem with one
Amanda configuration: one big DLE (120 GB), two DLEs at 75 and 50 GB,
and about 35 DLEs at 20-35 GB each.


Amanda 3.3.1 neither moves the biggest DLE to a day on which it is
the only full dump, nor shuffles the smaller DLEs around so that only
the biggest DLE gets its full dump on a particular day. Instead, the
second medium-sized DLE (50 GB) is always full-dumped on the same day
as the biggest DLE.


This configuration has been stable for more than half a year, i.e.  
4-5 dumpcycles have passed.


$ amadmin be balance

 due-date  #fs    orig kB     out kB    balance
------------------------------------------------
 3/27 Tue    2   50952368   50952368      -9.5%
 3/28 Wed    2   49318040   49318040     -12.4%
 3/29 Thu    1   73811810   73811810     +31.1%
 3/30 Fri    2   40733720   40733720     -27.6%
 3/31 Sat    2   55579040   55579040      -1.3%
 4/01 Sun    2  180292110  180292110    +220.3%
 4/02 Mon    1   44029300   44029300     -21.8%
 4/03 Tue    2   67804280   67804280     +20.4%
 4/04 Wed    2   48122090   48122090     -14.5%
 4/05 Thu    2   55214820   55214820      -1.9%
 4/06 Fri    2   58847690   58847690      +4.5%
 4/07 Sat    2   47259350   47259350     -16.1%
 4/08 Sun    1   39843350   39843350     -29.2%
 4/09 Mon    1   41888900   41888900     -25.6%
 4/10 Tue    2   58437479   58437479      +3.8%
 4/11 Wed    2   55687590   55687590      -1.1%
 4/12 Thu    0          0          0        ---
 4/13 Fri    2   56478270   56478270      +0.3%
 4/14 Sat    2   61045722   58645042      +4.2%
 4/15 Sun    1   44408210   44408210     -21.1%
 4/16 Mon    2   56126655   51227393      -9.0%
 4/17 Tue    3   65676196   53594814      -4.8%
 4/18 Wed    2   64966230   64966230     +15.4%
 4/19 Thu    4   81805259   55246746      -1.9%
 4/20 Fri   10   67815815   54983817      -2.3%
------------------------------------------------
TOTAL       54 1466144294 1407372459   56294898
  (estimated 25 runs per dumpcycle)



Quoting Jean-Louis Martineau martin...@zmanda.com on Fri, 23 Mar  
2012 08:18:51 -0400:



Hi Gene,

Can you try the attached patch? (it is lightly tested and uncommitted).

Jean-Louis

On 03/21/2012 12:48 PM, gene heskett wrote:

Greetings from the canary;

One of the things that constantly get under my skin is the  
apparent lack of

amanda's ability to juggle backup order in order to balance the sizes of
the backups from night to night.  I have fussed about this before without
arriving at a solution, but it seems to me amanda has gone dumb with all
the re-writes in the last 3 or 5 years.

I am seemingly locked into a cadence of 4 nights worth of doing about 15Gb
a night, followed by the night when it does the largest 5 or so DLE's all
on the same run, which makes that run be 45+Gb.

The biggest one is /usr/movies, at a bit over 16Gb.  If I could get that
one separated from the other larger ones, it would help.  Sure, I could
comment that DLE out for a day or 2. Or I could force a level 0 on Friday.
The point is that 5 years ago, amanda would do this all by itself  
and it is

no longer even making the effort for at least the last

Re: Fit algorithm revisited

2012-03-27 Thread Bernhard Erdmann

Hi Jean-Louis,

will your patch apply to Amanda version 3.3.1?

For several months I have had a similar ongoing problem with one Amanda
configuration: one big DLE (120 GB), two DLEs at 75 and 50 GB, and about
35 DLEs at 20-35 GB each.


Amanda 3.3.1 neither moves the biggest DLE to a day on which it is the only
full dump, nor shuffles the smaller DLEs around so that only the biggest DLE
gets its full dump on a particular day. Instead, the second medium-sized DLE
(50 GB) is always full-dumped on the same day as the biggest DLE.


This configuration has been stable for more than half a year, i.e. 4-5  
dumpcycles have passed.


$ amadmin be balance

 due-date  #fs    orig kB     out kB    balance
------------------------------------------------
 3/27 Tue    2   50952368   50952368      -9.5%
 3/28 Wed    2   49318040   49318040     -12.4%
 3/29 Thu    1   73811810   73811810     +31.1%
 3/30 Fri    2   40733720   40733720     -27.6%
 3/31 Sat    2   55579040   55579040      -1.3%
 4/01 Sun    2  180292110  180292110    +220.3%
 4/02 Mon    1   44029300   44029300     -21.8%
 4/03 Tue    2   67804280   67804280     +20.4%
 4/04 Wed    2   48122090   48122090     -14.5%
 4/05 Thu    2   55214820   55214820      -1.9%
 4/06 Fri    2   58847690   58847690      +4.5%
 4/07 Sat    2   47259350   47259350     -16.1%
 4/08 Sun    1   39843350   39843350     -29.2%
 4/09 Mon    1   41888900   41888900     -25.6%
 4/10 Tue    2   58437479   58437479      +3.8%
 4/11 Wed    2   55687590   55687590      -1.1%
 4/12 Thu    0          0          0        ---
 4/13 Fri    2   56478270   56478270      +0.3%
 4/14 Sat    2   61045722   58645042      +4.2%
 4/15 Sun    1   44408210   44408210     -21.1%
 4/16 Mon    2   56126655   51227393      -9.0%
 4/17 Tue    3   65676196   53594814      -4.8%
 4/18 Wed    2   64966230   64966230     +15.4%
 4/19 Thu    4   81805259   55246746      -1.9%
 4/20 Fri   10   67815815   54983817      -2.3%
------------------------------------------------
TOTAL       54 1466144294 1407372459   56294898
  (estimated 25 runs per dumpcycle)



Quoting Jean-Louis Martineau martin...@zmanda.com on Fri, 23 Mar  
2012 08:18:51 -0400:



Hi Gene,

Can you try the attached patch? (it is lightly tested and uncommitted).

Jean-Louis

On 03/21/2012 12:48 PM, gene heskett wrote:

Greetings from the canary;

One of the things that constantly get under my skin is the apparent lack of
amanda's ability to juggle backup order in order to balance the sizes of
the backups from night to night.  I have fussed about this before without
arriving at a solution, but it seems to me amanda has gone dumb with all
the re-writes in the last 3 or 5 years.

I am seemingly locked into a cadence of 4 nights worth of doing about 15Gb
a night, followed by the night when it does the largest 5 or so DLE's all
on the same run, which makes that run be 45+Gb.

The biggest one is /usr/movies, at a bit over 16Gb.  If I could get that
one separated from the other larger ones, it would help.  Sure, I could
comment that DLE out for a day or 2. Or I could force a level 0 on Friday.
The point is that 5 years ago, amanda would do this all by itself and it is
no longer even making the effort for at least the last 2 or 3 years.

Here is the output of amadmin Daily balance:
 due-date  #fs    orig MB     out MB    balance
------------------------------------------------
 3/21 Wed    2      22104       9865     -32.9%
 3/22 Thu    8       6787       3178     -78.4%
 3/23 Fri   12       1117       1065     -92.8%
 3/24 Sat    9      28015      14595      -0.7%
 3/25 Sun    5      48037      44776    +204.7%
------------------------------------------------
TOTAL       36     106060      73479      14695
  (estimated 5 runs per dumpcycle)

No huge files to disturb it have been added or deleted in at least a month.

Now it used to be, years ago, that amanda's planner would come within 20-50
megs of filling a 4Gb tape every night for weeks at a time without ever
hitting an EOT.  Now it seems as if amanda is making no effort to move that
16Gb movie (its weddings I've shot, all since I moved to vtapes years ago)
DLE to Friday or even Thursday where it would fit quite nicely in the 30Gb
I give it as a virtual tape size.

Do I have it being forced to maintain the existing terrible schedule by
some option with a hidden interaction in my amanda.conf?

Please name that option if there is such a beast.

Thanks.

Cheers, Gene






error with amcheckdump

2012-03-20 Thread Bernhard Erdmann

Hi,

when I call amcheckdump (Amanda version 3.3.1) it reports:

$ amcheckdump --verbose be-full
amcheckdump: '[sec 2477.048 kb 1266208 kps 511' at  
/opt/amanda/lib/amanda/perl/Amanda/DB/Catalog.pm line 750.

$ echo $?
1

I guess it has something to do with the index of this Amanda  
configuration which has grown over 12 years. I have never used  
amcheckdump successfully before.


Do you have an idea how to debug?


$ perl -d /opt/amanda/sbin/amcheckdump --verbose be-full

Loading DB routines from perl5db.pl version 1.28
Editor support available.

Enter h or `h h' for help, or `man perldebug' for more help.

main::(/opt/amanda/sbin/amcheckdump:65):
65:     Amanda::Util::setup_application("amcheckdump", "server",
            $CONTEXT_CMDLINE);

  DB<1> b 303
  DB<2> r
main::CODE(0x10d4de50)(/opt/amanda/sbin/amcheckdump:303):
303:        Amanda::Recovery::Planner::make_plan(
304:            dumpspecs => [ $spec ],
305:            changer => $chg,
306:            plan_cb => $steps->{'plan_cb'});
  DB<2> n
amcheckdump: '[sec 2477.048 kb 1266208 kps 511' at  
/opt/amanda/lib/amanda/perl/Amanda/DB/Catalog.pm line 750.






Re: error with amcheckdump

2012-03-20 Thread Bernhard Erdmann

Hi Jean-Louis,

one of the Amanda logfiles (be-full/log/log.2612.0, 84 lines)  
contains the line in question:


[...]
START taper datestamp 2612 label BE-full-24 tape 3
INFO taper retrying ente:/var/spool/news.0 on new tape: [closing tape:  
Input/output error]
SUCCESS taper ente /var/spool/news 2612 0 [sec 5438.722 kb 2781504  
kps 511.4 {wr: writers 86922 rdwait 0.000 wrwait 5433.433 filemark  
3.160}]

INFO taper tape BE-full-24 kb 4006560 fm 2 writing file: Input/output error
START taper datestamp 2612 label BE-full-25 tape 4
INFO taper retrying apollo:/mnt/xcdroast.0 on new tape: [closing tape:  
Input/output error]

SUCCESS taper apollo /mnt/xcdroast 2612 0 [sec 2477.048 kb 1266208 kps 511

This logfile ends with kps 511 - nothing beyond.
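
A rough way to spot other taper lines that were cut off the same way (just a
sketch; a complete taper line ends with a closing bracket):

$ grep -n '^SUCCESS taper ' log.*.0 | grep -v ']$'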



Quoting Jean-Louis Martineau martin...@zmanda.com on Tue, 20 Mar  
2012 09:18:10 -0400:



It is probably a corruption of one of the log.datastamp.* file

Can you post the file with the string: [sec 2477.048 kb 1266208 kps 511

Jean-Louis

On 03/20/2012 08:52 AM, Bernhard Erdmann wrote:

Hi,

when I call amcheckdump (Amanda version 3.3.1) it reports:

$ amcheckdump --verbose be-full
amcheckdump: '[sec 2477.048 kb 1266208 kps 511' at  
/opt/amanda/lib/amanda/perl/Amanda/DB/Catalog.pm line 750.

$ echo $?
1

I guess it has something to do with the index of this Amanda  
configuration which has grown over 12 years. I have never used  
amcheckdump successfully before.


Do you have an idea how to debug?


$ perl -d /opt/amanda/sbin/amcheckdump --verbose be-full

Loading DB routines from perl5db.pl version 1.28
Editor support available.

Enter h or `h h' for help, or `man perldebug' for more help.

main::(/opt/amanda/sbin/amcheckdump:65):
65:     Amanda::Util::setup_application("amcheckdump", "server",
            $CONTEXT_CMDLINE);

  DB<1> b 303
  DB<2> r
main::CODE(0x10d4de50)(/opt/amanda/sbin/amcheckdump:303):
303:        Amanda::Recovery::Planner::make_plan(
304:            dumpspecs => [ $spec ],
305:            changer => $chg,
306:            plan_cb => $steps->{'plan_cb'});
  DB<2> n
amcheckdump: '[sec 2477.048 kb 1266208 kps 511' at  
/opt/amanda/lib/amanda/perl/Amanda/DB/Catalog.pm line 750.









Re: error with amcheckdump

2012-03-20 Thread Bernhard Erdmann

Hi Jean-Louis,

I have added  {wr: writers 86922 rdwait 0.000 wrwait 5433.433  
filemark 3.160}] to the last line in be-full/log/log.2612.0 and  
now amcheckdump seems to work:


$ amcheckdump --verbose be-full
You will need the following volume: BE-full-85
Press enter when ready



Quoting Bernhard Erdmann b...@berdmann.de on Tue, 20 Mar 2012 14:47:08 +0100:


Hi Jean-Louis,

one of the Amanda logfiles (be-full/log/log.2612.0, 84 lines)  
contains the line in question:


[...]
START taper datestamp 2612 label BE-full-24 tape 3
INFO taper retrying ente:/var/spool/news.0 on new tape: [closing  
tape: Input/output error]
SUCCESS taper ente /var/spool/news 2612 0 [sec 5438.722 kb  
2781504 kps 511.4 {wr: writers 86922 rdwait 0.000 wrwait 5433.433  
filemark 3.160}]

INFO taper tape BE-full-24 kb 4006560 fm 2 writing file: Input/output error
START taper datestamp 2612 label BE-full-25 tape 4
INFO taper retrying apollo:/mnt/xcdroast.0 on new tape: [closing  
tape: Input/output error]
SUCCESS taper apollo /mnt/xcdroast 2612 0 [sec 2477.048 kb  
1266208 kps 511


This logfile ends with kps 511 - nothing beyond.



Quoting Jean-Louis Martineau martin...@zmanda.com on Tue, 20 Mar  
2012 09:18:10 -0400:



It is probably a corruption of one of the log.datastamp.* file

Can you post the file with the string: [sec 2477.048 kb 1266208 kps 511

Jean-Louis

On 03/20/2012 08:52 AM, Bernhard Erdmann wrote:

Hi,

when I call amcheckdump (Amanda version 3.3.1) it reports:

$ amcheckdump --verbose be-full
amcheckdump: '[sec 2477.048 kb 1266208 kps 511' at  
/opt/amanda/lib/amanda/perl/Amanda/DB/Catalog.pm line 750.

$ echo $?
1

I guess it has something to do with the index of this Amanda  
configuration which has grown over 12 years. I have never used  
amcheckdump successfully before.


Do you have an idea how to debug?


$ perl -d /opt/amanda/sbin/amcheckdump --verbose be-full

Loading DB routines from perl5db.pl version 1.28
Editor support available.

Enter h or `h h' for help, or `man perldebug' for more help.

main::(/opt/amanda/sbin/amcheckdump:65):
65:     Amanda::Util::setup_application("amcheckdump", "server",
            $CONTEXT_CMDLINE);

  DB<1> b 303
  DB<2> r
main::CODE(0x10d4de50)(/opt/amanda/sbin/amcheckdump:303):
303:        Amanda::Recovery::Planner::make_plan(
304:            dumpspecs => [ $spec ],
305:            changer => $chg,
306:            plan_cb => $steps->{'plan_cb'});
  DB<2> n
amcheckdump: '[sec 2477.048 kb 1266208 kps 511' at  
/opt/amanda/lib/amanda/perl/Amanda/DB/Catalog.pm line 750.








Re: driver: WARNING: got empty schedule from planner

2005-10-27 Thread Bernhard Erdmann

Jean-Louis Martineau wrote:

Hi,

If the planner is segfaulting, could you try to run it with gdb.



gdb path_to_planner


(gdb) run your_config_name

After the crash, use the 'where' command.
(gdb) where

Send me the complete output.



Hi Jean-Louis,

here's the same using egcs-2.91.66, amanda-2.4.5 on RedHat Linux 6.2:

$ gdb /opt/amanda/libexec/planner
GNU gdb 19991004
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.

Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-redhat-linux...
(gdb) run be
Starting program: /opt/amanda/libexec/planner be
planner: pid 18662 executable /opt/amanda/libexec/planner version 2.4.5
planner: build: VERSION=Amanda-2.4.5
planner:BUILT_DATE=Mon Oct 10 14:23:50 CEST 2005
planner:BUILT_MACH=Linux ente.berdmann.de 2.4.31 #1 SMP Tue Jun 
7 00:23:56 CEST 2005 i686 unknown

planner:CC=gcc
planner:CONFIGURE_COMMAND='./configure' '--prefix=/opt/amanda' 
'--sysconfdir=/var/lib' '--with-index-server=amandahost' 
'--with-config=be' '--with-tape-device=/dev/nst0' '--with-user=amanda' 
'--with-group=disk' '--with-dump-honor-nodump' '--disable-static' 
'--enable-shared' '--datadir=/opt/doc'

planner: paths: bindir=/opt/amanda/bin sbindir=/opt/amanda/sbin
planner:libexecdir=/opt/amanda/libexec mandir=/opt/amanda/man
planner:AMANDA_TMPDIR=/tmp/amanda AMANDA_DBGDIR=/tmp/amanda
planner:CONFIG_DIR=/var/lib/amanda DEV_PREFIX=/dev/
planner:RDEV_PREFIX=/dev/ DUMP=/sbin/dump
planner:RESTORE=/sbin/restore VDUMP=UNDEF VRESTORE=UNDEF
planner:XFSDUMP=/sbin/xfsdump XFSRESTORE=/sbin/xfsrestore
planner:VXDUMP=UNDEF VXRESTORE=UNDEF
planner:SAMBA_CLIENT=/usr/bin/smbclient GNUTAR=/bin/gtar
planner:COMPRESS_PATH=/usr/bin/gzip
planner:UNCOMPRESS_PATH=/usr/bin/gzip LPRCMD=/usr/bin/lpr
planner:MAILER=/usr/bin/Mail
planner:listed_incr_dir=/opt/amanda/var/amanda/gnutar-lists
planner: defs:  DEFAULT_SERVER=amandahost DEFAULT_CONFIG=be
planner:DEFAULT_TAPE_SERVER=amandahost
planner:DEFAULT_TAPE_DEVICE=/dev/nst0 HAVE_MMAP HAVE_SYSVSHM
planner:LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
planner:AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
planner:CLIENT_LOGIN=amanda FORCE_USERID HAVE_GZIP
planner:COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
planner:COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
planner: time 0.000: dgram_bind: socket bound to 0.0.0.0.32814
READING CONF FILES...
planner: time 0.002: startup took 0.002 secs

SENDING FLUSHES...
ENDFLUSH
ENDFLUSH

SETTING UP FOR ESTIMATES...
planner: time 0.002: setting up estimates for ente:/
setup_estimate: ente:/: command 0, options: none    last_level 0 next_level0 6 level_days 0    getting estimates 0 (-2) 1 (-2) -1 (-2)


Program received signal SIGSEGV, Segmentation fault.
main (argc=Cannot access memory at address 0x6
) at planner.c:418
418 }


Re: Backup and recovery CD

2003-03-16 Thread Bernhard Erdmann
Seth, Wayne (Contractor) wrote:
Leonid,

I have been trying to do a complete restore on a Red Hat box for some time
now.  I haven't been able to figure out a way to access my SCSI tape drive
from Red Hat rescue mode.  Perhaps I'm missing something? 
No, Red Hat just failed to include the st (SCSI tape) driver there. I've been
bitten, too.