Hello,
amcheckdump does not work well with xfsrestore on Linux / CentOS 7.
Two lines from
/tmp/amanda/server/be-weekly/amcheckdump.20211219163126.debug:
So Dez 19 16:31:31.502440996 2021: pid 5771: thd-0xf4c400: amcheckdump:
spawning: '/sbin/xfsrestore' '-t' '-v' 'silent'
So Dez 19 16:31:31.5243252
On 21.10.20 at 18:24, Robert Wolfe wrote:
> This would be my assumption as well, but I am having issues on finding a
> working xinetd file for that (running under RHEL 7.x and 8.x).
From my second CentOS 6.10 amanda server using amanda-3.5.1:
$ head -n20 /etc/xinetd.d/am*
==> /etc/xinetd.d/aman
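The listing above is cut off before the actual stanza. For reference, a minimal sketch of what an amanda xinetd service file commonly looks like; the user, group, server path, and auth arguments here are assumptions that vary by distribution and build, so check them against your own installation:

```
service amanda
{
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = amandabackup
        group           = disk
        server          = /usr/lib/amanda/amandad
        server_args     = -auth=bsdtcp amdump amindexd amidxtaped
        disable         = no
}
```

On RHEL 7.x/8.x the server path is often under /usr/lib64/amanda/ instead; `rpm -ql amanda | grep amandad` (or your build prefix) will tell you.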
Hello,
On 18.10.20 at 18:08, Nathan Stratton Treadway wrote:
> So, when you ran your perl patching on the log files, were there any
> lines other than the "DONE taper" lines that got changed?
I checked the last two logfiles. Only lines beginning with 'DONE taper
"ST:vtape" ' were changed, e.g.
Hello,
On 07.10.20 at 23:50, Nathan Stratton Treadway wrote:
> That is, I would make a copy of the original log.20201004123343.0 file
> into some other directory, then used a txt editor to edit that
> particular DONE line to remove the "00" at the end of the
> datetimestamp field... and then
Hello,
On 11.10.20 at 00:50, Nathan Stratton Treadway wrote:
> So, does the following command work any better?:
> $ amfetchdump -ostorage=vtape -d vtape be-full svr '^/$' 2119
great idea, many thanks!
$ amfetchdump -ostorage=vtape -d vtape be-full svr '^/$' 2119
1 volume(s) needed for
Hello,
On 09.10.20 at 01:31, Nathan Stratton Treadway wrote:
> It looks like amfetchdump should be creating a
> "$logdir/fetchdump.$timestamp" log file. If so, does that include any
> mention of opening the vtape changer and/or detecting storage names?
A logfile like log.20201008195900.0 is cr
Hello,
On 08.10.20 at 20:41, Nathan Stratton Treadway wrote:
> (What does
> $ amadmin be-full config | grep -i storage
> show right now?)
$ amadmin be-full config |grep -i storage
ACTIVE-STORAGE be-full
STORAGE be-full
VAULT-STORAGE ""
DEFINE STORAGE vtape {
DEFINE
Hello,
On 07.10.20 at 23:50, Nathan Stratton Treadway wrote:
> Hmmm, the one thing that seems a little strange is the extra zeros at
> the end of the datetimestamp string on the DONE line
you understood it correctly on first reading.
Original logfile:
$ grep "svr / " log.20201004123343.0
P
Hello,
On 06.10.20 at 21:54, Debra S Baddorf wrote:
> Maybe try adding the exact time stamp too? (In your orig mail)
$ /opt/amanda/sbin/amfetchdump -d vtape be-full svr '^/$' 20201004123343
No matching dumps found
> Oh, and …. I got confused in your orig email about be-full and BE-full. I
Dear Deb,
On 06.10.20 at 19:47, Debra S Baddorf wrote:
> To access the vaulted files on the hard disk, I had to look for the date on
> which I had amvaulted.
> Try repeating this command:
> $ /opt/amanda/sbin/amfetchdump -d vtape be-full svr '^/$' 2119
> using the data of the vault,
Hello,
recently I amvaulted from 20 years old DDS tapes to vtapes on hard disk.
Now I would like to restore from these vtapes.
"amadmin CONF find" does not locate the dumps on vtape (labeled
vBE-full-001), only on original tapes (labeled BE-full-00 to BE-full-04).
$ amadmin be-full find svr /
d
Well done, Jean-Louis!
Many thanks for the very quick patch. This patch solved my problem.
Now for the same recover procedure as before (same backup date, same
directory to recover) the largest Amanda process (amidxtaped) did not
grow beyond 233 MB virtual image size.
On 05.06.17 at 18:00,
Hello,
after an upgrade to Amanda-3.4.4, I discovered that amrecover eats too
much memory. Trying to recover a single directory from a huge GNU tar
image (900 GB), amrecover gets killed because of memory constraints.
System is running CentOS 5.11 x86_64 using 8 GB RAM and 24 GB Swap
availabl
Hello Jean-Louis,
thanks for your patch. It helped me to get amanda 3.3.7 up and running
on an RHEL 5.11 machine. Without your patch, I had the same problem as Jens.
On 26.01.15 at 16:14, Jean-Louis Martineau wrote:
Jens,
Try the attached patch.
Jean-Louis
On 01/26/2015 10:00 AM, Jens Ber
Quoting Szakolczi Gábor on Wed, 23 May 2012 09:52:54 +0200 (CEST):
Hi!
I deleted the /var/lib/amanda/gnutar-lists directory unfortunately,
but I have the backup files.
Is it possible to recreate the files in the gnutar-lists directory
from the backup files?
Hello Gábor,
this is my
56285256
(estimated 25 runs per dumpcycle)
Quoting Jean-Louis Martineau on Tue, 27 Mar 2012 13:44:26 -0400:
Bernhard,
Use the attached patch for 3.3
Let me know if it improves the balancing, or if some DLEs get promoted
too often.
Jean-Louis
On 03/27/2012 01:27 PM, Bernhard Erdmann wrote
Hi Jean-Louis,
will your patch apply to Amanda version 3.3.1?
For several months I have had a similar ongoing problem with one Amanda
configuration: one big DLE (120 GB), two DLEs at 75 and 50 GB, and about
35 DLEs at 20-35 GB each.
Amanda 3.3.1 does not move the biggest DLE to a day when it onl
Quoting Bernhard Erdmann on Tue, 20 Mar 2012 14:47:08 +0100:
Hi Jean-Louis,
one of the Amanda logfiles (be-full/log/log.2612.0, 84 lines)
contains the line in question:
[...]
START taper datestamp 2612 label BE-full-24 tape 3
INFO taper retrying ente:/var/spool/news.0 on new tape
of the log..* file
Can you post the file with the string: [sec 2477.048 kb 1266208 kps 511
Jean-Louis
On 03/20/2012 08:52 AM, Bernhard Erdmann wrote:
Hi,
when I call amcheckdump (Amanda version 3.3.1) it reports:
$ amcheckdump --verbose be-full
amcheckdump: '[sec 2477.048 kb 1266208 kps
Hi,
when I call amcheckdump (Amanda version 3.3.1) it reports:
$ amcheckdump --verbose be-full
amcheckdump: '[sec 2477.048 kb 1266208 kps 511' at
/opt/amanda/lib/amanda/perl/Amanda/DB/Catalog.pm line 750.
$ echo $?
1
I guess it has something to do with the index of this Amanda
configuratio
Jean-Louis Martineau wrote:
Hi,
If the planner is segfaulting, could you try to run it with gdb.
gdb
(gdb) run
After the crash, use the 'where' command.
(gdb) where
Send me the complete output.
Hi Jean-Louis,
here's the same using egcs-2.91.66, amanda-2.4.5 on RedHat Linux 6.2:
$ g
Seth, Wayne (Contractor) wrote:
Leonid,
I have been trying to do a complete restore on a Red Hat box for some time
now. I haven't been able to figure out a way to access my SCSI tape drive
from Red Hat rescue mode. Perhaps I'm missing something?
No, it's just that Red Hat failed to include the st (SCSI