Dustin J. Mitchell wrote:
On Thu, Aug 28, 2008 at 7:48 AM, Gene Heskett <[EMAIL PROTECTED]> wrote:
The tar folks at gnu do not consider that a bug, but rather a part of
the security model.
To be fair, the tar developers did fix this -- in 1.21, IIRC.
As to the original question, no, I don't thi
Toralf Lund wrote:
Gene Heskett wrote:
On Thursday 28 August 2008, Toralf Lund wrote:
I've just moved a disk from one server to another without really
changing anything with respect to how the clients see it; logically,
the
disk still represents exactly the same volume on the ne
Gene Heskett wrote:
On Thursday 28 August 2008, Toralf Lund wrote:
I've just moved a disk from one server to another without really
changing anything with respect to how the clients see it; logically, the
disk still represents exactly the same volume on the network. [... ]
Un
I've just moved a disk from one server to another without really
changing anything with respect to how the clients see it; logically, the
disk still represents exactly the same volume on the network.
Is there any way I can change the amanda config so that the disk is
backed up via the new serv
Toralf Lund wrote:
Forgot to answer this earlier, I think...
If you can't kill sendsize, it's because it is hung in a system call.
That often happens when it tries to access a mount point.
Do you have a hung mount point?
But yes, you are absolutely right. The host in question had proble
A few days ago I had some backup problems that turned out to be caused
by a hanging NFS mount, causing "sendsize" to lock up completely - see
a separate post on this. Now I have sorted out this problem, and it
seemed like amdump would once again start properly, but it turns out
that the backu
A few days ago I had some backup problems that turned out to be caused
by a hanging NFS mount, causing "sendsize" to lock up completely - see a
separate post on this. Now I have sorted out this problem, and it seemed
like amdump would once again start properly, but it turns out that the
backup
Forgot to answer this earlier, I think...
If you can't kill sendsize, it's because it is hung in a system call.
That often happens when it tries to access a mount point.
Do you have a hung mount point?
But yes, you are absolutely right. The host in question had problems
with an NFS mount. I didn't think
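A quick way to confirm the hung-mount diagnosis above is to look for processes stuck in uninterruptible sleep ("D" state), which cannot be killed until the blocked I/O completes. This is only a minimal sketch, assuming a Linux-style ps:

```shell
# List processes stuck in uninterruptible sleep ("D" state) -- the usual
# sign of a dead NFS mount.  The awk filter keys on the STAT column.
ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/ { print $1, $3 }'
```

If sendsize shows up here, the mount it is touching is the place to look.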
We just started to get a serious problem with our amdump execution
(Amanda 2.5.0p2). As usual, we don't think we have changed anything at
all after the last successful dump
Symptoms:
1. "amstatus" says
fileserv:/scanner0 planner: [hmm, disk was
stranded on w
I'm not sure this is the right place to ask questions like this, but
I'll give it a try:
Oh. No.
It most certainly isn't, since I didn't even post to the intended list.
I wanted "Anaconda", not "Amanda" (that's automatic address lookup for you.)
Sorry.
- Toralf
I'm not sure this is the right place to ask questions like this, but
I'll give it a try:
On a certain host at work, the RH system installer (usually booted from
a customised DVD...) will fail to detect the existing Linux partitions.
The error message is (I think) "Unknown partition table signa
As long as the log files are available, the amandatape script that I
posted a while ago to this list will give you the info that you are
looking for. You can find it here:
Maybe I've missed something, but I can't see why a special script would
be necessary in this case. Why not use th
Maybe I've missed something, but I can't see why a special script would
be necessary in this case.
[ ... ]
This script is not specific to the situation in question. On the contrary,
it is meant to be run after every backup to produce tape labels with
information about which files from whic
Josef Wolf wrote:
On Fri, Jun 16, 2006 at 10:47:52AM -0400, Marlin Whitaker wrote:
On Jun 15, 2006, at 3:03 PM, Jon LaBadie wrote:
On Thu, Jun 15, 2006 at 12:33:16PM -0400, Marlin Whitaker wrote:
If I have a collection of tapes from previous amanda backups,
is there a proce
Regarding my recent post about "strategy incronly" and "skip-full":
I just found out more about the problems I've had in the past with this
simply by checking the amanda.conf manual page, which says:
skip-full /boolean/
Default: no. If true and planner has scheduled a full backup,
Another question related to one of my recent threads:
What do you think is a good value for "bumppercent"? Why?
- Toralf
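For readers new to the option: a sketch of how bumppercent sits next to the other bump settings (the numbers are illustration only, not a recommendation from this thread). bumppercent expresses the bump threshold as a percentage of the dump size, rather than the absolute kilobytes of bumpsize:

```
# amanda.conf (global options in 2.4.x-era releases):
bumppercent 20    # bump when the incremental saves at least 20% of the DLE size
bumpdays    1     # wait at least one day at each level before bumping again
bumpmult    1.5   # scale the threshold up for each successive level
```

A percentage scales naturally across DLEs of very different sizes, which is the usual argument for it over bumpsize.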
I also have one other scenario in mind, though - which is one I've
actually come across a number of times: What if a certain DLE due for
backup is estimated to be slightly smaller than *,
and thus dumped to holding disk, but then turns out to be slightly
larger?
Wouldn't it be more acc
Jon LaBadie wrote:
On Tue, Jun 13, 2006 at 11:46:20AM +0200, Paul Bijnens wrote:
I'm still trying to figure out a use for "skip-full". It seems to be a weird
option: when the planner had decided to make a full dump, then in
the real run it is skipped. That would mean that you have to be
carefully
Toralf Lund wrote:
Jon LaBadie wrote:
On Tue, Jun 13, 2006 at 02:46:31PM +0200, Toralf Lund wrote:
Normally I would agree, but I have to back up 3Tb of data organised
as one single volume. The only "simple" option would be to have one
3Tb tape as well, but such a thing isn'
Jon LaBadie wrote:
On Tue, Jun 13, 2006 at 02:46:31PM +0200, Toralf Lund wrote:
Normally I would agree, but I have to back up 3Tb of data organised as
one single volume. The only "simple" option would be to have one 3Tb
tape as well, but such a thing isn't available
To throw my $.02 in here, the situations would be very different.
If one is "forced" to have all DLEs "tapeable" in one amdump run,
then (theoretically), nothing will be left on the holding disk to
lose should said disk die.
But we're talking about a situation where the DLEs are not
"tapea
Joshua Baker-LePain wrote:
On Tue, 13 Jun 2006 at 12:55pm, Toralf Lund wrote
Paul Bijnens wrote:
Taping one DLE in several "runs" opens a can of worms: you have to
add a notion of "partially succeeded". Restoring then needs some tapes
and some holdingdisk files. Wha
Paul Bijnens wrote:
On 2006-06-13 12:10, Toralf Lund wrote:
Yes indeed. The whole DLE. A single DLE still needs to be written
in one run, possibly using many tapes.
Oh no... Like I said, that's a big disappointment. I'm tempted to
say that it is not correct to claim that Amanda n
Yes indeed. The whole DLE. A single DLE still needs to be written
in one run, possibly using many tapes.
Oh no... Like I said, that's a big disappointment. I'm tempted to say
that it is not correct to claim that Amanda now supports tape spanning,
if it can't span dumps across tapes written in
Paul Bijnens wrote:
On 2006-06-13 10:32, Toralf Lund wrote:
2. What happens to the holding disk file after a dump is partially
written to tape? Will Amanda keep the entire file, or just what
will be written next time around? And what if the holding disk
data is split into
Another question related to amanda 2.5:
Does anyone know if the issues with "skip-full" and/or "strategy
incronly" have been addressed? In the past, neither "strategy incronly" nor
"skip-full" have worked quite as expected. I'm afraid I can't remember
the full details, but one problem I've had, was
2. What happens to the holding disk file after a dump is partially
written to tape? Will Amanda keep the entire file, or just what
will be written next time around? And what if the holding disk
data is split into "chunks"?
Amanda keeps the entire dump, and it will be flushed en
Anyhow, I'd really like to know more about how the spanning actually
works. Is it documented anywhere? http://www.amanda.org/docs and FAQ
still say that the option does not exist...
Try http://wiki.zmanda.org/index.php/Splitting_dumps_across_tapes
Yes. Thanks. That's quite helpfu
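For anyone else searching the archives: the 2.5-era splitting options go in the dumptype. This is only a sketch with placeholder sizes and paths, not a configuration taken from this thread:

```
define dumptype spanning-tar {
    global
    program "GNUTAR"
    tape_splitsize 2 Gb             # write the dump in 2 GB chunks across tapes
    split_diskbuffer "/dumps/split" # placeholder scratch dir for buffering chunks
    fallback_splitsize 64 Mb        # in-memory chunk size if the buffer dir fails
}
```

The chunk size trades off wasted tape at end-of-tape against per-chunk overhead, so the right value depends on tape capacity.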
Hi Toralf,
First off, I rather like your approach to configuration files.
Good ;-)
A little research shows that the explicit test was introduced to plug
a security hole reported by PERL... See BUG #1353481 for more
information.
I see...
[ ... ]
I'm proposing an alternate solution
I'm trying to run "amstatus" on existing logfiles after upgrading from
version 2.4.4p3 to 2.5.0p2. Unfortunately, the command will most of the
time fail with a message like:
amstatus ks --file /dumps/amanda/ks/log/amdump.1
Using /dumps/amanda/ks/log/amdump.1 from Thu Jun 8 17:04:30 CEST 2006
y whether it is correct or not? (I mean,
you just look for /etc/amanda//amanda.conf...)
--
Toralf Lund
I haven't been following the posts to this list too closely, or bothered
to upgrade amanda, for some time (since our existing setup *works*...),
so I didn't find out until right now that tape spanning is supported in
the current release.
Anyhow, I'd really like to know more about how the spann
xible than bumpsize...
--
Toralf Lund
Any idea why I get the following?
fileserv:/usr/freeware/etc/openldap  1    32k dumping    0k ( 1.53%) (11:32:10)
This is from amstatus on a currently running dump, and the time is now
# date
Thu Feb 23 12:32:35 CET 2006
I mean, why does this dump take so long?
I get these very
Any idea why I get the following?
fileserv:/usr/freeware/etc/openldap  1    32k dumping    0k ( 1.53%) (11:32:10)
This is from amstatus on a currently running dump, and the time is now
# date
Thu Feb 23 12:32:35 CET 2006
I mean, why does this dump take so long? This is (as you can see
I have a disk containing all sorts of temporary data etc. that I haven't
included in the amanda config so far. Now I've found, however, that
there are *some* files on this disk that I want to back up after all. I
can quite easily set up a file matching pattern that would include all
those files
Something I've always wondered about:
Is it safe to run multiple instances of amdump simultaneously? I mean,
with different configs, but possibly the same hosts and disks?
- Toralf
Toralf Lund wrote:
I've been meaning to ask about this for a long time:
Does anyone here use AIT2 tapes, a.k.a. SDX-50C, for Amanda backup?
What tape length are you using? [ ... ]
I've now finally run amtape - after making absolutely sure H/W
compression was off - and it said:
[ ... ]
(*) I used to say "on all Linux versions", but it seems there
are different implementations in different versions.
Some systems can control the tapesettings with the file
/etc/stinit.def (see "man stinit" if that exists).
Yes. I think maybe you can do something like that on this
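For reference, a hypothetical /etc/stinit.def fragment in the format described by "man stinit"; the manufacturer and model strings below are placeholders and must match the drive's actual inquiry data:

```
# Hypothetical stinit.def entry: disable hardware compression at boot.
manufacturer=SONY model="SDX-500C" {
    mode1 blocksize=0 compression=0
}
```

stinit then applies these mode settings each time the st driver initialises the drive, so the compression state survives reboots.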
Paul Bijnens wrote:
Toralf Lund wrote:
Paul Bijnens wrote:
amtapetype will tell you too if hardware compression is on.
OK.
Does amanda have any built-in support for switching it off? I mean,
can any of the changer scripts or whatever do this? Or even amdump
itself?
No
Paul Bijnens wrote:
Toralf Lund wrote:
Ah, yes, of course. No, hardware compression is not supposed to be
on. But I'm not sure it isn't... In fact, now that you mention it, I
suspect it's on after all. I'll have a closer look. And I very much
doubt that the dr
Alexander Jolk wrote:
Toralf Lund wrote:
the tape size is specified as 50Gb, and that's more or less what the
"length" parameter in the entry says, but it seems to me that it isn't
actually possible to write that much data to these tapes. The maximum
seems to be clos
I've been meaning to ask about this for a long time:
Does anyone here use AIT2 tapes, a.k.a. SDX-50C, for Amanda backup? What
tape length are you using?
Please note that I'm not asking for the tapetype entry from the
amanda.org archives, as it does not seem quite correct. I mean, the tape
si
Toralf Lund wrote:
Paul Bijnens wrote:
Toralf Lund wrote:
Other possible error sources that I think I have eliminated:
1. tar version issues - since gzip complains even if I just uncompress
and send the data to /dev/null, or use the -t option.
2. Network transfer issues. I get errors even
Joshua Baker-LePain wrote:
On Thu, 21 Oct 2004 at 6:19pm, Toralf Lund wrote
This may be related to our backup problems described earlier:
I just noticed that during a dump running just now, I have
# ps -f -C gzip
UID        PID  PPID  C STIME TTY          TIME CMD
amanda    3064   769  0 17:18
This may be related to our backup problems described earlier:
I just noticed that during a dump running just now, I have
# ps -f -C gzip
UID        PID  PPID  C STIME TTY          TIME CMD
amanda    3064   769  0 17:18 pts/5    00:00:00 /bin/gzip --best
amanda    3129   773  0 17:44 pts/5    00:00:
Paul Bijnens wrote:
Toralf Lund wrote:
I just noticed the following:
$ amadmin ks/incr force fileserv /scanner
amadmin: fileserv:/scanner/plankart is set to a forced level 0 at
next run.
amadmin: fileserv:/scanner/golg is set to a forced level 0 at next run.
amadmin: fileserv:/scanner is set to a
I just noticed the following:
$ amadmin ks/incr force fileserv /scanner
amadmin: fileserv:/scanner/plankart is set to a forced level 0 at next run.
amadmin: fileserv:/scanner/golg is set to a forced level 0 at next run.
amadmin: fileserv:/scanner is set to a forced level 0 at next run.
Why did this
from Paul Bijnens <[EMAIL PROTECTED]> -
From: Paul Bijnens <[EMAIL PROTECTED]>
To: Toralf Lund <[EMAIL PROTECTED]>
Cc: Amanda Mailing List <[EMAIL PROTECTED]>
Subject: Re: Multi-Gb dumps using tar + software compression (gzip)?
Date: Wed, 20 Oct 2004 13:59:31 +0200
Message-ID:
Paul Bijnens wrote:
Toralf Lund wrote:
Other possible error sources that I think I have eliminated:
1. tar version issues - since gzip complains even if I just uncompress
and send the data to /dev/null, or use the -t option.
2. Network transfer issues. I get errors even with server
Gene Heskett wrote:
On Tuesday 19 October 2004 11:10, Paul Bijnens wrote:
Michael Schaller wrote:
I found out that this was a problem of my tar.
I backed up with GNUTAR and "compress server fast".
AMRESTORE restored the file but TAR (on the server!) gave some
horrible messages like yours.
I
t
not only that, since gzip reports errors, too. I get
dd if=00010.raid2._scanner4.7 bs=32k skip=1 | gzip -t
124701+0 records in
124701+0 records out
gzip: stdin: invalid compressed data--crc error
gzip: stdin: invalid compressed data--length error
Greets
Michael
Toralf Lund schrieb:
Since I'
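The dd-plus-gzip check used in this thread can be tried end to end on a synthetic image (all file names below are made up): prepend a 32 KiB header block to a gzip stream, the way an Amanda dump file is laid out, then skip the header and let gzip verify the CRC:

```shell
# Build a fake dump image: 32 KiB of header padding, then a gzip stream.
printf 'sample payload' | gzip -c > /tmp/payload.gz
( dd if=/dev/zero bs=32k count=1; cat /tmp/payload.gz ) 2>/dev/null > /tmp/image
# Skip the header and let gzip check the stream's CRC, as done above.
dd if=/tmp/image bs=32k skip=1 2>/dev/null | gzip -t && echo "gzip stream OK"
```

On a real dump file, a CRC or length error at this stage points at the stored data itself rather than at tar.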
Alexander Jolk wrote:
Joshua Baker-LePain wrote:
I think that OS and utility (i.e. gnutar and gzip) version info would be
useful here as well.
True, forgot that. I'm on Linux 2.4.19 (Debian woody), using GNU tar
1.13.25 and gzip 1.3.2. I have never had problems recovering files from
huge
Paul Bijnens wrote:
Jukka Salmi wrote:
Paul Bijnens --> amanda-users (2004-10-18 22:14:10 +0200):
Before the chg-disk tape changer was written, I used the chg-multi
changer with the file-driver. It's a little more complicated
to configure, but the advantage is that it finds and load automatically
Alexander Jolk wrote:
Toralf Lund wrote:
1. Dumps of directories containing several Gbs of data (up to roughly
20Gb compressed in my case.)
2. Use dumptype GNUTAR.
3. Compress data using "compress client fast" or "compress server fast".
If you do, what exactly
Since I'm still having problems gunzip'ing my large dumps - see separate
thread, I was just wondering:
Some of you people out there are doing the same kind of thing, right? I
mean, have
1. Dumps of directories containing several Gbs of data (up to roughly
20Gb compressed in my case.)
2
Jukka Salmi wrote:
Hi,
I'm using the chg-disk tape changer. When restoring files using
amrecover, after adding some files and issuing the extract command,
amrecover tells me what tapes are needed, and asks me to "Load tape
now". I load the needed tape using amtape, and tell amrecover
to continue.
Toralf Lund wrote:
Alexander Jolk wrote:
Toralf Lund wrote:
[...] I get the same kind of problem with harddisk dumps as well as
tapes, and as it now turns out, also for holding disk files. And the
disks and tape drive involved aren't even on the same chain.
Actually, I'm starting
Alexander Jolk wrote:
Toralf Lund wrote:
[...] I get the same kind of problem with harddisk dumps as well as
tapes, and as it now turns out, also for holding disk files. And the
disks and tape drive involved aren't even on the same chain.
Actually, I'm starting to suspect that gzip
The fun part here is that I have two different tars and two different
gzips - the ones supplied with the OS and "SGI freeware" variants
installed on /usr/freeware (downloaded from http://freeware.sgi.com/)
Do not use the OS supplied tar! You'll hit a bug.
Yes. I do seem to remembe
Gene Heskett wrote:
On Wednesday 13 October 2004 11:07, Toralf Lund wrote:
Jean-Francois Malouin wrote:
[ snip ]
Actually, I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have
a fairly old version, 1.2.4, I
Jean-Francois Malouin wrote:
[ snip ]
Actually, I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have a
fairly old version, 1.2.4, I think (that's the latest one supplied by
SGI, unless they have upgraded it very recently.)
Alexander Jolk wrote:
Toralf Lund wrote:
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
37800+0 records in
37800+0 records out
gzip: stdin: invalid compressed data--crc error
tar: Child returned status 1
tar: Error exit delayed from previous errors
I'v
I'm having serious problems with full restore of a GNUTAR dump. Simply
put, if I do amrestore, then tar xvf , tar will exit with
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: Error exit delayed from previous errors
after extracting most, but not all, files -
Is anyone here using the "includefile" directive in their config? How
exactly does it work? Does it apply to all config files, or just
amanda.conf? What can the file contain - full config info, or just
whatever is not set in the file including it? If I have two configs, can
I have one amanda.co
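From what I understand of the directive (the paths below are hypothetical), includefile splices another file into amanda.conf at the point where it appears, so shared settings can be factored out while each config keeps its own overrides:

```
# /etc/amanda/daily/amanda.conf (hypothetical layout)
includefile "/etc/amanda/common/global.conf"   # shared tapetypes, dumptypes, ...

# settings below override or extend whatever the included file defined
org       "daily"
tpchanger "chg-disk"
```

Later definitions win, so the natural arrangement is common material in the included file and per-config specifics after the directive.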
Following our recent reorg of amanda configs, I've considered moving
some data from index, curinfo and possibly the log of one config to the
datadirs of another. The object would be to make amrecover think that
certain tapes were written using this other config, although they were
really dumped
Paul Bijnens wrote:
Toralf Lund wrote:
Ah, I see. So it won't actually multiply tape size by runtapes
when trying to figure out how much it can write... I'm not sure
the functionality is of much use to me then, but perhaps I could
cheat and pretend the tapes are "runtapes"
Ah, I see. So it won't actually multiply tape size by runtapes when
trying to figure out how much it can write... I'm not sure the
functionality is of much use to me then, but perhaps I could cheat and
pretend the tapes are "runtapes" times larger than they really are?
Won't buy you a
On Thu, Oct 23, 2003 at 05:37:02PM +0200, Toralf Lund wrote:
runspercycle does not need to be changed. "runtapes" means that
for each run up to that number of tapes may be used (note: not "must").
You probably have to increase your tapecycle to cover the same
d
runspercycle does not need to be changed. "runtapes" means that
for each run up to that number of tapes may be used (note: not "must").
You probably have to increase your tapecycle to cover the same
dumpcycle(s), because you'll burn twice as many tapes for each run.
(well, "burn", hop
Paul Bijnens wrote:
Toralf Lund wrote:
I'm thinking about using more than one tape, i.e. set "runtapes"
parameter to a value > 1, for my updated archival setup. Is there
anything special I need to keep in mind when doing this? Also, how do
I set up runspercycle in this cas
I'm thinking about using more than one tape, i.e. set "runtapes"
parameter to a value > 1, for my updated archival setup. Is there
anything special I need to keep in mind when doing this? Also, how do I
set up runspercycle in this case? Is it the total number of tapes
runspercycle * runtapes, o
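As a rough sketch of the arithmetic involved (the numbers are purely illustrative): runspercycle stays the number of runs per dumpcycle, while runtapes only caps the tapes used per run, so tapecycle is the setting that has to grow:

```
# amanda.conf sketch -- illustrative numbers only
dumpcycle    7 days
runspercycle 5        # runs per dumpcycle, unchanged by runtapes
runtapes     2        # up to 2 tapes may be used per run
tapecycle    25 tapes # >= runspercycle * runtapes * (cycles kept), roughly
```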
Reviewing the amanda config again in conjunction with a tape format
update...
I forget (and list search doesn't return anything conclusive): How do
you people recommend handling archival runs? There are two obvious ways:
1. Via a special config.
2. Using the normal config, but special labels
It looks like Amanda will count *all* tapes written after the tape in
question, even the ones marked as "no-reuse", when comparing count with
tapecycle to determine if a tape may be overwritten. Is this observation
correct? Should "no-reuse" tapes be included like that?
--
Toralf
As far as I can tell, amrecover won't work unless
1. Log file from the backup you are trying to recover is still present
2. DLE is still in the disklist
Why? Shouldn't amrecover work from the index alone?
- Toralf
Eric Siegerman wrote:
On Mon, Jul 14, 2003 at 11:07:10AM +0200, Toralf Lund wrote:
I've been getting a lot of
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
On Mon, Jul 14, 2003 at 01:44:26PM +0200, Toralf Lund wrote:
Note that I've now gone back to amanda-
On Thursday 14 August 2003 03:21, Toralf Lund wrote:
>> On Wednesday 13 August 2003 02:55, Toralf Lund wrote:
>> >I suddenly realised that I have a lot of dump directories on my
>> > holding disk, even though dumps have generally been successful.
>> > The below &
On Tue, Aug 12, 2003 at 04:08:47PM -0400, LaValley, Brian E wrote:
> Can I list an nfs mounted disk in the disklist file?
>
> I only ask because I am having trouble compiling for Solaris 8.
The disklist's contents don't affect compiling one way or the
other. What's the specific problem you're seei
On Wednesday 13 August 2003 02:55, Toralf Lund wrote:
>I suddenly realised that I have a lot of dump directories on my
> holding disk, even though dumps have generally been successful. The
> below "amflush" output should illustrate this.
>
>
>-sh-2.05b$ /usr/sbin/amflus
1 amanda disk0 Aug 13 00:13
20030812/fileserv._scanner_golg.0.tmp
--
Toralf Lund
something related to the holding disk handling or taping
of images has changed since 2.4.4.
- Toralf
--
Martin Hepworth
Senior Systems Administrator
Solid State Logic Ltd
+44 (0)1865 842300
Toralf Lund wrote:
I've mentioned this earlier, but not a lot came out of it:
I've been get
I've mentioned this earlier, but not a lot came out of it:
I've been getting a lot of
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
lately. This does not, however, happen all the time, and not for specific
tapes, either. Also, I can't find any error messages related to the tape
devic
resting since I *know* it works. dd the exact data that fails,
perhaps...
Toralf Lund wrote:
Just got
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
during a backup run. - Must be something wrong wi
Just got
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
during a backup run. - Must be something wrong with the tape or tape
drive, I thought, but it turns out that
1. I get this error for various different tapes when trying to amflush the
dump to them.
2. I can write other dumps to th
What is the right and proper way to unschedule the dump of a DLE? I
thought the answer would be "amadmin delete, then remove DLE from
disklist", but it seems to me that this will prevent me from amrecover'ing
the DLE from existing backups, which is something I want to be able to do.
Notice that
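One approach often suggested for this (a sketch, not necessarily what the original poster settled on) is to leave the DLE in the disklist but switch it to a dumptype with "strategy skip", so no new dumps are made while amrecover can still find the old indexes:

```
define dumptype retired {
    global
    strategy skip   # never dump, but keep the DLE visible for recovery
}
```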
The addition of "autoflush" option in 2.4.3 was really very helpful, but
I'm still not satisfied. What I really want, is to autoflush when one or
two smallish DLE dumps are left on the holding disk, but not if the entire
taper operation failed due to tape error or something.
Comments?
--
- Tora
Hi there,
I've got a problem with amrecover on a Debian GNU/Linux machine.
amrecover reports:
No index records for disk for specified date
If date correct, notify system administrator
In the debug files in /tmp I can't find any other information. The debug file is
there but the information is the same s
On 2003.04.03 16:54, Jon LaBadie wrote:
On Thu, Apr 03, 2003 at 09:51:25AM +0200, Toralf Lund wrote:
> As I've indicated earlier, I want to write full backups to tape, but
keep
> some or all of the incremental on the harddisk, so I've set up two
> different configs; one with &q
-Original Message-
From: Valeria Cavallini [mailto:[EMAIL PROTECTED]
Sent: Thursday, 3 April 2003 10:59
To: [EMAIL PROTECTED]
Subject: backup with exclude
Hi,
I've read some threads on the hackers newsgroup and found that someone
talks about the exclude option to exclude more than o
't just one way
of choosing what to include on the backup?
--
Toralf Lund <[EMAIL PROTECTED]> +47 66 85 51 22
ProCaptura AS +47 66 85 51 00 (switchboard)
http://www.procaptura.com/~toralf +47 66 85 51 01 (fax)
chg-multi and a
chg-multi.conf where all slots point to file:. The
disklist is shared between the configs. The question is simply, can they
share the index as well? Will everything work all right if I simply
specify the same indexdir for both configs?
[ ... ]
> >What's the output of 'amadmin ks find mercedes-benz /usr/people/jfo'?
> Trying this helped me figure out what was wrong ;-) The command would
list
> the expected dates and tape names when executed as root, but as amanda,
I
> got "No dump to list", which made it quite obvious that the pe
On Tue, Feb 11, 2003 at 05:31:04PM +0100, Toralf Lund wrote:
> I'm getting error message
>
> No index records for disk for specified date
>
> when trying to recover a certain DLE using amrecover (version 2.4.3.)
The
> full output from the session + some of the debug messag
www-server": "localhost:/var/www"
guess_disk: 6: 8: "/imgproc": "raid1:/imgproc"
guess_disk: 6: 9: "/imgproc2": "imgproc:/imgproc2"
guess_disk: 6: 9: "/imgproc3": "imgproc:/imgproc3"
guess_disk: 6: 9: "/scanner4": "raid2:/scanner4"
amrecover: pid 48822211 finish time Tue Feb 11 17:25:35 2003
I seem to remember that something like this has been discussed before, but
I couldn't find anything in the archives ;-/
Anyhow, I'm thinking about setting up a config with full backups to tape
and incrementals to harddisk - due to limited tape capacity (yes, I know
incrementals are usually smal
version
2.4.3 server and 2.4.2p2 clients
I've had auth problems with one of the hosts I'm backing up for some
time; amcheck says
# amcheck -c ks
Amanda Backup Client Hosts Check
ERROR: bmw: [access as amanda not allowed from root@]
amandahostsauth failed
I found out what was going on after all; it tur
'm missing.
Help, anyone?
>amrecover will in my setup make a completely wrong guess about what disk
>to consider at startup in most cases. The example output included below
>should illustrate the problem; /usr/freeware/apache is not on the /u
>filesystem, and it has a separate disklist entry. Any ideas what is
going
>on?
>
k set to /u.
Invalid directory - /usr/freeware/apache
amrecover>