Re: level 1 does not behave as level 1 - last message(?)

2014-02-26 Thread Charles Stroom
And this evening the first scheduled backup ran, and it failed miserably
with exactly the same symptoms as before: all level 1 DLEs are equal in
size to a level 0, with the exception of only 2 DLEs (and heaven may
know why!).

I think I give up.

Regards, Charles


On Wed, 26 Feb 2014 00:54:09 +0100
Charles Stroom  wrote:

> As said, I changed to tar 1.27 and now the problem has disappeared.  I
> cleaned up everything that was necessary and started with a full backup
> on all DLEs.  There was a strange tar message on the single DLE at the
> external client stremen ("... /usr/local/bin/tar exited with status
> 2 .."), but as this was a DLE which also worked normally with the
> previous version of tar on stremen, I decided to revert that
> single dumptype to not using /usr/local/bin/tar (1.27).
> 
> I forced all fiume (the server/client) DLEs to level 1 and did a
> subsequent amdump.  All worked fine, and the single stremen DLE was
> also OK on level 0.
> 
> Then I forced everything to level 1, including stremen, and did a 3rd
> amdump.  Everything OK!  The small problem with the 1.27 tar on stremen
> will have to wait.  I checked a file with stat at various stages, but
> the times are not affected by tar 1.27 at all; they don't change a
> bit.
> 
> Because tar 1.26 worked without problems on my previous suse 11.4 with
> kernel 2.6.37 but does not want to work on my current suse 13.1
> with its 3.11.10 kernel, I suppose there is some incompatibility
> between them.
> 
> I keep my fingers crossed.  Again many thanks for the assistance.
> 
> Regards, Charles
> 
> 
> 
> 
> On Tue, 25 Feb 2014 17:48:13 +0100
> Charles Stroom  wrote:
> 
> > Jean-Louis,
> > 
> > It's getting complicated, because in my previous post below I
> > reported that with option "--atime-preserve=system" a level 1 tar
> > incremental worked.  However, some time later it didn't work any
> > more, so it is really "unpredictable".
> > 
> > I have repeated part of the test below with some stat calls in
> > between.  Results are in the attachment tar_1.26_incremental.  It
> > seems that in the first create, tar changes the atime.
> > 
> > Because I got no good results so far, I have installed the latest
> > tar 1.27 from GNU, which is in /usr/local/bin, and repeated the
> > tests from above.  With 1.27 no times are changed at level 0 or
> > level 1.  Results are in the attachment tar_1.27_incremental.
> > 
> > Both tests actually did (now) a proper incremental tar, as can be
> > seen from the size of the created files:
> > -rw-r--r-- 1 charles users  379238400 Feb 25 17:23 wine_docs_0.tar
> > -rw-r--r-- 1 charles users      71680 Feb 25 17:24 wine_docs_1.tar
> > -rw-r--r-- 1 charles users      13405 Feb 25 17:24 wine_docs.snar
> > -rw-r--r-- 1 charles users       1785 Feb 25 17:28 tar_1.26_incremental
> > -rw-r--r-- 1 charles users  379238400 Feb 25 17:36 wine_docs_0_1.27.tar
> > -rw-r--r-- 1 charles users      71680 Feb 25 17:36 wine_docs_1_1.27.tar
> > -rw-r--r-- 1 charles users      13405 Feb 25 17:36 wine_docs_1.27.snar
> > -rw-r--r-- 1 charles users       2100 Feb 25 17:39 tar_1.27_incremental
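> > 
> > (For reference, a listed-incremental test of this shape with GNU tar
> > looks roughly like the following; the source directory is a placeholder
> > and the exact options are a reconstruction, not the commands actually
> > used:
> > 
> >   tar --create --file wine_docs_0.tar \
> >       --listed-incremental=wine_docs.snar \
> >       --atime-preserve=system /path/to/wine_docs   # level 0, creates the snar
> > 
> >   tar --create --file wine_docs_1.tar \
> >       --listed-incremental=wine_docs.snar \
> >       --atime-preserve=system /path/to/wine_docs   # level 1, only changed files
> > 
> > A level 1 archive as large as the level 0 means tar considered every
> > file changed, which is the symptom discussed in this thread.)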
> > 
> > 
> > I am now continuing with amanda, but testing takes much longer.  My
> > best bet now is to use 1.27, and I will use amgtar to achieve
> > that.
> > 
> > Regards, Charles
> > 
> > 
> >
> -- 
> Charles Stroom
> email: charles at no-spam.stremen.xs4all.nl (remove the "no-spam.")


-- 
Charles Stroom
email: charles at no-spam.stremen.xs4all.nl (remove the "no-spam.")


Re: Release of 3.3.6?

2014-02-26 Thread Jean-Louis Martineau

On 02/26/2014 11:45 AM, Steven Backus wrote:

Jean-Louis writes:

...This is already fixed in SVN, the fix will be in 3.3.6

Do we have a date for the blessed event?  I have a few patches I
need to make sure get in there and I'd like to take that off my
to-do list.


Our plan is to release 3.3.6 in April.
Post your patch as soon as possible.

Jean-Louis


Re: tar suddenly hanging, backups near 100% fail

2014-02-26 Thread Markus Iturriaga Woelfel
I missed whether you tried using "strace" and "lsof" (or the /proc file system) 
to see what system call and/or what file tar might be stuck on?
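
For example, with the stuck tar's PID (12345 here as a stand-in):

  strace -p 12345              # which system call, if any, it is sitting in
  ls -l /proc/12345/fd         # which files it currently has open
  lsof -p 12345                # same information via lsof
  cat /proc/12345/wchan; echo  # kernel function it is waiting in, if blocked

If strace shows no system calls at all while the process burns 100% CPU, tar is
most likely spinning in userspace rather than blocked on a file.
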
---
Markus A. Iturriaga Woelfel, IT Administrator
Department of Electrical Engineering and Computer Science
University of Tennessee
Min H. Kao Building, Suite 424 / 1520 Middle Drive
Knoxville, TN 37996-2250
mitur...@eecs.utk.edu / (865) 974-3837
http://twitter.com/UTKEECSIT


Re: Best practise for archive tapes with vtapes?

2014-02-26 Thread Stefan G. Weichinger
On 26.02.2014 19:03, Markus Iturriaga Woelfel wrote:
> Stefan,
> 
>> Maybe someday you could share your script? ;-)
> 
> If I ever find the time to clean up the script, I'll be happy to
> share it. Not sure how useful it'd be to others because it's specific
> to our situation.

Sometimes it already helps to get some kind of template ... but you can
judge that better than me ... ;-)

>> For my current job I need/should use external hdds as target media,
>> so I wonder if I should simply define a separate changer "vault" in
>> parallel, with vtapes on these disks.
>> 
> 
> That's what we did with our vault setup. We have two changers, one is
> a "chg-disk:VTAPEROOT" type and one is a "chg-robot". In your case,
> they'd probably both be chg-disk?

Yep. I already have such a setup on one of my servers ... for testing this.

>> A script could check:  if week-number is even, make sure to mount
>> disk2 ... if week-number is not even, mount disk1 ...
>> 
>> disk1 contains vtapes 1-10 ... (as example), disk2 contains vtapes
>> 11-20 ... then amvault stuff from config daily to changer vault
>> ...
>> 
>> Would that work? Does anyone do it like that?
>> 
>> Additional question (yes, I am amvault-newbie): how to get back
>> stuff from these vault-tapes?
> 
> Basically, this isn't too different from our setup. You can restore
> from the vault tapes (or vtapes in your case) just like you would
> from your main vtapes. They show up in normal amadmin find operations
> and show you the information of the original dump (not the date you
> "vaulted" the backup). This means the same DLE and dump date/level
> will show up more than once in your amadmin find, e.g.:
> 
> 2014-01-29 10:55:57 HOST.eecs.utk.edu  /a/b/c  0  EECS-05         62  1/1 OK
> 2014-01-29 10:55:57 HOST.eecs.utk.edu  /a/b/c  0  EECS-VAULT-058  78  1/1 OK
> 
> You can restore from the vaulted "tapes" the same as you could from
> your other vtapes (using amrecover, amfetchdump, or amrestore, or
> even manually).

Good to hear ... I didn't understand that in the first place.

> I've never used removable drives for backups, so you may need to
> worry about making them available in the right order, etc. when
> restoring.

Yes. Some kind of helper script is needed here, plus cronjobs etc.
I already have some basic ideas and will try to set that up asap.


Greets, Stefan


Re: Best practise for archive tapes with vtapes?

2014-02-26 Thread Markus Iturriaga Woelfel
Stefan, 

> Maybe someday you could share your script? ;-)

If I ever find the time to clean up the script, I'll be happy to share it. Not 
sure how useful it'd be to others because it's specific to our situation.

> 
> For my current job I need/should use external hdds as target media, so I
> wonder if I should simply define a separate changer "vault" in parallel,
> with vtapes on these disks.
> 

That's what we did with our vault setup. We have two changers, one is a 
"chg-disk:VTAPEROOT" type and one is a "chg-robot". In your case, they'd 
probably both be chg-disk?

> A script could check:  if week-number is even, make sure to mount disk2
> ... if week-number is not even, mount disk1 ...
> 
> disk1 contains vtapes 1-10 ... (as example), disk2 contains vtapes 11-20
> ... then amvault stuff from config daily to changer vault ...
> 
> Would that work? Does anyone do it like that?
> 
> Additional question (yes, I am amvault-newbie): how to get back stuff
> from these vault-tapes?

Basically, this isn't too different from our setup. You can restore from the 
vault tapes (or vtapes in your case) just like you would from your main vtapes. 
They show up in normal amadmin find operations and show you the information of 
the original dump (not the date you "vaulted" the backup). This means the same 
DLE and dump date/level will show up more than once in your amadmin find, e.g.:

2014-01-29 10:55:57 HOST.eecs.utk.edu  /a/b/c  0 EECS-05  62   1/1 OK 
2014-01-29 10:55:57 HOST.eecs.utk.edu /a/b/c  0 EECS-VAULT-058   78   1/1 OK 

You can restore from the vaulted "tapes" the same as you could from your other 
vtapes (using amrecover, amfetchdump, or amrestore, or even manually). 
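
For example, with the find output above, fetching that DLE with amfetchdump
would look roughly like this (run as the Amanda user; CONFIG is a placeholder
for your config name):

  amfetchdump CONFIG HOST.eecs.utk.edu /a/b/c 2014-01-29

amfetchdump then locates the needed volume(s) via the configured changer,
whether that is the original vtape or the vaulted copy.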

I've never used removable drives for backups, so you may need to worry about 
making them available in the right order, etc. when restoring. 

Markus
---
Markus A. Iturriaga Woelfel, IT Administrator
Department of Electrical Engineering and Computer Science
University of Tennessee
Min H. Kao Building, Suite 424 / 1520 Middle Drive
Knoxville, TN 37996-2250
mitur...@eecs.utk.edu / (865) 974-3837
http://twitter.com/UTKEECSIT


Re: tar suddenly hanging, backups near 100% fail

2014-02-26 Thread Gene Heskett
On Wednesday 26 February 2014 12:19:18 Gene Heskett did opine:

> On Wednesday 26 February 2014 10:39:13 Gene Heskett did opine:
> > Greetings;
> > 
> > 3 backups ago, with no change to the amanda.conf in months, I have
> > awakened to a hung tar task using 100% of a core, more than 5 hours
> > after it should have completed.
> > 
> > It is in that state now.  How can I find what is causing this
> > blockage? Here is the report from yesterday's attempt, received after
> > I had used htop to send this stuck tar instance a normal quit signal.
> > 
> > These dumps were to tape Dailys-9.
> > The next 2 tapes Amanda expects to use are: Dailys-10, Dailys-11.
> > 
> > FAILURE DUMP SUMMARY:
> >   planner: ERROR Some estimate timeout on coyote, using server estimate if possible
> > coyote /CoCo lev 0  FAILED [too many dumper retry: [request failed: Connection timed out]]
> > coyote /GenesAmandaHelper-0.61 lev 1  FAILED [too many dumper retry: [request failed: Connection timed out]]
> > coyote /home lev 2  FAILED [too many dumper retry: [request failed: Connection timed out]]
> > coyote /lib lev 0  FAILED [disk /lib, all estimate timed out]
> > coyote /opt lev 0  FAILED [disk /opt, all estimate timed out]
> > coyote /root lev 0  FAILED [disk /root, all estimate timed out]
> > coyote /sbin lev 0  FAILED [disk /sbin, all estimate timed out]
> > coyote /var lev 0  FAILED [disk /var, all estimate timed out]
> > coyote /usr/bin lev 0  FAILED [disk /usr/bin, all estimate timed out]
> > coyote /usr/dlds/misc lev 0  FAILED [disk /usr/dlds/misc, all estimate timed out]
> > coyote /usr/dlds/tgzs lev 0  FAILED [disk /usr/dlds/tgzs, all estimate timed out]
> > coyote /usr/dlds/books lev 0  FAILED [disk /usr/dlds/books, all estimate timed out]
> > coyote /usr/include lev 0  FAILED [disk /usr/include, all estimate timed out]
> > coyote /usr/lib lev 0  FAILED [disk /usr/lib, all estimate timed out]
> > coyote /usr/libexec lev 0  FAILED [disk /usr/libexec, all estimate timed out]
> > coyote /usr/movies lev 0  FAILED [disk /usr/movies, all estimate timed out]
> > coyote /usr/local lev 0  FAILED [disk /usr/local, all estimate timed out]
> > coyote /usr/music lev 0  FAILED [disk /usr/music, all estimate timed out]
> > coyote /usr/pix lev 0  FAILED [disk /usr/pix, all estimate timed out]
> > coyote /usr/sbin lev 0  FAILED [disk /usr/sbin, all estimate timed out]
> > coyote /usr/share lev 0  FAILED [disk /usr/share, all estimate timed out]
> > coyote /usr/src lev 0  FAILED [disk /usr/src, all estimate timed out]
> > coyote /usr/games lev 0  FAILED [disk /usr/games, all estimate timed out]
> > coyote /CoCo lev 0  FAILED Got empty header
> > 
> >   coyote /CoCo lev 0  FAILED Got empty header
> >   coyote /GenesAmandaHelper-0.61 lev 1  FAILED Got empty header
> >   coyote /GenesAmandaHelper-0.61 lev 1  FAILED Got empty header
> >   coyote /boot lev 0  FAILED Got empty header
> >   coyote /home lev 2  FAILED Got empty header
> >   coyote /home lev 2  FAILED Got empty header
> > 
> > However, at the bottom of the report, the remote systems were backed
> > up just fine:
> > 
> > lathe  /home            1  1  0   5.6  0:00  169.9  0:36     1.2
> > lathe  /usr/lib/amanda  1  0  0   3.3  0:05    0.4  0:00    10.0
> > lathe  /usr/local       1  0  0   2.0  0:05    0.4  0:00    10.0
> > lathe  /var/lib/amanda  1  0  0  22.0  0:00  354.6  0:00   220.0
> > shop   /home            3  4  0   8.2  0:07   43.6  0:00  3080.0
> > shop   /usr/lib/amanda  1  0  0   3.3  0:05    0.4  0:00    10.0
> > shop   /usr/local       1  0  0   2.0  0:05    0.4  0:00    10.0
> > shop   /var/lib/amanda  1  2  0  17.8  0:01  584.4  0:00  2950.0
> > 
> > (brought to you by Amanda version 4.0.0alpha.svn.4761)
> > 
> > Now, the thing that _has_ changed is the running kernel, from a 3.12.9
> > that seemed to work well with amanda, to a 3.13.5 that I had one heck
> > of a time building because of Kconfig dependency errors that caused
> > all of the many "media" options to disappear from the "make ?config"
> > operations, and it is likely this one could be missing something that
> > tar needs.
> > 
> > So, what, from this, would be the most likely candidate? The config.gz
> > is attached.
> > 
> > Thank you very much for any insight that can be determined from this.
> > 
> > Cheers, Gene
> 
> Ping!  In the meantime I have rebuilt this kernel 3 times, getting an
> unbootable kernel once, but without finding the option that seems to
> throw tar for a forever loop.
> 
> FWIW, when tar is in that state, the only drive activity is related to
> fetchmail activity, which loops every 3 minutes; tar apparently gets
> stuck hammering on something it can't access.  And yet, the DLE it
> appears to be stuck on while attempting an estimate, /lib, can be
> listed with an ls -laR with no problems.

Re: is part_cache_max_size shared?

2014-02-26 Thread Michael Stauffer
Thanks.  5 processes failed to terminate the first time, so I ran it again.
Seems all good now.

-M


On Wed, Feb 26, 2014 at 11:19 AM, Jean-Louis Martineau  wrote:

> On 02/26/2014 11:17 AM, Michael Stauffer wrote:
>
>> Thanks! That's very good to know.
>>
>> As far as aborting the current amdump, do I just SIGINT it and then run
>> amcleanup?
>>
>
> Use 'amcleanup -k CONF'; it should kill all processes on the amanda
> server.  Some processes on the amanda client might not be killed.
>
> Jean-Louis
>
>
>> -M
>>
>>
>>
>> On Wed, Feb 26, 2014 at 7:22 AM, Jean-Louis Martineau <
>> martin...@zmanda.com > wrote:
>>
>> On 02/25/2014 04:24 PM, Michael Stauffer wrote:
>>
>> Amanda 3.3.4
>>
>> Hi,
>>
>> If amanda is using memory cache for splits, is the cache
>> shared between simultaneous amdump runs, or does each try to
>> grab that much memory?
>>
>> I'm setup like this:
>>
>> part_cache_type memory
>>  part_cache_max_size 20G
>>
>> and with
>>
>>   taper-parallel-write 2
>>
>> and
>>
>>   inparallel 10
>>
>> Thanks
>>
>> -M
>>
>>
>> Each taper-parallel-write allocates part_cache_max_size of memory.
>>
>> Jean-Louis
>>
>>
>>
>


Release of 3.3.6?

2014-02-26 Thread Steven Backus
> Jean-Louis writes:
> 
> ...This is already fixed in SVN, the fix will be in 3.3.6

Do we have a date for the blessed event?  I have a few patches I
need to make sure get in there and I'd like to take that off my
to-do list.

Steve
-- 
Steven J. BackusComputer Systems Manager
University of Utah  E-Mail:  steven.bac...@utah.edu
Genetic EpidemiologyAlternate:  bac...@math.utah.edu
391 Chipeta Way -- Suite D  Office:  801.587.9308
Salt Lake City, UT 84108-1266   http://www.math.utah.edu/~backus


Re: Best practise for archive tapes with vtapes?

2014-02-26 Thread Stefan G. Weichinger
On 26.02.2014 15:31, Markus Iturriaga Woelfel wrote:
> Here is what we do:
> 
> We use vtapes in our Amanda setup. I wrote a small script that finds
> the most recent level 0 backups and uses amvault to dump those to
> physical tapes. I run this once a month and then archive those tapes.
> The script was hacked together and if I get the time I'd like to
> change it to use Amanda's Perl API rather than calling Amanda
> commands directly, but it has been working for us. Basically, it
> constructs a command line for amvault that looks like this:
> 
> /usr/sbin/amvault -otapetype="DellPV124-DLT4" -otapecycle=1
> -osend-amreport-on=never --dst-changer robot --label-template
> EECS-VAULT-%%% CONFIG HOST DISK DATE LEVEL HOST DISK DATE LEVEL ...
> 
> After the vault, it constructs an email that contains an amreport
> (with -otapetype set to our VAULT tapes) and attaches a PDF with the
> 2-hole punch label version of the report. We slightly modified the
> template provided. We then archive the tapes and file the report.
> This should allow us to do a restore using just standard Unix tools
> even if we somehow lost our entire amanda server.

Thanks a lot for your reply and the description of your setup.

Maybe someday you could share your script? ;-)

For my current job I need/should use external hdds as target media, so I
wonder if I should simply define a separate changer "vault" in parallel,
with vtapes on these disks.

A script could check:  if week-number is even, make sure to mount disk2
... if week-number is not even, mount disk1 ...

disk1 contains vtapes 1-10 ... (as example), disk2 contains vtapes 11-20
... then amvault stuff from config daily to changer vault ...
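
Roughly, such a wrapper could look like this (mount points, label template and
config/changer names are placeholders; an untested sketch, and it ignores
details like running amvault as the backup user):

  #!/bin/sh
  # even ISO week -> disk2 (vtapes 11-20), odd week -> disk1 (vtapes 1-10)
  week=$(date +%V)
  week=${week#0}                      # avoid octal trouble with weeks 08/09
  if [ $((week % 2)) -eq 0 ]; then
      disk=/dev/disk/by-label/vault-disk2
  else
      disk=/dev/disk/by-label/vault-disk1
  fi
  mountpoint -q /mnt/vault || mount "$disk" /mnt/vault
  # changer "vault" is defined as chg-disk:/mnt/vault/vtapes in amanda.conf;
  # dumpspecs (HOST DISK DATE LEVEL) as in Markus' amvault command
  amvault -osend-amreport-on=never --dst-changer vault \
          --label-template ARCHIVE-%%% daily HOST DISK DATE LEVEL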

Would that work? Does anyone do it like that?

Additional question (yes, I am amvault-newbie): how to get back stuff
from these vault-tapes?

Somehow I'm missing a nice howto document for this ...

Stefan


Re: error: "ERROR: fiume.localnet usr: data-path is AMANDA but device do not support it"

2014-02-26 Thread Jean-Louis Martineau

Charles,

It erroneously reports 'data-path is AMANDA but device do not support it' 
because it did not find a tape to use.

This is already fixed in SVN, the fix will be in 3.3.6

Jean-Louis

On 02/18/2014 04:30 AM, Charles Stroom wrote:

I noticed the following when I do an amcheck:
"
Amanda Tape Server Host Check
-
Holding disk /work/amanda/dumps: 29732864 KB disk space available,
using 29528064 KB slot 1: volume 'dds4-03'
Will write to volume 'dds4-03' in slot 1.
NOTE: skipping tape-writable test
Server check took 3.753 seconds

Amanda Backup Client Hosts Check

Client check: 2 hosts checked in 2.384 seconds.  0 problems found.

(brought to you by Amanda 3.3.5)
"

This looks fine, but if I only take the tape out (single dds4 tape
drive, no changer), I get:
"
Amanda Tape Server Host Check
-
Holding disk /work/amanda/dumps: 29732864 KB disk space available, using 
29528064 KB
slot 1: Tape device /dev/nst1 is not ready or is empty
  all slots have been loaded
Taper scan algorithm did not find an acceptable volume.
 (expecting volume 'dds4-03' or a new volume)
ERROR: No acceptable volumes found
ERROR: fiume.localnet usr: data-path is AMANDA but device do not support it
ERROR: fiume.localnet Homes: data-path is AMANDA but device do not support it
ERROR: fiume.localnet home-charles: data-path is AMANDA but device do not 
support it
ERROR: fiume.localnet Pictures_rest: data-path is AMANDA but device do not 
support it
ERROR: fiume.localnet Pictures_800IS: data-path is AMANDA but device do not 
support it
ERROR: fiume.localnet Pictures_G2: data-path is AMANDA but device do not 
support it
ERROR: fiume.localnet Wine: data-path is AMANDA but device do not support it
ERROR: fiume.localnet Hobbies: data-path is AMANDA but device do not support it
ERROR: fiume.localnet Mail: data-path is AMANDA but device do not support it
ERROR: fiume.localnet Vbox-2: data-path is AMANDA but device do not support it
ERROR: fiume.localnet Vbox-1: data-path is AMANDA but device do not support it
ERROR: fiume.localnet root: data-path is AMANDA but device do not support it
ERROR: stremen.localnet my_data: data-path is AMANDA but device do not support 
it
Server check took 0.262 seconds

Amanda Backup Client Hosts Check

Client check: 2 hosts checked in 2.389 seconds.  0 problems found.

(brought to you by Amanda 3.3.5)
"

Odd, whatever it means.

Regards, Charles


Re: is part_cache_max_size shared?

2014-02-26 Thread Jean-Louis Martineau

On 02/26/2014 11:17 AM, Michael Stauffer wrote:

Thanks! That's very good to know.

As far as aborting the current amdump, do I just SIGINT it and then 
run amcleanup?


Use 'amcleanup -k CONF'; it should kill all processes on the amanda 
server. Some processes on the amanda client might not be killed.


Jean-Louis



-M


On Wed, Feb 26, 2014 at 7:22 AM, Jean-Louis Martineau 
<martin...@zmanda.com> wrote:


On 02/25/2014 04:24 PM, Michael Stauffer wrote:

Amanda 3.3.4

Hi,

If amanda is using memory cache for splits, is the cache
shared between simultaneous amdump runs, or does each try to
grab that much memory?

I'm setup like this:

part_cache_type memory
 part_cache_max_size 20G

and with

  taper-parallel-write 2

and

  inparallel 10

Thanks

-M


Each taper-parallel-write allocates part_cache_max_size of memory.

Jean-Louis






Re: is part_cache_max_size shared?

2014-02-26 Thread Michael Stauffer
Thanks! That's very good to know.

As far as aborting the current amdump, do I just SIGINT it and then run
amcleanup?

-M


On Wed, Feb 26, 2014 at 7:22 AM, Jean-Louis Martineau
wrote:

> On 02/25/2014 04:24 PM, Michael Stauffer wrote:
>
>> Amanda 3.3.4
>>
>> Hi,
>>
>> If amanda is using memory cache for splits, is the cache shared between
>> simultaneous amdump runs, or does each try to grab that much memory?
>>
>> I'm setup like this:
>>
>> part_cache_type memory
>>  part_cache_max_size 20G
>>
>> and with
>>
>>   taper-parallel-write 2
>>
>> and
>>
>>   inparallel 10
>>
>> Thanks
>>
>> -M
>>
>
> Each taper-parallel-write allocates part_cache_max_size of memory.
>
> Jean-Louis
>


Re: tar suddenly hanging, backups near 100% fail

2014-02-26 Thread Gene Heskett
On Wednesday 26 February 2014 10:39:13 Gene Heskett did opine:

> Greetings;
> 
> 3 backups ago, with no change to the amanda.conf in months, I have
> awakened to a hung tar task using 100% of a core, more than 5 hours
> after it should have completed.
> 
> It is in that state now.  How can I find what is causing this blockage?
> Here is the report from yesterday's attempt, received after I had used
> htop to send this stuck tar instance a normal quit signal.
> 
> These dumps were to tape Dailys-9.
> The next 2 tapes Amanda expects to use are: Dailys-10, Dailys-11.
> FAILURE DUMP SUMMARY:
>   planner: ERROR Some estimate timeout on coyote, using server estimate if possible
> coyote /CoCo lev 0  FAILED [too many dumper retry: [request failed: Connection timed out]]
> coyote /GenesAmandaHelper-0.61 lev 1  FAILED [too many dumper retry: [request failed: Connection timed out]]
> coyote /home lev 2  FAILED [too many dumper retry: [request failed: Connection timed out]]
> coyote /lib lev 0  FAILED [disk /lib, all estimate timed out]
> coyote /opt lev 0  FAILED [disk /opt, all estimate timed out]
> coyote /root lev 0  FAILED [disk /root, all estimate timed out]
> coyote /sbin lev 0  FAILED [disk /sbin, all estimate timed out]
> coyote /var lev 0  FAILED [disk /var, all estimate timed out]
> coyote /usr/bin lev 0  FAILED [disk /usr/bin, all estimate timed out]
> coyote /usr/dlds/misc lev 0  FAILED [disk /usr/dlds/misc, all estimate timed out]
> coyote /usr/dlds/tgzs lev 0  FAILED [disk /usr/dlds/tgzs, all estimate timed out]
> coyote /usr/dlds/books lev 0  FAILED [disk /usr/dlds/books, all estimate timed out]
> coyote /usr/include lev 0  FAILED [disk /usr/include, all estimate timed out]
> coyote /usr/lib lev 0  FAILED [disk /usr/lib, all estimate timed out]
> coyote /usr/libexec lev 0  FAILED [disk /usr/libexec, all estimate timed out]
> coyote /usr/movies lev 0  FAILED [disk /usr/movies, all estimate timed out]
> coyote /usr/local lev 0  FAILED [disk /usr/local, all estimate timed out]
> coyote /usr/music lev 0  FAILED [disk /usr/music, all estimate timed out]
> coyote /usr/pix lev 0  FAILED [disk /usr/pix, all estimate timed out]
> coyote /usr/sbin lev 0  FAILED [disk /usr/sbin, all estimate timed out]
> coyote /usr/share lev 0  FAILED [disk /usr/share, all estimate timed out]
> coyote /usr/src lev 0  FAILED [disk /usr/src, all estimate timed out]
> coyote /usr/games lev 0  FAILED [disk /usr/games, all estimate timed out]
> coyote /CoCo lev 0  FAILED Got empty header
>   coyote /CoCo lev 0  FAILED Got empty header
>   coyote /GenesAmandaHelper-0.61 lev 1  FAILED Got empty header
>   coyote /GenesAmandaHelper-0.61 lev 1  FAILED Got empty header
>   coyote /boot lev 0  FAILED Got empty header
>   coyote /home lev 2  FAILED Got empty header
>   coyote /home lev 2  FAILED Got empty header
> 
> However, at the bottom of the report, the remote systems were backed up
> just fine:
> 
> lathe  /home            1  1  0   5.6  0:00  169.9  0:36     1.2
> lathe  /usr/lib/amanda  1  0  0   3.3  0:05    0.4  0:00    10.0
> lathe  /usr/local       1  0  0   2.0  0:05    0.4  0:00    10.0
> lathe  /var/lib/amanda  1  0  0  22.0  0:00  354.6  0:00   220.0
> shop   /home            3  4  0   8.2  0:07   43.6  0:00  3080.0
> shop   /usr/lib/amanda  1  0  0   3.3  0:05    0.4  0:00    10.0
> shop   /usr/local       1  0  0   2.0  0:05    0.4  0:00    10.0
> shop   /var/lib/amanda  1  2  0  17.8  0:01  584.4  0:00  2950.0
> 
> (brought to you by Amanda version 4.0.0alpha.svn.4761)
> 
> Now, the thing that _has_ changed is the running kernel, from a 3.12.9
> that seemed to work well with amanda, to a 3.13.5 that I had one heck
> of a time building because of Kconfig dependency errors that caused all
> of the many "media" options to disappear from the "make ?config"
> operations, and it is likely this one could be missing something that
> tar needs.
> 
> So, what, from this, would be the most likely candidate? The config.gz
> is attached.
> 
> Thank you very much for any insight that can be determined from this.
> 
> Cheers, Gene

Ping!  In the meantime I have rebuilt this kernel 3 times, getting an 
unbootable kernel once, but without finding the option that seems to throw 
tar for a forever loop.

FWIW, when tar is in that state, the only drive activity is related to 
fetchmail activity, which loops every 3 minutes; tar apparently gets stuck 
hammering on something it can't access.  And yet, the DLE it appears to be 
stuck on while attempting an estimate, /lib, can be listed with an ls -laR 
with no problems.

This is the distro's copy of tar-1.22, but I've no clue what options it was 
compiled with.  Is this 1.22 a known bad actor under some conditions?

Cheers, Gene
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and

Re: Best practise for archive tapes with vtapes?

2014-02-26 Thread Markus Iturriaga Woelfel
Here is what we do:

We use vtapes in our Amanda setup. I wrote a small script that finds the most 
recent level 0 backups and uses amvault to dump those to physical tapes. I run 
this once a month and then archive those tapes. The script was hacked together 
and if I get the time I'd like to change it to use Amanda's Perl API rather 
than calling Amanda commands directly, but it has been working for us. 
Basically, it constructs a command line for amvault that looks like this:

/usr/sbin/amvault -otapetype="DellPV124-DLT4" -otapecycle=1 
-osend-amreport-on=never --dst-changer robot --label-template EECS-VAULT-%%% 
CONFIG HOST DISK DATE LEVEL HOST DISK DATE LEVEL ...

After the vault, it constructs an email that contains an amreport (with 
-otapetype set to our VAULT tapes) and attaches a PDF with the 2-hole punch 
label version of the report. We slightly modified the template provided. We 
then archive the tapes and file the report. This should allow us to do a 
restore using just standard Unix tools even if we somehow lost our entire 
amanda server. 
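
The dump-selection half of such a wrapper can be sketched like this (the config
name is a placeholder and the column positions simply follow the 'amadmin find'
output quoted elsewhere in this thread; a reconstruction, not the actual
script):

  #!/bin/sh
  # print "HOST DISK DATE LEVEL" dumpspecs for the newest OK level-0 dump
  # of every host/disk, ready to append to the amvault command line above
  CONF=daily
  amadmin "$CONF" find | awk '
      $5 == "0" && $NF == "OK" {
          key = $3 " " $4
          if ($1 " " $2 > newest[key]) { newest[key] = $1 " " $2; date[key] = $1 }
      }
      END { for (k in newest) printf "%s %s 0\n", k, date[k] }'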

Cheers,

Markus
---
Markus A. Iturriaga Woelfel, IT Administrator
Department of Electrical Engineering and Computer Science
University of Tennessee
Min H. Kao Building, Suite 424 / 1520 Middle Drive
Knoxville, TN 37996-2250
mitur...@eecs.utk.edu / (865) 974-3837
http://twitter.com/UTKEECSIT


Re: is part_cache_max_size shared?

2014-02-26 Thread Jean-Louis Martineau

On 02/25/2014 04:24 PM, Michael Stauffer wrote:

Amanda 3.3.4

Hi,

If amanda is using memory cache for splits, is the cache shared 
between simultaneous amdump runs, or does each try to grab that much 
memory?


I'm setup like this:

part_cache_type memory
 part_cache_max_size 20G

and with

  taper-parallel-write 2

and

  inparallel 10

Thanks

-M


Each taper-parallel-write allocates part_cache_max_size of memory.
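
(With the settings quoted above -- part_cache_max_size 20G and
taper-parallel-write 2 -- that means up to 2 x 20G = 40G of part cache for a
single amdump run; simultaneous runs presumably each allocate their own on
top of that.)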

Jean-Louis


Re: Best practise for archive tapes with vtapes?

2014-02-26 Thread Stefan G. Weichinger
On 21.02.2014 09:13, Stefan G. Weichinger wrote:
> 
> What's the current best practise to somehow have archive "tapes" with
> vtapes?
> 
> A customer backs up to vtapes on a NAS and wants to have an additional
> USB-driven hdd for archiving weekly/monthly dumps.
> 
> How to achieve that?

So amvault seems the way to go ...

I still wonder how to configure in detail:

if my customer wants 2 or more external hard drives as "tertiary media"
... do I set up chg-disk changers on each of them? What is the medium, a
single vtape in a changer setup on each hdd?
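
For illustration, a chg-disk changer per external disk could be defined
roughly like this in amanda.conf (paths and names invented; see
amanda-changers(7) for the properties):

  define changer vault-disk1 {
      tpchanger "chg-disk:/mnt/vault-disk1/vtapes"
      property "num-slot" "10"
      property "auto-create-slot" "yes"
  }

  define changer vault-disk2 {
      tpchanger "chg-disk:/mnt/vault-disk2/vtapes"
      property "num-slot" "10"
      property "auto-create-slot" "yes"
  }

amvault's --dst-changer option can then point at whichever changer's disk is
currently attached.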

Thanks for any pointers ...  Stefan