RE: CPU eating tar process

2021-09-21 Thread Cuttler, Brian R (HEALTH)
Deb, Kees,

Yes, I'd done something similar: we have zfs mounts for each user, and then 
globbed them together by the first letter of the username.

Brian

Extract from disklist:
ZFS mounts for samba shares were backed up per share. I'd written a script to 
find the shares that currently existed and update the disklist daily with the 
current list, automatically, so shares could be created and not missed; we 
seldom took any offline.
I did not bother with spindles for this.
I got great plots from # amplot, which was critical in pinpointing bottlenecks.

# extracted from home directory list, new home directories caught automatically 
by the glob.
finsen  /export/home-Y  /export/home {
    user-tar2
    include "./[y]*"
}
finsen  /export/home-Z  /export/home {
    user-tar2
    include "./[z]*"
}
finsen  /export2/home-AZ  /export2/home {
    user-tar2
    include "./[A-Z]*"
}

Disklist entries supporting the samba shares, updated by the daily script.
finsen /export2/samba/bdinst   zfs-snapshot2
finsen /export2/samba/bdlshare zfs-snapshot2
finsen /export2/samba/bladder  zfs-snapshot2
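A minimal sketch of such a nightly regeneration script (the paths, config 
name, and marker comments are illustrative, not from the original setup):

    #!/bin/sh
    # regen-samba-dles.sh: rebuild the samba portion of the disklist from
    # whatever share directories currently exist (run daily from cron).
    DISKLIST=/etc/amanda/daily/disklist
    TMP=$(mktemp) || exit 1

    # Keep everything outside the autogen markers untouched.
    sed '/^# BEGIN samba autogen/,/^# END samba autogen/d' "$DISKLIST" > "$TMP"

    {
        echo "# BEGIN samba autogen"
        for share in /export2/samba/*/; do    # one DLE per share directory
            echo "finsen ${share%/} zfs-snapshot2"
        done
        echo "# END samba autogen"
    } >> "$TMP"

    mv "$TMP" "$DISKLIST"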

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Debra S Baddorf
Sent: Tuesday, September 21, 2021 1:38 PM
To: Kees Meijs | Nefos 
Cc: Debra S Baddorf ; amanda-users@amanda.org
Subject: Re: CPU eating tar process

Have you experimented with dividing those target disks into smaller pieces, 
using tar?   So that amanda isn’t doing level 0 on all parts on the same day?

I’ve divided some disks as far as a*, b*, c*, …. z*, Other (to catch caps or 
numbers or future additions).   I’ve found that each piece must have SOME 
content, or tar fails.  So Other always contains some small portion, and 
non-existent letters are skipped and are caught by Other if they’re created 
later.

It’s a pain for restoring a whole disk, but it helps backups.
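For the archive, one way that split can look in a disklist, using GNU tar 
include/exclude globs (host, mount point, and dumptype are placeholders; per 
the caveat above, make sure the catch-all always matches something):

    client  /home-a  /home {
        user-tar
        include "./a*"
    }
    client  /home-b  /home {
        user-tar
        include "./b*"
    }
    # ... one entry per letter, and then "Other" catches caps, digits,
    # and anything created later:
    client  /home-other  /home {
        user-tar
        exclude "./[a-z]*"
    }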

Deb Baddorf
Fermilab

> On Sep 21, 2021, at 8:55 AM, Kees Meijs | Nefos  wrote:
>
> Hi list,
>
> We've got some backup targets with lots (and lots, and then some) of files. 
> There's so much of them that making back-ups is becoming a problem.
>
> During the backup process, tar(1) is eating up a CPU core. There's no or 
> hardly no I/O wait to be seen. Very likely tar is single threaded so there's 
> that. The additional gzip(1) process is doing zero to nothing.
>
> Any thoughts on speeding this up? Maybe an alternative for GNU tar, or...?
>
> Thanks all!
>
> Cheers,
> Kees
>
> --
> https://nefos.nl/contact
>
> Nefos IT bv
> Ambachtsweg 25 (industrienummer 4217)
> 5627 BZ Eindhoven
> Nederland
>
> KvK 66494931
>
> Bereikbaar op maandag, dinsdag, donderdag en vrijdag tussen 09:00u en 17:00u.





RE: CPU eating tar process

2021-09-21 Thread Cuttler, Brian R (HEALTH)
Jens, Kees,

I think Schily Tar (star) is multi-threaded, and we use it with Amanda on at 
least one of our Amanda clients.
Honestly I don't recall (without checking the docs) whether spindle is meant 
to group DLEs for concurrent backup or to keep them from running concurrently; 
I think the latter.
Be careful not to make the drive thrash; you may find you run into issues with 
head movement, or at least that had been the conventional wisdom in the past.
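If you want to try star, recent (3.x) Amanda servers usually drive it through 
the amstar application; a sketch, assuming amstar was built with your Amanda 
(the dumptype name is made up):

    define application-tool app_amstar {
        comment "star(1) instead of GNU tar"
        plugin  "amstar"
    }

    define dumptype star-tar {
        global
        program     "APPLICATION"
        application "app_amstar"
    }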

Brian

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Jens Berg
Sent: Tuesday, September 21, 2021 10:30 AM
To: Kees Meijs | Nefos ; amanda-users@amanda.org
Subject: Re: CPU eating tar process

Hi Kees,

maybe you can try to split up the files into several DLEs and put them
into different spindles. That should fire off multiple instances of tar
in parallel, if I understood Amanda's concept of spindles correctly.
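For reference, the spindle is the optional field after the dumptype in a 
disklist entry. DLEs on the same host that share a non-negative spindle number 
are not dumped at the same time; giving DLEs on separate physical disks 
different spindles (or -1 for no restriction) allows them to run concurrently, 
provided the client's maxdumps is raised above 1. A sketch with made-up paths:

    # hostname  diskname     device       dumptype  spindle
    client      /data/projA  /data/projA  user-tar  1
    client      /data/projB  /data/projB  user-tar  2
    client      /scratch     /scratch     user-tar  -1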

Hope it helps,
Jens


On 21.09.2021 15:55, Kees Meijs | Nefos wrote:
> Hi list,
>
> We've got some backup targets with lots (and lots, and then some) of
> files. There's so much of them that making back-ups is becoming a problem.
>
> During the backup process, tar(1) is eating up a CPU core. There's no or
> hardly no I/O wait to be seen. Very likely tar is single threaded so
> there's that. The additional gzip(1) process is doing zero to nothing.
>
> Any thoughts on speeding this up? Maybe an alternative for GNU tar, or...?
>
> Thanks all!
>
> Cheers,
> Kees
>
> --
> https://nefos.nl/contact
>
> Nefos IT bv
> Ambachtsweg 25 (industrienummer 4217)
> 5627 BZ Eindhoven
> Nederland
>
> KvK 66494931
>
> /Bereikbaar op maandag, dinsdag, donderdag en vrijdag tussen 09:00u en
> 17:00u./




RE: DLE splitting

2021-09-20 Thread Cuttler, Brian R (HEALTH)
Debra, Charles, Jon,

I had some 'large' vtapes for a while, and had a requirement for N days of 
recovery, and ran into an issue where I was getting deadlocked on space during 
backup.
If the vtapes were too small I consumed more of them; if they were too large I 
might not have enough free space when one of them was reused. I'm not certain 
why, but it may have been that level 0 was advanced more frequently because of 
the "available" tape capacity, which is measured by tape size and does not 
take vtape pool volume into account.

100% right, vtapes leave unused space in the pool, but some vtape size tuning 
may be needed.
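The tuning knobs are the tapetype length together with tapecycle; a sketch 
with illustrative numbers, keeping length x tapecycle comfortably below the 
actual pool size so a reused vtape can always be overwritten:

    define tapetype VTAPE50 {
        comment "vtape on disk"
        length 50 gbytes    # per-vtape cap; the planner schedules against this
    }
    tapecycle 60            # 60 slots: plan on ~3 TB of pool, plus headroom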

Thanks,
Brian


-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Debra S Baddorf
Sent: Friday, September 17, 2021 5:28 PM
To: Charles Curley 
Cc: Debra S Baddorf ; amanda-users 
Subject: Re: DLE splitting

> On Sep 17, 2021, at 12:57 PM, Charles Curley 
>  wrote:
>
> On Fri, 17 Sep 2021 11:55:09 -0400
> Jon LaBadie  wrote:
>
>> I'm seeing instances of wasted VTape with amanda switching
>> to a new tape for the last DLE even though there is room
>> left on the current tape.
>
> I wouldn't worry about it. Unlike physical tapes, vtapes all pull
> storage out of the same pool. You might create some more vtapes so you
> have plenty for your storage.
>

And, as I understand it,  vtapes don’t actually require more space than is 
actually ON them; i.e., empty space is still available to the rest of the pool.
Which is what Charles said,  but in different words.

Deb Baddorf
Fermilab





> --
> Does anybody read signatures any more?
>
> https://charlescurley.com
> https://charlescurley.com/blog/





RE: Amanda really blew up last night.

2021-02-01 Thread Cuttler, Brian R (HEALTH)
I missed the early part of the discussion. Vtapes?

I had a large zpool I broke up into vtapes, and found that if they were too 
small I'd waste space, if I recall because partial dumps were not removed from 
the 'tape' to free up the disk, and if they were too large I'd run into a 
deadlock problem when the pool got full.

Yes, the unused capacity on a vtape is still available in the pool.

At this point I've got 2 zpools and a share on an NFS Windows server, though 
at some point in the foreseeable future we will decommission the old Solaris 
boxes and decommission the Amanda server that has been backing them up.

Many thanks to the Amanda developers and community, you have all been wonderful 
to work with over the years.
I don’t know when my last Amanda server will shut down, but likely before I 
retire, still several years off.

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Debra S Baddorf
Sent: Monday, February 1, 2021 2:09 PM
To: Gene Heskett 
Cc: Debra S Baddorf ; amanda-users@amanda.org
Subject: Re: Amanda really blew up last night.

> On Jan 30, 2021, at 3:46 PM, Gene Heskett  wrote:
>
> On Friday 29 January 2021 17:13:39 Gene Heskett wrote:
>
>> On Saturday 23 January 2021 14:04:36 Gene Heskett wrote:
>>> On Saturday 23 January 2021 11:47:15 Jon LaBadie wrote:
 ls -l /usr/local/libexec/amanda/ambind
>>>
>>> ls -l /usr/local/libexec/amanda/ambind
>>> -rwsr-x--x 1 root backup 26640 Jan 22
>>> 20:17 /usr/local/libexec/amanda/ambind
>>> So I changed it to root:disk, no change in the amcheck output; it's
>>> still semi-happy.
>>> Chmod 4750 gets a no execute error out of ambind, must be 4751.
>>> amcheck then complains, but the backup runs just fine. And did last
>>> night, 6 minutes after it failed on a 29Gig dle.
>>>
>>> I'm going to write another dumptype pair today, breaking that 29G
>>> directory up into two dle's.
>>>
>>> Cheers, Gene Heskett
>>
>> Actually, I broke it up into 7 dle's and it's run fine for 3 nights in
>> a row. I should paint it on the wall, but Murphy is watching. :)
>>
> And I knew I should not have bragged; it failed again last night,
> someplace in amstatus. I did a redo for recovery, but that wastes space
> in the vtape, and leaves trash in the holding disk I have to clean up
> before it will do another GOOD backup.

I keep reading that there is less wasted space with vtapes.   Just because you
allocate X space for each tape,  I believe they only use up the space that’s
actually used.
It does use up another number in your vtape roster, but those seem to me
to be free - just increase your vtape count and number a few more.

Odd about the holding disk trash preventing further backups.
I find months old stuff in my holding disk, and only clear it out
occasionally when the mood arises.

Deb Baddorf
Fermilab




>
>
> Copyright 2019 by Maurice E. Heskett
> Cheers, Gene Heskett
> --
> "There are four boxes to be used in defense of liberty:
> soap, ballot, jury, and ammo. Please use in that order."
> -Ed Howdershelt (Author)
> If we desire respect for the law, we must first make the law respectable.
> - Louis D. Brandeis
> Genes Web page 
>   >





RE: How's amanda feeling these days?

2020-09-25 Thread Cuttler, Brian R (HEALTH)


I came into an existing Amanda environment when I started this job 22 years ago.
Amanda has been upgraded numerous times on many platforms, and while we have 
migrated away from SGI clients we continue to have Solaris clients and several 
flavors of linux. Backup servers vary between Solaris and linux, we have 
upgraded tape drives and jukeboxes and use VTape on several of our Amanda 
servers.

Amanda has never been problematic and has saved us on numerous occasions.

Admittedly we are back rev and have not used some of the newer features, but 
have made good use of tape flush parameters, jukebox and vtape control and have 
never had difficulty with cross platform issues (taking into account 
OS/filesystem specific native tools).

Brian Cuttler
Wadsworth Center/NYS Department of Health

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Debra S Baddorf
Sent: Friday, September 25, 2020 2:09 PM
To: Dave Sherohman 
Cc: Debra S Baddorf ; amanda-users@amanda.org
Subject: Re: How's amanda feeling these days?

I've had an amanda “world” running for 15 or 20 years.  I have 33 unix nodes,
of varying flavors of unix.  They play together nicely, though you can’t unpack 
a file
on a different flavor of unix. I think you can with TAR rather than DUMP.
And, since many of my disks are big, I’m using a lot of TAR, to split them into
smaller chunks — which have grown in size as my tape drive has been upgraded
over time.

We still use physical tape.   I keep 70 days of backups  (inc every day, full 
once a week)
and a separate config for archival fulls once a month.  10-12 times your amount 
of data (fulls once a week)
might be rather a lot of disk space, but I haven’t compared.  Someone else 
recently
discussed the large cost of using cloud space.

I’m not doing any Windows backups;  we have another group that does those.

I’m happy with amanda’s current capabilities.  In fact, I’m not running the most
recent versions of amanda, not wanting the effort of changing for features I 
won’t use.

My data size is a bit over 3.0 TB, since my archival fulls now require a second
LTO-5 tape,  one of which claims 3.0 TB capacity.

Another data point -
Deb Baddorf
Fermilab


> On Sep 25, 2020, at 8:19 AM, Dave Sherohman  wrote:
>
> Howdy, all!
>
> We've recently had some problems at work with our backup provider, so my
> boss has come to me and requested a recommendation for bringing backups
> in-house.  I've previously adminned a small amanda installation back in
> 2000-2006 and I quite liked the system and how it works, so that was my
> first thought.
>
> I've done some general web searches and it looks like the situation
> today isn't as good as it was a decade and a half ago - not a lot of
> active development, limited support for Windows clients, etc.  But, on
> the other hand, amanda was already a very mature system back then, so I
> don't know that a lot of ongoing development would still be needed.
>
> So let's see what the current users have to say.  Is a new amanda
> installation still a sane choice in 2020?
>
> My use case is that I'll be backing up somewhere in the neighborhood of
> 75ish servers, a mix of physical and (mostly) virtual machines, and a
> mix of mostly Linux with some Windows and one or two FreeBSD.  Total
> disk usage is currently in the 35-40 TB range, growing by maybe 1-2 TB
> per year.  Aside from my own positive experiences with amanda, both I
> and my boss (and most of my coworkers) are very pro-open-source.
>
> If amanda isn't a reasonable choice for that scenario, what would be a
> better option?
>
> And what kind of hardware specs should I be looking at?  Is tape still
> king, or is everyone backing up to hard drives now?
>
> --
> Dave Sherohman





RE: holding disk too small?

2019-12-03 Thread Cuttler, Brian R (HEALTH)
Stefan,

In order for the holding disk to be used, it has to be bigger than the largest 
DLE.
To get parallelism in dumping, it has to be large enough to hold more than one 
DLE at a time; ideally, I suppose, as many as the number of parallel dumps, 
and then some more, so that you can begin spooling to tape while a new dump is 
being performed.
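In amanda.conf that sizing lands in the holding disk definition; a sketch with 
illustrative numbers:

    define holdingdisk hd1 {
        directory "/dumps/amanda"
        use -2 gbytes       # use all free space but always leave 2 GB
        chunksize 1 gbyte   # split images into 1 GB chunks on the holding disk
    }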

I think that a work area larger than a tape is probably overkill, but the tool 
I like to use to visualize where the bottleneck is, is amplot.

With a work area as large as yours I think you will probably see that the work 
area is never fully utilized and that the dumping constraints are somewhere 
else, or that you can increase parallelism in dumping to shorten the overall 
amdump run time.

I don't know what the config looks like (number of clients, number and size of 
partitions being managed); at some point you will run out of CPU, or disk 
performance, or something else you can't overcome with Amanda tuning.

Best,
Brian

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Stefan G. Weichinger
Sent: Tuesday, December 3, 2019 9:43 AM
To: amanda-users@amanda.org
Subject: holding disk too small?

Another naive question:

Does the holdingdisk have to be bigger than the size of one tape?

I know that it would be good, but what if not?

I right now have ~2TB holding disk and "runtapes 2" with LTO6 tapetype.

That is 2.4 TB per tape.

So far it works but maybe not optimal. I consider recreating that
holding disk array (currently RAID1 of 2 disks) as RAID0 ..

And sub-question:

how would you configure these parameters here:

autoflush   yes
flush-threshold-dumped  50
flush-threshold-scheduled 50
taperflush  50

I'd like to collect some files in the disk before writing to tape, but
can't collect a full tape's data ...

I assume here also "dumporder" plays a role:

dumporder "Ssss"

- thanks, Stefan



amanda client on leap 15 system, ie systemd

2019-05-02 Thread Cuttler, Brian R (HEALTH)


Hi, I'm installing the current version of the amanda client on a Leap 15 
system using systemd.

My understanding, and I have read some of what has been written to the amanda 
groups, is that rather than /etc/xinetd.d we need amanda.socket and 
amanda.service files, and while I have seen some discussion of them, there is 
some subtlety that I have clearly missed.

Can anyone provide the files I need, give hints on what I'm doing wrong or help 
in some other way?

Thank you,

Brian


Brian Cuttler
Network and System Administrator, ITG - Information Technology Group
Wadsworth Center, NYS Department of Health
Biggs Lab, Empire State Plaza, Albany, NY 12201
(518) 486-1697 | brian.cutt...@health.ny.gov





RE: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Cuttler, Brian R (HEALTH)
Depending on what I'm testing and its importance I may just # cp 
tapelist.yesterday tapelist or manually alter it.

-Original Message-
From: Debra S Baddorf  
Sent: Monday, November 26, 2018 3:25 PM
To: Cuttler, Brian R (HEALTH) 
Cc: Debra S Baddorf ; Debra S Baddorf ; 
amanda-users 
Subject: Re: Configuration Rollback [Was: Reusable]

> On Nov 26, 2018, at 2:19 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
>
> Deb,
>
> I'm with you, periodically I have to test something, or will start an 
> additional backup to help move data after a failure of some sort but in 
> general I allow cron to start # amdump once/day. My understanding matches 
> yours, balance will get thrown off, amanda may advance dumps but you chew up 
> a lot of tape and do a lot of I/O for little overall gain if you are running 
> multiple dumps as a matter of course.

When I have something that failed/was missed OR a new node or DLE to add, I do 
"amdump config --no-taper node DLE" to test the dump, but leave the results on 
my holding disk.  It’ll be flushed when tonight’s backup starts.
So I don’t HAVE to chew up tape, just to test things.   :)

Deb Baddorf


>
> I have even gone so far as to run multiple dumps across a very small 
> (perhaps "1") DLE to test include/exclude or to get a new dumptype 
> (snapshot) or compression (pigz/parallel zip) tested, but I do not run 
> multiple dumps as a matter of routine.
>
> In my mind running many small DLEs can be self-defeating, as can running 
> very few very large ones, each hitting a different set of constraints.
>
> My samba shares are on separate ZFS mount points and I snapshot them. My home 
> directories are also on separate ZFS mount points but individual backups were 
> untenable so I glob them by letter, but that means I can't do snapshots.
>
> Based on the latest emails I think Chris may have moved on, but he has these 
> additional answers for when he cycles back.
>
> Thanks,
> Brian
>
>
> -Original Message-
> From: Debra S Baddorf 
> Sent: Monday, November 26, 2018 3:00 PM
> To: amanda-users 
> Cc: Debra S Baddorf ; Debra S Baddorf 
> ; Cuttler, Brian R (HEALTH) 
> 
> Subject: Re: Configuration Rollback [Was: Reusable]
>
>>
>> -Original Message-
>> From: owner-amanda-us...@amanda.org  
>> On Behalf Of Debra S Baddorf
>> Sent: Monday, November 26, 2018 2:04 PM
>> To: amanda-users 
>> Cc: Debra S Baddorf 
>> Subject: Re: Configuration Rollback [Was: Reusable]
>>
>>> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger 
>>>  wrote:
>>>
>>> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
>>> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
>>>> Hi Folks,
>>>>
>>>> I have written some very small DLEs so I can rip through weeks of 
>>>> backups in minutes. I've learned some things.
>>>
>>
>> Am I wrong in thinking that you cannot do extra backups,  to get to the end 
>> of your  “dumpcycle” faster?
>> Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?
>>
>> Deb Baddorf
>> Fermilab
>>
>>
>>
>> On Nov 26, 2018, at 1:21 PM, Cuttler, Brian R (HEALTH) 
>>  wrote:
>>
>> Deb,
>>
>> Not sure if I'm understanding your question. If so I believe Amanda was 
>> built with the concept of a once/day run schedule taking into account 
>> runs/cycle as well as days/dumpcycle (for instance 5 run days in a one week 
>> dump cycle).
>>
>> Brian
>
> Yes, I agree.  At one point,  Chris was trying to speed this up by doing 21 
> runs in 10 minutes or so.
> Perhaps he has stopped that,  and people are just continuing to quote that 
> line (above).   It’s that line that’s bothering me.
>
> He did ask  "Can I specify "dumpcycle" as an elapsed count rather than an 
> elapsed time?”
> Per the wiki help files,  dumpcycle seems to be explicitly in “days”  and 
> cannot be changed or sped up.
> ( https://wiki.zmanda.com/index.php/Dumpcycle )

RE: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Cuttler, Brian R (HEALTH)
Deb,

I'm with you, periodically I have to test something, or will start an 
additional backup to help move data after a failure of some sort but in general 
I allow cron to start # amdump once/day. My understanding matches yours, 
balance will get thrown off, amanda may advance dumps but you chew up a lot of 
tape and do a lot of I/O for little overall gain if you are running multiple 
dumps as a matter of course.

I have even gone so far as to run multiple dumps across a very small (perhaps 
"1") DLE to test include/exclude or to get a new dumptype (snapshot) or 
compression (pigz/parallel zip) tested, but I do not run multiple dumps as a 
matter of routine.

In my mind running many small DLEs can be self-defeating, as can running very 
few very large ones, each hitting a different set of constraints.

My samba shares are on separate ZFS mount points and I snapshot them. My home 
directories are also on separate ZFS mount points but individual backups were 
untenable so I glob them by letter, but that means I can't do snapshots.

Based on the latest emails I think Chris may have moved on, but he has these 
additional answers for when he cycles back.

Thanks,
Brian


-Original Message-
From: Debra S Baddorf  
Sent: Monday, November 26, 2018 3:00 PM
To: amanda-users 
Cc: Debra S Baddorf ; Debra S Baddorf ; 
Cuttler, Brian R (HEALTH) 
Subject: Re: Configuration Rollback [Was: Reusable]

>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On 
> Behalf Of Debra S Baddorf
> Sent: Monday, November 26, 2018 2:04 PM
> To: amanda-users 
> Cc: Debra S Baddorf 
> Subject: Re: Configuration Rollback [Was: Reusable]
>
>> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger 
>>  wrote:
>>
>> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
>> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
>>> Hi Folks,
>>>
>>> I have written some very small DLEs so I can rip through weeks of 
>>> backups in minutes. I've learned some things.
>>
>
> Am I wrong in thinking that you cannot do extra backups,  to get to the end 
> of your  “dumpcycle” faster?
> Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?
>
> Deb Baddorf
> Fermilab
>
>
>
> On Nov 26, 2018, at 1:21 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
>
> Deb,
>
> Not sure if I'm understanding your question. If so I believe Amanda was built 
> with the concept of a once/day run schedule taking into account runs/cycle as 
> well as days/dumpcycle (for instance 5 run days in a one week dump cycle).
>
> Brian

Yes, I agree.  At one point,  Chris was trying to speed this up by doing 21 
runs in 10 minutes or so.
Perhaps he has stopped that,  and people are just continuing to quote that line 
(above).   It’s that line that’s bothering me.

He did ask  "Can I specify "dumpcycle" as an elapsed count rather than an 
elapsed time?”
Per the wiki help files,  dumpcycle seems to be explicitly in “days”  and 
cannot be changed or sped up.
(  
https://wiki.zmanda.com/index.php/Dumpcycle
  )

Chris also asked  "Are "dumpcycle" and "runspercycle" conflicting with each 
other?”
Even if you occasionally do extra amdump runs (I do), that doesn’t bother the 
“runspercycle” count.  It still results in “at least one level 0 within 
dumpcycle days”.  Runspercycle just lets amanda gauge numbers for balance 
adjusting, and maybe promoting a DLE to do an early level 0.
Doing 21 runs in NN minutes will use up all your vtapes, but will not force 
more than 1 level 0.

Maybe I’m being pedantic,  and he’s looking at other issues now.  If so, 
nevermind me!
:)

Deb Baddorf






RE: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Cuttler, Brian R (HEALTH)


Deb,

Not sure if I'm understanding your question. If so I believe Amanda was built 
with the concept of a once/day run schedule taking into account runs/cycle as 
well as days/dumpcycle (for instance 5 run days in a one week dump cycle).

Brian

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Debra S Baddorf
Sent: Monday, November 26, 2018 2:04 PM
To: amanda-users 
Cc: Debra S Baddorf 
Subject: Re: Configuration Rollback [Was: Reusable]

> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger  
> wrote:
>
> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
> > Hi Folks,
> >
> > I have written some very small DLEs so I can rip through weeks of 
> > backups in minutes. I've learned some things.
>

Am I wrong in thinking that you cannot do extra backups,  to get to the end of 
your  “dumpcycle” faster?
Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?

Deb Baddorf
Fermilab





RE: Another dumper question

2018-11-26 Thread Cuttler, Brian R (HEALTH)
I believe inparallel is the max concurrent dumps across all clients, while 
maxdumps is the per-client limit.

You might have 10 clients each with a maxdumps of 2, giving 20 possible 
concurrent dumps, but because of server limitations you might set inparallel 
to something between 2 and 20.
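In amanda.conf terms (the numbers are arbitrary):

    inparallel 10    # server-wide: at most 10 dumper processes at once
    maxdumps 2       # per client: at most 2 concurrent dumps from any one host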

-Original Message-
From: Chris Nighswonger  
Sent: Monday, November 26, 2018 1:57 PM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: Another dumper question

inparallel 10

maxdumps not listed, so I'm assuming the default of 1 is being observed.

I'm not sure that the maxdumps parameter would affect dumping DLEs from 
multiple clients in parallel, though. The manpage states, "The maximum number 
of backups from a single host that Amanda will attempt to run in parallel." 
That seems to indicate that this parameter controls parallel dumps of DLEs on a 
single client.

Kind regards,
Chris
On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH) 
 wrote:
>
> Did you check your maxdumps and inparallel parameters?
>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On 
> Behalf Of Chris Nighswonger
> Sent: Monday, November 26, 2018 1:34 PM
> To: amanda-users@amanda.org
> Subject: Another dumper question
>
> So in one particular configuration I have the following lines:
>
> inparallel 10
> dumporder "STSTSTSTST"
>
> I would assume that amanda would spawn 10 dumpers in parallel and 
> execute them giving priority to largest size and largest time alternating. I 
> would assume that amanda would do some sort of sorting of the DLEs based on 
> size and time, set them in descending order, and then run the first 10 based 
> on the list, thereby utilizing all 10 permitted dumpers in parallel.
>
> However, based on the amstatus excerpt below, it looks like amanda simply 
> starts with the largest size and runs the DLEs one at a time, not making 
> efficient use of parallel dumpers at all. This has the unhappy result at 
> times of causing amdump to still be running when the next backup is executed.
>
> I have changed the dumporder to STSTStstst for tonight's run to see if that 
> makes any  difference. But I don't have much hope it will.
>
> Any thoughts?
>
> Kind regards,
> Chris
>
>
>
>
> From Mon Nov 26 01:00:01 EST 2018
>
> 1    4054117k waiting for dumping
> 1       6671k waiting for dumping
> 1        222k waiting for dumping
> 1       2568k waiting for dumping
> 1       6846k waiting for dumping
> 1     125447k waiting for dumping
> 1      91372k waiting for dumping
> 1         92k waiting for dumping
> 1         32k waiting for dumping
> 1         32k waiting for dumping
> 1         32k waiting for dumping
> 1         32k waiting for dumping
> 1     290840k waiting for dumping
> 1      76601k waiting for dumping
> 1         86k waiting for dumping
> 1      71414k waiting for dumping
> 0   44184811k waiting for dumping
> 1        281k waiting for dumping
> 1       6981k waiting for dumping
> 1         50k waiting for dumping
> 1      86968k waiting for dumping
> 1      81649k waiting for dumping
> 1     359952k waiting for dumping
> 0  198961004k dumping 159842848k ( 80.34%) (7:23:39)
> 1      73966k waiting for dumping
> 1     821398k waiting for dumping
> 1     674198k waiting for dumping
> 0  233106841k dump done (7:23:37), waiting for writing to tape
> 1         32k waiting for dumping
> 1         32k waiting for dumping
> 1     166876k waiting for dumping
> 1         32k waiting for dumping
> 1     170895k waiting for dumping
> 1     162817k waiting for dumping
> 0 failed: planner: [Request to client failed: Connection timed out]
> 1         32k waiting for dumping
> 1         32k waiting for dumping
> 0         53k waiting for dumping
> 0   77134628k waiting for dumping
> 1       2911k waiting for dumping
> 1         36k waiting for dumping
> 1         32k waiting for dumping
> 1      84935k waiting for dumping
>
> SUMMARY          part      real  estimated
>                            size       size
> partition       :  43
> estimated       :  42            559069311k
> flush           :   0         0k
> failed          :   1                    0k (  0.00%)
> wait for dumping:  40            128740001k ( 23.03%)
> dumping to tape :   0                    0k (  0.00%)
> dumping         :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
> dumped          :   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait for writin

RE: Another dumper question

2018-11-26 Thread Cuttler, Brian R (HEALTH)
Did you check your maxdumps and inparallel parameters?

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Monday, November 26, 2018 1:34 PM
To: amanda-users@amanda.org
Subject: Another dumper question

So in one particular configuration I have the following lines:

inparallel 10
dumporder "STSTSTSTST"

I would assume that amanda would spawn 10 dumpers in parallel and execute 
them giving priority to largest size and largest time alternating. I would 
assume that amanda would do some sort of sorting of the DLEs based on size and 
time, set them in descending order, and then run the first 10 based on the 
list, thereby utilizing all 10 permitted dumpers in parallel.

However, based on the amstatus excerpt below, it looks like amanda simply 
starts with the largest size and runs the DLEs one at a time, not making 
efficient use of parallel dumpers at all. This has the unhappy result at times 
of causing amdump to still be running when the next backup is executed.

I have changed the dumporder to STSTStstst for tonight's run to see if that 
makes any  difference. But I don't have much hope it will.
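For reference, each dumporder letter configures one dumper's preference when 
it picks the next DLE off the queue (per the amanda.conf man page: s/S by 
size, t/T by estimated time, b/B by bandwidth; lowercase prefers smallest 
first, uppercase largest first):

    inparallel 10
    dumporder "STSTSTSTST"   # 10 letters, one per dumper: odd dumpers prefer
                             # the largest size, even dumpers the longest time

Note the order only matters when several DLEs are ready at once; it cannot 
create parallelism that a maxdumps of 1 per host, or a single huge DLE, 
prevents.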

Any thoughts?

Kind regards,
Chris




From Mon Nov 26 01:00:01 EST 2018

1    4054117k waiting for dumping
1       6671k waiting for dumping
1        222k waiting for dumping
1       2568k waiting for dumping
1       6846k waiting for dumping
1     125447k waiting for dumping
1      91372k waiting for dumping
1         92k waiting for dumping
1         32k waiting for dumping
1         32k waiting for dumping
1         32k waiting for dumping
1         32k waiting for dumping
1     290840k waiting for dumping
1      76601k waiting for dumping
1         86k waiting for dumping
1      71414k waiting for dumping
0   44184811k waiting for dumping
1        281k waiting for dumping
1       6981k waiting for dumping
1         50k waiting for dumping
1      86968k waiting for dumping
1      81649k waiting for dumping
1     359952k waiting for dumping
0  198961004k dumping 159842848k ( 80.34%) (7:23:39)
1      73966k waiting for dumping
1     821398k waiting for dumping
1     674198k waiting for dumping
0  233106841k dump done (7:23:37), waiting for writing to tape
1         32k waiting for dumping
1         32k waiting for dumping
1     166876k waiting for dumping
1         32k waiting for dumping
1     170895k waiting for dumping
1     162817k waiting for dumping
0 failed: planner: [Request to client failed: Connection timed out]
1         32k waiting for dumping
1         32k waiting for dumping
0         53k waiting for dumping
0   77134628k waiting for dumping
1       2911k waiting for dumping
1         36k waiting for dumping
1         32k waiting for dumping
1      84935k waiting for dumping

SUMMARY          part      real  estimated
                           size       size
partition       :  43
estimated       :  42            559069311k
flush           :   0         0k
failed          :   1                    0k (  0.00%)
wait for dumping:  40            128740001k ( 23.03%)
dumping to tape :   0                    0k (  0.00%)
dumping         :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
dumped          :   1 233106841k 231368306k (100.75%) ( 41.70%)
wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
wait to flush   :   0         0k         0k (100.00%) (  0.00%)
writing to tape :   0         0k         0k (  0.00%) (  0.00%)
failed to tape  :   0         0k         0k (  0.00%) (  0.00%)
taped           :   0         0k         0k (  0.00%) (  0.00%)
9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)
chunker0 busy   :  6:17:03  ( 98.28%)
 dumper0 busy   :  6:17:03  ( 98.28%)
 0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
 1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)



RE: Taper scan algorithm did not find an acceptable volume.

2018-11-20 Thread Cuttler, Brian R (HEALTH)
I prelabel my tapes, both physical and virtual.

Amanda will mount an unlabeled tape and report that it isn't an amanda tape. 
I've never configured automatic labeling.
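Prelabeling is one amlabel per slot; a sketch for a vtape changer (config and 
label names are placeholders):

    for slot in $(seq 1 21); do
        amlabel MyConfig MyConfig-$(printf '%02d' "$slot") slot "$slot"
    done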


-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Debra S Baddorf
Sent: Tuesday, November 20, 2018 2:14 PM
To: Chris Miller 
Cc: Debra S Baddorf ; amanda-users 
Subject: Re: Taper scan algorithm did not find an acceptable volume.

> On Nov 20, 2018, at 9:54 AM, Chris Miller  wrote:
>
> Hi Jon,
>
> - Original Message -
>> From: "Jon LaBadie"
>> To: "amanda-users" 
>> Sent: Monday, November 19, 2018 1:44:35 PM
>> Subject: Re: Taper scan algorithm did not find an acceptable volume.
>
>> As a general approach, an admin should not run new software systems 
>> until their dependencies are independently verified.  For example, client 
>> backups depend on a working network.  You should confirm the clients 
>> can reach the server and vice versa.
>
> This is confirmed. From my AMANDA server, I can see each of my NAS as a Samba 
> share. I can ping each of my clients from the AMANDA server, and smbclient -L 
> lets me browse their filesystems. I don't think connectivity is my problem. 
> I think I am fighting my own ignorance, exhibited in a badly configured 
> AMANDA server. I seek advice on detecting and correcting this very simple 
> configuration:
>
>mailto "bac...@tclc.org"   # space separated list of 
> operators at your site
>dumpuser "amandabackup"# the user to run dumps under
>org "Mail.TCLC.org"
>
>infofile "/var/amanda/info"
>logdir   "/var/amanda/Mail.TCLC.org"
>indexdir "/var/amanda/index"
>
>tapetype "NAS"
>tapedev  "chg-disk://etc/amanda/Mail.TCLC.org/dst"
>holdingdisk hd
>dumpcycle 1week
>runspercycle 7
>tapecycle 21
>
>define holdingdisk hd {
>directory "/var/amanda/hold"
>}
>
>includefile "advanced.conf"
>includefile "/etc/amanda/template.d/dumptypes"
>includefile "/etc/amanda/template.d/tapetypes"
>
>
>
>
>> You are setting up a vtape library, tapes to fill the library slots, 
>> and a changer to manipulate the vtapes in the library.  You should 
>> exercise these components with amanda commands before trying to 
>> backup your clients.
>>
>> Hmm, does amanda have any commands to exercise the changer and library?
>
> I assume that this is a rhetorical question. In my case it is a valid 
> question that I can't answer, but the answer to which may solve my problem. I 
> now know about "amadmin", and "amtape", and "locate am | grep "^am" | grep 
> bin" has revealed many more which justify greater familiarity, but I have no 
> idea how to proceed testing my vtape config. I would be appreciative if you 
> could give me an example of how to do this.
>
> Thanks for the help, Jon.
> --
> Chris.
>

When installing a new tape drive or tape library  (physical tapes, for me)
I read ~18 years ago  (can’t find where, anymore)   that I should exercise the
tape commands manually first.   Then attempt to get amanda to use them.

To wit:
amtape config slot 1
amtape config slot 2
amtape config slot 15   (jump around)
amtape config slot next
amtape config slot first   (either 0 or 1, or whatever the first allowed slot 
is)
amtape config slot last   (the last slot)

I pre-label all my tapes, so I was able to do the above commands and see 
success before proceeding to amanda commands.   Dunno if it will mount an 
unlabeled vtape?  But you need to have the above working first.

It couldn’t *hurt* to prelabel all your vtapes, but I know you don’t want to.  
That may be why it’s failing???   I’ve never ventured into “unlabeled tape” 
territory.   Others?

Deb Baddorf
Fermilab





RE: tape pools, wrapping my mind around them

2018-11-15 Thread Cuttler, Brian R (HEALTH)
I believe that you would define a tape pool of 80 tapes for each; with regular 
expressions you may be able to have the same label string format but different 
tape number ranges.
Maybe like this?

Sales POOL1[0-9][0-9]
Support POOL2[0-9][0-9]
Research POOL3[0-9][0-9]

For sanity I think I'd define 3 tape pools in the one changer. You spend time 
loading and unloading tapes to find the pool and tapes you want, but since this 
is done without having to access a physical tape drive or robot it will take 
virtually no time at all.

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Jon LaBadie
Sent: Thursday, November 15, 2018 3:25 PM
To: amanda-users@amanda.org
Subject: tape pools, wrapping my mind around them

I currently have 240 Vtapes in a single changer.
Suppose I have 3 "departments", sales, research, and support.

Could I create 3 dumptype definitions so that all "sales" DLEs are backed up to 
a tape pool consisting of vtapes 1-80, "research" to vtapes 81-160, and 
"support" to vtapes 161-240?

If so, then a single config could handle keeping the various data separated 
while still maintaining balance etc.  Of course, is the balance on a tape pool 
basis or on a total configuration basis?

Jon
--
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)



RE: Clients that return something else

2018-11-15 Thread Cuttler, Brian R (HEALTH)
I think if you want a disk image you want the native OS disk image utility, 
(file-system-type)dump. Dump, ufsdump, xfsdump, etc. Amanda supports this.

Do you ever need a disk image other than the boot volume? I realize some 
special applications may have file types that are not recognized for backup by 
tar, but that would be pretty rare.

From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Miller
Sent: Thursday, November 15, 2018 2:23 PM
To: amanda-users 
Subject: Re: Clients that return something else


Hi Brian
From: "Cuttler, Brian R (HEALTH)" 
mailto:brian.cutt...@health.ny.gov>>
To: "Chris Miller" mailto:c...@tryx.org>>, "amanda-users" 
mailto:amanda-users@amanda.org>>
Sent: Thursday, November 15, 2018 10:54:49 AM
Subject: RE: Clients that return something else
Why pipe dd to tar when you can just run tar?
Good question. tar works at the filesystem level but dd works at the disk block 
level and I'm not aware of any way that tar can create a disk image, so I need 
to read the disk with dd. AMANDA expects a tar saveset, so I need to pipe 
anything I create to tar.



Er – I think the answer is “yes”, but you may have to roll your own.
Yeah, so do I. I'm just not exactly sure how I tell the client what to do. It 
appears that the dumptype uses something symbolic, and leaves the client up to 
its own devices to determine what it means. I could also do this, but I'd 
really like to be able to define the script on the server. Also, it's not 
exactly clear to me how the client understands what "GNUTAR" or "DUMP" means 
locally -- something must see "GNUTAR" and conclude, "Oh, he wants to run 
/usr/sbin/tar". For example, if I could put "BASH" in my dumptype definition 
for "program", and include that code somehow, that would be perfect! Ever hear 
of anything like that?

Thanks for the help, Brian.
--
Chris.

V:916.974.0424
F:916.974.0428


RE: Clients that return something else

2018-11-15 Thread Cuttler, Brian R (HEALTH)

I’ve configured amanda to run native client utilities: gnutar, dump, star 
(Schily tar), or something else.

There are certain known and available/approved things you can do, but the 
bottom line is always: Amanda is an intelligent, scheduling, client-server 
wrapper using OS-native tools.

I think I was thinking of pigz (parallel gzip) rather than gzip.

Why pipe dd to tar when you can just run tar?

Note: some versions of gtar are considered broken for Amanda; I don’t recall 
the versions or the specific reason (it might have had to do with argument 
handling), but that is also likely ancient history.

For zfs file systems we’ve also utilized the snapshot option, perhaps out of 
reach because of the way you’ve globbed your DLEs together.

Er – I think the answer is “yes”, but you may have to roll your own.

From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Miller
Sent: Thursday, November 15, 2018 1:31 PM
To: amanda-users 
Subject: Clients that return something else


Hi Folks,

Is it possible to have the client run something else besides "tar", like, for 
example, "dd | tar"? Can I specify this from the server? Of course, it goes 
without saying that I need to be able to do the same thing on a Windows client, 
and now I've just said it, so maybe it needed to be said.

Thanks for the help,
--
Chris.

V:916.974.0424
F:916.974.0428


RE: Configuration confusion

2018-11-15 Thread Cuttler, Brian R (HEALTH)
I suppose you could have a small modification to your config file every day.

(scripted to run dynamically, # date has very useful output formatting)

# amdump config-type-1-day-${dayname}

With 7 dumps/week and three configs he can have 21 different backup configs, 
completely defeating the scheduling but getting exactly the result he wants.
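A sketch of the cron side (config names are made up; note that % must be 
escaped in a crontab entry):

    # amanda user's crontab: pick a config by weekday
    0 1 * * * /usr/sbin/amdump nightly-$(date +\%a)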

-Original Message-
From: Debra S Baddorf  
Sent: Thursday, November 15, 2018 1:39 PM
To: Cuttler, Brian R (HEALTH) 
Cc: Debra S Baddorf ; Charles Curley 
; amanda-users 
Subject: Re: Configuration confusion

If you’ve disabled level 0 backups in the setup, *WILL* it do a level 0, even 
if you try to force it?
I have one huge config that is set up this way.   I have to edit the 
amanda.conf and REMOVE the “incr-only” lines when I want to force a level 0.   
Especially since you HAVE to start with a level 0, initially.
I’m pretty sure I’ve tried to force some level 0’s and found they failed, till 
I removed the “incr-only” bits.

Deb Baddorf

> On Nov 15, 2018, at 12:16 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
>
> Alternatively - configure all DLEs to be non-fulls, disable level 0 backups 
> entirely, and run cron jobs to force level 0 dumps on particular DLEs.
> That way you can get a level 0 when you want it to occur, and no other DLE 
> will advance to a level 0 on its own.
>
> # amadmin config force client DLE
>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On 
> Behalf Of Charles Curley
> Sent: Thursday, November 15, 2018 12:31 PM
> To: amanda-users 
> Subject: Re: Configuration confusion
>
> On Thu, 15 Nov 2018 09:03:00 -0800 (PST) Chris Miller  wrote:
>
>> If I run three backups, serial or otherwise, then do they know about 
>> each other? Meaning, is AMANDA smart enough to know not to run more 
>> than one level 0 dump per night? The problem is that level 0 backups 
>> take several hours and if I run multiple then I will still be 
>> completing last nights backup when everybody comes in the next 
>> morning. That would be embarrassing. "Sorry, I didn't complete my 
>> work last night, so you can't continue yours."
>
> Ah, that helps. My experience is in a SOHO environment, so take with the salt 
> shaker handy.
>
> The different configurations don't know about each other at all. So you could 
> in theory have a night in which all three run level 0 backups.
> The only way I know to get that kind of co-ordination is to have one 
> configuration which then backs up all three machines. Unfortunately your 
> requirement not to mix the backups due to custodial and security requirements 
> may kill that idea.
>
> Another thing to look at is to break your DLEs up into lots of smaller DLEs. 
> You'll get more level 0 backups, but they'll be spread around the week more 
> evenly.
>
> Or consider having a longer tape cycle. That means fewer level 0 backups in 
> any one week.
>
> --
> "When we talk of civilization, we are too apt to limit the meaning of the 
> word to its mere embellishments, such as arts and sciences; but the true 
> distinction between it and barbarism is, that the one presents a state of 
> society under the protection of just and well-administered law, and the other 
> is left to the chance government of brute force."
> - The Rev. James White, Eighteen Christian Centuries, 1889 Key 
> fingerprint = CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__charlescurley.com
> &d=DwIDAg&c=gRgGjJ3BkIsb5y6s49QqsA&r=HMrKaRiCv4jddln9fLPIOw&m=fnbC4Yi7
> 6_5ky_32Tf9A5Geluildi-avCP3JC0hU2RA&s=oia-zSsMzlRk8WRCNW_-9vpn6VWUX7Vm
> XSyeRVdmgcM&e=
>




RE: Configuration confusion

2018-11-15 Thread Cuttler, Brian R (HEALTH)


From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Miller
Sent: Thursday, November 15, 2018 12:29 PM
To: amanda-users 
Subject: Re: Configuration confusion


Hi Brian,
From: "Cuttler, Brian R (HEALTH)" 
mailto:brian.cutt...@health.ny.gov>>
To: "Cuttler, Brian R (HEALTH)" 
mailto:brian.cutt...@health.ny.gov>>, "Chris 
Miller" mailto:c...@tryx.org>>, "amanda-users" 
mailto:amanda-users@amanda.org>>
Sent: Wednesday, November 14, 2018 9:08:26 AM
Subject: RE: Configuration confusion
Tape custody – means what, retention policy or storage of the tape when not in 
the drive/juke?
Yes. Simplest is local custody. Off-site custody comes in two media flavors -- 
local media and cloud. Off-site costs, so we want to minimize it; it also 
increases the response time for restorations. You get the idea. I don't use 
tapes; I use removable disks, optical media, and usb keydrives. Generally this 
is determined by the client, so when I plan this, I simply consider which 
client backups need to be sequestered where. This gives me a configuration 
where I can think in terms of "client  gets backed up to NAS ", where NAS  has 
different properties and different dispositions.


  *   Yes, I understand. I have worked at sites where offsite was someone’s 
house (tapes never came back in the right cycle; it seems if you use them as 
hills under your train set you might not return the oldest tapes but bring 
back a mix). Other sites had tapes stored in the fire-protected computer room; 
still others had them in a room in another part of the building, but it is a 
very big building.

Amanda is not an archiver in the sense that the tapes are cycled on a regular 
basis. You are able to take a tape out of rotation and replace it, or create a 
unique tape label and perform level 0 backups to it and then mark it as 
no-reuse in the tapelist, but the primary function is not long term archiving, 
though the tools exist to do that very well.

You can use the same tape pool for all three Amanda configs, but they will need 
to have a common tapelist file. But if you are doing that then you are 
selecting a single set of standards for your data-at-rest security, in which 
case there is little reason to maintain 3 different configs.

Each Amanda config will look to level its nightly data, but you will have 
nights with relatively little and nights with relatively large data volume 
swings; er, think wave interference from physics. You eliminate a lot of that 
by combining disklists into a single configuration.
Yes! "Each Amanda config will look to level nightly data ..."  This is my 
principle question, and I seek to level the nightly data across all configs on 
a given night, which I recognize can't be done, so I seek to combine my 
multiple configs into a single config which specifies multiple sets of DLEs 
being mapped to multiple tapedev, if that can be done. For example, if the 
definition of tapedev had a "DLE ..." argument, and AMANDA were capable of this 
additional dimension of scheduling.


  *   See reply I put in subsequent email by Charles Curley that was a response 
to you.

Amanda will backup a new DLE at level 0 the first time it sees it. If you are 
worried about running long you will want to phase in the DLEs across several 
evenings. You may want to add the largest on Friday night, assuming that no one 
cares how late Amanda runs into Saturday. You will want to avoid adding 
multiple large DLEs on a single night, add a large and a small each night until 
they are all added.
"Phasing" my backup jobs may be my only choice, as that is exactly the problem 
I seek to avoid and I have been admonished to not try to subvert the scheduler 
by forcing level 0 backups to happen on my schedule. As I continue to discuss 
this, I am more and more convinced that AMANDA cannot schedule in two 
dimensions {(backups / night) X (nights / cycle)}

So, suppose I wanted to force my level 0 backups to happen at my discretion, so 
I can level my nightly run times. How would I do that?

Thanks for the help, Brian.
--
Chris.

V:916.974.0424
F:916.974.0428


RE: Configuration confusion

2018-11-15 Thread Cuttler, Brian R (HEALTH)
Alternatively - configure all DLEs to be non-fulls, disable level 0 backups 
entirely, and run cron jobs to force level 0 dumps on particular DLEs.
That way you can get a level 0 when you want it to occur, and no other DLE 
will advance to a level 0 on its own.

# amadmin config force client DLE
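A sketch of that cron arrangement (config, client, and DLE names are 
placeholders); amadmin force marks the DLE so its next run does a level 0:

    # amanda user's crontab: stagger forced fulls across the week,
    # ahead of a nightly amdump
    0 18 * * 1 /usr/sbin/amadmin daily force clientA /export/home
    0 18 * * 3 /usr/sbin/amadmin daily force clientB /export/data
    0 18 * * 5 /usr/sbin/amadmin daily force clientC /var/mail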

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Charles Curley
Sent: Thursday, November 15, 2018 12:31 PM
To: amanda-users 
Subject: Re: Configuration confusion

On Thu, 15 Nov 2018 09:03:00 -0800 (PST) Chris Miller  wrote:

> If I run three backups, serial or otherwise, then do they know about 
> each other? Meaning, is AMANDA smart enough to know not to run more 
> than one level 0 dump per night? The problem is that level 0 backups 
> take several hours and if I run multiple then I will still be 
> completing last nights backup when everybody comes in the next 
> morning. That would be embarrassing. "Sorry, I didn't complete my work 
> last night, so you can't continue yours."

Ah, that helps. My experience is in a SOHO environment, so take with the salt 
shaker handy.

The different configurations don't know about each other at all. So you could 
in theory have a night in which all three run level 0 backups.
The only way I know to get that kind of co-ordination is to have one 
configuration which then backs up all three machines. Unfortunately your 
requirement not to mix the backups due to custodial and security requirements 
may kill that idea.

Another thing to look at is to break your DLEs up into lots of smaller DLEs. 
You'll get more level 0 backups, but they'll be spread around the week more 
evenly.

Or consider having a longer tape cycle. That means fewer level 0 backups in any 
one week.

--
"When we talk of civilization, we are too apt to limit the meaning of the word 
to its mere embellishments, such as arts and sciences; but the true distinction 
between it and barbarism is, that the one presents a state of society under the 
protection of just and well-administered law, and the other is left to the 
chance government of brute force."
- The Rev. James White, Eighteen Christian Centuries, 1889 Key fingerprint = 
CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB https://charlescurley.com



RE: Configuration confusion

2018-11-14 Thread Cuttler, Brian R (HEALTH)
Tape custody – means what, retention policy or storage of the tape when not in 
the drive/juke?

Amanda is not an archiver in the sense that the tapes are cycled on a regular 
basis. You are able to take a tape out of rotation and replace it, or create a 
unique tape label and perform level 0 backups to it and then mark it as 
no-reuse in the tapelist, but the primary function is not long term archiving, 
though the tools exist to do that very well.

You can use the same tape pool for all three Amanda configs, but they will need 
to have a common tapelist file. But if you are doing that then you are 
selecting a single set of standards for your data-at-rest security, in which 
case there is little reason to maintain 3 different configs.

Each Amanda config will look to level its nightly data, but you will have 
nights with relatively little and nights with relatively large data volume 
swings; er, think wave interference from physics. You eliminate a lot of that 
by combining disklists into a single configuration.

Amanda will backup a new DLE at level 0 the first time it sees it. If you are 
worried about running long you will want to phase in the DLEs across several 
evenings. You may want to add the largest on Friday night, assuming that no one 
cares how late Amanda runs into Saturday. You will want to avoid adding 
multiple large DLEs on a single night, add a large and a small each night until 
they are all added.

You may want to think about your dumpcycle, 1 week, 2 weeks? Depends on your 
tapecycle (number of tapes in the pool) as well as your business requirements. 
You want multiple level 0 dumps of each DLE in the pool, and you want the level 0 
dumps to be relatively frequent, as that simplifies the restore process should you 
need to run it (you remember the textbook procedure for restoring TAR/DUMP 
backups).
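
The knobs in question, as a minimal amanda.conf sketch (the numbers are only 
illustrative):

dumpcycle 1 week      # every DLE gets a level 0 at least this often
runspercycle 5        # amdump runs per dumpcycle, e.g. weeknights only
runtapes 1            # tapes written per run
tapecycle 15 tapes    # pool size before reuse; large enough to keep
                      # several generations of level 0s on hand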

From: owner-amanda-us...@amanda.org  On Behalf 
Of Cuttler, Brian R (HEALTH)
Sent: Wednesday, November 14, 2018 11:44 AM
To: Chris Miller ; amanda-users 
Subject: RE: Configuration confusion



You can run amanda multiple times per night, and each config can specify (in 
fact MUST specify) a different set of tape labels, a different tape pool. 
But I don’t believe you can run multiple amanda servers concurrently.

Could you run your tapes with the highest security level so that DLEs can 
intermix on the output tape?

I believe that selection of encryption data-in-motion (vs on tape data-at-rest) 
can be configured per DLE, if not then certainly by host.

From: owner-amanda-us...@amanda.org On Behalf 
Of Chris Miller
Sent: Wednesday, November 14, 2018 11:33 AM
To: amanda-users
Subject: Configuration confusion



Hi Folks,

I now have three working configs, meaning that my test configuration can backup 
three clients. I still can't tell what is happening, but that is a topic for a 
different thread. There is not much difference among the configs; in fact the 
only difference is the src (contents of "disklist") and the dst ("tapedev").

So, I have three clients, but the way I have configured AMANDA, I am running 
three copies of AMANDA, none of which knows what any other is doing. They will 
quite probably schedule level 0 backups on the same run, meaning I lose the 
smoothing benefit of the scheduler, which wants to try to make the nightly 
backup task approximately equal in terms of storage and network bandwidth 
consumption. However, I recognize that I'm asking a single copy of AMANDA to do 
multiple backups each night, and this might not be something AMANDA was 
designed to do. I don't know, being relatively inexperienced.

The config specifies the src and dst of the backup where src is a set of DLEs 
and dst is a single tapedev. I think I want a single config that recognizes 
multiple (src to dst) mappings, and AMANDA can make backup level decisions 
knowing the full scope of the problem for that cycle. Given that I have fewer 
clients than backup-cycles, I can space my level 0 backups so that I never do 
more than one on any given night. My fear is that AMANDA will schedule everyone 
for level 0 and backups will still be proceeding the next day! That would be 
disruptive and embarrassing.

I think what I'm asking is if I can backup a set of DLEs to a single tapedev, 
and have a single copy of AMANDA run multiple backups each night? I can't mix 
clients on the backup media, since each has different security and custody 
requirements, and I think I'd like AMANDA to be aware of the complete set of 
tasks for any given night without coming into conflict 

RE: Configuration confusion

2018-11-14 Thread Cuttler, Brian R (HEALTH)
You can run amanda multiple times per night, and each config can specify (in 
fact MUST specify) a different set of tape labels, a different tape pool. 
But I don’t believe you can run multiple amanda servers concurrently.

Could you run your tapes with the highest security level so that DLEs can 
intermix on the output tape?

I believe that selection of encryption data-in-motion (vs on tape data-at-rest) 
can be configured per DLE, if not then certainly by host.

From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Miller
Sent: Wednesday, November 14, 2018 11:33 AM
To: amanda-users 
Subject: Configuration confusion



Hi Folks,

I now have three working configs, meaning that my test configuration can backup 
three clients. I still can't tell what is happening, but that is a topic for a 
different thread. There is not much difference among the configs; in fact the 
only difference is the src (contents of "disklist") and the dst ("tapedev").

So, I have three clients, but the way I have configured AMANDA, I am running 
three copies of AMANDA, none of which knows what any other is doing. They will 
quite probably schedule level 0 backups on the same run, meaning I lose the 
smoothing benefit of the scheduler, which wants to try to make the nightly 
backup task approximately equal in terms of storage and network bandwidth 
consumption. However, I recognize that I'm asking a single copy of AMANDA to do 
multiple backups each night, and this might not be something AMANDA was 
designed to do. I don't know, being relatively inexperienced.

The config specifies the src and dst of the backup where src is a set of DLEs 
and dst is a single tapedev. I think I want a single config that recognizes 
multiple (src to dst) mappings, and AMANDA can make backup level decisions 
knowing the full scope of the problem for that cycle. Given that I have fewer 
clients than backup-cycles, I can space my level 0 backups so that I never do 
more than one on any given night. My fear is that AMANDA will schedule everyone 
for level 0 and backups will still be proceeding the next day! That would be 
disruptive and embarrassing.

I think what I'm asking is if I can backup a set of DLEs to a single tapedev, 
and have a single copy of AMANDA run multiple backups each night? I can't mix 
clients on the backup media, since each has different security and custody 
requirements, and I think I'd like AMANDA to be aware of the complete set of 
tasks for any given night without coming into conflict with AMANDA 
doppelgangers, unless I'm inventing problems that don't exist and there is no 
problem running multiple copies of AMANDA.

I may be looking to solve a non-problem, meaning that running multiple copies 
of AMANDA each night is not a problem, and hearing "That is not a problem" 
from those who know would be comforting.

Thanks for the help,
--
Chris.

V:916.974.0424
F:916.974.0428


RE: Monitor and Manage

2018-11-14 Thread Cuttler, Brian R (HEALTH)
Chris,

How many work areas? How many tape drives?

I have one config per amanda server platform, and I have several amanda server 
platforms, each backing up a unique and non-overlapping set of clients.

Are you creating multiple amanda configs and running them from a single amanda 
server?

You can do that, if they have non-overlapping clients, or you have structured 
them in a way that will backup level 0 from one config and never level 0 from 
the other, but that defeats the entire purpose of the amanda scheduler.

Also I’d be very much surprised if you could successfully run multiple configs 
concurrently from a single server; you “might” be able to if you could 
assure unique sockets for each instance of the server, but I wouldn’t recommend 
such a setup.
Oh, in the case of concurrent runs you want to make strictly certain that 
clients are unique to a config; an amanda client can only reply to a single 
master, so you can’t have multiple concurrent clients on the server end.

Brian


Brian Cuttler
Network and System Administrator, ITG - Information Technology Group
Wadsworth Center, NYS Department of Health
Biggs Lab, Empire State Plaza, Albany, NY 12201
(518) 486-1697 | brian.cutt...@health.ny.gov



From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Miller
Sent: Wednesday, November 14, 2018 10:44 AM
To: amanda-users 
Subject: Monitor and Manage



Hi Folks,

I now have three working configs, meaning that I can backup three clients. 
There is not much difference among the configs, but that is a topic for a 
different thread. My question is: how do I manage what AMANDA is doing?

So, let's suppose I fire up all three amdumps at once:

  *   How do I know if I'm getting level 0 or higher?
  *   How do I know the backups are running and have not silently failed?
  *   How do I know when they complete?
  *   How do I know what has been accomplished?
  *   :
These are all the sort of questions that might be answered by some sort of 
dashboard. I haven't heard of any such thing, nor do I expect to, but I am 
equally sure that all the answers exist; I just don't know where.

In short, how do I monitor and manage AMANDA?

Thanks for the help,
--
Chris.

V:916.974.0424
F:916.974.0428
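
The stock answers to the questions above are the command-line tools; as a 
sketch (the config name here is hypothetical):

amcheck MyConfig          # pre-flight: tape, holding disk, client access
amstatus MyConfig         # progress of the current (or most recent) amdump
amreport MyConfig         # regenerate the report for the last run
amadmin MyConfig due      # when each DLE is next due for a level 0
amadmin MyConfig balance  # how level 0 work is spread across the cycle

amdump also mails the amreport output to the address configured in amanda.conf 
when a run finishes.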


RE: Breaking DLEs up

2018-11-08 Thread Cuttler, Brian R (HEALTH)
My login directories are all lower case, but I did have one include "./[A-Z]*", 
it never found anything, which didn't bother me because there wasn't supposed 
to be anything to be found, now you make me wonder if it would have failed if I 
had in fact needed it.

Good the situation never came up.

-Original Message-
From: Debra S Baddorf  
Sent: Thursday, November 8, 2018 3:44 PM
To: Cuttler, Brian R (HEALTH) 
Cc: Debra S Baddorf ; Chris Nighswonger 
; amanda-users@amanda.org
Subject: Re: Breaking DLEs up



Yeah, I do use includes,  but I only do a single letter at a time
   include "./a*”

Perhaps the problem is with the syntax of doing more than one letter.
I only do   [a-f]   on my excludes.   Weird!

Deb Baddorf

> On Nov 8, 2018, at 2:33 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
>
>
> Interesting, not sure.
>
> For part 2, I will say that it is far easier to exclude files from backup 
> than include them. You had done an excellent job of exclusion, you’ll pardon 
> the poor attempt at humor, it is getting late in the day.
>
>
> From: Chris Nighswonger 
> Sent: Thursday, November 8, 2018 3:21 PM
> To: Cuttler, Brian R (HEALTH) 
> Cc: amanda-users@amanda.org
> Subject: Re: Breaking DLEs up
>
>
> On Thu, Nov 8, 2018 at 1:56 PM Cuttler, Brian R (HEALTH) 
>  wrote:
>
> Your syntax
>
> fileserver "/netdrives/CAMPUS/af" "/netdrives/CAMPUS" {
>   comp-tar
>   include "./[a-f]*"
>   estimate server
> }
>
> my syntax
>
> finsen  /export/home-A /export/home   {
> user-tar2
> include "./[a]*"
> }
>
> finsen  /export/home-AZ /export/home   {
> user-tar2
> include "./[A-Z]*"
> }
>
>
> Well, this fixes my problem, though why I do not know.
>
> fileserver CAMPUS_a-f /netdrives/CAMPUS {
>   comp-tar
>   exclude file "./[g-z]*"
>   estimate server
> } 1
>
> It seems a bit of work compared to the include directive. I tried "include 
> file" to no avail.
>
> I'll see how the backup runs tonight, but amcheck likes it.
>
> Kind regards,
> Chris




RE: Breaking DLEs up

2018-11-08 Thread Cuttler, Brian R (HEALTH)

Interesting, not sure.

For part 2, I will say that it is far easier to exclude files from backup than 
include them. You had done an excellent job of exclusion, you’ll pardon the 
poor attempt at humor, it is getting late in the day.


From: Chris Nighswonger 
Sent: Thursday, November 8, 2018 3:21 PM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: Breaking DLEs up



On Thu, Nov 8, 2018 at 1:56 PM Cuttler, Brian R (HEALTH) wrote:

Your syntax

fileserver "/netdrives/CAMPUS/af" "/netdrives/CAMPUS" {
  comp-tar
  include "./[a-f]*"
  estimate server
}

my syntax

finsen  /export/home-A /export/home   {
user-tar2
include "./[a]*"
}

finsen  /export/home-AZ /export/home   {
user-tar2
include "./[A-Z]*"
}


Well, this fixes my problem, though why I do not know.

fileserver CAMPUS_a-f /netdrives/CAMPUS {
  comp-tar
  exclude file "./[g-z]*"
  estimate server
} 1

It seems a bit of work compared to the include directive. I tried "include 
file" to no avail.

I'll see how the backup runs tonight, but amcheck likes it.

Kind regards,
Chris


RE: Breaking DLEs up

2018-11-08 Thread Cuttler, Brian R (HEALTH)
Chris,

We seem to be doing the same thing. We are doing it for the same reason. I went 
a little more finely grained as I had about 1200 user accounts and some stored 
quite a bit of data in their home directories, vs the group samba share or data 
directories for projects (usually on the compute engine or cluster, rather than 
the home directory server). [Latest storage server came in the door with 250 
Tbytes of space]

Note that I have ‘a’ to ‘z’ and then the catch-all that never catches anything, 
but that is expected and fits in with Debra’s comments.

I always worry about the leading dot to anchor the path, but we did it the 
same. That is why I copied the two near each other, for easy comparison.

You can see the tar commands in the log files, what do you catch if you run the 
tar command manually?
Could it be a permissions problem for Amanda to read the user files?
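
As a concrete sketch of both checks (adjust the user to whatever your amanda 
client runs as, and the glob to the DLE in question):

# does the include glob match anything at all, as the amanda user?
su amandabackup -s /bin/sh -c 'cd /netdrives/CAMPUS && echo ./[s-z]*'
# and can that user read what it matches?
su amandabackup -s /bin/sh -c 'ls -ld /netdrives/CAMPUS/[s-z]*'

If the echo prints the pattern back unexpanded, nothing matched; if the ls 
shows permission errors, it is the read-permission problem suspected above.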

Your syntax

fileserver "/netdrives/CAMPUS/af" "/netdrives/CAMPUS" {
  comp-tar
  include "./[a-f]*"
  estimate server
}

my syntax

finsen  /export/home-A /export/home   {
user-tar2
include "./[a]*"
}

finsen  /export/home-AZ /export/home   {
user-tar2
include "./[A-Z]*"
}

I’ve removed all but the last email you wrote from this email, it was getting 
long for little or no gain.

From: Chris Nighswonger 
Sent: Thursday, November 8, 2018 1:25 PM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: Breaking DLEs up



No question is stupid. I learned that beating my head against the wall for long 
hours. :-)

/netdrives/CAMPUS/ is a path which contains users' network drives. The level 
below CAMPUS contains folders which follow the naming convention of the 
username of each account. ie. Chris Nighswonger would be cnighswonger and thus 
/netdrives/CAMPUS/cnighswonger/ would be my related network drive.

There are somewhat less than 100 user directories. Prior to this I have been 
backing them all up with a DLE which looks like this:

host "/netdrives/CAMPUS" {
  comp-tar
  estimate server
}

This works fine with the caveat that it results in a huge level 0 backup.

In the supplied disklist file example in Amanda's documentation 
(/usr/share/doc/amanda-server/examples/disklist), I discovered the DLE form I 
am currently attempting. According to the example this should limit each DLE to 
backing up subdirectories of /netdrives/CAMPUS/ based on the regexp supplied in 
the "include" directive.

It appears to me that something may have changed with the way Amanda handles 
this since that document was written.

As Stefan points out, Amanda seems to think that there is "nothing" to be 
backed up. Furthermore, it appears that the log excerpts I posted also show 
that the regexp is not being applied but that Amanda is actually looking for 
specific subdirectories like /netdrives/CAMPUS/af and the like.

Maybe the DLE syntax is incorrect?


RE: Breaking DLEs up

2018-11-08 Thread Cuttler, Brian R (HEALTH)


Stupid question: on host fileserver, do the directories /netdrives/CAMPUS/s* to 
/netdrives/CAMPUS/z* exist and have some files in them?

From: Chris Nighswonger 
Sent: Thursday, November 8, 2018 12:11 PM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: Breaking DLEs up



From the client:

Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: pid 23692 ruid 10195 euid 
10195 version 3.3.1: start at Thu Nov  8 05:27:11 2018
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Version 3.3.1
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: pid 23692 ruid 10195 euid 
10195 version 3.3.1: rename at Thu Nov  8 05:27:11 2018
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:   Parsed request as: 
program `GNUTAR'
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:  disk 
`/netdrives/CAMPUS/sz'
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:  
device `/netdrives/CAMPUS'
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:  level 0
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:  since 
NODATE
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:  
options `'
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:  
datapath `AMANDA'
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: start: 
host:/netdrives/CAMPUS/sz lev 0
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Spawning "/bin/gzip 
/bin/gzip --fast" in pipeline
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: gnutar: pid 23694: 
/bin/gzipThu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: pid 23694: 
/bin/gzip --fast
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: doing level 0 dump as 
listed-incremental to 
'/var/lib/amanda/gnutar-lists/host_netdrives_CAMPUS_sz_0.new'
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Nothing found to include 
for disk /netdrives/CAMPUS/sz
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Spawning 
"/usr/libexec/amanda/runtar runtar campus /bin/tar --create --file - 
--directory /netdrives/CAMPUS --one-file-system --listed-incremental 
/var/lib/amanda/gnutar-lists/host_netdrives_CAMPUS_sz_0.new --sparse 
--ignore-failed-read --totals --files-from 
/tmp/amanda/sendbackup._netdrives_CAMPUS_sz.20181108052711.include" in pipeline
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: gnutar: 
/usr/libexec/amanda/runtar: pid 23696
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Started backup
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Started index creator: 
"/bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'"
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup:  46:size(|): Total 
bytes written: 10240 (10KiB, 78MiB/s)
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Index created successfully
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: Parsed backup messages
Thu Nov  8 05:27:11 2018: thd-0x1c8f200: sendbackup: pid 23692 finish time Thu 
Nov  8 05:27:11 2018

From the server:

/var/log/amanda/server/campus/dumper.20181108020002007.debug:165691:Thu Nov  8 
05:27:11 2018: thd-0x5579048e2400: dumper: getcmd: PORT-DUMP 03-00023 50013 1 
host 9efefbff1f /netdrives/CAMPUS/sz /netdrives/CAMPUS 0 
1970:1:1:0:0:0 GNUTAR "" "" "" "" bsdtcp AMANDA 
127.0.0.1:50014 20 |"  bsdtcp\n  
FAST\n  YES\n  YES\n  
AMANDA\n  \n./[s-z]*\n  
\n"
/var/log/amanda/server/campus/dumper.20181108020002007.debug:165705:  
/netdrives/CAMPUS/sz
/var/log/amanda/server/campus/dumper.20181108020002007.debug:165706:  
/netdrives/CAMPUS
/var/log/amanda/server/campus/dumper.20181108020002007.debug:165740:  
/netdrives/CAMPUS/sz
/var/log/amanda/server/campus/dumper.20181108020002007.debug:165741:  
/netdrives/CAMPUS
/var/log/amanda/server/campus/dumper.20181108020002007.debug:165857:Thu Nov  8 
05:27:11 2018: thd-0x5579048e2400: dumper: Building type FILE header of 
32768-32768 bytes with name='host' disk='/netdrives/CAMPUS/sz' dumplevel=0 and 
blocksize=32768
/var/log/amanda/server/campus/dumper.20181108020002007.debug:165944:Thu Nov  8 
05:27:11 2018: thd-0x5579048e2400: dumper: Building type FILE header of 
32768-32768 bytes with name='host' disk='/netdrives/CAMPUS/sz' dumplevel=0 and 
blocksize=32768

On Thu, Nov 8, 2018 at 12:00 PM Cuttler, Brian R (HEALTH) wrote:
Client and server side?
/var/log/amanda/ ?


From: Chris Nighswonger
Sent: Thursday, November 8, 2018 11:43 AM
To: Cuttler, Brian R (HEALTH)
Cc: amanda-users@amanda.org
Subject: Re: Bre

RE: Breaking DLEs up

2018-11-08 Thread Cuttler, Brian R (HEALTH)
What does amcheck say, no logs may mean authentication failure between client 
and server.

Try running the client on the client side, as the amanda user, from the command 
line. You should get a log, on occasion I’ve seen failures that proved to be a 
path issue to the binary in the inetd.conf (or equiv).

From: Cuttler, Brian R (HEALTH)
Sent: Thursday, November 8, 2018 12:00 PM
To: 'Chris Nighswonger' 
Cc: amanda-users@amanda.org
Subject: RE: Breaking DLEs up

Client and server side?
/var/log/amanda/ ?


From: Chris Nighswonger
Sent: Thursday, November 8, 2018 11:43 AM
To: Cuttler, Brian R (HEALTH)
Cc: amanda-users@amanda.org
Subject: Re: Breaking DLEs up



Oddly enough, /tmp/amanda is empty.

On Thu, Nov 8, 2018 at 11:33 AM Cuttler, Brian R (HEALTH) wrote:
I have been using a very similar setup for years, though I did not have any 
quotes in the first line of each DLE. I do NOT believe the quotes are an issue.

What do the /tmp/amanda files show for these attempted dumps?

From: owner-amanda-us...@amanda.org On Behalf 
Of Chris Nighswonger
Sent: Thursday, November 8, 2018 11:12 AM
To: amanda-users@amanda.org
Subject: Breaking DLEs up



I attempted this and it appears to not have worked. I'm not sure why.

Here is the relevant portion of my DLEs:

fileserver "/netdrives/CAMPUS/af" "/netdrives/CAMPUS" {
  comp-tar
  include "./[a-f]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/gl" "/netdrives/CAMPUS" {
  comp-tar
  include "./[g-l]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/mr" "/netdrives/CAMPUS" {
  comp-tar
  include "./[m-r]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/sz" "/netdrives/CAMPUS" {
  comp-tar
  include "./[s-z]*"
  estimate server
}

Here are the corresponding lines from amreport for the last backup run:

fileserver:/netdrives/CAMPUS/af 
0 1k dump done (5:28:16), waiting for writing to tape
fileserver:/netdrives/CAMPUS/gl 
0 1k dump done (5:28:11), waiting for writing to tape
fileserver:/netdrives/CAMPUS/mr 
0 1k dump done (5:28:06), waiting for writing to tape
fileserver:/netdrives/CAMPUS/sz 
0 1k dump done (5:27:11), waiting for writing to tape

Kind regards,
Chris


RE: Breaking DLEs up

2018-11-08 Thread Cuttler, Brian R (HEALTH)
Client and server side?
/var/log/amanda/ ?


From: Chris Nighswonger 
Sent: Thursday, November 8, 2018 11:43 AM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: Breaking DLEs up



Oddly enough, /tmp/amanda is empty.

On Thu, Nov 8, 2018 at 11:33 AM Cuttler, Brian R (HEALTH) wrote:
I have been using a very similar setup for years, though I did not have any 
quotes in the first line of each DLE. I do NOT believe the quotes are an issue.

What do the /tmp/amanda files show for these attempted dumps?

From: owner-amanda-us...@amanda.org On Behalf 
Of Chris Nighswonger
Sent: Thursday, November 8, 2018 11:12 AM
To: amanda-users@amanda.org
Subject: Breaking DLEs up



I attempted this and it appears to not have worked. I'm not sure why.

Here is the relevant portion of my DLEs:

fileserver "/netdrives/CAMPUS/af" "/netdrives/CAMPUS" {
  comp-tar
  include "./[a-f]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/gl" "/netdrives/CAMPUS" {
  comp-tar
  include "./[g-l]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/mr" "/netdrives/CAMPUS" {
  comp-tar
  include "./[m-r]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/sz" "/netdrives/CAMPUS" {
  comp-tar
  include "./[s-z]*"
  estimate server
}

Here are the corresponding lines from amreport for the last backup run:

fileserver:/netdrives/CAMPUS/af 
0 1k dump done (5:28:16), waiting for writing to tape
fileserver:/netdrives/CAMPUS/gl 
0 1k dump done (5:28:11), waiting for writing to tape
fileserver:/netdrives/CAMPUS/mr 
0 1k dump done (5:28:06), waiting for writing to tape
fileserver:/netdrives/CAMPUS/sz 
0 1k dump done (5:27:11), waiting for writing to tape

Kind regards,
Chris


RE: Breaking DLEs up

2018-11-08 Thread Cuttler, Brian R (HEALTH)
I have been using a very similar setup for years, though I did not have any 
quotes in the first line of each DLE. I do NOT believe the quotes are an issue.

What do the /tmp/amanda files show for these attempted dumps?

From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Thursday, November 8, 2018 11:12 AM
To: amanda-users@amanda.org
Subject: Breaking DLEs up



I attempted this and it appears to not have worked. I'm not sure why.

Here is the relevant portion of my DLEs:

fileserver "/netdrives/CAMPUS/af" "/netdrives/CAMPUS" {
  comp-tar
  include "./[a-f]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/gl" "/netdrives/CAMPUS" {
  comp-tar
  include "./[g-l]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/mr" "/netdrives/CAMPUS" {
  comp-tar
  include "./[m-r]*"
  estimate server
}
fileserver "/netdrives/CAMPUS/sz" "/netdrives/CAMPUS" {
  comp-tar
  include "./[s-z]*"
  estimate server
}

Here are the corresponding lines from amreport for the last backup run:

fileserver:/netdrives/CAMPUS/af 
0 1k dump done (5:28:16), waiting for writing to tape
fileserver:/netdrives/CAMPUS/gl 
0 1k dump done (5:28:11), waiting for writing to tape
fileserver:/netdrives/CAMPUS/mr 
0 1k dump done (5:28:06), waiting for writing to tape
fileserver:/netdrives/CAMPUS/sz 
0 1k dump done (5:27:11), waiting for writing to tape

Kind regards,
Chris


RE: dumporder

2018-11-06 Thread Cuttler, Brian R (HEALTH)
Yah, the plot is a very helpful tool. You can get it to work harder if you can 
enlarge the work area or add an additional work area.

Also you might want to check your chunk size; the tendency, at least when I 
started out, was toward many smaller files. I increased the size of the chunks 
in the holding area and believe it helped to improve performance, as there were 
fewer files and fewer file creates/accesses/deletes later on.
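
A holdingdisk sketch along those lines for amanda.conf (directory and sizes are 
illustrative only):

holdingdisk hd1 {
    directory "/dumps/amanda"
    use -1 Gb           # use all free space but leave 1 GB spare
    chunksize 1 Gb      # larger chunks mean far fewer files to create,
                        # access and delete during the run
}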

From: Chris Nighswonger 
Sent: Tuesday, November 6, 2018 2:38 PM
To: Cuttler, Brian R (HEALTH) 
Cc: ned.danie...@duke.edu; amanda-users@amanda.org
Subject: Re: dumporder



Setting the output to postscript (-p) and then converting to pdf (ps2pdf) 
did the trick.

It looks like my holding disk maxes out.

[attached amplot graph: 20181106020001.jpg]


Christopher Nighswonger
Faculty Member
Network & Systems Director
Foundations Bible College & Seminary
www.foundations.edu
www.fbcradio.org

On Tue, Nov 6, 2018 at 2:26 PM Cuttler, Brian R (HEALTH) wrote:
Xhost, and environmental variable DISPLAY, plus your output needs to be x11.
You can also write a pdf file and print it, or view with an appropriate viewer.

-Original Message-
From: owner-amanda-us...@amanda.org On Behalf 
Of Chris Nighswonger
Sent: Tuesday, November 6, 2018 1:18 PM
To: ned.danie...@duke.edu
Cc: amanda-users@amanda.org
Subject: Re: dumporder



This seems to work:

amplot /var/backups/campus/log/amdump.1

Running under the amanda user.

However, the issue now is the attempt to write the output to the X11 terminal:


gnuplot: unable to open display ''
gnuplot: X11 aborted.

Not sure what all that's about. So I'm doing a bit of hacking on the gnuplot 
script to have it write the results out to a png file.

Chris
On Tue, Nov 6, 2018 at 12:29 PM Ned Danieley wrote:
>
> On Tue, Nov 06, 2018 at 11:50:57AM -0500, Chris Nighswonger wrote:
> > Digging around a bit, it appears that it might be a reference to a
> > file which is missing. From amplot.g, line 62 we see:
> >
> > # file title has the parameters that this program needs
> > load 'title'
> > plot    "run_queue" title "Run Queue" with lines,\
> > "tape_queue" title "Tape Queue" with lines,\
> > "finished"  title "Dumps Finished" with lines,\
> > "bandw_free" title "Bandwidth Allocated" with lines, \
> > "disk_alloc" title "%Disk Allocated" with lines, \
> > "tape_wait" title "%Tape Wait" with lines,\
> > "tape_idle" title "Taper Idle" with lines,\
> > "dump_idle" title "Dumpers Idle" with lines
> >
> > Where is a developer when you need one? :-P
>
> looks like the awk script is supposed to generate 'title'. on my
> system, I have to run amplot as user 'amanda'. that means that I have
> to be in a directory where amanda has write permission, otherwise
> title can't be generated. my home directory doesn't work, but a temp
> dir that's chmod 777 does.
>
> --
> Ned Danieley (ned.danie...@duke.edu)
> Department of Biomedical Engineering
> Box 90281, Duke University
> Durham, NC  27708   (919) 660-5111
>
> http://dilbert.com/strips/comic/2012-02-11/


RE: dumporder

2018-11-06 Thread Cuttler, Brian R (HEALTH)
Xhost, and environmental variable DISPLAY, plus your output needs to be x11.
You can also write a pdf file and print it, or view with an appropriate viewer.

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Tuesday, November 6, 2018 1:18 PM
To: ned.danie...@duke.edu
Cc: amanda-users@amanda.org
Subject: Re: dumporder



This seems to work:

amplot /var/backups/campus/log/amdump.1

Running under the amanda user.

However, the issue now is the attempt to write the output to the X11 terminal:


gnuplot: unable to open display ''
gnuplot: X11 aborted.

Not sure what all that's about. So I'm doing a bit of hacking on the gnuplot 
script to have it write the results out to a png file.

Chris
On Tue, Nov 6, 2018 at 12:29 PM Ned Danieley  wrote:
>
> On Tue, Nov 06, 2018 at 11:50:57AM -0500, Chris Nighswonger wrote:
> > Digging around a bit, it appears that it might be a reference to a 
> > file which is missing. From amplot.g, line 62 we see:
> >
> > # file title has the parameters that this program needs
> > load 'title'
> > plot    "run_queue" title "Run Queue" with lines,\
> > "tape_queue" title "Tape Queue" with lines,\
> > "finished"  title "Dumps Finished" with lines,\
> > "bandw_free" title "Bandwidth Allocated" with lines, \
> > "disk_alloc" title "%Disk Allocated" with lines, \
> > "tape_wait" title "%Tape Wait" with lines,\
> > "tape_idle" title "Taper Idle" with lines,\
> > "dump_idle" title "Dumpers Idle" with lines
> >
> > Where is a developer when you need one? :-P
>
> looks like the awk script is supposed to generate 'title'. on my 
> system, I have to run amplot as user 'amanda'. that means that I have 
> to be in a directory where amanda has write permission, otherwise 
> title can't be generated. my home directory doesn't work, but a temp 
> dir that's chmod 777 does.
>
> --
> Ned Danieley (ned.danie...@duke.edu)
> Department of Biomedical Engineering
> Box 90281, Duke University
> Durham, NC  27708   (919) 660-5111
>
> http://dilbert.com/strips/comic/2012-02-11/
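
Putting Ned's and Brian's pointers together, a working recipe might look like 
this (a sketch; the exact .ps output name can vary by amplot version):

mkdir /tmp/amplot && chmod 777 /tmp/amplot && cd /tmp/amplot  # somewhere 'amanda' can write scratch files such as 'title'
su amanda -c 'amplot -p /var/backups/campus/log/amdump.1'     # -p writes postscript instead of opening an X11 window
ps2pdf *.ps amdump-plot.pdf                                   # then view or print the pdf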



RE: dumporder

2018-11-06 Thread Cuttler, Brian R (HEALTH)
I think you need to provide the name of the amconfig. I believe it reads 
amanda.conf.

-Original Message-
From: Chris Nighswonger  
Sent: Tuesday, November 6, 2018 11:51 AM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: dumporder



Digging around a bit, it appears that it might be a reference to a file which 
is missing. From amplot.g, line 62 we see:

# file title has the parameters that this program needs
load 'title'
plot    "run_queue" title "Run Queue" with lines,\
"tape_queue" title "Tape Queue" with lines,\
"finished"  title "Dumps Finished" with lines,\
"bandw_free" title "Bandwidth Allocated" with lines, \
"disk_alloc" title "%Disk Allocated" with lines, \
"tape_wait" title "%Tape Wait" with lines,\
"tape_idle" title "Taper Idle" with lines,\
"dump_idle" title "Dumpers Idle" with lines

Where is a developer when you need one? :-P

On Tue, Nov 6, 2018 at 11:45 AM Cuttler, Brian R (HEALTH) 
 wrote:
>
> It has been a while, title might be the org string from amanda.conf?
> I'm sorry, it has been years since I ran it regularly.
>
> -Original Message-
> From: Chris Nighswonger 
> Sent: Tuesday, November 6, 2018 11:42 AM
> To: Cuttler, Brian R (HEALTH) 
> Cc: amanda-users@amanda.org
> Subject: Re: dumporder
>
>
>
> The amplot utility is new to me. Here is what it says when I attempt to run 
> it:
>
> root@scriptor:~# amplot
> /var/log/amanda/server/campus/amdump.20181106020001.debug
> Displaying graph on the screen,  for next graph : MISSING SPACE 
> DECLARATION "title", line 62: Cannot open script file 'title'
>
> I have both gnuplot and gawk installed on the backup server.
>
> On Tue, Nov 6, 2018 at 11:10 AM Cuttler, Brian R (HEALTH) 
>  wrote:
> >
> > Chris,
> >
> > There is an amplot utility that I used to run a lot, it would show me what 
> > I was constrained on, often holding space, which if you are running a bunch 
> > of large dumps to start off with could constrain dumping later on.
> >
> >
> > -Original Message-
> > From: owner-amanda-us...@amanda.org  
> > On Behalf Of Chris Nighswonger
> > Sent: Tuesday, November 6, 2018 11:02 AM
> > To: amanda-users@amanda.org
> > Subject: Re: dumporder
> >
> >
> >
> > On Mon, Nov 5, 2018 at 1:31 PM Chris Nighswonger 
> >  wrote:
> > >
> > > Is there any wisdom available on optimization of dumporder?
> > >
> >
> > After looking over the feedback from Brian and Austin and reviewing the 
> > actual sizes of the DLEs, I ended up with this sort of thing:
> >
> > inparallel 15
> > dumporder "STs"
> >
> > Which resulted in a reduced time over what I have been seeing. Here are 
> > some stats from amstatus for this config. It looks like I could drop about 
> > half of the dumpers and still be fine. Any idea what causes the dumpers 
> > over the first five to be utilized less?
> >
> >  dumper0 busy   :  5:13:42  ( 97.98%)
> >  dumper1 busy   :  1:12:09  ( 22.54%)
> >  dumper2 busy   :  0:40:30  ( 12.65%)
> >  dumper3 busy   :  0:33:44  ( 10.54%)
> >  dumper4 busy   :  0:03:47  (  1.19%)
> >  dumper5 busy   :  0:37:32  ( 11.73%)
> >  dumper6 busy   :  0:02:00  (  0.62%)
> >  dumper7 busy   :  0:00:57  (  0.30%)
> >  dumper8 busy   :  0:05:54  (  1.85%)
> >  dumper9 busy   :  0:04:38  (  1.45%)
> > dumper10 busy   :  0:00:16  (  0.08%)
> > dumper11 busy   :  0:01:39  (  0.52%)
> > dumper12 busy   :  0:00:01  (  0.01%)
> >  0 dumpers busy :  0:02:43  (  0.85%)   0:  0:02:43  (100.00%)
> >  1 dumper busy  :  3:39:13  ( 68.47%)   0:  3:39:13  (100.00%)
> >  2 dumpers busy :  0:23:10  (  7.24%)   0:  0:23:10  (100.00%)
> >  3 dumpers busy :  1:07:57  ( 21.22%)   0:  1:07:57  (100.00%)
> >  4 dumpers busy :  0:02:09  (  0.67%)   0:  0:02:09  ( 99.99%)
> >  5 dumpers busy :  0:01:07  

RE: dumporder

2018-11-06 Thread Cuttler, Brian R (HEALTH)
It has been a while, title might be the org string from amanda.conf?
I'm sorry, it has been years since I ran it regularly.

-Original Message-
From: Chris Nighswonger  
Sent: Tuesday, November 6, 2018 11:42 AM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: dumporder



The amplot utility is new to me. Here is what it says when I attempt to run it:

root@scriptor:~# amplot
/var/log/amanda/server/campus/amdump.20181106020001.debug
Displaying graph on the screen,  for next graph : MISSING SPACE DECLARATION 
"title", line 62: Cannot open script file 'title'

I have both gnuplot and gawk installed on the backup server.

On Tue, Nov 6, 2018 at 11:10 AM Cuttler, Brian R (HEALTH) 
 wrote:
>
> Chris,
>
> There is an amplot utility that I used to run a lot, it would show me what I 
> was constrained on, often holding space, which if you are running a bunch of 
> large dumps to start off with could constrain dumping later on.
>
>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On 
> Behalf Of Chris Nighswonger
> Sent: Tuesday, November 6, 2018 11:02 AM
> To: amanda-users@amanda.org
> Subject: Re: dumporder
>
>
>
> On Mon, Nov 5, 2018 at 1:31 PM Chris Nighswonger 
>  wrote:
> >
> > Is there any wisdom available on optimization of dumporder?
> >
>
> After looking over the feedback from Brian and Austin and reviewing the 
> actual sizes of the DLEs, I ended up with this sort of thing:
>
> inparallel 15
> dumporder "STs"
>
> Which resulted in a reduced time over what I have been seeing. Here are some 
> stats from amstatus for this config. It looks like I could drop about half of 
> the dumpers and still be fine. Any idea what causes the dumpers over the 
> first five to be utilized less?
>
>  dumper0 busy   :  5:13:42  ( 97.98%)
>  dumper1 busy   :  1:12:09  ( 22.54%)
>  dumper2 busy   :  0:40:30  ( 12.65%)
>  dumper3 busy   :  0:33:44  ( 10.54%)
>  dumper4 busy   :  0:03:47  (  1.19%)
>  dumper5 busy   :  0:37:32  ( 11.73%)
>  dumper6 busy   :  0:02:00  (  0.62%)
>  dumper7 busy   :  0:00:57  (  0.30%)
>  dumper8 busy   :  0:05:54  (  1.85%)
>  dumper9 busy   :  0:04:38  (  1.45%)
> dumper10 busy   :  0:00:16  (  0.08%)
> dumper11 busy   :  0:01:39  (  0.52%)
> dumper12 busy   :  0:00:01  (  0.01%)
>  0 dumpers busy :  0:02:43  (  0.85%)   0:  0:02:43  (100.00%)
>  1 dumper busy  :  3:39:13  ( 68.47%)   0:  3:39:13  (100.00%)
>  2 dumpers busy :  0:23:10  (  7.24%)   0:  0:23:10  (100.00%)
>  3 dumpers busy :  1:07:57  ( 21.22%)   0:  1:07:57  (100.00%)
>  4 dumpers busy :  0:02:09  (  0.67%)   0:  0:02:09  ( 99.99%)
>  5 dumpers busy :  0:01:07  (  0.35%)   0:  0:01:07  ( 99.98%)
>  6 dumpers busy :  0:00:03  (  0.02%)   0:  0:00:03  ( 99.59%)
>  7 dumpers busy :  0:00:43  (  0.22%)   0:  0:00:24  ( 56.35%)
> 5:  0:00:09  ( 22.36%)
> 4:  0:00:08  ( 18.89%)
> 1:  0:00:01  (  2.37%)
>  8 dumpers busy :  0:00:20  (  0.10%)   0:  0:00:19  ( 96.50%)
>  9 dumpers busy :  0:01:51  (  0.58%)   0:  0:01:51  ( 99.98%)
> 10 dumpers busy :  0:00:41  (  0.22%)   0:  0:00:37  ( 89.71%)
> 5:  0:00:02  (  7.13%)
> 4:  0:00:01  (  3.09%)
> 11 dumpers busy :  0:00:07  (  0.04%)   5:  0:00:07  ( 99.54%)
> 12 dumpers busy :  0:00:00  (  0.00%)
> 13 dumpers busy :  0:00:00  (  0.00%)
>
> I am going to give things another week or so and then try breaking up some of 
> the very large DLEs in the config.



RE: dumporder

2018-11-06 Thread Cuttler, Brian R (HEALTH)
Note on that - you want to maximize throughput, not maximize the number of 
running dumpers.

More dumpers moving data is usually good, but more isn't always better.

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Cuttler, Brian R (HEALTH)
Sent: Tuesday, November 6, 2018 11:11 AM
To: Chris Nighswonger ; amanda-users@amanda.org
Subject: RE: dumporder



Chris,

There is an amplot utility that I used to run a lot, it would show me what I 
was constrained on, often holding space, which if you are running a bunch of 
large dumps to start off with could constrain dumping later on.


-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Tuesday, November 6, 2018 11:02 AM
To: amanda-users@amanda.org
Subject: Re: dumporder



On Mon, Nov 5, 2018 at 1:31 PM Chris Nighswonger  
wrote:
>
> Is there any wisdom available on optimization of dumporder?
>

After looking over the feedback from Brian and Austin and reviewing the actual 
sizes of the DLEs, I ended up with this sort of thing:

inparallel 15
dumporder "STs"

Which resulted in a reduced time over what I have been seeing. Here are some 
stats from amstatus for this config. It looks like I could drop about half of 
the dumpers and still be fine. Any idea what causes the dumpers over the first 
five to be utilized less?

 dumper0 busy   :  5:13:42  ( 97.98%)
 dumper1 busy   :  1:12:09  ( 22.54%)
 dumper2 busy   :  0:40:30  ( 12.65%)
 dumper3 busy   :  0:33:44  ( 10.54%)
 dumper4 busy   :  0:03:47  (  1.19%)
 dumper5 busy   :  0:37:32  ( 11.73%)
 dumper6 busy   :  0:02:00  (  0.62%)
 dumper7 busy   :  0:00:57  (  0.30%)
 dumper8 busy   :  0:05:54  (  1.85%)
 dumper9 busy   :  0:04:38  (  1.45%)
dumper10 busy   :  0:00:16  (  0.08%)
dumper11 busy   :  0:01:39  (  0.52%)
dumper12 busy   :  0:00:01  (  0.01%)
 0 dumpers busy :  0:02:43  (  0.85%)   0:  0:02:43  (100.00%)
 1 dumper busy  :  3:39:13  ( 68.47%)   0:  3:39:13  (100.00%)
 2 dumpers busy :  0:23:10  (  7.24%)   0:  0:23:10  (100.00%)
 3 dumpers busy :  1:07:57  ( 21.22%)   0:  1:07:57  (100.00%)
 4 dumpers busy :  0:02:09  (  0.67%)   0:  0:02:09  ( 99.99%)
 5 dumpers busy :  0:01:07  (  0.35%)   0:  0:01:07  ( 99.98%)
 6 dumpers busy :  0:00:03  (  0.02%)   0:  0:00:03  ( 99.59%)
 7 dumpers busy :  0:00:43  (  0.22%)   0:  0:00:24  ( 56.35%)
5:  0:00:09  ( 22.36%)
4:  0:00:08  ( 18.89%)
1:  0:00:01  (  2.37%)
 8 dumpers busy :  0:00:20  (  0.10%)   0:  0:00:19  ( 96.50%)
 9 dumpers busy :  0:01:51  (  0.58%)   0:  0:01:51  ( 99.98%)
10 dumpers busy :  0:00:41  (  0.22%)   0:  0:00:37  ( 89.71%)
5:  0:00:02  (  7.13%)
4:  0:00:01  (  3.09%)
11 dumpers busy :  0:00:07  (  0.04%)   5:  0:00:07  ( 99.54%)
12 dumpers busy :  0:00:00  (  0.00%)
13 dumpers busy :  0:00:00  (  0.00%)

I am going to give things another week or so and then try breaking up some of 
the very large DLEs in the config.




RE: dumporder

2018-11-06 Thread Cuttler, Brian R (HEALTH)
Chris,

There is an amplot utility that I used to run a lot, it would show me what I 
was constrained on, often holding space, which if you are running a bunch of 
large dumps to start off with could constrain dumping later on.


-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Tuesday, November 6, 2018 11:02 AM
To: amanda-users@amanda.org
Subject: Re: dumporder



On Mon, Nov 5, 2018 at 1:31 PM Chris Nighswonger  
wrote:
>
> Is there any wisdom available on optimization of dumporder?
>

After looking over the feedback from Brian and Austin and reviewing the actual 
sizes of the DLEs, I ended up with this sort of thing:

inparallel 15
dumporder "STs"

Which resulted in a reduced time over what I have been seeing. Here are some 
stats from amstatus for this config. It looks like I could drop about half of 
the dumpers and still be fine. Any idea what causes the dumpers over the first 
five to be utilized less?

 dumper0 busy   :  5:13:42  ( 97.98%)
 dumper1 busy   :  1:12:09  ( 22.54%)
 dumper2 busy   :  0:40:30  ( 12.65%)
 dumper3 busy   :  0:33:44  ( 10.54%)
 dumper4 busy   :  0:03:47  (  1.19%)
 dumper5 busy   :  0:37:32  ( 11.73%)
 dumper6 busy   :  0:02:00  (  0.62%)
 dumper7 busy   :  0:00:57  (  0.30%)
 dumper8 busy   :  0:05:54  (  1.85%)
 dumper9 busy   :  0:04:38  (  1.45%)
dumper10 busy   :  0:00:16  (  0.08%)
dumper11 busy   :  0:01:39  (  0.52%)
dumper12 busy   :  0:00:01  (  0.01%)
 0 dumpers busy :  0:02:43  (  0.85%)   0:  0:02:43  (100.00%)
 1 dumper busy  :  3:39:13  ( 68.47%)   0:  3:39:13  (100.00%)
 2 dumpers busy :  0:23:10  (  7.24%)   0:  0:23:10  (100.00%)
 3 dumpers busy :  1:07:57  ( 21.22%)   0:  1:07:57  (100.00%)
 4 dumpers busy :  0:02:09  (  0.67%)   0:  0:02:09  ( 99.99%)
 5 dumpers busy :  0:01:07  (  0.35%)   0:  0:01:07  ( 99.98%)
 6 dumpers busy :  0:00:03  (  0.02%)   0:  0:00:03  ( 99.59%)
 7 dumpers busy :  0:00:43  (  0.22%)   0:  0:00:24  ( 56.35%)
5:  0:00:09  ( 22.36%)
4:  0:00:08  ( 18.89%)
1:  0:00:01  (  2.37%)
 8 dumpers busy :  0:00:20  (  0.10%)   0:  0:00:19  ( 96.50%)
 9 dumpers busy :  0:01:51  (  0.58%)   0:  0:01:51  ( 99.98%)
10 dumpers busy :  0:00:41  (  0.22%)   0:  0:00:37  ( 89.71%)
5:  0:00:02  (  7.13%)
4:  0:00:01  (  3.09%)
11 dumpers busy :  0:00:07  (  0.04%)   5:  0:00:07  ( 99.54%)
12 dumpers busy :  0:00:00  (  0.00%)
13 dumpers busy :  0:00:00  (  0.00%)

I am going to give things another week or so and then try breaking up some of 
the very large DLEs in the config.



RE: dumporder

2018-11-05 Thread Cuttler, Brian R (HEALTH)
Depends on how many dumpers you are using; I kind of like TSTSTSts, or 
something like that. It also very much depends on the size/length of the 
relative dumps, but I do like to kick off some of the longest ones early.

Caveat – I haven’t done enough experimentation to know that what I’m doing is 
actually reasonable.
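
For reference, per the amanda.conf man page the dumporder letters are s/S 
(smallest/largest size first), t/T (shortest/longest dump time first) and b/B 
(smallest/biggest bandwidth first), one letter per dumper. So, as a sketch:

inparallel 4
dumporder "TSTS"   # dumpers 1 and 3 grab the longest-running DLEs first,
                   # dumpers 2 and 4 the largest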

From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Monday, November 5, 2018 1:31 PM
To: amanda-users 
Subject: dumporder



Is there any wisdom available on optimization of dumporder?

Kind regards,
Chris


Re: ipv4 vs ipv6

2018-05-14 Thread Cuttler, Brian R (HEALTH)
Ingo,

I’m sorry, I carelessly and stupidly copied the wrong set of lines from tcpdump.

The tcpdump was performed on the client; this shows a failure from the client 
to the server showing “TCP (6)”, which I think is the problem. The server is 
IPv4 only, xinetd specified the flag as “IPv4”, and the inbound traffic is IPv4.

Sorry for the earlier error.

Brian


11:29:40.183354 IP (tos 0xc0, ttl 64, id 11278, offset 0, flags [none], proto 
ICMP (1), length 80)
biowork2.health1.hcom.health.state.ny.us > flower.wadsworth.org: ICMP host 
biowork2.health1.hcom.health.state.ny.us unreachable - admin prohibited, length 
60
  IP (tos 0x0, ttl 64, id 9375, offset 0, flags [DF], proto TCP (6), length 
52)
flower.wadsworth.org.516 > biowork2.health1.hcom.health.state.ny.us.amanda: 
Flags [S], cksum 0xfc6b (correct), seq 2828866963, win 49640, options [mss 
1460,nop,wscale 0,nop,nop,sackOK], length 0


From: Ingo Schaefer 
Date: Monday, May 14, 2018 at 3:16 PM
To: "Cuttler, Brian R (HEALTH)" , 
"amanda-users@amanda.org" 
Subject: AW: ipv4 vs ipv6



Hello Brian,

Your tcpdump output is just ARP traffic resolving the ethernet address for the 
IP address.

And according to the length in the tcpdump output I would say it is requesting 
the ethernet address for an IPv4 address.

So nothing wrong there.

Regards,
Ingo

Gesendet von meinem BlackBerry 10-Smartphone.
Von: Cuttler, Brian R (HEALTH)
Gesendet: Montag, 14. Mai 2018 17:44
An: amanda-users@amanda.org
Betreff: ipv4 vs ipv6



Hello Amanda users,

For some reason I’m not seeing much/any Amanda traffic, I did re-register with 
a new email address last month…

Installing Amanda-client on an Ubuntu system, when I run # amcheck from the 
server I’m seeing the following in tcpdump output.


flower.wadsworth.org.516 > biowork2.health1.hcom.health.state.ny.us.amanda: 
Flags [R], cksum 0x3d33 (correct), seq 2828866964, win 49640, length 0

11:29:44.054702 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 
flower.wadsworth.org tell biowork2.health1.hcom.health.state.ny.us, length 28

11:29:44.054796 ARP, Ethernet (len 6), IPv4 (len 4), Reply flower.wadsworth.org 
is-at 00:14:4f:21:10:c2 (oui Unknown), length 46

That is, Amanda-client is being activated by IPv4 but responding using IPv6. I 
know that is mentioned in the mail archives and I have checked, re-installed 
xinetd on the client and verified flags are IPv4.


root@biowork2:/etc/ufw/applications.d# more /etc/xinetd.d/amanda
# default: on
#
# description: Amanda services for Amanda server and client.
#

service amanda
{
        disable         = no
        flags           = IPv4
        socket_type     = dgram
        protocol        = udp
        wait            = no
        user            = amandabackup
        group           = disk
        groups          = yes
        #server         = /usr/lib/amanda/amandad
        server          = /usr/lib/x86_64-linux-gnu/amanda/amandad
        server_args     = -auth=bsdtcp amdump amindexd amidxtaped
}

Either I’m missing a step or I’m not chasing the correct problem.

Any help would be appreciated.

Brian

Brian Cuttler, Wadsworth Center/NYS Dept of Health
Albany, NY 12201
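
Two quick checks against the xinetd file above may be worthwhile. First, 
whether anything is listening on the amanda port per address family (a sketch, 
assuming the stock port):

netstat -an | grep 10080   # 0.0.0.0:10080 is an IPv4 socket; :::10080 is IPv6

Second, -auth=bsdtcp is normally paired with socket_type = stream and 
protocol = tcp; the dgram/udp combination shown is the older bsd/udp style, 
and a mismatch there can produce symptoms like these.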





ipv4 vs ipv6

2018-05-14 Thread Cuttler, Brian R (HEALTH)

Hello Amanda users,

For some reason I’m not seeing much/any Amanda traffic, I did re-register with 
a new email address last month…

Installing Amanda-client on an Ubuntu system, when I run # amcheck from the 
server I’m seeing the following in tcpdump output.


flower.wadsworth.org.516 > biowork2.health1.hcom.health.state.ny.us.amanda: 
Flags [R], cksum 0x3d33 (correct), seq 2828866964, win 49640, length 0

11:29:44.054702 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 
flower.wadsworth.org tell biowork2.health1.hcom.health.state.ny.us, length 28

11:29:44.054796 ARP, Ethernet (len 6), IPv4 (len 4), Reply flower.wadsworth.org 
is-at 00:14:4f:21:10:c2 (oui Unknown), length 46

That is, Amanda-client is being activated by IPv4 but responding using IPv6. I 
know that is mentioned in the mail archives and I have checked, re-installed 
xinetd on the client and verified flags are IPv4.


root@biowork2:/etc/ufw/applications.d# more /etc/xinetd.d/amanda
# default: on
#
# description: Amanda services for Amanda server and client.
#

service amanda
{
        disable         = no
        flags           = IPv4
        socket_type     = dgram
        protocol        = udp
        wait            = no
        user            = amandabackup
        group           = disk
        groups          = yes
        #server         = /usr/lib/amanda/amandad
        server          = /usr/lib/x86_64-linux-gnu/amanda/amandad
        server_args     = -auth=bsdtcp amdump amindexd amidxtaped
}

Either I’m missing a step or I’m not chasing the correct problem.

Any help would be appreciated.

Brian

Brian Cuttler, Wadsworth Center/NYS Dept of Health
Albany, NY 12201



updated amanda client

2018-04-17 Thread Cuttler, Brian R (HEALTH)
We did a server update, which replaced old libraries, so we did an update to 
the Amanda install.

After updating (possibly incorrectly) the security.conf, we still have a 
getpeername error.

Not sure how to fix this, not seeing anything directly related in the archives.

Any help would be appreciated.

Thank you,
Brian



[root@netd /tmp/amanda/amandad]# more amandad.20180417104331.debug

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad: pid 57255 ruid 210 euid 210 
version 3.3.9: start at Tue Apr 17 10:43:31 2018

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad: 
security_getdriver(name=BSDTCP) returns 0x280d61a8

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad: version 3.3.9

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad: build: 
VERSION="Amanda-3.3.9"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:BUILT_DATE="Sun 
Apr 8 11:38:59 UTC 2018" BUILT_MACH=""

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:BUILT_REV="6535" 
BUILT_BRANCH="tags" CC="cc"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad: paths: 
bindir="/usr/local/bin" sbindir="/usr/local/sbin"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
libexecdir="/usr/local/libexec/amanda"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
amlibexecdir="/usr/local/libexec/amanda"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
mandir="/usr/local/man" AMANDA_TMPDIR="/tmp/amanda"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
AMANDA_DBGDIR="/tmp/amanda"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
CONFIG_DIR="/usr/local/etc/amanda" DEV_PREFIX="/dev/"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
RDEV_PREFIX="/dev/r" DUMP="/sbin/dump"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
RESTORE="/sbin/restore" VDUMP=UNDEF VRESTORE=UNDEF

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:XFSDUMP=UNDEF 
XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
SAMBA_CLIENT="/usr/bin/smbclient"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
GNUTAR="/usr/local/bin/gtar" COMPRESS_PATH="/usr/bin/gzip"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
UNCOMPRESS_PATH="/usr/bin/gzip"  LPRCMD=UNDEF  MAILER=UNDEF

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
listed_incr_dir="/usr/local/var/amanda/gnutar-lists"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad: defs:  
DEFAULT_SERVER="111i386-quarterly-job-16"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
DEFAULT_CONFIG="DailySet1"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
DEFAULT_TAPE_SERVER="111i386-quarterly-job-16"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
DEFAULT_TAPE_DEVICE="" NEED_STRSTR AMFLOCK_POSIX

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:AMFLOCK_FLOCK 
AMFLOCK_LOCKF AMFLOCK_LNLOCK AMANDA_DEBUG_DAYS=4

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:BSD_SECURITY 
USE_AMANDAHOSTS CLIENT_LOGIN="amanda"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:CHECK_USERID 
HAVE_GZIP COMPRESS_SUFFIX=".gz"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
COMPRESS_FAST_OPT="--fast" COMPRESS_BEST_OPT="--best"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:
UNCOMPRESS_OPT="-dc"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad:CONFIGURE_ARGS=" 
'--libexecdir=/usr/local/libexec/amanda' '--without-amlibexecdir' 
'--with-amandahosts' '--with-fqdn' '--with-dump-honor-nodump' 
'--disable-glibtest' '--with-user=amanda' '--with-group=amanda' 
'--with-bsdtcp-security' '--with-bsdudp-security' '--with-ssh-security' 
'--disable-installperms' 
'--with-security-file=/usr/local/etc/amanda/security.conf' 
'--with-gnutar-listdir=/usr/local/var/amanda/gnutar-lists' 
'--with-gnutar=/usr/local/bin/gtar' '--without-server' 
'--with-amandates=/usr/local/var/amanda/amandates' '--prefix=/usr/local' 
'--localstatedir=/var' '--mandir=/usr/local/man' '--disable-silent-rules' 
'--infodir=/usr/local/info/' '--build=i386-portbld-freebsd11.1' 
'build_alias=i386-portbld-freebsd11.1' 'CC=cc' 'CFLAGS=-O2 -pipe  
-fstack-protector -fno-strict-aliasing' 'LDFLAGS=  -fstack-protector' 'LIBS=' 
'CPPFLAGS=' 'CPP=cpp' 'PERL=/usr/local/bin/perl-amanda' 'PKG_CONFIG=pkgconf'"

Tue Apr 17 10:43:31 2018: thd-0x28a1ee00: amandad: getpeername returned: Socket 
is not connected



load library path

2014-09-30 Thread Brian Cuttler

I have a machine on which we replaced the bash shell, which is
used by some of the amanda scripts.

We can add ld_library_path to the .cshrc and run interactive
command like # amcheck -t, but the daemon is not finding the
library, so we have failures in the nightly run.

Is there a way to set LD_LIBRARY_PATH so my cron initiated processes
and its client jobs will find it?

Solaris 10 system.

thank you.
Brian

---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773
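
An untested sketch of one common workaround, with hypothetical paths: wrap
whatever cron or inetd launches in a script that exports the path before
exec'ing the real binary.

    #!/bin/sh
    # make the runtime linker find the replaced libraries
    LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH:-}
    export LD_LIBRARY_PATH
    exec /usr/local/libexec/amanda/amandad "$@"

On Solaris 10, crle(1) can also add a default library search path for the
runtime linker, avoiding the environment variable entirely.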



Re: Configuring dump path on client at runtime?

2014-09-12 Thread Brian Cuttler
On Thu, Sep 11, 2014 at 05:29:08PM -0400, Gene Heskett wrote:
> On Thursday 11 September 2014 13:19:14 Debra S Baddorf did opine
> And Gene did reply:
> > I agree.  I always build amanda myself.   [ My package seems okay when
> > I have ROOT build it and install it ….  I don’t think  I’ve changed
> > anything to allow that to work. Hmm.  I do remember reading that my
> > amanda user (operator for me)  was supposed to build it.  ]
> > 
> > A comment about the configure settings:
> > > ./configure --with-user=amanda \
> > > 
> > >   --with-group=disk \
> > >   --with-owner=amanda \
> > >   --with-gnu-ld \
> > >   --prefix=/usr/local/ \
> > >   --with-debugging=/tmp/amanda-dbg/ \
> > >   --with-tape-server=coyote \
> > >   --with-bsdtcp-security --with-amandahosts \
> > >   --with-configdir=/usr/local/etc/amanda \
> > >   --with-gnutar=/usr/bin/tar
> > 
> > Since I have clients of several ages and varying amanda versions …
> > I’ve discovered that you can have all three of the security types
> > enabled and used by differing clients:
> >  --with-bsd-security \
> >  --with-krb5-security  \
> >  --with-bsdtcp-security \is default so I guess I don’t explicitly
> > add that line, but it’s a good idea to do so



> 
> I was not aware of that, but then my system is a pretty small model 
> compared to many on this list.  I would not be surprised to find that 
> Brian C. has in excess of 50 spindles scattered over nearly that many 
> clients, as do you at Fermi.  Fortunately, amanda scales exceedingly well.



Really very well. I have a system that has only itself as a client
and has 288 DLEs and backs up over 1 TByte per night. Another system
has only 113 DLEs, but spread across 34 clients. I have 4-5 TBytes backed
up on VTapes, and I have other systems that back up relatively little data
across only about a half dozen machines.

Amanda scales very well. The client/server combinations we have are a
result of historical, practical and political factors, rather than an
issue of amanda's scalability.

The only issue I have with amanda scaling is that the machine with 288 DLEs
has a file handle limitation, which I think is an OS issue that I should
be able to work around (but have not). And despite the error, amanda runs
and backs up all of the DLEs properly, so it works around the OS issue
for me.

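If that machine is a Solaris box, the per-process file descriptor limits
can usually be raised; a hedged sketch (values arbitrary), in /etc/system
followed by a reboot:

    set rlim_fd_cur = 1024
    set rlim_fd_max = 4096
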
Brian


> Cheers, Gene Heskett
> -- 
> "There are four boxes to be used in defense of liberty:
>  soap, ballot, jury, and ammo. Please use in that order."
> -Ed Howdershelt (Author)
> Genes Web page <http://geneslinuxbox.net:6309/gene>
> US V Castleman, SCOTUS, Mar 2014 is grounds for Impeaching SCOTUS
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Configuring dump path on client at runtime?

2014-09-11 Thread Brian Cuttler

The more you customize the greater the chance you will have a
problem unraveling come update time.

I'd suppose, if you want an unsupported solution, that you
could replace the original path's binary with a # ln link.

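For instance (paths hypothetical; the real binary is the link target, the
compiled-in path is the link name):

    ln -s /usr/sbin/dump /sbin/dump
    ln -s /usr/sbin/restore /sbin/restore
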
On Wed, Sep 10, 2014 at 11:29:27AM -0500, Jason L Tibbitts III wrote:
> >>>>> "JM" == Jean-Louis Martineau  writes:
> 
> JM> It's not possible to set the path at runtime. It can only be set at
> JM> compilation time
> 
> OK, I guess Red Hat gets a bug report on that.  I went ahead and built a
> fresh package that has dump in the build environment and all is well.
> 
>  - J<
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Question on the one-filesystem option

2014-07-28 Thread Brian Cuttler
On Fri, Jul 25, 2014 at 04:15:43PM -0400, Gene Heskett wrote:
> On Friday 25 July 2014 15:26:00 Debra S Baddorf did opine
> And Gene did reply:
> > I just create two DLEs
> > 
> > mynode  / backuptype
> > mynode  /boot  backuptype
> > 
> > Then   amdump  myconfig   (  or  amdump  myconfig  mynode  )
> > will get both of them.Is this not an option for you?
> 
> That would completely hose amanda's ability to spread the levels of the 
> backups out over the cycle in its attempt to make full use of the backup 
> media available.
> 
> I could do it, yes, but that is not what amanda is all about.

"amadmin force"  both DLEs on the same day(s)?

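For example (config and host names hypothetical), both DLEs then get a
level 0 on the next run:

    amadmin daily force mynode /
    amadmin daily force mynode /boot
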

> It would not be a huge problem with vtapes, but for those using individual 
> tapes, that would never fly unless you allowed amanda to use a virtually 
> unlimited number of tapes when it is doing a level 0 backup.  I am 
> currently set for about 30Gb per vtape, and it will very occasionally use 
> a few megs of the 2nd vtape.  I'd have to let it use 9 or more of the 30 
> formatted with just 4 entries. 2 on this machine, and the / entries on my 
> pair of cnc boxes.  That backup would also take quite a few more hours to 
> complete, currently in the 1:25 range.
> 
> Thanks Deb.
> 
> > Deb Baddorf
> > Fermilab
> > 
> > On Jul 25, 2014, at 1:45 PM, Gene Heskett  wrote:
> > > Greetings;
> > > 
> > > It turns out some of my backups for this machine will not be usable,
> > > particularly for a bare metal recovery on a new drive, due to a
> > > misunderstanding of the --one-filesystem option by tar-1.27, and 1.27.1
> > > does not fix it.
> > > 
> > > The problem is that this system is currently installed with two
> > > partitions, 3 actually if you count swap space.
> > > 
> > > They are /boot, and /
> > > 
> > > So tar refuses to back up a softlink that points to a directory/file 
> > > that is in a different dir than the current disklist entry points
> > > to, EVEN THOUGH IT IS IN FACT on the same filesystem.
> > > 
> > > The config I've been using for a decade and change uses that option
> > > command.  Will removing it (--one-filesystem) from my config fix this
> > > missing links in the backup problem?  Or will that result in its
> > > making a duplicate backup file of what is at the end of that
> > > softlink?
> > > 
> > > IMNSHO, when tar encounters such a link, instead of making a backup
> > > of that file at the end of the link, it should backup the contents
> > > of the links text so that an amrecovery will re-create the link
> > > file.
> > > 
> > > It is not doing that either, so how do, or can, I force tar to do
> > > that and result in a fully usable backup?
> > > 
> > > This does not seem to me to be a violation of the one filesystem
> > > command, making sense ONLY if the link is to a different partition,
> > > which could be a different filesystem, which this present situation
> > > is most certainly not.  Different directory yes, different
> > > filesystem, no.
> > > 
> > > Thanks.
> > > 
> > > Cheers, Gene Heskett
> 
> 
> Cheers, Gene Heskett
> -- 
> "There are four boxes to be used in defense of liberty:
>  soap, ballot, jury, and ammo. Please use in that order."
> -Ed Howdershelt (Author)
> Genes Web page <http://geneslinuxbox.net:6309/gene>
> US V Castleman, SCOTUS, Mar 2014 is grounds for Impeaching SCOTUS
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Why have my tapes 'shrunk' in size?

2014-03-12 Thread Brian Cuttler

Don't know if its relevant, but I've got an LTO5/juke and it
dropped in both speed and capacity. I'm now trying to remember
if I had the host ESAS card replaced, or the I/O module in the
juke...

On Wed, Mar 12, 2014 at 12:28:33PM -0400, Jon LaBadie wrote:
> On Wed, Mar 12, 2014 at 02:38:48PM +, Dave Ewart wrote:
> > Hello,
> > 
> > Four years ago I deployed a pair of Tandberg LTO-5 Ultrium (SAS) tape
> > drives, connected to a Dell PowerEdge server via a Dell H200 SAS controller.
> > 
> > At that time ran the amtapetype utility which produced this output:
> > 
> > define tapetype Tandberg-LTO5 {
> > comment "Tandberg LTO5 1500/3000, produced by tapetype prog (hardware 
> > compression off)"
> > length 1410 gbytes
> > filemark 0 kbytes
> > speed 125762 kps
> > }
> > 
> > (That was created by an older version of AMANDA, which we were using at
> > the time: probably from Debian/Lenny, which was version 2.5.2p1, I
> > believe)
> > 
> > These tapes are native 1.5TB and so that looks pretty reasonable.  We've
> > never used these tapes to their fullest capacity and all was fun and
> > shiny until recently when the tapes reported "No space left on device".
> > However, the concerning thing is that the tapes reported 'full' at less
> > than what I was expecting as full capacity, just above 1.1TB in fact.
> > This means that our backup space 'growth', which I had been assuming was
> > only 75%/80% full is in fact at 100%!
> > 
> > I re-ran the tapetype utility from our current AMANDA (version 2.6.1p2-3
> > from Debian/Squeeze) and it showed this:
> > 
> >   define tapetype unknown-tapetype {  
> > comment "Created by amtapetype; compression disabled"
> > length 1148746080 kbytes
> > filemark 0 kbytes
> > speed 69815 kps
> > blocksize 32 kbytes
> >   }
> > 
> > 
> > The length reported here is ~1.1TB which ties up with the "no space left
> > on device" message, but ...
> > 
> > ... these are genuine LTO-5 (Tandberg brand) tapes - just like
> > http://img.misco.eu/Resources/images/Modules/InformationBlocks/1210/TAN/TAN-2/202175-tandberg-LTO-5-tape-cartridge-small.jpg
> > - and the second tapetype above was created using a previously-unused
> >   tape and they really are 1.5TB native!
> > 
> > What's going on?  Why am I not getting to use the full capacity??
> > 
> 
> A guess only.
> 
> I note the measured speed has dropped by 45%.  Due to what I haven't a clue,
> but maybe some hardware change or cables or ???
> 
> Perhaps your system's ability to feed the drive has dropped below the
> minimum needed to keep the drive streaming.  In that case, the drive
> must "shoe-shine" and each restart costs a bit of tape.
> 
> Jon
> -- 
> Jon H. LaBadie j...@jgcomp.com
>  11226 South Shore Rd.  (703) 787-0688 (H)
>  Reston, VA  20190  (609) 477-8330 (C)
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Why have my tapes 'shrunk' in size?

2014-03-12 Thread Brian Cuttler
On Wed, Mar 12, 2014 at 01:11:03PM -0400, Brian Cuttler wrote:
> 
> Don't know if its relevant, but I've got an LTO5/juke and it
> dropped in both speed and capacity. I'm now trying to remember
> if I had the host ESAS card replaced, or the I/O module in the
> juke...

I'd found that reseating the esas cable helped temporarily, I
guess resetting the driver config. In testing I was able to somehow
rule out the tape drive itself.

YRWBCD.

Brian


> On Wed, Mar 12, 2014 at 12:28:33PM -0400, Jon LaBadie wrote:
> > [earlier messages quoted in full -- trimmed]
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amdump dry run?

2014-03-06 Thread Brian Cuttler

I believe amanda TEEs the output of the dump, spooling it to
the holding area and also reading the dump/tar file to create
an index. I'm uncertain when (as created or after the DLE is
fully spooled) the index file is placed in the server side
tree for storage.


On Wed, Mar 05, 2014 at 09:20:56PM -0500, Michael Stauffer wrote:
> Yes, thanks I was trying to avoid that since it will take a long time.
> Although...if I start the dump with --no-taper, is there a file that's
> written with at the begin of the process with all the files to be dumped? I
> assume the index itself is written at the end?
> 
> -M
> 
> 
> On Wed, Mar 5, 2014 at 5:55 PM, Debra S Baddorf  wrote:
> 
> >
> > On Mar 5, 2014, at 4:26 PM, Michael Stauffer 
> >  wrote:
> >
> > > Amanda 3.3.4
> > >
> > > Hi,
> > >
> > > Is there a way to get amdump to do a dry run? The idea is to see
> > everything that will be dumped, to check dle settings. For completeness'
> > sake I'd like a list of all files that will be backed up.
> > >
> > > I can get some idea of amadmin's disklist and estimate commands, but
> > would like more if possible.
> > >
> > > -M
> >
> >
> > Well ...
> > you could do   "amdump  config  --no-taper  nodename  DLE DLE2  nodename
> >  DLE3"
> > This would give you the index files and would leave the dumps on your
> > holding disk without using up a tape.
> > Since you wouldn't be going to tape,  it might over-fill your holding
> > disk,  which is why I suggested doing
> > a few DLEs  at a time.
> >
> > Deb
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: not recognizing holdingdisk define?

2014-02-28 Thread Brian Cuttler
On Thu, Feb 27, 2014 at 10:20:29PM +, Debra S Baddorf wrote:
> I believe the whole dump has to be done before it starts to write to tape.   
> This prevents incomplete dumps from wasting space on the tape.
> 
> I try to have numerous smaller DLEs, so that it takes several DLEs to fill a 
> tape.  Thus, when any one of them is finished, it can start going
> to tape.   If you have only a single DLE which occupies the whole tape,  then 
> it does seem slower.  In that case, perhaps you don't even
> bother with a holding disk?
> 
> Deb Baddorf

Not sure about the newer technologies, but older tech would cause
the drive to "shoe shine" and cause unnecessary wear if it wasn't
kept streaming, so use of holding disk is still probably recommended,
even for a single DLE.




> On Feb 27, 2014, at 3:44 PM, Michael Stauffer 
>  wrote:
> 
> > Yes, it's 4.5TB.
> > 
> > I's not clear to me from the docs whether a level 0 dump gets written fully 
> > to holding disk before it gets streamed to tape, or if streaming starts 
> > once one or more chunks have been written to the holding disk - anyone 
> > know? I'd prefer the latter for performance reasons. If the former, then I 
> > figure I'd need two tapes-worth of holding disk space since I have two tape 
> > drives and have setup tape-parallel-writes as 2.
> > 
> > -M
> > 
> > 
> > On Thu, Feb 27, 2014 at 3:43 PM, Jon LaBadie  wrote:
> > On Thu, Feb 27, 2014 at 03:09:20PM -0500, Michael Stauffer wrote:
> > > Amanda 3.3.4
> > >
> > > Hi,
> > >
> > > Seems like I'm having trouble getting amanda to use my holding disk.
> > >
> > > Here's my setup in amanda.conf:
> > >
> > > define holdingdisk holdingdisk1 {
> > >   directory "/mnt/amanda-holdingdisk1/"
> > >   use 4500Gb
> > >   chunksize 100Gb
> > > }
> > 
> > Others pointed out the error, but is the size really 4.5 TeraBytes?
> > 
> > --
> > Jon H. LaBadie j...@jgcomp.com
> >  11226 South Shore Rd.  (703) 787-0688 (H)
> >  Reston, VA  20190  (609) 477-8330 (C)
> > 
> 
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: not recognizing holdingdisk define?

2014-02-27 Thread Brian Cuttler

I don't believe amanda attempts to write to the holding disk.

Who owns it?

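A quick way to check, with a hypothetical amanda user and group:

    ls -ld /mnt/amanda-holdingdisk1
    chown amanda:disk /mnt/amanda-holdingdisk1
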
On Thu, Feb 27, 2014 at 03:09:20PM -0500, Michael Stauffer wrote:
> Amanda 3.3.4
> 
> Hi,
> 
> Seems like I'm having trouble getting amanda to use my holding disk.
> 
> Here's my setup in amanda.conf:
> 
> define holdingdisk holdingdisk1 {
>   directory "/mnt/amanda-holdingdisk1/"
>   use 4500Gb
>   chunksize 100Gb
> }
> 
> define dumptype gui-base {
>global
>program "GNUTAR"
>comment "gui base dumptype dumped with tar"
>compress none
>index yes
>maxdumps 2
>max-warnings 100
>allow-split true #Stauffer. Default is true.
>holdingdisk yes  #Stauffer. Default is auto.
> }
> 
> 
> When I run amcheck, it's not giving me any msg regarding holding disk,
> positive or negative. A few things I've seen online have shown amcheck
> reporting on holding disk status.
> 
> My tapetype has this:
> 
> #settings for splitting/spanning
> part_size 190G # about 1/10 of tape size - should be used when using
> holding disk
> # these should be used when no holding disk is used - but cache size
> determines
> #   part size (AFAIK), and lots of small parts in a dump is said to be
> inefficient
> part_cache_type memory
> part_cache_max_size 12G
> 
> 
> I ran a level 0 dump and saw this:
> 
>   USAGE BY TAPE:
>   Label         Time    Size      %  DLEs  Parts
>   000406-jet1  21:21   2152G  152.1     2    181
>   000407-jet1   7:17    660G   46.6     1     56
> 
> which looks to me like parts of 12G, close to my 14G cache size.
> 
> Do I need to do something else to tell amanda to use my holding disk?
> Thanks.
> 
> -M
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Tape exclusion

2014-02-04 Thread Brian Cuttler

Hugh,

Glad to have been of help.

Others may weigh in with alternate strategies but it sounds
as if you have a satisfactory plan for now.

Good luck,

Brian

On Tue, Feb 04, 2014 at 09:47:18AM -0800, Hugh E Cruickshank wrote:
> From: Brian Cuttler Sent: February 4, 2014 07:42
> > 
> > Personally I think periodic archives are a good idea.
> > Pulling the January monthly sounds like a reasonable approach.
> 
> Thanks. It is nice to have confirmation that I am on reasonable track.
> 
> > > 2. If we are to use this approach is there a way to designate a tape
> > >as not to be reused but leave it in the set or will we have to
> > >remove it from the set?
> > 
> > You can mark the tape "no-reuse" in the tapelist, # amadmin has an
> > option to do this. Depending on your naming schema this may or may
> > not cause confusion later on.
> 
> That is exactly what I am looking for. It is so straight forward I
> wonder how I could have possibly missed it in my previous searches. I
> guess I am just getting blind in my old age.
> 
> > You also have the option of adding tapes to the pool with different
> > sufixes, or if you are running a different config, even different
> > prefixes.
> > 
> > Annual2014, Annual2015
> > MonthlyFeb, MonthlyMarch, MonthlyApril
> > Daily01...Daily20 (or however large your pool is).
> 
> I had decided to avoid trying to force the tape names to match specific
> days, months or years as it appeared that I would end up spending an
> inordinate amount of time trying to ensure that Amanda used the right
> tape on the right day. I created DailySet1 (5 tapes), WeeklySet1 (5
> tapes) and MonthlySet1 (12 tapes) and named the tapes DS1-01 through
> DS1-05, WS1-01 through WS1-05 and MS1-01 through MS1-12, respectively.
> I then let Amanda manage which tape is used next. We keep a log both
> manually and electronically of which tapes were used on which days in
> case we ever need to restore from tape.
> 
> For the yearly tape I will now mark which ever tape Amanda used for
> the January tape (MS1-04 this year) as NO-REUSE and then add another
> tape (MS1-13 in this case) to the set and then defer to Amanda to
> choose when it gets used in the cycle.
> 
> > How are you creating your monthly? Separate config, or are you
> > using amadmin to force fulls for all on one weekend? Or are you
> > simply pulling a tapecycle of tapes?
> 
> We are fortunate to be able to do a complete backup on one tape, as well
> as having the resources to perform full backups on a regular basis,
> therefore we only run full backups to all tapes. Dailies are run Sunday
> through Thursday night, weeklies are run Friday night and monthlies are
> run Sunday nights. This greatly simplifies our scheduling and restoring
> should that ever be necessary.
> 
> > How are you keeping your currently monthly's out of the re-use cycle?
> 
> We just started doing the monthly tapes so this had not come up yet.
> 
> Thanks for your very informative and helpful response.
> 
> Regards, Hugh
> 
> -- 
> Hugh E Cruickshank, Forward Software, www.forward-software.com
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Tape exclusion

2014-02-04 Thread Brian Cuttler
On Tue, Feb 04, 2014 at 07:24:38AM -0800, Hugh E Cruickshank wrote:
> Hi All:
> 
> We have just setup a monthly backup set to augment our daily and weekly
> sets. I would like to start a yearly permanent archive tape policy. My
> initial thoughts were to take the January monthly tape out of the
> rotation and replace it with a new tape.

Personally I think periodic archives are a good idea.

Pulling the January monthly sounds like a reasonable approach.

> 1. Is this a reasonable approach or does someone have a better
>suggestion?
> 
> 2. If we are to use this approach is there a way to designate a tape
>as not to be reused but leave it in the set or will we have to
>remove it from the set?

You can mark the tape "no-reuse" in the tapelist, # amadmin has an
option to do this. Depending on your naming schema this may or may
not cause confusion later on.

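For example (config name and label hypothetical):

    amadmin MonthlySet1 no-reuse MS1-04

and "amadmin <config> reuse <label>" puts the tape back in rotation later
if needed.
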
You also have the option of adding tapes to the pool with different
sufixes, or if you are running a different config, even different
prefixes.

Annual2014, Annual2015
MonthlyFeb, MonthlyMarch, MonthlyApril
Daily01...Daily20 (or however large your pool is).

Or you could mark specific tapes as no-reuse, and just keep
lengthening the pool, keep all tape names consistant and
accept the fact that when you cycle through there will be a lot
of skips.

How are you creating your monthly? Separate config, or are you
using amadmin to force fulls for all on one weekend? Or are you
simply pulling a tapecycle of tapes?

How are you keeping your currently monthly's out of the re-use cycle?

> TIA
> 
> Regards, Hugh
> 
> -- 
> Hugh E Cruickshank, Forward Software, www.forward-software.com
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amdump output, tape usage

2014-01-08 Thread Brian Cuttler
On Wed, Jan 08, 2014 at 11:18:46AM -0500, Jean-Francois Malouin wrote:
> * Brian Cuttler  [20140108 11:09]:
> > 
> > Jean-Louis,
> > 
> > googled it... I had my numbers wrong, it can supposedly hold
> > 800 Gig, and with HW compression up to 2x that. I should not
> > have to adjust the tape length and no reason (that I can see)
> > for dumps to take more than one tape.
> > 
> > It's odd that I'm not seeing an end of media error message...
> > I think I need to run a cleaning tape and keep an eye on this.
> 
> I was about to chime in but you beat me to it.
> Indeed LTO4 is 800GB.
> 
> Now, did you verify that you do in fact have HW compression
> disabled? Having both hw and sw compression can actually lead to
> reduced effective tape capacity and wasted cpu cycles.
> I don't know about Solaris but on my Linux servers the command
> tapeinfo from the mtx package will tell me. You must pass it the
> generic scsi device of the tape drive. In my case, it's /dev/sg7:
> 
> ~# tapeinfo -f /dev/sg7
> Product Type: Tape Drive
> Vendor ID: 'HP  '
> Product ID: 'Ultrium 4-SCSI  '
> Revision: 'B12H'
> Attached Changer API: No
> SerialNumber: 'HUE084156W'
> ...
> DataCompEnabled: yes
> DataCompCapable: yes
> DataDeCompEnabled: yes
> ...

Ok, learn something new every day.

The LTO is advertised as having block-level decision making on
compression, so that it doesn't expand the data; I wonder if that
is not quite true.

Also - Isn't there another level of tape header that needs to be
cleared? Isn't re-writing the tape with compression off a little
bit of a trick? If you don't clear that other level of header, then
the compression is determined by the header info and not by the
device type selected when you write the tape?

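For what it's worth, on Solaris the hardware-compression choice usually
rides on the st device node name; an illustrative example (check your own
device tree):

    /dev/rmt/3n     no-rewind, default density
    /dev/rmt/3cn    no-rewind, compression-enabled density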

[finsen]: /devices > /usr/local/sbin/tapeinfo -f /dev/rmt/3n
Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 4-SCSI  '
Revision: 'H5AW'
Attached Changer API: No
SerialNumber: 'HU1102EE9V'
MinBlock: 1
MaxBlock: 16777215
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x46
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: 800226
Partition 0 Size in Kbytes: 800226
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 0



> 
> > 
> > thank you,
> > 
> > Brian
> > 
> > On Wed, Jan 08, 2014 at 10:57:22AM -0500, Jean-Louis Martineau wrote:
> > > [quoted text trimmed]

Re: amdump output, tape usage

2014-01-08 Thread Brian Cuttler

Jean-Louis,

googled it... I had my numbers wrong, it can supposedly hold
800 Gig, and with HW compression up to 2x that. I should not
have to adjust the tape length and no reason (that I can see)
for dumps to take more than one tape.

It's odd that I'm not seeing an end of media error message...
I think I need to run a cleaning tape and keep an eye on this.

thank you,

    Brian

On Wed, Jan 08, 2014 at 10:57:22AM -0500, Jean-Louis Martineau wrote:
> On 01/08/2014 10:47 AM, Brian Cuttler wrote:
> > [quoted text trimmed]
> >  Q - Would it be correct to reset the tape length to 400 Gig ?
> 
> no, you can write at least 603197M on a tape.
> 
> Jean-Louis
> [rest of the quoted exchange trimmed]
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amdump output, tape usage

2014-01-08 Thread Brian Cuttler

Jean-Louis,

Good question, especially as I badly misspoke.

The tape is not DLT IV, it is LTO IV, I used the standard
tape type definition, provided, as I would not have used
such a specific number.

define tapetype LTO4 {
   comment "Dell LTO4 800Gb - Compression Off"
   length 802816 mbytes
   filemark 0 kbytes
   speed 52616 kps
}

I am not using HW compression.
I am using "compress client fast" for the dumptypes.
All file systems are local to the host; we have a large number of
DLEs (more than 255) that are ZFS file systems.

That being said - I think you are correct, I am using the
wrong tape length.

If they are LTO IV tapes, they hold 400 Gig, up to 800 G with
HW compression, which we are not using...


 Q - Would it be correct to reset the tape length to 400 Gig ?

thank you/rookie mistake,

    Brian


On Wed, Jan 08, 2014 at 10:06:54AM -0500, Jean-Louis Martineau wrote:
> Brian,
> 
> Maybe you defined the tape larger than it is?
> Are you sure the tape can hold 800G of data?
> Are you using hardware compression?
> 
> Jean-Louis
> 
> On 01/08/2014 09:13 AM, Brian Cuttler wrote:
> >I'm not sure I understand this tape usage... 18% plus 75% < 100%, isn't it?
> >
> >Amanda 3.3.0
> >Solaris 10x86
> >tapes are DLT IV
> >
> >This, using a second tape, is new behaivor.
> >
> >I did try a newer amanda but the dump clients wouldn't die when
> >they needed two, amanda never completed and required a lot of
> >daily manual cleanup. I haven't tried the latest...
> >
> >
> >FAILURE DUMP SUMMARY:
> >
> >   finsen / lev 0  partial taper: No space left on device, splitting not enabled
> >   finsen / lev 0  was successfully re-flushed
> >
> >USAGE BY TAPE:
> >   Label      Time     Size      %  DLEs  Parts
> >   Finsen32   2:57  603197M   75.1    17     17
> >   Finsen33   0:48  145593M   18.1   259    259
> >
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



amdump output, tape usage

2014-01-08 Thread Brian Cuttler
 0:02 8149.8   0:04 4094.5
finsen   -samba/wcqaqms 12820   72.00:27  775.7   0:04 5150.5
finsen   -rt2/samba/wgs 1 1 0   29.20:02   90.6   0:03  73.0
finsen   -t2/samba/wp51 0 100960 88840   88.0  135:04 11225.2  29:51 50794.2
finsen   -samba/wsecure 1  1052   982   93.30:52 19283.0   0:15 67061.5
finsen   -t2/samba/zdeb 0   525   483   92.10:26 19247.8   0:16 30939.2
finsen   /finsenp   0 0 05.00:180.1   0:04   0.0
finsen   /lorep 0 0 0   10.00:011.4   0:02   0.0
finsen   /lyra  0 0 0   10.00:011.0   0:02   0.0
finsen   /lyra/space129 3   11.50:25  138.8   0:02 1734.0
finsen   -space/softdev 1  1743  1220   70.04:56 4225.4   0:18 69389.6
finsen   hp10p/connlab  0  2554  1771   69.32:17 13225.7   0:24 75569.2
finsen   hp10p/flyshare 0   439   384   87.50:24 16440.5   0:11 35759.5
finsen   -10p/grifadmin 0  1154  1085   94.00:45 24917.1   0:18 61739.7
finsen   hp10p/hiu  13922   57.72:59  128.6   0:04 5759.5
finsen   hp10p/hiu2 11512   79.50:30  400.4   0:04 3051.8
finsen   hp10p/ivcp 0 26016 19360   74.4   17:55 18450.1   5:02 65645.7
finsen   -0p/virologypt 015 3   17.80:03  854.1   0:04 695.8

(brought to you by Amanda version 3.3.0)

- End forwarded message -
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: hostname doesn't resolve to itself

2014-01-07 Thread Brian Cuttler

I'd thought you could only have one PTR for any given IP,
and that while long ago you could have two A records, that
was no longer acceptable and it was recommended to have a
single A record and a CNAME (as you said).

This differs from having multiple IP addresses defined by
more than one A record for a single hostname, that provides
(or used to) a round-robin lookup.

CNAME and PTR are on opposite sides of the fence.

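For reference, a rough zone-file sketch of that arrangement (names and
addresses hypothetical): one A record, one CNAME alias, one PTR.

    pu.example.net.            IN A      192.0.2.130
    ssl.example.net.           IN CNAME  pu.example.net.
    130.2.0.192.in-addr.arpa.  IN PTR    pu.example.net.
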
On Mon, Jan 06, 2014 at 03:22:29PM -0700, Charles Curley wrote:
> On Mon, 6 Jan 2014 21:52:53 +0100
> Heiko Schlittermann  wrote:
> 
> > ssl.schlittermann.de.3600IN  A   212.80.235.130
> > pu.schlittermann.de. 3600IN  A   212.80.235.130
> > 
> > 
> > I think, nothing is wrong with having two PTR records. Even I know
> > that ancient software used to have problems with this setup.
> > 
> > Is there any interest to fix it? Or am I wrong completly?
> 
> Try setting one up as the PTR, the other as an alias.
> 
> -- 
> 
> The right of the people to be secure in their persons, houses, papers,
> and effects, against unreasonable searches and seizures, shall not be
> violated, and no Warrants shall issue, but upon probable cause,
> supported by Oath or affirmation, and particularly describing the
> place to be searched, and the persons or things to be seized.
> -- U.S. Const. Amendment IV
> 
> Key fingerprint = CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB


---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



feature request

2013-12-10 Thread Brian Cuttler

My version of amanda is a little older, but I don't believe I've
seen anything on rejecting a DLE because of size.

I have a user that just grew their directory/files and is now sitting
on close to 2 TBytes of data.

Amanda discovered this, determined it didn't have sufficient work area
and began a dump to tape.

As I don't have splitting enabled, I'd like to have amanda automatically
print a warning and then refuse to dump DLEs that are in excess of tape
capacity. Or for that matter, even if splitting is enabled and you are
dumping direct to tape, amanda could perhaps do the math.

If splitting is DISabled, ok to backup if DLE_SIZE <= tape_capacity

If splitting ENabled, ok to backup if DLE_SIZE <= tape_capacity * run_tapes

Or more sophisticated, capacity less the space already committed
to the other DLEs.

Else, with no splitting, amanda tries to dump, hits EOT, and tries
again on the next tape, fails, repeats until we hit the run tapes limit.

Even with splitting, amanda could pre-emptively fail a DLE greater than
total allowed tape capacity.

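In rough shell-flavored pseudocode (all names invented, sizes in kbytes):

    if [ "$splitting" = "no" ] && [ "$dle_size" -gt "$tape_length" ]; then
        echo "WARNING: DLE larger than one tape -- refusing to dump"
    elif [ "$dle_size" -gt $((tape_length * runtapes)) ]; then
        echo "WARNING: DLE larger than runtapes worth of tape -- refusing to dump"
    fi
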
This could of course all be avoided with a better educated user base,
but even the more savvy users spend little or no time thinking about
the magic that the sys admins work for them.

thank you,

    Brian

---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: virtual tape size (take #3)

2013-10-25 Thread Brian Cuttler

I'll have to look at the amanda-devices page for the exceptions.

But I can tell you that while amanda is skipping some level 1
dumps on me, because of calculated size, and I am using vtapes.
Amanda is telling me that its filled the tape and then keeping
things in the holding area.

I suspect that in the case of vtapes, amanda will finish writing
the current DLE even if it crosses the size boundry, but that it
will not write additional DLEs to tape above the set tape size limit.

Hmm, I'm not seeing what I'm looking for there. My vtapes do fill, though.

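The property to look for may be MAX_VOLUME_USAGE, which caps what gets
written to a slot; something like this in amanda.conf (value hypothetical,
worth verifying against your version's amanda-devices page):

    device_property "MAX_VOLUME_USAGE" "25 gb"
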
On Fri, Oct 25, 2013 at 08:50:28AM -0500, John G. Heim wrote:
> 
> Last week I asked a few questions about virtual tape size. Well, I 
> wouldn't say I resolved them but I think I have one clue. One of the 
> things I asked about is the meaning of the "length" parameter for a tape 
> definition. I had set it to 25G for my virtual tapes but amanda was 
> writing 38G to each vtape.   This was going to be a problem because I 
> have a quota on my virtual tape file system of 2Tb and I had therefore 
> calculated that I could create 80 25Gb tapes. Here is what the 
> amanda.conf man page has to say about the tape length parameter:
> 
> > length int
> > Default: 2000 kbytes. How much data will fit on a tape, expressed in
> > kbytes. Note that this value is only used by Amanda to schedule which
> > backupswill be run. Once the backups start, Amanda will continue to
> > write to a tape until it gets an error, regardless of what value is
> > entered for length (but see amanda-devices(7) for exceptions).
> 
> The device for virtual tapes is vfs-disk or something like that. But I 
> couldn't find anything in the docs for   how it determines how much to 
> write to a vtape. If it keeps writing until it got an error, in my case, 
> it would write over 2Tb until it hit the hard quota on the filesystem. 
> So that couldn't be.
> 
> Then it occurred to me that 25 * 150% = 38G. Coincidence? I recreated
> the vtapes with a size of 16Gb.  Now amanda is writing 25Gb to each 
> tape. My first backup wrote to 11 tapes. Slots 1-10 have 25Gb on them 
> with 19G on the last.  The amount of stuff on tapes 1-10 is not exactly 
> the same on each tape but it's a little over 24G each.
> 
> IMO, this borders on a bug. The wiki strongly implies that you can take 
> your disk file system size and divide by the number of tapes to get the 
> tape size.  While the amanda developers aren't responsibe for the wiki 
> being misleading, it is natural to assume the length parameter for a 
> virtual tape is the actual size of the tape. I'm not the only one who 
> made that assumption, so did whoever wrote the wiki entry on virtual tapes.
> 
> Anyway, I'd still like to know how amanda determines when to stop 
> writing to a vtape. I can put comments in my amanda.conf to say that the 
> tape length  is set to 16G so amanda writes 24G to 25G to each tape. But 
> that's hardly an ideal solution.
> 
> 
> 
> -- 
> ---
> John G. Heim, 608-263-4189, jh...@math.wisc.edu
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



bumping my backups

2013-10-22 Thread Brian Cuttler

Amanda users,

One of my amanda servers uses Vtapes, and has multiple zpools
(ZFS filesystem) assigned to it. Vtapes have been configured
at 1.8 Tbytes, which is a value that seems to be insufficient.
At least based on dump estimates.

I seem to be sitting at level 1 dumps for several nights, and
think that perhaps my bump parameters could be better set.

I think these values are carry-forwards from older amanda versions
as they are no where near the current defaults (per amanda.conf
man page on the web) and are not parameters that we usually mess with.

bumpsize 20 Mb  # minimum savings (threshold) to bump level 1 -> 2
bumppercent 20  # minimum savings (threshold) to bump level 1 -> 2
bumpdays 1  # minimum days at each level
bumpmult 4  # threshold = bumpsize * bumpmult^(level-1)

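With those values the thresholds work out as below. Note too that, per the
amanda.conf man page (worth double-checking), a nonzero bumppercent is used
instead of bumpsize -- and 20% of a 500 Gig level 0 is ~100 Gig of required
savings, which may be exactly why these DLEs never bump.

    threshold(level) = bumpsize * bumpmult^(level-1)
    level 1 -> 2:  20 Mb * 4^0 =  20 Mb
    level 2 -> 3:  20 Mb * 4^1 =  80 Mb
    level 3 -> 4:  20 Mb * 4^2 = 320 Mb
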
The file systems I'm looking at are in excess of 100 Gig and may be
in excess of 500 Gig, so not bumping is causing them to be
skipped because the total dumps are too large and level 0 dumps are
taking precedence over these level 1 dumps.

I'm also going to check with the data owners, as I'm rather surprised
that these dumps aren't falling under the savings cap.

Then again, we had some that were failing for a long time because
the number of files per directory was excessive (zfs has virtually
no limit, but putting several hundred thousand files in a directory
is still not recommended) and we had no level 0 dumps for a while.

Estimates are "server", and that may play into estimates that are
in fact excessive based on lack of current data.

Do you have any recommendations on how to proceed (dump parameters
or other things to look at) from here?

thank you,

    Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Problem with "authoritative answer"

2013-09-13 Thread Brian Cuttler

Chris,

Thanks, that makes sense, and I'm not all that surprised to hear it.

I haven't heard anything from the amanda list on whether or not
the zmanda client checks to see if the reply is authoritative or
not. And experimentally we failed, then it worked, then failed
and worked again.

So whatever is happening does not seem to be systemic, as
computers are supposed to produce the same result each time.

And, frustratingly, it's become moot. The machine was repurposed
and the DLE (disklist entry) I was backing up no longer exists.
Whether it comes back or not, or has a new mount name, remains
to be seen...

Thank you,

Brian

On Fri, Sep 13, 2013 at 12:02:13PM -0700, Chris Buxton wrote:
> On Sep 11, 2013, at 8:11 AM, Brian Cuttler  wrote:
> > We have remapped some of our DNS clients to point to another
> > DNS resolver, one that we do not control, but that has "forwarder"
> > records in place to point our domain's address resolution requests
> > back to an authoritative server in our domain.
> > 
> > Dig is showing authoritative answer when I query my domain's server
> > for an address that I own.
> > 
> > Dig is NOT showing authoritative when I query the other domain's server.
> > 
> > I'd have thought that the forwarded request, coming from my server,
> > would have resulted in an authoritative reply.
> 
> When you query a non-authoritative server, such as one configured to forward 
> the query to another server, the result is supposed to be marked 
> non-authoritative. That's the point of the 'aa' flag. Not all name servers 
> behave this way, but they are supposed to. BIND 9 behaves correctly.
> 
> Regards,
> Chris Buxton
> ___
> Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
> from this list
> 
> bind-users mailing list
> bind-us...@lists.isc.org
> https://lists.isc.org/mailman/listinfo/bind-users
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Problem with "authoritative answer"

2013-09-11 Thread Brian Cuttler

Paddy,

The guy with access to the logs is gone for the day, but will 
post a reply to you in the morning.

The reported error was that the server needed to be registered with
the client. We did ultimately add the IP to the client; it worked Monday
night but failed again Tuesday, for what reason I cannot say.

thank you,

Brian

On Wed, Sep 11, 2013 at 01:36:41PM -0700, Paddy Sreenivasan wrote:
> Brian,
> 
> You can check the Amanda windows client logs in the Debug folder to see why
> it is failing the request.
> 
> Registering Amanda server by IP address on the Windows client will likely
> help.
> 
> thanks
> Paddy
> 
> 
> On Wed, Sep 11, 2013 at 8:11 AM, Brian Cuttler  wrote:
> 
> > [original message quoted in full -- trimmed]
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Problem with "authoritative answer"

2013-09-11 Thread Brian Cuttler

Cross posting to both Amanda users and bind users lists.

We have remapped some of our DNS clients to point to another
DNS resolver, one that we do not control, but that has "forwarder"
records in place to point our domain's address resolution requests
back to an authoritative server in our domain.

Dig is showing authoritative answer when I query my domain's server
for an address that I own.

Dig is NOT showing authoritative when I query the other domain's server.

I'd have thought that the forwarded request, coming from my server,
would have resulted in an authoritative reply.

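For reference, the flag in question shows up in dig output like this (names
hypothetical, output trimmed):

    $ dig @ns.example.net host.example.net
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

If "aa" is missing from the flags line, the answer is non-authoritative.
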
What does this have to do with Amanda?

We have a zmanda client in our citrix cloud that has been changed
from our domain controller to the DC of the other dept, which has
its own DNS servers.

While we can get a DNS result on the client, zmanda is failing to
authenticate the server. I suspect but do not know for sure that
this is because the DNS result (as determined by # dig) is not
authoritative.

Am I right in my guess as to the zmanda client issue?
 - if so
Is there a zmanda work-around or fix? Other than adding the IP
information to tables in the client or registering the amanda
server by IP?

Is there a DNS fix? Do I need to update my DNS zone file to make the
other domain's DNS, which only has forwarder records for us, authoritative
by adding an NS record for it?

Am I just barking up the wrong tree?

thanks in advance,

    Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Splitting a DLE with regex

2013-07-31 Thread Brian Cuttler
On Wed, Jul 31, 2013 at 01:43:50PM -0400, Mike Neimoyer wrote:
> Thanks to Gerrit, Brian and Jean-Louis.  I'll be responding to them in 
> this message
> 
> 
> Hello Gerrit, thanks for chiming in!!
> > include "./[a-c].*"
> > Maybe you should first try an existing pathname (so no wildcard), just
> > to be sure the "./" is correct here.
> 
> Good point, thanks!
> 
> localhost /home/a1/home {
> vhost2-user-tar
> include "./aaronson"
> } 1 local
> 
> And an amcheck seemed to work:
> Client check: 1 host checked in 1.220 seconds.  0 problems found.
> 
> So, it appears that the ./ is correct, which fits with what Jean-Louis
> said:
> 
> > include must be a glob expression, not a regex, so "./[a-c]*" is the
> > correct syntax,
> >
> > Do /home and /home/aaronson are on the same filesystem?
> >df /home
> >df /home/aaronson
> 
> Yes, they are on the same filesystem:
> bash-4.1$ df /home /home/aaronson
> Filesystem   1K-blocks  Used Available Use% Mounted on
> /dev/vda1    935102536 420335640 514756896  45% /
> /dev/vda1    935102536 420335640 514756896  45% /
> 
> 
> > Are you using the application 'amgtar' or the program 'GNUTAR'?
> 
> I *think* that amanda was configured --with-program GNUTAR, but it was 
> compiled before I took over.  Is there a way to check, via a debug file 
> or log?
> 
> 
> Brian added:
> 
> > Remember, to check where you are anchored. I've lost your
> > earlier emails, but you do have to be careful to know what
> > your starting point is.
> 
> Starting point is /home and then the include directive is for "./[a-c]*" 
>  And all subdirectories are lower-case, except the numbered which will 
> get "./[0-9]*" as their include directive.


Oh - interesting to note.

I'm using globbing on a system with ZFS file systems; each
user directory has its own zfs file system carved out of a
large zpool. I have no problems with GNUTAR backups globbing
together the a* directories and then the b*, etc. Each file
system below my anchor point is a unique file system, but I
can glob them together in a single DLE. [My manager has suggested
a separate DLE per userid, but that would have been unworkable
for us.] This scheme does, however, prevent me from backing up
the globs using snapshots.


> Thanks so much,
> ~~Mike
> 
> 
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Splitting a DLE with regex

2013-07-31 Thread Brian Cuttler

I do that sort of thing a lot.

finsen  /export/home-AZ /export/home   {
user-tar2
include "./[A-Z]*"
}

trel   /trelRZ /trel   {
comp-server-user-tar
include "./[R-Z]*"
}

You have the case correct? The dot between the first letter
and the wild card is intentional (between the right closing
bracket and the asterisk)?

Remember to check where you are anchored. I've lost your
earlier emails, but you do have to be careful to know what
your starting point is.

In the above example its /export/home, or the /trel directory,
respectively. And you are using TAR (gtar, star, as long as its
an amanda compatible version) not DUMP.



On Wed, Jul 31, 2013 at 07:19:31PM +0200, Gerrit A. Smit TI wrote:
> Op 31-07-13 18:56, Mike Neimoyer schreef:
> >
> > include "./[a-c].*"
> Maybe you should first try an existing pathname (so no wildcard), just
> to be sure the "./" is correct here.
> 
> 
> 
> 
> -- 
> Met vriendelijke groeten,
> AT COMPUTING
> 
> Gerrit A. Smit
> Beheer Technische Infrastructuur
> 
> AT Computing   Telefoon: +31 24 352 72 22
> Dé one-stop-Linux-shop Telefoon cursussecretariaat: +31 24 352 72 72
>Fax: +31 24 352 72 92
> Kerkenbos 12-38t...@atcomputing.nl
> 6546 BE  Nijmegen  www.atcomputing.nl
> 
> Nieuw bij AT Computing: onze Linux Reference Card nu ook als gratis app!
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: gzip with compress none in dumptype

2013-07-11 Thread Brian Cuttler

Karsten,

Do you have indexing enabled? The tee that runs # tar -tf
also seems to pipe the output through gzip to produce a
listing file that is .gz

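Conceptually the data path looks something like this (the index path is
illustrative, not the literal code):

    client dump stream --+--> holding disk / taper
                         +--> tar -tf - | gzip > <indexdir>/<host>/<disk>/<date>_<level>.gz
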
On Thu, Jul 11, 2013 at 01:02:42AM +0200, Karsten Fuhrmann wrote:
> Hello,
> in my process list on the amanda server i see one gzip process per dumper,
> but i have set compress none in all my dumptypes, so where does this gzip
> process come from?
> 
> Greetings,
> Karsten Fuhrmann
> System Administrator
> Rothkirch Cartoon-Film GmbH
> Hasenheide. 54
> D-10967 Berlin
> phone  +49 30 698084-109
> fax  +49 30 698084-29
> mobile +49 176 49118462
> skype: parceval3000
> AIM: in...@mac.com
> Jabber: parce...@jabber.org
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amrecover asking for tape drive on old server

2013-06-17 Thread Brian Cuttler



On Mon, Jun 17, 2013 at 03:00:44PM -0400, Chris Hoogendyk wrote:
> hmm, didn't specify either of those in configure when I built Amanda 3.3.3. 
> However, now that you mention amanda-client.conf, I went and looked for 
> that. Turns out they were both specified there (I originally rsynced 
> /usr/local/etc/amanda/ to the new server and then went about modifying 
> stuff to set it up). However, changing the entries there and then running 
> amrecover didn't seem to change anything.
> 
> So, I tried `setdevice -h localhost /dev/nst0`, but it tells me it cannot 
> talk to the tape server [request failed: timeout waiting for ACK].
> 
> I also tried
> 
> chrisho@supereclogite:/amanda1$ sudo amrecover -s localhost -t localhost -d 
> /dev/nst0 -h eclogite.geo.umass.edu -C daily
> AMRECOVER Version 3.3.3. Contacting server on localhost ...
> [request failed: timeout waiting for ACK]
> 
> Totally puzzled, because the Amanda backups seems to have been running 
> flawlessly. I can access the tape drive with mt and the changer with mtx, 
> and Amanda has been writing stuff to tapes.
> 
> As I mentioned in my reply to Brian, I'm going to have to change my 
> workflow to recover stuff for the Solaris client. But, I'll still need to 
> resolve some of these issues, because the old Amanda server might have 
> stuff in its configuration and files that make it think it is still the 
> server.

Chris,

If you are restoring on the server, you can just use amrestore,
rather than amrecover, and pull the dump set back; amrestore will
automatically decompress if necessary, but will not unpack it
if you leave off the -p option and do not pipe it to ufsrestore.
Then you can scp the dump file to the E250.
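Something along these lines (a sketch; the device, hostnames and the
exact dump-image filename are assumptions):

amrestore /dev/nst0 eclogite /var/mail   # writes eclogite._var_mail.<date>.0, decompressed
scp eclogite._var_mail.*.0 eclogite:/var/tmp/
ssh eclogite 'ufsrestore if /var/tmp/eclogite._var_mail.*.0'   # unpack on the E250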

I believe that amrecover running on the client (once those issues
are corrected) will decompress and pipe the stream to the client
and run ufsrestore for you on the client. There shouldn't be an
issue with trying to run ufsrestore on the Linux Amanda server;
Amanda was programmed smarter than that.


> On 6/17/13 2:04 PM, Jean-Louis Martineau wrote:
> >The default index and tape server are set when amanda is compiled, at the 
> >configure step.
> >You can overwrite the default different ways:
> >
> >amrecover -s ... -t ...(man amrecover)
> >add them to amanda-client.conf   (man amanda-client.conf)
> >you can also change the tape server inside amrecover:
> >  setdevice [[-h tape-server] tapedev]   (help command inside 
> >  amrecover)
> >You can't change the index server inside amrecover.
> >
> >jean-Louis
> >
> >On 06/17/2013 01:47 PM, Chris Hoogendyk wrote:
> >>puzzle. If you recall, I switched from Amanda 2.5.1p3 on a Solaris 9 
> >>server on an E250 to Amanda 3.3.3 on an Ubuntu 12.04 server on SuperMicro 
> >>around May 22. The transition seemed to run smoothly.
> >>
> >>The old E250 is still running mail services, and the new Amanda server is 
> >>still backing that up.
> >>
> >>So, now someone wants their mail files recovered from May 26. Cool. Tapes 
> >>were written by the new server on that date. But, when I run amrecover, I 
> >>ran into the situation below. It seems to be searching for a tape drive 
> >>on the old server rather than on the new server. Weird. I've been running 
> >>Amanda on the new server with the tape library and tape drive on the new 
> >>server for several weeks and it has been working just fine as far as the 
> >>Amanda email reports go. I went through the amanda.conf, changer.conf, 
> >>etc. and find no references to any server. It should just be looking for 
> >>the devices on the Amanda server that I am running this on.
> >>
> >>Any idea what's up with this? And how I can fix it?
> >>
> >>I know I can just read the tape using native facilities, but I really 
> >>don't want to fall back on that if I don't have to. This should all just 
> >>work, as it always has in the past.
> >>
> >>
> >>-- attempted amrecover session ---
> >>
> >>
> >>chrisho@supereclogite:/usr/local/adm$ cd /amanda1
> >>
> >>chrisho@supereclogite:/amanda1$ sudo amrecover daily
> >>
> >>[sudo] password for chrisho:
> >>
> >>AMRECOVER Version 3.3.3. Contacting server on eclogite.geo.umass.edu ...
> >>220 eclogite AMANDA index server (2.5.1p3) ready.
> >>Setting restore date to today (2013-06-17)
> >>200 Working date set to 2013-06-17.
> >>200 Config set to daily.
501 Host supereclogite is not in your disklist.

Re: amrecover asking for tape drive on old server

2013-06-17 Thread Brian Cuttler

Chris,

I think Jean-Louis nailed the problem with the server and
tape unit. I think amrecover will pass the data stream back
to the client and run ufsrestore on the client side; I don't
think you will have any issues after taking JML's advice.

On Mon, Jun 17, 2013 at 02:30:50PM -0400, Chris Hoogendyk wrote:
> from the session that I included,
>   supereclogite is the new 3.3.3 Amanda server, and
>   eclogite is the old 2.5.1p3 Amanda server.
> 
> The old server, eclogite, was added to the disklist on the new server, 
> supereclogite, and has been backed up there since.
> 
> My session is on the new Amanda server asking to recover a file that was 
> backed up to the new Amanda server and put to tape on the new Amanda 
> server, but the file is from the old server. So I'm doing everything on the 
> new Amanda server. The old one is a client to the new one.
> 
> hmm. Interesting. As I go over this in my mind, I'm realizing that the 
> client was using ufsdump and then sending the resulting file over to the 
> server. But, the server is Ubuntu. It cannot do a ufsrestore. Interesting. 
> This is the first time I have had an Ubuntu server as the Amanda server 
> (I've had a bunch of them as clients), and it seems that it breaks the work 
> flow that I am used to. Still, I will need to straighten out what it is 
> doing with regard to tape drive and index server.
> 
> 
> 
> On 6/17/13 2:06 PM, Brian Cuttler wrote:
> >Chris,
> >
> >You are using the amanda client/server and issuing the restore
> >from the E250?  Or you are restoring the tar/dump file on the
> >server and (if dump) porting the (decompressed) file back to the
> >client to unpack?
> >
> >If the first option, maybe you are still referencing the config
> >on the E250, that is, the server config, rather than a client
> >config.
> >
> >
> >
> >On Mon, Jun 17, 2013 at 01:47:08PM -0400, Chris Hoogendyk wrote:
> >>puzzle. If you recall, I switched from Amanda 2.5.1p3 on a Solaris 9 
> >>server
> >>on an E250 to Amanda 3.3.3 on an Ubuntu 12.04 server on SuperMicro around
> >>May 22. The transition seemed to run smoothly.
> >>
> >>The old E250 is still running mail services, and the new Amanda server is
> >>still backing that up.
> >>
> >>So, now someone wants their mail files recovered from May 26. Cool. Tapes
> >>were written by the new server on that date. But, when I run amrecover, I
> >>ran into the situation below. It seems to be searching for a tape drive on
> >>the old server rather than on the new server. Weird. I've been running
> >>Amanda on the new server with the tape library and tape drive on the new
> >>server for several weeks and it has been working just fine as far as the
> >>Amanda email reports go. I went through the amanda.conf, changer.conf, 
> >>etc.
> >>and find no references to any server. It should just be looking for the
> >>devices on the Amanda server that I am running this on.
> >>
> >>Any idea what's up with this? And how I can fix it?
> >>
> >>I know I can just read the tape using native facilities, but I really 
> >>don't
> >>want to fall back on that if I don't have to. This should all just work, 
> >>as
> >>it always has in the past.
> >>
> >>
> >>-- attempted amrecover session ---
> >>
> >>
> >>chrisho@supereclogite:/usr/local/adm$ cd /amanda1
> >>
> >>chrisho@supereclogite:/amanda1$ sudo amrecover daily
> >>
> >>[sudo] password for chrisho:
> >>
> >>AMRECOVER Version 3.3.3. Contacting server on eclogite.geo.umass.edu ...
> >>220 eclogite AMANDA index server (2.5.1p3) ready.
> >>Setting restore date to today (2013-06-17)
> >>200 Working date set to 2013-06-17.
> >>200 Config set to daily.
> >>501 Host supereclogite is not in your disklist.
> >>Trying host supereclogite.geo.umass.edu ...
> >>200 Dump host set to supereclogite.geo.umass.edu.
> >>Use the setdisk command to choose dump disk to recover
> >>
> >>amrecover> sethost eclogite.geo.umass.edu
> >>
> >>200 Dump host set to eclogite.geo.umass.edu.
> >>
> >>amrecover> setdisk /var/mail
> >>
> >>200 Disk set to /var/mail.
> >>
> >>amrecover> setdate --05-26
> >>
> >>200 Working date set to 2013-05-26.
> >>
> >>amrecover> add cooke
> >>
> >>Added file /cooke
> >>
> >>amrecover>

Re: amanda 3.3.3 "too many files"

2013-06-07 Thread Brian Cuttler

Jean-Louis,
Jon,

I've updated my amanda.conf to use auth="local" for the
dumptypes I have in use in my disklist.
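For the archives, one way to spell that change (a sketch; user-tar2 is
one of the dumptypes from my disklist, syntax per amanda.conf(5)):

define dumptype user-tar2-local {
    user-tar2      # inherit the existing dumptype
    auth "local"   # server and client are the same host, so skip network auth
}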

> ulimit
unlimited

Per Solaris instructions...
> echo 'rlim_fd_max/d' | mdb -k
rlim_fd_max:
rlim_fd_max:0

> amcheck finsen
Amanda Tape Server Host Check
-
Holding disk /lstripe: 546222 MB disk space available, using 546122 MB
slot 9: volume 'Finsen31'
Will write to volume 'Finsen31' in slot 9.
NOTE: skipping tape-writable test
NOTE: info dir 
/usr/local/etc/amanda/finsen/DailySet1/curinfo/finsen/_export2_samba_maldi does 
not exist
NOTE: it will be created on the next run.
NOTE: index dir 
/usr/local/etc/amanda/finsen/DailySet1/index/finsen/_export2_samba_maldi does 
not exist
NOTE: it will be created on the next run.
Server check took 4.691 seconds

Amanda Backup Client Hosts Check

ERROR: finsen: service selfcheck: selfcheck: Error opening pipe to child: Too 
many open files
ERROR: finsen: service /usr/local/libexec/amanda/selfcheck failed: pid 5457 
exited with code 1
Client check: 1 host checked in 130.727 seconds.  2 problems found.

(brought to you by Amanda 3.3.3)

The new DLE did in fact cause the retained snapshot to change by
one DLE, in alpha order. It is (re)verified that this is not random
and is tied to list position.

So much for the solaris run time work-around.

export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1

then run amcheck.
> amcheck finsen
ld.so.1: amcheck: warning: /usr/lib/extendedFILE.so.1: open failed: illegal 
insecure pathname
Amanda Tape Server Host Check
-
Holding disk /lstripe: 546222 MB disk space available, using 546122 MB
slot 9: volume 'Finsen31'

FILE.so.1: open failed: illegal insecure pathname
ERROR: finsen: Application 'amgtar': can't run support command
ERROR: finsen: Application 'amgtar': ld.so.1: amgtar: warning: 
/usr/lib/extendedFILE.so.1: open failed: illegal insecure pathname
ERROR: finsen: Application 'amgtar': can't run support command
ERROR: finsen: Application 'amgtar': ld.so.1: amgtar: warning: 
/usr/lib/extendedFILE.so.1: open failed: illegal insecure pathname
ERROR: finsen: Application 'amgtar': can't run support command

related to suid programs?
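(If so, for set-id programs the Solaris runtime linker only honors
preload objects found in its secure directories; a possible
work-around to try later - my assumption, untested here:)

# /usr/lib/secure is a default trusted directory on Solaris
cp /usr/lib/extendedFILE.so.1 /usr/lib/secure/
# a bare filename (no slash) is searched for in the trusted directories
LD_PRELOAD_32=extendedFILE.so.1 amcheck finsen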

Don't want to make further changes before the weekend, think I'll
implement auth="local" for amdump on Monday and see how it performs.


    thank you,

Brian




On Wed, Jun 05, 2013 at 01:41:16PM -0400, Brian Cuttler wrote:
> 
> Jean-Louis,
> 
> Yes, I did find some information on a run time mechanism to
> increase the 256 file limit (file limit stored in unsigned character).
> 
> The work-around employed requires the execution of /usr/lib/extendedFILE.so.1
> prior to the binary being executed.
> 
> Following up on your maxcheck and Spindle number, I wonder if I 
> couldn't automatically build an alternate disklist file with 
> spindle number and swap it in and out. It would have to be done
> dynamically (since my disklist changes and making changes in 
> multiple locations is error prone), but that can be scripted and
> called from cron.
> 
> /* I need something that will handle both formats of DLE
>  *
> finsen  /export2 zfs-snapshot2
> finsen  /export/home-AZ /export/home   {
> user-tar2
> include "./[A-Z]*"
> }
>  *
>  */
> 
> Since this is an amanda-client issue, rather than an amanda server
> issue, I need to ask you, how to execute this on the client-side
> before attempting to check the DLE list. Is there a way to invoke
> this from the amanda daemon?
> 
>  - Alternatively, if someone better versed than I am on the Solaris
>    inetd or in SMF knows how to insert the requisite command on the
>client side - I would be appreciative if they would share their
>information.
> 
>   thank you,
> 
>   Brian
> 
> 
> On Wed, Jun 05, 2013 at 11:54:35AM -0400, Jean-Louis Martineau wrote:
> > Brian,
> > 
> > Can you increase the number of open files at the system level?
> > 
> > amcheck checks all DLEs in parallel; you can try to add spindle (in the 
> > disklist) to reduce parallelism but that can have a bad impact on dump 
> > performance, so it is not a good workaround.
> > 
> > You would like a maxcheck  setting similar to maxdump, I put it in my 
> > TODO list.
> > 
> > Jean-Louis
> > 
> > On 06/05/2013 11:05 AM, Brian Cuttler wrote:
> > >Hello amanda users,
> > >
> > >I just updated amanda from 3.3.0 to 3.3.3 on a Solaris 10/x86 system.

Re: amanda 3.3.3 "too many files"

2013-06-07 Thread Brian Cuttler


Jean-Louis,

added a couple of switches to # ls, got a much more informative output.

[finsen]: /proc/734/fd > ls -F -C /proc/10832/fd
0=  1=  10  12|  13|  16|  17|  2=  20|  21|  3>  6|  8|
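For anyone else hunting a descriptor leak, a crude watch loop along
these lines works on both Solaris 10 and Linux (a sketch; adjust the
user and interval):

while sleep 10; do
    for pid in `pgrep -u amanda`; do
        echo "$pid: `ls /proc/$pid/fd | wc -l` open fds"
    done
    echo ---
done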




On Thu, Jun 06, 2013 at 11:09:20AM -0400, Jean-Louis Martineau wrote:
> On 06/05/2013 11:54 AM, Jean-Louis Martineau wrote:
> >Brian,
> >
> >Can you increase the number of open files at the system level?
> >
> >amcheck checks all DLEs in parallel; you can try to add spindle (in the 
> >disklist) to reduce parallelism but that can have a bad impact on dump 
> >performance, so it is not a good workaround.
> 
> Forget that idea, adding spindle will not help.
> 
> I think the problem is a file descriptor leak (files not closed), but it 
> can be in any process.
> Can you monitor all opened files for all amanda processes?
> I don't know how to do it with Solaris, but you can 'ls /proc/PID/fd' on Linux.
> It will help to find which process leaks.
> 
> Jean-Louis
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



amanda 3.3.3 not unwinding

2013-06-07 Thread Brian Cuttler

Installed Amanda 3.3.3 on Solaris 10/x86 two days ago.
Have found that both amdumps since did not complete normally.

This despite completing all DLEs and sending the report.

> amstatus finsen
Using /usr/local/etc/amanda/finsen/DailySet1/amdump
From Thu Jun  6 18:30:00 EDT 2013



finsen:/0 43283m estimate done
finsen:/export  0 20448m estimate done
finsen:/export/home-A   0   118m estimate done
finsen:/export/home-AZ  0 0m estimate done

 < MANY LINES REMOVED>

finsen:hp10p/flyshare   0   403m estimate done
finsen:hp10p/grifadmin  0   783m estimate done
finsen:hp10p/hiu0 74691m estimate done
finsen:hp10p/hiu2   0 22763m estimate done
finsen:hp10p/ivcp   0 26015m estimate done
finsen:hp10p/virologypt 015m estimate done

SUMMARY  part  real  estimated
   size   size
partition   : 265
estimated   : 265  3902078m
flush   :   0 0m
failed  :   00m   (  0.00%)
wait for dumping:   00m   (  0.00%)
dumping to tape :   00m   (  0.00%)
dumping :   0 0m 0m (  0.00%) (  0.00%)
dumped  :   0 0m 0m (  0.00%) (  0.00%)
wait for writing:   0 0m 0m (  0.00%) (  0.00%)
wait to flush   :   0 0m 0m (100.00%) (  0.00%)
writing to tape :   0 0m 0m (  0.00%) (  0.00%)
failed to tape  :   0 0m 0m (  0.00%) (  0.00%)
taped   :   0 0m 0m (  0.00%) (  0.00%)
12 dumpers idle : not-idle
taper status: Idle
taper qlen: 0
network free kps:   800
holding space   :546122m (100.00%)
 0 dumpers busy :  0:00:05  (100.00%)not-idle:  0:00:05  (100.00%)

we were left with processes that did not unwind.

> ps -ef | grep amanda
  amanda 16257 16256  13 18:30:01 ? 879:30 
/usr/local/libexec/amanda/planner finsen --starttime 20130606183000
  amanda 16271 16258   0 18:30:01 ?   0:00 dumper11 finsen
  amanda 16267 16258   0 18:30:01 ?   0:00 dumper7 finsen
  amanda 16263 16258   0 18:30:01 ?   0:00 dumper3 finsen
  amanda 27743  8729   0 09:11:17 pts/14  0:00 -tcsh
  amanda 16270 16258   0 18:30:01 ?   0:00 dumper10 finsen
  amanda 16260 16258   0 18:30:01 ?   0:00 dumper0 finsen
  amanda 16262 16258   0 18:30:01 ?   0:00 dumper2 finsen
  amanda 27766 27743   0 09:11:45 pts/14  0:00 grep amanda
  amanda 16268 16258   0 18:30:01 ?   0:00 dumper8 finsen
  amanda 27765 27743   0 09:11:45 pts/14  0:00 ps -ef
  amanda 16256 16253   0 18:30:01 ?   0:00 /usr/local/bin/perl 
/usr/local/sbin/amdump finsen
  amanda 16259 16258   0 18:30:01 ?   0:00 /usr/local/bin/perl 
/usr/local/libexec/amanda/taper finsen
  amanda 16266 16258   0 18:30:01 ?   0:00 dumper6 finsen
  amanda 16264 16258   0 18:30:01 ?   0:00 dumper4 finsen
  amanda 16261 16258   0 18:30:01 ?   0:00 dumper1 finsen
  amanda 16258 16256   0 18:30:01 ?   0:00 
/usr/local/libexec/amanda/driver finsen
  amanda 16253   541   0 18:30:01 ?   0:00 sh -c /usr/local/sbin/amdump 
 finsen
  amanda 16265 16258   0 18:30:01 ?   0:00 dumper5 finsen
  amanda 16269 16258   0 18:30:01 ?   0:00 dumper9 finsen


This is new behavior since amanda 3.3.0 which was the previous
version on this system.

Amanda server has only one client, itself.

I'm not sure where to even start unraveling this.


thank you,

    Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amanda 3.3.3 "too many files"

2013-06-05 Thread Brian Cuttler

Jean-Louis,

Thank you, I'm sorry I was unclear. Yes, of course the disklist
needs to be in place when I invoke amcheck on the server.

I'd meant that I need to find out how to up the file limit on
the client, which is a more difficult proposition since it's
SMF/INET and not simply something I can script in cron on the
server. The fact that the client and the server are the same
box doesn't help much in this case.

thank you,

    Brian

On Wed, Jun 05, 2013 at 03:08:45PM -0400, Jean-Louis Martineau wrote:
> On 06/05/2013 01:41 PM, Brian Cuttler wrote:
> >Jean-Louis,
> >
> >Yes, I did find some information on a run time mechanism to
> >increase the 256 file limit (file limit stored in unsigned character).
> >
> >The work-around employed requires the execution of 
> >/usr/lib/extendedFILE.so.1
> >prior to the binary being executed.
> >
> >Following up on your maxcheck and Spindle number, I wonder if I
> >couldn't automatically build an alternate disklist file with
> >spindle number and swap it in and out. It would have to be done
> >dynamically (since my disklist changes and making changes in
> >multiple locations is error prone), but that can be scripted and
> >called from cron.
> >
> >/* I need something that will handle both formats of DLE
> >  *
> >finsen  /export2 zfs-snapshot2
> >finsen  /export/home-AZ /export/home   {
> > user-tar2
> > include "./[A-Z]*"
> > }
> >  *
> >  */
> >
> >Since this is an amanda-client issue, rather than an amanda server
> >issue, I need to ask you, how to execute this on the client-side
> >before attempting to check the DLE list. Is there a way to invoke
> >this from the amanda daemon?
> It must be done on the server before amcheck is executed.
> 
> ./script-add-spindle < disklist > disklist.spindle
> ./amcheck CONF -odiskfile=disklist.spindle
> 
> Jean-Louis
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amanda 3.3.3 "too many files"

2013-06-05 Thread Brian Cuttler

Jean-Louis,

Yes, I did find some information on a run time mechanism to
increase the 256 file limit (file limit stored in unsigned character).

The work-around employed requires the execution of /usr/lib/extendedFILE.so.1
prior to the binary being executed.

Following up on your maxcheck and Spindle number, I wonder if I 
couldn't automatically build an alternate disklist file with 
spindle number and swap it in and out. It would have to be done
dynamically (since my disklist changes and making changes in 
multiple locations is error prone), but that can be scripted and
called from cron.

/* I need something that will handle both formats of DLE
 *
finsen  /export2 zfs-snapshot2
finsen  /export/home-AZ /export/home   {
user-tar2
include "./[A-Z]*"
}
 *
 */
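A sketch of what such a filter could look like (my guess at the shape;
it gives every DLE the same spindle and only understands the two DLE
formats above), to be paired with
'amcheck CONF -odiskfile=disklist.spindle':

#!/bin/sh
# append a spindle number to each DLE in the disklist
awk -v sp=1 '
    /{[ \t]*$/            { inb = 1; print; next }   # braced DLE opens
    inb && /^[ \t]*}/     { inb = 0; print "} " sp; next }
    inb                   { print; next }
    NF >= 3 && $1 !~ /^#/ { print $0, sp; next }     # one-line DLE
                          { print }                  # comments, blanks
' disklist > disklist.spindle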

Since this is an amanda-client issue, rather than an amanda server
issue, I need to ask you, how to execute this on the client-side
before attempting to check the DLE list. Is there a way to invoke
this from the amanda daemon?

 - Alternatively, if someone better versed than I am on the Solaris
   inetd or in SMF knows how to insert the requisite command on the
   client side - I would be appreciative if they would share their
   information.

thank you,

    Brian


On Wed, Jun 05, 2013 at 11:54:35AM -0400, Jean-Louis Martineau wrote:
> Brian,
> 
> Can you increase the number of open files at the system level?
> 
> amcheck checks all DLEs in parallel; you can try to add spindle (in the 
> disklist) to reduce parallelism but that can have a bad impact on dump 
> performance, so it is not a good workaround.
> 
> You would like a maxcheck  setting similar to maxdump, I put it in my 
> TODO list.
> 
> Jean-Louis
> 
> On 06/05/2013 11:05 AM, Brian Cuttler wrote:
> >Hello amanda users,
> >
> >I just updated amanda from 3.3.0 to 3.3.3 on a Solaris 10/x86 system.
> >The system is both the server and the client, there are no other
> >clients of this system.
> >
> >We have ~265 DLEs on this system (large zfs arrays and all
> >samba shares are their own file systems and DLE, thank goodness
> >I was able to talk my manager out of making all user directories
> >their own DLE as well, though they are their own zfs file systems).
> >
> >The following errors are -not- new with 3.3.3, we've had them for
> >a while; I'd hoped the upgrade would take care of it.
> >
> >Also the amcheck leaves an amanda-check file around for one of
> >the zfs file systems (yes, configured to use zfs snapshot). [I'm
> >pretty sure these two errors are related to one another]
> >
> >The filesystem amanda-*-check file left is for the same filesystem
> >each night, unless we add/remove DLE/filesystems. So I think it is
> >the nth filesystem and at the limit of the open file counter, rather
> >than something in the file system itself.
> >
> >I was hoping there was an easy fix for this. Last I recall on the
> >topic it had to do with the fillm being a 32 bit rather than 64 bit
> >value (I could be wrong about this).
> >
> >Otherwise all # amcheck tests run successfully. Will run # amdump
> >this evening but do not anticipate any issues there.
> >
> > thank you,
> >
> > Brian
> >
> >>amcheck -c finsen
> >Amanda Backup Client Hosts Check
> >
> >ERROR: finsen: service selfcheck: selfcheck: Error opening pipe to child: 
> >Too many open files
> >ERROR: finsen: service /usr/local/libexec/amanda/selfcheck failed: pid 
> >8590 exited with code 1
> >Client check: 1 host checked in 83.304 seconds.  2 problems found.
> >
> >(brought to you by Amanda 3.3.3)
> >
> >
> >from /var/log/conlog
> >
> >Jun  5 10:55:04 finsen amandad[8583]: [ID 927837 daemon.info] connect from 
> >finsen.wadsworth.org
> >Jun  5 10:56:27 finsen selfcheck[8590]: [ID 702911 daemon.error] Error 
> >opening pipe to child: Too many open files
> >
> >
> >
> >
> >
> >---
> >Brian R Cuttler brian.cutt...@wadsworth.org
> >Computer Systems Support(v) 518 486-1697
> >Wadsworth Center(f) 518 473-6384
> >NYS Department of HealthHelp Desk 518 473-0773
> >
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



amanda 3.3.3 "too many files"

2013-06-05 Thread Brian Cuttler

Hello amanda users,

I just updated amanda from 3.3.0 to 3.3.3 on a Solaris 10/x86 system.
The system is both the server and the client, there are no other
clients of this system.

We have ~265 DLEs on this system (large zfs arrays and all
samba shares are their own file systems and DLE, thank goodness
I was able to talk my manager out of making all user directories
their own DLE as well, though they are their own zfs file systems).

The following errors are -not- new with 3.3.3, we've had them for
a while; I'd hoped the upgrade would take care of it. 

Also the amcheck leaves an amanda-check file around for one of
the zfs file systems (yes, configured to use zfs snapshot). [I'm
pretty sure these two errors are related to one another]

The filesystem amanda-*-check file left is for the same filesystem
each night, unless we add/remove DLE/filesystems. So I think it is
the nth filesystem and at the limit of the open file counter, rather
than something in the file system itself.

I was hoping there was an easy fix for this. Last I recall on the
topic it had to do with the fillm being a 32 bit rather than 64 bit
value (I could be wrong about this).

Otherwise all # amcheck tests run successfully. Will run # amdump
this evening but do not anticipate any issues there.

thank you,

    Brian

> amcheck -c finsen

Amanda Backup Client Hosts Check

ERROR: finsen: service selfcheck: selfcheck: Error opening pipe to child: Too 
many open files
ERROR: finsen: service /usr/local/libexec/amanda/selfcheck failed: pid 8590 
exited with code 1
Client check: 1 host checked in 83.304 seconds.  2 problems found.

(brought to you by Amanda 3.3.3)


from /var/log/conlog

Jun  5 10:55:04 finsen amandad[8583]: [ID 927837 daemon.info] connect from 
finsen.wadsworth.org
Jun  5 10:56:27 finsen selfcheck[8590]: [ID 702911 daemon.error] Error opening 
pipe to child: Too many open files





---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: tape size question

2013-05-23 Thread Brian Cuttler

Robert,

On Thu, May 23, 2013 at 10:04:29AM -0600, Charles Curley wrote:
> On Thu, 23 May 2013 13:59:33 +
> "McGraw, Robert P"  wrote:
> 
> > Why does amanda stop at 52% when I still have 1.5TB of data in the
> > holding disk to write to the tape? It is hard to believe that the
> > LTO4 compression is so bad that I am not getting any compression at
> > all.
> 
> It would be if you have both tape drive compression turned on (as you
> do) and compression specified in the disk list entries (DLEs).
> Conventional wisdom on this list has been that DLE compression (using
> gzip, say), is more efficient than tape drive compression.

Tape drive compression is nice, and the newer technologies do a
block by block determination as to whether the data can be compressed
or not [used to be that if you SW compressed and then HW compressed
you might get inflation and not only waste CPU but end up occupying
more tape than if you hadn't compressed at all].

What is more difficult is estimation of tape usage. SW compression
gives you a raw size number, a compressed size number and Amanda can
sum the compressed size numbers up and compare them to the capacity
of the tape. With HW compression it's kind of out of your hands and
you don't really know, except from experience, what is going to fit
on tape any given night.

If you run SW compression and see you get 1/3 data reduction, you
can move the DLE to HW compression and will hopefully get 1/3 data
compression, which you might allow for by "lengthening" the tape
capacity by 1/3 the size of the particular DLE. At least, I've done
this in the past. I don't currently run HW compression on any DLE
or amanda config that I'm managing. YMMV.
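As a worked example with invented numbers: an 800 GB (native) LTO4
tape and a 300 GB DLE moved to HW compression that historically showed
1/3 data reduction (roughly 100 GB saved) might be allowed for like so:

define tapetype LTO4-padded {
    comment "800 GB native, padded by 1/3 of the HW-compressed DLE"
    length 900 gbytes    # 800 + (300 / 3)
}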

Database compression depends a lot on your database. I have a lot
of sparse databases that compress very well. This is not always
the case; you have to know your data.


> -- 
> 
> Charles Curley  /"\ASCII Ribbon Campaign
> Looking for fine software   \ /Respect for open standards
> and/or writing?  X No HTML/RTF in email
> http://www.charlescurley.com    / \No M$ Word docs in email
> 
> Key fingerprint = CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: zfs-snapshot errors

2013-05-21 Thread Brian Cuttler

Jean-Louis,

Thank you -

1) yes, file path is different, but it turns out that it's a 
   result of links in the file system, I'd walked down the
   correct path, though there was no way to see that based
   on the prior message.

2) Update to gtar 1.22 seems to have been sufficient, didn't also
   need to touch the switches, which are default and what I have
   on other boxes.

Q: Do I want to remove the --no-check-device switch in the general case?
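For context, my understanding (to be checked against the amgtar man
page for your build) is that the switch is injected by the application
properties, along these lines:

define application-tool app_amgtar {
    plugin "amgtar"
    # "CHECK-DEVICE" "NO" is what adds --no-check-device to the gtar
    # command line; leave it unset, or run a gtar new enough (roughly
    # 1.20 and later) to understand the option
    #property "CHECK-DEVICE" "NO"
}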

thanks,

    Brian

On Tue, May 21, 2013 at 10:33:39AM -0400, Jean-Louis Martineau wrote:
> On 05/21/2013 09:47 AM, Brian Cuttler wrote:
> >Hello Amanda users,
> >
> >Yesterday I upgraded a client on Solaris 10/Sparc from 2.4.2p2,
> >which was working great (except perhaps for not being built with GTAR
> >in mind, in, I think, 2001!), to amanda 3.3.3.
> >
> >Found the following in the client side debug file.
> >
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: NORMAL : File .* shrunk by 
> >[0-9][0-9]* bytes, padding with zeros
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: NORMAL : Cannot add file .*: 
> >No such file or directory$
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: NORMAL : Error exit delayed 
> >from previous errors
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: amgtar: error opening 
> >/usr/local/var/amanda/gnutar-lists/wcapp_appp_db_0: No such file or 
> >directory
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: Spawning "/usr/sfw/bin/gtar 
> >/usr/sfw/bin/gtar -x --no-check-device -f -" in pipeline
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: Spawning "/usr/sfw/bin/gtar 
> >/usr/sfw/bin/gtar --create --verbose --file - --directory 
> >/appp/db/.zfs/snapshot/amanda-_appp_db-current --one-file-system 
> >--no-check-device --listed-incremental 
> >/usr/local/var/amanda/gnutar-lists/wcapp_appp_db_1.new --sparse 
> >--ignore-failed-read --totals ." in pipeline
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar:   0: strange(?): 
> >/usr/sfw/bin/gtar: unrecognized option `--no-check-device'
> >
> >Ignoring the "NORMAL" errors.
> >
> >The error
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: amgtar: error opening 
> >/usr/local/var/amanda/gnutar-lists/wcapp_appp_db_0: No such file or 
> >directory
> >
> >is, I think, inconsequential since in the file system I find.
> >
> >
> ># pwd
> >/appp/export/home/local/var/amanda/gnutar-lists
> >
> ># ls -l
> >total 6
> >-rw---   1 amanda   sys0 May 14 21:43 wcapp_appp_db2_0.new
> >-rw---   1 amanda   sys0 May 20 18:39 wcapp_appp_db2_1.new
> >-rw---   1 amanda   sys0 May 15 20:49 wcapp_appp_db_0.new
> >-rw---   1 amanda   sys0 May 20 18:40 wcapp_appp_db_1.new
> >-rw---   1 amanda   sys0 May 20 20:17 
> >wcapp_appp_export_0.new
> >-rw---   1 amanda   sys0 May 17 18:33 
> >wcapp_appp_export_1.new
> The error is from /usr/local/var/amanda/gnutar-lists, but you list 
> /appp/export/home/local/var/amanda/gnutar-lists.
> 
> 
> >I think we choked on the --no-check-device error.
> >
> >Mon May 20 18:36:42 2013: thd-2a4b8: amgtar:   0: strange(?): 
> >/usr/sfw/bin/gtar: unrecognized option `--no-check-device'
> >
> >
> ># /usr/sfw/bin/gtar --version
> >tar (GNU tar) 1.17
> >Copyright (C) 2007 Free Software Foundation, Inc.
> >License GPLv2+: GNU GPL version 2 or later 
> ><http://gnu.org/licenses/gpl.html>
> >This is free software: you are free to change and redistribute it.
> >There is NO WARRANTY, to the extent permitted by law.
> >
> >Written by John Gilmore and Jay Fenlason.
> >
> >Simple issue of having an updated gtar? Or do I have something
> >more complex to worry about?
> Upgrade gtar to a newer version or do not set the CHECK-DEVICE property.
> 
> Jean-Louis
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



zfs-snapshot errors

2013-05-21 Thread Brian Cuttler

Hello Amanda users,

Yesterday I upgraded a client on Solaris 10/Sparc from 2.4.2p2,
which was working great (except perhaps for not being built with GTAR
in mind, in, I think, 2001!), to amanda 3.3.3.

Found the following in the client side debug file.

Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: NORMAL : File .* shrunk by 
[0-9][0-9]* bytes, padding with zeros
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: NORMAL : Cannot add file .*: No 
such file or directory$
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: NORMAL : Error exit delayed from 
previous errors
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: amgtar: error opening 
/usr/local/var/amanda/gnutar-lists/wcapp_appp_db_0: No such file or directory
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: Spawning "/usr/sfw/bin/gtar 
/usr/sfw/bin/gtar -x --no-check-device -f -" in pipeline
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: Spawning "/usr/sfw/bin/gtar 
/usr/sfw/bin/gtar --create --verbose --file - --directory 
/appp/db/.zfs/snapshot/amanda-_appp_db-current --one-file-system 
--no-check-device --listed-incremental 
/usr/local/var/amanda/gnutar-lists/wcapp_appp_db_1.new --sparse 
--ignore-failed-read --totals ." in pipeline
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar:   0: strange(?): 
/usr/sfw/bin/gtar: unrecognized option `--no-check-device'

Ignoring the "NORMAL" errors.

The error
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar: amgtar: error opening 
/usr/local/var/amanda/gnutar-lists/wcapp_appp_db_0: No such file or directory

is, I think, inconsequential since in the file system I find.


# pwd
/appp/export/home/local/var/amanda/gnutar-lists

# ls -l
total 6
-rw---   1 amanda   sys0 May 14 21:43 wcapp_appp_db2_0.new
-rw---   1 amanda   sys0 May 20 18:39 wcapp_appp_db2_1.new
-rw---   1 amanda   sys0 May 15 20:49 wcapp_appp_db_0.new
-rw---   1 amanda   sys0 May 20 18:40 wcapp_appp_db_1.new
-rw---   1 amanda   sys0 May 20 20:17 wcapp_appp_export_0.new
-rw---   1 amanda   sys0 May 17 18:33 wcapp_appp_export_1.new

I think we choked on the --no-check-device error.

Mon May 20 18:36:42 2013: thd-2a4b8: amgtar:   0: strange(?): 
/usr/sfw/bin/gtar: unrecognized option `--no-check-device'


# /usr/sfw/bin/gtar --version
tar (GNU tar) 1.17
Copyright (C) 2007 Free Software Foundation, Inc.
License GPLv2+: GNU GPL version 2 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by John Gilmore and Jay Fenlason.

Simple issue of having an updated gtar? Or do I have something
more complex to worry about?

thanks,

    Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amanda 3.3.3 selfcheck request failed

2013-05-16 Thread Brian Cuttler


Work-around.

I was able to get amanda to work on the client using
"bsd" protocal, rather than bsdtcp.

This is not optimal, but is an improvement over using
the 2.6.1 client, which, to tell the truth, worked pretty
darn well; the one nagging issue was that the client daemons
would often hang after job completion, causing problems
dumping the 'pending' DLE the next day.

We'll see how the client behaves this evening.



On Thu, May 16, 2013 at 03:11:33PM -0400, Brian Cuttler wrote:
> On Thu, May 16, 2013 at 03:02:53PM -0400, Jean-Louis Martineau wrote:
> > On 05/16/2013 02:53 PM, Brian Cuttler wrote:
> > >/tmp/amanda/amandad - which was created by the manual run
> > >of amandad, no new files.
> > 
> > If telnet does not create a new debug file it is because amandad is not 
> > executed, which means SMF is misconfigured.
> > I can't help with SMF.
> 
> OS specific issue, I can appreciate that. Thank you.
> 
> Does anyone have a clue as to how I've misconfigured SMF on Solaris 10?
> 
> My amanda config on Finsen, which is a solaris 10/x86 amanda server
> that has only itself as a client, has the following entry in inetd.conf
> 
> amanda  stream tcp nowait  amanda  /usr/local/libexec/amanda/amandad
>amandad -auth=bsdtcp amdump
> 
> This is identical to the entry in inetd.conf on Grifserv, the new
> amanda client I'm trying to upgrade for amanda server Curie.
> 
> service output information is identical on the two machines.
> 
> # svcs -l svc:/network/amanda/tcp:default
> fmri svc:/network/amanda/tcp:default
> name amanda
> enabled  true
> stateonline
> next_state   none
> state_time   Thu May 16 14:31:24 2013
> restartersvc:/network/inetd:default
> contract_id  
> 
> The manifest services give every indication of being identical.
> 
> [finsen]: /var/svc/manifest/network > ls -l *amanda*
> -rw-r--r-- 1 root root 2320 Feb  9 01:05 amanda-tcp.xml
> -rw-r--r-- 1 root root 2485 Feb  9 01:05 amanda-udp.xml
> -rw-r--r-- 1 root root 2292 Feb 27  2009 amandaidx-tcp.xml
> 
> # ls -l *amanda*
> -rw-r--r--   1 root root2320 May 16 14:31 amanda-tcp.xml
> 
> # wc amanda-tcp.xml
>   82 2272320 amanda-tcp.xml
> 
> # cksum amanda-tcp.xml
> 4135061186  2320amanda-tcp.xml
> 
> ... I'm going to have another look at the dumptype definitions.
> 
> 
> 
> 
> 
> 
> 
> ---
>Brian R Cuttler brian.cutt...@wadsworth.org
>Computer Systems Support(v) 518 486-1697
>Wadsworth Center(f) 518 473-6384
>NYS Department of HealthHelp Desk 518 473-0773
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amanda 3.3.3 selfcheck request failed

2013-05-16 Thread Brian Cuttler
On Thu, May 16, 2013 at 03:02:53PM -0400, Jean-Louis Martineau wrote:
> On 05/16/2013 02:53 PM, Brian Cuttler wrote:
> >/tmp/amanda/amandad - which was created by the manual run
> >of amandad, no new files.
> 
> If telnet does not create a new debug file it is because amandad is not 
> executed, which means SMF is misconfigured.
> I can't help with SMF.

OS specific issue, I can appreciate that. Thank you.

Does anyone have a clue as to how I've misconfigured SMF on Solaris 10?
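For reference, the sequence I'd use to (re)generate and load the
manifest on Solaris 10 (a sketch; service name taken from the svcs
output below):

inetconv -i /etc/inet/inetd.conf          # regenerate and import amanda-tcp.xml
svcadm refresh svc:/network/amanda/tcp:default
svcadm restart svc:/network/amanda/tcp:default
svcs -x svc:/network/amanda/tcp:default   # report anything broken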

My amanda config on Finsen, which is a solaris 10/x86 amanda server
that has only itself as a client, has the following entry in inetd.conf

amanda  stream tcp nowait  amanda  /usr/local/libexec/amanda/amandad
   amandad -auth=bsdtcp amdump

This is identical to the entry in inetd.conf on Grifserv, the new
amanda client I'm trying to upgrade for amanda server Curie.

service output information is identical on the two machines.

# svcs -l svc:/network/amanda/tcp:default
fmri svc:/network/amanda/tcp:default
name amanda
enabled  true
stateonline
next_state   none
state_time   Thu May 16 14:31:24 2013
restartersvc:/network/inetd:default
contract_id  

The manifest services give every indication of being identical.

[finsen]: /var/svc/manifest/network > ls -l *amanda*
-rw-r--r-- 1 root root 2320 Feb  9 01:05 amanda-tcp.xml
-rw-r--r-- 1 root root 2485 Feb  9 01:05 amanda-udp.xml
-rw-r--r-- 1 root root 2292 Feb 27  2009 amandaidx-tcp.xml

# ls -l *amanda*
-rw-r--r--   1 root root2320 May 16 14:31 amanda-tcp.xml

# wc amanda-tcp.xml
  82 2272320 amanda-tcp.xml

# cksum amanda-tcp.xml
4135061186  2320amanda-tcp.xml

... I'm going to have another look at the dumptype definitions.







---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amanda 3.3.3 selfcheck request failed

2013-05-16 Thread Brian Cuttler

/tmp/amanda/amandad - which was created by the manual run
of amandad, no new files.


On Thu, May 16, 2013 at 02:50:37PM -0400, Jean-Louis Martineau wrote:
> On 05/16/2013 02:46 PM, Brian Cuttler wrote:
> >But I'm not seeing, perhaps, looking in the wrong directory, any
> >debug files.
> 
> In: `amgetconf build.amanda_dbgdir`/amandad
> 
> Jean-Louis
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: amanda 3.3.3 selfcheck request failed

2013-05-16 Thread Brian Cuttler


Jean-Louis,

On Thu, May 16, 2013 at 02:40:28PM -0400, Jean-Louis Martineau wrote:
> Brian,
> 
> Do a: telnet grifserv.wadsworth.org amanda
> Does it create an amandad debug file on grifserv?

> telnet grifserv amanda
Trying 10.49.66.6...
Connected to grifserv.wadsworth.org.
Escape character is '^]'.
Connection to grifserv.wadsworth.org closed by foreign host.

produces

#  snoop host curie
Using device bge0 (promiscuous mode)
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=10080 S=59114 Syn 
Seq=532791431 Len=0 Win=49640 Options=
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=59114 S=10080 Syn 
Ack=532791432 Seq=2245896368 Len=0 Win=49640 Options=
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=10080 S=59114 
Ack=2245896369 Seq=532791432 Len=0 Win=49640
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=59114 S=10080 Fin 
Ack=532791432 Seq=2245896369 Len=0 Win=49640
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=10080 S=59114 
Ack=2245896370 Seq=532791432 Len=0 Win=49640
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=10080 S=59114 Fin 
Ack=2245896370 Seq=532791432 Len=0 Win=49640
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=59114 S=10080 Ack=532791433 
Seq=2245896370 Len=0 Win=49640
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=9997 S=51483 Push 
Ack=97669462 Seq=1337214714 Len=396 Win=49640
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=9997 S=33469 Syn 
Seq=2993683995 Len=0 Win=49640 Options=
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=33469 S=9997 Syn 
Ack=2993683996 Seq=543998241 Len=0 Win=49640 Options=
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=9997 S=33469 Ack=543998242 
Seq=2993683996 Len=0 Win=49640
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=9997 S=33469 Fin 
Ack=543998242 Seq=2993683996 Len=0 Win=49640
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=33469 S=9997 Ack=2993683997 
Seq=543998242 Len=0 Win=49640
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=33469 S=9997 Fin 
Ack=2993683997 Seq=543998242 Len=0 Win=49640
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=9997 S=33469 Ack=543998243 
Seq=2993683997 Len=0 Win=49640
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=51483 S=9997 Ack=1337215110 
Seq=97669462 Len=0 Win=48419

But I'm not seeing, perhaps, looking in the wrong directory, any
debug files.




> 
> Jean-Louis
> 
> On 05/16/2013 02:06 PM, Brian Cuttler wrote:
> >Hello amanda users,
> >
> >And the great wheel turns again... I've been here before, but
> >apparently fell back to 2.6.1 and udp from a failed 3.3.0 install.
> >
> >
> >I am running amanda 3.1.2 on Solaris 10/x86 as a server and
> >am in the process of upgrading the amanda client on solaris 10/Sparc
> >from 2.6.1 to 3.3.3.
> >
> >Built seems to have gone fine.
> >
> >Problem is the amcheck is failing.
> >
> >I have updated the xinetd.conf on the client, it now reads.
> >
> >amanda  stream tcp nowait  amanda  /usr/local/libexec/amanda/amandad
> > amandad -auth=bsdtcp amdump
> >
> >I disabled the prior instance of amanda (udp) in SMF services and
> >removed the config file both from the SMF manifest directory and from
> >the compiled database and then imported the new inetd config.
> >
> >I updated the disklist, rather than using
> >
> >
> >define dumptype zfs-snapshot {
> >   index
> >   program "APPLICATION"
> >   application "app_amgtar"
> >   script "script_zfs_snapshot"
> >#  auth "bsdtcp"
> >   estimate server
> >}
> >
> >We use
> >
> >define dumptype zfs-snapshot-bsdtcp {
> >   index
> >   program "APPLICATION"
> >   application "app_amgtar"
> >   script "script_zfs_snapshot"
> >   auth "bsdtcp"
> >   estimate server
> >}
> >
> >
> >I have a note that says the auth param should be in the script and
> >not the dumptype, but that produces an immediate error when I invoke
> >amcheck.
> >
> >
> >I have updated exactly one DLE in the disklist for this client
> >and I have commented out all other references to this host in
> >the disklist, this is to prevent any confusion as to which protocol
> >should be used.
> >
> >I am not seeing any errors in /var/adm/messages, nor are we producing
> >any debug files in the /tmp/amanda directory.
> >
> >Amcheck on the server does this.
> >
> >>amcheck -c curie grifserv
> >Amanda Backup Client Hosts Check
> >
> >WARNING: grifserv.wadsworth.org: selfcheck request failed: EOF on read 
> >from grifserv.wads

amanda 3.3.3 selfcheck request failed

2013-05-16 Thread Brian Cuttler
grifserv.wadsworth.org -> curie.wadsworth.org TCP D=9997 S=32768 Ack=3599432779 
Seq=342358185 Len=0 Win=49640
curie.wadsworth.org -> grifserv.wadsworth.org TCP D=51483 S=9997 Ack=1337172237 
Seq=97669462 Len=0 Win=48419


The configure command had looked like this.

./configure --with-user=amanda --with-group=sys --with-udpportrange=932,948 \
   --with-tcpportrange=10084,10100 --with-gnutar=/usr/sfw/bin/gtar \
   --with-gnuplot=/opt/sfw/bin/gnuplot --without-libiconv-prefix \ 
   --without-libintl-prefix \
  LDFLAGS="-L/usr/sfw/lib -R/usr/sfw/lib" \
  CPPFLAGS="-I/usr/sfw/include -I/opt/sfw/include" \
  CFLAGS="-I/usr/sfw/include -I/opt/sfw/include -I/usr/local/include" \
  CC=/opt/SUNWspro/bin/cc EGREP=/usr/sfw/bin/gegrep

If I run the amandad from the command line we do produce a debug
file, which is inline here.


# /usr/local/libexec/amanda/amandad

Thu May 16 13:51:02 2013: thd-2ba30: amandad: pid 13715 ruid 0 euid 0 version 3.3.3: start at Thu May 16 13:51:02 2013
Thu May 16 13:51:02 2013: thd-2ba30: amandad: security_getdriver(name=BSDTCP) returns ff32421c
Thu May 16 13:51:02 2013: thd-2ba30: amandad: version 3.3.3
Thu May 16 13:51:02 2013: thd-2ba30: amandad: build: VERSION="Amanda-3.3.3"
Thu May 16 13:51:02 2013: thd-2ba30: amandad: BUILT_DATE="Thu May 16 11:38:33 EDT 2013" BUILT_MACH=""
Thu May 16 13:51:02 2013: thd-2ba30: amandad:BUILT_REV="5099" 
BUILT_BRANCH="community_3_3_3"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
CC="/opt/SUNWspro/bin/cc"
Thu May 16 13:51:02 2013: thd-2ba30: amandad: paths: 
bindir="/usr/local/bin" sbindir="/usr/local/sbin"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
libexecdir="/usr/local/libexec"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
amlibexecdir="/usr/local/libexec/amanda"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
mandir="/usr/local/share/man" AMANDA_TMPDIR="/tmp/amanda"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
AMANDA_DBGDIR="/tmp/amanda"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
CONFIG_DIR="/usr/local/etc/amanda" DEV_PREFIX="/dev/dsk/"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
RDEV_PREFIX="/dev/rdsk/" DUMP="/usr/sbin/ufsdump"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
RESTORE="/usr/sbin/ufsrestore" VDUMP=UNDEF VRESTORE=UNDEF
Thu May 16 13:51:02 2013: thd-2ba30: amandad:XFSDUMP=UNDEF 
XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
SAMBA_CLIENT="/usr/sfw/bin/smbclient"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
GNUTAR="/usr/sfw/bin/gtar"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
COMPRESS_PATH="/usr/local/bin/gzip"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
UNCOMPRESS_PATH="/usr/local/bin/gzip"  LPRCMD=UNDEF
Thu May 16 13:51:02 2013: thd-2ba30: amandad: MAILER=UNDEF
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
listed_incr_dir="/usr/local/var/amanda/gnutar-lists"
Thu May 16 13:51:02 2013: thd-2ba30: amandad: defs:  DEFAULT_SERVER="lyra" 
DEFAULT_CONFIG="DailySet1"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
DEFAULT_TAPE_SERVER="lyra" DEFAULT_TAPE_DEVICE=""
Thu May 16 13:51:02 2013: thd-2ba30: amandad:NEED_STRSTR 
AMFLOCK_POSIX AMFLOCK_LOCKF AMFLOCK_LNLOCK
Thu May 16 13:51:02 2013: thd-2ba30: amandad:SETPGRP_VOID 
AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
Thu May 16 13:51:02 2013: thd-2ba30: amandad:CLIENT_LOGIN="amanda" 
CHECK_USERID HAVE_GZIP
Thu May 16 13:51:02 2013: thd-2ba30: amandad:COMPRESS_SUFFIX=".gz" 
COMPRESS_FAST_OPT="--fast"
Thu May 16 13:51:02 2013: thd-2ba30: amandad:
COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc"
Thu May 16 13:51:02 2013: thd-2ba30: amandad: getpeername returned: Socket 
operation on non-socket


I'm pretty sure this will come down to a simple protocol/auth config
issue, but I am not seeing my misstep. Please help.

thank you,

Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: ZFS Compression Oddities with Amanda

2013-05-01 Thread Brian Cuttler

I'd thought for both ZFS and the newer (LTO) tape drives
that HW-compression was determined on a block by block
basis (if enabled) so that expansion of data would not occur.

Granted, this does nothing to help with CPU usage, but I'd
thought it did save, rather, preserve, storage volume.

On Wed, May 01, 2013 at 01:41:35PM -0400, Jean-Louis Martineau wrote:
> Guy,
> 
> Are you also using amanda software compression?
> 
> Using compression on the holding disk is probably just a waste of CPU as 
> the data is compressed once and decompressed once, unless it is the only 
> way it can fit in the holding disk.
> 
> If you use amanda software compression, then the data is compressed on 
> holding disk and on tape.
> 
> Jean-Louis
> 
> On 05/01/2013 01:10 PM, Guy Sisalli wrote:
> >I'm attempting to commit 7 TB of text to tape. It's presently stored in
> >a natively gzip-9 compressed zvol, weighing in at 1.7 TB.
> >
> >My holding area is 5 TB, and is set to a native gzip-5 compression.
> >
> >The functional difference between gzip-5 and gzip-9 is not very much:
> >Level 9 compression has a 4-8% advantage over level 5. The entire DLE
> >(taken as files, not a snapshot) should fit quite comfortably in my
> >holding area. It didn't!
> >
> >I watched my holding area balloon to 4 TB and keep right on going, as if
> >it wasn't compressing at all. Is there any scenario in which this might
> >happen? Would you recommend against a setup like the one I've described?
> >I'm happy to offer any details needed, but this is probably a good start:
> >
> >Source:
> >
> >NAMEPROPERTY  VALUE  SOURCE
> >zulu01/keyRepo  type  filesystem -
> >zulu01/keyRepo  creation  Tue Jan  8 13:48 2013  -
> >zulu01/keyRepo  used  1.74T  -
> >zulu01/keyRepo  available 6.60T  -
> >zulu01/keyRepo  referenced1.74T  -
> >zulu01/keyRepo  compressratio 4.65x  -
> >zulu01/keyRepo  mounted   yes-
> >zulu01/keyRepo  quota none   default
> >zulu01/keyRepo  reservation   none   default
> >zulu01/keyRepo  recordsize128K   default
> >zulu01/keyRepo  mountpoint/tank/datastoredefault
> >zulu01/keyRepo  sharenfs  offdefault
> >zulu01/keyRepo  checksum  on default
> >zulu01/keyRepo  compression   gzip-9 local
> >
> >Hold:
> >
> >NAME  PROPERTY  VALUE  SOURCE
> >hold  type  filesystem -
> >hold  creation  Fri Dec  7 10:34 2012  -
> >hold  used  257M   -
> >hold  available 5.35T  -
> >hold  referenced243M   -
> >hold  compressratio 1.00x  -
> >hold  mounted   no -
> >hold  quota none   default
> >hold  reservation   none   default
> >hold  recordsize128K   default
> >hold  mountpoint/hold  default
> >hold  sharenfs  offdefault
> >hold  checksum  on default
> >hold  compression   gzip-5 local
> >
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: can't connect the stream

2013-04-25 Thread Brian Cuttler

ah ha!


DUMP SUMMARY:
  DUMPER STATS TAPER STATS
HOSTNAME DISK L ORIG-GB OUT-GB  COMP%  MMM:SS   KB/s MMM:SSKB/s
--- - --
stemcell /etc 1 0 0-- 0:14 8465.0   0:09 12780.0
  
(brought to you by Amanda version 3.1.2)


I had 
unreserved-tcp-port 1083,10101

instead of 

unreserved-tcp-port 10083,10101

I did not count the zeros.

Zero might not have existed in Roman numerals, but it's
important in our system, very very important.

Now to replicate the install on half a dozen other clients.

thank you,

Brian


On Thu, Apr 25, 2013 at 02:53:14PM -0400, Brian Cuttler wrote:
> 
> Jean-Louis,
> 
> On Thu, Apr 25, 2013 at 11:03:50AM -0400, Jean-Louis Martineau wrote:
> > On 04/25/2013 10:39 AM, Brian Cuttler wrote:
> > >Alright, _now_ I'm ready to solve the real problem...
> > >
> > >Server is Solaris 10 x86, Amanda server 3.1.2.
> > >
> > >Client, a CentOS 5x box has had amanda 2.5 removed
> > >in favor of 3.3.0-1.
> > >
> > >I've finally figured out the dumptype/auth/protocol; amcheck
> > >is running properly.
> > >
> > > From the amanda debug file, I find the config directory to be
> > >/etc/amanda, and the default config name "DailySet1", so I
> > >have copied the amanda.conf from the server to
> > >/etc/amanda/DailySet1/amanda-client.conf (on the client).
> The amandad process does not read the per-config amanda-client.conf.
> > Put it in /etc/amanda/amanda-client.conf
> 
> JML replies 
> > Btw. The bsdtcp auth, it is a lot easier to configure, that's why it is the 
> > default auth in 3.3
> 
> On the server I changed the DLE dumptype of my user-tar2 which
> specifies auth bsdtcp.
> 
> On the client changed the /etc/xinetd.d/amanda
> socket_type from dgram to stream
> protocol from udp to tcp
>   and
> server_args from -auth=bsd amdump to -auth=bsdtcp amdump
> 
> and restarted the xinetd on the client.
> 
> This didn't work... don't know what I missed, and reverted
> the changes because I'd thought I was close the other way...
> 
> Q1: What did I miss?
> 
> I moved the amanda-client.conf from /etc/amanda/DailySet1/ to
> /etc/amanda and began removing lines that the amdump report
> said where invalid. I quickly decided to google it and found
> the 15-minute install guide and trashed my amanda-client.conf,
> leaving only the following lines.
> 
> 
> [root@stema amanda]# more amanda-client.conf
> conf "curie"
> auth "bsd"
> unreserved-tcp-port 1083,10101
> reserved-udp-port 931,949
> 
> Now when I run # amdump curie stemcell-stage, on the server
> the amanda report shows:
> 
>   stemcell-stage /etc lev 1  FAILED [too many dumper retry: [could not
>   connect DATA stream: can't connect stream to stema.wadsworth.org port
>   1753: Connection timed out]]
> 
> Not sure where I'm goofing this up.
> 
> Not against using BSDTCP, just don't understand why I can't seem
> to configure it.
> 
>   thank you,
> 
>   Brian
> ---
>Brian R Cuttler brian.cutt...@wadsworth.org
>Computer Systems Support(v) 518 486-1697
>Wadsworth Center(f) 518 473-6384
>NYS Department of HealthHelp Desk 518 473-0773
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: can't connect the stream

2013-04-25 Thread Brian Cuttler

Jean-Louis,

On Thu, Apr 25, 2013 at 11:03:50AM -0400, Jean-Louis Martineau wrote:
> On 04/25/2013 10:39 AM, Brian Cuttler wrote:
> >Alright, _now_ I'm ready to solve the real problem...
> >
> >Server is Solaris 10 x86, Amanda server 3.1.2.
> >
> >Client, a CentOS 5x box has had amanda 2.5 removed
> >in favor of 3.3.0-1.
> >
> >I've finally figured out the dumptype/auth/protocol; amcheck
> >is running properly.
> >
> > From the amanda debug file, I find the config directory to be
> >/etc/amanda, and the default config name "DailySet1", so I
> >have copied the amanda.conf from the server to
> >/etc/amanda/DailySet1/amanda-client.conf (on the client).
> The amandad process does not read the per-config amanda-client.conf.
> Put it in /etc/amanda/amanda-client.conf

JML replies 
> Btw. The bsdtcp auth, it is a lot easier to configure, that's why it is the 
> default auth in 3.3

On the server I changed the DLE dumptype of my user-tar2 which
specifies auth bsdtcp.

On the client changed the /etc/xinetd.d/amanda
socket_type from dgram to stream
protocol from udp to tcp
  and
server_args from -auth=bsd amdump to -auth=bsdtcp amdump

and restarted the xinetd on the client.
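For reference, the file ended up looking roughly like this (a sketch;
paths and user are assumptions from my install):

service amanda
{
    disable     = no
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = amanda
    server      = /usr/local/libexec/amanda/amandad
    server_args = -auth=bsdtcp amdump
}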

This didn't work... don't know what I missed, and reverted
the changes because I'd thought I was close the other way...

Q1: What did I miss?

I moved the amanda-client.conf from /etc/amanda/DailySet1/ to
/etc/amanda and began removing lines that the amdump report
said where invalid. I quickly decided to google it and found
the 15-minute install guide and trashed my amanda-client.conf,
leaving only the following lines.


[root@stema amanda]# more amanda-client.conf
conf "curie"
auth "bsd"
unreserved-tcp-port 1083,10101
reserved-udp-port 931,949

Now when I run # amdump curie stemcell-stage, on the server
the amanda report shows:

  stemcell-stage /etc lev 1  FAILED [too many dumper retry: [could not
  connect DATA stream: can't connect stream to stema.wadsworth.org port
  1753: Connection timed out]]

Not sure where I'm goofing this up.

Not against using BSDTCP, just don't understand why I can't seem
to configure it.

thank you,

Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



can't connect the stream

2013-04-25 Thread Brian Cuttler

Alright, _now_ I'm ready to solve the real problem...

Server is Solaris 10 x86, Amanda server 3.1.2.

Client, a CentOS 5x box has had amanda 2.5 removed
in favor of 3.3.0-1.

I've finally figured out the dumptype/auth/protocol, amcheck
is running properly.

From the amanda debug file, I find the config directory to be
/etc/amanda, and the default config name "DailySet1", so I
have copied the amanda.conf from the server to
/etc/amanda/DailySet1/amanda-client.conf (on the client).

I still see port issues in the amdump report.

The next 2 tapes Amanda expects to use are: Curie504, Curie505.
FAILURE DUMP SUMMARY:
  stemcell-stage /usr/share lev 1  FAILED [too many dumper retry: [could not 
connect DATA stream: can't connect stream to stema.wadsworth.org port 48982: 
Connection timed out]]
  stemcell-stage /etc lev 1  FAILED [too many dumper retry: [could not connect 
DATA stream: can't connect stream to stema.wadsworth.org port 48985: Connection 
timed out]]

I'd thought the solution was adding the following to
the amanda-client.conf

unreserved-tcp-port 1083,10101
reserved-udp-port 931,949

Those values were chosen as they were the ranges used when I built
the amanda server.
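For reference, those ranges get baked into a source build with configure
options along these lines (a sketch; option names per Amanda's
configure --help, values matching my server's ranges):

./configure --with-tcpportrange=10083,10101 \
            --with-udpportrange=931,949 \
            ... (remaining options as usual) ...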

I know I'm close, but I'm just not getting it.

Note - I had expected to see the amandad debug file tell me whether
I was including the config file; I do NOT see that, though
I was sure I'd seen it in debug files on other amanda clients.

Perhaps I'm looking in the wrong debug file?
Perhaps my amanda-client.conf is in the wrong directory?

thank you,

    Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



problem with port range

2013-04-23 Thread Brian Cuttler

Amanda users,

Server is Solaris x86, with amanda 3.1.2, locally built
Client is CentOS 5.9 with amanda 2.5.0p2, package

I know I must have restricted port ranges on the server because
I pass through a firewall and have ipf.conf settings on a client
on the far side set to 

(edited for line length)
pass proto tcp from 'server' port 10083><10101 to any port 10083><10101
pass proto udp from 'server' port 931><949 to any port = amanda

I'm having trouble figuring out how to set the port range on the
new client, which was a pre-built package (and is back-rev, yet
that was what was found in the repository).

Amdump of the client has these errors in its report and backups
are not happening. Amcheck runs cleanly, I see daemons start, but
I do not see any data exchange.

  stemcell-stage /usr/share lev 1  FAILED [too many dumper retry: [could not
connect DATA stream: can't connect stream to stema.wadsworth.org port 51058:
Connection timed out]]

  stemcell-stage /etc lev 1  FAILED [too many dumper retry: [request failed:
timeout waiting for ACK]]

  stemcell-stage /var lev 0  FAILED [too many dumper retry: [could not connect
DATA stream: can't connect stream to stema.wadsworth.org port 54800:
Connection timed out]]

  stemcell-stage /export/bak lev 1  FAILED [too many dumper retry: [request
failed: timeout waiting for ACK]]

I have created an amanda-client.conf on the client.

Based on the client's amanda debug file I see 
'--with-config=DailySet1'
CONFIG_DIR="/etc/amanda"

I created /etc/amanda/DailySet1/amanda-client.conf

and attempted to add port information

unreserved-tcp-port 1083,10101
reserved-udp-port 931,949

but this is clearly not working.

Do I need a better kit?
Do I need to build it locally?
Do I have the wrong directory?
Am I using the wrong options or values?

Not sure what I'm doing wrong, any help appreciated.

    thank you,

Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: stream 0 accept failed: bad SECURITY line: ''

2013-04-18 Thread Brian Cuttler

I've been scratching my head over this, FW on the client looks ok to me.

[root@www-stage ~]# /sbin/iptables -L | grep curie
ACCEPT udp  --  curieb.wadsworth.org  anywhereudp spts:932:948 
dpt:amanda 
ACCEPT tcp  --  curieb.wadsworth.org  anywheretcp 
spts:10084:itap-ddtp dpts:10084:itap-ddtp 
ACCEPT udp  --  curie.wadsworth.org  anywhereudp spts:932:948 
dpt:amanda 
ACCEPT tcp  --  curie.wadsworth.org  anywheretcp 
spts:10084:itap-ddtp dpts:10084:itap-ddtp 


and then it occurred to me: the amanda server was built with port
restrictions, and I always do the same for my clients as I like to avoid
multiple builds if I can, and some client/server pairs have to
traverse our FW.

But this instance of the amanda client was installed from an rpm,
so I'm betting, and based on debug files, believe that the client
is trying to connect back to the server on ports that the server
is not listening to.

I believe I can use amanda-client.conf to restrict the ports, but
wanted to know if this solution seemed right to the amanda community;
to ask if there was another method, what the minimal amanda-client.conf
needed to contain, and which directory to use, as I've seen some of the
pre-built kits use directories other than /usr/local/etc/amanda/

Yah, a lot of questions, hopefully on the correct path.

thank you,

    Brian



On Fri, Apr 12, 2013 at 01:28:02PM -0700, Jean-Louis Martineau wrote:
> On 04/12/2013 11:52 AM, Brian Cuttler wrote:
> >
> >amandad: try_socksize: send buffer size is 65536
> >amandad: try_socksize: receive buffer size is 65536
> >amandad: time 3.128: bind_portrange2: trying port=831
> >amandad: time 3.129: stream_server: waiting for connection: 0.0.0.0.36507
> >amandad: try_socksize: send buffer size is 65536
> >amandad: try_socksize: receive buffer size is 65536
> >amandad: time 3.136: bind_portrange2: trying port=831
> >amandad: time 3.136: stream_server: waiting for connection: 0.0.0.0.38560
> >amandad: try_socksize: send buffer size is 65536
> >amandad: try_socksize: receive buffer size is 65536
> >amandad: time 3.143: bind_portrange2: trying port=831
> >amandad: time 3.144: stream_server: waiting for connection: 0.0.0.0.49357
> >amandad: time 3.144: sending REP pkt:
> ><<<<<
> >CONNECT DATA 36507 MESG 38560 INDEX 49357
> >OPTIONS features=feff9ffe07;
> 
> The server should connect to these ports, check the server dumper debug 
> files, try to disable firewall and selinux.
> 
> Jean-Louis
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: stream 0 accept failed: bad SECURITY line: ''

2013-04-12 Thread Brian Cuttler

Jean-Louis,

Yes, the ability to hit the socket makes sense, especially as
amcheck is ok and amdump, which uses many more network resources,
does not.

I'm not seeing the failures I'd expect to see, which may simply
mean I don't actually know what I'm looking for.

The failure "bad security" is confusing to me in terms of the
networking.

I've already been over the sockets on the new client with the
manager of that system, but will do so again on Monday morning.

[root@stackb ~]# /sbin/iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination 
RH-Firewall-1-INPUT  all  --  anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source   destination 
RH-Firewall-1-INPUT  all  --  anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination 

Chain RH-Firewall-1-INPUT (2 references)
target prot opt source   destination 
ACCEPT all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhereicmp any 
ACCEPT esp  --  anywhere anywhere
ACCEPT ah   --  anywhere anywhere
ACCEPT udp  --  anywhere 224.0.0.251 udp dpt:mdns 
ACCEPT udp  --  anywhere anywhereudp dpt:ipp 
ACCEPT tcp  --  anywhere anywheretcp dpt:ipp 
ACCEPT all  --  anywhere anywherestate 
RELATED,ESTABLISHED 
ACCEPT tcp  --  anywhere anywheretcp dpt:https 
ACCEPT udp  --  199.184.30.0/24  anywhereudp dpt:mysql 
ACCEPT udp  --  curieb.wadsworth.org  anywhereudp spts:932:948 
dpt:amanda 
ACCEPT tcp  --  curieb.wadsworth.org  anywheretcp 
spts:10084:itap-ddtp dpts:10084:itap-ddtp 
ACCEPT udp  --  curie.wadsworth.org  anywhereudp spts:932:948 
dpt:amanda 
ACCEPT tcp  --  curie.wadsworth.org  anywheretcp 
spts:10084:itap-ddtp dpts:10084:itap-ddtp 
ACCEPT tcp  --  199.184.30.0/24  anywheretcp dpt:mysql 
ACCEPT tcp  --  anywhere anywheretcp dpt:http 
ACCEPT tcp  --  anywhere anywherestate NEW tcp 
dpt:ssh 
REJECT all  --  anywhere anywherereject-with 
icmp-host-prohibited 



In the mean time, just for reference, I cleaned out the server's
files under /tmp/amanda and then ran amdump against the one client.

> amdump curie labsci-stage

I am attaching the /tmp/amanda tree as a tar file. Just so it's
not lost if we need to refer back to it later on.

thank you/good weekend,

    Brian

On Fri, Apr 12, 2013 at 01:28:02PM -0700, Jean-Louis Martineau wrote:
> On 04/12/2013 11:52 AM, Brian Cuttler wrote:
> >
> >amandad: try_socksize: send buffer size is 65536
> >amandad: try_socksize: receive buffer size is 65536
> >amandad: time 3.128: bind_portrange2: trying port=831
> >amandad: time 3.129: stream_server: waiting for connection: 0.0.0.0.36507
> >amandad: try_socksize: send buffer size is 65536
> >amandad: try_socksize: receive buffer size is 65536
> >amandad: time 3.136: bind_portrange2: trying port=831
> >amandad: time 3.136: stream_server: waiting for connection: 0.0.0.0.38560
> >amandad: try_socksize: send buffer size is 65536
> >amandad: try_socksize: receive buffer size is 65536
> >amandad: time 3.143: bind_portrange2: trying port=831
> >amandad: time 3.144: stream_server: waiting for connection: 0.0.0.0.49357
> >amandad: time 3.144: sending REP pkt:
> ><<<<<
> >CONNECT DATA 36507 MESG 38560 INDEX 49357
> >OPTIONS features=feff9ffe07;
> 
> The server should connect to these ports, check the server dumper debug 
> files, try to disable firewall and selinux.
> 
> Jean-Louis
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



stream 0 accept failed: bad SECURITY line: ''

2013-04-12 Thread Brian Cuttler

Hi Amanda users,

I'm running Amanda 3.1.2 on Solaris x86 and I'm trying to add
several linux clients. Linux version varies but the problems
are all similar, so I will select a specific instance.


amanda client
-

cat: /etc/lsb-release.d: Is a directory
CentOS release 5.9 (Final)


[root@stackb amanda]# yum list amanda
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: ftp.ussg.iu.edu
 * epel: mirror.cs.princeton.edu
 * extras: centos.corenetworks.net
 * updates: centos-mirror.jchost.net
Installed Packages
amanda.i386   2.5.0p2-9.el5   installed
amanda.x86_64 2.5.0p2-9.el5   installed


Amcheck runs ok, but amdump fails.

I'd thought it was, well, I think I have good reason to believe
it's the network authentication, so per articles I googled I
added server_args to /etc/xinetd.d/amanda, which now reads

service amanda
{
socket_type = dgram
protocol= udp
wait= yes
user= amanda
group   = disk
server  = /usr/lib64/amanda/amandad 
disable = no
server_args = -auth=bsd amdump
}

The failure seems to be shown in the tail end of the amandad
debug file, included here in its entirety.

thanks for your help,

    Brian




amandad.20130412143850.debug
amandad: debug 1 pid 18050 ruid 110 euid 110: start at Fri Apr 12 14:38:50 2013
amandad: version 2.5.0p2
amandad: build: VERSION="Amanda-2.5.0p2"
amandad:BUILT_DATE="Thu Feb 23 08:03:44 EST 2012"
amandad:BUILT_MACH="Linux builder10.centos.org 2.6.18-53.el5 #1 SMP Mon 
Nov 12 02:14:55 EST 2007 x86_64 x86_64 x86_64 GNU/Linux"
amandad:CC="gcc"
amandad:CONFIGURE_COMMAND="'./configure' '--build=x86_64-redhat-linux-gn
u' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--progra
m-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/
usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
 '--libdir=/usr/lib64' '--libexecdir=/usr/lib64/amanda' '--localstatedir=/var/li
b' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/i
nfo' '--enable-shared' '--disable-static' '--disable-dependency-tracking' '--wit
h-index-server=amandahost' '--with-tape-server=amandahost' '--with-config=DailyS
et1' '--with-gnutar-listdir=/var/lib/amanda/gnutar-lists' '--with-smbclient=/usr
/bin/smbclient' '--with-dumperdir=/usr/lib64/amanda/dumperdir' '--with-amandahos
ts' '--with-user=amanda' '--with-group=disk' '--with-tmpdir=/var/log/amanda' '--
with-gnutar=/bin/tar' '--with-ssh-security'"
amandad: paths: bindir="/usr/bin" sbindir="/usr/sbin"
amandad:libexecdir="/usr/lib64/amanda" mandir="/usr/share/man"
amandad:AMANDA_TMPDIR="/var/log/amanda"
amandad:AMANDA_DBGDIR="/var/log/amanda" CONFIG_DIR="/etc/amanda"
amandad:DEV_PREFIX="/dev/" RDEV_PREFIX="/dev/r"
amandad:DUMP="/sbin/dump" RESTORE="/sbin/restore" VDUMP=UNDEF
amandad:VRESTORE=UNDEF XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF
amandad:VXRESTORE=UNDEF SAMBA_CLIENT="/usr/bin/smbclient"
amandad:GNUTAR="/bin/tar" COMPRESS_PATH="/bin/gzip"
amandad:UNCOMPRESS_PATH="/bin/gzip" LPRCMD="/usr/bin/lpr"
amandad:MAILER="/usr/bin/Mail"
amandad:listed_incr_dir="/var/lib/amanda/gnutar-lists"
amandad: defs:  DEFAULT_SERVER="amandahost" DEFAULT_CONFIG="DailySet1"
amandad:DEFAULT_TAPE_SERVER="amandahost"
amandad:DEFAULT_TAPE_DEVICE="null:" HAVE_MMAP HAVE_SYSVSHM
amandad:LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
amandad:AMANDA_DEBUG_DAYS=4 BSD_SECURITY RSH_SECURITY USE_AMANDAHOSTS
amandad:CLIENT_LOGIN="amanda" FORCE_USERID HAVE_GZIP
amandad:COMPRESS_SUFFIX=".gz" COMPRESS_FAST_OPT="--fast"
amandad:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc"
amandad: time 0.004: accept recv REQ pkt:
<<<<<
SERVICE noop
OPTIONS features=9efefbff01;
>>>>>
amandad: time 0.005: creating new service: /usr

Re: all estimate timed out

2013-04-05 Thread Brian Cuttler

Chris,

I don't know what tif files look like internally, don't know how
they compress.

Just out of left field... does your zpool have compression
enabled? I realize zfs will compress or not on a per-block
basis, but I don't know what if any overhead is being incurred,
if the tif files are not compressed then there should be no
additional overhead to decompress them on read.

I would also probably hesitate to enable compression of a zfs
file system that was used for amanda work area, since you are
storing data that has already been zip'd. Though this also has
no impact on the estimate phase.
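If you want to rule that out quickly, both settings are one command
away (a sketch; the dataset name is taken from your df output):

# zfs get compression,compressratio J4500-pool1/herbarium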

Our site has tended to gzip --fast, rather than --best, and have
on a few our our amanda servers moved to pigz. Again, potential
amdump issues but not amcheck issues.

Sanity check, the zpool itself is healthy? The drives are all of
the same architecture and spindle speeds?
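Both are quick to check (a sketch):

# zpool status -x J4500-pool1   # prints "pool ... is healthy" if all is well
# iostat -xn 5                  # watch for one drive with outsized svc_t / %b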

good luck,

    Brian


On Fri, Apr 05, 2013 at 11:09:16AM -0400, Chris Hoogendyk wrote:
> Thank you!
> 
> Not sure why the debug file would list runtar in the form of a parameter, 
> when it's not to be used as such. Anyway, that got it working.
> 
> Which brings me back to my original problem. As indicated previously, the 
> filesystem in question only has 2806 files and 140 directories. As I watch 
> the runtar in verbose mode, when it hits the tif files, it is taking 20 
> seconds on each tif file. The tif files are scans of herbarium type 
> specimens and are pretty uniformly 200MB each. If I do a find on all the 
> tif files, piped to `wc -l`, there are 1300 of them. Times 20 seconds each 
> gives me the 26000 seconds that shows up in the sendsize debug file for 
> this filesystem.
> 
> So, why would these tif files only be going by at 10MB/s into /dev/null? No 
> compression involved. My (real) tapes run much faster than that. I also 
> pointed out that I have more than a dozen other filesystems on the same 
> zpool that are giving me no trouble (five 2TB drives in a raidz1 on a J4500 
> with multipath SAS).
> 
> Any ideas how to speed that up?
> 
> I think I may start out by breaking them down into sub DLE's. There are 129 
> directories corresponding to taxonomic families.
> 
> 
> On 4/4/13 8:05 PM, Jean-Louis Martineau wrote:
> >On 04/04/2013 02:48 PM, Chris Hoogendyk wrote:
> >>I may just quietly go nuts. I'm trying to run the command directly. In 
> >>the debug file, one example is:
> >>
> >>Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: Spawning 
> >>"/usr/local/libexec/amanda/runtar runtar daily 
> >>/usr/local/etc/amanda/tools/gtar --create --file /dev/null 
> >>--numeric-owner --directory /export/herbarium --one-file-system 
> >>--listed-incremental 
> >>/usr/local/var/amanda/gnutar-lists/localhost_export_herbarium_1.new 
> >>--sparse --ignore-failed-read --totals ." in pipeline
> >>
> >>So, I created a script working off that and adding verbose:
> >>
> >>   #!/bin/ksh
> >>
> >>   OPTIONS=" --create --file /dev/null --numeric-owner --directory 
> >>   /export/herbarium
> >>   --one-file-system --listed-incremental";
> >>   OPTIONS="${OPTIONS} 
> >>   /usr/local/var/amanda/gnutar-lists/localhost_export_herbarium_1.new 
> >>   --sparse
> >>   --ignore-failed-read --totals --verbose .";
> >>
> >>   COMMAND="/usr/local/libexec/amanda/runtar runtar daily 
> >>   /usr/local/etc/amanda/tools/gtar ${OPTIONS}";
> >>   #COMMAND="/usr/sfw/bin/gtar ${OPTIONS}";
> >
> >remove the 'runtar' argument
> >
> >>
> >>   exec ${COMMAND};
> >>
> >>
> >>If I run that as user amanda, I get:
> >>
> >>   runtar: Can only be used to create tar archives
> >>
> >>
> >>If I exchange the two commands so that I'm using gtar directly rather 
> >>than runtar, then I get:
> >>
> >>   /usr/sfw/bin/gtar: Cowardly refusing to create an empty archive
> >>   Try `/usr/sfw/bin/gtar --help' or `/usr/sfw/bin/gtar --usage' for more
> >>   information.
> 
> -- 
> ---
> 
> Chris Hoogendyk
> 
> -
>O__   Systems Administrator
>   c/ /'_ --- Biology & Geology Departments
>  (*) \(*) -- 140 Morrill Science Center
> ~~ - University of Massachusetts, Amherst
> 
> 
> 
> ---
> 
> Erdös 4
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: all estimate timed out

2013-04-04 Thread Brian Cuttler

Reply using thunderbird rather than mutt.

Any way to vet the zfs file system? Make sure it's sane and doesn't
contain some kind of a bad link causing a loop?

If you were to run the command used by estimate, which I believe
displays in the debug file, can you run that successfully on the
command line? If you run it verbose, can you see where it hangs or
where it slows down?

On 4/4/2013 12:34 PM, Chris Hoogendyk wrote:
Still getting blank emails on a test reply (just to myself) to Brian's 
emails. So, I'm replying to my own email to the list and then pasting 
in the reply to Brian. It's clearly a weirdness in the headers coming 
from Brian, but it could also be some misbehavior in response to those 
by my mail client -- Thunderbird 17.0.5.


I changed the dump type to not use compression. If tif files are not 
going to compress anyway, then I might as well not even ask Amanda to 
try. However, it never gets to the dump, because it gets "all estimate 
timed out."


I will try breaking it into multiple DLE's and also changing it to 
"server estimate". But, until I know what is really causing the 
problem, I'm not optimistic about the possibility of a successful dump.


As I said, everything else runs without trouble, including DLE's that 
are different zfs filesystems on the same zpool.



On 4/4/13 9:39 AM, Brian Cuttler wrote:

Chris,

sorry for the email trouble, this is a new phenomenon and I
don't know what is causing it, if you can identify the bad
header please let me know. We updated our mailhost a few months
ago, but my MUA (mutt) has not changed nor has my editor (emacs).

My "large" directories are exceptions, even here, and I am educating
the users to do things differently. However I do have lots of files
on zfs in general...

I don't believe that gzip is used in the estimate phase, I think
that it produces "raw" dump size for dump scheduling and that tape
allocation is left for later in the process. If gzip is used you
should see it in # ps, or top (or prstat), you could always  start
a dump after disabling estimate and see if that phase runs any better.
Since you can be sure of finishing estimate phase by checking
# amstatus, you can always abort the dump if you don't want a
non-compressed backup. (Jean-Louis will know off-hand)

How does the dump phase perform?


On Wed, Apr 03, 2013 at 05:42:12PM -0400, Chris Hoogendyk wrote:
For some reason, the headers in the particular message from the list
(from Brian) are causing my mail client or something to completely
strip the message so that it is blank when I reply. That is, I compose
a message, it looks good, and I send it. But then I get a blank bcc,
Brian gets a blank message, and the list gets a blank message. Weird.
So I'm replying to Christoph Scheeder's message and pasting in the
contents for replying to Brian. That will put the list thread somewhat
out of order, but better than completely disconnecting from the
thread. Here goes (for the third time):


---

So, Brian, this is the puzzle. Your file systems have a reason for
being difficult. They have "several hundred thousand files PER
directory."

The filesystem that is causing me trouble, as I indicated, only has
2806 total files and 140 total directories. That's basically nothing.

So, is this gzip choking on tif files? Is gzip even involved when
sending estimates? If I remove compression will it fix this? I could
break it up into multiple DLE's, but Amanda will still need estimates
of all the pieces.

Or is it something entirely different? And, if so, how should I go
about looking for it?



On 4/3/13 1:14 PM, Brian Cuttler wrote:

Chris,

for larger file systems I've moved to "server estimate", less
accurate but takes the entire estimate phase out of the equation.

We have had a lot of success with pigz rather than regular
gzip, as it'll take advantage of the multiple CPUs and give
parallelization during compression, which is often our bottleneck
during actual dumping. In one system I cut DLE dump time from
13 to 8 hours, a huge savings (I think those were the numbers,
I can look them up...).

ZFS will allow unlimited capacity, and enough files per directory
to choke access; we have backups that run very badly here, with
literally several hundred thousand files PER directory, and
multiple such directories.

For backups themselves, I do use snapshots where I can on my
ZFS file systems.

On Wed, Apr 03, 2013 at 11:26:01AM -0400, Chris Hoogendyk wrote:

This seems like an obvious "read the FAQ" situation, but . . .

I'm running Amanda 3.3.2 on a Sun T5220 with Solaris 10 and a 
J4500 "jbod"
disk array with multipath SAS. It all should be fast and is on the 
local
server, so there isn't any network path outside localhost for the 
DLE's
that are giving me trouble. They are zfs 

Re: all estimate timed out

2013-04-04 Thread Brian Cuttler

Chris,

sorry for the email trouble, this is a new phenomenon and I
don't know what is causing it, if you can identify the bad
header please let me know. We updated our mailhost a few months
ago, but my MUA (mutt) has not changed nor has my editor (emacs).

My "large" directories are exceptions, even here, and I am educating
the users to do things differently. However I do have lots of files
on zfs in general...

I don't believe that gzip is used in the estimate phase, I think
that it produces "raw" dump size for dump scheduling and that tape
allocation is left for later in the process. If gzip is used you
should see it in # ps, or top (or prstat), you could always  start
a dump after disabling estimate and see if that phase runs any better.
Since you can be sure of finishing estimate phase by checking
# amstatus, you can always abort the dump if you don't want a
non-compressed backup. (Jean-Louis will know off-hand)

How does the dump phase perform?


On Wed, Apr 03, 2013 at 05:42:12PM -0400, Chris Hoogendyk wrote:
> For some reason, the headers in the particular message from the list (from 
> Brian) are causing my mail client or something to completely strip the 
> message so that it is blank when I reply. That is, I compose a message, it 
> looks good, and I send it. But then I get a blank bcc, brian gets a blank 
> message, and the list gets a blank message. Weird. So I'm replying to 
> Christoph Scheeder's message and pasting in the contents for replying to 
> Brian. That will put the list thread somewhat out of order, but better than 
> completely disconnecting from the thread. Here goes (for the third time):
> 
> ---
> 
> So, Brian, this is the puzzle. Your file systems have a reason for being 
> difficult. They have "several hundred thousand files PER directory."
> 
> The filesystem that is causing me trouble, as I indicated, only has 2806 
> total files and 140 total directories. That's basically nothing.
> 
> So, is this gzip choking on tif files? Is gzip even involved when sending 
> estimates? If I remove compression will it fix this? I could break it up 
> into multiple DLE's, but Amanda will still need estimates of all the pieces.
> 
> Or is it something entirely different? And, if so, how should I go about 
> looking for it?
> 
> 
> 
> On 4/3/13 1:14 PM, Brian Cuttler wrote:
> >Chris,
> >
> >for larger file systems I've moved to "server estimate", less
> >accurate but takes the entire estimate phase out of the equation.
> >
> >We have had a lot of success with pigz rather than regular
> >gzip, as it'll take advantage of the multiple CPUs and give
> >parallelization during compression, which is often our bottleneck
> >during actual dumping. In one system I cut DLE dump time from
> >13 to 8 hours, a huge savings (I think those were the numbers,
> >I can look them up...).
> >
> >ZFS will allow unlimited capacity, and enough files per directory
> >to choke access; we have backups that run very badly here, with
> >literally several hundred thousand files PER directory, and
> >multiple such directories.
> >
> >For backups themselves, I do use snapshots where I can on my
> >ZFS file systems.
> >
> >On Wed, Apr 03, 2013 at 11:26:01AM -0400, Chris Hoogendyk wrote:
> >>This seems like an obvious "read the FAQ" situation, but . . .
> >>
> >>I'm running Amanda 3.3.2 on a Sun T5220 with Solaris 10 and a J4500 "jbod"
> >>disk array with multipath SAS. It all should be fast and is on the local
> >>server, so there isn't any network path outside localhost for the DLE's
> >>that are giving me trouble. They are zfs on raidz1 with five 2TB drives.
> >>Gnutar is v1.23. This server is successfully backing up several other
> >>servers as well as many more DLE's on the localhost. Output to an AIT5 
> >>tape
> >>library.
> >>
> >>I've upped the etimeout to 1800 and the dtimeout to 3600, which both seem
> >>outrageously long (jumped from the default 5 minutes to 30 minutes, and
> >>from the default 30 minutes to an hour).
> >>
> >>The filesystem (DLE) that is giving me trouble (hasn't backed up in a
> >>couple of weeks) is /export/herbarium, which looks like:
> >>
> >>marlin:/export/herbarium# df -k .
> >>Filesystemkbytesused   avail capacity  Mounted on
> >>J4500-pool1/herbarium
> >>  2040109465 262907572 177720189313%
> >>  /export/herbarium
> >>marlin:/export/herb

Re: all estimate timed out

2013-04-03 Thread Brian Cuttler

Chris,

for larger file systems I've moved to "server estimate", less
accurate but takes the entire estimate phase out of the equation.
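In dumptype terms it is a one-line change, something like this (a
sketch; the wrapper dumptype name is mine):

define dumptype user-tar-srvest {
    user-tar
    estimate server    # schedule from history instead of a client tar walk
}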

We have had a lot of success with pigz rather than regular
gzip, as it'll take advantage of the multiple CPUs and give
parallelization during compression, which is often our bottleneck
during actual dumping. In one system I cut DLE dump time from
13 to 8 hours, a huge savings (I think those were the numbers,
I can look them up...).
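The pigz swap also lives in the dumptype, roughly like this (a sketch,
assuming pigz is installed as /usr/bin/pigz; parameter names per
amanda.conf(5)):

define dumptype user-tar-pigz {
    user-tar
    compress client custom
    client-custom-compress "/usr/bin/pigz"
}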

ZFS will allow unlimited capacity, and enough files per directory
to choke access; we have backups that run very badly here, with
literally several hundred thousand files PER directory, and
multiple such directories.

For backups themselves, I do use snapshots where I can on my
ZFS file systems.

On Wed, Apr 03, 2013 at 11:26:01AM -0400, Chris Hoogendyk wrote:
> This seems like an obvious "read the FAQ" situation, but . . .
> 
> I'm running Amanda 3.3.2 on a Sun T5220 with Solaris 10 and a J4500 "jbod" 
> disk array with multipath SAS. It all should be fast and is on the local 
> server, so there isn't any network path outside localhost for the DLE's 
> that are giving me trouble. They are zfs on raidz1 with five 2TB drives. 
> Gnutar is v1.23. This server is successfully backing up several other 
> servers as well as many more DLE's on the localhost. Output to an AIT5 tape 
> library.
> 
> I've upped the etimeout to 1800 and the dtimeout to 3600, which both seem 
> outrageously long (jumped from the default 5 minutes to 30 minutes, and 
> from the default 30 minutes to an hour).
> 
> The filesystem (DLE) that is giving me trouble (hasn't backed up in a 
> couple of weeks) is /export/herbarium, which looks like:
> 
>marlin:/export/herbarium# df -k .
>Filesystemkbytesused   avail capacity  Mounted on
>J4500-pool1/herbarium
>  2040109465 262907572 177720189313% 
>  /export/herbarium
>marlin:/export/herbarium# find . -type f | wc -l
> 2806
>marlin:/export/herbarium# find . -type d | wc -l
>  140
>marlin:/export/herbarium#
> 
> 
> So, it is only 262G and only has 2806 files. Shouldn't be that big a deal. 
> They are typically tif scans.
> 
> One thought that hits me is: possibly, because it is over 200G of tif 
> scans, compression is causing trouble? But this is just getting estimates, 
> output going to /dev/null.
> 
> Here is a segment from the very end of the sendsize debug file from April 1 
> (the debug file ends after these lines):
> 
> Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: .
> Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: estimate time for 
> /export/herbarium level 0: 26302.500
> Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: estimate size for 
> /export/herbarium level 0: 262993150 KB
> Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: waiting for runtar 
> "/export/herbarium" child
> Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: after runtar 
> /export/herbarium wait
> Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: getting size via gnutar for 
> /export/herbarium level 1
> Mon Apr  1 08:05:49 2013: thd-32a58: sendsize: Spawning 
> "/usr/local/libexec/amanda/runtar runtar daily 
> /usr/local/etc/amanda/tools/gtar --create --file /dev/null --numeric-owner 
> --directory /export/herbarium --one-file-system --listed-incremental 
> /usr/local/var/amanda/gnutar-lists/localhost_export_herbarium_1.new 
> --sparse --ignore-failed-read --totals ." in pipeline
> Mon Apr  1 10:16:17 2013: thd-32a58: sendsize: Total bytes written: 
> 77663795200 (73GiB, 9.5MiB/s)
> Mon Apr  1 10:16:17 2013: thd-32a58: sendsize: .
> Mon Apr  1 10:16:17 2013: thd-32a58: sendsize: estimate time for 
> /export/herbarium level 1: 7827.571
> Mon Apr  1 10:16:17 2013: thd-32a58: sendsize: estimate size for 
> /export/herbarium level 1: 75843550 KB
> Mon Apr  1 10:16:17 2013: thd-32a58: sendsize: waiting for runtar 
> "/export/herbarium" child
> Mon Apr  1 10:16:17 2013: thd-32a58: sendsize: after runtar 
> /export/herbarium wait
> Mon Apr  1 10:16:17 2013: thd-32a58: sendsize: done with amname 
> /export/herbarium dirname /export/herbarium spindle 45002
> 
> 
> -- 
> ---
> 
> Chris Hoogendyk
> 
> -
>O__   Systems Administrator
>   c/ /'_ --- Biology & Geology Departments
>  (*) \(*) -- 140 Morrill Science Center
> ~~ - University of Massachusetts, Amherst
> 
> 
> 
> ---
> 
> Erdös 4
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Newbie question.

2013-03-26 Thread Brian Cuttler

Erik,

Yes - if you have enough disk to configure "virtual tapes" you
can backup without physical tape. The docs are more than sufficient
and numerous people on the amanda list have configured their amanda
servers this way, myself included.

On Tue, Mar 26, 2013 at 09:08:23PM +0100, Erik P. Olsen wrote:
> Hi,
> 
> I am looking at Amanda to solve my backup needs. Can Amanda work without 
> tape devices?
> 
> I need to backup a Linux box, Fedora 18, which will also act as the server, 
> and a Windows 7 box, which will only act as client.
> 
> Is that feasible? Which document will guide me through  the configuration 
> of such backup scheme?
> 
> Thanks in advance,
> 
> -- 
> Erik
> 
> Concordia parvae res crescunt discordia maximae dilabuntur
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Amanda Performance

2013-03-15 Thread Brian Cuttler

Amit,

Did I understand you to say that you are not using an amanda
work area, an area on the server for temporary files?
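For reference, that work area is the holding disk, declared in
amanda.conf along these lines (a sketch; the path and size are
examples):

holdingdisk hd1 {
    directory "/dumps/amanda"
    use -200 mbytes    # all free space, minus 200 MB
}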

Brian

On Fri, Mar 15, 2013 at 08:15:38AM -0400, Jean-Louis Martineau wrote:
> On 03/15/2013 12:11 AM, Amit Karpe wrote:
> >
> >I did not able to observe parallel processing. I can see only one 
> >dumping at a time:
> >-bash-4.0$ amstatus DailySet2  | grep dumping
> >bengkulu:/var  0 8g dumping6g ( 73.75%) (11:52:57)
> >wait for dumping:   00g   (  0.00%)
> >dumping to tape :   00g   (  0.00%)
> >dumping :   1 6g 8g ( 73.75%) ( 18.47%)
> >-bash-4.0$
> 
> amstatus have so much more information, can you post the complete output 
> or better, post the amdump.X file.
> Can you also post the email report or the log..0 file.
> 
> You posted a lot of number about your hardware and you said you monitor 
> it, but you never said how much you are close to the hardware limit.
> You posted no number about amanda performance (except total time and 
> size) and which number you think can be improved.
> 
> Jean-Louis
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: Amanda Performance

2013-03-14 Thread Brian Cuttler

Amit,

I don't think you told us how many client systems, compression
can be done on the client or the server. Also, besides the inparallel
and maxdump settings, are you short on work area - as Jean-Louis
said, the amplot output will help you spot those bottlenecks.
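For reference, the knobs in question (values here are only examples):

# amanda.conf, global section
inparallel 8          # dumpers running at once
netusage 8000 Kbps    # bandwidth allowed across all dumpers

# in the dumptype
maxdumps 2            # concurrent dumps from a single client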

Brian

On Thu, Mar 14, 2013 at 08:27:11AM -0400, Jean-Louis Martineau wrote:
> Compression is often a CPU bottleneck, did you check for cpu usage? You 
> can try to use pigz instead of gzip if you have available core.
> 
> How many dump are you doing in parallel? You can try to increase 
> inparallel, netusage and/or maxdumps.
> 
> You can use amplot and amstatus to check amanda performance.
> 
> Jean-Louis
> 
> 
> On 03/13/2013 10:44 PM, Amit Karpe wrote:
> >Hi all,
> >I am using Amanda to take backup weekly & monthly. For monthly backup 
> >which is 2.5 to 2.7TB in size after backup with compression, it take 
> >4-5 days. (Total size is around 6-7 TB, and there 52 entries DLEs, 
> >from 10 different host in network. I am backuping on NAS, where I have 
> >19T total space.)
> >Off course there are various parameter we have to consider to claim 
> >whether it is slow process or not.
> >Could you please let me know how should I check and compare whether my 
> >backup process is slow or not ?
> >Which are main parameter which affect Amanda Performance ?
> >Which tool I should use to check Amanda Performance ?
> >Currently I am using following steps:
> >
> >1. I have started monthly backup.
> >2. Using bandwidth monitoring tools i.e. ntop, bmon I am checking 
> >Backup Server to NAS bandwidth usage & trafic status.
> >3. Using iotop I am checking status / speed of io operation.
> >4. There are other few tools, which may help to understand io, had 
> >disk usage. But as my backup directory is not a local device, (I have 
> >mounted as nfs directory) I can't run hdparm or iostat directly.
> >5. Monitoring NAS's admin interface for its bandwidth usage.
> >6. Currently I am checking for some spastics, which help to compare 
> >with my current setup.
> >
> >Still I can't understand whether I going right way or not !
> >It will be if you help me here.
> >
> >-- 
> >Regards
> >Amit Karpe.
> >http://www.amitkarpe.com/
> >http://news.karpe.net.in/
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



amcheck/amdump error on zfs ng-zone root

2013-02-15 Thread Brian Cuttler

amanda 3.1.2, solaris x86 server, solaris x86  client, client != server

client successfully backing up.

We moved a zpool from another machine and imported several
non-global zones onto the client.

Underlying mount points seem to be protection 700, we now see the
following error from amcheck.

ERROR: mailserv: Script 'amzfs-snapshot' command 'POST-DLE-AMCHECK' exited with 
status 1: see /tmp/amanda/client/curie/selfcheck.20130215114153.debug
Client check: 1 host checked in 9.278 seconds.  32 problems found.

amdump produces its own errors, but the file systems actually
seem to be backing up just fine.

The success of the backups would be due to amanda's runtar, which is suid root.

The failure, at least according to the message above, would be amanda
attempting to run # df, without sufficient access.

- Me? I think the restrictive permissions on the mount points are
  not good protection; if you've taken over the global zone,
  you've taken over the non-global zones. Restricting the mount point
  permissions will not contain a breach in a non-global zone; that is
  not where the access lives.

Probably not an argument I have the energy to make with the other admin.
May need to ACL the mount points to allow amanda access (just because the
errors make the amanda output files very ugly and are really false
negatives, as far as overall success is concerned).
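The workaround I have in mind is along these lines (an untested
sketch, Solaris NFSv4 ACL syntax; the mount point path is
illustrative):

# chmod A+user:amanda:read_data/execute:allow /zones/mailserv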

Is there an in-Amanda solution?

I'll post my work-around to the list, once I've gotten around to
working on this and have it tested.

thanks,

        Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: scheduling of individual DLE

2013-01-25 Thread Brian Cuttler

Jean-Louis,

Thank you, not sure what I was looking for (in http://wiki.zmanda.com's
amanda.conf page), perhaps I'd made assumptions about the parameter name
(must have been looking for some keyword not in the description, bad on
me) but it is clearly present and described.

I'll include this in a new definition for this client's DLE and see
how it works for us.
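Something like this is what I have in mind (a sketch; the dumptype
name and time are examples, format per the man page excerpt quoted
below):

define dumptype user-tar-late {
    user-tar
    starttime 2230    # hh*100+mm, i.e. don't start before 10:30 PM
}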

thank you,

    Brian

On Fri, Jan 25, 2013 at 11:08:41AM -0500, Jean-Louis Martineau wrote:
> On 01/25/2013 10:12 AM, Brian Cuttler wrote:
> >Hi Amanda users,
> >
> >Think I saw a reference to this at some point, but not finding
> >what I'm looking for on the wiki or in google (I may not be doing
> >the right search).
> >
> >I'm being asked by the admin of a specific machine if I can dump
> >the DLEs on that client later in the evening, they need to do some
> >wind-down after closing databases for the day.
> >
> >I don't really want to push amdump run's cron job later into the
> >evening, there are a lot of clients for this server and we want to
> >do what we can to make sure we are finished before start of business
> >the next day.
> >
> >Is there a way to delay the start of dump for the DLE during the
> >amanda run? I'd imagine a dumptype parameter, but I don't see it
> >listed and suspect that the implementation would be non-trivial, but
> >useful for cases like ours.
> 
> man amanda.conf:
> 
> DUMPTYPE SECTION
>starttime int
>Default: not set. Backup of these disks will not start until 
> after
>this time of day. The value should be hh*100+mm, e.g. 6:30PM
>(18:30) would be entered as 1830.
> 
> I never use or tested this feature, let me know if it still works.
> 
> Jean-Louis
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



scheduling of individual DLE

2013-01-25 Thread Brian Cuttler
Hi Amanda users,

Think I saw a reference to this at some point, but not finding
what I'm looking for on the wiki or in google (I may not be doing
the right search).

I'm being asked by the admin of a specific machine if I can dump
the DLEs on that client later in the evening, they need to do some
wind-down after closing databases for the day.

I don't really want to push amdump run's cron job later into the
evening, there are a lot of clients for this server and we want to
do what we can to make sure we are finished before start of business
the next day.

Is there a way to delay the start of dump for the DLE during the
amanda run? I'd imagine a dumptype parameter, but I don't see it
listed and suspect that the implementation would be non-trivial, but
useful for cases like ours.

thank you,

    Brian
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: FreeBSD 8.3 killed my Amanda

2012-11-19 Thread Brian Cuttler

Olivier,

Did some parameter in amanda.conf get reset?

Where is the failure occurring? Estimate phase ("etimeout")?
Is the error in a consistent place?
Was there a change to the version of gtar being used? Is there
an incompatibility between the gtar and amanda versions that only
shows up on the large (or possibly the only compressed) DLE?

Need a little more to make any sort of targeted guess.

On Mon, Nov 19, 2012 at 04:30:17PM +0700, Olivier Nicole wrote:
> Hi,
> 
> I apologize for coming crying here, but since I updated my amanda
> server to FreeBSD 8.3 (from 7.4), any big DLE will fail.
> 
> I tried many versions of Amanda (2.5, 2.6, 3.3), with no success.
> 
> Before I start sending debug, maybe there is an obvious action I have
> forgotten.
> 
> I have tried, from the client side to tar|gzip|ssh cat >/dev/null the
> big DLE, and it went on with no problem.
> 
> best regards,
> 
> Olivier
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: backing up ZFS

2012-10-29 Thread Brian Cuttler

Gour,

On Sat, Oct 27, 2012 at 10:24:04AM +0200, Gour wrote:
> On Fri, 26 Oct 2012 11:40:20 -0400
> Brian Cuttler  wrote:
> 
> > That is odd, and not reflective of output for ZFS file systems
> > on a Solaris box.
> 
> Hmm..
> 
> > Clearly the script will not work for you as intended.
> 
> You mean dump or even gnutar?

No, I meant the script that takes your zfs list output and turns
it into DLEs. You should be able to dump the zfs file systems
with (an appropriate) tar using Amanda.

You can amend the script, or, since it's been run once, you can
stop running it and amend the static output it created.

> > I have no clue what tank0/ROOT/default contains, suspect
> > /root/default is the mount point, but you'd know and I
> > can only guess. Suspect tank0/ROOT is in fact /root.
> 
> Here I'll include df-h output as well:

If I'm reading correctly (caffeine dependent)

tank0/ROOT/default is /
tank0/root is /root

both of which should be backed up.

and both of these
tank0
tank0/ROOT

are the root of the ZFS file system and not mounted to the system 
per se.

 * Someone with more experience (than I have, which is pretty much
   none) with ZFS for linux may want to weigh in. *

There is NO substitute for an attempted restore; see if you have
actually backed up what you think you have backed up.

Practice the restore, we have yet to restore a system at my site
with a ZFS boot and in fact several of our systems have UFS boot
drives, even though they may have quite large, or even multiple
zpools. (ZFS is currently only used on Solaris at my site)
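A practice restore can be as simple as pulling one directory back
with amrecover (a sketch; the config, host and paths are examples):

# amrecover DailySet1
amrecover> sethost atmarama
amrecover> setdisk /usr/home
amrecover> add gour
amrecover> extract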



> [gour@atmarama gour] df -h
> Filesystem                 Size    Used   Avail Capacity  Mounted on
> tank0/ROOT/default         368G    5.5G    363G     1%    /
> devfs                      1.0k    1.0k      0B   100%    /dev
> procfs                     4.0k    4.0k      0B   100%    /proc
> linprocfs                  4.0k    4.0k      0B   100%    /compat/linux/proc
> tank0/root                 363G     40M    363G     0%    /root
> tank0/tmp                  363G    126k    363G     0%    /tmp
> tank0/usr/home             363G     73k    363G     0%    /usr/home
> tank0/usr/home/gour        893G    529G    363G    59%    /usr/home/gour
> tank0/usr/jails            363G     83M    363G     0%    /usr/jails
> tank0/usr/obj              363G     31k    363G     0%    /usr/obj
> tank0/usr/pbi              374G     11G    363G     3%    /usr/pbi
> tank0/usr/ports            363G    675M    363G     0%    /usr/ports
> tank0/usr/ports/distfiles  363G    106M    363G     0%    /usr/ports/distfiles
> tank0/usr/src              363G    435M    363G     0%    /usr/src
> tank0/var/audit            363G     31k    363G     0%    /var/audit
> tank0/var/log              363G    626k    363G     0%    /var/log
> tank0/var/tmp              363G    114k    363G     0%    /var/tmp
> 
> 
> and zfs list (again):
> 
> [gour@atmarama gour] zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> tank0                      550G   363G    31K  legacy
> tank0/ROOT                5.52G   363G    31K  legacy
> tank0/ROOT/default        5.52G   363G  5.52G  /mnt
> tank0/root                40.1M   363G  40.1M  /root
> tank0/swap                2.06G   365G  37.8M  -
> tank0/tmp                  127K   363G   127K  /tmp
> tank0/usr                  543G   363G    31K  /mnt/usr
> tank0/usr/home             530G   363G  73.5K  /usr/home
> tank0/usr/home/gour        530G   363G   530G  /usr/home/gour
> tank0/usr/jails           83.8M   363G  83.8M  /usr/jails
> tank0/usr/obj               31K   363G    31K  /usr/obj
> tank0/usr/pbi             11.6G   363G  11.6G  /usr/pbi
> tank0/usr/ports            782M   363G   676M  /usr/ports
> tank0/usr/ports/distfiles  106M   363G   106M  /usr/ports/distfiles
> tank0/usr/src              435M   363G   435M  /usr/src
> tank0/var                  802K   363G    31K  /mnt/var
> tank0/var/audit             31K   363G    31K  /var/audit
> tank0/var/log              626K   363G   626K  /var/log
> tank0/var/tmp              114K   363G   114K  /var/tmp
> 
> 
> Does it help?
> 
> > I can not be 100% certain about linux, but as far as Solaris
> > goes (I believe linux to be the same) dump will only dump ext
> > type, ufs, xfs etc type file systems, and that tar is always used
> > with ZFS file systems.
> 
> I'm on Free(PC)BSD.
> 
> 
> Sincerely,
> Gour
> 
> -- 
> A person who has given up all desires for sense gratification, 
> who lives free from desires, who has given up all sense of 
> proprietorship and is d
