Re: Another dumper question

2018-11-26 Thread Nathan Stratton Treadway
On Mon, Nov 26, 2018 at 17:07:54 -0500, Chris Hoogendyk wrote:
> My understanding was that Amanda cannot throttle the throughput, but
> it can see what it is. If it is at or over the limit, it won't start
> another dump. So, you can end up over the limit that you have
> specified for periods of time, but it won't just continue increasing
> without respect for the limit.

To expand on this slightly (and tie it back to the beginning of this
thread): what Amanda actually looks at is the dump-size/time figure from
the "info" database it keeps on each DLE.  So, it's not looking at
dynamic real-time network bandwidth usage, but rather calculating what
the average throughput turned out to be the last time this DLE was
dumped.

As you say, the important thing to note is that Amanda uses this info to
do throttling at the dump/dumper level: when there is a free dumper, it
will send a request for a new DLE to that dumper only if the bandwidth
usage calculated for currently-in-progress dumps doesn't exceed the
configured limit.

If the configured limit is too high, the risk is that too many
simultaneous dumps, especially of very fast client systems, will
saturate the network interface (or some part of the network
infrastructure) -- while if it's too low, the symptom will be that some
of the configured "inparallel" dumpers will be left idle.  (But the nice
thing about having Amanda do this calculation is that it can go ahead
and kick off more dumpers when working on slow client systems and fewer
when processing the fast ones.)
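
For reference, the knob in question is a single amanda.conf line.  A minimal
sketch, with a purely illustrative value; per the amanda.conf man page the
number is interpreted in Kbytes per second unless you give a unit:

    netusage 80000 Kbps    # bandwidth budget shared by all running dumps

Amanda adds up the historical throughput figures of the dumps already in
progress and compares the total against this number before handing a new DLE
to an idle dumper.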

Anyway, in the case of the original email in this thread, the problem
seems to be that the calculated bandwidth for the single DLE in process
of being dumped already exceeds the configured bandwidth limit (as
represented by the amstatus report line "network free kps: 0") -- and
thus the other 9 dumpers are all left idle even though there are many
DLEs out there waiting to be dumped.


Nathan 


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Another dumper question

2018-11-26 Thread Gene Heskett
On Monday 26 November 2018 17:07:54 Chris Hoogendyk wrote:

> On 11/26/18 4:37 PM, Gene Heskett wrote:
> > On Monday 26 November 2018 15:13:43 Chris Nighswonger wrote:
> >> On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
> >>
> >>  wrote:
> >>> On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
>  On 2018-11-26 13:34, Chris Nighswonger wrote:
>  The other possibility that comes to mind is that your bandwidth
>  settings are making Amanda decide to limit to one dumper at a
>  time.
> >>>
> >>> Chris, this is certainly the first thing to look at: note in your
> >>>
> >>> amstatus output the line "network free kps: 0":
> > 9 dumpers idle  : 0
> > taper status: Idle
> > taper qlen: 1
> > network free kps: 0
> > holding space   : 436635431k ( 50.26%)
> >>
> >> Hmm... I missed that completely. I'll set it arbitrarily high as
> >> Austin suggested and test it overnight.
> >
> > I was told once, a decade or more back up the log, that amanda made
> > no use of that setting. It just used what it needed or whatever the
> > hardware supported.
> >
> > When did it become an actively used setting?
> >
> > Copyright 2018 by Maurice E. Heskett
>
> My understanding was that Amanda cannot throttle the throughput, but
> it can see what it is. If it is at or over the limit, it won't start
> another dump. So, you can end up over the limit that you have
> specified for periods of time, but it won't just continue increasing
> without respect for the limit.

That makes perfect sense; no one has explained it in that light before.  Not
that it's going to bother me: only 2 of my machines have active drives
that aren't in the DLE file.  Scratchpad drives for devel work.


Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 


Re: Another dumper question

2018-11-26 Thread Chris Hoogendyk




On 11/26/18 4:37 PM, Gene Heskett wrote:

On Monday 26 November 2018 15:13:43 Chris Nighswonger wrote:


On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway

 wrote:

On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:

On 2018-11-26 13:34, Chris Nighswonger wrote:
The other possibility that comes to mind is that your bandwidth
settings are making Amanda decide to limit to one dumper at a
time.

Chris, this is certainly the first thing to look at: note in your

amstatus output the line "network free kps: 0":

9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)

Hmm... I missed that completely. I'll set it arbitrarily high as
Austin suggested and test it overnight.

I was told once, a decade or more back up the log, that amanda made no
use of that setting. It just used what it needed or whatever the
hardware supported.

When did it become an actively used setting?

Copyright 2018 by Maurice E. Heskett


My understanding was that Amanda cannot throttle the throughput, but it can see what it is. If it is 
at or over the limit, it won't start another dump. So, you can end up over the limit that you have 
specified for periods of time, but it won't just continue increasing without respect for the limit.


--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geosciences Departments
 (*) \(*) -- 315 Morrill Science Center
~~ - University of Massachusetts, Amherst



---

Erdös 4



Re: Another dumper question

2018-11-26 Thread Gene Heskett
On Monday 26 November 2018 15:14:51 Nathan Stratton Treadway wrote:

> On Mon, Nov 26, 2018 at 14:15:41 -0500, Chris Nighswonger wrote:
> > That makes more sense. I've set it to 10 and inparallel to 2 and
> > we'll see how it goes tonight.
> >
> > On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
> >
> >  wrote:
> > > I believe maxdumps is max concurrent dumps across all clients.
> > >
> > > You might have 10 clients each with an inparallel of 2, giving 20
> > > possible concurrent dumps, but because of server limitations you
> > > might set maxdumps to something between 2 and 20.
>
> Actually, it is the other way around: when you kick off amdump, the
> driver retrieves the "inparallel" configuration option and then starts
> up that number of dumper subprocesses -- so "inparallel" controls the
> total number of dumpers available for the whole run.
>
> In contrast, the amanda.conf man page section on maxdumps says:  "The
> maximum number of backups from a single host that Amanda will attempt
> to run in parallel." So that controls how many (out of the
> "inparallel" total dumpers available) might simultaneously run on one
> host at the same time.
>
> (Maxdumps defaults to 1 to avoid two dumpers on the same client
> competing with each other for disk I/O [or worse, thrashing the disk
> heads by interweaving reads from different places on the device], CPU
> [e.g. for compression], and/or network bandwidth.  This seems likely
> to be a reasonable default, though I guess if you had multiple
> physical disk devices each of which was relatively slow compared to
> CPU and network bandwidth then it would make sense to run more than
> one dump at a time.  But I'd do careful testing to make sure that's
> not actually slowing down the overall dump.)
>
>
>   Nathan

AIUI, this is where the "spindle" number in your DLE comes into play, and
unless that particular client has more than 1 active disk, it should
remain at 1.  Now if, like some, you have /home on a separate drive from
the root filesystem, you may find it helpful to put a 2 in those DLEs
that are on that client's 2nd drive, and it should then run two dumpers
in independent dumps on THAT client.
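
As a minimal disklist sketch of that layout (host, mount points, and dumptype
name are made up; the trailing number is the spindle field):

    client.example.com  /      comp-user-tar  1
    client.example.com  /home  comp-user-tar  2

With maxdumps set above 1 for that host, Amanda can dump the two spindles at
the same time, while DLEs sharing a spindle number still go one at a time.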

> --
>-- Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic
> region Ray Ontko & Co.  -  Software consulting services  -  
> http://www.ontko.com/ GPG Key:
> http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239 Key
> fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 


Re: Another dumper question

2018-11-26 Thread Gene Heskett
On Monday 26 November 2018 15:13:43 Chris Nighswonger wrote:

> On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
>
>  wrote:
> > On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
> > > On 2018-11-26 13:34, Chris Nighswonger wrote:
> > > The other possibility that comes to mind is that your bandwidth
> > > settings are making Amanda decide to limit to one dumper at a
> > > time.
> >
> > Chris, this is certainly the first thing to look at: note in your
> >
> > amstatus output the line "network free kps: 0":
> > > >9 dumpers idle  : 0
> > > >taper status: Idle
> > > >taper qlen: 1
> > > >network free kps: 0
> > > >holding space   : 436635431k ( 50.26%)
>
> Hmm... I missed that completely. I'll set it arbitrarily high as
> Austin suggested and test it overnight.

I was told once, a decade or more back up the log, that amanda made no 
use of that setting. It just used what it needed or whatever the  
hardware supported.

When did it become an actively used setting?

Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 


Re: Another dumper question

2018-11-26 Thread Debra S Baddorf



> On Nov 26, 2018, at 2:24 PM, Austin S. Hemmelgarn  
> wrote:
> 
> On 2018-11-26 15:13, Chris Nighswonger wrote:
>> On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
>>  wrote:
>>> 
>>> On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
 On 2018-11-26 13:34, Chris Nighswonger wrote:
 The other possibility that comes to mind is that your bandwidth
 settings are making Amanda decide to limit to one dumper at a time.
>>> 
>>> Chris, this is certainly the first thing to look at: note in your
>>> amstatus output the line "network free kps: 0":
>>> 
>>> 
> 9 dumpers idle  : 0
> taper status: Idle
> taper qlen: 1
> network free kps: 0
> holding space   : 436635431k ( 50.26%)
>> Hmm... I missed that completely. I'll set it arbitrarily high as
>> Austin suggested and test it overnight.
> Don't feel bad, it's not something that gets actively used by a lot of 
> people, so most people don't really think about it.  If used right though, it 
> provides the rather neat ability to have Amanda limit it's network 
> utilization while running backups, which is really helpful if you have to run 
> backups during production hours for some reason.

Or if production hours are 24/7/366  !   (high energy physics accelerator lab)

Deb Baddorf
Fermilab




Re: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Nathan Stratton Treadway
On Mon, Nov 26, 2018 at 20:00:09 +, Debra S Baddorf wrote:
> promoting a DLE to do an early level 0. Doing 21 runs in NN minutes
> will use up all your vtapes, but will not force more than 1 level 0.

(Yes, you are correct -- that's why I was asking about the output of
"amoverview" for him last week, etc)


On Mon, Nov 26, 2018 at 20:19:06 +, Cuttler, Brian R (HEALTH) wrote:
> Based on the latest emails I think Chris may have moved on, but he has
> these additional answers for when he cycles back.
> 

(Note that the emails today are from a different Chris -- I don't
believe we've heard back from Chris Miller since last week)

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


RE: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Cuttler, Brian R (HEALTH)
Depending on what I'm testing and its importance I may just # cp 
tapelist.yesterday tapelist or manually alter it.

-Original Message-
From: Debra S Baddorf  
Sent: Monday, November 26, 2018 3:25 PM
To: Cuttler, Brian R (HEALTH) 
Cc: Debra S Baddorf ; Debra S Baddorf ; 
amanda-users 
Subject: Re: Configuration Rollback [Was: Reusable]



> On Nov 26, 2018, at 2:19 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
>
> Deb,
>
> I'm with you, periodically I have to test something, or will start an 
> additional backup to help move data after a failure of some sort but in 
> general I allow cron to start # amdump once/day. My understanding matches 
> yours, balance will get thrown off, amanda may advance dumps but you chew up 
> a lot of tape and do a lot of I/O for little overall gain if you are running 
> multiple dumps as a matter of course.

When I have something that failed/was missed  OR a new node or DLE to add,  I 
do amdump config —no-taper  node  DLE so test the dump,  but leave the results 
on my holding disk.  It’ll be flushed when tonight’s backup starts.
So I don’t HAVE to chew up tape, just to test things.   :)

Deb Baddorf


>
> I have even gone so far as run run multiple dumps across a very small 
> (perhaps "1") DLE to test include/exclude or get a new dumptype (snapshot) or 
> compress(pig/parallel zip) tested, but do not run multiple dumps as a matter 
> of routine.
>
> In my mind running many small DLEs can be self-defeating, as can running very 
> few very large once, each hitting a different set of constraints.
>
> My samba shares are on separate ZFS mount points and I snapshot them. My home 
> directories are also on separate ZFS mount points but individual backups were 
> untenable so I glob them by letter, but that means I can't do snapshots.
>
> Based on the latest emails I think Chris may have moved on, but he has these 
> additional answers for when he cycles back.
>
> Thanks,
> Brian
>
>
> -Original Message-
> From: Debra S Baddorf 
> Sent: Monday, November 26, 2018 3:00 PM
> To: amanda-users 
> Cc: Debra S Baddorf ; Debra S Baddorf 
> ; Cuttler, Brian R (HEALTH) 
> 
> Subject: Re: Configuration Rollback [Was: Reusable]
>
>
>
>>
>> -Original Message-
>> From: owner-amanda-us...@amanda.org  
>> On Behalf Of Debra S Baddorf
>> Sent: Monday, November 26, 2018 2:04 PM
>> To: amanda-users 
>> Cc: Debra S Baddorf 
>> Subject: Re: Configuration Rollback [Was: Reusable]
>>
>>
>>
>>> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger 
>>>  wrote:
>>>
>>> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
>>> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
 Hi Folks,

 I have written some very small DLEs so I can rip through weeks of 
 backups in minutes. I've learned some things.
>>>
>>
>> Am I wrong in thinking that you cannot do extra backups,  to get to the end 
>> of your  “dumpcycle” faster?
>> Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?
>>
>> Deb Baddorf
>> Fermilab
>>
>>
>>
>> On Nov 26, 2018, at 1:21 PM, Cuttler, Brian R (HEALTH) 
>>  wrote:
>>
>> Deb,
>>
>> Not sure if I'm understanding your question. If so I believe Amanda was 
>> built with the concept of a once/day run schedule taking into account 
>> runs/cycle as well as days/dumpcycle (for instance 5 run days in a one week 
>> dump cycle).
>>
>> Brian
>
> Yes, I agree.  At one point,  Chris was trying to speed this up by doing 21 
> runs in 10 minutes or so.
> Perhaps he has stopped that,  and people are just continuing to quote that 
> line (above).   It’s that line that’s bothering me.
>
> He did ask  "Can I specify "dumpcycle" as an elapsed count rather than an 
> elapsed time?”
> Per the wiki help files,  dumpcycle seems to be explicitly in “days”  and 
> cannot be changed or sped up.
> (  https://wiki.zmanda.com/index.php/Dumpcycle  )
>
> Chris also asked  "Are "dumpcycle" and "runspercycle" conflicting with each 
> other?”
> Even if you occasionally do extra amdump runs  (I do),  that doesn’t bother 
> the “runspercycle” count.  It still results in   “at least one level 0 within
> dumpcycle days”.Runspercycle just lets amanda gauge numbers for balance 
> adjusting, and maybe promoting a DLE to do an early level 0.
> Doing 21 r

Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
On Mon, Nov 26, 2018 at 3:22 PM Nathan Stratton Treadway
 wrote:
>
> On Mon, Nov 26, 2018 at 14:15:41 -0500, Chris Nighswonger wrote:
> > That makes more sense. I've set it to 10 and inparallel to 2 and we'll
> > see how it goes tonight.
> >
> > On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
> >  wrote:
> > >
> > > I believe maxdumps is max concurrent dumps across all clients.
> > >
> > > You might have 10 clients each with an inparallel of 2, giving 20
> > > possible concurrent dumps, but because of server limitations you
> > > might set maxdumps to something between 2 and 20.
>
> Actually, it is the other way around: when you kick off amdump, the
> driver retrieves the "inparallel" configuration option and then starts
> up that number of dumper subprocesses -- so "inparallel" controls the
> total number of dumpers available for the whole run.
>
> In contrast, the amanda.conf man page section on maxdumps says:  "The
> maximum number of backups from a single host that Amanda will attempt to
> run in parallel." So that controls how many (out of the "inparallel"
> total dumpers available) might simultaneously run on one host at the
> same time.
>
> (Maxdumps defaults to 1 to avoid two dumpers on the same client
> competing with each other for disk I/O [or worse, thrashing the disk
> heads by interweaving reads from different places on the device], CPU
> [e.g. for compression], and/or network bandwidth.  This seems likely to
> be a reasonable default, though I guess if you had multiple physical
> disk devices each of which was relatively slow compared to CPU and
> network bandwidth then it would make sense to run more than one dump at
> a time.  But I'd do careful testing to make sure that's not actually
> slowing down the overall dump.)

So I've set these params back as they were before (i.e. inparallel 10,
maxdumps 1) and opened the throttle on the bandwidth. We'll see if
things get better or worse tonight. I have a few more DLEs I need to
split, which may help the overall problem.


Re: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Debra S Baddorf



> On Nov 26, 2018, at 2:19 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
> 
> Deb,
> 
> I'm with you, periodically I have to test something, or will start an 
> additional backup to help move data after a failure of some sort but in 
> general I allow cron to start # amdump once/day. My understanding matches 
> yours, balance will get thrown off, amanda may advance dumps but you chew up 
> a lot of tape and do a lot of I/O for little overall gain if you are running 
> multiple dumps as a matter of course.

When I have something that failed/was missed, OR a new node or DLE to add, I do
amdump config --no-taper  node  DLE
to test the dump, but leave the results on my holding disk.  It’ll be flushed
when tonight’s backup starts.
So I don’t HAVE to chew up tape, just to test things.   :)

Deb Baddorf


> 
> I have even gone so far as run run multiple dumps across a very small 
> (perhaps "1") DLE to test include/exclude or get a new dumptype (snapshot) or 
> compress(pig/parallel zip) tested, but do not run multiple dumps as a matter 
> of routine.
> 
> In my mind running many small DLEs can be self-defeating, as can running very 
> few very large once, each hitting a different set of constraints.
> 
> My samba shares are on separate ZFS mount points and I snapshot them. My home 
> directories are also on separate ZFS mount points but individual backups were 
> untenable so I glob them by letter, but that means I can't do snapshots.
> 
> Based on the latest emails I think Chris may have moved on, but he has these 
> additional answers for when he cycles back.
> 
> Thanks,
> Brian
> 
> 
> -Original Message-
> From: Debra S Baddorf  
> Sent: Monday, November 26, 2018 3:00 PM
> To: amanda-users 
> Cc: Debra S Baddorf ; Debra S Baddorf ; 
> Cuttler, Brian R (HEALTH) 
> Subject: Re: Configuration Rollback [Was: Reusable]
> 
> 
> 
>> 
>> -Original Message-
>> From: owner-amanda-us...@amanda.org  On 
>> Behalf Of Debra S Baddorf
>> Sent: Monday, November 26, 2018 2:04 PM
>> To: amanda-users 
>> Cc: Debra S Baddorf 
>> Subject: Re: Configuration Rollback [Was: Reusable]
>> 
>> 
>> 
>>> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger 
>>>  wrote:
>>> 
>>> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
>>> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
 Hi Folks,
 
 I have written some very small DLEs so I can rip through weeks of 
 backups in minutes. I've learned some things.
>>> 
>> 
>> Am I wrong in thinking that you cannot do extra backups,  to get to the end 
>> of your  “dumpcycle” faster?
>> Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?
>> 
>> Deb Baddorf
>> Fermilab
>> 
>> 
>> 
>> On Nov 26, 2018, at 1:21 PM, Cuttler, Brian R (HEALTH) 
>>  wrote:
>> 
>> Deb,
>> 
>> Not sure if I'm understanding your question. If so I believe Amanda was 
>> built with the concept of a once/day run schedule taking into account 
>> runs/cycle as well as days/dumpcycle (for instance 5 run days in a one week 
>> dump cycle).
>> 
>> Brian
> 
> Yes, I agree.  At one point,  Chris was trying to speed this up by doing 21 
> runs in 10 minutes or so.
> Perhaps he has stopped that,  and people are just continuing to quote that 
> line (above).   It’s that line that’s bothering me.
> 
> He did ask  "Can I specify "dumpcycle" as an elapsed count rather than an 
> elapsed time?”
> Per the wiki help files,  dumpcycle seems to be explicitly in “days”  and 
> cannot be changed or sped up.
> (  https://wiki.zmanda.com/index.php/Dumpcycle  )
> 
> Chris also asked  "Are "dumpcycle" and "runspercycle" conflicting with each 
> other?”
> Even if you occasionally do extra amdump runs  (I do),  that doesn’t bother 
> the “runspercycle” count.  It still results in   “at least one level 0 within
> dumpcycle days”.Runspercycle just lets amanda gauge numbers for balance 
> adjusting, and maybe promoting a DLE to do an early level 0.
> Doing 21 runs in NN minutes will use up all your vtapes,  but will not force 
> more than 1 level 0.
> 
> Maybe I’m being pedantic,  and he’s looking at other issues now.  If so, 
> nevermind me!
> :)
> 
> Deb Baddorf
> 
> 
> 




Re: Another dumper question

2018-11-26 Thread Austin S. Hemmelgarn

On 2018-11-26 15:13, Chris Nighswonger wrote:

On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
 wrote:


On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:

On 2018-11-26 13:34, Chris Nighswonger wrote:
The other possibility that comes to mind is that your bandwidth
settings are making Amanda decide to limit to one dumper at a time.


Chris, this is certainly the first thing to look at: note in your
amstatus output the line "network free kps: 0":



9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)


Hmm... I missed that completely. I'll set it arbitrarily high as
Austin suggested and test it overnight.

Don't feel bad, it's not something that gets actively used by a lot of 
people, so most people don't really think about it.  If used right 
though, it provides the rather neat ability to have Amanda limit its
network utilization while running backups, which is really helpful if 
you have to run backups during production hours for some reason.


RE: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Cuttler, Brian R (HEALTH)
Deb,

I'm with you; periodically I have to test something, or will start an
additional backup to help move data after a failure of some sort, but in
general I allow cron to start # amdump once/day. My understanding matches
yours: balance will get thrown off and amanda may advance dumps, but you chew
up a lot of tape and do a lot of I/O for little overall gain if you are
running multiple dumps as a matter of course.

I have even gone so far as to run multiple dumps across a very small (perhaps
"1") DLE to test include/exclude or get a new dumptype (snapshot) or
compress (pig/parallel zip) tested, but do not run multiple dumps as a matter
of routine.

In my mind running many small DLEs can be self-defeating, as can running very
few very large ones, each hitting a different set of constraints.

My samba shares are on separate ZFS mount points and I snapshot them. My home 
directories are also on separate ZFS mount points but individual backups were 
untenable so I glob them by letter, but that means I can't do snapshots.

Based on the latest emails I think Chris may have moved on, but he has these 
additional answers for when he cycles back.

Thanks,
Brian


-Original Message-
From: Debra S Baddorf  
Sent: Monday, November 26, 2018 3:00 PM
To: amanda-users 
Cc: Debra S Baddorf ; Debra S Baddorf ; 
Cuttler, Brian R (HEALTH) 
Subject: Re: Configuration Rollback [Was: Reusable]



>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On 
> Behalf Of Debra S Baddorf
> Sent: Monday, November 26, 2018 2:04 PM
> To: amanda-users 
> Cc: Debra S Baddorf 
> Subject: Re: Configuration Rollback [Was: Reusable]
>
>
>
>> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger 
>>  wrote:
>>
>> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
>> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
>>> Hi Folks,
>>>
>>> I have written some very small DLEs so I can rip through weeks of 
>>> backups in minutes. I've learned some things.
>>
>
> Am I wrong in thinking that you cannot do extra backups,  to get to the end 
> of your  “dumpcycle” faster?
> Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?
>
> Deb Baddorf
> Fermilab
>
>
>
> On Nov 26, 2018, at 1:21 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
>
> Deb,
>
> Not sure if I'm understanding your question. If so I believe Amanda was built 
> with the concept of a once/day run schedule taking into account runs/cycle as 
> well as days/dumpcycle (for instance 5 run days in a one week dump cycle).
>
> Brian

Yes, I agree.  At one point,  Chris was trying to speed this up by doing 21 
runs in 10 minutes or so.
Perhaps he has stopped that,  and people are just continuing to quote that line 
(above).   It’s that line that’s bothering me.

He did ask  "Can I specify "dumpcycle" as an elapsed count rather than an 
elapsed time?”
Per the wiki help files,  dumpcycle seems to be explicitly in “days”  and 
cannot be changed or sped up.
(  https://wiki.zmanda.com/index.php/Dumpcycle  )

Chris also asked  "Are "dumpcycle" and "runspercycle" conflicting with each 
other?”
Even if you occasionally do extra amdump runs  (I do),  that doesn’t bother the 
“runspercycle” count.  It still results in   “at least one level 0 within
dumpcycle days”.Runspercycle just lets amanda gauge numbers for balance 
adjusting, and maybe promoting a DLE to do an early level 0.
Doing 21 runs in NN minutes will use up all your vtapes,  but will not force 
more than 1 level 0.

Maybe I’m being pedantic,  and he’s looking at other issues now.  If so, 
nevermind me!
:)

Deb Baddorf






Re: Another dumper question

2018-11-26 Thread Nathan Stratton Treadway
On Mon, Nov 26, 2018 at 14:15:41 -0500, Chris Nighswonger wrote:
> That makes more sense. I've set it to 10 and inparallel to 2 and we'll
> see how it goes tonight.
> 
> On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
>  wrote:
> >
> > I believe maxdumps is max concurrent dumps across all clients.
> >
> > You might have 10 clients each with an inparallel of 2, giving 20
> > possible concurrent dumps, but because of server limitations you
> > might set maxdumps to something between 2 and 20.

Actually, it is the other way around: when you kick off amdump, the
driver retrieves the "inparallel" configuration option and then starts
up that number of dumper subprocesses -- so "inparallel" controls the
total number of dumpers available for the whole run.

In contrast, the amanda.conf man page section on maxdumps says:  "The
maximum number of backups from a single host that Amanda will attempt to
run in parallel." So that controls how many (out of the "inparallel"
total dumpers available) might simultaneously run on one host at the
same time.

(Maxdumps defaults to 1 to avoid two dumpers on the same client
competing with each other for disk I/O [or worse, thrashing the disk
heads by interweaving reads from different places on the device], CPU
[e.g. for compression], and/or network bandwidth.  This seems likely to
be a reasonable default, though I guess if you had multiple physical
disk devices each of which was relatively slow compared to CPU and
network bandwidth then it would make sense to run more than one dump at
a time.  But I'd do careful testing to make sure that's not actually
slowing down the overall dump.)
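
Put another way, a minimal amanda.conf sketch (the numbers are only
illustrative):

    inparallel 10   # total dumper processes started for the whole run
    maxdumps 1      # per-host cap: at most this many of those dumpers may
                    # work on the same client at once (this is the default)

maxdumps can also be set inside a dumptype, if only some hosts can cope with
parallel dumps.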


Nathan

Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
 wrote:
>
> On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
> > On 2018-11-26 13:34, Chris Nighswonger wrote:
> > The other possibility that comes to mind is that your bandwidth
> > settings are making Amanda decide to limit to one dumper at a time.
>
> Chris, this is certainly the first thing to look at: note in your
> amstatus output the line "network free kps: 0":
>
>
> > >9 dumpers idle  : 0
> > >taper status: Idle
> > >taper qlen: 1
> > >network free kps: 0
> > >holding space   : 436635431k ( 50.26%)

Hmm... I missed that completely. I'll set it arbitrarily high as
Austin suggested and test it overnight.


Re: Reusable

2018-11-26 Thread J Chapman Flack
On 11/21/18 2:55 PM, Chris Miller wrote:
> AMANDA can't backup a file; only directories. Is there any way around
> that? Suppose I need to backup a large file in a directory with many
> other large files, so backing up the complete directory or moving /
> copying my target would be inconvenient, not to mention
> administratively cumbersome when it comes to restoring. 

Jean-Louis mentioned the amraw application as one that can be used on
a single file:  http://wiki.zmanda.com/man/3.5.1/amraw.8.html

In the pending pull request #11, there are some others as well:

amooraw: this is just the same as amraw but implemented in the OO style
made possible in PR#11, to show how the development is simplified. Like
amraw, it only supports full backups.

amgrowingfile: this is specific to the case where the single file is
something that grows very large and only by appending (think audit or
log files, etc). It can do efficient incremental backups of the single
file (an increment is just what has been appended since the last one).
A level 0 has to be forced any time the file is rewritten any other
way than appending.

amgrowingzip: like amgrowingfile but for the special case where the
single file is a large zip archive that is only grown by appending
new members at the end. It also supports increments. A level 0 must
be forced any time the file is modified any other way than adding new
zip members at the end.

In the preliminary man pages with the pull request, amooraw is on
page 12, amgrowingfile on page 9, and amgrowingzip on page 10.

https://github.com/zmanda/amanda/files/1265069/AppScriptWithAbstractClasses.pdf
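
For anyone wanting to try amraw itself today, a rough sketch of the usual
application/dumptype wiring (the names app_amraw and dt_amraw, the host, and
the file path are all made up for the example):

    # amanda.conf
    define application-tool app_amraw {
        plugin "amraw"
    }
    define dumptype dt_amraw {
        program "APPLICATION"
        application "app_amraw"
    }

    # disklist
    client.example.com  /var/lib/important.dat  dt_amraw

The tools in the pull request would presumably slot into the same kind of
definition once merged.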

-Chap


Re: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Debra S Baddorf


> 
> -Original Message-
> From: owner-amanda-us...@amanda.org  On Behalf 
> Of Debra S Baddorf
> Sent: Monday, November 26, 2018 2:04 PM
> To: amanda-users 
> Cc: Debra S Baddorf 
> Subject: Re: Configuration Rollback [Was: Reusable]
> 
> 
> 
>> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger 
>>  wrote:
>> 
>> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
>> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
>>> Hi Folks,
>>> 
>>> I have written some very small DLEs so I can rip through weeks of 
>>> backups in minutes. I've learned some things.
>> 
> 
> Am I wrong in thinking that you cannot do extra backups,  to get to the end 
> of your  “dumpcycle” faster?
> Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?
> 
> Deb Baddorf
> Fermilab
> 
> 
> 
> On Nov 26, 2018, at 1:21 PM, Cuttler, Brian R (HEALTH) 
>  wrote:
> 
> Deb,
> 
> Not sure if I'm understanding your question. If so I believe Amanda was built 
> with the concept of a once/day run schedule taking into account runs/cycle as 
> well as days/dumpcycle (for instance 5 run days in a one week dump cycle).
> 
> Brian

Yes, I agree.  At one point,  Chris was trying to speed this up by doing 21 
runs in 10 minutes or so.  
Perhaps he has stopped that,  and people are just continuing to quote that line 
(above).   It’s that line that’s bothering me.

He did ask  "Can I specify "dumpcycle" as an elapsed count rather than an 
elapsed time?”   
Per the wiki help files,  dumpcycle seems to be explicitly in “days”  and 
cannot be changed or sped up.
(  https://wiki.zmanda.com/index.php/Dumpcycle  )

Chris also asked  "Are "dumpcycle" and "runspercycle" conflicting with each 
other?”
Even if you occasionally do extra amdump runs  (I do),  that doesn’t bother the 
“runspercycle” count.  It still results in   “at least one level 0 within
dumpcycle days”.Runspercycle just lets amanda gauge numbers for balance 
adjusting, and maybe promoting a DLE to do an early level 0.
Doing 21 runs in NN minutes will use up all your vtapes,  but will not force 
more than 1 level 0.
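
The relevant amanda.conf lines, as a sketch (the values are just the classic
one-week example, not a recommendation):

    dumpcycle 1 week     # every DLE gets at least one level 0 within this period
    runspercycle 5       # amdump runs expected within one dumpcycle
    tapecycle 15 tapes   # tapes/vtapes in rotation; extra runs burn these faster

So running amdump 21 times in a row mostly just cycles through tapes; the
level 0s still spread themselves over the dumpcycle days.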

Maybe I’m being pedantic,  and he’s looking at other issues now.  If so, 
nevermind me!
:)

Deb Baddorf






Re: Another dumper question

2018-11-26 Thread Nathan Stratton Treadway
On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
> On 2018-11-26 13:34, Chris Nighswonger wrote:
> The other possibility that comes to mind is that your bandwidth
> settings are making Amanda decide to limit to one dumper at a time.

Chris, this is certainly the first thing to look at: note in your
amstatus output the line "network free kps: 0":


> >9 dumpers idle  : 0
> >taper status: Idle
> >taper qlen: 1
> >network free kps: 0
> >holding space   : 436635431k ( 50.26%)


Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


RE: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Cuttler, Brian R (HEALTH)


Deb,

Not sure if I'm understanding your question. If so I believe Amanda was built 
with the concept of a once/day run schedule taking into account runs/cycle as 
well as days/dumpcycle (for instance 5 run days in a one week dump cycle).

Brian

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Debra S Baddorf
Sent: Monday, November 26, 2018 2:04 PM
To: amanda-users 
Cc: Debra S Baddorf 
Subject: Re: Configuration Rollback [Was: Reusable]



> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger  
> wrote:
>
> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
> > Hi Folks,
> >
> > I have written some very small DLEs so I can rip through weeks of 
> > backups in minutes. I've learned some things.
>

Am I wrong in thinking that you cannot do extra backups,  to get to the end of 
your  “dumpcycle” faster?
Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?

Deb Baddorf
Fermilab





Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
That makes more sense. I've set it to 10 and inparallel to 2 and we'll
see how it goes tonight.

On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
 wrote:
>
> I believe maxdumps is max concurrent dumps across all clients.
>
> You might have 10 clients each with an inparallel of 2, giving 20 possible 
> concurrent dumps, but because of server limitations you might set maxdumps to 
> something between 2 and 20.
>
> -Original Message-
> From: Chris Nighswonger 
> Sent: Monday, November 26, 2018 1:57 PM
> To: Cuttler, Brian R (HEALTH) 
> Cc: amanda-users@amanda.org
> Subject: Re: Another dumper question
>
>
>
> inparallel 10
>
> maxdumps not listed, so I'm assuming the default of 1 is being observed.
>
> I'm not sure that the maxdumps parameter would affect dumping DLEs from 
> multiple clients in parallel, though. The manpage states, "The maximum number 
> of backups from a single host that Amanda will attempt to run in parallel." 
> That seems to indicate that this parameter controls parallel dumps of DLEs on 
> a single client.
>
> Kind regards,
> Chris
> On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH) 
>  wrote:
> >
> > Did you check your maxdumps and inparallel parameters?
> >
> > -Original Message-
> > From: owner-amanda-us...@amanda.org  On
> > Behalf Of Chris Nighswonger
> > Sent: Monday, November 26, 2018 1:34 PM
> > To: amanda-users@amanda.org
> > Subject: Another dumper question
> >
> >
> >
> > So in one particular configuration I have the following lines:
> >
> > inparallel 10
> > dumporder "STSTSTSTST"
> >
> > I would assume that that amanda would spawn 10 dumpers in parallel and 
> > execute them giving priority to largest size and largest time alternating. 
> > I would assume that amanda would do some sort of sorting of the DLEs based 
> > on size and time, set them in descending order, and the run the first 10 
> > based on the list thereby utilizing all 10 permitted dumpers in parallel.
> >
> > However, based on the amstatus excerpt below, it looks like amanda simply 
> > starts with the largest size and runs the DLEs one at a time, not making 
> > efficient use of parallel dumpers at all. This has the unhappy results at 
> > times of causing amdump to be running when the next backup is executed.
> >
> > I have changed the dumporder to STSTStstst for tonight's run to see if that 
> > makes any  difference. But I don't have much hope it will.
> >
> > Any thoughts?
> >
> > Kind regards,
> > Chris
> >
> >
> >
> >
> > From Mon Nov 26 01:00:01 EST 2018
> >
> > 1   4054117k waiting for dumping
> > 1  6671k waiting for dumping
> > 1   222k waiting for dumping
> > 1  2568k waiting for dumping
> > 1  6846k waiting for dumping
> > 1125447k waiting for dumping
> > 1 91372k waiting for dumping
> > 192k waiting for dumping
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 1290840k waiting for dumping
> > 1 76601k waiting for dumping
> > 186k waiting for dumping
> > 1 71414k waiting for dumping
> > 0  44184811k waiting for dumping
> > 1   281k waiting for dumping
> > 1  6981k waiting for dumping
> > 150k waiting for dumping
> > 1 86968k waiting for dumping
> > 1 81649k waiting for dumping
> > 1359952k waiting for dumping
> > 0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
> > 1 73966k waiting for dumping
> > 1821398k waiting for dumping
> > 1674198k waiting for dumping
> > 0 233106841k dump done (7:23:37), waiting for writing to tape
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 1166876k waiting for dumping
> > 132k waiting for dumping
> > 1170895k waiting for dumping
> > 1162817k waiting for dumping
> > 0 failed: planner: [Request to client failed: Connection timed out]
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 053k waiting for dumping
> > 0  77134628k waiting for dumping
> > 1  2911k waiting for dumping
> > 136k waiting for dumping
> > 132k waiting for dumping
> > 1 84935k waiting for dumping
> >
> > SUMMARY  part  real  estimated
> >size   size
> > partition   :  43
> > estimated   :  42559069311k
> > flush   :   0 0k
> > failed  :   10k   (  0.00%)
> > wait for dumping:  40128740001k   ( 23.03%)
> > dumping to tape :   00k   (  0.00%)
> > dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
> > dumped   

Re: Configuration Rollback [Was: Reusable]

2018-11-26 Thread Debra S Baddorf



> On Nov 24, 2018, at 9:47 AM, Chris Nighswonger  
> wrote:
> 
> On Fri, Nov 23, 2018 at 6:47 PM Jon LaBadie  wrote:
> On Wed, Nov 21, 2018 at 11:55:21AM -0800, Chris Miller wrote:
> > Hi Folks, 
> > 
> > I have written some very small DLEs so I can rip through weeks
> > of backups in minutes. I've learned some things.
> 

Am I wrong in thinking that you cannot do extra backups,  to get to the end of 
your  “dumpcycle” faster?
Dumpcycle is explicitly in days,  not number of times run.   Isn’t it?

Deb Baddorf
Fermilab




RE: Another dumper question

2018-11-26 Thread Cuttler, Brian R (HEALTH)
I believe maxdumps is max concurrent dumps across all clients.

You might have 10 clients each with an inparallel of 2, giving 20 possible 
concurrent dumps, but because of server limitations you might set maxdumps to 
something between 2 and 20.

-Original Message-
From: Chris Nighswonger  
Sent: Monday, November 26, 2018 1:57 PM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: Another dumper question



inparallel 10

maxdumps not listed, so I'm assuming the default of 1 is being observed.

I'm not sure that the maxdumps parameter would affect dumping DLEs from 
multiple clients in parallel, though. The manpage states, "The maximum number 
of backups from a single host that Amanda will attempt to run in parallel." 
That seems to indicate that this parameter controls parallel dumps of DLEs on a 
single client.

Kind regards,
Chris
On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH) 
 wrote:
>
> Did you check your maxdumps and inparallel parameters?
>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On 
> Behalf Of Chris Nighswonger
> Sent: Monday, November 26, 2018 1:34 PM
> To: amanda-users@amanda.org
> Subject: Another dumper question
>
>
>
> So in one particular configuration I have the following lines:
>
> inparallel 10
> dumporder "STSTSTSTST"
>
> I would assume that that amanda would spawn 10 dumpers in parallel and 
> execute them giving priority to largest size and largest time alternating. I 
> would assume that amanda would do some sort of sorting of the DLEs based on 
> size and time, set them in descending order, and the run the first 10 based 
> on the list thereby utilizing all 10 permitted dumpers in parallel.
>
> However, based on the amstatus excerpt below, it looks like amanda simply 
> starts with the largest size and runs the DLEs one at a time, not making 
> efficient use of parallel dumpers at all. This has the unhappy results at 
> times of causing amdump to be running when the next backup is executed.
>
> I have changed the dumporder to STSTStstst for tonight's run to see if that 
> makes any  difference. But I don't have much hope it will.
>
> Any thoughts?
>
> Kind regards,
> Chris
>
>
>
>
> From Mon Nov 26 01:00:01 EST 2018
>
> 1   4054117k waiting for dumping
> 1  6671k waiting for dumping
> 1   222k waiting for dumping
> 1  2568k waiting for dumping
> 1  6846k waiting for dumping
> 1125447k waiting for dumping
> 1 91372k waiting for dumping
> 192k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 1290840k waiting for dumping
> 1 76601k waiting for dumping
> 186k waiting for dumping
> 1 71414k waiting for dumping
> 0  44184811k waiting for dumping
> 1   281k waiting for dumping
> 1  6981k waiting for dumping
> 150k waiting for dumping
> 1 86968k waiting for dumping
> 1 81649k waiting for dumping
> 1359952k waiting for dumping
> 0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
> 1 73966k waiting for dumping
> 1821398k waiting for dumping
> 1674198k waiting for dumping
> 0 233106841k dump done (7:23:37), waiting for writing to tape
> 132k waiting for dumping
> 132k waiting for dumping
> 1166876k waiting for dumping
> 132k waiting for dumping
> 1170895k waiting for dumping
> 1162817k waiting for dumping
> 0 failed: planner: [Request to client failed: Connection timed out]
> 132k waiting for dumping
> 132k waiting for dumping
> 053k waiting for dumping
> 0  77134628k waiting for dumping
> 1  2911k waiting for dumping
> 136k waiting for dumping
> 132k waiting for dumping
> 1 84935k waiting for dumping
>
> SUMMARY  part  real  estimated
>size   size
> partition   :  43
> estimated   :  42559069311k
> flush   :   0 0k
> failed  :   10k   (  0.00%)
> wait for dumping:  40128740001k   ( 23.03%)
> dumping to tape :   00k   (  0.00%)
> dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
> dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait to flush   :   0 0k 0k (100.00%) (  0.00%)
> writing to tape :   0 0k 0k (  0.00%) (  0.00%)
> failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
> taped   :   0 0k 0k (  0.00%) (  0.00%)
> 9 dumpers idle  : 0
> ta

Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
inparallel 10

maxdumps not listed, so I'm assuming the default of 1 is being observed.

I'm not sure that the maxdumps parameter would affect dumping DLEs
from multiple clients in parallel, though. The manpage states, "The
maximum number of backups from a single host that Amanda will attempt
to run in parallel." That seems to indicate that this parameter
controls parallel dumps of DLEs on a single client.

Kind regards,
Chris
On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH)
 wrote:
>
> Did you check your maxdumps and inparallel parameters?
>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On Behalf 
> Of Chris Nighswonger
> Sent: Monday, November 26, 2018 1:34 PM
> To: amanda-users@amanda.org
> Subject: Another dumper question
>
>
>
> So in one particular configuration I have the following lines:
>
> inparallel 10
> dumporder "STSTSTSTST"
>
> I would assume that that amanda would spawn 10 dumpers in parallel and 
> execute them giving priority to largest size and largest time alternating. I 
> would assume that amanda would do some sort of sorting of the DLEs based on 
> size and time, set them in descending order, and the run the first 10 based 
> on the list thereby utilizing all 10 permitted dumpers in parallel.
>
> However, based on the amstatus excerpt below, it looks like amanda simply 
> starts with the largest size and runs the DLEs one at a time, not making 
> efficient use of parallel dumpers at all. This has the unhappy results at 
> times of causing amdump to be running when the next backup is executed.
>
> I have changed the dumporder to STSTStstst for tonight's run to see if that 
> makes any  difference. But I don't have much hope it will.
>
> Any thoughts?
>
> Kind regards,
> Chris
>
>
>
>
> From Mon Nov 26 01:00:01 EST 2018
>
> 1   4054117k waiting for dumping
> 1  6671k waiting for dumping
> 1   222k waiting for dumping
> 1  2568k waiting for dumping
> 1  6846k waiting for dumping
> 1125447k waiting for dumping
> 1 91372k waiting for dumping
> 192k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 1290840k waiting for dumping
> 1 76601k waiting for dumping
> 186k waiting for dumping
> 1 71414k waiting for dumping
> 0  44184811k waiting for dumping
> 1   281k waiting for dumping
> 1  6981k waiting for dumping
> 150k waiting for dumping
> 1 86968k waiting for dumping
> 1 81649k waiting for dumping
> 1359952k waiting for dumping
> 0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
> 1 73966k waiting for dumping
> 1821398k waiting for dumping
> 1674198k waiting for dumping
> 0 233106841k dump done (7:23:37), waiting for writing to tape
> 132k waiting for dumping
> 132k waiting for dumping
> 1166876k waiting for dumping
> 132k waiting for dumping
> 1170895k waiting for dumping
> 1162817k waiting for dumping
> 0 failed: planner: [Request to client failed: Connection timed out]
> 132k waiting for dumping
> 132k waiting for dumping
> 053k waiting for dumping
> 0  77134628k waiting for dumping
> 1  2911k waiting for dumping
> 136k waiting for dumping
> 132k waiting for dumping
> 1 84935k waiting for dumping
>
> SUMMARY  part  real  estimated
>size   size
> partition   :  43
> estimated   :  42559069311k
> flush   :   0 0k
> failed  :   10k   (  0.00%)
> wait for dumping:  40128740001k   ( 23.03%)
> dumping to tape :   00k   (  0.00%)
> dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
> dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait to flush   :   0 0k 0k (100.00%) (  0.00%)
> writing to tape :   0 0k 0k (  0.00%) (  0.00%)
> failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
> taped   :   0 0k 0k (  0.00%) (  0.00%)
> 9 dumpers idle  : 0
> taper status: Idle
> taper qlen: 1
> network free kps: 0
> holding space   : 436635431k ( 50.26%)
> chunker0 busy   :  6:17:03  ( 98.28%)
>  dumper0 busy   :  6:17:03  ( 98.28%)
>  0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
>  1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)



Re: Another dumper question

2018-11-26 Thread Austin S. Hemmelgarn

On 2018-11-26 13:34, Chris Nighswonger wrote:

So in one particular configuration I have the following lines:

inparallel 10
dumporder "STSTSTSTST"

I would assume that amanda would spawn 10 dumpers in parallel and
execute them, giving priority to largest size and largest time
alternating. I would assume that amanda would do some sort of sorting
of the DLEs based on size and time, set them in descending order, and
then run the first 10 based on the list, thereby utilizing all 10
permitted dumpers in parallel.

However, based on the amstatus excerpt below, it looks like amanda
simply starts with the largest size and runs the DLEs one at a time,
not making efficient use of parallel dumpers at all. This has the
unhappy result at times of causing amdump to still be running when the
next backup is executed.

I have changed the dumporder to STSTStstst for tonight's run to see if
that makes any  difference. But I don't have much hope it will.

Any thoughts?

Is this all for one host?  If so, that's probably your issue.  By
default, Amanda will only run at most one DLE per host at a time.  You 
can change this in the dump settings, but I forget what the exact 
configuration parameter is.


The other possibility that comes to mind is that your bandwidth settings 
are making Amanda decide to limit to one dumper at a time.  You can 
easily test that by just setting the `netusage` parameter to an absurdly 
large value like 1073741824 (equivalent to one Tbit/s).


Kind regards,
Chris




 From Mon Nov 26 01:00:01 EST 2018

1   4054117k waiting for dumping
1  6671k waiting for dumping
1   222k waiting for dumping
1  2568k waiting for dumping
1  6846k waiting for dumping
1125447k waiting for dumping
1 91372k waiting for dumping
192k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
1290840k waiting for dumping
1 76601k waiting for dumping
186k waiting for dumping
1 71414k waiting for dumping
0  44184811k waiting for dumping
1   281k waiting for dumping
1  6981k waiting for dumping
150k waiting for dumping
1 86968k waiting for dumping
1 81649k waiting for dumping
1359952k waiting for dumping
0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
1 73966k waiting for dumping
1821398k waiting for dumping
1674198k waiting for dumping
0 233106841k dump done (7:23:37), waiting for writing to tape
132k waiting for dumping
132k waiting for dumping
1166876k waiting for dumping
132k waiting for dumping
1170895k waiting for dumping
1162817k waiting for dumping
0 failed: planner: [Request to client failed: Connection timed out]
132k waiting for dumping
132k waiting for dumping
053k waiting for dumping
0  77134628k waiting for dumping
1  2911k waiting for dumping
136k waiting for dumping
132k waiting for dumping
1 84935k waiting for dumping

SUMMARY  part  real  estimated
size   size
partition   :  43
estimated   :  42559069311k
flush   :   0 0k
failed  :   10k   (  0.00%)
wait for dumping:  40128740001k   ( 23.03%)
dumping to tape :   00k   (  0.00%)
dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
wait to flush   :   0 0k 0k (100.00%) (  0.00%)
writing to tape :   0 0k 0k (  0.00%) (  0.00%)
failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
taped   :   0 0k 0k (  0.00%) (  0.00%)
9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)
chunker0 busy   :  6:17:03  ( 98.28%)
  dumper0 busy   :  6:17:03  ( 98.28%)
  0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
  1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)



RE: Another dumper question

2018-11-26 Thread Cuttler, Brian R (HEALTH)
Did you check your maxdumps and inparallel parameters?

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Monday, November 26, 2018 1:34 PM
To: amanda-users@amanda.org
Subject: Another dumper question



So in one particular configuration I have the following lines:

inparallel 10
dumporder "STSTSTSTST"

I would assume that amanda would spawn 10 dumpers in parallel and execute
them, giving priority to largest size and largest time alternating. I would
assume that amanda would do some sort of sorting of the DLEs based on size and
time, set them in descending order, and then run the first 10 based on the
list, thereby utilizing all 10 permitted dumpers in parallel.

However, based on the amstatus excerpt below, it looks like amanda simply
starts with the largest size and runs the DLEs one at a time, not making
efficient use of parallel dumpers at all. This has the unhappy result at times
of causing amdump to still be running when the next backup is executed.

I have changed the dumporder to STSTStstst for tonight's run to see if that 
makes any  difference. But I don't have much hope it will.

Any thoughts?

Kind regards,
Chris




>From Mon Nov 26 01:00:01 EST 2018

1   4054117k waiting for dumping
1  6671k waiting for dumping
1   222k waiting for dumping
1  2568k waiting for dumping
1  6846k waiting for dumping
1125447k waiting for dumping
1 91372k waiting for dumping
192k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
1290840k waiting for dumping
1 76601k waiting for dumping
186k waiting for dumping
1 71414k waiting for dumping
0  44184811k waiting for dumping
1   281k waiting for dumping
1  6981k waiting for dumping
150k waiting for dumping
1 86968k waiting for dumping
1 81649k waiting for dumping
1359952k waiting for dumping
0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
1 73966k waiting for dumping
1821398k waiting for dumping
1674198k waiting for dumping
0 233106841k dump done (7:23:37), waiting for writing to tape
132k waiting for dumping
132k waiting for dumping
1166876k waiting for dumping
132k waiting for dumping
1170895k waiting for dumping
1162817k waiting for dumping
0 failed: planner: [Request to client failed: Connection timed out]
132k waiting for dumping
132k waiting for dumping
053k waiting for dumping
0  77134628k waiting for dumping
1  2911k waiting for dumping
136k waiting for dumping
132k waiting for dumping
1 84935k waiting for dumping

SUMMARY  part  real  estimated
   size   size
partition   :  43
estimated   :  42559069311k
flush   :   0 0k
failed  :   10k   (  0.00%)
wait for dumping:  40128740001k   ( 23.03%)
dumping to tape :   00k   (  0.00%)
dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
wait to flush   :   0 0k 0k (100.00%) (  0.00%)
writing to tape :   0 0k 0k (  0.00%) (  0.00%)
failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
taped   :   0 0k 0k (  0.00%) (  0.00%)
9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)
chunker0 busy   :  6:17:03  ( 98.28%)
 dumper0 busy   :  6:17:03  ( 98.28%)
 0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
 1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)



Another dumper question

2018-11-26 Thread Chris Nighswonger
So in one particular configuration I have the following lines:

inparallel 10
dumporder "STSTSTSTST"

I would assume that amanda would spawn 10 dumpers in parallel and
execute them, giving priority to largest size and largest time
alternating. I would assume that amanda would do some sort of sorting
of the DLEs based on size and time, set them in descending order, and
then run the first 10 based on the list, thereby utilizing all 10
permitted dumpers in parallel.

However, based on the amstatus excerpt below, it looks like amanda
simply starts with the largest size and runs the DLEs one at a time,
not making efficient use of parallel dumpers at all. This has the
unhappy result at times of causing amdump to still be running when the
next backup is executed.

I have changed the dumporder to STSTStstst for tonight's run to see if
that makes any  difference. But I don't have much hope it will.

Any thoughts?

Kind regards,
Chris




>From Mon Nov 26 01:00:01 EST 2018

1   4054117k waiting for dumping
1  6671k waiting for dumping
1   222k waiting for dumping
1  2568k waiting for dumping
1  6846k waiting for dumping
1125447k waiting for dumping
1 91372k waiting for dumping
192k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
1290840k waiting for dumping
1 76601k waiting for dumping
186k waiting for dumping
1 71414k waiting for dumping
0  44184811k waiting for dumping
1   281k waiting for dumping
1  6981k waiting for dumping
150k waiting for dumping
1 86968k waiting for dumping
1 81649k waiting for dumping
1359952k waiting for dumping
0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
1 73966k waiting for dumping
1821398k waiting for dumping
1674198k waiting for dumping
0 233106841k dump done (7:23:37), waiting for writing to tape
132k waiting for dumping
132k waiting for dumping
1166876k waiting for dumping
132k waiting for dumping
1170895k waiting for dumping
1162817k waiting for dumping
0 failed: planner: [Request to client failed: Connection timed out]
132k waiting for dumping
132k waiting for dumping
053k waiting for dumping
0  77134628k waiting for dumping
1  2911k waiting for dumping
136k waiting for dumping
132k waiting for dumping
1 84935k waiting for dumping

SUMMARY  part  real  estimated
   size   size
partition   :  43
estimated   :  42559069311k
flush   :   0 0k
failed  :   10k   (  0.00%)
wait for dumping:  40128740001k   ( 23.03%)
dumping to tape :   00k   (  0.00%)
dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
wait to flush   :   0 0k 0k (100.00%) (  0.00%)
writing to tape :   0 0k 0k (  0.00%) (  0.00%)
failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
taped   :   0 0k 0k (  0.00%) (  0.00%)
9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)
chunker0 busy   :  6:17:03  ( 98.28%)
 dumper0 busy   :  6:17:03  ( 98.28%)
 0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
 1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)