Re: Another dumper question

2018-11-28 Thread Chris Nighswonger
On Mon, Nov 26, 2018 at 3:28 PM Chris Nighswonger <cnighswon...@foundations.edu> wrote:

>
>
> So I've set these params back as they were before (i.e. inparallel 10,
> maxdumps 1) and opened the throttle on the bandwidth. We'll see if
> things get better or worse tonight. I have a few more DLEs I need to
> split, which may help the overall problem.
>

The results are in:

 dumper0 busy   :  5:25:03  ( 98.03%)
 dumper1 busy   :  4:26:25  ( 80.35%)
 dumper2 busy   :  4:05:57  ( 74.18%)
 dumper3 busy   :  1:55:55  ( 34.96%)
 dumper4 busy   :  3:21:39  ( 60.82%)
 dumper5 busy   :  2:12:41  ( 40.02%)
 dumper6 busy   :  1:55:50  ( 34.93%)
 dumper7 busy   :  2:18:20  ( 41.72%)
 dumper8 busy   :  1:57:28  ( 35.43%)
 dumper9 busy   :  2:10:02  ( 39.22%)
 0 dumpers busy :  0:06:32  (  1.97%)   0:  0:06:32  (100.00%)
 1 dumper busy  :  0:58:37  ( 17.68%)   0:  0:58:37  (100.00%)
 2 dumpers busy :  0:20:26  (  6.17%)   0:  0:20:26  (100.00%)
 3 dumpers busy :  0:43:21  ( 13.08%)   0:  0:43:21  (100.00%)
 4 dumpers busy :  1:04:15  ( 19.38%)   0:  1:04:15  (100.00%)
 5 dumpers busy :  0:05:38  (  1.70%)   0:  0:05:38  (100.00%)
 6 dumpers busy :  0:02:38  (  0.79%)   0:  0:02:38  ( 99.99%)
 7 dumpers busy :  0:12:34  (  3.79%)   0:  0:12:34  (100.00%)
 8 dumpers busy :  0:01:38  (  0.49%)   0:  0:01:38  ( 99.99%)
 9 dumpers busy :  0:00:52  (  0.27%)   0:  0:00:52  ( 99.65%)
10 dumpers busy :  1:54:57  ( 34.67%)   0:  1:53:34  ( 98.79%)

Here's a graph of the bandwidth on the trunking interface on the switch the
backup server is on:

[image: 192.168.x.x_1_1-day.png]

As you can see, the network load is negligible even during business hours.
So giving Amanda the reins is not a problem, and it looks like the
dumps are running more efficiently.

Comments appreciated.

Kind regards,
Chris


Re: Another dumper question

2018-11-27 Thread Chris Nighswonger
On Mon, Nov 26, 2018 at 10:13 PM Nathan Stratton Treadway
 wrote:
>
> On Mon, Nov 26, 2018 at 17:07:54 -0500, Chris Hoogendyk wrote:
> > My understanding was that Amanda cannot throttle the throughput, but
> > it can see what it is. If it is at or over the limit, it won't start
> > another dump. So, you can end up over the limit that you have
> > specified for periods of time, but it won't just continue increasing
> > without respect for the limit.
>
> To expand on this slightly (and tie it back to the beginning of this
> thread): what Amanda actually looks at is the dump-size/time figure from
> the "info" database it keeps on each DLE.  So, it's not looking at
> dynamic real-time network bandwidth usage, but rather calculating what
> the average throughput turned out to be the last time this DLE was
> dumped.
>
> As you say, the important thing to note is that Amanda uses this info to
> do throttling at the dump/dumper level: when there is a free dumper, it
> will send a request for a new DLE to that dumper only if the bandwidth
> usage calculated for currently-in-progress dumps doesn't exceed the
> configured limit.
>
> If the configured limit is too high, the possibility is that too many
> simultaneous dumps, especially of very fast client systems, will
> saturate the network interface (or some part of the network
> infrastructure) -- while if it's too low, the symptom will be that some
> of the configured "inparallel" dumpers will be left idle.  (But the nice
> thing about having Amanda do this calculation is that it can go ahead
> and kick off more dumpers when working on slow client systems and fewer
> when processing the fast ones.)
>
> Anyway, in the case of the original email in this thread, the problem
> seems to be that the calculated bandwidth for the single DLE in process
> of being dumped already exceeds the configured bandwidth limit (as
> represented by the amstatus report line "network free kps: 0") -- and
> thus the other 9 dumpers are all left idle even though there are many
> DLEs out there waiting to be dumped.

Nathan: Thank you for the very clear explanation of how Amanda handles
this. That's good wiki material.

So last night's run borked due to my having two netusage statements in
the config. For some reason I missed that yesterday. Maybe it's
age-related, I don't know... ;-)

That's fixed and we'll see what sort of throughput the network can
handle tonight.


Re: Another dumper question

2018-11-26 Thread Nathan Stratton Treadway
On Mon, Nov 26, 2018 at 17:07:54 -0500, Chris Hoogendyk wrote:
> My understanding was that Amanda cannot throttle the throughput, but
> it can see what it is. If it is at or over the limit, it won't start
> another dump. So, you can end up over the limit that you have
> specified for periods of time, but it won't just continue increasing
> without respect for the limit.

To expand on this slightly (and tie it back to the beginning of this
thread): what Amanda actually looks at is the dump-size/time figure from
the "info" database it keeps on each DLE.  So, it's not looking at
dynamic real-time network bandwidth usage, but rather calculating what
the average throughput turned out to be the last time this DLE was
dumped.

As you say, the important thing to note is that Amanda uses this info to
do throttling at the dump/dumper level: when there is a free dumper, it
will send a request for a new DLE to that dumper only if the bandwidth
usage calculated for currently-in-progress dumps doesn't exceed the
configured limit.

If the configured limit is too high, the possibility is that too many
simultaneous dumps, especially of very fast client systems, will
saturate the network interface (or some part of the network
infrastructure) -- while if it's too low, the symptom will be that some
of the configured "inparallel" dumpers will be left idle.  (But the nice
thing about having Amanda do this calculation is that it can go ahead
and kick off more dumpers when working on slow client systems and fewer
when processing the fast ones.)

Anyway, in the case of the original email in this thread, the problem
seems to be that the calculated bandwidth for the single DLE in process
of being dumped already exceeds the configured bandwidth limit (as
represented by the amstatus report line "network free kps: 0") -- and
thus the other 9 dumpers are all left idle even though there are many
DLEs out there waiting to be dumped.
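
For concreteness, the relevant amanda.conf knobs look something like
this (illustrative values, not Chris's actual configuration):

    netusage 8000 Kbps   # total bandwidth Amanda budgets across running dumps,
                         # using each DLE's historical dump-size/time figure
    inparallel 10        # dumpers started for the run; a free dumper stays
                         # idle if starting another dump would exceed the budget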


Nathan 


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Another dumper question

2018-11-26 Thread Gene Heskett
On Monday 26 November 2018 17:07:54 Chris Hoogendyk wrote:

> On 11/26/18 4:37 PM, Gene Heskett wrote:
> > On Monday 26 November 2018 15:13:43 Chris Nighswonger wrote:
> >> On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
> >>
> >>  wrote:
> >>> On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
>  On 2018-11-26 13:34, Chris Nighswonger wrote:
>  The other possibility that comes to mind is that your bandwidth
>  settings are making Amanda decide to limit to one dumper at a
>  time.
> >>>
> >>> Chris, this is certainly the first thing to look at: note in your
> >>>
> >>> amstatus output the line "network free kps: 0":
> > 9 dumpers idle  : 0
> > taper status: Idle
> > taper qlen: 1
> > network free kps: 0
> > holding space   : 436635431k ( 50.26%)
> >>
> >> Hmm... I missed that completely. I'll set it arbitrarily high as
> >> Austin suggested and test it overnight.
> >
> > I was told once, a decade or more back up the log, that amanda made
> > no use of that setting. It just used what it needed or whatever the
> > hardware supported.
> >
> > When did it become an actively used setting?
> >
> > Copyright 2018 by Maurice E. Heskett
>
> My understanding was that Amanda cannot throttle the throughput, but
> it can see what it is. If it is at or over the limit, it won't start
> another dump. So, you can end up over the limit that you have
> specified for periods of time, but it won't just continue increasing
> without respect for the limit.

That makes perfect sense; no one has explained it in that light before.  Not
that it's going to bother me: only 2 of my machines have active drives
that aren't in the DLE file.  Scratchpad drives for devel work.


Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 


Re: Another dumper question

2018-11-26 Thread Chris Hoogendyk




On 11/26/18 4:37 PM, Gene Heskett wrote:

On Monday 26 November 2018 15:13:43 Chris Nighswonger wrote:


On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway

 wrote:

On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:

On 2018-11-26 13:34, Chris Nighswonger wrote:
The other possibility that comes to mind is that your bandwidth
settings are making Amanda decide to limit to one dumper at a
time.

Chris, this is certainly the first thing to look at: note in your

amstatus output the line "network free kps: 0":

9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)

Hmm... I missed that completely. I'll set it arbitrarily high as
Austin suggested and test it overnight.

I was told once, a decade or more back up the log, that amanda made no
use of that setting. It just used what it needed or whatever the
hardware supported.

When did it become an actively used setting?

Copyright 2018 by Maurice E. Heskett


My understanding was that Amanda cannot throttle the throughput, but it can see what it is. If it is 
at or over the limit, it won't start another dump. So, you can end up over the limit that you have 
specified for periods of time, but it won't just continue increasing without respect for the limit.


--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geosciences Departments
 (*) \(*) -- 315 Morrill Science Center
~~ - University of Massachusetts, Amherst



---

Erdös 4



Re: Another dumper question

2018-11-26 Thread Gene Heskett
On Monday 26 November 2018 15:14:51 Nathan Stratton Treadway wrote:

> On Mon, Nov 26, 2018 at 14:15:41 -0500, Chris Nighswonger wrote:
> > That makes more sense. I've set it to 10 and inparallel to 2 and
> > we'll see how it goes tonight.
> >
> > On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
> >
> >  wrote:
> > > I believe maxdumps is max concurrent dumps across all clients.
> > >
> > > You might have 10 clients each with an inparallel of 2, giving 20
> > > possible concurrent dumps, but because of server limitations you
> > > might set maxdumps to something between 2 and 20.
>
> Actually, it is the other way around: when you kick off amdump, the
> driver retrieves the "inparallel" configuration option and then starts
> up that number of dumper subprocesses -- so "inparallel" controls the
> total number of dumpers available for the whole run.
>
> In contrast, the amanda.conf man page section on maxdumps says:  "The
> maximum number of backups from a single host that Amanda will attempt
> to run in parallel." So that controls how many (out of the
> "inparallel" total dumpers available) might simultaneously run on one
> host at the same time.
>
> (Maxdumps defaults to 1 to avoid two dumpers on the same client
> competing with each other for disk I/O [or worse, thrashing the disk
> heads by interweaving reads from different places on the device], CPU
> [e.g. for compression], and/or network bandwidth.  This seems likely
> to be a reasonable default, though I guess if you had multiple
> physical disk devices each of which was relatively slow compared to
> CPU and network bandwidth then it would make sense to run more than
> one dump at a time.  But I'd do careful testing to make sure that's
> not actually slowing down the overall dump.)
>
>
>   Nathan

AIUI, this is where the "spindle" number in your DLE comes into play, and 
unless that particular client has more than 1 active disk, it should 
remain at 1.  Now if, like some, you have /home on a separate drive from 
the root filesystem, you may find it helpful to put a 2 in those DLEs 
that are on that client's 2nd drive, and it should then run two dumpers 
in independent dumps on THAT client.
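
Something like this in the disklist, if I have the syntax right (host
and dumptype names are made up):

    # hostname  diskname  dumptype       spindle
    client1     /         comp-user-tar  1    # drive holding the root filesystem
    client1     /home     comp-user-tar  2    # separate physical drive, so it
                                              # gets a different spindle number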

> --
> Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
> Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
> GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
> Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 


Re: Another dumper question

2018-11-26 Thread Gene Heskett
On Monday 26 November 2018 15:13:43 Chris Nighswonger wrote:

> On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
>
>  wrote:
> > On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
> > > On 2018-11-26 13:34, Chris Nighswonger wrote:
> > > The other possibility that comes to mind is that your bandwidth
> > > settings are making Amanda decide to limit to one dumper at a
> > > time.
> >
> > Chris, this is certainly the first thing to look at: note in your
> >
> > amstatus output the line "network free kps: 0":
> > > >9 dumpers idle  : 0
> > > >taper status: Idle
> > > >taper qlen: 1
> > > >network free kps: 0
> > > >holding space   : 436635431k ( 50.26%)
>
> Hmm... I missed that completely. I'll set it arbitrarily high as
> Austin suggested and test it overnight.

I was told once, a decade or more back up the log, that amanda made no 
use of that setting. It just used what it needed or whatever the  
hardware supported.

When did it become an actively used setting?

Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 


Re: Another dumper question

2018-11-26 Thread Debra S Baddorf



> On Nov 26, 2018, at 2:24 PM, Austin S. Hemmelgarn  
> wrote:
> 
> On 2018-11-26 15:13, Chris Nighswonger wrote:
>> On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
>>  wrote:
>>> 
>>> On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
 On 2018-11-26 13:34, Chris Nighswonger wrote:
 The other possibility that comes to mind is that your bandwidth
 settings are making Amanda decide to limit to one dumper at a time.
>>> 
>>> Chris, this is certainly the first thing to look at: note in your
>>> amstatus output the line "network free kps: 0":
>>> 
>>> 
> 9 dumpers idle  : 0
> taper status: Idle
> taper qlen: 1
> network free kps: 0
> holding space   : 436635431k ( 50.26%)
>> Hmm... I missed that completely. I'll set it arbitrarily high as
>> Austin suggested and test it overnight.
> Don't feel bad, it's not something that gets actively used by a lot of 
> people, so most people don't really think about it.  If used right though, it 
> provides the rather neat ability to have Amanda limit its network 
> utilization while running backups, which is really helpful if you have to run 
> backups during production hours for some reason.

Or if production hours are 24/7/366!  (high energy physics accelerator lab)

Deb Baddorf
Fermilab




Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
On Mon, Nov 26, 2018 at 3:22 PM Nathan Stratton Treadway
 wrote:
>
> On Mon, Nov 26, 2018 at 14:15:41 -0500, Chris Nighswonger wrote:
> > That makes more sense. I've set it to 10 and inparallel to 2 and we'll
> > see how it goes tonight.
> >
> > On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
> >  wrote:
> > >
> > > I believe maxdumps is max concurrent dumps across all clients.
> > >
> > > You might have 10 clients each with an inparallel of 2, giving 20
> > > possible concurrent dumps, but because of server limitations you
> > > might set maxdumps to something between 2 and 20.
>
> Actually, it is the other way around: when you kick off amdump, the
> driver retrieves the "inparallel" configuration option and then starts
> up that number of dumper subprocesses -- so "inparallel" controls the
> total number of dumpers available for the whole run.
>
> In contrast, the amanda.conf man page section on maxdumps says:  "The
> maximum number of backups from a single host that Amanda will attempt to
> run in parallel." So that controls how many (out of the "inparallel"
> total dumpers available) might simultaneously run on one host at the
> same time.
>
> (Maxdumps defaults to 1 to avoid two dumpers on the same client
> competing with each other for disk I/O [or worse, thrashing the disk
> heads by interweaving reads from different places on the device], CPU
> [e.g. for compression], and/or network bandwidth.  This seems likely to
> be a reasonable default, though I guess if you had multiple physical
> disk devices each of which was relatively slow compared to CPU and
> network bandwidth then it would make sense to run more than one dump at
> a time.  But I'd do careful testing to make sure that's not actually
> slowing down the overall dump.)

So I've set these params back as they were before (i.e. inparallel 10,
maxdumps 1) and opened the throttle on the bandwidth. We'll see if
things get better or worse tonight. I have a few more DLEs I need to
split, which may help the overall problem.
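
(For the splitting I'm planning on something like inline dumptype
include lists in the disklist -- illustrative paths and patterns, and I
still need to double-check the include syntax against amanda.conf(5):)

    server1  /data-am  /data  {
        comp-user-tar
        include "./[a-m]*"    # first half of the top-level directories
    }
    server1  /data-nz  /data  {
        comp-user-tar
        include "./[n-z]*"    # second half
    }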


Re: Another dumper question

2018-11-26 Thread Austin S. Hemmelgarn

On 2018-11-26 15:13, Chris Nighswonger wrote:

On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
 wrote:


On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:

On 2018-11-26 13:34, Chris Nighswonger wrote:
The other possibility that comes to mind is that your bandwidth
settings are making Amanda decide to limit to one dumper at a time.


Chris, this is certainly the first thing to look at: note in your
amstatus output the line "network free kps: 0":



9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)


Hmm... I missed that completely. I'll set it arbitrarily high as
Austin suggested and test it overnight.

Don't feel bad, it's not something that gets actively used by a lot of 
people, so most people don't really think about it.  If used right 
though, it provides the rather neat ability to have Amanda limit its 
network utilization while running backups, which is really helpful if 
you have to run backups during production hours for some reason.


Re: Another dumper question

2018-11-26 Thread Nathan Stratton Treadway
On Mon, Nov 26, 2018 at 14:15:41 -0500, Chris Nighswonger wrote:
> That makes more sense. I've set it to 10 and inparallel to 2 and we'll
> see how it goes tonight.
> 
> On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
>  wrote:
> >
> > I believe maxdumps is max concurrent dumps across all clients.
> >
> > You might have 10 clients each with an inparallel of 2, giving 20
> > possible concurrent dumps, but because of server limitations you
> > might set maxdumps to something between 2 and 20.

Actually, it is the other way around: when you kick off amdump, the
driver retrieves the "inparallel" configuration option and then starts
up that number of dumper subprocesses -- so "inparallel" controls the
total number of dumpers available for the whole run.

In contrast, the amanda.conf man page section on maxdumps says:  "The
maximum number of backups from a single host that Amanda will attempt to
run in parallel." So that controls how many (out of the "inparallel"
total dumpers available) might simultaneously run on one host at the
same time.

(Maxdumps defaults to 1 to avoid two dumpers on the same client
competing with each other for disk I/O [or worse, thrashing the disk
heads by interweaving reads from different places on the device], CPU
[e.g. for compression], and/or network bandwidth.  This seems likely to
be a reasonable default, though I guess if you had multiple physical
disk devices each of which was relatively slow compared to CPU and
network bandwidth then it would make sense to run more than one dump at
a time.  But I'd do careful testing to make sure that's not actually
slowing down the overall dump.)
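
(So, side by side, a minimal sketch of the two knobs -- values are
illustrative:)

    inparallel 10   # total dumper processes the driver starts for the whole run
    maxdumps 1      # max simultaneous dumps from any one host (the default)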


Nathan

Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
On Mon, Nov 26, 2018 at 2:32 PM Nathan Stratton Treadway
 wrote:
>
> On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
> > On 2018-11-26 13:34, Chris Nighswonger wrote:
> > The other possibility that comes to mind is that your bandwidth
> > settings are making Amanda decide to limit to one dumper at a time.
>
> Chris, this is certainly the first thing to look at: note in your
> amstatus output the line "network free kps: 0":
>
>
> > >9 dumpers idle  : 0
> > >taper status: Idle
> > >taper qlen: 1
> > >network free kps: 0
> > >holding space   : 436635431k ( 50.26%)

Hmm... I missed that completely. I'll set it arbitrarily high as
Austin suggested and test it overnight.


Re: Another dumper question

2018-11-26 Thread Nathan Stratton Treadway
On Mon, Nov 26, 2018 at 13:56:52 -0500, Austin S. Hemmelgarn wrote:
> On 2018-11-26 13:34, Chris Nighswonger wrote:
> The other possibility that comes to mind is that your bandwidth
> settings are making Amanda decide to limit to one dumper at a time.

Chris, this is certainly the first thing to look at: note in your
amstatus output the line "network free kps: 0":


> >9 dumpers idle  : 0
> >taper status: Idle
> >taper qlen: 1
> >network free kps: 0
> >holding space   : 436635431k ( 50.26%)


Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
That makes more sense. I've set it to 10 and inparallel to 2 and we'll
see how it goes tonight.

On Mon, Nov 26, 2018 at 2:01 PM Cuttler, Brian R (HEALTH)
 wrote:
>
> I believe maxdumps is max concurrent dumps across all clients.
>
> You might have 10 clients each with an inparallel of 2, giving 20 possible 
> concurrent dumps, but because of server limitations you might set maxdumps to 
> something between 2 and 20.
>
> -Original Message-
> From: Chris Nighswonger 
> Sent: Monday, November 26, 2018 1:57 PM
> To: Cuttler, Brian R (HEALTH) 
> Cc: amanda-users@amanda.org
> Subject: Re: Another dumper question
>
> inparallel 10
>
> maxdumps not listed, so I'm assuming the default of 1 is being observed.
>
> I'm not sure that the maxdumps parameter would affect dumping DLEs from 
> multiple clients in parallel, though. The manpage states, "The maximum number 
> of backups from a single host that Amanda will attempt to run in parallel." 
> That seems to indicate that this parameter controls parallel dumps of DLEs on 
> a single client.
>
> Kind regards,
> Chris
> On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH) 
>  wrote:
> >
> > Did you check your maxdumps and inparallel parameters?
> >
> > -Original Message-
> > From: owner-amanda-us...@amanda.org  On
> > Behalf Of Chris Nighswonger
> > Sent: Monday, November 26, 2018 1:34 PM
> > To: amanda-users@amanda.org
> > Subject: Another dumper question
> >
> > So in one particular configuration I have the following lines:
> >
> > inparallel 10
> > dumporder "STSTSTSTST"
> >
> > I would assume that Amanda would spawn 10 dumpers in parallel and 
> > execute them, giving priority alternately to largest size and largest time. 
> > I would assume that Amanda would do some sort of sorting of the DLEs based 
> > on size and time, set them in descending order, and then run the first 10 
> > from the list, thereby utilizing all 10 permitted dumpers in parallel.
> >
> > However, based on the amstatus excerpt below, it looks like Amanda simply 
> > starts with the largest size and runs the DLEs one at a time, not making 
> > efficient use of parallel dumpers at all. This sometimes has the unhappy 
> > result of amdump still running when the next backup is started.
> >
> > I have changed the dumporder to STSTStstst for tonight's run to see if that 
> > makes any difference. But I don't have much hope it will.
> >
> > Any thoughts?
> >
> > Kind regards,
> > Chris
> >
> >
> >
> >
> > From Mon Nov 26 01:00:01 EST 2018
> >
> > 1   4054117k waiting for dumping
> > 1  6671k waiting for dumping
> > 1   222k waiting for dumping
> > 1  2568k waiting for dumping
> > 1  6846k waiting for dumping
> > 1125447k waiting for dumping
> > 1 91372k waiting for dumping
> > 192k waiting for dumping
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 1290840k waiting for dumping
> > 1 76601k waiting for dumping
> > 186k waiting for dumping
> > 1 71414k waiting for dumping
> > 0  44184811k waiting for dumping
> > 1   281k waiting for dumping
> > 1  6981k waiting for dumping
> > 150k waiting for dumping
> > 1 86968k waiting for dumping
> > 1 81649k waiting for dumping
> > 1359952k waiting for dumping
> > 0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
> > 1 73966k waiting for dumping
> > 1821398k waiting for dumping
> > 1674198k waiting for dumping
> > 0 233106841k dump done (7:23:37), waiting for writing to tape
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 1166876k waiting for dumping
> > 132k waiting for dumping
> > 1170895k waiting for dumping
> > 1162817k waiting for dumping
> > 0 failed: planner: [Request to client failed: Connection timed out]
> > 132k waiting for dumping
> > 132k waiting for dumping
> > 053k waiting for dumping
> > 0  77134628k waiting for dumping
> > 1  2911k waiting 

RE: Another dumper question

2018-11-26 Thread Cuttler, Brian R (HEALTH)
I believe maxdumps is max concurrent dumps across all clients.

You might have 10 clients each with an inparallel of 2, giving 20 possible 
concurrent dumps, but because of server limitations you might set maxdumps to 
something between 2 and 20.

-Original Message-
From: Chris Nighswonger  
Sent: Monday, November 26, 2018 1:57 PM
To: Cuttler, Brian R (HEALTH) 
Cc: amanda-users@amanda.org
Subject: Re: Another dumper question

inparallel 10

maxdumps not listed, so I'm assuming the default of 1 is being observed.

I'm not sure that the maxdumps parameter would affect dumping DLEs from 
multiple clients in parallel, though. The manpage states, "The maximum number 
of backups from a single host that Amanda will attempt to run in parallel." 
That seems to indicate that this parameter controls parallel dumps of DLEs on a 
single client.

Kind regards,
Chris
On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH) 
 wrote:
>
> Did you check your maxdumps and inparallel parameters?
>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On 
> Behalf Of Chris Nighswonger
> Sent: Monday, November 26, 2018 1:34 PM
> To: amanda-users@amanda.org
> Subject: Another dumper question
>
> So in one particular configuration I have the following lines:
>
> inparallel 10
> dumporder "STSTSTSTST"
>
> I would assume that Amanda would spawn 10 dumpers in parallel and execute 
> them, giving priority alternately to largest size and largest time. I would 
> assume that Amanda would do some sort of sorting of the DLEs based on size 
> and time, set them in descending order, and then run the first 10 from the 
> list, thereby utilizing all 10 permitted dumpers in parallel.
>
> However, based on the amstatus excerpt below, it looks like Amanda simply 
> starts with the largest size and runs the DLEs one at a time, not making 
> efficient use of parallel dumpers at all. This sometimes has the unhappy 
> result of amdump still running when the next backup is started.
>
> I have changed the dumporder to STSTStstst for tonight's run to see if that 
> makes any difference. But I don't have much hope it will.
>
> Any thoughts?
>
> Kind regards,
> Chris
>
>
>
>
> From Mon Nov 26 01:00:01 EST 2018
>
> 1   4054117k waiting for dumping
> 1  6671k waiting for dumping
> 1   222k waiting for dumping
> 1  2568k waiting for dumping
> 1  6846k waiting for dumping
> 1125447k waiting for dumping
> 1 91372k waiting for dumping
> 192k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 1290840k waiting for dumping
> 1 76601k waiting for dumping
> 186k waiting for dumping
> 1 71414k waiting for dumping
> 0  44184811k waiting for dumping
> 1   281k waiting for dumping
> 1  6981k waiting for dumping
> 150k waiting for dumping
> 1 86968k waiting for dumping
> 1 81649k waiting for dumping
> 1359952k waiting for dumping
> 0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
> 1 73966k waiting for dumping
> 1821398k waiting for dumping
> 1674198k waiting for dumping
> 0 233106841k dump done (7:23:37), waiting for writing to tape
> 132k waiting for dumping
> 132k waiting for dumping
> 1166876k waiting for dumping
> 132k waiting for dumping
> 1170895k waiting for dumping
> 1162817k waiting for dumping
> 0 failed: planner: [Request to client failed: Connection timed out]
> 132k waiting for dumping
> 132k waiting for dumping
> 053k waiting for dumping
> 0  77134628k waiting for dumping
> 1  2911k waiting for dumping
> 136k waiting for dumping
> 132k waiting for dumping
> 1 84935k waiting for dumping
>
> SUMMARY  part  real  estimated
>size   size
> partition   :  43
> estimated   :  42559069311k
> flush   :   0 0k
> failed  :   10k   (  0.00%)
> wait for dumping:  40128740001k   ( 23.03%)
> dumping to tape :   00k   (  0.00%)
> dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
> dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait for writing:   1 233106841k 23

Re: Another dumper question

2018-11-26 Thread Chris Nighswonger
inparallel 10

maxdumps not listed, so I'm assuming the default of 1 is being observed.

I'm not sure that the maxdumps parameter would affect dumping DLEs
from multiple clients in parallel, though. The manpage states, "The
maximum number of backups from a single host that Amanda will attempt
to run in parallel." That seems to indicate that this parameter
controls parallel dumps of DLEs on a single client.

Kind regards,
Chris
On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH)
 wrote:
>
> Did you check your maxdumps and inparallel parameters?
>
> -Original Message-
> From: owner-amanda-us...@amanda.org  On Behalf 
> Of Chris Nighswonger
> Sent: Monday, November 26, 2018 1:34 PM
> To: amanda-users@amanda.org
> Subject: Another dumper question
>
> So in one particular configuration I have the following lines:
>
> inparallel 10
> dumporder "STSTSTSTST"
>
> I would assume that Amanda would spawn 10 dumpers in parallel and execute 
> them, giving priority alternately to largest size and largest time. I would 
> assume that Amanda would do some sort of sorting of the DLEs based on size 
> and time, set them in descending order, and then run the first 10 from the 
> list, thereby utilizing all 10 permitted dumpers in parallel.
>
> However, based on the amstatus excerpt below, it looks like Amanda simply 
> starts with the largest size and runs the DLEs one at a time, not making 
> efficient use of parallel dumpers at all. This sometimes has the unhappy 
> result of amdump still running when the next backup is started.
>
> I have changed the dumporder to STSTStstst for tonight's run to see if that 
> makes any difference. But I don't have much hope it will.
>
> Any thoughts?
>
> Kind regards,
> Chris
>
>
>
>
> From Mon Nov 26 01:00:01 EST 2018
>
> 1   4054117k waiting for dumping
> 1  6671k waiting for dumping
> 1   222k waiting for dumping
> 1  2568k waiting for dumping
> 1  6846k waiting for dumping
> 1125447k waiting for dumping
> 1 91372k waiting for dumping
> 192k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 132k waiting for dumping
> 1290840k waiting for dumping
> 1 76601k waiting for dumping
> 186k waiting for dumping
> 1 71414k waiting for dumping
> 0  44184811k waiting for dumping
> 1   281k waiting for dumping
> 1  6981k waiting for dumping
> 150k waiting for dumping
> 1 86968k waiting for dumping
> 1 81649k waiting for dumping
> 1359952k waiting for dumping
> 0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
> 1 73966k waiting for dumping
> 1821398k waiting for dumping
> 1674198k waiting for dumping
> 0 233106841k dump done (7:23:37), waiting for writing to tape
> 132k waiting for dumping
> 132k waiting for dumping
> 1166876k waiting for dumping
> 132k waiting for dumping
> 1170895k waiting for dumping
> 1162817k waiting for dumping
> 0 failed: planner: [Request to client failed: Connection timed out]
> 132k waiting for dumping
> 132k waiting for dumping
> 053k waiting for dumping
> 0  77134628k waiting for dumping
> 1  2911k waiting for dumping
> 136k waiting for dumping
> 132k waiting for dumping
> 1 84935k waiting for dumping
>
> SUMMARY  part  real  estimated
>size   size
> partition   :  43
> estimated   :  42559069311k
> flush   :   0 0k
> failed  :   10k   (  0.00%)
> wait for dumping:  40128740001k   ( 23.03%)
> dumping to tape :   00k   (  0.00%)
> dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
> dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait to flush   :   0 0k 0k (100.00%) (  0.00%)
> writing to tape :   0 0k 0k (  0.00%) (  0.00%)
> failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
> taped   :   0 0k 0k (  0.00%) (  0.00%)
> 9 dumpers idle  : 0
> taper status: Idle
> taper qlen: 1
> network free kps: 0
> holding space   : 436635431k ( 50.26%)
> chunker0 busy   :  6:17:03  ( 98.28%)
>  dumper0 busy   :  6:17:03  ( 98.28%)
>  0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
>  1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)



Re: Another dumper question

2018-11-26 Thread Austin S. Hemmelgarn

On 2018-11-26 13:34, Chris Nighswonger wrote:

So in one particular configuration I have the following lines:

inparallel 10
dumporder "STSTSTSTST"

I would assume that Amanda would spawn 10 dumpers in parallel and
execute them, giving priority alternately to largest size and largest
time. I would assume that Amanda would do some sort of sorting of the
DLEs based on size and time, set them in descending order, and then run
the first 10 from the list, thereby utilizing all 10 permitted dumpers
in parallel.

However, based on the amstatus excerpt below, it looks like Amanda
simply starts with the largest size and runs the DLEs one at a time,
not making efficient use of parallel dumpers at all. This sometimes has
the unhappy result of amdump still running when the next backup is
started.

I have changed the dumporder to STSTStstst for tonight's run to see if
that makes any difference. But I don't have much hope it will.

Any thoughts?
Is this all for one host?  If so, that's probably your issue.  By 
default, Amanda will only run at most one DLE per host at a time.  You 
can change this in the dump settings, but I forget what the exact 
configuration parameter is.


The other possibility that comes to mind is that your bandwidth settings 
are making Amanda decide to limit to one dumper at a time.  You can 
easily test that by just setting the `netusage` parameter to an absurdly 
large value like 1073741824 (equivalent to one Tbit/s).
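
(In other words, something roughly like this in amanda.conf -- the exact
number doesn't matter as long as it's far beyond what your hardware can
actually move:)

    netusage 1073741824 Kbps   # absurdly high cap, effectively disabling
                               # Amanda's bandwidth-based throttling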


Kind regards,
Chris




 From Mon Nov 26 01:00:01 EST 2018

1   4054117k waiting for dumping
1  6671k waiting for dumping
1   222k waiting for dumping
1  2568k waiting for dumping
1  6846k waiting for dumping
1125447k waiting for dumping
1 91372k waiting for dumping
192k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
1290840k waiting for dumping
1 76601k waiting for dumping
186k waiting for dumping
1 71414k waiting for dumping
0  44184811k waiting for dumping
1   281k waiting for dumping
1  6981k waiting for dumping
150k waiting for dumping
1 86968k waiting for dumping
1 81649k waiting for dumping
1359952k waiting for dumping
0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
1 73966k waiting for dumping
1821398k waiting for dumping
1674198k waiting for dumping
0 233106841k dump done (7:23:37), waiting for writing to tape
132k waiting for dumping
132k waiting for dumping
1166876k waiting for dumping
132k waiting for dumping
1170895k waiting for dumping
1162817k waiting for dumping
0 failed: planner: [Request to client failed: Connection timed out]
132k waiting for dumping
132k waiting for dumping
053k waiting for dumping
0  77134628k waiting for dumping
1  2911k waiting for dumping
136k waiting for dumping
132k waiting for dumping
1 84935k waiting for dumping

SUMMARY  part  real  estimated
size   size
partition   :  43
estimated   :  42559069311k
flush   :   0 0k
failed  :   10k   (  0.00%)
wait for dumping:  40128740001k   ( 23.03%)
dumping to tape :   00k   (  0.00%)
dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
wait to flush   :   0 0k 0k (100.00%) (  0.00%)
writing to tape :   0 0k 0k (  0.00%) (  0.00%)
failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
taped   :   0 0k 0k (  0.00%) (  0.00%)
9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)
chunker0 busy   :  6:17:03  ( 98.28%)
  dumper0 busy   :  6:17:03  ( 98.28%)
  0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
  1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)



RE: Another dumper question

2018-11-26 Thread Cuttler, Brian R (HEALTH)
Did you check your maxdumps and inparallel parameters?

-Original Message-
From: owner-amanda-us...@amanda.org  On Behalf 
Of Chris Nighswonger
Sent: Monday, November 26, 2018 1:34 PM
To: amanda-users@amanda.org
Subject: Another dumper question

So in one particular configuration I have the following lines:

inparallel 10
dumporder "STSTSTSTST"

I would assume that Amanda would spawn 10 dumpers in parallel and execute 
them, giving priority alternately to largest size and largest time. I would 
assume that Amanda would do some sort of sorting of the DLEs based on size and 
time, set them in descending order, and then run the first 10 from the list, 
thereby utilizing all 10 permitted dumpers in parallel.

However, based on the amstatus excerpt below, it looks like Amanda simply 
starts with the largest size and runs the DLEs one at a time, not making 
efficient use of parallel dumpers at all. This sometimes has the unhappy 
result of amdump still running when the next backup is started.

I have changed the dumporder to STSTStstst for tonight's run to see if that 
makes any difference. But I don't have much hope it will.
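
(For anyone following along, my reading of the dumporder letters from
amanda.conf(5) -- treat this as a sketch, not gospel:)

    dumporder "STSTStstst"   # one letter per dumper:
                             #   S / s = prefer largest / smallest estimated size
                             #   T / t = prefer largest / smallest estimated time
                             #   B / b = prefer largest / smallest bandwidth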

Any thoughts?

Kind regards,
Chris




>From Mon Nov 26 01:00:01 EST 2018

1   4054117k waiting for dumping
1  6671k waiting for dumping
1   222k waiting for dumping
1  2568k waiting for dumping
1  6846k waiting for dumping
1125447k waiting for dumping
1 91372k waiting for dumping
192k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
132k waiting for dumping
1290840k waiting for dumping
1 76601k waiting for dumping
186k waiting for dumping
1 71414k waiting for dumping
0  44184811k waiting for dumping
1   281k waiting for dumping
1  6981k waiting for dumping
150k waiting for dumping
1 86968k waiting for dumping
1 81649k waiting for dumping
1359952k waiting for dumping
0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
1 73966k waiting for dumping
1821398k waiting for dumping
1674198k waiting for dumping
0 233106841k dump done (7:23:37), waiting for writing to tape
132k waiting for dumping
132k waiting for dumping
1166876k waiting for dumping
132k waiting for dumping
1170895k waiting for dumping
1162817k waiting for dumping
0 failed: planner: [Request to client failed: Connection timed out]
132k waiting for dumping
132k waiting for dumping
053k waiting for dumping
0  77134628k waiting for dumping
1  2911k waiting for dumping
136k waiting for dumping
132k waiting for dumping
1 84935k waiting for dumping

SUMMARY  part  real  estimated
   size   size
partition   :  43
estimated   :  42559069311k
flush   :   0 0k
failed  :   10k   (  0.00%)
wait for dumping:  40128740001k   ( 23.03%)
dumping to tape :   00k   (  0.00%)
dumping :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
dumped  :   1 233106841k 231368306k (100.75%) ( 41.70%)
wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
wait to flush   :   0 0k 0k (100.00%) (  0.00%)
writing to tape :   0 0k 0k (  0.00%) (  0.00%)
failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
taped   :   0 0k 0k (  0.00%) (  0.00%)
9 dumpers idle  : 0
taper status: Idle
taper qlen: 1
network free kps: 0
holding space   : 436635431k ( 50.26%)
chunker0 busy   :  6:17:03  ( 98.28%)
 dumper0 busy   :  6:17:03  ( 98.28%)
 0 dumpers busy :  0:06:34  (  1.72%)   0:  0:06:34  (100.00%)
 1 dumper busy  :  6:17:03  ( 98.28%)   0:  6:17:03  (100.00%)