inparallel 10

maxdumps is not listed, so I'm assuming the default of 1 is in effect.

I'm not sure that the maxdumps parameter would affect dumping DLEs
from multiple clients in parallel, though. The manpage states, "The
maximum number of backups from a single host that Amanda will attempt
to run in parallel." That seems to indicate that this parameter
controls parallel dumps of DLEs on a single client.
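
If that reading is right, the two parameters limit different things. A
sketch, going only by the man page definitions (not tested here):

    inparallel 10   # total simultaneous dumpers, across all clients
    maxdumps 2      # simultaneous dumps allowed on any one client

So with maxdumps left at the default of 1, each client is still limited
to one dump at a time, but dumps from different clients should be able
to run side by side up to the inparallel limit.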

Kind regards,
Chris
On Mon, Nov 26, 2018 at 1:50 PM Cuttler, Brian R (HEALTH)
<brian.cutt...@health.ny.gov> wrote:
>
> Did you check your maxdumps and inparallel parameters?
>
> -----Original Message-----
> From: owner-amanda-us...@amanda.org <owner-amanda-us...@amanda.org> On Behalf 
> Of Chris Nighswonger
> Sent: Monday, November 26, 2018 1:34 PM
> To: amanda-users@amanda.org
> Subject: Another dumper question
>
> So in one particular configuration I have the following lines:
>
> inparallel 10
> dumporder "STSTSTSTST"
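>
> (If I'm reading the amanda.conf man page right, each character in the
> dumporder string sets the selection priority for one dumper: 's'/'S'
> for smallest/largest size, 't'/'T' for smallest/largest time, and
> 'b'/'B' for smallest/largest bandwidth.)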
>
> I would assume that Amanda would spawn 10 dumpers in parallel and run
> them, giving priority alternately to largest size and largest time. I
> would assume that Amanda would sort the DLEs by size and time, set
> them in descending order, and then run the first 10 from that list,
> thereby utilizing all 10 permitted dumpers in parallel.
>
> However, based on the amstatus excerpt below, it looks like Amanda
> simply starts with the largest DLE and runs the DLEs one at a time,
> not making efficient use of parallel dumpers at all. This has the
> unhappy result, at times, of amdump still running when the next
> backup is started.
>
> I have changed the dumporder to STSTStstst for tonight's run to see if
> that makes any difference, but I don't have much hope that it will.
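>
> (If the per-character reading above is right, STSTStstst would have
> the first five dumpers prefer the largest DLEs and the last five the
> smallest, so the small DLEs should at least stop queuing behind the
> big ones. That's a guess, not something I've verified.)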
>
> Any thoughts?
>
> Kind regards,
> Chris
>
>
>
>
> From Mon Nov 26 01:00:01 EST 2018
>
> 1   4054117k waiting for dumping
> 1      6671k waiting for dumping
> 1       222k waiting for dumping
> 1      2568k waiting for dumping
> 1      6846k waiting for dumping
> 1    125447k waiting for dumping
> 1     91372k waiting for dumping
> 1        92k waiting for dumping
> 1        32k waiting for dumping
> 1        32k waiting for dumping
> 1        32k waiting for dumping
> 1        32k waiting for dumping
> 1    290840k waiting for dumping
> 1     76601k waiting for dumping
> 1        86k waiting for dumping
> 1     71414k waiting for dumping
> 0  44184811k waiting for dumping
> 1       281k waiting for dumping
> 1      6981k waiting for dumping
> 1        50k waiting for dumping
> 1     86968k waiting for dumping
> 1     81649k waiting for dumping
> 1    359952k waiting for dumping
> 0 198961004k dumping 159842848k ( 80.34%) (7:23:39)
> 1     73966k waiting for dumping
> 1    821398k waiting for dumping
> 1    674198k waiting for dumping
> 0 233106841k dump done (7:23:37), waiting for writing to tape
> 1        32k waiting for dumping
> 1        32k waiting for dumping
> 1    166876k waiting for dumping
> 1        32k waiting for dumping
> 1    170895k waiting for dumping
> 1    162817k waiting for dumping
> 0 failed: planner: [Request to client failed: Connection timed out]
> 1        32k waiting for dumping
> 1        32k waiting for dumping
> 0        53k waiting for dumping
> 0  77134628k waiting for dumping
> 1      2911k waiting for dumping
> 1        36k waiting for dumping
> 1        32k waiting for dumping
> 1     84935k waiting for dumping
>
> SUMMARY          part      real  estimated
>                            size       size
> partition       :  43
> estimated       :  42            559069311k
> flush           :   0         0k
> failed          :   1                    0k           (  0.00%)
> wait for dumping:  40            128740001k           ( 23.03%)
> dumping to tape :   0                    0k           (  0.00%)
> dumping         :   1 159842848k 198961004k ( 80.34%) ( 28.59%)
> dumped          :   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait for writing:   1 233106841k 231368306k (100.75%) ( 41.70%)
> wait to flush   :   0         0k         0k (100.00%) (  0.00%)
> writing to tape :   0         0k         0k (  0.00%) (  0.00%)
> failed to tape  :   0         0k         0k (  0.00%) (  0.00%)
> taped           :   0         0k         0k (  0.00%) (  0.00%)
> 9 dumpers idle  : 0
> taper status: Idle
> taper qlen: 1
> network free kps:         0
> holding space   : 436635431k ( 50.26%)
> chunker0 busy   :  6:17:03  ( 98.28%)
>  dumper0 busy   :  6:17:03  ( 98.28%)
>  0 dumpers busy :  0:06:34  (  1.72%)                   0:  0:06:34  (100.00%)
>  1 dumper busy  :  6:17:03  ( 98.28%)                   0:  6:17:03  (100.00%)
