On Saturday 10 November 2018 13:55:30 Nathan Stratton Treadway wrote:

> On Sat, Nov 10, 2018 at 12:48:15 -0500, Gene Heskett wrote:
> > On Saturday 10 November 2018 10:47:03 Nathan Stratton Treadway wrote:
> > > On Tue, Oct 30, 2018 at 15:51:36 -0400, Gene Heskett wrote:
> > > > I just changed the length of the dumpcycle and runspercycle up
> > > > to 10, about last Friday while I was making the bump* stuff
> > > > more attractive, but the command below reports that there are 5
> > > > filesystems out of date: su amanda -c "/usr/local/sbin/amadmin
> > > > Daily balance"
> > > >
> > > >  due-date  #fs    orig MB     out MB   balance
> > > > ----------------------------------------------
> > > > 10/30 Tue    5          0          0      ---
> > > > 10/31 Wed    1      17355       8958    -45.3%
> > > > 11/01 Thu    2      10896      10887    -33.5%
> > > > 11/02 Fri    4      35944       9298    -43.2%
> > > > 11/03 Sat    4      14122      10835    -33.8%
> > > > 11/04 Sun    3      57736      57736   +252.7%
> > > > 11/05 Mon    2      39947      30635    +87.1%
> > > > 11/06 Tue    8       4235       4215    -74.3%
> > > > 11/07 Wed    4      19503      14732    -10.0%
> > > > 11/08 Thu   32      31783      16408     +0.2%
> > > > ----------------------------------------------
> > > > TOTAL       65     231521     163704     16370
> > >
> > > Okay, now that the small-DLE distraction is out of the way, we can
> > > get back to the original question regarding the scheduling of
> > > dumps over your dumpcycle.
> > >
> > > What does your "balance" output show now?
> > >
> > > (In particular, I'm curious if there is still one day with a huge
> > > surge like shown for 11/04 in the listing above.)
> > >
> > >
> > >                                           Nathan
> >
> > It's somewhat better:
> > amanda@coyote:/amandatapes/Dailys/data$ /usr/local/sbin/amadmin
> > Daily balance
> >
> >  due-date  #fs    orig MB     out MB   balance
> > ----------------------------------------------
> > 11/10 Sat    1       7912       3145    -78.7%
> > 11/11 Sun    1      10886      10886    -26.1%
> > 11/12 Mon    1      32963       7875    -46.6%
> > 11/13 Tue    1       7688       7688    -47.8%
> > 11/14 Wed    2      22109      22109    +50.0%
> > 11/15 Thu    4      75027      46623   +216.3%
> > 11/16 Fri    6       8257       6109    -58.6%
> > 11/17 Sat   29      14034       8932    -39.4%
> > 11/18 Sun    4      21281      16842    +14.3%
> > 11/19 Mon   18      34599      17188    +16.6%
> > ----------------------------------------------
> > TOTAL       67     234756     147397     14739
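> >
> > (If I'm reading it right, the balance column is just each day's
> > "out MB" measured against the per-day average -- the 14739 at the
> > end of the TOTAL line, i.e. total out MB divided by the 10 runs
> > per cycle. A quick sanity check of that guess, with the 14739
> > hardcoded from this run's TOTAL line:
> >
> >   /usr/local/sbin/amadmin Daily balance | \
> >     awk '$1 ~ /^[01][0-9]\// { printf "%s %+.1f%%\n", $1, $5*100/14739 - 100 }'
> >
> > reproduces the printed percentages, e.g. 46623/14739 puts 11/15
> > at +216.3%.)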
> >
> > It will be interesting to see if it continues to get "better".
> > I should think it will be under 150% by the 15th if so, provided
> > the planner behaves itself.
>
> Offhand, I am still suspicious of the entry here for 11/15, both
> because the data size is very high for only 4 DLEs and because you've
> gone through a full cycle since your previous listing and the huge
> surge hasn't evened out very much.  So I'm guessing that one of those
> DLEs is probably very large compared to all your other ones....
>
> Anyway, my next step would be to figure out which 4 DLEs are the ones
> in that group, which you should be able to do by looking through the
> output of "/usr/local/sbin/amadmin Daily due".  For example, try
>   /usr/local/sbin/amadmin Daily due | grep "5 day"
> and see if you get 4 DLEs listed (and adjust the count by a day if
> that shows you the wrong group of DLEs).
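>
> (Sketch only, assuming the "due" output prints "N days" on each
> DLE's line the way the grep above implies; this just counts the
> DLEs in every bucket so you can spot the odd one out:
>
>   for d in 0 1 2 3 4 5 6 7 8 9; do
>     printf "%d days: " $d
>     /usr/local/sbin/amadmin Daily due | grep -c " $d day"
>   done
>
> The bucket holding only 4 DLEs but tens of GB is the one to dig
> into.)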
>
> Once you see which four are in that group, you can cross reference
> with your Amanda mail reports to figure out the relative sizes of
> those particular DLEs.
>
> > I'm of the opinion now that this bug had been tickling things
> > wrong for much longer than the last 30 days or so that it's been
> > visible, since the update from the 3.3.7p1 I'd been running
> > forever. Balance reports weren't all that encouraging, and the
> > planner was half out of its mind trying to shuffle things to
> > help, without ever getting into "balance".
>
> (I am pretty sure that the small-DLE bug didn't affect the overall
> balance.  You can see from the "balance" output on 10/30 that the 5
> DLEs in question all show up as needing to be full-dumped that day --
> but the total size for that group is still zero.  So I suspect that
> there is/are some other factor(s) behind the single-day surge and
> whatever "shuffling" has been going on ...)
>
> > We shall see. Perhaps I could add a balance report to the end of
> > backup.sh so I get it emailed to me every morning? I'll take a look
> > after ingesting enough caffeine to get both eyes open
> > simultaneously.
>
> Yes, if you are trying to really understand what's going on with the
> scheduling it can certainly be useful to be able to watch the
> day-to-day changes to the balance listing.
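>
> A minimal sketch of that (the recipient address is only a
> placeholder -- wire it into however backup.sh already sends its
> mail):
>
>   # tail end of backup.sh: mail the current balance listing
>   su amanda -c "/usr/local/sbin/amadmin Daily balance" | \
>     mail -s "Daily balance $(date +%F)" gene@coyote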
>
>                                                       Nathan
The biggest problem is where to put the stuff I have downloaded, which 
includes several multi-gigabyte ISOs, because I have 3 platforms here: 
the usual Debian PC stuff, the Raspberry Pi stuff, and the Rock64 
stuff. I am attempting to split the /home/gene/Public tree up as we 
speak, with limited success, because I also want to keep stuff properly 
sorted for easy finding. The dlds, download, and Downloads trees are 
loaded already, and they are each of course their own DLEs. So much to 
do, so little time, not helped by my being a "code packrat". :)
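
Roughly what I'm aiming at in the disklist -- the dumptype name here 
is only an example, it'll be whatever I've actually defined in 
amanda.conf:

  # one DLE per subtree so the planner can schedule them separately
  coyote  /home/gene/Public/dlds       comp-user-tar
  coyote  /home/gene/Public/download   comp-user-tar
  coyote  /home/gene/Public/Downloads  comp-user-tar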



Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>
