On Tue, Jun 20, 2023 at 08:36:58AM +0200, Claudio Jeker wrote:
> On Mon, Jun 19, 2023 at 04:45:03PM -0500, Scott Cheloha wrote:
> > On Mon, Jun 19, 2023 at 10:22:56AM +0200, Claudio Jeker wrote:
> > > On Sun, Jun 18, 2023 at 12:43:18PM -0500, Scott Cheloha wrote:
> > > > On Sun, Jun 18, 2023 at 12:36:07PM -0500, Scott Cheloha wrote:
> > > > > On Sun, Jun 18, 2023 at 07:32:56PM +0200, Mark Kettenis wrote:
> > > > > > > Date: Sun, 18 Jun 2023 12:27:17 -0500
> > > > > > > From: Scott Cheloha <scottchel...@gmail.com>
> > > > > > > 
> > > > > > > The intent here is to update the load averages every five seconds.
> > > > > > > However:
> > > > > > > 
> > > > > > > 1. Measuring elapsed time with the UTC clock is unreliable
> > > > > > >    because of settimeofday(2).
> > > > > > > 
> > > > > > > 2. "Call uvm_loadav() no more than once every five seconds"
> > > > > > >    is not equivalent to "call uvm_loadav() if the current
> > > > > > >    second is equal to zero, modulo five".
> > > > > > > 
> > > > > > >    Not hard to imagine edge cases where timeouts are delayed and
> > > > > > >    the load averages are not updated.
> > > > > > > 
> > > > > > > So, (1) use the monotonic clock, and (2) keep the next
> > > > > > > uvm_loadav() call time in a static value.
> > > > > > > 
> > > > > > > ok?
> > > > > > 
> > > > > > I really don't see why the calculation of something vague like
> > > > > > the load average warrants complicating the code like this.
> > > > > 
> > > > > Aren't load averages used to make decisions about thread placement in
> > > > > the scheduler?
> > > > > 
> > > > > Regardless, the code is still wrong.  At minimum you should use
> > > > > getuptime(9).
> > > > 
> > > > Maybe I misunderstood.  Are you suggesting this?
> > > > 
> > > > 
> > > >         now = getuptime();
> > > >         if (now >= next_uvm_loadav) {
> > > >                 next_uvm_loadav = now + 5;
> > > >                 uvm_loadav(...);
> > > >         }
> > > > 
> > > > The patch I posted preserves the current behavior.  It is equivalent
> > > > to:
> > > > 
> > > >         while (next_uvm_loadav <= now)
> > > >                 next_uvm_loadav += 5;
> > > > 
> > > > Changing it to (now + 5) changes the behavior.
> > > 
> > > To be honest, I think uvm_meter() should be called via timeout(9)
> > > and not via the current path through schedcpu().
> > > At some point schedcpu() may be removed, and then we will need to
> > > fix this properly anyway.
> > 
> > See attached.
> > 
> > This changes the wakeup interval for proc0 (the swapper) on kernels
> > where maxslp has been patched (highly unusual).
> > 
> > On default kernels, maxslp is 20, which is evenly divisible by 5, so
> > the patch does not change the proc0 wakeup interval.
> > 
> > Another approach would be to run the uvm_meter() timeout every second
> > and track the uvm_loadav() deadline in a static variable.
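
For what it's worth, a rough sketch of that second approach (untested,
assumed API usage; drive uvm_meter() from a one-second timeout(9) and
keep the uvm_loadav() deadline with getuptime(9) in a static variable)
might look like:

        static struct timeout uvm_meter_to;

        void
        uvm_meter(void *arg)
        {
                static time_t next_uvm_loadav;
                time_t now;

                /* Rearm ourselves for one second from now. */
                timeout_add_sec(&uvm_meter_to, 1);

                now = getuptime();
                if (now >= next_uvm_loadav) {
                        /* Preserve phase: step the deadline past now. */
                        while (next_uvm_loadav <= now)
                                next_uvm_loadav += 5;
                        uvm_loadav(&averunnable);
                }
        }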
> 
> I think the proper fix is to just remove this wakeup, since proc0 does
> nothing anymore.
> 
> This wakeup triggers this other bit of code:
> 
>         /*
>          * proc0: nothing to do, back to sleep
>          */
>         while (1)
>                 tsleep_nsec(&proc0, PVM, "scheduler", INFSLP);
>         /* NOTREACHED */
> 
> at the end of main(). So we wake up a while (1) loop.
> 
> I need to look more into this. I wonder if there is some silly side-effect
> that this code tickles. All of this just smells like old cruft.

So in rev. 1.45 of uvm_glue.c miod@ killed the swapper and with that
turned uvm_scheduler() into a dumb while (1) tsleep() loop.

Since then this periodic update has not done anything apart from burning
CPU time. Later on, that loop was moved by blambert@ to init_main.c, but
nobody ever looked at the wakeup() side of this tsleep_nsec().

So I guess we can burn this part of the code.
-- 
:wq Claudio
