On Sat, Apr 29, 2017 at 12:09:24AM +0200, Peter Zijlstra wrote:
> On Mon, Apr 10, 2017 at 11:18:29AM +0200, Vincent Guittot wrote:
> > +++ b/include/linux/sched.h
> > @@ -313,6 +313,7 @@ struct load_weight {
> >   */
> >  struct sched_avg {
> >     u64                             last_update_time;
> > +   u64                             stolen_idle_time;
> >     u64                             load_sum;
> >     u32                             util_sum;
> >     u32                             period_contrib;
> 
> > +           if (sa->util_sum < (LOAD_AVG_MAX * 1000)) {
> > +                   /*
> > +                    * Add the idle time stolen by running at lower compute
> > +                    * capacity
> > +                    */
> > +                   delta += sa->stolen_idle_time;
> > +           }
> > +           sa->stolen_idle_time = 0;
> 
> 
> So I was wondering if stolen_idle_time really needs to be a u64. Afaict
> we'll be at LOAD_AVG_MAX after LOAD_AVG_MAX_N periods, or LOAD_AVG_MAX_N
> * LOAD_AVG_PERIOD time, which ends up being 11040.

* 1024 of course, but still easily fits in u32.
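
For reference, a quick standalone check of that bound (just an illustration
using the PELT constants named above, not kernel code):

#include <stdint.h>
#include <stdio.h>

/* PELT constants as referenced above */
#define LOAD_AVG_PERIOD		32
#define LOAD_AVG_MAX_N		345

int main(void)
{
	/*
	 * Worst case before util_sum saturates, per the arithmetic above:
	 * LOAD_AVG_MAX_N * LOAD_AVG_PERIOD * 1024 = 11304960
	 */
	uint64_t max_stolen = (uint64_t)LOAD_AVG_MAX_N * LOAD_AVG_PERIOD * 1024;

	printf("max stolen time: %llu (UINT32_MAX: %llu)\n",
	       (unsigned long long)max_stolen,
	       (unsigned long long)UINT32_MAX);

	/* 11304960 is far below 4294967295, so a u32 is plenty */
	return 0;
}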
