On Thu, Jun 30, 2016 at 03:20:37PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 30, 2016 at 02:52:26PM +0200, Frederic Weisbecker wrote:
> > On Tue, Jun 14, 2016 at 05:58:42PM +0200, Peter Zijlstra wrote:
> 
> > > Why not add the division to the nohz exit path only?
> > 
> > It would be worse I think because we may exit much more often from nohz
> > than we reach a sched_avg_period().
> > 
> > So the only safe optimization I can do for now is:
> 
> How about something like this then?
> 
> ---
> 
>  kernel/sched/core.c | 19 +++++++++++++++++--
>  1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 3387e4f14fc9..fd1ae4c4105f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -665,9 +665,23 @@ bool sched_can_stop_tick(struct rq *rq)
>  
>  void sched_avg_update(struct rq *rq)
>  {
> -     s64 period = sched_avg_period();
> +     s64 delta, period = sched_avg_period();
>  
> -     while ((s64)(rq_clock(rq) - rq->age_stamp) > period) {
> +     delta = (s64)(rq_clock(rq) - rq->age_stamp);
> +     if (likely(delta < period))
> +             return;
> +
> +     if (unlikely(delta > 3*period)) {
> +             int pending;
> +             u64 rem;
> +
> +             pending = div64_u64_rem(delta, period, &rem);
> +             rq->age_stamp += delta - rem;
> +             rq->rt_avg >>= pending;
> +             return;
> +     }
> +
> +     while (delta > period) {
>               /*
>                * Inline assembly required to prevent the compiler
>                * optimising this loop into a divmod call.
> @@ -675,6 +689,7 @@ void sched_avg_update(struct rq *rq)
>                */
>               asm("" : "+rm" (rq->age_stamp));
>               rq->age_stamp += period;
> +             delta -= period;
>               rq->rt_avg /= 2;
>       }
>  }

Makes sense. I'm going to run some tests. We might also want to precompute
3*period.
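
For reference, a quick standalone userspace sketch of the idea (the helper
names, the fixed PERIOD value and the main() driver are made up for
illustration; this is not kernel code): right-shifting by pending halves the
average once per whole elapsed period, so the fast path should decay it
exactly like the original loop does.

/*
 * Standalone demo: decay an average by halving it once per elapsed
 * period, either with the original halving loop or with the shift-based
 * fast path suggested above.  Hypothetical names, userspace only.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define PERIOD 1000000ULL	/* arbitrary demo period, in "ns" */

static void decay_loop(uint64_t *age, uint64_t *avg, uint64_t now)
{
	while ((int64_t)(now - *age) > (int64_t)PERIOD) {
		*age += PERIOD;
		*avg /= 2;
	}
}

static void decay_fast(uint64_t *age, uint64_t *avg, uint64_t now)
{
	int64_t delta = now - *age;

	if (delta > 3 * (int64_t)PERIOD) {
		uint64_t pending = delta / PERIOD;	/* whole periods elapsed */
		uint64_t rem = delta % PERIOD;

		*age += delta - rem;	/* advance by whole periods only */
		*avg >>= pending;	/* halve once per period (pending < 64 assumed) */
		return;
	}

	while (delta > (int64_t)PERIOD) {
		*age += PERIOD;
		delta -= PERIOD;
		*avg /= 2;
	}
}

int main(void)
{
	uint64_t age1 = 0, avg1 = 1 << 20;
	uint64_t age2 = 0, avg2 = 1 << 20;
	uint64_t now = 7 * PERIOD + PERIOD / 2;	/* 7.5 periods elapsed */

	decay_loop(&age1, &avg1, now);
	decay_fast(&age2, &avg2, now);

	printf("loop: age=%" PRIu64 " avg=%" PRIu64 "\n", age1, avg1);
	printf("fast: age=%" PRIu64 " avg=%" PRIu64 "\n", age2, avg2);
	return 0;
}

Both paths end up with age advanced by seven whole periods and the average
halved seven times, which is the behaviour I'd expect to preserve.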

Thanks.
