On Thu, Jun 06, 2019 at 10:21:01AM -0700 bseg...@google.com wrote:
> When a cfs_rq sleeps and returns its quota, we delay for 5ms before
> waking any throttled cfs_rqs to coalesce with other cfs_rqs going to
> sleep, as this has to be done outside of the rq lock we hold.
> 
> The current code waits for 5ms without any sleeps, instead of waiting
> for 5ms from the first sleep, which can delay the unthrottle more than
> we want. Switch this around so that we can't push this forward forever.
> 
> This requires an extra flag rather than using hrtimer_active, since we
> need to start a new timer if the current one is in the process of
> finishing.
> 
> Signed-off-by: Ben Segall <bseg...@google.com>
> Reviewed-by: Xunlei Pang <xlp...@linux.alibaba.com>
> ---
>  kernel/sched/fair.c  | 7 +++++++
>  kernel/sched/sched.h | 1 +
>  2 files changed, 8 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8213ff6e365d..2ead252cfa32 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4729,6 +4729,11 @@ static void start_cfs_slack_bandwidth(struct cfs_bandwidth *cfs_b)
>       if (runtime_refresh_within(cfs_b, min_left))
>               return;
>  
> +     /* don't push forwards an existing deferred unthrottle */
> +     if (cfs_b->slack_started)
> +             return;
> +     cfs_b->slack_started = true;
> +
>       hrtimer_start(&cfs_b->slack_timer,
>                       ns_to_ktime(cfs_bandwidth_slack_period),
>                       HRTIMER_MODE_REL);
> @@ -4782,6 +4787,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
>  
>       /* confirm we're still not at a refresh boundary */
>       raw_spin_lock_irqsave(&cfs_b->lock, flags);
> +     cfs_b->slack_started = false;
>       if (cfs_b->distribute_running) {
>               raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
>               return;
> @@ -4920,6 +4926,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
>       hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
>       cfs_b->slack_timer.function = sched_cfs_slack_timer;
>       cfs_b->distribute_running = 0;
> +     cfs_b->slack_started = false;
>  }
>  
>  static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index efa686eeff26..60219acda94b 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -356,6 +356,7 @@ struct cfs_bandwidth {
>       u64                     throttled_time;
>  
>       bool                    distribute_running;
> +     bool                    slack_started;
>  #endif
>  };
>  
> -- 
> 2.22.0.rc1.257.g3120a18244-goog
> 

I think this looks good. I like that it stops pushing the unthrottle out
further, even if it does not fix Dave's use case.

It does make it glaring that I should have used false/true for setting
distribute_running though :)


Acked-by: Phil Auld <pa...@redhat.com>
