On Fri, Jan 29, 2021 at 06:27:27PM +0100, Vincent Guittot wrote:
[...]
> > update_blocked_averages with preempt and irq off is not a good thing
> > because we don't manage the number of cfs_rq to update, and I'm going
> > to provide a patchset for this
> 
> The patch below moves the update of the blocked load of CPUs outside
> newidle_balance().
> 
> Instead, the update is done with the usual idle load balance update. I'm
> working on an additional patch that will select the CPU that is about to
> become idle instead of a random idle CPU, but this first step fixes the
> problem of lots of updates in newly idle.
> 
> Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>

I confirmed that with this patch I no longer see the preempt-off issues
related to update_blocked_averages() (tested using the preemptoff tracer).
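
In case it is useful to others, below is a minimal sketch of the kind of
check I mean: it just arms the preemptoff tracer through tracefs and reads
back the worst preempt-off section afterwards. The paths assume tracefs is
mounted at /sys/kernel/tracing (it may also be under
/sys/kernel/debug/tracing), CONFIG_PREEMPT_TRACER must be enabled, and the
actual workload is elided.

/*
 * Sketch: arm the preemptoff tracer, run the workload, then read the worst
 * preempt-off latency (in us) recorded by the tracer.
 */
#include <stdio.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (f) {
		fputs(val, f);
		fclose(f);
	}
}

int main(void)
{
	char buf[64] = "";
	FILE *f;

	write_str("/sys/kernel/tracing/current_tracer", "preemptoff");
	write_str("/sys/kernel/tracing/tracing_max_latency", "0");
	write_str("/sys/kernel/tracing/tracing_on", "1");

	/* ... run the workload that triggers newidle_balance() here ... */

	f = fopen("/sys/kernel/tracing/tracing_max_latency", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("max preempt-off latency (us): %s", buf);
		fclose(f);
	}
	return 0;
}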

I went through the patch and it looks correct to me. I will review it further
and await reviews from others as well, and then backport the patch to our
kernels. Thanks Vince and everyone!

Tested-by: Joel Fernandes (Google) <j...@joelfernandes.org>

thanks,

 - Joel



> ---
>  kernel/sched/fair.c | 32 +++-----------------------------
>  1 file changed, 3 insertions(+), 29 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 197a51473e0c..8200b1d4df3d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7421,8 +7421,6 @@ enum migration_type {
>  #define LBF_NEED_BREAK       0x02
>  #define LBF_DST_PINNED  0x04
>  #define LBF_SOME_PINNED      0x08
> -#define LBF_NOHZ_STATS       0x10
> -#define LBF_NOHZ_AGAIN       0x20
>  
>  struct lb_env {
>       struct sched_domain     *sd;
> @@ -8426,9 +8424,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>       for_each_cpu_and(i, sched_group_span(group), env->cpus) {
>               struct rq *rq = cpu_rq(i);
>  
> -             if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
> -                     env->flags |= LBF_NOHZ_AGAIN;
> -
>               sgs->group_load += cpu_load(rq);
>               sgs->group_util += cpu_util(i);
>               sgs->group_runnable += cpu_runnable(rq);
> @@ -8969,11 +8964,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
>       struct sg_lb_stats tmp_sgs;
>       int sg_status = 0;
>  
> -#ifdef CONFIG_NO_HZ_COMMON
> -     if (env->idle == CPU_NEWLY_IDLE && READ_ONCE(nohz.has_blocked))
> -             env->flags |= LBF_NOHZ_STATS;
> -#endif
> -
>       do {
>               struct sg_lb_stats *sgs = &tmp_sgs;
>               int local_group;
> @@ -9010,15 +9000,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
>       /* Tag domain that child domain prefers tasks go to siblings first */
>       sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
>  
> -#ifdef CONFIG_NO_HZ_COMMON
> -     if ((env->flags & LBF_NOHZ_AGAIN) &&
> -         cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
> -
> -             WRITE_ONCE(nohz.next_blocked,
> -                        jiffies + msecs_to_jiffies(LOAD_AVG_PERIOD));
> -     }
> -#endif
> -
>       if (env->sd->flags & SD_NUMA)
>               env->fbq_type = fbq_classify_group(&sds->busiest_stat);
>  
> @@ -10547,14 +10528,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
>               return;
>  
>       raw_spin_unlock(&this_rq->lock);
> -     /*
> -      * This CPU is going to be idle and blocked load of idle CPUs
> -      * need to be updated. Run the ilb locally as it is a good
> -      * candidate for ilb instead of waking up another idle CPU.
> -      * Kick an normal ilb if we failed to do the update.
> -      */
> -     if (!_nohz_idle_balance(this_rq, NOHZ_STATS_KICK, CPU_NEWLY_IDLE))
> -             kick_ilb(NOHZ_STATS_KICK);
> +     kick_ilb(NOHZ_STATS_KICK);
>       raw_spin_lock(&this_rq->lock);
>  }
>  
> @@ -10616,8 +10590,6 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>                       update_next_balance(sd, &next_balance);
>               rcu_read_unlock();
>  
> -             nohz_newidle_balance(this_rq);
> -
>               goto out;
>       }
>  
> @@ -10683,6 +10655,8 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>  
>       if (pulled_task)
>               this_rq->idle_stamp = 0;
> +     else
> +             nohz_newidle_balance(this_rq);
>  
>       rq_repin_lock(this_rq, rf);
>  
> -- 
> 2.17.1
> 
> 
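
To summarize the net effect of the hunks above for anyone skimming
(reconstructed from the diff itself, not pasted from a tree): the newly idle
path no longer walks the NOHZ blocked load locally; it only requests a
stats-only ILB kick, and only when it failed to pull a task:

/* nohz_newidle_balance() after the patch, as reconstructed from the diff: */
static void nohz_newidle_balance(struct rq *this_rq)
{
	/* ... existing early-return checks unchanged ... */

	raw_spin_unlock(&this_rq->lock);
	/* Ask an idle CPU to refresh blocked load instead of doing it here. */
	kick_ilb(NOHZ_STATS_KICK);
	raw_spin_lock(&this_rq->lock);
}

/* ... and in newidle_balance(), the kick only happens when nothing was pulled: */
	if (pulled_task)
		this_rq->idle_stamp = 0;
	else
		nohz_newidle_balance(this_rq);
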
> > 
> > > for this.
> > >
> > > > Also, update_blocked_averages was supposed to be called in
> > > > newidle_balance when the coming idle duration is expected to be long
> > > > enough
> > >
> > > No, we do not want the schedule loop to take half a millisecond.
> > 
> > keep in mind that you are scaling frequency, so everything takes longer
> > at the lowest frequency/capacity ...
> > 
> > >
> > > > > > IIUC, your real problem is that newidle_balance is running whereas
> > > > > > a task is about to wake up on the cpu and we don't abort quickly
> > > > > > during this load_balance
> > > > >
> > > > > Yes.
> > > > >
> > > > > > so we could also try to abort earlier in case of newly idle load 
> > > > > > balance
> > > > >
> > > > > I think interrupts are disabled when the load balance runs, so
> > > > > there's no way for say an audio interrupt to even run in order to
> > > > > wake up a task. How would you know to abort the new idle load
> > > > > balance?
> > > > >
> > > > > Could you also elaborate on the drawbacks of the rate limiting
> > > > > patch we posted? Do you see a side effect?
> > > >
> > > > Your patch just tries to hide your problem rather than solve the
> > > > root cause.
> > >
> > > Agreed, the solution presented is a band aid and not a proper fix. It
> > > was just intended to illustrate the problem and start a discussion. I
> > > should have marked it RFC for sure.
> > >
> > > thanks!
> > >
> > > - Joel
