On Thu, 12 Nov 2020 at 12:12, Quentin Perret <[email protected]> wrote:
>
> enqueue_task_fair() attempts to skip the overutilized update for new
> tasks as their util_avg is not accurate yet. However, the flag we check
> to do so is overwritten earlier on in the function, which makes the
> condition pretty much a nop.
>
> Fix this by saving the flag early on.
>
> Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
> Reported-by: Rick Yiu <[email protected]>
> Signed-off-by: Quentin Perret <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>

> ---
>  kernel/sched/fair.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..f3ee60b92718 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  	struct cfs_rq *cfs_rq;
>  	struct sched_entity *se = &p->se;
>  	int idle_h_nr_running = task_has_idle_policy(p);
> +	int task_new = !(flags & ENQUEUE_WAKEUP);
>
>  	/*
>  	 * The code below (indirectly) updates schedutil which looks at
> @@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  	 * into account, but that is not straightforward to implement,
>  	 * and the following generally works well enough in practice.
>  	 */
> -	if (flags & ENQUEUE_WAKEUP)
> +	if (!task_new)
>  		update_overutilized_status(rq);
>
>  enqueue_throttle:
> --
> 2.29.2.222.g5d2a92d10f8-goog
>
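
For anyone following along, here is a minimal userspace sketch of the pattern the patch fixes (plain C with simplified names; fake_enqueue, the loop and the constants are stand-ins, not the real scheduler code): the caller's flags value is reused as scratch state inside the enqueue loop, so testing it afterwards no longer says whether the task was newly forked, while caching the predicate up front does.

    #include <stdio.h>

    #define ENQUEUE_WAKEUP 0x01

    static void fake_enqueue(int flags, int *wakeup_after_loop, int *task_new)
    {
    	/* The fix: record "is this a brand-new task?" before flags is reused. */
    	*task_new = !(flags & ENQUEUE_WAKEUP);

    	for (int level = 0; level < 3; level++) {
    		/*
    		 * Stand-in for the per-sched_entity enqueue loop, which
    		 * reassigns flags = ENQUEUE_WAKEUP for the parent levels
    		 * and so loses the caller's original value.
    		 */
    		flags = ENQUEUE_WAKEUP;
    	}

    	/* The old check: after the loop this is effectively always true. */
    	*wakeup_after_loop = !!(flags & ENQUEUE_WAKEUP);
    }

    int main(void)
    {
    	int wakeup_after_loop, task_new;

    	/* Enqueue of a newly forked task: caller passes no ENQUEUE_WAKEUP. */
    	fake_enqueue(0, &wakeup_after_loop, &task_new);
    	printf("old check sees wakeup: %d, saved task_new: %d\n",
    	       wakeup_after_loop, task_new);
    	return 0;
    }

Built with gcc, this prints that the old check still reports a wakeup for the newly forked task, while the cached task_new correctly identifies it as new, which is the behaviour the one-line fix restores.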

