From: Quentin Perret <qper...@google.com>

commit 8e1ac4299a6e8726de42310d9c1379f188140c71 upstream.

enqueue_task_fair() attempts to skip the overutilized update for new
tasks, as their util_avg is not accurate yet. However, the flag we check
to do so is overwritten earlier in the function, which makes the
condition effectively a no-op.

Fix this by saving the flag early on.
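
For context, a condensed paraphrase of the relevant flow in
enqueue_task_fair() (a sketch, not the exact upstream code; the helper
and struct names are as in kernel/sched/fair.c) showing where the
caller's flags get clobbered before the check:

	/* Sketch, not verbatim kernel code. */
	static void
	enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
	{
		struct sched_entity *se = &p->se;

		for_each_sched_entity(se) {
			if (se->on_rq)
				break;
			enqueue_entity(cfs_rq_of(se), se, flags);
			/*
			 * Parent entities are enqueued as wakeups; this
			 * overwrites the caller-provided flags, so a later
			 * ENQUEUE_WAKEUP test cannot tell a brand-new task
			 * from a woken one.
			 */
			flags = ENQUEUE_WAKEUP;
		}
		/* ... */
		if (flags & ENQUEUE_WAKEUP)	/* effectively always true here */
			update_overutilized_status(rq);
	}

Saving task_new = !(flags & ENQUEUE_WAKEUP) before the loop preserves the
information the later check actually needs.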

Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
Reported-by: Rick Yiu <rick...@google.com>
Signed-off-by: Quentin Perret <qper...@google.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schnei...@arm.com>
Link: https://lkml.kernel.org/r/20201112111201.2081902-1-qper...@google.com
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 kernel/sched/fair.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5228,6 +5228,7 @@ enqueue_task_fair(struct rq *rq, struct
        struct cfs_rq *cfs_rq;
        struct sched_entity *se = &p->se;
        int idle_h_nr_running = task_has_idle_policy(p);
+       int task_new = !(flags & ENQUEUE_WAKEUP);
 
        /*
         * The code below (indirectly) updates schedutil which looks at
@@ -5299,7 +5300,7 @@ enqueue_throttle:
                 * into account, but that is not straightforward to implement,
                 * and the following generally works well enough in practice.
                 */
-               if (flags & ENQUEUE_WAKEUP)
+               if (!task_new)
                        update_overutilized_status(rq);
 
        }