4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

[ Upstream commit 0905f04eb21fc1c2e690bed5d0418a061d56c225 ]

If a newly created task is selected to go to a different CPU in fork
balance when it wakes up for the first time, its load averages should
not be removed from the source CPU, since they were never added to it
in the first place. The same applies to a never-used group entity.
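For context, a freshly forked entity has last_update_time == 0:
init_entity_runnable_average() initializes it that way, and it only
becomes nonzero once the entity is attached to a cfs_rq. Abridged
sketch of the v4.4 initializer (seeding of the initial load/util
values elided):

	void init_entity_runnable_average(struct sched_entity *se)
	{
		struct sched_avg *sa = &se->avg;

		sa->last_update_time = 0;	/* not yet attached to any cfs_rq */
		/* initial load/util seeding elided */
	}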

Fix it in remove_entity_load_avg(): when the entity's last_update_time
is 0, simply return. This precisely identifies the case in question,
because in all other migrations last_update_time is set to 0 only
after remove_entity_load_avg() has run.
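For comparison, the normal migration path removes the load first and
only then zeroes last_update_time, so the new check cannot mis-fire
there. Abridged sketch of v4.4's migrate_task_rq_fair():

	static void migrate_task_rq_fair(struct task_struct *p)
	{
		/* fold the task's averages into the old cfs_rq's removed_* sums */
		remove_entity_load_avg(&p->se);

		/* tell the new CPU the task has migrated */
		p->se.avg.last_update_time = 0;

		/* we have migrated, no longer consider this task hot */
		p->se.exec_start = 0;
	}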

Reported-by: Steve Muckle <steve.muc...@linaro.org>
Signed-off-by: Yuyang Du <yuyang...@intel.com>
[peterz: cfs_rq_last_update_time]
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Juri Lelli <juri.le...@arm.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Mike Galbraith <efa...@gmx.de>
Cc: Morten Rasmussen <morten.rasmus...@arm.com>
Cc: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Link: http://lkml.kernel.org/r/20151216233427.gj28...@intel.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 kernel/sched/fair.c | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c2af250547bb..6051007918ad 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2841,27 +2841,45 @@ dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
                max_t(s64,  cfs_rq->runnable_load_sum - se->avg.load_sum, 0);
 }
 
-/*
- * Task first catches up with cfs_rq, and then subtract
- * itself from the cfs_rq (task must be off the queue now).
- */
-void remove_entity_load_avg(struct sched_entity *se)
-{
-       struct cfs_rq *cfs_rq = cfs_rq_of(se);
-       u64 last_update_time;
-
 #ifndef CONFIG_64BIT
+static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
+{
        u64 last_update_time_copy;
+       u64 last_update_time;
 
        do {
                last_update_time_copy = cfs_rq->load_last_update_time_copy;
                smp_rmb();
                last_update_time = cfs_rq->avg.last_update_time;
        } while (last_update_time != last_update_time_copy);
+
+       return last_update_time;
+}
 #else
-       last_update_time = cfs_rq->avg.last_update_time;
+static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
+{
+       return cfs_rq->avg.last_update_time;
+}
 #endif
 
+/*
+ * Task first catches up with cfs_rq, and then subtract
+ * itself from the cfs_rq (task must be off the queue now).
+ */
+void remove_entity_load_avg(struct sched_entity *se)
+{
+       struct cfs_rq *cfs_rq = cfs_rq_of(se);
+       u64 last_update_time;
+
+       /*
+        * Newly created task or never used group entity should not be removed
+        * from its (source) cfs_rq
+        */
+       if (se->avg.last_update_time == 0)
+               return;
+
+       last_update_time = cfs_rq_last_update_time(cfs_rq);
+
        __update_load_avg(last_update_time, cpu_of(rq_of(cfs_rq)), &se->avg, 0, 0, NULL);
        atomic_long_add(se->avg.load_avg, &cfs_rq->removed_load_avg);
        atomic_long_add(se->avg.util_avg, &cfs_rq->removed_util_avg);
-- 
2.19.1


