From: Byungchul Park <[email protected]>

Changes from v1 to v2:
* wrap the load tracking code in #ifdef CONFIG_SMP
* make the commit message more compact; the v1 message was confusing

----->8-----
From 02edcf69369bed72916304b449b82a74029ea908 Mon Sep 17 00:00:00 2001
From: Byungchul Park <[email protected]>
Date: Tue, 11 Aug 2015 09:30:17 +0900
Subject: [PATCH v2] sched: sync with the prev cfs when changing cgroup within
 a cpu

The current code gets cfs_rq->blocked_load_avg wrong when moving a task
to another cgroup (= cfs_rq). Testing with "echo pid > cgroup" showed
that cfs_rq->blocked_load_avg grew larger and larger every time I moved
a task from one cgroup to another.

A task can move between groups within a *single* cpu, and each cfs_rq
tracks its own blocked load, so we have to sync the se's average load
with both the *prev* cfs_rq and the next cfs_rq when changing its group.
I also removed the comment mentioning migrate_task_rq_fair().
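
To illustrate the leak, here is a minimal userspace sketch. This is not
kernel code: the structs below are simplified stand-ins for cfs_rq and
sched_entity, and per-entity decay is ignored.

#include <stdio.h>

struct cfs_rq { unsigned long blocked_load_avg; };
struct sched_entity { unsigned long load_avg_contrib; };

/* Buggy move: only the next cfs_rq is charged; the prev one keeps
 * the entity's contribution, so blocked load leaks. */
static void move_group_buggy(struct sched_entity *se,
			     struct cfs_rq *prev, struct cfs_rq *next)
{
	next->blocked_load_avg += se->load_avg_contrib;
}

/* Fixed move: sync with the prev cfs_rq before charging the next. */
static void move_group_fixed(struct sched_entity *se,
			     struct cfs_rq *prev, struct cfs_rq *next)
{
	prev->blocked_load_avg -= se->load_avg_contrib;
	next->blocked_load_avg += se->load_avg_contrib;
}

int main(void)
{
	struct cfs_rq a = { 100 }, b = { 0 };
	struct sched_entity se = { .load_avg_contrib = 100 };

	move_group_buggy(&se, &a, &b);
	move_group_buggy(&se, &b, &a);
	/* prints a=200 b=100: total blocked load grew from 100 to 300 */
	printf("buggy: a=%lu b=%lu\n", a.blocked_load_avg, b.blocked_load_avg);

	a.blocked_load_avg = 100;
	b.blocked_load_avg = 0;
	move_group_fixed(&se, &a, &b);
	move_group_fixed(&se, &b, &a);
	/* prints a=100 b=0: the contribution follows the entity */
	printf("fixed: a=%lu b=%lu\n", a.blocked_load_avg, b.blocked_load_avg);
	return 0;
}

Moving the task back and forth under the buggy scheme reproduces exactly
the monotonic growth observed with "echo pid > cgroup".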

Signed-off-by: Byungchul Park <[email protected]>
---
 kernel/sched/fair.c |   18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ffa70dc..759a394 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8229,8 +8229,18 @@ static void task_move_group_fair(struct task_struct *p, int queued)
        if (!queued && (!se->sum_exec_runtime || p->state == TASK_WAKING))
                queued = 1;
 
-       if (!queued)
-               se->vruntime -= cfs_rq_of(se)->min_vruntime;
+       if (!queued) {
+               cfs_rq = cfs_rq_of(se);
+               se->vruntime -= cfs_rq->min_vruntime;
+
+#ifdef CONFIG_SMP
+               /*
+                * We must synchronize with the prev cfs_rq.
+                */
+               __synchronize_entity_decay(se);
+               subtract_blocked_load_contrib(cfs_rq, se->avg.load_avg_contrib);
+#endif
+       }
        set_task_rq(p, task_cpu(p));
        se->depth = se->parent ? se->parent->depth + 1 : 0;
        if (!queued) {
@@ -8238,9 +8248,7 @@ static void task_move_group_fair(struct task_struct *p, int queued)
                se->vruntime += cfs_rq->min_vruntime;
 #ifdef CONFIG_SMP
                /*
-                * migrate_task_rq_fair() will have removed our previous
-                * contribution, but we must synchronize for ongoing future
-                * decay.
+                * We must synchronize with the next cfs_rq for ongoing future decay.
                 */
                se->avg.decay_count = atomic64_read(&cfs_rq->decay_counter);
                cfs_rq->blocked_load_avg += se->avg.load_avg_contrib;
-- 
1.7.9.5
