From: Oleg Nesterov <[email protected]>

commit c1804d547dc098363443667609c272d1e4d15ee8 upstream.

The previous patch preserved the retry logic, but it looks unneeded.

__migrate_task() can only fail if we raced with migration after we dropped
the lock, but in this case the caller of set_cpus_allowed/etc must initiate
migration itself if ->on_rq == T.

We already fixed p->cpus_allowed; the changes in the active/online masks must
be visible to the racer, which should migrate the task to an online cpu correctly.

Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Paul Gortmaker <[email protected]>
---
 kernel/sched.c |   13 ++++++-------
 1 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 743af02..59ef8a1 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5535,7 +5535,7 @@ static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
        struct rq *rq = cpu_rq(dead_cpu);
        int needs_cpu, uninitialized_var(dest_cpu);
        unsigned long flags;
-again:
+
        local_irq_save(flags);
 
        raw_spin_lock(&rq->lock);
@@ -5543,14 +5543,13 @@ again:
        if (needs_cpu)
                dest_cpu = select_fallback_rq(dead_cpu, p);
        raw_spin_unlock(&rq->lock);
-
-       /* It can have affinity changed while we were choosing. */
+       /*
+        * It can only fail if we race with set_cpus_allowed(),
+        * in which case the racer should migrate the task anyway.
+        */
        if (needs_cpu)
-               needs_cpu = !__migrate_task(p, dead_cpu, dest_cpu);
+               __migrate_task(p, dead_cpu, dest_cpu);
        local_irq_restore(flags);
-
-       if (unlikely(needs_cpu))
-               goto again;
 }
 
 /*
-- 
1.7.3.3

_______________________________________________
stable mailing list
[email protected]
http://linux.kernel.org/mailman/listinfo/stable