Don't rely on the scheduler to force-break affinity for us -- it will
stop doing that for per-cpu kthreads.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
 kernel/workqueue.c |    4 ++++
 1 file changed, 4 insertions(+)
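
For reference, a rough userspace analogue of the pattern the hunk below
adds -- explicitly widening a pinned thread's affinity to every online
CPU instead of waiting for the scheduler to migrate it. The worker
thread and the widen_affinity() helper are hypothetical, for
illustration only; only set_cpus_allowed_ptr()/cpu_active_mask in the
hunk itself are real kernel interfaces. Compile with -pthread:

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Userspace stand-in for set_cpus_allowed_ptr(task, cpu_active_mask):
	 * allow @t to run on every CPU currently online. */
	static int widen_affinity(pthread_t t)
	{
		cpu_set_t mask;
		long i, n = sysconf(_SC_NPROCESSORS_ONLN);

		CPU_ZERO(&mask);
		for (i = 0; i < n; i++)
			CPU_SET(i, &mask);

		return pthread_setaffinity_np(t, sizeof(mask), &mask);
	}

	static void *work(void *arg)
	{
		pause();	/* stand-in for the worker loop */
		return NULL;
	}

	int main(void)
	{
		pthread_t t;
		cpu_set_t one;

		pthread_create(&t, NULL, work, NULL);

		/* Pin the worker to CPU 0, like a per-cpu kthread. */
		CPU_ZERO(&one);
		CPU_SET(0, &one);
		pthread_setaffinity_np(t, sizeof(one), &one);

		/* "Unbind": let it run anywhere that is still active. */
		if (widen_affinity(t))
			perror("widen_affinity");

		pthread_cancel(t);
		pthread_join(t, NULL);
		return 0;
	}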

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4905,6 +4905,10 @@ static void unbind_workers(int cpu)
                pool->flags |= POOL_DISASSOCIATED;
 
                raw_spin_unlock_irq(&pool->lock);
+
+               for_each_pool_worker(worker, pool)
+                       WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
+
                mutex_unlock(&wq_pool_attach_mutex);
 
                /*

