In some cases, a CPU's NUMA affinity can change during cpu_up().
This happens after a new node is onlined: on x86, online CPUs are
tied to onlined nodes at boot, so if memory is added later, the
cpu-to-node mapping can change at cpu_up().

Although wq_numa_possible_cpumask et al. are maintained against
node hotplug, this case should be handled as well.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hir...@jp.fujitsu.com>
---
 kernel/workqueue.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f6ad05a..59d8be5 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4618,6 +4618,27 @@ void workqueue_node_unregister(int node)
        mutex_unlock(&wq_pool_mutex);
 }
 
+static void workqueue_may_update_numa_affinity(int cpu)
+{
+       int curnode = cpu_to_node(cpu);
+       int node;
+
+       if (likely(cpumask_test_cpu(cpu, wq_numa_possible_cpumask[curnode])))
+               return;
+
+       /* cpu<->node relationship is changed in cpu_up() */
+       for_each_node_state(node, N_POSSIBLE)
+               cpumask_clear_cpu(cpu, wq_numa_possible_cpumask[node]);
+
+       workqueue_update_cpu_numa_affinity(cpu, curnode);
+}
+#else
+
+static void workqueue_may_update_numa_affinity(int cpu)
+{
+       return;
+}
+
 #endif
 
 /*
@@ -4647,6 +4668,8 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
        case CPU_ONLINE:
                mutex_lock(&wq_pool_mutex);
 
+               workqueue_may_update_numa_affinity(cpu);
+
                for_each_pool(pool, pi) {
                        mutex_lock(&pool->attach_mutex);
 
-- 
1.8.3.1