From: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>

The CPU-selection code in sync_rcu_exp_select_cpus() disables preemption
to prevent the cpu_online_mask from changing.  However, this relies on
the stop-machine mechanism in the CPU-hotplug offline code, which is not
desirable (it would be good to someday remove the stop-machine mechanism).

This commit therefore instead uses the relevant leaf rcu_node structure's
->ffmask, which has a bit set for all CPUs that are fully functional.
A given CPU's bit is cleared very early during offline processing by
rcutree_offline_cpu() and set very late during online processing by
rcutree_online_cpu().  Therefore, if a CPU's bit is set in this mask while
preemption is disabled, the CPU-hotplug offline code cannot yet have
completed its synchronize_sched(), and that synchronize_sched() cannot
complete until the enclosing preempt_disable() region ends.  This means
that the CPU is guaranteed to remain workqueue-ready throughout the
duration of that preempt_disable() region of code.
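
For reference, the ->ffmask updates themselves are made by the CPU-hotplug
callbacks in kernel/rcu/tree.c rather than by this patch.  A simplified
sketch of those callbacks (the per-flavor loop and unrelated processing
are elided, and "rsp" stands in for the relevant rcu_state structure):

/* Sketch only, not the exact tree.c code. */
int rcutree_online_cpu(unsigned int cpu)
{
        struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
        struct rcu_node *rnp = rdp->mynode;
        unsigned long flags;

        raw_spin_lock_irqsave_rcu_node(rnp, flags);
        rnp->ffmask |= rdp->grpmask;    /* Set very late: now workqueue-ready. */
        raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
        return 0;
}

int rcutree_offline_cpu(unsigned int cpu)
{
        struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
        struct rcu_node *rnp = rdp->mynode;
        unsigned long flags;

        raw_spin_lock_irqsave_rcu_node(rnp, flags);
        rnp->ffmask &= ~rdp->grpmask;   /* Cleared very early in offlining. */
        raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
        return 0;
}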

This also has the side effect of using WORK_CPU_UNBOUND if all the CPUs for
this leaf rcu_node structure are offline, which is an acceptable difference
in behavior.
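
For clarity, the bit arithmetic in the hunk below is leaf-relative:
->ffmask bit 0 corresponds to CPU rnp->grplo, so a found bit index must be
offset by rnp->grplo, and an index greater than rnp->grphi - rnp->grplo
means that no bit was set.  An open-coded restatement of the selection
logic (sketch only; note that find_next_bit() takes an inclusive starting
bit index, unlike cpumask_next()'s exclusive one, so the scan here starts
at bit 0):

int cpu;

/* Find the first fully-functional CPU for this leaf, if any. */
cpu = find_next_bit(&rnp->ffmask, BITS_PER_LONG, 0);
if (cpu > rnp->grphi - rnp->grplo)
        cpu = WORK_CPU_UNBOUND; /* No bits set: all of this leaf's CPUs offline. */
else
        cpu += rnp->grplo;      /* Convert leaf-relative bit to CPU number. */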

Reported-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
---
 kernel/rcu/tree_exp.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 8d18c1014e2b..e669ccf3751b 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -450,10 +450,12 @@ static void sync_rcu_exp_select_cpus(smp_call_func_t func)
                }
                INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
                preempt_disable();
-               cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
+               cpu = find_next_bit(&rnp->ffmask, BITS_PER_LONG, -1);
                /* If all offline, queue the work on an unbound CPU. */
-               if (unlikely(cpu > rnp->grphi))
+               if (unlikely(cpu > rnp->grphi - rnp->grplo))
                        cpu = WORK_CPU_UNBOUND;
+               else
+                       cpu += rnp->grplo;
                queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
                preempt_enable();
                rnp->exp_need_flush = true;
-- 
2.17.1
