The rcuo callback-offload kthreads used to be spawned in
rcu_organize_nocb_kthreads(), and the comment preceding the "for"
loop still says as much.  However, this spawning has long since moved
to the CPU-hotplug code, so this commit updates the comment to match.
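
For context, the pattern in question is spawning a per-CPU kthread from a
CPU-hotplug "online" callback rather than once at boot-time setup.  Below is
a minimal, self-contained sketch of that general pattern, not the kernel's
actual rcuo code: all example_* names are hypothetical, and it uses the
modern cpuhp_setup_state() interface purely for illustration (the kernel of
this patch's era used CPU-hotplug notifiers instead).

#include <linux/cpu.h>
#include <linux/cpuhotplug.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>

/* One task pointer per possible CPU; hypothetical, for illustration only. */
static struct task_struct *example_task[NR_CPUS];

/* Kthread body: idle loop standing in for real per-CPU callback work. */
static int example_thread_fn(void *arg)
{
	while (!kthread_should_stop())
		schedule_timeout_interruptible(HZ);
	return 0;
}

/* Hotplug "online" callback: spawn this CPU's kthread on first online. */
static int example_cpu_online(unsigned int cpu)
{
	struct task_struct *t;

	if (example_task[cpu])
		return 0;	/* Already spawned by an earlier online. */
	t = kthread_run(example_thread_fn, NULL, "example/%u", cpu);
	if (IS_ERR(t))
		return PTR_ERR(t);
	example_task[cpu] = t;
	return 0;
}

static int __init example_init(void)
{
	int ret;

	/*
	 * Run example_cpu_online() for each CPU already online and for
	 * each CPU that comes online later.  No teardown callback, so the
	 * kthreads persist across CPU offline, much as rcuo kthreads do.
	 */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "example:online",
				example_cpu_online, NULL);
	return ret < 0 ? ret : 0;
}
module_init(example_init);
MODULE_LICENSE("GPL");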

Reported-by: Michalis Kokologiannakis <mixas...@gmail.com>
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <j...@joshtriplett.org>
---
 kernel/rcu/tree_plugin.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 56583e764ebf..2f5541f6f031 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -2366,8 +2366,9 @@ static void __init rcu_organize_nocb_kthreads(struct rcu_state *rsp)
        }
 
        /*
-        * Each pass through this loop sets up one rcu_data structure and
-        * spawns one rcu_nocb_kthread().
+        * Each pass through this loop sets up one rcu_data structure.
+        * Should the corresponding CPU come online in the future, then
+        * we will spawn the needed set of rcu_nocb_kthread() kthreads.
         */
        for_each_cpu(cpu, rcu_nocb_mask) {
                rdp = per_cpu_ptr(rsp->rda, cpu);
-- 
2.5.2
