Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu().  This commit therefore makes this change,
even though it is but a comment.
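
For illustration only (this sketch is not part of the patch; shared_state,
reader(), and updater() are hypothetical names), the reader/updater pattern
covered by the flavor consolidation looks roughly like the following: a
preempt-disable region now counts as an RCU read-side critical section, so
synchronize_rcu() suffices where synchronize_sched() was once required.

	#include <linux/preempt.h>
	#include <linux/rcupdate.h>

	static int shared_state;

	/* Reader: a preempt-disabled region now counts as an RCU reader. */
	static void reader(void)
	{
		preempt_disable();
		(void)READ_ONCE(shared_state);
		preempt_enable();
	}

	/*
	 * Updater: synchronize_rcu() now waits for readers like the one
	 * above, so a separate synchronize_sched() is no longer needed.
	 */
	static void updater(int new_val)
	{
		WRITE_ONCE(shared_state, new_val);
		synchronize_rcu();
	}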

Signed-off-by: Paul E. McKenney <paul...@linux.ibm.com>
Cc: Tejun Heo <t...@kernel.org>
Cc: Jens Axboe <ax...@kernel.dk>
Cc: Dennis Zhou <den...@kernel.org>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: "Dennis Zhou (Facebook)" <dennissz...@gmail.com>
---
 kernel/cgroup/cgroup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 6aaf5dd5383b..7a8429f8e280 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5343,7 +5343,7 @@ int __init cgroup_init(void)
        cgroup_rstat_boot();
 
        /*
-        * The latency of the synchronize_sched() is too high for cgroups,
+        * The latency of the synchronize_rcu() is too high for cgroups,
         * avoid it at the cost of forcing all readers into the slow path.
         */
        rcu_sync_enter_start(&cgroup_threadgroup_rwsem.rss);
-- 
2.17.1
