Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu().  This commit therefore makes that
substitution, even though the text in question is only a comment.
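
As an illustrative aside (not part of the patch): a minimal sketch of
why the substitution is safe, assuming a hypothetical "struct foo" and
global pointer "gp".  Because the RCU flavors are now consolidated, the
preempt_disable() region in reader() below is waited on by
synchronize_rcu(), so updater() no longer needs synchronize_sched():

#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int val;
};

static struct foo __rcu *gp;

/* Reader: a preempt-disable region now counts as an RCU read-side
 * critical section. */
static int reader(void)
{
	struct foo *p;
	int val = -1;

	preempt_disable();
	p = rcu_dereference_sched(gp);	/* variant for preempt-disable regions */
	if (p)
		val = p->val;
	preempt_enable();
	return val;
}

/* Updater: synchronize_rcu() now also waits for the preempt-disable
 * region above, so synchronize_sched() is no longer required. */
static void updater(struct foo *newp)
{
	struct foo *oldp;

	oldp = rcu_dereference_protected(gp, 1);
	rcu_assign_pointer(gp, newp);
	synchronize_rcu();		/* was: synchronize_sched() */
	kfree(oldp);
}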

Signed-off-by: Paul E. McKenney <paul...@linux.ibm.com>
---
 .../rcutorture/formal/srcu-cbmc/include/linux/types.h         | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h b/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h
index 891ad13e95b2..d27285f8ee82 100644
--- a/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h
+++ b/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h
@@ -131,8 +131,8 @@ struct hlist_node {
  * weird ABI and we need to ask it explicitly.
  *
  * The alignment is required to guarantee that bits 0 and 1 of @next will be
- * clear under normal conditions -- as long as we use call_rcu(),
- * call_rcu_bh(), call_rcu_sched(), or call_srcu() to queue callback.
+ * clear under normal conditions -- as long as we use call_rcu() or
+ * call_srcu() to queue callback.
  *
  * This guarantee is important for few reasons:
  *  - future call_rcu_lazy() will make use of lower bits in the pointer;
-- 
2.17.1
