From: Frederic Weisbecker <[email protected]>

If an SRCU barrier is queued while callbacks are running and a new
callback invocator for the same sdp were to run concurrently, the
SRCU barrier might execute too early. As this requirement is
non-obvious, add comments recording it.
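
For illustration only, here is a standalone userspace sketch (not kernel
code) of the guard that the new comments document: a per-sdp "invoking"
flag, modeling sdp->srcu_cblist_invoking, keeps two concurrent callback
invocators from running for the same sdp, so a barrier callback queued
behind in-flight callbacks cannot be invoked ahead of them. All names
(sdp_model, invoke_callbacks, pending, barrier_queued) are hypothetical
and exist only for this sketch; build with "gcc -pthread sketch.c".

	/* Standalone model of the guard; not the SRCU implementation. */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct sdp_model {
		pthread_mutex_t lock;
		bool invoking;		/* models sdp->srcu_cblist_invoking */
		int pending;		/* callbacks queued before the barrier */
		bool barrier_queued;	/* models the srcu_barrier() callback */
	};

	static void *invoke_callbacks(void *arg)
	{
		struct sdp_model *sdp = arg;

		pthread_mutex_lock(&sdp->lock);
		if (sdp->invoking) {
			/*
			 * Another invocator already runs callbacks for this
			 * sdp: bail out, otherwise the barrier callback could
			 * be invoked before the callbacks queued ahead of it.
			 */
			pthread_mutex_unlock(&sdp->lock);
			return NULL;
		}
		sdp->invoking = true;
		pthread_mutex_unlock(&sdp->lock);

		/* Invoke the callbacks that were queued ahead of the barrier. */
		while (sdp->pending > 0)
			sdp->pending--;
		if (sdp->barrier_queued) {
			sdp->barrier_queued = false;
			printf("barrier callback ran with %d callbacks left\n",
			       sdp->pending);
		}

		pthread_mutex_lock(&sdp->lock);
		sdp->invoking = false;
		pthread_mutex_unlock(&sdp->lock);
		return NULL;
	}

	int main(void)
	{
		struct sdp_model sdp = {
			.lock = PTHREAD_MUTEX_INITIALIZER,
			.pending = 3,
			.barrier_queued = true,
		};
		pthread_t t1, t2;

		pthread_create(&t1, NULL, invoke_callbacks, &sdp);
		pthread_create(&t2, NULL, invoke_callbacks, &sdp);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}

Because the check-and-set of the invoking flag happens under the lock,
only one of the two threads invokes callbacks, and the barrier callback
always runs with zero earlier callbacks left; in the kernel the bailed-out
path instead relies on the "more" rescheduling at the end of
srcu_invoke_callbacks().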

Signed-off-by: Frederic Weisbecker <[email protected]>
Reviewed-by: Joel Fernandes (Google) <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Neeraj Upadhyay (AMD) <[email protected]>
---
 kernel/rcu/srcutree.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 2bfc8ed1eed2..0351a4e83529 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1715,6 +1715,11 @@ static void srcu_invoke_callbacks(struct work_struct *work)
        WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
        rcu_segcblist_advance(&sdp->srcu_cblist,
                              rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
+       /*
+        * Although this function is theoretically re-entrant, concurrent
+        * callback invocation is disallowed to avoid executing an SRCU
+        * barrier too early.
+        */
        if (sdp->srcu_cblist_invoking ||
            !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
                spin_unlock_irq_rcu_node(sdp);
@@ -1745,6 +1750,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
        sdp->srcu_cblist_invoking = false;
        more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
        spin_unlock_irq_rcu_node(sdp);
+       /* An SRCU barrier or callbacks from previous nesting work are pending. */
        if (more)
                srcu_schedule_cbs_sdp(sdp, 0);
 }
-- 
2.40.1

