Make sure to handle the pending bypass queue before we switch to the
final de-offload state. We will later have to be careful to set
SEGCBLIST_SOFTIRQ_ONLY before re-enabling IRQs, or new bypass
callbacks could be queued in the meantime.

Inspired-by: Paul E. McKenney <paul...@kernel.org>
Signed-off-by: Frederic Weisbecker <frede...@kernel.org>
Cc: Paul E. McKenney <paul...@kernel.org>
Cc: Josh Triplett <j...@joshtriplett.org>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Cc: Lai Jiangshan <jiangshan...@gmail.com>
Cc: Joel Fernandes <j...@joelfernandes.org>
Cc: Neeraj Upadhyay <neer...@codeaurora.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Boqun Feng <boqun.f...@gmail.com>
---
 kernel/rcu/tree_plugin.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
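
For reference, here is a minimal userspace model of the ordering this
patch (and the follow-up that sets SEGCBLIST_SOFTIRQ_ONLY) relies on.
It is not kernel code; the names (struct cb_lists, enqueue_cb(),
deoffload()) are made up for illustration. The point it demonstrates:
the bypass list is flushed and the "softirq only" flag is flipped
inside one IRQ-disabled (here: mutex-held) section, so no new bypass
callback can be queued between the flush and the de-offloaded state.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct cb_lists {
	pthread_mutex_t lock;	/* stands in for rcu_nocb_lock_irqsave() */
	int bypass_len;		/* callbacks parked on the bypass list */
	int main_len;		/* callbacks on the main segmented list */
	bool softirq_only;	/* stands in for SEGCBLIST_SOFTIRQ_ONLY */
};

/* Enqueue path: the bypass list may only be used while offloaded. */
static void enqueue_cb(struct cb_lists *cl)
{
	pthread_mutex_lock(&cl->lock);
	if (cl->softirq_only)
		cl->main_len++;		/* de-offloaded: queue directly */
	else
		cl->bypass_len++;	/* offloaded: may park on bypass */
	pthread_mutex_unlock(&cl->lock);
}

/* De-offload path: flush bypass and flip the flag in one critical section. */
static void deoffload(struct cb_lists *cl)
{
	pthread_mutex_lock(&cl->lock);
	cl->main_len += cl->bypass_len;	/* flush bypass into the main list */
	cl->bypass_len = 0;
	cl->softirq_only = true;	/* set before "re-enabling IRQs" */
	pthread_mutex_unlock(&cl->lock);
}

int main(void)
{
	struct cb_lists cl = { .lock = PTHREAD_MUTEX_INITIALIZER };

	enqueue_cb(&cl);	/* lands on bypass while offloaded */
	deoffload(&cl);		/* flush + flag flip, atomic vs. enqueues */
	enqueue_cb(&cl);	/* can no longer land on bypass */

	printf("bypass=%d main=%d\n", cl.bypass_len, cl.main_len);
	return 0;
}

Build with "gcc -pthread"; it prints "bypass=0 main=2", i.e. nothing is
left stranded on the bypass list once the flag is set under the lock.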

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 44b4ab9b3953..dfb4b62c6b88 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -2339,12 +2339,21 @@ static int __rcu_nocb_rdp_deoffload(struct rcu_data *rdp)
        swait_event_exclusive(rdp->nocb_state_wq,
                              !rcu_segcblist_test_flags(cblist, SEGCBLIST_KTHREAD_CB |
                                                        SEGCBLIST_KTHREAD_GP));
+       rcu_nocb_lock_irqsave(rdp, flags);
        /* Make sure nocb timer won't stay around */
-       rcu_nocb_lock_irqsave(rdp, flags);
        WRITE_ONCE(rdp->nocb_defer_wakeup, RCU_NOCB_WAKE_OFF);
        rcu_nocb_unlock_irqrestore(rdp, flags);
        del_timer_sync(&rdp->nocb_timer);
 
+       /*
+        * Flush bypass. While IRQs are disabled and once we set
+        * SEGCBLIST_SOFTIRQ_ONLY, no callback is supposed to be
+        * enqueued on bypass.
+        */
+       rcu_nocb_lock_irqsave(rdp, flags);
+       rcu_nocb_flush_bypass(rdp, NULL, jiffies);
+       rcu_nocb_unlock_irqrestore(rdp, flags);
+
        return ret;
 }
 
-- 
2.25.1
