On Tue, May 26, 2020 at 03:41:34PM +0200, Sebastian Andrzej Siewior wrote:
> SRCU disables interrupts to get a stable per-CPU pointer and then
> acquires the spinlock which is in the per-CPU data structure. The
> release uses spin_unlock_irqrestore(). While this is correct on a non-RT
> kernel, it conflicts with the RT semantics because the spinlock is
> converted to a 'sleeping' spinlock there. Sleeping locks obviously cannot
> be acquired with interrupts disabled.
> 
> Obtain the per-CPU pointer `ssp->sda' without disabling preemption and
> then acquire the spinlock_t of the per-CPU data structure. The lock
> ensures that the data remains consistent.
> The added check_init_srcu_struct() is needed because a statically
> defined srcu_struct may remain uninitialized until this point, and the
> newly introduced locking operation requires an initialized spinlock_t.
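> 
> In other words, the access pattern changes roughly as follows; this is
> an illustrative sketch only, mirroring the __call_srcu() hunk below:
> 
> 	/* Before: interrupts are disabled first to obtain a stable
> 	 * this_cpu pointer, and only then is the per-CPU spinlock_t
> 	 * taken -- which a sleeping lock on RT may not do.
> 	 */
> 	local_irq_save(flags);
> 	sdp = this_cpu_ptr(ssp->sda);
> 	spin_lock_rcu_node(sdp);
> 	/* ... enqueue/advance callbacks on sdp ... */
> 	spin_unlock_irqrestore_rcu_node(sdp, flags);
> 
> 	/* After: use whichever CPU's srcu_data we happen to land on,
> 	 * without disabling preemption; the _irqsave lock acquisition
> 	 * provides the serialization and interrupt protection.
> 	 */
> 	check_init_srcu_struct(ssp);	/* spinlock_t must be initialized */
> 	sdp = raw_cpu_ptr(ssp->sda);
> 	spin_lock_irqsave_rcu_node(sdp, flags);
> 	/* ... enqueue/advance callbacks on sdp ... */
> 	spin_unlock_irqrestore_rcu_node(sdp, flags);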
> 
> This change was tested for four hours with 8*SRCU-N and 8*SRCU-P without
> causing any warnings.

Queued, thank you!!!

                                                        Thanx, Paul

> Cc: Lai Jiangshan <jiangshan...@gmail.com>
> Cc: "Paul E. McKenney" <paul...@kernel.org>
> Cc: Josh Triplett <j...@joshtriplett.org>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
> Cc: r...@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
> ---
>  kernel/rcu/srcutree.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 0c71505f0e19c..9459bca58c380 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -764,14 +764,15 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
>       unsigned long t;
>       unsigned long tlast;
>  
> +     check_init_srcu_struct(ssp);
>       /* If the local srcu_data structure has callbacks, not idle.  */
> -     local_irq_save(flags);
> -     sdp = this_cpu_ptr(ssp->sda);
> +     sdp = raw_cpu_ptr(ssp->sda);
> +     spin_lock_irqsave_rcu_node(sdp, flags);
>       if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> -             local_irq_restore(flags);
> +             spin_unlock_irqrestore_rcu_node(sdp, flags);
>               return false; /* Callbacks already present, so not idle. */
>       }
> -     local_irq_restore(flags);
> +     spin_unlock_irqrestore_rcu_node(sdp, flags);
>  
>       /*
>        * No local callbacks, so probabalistically probe global state.
> @@ -851,9 +852,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
>       }
>       rhp->func = func;
>       idx = srcu_read_lock(ssp);
> -     local_irq_save(flags);
> -     sdp = this_cpu_ptr(ssp->sda);
> -     spin_lock_rcu_node(sdp);
> +     sdp = raw_cpu_ptr(ssp->sda);
> +     spin_lock_irqsave_rcu_node(sdp, flags);
>       rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
>       rcu_segcblist_advance(&sdp->srcu_cblist,
>                             rcu_seq_current(&ssp->srcu_gp_seq));
> -- 
> 2.27.0.rc0
> 
