On Fri, 25 May 2018, Paul E. McKenney wrote:

> On Fri, May 25, 2018 at 11:05:06AM +0200, Anna-Maria Gleixner wrote:
> > Since commit b4abf91047cf ("rtmutex: Make wait_lock irq safe") the
> > explanation in rcu_read_unlock() documentation about irq unsafe rtmutex
> > wait_lock is no longer valid.
> > 
> > Remove it to prevent kernel developers reading the documentation to rely on
> > it.
> > 
> > Suggested-by: Eric W. Biederman <[email protected]>
> > Signed-off-by: Anna-Maria Gleixner <[email protected]>
> 
> Reviewed-by: Paul E. McKenney <[email protected]>
> 
> Or let me know if you would like me to carry this patch.  Either way,
> just let me know!
> 

Thanks! Thomas told me he will take both.

Anna-Maria


> 
> > ---
> >  include/linux/rcupdate.h | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 36360d07f25b..64644fda3b22 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -653,9 +653,7 @@ static inline void rcu_read_lock(void)
> >   * Unfortunately, this function acquires the scheduler's runqueue and
> >   * priority-inheritance spinlocks.  This means that deadlock could result
> >   * if the caller of rcu_read_unlock() already holds one of these locks or
> > - * any lock that is ever acquired while holding them; or any lock which
> > - * can be taken from interrupt context because rcu_boost()->rt_mutex_lock()
> > - * does not disable irqs while taking ->wait_lock.
> > + * any lock that is ever acquired while holding them.
> >   *
> >   * That said, RCU readers are never priority boosted unless they were
> >   * preempted.  Therefore, one way to avoid deadlock is to make sure
> > -- 
> > 2.15.1
> > 
> 
> 
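
For readers less familiar with the boosting path, here is an illustrative
sketch of the constraint that remains after this change. It is not part of
the patch; my_lock, risky_caller() and safer_caller() are hypothetical
names standing in for any lock that is elsewhere acquired while a runqueue
or priority-inheritance (rt_mutex) lock is held.

/*
 * Illustrative sketch only -- not from the patch.  Assume my_lock is,
 * somewhere else in the kernel, acquired while a runqueue or
 * priority-inheritance lock is already held.
 */
#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(my_lock);

/*
 * Risky: if this reader is preempted (and therefore possibly boosted)
 * before taking my_lock, the outermost rcu_read_unlock() may acquire
 * the rq/pi locks while my_lock is held, inverting the lock order
 * established elsewhere and opening a deadlock window.
 */
static void risky_caller(void)
{
	rcu_read_lock();
	raw_spin_lock(&my_lock);
	/* ... read-side work ... */
	rcu_read_unlock();
	raw_spin_unlock(&my_lock);
}

/*
 * One way out, following the comment's note that readers are boosted
 * only if they were preempted: disabling preemption across the whole
 * critical section keeps rcu_read_unlock() off the scheduler and pi
 * locks entirely.
 */
static void safer_caller(void)
{
	preempt_disable();
	rcu_read_lock();
	raw_spin_lock(&my_lock);
	/* ... read-side work ... */
	rcu_read_unlock();
	raw_spin_unlock(&my_lock);
	preempt_enable();
}

The sentence the patch removes only concerned locks taken from interrupt
context; since ->wait_lock is now acquired irq-safe, the remaining rule is
the lock-nesting one shown in risky_caller() above.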
