On Mon, Jan 28, 2019 at 05:07:07PM -0500, Mathieu Desnoyers wrote:
> Jann Horn identified a racy access to p->mm in the global expedited
> command of the membarrier system call.
> 
> The suggested fix is to hold the task_lock() around the accesses to
> p->mm and to the mm_struct membarrier_state field to guarantee the
> existence of the mm_struct.
> 
> Link: https://lore.kernel.org/lkml/cag48ez2g8ctf8dhs42tf37pthfr3y0rnooytmxvacm4u8yu...@mail.gmail.com
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
> Tested-by: Jann Horn <ja...@google.com>
> CC: Jann Horn <ja...@google.com>
> CC: Thomas Gleixner <t...@linutronix.de>
> CC: Peter Zijlstra (Intel) <pet...@infradead.org>
> CC: Ingo Molnar <mi...@kernel.org>
> CC: Andrea Parri <parri.and...@gmail.com>
> CC: Andy Lutomirski <l...@kernel.org>
> CC: Avi Kivity <a...@scylladb.com>
> CC: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> CC: Boqun Feng <boqun.f...@gmail.com>
> CC: Dave Watson <davejwat...@fb.com>
> CC: David Sehr <s...@google.com>
> CC: H. Peter Anvin <h...@zytor.com>
> CC: Linus Torvalds <torva...@linux-foundation.org>
> CC: Maged Michael <maged.mich...@gmail.com>
> CC: Michael Ellerman <m...@ellerman.id.au>
> CC: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> CC: Paul Mackerras <pau...@samba.org>
> CC: Russell King <li...@armlinux.org.uk>
> CC: Will Deacon <will.dea...@arm.com>
> CC: sta...@vger.kernel.org # v4.16+
> CC: linux-...@vger.kernel.org
> ---
>  kernel/sched/membarrier.c | 27 +++++++++++++++++++++------
>  1 file changed, 21 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
> index 76e0eaf4654e..305fdcc4c5f7 100644
> --- a/kernel/sched/membarrier.c
> +++ b/kernel/sched/membarrier.c
> @@ -81,12 +81,27 @@ static int membarrier_global_expedited(void)
> 
>               rcu_read_lock();
>               p = task_rcu_dereference(&cpu_rq(cpu)->curr);
> -             if (p && p->mm && (atomic_read(&p->mm->membarrier_state) &
> -                                MEMBARRIER_STATE_GLOBAL_EXPEDITED)) {
> -                     if (!fallback)
> -                             __cpumask_set_cpu(cpu, tmpmask);
> -                     else
> -                             smp_call_function_single(cpu, ipi_mb, NULL, 1);
> +             /*
> +              * Skip this CPU if the runqueue's current task is NULL or if
> +              * it is a kernel thread.
> +              */
> +             if (p && READ_ONCE(p->mm)) {
> +                     bool mm_match;
> +
> +                     /*
> +                      * Read p->mm and access membarrier_state while holding
> +                      * the task lock to ensure existence of mm.
> +                      */
> +                     task_lock(p);
> +                     mm_match = p->mm && (atomic_read(&p->mm->membarrier_state) &

Are we guaranteed that this p->mm will be the same as the one loaded via
READ_ONCE() above?  Either way, wouldn't it be better to READ_ONCE() it a
single time and use the same value everywhere?
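
To illustrate what I mean by loading it a single time, here is a userspace
sketch (stand-in types, C11 atomics in place of READ_ONCE(), and it does
not address the mm lifetime question that the task_lock() is there for):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct mm_struct {
        atomic_int membarrier_state;
};

struct task_struct {
        struct mm_struct *_Atomic mm;       /* concurrently updated pointer */
};

#define MEMBARRIER_STATE_GLOBAL_EXPEDITED (1 << 0)

/*
 * Load p->mm exactly once (analogous to a single READ_ONCE()) and use
 * the cached value for both the NULL check and the membarrier_state
 * test, so the two tests cannot observe two different mm pointers.
 */
static int global_expedited_wanted(struct task_struct *p)
{
        struct mm_struct *mm =
                atomic_load_explicit(&p->mm, memory_order_relaxed);

        return mm && (atomic_load(&mm->membarrier_state) &
                      MEMBARRIER_STATE_GLOBAL_EXPEDITED);
}
```
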

                                                        Thanx, Paul

> +                                          MEMBARRIER_STATE_GLOBAL_EXPEDITED);
> +                     task_unlock(p);
> +                     if (mm_match) {
> +                             if (!fallback)
> +                                     __cpumask_set_cpu(cpu, tmpmask);
> +                             else
> +                                     smp_call_function_single(cpu, ipi_mb, NULL, 1);
> +                     }
>               }
>               rcu_read_unlock();
>       }
> -- 
> 2.17.1
> 
