On Tue, Sep 19, 2017 at 03:56:31PM -0400, Mathieu Desnoyers wrote:
> Document the membarrier requirement on having a full memory barrier in
> __schedule() after coming from user-space, before storing to rq->curr.
> It is provided by smp_mb__before_spinlock() in __schedule().

It is smp_mb__after_spinlock(). (Yes: I missed it in my previous
email.)
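
To spell out the pairing being documented (a minimal sketch, not the
actual kernel code: the CPU split, the user-space variable "x" and the
task names are made up for illustration):

	/* CPU0: __schedule(), switching away from user task "prev" */
	x = 1;				/* prev's last user-space store */
	rq_lock(rq, &rf);
	smp_mb__after_spinlock();	/* full barrier: orders the store
					 * to "x" before the rq->curr
					 * update below */
	rq->curr = next;

	/* CPU1: sys_membarrier(), called by a thread of prev's mm */
	curr = READ_ONCE(cpu_rq(0)->curr);	/* observes "next" */
	smp_mb();

If CPU1 observes "next" in rq->curr, prev is not seen running, so no
IPI is sent to CPU0; the two full barriers then guarantee that prev's
store to "x" is ordered before the membarrier caller's subsequent
accesses. Without the barrier on CPU0, the caller could skip the IPI
while prev's user-space accesses are not yet ordered, breaking the
guarantee of the system call.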
  Andrea

>
> Document that membarrier requires a full barrier on transition from
> kernel thread to userspace thread. We currently have an implicit barrier
> from atomic_dec_and_test() in mmdrop() that ensures this.
>
> The x86 switch_mm_irqs_off() full barrier is currently provided by many
> cpumask update operations as well as write_cr3(). Document that
> write_cr3() provides this barrier.
>
> Changes since v1:
> - Update comments to match reality for code paths which are after
>   storing to rq->curr, before returning to user-space.
>
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
> CC: Peter Zijlstra <pet...@infradead.org>
> CC: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> CC: Boqun Feng <boqun.f...@gmail.com>
> CC: Andrew Hunter <a...@google.com>
> CC: Maged Michael <maged.mich...@gmail.com>
> CC: gro...@google.com
> CC: Avi Kivity <a...@scylladb.com>
> CC: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> CC: Paul Mackerras <pau...@samba.org>
> CC: Michael Ellerman <m...@ellerman.id.au>
> CC: Dave Watson <davejwat...@fb.com>
> ---
>  arch/x86/mm/tlb.c        | 5 +++++
>  include/linux/sched/mm.h | 5 +++++
>  kernel/sched/core.c      | 9 +++++++++
>  3 files changed, 19 insertions(+)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 1ab3821f9e26..74f94fe4aded 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -144,6 +144,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  	}
>  #endif
>
> +	/*
> +	 * The membarrier system call requires a full memory barrier
> +	 * before returning to user-space, after storing to rq->curr.
> +	 * Writing to CR3 provides that full memory barrier.
> +	 */
>  	if (real_prev == next) {
>  		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>  			  next->context.ctx_id);
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 3a19c253bdb1..766cc47c4d7c 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -38,6 +38,11 @@ static inline void mmgrab(struct mm_struct *mm)
>  extern void __mmdrop(struct mm_struct *);
>  static inline void mmdrop(struct mm_struct *mm)
>  {
> +	/*
> +	 * The implicit full barrier implied by atomic_dec_and_test is
> +	 * required by the membarrier system call before returning to
> +	 * user-space, after storing to rq->curr.
> +	 */
>  	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
>  		__mmdrop(mm);
>  }
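
The guarantee relied upon in the hunk above is that value-returning
atomic RMW operations are fully ordered (see
Documentation/memory-barriers.txt), so the mmdrop() fast path behaves
as if it were (sketch only, both barriers are implicit in
atomic_dec_and_test()):

	smp_mb();
	atomic_dec(&mm->mm_count);	/* plus the test of the result */
	smp_mb();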
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 18a6966567da..7977b25acf54 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2658,6 +2658,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>  	finish_arch_post_lock_switch();
>
>  	fire_sched_in_preempt_notifiers(current);
> +	/*
> +	 * When transitioning from a kernel thread to a userspace
> +	 * thread, mmdrop()'s implicit full barrier is required by the
> +	 * membarrier system call, because the current active_mm can
> +	 * become the current mm without going through switch_mm().
> +	 */
>  	if (mm)
>  		mmdrop(mm);
>  	if (unlikely(prev_state == TASK_DEAD)) {
> @@ -3299,6 +3305,9 @@ static void __sched notrace __schedule(bool preempt)
>  	 * Make sure that signal_pending_state()->signal_pending() below
>  	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>  	 * done by the caller to avoid the race with signal_wake_up().
> +	 *
> +	 * The membarrier system call requires a full memory barrier
> +	 * after coming from user-space, before storing to rq->curr.
>  	 */
>  	rq_lock(rq, &rf);
>  	smp_mb__after_spinlock();
> --
> 2.11.0
>
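
For completeness, this is how I read the "after storing to rq->curr,
before returning to user-space" side of the pairing (again a sketch
with made-up names, not the actual code; on x86 the full barrier comes
from write_cr3(), a mov to CR3 being architecturally serializing, or
from mmdrop() in the kernel-thread case):

	/* CPU0: schedules in user task "next" */
	rq->curr = next;
	/*
	 * Full barrier: write_cr3() in switch_mm_irqs_off(), or
	 * atomic_dec_and_test() in mmdrop() when the task being
	 * switched out is a kernel thread that borrowed next's mm
	 * as its active_mm.
	 */
	r0 = y;				/* next's first user-space load */

	/* CPU1: sys_membarrier(), called by a thread of next's mm */
	y = 1;
	smp_mb();
	curr = READ_ONCE(cpu_rq(0)->curr);	/* may observe the old task */

This is an SB-like pattern: with full barriers on both sides, CPU1
cannot observe the old rq->curr (and hence skip the IPI) while next's
first user-space load also misses y = 1. Either way, the scheduled-in
task executes a full barrier after the rq->curr store and before its
first user-space access, so the ordering promised by membarrier holds
even when no IPI is sent to CPU0.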