* Peter Zijlstra <[email protected]> wrote:

> On Wed, Jul 05, 2017 at 09:04:39AM -0700, Andy Lutomirski wrote:
> > On Wed, Jul 5, 2017 at 5:18 AM, Peter Zijlstra <[email protected]> wrote:
> > > On Thu, Jun 29, 2017 at 08:53:22AM -0700, Andy Lutomirski wrote:
> > >> @@ -104,18 +140,20 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> > >>
> > >>               /* Resume remote flushes and then read tlb_gen. */
> > >>               cpumask_set_cpu(cpu, mm_cpumask(next));
> > >
> > > Barriers should have a comment... what is being ordered here against
> > > what?
> > 
> > How's this comment?
> > 
> >         /*
> >          * Resume remote flushes and then read tlb_gen.  We need to do
> >          * it in this order: any inc_mm_tlb_gen() caller that writes a
> >          * larger tlb_gen than we read here must see our cpu set in
> >          * mm_cpumask() so that it will know to flush us.  The barrier
> >          * here synchronizes with inc_mm_tlb_gen().
> >          */
> 
> Slightly confusing; you mean this, right?
> 
> 
> 	cpumask_set_cpu(cpu, mm_cpumask());                     inc_mm_tlb_gen();
> 
> 	MB                                                      MB
> 
> 	next_tlb_gen = atomic64_read(&next->context.tlb_gen);   flush_tlb_others(mm_cpumask());
> 
> 
> which seems to make sense.
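
For reference, the diagram above is the classic store-buffering pattern: if
the CPU doing switch_mm_irqs_off() reads the pre-increment tlb_gen, the
flushing CPU is guaranteed to see that CPU's bit in mm_cpumask(), so no flush
can be lost. Below is a minimal userspace sketch of the same ordering, using
C11 atomics as stand-ins for the kernel primitives (cpu_in_mask, tlb_gen,
switcher() and flusher() are illustrative names, not the kernel's):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool cpu_in_mask;  /* stands in for our bit in mm_cpumask() */
    static atomic_long tlb_gen;      /* stands in for mm->context.tlb_gen */

    /* switch_mm_irqs_off() side: set our bit, then read the generation. */
    static long switcher(void)
    {
            atomic_store(&cpu_in_mask, true);           /* cpumask_set_cpu() */
            atomic_thread_fence(memory_order_seq_cst);  /* MB */
            return atomic_load(&tlb_gen);               /* read tlb_gen */
    }

    /* inc_mm_tlb_gen() side: bump the generation, then read the mask. */
    static bool flusher(void)
    {
            atomic_fetch_add(&tlb_gen, 1);              /* inc_mm_tlb_gen() */
            atomic_thread_fence(memory_order_seq_cst);  /* MB */
            return atomic_load(&cpu_in_mask);           /* flush_tlb_others() */
    }

With both fences in place it is impossible for switcher() to return the old
tlb_gen while flusher() returns false: at least one side must observe the
other's store, which is exactly the guarantee the proposed comment relies on.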

Btw., I'll wait for a v5 iteration before applying this last patch to 
tip:x86/mm.

Thanks,

        Ingo
