Clarify exactly what the memory barrier synchronizes with.

Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Rik van Riel <r...@surriel.com>
Reviewed-by: Andy Lutomirski <l...@kernel.org>
---
 arch/x86/mm/tlb.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 752dbf4e0e50..5321e02c4e09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -263,8 +263,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		/*
 		 * Read the tlb_gen to check whether a flush is needed.
 		 * If the TLB is up to date, just use it.
-		 * The barrier synchronizes with the tlb_gen increment in
-		 * the TLB shootdown code.
+		 * The TLB shootdown code first increments tlb_gen, and then
+		 * sends IPIs to CPUs that have this CPU loaded and are not
+		 * in lazy TLB mode. The barrier ensures we handle
+		 * cpu_tlbstate.is_lazy before tlb_gen, keeping this code
+		 * synchronized with the TLB flush code.
 		 */
 		smp_mb();
 		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
-- 
2.14.4