From: Peter Zijlstra <[email protected]>

commit 0758cd8304942292e95a0f750c374533db378b32 upstream.

Aneesh reported that:

        tlb_flush_mmu()
          tlb_flush_mmu_tlbonly()
            tlb_flush()                 <-- #1
          tlb_flush_mmu_free()
            tlb_table_flush()
              tlb_table_invalidate()
                tlb_flush_mmu_tlbonly()
                  tlb_flush()           <-- #2

does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
clear tlb->end in that case.

Observe that any caller to __tlb_adjust_range() also sets at least one of
the tlb->freed_tables || tlb->cleared_p* bits, and those are
unconditionally cleared by __tlb_reset_range().

Change the condition for actually issuing TLBI to having one of those bits
set, as opposed to having tlb->end != 0.
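To see why the bit-based test avoids the second TLBI, here is a minimal standalone sketch (not the kernel code; a hypothetical simplified model of struct mmu_gather and __tlb_reset_range() built only from the behavior described above) contrasting the old and new flush conditions:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified model of struct mmu_gather: only the
 * fields relevant to the flush condition discussed above. */
struct mmu_gather {
	unsigned long end;	/* nonzero means "range pending" */
	bool fullmm;
	bool freed_tables;
	bool cleared_ptes, cleared_pmds, cleared_puds, cleared_p4ds;
};

/* Old condition: issue TLBI whenever tlb->end != 0. */
static bool old_needs_flush(const struct mmu_gather *tlb)
{
	return tlb->end != 0;
}

/* New condition: issue TLBI only when a caller of
 * __tlb_adjust_range() has set one of the tracking bits. */
static bool new_needs_flush(const struct mmu_gather *tlb)
{
	return tlb->freed_tables || tlb->cleared_ptes ||
	       tlb->cleared_pmds || tlb->cleared_puds ||
	       tlb->cleared_p4ds;
}

/* Models __tlb_reset_range(): the tracking bits are cleared
 * unconditionally, but tlb->end stays nonzero when tlb->fullmm. */
static void reset_range(struct mmu_gather *tlb)
{
	tlb->freed_tables = false;
	tlb->cleared_ptes = tlb->cleared_pmds = false;
	tlb->cleared_puds = tlb->cleared_p4ds = false;
	if (!tlb->fullmm)
		tlb->end = 0;
}
```

In the fullmm scenario from the call trace, both conditions are true at flush #1; after reset_range(), the old condition is still true (end remains set), producing the spurious flush #2, while the new condition is false because all the tracking bits were cleared.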

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Reported-by: "Aneesh Kumar K.V" <[email protected]>
Cc: <[email protected]>  # 4.19
Signed-off-by: Santosh Sivaraj <[email protected]>
[santosh: backported to 4.19 stable]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
 include/asm-generic/tlb.h |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -179,7 +179,12 @@ static inline void __tlb_reset_range(str
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-       if (!tlb->end)
+       /*
+        * Anything calling __tlb_adjust_range() also sets at least one of
+        * these bits.
+        */
+       if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
+             tlb->cleared_puds || tlb->cleared_p4ds))
                return;
 
        tlb_flush(tlb);
