While the updated SMP infrastructure can run a function on a single
local core, it is not optimized for that case: the extra function calls
and the indirect branch add overhead, making local TLB flushes slower
than they were before the recent changes.

To restore the lost performance in this common case, check whether only
a local TLB flush is needed before calling into the SMP infrastructure.
This requires checking mm_cpumask() one more time, but unless this mask
is updated very frequently, it should not impact performance negatively.

Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Nadav Amit <[email protected]>
---
 arch/x86/mm/tlb.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
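
Note: for reference, a minimal userspace sketch of the check this patch
adds. The unsigned long bitmap, NR_CPUS value and needs_remote_flush()
helper below are illustrative stand-ins for struct cpumask and
cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids, not kernel code.

/*
 * Toy model of the "is any CPU other than the local one in the mask?"
 * test, assuming NR_CPUS <= 64 so a single unsigned long can stand in
 * for struct cpumask.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

/* Return true if any CPU other than this_cpu is set in mask. */
static bool needs_remote_flush(unsigned long mask, int this_cpu)
{
	return (mask & ~(1UL << this_cpu)) != 0;
}

int main(void)
{
	int this_cpu = 2;

	/* Only the local CPU is set: take the local-only flush path. */
	unsigned long local_only = 1UL << this_cpu;

	/* Local CPU plus CPU 5: a remote flush is still required. */
	unsigned long with_remote = (1UL << this_cpu) | (1UL << 5);

	printf("local_only : remote flush needed? %d\n",
	       needs_remote_flush(local_only, this_cpu));
	printf("with_remote: remote flush needed? %d\n",
	       needs_remote_flush(with_remote, this_cpu));
	return 0;
}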

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 73d0d51b0f61..b0c3065aad5d 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -823,8 +823,12 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
                              const struct flush_tlb_info *info)
 {
        int this_cpu = smp_processor_id();
+       bool flush_others = false;
 
-       if (static_branch_likely(&flush_tlb_multi_enabled)) {
+       if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+               flush_others = true;
+
+       if (static_branch_likely(&flush_tlb_multi_enabled) && flush_others) {
                flush_tlb_multi(cpumask, info);
                return;
        }
@@ -836,7 +840,7 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
                local_irq_enable();
        }
 
-       if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+       if (flush_others)
                flush_tlb_others(cpumask, info);
 }
 
-- 
2.20.1
