Open-code on_each_cpu_cond_mask() in native_flush_tlb_others() to
optimize the code. Open-coding eliminates the need for the indirect
branch that is used to call tlb_is_not_lazy(), and on CPUs that are
vulnerable to Spectre v2, it eliminates the retpoline. In addition, it
allows the use of a preallocated cpumask to compute the CPUs that
should be flushed.
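To make the indirect-branch point concrete, here is a minimal sketch
(illustrative only, not part of this patch; set_lazy_cpus_indirect(),
set_lazy_cpus_direct() and their parameters are hypothetical names) of
the two shapes the condition loop can take:

	/*
	 * Generic form, as in on_each_cpu_cond_mask(): the condition is
	 * reached through a function pointer, i.e. an indirect branch,
	 * which becomes a retpoline on Spectre-v2-vulnerable CPUs.
	 */
	static void set_lazy_cpus_indirect(const struct cpumask *mask,
					   struct cpumask *dst,
					   bool (*cond_fn)(int cpu, void *data),
					   void *data)
	{
		int cpu;

		for_each_cpu(cpu, mask)
			if (cond_fn(cpu, data))		/* indirect call */
				__cpumask_set_cpu(cpu, dst);
	}

	/*
	 * Open-coded form, as done below: tlb_is_not_lazy() is called
	 * directly, so the compiler can inline it and no retpoline is
	 * emitted.
	 */
	static void set_lazy_cpus_direct(const struct cpumask *mask,
					 struct cpumask *dst)
	{
		int cpu;

		for_each_cpu(cpu, mask)
			if (tlb_is_not_lazy(cpu))	/* direct call */
				__cpumask_set_cpu(cpu, dst);
	}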
This would later allow us to avoid adapting on_each_cpu_cond_mask() to
support local and remote functions.

Note that calling tlb_is_not_lazy() for every CPU that needs to be
flushed, as done in native_flush_tlb_others(), might look ugly, but it
is equivalent to what is currently done in on_each_cpu_cond_mask().
In fact, native_flush_tlb_others() does it more efficiently, since it
avoids using an indirect branch.

Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Dave Hansen <dave.han...@linux.intel.com>
Cc: Rik van Riel <r...@surriel.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Josh Poimboeuf <jpoim...@redhat.com>
Signed-off-by: Nadav Amit <na...@vmware.com>
---
 arch/x86/mm/tlb.c | 40 ++++++++++++++++++++++++++++++++++------
 1 file changed, 34 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 233f3d8308db..abbf55fa8b81 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -658,11 +658,13 @@ static void flush_tlb_func_remote(void *info)
 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
 }
 
-static bool tlb_is_not_lazy(int cpu, void *data)
+static bool tlb_is_not_lazy(int cpu)
 {
 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
 }
 
+static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
+
 void native_flush_tlb_others(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info)
 {
@@ -706,12 +708,38 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * up on the new contents of what used to be page tables, while
 	 * doing a speculative memory access.
 	 */
-	if (info->freed_tables)
+	if (info->freed_tables) {
 		smp_call_function_many(cpumask, flush_tlb_func_remote,
 			       (void *)info, 1);
-	else
-		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
-				(void *)info, 1, GFP_ATOMIC, cpumask);
+	} else {
+		/*
+		 * Although we could have used on_each_cpu_cond_mask(),
+		 * open-coding it has performance advantages, as it eliminates
+		 * the need for indirect calls or retpolines. In addition, it
+		 * allows using a designated cpumask for evaluating the
+		 * condition, instead of allocating one.
+		 *
+		 * This code works under the assumption that there are no nested
+		 * TLB flushes, an assumption that is already made in
+		 * flush_tlb_mm_range().
+		 *
+		 * cond_cpumask is logically a stack-local variable, but it is
+		 * more efficient to have it off the stack and not to allocate
+		 * it on demand. Preemption is disabled and this code is
+		 * non-reentrant.
+		 */
+		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
+		int cpu;
+
+		cpumask_clear(cond_cpumask);
+
+		for_each_cpu(cpu, cpumask) {
+			if (tlb_is_not_lazy(cpu))
+				__cpumask_set_cpu(cpu, cond_cpumask);
+		}
+		smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
+				       (void *)info, 1);
+	}
 }
 
 /*
@@ -865,7 +893,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-		flush_tlb_func_local(&full_flush_tlb_info);
+		flush_tlb_func_local((void *)&full_flush_tlb_info);
 		local_irq_enable();
 	}
 
--
2.20.1
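For context, the call that this patch removes has to obtain a scratch
cpumask at runtime. A simplified sketch of what on_each_cpu_cond_mask()
roughly does in kernels of this vintage (paraphrased, not verbatim;
preemption handling and warnings trimmed):

	cpumask_var_t cpus;
	int cpu;

	if (likely(zalloc_cpumask_var(&cpus, gfp_flags))) {
		for_each_cpu(cpu, mask)
			if (cond_func(cpu, info))
				__cpumask_set_cpu(cpu, cpus);
		on_each_cpu_mask(cpus, func, info, wait);
		free_cpumask_var(cpus);
	} else {
		/* No memory for a cpumask: IPI matching CPUs one by one. */
		for_each_cpu(cpu, mask)
			if (cond_func(cpu, info))
				smp_call_function_single(cpu, func, info, wait);
	}

The per-CPU flush_tlb_mask added above sidesteps both the GFP_ATOMIC
allocation and the one-by-one IPI fallback.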