X86 TLB range flushing uses a balance point to decide whether a single global TLB flush or multiple single-page flushes would perform best. This patch takes the number of CPUs that must be flushed into account by weighting the base page count by cpumask_weight(mm_cpumask(mm)) before comparing it against that balance point.
Signed-off-by: Mel Gorman <[email protected]>
---
 arch/x86/mm/tlb.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 09b8cb8..0cababa 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -217,6 +217,9 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	act_entries = mm->total_vm > act_entries ? act_entries : mm->total_vm;
 	nr_base_pages = (end - start) >> PAGE_SHIFT;
 
+	/* Take the number of CPUs to range flush into account */
+	nr_base_pages *= cpumask_weight(mm_cpumask(mm));
+
 	/* tlb_flushall_shift is on balance point, details in commit log */
 	if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
 		count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
-- 
1.8.4
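
For reference, below is a minimal userspace sketch of the balance-point decision with the CPU weighting applied. It is not the kernel code: prefer_full_flush(), its parameters, and the example numbers are illustrative stand-ins for act_entries, nr_base_pages and cpumask_weight(mm_cpumask(mm)) in flush_tlb_mm_range().

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative stand-in for the balance-point check in flush_tlb_mm_range():
 * act_entries approximates the usable TLB entries, nr_base_pages the number
 * of 4K pages covered by the range, and nr_cpus the weight of mm_cpumask().
 * Returns true when a full TLB flush is expected to be cheaper than flushing
 * the range page by page on every CPU.
 */
static bool prefer_full_flush(unsigned long act_entries,
			      unsigned long nr_base_pages,
			      unsigned int nr_cpus)
{
	/* Weight the page count by the number of CPUs that must be flushed */
	nr_base_pages *= nr_cpus;

	return nr_base_pages > act_entries;
}

int main(void)
{
	/* A 256-page range against 512 usable TLB entries */
	printf("1 CPU : %s\n", prefer_full_flush(512, 256, 1) ? "full flush" : "range flush");
	printf("4 CPUs: %s\n", prefer_full_flush(512, 256, 4) ? "full flush" : "range flush");
	return 0;
}

With one CPU the weighted page count (256) stays below the 512-entry balance point and the range flush is kept; with four CPUs it becomes 1024 and the full flush wins, which is the behaviour change the hunk above introduces.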

