flush_tlb_page() passes a bogus range to flush_tlb_others() and
expects the latter to fix it up: native_flush_tlb_others() rewrites
the end == 0 sentinel into a one-page range, but Xen's version
doesn't.  Remove the fixup from native_flush_tlb_others() and make
flush_tlb_page() pass a real one-page range instead.

AFAICS the only real effect is that, without this fix, Xen would
flush everything instead of just the one page on remote vCPUs when
flush_tlb_page() was called.
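
For the curious, here is a minimal userspace sketch of why the
end == 0 sentinel made Xen flush everything.  This is illustrative
only: would_flush_single_page() is a hypothetical stand-in, and its
range check merely approximates the one in xen_flush_tlb_others().
The point is that the unsigned subtraction underflows, so the
single-page path is never taken.

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define TLB_FLUSH_ALL	(-1UL)

/* Hypothetical stand-in for the range check in Xen's
 * flush_tlb_others(): only a range of at most one page is
 * eligible for a single-page (INVLPG-style) flush. */
static int would_flush_single_page(unsigned long start, unsigned long end)
{
	return end != TLB_FLUSH_ALL && (end - start) <= PAGE_SIZE;
}

int main(void)
{
	unsigned long start = 0x7f0000001000UL;

	/* Bogus sentinel: end - start wraps to a huge value, so the
	 * check fails and the whole TLB gets flushed.  Prints 0. */
	printf("end = 0UL:             single page? %d\n",
	       would_flush_single_page(start, 0UL));

	/* Fixed caller: a real one-page range passes.  Prints 1. */
	printf("end = start+PAGE_SIZE: single page? %d\n",
	       would_flush_single_page(start, start + PAGE_SIZE));

	return 0;
}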

Cc: Rik van Riel <r...@redhat.com>
Cc: Dave Hansen <dave.han...@intel.com>
Cc: Nadav Amit <na...@vmware.com>
Cc: Michal Hocko <mho...@suse.com>
Cc: Sasha Levin <sasha.le...@oracle.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrov...@oracle.com>
Cc: Juergen Gross <jgr...@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Fixes: e7b52ffd45a6 ("x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range")
Signed-off-by: Andy Lutomirski <l...@kernel.org>
---
 arch/x86/mm/tlb.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 9db9260a5e9f..6e7bedf69af7 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -263,8 +263,6 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 {
        struct flush_tlb_info info;
 
-       if (end == 0)
-               end = start + PAGE_SIZE;
        info.flush_mm = mm;
        info.flush_start = start;
        info.flush_end = end;
@@ -378,7 +376,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
        }
 
        if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
-               flush_tlb_others(mm_cpumask(mm), mm, start, 0UL);
+               flush_tlb_others(mm_cpumask(mm), mm, start, start + PAGE_SIZE);
 
        preempt_enable();
 }
-- 
2.9.3
