3.16.47-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski <l...@kernel.org>

commit dbd68d8e84c606673ebbcf15862f8c155fa92326 upstream.

flush_tlb_page() passes a bogus range to flush_tlb_others() and
expects the latter to fix it up.  native_flush_tlb_others() has the
fixup but Xen's version doesn't.  Move the fixup to
flush_tlb_others().

AFAICS the only real effect is that, without this fix, Xen would
flush everything instead of just the one page on remote vCPUs when
flush_tlb_page() was called.
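
For context, a minimal sketch of the receiving side (assuming the 3.16
range-walk branch of flush_tlb_func() visible in the diff below): once
flush_tlb_page() passes a real one-page range, the generic walk already
flushes exactly that page, so the removed !flush_end special case is
redundant:

	/* Sketch only, not part of the patch: with flush_end set to
	 * start + PAGE_SIZE, this loop flushes the single page. */
	unsigned long addr = f->flush_start;
	while (addr < f->flush_end) {
		__flush_tlb_single(addr);
		addr += PAGE_SIZE;
	}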

Signed-off-by: Andy Lutomirski <l...@kernel.org>
Reviewed-by: Boris Ostrovsky <boris.ostrov...@oracle.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Brian Gerst <brge...@gmail.com>
Cc: Dave Hansen <dave.han...@intel.com>
Cc: Denys Vlasenko <dvlas...@redhat.com>
Cc: H. Peter Anvin <h...@zytor.com>
Cc: Josh Poimboeuf <jpoim...@redhat.com>
Cc: Juergen Gross <jgr...@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Michal Hocko <mho...@suse.com>
Cc: Nadav Amit <na...@vmware.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rik van Riel <r...@redhat.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Fixes: e7b52ffd45a6 ("x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range")
Link: http://lkml.kernel.org/r/10ed0e4dfea64daef10b87fb85df1746999b4dba.1492844372.git.l...@kernel.org
Signed-off-by: Ingo Molnar <mi...@kernel.org>
[bwh: Backported to 3.16: the special case was handled in flush_tlb_func(), not
 native_flush_tlb_others()]
Signed-off-by: Ben Hutchings <b...@decadent.org.uk>
---
 arch/x86/mm/tlb.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -107,8 +107,6 @@ static void flush_tlb_func(void *info)
        if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
                if (f->flush_end == TLB_FLUSH_ALL)
                        local_flush_tlb();
-               else if (!f->flush_end)
-                       __flush_tlb_single(f->flush_start);
                else {
                        unsigned long addr;
                        addr = f->flush_start;
@@ -248,7 +246,7 @@ void flush_tlb_page(struct vm_area_struc
        }
 
        if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
-               flush_tlb_others(mm_cpumask(mm), mm, start, 0UL);
+               flush_tlb_others(mm_cpumask(mm), mm, start, start + PAGE_SIZE);
 
        preempt_enable();
 }
