3.16.60-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Joerg Roedel <jroe...@suse.de>

commit d6ef1f194b7569af8b8397876dc9ab07649d63cb upstream.

The walk_pte_level() function just uses __va() to get the virtual address of
the PTE page, but that breaks with HIGHPTE=y, where the PTE page can live in
highmem and therefore outside the direct mapping.

The result is an unhandled kernel paging request at some random address when
reading the current_kernel or current_user debugfs file.

Use the correct API, pte_offset_map()/pte_unmap(), to access PTE pages.
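
For context, pte_offset_map() on 32-bit x86 maps the PTE page through
kmap_atomic() when CONFIG_HIGHPTE is enabled, so the returned pointer is
valid even for PTE pages allocated in highmem. Roughly, as a sketch of the
pgtable_32.h definitions of that era (exact macros vary by kernel version):

	#ifdef CONFIG_HIGHPTE
	/* PTE page may be in highmem: create a temporary kernel mapping */
	#define pte_offset_map(dir, address) \
		((pte_t *)kmap_atomic(pmd_page(*(dir))) + pte_index((address)))
	#define pte_unmap(pte) kunmap_atomic((pte))
	#else
	/* PTE page is in lowmem: its direct-mapping address is always valid */
	#define pte_offset_map(dir, address) \
		((pte_t *)page_address(pmd_page(*(dir))) + pte_index((address)))
	#define pte_unmap(pte) do { } while (0)
	#endif

This is also why each lookup in the loop must be paired with pte_unmap(),
and why the patch below adds the linux/highmem.h include.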

Fixes: fe770bf0310d ('x86: clean up the page table dumper and add 32-bit support')
Signed-off-by: Joerg Roedel <jroe...@suse.de>
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Cc: jgr...@suse.com
Cc: jbeul...@suse.com
Cc: h...@zytor.com
Cc: aryabi...@virtuozzo.com
Cc: kirill.shute...@linux.intel.com
Link: https://lkml.kernel.org/r/1523971636-4137-1-git-send-email-j...@8bytes.org
[bwh: Backported to 3.16:
 - Keep using pte_pgprot() to get protection flags
 - Adjust context]
Signed-off-by: Ben Hutchings <b...@decadent.org.uk>
---
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -16,6 +16,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/highmem.h>
 
 #include <asm/pgtable.h>
 
@@ -263,15 +264,16 @@ static void walk_pte_level(struct seq_fi
                                                        unsigned long P)
 {
        int i;
-       pte_t *start;
+       pte_t *pte;
 
-       start = (pte_t *) pmd_page_vaddr(addr);
        for (i = 0; i < PTRS_PER_PTE; i++) {
-               pgprot_t prot = pte_pgprot(*start);
+               pgprot_t prot;
 
                st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
+               pte = pte_offset_map(&addr, st->current_address);
+               prot = pte_pgprot(*pte);
                note_page(m, st, prot, 4);
-               start++;
+               pte_unmap(pte);
        }
 }
 
