From: Dave Hansen <dave.han...@linux.intel.com>

The kernel image is mapped into two places in the virtual address
space (addresses without KASLR, of course):

        1. The kernel direct map (0xffff880000000000)
        2. The "high kernel map" (0xffffffff81000000)

We actually execute out of #2.  If we get the address of a kernel
symbol, it points to #2, but almost all physical-to-virtual
translations point to #1.

Parts of the "high kernel map" alias are mapped in the userspace
page tables with the Global bit for performance reasons.  The
parts that we map to userspace do not (er, should not) have
secrets.

This is fine, except that some areas in the kernel image that
are adjacent to the non-secret-containing areas are unused holes.
We free these holes back into the normal page allocator and
reuse them as normal kernel memory.  The memory will, of course,
get *used* via the normal map, but the alias mapping is kept.

This otherwise unused alias mapping of the holes will, by default,
keep the Global bit, be mapped out to userspace, and be
vulnerable to Meltdown.

Remove the alias mapping of these pages entirely.  This is likely
to fracture the 2M page mapping the kernel image near these areas,
but it should affect only a minority of the area.

The pageattr code changes *all* aliases mapping the physical pages
that it operates on (by default).  We only want to modify a single
alias, so we need to tweak its behavior.

This unmapping behavior is currently dependent on PTI being in
place.  Going forward, we should at least consider doing this for
all configurations.  Having an extra read-write alias for memory
is not exactly ideal for debugging things like random memory
corruption, and it undercuts features like DEBUG_PAGEALLOC
and future work like eXclusive Page Frame Ownership (XPFO).

Before this patch:

current_kernel:---[ High Kernel Mapping ]---
current_kernel-0xffffffff80000000-0xffffffff81000000          16M                               pmd
current_kernel-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
current_kernel-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
current_kernel-0xffffffff81e11000-0xffffffff82000000        1980K     RW                     NX pte
current_kernel-0xffffffff82000000-0xffffffff82600000           6M     ro         PSE     GLB NX pmd
current_kernel-0xffffffff82600000-0xffffffff82c00000           6M     RW         PSE         NX pmd
current_kernel-0xffffffff82c00000-0xffffffff82e00000           2M     RW                     NX pte
current_kernel-0xffffffff82e00000-0xffffffff83200000           4M     RW         PSE         NX pmd
current_kernel-0xffffffff83200000-0xffffffffa0000000         462M                               pmd

  current_user:---[ High Kernel Mapping ]---
  current_user-0xffffffff80000000-0xffffffff81000000          16M                               pmd
  current_user-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
  current_user-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
  current_user-0xffffffff81e11000-0xffffffff82000000        1980K     RW                     NX pte
  current_user-0xffffffff82000000-0xffffffff82600000           6M     ro         PSE     GLB NX pmd
  current_user-0xffffffff82600000-0xffffffffa0000000         474M                               pmd


After this patch:

current_kernel:---[ High Kernel Mapping ]---
current_kernel-0xffffffff80000000-0xffffffff81000000          16M                               pmd
current_kernel-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
current_kernel-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
current_kernel-0xffffffff81e11000-0xffffffff82000000        1980K                               pte
current_kernel-0xffffffff82000000-0xffffffff82400000           4M     ro         PSE     GLB NX pmd
current_kernel-0xffffffff82400000-0xffffffff82488000         544K     ro                     NX pte
current_kernel-0xffffffff82488000-0xffffffff82600000        1504K                               pte
current_kernel-0xffffffff82600000-0xffffffff82c00000           6M     RW         PSE         NX pmd
current_kernel-0xffffffff82c00000-0xffffffff82c0d000          52K     RW                     NX pte
current_kernel-0xffffffff82c0d000-0xffffffff82dc0000        1740K                               pte

  current_user:---[ High Kernel Mapping ]---
  current_user-0xffffffff80000000-0xffffffff81000000          16M                               pmd
  current_user-0xffffffff81000000-0xffffffff81e00000          14M     ro         PSE     GLB x  pmd
  current_user-0xffffffff81e00000-0xffffffff81e11000          68K     ro                 GLB x  pte
  current_user-0xffffffff81e11000-0xffffffff82000000        1980K                               pte
  current_user-0xffffffff82000000-0xffffffff82400000           4M     ro         PSE     GLB NX pmd
  current_user-0xffffffff82400000-0xffffffff82488000         544K     ro                     NX pte
  current_user-0xffffffff82488000-0xffffffff82600000        1504K                               pte
  current_user-0xffffffff82600000-0xffffffffa0000000         474M                               pmd

Signed-off-by: Dave Hansen <dave.han...@linux.intel.com>
Cc: Kees Cook <keesc...@google.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Andrea Arcangeli <aarca...@redhat.com>
Cc: Juergen Gross <jgr...@suse.com>
Cc: Josh Poimboeuf <jpoim...@redhat.com>
Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Hugh Dickins <hu...@google.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Andi Kleen <a...@linux.intel.com>
---

 b/arch/x86/include/asm/set_memory.h |    1 +
 b/arch/x86/mm/init.c                |   25 +++++++++++++++++++++++--
 b/arch/x86/mm/pageattr.c            |   13 +++++++++++++
 3 files changed, 37 insertions(+), 2 deletions(-)

diff -puN arch/x86/include/asm/set_memory.h~x86-unmap-freed-areas-from-kernel-image arch/x86/include/asm/set_memory.h
--- a/arch/x86/include/asm/set_memory.h~x86-unmap-freed-areas-from-kernel-image	2018-08-02 14:14:49.462483274 -0700
+++ b/arch/x86/include/asm/set_memory.h 2018-08-02 14:14:49.471483274 -0700
@@ -46,6 +46,7 @@ int set_memory_np(unsigned long addr, in
 int set_memory_4k(unsigned long addr, int numpages);
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
+int set_memory_np_noalias(unsigned long addr, int numpages);
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray);
 int set_memory_array_wc(unsigned long *addr, int addrinarray);
diff -puN arch/x86/mm/init.c~x86-unmap-freed-areas-from-kernel-image arch/x86/mm/init.c
--- a/arch/x86/mm/init.c~x86-unmap-freed-areas-from-kernel-image	2018-08-02 14:14:49.463483274 -0700
+++ b/arch/x86/mm/init.c        2018-08-02 14:14:49.471483274 -0700
@@ -780,8 +780,29 @@ void free_init_pages(char *what, unsigne
  */
 void free_kernel_image_pages(void *begin, void *end)
 {
-       free_init_pages("unused kernel image",
-                       (unsigned long)begin, (unsigned long)end);
+       unsigned long begin_ul = (unsigned long)begin;
+       unsigned long end_ul = (unsigned long)end;
+       unsigned long len_pages = (end_ul - begin_ul) >> PAGE_SHIFT;
+
+
+       free_init_pages("unused kernel image", begin_ul, end_ul);
+
+       /*
+        * PTI maps some of the kernel into userspace.  For
+        * performance, this includes some kernel areas that
+        * do not contain secrets.  Those areas might be
+        * adjacent to the parts of the kernel image being
+        * freed, which may contain secrets.  Remove the
+        * "high kernel image mapping" for these freed areas,
+        * ensuring they are not even potentially vulnerable
+        * to Meltdown regardless of the specific optimizations
+        * PTI is currently using.
+        *
+        * The "noalias" prevents unmapping the direct map
+        * alias which is needed to access the freed pages.
+        */
+       if (cpu_feature_enabled(X86_FEATURE_PTI))
+               set_memory_np_noalias(begin_ul, len_pages);
 }
 
 void __ref free_initmem(void)
diff -puN arch/x86/mm/pageattr.c~x86-unmap-freed-areas-from-kernel-image arch/x86/mm/pageattr.c
--- a/arch/x86/mm/pageattr.c~x86-unmap-freed-areas-from-kernel-image	2018-08-02 14:14:49.466483274 -0700
+++ b/arch/x86/mm/pageattr.c    2018-08-02 14:14:49.472483274 -0700
@@ -53,6 +53,7 @@ static DEFINE_SPINLOCK(cpa_lock);
 #define CPA_FLUSHTLB 1
 #define CPA_ARRAY 2
 #define CPA_PAGES_ARRAY 4
+#define CPA_NO_CHECK_ALIAS 8 /* Do not search for aliases */
 
 #ifdef CONFIG_PROC_FS
 static unsigned long direct_pages_count[PG_LEVEL_NUM];
@@ -1486,6 +1487,9 @@ static int change_page_attr_set_clr(unsi
 
        /* No alias checking for _NX bit modifications */
        checkalias = (pgprot_val(mask_set) | pgprot_val(mask_clr)) != _PAGE_NX;
+       /* Never check aliases if the caller asks for it explicitly: */
+       if (checkalias && (in_flag & CPA_NO_CHECK_ALIAS))
+               checkalias = 0;
 
        ret = __change_page_attr_set_clr(&cpa, checkalias);
 
@@ -1772,6 +1776,15 @@ int set_memory_np(unsigned long addr, in
        return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
 }
 
+int set_memory_np_noalias(unsigned long addr, int numpages)
+{
+       int cpa_flags = CPA_NO_CHECK_ALIAS;
+
+       return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
+                                       __pgprot(_PAGE_PRESENT), 0,
+                                       cpa_flags, NULL);
+}
+
 int set_memory_4k(unsigned long addr, int numpages)
 {
        return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
_
