Re: [RFC PATCH v4 27/28] x86: Add support to encrypt the kernel in-place
On 3/1/2017 11:36 AM, Borislav Petkov wrote:
> On Thu, Feb 16, 2017 at 09:48:08AM -0600, Tom Lendacky wrote:
>> This patch adds the support to encrypt the kernel in-place. This is
>> done by creating new page mappings for the kernel - a decrypted
>> write-protected mapping and an encrypted mapping. The kernel is encyrpted
>
> s/encyrpted/encrypted/
>
>> by copying the kernel through a temporary buffer.
>
> "... by copying it... "

Ok.

>> Signed-off-by: Tom Lendacky
>> ---

...

>> +ENTRY(sme_encrypt_execute)
>> +
>> +#ifdef CONFIG_AMD_MEM_ENCRYPT
>> +	/*
>> +	 * Entry parameters:
>> +	 *   RDI - virtual address for the encrypted kernel mapping
>> +	 *   RSI - virtual address for the decrypted kernel mapping
>> +	 *   RDX - length of kernel
>> +	 *   RCX - address of the encryption workarea
>
> 						..., including:

Ok.

>> +	 *     - stack page (PAGE_SIZE)
>> +	 *     - encryption routine page (PAGE_SIZE)
>> +	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
>> +	 *   R8  - address of the pagetables to use for encryption
>> +	 */
>> +
>> +	/* Set up a one page stack in the non-encrypted memory area */
>> +	movq	%rcx, %rax
>> +	addq	$PAGE_SIZE, %rax
>> +	movq	%rsp, %rbp
>
> %rbp is callee-saved and you're overwriting it here. You need to push
> it first.

Yup, I'll re-work the entry code based on this comment and the one
below.

>> +	movq	%rax, %rsp
>> +	push	%rbp
>> +
>> +	push	%r12
>> +	push	%r13
>
> In general, just do all pushes on function entry and the pops on exit,
> like the compiler does.

>> +	movq	%rdi, %r10
>> +	movq	%rsi, %r11
>> +	movq	%rdx, %r12
>> +	movq	%rcx, %r13
>> +
>> +	/* Copy encryption routine into the workarea */
>> +	movq	%rax, %rdi
>> +	leaq	.Lencrypt_start(%rip), %rsi
>> +	movq	$(.Lencrypt_stop - .Lencrypt_start), %rcx
>> +	rep	movsb
>> +
>> +	/* Setup registers for call */
>> +	movq	%r10, %rdi
>> +	movq	%r11, %rsi
>> +	movq	%r8, %rdx
>> +	movq	%r12, %rcx
>> +	movq	%rax, %r8
>> +	addq	$PAGE_SIZE, %r8
>> +
>> +	/* Call the encryption routine */
>> +	call	*%rax
>> +
>> +	pop	%r13
>> +	pop	%r12
>> +
>> +	pop	%rsp	/* Restore original stack pointer */
>> +.Lencrypt_exit:
>
> Please put side comments like this here:
>
> ENTRY(sme_encrypt_execute)
>
> #ifdef CONFIG_AMD_MEM_ENCRYPT
> 	/*
> 	 * Entry parameters:
> 	 *   RDI - virtual address for the encrypted kernel mapping
> 	 *   RSI - virtual address for the decrypted kernel mapping
> 	 *   RDX - length of kernel
> 	 *   RCX - address of the encryption workarea
> 	 *     - stack page (PAGE_SIZE)
> 	 *     - encryption routine page (PAGE_SIZE)
> 	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
> 	 *   R8  - address of the pagetables to use for encryption
> 	 */
>
> 	/* Set up a one page stack in the non-encrypted memory area */
> 	movq	%rcx, %rax		# %rax = workarea
> 	addq	$PAGE_SIZE, %rax	# %rax += 4096
> 	movq	%rsp, %rbp		# stash stack ptr
> 	movq	%rax, %rsp		# set new stack
> 	push	%rbp			# needs to happen before the mov %rsp, %rbp
>
> 	push	%r12
> 	push	%r13
>
> 	movq	%rdi, %r10		# encrypted kernel
> 	movq	%rsi, %r11		# decrypted kernel
> 	movq	%rdx, %r12		# kernel length
> 	movq	%rcx, %r13		# workarea
>
> ... and so on.

Ok, can do.

...

>> diff --git a/arch/x86/kernel/mem_encrypt_init.c b/arch/x86/kernel/mem_encrypt_init.c
>> index 25af15d..07cbb90 100644
>> --- a/arch/x86/kernel/mem_encrypt_init.c
>> +++ b/arch/x86/kernel/mem_encrypt_init.c
>> @@ -16,9 +16,200 @@
>>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>>
>>  #include
>> +#include
>> +
>> +#include
>> +
>> +extern void sme_encrypt_execute(unsigned long, unsigned long, unsigned long,
>> +				void *, pgd_t *);
>
> This belongs into mem_encrypt.h.
>
> And I think it already came up: please use names for those params.

Yup, will move it.

>> +
>> +#define PGD_FLAGS	_KERNPG_TABLE_NOENC
>> +#define PUD_FLAGS	_KERNPG_TABLE_NOENC
>> +#define PMD_FLAGS	__PAGE_KERNEL_LARGE_EXEC
>> +
>> +static void __init *sme_pgtable_entry(pgd_t *pgd, void *next_page,
>> +				      void *vaddr, pmdval_t pmd_val)
>> +{
>
> sme_populate() or so sounds better.

Ok.

>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +
>> +	pgd += pgd_index((unsigned long)vaddr);
>> +	if (pgd_none(*pgd)) {
>> +		pud = next_page;
>> +		memset(pud, 0, sizeof(*pud) * PTRS_PER_PUD);
>> +		native_set_pgd(pgd,
>> +			       native_make_pgd((unsigned long)pud + PGD_FLAGS));
>
> Let it stick out, no need for those "stairs" in the vertical
> alignment :)

Ok.
Re: [RFC PATCH v4 27/28] x86: Add support to encrypt the kernel in-place
On Thu, Mar 02, 2017 at 12:30:31PM -0600, Tom Lendacky wrote:
> The "* 2" here and above is that a PUD and a PMD is needed for both
> the encrypted and decrypted mappings. I'll add a comment to clarify
> that.

Ah, makes sense. Definitely needs a comment.

> Yup, I can do that here too (but need PGDIR_SIZE).

Right, I did test and wanted to write PGDIR_SIZE but then ... I guess
something distracted me :-)

> So next_page is the first free page within the workarea in which a
> pagetable entry (PGD, PUD or PMD) can be created when we are populating
> the new mappings or adding the workarea to the current mapping. Any
> new pagetable structures that are created will use this value.

Ok, so I guess this needs an overview comment with maybe some ascii
showing how workarea, exec_size, full_size and all those other things
play together.

> Ok, I'll work on the comment. Something along the line of:
>
> /*
>  * The encrypted mapping of the kernel will use identity mapped
>  * virtual addresses. A different PGD index/entry must be used to
>  * get different pagetable entries for the decrypted mapping.
>  * Choose the next PGD index and convert it to a virtual address
>  * to be used as the base of the mapping.

Better.

> Except the workarea size includes both the encryption execution
> size and the pagetable structure size. I'll work on this to try
> and clarify it better.

That's a useful piece of info, yap, the big picture could use some
more explanation.

> Most definitely. I appreciate the feedback since I'm very close to
> the code and have an understanding of what I'm doing. I'd like to be
> sure that everyone can easily understand what is happening.

Nice!

Thanks.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
Re: [RFC PATCH v4 27/28] x86: Add support to encrypt the kernel in-place
On Thu, Feb 16, 2017 at 09:48:08AM -0600, Tom Lendacky wrote:
> This patch adds the support to encrypt the kernel in-place. This is
> done by creating new page mappings for the kernel - a decrypted
> write-protected mapping and an encrypted mapping. The kernel is encyrpted

s/encyrpted/encrypted/

> by copying the kernel through a temporary buffer.

"... by copying it... "

>
> Signed-off-by: Tom Lendacky
> ---

...

> +ENTRY(sme_encrypt_execute)
> +
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +	/*
> +	 * Entry parameters:
> +	 *   RDI - virtual address for the encrypted kernel mapping
> +	 *   RSI - virtual address for the decrypted kernel mapping
> +	 *   RDX - length of kernel
> +	 *   RCX - address of the encryption workarea

						..., including:

> +	 *     - stack page (PAGE_SIZE)
> +	 *     - encryption routine page (PAGE_SIZE)
> +	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
> +	 *   R8  - address of the pagetables to use for encryption
> +	 */
> +
> +	/* Set up a one page stack in the non-encrypted memory area */
> +	movq	%rcx, %rax
> +	addq	$PAGE_SIZE, %rax
> +	movq	%rsp, %rbp

%rbp is callee-saved and you're overwriting it here. You need to push
it first.

> +	movq	%rax, %rsp
> +	push	%rbp
> +
> +	push	%r12
> +	push	%r13

In general, just do all pushes on function entry and the pops on exit,
like the compiler does.

> +	movq	%rdi, %r10
> +	movq	%rsi, %r11
> +	movq	%rdx, %r12
> +	movq	%rcx, %r13
> +
> +	/* Copy encryption routine into the workarea */
> +	movq	%rax, %rdi
> +	leaq	.Lencrypt_start(%rip), %rsi
> +	movq	$(.Lencrypt_stop - .Lencrypt_start), %rcx
> +	rep	movsb
> +
> +	/* Setup registers for call */
> +	movq	%r10, %rdi
> +	movq	%r11, %rsi
> +	movq	%r8, %rdx
> +	movq	%r12, %rcx
> +	movq	%rax, %r8
> +	addq	$PAGE_SIZE, %r8
> +
> +	/* Call the encryption routine */
> +	call	*%rax
> +
> +	pop	%r13
> +	pop	%r12
> +
> +	pop	%rsp	/* Restore original stack pointer */
> +.Lencrypt_exit:

Please put side comments like this here:

ENTRY(sme_encrypt_execute)

#ifdef CONFIG_AMD_MEM_ENCRYPT
	/*
	 * Entry parameters:
	 *   RDI - virtual address for the encrypted kernel mapping
	 *   RSI - virtual address for the decrypted kernel mapping
	 *   RDX - length of kernel
	 *   RCX - address of the encryption workarea
	 *     - stack page (PAGE_SIZE)
	 *     - encryption routine page (PAGE_SIZE)
	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
	 *   R8  - address of the pagetables to use for encryption
	 */

	/* Set up a one page stack in the non-encrypted memory area */
	movq	%rcx, %rax		# %rax = workarea
	addq	$PAGE_SIZE, %rax	# %rax += 4096
	movq	%rsp, %rbp		# stash stack ptr
	movq	%rax, %rsp		# set new stack
	push	%rbp			# needs to happen before the mov %rsp, %rbp

	push	%r12
	push	%r13

	movq	%rdi, %r10		# encrypted kernel
	movq	%rsi, %r11		# decrypted kernel
	movq	%rdx, %r12		# kernel length
	movq	%rcx, %r13		# workarea

... and so on.

...

> diff --git a/arch/x86/kernel/mem_encrypt_init.c b/arch/x86/kernel/mem_encrypt_init.c
> index 25af15d..07cbb90 100644
> --- a/arch/x86/kernel/mem_encrypt_init.c
> +++ b/arch/x86/kernel/mem_encrypt_init.c
> @@ -16,9 +16,200 @@
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>
>  #include
> +#include
> +
> +#include
> +
> +extern void sme_encrypt_execute(unsigned long, unsigned long, unsigned long,
> +				void *, pgd_t *);

This belongs into mem_encrypt.h.

And I think it already came up: please use names for those params.

> +
> +#define PGD_FLAGS	_KERNPG_TABLE_NOENC
> +#define PUD_FLAGS	_KERNPG_TABLE_NOENC
> +#define PMD_FLAGS	__PAGE_KERNEL_LARGE_EXEC
> +
> +static void __init *sme_pgtable_entry(pgd_t *pgd, void *next_page,
> +				      void *vaddr, pmdval_t pmd_val)
> +{

sme_populate() or so sounds better.

> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	pgd += pgd_index((unsigned long)vaddr);
> +	if (pgd_none(*pgd)) {
> +		pud = next_page;
> +		memset(pud, 0, sizeof(*pud) * PTRS_PER_PUD);
> +		native_set_pgd(pgd,
> +			       native_make_pgd((unsigned long)pud + PGD_FLAGS));

Let it stick out, no need for those "stairs" in the vertical
alignment :)

> +		next_page += sizeof(*pud) * PTRS_PER_PUD;
> +	} else {
> +		pud = (pud_t *)(native_pgd_val(*pgd) & ~PTE_
[RFC PATCH v4 27/28] x86: Add support to encrypt the kernel in-place
This patch adds the support to encrypt the kernel in-place. This is
done by creating new page mappings for the kernel - a decrypted
write-protected mapping and an encrypted mapping. The kernel is encyrpted
by copying the kernel through a temporary buffer.

Signed-off-by: Tom Lendacky
---
 arch/x86/kernel/Makefile           |    1 
 arch/x86/kernel/mem_encrypt_boot.S |  156 +
 arch/x86/kernel/mem_encrypt_init.c |  191 
 3 files changed, 348 insertions(+)
 create mode 100644 arch/x86/kernel/mem_encrypt_boot.S

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 33af80a..dc3ed84 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -142,4 +142,5 @@ ifeq ($(CONFIG_X86_64),y)
 	obj-y					+= vsmp_64.o
 
 	obj-y					+= mem_encrypt_init.o
+	obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
 endif
diff --git a/arch/x86/kernel/mem_encrypt_boot.S b/arch/x86/kernel/mem_encrypt_boot.S
new file mode 100644
index 000..58e1756
--- /dev/null
+++ b/arch/x86/kernel/mem_encrypt_boot.S
@@ -0,0 +1,156 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+	.text
+	.code64
+ENTRY(sme_encrypt_execute)
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/*
+	 * Entry parameters:
+	 *   RDI - virtual address for the encrypted kernel mapping
+	 *   RSI - virtual address for the decrypted kernel mapping
+	 *   RDX - length of kernel
+	 *   RCX - address of the encryption workarea
+	 *     - stack page (PAGE_SIZE)
+	 *     - encryption routine page (PAGE_SIZE)
+	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
+	 *   R8  - address of the pagetables to use for encryption
+	 */
+
+	/* Set up a one page stack in the non-encrypted memory area */
+	movq	%rcx, %rax
+	addq	$PAGE_SIZE, %rax
+	movq	%rsp, %rbp
+	movq	%rax, %rsp
+	push	%rbp
+
+	push	%r12
+	push	%r13
+
+	movq	%rdi, %r10
+	movq	%rsi, %r11
+	movq	%rdx, %r12
+	movq	%rcx, %r13
+
+	/* Copy encryption routine into the workarea */
+	movq	%rax, %rdi
+	leaq	.Lencrypt_start(%rip), %rsi
+	movq	$(.Lencrypt_stop - .Lencrypt_start), %rcx
+	rep	movsb
+
+	/* Setup registers for call */
+	movq	%r10, %rdi
+	movq	%r11, %rsi
+	movq	%r8, %rdx
+	movq	%r12, %rcx
+	movq	%rax, %r8
+	addq	$PAGE_SIZE, %r8
+
+	/* Call the encryption routine */
+	call	*%rax
+
+	pop	%r13
+	pop	%r12
+
+	pop	%rsp	/* Restore original stack pointer */
+.Lencrypt_exit:
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sme_encrypt_execute)
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+/*
+ * Routine used to encrypt kernel.
+ *   This routine must be run outside of the kernel proper since
+ *   the kernel will be encrypted during the process. So this
+ *   routine is defined here and then copied to an area outside
+ *   of the kernel where it will remain and run decrypted
+ *   during execution.
+ *
+ * On entry the registers must be:
+ *   RDI - virtual address for the encrypted kernel mapping
+ *   RSI - virtual address for the decrypted kernel mapping
+ *   RDX - address of the pagetables to use for encryption
+ *   RCX - length of kernel
+ *   R8  - intermediate copy buffer
+ *
+ *   RAX - points to this routine
+ *
+ * The kernel will be encrypted by copying from the non-encrypted
+ * kernel space to an intermediate buffer and then copying from the
+ * intermediate buffer back to the encrypted kernel space. The physical
+ * addresses of the two kernel space mappings are the same which
+ * results in the kernel being encrypted "in place".
+ */
+.Lencrypt_start:
+	/* Enable the new page tables */
+	mov	%rdx, %cr3
+
+	/* Flush any global TLBs */
+	mov	%cr4, %rdx
+	andq	$~X86_CR4_PGE, %rdx
+	mov	%rdx, %cr4
+	orq	$X86_CR4_PGE, %rdx
+	mov	%rdx, %cr4
+
+	/* Set the PAT register PA5 entry to write-protect */
+	push	%rcx
+	movl	$MSR_IA32_CR_PAT, %ecx
+	rdmsr
+	push	%rdx			/* Save original PAT value */
+	andl	$0xffff00ff, %edx	/* Clear PA5 */
+	orl	$0x0500, %edx		/* Set PA5 to WP */
+	wrmsr
+	pop	%rdx			/* RDX contains original PAT value */
+	pop	%rcx
+
+	movq	%rcx, %r9		/* Save len