Re: [PATCH v3 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

2020-10-28 Thread Joerg Roedel
On Tue, Oct 27, 2020 at 12:08:12PM +0100, Borislav Petkov wrote:
> On Wed, Oct 21, 2020 at 02:39:36PM +0200, Joerg Roedel wrote:
> > diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
> > new file mode 100644
> > index 000000000000..5075458ecad0
> > --- /dev/null
> > +++ b/arch/x86/kernel/sev_verify_cbit.S
> 
> Why a separate file? You're using it just like verify_cpu.S and this is
> kinda verifying the CPU, so you could simply add the functionality there...

verify_cpu.S is also used on 32-bit and this function is 64-bit code. It
could be made to work with some #ifdef'fery, but I think it is cleaner to
keep it in a separate file, especially since sev_verify_cbit() is not
needed at every place verify_cpu() is called.

> Yeah, can you please use the callee-clobbered registers in the order as
> they're used by the ABI, see arch/x86/entry/calling.h.
> 
> Because I'm looking at this and wondering whether rsi, rdx and rcx are
> somehow live here and you're avoiding them...

Makes sense, will update the function.
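
Something like the below, following the order in calling.h (%rdi holds
the page-table argument, so %rsi, %rdx and %rcx become the scratch
registers). Untested sketch, the final assignment may end up slightly
different:

        /* First check if a C-bit was detected */
        movq    sme_me_mask(%rip), %rsi
        testq   %rsi, %rsi
        jz      3f

        /* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
        movq    sev_status(%rip), %rsi
        testq   %rsi, %rsi
        jz      3f

        /* Save CR4 in %rsi, make a copy with Global Pages disabled in %rdx */
        movq    %cr4, %rsi
        movq    %rsi, %rdx
        andq    $(~X86_CR4_PGE), %rdx
        movq    %rdx, %cr4

with %rdx then taking the RDRAND value and %rcx the %cr3 backup for the
rest of the function.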

Regards,

Joerg



Re: [PATCH v3 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

2020-10-27 Thread Borislav Petkov
On Wed, Oct 21, 2020 at 02:39:36PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
> new file mode 100644
> index 000000000000..5075458ecad0
> --- /dev/null
> +++ b/arch/x86/kernel/sev_verify_cbit.S

Why a separate file? You're using it just like verify_cpu.S and this is
kinda verifying the CPU, so you could simply add the functionality there...

> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + *   sev_verify_cbit.S - Code for verification of the C-bit position reported
> + *   by the Hypervisor when running with SEV enabled.
> + *
> + *   Copyright (c) 2020  Joerg Roedel (jroe...@suse.de)
> + *
> + * Implements sev_verify_cbit() which is called before switching to a new
> + * long-mode page-table at boot.
> + *
> + * It verifies that the C-bit position is correct by writing a random value to
> + * an encrypted memory location while on the current page-table. Then it
> + * switches to the new page-table to verify the memory content is still the
> + * same. After that it switches back to the current page-table and when the
> + * check succeeded it returns. If the check failed the code invalidates the
> + * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
> + * make sure no interrupt or exception can get the CPU out of the hlt loop.
> + *
> + * New page-table pointer is expected in %rdi (first parameter)
> + *
> + */
> +SYM_FUNC_START(sev_verify_cbit)
> +#ifdef CONFIG_AMD_MEM_ENCRYPT

Yeah, can you please use the callee-clobbered registers in the order as
they're used by the ABI, see arch/x86/entry/calling.h.

Because I'm looking at this and wondering whether rsi, rdx and rcx are
somehow live here and you're avoiding them...

Otherwise nice commenting - I like when it is properly explained what
the asm does and what it expects as input, cool.

Thx.

-- 
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette


Re: [PATCH v3 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

2020-10-27 Thread Borislav Petkov
On Wed, Oct 21, 2020 at 02:39:36PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel 
> 
> Check whether the hypervisor reported the correct C-bit when running as
> an SEV guest. A wrong C-bit position could be used to leak
> sensitive data from the guest to the hypervisor.
> 
> The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
> can be re-used in the running kernel image.
> 
> Signed-off-by: Joerg Roedel 
> ---
>  arch/x86/boot/compressed/ident_map_64.c |  1 +
>  arch/x86/boot/compressed/mem_encrypt.S  |  4 ++
>  arch/x86/boot/compressed/misc.h          |  2 +
>  arch/x86/kernel/sev_verify_cbit.S        | 90 +
>  4 files changed, 97 insertions(+)
>  create mode 100644 arch/x86/kernel/sev_verify_cbit.S
> 
> diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
> index 063a60edcf99..73abba3312a7 100644
> --- a/arch/x86/boot/compressed/ident_map_64.c
> +++ b/arch/x86/boot/compressed/ident_map_64.c
> @@ -153,6 +153,7 @@ void initialize_identity_maps(void)
>   * into cr3.
>   */
>   add_identity_map((unsigned long)_head, (unsigned long)_end);
> + sev_verify_cbit(top_level_pgt);
>   write_cr3(top_level_pgt);
>  }

Btw, might wanna redo them on top of -rc1 because it looks like this after
Arvind's three fixes:

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index a5e5db6ada3c..81f6003553f8 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -162,6 +162,7 @@ void initialize_identity_maps(void *rmode)
   add_identity_map((unsigned long)boot_params, (unsigned long)(boot_params + 1));
   cmdline = get_cmd_line_ptr();
   add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);
+  sev_verify_cbit(top_level_pgt);
 
   /* Load the new page-table. */
   write_cr3(top_level_pgt);


-- 
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette


[PATCH v3 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

2020-10-21 Thread Joerg Roedel
From: Joerg Roedel 

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. A wrong C-bit position could be used to leak
sensitive data from the guest to the hypervisor.

The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
can be re-used in the running kernel image.

Signed-off-by: Joerg Roedel 
---
 arch/x86/boot/compressed/ident_map_64.c |  1 +
 arch/x86/boot/compressed/mem_encrypt.S  |  4 ++
 arch/x86/boot/compressed/misc.h          |  2 +
 arch/x86/kernel/sev_verify_cbit.S        | 90 +
 4 files changed, 97 insertions(+)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index 063a60edcf99..73abba3312a7 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -153,6 +153,7 @@ void initialize_identity_maps(void)
    * into cr3.
    */
   add_identity_map((unsigned long)_head, (unsigned long)_end);
+   sev_verify_cbit(top_level_pgt);
   write_cr3(top_level_pgt);
 }
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 2192b3bd78d8..7409f2343d38 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
 SYM_FUNC_END(get_sev_encryption_bit)
 
.code64
+
+#include "../../kernel/sev_verify_cbit.S"
+
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
   push    %rbp
@@ -105,4 +108,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
.balign 8
 SYM_DATA(sme_me_mask,  .quad 0)
 SYM_DATA(sev_status,   .quad 0)
+SYM_DATA(sev_check_data,   .quad 0)
 #endif
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 6d31f1b4c4d1..d9a631c5973c 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -159,4 +159,6 @@ void boot_page_fault(void);
 void boot_stage1_vc(void);
 void boot_stage2_vc(void);
 
+unsigned long sev_verify_cbit(unsigned long cr3);
+
 #endif /* BOOT_COMPRESSED_MISC_H */
diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
new file mode 100644
index 000000000000..5075458ecad0
--- /dev/null
+++ b/arch/x86/kernel/sev_verify_cbit.S
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * sev_verify_cbit.S - Code for verification of the C-bit position reported
+ * by the Hypervisor when running with SEV enabled.
+ *
+ * Copyright (c) 2020  Joerg Roedel (jroe...@suse.de)
+ *
+ * Implements sev_verify_cbit() which is called before switching to a new
+ * long-mode page-table at boot.
+ *
+ * It verifies that the C-bit position is correct by writing a random value to
+ * an encrypted memory location while on the current page-table. Then it
+ * switches to the new page-table to verify the memory content is still the
+ * same. After that it switches back to the current page-table and when the
+ * check succeeded it returns. If the check failed the code invalidates the
+ * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
+ * make sure no interrupt or exception can get the CPU out of the hlt loop.
+ *
+ * New page-table pointer is expected in %rdi (first parameter)
+ *
+ */
+SYM_FUNC_START(sev_verify_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+   /* First check if a C-bit was detected */
+   movq    sme_me_mask(%rip), %r10
+   testq   %r10, %r10
+   jz  3f
+
+   /* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
+   movq    sev_status(%rip), %r10
+   testq   %r10, %r10
+   jz  3f
+
+   /* Save CR4 in %r8 */
+   movq    %cr4, %r8
+
+   /* Disable Global Pages */
+   movq    %r8, %r9
+   andq    $(~X86_CR4_PGE), %r9
+   movq    %r9, %cr4
+
+   /*
+    * Verified that we are running under SEV - now get a random value using
+    * RDRAND. This instruction is mandatory when running as an SEV guest.
+    *
+    * Don't bail out of the loop if RDRAND returns errors. It is better to
+    * prevent forward progress than to work with a non-random value here.
+    */
+1: rdrand  %r10
+   jnc 1b
+
+   /* Store value to memory and keep it in %r10 */
+   movq    %r10, sev_check_data(%rip)
+
+   /* Backup current %cr3 value to restore it later */
+   movq    %cr3, %r11
+
+   /* Switch to new %cr3 - This might unmap the stack */
+   movq    %rdi, %cr3
+
+   /*
+    * Compare value in %r10 with memory location - If C-bit is incorrect
+    * this would read the encrypted data and make the check fail.
+    */
+   cmpq    %r10, sev_check_data(%rip)
+
+   /* Restore old %cr3 */
+   movq    %r11, %cr3
+
+   /* Restore previous CR4 */
+   movq    %r8, %cr4
+
+   /* Check CMPQ res
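
Per the header comment, the remainder of the function checks the CMPQ
result: on success it returns, on failure it invalidates the stack
pointer and goes into a hlt loop. A minimal sketch of that tail, with
illustrative labels (see the full 90-line patch for the exact code):

        /* Check CMPQ result */
        jne     4f

3:      /* Check passed (or SEV not active) - return to caller */
        ret

4:      /*
         * Check failed - invalidate the stack pointer so that no
         * interrupt or exception can get the CPU out of the hlt loop.
         */
        movq    $0, %rsp
        subq    $0x1000, %rsp
2:      hlt
        jmp     2b
#endif
SYM_FUNC_END(sev_verify_cbit)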