On Wed, Oct 21, 2020 at 02:39:36PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
> new file mode 100644
> index 000000000000..5075458ecad0
> --- /dev/null
> +++ b/arch/x86/kernel/sev_verify_cbit.S

Why a separate file? You're using it just like verify_cpu.S, and this is
kinda verifying the CPU, so you could simply add the functionality there...

> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + *   sev_verify_cbit.S - Code for verification of the C-bit position reported
> + *                       by the Hypervisor when running with SEV enabled.
> + *
> + *   Copyright (c) 2020  Joerg Roedel (jroe...@suse.de)
> + *
> + * Implements sev_verify_cbit() which is called before switching to a new
> + * long-mode page-table at boot.
> + *
> + * It verifies that the C-bit position is correct by writing a random value
> + * to an encrypted memory location while on the current page-table. Then it
> + * switches to the new page-table to verify that the memory content is still
> + * the same. After that it switches back to the current page-table; if the
> + * check succeeds it returns, and if the check fails the code invalidates
> + * the stack pointer and goes into a hlt loop. The stack pointer is
> + * invalidated to make sure no interrupt or exception can get the CPU out
> + * of the hlt loop.
> + *
> + * New page-table pointer is expected in %rdi (first parameter)
> + *
> + */
> +SYM_FUNC_START(sev_verify_cbit)
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
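
For reference, a minimal sketch of the flow that comment describes,
assuming a hypothetical sev_check_data scratch location in encrypted
memory (the register choice here is illustrative, not the patch body):

	/* Get a random check value; retry until RDRAND succeeds */
1:	rdrand	%rsi
	jnc	1b
	movq	%rsi, sev_check_data(%rip)	/* store via current page-table */
	movq	%cr3, %rdx			/* back up current page-table */
	movq	%rdi, %cr3			/* switch to the new page-table */
	/* Re-read: a wrong C-bit position makes this read ciphertext */
	cmpq	%rsi, sev_check_data(%rip)
	movq	%rdx, %cr3			/* switch back before branching */
	je	3f				/* match - C-bit position is fine */
	xorq	%rsp, %rsp			/* invalidate stack so no interrupt
						 * or exception can resume us */
2:	hlt
	jmp	2b				/* mismatch: stay in the hlt loop */
3: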

Yeah, can you please use the callee-clobbered registers in the order in
which they're used by the ABI, see arch/x86/entry/calling.h.

Because I'm looking at this and wondering whether rsi, rdx and rcx are
somehow live here and you're avoiding them...
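
For the record, the callee-clobbered registers in ABI order (per the
comment in arch/x86/entry/calling.h) are rdi, rsi, rdx, rcx, r8, r9,
plus rax, r10 and r11. With the new page-table pointer already in %rdi,
taking the scratch registers in that order would look something like
this (illustrative only, retry loop for RDRAND omitted):

	movq	%cr3, %rsi			/* 1st scratch: page-table backup */
	rdrand	%rdx				/* 2nd scratch: random check value */
	movq	sev_check_data(%rip), %rcx	/* 3rd scratch: readback to compare */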

Otherwise nice commenting - I like it when it is properly explained what
the asm does and what it expects as input, cool.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
