On Mon, Oct 16, 2017 at 10:34:22AM -0500, Brijesh Singh wrote:
> When SEV is active, guest memory is encrypted with a guest-specific key, so
> a guest memory region shared with the hypervisor must be mapped as
> decrypted before we can share it.
>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: Paolo Bonzini <[email protected]>
> Cc: "Radim Krčmář" <[email protected]>
> Cc: Tom Lendacky <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Brijesh Singh <[email protected]>
> ---
>
> Changes since v5:
> early_set_memory_decrypted() takes care of decrypting the memory contents
> and changing the C bit, hence there is no need for an explicit call
> (sme_early_decrypt()) to decrypt the memory contents.
>
> Boris,
>
> I removed your R-b since I was not sure whether you are okay with the above
> changes. Please let me know if you are. Thanks.
>
> arch/x86/kernel/kvm.c | 35 ++++++++++++++++++++++++++++++++---
> 1 file changed, 32 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 8bb9594d0761..ff0f04077925 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -75,8 +75,8 @@ static int parse_no_kvmclock_vsyscall(char *arg)
>
> early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
>
> -static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
> -static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
> +static DEFINE_PER_CPU_DECRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
> +static DEFINE_PER_CPU_DECRYPTED(struct kvm_steal_time, steal_time) __aligned(64);
> static int has_steal_clock = 0;
>
> /*
> @@ -312,7 +312,7 @@ static void kvm_register_steal_time(void)
> cpu, (unsigned long long) slow_virt_to_phys(st));
> }
>
> -static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
> +static DEFINE_PER_CPU_DECRYPTED(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
>
> static notrace void kvm_guest_apic_eoi_write(u32 reg, u32 val)
> {
> @@ -426,9 +426,37 @@ void kvm_disable_steal_time(void)
> wrmsr(MSR_KVM_STEAL_TIME, 0, 0);
> }
>
> +static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
> +{
> + early_set_memory_decrypted(slow_virt_to_phys(ptr), size);
> +}
Ok, so this looks like a useless conversion: you pass in a virtual address,
it gets converted to a physical address with slow_virt_to_phys(), and then
early_set_memory_enc_dec() converts it back to a virtual address.

Why? Why not pass the virtual address directly?
> +/*
> + * Iterate through all possible CPUs and map the memory regions pointed
> + * to by apf_reason, steal_time and kvm_apic_eoi as decrypted at once.
> + *
> + * Note: we iterate through all possible CPUs to ensure that CPUs
> + * hotplugged later will have their per-cpu variables already mapped as
> + * decrypted.
> + */
> +static void __init sev_map_percpu_data(void)
> +{
> + int cpu;
> +
> + if (!sev_active())
> + return;
> +
> + for_each_possible_cpu(cpu) {
> +		__set_percpu_decrypted(&per_cpu(apf_reason, cpu), sizeof(apf_reason));
> +		__set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(steal_time));
> +		__set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(kvm_apic_eoi));
> + }
> +}
> +
> #ifdef CONFIG_SMP
> static void __init kvm_smp_prepare_boot_cpu(void)
> {
> + sev_map_percpu_data();
> kvm_guest_cpu_init();
> native_smp_prepare_boot_cpu();
> kvm_spinlock_init();
> @@ -496,6 +524,7 @@ void __init kvm_guest_init(void)
> kvm_cpu_online, kvm_cpu_down_prepare) < 0)
> pr_err("kvm_guest: Failed to install cpu hotplug callbacks\n");
> #else
> + sev_map_percpu_data();
> kvm_guest_cpu_init();
> #endif
Why isn't it enough to call

  sev_map_percpu_data()

only once, at the end of kvm_guest_init()? Why do you have to call it in
kvm_smp_prepare_boot_cpu() too? I mean, once you map those things
decrypted, there's no need to do it again...
--
Regards/Gruss,
Boris.
Good mailing practices for 400: avoid top-posting and trim the reply.