On 1/26/21 12:54 PM, Peter Gonda wrote:
> sev_pin_memory assumes that callers hold the kvm->lock. This was true for
> all callers except svm_register_enc_region since it does not originate
> from svm_mem_enc_op. Also added lockdep annotation to help prevent
> future regressions.

I'm not exactly sure what problem you're fixing. What is the
symptom that you're seeing?

> 
> Tested: Booted SEV enabled VM on host.
> 
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Paolo Bonzini <[email protected]>
> Cc: Joerg Roedel <[email protected]>
> Cc: Tom Lendacky <[email protected]>
> Cc: Brijesh Singh <[email protected]>
> Cc: Sean Christopherson <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Fixes: 116a2214c5173 (KVM: SVM: Pin guest memory when SEV is active)

I can't find this commit. The Linux and KVM trees have it as:

1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")

> Signed-off-by: Peter Gonda <[email protected]>
> 
> ---
>  arch/x86/kvm/svm.c | 16 +++++++++-------

This patch won't apply, as it has already been a few releases since svm.c
was moved to the arch/x86/kvm/svm directory and this function now lives in
sev.c.

Thanks,
Tom

>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index afdc5b44fe9f..9884e57f3d0f 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1699,6 +1699,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
>       struct page **pages;
>       unsigned long first, last;
>  
> +     lockdep_assert_held(&kvm->lock);
> +
>       if (ulen == 0 || uaddr + ulen < uaddr)
>               return NULL;
>  
> @@ -7228,12 +7230,19 @@ static int svm_register_enc_region(struct kvm *kvm,
>       if (!region)
>               return -ENOMEM;
>  
> +     mutex_lock(&kvm->lock);
>       region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
>       if (!region->pages) {
>               ret = -ENOMEM;
>               goto e_free;
>       }
>  
> +     region->uaddr = range->addr;
> +     region->size = range->size;
> +
> +     list_add_tail(&region->list, &sev->regions_list);
> +     mutex_unlock(&kvm->lock);
> +
>       /*
>        * The guest may change the memory encryption attribute from C=0 -> C=1
>        * or vice versa for this memory range. Lets make sure caches are
> @@ -7242,13 +7251,6 @@ static int svm_register_enc_region(struct kvm *kvm,
>        */
>       sev_clflush_pages(region->pages, region->npages);
>  
> -     region->uaddr = range->addr;
> -     region->size = range->size;
> -
> -     mutex_lock(&kvm->lock);
> -     list_add_tail(&region->list, &sev->regions_list);
> -     mutex_unlock(&kvm->lock);
> -
>       return ret;
>  
>  e_free:
> 
