One quick comment while it's on my mind; I'll take a proper gander
tomorrow.

On Tue, Feb 02, 2021, Michael Roth wrote:
> diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
> index 0c8377aee52c..c2a05f56c8e4 100644
> --- a/arch/x86/kvm/svm/svm_ops.h
> +++ b/arch/x86/kvm/svm/svm_ops.h
> @@ -56,4 +56,9 @@ static inline void vmsave(hpa_t pa)
>       svm_asm1(vmsave, "a" (pa), "memory");
>  }
>  
> +static inline void vmload(hpa_t pa)

This needs to be 'unsigned long'; using 'hpa_t' here (and in vmsave()) is
wrong because the instructions consume rAX based on the effective address
size, not as a full 64-bit value.  I wrote the function comment for the
vmsave() fix so that it applies to both VMSAVE and VMLOAD, so this can be
a simple fixup on application (assuming a v5 isn't needed for other
reasons).

https://lkml.kernel.org/r/20210202223416.2702336-1-sea...@google.com
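
For clarity, a sketch of what both helpers would look like with that
fixup applied (assuming the svm_asm1() macro in svm_ops.h is otherwise
unchanged):

	/*
	 * VMSAVE/VMLOAD take the physical address in rAX and consume rAX
	 * based on the effective address size, hence 'unsigned long' and
	 * not the always-64-bit hpa_t.
	 */
	static inline void vmsave(unsigned long pa)
	{
		svm_asm1(vmsave, "a" (pa), "memory");
	}

	static inline void vmload(unsigned long pa)
	{
		svm_asm1(vmload, "a" (pa), "memory");
	}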

> +{
> +     svm_asm1(vmload, "a" (pa), "memory");
> +}
> +
>  #endif /* __KVM_X86_SVM_OPS_H */
> -- 
> 2.25.1
> 
