On 04/09/19 01:36, Sean Christopherson wrote:
> Manually generate the PDPTR reserved bit mask when explicitly loading
> PDPTRs.  The reserved bits that are being tracked by the MMU reflect the
> current paging mode, which is unlikely to be PAE paging in the vast
> majority of flows that use load_pdptrs(), e.g. CR0 and CR4 emulation,
> __set_sregs(), etc...  This can cause KVM to incorrectly signal a bad
> PDPTR, or more likely, miss a reserved bit check and subsequently fail
> a VM-Enter due to a bad VMCS.GUEST_PDPTR.
> 
> Add a one-off helper to generate the reserved bits instead of sharing
> code across the MMU's calculations and the PDPTR emulation.  The PDPTR
> reserved bits are basically set in stone, and pushing a helper into
> the MMU's calculation adds unnecessary complexity without improving
> readability.
> 
> Opportunistically fix/update the comment for load_pdptrs().
> 
> Note, the buggy commit also introduced a deliberate functional change,
> "Also remove bit 5-6 from rsvd_bits_mask per latest SDM.", which was
> effectively (and correctly) reverted by commit cd9ae5fe47df ("KVM: x86:
> Fix page-tables reserved bits").  A bit of SDM archaeology shows that
> the SDM from late 2008 had a bug (likely a copy+paste error) where it
> listed bits 6:5 as AVL and A for PDPTEs used for 4KB entries but reserved
> for 2MB entries.  I.e. the SDM contradicted itself; bits 6:5 are, and
> always have been, reserved.
> 
> Fixes: 20c466b56168d ("KVM: Use rsvd_bits_mask in load_pdptrs()")
> Cc: [email protected]
> Cc: Nadav Amit <[email protected]>
> Reported-by: Doug Reiland <[email protected]>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
>  arch/x86/kvm/x86.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 290c3c3efb87..548cc6ef5408 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -674,8 +674,14 @@ static int kvm_read_nested_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
>                                      data, offset, len, access);
>  }
>  
> +static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
> +{
> +     return rsvd_bits(cpuid_maxphyaddr(vcpu), 63) | rsvd_bits(5, 8) |
> +            rsvd_bits(1, 2);
> +}
> +
>  /*
> - * Load the pae pdptrs.  Return true is they are all valid.
> + * Load the pae pdptrs.  Return 1 if they are all valid, 0 otherwise.
>   */
>  int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
>  {
> @@ -694,8 +700,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
>       }
>       for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
>               if ((pdpte[i] & PT_PRESENT_MASK) &&
> -                 (pdpte[i] &
> -                  vcpu->arch.mmu->guest_rsvd_check.rsvd_bits_mask[0][2])) {
> +                 (pdpte[i] & pdptr_rsvd_bits(vcpu))) {
>                       ret = 0;
>                       goto out;
>               }
> 

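For reference, the new mask matches the SDM's PAE PDPTE layout: bits 2:1 and
8:5 are reserved unconditionally, and bits 63:MAXPHYADDR are reserved above
the physical address width.  Below is a minimal userspace sketch of the
check, not KVM code: the local rsvd_bits() is a simplified version of KVM's
helper, MAXPHYADDR is passed in as a parameter instead of being read from
CPUID, and the value 36 is just an assumed example.

    #include <stdint.h>
    #include <stdio.h>

    #define PT_PRESENT_MASK (1ULL << 0)

    /* Simplified rsvd_bits(): mask with bits s..e (inclusive) set. */
    static uint64_t rsvd_bits(int s, int e)
    {
            return ((2ULL << (e - s)) - 1) << s;
    }

    /* PDPTE reserved bits, with MAXPHYADDR supplied by the caller. */
    static uint64_t pdptr_rsvd_bits(int maxphyaddr)
    {
            return rsvd_bits(maxphyaddr, 63) | rsvd_bits(5, 8) |
                   rsvd_bits(1, 2);
    }

    /* Same predicate as the patched loop in load_pdptrs(). */
    static const char *check(uint64_t pdpte, int maxphyaddr)
    {
            if ((pdpte & PT_PRESENT_MASK) &&
                (pdpte & pdptr_rsvd_bits(maxphyaddr)))
                    return "reject";
            return "accept";
    }

    int main(void)
    {
            /* Present PDPTE with only a page frame address set. */
            uint64_t good = PT_PRESENT_MASK | (0x1234ULL << 12);
            /* Same PDPTE with reserved bit 5 set: must be rejected. */
            uint64_t bad = good | (1ULL << 5);

            printf("good: %s\n", check(good, 36));
            printf("bad:  %s\n", check(bad, 36));
            return 0;
    }

Compiled with gcc, this should print "accept" for the clean PDPTE and
"reject" for the one with reserved bit 5 set, independent of whatever
paging mode the MMU's rsvd_bits_mask currently reflects.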
Queued, thanks.

Paolo
