On 12/08/2017 15:35, Yu Zhang wrote:
>  struct rsvd_bits_validate {
> -     u64 rsvd_bits_mask[2][4];
> +     u64 rsvd_bits_mask[2][5];
>       u64 bad_mt_xwr;
>  };


Can you change this 4 to PT64_ROOT_MAX_LEVEL in patch 2?
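Something like this, assuming patch 2 introduces PT64_ROOT_MAX_LEVEL (just a
sketch):

	struct rsvd_bits_validate {
		/* one entry per possible root level, up to the maximum */
		u64 rsvd_bits_mask[2][PT64_ROOT_MAX_LEVEL];
		u64 bad_mt_xwr;
	};

Then this patch would not need to touch the array size at all.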

> -     if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_4LEVEL &&
> -         (vcpu->arch.mmu.root_level == PT64_ROOT_4LEVEL ||
> -          vcpu->arch.mmu.direct_map)) {
> +     if (vcpu->arch.mmu.root_level >= PT64_ROOT_4LEVEL ||
> +         vcpu->arch.mmu.direct_map) {
>               hpa_t root = vcpu->arch.mmu.root_hpa;

You should keep the check on shadow_root_level (changing it to >= of
course), otherwise you break the case where EPT is disabled, paging is
disabled (so vcpu->arch.mmu.direct_map is true) and the host kernel is
32-bit.  In that case shadow pages use PAE format, and entering this
branch is incorrect.
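That is, something along these lines (only a sketch, not tested):

	/* keep shadow_root_level in the condition, just relax it to >= */
	if (vcpu->arch.mmu.shadow_root_level >= PT64_ROOT_4LEVEL &&
	    (vcpu->arch.mmu.root_level >= PT64_ROOT_4LEVEL ||
	     vcpu->arch.mmu.direct_map)) {
		hpa_t root = vcpu->arch.mmu.root_hpa;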

> @@ -4444,7 +4457,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  
>       MMU_WARN_ON(VALID_PAGE(context->root_hpa));
>  
> -     context->shadow_root_level = kvm_x86_ops->get_tdp_level();
> +     context->shadow_root_level = kvm_x86_ops->get_tdp_level(vcpu);
>  
>       context->nx = true;
>       context->ept_ad = accessed_dirty;

Below, there is:

        context->root_level = context->shadow_root_level;

This should be forced to PT64_ROOT_4LEVEL until there is support for
nested EPT 5-level page tables.
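For example (again only a sketch):

	context->shadow_root_level = kvm_x86_ops->get_tdp_level(vcpu);
	...
	/* nested EPT walks stay 4-level until 5-level EPT is supported */
	context->root_level = PT64_ROOT_4LEVEL;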

Thanks,

Paolo
