From: Xiao Guangrong <xiaoguangr...@cn.fujitsu.com>

After the introduction of nested nested paging, long mode may be used to
shadow a 32/PAE paging guest; fix the mmu audit spte walk to match.

Signed-off-by: Xiao Guangrong <xiaoguangr...@cn.fujitsu.com>
Signed-off-by: Avi Kivity <a...@redhat.com>
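
For context, a minimal sketch of how mmu_spte_walk() would read with this
change applied. Only the check shown in the hunk below is from the patch;
the PAE-root loop and the __mmu_spte_walk() helper call are assumptions
about the surrounding mmu_audit.c code at this revision and may differ.

/*
 * Sketch only, not part of the patch: mmu_spte_walk() with the walk
 * keyed off the guest's root_level instead of shadow_root_level.
 */
static void mmu_spte_walk(struct kvm_vcpu *vcpu, inspect_spte_fn fn)
{
	int i;
	struct kvm_mmu_page *sp;

	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
		return;

	/*
	 * With nested nested paging, shadow_root_level can indicate long
	 * mode even while the guest itself runs 32/PAE paging, so decide
	 * how to walk based on the guest's root_level instead.
	 */
	if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) {
		hpa_t root = vcpu->arch.mmu.root_hpa;

		sp = page_header(root);
		__mmu_spte_walk(vcpu, sp, fn);
		return;
	}

	/* 32/PAE guest: walk the four PAE root entries (assumed layout). */
	for (i = 0; i < 4; ++i) {
		hpa_t root = vcpu->arch.mmu.pae_root[i];

		if (root && VALID_PAGE(root)) {
			root &= PT64_BASE_ADDR_MASK;
			sp = page_header(root);
			__mmu_spte_walk(vcpu, sp, fn);
		}
	}
}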

diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index bd2b1be..dcca3e7 100644
--- a/arch/x86/kvm/mmu_audit.c
+++ b/arch/x86/kvm/mmu_audit.c
@@ -51,7 +51,7 @@ static void mmu_spte_walk(struct kvm_vcpu *vcpu, inspect_spte_fn fn)
        if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
                return;
 
-       if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
+       if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) {
                hpa_t root = vcpu->arch.mmu.root_hpa;
 
                sp = page_header(root);
--