From: Ben Gardon <bgar...@google.com>

[ Upstream commit 734e45b329d626d2c14e2bcf8be3d069a33c3316 ]

The KVM MMU caches already guarantee that shadow page table memory will
be zeroed, so there is no reason to re-zero the page in the TDP MMU page
fault handler.

No functional change intended.

Reviewed-by: Peter Feiner <pfei...@google.com>
Reviewed-by: Sean Christopherson <sea...@google.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Ben Gardon <bgar...@google.com>
Message-Id: <20210202185734.1680553-5-bgar...@google.com>
Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f88404033e0c..136311be5890 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -706,7 +706,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
                        sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
                        list_add(&sp->link, &vcpu->kvm->arch.tdp_mmu_pages);
                        child_pt = sp->spt;
-                       clear_page(child_pt);
                        new_spte = make_nonleaf_spte(child_pt,
                                                     !shadow_accessed_mask);
 
-- 
2.30.1
