From: Marcelo Tosatti <mtosa...@redhat.com>

Zeroing in mmu_memory_cache_alloc() is unnecessary since:

- Smaller areas are pre-allocated with kmem_cache_zalloc (see the
  sketch after the sign-offs).
- The page pointed to by ->spt is overwritten by prefetch_page,
  and entries in the page pointed to by ->gfns are initialized
  before being read.

[avi: zeroing pages is unnecessary]

Signed-off-by: Marcelo Tosatti <mtosa...@redhat.com>
Signed-off-by: Avi Kivity <a...@redhat.com>
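
For context, the pre-allocation path referenced above looks roughly like
the sketch below. This is a simplified reconstruction of the topup helper
in mmu.c of this era, not the exact upstream code; names such as
mmu_topup_memory_cache and the precise signature are from memory and may
differ. The point it illustrates is that kmem_cache_zalloc() already
returns zero-filled objects, so clearing them again on the alloc side is
redundant:

	static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
					  struct kmem_cache *base_cache, int min)
	{
		void *obj;

		if (cache->nobjs >= min)
			return 0;
		while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
			/*
			 * kmem_cache_zalloc() hands back zeroed memory, so the
			 * object is already clear when it is later handed out
			 * by mmu_memory_cache_alloc().
			 */
			obj = kmem_cache_zalloc(base_cache, GFP_KERNEL);
			if (!obj)
				return -ENOMEM;
			cache->objects[cache->nobjs++] = obj;
		}
		return 0;
	}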

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 10bdb2a..9d186fa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -352,7 +352,6 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc,
 
        BUG_ON(!mc->nobjs);
        p = mc->objects[--mc->nobjs];
-       memset(p, 0, size);
        return p;
 }
 