The page table walk has gotten crufty over the years and is threatening to
become even more crufty when SMAP is introduced.  Clean it up (and optimize
it) somewhat.

v2:
  fix SMEP false positive by moving checks to the end of the walk
  fix last_pte_bitmap documentation
  fix incorrect SMEP fault permission checks
  introduce helper for accessing the permission bitmap (see the sketch below)
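
As a rough illustration of the permission bitmap idea mentioned above (not
the actual patch code), the sketch below precomputes, for every page-fault
error-code combination, which accumulated gpte access bits would fault, so
the per-walk permission check collapses to a single table lookup.  The
names, bit layout, and userspace harness are made up for the example and do
not necessarily match the series:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Compressed page-fault error-code bits used as the table index. */
#define PFEC_WRITE	(1u << 0)	/* access was a write */
#define PFEC_USER	(1u << 1)	/* access came from user mode */
#define PFEC_FETCH	(1u << 2)	/* access was an instruction fetch */

/* Accumulated gpte access bits gathered during the walk. */
#define ACC_WRITE	(1u << 0)	/* gptes allow writing */
#define ACC_USER	(1u << 1)	/* gptes allow user access */
#define ACC_EXEC	(1u << 2)	/* gptes allow execution (NX clear) */

/* permissions[pfec] has bit 'access' set when that combination faults. */
static uint8_t permissions[8];

/* Precompute the table, e.g. whenever CR4.SMEP or EFER.NX changes. */
static void update_permission_bitmask(bool smep)
{
	for (unsigned int pfec = 0; pfec < 8; pfec++) {
		uint8_t map = 0;

		for (unsigned int access = 0; access < 8; access++) {
			bool fault = false;

			if ((pfec & PFEC_WRITE) && !(access & ACC_WRITE))
				fault = true;	/* write to read-only gpte */
			if ((pfec & PFEC_USER) && !(access & ACC_USER))
				fault = true;	/* user access to supervisor gpte */
			if ((pfec & PFEC_FETCH) && !(access & ACC_EXEC))
				fault = true;	/* fetch from NX gpte */
			if (smep && (pfec & PFEC_FETCH) &&
			    !(pfec & PFEC_USER) && (access & ACC_USER))
				fault = true;	/* SMEP: kernel fetch from user gpte */

			if (fault)
				map |= 1u << access;
		}
		permissions[pfec] = map;
	}
}

/* The per-walk check is reduced to one table lookup. */
static bool permission_fault(unsigned int pfec, unsigned int access)
{
	return (permissions[pfec] >> access) & 1;
}

int main(void)
{
	update_permission_bitmask(true);

	/*
	 * Kernel instruction fetch from a user-accessible, executable page:
	 * allowed without SMEP, faults with SMEP enabled (prints 1 here).
	 */
	printf("%d\n", permission_fault(PFEC_FETCH, ACC_USER | ACC_EXEC));
	return 0;
}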

Avi Kivity (9):
  KVM: MMU: Push clean gpte write protection out of gpte_access()
  KVM: MMU: Optimize gpte_access() slightly
  KVM: MMU: Move gpte_access() out of paging_tmpl.h
  KVM: MMU: Update accessed and dirty bits after guest pagetable walk
  KVM: MMU: Optimize pte permission checks
  KVM: MMU: Simplify walk_addr_generic() loop
  KVM: MMU: Optimize is_last_gpte()
  KVM: MMU: Eliminate eperm temporary
  KVM: MMU: Avoid access/dirty update loop if all is well

 arch/x86/include/asm/kvm_host.h |  14 +++
 arch/x86/kvm/mmu.c              |  91 +++++++++++++++++++
 arch/x86/kvm/mmu.h              |  25 +++---
 arch/x86/kvm/paging_tmpl.h      | 190 +++++++++++++++++-----------------------
 arch/x86/kvm/x86.c              |  11 +--
 5 files changed, 202 insertions(+), 129 deletions(-)

-- 
1.7.12
