Re: [PATCH 4/5] KVM: MMU: Optimize pte permission checks

2012-09-13 Thread Xiao Guangrong
On 09/12/2012 10:29 PM, Avi Kivity wrote: walk_addr_generic()'s permission checks are a maze of branchy code, performed four times per lookup. They depend on the type of access, efer.nxe, cr0.wp, cr4.smep and, in the near future, cr4.smap. Optimize this away by precalculating all

Re: [PATCH 4/5] KVM: MMU: Optimize pte permission checks

2012-09-13 Thread Avi Kivity
On 09/13/2012 03:09 PM, Xiao Guangrong wrote: The result is short, branch-free code. Signed-off-by: Avi Kivity a...@redhat.com +static void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu) +{ +unsigned bit, byte, pfec; +u8 map; +bool fault, x, w, u,

Re: [PATCH 4/5] KVM: MMU: Optimize pte permission checks

2012-09-13 Thread Xiao Guangrong
On 09/12/2012 10:29 PM, Avi Kivity wrote: + pte_access = pt_access & gpte_access(vcpu, pte); + eperm |= (mmu->permissions[access >> 1] >> pte_access) & 1; last_gpte = FNAME(is_last_gpte)(walker, vcpu, mmu, pte); - if (last_gpte) { -

Re: [PATCH 4/5] KVM: MMU: Optimize pte permission checks

2012-09-13 Thread Avi Kivity
On 09/13/2012 03:41 PM, Xiao Guangrong wrote: On 09/12/2012 10:29 PM, Avi Kivity wrote: +pte_access = pt_access & gpte_access(vcpu, pte); +eperm |= (mmu->permissions[access >> 1] >> pte_access) & 1; last_gpte = FNAME(is_last_gpte)(walker, vcpu, mmu, pte); -

[PATCH 4/5] KVM: MMU: Optimize pte permission checks

2012-09-12 Thread Avi Kivity
walk_addr_generic()'s permission checks are a maze of branchy code, performed four times per lookup. They depend on the type of access, efer.nxe, cr0.wp, cr4.smep and, in the near future, cr4.smap. Optimize this away by precalculating all variants and storing them in a bitmap. The bitmap