On 09/12/2012 10:29 PM, Avi Kivity wrote:
walk_addr_generic() permission checks are a maze of branchy code, which is
performed four times per lookup. It depends on the type of access, efer.nxe,
cr0.wp, cr4.smep, and in the near future, cr4.smap.
Optimize this away by precalculating all variants and storing them in a bitmap.
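For scale, assuming the encodings used below (mirroring the kernel's PFERR_* and ACC_* values): the error-code bits above the always-set present bit give 16 index values, and the pte access bits x/w/u give 8, so the whole decision space fits in 16 bytes:

	/*
	 * Shape of the precomputed table; the actual patch adds a field of
	 * this shape to struct kvm_mmu, the standalone declaration here is
	 * only illustrative.
	 */
	uint8_t permissions[16];	/* index: pfec >> 1; bit: pte x/w/u */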
On 09/13/2012 03:09 PM, Xiao Guangrong wrote:
The result is short, branch-free code.
Signed-off-by: Avi Kivity <a...@redhat.com>
+static void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+{
+	unsigned bit, byte, pfec;
+	u8 map;
+	bool fault, x, w, u, wf, uf, ff, smep;
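The hunk is cut off here. As a reference for the discussion, a self-contained user-space sketch of the precalculation it performs, with the vcpu state reduced to three booleans (nx, wp, smep standing in for efer.nxe, cr0.wp, cr4.smep); simplified names, not the kernel's exact code:

#include <stdint.h>
#include <stdbool.h>

#define PFERR_WRITE_MASK (1u << 1)
#define PFERR_USER_MASK  (1u << 2)
#define PFERR_FETCH_MASK (1u << 4)

#define ACC_EXEC_MASK  1u
#define ACC_WRITE_MASK 2u
#define ACC_USER_MASK  4u

/*
 * For every (page-fault error code, pte access) pair, precompute
 * whether the access faults.  Rebuilt only when nx/wp/smep change.
 */
static void update_permission_bitmask(uint8_t permissions[16],
				      bool nx, bool wp, bool smep)
{
	for (unsigned byte = 0; byte < 16; ++byte) {
		/* error-code bits; the always-set present bit is shifted out */
		unsigned pfec = byte << 1;
		bool wf = pfec & PFERR_WRITE_MASK;	/* write access? */
		bool uf = pfec & PFERR_USER_MASK;	/* from user mode? */
		bool ff = pfec & PFERR_FETCH_MASK;	/* instruction fetch? */
		uint8_t map = 0;

		for (unsigned bit = 0; bit < 8; ++bit) {
			bool x = bit & ACC_EXEC_MASK;
			bool w = bit & ACC_WRITE_MASK;
			bool u = bit & ACC_USER_MASK;

			/* without nx, every present page is executable */
			x |= !nx;
			/* supervisor may write read-only pages if !cr0.wp */
			w |= !wp && !uf;
			/* smep forbids supervisor fetches from user pages */
			x &= !(smep && u && !uf);

			bool fault = (ff && !x) || (uf && !u) || (wf && !w);
			map |= (uint8_t)fault << bit;
		}
		permissions[byte] = map;
	}
}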
On 09/12/2012 10:29 PM, Avi Kivity wrote:
+		pte_access = pt_access & gpte_access(vcpu, pte);
+		eperm |= (mmu->permissions[access >> 1] >> pte_access) & 1;
 		last_gpte = FNAME(is_last_gpte)(walker, vcpu, mmu, pte);
-		if (last_gpte) {
-
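To unpack the new check with concrete numbers (encodings as in the sketch above): a user-mode write arrives with access = PFERR_WRITE_MASK | PFERR_USER_MASK = 0b110, so access >> 1 = 3 selects the byte; if the accumulated pte_access grants exec+write+user, pte_access = 0b111 = 7, and bit 7 of permissions[3] is the precomputed verdict, which is 0 here since a user write to a user-writable page is allowed.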
On 09/13/2012 03:41 PM, Xiao Guangrong wrote:
On 09/12/2012 10:29 PM, Avi Kivity wrote:
+		pte_access = pt_access & gpte_access(vcpu, pte);
+		eperm |= (mmu->permissions[access >> 1] >> pte_access) & 1;
 		last_gpte = FNAME(is_last_gpte)(walker, vcpu, mmu, pte);
-
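Xiao's reply is cut off here. The commit message of the version that was eventually merged records the point: the permission check had to move to the end of the walk, because evaluating it at each level can report a false SMEP fault when PDE.U=1 and PTE.U=0 (the page is supervisor-only overall, so a supervisor fetch is legal). A small demo of that case, reusing update_permission_bitmask() and the masks from the sketch above:

/*
 * Supervisor instruction fetch through PDE.u=1 -> PTE.u=0 with smep on:
 * checking per level reports a fault, checking the ANDed access at the
 * end of the walk does not.
 */
static void smep_false_positive_demo(void)
{
	uint8_t permissions[16];

	update_permission_bitmask(permissions, /*nx=*/true, /*wp=*/true,
				  /*smep=*/true);

	unsigned access  = PFERR_FETCH_MASK;	/* supervisor fetch */
	unsigned pde_acc = ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK;
	unsigned pte_acc = ACC_EXEC_MASK | ACC_WRITE_MASK;	/* u=0 */

	/* the PDE-level lookup alone flags an SMEP fault (u=1 there) ... */
	int per_level = (permissions[access >> 1] >> pde_acc) & 1;	/* 1 */

	/* ... but the effective access over all levels is supervisor-only */
	int at_end = (permissions[access >> 1] >> (pde_acc & pte_acc)) & 1; /* 0 */

	(void)per_level;
	(void)at_end;
}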
walk_addr_generic() permission checks are a maze of branchy code, which is
performed four times per lookup. It depends on the type of access, efer.nxe,
cr0.wp, cr4.smep, and in the near future, cr4.smap.
Optimize this away by precalculating all variants and storing them in a
bitmap.  The bitmap is recalculated when rarely-changing variables change
(cr0, cr4) and is indexed by the often-changing variables (page fault error
code, pte access permissions).
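As a usage sketch of that split (reusing update_permission_bitmask() and the masks from above; toy code, not the kernel's): the table is rebuilt only when the rarely-changing state flips, after which each check is one load, one shift, one mask.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t permissions[16];

	/* recalculated only when cr0/cr4-class state changes ... */
	update_permission_bitmask(permissions, /*nx=*/true, /*wp=*/true,
				  /*smep=*/false);

	/* ... then every walk is a lookup: user write, user r/w page */
	unsigned access = PFERR_WRITE_MASK | PFERR_USER_MASK;
	unsigned pte_access = ACC_WRITE_MASK | ACC_USER_MASK;
	printf("fault=%d\n", (permissions[access >> 1] >> pte_access) & 1); /* 0 */

	/* a user write to a read-only page always faults */
	pte_access = ACC_USER_MASK;
	printf("fault=%d\n", (permissions[access >> 1] >> pte_access) & 1); /* 1 */
	return 0;
}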