MMU code tries to avoid if()s that HW is not able to predict reliably by using
bitwise operations to streamline code execution, but in the case of dirty bit
folding this gives us nothing, since write_fault is checked right before
the folding code. Let's just piggyback onto that if() to make the code clearer.

Signed-off-by: Gleb Natapov <g...@redhat.com>
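
For context, a minimal standalone sketch (mine, not kernel code; the constants
mirror the x86 definitions and the harness is purely illustrative) of the
branchless trick being removed: write_fault is either 0 or PFERR_WRITE_MASK,
so shifting it right by ilog2(PFERR_WRITE_MASK) yields 0 or 1, which selects a
shift of zero places (read fault) or PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT places
(write fault) before the AND.

#include <assert.h>
#include <stdio.h>

/* These mirror the x86 values: accessed = bit 5, dirty = bit 6,
 * write bit of the page-fault error code = bit 1. */
#define PT_ACCESSED_SHIFT 5
#define PT_DIRTY_SHIFT    6
#define PFERR_WRITE_MASK  (1u << 1)

int main(void)
{
        unsigned long long pte = (1ull << PT_DIRTY_SHIFT) |
                                 (1ull << PT_ACCESSED_SHIFT);
        unsigned long long accessed_dirty = 1ull << PT_ACCESSED_SHIFT;
        unsigned int write_fault;

        for (write_fault = 0; write_fault <= PFERR_WRITE_MASK;
             write_fault += PFERR_WRITE_MASK) {
                /* Branchless variant: the computed shift is 0 on a read
                 * fault (AND with the unshifted pte, a no-op here since
                 * accessed_dirty was already masked with pte) or 1 on a
                 * write fault (dirty bit folded into the accessed
                 * position). */
                unsigned int shift = write_fault >> 1; /* ilog2(PFERR_WRITE_MASK) */
                unsigned long long ad_branchless = accessed_dirty;
                unsigned long long ad_branchy = accessed_dirty;

                shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
                ad_branchless &= pte >> shift;

                /* Equivalent explicit branch, as in the patch. */
                if (write_fault)
                        ad_branchy &= pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);

                assert(ad_branchless == ad_branchy);
                printf("write_fault=%u -> accessed_dirty=%#llx\n",
                       write_fault, ad_branchless);
        }
        return 0;
}

Both variants compute the same value for the read- and write-fault cases; the
point of the cleanup is that the explicit if() costs nothing extra because
write_fault was just tested anyway.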
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 891eb6d..a7b24cf 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -249,16 +249,12 @@ retry_walk:
 
        if (!write_fault)
                protect_clean_gpte(&pte_access, pte);
-
-       /*
-        * On a write fault, fold the dirty bit into accessed_dirty by shifting it one
-        * place right.
-        *
-        * On a read fault, do nothing.
-        */
-       shift = write_fault >> ilog2(PFERR_WRITE_MASK);
-       shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
-       accessed_dirty &= pte >> shift;
+       else
+               /*
+                * On a write fault, fold the dirty bit into accessed_dirty by
+                * shifting it one place right.
+                */
+               accessed_dirty &= pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);
 
        if (unlikely(!accessed_dirty)) {
                ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker, write_fault);
--
                        Gleb.