Sometimes the guest updates only a single byte of a pte to change status
bits: for example, the Linux kernel clears the R/W bit with clear_bit(),
which is implemented with the 'andb' instruction. In this case
kvm_mmu_pte_write() treats the write as a misaligned access and the shadow
page is zapped.
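
For illustration, a minimal standalone sketch of the misalignment test in
detect_write_misaligned() (not the kernel code itself; write_is_misaligned()
and the main() driver below are made up for this example), showing why a
one-byte andb write is flagged by the current logic:

#include <stdbool.h>
#include <stdio.h>

/* mirrors the two checks in detect_write_misaligned() */
static bool write_is_misaligned(unsigned offset, unsigned bytes,
				unsigned pte_size)
{
	unsigned misaligned;

	/* nonzero if the write crosses a pte boundary */
	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
	/* any write narrower than 4 bytes is also treated as misaligned */
	misaligned |= bytes < 4;

	return misaligned;
}

int main(void)
{
	unsigned offset = 0x7f8;	/* pte-aligned offset in the page */

	/* a clear_bit()-style andb touches a single pte-aligned byte */
	printf("1-byte write: %s\n",
	       write_is_misaligned(offset, 1, 8) ? "misaligned" : "ok");
	/* a full 8-byte pte update stays within one pte */
	printf("8-byte write: %s\n",
	       write_is_misaligned(offset, 8, 8) ? "misaligned" : "ok");
	return 0;
}

The first case reports "misaligned" solely because of the bytes < 4 test;
the check added by this patch returns false for such writes before that
test is reached, so the shadow page is no longer zapped.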

Signed-off-by: Xiao Guangrong <xiaoguangr...@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index cfe24fe..adaa160 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3601,6 +3601,14 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 
        offset = offset_in_page(gpa);
        pte_size = sp->role.cr4_pae ? 8 : 4;
+
+       /*
+        * Sometimes the OS writes only one byte of a pte to update status
+        * bits; for example, Linux's clear_bit() uses the andb instruction.
+        */
+       if (!(offset & (pte_size - 1)) && bytes == 1)
+               return false;
+
        misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
        misaligned |= bytes < 4;
 
-- 
1.7.5.4
