When software changes the D bit (in either direction, 1 to 0 or 0 to 1), the
corresponding TLB entry in the hardware is not updated immediately. We should
flush the TLB to guarantee consistency of the D bit between the TLB and the
MMU page table in memory. This is required when a hardware feature relies on
the D-bit status to make decisions.

A sanity test was done on my machine with an Intel processor.

Signed-off-by: Kai Huang <kai.hu...@linux.intel.com>
---
 arch/x86/kvm/mmu.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 978f402..1feac0c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -547,6 +547,11 @@ static bool spte_is_bit_cleared(u64 old_spte, u64 new_spte, u64 bit_mask)
        return (old_spte & bit_mask) && !(new_spte & bit_mask);
 }
 
+static bool spte_is_bit_changed(u64 old_spte, u64 new_spte, u64 bit_mask)
+{
+       return (old_spte & bit_mask) != (new_spte & bit_mask);
+}
+
 /* Rules for using mmu_spte_set:
  * Set the sptep from nonpresent to present.
  * Note: the sptep being assigned *must* be either not present
@@ -597,6 +602,13 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
        if (!shadow_accessed_mask)
                return ret;
 
+       /*
+        * We also need to flush the TLB when the D-bit is changed by software
+        * to guarantee D-bit consistency between the TLB and the MMU page
+        * table.
+        */
+       if (spte_is_bit_changed(old_spte, new_spte, shadow_dirty_mask))
+               ret = true;
+
        if (spte_is_bit_cleared(old_spte, new_spte, shadow_accessed_mask))
                kvm_set_pfn_accessed(spte_to_pfn(old_spte));
        if (spte_is_bit_cleared(old_spte, new_spte, shadow_dirty_mask))
-- 
2.1.0
