By forcing clean huge pages to be read-only, we give the shadow of a
clean large page and the shadow of a dirty large page separate roles.
This is necessary because different ptes will be instantiated for the
two cases, even for read faults.

Signed-off-by: Avi Kivity <[EMAIL PROTECTED]>
---
 drivers/kvm/paging_tmpl.h |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/drivers/kvm/paging_tmpl.h b/drivers/kvm/paging_tmpl.h
index e07cb2e..4538b15 100644
--- a/drivers/kvm/paging_tmpl.h
+++ b/drivers/kvm/paging_tmpl.h
@@ -382,6 +382,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
                        metaphysical = 1;
                        hugepage_access = walker->pte;
                        hugepage_access &= PT_USER_MASK | PT_WRITABLE_MASK;
+                       if (!is_dirty_pte(walker->pte))
+                               hugepage_access &= ~PT_WRITABLE_MASK;
                        hugepage_access >>= PT_WRITABLE_SHIFT;
                        if (walker->pte & PT64_NX_MASK)
                                hugepage_access |= (1 << 2);
-- 
1.5.3.7
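
For illustration only (not part of the patch), below is a minimal
userspace sketch of the access computation in FNAME(fetch) with the new
dirty-bit check. The constants and the local is_dirty_pte() stand-in
mirror the x86 page-table bit layout so the sketch compiles on its own;
they are written out here purely for demonstration.

/*
 * Sketch: how the added dirty-bit check changes the access bits that
 * are folded into the shadow page role for a guest large page.
 */
#include <stdint.h>
#include <stdio.h>

#define PT_WRITABLE_SHIFT 1
#define PT_WRITABLE_MASK  (1ULL << PT_WRITABLE_SHIFT)  /* bit 1: R/W   */
#define PT_USER_MASK      (1ULL << 2)                   /* bit 2: U/S   */
#define PT_DIRTY_MASK     (1ULL << 6)                   /* bit 6: dirty */
#define PT64_NX_MASK      (1ULL << 63)                  /* bit 63: NX   */

static int is_dirty_pte(uint64_t pte)
{
	return pte & PT_DIRTY_MASK;
}

/* Mirrors the hugepage_access computation in the patched hunk. */
static unsigned hugepage_access_bits(uint64_t gpde)
{
	uint64_t access = gpde & (PT_USER_MASK | PT_WRITABLE_MASK);

	if (!is_dirty_pte(gpde))        /* the check added by this patch */
		access &= ~PT_WRITABLE_MASK;
	access >>= PT_WRITABLE_SHIFT;
	if (gpde & PT64_NX_MASK)
		access |= 1 << 2;
	return access;
}

int main(void)
{
	uint64_t clean = PT_USER_MASK | PT_WRITABLE_MASK;
	uint64_t dirty = clean | PT_DIRTY_MASK;

	/* Different access bits -> different shadow page roles. */
	printf("clean large page: %u\n", hugepage_access_bits(clean));
	printf("dirty large page: %u\n", hugepage_access_bits(dirty));
	return 0;
}

Since these access bits feed into the shadow page role, a clean and a
dirty guest large page now select distinct shadow pages, which is what
the read-only treatment of clean huge pages relies on.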
