We dirtied only one page because cached writes originally couldn't span
more than one; now that they can, mark every page the write touches.
Use the gpa_to_gfn() helper instead of an open-coded '>> PAGE_SHIFT'
while at it.

Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
Signed-off-by: Radim Krčmář <rkrc...@redhat.com>
---
 The function handles cross memslot writes in a different path.

 I think we should dirty pages after partial writes too (0 < r < len),
 but that probably can't happen in practice and I have already started
 refactoring this code :)

 virt/kvm/kvm_main.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index aadef264bed1..863df9dcab6f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1665,6 +1665,7 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 {
        struct kvm_memslots *slots = kvm_memslots(kvm);
        int r;
+       gfn_t gfn;
 
        BUG_ON(len > ghc->len);
 
@@ -1680,7 +1681,10 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
        r = __copy_to_user((void __user *)ghc->hva, data, len);
        if (r)
                return -EFAULT;
-       mark_page_dirty_in_slot(kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
+
+       for (gfn = gpa_to_gfn(ghc->gpa);
+            gfn <= gpa_to_gfn(ghc->gpa + len - 1); gfn++)
+               mark_page_dirty_in_slot(kvm, ghc->memslot, gfn);
 
        return 0;
 }
-- 
2.3.4
