On 10/05/2010 01:59 AM, Marcelo Tosatti wrote:
Yep, the drawback is the unnecessary write fault. What I have here is:

--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -827,7 +827,7 @@ unsigned long gfn_to_hva(struct kvm *kvm
  }
  EXPORT_SYMBOL_GPL(gfn_to_hva);

-pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
+pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn, int *writable)
  {
         struct page *page[1];
         unsigned long addr;
@@ -842,8 +842,16 @@ pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t
                 return page_to_pfn(bad_page);
         }

+       *writable = 1;
         npages = get_user_pages_fast(addr, 1, 1, page);

+       /* attempt to map read-only */
+       if (unlikely(npages != 1)) {
+               npages = get_user_pages_fast(addr, 1, 0, page);
+               if (npages == 1)
+                       *writable = 0;
+       }
+
         if (unlikely(npages != 1)) {
                 struct vm_area_struct *vma;

Can rebase and resend, if you'd like.
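
For illustration only, a caller would consume the new flag roughly like
this (kvm_install_spte() is a made-up stand-in for the real spte setup
path; only the three-argument gfn_to_pfn() comes from the hunk above,
and the snippet assumes the usual <linux/kvm_host.h> definitions):

/*
 * Illustrative caller -- not part of the patch.
 */
static int map_gfn_example(struct kvm *kvm, gfn_t gfn)
{
        int writable;
        pfn_t pfn = gfn_to_pfn(kvm, gfn, &writable);

        if (is_error_pfn(pfn))
                return -EFAULT;

        /*
         * Only grant guest write access when the host mapping was
         * actually taken with write permission; otherwise install a
         * read-only spte so guest writes fault and can be upgraded
         * later.
         */
        kvm_install_spte(kvm, gfn, pfn, writable);
        return 0;
}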


That will work for me but not for ksm. I guess it's good to get things going, so please do post it.

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
