Izik Eidus wrote:
> We are working on swapping support for the guests in KVM.
> We want to allow management of the guests' memory swapping from KVM.
This is excellent, thank you!

> This is a request for comments, so please write to me with any ideas you have.
I ran into a few while reading the code:

+static void kvm_free_userspace_physmem(struct kvm_memory_slot *free)
+{
+       int i;
+       
+       for (i = 0; i < free->npages; ++i) {
+               if (free->phys_mem[i]) {
+                       if (!PageReserved(free->phys_mem[i]))
+                               SetPageDirty(free->phys_mem[i]);
+                       page_cache_release(free->phys_mem[i]);
+               }
+       }
+}
I don't see why we would want to dirty a page we release in general.
We only need to dirty it if the corresponding page table entry
indicates so (dirty bit). Did I miss something?
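
Roughly what I had in mind (just a sketch; "spte_dirty" is a made-up flag
standing in for whatever tracking tells us the shadow pte's dirty bit was
set, it is not anything from the patch):

#include <linux/mm.h>
#include <linux/pagemap.h>

static void kvm_release_guest_page(struct page *page, int spte_dirty)
{
        /* only propagate dirtiness if the guest actually wrote the page */
        if (!PageReserved(page) && spte_dirty)
                SetPageDirty(page);
        page_cache_release(page);
}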



@@ -670,7 +692,8 @@ EXPORT_SYMBOL_GPL(fx_init);
   * Discontiguous memory is allowed, mostly for framebuffers.
   */
  static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
-                                         struct kvm_memory_region *mem)
+                                         struct kvm_memory_region *mem,
+                                               unsigned long guest_host_addr)
  {
        int r;
        gfn_t base_gfn;
@@ -748,12 +771,26 @@ raced:
                        goto out_free;

                memset(new.phys_mem, 0, npages * sizeof(struct page *));
-               for (i = 0; i < npages; ++i) {
-                       new.phys_mem[i] = alloc_page(GFP_HIGHUSER
-                                                    | __GFP_ZERO);
-                       if (!new.phys_mem[i])
+               
+               if (guest_host_addr) {
+                       unsigned long pages_num;
+                       
+                       new.user_alloc = 1;
+                       down_read(&current->mm->mmap_sem);
+                       pages_num = get_user_pages(current, current->mm, guest_host_addr,
+                                                               npages, 1, 0, new.phys_mem, NULL);
+                       up_read(&current->mm->mmap_sem);
+                       if (pages_num != npages)
                                goto out_free;
-                       set_page_private(new.phys_mem[i],0);
+               } else {
+                       new.user_alloc = 0;
+                       for (i = 0; i < npages; ++i) {
+                               new.phys_mem[i] = alloc_page(GFP_HIGHUSER
+                                                            | __GFP_ZERO);
+                               if (!new.phys_mem[i])
+                                       goto out_free;
+                               set_page_private(new.phys_mem[i],0);
+                       }
                }
        }
Do we intend to maintain both paths in the long run, or wait until we
no longer care about ancient userland that doesn't do the
allocation on its own?
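
For reference, the way I read the new path, userland would do something
like this (a rough sketch only; the exact ioctl plumbing that carries
guest_host_addr is assumed here, not taken from the patch):

#include <stddef.h>
#include <sys/mman.h>

static void *alloc_guest_ram(size_t size)
{
        /* userland allocates the guest RAM itself ... */
        void *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ram == MAP_FAILED)
                return NULL;
        /*
         * ... and this address would go to the kernel as guest_host_addr,
         * where get_user_pages() pins it, instead of the kernel
         * alloc_page()ing everything as in the old path.
         */
        return ram;
}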

