Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-15 Thread Carsten Otte
Anthony Liguori wrote:
> So does MADV_REMOVE remove the backing page but still allow for memory 
> to be faulted in?  That is, after calling MADV_REMOVE, there's no 
> guarantee that the contents of a given VA range will remain the same (but 
> it won't SEGV the app if it accesses that memory)?
> 
> If so, I think that would be the right way to treat it.  That allows for 
> two types of hints for the guest to provide: 1) I won't access this 
> memory for a very long time (so it's a good candidate to swap out) and 
> 2) I won't access this memory and don't care about its contents.
You really want MADV_DONTNEED. It does what one would expect: it tells 
the kernel you'd prefer to see the pages discarded, but they remain 
mapped so that you can fault them back in. My xip code once got into 
conflict with this kernel feature; that's why I had to look into what it 
does.

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel


Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-13 Thread Avi Kivity
Anthony Liguori wrote:
> Avi Kivity wrote:
>   
>> Anthony Liguori wrote:
>>   
>> 
>>> Now that we have userspace memory allocation, I wanted to play with 
>>> ballooning.
>>> The idea is that when a guest "balloons" down, we simply unpin the 
>>> underlying
>>> physical memory and the host kernel may or may not swap it.  To reclaim
>>> ballooned memory, the guest can just start using it and we'll pin it on 
>>> demand.
>>>
>>> The following patch is a stab at providing the right infrastructure for 
>>> pinning
>>> and automatic repinning.  I don't have a lot of comfort in the MMU code so I
>>> thought I'd get some feedback before going much further.
>>>
>>> gpa_to_hpa is a little awkward to hook, but it seems like the right place 
>>> in the
>>> code.  I'm most uncertain about the SMP safety of the unpinning.  
>>> Presumably,
>>> I have to hold the kvm lock around the mmu_unshadow and page_cache release 
>>> to
>>> ensure that another VCPU doesn't fault the page back in after mmu_unshadow?
>>>
>>>   
>>> 
>>>   
>> Once we have true swapping capabilities (which imply ability for the
>> kernel to remove a page from the shadow page tables) you can unpin by
>> calling munmap() or madvise(MADV_REMOVE) on the pages to be unpinned.
>>   
>> 
>
> So does MADV_REMOVE remove the backing page but still allow for memory 
> to be faulted in?  That is, after calling MADV_REMOVE, there's no 
> guarantee that the contents of a given VA range will remain the same (but 
> it won't SEGV the app if it accesses that memory)?
>
>   

I think so.  The docs aren't clear.  See also MADV_DONTNEED.




-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-12 Thread Anthony Liguori
Avi Kivity wrote:
> Anthony Liguori wrote:
>   
>> Now that we have userspace memory allocation, I wanted to play with 
>> ballooning.
>> The idea is that when a guest "balloons" down, we simply unpin the underlying
>> physical memory and the host kernel may or may not swap it.  To reclaim
>> ballooned memory, the guest can just start using it and we'll pin it on 
>> demand.
>>
>> The following patch is a stab at providing the right infrastructure for 
>> pinning
>> and automatic repinning.  I don't have a lot of comfort in the MMU code so I
>> thought I'd get some feedback before going much further.
>>
>> gpa_to_hpa is a little awkward to hook, but it seems like the right place in 
>> the
>> code.  I'm most uncertain about the SMP safety of the unpinning.  Presumably,
>> I have to hold the kvm lock around the mmu_unshadow and page_cache release to
>> ensure that another VCPU doesn't fault the page back in after mmu_unshadow?
>>
>>   
>> 
>
> Once we have true swapping capabilities (which imply ability for the
> kernel to remove a page from the shadow page tables) you can unpin by
> calling munmap() or madvise(MADV_REMOVE) on the pages to be unpinned.
>   

So does MADV_REMOVE remove the backing page but still allow for memory 
to be faulted in?  That is, after calling MADV_REMOVE, there's no 
guarantee that the contents of a given VA range will remain the same (but 
it won't SEGV the app if it accesses that memory)?

If so, I think that would be the right way to treat it.  That allows for 
two types of hints for the guest to provide: 1) I won't access this 
memory for a very long time (so it's a good candidate to swap out) and 
2) I won't access this memory and don't care about its contents.

Regards,

Anthony Liguori

> Other than that the approach seems right.
>
>   




Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-11 Thread Avi Kivity
Anthony Liguori wrote:
> Now that we have userspace memory allocation, I wanted to play with 
> ballooning.
> The idea is that when a guest "balloons" down, we simply unpin the underlying
> physical memory and the host kernel may or may not swap it.  To reclaim
> ballooned memory, the guest can just start using it and we'll pin it on 
> demand.
>
> The following patch is a stab at providing the right infrastructure for 
> pinning
> and automatic repinning.  I don't have a lot of comfort in the MMU code so I
> thought I'd get some feedback before going much further.
>
> gpa_to_hpa is a little awkward to hook, but it seems like the right place in 
> the
> code.  I'm most uncertain about the SMP safety of the unpinning.  Presumably,
> I have to hold the kvm lock around the mmu_unshadow and page_cache release to
> ensure that another VCPU doesn't fault the page back in after mmu_unshadow?
>
>   

Once we have true swapping capabilities (which imply ability for the
kernel to remove a page from the shadow page tables) you can unpin by
calling munmap() or madvise(MADV_REMOVE) on the pages to be unpinned.

Other than that the approach seems right.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-11 Thread Dor Laor


The idea being that kvm_read_guest_page() will effectively pin the page 
and put_page() has the effect of unpinning it?  It seems to me that we 
should be page_cache_release()'ing since we're not just 
get_page()'ing the memory.  I may be wrong though.


Both of these are an optimization though.  It's not strictly needed for 
what I'm after since in the case of ballooning, there's no reason why 
someone would be calling kvm_read_guest_page() on the ballooned memory.


  
second is hacking the rmap to do reverse mapping for every present 
pte and put_page()'ing the pages at rmap_remove(),

and that's about all that's needed to make this work.



If I understand you correctly, this is to unpin the page whenever it is 
removed from the rmap?  That would certainly be useful but it's still an 
optimization.  The other obvious optimization to me would be to not use 
get_user_pages() on all memory to start with and instead, allow pages to 
be faulted in on use.  This is particularly useful for creating a VM 
with a very large amount of memory, and immediately ballooning down.  
That way the large amount of memory doesn't need to be present to 
actually spawn the guest.


Regards,

Anthony Liguori

  
Izik's idea is working towards a general guest swapping capability. The 
first step is just to increase the reference count of the rmapped pages. 
The second is to change the size of the shadow page tables as a function 
of the guest memory usage, and the third is to get notifications from 
Linux about pte state changes.

btw: I have unmerged balloon code (guest & host) written against the old 
kernel mapping.

The guest part may still be valid for the userspace allocation.
Attaching it.
Dor.





  


/*
 * KVM guest balloon driver
 *
 * Copyright (C) 2007, Qumranet, Inc., Dor Laor <[EMAIL PROTECTED]>
 *
 * This work is licensed under the terms of the GNU GPL, version 2.  See
 * the COPYING file in the top-level directory.
 */

#include "../kvm.h"
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

MODULE_AUTHOR ("Dor Laor");
MODULE_DESCRIPTION ("Implements guest ballooning support");
MODULE_LICENSE("GPL");
MODULE_VERSION("1");

#define KVM_BALLOON_MINOR MISC_DYNAMIC_MINOR

static LIST_HEAD(balloon_plist);
static int balloon_size = 0;
static DEFINE_SPINLOCK(balloon_plist_lock);
static gfn_t balloon_shared_gfn;

struct balloon_page {
	struct page *bpage;
	struct list_head bp_list;
};

static int kvm_trigger_balloon_op(int npages)
{
	unsigned long ret;

	ret = kvm_hypercall2(__NR_hypercall_balloon, balloon_shared_gfn,
			     npages);
	WARN_ON(ret);
	printk(KERN_DEBUG "%s:hypercall ret: %lx\n", __FUNCTION__, ret);

	return ret;
}

static int kvm_balloon_inflate(unsigned long *shared_page_addr, int npages)
{
	LIST_HEAD(tmp_list);
	struct balloon_page *node, *tmp;
	u32 *pfn = (u32 *)shared_page_addr;
	int allocated = 0;
	int i, r = -ENOMEM;

	for (i = 0; i < npages; i++) {
		node = kzalloc(sizeof(struct balloon_page), GFP_KERNEL);
		if (!node)
			goto out_free;

		node->bpage = alloc_page(GFP_HIGHUSER | __GFP_ZERO);
		if (!node->bpage) {
			kfree(node);
			goto out_free;
		}

		list_add(&node->bp_list, &tmp_list);

		allocated++;
		*pfn++ = page_to_pfn(node->bpage);
	}

	spin_lock(&balloon_plist_lock);

	r = kvm_trigger_balloon_op(npages);
	if (r < 0) {
		printk(KERN_DEBUG "%s: got kvm_trigger_balloon_op res=%d\n",
		       __FUNCTION__, r);
		spin_unlock(&balloon_plist_lock);
		goto out_free;
	}

	list_splice(&tmp_list, &balloon_plist);
	balloon_size += allocated;
	printk(KERN_DEBUG "%s: current balloon size=%d\n", __FUNCTION__,
	       balloon_size);

	spin_unlock(&balloon_plist_lock);

	return allocated;

out_free:
	list_for_each_entry_safe(node, tmp, &tmp_list, bp_list) {
		__free_page(node->bpage);
		list_del(&node->bp_list);
		kfree(node);
	}

	return r;
}

static int kvm_balloon_deflate(unsigned long *shared_page_addr, int npages)
{
LIST_HEAD(tmp_list);
struct balloon_page *node, *tmp;
u32 *pfn = (u32*)shared_page_addr;
int dealloca

Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-11 Thread Izik Eidus

Anthony Liguori wrote:

Izik Eidus wrote:
 static void page_header_update_slot(struct kvm *kvm, void *pte, 
gpa_t gpa)

 {
 int slot = memslot_id(kvm, gfn_to_memslot(kvm, gpa >> 
PAGE_SHIFT));



kvm_memory_slot

heh, I am working on a similar patch, and our gfn_to_page and the 
change to kvm_memory_slot even match by variable names :)


Ah, fantastic :-)  Care to share what you currently have?

here it is :)



a few things you have to do to make this work:
make gfn_to_page an always-safe function (return bad_page in case of 
failure; I have a patch for this if you want)


That seems pretty obvious.  No reason not to have that committed now.

it is included in the patch that I sent you


hacking the kvm_read_guest_page / kvm_write_guest_page 
kvm_clear_guest_page to do put_page after the usage of the page


The idea being that kvm_read_guest_page() will effectively pin the 
page and put_page() has the effect of unpinning it?  It seems to me 
that we should be page_cache_release()'ing since we're not just 
get_page()'ing the memory.  I may be wrong though.


Both of these are an optimization though.  It's not strictly needed 
for what I'm after since in the case of ballooning, there's no reason 
why someone would be calling kvm_read_guest_page() on the ballooned 
memory.
ohhh, gfn_to_page does get_page on the pages (this is done by 
get_user_pages automatically); this is the only way the system can make 
sure the page won't be swapped out while you are using it, 
and if we insert a swapped-out page into the guest, we will have memory 
corruption... 
therefore each page that we get by gfn_to_page must be put_page()'d 
after using it. 
to make it easy, gfn_to_page should do get_page even on normal 
kernel-allocated pages 
(btw, you have nothing to worry about: if the page is swapped out, 
get_user_pages walks the pte and brings it back in for us)






second is hacking the rmap to do reverse mapping for every present 
pte and put_page()'ing the pages at rmap_remove(),

and that's about all that's needed to make this work.


If I understand you correctly, this is to unpin the page whenever it 
is removed from the rmap?  That would certainly be useful but it's 
still an optimization.  The other obvious optimization to me would be 
to not use get_user_pages() on all memory to start with and instead, 
allow pages to be faulted in on use.  This is particularly useful for 
creating a VM with a very large amount of memory, and immediately 
ballooning down.  That way the large amount of memory doesn't need to 
be present to actually spawn the guest.


we must call get_user_pages, because each page whose reference 
(page->_count) we don't hold can point to a different virtual address 
at any moment. 
in fact, this way we can remove the memset(...) over all the memory in 
kvmctl (I did that just because of the laziness/copy-on-write mechanism, 
however you want to call it, that Linux has), because now each call to 
gfn_to_page returns the right virtual address of the physical guest 
page.


the patch is here. 
all that is needed to make it work with swapping is running rmap on 
EVERY present page (right now it runs on just writable pages, which 
means that other pages are not protected from being swapped out). 
you can try silly swapping by removing the put_page from rmap_remove 
and from the set_pte_common() function


btw, I did some ugly things just now to get you the patch, so I am not 
sure it will apply, or some parts might be missing; I am sorry for 
this, but I have no time to check it now

I will do a cleanup of the patches when the rmap is ready...

Regards,

Anthony Liguori








diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 4ab487c..e7df8fc 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -409,6 +409,7 @@ struct kvm_memory_slot {
 	unsigned long *rmap;
 	unsigned long *dirty_bitmap;
 	int user_alloc; /* user allocated memory */
+	unsigned long userspace_addr;
 };
 
@@ -561,8 +562,9 @@ static inline int is_error_hpa(hpa_t hpa) { return hpa >> HPA_MSB; }
 hpa_t gva_to_hpa(struct kvm_vcpu *vcpu, gva_t gva);
 struct page *gva_to_page(struct kvm_vcpu *vcpu, gva_t gva);
 
-extern hpa_t bad_page_address;
+extern struct page *bad_page;
 
+int is_error_page(struct page *page);
 gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn);
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 0b2894a..dde8497 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -325,13 +325,13 @@ static void kvm_free_userspace_physmem(struct kvm_memory_slot *free)
 {
 	int i;
 
-	for (i = 0; i < free->npages; ++i) {
+	/*for (i = 0; i < free->npages; ++i) {
 		if (free->phys_mem[i]) {
 			if (!PageReserved(free->phys_mem[i]))
 				SetPageDirty(free->phys_mem[i]);
 			page_cache_release(free->phys_mem[i]);
 		}
-	}
+	}*/
 }
 
@@ -773,19 +773,8 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 		memset(new.phys_mem, 0,

Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-11 Thread Anthony Liguori
Izik Eidus wrote:
>>  static void page_header_update_slot(struct kvm *kvm, void *pte, 
>> gpa_t gpa)
>>  {
>>  int slot = memslot_id(kvm, gfn_to_memslot(kvm, gpa >> PAGE_SHIFT));
>>
>> - 
>>
>>   
> kvm_memory_slot
>
> heh, I am working on a similar patch, and our gfn_to_page and the change 
> to kvm_memory_slot even match by variable names :)

Ah, fantastic :-)  Care to share what you currently have?

> a few things you have to do to make this work:
> make gfn_to_page an always-safe function (return bad_page in case of 
> failure; I have a patch for this if you want)

That seems pretty obvious.  No reason not to have that committed now.

> hacking the kvm_read_guest_page / kvm_write_guest_page 
> kvm_clear_guest_page to do put_page after the usage of the page

The idea being that kvm_read_guest_page() will effectively pin the page 
and put_page() has the effect of unpinning it?  It seems to me that we 
should be page_cache_release()'ing since we're not just 
get_page()'ing the memory.  I may be wrong though.

Both of these are an optimization though.  It's not strictly needed for 
what I'm after since in the case of ballooning, there's no reason why 
someone would be calling kvm_read_guest_page() on the ballooned memory.

>
> second is hacking the rmap to do reverse mapping for every present 
> pte and put_page()'ing the pages at rmap_remove(),
> and that's about all that's needed to make this work.

If I understand you correctly, this is to unpin the page whenever it is 
removed from the rmap?  That would certainly be useful but it's still an 
optimization.  The other obvious optimization to me would be to not use 
get_user_pages() on all memory to start with and instead, allow pages to 
be faulted in on use.  This is particularly useful for creating a VM 
with a very large amount of memory, and immediately ballooning down.  
That way the large amount of memory doesn't need to be present to 
actually spawn the guest.

Regards,

Anthony Liguori

>
>




Re: [kvm-devel] [RFC] Expose infrastructure for unpinning guest memory

2007-10-11 Thread Izik Eidus
Anthony Liguori wrote:
> Now that we have userspace memory allocation, I wanted to play with 
> ballooning.
> The idea is that when a guest "balloons" down, we simply unpin the underlying
> physical memory and the host kernel may or may not swap it.  To reclaim
> ballooned memory, the guest can just start using it and we'll pin it on 
> demand.
>
> The following patch is a stab at providing the right infrastructure for 
> pinning
> and automatic repinning.  I don't have a lot of comfort in the MMU code so I
> thought I'd get some feedback before going much further.
>
> gpa_to_hpa is a little awkward to hook, but it seems like the right place in 
> the
> code.  I'm most uncertain about the SMP safety of the unpinning.  Presumably,
> I have to hold the kvm lock around the mmu_unshadow and page_cache release to
> ensure that another VCPU doesn't fault the page back in after mmu_unshadow?
>
> Feedback would be greatly appreciated!
>
> diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
> index 4a52d6e..8abe770 100644
> --- a/drivers/kvm/kvm.h
> +++ b/drivers/kvm/kvm.h
> @@ -409,6 +409,7 @@ struct kvm_memory_slot {
>   unsigned long *rmap;
>   unsigned long *dirty_bitmap;
>   int user_alloc; /* user allocated memory */
> + unsigned long userspace_addr;
>  };
>  
>  struct kvm {
> @@ -652,6 +653,7 @@ int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, 
> gva_t gva);
>  void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu);
>  int kvm_mmu_load(struct kvm_vcpu *vcpu);
>  void kvm_mmu_unload(struct kvm_vcpu *vcpu);
> +int kvm_mmu_unpin(struct kvm *kvm, gfn_t gfn);
>  
>  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
>  
> diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
> index a0f8366..74105d1 100644
> --- a/drivers/kvm/kvm_main.c
> +++ b/drivers/kvm/kvm_main.c
> @@ -774,6 +774,7 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
>   unsigned long pages_num;
>  
>   new.user_alloc = 1;
> + new.userspace_addr = mem->userspace_addr;
> down_read(&current->mm->mmap_sem);
>  
>   pages_num = get_user_pages(current, current->mm,
> @@ -1049,12 +1050,36 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm 
> *kvm, gfn_t gfn)
>  struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
>  {
>   struct kvm_memory_slot *slot;
> + struct page *page;
> + uint64_t slot_index;
>  
>   gfn = unalias_gfn(kvm, gfn);
>   slot = __gfn_to_memslot(kvm, gfn);
>   if (!slot)
>   return NULL;
> - return slot->phys_mem[gfn - slot->base_gfn];
> +
> + slot_index = gfn - slot->base_gfn;
> + page = slot->phys_mem[slot_index];
> + if (unlikely(page == NULL)) {
> + unsigned long pages_num;
> +
> + down_read(&current->mm->mmap_sem);
> +
> + pages_num = get_user_pages(current, current->mm,
> +    slot->userspace_addr +
> +    (slot_index << PAGE_SHIFT),
> +    1, 1, 0, &slot->phys_mem[slot_index],
> +    NULL);
> +
> + up_read(&current->mm->mmap_sem);
> +
> + if (pages_num != 1)
> + page = NULL;
> + else
> + page = slot->phys_mem[slot_index];
> + }
> +
> + return page;
>  }
>  EXPORT_SYMBOL_GPL(gfn_to_page);
>  
> diff --git a/drivers/kvm/mmu.c b/drivers/kvm/mmu.c
> index f52604a..1820816 100644
> --- a/drivers/kvm/mmu.c
> +++ b/drivers/kvm/mmu.c
> @@ -25,6 +25,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  #include 
> @@ -820,6 +821,33 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
>   }
>  }
>  
> +int kvm_mmu_unpin(struct kvm *kvm, gfn_t gfn)
> +{
> + struct kvm_memory_slot *slot;
> + struct page *page;
> +
> + /* FIXME for each active vcpu */
> +
> + gfn = unalias_gfn(kvm, gfn);
> + slot = gfn_to_memslot(kvm, gfn);
> + if (!slot)
> + return -EINVAL;
> +
> + /* FIXME: do we need to hold a lock here? */
> +
> + /* Remove page from shadow MMU and unpin page */
> + mmu_unshadow(kvm, gfn);
> + page = slot->phys_mem[gfn - slot->base_gfn];
> + if (page) {
> + if (!PageReserved(page))
> + SetPageDirty(page);
> + page_cache_release(page);
> + slot->phys_mem[gfn - slot->base_gfn] = NULL;
> + }
> +
> + return 0;
> +}
> +
>  static void page_header_update_slot(struct kvm *kvm, void *pte, gpa_t gpa)
>  {
>   int slot = memslot_id(kvm, gfn_to_memslot(kvm, gpa >> PAGE_SHIFT));
>
> -
>   
kvm_memory_slot

heh, I am working on a similar patch, and our gfn_to_page and the 
change to kvm_memory_slot even match by variable names :)
a few things you have to do to make this work:
make gfn_to_page safe always f