Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-07 Thread Alexander Graf

On 03/07/2012 05:46 AM, Takuya Yoshikawa wrote:

Alexander Graf ag...@suse.de wrote:


This patch is the first step to make this difference clear by
introducing kvm_memory_slot::arch;  lpage_info is moved into it.

I am planning to move rmap stuff into arch next if this patch is accepted.

Please let me know if you have some opinion about which members should be
moved into this.

What is this lpage stuff? When do we need it? Right now the code gets executed 
on ppc, right? And with the patch it doesn't, no?

lpage_info is used for supporting huge pages on kvm-x86: we have
write_count and rmap_pde for each largepage and these are in lpage_info.

At the time I made this patch, it seemed that only kvm-x86 supported
huge pages, on ppc the array should be empty:


Hrm. I suppose this refers to transparent huge pages? Andrea, Paul, is 
there anything keeping us from also needing/using that logic?



/* We don't currently support large pages. */
#define KVM_HPAGE_GFN_SHIFT(x)  0
#define KVM_NR_PAGE_SIZES   1

How each architecture supports huge pages will differ a lot.
So this kind of memory-consuming stuff should be arch-specific.


Yeah, probably.


IMO rmap also should be moved into the arch.
s390 does not need it, and architectures other than x86 will be happy if
the type of it can be changed from unsigned long to a pointer, no?


How would an unsigned long make a difference over a pointer?


Alex

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-07 Thread Avi Kivity
On 03/07/2012 03:27 PM, Alexander Graf wrote:
 At the time I made this patch, it seemed that only kvm-x86 supported
 huge pages, on ppc the array should be empty:


 Hrm. I suppose this refers to transparent huge pages? 

Just huge pages.  Whether they are static or dynamic is immaterial in
this context.

-- 
error compiling committee.c: too many arguments to function



Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-07 Thread Takuya Yoshikawa
Alexander Graf ag...@suse.de wrote:

  IMO rmap also should be moved into the arch.
  s390 does not need it, and architectures other than x86 will be happy if
  the type of it can be changed from unsigned long to a pointer, no?
 
 How would an unsigned long make a difference over a pointer?

Not so much.  Just a matter of casting.

x86 is using the least significant bit for special encoding, so unsigned long
is natural.

My initial motivation was to make rmap multi-dimensional so that it can also
hold rmap_pde.

Takuya


Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-07 Thread Alexander Graf

On 03/07/2012 03:04 PM, Avi Kivity wrote:

On 03/07/2012 03:27 PM, Alexander Graf wrote:

At the time I made this patch, it seemed that only kvm-x86 supported
huge pages, on ppc the array should be empty:


Hrm. I suppose this refers to transparent huge pages?

Just huge pages.  Whether they are static or dynamic is immaterial in
this context.


Well, book3s_hv and e500 support hugetlbfs. I've never had to touch that 
patch's code though - so I guess I'm still not really understanding what 
it's there for O_o.



Alex



Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-07 Thread Avi Kivity
On 03/07/2012 04:57 PM, Alexander Graf wrote:
 On 03/07/2012 03:04 PM, Avi Kivity wrote:
 On 03/07/2012 03:27 PM, Alexander Graf wrote:
 At the time I made this patch, it seemed that only kvm-x86 supported
 huge pages, on ppc the array should be empty:

 Hrm. I suppose this refers to transparent huge pages?
 Just huge pages.  Whether they are static or dynamic is immaterial in
 this context.

 Well, book3s_hv and e500 support hugetlbfs. I've never had to touch
 that patch's code though - so I guess I'm still not really
 understanding what it's there for O_o.


The kvm hugepage code uses large sptes to map large pages, when
available (either via hugetlbfs or transparent hugepages).  Since x86
supports swapping, and needs to write-protect pages for dirty logging
and for shadowing guest pagetables, it needs a reverse map from pages to
sptes.  The data structure we're discussing is part of the reverse map
for large pages.

-- 
error compiling committee.c: too many arguments to function



Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-07 Thread Alexander Graf

On 03/07/2012 04:03 PM, Avi Kivity wrote:

On 03/07/2012 04:57 PM, Alexander Graf wrote:

On 03/07/2012 03:04 PM, Avi Kivity wrote:

On 03/07/2012 03:27 PM, Alexander Graf wrote:

At the time I made this patch, it seemed that only kvm-x86 supported
huge pages, on ppc the array should be empty:

Hrm. I suppose this refers to transparent huge pages?

Just huge pages.  Whether they are static or dynamic is immaterial in
this context.

Well, book3s_hv and e500 support hugetlbfs. I've never had to touch
that patch's code though - so I guess I'm still not really
understanding what it's there for O_o.


The kvm hugepage code uses large sptes to map large pages, when
available (either via hugetlbfs or transparent hugepages).  Since x86
supports swapping, and needs to write-protect pages for dirty logging
and for shadowing guest pagetables, it needs a reverse map from pages to
sptes.  The data structure we're discussing is part of the reverse map
for large pages.


Ah, now that makes more sense. On booke, we don't do rmap yet. On 
book3s_hv, IIRC Paul did implement something, so I'd like to hear his 
opinion on it really.


Alex



Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-07 Thread Paul Mackerras
On Wed, Mar 07, 2012 at 12:01:38AM +0100, Alexander Graf wrote:
 
 On 31.01.2012, at 02:17, Takuya Yoshikawa wrote:
 
  Added s390 and ppc developers to Cc,
  
  (2012/01/30 14:35), Takuya Yoshikawa wrote:
  Some members of kvm_memory_slot are not used by every architecture.
  
  This patch is the first step to make this difference clear by
  introducing kvm_memory_slot::arch;  lpage_info is moved into it.
  
  I am planning to move rmap stuff into arch next if this patch is accepted.
  
  Please let me know if you have some opinion about which members should be
  moved into this.
 
 What is this lpage stuff? When do we need it? Right now the code
 gets executed on ppc, right? And with the patch it doesn't, no?

We do support large pages backing the guest on powerpc, at least for
the Book3S_HV style of KVM, but we don't use the lpage_info array.
The reason is that we only allow the guest to create large-page PTEs
in regions which are backed by large pages on the host side (and which
are therefore large-page aligned on both the host and guest side).  We
can enforce that because guests use a hypercall to create PTEs in the
hashed page table, and we have a way (via the device tree) to tell the
guest what page sizes it can use.

In contrast, on x86 we have no control over what PTEs the guest
creates in its page tables, so it can create large-page PTEs inside a
region which is backed by small pages, and which might not be
large-page aligned.  This is why we have the separate arrays pointed
to by lpage_info and why there is the logic in kvm_main.c for handling
misalignment at the ends.

So, at the moment on Book3S_HV, I have one entry in the rmap array for
each small page in a memslot.  Each entry is an unsigned long and
contains some control bits (dirty and referenced bits, among others)
and the index in the hashed page table (HPT) of one guest PTE that
references that page.  There is another array that then forms a
doubly-linked circular list of all the guest PTEs that reference the
page.  At present, guest PTEs are linked into the rmap lists based on
the starting address of the page irrespective of the page size, so a
large-page guest PTE gets linked into the same list as a small-page
guest PTE mapping the first small page of the large page.  That isn't
ideal from the point of view of dirty and reference tracking, so I
will probably move to having separate lists for the different page
sizes, meaning I will need something like the lpage_info array, but
I won't need the logic that is currently in kvm_main.c for handling
it.

Paul.



Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-06 Thread Alexander Graf

On 31.01.2012, at 02:17, Takuya Yoshikawa wrote:

 Added s390 and ppc developers to Cc,
 
 (2012/01/30 14:35), Takuya Yoshikawa wrote:
 Some members of kvm_memory_slot are not used by every architecture.
 
 This patch is the first step to make this difference clear by
 introducing kvm_memory_slot::arch;  lpage_info is moved into it.
 
 I am planning to move rmap stuff into arch next if this patch is accepted.
 
 Please let me know if you have some opinion about which members should be
 moved into this.

What is this lpage stuff? When do we need it? Right now the code gets executed 
on ppc, right? And with the patch it doesn't, no?


Alex



Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-03-06 Thread Takuya Yoshikawa
Alexander Graf ag...@suse.de wrote:

  This patch is the first step to make this difference clear by
  introducing kvm_memory_slot::arch;  lpage_info is moved into it.
  
  I am planning to move rmap stuff into arch next if this patch is accepted.
  
  Please let me know if you have some opinion about which members should be
  moved into this.
 
 What is this lpage stuff? When do we need it? Right now the code gets 
 executed on ppc, right? And with the patch it doesn't, no?

lpage_info is used for supporting huge pages on kvm-x86: we have
write_count and rmap_pde for each largepage and these are in lpage_info.

At the time I made this patch, it seemed that only kvm-x86 supported
huge pages; on ppc the array should be empty:

/* We don't currently support large pages. */
#define KVM_HPAGE_GFN_SHIFT(x)  0
#define KVM_NR_PAGE_SIZES   1

How each architecture supports huge pages will differ a lot.
So this kind of memory-consuming stuff should be arch-specific.

IMO rmap also should be moved into the arch.
s390 does not need it, and architectures other than x86 will be happy if
the type of it can be changed from unsigned long to a pointer, no?

Takuya


Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-01-31 Thread Avi Kivity
On 01/31/2012 03:17 AM, Takuya Yoshikawa wrote:
 Added s390 and ppc developers to Cc,

 (2012/01/30 14:35), Takuya Yoshikawa wrote:
 Some members of kvm_memory_slot are not used by every architecture.

 This patch is the first step to make this difference clear by
 introducing kvm_memory_slot::arch;  lpage_info is moved into it.

 I am planning to move rmap stuff into arch next if this patch is
 accepted.

 Please let me know if you have some opinion about which members should be
 moved into this.


Is there anything else?  Everything else seems to be generic.

-- 
error compiling committee.c: too many arguments to function



Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-01-31 Thread Takuya Yoshikawa

(2012/01/31 18:18), Avi Kivity wrote:

On 01/31/2012 03:17 AM, Takuya Yoshikawa wrote:

Added s390 and ppc developers to Cc,

(2012/01/30 14:35), Takuya Yoshikawa wrote:

Some members of kvm_memory_slot are not used by every architecture.

This patch is the first step to make this difference clear by
introducing kvm_memory_slot::arch;  lpage_info is moved into it.


I am planning to move rmap stuff into arch next if this patch is
accepted.

Please let me know if you have some opinion about which members should be
moved into this.



Is there anything else?  Everything else seems to be generic.



About members, I agree.

But dirty_bitmap allocation/destruction should be implemented by
kvm_arch_create_*_dirty_bitmap().


Of course I may want to add another x86-specific member in the future.

Takuya


Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-01-31 Thread Takuya Yoshikawa
Christian Borntraeger borntrae...@de.ibm.com wrote:

  Some members of kvm_memory_slot are not used by every architecture.
 
  This patch is the first step to make this difference clear by
  introducing kvm_memory_slot::arch;  lpage_info is moved into it.
 
 Patch series seems to work on s390.
 
 Christian
 

Thanks!

Takuya


Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-01-30 Thread Takuya Yoshikawa

Added s390 and ppc developers to Cc,

(2012/01/30 14:35), Takuya Yoshikawa wrote:

Some members of kvm_memory_slot are not used by every architecture.

This patch is the first step to make this difference clear by
introducing kvm_memory_slot::arch;  lpage_info is moved into it.


I am planning to move rmap stuff into arch next if this patch is accepted.

Please let me know if you have some opinion about which members should be
moved into this.


Thanks,
Takuya




Re: [PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-01-30 Thread Christian Borntraeger
On 31/01/12 02:17, Takuya Yoshikawa wrote:
 Added s390 and ppc developers to Cc,
 
 (2012/01/30 14:35), Takuya Yoshikawa wrote:
 Some members of kvm_memory_slot are not used by every architecture.

 This patch is the first step to make this difference clear by
 introducing kvm_memory_slot::arch;  lpage_info is moved into it.

Patch series seems to work on s390.

Christian



[PATCH 4/4 v3] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

2012-01-29 Thread Takuya Yoshikawa
Some members of kvm_memory_slot are not used by every architecture.

This patch is the first step to make this difference clear by
introducing kvm_memory_slot::arch;  lpage_info is moved into it.

Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
 arch/ia64/include/asm/kvm_host.h|3 +
 arch/ia64/kvm/kvm-ia64.c|   10 +
 arch/powerpc/include/asm/kvm_host.h |3 +
 arch/powerpc/kvm/powerpc.c  |   10 +
 arch/s390/include/asm/kvm_host.h|3 +
 arch/s390/kvm/kvm-s390.c|   10 +
 arch/x86/include/asm/kvm_host.h |9 
 arch/x86/kvm/mmu.c  |2 +-
 arch/x86/kvm/x86.c  |   59 +
 include/linux/kvm_host.h|   11 ++---
 virt/kvm/kvm_main.c |   70 --
 11 files changed, 122 insertions(+), 68 deletions(-)

diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
index 2689ee5..e35b3a8 100644
--- a/arch/ia64/include/asm/kvm_host.h
+++ b/arch/ia64/include/asm/kvm_host.h
@@ -459,6 +459,9 @@ struct kvm_sal_data {
unsigned long boot_gp;
 };
 
+struct kvm_arch_memory_slot {
+};
+
 struct kvm_arch {
spinlock_t dirty_log_lock;
 
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 8ca7261..d8ddbba 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1571,6 +1571,16 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
 }
 
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+  struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+{
+   return 0;
+}
+
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index af438b1..b9188aa 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -212,6 +212,9 @@ struct revmap_entry {
 #define KVMPPC_PAGE_WRITETHRU  HPTE_R_W/* 0x40 */
 #define KVMPPC_GOT_PAGE0x80
 
+struct kvm_arch_memory_slot {
+};
+
 struct kvm_arch {
 #ifdef CONFIG_KVM_BOOK3S_64_HV
unsigned long hpt_virt;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 0e21d15..00d7e34 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -281,6 +281,16 @@ long kvm_arch_dev_ioctl(struct file *filp,
return -EINVAL;
 }
 
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+  struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+{
+   return 0;
+}
+
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index e630426..7343872 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -245,6 +245,9 @@ struct kvm_vm_stat {
u32 remote_tlb_flush;
 };
 
+struct kvm_arch_memory_slot {
+};
+
 struct kvm_arch{
struct sca_block *sca;
debug_info_t *dbf;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 0b91679..418a69c 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -807,6 +807,16 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
 }
 
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+  struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+{
+   return 0;
+}
+
 /* Section: memory related */
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
   struct kvm_memory_slot *memslot,
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4610166..de3aa43 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -479,6 +479,15 @@ struct kvm_vcpu_arch {
} osvw;
 };
 
+struct kvm_lpage_info {
+   unsigned long rmap_pde;
+   int write_count;
+};
+
+struct kvm_arch_memory_slot {
+   struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
+};
+
 struct kvm_arch {
unsigned int n_used_mmu_pages;
unsigned int n_requested_mmu_pages;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 37e7f10..ff053ca 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -689,7 +689,7 @@ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
unsigned long idx;
 
	idx = gfn_to_index(gfn, slot->base_gfn, level);
-	return slot->lpage_info[level -