Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup

2013-10-07 Thread Paolo Bonzini
On 04/10/2013 15:38, Alexander Graf wrote:
 
 On 07.08.2013, at 12:03, Bharat Bhushan wrote:
 
 When the MM code is invalidating a range of pages, it calls the KVM
 kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
 kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
 However, the Linux PTEs for the range being flushed are still valid at
 that point.  We are not supposed to establish any new references to pages
 in the range until the ...range_end() notifier gets called.
 The PPC-specific KVM code doesn't get any explicit notification of that;
 instead, we are supposed to use mmu_notifier_retry() to test whether we
 are or have been inside a range flush notifier pair while we have been
 referencing a page.

 This patch calls the mmu_notifier_retry() while mapping the guest
 page to ensure we are not referencing a page when in range invalidation.

 This call is inside a region locked with kvm->mmu_lock, which is the
 same lock that is taken by the KVM MMU notifier functions, thus
 ensuring that no new notification can proceed while we are in the
 locked region.

 Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
 
 Acked-by: Alexander Graf ag...@suse.de
 
 Gleb, Paolo, please queue for 3.12 directly.

Here is the backport.  The second hunk has a nontrivial conflict, so
someone please give their {Tested,Reviewed,Compiled}-by.

Paolo

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 1c6a9d7..c65593a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -332,6 +332,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
unsigned long hva;
int pfnmap = 0;
int tsize = BOOK3E_PAGESZ_4K;
+   int ret = 0;
+   unsigned long mmu_seq;
+   struct kvm *kvm = vcpu_e500->vcpu.kvm;
+
+   /* used to check for invalidations in progress */
+   mmu_seq = kvm->mmu_notifier_seq;
+   smp_rmb();
 
/*
 * Translate guest physical to true physical, acquiring
@@ -449,6 +456,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
}
 
+   spin_lock(&kvm->mmu_lock);
+   if (mmu_notifier_retry(kvm, mmu_seq)) {
+   ret = -EAGAIN;
+   goto out;
+   }
+
kvmppc_e500_ref_setup(ref, gtlbe, pfn);
 
	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
@@ -457,10 +470,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
/* Clear i-cache for new pages */
kvmppc_mmu_flush_icache(pfn);
 
+out:
+   spin_unlock(&kvm->mmu_lock);
+
/* Drop refcount on page, so that mmu notifiers can clear it */
kvm_release_pfn_clean(pfn);
 
-   return 0;
+   return ret;
 }
 
 /* XXX only map the one-one case, for now use TLB0 */

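To make the invalidate_range_start/end ordering described at the top of this
message concrete, here is a minimal, self-contained C model of the
mmu_notifier_seq / mmu_notifier_retry() pattern. It is not the kernel code:
struct fake_kvm and the function names are invented, the lock and smp_rmb()
are omitted because the model is single-threaded, and -1 stands in for -EAGAIN.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct kvm's notifier bookkeeping. */
struct fake_kvm {
	unsigned long mmu_notifier_seq;   /* bumped at ...range_end() */
	unsigned long mmu_notifier_count; /* nonzero between range_start/end */
};

/* Mirrors the logic of mmu_notifier_retry(): caller holds mmu_lock. */
static bool fake_mmu_notifier_retry(struct fake_kvm *kvm, unsigned long mmu_seq)
{
	if (kvm->mmu_notifier_count)          /* a range invalidation is in flight */
		return true;
	if (kvm->mmu_notifier_seq != mmu_seq) /* one completed since our snapshot */
		return true;
	return false;
}

static int map_guest_page(struct fake_kvm *kvm)
{
	/* Snapshot the sequence number *before* translating the address. */
	unsigned long mmu_seq = kvm->mmu_notifier_seq;

	/* ... translate gfn -> pfn here (may race with an invalidation) ... */

	/* Re-check under the lock before installing the TLB entry. */
	if (fake_mmu_notifier_retry(kvm, mmu_seq))
		return -1; /* -EAGAIN in the real code: caller retries the map */

	/* ... install the shadow TLB entry ... */
	return 0;
}

int main(void)
{
	struct fake_kvm kvm = { .mmu_notifier_seq = 0, .mmu_notifier_count = 0 };

	printf("map result: %d\n", map_guest_page(&kvm));
	kvm.mmu_notifier_count = 1; /* pretend an invalidation just started */
	printf("map result during invalidation: %d\n", map_guest_page(&kvm));
	return 0;
}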



Re: [PATCH] KVM: PPC: Book3S HV: Fix typo in saving DSCR

2013-10-07 Thread Paolo Bonzini
On 04/10/2013 15:10, Alexander Graf wrote:
 
 On 21.09.2013, at 01:53, Paul Mackerras wrote:
 
 This fixes a typo in the code that saves the guest DSCR (Data Stream
 Control Register) into the kvm_vcpu_arch struct on guest exit.  The
 effect of the typo was that the DSCR value was saved in the wrong place,
 so changes to the DSCR by the guest didn't persist across guest exit
 and entry, and some host kernel memory got corrupted.

 Cc: sta...@vger.kernel.org [v3.1+]
 Signed-off-by: Paul Mackerras pau...@samba.org
 
 Acked-by: Alexander Graf ag...@suse.de
 
 Gleb, Paolo, can you please queue this directly?

Sure.  I'll wait for feedback on the other patch though.

Paolo



Re: [RFC PATCH] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-07 Thread Cedric Le Goater
Hi Alex,

On 10/04/2013 02:50 PM, Alexander Graf wrote:
 
 On 03.10.2013, at 13:03, Cédric Le Goater wrote:
 
 MMIO emulation reads the last instruction executed by the guest
 and then emulates. If the guest is running in Little Endian mode,
 the instruction needs to be byte-swapped before being emulated.

 This patch stores the last instruction in the endian order of the
 host, primarily doing a byte-swap if needed. The common code
 which fetches last_inst uses a helper routine kvmppc_need_byteswap().
 and the exit paths for the Book3S PV and HR guests use their own
 version in assembly.

 kvmppc_emulate_instruction() also uses kvmppc_need_byteswap() to
 define in which endian order the mmio needs to be done.

 The patch is based on Alex Graf's kvm-ppc-queue branch and it
 has been tested on Big Endian and Little Endian HV guests and
 Big Endian PR guests.

 Signed-off-by: Cédric Le Goater c...@fr.ibm.com
 ---

 Here are some comments/questions : 

 * the host is assumed to be running in Big Endian. when Little Endian
   hosts are supported in the future, we will use the cpu features to
   fix kvmppc_need_byteswap()

 * the 'is_bigendian' parameter of the routines kvmppc_handle_load()
   and kvmppc_handle_store() seems redundant but the *BRX opcodes 
   make the improvements unclear. We could eventually rename the
   parameter to byteswap and the attribute vcpu->arch.mmio_is_bigendian
   to vcpu->arch.mmio_need_byteswap. Anyhow, the current naming sucks
   and I would be happy to have some directions to fix it.



 arch/powerpc/include/asm/kvm_book3s.h   |   15 ++-
 arch/powerpc/kvm/book3s_64_mmu_hv.c |4 ++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   14 +-
 arch/powerpc/kvm/book3s_segment.S   |   14 +-
 arch/powerpc/kvm/emulate.c  |   71 +--
 5 files changed, 83 insertions(+), 35 deletions(-)

 diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
 index 0ec00f4..36c5573 100644
 --- a/arch/powerpc/include/asm/kvm_book3s.h
 +++ b/arch/powerpc/include/asm/kvm_book3s.h
 @@ -270,14 +270,22 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
  return vcpu->arch.pc;
 }

 +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 +{
 +return vcpu->arch.shared->msr & MSR_LE;
 +}
 +
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
  ulong pc = kvmppc_get_pc(vcpu);

  /* Load the instruction manually if it failed to do so in the
   * exit path */
 -if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 +if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
  kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 +if (kvmppc_need_byteswap(vcpu))
 +vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
 
 Could you please introduce a new helper to load 32bit numbers? Something like 
 kvmppc_ldl or kvmppc_ld32. That'll be easier to read here then :).

ok. I did something in that spirit in the next patchset I am about to send.
I will respin if needed but there is one fuzzy area though: kvmppc_read_inst().

It calls kvmppc_get_last_inst() and then again kvmppc_ld(). Is that actually
useful?

 +}

  return vcpu->arch.last_inst;
 }
 @@ -293,8 +301,11 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)

  /* Load the instruction manually if it failed to do so in the
   * exit path */
 -if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 +if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
  kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 +if (kvmppc_need_byteswap(vcpu))
 +vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
 +}

  return vcpu->arch.last_inst;
 }
 diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
 index 3a89b85..28130c7 100644
 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
 +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
 @@ -547,6 +547,10 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
  ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
  if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
  return RESUME_GUEST;
 +
 +if (kvmppc_need_byteswap(vcpu))
 +last_inst = swab32(last_inst);
 +
  vcpu->arch.last_inst = last_inst;
  }

 diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
 index dd80953..1d3ee40 100644
 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
 +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
 @@ -1393,14 +1393,26 @@ fast_interrupt_c_return:
  lwz r8, 0(r10)
  mtmsrd  r3

 +ld  r0, VCPU_MSR(r9)
 +
 +/* r10 = vcpu->arch.msr & MSR_LE */
 +rldicl  r10, r0, 0, 63
 
 rldicl.?

sure.

 +cmpdi   r10, 0
 +bne 2f
 
 I think it makes sense to inline that 

[PATCH 0/3] KVM: PPC: Book3S: MMIO support for Little Endian guests

2013-10-07 Thread Cédric Le Goater
MMIO emulation reads the last instruction executed by the guest
and then emulates. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

The first patches add simple helper routines to load instructions from 
the guest. They prepare the ground for the byte-swapping of instructions 
when reading memory from Little Endian guests. There might be room for 
more changes in kvmppc_read_inst(): is the kvmppc_get_last_inst() call 
actually useful? 

The last patch enables the MMIO support by byte-swapping the last 
instruction if the guest is Little Endian.
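As a quick illustration of the byte-swap being discussed, here is a standalone
sketch (not the kernel code: swab32_model() and fetch_last_inst() are invented
stand-ins for swab32() and the last_inst fetch path, and the host is assumed
big endian, as elsewhere in the series):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's swab32(): reverse the four bytes of a word. */
static uint32_t swab32_model(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) << 8)  |
	       ((x & 0x00ff0000u) >> 8)  |
	       ((x & 0xff000000u) >> 24);
}

/* Model of the fetch path: swap only when guest and host endianness differ. */
static uint32_t fetch_last_inst(uint32_t raw, int guest_is_little_endian)
{
	/* Host assumed big endian here. */
	return guest_is_little_endian ? swab32_model(raw) : raw;
}

int main(void)
{
	/* 0x91090000 is the big-endian encoding of "stw r8, 0(r9)"; read raw
	 * from a little-endian guest it appears byte-reversed as 0x00000991. */
	uint32_t raw_from_le_guest = 0x00000991u;

	printf("decoded instruction: 0x%08x\n", fetch_last_inst(raw_from_le_guest, 1));
	return 0;
}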

This patchset is based on Alex Graf's kvm-ppc-queue branch. It has been 
tested with anton's patchset for Big Endian and Little Endian HV guests 
and Big Endian PR guests. 

Thanks,

C.

Cédric Le Goater (3):
  KVM: PPC: Book3S: add helper routine to load guest instructions
  KVM: PPC: Book3S: add helper routines to detect endian order
  KVM: PPC: Book3S: MMIO emulation support for little endian guests

 arch/powerpc/include/asm/kvm_book3s.h   |   33 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c |2 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   12 ++
 arch/powerpc/kvm/book3s_pr.c|2 +-
 arch/powerpc/kvm/book3s_segment.S   |   11 +
 arch/powerpc/kvm/emulate.c  |   72 +--
 6 files changed, 96 insertions(+), 36 deletions(-)

-- 
1.7.10.4



Re: [RFC PATCH] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-07 Thread Cedric Le Goater
On 10/04/2013 03:48 PM, Aneesh Kumar K.V wrote:
 Cédric Le Goater c...@fr.ibm.com writes:
 
 MMIO emulation reads the last instruction executed by the guest
 and then emulates. If the guest is running in Little Endian mode,
 the instruction needs to be byte-swapped before being emulated.

 This patch stores the last instruction in the endian order of the
 host, primarily doing a byte-swap if needed. The common code
 which fetches last_inst uses a helper routine kvmppc_need_byteswap().
 and the exit paths for the Book3S PV and HR guests use their own
 version in assembly.

 kvmppc_emulate_instruction() also uses kvmppc_need_byteswap() to
 define in which endian order the mmio needs to be done.

 The patch is based on Alex Graf's kvm-ppc-queue branch and it
 has been tested on Big Endian and Little Endian HV guests and
 Big Endian PR guests.

 Signed-off-by: Cédric Le Goater c...@fr.ibm.com
 ---

 Here are some comments/questions : 

  * the host is assumed to be running in Big Endian. when Little Endian
hosts are supported in the future, we will use the cpu features to
fix kvmppc_need_byteswap()

  * the 'is_bigendian' parameter of the routines kvmppc_handle_load()
and kvmppc_handle_store() seems redundant but the *BRX opcodes 
make the improvements unclear. We could eventually rename the
parameter to byteswap and the attribute vcpu->arch.mmio_is_bigendian
to vcpu->arch.mmio_need_byteswap. Anyhow, the current naming sucks
and I would be happy to have some directions to fix it.



  arch/powerpc/include/asm/kvm_book3s.h   |   15 ++-
  arch/powerpc/kvm/book3s_64_mmu_hv.c |4 ++
  arch/powerpc/kvm/book3s_hv_rmhandlers.S |   14 +-
  arch/powerpc/kvm/book3s_segment.S   |   14 +-
  arch/powerpc/kvm/emulate.c  |   71 
 +--
  5 files changed, 83 insertions(+), 35 deletions(-)

 diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
 b/arch/powerpc/include/asm/kvm_book3s.h
 index 0ec00f4..36c5573 100644
 --- a/arch/powerpc/include/asm/kvm_book3s.h
 +++ b/arch/powerpc/include/asm/kvm_book3s.h
 @@ -270,14 +270,22 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu 
 *vcpu)
  return vcpu->arch.pc;
  }
  
 +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 +{
 +return vcpu->arch.shared->msr & MSR_LE;
 +}
 +
 
 Maybe kvmppc_need_instbyteswap()? Because for data it also depends on the
 SLE bit. Don't we also need to check the host platform endianness here,
 i.e. whether the host is __BIG_ENDIAN__?

I think we will wait for the host to become Little Endian before adding
more complexity. 

  static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
  {
  ulong pc = kvmppc_get_pc(vcpu);
  
  /* Load the instruction manually if it failed to do so in the
   * exit path */
 -if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 +if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
  kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 +if (kvmppc_need_byteswap(vcpu))
 +vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
 +}
  
  return vcpu->arch.last_inst;
  }
 @@ -293,8 +301,11 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu 
 *vcpu)
  
  /* Load the instruction manually if it failed to do so in the
   * exit path */
 -if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 +if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
  kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 +if (kvmppc_need_byteswap(vcpu))
 +vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
 +}
  
  return vcpu->arch.last_inst;
  }
 diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
 b/arch/powerpc/kvm/book3s_64_mmu_hv.c
 index 3a89b85..28130c7 100644
 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
 +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
 @@ -547,6 +547,10 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, 
 struct kvm_vcpu *vcpu,
  ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
  if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
  return RESUME_GUEST;
 +
 +if (kvmppc_need_byteswap(vcpu))
 +last_inst = swab32(last_inst);
 +
  vcpu->arch.last_inst = last_inst;
  }
  
 diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S 
 b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
 index dd80953..1d3ee40 100644
 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
 +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
 @@ -1393,14 +1393,26 @@ fast_interrupt_c_return:
  lwz r8, 0(r10)
  mtmsrd  r3
  
 +ld  r0, VCPU_MSR(r9)
 +
 +/* r10 = vcpu->arch.msr & MSR_LE */
 +rldicl  r10, r0, 0, 63
 +cmpdi   r10, 0
 +bne 2f
 +
  /* Store the result */
  stw r8, VCPU_LAST_INST(r9)
  
  /* Unset guest mode. */
 -li  r0, KVM_GUEST_MODE_NONE
 +1:  li  r0, 

[PATCH 1/3] KVM: PPC: Book3S: add helper routine to load guest instructions

2013-10-07 Thread Cédric Le Goater
This patch adds a helper routine kvmppc_ld_inst() to load an
instruction from the guest. This routine will be modified in
the next patches to take into account the endian order of the
guest.

Signed-off-by: Cédric Le Goater c...@fr.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s.h |   16 ++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |2 +-
 arch/powerpc/kvm/book3s_pr.c  |2 +-
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 0ec00f4..dfe8f11 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,18 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
return vcpu-arch.pc;
 }
 
+static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
+ u32 *ptr, bool data)
+{
+   return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+}
+
+static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
+ u32 *inst)
+{
+   return kvmppc_ld32(vcpu, eaddr, inst, false);
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
ulong pc = kvmppc_get_pc(vcpu);
@@ -277,7 +289,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
/* Load the instruction manually if it failed to do so in the
 * exit path */
	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld_inst(vcpu, &pc, &vcpu->arch.last_inst);
 
	return vcpu->arch.last_inst;
 }
@@ -294,7 +306,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
/* Load the instruction manually if it failed to do so in the
 * exit path */
	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld_inst(vcpu, &pc, &vcpu->arch.last_inst);
 
	return vcpu->arch.last_inst;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 3a89b85..0083cd0 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -544,7 +544,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
	 * If we fail, we just return to the guest and try executing it again.
	 */
	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
-		ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+		ret = kvmppc_ld_inst(vcpu, &srr0, &last_inst);
if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
return RESUME_GUEST;
vcpu-arch.last_inst = last_inst;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 6075dbd..a817ef6 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -600,7 +600,7 @@ static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
u32 last_inst = kvmppc_get_last_inst(vcpu);
int ret;
 
-	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+	ret = kvmppc_ld_inst(vcpu, &srr0, &last_inst);
if (ret == -ENOENT) {
		ulong msr = vcpu->arch.shared->msr;
 
-- 
1.7.10.4



[PATCH 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-07 Thread Cédric Le Goater
MMIO emulation reads the last instruction executed by the guest
and then emulates. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, primarily doing a byte-swap if needed. The common code
which fetches 'last_inst' uses a helper routine kvmppc_need_byteswap(),
and the exit paths for the Book3S PV and HR guests use their own
version in assembly.

Finally, kvmppc_emulate_instruction() uses kvmppc_is_bigendian()
to define in which endian order the mmio needs to be done.

Signed-off-by: Cédric Le Goater c...@fr.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s.h   |9 +++-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   12 ++
 arch/powerpc/kvm/book3s_segment.S   |   11 +
 arch/powerpc/kvm/emulate.c  |   72 +--
 4 files changed, 71 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 00c2061..9c2b865 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
  u32 *inst)
 {
-   return kvmppc_ld32(vcpu, eaddr, inst, false);
+   int ret;
+
+   ret = kvmppc_ld32(vcpu, eaddr, inst, false);
+
+   if (kvmppc_need_byteswap(vcpu))
+   *inst = swab32(*inst);
+
+   return ret;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..7c9978a 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,22 @@ fast_interrupt_c_return:
lwz r8, 0(r10)
mtmsrd  r3
 
+   ld  r0, VCPU_MSR(r9)
+
+	/* r10 = vcpu->arch.msr & MSR_LE */
+   rldicl. r10, r0, 0, 63
+
/* Store the result */
stw r8, VCPU_LAST_INST(r9)
 
+   beq after_inst_store
+
+   /* Swap and store the result */
+	addi	r11, r9, VCPU_LAST_INST
+   stwbrx  r8, 0, r11
+
/* Unset guest mode. */
+after_inst_store:
li  r0, KVM_GUEST_MODE_HOST_HV
stb r0, HSTATE_IN_GUEST(r13)
b   guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..2ceed4c 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -287,8 +287,19 @@ ld_last_inst:
sync
 
 #endif
+   ld  r8, SVCPU_SHADOW_SRR1(r13)
+
+	/* r10 = vcpu->arch.msr & MSR_LE */
+   rldicl. r10, r8, 0, 63
+
stw r0, SVCPU_LAST_INST(r13)
 
+   beq no_ld_last_inst
+
+   /* swap and store the result */
+	addi	r11, r13, SVCPU_LAST_INST
+   stwbrx  r0, 0, r11
+
 no_ld_last_inst:
 
/* Unset guest mode */
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 751cd45..76d0a12 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
@@ -232,6 +231,7 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
int sprn = get_sprn(inst);
enum emulation_result emulated = EMULATE_DONE;
int advance = 1;
+   int is_bigendian = kvmppc_is_bigendian(vcpu);
 
/* this default type might be overwritten by subcategories */
kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
@@ -266,47 +266,53 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
advance = 0;
break;
case OP_31_XOP_LWZX:
-   emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
+   emulated = kvmppc_handle_load(run, vcpu, rt, 4,
+ is_bigendian);
break;
 
case OP_31_XOP_LBZX:
-   emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+   emulated = kvmppc_handle_load(run, vcpu, rt, 1,
+ is_bigendian);
break;
 
case OP_31_XOP_LBZUX:
-   emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+   emulated = kvmppc_handle_load(run, vcpu, rt, 1,
+ is_bigendian);
kvmppc_set_gpr(vcpu, ra, 

[PATCH 2/3] KVM: PPC: Book3S: add helper routines to detect endian order

2013-10-07 Thread Cédric Le Goater
They will be used to decide whether to byte-swap or not. When Little
Endian host kernels come, these routines will need to be changed
accordingly.

Signed-off-by: Cédric Le Goater c...@fr.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s.h |   10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index dfe8f11..00c2061 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,16 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.shared->msr & MSR_LE;
+}
+
+static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
+{
+   return !kvmppc_need_byteswap(vcpu);
+}
+
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
  u32 *ptr, bool data)
 {
-- 
1.7.10.4



Re: [RFC PATCH] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-07 Thread Alexander Graf

On 07.10.2013, at 16:23, Cedric Le Goater c...@fr.ibm.com wrote:

 Hi Alex,
 
 On 10/04/2013 02:50 PM, Alexander Graf wrote:
 
 On 03.10.2013, at 13:03, Cédric Le Goater wrote:
 
 MMIO emulation reads the last instruction executed by the guest
 and then emulates. If the guest is running in Little Endian mode,
 the instruction needs to be byte-swapped before being emulated.
 
 This patch stores the last instruction in the endian order of the
 host, primarily doing a byte-swap if needed. The common code
 which fetches last_inst uses a helper routine kvmppc_need_byteswap().
 and the exit paths for the Book3S PV and HR guests use their own
 version in assembly.
 
 kvmppc_emulate_instruction() also uses kvmppc_need_byteswap() to
 define in which endian order the mmio needs to be done.
 
 The patch is based on Alex Graf's kvm-ppc-queue branch and it
 has been tested on Big Endian and Little Endian HV guests and
 Big Endian PR guests.
 
 Signed-off-by: Cédric Le Goater c...@fr.ibm.com
 ---
 
 Here are some comments/questions : 
 
 * the host is assumed to be running in Big Endian. when Little Endian
  hosts are supported in the future, we will use the cpu features to
  fix kvmppc_need_byteswap()
 
 * the 'is_bigendian' parameter of the routines kvmppc_handle_load()
  and kvmppc_handle_store() seems redundant but the *BRX opcodes 
  make the improvements unclear. We could eventually rename the
  parameter to byteswap and the attribute vcpu->arch.mmio_is_bigendian
  to vcpu->arch.mmio_need_byteswap. Anyhow, the current naming sucks
  and I would be happy to have some directions to fix it.
 
 
 
 arch/powerpc/include/asm/kvm_book3s.h   |   15 ++-
 arch/powerpc/kvm/book3s_64_mmu_hv.c |4 ++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   14 +-
 arch/powerpc/kvm/book3s_segment.S   |   14 +-
 arch/powerpc/kvm/emulate.c  |   71 
 +--
 5 files changed, 83 insertions(+), 35 deletions(-)
 
 diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
 b/arch/powerpc/include/asm/kvm_book3s.h
 index 0ec00f4..36c5573 100644
 --- a/arch/powerpc/include/asm/kvm_book3s.h
 +++ b/arch/powerpc/include/asm/kvm_book3s.h
 @@ -270,14 +270,22 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu 
 *vcpu)
  return vcpu->arch.pc;
 }
 
 +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 +{
  +   return vcpu->arch.shared->msr & MSR_LE;
 +}
 +
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 ulong pc = kvmppc_get_pc(vcpu);
 
 /* Load the instruction manually if it failed to do so in the
  * exit path */
  -   if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
  +   if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
  kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
  +   if (kvmppc_need_byteswap(vcpu))
  +   vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
 
 Could you please introduce a new helper to load 32bit numbers? Something 
 like kvmppc_ldl or kvmppc_ld32. That'll be easier to read here then :).
 
 ok. I did something in that spirit in the next patchset I am about to send. I 
 will
 respin if needed but there is one fuzzy area though : kvmppc_read_inst().  
 
 It calls kvmppc_get_last_inst() and then again kvmppc_ld(). Is that actually 
 useful ? 

We can only assume that the contents of vcpu->arch.last_inst are valid (which is 
what kvmppc_get_last_inst relies on) when we hit one of these interrupts:

/* We only load the last instruction when it's safe */
cmpwi   r12, BOOK3S_INTERRUPT_DATA_STORAGE
beq ld_last_inst
cmpwi   r12, BOOK3S_INTERRUPT_PROGRAM
beq ld_last_inst
cmpwi   r12, BOOK3S_INTERRUPT_SYSCALL
beq ld_last_prev_inst
cmpwi   r12, BOOK3S_INTERRUPT_ALIGNMENT
beq-	ld_last_inst
#ifdef CONFIG_PPC64
BEGIN_FTR_SECTION
cmpwi   r12, BOOK3S_INTERRUPT_H_EMUL_ASSIST
beq-	ld_last_inst
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
#endif

b   no_ld_last_inst

Outside of these interrupt handlers, we have to ensure that we manually load 
the instruction and if that fails, inject an interrupt into the guest to 
indicate that we couldn't load it.
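A rough model of that flow (a simplified sketch, not the actual
kvmppc_read_inst() code: the toy_* names are invented and the interrupt
injection is reduced to a flag): if the exit path did not cache a valid
last_inst, try to load it manually, and on failure queue an interrupt for the
guest and resume it instead of emulating.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define INST_FETCH_FAILED 0xffffffffu

/* Minimal stand-ins for the vcpu state involved in the flow. */
struct toy_vcpu {
	uint32_t last_inst;   /* cached by the exit path when it was safe to read */
	bool     pending_irq; /* set when we decide to inject a storage interrupt */
};

/* Pretend guest-memory read: returns false if the translation is gone. */
static bool toy_load_inst(struct toy_vcpu *vcpu, uint32_t *inst)
{
	(void)vcpu;
	*inst = INST_FETCH_FAILED;  /* simulate a failed manual load */
	return false;
}

/* Either returns a usable instruction or arranges for the guest to retry. */
static int toy_read_inst(struct toy_vcpu *vcpu, uint32_t *inst)
{
	if (vcpu->last_inst != INST_FETCH_FAILED) {
		*inst = vcpu->last_inst;   /* exit path already fetched it */
		return 0;
	}
	if (toy_load_inst(vcpu, inst))
		return 0;                  /* manual load succeeded */

	vcpu->pending_irq = true;          /* tell the guest we couldn't fetch */
	return -1;                         /* caller resumes the guest instead */
}

int main(void)
{
	struct toy_vcpu vcpu = { .last_inst = INST_FETCH_FAILED, .pending_irq = false };
	uint32_t inst;

	printf("read_inst -> %d, pending_irq=%d\n",
	       toy_read_inst(&vcpu, &inst), (int)vcpu.pending_irq);
	return 0;
}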

I have to admit that the code flow is slightly confusing here. If you have 
suggestions how to improve it, I'm more than happy to see patches :).


Alex



RE: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Bhushan Bharat-R65777


 -Original Message-
 From: Alexander Graf [mailto:ag...@suse.de]
 Sent: Friday, October 04, 2013 4:46 PM
 To: Bhushan Bharat-R65777
 Cc: Wood Scott-B07421; kvm-ppc@vger.kernel.org; k...@vger.kernel.org
 Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
 epapr_hypercall()
 
 
 On 04.10.2013, at 06:26, Bhushan Bharat-R65777 wrote:
 
 
 
  -Original Message-
  From: Wood Scott-B07421
  Sent: Thursday, October 03, 2013 12:04 AM
  To: Alexander Graf
  Cc: Bhushan Bharat-R65777; kvm-ppc@vger.kernel.org;
  k...@vger.kernel.org; Bhushan
  Bharat-R65777
  Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
  epapr_hypercall()
 
  On Wed, 2013-10-02 at 19:54 +0200, Alexander Graf wrote:
  On 02.10.2013, at 19:49, Scott Wood wrote:
 
  On Wed, 2013-10-02 at 19:46 +0200, Alexander Graf wrote:
  On 02.10.2013, at 19:42, Scott Wood wrote:
 
  On Wed, 2013-10-02 at 19:17 +0200, Alexander Graf wrote:
  On 02.10.2013, at 19:04, Scott Wood wrote:
 
  On Wed, 2013-10-02 at 18:53 +0200, Alexander Graf wrote:
  On 02.10.2013, at 18:40, Scott Wood wrote:
 
  On Wed, 2013-10-02 at 16:19 +0200, Alexander Graf wrote:
  Won't this break when CONFIG_EPAPR_PARAVIRT=n? We wouldn't
  have
  epapr_hcalls.S compiled into the code base then and the bl above
  would reference an unknown function.
 
  KVM_GUEST selects EPAPR_PARAVIRT.
 
  But you can not select KVM_GUEST and still call these inline
  functions,
  no?
 
  No.
 
  Like kvm_arch_para_features().
 
  Where does that get called without KVM_GUEST?
 
  How would that work currently, with the call to kvm_hypercall()
  in arch/powerpc/kernel/kvm.c (which calls epapr_hypercall, BTW)?
 
  It wouldn't ever get called because kvm_hypercall() ends up
  always
  returning EV_UNIMPLEMENTED when #ifndef CONFIG_KVM_GUEST.
 
  OK, so the objection is to removing that stub?  Where would we
  actually want to call this without knowing that KVM_GUEST or
  EPAPR_PARAVIRT are enabled?
 
  In probing code. I usually prefer
 
  if (kvm_feature_available(X)) {
   ...
  }
 
  over
 
  #ifdef CONFIG_KVM_GUEST
  if (kvm_feature_available(X)) {
   ...
  }
  #endif
 
  at least when I can avoid it. With the current code the compiler
  would be
  smart enough to just optimize out the complete branch.
 
  Sure.  My point is, where would you be calling that where the
  entire file isn't predicated on (or selecting) CONFIG_KVM_GUEST or 
  similar?
 
  We don't do these stubs for every single function in the kernel --
  only ones where the above is a reasonable use case.
 
  Yeah, I'm fine on dropping it, but we need to make that a conscious
  decision
  and verify that no caller relies on it.
 
  kvm_para_has_feature() is called from arch/powerpc/kernel/kvm.c,
  arch/x86/kernel/kvm.c, and arch/x86/kernel/kvmclock.c, all of which
  are enabled by CONFIG_KVM_GUEST.
 
  I did find one example of kvm_para_available() being used in an
  unexpected place
  -- sound/pci/intel8x0.c.  It defines its own non-CONFIG_KVM_GUEST
  stub, even though x86 defines kvm_para_available() using inline CPUID
  stuff which should work without CONFIG_KVM_GUEST.
  I'm not sure why it even needs to do that, though -- shouldn't the
  subsequent PCI subsystem vendor/device check should be sufficient?
  No hypercalls are involved.
 
  That said, the possibility that some random driver might want to make
  use of paravirt features is a decent argument for keeping the stub.
 
 
  I am not sure where we are agreeing on?
  Do we want to remove the stub in arch/powerpc/include/asm/kvm_para.h ? as
 there is no caller without KVM_GUEST and in future caller ensure this to be
 called only from code selected by KVM_GUEST?
 
  Or let this stub stay to avoid any random driver calling this ?
 
 I think the most reasonable way forward is to add a stub for non-CONFIG_EPAPR 
 to
 the epapr code, then replace the kvm bits with generic epapr bits (which your
 patches already do).

Please describe which stub you are talking about.

Thanks
-Bharat

 
 With that we should be 100% equivalent to today's code, just with a lot less
 lines of code :).
 
 
 Alex
 




Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Alexander Graf

On 07.10.2013, at 17:15, Bhushan Bharat-R65777 r65...@freescale.com wrote:

 
 
 -Original Message-
 From: Alexander Graf [mailto:ag...@suse.de]
 Sent: Friday, October 04, 2013 4:46 PM
 To: Bhushan Bharat-R65777
 Cc: Wood Scott-B07421; kvm-ppc@vger.kernel.org; k...@vger.kernel.org
 Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
 epapr_hypercall()
 
 
 On 04.10.2013, at 06:26, Bhushan Bharat-R65777 wrote:
 
 
 
 -Original Message-
 From: Wood Scott-B07421
 Sent: Thursday, October 03, 2013 12:04 AM
 To: Alexander Graf
 Cc: Bhushan Bharat-R65777; kvm-ppc@vger.kernel.org;
 k...@vger.kernel.org; Bhushan
 Bharat-R65777
 Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
 epapr_hypercall()
 
 On Wed, 2013-10-02 at 19:54 +0200, Alexander Graf wrote:
 On 02.10.2013, at 19:49, Scott Wood wrote:
 
 On Wed, 2013-10-02 at 19:46 +0200, Alexander Graf wrote:
 On 02.10.2013, at 19:42, Scott Wood wrote:
 
 On Wed, 2013-10-02 at 19:17 +0200, Alexander Graf wrote:
 On 02.10.2013, at 19:04, Scott Wood wrote:
 
 On Wed, 2013-10-02 at 18:53 +0200, Alexander Graf wrote:
 On 02.10.2013, at 18:40, Scott Wood wrote:
 
 On Wed, 2013-10-02 at 16:19 +0200, Alexander Graf wrote:
 Won't this break when CONFIG_EPAPR_PARAVIRT=n? We wouldn't
 have
 epapr_hcalls.S compiled into the code base then and the bl above
 would reference an unknown function.
 
 KVM_GUEST selects EPAPR_PARAVIRT.
 
 But you can not select KVM_GUEST and still call these inline
 functions,
 no?
 
 No.
 
 Like kvm_arch_para_features().
 
 Where does that get called without KVM_GUEST?
 
 How would that work currently, with the call to kvm_hypercall()
 in arch/powerpc/kernel/kvm.c (which calls epapr_hypercall, BTW)?
 
 It wouldn't ever get called because kvm_hypercall() ends up
 always
 returning EV_UNIMPLEMENTED when #ifndef CONFIG_KVM_GUEST.
 
 OK, so the objection is to removing that stub?  Where would we
 actually want to call this without knowing that KVM_GUEST or
 EPAPR_PARAVIRT are enabled?
 
 In probing code. I usually prefer
 
 if (kvm_feature_available(X)) {
 ...
 }
 
 over
 
 #ifdef CONFIG_KVM_GUEST
 if (kvm_feature_available(X)) {
 ...
 }
 #endif
 
 at least when I can avoid it. With the current code the compiler
 would be
 smart enough to just optimize out the complete branch.
 
 Sure.  My point is, where would you be calling that where the
 entire file isn't predicated on (or selecting) CONFIG_KVM_GUEST or 
 similar?
 
 We don't do these stubs for every single function in the kernel --
 only ones where the above is a reasonable use case.
 
 Yeah, I'm fine on dropping it, but we need to make that a conscious
 decision
 and verify that no caller relies on it.
 
 kvm_para_has_feature() is called from arch/powerpc/kernel/kvm.c,
 arch/x86/kernel/kvm.c, and arch/x86/kernel/kvmclock.c, all of which
 are enabled by CONFIG_KVM_GUEST.
 
 I did find one example of kvm_para_available() being used in an
 unexpected place
 -- sound/pci/intel8x0.c.  It defines its own non-CONFIG_KVM_GUEST
 stub, even though x86 defines kvm_para_available() using inline CPUID
 stuff which should work without CONFIG_KVM_GUEST.
 I'm not sure why it even needs to do that, though -- shouldn't the
 subsequent PCI subsystem vendor/device check should be sufficient?
 No hypercalls are involved.
 
 That said, the possibility that some random driver might want to make
 use of paravirt features is a decent argument for keeping the stub.
 
 
 I am not sure where we are agreeing on?
 Do we want to remove the stub in arch/powerpc/include/asm/kvm_para.h ? as
 there is no caller without KVM_GUEST and in future caller ensure this to be
 called only from code selected by KVM_GUEST?
 
 Or let this stub stay to avoid any random driver calling this ?
 
 I think the most reasonable way forward is to add a stub for 
 non-CONFIG_EPAPR to
 the epapr code, then replace the kvm bits with generic epapr bits (which your
 patches already do).
 
 Please describe which stub you are talking about.

kvm_hypercall is always available, regardless of the config option, which makes 
all its subfunctions always available as well.


Alex

---

#ifdef CONFIG_KVM_GUEST

#include <linux/of.h>

static inline int kvm_para_available(void)
{
struct device_node *hyper_node;

hyper_node = of_find_node_by_path("/hypervisor");
if (!hyper_node)
return 0;

if (!of_device_is_compatible(hyper_node, "linux,kvm"))
return 0;

return 1;
}

extern unsigned long kvm_hypercall(unsigned long *in,
   unsigned long *out,
   unsigned long nr);

#else

static inline int kvm_para_available(void)
{
return 0;
}

static unsigned long kvm_hypercall(unsigned long *in,
   unsigned long *out,
   unsigned long nr)
{
return EV_UNIMPLEMENTED;
}

#endif


RE: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Bhushan Bharat-R65777
  at least when I can avoid it. With the current code the compiler
  would be
  smart enough to just optimize out the complete branch.
 
  Sure.  My point is, where would you be calling that where the
  entire file isn't predicated on (or selecting) CONFIG_KVM_GUEST or
 similar?
 
  We don't do these stubs for every single function in the kernel
  -- only ones where the above is a reasonable use case.
 
  Yeah, I'm fine on dropping it, but we need to make that a
  conscious decision
  and verify that no caller relies on it.
 
  kvm_para_has_feature() is called from arch/powerpc/kernel/kvm.c,
  arch/x86/kernel/kvm.c, and arch/x86/kernel/kvmclock.c, all of which
  are enabled by CONFIG_KVM_GUEST.
 
  I did find one example of kvm_para_available() being used in an
  unexpected place
  -- sound/pci/intel8x0.c.  It defines its own non-CONFIG_KVM_GUEST
  stub, even though x86 defines kvm_para_available() using inline
  CPUID stuff which should work without CONFIG_KVM_GUEST.
  I'm not sure why it even needs to do that, though -- shouldn't the
  subsequent PCI subsystem vendor/device check should be sufficient?
  No hypercalls are involved.
 
  That said, the possibility that some random driver might want to
  make use of paravirt features is a decent argument for keeping the stub.
 
 
  I am not sure where we are agreeing on?
  Do we want to remove the stub in arch/powerpc/include/asm/kvm_para.h
  ? as
  there is no caller without KVM_GUEST and in future caller ensure this
  to be called only from code selected by KVM_GUEST?
 
  Or let this stub stay to avoid any random driver calling this ?
 
  I think the most reasonable way forward is to add a stub for
  non-CONFIG_EPAPR to the epapr code, then replace the kvm bits with
  generic epapr bits (which your patches already do).
 
  Please describe which stub you are talking about.
 
 kvm_hypercall is always available, regardless of the config option, which 
 makes
 all its subfunctions always available as well.

This patch renames kvm_hypercall() to epapr_hypercall(), which is always 
available, and the kvm_hypercall() friends now directly call epapr_hypercall().
IIUC, what you are trying to say is to let the kvm_hypercall() friends keep on 
calling kvm_hypercall() itself, with a stub something like this:

#ifdef CONFIG_KVM_GUEST
 
static unsigned long kvm_hypercall(unsigned long *in,
unsigned long *out,
unsigned long nr)
{
return epapr_hypercall(in, out, nr);
}
 
 #else
static unsigned long kvm_hypercall(unsigned long *in,
unsigned long *out,
unsigned long nr) {
 return EV_UNIMPLEMENTED;
}
-

I am still not really convinced about why we want to keep this stub where we 
know this is not called outside KVM_GUEST and calling this without KVM_GUEST is 
debatable.

Thanks
-Bharat


 
 
 Alex
 
 ---
 
 #ifdef CONFIG_KVM_GUEST
 
 #include <linux/of.h>
 
 static inline int kvm_para_available(void) {
 struct device_node *hyper_node;
 
 hyper_node = of_find_node_by_path("/hypervisor");
 if (!hyper_node)
 return 0;
 
 if (!of_device_is_compatible(hyper_node, "linux,kvm"))
 return 0;
 
 return 1;
 }
 
 extern unsigned long kvm_hypercall(unsigned long *in,
unsigned long *out,
unsigned long nr);
 
 #else
 
 static inline int kvm_para_available(void) {
 return 0;
 }
 
 static unsigned long kvm_hypercall(unsigned long *in,
unsigned long *out,
unsigned long nr) {
 return EV_UNIMPLEMENTED;
 }
 
 #endif
 




Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Alexander Graf

On 07.10.2013, at 17:43, Bhushan Bharat-R65777 r65...@freescale.com wrote:

 at least when I can avoid it. With the current code the compiler
 would be
 smart enough to just optimize out the complete branch.
 
 Sure.  My point is, where would you be calling that where the
 entire file isn't predicated on (or selecting) CONFIG_KVM_GUEST or
 similar?
 
 We don't do these stubs for every single function in the kernel
 -- only ones where the above is a reasonable use case.
 
 Yeah, I'm fine on dropping it, but we need to make that a
 conscious decision
 and verify that no caller relies on it.
 
 kvm_para_has_feature() is called from arch/powerpc/kernel/kvm.c,
 arch/x86/kernel/kvm.c, and arch/x86/kernel/kvmclock.c, all of which
 are enabled by CONFIG_KVM_GUEST.
 
 I did find one example of kvm_para_available() being used in an
 unexpected place
 -- sound/pci/intel8x0.c.  It defines its own non-CONFIG_KVM_GUEST
 stub, even though x86 defines kvm_para_available() using inline
 CPUID stuff which should work without CONFIG_KVM_GUEST.
 I'm not sure why it even needs to do that, though -- shouldn't the
 subsequent PCI subsystem vendor/device check should be sufficient?
 No hypercalls are involved.
 
 That said, the possibility that some random driver might want to
 make use of paravirt features is a decent argument for keeping the stub.
 
 
 I am not sure where we are agreeing on?
 Do we want to remove the stub in arch/powerpc/include/asm/kvm_para.h
 ? as
 there is no caller without KVM_GUEST and in future caller ensure this
 to be called only from code selected by KVM_GUEST?
 
 Or let this stub stay to avoid any random driver calling this ?
 
 I think the most reasonable way forward is to add a stub for
 non-CONFIG_EPAPR to the epapr code, then replace the kvm bits with
 generic epapr bits (which your patches already do).
 
 Please describe which stub you are talking about.
 
 kvm_hypercall is always available, regardless of the config option, which 
 makes
 all its subfunctions always available as well.
 
 This patch renames kvm_hypercall() to epapr_hypercall() and which is always 
 available. And the kvm_hypercall() friends now directly calls 
 epapr_hypercall().
 IIUC, So what you are trying to say is let the kvm_hypercall() friends keep 
 on calling kvm_hypercall() itself and a sub something like this:

No, what I'm saying is that we either

  a) drop the whole #ifndef code path consciously. This would have to be a 
separate patch with a separate discussion. It's orthogonal to combining 
kvm_hypercall() and epapr_hypercall()

  b) add the #ifndef path to epapr_hypercall()

I prefer b, Scott prefers b.


Alex



RE: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Bhushan Bharat-R65777


 -Original Message-
 From: kvm-ppc-ow...@vger.kernel.org [mailto:kvm-ppc-ow...@vger.kernel.org] On
 Behalf Of Alexander Graf
 Sent: Monday, October 07, 2013 9:16 PM
 To: Bhushan Bharat-R65777
 Cc: Wood Scott-B07421; kvm-ppc@vger.kernel.org; k...@vger.kernel.org
 Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
 epapr_hypercall()
 
 
 On 07.10.2013, at 17:43, Bhushan Bharat-R65777 r65...@freescale.com wrote:
 
  at least when I can avoid it. With the current code the
  compiler would be
  smart enough to just optimize out the complete branch.
 
  Sure.  My point is, where would you be calling that where the
  entire file isn't predicated on (or selecting) CONFIG_KVM_GUEST
  or
  similar?
 
  We don't do these stubs for every single function in the kernel
  -- only ones where the above is a reasonable use case.
 
  Yeah, I'm fine on dropping it, but we need to make that a
  conscious decision
  and verify that no caller relies on it.
 
  kvm_para_has_feature() is called from arch/powerpc/kernel/kvm.c,
  arch/x86/kernel/kvm.c, and arch/x86/kernel/kvmclock.c, all of
  which are enabled by CONFIG_KVM_GUEST.
 
  I did find one example of kvm_para_available() being used in an
  unexpected place
  -- sound/pci/intel8x0.c.  It defines its own non-CONFIG_KVM_GUEST
  stub, even though x86 defines kvm_para_available() using inline
  CPUID stuff which should work without CONFIG_KVM_GUEST.
  I'm not sure why it even needs to do that, though -- shouldn't
  the subsequent PCI subsystem vendor/device check should be sufficient?
  No hypercalls are involved.
 
  That said, the possibility that some random driver might want to
  make use of paravirt features is a decent argument for keeping the 
  stub.
 
 
  I am not sure where we are agreeing on?
  Do we want to remove the stub in
  arch/powerpc/include/asm/kvm_para.h
  ? as
  there is no caller without KVM_GUEST and in future caller ensure
  this to be called only from code selected by KVM_GUEST?
 
  Or let this stub stay to avoid any random driver calling this ?
 
  I think the most reasonable way forward is to add a stub for
  non-CONFIG_EPAPR to the epapr code, then replace the kvm bits with
  generic epapr bits (which your patches already do).
 
  Please describe which stub you are talking about.
 
  kvm_hypercall is always available, regardless of the config option,
  which makes all its subfunctions always available as well.
 
  This patch renames kvm_hypercall() to epapr_hypercall() and which is always
 available. And the kvm_hypercall() friends now directly calls 
 epapr_hypercall().
  IIUC, So what you are trying to say is let the kvm_hypercall() friends keep 
  on
 calling kvm_hypercall() itself and a sub something like this:
 
 No, what I'm saying is that we either
 
   a) drop the whole #ifndef code path consciously. This would have to be a
 separate patch with a separate discussion. It's orthogonal to combining
 kvm_hypercall() and epapr_hypercall()
 
   b) add the #ifndef path to epapr_hypercall()

Do you mean like this in arch/powerpc/include/asm/epapr_hcalls.h

#ifdef CONFIG_KVM_GUEST
static inline unsigned long epapr_hypercall(unsigned long *in,
   unsigned long *out,
   unsigned long nr)
{
 // code for this function
} 
#else
static inline unsigned long epapr_hypercall(unsigned long *in,
   unsigned long *out,
   unsigned long nr)
{
return EV_UNIMPLEMENTED;
}
#endif
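For context on why such a stub is attractive at call sites, here is a generic
illustration (not code from the thread: epapr_hypercall0() and the
EV_UNIMPLEMENTED value below are stand-ins, and this models only the
!CONFIG_EPAPR_PARAVIRT case): with a constant-returning inline stub, probing
code needs no #ifdef because the compiler folds the condition and drops the
dead branch.

#include <stdio.h>

#define EV_UNIMPLEMENTED 12   /* illustrative error code */

/* Stand-in for the !CONFIG_EPAPR_PARAVIRT stub being discussed. */
static inline unsigned long epapr_hypercall0(unsigned long nr)
{
	(void)nr;
	return EV_UNIMPLEMENTED;  /* no hypervisor support compiled in */
}

static int feature_available(void)
{
	/* No #ifdef needed at the call site: with the stub above this
	 * condition is a compile-time constant and the branch is eliminated. */
	return epapr_hypercall0(42) != EV_UNIMPLEMENTED;
}

int main(void)
{
	printf("feature available: %d\n", feature_available());
	return 0;
}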

 
 I prefer b, Scott prefers b.
 
 
 Alex
 




Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Alexander Graf

On 07.10.2013, at 18:04, Bhushan Bharat-R65777 r65...@freescale.com wrote:

 
 
 -Original Message-
 From: kvm-ppc-ow...@vger.kernel.org [mailto:kvm-ppc-ow...@vger.kernel.org] On
 Behalf Of Alexander Graf
 Sent: Monday, October 07, 2013 9:16 PM
 To: Bhushan Bharat-R65777
 Cc: Wood Scott-B07421; kvm-ppc@vger.kernel.org; k...@vger.kernel.org
 Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
 epapr_hypercall()
 
 
 On 07.10.2013, at 17:43, Bhushan Bharat-R65777 r65...@freescale.com wrote:
 
 at least when I can avoid it. With the current code the
 compiler would be
 smart enough to just optimize out the complete branch.
 
 Sure.  My point is, where would you be calling that where the
 entire file isn't predicated on (or selecting) CONFIG_KVM_GUEST
 or
 similar?
 
 We don't do these stubs for every single function in the kernel
 -- only ones where the above is a reasonable use case.
 
 Yeah, I'm fine on dropping it, but we need to make that a
 conscious decision
 and verify that no caller relies on it.
 
 kvm_para_has_feature() is called from arch/powerpc/kernel/kvm.c,
 arch/x86/kernel/kvm.c, and arch/x86/kernel/kvmclock.c, all of
 which are enabled by CONFIG_KVM_GUEST.
 
 I did find one example of kvm_para_available() being used in an
 unexpected place
 -- sound/pci/intel8x0.c.  It defines its own non-CONFIG_KVM_GUEST
 stub, even though x86 defines kvm_para_available() using inline
 CPUID stuff which should work without CONFIG_KVM_GUEST.
 I'm not sure why it even needs to do that, though -- shouldn't
 the subsequent PCI subsystem vendor/device check should be sufficient?
 No hypercalls are involved.
 
 That said, the possibility that some random driver might want to
 make use of paravirt features is a decent argument for keeping the 
 stub.
 
 
 I am not sure where we are agreeing on?
 Do we want to remove the stub in
 arch/powerpc/include/asm/kvm_para.h
 ? as
 there is no caller without KVM_GUEST and in future caller ensure
 this to be called only from code selected by KVM_GUEST?
 
 Or let this stub stay to avoid any random driver calling this ?
 
 I think the most reasonable way forward is to add a stub for
 non-CONFIG_EPAPR to the epapr code, then replace the kvm bits with
 generic epapr bits (which your patches already do).
 
 Please describe which stub you are talking about.
 
 kvm_hypercall is always available, regardless of the config option,
 which makes all its subfunctions always available as well.
 
 This patch renames kvm_hypercall() to epapr_hypercall() and which is always
 available. And the kvm_hypercall() friends now directly calls 
 epapr_hypercall().
 IIUC, So what you are trying to say is let the kvm_hypercall() friends keep 
 on
 calling kvm_hypercall() itself and a sub something like this:
 
 No, what I'm saying is that we either
 
  a) drop the whole #ifndef code path consciously. This would have to be a
 separate patch with a separate discussion. It's orthogonal to combining
 kvm_hypercall() and epapr_hypercall()
 
  b) add the #ifndef path to epapr_hypercall()
 
 Do you mean like this in arch/powerpc/include/asm/epapr_hcalls.h
 
 #ifdef CONFIG_KVM_GUEST

CONFIG_EPAPR_PARAVIRT

Apart from that, yes, I think that's what we want.


Alex

 static inline unsigned long epapr_hypercall(unsigned long *in,
   unsigned long *out,
   unsigned long nr)
 {
 // code for this function
 } 
 #else
 static inline unsigned long epapr_hypercall(unsigned long *in,
   unsigned long *out,
   unsigned long nr)
 {
   return EV_UNIMPLEMENTED;
 }
 #endif




RE: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Bhushan Bharat-R65777


 -Original Message-
 From: Alexander Graf [mailto:ag...@suse.de]
 Sent: Monday, October 07, 2013 9:43 PM
 To: Bhushan Bharat-R65777
 Cc: Wood Scott-B07421; kvm-ppc@vger.kernel.org; k...@vger.kernel.org
 Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
 epapr_hypercall()
 
 
 On 07.10.2013, at 18:04, Bhushan Bharat-R65777 r65...@freescale.com wrote:
 
 
 
  -Original Message-
  From: kvm-ppc-ow...@vger.kernel.org
  [mailto:kvm-ppc-ow...@vger.kernel.org] On Behalf Of Alexander Graf
  Sent: Monday, October 07, 2013 9:16 PM
  To: Bhushan Bharat-R65777
  Cc: Wood Scott-B07421; kvm-ppc@vger.kernel.org; k...@vger.kernel.org
  Subject: Re: [PATCH 1/2] kvm/powerpc: rename kvm_hypercall() to
  epapr_hypercall()
 
 
  On 07.10.2013, at 17:43, Bhushan Bharat-R65777 r65...@freescale.com 
  wrote:
 
  at least when I can avoid it. With the current code the
  compiler would be
  smart enough to just optimize out the complete branch.
 
  Sure.  My point is, where would you be calling that where the
  entire file isn't predicated on (or selecting)
  CONFIG_KVM_GUEST or
  similar?
 
  We don't do these stubs for every single function in the
  kernel
  -- only ones where the above is a reasonable use case.
 
  Yeah, I'm fine on dropping it, but we need to make that a
  conscious decision
  and verify that no caller relies on it.
 
  kvm_para_has_feature() is called from
  arch/powerpc/kernel/kvm.c, arch/x86/kernel/kvm.c, and
  arch/x86/kernel/kvmclock.c, all of which are enabled by
 CONFIG_KVM_GUEST.
 
  I did find one example of kvm_para_available() being used in an
  unexpected place
  -- sound/pci/intel8x0.c.  It defines its own
  non-CONFIG_KVM_GUEST stub, even though x86 defines
  kvm_para_available() using inline CPUID stuff which should work 
  without
 CONFIG_KVM_GUEST.
  I'm not sure why it even needs to do that, though -- shouldn't
  the subsequent PCI subsystem vendor/device check should be 
  sufficient?
  No hypercalls are involved.
 
  That said, the possibility that some random driver might want
  to make use of paravirt features is a decent argument for keeping the
 stub.
 
 
  I am not sure where we are agreeing on?
  Do we want to remove the stub in
  arch/powerpc/include/asm/kvm_para.h
  ? as
  there is no caller without KVM_GUEST and in future caller ensure
  this to be called only from code selected by KVM_GUEST?
 
  Or let this stub stay to avoid any random driver calling this ?
 
  I think the most reasonable way forward is to add a stub for
  non-CONFIG_EPAPR to the epapr code, then replace the kvm bits
  with generic epapr bits (which your patches already do).
 
  Please describe which stub you are talking about.
 
  kvm_hypercall is always available, regardless of the config option,
  which makes all its subfunctions always available as well.
 
  This patch renames kvm_hypercall() to epapr_hypercall() and which is
  always
  available. And the kvm_hypercall() friends now directly calls
 epapr_hypercall().
  IIUC, So what you are trying to say is let the kvm_hypercall()
  friends keep on
  calling kvm_hypercall() itself and a sub something like this:
 
  No, what I'm saying is that we either
 
   a) drop the whole #ifndef code path consciously. This would have to
  be a separate patch with a separate discussion. It's orthogonal to
  combining
  kvm_hypercall() and epapr_hypercall()
 
   b) add the #ifndef path to epapr_hypercall()
 
  Do you mean like this in arch/powerpc/include/asm/epapr_hcalls.h
 
  #ifdef CONFIG_KVM_GUEST
 
 CONFIG_EPAPR_PARAVIRT

Yes, I was getting confused about why only KVM_GUEST, as this is not specific to 
KVM_GUEST.
Thank you

 
 Apart from that, yes, I think that's what we want.
 
 
 Alex
 
  static inline unsigned long epapr_hypercall(unsigned long *in,
                                              unsigned long *out,
                                              unsigned long nr)
  {
          // code for this function
  }
  #else
  static inline unsigned long epapr_hypercall(unsigned long *in,
                                              unsigned long *out,
                                              unsigned long nr)
  {
          return EV_UNIMPLEMENTED;
  }
  #endif
 
 




[PATCH -V2 05/14] kvm: powerpc: book3s: Add kvmppc_ops callback

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

This patch adds a new callback structure, kvmppc_ops. This will help us in
enabling both HV and PR KVM together in the same kernel. The actual change to
enable them together is done in a later patch in the series.
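To illustrate the idea in miniature (a generic sketch with invented toy_* names
and a single callback; the real struct kvmppc_ops in the diff below carries many
more), common code keeps a pointer to an ops table and dispatches through it, so
the PR or HV backend can be selected when the VM is created:

#include <stdio.h>

struct toy_vcpu { int id; };

/* Invented, minimal ops table; the real structure has many callbacks. */
struct toy_kvmppc_ops {
	const char *name;
	int (*vcpu_run)(struct toy_vcpu *vcpu);
};

static int pr_vcpu_run(struct toy_vcpu *vcpu)
{
	printf("PR backend running vcpu %d\n", vcpu->id);
	return 0;
}

static int hv_vcpu_run(struct toy_vcpu *vcpu)
{
	printf("HV backend running vcpu %d\n", vcpu->id);
	return 0;
}

static const struct toy_kvmppc_ops pr_ops = { .name = "PR", .vcpu_run = pr_vcpu_run };
static const struct toy_kvmppc_ops hv_ops = { .name = "HV", .vcpu_run = hv_vcpu_run };

/* Common code only sees the ops pointer chosen at VM-creation time. */
static const struct toy_kvmppc_ops *kvmppc_ops;

int main(void)
{
	struct toy_vcpu vcpu = { .id = 0 };

	kvmppc_ops = &hv_ops;   /* e.g. pick HV when the hardware supports it */
	kvmppc_ops->vcpu_run(&vcpu);

	kvmppc_ops = &pr_ops;   /* or fall back to the PR backend */
	kvmppc_ops->vcpu_run(&vcpu);
	return 0;
}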

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s.h |   1 -
 arch/powerpc/include/asm/kvm_ppc.h|  85 +
 arch/powerpc/kernel/exceptions-64s.S  |   2 +-
 arch/powerpc/kvm/book3s.c | 145 +-
 arch/powerpc/kvm/book3s.h |  32 +
 arch/powerpc/kvm/book3s_32_mmu_host.c |   2 +-
 arch/powerpc/kvm/book3s_64_mmu_host.c |   2 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |  17 ++-
 arch/powerpc/kvm/book3s_emulate.c |   8 +-
 arch/powerpc/kvm/book3s_hv.c  | 220 --
 arch/powerpc/kvm/book3s_interrupts.S  |   2 +-
 arch/powerpc/kvm/book3s_pr.c  | 194 +++---
 arch/powerpc/kvm/book3s_xics.c|   4 +-
 arch/powerpc/kvm/emulate.c|   6 +-
 arch/powerpc/kvm/powerpc.c|  58 +++--
 15 files changed, 554 insertions(+), 224 deletions(-)
 create mode 100644 arch/powerpc/kvm/book3s.h

diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
b/arch/powerpc/include/asm/kvm_book3s.h
index 99ef871..315a5d6 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -124,7 +124,6 @@ extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, 
ulong ea, ulong ea_mask)
 extern void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 vp, u64 vp_mask);
 extern void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong 
pa_end);
 extern void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 new_msr);
-extern void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr);
 extern void kvmppc_mmu_book3s_64_init(struct kvm_vcpu *vcpu);
 extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu);
 extern void kvmppc_mmu_book3s_hv_init(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index 1823f38..1d22b53 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -106,13 +106,6 @@ extern void kvmppc_core_queue_external(struct kvm_vcpu 
*vcpu,
struct kvm_interrupt *irq);
 extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu);
 extern void kvmppc_core_flush_tlb(struct kvm_vcpu *vcpu);
-
-extern int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
-  unsigned int op, int *advance);
-extern int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn,
-ulong val);
-extern int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn,
-ulong *val);
 extern int kvmppc_core_check_requests(struct kvm_vcpu *vcpu);
 
 extern int kvmppc_booke_init(void);
@@ -135,8 +128,6 @@ extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
struct kvm_create_spapr_tce *args);
 extern long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 unsigned long ioba, unsigned long tce);
-extern long kvm_vm_ioctl_allocate_rma(struct kvm *kvm,
-   struct kvm_allocate_rma *rma);
 extern struct kvm_rma_info *kvm_alloc_rma(void);
 extern void kvm_release_rma(struct kvm_rma_info *ri);
 extern struct page *kvm_alloc_hpt(unsigned long nr_pages);
@@ -177,6 +168,66 @@ extern int kvmppc_xics_get_xive(struct kvm *kvm, u32 irq, 
u32 *server,
 extern int kvmppc_xics_int_on(struct kvm *kvm, u32 irq);
 extern int kvmppc_xics_int_off(struct kvm *kvm, u32 irq);
 
+union kvmppc_one_reg {
+   u32 wval;
+   u64 dval;
+   vector128 vval;
+   u64 vsxval[2];
+   struct {
+   u64 addr;
+   u64 length;
+   }   vpaval;
+};
+
+struct kvmppc_ops {
+   int (*get_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
+   int (*set_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
+   int (*get_one_reg)(struct kvm_vcpu *vcpu, u64 id,
+  union kvmppc_one_reg *val);
+   int (*set_one_reg)(struct kvm_vcpu *vcpu, u64 id,
+  union kvmppc_one_reg *val);
+   void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
+   void (*vcpu_put)(struct kvm_vcpu *vcpu);
+   void (*set_msr)(struct kvm_vcpu *vcpu, u64 msr);
+   int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+   struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned int id);
+   void (*vcpu_free)(struct kvm_vcpu *vcpu);
+   int (*check_requests)(struct kvm_vcpu *vcpu);
+   int (*get_dirty_log)(struct kvm *kvm, struct kvm_dirty_log *log);
+   void (*flush_memslot)(struct kvm *kvm, struct kvm_memory_slot *memslot);
+  

[PATCH -V2 03/14] kvm: powerpc: book3s: pr: Rename KVM_BOOK3S_PR to KVM_BOOK3S_PR_POSSIBLE

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

With later patches supporting PR KVM as a kernel module, the changes
that have to be built into the main kernel binary to enable the PR KVM
module are now selected via KVM_BOOK3S_PR_POSSIBLE.

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/exception-64s.h |  2 +-
 arch/powerpc/include/asm/kvm_book3s.h|  4 ++--
 arch/powerpc/include/asm/kvm_book3s_64.h |  2 +-
 arch/powerpc/include/asm/kvm_host.h  |  2 +-
 arch/powerpc/include/asm/paca.h  |  2 +-
 arch/powerpc/kernel/asm-offsets.c|  2 +-
 arch/powerpc/kernel/exceptions-64s.S |  2 +-
 arch/powerpc/kvm/Kconfig |  6 +++---
 arch/powerpc/kvm/trace.h | 10 +-
 9 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h 
b/arch/powerpc/include/asm/exception-64s.h
index b86c4db..fe1c62d 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -243,7 +243,7 @@ do_kvm_##n: 
\
 #define KVM_HANDLER_SKIP(area, h, n)
 #endif
 
-#ifdef CONFIG_KVM_BOOK3S_PR
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 #define KVMTEST_PR(n)  __KVMTEST(n)
 #define KVM_HANDLER_PR(area, h, n) __KVM_HANDLER(area, h, n)
 #define KVM_HANDLER_PR_SKIP(area, h, n)__KVM_HANDLER_SKIP(area, h, n)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
b/arch/powerpc/include/asm/kvm_book3s.h
index 0ec00f4..5c07d10 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -304,7 +304,7 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu 
*vcpu)
return vcpu-arch.fault_dar;
 }
 
-#ifdef CONFIG_KVM_BOOK3S_PR
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 
 static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
 {
@@ -339,7 +339,7 @@ static inline bool kvmppc_critical_section(struct kvm_vcpu 
*vcpu)
 
return crit;
 }
-#else /* CONFIG_KVM_BOOK3S_PR */
+#else /* CONFIG_KVM_BOOK3S_PR_POSSIBLE */
 
 static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h 
b/arch/powerpc/include/asm/kvm_book3s_64.h
index 86d638a..e6ee7fd 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -20,7 +20,7 @@
 #ifndef __ASM_KVM_BOOK3S_64_H__
 #define __ASM_KVM_BOOK3S_64_H__
 
-#ifdef CONFIG_KVM_BOOK3S_PR
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 static inline struct kvmppc_book3s_shadow_vcpu *svcpu_get(struct kvm_vcpu 
*vcpu)
 {
preempt_disable();
diff --git a/arch/powerpc/include/asm/kvm_host.h 
b/arch/powerpc/include/asm/kvm_host.h
index ac40013..a0ca1f4 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -262,7 +262,7 @@ struct kvm_arch {
struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
int hpt_cma_alloc;
 #endif /* CONFIG_KVM_BOOK3S_64_HV */
-#ifdef CONFIG_KVM_BOOK3S_PR
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
struct mutex hpt_mutex;
 #endif
 #ifdef CONFIG_PPC_BOOK3S_64
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 77c91e7..aefe187 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -161,7 +161,7 @@ struct paca_struct {
struct dtl_entry *dtl_curr; /* pointer corresponding to dtl_ridx */
 
 #ifdef CONFIG_KVM_BOOK3S_HANDLER
-#ifdef CONFIG_KVM_BOOK3S_PR
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
/* We use this to store guest state in */
struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
 #endif
diff --git a/arch/powerpc/kernel/asm-offsets.c 
b/arch/powerpc/kernel/asm-offsets.c
index aae7b54..c6c8675 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -530,7 +530,7 @@ int main(void)
DEFINE(VCPU_SLB_SIZE, sizeof(struct kvmppc_slb));
 
 #ifdef CONFIG_PPC_BOOK3S_64
-#ifdef CONFIG_KVM_BOOK3S_PR
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
DEFINE(PACA_SVCPU, offsetof(struct paca_struct, shadow_vcpu));
 # define SVCPU_FIELD(x, f) DEFINE(x, offsetof(struct paca_struct, 
shadow_vcpu.f))
 #else
diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 580d97a..7b75008 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -424,7 +424,7 @@ data_access_check_stab:
mfspr   r9,SPRN_DSISR
srdir10,r10,60
rlwimi  r10,r9,16,0x20
-#ifdef CONFIG_KVM_BOOK3S_PR
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
lbz r9,HSTATE_IN_GUEST(r13)
rlwimi  r10,r9,8,0x300
 #endif
diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
index ffaef2c..d0665f2 100644
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -34,7 +34,7 @@ config KVM_BOOK3S_64_HANDLER
bool
select KVM_BOOK3S_HANDLER
 

[PATCH -V2 12/14] kvm: Add struct kvm arg to memslot APIs

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

We will use this in a later patch to find the kvm ops handler.
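
Why the extra argument matters: once the ops table becomes a per-VM pointer
(see the later patches in this series), the generic memslot hooks need the
owning struct kvm to reach it. A hedged sketch of the intended call shape
after the whole series is applied, not the literal kernel code at this point:

#include <linux/kvm_host.h>

int kvmppc_core_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
                               unsigned long npages)
{
        /* the new kvm argument is what makes per-VM dispatch possible */
        return kvm->arch.kvm_ops->create_memslot(slot, npages);
}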

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/arm/kvm/arm.c |  5 +++--
 arch/ia64/kvm/kvm-ia64.c   |  5 +++--
 arch/mips/kvm/kvm_mips.c   |  5 +++--
 arch/powerpc/include/asm/kvm_ppc.h |  6 --
 arch/powerpc/kvm/book3s.c  |  4 ++--
 arch/powerpc/kvm/booke.c   |  4 ++--
 arch/powerpc/kvm/powerpc.c |  9 +
 arch/s390/kvm/kvm-s390.c   |  5 +++--
 arch/x86/kvm/x86.c |  5 +++--
 include/linux/kvm_host.h   |  5 +++--
 virt/kvm/kvm_main.c| 12 ++--
 11 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9c697db..e96c48f 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -152,12 +152,13 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct 
vm_fault *vmf)
return VM_FAULT_SIGBUS;
 }
 
-void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
   struct kvm_memory_slot *dont)
 {
 }
 
-int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+   unsigned long npages)
 {
return 0;
 }
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index bdfd878..985bf80 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1550,12 +1550,13 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct 
vm_fault *vmf)
return VM_FAULT_SIGBUS;
 }
 
-void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
   struct kvm_memory_slot *dont)
 {
 }
 
-int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+   unsigned long npages)
 {
return 0;
 }
diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index a7b0445..73b3482 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -198,12 +198,13 @@ kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, 
unsigned long arg)
return -ENOIOCTLCMD;
 }
 
-void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
   struct kvm_memory_slot *dont)
 {
 }
 
-int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+   unsigned long npages)
 {
return 0;
 }
diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index c13f15d..20f4616 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -134,9 +134,11 @@ extern struct page *kvm_alloc_hpt(unsigned long nr_pages);
 extern void kvm_release_hpt(struct page *page, unsigned long nr_pages);
 extern int kvmppc_core_init_vm(struct kvm *kvm);
 extern void kvmppc_core_destroy_vm(struct kvm *kvm);
-extern void kvmppc_core_free_memslot(struct kvm_memory_slot *free,
+extern void kvmppc_core_free_memslot(struct kvm *kvm,
+struct kvm_memory_slot *free,
 struct kvm_memory_slot *dont);
-extern int kvmppc_core_create_memslot(struct kvm_memory_slot *slot,
+extern int kvmppc_core_create_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *slot,
  unsigned long npages);
 extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 39d2994..130fe1d 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -761,13 +761,13 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct 
kvm_dirty_log *log)
return kvmppc_ops-get_dirty_log(kvm, log);
 }
 
-void kvmppc_core_free_memslot(struct kvm_memory_slot *free,
+void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
  struct kvm_memory_slot *dont)
 {
kvmppc_ops-free_memslot(free, dont);
 }
 
-int kvmppc_core_create_memslot(struct kvm_memory_slot *slot,
+int kvmppc_core_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
   unsigned long npages)
 {
return kvmppc_ops-create_memslot(slot, npages);
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 1769354..cb2d986 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1662,12 +1662,12 @@ int 

[PATCH -V2 09/14] kvm: powerpc: book3s: pr: move PR related tracepoints to a separate header

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

This patch moves PR-related tracepoints to a separate header. This
helps in converting PR to a kernel module, which will be done in
later patches.
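
The split follows the usual ftrace convention: exactly one translation unit
defines CREATE_TRACE_POINTS before including a given trace header, so the
tracepoint bodies are emitted once, and every other user includes the header
plainly. Roughly, as the hunks below show:

/* book3s_pr.c: the one file that owns the PR tracepoint definitions */
#define CREATE_TRACE_POINTS
#include "trace_pr.h"

/* any other PR source file that only fires the events */
#include "trace_pr.h"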

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |   2 +-
 arch/powerpc/kvm/book3s_mmu_hpte.c|   2 +-
 arch/powerpc/kvm/book3s_pr.c  |   4 +-
 arch/powerpc/kvm/trace.h  | 234 +--
 arch/powerpc/kvm/trace_pr.h   | 297 ++
 5 files changed, 309 insertions(+), 230 deletions(-)
 create mode 100644 arch/powerpc/kvm/trace_pr.h

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c 
b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 819672c..0d513af 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -27,7 +27,7 @@
 #include asm/machdep.h
 #include asm/mmu_context.h
 #include asm/hw_irq.h
-#include trace.h
+#include trace_pr.h
 
 #define PTE_SIZE 12
 
diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c 
b/arch/powerpc/kvm/book3s_mmu_hpte.c
index 6b79bfc..5a1ab12 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -28,7 +28,7 @@
 #include asm/mmu_context.h
 #include asm/hw_irq.h
 
-#include trace.h
+#include trace_pr.h
 
 #define PTE_SIZE   12
 
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index b6a525d..ca6c73d 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -42,7 +42,9 @@
 #include linux/highmem.h
 
 #include book3s.h
-#include trace.h
+
+#define CREATE_TRACE_POINTS
+#include trace_pr.h
 
 /* #define EXIT_DEBUG */
 /* #define DEBUG_EXT */
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 9e8368e..80f252a 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -85,6 +85,12 @@ TRACE_EVENT(kvm_ppc_instr,
{41, HV_PRIV}
 #endif
 
+#ifndef CONFIG_KVM_BOOK3S_PR_POSSIBLE
+/*
+ * For pr we define this in trace_pr.h since it pr can be built as
+ * a module
+ */
+
 TRACE_EVENT(kvm_exit,
TP_PROTO(unsigned int exit_nr, struct kvm_vcpu *vcpu),
TP_ARGS(exit_nr, vcpu),
@@ -94,9 +100,6 @@ TRACE_EVENT(kvm_exit,
__field(unsigned long,  pc  )
__field(unsigned long,  msr )
__field(unsigned long,  dar )
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-   __field(unsigned long,  srr1)
-#endif
__field(unsigned long,  last_inst   )
),
 
@@ -105,9 +108,6 @@ TRACE_EVENT(kvm_exit,
__entry-pc = kvmppc_get_pc(vcpu);
__entry-dar= kvmppc_get_fault_dar(vcpu);
__entry-msr= vcpu-arch.shared-msr;
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-   __entry-srr1   = vcpu-arch.shadow_srr1;
-#endif
__entry-last_inst  = vcpu-arch.last_inst;
),
 
@@ -115,18 +115,12 @@ TRACE_EVENT(kvm_exit,
 | pc=0x%lx
 | msr=0x%lx
 | dar=0x%lx
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-| srr1=0x%lx
-#endif
 | last_inst=0x%lx
,
__print_symbolic(__entry-exit_nr, kvm_trace_symbol_exit),
__entry-pc,
__entry-msr,
__entry-dar,
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-   __entry-srr1,
-#endif
__entry-last_inst
)
 );
@@ -145,6 +139,7 @@ TRACE_EVENT(kvm_unmap_hva,
 
TP_printk(unmap hva 0x%lx\n, __entry-hva)
 );
+#endif
 
 TRACE_EVENT(kvm_stlb_inval,
TP_PROTO(unsigned int stlb_index),
@@ -231,221 +226,6 @@ TRACE_EVENT(kvm_check_requests,
__entry-cpu_nr, __entry-requests)
 );
 
-
-/*
- * Book3S trace points   *
- */
-
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-
-TRACE_EVENT(kvm_book3s_reenter,
-   TP_PROTO(int r, struct kvm_vcpu *vcpu),
-   TP_ARGS(r, vcpu),
-
-   TP_STRUCT__entry(
-   __field(unsigned int,   r   )
-   __field(unsigned long,  pc  )
-   ),
-
-   TP_fast_assign(
-   __entry-r  = r;
-   __entry-pc = kvmppc_get_pc(vcpu);
-   ),
-
-   TP_printk(reentry r=%d | pc=0x%lx, __entry-r, __entry-pc)
-);
-
-#ifdef CONFIG_PPC_BOOK3S_64
-
-TRACE_EVENT(kvm_book3s_64_mmu_map,
-   TP_PROTO(int rflags, ulong hpteg, ulong va, pfn_t hpaddr,
-struct kvmppc_pte *orig_pte),
-   TP_ARGS(rflags, hpteg, va, hpaddr, orig_pte),
-
-   TP_STRUCT__entry(
-   __field(unsigned char,   

[PATCH -V2 13/14] kvm: powerpc: book3s: Allow the HV and PR selection per virtual machine

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

This moves the kvmppc_ops callbacks to be a per-VM entity. This
enables us to select HV or PR mode when creating a VM. We also
allow both the kvm-hv and kvm-pr kernel modules to be loaded. To
achieve this we move /dev/kvm ownership to the kvm.ko module. Depending
on which KVM mode we select during VM creation, we take a reference
count on the respective module.
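
A hedged sketch of the VM-creation path this describes (simplified, not the
literal kernel code; the exact type constants and error handling are
assumptions here):

#include <linux/module.h>
#include <linux/kvm_host.h>

static int kvmppc_select_ops(struct kvm *kvm, unsigned long type)
{
        struct kvmppc_ops *ops;

        if (type == 0)                          /* default: fastest available */
                ops = kvmppc_hv_ops ? kvmppc_hv_ops : kvmppc_pr_ops;
        else if (type == KVM_VM_PPC_HV)         /* -machine ...,kvm_type=HV */
                ops = kvmppc_hv_ops;
        else if (type == KVM_VM_PPC_PR)         /* -machine ...,kvm_type=PR */
                ops = kvmppc_pr_ops;
        else
                return -EINVAL;

        /* pin the module (kvm-hv.ko or kvm-pr.ko) providing the callbacks */
        if (!ops || !try_module_get(ops->owner))
                return -EINVAL;

        kvm->arch.kvm_ops = ops;
        return 0;
}

On VM destruction the reference would be dropped again with
module_put(kvm->arch.kvm_ops->owner).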

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_host.h |  1 +
 arch/powerpc/include/asm/kvm_ppc.h  |  7 +--
 arch/powerpc/kvm/44x.c  |  7 ++-
 arch/powerpc/kvm/book3s.c   | 89 +
 arch/powerpc/kvm/book3s.h   |  2 +
 arch/powerpc/kvm/book3s_hv.c| 18 
 arch/powerpc/kvm/book3s_pr.c| 25 +++
 arch/powerpc/kvm/book3s_xics.c  |  2 +-
 arch/powerpc/kvm/booke.c| 22 -
 arch/powerpc/kvm/e500.c |  8 +++-
 arch/powerpc/kvm/e500mc.c   |  6 ++-
 arch/powerpc/kvm/emulate.c  | 11 ++---
 arch/powerpc/kvm/powerpc.c  | 76 ++-
 include/uapi/linux/kvm.h|  4 ++
 14 files changed, 187 insertions(+), 91 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h 
b/arch/powerpc/include/asm/kvm_host.h
index e86db97..c7a041d 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -275,6 +275,7 @@ struct kvm_arch {
 #ifdef CONFIG_KVM_XICS
struct kvmppc_xics *xics;
 #endif
+   struct kvmppc_ops *kvm_ops;
 };
 
 /*
diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index 20f4616..3069cf4 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -182,6 +182,7 @@ union kvmppc_one_reg {
 };
 
 struct kvmppc_ops {
+   struct module *owner;
bool is_hv_enabled;
int (*get_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
int (*set_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
@@ -217,7 +218,6 @@ struct kvmppc_ops {
  unsigned long npages);
int (*init_vm)(struct kvm *kvm);
void (*destroy_vm)(struct kvm *kvm);
-   int (*check_processor_compat)(void);
int (*get_smmu_info)(struct kvm *kvm, struct kvm_ppc_smmu_info *info);
int (*emulate_op)(struct kvm_run *run, struct kvm_vcpu *vcpu,
  unsigned int inst, int *advance);
@@ -229,7 +229,8 @@ struct kvmppc_ops {
 
 };
 
-extern struct kvmppc_ops *kvmppc_ops;
+extern struct kvmppc_ops *kvmppc_hv_ops;
+extern struct kvmppc_ops *kvmppc_pr_ops;
 
 /*
  * Cuts out inst bits with ordering according to spec.
@@ -326,7 +327,7 @@ static inline void kvmppc_set_host_ipi(int cpu, u8 host_ipi)
 
 static inline void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 {
-   kvmppc_ops-fast_vcpu_kick(vcpu);
+   vcpu-kvm-arch.kvm_ops-fast_vcpu_kick(vcpu);
 }
 
 #else
diff --git a/arch/powerpc/kvm/44x.c b/arch/powerpc/kvm/44x.c
index a765bcd..93221e8 100644
--- a/arch/powerpc/kvm/44x.c
+++ b/arch/powerpc/kvm/44x.c
@@ -213,16 +213,19 @@ static int __init kvmppc_44x_init(void)
if (r)
goto err_out;
 
-   r = kvm_init(kvm_ops_44x, sizeof(struct kvmppc_vcpu_44x),
-0, THIS_MODULE);
+   r = kvm_init(NULL, sizeof(struct kvmppc_vcpu_44x), 0, THIS_MODULE);
if (r)
goto err_out;
+   kvm_ops_44x.owner = THIS_MODULE;
+   kvmppc_pr_ops = kvm_ops_44x;
+
 err_out:
return r;
 }
 
 static void __exit kvmppc_44x_exit(void)
 {
+   kvmppc_pr_ops = NULL;
kvmppc_booke_exit();
 }
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 130fe1d..ad8f6ed 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -34,6 +34,7 @@
 #include linux/vmalloc.h
 #include linux/highmem.h
 
+#include book3s.h
 #include trace.h
 
 #define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU
@@ -71,7 +72,7 @@ void kvmppc_core_load_guest_debugstate(struct kvm_vcpu *vcpu)
 
 static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
 {
-   if (!kvmppc_ops-is_hv_enabled)
+   if (!vcpu-kvm-arch.kvm_ops-is_hv_enabled)
return to_book3s(vcpu)-hior;
return 0;
 }
@@ -79,7 +80,7 @@ static inline unsigned long kvmppc_interrupt_offset(struct 
kvm_vcpu *vcpu)
 static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
unsigned long pending_now, unsigned long old_pending)
 {
-   if (kvmppc_ops-is_hv_enabled)
+   if (vcpu-kvm-arch.kvm_ops-is_hv_enabled)
return;
if (pending_now)
vcpu-arch.shared-int_pending = 1;
@@ -93,7 +94,7 @@ static inline bool kvmppc_critical_section(struct kvm_vcpu 
*vcpu)
ulong crit_r1;
bool crit;
 
-   if (kvmppc_ops-is_hv_enabled)
+   if 

[PATCH -V2 08/14] kvm: powerpc: book3s: Add is_hv_enabled to kvmppc_ops

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

This helps us identify whether we are running with hypervisor-mode KVM
enabled. The change is needed so that we can have both HV and PR kvm
enabled in the same kernel.

If both HV and PR KVM are included, interrupts come in to the HV version
of the kvmppc_interrupt code, which then jumps to the PR handler,
renamed to kvmppc_interrupt_pr, if the guest is a PR guest.

Allowing both PR and HV in the same kernel required some changes to
kvm_dev_ioctl_check_extension(), since the values returned now can't
be selected with #ifdefs as much as previously. We look at is_hv_enabled
to return the right value when checking for capabilities. For capabilities that
are only provided by HV KVM, we return the HV value only if
is_hv_enabled is true. For capabilities provided by PR KVM but not HV,
we return the PR value only if is_hv_enabled is false.

NOTE: in a later patch we replace is_hv_enabled with a static inline
function comparing kvm_ppc_ops.
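
The capability rule reduces to a simple pattern; a minimal illustrative
sketch (the capability names are invented, not real KVM_CAP_* values):

#include <stdbool.h>

enum cap { CAP_HV_ONLY, CAP_PR_ONLY, CAP_COMMON };

/* Returns non-zero if the capability should be reported as present. */
static int check_extension(enum cap cap, bool is_hv_enabled)
{
        switch (cap) {
        case CAP_HV_ONLY:               /* provided only by HV KVM */
                return is_hv_enabled;
        case CAP_PR_ONLY:               /* provided only by PR KVM */
                return !is_hv_enabled;
        case CAP_COMMON:
        default:
                return 1;
        }
}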

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s.h | 53 --
 arch/powerpc/include/asm/kvm_ppc.h|  5 ++--
 arch/powerpc/kvm/book3s.c | 44 
 arch/powerpc/kvm/book3s_hv.c  |  1 +
 arch/powerpc/kvm/book3s_pr.c  |  1 +
 arch/powerpc/kvm/book3s_xics.c|  2 +-
 arch/powerpc/kvm/powerpc.c| 54 +++
 7 files changed, 79 insertions(+), 81 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
b/arch/powerpc/include/asm/kvm_book3s.h
index 315a5d6..4a594b7 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -301,59 +301,6 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu 
*vcpu)
return vcpu-arch.fault_dar;
 }
 
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-
-static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
-{
-   return to_book3s(vcpu)-hior;
-}
-
-static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
-   unsigned long pending_now, unsigned long old_pending)
-{
-   if (pending_now)
-   vcpu-arch.shared-int_pending = 1;
-   else if (old_pending)
-   vcpu-arch.shared-int_pending = 0;
-}
-
-static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
-{
-   ulong crit_raw = vcpu-arch.shared-critical;
-   ulong crit_r1 = kvmppc_get_gpr(vcpu, 1);
-   bool crit;
-
-   /* Truncate crit indicators in 32 bit mode */
-   if (!(vcpu-arch.shared-msr  MSR_SF)) {
-   crit_raw = 0x;
-   crit_r1 = 0x;
-   }
-
-   /* Critical section when crit == r1 */
-   crit = (crit_raw == crit_r1);
-   /* ... and we're in supervisor mode */
-   crit = crit  !(vcpu-arch.shared-msr  MSR_PR);
-
-   return crit;
-}
-#else /* CONFIG_KVM_BOOK3S_PR_POSSIBLE */
-
-static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
-{
-   return 0;
-}
-
-static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
-   unsigned long pending_now, unsigned long old_pending)
-{
-}
-
-static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
-{
-   return false;
-}
-#endif
-
 /* Magic register values loaded into r3 and r4 before the 'sc' assembly
  * instruction for the OSI hypercalls */
 #define OSI_SC_MAGIC_R30x113724FA
diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index 326033c..c13f15d 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -180,6 +180,7 @@ union kvmppc_one_reg {
 };
 
 struct kvmppc_ops {
+   bool is_hv_enabled;
int (*get_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
int (*set_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
int (*get_one_reg)(struct kvm_vcpu *vcpu, u64 id,
@@ -309,10 +310,10 @@ static inline void kvmppc_set_xics_phys(int cpu, unsigned 
long addr)
 
 static inline u32 kvmppc_get_xics_latch(void)
 {
-   u32 xirr = get_paca()-kvm_hstate.saved_xirr;
+   u32 xirr;
 
+   xirr = get_paca()-kvm_hstate.saved_xirr;
get_paca()-kvm_hstate.saved_xirr = 0;
-
return xirr;
 }
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 784a1d5..493aff7 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -69,6 +69,50 @@ void kvmppc_core_load_guest_debugstate(struct kvm_vcpu *vcpu)
 {
 }
 
+static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
+{
+   if (!kvmppc_ops-is_hv_enabled)
+   return to_book3s(vcpu)-hior;
+   return 0;
+}
+
+static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
+   unsigned long pending_now, unsigned long old_pending)
+{
+   if (kvmppc_ops-is_hv_enabled)
+  

[PATCH -V2 04/14] kvm: powerpc: book3s: Add a new config variable CONFIG_KVM_BOOK3S_HV_POSSIBLE

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

This helps us select the relevant code in the kernel when we later
move the HV and PR bits into separate modules. The patch also makes
the config options for PR KVM selectable.
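
The convention that emerges from this patch and the PR counterpart is roughly:
code that must be built into the kernel image whenever a backend could
possibly be used keys off the new *_POSSIBLE symbol, while code that can live
in the module keys off the user-visible option. A hedged illustration:

#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
/* built-in pieces (e.g. real-mode handlers) needed whenever HV KVM
 * might be used, even if kvm-hv itself is built as a module */
#endif

#ifdef CONFIG_KVM_BOOK3S_64_HV
/* pieces that belong to the kvm-hv module proper */
#endif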

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s.h |  2 --
 arch/powerpc/include/asm/kvm_book3s_64.h  |  6 +++---
 arch/powerpc/include/asm/kvm_book3s_asm.h |  2 +-
 arch/powerpc/include/asm/kvm_host.h   | 10 +-
 arch/powerpc/include/asm/kvm_ppc.h|  2 +-
 arch/powerpc/kernel/asm-offsets.c |  8 
 arch/powerpc/kernel/idle_power7.S |  2 +-
 arch/powerpc/kvm/Kconfig  | 18 +-
 arch/powerpc/kvm/Makefile | 12 
 arch/powerpc/kvm/book3s_exports.c |  5 +++--
 10 files changed, 43 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
b/arch/powerpc/include/asm/kvm_book3s.h
index 5c07d10..99ef871 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -199,8 +199,6 @@ static inline struct kvmppc_vcpu_book3s *to_book3s(struct 
kvm_vcpu *vcpu)
return vcpu-arch.book3s;
 }
 
-extern void kvm_return_point(void);
-
 /* Also add subarch specific defines */
 
 #ifdef CONFIG_KVM_BOOK3S_32_HANDLER
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h 
b/arch/powerpc/include/asm/kvm_book3s_64.h
index e6ee7fd..bf0fa8b 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -35,7 +35,7 @@ static inline void svcpu_put(struct kvmppc_book3s_shadow_vcpu 
*svcpu)
 
 #define SPAPR_TCE_SHIFT12
 
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 #define KVM_DEFAULT_HPT_ORDER  24  /* 16MB HPT by default */
 extern unsigned long kvm_rma_pages;
 #endif
@@ -278,7 +278,7 @@ static inline int is_vrma_hpte(unsigned long hpte_v)
(HPTE_V_1TB_SEG | (VRMA_VSID  (40 - 16)));
 }
 
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 /*
  * Note modification of an HPTE; set the HPTE modified bit
  * if anyone is interested.
@@ -289,6 +289,6 @@ static inline void note_hpte_modification(struct kvm *kvm,
if (atomic_read(kvm-arch.hpte_mod_interest))
rev-guest_rpte |= HPTE_GR_MODIFIED;
 }
-#endif /* CONFIG_KVM_BOOK3S_64_HV */
+#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
 
 #endif /* __ASM_KVM_BOOK3S_64_H__ */
diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h 
b/arch/powerpc/include/asm/kvm_book3s_asm.h
index 6273711..0bd9348 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -83,7 +83,7 @@ struct kvmppc_host_state {
u8 restore_hid5;
u8 napping;
 
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
u8 hwthread_req;
u8 hwthread_state;
u8 host_ipi;
diff --git a/arch/powerpc/include/asm/kvm_host.h 
b/arch/powerpc/include/asm/kvm_host.h
index a0ca1f4..e86db97 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -229,15 +229,15 @@ struct revmap_entry {
 #define KVMPPC_GOT_PAGE0x80
 
 struct kvm_arch_memory_slot {
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
unsigned long *rmap;
unsigned long *slot_phys;
-#endif /* CONFIG_KVM_BOOK3S_64_HV */
+#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
 };
 
 struct kvm_arch {
unsigned int lpid;
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
unsigned long hpt_virt;
struct revmap_entry *revmap;
unsigned int host_lpid;
@@ -261,7 +261,7 @@ struct kvm_arch {
cpumask_t need_tlb_flush;
struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
int hpt_cma_alloc;
-#endif /* CONFIG_KVM_BOOK3S_64_HV */
+#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
struct mutex hpt_mutex;
 #endif
@@ -597,7 +597,7 @@ struct kvm_vcpu_arch {
struct kvmppc_icp *icp; /* XICS presentation controller */
 #endif
 
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
struct kvm_vcpu_arch_shared shregs;
 
unsigned long pgfault_addr;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..1823f38 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -260,7 +260,7 @@ void kvmppc_set_pid(struct kvm_vcpu *vcpu, u32 pid);
 
 struct openpic;
 
-#ifdef CONFIG_KVM_BOOK3S_64_HV
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 extern void kvm_cma_reserve(void) __init;
 static inline void kvmppc_set_xics_phys(int cpu, unsigned long addr)
 {
diff --git a/arch/powerpc/kernel/asm-offsets.c 
b/arch/powerpc/kernel/asm-offsets.c
index c6c8675..3edce7b 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ 

[PATCH -V2 11/14] kvm: powerpc: book3s: Support building HV and PR KVM as module

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/Kconfig  |  6 +++---
 arch/powerpc/kvm/Makefile | 11 ---
 arch/powerpc/kvm/book3s.c | 12 +++-
 arch/powerpc/kvm/book3s_emulate.c |  2 +-
 arch/powerpc/kvm/book3s_hv.c  |  2 ++
 arch/powerpc/kvm/book3s_pr.c  |  5 -
 arch/powerpc/kvm/book3s_rtas.c|  1 +
 arch/powerpc/kvm/emulate.c|  1 +
 arch/powerpc/kvm/powerpc.c| 10 ++
 virt/kvm/kvm_main.c   |  4 
 10 files changed, 45 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
index a96d7c3..8aeeda1 100644
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -73,7 +73,7 @@ config KVM_BOOK3S_64
  If unsure, say N.
 
 config KVM_BOOK3S_64_HV
-   bool KVM support for POWER7 and PPC970 using hypervisor mode in host
+   tristate KVM support for POWER7 and PPC970 using hypervisor mode in 
host
depends on KVM_BOOK3S_64
select KVM_BOOK3S_HV_POSSIBLE
select MMU_NOTIFIER
@@ -94,8 +94,8 @@ config KVM_BOOK3S_64_HV
  If unsure, say N.
 
 config KVM_BOOK3S_64_PR
-   bool KVM support without using hypervisor mode in host
-   depends on KVM_BOOK3S_64  !KVM_BOOK3S_64_HV
+   tristate KVM support without using hypervisor mode in host
+   depends on KVM_BOOK3S_64
select KVM_BOOK3S_PR_POSSIBLE
---help---
  Support running guest kernels in virtual machines on processors
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index fa17b33..ce569b6 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -56,7 +56,7 @@ kvm-objs-$(CONFIG_KVM_E500MC) := $(kvm-e500mc-objs)
 kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) := \
book3s_64_vio_hv.o
 
-kvm-book3s_64-objs-$(CONFIG_KVM_BOOK3S_64_PR) := \
+kvm-pr-y := \
fpu.o \
book3s_paired_singles.o \
book3s_pr.o \
@@ -76,7 +76,7 @@ kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += 
\
book3s_rmhandlers.o
 endif
 
-kvm-book3s_64-objs-$(CONFIG_KVM_BOOK3S_64_HV)  += \
+kvm-hv-y += \
book3s_hv.o \
book3s_hv_interrupts.o \
book3s_64_mmu_hv.o
@@ -84,13 +84,15 @@ kvm-book3s_64-objs-$(CONFIG_KVM_BOOK3S_64_HV)  += \
 kvm-book3s_64-builtin-xics-objs-$(CONFIG_KVM_XICS) := \
book3s_hv_rm_xics.o
 
-kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HV) += \
+ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \
book3s_hv_rmhandlers.o \
book3s_hv_rm_mmu.o \
book3s_hv_ras.o \
book3s_hv_builtin.o \
book3s_hv_cma.o \
$(kvm-book3s_64-builtin-xics-objs-y)
+endif
 
 kvm-book3s_64-objs-$(CONFIG_KVM_XICS) += \
book3s_xics.o
@@ -131,4 +133,7 @@ obj-$(CONFIG_KVM_E500MC) += kvm.o
 obj-$(CONFIG_KVM_BOOK3S_64) += kvm.o
 obj-$(CONFIG_KVM_BOOK3S_32) += kvm.o
 
+obj-$(CONFIG_KVM_BOOK3S_64_PR) += kvm-pr.o
+obj-$(CONFIG_KVM_BOOK3S_64_HV) += kvm-hv.o
+
 obj-y += $(kvm-book3s_64-builtin-objs-y)
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 493aff7..39d2994 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -170,28 +170,32 @@ void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, 
unsigned int vec)
printk(KERN_INFO Queueing interrupt %x\n, vec);
 #endif
 }
-
+EXPORT_SYMBOL_GPL(kvmppc_book3s_queue_irqprio);
 
 void kvmppc_core_queue_program(struct kvm_vcpu *vcpu, ulong flags)
 {
/* might as well deliver this straight away */
kvmppc_inject_interrupt(vcpu, BOOK3S_INTERRUPT_PROGRAM, flags);
 }
+EXPORT_SYMBOL_GPL(kvmppc_core_queue_program);
 
 void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu)
 {
kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DECREMENTER);
 }
+EXPORT_SYMBOL_GPL(kvmppc_core_queue_dec);
 
 int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu)
 {
return test_bit(BOOK3S_IRQPRIO_DECREMENTER, 
vcpu-arch.pending_exceptions);
 }
+EXPORT_SYMBOL_GPL(kvmppc_core_pending_dec);
 
 void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu)
 {
kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_DECREMENTER);
 }
+EXPORT_SYMBOL_GPL(kvmppc_core_dequeue_dec);
 
 void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
 struct kvm_interrupt *irq)
@@ -329,6 +333,7 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
 
return 0;
 }
+EXPORT_SYMBOL_GPL(kvmppc_core_prepare_to_enter);
 
 pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, bool writing,
bool *writable)
@@ -354,6 +359,7 @@ pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, 
bool writing,
 
return gfn_to_pfn_prot(vcpu-kvm, gfn, writing, writable);
 }
+EXPORT_SYMBOL_GPL(kvmppc_gfn_to_pfn);
 
 static int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, 

[PATCH -V2 10/14] kvm: powerpc: booke: Move booke related tracepoints to separate header

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/booke.c |   4 +-
 arch/powerpc/kvm/e500_mmu.c  |   2 +-
 arch/powerpc/kvm/e500_mmu_host.c |   3 +-
 arch/powerpc/kvm/trace.h | 204 ---
 arch/powerpc/kvm/trace_booke.h   | 177 +
 5 files changed, 183 insertions(+), 207 deletions(-)
 create mode 100644 arch/powerpc/kvm/trace_booke.h

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index e5f8ba7..1769354 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -40,7 +40,9 @@
 
 #include timing.h
 #include booke.h
-#include trace.h
+
+#define CREATE_TRACE_POINTS
+#include trace_booke.h
 
 unsigned long kvmppc_booke_handlers;
 
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index d25bb75..ebca6b8 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -32,7 +32,7 @@
 #include asm/kvm_ppc.h
 
 #include e500.h
-#include trace.h
+#include trace_booke.h
 #include timing.h
 #include e500_mmu_host.h
 
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 8f0d532..e7dde4b 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -32,10 +32,11 @@
 #include asm/kvm_ppc.h
 
 #include e500.h
-#include trace.h
 #include timing.h
 #include e500_mmu_host.h
 
+#include trace_booke.h
+
 #define to_htlb1_esel(esel) (host_tlb_params[1].entries - (esel) - 1)
 
 static struct kvmppc_e500_tlb_params host_tlb_params[E500_TLB_NUM];
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 80f252a..2e0e67e 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -31,116 +31,6 @@ TRACE_EVENT(kvm_ppc_instr,
  __entry-inst, __entry-pc, __entry-emulate)
 );
 
-#ifdef CONFIG_PPC_BOOK3S
-#define kvm_trace_symbol_exit \
-   {0x100, SYSTEM_RESET}, \
-   {0x200, MACHINE_CHECK}, \
-   {0x300, DATA_STORAGE}, \
-   {0x380, DATA_SEGMENT}, \
-   {0x400, INST_STORAGE}, \
-   {0x480, INST_SEGMENT}, \
-   {0x500, EXTERNAL}, \
-   {0x501, EXTERNAL_LEVEL}, \
-   {0x502, EXTERNAL_HV}, \
-   {0x600, ALIGNMENT}, \
-   {0x700, PROGRAM}, \
-   {0x800, FP_UNAVAIL}, \
-   {0x900, DECREMENTER}, \
-   {0x980, HV_DECREMENTER}, \
-   {0xc00, SYSCALL}, \
-   {0xd00, TRACE}, \
-   {0xe00, H_DATA_STORAGE}, \
-   {0xe20, H_INST_STORAGE}, \
-   {0xe40, H_EMUL_ASSIST}, \
-   {0xf00, PERFMON}, \
-   {0xf20, ALTIVEC}, \
-   {0xf40, VSX}
-#else
-#define kvm_trace_symbol_exit \
-   {0, CRITICAL}, \
-   {1, MACHINE_CHECK}, \
-   {2, DATA_STORAGE}, \
-   {3, INST_STORAGE}, \
-   {4, EXTERNAL}, \
-   {5, ALIGNMENT}, \
-   {6, PROGRAM}, \
-   {7, FP_UNAVAIL}, \
-   {8, SYSCALL}, \
-   {9, AP_UNAVAIL}, \
-   {10, DECREMENTER}, \
-   {11, FIT}, \
-   {12, WATCHDOG}, \
-   {13, DTLB_MISS}, \
-   {14, ITLB_MISS}, \
-   {15, DEBUG}, \
-   {32, SPE_UNAVAIL}, \
-   {33, SPE_FP_DATA}, \
-   {34, SPE_FP_ROUND}, \
-   {35, PERFORMANCE_MONITOR}, \
-   {36, DOORBELL}, \
-   {37, DOORBELL_CRITICAL}, \
-   {38, GUEST_DBELL}, \
-   {39, GUEST_DBELL_CRIT}, \
-   {40, HV_SYSCALL}, \
-   {41, HV_PRIV}
-#endif
-
-#ifndef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-/*
- * For pr we define this in trace_pr.h since it pr can be built as
- * a module
- */
-
-TRACE_EVENT(kvm_exit,
-   TP_PROTO(unsigned int exit_nr, struct kvm_vcpu *vcpu),
-   TP_ARGS(exit_nr, vcpu),
-
-   TP_STRUCT__entry(
-   __field(unsigned int,   exit_nr )
-   __field(unsigned long,  pc  )
-   __field(unsigned long,  msr )
-   __field(unsigned long,  dar )
-   __field(unsigned long,  last_inst   )
-   ),
-
-   TP_fast_assign(
-   __entry-exit_nr= exit_nr;
-   __entry-pc = kvmppc_get_pc(vcpu);
-   __entry-dar= kvmppc_get_fault_dar(vcpu);
-   __entry-msr= vcpu-arch.shared-msr;
-   __entry-last_inst  = vcpu-arch.last_inst;
-   ),
-
-   TP_printk(exit=%s
-| pc=0x%lx
-| msr=0x%lx
-| dar=0x%lx
-| last_inst=0x%lx
-   ,
-   __print_symbolic(__entry-exit_nr, kvm_trace_symbol_exit),
-   __entry-pc,
-   __entry-msr,
-   __entry-dar,
-   __entry-last_inst
-   )
-);
-
-TRACE_EVENT(kvm_unmap_hva,
-   TP_PROTO(unsigned long hva),
-   TP_ARGS(hva),
-
-   TP_STRUCT__entry(
-   __field(unsigned long,  hva )
-   ),
-
-   TP_fast_assign(
-

[PATCH -V2 01/14] kvm: powerpc: book3s: remove kvmppc_handler_highmem label

2013-10-07 Thread Aneesh Kumar K.V
From: Paul Mackerras pau...@samba.org

This label is not used now.

Signed-off-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/book3s_hv_interrupts.S | 3 ---
 arch/powerpc/kvm/book3s_interrupts.S| 3 ---
 2 files changed, 6 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S 
b/arch/powerpc/kvm/book3s_hv_interrupts.S
index 37f1cc4..928142c 100644
--- a/arch/powerpc/kvm/book3s_hv_interrupts.S
+++ b/arch/powerpc/kvm/book3s_hv_interrupts.S
@@ -158,9 +158,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
  * Interrupts are enabled again at this point.
  */
 
-.global kvmppc_handler_highmem
-kvmppc_handler_highmem:
-
/*
 * Register usage at this point:
 *
diff --git a/arch/powerpc/kvm/book3s_interrupts.S 
b/arch/powerpc/kvm/book3s_interrupts.S
index d4e30d8..38166ab 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -121,9 +121,6 @@ kvm_start_lightweight:
  *
  */
 
-.global kvmppc_handler_highmem
-kvmppc_handler_highmem:
-
/*
 * Register usage at this point:
 *
-- 
1.8.1.2



[PATCH -V2 00/14] Allow PR and HV KVM to coexist in one kernel

2013-10-07 Thread Aneesh Kumar K.V
Hi All,

This patch series supports enabling HV and PR KVM together in the same kernel.
We extend the machine property with a new property, kvm_type. A value of HV
will force HV KVM and a value of PR will force PR KVM. If we don't specify
kvm_type we will select the fastest KVM mode, i.e. HV if that is supported,
otherwise PR.

With Qemu command line having

 -machine pseries,accel=kvm,kvm_type=HV

[root@llmp24l02 qemu]# bash ../qemu
failed to initialize KVM: Invalid argument
[root@llmp24l02 qemu]# modprobe kvm-pr
[root@llmp24l02 qemu]# bash ../qemu
failed to initialize KVM: Invalid argument
[root@llmp24l02 qemu]# modprobe  kvm-hv
[root@llmp24l02 qemu]# bash ../qemu

now with

 -machine pseries,accel=kvm,kvm_type=PR

[root@llmp24l02 qemu]# rmmod kvm-pr
[root@llmp24l02 qemu]# bash ../qemu
failed to initialize KVM: Invalid argument
[root@llmp24l02 qemu]#
[root@llmp24l02 qemu]# modprobe kvm-pr
[root@llmp24l02 qemu]# bash ../qemu

Changes from V1:
* Build fixes for BOOKE (only compile tested)
* Address review feedback

-aneesh



[PATCH -V2 06/14] kvm: powerpc: booke: Convert BOOKE to use kvmppc_ops callbacks

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

Make the required changes to get the BOOKE configs to build with
the introduction of the kvmppc_ops callbacks.

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_ppc.h |  4 +--
 arch/powerpc/kvm/44x.c | 55 +++---
 arch/powerpc/kvm/44x_emulate.c |  8 +++---
 arch/powerpc/kvm/44x_tlb.c |  2 +-
 arch/powerpc/kvm/booke.c   | 47 +++-
 arch/powerpc/kvm/booke.h   | 24 +
 arch/powerpc/kvm/e500.c| 53 +---
 arch/powerpc/kvm/e500_emulate.c|  8 +++---
 arch/powerpc/kvm/e500_mmu.c|  2 +-
 arch/powerpc/kvm/e500mc.c  | 54 ++---
 10 files changed, 194 insertions(+), 63 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index 1d22b53..326033c 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -285,10 +285,10 @@ static inline u32 kvmppc_set_field(u64 inst, int msb, int 
lsb, int value)
__v;\
 })
 
-void kvmppc_core_get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
+int kvmppc_core_get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
 int kvmppc_core_set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
 
-void kvmppc_get_sregs_ivor(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
+int kvmppc_get_sregs_ivor(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
 int kvmppc_set_sregs_ivor(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
 
 int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg);
diff --git a/arch/powerpc/kvm/44x.c b/arch/powerpc/kvm/44x.c
index 2f5c6b6..a765bcd 100644
--- a/arch/powerpc/kvm/44x.c
+++ b/arch/powerpc/kvm/44x.c
@@ -31,13 +31,13 @@
 #include 44x_tlb.h
 #include booke.h
 
-void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+static void kvmppc_core_vcpu_load_44x(struct kvm_vcpu *vcpu, int cpu)
 {
kvmppc_booke_vcpu_load(vcpu, cpu);
kvmppc_44x_tlb_load(vcpu);
 }
 
-void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
+static void kvmppc_core_vcpu_put_44x(struct kvm_vcpu *vcpu)
 {
kvmppc_44x_tlb_put(vcpu);
kvmppc_booke_vcpu_put(vcpu);
@@ -114,29 +114,32 @@ int kvmppc_core_vcpu_translate(struct kvm_vcpu *vcpu,
return 0;
 }
 
-void kvmppc_core_get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+static int kvmppc_core_get_sregs_44x(struct kvm_vcpu *vcpu,
+ struct kvm_sregs *sregs)
 {
-   kvmppc_get_sregs_ivor(vcpu, sregs);
+   return kvmppc_get_sregs_ivor(vcpu, sregs);
 }
 
-int kvmppc_core_set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+static int kvmppc_core_set_sregs_44x(struct kvm_vcpu *vcpu,
+struct kvm_sregs *sregs)
 {
return kvmppc_set_sregs_ivor(vcpu, sregs);
 }
 
-int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id,
-   union kvmppc_one_reg *val)
+static int kvmppc_get_one_reg_44x(struct kvm_vcpu *vcpu, u64 id,
+ union kvmppc_one_reg *val)
 {
return -EINVAL;
 }
 
-int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id,
-  union kvmppc_one_reg *val)
+static int kvmppc_set_one_reg_44x(struct kvm_vcpu *vcpu, u64 id,
+ union kvmppc_one_reg *val)
 {
return -EINVAL;
 }
 
-struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
+static struct kvm_vcpu *kvmppc_core_vcpu_create_44x(struct kvm *kvm,
+   unsigned int id)
 {
struct kvmppc_vcpu_44x *vcpu_44x;
struct kvm_vcpu *vcpu;
@@ -167,7 +170,7 @@ out:
return ERR_PTR(err);
 }
 
-void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu)
+static void kvmppc_core_vcpu_free_44x(struct kvm_vcpu *vcpu)
 {
struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
 
@@ -176,24 +179,46 @@ void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu)
kmem_cache_free(kvm_vcpu_cache, vcpu_44x);
 }
 
-int kvmppc_core_init_vm(struct kvm *kvm)
+static int kvmppc_core_init_vm_44x(struct kvm *kvm)
 {
return 0;
 }
 
-void kvmppc_core_destroy_vm(struct kvm *kvm)
+static void kvmppc_core_destroy_vm_44x(struct kvm *kvm)
 {
 }
 
+static struct kvmppc_ops kvm_ops_44x = {
+   .get_sregs = kvmppc_core_get_sregs_44x,
+   .set_sregs = kvmppc_core_set_sregs_44x,
+   .get_one_reg = kvmppc_get_one_reg_44x,
+   .set_one_reg = kvmppc_set_one_reg_44x,
+   .vcpu_load   = kvmppc_core_vcpu_load_44x,
+   .vcpu_put= kvmppc_core_vcpu_put_44x,
+   .vcpu_create = kvmppc_core_vcpu_create_44x,
+   .vcpu_free   = kvmppc_core_vcpu_free_44x,
+   .mmu_destroy  = kvmppc_mmu_destroy_44x,
+   .init_vm = kvmppc_core_init_vm_44x,
+   .destroy_vm 

[PATCH -V2 02/14] kvm: powerpc: book3s: move book3s_64_vio_hv.c into the main kernel binary

2013-10-07 Thread Aneesh Kumar K.V
From: Paul Mackerras pau...@samba.org

Since the code in book3s_64_vio_hv.c is called from real mode with HV
KVM, and therefore has to be built into the main kernel binary, this
makes it always built-in rather than part of the KVM module.  It gets
called from the KVM module by PR KVM, so this adds an EXPORT_SYMBOL_GPL().

Signed-off-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/Makefile   | 12 
 arch/powerpc/kvm/book3s_64_vio_hv.c |  1 +
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 6646c95..104e8dc 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -53,32 +53,36 @@ kvm-e500mc-objs := \
e500_emulate.o
 kvm-objs-$(CONFIG_KVM_E500MC) := $(kvm-e500mc-objs)
 
+kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) := \
+   book3s_64_vio_hv.o
+
 kvm-book3s_64-objs-$(CONFIG_KVM_BOOK3S_64_PR) := \
$(KVM)/coalesced_mmio.o \
fpu.o \
book3s_paired_singles.o \
book3s_pr.o \
book3s_pr_papr.o \
-   book3s_64_vio_hv.o \
book3s_emulate.o \
book3s_interrupts.o \
book3s_mmu_hpte.o \
book3s_64_mmu_host.o \
book3s_64_mmu.o \
book3s_32_mmu.o
-kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_PR) := \
+
+kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_PR) += \
book3s_rmhandlers.o
 
 kvm-book3s_64-objs-$(CONFIG_KVM_BOOK3S_64_HV) := \
book3s_hv.o \
book3s_hv_interrupts.o \
book3s_64_mmu_hv.o
+
 kvm-book3s_64-builtin-xics-objs-$(CONFIG_KVM_XICS) := \
book3s_hv_rm_xics.o
-kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HV) := \
+
+kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HV) += \
book3s_hv_rmhandlers.o \
book3s_hv_rm_mmu.o \
-   book3s_64_vio_hv.o \
book3s_hv_ras.o \
book3s_hv_builtin.o \
book3s_hv_cma.o \
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c 
b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 30c2f3b..2c25f54 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -74,3 +74,4 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long 
liobn,
/* Didn't find the liobn, punt it to userspace */
return H_TOO_HARD;
 }
+EXPORT_SYMBOL_GPL(kvmppc_h_put_tce);
-- 
1.8.1.2



[PATCH -V2 14/14] kvm: powerpc: book3s: drop is_hv_enabled

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

Drop is_hv_enabled, because that should not be a callback property. Whether
HV is in use can instead be derived by comparing the VM's kvm_ops against
kvmppc_hv_ops, as the new is_kvmppc_hv_enabled() helper below does.

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_ppc.h | 6 +-
 arch/powerpc/kvm/book3s.c  | 6 +++---
 arch/powerpc/kvm/book3s_hv.c   | 1 -
 arch/powerpc/kvm/book3s_pr.c   | 1 -
 arch/powerpc/kvm/book3s_xics.c | 2 +-
 arch/powerpc/kvm/powerpc.c | 2 +-
 6 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index 3069cf4..c8317fb 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -183,7 +183,6 @@ union kvmppc_one_reg {
 
 struct kvmppc_ops {
struct module *owner;
-   bool is_hv_enabled;
int (*get_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
int (*set_sregs)(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
int (*get_one_reg)(struct kvm_vcpu *vcpu, u64 id,
@@ -232,6 +231,11 @@ struct kvmppc_ops {
 extern struct kvmppc_ops *kvmppc_hv_ops;
 extern struct kvmppc_ops *kvmppc_pr_ops;
 
+static inline bool is_kvmppc_hv_enabled(struct kvm *kvm)
+{
+   return kvm-arch.kvm_ops == kvmppc_hv_ops;
+}
+
 /*
  * Cuts out inst bits with ordering according to spec.
  * That means the leftmost bit is zero. All given bits are included.
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ad8f6ed..8912608 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -72,7 +72,7 @@ void kvmppc_core_load_guest_debugstate(struct kvm_vcpu *vcpu)
 
 static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
 {
-   if (!vcpu-kvm-arch.kvm_ops-is_hv_enabled)
+   if (!is_kvmppc_hv_enabled(vcpu-kvm))
return to_book3s(vcpu)-hior;
return 0;
 }
@@ -80,7 +80,7 @@ static inline unsigned long kvmppc_interrupt_offset(struct 
kvm_vcpu *vcpu)
 static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
unsigned long pending_now, unsigned long old_pending)
 {
-   if (vcpu-kvm-arch.kvm_ops-is_hv_enabled)
+   if (is_kvmppc_hv_enabled(vcpu-kvm))
return;
if (pending_now)
vcpu-arch.shared-int_pending = 1;
@@ -94,7 +94,7 @@ static inline bool kvmppc_critical_section(struct kvm_vcpu 
*vcpu)
ulong crit_r1;
bool crit;
 
-   if (vcpu-kvm-arch.kvm_ops-is_hv_enabled)
+   if (is_kvmppc_hv_enabled(vcpu-kvm))
return false;
 
crit_raw = vcpu-arch.shared-critical;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 31922d5..b5229eb 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2160,7 +2160,6 @@ static long kvm_arch_vm_ioctl_hv(struct file *filp,
 }
 
 static struct kvmppc_ops kvm_ops_hv = {
-   .is_hv_enabled = true,
.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
.get_one_reg = kvmppc_get_one_reg_hv,
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index fbd985f..df36cf2 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1526,7 +1526,6 @@ static long kvm_arch_vm_ioctl_pr(struct file *filp,
 }
 
 static struct kvmppc_ops kvm_ops_pr = {
-   .is_hv_enabled = false,
.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_pr,
.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_pr,
.get_one_reg = kvmppc_get_one_reg_pr,
diff --git a/arch/powerpc/kvm/book3s_xics.c b/arch/powerpc/kvm/book3s_xics.c
index 76ef525..20d56ec 100644
--- a/arch/powerpc/kvm/book3s_xics.c
+++ b/arch/powerpc/kvm/book3s_xics.c
@@ -818,7 +818,7 @@ int kvmppc_xics_hcall(struct kvm_vcpu *vcpu, u32 req)
}
 
/* Check for real mode returning too hard */
-   if (xics-real_mode  vcpu-kvm-arch.kvm_ops-is_hv_enabled)
+   if (xics-real_mode  is_kvmppc_hv_enabled(vcpu-kvm))
return kvmppc_xics_rm_complete(vcpu, req);
 
switch (req) {
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 00a995a..058f9d6 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -200,7 +200,7 @@ int kvmppc_sanity_check(struct kvm_vcpu *vcpu)
goto out;
 
/* HV KVM can only do PAPR mode for now */
-   if (!vcpu-arch.papr_enabled  vcpu-kvm-arch.kvm_ops-is_hv_enabled)
+   if (!vcpu-arch.papr_enabled  is_kvmppc_hv_enabled(vcpu-kvm))
goto out;
 
 #ifdef CONFIG_KVM_BOOKE_HV
-- 
1.8.1.2



[PATCH 0/2 v2] kvm/powerpc: hypercall related cleanup

2013-10-07 Thread Bharat Bhushan
From: Bharat Bhushan bharat.bhus...@freescale.com

This patchset does some code cleanup around kvm-hypercall.

v1->v2:
 - Addressed review comments on the previous patches. More information is
   available in the individual patches.

Bharat Bhushan (2):
  kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()
  kvm/powerpc: move kvm_hypercall0() and friends to epapr_hypercall0()

 arch/powerpc/include/asm/epapr_hcalls.h |  111 +++
 arch/powerpc/include/asm/kvm_para.h |   80 +--
 arch/powerpc/kernel/kvm.c   |   41 +---
 3 files changed, 114 insertions(+), 118 deletions(-)




[PATCH 1/2 v2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Bharat Bhushan
kvm_hypercall() has nothing KVM specific, so it is renamed to epapr_hypercall().
It is also moved to arch/powerpc/include/asm/epapr_hcalls.h.

Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
---
v1->v2
 - epapr_hypercall() is always defined and returns EV_UNIMPLEMENTED
   when neither CONFIG_KVM_GUEST nor CONFIG_EPAPR_PARAVIRT is defined.

 arch/powerpc/include/asm/epapr_hcalls.h |   46 +++
 arch/powerpc/include/asm/kvm_para.h |   23 ---
 arch/powerpc/kernel/kvm.c   |   41 +--
 3 files changed, 54 insertions(+), 56 deletions(-)

diff --git a/arch/powerpc/include/asm/epapr_hcalls.h 
b/arch/powerpc/include/asm/epapr_hcalls.h
index d3d6342..6b8e007 100644
--- a/arch/powerpc/include/asm/epapr_hcalls.h
+++ b/arch/powerpc/include/asm/epapr_hcalls.h
@@ -454,5 +454,51 @@ static inline unsigned int ev_idle(void)
 
return r3;
 }
+
+#if defined(CONFIG_KVM_GUEST) || defined(CONFIG_EPAPR_PARAVIRT)
+static inline unsigned long epapr_hypercall(unsigned long *in,
+   unsigned long *out,
+   unsigned long nr)
+{
+   unsigned long register r0 asm("r0");
+   unsigned long register r3 asm("r3") = in[0];
+   unsigned long register r4 asm("r4") = in[1];
+   unsigned long register r5 asm("r5") = in[2];
+   unsigned long register r6 asm("r6") = in[3];
+   unsigned long register r7 asm("r7") = in[4];
+   unsigned long register r8 asm("r8") = in[5];
+   unsigned long register r9 asm("r9") = in[6];
+   unsigned long register r10 asm("r10") = in[7];
+   unsigned long register r11 asm("r11") = nr;
+   unsigned long register r12 asm("r12");
+
+   asm volatile("bl epapr_hypercall_start"
+        : "=r"(r0), "=r"(r3), "=r"(r4), "=r"(r5), "=r"(r6),
+          "=r"(r7), "=r"(r8), "=r"(r9), "=r"(r10), "=r"(r11),
+          "=r"(r12)
+        : "r"(r3), "r"(r4), "r"(r5), "r"(r6), "r"(r7), "r"(r8),
+          "r"(r9), "r"(r10), "r"(r11)
+        : "memory", "cc", "xer", "ctr", "lr");
+
+   out[0] = r4;
+   out[1] = r5;
+   out[2] = r6;
+   out[3] = r7;
+   out[4] = r8;
+   out[5] = r9;
+   out[6] = r10;
+   out[7] = r11;
+
+   return r3;
+}
+#else
+static unsigned long epapr_hypercall(unsigned long *in,
+  unsigned long *out,
+  unsigned long nr)
+{
+   return EV_UNIMPLEMENTED;
+}
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* _EPAPR_HCALLS_H */
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 2b11965..c18660e 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -39,10 +39,6 @@ static inline int kvm_para_available(void)
return 1;
 }
 
-extern unsigned long kvm_hypercall(unsigned long *in,
-  unsigned long *out,
-  unsigned long nr);
-
 #else
 
 static inline int kvm_para_available(void)
@@ -50,13 +46,6 @@ static inline int kvm_para_available(void)
return 0;
 }
 
-static unsigned long kvm_hypercall(unsigned long *in,
-  unsigned long *out,
-  unsigned long nr)
-{
-   return EV_UNIMPLEMENTED;
-}
-
 #endif
 
 static inline long kvm_hypercall0_1(unsigned int nr, unsigned long *r2)
@@ -65,7 +54,7 @@ static inline long kvm_hypercall0_1(unsigned int nr, unsigned long *r2)
unsigned long out[8];
unsigned long r;
 
-   r = kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   r = epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
*r2 = out[0];
 
return r;
@@ -76,7 +65,7 @@ static inline long kvm_hypercall0(unsigned int nr)
unsigned long in[8];
unsigned long out[8];
 
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
 }
 
 static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
@@ -85,7 +74,7 @@ static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
unsigned long out[8];
 
in[0] = p1;
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
 }
 
 static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
@@ -96,7 +85,7 @@ static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
 
in[0] = p1;
in[1] = p2;
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
 }
 
 static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
@@ -108,7 +97,7 @@ static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
in[0] = p1;
in[1] = p2;
in[2] = p3;
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
 }
 
 

[PATCH 2/2 v2] kvm/powerpc: move kvm_hypercall0() and friends to epapr_hypercall0()

2013-10-07 Thread Bharat Bhushan
kvm_hypercall0() and friends have nothing KVM specific, so rename them to
epapr_hypercall0() and friends. Also move them from
arch/powerpc/include/asm/kvm_para.h to arch/powerpc/include/asm/epapr_hcalls.h.
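
For illustration, a hedged sketch (not part of the patch) of what a KVM-guest call
site looks like after the move: the caller applies KVM_HCALL_TOKEN() itself and uses
the generic ePAPR wrapper. The function name is made up for the example.

static inline long example_kvm_get_features(unsigned long *r)
{
	/* KVM_HCALL_TOKEN() tags the hcall number with the KVM vendor ID */
	return epapr_hypercall0_1(KVM_HCALL_TOKEN(KVM_HC_FEATURES), r);
}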

Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
---
 v1->v2
 - No change

 arch/powerpc/include/asm/epapr_hcalls.h |   65 +
 arch/powerpc/include/asm/kvm_para.h |   69 +--
 2 files changed, 66 insertions(+), 68 deletions(-)

diff --git a/arch/powerpc/include/asm/epapr_hcalls.h b/arch/powerpc/include/asm/epapr_hcalls.h
index 6b8e007..223297e 100644
--- a/arch/powerpc/include/asm/epapr_hcalls.h
+++ b/arch/powerpc/include/asm/epapr_hcalls.h
@@ -500,5 +500,70 @@ static unsigned long epapr_hypercall(unsigned long *in,
 }
 #endif
 
+static inline long epapr_hypercall0_1(unsigned int nr, unsigned long *r2)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+   unsigned long r;
+
+   r = epapr_hypercall(in, out, nr);
+   *r2 = out[0];
+
+   return r;
+}
+
+static inline long epapr_hypercall0(unsigned int nr)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall1(unsigned int nr, unsigned long p1)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall2(unsigned int nr, unsigned long p1,
+   unsigned long p2)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   in[1] = p2;
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall3(unsigned int nr, unsigned long p1,
+   unsigned long p2, unsigned long p3)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   in[1] = p2;
+   in[2] = p3;
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall4(unsigned int nr, unsigned long p1,
+   unsigned long p2, unsigned long p3,
+   unsigned long p4)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   in[1] = p2;
+   in[2] = p3;
+   in[3] = p4;
+   return epapr_hypercall(in, out, nr);
+}
 #endif /* !__ASSEMBLY__ */
 #endif /* _EPAPR_HCALLS_H */
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index c18660e..336a91a 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -48,73 +48,6 @@ static inline int kvm_para_available(void)
 
 #endif
 
-static inline long kvm_hypercall0_1(unsigned int nr, unsigned long *r2)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-   unsigned long r;
-
-   r = epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-   *r2 = out[0];
-
-   return r;
-}
-
-static inline long kvm_hypercall0(unsigned int nr)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
- unsigned long p2)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   in[1] = p2;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
- unsigned long p2, unsigned long p3)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   in[1] = p2;
-   in[2] = p3;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
- unsigned long p2, unsigned long p3,
- unsigned long p4)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   in[1] = p2;
-   in[2] = p3;
-   in[3] = p4;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-
 static inline unsigned int kvm_arch_para_features(void)
 {
unsigned long r;
@@ -122,7 +55,7 @@ static inline unsigned int kvm_arch_para_features(void)
if (!kvm_para_available())
return 0;
 
-   if(kvm_hypercall0_1(KVM_HC_FEATURES, &r))
+   if(epapr_hypercall0_1(KVM_HCALL_TOKEN(KVM_HC_FEATURES), &r))
return 0;
 
return r;
-- 
1.7.0.4


--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

[PATCH -V2 07/14] kvm: powerpc: book3s: Cleanup interrupt handling code

2013-10-07 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

With this patch, if HV is included, interrupts come into the HV version
of the kvmppc_interrupt code, which then jumps to the PR handler,
renamed to kvmppc_interrupt_pr, if the guest is a PR guest. This helps
in enabling both HV and PR, which we do in a later patch.
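
As a rough C-level illustration of that flow (the real code in this patch is
assembly): hstate_in_guest() and handle_hv_exit() below are made-up stand-ins for
the HSTATE_IN_GUEST(r13) load and the remaining HV exit path, and the
bad-host-interrupt check is omitted for brevity.

void kvmppc_interrupt(void)	/* aliased to kvmppc_interrupt_hv when HV is built in */
{
	if (hstate_in_guest() == KVM_GUEST_MODE_GUEST)
		kvmppc_interrupt_pr();	/* a PR guest was running: hand off to the PR handler */
	else
		handle_hv_exit();	/* otherwise continue on the HV exit path */
}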

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/exception-64s.h | 11 +++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  |  9 +++--
 arch/powerpc/kvm/book3s_segment.S|  4 ++--
 3 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index fe1c62d..76d326e 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -197,6 +197,17 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
cmpwi   r10,0;  \
bne do_kvm_##n
 
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+/*
+ * If hv is possible, interrupts come into the hv version
+ * of the kvmppc_interrupt code, which then jumps to the PR handler,
+ * kvmppc_interrupt_pr, if the guest is a PR guest.
+ */
+#define kvmppc_interrupt kvmppc_interrupt_hv
+#else
+#define kvmppc_interrupt kvmppc_interrupt_pr
+#endif
+
 #define __KVM_HANDLER(area, h, n)  \
 do_kvm_##n:\
BEGIN_FTR_SECTION_NESTED(947)   \
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index f1f1bf3..55e4a01 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -734,8 +734,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 /*
  * We come here from the first-level interrupt handlers.
  */
-   .globl  kvmppc_interrupt
-kvmppc_interrupt:
+   .globl  kvmppc_interrupt_hv
+kvmppc_interrupt_hv:
/*
 * Register contents:
 * R12  = interrupt vector
@@ -749,6 +749,11 @@ kvmppc_interrupt:
lbz r9, HSTATE_IN_GUEST(r13)
cmpwi   r9, KVM_GUEST_MODE_HOST_HV
beq kvmppc_bad_host_intr
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
+   cmpwi   r9, KVM_GUEST_MODE_GUEST
+   ld  r9, HSTATE_HOST_R2(r13)
+   beq kvmppc_interrupt_pr
+#endif
/* We're now back in the host but in guest MMU context */
li  r9, KVM_GUEST_MODE_HOST_HV
stb r9, HSTATE_IN_GUEST(r13)
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..bc50c97 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -161,8 +161,8 @@ kvmppc_handler_trampoline_enter_end:
 .global kvmppc_handler_trampoline_exit
 kvmppc_handler_trampoline_exit:
 
-.global kvmppc_interrupt
-kvmppc_interrupt:
+.global kvmppc_interrupt_pr
+kvmppc_interrupt_pr:
 
/* Register usage at this point:
 *
-- 
1.8.1.2

--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/2 v2] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Scott Wood
On Mon, 2013-10-07 at 22:23 +0530, Bharat Bhushan wrote:
 kvm_hypercall() have nothing KVM specific, so renamed to epapr_hypercall().
 Also this in moved to arch/powerpc/include/asm/epapr_hcalls.h
 
 Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
 ---
 v1-v2
  - epapr_hypercall() is always defined and returns EV_UNIMPLEMENTED
when CONFIG_KVM_GUEST or CONFIG_EPAPR_PARAVIRT not defined.
 
 arch/powerpc/include/asm/epapr_hcalls.h |   46 +++
  arch/powerpc/include/asm/kvm_para.h |   23 ---
  arch/powerpc/kernel/kvm.c   |   41 +--
  3 files changed, 54 insertions(+), 56 deletions(-)
 
 diff --git a/arch/powerpc/include/asm/epapr_hcalls.h b/arch/powerpc/include/asm/epapr_hcalls.h
 index d3d6342..6b8e007 100644
 --- a/arch/powerpc/include/asm/epapr_hcalls.h
 +++ b/arch/powerpc/include/asm/epapr_hcalls.h
 @@ -454,5 +454,51 @@ static inline unsigned int ev_idle(void)
  
   return r3;
  }
 +
 +#if defined(CONFIG_KVM_GUEST) || defined(CONFIG_EPAPR_PARAVIRT)

CONFIG_KVM_GUEST implies CONFIG_EPAPR_PARAVIRT, so you only need to
check for the latter.

-Scott
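
A compile-time way to state Scott's point (illustration only; it assumes KVM_GUEST
selects EPAPR_PARAVIRT in Kconfig, which is what makes the implication hold):

#if defined(CONFIG_KVM_GUEST) && !defined(CONFIG_EPAPR_PARAVIRT)
#error "unexpected: CONFIG_KVM_GUEST set without CONFIG_EPAPR_PARAVIRT"
#endif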



--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 1/2 v3] kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()

2013-10-07 Thread Bharat Bhushan
kvm_hypercall() has nothing KVM specific, so rename it to epapr_hypercall().
Also move it to arch/powerpc/include/asm/epapr_hcalls.h.

Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
---
v2->v3
 - CONFIG_KVM_GUEST implies CONFIG_EPAPR_PARAVIRT, so check only
   CONFIG_EPAPR_PARAVIRT

v1->v2
 - epapr_hypercall() is always defined and returns EV_UNIMPLEMENTED
   when CONFIG_KVM_GUEST or CONFIG_EPAPR_PARAVIRT is not defined.

 arch/powerpc/include/asm/epapr_hcalls.h |   46 +++
 arch/powerpc/include/asm/kvm_para.h |   23 ---
 arch/powerpc/kernel/kvm.c   |   41 +--
 3 files changed, 54 insertions(+), 56 deletions(-)

diff --git a/arch/powerpc/include/asm/epapr_hcalls.h b/arch/powerpc/include/asm/epapr_hcalls.h
index d3d6342..8fc25fc 100644
--- a/arch/powerpc/include/asm/epapr_hcalls.h
+++ b/arch/powerpc/include/asm/epapr_hcalls.h
@@ -454,5 +454,51 @@ static inline unsigned int ev_idle(void)
 
return r3;
 }
+
+#ifdef CONFIG_EPAPR_PARAVIRT
+static inline unsigned long epapr_hypercall(unsigned long *in,
+   unsigned long *out,
+   unsigned long nr)
+{
+   unsigned long register r0 asm("r0");
+   unsigned long register r3 asm("r3") = in[0];
+   unsigned long register r4 asm("r4") = in[1];
+   unsigned long register r5 asm("r5") = in[2];
+   unsigned long register r6 asm("r6") = in[3];
+   unsigned long register r7 asm("r7") = in[4];
+   unsigned long register r8 asm("r8") = in[5];
+   unsigned long register r9 asm("r9") = in[6];
+   unsigned long register r10 asm("r10") = in[7];
+   unsigned long register r11 asm("r11") = nr;
+   unsigned long register r12 asm("r12");
+
+   asm volatile("bl epapr_hypercall_start"
+        : "=r"(r0), "=r"(r3), "=r"(r4), "=r"(r5), "=r"(r6),
+          "=r"(r7), "=r"(r8), "=r"(r9), "=r"(r10), "=r"(r11),
+          "=r"(r12)
+        : "r"(r3), "r"(r4), "r"(r5), "r"(r6), "r"(r7), "r"(r8),
+          "r"(r9), "r"(r10), "r"(r11)
+        : "memory", "cc", "xer", "ctr", "lr");
+
+   out[0] = r4;
+   out[1] = r5;
+   out[2] = r6;
+   out[3] = r7;
+   out[4] = r8;
+   out[5] = r9;
+   out[6] = r10;
+   out[7] = r11;
+
+   return r3;
+}
+#else
+static unsigned long epapr_hypercall(unsigned long *in,
+  unsigned long *out,
+  unsigned long nr)
+{
+   return EV_UNIMPLEMENTED;
+}
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* _EPAPR_HCALLS_H */
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 2b11965..c18660e 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -39,10 +39,6 @@ static inline int kvm_para_available(void)
return 1;
 }
 
-extern unsigned long kvm_hypercall(unsigned long *in,
-  unsigned long *out,
-  unsigned long nr);
-
 #else
 
 static inline int kvm_para_available(void)
@@ -50,13 +46,6 @@ static inline int kvm_para_available(void)
return 0;
 }
 
-static unsigned long kvm_hypercall(unsigned long *in,
-  unsigned long *out,
-  unsigned long nr)
-{
-   return EV_UNIMPLEMENTED;
-}
-
 #endif
 
 static inline long kvm_hypercall0_1(unsigned int nr, unsigned long *r2)
@@ -65,7 +54,7 @@ static inline long kvm_hypercall0_1(unsigned int nr, unsigned long *r2)
unsigned long out[8];
unsigned long r;
 
-   r = kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   r = epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
*r2 = out[0];
 
return r;
@@ -76,7 +65,7 @@ static inline long kvm_hypercall0(unsigned int nr)
unsigned long in[8];
unsigned long out[8];
 
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
 }
 
 static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
@@ -85,7 +74,7 @@ static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
unsigned long out[8];
 
in[0] = p1;
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
 }
 
 static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
@@ -96,7 +85,7 @@ static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
 
in[0] = p1;
in[1] = p2;
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
 }
 
 static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
@@ -108,7 +97,7 @@ static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
in[0] = p1;
in[1] = p2;
in[2] = p3;
-   return kvm_hypercall(in, out, KVM_HCALL_TOKEN(nr));
+   

[PATCH 2/2 v3] kvm/powerpc: move kvm_hypercall0() and friends to epapr_hypercall0()

2013-10-07 Thread Bharat Bhushan
kvm_hypercall0() and friends have nothing KVM specific, so rename them to
epapr_hypercall0() and friends. Also move them from
arch/powerpc/include/asm/kvm_para.h to arch/powerpc/include/asm/epapr_hcalls.h.

Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
---
 v1->v2->v3
 - No change

 arch/powerpc/include/asm/epapr_hcalls.h |   65 +
 arch/powerpc/include/asm/kvm_para.h |   69 +--
 2 files changed, 66 insertions(+), 68 deletions(-)

diff --git a/arch/powerpc/include/asm/epapr_hcalls.h b/arch/powerpc/include/asm/epapr_hcalls.h
index 8fc25fc..a691fff 100644
--- a/arch/powerpc/include/asm/epapr_hcalls.h
+++ b/arch/powerpc/include/asm/epapr_hcalls.h
@@ -500,5 +500,70 @@ static unsigned long epapr_hypercall(unsigned long *in,
 }
 #endif
 
+static inline long epapr_hypercall0_1(unsigned int nr, unsigned long *r2)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+   unsigned long r;
+
+   r = epapr_hypercall(in, out, nr);
+   *r2 = out[0];
+
+   return r;
+}
+
+static inline long epapr_hypercall0(unsigned int nr)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall1(unsigned int nr, unsigned long p1)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall2(unsigned int nr, unsigned long p1,
+   unsigned long p2)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   in[1] = p2;
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall3(unsigned int nr, unsigned long p1,
+   unsigned long p2, unsigned long p3)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   in[1] = p2;
+   in[2] = p3;
+   return epapr_hypercall(in, out, nr);
+}
+
+static inline long epapr_hypercall4(unsigned int nr, unsigned long p1,
+   unsigned long p2, unsigned long p3,
+   unsigned long p4)
+{
+   unsigned long in[8];
+   unsigned long out[8];
+
+   in[0] = p1;
+   in[1] = p2;
+   in[2] = p3;
+   in[3] = p4;
+   return epapr_hypercall(in, out, nr);
+}
 #endif /* !__ASSEMBLY__ */
 #endif /* _EPAPR_HCALLS_H */
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index c18660e..336a91a 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -48,73 +48,6 @@ static inline int kvm_para_available(void)
 
 #endif
 
-static inline long kvm_hypercall0_1(unsigned int nr, unsigned long *r2)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-   unsigned long r;
-
-   r = epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-   *r2 = out[0];
-
-   return r;
-}
-
-static inline long kvm_hypercall0(unsigned int nr)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
- unsigned long p2)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   in[1] = p2;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
- unsigned long p2, unsigned long p3)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   in[1] = p2;
-   in[2] = p3;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
- unsigned long p2, unsigned long p3,
- unsigned long p4)
-{
-   unsigned long in[8];
-   unsigned long out[8];
-
-   in[0] = p1;
-   in[1] = p2;
-   in[2] = p3;
-   in[3] = p4;
-   return epapr_hypercall(in, out, KVM_HCALL_TOKEN(nr));
-}
-
-
 static inline unsigned int kvm_arch_para_features(void)
 {
unsigned long r;
@@ -122,7 +55,7 @@ static inline unsigned int kvm_arch_para_features(void)
if (!kvm_para_available())
return 0;
 
-   if(kvm_hypercall0_1(KVM_HC_FEATURES, &r))
+   if(epapr_hypercall0_1(KVM_HCALL_TOKEN(KVM_HC_FEATURES), &r))
return 0;
 
return r;
-- 
1.7.0.4


--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

[PATCH 0/2 v3] kvm/powerpc: hypercall related cleanup

2013-10-07 Thread Bharat Bhushan
From: Bharat Bhushan bharat.bhus...@freescale.com

This patchset does some code cleanup around the KVM hypercall code.

v2->v3:
 - Individual patch changelogs have the details.

v1->v2:
 - Addressed review comments on the previous patch. More information is available
in the individual patches.

Bharat Bhushan (2):
  kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()
  kvm/powerpc: move kvm_hypercall0() and friends to epapr_hypercall0()

 arch/powerpc/include/asm/epapr_hcalls.h |  111 +++
 arch/powerpc/include/asm/kvm_para.h |   80 +--
 arch/powerpc/kernel/kvm.c   |   41 +---
 3 files changed, 114 insertions(+), 118 deletions(-)


--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html