RE: [PATCH v9 13/16] ARM: KVM: Emulation framework and CP15 emulation

2012-07-17 Thread Min-gyu Kim
Why does the cache operation need to happen on the same CPU when the L1 caches
between cores are coherent?

As you know, cache operations usually operate on a range, and the iteration runs
without disabling preemption. Therefore, even though you enclose vcpu_run and
handle_exit in a preemption-disabled section, an operation on a range can run
across several cores.

If data at address X is written from CPU0 and CPU1 executes a cache flush on
address X while those two CPUs share the same TTBR, does the cache operation fail?
It seems to succeed according to the ARM document I referred to before, and that
is why I think disabling preemption is unnecessary.

Regards
Kim, Min-gyu



-----Original Message-----


 And you gave CPU-specific data, such as caches, as the reason for disabling
 preemption. But as far as I know, the L1 caches are coherent.
 (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438e/BABFHDFE.html)

 Can you please explain in detail why disabling preemption is necessary?


If a VM performs a cache maintenance operation specific to that CPU and the
operation traps, we want to make sure that the emulation of such an operation
happens on the same physical CPU, to ensure correct semantics.

-Christoffer

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v9 13/16] ARM: KVM: Emulation framework and CP15 emulation

2012-07-16 Thread Christoffer Dall
On Thu, Jul 12, 2012 at 7:35 AM, 김민규 <mingyu84@samsung.com> wrote:
 +   /*
 +* Make sure preemption is disabled while calling handle_exit
 +* as exit handling touches CPU-specific resources, such as
 +* caches, and we must stay on the same CPU.
 +*
 +* Code that might sleep must disable preemption and access
 +* CPU-specific resources first.
 +*/


 What does this comment mean? I especially cannot understand the last sentence,
 because it is not possible to sleep while preemption is disabled.

it's a mistake; it should read "Code that might sleep must enable preemption"


 And you gave CPU-specific data, such as caches, as the reason for disabling
 preemption. But as far as I know, the L1 caches are coherent.
 (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438e/BABFHDFE.html)

 Can you please explain in detail why disabling preemption is necessary?


If a VM performs a cache maintenance operation specific to that CPU and the
operation traps, we want to make sure that the emulation of such an operation
happens on the same physical CPU, to ensure correct semantics.

-Christoffer


RE: [PATCH v9 13/16] ARM: KVM: Emulation framework and CP15 emulation

2012-07-11 Thread 김민규
+   /*
+* Make sure preemption is disabled while calling handle_exit
+* as exit handling touches CPU-specific resources, such as
+* caches, and we must stay on the same CPU.
+*
+* Code that might sleep must disable preemption and access
+* CPU-specific resources first.
+*/


What does this comment mean? I especially cannot understand the last sentence,
because it is not possible to sleep while preemption is disabled.

And you gave CPU-specific data, such as caches, as the reason for disabling
preemption. But as far as I know, the L1 caches are coherent.
(http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438e/BABFHDFE.html)

Can you please explain in detail why disabling preemption is necessary?


Regards,
Kim, Min-gyu


-----Original Message-----
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf Of 
Christoffer Dall
Sent: Tuesday, July 03, 2012 6:02 PM
To: android-v...@lists.cs.columbia.edu; kvm@vger.kernel.org
Cc: t...@virtualopensystems.com
Subject: [PATCH v9 13/16] ARM: KVM: Emulation framework and CP15 emulation

Adds an important new function, handle_exit(), to the main KVM/ARM code; it is
called from kvm_arch_vcpu_ioctl_run() on returns from guest execution. This
function examines the Hyp Syndrome Register (HSR), which contains information
telling KVM what caused the exit from the guest.

Some of the exit reasons are CP15 accesses, which are not allowed from the
guest; this commit handles these exits by emulating the intended operation in
software and skipping the guest instruction.

Signed-off-by: Christoffer Dall <c.d...@virtualopensystems.com>
---
 arch/arm/include/asm/kvm_arm.h |5 
 arch/arm/include/asm/kvm_emulate.h |   10 +
 arch/arm/include/asm/kvm_host.h|3 
 arch/arm/kvm/arm.c |  114 +
 arch/arm/kvm/emulate.c |  455 
 arch/arm/kvm/trace.h   |   28 ++
 6 files changed, 614 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h 
index 232117c..0d1e895 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -77,6 +77,11 @@
HCR_SWIO)
 #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
 
+/* System Control Register (SCTLR) bits */
+#define SCTLR_TE   (1 << 30)
+#define SCTLR_EE   (1 << 25)
+#define SCTLR_V    (1 << 13)
+
 /* Hyp System Control Register (HSCTLR) bits */
 #define HSCTLR_TE  (1 << 30)
 #define HSCTLR_EE  (1 << 25)
diff --git a/arch/arm/include/asm/kvm_emulate.h 
b/arch/arm/include/asm/kvm_emulate.h
index 9e29335..f2e973c 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -51,6 +51,16 @@ static inline enum vcpu_mode vcpu_mode(struct kvm_vcpu *vcpu)
return mode;
 }
 
+int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run);
+void kvm_adjust_itstate(struct kvm_vcpu *vcpu);
+void kvm_inject_undefined(struct kvm_vcpu *vcpu);
+
 /*
  * Return the SPSR for the specified mode of the virtual CPU.
  */
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h 
index f6b4c02..c58865b 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -112,6 +112,9 @@ struct kvm_vcpu_arch {
u64 pc_ipa2;/* same as above, but for non-aligned wide thumb
   instructions */
 
+   /* dcache set/way operation pending */
+   cpumask_t require_dcache_flush;
+
/* IO related fields */
bool mmio_sign_extend;  /* for byte/halfword loads */
u32 mmio_rd;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 4687690..5e6465b 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -37,6 +37,7 @@
 #include <asm/mman.h>
 #include <asm/idmap.h>
 #include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
 #include <asm/cputype.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -271,6 +272,15 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
	vcpu->cpu = cpu;
+
+   /*
+* Check whether this vcpu requires the cache to be flushed on
+* this physical CPU. This is a consequence of doing dcache
+* operations by set/way on this vcpu. We do it here in order
+* to be in a non-preemptible section

[PATCH v9 13/16] ARM: KVM: Emulation framework and CP15 emulation

2012-07-03 Thread Christoffer Dall
Adds an important new function, handle_exit(), to the main KVM/ARM code; it is
called from kvm_arch_vcpu_ioctl_run() on returns from guest execution. This
function examines the Hyp Syndrome Register (HSR), which contains information
telling KVM what caused the exit from the guest.

Some of the exit reasons are CP15 accesses, which are not allowed from the
guest; this commit handles these exits by emulating the intended operation in
software and skipping the guest instruction.

Signed-off-by: Christoffer Dall <c.d...@virtualopensystems.com>
---
 arch/arm/include/asm/kvm_arm.h |5 
 arch/arm/include/asm/kvm_emulate.h |   10 +
 arch/arm/include/asm/kvm_host.h|3 
 arch/arm/kvm/arm.c |  114 +
 arch/arm/kvm/emulate.c |  455 
 arch/arm/kvm/trace.h   |   28 ++
 6 files changed, 614 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index 232117c..0d1e895 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -77,6 +77,11 @@
HCR_SWIO)
 #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
 
+/* System Control Register (SCTLR) bits */
+#define SCTLR_TE   (1 << 30)
+#define SCTLR_EE   (1 << 25)
+#define SCTLR_V    (1 << 13)
+
 /* Hyp System Control Register (HSCTLR) bits */
 #define HSCTLR_TE  (1 << 30)
 #define HSCTLR_EE  (1 << 25)
diff --git a/arch/arm/include/asm/kvm_emulate.h 
b/arch/arm/include/asm/kvm_emulate.h
index 9e29335..f2e973c 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -51,6 +51,16 @@ static inline enum vcpu_mode vcpu_mode(struct kvm_vcpu *vcpu)
return mode;
 }
 
+int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run);
+void kvm_adjust_itstate(struct kvm_vcpu *vcpu);
+void kvm_inject_undefined(struct kvm_vcpu *vcpu);
+
 /*
  * Return the SPSR for the specified mode of the virtual CPU.
  */
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f6b4c02..c58865b 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -112,6 +112,9 @@ struct kvm_vcpu_arch {
u64 pc_ipa2;/* same as above, but for non-aligned wide thumb
   instructions */
 
+   /* dcache set/way operation pending */
+   cpumask_t require_dcache_flush;
+
/* IO related fields */
bool mmio_sign_extend;  /* for byte/halfword loads */
u32 mmio_rd;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 4687690..5e6465b 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -37,6 +37,7 @@
 #include <asm/mman.h>
 #include <asm/idmap.h>
 #include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
 #include <asm/cputype.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -271,6 +272,15 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
	vcpu->cpu = cpu;
+
+   /*
+* Check whether this vcpu requires the cache to be flushed on
+* this physical CPU. This is a consequence of doing dcache
+* operations by set/way on this vcpu. We do it here in order
+* to be in a non-preemptible section.
+*/
+	if (cpumask_test_and_clear_cpu(cpu, &vcpu->arch.require_dcache_flush))
+		flush_cache_all(); /* We'd really want v7_flush_dcache_all() */
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
@@ -375,6 +385,69 @@ static void update_vttbr(struct kvm *kvm)
	spin_unlock(&kvm_vmid_lock);
 }
 
+static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+   /* SVC called from Hyp mode should never get here */
+	kvm_debug("SVC called from Hyp mode shouldn't go here\n");
+   BUG();
+   return -EINVAL; /* Squash warning */
+}
+
+static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+   /*
+* Guest called HVC instruction:
+* Let it know we don't want that by injecting an undefined exception.
+*/
+	kvm_debug("hvc: %x (at %08x)", vcpu->arch.hsr & ((1 << 16) - 1),
+		  vcpu->arch.regs.pc);
+	kvm_debug("         HSR: %8x", vcpu->arch.hsr);
+   kvm_inject_undefined(vcpu);
+   return 0;
+}
+
+static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+   /* We don't support SMC; don't do that. */
+	kvm_debug("smc: at %08x", vcpu->arch.regs.pc);
+   return -EINVAL;