Re: [RFC PATCH 0/2] ARM: KVM: Moving GIC/timer out of arch/arm

2013-05-03 Thread Anup Patel
On Fri, May 3, 2013 at 7:32 PM, Marc Zyngier  wrote:
> As KVM/arm64 is looming on the horizon, it makes sense to move some
> of the common code to a single location in order to reduce duplication.
>
> The code could live anywhere. Actually, most of KVM is already built
> with a bunch of ugly ../../.. hacks in the various Makefiles, so we're
> not exactly talking about style here. But maybe it is time to start
> moving into a less ugly direction.
>
> The include files must be in a "public" location, as they are accessed
> from non-KVM files (arch/arm/kernel/asm-offsets.c).
>
> For this purpose, introduce two new locations:
> - virt/kvm/arm/ : x86 and ia64 already share the ioapic code in
>   virt/kvm, so this could be seen as a (very ugly) precedent.
> - include/kvm/  : there is already an include/xen, and while the
>   intent is slightly different, this seems as good a location as
>   any
>
> Once the code has been moved, it becomes easy to build it in a
> less hackish way, which makes the code easily reusable by KVM/arm64.
>
> Marc Zyngier (2):
>   ARM: KVM: move GIC/timer code to a common location
>   ARM: KVM: standalone Makefile for vgic and timers
>
>  Makefile   | 2 +-
>  arch/arm/include/asm/kvm_host.h| 4 ++--
>  arch/arm/kvm/Makefile  | 5 ++---
>  {arch/arm/include/asm => include/kvm}/kvm_arch_timer.h | 0
>  {arch/arm/include/asm => include/kvm}/kvm_vgic.h   | 0
>  virt/Makefile  | 1 +
>  virt/kvm/Makefile  | 1 +
>  virt/kvm/arm/Makefile  | 2 ++
>  {arch/arm/kvm => virt/kvm/arm}/arch_timer.c| 4 ++--
>  {arch/arm/kvm => virt/kvm/arm}/vgic.c  | 0
>  10 files changed, 11 insertions(+), 8 deletions(-)
>  rename {arch/arm/include/asm => include/kvm}/kvm_arch_timer.h (100%)
>  rename {arch/arm/include/asm => include/kvm}/kvm_vgic.h (100%)
>  create mode 100644 virt/Makefile
>  create mode 100644 virt/kvm/Makefile
>  create mode 100644 virt/kvm/arm/Makefile
>  rename {arch/arm/kvm => virt/kvm/arm}/arch_timer.c (99%)
>  rename {arch/arm/kvm => virt/kvm/arm}/vgic.c (100%)
>
> --
> 1.8.2.1
>
>
>
> ___
> kvmarm mailing list
> kvm...@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm

The source files arch/arm/kvm/arm.c and arch/arm/kvm/mmu.c are also
shared between KVM ARM and KVM ARM64.

Can we also move these files to virt/arm?

--Anup


Re: [RFC PATCH 04/11] kvm tools: console: unconditionally output to any console

2013-05-06 Thread Anup Patel
On Tue, May 7, 2013 at 2:34 AM, Sasha Levin  wrote:
> On 05/03/2013 12:09 PM, Will Deacon wrote:
>> On Fri, May 03, 2013 at 05:02:14PM +0100, Sasha Levin wrote:
>>> On 05/03/2013 05:19 AM, Pekka Enberg wrote:
 On Wed, May 1, 2013 at 6:50 PM, Will Deacon  wrote:
> From: Marc Zyngier 
>
> Kvmtool suppresses any output to a console that has not been elected
> as *the* console.
>
> While this makes sense on the input side (we want the input to be sent
> to one console driver only), it seems to be the wrong thing to do on
> the output side, as it effectively prevents the guest from switching
> from one console to another (think earlyprintk using 8250 to virtio
> console).
>
> After all, the guest *does* poke this device and outputs something
> there.
>
> Just remove the kvm->cfg.active_console test from the output paths.
>
> Signed-off-by: Marc Zyngier 
> Signed-off-by: Will Deacon 

 Seems reasonable. Asias, Sasha?

>>>
>>> I remember trying it some time ago but dropped it for a reason I don't
>>> remember at the moment.
>>>
>>> Can I have the weekend to play with it to try and figure out why?
>>
>> There's no rush from my point of view (hence the RFC) so take as long as you
>> need!
>
> Looks good to me!
>
>
> Thanks,
> Sasha
>

I am fine with having the 8250 emulated by KVMTOOL, but I am more inclined
towards having a fully para-virtualized (PV) machine emulated by KVMTOOL.

Best Regards,
Anup


Re: Planning the merge of KVM/arm64

2013-06-04 Thread Anup Patel
Hi Marc,

On Tue, Jun 4, 2013 at 5:59 PM, Marc Zyngier  wrote:
> Guys,
>
> The KVM/arm64 code is now, as it seems, in good enough shape to be
> merged. I've so far addressed all the comments, and it doesn't seem any
> worse than what is queued for its 32bit counterpart.
>
> For reference, it is sitting there:
> git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
> kvm-arm64/kvm
>
> What is not defined yet is the merge path:
> - It is touching some of the arm64 core code, so it would be better if
> it was merged through the arm64 tree
> - It is depending on some of the patches in the core KVM queue (the
> vgic/timer move to virt/kvm/arm/)
> - It is also depending on some of the patches that are in the KVM/ARM
> queue (parametrized timer interrupt, some MMU/MMIO fixes)
>
> So I can see two possibilities:
> - Either I can rely on a stable branch from both KVM and KVM/ARM trees
> on which I can base my tree for Catalin/Will to pull,
> - Or I ask Catalin to only pull the arm64 part *minus the Kconfig*, and
> only merge this last bit when the dependencies are satisfied in Linus' tree.
>
> What do you guys think?

I had a quick look at your kvm-arm64/kvm branch. I agree with the approach
of going through the arm64 tree.

FYI, the latest tested branch on the APM ARMv8 board is the kvm-arm64/kvm-3.10-rc3
branch.

From my side, +1 for the second option, that is, "pull the arm64 part *minus
the Kconfig*, and ..."

>
> Thanks,
>
> M.
> --
> Jazz is not dead. It just smells funny...
>
>
> ___
> kvmarm mailing list
> kvm...@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm

Regards,
Anup


[PATCH 2/5] kvmtool: ARM64: Fix compile error for aarch64

2014-08-05 Thread Anup Patel
The __ARM64_SYS_REG() macro is already defined in uapi/asm/kvm.h
of Linux-3.16-rcX, hence remove it from arm/aarch64/kvm-cpu.c.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch64/kvm-cpu.c |   11 ---
 1 file changed, 11 deletions(-)

diff --git a/tools/kvm/arm/aarch64/kvm-cpu.c b/tools/kvm/arm/aarch64/kvm-cpu.c
index 71a2a3a..545171b 100644
--- a/tools/kvm/arm/aarch64/kvm-cpu.c
+++ b/tools/kvm/arm/aarch64/kvm-cpu.c
@@ -19,17 +19,6 @@
(((x) << KVM_REG_ARM64_SYSREG_ ## n ## _SHIFT) &\
 KVM_REG_ARM64_SYSREG_ ## n ## _MASK)
 
-#define __ARM64_SYS_REG(op0,op1,crn,crm,op2)   \
-   (KVM_REG_ARM64 | KVM_REG_SIZE_U64   |   \
-KVM_REG_ARM64_SYSREG   |   \
-ARM64_SYS_REG_SHIFT_MASK(op0, OP0) |   \
-ARM64_SYS_REG_SHIFT_MASK(op1, OP1) |   \
-ARM64_SYS_REG_SHIFT_MASK(crn, CRN) |   \
-ARM64_SYS_REG_SHIFT_MASK(crm, CRM) |   \
-ARM64_SYS_REG_SHIFT_MASK(op2, OP2))
-
-#define ARM64_SYS_REG(...) __ARM64_SYS_REG(__VA_ARGS__)
-
 unsigned long kvm_cpu__get_vcpu_mpidr(struct kvm_cpu *vcpu)
 {
struct kvm_one_reg reg;
-- 
1.7.9.5



[PATCH 0/5] kvmtool: ARM/ARM64: Misc updates

2014-08-05 Thread Anup Patel
This patchset updates KVMTOOL to use some of the features
supported by Linux-3.16 KVM ARM/ARM64, such as:

1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
2. Target CPU type Potenza for using KVMTOOL on X-Gene
3. PSCI v0.2 support for Aarch32 and Aarch64 guest
4. System event exit reason

Anup Patel (5):
  kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine
target cpu
  kvmtool: ARM64: Fix compile error for aarch64
  kvmtool: ARM64: Add target type potenza for aarch64
  kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT
  kvmtool: ARM/ARM64: Provide PSCI-0.2 guest when in-kernel KVM
supports it

 tools/kvm/arm/aarch64/arm-cpu.c |9 -
 tools/kvm/arm/aarch64/kvm-cpu.c |   11 ---
 tools/kvm/arm/fdt.c |   39 +--
 tools/kvm/arm/kvm-cpu.c |   26 +-
 tools/kvm/kvm-cpu.c |6 ++
 5 files changed, 68 insertions(+), 23 deletions(-)

-- 
1.7.9.5



[PATCH 1/5] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-08-05 Thread Anup Patel
Instead of trying out each and every target type, we should use
the KVM_ARM_PREFERRED_TARGET vm ioctl to determine the target type
for KVM ARM/ARM64.

We bail out if the target type returned by the KVM_ARM_PREFERRED_TARGET
vm ioctl is not known to kvmtool.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/kvm-cpu.c |   21 -
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index aeaa4cf..7478f8f 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -34,6 +34,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
struct kvm_cpu *vcpu;
int coalesced_offset, mmap_size, err = -1;
unsigned int i;
+   struct kvm_vcpu_init preferred_init;
struct kvm_vcpu_init vcpu_init = {
.features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
};
@@ -46,6 +47,10 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
if (vcpu->vcpu_fd < 0)
die_perror("KVM_CREATE_VCPU ioctl");
 
+   err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
+   if (err < 0)
+   die_perror("KVM_ARM_PREFERRED_TARGET ioctl");
+
mmap_size = ioctl(kvm->sys_fd, KVM_GET_VCPU_MMAP_SIZE, 0);
if (mmap_size < 0)
die_perror("KVM_GET_VCPU_MMAP_SIZE ioctl");
@@ -55,17 +60,22 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
-   /* Find an appropriate target CPU type. */
+   /* Match preferred target CPU type. */
+   target = NULL;
for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
if (!kvm_arm_targets[i])
continue;
-   target = kvm_arm_targets[i];
-   vcpu_init.target = target->id;
-   err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
-   if (!err)
+   if (kvm_arm_targets[i]->id == preferred_init.target) {
+   target = kvm_arm_targets[i];
break;
+   }
+   }
+   if (!target) {
+   die("preferred target not available\n");
}
 
+   vcpu_init.target = preferred_init.target;
+   err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
if (err || target->init(vcpu))
die("Unable to initialise ARM vcpu");
 
@@ -81,6 +91,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu->cpu_type  = target->id;
vcpu->cpu_compatible= target->compatible;
vcpu->is_running= true;
+
return vcpu;
 }
 
-- 
1.7.9.5



[PATCH 5/5] kvmtool: ARM/ARM64: Provide PSCI-0.2 guest when in-kernel KVM supports it

2014-08-05 Thread Anup Patel
If the in-kernel KVM supports PSCI-0.2 emulation then we should set
the KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
provide "arm,psci-0.2","arm,psci" as the PSCI compatible string.

This patch updates kvm_cpu__arch_init() and setup_fdt() accordingly.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/fdt.c |   39 +--
 tools/kvm/arm/kvm-cpu.c |5 +
 2 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/tools/kvm/arm/fdt.c b/tools/kvm/arm/fdt.c
index 186a718..93849cf2 100644
--- a/tools/kvm/arm/fdt.c
+++ b/tools/kvm/arm/fdt.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static char kern_cmdline[COMMAND_LINE_SIZE];
 
@@ -162,12 +163,38 @@ static int setup_fdt(struct kvm *kvm)
 
/* PSCI firmware */
_FDT(fdt_begin_node(fdt, "psci"));
-   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
-   _FDT(fdt_property_string(fdt, "method", "hvc"));
-   _FDT(fdt_property_cell(fdt, "cpu_suspend", KVM_PSCI_FN_CPU_SUSPEND));
-   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
-   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
-   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   const char compatible[] = "arm,psci-0.2\0arm,psci";
+   _FDT(fdt_property(fdt, "compatible",
+ compatible, sizeof(compatible)));
+   _FDT(fdt_property_string(fdt, "method", "hvc"));
+   if (kvm->cfg.arch.aarch32_guest) {
+   _FDT(fdt_property_cell(fdt, "cpu_suspend",
+   PSCI_0_2_FN_CPU_SUSPEND));
+   _FDT(fdt_property_cell(fdt, "cpu_off",
+   PSCI_0_2_FN_CPU_OFF));
+   _FDT(fdt_property_cell(fdt, "cpu_on",
+   PSCI_0_2_FN_CPU_ON));
+   _FDT(fdt_property_cell(fdt, "migrate",
+   PSCI_0_2_FN_MIGRATE));
+   } else {
+   _FDT(fdt_property_cell(fdt, "cpu_suspend",
+   PSCI_0_2_FN64_CPU_SUSPEND));
+   _FDT(fdt_property_cell(fdt, "cpu_off",
+   PSCI_0_2_FN_CPU_OFF));
+   _FDT(fdt_property_cell(fdt, "cpu_on",
+   PSCI_0_2_FN64_CPU_ON));
+   _FDT(fdt_property_cell(fdt, "migrate",
+   PSCI_0_2_FN64_MIGRATE));
+   }
+   } else {
+   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   _FDT(fdt_property_string(fdt, "method", "hvc"));
+   _FDT(fdt_property_cell(fdt, "cpu_suspend", 
KVM_PSCI_FN_CPU_SUSPEND));
+   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
+   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
+   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   }
_FDT(fdt_end_node(fdt));
 
/* Finalise. */
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index 7478f8f..76c28a0 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -74,6 +74,11 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
die("preferred target not available\n");
}
 
+   /* Set KVM_ARM_VCPU_PSCI_0_2 if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
+   }
+
vcpu_init.target = preferred_init.target;
err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
if (err || target->init(vcpu))
-- 
1.7.9.5



[PATCH 3/5] kvmtool: ARM64: Add target type potenza for aarch64

2014-08-05 Thread Anup Patel
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux-3.16-rcX and higher, hence we register an aarch64 target
type for it.

This patch enables us to run KVMTOOL on an X-Gene Potenza host.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch64/arm-cpu.c |9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index ce5ea2f..ce526e3 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -41,10 +41,17 @@ static struct kvm_arm_target target_cortex_a57 = {
.init   = arm_cpu__vcpu_init,
 };
 
+static struct kvm_arm_target target_potenza = {
+   .id = KVM_ARM_TARGET_XGENE_POTENZA,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static int arm_cpu__core_init(struct kvm *kvm)
 {
return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
-   kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
+   kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
+   kvm_cpu__register_kvm_arm_target(&target_potenza));
 }
 core_init(arm_cpu__core_init);
-- 
1.7.9.5



[PATCH 4/5] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-08-05 Thread Anup Patel
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture-independent system-wide events for a Guest.

Currently, it is used by the in-kernel PSCI-0.2 emulation of
KVM ARM/ARM64 to inform user space about a PSCI SYSTEM_OFF
or PSCI SYSTEM_RESET request.

For now, we simply treat all system-wide guest events the
same and shut down the guest upon KVM_EXIT_SYSTEM_EVENT.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/kvm-cpu.c |6 ++
 1 file changed, 6 insertions(+)

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index ee0a8ec..e20ee4b 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -160,6 +160,12 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
goto exit_kvm;
case KVM_EXIT_SHUTDOWN:
goto exit_kvm;
+   case KVM_EXIT_SYSTEM_EVENT:
+   /*
+* Treat both SHUTDOWN & RESET system events
+* as shutdown request.
+*/
+   goto exit_kvm;
default: {
bool ret;
 
-- 
1.7.9.5



Re: [PATCH 0/5] kvmtool: ARM/ARM64: Misc updates

2014-08-05 Thread Anup Patel
On 5 August 2014 14:19, Anup Patel  wrote:
> This patchset updates KVMTOOL to use some of the features
> supported by Linux-3.16 KVM ARM/ARM64, such as:
>
> 1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
> 2. Target CPU type Potenza for using KVMTOOL on X-Gene
> 3. PSCI v0.2 support for Aarch32 and Aarch64 guest
> 4. System event exit reason
>
> Anup Patel (5):
>   kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine
> target cpu
>   kvmtool: ARM64: Fix compile error for aarch64
>   kvmtool: ARM64: Add target type potenza for aarch64
>   kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT
>   kvmtool: ARM/ARM64: Provide PSCI-0.2 guest when in-kernel KVM
> supports it
>
>  tools/kvm/arm/aarch64/arm-cpu.c |9 -
>  tools/kvm/arm/aarch64/kvm-cpu.c |   11 ---
>  tools/kvm/arm/fdt.c |   39 
> +--
>  tools/kvm/arm/kvm-cpu.c |   26 +-
>  tools/kvm/kvm-cpu.c |6 ++
>  5 files changed, 68 insertions(+), 23 deletions(-)
>
> --
> 1.7.9.5
>

Hi All,

This patchset has been tested on X-Gene Mustang and the Foundation v8 model.

Regards,
Anup


[RFC PATCH 1/6] ARM64: Move PMU register related defines to asm/pmu.h

2014-08-05 Thread Anup Patel
To use the ARMv8 PMU-related register defines from the KVM code,
we move the relevant definitions to the asm/pmu.h include file.

We also add #ifndef __ASSEMBLY__ guards so that asm/pmu.h can be
included from assembly code.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
---
 arch/arm64/include/asm/pmu.h   |   44 
 arch/arm64/kernel/perf_event.c |   32 -
 2 files changed, 44 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index e6f0878..f49cc72 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -19,6 +19,49 @@
 #ifndef __ASM_PMU_H
 #define __ASM_PMU_H
 
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMCR_E   (1 << 0) /* Enable all counters */
+#define ARMV8_PMCR_P   (1 << 1) /* Reset all counters */
+#define ARMV8_PMCR_C   (1 << 2) /* Cycle counter reset */
+#define ARMV8_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMCR_X   (1 << 4) /* Export to ETM */
+#define ARMV8_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
+#defineARMV8_PMCR_N_SHIFT  11   /* Number of counters 
supported */
+#defineARMV8_PMCR_N_MASK   0x1f
+#defineARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
+
+/*
+ * PMCNTEN: counters enable reg
+ */
+#defineARMV8_CNTEN_MASK0x  /* Mask for writable 
bits */
+
+/*
+ * PMINTEN: counters interrupt enable reg
+ */
+#defineARMV8_INTEN_MASK0x  /* Mask for writable 
bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#defineARMV8_OVSR_MASK 0x  /* Mask for writable 
bits */
+#defineARMV8_OVERFLOWED_MASK   ARMV8_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#defineARMV8_EVTYPE_MASK   0xc80003ff  /* Mask for writable 
bits */
+#defineARMV8_EVTYPE_EVENT  0x3ff   /* Mask for EVENT bits 
*/
+
+/*
+ * Event filters for PMUv3
+ */
+#defineARMV8_EXCLUDE_EL1   (1 << 31)
+#defineARMV8_EXCLUDE_EL0   (1 << 30)
+#defineARMV8_INCLUDE_EL2   (1 << 27)
+
+#ifndef __ASSEMBLY__
 #ifdef CONFIG_HW_PERF_EVENTS
 
 /* The events for a given PMU register set. */
@@ -79,4 +122,5 @@ int armpmu_event_set_period(struct perf_event *event,
int idx);
 
 #endif /* CONFIG_HW_PERF_EVENTS */
+#endif /* __ASSEMBLY__ */
 #endif /* __ASM_PMU_H */
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index baf5afb..47dfb8b 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -810,38 +810,6 @@ static const unsigned 
armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
 #defineARMV8_IDX_TO_COUNTER(x) \
(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
 
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E   (1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P   (1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C   (1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X   (1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
-#defineARMV8_PMCR_N_SHIFT  11   /* Number of counters 
supported */
-#defineARMV8_PMCR_N_MASK   0x1f
-#defineARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#defineARMV8_OVSR_MASK 0x  /* Mask for writable 
bits */
-#defineARMV8_OVERFLOWED_MASK   ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#defineARMV8_EVTYPE_MASK   0xc80003ff  /* Mask for writable 
bits */
-#defineARMV8_EVTYPE_EVENT  0x3ff   /* Mask for EVENT bits 
*/
-
-/*
- * Event filters for PMUv3
- */
-#defineARMV8_EXCLUDE_EL1   (1 << 31)
-#defineARMV8_EXCLUDE_EL0   (1 << 30)
-#defineARMV8_INCLUDE_EL2   (1 << 27)
-
 static inline u32 armv8pmu_pmcr_read(void)
 {
u32 val;
-- 
1.7.9.5



[RFC PATCH 6/6] ARM64: KVM: Upgrade to lazy context switch of PMU registers

2014-08-05 Thread Anup Patel
Full context switch of all PMU registers for both host and
guest can make the KVM world-switch very expensive.

This patch improves the current PMU context switch by implementing
a lazy context switch of PMU registers.

To achieve this, we trap all PMU register accesses and use a
per-VCPU dirty flag to keep track of whether the guest has updated
the PMU registers. If the PMU registers of a VCPU are dirty, or the
PMCR_EL0.E bit is set for the VCPU, then we do a full context switch
for both host and guest.
(This is very similar to the lazy world switch for debug registers:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-July/271040.html)

Also, we always trap-and-emulate PMCR_EL0 to fake the number of event
counters available to the guest. For this PMCR_EL0 trap-and-emulate to
work correctly, we always save/restore PMCR_EL0 for both host and
guest, whereas the other PMU registers are saved/restored based
on the PMU dirty flag.
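
As a rough C-level sketch of the save path described above (pseudocode
only; the real implementation below is in hyp.S assembly, and the
read_pmcr_el0()/write_pmcr_el0() helpers are stand-ins, not existing
kernel APIs):

/* Pseudocode sketch of the lazy save path, mirroring the hyp.S logic. */
void pmu_save_guest_state(struct kvm_vcpu *vcpu, u64 *sysregs)
{
        u64 pmcr = read_pmcr_el0();             /* stand-in for the mrs in hyp.S */

        sysregs[PMCR_EL0] = pmcr;               /* PMCR_EL0 is always saved ... */
        write_pmcr_el0(pmcr & ~ARMV8_PMCR_E);   /* ... and all counters stopped */

        if (!(vcpu->arch.pmu_flags & KVM_ARM64_PMU_DIRTY))
                return;                         /* guest never touched the PMU */

        /* otherwise: full save of PMEVCNTRn_EL0, PMEVTYPERn_EL0, PMCCNTR_EL0, ... */
}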

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
---
 arch/arm64/include/asm/kvm_asm.h  |3 +
 arch/arm64/include/asm/kvm_host.h |3 +
 arch/arm64/kernel/asm-offsets.c   |1 +
 arch/arm64/kvm/hyp.S  |   63 --
 arch/arm64/kvm/sys_regs.c |  248 +++--
 5 files changed, 298 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 93be21f..47b7fcd 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -132,6 +132,9 @@
 #define KVM_ARM64_DEBUG_DIRTY_SHIFT0
 #define KVM_ARM64_DEBUG_DIRTY  (1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
 
+#define KVM_ARM64_PMU_DIRTY_SHIFT  0
+#define KVM_ARM64_PMU_DIRTY(1 << KVM_ARM64_PMU_DIRTY_SHIFT)
+
 #ifndef __ASSEMBLY__
 struct kvm;
 struct kvm_vcpu;
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index ae4cdb2..4dba2a3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -117,6 +117,9 @@ struct kvm_vcpu_arch {
/* Timer state */
struct arch_timer_cpu timer_cpu;
 
+   /* PMU flags */
+   u64 pmu_flags;
+
/* PMU state */
struct pmu_cpu pmu_cpu;
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 053dc3e..4234794 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -140,6 +140,7 @@ int main(void)
   DEFINE(VGIC_CPU_NR_LR,   offsetof(struct vgic_cpu, nr_lr));
   DEFINE(KVM_VTTBR,offsetof(struct kvm, arch.vttbr));
   DEFINE(KVM_VGIC_VCTRL,   offsetof(struct kvm, arch.vgic.vctrl_base));
+  DEFINE(VCPU_PMU_FLAGS,   offsetof(struct kvm_vcpu, arch.pmu_flags));
   DEFINE(VCPU_PMU_IRQ_PENDING, offsetof(struct kvm_vcpu, 
arch.pmu_cpu.irq_pending));
 #endif
 #ifdef CONFIG_ARM64_CPU_SUSPEND
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 6b41c01..5f9ccee 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -443,6 +443,9 @@ __kvm_hyp_code_start:
and x5, x4, #~(ARMV8_PMCR_E)// Clear PMCR_EL0.E
msr pmcr_el0, x5// This will stop all counters
 
+   ldr x5, [x0, #VCPU_PMU_FLAGS] // Only save if dirty flag set
+   tbz x5, #KVM_ARM64_PMU_DIRTY_SHIFT, 1f
+
mov x3, #0
ubfxx4, x4, #ARMV8_PMCR_N_SHIFT, #5 // Number of event counters
cmp x4, #0  // Skip if no event counters
@@ -731,7 +734,7 @@ __kvm_hyp_code_start:
msr mdccint_el1, x21
 .endm
 
-.macro restore_pmu
+.macro restore_pmu, is_vcpu_pmu
// x2: base address for cpu context
// x3: mask of counters allowed in EL0 & EL1
// x4: number of event counters allowed in EL0 & EL1
@@ -741,16 +744,19 @@ __kvm_hyp_code_start:
cmp x5, #1  // Must be PMUv3 else skip
bne 1f
 
+   ldr x5, [x0, #VCPU_PMU_FLAGS] // Only restore if dirty flag set
+   tbz x5, #KVM_ARM64_PMU_DIRTY_SHIFT, 2f
+
mov x3, #0
mrs x4, pmcr_el0
ubfxx4, x4, #ARMV8_PMCR_N_SHIFT, #5 // Number of event counters
cmp x4, #0  // Skip if no event counters
-   beq 2f
+   beq 3f
sub x4, x4, #1  // Last event counter is reserved
mov x3, #1
lsl x3, x3, x4
sub x3, x3, #1
-2: orr x3, x3, #(1 << 31)  // Mask of event counters
+3: orr x3, x3, #(1 << 31)  // Mask of event counters
 
ldr x5, [x2, #CPU_SYSREG_OFFSET(PMCCFILTR_EL0)]
msr pmccfiltr_el0, x5   // Restore PMCCFILTR_EL0
@@ -772,15 +778,15 @@ __kvm_hyp_code_start:
lsl x5, x4, #4
add x5, x5, #CPU_SYSREG_OFFSET(PMEVCNTR0_EL0)
add x5, x2, x5
-3: cmp x4, #0
-   beq 4f
+4: cmp x4, #0
+   beq 5f
sub x4, x4, #1
ldp x6, x7, [x5, #-16]!
ms

[RFC PATCH 4/6] ARM/ARM64: KVM: Add common code PMU IRQ routing

2014-08-05 Thread Anup Patel
This patch introduces common PMU IRQ routing code for
KVM ARM and KVM ARM64 under the virt/kvm/arm directory.

The virtual PMU IRQ number for each Guest VCPU will be
provided by user space using the set device address vm ioctl
with parameters:
dev_id = KVM_ARM_DEVICE_PMU
type = VCPU number
addr = PMU IRQ number for the VCPU

The low-level context switching code of KVM ARM/ARM64
will determine the state of the VCPU PMU IRQ and store it in
the "irq_pending" flag when saving the PMU context for the VCPU.

The common PMU IRQ routing code will inject the virtual PMU
IRQ based on the "irq_pending" flag and will also clear
the "irq_pending" flag.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
---
 arch/arm/include/asm/kvm_host.h   |9 
 arch/arm/include/uapi/asm/kvm.h   |1 +
 arch/arm/kvm/arm.c|6 +++
 arch/arm/kvm/reset.c  |4 ++
 arch/arm64/include/asm/kvm_host.h |9 
 arch/arm64/include/uapi/asm/kvm.h |1 +
 arch/arm64/kvm/Kconfig|7 +++
 arch/arm64/kvm/Makefile   |1 +
 arch/arm64/kvm/reset.c|4 ++
 include/kvm/arm_pmu.h |   52 ++
 virt/kvm/arm/pmu.c|  105 +
 11 files changed, 199 insertions(+)
 create mode 100644 include/kvm/arm_pmu.h
 create mode 100644 virt/kvm/arm/pmu.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 193ceaf..a6a778f 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #if defined(CONFIG_KVM_ARM_MAX_VCPUS)
 #define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
@@ -53,6 +54,9 @@ struct kvm_arch {
/* Timer */
struct arch_timer_kvm   timer;
 
+   /* PMU */
+   struct pmu_kvm  pmu;
+
/*
 * Anything that is not used directly from assembly code goes
 * here.
@@ -118,8 +122,13 @@ struct kvm_vcpu_arch {
 
/* VGIC state */
struct vgic_cpu vgic_cpu;
+
+   /* Timer state */
struct arch_timer_cpu timer_cpu;
 
+   /* PMU state */
+   struct pmu_cpu pmu_cpu;
+
/*
 * Anything that is not used directly from assembly code goes
 * here.
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index e6ebdd3..b21e6eb 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -75,6 +75,7 @@ struct kvm_regs {
 
 /* Supported device IDs */
 #define KVM_ARM_DEVICE_VGIC_V2 0
+#define KVM_ARM_DEVICE_PMU 1
 
 /* Supported VGIC address types  */
 #define KVM_VGIC_V2_ADDR_TYPE_DIST 0
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 3c82b37..04130f5 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -140,6 +140,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
kvm_timer_init(kvm);
 
+   kvm_pmu_init(kvm);
+
/* Mark the initial VMID generation invalid */
kvm->arch.vmid_gen = 0;
 
@@ -567,6 +569,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
local_irq_enable();
kvm_timer_sync_hwstate(vcpu);
+   kvm_pmu_sync_hwstate(vcpu);
kvm_vgic_sync_hwstate(vcpu);
continue;
}
@@ -601,6 +604,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
 */
 
kvm_timer_sync_hwstate(vcpu);
+   kvm_pmu_sync_hwstate(vcpu);
kvm_vgic_sync_hwstate(vcpu);
 
ret = handle_exit(vcpu, run, ret);
@@ -794,6 +798,8 @@ static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
if (!vgic_present)
return -ENXIO;
return kvm_vgic_addr(kvm, type, &dev_addr->addr, true);
+   case KVM_ARM_DEVICE_PMU:
+   return kvm_pmu_addr(kvm, type, &dev_addr->addr, true);
default:
return -ENODEV;
}
diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
index f558c07..42e6996 100644
--- a/arch/arm/kvm/reset.c
+++ b/arch/arm/kvm/reset.c
@@ -28,6 +28,7 @@
 #include 
 
 #include 
+#include 
 
 /**
  * Cortex-A15 and Cortex-A7 Reset Values
@@ -79,5 +80,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
/* Reset arch_timer context */
kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
 
+   /* Reset pmu context */
+   kvm_pmu_vcpu_reset(vcpu);
+
return 0;
 }
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 7592ddf..ae4cdb2 100644
--- a/arch/arm64/include/asm/kvm_host.h
+

[RFC PATCH 3/6] ARM: perf: Re-enable overflow interrupt from interrupt handler

2014-08-05 Thread Anup Patel
A hypervisor will typically mask the overflow interrupt before
forwarding it to Guest Linux, hence we need to re-enable the overflow
interrupt after clearing it in Guest Linux. This re-enabling of the
overflow interrupt does no harm in non-virtualized scenarios.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 arch/arm/kernel/perf_event_v7.c |8 
 1 file changed, 8 insertions(+)

diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
index 1d37568..581cca5 100644
--- a/arch/arm/kernel/perf_event_v7.c
+++ b/arch/arm/kernel/perf_event_v7.c
@@ -1355,6 +1355,14 @@ static irqreturn_t armv7pmu_handle_irq(int irq_num, void 
*dev)
if (!armv7_pmnc_counter_has_overflowed(pmnc, idx))
continue;
 
+   /*
+* If we are running under a hypervisor such as KVM then
+* hypervisor will mask the interrupt before forwarding
+* it to Guest Linux hence re-enable interrupt for the
+* overflowed counter.
+*/
+   armv7_pmnc_enable_intens(idx);
+
hwc = &event->hw;
armpmu_event_update(event);
perf_sample_data_init(&data, 0, hwc->last_period);
-- 
1.7.9.5



[RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-08-05 Thread Anup Patel
This patchset enables PMU virtualization in KVM ARM64. The
Guest can now directly use the PMU available on the host HW.

The virtual PMU IRQ injection for Guest VCPUs is managed by a
small piece of code shared between KVM ARM and KVM ARM64. The
virtual PMU IRQ number will be based on the Guest machine model and
user space will provide it using the set device address vm ioctl.

The second last patch of this series implements a full context
switch of PMU registers, which context switches all PMU
registers on every KVM world-switch.

The last patch implements a lazy context switch of PMU registers,
which is very similar to the lazy debug context switch.
(Refer:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-July/271040.html)

Also, we reserve the last PMU event counter for EL2 mode, which
will not be accessible from Host and Guest EL1 mode. This
reserved EL2-mode PMU event counter can be used for profiling
the KVM world-switch and other EL2-mode functions.

All testing has been done using KVMTOOL on X-Gene Mustang and the
Foundation v8 Model, for both Aarch32 and Aarch64 guests.

Anup Patel (6):
  ARM64: Move PMU register related defines to asm/pmu.h
  ARM64: perf: Re-enable overflow interrupt from interrupt handler
  ARM: perf: Re-enable overflow interrupt from interrupt handler
  ARM/ARM64: KVM: Add common code PMU IRQ routing
  ARM64: KVM: Implement full context switch of PMU registers
  ARM64: KVM: Upgrade to lazy context switch of PMU registers

 arch/arm/include/asm/kvm_host.h   |9 +
 arch/arm/include/uapi/asm/kvm.h   |1 +
 arch/arm/kernel/perf_event_v7.c   |8 +
 arch/arm/kvm/arm.c|6 +
 arch/arm/kvm/reset.c  |4 +
 arch/arm64/include/asm/kvm_asm.h  |   39 +++-
 arch/arm64/include/asm/kvm_host.h |   12 ++
 arch/arm64/include/asm/pmu.h  |   44 +
 arch/arm64/include/uapi/asm/kvm.h |1 +
 arch/arm64/kernel/asm-offsets.c   |2 +
 arch/arm64/kernel/perf_event.c|   40 +---
 arch/arm64/kvm/Kconfig|7 +
 arch/arm64/kvm/Makefile   |1 +
 arch/arm64/kvm/hyp-init.S |   15 ++
 arch/arm64/kvm/hyp.S  |  209 +++-
 arch/arm64/kvm/reset.c|4 +
 arch/arm64/kvm/sys_regs.c |  385 +
 include/kvm/arm_pmu.h |   52 +
 virt/kvm/arm/pmu.c|  105 ++
 19 files changed, 870 insertions(+), 74 deletions(-)
 create mode 100644 include/kvm/arm_pmu.h
 create mode 100644 virt/kvm/arm/pmu.c

-- 
1.7.9.5



[RFC PATCH 2/6] ARM64: perf: Re-enable overflow interrupt from interrupt handler

2014-08-05 Thread Anup Patel
A hypervisor will typically mask the overflow interrupt before
forwarding it to Guest Linux, hence we need to re-enable the overflow
interrupt after clearing it in Guest Linux. This re-enabling of the
overflow interrupt does no harm in non-virtualized scenarios.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 arch/arm64/kernel/perf_event.c |8 
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 47dfb8b..19fb140 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1076,6 +1076,14 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void 
*dev)
if (!armv8pmu_counter_has_overflowed(pmovsr, idx))
continue;
 
+   /*
+* If we are running under a hypervisor such as KVM then
+* hypervisor will mask the interrupt before forwarding
+* it to Guest Linux hence re-enable interrupt for the
+* overflowed counter.
+*/
+   armv8pmu_enable_intens(idx);
+
hwc = &event->hw;
armpmu_event_update(event, hwc, idx);
perf_sample_data_init(&data, 0, hwc->last_period);
-- 
1.7.9.5



[RFC PATCH 5/6] ARM64: KVM: Implement full context switch of PMU registers

2014-08-05 Thread Anup Patel
This patch implements the following:
1. Save/restore all PMU registers for both Guest and Host in the
KVM world switch.
2. Reserve the last PMU event counter for performance analysis in
EL2 mode. To achieve this, we fake the number of event counters
available to the Guest by trapping PMCR_EL0 register accesses and
programming MDCR_EL2.HPMN with the number of PMU event counters minus one.
3. Clear and mask overflowed interrupts when saving the PMU context
for the Guest. The Guest will re-enable overflowed interrupts when
processing the virtual PMU interrupt.

With this patch we have direct access to all PMU registers from the
Guest, and we only trap-and-emulate PMCR_EL0 accesses to fake the
number of PMU event counters presented to the Guest.
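
A hedged sketch of the PMCR_EL0 read emulation mentioned in point 2,
using the ARMV8_PMCR_* defines introduced earlier in this series (the
actual handler lives in sys_regs.c; the function name here is
illustrative):

/* Sketch: present the Guest with one event counter less than the HW has. */
static u64 fake_guest_pmcr(u64 host_pmcr)
{
        u64 nr = (host_pmcr >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;

        if (nr > 0)
                nr--;           /* last counter is reserved for EL2 profiling */

        host_pmcr &= ~((u64)ARMV8_PMCR_N_MASK << ARMV8_PMCR_N_SHIFT);
        return host_pmcr | (nr << ARMV8_PMCR_N_SHIFT);
}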

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
---
 arch/arm64/include/asm/kvm_asm.h |   36 ++--
 arch/arm64/kernel/asm-offsets.c  |1 +
 arch/arm64/kvm/hyp-init.S|   15 
 arch/arm64/kvm/hyp.S |  168 +++-
 arch/arm64/kvm/sys_regs.c|  175 --
 5 files changed, 343 insertions(+), 52 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 993a7db..93be21f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -53,15 +53,27 @@
 #define DBGWVR0_EL171  /* Debug Watchpoint Value Registers (0-15) */
 #define DBGWVR15_EL1   86
 #define MDCCINT_EL187  /* Monitor Debug Comms Channel Interrupt Enable 
Reg */
+#define PMCR_EL0   88  /* Performance Monitors Control Register */
+#define PMOVSSET_EL0   89  /* Performance Monitors Overflow Flag Status 
Set Register */
+#define PMCCNTR_EL090  /* Cycle Counter Register */
+#define PMSELR_EL0 91  /* Performance Monitors Event Counter Selection 
Register */
+#define PMEVCNTR0_EL0  92  /* Performance Monitors Event Counter Register 
(0-30) */
+#define PMEVTYPER0_EL0 93  /* Performance Monitors Event Type Register 
(0-30) */
+#define PMEVCNTR30_EL0 152
+#define PMEVTYPER30_EL0153
+#define PMCNTENSET_EL0 154 /* Performance Monitors Count Enable Set 
Register */
+#define PMINTENSET_EL1 155 /* Performance Monitors Interrupt Enable Set 
Register */
+#define PMUSERENR_EL0  156 /* Performance Monitors User Enable Register */
+#define PMCCFILTR_EL0  157 /* Cycle Count Filter Register */
 
 /* 32bit specific registers. Keep them at the end of the range */
-#defineDACR32_EL2  88  /* Domain Access Control Register */
-#defineIFSR32_EL2  89  /* Instruction Fault Status Register */
-#defineFPEXC32_EL2 90  /* Floating-Point Exception Control 
Register */
-#defineDBGVCR32_EL291  /* Debug Vector Catch Register */
-#defineTEECR32_EL1 92  /* ThumbEE Configuration Register */
-#defineTEEHBR32_EL193  /* ThumbEE Handler Base Register */
-#defineNR_SYS_REGS 94
+#defineDACR32_EL2  158 /* Domain Access Control Register */
+#defineIFSR32_EL2  159 /* Instruction Fault Status Register */
+#defineFPEXC32_EL2 160 /* Floating-Point Exception Control 
Register */
+#defineDBGVCR32_EL2161 /* Debug Vector Catch Register */
+#defineTEECR32_EL1 162 /* ThumbEE Configuration Register */
+#defineTEEHBR32_EL1163 /* ThumbEE Handler Base Register */
+#defineNR_SYS_REGS 164
 
 /* 32bit mapping */
 #define c0_MPIDR   (MPIDR_EL1 * 2) /* MultiProcessor ID Register */
@@ -83,6 +95,13 @@
 #define c6_IFAR(c6_DFAR + 1)   /* Instruction Fault Address 
Register */
 #define c7_PAR (PAR_EL1 * 2)   /* Physical Address Register */
 #define c7_PAR_high(c7_PAR + 1)/* PAR top 32 bits */
+#define c9_PMCR(PMCR_EL0 * 2)  /* Performance Monitors Control 
Register */
+#define c9_PMOVSSET(PMOVSSET_EL0 * 2)
+#define c9_PMCCNTR (PMCCNTR_EL0 * 2)
+#define c9_PMSELR  (PMSELR_EL0 * 2)
+#define c9_PMCNTENSET  (PMCNTENSET_EL0 * 2)
+#define c9_PMINTENSET  (PMINTENSET_EL1 * 2)
+#define c9_PMUSERENR   (PMUSERENR_EL0 * 2)
 #define c10_PRRR   (MAIR_EL1 * 2)  /* Primary Region Remap Register */
 #define c10_NMRR   (c10_PRRR + 1)  /* Normal Memory Remap Register */
 #define c12_VBAR   (VBAR_EL1 * 2)  /* Vector Base Address Register */
@@ -93,6 +112,9 @@
 #define c10_AMAIR0 (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
 #define c10_AMAIR1 (c10_AMAIR0 + 1)/* Aux Memory Attr Indirection Reg */
 #define c14_CNTKCTL(CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
+#define c14_PMEVCNTR0  (PMEVCNTR0_EL0 * 2)
+#define c14_PMEVTYPR0  (PMEVTYPER0_EL0 * 2)
+#define c14_PMCCFILTR  (PMCCFILTR_EL0 * 2)
 
 #define cp14_DBGDSCRext(MDSCR_EL1 * 2)
 #define cp14_DBGBCR0   (DBGBCR0_EL1 * 2)
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index ae73a83..053dc3e 100644

Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-08-05 Thread Anup Patel
On 5 August 2014 15:02, Anup Patel  wrote:
> On Tue, Aug 5, 2014 at 2:54 PM, Anup Patel  wrote:
>> This patchset enables PMU virtualization in KVM ARM64. The
>> Guest can now directly use PMU available on the host HW.
>>
>> The virtual PMU IRQ injection for Guest VCPUs is managed by
>> small piece of code shared between KVM ARM and KVM ARM64. The
>> virtual PMU IRQ number will be based on Guest machine model and
>> user space will provide it using set device address vm ioctl.
>>
>> The second last patch of this series implements full context
>> switch of PMU registers which will context switch all PMU
>> registers on every KVM world-switch.
>>
>> The last patch implements a lazy context switch of PMU registers
>> which is very similar to lazy debug context switch.
>> (Refer, 
>> http://lists.infradead.org/pipermail/linux-arm-kernel/2014-July/271040.html)
>>
>> Also, we reserve last PMU event counter for EL2 mode which
>> will not be accessible from Host and Guest EL1 mode. This
>> reserved EL2 mode PMU event counter can be used for profiling
>> KVM world-switch and other EL2 mode functions.
>>
>> All testing have been done using KVMTOOL on X-Gene Mustang and
>> Foundation v8 Model for both Aarch32 and Aarch64 guest.
>>
>> Anup Patel (6):
>>   ARM64: Move PMU register related defines to asm/pmu.h
>>   ARM64: perf: Re-enable overflow interrupt from interrupt handler
>>   ARM: perf: Re-enable overflow interrupt from interrupt handler
>>   ARM/ARM64: KVM: Add common code PMU IRQ routing
>>   ARM64: KVM: Implement full context switch of PMU registers
>>   ARM64: KVM: Upgrade to lazy context switch of PMU registers
>>
>>  arch/arm/include/asm/kvm_host.h   |9 +
>>  arch/arm/include/uapi/asm/kvm.h   |1 +
>>  arch/arm/kernel/perf_event_v7.c   |8 +
>>  arch/arm/kvm/arm.c|6 +
>>  arch/arm/kvm/reset.c  |4 +
>>  arch/arm64/include/asm/kvm_asm.h  |   39 +++-
>>  arch/arm64/include/asm/kvm_host.h |   12 ++
>>  arch/arm64/include/asm/pmu.h  |   44 +
>>  arch/arm64/include/uapi/asm/kvm.h |1 +
>>  arch/arm64/kernel/asm-offsets.c   |2 +
>>  arch/arm64/kernel/perf_event.c|   40 +---
>>  arch/arm64/kvm/Kconfig|7 +
>>  arch/arm64/kvm/Makefile   |1 +
>>  arch/arm64/kvm/hyp-init.S |   15 ++
>>  arch/arm64/kvm/hyp.S  |  209 +++-
>>  arch/arm64/kvm/reset.c|4 +
>>  arch/arm64/kvm/sys_regs.c |  385 
>> +
>>  include/kvm/arm_pmu.h |   52 +
>>  virt/kvm/arm/pmu.c|  105 ++
>>  19 files changed, 870 insertions(+), 74 deletions(-)
>>  create mode 100644 include/kvm/arm_pmu.h
>>  create mode 100644 virt/kvm/arm/pmu.c
>>
>> --
>> 1.7.9.5
>>
>> CONFIDENTIALITY NOTICE: This e-mail message, including any attachments,
>> is for the sole use of the intended recipient(s) and contains information
>> that is confidential and proprietary to Applied Micro Circuits Corporation 
>> or its subsidiaries.
>> It is to be used solely for the purpose of furthering the parties' business 
>> relationship.
>> All unauthorized review, use, disclosure or distribution is prohibited.
>> If you are not the intended recipient, please contact the sender by reply 
>> e-mail
>> and destroy all copies of the original message.

Please ignore this notice, it accidentally sneaked in.

--
Anup

>>
>
> Hi All,
>
> Please apply attached patch to KVMTOOL on-top-of my
> recent KVMTOOL patchset for trying this patchset using
> KVMTOOL.
>
> Regards,
> Anup


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-08-05 Thread Anup Patel
On Tue, Aug 5, 2014 at 2:54 PM, Anup Patel  wrote:
> This patchset enables PMU virtualization in KVM ARM64. The
> Guest can now directly use PMU available on the host HW.
>
> The virtual PMU IRQ injection for Guest VCPUs is managed by
> small piece of code shared between KVM ARM and KVM ARM64. The
> virtual PMU IRQ number will be based on Guest machine model and
> user space will provide it using set device address vm ioctl.
>
> The second last patch of this series implements full context
> switch of PMU registers which will context switch all PMU
> registers on every KVM world-switch.
>
> The last patch implements a lazy context switch of PMU registers
> which is very similar to lazy debug context switch.
> (Refer, 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2014-July/271040.html)
>
> Also, we reserve last PMU event counter for EL2 mode which
> will not be accessible from Host and Guest EL1 mode. This
> reserved EL2 mode PMU event counter can be used for profiling
> KVM world-switch and other EL2 mode functions.
>
> All testing have been done using KVMTOOL on X-Gene Mustang and
> Foundation v8 Model for both Aarch32 and Aarch64 guest.
>
> Anup Patel (6):
>   ARM64: Move PMU register related defines to asm/pmu.h
>   ARM64: perf: Re-enable overflow interrupt from interrupt handler
>   ARM: perf: Re-enable overflow interrupt from interrupt handler
>   ARM/ARM64: KVM: Add common code PMU IRQ routing
>   ARM64: KVM: Implement full context switch of PMU registers
>   ARM64: KVM: Upgrade to lazy context switch of PMU registers
>
>  arch/arm/include/asm/kvm_host.h   |9 +
>  arch/arm/include/uapi/asm/kvm.h   |1 +
>  arch/arm/kernel/perf_event_v7.c   |8 +
>  arch/arm/kvm/arm.c|6 +
>  arch/arm/kvm/reset.c  |4 +
>  arch/arm64/include/asm/kvm_asm.h  |   39 +++-
>  arch/arm64/include/asm/kvm_host.h |   12 ++
>  arch/arm64/include/asm/pmu.h  |   44 +
>  arch/arm64/include/uapi/asm/kvm.h |1 +
>  arch/arm64/kernel/asm-offsets.c   |2 +
>  arch/arm64/kernel/perf_event.c|   40 +---
>  arch/arm64/kvm/Kconfig|7 +
>  arch/arm64/kvm/Makefile   |1 +
>  arch/arm64/kvm/hyp-init.S |   15 ++
>  arch/arm64/kvm/hyp.S  |  209 +++-
>  arch/arm64/kvm/reset.c|4 +
>  arch/arm64/kvm/sys_regs.c |  385 
> +
>  include/kvm/arm_pmu.h |   52 +
>  virt/kvm/arm/pmu.c|  105 ++
>  19 files changed, 870 insertions(+), 74 deletions(-)
>  create mode 100644 include/kvm/arm_pmu.h
>  create mode 100644 virt/kvm/arm/pmu.c
>
> --
> 1.7.9.5
>
> CONFIDENTIALITY NOTICE: This e-mail message, including any attachments,
> is for the sole use of the intended recipient(s) and contains information
> that is confidential and proprietary to Applied Micro Circuits Corporation or 
> its subsidiaries.
> It is to be used solely for the purpose of furthering the parties' business 
> relationship.
> All unauthorized review, use, disclosure or distribution is prohibited.
> If you are not the intended recipient, please contact the sender by reply 
> e-mail
> and destroy all copies of the original message.
>

Hi All,

Please apply attached patch to KVMTOOL on-top-of my
recent KVMTOOL patchset for trying this patchset using
KVMTOOL.

Regards,
Anup
From c16a3265992ba8159ab1da6d589026c0aa0914ba Mon Sep 17 00:00:00 2001
From: Anup Patel 
Date: Mon, 4 Aug 2014 16:45:44 +0530
Subject: [RFC PATCH] kvmtool: ARM/ARM64: Add PMU node to generated guest DTB.

This patch informs KVM ARM/ARM64 in-kernel PMU virtualization
about the PMU irq numbers for each guest VCPU using set device
address vm ioctl.

We also adds PMU node in generated guest DTB to inform guest
about the PMU irq numbers. For now, we have assumed PPI17 as
PMU IRQ of KVMTOOL guest.
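
A rough sketch of the per-VCPU call that arm/pmu.c makes (the function
name is illustrative; the id packing follows the standard
KVM_ARM_SET_DEVICE_ADDR encoding, and KVM_ARM_DEVICE_PMU comes from
the kernel-side patch in this series):

/* Sketch: register the guest PMU IRQ for one VCPU with in-kernel KVM. */
static int pmu__set_vcpu_irq(struct kvm *kvm, u64 vcpu_idx, u64 irq)
{
        struct kvm_arm_device_addr dev_addr = {
                .id   = (KVM_ARM_DEVICE_PMU << KVM_ARM_DEVICE_ID_SHIFT) |
                        (vcpu_idx << KVM_ARM_DEVICE_TYPE_SHIFT),
                .addr = irq,                    /* PPI17 for the kvmtool guest */
        };

        return ioctl(kvm->vm_fd, KVM_ARM_SET_DEVICE_ADDR, &dev_addr);
}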

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/Makefile |3 ++-
 tools/kvm/arm/fdt.c|4 +++
 tools/kvm/arm/include/arm-common/pmu.h |   10 +++
 tools/kvm/arm/pmu.c|   45 
 4 files changed, 61 insertions(+), 1 deletion(-)
 create mode 100644 tools/kvm/arm/include/arm-common/pmu.h
 create mode 100644 tools/kvm/arm/pmu.c

diff --git a/tools/kvm/Makefile b/tools/kvm/Makefile
index fba60f1..59b75c4 100644
--- a/tools/kvm/Makefile
+++ b/tools/kvm/Makefile
@@ -158,7 +158,8 @@ endif
 
 # ARM
 OBJS_ARM_COMMON		:= arm/fdt.o arm/gic.o arm/ioport.o arm/irq.o \
-			   arm/kvm.o arm/kvm-cpu.o arm/pci.o arm/timer.o
+			   arm/kvm.o arm/kvm-cpu.o arm/pci.o arm/timer.o \
+			   arm/pmu.o
 HDRS_ARM_COMMON		:= arm/include
 ifeq ($(ARCH), arm)
 	DEFINES		+= -DCONFIG

Re: [PATCH 1/5] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-08-07 Thread Anup Patel
On 6 August 2014 18:18, Will Deacon  wrote:
> On Tue, Aug 05, 2014 at 09:49:55AM +0100, Anup Patel wrote:
>> Instead, of trying out each and every target type we should use
>> KVM_ARM_PREFERRED_TARGET vm ioctl to determine target type
>> for KVM ARM/ARM64.
>>
>> We bail-out target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl
>> is not known to kvmtool.
>
> -ENOPARSE

OK, I will fix the wording here.

>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/kvm-cpu.c |   21 -
>>  1 file changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
>> index aeaa4cf..7478f8f 100644
>> --- a/tools/kvm/arm/kvm-cpu.c
>> +++ b/tools/kvm/arm/kvm-cpu.c
>> @@ -34,6 +34,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>> unsigned long cpu_id)
>>   struct kvm_cpu *vcpu;
>>   int coalesced_offset, mmap_size, err = -1;
>>   unsigned int i;
>> + struct kvm_vcpu_init preferred_init;
>>   struct kvm_vcpu_init vcpu_init = {
>>   .features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
>>   };
>> @@ -46,6 +47,10 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>> unsigned long cpu_id)
>>   if (vcpu->vcpu_fd < 0)
>>   die_perror("KVM_CREATE_VCPU ioctl");
>>
>> + err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
>> + if (err < 0)
>> + die_perror("KVM_ARM_PREFERRED_TARGET ioctl");
>
> Is this ioctl always available? If not, I don't like dying here as that
> could cause a regression under older hosts.

The KVM_ARM_PREFERRED_TARGET ioctl is available from 3.13 onwards.

I think we should first try KVM_ARM_PREFERRED_TARGET. If it fails, then
we should fall back to the old method of trying each and every target type.
What say?
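
Something along these lines in kvm_cpu__arch_init() (a sketch only,
reusing the variables already present in the patch):

        /* Sketch: prefer KVM_ARM_PREFERRED_TARGET, fall back to probing. */
        err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
        if (!err) {
                vcpu_init.target = preferred_init.target;
                err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
        } else {
                /* Older host: try every target type known to kvmtool. */
                for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
                        if (!kvm_arm_targets[i])
                                continue;
                        vcpu_init.target = kvm_arm_targets[i]->id;
                        err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
                        if (!err)
                                break;
                }
        }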

--
Anup

>
> Will


Re: [PATCH 2/5] kvmtool: ARM64: Fix compile error for aarch64

2014-08-07 Thread Anup Patel
On 6 August 2014 18:20, Will Deacon  wrote:
> On Tue, Aug 05, 2014 at 09:49:56AM +0100, Anup Patel wrote:
>> The __ARM64_SYS_REG() macro is already defined in uapi/asm/kvm.h
>> of Linux-3.16-rcX hence remove it from arm/aarch64/kvm-cpu.c
>
> I've been carrying a similar patch in my kvmtool/arm branch, but upstream
> kvmtool is still based on 3.13, so this isn't needed at the moment.
>
> Do you have a need for Pekka to merge in the latest kernel sources?
>
> Will

Yes, we should sync up KVMTOOL with the latest kernel sources.

I want to be able to shut down the VM when using KVMTOOL. To do
this we need to use PSCI v0.2 from the Guest kernel.

--
Anup


Re: [PATCH 3/5] kvmtool: ARM64: Add target type potenza for aarch64

2014-08-07 Thread Anup Patel
On 6 August 2014 18:22, Will Deacon  wrote:
> On Tue, Aug 05, 2014 at 09:49:57AM +0100, Anup Patel wrote:
>> The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
>> in latest Linux-3.16-rcX or higher hence register aarch64 target
>> type for it.
>>
>> This patch enables us to run KVMTOOL on X-Gene Potenza host.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/aarch64/arm-cpu.c |9 -
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/kvm/arm/aarch64/arm-cpu.c 
>> b/tools/kvm/arm/aarch64/arm-cpu.c
>> index ce5ea2f..ce526e3 100644
>> --- a/tools/kvm/arm/aarch64/arm-cpu.c
>> +++ b/tools/kvm/arm/aarch64/arm-cpu.c
>> @@ -41,10 +41,17 @@ static struct kvm_arm_target target_cortex_a57 = {
>>   .init   = arm_cpu__vcpu_init,
>>  };
>>
>> +static struct kvm_arm_target target_potenza = {
>> + .id = KVM_ARM_TARGET_XGENE_POTENZA,
>> + .compatible = "arm,arm-v8",
>> + .init   = arm_cpu__vcpu_init,
>> +};
>
> This implies you have the same PPIs for the arch-timer as the Cortex-A CPUs.
> Is that right?

Currently, KVM ARM64 provides PPI27 as the arch-timer IRQ for all target types.

This will have to change if KVM ARM64 starts using a different
arch-timer PPI based on the target type.

--
Anup

>
> Will


Re: [PATCH 4/5] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-08-07 Thread Anup Patel
On 6 August 2014 18:23, Will Deacon  wrote:
> On Tue, Aug 05, 2014 at 09:49:58AM +0100, Anup Patel wrote:
>> The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
>> architecture independent system-wide events for a Guest.
>>
>> Currently, it is used by in-kernel PSCI-0.2 emulation of
>> KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
>> or PSCI SYSTEM_RESET request.
>>
>> For now, we simply treat all system-wide guest events as
>> same and shutdown the guest upon KVM_EXIT_SYSTEM_EVENT.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/kvm-cpu.c |6 ++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
>> index ee0a8ec..e20ee4b 100644
>> --- a/tools/kvm/kvm-cpu.c
>> +++ b/tools/kvm/kvm-cpu.c
>> @@ -160,6 +160,12 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
>>   goto exit_kvm;
>>   case KVM_EXIT_SHUTDOWN:
>>   goto exit_kvm;
>> + case KVM_EXIT_SYSTEM_EVENT:
>> + /*
>> +  * Treat both SHUTDOWN & RESET system events
>> +  * as shutdown request.
>> +  */
>> + goto exit_kvm;
>
> Can we figure out whether this was a SHUTDOWN or RESET request? If so,
> printing a message for the latter "RESET request received -- exiting KVM"
> might be informative.

OK, I will update this and make it more verbose.
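
For example, something along these lines (a sketch only; kvmtool's
pr_info()/pr_warning() helpers and the kvm_run system_event fields are
assumed here):

        case KVM_EXIT_SYSTEM_EVENT:
                switch (cpu->kvm_run->system_event.type) {
                case KVM_SYSTEM_EVENT_RESET:
                        pr_info("RESET request received -- exiting KVM");
                        break;
                case KVM_SYSTEM_EVENT_SHUTDOWN:
                        pr_info("SHUTDOWN request received -- exiting KVM");
                        break;
                default:
                        pr_warning("unknown system event type %u",
                                   cpu->kvm_run->system_event.type);
                        break;
                }
                goto exit_kvm;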

--
Anup

>
> Will


Re: [PATCH 5/5] kvmtool: ARM/ARM64: Provide PSCI-0.2 guest when in-kernel KVM supports it

2014-08-07 Thread Anup Patel
On 6 August 2014 18:26, Will Deacon  wrote:
> On Tue, Aug 05, 2014 at 09:49:59AM +0100, Anup Patel wrote:
>> If in-kernel KVM support PSCI-0.2 emulation then we should set
>> KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
>> provide "arm,psci-0.2","arm,psci" as PSCI compatible string.
>>
>> This patch updates kvm_cpu__arch_init() and setup_fdt() as
>> per above.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/fdt.c |   39 +--
>>  tools/kvm/arm/kvm-cpu.c |5 +
>>  2 files changed, 38 insertions(+), 6 deletions(-)
>
> [...]
>
>> diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
>> index 7478f8f..76c28a0 100644
>> --- a/tools/kvm/arm/kvm-cpu.c
>> +++ b/tools/kvm/arm/kvm-cpu.c
>> @@ -74,6 +74,11 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>> unsigned long cpu_id)
>>   die("preferred target not available\n");
>>   }
>>
>> + /* Set KVM_ARM_VCPU_PSCI_0_2 if available */
>> + if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
>> + vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
>> + }
>
> Where is this used?

If we want to provide PSCI-0.2 to the Guest then we should inform
in-kernel KVM ARM/ARM64 using the VCPU init features.

By default KVM ARM/ARM64 provides PSCI-0.1 to the Guest. If we don't set
this feature then the Guest will get an undefined exception for PSCI-0.2
calls.
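
For reference, the userspace side of that handshake is tiny; a minimal
sketch (assuming an ARM/ARM64 build where the uapi headers provide these
definitions, with error handling and the surrounding kvmtool plumbing left
out):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Ask for PSCI-0.2 emulation on one VCPU when the host kernel offers it. */
	static int vcpu_init_with_psci(int sys_fd, int vcpu_fd, struct kvm_vcpu_init *init)
	{
		if (ioctl(sys_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PSCI_0_2) > 0)
			init->features[0] |= 1UL << KVM_ARM_VCPU_PSCI_0_2;

		/* Without the feature bit the guest only gets the old PSCI-0.1 calls. */
		return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
	}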

--
Anup

>
> Will


Re: [RFC PATCH 2/6] ARM64: perf: Re-enable overflow interrupt from interrupt handler

2014-08-07 Thread Anup Patel
On 6 August 2014 19:54, Will Deacon  wrote:
> On Tue, Aug 05, 2014 at 10:24:11AM +0100, Anup Patel wrote:
>> A hypervisor will typically mask the overflow interrupt before
>> forwarding it to Guest Linux hence we need to re-enable the overflow
>> interrupt after clearing it in Guest Linux. Also, this re-enabling
>> of overflow interrupt does not harm in non-virtualized scenarios.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  arch/arm64/kernel/perf_event.c |8 
>>  1 file changed, 8 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index 47dfb8b..19fb140 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -1076,6 +1076,14 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, 
>> void *dev)
>>   if (!armv8pmu_counter_has_overflowed(pmovsr, idx))
>>   continue;
>>
>> + /*
>> +  * If we are running under a hypervisor such as KVM then
>> +  * hypervisor will mask the interrupt before forwarding
>> +  * it to Guest Linux hence re-enable interrupt for the
>> +  * overflowed counter.
>> +  */
>> + armv8pmu_enable_intens(idx);
>> +
>
> Really? This is a giant bodge in the guest to work around short-comings in
> the hypervisor. Why can't we fix this properly using something like Marc's
> irq forwarding code?

This change is in accordance with our previous RFC thread about
PMU virtualization, where Marc Z had suggested doing an interrupt
mask/unmask dance similar to the arch-timer's.

I have not tried Marc's irq forwarding series. In the next revision of this
patchset, I will try to use Marc's irq forwarding approach.
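
For context, the re-enable in the patch boils down to setting the counter's
bit in PMINTENSET_EL1. A rough standalone sketch (AArch64 only; the helper
name is made up here, and it assumes the counter index already maps
one-to-one onto the register bit, which the real driver handles via its
index-to-counter conversion):

	/* Unmask the overflow interrupt of one PMU counter (AArch64 guest kernel). */
	static inline void pmu_unmask_counter_irq(unsigned int counter)
	{
		unsigned long bit = 1UL << counter;

		/* PMINTENSET_EL1 is write-one-to-set; untouched counters stay as-is. */
		asm volatile("msr pmintenset_el1, %0" : : "r" (bit));
		asm volatile("isb" : : : "memory");
	}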

>
> Will

--
Anup


[PATCH v2 0/4] kvmtool: ARM/ARM64: Misc updates

2014-08-26 Thread Anup Patel
This patchset updates KVMTOOL to use some of the features
supported by Linux-3.16 KVM ARM/ARM64, such as:

1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
2. Target CPU type Potenza for using KVMTOOL on X-Gene
3. PSCI v0.2 support for Aarch32 and Aarch64 guest
4. System event exit reason

Changes since v1:
- Drop the patch to fix compile error for aarch64
- Fallback to old method of trying all target types if
KVM_ARM_PREFERRED_TARGET vm ioctl fails
- Print more info when handling KVM_EXIT_SYSTEM_EVENT

Anup Patel (4):
  kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine
target cpu
  kvmtool: ARM64: Add target type potenza for aarch64
  kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT
  kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

 tools/kvm/arm/aarch64/arm-cpu.c |9 ++-
 tools/kvm/arm/fdt.c |   39 +-
 tools/kvm/arm/kvm-cpu.c |   51 ++-
 tools/kvm/kvm-cpu.c |   19 +++
 4 files changed, 100 insertions(+), 18 deletions(-)

-- 
1.7.9.5



[PATCH v2 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-08-26 Thread Anup Patel
Instead of trying out each and every target type, we should
use the KVM_ARM_PREFERRED_TARGET vm ioctl to determine the target type
for KVM ARM/ARM64.

If the KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fall back to
the old method of trying all known target types.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/kvm-cpu.c |   46 +++---
 1 file changed, 35 insertions(+), 11 deletions(-)

diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index aeaa4cf..c010e9c 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -34,6 +34,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
struct kvm_cpu *vcpu;
int coalesced_offset, mmap_size, err = -1;
unsigned int i;
+   struct kvm_vcpu_init preferred_init;
struct kvm_vcpu_init vcpu_init = {
.features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
};
@@ -55,20 +56,42 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
-   /* Find an appropriate target CPU type. */
-   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
-   if (!kvm_arm_targets[i])
-   continue;
-   target = kvm_arm_targets[i];
-   vcpu_init.target = target->id;
+   /*
+* If preferred target ioctl successful then use preferred target
+* else try each and every target type.
+*/
+   err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
+   if (!err) {
+   /* Match preferred target CPU type. */
+   target = NULL;
+   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
+   if (!kvm_arm_targets[i])
+   continue;
+   if (kvm_arm_targets[i]->id == preferred_init.target) {
+   target = kvm_arm_targets[i];
+   break;
+   }
+   }
+
+   vcpu_init.target = preferred_init.target;
err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
-   if (!err)
-   break;
+   if (err || target->init(vcpu))
+   die("Unable to initialise vcpu for preferred target");
+   } else {
+   /* Find an appropriate target CPU type. */
+   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
+   if (!kvm_arm_targets[i])
+   continue;
+   target = kvm_arm_targets[i];
+   vcpu_init.target = target->id;
+   err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, 
&vcpu_init);
+   if (!err)
+   break;
+   }
+   if (err || target->init(vcpu))
+   die("Unable to initialise vcpu");
}
 
-   if (err || target->init(vcpu))
-   die("Unable to initialise ARM vcpu");
-
coalesced_offset = ioctl(kvm->sys_fd, KVM_CHECK_EXTENSION,
 KVM_CAP_COALESCED_MMIO);
if (coalesced_offset)
@@ -81,6 +104,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu->cpu_type  = target->id;
vcpu->cpu_compatible= target->compatible;
vcpu->is_running= true;
+
return vcpu;
 }
 
-- 
1.7.9.5



[PATCH v2 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-08-26 Thread Anup Patel
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux-3.16-rcX or higher, hence register an aarch64 target
type for it.

This patch enables us to run KVMTOOL on an X-Gene Potenza host.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch64/arm-cpu.c |9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index ce5ea2f..ce526e3 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -41,10 +41,17 @@ static struct kvm_arm_target target_cortex_a57 = {
.init   = arm_cpu__vcpu_init,
 };
 
+static struct kvm_arm_target target_potenza = {
+   .id = KVM_ARM_TARGET_XGENE_POTENZA,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static int arm_cpu__core_init(struct kvm *kvm)
 {
return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
-   kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
+   kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
+   kvm_cpu__register_kvm_arm_target(&target_potenza));
 }
 core_init(arm_cpu__core_init);
-- 
1.7.9.5



[PATCH v2 4/4] kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

2014-08-26 Thread Anup Patel
If in-kernel KVM supports PSCI-0.2 emulation then we should set
the KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
provide "arm,psci-0.2","arm,psci" as the PSCI compatible string.

This patch updates kvm_cpu__arch_init() and setup_fdt() as
per above.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/fdt.c |   39 +--
 tools/kvm/arm/kvm-cpu.c |5 +
 2 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/tools/kvm/arm/fdt.c b/tools/kvm/arm/fdt.c
index 186a718..93849cf2 100644
--- a/tools/kvm/arm/fdt.c
+++ b/tools/kvm/arm/fdt.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static char kern_cmdline[COMMAND_LINE_SIZE];
 
@@ -162,12 +163,38 @@ static int setup_fdt(struct kvm *kvm)
 
/* PSCI firmware */
_FDT(fdt_begin_node(fdt, "psci"));
-   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
-   _FDT(fdt_property_string(fdt, "method", "hvc"));
-   _FDT(fdt_property_cell(fdt, "cpu_suspend", KVM_PSCI_FN_CPU_SUSPEND));
-   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
-   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
-   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   const char compatible[] = "arm,psci-0.2\0arm,psci";
+   _FDT(fdt_property(fdt, "compatible",
+ compatible, sizeof(compatible)));
+   _FDT(fdt_property_string(fdt, "method", "hvc"));
+   if (kvm->cfg.arch.aarch32_guest) {
+   _FDT(fdt_property_cell(fdt, "cpu_suspend",
+   PSCI_0_2_FN_CPU_SUSPEND));
+   _FDT(fdt_property_cell(fdt, "cpu_off",
+   PSCI_0_2_FN_CPU_OFF));
+   _FDT(fdt_property_cell(fdt, "cpu_on",
+   PSCI_0_2_FN_CPU_ON));
+   _FDT(fdt_property_cell(fdt, "migrate",
+   PSCI_0_2_FN_MIGRATE));
+   } else {
+   _FDT(fdt_property_cell(fdt, "cpu_suspend",
+   PSCI_0_2_FN64_CPU_SUSPEND));
+   _FDT(fdt_property_cell(fdt, "cpu_off",
+   PSCI_0_2_FN_CPU_OFF));
+   _FDT(fdt_property_cell(fdt, "cpu_on",
+   PSCI_0_2_FN64_CPU_ON));
+   _FDT(fdt_property_cell(fdt, "migrate",
+   PSCI_0_2_FN64_MIGRATE));
+   }
+   } else {
+   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   _FDT(fdt_property_string(fdt, "method", "hvc"));
+   _FDT(fdt_property_cell(fdt, "cpu_suspend", 
KVM_PSCI_FN_CPU_SUSPEND));
+   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
+   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
+   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   }
_FDT(fdt_end_node(fdt));
 
/* Finalise. */
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index c010e9c..0637e9a 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -56,6 +56,11 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
+   /* Set KVM_ARM_VCPU_PSCI_0_2 if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
+   }
+
/*
 * If preferred target ioctl successful then use preferred target
 * else try each and every target type.
-- 
1.7.9.5



[PATCH v2 3/4] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-08-26 Thread Anup Patel
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a Guest.

Currently, it is used by in-kernel PSCI-0.2 emulation of
KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
or PSCI SYSTEM_RESET request.

For now, we simply treat all system-wide guest events as a
shutdown request in KVMTOOL.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/kvm-cpu.c |   19 +++
 1 file changed, 19 insertions(+)

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index ee0a8ec..6d01192 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -160,6 +160,25 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
goto exit_kvm;
case KVM_EXIT_SHUTDOWN:
goto exit_kvm;
+   case KVM_EXIT_SYSTEM_EVENT:
+   /*
+* Print the type of system event and
+* treat all system events as shutdown request.
+*/
+   switch (cpu->kvm_run->system_event.type) {
+   case KVM_SYSTEM_EVENT_SHUTDOWN:
+   printf("  # Info: shutdown system event\n");
+   break;
+   case KVM_SYSTEM_EVENT_RESET:
+   printf("  # Info: reset system event\n");
+   break;
+   default:
+   printf("  # Warning: unknown system event 
type=%d\n",
+  cpu->kvm_run->system_event.type);
+   break;
+   };
+   printf("  # Info: exiting KVMTOOL\n");
+   goto exit_kvm;
default: {
bool ret;
 
-- 
1.7.9.5



Re: [PATCH v2 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-08-29 Thread Anup Patel
Hi Andre,

On 29 August 2014 14:40, Andre Przywara  wrote:
> (resent, that was the wrong account before ...)
>
> Hi Anup,
>
> On 26/08/14 10:22, Anup Patel wrote:
>> Instead, of trying out each and every target type we should
>> use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target type
>> for KVM ARM/ARM64.
>>
>> If KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fallback to
>> old method of trying all known target types.
>
> So as the algorithm currently works, it does not give us much
> improvement over the current behaviour. We still need to list each
> supported MPIDR both in kvmtool and in the kernel.
> Looking more closely at the code, beside the target id we only need the
> kvm_target_arm[] list for the compatible string and the init() function.
> The latter is (currently) the same for all supported type, so we could
> use that as a standard fallback function.
> The compatible string seems to be completely ignored by the ARM64
> kernel, so we could as well pass "arm,armv8" all the time.
> In ARM(32) kernels we seem to not make any real use of it for CPUs which
> we care for (with virtualisation extensions).

You are absolutely right here. I was just trying to keep
KVMTOOL changes to a minimum.

>
> So what about the following:
> We keep the list as it is, but not extend it for future CPUs, expect
> those in need for a special compatible string or a specific init
> function. Instead we rely on PREFFERED_TARGET for all current and
> upcoming CPUs (meaning unsupported CPUs must use a 3.12 kernel or higher).
> If PREFERRED_TARGET works, we scan the list anyway (to find CPUs needing
> special treatment), but on failure of finding something in the list we
> just go ahead:
> - with the target ID the kernel returned,
> - an "arm,armv8" compatible string (for arm64, not sure about arm) and
> - call the standard kvmtool init function
>
> This should relief us from the burden of adding each supported CPU to
> kvmtool.
>
> Does that make sense of am I missing something?
> I will hack something up to prove that it works.

Yes, this makes sense. In fact, QEMU does something similar
for the "-cpu host -M virt" command line options.

I think I should be less lazy on this one. I will rework this
and make it more like QEMU's "-cpu host" option.
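
Roughly, the "-cpu host"-style flow would look like the sketch below
(illustrative only; the helper name is made up, the pre-3.12 probing
fallback is left to the caller, and the real version lands in a later
revision of this series):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Initialise a VCPU as "whatever the host is", QEMU "-cpu host" style. */
	static int vcpu_init_as_host(int vm_fd, int vcpu_fd, struct kvm_vcpu_init *init)
	{
		struct kvm_vcpu_init preferred;

		/* Ask the kernel which target id matches the host CPU (3.12+ only). */
		if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred))
			return -1;	/* old kernel: caller falls back to probing targets */

		/*
		 * Use the returned id directly; a userspace target table is only
		 * needed for CPUs wanting a special DT compatible string or init
		 * hook, everything else can default to "arm,armv8".
		 */
		init->target = preferred.target;
		return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
	}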

Thanks,
Anup

>
> Also there is now a race on big.LITTLE systems: if the PREFERRED_TARGET
> ioctl is executed on one cluster, while the KVM_ARM_VCPU_INIT call is
> done on another core with a different MPIDR, then the kernel will refuse
> to init the CPU. I don't know of a good solution for this (except the
> sledgehammer pinning with sched_setaffinity to the current core, which
> is racy, too, but should at least work somehow ;-)
> Any ideas?
>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/kvm-cpu.c |   46 
>> +++---
>>  1 file changed, 35 insertions(+), 11 deletions(-)
>>
>> diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
>> index aeaa4cf..c010e9c 100644
>> --- a/tools/kvm/arm/kvm-cpu.c
>> +++ b/tools/kvm/arm/kvm-cpu.c
>> @@ -34,6 +34,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>> unsigned long cpu_id)
>>   struct kvm_cpu *vcpu;
>>   int coalesced_offset, mmap_size, err = -1;
>>   unsigned int i;
>> + struct kvm_vcpu_init preferred_init;
>>   struct kvm_vcpu_init vcpu_init = {
>>   .features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
>>   };
>> @@ -55,20 +56,42 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>> unsigned long cpu_id)
>>   if (vcpu->kvm_run == MAP_FAILED)
>>   die("unable to mmap vcpu fd");
>>
>> - /* Find an appropriate target CPU type. */
>> - for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>> - if (!kvm_arm_targets[i])
>> - continue;
>> - target = kvm_arm_targets[i];
>> - vcpu_init.target = target->id;
>> + /*
>> +  * If preferred target ioctl successful then use preferred target
>> +  * else try each and every target type.
>> +  */
>> + err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
>> + if (!err) {
>> + /* Match preferred target CPU type. */
>> + target = NULL;
>> + for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>> + if (!kvm_arm_targets[i])
>> + continue;
>> + if (kvm_a

Re: [PATCH v2 4/4] kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

2014-08-29 Thread Anup Patel
Hi Andre,

On 29 August 2014 14:41, Andre Przywara  wrote:
> Hi Anup,
>
> On 26/08/14 10:22, Anup Patel wrote:
>> If in-kernel KVM support PSCI-0.2 emulation then we should set
>> KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
>> provide "arm,psci-0.2","arm,psci" as PSCI compatible string.
>>
>> This patch updates kvm_cpu__arch_init() and setup_fdt() as
>> per above.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/fdt.c |   39 +--
>>  tools/kvm/arm/kvm-cpu.c |5 +
>>  2 files changed, 38 insertions(+), 6 deletions(-)
>>
>> diff --git a/tools/kvm/arm/fdt.c b/tools/kvm/arm/fdt.c
>> index 186a718..93849cf2 100644
>> --- a/tools/kvm/arm/fdt.c
>> +++ b/tools/kvm/arm/fdt.c
>> @@ -13,6 +13,7 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include 
>>
>>  static char kern_cmdline[COMMAND_LINE_SIZE];
>>
>> @@ -162,12 +163,38 @@ static int setup_fdt(struct kvm *kvm)
>>
>>   /* PSCI firmware */
>>   _FDT(fdt_begin_node(fdt, "psci"));
>> - _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
>> - _FDT(fdt_property_string(fdt, "method", "hvc"));
>> - _FDT(fdt_property_cell(fdt, "cpu_suspend", KVM_PSCI_FN_CPU_SUSPEND));
>> - _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
>> - _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
>> - _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
>> + if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
>> + const char compatible[] = "arm,psci-0.2\0arm,psci";
>> + _FDT(fdt_property(fdt, "compatible",
>> +   compatible, sizeof(compatible)));
>> + _FDT(fdt_property_string(fdt, "method", "hvc"));
>> + if (kvm->cfg.arch.aarch32_guest) {
>> + _FDT(fdt_property_cell(fdt, "cpu_suspend",
>> + PSCI_0_2_FN_CPU_SUSPEND));
>> + _FDT(fdt_property_cell(fdt, "cpu_off",
>> + PSCI_0_2_FN_CPU_OFF));
>> + _FDT(fdt_property_cell(fdt, "cpu_on",
>> + PSCI_0_2_FN_CPU_ON));
>> + _FDT(fdt_property_cell(fdt, "migrate",
>> + PSCI_0_2_FN_MIGRATE));
>> + } else {
>> + _FDT(fdt_property_cell(fdt, "cpu_suspend",
>> + PSCI_0_2_FN64_CPU_SUSPEND));
>> + _FDT(fdt_property_cell(fdt, "cpu_off",
>> + PSCI_0_2_FN_CPU_OFF));
>> + _FDT(fdt_property_cell(fdt, "cpu_on",
>> + PSCI_0_2_FN64_CPU_ON));
>> + _FDT(fdt_property_cell(fdt, "migrate",
>> + PSCI_0_2_FN64_MIGRATE));
>> + }
>> + } else {
>> + _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
>> + _FDT(fdt_property_string(fdt, "method", "hvc"));
>> + _FDT(fdt_property_cell(fdt, "cpu_suspend", 
>> KVM_PSCI_FN_CPU_SUSPEND));
>> + _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
>> + _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
>> + _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
>> + }
>
> I guess this could be simplified much by defining three arrays with the
> respective function IDs and setting a pointer to the right one here.
> Then there would still be only one set of _FDT() calls, which reference
> this pointer. Like:
> uint32_t *psci_fn_ids;
> ...
> if (KVM_CAP_ARM_PSCI_0_2) {
> if (aarch32_guest)
> psci_fn_ids = psci_0_2_fn_ids;
> else
> psci_fn_ids = psci_0_2_fn64_ids;
> } else
> psci_fn_ids = psci_0_1_fn_ids;
> _FDT(fdt_property_cell(fdt, "cpu_suspend", psci_fn_ids[0]));
> _FDT(fdt_property_cell(fdt, "cpu_off", psci_fn_ids[1]));
> ...
>
> Also I wonder if we a

[PATCH v3 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-09-08 Thread Anup Patel
Instead of trying out each and every target type, we should
use the KVM_ARM_PREFERRED_TARGET vm ioctl to determine the target type
for KVM ARM/ARM64.

If the KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fall back to
the old method of trying all known target types.

If the KVM_ARM_PREFERRED_TARGET vm ioctl succeeds but the returned
target type is not known to KVMTOOL then we forcefully init the
VCPU with the target type returned by the KVM_ARM_PREFERRED_TARGET vm ioctl.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/kvm-cpu.c |   52 +--
 1 file changed, 41 insertions(+), 11 deletions(-)

diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index aeaa4cf..ba7a762 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -33,7 +33,8 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
struct kvm_arm_target *target;
struct kvm_cpu *vcpu;
int coalesced_offset, mmap_size, err = -1;
-   unsigned int i;
+   unsigned int i, target_type;
+   struct kvm_vcpu_init preferred_init;
struct kvm_vcpu_init vcpu_init = {
.features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
};
@@ -55,19 +56,47 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
-   /* Find an appropriate target CPU type. */
-   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
-   if (!kvm_arm_targets[i])
-   continue;
-   target = kvm_arm_targets[i];
-   vcpu_init.target = target->id;
-   err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
-   if (!err)
-   break;
+   /*
+* If preferred target ioctl successful then use preferred target
+* else try each and every target type.
+*/
+   err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
+   if (!err) {
+   /* Match preferred target CPU type. */
+   target = NULL;
+   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
+   if (!kvm_arm_targets[i])
+   continue;
+   if (kvm_arm_targets[i]->id == preferred_init.target) {
+   target = kvm_arm_targets[i];
+   target_type = kvm_arm_targets[i]->id;
+   break;
+   }
+   }
+   if (!target) {
+   target = kvm_arm_targets[0];
+   target_type = preferred_init.target;
+   }
+   } else {
+   /* Find an appropriate target CPU type. */
+   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
+   if (!kvm_arm_targets[i])
+   continue;
+   target = kvm_arm_targets[i];
+   target_type = target->id;
+   vcpu_init.target = target_type;
+   err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, 
&vcpu_init);
+   if (!err)
+   break;
+   }
+   if (err)
+   die("Unable to find matching target");
}
 
+   vcpu_init.target = target_type;
+   err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
if (err || target->init(vcpu))
-   die("Unable to initialise ARM vcpu");
+   die("Unable to initialise vcpu");
 
coalesced_offset = ioctl(kvm->sys_fd, KVM_CHECK_EXTENSION,
 KVM_CAP_COALESCED_MMIO);
@@ -81,6 +110,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu->cpu_type  = target->id;
vcpu->cpu_compatible= target->compatible;
vcpu->is_running= true;
+
return vcpu;
 }
 
-- 
1.7.9.5



[PATCH v3 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-09-08 Thread Anup Patel
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux-3.16-rcX or higher, hence register an aarch64 target
type for it.

This patch enables us to run KVMTOOL on an X-Gene Potenza host.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch64/arm-cpu.c |9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index ce5ea2f..ce526e3 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -41,10 +41,17 @@ static struct kvm_arm_target target_cortex_a57 = {
.init   = arm_cpu__vcpu_init,
 };
 
+static struct kvm_arm_target target_potenza = {
+   .id = KVM_ARM_TARGET_XGENE_POTENZA,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static int arm_cpu__core_init(struct kvm *kvm)
 {
return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
-   kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
+   kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
+   kvm_cpu__register_kvm_arm_target(&target_potenza));
 }
 core_init(arm_cpu__core_init);
-- 
1.7.9.5



[PATCH v3 3/4] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-09-08 Thread Anup Patel
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a Guest.

Currently, it is used by in-kernel PSCI-0.2 emulation of
KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
or PSCI SYSTEM_RESET request.

For now, we simply treat all system-wide guest events as a
shutdown request in KVMTOOL.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/kvm-cpu.c |   19 +++
 1 file changed, 19 insertions(+)

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index ee0a8ec..6d01192 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -160,6 +160,25 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
goto exit_kvm;
case KVM_EXIT_SHUTDOWN:
goto exit_kvm;
+   case KVM_EXIT_SYSTEM_EVENT:
+   /*
+* Print the type of system event and
+* treat all system events as shutdown request.
+*/
+   switch (cpu->kvm_run->system_event.type) {
+   case KVM_SYSTEM_EVENT_SHUTDOWN:
+   printf("  # Info: shutdown system event\n");
+   break;
+   case KVM_SYSTEM_EVENT_RESET:
+   printf("  # Info: reset system event\n");
+   break;
+   default:
+   printf("  # Warning: unknown system event 
type=%d\n",
+  cpu->kvm_run->system_event.type);
+   break;
+   };
+   printf("  # Info: exiting KVMTOOL\n");
+   goto exit_kvm;
default: {
bool ret;
 
-- 
1.7.9.5



[PATCH v3 4/4] kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

2014-09-08 Thread Anup Patel
If in-kernel KVM supports PSCI-0.2 emulation then we should set
the KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
provide "arm,psci-0.2","arm,psci" as the PSCI compatible string.

This patch updates kvm_cpu__arch_init() and setup_fdt() as
per above.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/fdt.c |   52 ++-
 tools/kvm/arm/kvm-cpu.c |5 +
 2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/tools/kvm/arm/fdt.c b/tools/kvm/arm/fdt.c
index 186a718..a15450e 100644
--- a/tools/kvm/arm/fdt.c
+++ b/tools/kvm/arm/fdt.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static char kern_cmdline[COMMAND_LINE_SIZE];
 
@@ -84,6 +85,34 @@ static void generate_irq_prop(void *fdt, u8 irq)
_FDT(fdt_property(fdt, "interrupts", irq_prop, sizeof(irq_prop)));
 }
 
+struct psci_fns {
+   u32 cpu_suspend;
+   u32 cpu_off;
+   u32 cpu_on;
+   u32 migrate;
+};
+
+static struct psci_fns psci_0_1_fns = {
+   .cpu_suspend = KVM_PSCI_FN_CPU_SUSPEND,
+   .cpu_off = KVM_PSCI_FN_CPU_OFF,
+   .cpu_on = KVM_PSCI_FN_CPU_ON,
+   .migrate = KVM_PSCI_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch32_fns = {
+   .cpu_suspend = PSCI_0_2_FN_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN_CPU_ON,
+   .migrate = PSCI_0_2_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch64_fns = {
+   .cpu_suspend = PSCI_0_2_FN64_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN64_CPU_ON,
+   .migrate = PSCI_0_2_FN64_MIGRATE,
+};
+
 static int setup_fdt(struct kvm *kvm)
 {
struct device_header *dev_hdr;
@@ -93,6 +122,7 @@ static int setup_fdt(struct kvm *kvm)
cpu_to_fdt64(kvm->arch.memory_guest_start),
cpu_to_fdt64(kvm->ram_size),
};
+   struct psci_fns *fns;
void *fdt   = staging_fdt;
void *fdt_dest  = guest_flat_to_host(kvm,
 kvm->arch.dtb_guest_start);
@@ -162,12 +192,24 @@ static int setup_fdt(struct kvm *kvm)
 
/* PSCI firmware */
_FDT(fdt_begin_node(fdt, "psci"));
-   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   const char compatible[] = "arm,psci-0.2\0arm,psci";
+   _FDT(fdt_property(fdt, "compatible",
+ compatible, sizeof(compatible)));
+   if (kvm->cfg.arch.aarch32_guest) {
+   fns = &psci_0_2_aarch32_fns;
+   } else {
+   fns = &psci_0_2_aarch64_fns;
+   }
+   } else {
+   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   fns = &psci_0_1_fns;
+   }
_FDT(fdt_property_string(fdt, "method", "hvc"));
-   _FDT(fdt_property_cell(fdt, "cpu_suspend", KVM_PSCI_FN_CPU_SUSPEND));
-   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
-   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
-   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   _FDT(fdt_property_cell(fdt, "cpu_suspend", fns->cpu_suspend));
+   _FDT(fdt_property_cell(fdt, "cpu_off", fns->cpu_off));
+   _FDT(fdt_property_cell(fdt, "cpu_on", fns->cpu_on));
+   _FDT(fdt_property_cell(fdt, "migrate", fns->migrate));
_FDT(fdt_end_node(fdt));
 
/* Finalise. */
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index ba7a762..3a5f358 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -56,6 +56,11 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
+   /* Set KVM_ARM_VCPU_PSCI_0_2 if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
+   }
+
/*
 * If preferred target ioctl successful then use preferred target
 * else try each and every target type.
-- 
1.7.9.5



[PATCH v3 0/4] kvmtool: ARM/ARM64: Misc updates

2014-09-08 Thread Anup Patel
This patchset updates KVMTOOL to use some of the features
supported by Linux-3.16 KVM ARM/ARM64, such as:

1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
2. Target CPU type Potenza for using KVMTOOL on X-Gene
3. PSCI v0.2 support for Aarch32 and Aarch64 guest
4. System event exit reason

Changes since v2:
- Use target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl
  for VCPU init such that we don't need to update KVMTOOL for
  every new host hardware
- Simplify DTB generation for PSCI node

Changes since v1:
- Drop the patch to fix compile error for aarch64
- Fallback to old method of trying all target types if
KVM_ARM_PREFERRED_TARGET vm ioctl fails
- Print more info when handling KVM_EXIT_SYSTEM_EVENT

Anup Patel (4):
  kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine
target cpu
  kvmtool: ARM64: Add target type potenza for aarch64
  kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT
  kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

 tools/kvm/arm/aarch64/arm-cpu.c |9 ++-
 tools/kvm/arm/fdt.c |   52 +++
 tools/kvm/arm/kvm-cpu.c |   57 +++
 tools/kvm/kvm-cpu.c |   19 +
 4 files changed, 120 insertions(+), 17 deletions(-)

-- 
1.7.9.5



Re: [PATCH v3 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-09-16 Thread Anup Patel
On Thu, Sep 11, 2014 at 9:24 PM, Andre Przywara  wrote:
> Hi Anup,
>
> On 08/09/14 09:17, Anup Patel wrote:
>> Instead, of trying out each and every target type we should
>> use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target type
>> for KVM ARM/ARM64.
>>
>> If KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fallback to
>> old method of trying all known target types.
>>
>> If KVM_ARM_PREFERRED_TARGET vm ioctl succeeds but the returned
>> target type is not known to KVMTOOL then we forcefully init
>> VCPU with target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/kvm-cpu.c |   52 
>> +--
>>  1 file changed, 41 insertions(+), 11 deletions(-)
>>
>> diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
>> index aeaa4cf..ba7a762 100644
>> --- a/tools/kvm/arm/kvm-cpu.c
>> +++ b/tools/kvm/arm/kvm-cpu.c
>> @@ -33,7 +33,8 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>> unsigned long cpu_id)
>>   struct kvm_arm_target *target;
>>   struct kvm_cpu *vcpu;
>>   int coalesced_offset, mmap_size, err = -1;
>> - unsigned int i;
>> + unsigned int i, target_type;
>> + struct kvm_vcpu_init preferred_init;
>>   struct kvm_vcpu_init vcpu_init = {
>>   .features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
>>   };
>> @@ -55,19 +56,47 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>> unsigned long cpu_id)
>>   if (vcpu->kvm_run == MAP_FAILED)
>>   die("unable to mmap vcpu fd");
>>
>> - /* Find an appropriate target CPU type. */
>> - for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>> - if (!kvm_arm_targets[i])
>> - continue;
>> - target = kvm_arm_targets[i];
>> - vcpu_init.target = target->id;
>> - err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
>> - if (!err)
>> - break;
>> + /*
>> +  * If preferred target ioctl successful then use preferred target
>> +  * else try each and every target type.
>> +  */
>> + err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
>> + if (!err) {
>> + /* Match preferred target CPU type. */
>> + target = NULL;
>> + for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>> + if (!kvm_arm_targets[i])
>> + continue;
>> + if (kvm_arm_targets[i]->id == preferred_init.target) {
>> + target = kvm_arm_targets[i];
>> + target_type = kvm_arm_targets[i]->id;
>> + break;
>> + }
>> + }
>> + if (!target) {
>> + target = kvm_arm_targets[0];
>
> I think you missed the part of the patch which adds the now magic zero
> member of kvm_arm_targets[]. A simple static initializer should work.

Yes, good catch. I will fix this.

>
>> + target_type = preferred_init.target;
>
> Can't you move that out of the loop, in front of it actually? Then you
> can get rid of the line above setting the target_type also, since you
> always use the same value now, regardless whether you found that CPU in
> the list or not.

Sure, I'll rearrange this.

>
>> + }
>> + } else {
>> + /* Find an appropriate target CPU type. */
>> + for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>> + if (!kvm_arm_targets[i])
>> + continue;
>> + target = kvm_arm_targets[i];
>> + target_type = target->id;
>> + vcpu_init.target = target_type;
>> + err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, 
>> &vcpu_init);
>> + if (!err)
>> + break;
>> + }
>> + if (err)
>> + die("Unable to find matching target");
>>   }
>>
>> + vcpu_init.target = target_type;
>> + err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
>
> You should do this only in the if-branch above, since you (try to) call
> KVM

Re: [PATCH v3 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-09-16 Thread Anup Patel
On Thu, Sep 11, 2014 at 9:37 PM, Andre Przywara  wrote:
> Anup,
>
> On 08/09/14 09:17, Anup Patel wrote:
>> The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
>> in latest Linux-3.16-rcX or higher hence register aarch64 target
>> type for it.
>>
>> This patch enables us to run KVMTOOL on X-Gene Potenza host.
>
> Why do you need this still if the previous patch got rid of the need for
> naming each and every CPU in kvmtool?
> Do you care about kernels older than 3.12? I wouldn't bother so much
> since you'd need a much newer kvmtool anyway.

Yes, actually the APM SW team uses the 3.12 kernel.

>
> Can you consider dropping this patch then?
> I'd rather avoid adding CPUs to this list needlessly from now on.

I think let's keep this patch because there are quite a few X-Gene
users using older kernels.

Regards,
Anup

>
> Regards,
> Andre.
>
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/aarch64/arm-cpu.c |9 -
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/kvm/arm/aarch64/arm-cpu.c 
>> b/tools/kvm/arm/aarch64/arm-cpu.c
>> index ce5ea2f..ce526e3 100644
>> --- a/tools/kvm/arm/aarch64/arm-cpu.c
>> +++ b/tools/kvm/arm/aarch64/arm-cpu.c
>> @@ -41,10 +41,17 @@ static struct kvm_arm_target target_cortex_a57 = {
>>   .init   = arm_cpu__vcpu_init,
>>  };
>>
>> +static struct kvm_arm_target target_potenza = {
>> + .id = KVM_ARM_TARGET_XGENE_POTENZA,
>> + .compatible = "arm,arm-v8",
>> + .init   = arm_cpu__vcpu_init,
>> +};
>> +
>>  static int arm_cpu__core_init(struct kvm *kvm)
>>  {
>>   return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
>>   kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
>> - kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
>> + kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
>> + kvm_cpu__register_kvm_arm_target(&target_potenza));
>>  }
>>  core_init(arm_cpu__core_init);
>>


Re: [PATCH v3 3/4] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-09-16 Thread Anup Patel
On Thu, Sep 11, 2014 at 9:56 PM, Andre Przywara  wrote:
>
> On 08/09/14 09:17, Anup Patel wrote:
>> The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
>> architecture independent system-wide events for a Guest.
>>
>> Currently, it is used by in-kernel PSCI-0.2 emulation of
>> KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
>> or PSCI SYSTEM_RESET request.
>>
>> For now, we simply treat all system-wide guest events as
>> shutdown request in KVMTOOL.
>
> Is that really a good idea to default to exit_kvm?
> I find a shutdown a rather drastic default.
> Also I'd like to see RESET not easily mapped to shutdown. If the user
> resets the box explicitly, it's probably expected to come up again (to
> load an updated kernel or proceed with an install).

Absolutely correct, but we don't have a VM reboot API in KVMTOOL
so I chose this route.

> So what about a more explicit message like: "... please restart the VM"
> until we gain proper reboot support in kvmtool?

Sure, I will print additional info for the reset event, such as:
INFO: VM reboot support not available
INFO: Please restart the VM manually

Regards,
Anup

>
> Regards,
> Andre.
>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/kvm-cpu.c |   19 +++
>>  1 file changed, 19 insertions(+)
>>
>> diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
>> index ee0a8ec..6d01192 100644
>> --- a/tools/kvm/kvm-cpu.c
>> +++ b/tools/kvm/kvm-cpu.c
>> @@ -160,6 +160,25 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
>>   goto exit_kvm;
>>   case KVM_EXIT_SHUTDOWN:
>>   goto exit_kvm;
>> + case KVM_EXIT_SYSTEM_EVENT:
>> + /*
>> +  * Print the type of system event and
>> +  * treat all system events as shutdown request.
>> +  */
>> + switch (cpu->kvm_run->system_event.type) {
>> + case KVM_SYSTEM_EVENT_SHUTDOWN:
>> + printf("  # Info: shutdown system event\n");
>> + break;
>> + case KVM_SYSTEM_EVENT_RESET:
>> + printf("  # Info: reset system event\n");
>> + break;
>> + default:
>> + printf("  # Warning: unknown system event 
>> type=%d\n",
>> +cpu->kvm_run->system_event.type);
>> + break;
>> + };
>> + printf("  # Info: exiting KVMTOOL\n");
>> + goto exit_kvm;
>>   default: {
>>   bool ret;
>>
>>


Re: [PATCH v3 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-09-16 Thread Anup Patel
On Wed, Sep 17, 2014 at 3:43 AM, Anup Patel  wrote:
> On Thu, Sep 11, 2014 at 9:24 PM, Andre Przywara  
> wrote:
>> Hi Anup,
>>
>> On 08/09/14 09:17, Anup Patel wrote:
>>> Instead, of trying out each and every target type we should
>>> use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target type
>>> for KVM ARM/ARM64.
>>>
>>> If KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fallback to
>>> old method of trying all known target types.
>>>
>>> If KVM_ARM_PREFERRED_TARGET vm ioctl succeeds but the returned
>>> target type is not known to KVMTOOL then we forcefully init
>>> VCPU with target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl.
>>>
>>> Signed-off-by: Pranavkumar Sawargaonkar 
>>> Signed-off-by: Anup Patel 
>>> ---
>>>  tools/kvm/arm/kvm-cpu.c |   52 
>>> +--
>>>  1 file changed, 41 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
>>> index aeaa4cf..ba7a762 100644
>>> --- a/tools/kvm/arm/kvm-cpu.c
>>> +++ b/tools/kvm/arm/kvm-cpu.c
>>> @@ -33,7 +33,8 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>>> unsigned long cpu_id)
>>>   struct kvm_arm_target *target;
>>>   struct kvm_cpu *vcpu;
>>>   int coalesced_offset, mmap_size, err = -1;
>>> - unsigned int i;
>>> + unsigned int i, target_type;
>>> + struct kvm_vcpu_init preferred_init;
>>>   struct kvm_vcpu_init vcpu_init = {
>>>   .features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
>>>   };
>>> @@ -55,19 +56,47 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
>>> unsigned long cpu_id)
>>>   if (vcpu->kvm_run == MAP_FAILED)
>>>   die("unable to mmap vcpu fd");
>>>
>>> - /* Find an appropriate target CPU type. */
>>> - for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>>> - if (!kvm_arm_targets[i])
>>> - continue;
>>> - target = kvm_arm_targets[i];
>>> - vcpu_init.target = target->id;
>>> - err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, &vcpu_init);
>>> - if (!err)
>>> - break;
>>> + /*
>>> +  * If preferred target ioctl successful then use preferred target
>>> +  * else try each and every target type.
>>> +  */
>>> + err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
>>> + if (!err) {
>>> + /* Match preferred target CPU type. */
>>> + target = NULL;
>>> + for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>>> + if (!kvm_arm_targets[i])
>>> + continue;
>>> + if (kvm_arm_targets[i]->id == preferred_init.target) {
>>> + target = kvm_arm_targets[i];
>>> + target_type = kvm_arm_targets[i]->id;
>>> + break;
>>> + }
>>> + }
>>> + if (!target) {
>>> + target = kvm_arm_targets[0];
>>
>> I think you missed the part of the patch which adds the now magic zero
>> member of kvm_arm_targets[]. A simple static initializer should work.
>
> Yes, good catch. I will fix this.
>
>>
>>> + target_type = preferred_init.target;
>>
>> Can't you move that out of the loop, in front of it actually? Then you
>> can get rid of the line above setting the target_type also, since you
>> always use the same value now, regardless whether you found that CPU in
>> the list or not.
>
> Sure, I'll rearrange this.
>
>>
>>> + }
>>> + } else {
>>> + /* Find an appropriate target CPU type. */
>>> + for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
>>> + if (!kvm_arm_targets[i])
>>> + continue;
>>> + target = kvm_arm_targets[i];
>>> + target_type = target->id;
>>> + vcpu_init.target = target_type;
>>> + err = ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_INIT, 
>>> &vcpu_init);
>>> +  

Re: [PATCH v3 3/4] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-09-16 Thread Anup Patel
On Wed, Sep 17, 2014 at 3:59 AM, Anup Patel  wrote:
> On Thu, Sep 11, 2014 at 9:56 PM, Andre Przywara  
> wrote:
>>
>> On 08/09/14 09:17, Anup Patel wrote:
>>> The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
>>> architecture independent system-wide events for a Guest.
>>>
>>> Currently, it is used by in-kernel PSCI-0.2 emulation of
>>> KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
>>> or PSCI SYSTEM_RESET request.
>>>
>>> For now, we simply treat all system-wide guest events as
>>> shutdown request in KVMTOOL.
>>
>> Is that really a good idea to default to exit_kvm?
>> I find a shutdown a rather drastic default.
>> Also I'd like to see RESET not easily mapped to shutdown. If the user
>> resets the box explicitly, it's probably expected to come up again (to
>> load an updated kernel or proceed with an install).
>
> Absolutely correct but we don't have VM reboot API in KVMTOOL
> so I choose this route.
>
>> So what about a more explicit message like: "... please restart the VM"
>> until we gain proper reboot support in kvmtool?
>
> Sure, I will print additional info for reset event such as:
> INFO: VM reboot support not available
> INFO: Please restart the VM manually
>
> Regards,
> Anup
>
>>
>> Regards,
>> Andre.
>>
>>> Signed-off-by: Pranavkumar Sawargaonkar 
>>> Signed-off-by: Anup Patel 
>>> ---
>>>  tools/kvm/kvm-cpu.c |   19 +++
>>>  1 file changed, 19 insertions(+)
>>>
>>> diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
>>> index ee0a8ec..6d01192 100644
>>> --- a/tools/kvm/kvm-cpu.c
>>> +++ b/tools/kvm/kvm-cpu.c
>>> @@ -160,6 +160,25 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
>>>   goto exit_kvm;
>>>   case KVM_EXIT_SHUTDOWN:
>>>   goto exit_kvm;
>>> + case KVM_EXIT_SYSTEM_EVENT:
>>> + /*
>>> +  * Print the type of system event and
>>> +  * treat all system events as shutdown request.
>>> +  */
>>> + switch (cpu->kvm_run->system_event.type) {
>>> + case KVM_SYSTEM_EVENT_SHUTDOWN:
>>> + printf("  # Info: shutdown system event\n");
>>> + break;
>>> + case KVM_SYSTEM_EVENT_RESET:
>>> + printf("  # Info: reset system event\n");
>>> + break;
>>> + default:
>>> + printf("  # Warning: unknown system event 
>>> type=%d\n",
>>> +cpu->kvm_run->system_event.type);
>>> + break;
>>> + };
>>> + printf("  # Info: exiting KVMTOOL\n");
>>> + goto exit_kvm;
>>>   default: {
>>>   bool ret;
>>>
>>>

Please ignore the confidentiality notice appended to my last reply; I used the wrong email address there.

Regards,
Anup



Re: [PATCH v3 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-09-16 Thread Anup Patel
On Wed, Sep 17, 2014 at 3:54 AM, Anup Patel  wrote:
> On Thu, Sep 11, 2014 at 9:37 PM, Andre Przywara  
> wrote:
>> Anup,
>>
>> On 08/09/14 09:17, Anup Patel wrote:
>>> The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
>>> in latest Linux-3.16-rcX or higher hence register aarch64 target
>>> type for it.
>>>
>>> This patch enables us to run KVMTOOL on X-Gene Potenza host.
>>
>> Why do you need this still if the previous patch got rid of the need for
>> naming each and every CPU in kvmtool?
>> Do you care about kernels older than 3.12? I wouldn't bother so much
>> since you'd need a much newer kvmtool anyway.
>
> Yes, actually APM SW team uses 3.12 kernel.
>
>>
>> Can you consider dropping this patch then?
>> I'd rather avoid adding CPUs to this list needlessly from now on.
>
> I think lets keep this patch because there are quite a few X-Gene
> users using older kernel.
>
> Regards,
> Anup
>
>>
>> Regards,
>> Andre.
>>
>>>
>>> Signed-off-by: Pranavkumar Sawargaonkar 
>>> Signed-off-by: Anup Patel 
>>> ---
>>>  tools/kvm/arm/aarch64/arm-cpu.c |9 -
>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tools/kvm/arm/aarch64/arm-cpu.c 
>>> b/tools/kvm/arm/aarch64/arm-cpu.c
>>> index ce5ea2f..ce526e3 100644
>>> --- a/tools/kvm/arm/aarch64/arm-cpu.c
>>> +++ b/tools/kvm/arm/aarch64/arm-cpu.c
>>> @@ -41,10 +41,17 @@ static struct kvm_arm_target target_cortex_a57 = {
>>>   .init   = arm_cpu__vcpu_init,
>>>  };
>>>
>>> +static struct kvm_arm_target target_potenza = {
>>> + .id = KVM_ARM_TARGET_XGENE_POTENZA,
>>> + .compatible = "arm,arm-v8",
>>> + .init   = arm_cpu__vcpu_init,
>>> +};
>>> +
>>>  static int arm_cpu__core_init(struct kvm *kvm)
>>>  {
>>>   return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
>>>   kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
>>> - kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
>>> + kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
>>> + kvm_cpu__register_kvm_arm_target(&target_potenza));
>>>  }
>>>  core_init(arm_cpu__core_init);
>>>

Please ignore the confidentiality notice appended to my last reply; I used the wrong email address there.

Regards,
Anup



[PATCH v4 0/4] kvmtool: ARM/ARM64: Misc updates

2014-09-18 Thread Anup Patel
This patchset updates KVMTOOL to use some of the features
supported by Linux-3.16 KVM ARM/ARM64, such as:

1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
2. Target CPU type Potenza for using KVMTOOL on X-Gene
3. PSCI v0.2 support for Aarch32 and Aarch64 guest
4. System event exit reason

Changes since v3:
- Add generic targets for aarch32 and aarch64 which are used
  by KVMTOOL when target type returned by KVM_ARM_PREFERRED_TARGET
  vm ioctl is not known to KVMTOOL
- Print more info when handling system reset event

Changes since v2:
- Use target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl
  for VCPU init such that we don't need to update KVMTOOL for
  every new host hardware
- Simplify DTB generation for PSCI node

Changes since v1:
- Drop the patch to fix compile error for aarch64
- Fallback to old method of trying all target types if
KVM_ARM_PREFERRED_TARGET vm ioctl fails
- Print more info when handling KVM_EXIT_SYSTEM_EVENT

Anup Patel (4):
  kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine
target cpu
  kvmtool: ARM64: Add target type potenza for aarch64
  kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT
  kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

 tools/kvm/arm/aarch32/arm-cpu.c |9 ++-
 tools/kvm/arm/aarch64/arm-cpu.c |   19 ++---
 tools/kvm/arm/fdt.c |   52 +++
 tools/kvm/arm/kvm-cpu.c |   57 +++
 tools/kvm/kvm-cpu.c |   21 +++
 5 files changed, 138 insertions(+), 20 deletions(-)

-- 
1.7.9.5



[PATCH v4 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-09-18 Thread Anup Patel
Instead of trying out each and every target type, we should
use the KVM_ARM_PREFERRED_TARGET vm ioctl to determine the target type
for KVM ARM/ARM64.

If the KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fall back to
the old method of trying all known target types.

If the KVM_ARM_PREFERRED_TARGET vm ioctl succeeds but the returned
target type is not known to KVMTOOL then we forcefully init the
VCPU with the target type returned by the KVM_ARM_PREFERRED_TARGET vm ioctl.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch32/arm-cpu.c |9 ++-
 tools/kvm/arm/aarch64/arm-cpu.c |   10 ++--
 tools/kvm/arm/kvm-cpu.c |   52 ++-
 3 files changed, 57 insertions(+), 14 deletions(-)

diff --git a/tools/kvm/arm/aarch32/arm-cpu.c b/tools/kvm/arm/aarch32/arm-cpu.c
index 71b98fe..0d2ff11 100644
--- a/tools/kvm/arm/aarch32/arm-cpu.c
+++ b/tools/kvm/arm/aarch32/arm-cpu.c
@@ -22,6 +22,12 @@ static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
return 0;
 }
 
+static struct kvm_arm_target target_generic_v7 = {
+   .id = UINT_MAX,
+   .compatible = "arm,arm-v7",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static struct kvm_arm_target target_cortex_a15 = {
.id = KVM_ARM_TARGET_CORTEX_A15,
.compatible = "arm,cortex-a15",
@@ -36,7 +42,8 @@ static struct kvm_arm_target target_cortex_a7 = {
 
 static int arm_cpu__core_init(struct kvm *kvm)
 {
-   return (kvm_cpu__register_kvm_arm_target(&target_cortex_a15) ||
+   return (kvm_cpu__register_kvm_arm_target(&target_generic_v7) ||
+   kvm_cpu__register_kvm_arm_target(&target_cortex_a15) ||
kvm_cpu__register_kvm_arm_target(&target_cortex_a7));
 }
 core_init(arm_cpu__core_init);
diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index ce5ea2f..9ee3da3 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -16,13 +16,18 @@ static void generate_fdt_nodes(void *fdt, struct kvm *kvm, 
u32 gic_phandle)
timer__generate_fdt_nodes(fdt, kvm, timer_interrupts);
 }
 
-
 static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
 {
vcpu->generate_fdt_nodes = generate_fdt_nodes;
return 0;
 }
 
+static struct kvm_arm_target target_generic_v8 = {
+   .id = UINT_MAX,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static struct kvm_arm_target target_aem_v8 = {
.id = KVM_ARM_TARGET_AEM_V8,
.compatible = "arm,arm-v8",
@@ -43,7 +48,8 @@ static struct kvm_arm_target target_cortex_a57 = {
 
 static int arm_cpu__core_init(struct kvm *kvm)
 {
-   return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
+   return (kvm_cpu__register_kvm_arm_target(&target_generic_v8) ||
+   kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
 }
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index aeaa4cf..6de5344 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -13,7 +13,7 @@ int kvm_cpu__get_debug_fd(void)
return debug_fd;
 }
 
-static struct kvm_arm_target *kvm_arm_targets[KVM_ARM_NUM_TARGETS];
+static struct kvm_arm_target *kvm_arm_targets[KVM_ARM_NUM_TARGETS+1];
 int kvm_cpu__register_kvm_arm_target(struct kvm_arm_target *target)
 {
unsigned int i = 0;
@@ -33,7 +33,8 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
struct kvm_arm_target *target;
struct kvm_cpu *vcpu;
int coalesced_offset, mmap_size, err = -1;
-   unsigned int i;
+   unsigned int i, target_type;
+   struct kvm_vcpu_init preferred_init;
struct kvm_vcpu_init vcpu_init = {
.features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
};
@@ -55,19 +56,47 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
-   /* Find an appropriate target CPU type. */
-   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
-   if (!kvm_arm_targets[i])
-   continue;
-   target = kvm_arm_targets[i];
-   vcpu_init.target = target->id;
+   /*
+* If preferred target ioctl successful then use preferred target
+* else try each and every target type.
+*/
+   err = ioctl(kvm->vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
+   if (!err) {
+   /* Match preferred target CPU type. */
+   target = NULL;
+   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
+   if (!kvm_
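
A minimal user-space sketch of the flow described above (this is not the
kvmtool code; the helper name and the fallback target list are only
illustrative, and feature-flag handling is omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustrative only: prefer the target reported by the kernel, otherwise
 * fall back to trying every target type we happen to know about. */
static int init_vcpu_target(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;
	const unsigned int known[] = {
		KVM_ARM_TARGET_AEM_V8,
		KVM_ARM_TARGET_FOUNDATION_V8,
		KVM_ARM_TARGET_CORTEX_A57,
	};
	unsigned int i;

	memset(&init, 0, sizeof(init));

	/* Ask KVM which target matches the host CPU. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) == 0)
		return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);

	/* Old behaviour: try each known target until one succeeds. */
	for (i = 0; i < sizeof(known) / sizeof(known[0]); i++) {
		init.target = known[i];
		if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) == 0)
			return 0;
	}
	return -1;
}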

[PATCH v4 4/4] kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

2014-09-18 Thread Anup Patel
If in-kernel KVM supports PSCI-0.2 emulation then we should set
the KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
provide "arm,psci-0.2","arm,psci" as the PSCI compatible string.

This patch updates kvm_cpu__arch_init() and setup_fdt()
accordingly.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/fdt.c |   52 ++-
 tools/kvm/arm/kvm-cpu.c |5 +
 2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/tools/kvm/arm/fdt.c b/tools/kvm/arm/fdt.c
index 186a718..a15450e 100644
--- a/tools/kvm/arm/fdt.c
+++ b/tools/kvm/arm/fdt.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static char kern_cmdline[COMMAND_LINE_SIZE];
 
@@ -84,6 +85,34 @@ static void generate_irq_prop(void *fdt, u8 irq)
_FDT(fdt_property(fdt, "interrupts", irq_prop, sizeof(irq_prop)));
 }
 
+struct psci_fns {
+   u32 cpu_suspend;
+   u32 cpu_off;
+   u32 cpu_on;
+   u32 migrate;
+};
+
+static struct psci_fns psci_0_1_fns = {
+   .cpu_suspend = KVM_PSCI_FN_CPU_SUSPEND,
+   .cpu_off = KVM_PSCI_FN_CPU_OFF,
+   .cpu_on = KVM_PSCI_FN_CPU_ON,
+   .migrate = KVM_PSCI_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch32_fns = {
+   .cpu_suspend = PSCI_0_2_FN_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN_CPU_ON,
+   .migrate = PSCI_0_2_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch64_fns = {
+   .cpu_suspend = PSCI_0_2_FN64_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN64_CPU_ON,
+   .migrate = PSCI_0_2_FN64_MIGRATE,
+};
+
 static int setup_fdt(struct kvm *kvm)
 {
struct device_header *dev_hdr;
@@ -93,6 +122,7 @@ static int setup_fdt(struct kvm *kvm)
cpu_to_fdt64(kvm->arch.memory_guest_start),
cpu_to_fdt64(kvm->ram_size),
};
+   struct psci_fns *fns;
void *fdt   = staging_fdt;
void *fdt_dest  = guest_flat_to_host(kvm,
 kvm->arch.dtb_guest_start);
@@ -162,12 +192,24 @@ static int setup_fdt(struct kvm *kvm)
 
/* PSCI firmware */
_FDT(fdt_begin_node(fdt, "psci"));
-   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   const char compatible[] = "arm,psci-0.2\0arm,psci";
+   _FDT(fdt_property(fdt, "compatible",
+ compatible, sizeof(compatible)));
+   if (kvm->cfg.arch.aarch32_guest) {
+   fns = &psci_0_2_aarch32_fns;
+   } else {
+   fns = &psci_0_2_aarch64_fns;
+   }
+   } else {
+   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   fns = &psci_0_1_fns;
+   }
_FDT(fdt_property_string(fdt, "method", "hvc"));
-   _FDT(fdt_property_cell(fdt, "cpu_suspend", KVM_PSCI_FN_CPU_SUSPEND));
-   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
-   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
-   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   _FDT(fdt_property_cell(fdt, "cpu_suspend", fns->cpu_suspend));
+   _FDT(fdt_property_cell(fdt, "cpu_off", fns->cpu_off));
+   _FDT(fdt_property_cell(fdt, "cpu_on", fns->cpu_on));
+   _FDT(fdt_property_cell(fdt, "migrate", fns->migrate));
_FDT(fdt_end_node(fdt));
 
/* Finalise. */
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index 6de5344..219de16 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -56,6 +56,11 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
+   /* Set KVM_ARM_VCPU_PSCI_0_2 if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
+   }
+
/*
 * If preferred target ioctl successful then use preferred target
 * else try each and every target type.
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v4 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-09-18 Thread Anup Patel
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux-3.16-rcX and higher, hence we register an aarch64 target
type for it.

This patch enables us to run KVMTOOL on an X-Gene Potenza host.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch64/arm-cpu.c |9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index 9ee3da3..51a1e2f 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -46,11 +46,18 @@ static struct kvm_arm_target target_cortex_a57 = {
.init   = arm_cpu__vcpu_init,
 };
 
+static struct kvm_arm_target target_potenza = {
+   .id = KVM_ARM_TARGET_XGENE_POTENZA,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static int arm_cpu__core_init(struct kvm *kvm)
 {
return (kvm_cpu__register_kvm_arm_target(&target_generic_v8) ||
kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
-   kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
+   kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
+   kvm_cpu__register_kvm_arm_target(&target_potenza));
 }
 core_init(arm_cpu__core_init);
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v4 3/4] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-09-18 Thread Anup Patel
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a Guest.

Currently, it is used by in-kernel PSCI-0.2 emulation of
KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
or PSCI SYSTEM_RESET request.

For now, we simply treat all system-wide guest events as
a shutdown request in KVMTOOL.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/kvm-cpu.c |   21 +
 1 file changed, 21 insertions(+)

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index ee0a8ec..5180039 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -160,6 +160,27 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
goto exit_kvm;
case KVM_EXIT_SHUTDOWN:
goto exit_kvm;
+   case KVM_EXIT_SYSTEM_EVENT:
+   /*
+* Print the type of system event and
+* treat all system events as shutdown request.
+*/
+   switch (cpu->kvm_run->system_event.type) {
+   case KVM_SYSTEM_EVENT_SHUTDOWN:
+   printf("  # Info: shutdown system event\n");
+   goto exit_kvm;
+   case KVM_SYSTEM_EVENT_RESET:
+   printf("  # Info: reset system event\n");
+   printf("  # Info: KVMTOOL does not support VM reset\n");
+   printf("  # Info: please re-launch the VM manually\n");
+   goto exit_kvm;
+   default:
+   printf("  # Warning: unknown system event type=%d\n",
+  cpu->kvm_run->system_event.type);
+   printf("  # Info: exiting KVMTOOL\n");
+   goto exit_kvm;
+   };
+   break;
default: {
bool ret;
 
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v4 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-10-01 Thread Anup Patel
On 29 September 2014 22:30, Andre Przywara  wrote:
>
> On 19/09/14 00:57, Anup Patel wrote:
>> The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
>> in latest Linux-3.16-rcX or higher hence register aarch64 target
>> type for it.
>>
>> This patch enables us to run KVMTOOL on X-Gene Potenza host.
>
> I still don't like the addition of another CPU, but for the sake of
> running older kernels (which seems to have a use-case in your case) I am
> OK with this.
> Maybe it's worth adding a comment which states that this list is
> "closed" and just provided to support older kernels?
> So that other SoCs don't get funny ideas... ;-)

Sure, I will add a comment for this.

Regards,
Anup

>
> Cheers,
> Andre.
>
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/aarch64/arm-cpu.c |9 -
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/kvm/arm/aarch64/arm-cpu.c 
>> b/tools/kvm/arm/aarch64/arm-cpu.c
>> index 9ee3da3..51a1e2f 100644
>> --- a/tools/kvm/arm/aarch64/arm-cpu.c
>> +++ b/tools/kvm/arm/aarch64/arm-cpu.c
>> @@ -46,11 +46,18 @@ static struct kvm_arm_target target_cortex_a57 = {
>>   .init   = arm_cpu__vcpu_init,
>>  };
>>
>> +static struct kvm_arm_target target_potenza = {
>> + .id = KVM_ARM_TARGET_XGENE_POTENZA,
>> + .compatible = "arm,arm-v8",
>> + .init   = arm_cpu__vcpu_init,
>> +};
>> +
>>  static int arm_cpu__core_init(struct kvm *kvm)
>>  {
>>   return (kvm_cpu__register_kvm_arm_target(&target_generic_v8) ||
>>   kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
>>   kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
>> - kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
>> + kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
>> + kvm_cpu__register_kvm_arm_target(&target_potenza));
>>  }
>>  core_init(arm_cpu__core_init);
>>
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v4 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-10-01 Thread Anup Patel
On 30 September 2014 14:26, Andre Przywara  wrote:
> Hi Anup,
>
> thanks for the re-spin and sorry for the delay.
>
> Looks better now, some minor comments below.
>
> On 19/09/14 00:57, Anup Patel wrote:
>> Instead, of trying out each and every target type we should
>> use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target type
>> for KVM ARM/ARM64.
>>
>> If KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fallback to
>> old method of trying all known target types.
>>
>> If KVM_ARM_PREFERRED_TARGET vm ioctl succeeds but the returned
>> target type is not known to KVMTOOL then we forcefully init
>> VCPU with target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar 
>> Signed-off-by: Anup Patel 
>> ---
>>  tools/kvm/arm/aarch32/arm-cpu.c |9 ++-
>>  tools/kvm/arm/aarch64/arm-cpu.c |   10 ++--
>>  tools/kvm/arm/kvm-cpu.c |   52 
>> ++-
>>  3 files changed, 57 insertions(+), 14 deletions(-)
>>
>> diff --git a/tools/kvm/arm/aarch32/arm-cpu.c 
>> b/tools/kvm/arm/aarch32/arm-cpu.c
>> index 71b98fe..0d2ff11 100644
>> --- a/tools/kvm/arm/aarch32/arm-cpu.c
>> +++ b/tools/kvm/arm/aarch32/arm-cpu.c
>> @@ -22,6 +22,12 @@ static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
>>   return 0;
>>  }
>>
>> +static struct kvm_arm_target target_generic_v7 = {
>> + .id = UINT_MAX,
>> + .compatible = "arm,arm-v7",
>> + .init   = arm_cpu__vcpu_init,
>> +};
>> +
>>  static struct kvm_arm_target target_cortex_a15 = {
>>   .id = KVM_ARM_TARGET_CORTEX_A15,
>>   .compatible = "arm,cortex-a15",
>> @@ -36,7 +42,8 @@ static struct kvm_arm_target target_cortex_a7 = {
>>
>>  static int arm_cpu__core_init(struct kvm *kvm)
>>  {
>> - return (kvm_cpu__register_kvm_arm_target(&target_cortex_a15) ||
>> + return (kvm_cpu__register_kvm_arm_target(&target_generic_v7) ||
>> + kvm_cpu__register_kvm_arm_target(&target_cortex_a15) ||
>>   kvm_cpu__register_kvm_arm_target(&target_cortex_a7));
>>  }
>
> I wonder if you could avoid the registration of this target and instead
> reference it later directly (instead of using a magic 0 index)?
> This way you wouldn't need to care about avoiding accidental .id matches
> with the UINT_MAX above.
>
>>  core_init(arm_cpu__core_init);
>> diff --git a/tools/kvm/arm/aarch64/arm-cpu.c 
>> b/tools/kvm/arm/aarch64/arm-cpu.c
>> index ce5ea2f..9ee3da3 100644
>> --- a/tools/kvm/arm/aarch64/arm-cpu.c
>> +++ b/tools/kvm/arm/aarch64/arm-cpu.c
>> @@ -16,13 +16,18 @@ static void generate_fdt_nodes(void *fdt, struct kvm 
>> *kvm, u32 gic_phandle)
>>   timer__generate_fdt_nodes(fdt, kvm, timer_interrupts);
>>  }
>>
>> -
>>  static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
>>  {
>>   vcpu->generate_fdt_nodes = generate_fdt_nodes;
>>   return 0;
>>  }
>>
>> +static struct kvm_arm_target target_generic_v8 = {
>> + .id = UINT_MAX,
>> + .compatible = "arm,arm-v8",
>> + .init   = arm_cpu__vcpu_init,
>> +};
>> +
>>  static struct kvm_arm_target target_aem_v8 = {
>>   .id = KVM_ARM_TARGET_AEM_V8,
>>   .compatible = "arm,arm-v8",
>> @@ -43,7 +48,8 @@ static struct kvm_arm_target target_cortex_a57 = {
>>
>>  static int arm_cpu__core_init(struct kvm *kvm)
>>  {
>> - return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
>> + return (kvm_cpu__register_kvm_arm_target(&target_generic_v8) ||
>> + kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
>>   kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
>>   kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
>>  }
>
> (same thing like for v7 here)
>
>> diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
>> index aeaa4cf..6de5344 100644
>> --- a/tools/kvm/arm/kvm-cpu.c
>> +++ b/tools/kvm/arm/kvm-cpu.c
>> @@ -13,7 +13,7 @@ int kvm_cpu__get_debug_fd(void)
>>   return debug_fd;
>>  }
>>
>> -static struct kvm_arm_target *kvm_arm_targets[KVM_ARM_NUM_TARGETS];
>> +static struct kvm_arm_target *kvm_arm_targets[KVM_ARM_NUM_TARGETS+1];
>
> w/s issue
>
>>  int kvm_cpu__register_kvm_arm_target(struct kvm_arm_target *target)
>>  {
>>   unsigned i

[PATCH v5 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-10-01 Thread Anup Patel
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux-3.16-rcX and higher, hence we register an aarch64 target
type for it.

This patch enables us to run KVMTOOL on an X-Gene Potenza host.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch64/arm-cpu.c |   14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index 88970de..e237cf9 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -46,12 +46,24 @@ static struct kvm_arm_target target_cortex_a57 = {
.init   = arm_cpu__vcpu_init,
 };
 
+/*
+ * We really don't require to register target for every
+ * new CPU. The target for Potenza CPU is only registered
+ * to enable use of KVMTOOL with older host kernels.
+ */
+static struct kvm_arm_target target_potenza = {
+   .id = KVM_ARM_TARGET_XGENE_POTENZA,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static int arm_cpu__core_init(struct kvm *kvm)
 {
kvm_cpu__set_kvm_arm_generic_target(&target_generic_v8);
 
return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
-   kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
+   kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
+   kvm_cpu__register_kvm_arm_target(&target_potenza));
 }
 core_init(arm_cpu__core_init);
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v5 0/4] kvmtool: ARM/ARM64: Misc updates

2014-10-01 Thread Anup Patel
This patchset updates KVMTOOL to use some of the features
supported by Linux-3.16 KVM ARM/ARM64, such as:

1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
2. Target CPU type Potenza for using KVMTOOL on X-Gene
3. PSCI v0.2 support for Aarch32 and Aarch64 guest
4. System event exit reason

Changes since v4:
- Avoid using magic '0' target for kvm arm generic target
- Added comment for why we need Potenza target in KVMTOOL

Changes since v3:
- Add generic targets for aarch32 and aarch64 which are used
  by KVMTOOL when target type returned by KVM_ARM_PREFERRED_TARGET
  vm ioctl is not known to KVMTOOL
- Print more info when handling system reset event

Changes since v2:
- Use target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl
  for VCPU init such that we don't need to update KVMTOOL for
  every new host hardware
- Simplify DTB generation for PSCI node

Changes since v1:
- Drop the patch to fix compile error for aarch64
- Fallback to old method of trying all target types if
KVM_ARM_PREFERRED_TARGET vm ioctl fails
- Print more info when handling KVM_EXIT_SYSTEM_EVENT

Anup Patel (4):
  kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine
target cpu
  kvmtool: ARM64: Add target type potenza for aarch64
  kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT
  kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

 tools/kvm/arm/aarch32/arm-cpu.c |8 +++
 tools/kvm/arm/aarch64/arm-cpu.c |   23 -
 tools/kvm/arm/fdt.c |   51 +--
 tools/kvm/arm/include/arm-common/kvm-cpu-arch.h |2 +
 tools/kvm/arm/kvm-cpu.c |   61 +++
 tools/kvm/kvm-cpu.c |   21 
 6 files changed, 149 insertions(+), 17 deletions(-)

-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v5 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-10-01 Thread Anup Patel
Instead of trying out each and every target type, we should
use the KVM_ARM_PREFERRED_TARGET vm ioctl to determine the target type
for KVM ARM/ARM64.

If the KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fall back to
the old method of trying all known target types.

If the KVM_ARM_PREFERRED_TARGET vm ioctl succeeds but the returned
target type is not known to KVMTOOL, then we forcefully init the
VCPU with the target type returned by the KVM_ARM_PREFERRED_TARGET vm ioctl.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
---
 tools/kvm/arm/aarch32/arm-cpu.c |8 
 tools/kvm/arm/aarch64/arm-cpu.c |9 +++-
 tools/kvm/arm/include/arm-common/kvm-cpu-arch.h |2 +
 tools/kvm/arm/kvm-cpu.c |   56 +++
 4 files changed, 64 insertions(+), 11 deletions(-)

diff --git a/tools/kvm/arm/aarch32/arm-cpu.c b/tools/kvm/arm/aarch32/arm-cpu.c
index 71b98fe..946e443 100644
--- a/tools/kvm/arm/aarch32/arm-cpu.c
+++ b/tools/kvm/arm/aarch32/arm-cpu.c
@@ -22,6 +22,12 @@ static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
return 0;
 }
 
+static struct kvm_arm_target target_generic_v7 = {
+   .id = UINT_MAX,
+   .compatible = "arm,arm-v7",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static struct kvm_arm_target target_cortex_a15 = {
.id = KVM_ARM_TARGET_CORTEX_A15,
.compatible = "arm,cortex-a15",
@@ -36,6 +42,8 @@ static struct kvm_arm_target target_cortex_a7 = {
 
 static int arm_cpu__core_init(struct kvm *kvm)
 {
+   kvm_cpu__set_kvm_arm_generic_target(&target_generic_v7);
+
return (kvm_cpu__register_kvm_arm_target(&target_cortex_a15) ||
kvm_cpu__register_kvm_arm_target(&target_cortex_a7));
 }
diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index ce5ea2f..88970de 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -16,13 +16,18 @@ static void generate_fdt_nodes(void *fdt, struct kvm *kvm, 
u32 gic_phandle)
timer__generate_fdt_nodes(fdt, kvm, timer_interrupts);
 }
 
-
 static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
 {
vcpu->generate_fdt_nodes = generate_fdt_nodes;
return 0;
 }
 
+static struct kvm_arm_target target_generic_v8 = {
+   .id = UINT_MAX,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static struct kvm_arm_target target_aem_v8 = {
.id = KVM_ARM_TARGET_AEM_V8,
.compatible = "arm,arm-v8",
@@ -43,6 +48,8 @@ static struct kvm_arm_target target_cortex_a57 = {
 
 static int arm_cpu__core_init(struct kvm *kvm)
 {
+   kvm_cpu__set_kvm_arm_generic_target(&target_generic_v8);
+
return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
diff --git a/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h 
b/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h
index 83cd8b8..36c7872 100644
--- a/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h
+++ b/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h
@@ -34,6 +34,8 @@ struct kvm_arm_target {
int (*init)(struct kvm_cpu *vcpu);
 };
 
+void kvm_cpu__set_kvm_arm_generic_target(struct kvm_arm_target *target);
+
 int kvm_cpu__register_kvm_arm_target(struct kvm_arm_target *target);
 
 static inline bool kvm_cpu__emulate_io(struct kvm_cpu *vcpu, u16 port, void 
*data,
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index aeaa4cf..f165373 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -13,7 +13,14 @@ int kvm_cpu__get_debug_fd(void)
return debug_fd;
 }
 
+static struct kvm_arm_target *kvm_arm_generic_target;
 static struct kvm_arm_target *kvm_arm_targets[KVM_ARM_NUM_TARGETS];
+
+void kvm_cpu__set_kvm_arm_generic_target(struct kvm_arm_target *target)
+{
+   kvm_arm_generic_target = target;
+}
+
 int kvm_cpu__register_kvm_arm_target(struct kvm_arm_target *target)
 {
unsigned int i = 0;
@@ -34,6 +41,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
struct kvm_cpu *vcpu;
int coalesced_offset, mmap_size, err = -1;
unsigned int i;
+   struct kvm_vcpu_init preferred_init;
struct kvm_vcpu_init vcpu_init = {
.features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
};
@@ -55,19 +63,46 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
-   /* Find an appropriate target CPU type. */
-   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
-   if (!kvm_arm_targets[i])
-   continue;
-   target = kvm_arm_

[PATCH v5 4/4] kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

2014-10-01 Thread Anup Patel
If in-kernel KVM supports PSCI-0.2 emulation then we should set
the KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
provide "arm,psci-0.2","arm,psci" as the PSCI compatible string.

This patch updates kvm_cpu__arch_init() and setup_fdt()
accordingly.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
Reviewed-by: Andre Przywara 
---
 tools/kvm/arm/fdt.c |   51 ++-
 tools/kvm/arm/kvm-cpu.c |5 +
 2 files changed, 51 insertions(+), 5 deletions(-)

diff --git a/tools/kvm/arm/fdt.c b/tools/kvm/arm/fdt.c
index 186a718..5626931 100644
--- a/tools/kvm/arm/fdt.c
+++ b/tools/kvm/arm/fdt.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static char kern_cmdline[COMMAND_LINE_SIZE];
 
@@ -84,6 +85,34 @@ static void generate_irq_prop(void *fdt, u8 irq)
_FDT(fdt_property(fdt, "interrupts", irq_prop, sizeof(irq_prop)));
 }
 
+struct psci_fns {
+   u32 cpu_suspend;
+   u32 cpu_off;
+   u32 cpu_on;
+   u32 migrate;
+};
+
+static struct psci_fns psci_0_1_fns = {
+   .cpu_suspend = KVM_PSCI_FN_CPU_SUSPEND,
+   .cpu_off = KVM_PSCI_FN_CPU_OFF,
+   .cpu_on = KVM_PSCI_FN_CPU_ON,
+   .migrate = KVM_PSCI_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch32_fns = {
+   .cpu_suspend = PSCI_0_2_FN_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN_CPU_ON,
+   .migrate = PSCI_0_2_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch64_fns = {
+   .cpu_suspend = PSCI_0_2_FN64_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN64_CPU_ON,
+   .migrate = PSCI_0_2_FN64_MIGRATE,
+};
+
 static int setup_fdt(struct kvm *kvm)
 {
struct device_header *dev_hdr;
@@ -93,6 +122,7 @@ static int setup_fdt(struct kvm *kvm)
cpu_to_fdt64(kvm->arch.memory_guest_start),
cpu_to_fdt64(kvm->ram_size),
};
+   struct psci_fns *fns;
void *fdt   = staging_fdt;
void *fdt_dest  = guest_flat_to_host(kvm,
 kvm->arch.dtb_guest_start);
@@ -162,12 +192,23 @@ static int setup_fdt(struct kvm *kvm)
 
/* PSCI firmware */
_FDT(fdt_begin_node(fdt, "psci"));
-   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   const char compatible[] = "arm,psci-0.2\0arm,psci";
+   _FDT(fdt_property(fdt, "compatible",
+ compatible, sizeof(compatible)));
+   if (kvm->cfg.arch.aarch32_guest)
+   fns = &psci_0_2_aarch32_fns;
+   else
+   fns = &psci_0_2_aarch64_fns;
+   } else {
+   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   fns = &psci_0_1_fns;
+   }
_FDT(fdt_property_string(fdt, "method", "hvc"));
-   _FDT(fdt_property_cell(fdt, "cpu_suspend", KVM_PSCI_FN_CPU_SUSPEND));
-   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
-   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
-   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   _FDT(fdt_property_cell(fdt, "cpu_suspend", fns->cpu_suspend));
+   _FDT(fdt_property_cell(fdt, "cpu_off", fns->cpu_off));
+   _FDT(fdt_property_cell(fdt, "cpu_on", fns->cpu_on));
+   _FDT(fdt_property_cell(fdt, "migrate", fns->migrate));
_FDT(fdt_end_node(fdt));
 
/* Finalise. */
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index f165373..ab08815 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -63,6 +63,11 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
+   /* Set KVM_ARM_VCPU_PSCI_0_2 if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
+   }
+
/*
 * If the preferred target ioctl is successful then
 * use preferred target else try each and every target type
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v5 3/4] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-10-01 Thread Anup Patel
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a Guest.

Currently, it is used by in-kernel PSCI-0.2 emulation of
KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
or PSCI SYSTEM_RESET request.

For now, we simply treat all system-wide guest events as
a shutdown request in KVMTOOL.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
Reviewed-by: Andre Przywara 
---
 tools/kvm/kvm-cpu.c |   21 +
 1 file changed, 21 insertions(+)

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index ee0a8ec..5180039 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -160,6 +160,27 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
goto exit_kvm;
case KVM_EXIT_SHUTDOWN:
goto exit_kvm;
+   case KVM_EXIT_SYSTEM_EVENT:
+   /*
+* Print the type of system event and
+* treat all system events as shutdown request.
+*/
+   switch (cpu->kvm_run->system_event.type) {
+   case KVM_SYSTEM_EVENT_SHUTDOWN:
+   printf("  # Info: shutdown system event\n");
+   goto exit_kvm;
+   case KVM_SYSTEM_EVENT_RESET:
+   printf("  # Info: reset system event\n");
+   printf("  # Info: KVMTOOL does not support VM reset\n");
+   printf("  # Info: please re-launch the VM manually\n");
+   goto exit_kvm;
+   default:
+   printf("  # Warning: unknown system event type=%d\n",
+  cpu->kvm_run->system_event.type);
+   printf("  # Info: exiting KVMTOOL\n");
+   goto exit_kvm;
+   };
+   break;
default: {
bool ret;
 
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v5 0/4] kvmtool: ARM/ARM64: Misc updates

2014-10-06 Thread Anup Patel
On 3 October 2014 21:47, Will Deacon  wrote:
> On Wed, Oct 01, 2014 at 11:34:51AM +0100, Anup Patel wrote:
>> This patchset updates KVMTOOL to use some of the features
>> supported by Linux-3.16 KVM ARM/ARM64, such as:
>>
>> 1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
>> 2. Target CPU type Potenza for using KVMTOOL on X-Gene
>> 3. PSCI v0.2 support for Aarch32 and Aarch64 guest
>> 4. System event exit reason
>>
>> Changes since v4:
>> - Avoid using magic '0' target for kvm arm generic target
>> - Added comment for why we need Potenza target in KVMTOOL
>
> Please can you send a v5 addressing my minor comment and adding Andre's
> reviewed-by tags?

Sure, will do.

Thanks,
Anup

>
> Thanks,
>
> Will
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v6 2/4] kvmtool: ARM64: Add target type potenza for aarch64

2014-10-06 Thread Anup Patel
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux-3.16-rcX and higher, hence we register an aarch64 target
type for it.

This patch enables us to run KVMTOOL on an X-Gene Potenza host.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
Reviewed-by: Andre Przywara 
---
 tools/kvm/arm/aarch64/arm-cpu.c |   14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index 88970de..e237cf9 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -46,12 +46,24 @@ static struct kvm_arm_target target_cortex_a57 = {
.init   = arm_cpu__vcpu_init,
 };
 
+/*
+ * We really don't require to register target for every
+ * new CPU. The target for Potenza CPU is only registered
+ * to enable use of KVMTOOL with older host kernels.
+ */
+static struct kvm_arm_target target_potenza = {
+   .id = KVM_ARM_TARGET_XGENE_POTENZA,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static int arm_cpu__core_init(struct kvm *kvm)
 {
kvm_cpu__set_kvm_arm_generic_target(&target_generic_v8);
 
return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
-   kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
+   kvm_cpu__register_kvm_arm_target(&target_cortex_a57) ||
+   kvm_cpu__register_kvm_arm_target(&target_potenza));
 }
 core_init(arm_cpu__core_init);
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v6 0/4] kvmtool: ARM/ARM64: Misc updates

2014-10-06 Thread Anup Patel
This patchset updates KVMTOOL to use some of the features
supported by Linux-3.16 KVM ARM/ARM64, such as:

1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
2. Target CPU type Potenza for using KVMTOOL on X-Gene
3. PSCI v0.2 support for Aarch32 and Aarch64 guest
4. System event exit reason

Changes since v5:
- Use pr_info() and pr_warning() instead of printf() when
handling system event exit reason

Changes since v4:
- Avoid using magic '0' target for kvm arm generic target
- Added comment for why we need Potenza target in KVMTOOL

Changes since v3:
- Add generic targets for aarch32 and aarch64 which are used
  by KVMTOOL when target type returned by KVM_ARM_PREFERRED_TARGET
  vm ioctl is not known to KVMTOOL
- Print more info when handling system reset event

Changes since v2:
- Use target type returned by KVM_ARM_PREFERRED_TARGET vm ioctl
  for VCPU init such that we don't need to update KVMTOOL for
  every new host hardware
- Simplify DTB generation for PSCI node

Changes since v1:
- Drop the patch to fix compile error for aarch64
- Fallback to old method of trying all target types if
KVM_ARM_PREFERRED_TARGET vm ioctl fails
- Print more info when handling KVM_EXIT_SYSTEM_EVENT

Anup Patel (4):
  kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine
target cpu
  kvmtool: ARM64: Add target type potenza for aarch64
  kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT
  kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

 tools/kvm/arm/aarch32/arm-cpu.c |8 +++
 tools/kvm/arm/aarch64/arm-cpu.c |   23 -
 tools/kvm/arm/fdt.c |   51 +--
 tools/kvm/arm/include/arm-common/kvm-cpu-arch.h |2 +
 tools/kvm/arm/kvm-cpu.c |   61 +++
 tools/kvm/kvm-cpu.c |   21 
 6 files changed, 149 insertions(+), 17 deletions(-)

-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v6 1/4] kvmtool: ARM: Use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target cpu

2014-10-06 Thread Anup Patel
Instead of trying out each and every target type, we should
use the KVM_ARM_PREFERRED_TARGET vm ioctl to determine the target type
for KVM ARM/ARM64.

If the KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fall back to
the old method of trying all known target types.

If the KVM_ARM_PREFERRED_TARGET vm ioctl succeeds but the returned
target type is not known to KVMTOOL, then we forcefully init the
VCPU with the target type returned by the KVM_ARM_PREFERRED_TARGET vm ioctl.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
Reviewed-by: Andre Przywara 
---
 tools/kvm/arm/aarch32/arm-cpu.c |8 
 tools/kvm/arm/aarch64/arm-cpu.c |9 +++-
 tools/kvm/arm/include/arm-common/kvm-cpu-arch.h |2 +
 tools/kvm/arm/kvm-cpu.c |   56 +++
 4 files changed, 64 insertions(+), 11 deletions(-)

diff --git a/tools/kvm/arm/aarch32/arm-cpu.c b/tools/kvm/arm/aarch32/arm-cpu.c
index 71b98fe..946e443 100644
--- a/tools/kvm/arm/aarch32/arm-cpu.c
+++ b/tools/kvm/arm/aarch32/arm-cpu.c
@@ -22,6 +22,12 @@ static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
return 0;
 }
 
+static struct kvm_arm_target target_generic_v7 = {
+   .id = UINT_MAX,
+   .compatible = "arm,arm-v7",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static struct kvm_arm_target target_cortex_a15 = {
.id = KVM_ARM_TARGET_CORTEX_A15,
.compatible = "arm,cortex-a15",
@@ -36,6 +42,8 @@ static struct kvm_arm_target target_cortex_a7 = {
 
 static int arm_cpu__core_init(struct kvm *kvm)
 {
+   kvm_cpu__set_kvm_arm_generic_target(&target_generic_v7);
+
return (kvm_cpu__register_kvm_arm_target(&target_cortex_a15) ||
kvm_cpu__register_kvm_arm_target(&target_cortex_a7));
 }
diff --git a/tools/kvm/arm/aarch64/arm-cpu.c b/tools/kvm/arm/aarch64/arm-cpu.c
index ce5ea2f..88970de 100644
--- a/tools/kvm/arm/aarch64/arm-cpu.c
+++ b/tools/kvm/arm/aarch64/arm-cpu.c
@@ -16,13 +16,18 @@ static void generate_fdt_nodes(void *fdt, struct kvm *kvm, 
u32 gic_phandle)
timer__generate_fdt_nodes(fdt, kvm, timer_interrupts);
 }
 
-
 static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
 {
vcpu->generate_fdt_nodes = generate_fdt_nodes;
return 0;
 }
 
+static struct kvm_arm_target target_generic_v8 = {
+   .id = UINT_MAX,
+   .compatible = "arm,arm-v8",
+   .init   = arm_cpu__vcpu_init,
+};
+
 static struct kvm_arm_target target_aem_v8 = {
.id = KVM_ARM_TARGET_AEM_V8,
.compatible = "arm,arm-v8",
@@ -43,6 +48,8 @@ static struct kvm_arm_target target_cortex_a57 = {
 
 static int arm_cpu__core_init(struct kvm *kvm)
 {
+   kvm_cpu__set_kvm_arm_generic_target(&target_generic_v8);
+
return (kvm_cpu__register_kvm_arm_target(&target_aem_v8) ||
kvm_cpu__register_kvm_arm_target(&target_foundation_v8) ||
kvm_cpu__register_kvm_arm_target(&target_cortex_a57));
diff --git a/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h 
b/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h
index 83cd8b8..36c7872 100644
--- a/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h
+++ b/tools/kvm/arm/include/arm-common/kvm-cpu-arch.h
@@ -34,6 +34,8 @@ struct kvm_arm_target {
int (*init)(struct kvm_cpu *vcpu);
 };
 
+void kvm_cpu__set_kvm_arm_generic_target(struct kvm_arm_target *target);
+
 int kvm_cpu__register_kvm_arm_target(struct kvm_arm_target *target);
 
 static inline bool kvm_cpu__emulate_io(struct kvm_cpu *vcpu, u16 port, void 
*data,
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index aeaa4cf..f165373 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -13,7 +13,14 @@ int kvm_cpu__get_debug_fd(void)
return debug_fd;
 }
 
+static struct kvm_arm_target *kvm_arm_generic_target;
 static struct kvm_arm_target *kvm_arm_targets[KVM_ARM_NUM_TARGETS];
+
+void kvm_cpu__set_kvm_arm_generic_target(struct kvm_arm_target *target)
+{
+   kvm_arm_generic_target = target;
+}
+
 int kvm_cpu__register_kvm_arm_target(struct kvm_arm_target *target)
 {
unsigned int i = 0;
@@ -34,6 +41,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
struct kvm_cpu *vcpu;
int coalesced_offset, mmap_size, err = -1;
unsigned int i;
+   struct kvm_vcpu_init preferred_init;
struct kvm_vcpu_init vcpu_init = {
.features = ARM_VCPU_FEATURE_FLAGS(kvm, cpu_id)
};
@@ -55,19 +63,46 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
-   /* Find an appropriate target CPU type. */
-   for (i = 0; i < ARRAY_SIZE(kvm_arm_targets); ++i) {
-   if (!kvm_arm_targets[i])
-   continue;
-  

[PATCH v6 3/4] kvmtool: Handle exit reason KVM_EXIT_SYSTEM_EVENT

2014-10-06 Thread Anup Patel
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a Guest.

Currently, it is used by in-kernel PSCI-0.2 emulation of
KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
or PSCI SYSTEM_RESET request.

For now, we simply treat all system-wide guest events as
a shutdown request in KVMTOOL.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
Reviewed-by: Andre Przywara 
---
 tools/kvm/kvm-cpu.c |   21 +
 1 file changed, 21 insertions(+)

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index ee0a8ec..5a863b1 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -160,6 +160,27 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
goto exit_kvm;
case KVM_EXIT_SHUTDOWN:
goto exit_kvm;
+   case KVM_EXIT_SYSTEM_EVENT:
+   /*
+* Print the type of system event and
+* treat all system events as shutdown request.
+*/
+   switch (cpu->kvm_run->system_event.type) {
+   case KVM_SYSTEM_EVENT_SHUTDOWN:
+   pr_info("shutdown system event");
+   goto exit_kvm;
+   case KVM_SYSTEM_EVENT_RESET:
+   pr_info("reset system event");
+   pr_info("KVMTOOL does not support VM reset");
+   pr_info("please re-launch the VM manually");
+   goto exit_kvm;
+   default:
+   pr_warning("unknown system event type=%d",
+  cpu->kvm_run->system_event.type);
+   pr_info("exiting KVMTOOL");
+   goto exit_kvm;
+   };
+   break;
default: {
bool ret;
 
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v6 4/4] kvmtool: ARM/ARM64: Provide PSCI-0.2 to guest when KVM supports it

2014-10-06 Thread Anup Patel
If in-kernel KVM supports PSCI-0.2 emulation then we should set
the KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
provide "arm,psci-0.2","arm,psci" as the PSCI compatible string.

This patch updates kvm_cpu__arch_init() and setup_fdt()
accordingly.

Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Anup Patel 
Reviewed-by: Andre Przywara 
---
 tools/kvm/arm/fdt.c |   51 ++-
 tools/kvm/arm/kvm-cpu.c |5 +
 2 files changed, 51 insertions(+), 5 deletions(-)

diff --git a/tools/kvm/arm/fdt.c b/tools/kvm/arm/fdt.c
index 186a718..5626931 100644
--- a/tools/kvm/arm/fdt.c
+++ b/tools/kvm/arm/fdt.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static char kern_cmdline[COMMAND_LINE_SIZE];
 
@@ -84,6 +85,34 @@ static void generate_irq_prop(void *fdt, u8 irq)
_FDT(fdt_property(fdt, "interrupts", irq_prop, sizeof(irq_prop)));
 }
 
+struct psci_fns {
+   u32 cpu_suspend;
+   u32 cpu_off;
+   u32 cpu_on;
+   u32 migrate;
+};
+
+static struct psci_fns psci_0_1_fns = {
+   .cpu_suspend = KVM_PSCI_FN_CPU_SUSPEND,
+   .cpu_off = KVM_PSCI_FN_CPU_OFF,
+   .cpu_on = KVM_PSCI_FN_CPU_ON,
+   .migrate = KVM_PSCI_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch32_fns = {
+   .cpu_suspend = PSCI_0_2_FN_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN_CPU_ON,
+   .migrate = PSCI_0_2_FN_MIGRATE,
+};
+
+static struct psci_fns psci_0_2_aarch64_fns = {
+   .cpu_suspend = PSCI_0_2_FN64_CPU_SUSPEND,
+   .cpu_off = PSCI_0_2_FN_CPU_OFF,
+   .cpu_on = PSCI_0_2_FN64_CPU_ON,
+   .migrate = PSCI_0_2_FN64_MIGRATE,
+};
+
 static int setup_fdt(struct kvm *kvm)
 {
struct device_header *dev_hdr;
@@ -93,6 +122,7 @@ static int setup_fdt(struct kvm *kvm)
cpu_to_fdt64(kvm->arch.memory_guest_start),
cpu_to_fdt64(kvm->ram_size),
};
+   struct psci_fns *fns;
void *fdt   = staging_fdt;
void *fdt_dest  = guest_flat_to_host(kvm,
 kvm->arch.dtb_guest_start);
@@ -162,12 +192,23 @@ static int setup_fdt(struct kvm *kvm)
 
/* PSCI firmware */
_FDT(fdt_begin_node(fdt, "psci"));
-   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   const char compatible[] = "arm,psci-0.2\0arm,psci";
+   _FDT(fdt_property(fdt, "compatible",
+ compatible, sizeof(compatible)));
+   if (kvm->cfg.arch.aarch32_guest)
+   fns = &psci_0_2_aarch32_fns;
+   else
+   fns = &psci_0_2_aarch64_fns;
+   } else {
+   _FDT(fdt_property_string(fdt, "compatible", "arm,psci"));
+   fns = &psci_0_1_fns;
+   }
_FDT(fdt_property_string(fdt, "method", "hvc"));
-   _FDT(fdt_property_cell(fdt, "cpu_suspend", KVM_PSCI_FN_CPU_SUSPEND));
-   _FDT(fdt_property_cell(fdt, "cpu_off", KVM_PSCI_FN_CPU_OFF));
-   _FDT(fdt_property_cell(fdt, "cpu_on", KVM_PSCI_FN_CPU_ON));
-   _FDT(fdt_property_cell(fdt, "migrate", KVM_PSCI_FN_MIGRATE));
+   _FDT(fdt_property_cell(fdt, "cpu_suspend", fns->cpu_suspend));
+   _FDT(fdt_property_cell(fdt, "cpu_off", fns->cpu_off));
+   _FDT(fdt_property_cell(fdt, "cpu_on", fns->cpu_on));
+   _FDT(fdt_property_cell(fdt, "migrate", fns->migrate));
_FDT(fdt_end_node(fdt));
 
/* Finalise. */
diff --git a/tools/kvm/arm/kvm-cpu.c b/tools/kvm/arm/kvm-cpu.c
index f165373..ab08815 100644
--- a/tools/kvm/arm/kvm-cpu.c
+++ b/tools/kvm/arm/kvm-cpu.c
@@ -63,6 +63,11 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
if (vcpu->kvm_run == MAP_FAILED)
die("unable to mmap vcpu fd");
 
+   /* Set KVM_ARM_VCPU_PSCI_0_2 if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PSCI_0_2)) {
+   vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
+   }
+
/*
 * If the preferred target ioctl is successful then
 * use preferred target else try each and every target type
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-08 Thread Anup Patel
Hi Christoffer,

On Sat, Nov 8, 2014 at 1:55 AM, Christoffer Dall
 wrote:
> Hi Anup,
>
> [This time to the new email]
>
> What are your plans in terms of follow-up on this one?

Actually, I am already working on RFC v2. I will send out
RFC v2 sometime soon.

This RFC v2 will be RFC v1 rebased onto Marc's IRQ
forwarding patchset.

I will try to address PMU context switching for KVM ARM
in RFC v3. Does this sound OK?

Regards,
Anup

>
> Should we review these patches and reply to anup _at_ brainfault.org or
> are you looking for someone else to pick them up?
>
> Thanks,
> -Christoffer
>
> On Tue, Aug 05, 2014 at 02:54:09PM +0530, Anup Patel wrote:
>> This patchset enables PMU virtualization in KVM ARM64. The
>> Guest can now directly use PMU available on the host HW.
>>
>> The virtual PMU IRQ injection for Guest VCPUs is managed by
>> small piece of code shared between KVM ARM and KVM ARM64. The
>> virtual PMU IRQ number will be based on Guest machine model and
>> user space will provide it using set device address vm ioctl.
>>
>> The second last patch of this series implements full context
>> switch of PMU registers which will context switch all PMU
>> registers on every KVM world-switch.
>>
>> The last patch implements a lazy context switch of PMU registers
>> which is very similar to lazy debug context switch.
>> (Refer, 
>> http://lists.infradead.org/pipermail/linux-arm-kernel/2014-July/271040.html)
>>
>> Also, we reserve last PMU event counter for EL2 mode which
>> will not be accessible from Host and Guest EL1 mode. This
>> reserved EL2 mode PMU event counter can be used for profiling
>> KVM world-switch and other EL2 mode functions.
>>
>> All testing have been done using KVMTOOL on X-Gene Mustang and
>> Foundation v8 Model for both Aarch32 and Aarch64 guest.
>>
>> Anup Patel (6):
>>   ARM64: Move PMU register related defines to asm/pmu.h
>>   ARM64: perf: Re-enable overflow interrupt from interrupt handler
>>   ARM: perf: Re-enable overflow interrupt from interrupt handler
>>   ARM/ARM64: KVM: Add common code PMU IRQ routing
>>   ARM64: KVM: Implement full context switch of PMU registers
>>   ARM64: KVM: Upgrade to lazy context switch of PMU registers
>>
>>  arch/arm/include/asm/kvm_host.h   |9 +
>>  arch/arm/include/uapi/asm/kvm.h   |1 +
>>  arch/arm/kernel/perf_event_v7.c   |8 +
>>  arch/arm/kvm/arm.c|6 +
>>  arch/arm/kvm/reset.c  |4 +
>>  arch/arm64/include/asm/kvm_asm.h  |   39 +++-
>>  arch/arm64/include/asm/kvm_host.h |   12 ++
>>  arch/arm64/include/asm/pmu.h  |   44 +
>>  arch/arm64/include/uapi/asm/kvm.h |1 +
>>  arch/arm64/kernel/asm-offsets.c   |2 +
>>  arch/arm64/kernel/perf_event.c|   40 +---
>>  arch/arm64/kvm/Kconfig|7 +
>>  arch/arm64/kvm/Makefile   |1 +
>>  arch/arm64/kvm/hyp-init.S |   15 ++
>>  arch/arm64/kvm/hyp.S  |  209 +++-
>>  arch/arm64/kvm/reset.c|4 +
>>  arch/arm64/kvm/sys_regs.c |  385 
>> +
>>  include/kvm/arm_pmu.h |   52 +
>>  virt/kvm/arm/pmu.c|  105 ++
>>  19 files changed, 870 insertions(+), 74 deletions(-)
>>  create mode 100644 include/kvm/arm_pmu.h
>>  create mode 100644 virt/kvm/arm/pmu.c
>>
>> --
>> 1.7.9.5
>>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-11 Thread Anup Patel
Hi All,

I have second thoughts about rebasing KVM PMU patches
to Marc's irq-forwarding patches.

The PMU IRQs (when virtualized by KVM) are not exactly
forwarded IRQs because they are shared between Host
and Guest.

Scenario1
-

We might have perf running on Host and no KVM guest
running. In this scenario, we won't get interrupts on the Host
because the kvm_pmu_hyp_init() (similar to the function
kvm_timer_hyp_init() of Marc's IRQ-forwarding
implementation) has put all host PMU IRQs in forwarding
mode.

The only way to solve this problem is to not set forwarding
mode for PMU IRQs in kvm_pmu_hyp_init() and instead
have special routines to turn on and turn off the forwarding
mode of PMU IRQs. These routines will be called from
kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
forwarding state.
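
Just to make the idea concrete, a rough sketch of what such toggle
routines might look like (the helper names, the vcpu->arch.pmu_irq field
and the irq_set_forwarding() call are all made up for illustration; this
is not code from Marc's series or from this RFC):

/* Called from kvm_arch_vcpu_ioctl_run() before entering the guest:
 * hand the per-CPU PMU IRQ over to forwarding (guest-owned) mode. */
static void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
{
	irq_set_forwarding(vcpu->arch.pmu_irq, true);	/* hypothetical API */
}

/* Called after returning from the guest: give the PMU IRQ back to the
 * host so that host perf keeps working when no guest is running. */
static void kvm_pmu_put_hwstate(struct kvm_vcpu *vcpu)
{
	irq_set_forwarding(vcpu->arch.pmu_irq, false);	/* hypothetical API */
}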

Scenario2
-

We might have perf running on Host and Guest simultaneously,
which means it is quite likely that the PMU HW triggers an IRQ meant
for the Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
and "kvm_pmu_sync_hwstate(vcpu);" (similar to the timer sync routine
of Marc's patchset, which is called before local_irq_enable()).

In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
will accidentally forward an IRQ meant for the Host to the Guest unless
we put additional checks in place to inspect the VCPU PMU state.
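
As an example of the kind of additional check meant here (the field names
are hypothetical and not from the RFC patches): before forwarding a
pending PMU IRQ to the guest, kvm_pmu_sync_hwstate() could consult the
guest's saved overflow status rather than trusting the GIC pending bit
alone:

/* Hypothetical: pmovsset is a shadow copy of the guest's PMOVSSET
 * register, saved by the PMU world-switch code. */
static bool kvm_pmu_overflow_is_guests(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.pmu_regs.pmovsset != 0;
}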

Am I missing any detail about IRQ forwarding for the above
scenarios?

If not, then can we consider the current mask/unmask approach
for forwarding PMU IRQs?

Marc?? Will??

Regards,
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-17 Thread Anup Patel
On Tue, Nov 11, 2014 at 2:48 PM, Anup Patel  wrote:
> Hi All,
>
> I have second thoughts about rebasing KVM PMU patches
> to Marc's irq-forwarding patches.
>
> The PMU IRQs (when virtualized by KVM) are not exactly
> forwarded IRQs because they are shared between Host
> and Guest.
>
> Scenario1
> -
>
> We might have perf running on Host and no KVM guest
> running. In this scenario, we wont get interrupts on Host
> because the kvm_pmu_hyp_init() (similar to the function
> kvm_timer_hyp_init() of Marc's IRQ-forwarding
> implementation) has put all host PMU IRQs in forwarding
> mode.
>
> The only way solve this problem is to not set forwarding
> mode for PMU IRQs in kvm_pmu_hyp_init() and instead
> have special routines to turn on and turn off the forwarding
> mode of PMU IRQs. These routines will be called from
> kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
> forwarding state.
>
> Scenario2
> -
>
> We might have perf running on Host and Guest simultaneously
> which means it is quite likely that PMU HW trigger IRQ meant
> for Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
> and "kvm_pmu_sync_hwstate(vcpu);" (similar to timer sync routine
> of Marc's patchset which is called before local_irq_enable()).
>
> In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
> will accidentally forward IRQ meant for Host to Guest unless
> we put additional checks to inspect VCPU PMU state.
>
> Am I missing any detail about IRQ forwarding for above
> scenarios?
>
> If not then can we consider current mask/unmask approach
> for forwarding PMU IRQs?
>
> Marc?? Will??
>
> Regards,
> Anup

Ping ???

--
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-20 Thread Anup Patel
On Wed, Nov 19, 2014 at 8:59 PM, Christoffer Dall
 wrote:
> On Tue, Nov 11, 2014 at 02:48:25PM +0530, Anup Patel wrote:
>> Hi All,
>>
>> I have second thoughts about rebasing KVM PMU patches
>> to Marc's irq-forwarding patches.
>>
>> The PMU IRQs (when virtualized by KVM) are not exactly
>> forwarded IRQs because they are shared between Host
>> and Guest.
>>
>> Scenario1
>> -
>>
>> We might have perf running on Host and no KVM guest
>> running. In this scenario, we wont get interrupts on Host
>> because the kvm_pmu_hyp_init() (similar to the function
>> kvm_timer_hyp_init() of Marc's IRQ-forwarding
>> implementation) has put all host PMU IRQs in forwarding
>> mode.
>>
>> The only way solve this problem is to not set forwarding
>> mode for PMU IRQs in kvm_pmu_hyp_init() and instead
>> have special routines to turn on and turn off the forwarding
>> mode of PMU IRQs. These routines will be called from
>> kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
>> forwarding state.
>>
>> Scenario2
>> -
>>
>> We might have perf running on Host and Guest simultaneously
>> which means it is quite likely that PMU HW trigger IRQ meant
>> for Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
>> and "kvm_pmu_sync_hwstate(vcpu);" (similar to timer sync routine
>> of Marc's patchset which is called before local_irq_enable()).
>>
>> In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
>> will accidentally forward IRQ meant for Host to Guest unless
>> we put additional checks to inspect VCPU PMU state.
>>
>> Am I missing any detail about IRQ forwarding for above
>> scenarios?
>>
> Hi Anup,

Hi Christoffer,

>
> I briefly discussed this with Marc.  What I don't understand is how it
> would be possible to get an interrupt for the host while running the
> guest?
>
> The rationale behind my question is that whenever you're running the
> guest, the PMU should be programmed exclusively with guest state, and
> since the PMU is per core, any interrupts should be for the guest, where
> it would always be pending.

Yes, that's right, the PMU is programmed exclusively for the guest when
the guest is running and for the host when the host is running.

Let us assume a situation (Scenario2 mentioned previously)
where both host and guest are using the PMU. When the guest is
running we come back to host mode due to a variety of reasons
(stage2 fault, guest IO, regular host interrupt, host interrupt
meant for guest, etc.), which means we will return from the
"ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);" statement in the
kvm_arch_vcpu_ioctl_run() function with local IRQs disabled.
At this point we would have restored back the host PMU context, and
any PMU counter used by the host can trigger a PMU overflow interrupt
for the host. Now we will have "kvm_pmu_sync_hwstate(vcpu);"
in the kvm_arch_vcpu_ioctl_run() function (similar to the
kvm_timer_sync_hwstate() of Marc's IRQ forwarding patchset),
which will try to detect the PMU IRQ forwarding state in the GIC, hence
it can accidentally see a PMU IRQ pending for the guest while this
PMU IRQ is actually meant for the host.
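
In code-comment form, the window being described is roughly this
(ordering only, not actual kernel code):

	local_irq_disable();
	ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
	/* host PMU context has been restored on the way out;
	 * a host PMU counter can overflow right here, leaving an IRQ
	 * pending at the GIC while local IRQs are still disabled */
	kvm_timer_sync_hwstate(vcpu);
	kvm_pmu_sync_hwstate(vcpu);
	/* may read that pending PMU IRQ as if it belonged to the guest */
	local_irq_enable();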

The above-mentioned situation does not happen for the timer
because virtual timer interrupts are used exclusively for the guest.
The exclusive use of the virtual timer interrupt for the guest ensures that
the function kvm_timer_sync_hwstate() will always see the correct
state of the virtual timer IRQ from the GIC.

>
> When migrating a VM with a pending PMU interrupt away from a CPU core, we
> also capture the active state (the forwarding patches already handle
> this), and obviously the PMU state along with it.

Yes, the migration of PMU state and PMU interrupt state is
quite clear.

>
> Does this address your concern?

I hope the above description gives you an idea of the concern
I raised.

>
> -Christoffer

Regards,
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-21 Thread Anup Patel
Hi Christoffer,

On Fri, Nov 21, 2014 at 3:29 PM, Christoffer Dall
 wrote:
> On Thu, Nov 20, 2014 at 08:17:32PM +0530, Anup Patel wrote:
>> On Wed, Nov 19, 2014 at 8:59 PM, Christoffer Dall
>>  wrote:
>> > On Tue, Nov 11, 2014 at 02:48:25PM +0530, Anup Patel wrote:
>> >> Hi All,
>> >>
>> >> I have second thoughts about rebasing KVM PMU patches
>> >> to Marc's irq-forwarding patches.
>> >>
>> >> The PMU IRQs (when virtualized by KVM) are not exactly
>> >> forwarded IRQs because they are shared between Host
>> >> and Guest.
>> >>
>> >> Scenario1
>> >> -
>> >>
>> >> We might have perf running on Host and no KVM guest
>> >> running. In this scenario, we wont get interrupts on Host
>> >> because the kvm_pmu_hyp_init() (similar to the function
>> >> kvm_timer_hyp_init() of Marc's IRQ-forwarding
>> >> implementation) has put all host PMU IRQs in forwarding
>> >> mode.
>> >>
>> >> The only way solve this problem is to not set forwarding
>> >> mode for PMU IRQs in kvm_pmu_hyp_init() and instead
>> >> have special routines to turn on and turn off the forwarding
>> >> mode of PMU IRQs. These routines will be called from
>> >> kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
>> >> forwarding state.
>> >>
>> >> Scenario2
>> >> -
>> >>
>> >> We might have perf running on Host and Guest simultaneously
>> >> which means it is quite likely that PMU HW trigger IRQ meant
>> >> for Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
>> >> and "kvm_pmu_sync_hwstate(vcpu);" (similar to timer sync routine
>> >> of Marc's patchset which is called before local_irq_enable()).
>> >>
>> >> In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
>> >> will accidentally forward IRQ meant for Host to Guest unless
>> >> we put additional checks to inspect VCPU PMU state.
>> >>
>> >> Am I missing any detail about IRQ forwarding for above
>> >> scenarios?
>> >>
>> > Hi Anup,
>>
>> Hi Christoffer,
>>
>> >
>> > I briefly discussed this with Marc.  What I don't understand is how it
>> > would be possible to get an interrupt for the host while running the
>> > guest?
>> >
>> > The rationale behind my question is that whenever you're running the
>> > guest, the PMU should be programmed exclusively with guest state, and
>> > since the PMU is per core, any interrupts should be for the guest, where
>> > it would always be pending.
>>
>> Yes, thats right PMU is programmed exclusively for guest when
>> guest is running and for host when host is running.
>>
>> Let us assume a situation (Scenario2 mentioned previously)
>> where both host and guest are using PMU. When the guest is
>> running we come back to host mode due to variety of reasons
>> (stage2 fault, guest IO, regular host interrupt, host interrupt
>> meant for guest, ) which means we will return from the
>> "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);" statement in the
>> kvm_arch_vcpu_ioctl_run() function with local IRQs disabled.
>> At this point we would have restored back host PMU context and
>> any PMU counter used by host can trigger PMU overflow interrup
>> for host. Now we will be having "kvm_pmu_sync_hwstate(vcpu);"
>> in the kvm_arch_vcpu_ioctl_run() function (similar to the
>> kvm_timer_sync_hwstate() of Marc's IRQ forwarding patchset)
>> which will try to detect PMU irq forwarding state in GIC hence it
>> can accidentally discover PMU irq pending for guest while this
>> PMU irq is actually meant for host.
>>
>> This above mentioned situation does not happen for timer
>> because virtual timer interrupts are exclusively used for guest.
>> The exclusive use of virtual timer interrupt for guest ensures that
>> the function kvm_timer_sync_hwstate() will always see correct
>> state of virtual timer IRQ from GIC.
>>
> I'm not quite following.
>
> When you call kvm_pmu_sync_hwstate(vcpu) in the non-preemtible section,
> you would (1) capture the active state of the IRQ pertaining to the
> guest and (2) deactive the IRQ on the host, then (3) switch the state of
> the PMU to the host state, and finally (4) re-enable IRQs on the CPU
> you're running on.
>
> If the host PMU

Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-24 Thread Anup Patel
On Fri, Nov 21, 2014 at 5:19 PM, Christoffer Dall
 wrote:
> On Fri, Nov 21, 2014 at 04:06:05PM +0530, Anup Patel wrote:
>> Hi Christoffer,
>>
>> On Fri, Nov 21, 2014 at 3:29 PM, Christoffer Dall
>>  wrote:
>> > On Thu, Nov 20, 2014 at 08:17:32PM +0530, Anup Patel wrote:
>> >> On Wed, Nov 19, 2014 at 8:59 PM, Christoffer Dall
>> >>  wrote:
>> >> > On Tue, Nov 11, 2014 at 02:48:25PM +0530, Anup Patel wrote:
>> >> >> Hi All,
>> >> >>
>> >> >> I have second thoughts about rebasing KVM PMU patches
>> >> >> to Marc's irq-forwarding patches.
>> >> >>
>> >> >> The PMU IRQs (when virtualized by KVM) are not exactly
>> >> >> forwarded IRQs because they are shared between Host
>> >> >> and Guest.
>> >> >>
>> >> >> Scenario1
>> >> >> -
>> >> >>
>> >> >> We might have perf running on Host and no KVM guest
>> >> >> running. In this scenario, we wont get interrupts on Host
>> >> >> because the kvm_pmu_hyp_init() (similar to the function
>> >> >> kvm_timer_hyp_init() of Marc's IRQ-forwarding
>> >> >> implementation) has put all host PMU IRQs in forwarding
>> >> >> mode.
>> >> >>
>> >> >> The only way solve this problem is to not set forwarding
>> >> >> mode for PMU IRQs in kvm_pmu_hyp_init() and instead
>> >> >> have special routines to turn on and turn off the forwarding
>> >> >> mode of PMU IRQs. These routines will be called from
>> >> >> kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
>> >> >> forwarding state.
>> >> >>
>> >> >> Scenario2
>> >> >> -
>> >> >>
>> >> >> We might have perf running on Host and Guest simultaneously
>> >> >> which means it is quite likely that PMU HW trigger IRQ meant
>> >> >> for Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
>> >> >> and "kvm_pmu_sync_hwstate(vcpu);" (similar to timer sync routine
>> >> >> of Marc's patchset which is called before local_irq_enable()).
>> >> >>
>> >> >> In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
>> >> >> will accidentally forward IRQ meant for Host to Guest unless
>> >> >> we put additional checks to inspect VCPU PMU state.
>> >> >>
>> >> >> Am I missing any detail about IRQ forwarding for above
>> >> >> scenarios?
>> >> >>
>> >> > Hi Anup,
>> >>
>> >> Hi Christoffer,
>> >>
>> >> >
>> >> > I briefly discussed this with Marc.  What I don't understand is how it
>> >> > would be possible to get an interrupt for the host while running the
>> >> > guest?
>> >> >
>> >> > The rationale behind my question is that whenever you're running the
>> >> > guest, the PMU should be programmed exclusively with guest state, and
>> >> > since the PMU is per core, any interrupts should be for the guest, where
>> >> > it would always be pending.
>> >>
>> >> Yes, thats right PMU is programmed exclusively for guest when
>> >> guest is running and for host when host is running.
>> >>
>> >> Let us assume a situation (Scenario2 mentioned previously)
>> >> where both host and guest are using PMU. When the guest is
>> >> running we come back to host mode due to variety of reasons
>> >> (stage2 fault, guest IO, regular host interrupt, host interrupt
>> >> meant for guest, ) which means we will return from the
>> >> "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);" statement in the
>> >> kvm_arch_vcpu_ioctl_run() function with local IRQs disabled.
>> >> At this point we would have restored back host PMU context and
>> >> any PMU counter used by host can trigger PMU overflow interrup
>> >> for host. Now we will be having "kvm_pmu_sync_hwstate(vcpu);"
>> >> in the kvm_arch_vcpu_ioctl_run() function (similar to the
>> >> kvm_timer_sync_hwstate() of Marc's IRQ forwarding patchset)
>> >> which will try to detect PMU irq forwarding state in GIC hence it
>> >> can 

Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-25 Thread Anup Patel
Hi Christoffer,

On Mon, Nov 24, 2014 at 8:07 PM, Christoffer Dall
 wrote:
> On Mon, Nov 24, 2014 at 02:14:48PM +0530, Anup Patel wrote:
>> On Fri, Nov 21, 2014 at 5:19 PM, Christoffer Dall
>>  wrote:
>> > On Fri, Nov 21, 2014 at 04:06:05PM +0530, Anup Patel wrote:
>> >> Hi Christoffer,
>> >>
>> >> On Fri, Nov 21, 2014 at 3:29 PM, Christoffer Dall
>> >>  wrote:
>> >> > On Thu, Nov 20, 2014 at 08:17:32PM +0530, Anup Patel wrote:
>> >> >> On Wed, Nov 19, 2014 at 8:59 PM, Christoffer Dall
>> >> >>  wrote:
>> >> >> > On Tue, Nov 11, 2014 at 02:48:25PM +0530, Anup Patel wrote:
>> >> >> >> Hi All,
>> >> >> >>
>> >> >> >> I have second thoughts about rebasing KVM PMU patches
>> >> >> >> to Marc's irq-forwarding patches.
>> >> >> >>
>> >> >> >> The PMU IRQs (when virtualized by KVM) are not exactly
>> >> >> >> forwarded IRQs because they are shared between Host
>> >> >> >> and Guest.
>> >> >> >>
>> >> >> >> Scenario1
>> >> >> >> -
>> >> >> >>
>> >> >> >> We might have perf running on Host and no KVM guest
>> >> >> >> running. In this scenario, we wont get interrupts on Host
>> >> >> >> because the kvm_pmu_hyp_init() (similar to the function
>> >> >> >> kvm_timer_hyp_init() of Marc's IRQ-forwarding
>> >> >> >> implementation) has put all host PMU IRQs in forwarding
>> >> >> >> mode.
>> >> >> >>
>> >> >> >> The only way solve this problem is to not set forwarding
>> >> >> >> mode for PMU IRQs in kvm_pmu_hyp_init() and instead
>> >> >> >> have special routines to turn on and turn off the forwarding
>> >> >> >> mode of PMU IRQs. These routines will be called from
>> >> >> >> kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
>> >> >> >> forwarding state.
>> >> >> >>
>> >> >> >> Scenario2
>> >> >> >> -
>> >> >> >>
>> >> >> >> We might have perf running on Host and Guest simultaneously
>> >> >> >> which means it is quite likely that PMU HW trigger IRQ meant
>> >> >> >> for Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
>> >> >> >> and "kvm_pmu_sync_hwstate(vcpu);" (similar to timer sync routine
>> >> >> >> of Marc's patchset which is called before local_irq_enable()).
>> >> >> >>
>> >> >> >> In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
>> >> >> >> will accidentally forward IRQ meant for Host to Guest unless
>> >> >> >> we put additional checks to inspect VCPU PMU state.
>> >> >> >>
>> >> >> >> Am I missing any detail about IRQ forwarding for above
>> >> >> >> scenarios?
>> >> >> >>
>> >> >> > Hi Anup,
>> >> >>
>> >> >> Hi Christoffer,
>> >> >>
>> >> >> >
>> >> >> > I briefly discussed this with Marc.  What I don't understand is how 
>> >> >> > it
>> >> >> > would be possible to get an interrupt for the host while running the
>> >> >> > guest?
>> >> >> >
>> >> >> > The rationale behind my question is that whenever you're running the
>> >> >> > guest, the PMU should be programmed exclusively with guest state, and
>> >> >> > since the PMU is per core, any interrupts should be for the guest, 
>> >> >> > where
>> >> >> > it would always be pending.
>> >> >>
>> >> >> Yes, thats right PMU is programmed exclusively for guest when
>> >> >> guest is running and for host when host is running.
>> >> >>
>> >> >> Let us assume a situation (Scenario2 mentioned previously)
>> >> >> where both host and guest are using PMU. When the guest is
>> >> >> running we come back to host mode due to 

Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-27 Thread Anup Patel
On Tue, Nov 25, 2014 at 7:12 PM, Christoffer Dall
 wrote:
> On Tue, Nov 25, 2014 at 06:17:03PM +0530, Anup Patel wrote:
>> Hi Christoffer,
>>
>> On Mon, Nov 24, 2014 at 8:07 PM, Christoffer Dall
>>  wrote:
>> > On Mon, Nov 24, 2014 at 02:14:48PM +0530, Anup Patel wrote:
>> >> On Fri, Nov 21, 2014 at 5:19 PM, Christoffer Dall
>> >>  wrote:
>> >> > On Fri, Nov 21, 2014 at 04:06:05PM +0530, Anup Patel wrote:
>> >> >> Hi Christoffer,
>> >> >>
>> >> >> On Fri, Nov 21, 2014 at 3:29 PM, Christoffer Dall
>> >> >>  wrote:
>> >> >> > On Thu, Nov 20, 2014 at 08:17:32PM +0530, Anup Patel wrote:
>> >> >> >> On Wed, Nov 19, 2014 at 8:59 PM, Christoffer Dall
>> >> >> >>  wrote:
>> >> >> >> > On Tue, Nov 11, 2014 at 02:48:25PM +0530, Anup Patel wrote:
>> >> >> >> >> Hi All,
>> >> >> >> >>
>> >> >> >> >> I have second thoughts about rebasing KVM PMU patches
>> >> >> >> >> to Marc's irq-forwarding patches.
>> >> >> >> >>
>> >> >> >> >> The PMU IRQs (when virtualized by KVM) are not exactly
>> >> >> >> >> forwarded IRQs because they are shared between Host
>> >> >> >> >> and Guest.
>> >> >> >> >>
>> >> >> >> >> Scenario1
>> >> >> >> >> -
>> >> >> >> >>
>> >> >> >> >> We might have perf running on Host and no KVM guest
>> >> >> >> >> running. In this scenario, we wont get interrupts on Host
>> >> >> >> >> because the kvm_pmu_hyp_init() (similar to the function
>> >> >> >> >> kvm_timer_hyp_init() of Marc's IRQ-forwarding
>> >> >> >> >> implementation) has put all host PMU IRQs in forwarding
>> >> >> >> >> mode.
>> >> >> >> >>
>> >> >> >> >> The only way solve this problem is to not set forwarding
>> >> >> >> >> mode for PMU IRQs in kvm_pmu_hyp_init() and instead
>> >> >> >> >> have special routines to turn on and turn off the forwarding
>> >> >> >> >> mode of PMU IRQs. These routines will be called from
>> >> >> >> >> kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
>> >> >> >> >> forwarding state.
>> >> >> >> >>
>> >> >> >> >> Scenario2
>> >> >> >> >> -
>> >> >> >> >>
>> >> >> >> >> We might have perf running on Host and Guest simultaneously
>> >> >> >> >> which means it is quite likely that PMU HW trigger IRQ meant
>> >> >> >> >> for Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
>> >> >> >> >> and "kvm_pmu_sync_hwstate(vcpu);" (similar to timer sync routine
>> >> >> >> >> of Marc's patchset which is called before local_irq_enable()).
>> >> >> >> >>
>> >> >> >> >> In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
>> >> >> >> >> will accidentally forward IRQ meant for Host to Guest unless
>> >> >> >> >> we put additional checks to inspect VCPU PMU state.
>> >> >> >> >>
>> >> >> >> >> Am I missing any detail about IRQ forwarding for above
>> >> >> >> >> scenarios?
>> >> >> >> >>
>> >> >> >> > Hi Anup,
>> >> >> >>
>> >> >> >> Hi Christoffer,
>> >> >> >>
>> >> >> >> >
>> >> >> >> > I briefly discussed this with Marc.  What I don't understand is 
>> >> >> >> > how it
>> >> >> >> > would be possible to get an interrupt for the host while running 
>> >> >> >> > the
>> >> >> >> > guest?
>> >> >> >> >
>> >> >> >> > The rationale behind my question is that whenever y

Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-27 Thread Anup Patel
On Thu, Nov 27, 2014 at 4:10 PM, Marc Zyngier  wrote:
> On 27/11/14 10:22, Anup Patel wrote:
>> On Tue, Nov 25, 2014 at 7:12 PM, Christoffer Dall
>>  wrote:
>>> On Tue, Nov 25, 2014 at 06:17:03PM +0530, Anup Patel wrote:
>>>> Hi Christoffer,
>>>>
>>>> On Mon, Nov 24, 2014 at 8:07 PM, Christoffer Dall
>>>>  wrote:
>>>>> On Mon, Nov 24, 2014 at 02:14:48PM +0530, Anup Patel wrote:
>>>>>> On Fri, Nov 21, 2014 at 5:19 PM, Christoffer Dall
>>>>>>  wrote:
>>>>>>> On Fri, Nov 21, 2014 at 04:06:05PM +0530, Anup Patel wrote:
>>>>>>>> Hi Christoffer,
>>>>>>>>
>>>>>>>> On Fri, Nov 21, 2014 at 3:29 PM, Christoffer Dall
>>>>>>>>  wrote:
>>>>>>>>> On Thu, Nov 20, 2014 at 08:17:32PM +0530, Anup Patel wrote:
>>>>>>>>>> On Wed, Nov 19, 2014 at 8:59 PM, Christoffer Dall
>>>>>>>>>>  wrote:
>>>>>>>>>>> On Tue, Nov 11, 2014 at 02:48:25PM +0530, Anup Patel wrote:
>>>>>>>>>>>> Hi All,
>>>>>>>>>>>>
>>>>>>>>>>>> I have second thoughts about rebasing KVM PMU patches
>>>>>>>>>>>> to Marc's irq-forwarding patches.
>>>>>>>>>>>>
>>>>>>>>>>>> The PMU IRQs (when virtualized by KVM) are not exactly
>>>>>>>>>>>> forwarded IRQs because they are shared between Host
>>>>>>>>>>>> and Guest.
>>>>>>>>>>>>
>>>>>>>>>>>> Scenario1
>>>>>>>>>>>> -
>>>>>>>>>>>>
>>>>>>>>>>>> We might have perf running on Host and no KVM guest
>>>>>>>>>>>> running. In this scenario, we wont get interrupts on Host
>>>>>>>>>>>> because the kvm_pmu_hyp_init() (similar to the function
>>>>>>>>>>>> kvm_timer_hyp_init() of Marc's IRQ-forwarding
>>>>>>>>>>>> implementation) has put all host PMU IRQs in forwarding
>>>>>>>>>>>> mode.
>>>>>>>>>>>>
>>>>>>>>>>>> The only way solve this problem is to not set forwarding
>>>>>>>>>>>> mode for PMU IRQs in kvm_pmu_hyp_init() and instead
>>>>>>>>>>>> have special routines to turn on and turn off the forwarding
>>>>>>>>>>>> mode of PMU IRQs. These routines will be called from
>>>>>>>>>>>> kvm_arch_vcpu_ioctl_run() for toggling the PMU IRQ
>>>>>>>>>>>> forwarding state.
>>>>>>>>>>>>
>>>>>>>>>>>> Scenario2
>>>>>>>>>>>> -
>>>>>>>>>>>>
>>>>>>>>>>>> We might have perf running on Host and Guest simultaneously
>>>>>>>>>>>> which means it is quite likely that PMU HW trigger IRQ meant
>>>>>>>>>>>> for Host between "ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);"
>>>>>>>>>>>> and "kvm_pmu_sync_hwstate(vcpu);" (similar to timer sync routine
>>>>>>>>>>>> of Marc's patchset which is called before local_irq_enable()).
>>>>>>>>>>>>
>>>>>>>>>>>> In this scenario, the updated kvm_pmu_sync_hwstate(vcpu)
>>>>>>>>>>>> will accidentally forward IRQ meant for Host to Guest unless
>>>>>>>>>>>> we put additional checks to inspect VCPU PMU state.
>>>>>>>>>>>>
>>>>>>>>>>>> Am I missing any detail about IRQ forwarding for above
>>>>>>>>>>>> scenarios?
>>>>>>>>>>>>
>>>>>>>>>>> Hi Anup,
>>>>>>>>>>
>>>>>>>>>> Hi Christoffer,
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I briefly discussed this with 

Re: [PATCH 1/2] ARM: KVM: Yield CPU when vcpu executes a WFE

2013-10-09 Thread Anup Patel
On Wed, Oct 9, 2013 at 7:48 PM, Marc Zyngier  wrote:
> On 09/10/13 14:26, Gleb Natapov wrote:
>> On Wed, Oct 09, 2013 at 03:09:54PM +0200, Alexander Graf wrote:
>>>
>>> On 07.10.2013, at 18:53, Gleb Natapov  wrote:
>>>
 On Mon, Oct 07, 2013 at 06:30:04PM +0200, Alexander Graf wrote:
>
> On 07.10.2013, at 18:16, Marc Zyngier  wrote:
>
>> On 07/10/13 17:04, Alexander Graf wrote:
>>>
>>> On 07.10.2013, at 17:40, Marc Zyngier  wrote:
>>>
 On an (even slightly) oversubscribed system, spinlocks are quickly
 becoming a bottleneck, as some vcpus are spinning, waiting for a
 lock to be released, while the vcpu holding the lock may not be
 running at all.

 This creates contention, and the observed slowdown is 40x for
 hackbench. No, this isn't a typo.

 The solution is to trap blocking WFEs and tell KVM that we're now
 spinning. This ensures that other vpus will get a scheduling boost,
 allowing the lock to be released more quickly.

> From a performance point of view: hackbench 1 process 1000

 2xA15 host (baseline):  1.843s

 2xA15 guest w/o patch:  2.083s 4xA15 guest w/o patch:   80.212s

 2xA15 guest w/ patch:   2.072s 4xA15 guest w/ patch:3.202s
>>>
>>> I'm confused. You got from 2.083s when not exiting on spin locks to
>>> 2.072 when exiting on _every_ spin lock that didn't immediately
>>> succeed. I would've expected to second number to be worse rather than
>>> better. I assume it's within jitter, I'm still puzzled why you don't
>>> see any significant drop in performance.
>>
>> The key is in the ARM ARM:
>>
>> B1.14.9: "When HCR.TWE is set to 1, and the processor is in a Non-secure
>> mode other than Hyp mode, execution of a WFE instruction generates a Hyp
>> Trap exception if, ignoring the value of the HCR.TWE bit, conditions
>> permit the processor to suspend execution."
>>
>> So, on a non-overcommitted system, you rarely hit a blocking spinlock,
>> hence not trapping. Otherwise, performance would go down the drain very
>> quickly.
>
> Well, it's the same as pause/loop exiting on x86, but there we have 
> special hardware features to only ever exit after n number of 
> turnarounds. I wonder why we have those when we could just as easily exit 
> on every blocking path.
>
 It will hurt performance if vcpu that holds the lock is running.
>>>
>>> Apparently not so on ARM. At least that's what Marc's numbers are showing. 
>>> I'm not sure what exactly that means. Basically his logic is "if we spin, 
>>> the holder must have been preempted". And it seems to work out surprisingly 
>>> well.
>
> Yes. I basically assume that contention should be rare, and that ending
> up in a *blocking* WFE is a sign that we're in thrashing mode already
> (no event is pending).
>
>>>
>> For not contended locks it make sense. We need to recheck if x86
>> assumption is still true there, but x86 lock is ticketing which
>> has not only lock holder preemption, but also lock waiter
>> preemption problem which make overcommit problem even worse.
>
> Locks are ticketing on ARM as well. But there is one key difference here
> with x86 (or at least what I understand of it, which is very close to
> none): We only trap if we would have blocked anyway. In our case, it is
> almost always better to give up the CPU to someone else rather than
> waiting for some event to take the CPU out of sleep.

Benefits of "Yield CPU when vcpu executes a WFE" seems to depend on:
1. How spin lock is implemented in Guest OS?
we cannot assume
that underlying Guest OS is always Linux.
2. How bad/good is spin

It will be good if we can enable/disable "Yield CPU when vcpu executes a WFE


>
> M.
> --
> Jazz is not dead. It just smells funny...
>
>
> ___
> kvmarm mailing list
> kvm...@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/2] ARM: KVM: Yield CPU when vcpu executes a WFE

2013-10-09 Thread Anup Patel
On Wed, Oct 9, 2013 at 8:20 PM, Anup Patel  wrote:
> On Wed, Oct 9, 2013 at 7:48 PM, Marc Zyngier  wrote:
>> On 09/10/13 14:26, Gleb Natapov wrote:
>>> On Wed, Oct 09, 2013 at 03:09:54PM +0200, Alexander Graf wrote:
>>>>
>>>> On 07.10.2013, at 18:53, Gleb Natapov  wrote:
>>>>
>>>>> On Mon, Oct 07, 2013 at 06:30:04PM +0200, Alexander Graf wrote:
>>>>>>
>>>>>> On 07.10.2013, at 18:16, Marc Zyngier  wrote:
>>>>>>
>>>>>>> On 07/10/13 17:04, Alexander Graf wrote:
>>>>>>>>
>>>>>>>> On 07.10.2013, at 17:40, Marc Zyngier  wrote:
>>>>>>>>
>>>>>>>>> On an (even slightly) oversubscribed system, spinlocks are quickly
>>>>>>>>> becoming a bottleneck, as some vcpus are spinning, waiting for a
>>>>>>>>> lock to be released, while the vcpu holding the lock may not be
>>>>>>>>> running at all.
>>>>>>>>>
>>>>>>>>> This creates contention, and the observed slowdown is 40x for
>>>>>>>>> hackbench. No, this isn't a typo.
>>>>>>>>>
>>>>>>>>> The solution is to trap blocking WFEs and tell KVM that we're now
>>>>>>>>> spinning. This ensures that other vpus will get a scheduling boost,
>>>>>>>>> allowing the lock to be released more quickly.
>>>>>>>>>
>>>>>>>>>> From a performance point of view: hackbench 1 process 1000
>>>>>>>>>
>>>>>>>>> 2xA15 host (baseline):  1.843s
>>>>>>>>>
>>>>>>>>> 2xA15 guest w/o patch:  2.083s 4xA15 guest w/o patch:   80.212s
>>>>>>>>>
>>>>>>>>> 2xA15 guest w/ patch:   2.072s 4xA15 guest w/ patch:3.202s
>>>>>>>>
>>>>>>>> I'm confused. You got from 2.083s when not exiting on spin locks to
>>>>>>>> 2.072 when exiting on _every_ spin lock that didn't immediately
>>>>>>>> succeed. I would've expected to second number to be worse rather than
>>>>>>>> better. I assume it's within jitter, I'm still puzzled why you don't
>>>>>>>> see any significant drop in performance.
>>>>>>>
>>>>>>> The key is in the ARM ARM:
>>>>>>>
>>>>>>> B1.14.9: "When HCR.TWE is set to 1, and the processor is in a Non-secure
>>>>>>> mode other than Hyp mode, execution of a WFE instruction generates a Hyp
>>>>>>> Trap exception if, ignoring the value of the HCR.TWE bit, conditions
>>>>>>> permit the processor to suspend execution."
>>>>>>>
>>>>>>> So, on a non-overcommitted system, you rarely hit a blocking spinlock,
>>>>>>> hence not trapping. Otherwise, performance would go down the drain very
>>>>>>> quickly.
>>>>>>
>>>>>> Well, it's the same as pause/loop exiting on x86, but there we have 
>>>>>> special hardware features to only ever exit after n number of 
>>>>>> turnarounds. I wonder why we have those when we could just as easily 
>>>>>> exit on every blocking path.
>>>>>>
>>>>> It will hurt performance if vcpu that holds the lock is running.
>>>>
>>>> Apparently not so on ARM. At least that's what Marc's numbers are showing. 
>>>> I'm not sure what exactly that means. Basically his logic is "if we spin, 
>>>> the holder must have been preempted". And it seems to work out 
>>>> surprisingly well.
>>
>> Yes. I basically assume that contention should be rare, and that ending
>> up in a *blocking* WFE is a sign that we're in thrashing mode already
>> (no event is pending).
>>
>>>>
>>> For not contended locks it make sense. We need to recheck if x86
>>> assumption is still true there, but x86 lock is ticketing which
>>> has not only lock holder preemption, but also lock waiter
>>> preemption problem which make overcommit problem even worse.
>>
>> Locks are ticketing on ARM as well. But there is one key difference here
>> with x86 (or at least what I understand of it, which is very close to
>> none): We only trap if we would have blocked anyway. In our case, it is
>> almost always better to give up the CPU to someone else rather than
>> waiting for some event to take the CPU out of sleep.
>
> Benefits of "Yield CPU when vcpu executes a WFE" seems to depend on:
> 1. How spin lock is implemented in Guest OS?
> we cannot assume
> that underlying Guest OS is always Linux.
> 2. How bad/good is spin
>
> It will be good if we can enable/disable "Yield CPU when vcpu executes a WFE

(Please ignore the previous incomplete reply.)

The benefits of "Yield CPU when vcpu executes a WFE" seem to depend on:
1. How spin locks are implemented in the Guest OS
(Note: we cannot assume that the underlying Guest OS is always Linux)
2. How bad/good spin lock contention is in the Guest
(Note: here too we cannot assume which workloads run in the Guest)

It would be good if we could enable/disable "Yield CPU when vcpu
executes a WFE" via Kconfig.
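
For illustration only, a minimal sketch of what I mean, assuming a
hypothetical CONFIG_KVM_ARM_WFE_TRAP option and a hypothetical
HCR_GUEST_FLAGS_NO_TWE baseline (i.e. the usual guest HCR flags
without HCR_TWE); this is not existing code:

	static unsigned long kvm_guest_hcr_flags(void)
	{
		unsigned long hcr = HCR_GUEST_FLAGS_NO_TWE;

	#ifdef CONFIG_KVM_ARM_WFE_TRAP
		hcr |= HCR_TWE;	/* trap blocking WFE so KVM can yield the vcpu */
	#endif
		return hcr;
	}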

--Anup

>
>
>>
>> M.
>> --
>> Jazz is not dead. It just smells funny...
>>
>>
>> ___
>> kvmarm mailing list
>> kvm...@lists.cs.columbia.edu
>> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/2] ARM: KVM: Yield CPU when vcpu executes a WFE

2013-10-09 Thread Anup Patel
On Wed, Oct 9, 2013 at 8:29 PM, Marc Zyngier  wrote:
> On 09/10/13 15:50, Anup Patel wrote:
>> On Wed, Oct 9, 2013 at 7:48 PM, Marc Zyngier  wrote:
>>> On 09/10/13 14:26, Gleb Natapov wrote:
>>>> On Wed, Oct 09, 2013 at 03:09:54PM +0200, Alexander Graf wrote:
>>>>>
>>>>> On 07.10.2013, at 18:53, Gleb Natapov  wrote:
>>>>>
>>>>>> On Mon, Oct 07, 2013 at 06:30:04PM +0200, Alexander Graf wrote:
>>>>>>>
>>>>>>> On 07.10.2013, at 18:16, Marc Zyngier  wrote:
>>>>>>>
>>>>>>>> On 07/10/13 17:04, Alexander Graf wrote:
>>>>>>>>>
>>>>>>>>> On 07.10.2013, at 17:40, Marc Zyngier  wrote:
>>>>>>>>>
>>>>>>>>>> On an (even slightly) oversubscribed system, spinlocks are quickly
>>>>>>>>>> becoming a bottleneck, as some vcpus are spinning, waiting for a
>>>>>>>>>> lock to be released, while the vcpu holding the lock may not be
>>>>>>>>>> running at all.
>>>>>>>>>>
>>>>>>>>>> This creates contention, and the observed slowdown is 40x for
>>>>>>>>>> hackbench. No, this isn't a typo.
>>>>>>>>>>
>>>>>>>>>> The solution is to trap blocking WFEs and tell KVM that we're now
>>>>>>>>>> spinning. This ensures that other vpus will get a scheduling boost,
>>>>>>>>>> allowing the lock to be released more quickly.
>>>>>>>>>>
>>>>>>>>>>> From a performance point of view: hackbench 1 process 1000
>>>>>>>>>>
>>>>>>>>>> 2xA15 host (baseline):  1.843s
>>>>>>>>>>
>>>>>>>>>> 2xA15 guest w/o patch:  2.083s 4xA15 guest w/o patch:   80.212s
>>>>>>>>>>
>>>>>>>>>> 2xA15 guest w/ patch:   2.072s 4xA15 guest w/ patch:3.202s
>>>>>>>>>
>>>>>>>>> I'm confused. You got from 2.083s when not exiting on spin locks to
>>>>>>>>> 2.072 when exiting on _every_ spin lock that didn't immediately
>>>>>>>>> succeed. I would've expected to second number to be worse rather than
>>>>>>>>> better. I assume it's within jitter, I'm still puzzled why you don't
>>>>>>>>> see any significant drop in performance.
>>>>>>>>
>>>>>>>> The key is in the ARM ARM:
>>>>>>>>
>>>>>>>> B1.14.9: "When HCR.TWE is set to 1, and the processor is in a 
>>>>>>>> Non-secure
>>>>>>>> mode other than Hyp mode, execution of a WFE instruction generates a 
>>>>>>>> Hyp
>>>>>>>> Trap exception if, ignoring the value of the HCR.TWE bit, conditions
>>>>>>>> permit the processor to suspend execution."
>>>>>>>>
>>>>>>>> So, on a non-overcommitted system, you rarely hit a blocking spinlock,
>>>>>>>> hence not trapping. Otherwise, performance would go down the drain very
>>>>>>>> quickly.
>>>>>>>
>>>>>>> Well, it's the same as pause/loop exiting on x86, but there we have 
>>>>>>> special hardware features to only ever exit after n number of 
>>>>>>> turnarounds. I wonder why we have those when we could just as easily 
>>>>>>> exit on every blocking path.
>>>>>>>
>>>>>> It will hurt performance if vcpu that holds the lock is running.
>>>>>
>>>>> Apparently not so on ARM. At least that's what Marc's numbers are 
>>>>> showing. I'm not sure what exactly that means. Basically his logic is "if 
>>>>> we spin, the holder must have been preempted". And it seems to work out 
>>>>> surprisingly well.
>>>
>>> Yes. I basically assume that contention should be rare, and that ending
>>> up in a *blocking* WFE is a sign that we're in thrashing mode already
>>> (no event is pending).
>>>
>>>>>
>>>> For not contended locks it make sense. We need to recheck if x86

Re: [PATCH 1/2] ARM: KVM: Yield CPU when vcpu executes a WFE

2013-10-09 Thread Anup Patel
On Wed, Oct 9, 2013 at 8:40 PM, Anup Patel  wrote:
> On Wed, Oct 9, 2013 at 8:29 PM, Marc Zyngier  wrote:
>> On 09/10/13 15:50, Anup Patel wrote:
>>> On Wed, Oct 9, 2013 at 7:48 PM, Marc Zyngier  wrote:
>>>> On 09/10/13 14:26, Gleb Natapov wrote:
>>>>> On Wed, Oct 09, 2013 at 03:09:54PM +0200, Alexander Graf wrote:
>>>>>>
>>>>>> On 07.10.2013, at 18:53, Gleb Natapov  wrote:
>>>>>>
>>>>>>> On Mon, Oct 07, 2013 at 06:30:04PM +0200, Alexander Graf wrote:
>>>>>>>>
>>>>>>>> On 07.10.2013, at 18:16, Marc Zyngier  wrote:
>>>>>>>>
>>>>>>>>> On 07/10/13 17:04, Alexander Graf wrote:
>>>>>>>>>>
>>>>>>>>>> On 07.10.2013, at 17:40, Marc Zyngier  wrote:
>>>>>>>>>>
>>>>>>>>>>> On an (even slightly) oversubscribed system, spinlocks are quickly
>>>>>>>>>>> becoming a bottleneck, as some vcpus are spinning, waiting for a
>>>>>>>>>>> lock to be released, while the vcpu holding the lock may not be
>>>>>>>>>>> running at all.
>>>>>>>>>>>
>>>>>>>>>>> This creates contention, and the observed slowdown is 40x for
>>>>>>>>>>> hackbench. No, this isn't a typo.
>>>>>>>>>>>
>>>>>>>>>>> The solution is to trap blocking WFEs and tell KVM that we're now
>>>>>>>>>>> spinning. This ensures that other vpus will get a scheduling boost,
>>>>>>>>>>> allowing the lock to be released more quickly.
>>>>>>>>>>>
>>>>>>>>>>>> From a performance point of view: hackbench 1 process 1000
>>>>>>>>>>>
>>>>>>>>>>> 2xA15 host (baseline):  1.843s
>>>>>>>>>>>
>>>>>>>>>>> 2xA15 guest w/o patch:  2.083s 4xA15 guest w/o patch:   80.212s
>>>>>>>>>>>
>>>>>>>>>>> 2xA15 guest w/ patch:   2.072s 4xA15 guest w/ patch:3.202s
>>>>>>>>>>
>>>>>>>>>> I'm confused. You got from 2.083s when not exiting on spin locks to
>>>>>>>>>> 2.072 when exiting on _every_ spin lock that didn't immediately
>>>>>>>>>> succeed. I would've expected to second number to be worse rather than
>>>>>>>>>> better. I assume it's within jitter, I'm still puzzled why you don't
>>>>>>>>>> see any significant drop in performance.
>>>>>>>>>
>>>>>>>>> The key is in the ARM ARM:
>>>>>>>>>
>>>>>>>>> B1.14.9: "When HCR.TWE is set to 1, and the processor is in a 
>>>>>>>>> Non-secure
>>>>>>>>> mode other than Hyp mode, execution of a WFE instruction generates a 
>>>>>>>>> Hyp
>>>>>>>>> Trap exception if, ignoring the value of the HCR.TWE bit, conditions
>>>>>>>>> permit the processor to suspend execution."
>>>>>>>>>
>>>>>>>>> So, on a non-overcommitted system, you rarely hit a blocking spinlock,
>>>>>>>>> hence not trapping. Otherwise, performance would go down the drain 
>>>>>>>>> very
>>>>>>>>> quickly.
>>>>>>>>
>>>>>>>> Well, it's the same as pause/loop exiting on x86, but there we have 
>>>>>>>> special hardware features to only ever exit after n number of 
>>>>>>>> turnarounds. I wonder why we have those when we could just as easily 
>>>>>>>> exit on every blocking path.
>>>>>>>>
>>>>>>> It will hurt performance if vcpu that holds the lock is running.
>>>>>>
>>>>>> Apparently not so on ARM. At least that's what Marc's numbers are 
>>>>>> showing. I'm not sure what exactly that means. Basically his logic is 
>>>>>> "if we spin, the holder must have been preempted". And it seems to work 
>>>>>> out surp

Re: [PATCH 0/3] virtio-mmio: handle BE guests on LE hosts

2013-10-14 Thread Anup Patel
On Mon, Oct 14, 2013 at 8:52 PM, Paolo Bonzini  wrote:
> Il 14/10/2013 17:12, Marc Zyngier ha scritto:
>> On 14/10/13 15:56, Paolo Bonzini wrote:
>>> Il 14/10/2013 16:52, Marc Zyngier ha scritto:
>> Sure. And I imagine this traps back into the kernel to read some
>> register and find out what the endianness of the accessing CPU is?
>
> Not yet. To be exact, it does the below today. But all virtio device
> emulation is 100% guest endianness unaware. This helper is the only
> piece of code where it gets any idea what endianness the guest has. So
> by checking for references to it in the code you know where endianness
> is an issue. And that's only in the config space.

 Only config space? How do you deal with virtio ring descriptors, for
 example?
>>>
>>> They also use guest endianness, but do not use virtio_is_big_endian()
>>> (yet?) so Alex missed them.
>>
>> Yeah, I thought as much. There is a whole bunch of things that need byte
>> swapping, both at the virtio level itself, and at the device level as well.
>>
>> Grep-ing for __u{16,32,64} through include/uapi/linux/virtio* shows the
>> extent of the disaster.
>
> Devices are fine in QEMU, it's only the "generic" parts (rings) that are
> missing AFAICT.

We also need to take care of the endianness of "device"-specific
descriptors in a VirtIO device.

For example, in the VirtIO Net device the guest sends a
"struct virtio_net_hdr" with each Tx packet, which describes the Tx
offloads needed for that packet and other info.
>
> Paolo
>
> ___
> kvmarm mailing list
> kvm...@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm

--
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 2/3] arm64: KVM: trap VM system registers until MMU and caches are ON

2014-01-20 Thread Anup Patel
On Fri, Jan 17, 2014 at 8:33 PM, Marc Zyngier  wrote:
> In order to be able to detect the point where the guest enables
> its MMU and caches, trap all the VM related system registers.
>
> Once we see the guest enabling both the MMU and the caches, we
> can go back to a saner mode of operation, which is to leave these
> registers in complete control of the guest.
>
> Signed-off-by: Marc Zyngier 
> Reviewed-by: Catalin Marinas 
> ---
>  arch/arm64/include/asm/kvm_arm.h |  3 ++-
>  arch/arm64/kvm/sys_regs.c| 58 
> 
>  2 files changed, 49 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h 
> b/arch/arm64/include/asm/kvm_arm.h
> index c98ef47..fd0a651 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -62,6 +62,7 @@
>   * RW: 64bit by default, can be overriden for 32bit VMs
>   * TAC:Trap ACTLR
>   * TSC:Trap SMC
> + * TVM:Trap VM ops (until M+C set in SCTLR_EL1)
>   * TSW:Trap cache operations by set/way
>   * TWE:Trap WFE
>   * TWI:Trap WFI
> @@ -74,7 +75,7 @@
>   * SWIO:   Turn set/way invalidates into set/way clean+invalidate
>   */
>  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
> -HCR_BSU_IS | HCR_FB | HCR_TAC | \
> +HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
>  HCR_AMO | HCR_IMO | HCR_FMO | \
>  HCR_SWIO | HCR_TIDCP | HCR_RW)
>  #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 02e9d09..5e92b9e 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -121,6 +121,42 @@ done:
>  }
>
>  /*
> + * Generic accessor for VM registers. Only called as long as HCR_TVM
> + * is set.
> + */
> +static bool access_vm_reg(struct kvm_vcpu *vcpu,
> + const struct sys_reg_params *p,
> + const struct sys_reg_desc *r)
> +{
> +   BUG_ON(!p->is_write);
> +
> +   vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
> +   return true;
> +}
> +
> +/*
> + * SCTLR_EL1 accessor. Only called as long as HCR_TVM is set.  If the
> + * guest enables the MMU, we stop trapping the VM sys_regs and leave
> + * it in complete control of the caches.
> + */
> +static bool access_sctlr_el1(struct kvm_vcpu *vcpu,
> +const struct sys_reg_params *p,
> +const struct sys_reg_desc *r)
> +{
> +   unsigned long val;
> +
> +   BUG_ON(!p->is_write);
> +
> +   val = *vcpu_reg(vcpu, p->Rt);
> +   vcpu_sys_reg(vcpu, r->reg) = val;
> +
> +   if ((val & (0b101)) == 0b101)   /* MMU+Caches enabled? */
> +   vcpu->arch.hcr_el2 &= ~HCR_TVM;
> +
> +   return true;
> +}
> +
> +/*
>   * We could trap ID_DFR0 and tell the guest we don't support performance
>   * monitoring.  Unfortunately the patch to make the kernel check ID_DFR0 was
>   * NAKed, so it will read the PMCR anyway.
> @@ -185,32 +221,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>   NULL, reset_mpidr, MPIDR_EL1 },
> /* SCTLR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b), Op2(0b000),
> - NULL, reset_val, SCTLR_EL1, 0x00C50078 },
> + access_sctlr_el1, reset_val, SCTLR_EL1, 0x00C50078 },

This patch in its current form breaks AArch32 VMs on the Foundation
v8 Model because, for the trapped accesses, we see Op0=0b11 for the
AArch64 VM registers but Op0=0b00 for the AArch32 VM registers.

Either it's a Foundation v8 Model bug, or we need to add more entries
to sys_reg_descs[] for the AArch32 VM registers with Op0=0b00.
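
Purely for illustration, such an extra entry could mirror the existing
SCTLR_EL1 one but with the Op0 value we observe for the trapped
AArch32 access (this is only a sketch of the idea, not a tested fix):

	/* SCTLR (AArch32 view), illustrative only */
	{ Op0(0b00), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
	  access_sctlr_el1, reset_val, SCTLR_EL1, 0x00C50078 },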

> /* CPACR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b), Op2(0b010),
>   NULL, reset_val, CPACR_EL1, 0 },
> /* TTBR0_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b), Op2(0b000),
> - NULL, reset_unknown, TTBR0_EL1 },
> + access_vm_reg, reset_unknown, TTBR0_EL1 },
> /* TTBR1_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b), Op2(0b001),
> - NULL, reset_unknown, TTBR1_EL1 },
> + access_vm_reg, reset_unknown, TTBR1_EL1 },
> /* TCR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b), Op2(0b010),
> - NULL, reset_val, TCR_EL1, 0 },
> + access_vm_reg, reset_val, TCR_EL1, 0 },
>
> /* AFSR0_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
> - NULL, reset_unknown, AFSR0_EL1 },
> + access_vm_reg, reset_unknown, AFSR0_EL1 },
> /* AFSR1_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
> - NULL, reset_unknown, AFSR1_EL1 },
> + access_vm_reg, reset_unknown, AFSR1_EL1 },
> /* ESR_EL1 */

Re: [RFC PATCH 2/3] arm64: KVM: trap VM system registers until MMU and caches are ON

2014-01-20 Thread Anup Patel
Hi Marc,

On Mon, Jan 20, 2014 at 7:11 PM, Marc Zyngier  wrote:
> Hi Anup,
>
> On 20/01/14 12:00, Anup Patel wrote:
>> On Fri, Jan 17, 2014 at 8:33 PM, Marc Zyngier  wrote:
>>> In order to be able to detect the point where the guest enables
>>> its MMU and caches, trap all the VM related system registers.
>>>
>>> Once we see the guest enabling both the MMU and the caches, we
>>> can go back to a saner mode of operation, which is to leave these
>>> registers in complete control of the guest.
>>>
>>> Signed-off-by: Marc Zyngier 
>>> Reviewed-by: Catalin Marinas 
>>> ---
>>>  arch/arm64/include/asm/kvm_arm.h |  3 ++-
>>>  arch/arm64/kvm/sys_regs.c| 58 
>>> 
>>>  2 files changed, 49 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_arm.h 
>>> b/arch/arm64/include/asm/kvm_arm.h
>>> index c98ef47..fd0a651 100644
>>> --- a/arch/arm64/include/asm/kvm_arm.h
>>> +++ b/arch/arm64/include/asm/kvm_arm.h
>>> @@ -62,6 +62,7 @@
>>>   * RW: 64bit by default, can be overriden for 32bit VMs
>>>   * TAC:Trap ACTLR
>>>   * TSC:Trap SMC
>>> + * TVM:Trap VM ops (until M+C set in SCTLR_EL1)
>>>   * TSW:Trap cache operations by set/way
>>>   * TWE:Trap WFE
>>>   * TWI:Trap WFI
>>> @@ -74,7 +75,7 @@
>>>   * SWIO:   Turn set/way invalidates into set/way clean+invalidate
>>>   */
>>>  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
>>> -HCR_BSU_IS | HCR_FB | HCR_TAC | \
>>> +HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
>>>  HCR_AMO | HCR_IMO | HCR_FMO | \
>>>  HCR_SWIO | HCR_TIDCP | HCR_RW)
>>>  #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index 02e9d09..5e92b9e 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -121,6 +121,42 @@ done:
>>>  }
>>>
>>>  /*
>>> + * Generic accessor for VM registers. Only called as long as HCR_TVM
>>> + * is set.
>>> + */
>>> +static bool access_vm_reg(struct kvm_vcpu *vcpu,
>>> + const struct sys_reg_params *p,
>>> + const struct sys_reg_desc *r)
>>> +{
>>> +   BUG_ON(!p->is_write);
>>> +
>>> +   vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
>>> +   return true;
>>> +}
>>> +
>>> +/*
>>> + * SCTLR_EL1 accessor. Only called as long as HCR_TVM is set.  If the
>>> + * guest enables the MMU, we stop trapping the VM sys_regs and leave
>>> + * it in complete control of the caches.
>>> + */
>>> +static bool access_sctlr_el1(struct kvm_vcpu *vcpu,
>>> +const struct sys_reg_params *p,
>>> +const struct sys_reg_desc *r)
>>> +{
>>> +   unsigned long val;
>>> +
>>> +   BUG_ON(!p->is_write);
>>> +
>>> +   val = *vcpu_reg(vcpu, p->Rt);
>>> +   vcpu_sys_reg(vcpu, r->reg) = val;
>>> +
>>> +   if ((val & (0b101)) == 0b101)   /* MMU+Caches enabled? */
>>> +   vcpu->arch.hcr_el2 &= ~HCR_TVM;
>>> +
>>> +   return true;
>>> +}
>>> +
>>> +/*
>>>   * We could trap ID_DFR0 and tell the guest we don't support performance
>>>   * monitoring.  Unfortunately the patch to make the kernel check ID_DFR0 
>>> was
>>>   * NAKed, so it will read the PMCR anyway.
>>> @@ -185,32 +221,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>>   NULL, reset_mpidr, MPIDR_EL1 },
>>> /* SCTLR_EL1 */
>>> { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b), Op2(0b000),
>>> - NULL, reset_val, SCTLR_EL1, 0x00C50078 },
>>> + access_sctlr_el1, reset_val, SCTLR_EL1, 0x00C50078 },
>>
>> This patch in its current form breaks Aarch32 VMs on Foundation v8 Model
>> because encoding for Aarch64 VM registers we get Op0=0b11 and for Aarch32
>> VM registers we get Op0=0b00 when trapped.
>>
>> Either its a Foundation v8 Model bug or we need to add more enteries in
>> sys_reg_desc[] for Aarch32 VM registers with Op0=0b00.
>
> That's a good point. But Op0 isn't defined for AArch32, the value is
> simply hardcoded in kvm_handle_cp15_32/kvm_handle_cp15_64, which is
> obviously horribly broken.
>
> I'll work on a fix for that, thanks noticing it.
>
> Does this series otherwise fix your L3 cache issue (assuming you stick
> to 64bit guests)?

I just started trying your patches today, first on the Foundation v8
Model. Next we will try them on X-Gene.

Pranav or I will soon provide more feedback on this.

>
> Cheers,
>
> M.
> --
> Jazz is not dead. It just smells funny...

Thanks,
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: KVM and variable-endianness guest CPUs

2014-01-21 Thread Anup Patel
On Wed, Jan 22, 2014 at 11:09 AM, Victor Kamensky
 wrote:
> Hi Guys,
>
> Christoffer and I had a bit heated chat :) on this
> subject last night. Christoffer, really appreciate
> your time! We did not really reach agreement
> during the chat and Christoffer asked me to follow
> up on this thread.
> Here it goes. Sorry, it is very long email.
>
> I don't believe we can assign any endianity to
> mmio.data[] byte array. I believe mmio.data[] and
> mmio.len acts just memcpy and that is all. As
> memcpy does not imply any endianity of underlying
> data mmio.data[] should not either.
>
> Here is my definition:
>
> mmio.data[] is array of bytes that contains memory
> bytes in such form, for read case, that if those
> bytes are placed in guest memory and guest executes
> the same read access instruction with address to this
> memory, result would be the same as real h/w device
> memory access. Rest of KVM host and hypervisor
> part of code should really take care of mmio.data[]
> memory so it will be delivered to vcpu registers and
> restored by hypervisor part in such way that guest CPU
> register value is the same as it would be for real
> non-emulated h/w read access (that is emulation part).
> The same goes for write access, if guest writes into
> memory and those bytes are just copied to emulated
> h/w register it would have the same effect as real
> mapped h/w register write.
>
> In shorter form, i.e for len=4 access: endianity of integer
> at &mmio.data[0] address should match endianity
> of emulated h/w device behind phys_addr address,
> regardless what is endianity of emulator, KVM host,
> hypervisor, and guest
>
> Examples that illustrate my definition
> --
>
> 1) LE guest (E bit is off in ARM speak) reads integer
> (4 bytes) from mapped h/w LE device register -
> mmio.data[3] contains MSB, mmio.data[0] contains LSB.
>
> 2) BE guest (E bit is on in ARM speak) reads integer
> from mapped h/w LE device register - mmio.data[3]
> contains MSB, mmio.data[0] contains LSB. Note that
> if &mmio.data[0] memory would be placed in guest
> address space and instruction restarted with new
> address, then it would meet BE guest expectations
> - the guest knows that it reads LE h/w so it will byteswap
> register before processing it further. This is BE guest ARM
> case (regardless of what KVM host endianity is).
>
> 3) BE guest reads integer from mapped h/w BE device
> register - mmio.data[0] contains MSB, mmio.data[3]
> contains LSB. Note that if &mmio.data[0] memory would
> be placed in guest address space and instruction
> restarted with new address, then it would meet BE
> guest expectation - the guest knows that it reads
> BE h/w so it will proceed further without any other
> work. I guess, it is BE ppc case.
>
>
> Arguments in favor of memcpy semantics of mmio.data[]
> --
>
> x) What are possible values of 'len'? Previous discussions
> imply that is always powers of 2. Why is that? Maybe
> there will be CPU that would need to do 5 bytes mmio
> access, or 6 bytes. How do you assign endianity to
> such case? 'len' 5 or 6, or any works fine with
> memcpy semantics. I admit it is hypothetical case, but
> IMHO it tests how clean ABI definition is.
>
> x) Byte array does not have endianity because it
> does not have any structure. If one would want to
> imply structure why mmio is not defined in such way
> so structure reflected in mmio definition?
> Something like:
>
>
> /* KVM_EXIT_MMIO */
> struct {
>   __u64 phys_addr;
>   union {
>__u8 byte;
>__u16 hword;
>__u32 word;
>__u64 dword;
>   }  data;
>   __u32 len;
>   __u8  is_write;
> } mmio;
>
> where len is really serves as union discriminator and
> only allowed len values are 1, 2, 4, 8.
> In this case, I agree, endianity of integer types
> should be defined. I believe, use of byte array strongly
> implies that original intent was to have semantics of
> byte stream copy, just like memcpy does.
>
> x) Note there is nothing wrong with user kernel ABI to
> use just bytes stream as parameter. There is already
> precedents like 'read' and 'write' system calls :).
>
> x) Consider case when KVM works with emulated memory mapped
> h/w devices where some devices operate in LE mode and others
> operate in BE mode. It is defined by semantics of real h/w
> device which is it, and should be emulated by emulator and KVM
> given all other context. As far as mmio.data[] array concerned, if the
> same integer value is read from these devices registers, mmio.data[]
> memory should contain integer in opposite endianity for these
> two cases, i.e MSB is data[0] in one case and MSB is
> data[3] is in another 

Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-12-29 Thread Anup Patel
(dropping previous conversation for easy reading)

Hi Marc/Christoffer,

I tried implementing the PMU context switch via C code in EL1 mode,
in atomic context with irqs disabled. The context switch itself works
perfectly fine, but irq forwarding is not clean for the PMU irq.

I found another issue: the GIC only samples irq lines if they are
enabled. This means that for using irq forwarding we will need to
ensure that the host PMU irq is enabled. The arch_timer code does
this by doing request_irq() for the host virtual timer interrupt.
For the PMU, we can either enable/disable the host PMU irq in the
context switch, or we need to have a shared irq handler between the
KVM PMU and the host kernel PMU.

I have rethought our discussion so far. I understand that we need
KVM PMU virtualization to meet the following criteria:
1. No modification in the host PMU driver
2. No modification in the guest PMU driver
3. No mask/unmask dance for sharing the host PMU irq
4. A clean way to avoid infinite VM exits due to the PMU interrupt

I have come up with a new approach, which is as follows:
1. Context switch the PMU in atomic context (i.e. local_irq_disable()).
2. Ensure that the host PMU irq is disabled when entering guest mode,
and re-enable it when exiting guest mode if it was enabled previously.
This is to avoid infinite VM exits due to the PMU interrupt, because
as per the new approach we don't mask the PMU irq via the
PMINTENSET_EL1 register.
3. Inject the virtual PMU irq at the time of entering guest mode if
the PMU overflow register (i.e. PMOVSSET_EL0) is non-zero, in atomic
context (i.e. local_irq_disable()).

The only limitation of this new approach is that the virtual PMU irq
is injected at the time of entering guest mode. This means the guest
will receive the virtual PMU interrupt with a small delay after the
actual interrupt occurred. PMU interrupts are only overflow events
and are generally not used in timing-critical applications. If we can
live with this limitation, then this can be a good approach for KVM
PMU virtualization.
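
To make the ordering concrete, here is a rough sketch of the run
loop I have in mind (all of the kvm_pmu_*() helpers and the pmu_irq
field are hypothetical names used only to illustrate steps 1-3 above;
this is not existing code):

	local_irq_disable();

	/* step 2: keep the host PMU irq disabled while in guest mode */
	kvm_pmu_host_irq_disable(vcpu);

	/* step 1: load the guest PMU context in atomic context */
	kvm_pmu_load_guest_context(vcpu);

	/* step 3: inject the virtual PMU irq if an overflow is pending */
	if (kvm_pmu_overflow_pending(vcpu))	/* i.e. PMOVSSET_EL0 != 0 */
		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
				    vcpu->arch.pmu_irq, 1);

	ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);

	/* back to the host: restore host PMU state and its irq */
	kvm_pmu_load_host_context(vcpu);
	kvm_pmu_host_irq_enable(vcpu);	/* only if it was enabled before */

	local_irq_enable();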

Regards,
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2015-01-07 Thread Anup Patel
On Tue, Dec 30, 2014 at 11:19 AM, Anup Patel  wrote:
> (dropping previous conversation for easy reading)
>
> Hi Marc/Christoffer,
>
> I tried implementing PMU context-switch via C code
> in EL1 mode and in atomic context with irqs disabled.
> The context switch itself works perfectly fine but
> irq forwarding is not clean for PMU irq.
>
> I found another issue that is GIC only samples irq
> lines if they are enabled. This means for using
> irq forwarding we will need to ensure that host PMU
> irq is enabled.  The arch_timer code does this by
> doing request_irq() for host virtual timer interrupt.
> For PMU, we can either enable/disable host PMU
> irq in context switch or we need to do have shared
> irq handler between kvm pmu and host kernel pmu.
>
> I have rethinked about our discussion so far. I
> understand that we need KVM PMU virtualization
> to meet following criteria:
> 1. No modification in host PMU driver
> 2. No modification in guest PMU driver
> 3. No mask/unmask dance for sharing host PMU irq
> 4. Clean way to avoid infinite VM exits due to
> PMU interrupt
>
> I have discovered new approach which is as follows:
> 1. Context switch PMU in atomic context (i.e. local_irq_disable())
> 2. Ensure that host PMU irq is disabled when entering guest
> mode and re-enable host PMU irq when exiting guest mode if
> it was enabled previously. This is to avoid infinite VM exits
> due to PMU interrupt because as-per new approach we
> don't mask the PMU irq via PMINTENSET_EL1 register.
> 3. Inject virtual PMU irq at time of entering guest mode if PMU
> overflow register is non-zero (i.e. PMOVSSET_EL0) in atomic
> context (i.e. local_irq_disable()).
>
> The only limitation of this new approach is that virtual PMU irq
> is injected at time of entering guest mode. This means guest
> will receive virtual PMU  interrupt with little delay after actual
> interrupt occurred. The PMU interrupts are only overflow events
> and generally not used in any timing critical applications. If we
> can live with this limitation then this can be a good approach
> for KVM PMU virtualization.
>
> Regards,
> Anup

Hi Marc/Christoffer,

Ping??

Regards,
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2015-01-11 Thread Anup Patel
On Mon, Jan 12, 2015 at 12:41 AM, Christoffer Dall
 wrote:
> On Tue, Dec 30, 2014 at 11:19:13AM +0530, Anup Patel wrote:
>> (dropping previous conversation for easy reading)
>>
>> Hi Marc/Christoffer,
>>
>> I tried implementing PMU context-switch via C code
>> in EL1 mode and in atomic context with irqs disabled.
>> The context switch itself works perfectly fine but
>> irq forwarding is not clean for PMU irq.
>>
>> I found another issue that is GIC only samples irq
>> lines if they are enabled. This means for using
>> irq forwarding we will need to ensure that host PMU
>> irq is enabled.  The arch_timer code does this by
>> doing request_irq() for host virtual timer interrupt.
>> For PMU, we can either enable/disable host PMU
>> irq in context switch or we need to do have shared
>> irq handler between kvm pmu and host kernel pmu.
>
> could we simply require the host PMU driver to request the IRQ and have
> the driver inject the corresponding IRQ to the VM via a mechanism
> similar to VFIO using an eventfd and irqfds etc.?

Currently, the host PMU driver does request_irq() only when there is
some event to be monitored. This means the host will do request_irq()
only when we run a perf application in host user space.

Initially, I thought that we could simply pass IRQF_SHARED to
request_irq() in the host PMU driver and do the same for request_irq()
in the KVM PMU code, but the PMU irq can be an SPI or a PPI. If the
PMU irq is an SPI then the IRQF_SHARED flag would be fine, but if it
is a PPI then we have no way to set the IRQF_SHARED flag, because
request_percpu_irq() does not have an irq flags parameter.
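
For reference, the two request paths in question look roughly like
this (prototypes quoted from memory as a sketch of the API shape):

	int request_irq(unsigned int irq, irq_handler_t handler,
			unsigned long flags, const char *name, void *dev);

	int request_percpu_irq(unsigned int irq, irq_handler_t handler,
			       const char *devname,
			       void __percpu *percpu_dev_id);

So an SPI could be shared via the flags argument, but a PPI
registered with request_percpu_irq() cannot.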

>
> (I haven't quite thought through if there's a way for the host PMU
> driver to distinguish between an IRQ for itself and one for the guest,
> though).
>
> It does feel like we will need some sort of communication/coordination
> between the host PMU driver and KVM...
>
>>
>> I have rethinked about our discussion so far. I
>> understand that we need KVM PMU virtualization
>> to meet following criteria:
>> 1. No modification in host PMU driver
>
> is this really a strict requirement?  one of the advantages of KVM
> should be that the rest of the kernel should be supportive of KVM.

I guess so, because the host PMU driver should not do things
differently for host and guest. I think this is the reason why we
discarded the mask/unmask PMU irq approach that I had implemented in
RFC v1.

>
>> 2. No modification in guest PMU driver
>> 3. No mask/unmask dance for sharing host PMU irq
>> 4. Clean way to avoid infinite VM exits due to
>> PMU interrupt
>>
>> I have discovered new approach which is as follows:
>> 1. Context switch PMU in atomic context (i.e. local_irq_disable())
>> 2. Ensure that host PMU irq is disabled when entering guest
>> mode and re-enable host PMU irq when exiting guest mode if
>> it was enabled previously.
>
> How does this look like software-engineering wise?  Would you be looking
> up the IRQ number from the DT in the KVM code again?  How does KVM then
> synchronize with the host PMU driver so they're not both requesting the
> same IRQ at the same time?

We only look up the host PMU irq numbers from the DT at HYP init time.

During the context switch we know the host PMU irq number for the
current host CPU, so we can get the state of the host PMU irq in the
context switch code.

If we go by the shared irq handler approach, then both KVM and the
host PMU driver will do request_irq() on the same host PMU irq. In
other words, there is no virtual PMU irq provided by the HW for the
guest.

>
>> This is to avoid infinite VM exits
>> due to PMU interrupt because as-per new approach we
>> don't mask the PMU irq via PMINTENSET_EL1 register.
>> 3. Inject virtual PMU irq at time of entering guest mode if PMU
>> overflow register is non-zero (i.e. PMOVSSET_EL0) in atomic
>> context (i.e. local_irq_disable()).
>>
>> The only limitation of this new approach is that virtual PMU irq
>> is injected at time of entering guest mode. This means guest
>> will receive virtual PMU  interrupt with little delay after actual
>> interrupt occurred.
>
> it may never receive it in the case of a tickless configuration AFAICT,
> so this doesn't sound like the right approach.

I think that irrespective of which approach we take, we need a mechanism
to have shared irq handlers in the KVM PMU and the host PMU driver for
both PPI and SPI.

>
>> The PMU interrupts are only overflow events
>> and generally not used in any timing critical applications. If we
>> can live with this limitation then this can be a good approach
>> for KVM PMU virtualization.
>>
> Thanks,
> -Christoffer

Regards,
Anup
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2015-01-13 Thread Anup Patel
On Mon, Jan 12, 2015 at 12:41 AM, Christoffer Dall
 wrote:
> On Tue, Dec 30, 2014 at 11:19:13AM +0530, Anup Patel wrote:
>> (dropping previous conversation for easy reading)
>>
>> Hi Marc/Christoffer,
>>
>> I tried implementing PMU context-switch via C code
>> in EL1 mode and in atomic context with irqs disabled.
>> The context switch itself works perfectly fine but
>> irq forwarding is not clean for PMU irq.
>>
>> I found another issue that is GIC only samples irq
>> lines if they are enabled. This means for using
>> irq forwarding we will need to ensure that host PMU
>> irq is enabled.  The arch_timer code does this by
>> doing request_irq() for host virtual timer interrupt.
>> For PMU, we can either enable/disable host PMU
>> irq in context switch or we need to do have shared
>> irq handler between kvm pmu and host kernel pmu.
>
> could we simply require the host PMU driver to request the IRQ and have
> the driver inject the corresponding IRQ to the VM via a mechanism
> similar to VFIO using an eventfd and irqfds etc.?
>
> (I haven't quite thought through if there's a way for the host PMU
> driver to distinguish between an IRQ for itself and one for the guest,
> though).
>
> It does feel like we will need some sort of communication/coordination
> between the host PMU driver and KVM...
>
>>
>> I have rethinked about our discussion so far. I
>> understand that we need KVM PMU virtualization
>> to meet following criteria:
>> 1. No modification in host PMU driver
>
> is this really a strict requirement?  one of the advantages of KVM
> should be that the rest of the kernel should be supportive of KVM.
>
>> 2. No modification in guest PMU driver
>> 3. No mask/unmask dance for sharing host PMU irq
>> 4. Clean way to avoid infinite VM exits due to
>> PMU interrupt
>>
>> I have discovered new approach which is as follows:
>> 1. Context switch PMU in atomic context (i.e. local_irq_disable())
>> 2. Ensure that host PMU irq is disabled when entering guest
>> mode and re-enable host PMU irq when exiting guest mode if
>> it was enabled previously.
>
> How does this look like software-engineering wise?  Would you be looking
> up the IRQ number from the DT in the KVM code again?  How does KVM then
> synchronize with the host PMU driver so they're not both requesting the
> same IRQ at the same time?
>
>> This is to avoid infinite VM exits
>> due to PMU interrupt because as-per new approach we
>> don't mask the PMU irq via PMINTENSET_EL1 register.
>> 3. Inject virtual PMU irq at time of entering guest mode if PMU
>> overflow register is non-zero (i.e. PMOVSSET_EL0) in atomic
>> context (i.e. local_irq_disable()).
>>
>> The only limitation of this new approach is that virtual PMU irq
>> is injected at time of entering guest mode. This means guest
>> will receive virtual PMU  interrupt with little delay after actual
>> interrupt occurred.
>
> it may never receive it in the case of a tickless configuration AFAICT,
> so this doesn't sound like the right approach.

The PMU interrupts are not similar to arch_timer interrupts. In fact,
they are overflow interrupts on event counters. The PMU events
of a Guest VCPU are only counted when that Guest VCPU is running.
If the Guest VCPU is scheduled out or we are in Host mode, then
the PMU events are counted for the Host or for whichever other Guest is
running currently.

In my view, this does not break a tickless guest.

Also, the above fact applies irrespective of the approach we take
for PMU virtualization.

Regards,
Anup

>
>> The PMU interrupts are only overflow events
>> and generally not used in any timing critical applications. If we
>> can live with this limitation then this can be a good approach
>> for KVM PMU virtualization.
>>
> Thanks,
> -Christoffer
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: vexpress: Framebuffer broken with KVM enabled

2015-02-16 Thread Anup Patel
On Mon, Feb 16, 2015 at 2:43 PM, Jan Kiszka  wrote:
> Hi,
>
> next issue related to KVM/QEMU on the TK1: The guest image I'm running
> gives proper framebuffer output when in emulation mode. Once KVM is
> enabled, the screen is - at best - only initially updated. Sometimes I
> see the famous tux images and a bit of the console texts, but usually it
> stays black. Explanations?

QEMU accesses Guest Video RAM (or any portion of Guest RAM) as
cacheable user space memory. The Guest Kernel might access Guest Video
RAM as non-cacheable to maintain coherency with the video device. If this is
the case then all updates by the Guest kernel to Guest Video RAM will not
be visible to QEMU.
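
A minimal guest-side illustration of the mismatch (not from any real driver;
QEMU maps the same guest pages as ordinary cacheable memory via mmap()):

#include <linux/io.h>

/* A simple guest framebuffer driver typically maps its VRAM with
 * non-cacheable (Device or Normal-NC/WC) attributes, so guest stores
 * bypass the guest D-cache, while QEMU reads the same pages through its
 * own cacheable mapping and can therefore observe stale contents. */
static void __iomem *example_map_vram(phys_addr_t base, size_t size)
{
        return ioremap_wc(base, size);
}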

--
Anup

>
> Jan
> ___
> kvmarm mailing list
> kvm...@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2015-02-16 Thread Anup Patel
Hi Christoffer,

On Sun, Feb 15, 2015 at 9:03 PM, Christoffer Dall
 wrote:
> Hi Anup,
>
> On Mon, Jan 12, 2015 at 09:49:13AM +0530, Anup Patel wrote:
>> On Mon, Jan 12, 2015 at 12:41 AM, Christoffer Dall
>>  wrote:
>> > On Tue, Dec 30, 2014 at 11:19:13AM +0530, Anup Patel wrote:
>> >> (dropping previous conversation for easy reading)
>> >>
>> >> Hi Marc/Christoffer,
>> >>
>> >> I tried implementing PMU context-switch via C code
>> >> in EL1 mode and in atomic context with irqs disabled.
>> >> The context switch itself works perfectly fine but
>> >> irq forwarding is not clean for PMU irq.
>> >>
>> >> I found another issue that is GIC only samples irq
>> >> lines if they are enabled. This means for using
>> >> irq forwarding we will need to ensure that host PMU
>> >> irq is enabled.  The arch_timer code does this by
>> >> doing request_irq() for host virtual timer interrupt.
>> >> For PMU, we can either enable/disable host PMU
>> >> irq in context switch or we need to do have shared
>> >> irq handler between kvm pmu and host kernel pmu.
>> >
>> > could we simply require the host PMU driver to request the IRQ and have
>> > the driver inject the corresponding IRQ to the VM via a mechanism
>> > similar to VFIO using an eventfd and irqfds etc.?
>>
>> Currently, the host PMU driver does request_irq() only when
>> there is some event to be monitored. This means host will do
>> request_irq() only when we run perf application on host
>> user space.
>>
>> Initially, I though that we could simply pass IRQF_SHARED
>> for request_irq() in host PMU driver and do the same for
>> reqest_irq() in KVM PMU code but the PMU irq can be
>> SPI or PPI. If the PMU irq is SPI then IRQF_SHARED
>> flag would fine but if its PPI then we have no way to
>> set IRQF_SHARED flag because request_percpu_irq()
>> does not have irq flags parameter.
>>
>> >
>> > (I haven't quite thought through if there's a way for the host PMU
>> > driver to distinguish between an IRQ for itself and one for the guest,
>> > though).
>> >
>> > It does feel like we will need some sort of communication/coordination
>> > between the host PMU driver and KVM...
>> >
>> >>
>> >> I have rethinked about our discussion so far. I
>> >> understand that we need KVM PMU virtualization
>> >> to meet following criteria:
>> >> 1. No modification in host PMU driver
>> >
>> > is this really a strict requirement?  one of the advantages of KVM
>> > should be that the rest of the kernel should be supportive of KVM.
>>
>> I guess so because host PMU driver should not do things
>> differently for host and guest. I think this the reason why
>> we discarded the mask/unmask PMU irq approach which
>> I had implemented in RFC v1.
>>
>> >
>> >> 2. No modification in guest PMU driver
>> >> 3. No mask/unmask dance for sharing host PMU irq
>> >> 4. Clean way to avoid infinite VM exits due to
>> >> PMU interrupt
>> >>
>> >> I have discovered new approach which is as follows:
>> >> 1. Context switch PMU in atomic context (i.e. local_irq_disable())
>> >> 2. Ensure that host PMU irq is disabled when entering guest
>> >> mode and re-enable host PMU irq when exiting guest mode if
>> >> it was enabled previously.
>> >
>> > How does this look like software-engineering wise?  Would you be looking
>> > up the IRQ number from the DT in the KVM code again?  How does KVM then
>> > synchronize with the host PMU driver so they're not both requesting the
>> > same IRQ at the same time?
>>
>> We only lookup host PMU irq numbers from DT at HYP init time.
>>
>> During context switch we know the host PMU irq number for
>> current host CPU so we can get state of host PMU irq in
>> context switch code.
>>
>> If we go by the shard irq handler approach then both KVM
>> and host PMU driver will do request_irq() on same host
>> PMU irq. In other words, there is no virtual PMU irq provided
>> by HW for guest.
>>
>
> Sorry for the *really* long delay in this response.
>
> We had a chat about this subject with Will Deacon and Marc Zyngier
> during connect, and basically we came to think of a number of problems
> with the current approach:
>
> 1. As you pointed out, there is a need for a shared IRQ h

Re: [Qemu-ppc] KVM and variable-endianness guest CPUs

2014-01-22 Thread Anup Patel
Hi Alex,

On Wed, Jan 22, 2014 at 12:11 PM, Alexander Graf  wrote:
>
>
>> Am 22.01.2014 um 07:31 schrieb Anup Patel :
>>
>> On Wed, Jan 22, 2014 at 11:09 AM, Victor Kamensky
>>  wrote:
>>> Hi Guys,
>>>
>>> Christoffer and I had a bit heated chat :) on this
>>> subject last night. Christoffer, really appreciate
>>> your time! We did not really reach agreement
>>> during the chat and Christoffer asked me to follow
>>> up on this thread.
>>> Here it goes. Sorry, it is very long email.
>>>
>>> I don't believe we can assign any endianity to
>>> mmio.data[] byte array. I believe mmio.data[] and
>>> mmio.len acts just memcpy and that is all. As
>>> memcpy does not imply any endianity of underlying
>>> data mmio.data[] should not either.
>>>
>>> Here is my definition:
>>>
>>> mmio.data[] is array of bytes that contains memory
>>> bytes in such form, for read case, that if those
>>> bytes are placed in guest memory and guest executes
>>> the same read access instruction with address to this
>>> memory, result would be the same as real h/w device
>>> memory access. Rest of KVM host and hypervisor
>>> part of code should really take care of mmio.data[]
>>> memory so it will be delivered to vcpu registers and
>>> restored by hypervisor part in such way that guest CPU
>>> register value is the same as it would be for real
>>> non-emulated h/w read access (that is emulation part).
>>> The same goes for write access, if guest writes into
>>> memory and those bytes are just copied to emulated
>>> h/w register it would have the same effect as real
>>> mapped h/w register write.
>>>
>>> In shorter form, i.e for len=4 access: endianity of integer
>>> at &mmio.data[0] address should match endianity
>>> of emulated h/w device behind phys_addr address,
>>> regardless what is endianity of emulator, KVM host,
>>> hypervisor, and guest
>>>
>>> Examples that illustrate my definition
>>> --
>>>
>>> 1) LE guest (E bit is off in ARM speak) reads integer
>>> (4 bytes) from mapped h/w LE device register -
>>> mmio.data[3] contains MSB, mmio.data[0] contains LSB.
>>>
>>> 2) BE guest (E bit is on in ARM speak) reads integer
>>> from mapped h/w LE device register - mmio.data[3]
>>> contains MSB, mmio.data[0] contains LSB. Note that
>>> if &mmio.data[0] memory would be placed in guest
>>> address space and instruction restarted with new
>>> address, then it would meet BE guest expectations
>>> - the guest knows that it reads LE h/w so it will byteswap
>>> register before processing it further. This is BE guest ARM
>>> case (regardless of what KVM host endianity is).
>>>
>>> 3) BE guest reads integer from mapped h/w BE device
>>> register - mmio.data[0] contains MSB, mmio.data[3]
>>> contains LSB. Note that if &mmio.data[0] memory would
>>> be placed in guest address space and instruction
>>> restarted with new address, then it would meet BE
>>> guest expectation - the guest knows that it reads
>>> BE h/w so it will proceed further without any other
>>> work. I guess, it is BE ppc case.
>>>
>>>
>>> Arguments in favor of memcpy semantics of mmio.data[]
>>> --
>>>
>>> x) What are possible values of 'len'? Previous discussions
>>> imply that is always powers of 2. Why is that? Maybe
>>> there will be CPU that would need to do 5 bytes mmio
>>> access, or 6 bytes. How do you assign endianity to
>>> such case? 'len' 5 or 6, or any works fine with
>>> memcpy semantics. I admit it is hypothetical case, but
>>> IMHO it tests how clean ABI definition is.
>>>
>>> x) Byte array does not have endianity because it
>>> does not have any structure. If one would want to
>>> imply structure why mmio is not defined in such way
>>> so structure reflected in mmio definition?
>>> Something like:
>>>
>>>
>>>/* KVM_EXIT_MMIO */
>>>struct {
>>>  __u64 phys_addr;
>>>  union {
>>>   __u8 byte;
>>>   __u16 hword;
>>>   __u32 word;
>>> 

Re: [GIT PULL] KVM/ARM for 3.15

2014-03-05 Thread Anup Patel
On Wed, Mar 5, 2014 at 10:55 AM, Ming Lei  wrote:
> On Wed, Mar 5, 2014 at 1:23 PM, Ming Lei  wrote:
>> On Tue, Mar 4, 2014 at 10:27 AM, Marc Zyngier 
>>>
>>> Marc Zyngier (12):
>>>   arm64: KVM: force cache clean on page fault when caches are off
>>>   arm64: KVM: allows discrimination of AArch32 sysreg access
>>>   arm64: KVM: trap VM system registers until MMU and caches are ON
>>>   ARM: KVM: introduce kvm_p*d_addr_end
>>>   arm64: KVM: flush VM pages before letting the guest enable caches
>>
>> I tested the first 5 patches on APM arm64 board, and only after
>> applying the 5 patches, qemu can boot kernel successfully, otherwise
>> kernel can't be booted from qemu.
>
> For the first 5 patches, please feel free to add:

These patches are required for using KVM in the presence of the APM L3 cache.

Usually, APM U-Boot enables the L3 cache by default, hence KVM does not
work for you without these patches.

To have KVM working without these patches you will need to explicitly
disable the L3 cache from APM U-Boot before starting the Linux kernel.

Regards,
Anup

>
>  Tested-by: Ming Lei 
>
>
> Thanks,
> --
> Ming Lei
> ___
> kvmarm mailing list
> kvm...@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 00/12] In-kernel PSCI v0.2 emulation for KVM ARM/ARM64

2014-04-21 Thread Anup Patel
Currently, KVM ARM/ARM64 only provides in-kernel emulation of Power State
and Coordination Interface (PSCI) v0.1.

This patchset aims at providing newer PSCI v0.2 for KVM ARM/ARM64 VCPUs
such that it does not break current KVM ARM/ARM64 ABI.

The user space tools (i.e. QEMU or KVMTOOL) will have to explicitly enable
KVM_ARM_VCPU_PSCI_0_2 feature using KVM_ARM_VCPU_INIT ioctl for providing
PSCI v0.2 to VCPUs.
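
A rough user space sketch (error handling omitted; illustrative only, not
code from QEMU or KVMTOOL) of how the feature would be requested:

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <string.h>

static int example_vcpu_init_psci_0_2(int kvm_fd, int vm_fd, int vcpu_fd)
{
        struct kvm_vcpu_init init;

        memset(&init, 0, sizeof(init));
        ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);

        /* Only ask for PSCI v0.2 if this kernel advertises it (patch 12). */
        if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PSCI_0_2) > 0)
                init.features[0] |= 1 << KVM_ARM_VCPU_PSCI_0_2;

        return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}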

Changelog:

V10:
 - Updated PSCI_VERSION_xxx defines in uapi/linux/psci.h
 - Added PSCI_0_2_AFFINITY_LEVEL_ defines in uapi/linux/psci.h
 - Removed PSCI v0.1 related defines from uapi/linux/psci.h
 - Inject undefined exception for all types of errors in PSCI
   emulation (i.e kvm_psci_call(vcpu) < 0)
 - Removed "inline" attribute of kvm_prepare_system_event()
 - Store INTERNAL_FAILURE in r0 (or x0) before exiting to userspace
 - Use MPIDR_LEVEL_BITS in AFFINITY_MASK define
 - Updated comment in kvm_psci_vcpu_suspend() as-per Marc's suggestion

V9:
 - Rename undefined PSCI_VER_xxx defines to PSCI_VERSION_xxx defines

V8:
 - Add #define for possible values of migrate type in uapi/linux/psci.h
 - Simplified psci_affinity_mask() in psci.c
 - Update comments in kvm_psci_vcpu_suspend() to indicate that for KVM
   wakeup events are interrupts.
 - Unconditionally update r0 (or x0) in kvm_psci_vcpu_on()

V7:
 - Make uapi/linux/psci.h in line with Ashwin's patch
   http://www.spinics.net/lists/arm-kernel/msg319090.html
 - Incorporate Rob's suggestions for uapi/linux/psci.h
 - Treat CPU_SUSPEND power-down request to be same as standby
   request. This further simplifies CPU_SUSPEND emulation.

V6:
 - Introduce uapi/linux/psci.h for sharing PSCI defines between
   ARM kernel, ARM64 kernel, KVM ARM/ARM64 and user space
 - Make CPU_SUSPEND emulation similar to WFI emulation

V5:
 - Have separate last patch to advertise KVM_CAP_ARM_PSCI_0_2
 - Use kvm_psci_version() in kvm_psci_vcpu_on()
 - Return ALREADY_ON for PSCI v0.2 CPU_ON if VCPU is not paused
 - Remove per-VCPU suspend context
 - As-per PSCI v0.2 spec, only current CPU can suspend itself

V4:
 - Implement all mandatory functions required by PSCI v0.2

V3:
 - Make KVM_ARM_VCPU_PSCI_0_2 feature experimental for now so that
   it fails for user space till all mandatory PSCI v0.2 functions are
   emulated by KVM ARM/ARM64
 - Have separate patch for making KVM_ARM_VCPU_PSCI_0_2 feature available
   to user space. This patch can be deferred for now

V2:
 - Don't rename PSCI return values KVM_PSCI_RET_NI and KVM_PSCI_RET_INVAL
 - Added kvm_psci_version() to get PSCI version available to VCPU
 - Fixed grammar in Documentation/virtual/kvm/api.txt

V1:
 - Initial RFC PATCH

Anup Patel (12):
  KVM: Add capability to advertise PSCI v0.2 support
  ARM/ARM64: KVM: Add common header for PSCI related defines
  ARM/ARM64: KVM: Add base for PSCI v0.2 emulation
  KVM: Documentation: Add info regarding KVM_ARM_VCPU_PSCI_0_2 feature
  ARM/ARM64: KVM: Make kvm_psci_call() return convention more flexible
  KVM: Add KVM_EXIT_SYSTEM_EVENT to user space API header
  ARM/ARM64: KVM: Emulate PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET
  ARM/ARM64: KVM: Emulate PSCI v0.2 AFFINITY_INFO
  ARM/ARM64: KVM: Emulate PSCI v0.2 MIGRATE_INFO_TYPE and related
functions
  ARM/ARM64: KVM: Fix CPU_ON emulation for PSCI v0.2
  ARM/ARM64: KVM: Emulate PSCI v0.2 CPU_SUSPEND
  ARM/ARM64: KVM: Advertise KVM_CAP_ARM_PSCI_0_2 to user space

 Documentation/virtual/kvm/api.txt |   17 +++
 arch/arm/include/asm/kvm_host.h   |2 +-
 arch/arm/include/asm/kvm_psci.h   |6 +-
 arch/arm/include/uapi/asm/kvm.h   |   10 +-
 arch/arm/kvm/arm.c|1 +
 arch/arm/kvm/handle_exit.c|   10 +-
 arch/arm/kvm/psci.c   |  221 +
 arch/arm64/include/asm/kvm_host.h |2 +-
 arch/arm64/include/asm/kvm_psci.h |6 +-
 arch/arm64/include/uapi/asm/kvm.h |   10 +-
 arch/arm64/kvm/handle_exit.c  |   10 +-
 include/uapi/linux/Kbuild |1 +
 include/uapi/linux/kvm.h  |9 ++
 include/uapi/linux/psci.h |   85 ++
 14 files changed, 353 insertions(+), 37 deletions(-)
 create mode 100644 include/uapi/linux/psci.h

-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 01/12] KVM: Add capability to advertise PSCI v0.2 support

2014-04-21 Thread Anup Patel
User space (i.e. QEMU or KVMTOOL) should be able to check whether KVM
ARM/ARM64 supports in-kernel PSCI v0.2 emulation. For this purpose, we
define KVM_CAP_ARM_PSCI_0_2 in KVM user space interface header.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Acked-by: Christoffer Dall 
Acked-by: Marc Zyngier 
---
 include/uapi/linux/kvm.h |1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index a8f4ee5..01c5624 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -743,6 +743,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_IOAPIC_POLARITY_IGNORED 97
 #define KVM_CAP_ENABLE_CAP_VM 98
 #define KVM_CAP_S390_IRQCHIP 99
+#define KVM_CAP_ARM_PSCI_0_2 100
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 02/12] ARM/ARM64: KVM: Add common header for PSCI related defines

2014-04-21 Thread Anup Patel
We need a common place to share PSCI related defines among ARM kernel,
ARM64 kernel, KVM ARM/ARM64 PSCI emulation, and user space.

We introduce uapi/linux/psci.h for this purpose. This newly added
header will be first used by KVM ARM/ARM64 in-kernel PSCI emulation
and user space (i.e. QEMU or KVMTOOL).
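
For example (an illustrative decoding only, not part of the patch), the new
defines compose as follows:

  PSCI_0_2_FN(3)   = 0x84000000 + 3               = 0x84000003  (32-bit CPU_ON)
  PSCI_0_2_FN64(3) = 0x84000000 + 0x40000000 + 3  = 0xC4000003  (64-bit CPU_ON)

  PSCI_VERSION_MAJOR(2) = 0 and PSCI_VERSION_MINOR(2) = 2, i.e. PSCI v0.2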

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Ashwin Chaugule 
---
 include/uapi/linux/Kbuild |1 +
 include/uapi/linux/psci.h |   85 +
 2 files changed, 86 insertions(+)
 create mode 100644 include/uapi/linux/psci.h

diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 6929571..24e9033 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -317,6 +317,7 @@ header-y += ppp-ioctl.h
 header-y += ppp_defs.h
 header-y += pps.h
 header-y += prctl.h
+header-y += psci.h
 header-y += ptp_clock.h
 header-y += ptrace.h
 header-y += qnx4_fs.h
diff --git a/include/uapi/linux/psci.h b/include/uapi/linux/psci.h
new file mode 100644
index 000..0d4a136
--- /dev/null
+++ b/include/uapi/linux/psci.h
@@ -0,0 +1,85 @@
+/*
+ * ARM Power State and Coordination Interface (PSCI) header
+ *
+ * This header holds common PSCI defines and macros shared
+ * by: ARM kernel, ARM64 kernel, KVM ARM/ARM64 and user space.
+ *
+ * Copyright (C) 2014 Linaro Ltd.
+ * Author: Anup Patel 
+ */
+
+#ifndef _UAPI_LINUX_PSCI_H
+#define _UAPI_LINUX_PSCI_H
+
+/*
+ * PSCI v0.1 interface
+ *
+ * The PSCI v0.1 function numbers are implementation defined.
+ *
+ * Only PSCI return values such as: SUCCESS, NOT_SUPPORTED,
+ * INVALID_PARAMS, and DENIED defined below are applicable
+ * to PSCI v0.1.
+ */
+
+/* PSCI v0.2 interface */
+#define PSCI_0_2_FN_BASE                        0x84000000
+#define PSCI_0_2_FN(n)                          (PSCI_0_2_FN_BASE + (n))
+#define PSCI_0_2_64BIT                          0x40000000
+#define PSCI_0_2_FN64_BASE                      \
+                                       (PSCI_0_2_FN_BASE + PSCI_0_2_64BIT)
+#define PSCI_0_2_FN64(n)                        (PSCI_0_2_FN64_BASE + (n))
+
+#define PSCI_0_2_FN_PSCI_VERSION                PSCI_0_2_FN(0)
+#define PSCI_0_2_FN_CPU_SUSPEND                 PSCI_0_2_FN(1)
+#define PSCI_0_2_FN_CPU_OFF                     PSCI_0_2_FN(2)
+#define PSCI_0_2_FN_CPU_ON                      PSCI_0_2_FN(3)
+#define PSCI_0_2_FN_AFFINITY_INFO               PSCI_0_2_FN(4)
+#define PSCI_0_2_FN_MIGRATE                     PSCI_0_2_FN(5)
+#define PSCI_0_2_FN_MIGRATE_INFO_TYPE           PSCI_0_2_FN(6)
+#define PSCI_0_2_FN_MIGRATE_INFO_UP_CPU         PSCI_0_2_FN(7)
+#define PSCI_0_2_FN_SYSTEM_OFF                  PSCI_0_2_FN(8)
+#define PSCI_0_2_FN_SYSTEM_RESET                PSCI_0_2_FN(9)
+
+#define PSCI_0_2_FN64_CPU_SUSPEND               PSCI_0_2_FN64(1)
+#define PSCI_0_2_FN64_CPU_ON                    PSCI_0_2_FN64(3)
+#define PSCI_0_2_FN64_AFFINITY_INFO             PSCI_0_2_FN64(4)
+#define PSCI_0_2_FN64_MIGRATE                   PSCI_0_2_FN64(5)
+#define PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU       PSCI_0_2_FN64(7)
+
+#define PSCI_0_2_POWER_STATE_ID_MASK            0xffff
+#define PSCI_0_2_POWER_STATE_ID_SHIFT           0
+#define PSCI_0_2_POWER_STATE_TYPE_MASK          0x1
+#define PSCI_0_2_POWER_STATE_TYPE_SHIFT         16
+#define PSCI_0_2_POWER_STATE_AFFL_MASK          0x3
+#define PSCI_0_2_POWER_STATE_AFFL_SHIFT         24
+
+#define PSCI_0_2_AFFINITY_LEVEL_ON              0
+#define PSCI_0_2_AFFINITY_LEVEL_OFF             1
+#define PSCI_0_2_AFFINITY_LEVEL_ON_PENDING      2
+
+#define PSCI_0_2_TOS_UP_MIGRATE                 0
+#define PSCI_0_2_TOS_UP_NO_MIGRATE              1
+#define PSCI_0_2_TOS_MP                         2
+
+/* PSCI version decoding (independent of PSCI version) */
+#define PSCI_VERSION_MAJOR_SHIFT                16
+#define PSCI_VERSION_MINOR_MASK                 \
+               ((1U << PSCI_VERSION_MAJOR_SHIFT) - 1)
+#define PSCI_VERSION_MAJOR_MASK                 ~PSCI_VERSION_MINOR_MASK
+#define PSCI_VERSION_MAJOR(ver)                 \
+               (((ver) & PSCI_VERSION_MAJOR_MASK) >> PSCI_VERSION_MAJOR_SHIFT)
+#define PSCI_VERSION_MINOR(ver)                 \
+               ((ver) & PSCI_VERSION_MINOR_MASK)
+
+/* PSCI return values (inclusive of all PSCI versions) */
+#define PSCI_RET_SUCCESS                        0
+#define PSCI_RET_NOT_SUPPORTED                  -1
+#define PSCI_RET_INVALID_PARAMS                 -2
+#define PSCI_RET_DENIED                         -3
+#define PSCI_RET_ALREADY_ON                     -4
+#define PSCI_RET_ON_PENDING                     -5
+#define PSCI_RET_INTERNAL_FAILURE               -6
+#define PSCI_RET_NOT_PRESENT                    -7
+#define PSCI_RET_DISABLED                       -8
+
+#endif /* _UAPI_LINUX_PSCI_H */
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
t

[PATCH v10 03/12] ARM/ARM64: KVM: Add base for PSCI v0.2 emulation

2014-04-21 Thread Anup Patel
Currently, the in-kernel PSCI emulation provides PSCI v0.1 interface to
VCPUs. This patch extends current in-kernel PSCI emulation to provide
PSCI v0.2 interface to VCPUs.

By default, ARM/ARM64 KVM will always provide PSCI v0.1 interface for
keeping the ABI backward-compatible.

To select PSCI v0.2 interface for VCPUs, the user space (i.e. QEMU or
KVMTOOL) will have to set KVM_ARM_VCPU_PSCI_0_2 feature when doing VCPU
init using KVM_ARM_VCPU_INIT ioctl.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Acked-by: Christoffer Dall 
Acked-by: Marc Zyngier 
---
 arch/arm/include/asm/kvm_host.h   |2 +-
 arch/arm/include/asm/kvm_psci.h   |4 ++
 arch/arm/include/uapi/asm/kvm.h   |   10 ++--
 arch/arm/kvm/psci.c   |   93 ++---
 arch/arm64/include/asm/kvm_host.h |2 +-
 arch/arm64/include/asm/kvm_psci.h |4 ++
 arch/arm64/include/uapi/asm/kvm.h |   10 ++--
 7 files changed, 99 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 09af149..193ceaf 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -36,7 +36,7 @@
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
 #define KVM_HAVE_ONE_REG
 
-#define KVM_VCPU_MAX_FEATURES 1
+#define KVM_VCPU_MAX_FEATURES 2
 
 #include 
 
diff --git a/arch/arm/include/asm/kvm_psci.h b/arch/arm/include/asm/kvm_psci.h
index 9a83d98..4c0e3e1 100644
--- a/arch/arm/include/asm/kvm_psci.h
+++ b/arch/arm/include/asm/kvm_psci.h
@@ -18,6 +18,10 @@
 #ifndef __ARM_KVM_PSCI_H__
 #define __ARM_KVM_PSCI_H__
 
+#define KVM_ARM_PSCI_0_1   1
+#define KVM_ARM_PSCI_0_2   2
+
+int kvm_psci_version(struct kvm_vcpu *vcpu);
 bool kvm_psci_call(struct kvm_vcpu *vcpu);
 
 #endif /* __ARM_KVM_PSCI_H__ */
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index ef0c878..e6ebdd3 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -20,6 +20,7 @@
 #define __ARM_KVM_H__
 
 #include 
+#include 
 #include 
 
 #define __KVM_HAVE_GUEST_DEBUG
@@ -83,6 +84,7 @@ struct kvm_regs {
 #define KVM_VGIC_V2_CPU_SIZE   0x2000
 
 #define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
+#define KVM_ARM_VCPU_PSCI_0_2  1 /* CPU uses PSCI v0.2 */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -201,9 +203,9 @@ struct kvm_arch_memory_slot {
 #define KVM_PSCI_FN_CPU_ON KVM_PSCI_FN(2)
 #define KVM_PSCI_FN_MIGRATEKVM_PSCI_FN(3)
 
-#define KVM_PSCI_RET_SUCCESS           0
-#define KVM_PSCI_RET_NI                ((unsigned long)-1)
-#define KVM_PSCI_RET_INVAL             ((unsigned long)-2)
-#define KVM_PSCI_RET_DENIED            ((unsigned long)-3)
+#define KVM_PSCI_RET_SUCCESS           PSCI_RET_SUCCESS
+#define KVM_PSCI_RET_NI                PSCI_RET_NOT_SUPPORTED
+#define KVM_PSCI_RET_INVAL             PSCI_RET_INVALID_PARAMS
+#define KVM_PSCI_RET_DENIED            PSCI_RET_DENIED
 
 #endif /* __ARM_KVM_H__ */
diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index 448f60e..8c42596c 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -59,7 +59,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu 
*source_vcpu)
 * turned off.
 */
if (!vcpu || !vcpu->arch.pause)
-   return KVM_PSCI_RET_INVAL;
+   return PSCI_RET_INVALID_PARAMS;
 
target_pc = *vcpu_reg(source_vcpu, 2);
 
@@ -82,20 +82,60 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu 
*source_vcpu)
wq = kvm_arch_vcpu_wq(vcpu);
wake_up_interruptible(wq);
 
-   return KVM_PSCI_RET_SUCCESS;
+   return PSCI_RET_SUCCESS;
 }
 
-/**
- * kvm_psci_call - handle PSCI call if r0 value is in range
- * @vcpu: Pointer to the VCPU struct
- *
- * Handle PSCI calls from guests through traps from HVC instructions.
- * The calling convention is similar to SMC calls to the secure world where
- * the function number is placed in r0 and this function returns true if the
- * function number specified in r0 is withing the PSCI range, and false
- * otherwise.
- */
-bool kvm_psci_call(struct kvm_vcpu *vcpu)
+int kvm_psci_version(struct kvm_vcpu *vcpu)
+{
+   if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
+   return KVM_ARM_PSCI_0_2;
+
+   return KVM_ARM_PSCI_0_1;
+}
+
+static bool kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+{
+   unsigned long psci_fn = *vcpu_reg(vcpu, 0) & ~((u32) 0);
+   unsigned long val;
+
+   switch (psci_fn) {
+   case PSCI_0_2_FN_PSCI_VERSION:
+   /*
+* Bits[31:16] = Major Version = 0
+* Bits[15:0] = Minor Version = 2
+*/
+   val = 2;
+   break;
+   case PSCI_0_2_FN_CPU_OFF:
+   kvm_psci_vcpu_off(vcpu);
+   val = PSCI_RET_SUCCESS;
+   break;
+   case PSCI_0_2_FN_CPU_ON:
+   case PSCI_0_2_F

[PATCH v10 05/12] ARM/ARM64: KVM: Make kvm_psci_call() return convention more flexible

2014-04-21 Thread Anup Patel
Currently, the kvm_psci_call() returns 'true' or 'false' based on whether
the PSCI function call was handled successfully or not. This does not help
us emulate system-level PSCI functions where the actual emulation work will
be done by user space (QEMU or KVMTOOL). Examples of such system-level PSCI
functions are: PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET.

This patch updates kvm_psci_call() to return three types of values:
1) > 0 (success)
2) = 0 (success but exit to user space)
3) < 0 (errors)

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Reviewed-by: Christoffer Dall 
---
 arch/arm/include/asm/kvm_psci.h   |2 +-
 arch/arm/kvm/handle_exit.c|   10 +++---
 arch/arm/kvm/psci.c   |   28 
 arch/arm64/include/asm/kvm_psci.h |2 +-
 arch/arm64/kvm/handle_exit.c  |   10 +++---
 5 files changed, 32 insertions(+), 20 deletions(-)

diff --git a/arch/arm/include/asm/kvm_psci.h b/arch/arm/include/asm/kvm_psci.h
index 4c0e3e1..6bda945 100644
--- a/arch/arm/include/asm/kvm_psci.h
+++ b/arch/arm/include/asm/kvm_psci.h
@@ -22,6 +22,6 @@
 #define KVM_ARM_PSCI_0_2   2
 
 int kvm_psci_version(struct kvm_vcpu *vcpu);
-bool kvm_psci_call(struct kvm_vcpu *vcpu);
+int kvm_psci_call(struct kvm_vcpu *vcpu);
 
 #endif /* __ARM_KVM_PSCI_H__ */
diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index 0de91fc..4c979d4 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -38,14 +38,18 @@ static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
 
 static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
+   int ret;
+
trace_kvm_hvc(*vcpu_pc(vcpu), *vcpu_reg(vcpu, 0),
  kvm_vcpu_hvc_get_imm(vcpu));
 
-   if (kvm_psci_call(vcpu))
+   ret = kvm_psci_call(vcpu);
+   if (ret < 0) {
+   kvm_inject_undefined(vcpu);
return 1;
+   }
 
-   kvm_inject_undefined(vcpu);
-   return 1;
+   return ret;
 }
 
 static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index 8c42596c..14e6fa6 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -93,7 +93,7 @@ int kvm_psci_version(struct kvm_vcpu *vcpu)
return KVM_ARM_PSCI_0_1;
 }
 
-static bool kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
 {
unsigned long psci_fn = *vcpu_reg(vcpu, 0) & ~((u32) 0);
unsigned long val;
@@ -128,14 +128,14 @@ static bool kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
val = PSCI_RET_NOT_SUPPORTED;
break;
default:
-   return false;
+   return -EINVAL;
}
 
*vcpu_reg(vcpu, 0) = val;
-   return true;
+   return 1;
 }
 
-static bool kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
+static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
 {
unsigned long psci_fn = *vcpu_reg(vcpu, 0) & ~((u32) 0);
unsigned long val;
@@ -153,11 +153,11 @@ static bool kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
val = PSCI_RET_NOT_SUPPORTED;
break;
default:
-   return false;
+   return -EINVAL;
}
 
*vcpu_reg(vcpu, 0) = val;
-   return true;
+   return 1;
 }
 
 /**
@@ -165,12 +165,16 @@ static bool kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
  * @vcpu: Pointer to the VCPU struct
  *
  * Handle PSCI calls from guests through traps from HVC instructions.
- * The calling convention is similar to SMC calls to the secure world where
- * the function number is placed in r0 and this function returns true if the
- * function number specified in r0 is withing the PSCI range, and false
- * otherwise.
+ * The calling convention is similar to SMC calls to the secure world
+ * where the function number is placed in r0.
+ *
+ * This function returns: > 0 (success), 0 (success but exit to user
+ * space), and < 0 (errors)
+ *
+ * Errors:
+ * -EINVAL: Unrecognized PSCI function
  */
-bool kvm_psci_call(struct kvm_vcpu *vcpu)
+int kvm_psci_call(struct kvm_vcpu *vcpu)
 {
switch (kvm_psci_version(vcpu)) {
case KVM_ARM_PSCI_0_2:
@@ -178,6 +182,6 @@ bool kvm_psci_call(struct kvm_vcpu *vcpu)
case KVM_ARM_PSCI_0_1:
return kvm_psci_0_1_call(vcpu);
default:
-   return false;
+   return -EINVAL;
};
 }
diff --git a/arch/arm64/include/asm/kvm_psci.h 
b/arch/arm64/include/asm/kvm_psci.h
index e25c658..bc39e55 100644
--- a/arch/arm64/include/asm/kvm_psci.h
+++ b/arch/arm64/include/asm/kvm_psci.h
@@ -22,6 +22,6 @@
 #define KVM_ARM_PSCI_0_2   2
 
 int kvm_psci_version(struct kvm_vcpu *vcpu);
-bool kvm_psci_call(struct kvm_vcpu *vcpu);
+int kvm_psci_call(struct kvm_vcpu *vcpu);
 
 #endif /* __ARM64_KVM_PSCI_H__ */
diff --git a/arch/arm64/kvm/handle_exit.c b

[PATCH v10 04/12] KVM: Documentation: Add info regarding KVM_ARM_VCPU_PSCI_0_2 feature

2014-04-21 Thread Anup Patel
We have in-kernel emulation of PSCI v0.2 in KVM ARM/ARM64. To provide
PSCI v0.2 interface to VCPUs, we have to enable KVM_ARM_VCPU_PSCI_0_2
feature when doing KVM_ARM_VCPU_INIT ioctl.

The patch updates documentation of KVM_ARM_VCPU_INIT ioctl to provide
info regarding KVM_ARM_VCPU_PSCI_0_2 feature.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Acked-by: Christoffer Dall 
Acked-by: Marc Zyngier 
---
 Documentation/virtual/kvm/api.txt |2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index a9380ba5..6dc1db5 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2376,6 +2376,8 @@ Possible features:
  Depends on KVM_CAP_ARM_PSCI.
- KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode.
  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
+   - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
+ Depends on KVM_CAP_ARM_PSCI_0_2.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 07/12] ARM/ARM64: KVM: Emulate PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET

2014-04-21 Thread Anup Patel
The PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET functions are system-level
functions hence cannot be fully emulated by in-kernel PSCI emulation code.

To tackle this, we forward PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET function
calls from vcpu to user space (i.e. QEMU or KVMTOOL) via kvm_run structure
using KVM_EXIT_SYSTEM_EVENT exit reasons.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Reviewed-by: Christoffer Dall 
---
 arch/arm/kvm/psci.c |   32 +---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index 14e6fa6..4486d0f 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -85,6 +85,23 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu 
*source_vcpu)
return PSCI_RET_SUCCESS;
 }
 
+static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
+{
+   memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
+   vcpu->run->system_event.type = type;
+   vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
+}
+
+static void kvm_psci_system_off(struct kvm_vcpu *vcpu)
+{
+   kvm_prepare_system_event(vcpu, KVM_SYSTEM_EVENT_SHUTDOWN);
+}
+
+static void kvm_psci_system_reset(struct kvm_vcpu *vcpu)
+{
+   kvm_prepare_system_event(vcpu, KVM_SYSTEM_EVENT_RESET);
+}
+
 int kvm_psci_version(struct kvm_vcpu *vcpu)
 {
if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
@@ -95,6 +112,7 @@ int kvm_psci_version(struct kvm_vcpu *vcpu)
 
 static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
 {
+   int ret = 1;
unsigned long psci_fn = *vcpu_reg(vcpu, 0) & ~((u32) 0);
unsigned long val;
 
@@ -114,13 +132,21 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
case PSCI_0_2_FN64_CPU_ON:
val = kvm_psci_vcpu_on(vcpu);
break;
+   case PSCI_0_2_FN_SYSTEM_OFF:
+   kvm_psci_system_off(vcpu);
+   val = PSCI_RET_INTERNAL_FAILURE;
+   ret = 0;
+   break;
+   case PSCI_0_2_FN_SYSTEM_RESET:
+   kvm_psci_system_reset(vcpu);
+   val = PSCI_RET_INTERNAL_FAILURE;
+   ret = 0;
+   break;
case PSCI_0_2_FN_CPU_SUSPEND:
case PSCI_0_2_FN_AFFINITY_INFO:
case PSCI_0_2_FN_MIGRATE:
case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
-   case PSCI_0_2_FN_SYSTEM_OFF:
-   case PSCI_0_2_FN_SYSTEM_RESET:
case PSCI_0_2_FN64_CPU_SUSPEND:
case PSCI_0_2_FN64_AFFINITY_INFO:
case PSCI_0_2_FN64_MIGRATE:
@@ -132,7 +158,7 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
}
 
*vcpu_reg(vcpu, 0) = val;
-   return 1;
+   return ret;
 }
 
 static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 06/12] KVM: Add KVM_EXIT_SYSTEM_EVENT to user space API header

2014-04-21 Thread Anup Patel
Currently, we don't have an exit reason to notify user space about
a system-level event (for e.g. system reset or shutdown) triggered
by the VCPU. This patch adds exit reason KVM_EXIT_SYSTEM_EVENT for
this purpose. We can also inform user space about the 'type' and
architecture specific 'flags' of a system-level event using the
kvm_run structure.

This newly added KVM_EXIT_SYSTEM_EVENT will be used by KVM ARM/ARM64
in-kernel PSCI v0.2 support to reset/shutdown VMs.
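
A hypothetical user space fragment (the machine helpers are made-up names)
showing how a VMM could consume this exit:

#include <linux/kvm.h>

extern int example_shutdown_machine(void);
extern int example_reset_machine(void);

static int example_handle_exit(struct kvm_run *run)
{
        switch (run->exit_reason) {
        case KVM_EXIT_SYSTEM_EVENT:
                switch (run->system_event.type) {
                case KVM_SYSTEM_EVENT_SHUTDOWN:
                        return example_shutdown_machine();
                case KVM_SYSTEM_EVENT_RESET:
                        return example_reset_machine();
                }
                break;
        /* ... other exit reasons ... */
        }
        return 0;
}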

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Reviewed-by: Christoffer Dall 
Reviewed-by: Marc Zyngier 
---
 Documentation/virtual/kvm/api.txt |   15 +++
 include/uapi/linux/kvm.h  |8 
 2 files changed, 23 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index 6dc1db5..c02d725 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2740,6 +2740,21 @@ It gets triggered whenever both KVM_CAP_PPC_EPR are 
enabled and an
 external interrupt has just been delivered into the guest. User space
 should put the acknowledged interrupt vector into the 'epr' field.
 
+   /* KVM_EXIT_SYSTEM_EVENT */
+   struct {
+#define KVM_SYSTEM_EVENT_SHUTDOWN   1
+#define KVM_SYSTEM_EVENT_RESET  2
+   __u32 type;
+   __u64 flags;
+   } system_event;
+
+If exit_reason is KVM_EXIT_SYSTEM_EVENT then the vcpu has triggered
+a system-level event using some architecture specific mechanism (hypercall
+or some special instruction). In case of ARM/ARM64, this is triggered using
+HVC instruction based PSCI call from the vcpu. The 'type' field describes
+the system-level event type. The 'flags' field describes architecture
+specific flags for the system-level event.
+
/* Fix the size of the union. */
char padding[256];
};
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 01c5624..e86c36a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -171,6 +171,7 @@ struct kvm_pit_config {
 #define KVM_EXIT_WATCHDOG 21
 #define KVM_EXIT_S390_TSCH22
 #define KVM_EXIT_EPR  23
+#define KVM_EXIT_SYSTEM_EVENT 24
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -301,6 +302,13 @@ struct kvm_run {
struct {
__u32 epr;
} epr;
+   /* KVM_EXIT_SYSTEM_EVENT */
+   struct {
+#define KVM_SYSTEM_EVENT_SHUTDOWN   1
+#define KVM_SYSTEM_EVENT_RESET  2
+   __u32 type;
+   __u64 flags;
+   } system_event;
/* Fix the size of the union. */
char padding[256];
};
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 09/12] ARM/ARM64: KVM: Emulate PSCI v0.2 MIGRATE_INFO_TYPE and related functions

2014-04-21 Thread Anup Patel
This patch adds emulation of PSCI v0.2 MIGRATE, MIGRATE_INFO_TYPE, and
MIGRATE_INFO_UP_CPU function calls for KVM ARM/ARM64.

KVM ARM/ARM64 being a hypervisor (and not a Trusted OS), we cannot provide
these functions, hence we emulate them in the following way:
1. MIGRATE - Returns "Not Supported"
2. MIGRATE_INFO_TYPE - Return 2 i.e. Trusted OS is not present
3. MIGRATE_INFO_UP_CPU - Returns "Not Supported"

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Reviewed-by: Christoffer Dall 
Acked-by: Marc Zyngier 
---
 arch/arm/kvm/psci.c |   21 -
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index 122bc67..d04a47b 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -182,6 +182,22 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
case PSCI_0_2_FN64_AFFINITY_INFO:
val = kvm_psci_vcpu_affinity_info(vcpu);
break;
+   case PSCI_0_2_FN_MIGRATE:
+   case PSCI_0_2_FN64_MIGRATE:
+   val = PSCI_RET_NOT_SUPPORTED;
+   break;
+   case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+   /*
+* Trusted OS is MP hence does not require migration
+* or
+* Trusted OS is not present
+*/
+   val = PSCI_0_2_TOS_MP;
+   break;
+   case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
+   case PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU:
+   val = PSCI_RET_NOT_SUPPORTED;
+   break;
case PSCI_0_2_FN_SYSTEM_OFF:
kvm_psci_system_off(vcpu);
val = PSCI_RET_INTERNAL_FAILURE;
@@ -193,12 +209,7 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
ret = 0;
break;
case PSCI_0_2_FN_CPU_SUSPEND:
-   case PSCI_0_2_FN_MIGRATE:
-   case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
-   case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
case PSCI_0_2_FN64_CPU_SUSPEND:
-   case PSCI_0_2_FN64_MIGRATE:
-   case PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU:
val = PSCI_RET_NOT_SUPPORTED;
break;
default:
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 08/12] ARM/ARM64: KVM: Emulate PSCI v0.2 AFFINITY_INFO

2014-04-21 Thread Anup Patel
This patch adds emulation of the PSCI v0.2 AFFINITY_INFO function call
for KVM ARM/ARM64. This is a VCPU-level function call which will be
used to determine the current state of a given affinity level.
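
For reference, a guest would typically invoke it like this (illustrative
sketch; invoke_psci_fn() stands in for whatever SMC/HVC wrapper the guest
uses, with arguments in r1-r2/x1-x2 and the result returned in r0/x0):

#include <uapi/linux/psci.h>   /* the header added by patch 02 */

static int example_cpu_is_on(unsigned long target_mpidr)
{
        long state;

        /* x1/r1: target_affinity, x2/r2: lowest_affinity_level */
        state = invoke_psci_fn(PSCI_0_2_FN64_AFFINITY_INFO,
                               target_mpidr, 0, 0);

        return state == PSCI_0_2_AFFINITY_LEVEL_ON;
}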

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Reviewed-by: Christoffer Dall 
---
 arch/arm/kvm/psci.c |   52 +--
 1 file changed, 50 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index 4486d0f..122bc67 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -27,6 +27,16 @@
  * as described in ARM document number ARM DEN 0022A.
  */
 
+#define AFFINITY_MASK(level)   ~((0x1UL << ((level) * MPIDR_LEVEL_BITS)) - 1)
+
+static unsigned long psci_affinity_mask(unsigned long affinity_level)
+{
+   if (affinity_level <= 3)
+   return MPIDR_HWID_BITMASK & AFFINITY_MASK(affinity_level);
+
+   return 0;
+}
+
 static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
 {
vcpu->arch.pause = true;
@@ -85,6 +95,42 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu 
*source_vcpu)
return PSCI_RET_SUCCESS;
 }
 
+static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
+{
+   int i;
+   unsigned long mpidr;
+   unsigned long target_affinity;
+   unsigned long target_affinity_mask;
+   unsigned long lowest_affinity_level;
+   struct kvm *kvm = vcpu->kvm;
+   struct kvm_vcpu *tmp;
+
+   target_affinity = *vcpu_reg(vcpu, 1);
+   lowest_affinity_level = *vcpu_reg(vcpu, 2);
+
+   /* Determine target affinity mask */
+   target_affinity_mask = psci_affinity_mask(lowest_affinity_level);
+   if (!target_affinity_mask)
+   return PSCI_RET_INVALID_PARAMS;
+
+   /* Ignore other bits of target affinity */
+   target_affinity &= target_affinity_mask;
+
+   /*
+* If one or more VCPU matching target affinity are running
+* then ON else OFF
+*/
+   kvm_for_each_vcpu(i, tmp, kvm) {
+   mpidr = kvm_vcpu_get_mpidr(tmp);
+   if (((mpidr & target_affinity_mask) == target_affinity) &&
+   !tmp->arch.pause) {
+   return PSCI_0_2_AFFINITY_LEVEL_ON;
+   }
+   }
+
+   return PSCI_0_2_AFFINITY_LEVEL_OFF;
+}
+
 static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
 {
memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
@@ -132,6 +178,10 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
case PSCI_0_2_FN64_CPU_ON:
val = kvm_psci_vcpu_on(vcpu);
break;
+   case PSCI_0_2_FN_AFFINITY_INFO:
+   case PSCI_0_2_FN64_AFFINITY_INFO:
+   val = kvm_psci_vcpu_affinity_info(vcpu);
+   break;
case PSCI_0_2_FN_SYSTEM_OFF:
kvm_psci_system_off(vcpu);
val = PSCI_RET_INTERNAL_FAILURE;
@@ -143,12 +193,10 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
ret = 0;
break;
case PSCI_0_2_FN_CPU_SUSPEND:
-   case PSCI_0_2_FN_AFFINITY_INFO:
case PSCI_0_2_FN_MIGRATE:
case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
case PSCI_0_2_FN64_CPU_SUSPEND:
-   case PSCI_0_2_FN64_AFFINITY_INFO:
case PSCI_0_2_FN64_MIGRATE:
case PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU:
val = PSCI_RET_NOT_SUPPORTED;
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 11/12] ARM/ARM64: KVM: Emulate PSCI v0.2 CPU_SUSPEND

2014-04-21 Thread Anup Patel
This patch adds emulation of the PSCI v0.2 CPU_SUSPEND function call for
KVM ARM/ARM64. This is a CPU-level function call which can suspend the
current CPU or the current CPU cluster. We don't have VCPU clusters in
KVM, so we only suspend the current VCPU.

The CPU_SUSPEND emulation is not tested much because currently there
is no CPUIDLE driver in Linux kernel that uses PSCI CPU_SUSPEND. The
PSCI CPU_SUSPEND implementation in ARM64 kernel was tested using a
Simple CPUIDLE driver which is not published due to unstable DT-bindings
for PSCI.
(For more info, http://lwn.net/Articles/574950/)

For simplicity, we implement CPU_SUSPEND emulation similar to WFI
(Wait-for-interrupt) emulation and we also treat a power-down request
the same as a stand-by request. This is consistent with section
5.4.1 and section 5.4.2 of the PSCI v0.2 specification.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
---
 arch/arm/kvm/psci.c |   28 
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index b582e99..757e506 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -37,6 +37,26 @@ static unsigned long psci_affinity_mask(unsigned long 
affinity_level)
return 0;
 }
 
+static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
+{
+   /*
+* NOTE: For simplicity, we make VCPU suspend emulation to be
+* same-as WFI (Wait-for-interrupt) emulation.
+*
+* This means for KVM the wakeup events are interrupts and
+* this is consistent with intended use of StateID as described
+* in section 5.4.1 of PSCI v0.2 specification (ARM DEN 0022A).
+*
+* Further, we also treat power-down request to be same as
+* stand-by request as-per section 5.4.2 clause 3 of PSCI v0.2
+* specification (ARM DEN 0022A). This means all suspend states
+* for KVM will preserve the register state.
+*/
+   kvm_vcpu_block(vcpu);
+
+   return PSCI_RET_SUCCESS;
+}
+
 static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
 {
vcpu->arch.pause = true;
@@ -183,6 +203,10 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
 */
val = 2;
break;
+   case PSCI_0_2_FN_CPU_SUSPEND:
+   case PSCI_0_2_FN64_CPU_SUSPEND:
+   val = kvm_psci_vcpu_suspend(vcpu);
+   break;
case PSCI_0_2_FN_CPU_OFF:
kvm_psci_vcpu_off(vcpu);
val = PSCI_RET_SUCCESS;
@@ -221,10 +245,6 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
val = PSCI_RET_INTERNAL_FAILURE;
ret = 0;
break;
-   case PSCI_0_2_FN_CPU_SUSPEND:
-   case PSCI_0_2_FN64_CPU_SUSPEND:
-   val = PSCI_RET_NOT_SUPPORTED;
-   break;
default:
return -EINVAL;
}
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 10/12] ARM/ARM64: KVM: Fix CPU_ON emulation for PSCI v0.2

2014-04-21 Thread Anup Patel
As per PSCI v0.2, the source CPU provides the physical address of the
"entry point" and a "context id" for starting a target CPU. Also,
if the target CPU is already running then we should return ALREADY_ON.

The current emulation of the CPU_ON function does not consider the
"context id" and returns INVALID_PARAMETERS if the target
CPU is already running.

This patch updates kvm_psci_vcpu_on() such that it works for both
PSCI v0.1 and PSCI v0.2.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Reviewed-by: Christoffer Dall 
Acked-by: Marc Zyngier 
---
 arch/arm/kvm/psci.c |   15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index d04a47b..b582e99 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -48,6 +48,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu 
*source_vcpu)
struct kvm_vcpu *vcpu = NULL, *tmp;
wait_queue_head_t *wq;
unsigned long cpu_id;
+   unsigned long context_id;
unsigned long mpidr;
phys_addr_t target_pc;
int i;
@@ -68,10 +69,17 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu 
*source_vcpu)
 * Make sure the caller requested a valid CPU and that the CPU is
 * turned off.
 */
-   if (!vcpu || !vcpu->arch.pause)
+   if (!vcpu)
return PSCI_RET_INVALID_PARAMS;
+   if (!vcpu->arch.pause) {
+   if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
+   return PSCI_RET_ALREADY_ON;
+   else
+   return PSCI_RET_INVALID_PARAMS;
+   }
 
target_pc = *vcpu_reg(source_vcpu, 2);
+   context_id = *vcpu_reg(source_vcpu, 3);
 
kvm_reset_vcpu(vcpu);
 
@@ -86,6 +94,11 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu 
*source_vcpu)
kvm_vcpu_set_be(vcpu);
 
*vcpu_pc(vcpu) = target_pc;
+   /*
+* NOTE: We always update r0 (or x0) because for PSCI v0.1
+* the general purpose registers are undefined upon CPU_ON.
+*/
+   *vcpu_reg(vcpu, 0) = context_id;
vcpu->arch.pause = false;
smp_mb();   /* Make sure the above is visible */
 
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v10 12/12] ARM/ARM64: KVM: Advertise KVM_CAP_ARM_PSCI_0_2 to user space

2014-04-21 Thread Anup Patel
We have PSCI v0.2 emulation available in KVM ARM/ARM64,
hence we advertise this to user space (i.e. QEMU or KVMTOOL)
via the KVM_CHECK_EXTENSION ioctl.

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Acked-by: Christoffer Dall 
Acked-by: Marc Zyngier 
---
 arch/arm/kvm/arm.c |1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f0e50a0..3c82b37 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -197,6 +197,7 @@ int kvm_dev_ioctl_check_extension(long ext)
case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
case KVM_CAP_ONE_REG:
case KVM_CAP_ARM_PSCI:
+   case KVM_CAP_ARM_PSCI_0_2:
r = 1;
break;
case KVM_CAP_COALESCED_MMIO:
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v11 02/12] ARM/ARM64: KVM: Add common header for PSCI related defines

2014-04-28 Thread Anup Patel
We need a common place to share PSCI related defines among ARM kernel,
ARM64 kernel, KVM ARM/ARM64 PSCI emulation, and user space.

We introduce uapi/linux/psci.h for this purpose. This newly added
header will be first used by KVM ARM/ARM64 in-kernel PSCI emulation
and user space (i.e. QEMU or KVMTOOL).

Signed-off-by: Anup Patel 
Signed-off-by: Pranavkumar Sawargaonkar 
Signed-off-by: Ashwin Chaugule 
---
 include/uapi/linux/Kbuild |1 +
 include/uapi/linux/psci.h |   90 +
 2 files changed, 91 insertions(+)
 create mode 100644 include/uapi/linux/psci.h

diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 6929571..24e9033 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -317,6 +317,7 @@ header-y += ppp-ioctl.h
 header-y += ppp_defs.h
 header-y += pps.h
 header-y += prctl.h
+header-y += psci.h
 header-y += ptp_clock.h
 header-y += ptrace.h
 header-y += qnx4_fs.h
diff --git a/include/uapi/linux/psci.h b/include/uapi/linux/psci.h
new file mode 100644
index 000..310d83e
--- /dev/null
+++ b/include/uapi/linux/psci.h
@@ -0,0 +1,90 @@
+/*
+ * ARM Power State and Coordination Interface (PSCI) header
+ *
+ * This header holds common PSCI defines and macros shared
+ * by: ARM kernel, ARM64 kernel, KVM ARM/ARM64 and user space.
+ *
+ * Copyright (C) 2014 Linaro Ltd.
+ * Author: Anup Patel 
+ */
+
+#ifndef _UAPI_LINUX_PSCI_H
+#define _UAPI_LINUX_PSCI_H
+
+/*
+ * PSCI v0.1 interface
+ *
+ * The PSCI v0.1 function numbers are implementation defined.
+ *
+ * Only PSCI return values such as: SUCCESS, NOT_SUPPORTED,
+ * INVALID_PARAMS, and DENIED defined below are applicable
+ * to PSCI v0.1.
+ */
+
+/* PSCI v0.2 interface */
+#define PSCI_0_2_FN_BASE                        0x84000000
+#define PSCI_0_2_FN(n)                          (PSCI_0_2_FN_BASE + (n))
+#define PSCI_0_2_64BIT                          0x40000000
+#define PSCI_0_2_FN64_BASE                      \
+                                       (PSCI_0_2_FN_BASE + PSCI_0_2_64BIT)
+#define PSCI_0_2_FN64(n)                        (PSCI_0_2_FN64_BASE + (n))
+
+#define PSCI_0_2_FN_PSCI_VERSION                PSCI_0_2_FN(0)
+#define PSCI_0_2_FN_CPU_SUSPEND                 PSCI_0_2_FN(1)
+#define PSCI_0_2_FN_CPU_OFF                     PSCI_0_2_FN(2)
+#define PSCI_0_2_FN_CPU_ON                      PSCI_0_2_FN(3)
+#define PSCI_0_2_FN_AFFINITY_INFO               PSCI_0_2_FN(4)
+#define PSCI_0_2_FN_MIGRATE                     PSCI_0_2_FN(5)
+#define PSCI_0_2_FN_MIGRATE_INFO_TYPE           PSCI_0_2_FN(6)
+#define PSCI_0_2_FN_MIGRATE_INFO_UP_CPU         PSCI_0_2_FN(7)
+#define PSCI_0_2_FN_SYSTEM_OFF                  PSCI_0_2_FN(8)
+#define PSCI_0_2_FN_SYSTEM_RESET                PSCI_0_2_FN(9)
+
+#define PSCI_0_2_FN64_CPU_SUSPEND               PSCI_0_2_FN64(1)
+#define PSCI_0_2_FN64_CPU_ON                    PSCI_0_2_FN64(3)
+#define PSCI_0_2_FN64_AFFINITY_INFO             PSCI_0_2_FN64(4)
+#define PSCI_0_2_FN64_MIGRATE                   PSCI_0_2_FN64(5)
+#define PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU       PSCI_0_2_FN64(7)
+
+/* PSCI v0.2 power state encoding for CPU_SUSPEND function */
+#define PSCI_0_2_POWER_STATE_ID_MASK            0xffff
+#define PSCI_0_2_POWER_STATE_ID_SHIFT           0
+#define PSCI_0_2_POWER_STATE_TYPE_SHIFT         16
+#define PSCI_0_2_POWER_STATE_TYPE_MASK          \
+                               (0x1 << PSCI_0_2_POWER_STATE_TYPE_SHIFT)
+#define PSCI_0_2_POWER_STATE_AFFL_SHIFT         24
+#define PSCI_0_2_POWER_STATE_AFFL_MASK          \
+                               (0x3 << PSCI_0_2_POWER_STATE_AFFL_SHIFT)
+
+/* PSCI v0.2 affinity level state returned by AFFINITY_INFO */
+#define PSCI_0_2_AFFINITY_LEVEL_ON              0
+#define PSCI_0_2_AFFINITY_LEVEL_OFF             1
+#define PSCI_0_2_AFFINITY_LEVEL_ON_PENDING      2
+
+/* PSCI v0.2 multicore support in Trusted OS returned by MIGRATE_INFO_TYPE */
+#define PSCI_0_2_TOS_UP_MIGRATE                 0
+#define PSCI_0_2_TOS_UP_NO_MIGRATE              1
+#define PSCI_0_2_TOS_MP                         2
+
+/* PSCI version decoding (independent of PSCI version) */
+#define PSCI_VERSION_MAJOR_SHIFT                16
+#define PSCI_VERSION_MINOR_MASK                 \
+               ((1U << PSCI_VERSION_MAJOR_SHIFT) - 1)
+#define PSCI_VERSION_MAJOR_MASK                 ~PSCI_VERSION_MINOR_MASK
+#define PSCI_VERSION_MAJOR(ver)                 \
+               (((ver) & PSCI_VERSION_MAJOR_MASK) >> PSCI_VERSION_MAJOR_SHIFT)
+#define PSCI_VERSION_MINOR(ver)                 \
+               ((ver) & PSCI_VERSION_MINOR_MASK)
+
+/* PSCI return values (inclusive of all PSCI versions) */
+#define PSCI_RET_SUCCESS                        0
+#define PSCI_RET_NOT_SUPPORTED                  -1
+#define PSCI_RET_INVALID_PARAMS                 -2
+#define PSCI_RET_DENIED                         -3
+#define PSCI_RET_ALREADY_O
