Re: [PATCH v4 06/21] KVM: arm64: Support SDEI_EVENT_CONTEXT hypercall

2022-01-11 Thread Shannon Zhao




On 2021/8/15 8:13, Gavin Shan wrote:

+static unsigned long kvm_sdei_hypercall_context(struct kvm_vcpu *vcpu)
+{
+   struct kvm *kvm = vcpu->kvm;
+   struct kvm_sdei_kvm *ksdei = kvm->arch.sdei;
+   struct kvm_sdei_vcpu *vsdei = vcpu->arch.sdei;
+   struct kvm_sdei_vcpu_regs *regs;
+   unsigned long index = smccc_get_arg1(vcpu);
+   unsigned long ret = SDEI_SUCCESS;
+
+   /* Sanity check */
+   if (!(ksdei && vsdei)) {
+   ret = SDEI_NOT_SUPPORTED;
+   goto out;
+   }
Maybe we could move this common sanity-check code into
kvm_sdei_hypercall() to save some lines.
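
Something along these lines is what I mean (an untested sketch against this
series' structure, reusing smccc_set_retval() as the existing handlers do):

	/* Untested sketch: hoist the common sanity check into the dispatcher. */
	int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
	{
		struct kvm_sdei_kvm *ksdei = vcpu->kvm->arch.sdei;
		struct kvm_sdei_vcpu *vsdei = vcpu->arch.sdei;

		if (!(ksdei && vsdei)) {
			smccc_set_retval(vcpu, SDEI_NOT_SUPPORTED, 0, 0, 0);
			return 1;
		}

		/* ... dispatch to the per-hypercall handlers as before ... */
	}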


Thanks,
Shannon


Re: [PATCH v4 02/21] KVM: arm64: Add SDEI virtualization infrastructure

2022-01-11 Thread Shannon Zhao




On 2021/8/15 8:13, Gavin Shan wrote:

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..2f021aa41632 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -150,6 +150,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
  
  	kvm_vgic_early_init(kvm);
  
+	kvm_sdei_init_vm(kvm);
+
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->arch.max_vcpus = kvm_arm_default_max_vcpus();
Hi, is it possible to let user space choose whether to enable SDEI,
rather than enabling it by default?
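
For instance (purely illustrative; KVM_CAP_ARM_SDEI is a made-up capability
name, not an existing UAPI constant), the call could be moved behind a VM
capability that user space enables explicitly:

	/* Illustrative sketch: gate SDEI setup on an opt-in from user space. */
	int kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
	{
		switch (cap->cap) {
		case KVM_CAP_ARM_SDEI:		/* hypothetical capability */
			kvm_sdei_init_vm(kvm);
			return 0;
		default:
			return -EINVAL;
		}
	}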



[PATCH] arm64: cpufeature: remove nonexistent CONFIG_KVM_ARM_HOST

2021-01-04 Thread Shannon Zhao
Commit d82755b2e781 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST") deleted
the CONFIG_KVM_ARM_HOST option; CONFIG_KVM should be used instead.

Just remove the stale CONFIG_KVM_ARM_HOST check here.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/kernel/cpufeature.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7ffb5f1..e99edde 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2568,7 +2568,7 @@ static void verify_hyp_capabilities(void)
int parange, ipa_max;
unsigned int safe_vmid_bits, vmid_bits;
 
-   if (!IS_ENABLED(CONFIG_KVM) || !IS_ENABLED(CONFIG_KVM_ARM_HOST))
+   if (!IS_ENABLED(CONFIG_KVM))
return;
 
safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
-- 
1.8.3.1
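
For context, IS_ENABLED(CONFIG_FOO) expands to 1 only when the option is set
to y or m; for a symbol that no longer exists it expands to 0. So before this
patch the condition read, in effect:

	/* CONFIG_KVM_ARM_HOST no longer exists, so this branch was always taken: */
	if (!IS_ENABLED(CONFIG_KVM) || !IS_ENABLED(CONFIG_KVM_ARM_HOST))
		return;

i.e. verify_hyp_capabilities() always returned early and the hyp capability
checks were silently skipped.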



[PATCH] KVM: ARM: call hyp_cpu_pm_exit at the right place

2019-12-01 Thread Shannon Zhao
There is no need to call hyp_cpu_pm_exit() in init_hyp_mode() when an
error occurs; hyp_cpu_pm_exit() only needs to be called in
kvm_arch_init() if init_subsystems() fails. So move hyp_cpu_pm_exit()
out of teardown_hyp_mode() and call it directly in kvm_arch_init().

Signed-off-by: Shannon Zhao 
---
 virt/kvm/arm/arm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 12e0280..3b13ade 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1537,7 +1537,6 @@ static void teardown_hyp_mode(void)
free_hyp_pgds();
for_each_possible_cpu(cpu)
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
-   hyp_cpu_pm_exit();
 }
 
 /**
@@ -1751,6 +1750,7 @@ int kvm_arch_init(void *opaque)
return 0;
 
 out_hyp:
+   hyp_cpu_pm_exit();
if (!in_hyp_mode)
teardown_hyp_mode();
 out_err:
-- 
1.8.3.1



Re: [PATCH RFC 0/7] Support KVM being compiled as a kernel module on arm64

2019-10-24 Thread Shannon Zhao




On 2019/10/24 21:41, Marc Zyngier wrote:

On 2019-10-24 11:58, James Morse wrote:

Hi Shannon,

On 24/10/2019 11:27, Shannon Zhao wrote:

Currently KVM on ARM64 cannot be compiled as a kernel module. It would
be useful to compile KVM as a module.



For example, it could reload kvm without rebooting host machine.


What problem does this solve?

KVM has some funny requirements that aren't normal for a module. On
v8.0 hardware it must
have an idmap. Modules don't usually expect their code to be
physically contiguous, but
KVM does. KVM influences the way some of the irqchip stuff is set up
during early boot
(EOI mode ... not that I understand it).


We change the EOImode solely based on how we were booted (EL2 or not).
KVM doesn't directly influence that (it comes into the picture much
later).


(I think KVM-as-a-module on x86 is an artifact of how it was developed)



This patchset supports this feature, though some limitations remain to
be solved. I am sending it out as an RFC to gather suggestions and
comments.



Currently it only supports VHE systems, due to hyp code section
address variables like __hyp_text_start.


We still need to support !VHE systems, and we need to do it with a
single image.



Also, kvm_update_va_mask cannot be called when loading the kvm module,
and the kernel panics with the errors below. So I make kern_hyp_va a
nop function.


Making this work for the single-Image on v8.0 is going to be a
tremendous amount of work.
What is the payoff?


I can only agree. !VHE is something we're going to support for the 
foreseeable
future (which is roughly equivalent to "forever"), and modules have 
properties

that are fundamentally incompatible with the way KVM works with !VHE.

Yes, with this patchset we still support !VHE systems with built-in
KVM, while for VHE systems we could support a kernel module, with a
check at module init to prevent wrongly loading the kvm module on !VHE
systems.



If the only purpose of this work is to be able to swap KVM implementations
in a development environment, then it really isn't worth the effort.

Making KVM a kernel module has many advantages for both development and
production environments. For example, we can backport and update the
KVM code independently without recompiling the kernel. A modular KVM is
also the basis for a KVM hot-upgrade feature that avoids shutting down
VMs and hosts, which is very important for cloud service providers to
offer uninterrupted service to their customers.


Thanks,
Shannon


Re: [PATCH RFC 0/7] Support KVM being compiled as a kernel module on arm64

2019-10-24 Thread Shannon Zhao

Hi James,

On 2019/10/24 18:58, James Morse wrote:

Hi Shannon,

On 24/10/2019 11:27, Shannon Zhao wrote:

Currently KVM on ARM64 cannot be compiled as a kernel module. It would
be useful to compile KVM as a module.



For example, it could reload kvm without rebooting host machine.


What problem does this solve?

KVM has some funny requirements that aren't normal for a module. On v8.0 
hardware it must
have an idmap. Modules don't usually expect their code to be physically 
contiguous, but
KVM does. KVM influences the way some of the irqchip stuff is set up during
early boot
(EOI mode ... not that I understand it).

(I think KVM-as-a-module on x86 is an artifact of how it was developed)



This patchset supports this feature, though some limitations remain to
be solved. I am sending it out as an RFC to gather suggestions and
comments.



Currently it only supports VHE systems, due to hyp code section
address variables like __hyp_text_start.


We still need to support !VHE systems, and we need to do it with a single image.

I didn't make it clear. With this patchset we still support !VHE
systems by choosing CONFIG_KVM_ARM_HOST=y, which is the default. And
during module init I add a check to prevent wrong usage of the kvm
module:


	if (IS_MODULE(CONFIG_KVM_ARM_HOST) && !is_kernel_in_hyp_mode()) {
		kvm_err("kvm arm kernel module only supports VHE systems\n");
		return -ENODEV;
	}
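
(For what it's worth, IS_MODULE(CONFIG_KVM_ARM_HOST) is true only when the
option is set to m, so this check compiles away entirely for built-in
configurations.)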





Also, kvm_update_va_mask cannot be called when loading the kvm module,
and the kernel panics with the errors below. So I make kern_hyp_va a
nop function.


Making this work for the single-Image on v8.0 is going to be a tremendous 
amount of work.
What is the payoff?

Actually, we can limit this feature to VHE systems only, without
influencing !VHE systems.


Thanks,
Shannon


[PATCH RFC 3/7] KVM: vgic: make vgic parameters work well for module

2019-10-24 Thread Shannon Zhao
Signed-off-by: Shannon Zhao 
---
 virt/kvm/arm/vgic/vgic-v3.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index 8d69f00..228cfeb 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -548,6 +548,12 @@ int vgic_v3_map_resources(struct kvm *kvm)
 
 DEFINE_STATIC_KEY_FALSE(vgic_v3_cpuif_trap);
 
+#ifdef MODULE
+module_param_named(vgic_v3_group0_trap, group0_trap, bool, S_IRUGO);
+module_param_named(vgic_v3_group1_trap, group1_trap, bool, S_IRUGO);
+module_param_named(vgic_v3_common_trap, common_trap, bool, S_IRUGO);
+module_param_named(vgic_v4_enable, gicv4_enable, bool, S_IRUGO);
+#else
 static int __init early_group0_trap_cfg(char *buf)
 {
return strtobool(buf, &group0_trap);
@@ -571,6 +577,7 @@ static int __init early_gicv4_enable(char *buf)
return strtobool(buf, &gicv4_enable);
 }
 early_param("kvm-arm.vgic_v4_enable", early_gicv4_enable);
+#endif
 
 /**
  * vgic_v3_probe - probe for a VGICv3 compatible interrupt controller
-- 
1.8.3.1
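
For reference, with the module_param_named() definitions above, the traps
would presumably be toggled at load time rather than on the kernel command
line, e.g. "modprobe kvm vgic_v3_group0_trap=1" (module name illustrative),
and remain readable under /sys/module/<name>/parameters/ afterwards.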



[PATCH RFC 0/7] Support KVM being compiled as a kernel module on arm64

2019-10-24 Thread Shannon Zhao
Currently KVM on ARM64 cannot be compiled as a kernel module. It would
be useful to compile KVM as a module; for example, kvm could then be
reloaded without rebooting the host machine.

This patchset supports this feature, though some limitations remain to
be solved. I am sending it out as an RFC to gather suggestions and
comments.

The patchset could be fetched from:
https://github.com/shannonz88/linux/tree/kvm_module

Currently it only supports VHE systems, due to hyp code section address
variables like __hyp_text_start. Also, kvm_update_va_mask cannot be
called when loading the kvm module, and the kernel panics with the
errors below. So I make kern_hyp_va a nop function.

Unable to handle kernel read from unreadable memory at virtual address
88eda580
Mem abort info:
  ESR = 0x860f
  EC = 0x21: IABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
swapper pgtable: 4k pages, 48-bit VAs, pgdp=00ca1000
[88eda580] pgd=2057f003, pud=2057e003,
pmd=003799c63003, pte=00e800575140f713
Internal error: Oops: 860f [#1] SMP
CPU: 25 PID: 9307 Comm: insmod Tainted: GE 5.4.0-rc4+ #39
pstate: 6049 (nZCv daif +PAN -UAO)
pc : 0x88eda580
lr : __apply_alternatives+0x130/0x26c
sp : 800022fdbb00
x29: 800022fdbb00 x28: 88f105bc
x27: 0005 x26: 88f105bc
x25: 800010b67dd0 x24: 88eda580
x23: 800010f21eff x22: 800022fdbba8
x21: 0001 x20: 800022fdbba0
x19: 88f232bc x18: 00379b437d90
x17:  x16: 9000
x15: 0008 x14: 0a00
x13: 0001 x12: 205768d18010
x11:  x10: 00aa
x9 : 8000109b0cc8 x8 : 0001
x7 : 00579f9391a8 x6 : 0001
x5 :  x4 : 
x3 : 0005 x2 : 88f105bc
x1 : 88f105bc x0 : 88f232bc
Call trace:
 0x88eda580
 apply_alternatives_module+0x64/0x84
 module_finalize+0xa8/0xd0
 load_module+0xf88/0x1b34
 __do_sys_finit_module+0xd0/0xfc
 __arm64_sys_finit_module+0x28/0x34
 el0_svc_handler+0x120/0x1d4
 el0_svc+0x8/0xc
Code: d34b2c00 17ea d402 d65f03c0 (a9ba7bfd)
---[ end trace 6de8ebc787a78157 ]---
Kernel panic - not syncing: Fatal exception
SMP: stopping secondary CPUs

Shannon Zhao (7):
  KVM: ARM: call hyp_cpu_pm_exit on correct fail and exit path
  KVM: arch_timer: Fix resource leak on error path
  KVM: vgic: make vgic parameters work well for module
  KVM: vgic: Add hyp uninitialize function
  KVM: arch_timer: Add hyp uninitialize function
  KVM: arm/arm64: Move target table register into register table init
function
  KVM: ARM: Support KVM being compiled as a kernel module

 arch/arm/kvm/coproc.c|  3 ++
 arch/arm/kvm/coproc.h|  3 ++
 arch/arm/kvm/coproc_a15.c|  4 +--
 arch/arm/kvm/coproc_a7.c |  4 +--
 arch/arm64/include/asm/cache.h   | 16 ++-
 arch/arm64/include/asm/cpufeature.h  | 11 +---
 arch/arm64/include/asm/fpsimd.h  |  6 +---
 arch/arm64/include/asm/kvm_host.h|  3 --
 arch/arm64/include/asm/kvm_mmu.h |  4 +++
 arch/arm64/include/asm/perf_event.h  |  2 ++
 arch/arm64/kernel/acpi.c |  1 +
 arch/arm64/kernel/asm-offsets.c  |  2 +-
 arch/arm64/kernel/cpu_errata.c   | 15 +-
 arch/arm64/kernel/cpufeature.c   |  2 ++
 arch/arm64/kernel/cpuinfo.c  | 16 +++
 arch/arm64/kernel/entry-fpsimd.S |  2 ++
 arch/arm64/kernel/entry.S|  1 +
 arch/arm64/kernel/fpsimd.c   | 11 
 arch/arm64/kernel/head.S |  1 +
 arch/arm64/kernel/hibernate.c|  6 
 arch/arm64/kernel/hyp-stub.S |  1 +
 arch/arm64/kernel/insn.c |  2 ++
 arch/arm64/kernel/perf_event.c   | 19 +++--
 arch/arm64/kernel/probes/kprobes.c   |  2 ++
 arch/arm64/kernel/smp.c  |  1 +
 arch/arm64/kernel/traps.c|  2 ++
 arch/arm64/kvm/Kconfig   | 19 ++---
 arch/arm64/kvm/Makefile  | 53 
 arch/arm64/kvm/hyp/Makefile  | 22 +++
 arch/arm64/kvm/sys_regs.c|  1 +
 arch/arm64/kvm/sys_regs.h|  2 ++
 arch/arm64/kvm/sys_regs_generic_v8.c |  5 +---
 arch/arm64/kvm/va_layout.c   |  7 -
 arch/arm64/mm/cache.S|  2 ++
 arch/arm64/mm/hugetlbpage.c  |  2 ++
 arch/arm64/mm/mmu.c  |  4 +++
 drivers/clocksource/arm_arch_timer.c |  1 +
 drivers/irqchip/irq-gic-common.c |  1 +
 drivers/irqchip/irq-gic-v4.c |  8 ++
 include/kvm/arm_arch_timer.h |  1 +
 include/kvm/arm_vgic.h   |  1 +
 include/linux/interrupt.h|  6 +---
 kernel/irq/manage.c  |  6 
 mm/pgtable-generic.c |  1 +
 virt/kvm/arm/arch_timer.c| 19 +++--
 virt/kvm/arm/arm.c

[PATCH RFC 1/7] KVM: ARM: call hyp_cpu_pm_exit on correct fail and exit path

2019-10-24 Thread Shannon Zhao
Signed-off-by: Shannon Zhao 
---
 virt/kvm/arm/arm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 86c6aa1..da32c9b 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1488,6 +1488,7 @@ static int init_subsystems(void)
kvm_coproc_table_init();
 
 out:
+   hyp_cpu_pm_exit();
on_each_cpu(_kvm_arch_hardware_disable, NULL, 1);
 
return err;
@@ -1500,7 +1501,6 @@ static void teardown_hyp_mode(void)
free_hyp_pgds();
for_each_possible_cpu(cpu)
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
-   hyp_cpu_pm_exit();
 }
 
 /**
@@ -1724,6 +1724,7 @@ int kvm_arch_init(void *opaque)
 void kvm_arch_exit(void)
 {
kvm_perf_teardown();
+   hyp_cpu_pm_exit();
 }
 
 static int arm_init(void)
-- 
1.8.3.1



[PATCH RFC 6/7] KVM: arm/arm64: Move target table register into register table init function

2019-10-24 Thread Shannon Zhao
This prepares for allowing KVM arm to be compiled as a kernel module.

Signed-off-by: Shannon Zhao 
---
 arch/arm/kvm/coproc.c| 3 +++
 arch/arm/kvm/coproc.h| 3 +++
 arch/arm/kvm/coproc_a15.c| 4 +---
 arch/arm/kvm/coproc_a7.c | 4 +---
 arch/arm64/kvm/sys_regs.c| 1 +
 arch/arm64/kvm/sys_regs.h| 2 ++
 arch/arm64/kvm/sys_regs_generic_v8.c | 5 +
 7 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 07745ee..58e48b1 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -1404,6 +1404,9 @@ void kvm_coproc_table_init(void)
 {
unsigned int i;
 
+   coproc_a7_init();
+   coproc_a15_init();
+
/* Make sure tables are unique and in order. */
BUG_ON(check_reg_table(cp15_regs, ARRAY_SIZE(cp15_regs)));
BUG_ON(check_reg_table(invariant_cp15, ARRAY_SIZE(invariant_cp15)));
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index 637065b..592118c 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -127,4 +127,7 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
   const struct coproc_params *p,
   const struct coproc_reg *r);
 
+void coproc_a7_init(void);
+void coproc_a15_init(void);
+
 #endif /* __ARM_KVM_COPROC_LOCAL_H__ */
diff --git a/arch/arm/kvm/coproc_a15.c b/arch/arm/kvm/coproc_a15.c
index 36bf154..ece74b2f 100644
--- a/arch/arm/kvm/coproc_a15.c
+++ b/arch/arm/kvm/coproc_a15.c
@@ -31,9 +31,7 @@
.num = ARRAY_SIZE(a15_regs),
 };
 
-static int __init coproc_a15_init(void)
+void coproc_a15_init(void)
 {
kvm_register_target_coproc_table(&a15_target_table);
-   return 0;
 }
-late_initcall(coproc_a15_init);
diff --git a/arch/arm/kvm/coproc_a7.c b/arch/arm/kvm/coproc_a7.c
index 40f643e..74616f5 100644
--- a/arch/arm/kvm/coproc_a7.c
+++ b/arch/arm/kvm/coproc_a7.c
@@ -34,9 +34,7 @@
.num = ARRAY_SIZE(a7_regs),
 };
 
-static int __init coproc_a7_init(void)
+void coproc_a7_init(void)
 {
kvm_register_target_coproc_table(&a7_target_table);
-   return 0;
 }
-late_initcall(coproc_a7_init);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2071260..9dd164d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2738,6 +2738,7 @@ void kvm_sys_reg_table_init(void)
unsigned int i;
struct sys_reg_desc clidr;
 
+   sys_reg_genericv8_init();
/* Make sure tables are unique and in order. */
BUG_ON(check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs)));
BUG_ON(check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs)));
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9bca031..f11cb63 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -140,6 +140,8 @@ const struct sys_reg_desc *find_reg_by_id(u64 id,
  const struct sys_reg_desc table[],
  unsigned int num);
 
+void sys_reg_genericv8_init(void);
+
 #define Op0(_x).Op0 = _x
 #define Op1(_x).Op1 = _x
 #define CRn(_x).CRn = _x
diff --git a/arch/arm64/kvm/sys_regs_generic_v8.c 
b/arch/arm64/kvm/sys_regs_generic_v8.c
index 2b4a3e2..3e4bacd 100644
--- a/arch/arm64/kvm/sys_regs_generic_v8.c
+++ b/arch/arm64/kvm/sys_regs_generic_v8.c
@@ -61,7 +61,7 @@ static void reset_actlr(struct kvm_vcpu *vcpu, const struct 
sys_reg_desc *r)
},
 };
 
-static int __init sys_reg_genericv8_init(void)
+void sys_reg_genericv8_init(void)
 {
unsigned int i;
 
@@ -81,7 +81,4 @@ static int __init sys_reg_genericv8_init(void)
  &genericv8_target_table);
kvm_register_target_sys_reg_table(KVM_ARM_TARGET_GENERIC_V8,
  &genericv8_target_table);
-
-   return 0;
 }
-late_initcall(sys_reg_genericv8_init);
-- 
1.8.3.1



[PATCH RFC 7/7] KVM: ARM: Support KVM being compiled as a kernel module

2019-10-24 Thread Shannon Zhao
This patch adds support for compiling KVM ARM64 as a kernel module.
It makes CONFIG_KVM_ARM_HOST a tristate option and adds a new config
option, CONFIG_KVM_ARM_HOST_VHE_ONLY, to ensure that the kernel-module
build only supports VHE systems.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/include/asm/cache.h   | 16 ++-
 arch/arm64/include/asm/cpufeature.h  | 11 +---
 arch/arm64/include/asm/fpsimd.h  |  6 +---
 arch/arm64/include/asm/kvm_host.h|  3 --
 arch/arm64/include/asm/kvm_mmu.h |  4 +++
 arch/arm64/include/asm/perf_event.h  |  2 ++
 arch/arm64/kernel/acpi.c |  1 +
 arch/arm64/kernel/asm-offsets.c  |  2 +-
 arch/arm64/kernel/cpu_errata.c   | 15 +-
 arch/arm64/kernel/cpufeature.c   |  2 ++
 arch/arm64/kernel/cpuinfo.c  | 16 +++
 arch/arm64/kernel/entry-fpsimd.S |  2 ++
 arch/arm64/kernel/entry.S|  1 +
 arch/arm64/kernel/fpsimd.c   | 11 
 arch/arm64/kernel/head.S |  1 +
 arch/arm64/kernel/hibernate.c|  6 
 arch/arm64/kernel/hyp-stub.S |  1 +
 arch/arm64/kernel/insn.c |  2 ++
 arch/arm64/kernel/perf_event.c   | 19 +++--
 arch/arm64/kernel/probes/kprobes.c   |  2 ++
 arch/arm64/kernel/smp.c  |  1 +
 arch/arm64/kernel/traps.c|  2 ++
 arch/arm64/kvm/Kconfig   | 19 ++---
 arch/arm64/kvm/Makefile  | 53 
 arch/arm64/kvm/hyp/Makefile  | 22 +++
 arch/arm64/kvm/va_layout.c   |  7 -
 arch/arm64/mm/cache.S|  2 ++
 arch/arm64/mm/hugetlbpage.c  |  2 ++
 arch/arm64/mm/mmu.c  |  4 +++
 drivers/clocksource/arm_arch_timer.c |  1 +
 drivers/irqchip/irq-gic-common.c |  1 +
 drivers/irqchip/irq-gic-v4.c |  8 ++
 include/linux/interrupt.h|  6 +---
 kernel/irq/manage.c  |  6 
 mm/pgtable-generic.c |  1 +
 virt/kvm/arm/arm.c   | 36 ++--
 virt/kvm/arm/mmu.c   |  4 +++
 37 files changed, 215 insertions(+), 83 deletions(-)

diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 43da6dd..db79fc9 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -57,21 +57,9 @@
 
 #define ICACHEF_ALIASING   0
 #define ICACHEF_VPIPT  1
-extern unsigned long __icache_flags;
 
-/*
- * Whilst the D-side always behaves as PIPT on AArch64, aliasing is
- * permitted in the I-cache.
- */
-static inline int icache_is_aliasing(void)
-{
-   return test_bit(ICACHEF_ALIASING, &__icache_flags);
-}
-
-static inline int icache_is_vpipt(void)
-{
-   return test_bit(ICACHEF_VPIPT, &__icache_flags);
-}
+int icache_is_aliasing(void);
+int icache_is_vpipt(void);
 
 static inline u32 cache_type_cwg(void)
 {
diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index 9cde5d2..eea7215 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -625,16 +625,7 @@ static inline bool system_has_prio_mask_debugging(void)
 #define ARM64_SSBD_FORCE_ENABLE2
 #define ARM64_SSBD_MITIGATED   3
 
-static inline int arm64_get_ssbd_state(void)
-{
-#ifdef CONFIG_ARM64_SSBD
-   extern int ssbd_state;
-   return ssbd_state;
-#else
-   return ARM64_SSBD_UNKNOWN;
-#endif
-}
-
+int arm64_get_ssbd_state(void);
 void arm64_set_ssbd_mitigation(bool state);
 
 extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 59f10dd..b0e04b8 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -95,11 +95,7 @@ static inline unsigned int __bit_to_vq(unsigned int bit)
return SVE_VQ_MAX - bit;
 }
 
-/* Ensure vq >= SVE_VQ_MIN && vq <= SVE_VQ_MAX before calling this function */
-static inline bool sve_vq_available(unsigned int vq)
-{
-   return test_bit(__vq_to_bit(vq), sve_vq_map);
-}
+bool sve_vq_available(unsigned int vq);
 
 #ifdef CONFIG_ARM64_SVE
 
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index f656169..4f89322 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -582,9 +582,6 @@ static inline int kvm_arch_vcpu_run_pid_change(struct 
kvm_vcpu *vcpu)
 
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
-#else
-static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
-static inline void kvm_clr_pmu_events(u32 clr) {}
 #endif
 
 static inline void kvm_arm_vhe_guest_enter(void)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index befe37d..f67e5b5 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kv

[PATCH RFC 5/7] KVM: arch_timer: Add hyp uninitialize function

2019-10-24 Thread Shannon Zhao
When KVM arm exits, it needs to clean up the arch_timer setup performed by kvm_timer_hyp_init().

Signed-off-by: Shannon Zhao 
---
 include/kvm/arm_arch_timer.h |  1 +
 virt/kvm/arm/arch_timer.c| 13 +
 virt/kvm/arm/arm.c   |  1 +
 3 files changed, 15 insertions(+)

diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index d120e6c..3cb3a01 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -68,6 +68,7 @@ struct arch_timer_cpu {
 };
 
 int kvm_timer_hyp_init(bool);
+void kvm_timer_hyp_uninit(void);
 int kvm_timer_enable(struct kvm_vcpu *vcpu);
 int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu);
 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index f5a5d51..7dafa97 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -991,6 +991,19 @@ int kvm_timer_hyp_init(bool has_gic)
return err;
 }
 
+void kvm_timer_hyp_uninit(void)
+{
+   struct arch_timer_kvm_info *info = arch_timer_get_kvm_info();
+
+   cpuhp_remove_state(CPUHP_AP_KVM_ARM_TIMER_STARTING);
+   if (info->physical_irq > 0) {
+   on_each_cpu(disable_percpu_irq, (void *)host_ptimer_irq, 1);
+   free_percpu_irq(host_ptimer_irq, kvm_get_running_vcpus());
+   }
+   on_each_cpu(disable_percpu_irq, (void *)host_vtimer_irq, 1);
+   free_percpu_irq(host_vtimer_irq, kvm_get_running_vcpus());
+}
+
 void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
 {
struct arch_timer_cpu *timer = vcpu_timer(vcpu);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 0c60074..feb6649 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1724,6 +1724,7 @@ int kvm_arch_init(void *opaque)
 void kvm_arch_exit(void)
 {
kvm_perf_teardown();
+   kvm_timer_hyp_uninit();
kvm_vgic_hyp_uninit();
hyp_cpu_pm_exit();
 }
-- 
1.8.3.1



[PATCH RFC 2/7] KVM: arch_timer: Fix resource leak on error path

2019-10-24 Thread Shannon Zhao
We need to clean up the IRQ setup for host_vtimer_irq when
request_percpu_irq() fails for host_ptimer_irq, and to clean up the IRQ
setup for both host_vtimer_irq and host_ptimer_irq when setting the
vcpu affinity fails as well.

Fixes: 9e01dc76be6a ("KVM: arm/arm64: arch_timer: Assign the phys timer on VHE systems")
Signed-off-by: Shannon Zhao 
---
 virt/kvm/arm/arch_timer.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index e2bb5bd..f5a5d51 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -960,7 +960,7 @@ int kvm_timer_hyp_init(bool has_gic)
if (err) {
kvm_err("kvm_arch_timer: can't request ptimer interrupt %d (%d)\n",
host_ptimer_irq, err);
-   return err;
+   goto out_free_irq;
}
 
if (has_gic) {
@@ -968,7 +968,7 @@ int kvm_timer_hyp_init(bool has_gic)
kvm_get_running_vcpus());
if (err) {
kvm_err("kvm_arch_timer: error setting vcpu affinity\n");
-   goto out_free_irq;
+   goto out_free_pirq;
}
}
 
@@ -984,6 +984,8 @@ int kvm_timer_hyp_init(bool has_gic)
  "kvm/arm/timer:starting", kvm_timer_starting_cpu,
  kvm_timer_dying_cpu);
return 0;
+out_free_pirq:
+   free_percpu_irq(host_ptimer_irq, kvm_get_running_vcpus());
 out_free_irq:
free_percpu_irq(host_vtimer_irq, kvm_get_running_vcpus());
return err;
-- 
1.8.3.1



[PATCH RFC 4/7] KVM: vgic: Add hyp uninitialize function

2019-10-24 Thread Shannon Zhao
When KVM arm exits, it needs to clean up the vgic setup performed by kvm_vgic_hyp_init().

Signed-off-by: Shannon Zhao 
---
 include/kvm/arm_vgic.h| 1 +
 virt/kvm/arm/arm.c| 1 +
 virt/kvm/arm/vgic/vgic-init.c | 7 +++
 3 files changed, 9 insertions(+)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index af4f09c..7f44ebb 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -339,6 +339,7 @@ struct vgic_cpu {
 void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vgic_map_resources(struct kvm *kvm);
 int kvm_vgic_hyp_init(void);
+void kvm_vgic_hyp_uninit(void);
 void kvm_vgic_init_cpu_hardware(void);
 
 int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index da32c9b..0c60074 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1724,6 +1724,7 @@ int kvm_arch_init(void *opaque)
 void kvm_arch_exit(void)
 {
kvm_perf_teardown();
+   kvm_vgic_hyp_uninit();
hyp_cpu_pm_exit();
 }
 
diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
index 6f50c42..cd48047 100644
--- a/virt/kvm/arm/vgic/vgic-init.c
+++ b/virt/kvm/arm/vgic/vgic-init.c
@@ -550,3 +550,10 @@ int kvm_vgic_hyp_init(void)
kvm_get_running_vcpus());
return ret;
 }
+
+void kvm_vgic_hyp_uninit(void)
+{
+   cpuhp_remove_state(CPUHP_AP_KVM_ARM_VGIC_INIT_STARTING);
+   free_percpu_irq(kvm_vgic_global_state.maint_irq,
+   kvm_get_running_vcpus());
+}
-- 
1.8.3.1



Re: [PATCH] kvm: Delete the slot only when KVM_MEM_READONLY flag is changed

2018-06-12 Thread Shannon Zhao



On 2018/6/12 20:17, Paolo Bonzini wrote:
> On 16/05/2018 11:18, Shannon Zhao wrote:
>> According to KVM commit 75d61fbc, the slot needs to be deleted before
>> changing the KVM_MEM_READONLY flag. But QEMU commit 235e8982 only checks
>> whether the KVM_MEM_READONLY flag is set, not whether it is changing;
>> there is no need to delete the slot if the flag has not changed.
>>
>> This fixes an issue when migrating a VM at the OVMF startup stage while
>> the VM is executing code in ROM. Between deleting and re-adding the slot
>> in kvm_set_user_memory_region, there is a window in which the guest can
>> access the ROM and trap to KVM, and KVM then can't find the corresponding
>> memslot. KVM (on ARM) injects an abort into the guest due to the broken
>> hva, and the guest gets stuck.
>>
>> Signed-off-by: Shannon Zhao 
> 
> I'm a bit worried about old_flags not being set on all paths to
> kvm_set_user_memory_region.  This would lead to extra
> KVM_SET_USER_MEMORY_REGION calls.  It should not be a problem but
> it is ugly.  Does something like the additional changes below work for you?
> 
I tested the patch below; it works for our test case.
Do I need to fold them into one and resend?

Thanks,
-- 
Shannon

> Thanks,
> 
> Paolo
> 
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index b04f193a76..e318bcfb78 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -257,7 +257,7 @@ int kvm_physical_memory_addr_from_host(KVMState *s, void 
> *ram,
>  return 0;
>  }
>  
> -static int kvm_set_user_memory_region(KVMMemoryListener *kml, KVMSlot *slot)
> +static int kvm_set_user_memory_region(KVMMemoryListener *kml, KVMSlot *slot, 
> bool new)
>  {
>  KVMState *s = kvm_state;
>  struct kvm_userspace_memory_region mem;
> @@ -268,7 +268,7 @@ static int kvm_set_user_memory_region(KVMMemoryListener 
> *kml, KVMSlot *slot)
>  mem.userspace_addr = (unsigned long)slot->ram;
>  mem.flags = slot->flags;
>  
> -if (slot->memory_size && (mem.flags ^ slot->old_flags) & KVM_MEM_READONLY) {
> +if (slot->memory_size && !new && (mem.flags ^ slot->old_flags) & KVM_MEM_READONLY) {
>  /* Set the slot size to 0 before setting the slot to the desired
>   * value. This is needed based on KVM commit 75d61fbc. */
>  mem.memory_size = 0;
> @@ -276,6 +276,7 @@ static int kvm_set_user_memory_region(KVMMemoryListener 
> *kml, KVMSlot *slot)
>  }
>  mem.memory_size = slot->memory_size;
>  ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
> +slot->old_flags = mem.flags;
>  trace_kvm_set_user_memory(mem.slot, mem.flags, mem.guest_phys_addr,
>mem.memory_size, mem.userspace_addr, ret);
>  return ret;
> @@ -394,7 +395,6 @@ static int kvm_slot_update_flags(KVMMemoryListener *kml, 
> KVMSlot *mem,
>  {
>  int old_flags;
>  
> -mem->old_flags = mem->flags;
>  mem->flags = kvm_mem_flags(mr);
>  
>  /* If nothing changed effectively, no need to issue ioctl */
> @@ -402,7 +402,7 @@ static int kvm_slot_update_flags(KVMMemoryListener *kml, 
> KVMSlot *mem,
>  return 0;
>  }
>  
> -return kvm_set_user_memory_region(kml, mem);
> +return kvm_set_user_memory_region(kml, mem, false);
>  }
>  
>  static int kvm_section_update_flags(KVMMemoryListener *kml,
> @@ -756,7 +756,8 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
>  
>  /* unregister the slot */
>  mem->memory_size = 0;
> -err = kvm_set_user_memory_region(kml, mem);
> +mem->flags = 0;
> +err = kvm_set_user_memory_region(kml, mem, false);
>  if (err) {
>  fprintf(stderr, "%s: error unregistering slot: %s\n",
>  __func__, strerror(-err));
> @@ -772,7 +773,7 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
>  mem->ram = ram;
>  mem->flags = kvm_mem_flags(mr);
>  
> -err = kvm_set_user_memory_region(kml, mem);
> +err = kvm_set_user_memory_region(kml, mem, true);
>  if (err) {
>  fprintf(stderr, "%s: error registering slot: %s\n", __func__,
>  strerror(-err));
> 
> .
> 



Re: [PATCH] kvm: Delete the slot only when KVM_MEM_READONLY flag is changed

2018-06-11 Thread Shannon Zhao
Ping?

On 2018/5/16 17:18, Shannon Zhao wrote:
> According to KVM commit 75d61fbc, the slot needs to be deleted before
> changing the KVM_MEM_READONLY flag. But QEMU commit 235e8982 only checks
> whether the KVM_MEM_READONLY flag is set, not whether it is changing;
> there is no need to delete the slot if the flag has not changed.
> 
> This fixes an issue when migrating a VM at the OVMF startup stage while
> the VM is executing code in ROM. Between deleting and re-adding the slot
> in kvm_set_user_memory_region, there is a window in which the guest can
> access the ROM and trap to KVM, and KVM then can't find the corresponding
> memslot. KVM (on ARM) injects an abort into the guest due to the broken
> hva, and the guest gets stuck.
> 
> Signed-off-by: Shannon Zhao 
> ---
>  include/sysemu/kvm_int.h | 1 +
>  kvm-all.c| 6 +++---
>  2 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
> index 888557a..f838412 100644
> --- a/include/sysemu/kvm_int.h
> +++ b/include/sysemu/kvm_int.h
> @@ -20,6 +20,7 @@ typedef struct KVMSlot
>  void *ram;
>  int slot;
>  int flags;
> +int old_flags;
>  } KVMSlot;
>  
>  typedef struct KVMMemoryListener {
> diff --git a/kvm-all.c b/kvm-all.c
> index 2515a23..de8250e 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -252,7 +252,7 @@ static int kvm_set_user_memory_region(KVMMemoryListener 
> *kml, KVMSlot *slot)
>  mem.userspace_addr = (unsigned long)slot->ram;
>  mem.flags = slot->flags;
>  
> -if (slot->memory_size && mem.flags & KVM_MEM_READONLY) {
> +if (slot->memory_size && (mem.flags ^ slot->old_flags) & KVM_MEM_READONLY) {
>  /* Set the slot size to 0 before setting the slot to the desired
>   * value. This is needed based on KVM commit 75d61fbc. */
>  mem.memory_size = 0;
> @@ -376,11 +376,11 @@ static int kvm_slot_update_flags(KVMMemoryListener 
> *kml, KVMSlot *mem,
>  {
>  int old_flags;
>  
> -old_flags = mem->flags;
> +mem->old_flags = mem->flags;
>  mem->flags = kvm_mem_flags(mr);
>  
>  /* If nothing changed effectively, no need to issue ioctl */
> -if (mem->flags == old_flags) {
> +if (mem->flags == mem->old_flags) {
>  return 0;
>  }
>  
> 

-- 
Shannon



[PATCH] kvm: Delete the slot only when KVM_MEM_READONLY flag is changed

2018-05-16 Thread Shannon Zhao
According to KVM commit 75d61fbc, the slot needs to be deleted before
changing the KVM_MEM_READONLY flag. But QEMU commit 235e8982 only checks
whether the KVM_MEM_READONLY flag is set, not whether it is changing;
there is no need to delete the slot if the flag has not changed.

This fixes an issue when migrating a VM at the OVMF startup stage while
the VM is executing code in ROM. Between deleting and re-adding the slot
in kvm_set_user_memory_region, there is a window in which the guest can
access the ROM and trap to KVM, and KVM then can't find the corresponding
memslot. KVM (on ARM) injects an abort into the guest due to the broken
hva, and the guest gets stuck.

Signed-off-by: Shannon Zhao 
---
 include/sysemu/kvm_int.h | 1 +
 kvm-all.c| 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
index 888557a..f838412 100644
--- a/include/sysemu/kvm_int.h
+++ b/include/sysemu/kvm_int.h
@@ -20,6 +20,7 @@ typedef struct KVMSlot
 void *ram;
 int slot;
 int flags;
+int old_flags;
 } KVMSlot;
 
 typedef struct KVMMemoryListener {
diff --git a/kvm-all.c b/kvm-all.c
index 2515a23..de8250e 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -252,7 +252,7 @@ static int kvm_set_user_memory_region(KVMMemoryListener 
*kml, KVMSlot *slot)
 mem.userspace_addr = (unsigned long)slot->ram;
 mem.flags = slot->flags;
 
-if (slot->memory_size && mem.flags & KVM_MEM_READONLY) {
+if (slot->memory_size && (mem.flags ^ slot->old_flags) & KVM_MEM_READONLY) {
 /* Set the slot size to 0 before setting the slot to the desired
  * value. This is needed based on KVM commit 75d61fbc. */
 mem.memory_size = 0;
@@ -376,11 +376,11 @@ static int kvm_slot_update_flags(KVMMemoryListener *kml, 
KVMSlot *mem,
 {
 int old_flags;
 
-old_flags = mem->flags;
+mem->old_flags = mem->flags;
 mem->flags = kvm_mem_flags(mr);
 
 /* If nothing changed effectively, no need to issue ioctl */
-if (mem->flags == old_flags) {
+if (mem->flags == mem->old_flags) {
 return 0;
 }
 
-- 
2.0.4
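
To spell out the predicate above (a worked illustration, not part of the
patch): XOR-ing the new and old flags isolates the bits that changed, so the
slot is only deleted and re-created when KVM_MEM_READONLY actually toggles:

	/* Worked illustration of the flag-change test. */
	#include <stdio.h>

	#define KVM_MEM_READONLY (1u << 1)

	static int needs_delete(unsigned int flags, unsigned int old_flags)
	{
		return !!((flags ^ old_flags) & KVM_MEM_READONLY);
	}

	int main(void)
	{
		/* RO -> RO: unchanged, no delete/re-add window during migration. */
		printf("%d\n", needs_delete(KVM_MEM_READONLY, KVM_MEM_READONLY)); /* 0 */
		/* RW -> RO: changed, the slot must be deleted first (KVM 75d61fbc). */
		printf("%d\n", needs_delete(KVM_MEM_READONLY, 0)); /* 1 */
		return 0;
	}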




Re: [PATCH] KVM: arm/arm64: Close VMID generation race

2018-04-16 Thread Shannon Zhao


On 2018/4/11 9:30, Shannon Zhao wrote:
> 
> On 2018/4/10 23:37, Marc Zyngier wrote:
>> On 10/04/18 16:24, Mark Rutland wrote:
>>> On Tue, Apr 10, 2018 at 05:05:40PM +0200, Christoffer Dall wrote:
>>>> On Tue, Apr 10, 2018 at 11:51:19AM +0100, Mark Rutland wrote:
>>>>> I think we also need to update kvm->arch.vttbr before updating
>>>>> kvm->arch.vmid_gen, otherwise another CPU can come in, see that the
>>>>> vmid_gen is up-to-date, jump to hyp, and program a stale VTTBR (with
>>>>> the old VMID).
>>>>>
>>>>> With the smp_wmb() and update of kvm->arch.vmid_gen moved to the end of
>>>>> the critical section, I think that works, modulo using READ_ONCE() and
>>>>> WRITE_ONCE() to ensure single-copy-atomicity of the fields we access
>>>>> locklessly.
>>>>
>>>> Indeed, you're right. It would look something like this, then:
>>>>
>>>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>>>> index 2e43f9d42bd5..6cb08995e7ff 100644
>>>> --- a/virt/kvm/arm/arm.c
>>>> +++ b/virt/kvm/arm/arm.c
>>>> @@ -450,7 +450,9 @@ void force_vm_exit(const cpumask_t *mask)
>>>>   */
>>>>  static bool need_new_vmid_gen(struct kvm *kvm)
>>>>  {
>>>> -	return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
>>>> +	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
>>>> +	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
>>>> +	return unlikely(READ_ONCE(kvm->arch.vmid_gen) != current_vmid_gen);
>>>>  }
>>>>  
>>>>  /**
>>>> @@ -500,7 +502,6 @@ static void update_vttbr(struct kvm *kvm)
>>>>  		kvm_call_hyp(__kvm_flush_vm_context);
>>>>  	}
>>>>  
>>>> -	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
>>>>  	kvm->arch.vmid = kvm_next_vmid;
>>>>  	kvm_next_vmid++;
>>>>  	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
>>>> @@ -509,7 +510,10 @@ static void update_vttbr(struct kvm *kvm)
>>>>  	pgd_phys = virt_to_phys(kvm->arch.pgd);
>>>>  	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
>>>>  	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
>>>> -	kvm->arch.vttbr = pgd_phys | vmid;
>>>> +	WRITE_ONCE(kvm->arch.vttbr, pgd_phys | vmid);
>>>> +
>>>> +	smp_wmb(); /* Ensure vttbr update is observed before vmid_gen update */
>>>> +	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
>>>>  
>>>>  	spin_unlock(&kvm_vmid_lock);
>>>>  }
>>>
>>> I think that's right, yes.
>>>
>>> We could replace the smp_{r,w}mb() barriers with an acquire of the
>>> kvm_vmid_gen and a release of kvm->arch.vmid_gen, but if we're really
>>> trying to optimize things there are larger algorithmic changes necessary
>>> anyhow.
>>>
>>>> It's probably easier to convince ourselves about the correctness of
>>>> Marc's code using a rwlock instead, though. Thoughts?
>>>
>>> I believe that Marc's preference was the rwlock; I have no preference
>>> either way.
>>
>> I don't mind either way. If you can be bothered to write a proper commit
>> log for this, I'll take it. What I'd really want is Shannon to indicate
>> whether or not this solves the issue he was seeing.
>>
> I'll test Marc's patch. This will take about 3 days since it's not 100%
> reproducible.
Hi Marc,

I've run the test for about 4 days and the issue has not appeared.
So Tested-by: Shannon Zhao 

Thanks,
-- 
Shannon



Re: [PATCH] KVM: arm/arm64: Close VMID generation race

2018-04-10 Thread Shannon Zhao


On 2018/4/10 23:37, Marc Zyngier wrote:
> On 10/04/18 16:24, Mark Rutland wrote:
>> On Tue, Apr 10, 2018 at 05:05:40PM +0200, Christoffer Dall wrote:
>>> On Tue, Apr 10, 2018 at 11:51:19AM +0100, Mark Rutland wrote:
>>>> I think we also need to update kvm->arch.vttbr before updating
>>>> kvm->arch.vmid_gen, otherwise another CPU can come in, see that the
>>>> vmid_gen is up-to-date, jump to hyp, and program a stale VTTBR (with the
>>>> old VMID).
>>>>
>>>> With the smp_wmb() and update of kvm->arch.vmid_gen moved to the end of
>>>> the critical section, I think that works, modulo using READ_ONCE() and
>>>> WRITE_ONCE() to ensure single-copy-atomicity of the fields we access
>>>> locklessly.
>>>
>>> Indeed, you're right. It would look something like this, then:
>>>
>>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>>> index 2e43f9d42bd5..6cb08995e7ff 100644
>>> --- a/virt/kvm/arm/arm.c
>>> +++ b/virt/kvm/arm/arm.c
>>> @@ -450,7 +450,9 @@ void force_vm_exit(const cpumask_t *mask)
>>>   */
>>>  static bool need_new_vmid_gen(struct kvm *kvm)
>>>  {
>>> -   return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
>>> +   u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
>>> +   smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
>>> +   return unlikely(READ_ONCE(kvm->arch.vmid_gen) != current_vmid_gen);
>>>  }
>>>  
>>>  /**
>>> @@ -500,7 +502,6 @@ static void update_vttbr(struct kvm *kvm)
>>> kvm_call_hyp(__kvm_flush_vm_context);
>>> }
>>>  
>>> -   kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
>>> kvm->arch.vmid = kvm_next_vmid;
>>> kvm_next_vmid++;
>>> kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
>>> @@ -509,7 +510,10 @@ static void update_vttbr(struct kvm *kvm)
>>> pgd_phys = virt_to_phys(kvm->arch.pgd);
>>> BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
>>> 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
>>> -   kvm->arch.vttbr = pgd_phys | vmid;
>>> +   WRITE_ONCE(kvm->arch.vttbr, pgd_phys | vmid);
>>> +
>>> +   smp_wmb(); /* Ensure vttbr update is observed before vmid_gen update */
>>> +   kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
>>>  
>>> spin_unlock(&kvm_vmid_lock);
>>>  }
>>
>> I think that's right, yes.
>>
>> We could replace the smp_{r,w}mb() barriers with an acquire of the
>> kvm_vmid_gen and a release of kvm->arch.vmid_gen, but if we're really
>> trying to optimize things there are larger algorithmic changes necessary
>> anyhow.
>>
>>> It's probably easier to convince ourselves about the correctness of
>>> Marc's code using a rwlock instead, though.  Thoughts?
>>
>> I believe that Marc's preference was the rwlock; I have no preference
>> either way.
> 
> I don't mind either way. If you can be bothered to write a proper commit
> log for this, I'll take it. What I'd really want is Shannon to indicate
> whether or not this solves the issue he was seeing.
> 
I'll test Marc's patch. This will take about 3 days since it's not 100%
reproducible.
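
For reference, the acquire/release variant Mark mentioned would look roughly
like this (an untested sketch against the same patch context, assuming
atomic64_read_acquire() is available; the barriers play the same roles as the
comments in Christoffer's diff above):

	static bool need_new_vmid_gen(struct kvm *kvm)
	{
		/* Acquire: orders this read before the read of kvm->arch.vmid_gen. */
		u64 current_vmid_gen = atomic64_read_acquire(&kvm_vmid_gen);

		return unlikely(READ_ONCE(kvm->arch.vmid_gen) != current_vmid_gen);
	}

and in update_vttbr():

		WRITE_ONCE(kvm->arch.vttbr, pgd_phys | vmid);
		/* Release: ensures the vttbr update is observed before vmid_gen. */
		smp_store_release(&kvm->arch.vmid_gen, atomic64_read(&kvm_vmid_gen));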

Thanks,
-- 
Shannon



Re: [PATCH] KVM: ARM: update the VMID generation logic

2018-03-30 Thread Shannon Zhao


On 2018/3/30 18:48, Marc Zyngier wrote:
> On Fri, 30 Mar 2018 17:52:07 +0800
> Shannon Zhao  wrote:
> 
>>
>>
>> On 2018/3/30 17:01, Marc Zyngier wrote:
>>> On Fri, 30 Mar 2018 09:56:10 +0800
>>> Shannon Zhao  wrote:
>>>
>>>> On 2018/3/30 0:48, Marc Zyngier wrote:
>>>>> On Thu, 29 Mar 2018 16:27:58 +0100,
>>>>> Mark Rutland wrote:  
>>>>>>
>>>>>> On Thu, Mar 29, 2018 at 11:00:24PM +0800, Shannon Zhao wrote:  
>>>>>>> From: zhaoshenglong 
>>>>>>>
>>>>>>> Currently the VMID for a VM is allocated in the VCPU entry/exit
>>>>>>> context and is updated when kvm_next_vmid wraps around, which forces
>>>>>>> the existing VMs to exit the guest and flush the TLB and icache.
>>>>>>>
>>>>>>> Also, while a platform with an 8-bit VMID supports 255 VMs, more than
>>>>>>> 255 VMs can be created, and if we create e.g. 256 VMs, some VMs will
>>>>>>> hit page faults since at some moment two VMs have the same VMID.
>>>>>>
>>>>>> Have you seen this happen?
>>>>>>  
>>>> Yes, we've started 256 VMs on D05. We saw kernel page faults in some guests.
>>>
>>> What kind of fault? Kernel configuration? Can you please share some
>>> traces with us? What is the workload? What happens if all the guests are
>>> running on the same NUMA node?
>>>
>>> We need all the information we can get.
>>>
>> All 256 VMs run without any special workload. The test case just starts
>> 256 VMs and then shuts them down. We found that several VMs would not
>> shut down because the guest kernel crashed, whereas if we only start
>> 255 VMs it works well.
>>
>> We didn't run the testcase that pins all VMs to the same NUMA node. I'll
>> try.
>>
>> The fault is
>> [ 2204.633871] Unable to handle kernel NULL pointer dereference at
>> virtual address 0008
>> [ 2204.633875] Unable to handle kernel paging request at virtual address
>> a57f4a9095032
>>
>> Please see the attachment for the detailed log.
> 
> Thanks. It looks pretty ugly indeed.
> 
> Can you please share your host kernel config (and version number -- I
> really hope the host is something more recent than the 4.1.44 stuff you
> run as a guest...)?
> 
We do run a 4.1.44 host kernel, but with a more recent KVM module (at
least 4.14), since we backport upstream KVM ARM patches to our kernel tree.

See the attachment for the kernel config.

> For the record, I'm currently running 5 concurrent Debian installs,
> each with 2 vcpus, on a 4 CPU system artificially configured to have
> only 2 bits of VMID (and thus at most 3 running VMs at any given time),
> a setup that is quite similar to what you're doing, only on a smaller
> scale.
> 
> It is pretty slow (as you'd expect), but so far I haven't seen any
> issue.
> 
Could you try shutting down all the VMs at the same time? The issue we
encountered happened during the shutdown step.

Thanks,
-- 
Shannon
#
# Automatically generated file; DO NOT EDIT.
# Linux/arm64 4.1.44-04.79.vhulk1711.1.1.aarch64 Kernel Configuration
#
CONFIG_ARM64=y
CONFIG_64BIT=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_MMU=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_CSUM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ZONE_DMA=y
CONFIG_HAVE_GENERIC_RCU_GUP=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_KERNEL_MODE_NEON=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_PGTABLE_LEVELS=4
CONFIG_ARM64_INDIRECT_PIO=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION=""
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_FHANDLE=y
CONFIG_USELIB=y
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SH

Re: [PATCH] KVM: ARM: update the VMID generation logic

2018-03-30 Thread Shannon Zhao


On 2018/3/30 17:01, Marc Zyngier wrote:
> On Fri, 30 Mar 2018 09:56:10 +0800
> Shannon Zhao  wrote:
> 
>> On 2018/3/30 0:48, Marc Zyngier wrote:
>>> On Thu, 29 Mar 2018 16:27:58 +0100,
>>> Mark Rutland wrote:  
>>>>
>>>> On Thu, Mar 29, 2018 at 11:00:24PM +0800, Shannon Zhao wrote:  
>>>>> From: zhaoshenglong 
>>>>>
>>>>> Currently the VMID for a VM is allocated in the VCPU entry/exit
>>>>> context and is updated when kvm_next_vmid wraps around, which forces
>>>>> the existing VMs to exit the guest and flush the TLB and icache.
>>>>>
>>>>> Also, while a platform with an 8-bit VMID supports 255 VMs, more than
>>>>> 255 VMs can be created, and if we create e.g. 256 VMs, some VMs will
>>>>> hit page faults since at some moment two VMs have the same VMID.
>>>>
>>>> Have you seen this happen?
>>>>  
>> Yes, we've started 256 VMs on D05. We saw kernel page faults in some guests.
> 
> What kind of fault? Kernel configuration? Can you please share some
> traces with us? What is the workload? What happens if all the guests are
> running on the same NUMA node?
> 
> We need all the information we can get.
> 
All 256 VMs run without any special workload. The test case just starts
256 VMs and then shuts them down. We found that several VMs would not
shut down because the guest kernel crashed, whereas if we only start
255 VMs it works well.

We didn't run the testcase that pins all VMs to the same NUMA node. I'll
try.

The fault is
[ 2204.633871] Unable to handle kernel NULL pointer dereference at
virtual address 0008
[ 2204.633875] Unable to handle kernel paging request at virtual address
a57f4a9095032

Please see the attachment for the detailed log.

>>
>>>> I believe that update_vttbr() should prevent this. We initialize
>>>> kvm_vmid_gen to 1, and when we init a VM, we set its vmid_gen to 0. So
>>>> the first time a VM is scheduled, update_vttbr() will allocate a VMID,
>>>> and by construction we shouldn't be able to allocate the same VMID to
>>>> multiple active VMs, regardless of whether we overflow several
>>>> times.  
>>>
>>> I remember testing that exact scenario when we implemented the VMID
>>> rollover a (long) while back. Maybe we've introduced a regression, but
>>> we're supposed to support 255 VMs running at the same time (which is
>>> not the same as having 255 VMs in total).
>>>
>>> Shannon: if you have observed such regression, please let us know.
>>>   
>> The current approach could allow more than 255 VMs to run at the same time.
> 
> How??? By definition, it can't.
> 
>> It doesn't prevent extra VMs from being created. So at some moment,
>> when there are more than 255 VMs, two VMs can race and end up with the
>> same VMID.
> 
> Creating additional VMs is not an issue as long as we properly:
> 
> 1) Get a new generation number
> 2) Stop all the guests
> 3) invalidate all TLBs
> 
> The above should prevent the reuse of a VMID, because all the running
> guests have a different generation number, and thus will grab a new one.
> 
> If you see two guests with the same VMID, then we have a bug in that
> logic somewhere.
> 
>>>>  
>>>>> This patch uses the bitmap to record which VMID used and available.
>>>>> Initialize the VMID and vttbr during creating the VM instead of VCPU
>>>>> entry/exit context. Also it will return error to user space if it wants
>>>>> to create VMs more than the supporting number.  
>>>>
>>>> This creates a functional regression for anyone creating a large number
>>>> of VMs.  
>>>
>>> Indeed, and I'm not buying that approach at all. As I said above, the
>>> intent is that we can have up to 2^VMID_SIZE-1 VMs running at the same
>>> time, and *any* number of VMs in the system.
>>>   
>> I think it should not allow more than 255 VMs to be created, since with
>> 256 VMs the VMs will not run properly and will fall into a loop updating
>> the VMID.
> 
> I think you're wrong.
> 
> You're trying to paper over a bug. The VMID allocation is designed to
> deal with an *infinite* number of VMs, with at most 255 of them running
> at any given time. Are you also planning to limit the number of
> processes to the ASID capacity? Because that's the exact same problem.
> 
> Let's get down to the bottom of the problem instead.
> 
>>
>>>> If VMID overflow is a real b

Re: [PATCH] KVM: ARM: update the VMID generation logic

2018-03-29 Thread Shannon Zhao


On 2018/3/30 0:48, Marc Zyngier wrote:
> On Thu, 29 Mar 2018 16:27:58 +0100,
> Mark Rutland wrote:
>>
>> On Thu, Mar 29, 2018 at 11:00:24PM +0800, Shannon Zhao wrote:
>>> From: zhaoshenglong 
>>>
>>> Currently the VMID for a VM is allocated in the VCPU entry/exit
>>> context and is updated when kvm_next_vmid wraps around, which forces
>>> the existing VMs to exit the guest and flush the TLB and icache.
>>>
>>> Also, while a platform with an 8-bit VMID supports 255 VMs, more than
>>> 255 VMs can be created, and if we create e.g. 256 VMs, some VMs will
>>> hit page faults since at some moment two VMs have the same VMID.
>>
>> Have you seen this happen?
>>
Yes, we've started 256 VMs on D05. We saw kernel page faults in some guests.

>> I believe that update_vttbr() should prevent this. We initialize
>> kvm_vmid_gen to 1, and when we init a VM, we set its vmid_gen to 0. So
>> the first time a VM is scheduled, update_vttbr() will allocate a VMID,
>> and by construction we shouldn't be able to allocate the same VMID to
>> multiple active VMs, regardless of whether we overflow several
>> times.
> 
> I remember testing that exact scenario when we implemented the VMID
> rollover a (long) while back. Maybe we've introduced a regression, but
> we're supposed to support 255 VMs running at the same time (which is
> not the same as having 255 VMs in total).
> 
> Shannon: if you have observed such regression, please let us know.
> 
The current approach could allow more than 255 VMs to run at the same
time. It doesn't prevent extra VMs from being created, so at some moment,
when there are more than 255 VMs, two VMs can race and end up with the
same VMID.
>>
>>> This patch uses the bitmap to record which VMID used and available.
>>> Initialize the VMID and vttbr during creating the VM instead of VCPU
>>> entry/exit context. Also it will return error to user space if it wants
>>> to create VMs more than the supporting number.
>>
>> This creates a functional regression for anyone creating a large number
>> of VMs.
> 
> Indeed, and I'm not buying that approach at all. As I said above, the
> intent is that we can have up to 2^VMID_SIZE-1 VMs running at the same
> time, and *any* number of VMs in the system.
> 
I think it should not allow more than 255 VMs to be created, since with
256 VMs the VMs will not run properly and will fall into a loop updating
the VMID.

>> If VMID overflow is a real bottleneck, it would be vastly better to
>> improve the VMID allocator along the lines of the arm64 ASID allocator,
>> so that upon overflow we reserve the set of active VMIDs (and therefore
>> avoid expensive TLB + icache maintenance). That does not require a
>> global limit on the number of VMs.
> 
> +1.
> 
I'll look at the ASID allocator approach.
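
For the record, the core idea of the ASID-style allocator Mark refers to
(a very rough, untested sketch; all names are illustrative, modelled on
arch/arm64/mm/context.c) is that a rollover re-reserves the VMIDs of the
currently running VMs instead of stopping every VM:

	static DECLARE_BITMAP(vmid_map, 1 << 8);	/* 8-bit VMIDs assumed */
	static DEFINE_PER_CPU(u64, active_vmid);

	/* Called on generation rollover, with the VMID lock held. */
	static void flush_vmid_context(void)
	{
		int cpu;

		bitmap_zero(vmid_map, 1 << 8);
		/*
		 * Re-reserve the VMIDs in active use, so running VMs keep
		 * their VMID across the rollover instead of being stopped.
		 */
		for_each_possible_cpu(cpu)
			__set_bit(per_cpu(active_vmid, cpu) & 0xff, vmid_map);
		/* One broadcast TLB invalidation covers everything else. */
		kvm_call_hyp(__kvm_flush_vm_context);
	}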

Thanks,
-- 
Shannon



[PATCH] KVM: ARM: update the VMID generation logic

2018-03-29 Thread Shannon Zhao
From: zhaoshenglong 

Currently the VMID for a VM is allocated in the VCPU entry/exit
context and is updated when kvm_next_vmid wraps around, which forces
the existing VMs to exit the guest and flush the TLB and icache.

Also, while a platform with an 8-bit VMID supports 255 VMs, more than
255 VMs can be created, and if we create e.g. 256 VMs, some VMs will
hit page faults since at some moment two VMs have the same VMID.

This patch uses a bitmap to record which VMIDs are in use and which are
available. The VMID and VTTBR are initialized when the VM is created
instead of in the VCPU entry/exit context. An error is also returned to
user space if it tries to create more VMs than the supported number.

Signed-off-by: zhaoshenglong 
---
 arch/arm/include/asm/kvm_asm.h|   1 -
 arch/arm/include/asm/kvm_host.h   |   1 -
 arch/arm/kvm/hyp/tlb.c|   7 --
 arch/arm64/include/asm/kvm_asm.h  |   1 -
 arch/arm64/include/asm/kvm_host.h |   1 -
 arch/arm64/kvm/hyp/tlb.c  |   8 --
 virt/kvm/arm/arm.c| 150 ++
 7 files changed, 54 insertions(+), 115 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 36dd296..a0b7fa6 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -63,7 +63,6 @@ extern char __kvm_hyp_init_end[];
 
 extern char __kvm_hyp_vector[];
 
-extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 248b930..4e340e2 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -214,7 +214,6 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 
__user *indices);
 int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 unsigned long kvm_call_hyp(void *hypfn, ...);
-void force_vm_exit(const cpumask_t *mask);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
index c0edd45..28dca58 100644
--- a/arch/arm/kvm/hyp/tlb.c
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -70,10 +70,3 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu 
*vcpu)
 
write_sysreg(0, VTTBR);
 }
-
-void __hyp_text __kvm_flush_vm_context(void)
-{
-   write_sysreg(0, TLBIALLNSNHIS);
-   write_sysreg(0, ICIALLUIS);
-   dsb(ish);
-}
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 24961b7..53cc97b 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -50,7 +50,6 @@ extern char __kvm_hyp_init_end[];
 
 extern char __kvm_hyp_vector[];
 
-extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 596f8e4..2adbdbd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -345,7 +345,6 @@ void kvm_arm_resume_guest(struct kvm *kvm);
 u64 __kvm_call_hyp(void *hypfn, ...);
 #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
 
-void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 131c777..41ac624 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -148,11 +148,3 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu 
*vcpu)
 
__tlb_switch_to_host()(kvm);
 }
-
-void __hyp_text __kvm_flush_vm_context(void)
-{
-   dsb(ishst);
-   __tlbi(alle1is);
-   asm volatile("ic ialluis" : : );
-   dsb(ish);
-}
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 5357230..fa8cbd7 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -60,8 +60,8 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
 
 /* The VMID used in the VTTBR */
-static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
-static u32 kvm_next_vmid;
+static void *kvm_vmid_bitmap;
+static unsigned int kvm_vmid_bitmap_bits;
 static unsigned int kvm_vmid_bits __read_mostly;
 static DEFINE_SPINLOCK(kvm_vmid_lock);
 
@@ -108,6 +108,40 @@ void kvm_arch_check_processor_compat(void *rtn)
*(int *)rtn = 0;
 }
 
+static int kvm_arm_init_vmid(struct kvm *kvm)
+{
+   phys_addr_t pgd_phys;
+   u64 vmid;
+
+   spin_lock(&kvm_vmid_lock);
+
+   vmid = find_first_zero_bit(kvm_vmid_bitmap, k

Re: [PATCH v3 51/59] KVM: arm/arm64: GICv4: Add doorbell interrupt handling

2017-09-06 Thread Shannon Zhao


On 2017/8/1 1:26, Marc Zyngier wrote:
> When a vPE is not running, a VLPI being made pending results in a
> doorbell interrupt being delivered. Let's handle this interrupt
> and update the pending_last flag that indicates that VLPIs are
> pending. The corresponding vcpu is also kicked into action.
> 
> Signed-off-by: Marc Zyngier 
> ---
>  virt/kvm/arm/vgic/vgic-v4.c | 34 ++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/virt/kvm/arm/vgic/vgic-v4.c b/virt/kvm/arm/vgic/vgic-v4.c
> index 534d3051a078..6af3cde6d7d4 100644
> --- a/virt/kvm/arm/vgic/vgic-v4.c
> +++ b/virt/kvm/arm/vgic/vgic-v4.c
> @@ -21,6 +21,19 @@
>  
>  #include "vgic.h"
>  
> +static irqreturn_t vgic_v4_doorbell_handler(int irq, void *info)
> +{
> + struct kvm_vcpu *vcpu = info;
> +
> + if (!kvm_vgic_vcpu_pending_irq(vcpu)) {
> + vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last = true;
> + kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
> + kvm_vcpu_kick(vcpu);
> + }
> +
> + return IRQ_HANDLED;
> +}
> +
>  int vgic_v4_init(struct kvm *kvm)
>  {
>   struct vgic_dist *dist = &kvm->arch.vgic;
> @@ -57,16 +70,37 @@ int vgic_v4_init(struct kvm *kvm)
>   return ret;
>   }
>  
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + int irq = dist->its_vm.vpes[i]->irq;
> +
> + ret = request_irq(irq, vgic_v4_doorbell_handler,
> +   0, "vcpu", vcpu);
> + if (ret) {
> + kvm_err("failed to allocate vcpu IRQ%d\n", irq);
> + dist->its_vm.nr_vpes = i;
This overwrites nr_vpes, while its_alloc_vcpu_irqs() uses
kvm->online_vcpus to allocate the IRQs; if this fails,
its_free_vcpu_irqs() then uses the overwritten nr_vpes rather than
kvm->online_vcpus to free them. So there will be a memory leak on the
error path.
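
Something like the below would avoid that (untested sketch;
nr_doorbells is a made-up field used only for illustration):

	kvm_for_each_vcpu(i, vcpu, kvm) {
		int irq = dist->its_vm.vpes[i]->irq;

		ret = request_irq(irq, vgic_v4_doorbell_handler,
				  0, "vcpu", vcpu);
		if (ret) {
			kvm_err("failed to allocate vcpu IRQ%d\n", irq);
			break;
		}
		/* Only count doorbells we actually requested. */
		dist->its_vm.nr_doorbells = i + 1;
	}

and then in vgic_v4_teardown(), free exactly those, while
its_free_vcpu_irqs() still sees the full nr_vpes it allocated with:

	for (i = 0; i < its_vm->nr_doorbells; i++)
		free_irq(its_vm->vpes[i]->irq, kvm_get_vcpu(kvm, i));
	its_free_vcpu_irqs(its_vm);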

> + break;
> + }
> + }
> +
> + if (ret)
> + vgic_v4_teardown(kvm);
> +
>   return ret;
>  }
>  
>  void vgic_v4_teardown(struct kvm *kvm)
>  {
>   struct its_vm *its_vm = &kvm->arch.vgic.its_vm;
> + int i;
>  
>   if (!its_vm->vpes)
>   return;
>  
> + for (i = 0; i < its_vm->nr_vpes; i++) {
> + struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, i);
> + free_irq(its_vm->vpes[i]->irq, vcpu);
> + }
> +
>   its_free_vcpu_irqs(its_vm);
>   kfree(its_vm->vpes);
>   its_vm->nr_vpes = 0;
> 

Thanks,
-- 
Shannon

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v11 2/6] ACPI: Add APEI GHES Table Generation support

2017-08-25 Thread Shannon Zhao


On 2017/8/25 19:20, gengdongjiu wrote:
>>> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
>>> >> index 3d78ff6..def1ec1 100644
>>> >> --- a/hw/arm/virt-acpi-build.c
>>> >> +++ b/hw/arm/virt-acpi-build.c
>>> >> @@ -45,6 +45,7 @@
>>> >>  #include "hw/arm/virt.h"
>>> >>  #include "sysemu/numa.h"
>>> >>  #include "kvm_arm.h"
>>> >> +#include "hw/acpi/hest_ghes.h"
>>> >>  
>>> >>  #define ARM_SPI_BASE 32
>>> >>  #define ACPI_POWER_BUTTON_DEVICE "PWRB"
>>> >> @@ -771,6 +772,9 @@ void virt_acpi_build(VirtMachineState *vms, 
>>> >> AcpiBuildTables *tables)
>>> >>  acpi_add_table(table_offsets, tables_blob);
>>> >>  build_spcr(tables_blob, tables->linker, vms);
>>> >>  
>>> >> +acpi_add_table(table_offsets, tables_blob);
>>> >> +ghes_build_acpi(tables_blob, tables->hardware_errors, 
>>> >> tables->linker);
>>> >> +
>> > So we add this table unconditionally. Is there any bad impact if QEMU
>> > runs on old kvm? Does it need to check whether KVM supports RAS?
> this table is added before the guest OS boots, so we cannot use KVM to check it.
No, we can check the RAS capability when we create the VCPUs, as you did
in another patch, and use that in the table generation.
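
E.g. something along these lines in virt_acpi_build() (sketch only;
KVM_CAP_ARM_RAS stands in for whatever capability name the KVM side
ends up exposing):

    /* Only emit the HEST/GHES tables when the kernel can actually
     * deliver the corresponding error events.
     */
    if (kvm_check_extension(kvm_state, KVM_CAP_ARM_RAS)) {
        acpi_add_table(table_offsets, tables_blob);
        ghes_build_acpi(tables_blob, tables->hardware_errors,
                        tables->linker);
    }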

> if the old kvm does not support RAS, there is no bad impact; it only
> wastes table memory.
> Maybe we can make it a device? If this device is enabled in the qemu
> boot parameters, then we add this table?
> 

And you need to add an option to the virt machine for (migration)
compatibility: on new virt machine types it is on by default, while it
is off for the old ones.
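
The usual pattern is a flag in VirtMachineClass that the older machine
types set (sketch modeled on the existing no_its handling; the no_ras
name and the version numbers here are made up):

    static void virt_machine_2_9_options(MachineClass *mc)
    {
        VirtMachineClass *vmc = VIRT_MACHINE_CLASS(OBJECT_CLASS(mc));

        virt_machine_2_10_options(mc);
        SET_MACHINE_COMPAT(mc, VIRT_COMPAT_2_9);
        vmc->no_ras = true;    /* old machine types keep the old tables */
    }

virt_acpi_build() then checks !vmc->no_ras before adding the HEST/GHES
tables.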

Thanks,
-- 
Shannon

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v11 1/6] ACPI: add APEI/HEST/CPER structures and macros

2017-08-25 Thread Shannon Zhao


On 2017/8/25 18:37, gengdongjiu wrote:
>>> +
>>> >> +/* From the ACPI 6.1 spec, "18.3.2.9 Hardware Error Notification" */
>>> >> +
>> > It's better to refer to the first spec version of this structure and
>> > same with others you define.
> Which spec version do you mean? The definition is aligned with the
> linux kernel.
What I mean here is that it's better to refer to the ACPI spec version
that introduced the Hardware Error Notification structure.

>> > 
>>> >> +enum AcpiHestNotifyType {
>>> >> +ACPI_HEST_NOTIFY_POLLED = 0,
>>> >> +ACPI_HEST_NOTIFY_EXTERNAL = 1,
>>> >> +ACPI_HEST_NOTIFY_LOCAL = 2,
>>> >> +ACPI_HEST_NOTIFY_SCI = 3,
>>> >> +ACPI_HEST_NOTIFY_NMI = 4,
>>> >> +ACPI_HEST_NOTIFY_CMCI = 5,  /* ACPI 5.0 */
>>> >> +ACPI_HEST_NOTIFY_MCE = 6,   /* ACPI 5.0 */
>>> >> +ACPI_HEST_NOTIFY_GPIO = 7,  /* ACPI 6.0 */
>>> >> +ACPI_HEST_NOTIFY_SEA = 8,   /* ACPI 6.1 */
>>> >> +ACPI_HEST_NOTIFY_SEI = 9,   /* ACPI 6.1 */
>>> >> +ACPI_HEST_NOTIFY_GSIV = 10, /* ACPI 6.1 */
>>> >> +ACPI_HEST_NOTIFY_RESERVED = 11  /* 11 and greater are reserved */
>> > In ACPI 6.2, 11 is for Software Delegated Exception, is this useful for
>> > your patchset?
>   it is useful; I reserved the space for all the error sources.
> Because the space is allocated once, not dynamically,
> I use ACPI_HEST_NOTIFY_RESERVED to specify that there are 11 error
> sources.
> 
I mean whether the new type Software Delegated Exception is useful for
RAS. If so, we could add this new type here.

Thanks,
-- 
Shannon

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v11 3/6] ACPI: build and enable APEI GHES in the Makefile and configuration

2017-08-24 Thread Shannon Zhao


On 2017/8/18 22:23, Dongjiu Geng wrote:
> Add CONFIG_ACPI_APEI configuration in the Makefile and
> enable it in the arm-softmmu.mak
> 
> Signed-off-by: Dongjiu Geng 
> ---
>  default-configs/arm-softmmu.mak | 1 +
>  hw/acpi/Makefile.objs   | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
> index bbdd3c1..c362113 100644
> --- a/default-configs/arm-softmmu.mak
> +++ b/default-configs/arm-softmmu.mak
> @@ -129,3 +129,4 @@ CONFIG_ACPI=y
>  CONFIG_SMBIOS=y
>  CONFIG_ASPEED_SOC=y
>  CONFIG_GPIO_KEY=y
> +CONFIG_ACPI_APEI=y
> diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
> index 11c35bc..bafb148 100644
> --- a/hw/acpi/Makefile.objs
> +++ b/hw/acpi/Makefile.objs
> @@ -6,6 +6,7 @@ common-obj-$(CONFIG_ACPI_MEMORY_HOTPLUG) += memory_hotplug.o
>  common-obj-$(CONFIG_ACPI_CPU_HOTPLUG) += cpu.o
>  common-obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
>  common-obj-$(CONFIG_ACPI_VMGENID) += vmgenid.o
> +common-obj-$(CONFIG_ACPI_APEI) += hest_ghes.o
>  common-obj-$(call lnot,$(CONFIG_ACPI_X86)) += acpi-stub.o
>  
>  common-obj-y += acpi_interface.o
> 
Fold this patch into the previous one.

Thanks,
-- 
Shannon

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v11 2/6] ACPI: Add APEI GHES Table Generation support

2017-08-24 Thread Shannon Zhao


On 2017/8/18 22:23, Dongjiu Geng wrote:
> This implements APEI GHES Table by passing the error CPER info
> to the guest via a fw_cfg_blob. After a CPER info is recorded, an
> SEA(Synchronous External Abort)/SEI(SError Interrupt) exception
> will be injected into the guest OS.
> 
> Below is the table layout; the maximum number of error sources is 11,
> classified by notification type.
> 
> [Layout diagram: etc/acpi/tables holds the HEST with entries
> GHES0..GHES10. Each GHES entry's error_status_address points at one of
> the status_address0..10 registers in the etc/hardware_errors blob, and
> each read_ack_register points at the matching ack_value0..10 register.
> Each status_address register in turn points at one of the Error Status
> Data Blocks 0..10, which hold the CPER records.]
> 
> For a GHESv2 error source, the OSPM must acknowledge the error via the
> Read Ack register, so user space must check the ack value to avoid a
> read-write race condition.
> 
> Signed-off-by: Dongjiu Geng 
> ---
>  hw/acpi/aml-build.c |   2 +
>  hw/acpi/hest_ghes.c | 345 
> 
>  hw/arm/virt-acpi-build.c|   6 +
>  include/hw/acpi/aml-build.h |   1 +
>  include/hw/acpi/hest_ghes.h |  47 ++
>  5 files changed, 401 insertions(+)
>  create mode 100644 hw/acpi/hest_ghes.c
Don't need to add the new file to hw/acpi/Makefile.objs?

>  create mode 100644 include/hw/acpi/hest_ghes.h
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 36a6cc4..6849e5f 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -1561,6 +1561,7 @@ void acpi_build_tables_init(AcpiBuildTables *tables)
>  tables->table_data = g_array_new(false, true /* clear */, 1);
>  tables->tcpalog = g_array_new(false, true /* clear */, 1);
>  tables->vmgenid = g_array_new(false, true /* clear */, 1);
> +tables->hardware_errors = g_array_new(false, true /* clear */, 1);
>  tables->linker = bios_linker_loader_init();
>  }
>  
> @@ -1571,6 +1572,7 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, 
> bool mfre)
>  g_array_free(tables->table_data, true);
>  g_array_free(tables->tcpalog, mfre);
>  g_array_free(tables->vmgenid, mfre);
> +g_array_free(tables->hardware_errors, mfre);
>  }
>  
>  /* Build rsdt table */
> diff --git a/hw/acpi/hest_ghes.c b/hw/acpi/hest_ghes.c
> new f

Re: [PATCH v11 1/6] ACPI: add APEI/HEST/CPER structures and macros

2017-08-24 Thread Shannon Zhao


On 2017/8/18 22:23, Dongjiu Geng wrote:
> (1) Add the related APEI/HEST table structures and macros; these
> definitions refer to the ACPI 6.1 and UEFI 2.6 specs.
> (2) Add the generic error status block and CPER memory section
> definitions; user space only handles memory section errors.
> 
> Signed-off-by: Dongjiu Geng 
> ---
>  include/hw/acpi/acpi-defs.h | 193 
> 
>  1 file changed, 193 insertions(+)
> 
> diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
> index 72be675..3b4bad7 100644
> --- a/include/hw/acpi/acpi-defs.h
> +++ b/include/hw/acpi/acpi-defs.h
> @@ -297,6 +297,44 @@ typedef struct AcpiMultipleApicTable 
> AcpiMultipleApicTable;
>  #define ACPI_APIC_GENERIC_TRANSLATOR15
>  #define ACPI_APIC_RESERVED  16   /* 16 and greater are reserved 
> */
>  
> +/* UEFI Spec 2.6, "N.2.5 Memory Error Section */
missing "

> +#define UEFI_CPER_MEM_VALID_ERROR_STATUS 0x0001
> +#define UEFI_CPER_MEM_VALID_PA   0x0002
> +#define UEFI_CPER_MEM_VALID_PA_MASK  0x0004
> +#define UEFI_CPER_MEM_VALID_NODE 0x0008
> +#define UEFI_CPER_MEM_VALID_CARD 0x0010
> +#define UEFI_CPER_MEM_VALID_MODULE   0x0020
> +#define UEFI_CPER_MEM_VALID_BANK 0x0040
> +#define UEFI_CPER_MEM_VALID_DEVICE   0x0080
> +#define UEFI_CPER_MEM_VALID_ROW  0x0100
> +#define UEFI_CPER_MEM_VALID_COLUMN   0x0200
> +#define UEFI_CPER_MEM_VALID_BIT_POSITION 0x0400
> +#define UEFI_CPER_MEM_VALID_REQUESTOR0x0800
> +#define UEFI_CPER_MEM_VALID_RESPONDER0x1000
> +#define UEFI_CPER_MEM_VALID_TARGET   0x2000
> +#define UEFI_CPER_MEM_VALID_ERROR_TYPE   0x4000
> +#define UEFI_CPER_MEM_VALID_RANK_NUMBER  0x8000
> +#define UEFI_CPER_MEM_VALID_CARD_HANDLE  0x1
> +#define UEFI_CPER_MEM_VALID_MODULE_HANDLE0x2
> +#define UEFI_CPER_MEM_ERROR_TYPE_MULTI_ECC   3
> +
> +/* From the ACPI 6.1 spec, "18.3.2.9 Hardware Error Notification" */
> +
It's better to refer to the first spec version that defines this
structure, and the same for the others you define.

> +enum AcpiHestNotifyType {
> +ACPI_HEST_NOTIFY_POLLED = 0,
> +ACPI_HEST_NOTIFY_EXTERNAL = 1,
> +ACPI_HEST_NOTIFY_LOCAL = 2,
> +ACPI_HEST_NOTIFY_SCI = 3,
> +ACPI_HEST_NOTIFY_NMI = 4,
> +ACPI_HEST_NOTIFY_CMCI = 5,  /* ACPI 5.0 */
> +ACPI_HEST_NOTIFY_MCE = 6,   /* ACPI 5.0 */
> +ACPI_HEST_NOTIFY_GPIO = 7,  /* ACPI 6.0 */
> +ACPI_HEST_NOTIFY_SEA = 8,   /* ACPI 6.1 */
> +ACPI_HEST_NOTIFY_SEI = 9,   /* ACPI 6.1 */
> +ACPI_HEST_NOTIFY_GSIV = 10, /* ACPI 6.1 */
> +ACPI_HEST_NOTIFY_RESERVED = 11  /* 11 and greater are reserved */
In ACPI 6.2, 11 is for Software Delegated Exception, is this useful for
your patchset?

> +};
> +
>  /*
>   * MADT sub-structures (Follow MULTIPLE_APIC_DESCRIPTION_TABLE)
>   */
> @@ -474,6 +512,161 @@ struct AcpiSystemResourceAffinityTable {
>  } QEMU_PACKED;
>  typedef struct AcpiSystemResourceAffinityTable 
> AcpiSystemResourceAffinityTable;
>  
> +/* Hardware Error Notification, from the ACPI 6.1
> + * spec, "18.3.2.9 Hardware Error Notification"
> + */
Use below style for multiple comment lines
/*
 * XXX
 */

> +struct AcpiHestNotify {
> +uint8_t type;
> +uint8_t length;
> +uint16_t config_write_enable;
> +uint32_t poll_interval;
> +uint32_t vector;
> +uint32_t polling_threshold_value;
> +uint32_t polling_threshold_window;
> +uint32_t error_threshold_value;
> +uint32_t error_threshold_window;
> +} QEMU_PACKED;
> +typedef struct AcpiHestNotify AcpiHestNotify;
> +
> +/* From ACPI 6.1, sections "18.3.2.1 IA-32 Architecture Machine
> + * Check Exception" through "18.3.2.8 Generic Hardware Error Source version 
> 2".
> + */
> +enum AcpiHestSourceType {
> +ACPI_HEST_SOURCE_IA32_CHECK = 0,
> +ACPI_HEST_SOURCE_IA32_CORRECTED_CHECK = 1,
> +ACPI_HEST_SOURCE_IA32_NMI = 2,
What's 3, 4, 5 for?

> +ACPI_HEST_SOURCE_AER_ROOT_PORT = 6,
> +ACPI_HEST_SOURCE_AER_ENDPOINT = 7,
> +ACPI_HEST_SOURCE_AER_BRIDGE = 8,
> +ACPI_HEST_SOURCE_GENERIC_ERROR = 9,
> +ACPI_HEST_SOURCE_GENERIC_ERROR_V2 = 10,
> +ACPI_HEST_SOURCE_RESERVED = 11/* 11 and greater are reserved */
> +};
> +
> +/* Block status bitmasks from ACPI 6.1, "18.3.2.7.1 Generic Error Data" */
> +#define ACPI_GEBS_UNCORRECTABLE (1)
> +#define ACPI_GEBS_CORRECTABLE   (1 << 1)
> +#define ACPI_GEBS_MULTIPLE_UNCORRECTABLE(1 << 2)
> +#define ACPI_GEBS_MULTIPLE_CORRECTABLE  (1 << 3)
> +/* 10 bits, error data entry count */
> +#define ACPI_GEBS_ERROR_ENTRY_COUNT (0x3FF << 4)
> +
> +/* Generic Hardware Error Source Structure, refer to ACPI 6.1
> + * "18.3.2.7 Generic Hardware Error Source". in this struct the
> + * "type" field has to be ACPI_HEST_SOURCE_GENERIC_ERROR
> + */
> +
> +struct AcpiGenericHardwareErrorSource {
> +uint16_t type;
> +uint16_t source_id;
> +uint16_t related_source_id;
> + 

Re: Android on virt device

2017-03-02 Thread Shannon Zhao


On 2017/3/2 21:04, Christoffer Dall wrote:
> On Sun, Feb 26, 2017 at 12:12:35PM +0200, Roman Livshits wrote:
>> Hi
>>
>> I am trying to run Android on the qemu virt machine.
>> I want to use virt as it was used by op-tee for implementing a
>> TEE for ARM TrustZone, see
>> https://github.com/OP-TEE/build#op-tee-buildgit, so I hope that
>> running Android on virt will be easier compared to putting op-tee
>> into ranchu, which is used by the Android emulator
>> (http://www.linaro.org/blog/core-dump/running-64bit-android-l-qemu/).
>>
>> Is there any advice how to do this?
>>
> 
> You'd need an Android guest kernel that runs on the virt board and you
> may also need some userspace changes, because I believe the initial user
> daemons in Android load libraries and hardware management layers based
> on the machine name (e.g. Ranchu).
> 
> The biggest challenge, however, is probably going to be getting the
> devices needed to get Android booting properly, so that you can
> interact with it on the virt board.  For example, I'm not sure how you
> plan on dealing with the framebuffer/graphics.
> 
> Personally, for a quick solution, I would probably go the other route
> and first try to make Android run under KVM using the Ranchu device
> without any secure component.  Then I would look at bringing together
> the work in whichever of the two platforms you prefer.
> 
> The Ranchu platform should be relatively similar to the virt platform,
> so I wouldn't expect too much work in getting OP-TEE compiled and
> running with Ranchu.
> 
Maybe another way is porting the goldfish framebuffer to the virt machine.

Thanks,
-- 
Shannon

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v10 0/8] arm/arm64: vgic: Implement API for vGICv3 live migration

2017-01-18 Thread Shannon Zhao
Hi Vijaya,

On 2016/12/1 15:09, vijay.kil...@gmail.com wrote:
> From: Vijaya Kumar K 
> 
> This patchset adds API for saving and restoring
> of VGICv3 registers to support live migration with new vgic feature.
> This API definition is as per version of VGICv3 specification
> Documentation/virtual/kvm/devices/arm-vgic-v3.txt
> 
> The patch 3 & 4 are picked from the Pavel's previous implementation.
> http://www.spinics.net/lists/kvm/msg122040.html
> 
> NOTE: Only compilation tested for AArch32. No hardware to test.
> 
Where can I fetch the latest corresponding QEMU patches? I didn't find
them on the qemu-devel/qemu-arm mailing lists.

Thanks,
-- 
Shannon

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

This patch set adds support for cross type vCPUs in KVM-ARM64. It allows
userspace to request a vCPU type different from the physical one and to
check whether the physical CPUs can support that specific vCPU. If so,
KVM traps the ID registers and returns the values from userspace to the
guest.

This patch set is not complete, since CPU errata are not considered and
currently it only checks whether the id_aa64mmfr0_el1 register value is
legal. It is meant as an example; I need some feedback from folks on
whether this approach is right.

You can test this patch set with QEMU using
-cpu cortex-a53/cortex-a57/generic/cortex-a72

These patches can be fetched from:
https://git.linaro.org/people/shannon.zhao/linux-mainline.git cross_vcpu_rfc

The corresponding QEMU patches can be fetched from:
https://git.linaro.org/people/shannon.zhao/qemu.git cross_vcpu_rfc

Thanks,
Shannon

Shannon Zhao (7):
  ARM64: KVM: Add the definition of ID registers
  ARM64: KVM: Add reset handlers for all ID registers
  ARM64: KVM: Reset ID registers when creating the VCPUs
  ARM64: KVM: emulate accessing ID registers
  ARM64: KVM: Support cross type vCPU
  ARM64: KVM: Support heterogeneous system
  ARM64: KVM: Add user set handler for id_aa64mmfr0_el1

 arch/arm/kvm/arm.c   |  36 -
 arch/arm64/include/asm/kvm_coproc.h  |   1 +
 arch/arm64/include/asm/kvm_emulate.h |   3 +
 arch/arm64/include/asm/kvm_host.h|  49 +-
 arch/arm64/include/uapi/asm/kvm.h|   1 +
 arch/arm64/kvm/guest.c   |  18 ++-
 arch/arm64/kvm/hyp/sysreg-sr.c   |   2 +
 arch/arm64/kvm/sys_regs.c| 290 +++
 include/uapi/linux/kvm.h |   2 +
 9 files changed, 296 insertions(+), 106 deletions(-)

-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 3/7] ARM64: KVM: Reset ID registers when creating the VCPUs

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Reset the ID registers when creating the VCPUs and store the values per
VCPU. Also modify get_invariant_sys_reg and set_invariant_sys_reg to
get/set the ID registers from the vcpu context.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/include/asm/kvm_coproc.h |  1 +
 arch/arm64/kvm/guest.c  |  1 +
 arch/arm64/kvm/sys_regs.c   | 58 ++---
 3 files changed, 31 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_coproc.h 
b/arch/arm64/include/asm/kvm_coproc.h
index 0b52377..0801b66 100644
--- a/arch/arm64/include/asm/kvm_coproc.h
+++ b/arch/arm64/include/asm/kvm_coproc.h
@@ -24,6 +24,7 @@
 #include 
 
 void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
+void kvm_reset_id_sys_regs(struct kvm_vcpu *vcpu);
 
 struct kvm_sys_reg_table {
const struct sys_reg_desc *table;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index b37446a..92abe2b 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -48,6 +48,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
+   kvm_reset_id_sys_regs(vcpu);
return 0;
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bf71eb4..7c5fa03 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1440,11 +1440,11 @@ static const struct sys_reg_desc cp15_64_regs[] = {
  * the guest, or a future kvm may trap them.
  */
 
-#define FUNCTION_INVARIANT(reg)
\
-   static void get_##reg(struct kvm_vcpu *v,   \
- const struct sys_reg_desc *r) \
+#define FUNCTION_INVARIANT(register)   \
+   static void get_##register(struct kvm_vcpu *v,  \
+  const struct sys_reg_desc *r)\
{   \
-   ((struct sys_reg_desc *)r)->val = read_sysreg(reg); \
+   vcpu_id_sys_reg(v, r->reg) = read_sysreg(register); \
}
 
 FUNCTION_INVARIANT(midr_el1)
@@ -1480,7 +1480,6 @@ FUNCTION_INVARIANT(id_aa64mmfr1_el1)
 FUNCTION_INVARIANT(clidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
-/* ->val is filled in by kvm_sys_reg_table_init() */
 static struct sys_reg_desc invariant_sys_regs[] = {
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b), Op2(0b000),
  NULL, get_midr_el1, MIDR_EL1 },
@@ -1952,43 +1951,43 @@ static int reg_to_user(void __user *uaddr, const u64 
*val, u64 id)
return 0;
 }
 
-static int get_invariant_sys_reg(u64 id, void __user *uaddr)
+static int get_invariant_sys_reg(struct kvm_vcpu *vcpu,
+const struct kvm_one_reg *reg)
 {
struct sys_reg_params params;
const struct sys_reg_desc *r;
+   void __user *uaddr = (void __user *)(unsigned long)reg->addr;
 
-   if (!index_to_params(id, ¶ms))
+   if (!index_to_params(reg->id, ¶ms))
return -ENOENT;
 
r = find_reg(¶ms, invariant_sys_regs, 
ARRAY_SIZE(invariant_sys_regs));
if (!r)
return -ENOENT;
 
-   return reg_to_user(uaddr, &r->val, id);
+   if (r->get_user)
+   return (r->get_user)(vcpu, r, reg, uaddr);
+
+   return reg_to_user(uaddr, &vcpu_id_sys_reg(vcpu, r->reg), reg->id);
 }
 
-static int set_invariant_sys_reg(u64 id, void __user *uaddr)
+static int set_invariant_sys_reg(struct kvm_vcpu *vcpu,
+const struct kvm_one_reg *reg)
 {
struct sys_reg_params params;
const struct sys_reg_desc *r;
-   int err;
-   u64 val = 0; /* Make sure high bits are 0 for 32-bit regs */
+   void __user *uaddr = (void __user *)(unsigned long)reg->addr;
 
-   if (!index_to_params(id, ¶ms))
+   if (!index_to_params(reg->id, ¶ms))
return -ENOENT;
r = find_reg(¶ms, invariant_sys_regs, 
ARRAY_SIZE(invariant_sys_regs));
if (!r)
return -ENOENT;
 
-   err = reg_from_user(&val, uaddr, id);
-   if (err)
-   return err;
-
-   /* This is what we mean by invariant: you can't change it. */
-   if (r->val != val)
-   return -EINVAL;
+   if (r->set_user)
+   return (r->set_user)(vcpu, r, reg, uaddr);
 
-   return 0;
+   return reg_from_user(&vcpu_id_sys_reg(vcpu, r->reg), uaddr, reg->id);
 }
 
 static bool is_valid_cache(u32 val)
@@ -2086,7 +2085,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const 
struct kvm_one_reg *reg
 
r = index_to_sys_reg_desc(vcpu, reg->id);
if (!r)
-   return get_invariant_sys_reg(reg->id, uaddr);
+   return get_invariant_sys_reg(vcpu, reg);
 
if (r->get_user)
return (r-&

[PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

When initializing KVM, check whether the physical hardware is a
heterogeneous system by comparing the MIDR values. If so, force
userspace to set the KVM_ARM_VCPU_CROSS feature bit; otherwise VCPU
initialization fails.

Signed-off-by: Shannon Zhao 
---
 arch/arm/kvm/arm.c   | 26 ++
 include/uapi/linux/kvm.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bdceb19..21ec070 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -46,6 +46,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #ifdef REQUIRES_VIRT
 __asm__(".arch_extension   virt");
@@ -65,6 +66,7 @@ static unsigned int kvm_vmid_bits __read_mostly;
 static DEFINE_SPINLOCK(kvm_vmid_lock);
 
 static bool vgic_present;
+static bool heterogeneous_system;
 
 static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
 
@@ -210,6 +212,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_CROSS_VCPU:
r = 1;
break;
+   case KVM_CAP_ARM_HETEROGENEOUS:
+   r = heterogeneous_system;
+   break;
case KVM_CAP_COALESCED_MMIO:
r = KVM_COALESCED_MMIO_PAGE_OFFSET;
break;
@@ -812,6 +817,12 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
int phys_target = kvm_target_cpu();
bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
 
+   if (heterogeneous_system && !cross_vcpu) {
+   kvm_err("%s:Host is a heterogeneous system, set 
KVM_ARM_VCPU_CROSS bit\n",
+   __func__);
+   return -EINVAL;
+   }
+
if (!cross_vcpu && init->target != phys_target)
return -EINVAL;
 
@@ -1397,6 +1408,11 @@ static void check_kvm_target_cpu(void *ret)
*(int *)ret = kvm_target_cpu();
 }
 
+static void get_physical_cpu_midr(void *midr)
+{
+   *(u32 *)midr = read_cpuid_id();
+}
+
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
 {
struct kvm_vcpu *vcpu;
@@ -1417,6 +1433,7 @@ int kvm_arch_init(void *opaque)
 {
int err;
int ret, cpu;
+   u32 current_midr, midr;
 
if (!is_hyp_mode_available()) {
kvm_err("HYP mode not available\n");
@@ -1431,6 +1448,15 @@ int kvm_arch_init(void *opaque)
}
}
 
+   current_midr = read_cpuid_id();
+   for_each_online_cpu(cpu) {
+   smp_call_function_single(cpu, get_physical_cpu_midr, &midr, 1);
+   if (current_midr != midr) {
+   heterogeneous_system = true;
+   break;
+   }
+   }
+
err = init_common_resources();
if (err)
return err;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 46115a2..cc2b63d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -872,6 +872,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_MSI_DEVID 131
 #define KVM_CAP_PPC_HTM 132
 #define KVM_CAP_ARM_CROSS_VCPU 133
+#define KVM_CAP_ARM_HETEROGENEOUS 134
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Add a new member in kvm_cpu_context to save the ID register values.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/include/asm/kvm_host.h | 46 +++
 1 file changed, 46 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index e505038..6034f92 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -187,12 +187,57 @@ enum vcpu_sysreg {
 
 #define NR_COPRO_REGS  (NR_SYS_REGS * 2)
 
+enum id_vcpu_sysreg {
+   MIDR_EL1,
+   /* ID group 1 registers */
+   REVIDR_EL1,
+   AIDR_EL1,
+
+   /* ID group 2 registers */
+   CTR_EL0,
+   CCSIDR_EL1,
+   CLIDR_EL1,
+
+   /* ID group 3 registers */
+   ID_PFR0_EL1,
+   ID_PFR1_EL1,
+   ID_DFR0_EL1,
+   ID_AFR0_EL1,
+   ID_MMFR0_EL1,
+   ID_MMFR1_EL1,
+   ID_MMFR2_EL1,
+   ID_MMFR3_EL1,
+   ID_ISAR0_EL1,
+   ID_ISAR1_EL1,
+   ID_ISAR2_EL1,
+   ID_ISAR3_EL1,
+   ID_ISAR4_EL1,
+   ID_ISAR5_EL1,
+   MVFR0_EL1,
+   MVFR1_EL1,
+   MVFR2_EL1,
+   ID_AA64PFR0_EL1,
+   ID_AA64PFR1_EL1,
+   ID_AA64DFR0_EL1,
+   ID_AA64DFR1_EL1,
+   ID_AA64ISAR0_EL1,
+   ID_AA64ISAR1_EL1,
+   ID_AA64MMFR0_EL1,
+   ID_AA64MMFR1_EL1,
+   ID_AA64AFR0_EL1,
+   ID_AA64AFR1_EL1,
+   ID_MMFR4_EL1,
+
+   NR_ID_SYS_REGS
+};
+
 struct kvm_cpu_context {
struct kvm_regs gp_regs;
union {
u64 sys_regs[NR_SYS_REGS];
u32 copro[NR_COPRO_REGS];
};
+   u64 id_sys_regs[NR_ID_SYS_REGS];
 };
 
 typedef struct kvm_cpu_context kvm_cpu_context_t;
@@ -277,6 +322,7 @@ struct kvm_vcpu_arch {
 
 #define vcpu_gp_regs(v)(&(v)->arch.ctxt.gp_regs)
 #define vcpu_sys_reg(v,r)  ((v)->arch.ctxt.sys_regs[(r)])
+#define vcpu_id_sys_reg(v,r)   ((v)->arch.ctxt.id_sys_regs[(r)])
 /*
  * CP14 and CP15 live in the same array, as they are backed by the
  * same system registers.
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Add a capability to tell userspace that KVM supports cross type vCPUs.
Add a cpu feature for userspace to set when it doesn't use a host type
vCPU, and make kvm_vcpu_preferred_target return the host MIDR register
value so that userspace can check whether the requested vCPU type
matches the physical CPU; if so, KVM will not trap the ID registers
even though userspace doesn't specify -cpu host.
The guest accesses MIDR through VPIDR_EL2, so we save/restore it whether
or not it's a cross type vCPU.

Signed-off-by: Shannon Zhao 
---
 arch/arm/kvm/arm.c   | 10 --
 arch/arm64/include/asm/kvm_emulate.h |  3 +++
 arch/arm64/include/asm/kvm_host.h|  3 ++-
 arch/arm64/include/uapi/asm/kvm.h|  1 +
 arch/arm64/kvm/guest.c   | 17 -
 arch/arm64/kvm/hyp/sysreg-sr.c   |  2 ++
 include/uapi/linux/kvm.h |  1 +
 7 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 1167678..bdceb19 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -207,6 +207,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_PSCI_0_2:
case KVM_CAP_READONLY_MEM:
case KVM_CAP_MP_STATE:
+   case KVM_CAP_ARM_CROSS_VCPU:
r = 1;
break;
case KVM_CAP_COALESCED_MMIO:
@@ -809,8 +810,9 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 {
unsigned int i;
int phys_target = kvm_target_cpu();
+   bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
 
-   if (init->target != phys_target)
+   if (!cross_vcpu && init->target != phys_target)
return -EINVAL;
 
/*
@@ -839,7 +841,11 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
set_bit(i, vcpu->arch.features);
}
 
-   vcpu->arch.target = phys_target;
+   if (!cross_vcpu)
+   vcpu->arch.target = phys_target;
+   else
+   /* Use generic ARMv8 target for cross type vcpu. */
+   vcpu->arch.target = KVM_ARM_TARGET_GENERIC_V8;
 
/* Now we know what it is, we can reset it. */
return kvm_reset_vcpu(vcpu);
diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index f5ea0ba..bca7d3a 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -49,6 +49,9 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr_el2 |= HCR_E2H;
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
vcpu->arch.hcr_el2 &= ~HCR_RW;
+   if (test_bit(KVM_ARM_VCPU_CROSS, vcpu->arch.features))
+   /* TODO: Set HCR_TID2 and trap cache registers */
+   vcpu->arch.hcr_el2 |= HCR_TID3 | HCR_TID1 | HCR_TID0;
 }
 
 static inline unsigned long vcpu_get_hcr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 6034f92..d0073d7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -41,10 +41,11 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_VCPU_EXIT  8
 
+bool kvm_vcpu_has_feature_cross_cpu(const struct kvm_vcpu_init *init);
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext);
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 3051f86..7ba7117 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -97,6 +97,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V33 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_CROSS 4 /* Support cross type vCPU */
 
 struct kvm_vcpu_init {
__u32 target;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 92abe2b..4a5ccab 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -308,8 +308,15 @@ int __attribute_const__ kvm_target_cpu(void)
return KVM_ARM_TARGET_GENERIC_V8;
 }
 
+bool kvm_vcpu_has_feature_cross_cpu(const struct kvm_vcpu_init *init)
+{
+   return init->features[KVM_ARM_VCPU_CROSS / 32] &
+  (1 << (KVM_ARM_VCPU_CROSS % 32));
+}
+
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
 {
+   bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
int target = kvm_target_cpu();
 
if (target < 0)
@@ -323,7 +330,15 @@ int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
 * specific features available for the preferred
 * target type.
 */
-   init->target =

[PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Check whether the requested configuration is valid.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/kvm/sys_regs.c | 32 +++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f613e29..9763b79 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1493,6 +1493,35 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
return true;
 }
 
+static int set_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
+   const struct sys_reg_desc *rd,
+   const struct kvm_one_reg *reg,
+   void __user *uaddr)
+{
+   u64 val, id_aa64mmfr0;
+
+   if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+   return -EFAULT;
+
+   asm volatile("mrs %0, id_aa64mmfr0_el1\n" : "=r" (id_aa64mmfr0));
+
+   if ((val & GENMASK(3, 0)) > (id_aa64mmfr0 & GENMASK(3, 0)) ||
+   (val & GENMASK(7, 4)) > (id_aa64mmfr0 & GENMASK(7, 4)) ||
+   (val & GENMASK(11, 8)) > (id_aa64mmfr0 & GENMASK(11, 8)) ||
+   (val & GENMASK(15, 12)) > (id_aa64mmfr0 & GENMASK(15, 12)) ||
+   (val & GENMASK(19, 16)) > (id_aa64mmfr0 & GENMASK(19, 16)) ||
+   (val & GENMASK(23, 20)) > (id_aa64mmfr0 & GENMASK(23, 20)) ||
+   (val & GENMASK(27, 24)) < (id_aa64mmfr0 & GENMASK(27, 24)) ||
+   (val & GENMASK(31, 28)) < (id_aa64mmfr0 & GENMASK(31, 28))) {
+   kvm_err("Wrong memory translation granule size/Physical Address 
range\n");
+   return -EINVAL;
+   }
+
+   vcpu_id_sys_reg(vcpu, rd->reg) = val & GENMASK(31, 0);
+
+   return 0;
+}
+
 static struct sys_reg_desc invariant_sys_regs[] = {
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b), Op2(0b000),
  access_id_reg, get_midr_el1, MIDR_EL1 },
@@ -1549,7 +1578,8 @@ static struct sys_reg_desc invariant_sys_regs[] = {
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0110), Op2(0b001),
  access_id_reg, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0111), Op2(0b000),
- access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
+ access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1,
+ 0, NULL, set_id_aa64mmfr0_el1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0111), Op2(0b001),
  access_id_reg, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
{ Op0(0b11), Op1(0b001), CRn(0b), CRm(0b), Op2(0b001),
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all ID registers

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Move invariant_sys_regs before emulate_sys_reg so that it can be used
later.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/kvm/sys_regs.c | 193 --
 1 file changed, 116 insertions(+), 77 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e66..bf71eb4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1432,6 +1432,122 @@ static const struct sys_reg_desc cp15_64_regs[] = {
{ Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
 };
 
+/*
+ * These are the invariant sys_reg registers: we let the guest see the
+ * host versions of these, so they're part of the guest state.
+ *
+ * A future CPU may provide a mechanism to present different values to
+ * the guest, or a future kvm may trap them.
+ */
+
+#define FUNCTION_INVARIANT(reg)
\
+   static void get_##reg(struct kvm_vcpu *v,   \
+ const struct sys_reg_desc *r) \
+   {   \
+   ((struct sys_reg_desc *)r)->val = read_sysreg(reg); \
+   }
+
+FUNCTION_INVARIANT(midr_el1)
+FUNCTION_INVARIANT(ctr_el0)
+FUNCTION_INVARIANT(revidr_el1)
+FUNCTION_INVARIANT(id_pfr0_el1)
+FUNCTION_INVARIANT(id_pfr1_el1)
+FUNCTION_INVARIANT(id_dfr0_el1)
+FUNCTION_INVARIANT(id_afr0_el1)
+FUNCTION_INVARIANT(id_mmfr0_el1)
+FUNCTION_INVARIANT(id_mmfr1_el1)
+FUNCTION_INVARIANT(id_mmfr2_el1)
+FUNCTION_INVARIANT(id_mmfr3_el1)
+FUNCTION_INVARIANT(id_isar0_el1)
+FUNCTION_INVARIANT(id_isar1_el1)
+FUNCTION_INVARIANT(id_isar2_el1)
+FUNCTION_INVARIANT(id_isar3_el1)
+FUNCTION_INVARIANT(id_isar4_el1)
+FUNCTION_INVARIANT(id_isar5_el1)
+FUNCTION_INVARIANT(mvfr0_el1)
+FUNCTION_INVARIANT(mvfr1_el1)
+FUNCTION_INVARIANT(mvfr2_el1)
+FUNCTION_INVARIANT(id_aa64pfr0_el1)
+FUNCTION_INVARIANT(id_aa64pfr1_el1)
+FUNCTION_INVARIANT(id_aa64dfr0_el1)
+FUNCTION_INVARIANT(id_aa64dfr1_el1)
+FUNCTION_INVARIANT(id_aa64afr0_el1)
+FUNCTION_INVARIANT(id_aa64afr1_el1)
+FUNCTION_INVARIANT(id_aa64isar0_el1)
+FUNCTION_INVARIANT(id_aa64isar1_el1)
+FUNCTION_INVARIANT(id_aa64mmfr0_el1)
+FUNCTION_INVARIANT(id_aa64mmfr1_el1)
+FUNCTION_INVARIANT(clidr_el1)
+FUNCTION_INVARIANT(aidr_el1)
+
+/* ->val is filled in by kvm_sys_reg_table_init() */
+static struct sys_reg_desc invariant_sys_regs[] = {
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b), Op2(0b000),
+ NULL, get_midr_el1, MIDR_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b), Op2(0b110),
+ NULL, get_revidr_el1, REVIDR_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b000),
+ NULL, get_id_pfr0_el1, ID_PFR0_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b001),
+ NULL, get_id_pfr1_el1, ID_PFR1_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b010),
+ NULL, get_id_dfr0_el1, ID_DFR0_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b011),
+ NULL, get_id_afr0_el1, ID_AFR0_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b100),
+ NULL, get_id_mmfr0_el1, ID_MMFR0_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b101),
+ NULL, get_id_mmfr1_el1, ID_MMFR1_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b110),
+ NULL, get_id_mmfr2_el1, ID_MMFR2_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b111),
+ NULL, get_id_mmfr3_el1, ID_MMFR3_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b000),
+ NULL, get_id_isar0_el1, ID_ISAR0_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b001),
+ NULL, get_id_isar1_el1, ID_ISAR1_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b010),
+ NULL, get_id_isar2_el1, ID_ISAR2_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b011),
+ NULL, get_id_isar3_el1, ID_ISAR3_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b100),
+ NULL, get_id_isar4_el1, ID_ISAR4_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b101),
+ NULL, get_id_isar5_el1, ID_ISAR5_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0011), Op2(0b000),
+ NULL, get_mvfr0_el1, MVFR0_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0011), Op2(0b001),
+ NULL, get_mvfr1_el1, MVFR1_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0011), Op2(0b010),
+ NULL, get_mvfr2_el1, MVFR2_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0100), Op2(0b000),
+ NULL, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
+   { Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0100), Op2(0b001),
+ NULL, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
+   { Op0(0b11), Op1(

[PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Signed-off-by: Shannon Zhao 
---
 arch/arm64/kvm/sys_regs.c | 83 ---
 1 file changed, 50 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7c5fa03..f613e29 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1480,71 +1480,84 @@ FUNCTION_INVARIANT(id_aa64mmfr1_el1)
 FUNCTION_INVARIANT(clidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
+static bool access_id_reg(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+   if (p->is_write) {
+   vcpu_id_sys_reg(vcpu, r->reg) = p->regval;
+   } else {
+   p->regval = vcpu_id_sys_reg(vcpu, r->reg);
+   }
+
+   return true;
+}
+
 static struct sys_reg_desc invariant_sys_regs[] = {
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b), Op2(0b000),
- NULL, get_midr_el1, MIDR_EL1 },
+ access_id_reg, get_midr_el1, MIDR_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b), Op2(0b110),
- NULL, get_revidr_el1, REVIDR_EL1 },
+ access_id_reg, get_revidr_el1, REVIDR_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b000),
- NULL, get_id_pfr0_el1, ID_PFR0_EL1 },
+ access_id_reg, get_id_pfr0_el1, ID_PFR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b001),
- NULL, get_id_pfr1_el1, ID_PFR1_EL1 },
+ access_id_reg, get_id_pfr1_el1, ID_PFR1_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b010),
- NULL, get_id_dfr0_el1, ID_DFR0_EL1 },
+ access_id_reg, get_id_dfr0_el1, ID_DFR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b011),
- NULL, get_id_afr0_el1, ID_AFR0_EL1 },
+ access_id_reg, get_id_afr0_el1, ID_AFR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b100),
- NULL, get_id_mmfr0_el1, ID_MMFR0_EL1 },
+ access_id_reg, get_id_mmfr0_el1, ID_MMFR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b101),
- NULL, get_id_mmfr1_el1, ID_MMFR1_EL1 },
+ access_id_reg, get_id_mmfr1_el1, ID_MMFR1_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b110),
- NULL, get_id_mmfr2_el1, ID_MMFR2_EL1 },
+ access_id_reg, get_id_mmfr2_el1, ID_MMFR2_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0001), Op2(0b111),
- NULL, get_id_mmfr3_el1, ID_MMFR3_EL1 },
+ access_id_reg, get_id_mmfr3_el1, ID_MMFR3_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b000),
- NULL, get_id_isar0_el1, ID_ISAR0_EL1 },
+ access_id_reg, get_id_isar0_el1, ID_ISAR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b001),
- NULL, get_id_isar1_el1, ID_ISAR1_EL1 },
+ access_id_reg, get_id_isar1_el1, ID_ISAR1_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b010),
- NULL, get_id_isar2_el1, ID_ISAR2_EL1 },
+ access_id_reg, get_id_isar2_el1, ID_ISAR2_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b011),
- NULL, get_id_isar3_el1, ID_ISAR3_EL1 },
+ access_id_reg, get_id_isar3_el1, ID_ISAR3_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b100),
- NULL, get_id_isar4_el1, ID_ISAR4_EL1 },
+ access_id_reg, get_id_isar4_el1, ID_ISAR4_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0010), Op2(0b101),
- NULL, get_id_isar5_el1, ID_ISAR5_EL1 },
+ access_id_reg, get_id_isar5_el1, ID_ISAR5_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0011), Op2(0b000),
- NULL, get_mvfr0_el1, MVFR0_EL1 },
+ access_id_reg, get_mvfr0_el1, MVFR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0011), Op2(0b001),
- NULL, get_mvfr1_el1, MVFR1_EL1 },
+ access_id_reg, get_mvfr1_el1, MVFR1_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0011), Op2(0b010),
- NULL, get_mvfr2_el1, MVFR2_EL1 },
+ access_id_reg, get_mvfr2_el1, MVFR2_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0100), Op2(0b000),
- NULL, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
+ access_id_reg, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0100), Op2(0b001),
- NULL, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
+ access_id_reg, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0101), Op2(0b000),
- NULL, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
+ access_id_reg, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
{ Op0(0b11), Op1(0b000), CRn(0b), CRm(0b0101), Op2(0b001),
- NULL, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
+ access_id_reg, get_id_aa64dfr1_el1, 

[PATCH RFC 3/6] arm: kvm64: Check if kvm supports cross type vCPU

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

If the user requests a specific vCPU type that is not the same as the
physical one, and kvm supports cross type vCPUs, we set the
KVM_ARM_VCPU_CROSS bit and set the CPU ID registers.

Signed-off-by: Shannon Zhao 
---
 target/arm/kvm64.c | 182 +
 1 file changed, 182 insertions(+)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index 609..70442ea 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -481,7 +481,151 @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc)
 return true;
 }
 
+#define ARM_CPU_ID_MIDR        3, 0, 0, 0, 0
 #define ARM_CPU_ID_MPIDR       3, 0, 0, 0, 5
+/* ID group 1 registers */
+#define ARM_CPU_ID_REVIDR      3, 0, 0, 0, 6
+#define ARM_CPU_ID_AIDR        3, 1, 0, 0, 7
+
+/* ID group 2 registers */
+#define ARM_CPU_ID_CCSIDR      3, 1, 0, 0, 0
+#define ARM_CPU_ID_CLIDR       3, 1, 0, 0, 1
+#define ARM_CPU_ID_CSSELR      3, 2, 0, 0, 0
+#define ARM_CPU_ID_CTR         3, 3, 0, 0, 1
+
+/* ID group 3 registers */
+#define ARM_CPU_ID_PFR0        3, 0, 0, 1, 0
+#define ARM_CPU_ID_PFR1        3, 0, 0, 1, 1
+#define ARM_CPU_ID_DFR0        3, 0, 0, 1, 2
+#define ARM_CPU_ID_AFR0        3, 0, 0, 1, 3
+#define ARM_CPU_ID_MMFR0       3, 0, 0, 1, 4
+#define ARM_CPU_ID_MMFR1       3, 0, 0, 1, 5
+#define ARM_CPU_ID_MMFR2       3, 0, 0, 1, 6
+#define ARM_CPU_ID_MMFR3       3, 0, 0, 1, 7
+#define ARM_CPU_ID_ISAR0       3, 0, 0, 2, 0
+#define ARM_CPU_ID_ISAR1       3, 0, 0, 2, 1
+#define ARM_CPU_ID_ISAR2       3, 0, 0, 2, 2
+#define ARM_CPU_ID_ISAR3       3, 0, 0, 2, 3
+#define ARM_CPU_ID_ISAR4       3, 0, 0, 2, 4
+#define ARM_CPU_ID_ISAR5       3, 0, 0, 2, 5
+#define ARM_CPU_ID_MMFR4       3, 0, 0, 2, 6
+#define ARM_CPU_ID_MVFR0       3, 0, 0, 3, 0
+#define ARM_CPU_ID_MVFR1       3, 0, 0, 3, 1
+#define ARM_CPU_ID_MVFR2       3, 0, 0, 3, 2
+#define ARM_CPU_ID_AA64PFR0    3, 0, 0, 4, 0
+#define ARM_CPU_ID_AA64PFR1    3, 0, 0, 4, 1
+#define ARM_CPU_ID_AA64DFR0    3, 0, 0, 5, 0
+#define ARM_CPU_ID_AA64DFR1    3, 0, 0, 5, 1
+#define ARM_CPU_ID_AA64AFR0    3, 0, 0, 5, 4
+#define ARM_CPU_ID_AA64AFR1    3, 0, 0, 5, 5
+#define ARM_CPU_ID_AA64ISAR0   3, 0, 0, 6, 0
+#define ARM_CPU_ID_AA64ISAR1   3, 0, 0, 6, 1
+#define ARM_CPU_ID_AA64MMFR0   3, 0, 0, 7, 0
+#define ARM_CPU_ID_AA64MMFR1   3, 0, 0, 7, 1
+#define ARM_CPU_ID_MAX         36
+
+static int kvm_arm_set_id_registers(CPUState *cs)
+{
+int ret = 0;
+uint32_t i;
+ARMCPU *cpu = ARM_CPU(cs);
+struct kvm_one_reg id_regitsers[ARM_CPU_ID_MAX];
+
+memset(id_regitsers, 0, ARM_CPU_ID_MAX * sizeof(struct kvm_one_reg));
+
+id_regitsers[0].id = ARM64_SYS_REG(ARM_CPU_ID_MIDR);
+id_regitsers[0].addr = (uintptr_t)&cpu->midr;
+
+id_regitsers[1].id = ARM64_SYS_REG(ARM_CPU_ID_REVIDR);
+id_regitsers[1].addr = (uintptr_t)&cpu->revidr;
+
+id_regitsers[2].id = ARM64_SYS_REG(ARM_CPU_ID_MVFR0);
+id_regitsers[2].addr = (uintptr_t)&cpu->mvfr0;
+
+id_regitsers[3].id = ARM64_SYS_REG(ARM_CPU_ID_MVFR1);
+id_regitsers[3].addr = (uintptr_t)&cpu->mvfr1;
+
+id_regitsers[4].id = ARM64_SYS_REG(ARM_CPU_ID_MVFR2);
+id_regitsers[4].addr = (uintptr_t)&cpu->mvfr2;
+
+id_regitsers[5].id = ARM64_SYS_REG(ARM_CPU_ID_PFR0);
+id_regitsers[5].addr = (uintptr_t)&cpu->id_pfr0;
+
+id_regitsers[6].id = ARM64_SYS_REG(ARM_CPU_ID_PFR1);
+id_regitsers[6].addr = (uintptr_t)&cpu->id_pfr1;
+
+id_regitsers[7].id = ARM64_SYS_REG(ARM_CPU_ID_DFR0);
+id_regitsers[7].addr = (uintptr_t)&cpu->id_dfr0;
+
+id_regitsers[8].id = ARM64_SYS_REG(ARM_CPU_ID_AFR0);
+id_regitsers[8].addr = (uintptr_t)&cpu->id_afr0;
+
+id_regitsers[9].id = ARM64_SYS_REG(ARM_CPU_ID_MMFR0);
+id_regitsers[9].addr = (uintptr_t)&cpu->id_mmfr0;
+
+id_regitsers[10].id = ARM64_SYS_REG(ARM_CPU_ID_MMFR1);
+id_regitsers[10].addr = (uintptr_t)&cpu->id_mmfr1;
+
+id_regitsers[11].id = ARM64_SYS_REG(ARM_CPU_ID_MMFR2);
+id_regitsers[11].addr = (uintptr_t)&cpu->id_mmfr2;
+
+id_regitsers[12].id = ARM64_SYS_REG(ARM_CPU_ID_MMFR3);
+id_regitsers[12].addr = (uintptr_t)&cpu->id_mmfr3;
+
+id_regitsers[13].id = ARM64_SYS_REG(ARM_CPU_ID_ISAR0);
+id_regitsers[13].addr = (uintptr_t)&cpu->id_isar0;
+
+id_regitsers[14].id = ARM64_SYS_REG(ARM_CPU_ID_ISAR1);
+id_regitsers[14].addr = (uintptr_t)&cpu->id_isar1;
+
+id_regitsers[15].id = ARM64_SYS_REG(ARM_CPU_ID_ISAR2);
+id_regitsers[15].addr = (uintptr_t)&cpu->id_isar2;
+
+id_regitsers[16].id = ARM64_SYS_REG(ARM_CPU_ID_ISAR3);
+id_regitsers[16].addr = (uintptr_t)&cpu->id_isar3;
+
+id_regitsers[17].id = ARM64_SYS_REG(ARM_CPU_ID_ISAR4);
+id_regitsers[17].addr = (uintptr_t)&cpu->id_isar4;
+
+id_regitsers[18].id = ARM64_SYS_REG(ARM_CPU_ID_ISAR5);
+id_regitsers[18].addr = (uintptr_t)&cpu->id_isar5;
+
+id_regitsers[19].id = ARM6

[PATCH RFC 5/6] arm: virt: Enable generic type CPU in virt machine

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Signed-off-by: Shannon Zhao 
---
 hw/arm/virt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 4b301c2..49b7b65 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -166,6 +166,7 @@ static const char *valid_cpus[] = {
 "cortex-a15",
 "cortex-a53",
 "cortex-a57",
+"generic",
 "host",
 NULL
 };
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 0/6] target-arm: KVM64: Cross type vCPU support

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

This patch set adds support for using cross type vCPUs with KVM on ARM
and adds two new CPU types: generic and cortex-a72.

You can test this patch set with QEMU using
-cpu cortex-a53/cortex-a57/generic/cortex-a72

These patches can be fetched from:
https://git.linaro.org/people/shannon.zhao/qemu.git cross_vcpu_rfc

The corresponding KVM patches can be fetched from:
https://git.linaro.org/people/shannon.zhao/linux-mainline.git cross_vcpu_rfc

Shannon Zhao (6):
  headers: update linux headers
  target: arm: Add the qemu target for KVM_ARM_TARGET_GENERIC_V8
  arm: kvm64: Check if kvm supports cross type vCPU
  target: arm: Add a generic type cpu
  arm: virt: Enable generic type CPU in virt machine
  target-arm: cpu64: Add support for Cortex-A72

 hw/arm/virt.c |   2 +
 linux-headers/asm-arm64/kvm.h |   1 +
 linux-headers/linux/kvm.h |   2 +
 target/arm/cpu64.c| 110 +
 target/arm/kvm-consts.h   |   2 +
 target/arm/kvm64.c| 182 ++
 6 files changed, 299 insertions(+)

-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 2/6] target: arm: Add the qemu target for KVM_ARM_TARGET_GENERIC_V8

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Signed-off-by: Shannon Zhao 
---
 target/arm/kvm-consts.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/kvm-consts.h b/target/arm/kvm-consts.h
index a2c9518..fc01ac5 100644
--- a/target/arm/kvm-consts.h
+++ b/target/arm/kvm-consts.h
@@ -128,6 +128,7 @@ MISMATCH_CHECK(QEMU_PSCI_RET_DISABLED, PSCI_RET_DISABLED)
 #define QEMU_KVM_ARM_TARGET_CORTEX_A57 2
 #define QEMU_KVM_ARM_TARGET_XGENE_POTENZA 3
 #define QEMU_KVM_ARM_TARGET_CORTEX_A53 4
+#define QEMU_KVM_ARM_TARGET_GENERIC_V8 5
 
 /* There's no kernel define for this: sentinel value which
  * matches no KVM target value for either 64 or 32 bit
@@ -140,6 +141,7 @@ MISMATCH_CHECK(QEMU_KVM_ARM_TARGET_FOUNDATION_V8, 
KVM_ARM_TARGET_FOUNDATION_V8)
 MISMATCH_CHECK(QEMU_KVM_ARM_TARGET_CORTEX_A57, KVM_ARM_TARGET_CORTEX_A57)
 MISMATCH_CHECK(QEMU_KVM_ARM_TARGET_XGENE_POTENZA, KVM_ARM_TARGET_XGENE_POTENZA)
 MISMATCH_CHECK(QEMU_KVM_ARM_TARGET_CORTEX_A53, KVM_ARM_TARGET_CORTEX_A53)
+MISMATCH_CHECK(QEMU_KVM_ARM_TARGET_GENERIC_V8, KVM_ARM_TARGET_GENERIC_V8)
 #else
 MISMATCH_CHECK(QEMU_KVM_ARM_TARGET_CORTEX_A15, KVM_ARM_TARGET_CORTEX_A15)
 MISMATCH_CHECK(QEMU_KVM_ARM_TARGET_CORTEX_A7, KVM_ARM_TARGET_CORTEX_A7)
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH RFC 4/6] target: arm: Add a generic type cpu

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Add a generic type cpu; it's useful for migration when running on
different hardware.
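
For example, it can then be selected with the usual virt/KVM invocation
(assuming a standard guest kernel image):

    qemu-system-aarch64 -machine virt,accel=kvm -cpu generic \
        -kernel Image -nographic ...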

Signed-off-by: Shannon Zhao 
---
 target/arm/cpu64.c | 54 ++
 1 file changed, 54 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 549cb1e..223f31e 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -204,6 +204,59 @@ static void aarch64_a53_initfn(Object *obj)
 define_arm_cp_regs(cpu, cortex_a57_a53_cp_reginfo);
 }
 
+static void aarch64_generic_initfn(Object *obj)
+{
+ARMCPU *cpu = ARM_CPU(obj);
+
+cpu->dtb_compatible = "arm,armv8";
+set_feature(&cpu->env, ARM_FEATURE_V8);
+set_feature(&cpu->env, ARM_FEATURE_VFP4);
+set_feature(&cpu->env, ARM_FEATURE_NEON);
+set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+set_feature(&cpu->env, ARM_FEATURE_V8_AES);
+set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
+set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
+set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
+set_feature(&cpu->env, ARM_FEATURE_CRC);
+set_feature(&cpu->env, ARM_FEATURE_EL3);
+cpu->kvm_target = QEMU_KVM_ARM_TARGET_GENERIC_V8;
+cpu->midr = 0x410fd000; /* FIXME: this needs to adjust */
+cpu->revidr = 0x;
+cpu->reset_fpsid = 0x41034070;
+cpu->mvfr0 = 0x10110222;
+cpu->mvfr1 = 0x1211;
+cpu->mvfr2 = 0x0043;
+cpu->ctr = 0x84448004; /* L1Ip = VIPT */
+cpu->reset_sctlr = 0x00c50838;
+cpu->id_pfr0 = 0x0131;
+cpu->id_pfr1 = 0x00011011;
+cpu->id_dfr0 = 0x03010066;
+cpu->id_afr0 = 0x;
+cpu->id_mmfr0 = 0x10101105;
+cpu->id_mmfr1 = 0x4000;
+cpu->id_mmfr2 = 0x0126;
+cpu->id_mmfr3 = 0x02102211;
+cpu->id_isar0 = 0x02101110;
+cpu->id_isar1 = 0x13112111;
+cpu->id_isar2 = 0x21232042;
+cpu->id_isar3 = 0x01112131;
+cpu->id_isar4 = 0x00011142;
+cpu->id_isar5 = 0x00011121;
+cpu->id_aa64pfr0 = 0x;
+cpu->id_aa64dfr0 = 0x10305106;
+cpu->id_aa64isar0 = 0x00011120;
+cpu->id_aa64mmfr0 = 0x0f001101; /* only support 4k page, 36 bit physical addr */
+cpu->dbgdidr = 0x3516d000;
+cpu->clidr = 0x0a200023;
+cpu->ccsidr[0] = 0x7003e01a; /* 8KB L1 dcache */
+cpu->ccsidr[1] = 0x2007e00a; /* 8KB L1 icache */
+cpu->ccsidr[2] = 0x700fe07a; /* 128KB L2 cache */
+cpu->dcz_blocksize = 4; /* 64 bytes */
+define_arm_cp_regs(cpu, cortex_a57_a53_cp_reginfo);
+}
+
 #ifdef CONFIG_USER_ONLY
 static void aarch64_any_initfn(Object *obj)
 {
@@ -232,6 +285,7 @@ typedef struct ARMCPUInfo {
 static const ARMCPUInfo aarch64_cpus[] = {
 { .name = "cortex-a57", .initfn = aarch64_a57_initfn },
 { .name = "cortex-a53", .initfn = aarch64_a53_initfn },
+{ .name = "generic",.initfn = aarch64_generic_initfn },
 #ifdef CONFIG_USER_ONLY
 { .name = "any", .initfn = aarch64_any_initfn },
 #endif
-- 
2.0.4




[PATCH RFC 6/6] target-arm: cpu64: Add support for Cortex-A72

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Add the ARM Cortex-A72 processor definition. It's similar to A57.

Signed-off-by: Shannon Zhao 
---
 hw/arm/virt.c  |  1 +
 target/arm/cpu64.c | 56 ++
 2 files changed, 57 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 49b7b65..2ba93e3 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -166,6 +166,7 @@ static const char *valid_cpus[] = {
 "cortex-a15",
 "cortex-a53",
 "cortex-a57",
+"cortex-a72",
 "generic",
 "host",
 NULL
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 223f31e..4f00ceb 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -204,6 +204,61 @@ static void aarch64_a53_initfn(Object *obj)
 define_arm_cp_regs(cpu, cortex_a57_a53_cp_reginfo);
 }
 
+static void aarch64_a72_initfn(Object *obj)
+{
+ARMCPU *cpu = ARM_CPU(obj);
+
+cpu->dtb_compatible = "arm,cortex-a72";
+set_feature(&cpu->env, ARM_FEATURE_V8);
+set_feature(&cpu->env, ARM_FEATURE_VFP4);
+set_feature(&cpu->env, ARM_FEATURE_NEON);
+set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+set_feature(&cpu->env, ARM_FEATURE_V8_AES);
+set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
+set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
+set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
+set_feature(&cpu->env, ARM_FEATURE_CRC);
+set_feature(&cpu->env, ARM_FEATURE_EL3);
+cpu->kvm_target = QEMU_KVM_ARM_TARGET_GENERIC_V8;
+cpu->midr = 0x410fd081;
+cpu->revidr = 0x;
+cpu->reset_fpsid = 0x41034080;
+cpu->mvfr0 = 0x10110222;
+cpu->mvfr1 = 0x1211;
+cpu->mvfr2 = 0x0043;
+cpu->ctr = 0x8444c004;
+cpu->reset_sctlr = 0x00c50838;
+cpu->id_pfr0 = 0x0131;
+cpu->id_pfr1 = 0x00011011;
+cpu->id_dfr0 = 0x03010066;
+cpu->id_afr0 = 0x;
+cpu->id_mmfr0 = 0x10201105;
+cpu->id_mmfr1 = 0x4000;
+cpu->id_mmfr2 = 0x0126;
+cpu->id_mmfr3 = 0x02102211;
+cpu->id_isar0 = 0x02101110;
+cpu->id_isar1 = 0x13112111;
+cpu->id_isar2 = 0x21232042;
+cpu->id_isar3 = 0x01112131;
+cpu->id_isar4 = 0x00011142;
+cpu->id_isar5 = 0x00011121;
+cpu->id_aa64pfr0 = 0x;
+cpu->id_aa64dfr0 = 0x10305106;
+cpu->pmceid0 = 0x;
+cpu->pmceid1 = 0x;
+cpu->id_aa64isar0 = 0x00011120;
+cpu->id_aa64mmfr0 = 0x1124;
+cpu->dbgdidr = 0x3516d000;
+cpu->clidr = 0x0a200023;
+cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
+cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
+cpu->ccsidr[2] = 0x71ffe07a; /* 4096KB L2 cache */
+cpu->dcz_blocksize = 4; /* 64 bytes */
+define_arm_cp_regs(cpu, cortex_a57_a53_cp_reginfo);
+}
+
 static void aarch64_generic_initfn(Object *obj)
 {
 ARMCPU *cpu = ARM_CPU(obj);
@@ -285,6 +340,7 @@ typedef struct ARMCPUInfo {
 static const ARMCPUInfo aarch64_cpus[] = {
 { .name = "cortex-a57", .initfn = aarch64_a57_initfn },
 { .name = "cortex-a53", .initfn = aarch64_a53_initfn },
+{ .name = "cortex-a72", .initfn = aarch64_a72_initfn },
 { .name = "generic",.initfn = aarch64_generic_initfn },
 #ifdef CONFIG_USER_ONLY
 { .name = "any", .initfn = aarch64_any_initfn },
-- 
2.0.4




[PATCH RFC 1/6] headers: update linux headers

2017-01-16 Thread Shannon Zhao
From: Shannon Zhao 

Signed-off-by: Shannon Zhao 
---
 linux-headers/asm-arm64/kvm.h | 1 +
 linux-headers/linux/kvm.h | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/linux-headers/asm-arm64/kvm.h b/linux-headers/asm-arm64/kvm.h
index fd5a276..f914eac 100644
--- a/linux-headers/asm-arm64/kvm.h
+++ b/linux-headers/asm-arm64/kvm.h
@@ -97,6 +97,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V33 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_CROSS 4 /* Support cross type vCPU */
 
 struct kvm_vcpu_init {
__u32 target;
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
index bb0ed71..ea9e288 100644
--- a/linux-headers/linux/kvm.h
+++ b/linux-headers/linux/kvm.h
@@ -870,6 +870,8 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_S390_USER_INSTR0 130
 #define KVM_CAP_MSI_DEVID 131
 #define KVM_CAP_PPC_HTM 132
+#define KVM_CAP_ARM_CROSS_VCPU 133
+#define KVM_CAP_ARM_HETEROGENEOUS 134
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.0.4




Re: [PATCH 2/2] KVM: arm/arm64: vgic-v2: Add the missing resetting LRs at boot time

2016-12-15 Thread Shannon Zhao
Hi Marc,

On 2016/12/6 19:39, Marc Zyngier wrote:
> On 06/12/16 06:41, Shannon Zhao wrote:
>> From: Shannon Zhao 
>>
>> This is the corresponding part of commit d6400d7(KVM: arm/arm64:
>> vgic-v2: Reset LRs at boot time) which is missed for new-vgic.
>>
>> Signed-off-by: Shannon Zhao 
>> ---
>>  virt/kvm/arm/vgic/vgic-v2.c | 11 +++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
>> index 9bab867..c636a19 100644
>> --- a/virt/kvm/arm/vgic/vgic-v2.c
>> +++ b/virt/kvm/arm/vgic/vgic-v2.c
>> @@ -300,6 +300,15 @@ int vgic_v2_map_resources(struct kvm *kvm)
>>  
>>  DEFINE_STATIC_KEY_FALSE(vgic_v2_cpuif_trap);
>>  
>> +static void vgic_cpu_init_lrs(void *params)
>> +{
>> +int i;
>> +
>> +for (i = 0; i < kvm_vgic_global_state.nr_lr; i++)
>> +writel_relaxed(0, kvm_vgic_global_state.vctrl_base +
>> +  GICH_LR0 + (i * 4));
>> +}
Since this function uses kvm_vgic_global_state, which is initialized
by kvm_vgic_hyp_init, kvm_vgic_global_state will not yet be initialized
if we call it from cpu_hyp_reinit/cpu_init_hyp_mode. Would it be fine to
move kvm_vgic_hyp_init to the first place in init_subsystems?
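Roughly, the idea would be something like this (a hypothetical sketch;
the real body of init_subsystems() in arm.c is elided):

static int init_subsystems(void)
{
	int err;

	/*
	 * Hypothetical reordering: probe the vgic first so that
	 * kvm_vgic_global_state is valid before anything that touches
	 * the LRs (such as vgic_cpu_init_lrs()) runs.
	 */
	err = kvm_vgic_hyp_init();
	if (err)
		return err;

	/* ... timer init and the rest of init_subsystems(), unchanged ... */
	return 0;
}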

Thanks,
-- 
Shannon



Re: [PATCH 0/2] Add the missing resetting LRs at boot time for new-vgic

2016-12-07 Thread Shannon Zhao


On 2016/12/7 16:10, Marc Zyngier wrote:
> On 07/12/16 07:45, Shannon Zhao wrote:
>>
>>
>> On 2016/12/6 19:47, Marc Zyngier wrote:
>>> On 06/12/16 06:41, Shannon Zhao wrote:
>>>> From: Shannon Zhao 
>>>>
>>>> Commit 50926d8(KVM: arm/arm64: The GIC is dead, long live the GIC)
>>>> removes the old vgic and commit 9097773(KVM: arm/arm64: vgic-new: 
>>>> vgic_init: implement kvm_vgic_hyp_init) doesn't reset LRs for new-vgic
>>>> when probing GIC. These two patches add the missing part.
>>>>
>>>> BTW, here is a strange problem on Huawei D03 board when using
>>>> upstream kernel that android guest with a goldfish_fb will hang with
>>>> rcu_stall and interrupt timeout for goldfish_fb. We apply these patches
>>>> but the problem still exists, while if we revert the commit
>>>> b40c489(arm64: KVM: vgic-v3: Only wipe LRs on vcpu exit) the guest runs
>>>> well.
>>>>
>>>> We add a trace in kvm_vgic_flush_hwstate() to print the value of 
>>>> compute_ap_list_depth(vcpu) and the value of vgic_lr before calling
>>>> vgic_flush_lr_state(). The first output shows that the ap_list_depth is 
>>>> zero
>>>> but the first one in vgic_lr is 10a02001. I don't understand why
>>>> there is a valued one in vgic_lr since the memory of vgic_lr is zero
>>>> allocated. I think It should be zero when the vcpu first run and first
>>>> call kvm_vgic_flush_hwstate().
>>>>
>>>> qemu-system-aar-6673  [016]    501.969251: kvm_vgic_flush_hwstate: 
>>>> VCPU: 0, lits-count: 0, LR: 10a02001, 0, 0, 0
>>>>
>>>> I also add a trace at the end of vgic_flush_lr_state() which shows the
>>>> kvm_vgic_global_state.nr_lr is 4, used_lrs is 0 and all LRs in vgic_lr
>>>> are zero.
>>>>
>>>> qemu-system-aar-6673  [016]    501.969254: vgic_flush_lr_state_nuke: 
>>>> kvm_vgic_global_state.nr_lr is :4, irq1:0, irq2:0, irq3:0, irq4:0
>>>>
>>>> But the trace at the beginning of kvm_vgic_sync_hwstate() shows the
>>>> first one of vgic_lr is 10a02001.
>>>>
>>>> qemu-system-aar-6673  [016]    501.969261: 
>>>> kvm_vgic_sync_hwstate_vgic_lr: VCPU: 0, used_lrs: 0, LR: 10a02001, 
>>>> 0, 0, 0
>>>>
>>>> The above three trace outputs are printed by the first KVM_ENTRY/EXIT of 
>>>> VCPU 0.
>>>
>>> Decoding this LR value is interesting:
>>>
>>> 10a02001
>>> | | | LPI 8193
>>> | |
>>> | Priority 0xa0
>>> |
>>> Group1
>>>
>>> Someone is injecting an LPI behind your back. If nobody populates this,
>>> then you may want to investigate what is happening on the host side. Is
>>> there anyone using this interrupt?
>>>
>>
>> For this guest, I think nobody populates this LR, but on the host, there
>> is a LPI interrupt 8193. It's a interrupt of eth2
>>
>> MBIGEN-V2 8193 Edge  eth2-tx0
>>
>> It's a little confused to me that the LR registers should only be used
>> for VM, right? Why does the interrupt on host would affect the LRs?
> 
> It should never have an impact, but I'm worried that this could be a HW
> bug where the physical side of the ITS leaks into the virtual one. You
> have a GICv4, right?
Yes, the hardware supports GICv4, but I think the current kernel doesn't
enable it.

> 
> It'd be interesting to find out what happens if you leave this interrupt
> disabled (don't enable eth2) and see if that interrupt magically appears
> or not.
> 
Ah, I found that the guest uses the ITS and there is an IRQ number 8193. If
I use a QEMU without the ITS feature, then there is no such IRQ in the trace
output.

But there is still an unexpected LR for IRQ 27 in the vgic_lr[] array.
Nobody calls vgic_update_irq_pending for IRQ 27 before the trace outputs
below.

 qemu-system-aar-6681  [021]   1081.718849: kvm_vgic_flush_hwstate: VCPU: 0, lits-count: 0, LR: 0, 0, 0
 qemu-system-aar-6681  [021]   1081.718849: vgic_flush_lr_state: used lr count is :0, irq1:0, irq2:0, irq3:0, irq4:0
 qemu-system-aar-6681  [021] d...  1081.718850: kvm_entry: PC: 0xff8008432940
 qemu-system-aar-6681  [021]   1081.718852: kvm_exit: TRAP: HSR_EC: 0x0024 (DABT_LOW), PC: 0xff8008432954
 qemu-system-aar-6681  [021]   1081.718852: kvm_vgic_sync_hwstate_vgic_lr: VCPU: 0, used_lrs: 0, LR: 0, 0, 0, 0
 qemu-system-aar-6681  [021]   1081.718855: kvm_vgic_flush_hwstate: VCPU: 0, lits-count: 0, LR: 50a0021b, 0, 0, 0
 qemu-system-aar-6681  [021]   1081.718855: vgic_flush_lr_state: used lr count is :0, irq1:0, irq2:0, irq3:0, irq4:0
 qemu-system-aar-6681  [021] d...  1081.718856: kvm_entry: PC: 0xff8008432958
 qemu-system-aar-6681  [021]   1081.718858: kvm_exit: TRAP: HSR_EC: 0x0024 (DABT_LOW), PC: 0xff800843291c

Thanks,
-- 
Shannon



Re: [PATCH 0/2] Add the missing resetting LRs at boot time for new-vgic

2016-12-06 Thread Shannon Zhao


On 2016/12/6 19:47, Marc Zyngier wrote:
> On 06/12/16 06:41, Shannon Zhao wrote:
>> From: Shannon Zhao 
>>
>> Commit 50926d8(KVM: arm/arm64: The GIC is dead, long live the GIC)
>> removes the old vgic and commit 9097773(KVM: arm/arm64: vgic-new: 
>> vgic_init: implement kvm_vgic_hyp_init) doesn't reset LRs for new-vgic
>> when probing GIC. These two patches add the missing part.
>>
>> BTW, here is a strange problem on Huawei D03 board when using
>> upstream kernel that android guest with a goldfish_fb will hang with
>> rcu_stall and interrupt timeout for goldfish_fb. We apply these patches
>> but the problem still exists, while if we revert the commit
>> b40c489(arm64: KVM: vgic-v3: Only wipe LRs on vcpu exit) the guest runs
>> well.
>>
>> We add a trace in kvm_vgic_flush_hwstate() to print the value of 
>> compute_ap_list_depth(vcpu) and the value of vgic_lr before calling
>> vgic_flush_lr_state(). The first output shows that the ap_list_depth is zero
>> but the first one in vgic_lr is 10a02001. I don't understand why
>> there is a valued one in vgic_lr since the memory of vgic_lr is zero
>> allocated. I think It should be zero when the vcpu first run and first
>> call kvm_vgic_flush_hwstate().
>>
>> qemu-system-aar-6673  [016]    501.969251: kvm_vgic_flush_hwstate: VCPU: 
>> 0, lits-count: 0, LR: 10a02001, 0, 0, 0
>>
>> I also add a trace at the end of vgic_flush_lr_state() which shows the
>> kvm_vgic_global_state.nr_lr is 4, used_lrs is 0 and all LRs in vgic_lr
>> are zero.
>>
>> qemu-system-aar-6673  [016]    501.969254: vgic_flush_lr_state_nuke: 
>> kvm_vgic_global_state.nr_lr is :4, irq1:0, irq2:0, irq3:0, irq4:0
>>
>> But the trace at the beginning of kvm_vgic_sync_hwstate() shows the
>> first one of vgic_lr is 10a02001.
>>
>> qemu-system-aar-6673  [016]    501.969261: 
>> kvm_vgic_sync_hwstate_vgic_lr: VCPU: 0, used_lrs: 0, LR: 10a02001, 
>> 0, 0, 0
>>
>> The above three trace outputs are printed by the first KVM_ENTRY/EXIT of 
>> VCPU 0.
> 
> Decoding this LR value is interesting:
> 
> 10a02001
> | | | LPI 8193
> | |
> | Priority 0xa0
> |
> Group1
> 
> Someone is injecting an LPI behind your back. If nobody populates this,
> then you may want to investigate what is happening on the host side. Is
> there anyone using this interrupt?
> 

For this guest, I think nobody populates this LR, but on the host there is
an LPI interrupt 8193. It's an interrupt of eth2:

MBIGEN-V2 8193 Edge  eth2-tx0

It's a little confusing to me: the LR registers should only be used for the
VM, right? Why would an interrupt on the host affect the LRs?
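
As a stand-alone sanity check of that decoding against the ICH_LR_EL2
layout (assuming the architectural field offsets: vINTID in bits [31:0],
Priority in bits [55:48], Group in bit 60):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Group1, priority 0xa0, vINTID 8193 -- matches the decode above */
	uint64_t lr = (1ULL << 60) | (0xa0ULL << 48) | 8193;

	printf("vINTID %llu, prio 0x%llx, group%llu\n",
	       (unsigned long long)(lr & 0xffffffffULL),
	       (unsigned long long)((lr >> 48) & 0xff),
	       (unsigned long long)((lr >> 60) & 1));
	return 0;
}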

Thanks,
-- 
Shannon



[PATCH 0/2] Add the missing resetting LRs at boot time for new-vgic

2016-12-05 Thread Shannon Zhao
From: Shannon Zhao 

Commit 50926d8(KVM: arm/arm64: The GIC is dead, long live the GIC)
removes the old vgic, and commit 9097773(KVM: arm/arm64: vgic-new:
vgic_init: implement kvm_vgic_hyp_init) doesn't reset the LRs for the
new vgic when probing the GIC. These two patches add the missing part.

BTW, here is a strange problem on the Huawei D03 board when using an
upstream kernel: an Android guest with a goldfish_fb will hang with an RCU
stall and interrupt timeouts for goldfish_fb. We applied these patches
but the problem still exists, while if we revert the commit
b40c489(arm64: KVM: vgic-v3: Only wipe LRs on vcpu exit) the guest runs
well.

We added a trace in kvm_vgic_flush_hwstate() to print the value of
compute_ap_list_depth(vcpu) and the value of vgic_lr before calling
vgic_flush_lr_state(). The first output shows that the ap_list_depth is zero
but the first entry in vgic_lr is 10a02001. I don't understand why
there is a non-zero entry in vgic_lr, since the memory of vgic_lr is
zero-allocated. I think it should be zero when the vcpu first runs and first
calls kvm_vgic_flush_hwstate().

qemu-system-aar-6673  [016]    501.969251: kvm_vgic_flush_hwstate: VCPU: 0, lits-count: 0, LR: 10a02001, 0, 0, 0

I also added a trace at the end of vgic_flush_lr_state() which shows that
kvm_vgic_global_state.nr_lr is 4, used_lrs is 0, and all LRs in vgic_lr
are zero.

qemu-system-aar-6673  [016]    501.969254: vgic_flush_lr_state_nuke: kvm_vgic_global_state.nr_lr is :4, irq1:0, irq2:0, irq3:0, irq4:0

But the trace at the beginning of kvm_vgic_sync_hwstate() shows that the
first entry of vgic_lr is 10a02001.

qemu-system-aar-6673  [016]    501.969261: kvm_vgic_sync_hwstate_vgic_lr: VCPU: 0, used_lrs: 0, LR: 10a02001, 0, 0, 0

The above three trace outputs are printed by the first KVM_ENTRY/EXIT of VCPU 0.

Shannon Zhao (2):
  arm64: KVM: vgic-v3: Add the missing resetting LRs at boot time
  KVM: arm/arm64: vgic-v2: Add the missing resetting LRs at boot time

 virt/kvm/arm/vgic/vgic-v2.c | 11 +++
 virt/kvm/arm/vgic/vgic-v3.c |  7 +++
 2 files changed, 18 insertions(+)

-- 
2.0.4




[PATCH 1/2] arm64: KVM: vgic-v3: Add the missing resetting LRs at boot time

2016-12-05 Thread Shannon Zhao
From: Shannon Zhao 

This is the corresponding part of commit 0d98d00(arm64: KVM:
vgic-v3: Reset LRs at boot time) which is missing for the new vgic.

Signed-off-by: Shannon Zhao 
---
 virt/kvm/arm/vgic/vgic-v3.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index 5c9f974..7262f3b 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -307,6 +307,11 @@ int vgic_v3_map_resources(struct kvm *kvm)
return ret;
 }
 
+static void vgic_cpu_init_lrs(void *params)
+{
+   kvm_call_hyp(__vgic_v3_init_lrs);
+}
+
 /**
  * vgic_v3_probe - probe for a GICv3 compatible interrupt controller in DT
  * @node:  pointer to the DT node
@@ -361,5 +366,7 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
kvm_vgic_global_state.type = VGIC_V3;
kvm_vgic_global_state.max_gic_vcpus = VGIC_V3_MAX_CPUS;
 
+   on_each_cpu(vgic_cpu_init_lrs, NULL, 1);
+
return 0;
 }
-- 
2.0.4




[PATCH 2/2] KVM: arm/arm64: vgic-v2: Add the missing resetting LRs at boot time

2016-12-05 Thread Shannon Zhao
From: Shannon Zhao 

This is the corresponding part of commit d6400d7(KVM: arm/arm64:
vgic-v2: Reset LRs at boot time) which is missing for the new vgic.

Signed-off-by: Shannon Zhao 
---
 virt/kvm/arm/vgic/vgic-v2.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index 9bab867..c636a19 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -300,6 +300,15 @@ int vgic_v2_map_resources(struct kvm *kvm)
 
 DEFINE_STATIC_KEY_FALSE(vgic_v2_cpuif_trap);
 
+static void vgic_cpu_init_lrs(void *params)
+{
+   int i;
+
+   for (i = 0; i < kvm_vgic_global_state.nr_lr; i++)
+   writel_relaxed(0, kvm_vgic_global_state.vctrl_base +
+ GICH_LR0 + (i * 4));
+}
+
 /**
  * vgic_v2_probe - probe for a GICv2 compatible interrupt controller in DT
  * @node:  pointer to the DT node
@@ -368,6 +377,8 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
kvm_vgic_global_state.type = VGIC_V2;
kvm_vgic_global_state.max_gic_vcpus = VGIC_V2_MAX_CPUS;
 
+   on_each_cpu(vgic_cpu_init_lrs, NULL, 1);
+
kvm_info("vgic-v2@%llx\n", info->vctrl.start);
 
return 0;
-- 
2.0.4




guest get stuck on stable 4.1.32

2016-10-24 Thread Shannon Zhao
Hi,

I have a testcase which fails on host Linux kernel 4.1.32. The testcase
resets the guest from the outside while it is rebooting on the inside at
the same time.

By the way, the guest kernel is linux 4.4 with debian filesystem.

Here is the qemu command line:

qemu-kvm \
-smp 4 \
-enable-kvm \
-m 1024 -M virt,gic-version=2 \
-monitor telnet::5444,server,nowait \
-cpu host -nographic \
-device virtio-net-device,netdev=net0,mac="52:54:00:12:34:55" \
-netdev type=tap,id=net0,script=./qemu-ifup,downscript=no \
-drive file=debian.raw,if=none,id=drive-virtio-disk0,format=raw \
-device virtio-blk-device,drive=drive-virtio-disk0,id=virtio-disk0 \
-kernel Image-4.4 \
-append "console=ttyAMA0 root=/dev/vda1 earlycon=pl011,0x900 rw dhcp"

And the test command is:

# ssh guest_ip reboot;echo system_reset|nc host_ip 5444

After executing the above command several times, the guest gets stuck. The
guest log is as follows:

...
Architected cp15 timer(s) running at 66.00MHz (virt).
clocksource: arch_sys_counter: mask: 0xff max_cycles:
0xf38bc32cd, max_idle_ns: 440795204298 ns
sched_clock: 56 bits at 66MHz, resolution 15ns, wraps every 2199023255548ns
Console: colour dummy device 80x25
Calibrating delay loop (skipped), value calculated using timer
frequency.. 132.00 BogoMIPS (lpj=264000)
pid_max: default: 32768 minimum: 301
Security Framework initialized
Mount-cache hash table entries: 2048 (order: 2, 16384 bytes)
Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes)
Initializing cgroup subsys memory
Initializing cgroup subsys hugetlb
EFI services will not be available.
ASID allocator initialised with 65536 entries

I found the guest is stuck at "while ((now = jiffies) == j)" in the
function do_xor_speed(). It looks like no timer interrupt is injected into
the guest anymore.
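
The loop in question is roughly the following (a simplified sketch of the
tick-synchronisation step in crypto/xor.c, from memory rather than the
verbatim kernel code):

	unsigned long j, now;

	j = jiffies;
	/*
	 * Wait for the next timer tick to advance jiffies; if no timer
	 * interrupt is ever delivered to the guest, this spins forever.
	 */
	while ((now = jiffies) == j)
		cpu_relax();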

Kernel 4.6 has fixed this bug, but I'm not sure whether there is a way to
fix it in stable 4.1.

Thanks,
-- 
Shannon



Re: [PATCH] arm64: KVM: Enable support for Cortex-A72

2016-08-24 Thread Shannon Zhao


On 2016/8/24 16:57, Suzuki K Poulose wrote:
> On 24/08/16 08:21, Shannon Zhao wrote:
>> In order to allow KVM to run on Cortex-A72 physical cpus, enable KVM
>> support for Cortex-A72.
> 
> Do we really need this change ? Given that A72 is using the generic_v8
> table,
> it will automatically be supported via the GENERIC_V8 target. That was
> added
> just for this purpose. The pre-existing targets were preserved so that
> we don't break the ABI for older user space.
> 
Yes, this works for QEMU with "-cpu host", but if it specifies the CPU type
with "-cpu cortex-a72", it will fail without this patch.

The corresponding qemu patches could be found at [1].
[1] https://lists.gnu.org/archive/html/qemu-devel/2016-08/msg03653.html

Thanks,
-- 
Shannon



[PATCH] arm64: KVM: Enable support for Cortex-A72

2016-08-24 Thread Shannon Zhao
In order to allow KVM to run on Cortex-A72 physical cpus, enable KVM
support for Cortex-A72.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/include/asm/cputype.h | 1 +
 arch/arm64/include/uapi/asm/kvm.h| 3 ++-
 arch/arm64/kvm/guest.c   | 2 ++
 arch/arm64/kvm/sys_regs_generic_v8.c | 2 ++
 4 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 9d9fd4b..cf1f638 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -76,6 +76,7 @@
 #define ARM_CPU_PART_FOUNDATION0xD00
 #define ARM_CPU_PART_CORTEX_A570xD07
 #define ARM_CPU_PART_CORTEX_A530xD03
+#define ARM_CPU_PART_CORTEX_A720xD08
 
 #define APM_CPU_PART_POTENZA   0x000
 
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index f209ea1..af8fbeb 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -65,8 +65,9 @@ struct kvm_regs {
 #define KVM_ARM_TARGET_CORTEX_A53  4
 /* Generic ARM v8 target */
 #define KVM_ARM_TARGET_GENERIC_V8  5
+#define KVM_ARM_TARGET_CORTEX_A72  6
 
-#define KVM_ARM_NUM_TARGETS6
+#define KVM_ARM_NUM_TARGETS7
 
 /* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
 #define KVM_ARM_DEVICE_TYPE_SHIFT  0
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 32fad75..7eed92e 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -293,6 +293,8 @@ int __attribute_const__ kvm_target_cpu(void)
return KVM_ARM_TARGET_CORTEX_A53;
case ARM_CPU_PART_CORTEX_A57:
return KVM_ARM_TARGET_CORTEX_A57;
+   case ARM_CPU_PART_CORTEX_A72:
+   return KVM_ARM_TARGET_CORTEX_A72;
};
break;
case ARM_CPU_IMP_APM:
diff --git a/arch/arm64/kvm/sys_regs_generic_v8.c b/arch/arm64/kvm/sys_regs_generic_v8.c
index ed90578..cf823e1 100644
--- a/arch/arm64/kvm/sys_regs_generic_v8.c
+++ b/arch/arm64/kvm/sys_regs_generic_v8.c
@@ -92,6 +92,8 @@ static int __init sys_reg_genericv8_init(void)
  &genericv8_target_table);
kvm_register_target_sys_reg_table(KVM_ARM_TARGET_CORTEX_A57,
  &genericv8_target_table);
+   kvm_register_target_sys_reg_table(KVM_ARM_TARGET_CORTEX_A72,
+ &genericv8_target_table);
kvm_register_target_sys_reg_table(KVM_ARM_TARGET_XGENE_POTENZA,
  &genericv8_target_table);
kvm_register_target_sys_reg_table(KVM_ARM_TARGET_GENERIC_V8,
-- 
2.0.4




Re: usb keyboard and mouse can't work on QEMU ARM64 with KVM

2016-07-26 Thread Shannon Zhao


On 2016/7/26 16:07, Ard Biesheuvel wrote:
> On 26 July 2016 at 09:34, Shannon Zhao  wrote:
>> > Hi,
>> >
>> > Recently I'm trying to use usb keyboard and mouse with QEMU on ARM64. 
>> > Below is my QEMU command line,
>> > host and guest kernel both are 4.7.0-rc7+, and I ran it on Hikey board.
>> >
>> > qemu-system-aarch64 \
>> > -smp 1 -cpu host -enable-kvm \
>> > -m 256 -M virt \
>> > -k en-us \
>> > -nographic \
>> > -device usb-ehci -device usb-kbd -device usb-mouse -usb\
>> > -kernel Image \
>> > -initrd guestfs.cpio.gz \
>> > -append "rdinit=/sbin/init console=ttyAMA0 root=/dev/ram 
>> > earlycon=pl011,0x900 rw"
>> >
>> > The following guest log shows that usb controller can be probed but the 
>> > keyboard and mouse can't be
>> > found.
>> >
>> > [1.597433] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
>> > [1.599562] ehci-pci: EHCI PCI platform driver
>> > [1.608082] ehci-pci :00:03.0: EHCI Host Controller
>> > [1.609485] ehci-pci :00:03.0: new USB bus registered, assigned bus 
>> > number 1
>> > [1.611833] ehci-pci :00:03.0: irq 49, io mem 0x10041000
>> > [1.623599] ehci-pci :00:03.0: USB 2.0 started, EHCI 1.00
>> > [1.625867] hub 1-0:1.0: USB hub found
>> > [1.626906] hub 1-0:1.0: 6 ports detected
>> > [1.628685] ehci-platform: EHCI generic platform driver
>> > [1.630263] ehci-msm: Qualcomm On-Chip EHCI Host Controller
>> > [1.631947] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
>> > [1.633547] ohci-pci: OHCI PCI platform driver
>> > [1.634807] ohci-platform: OHCI generic platform driver
>> > [...]
>> > [1.939001] usb 1-1: new high-speed USB device number 2 using ehci-pci
>> > [   17.467040] usb 1-1: device not accepting address 2, error -110
>> > [   17.579165] usb 1-1: new high-speed USB device number 3 using ehci-pci
>> > [   32.287242] random: dd urandom read with 7 bits of entropy available
>> > [   33.110970] usb 1-1: device not accepting address 3, error -110
>> > [   33.223030] usb 1-1: new high-speed USB device number 4 using ehci-pci
>> > [   43.635185] usb 1-1: device not accepting address 4, error -110
>> > [   43.747033] usb 1-1: new high-speed USB device number 5 using ehci-pci
>> > [   54.159043] usb 1-1: device not accepting address 5, error -110
>> > [   54.160752] usb usb1-port1: unable to enumerate USB device
>> > [   54.307290] usb 1-2: new high-speed USB device number 6 using ehci-pci
>> > [   69.839052] usb 1-2: device not accepting address 6, error -110
>> > [   69.951249] usb 1-2: new high-speed USB device number 7 using ehci-pci
>> > [   85.483171] usb 1-2: device not accepting address 7, error -110
>> > [   85.595035] usb 1-2: new high-speed USB device number 8 using ehci-pci
>> > [   90.619247] usb 1-2: device descriptor read/8, error -110
>> > [   95.743482] usb 1-2: device descriptor read/8, error -110
>> > [   95.959165] usb 1-2: new high-speed USB device number 9 using ehci-pci
>> > [  106.371177] usb 1-2: device not accepting address 9, error -110
>> > [  106.372894] usb usb1-port2: unable to enumerate USB device
>> >
>> > lsusb shows:
>> > root@genericarmv8:~# lsusb
>> > Bus 001 Device 001: ID 1d6b:0002
>> >
>> > Besides, I have also tried QEMU TCG without KVM. The guest can 
>> > successfully probe usb controller,
>> > keyboard and mouse.
>> > lsusb shows:
>> > root@genericarmv8:~# lsusb
>> > Bus 001 Device 002: ID 0627:0001
>> > Bus 001 Device 003: ID 0627:0001
>> > Bus 001 Device 001: ID 1d6b:0002
>> >
>> > So it looks like that usb keyboard and mouse don't work with KVM on QEMU 
>> > ARM64 while they can work
>> > with TCG. IIUC, all the usb devices are emulated by QEMU, it has nothing 
>> > with KVM. So it really
>> > confused me and I'm not familiar with usb devices. Also I have seen 
>> > someone else reports this issue
>> > before[1].
>> >
>> > [1]https://lists.gnu.org/archive/html/qemu-arm/2016-06/msg00110.html
>> >
>> > Any comments and help are welcome. Thanks in advance.
>> >
> Does your QEMU have this patch?
> http://git.qemu.org/?p=qemu.git;a=commitdiff;h=5d636e21c44ecf982a22a7bc4ca89186079ac283

Great! I applied this patch and the keyboard and mouse can work with KVM
now. Thanks a lot, Ard.

Thanks,
-- 
Shannon



usb keyboard and mouse can't work on QEMU ARM64 with KVM

2016-07-26 Thread Shannon Zhao
Hi,

Recently I'm trying to use usb keyboard and mouse with QEMU on ARM64. Below is 
my QEMU command line,
host and guest kernel both are 4.7.0-rc7+, and I ran it on Hikey board.

qemu-system-aarch64 \
-smp 1 -cpu host -enable-kvm \
-m 256 -M virt \
-k en-us \
-nographic \
-device usb-ehci -device usb-kbd -device usb-mouse -usb\
-kernel Image \
-initrd guestfs.cpio.gz \
-append "rdinit=/sbin/init console=ttyAMA0 root=/dev/ram earlycon=pl011,0x900 rw"

The following guest log shows that the USB controller can be probed but the
keyboard and mouse can't be found.

[1.597433] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[1.599562] ehci-pci: EHCI PCI platform driver
[1.608082] ehci-pci :00:03.0: EHCI Host Controller
[1.609485] ehci-pci :00:03.0: new USB bus registered, assigned bus 
number 1
[1.611833] ehci-pci :00:03.0: irq 49, io mem 0x10041000
[1.623599] ehci-pci :00:03.0: USB 2.0 started, EHCI 1.00
[1.625867] hub 1-0:1.0: USB hub found
[1.626906] hub 1-0:1.0: 6 ports detected
[1.628685] ehci-platform: EHCI generic platform driver
[1.630263] ehci-msm: Qualcomm On-Chip EHCI Host Controller
[1.631947] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[1.633547] ohci-pci: OHCI PCI platform driver
[1.634807] ohci-platform: OHCI generic platform driver
[...]
[1.939001] usb 1-1: new high-speed USB device number 2 using ehci-pci
[   17.467040] usb 1-1: device not accepting address 2, error -110
[   17.579165] usb 1-1: new high-speed USB device number 3 using ehci-pci
[   32.287242] random: dd urandom read with 7 bits of entropy available
[   33.110970] usb 1-1: device not accepting address 3, error -110
[   33.223030] usb 1-1: new high-speed USB device number 4 using ehci-pci
[   43.635185] usb 1-1: device not accepting address 4, error -110
[   43.747033] usb 1-1: new high-speed USB device number 5 using ehci-pci
[   54.159043] usb 1-1: device not accepting address 5, error -110
[   54.160752] usb usb1-port1: unable to enumerate USB device
[   54.307290] usb 1-2: new high-speed USB device number 6 using ehci-pci
[   69.839052] usb 1-2: device not accepting address 6, error -110
[   69.951249] usb 1-2: new high-speed USB device number 7 using ehci-pci
[   85.483171] usb 1-2: device not accepting address 7, error -110
[   85.595035] usb 1-2: new high-speed USB device number 8 using ehci-pci
[   90.619247] usb 1-2: device descriptor read/8, error -110
[   95.743482] usb 1-2: device descriptor read/8, error -110
[   95.959165] usb 1-2: new high-speed USB device number 9 using ehci-pci
[  106.371177] usb 1-2: device not accepting address 9, error -110
[  106.372894] usb usb1-port2: unable to enumerate USB device

lsusb shows:
root@genericarmv8:~# lsusb
Bus 001 Device 001: ID 1d6b:0002

Besides, I have also tried QEMU TCG without KVM. The guest can successfully
probe the USB controller, keyboard, and mouse.
lsusb shows:
root@genericarmv8:~# lsusb
Bus 001 Device 002: ID 0627:0001
Bus 001 Device 003: ID 0627:0001
Bus 001 Device 001: ID 1d6b:0002

So it looks like the USB keyboard and mouse don't work with KVM on QEMU
ARM64 while they do work with TCG. IIUC, all the USB devices are emulated
by QEMU, so this has nothing to do with KVM. It really confuses me, and I'm
not familiar with USB devices. I have also seen someone else report this
issue before [1].

[1]https://lists.gnu.org/archive/html/qemu-arm/2016-06/msg00110.html

Any comments and help are welcome. Thanks in advance.

Thanks,
-- 
Shannon



Re: [Query] Does Linux & Qemu support KVM for ARM32 guest on ARM64 host

2016-04-27 Thread Shannon Zhao


On 2016/4/28 9:50, RAVINDRA KUMAR SANDE wrote:
> 
> What I did  :
> 1) Just for investigation, I took a ARMv8 ( OdroidC2 ) board
> 2)  I compiled Linux 3.14 with KVM support for this ARMv8 ( OdroidC2 )
> board, with modification replacing meson_timer by arm timer in its dts
> file.
> Why Linux 3.14 : I took Linux 3.14 because display drivers for this
> board are officially for this version; and I am interested in seeing
> some Linux guest booting with display on.
> 3)  I see from boot log of  that KVM is initialized successfully, and I
> can see /dev/kvm node.
> 4) I built latest Qemu with --enable-kvm on this board natively.
> 
> What I find :
> 1) running "qemu-system-arm  -enable-kvm   -machine vexpress-a9 "
> gives error :  no accelerator found
> 2) running "qemu-system-aarch64 -enable-kvm  -machine vexpress-a9 "
> gives error : kmv_init_vcpu (IOCtl on /dev/kvm) failed, guest not supported
> ( I experimented some modifications as well to overcome above error,
> such as replacing value assigned to cpu->kvm_target etc, but IOCtl call
> is failing)
> 
> Query:
> 1) Does Arm64 Linux not enable KVM support for Arm32 guest ?
> 2) Can qemu-system-arm not use the KVM feature on Arm64 host ?
> 3) Can qemu-system-aarch64 not use KVM feature for Arm32 guest ?
> 
You can use the command below to boot an ARM32 guest on ARM64:

qemu-system-aarch64 -enable-kvm -machine virt,kernel_irqchip=on -cpu
host,aarch64=off 

-- 
Shannon



Re: [PATCH] arm64: KVM: Add braces to multi-line if statement in virtual PMU code

2016-04-01 Thread Shannon Zhao
On 2016/4/1 19:12, Will Deacon wrote:
> The kernel is written in C, not python, so we need braces around
> multi-line if statements. GCC 6 actually warns about this, thanks to the
> fantastic new "-Wmisleading-indentation" flag:
> 
>  | virt/kvm/arm/pmu.c: In function ‘kvm_pmu_overflow_status’:
>  | virt/kvm/arm/pmu.c:198:3: warning: statement is indented as if it were 
> guarded by... [-Wmisleading-indentation]
>  |reg &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>  |^~~
>  | arch/arm64/kvm/../../../virt/kvm/arm/pmu.c:196:2: note: ...this ‘if’ 
> clause, but it is not
>  |   if ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
>  |   ^~
> 
> As it turns out, this particular case is harmless (we just do some &=
> operations with 0), but worth fixing nonetheless.
> 
Ah, thanks! I might have been fooled at that moment. :)

Reviewed-by: Shannon Zhao 

> Signed-off-by: Will Deacon 
> ---
>  virt/kvm/arm/pmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index b5754c6c5508..575c7aa30d7e 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -193,11 +193,12 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu 
> *vcpu)
>  {
>   u64 reg = 0;
>  
> - if ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> + if ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
>   reg = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>   reg &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>   reg &= vcpu_sys_reg(vcpu, PMINTENSET_EL1);
>   reg &= kvm_pmu_valid_counter_mask(vcpu);
> + }
>  
>   return reg;
>  }
> 


-- 
Shannon


Re: [PATCH v13 01/20] ARM64: Move PMU register related defines to asm/perf_event.h

2016-02-29 Thread Shannon Zhao


On 2016/2/29 21:07, Marc Zyngier wrote:
> Shannon,
> 
> On 25/02/16 02:02, Shannon Zhao wrote:
>>
>>
>> On 2016/2/25 1:52, Will Deacon wrote:
>>> On Wed, Feb 24, 2016 at 01:08:21PM +0800, Shannon Zhao wrote:
>>>> From: Shannon Zhao 
>>>>
>>>> To use the ARMv8 PMU related register defines from the KVM code, we move
>>>> the relevant definitions to asm/perf_event.h header file and rename them
>>>> with prefix ARMV8_PMU_.
>>>>
>>>> Signed-off-by: Anup Patel 
>>>> Signed-off-by: Shannon Zhao 
>>>> Acked-by: Marc Zyngier 
>>>> Reviewed-by: Andrew Jones 
>>>> ---
>>>>  arch/arm64/include/asm/perf_event.h | 35 +++
>>>>  arch/arm64/kernel/perf_event.c  | 68 
>>>> ++---
>>>>  2 files changed, 52 insertions(+), 51 deletions(-)
>>>
>>> Looks fine to me, but we're going to get some truly horrible conflicts
>>> in -next.
>>>
>>> I'm open to suggestions on the best way to handle this, but one way
>>> would be:
>>>
>>>   1. Duplicate all the #defines privately in KVM (queue via kvm tree)
>> This way seems not proper I think.
>>
>>>   2. Rebase this patch onto my perf/updates branch [1] (queue via me)
>> While to this series, it really relies on the perf_event.h to compile
>> and test, so maybe for KVM-ARM and KVM maintainers it's not proper.
>>
>>>   3. Patch at -rc1 dropping the #defines from (1) and moving to the new
>>>  perf_event.h stuff
>>>
>> I vote for this way. Since the patch in [1] is small and nothing else
>> relies on them, I think it would be simple to rebase them onto this series.
>>
>>> Thoughts?
>>>
>> Anyway, there are only 3 lines which have conflicts. I'm not sure
>> whether we could handle this when we merge them.
> 
> I think you're missing the point:
> 
> - We want both the arm64 perf and KVM trees to be easy to merge
> - The conflicts are not that simple to resolve
> - We want these conflicts to be solved before it hits Linus' tree
> 
Ah, sorry. I realized this later.

> With that in mind, here's what I'm suggesting we merge as a first patch:
> 
> https://git.kernel.org/cgit/linux/kernel/git/kvmarm/kvmarm.git/commit/?h=queue&id=2029b4b02691ec6ebba3d281068e783353d7e108
> 
> Once this and the perf/updates branch are merged, we can add one last
> patch reverting this hack and actually doing the renaming work (Will has
> posted a resolution for most of the new things).
> 
> Thoughts?
> 
It's fine, I think. (It's the first time I've faced this kind of problem.
:)) Thanks for your help.

-- 
Shannon



[PATCH v15 15/20] KVM: ARM64: Add PMU overflow interrupt routing

2016-02-26 Thread Shannon Zhao
From: Shannon Zhao 

When calling perf_event_create_kernel_counter to create a perf_event,
assign an overflow handler. Then, when the perf event overflows, set the
corresponding bit of the guest PMOVSSET register. If this counter is
enabled and its interrupt is enabled as well, kick the vcpu to sync the
interrupt.

On VM entry, if a counter has overflowed and the interrupt level has
changed, inject the interrupt with the corresponding level. On VM exit,
sync the interrupt level as well if it has changed.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
Reviewed-by: Christoffer Dall 
---
 arch/arm/kvm/arm.c|  8 --
 include/kvm/arm_pmu.h |  5 
 virt/kvm/arm/pmu.c| 69 ++-
 3 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..a7e50d7 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 * non-preemptible context.
 */
preempt_disable();
+   kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
 
@@ -593,6 +595,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
vcpu->arch.power_off || vcpu->arch.pause) {
local_irq_enable();
+   kvm_pmu_sync_hwstate(vcpu);
kvm_timer_sync_hwstate(vcpu);
kvm_vgic_sync_hwstate(vcpu);
preempt_enable();
@@ -642,10 +645,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
	trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
/*
-* We must sync the timer state before the vgic state so that
-* the vgic can properly sample the updated state of the
+* We must sync the PMU and timer state before the vgic state so
+* that the vgic can properly sample the updated state of the
 * interrupt line.
 */
+   kvm_pmu_sync_hwstate(vcpu);
kvm_timer_sync_hwstate(vcpu);
 
kvm_vgic_sync_hwstate(vcpu);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8bc92d1..9c184ed 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -35,6 +35,7 @@ struct kvm_pmu {
int irq_num;
struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
bool ready;
+   bool irq_level;
 };
 
 #define kvm_arm_pmu_v3_ready(v)((v)->arch.pmu.ready)
@@ -44,6 +45,8 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -67,6 +70,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cda869c..74e858c 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -180,6 +181,71 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
kvm_vcpu_kick(vcpu);
 }
 
+static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
+{
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   bool overflow;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return;
+
+   overflow = !!kvm_pmu_overflow_status(vcpu);
+   if (pmu->irq_level != overflow) {
+   pmu->irq_level = overflow;
+   kvm_vg

Re: [PATCH v13 01/20] ARM64: Move PMU register related defines to asm/perf_event.h

2016-02-24 Thread Shannon Zhao


On 2016/2/25 1:52, Will Deacon wrote:
> On Wed, Feb 24, 2016 at 01:08:21PM +0800, Shannon Zhao wrote:
>> From: Shannon Zhao 
>>
>> To use the ARMv8 PMU related register defines from the KVM code, we move
>> the relevant definitions to asm/perf_event.h header file and rename them
>> with prefix ARMV8_PMU_.
>>
>> Signed-off-by: Anup Patel 
>> Signed-off-by: Shannon Zhao 
>> Acked-by: Marc Zyngier 
>> Reviewed-by: Andrew Jones 
>> ---
>>  arch/arm64/include/asm/perf_event.h | 35 +++
>>  arch/arm64/kernel/perf_event.c  | 68 
>> ++---
>>  2 files changed, 52 insertions(+), 51 deletions(-)
> 
> Looks fine to me, but we're going to get some truly horrible conflicts
> in -next.
> 
> I'm open to suggestions on the best way to handle this, but one way
> would be:
> 
>   1. Duplicate all the #defines privately in KVM (queue via kvm tree)
This way doesn't seem proper to me.

>   2. Rebase this patch onto my perf/updates branch [1] (queue via me)
But this series really relies on perf_event.h to compile and test, so
maybe it's not proper for the KVM-ARM and KVM maintainers.

>   3. Patch at -rc1 dropping the #defines from (1) and moving to the new
>  perf_event.h stuff
> 
I vote for this way. Since the patches in [1] are small and nothing else
relies on them, I think it would be simple to rebase them onto this series.

> Thoughts?
> 
Anyway, there are only 3 lines which have conflicts. I'm not sure
whether we could handle this when we merge them.

> Will
> 
> [1] git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git perf/updates
> 
> .
> 

Thanks,
-- 
Shannon



[PATCH v14 15/20] KVM: ARM64: Add PMU overflow interrupt routing

2016-02-24 Thread Shannon Zhao
When calling perf_event_create_kernel_counter to create a perf_event,
assign an overflow handler. Then, when the perf event overflows, set the
corresponding bit of the guest PMOVSSET register. If this counter is
enabled and its interrupt is enabled as well, kick the vcpu to sync the
interrupt.

On VM entry, if a counter has overflowed and the interrupt level has
changed, inject the interrupt with the corresponding level. On VM exit,
sync the interrupt level as well if it has changed.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm/kvm/arm.c|  5 
 include/kvm/arm_pmu.h |  5 
 virt/kvm/arm/pmu.c| 69 ++-
 3 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..5c133ac 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 * non-preemptible context.
 */
preempt_disable();
+   kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
 
@@ -593,6 +595,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
vcpu->arch.power_off || vcpu->arch.pause) {
local_irq_enable();
+   kvm_pmu_sync_hwstate(vcpu);
kvm_timer_sync_hwstate(vcpu);
kvm_vgic_sync_hwstate(vcpu);
preempt_enable();
@@ -641,6 +644,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
	kvm_guest_exit();
	trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
+   kvm_pmu_sync_hwstate(vcpu);
+
/*
 * We must sync the timer state before the vgic state so that
 * the vgic can properly sample the updated state of the
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8bc92d1..9c184ed 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -35,6 +35,7 @@ struct kvm_pmu {
int irq_num;
struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
bool ready;
+   bool irq_level;
 };
 
 #define kvm_arm_pmu_v3_ready(v)((v)->arch.pmu.ready)
@@ -44,6 +45,8 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -67,6 +70,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cda869c..74e858c 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -180,6 +181,71 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
kvm_vcpu_kick(vcpu);
 }
 
+static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
+{
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   bool overflow;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return;
+
+   overflow = !!kvm_pmu_overflow_status(vcpu);
+   if (pmu->irq_level != overflow) {
+   pmu->irq_level = overflow;
+   kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+   pmu->irq_num, overflow);
+   }
+}
+
+/**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Check if the PMU has overflowed while we were running in the host, and 
inject
+ * an interrupt if that was the case.
+ */
+v

Re: [PATCH v13 15/20] KVM: ARM64: Add PMU overflow interrupt routing

2016-02-24 Thread Shannon Zhao



On 2016/2/24 21:19, Marc Zyngier wrote:

On 24/02/16 12:27, Christoffer Dall wrote:

>On Wed, Feb 24, 2016 at 01:08:35PM +0800, Shannon Zhao wrote:

>>From: Shannon Zhao
>>
>>When calling perf_event_create_kernel_counter to create perf_event,
>>assign a overflow handler. Then when the perf event overflows, set the
>>corresponding bit of guest PMOVSSET register. If this counter is enabled
>>and its interrupt is enabled as well, kick the vcpu to sync the
>>interrupt.
>>
>>On VM entry, if there is counter overflowed, inject the interrupt with
>>the level set to 1. Otherwise, inject the interrupt with the level set
>>to 0.
>>
>>Signed-off-by: Shannon Zhao
>>Reviewed-by: Marc Zyngier
>>Reviewed-by: Andrew Jones
>>---
>>  arch/arm/kvm/arm.c|  2 ++
>>  include/kvm/arm_pmu.h |  3 +++
>>  virt/kvm/arm/pmu.c| 51 
++-
>>  3 files changed, 55 insertions(+), 1 deletion(-)
>>
>>diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>index dda1959..f54264c 100644
>>--- a/arch/arm/kvm/arm.c
>>+++ b/arch/arm/kvm/arm.c
>>@@ -28,6 +28,7 @@
>>  #include 
>>  #include 
>>  #include 
>>+#include 
>>
>>  #define CREATE_TRACE_POINTS
>>  #include "trace.h"
>>@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
>> * non-preemptible context.
>> */
>>preempt_disable();
>>+   kvm_pmu_flush_hwstate(vcpu);
>>kvm_timer_flush_hwstate(vcpu);
>>kvm_vgic_flush_hwstate(vcpu);
>>
>>diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>index 8bc92d1..0aed4d4 100644
>>--- a/include/kvm/arm_pmu.h
>>+++ b/include/kvm/arm_pmu.h
>>@@ -35,6 +35,7 @@ struct kvm_pmu {
>>int irq_num;
>>struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
>>bool ready;
>>+   bool irq_level;
>>  };
>>
>>  #define kvm_arm_pmu_v3_ready(v)   ((v)->arch.pmu.ready)
>>@@ -44,6 +45,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
>>  void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
>>  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
>>  void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
>>+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
>>  void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
>>  void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
>>  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>>@@ -67,6 +69,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct 
kvm_vcpu *vcpu)
>>  static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) 
{}
>>  static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
>>  static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
>>+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
>>  static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 
val) {}
>>  static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
>>  static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
>>diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>index cda869c..1cd4214 100644
>>--- a/virt/kvm/arm/pmu.c
>>+++ b/virt/kvm/arm/pmu.c
>>@@ -21,6 +21,7 @@
>>  #include 
>>  #include 
>>  #include 
>>+#include 
>>
>>  /**
>>   * kvm_pmu_get_counter_value - get PMU counter value
>>@@ -181,6 +182,53 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
>>  }
>>
>>  /**
>>+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
>>+ * @vcpu: The vcpu pointer
>>+ *
>>+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
>>+ */
>>+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
>>+{
>>+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>+   bool overflow;
>>+
>>+   if (!kvm_arm_pmu_v3_ready(vcpu))
>>+   return;
>>+
>>+   overflow = !!kvm_pmu_overflow_status(vcpu);
>>+   if (pmu->irq_level != overflow) {
>>+   pmu->irq_level = overflow;
>>+   kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>+   pmu->irq_num, overflow);
>>+   }

>
>a consequence of only doing this on flush and not checking if the input
>to the vgic should be adjusted on sync is that if you exit the guest
>because the guest does 

[PATCH v13 15/20] KVM: ARM64: Add PMU overflow interrupt routing

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

When calling perf_event_create_kernel_counter to create a perf_event,
assign an overflow handler. Then, when the perf event overflows, set the
corresponding bit of the guest PMOVSSET register. If this counter is
enabled and its interrupt is enabled as well, kick the vcpu to sync the
interrupt.

On VM entry, if a counter has overflowed, inject the interrupt with the
level set to 1. Otherwise, inject the interrupt with the level set to 0.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm/kvm/arm.c|  2 ++
 include/kvm/arm_pmu.h |  3 +++
 virt/kvm/arm/pmu.c| 51 ++-
 3 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..f54264c 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 * non-preemptible context.
 */
preempt_disable();
+   kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8bc92d1..0aed4d4 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -35,6 +35,7 @@ struct kvm_pmu {
int irq_num;
struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
bool ready;
+   bool irq_level;
 };
 
 #define kvm_arm_pmu_v3_ready(v)((v)->arch.pmu.ready)
@@ -44,6 +45,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -67,6 +69,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cda869c..1cd4214 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -181,6 +182,53 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
 }
 
 /**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   bool overflow;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return;
+
+   overflow = !!kvm_pmu_overflow_status(vcpu);
+   if (pmu->irq_level != overflow) {
+   pmu->irq_level = overflow;
+   kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+   pmu->irq_num, overflow);
+   }
+}
+
+static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
+{
+   struct kvm_pmu *pmu;
+   struct kvm_vcpu_arch *vcpu_arch;
+
+   pmc -= pmc->idx;
+   pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
+   vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
+   return container_of(vcpu_arch, struct kvm_vcpu, arch);
+}
+
+/**
+ * When perf event overflows, call kvm_pmu_overflow_set to set overflow status.
+ */
+static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
+{
+   struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+   struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+   int idx = pmc->idx;
+
+   kvm_pmu_overflow_set(vcpu, BIT(idx));
+}
+
+/**
  * kvm_pmu_software_increment - do software increment
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMSWINC register
@@ -291,7 +339,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, 
u64 data,
/* The initial sample period (overflow count) of an event. */
attr.sample_period 

[PATCH v13 14/20] KVM: ARM64: Add access handler for PMUSERENR register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

This register resets as unknown in 64bit mode while it resets as zero
in 32bit mode. Here we choose to reset it as zero for consistency.

PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
accessed from EL0. Add some check helpers to handle the access from EL0.

When these bits are zero, only reading PMUSERENR will trap to EL2, while
writing PMUSERENR or reading/writing other PMU registers will trap to
EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
(HCR.TGE==0) there is no way to get these traps. Here we write 0xf to the
physical PMUSERENR register on VM entry, so that all PMU access from EL0
traps to EL2. Within the register access handler we check the value of
the guest PMUSERENR register to decide whether this access is allowed.
If not allowed, return false to inject an UND into the guest.
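
(Illustrative sketch only: with these helpers, a handler such as access_pmcr
can gate EL0 accesses as below; the false return makes the trap handling
code inject an UND into the guest.)

	if (pmu_access_el0_disabled(vcpu))
		return false;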

Signed-off-by: Shannon Zhao 
---
 arch/arm64/include/asm/kvm_host.h   |   1 +
 arch/arm64/include/asm/perf_event.h |   9 
 arch/arm64/kvm/hyp/hyp.h|   1 +
 arch/arm64/kvm/hyp/switch.c |   3 ++
 arch/arm64/kvm/sys_regs.c   | 101 ++--
 5 files changed, 110 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index de1f82d..7b61675 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -128,6 +128,7 @@ enum vcpu_sysreg {
PMINTENSET_EL1, /* Interrupt Enable Set Register */
PMOVSSET_EL0,   /* Overflow Flag Status Set Register */
PMSWINC_EL0,/* Software Increment Register */
+   PMUSERENR_EL0,  /* User Enable Register */
 
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
diff --git a/arch/arm64/include/asm/perf_event.h 
b/arch/arm64/include/asm/perf_event.h
index c3f5937..76e1931 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -56,6 +56,15 @@
 #defineARMV8_PMU_EXCLUDE_EL0   (1 << 30)
 #defineARMV8_PMU_INCLUDE_EL2   (1 << 27)
 
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN   (1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW   (1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR   (1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER   (1 << 3) /* Event counter can be read at EL0 */
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fb27517..c65f8c9 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define __hyp_text __section(.hyp.text) notrace
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f0e7bdf..dfe111d 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -41,6 +41,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
val |= CPTR_EL2_TTA | CPTR_EL2_TFP;
write_sysreg(val, cptr_el2);
 
+   /* Make sure we trap PMU access from EL0 to EL2 */
+   write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
@@ -49,6 +51,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu 
*vcpu)
write_sysreg(HCR_RW, hcr_el2);
write_sysreg(0, hstr_el2);
write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
+   write_sysreg(0, pmuserenr_el0);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 12f36ef..fe15c23 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct 
sys_reg_desc *r)
vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 }
 
+static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   return !((reg & ARMV8_PMU_USERENR_EN) || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   return !((reg & (ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN))
+|| vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   return !((reg & (ARMV8_PMU_USERENR_CR | ARMV8_PMU_USERENR_EN))
+|| vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   retur

[PATCH v13 00/20] KVM: ARM64: Add guest PMU support

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

This patchset adds guest PMU support for KVM on ARM64. It takes a
trap-and-emulate approach. When the guest wants to monitor one event, the
access is trapped by KVM, and KVM calls the perf_event API to create a perf
event and the relevant perf_event APIs to get the count value of the event.

Use perf to test this patchset in guest. When using "perf list", it
shows the list of the hardware events and hardware cache events perf
supports. Then use "perf stat -e EVENT" to monitor some event. For
example, use "perf stat -e cycles" to count cpu cycles and
"perf stat -e cache-misses" to count cache misses.

Below are the outputs of "perf stat -r 5 sleep 5" when running in host
and guest.

Host:
 Performance counter stats for 'sleep 5' (5 runs):

          0.529248  task-clock (msec)        #    0.000 CPUs utilized    ( +-  1.65% )
                 1  context-switches         #    0.002 M/sec
                 0  cpu-migrations           #    0.000 K/sec
                49  page-faults              #    0.092 M/sec            ( +-  1.05% )
           1104279  cycles                   #    2.087 GHz              ( +-  1.65% )
   <not supported>  stalled-cycles-frontend
   <not supported>  stalled-cycles-backend
            528112  instructions             #    0.48  insns per cycle  ( +-  1.12% )
   <not supported>  branches
              9579  branch-misses            #   18.099 M/sec            ( +-  2.40% )

       5.000851904 seconds time elapsed                                  ( +-  0.00% )

Guest:
 Performance counter stats for 'sleep 5' (5 runs):

          0.695412  task-clock (msec)        #    0.000 CPUs utilized    ( +-  1.26% )
                 1  context-switches         #    0.001 M/sec
                 0  cpu-migrations           #    0.000 K/sec
                49  page-faults              #    0.070 M/sec            ( +-  1.29% )
           1430471  cycles                   #    2.057 GHz              ( +-  1.25% )
   <not supported>  stalled-cycles-frontend
   <not supported>  stalled-cycles-backend
            659173  instructions             #    0.46  insns per cycle  ( +-  2.64% )
   <not supported>  branches
             10893  branch-misses            #   15.664 M/sec            ( +-  1.23% )

       5.001277044 seconds time elapsed                                  ( +-  0.00% )

Run a cycle counter read test like the one below in both guest and host:

static void test(void)
{
	unsigned long count = 0, count1, count2;

	count1 = read_cycles();
	count++;
	count2 = read_cycles();
}
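
read_cycles() is not shown here; on ARM64 it can be assumed to be a direct
read of the cycle counter, e.g.:

static inline unsigned long read_cycles(void)
{
	unsigned long cycles;

	asm volatile("mrs %0, pmccntr_el0" : "=r" (cycles));
	return cycles;
}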

Host:
count1: 3046505444
count2: 3046505575
delta: 131

Guest:
count1: 5932773531
count2: 5932773668
delta: 137

The gap between guest and host is very small. One reason for this, I
think, is that cycles spent in EL2 and in the host are not counted since
we set exclude_hv = 1, so the cycles used to save/restore registers at
EL2 are not included.

This patchset can be fetched from [1] and the relevant QEMU version for
test can be fetched from [2].

The results of 'perf test' can be found from [3][4].
The results of perf_event_tests test suite can be found from [5][6].

Also, I have tested "perf top" in two VMs and host at the same time. It
works well.

Thanks,
Shannon

[1] https://git.linaro.org/people/shannon.zhao/linux-mainline.git  
KVM_ARM64_PMU_v13
[2] https://git.linaro.org/people/shannon.zhao/qemu.git  PMU
[3] http://people.linaro.org/~shannon.zhao/PMU/perf-test-host.txt
[4] http://people.linaro.org/~shannon.zhao/PMU/perf-test-guest.txt
[5] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-host.txt
[6] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-guest.txt

Changes since v12:
* Fix bisect problem
* Move ARMV8_PMU_PMCR_LC into the patch which firstly uses it
* Inject PMU irq only when the level changes, to save cycles (Marc, thanks!)

Changes since v11:
* Move PMU register related defines to asm/perf_event.h and rename them
  with prefix ARMV8_PMU_*
* BUG_ON when writing to PMCEID0/1 register
* Lower the PMU interrupt line when PMCR_EL0.E == 0
* Fix some coding styles
* Drop kvm_arm_pmu_irq_access

Changes since v10:
* Check if PMCR.N is zero before using GENMASK
* Fix attr.disabled
* use same way to reset counter's value in PATCH 14

Changes since v9:
* Change kvm_arm_support_pmu_v3 to a bool function [PATCH 19/21]
* Fix several typos, change the checking logic of kvm_arm_pmu_v3_init and
  change irq_is_invalid to irq_is_valid [PATCH 21/21]
* Add Acks and Rb from Peter and Andrew, thanks a lot

Changes since v8:
* Fix the wrong use of r->reg in register accessors for 32bit part
* Rewrite the handle of PMUSERENR based on the new inject UND patch
* Drop the inline attribute
* Introduce SET/GET/HAS_DEVICE_ATTR for vcpu iotcl and set the PMU
  overflow inter

[PATCH v13 01/20] ARM64: Move PMU register related defines to asm/perf_event.h

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

To use the ARMv8 PMU related register defines from the KVM code, we move
the relevant definitions to asm/perf_event.h header file and rename them
with prefix ARMV8_PMU_.

Signed-off-by: Anup Patel 
Signed-off-by: Shannon Zhao 
Acked-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/perf_event.h | 35 +++
 arch/arm64/kernel/perf_event.c  | 68 ++---
 2 files changed, 52 insertions(+), 51 deletions(-)

diff --git a/arch/arm64/include/asm/perf_event.h 
b/arch/arm64/include/asm/perf_event.h
index 7bd3cdb..5c77ef8 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -17,6 +17,41 @@
 #ifndef __ASM_PERF_EVENT_H
 #define __ASM_PERF_EVENT_H
 
+#define ARMV8_PMU_MAX_COUNTERS  32
+#define ARMV8_PMU_COUNTER_MASK  (ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E   (1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P   (1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C   (1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X   (1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_N_SHIFT  11   /* Number of counters supported */
+#define ARMV8_PMU_PMCR_N_MASK   0x1f
+#define ARMV8_PMU_PMCR_MASK     0x3f /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_PMU_OVSR_MASK        0xffffffff  /* Mask for writable bits */
+#define ARMV8_PMU_OVERFLOWED_MASK  ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_PMU_EVTYPE_MASK   0xc80003ff  /* Mask for writable bits */
+#define ARMV8_PMU_EVTYPE_EVENT  0x3ff       /* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_PMU_EXCLUDE_EL1   (1 << 31)
+#define ARMV8_PMU_EXCLUDE_EL0   (1 << 30)
+#define ARMV8_PMU_INCLUDE_EL2   (1 << 27)
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index f7ab14c..212c9fc4 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * ARMv8 PMUv3 Performance Events handling code.
@@ -333,9 +334,6 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
 #define ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
 
-#define ARMV8_MAX_COUNTERS  32
-#define ARMV8_COUNTER_MASK  (ARMV8_MAX_COUNTERS - 1)
-
 /*
  * ARMv8 low level PMU access
  */
@@ -344,39 +342,7 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
  * Perf Event to low level counters mapping
  */
 #define ARMV8_IDX_TO_COUNTER(x) \
-	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
-
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E   (1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P   (1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C   (1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X   (1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
-#define ARMV8_PMCR_N_SHIFT  11   /* Number of counters supported */
-#define ARMV8_PMCR_N_MASK   0x1f
-#define ARMV8_PMCR_MASK     0x3f /* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#define ARMV8_OVSR_MASK        0xffffffff  /* Mask for writable bits */
-#define ARMV8_OVERFLOWED_MASK  ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#define ARMV8_EVTYPE_MASK   0xc80003ff  /* Mask for writable bits */
-#define ARMV8_EVTYPE_EVENT  0x3ff       /* Mask for EVENT bits */
-
-/*
- * Event filters for PMUv3
- */
-#define ARMV8_EXCLUDE_EL1   (1 << 31)
-#define ARMV8_EXCLUDE_EL0   (1 << 30)
-#define ARMV8_INCLUDE_EL2   (1 << 27)
+	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
 
 static inline u32 armv8pmu_pmcr_read(void)
 {
@@ -387,14 +353,14 @@ static inline u32 armv8pmu_pmcr_read(void)
 
 static inline void armv8pmu_pmcr_write(u32 val)
 {
-   val &= ARMV8_PMCR_MASK;
+   val &= ARMV8_PMU_PMCR_MASK;
isb();
asm volatile("msr pmcr_el0, %0" :: "r" (val));
 }
 
 static

[PATCH v13 16/20] KVM: ARM64: Reset PMU state when resetting vcpu

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

When resetting the vcpu, the PMU state needs to be reset to its initial status.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm64/kvm/reset.c |  3 +++
 include/kvm/arm_pmu.h  |  2 ++
 virt/kvm/arm/pmu.c | 17 +
 3 files changed, 22 insertions(+)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f34745c..dfbce78 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -120,6 +120,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
/* Reset system registers */
kvm_reset_sys_regs(vcpu);
 
+   /* Reset PMU */
+   kvm_pmu_vcpu_reset(vcpu);
+
/* Reset timer */
return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 0aed4d4..a227213 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -42,6 +42,7 @@ struct kvm_pmu {
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
@@ -66,6 +67,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
 {
return 0;
 }
+static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1cd4214..ee3772c 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -84,6 +84,23 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, 
struct kvm_pmc *pmc)
}
 }
 
+/**
+ * kvm_pmu_vcpu_reset - reset pmu state for cpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
+{
+   int i;
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+   for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
+   kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
+   pmu->pmc[i].idx = i;
+   pmu->pmc[i].bitmask = 0xffffffffUL;
+   }
+}
+
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v13 17/20] KVM: ARM64: Free perf event of PMU when destroying vcpu

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

When KVM frees a VCPU, it needs to free the perf_events of the PMU.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm/kvm/arm.c|  1 +
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c| 21 +
 3 files changed, 24 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f54264c..d2c2cc3 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -266,6 +266,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
kvm_mmu_free_memory_caches(vcpu);
kvm_timer_vcpu_terminate(vcpu);
kvm_vgic_vcpu_destroy(vcpu);
+   kvm_pmu_vcpu_destroy(vcpu);
kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index a227213..fd396d6 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -43,6 +43,7 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 
select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
@@ -68,6 +69,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
return 0;
 }
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ee3772c..cb946fd 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -101,6 +101,27 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
}
 }
 
+/**
+ * kvm_pmu_vcpu_destroy - free perf event of PMU for cpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+   int i;
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+   for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
+   struct kvm_pmc *pmc = &pmu->pmc[i];
+
+   if (pmc->perf_event) {
+   perf_event_disable(pmc->perf_event);
+   perf_event_release_kernel(pmc->perf_event);
+   pmc->perf_event = NULL;
+   }
+   }
+}
+
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v13 06/20] KVM: ARM64: Add access handler for event counter register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

These kinds of registers include PMEVCNTRn, PMCCNTR and PMXEVCNTR, where
PMXEVCNTR is mapped to one of the PMEVCNTRn.

The access handler translates all aarch32 register offsets to aarch64
ones and uses vcpu_sys_reg() to access their values, which avoids having
to deal with big-endian issues.

When reading these registers, return the sum of the register value and
the value the perf event has counted.
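
(The virt/kvm/arm/pmu.c part of the diff is cut off below; the read path is
roughly the following sketch, reconstructed from the series:)

u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
{
	u64 counter, reg, enabled, running;
	struct kvm_pmu *pmu = &vcpu->arch.pmu;
	struct kvm_pmc *pmc = &pmu->pmc[select_idx];

	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
	counter = vcpu_sys_reg(vcpu, reg);

	/* The real counter value is the register value plus whatever the
	 * backing perf event has counted since it was programmed.
	 */
	if (pmc->perf_event)
		counter += perf_event_read_value(pmc->perf_event, &enabled,
						 &running);

	return counter & pmc->bitmask;
}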

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |   3 +
 arch/arm64/kvm/Makefile   |   1 +
 arch/arm64/kvm/sys_regs.c | 139 --
 include/kvm/arm_pmu.h |  11 +++
 virt/kvm/arm/pmu.c|  63 +
 5 files changed, 213 insertions(+), 4 deletions(-)
 create mode 100644 virt/kvm/arm/pmu.c

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index e342b48..627f01e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -118,6 +118,9 @@ enum vcpu_sysreg {
/* Performance Monitors Registers */
PMCR_EL0,   /* Control Register */
PMSELR_EL0, /* Event Counter Selection Register */
+   PMEVCNTR0_EL0,  /* Event Counter Register (0-30) */
+   PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
+   PMCCNTR_EL0,/* Cycle Counter Register */
 
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index caee9ee..122cff4 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -26,3 +26,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
+kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ca8cdf6..ff3214b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -513,6 +513,56 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
return true;
 }
 
+static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
+{
+   u64 pmcr, val;
+
+   pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
+   val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+   if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX)
+   return false;
+
+   return true;
+}
+
+static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+   u64 idx;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (r->CRn == 9 && r->CRm == 13) {
+   if (r->Op2 == 2) {
+   /* PMXEVCNTR_EL0 */
+   idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
+ & ARMV8_PMU_COUNTER_MASK;
+   } else if (r->Op2 == 0) {
+   /* PMCCNTR_EL0 */
+   idx = ARMV8_PMU_CYCLE_IDX;
+   } else {
+   BUG();
+   }
+   } else if (r->CRn == 14 && (r->CRm & 12) == 8) {
+   /* PMEVCNTRn_EL0 */
+   idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
+   } else {
+   BUG();
+   }
+
+   if (!pmu_counter_idx_valid(vcpu, idx))
+   return false;
+
+   if (p->is_write)
+   kvm_pmu_set_counter_value(vcpu, idx, p->regval);
+   else
+   p->regval = kvm_pmu_get_counter_value(vcpu, idx);
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -528,6 +578,13 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
{ Op0(0b10), Op1(0b000), CRn(0b), CRm((n)), Op2(0b111), \
  trap_wcr, reset_wcr, n, 0,  get_wcr, set_wcr }
 
+/* Macro to expand the PMEVCNTRn_EL0 register */
+#define PMU_PMEVCNTR_EL0(n)\
+   /* PMEVCNTRn_EL0 */ \
+   { Op0(0b11), Op1(0b011), CRn(0b1110),   \
+ CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), }
+
 /*
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -721,13 +778,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmceid },
/* PMCCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001),

[PATCH v13 09/20] KVM: ARM64: Add access handler for event type register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

These kinds of registers include PMEVTYPERn, PMCCFILTR and PMXEVTYPER,
where PMXEVTYPER is mapped to one of the PMEVTYPERn or to PMCCFILTR.

The access handler translates all aarch32 register offsets to aarch64
ones and uses vcpu_sys_reg() to access their values, which avoids having
to deal with big-endian issues.

When writing to these registers, create a perf_event for the selected
event type.
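
(Guest-side sketch of the PMSELR indirection handled below, assuming counter
2 is implemented; a write to PMXEVTYPER_EL0 lands in the PMEVTYPER2_EL0
copy:)

	mov	x1, #2
	msr	pmselr_el0, x1		// select counter 2
	msr	pmxevtyper_el0, x0	// trapped: KVM stores x0 into its
					// PMEVTYPER2_EL0 copy and (re)creates
					// the backing perf event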

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |   3 +
 arch/arm64/kvm/sys_regs.c | 126 +-
 2 files changed, 127 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index e6910db..bf97e79 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -121,6 +121,9 @@ enum vcpu_sysreg {
PMEVCNTR0_EL0,  /* Event Counter Register (0-30) */
PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
PMCCNTR_EL0,/* Cycle Counter Register */
+   PMEVTYPER0_EL0, /* Event Type Register (0-30) */
+   PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
+   PMCCFILTR_EL0,  /* Cycle Count Filter Register */
PMCNTENSET_EL0, /* Count Enable Set Register */
 
/* 32bit specific registers. Keep them at the end of the range */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d4b6ae3..4faf324 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -563,6 +563,42 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
return true;
 }
 
+static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+  const struct sys_reg_desc *r)
+{
+   u64 idx, reg;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
+   /* PMXEVTYPER_EL0 */
+   idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
+   reg = PMEVTYPER0_EL0 + idx;
+   } else if (r->CRn == 14 && (r->CRm & 12) == 12) {
+   idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
+   if (idx == ARMV8_PMU_CYCLE_IDX)
+   reg = PMCCFILTR_EL0;
+   else
+   /* PMEVTYPERn_EL0 */
+   reg = PMEVTYPER0_EL0 + idx;
+   } else {
+   BUG();
+   }
+
+   if (!pmu_counter_idx_valid(vcpu, idx))
+   return false;
+
+   if (p->is_write) {
+   kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
+   vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
+   } else {
+   p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
+   }
+
+   return true;
+}
+
 static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
   const struct sys_reg_desc *r)
 {
@@ -612,6 +648,13 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
  CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
  access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), }
 
+/* Macro to expand the PMEVTYPERn_EL0 register */
+#define PMU_PMEVTYPER_EL0(n)   \
+   /* PMEVTYPERn_EL0 */\
+   { Op0(0b11), Op1(0b011), CRn(0b1110),   \
+ CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
+
 /*
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -808,7 +851,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 },
/* PMXEVTYPER_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
- trap_raz_wi },
+ access_pmu_evtyper },
/* PMXEVCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
  access_pmu_evcntr },
@@ -858,6 +901,44 @@ static const struct sys_reg_desc sys_reg_descs[] = {
PMU_PMEVCNTR_EL0(28),
PMU_PMEVCNTR_EL0(29),
PMU_PMEVCNTR_EL0(30),
+   /* PMEVTYPERn_EL0 */
+   PMU_PMEVTYPER_EL0(0),
+   PMU_PMEVTYPER_EL0(1),
+   PMU_PMEVTYPER_EL0(2),
+   PMU_PMEVTYPER_EL0(3),
+   PMU_PMEVTYPER_EL0(4),
+   PMU_PMEVTYPER_EL0(5),
+   PMU_PMEVTYPER_EL0(6),
+   PMU_PMEVTYPER_EL0(7),
+   PMU_PMEVTYPER_EL0(8),
+   PMU_PMEVTYPER_EL0(9),
+   PMU_PMEVTYPER_EL0(10),
+   PMU_PMEVTYPER_EL0(11),
+   PMU_PMEVTYPER_EL0(12),
+   PMU_PMEVTYPER_EL0(13),
+   PMU_PMEVTYPER_EL0(14),
+   PMU_PMEVTYPER_EL0(15),
+   PMU_PMEVTYPER_EL0(16),
+   PMU_PMEVTYPER_EL0(17),
+ 

[PATCH v13 20/20] KVM: ARM64: Add a new vcpu device control group for PMUv3

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

To configure the virtual PMUv3 overflow interrupt number, we use the
vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ
attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group.

After configuring the PMUv3, call the vcpu ioctl with attribute
KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3.
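
(Hypothetical userspace sketch of the resulting flow; assumes <linux/kvm.h>
and <sys/ioctl.h>, an open vcpu_fd, and an example interrupt number:)

	int irq = 23;	/* example PPI for the PMU overflow interrupt */
	struct kvm_device_attr attr = {
		.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr	= KVM_ARM_VCPU_PMU_V3_IRQ,
		.addr	= (__u64)(unsigned long)&irq,
	};

	ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);	/* set the overflow IRQ */

	attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
	attr.addr = 0;
	ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);	/* then initialize */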

Signed-off-by: Shannon Zhao 
Acked-by: Peter Maydell 
Reviewed-by: Andrew Jones 
Reviewed-by: Christoffer Dall 
---
 Documentation/virtual/kvm/devices/vcpu.txt |  25 +++
 arch/arm/include/asm/kvm_host.h|  15 
 arch/arm/kvm/arm.c |   3 +
 arch/arm64/include/asm/kvm_host.h  |   6 ++
 arch/arm64/include/uapi/asm/kvm.h  |   5 ++
 arch/arm64/kvm/guest.c |  51 +
 include/kvm/arm_pmu.h  |  23 ++
 virt/kvm/arm/pmu.c | 112 +
 8 files changed, 240 insertions(+)

diff --git a/Documentation/virtual/kvm/devices/vcpu.txt 
b/Documentation/virtual/kvm/devices/vcpu.txt
index 3cc59c5..c041658 100644
--- a/Documentation/virtual/kvm/devices/vcpu.txt
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -6,3 +6,28 @@ KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface 
uses the same struct
 kvm_device_attr as other devices, but targets VCPU-wide settings and controls.
 
 The groups and attributes per virtual cpu, if any, are architecture specific.
+
+1. GROUP: KVM_ARM_VCPU_PMU_V3_CTRL
+Architectures: ARM64
+
+1.1. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_IRQ
+Parameters: in kvm_device_attr.addr the address for PMU overflow interrupt is a
+pointer to an int
+Returns: -EBUSY: The PMU overflow interrupt is already set
+ -ENXIO: The overflow interrupt not set when attempting to get it
+ -ENODEV: PMUv3 not supported
+ -EINVAL: Invalid PMU overflow interrupt number supplied
+
+A value describing the PMUv3 (Performance Monitor Unit v3) overflow interrupt
+number for this vcpu. This interrupt could be a PPI or SPI, but the interrupt
+type must be the same for each vcpu. As a PPI, the interrupt number is the same for
+all vcpus, while as an SPI it must be a separate number per vcpu.
+
+1.2 ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_INIT
+Parameters: no additional parameter in kvm_device_attr.addr
+Returns: -ENODEV: PMUv3 not supported
+ -ENXIO: PMUv3 not properly configured as required prior to calling this
+ attribute
+ -EBUSY: PMUv3 already initialized
+
+Request the initialization of the PMUv3.
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f9f2779..6dd0992 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -242,5 +242,20 @@ static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
+static inline int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   return -ENXIO;
+}
 
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 34d7395..dc8644f 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -833,6 +833,7 @@ static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
 
switch (attr->group) {
default:
+   ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr);
break;
}
 
@@ -846,6 +847,7 @@ static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
 
switch (attr->group) {
default:
+   ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr);
break;
}
 
@@ -859,6 +861,7 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
 
switch (attr->group) {
default:
+   ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr);
break;
}
 
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index cd177e9..48e1a12 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -359,5 +359,11 @@ void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+  struct kvm_device_attr *attr);
+int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+  struct kvm_device_attr *at

[PATCH v13 04/20] KVM: ARM64: Add access handler for PMSELR register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for
its reset handler. When reading PMSELR, return the PMSELR.SEL field to
the guest.

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/sys_regs.c | 20 ++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 1f3ca98..e342b48 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -117,6 +117,7 @@ enum vcpu_sysreg {
 
/* Performance Monitors Registers */
PMCR_EL0,   /* Control Register */
+   PMSELR_EL0, /* Event Counter Selection Register */
 
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e88ae2d..b05e20f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -477,6 +477,22 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
return true;
 }
 
+static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (p->is_write)
+   vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
+   else
+   /* return PMSELR.SEL field */
+   p->regval = vcpu_sys_reg(vcpu, PMSELR_EL0)
+   & ARMV8_PMU_COUNTER_MASK;
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -676,7 +692,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  trap_raz_wi },
/* PMSELR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
- trap_raz_wi },
+ access_pmselr, reset_unknown, PMSELR_EL0 },
/* PMCEID0_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
  trap_raz_wi },
@@ -927,7 +943,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 5), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v13 05/20] KVM: ARM64: Add access handler for PMCEID0 and PMCEID1 register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Add an access handler which returns the host value of PMCEID0 or PMCEID1
when the guest accesses these registers. Writing to PMCEID0 or PMCEID1
is UNDEFINED.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/kvm/sys_regs.c | 28 
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b05e20f..ca8cdf6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -493,6 +493,26 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
return true;
 }
 
+static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+   u64 pmceid;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   BUG_ON(p->is_write);
+
+   if (!(p->Op2 & 1))
+   asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
+   else
+   asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
+
+   p->regval = pmceid;
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -695,10 +715,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmselr, reset_unknown, PMSELR_EL0 },
/* PMCEID0_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
- trap_raz_wi },
+ access_pmceid },
/* PMCEID1_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
- trap_raz_wi },
+ access_pmceid },
/* PMCCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
  trap_raz_wi },
@@ -944,8 +964,8 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v13 03/20] KVM: ARM64: Add access handler for PMCR register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Add a reset handler which reads the host value of PMCR_EL0 and resets
the writable bits to architecturally UNKNOWN values, except PMCR.E which
resets to zero. Add an access handler for PMCR.

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/sys_regs.c | 42 +--
 include/kvm/arm_pmu.h |  4 
 3 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 6f0241f..1f3ca98 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -115,6 +115,9 @@ enum vcpu_sysreg {
MDSCR_EL1,  /* Monitor Debug System Control Register */
MDCCINT_EL1,/* Monitor Debug Comms Channel Interrupt Enable Reg */
 
+   /* Performance Monitors Registers */
+   PMCR_EL0,   /* Control Register */
+
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2e90371..e88ae2d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -34,6 +34,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -439,6 +440,43 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const 
struct sys_reg_desc *r)
vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
 }
 
+static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+   u64 pmcr, val;
+
+   asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
+   /* Writable bits of PMCR_EL0 (ARMV8_PMU_PMCR_MASK) is reset to UNKNOWN
+* except PMCR.E resetting to zero.
+*/
+   val = ((pmcr & ~ARMV8_PMU_PMCR_MASK)
+  | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
+   vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+}
+
+static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+   const struct sys_reg_desc *r)
+{
+   u64 val;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (p->is_write) {
+   /* Only update writeable bits of PMCR */
+   val = vcpu_sys_reg(vcpu, PMCR_EL0);
+   val &= ~ARMV8_PMU_PMCR_MASK;
+   val |= p->regval & ARMV8_PMU_PMCR_MASK;
+   vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+   } else {
+   /* PMCR.P & PMCR.C are RAZ */
+   val = vcpu_sys_reg(vcpu, PMCR_EL0)
+ & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
+   p->regval = val;
+   }
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -623,7 +661,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
/* PMCR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
- trap_raz_wi },
+ access_pmcr, reset_pmcr, },
/* PMCNTENSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
  trap_raz_wi },
@@ -885,7 +923,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
 
/* PMU */
-   { Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 3c2fd56..8157fe5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -34,9 +34,13 @@ struct kvm_pmu {
struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
bool ready;
 };
+
+#define kvm_arm_pmu_v3_ready(v)	((v)->arch.pmu.ready)
 #else
 struct kvm_pmu {
 };
+
+#define kvm_arm_pmu_v3_ready(v)	(false)
 #endif
 
 #endif
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v13 11/20] KVM: ARM64: Add access handler for PMOVSSET and PMOVSCLR register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Since the reset value of PMOVSSET and PMOVSCLR is UNKNOWN, use
reset_unknown for their reset handler. Add a handler to emulate writing
the PMOVSSET or PMOVSCLR register.

When writing a non-zero value to PMOVSSET, if the counter and its
interrupt are enabled, kick this vcpu to sync the PMU interrupt.
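
(The virt/kvm/arm/pmu.c hunk is cut off below; kvm_pmu_overflow_set() is
roughly the following sketch, with kvm_pmu_overflow_status() being the
pending-overflow helper introduced elsewhere in the series:)

void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
{
	u64 reg;

	if (val == 0)
		return;

	vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= val;
	reg = kvm_pmu_overflow_status(vcpu);
	if (reg != 0)
		kvm_vcpu_kick(vcpu);
}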

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/sys_regs.c | 29 ++---
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c| 31 +++
 4 files changed, 60 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index c7642de..05f4808 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -126,6 +126,7 @@ enum vcpu_sysreg {
PMCCFILTR_EL0,  /* Cycle Count Filter Register */
PMCNTENSET_EL0, /* Count Enable Set Register */
PMINTENSET_EL1, /* Interrupt Enable Set Register */
+   PMOVSSET_EL0,   /* Overflow Flag Status Set Register */
 
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bfc70b2..6a774f9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -650,6 +650,28 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
return true;
 }
 
+static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+const struct sys_reg_desc *r)
+{
+   u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (p->is_write) {
+   if (r->CRm & 0x2)
+   /* accessing PMOVSSET_EL0 */
+   kvm_pmu_overflow_set(vcpu, p->regval & mask);
+   else
+   /* accessing PMOVSCLR_EL0 */
+   vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
+   } else {
+   p->regval = vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
+   }
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -857,7 +879,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmcnten, NULL, PMCNTENSET_EL0 },
/* PMOVSCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
- trap_raz_wi },
+ access_pmovs, NULL, PMOVSSET_EL0 },
/* PMSWINC_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
  trap_raz_wi },
@@ -884,7 +906,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  trap_raz_wi },
/* PMOVSSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
- trap_raz_wi },
+ access_pmovs, reset_unknown, PMOVSSET_EL0 },
 
/* TPIDR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b), Op2(0b010),
@@ -1198,7 +1220,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmovs },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
@@ -1208,6 +1230,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
+   { Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs },
 
{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index c5737797..60061da 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -43,6 +43,7 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 
select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx);
 #e

[PATCH v13 08/20] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

When we use tools like perf on the host, perf passes the event type and
the id within that event type category to the kernel, and the kernel
maps them to a hardware event number which it writes to the PMU
PMEVTYPER_EL0 register. When KVM gets the event number from the guest,
it directly uses the raw event type to create a perf_event for it.
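
As a worked example of the sample-period computation done below: if the
guest programs a counter to 0xFFFFFFF0 (bitmask 0xFFFFFFFF), then

	attr.sample_period = (-0xFFFFFFF0) & 0xFFFFFFFF = 0x10

so the perf event overflows after 16 more events, exactly when the
architectural 32-bit counter would wrap.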

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
---
 include/kvm/arm_pmu.h |  4 +++
 virt/kvm/arm/pmu.c| 74 +++
 2 files changed, 78 insertions(+)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index b70058e..c5737797 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -43,6 +43,8 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 
select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+   u64 select_idx);
 #else
 struct kvm_pmu {
 };
@@ -61,6 +63,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
 }
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+ u64 data, u64 select_idx) {}
 #endif
 
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index f8dc174..591a11d 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -62,6 +62,27 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 
select_idx, u64 val)
	vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 }
 
+/**
+ * kvm_pmu_stop_counter - stop PMU counter
+ * @pmc: The PMU counter pointer
+ *
+ * If this counter has been configured to monitor some event, release it here.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
+{
+   u64 counter, reg;
+
+   if (pmc->perf_event) {
+   counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
+   reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
+  ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
+   vcpu_sys_reg(vcpu, reg) = counter;
+   perf_event_disable(pmc->perf_event);
+   perf_event_release_kernel(pmc->perf_event);
+   pmc->perf_event = NULL;
+   }
+}
+
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
@@ -127,3 +148,56 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 
val)
perf_event_disable(pmc->perf_event);
}
 }
+
+static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+   return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+  (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
+ * event with given hardware event number. Here we call perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+   u64 select_idx)
+{
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+   struct perf_event *event;
+   struct perf_event_attr attr;
+   u64 eventsel, counter;
+
+   kvm_pmu_stop_counter(vcpu, pmc);
+   eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
+
+   memset(&attr, 0, sizeof(struct perf_event_attr));
+   attr.type = PERF_TYPE_RAW;
+   attr.size = sizeof(attr);
+   attr.pinned = 1;
+   attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
+   attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
+   attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
+   attr.exclude_hv = 1; /* Don't count EL2 events */
+   attr.exclude_host = 1; /* Don't count host events */
+   attr.config = eventsel;
+
+   counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+   /* The initial sample period (overflow count) of an event. */
+   attr.sample_period = (-counter) & pmc->bitmask;
+
+   event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+   if (IS_ERR(event)) {
+   pr_err_once("kvm: pmu event creation failed %ld\n",
+  

[PATCH v13 18/20] KVM: ARM64: Add a new feature bit for PMUv3

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

To support guest PMUv3, use one bit of the VCPU INIT feature array.
Initialize the PMU when initializing the vcpu with that bit and the PMU
overflow interrupt set.
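
(Hypothetical userspace sketch, using the preferred-target ioctl to fill in
the init structure; assumes open vm_fd/vcpu_fd:)

	struct kvm_vcpu_init init;

	ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);	/* fills target */
	init.features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);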

Signed-off-by: Shannon Zhao 
Acked-by: Peter Maydell 
Reviewed-by: Andrew Jones 
---
 Documentation/virtual/kvm/api.txt |  2 ++
 arch/arm64/include/asm/kvm_host.h |  2 +-
 arch/arm64/include/uapi/asm/kvm.h |  1 +
 arch/arm64/kvm/reset.c|  3 +++
 include/kvm/arm_pmu.h |  2 ++
 include/uapi/linux/kvm.h  |  1 +
 virt/kvm/arm/pmu.c| 10 ++
 7 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index 07e4cdf..9684f8d 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2577,6 +2577,8 @@ Possible features:
  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
  Depends on KVM_CAP_ARM_PSCI_0_2.
+   - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
+ Depends on KVM_CAP_ARM_PMU_V3.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 7b61675..cd177e9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -40,7 +40,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 3
+#define KVM_VCPU_MAX_FEATURES 4
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 2d4ca4b..6aedbe3 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -94,6 +94,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
+#define KVM_ARM_VCPU_PMU_V3	3 /* Support guest PMUv3 */
 
 struct kvm_vcpu_init {
__u32 target;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index dfbce78..cf4f28a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -77,6 +77,9 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
case KVM_CAP_GUEST_DEBUG_HW_WPS:
r = get_num_wrps();
break;
+   case KVM_CAP_ARM_PMU_V3:
+   r = kvm_arm_support_pmu_v3();
+   break;
case KVM_CAP_SET_GUEST_DEBUG:
r = 1;
break;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index fd396d6..1d12e55 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -52,6 +52,7 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 
val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx);
+bool kvm_arm_support_pmu_v3(void);
 #else
 struct kvm_pmu {
 };
@@ -78,6 +79,7 @@ static inline void kvm_pmu_software_increment(struct kvm_vcpu 
*vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
  u64 data, u64 select_idx) {}
+static inline bool kvm_arm_support_pmu_v3(void) { return false; }
 #endif
 
 #endif
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 9da9051..dc16d30 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -850,6 +850,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_IOEVENTFD_ANY_LENGTH 122
 #define KVM_CAP_HYPERV_SYNIC 123
 #define KVM_CAP_S390_RI 124
+#define KVM_CAP_ARM_PMU_V3 125
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cb946fd..d226360 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -387,3 +387,13 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, 
u64 data,
 
pmc->perf_event = event;
 }
+
+bool kvm_arm_support_pmu_v3(void)
+{
+   /*
+* Check if HW_PERF_EVENTS are supported by checking the number of
+* hardware performance counters. This could ensure the presence of
+* a physical PMU and that CONFIG_PERF_EVENTS is selected.
+*/
+   return (perf_num_counters() > 0);
+}
-- 
2.0.4


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v13 02/20] KVM: ARM64: Define PMU data structure for each vcpu

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Here we plan to support a virtual PMU for the guest by full software
emulation, so define some basic structs and functions in preparation for
further steps. Define struct kvm_pmc for a performance monitor counter
and struct kvm_pmu for the performance monitor unit of each vcpu.
According to the ARMv8 spec, the PMU contains at most 32
(ARMV8_PMU_MAX_COUNTERS) counters.

Since this only supports ARM64 (or PMUv3), add a separate config symbol
for it.

Signed-off-by: Shannon Zhao 
Acked-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/Kconfig|  7 +++
 include/kvm/arm_pmu.h | 42 +++
 3 files changed, 51 insertions(+)
 create mode 100644 include/kvm/arm_pmu.h

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 689d4c9..6f0241f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -36,6 +36,7 @@
 
 #include 
 #include 
+#include 
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -211,6 +212,7 @@ struct kvm_vcpu_arch {
/* VGIC state */
struct vgic_cpu vgic_cpu;
struct arch_timer_cpu timer_cpu;
+   struct kvm_pmu pmu;
 
/*
 * Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a5272c0..de7450d 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -36,6 +36,7 @@ config KVM
select HAVE_KVM_EVENTFD
select HAVE_KVM_IRQFD
select KVM_ARM_VGIC_V3
+   select KVM_ARM_PMU if HW_PERF_EVENTS
---help---
  Support hosting virtualized guest machines.
  We don't support KVM with 16K page tables yet, due to the multiple
@@ -48,6 +49,12 @@ config KVM_ARM_HOST
---help---
  Provides host support for ARM processors.
 
+config KVM_ARM_PMU
+   bool
+   ---help---
+ Adds support for a virtual Performance Monitoring Unit (PMU) in
+ virtual machines.
+
 source drivers/vhost/Kconfig
 
 endif # VIRTUALIZATION
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
new file mode 100644
index 000..3c2fd56
--- /dev/null
+++ b/include/kvm/arm_pmu.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_KVM_PMU_H
+#define __ASM_ARM_KVM_PMU_H
+
+#ifdef CONFIG_KVM_ARM_PMU
+
+#include 
+#include 
+
+struct kvm_pmc {
+   u8 idx; /* index into the pmu->pmc array */
+   struct perf_event *perf_event;
+   u64 bitmask;
+};
+
+struct kvm_pmu {
+   int irq_num;
+   struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
+   bool ready;
+};
+#else
+struct kvm_pmu {
+};
+#endif
+
+#endif
-- 
2.0.4




[PATCH v13 19/20] KVM: ARM: Introduce per-vcpu kvm device controls

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

In some cases we need to get/set attributes specific to a vcpu, and so
need something other than ONE_REG.

Let's copy the KVM_DEVICE approach, and define the respective ioctls
for the vcpu file descriptor.

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
Acked-by: Peter Maydell 
---
 Documentation/virtual/kvm/api.txt  | 10 +++---
 Documentation/virtual/kvm/devices/vcpu.txt |  8 +
 arch/arm/kvm/arm.c | 55 ++
 arch/arm64/kvm/reset.c |  1 +
 include/uapi/linux/kvm.h   |  1 +
 5 files changed, 71 insertions(+), 4 deletions(-)
 create mode 100644 Documentation/virtual/kvm/devices/vcpu.txt

diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index 9684f8d..cb2ef0b 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2507,8 +2507,9 @@ struct kvm_create_device {
 
 4.80 KVM_SET_DEVICE_ATTR/KVM_GET_DEVICE_ATTR
 
-Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device
-Type: device ioctl, vm ioctl
+Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device,
+  KVM_CAP_VCPU_ATTRIBUTES for vcpu device
+Type: device ioctl, vm ioctl, vcpu ioctl
 Parameters: struct kvm_device_attr
 Returns: 0 on success, -1 on error
 Errors:
@@ -2533,8 +2534,9 @@ struct kvm_device_attr {
 
 4.81 KVM_HAS_DEVICE_ATTR
 
-Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device
-Type: device ioctl, vm ioctl
+Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device,
+  KVM_CAP_VCPU_ATTRIBUTES for vcpu device
+Type: device ioctl, vm ioctl, vcpu ioctl
 Parameters: struct kvm_device_attr
 Returns: 0 on success, -1 on error
 Errors:
diff --git a/Documentation/virtual/kvm/devices/vcpu.txt 
b/Documentation/virtual/kvm/devices/vcpu.txt
new file mode 100644
index 000..3cc59c5
--- /dev/null
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -0,0 +1,8 @@
+Generic vcpu interface
+
+
+The virtual cpu "device" also accepts the ioctls KVM_SET_DEVICE_ATTR,
+KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same 
struct
+kvm_device_attr as other devices, but targets VCPU-wide settings and controls.
+
+The groups and attributes per virtual cpu, if any, are architecture specific.
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index d2c2cc3..34d7395 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -826,11 +826,51 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu 
*vcpu,
return 0;
 }
 
+static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   int ret = -ENXIO;
+
+   switch (attr->group) {
+   default:
+   break;
+   }
+
+   return ret;
+}
+
+static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   int ret = -ENXIO;
+
+   switch (attr->group) {
+   default:
+   break;
+   }
+
+   return ret;
+}
+
+static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   int ret = -ENXIO;
+
+   switch (attr->group) {
+   default:
+   break;
+   }
+
+   return ret;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 unsigned int ioctl, unsigned long arg)
 {
struct kvm_vcpu *vcpu = filp->private_data;
void __user *argp = (void __user *)arg;
+   struct kvm_device_attr attr;
 
switch (ioctl) {
case KVM_ARM_VCPU_INIT: {
@@ -873,6 +913,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return -E2BIG;
return kvm_arm_copy_reg_indices(vcpu, user_list->reg);
}
+   case KVM_SET_DEVICE_ATTR: {
+   if (copy_from_user(&attr, argp, sizeof(attr)))
+   return -EFAULT;
+   return kvm_arm_vcpu_set_attr(vcpu, &attr);
+   }
+   case KVM_GET_DEVICE_ATTR: {
+   if (copy_from_user(&attr, argp, sizeof(attr)))
+   return -EFAULT;
+   return kvm_arm_vcpu_get_attr(vcpu, &attr);
+   }
+   case KVM_HAS_DEVICE_ATTR: {
+   if (copy_from_user(&attr, argp, sizeof(attr)))
+   return -EFAULT;
+   return kvm_arm_vcpu_has_attr(vcpu, &attr);
+   }
default:
return -EINVAL;
}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index cf4f28a..9677bf0 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -81,6 +81,7 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
r = kvm_arm_support_pmu_v3();
break;
case KVM_CAP_SET_GUEST_DEBUG:
+   case KVM_CAP_VCPU_ATTRIBUTES:
 

[PATCH v13 12/20] KVM: ARM64: Add access handler for PMSWINC register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Add an access handler which emulates writing and reading the PMSWINC
register, and add support for creating the software increment event.

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h   |  1 +
 arch/arm64/include/asm/perf_event.h |  2 ++
 arch/arm64/kvm/sys_regs.c   | 20 +++-
 include/kvm/arm_pmu.h   |  2 ++
 virt/kvm/arm/pmu.c  | 34 ++
 5 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 05f4808..de1f82d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -127,6 +127,7 @@ enum vcpu_sysreg {
PMCNTENSET_EL0, /* Count Enable Set Register */
PMINTENSET_EL1, /* Interrupt Enable Set Register */
PMOVSSET_EL0,   /* Overflow Flag Status Set Register */
+   PMSWINC_EL0,/* Software Increment Register */
 
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
diff --git a/arch/arm64/include/asm/perf_event.h 
b/arch/arm64/include/asm/perf_event.h
index 5c77ef8..ceb14a1 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -45,6 +45,8 @@
 #defineARMV8_PMU_EVTYPE_MASK   0xc80003ff  /* Mask for writable 
bits */
 #defineARMV8_PMU_EVTYPE_EVENT  0x3ff   /* Mask for EVENT bits 
*/
 
+#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR 0   /* Software increment event */
+
 /*
  * Event filters for PMUv3
  */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6a774f9..10e5379 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -672,6 +672,23 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
return true;
 }
 
+static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+  const struct sys_reg_desc *r)
+{
+   u64 mask;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (p->is_write) {
+   mask = kvm_pmu_valid_counter_mask(vcpu);
+   kvm_pmu_software_increment(vcpu, p->regval & mask);
+   return true;
+   }
+
+   return false;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -882,7 +899,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmovs, NULL, PMOVSSET_EL0 },
/* PMSWINC_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
- trap_raz_wi },
+ access_pmswinc, reset_unknown, PMSWINC_EL0 },
/* PMSELR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
  access_pmselr, reset_unknown, PMSELR_EL0 },
@@ -1221,6 +1238,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmovs },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 4), access_pmswinc },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 60061da..348c4c9 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -44,6 +44,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx);
 #else
@@ -65,6 +66,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) 
{}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 0232861..9fc775e 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -180,6 +180,36 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
kvm_vcpu
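
The message is truncated above, losing most of the pmu.c hunk. As a
reconstruction — a sketch based on the commit message and the surrounding
patches, not a quote of the original mail — each bit written to PMSWINC
bumps the corresponding event counter, provided that counter is enabled and
programmed with the SW_INCR event, and a 32-bit wrap sets the overflow flag:

/* Sketch of kvm_pmu_software_increment(), not the original hunk. */
void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
{
    int i;
    u64 type, enable, reg;

    if (val == 0)
        return;

    enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
    for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
        if (!(val & BIT(i)))
            continue;
        type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
               & ARMV8_PMU_EVTYPE_EVENT;
        if (type != ARMV8_PMU_EVTYPE_EVENT_SW_INCR ||
            !(enable & BIT(i)))
            continue;
        /* Event counters are 32-bit; a wrap raises the overflow flag. */
        reg = lower_32_bits(vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1);
        vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
        if (!reg)
            kvm_pmu_overflow_set(vcpu, BIT(i));
    }
}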

[PATCH v13 10/20] KVM: ARM64: Add access handler for PMINTENSET and PMINTENCLR register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Since the reset value of PMINTENSET and PMINTENCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a handler to emulate writing
the PMINTENSET or PMINTENCLR register.

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/sys_regs.c | 32 
 2 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index bf97e79..c7642de 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -125,6 +125,7 @@ enum vcpu_sysreg {
PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
PMCCFILTR_EL0,  /* Cycle Count Filter Register */
PMCNTENSET_EL0, /* Count Enable Set Register */
+   PMINTENSET_EL1, /* Interrupt Enable Set Register */
 
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4faf324..bfc70b2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -626,6 +626,30 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
return true;
 }
 
+static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+  const struct sys_reg_desc *r)
+{
+   u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (p->is_write) {
+   u64 val = p->regval & mask;
+
+   if (r->Op2 & 0x1)
+   /* accessing PMINTENSET_EL1 */
+   vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
+   else
+   /* accessing PMINTENCLR_EL1 */
+   vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
+   } else {
+   p->regval = vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
+   }
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -784,10 +808,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
/* PMINTENSET_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
- trap_raz_wi },
+ access_pminten, reset_unknown, PMINTENSET_EL1 },
/* PMINTENCLR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
- trap_raz_wi },
+ access_pminten, NULL, PMINTENSET_EL1 },
 
/* MAIR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
@@ -1182,8 +1206,8 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
-   { Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
-   { Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
+   { Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
 
{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
-- 
2.0.4




[PATCH v13 07/20] KVM: ARM64: Add access handler for PMCNTENSET and PMCNTENCLR register

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a handler to emulate writing
the PMCNTENSET or PMCNTENCLR register.

When writing to PMCNTENSET, call perf_event_enable to enable the perf
event. When writing to PMCNTENCLR, call perf_event_disable to disable
the perf event.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/sys_regs.c | 35 ++---
 include/kvm/arm_pmu.h |  9 ++
 virt/kvm/arm/pmu.c| 66 +++
 4 files changed, 107 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 627f01e..e6910db 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -121,6 +121,7 @@ enum vcpu_sysreg {
PMEVCNTR0_EL0,  /* Event Counter Register (0-30) */
PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
PMCCNTR_EL0,/* Cycle Counter Register */
+   PMCNTENSET_EL0, /* Count Enable Set Register */
 
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ff3214b..d4b6ae3 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -563,6 +563,33 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
return true;
 }
 
+static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+  const struct sys_reg_desc *r)
+{
+   u64 val, mask;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   mask = kvm_pmu_valid_counter_mask(vcpu);
+   if (p->is_write) {
+   val = p->regval & mask;
+   if (r->Op2 & 0x1) {
+   /* accessing PMCNTENSET_EL0 */
+   vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
+   kvm_pmu_enable_counter(vcpu, val);
+   } else {
+   /* accessing PMCNTENCLR_EL0 */
+   vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
+   kvm_pmu_disable_counter(vcpu, val);
+   }
+   } else {
+   p->regval = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+   }
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -757,10 +784,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmcr, reset_pmcr, },
/* PMCNTENSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
- trap_raz_wi },
+ access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
/* PMCNTENCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
- trap_raz_wi },
+ access_pmcnten, NULL, PMCNTENSET_EL0 },
/* PMOVSCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
  trap_raz_wi },
@@ -1057,8 +1084,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 
/* PMU */
{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index bcb7698..b70058e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -40,6 +40,9 @@ struct kvm_pmu {
 #define kvm_arm_pmu_v3_ready(v)((v)->arch.pmu.ready)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
+u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 #else
 struct kvm_pmu {
 };
@@ -52,6 +55,12 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu 
*vcpu,
 }
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 u64 select_idx, u64 val) {}
+static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
+{
+   return 0;
+}
+static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 #endif
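
The virt/kvm/arm/pmu.c hunk announced in the diffstat is missing from the
archived copy. As a plausible sketch — not the original hunk — the
enable/disable helpers described above would walk the written bitmask and
forward each backed counter to the perf core, with enable additionally gated
on PMCR.E:

/* Sketch: enable the perf events backing each counter bit set in @val. */
void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
{
    struct kvm_pmu *pmu = &vcpu->arch.pmu;
    int i;

    /* Nothing to do unless the global enable bit is set. */
    if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
        return;

    for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
        if ((val & BIT(i)) && pmu->pmc[i].perf_event)
            perf_event_enable(pmu->pmc[i].perf_event);
}

/* Sketch: the mirror image, used for PMCNTENCLR writes. */
void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
{
    struct kvm_pmu *pmu = &vcpu->arch.pmu;
    int i;

    if (!val)
        return;

    for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
        if ((val & BIT(i)) && pmu->pmc[i].perf_event)
            perf_event_disable(pmu->pmc[i].perf_event);
}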

[PATCH v13 13/20] KVM: ARM64: Add helper to handle PMCR register bits

2016-02-23 Thread Shannon Zhao
From: Shannon Zhao 

According to the ARMv8 spec, writing 1 to PMCR.E enables all counters
selected by PMCNTENSET, while writing 0 to PMCR.E disables all counters.
Writing 1 to PMCR.P resets all event counters, excluding PMCCNTR, to
zero. Writing 1 to PMCR.C resets PMCCNTR to zero.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
---
 arch/arm64/include/asm/perf_event.h |  2 ++
 arch/arm64/kvm/sys_regs.c   |  1 +
 include/kvm/arm_pmu.h   |  2 ++
 virt/kvm/arm/pmu.c  | 34 ++
 4 files changed, 39 insertions(+)

diff --git a/arch/arm64/include/asm/perf_event.h 
b/arch/arm64/include/asm/perf_event.h
index ceb14a1..c3f5937 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -29,6 +29,8 @@
 #define ARMV8_PMU_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
 #define ARMV8_PMU_PMCR_X   (1 << 4) /* Export to ETM */
 #define ARMV8_PMU_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
+/* Determines which bit of PMCCNTR_EL0 generates an overflow */
+#define ARMV8_PMU_PMCR_LC  (1 << 6)
 #defineARMV8_PMU_PMCR_N_SHIFT  11   /* Number of counters 
supported */
 #defineARMV8_PMU_PMCR_N_MASK   0x1f
 #defineARMV8_PMU_PMCR_MASK 0x3f /* Mask for writable bits */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 10e5379..12f36ef 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -467,6 +467,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
val &= ~ARMV8_PMU_PMCR_MASK;
val |= p->regval & ARMV8_PMU_PMCR_MASK;
vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+   kvm_pmu_handle_pmcr(vcpu, val);
} else {
/* PMCR.P & PMCR.C are RAZ */
val = vcpu_sys_reg(vcpu, PMCR_EL0)
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 348c4c9..8bc92d1 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -45,6 +45,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx);
 #else
@@ -67,6 +68,7 @@ static inline void kvm_pmu_disable_counter(struct kvm_vcpu 
*vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) 
{}
+static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 9fc775e..cda869c 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -210,6 +210,40 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 
val)
}
 }
 
+/**
+ * kvm_pmu_handle_pmcr - handle PMCR register
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCR register
+ */
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
+{
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   struct kvm_pmc *pmc;
+   u64 mask;
+   int i;
+
+   mask = kvm_pmu_valid_counter_mask(vcpu);
+   if (val & ARMV8_PMU_PMCR_E) {
+   kvm_pmu_enable_counter(vcpu,
+   vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
+   } else {
+   kvm_pmu_disable_counter(vcpu, mask);
+   }
+
+   if (val & ARMV8_PMU_PMCR_C)
+   kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
+
+   if (val & ARMV8_PMU_PMCR_P) {
+   for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++)
+   kvm_pmu_set_counter_value(vcpu, i, 0);
+   }
+
+   if (val & ARMV8_PMU_PMCR_LC) {
+   pmc = &pmu->pmc[ARMV8_PMU_CYCLE_IDX];
+   pmc->bitmask = 0xUL;
+   }
+}
+
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-- 
2.0.4




Re: [PATCH v12 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function

2016-02-23 Thread Shannon Zhao


On 2016/2/24 1:42, Marc Zyngier wrote:
> Hi Shannon,
> 
> Still picking up on details...
> 
> On 22/02/16 09:37, Shannon Zhao wrote:
>> From: Shannon Zhao 
>>
>> When we use tools like perf on host, perf passes the event type and the
>> id of this event type category to kernel, then kernel will map them to
>> hardware event number and write this number to PMU PMEVTYPER_EL0
>> register. When getting the event number in KVM, directly use raw event
>> type to create a perf_event for it.
>>
>> Signed-off-by: Shannon Zhao 
>> Reviewed-by: Marc Zyngier 
>> ---
>>  arch/arm64/include/asm/perf_event.h |   2 +
>>  arch/arm64/kvm/Makefile |   1 +
>>  include/kvm/arm_pmu.h   |  12 
>>  virt/kvm/arm/pmu.c  | 122 
>> 
>>  4 files changed, 137 insertions(+)
>>  create mode 100644 virt/kvm/arm/pmu.c
>>
>> diff --git a/arch/arm64/include/asm/perf_event.h 
>> b/arch/arm64/include/asm/perf_event.h
>> index 5c77ef8..867140d 100644
>> --- a/arch/arm64/include/asm/perf_event.h
>> +++ b/arch/arm64/include/asm/perf_event.h
>> @@ -29,6 +29,8 @@
>>  #define ARMV8_PMU_PMCR_D(1 << 3) /* CCNT counts every 64th cpu cycle */
>>  #define ARMV8_PMU_PMCR_X(1 << 4) /* Export to ETM */
>>  #define ARMV8_PMU_PMCR_DP   (1 << 5) /* Disable CCNT if non-invasive debug*/
>> +/* Determines which bit of PMCCNTR_EL0 generates an overflow */
>> +#define ARMV8_PMU_PMCR_LC   (1 << 6)
> 
> nit: this #define is only being used in patch #14. Consider moving it
> there...
> 
Sure, thanks!

-- 
Shannon



Re: [PATCH v12 16/21] KVM: ARM64: Add PMU overflow interrupt routing

2016-02-23 Thread Shannon Zhao



On 2016/2/23 22:14, Marc Zyngier wrote:

On 22/02/16 09:37, Shannon Zhao wrote:

From: Shannon Zhao 

When calling perf_event_create_kernel_counter to create a perf_event,
assign an overflow handler. Then when the perf event overflows, set the
corresponding bit of the guest PMOVSSET register. If this counter is
enabled and its interrupt is enabled as well, kick the vcpu to sync the
interrupt.

On VM entry, if a counter has overflowed, inject the interrupt with the
level set to 1. Otherwise, inject the interrupt with the level set to 0.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 


As I mentioned yesterday, I was trying to pinpoint a performance drop, so I
added PMU support to kvmtool (and made it an optional flag). This allowed me
to find this:


---
  arch/arm/kvm/arm.c|  2 ++
  include/kvm/arm_pmu.h |  2 ++
  virt/kvm/arm/pmu.c| 47 ++-
  3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..f54264c 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
  #include 
  #include 
  #include 
+#include 

  #define CREATE_TRACE_POINTS
  #include "trace.h"
@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
 * non-preemptible context.
 */
preempt_disable();
+   kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8bc92d1..cf68f9a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -44,6 +44,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
  void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
  void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
  void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
  void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -67,6 +68,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
  static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
  static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
  static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
  static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) 
{}
  static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
  static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cda869c..6ac52ce 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
  #include 
  #include 
  #include 
+#include 

  /**
   * kvm_pmu_get_counter_value - get PMU counter value
@@ -181,6 +182,49 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
  }

  /**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   u64 overflow;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return;
+
+   overflow = kvm_pmu_overflow_status(vcpu);
+   kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, !!overflow);


It turns out that this single line costs us about 400 cycles on each entry:


Oh, it's really a time-consuming function.


maz@flakes:~/kvm-ws-tests$ make LKVM=~/kvmtool/lkvm LKVM_ARGS=--pmu 
PERF=perf_4.3 tests-gicv2
GICv2:
do_hvc.bin:5690.17
do_sgi.bin:9395.05
do_sysreg.bin:5912.6
maz@flakes:~/kvm-ws-tests$ make LKVM=~/kvmtool/lkvm PERF=perf_4.3 tests-gicv2
GICv2:
do_hvc.bin:5285.02
do_sgi.bin:9131.24
do_sysreg.bin:5563.7

Caching the irq state and only injecting it if it has changed (just like we do 
for the timer) brings performance back to its previous level:

Marc, thanks a lot for finding this out. I'll merge the changes below into
this series and send out v13 tomorrow.



diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 176913f..b23e636 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -35,6 +35,7 @@ struct kvm_pmu {
int irq_num;
struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
bool ready;
+   bool irq_level;
  };

  #define kvm_arm_pmu_v3_ready(v)   ((v)->arch.pmu.ready)
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 904617e..7156f8b 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -229,13 +229,17 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
  voi
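
The quoted diff is cut off by the archive here; the cached variant presumably
ends up along the lines of the sketch below (mirroring what the arch timer
already does), only calling into the vgic when the computed level differs
from the cached one. This is a reconstruction, not the diff Shannon actually
merged:

/* Sketch: only push the IRQ line to the vgic when its level changes. */
void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
{
    struct kvm_pmu *pmu = &vcpu->arch.pmu;
    bool overflow;

    if (!kvm_arm_pmu_v3_ready(vcpu))
        return;

    overflow = !!kvm_pmu_overflow_status(vcpu);
    if (pmu->irq_level == overflow)
        return;    /* level unchanged: skip the expensive vgic call */

    pmu->irq_level = overflow;
    kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, overflow);
}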

Re: [PATCH v12 03/21] KVM: ARM64: Add offset defines for PMU registers

2016-02-23 Thread Shannon Zhao


On 2016/2/23 9:46, Shannon Zhao wrote:
> 
> On 2016/2/23 1:51, Marc Zyngier wrote:
>> > On 22/02/16 09:37, Shannon Zhao wrote:
>>> >> From: Shannon Zhao 
>>> >>
>>> >> We are about to trap and emulate accesses to each PMU register
>>> >> individually. This adds the context offsets for the AArch64 PMU
>>> >> registers.
>>> >>
>>> >> Signed-off-by: Shannon Zhao 
>>> >> Reviewed-by: Marc Zyngier 
>>> >> Reviewed-by: Andrew Jones 
>>> >> ---
>>> >>  arch/arm64/include/asm/kvm_host.h | 15 +++
>>> >>  1 file changed, 15 insertions(+)
>>> >>
>>> >> diff --git a/arch/arm64/include/asm/kvm_host.h 
>>> >> b/arch/arm64/include/asm/kvm_host.h
>>> >> index 6f0241f..6bab7fb 100644
>>> >> --- a/arch/arm64/include/asm/kvm_host.h
>>> >> +++ b/arch/arm64/include/asm/kvm_host.h
>>> >> @@ -115,6 +115,21 @@ enum vcpu_sysreg {
>>> >>  MDSCR_EL1,  /* Monitor Debug System Control Register */
>>> >>  MDCCINT_EL1,/* Monitor Debug Comms Channel Interrupt Enable 
>>> >> Reg */
>>> >>  
>>> >> +/* Performance Monitors Registers */
>>> >> +PMCR_EL0,   /* Control Register */
>>> >> +PMOVSSET_EL0,   /* Overflow Flag Status Set Register */
>>> >> +PMSELR_EL0, /* Event Counter Selection Register */
>>> >> +PMEVCNTR0_EL0,  /* Event Counter Register (0-30) */
>>> >> +PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
>>> >> +PMCCNTR_EL0,/* Cycle Counter Register */
>>> >> +PMEVTYPER0_EL0, /* Event Type Register (0-30) */
>>> >> +PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
>>> >> +PMCCFILTR_EL0,  /* Cycle Count Filter Register */
>>> >> +PMCNTENSET_EL0, /* Count Enable Set Register */
>>> >> +PMINTENSET_EL1, /* Interrupt Enable Set Register */
>>> >> +PMUSERENR_EL0,  /* User Enable Register */
>>> >> +PMSWINC_EL0,/* Software Increment Register */
>>> >> +
>> > 
>> > I've just noticed a rather fundamental issue with this: this makes it
>> > impossible to bisect the whole series.
>> > 
> Ah, sorry. Will fix this.
> 
I've fixed this problem and pushed this series to the location below. You
can fetch it from there.

https://git.linaro.org/people/shannon.zhao/linux-mainline.git/shortlog/refs/heads/KVM_ARM64_PMU_v13

Thanks,
-- 
Shannon


Re: [PATCH v12 03/21] KVM: ARM64: Add offset defines for PMU registers

2016-02-22 Thread Shannon Zhao


On 2016/2/23 1:51, Marc Zyngier wrote:
> On 22/02/16 09:37, Shannon Zhao wrote:
>> From: Shannon Zhao 
>>
>> We are about to trap and emulate accesses to each PMU register
>> individually. This adds the context offsets for the AArch64 PMU
>> registers.
>>
>> Signed-off-by: Shannon Zhao 
>> Reviewed-by: Marc Zyngier 
>> Reviewed-by: Andrew Jones 
>> ---
>>  arch/arm64/include/asm/kvm_host.h | 15 +++
>>  1 file changed, 15 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_host.h 
>> b/arch/arm64/include/asm/kvm_host.h
>> index 6f0241f..6bab7fb 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -115,6 +115,21 @@ enum vcpu_sysreg {
>>  MDSCR_EL1,  /* Monitor Debug System Control Register */
>>  MDCCINT_EL1,/* Monitor Debug Comms Channel Interrupt Enable Reg */
>>  
>> +/* Performance Monitors Registers */
>> +PMCR_EL0,   /* Control Register */
>> +PMOVSSET_EL0,   /* Overflow Flag Status Set Register */
>> +PMSELR_EL0, /* Event Counter Selection Register */
>> +PMEVCNTR0_EL0,  /* Event Counter Register (0-30) */
>> +PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
>> +PMCCNTR_EL0,/* Cycle Counter Register */
>> +PMEVTYPER0_EL0, /* Event Type Register (0-30) */
>> +PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
>> +PMCCFILTR_EL0,  /* Cycle Count Filter Register */
>> +PMCNTENSET_EL0, /* Count Enable Set Register */
>> +PMINTENSET_EL1, /* Interrupt Enable Set Register */
>> +PMUSERENR_EL0,  /* User Enable Register */
>> +PMSWINC_EL0,/* Software Increment Register */
>> +
> 
> I've just noticed a rather fundamental issue with this: this makes it
> impossible to bisect the whole series.
> 
Ah, sorry. Will fix this.

> I was trying to pinpoint a performance regression with this series, and
> started bisecting.
You mean this series introduces a performance regression? Is there any
method to measure that? Then I can have a look too.

> Unfortunately, declaring these registers in one go
> means that we end-up with uninitialized registers after this patch (and
> probably until PMUSERENR is dealt with). The consequence of that is
> something like this:
> 
> Kernel panic - not syncing: Didn't reset vcpu_sys_reg(25)
> CPU: 1 PID: 1994 Comm: lkvm Tainted: GW   4.5.0-rc5+ #5563
> Hardware name: Default string Default string/Default string, BIOS
> ROD0084E 09/03/2015
> Call trace:
> [] dump_backtrace+0x0/0x1a8
> [] show_stack+0x14/0x20
> [] dump_stack+0x94/0xb8
> [] panic+0x10c/0x250
> [] kvm_reset_sys_regs+0xec/0xf0
> [] kvm_reset_vcpu+0x58/0x80
> [] kvm_arch_vcpu_ioctl+0x294/0x310
> [] kvm_vcpu_ioctl+0xcc/0x698
> [] do_vfs_ioctl+0xa4/0x750
> [] SyS_ioctl+0x8c/0xa0
> [] el0_svc_naked+0x24/0x28
> 
> The obvious fix would be to introduce each register with the patch that
> handles it. At least, we'll be able to bisect it...
> 
> Thanks,
> 
>   M.
> 

Thanks,
-- 
Shannon



[PATCH v12 21/21] KVM: ARM64: Add a new vcpu device control group for PMUv3

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

To configure the virtual PMUv3 overflow interrupt number, we use the
vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ
attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group.

After configuring the PMUv3, call the vcpu ioctl with attribute
KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3.

Signed-off-by: Shannon Zhao 
Acked-by: Peter Maydell 
Reviewed-by: Andrew Jones 
Reviewed-by: Christoffer Dall 
---
 Documentation/virtual/kvm/devices/vcpu.txt |  25 +++
 arch/arm/include/asm/kvm_host.h|  15 
 arch/arm/kvm/arm.c |   3 +
 arch/arm64/include/asm/kvm_host.h  |   6 ++
 arch/arm64/include/uapi/asm/kvm.h  |   5 ++
 arch/arm64/kvm/guest.c |  51 +
 include/kvm/arm_pmu.h  |  23 ++
 virt/kvm/arm/pmu.c | 112 +
 8 files changed, 240 insertions(+)

diff --git a/Documentation/virtual/kvm/devices/vcpu.txt 
b/Documentation/virtual/kvm/devices/vcpu.txt
index 3cc59c5..c041658 100644
--- a/Documentation/virtual/kvm/devices/vcpu.txt
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -6,3 +6,28 @@ KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface 
uses the same struct
 kvm_device_attr as other devices, but targets VCPU-wide settings and controls.
 
 The groups and attributes per virtual cpu, if any, are architecture specific.
+
+1. GROUP: KVM_ARM_VCPU_PMU_V3_CTRL
+Architectures: ARM64
+
+1.1. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_IRQ
+Parameters: in kvm_device_attr.addr the address for PMU overflow interrupt is a
+pointer to an int
+Returns: -EBUSY: The PMU overflow interrupt is already set
+ -ENXIO: The overflow interrupt not set when attempting to get it
+ -ENODEV: PMUv3 not supported
+ -EINVAL: Invalid PMU overflow interrupt number supplied
+
+A value describing the PMUv3 (Performance Monitor Unit v3) overflow interrupt
+number for this vcpu. This interrupt could be a PPI or SPI, but the interrupt
+type must be the same for each vcpu. As a PPI, the interrupt number is the same for
+all vcpus, while as an SPI it must be a separate number per vcpu.
+
+1.2 ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_INIT
+Parameters: no additional parameter in kvm_device_attr.addr
+Returns: -ENODEV: PMUv3 not supported
+ -ENXIO: PMUv3 not properly configured as required prior to calling 
this
+ attribute
+ -EBUSY: PMUv3 already initialized
+
+Request the initialization of the PMUv3.
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f9f2779..6dd0992 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -242,5 +242,20 @@ static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
+static inline int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+struct kvm_device_attr *attr)
+{
+   return -ENXIO;
+}
 
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 34d7395..dc8644f 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -833,6 +833,7 @@ static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
 
switch (attr->group) {
default:
+   ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr);
break;
}
 
@@ -846,6 +847,7 @@ static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
 
switch (attr->group) {
default:
+   ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr);
break;
}
 
@@ -859,6 +861,7 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
 
switch (attr->group) {
default:
+   ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr);
break;
}
 
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index cb220b7..a855a30 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -359,5 +359,11 @@ void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+  struct kvm_device_attr *attr);
+int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+  struct kvm_device_attr *at
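
To make the new ABI concrete, a minimal userspace sketch of the sequence the
documentation above describes — set the overflow interrupt, then request
initialization — might look like this. The function name, vcpu_fd and the
chosen irq number are assumptions, and a real VMM would also wire the irq
into its GIC model first:

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int init_vpmu(int vcpu_fd, int irq)
{
    struct kvm_device_attr attr = {
        .group = KVM_ARM_VCPU_PMU_V3_CTRL,
        .attr  = KVM_ARM_VCPU_PMU_V3_IRQ,
        .addr  = (__u64)(unsigned long)&irq,
    };

    /* Configure the PPI/SPI used for counter overflow. */
    if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr) < 0)
        return -1;

    /* Then finalize the PMU for this vcpu. */
    attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
    attr.addr = 0;
    return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}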

[PATCH v12 01/21] ARM64: Move PMU register related defines to asm/perf_event.h

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

To use the ARMv8 PMU-related register defines from the KVM code, we move
the relevant definitions to the asm/perf_event.h header file and rename
them with the prefix ARMV8_PMU_.

Signed-off-by: Anup Patel 
Signed-off-by: Shannon Zhao 
Acked-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/perf_event.h | 35 +++
 arch/arm64/kernel/perf_event.c  | 68 ++---
 2 files changed, 52 insertions(+), 51 deletions(-)

diff --git a/arch/arm64/include/asm/perf_event.h 
b/arch/arm64/include/asm/perf_event.h
index 7bd3cdb..5c77ef8 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -17,6 +17,41 @@
 #ifndef __ASM_PERF_EVENT_H
 #define __ASM_PERF_EVENT_H
 
+#defineARMV8_PMU_MAX_COUNTERS  32
+#defineARMV8_PMU_COUNTER_MASK  (ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E   (1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P   (1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C   (1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X   (1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
+#defineARMV8_PMU_PMCR_N_SHIFT  11   /* Number of counters 
supported */
+#defineARMV8_PMU_PMCR_N_MASK   0x1f
+#defineARMV8_PMU_PMCR_MASK 0x3f /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#defineARMV8_PMU_OVSR_MASK 0x  /* Mask for 
writable bits */
+#defineARMV8_PMU_OVERFLOWED_MASK   ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#defineARMV8_PMU_EVTYPE_MASK   0xc80003ff  /* Mask for writable 
bits */
+#defineARMV8_PMU_EVTYPE_EVENT  0x3ff   /* Mask for EVENT bits 
*/
+
+/*
+ * Event filters for PMUv3
+ */
+#defineARMV8_PMU_EXCLUDE_EL1   (1 << 31)
+#defineARMV8_PMU_EXCLUDE_EL0   (1 << 30)
+#defineARMV8_PMU_INCLUDE_EL2   (1 << 27)
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index f7ab14c..212c9fc4 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * ARMv8 PMUv3 Performance Events handling code.
@@ -333,9 +334,6 @@ static const struct attribute_group 
*armv8_pmuv3_attr_groups[] = {
 #defineARMV8_IDX_COUNTER_LAST(cpu_pmu) \
(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
 
-#defineARMV8_MAX_COUNTERS  32
-#defineARMV8_COUNTER_MASK  (ARMV8_MAX_COUNTERS - 1)
-
 /*
  * ARMv8 low level PMU access
  */
@@ -344,39 +342,7 @@ static const struct attribute_group 
*armv8_pmuv3_attr_groups[] = {
  * Perf Event to low level counters mapping
  */
 #defineARMV8_IDX_TO_COUNTER(x) \
-   (((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
-
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E   (1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P   (1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C   (1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X   (1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
-#defineARMV8_PMCR_N_SHIFT  11   /* Number of counters 
supported */
-#defineARMV8_PMCR_N_MASK   0x1f
-#defineARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#defineARMV8_OVSR_MASK 0x  /* Mask for writable 
bits */
-#defineARMV8_OVERFLOWED_MASK   ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#defineARMV8_EVTYPE_MASK   0xc80003ff  /* Mask for writable 
bits */
-#defineARMV8_EVTYPE_EVENT  0x3ff   /* Mask for EVENT bits 
*/
-
-/*
- * Event filters for PMUv3
- */
-#defineARMV8_EXCLUDE_EL1   (1 << 31)
-#defineARMV8_EXCLUDE_EL0   (1 << 30)
-#defineARMV8_INCLUDE_EL2   (1 << 27)
+   (((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
 
 static inline u32 armv8pmu_pmcr_read(void)
 {
@@ -387,14 +353,14 @@ static inline u32 armv8pmu_pmcr_read(void)
 
 static inline void armv8pmu_pmcr_write(u32 val)
 {
-   val &= ARMV8_PMCR_MASK;
+   val &= ARMV8_PMU_PMCR_MASK;
isb();
asm volatile("msr pmcr_el0, %0" :: "r" (val));
 }
 
 static

[PATCH v12 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

When we use tools like perf on the host, perf passes the event type and
the id within that event type category to the kernel, which then maps
them to a hardware event number and writes this number to the PMU
PMEVTYPER_EL0 register. When KVM gets the event number, it directly uses
the raw event type to create a perf_event for it.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
---
 arch/arm64/include/asm/perf_event.h |   2 +
 arch/arm64/kvm/Makefile |   1 +
 include/kvm/arm_pmu.h   |  12 
 virt/kvm/arm/pmu.c  | 122 
 4 files changed, 137 insertions(+)
 create mode 100644 virt/kvm/arm/pmu.c

diff --git a/arch/arm64/include/asm/perf_event.h 
b/arch/arm64/include/asm/perf_event.h
index 5c77ef8..867140d 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -29,6 +29,8 @@
 #define ARMV8_PMU_PMCR_D   (1 << 3) /* CCNT counts every 64th cpu cycle */
 #define ARMV8_PMU_PMCR_X   (1 << 4) /* Export to ETM */
 #define ARMV8_PMU_PMCR_DP  (1 << 5) /* Disable CCNT if non-invasive debug*/
+/* Determines which bit of PMCCNTR_EL0 generates an overflow */
+#define ARMV8_PMU_PMCR_LC  (1 << 6)
 #defineARMV8_PMU_PMCR_N_SHIFT  11   /* Number of counters 
supported */
 #defineARMV8_PMU_PMCR_N_MASK   0x1f
 #defineARMV8_PMU_PMCR_MASK 0x3f /* Mask for writable bits */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index caee9ee..122cff4 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -26,3 +26,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
+kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8157fe5..9536a24 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -23,6 +23,8 @@
 #include 
 #include 
 
+#define ARMV8_PMU_CYCLE_IDX(ARMV8_PMU_MAX_COUNTERS - 1)
+
 struct kvm_pmc {
u8 idx; /* index into the pmu->pmc array */
struct perf_event *perf_event;
@@ -36,11 +38,21 @@ struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)((v)->arch.pmu.ready)
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+   u64 select_idx);
 #else
 struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)(false)
+static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
+   u64 select_idx)
+{
+   return 0;
+}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+ u64 data, u64 select_idx) {}
 #endif
 
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
new file mode 100644
index 000..648c30e
--- /dev/null
+++ b/virt/kvm/arm/pmu.c
@@ -0,0 +1,122 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/**
+ * kvm_pmu_get_counter_value - get PMU counter value
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+   u64 counter, reg, enabled, running;
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+   reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+ ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
+   counter = vcpu_sys_reg(vcpu, reg);
+
+   /* The real counter value is equal to the value of counter register plus
+* the value perf event counts.
+*/
+   if (pmc->perf_event)
+   counter += perf_event_read_value(pmc->perf_event, &enabled,
+&running);
+
+   return counter & pmc->bitmask;
+}
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter
+ * @pmc: The PMU counter pointer
+ *
+ * If this counter has been configured to monitor some event, release it here.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vc
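
The rest of this message is cut off in the archive. The key step the commit
message describes — taking the guest-written event number and handing it
straight to perf as a raw event — presumably reduces to something like the
following sketch. The helper name is made up for illustration; the field
setup follows the series, and note the overflow handler is only wired up
later in the series:

/* Sketch: turn a guest PMEVTYPER write into a raw perf event. */
static struct perf_event *sketch_create_event(struct kvm_vcpu *vcpu,
                                              struct kvm_pmc *pmc, u64 data)
{
    struct perf_event_attr attr;
    u64 counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);

    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_RAW;       /* no host-side event translation */
    attr.size = sizeof(attr);
    attr.pinned = 1;
    attr.config = data & ARMV8_PMU_EVTYPE_EVENT; /* guest event number, as-is */
    attr.exclude_user = (data & ARMV8_PMU_EXCLUDE_EL0) ? 1 : 0;
    attr.exclude_kernel = (data & ARMV8_PMU_EXCLUDE_EL1) ? 1 : 0;
    attr.exclude_hv = 1;             /* don't count EL2 events */
    attr.exclude_host = 1;           /* don't count host events */
    /* Initial sample period: counts remaining until the counter overflows. */
    attr.sample_period = (-counter) & pmc->bitmask;

    return perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
}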

[PATCH v12 16/21] KVM: ARM64: Add PMU overflow interrupt routing

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

When calling perf_event_create_kernel_counter to create a perf_event,
assign an overflow handler. Then when the perf event overflows, set the
corresponding bit of the guest PMOVSSET register. If this counter is
enabled and its interrupt is enabled as well, kick the vcpu to sync the
interrupt.

On VM entry, if a counter has overflowed, inject the interrupt with the
level set to 1. Otherwise, inject the interrupt with the level set to 0.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm/kvm/arm.c|  2 ++
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c| 47 ++-
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..f54264c 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
 * non-preemptible context.
 */
preempt_disable();
+   kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8bc92d1..cf68f9a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -44,6 +44,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -67,6 +68,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) 
{}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cda869c..6ac52ce 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -181,6 +182,49 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
 }
 
 /**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+   u64 overflow;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return;
+
+   overflow = kvm_pmu_overflow_status(vcpu);
+   kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, !!overflow);
+}
+
+static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
+{
+   struct kvm_pmu *pmu;
+   struct kvm_vcpu_arch *vcpu_arch;
+
+   pmc -= pmc->idx;
+   pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
+   vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
+   return container_of(vcpu_arch, struct kvm_vcpu, arch);
+}
+
+/**
+ * When perf event overflows, call kvm_pmu_overflow_set to set overflow status.
+ */
+static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
+{
+   struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+   struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+   int idx = pmc->idx;
+
+   kvm_pmu_overflow_set(vcpu, BIT(idx));
+}
+
+/**
  * kvm_pmu_software_increment - do software increment
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMSWINC register
@@ -291,7 +335,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, 
u64 data,
/* The initial sample period (overflow count) of an event. */
attr.sample_period = (-counter) & pmc->bitmask;
 
-   event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+   event = perf_event_create_kernel_counter(&attr, -1, current,
+kvm_pmu_perf_overflow, pmc);
if (IS_ERR(event)) {
pr_err_once("kvm: pmu event creation failed %ld\n"

[PATCH v12 15/21] KVM: ARM64: Add access handler for PMUSERENR register

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

This register resets as unknown in 64bit mode while it resets as zero
in 32bit mode. Here we choose to reset it as zero for consistency.

PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
accessed from EL0. Add some check helpers to handle the access from EL0.

When these bits are zero, only reading PMUSERENR will trap to EL2, while
writing PMUSERENR or reading/writing other PMU registers will trap to
EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
(HCR.TGE==0) there is no way to get these traps. Here we write 0xf to the
physical PMUSERENR register on VM entry, so that it will trap PMU access
from EL0 to EL2. Within the register access handler we check the real
value of guest PMUSERENR register to decide whether this access is
allowed. If not allowed, return false to inject UND to guest.

Signed-off-by: Shannon Zhao 
---
 arch/arm64/include/asm/perf_event.h |   9 
 arch/arm64/kvm/hyp/hyp.h|   1 +
 arch/arm64/kvm/hyp/switch.c |   3 ++
 arch/arm64/kvm/sys_regs.c   | 101 ++--
 4 files changed, 109 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/perf_event.h 
b/arch/arm64/include/asm/perf_event.h
index c3f5937..76e1931 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -56,6 +56,15 @@
 #defineARMV8_PMU_EXCLUDE_EL0   (1 << 30)
 #defineARMV8_PMU_INCLUDE_EL2   (1 << 27)
 
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN   (1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW   (1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR   (1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER   (1 << 3) /* Event counter can be read at EL0 */
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fb27517..c65f8c9 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define __hyp_text __section(.hyp.text) notrace
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index ca8f5a5..d087724 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
write_sysreg(1 << 15, hstr_el2);
write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
+   /* Make sure we trap PMU access from EL0 to EL2 */
+   write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
@@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
write_sysreg(HCR_RW, hcr_el2);
write_sysreg(0, hstr_el2);
write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
+   write_sysreg(0, pmuserenr_el0);
write_sysreg(0, cptr_el2);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f1866c3..c2d9fb9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 }
 
+static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   return !((reg & ARMV8_PMU_USERENR_EN) || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   return !((reg & (ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN))
+|| vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   return !((reg & (ARMV8_PMU_USERENR_CR | ARMV8_PMU_USERENR_EN))
+|| vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+   u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+   return !((reg & (ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN))
+|| vcpu_mode_priv(vcpu));
+}
+
 static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
 {
@@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (!kvm_arm_pmu_v3_ready(vcpu))
return trap_raz_wi(vcpu, p, r);
 
+   if (pmu_access_el0_disabled(vcpu))
+   return false;
+
if (p->is_write) {
/* Only update writeable bits of PMCR */

[PATCH v12 02/21] KVM: ARM64: Define PMU data structure for each vcpu

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

Here we plan to support a virtual PMU for the guest through full
software emulation, so define some basic structs and functions in
preparation for further steps. Define struct kvm_pmc for a performance
monitor counter and struct kvm_pmu for the per-vcpu performance monitor
unit. According to the ARMv8 spec, the PMU contains at most 32
(ARMV8_PMU_MAX_COUNTERS) counters.

Since this only supports ARM64 (or PMUv3), add a separate config symbol
for it.
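
As an aside, the idx field is what allows later patches to get from a
kvm_pmc back to its vcpu. A sketch of the two lookups is shown below;
kvm_pmc_to_vcpu appears later in the series, while vcpu_idx_to_pmc is a
hypothetical helper shown only for symmetry:

static struct kvm_pmc *vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, u8 idx)
{
        return &vcpu->arch.pmu.pmc[idx];
}

static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
{
        struct kvm_pmu *pmu;
        struct kvm_vcpu_arch *vcpu_arch;

        /* Rewind to pmc[0], then walk out through the enclosing structs. */
        pmc -= pmc->idx;
        pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
        vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
        return container_of(vcpu_arch, struct kvm_vcpu, arch);
}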

Signed-off-by: Shannon Zhao 
Acked-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/Kconfig|  7 +++
 include/kvm/arm_pmu.h | 42 +++
 3 files changed, 51 insertions(+)
 create mode 100644 include/kvm/arm_pmu.h

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 689d4c9..6f0241f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -36,6 +36,7 @@
 
 #include 
 #include 
+#include 
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -211,6 +212,7 @@ struct kvm_vcpu_arch {
/* VGIC state */
struct vgic_cpu vgic_cpu;
struct arch_timer_cpu timer_cpu;
+   struct kvm_pmu pmu;
 
/*
 * Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a5272c0..de7450d 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -36,6 +36,7 @@ config KVM
select HAVE_KVM_EVENTFD
select HAVE_KVM_IRQFD
select KVM_ARM_VGIC_V3
+   select KVM_ARM_PMU if HW_PERF_EVENTS
---help---
  Support hosting virtualized guest machines.
  We don't support KVM with 16K page tables yet, due to the multiple
@@ -48,6 +49,12 @@ config KVM_ARM_HOST
---help---
  Provides host support for ARM processors.
 
+config KVM_ARM_PMU
+   bool
+   ---help---
+ Adds support for a virtual Performance Monitoring Unit (PMU) in
+ virtual machines.
+
 source drivers/vhost/Kconfig
 
 endif # VIRTUALIZATION
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
new file mode 100644
index 000..3c2fd56
--- /dev/null
+++ b/include/kvm/arm_pmu.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_KVM_PMU_H
+#define __ASM_ARM_KVM_PMU_H
+
+#ifdef CONFIG_KVM_ARM_PMU
+
+#include 
+#include 
+
+struct kvm_pmc {
+   u8 idx; /* index into the pmu->pmc array */
+   struct perf_event *perf_event;
+   u64 bitmask;
+};
+
+struct kvm_pmu {
+   int irq_num;
+   struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
+   bool ready;
+};
+#else
+struct kvm_pmu {
+};
+#endif
+
+#endif
-- 
2.0.4




[PATCH v12 09/21] KVM: ARM64: Add access handler for event counter register

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

These kinds of registers include PMEVCNTRn, PMCCNTR and PMXEVCNTR, the
last of which is mapped to PMEVCNTRn.

The access handler translates all aarch32 register offsets to aarch64
ones and uses vcpu_sys_reg() to access their values, so it does not have
to take care of big-endian layouts.

When reading these registers, return the sum of the register value and
the value the backing perf event has counted.
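
A condensed sketch of this read path, based on the
kvm_pmu_get_counter_value helper introduced earlier in the series
(reconstructed, so details may differ from the actual patch):

u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
{
        u64 counter, reg, enabled, running;
        struct kvm_pmu *pmu = &vcpu->arch.pmu;
        struct kvm_pmc *pmc = &pmu->pmc[select_idx];

        reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
              ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
        counter = vcpu_sys_reg(vcpu, reg);

        /*
         * The real counter value is the architectural register value
         * plus whatever the backing perf event has counted so far.
         */
        if (pmc->perf_event)
                counter += perf_event_read_value(pmc->perf_event,
                                                 &enabled, &running);

        return counter & pmc->bitmask;
}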

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/kvm/sys_regs.c | 125 --
 include/kvm/arm_pmu.h |   3 ++
 virt/kvm/arm/pmu.c|  15 ++
 3 files changed, 139 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7c6212a..70a47a9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -561,6 +561,44 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return true;
 }
 
+static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+   u64 idx;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (r->CRn == 9 && r->CRm == 13) {
+   if (r->Op2 == 2) {
+   /* PMXEVCNTR_EL0 */
+   idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
+ & ARMV8_PMU_COUNTER_MASK;
+   } else if (r->Op2 == 0) {
+   /* PMCCNTR_EL0 */
+   idx = ARMV8_PMU_CYCLE_IDX;
+   } else {
+   BUG();
+   }
+   } else if (r->CRn == 14 && (r->CRm & 12) == 8) {
+   /* PMEVCNTRn_EL0 */
+   idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
+   } else {
+   BUG();
+   }
+
+   if (!pmu_counter_idx_valid(vcpu, idx))
+   return false;
+
+   if (p->is_write)
+   kvm_pmu_set_counter_value(vcpu, idx, p->regval);
+   else
+   p->regval = kvm_pmu_get_counter_value(vcpu, idx);
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -576,6 +614,13 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
{ Op0(0b10), Op1(0b000), CRn(0b), CRm((n)), Op2(0b111), \
  trap_wcr, reset_wcr, n, 0,  get_wcr, set_wcr }
 
+/* Macro to expand the PMEVCNTRn_EL0 register */
+#define PMU_PMEVCNTR_EL0(n)\
+   /* PMEVCNTRn_EL0 */ \
+   { Op0(0b11), Op1(0b011), CRn(0b1110),   \
+ CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), }
+
 /* Macro to expand the PMEVTYPERn_EL0 register */
 #define PMU_PMEVTYPER_EL0(n)   \
/* PMEVTYPERn_EL0 */\
@@ -776,13 +821,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmceid },
/* PMCCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
- trap_raz_wi },
+ access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 },
/* PMXEVTYPER_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
  access_pmu_evtyper },
/* PMXEVCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
- trap_raz_wi },
+ access_pmu_evcntr },
/* PMUSERENR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
  trap_raz_wi },
@@ -797,6 +842,38 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b), Op2(0b011),
  NULL, reset_unknown, TPIDRRO_EL0 },
 
+   /* PMEVCNTRn_EL0 */
+   PMU_PMEVCNTR_EL0(0),
+   PMU_PMEVCNTR_EL0(1),
+   PMU_PMEVCNTR_EL0(2),
+   PMU_PMEVCNTR_EL0(3),
+   PMU_PMEVCNTR_EL0(4),
+   PMU_PMEVCNTR_EL0(5),
+   PMU_PMEVCNTR_EL0(6),
+   PMU_PMEVCNTR_EL0(7),
+   PMU_PMEVCNTR_EL0(8),
+   PMU_PMEVCNTR_EL0(9),
+   PMU_PMEVCNTR_EL0(10),
+   PMU_PMEVCNTR_EL0(11),
+   PMU_PMEVCNTR_EL0(12),
+   PMU_PMEVCNTR_EL0(13),
+   PMU_PMEVCNTR_EL0(14),
+   PMU_PMEVCNTR_EL0(15),
+   PMU_PMEVCNTR_EL0(16),
+   PMU_PMEVCNTR_EL0(17),
+   PMU_PMEVCNTR_EL0(18),
+   PMU_PMEVCNTR_EL0(19),
+   PMU_PMEVCNTR_EL0(20),
+   PMU_PMEVCNTR_EL0(21),
+   PMU_PMEVCNTR_EL0(22),
+   PMU_PMEVCNTR_EL0(23),
+   PMU_PMEVCNTR_EL0(24),
+   PMU_PMEVCNTR_EL0(25),
+ 

[PATCH v12 18/21] KVM: ARM64: Free perf event of PMU when destroying vcpu

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

When KVM frees a VCPU, it also needs to free the perf_events of its PMU.

Signed-off-by: Shannon Zhao 
Reviewed-by: Marc Zyngier 
Reviewed-by: Andrew Jones 
---
 arch/arm/kvm/arm.c|  1 +
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c| 21 +
 3 files changed, 24 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f54264c..d2c2cc3 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -266,6 +266,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
kvm_mmu_free_memory_caches(vcpu);
kvm_timer_vcpu_terminate(vcpu);
kvm_vgic_vcpu_destroy(vcpu);
+   kvm_pmu_vcpu_destroy(vcpu);
kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 46b773e..9cebf03 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -42,6 +42,7 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
@@ -67,6 +68,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
return 0;
 }
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 92fff9a..0427af8 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -101,6 +101,27 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
}
 }
 
+/**
+ * kvm_pmu_vcpu_destroy - free perf events of the PMU for this vcpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+   int i;
+   struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+   for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
+   struct kvm_pmc *pmc = &pmu->pmc[i];
+
+   if (pmc->perf_event) {
+   perf_event_disable(pmc->perf_event);
+   perf_event_release_kernel(pmc->perf_event);
+   pmc->perf_event = NULL;
+   }
+   }
+}
+
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
-- 
2.0.4




[PATCH v12 12/21] KVM: ARM64: Add access handler for PMOVSSET and PMOVSCLR register

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

Since the reset value of PMOVSSET and PMOVSCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a handler to emulate writes to
the PMOVSSET and PMOVSCLR registers.

When a non-zero value is written to PMOVSSET while the corresponding
counter and its interrupt are enabled, kick this vcpu to synchronize the
PMU interrupt state.
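
A sketch of the kvm_pmu_overflow_set helper this patch adds, with the
overflow-status check inlined for readability (the actual patch may
factor it into a separate helper):

void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
{
        u64 pending = 0;

        if (!val)
                return;

        vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= val;

        /*
         * An overflow interrupt is pending when a counter is enabled,
         * its interrupt is enabled, its overflow bit is set and
         * PMCR_EL0.E is set.
         */
        if (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)
                pending = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &
                          vcpu_sys_reg(vcpu, PMINTENSET_EL1) &
                          vcpu_sys_reg(vcpu, PMOVSSET_EL0);

        if (pending)
                kvm_vcpu_kick(vcpu);
}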

Signed-off-by: Shannon Zhao 
Reviewed-by: Andrew Jones 
---
 arch/arm64/kvm/sys_regs.c | 29 ++---
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c| 31 +++
 3 files changed, 59 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4778275..7769b19 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -650,6 +650,28 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return true;
 }
 
+static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+const struct sys_reg_desc *r)
+{
+   u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   if (p->is_write) {
+   if (r->CRm & 0x2)
+   /* accessing PMOVSSET_EL0 */
+   kvm_pmu_overflow_set(vcpu, p->regval & mask);
+   else
+   /* accessing PMOVSCLR_EL0 */
+   vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
+   } else {
+   p->regval = vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
+   }
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -857,7 +879,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmcnten, NULL, PMCNTENSET_EL0 },
/* PMOVSCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
- trap_raz_wi },
+ access_pmovs, NULL, PMOVSSET_EL0 },
/* PMSWINC_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
  trap_raz_wi },
@@ -884,7 +906,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  trap_raz_wi },
/* PMOVSSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
- trap_raz_wi },
+ access_pmovs, reset_unknown, PMOVSSET_EL0 },
 
/* TPIDR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b), Op2(0b010),
@@ -1198,7 +1220,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmovs },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
@@ -1208,6 +1230,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
+   { Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs },
 
{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index c5737797..60061da 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -43,6 +43,7 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx);
 #else
@@ -63,6 +64,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu 
*vcpu)
 }
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 591a11d..0232861 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -149,6 +149,37 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)

[PATCH v12 06/21] KVM: ARM64: Add access handler for PMCEID0 and PMCEID1 register

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

Add an access handler which returns the host value of PMCEID0 or
PMCEID1 when the guest accesses these registers. Writing to PMCEID0 or
PMCEID1 is UNDEFINED.
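
For illustration only (not part of this patch): on ARMv8, bit n of
PMCEID0 advertises common event n (0-31) and bit n of PMCEID1 advertises
common event 32+n, so a guest could probe event support with a
hypothetical helper like this:

static bool pmu_common_event_supported(u64 pmceid0, u64 pmceid1,
                                       unsigned int event)
{
        if (event < 32)
                return pmceid0 & BIT(event);
        if (event < 64)
                return pmceid1 & BIT(event - 32);
        return false;
}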

Signed-off-by: Shannon Zhao 
---
 arch/arm64/kvm/sys_regs.c | 28 
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 65f5f00..27afa0b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -493,6 +493,26 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return true;
 }
 
+static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+   u64 pmceid;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   BUG_ON(p->is_write);
+
+   if (!(p->Op2 & 1))
+   asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
+   else
+   asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
+
+   p->regval = pmceid;
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -695,10 +715,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmselr, reset_unknown, PMSELR_EL0 },
/* PMCEID0_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
- trap_raz_wi },
+ access_pmceid },
/* PMCEID1_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
- trap_raz_wi },
+ access_pmceid },
/* PMCCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
  trap_raz_wi },
@@ -944,8 +964,8 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
-- 
2.0.4




[PATCH v12 10/21] KVM: ARM64: Add access handler for PMCNTENSET and PMCNTENCLR register

2016-02-22 Thread Shannon Zhao
From: Shannon Zhao 

Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a handler to emulate writes to
the PMCNTENSET and PMCNTENCLR registers.

When writing to PMCNTENSET, call perf_event_enable to enable the backing
perf event. When writing to PMCNTENCLR, call perf_event_disable to
disable it.
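
A condensed sketch of the enable path this describes, reconstructed from
the series (details may differ from the actual patch):

void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
{
        struct kvm_pmu *pmu = &vcpu->arch.pmu;
        struct kvm_pmc *pmc;
        int i;

        /* Nothing to do unless PMCR_EL0.E is set and counters are selected. */
        if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
                return;

        for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
                if (!(val & BIT(i)))
                        continue;

                pmc = &pmu->pmc[i];
                if (pmc->perf_event)
                        perf_event_enable(pmc->perf_event);
        }
}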

Signed-off-by: Shannon Zhao 
---
 arch/arm64/kvm/sys_regs.c | 35 ++---
 include/kvm/arm_pmu.h |  9 +++
 virt/kvm/arm/pmu.c| 66 +++
 3 files changed, 106 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 70a47a9..d7c7ed4a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -599,6 +599,33 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
return true;
 }
 
+static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+  const struct sys_reg_desc *r)
+{
+   u64 val, mask;
+
+   if (!kvm_arm_pmu_v3_ready(vcpu))
+   return trap_raz_wi(vcpu, p, r);
+
+   mask = kvm_pmu_valid_counter_mask(vcpu);
+   if (p->is_write) {
+   val = p->regval & mask;
+   if (r->Op2 & 0x1) {
+   /* accessing PMCNTENSET_EL0 */
+   vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
+   kvm_pmu_enable_counter(vcpu, val);
+   } else {
+   /* accessing PMCNTENCLR_EL0 */
+   vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
+   kvm_pmu_disable_counter(vcpu, val);
+   }
+   } else {
+   p->regval = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+   }
+
+   return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */   \
@@ -800,10 +827,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
  access_pmcr, reset_pmcr, },
/* PMCNTENSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
- trap_raz_wi },
+ access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
/* PMCNTENCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
- trap_raz_wi },
+ access_pmcnten, NULL, PMCNTENSET_EL0 },
/* PMOVSCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
  trap_raz_wi },
@@ -1145,8 +1172,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 
/* PMU */
{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
-   { Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
+   { Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 7404424..c5737797 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -40,6 +40,9 @@ struct kvm_pmu {
 #define kvm_arm_pmu_v3_ready(v)((v)->arch.pmu.ready)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
+u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx);
 #else
@@ -54,6 +57,12 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 }
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 u64 select_idx, u64 val) {}
+static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
+{
+   return 0;
+}
+static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 96c8ffc..591a11d 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -83,6 +83,72 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
}
 }
 
+u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
+{
+   u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+
