Re: [PATCH RFCv1 0/7] Support Async Page Fault

2020-04-10 Thread Marc Zyngier
Hi Gavin,

On 2020-04-10 09:58, Gavin Shan wrote:
> There are two stages of page faults and the stage one page fault is
> handled by the guest itself. The guest is trapped to the host when the
> page fault is caused by the stage 2 page table, for example a missing
> entry. The guest is suspended until the requested page is populated.
> Populating the requested page can be costly and might involve I/O
> activities if the page was swapped out previously. In this case, the
> guest has to be suspended for at least a few milliseconds, regardless
> of the overall system load.
> 
> The series adds support for asynchronous page fault to improve the
> above situation. If it's costly to populate the requested page, a
> signal (PAGE_NOT_PRESENT) is sent to the guest so that the faulting
> process can be rescheduled if possible. Otherwise, it is put into
> power-saving mode. Another signal (PAGE_READY) is sent to the guest
> once the requested page is populated so that the faulting process can
> be woken up from either the waiting state or the power-saving state.
> 
> In order to fulfil the control flow and convey signals between host
> and guest, an IMPDEF system register (SYS_ASYNC_PF_EL1) is introduced.
> The register accepts the control block's physical address, plus the
> requested features. Also, the signal is sent using a data abort with a
> specific IMPDEF Data Fault Status Code (DFSC). The specific signal is
> stored in the control block by the host, to be consumed by the guest.
> 
> Todo
> 
> * CONFIG_KVM_ASYNC_PF_SYNC is disabled for now because the exception
>   injection can't work in nested mode. It might be something to be
>   improved in the future.
> * KVM_ASYNC_PF_SEND_ALWAYS is disabled even with CONFIG_PREEMPTION
>   because it's simply not working reliably.
> * Tracepoints, which should be done in the short term.
> * kvm-unit-test cases.
> * More testing and debugging are needed. Sometimes, the guest can get
>   stuck and the root cause needs to be figured out.

Let me add another few things:

- KVM/arm is (supposed to be) an architectural hypervisor. It means
  that one of the design goals is to have as few differences as possible
  from the actual hardware. I'm not keen on deviating from it (next
  thing you know, you'll add all the PV horror from Xen, HV, VMware...).

- The idea of butchering the arm64 mm subsystem to handle a new exotic
  style of exceptions is not something I am looking forward to. We
  might as well PV the whole MMU, Xen style, and be done with it. I'll
  let the arm64 maintainers comment on this though.

- We don't add IMPDEF sysregs, period. That's reserved for the HW. If
  you want to trap, there's the HVC instruction to that effect (see the
  sketch after this list).

- If this is such a great improvement, where are the performance
  numbers?

- The fact that it apparently works with neither nesting nor
  preemption tends to indicate that it isn't future proof.
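
To illustrate the HVC route: a hypothetical sketch (not from this
series; the SMCCC function ID below is made up) of how a PV feature
could be registered through the SMCCC conduit (HVC under KVM) instead
of an IMPDEF sysreg:

#include <linux/arm-smccc.h>

/* Made-up vendor-hyp function ID, for illustration only */
#define SMCCC_KVM_ASYNC_PF                                              \
        ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64,      \
                           ARM_SMCCC_OWNER_VENDOR_HYP, 0x42)

static int async_pf_register(phys_addr_t ctrl_block_pa, u64 features)
{
        struct arm_smccc_res res;

        /* Traps to the hypervisor via HVC; no IMPDEF sysreg involved */
        arm_smccc_1_1_hvc(SMCCC_KVM_ASYNC_PF, ctrl_block_pa | features,
                          &res);
        return res.a0 == SMCCC_RET_SUCCESS ? 0 : -ENXIO;
}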

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...


Re: Contribution to KVM.

2020-04-10 Thread Liran Alon



On 10/04/2020 6:52, Nadav Amit wrote:

2. Try to run the tests with more than 4GB of memory. The last time I tried
(actually by running the test on bare metal), the INIT test that Liran
wrote failed.

Wasn't this test failure fixed with kvm-unit-test commit fc47ccc19612
("x86: vmx: Verify pending LAPIC INIT event consume when exit on VMX_INIT")?
If not, can you provide the details of this new failure? As I thought
this commit addressed the previous issue you reported when running this
test on bare metal.

Thanks,
-Liran



Re: I{S,C}ACTIVER implemention question

2020-04-10 Thread Julien Grall




On 06/04/2020 16:14, Marc Zyngier wrote:

Hi Julien,


Hi Marc,



Thanks for the heads up.

On 2020-04-06 14:16, Julien Grall wrote:

Hi,

Xen community is currently reviewing a new implementation for reading
I{S,C}ACTIVER registers (see [1]).

The implementation is based on vgic_mmio_read_active() in KVM, i.e. the
active state of the interrupts is based on the vGIC state stored in
memory.

While reviewing the patch on xen-devel, I noticed a potential deadlock
at least with the Xen implementation. I know that the Xen vGIC and KVM
vGIC are quite different, so I looked at the implementation to see how
this is dealt with.

With my limited knowledge of KVM, I wasn't able to rule it out. I am
curious to know if I missed anything.

vCPU A may read the active state of an interrupt routed to vCPU B.
When vCPU A is reading the state, it will read the state stored in
memory.

The only way the memory state can get synced with the HW state is when
vCPU B exits guest context.

AFAICT, vCPU B will not exit when deactivating HW mapped interrupts
and virtual edge interrupts. So vCPU B may run for an arbitrarily long
time before exiting and syncing the memory state with the HW state.


So while I agree that this is definitely not ideal, I don't think we
end up with a deadlock (or rather a livelock) either. That's because we
are guaranteed to exit eventually, if only because the kernel's own
timer interrupt (or any other host interrupt routed to the same
physical CPU) will fire and get us out of there. On its own, this is
enough to allow the polling vcpu to make forward progress.


That's a good point. I think in Xen we can't rely on this because in
some setups (such as a pCPU dedicated to a vCPU), there will be close
to zero host interrupts (the timer is only used for scheduling).




Now, it is obvious that we should improve on the current situation. I
just hacked together a patch that provides the same guarantees as the
ones we already have on the write side (kick all vcpus out of the
guest, snapshot the state, kick everyone back in). I boot-tested it, so
it is obviously perfect and won't eat your data at all! ;-)
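
The shape of it is roughly this (a simplified sketch, not the patch
itself; it reuses the existing kvm_arm_halt_guest()/kvm_arm_resume_guest()
helpers):

/*
 * Make I{S,C}ACTIVER reads behave like the write side: stop all vcpus
 * so they sync their vgic state on exit, snapshot the now-accurate
 * in-memory state, then let everyone run again.
 */
static unsigned long vgic_mmio_read_active_sync(struct kvm_vcpu *vcpu,
                                                gpa_t addr, unsigned int len)
{
        unsigned long val;

        kvm_arm_halt_guest(vcpu->kvm);          /* kick everyone out */
        val = vgic_mmio_read_active(vcpu, addr, len);
        kvm_arm_resume_guest(vcpu->kvm);        /* kick everyone back in */

        return val;
}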


Thank you for the patch! This is similar to what I had in mind.

Cheers,

--
Julien Grall


Re: [PATCH v7 7/7] ARM: Enable KASan for ARM

2020-04-10 Thread Ard Biesheuvel
(+ Linus)

On Fri, 10 Apr 2020 at 12:45, Ard Biesheuvel  wrote:
>
> On Fri, 17 Jan 2020 at 23:52, Florian Fainelli  wrote:
> >
> > From: Andrey Ryabinin 
> >
> > This patch enables the kernel address sanitizer for ARM. XIP_KERNEL has
> > not been tested and is therefore not allowed.
> >
> > Acked-by: Dmitry Vyukov 
> > Tested-by: Linus Walleij 
> > Signed-off-by: Abbott Liu 
> > Signed-off-by: Florian Fainelli 
> > ---
> >  Documentation/dev-tools/kasan.rst | 4 ++--
> >  arch/arm/Kconfig  | 9 +
> >  arch/arm/boot/compressed/Makefile | 1 +
> >  drivers/firmware/efi/libstub/Makefile | 3 ++-
> >  4 files changed, 14 insertions(+), 3 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kasan.rst 
> > b/Documentation/dev-tools/kasan.rst
> > index e4d66e7c50de..6acd949989c3 100644
> > --- a/Documentation/dev-tools/kasan.rst
> > +++ b/Documentation/dev-tools/kasan.rst
> > @@ -21,8 +21,8 @@ global variables yet.
> >
> >  Tag-based KASAN is only supported in Clang and requires version 7.0.0 or 
> > later.
> >
> > -Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
> > -architectures, and tag-based KASAN is supported only for arm64.
> > +Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa and
> > +s390 architectures, and tag-based KASAN is supported only for arm64.
> >
> >  Usage
> >  -
> > diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> > index 96dab76da3b3..70a7eb50984e 100644
> > --- a/arch/arm/Kconfig
> > +++ b/arch/arm/Kconfig
> > @@ -65,6 +65,7 @@ config ARM
> > select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
> > select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && 
> > MMU
> > select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
> > +   select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
> > select HAVE_ARCH_MMAP_RND_BITS if MMU
> > select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
> > select HAVE_ARCH_THREAD_STRUCT_WHITELIST
> > @@ -212,6 +213,14 @@ config ARCH_MAY_HAVE_PC_FDC
> >  config ZONE_DMA
> > bool
> >
> > +config KASAN_SHADOW_OFFSET
> > +   hex
> > +   depends on KASAN
> > +   default 0x1f00 if PAGE_OFFSET=0x4000
> > +   default 0x5f00 if PAGE_OFFSET=0x8000
> > +   default 0x9f00 if PAGE_OFFSET=0xC000
> > +   default 0x
> > +
> >  config ARCH_SUPPORTS_UPROBES
> > def_bool y
> >
> > diff --git a/arch/arm/boot/compressed/Makefile 
> > b/arch/arm/boot/compressed/Makefile
> > index 83991a0447fa..efda24b00a44 100644
> > --- a/arch/arm/boot/compressed/Makefile
> > +++ b/arch/arm/boot/compressed/Makefile
> > @@ -25,6 +25,7 @@ endif
> >
> >  GCOV_PROFILE   := n
> >  KASAN_SANITIZE := n
> > +CFLAGS_KERNEL  += -D__SANITIZE_ADDRESS__
> >
> >  # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
> >  KCOV_INSTRUMENT:= n
> > diff --git a/drivers/firmware/efi/libstub/Makefile 
> > b/drivers/firmware/efi/libstub/Makefile
> > index c35f893897e1..c8b36824189b 100644
> > --- a/drivers/firmware/efi/libstub/Makefile
> > +++ b/drivers/firmware/efi/libstub/Makefile
> > @@ -20,7 +20,8 @@ cflags-$(CONFIG_ARM64):= $(subst 
> > $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> >-fpie $(DISABLE_STACKLEAK_PLUGIN)
> >  cflags-$(CONFIG_ARM)   := $(subst 
> > $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> >-fno-builtin -fpic \
> > -  $(call cc-option,-mno-single-pic-base)
> > +  $(call cc-option,-mno-single-pic-base) \
> > +  -D__SANITIZE_ADDRESS__
> >
>
> I am not too crazy about this need to unconditionally 'enable' KASAN
> on the command line like this, in order to be able to disable it again
> when CONFIG_KASAN=y.
>
> Could we instead add something like this at the top of
> arch/arm/boot/compressed/string.c?
>
> #ifdef CONFIG_KASAN
> #undef memcpy
> #undef memmove
> #undef memset
> void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
> void *__memmove(void *__dest, __const void *__src, size_t count)
> __alias(memmove);
> void *__memset(void *s, int c, size_t count) __alias(memset);
> #endif
>


Re: [PATCH v7 7/7] ARM: Enable KASan for ARM

2020-04-10 Thread Ard Biesheuvel
On Fri, 17 Jan 2020 at 23:52, Florian Fainelli  wrote:
>
> From: Andrey Ryabinin 
>
> This patch enables the kernel address sanitizer for ARM. XIP_KERNEL has
> not been tested and is therefore not allowed.
>
> Acked-by: Dmitry Vyukov 
> Tested-by: Linus Walleij 
> Signed-off-by: Abbott Liu 
> Signed-off-by: Florian Fainelli 
> ---
>  Documentation/dev-tools/kasan.rst | 4 ++--
>  arch/arm/Kconfig  | 9 +
>  arch/arm/boot/compressed/Makefile | 1 +
>  drivers/firmware/efi/libstub/Makefile | 3 ++-
>  4 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/dev-tools/kasan.rst 
> b/Documentation/dev-tools/kasan.rst
> index e4d66e7c50de..6acd949989c3 100644
> --- a/Documentation/dev-tools/kasan.rst
> +++ b/Documentation/dev-tools/kasan.rst
> @@ -21,8 +21,8 @@ global variables yet.
>
>  Tag-based KASAN is only supported in Clang and requires version 7.0.0 or 
> later.
>
> -Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
> -architectures, and tag-based KASAN is supported only for arm64.
> +Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa and
> +s390 architectures, and tag-based KASAN is supported only for arm64.
>
>  Usage
>  -
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 96dab76da3b3..70a7eb50984e 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -65,6 +65,7 @@ config ARM
> select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
> select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
> select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
> +   select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
> select HAVE_ARCH_MMAP_RND_BITS if MMU
> select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
> select HAVE_ARCH_THREAD_STRUCT_WHITELIST
> @@ -212,6 +213,14 @@ config ARCH_MAY_HAVE_PC_FDC
>  config ZONE_DMA
> bool
>
> +config KASAN_SHADOW_OFFSET
> +   hex
> +   depends on KASAN
> +   default 0x1f00 if PAGE_OFFSET=0x4000
> +   default 0x5f00 if PAGE_OFFSET=0x8000
> +   default 0x9f00 if PAGE_OFFSET=0xC000
> +   default 0x
> +
>  config ARCH_SUPPORTS_UPROBES
> def_bool y
>
> diff --git a/arch/arm/boot/compressed/Makefile 
> b/arch/arm/boot/compressed/Makefile
> index 83991a0447fa..efda24b00a44 100644
> --- a/arch/arm/boot/compressed/Makefile
> +++ b/arch/arm/boot/compressed/Makefile
> @@ -25,6 +25,7 @@ endif
>
>  GCOV_PROFILE   := n
>  KASAN_SANITIZE := n
> +CFLAGS_KERNEL  += -D__SANITIZE_ADDRESS__
>
>  # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
>  KCOV_INSTRUMENT:= n
> diff --git a/drivers/firmware/efi/libstub/Makefile 
> b/drivers/firmware/efi/libstub/Makefile
> index c35f893897e1..c8b36824189b 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -20,7 +20,8 @@ cflags-$(CONFIG_ARM64):= $(subst 
> $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
>-fpie $(DISABLE_STACKLEAK_PLUGIN)
>  cflags-$(CONFIG_ARM)   := $(subst 
> $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
>-fno-builtin -fpic \
> -  $(call cc-option,-mno-single-pic-base)
> +  $(call cc-option,-mno-single-pic-base) \
> +  -D__SANITIZE_ADDRESS__
>

I am not too crazy about this need to unconditionally 'enable' KASAN
on the command line like this, in order to be able to disable it again
when CONFIG_KASAN=y.

Could we instead add something like this at the top of
arch/arm/boot/compressed/string.c?

#ifdef CONFIG_KASAN
#undef memcpy
#undef memmove
#undef memset
void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
void *__memmove(void *__dest, __const void *__src, size_t count)
__alias(memmove);
void *__memset(void *s, int c, size_t count) __alias(memset);
#endif


[PATCH RFCv1 7/7] arm64: Support async page fault

2020-04-10 Thread Gavin Shan
This supports asynchronous page fault for the guest. The design is
similar to what x86 has: on receiving a PAGE_NOT_PRESENT signal from
the hypervisor, the current task is either rescheduled or put into
power-saving mode. The task will be woken up when the PAGE_READY
signal is received.

The signals are conveyed through data aborts with a specific (IMPDEF)
Data Fault Status Code (DFSC). Besides, a hash table is introduced
to track the processes that have been put into the waiting state, to
avoid inconsistency.

The feature is put into the CONFIG_KVM_GUEST umbrella, which is added
by this patch.
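
A stripped-down sketch of the guest-side flow (hypothetical names;
locking and hash-table management omitted):

struct apf_wait_entry {
        struct swait_queue_head wq;
        u32 token;
        bool ready;
};

/* Called on PAGE_NOT_PRESENT: sleep until the host reports PAGE_READY */
static void apf_task_wait(u32 token)
{
        struct apf_wait_entry e = { .token = token, .ready = false };
        DECLARE_SWAITQUEUE(wait);

        init_swait_queue_head(&e.wq);
        /* ... insert &e into the hash table, keyed by token ... */

        for (;;) {
                prepare_to_swait_exclusive(&e.wq, &wait, TASK_UNINTERRUPTIBLE);
                if (READ_ONCE(e.ready))
                        break;
                schedule();     /* or WFI when rescheduling isn't possible */
        }
        finish_swait(&e.wq, &wait);
        /* ... remove &e from the hash table ... */
}

/* Called on PAGE_READY: wake the task waiting on this token */
static void apf_task_wake(struct apf_wait_entry *e)
{
        WRITE_ONCE(e->ready, true);
        swake_up_one(&e->wq);
}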

Signed-off-by: Gavin Shan 
---
 arch/arm64/Kconfig |  11 ++
 arch/arm64/include/asm/exception.h |   5 +
 arch/arm64/include/asm/kvm_para.h  |  42 -
 arch/arm64/kernel/smp.c|  47 ++
 arch/arm64/mm/fault.c  | 239 -
 5 files changed, 336 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 40fb05d96c60..2d5e5ee62d6d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1045,6 +1045,17 @@ config PARAVIRT
  under a hypervisor, potentially improving performance significantly
  over full virtualization.
 
+config KVM_GUEST
+   bool "KVM Guest Support"
+   depends on PARAVIRT
+   default y
+   help
+ This option enables various optimizations for running under the KVM
+ hypervisor. Overhead for the kernel when not running inside KVM should
+ be minimal.
+
+ In case of doubt, say Y
+
 config PARAVIRT_TIME_ACCOUNTING
bool "Paravirtual steal time accounting"
select PARAVIRT
diff --git a/arch/arm64/include/asm/exception.h 
b/arch/arm64/include/asm/exception.h
index 7a6e81ca23a8..17ac2db36472 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -46,4 +46,9 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned 
int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+
+#ifdef CONFIG_KVM_GUEST
+void kvm_pv_async_pf_enable(void);
+void kvm_pv_async_pf_disable(void);
+#endif /* CONFIG_KVM_GUEST */
 #endif /* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/kvm_para.h 
b/arch/arm64/include/asm/kvm_para.h
index 0ea481dd1c7a..a43bed479c2b 100644
--- a/arch/arm64/include/asm/kvm_para.h
+++ b/arch/arm64/include/asm/kvm_para.h
@@ -3,6 +3,30 @@
 #define _ASM_ARM_KVM_PARA_H
 
 #include 
+#include 
+
+#ifdef CONFIG_KVM_GUEST
+static inline int kvm_para_available(void)
+{
+   struct device_node *hyper_node;
+   int ret = 0;
+
+   hyper_node = of_find_node_by_path("/hypervisor");
+   if (!hyper_node)
+   return 0;
+
+   if (of_device_is_compatible(hyper_node, "linux,kvm"))
+   ret = 1;
+
+   of_node_put(hyper_node);
+   return ret;
+}
+#else
+static inline int kvm_para_available(void)
+{
+   return 0;
+}
+#endif /* CONFIG_KVM_GUEST */
 
 static inline bool kvm_check_and_clear_guest_paused(void)
 {
@@ -11,17 +35,21 @@ static inline bool kvm_check_and_clear_guest_paused(void)
 
 static inline unsigned int kvm_arch_para_features(void)
 {
-   return 0;
+   struct device_node *hyper_node;
+   unsigned int features = 0;
+
+   if (!kvm_para_available())
+   return 0;
+
+   hyper_node = of_find_node_by_path("/hypervisor");
+   of_property_read_u32(hyper_node, "para-features", &features);
+   of_node_put(hyper_node);
+
+   return features;
 }
 
 static inline unsigned int kvm_arch_para_hints(void)
 {
return 0;
 }
-
-static inline bool kvm_para_available(void)
-{
-   return false;
-}
-
 #endif /* _ASM_ARM_KVM_PARA_H */
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 061f60fe452f..cc97a8462d7f 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -443,6 +444,38 @@ void __init smp_cpus_done(unsigned int max_cpus)
mark_linear_text_alias_ro();
 }
 
+#ifdef CONFIG_KVM_GUEST
+static void kvm_cpu_reboot(void *unused)
+{
+   kvm_pv_async_pf_disable();
+}
+
+static int kvm_cpu_reboot_notify(struct notifier_block *nb,
+unsigned long code, void *unused)
+{
+   if (code == SYS_RESTART)
+   on_each_cpu(kvm_cpu_reboot, NULL, 1);
+
+   return NOTIFY_DONE;
+}
+
+static struct notifier_block kvm_cpu_reboot_nb = {
+   .notifier_call = kvm_cpu_reboot_notify,
+};
+
+static int kvm_cpu_online(unsigned int cpu)
+{
+   kvm_pv_async_pf_enable();
+   return 0;
+}
+
+static int kvm_cpu_offline(unsigned int cpu)
+{
+   kvm_pv_async_pf_disable();
+   return 0;
+}
+#endif /* CONFIG_KVM_GUEST */
+
 void __init smp_prepare_boot_cpu(void)
 {
set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
@@ -458,6 +491

[PATCH RFCv1 5/7] kvm/arm64: Allow inject data abort with specified DFSC

2020-04-10 Thread Gavin Shan
The data abort will be used as the signal by the asynchronous page
fault. However, a specific IMPDEF Data Fault Status Code (DFSC) is
used. Currently, there is no API to inject a data abort with a specific
DFSC. This fixes the gap by introducing kvm_inject_dabt_with_dfsc().

Signed-off-by: Gavin Shan 
---
 arch/arm64/include/asm/kvm_emulate.h |  4 
 arch/arm64/kvm/inject_fault.c| 34 
 virt/kvm/arm/aarch32.c   | 27 +++---
 3 files changed, 53 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index 2873bf6dc85e..fdf6a01b9dcb 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -31,9 +31,13 @@ void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool 
is_wide_instr);
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_vabt(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
+void kvm_inject_dabt_with_dfsc(struct kvm_vcpu *vcpu,
+  unsigned long addr, unsigned int dfsc);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_undef32(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr);
+void kvm_inject_dabt32_with_dfsc(struct kvm_vcpu *vcpu,
+unsigned long addr, unsigned int dfsc);
 void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long addr);
 
 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 0ae7c2e40e02..35794d0de0e9 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -110,7 +110,9 @@ static unsigned long get_except64_pstate(struct kvm_vcpu 
*vcpu)
return new;
 }
 
-static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long 
addr)
+static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt,
+unsigned long addr, bool dfsc_valid,
+unsigned int dfsc)
 {
unsigned long cpsr = *vcpu_cpsr(vcpu);
bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
@@ -143,7 +145,12 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool 
is_iabt, unsigned long addr
if (!is_iabt)
esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
 
-   vcpu_write_sys_reg(vcpu, esr | ESR_ELx_FSC_EXTABT, ESR_EL1);
+   if (dfsc_valid)
+   esr |= dfsc;
+   else
+   esr |= ESR_ELx_FSC_EXTABT;
+
+   vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
 }
 
 static void inject_undef64(struct kvm_vcpu *vcpu)
@@ -180,7 +187,26 @@ void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long 
addr)
if (vcpu_el1_is_32bit(vcpu))
kvm_inject_dabt32(vcpu, addr);
else
-   inject_abt64(vcpu, false, addr);
+   inject_abt64(vcpu, false, addr, false, 0);
+}
+
+/**
+ * kvm_inject_dabt_with_dfsc - inject a data abort into the guest
+ * @vcpu: The VCPU to receive the data abort
+ * @addr: The address to report in the DFAR
+ * @dfsc: The data fault status code to be reported in DFSR
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_inject_dabt_with_dfsc(struct kvm_vcpu *vcpu,
+  unsigned long addr,
+  unsigned int dfsc)
+{
+   if (vcpu_el1_is_32bit(vcpu))
+   kvm_inject_dabt32_with_dfsc(vcpu, addr, dfsc);
+   else
+   inject_abt64(vcpu, false, addr, true, dfsc);
 }
 
 /**
@@ -196,7 +222,7 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long 
addr)
if (vcpu_el1_is_32bit(vcpu))
kvm_inject_pabt32(vcpu, addr);
else
-   inject_abt64(vcpu, true, addr);
+   inject_abt64(vcpu, true, addr, false, 0);
 }
 
 /**
diff --git a/virt/kvm/arm/aarch32.c b/virt/kvm/arm/aarch32.c
index 0a356aa91aa1..82bded4cab25 100644
--- a/virt/kvm/arm/aarch32.c
+++ b/virt/kvm/arm/aarch32.c
@@ -163,7 +163,8 @@ void kvm_inject_undef32(struct kvm_vcpu *vcpu)
  * pseudocode.
  */
 static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
-unsigned long addr)
+unsigned long addr, bool dfsc_valid,
+unsigned int dfsc)
 {
u32 vect_offset;
u32 *far, *fsr;
@@ -184,21 +185,31 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool 
is_pabt,
*far = addr;
 
/* Give the guest an IMPLEMENTATION DEFINED exception */
-   is_lpae = (vcpu_cp15(vcpu, c2_TTBCR) >> 31);
-   if (is_lpae) {
-   *fsr = DFSR_LPAE | DFSR_FSC_EXTABT_LPAE;
+   if (dfsc_valid) {
+   *fsr = dfsc;
} else {
-   /* no need to shuffle FS[4] into DFSR[10] as its 0 */
-   *fsr = 

[PATCH RFCv1 3/7] kvm/arm64: Replace hsr with esr

2020-04-10 Thread Gavin Shan
This replaces the variable names to make them self-explanatory. The
tracepoints aren't changed accordingly because they're part of the ABI:

   * @hsr to @esr
   * @hsr_ec to @ec
   * Use kvm_vcpu_trap_get_class() helper if possible

Signed-off-by: Gavin Shan 
---
 arch/arm64/kvm/handle_exit.c | 28 ++--
 arch/arm64/kvm/hyp/switch.c  |  9 -
 arch/arm64/kvm/sys_regs.c| 30 +++---
 3 files changed, 33 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 00858db82a64..e3b3dcd5b811 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -123,13 +123,13 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
  */
 static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-   u32 hsr = kvm_vcpu_get_esr(vcpu);
+   u32 esr = kvm_vcpu_get_esr(vcpu);
int ret = 0;
 
run->exit_reason = KVM_EXIT_DEBUG;
-   run->debug.arch.hsr = hsr;
+   run->debug.arch.hsr = esr;
 
-   switch (ESR_ELx_EC(hsr)) {
+   switch (kvm_vcpu_trap_get_class(esr)) {
case ESR_ELx_EC_WATCHPT_LOW:
run->debug.arch.far = vcpu->arch.fault.far_el2;
/* fall through */
@@ -139,8 +139,8 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, 
struct kvm_run *run)
case ESR_ELx_EC_BRK64:
break;
default:
-   kvm_err("%s: un-handled case hsr: %#08x\n",
-   __func__, (unsigned int) hsr);
+   kvm_err("%s: un-handled case esr: %#08x\n",
+   __func__, (unsigned int)esr);
ret = -1;
break;
}
@@ -150,10 +150,10 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, 
struct kvm_run *run)
 
 static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-   u32 hsr = kvm_vcpu_get_esr(vcpu);
+   u32 esr = kvm_vcpu_get_esr(vcpu);
 
-   kvm_pr_unimpl("Unknown exception class: hsr: %#08x -- %s\n",
- hsr, esr_get_class_string(hsr));
+   kvm_pr_unimpl("Unknown exception class: esr: %#08x -- %s\n",
+ esr, esr_get_class_string(esr));
 
kvm_inject_undefined(vcpu);
return 1;
@@ -230,10 +230,10 @@ static exit_handle_fn arm_exit_handlers[] = {
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
 {
-   u32 hsr = kvm_vcpu_get_esr(vcpu);
-   u8 hsr_ec = ESR_ELx_EC(hsr);
+   u32 esr = kvm_vcpu_get_esr(vcpu);
+   u8 ec = kvm_vcpu_trap_get_class(esr);
 
-   return arm_exit_handlers[hsr_ec];
+   return arm_exit_handlers[ec];
 }
 
 /*
@@ -273,15 +273,15 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run 
*run,
 {
if (ARM_SERROR_PENDING(exception_index)) {
u32 esr = kvm_vcpu_get_esr(vcpu);
-   u8 hsr_ec = ESR_ELx_EC(esr);
+   u8 ec = kvm_vcpu_trap_get_class(esr);
 
/*
 * HVC/SMC already have an adjusted PC, which we need
 * to correct in order to return to after having
 * injected the SError.
 */
-   if (hsr_ec == ESR_ELx_EC_HVC32 || hsr_ec == ESR_ELx_EC_HVC64 ||
-   hsr_ec == ESR_ELx_EC_SMC32 || hsr_ec == ESR_ELx_EC_SMC64) {
+   if (ec == ESR_ELx_EC_HVC32 || ec == ESR_ELx_EC_HVC64 ||
+   ec == ESR_ELx_EC_SMC32 || ec == ESR_ELx_EC_SMC64) {
u32 adj =  kvm_vcpu_trap_il_is32bit(esr) ? 4 : 2;
*vcpu_pc(vcpu) -= adj;
}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 369f22f49f3d..7bf4840bf90e 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -356,8 +356,8 @@ static bool __hyp_text __populate_fault_info(struct 
kvm_vcpu *vcpu)
 static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
u32 esr = kvm_vcpu_get_esr(vcpu);
+   u8 ec = kvm_vcpu_trap_get_class(esr);
bool vhe, sve_guest, sve_host;
-   u8 hsr_ec;
 
if (!system_supports_fpsimd())
return false;
@@ -372,14 +372,13 @@ static bool __hyp_text __hyp_handle_fpsimd(struct 
kvm_vcpu *vcpu)
vhe = has_vhe();
}
 
-   hsr_ec = kvm_vcpu_trap_get_class(esr);
-   if (hsr_ec != ESR_ELx_EC_FP_ASIMD &&
-   hsr_ec != ESR_ELx_EC_SVE)
+   if (ec != ESR_ELx_EC_FP_ASIMD &&
+   ec != ESR_ELx_EC_SVE)
return false;
 
/* Don't handle SVE traps for non-SVE vcpus here: */
if (!sve_guest)
-   if (hsr_ec != ESR_ELx_EC_FP_ASIMD)
+   if (ec != ESR_ELx_EC_FP_ASIMD)
return false;
 
/* Valid trap.  Switch the context: */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 012fff834a4b..58f81ab519af 100644
--- a/arch/arm64/kvm/sys_regs.c

[PATCH RFCv1 1/7] kvm/arm64: Rename kvm_vcpu_get_hsr() to kvm_vcpu_get_esr()

2020-04-10 Thread Gavin Shan
Since kvm/arm32 was removed, this renames kvm_vcpu_get_hsr() to
kvm_vcpu_get_esr() to make it a bit more self-explanatory, because the
function returns ESR instead of HSR on aarch64. This shouldn't cause
any functional changes.

Signed-off-by: Gavin Shan 
---
 arch/arm64/include/asm/kvm_emulate.h | 36 +++-
 arch/arm64/kvm/handle_exit.c | 12 +-
 arch/arm64/kvm/hyp/switch.c  |  2 +-
 arch/arm64/kvm/sys_regs.c|  6 ++---
 virt/kvm/arm/hyp/aarch32.c   |  2 +-
 virt/kvm/arm/hyp/vgic-v3-sr.c|  4 ++--
 virt/kvm/arm/mmu.c   |  6 ++---
 7 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index a30b4eec7cb4..bd1a69e7c104 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -265,14 +265,14 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu 
*vcpu)
return mode != PSR_MODE_EL0t;
 }
 
-static __always_inline u32 kvm_vcpu_get_hsr(const struct kvm_vcpu *vcpu)
+static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
 {
return vcpu->arch.fault.esr_el2;
 }
 
 static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 {
-   u32 esr = kvm_vcpu_get_hsr(vcpu);
+   u32 esr = kvm_vcpu_get_esr(vcpu);
 
if (esr & ESR_ELx_CV)
return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
@@ -297,64 +297,66 @@ static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu 
*vcpu)
 
 static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu)
 {
-   return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_xVC_IMM_MASK;
+   return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK;
 }
 
 static __always_inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
 {
-   return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
+   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_ISV);
 }
 
 static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct 
kvm_vcpu *vcpu)
 {
-   return kvm_vcpu_get_hsr(vcpu) & (ESR_ELx_CM | ESR_ELx_WNR | 
ESR_ELx_FSC);
+   return kvm_vcpu_get_esr(vcpu) &
+  (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
 }
 
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
-   return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
+   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SSE);
 }
 
 static inline bool kvm_vcpu_dabt_issf(const struct kvm_vcpu *vcpu)
 {
-   return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SF);
+   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SF);
 }
 
 static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
 {
-   return (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
+   return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
 }
 
 static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
 {
-   return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
+   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW);
 }
 
 static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
 {
-   return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
+   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR) ||
kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
 }
 
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 {
-   return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
+   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_CM);
 }
 
 static __always_inline unsigned int kvm_vcpu_dabt_get_as(const struct kvm_vcpu 
*vcpu)
 {
-   return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS) >> 
ESR_ELx_SAS_SHIFT);
+   return 1 << ((kvm_vcpu_get_esr(vcpu) & ESR_ELx_SAS) >>
+ESR_ELx_SAS_SHIFT);
 }
 
 /* This one is not specific to Data Abort */
 static __always_inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu 
*vcpu)
 {
-   return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_IL);
+   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_IL);
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
 {
-   return ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
+   return ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
 }
 
 static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
@@ -364,12 +366,12 @@ static inline bool kvm_vcpu_trap_is_iabt(const struct 
kvm_vcpu *vcpu)
 
 static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
 {
-   return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC;
+   return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC;
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu 
*vcpu)
 {
-   return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC_TYPE;
+   return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_TYPE;
 }
 
 static __always_inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
@@ -393,7 +395,7 @@ static __always_inline bool kvm_vcpu_dabt_isextabt(cons

[PATCH RFCv1 6/7] kvm/arm64: Support async page fault

2020-04-10 Thread Gavin Shan
There are two stages of page faults and the stage one page fault is
handled by the guest itself. The guest is trapped to the host when the
page fault is caused by the stage 2 page table, for example a missing
entry. The guest is suspended until the requested page is populated.
Populating the requested page can involve I/O activities if the page
was swapped out previously. In this case, the guest has to be suspended
for at least a few milliseconds, regardless of the overall system load.

This adds support for asynchronous page fault to improve the above
situation. A signal (PAGE_NOT_PRESENT) is sent to the guest. The guest
might reschedule to another runnable process if rescheduling is
allowed. Otherwise, the CPU is put into power-saving mode, which
actually causes a vCPU reschedule from the host's view. Another signal
(PAGE_READY) is sent to the guest once the requested page is populated.
The suspended task is scheduled or woken up when the guest receives the
signal. More details are highlighted below. Note the implementation is
pretty similar to what x86 has.

   * A signal (PAGE_NOT_PRESENT) is sent to the guest if the requested
     page isn't ready. In the meanwhile, a work item is started to
     populate the page asynchronously in the background. The stage 2
     page table entry is updated accordingly and another signal
     (PAGE_READY) is fired after the requested page is populated.

   * An IMPDEF system register (SYS_ASYNC_PF_EL1) is added. The
     register accepts the physical address of the control block, which
     is 64-bit aligned and represented by struct kvm_vcpu_pv_apf_data.
     The low bits of the control block's physical address are used to
     enable/disable asynchronous page fault, enable the requested
     features etc.

   * A hash table whose key is gfn is maintained for each vCPU, so that
     duplicate signals won't be fired for one gfn (see the sketch
     below).

   * The signal is conveyed through a data abort with an IMPDEF Data
     Fault Status Code (DFSC), which is 0x34. The specific events are
     stored in the control block, waiting for the guest to read them.
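
For reference, the gfn dedup mirrors x86's open-addressed table. A
sketch along those lines (following x86's kvm.c, not necessarily this
patch's exact code):

/* ~0 marks a free slot; ASYNC_PF_PER_VCPU is a power of two */
static u32 async_pf_hash_fn(gfn_t gfn)
{
        return hash_32(gfn & 0xffffffff, order_base_2(ASYNC_PF_PER_VCPU));
}

static u32 async_pf_next_probe(u32 key)
{
        return (key + 1) & (ASYNC_PF_PER_VCPU - 1);
}

static void async_pf_add_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
{
        u32 key = async_pf_hash_fn(gfn);

        /* Linear probing: find the first free slot */
        while (vcpu->arch.apf.gfns[key] != ~0)
                key = async_pf_next_probe(key);

        vcpu->arch.apf.gfns[key] = gfn;
}

static bool async_pf_find_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
{
        u32 key = async_pf_hash_fn(gfn);
        int i;

        for (i = 0; i < ASYNC_PF_PER_VCPU &&
                    vcpu->arch.apf.gfns[key] != ~0; i++) {
                if (vcpu->arch.apf.gfns[key] == gfn)
                        return true;    /* a signal is already pending */
                key = async_pf_next_probe(key);
        }

        return false;
}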

Signed-off-by: Gavin Shan 
---
 arch/arm64/include/asm/kvm_host.h  |  42 
 arch/arm64/include/asm/kvm_para.h  |  27 +++
 arch/arm64/include/asm/sysreg.h|   3 +
 arch/arm64/include/uapi/asm/Kbuild |   3 -
 arch/arm64/include/uapi/asm/kvm_para.h |  22 ++
 arch/arm64/kvm/Kconfig |   1 +
 arch/arm64/kvm/Makefile|   2 +
 arch/arm64/kvm/sys_regs.c  |  53 +
 virt/kvm/arm/arm.c |  32 ++-
 virt/kvm/arm/async_pf.c| 290 +
 virt/kvm/arm/mmu.c |  29 ++-
 11 files changed, 498 insertions(+), 6 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_para.h
 delete mode 100644 arch/arm64/include/uapi/asm/Kbuild
 create mode 100644 arch/arm64/include/uapi/asm/kvm_para.h
 create mode 100644 virt/kvm/arm/async_pf.c

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index f77c706777ec..24fbfa36a951 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -250,6 +250,23 @@ struct vcpu_reset_state {
boolreset;
 };
 
+#ifdef CONFIG_KVM_ASYNC_PF
+
+/* Should be a power of two number */
+#define ASYNC_PF_PER_VCPU  64
+
+/*
+ * The association of gfn and token. The token will be sent to guest as
+ * page fault address. Also, the guest could be in aarch32 mode. So its
+ * length should be 32-bits.
+ */
+struct kvm_arch_async_pf {
+   u32 token;
+   gfn_t   gfn;
+   u32 esr;
+};
+#endif /* CONFIG_KVM_ASYNC_PF */
+
 struct kvm_vcpu_arch {
struct kvm_cpu_context ctxt;
void *sve_state;
@@ -351,6 +368,16 @@ struct kvm_vcpu_arch {
u64 last_steal;
gpa_t base;
} steal;
+
+#ifdef CONFIG_KVM_ASYNC_PF
+   struct {
+   struct gfn_to_hva_cache cache;
+   gfn_t   gfns[ASYNC_PF_PER_VCPU];
+   u64 msr_val;
+   u16 id;
+   boolsend_user_only;
+   } apf;
+#endif
 };
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -604,6 +631,21 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 static inline void __cpu_init_stage2(void) {}
 
+#ifdef CONFIG_KVM_ASYNC_PF
+bool kvm_async_pf_hash_find(struct kvm_vcpu *vcpu, gfn_t gfn);
+bool kvm_arch_can_inject_async_page_not_present(struct kvm_vcpu *vcpu);
+bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu);
+int kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, u32 esr,
+   gpa_t gpa, gfn_t gfn);
+void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
+struct kvm_async_pf *work);
+void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
+struct kvm_async_pf *work);
+void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu,

[PATCH RFCv1 0/7] Support Async Page Fault

2020-04-10 Thread Gavin Shan
There are two stages of page faults and the stage one page fault is
handled by the guest itself. The guest is trapped to the host when the
page fault is caused by the stage 2 page table, for example a missing
entry. The guest is suspended until the requested page is populated.
Populating the requested page can be costly and might involve I/O
activities if the page was swapped out previously. In this case, the
guest has to be suspended for at least a few milliseconds, regardless
of the overall system load.

The series adds support for asynchronous page fault to improve the
above situation. If it's costly to populate the requested page, a
signal (PAGE_NOT_PRESENT) is sent to the guest so that the faulting
process can be rescheduled if possible. Otherwise, it is put into
power-saving mode. Another signal (PAGE_READY) is sent to the guest
once the requested page is populated so that the faulting process can
be woken up from either the waiting state or the power-saving state.

In order to fulfil the control flow and convey signals between host
and guest, an IMPDEF system register (SYS_ASYNC_PF_EL1) is introduced.
The register accepts the control block's physical address, plus the
requested features. Also, the signal is sent using a data abort with a
specific IMPDEF Data Fault Status Code (DFSC). The specific signal is
stored in the control block by the host, to be consumed by the guest.
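
Conceptually, the guest side of the handshake boils down to something
like this (a sketch only; the field names and the flag value are
illustrative, not the exact ABI of this series):

/* Illustrative layout; the real one lives in the series' kvm_para.h */
struct kvm_vcpu_pv_apf_data {
        u32 reason;     /* PAGE_NOT_PRESENT / PAGE_READY, written by host */
        u32 token;      /* identifies the faulting access */
};

#define ASYNC_PF_ENABLED        (1UL << 0)      /* illustrative flag */

static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_data) __aligned(64);

static void async_pf_enable(void)
{
        u64 pa = virt_to_phys(this_cpu_ptr(&apf_data));

        /* Low bits of the 64-byte aligned PA carry enable/feature bits */
        write_sysreg_s(pa | ASYNC_PF_ENABLED, SYS_ASYNC_PF_EL1);
}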

Todo

* CONFIG_KVM_ASYNC_PF_SYNC is disabled for now because the exception
  injection can't work in nested mode. It might be something to be
  improved in the future.
* KVM_ASYNC_PF_SEND_ALWAYS is disabled even with CONFIG_PREEMPTION
  because it's simply not working reliably.
* Tracepoints, which should be done in the short term.
* kvm-unit-test cases.
* More testing and debugging are needed. Sometimes, the guest can get
  stuck and the root cause needs to be figured out.

PATCH[01] renames kvm_vcpu_get_hsr() to kvm_vcpu_get_esr() since the
  aarch32 host isn't supported.
PATCH[02] allows various helper functions to access ESR value from
  somewhere other than vCPU struct.
PATCH[03] replaces @hsr with @esr as aarch32 host isn't supported.
PATCH[04] exports kvm_handle_user_mem_abort(), which is used by the
  subsequent patch.
PATCH[05] introduces API to inject data abort with IMPDEF DFSC
PATCH[06] supports asynchronous page fault for host
PATCH[07] supports asynchronous page fault for guest

Testing
=======

Start a VM whose QEMU process is put into a specific memory cgroup.
The cgroup's memory limitation is less than the total amount of memory
assigned to the VM. For example, the VM is assigned 4GB of memory, but
the cgroup's limitation is 2GB. A program is run after the VM boots up,
to allocate (and access) all free memory. No system hang is found.
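
The allocation program can be as simple as the following sketch (not
the exact program used; the memset is what actually forces the pages
to be populated):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        const size_t chunk = 64 << 20;  /* allocate in 64MB steps */
        size_t total = 0;
        void *p;

        while ((p = malloc(chunk)) != NULL) {
                memset(p, 0x5a, chunk); /* touch every page */
                total += chunk;
                printf("allocated %zu MB\n", total >> 20);
        }

        return 0;
}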

Gavin Shan (7):
  kvm/arm64: Rename kvm_vcpu_get_hsr() to kvm_vcpu_get_esr()
  kvm/arm64: Detach ESR operator from vCPU struct
  kvm/arm64: Replace hsr with esr
  kvm/arm64: Export kvm_handle_user_mem_abort() with prefault mode
  kvm/arm64: Allow inject data abort with specified DFSC
  kvm/arm64: Support async page fault
  arm64: Support async page fault

 arch/arm64/Kconfig   |  11 +
 arch/arm64/include/asm/exception.h   |   5 +
 arch/arm64/include/asm/kvm_emulate.h |  87 +++
 arch/arm64/include/asm/kvm_host.h|  46 
 arch/arm64/include/asm/kvm_para.h|  55 +
 arch/arm64/include/asm/sysreg.h  |   3 +
 arch/arm64/include/uapi/asm/Kbuild   |   3 -
 arch/arm64/include/uapi/asm/kvm_para.h   |  22 ++
 arch/arm64/kernel/smp.c  |  47 
 arch/arm64/kvm/Kconfig   |   1 +
 arch/arm64/kvm/Makefile  |   2 +
 arch/arm64/kvm/handle_exit.c |  48 ++--
 arch/arm64/kvm/hyp/switch.c  |  33 +--
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c |   7 +-
 arch/arm64/kvm/inject_fault.c|  38 ++-
 arch/arm64/kvm/sys_regs.c|  91 +--
 arch/arm64/mm/fault.c| 239 ++-
 virt/kvm/arm/aarch32.c   |  27 ++-
 virt/kvm/arm/arm.c   |  36 ++-
 virt/kvm/arm/async_pf.c  | 290 +++
 virt/kvm/arm/hyp/aarch32.c   |   4 +-
 virt/kvm/arm/hyp/vgic-v3-sr.c|   7 +-
 virt/kvm/arm/mmio.c  |  27 ++-
 virt/kvm/arm/mmu.c   |  69 --
 24 files changed, 1040 insertions(+), 158 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_para.h
 delete mode 100644 arch/arm64/include/uapi/asm/Kbuild
 create mode 100644 arch/arm64/include/uapi/asm/kvm_para.h
 create mode 100644 virt/kvm/arm/async_pf.c

-- 
2.23.0



[PATCH RFCv1 2/7] kvm/arm64: Detach ESR operator from vCPU struct

2020-04-10 Thread Gavin Shan
There is a set of inline functions defined in kvm_emulate.h. Those
functions read the ESR from the vCPU fault information struct and then
operate on it. So they're tied to the vCPU fault information and vCPU
struct, which limits their usage scope.

This detaches these functions from the vCPU struct. With this, the
caller has flexibility on where the ESR is read. It shouldn't cause
any functional changes.

Signed-off-by: Gavin Shan 
---
 arch/arm64/include/asm/kvm_emulate.h | 83 +++-
 arch/arm64/kvm/handle_exit.c | 20 --
 arch/arm64/kvm/hyp/switch.c  | 24 ---
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c |  7 +-
 arch/arm64/kvm/inject_fault.c|  4 +-
 arch/arm64/kvm/sys_regs.c| 12 ++--
 virt/kvm/arm/arm.c   |  4 +-
 virt/kvm/arm/hyp/aarch32.c   |  2 +-
 virt/kvm/arm/hyp/vgic-v3-sr.c|  5 +-
 virt/kvm/arm/mmio.c  | 27 
 virt/kvm/arm/mmu.c   | 22 ---
 11 files changed, 112 insertions(+), 98 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index bd1a69e7c104..2873bf6dc85e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -270,10 +270,8 @@ static __always_inline u32 kvm_vcpu_get_esr(const struct 
kvm_vcpu *vcpu)
return vcpu->arch.fault.esr_el2;
 }
 
-static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
+static __always_inline int kvm_vcpu_get_condition(u32 esr)
 {
-   u32 esr = kvm_vcpu_get_esr(vcpu);
-
if (esr & ESR_ELx_CV)
return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
 
@@ -295,88 +293,86 @@ static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu 
*vcpu)
return vcpu->arch.fault.disr_el1;
 }
 
-static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu)
+static __always_inline u32 kvm_vcpu_hvc_get_imm(u32 esr)
 {
-   return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK;
+   return esr & ESR_ELx_xVC_IMM_MASK;
 }
 
-static __always_inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_isvalid(u32 esr)
 {
-   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_ISV);
+   return !!(esr & ESR_ELx_ISV);
 }
 
-static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct 
kvm_vcpu *vcpu)
+static __always_inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(u32 esr)
 {
-   return kvm_vcpu_get_esr(vcpu) &
-  (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
+   return esr & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
 }
 
-static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_issext(u32 esr)
 {
-   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SSE);
+   return !!(esr & ESR_ELx_SSE);
 }
 
-static inline bool kvm_vcpu_dabt_issf(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_issf(u32 esr)
 {
-   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SF);
+   return !!(esr & ESR_ELx_SF);
 }
 
-static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
+static __always_inline int kvm_vcpu_dabt_get_rd(u32 esr)
 {
-   return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
+   return (esr & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
 }
 
-static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_iss1tw(u32 esr)
 {
-   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW);
+   return !!(esr & ESR_ELx_S1PTW);
 }
 
-static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_iswrite(u32 esr)
 {
-   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR) ||
-   kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+   return !!(esr & ESR_ELx_WNR) ||
+   kvm_vcpu_dabt_iss1tw(esr); /* AF/DBM update */
 }
 
-static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_is_cm(u32 esr)
 {
-   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_CM);
+   return !!(esr & ESR_ELx_CM);
 }
 
-static __always_inline unsigned int kvm_vcpu_dabt_get_as(const struct kvm_vcpu 
*vcpu)
+static __always_inline unsigned int kvm_vcpu_dabt_get_as(u32 esr)
 {
-   return 1 << ((kvm_vcpu_get_esr(vcpu) & ESR_ELx_SAS) >>
-ESR_ELx_SAS_SHIFT);
+   return 1 << ((esr & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
 }
 
 /* This one is not specific to Data Abort */
-static __always_inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu 
*vcpu)
+static __always_inline bool kvm_vcpu_trap_il_is32bit(u32 esr)
 {
-   return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_IL);
+   return !!(esr & ESR_ELx_IL);
 }
 
-static __always_inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
+static __always_inli

[PATCH RFCv1 4/7] kvm/arm64: Export kvm_handle_user_mem_abort() with prefault mode

2020-04-10 Thread Gavin Shan
This renames user_mem_abort() to kvm_handle_user_mem_abort() and then
exports it. The function will be used by the asynchronous page fault
code to populate a page table entry once the corresponding page is
populated from the backing device (e.g. swap partition):

   * Parameter @fault_status is replaced by @esr.
   * The parameters are reordered based on their importance.

This shouldn't cause any functional changes.

Signed-off-by: Gavin Shan 
---
 arch/arm64/include/asm/kvm_host.h |  4 
 virt/kvm/arm/mmu.c| 14 --
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 32c8a675e5a4..f77c706777ec 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -437,6 +437,10 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
  struct kvm_vcpu_events *events);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
+int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr,
+ struct kvm_memory_slot *memslot,
+ phys_addr_t fault_ipa, unsigned long hva,
+ bool prefault);
 int kvm_unmap_hva_range(struct kvm *kvm,
unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e462e0368fd9..95aaabb2b1fc 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1656,12 +1656,12 @@ static bool fault_supports_stage2_huge_mapping(struct 
kvm_memory_slot *memslot,
   (hva & ~(map_size - 1)) + map_size <= uaddr_end;
 }
 
-static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
- struct kvm_memory_slot *memslot, unsigned long hva,
- unsigned long fault_status)
+int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr,
+ struct kvm_memory_slot *memslot,
+ phys_addr_t fault_ipa, unsigned long hva,
+ bool prefault)
 {
-   int ret;
-   u32 esr = kvm_vcpu_get_esr(vcpu);
+   unsigned int fault_status = kvm_vcpu_trap_get_fault_type(esr);
bool write_fault, writable, force_pte = false;
bool exec_fault, needs_exec;
unsigned long mmu_seq;
@@ -1674,6 +1674,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, 
phys_addr_t fault_ipa,
pgprot_t mem_type = PAGE_S2;
bool logging_active = memslot_is_logging(memslot);
unsigned long vma_pagesize, flags = 0;
+   int ret;
 
write_fault = kvm_is_write_fault(esr);
exec_fault = kvm_vcpu_trap_is_iabt(esr);
@@ -1995,7 +1996,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct 
kvm_run *run)
goto out_unlock;
}
 
-   ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
+   ret = kvm_handle_user_mem_abort(vcpu, esr, memslot,
+   fault_ipa, hva, false);
if (ret == 0)
ret = 1;
 out:
-- 
2.23.0



Re: Contribution to KVM.

2020-04-10 Thread Marc Zyngier

Hi Javier,

On 2020-04-09 22:29, Javier Romero wrote:

Hello,

 My name is Javier. I live in Argentina and work as a cloud engineer.

I have been working with Linux servers for the last 10 years at an
Internet Service Provider, and I'm interested in contributing to KVM,
maybe with testing as a starting point.

If it can be useful to test KVM on ARM, I have a Raspberry Pi 3 at my
disposal.


Testing is great (although the RPi-3 isn't the most interesting
platform due to its many hardware limitations). If you are familiar
with the ARM architecture, helping with patch review is also much
appreciated.

Thanks,

M.
--
Jazz is not dead. It just smells funny...


Re: Contribution to KVM.

2020-04-10 Thread Xu, Like

On 2020/4/10 5:29, Javier Romero wrote:

Hello,

  My name is Javier. I live in Argentina and work as a cloud engineer.

I have been working with Linux servers for the last 10 years at an
Internet Service Provider, and I'm interested in contributing to KVM

Welcome, I'm a newbie as well.

maybe with testing as a starting point.

You may try the http://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
and tools/testing/selftests/kvm in the kernel tree.


If it can be useful to test KVM on ARM, I have a Raspberry Pi 3 at my disposal.

If you test KVM on Intel platforms, you will definitely get support from me :D.

Thanks,
Like Xu


Thanks for your kind attention.

Best Regards,



Javier Romero




Re: Contribution to KVM.

2020-04-10 Thread Javier Romero
Hi Like Xu,

Thank you for taking the time to answer.

Yes, I can also test KVM on an Intel platform; I have a Pixelbook with
a Core i7 processor and 16 GB of RAM at my disposal to start working.

Thanks for your attention.

Regards,

On Fri, Apr 10, 2020 at 00:34, Xu, Like wrote:

> On 2020/4/10 5:29, Javier Romero wrote:
> > Hello,
> >
> >   My name is Javier, live in Argentina and work as a cloud engineer.
> >
> > Have been working with Linux servers for the last 10 years in an
> > Internet Service Provider and I'm interested in contributing to KVM
> Welcome, I'm a newbie as well.
> > maybe with testing as a start point.
> You may try the http://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> and tools/testing/selftests/kvm in the kernel tree.
> >
> > If it can be useful to test KVM on ARM, I have a Raspberry PI 3 at
> disposal.
> If you test KVM on Intel platforms, you will definitely get support from
> me :D.
>
> Thanks,
> Like Xu
> >
> > Thanks for your kind attention.
> >
> > Best Regards,
> >
> >
> >
> > Javier Romero
>
>


Contribution to KVM.

2020-04-10 Thread Javier Romero
Hello,

 My name is Javier. I live in Argentina and work as a cloud engineer.

I have been working with Linux servers for the last 10 years at an
Internet Service Provider, and I'm interested in contributing to KVM,
maybe with testing as a starting point.

If it can be useful to test KVM on ARM, I have a Raspberry Pi 3 at my disposal.

Thanks for your kind attention.

Best Regards,



Javier Romero


Re: Contribution to KVM.

2020-04-10 Thread Nadav Amit
> On Apr 9, 2020, at 8:34 PM, Xu, Like  wrote:
> 
> On 2020/4/10 5:29, Javier Romero wrote:
>> Hello,
>> 
>>  My name is Javier, live in Argentina and work as a cloud engineer.
>> 
>> Have been working with Linux servers for the last 10 years in an
>> Internet Service Provider and I'm interested in contributing to KVM
> Welcome, I'm a newbie as well.
>> maybe with testing as a start point.
> You may try the http://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> and tools/testing/selftests/kvm in the kernel tree.
>> If it can be useful to test KVM on ARM, I have a Raspberry PI 3 at disposal.
> If you test KVM on Intel platforms, you will definitely get support from me 
> :D.

If you are looking for something specific, here are two issues with
relatively limited scope, which AFAIK were not resolved:

1. Shadow VMCS bug, which is also a test bug [1]. You can start by fixing
   the test and then fix KVM.

2. Try to run the tests with more than 4GB of memory. The last time I tried
   (actually by running the test on bare metal), the INIT test that Liran
   wrote failed.

Regards,
Nadav

[1] https://lore.kernel.org/kvm/3235dbb0-0dc0-418c-bc45-a4b78612e...@gmail.com/T/#u


Re: Contribution to KVM.

2020-04-10 Thread Javier Romero
Hi Like Xu,

Thank you for taking the time to answer.

Of course I can also test KVM on an Intel platform if this can be
useful; I have a Pixelbook laptop with a Core i7 processor and 16 GB
of RAM at my disposal :D

Thanks for your attention.

Regards,


Javier Romero


On Fri, Apr 10, 2020 at 0:34, Xu, Like () wrote:
>
> On 2020/4/10 5:29, Javier Romero wrote:
> > Hello,
> >
> >   My name is Javier, live in Argentina and work as a cloud engineer.
> >
> > Have been working with Linux servers for the last 10 years in an
> > Internet Service Provider and I'm interested in contributing to KVM
> Welcome, I'm a newbie as well.
> > maybe with testing as a start point.
> You may try the http://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> and tools/testing/selftests/kvm in the kernel tree.
> >
> > If it can be useful to test KVM on ARM, I have a Raspberry PI 3 at disposal.
> If you test KVM on Intel platforms, you will definitely get support from me 
> :D.
>
> Thanks,
> Like Xu
> >
> > Thanks for your kind attention.
> >
> > Best Regards,
> >
> >
> >
> > Javier Romero
>


Re: Contribution to KVM.

2020-04-10 Thread Javier Romero
Hi Nadav,

Thank you for your answer.

I will also take a look at the test bug you suggested.

Regards,


Javier Romero



On Fri, Apr 10, 2020 at 0:53, Nadav Amit () wrote:
>
> > On Apr 9, 2020, at 8:34 PM, Xu, Like  wrote:
> >
> > On 2020/4/10 5:29, Javier Romero wrote:
> >> Hello,
> >>
> >>  My name is Javier, live in Argentina and work as a cloud engineer.
> >>
> >> Have been working with Linux servers for the last 10 years in an
> >> Internet Service Provider and I'm interested in contributing to KVM
> > Welcome, I'm a newbie as well.
> >> maybe with testing as a start point.
> > You may try the http://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> > and tools/testing/selftests/kvm in the kernel tree.
> >> If it can be useful to test KVM on ARM, I have a Raspberry PI 3 at 
> >> disposal.
> > If you test KVM on Intel platforms, you will definitely get support from me 
> > :D.
>
> If you are looking for something specific, here are two issues with
> relatively limited scope, which AFAIK were not resolved:
>
> 1. Shadow VMCS bug, which is also a test bug [1]. You can start by fixing
>the test and then fix KVM.
>
> 2. Try to run the tests with more than 4GB of memory. The last time I tried
>(actually by running the test on bare metal), the INIT test that Liran
>wrote failed.
>
> Regards,
> Nadav
>
> [1] https://lore.kernel.org/kvm/3235dbb0-0dc0-418c-bc45-a4b78612e...@gmail.com/T/#u