Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu

2021-08-12 Thread Will Deacon
On Wed, Jul 21, 2021 at 08:37:21AM +0100, Fuad Tabba wrote:
> On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones  wrote:
> >
> > On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > > On deactivating traps, restore the value of mdcr_el2 from the
> > > newly created and preserved host value vcpu context, rather than
> > > directly reading the hardware register.
> > >
> > > Up until and including this patch the two values are the same,
> > > i.e., the hardware register and the vcpu one. A future patch will
> > > be changing the value of mdcr_el2 on activating traps, and this
> > > ensures that its value will be restored.
> > >
> > > No functional change intended.
> >
> > I'm probably missing something, but I can't convince myself that the host
> > will end up with the same mdcr_el2 value after deactivating traps after
> > this patch as before. We clearly now restore whatever we had when
> > activating traps (presumably whatever we configured at init_el2_state
> > time), but is that equivalent to what we had before with the masking and
> > ORing that this patch drops?
> 
> You're right. I thought that these were actually being initialized to
> the same values, but having a closer look at the code the mdcr values
> are not the same as pre-patch. I will fix this.

Can you elaborate on the issue here, please? I was just looking at this
but aren't you now relying on __init_el2_debug to configure this, which
should be fine?

Will
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v3 01/15] KVM: arm64: placeholder to check if VM is protected

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:32PM +0100, Fuad Tabba wrote:
> Add a function to check whether a VM is protected (under pKVM).
> Since the creation of protected VMs isn't enabled yet, this is a
> placeholder that always returns false. The intention is for this
> to become a check for protected VMs in the future (see Will's RFC
> [*]).
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba 
> 
> [*] https://lore.kernel.org/kvmarm/20210603183347.1695-1-w...@kernel.org/

You can make this a Link: tag.
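(For reference, a `Link:` trailer is just a line in the commit-message trailer block. A hypothetical example, using the lore URL quoted above — the exact trailer ordering shown here is illustrative:)

```
KVM: arm64: placeholder to check if VM is protected

...commit message body...

Link: https://lore.kernel.org/kvmarm/20210603183347.1695-1-w...@kernel.org/
Signed-off-by: Fuad Tabba 
```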

Anyway, I think it makes lots of sense to decouple this from the user-ABI
series:

Acked-by: Will Deacon 

Will


Re: [PATCH v3 02/15] KVM: arm64: Remove trailing whitespace in comment

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:33PM +0100, Fuad Tabba wrote:
> Remove trailing whitespace from comment in trap_dbgauthstatus_el1().
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/kvm/sys_regs.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f6f126eb6ac1..80a6e41cadad 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -318,14 +318,14 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu 
> *vcpu,
>  /*
>   * We want to avoid world-switching all the DBG registers all the
>   * time:
> - * 
> + *
>   * - If we've touched any debug register, it is likely that we're
>   *   going to touch more of them. It then makes sense to disable the
>   *   traps and start doing the save/restore dance
>   * - If debug is active (DBG_MDSCR_KDE or DBG_MDSCR_MDE set), it is
>   *   then mandatory to save/restore the registers, as the guest
>   *   depends on them.
> - * 
> + *
>   * For this, we use a DIRTY bit, indicating the guest has modified the
>   * debug registers, used as follow:
>   *

I'd usually be against these sorts of changes but given you're in the
area...

Acked-by: Will Deacon 

Will


Re: [PATCH v3 05/15] KVM: arm64: Refactor sys_regs.h, c for nVHE reuse

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:36PM +0100, Fuad Tabba wrote:
> Refactor sys_regs.h and sys_regs.c to make it easier to reuse
> common code. It will be used in nVHE in a later patch.
> 
> Note that the refactored code uses __inline_bsearch for find_reg
> instead of bsearch to avoid copying the bsearch code for nVHE.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/include/asm/sysreg.h |  3 +++
>  arch/arm64/kvm/sys_regs.c   | 30 +-
>  arch/arm64/kvm/sys_regs.h   | 31 +++
>  3 files changed, 35 insertions(+), 29 deletions(-)

With the naming change suggested by Drew:

Acked-by: Will Deacon 

Will


Re: [PATCH v3 07/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:38PM +0100, Fuad Tabba wrote:
> Track the baseline guest value for cptr_el2 in struct
> kvm_vcpu_arch, similar to the other registers that control traps.
> Use this value when setting cptr_el2 for the guest.
> 
> Currently this value is unchanged (CPTR_EL2_DEFAULT), but future
> patches will set trapping bits based on features supported for
> the guest.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/include/asm/kvm_host.h | 1 +
>  arch/arm64/kvm/arm.c  | 1 +
>  arch/arm64/kvm/hyp/nvhe/switch.c  | 2 +-
>  3 files changed, 3 insertions(+), 1 deletion(-)

Acked-by: Will Deacon 

Will


Re: [PATCH v3 08/15] KVM: arm64: Add feature register flag definitions

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:39PM +0100, Fuad Tabba wrote:
> Add feature register flag definitions to clarify which features
> might be supported.
> 
> Consolidate the various ID_AA64PFR0_ELx flags for all ELs.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/include/asm/cpufeature.h |  4 ++--
>  arch/arm64/include/asm/sysreg.h | 12 
>  arch/arm64/kernel/cpufeature.c  |  8 
>  3 files changed, 14 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h 
> b/arch/arm64/include/asm/cpufeature.h
> index 9bb9d11750d7..b7d9bb17908d 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -602,14 +602,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
>  {
>   u32 val = cpuid_feature_extract_unsigned_field(pfr0, 
> ID_AA64PFR0_EL1_SHIFT);
>  
> - return val == ID_AA64PFR0_EL1_32BIT_64BIT;
> + return val == ID_AA64PFR0_ELx_32BIT_64BIT;
>  }
>  
>  static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
>  {
>   u32 val = cpuid_feature_extract_unsigned_field(pfr0, 
> ID_AA64PFR0_EL0_SHIFT);
>  
> - return val == ID_AA64PFR0_EL0_32BIT_64BIT;
> + return val == ID_AA64PFR0_ELx_32BIT_64BIT;
>  }
>  
>  static inline bool id_aa64pfr0_sve(u64 pfr0)
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 326f49e7bd42..0b773037251c 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -784,14 +784,13 @@
>  #define ID_AA64PFR0_AMU  0x1
>  #define ID_AA64PFR0_SVE  0x1
>  #define ID_AA64PFR0_RAS_V1   0x1
> +#define ID_AA64PFR0_RAS_ANY  0xf

This doesn't correspond to an architectural definition afaict: the manual
says that any values other than 0, 1 or 2 are "reserved" so we should avoid
defining our own definitions here.

Will


Re: [PATCH v3 09/15] KVM: arm64: Add config register bit definitions

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:40PM +0100, Fuad Tabba wrote:
> Add hardware configuration register bit definitions for HCR_EL2
> and MDCR_EL2. Future patches toggle these hyp configuration
> register bits to trap on certain accesses.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/include/asm/kvm_arm.h | 22 ++
>  1 file changed, 22 insertions(+)

I checked all of these against the Arm ARM and they look correct to me:

Acked-by: Will Deacon 

Will


Re: [PATCH v2 1/2] KVM: Refactor kvm_arch_vcpu_fault() to return a struct page pointer

2021-08-12 Thread David Hildenbrand

On 12.08.21 06:02, Hou Wenlong wrote:

From: Sean Christopherson 

Refactor kvm_arch_vcpu_fault() to return 'struct page *' instead of
'vm_fault_t' to simplify architecture specific implementations that do
more than return SIGBUS.  Currently this only applies to s390, but a
future patch will move x86's pio_data handling into x86 where it belongs.

No functional changed intended.

Cc: Hou Wenlong 
Signed-off-by: Sean Christopherson 
Signed-off-by: Hou Wenlong 
---
  arch/arm64/kvm/arm.c   |  4 ++--
  arch/mips/kvm/mips.c   |  4 ++--
  arch/powerpc/kvm/powerpc.c |  4 ++--
  arch/s390/kvm/kvm-s390.c   | 12 
  arch/x86/kvm/x86.c |  4 ++--
  include/linux/kvm_host.h   |  2 +-
  virt/kvm/kvm_main.c|  5 -
  7 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..83f4ffe3e4f2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -161,9 +161,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
return ret;
  }
  
-vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)

+struct page *kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
  {
-   return VM_FAULT_SIGBUS;
+   return NULL;
  }
  
  
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c

index af9dd029a4e1..ae79874e6fd2 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -1053,9 +1053,9 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, 
struct kvm_fpu *fpu)
return -ENOIOCTLCMD;
  }
  
-vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)

+struct page *kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
  {
-   return VM_FAULT_SIGBUS;
+   return NULL;
  }
  
  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)

diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index be33b5321a76..b9c21f9ab784 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -2090,9 +2090,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return r;
  }
  
-vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)

+struct page *kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
  {
-   return VM_FAULT_SIGBUS;
+   return NULL;
  }
  
  static int kvm_vm_ioctl_get_pvinfo(struct kvm_ppc_pvinfo *pvinfo)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 02574d7b3612..e1b69833e228 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4979,17 +4979,13 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return r;
  }
  
-vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)

+struct page *kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
  {
  #ifdef CONFIG_KVM_S390_UCONTROL
-   if ((vmf->pgoff == KVM_S390_SIE_PAGE_OFFSET)
-&& (kvm_is_ucontrol(vcpu->kvm))) {
-   vmf->page = virt_to_page(vcpu->arch.sie_block);
-   get_page(vmf->page);
-   return 0;
-   }
+   if (vmf->pgoff == KVM_S390_SIE_PAGE_OFFSET && 
kvm_is_ucontrol(vcpu->kvm))
+   return virt_to_page(vcpu->arch.sie_block);
  #endif
-   return VM_FAULT_SIGBUS;
+   return NULL;
  }
  
  /* Section: memory related */

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3cedc7cc132a..1e3bbe5cd33a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5347,9 +5347,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return r;
  }
  
-vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)

+struct page *kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
  {
-   return VM_FAULT_SIGBUS;
+   return NULL;
  }
  
  static int kvm_vm_ioctl_set_tss_addr(struct kvm *kvm, unsigned long addr)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 492d183dd7d0..a949de534722 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -995,7 +995,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg);
  long kvm_arch_vcpu_ioctl(struct file *filp,
 unsigned int ioctl, unsigned long arg);
-vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);
+struct page *kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);
  
  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext);
  
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c

index 30d322519253..f7d21418971b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3448,7 +3448,10 @@ static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf)
&vcpu->dirty_ring,
vmf->pgoff - KVM_DIRTY_LOG_PAGE_OFFSET);
else
-   return kvm_arch_vcpu_fault(vcpu, vmf);
+   page = kvm_arch_vcpu_fault(vcpu, vmf);
+   if (!page)
+   

Re: [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:42PM +0100, Fuad Tabba wrote:
> Add trap handlers for protected VMs. These are mainly for Sys64
> and debug traps.
> 
> No functional change intended as these are not hooked in yet to
> the guest exit handlers introduced earlier. So even when trapping
> is triggered, the exit handlers would let the host handle it, as
> before.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/include/asm/kvm_fixed_config.h | 178 +
>  arch/arm64/include/asm/kvm_host.h |   2 +
>  arch/arm64/include/asm/kvm_hyp.h  |   3 +
>  arch/arm64/kvm/Makefile   |   2 +-
>  arch/arm64/kvm/arm.c  |  11 +
>  arch/arm64/kvm/hyp/nvhe/Makefile  |   2 +-
>  arch/arm64/kvm/hyp/nvhe/sys_regs.c| 443 ++
>  arch/arm64/kvm/pkvm.c | 183 +
>  8 files changed, 822 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
>  create mode 100644 arch/arm64/kvm/pkvm.c
> 
> diff --git a/arch/arm64/include/asm/kvm_fixed_config.h 
> b/arch/arm64/include/asm/kvm_fixed_config.h
> new file mode 100644
> index ..b39a5de2c4b9
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_fixed_config.h
> @@ -0,0 +1,178 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2021 Google LLC
> + * Author: Fuad Tabba 
> + */
> +
> +#ifndef __ARM64_KVM_FIXED_CONFIG_H__
> +#define __ARM64_KVM_FIXED_CONFIG_H__
> +
> +#include 
> +
> +/*
> + * This file contains definitions for features to be allowed or restricted for
> + * guest virtual machines as a baseline, depending on what mode KVM is running
> + * in and on the type of guest is running.

s/is running/that is running/

> + *
> + * The features are represented as the highest allowed value for a feature in
> + * the feature id registers. If the field is set to all ones (i.e., 0b1111),
> + * then it's only restricted by what the system allows. If the feature is set
> + * to another value, then that value would be the maximum value allowed and
> + * supported in pKVM, even if the system supports a higher value.

Given that some fields are signed whereas others are unsigned, I think the
wording could be a bit tighter here when it refers to "maximum".

> + *
> + * Some features are forced to a certain value, in which case a SET bitmap is
> + * used to force these values.
> + */
> +
> +
> +/*
> + * Allowed features for protected guests (Protected KVM)
> + *
> + * The approach taken here is to allow features that are:
> + * - needed by common Linux distributions (e.g., flooating point)

s/flooating/floating

> + * - are trivial, e.g., supporting the feature doesn't introduce or require the
> + * tracking of additional state

... in KVM.

> + * - not trapable

s/not trapable/cannot be trapped/

> + */
> +
> +/*
> + * - Floating-point and Advanced SIMD:
> + *   Don't require much support other than maintaining the context, which KVM
> + *   already has.

I'd rework this sentence. We have to support fpsimd because Linux guests
rely on it.

> + * - AArch64 guests only (no support for AArch32 guests):
> + *   Simplify support in case of asymmetric AArch32 systems.

I don't think asymmetric systems come into this really; AArch32 on its
own adds lots of complexity in trap handling, emulation, condition codes
etc. Restricting guests to AArch64 means we don't have to worry about the
AArch32 exception model or emulation of 32-bit instructions.

> + * - RAS (v1)
> + *   v1 doesn't require much additional support, but later versions do.

Be more specific?

> + * - Data Independent Timing
> + *   Trivial
> + * Remaining features are not supported either because they require too much
> + * support from KVM, or risk leaking guest data.

I think we should drop this sentence -- it makes it sounds like we can't
be arsed :)

> + */
> +#define PVM_ID_AA64PFR0_ALLOW (\
> + FEATURE(ID_AA64PFR0_FP) | \
> + FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> + FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> + FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> + FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> + FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \

I think having the FIELD_PREP entries in the ALLOW mask is quite confusing
here -- naively you would expect to be able to bitwise-and the host register
value with the ALLOW mask and get the sanitised version back, but with these
here you have to go field-by-field to compute the common value.

So perhaps move those into a PVM_ID_AA64PFR0_RESTRICT mask or something?
Then pvm_access_id_aa64pfr0() will become a little easier to read, I think.

> + FEATURE(ID_AA64PFR0_ASIMD) | \
> + FEATURE(ID_AA64PFR0_DIT) \
> + )
> +
> +/*
> + * - Branch Target Identification
> + * - Speculative Store Byp

Re: [PATCH v3 12/15] KVM: arm64: Move sanitized copies of CPU features

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:43PM +0100, Fuad Tabba wrote:
> Move the sanitized copies of the CPU feature registers to the
> recently created sys_regs.c. This consolidates all copies in a
> more relevant file.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 --
>  arch/arm64/kvm/hyp/nvhe/sys_regs.c| 2 ++
>  2 files changed, 2 insertions(+), 6 deletions(-)

Acked-by: Will Deacon 

Will


Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu

2021-08-12 Thread Will Deacon
Hey Fuad,

On Thu, Aug 12, 2021 at 11:28:50AM +0200, Fuad Tabba wrote:
> On Thu, Aug 12, 2021 at 10:46 AM Will Deacon  wrote:
> >
> > On Wed, Jul 21, 2021 at 08:37:21AM +0100, Fuad Tabba wrote:
> > > On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones  wrote:
> > > >
> > > > On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > > > > On deactivating traps, restore the value of mdcr_el2 from the
> > > > > newly created and preserved host value vcpu context, rather than
> > > > > directly reading the hardware register.
> > > > >
> > > > > Up until and including this patch the two values are the same,
> > > > > i.e., the hardware register and the vcpu one. A future patch will
> > > > > be changing the value of mdcr_el2 on activating traps, and this
> > > > > ensures that its value will be restored.
> > > > >
> > > > > No functional change intended.
> > > >
> > > > I'm probably missing something, but I can't convince myself that the 
> > > > host
> > > > will end up with the same mdcr_el2 value after deactivating traps after
> > > > this patch as before. We clearly now restore whatever we had when
> > > > activating traps (presumably whatever we configured at init_el2_state
> > > > time), but is that equivalent to what we had before with the masking and
> > > > ORing that this patch drops?
> > >
> > > You're right. I thought that these were actually being initialized to
> > > the same values, but having a closer look at the code the mdcr values
> > > are not the same as pre-patch. I will fix this.
> >
> > Can you elaborate on the issue here, please? I was just looking at this
> > but aren't you now relying on __init_el2_debug to configure this, which
> > should be fine?
> 
> I *think* that it should be fine, but as Drew pointed out, the host
> does not end up with the same mdcr_el2 value after the deactivation in
> this patch as it did after deactivation before this patch. In my v4
> (not sent out yet), I have fixed it to ensure that the host does end
> up with the same value as the one before this patch. That should make
> it easier to check that there's no functional change.
> 
> I'll look into it further, and if I can convince myself that there
> aren't any issues and that this patch makes the code cleaner, I will
> add it as a separate patch instead to make reviewing easier.

Cheers. I think the new code might actually be better, as things like
MDCR_EL2.E2PB are RES0 if SPE is not implemented. The init code takes care
to set those only if it probes SPE first, whereas the code you're removing
doesn't seem to check that.

Will


Re: [PATCH v3 13/15] KVM: arm64: Trap access to pVM restricted features

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:44PM +0100, Fuad Tabba wrote:
> Trap accesses to restricted features for VMs running in protected
> mode.
> 
> Access to feature registers are emulated, and only supported
> features are exposed to protected VMs.
> 
> Accesses to restricted registers as well as restricted
> instructions are trapped, and an undefined exception is injected
> into the protected guests, i.e., with EC = 0x0 (unknown reason).
> This EC is the one used, according to the Arm Architecture
> Reference Manual, for unallocated or undefined system registers
> or instructions.
> 
> Only affects the functionality of protected VMs. Otherwise,
> should not affect non-protected VMs when KVM is running in
> protected mode.
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h |  3 ++
>  arch/arm64/kvm/hyp/nvhe/switch.c| 52 ++---
>  2 files changed, 41 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h 
> b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 5a2b89b96c67..8431f1514280 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -33,6 +33,9 @@
>  extern struct exception_table_entry __start___kvm_ex_table;
>  extern struct exception_table_entry __stop___kvm_ex_table;
>  
> +int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu);
> +int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu);
> +
>  /* Check whether the FP regs were dirtied while in the host-side run loop: */
>  static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
>  {
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c 
> b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 36da423006bd..99a90094 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -158,30 +158,54 @@ static void __pmu_switch_to_host(struct kvm_cpu_context 
> *host_ctxt)
>   write_sysreg(pmu->events_host, pmcntenset_el0);
>  }
>  
> +/**
> + * Handle system register accesses for protected VMs.
> + *
> + * Return 1 if handled, or 0 if not.
> + */
> +static int handle_pvm_sys64(struct kvm_vcpu *vcpu)
> +{
> + return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
> +  kvm_handle_pvm_sys64(vcpu) :
> +  0;
> +}

Why don't we move the kvm_vm_is_protected() check into
kvm_get_hyp_exit_handler() so we can avoid adding it to each handler
instead?

Either way:

Acked-by: Will Deacon 

Will


Re: [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:45PM +0100, Fuad Tabba wrote:
> Protected KVM does not support protected AArch32 guests. However,
> it is possible for the guest to force run AArch32, potentially
> causing problems. Add an extra check so that if the hypervisor
> catches the guest doing that, it can prevent the guest from
> running again by resetting vcpu->arch.target and returning
> ARM_EXCEPTION_IL.
> 
> Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> AArch32 systems")
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 24 
>  1 file changed, 24 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h 
> b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 8431f1514280..f09343e15a80 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -23,6 +23,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu 
> *vcpu, u64 *exit_code)
>   write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
>   }
>  
> + /*
> +  * Protected VMs might not be allowed to run in AArch32. The check below
> +  * is based on the one in kvm_arch_vcpu_ioctl_run().
> +  * The ARMv8 architecture doesn't give the hypervisor a mechanism to
> +  * prevent a guest from dropping to AArch32 EL0 if implemented by the
> +  * CPU. If the hypervisor spots a guest in such a state ensure it is
> +  * handled, and don't trust the host to spot or fix it.
> +  */
> + if (unlikely(is_nvhe_hyp_code() &&
> +  kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
> +  FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
> +PVM_ID_AA64PFR0_ALLOW) <
> +  ID_AA64PFR0_ELx_32BIT_64BIT &&
> +  vcpu_mode_is_32bit(vcpu))) {
> + /*
> +  * As we have caught the guest red-handed, decide that it isn't
> +  * fit for purpose anymore by making the vcpu invalid.
> +  */
> + vcpu->arch.target = -1;
> + *exit_code = ARM_EXCEPTION_IL;
> + goto exit;
> + }

Would this be better off inside the nvhe-specific run loop? Seems like we
could elide fixup_guest_exit() altogether if we've detected that we're in
AArch32 state when we shouldn't be and it would keep the code off the shared
path.

Will


Re: [PATCH v3 15/15] KVM: arm64: Restrict protected VM capabilities

2021-08-12 Thread Will Deacon
On Mon, Jul 19, 2021 at 05:03:46PM +0100, Fuad Tabba wrote:
> Restrict protected VM capabilities based on the
> fixed-configuration for protected VMs.
> 
> No functional change intended in current KVM-supported modes
> (nVHE, VHE).
> 
> Signed-off-by: Fuad Tabba 
> ---
>  arch/arm64/include/asm/kvm_fixed_config.h | 10 
>  arch/arm64/kvm/arm.c  | 63 ++-
>  arch/arm64/kvm/pkvm.c | 30 +++
>  3 files changed, 102 insertions(+), 1 deletion(-)

This patch looks good to me, but I'd be inclined to add this to the user-ABI
series given that it's really all user-facing and, without a functional
kvm_vm_is_protected(), isn't serving much purpose.

Cheers,

Will


Re: [PATCH v3 08/15] KVM: arm64: Add feature register flag definitions

2021-08-12 Thread Fuad Tabba
Hi Will,

On Thu, Aug 12, 2021 at 10:59 AM Will Deacon  wrote:
>
> On Mon, Jul 19, 2021 at 05:03:39PM +0100, Fuad Tabba wrote:
> > Add feature register flag definitions to clarify which features
> > might be supported.
> >
> > Consolidate the various ID_AA64PFR0_ELx flags for all ELs.
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba 
> > ---
> >  arch/arm64/include/asm/cpufeature.h |  4 ++--
> >  arch/arm64/include/asm/sysreg.h | 12 
> >  arch/arm64/kernel/cpufeature.c  |  8 
> >  3 files changed, 14 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/cpufeature.h 
> > b/arch/arm64/include/asm/cpufeature.h
> > index 9bb9d11750d7..b7d9bb17908d 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -602,14 +602,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
> >  {
> >   u32 val = cpuid_feature_extract_unsigned_field(pfr0, 
> > ID_AA64PFR0_EL1_SHIFT);
> >
> > - return val == ID_AA64PFR0_EL1_32BIT_64BIT;
> > + return val == ID_AA64PFR0_ELx_32BIT_64BIT;
> >  }
> >
> >  static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
> >  {
> >   u32 val = cpuid_feature_extract_unsigned_field(pfr0, 
> > ID_AA64PFR0_EL0_SHIFT);
> >
> > - return val == ID_AA64PFR0_EL0_32BIT_64BIT;
> > + return val == ID_AA64PFR0_ELx_32BIT_64BIT;
> >  }
> >
> >  static inline bool id_aa64pfr0_sve(u64 pfr0)
> > diff --git a/arch/arm64/include/asm/sysreg.h 
> > b/arch/arm64/include/asm/sysreg.h
> > index 326f49e7bd42..0b773037251c 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -784,14 +784,13 @@
> >  #define ID_AA64PFR0_AMU  0x1
> >  #define ID_AA64PFR0_SVE  0x1
> >  #define ID_AA64PFR0_RAS_V1   0x1
> > +#define ID_AA64PFR0_RAS_ANY  0xf
>
> This doesn't correspond to an architectural definition afaict: the manual
> says that any values other than 0, 1 or 2 are "reserved" so we should avoid
> defining our own definitions here.

I'll add an ID_AA64PFR0_RAS_V2 definition in that case and use it for
the checking later. That would achieve the same goal and I wouldn't be
adding definitions to the reserved area.

Cheers,
/fuad


Re: [PATCH v3 01/15] KVM: arm64: placeholder to check if VM is protected

2021-08-12 Thread Fuad Tabba
Hi Will,


On Thu, Aug 12, 2021 at 10:59 AM Will Deacon  wrote:
>
> On Mon, Jul 19, 2021 at 05:03:32PM +0100, Fuad Tabba wrote:
> > Add a function to check whether a VM is protected (under pKVM).
> > Since the creation of protected VMs isn't enabled yet, this is a
> > placeholder that always returns false. The intention is for this
> > to become a check for protected VMs in the future (see Will's RFC
> > [*]).
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba 
> >
> > [*] https://lore.kernel.org/kvmarm/20210603183347.1695-1-w...@kernel.org/
>
> You can make this a Link: tag.

Of course. Thanks!
/fuad
> Anyway, I think it makes lots of sense to decouple this from the user-ABI
> series:
>
> Acked-by: Will Deacon 
>
> Will


Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu

2021-08-12 Thread Fuad Tabba
Hi Will,

On Thu, Aug 12, 2021 at 10:46 AM Will Deacon  wrote:
>
> On Wed, Jul 21, 2021 at 08:37:21AM +0100, Fuad Tabba wrote:
> > On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones  wrote:
> > >
> > > On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > > > On deactivating traps, restore the value of mdcr_el2 from the
> > > > newly created and preserved host value vcpu context, rather than
> > > > directly reading the hardware register.
> > > >
> > > > Up until and including this patch the two values are the same,
> > > > i.e., the hardware register and the vcpu one. A future patch will
> > > > be changing the value of mdcr_el2 on activating traps, and this
> > > > ensures that its value will be restored.
> > > >
> > > > No functional change intended.
> > >
> > > I'm probably missing something, but I can't convince myself that the host
> > > will end up with the same mdcr_el2 value after deactivating traps after
> > > this patch as before. We clearly now restore whatever we had when
> > > activating traps (presumably whatever we configured at init_el2_state
> > > time), but is that equivalent to what we had before with the masking and
> > > ORing that this patch drops?
> >
> > You're right. I thought that these were actually being initialized to
> > the same values, but having a closer look at the code the mdcr values
> > are not the same as pre-patch. I will fix this.
>
> Can you elaborate on the issue here, please? I was just looking at this
> but aren't you now relying on __init_el2_debug to configure this, which
> should be fine?

I *think* that it should be fine, but as Drew pointed out, the host
does not end up with the same mdcr_el2 value after the deactivation in
this patch as it did after deactivation before this patch. In my v4
(not sent out yet), I have fixed it to ensure that the host does end
up with the same value as the one before this patch. That should make
it easier to check that there's no functional change.

I'll look into it further, and if I can convince myself that there
aren't any issues and that this patch makes the code cleaner, I will
add it as a separate patch instead to make reviewing easier.

Thanks,
/fuad

> Will


Re: [PATCH v4 0/2] memblock: make memblock_find_in_range method private

2021-08-12 Thread Guenter Roeck
Mike,

On Thu, Aug 12, 2021 at 09:59:05AM +0300, Mike Rapoport wrote:
> From: Mike Rapoport 
> 
> Hi,
> 
> This is v4 of "memblock: make memblock_find_in_range method private" patch
> that essentially replaces memblock_find_in_range() + memblock_reserve()
> calls with equivalent calls to memblock_phys_alloc() and prevents usage of
> memblock_find_in_range() outside memblock itself.
> 
> The patch uncovered an issue with top down memory mapping on x86 and this
> version has a preparation patch that addresses this issue.
> 
> Guenter, I didn't add your Tested-by because the patch that addresses the
> crashes differs from the one you've tested.
> 

Unfortunately I am still seeing crashes.

1G of memory, x86_64:

[0.00] efi: EFI v2.70 by EDK II
[0.00] efi: SMBIOS=0x3fbcc000 ACPI=0x3fbfa000 ACPI 2.0=0x3fbfa014 MEMATTR=0x3f229018
[0.00] SMBIOS 2.8 present.
[0.00] DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[0.00] tsc: Fast TSC calibration using PIT
[0.00] tsc: Detected 3792.807 MHz processor
[0.001816] last_pfn = 0x3ff50 max_arch_pfn = 0x4
[0.002595] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[0.022989] Using GB pages for direct mapping
[0.025601] Kernel panic - not syncing: alloc_low_pages: can not alloc memory
[0.025910] CPU: 0 PID: 0 Comm: swapper Not tainted 5.14.0-rc5+ #1
[0.026133] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[0.026462] Call Trace:
[0.026942]  ? dump_stack_lvl+0x57/0x7d
[0.027475]  ? panic+0x10a/0x2de
[0.027600]  ? alloc_low_pages+0x117/0x156
[0.027704]  ? phys_pmd_init+0x234/0x342
[0.027817]  ? phys_pud_init+0x171/0x337
[0.027926]  ? __kernel_physical_mapping_init+0xec/0x276
[0.028062]  ? init_memory_mapping+0x1ea/0x2ca
[0.028199]  ? init_range_memory_mapping+0xdf/0x12e
[0.028326]  ? init_mem_mapping+0x1e9/0x261
[0.028432]  ? setup_arch+0x5ff/0xb6d
[0.028535]  ? start_kernel+0x71/0x6b4
[0.028636]  ? secondary_startup_64_no_verify+0xc2/0xcb
[0.029479] ---[ end Kernel panic - not syncing: alloc_low_pages: can not alloc memory ]---

Complete log:
https://kerneltests.org/builders/qemu-x86_64-testing/builds/67/steps/qemubuildcommand/logs/stdio

x86, default memory size, all efi boots affected:

[0.025676] BUG: unable to handle page fault for address: cf3c1000
[0.025932] #PF: supervisor write access in kernel mode
[0.026022] #PF: error_code(0x0002) - not-present page
[0.026122] *pde = 
[0.026308] Oops: 0002 [#1] SMP
[0.026468] CPU: 0 PID: 0 Comm: swapper Not tainted 5.14.0-rc5+ #1
[0.026616] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[0.026848] EIP: alloc_low_pages+0xa0/0x13f
[0.027355] Code: 00 74 77 a3 cc ba 62 ca 8b 45 f0 8d 90 00 00 0c 00 31 c0 c1 e2 0c 85 f6 74 16 89 d7 b9 00 04 00 00 83 c3 01 81 c2 00 10 00 00  ab 39 f3 75 ea 8b 45 f0 8d 65 f4 5b 5e c1 e0 0c 5f 5d 2d 00 00
[0.027802] EAX:  EBX: 0001 ECX: 0400 EDX: cf3c2000
[0.027903] ESI: 0001 EDI: cf3c1000 EBP: ca389e28 ESP: ca389e18
[0.028006] DS: 007b ES: 007b FS: 00d8 GS:  SS: 0068 EFLAGS: 00200086
[0.028125] CR0: 80050033 CR2: cf3c1000 CR3: 0a69f000 CR4: 00040690
[0.028287] Call Trace:
[0.028603]  one_page_table_init+0x15/0x6d
[0.028751]  kernel_physical_mapping_init+0xdd/0x19b
[0.028839]  init_memory_mapping+0x146/0x1f1
[0.028921]  init_range_memory_mapping+0xfe/0x144
[0.029001]  init_mem_mapping+0x145/0x185
[0.029066]  setup_arch+0x5ff/0xa75
[0.029128]  ? vprintk+0x4c/0x100
[0.029187]  start_kernel+0x66/0x5ba
[0.029246]  ? set_intr_gate+0x42/0x55
[0.029306]  ? early_idt_handler_common+0x44/0x44
[0.029380]  i386_start_kernel+0x43/0x45
[0.029441]  startup_32_smp+0x161/0x164
[0.029567] Modules linked in:
[0.029776] CR2: cf3c1000
[0.030406] random: get_random_bytes called from oops_exit+0x35/0x60 with crng_init=0
[0.031121] ---[ end trace 544692cd05e387e2 ]---
[0.031357] EIP: alloc_low_pages+0xa0/0x13f
[0.031427] Code: 00 74 77 a3 cc ba 62 ca 8b 45 f0 8d 90 00 00 0c 00 31 c0 c1 e2 0c 85 f6 74 16 89 d7 b9 00 04 00 00 83 c3 01 81 c2 00 10 00 00  ab 39 f3 75 ea 8b 45 f0 8d 65 f4 5b 5e c1 e0 0c 5f 5d 2d 00 00
[0.031698] EAX:  EBX: 0001 ECX: 0400 EDX: cf3c2000
[0.031787] ESI: 0001 EDI: cf3c1000 EBP: ca389e28 ESP: ca389e18
[0.031876] DS: 007b ES: 007b FS: 00d8 GS:  SS: 0068 EFLAGS: 00200086
[0.031972] CR0: 80050033 CR2: cf3c1000 CR3: 0a69f000 CR4: 00040690
[0.032198] Kernel panic - not syncing: Attempted to kill the idle task!
[0.032521] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]---

Complete log: https://kerneltests.org/builders/qemu-x86-testing/builds/65/steps/qemubuildcommand/logs/stdio

Guenter

Re: [PATCH v2 1/2] KVM: Refactor kvm_arch_vcpu_fault() to return a struct page pointer

2021-08-12 Thread Paolo Bonzini

On 12/08/21 11:04, David Hildenbrand wrote:
> Reviewed-by: David Hildenbrand 
>
> But at the same time I wonder if we should just get rid of
> CONFIG_KVM_S390_UCONTROL and consequently kvm_arch_vcpu_fault().
>
> In practice, CONFIG_KVM_S390_UCONTROL is never enabled in any reasonable
> kernel build and consequently it's never tested; further, exposing the
> sie_block to user space allows user space to generate random SIE
> validity intercepts.
>
> CONFIG_KVM_S390_UCONTROL feels like something that should just be
> maintained out of tree by someone who really needs to hack deep into hw
> virtualization for testing purposes etc.

I have no preference either way.  It should definitely have selftests,
but in x86 land there are some features that are not covered by QEMU and
were nevertheless accepted upstream with selftests.

Paolo



Re: [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits

2021-08-12 Thread Fuad Tabba
Hi Will,


On Thu, Aug 12, 2021 at 11:57 AM Will Deacon  wrote:
>
> On Mon, Jul 19, 2021 at 05:03:45PM +0100, Fuad Tabba wrote:
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force itself to run in AArch32,
> > potentially causing problems. Add an extra check so that if the hypervisor
> > catches the guest doing that, it can prevent the guest from
> > running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba 
> > ---
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 24 
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 8431f1514280..f09343e15a80 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -23,6 +23,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> >  #include 
> > @@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> >   write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
> >   }
> >
> > + /*
> > +  * Protected VMs might not be allowed to run in AArch32. The check below
> > +  * is based on the one in kvm_arch_vcpu_ioctl_run().
> > +  * The ARMv8 architecture doesn't give the hypervisor a mechanism to
> > +  * prevent a guest from dropping to AArch32 EL0 if implemented by the
> > +  * CPU. If the hypervisor spots a guest in such a state ensure it is
> > +  * handled, and don't trust the host to spot or fix it.
> > +  */
> > + if (unlikely(is_nvhe_hyp_code() &&
> > +  kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
> > +  FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
> > +PVM_ID_AA64PFR0_ALLOW) <
> > +  ID_AA64PFR0_ELx_32BIT_64BIT &&
> > +  vcpu_mode_is_32bit(vcpu))) {
> > + /*
> > +  * As we have caught the guest red-handed, decide that it isn't
> > +  * fit for purpose anymore by making the vcpu invalid.
> > +  */
> > + vcpu->arch.target = -1;
> > + *exit_code = ARM_EXCEPTION_IL;
> > + goto exit;
> > + }
>
> Would this be better off inside the nvhe-specific run loop? Seems like we
> could elide fixup_guest_exit() altogether if we've detect that we're in
> AArch32 state when we shouldn't be and it would keep the code off the shared
> path.

Yes, it makes more sense and would result in cleaner code to have it
there, especially in the future where there's likely to be a separate
run loop for protected VMs. I'll move it.

Thanks,
/fuad
> Will


[GIT PULL] KVM/arm64 fixes for 5.14, take #2

2021-08-12 Thread Marc Zyngier
Hi Paolo,

Here's the second batch of KVM/arm64 fixes for 5.14, and hopefully the
last for this cycle. We have another MTE fix from Steven, but also
have an off-by-one bug squashed by David in the protected memory path.

Please pull,

M.

The following changes since commit 5cf17746b302aa32a4f200cc6ce38865bfe4cf94:

  KVM: arm64: selftests: get-reg-list: actually enable pmu regs in pmu sublist 
(2021-07-14 11:55:18 +0100)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git 
tags/kvmarm-fixes-5.14-2

for you to fetch changes up to c4d7c51845af9542d42cd18a25c570583abf2768:

  KVM: arm64: Fix race when enabling KVM_ARM_CAP_MTE (2021-07-29 17:34:01 +0100)


KVM/arm64 fixes for 5.14, take #2

- Plug race between enabling MTE and creating vcpus
- Fix off-by-one bug when checking whether an address range is RAM


David Brazdil (1):
  KVM: arm64: Fix off-by-one in range_is_memory

Steven Price (1):
  KVM: arm64: Fix race when enabling KVM_ARM_CAP_MTE

 arch/arm64/kvm/arm.c  | 12 
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |  2 +-
 2 files changed, 9 insertions(+), 5 deletions(-)