[PATCH] arm64/kvm: Fix zapping stage2 page table wrongly

2020-08-21 Thread Gavin Shan
Depending on the kernel configuration, PUD_SIZE could be equal to
PMD_SIZE. For example, both of them are 512MB with the following
kernel configuration. In this case, both PUD and PMD are folded
to PGD.

   CONFIG_ARM64_64K_PAGES=y
   CONFIG_ARM64_VA_BITS=42
   CONFIG_PGTABLE_LEVELS=2

With the above configuration, a stage2 PUD is used to back the 512MB
huge page when the stage2 mapping is built. During the mapping, the
PUD and its subordinate levels of page table entries are unmapped in
stage2_set_pud_huge() if the PUD is present but not mapped as a huge
page. Unfortunately, @addr isn't aligned to S2_PUD_SIZE, so the wrong
page table entries are zapped. As a result, the PUD's present bit can
never be cleared successfully, leading to an infinite loop in
stage2_set_pud_huge().

Fix the issue by checking against S2_{PUD, PMD}_SIZE instead of
{PUD, PMD}_SIZE to determine whether a stage2 PUD or PMD is used to
back the huge page. For this particular case, a stage2 PMD entry
should be used to back the 512MB huge page via stage2_set_pmd_huge().

Fixes: 3c3736cd32bf ("KVM: arm/arm64: Fix handling of stage2 huge mappings")
Cc: sta...@vger.kernel.org # v5.1+
Reported-by: Eric Auger 
Signed-off-by: Gavin Shan 
Tested-by: Eric Auger 
Reviewed-by: Eric Auger 
---
 arch/arm64/kvm/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0121ef2c7c8d..deb1819ba9e2 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1964,7 +1964,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
(fault_status == FSC_PERM &&
 stage2_is_exec(mmu, fault_ipa, vma_pagesize));
 
-   if (vma_pagesize == PUD_SIZE) {
+   if (vma_pagesize == S2_PUD_SIZE) {
pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
 
new_pud = kvm_pud_mkhuge(new_pud);
@@ -1975,7 +1975,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
new_pud = kvm_s2pud_mkexec(new_pud);
 
		ret = stage2_set_pud_huge(mmu, memcache, fault_ipa, &new_pud);
-   } else if (vma_pagesize == PMD_SIZE) {
+   } else if (vma_pagesize == S2_PMD_SIZE) {
pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
 
new_pmd = kvm_pmd_mkhuge(new_pmd);
-- 
2.23.0

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v4 0/3] Cortex-A77 erratum 1508412 workaround

2020-08-21 Thread Catalin Marinas
On Mon, 3 Aug 2020 13:31:24 -0600, Rob Herring wrote:
> This series implements the work-around for Cortex-A77 erratum 1508412.
> KVM guests which don't implement the workaround can still deadlock the
> system. This is also the case with the existing Cortex-A57 erratum 832075,
> so we add a warning message if an erratum can cause deadlock.
> 
> Changes detailed in patches.
> 
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/3] KVM: arm64: Print warning when cpu erratum can cause guests to deadlock
  https://git.kernel.org/arm64/c/abf532cceaca

-- 
Catalin



Re: [PATCH v4 3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

2020-08-21 Thread Catalin Marinas
On Fri, Aug 21, 2020 at 06:02:39PM +0100, Marc Zyngier wrote:
> On 2020-08-21 15:05, Catalin Marinas wrote:
> > On Fri, Aug 21, 2020 at 01:45:40PM +0100, Marc Zyngier wrote:
> > > On 2020-08-21 13:26, Catalin Marinas wrote:
> > > > On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:
> > > > > On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> > > > > > On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> > > > > > > @@ -979,6 +980,14 @@
> > > > > > >  	write_sysreg(__scs_new, sysreg);			\
> > > > > > >  } while (0)
> > > > > > >
> > > > > > > +#define read_sysreg_par() ({				\
> > > > > > > +	u64 par;						\
> > > > > > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > > > > > +	par = read_sysreg(par_el1);				\
> > > > > > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > > > > > +})
> > > > > >
> > > > > > I was about to queue this up but one more point to clarify: can we 
> > > > > > get
> > > > > > an interrupt at either side of the PAR_EL1 read and the handler do a
> > > > > > device read, triggering the erratum? Do we need a DMB at exception
> > > > > > entry/return?
> > > > >
> > > > > Disabling irqs around the PAR access would be simpler, I think
> > > > > (assuming
> > > > > this is needed).
> > > >
> > > > This wouldn't work if it interrupts a guest.
> > > 
> > > If we take an interrupt either side of the PAR_EL1 read and that we
> > > fully exit, the saving of PAR_EL1 on the way out solves the problem.
> > > 
> > > If we don't fully exit, but instead reenter the guest immediately
> > > (fixup_guest_exit() returns true), we'd need a DMB at that point,
> > > at least because of the GICv2 proxying code which performs device
> > > accesses on the guest's behalf.
> > 
> > If you are ok with the diff below, I can fold it in:
> > 
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index ca88ea416176..8770cf7ccd42 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -420,7 +420,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> > if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
> > kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
> > handle_tx2_tvm(vcpu))
> > -   return true;
> > +   goto guest;
> > 
> > /*
> >  * We trap the first access to the FP/SIMD to save the host context
> > @@ -430,13 +430,13 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> >  * Similarly for trapped SVE accesses.
> >  */
> > if (__hyp_handle_fpsimd(vcpu))
> > -   return true;
> > +   goto guest;
> > 
> > if (__hyp_handle_ptrauth(vcpu))
> > -   return true;
> > +   goto guest;
> > 
> > if (!__populate_fault_info(vcpu))
> > -   return true;
> > +   goto guest;
> > 
> > 	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
> > bool valid;
> > @@ -451,7 +451,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> > int ret = __vgic_v2_perform_cpuif_access(vcpu);
> > 
> > if (ret == 1)
> > -   return true;
> > +   goto guest;
> > 
> > /* Promote an illegal access to an SError.*/
> > if (ret == -1)
> > @@ -467,12 +467,17 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> > int ret = __vgic_v3_perform_cpuif_access(vcpu);
> > 
> > if (ret == 1)
> > -   return true;
> > +   goto guest;
> > }
> > 
> >  exit:
> > /* Return to the host kernel and handle the exit */
> > return false;
> > +
> > +guest:
> > +   /* Re-enter the guest */
> > +   asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
> > +   return true;
> >  }
> > 
> >  static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu)
> 
> Looks good to me!

Thanks Marc. Since it needs the local_irq_save() around the PAR_EL1
access in read_sysreg_par(), I'll wait for Rob to update the patches.
Rob also asked the hardware guys for clarification on this scenario, so
let's see what they reply.

-- 
Catalin


Re: [PATCH stable v4.9 v2] arm64: entry: Place an SB sequence following an ERET instruction

2020-08-21 Thread Florian Fainelli
On 8/21/20 9:03 AM, Will Deacon wrote:
> On Fri, Aug 07, 2020 at 03:14:29PM +0200, Greg KH wrote:
>> On Thu, Aug 06, 2020 at 01:00:54PM -0700, Florian Fainelli wrote:
>>>
>>>
>>> On 7/20/2020 11:26 AM, Florian Fainelli wrote:
 On 7/20/20 6:04 AM, Greg KH wrote:
> On Thu, Jul 09, 2020 at 12:50:23PM -0700, Florian Fainelli wrote:
>> From: Will Deacon 
>>
>> commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream
>>
>> Some CPUs can speculate past an ERET instruction and potentially perform
>> speculative accesses to memory before processing the exception return.
>> Since the register state is often controlled by a lower privilege level
>> at the point of an ERET, this could potentially be used as part of a
>> side-channel attack.
>>
>> This patch emits an SB sequence after each ERET so that speculation is
>> held up on exception return.
>>
>> Signed-off-by: Will Deacon 
>> [florian: Adjust hyp-entry.S to account for the label
>>  added change to hyp/entry.S]
>> Signed-off-by: Florian Fainelli 
>> ---
>> Changes in v2:
>>
>> - added missing hunk in hyp/entry.S per Will's feedback
>
> What about 4.19.y and 4.14.y trees?  I can't take something for 4.9.y
> and then have a regression if someone moves to a newer release, right?

 Sure, send you candidates for 4.14 and 4.19.
>>>
>>> Greg, did you have a chance to queue those changes for 4.9, 4.14 and 4.19?
>>>
>>> https://lore.kernel.org/linux-arm-kernel/20200720182538.13304-1-f.faine...@gmail.com/
>>> https://lore.kernel.org/linux-arm-kernel/20200720182937.14099-1-f.faine...@gmail.com/
>>> https://lore.kernel.org/linux-arm-kernel/20200709195034.15185-1-f.faine...@gmail.com/
>>
>> Nope, I was waiting for Will's "ack" for these.
> 
> This patch doesn't even build for me (the 'sb' macro is not defined in 4.9),
> and I really wonder why we bother backporting it at all. Nobody's ever shown
> it to be a problem in practice, and it's clear that this is just being
> submitted to tick a box rather than anything else (otherwise it would build,
> right?).

Doh, I completely missed submitting the patch this one depended on,
which is why I did not notice the build failure locally. Sorry about
that, what a shame.

Would not the same "tick a box" argument apply to your original
submission then? Sure, I have not been able to demonstrate in real
life that this was a problem, but the same can be said about a lot of
security-related fixes.

What if it becomes exploitable in the future? Would it not be nice to
have the fix in a six-year LTS kernel?

> 
> So I'm not going to Ack any of them. As with a lot of this side-channel
> stuff the cure is far worse than the disease.
Assuming that my v3 does build correctly, which it will, would you be
keen on changing your position?
-- 
Florian


Re: [PATCH v4 3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

2020-08-21 Thread Marc Zyngier

On 2020-08-21 15:05, Catalin Marinas wrote:

On Fri, Aug 21, 2020 at 01:45:40PM +0100, Marc Zyngier wrote:

On 2020-08-21 13:26, Catalin Marinas wrote:
> On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:
> > On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> > > On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> > > > @@ -979,6 +980,14 @@
> > > > 	write_sysreg(__scs_new, sysreg);\
> > > >  } while (0)
> > > >
> > > > +#define read_sysreg_par() ({					\
> > > > +	u64 par;						\
> > > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > > +	par = read_sysreg(par_el1);				\
> > > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > > +})
> > >
> > > I was about to queue this up but one more point to clarify: can we get
> > > an interrupt at either side of the PAR_EL1 read and the handler do a
> > > device read, triggering the erratum? Do we need a DMB at exception
> > > entry/return?
> >
> > Disabling irqs around the PAR access would be simpler, I think
> > (assuming
> > this is needed).
>
> This wouldn't work if it interrupts a guest.

If we take an interrupt either side of the PAR_EL1 read and that we
fully exit, the saving of PAR_EL1 on the way out solves the problem.

If we don't fully exit, but instead reenter the guest immediately
(fixup_guest_exit() returns true), we'd need a DMB at that point,
at least because of the GICv2 proxying code which performs device
accesses on the guest's behalf.


If you are ok with the diff below, I can fold it in:

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index ca88ea416176..8770cf7ccd42 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -420,7 +420,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
handle_tx2_tvm(vcpu))
-   return true;
+   goto guest;

/*
 * We trap the first access to the FP/SIMD to save the host context
@@ -430,13 +430,13 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 * Similarly for trapped SVE accesses.
 */
if (__hyp_handle_fpsimd(vcpu))
-   return true;
+   goto guest;

if (__hyp_handle_ptrauth(vcpu))
-   return true;
+   goto guest;

if (!__populate_fault_info(vcpu))
-   return true;
+   goto guest;

	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
bool valid;
@@ -451,7 +451,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
int ret = __vgic_v2_perform_cpuif_access(vcpu);

if (ret == 1)
-   return true;
+   goto guest;

/* Promote an illegal access to an SError.*/
if (ret == -1)
@@ -467,12 +467,17 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
int ret = __vgic_v3_perform_cpuif_access(vcpu);

if (ret == 1)
-   return true;
+   goto guest;
}

 exit:
/* Return to the host kernel and handle the exit */
return false;
+
+guest:
+   /* Re-enter the guest */
+   asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
+   return true;
 }

 static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu)


Looks good to me!

M.
--
Jazz is not dead. It just smells funny...


Re: [PATCH stable v4.9 v2] arm64: entry: Place an SB sequence following an ERET instruction

2020-08-21 Thread Will Deacon
On Fri, Aug 07, 2020 at 03:14:29PM +0200, Greg KH wrote:
> On Thu, Aug 06, 2020 at 01:00:54PM -0700, Florian Fainelli wrote:
> > 
> > 
> > On 7/20/2020 11:26 AM, Florian Fainelli wrote:
> > > On 7/20/20 6:04 AM, Greg KH wrote:
> > >> On Thu, Jul 09, 2020 at 12:50:23PM -0700, Florian Fainelli wrote:
> > >>> From: Will Deacon 
> > >>>
> > >>> commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream
> > >>>
> > >>> Some CPUs can speculate past an ERET instruction and potentially perform
> > >>> speculative accesses to memory before processing the exception return.
> > >>> Since the register state is often controlled by a lower privilege level
> > >>> at the point of an ERET, this could potentially be used as part of a
> > >>> side-channel attack.
> > >>>
> > >>> This patch emits an SB sequence after each ERET so that speculation is
> > >>> held up on exception return.
> > >>>
> > >>> Signed-off-by: Will Deacon 
> > >>> [florian: Adjust hyp-entry.S to account for the label
> > >>>  added change to hyp/entry.S]
> > >>> Signed-off-by: Florian Fainelli 
> > >>> ---
> > >>> Changes in v2:
> > >>>
> > >>> - added missing hunk in hyp/entry.S per Will's feedback
> > >>
> > >> What about 4.19.y and 4.14.y trees?  I can't take something for 4.9.y
> > >> and then have a regression if someone moves to a newer release, right?
> > > 
> > > Sure, send you candidates for 4.14 and 4.19.
> > 
> > Greg, did you have a chance to queue those changes for 4.9, 4.14 and 4.19?
> > 
> > https://lore.kernel.org/linux-arm-kernel/20200720182538.13304-1-f.faine...@gmail.com/
> > https://lore.kernel.org/linux-arm-kernel/20200720182937.14099-1-f.faine...@gmail.com/
> > https://lore.kernel.org/linux-arm-kernel/20200709195034.15185-1-f.faine...@gmail.com/
> 
> Nope, I was waiting for Will's "ack" for these.

This patch doesn't even build for me (the 'sb' macro is not defined in 4.9),
and I really wonder why we bother backporting it at all. Nobody's ever shown
it to be a problem in practice, and it's clear that this is just being
submitted to tick a box rather than anything else (otherwise it would build,
right?).

So I'm not going to Ack any of them. As with a lot of this side-channel
stuff the cure is far worse than the disease.

Will


Re: [PATCH v4 3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

2020-08-21 Thread Catalin Marinas
On Fri, Aug 21, 2020 at 01:45:40PM +0100, Marc Zyngier wrote:
> On 2020-08-21 13:26, Catalin Marinas wrote:
> > On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:
> > > On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> > > > On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> > > > > @@ -979,6 +980,14 @@
> > > > > 	write_sysreg(__scs_new, sysreg);\
> > > > >  } while (0)
> > > > >
> > > > > +#define read_sysreg_par() ({					\
> > > > > +	u64 par;						\
> > > > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > > > +	par = read_sysreg(par_el1);				\
> > > > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > > > +})
> > > >
> > > > I was about to queue this up but one more point to clarify: can we get
> > > > an interrupt at either side of the PAR_EL1 read and the handler do a
> > > > device read, triggering the erratum? Do we need a DMB at exception
> > > > entry/return?
> > > 
> > > Disabling irqs around the PAR access would be simpler, I think
> > > (assuming
> > > this is needed).
> > 
> > This wouldn't work if it interrupts a guest.
> 
> If we take an interrupt either side of the PAR_EL1 read and that we
> fully exit, the saving of PAR_EL1 on the way out solves the problem.
> 
> If we don't fully exit, but instead reenter the guest immediately
> (fixup_guest_exit() returns true), we'd need a DMB at that point,
> at least because of the GICv2 proxying code which performs device
> accesses on the guest's behalf.

If you are ok with the diff below, I can fold it in:

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index ca88ea416176..8770cf7ccd42 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -420,7 +420,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
handle_tx2_tvm(vcpu))
-   return true;
+   goto guest;
 
/*
 * We trap the first access to the FP/SIMD to save the host context
@@ -430,13 +430,13 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 * Similarly for trapped SVE accesses.
 */
if (__hyp_handle_fpsimd(vcpu))
-   return true;
+   goto guest;
 
if (__hyp_handle_ptrauth(vcpu))
-   return true;
+   goto guest;
 
if (!__populate_fault_info(vcpu))
-   return true;
+   goto guest;
 
	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
bool valid;
@@ -451,7 +451,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
int ret = __vgic_v2_perform_cpuif_access(vcpu);
 
if (ret == 1)
-   return true;
+   goto guest;
 
/* Promote an illegal access to an SError.*/
if (ret == -1)
@@ -467,12 +467,17 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
int ret = __vgic_v3_perform_cpuif_access(vcpu);
 
if (ret == 1)
-   return true;
+   goto guest;
}
 
 exit:
/* Return to the host kernel and handle the exit */
return false;
+
+guest:
+   /* Re-enter the guest */
+   asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
+   return true;
 }
 
 static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu)

-- 
Catalin


Re: [PATCH v2 0/6] KVM: arm64: pvtime: Fixes and a new cap

2020-08-21 Thread Marc Zyngier

On 2020-08-19 13:50, Andrew Jones wrote:

On Tue, Aug 04, 2020 at 07:05:58PM +0200, Andrew Jones wrote:

v2:
  - ARM_SMCCC_HV_PV_TIME_FEATURES now also returns SMCCC_RET_NOT_SUPPORTED
    when steal time is not supported
  - Added READ_ONCE() for the run_delay read
  - Reworked kvm_put/get_guest to not require type as a parameter
  - Added some more text to the documentation for KVM_CAP_STEAL_TIME
  - Enough changed that I didn't pick up Steven's r-b's


The first four patches in the series are fixes that come from testing
and reviewing pvtime code while writing the QEMU support[*]. The last
patch is only a convenience for userspace, and I wouldn't be heartbroken
if it wasn't deemed worth it. The QEMU patches are currently written
without the cap. However, if the cap is accepted, then I'll change the
QEMU code to use it.

Thanks,
drew

[*] https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg03856.html

(a v2 of this series will also be posted shortly)

Andrew Jones (6):
  KVM: arm64: pvtime: steal-time is only supported when configured
  KVM: arm64: pvtime: Fix potential loss of stolen time
  KVM: arm64: Drop type input from kvm_put_guest
  KVM: arm64: pvtime: Fix stolen time accounting across migration
  KVM: Documentation: Minor fixups
  arm64/x86: KVM: Introduce steal-time cap

 Documentation/virt/kvm/api.rst| 22 ++
 arch/arm64/include/asm/kvm_host.h |  2 +-
 arch/arm64/kvm/arm.c  |  3 +++
 arch/arm64/kvm/pvtime.c   | 29 +
 arch/x86/kvm/x86.c|  3 +++
 include/linux/kvm_host.h  | 31 ++-
 include/uapi/linux/kvm.h  |  1 +
 7 files changed, 65 insertions(+), 26 deletions(-)

--
2.25.4




Hi Marc,

Gentle ping. I'd like to to switch the QEMU code to using the proposed
KVM cap, if the cap is accepted.


I'm fine with it. To be honest, this series is mostly fixes, except
for that last patch.

Paolo, are you OK with me sending the whole thing as fixes, including
the UAPI patch? At least we'd have something consistent for 5.9.

Thanks,

M.
--
Jazz is not dead. It just smells funny...


Re: [PATCH v4 3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

2020-08-21 Thread Marc Zyngier

On 2020-08-21 13:26, Catalin Marinas wrote:

On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:

On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> > @@ -979,6 +980,14 @@
> >   write_sysreg(__scs_new, sysreg);\
> >  } while (0)
> >
> > +#define read_sysreg_par() ({ \
> > + u64 par;\
> > + asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));\
> > + par = read_sysreg(par_el1); \
> > + asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));\
> > + par;\
> > +})
>
> I was about to queue this up but one more point to clarify: can we get
> an interrupt at either side of the PAR_EL1 read and the handler do a
> device read, triggering the erratum? Do we need a DMB at exception
> entry/return?

Disabling irqs around the PAR access would be simpler, I think (assuming
this is needed).

This wouldn't work if it interrupts a guest.


If we take an interrupt either side of the PAR_EL1 read and that we
fully exit, the saving of PAR_EL1 on the way out solves the problem.

If we don't fully exit, but instead reenter the guest immediately
(fixup_guest_exit() returns true), we'd need a DMB at that point,
at least because of the GICv2 proxying code which performs device
accesses on the guest's behalf.

Thanks,

M.
--
Jazz is not dead. It just smells funny...


Re: [PATCH v4 3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

2020-08-21 Thread Catalin Marinas
On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:
> On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> > On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> > > @@ -979,6 +980,14 @@
> > > 	write_sysreg(__scs_new, sysreg);\
> > >  } while (0)
> > >
> > > +#define read_sysreg_par() ({					\
> > > +	u64 par;						\
> > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > +	par = read_sysreg(par_el1);				\
> > > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > > +})
> > 
> > I was about to queue this up but one more point to clarify: can we get
> > an interrupt at either side of the PAR_EL1 read and the handler do a
> > device read, triggering the erratum? Do we need a DMB at exception
> > entry/return?
> 
> Disabling irqs around the PAR access would be simpler, I think (assuming
> this is needed).

This wouldn't work if it interrupts a guest.

-- 
Catalin


Re: [PATCH v4 3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

2020-08-21 Thread Will Deacon
On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> > @@ -979,6 +980,14 @@
> > 	write_sysreg(__scs_new, sysreg);\
> >  } while (0)
> >
> > +#define read_sysreg_par() ({					\
> > +	u64 par;						\
> > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > +	par = read_sysreg(par_el1);				\
> > +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \
> > +})
> 
> I was about to queue this up but one more point to clarify: can we get
> an interrupt at either side of the PAR_EL1 read and the handler do a
> device read, triggering the erratum? Do we need a DMB at exception
> entry/return?

Disabling irqs around the PAR access would be simpler, I think (assuming
this is needed).

Will


Re: [PATCH v4 3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

2020-08-21 Thread Catalin Marinas
On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> @@ -979,6 +980,14 @@
>   write_sysreg(__scs_new, sysreg);\
>  } while (0)
> 
> +#define read_sysreg_par() ({ \
> + u64 par;\
> + asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));\
> + par = read_sysreg(par_el1); \
> + asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));\
> + par;\
> +})

I was about to queue this up but one more point to clarify: can we get
an interrupt at either side of the PAR_EL1 read and the handler do a
device read, triggering the erratum? Do we need a DMB at exception
entry/return?

-- 
Catalin


Re: [PATCH v4 0/3] Cortex-A77 erratum 1508412 workaround

2020-08-21 Thread Will Deacon
On Mon, Aug 03, 2020 at 01:31:24PM -0600, Rob Herring wrote:
> This series implements the work-around for Cortex-A77 erratum 1508412.
> KVM guests which don't implement the workaround can still deadlock the
> system. This is also the case with the existing Cortex-A57 erratum 832075,
> so we add a warning message if an erratum can cause deadlock.

For the series:

Acked-by: Will Deacon 

I'm a bit worried about how we'll get on maintaining this, given that we
need to make sure that all new users of read_sysreg(par_el1) use
read_sysreg_par() instead, but oh well.

Will