On Wed, Jan 04, 2023 at 11:04:41AM +, Alexandru Elisei wrote:
> Hi Mark,
>
> Thank you for having a look!
>
> On Wed, Jan 04, 2023 at 09:19:25AM +0000, Mark Rutland wrote:
> > On Tue, Jan 03, 2023 at 02:27:59PM +, Alexandru Elisei wrote:
> > > Hi,
> >
On Tue, Jan 03, 2023 at 02:27:59PM +, Alexandru Elisei wrote:
> Hi,
>
> Gentle ping regarding this.
Hi Alexandru,
Sorry for the delay; things were a bit hectic at the end of last year, and this
is still on my queue of things to look at.
> Thanks,
> Alex
>
> On Wed, Nov 23, 2022 at 11:40:45
Peter, it looks like this series is blocked on the below now; what would you
prefer out of:
(a) Take this as is, and look at adding additional validation on top.
(b) Add some flag to indicate a PMU driver supports config3, and have the core
> code check that, but leave the existing fields as-is
On Thu, Oct 13, 2022 at 04:09:20PM +0100, Marc Zyngier wrote:
> [Reposting this, as it has been almost two weeks since the initial
> announcement and we're still at sub-10% of the users having
> subscribed to the new list]
FWIW, I didn't subscribe until just now because there weren't clear
instructions.
On Tue, Apr 19, 2022 at 10:37:56AM -0700, Kalesh Singh wrote:
> On Wed, Apr 13, 2022 at 6:59 AM Mark Rutland wrote:
> > I'm fine with the concept of splitting the unwind and logging steps; this is
> > akin to doing:
> >
Hi Kalesh,
Sorry for the radio silence.
I see that in v7 you've dropped the stacktrace bits for now; I'm just
commenting here for future reference.
On Thu, Mar 31, 2022 at 12:22:05PM -0700, Kalesh Singh wrote:
> Hi everyone,
>
> There has been expressed interest in having hypervisor stack unwind
s a number of system register accesses and other barriers
> if we exited for any other reason (such as a trap, for example).
>
> Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/kvm/arm.c | 8 +---
> 1 file changed, 5 insertions(+), 3 deletions(-)
On Tue, Feb 22, 2022 at 08:51:05AM -0800, Kalesh Singh wrote:
> Maps the stack pages in the flexible private VA range and allocates
> guard pages below the stack as unbacked VA space. The stack is aligned
> to twice its size to aid overflow detection (implemented in a subsequent
> patch in the series).
On Tue, Feb 22, 2022 at 08:51:02AM -0800, Kalesh Singh wrote:
> hyp_alloc_private_va_range() can be used to reserve private VA ranges
> in the nVHE hypervisor. Also update __create_hyp_private_mapping()
> to allow specifying an alignment for the private VA mapping.
>
> These will be used to imple
On Tue, Feb 22, 2022 at 08:51:06AM -0800, Kalesh Singh wrote:
> From: Quentin Perret
>
> The asm entry code in the kernel uses a trick to check if VMAP'd stacks
> have overflowed by aligning them at THREAD_SHIFT * 2 granularity and
> checking the SP's THREAD_SHIFT bit.
>
> Protected KVM will soo
On Tue, Jan 11, 2022 at 12:32:38PM +0100, Nicolas Saenz Julienne wrote:
> Hi Mark,
>
> On Tue, 2022-01-04 at 16:39 +0000, Mark Rutland wrote:
> > On Fri, Dec 17, 2021 at 04:54:22PM +0100, Paolo Bonzini wrote:
> > > On 12/17/21 15:38, Mark Rutland wrote:
> > > >
On Fri, Dec 17, 2021 at 04:54:22PM +0100, Paolo Bonzini wrote:
> On 12/17/21 15:38, Mark Rutland wrote:
> > For example kvm_guest_enter_irqoff() calls guest_enter_irq_off() which calls
> > vtime_account_guest_enter(), but kvm_guest_exit_irqoff() doesn't call
> > guest_e
On Mon, Dec 20, 2021 at 05:10:14PM +0100, Frederic Weisbecker wrote:
> On Fri, Dec 17, 2021 at 01:21:39PM +0000, Mark Rutland wrote:
> > On Fri, Dec 17, 2021 at 12:51:57PM +0100, Nicolas Saenz Julienne wrote:
> > > Hi All,
> >
> > Hi,
> >
> > >
On Fri, Dec 17, 2021 at 03:15:29PM +0100, Nicolas Saenz Julienne wrote:
> On Fri, 2021-12-17 at 13:21 +0000, Mark Rutland wrote:
> > On Fri, Dec 17, 2021 at 12:51:57PM +0100, Nicolas Saenz Julienne wrote:
> > > Hi All,
> >
> > Hi,
> >
> > >
On Fri, Dec 17, 2021 at 12:51:57PM +0100, Nicolas Saenz Julienne wrote:
> Hi All,
Hi,
> arm64's guest entry code does the following:
>
> int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> {
> [...]
>
> guest_enter_irqoff();
>
> ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
On Wed, Dec 15, 2021 at 01:09:28PM +, Oliver Upton wrote:
> Hi Mark,
>
> On Wed, Dec 15, 2021 at 11:39:58AM +0000, Mark Rutland wrote:
> > Hi Oliver,
> >
> > On Tue, Dec 14, 2021 at 05:28:07PM +, Oliver Upton wrote:
> > > Any valid implementation of
On Tue, Dec 14, 2021 at 05:28:09PM +, Oliver Upton wrote:
> Allow writes to OSLAR and forward the OSLK bit to OSLSR. Do nothing with
> the value for now.
>
> Reviewed-by: Reiji Watanabe
> Signed-off-by: Oliver Upton
> ---
> arch/arm64/include/asm/sysreg.h | 9
> arch/arm64/kvm/sys
On Tue, Dec 14, 2021 at 05:28:08PM +, Oliver Upton wrote:
> An upcoming change to KVM will context switch the OS Lock status between
> guest/host. Add OSLSR_EL1 to the cpu context and handle guest reads
> using the stored value.
The "context switch" wording is stale here, since later patches e
ional impact.
|
| For clarity, use write_to_read_only() rather than ignore_write(). If a trap
| is unexpectedly taken to EL2 in violation of the architecture, this will
| WARN_ONCE() and inject an undef into the guest.
With that:
Reviewed-by: Mark Rutland
Mark.
> Reviewed-by: Reiji Wa
Hi,
I haven't looked at this in great detail, but I spotted a few issues
from an initial scan.
On Wed, Nov 24, 2021 at 12:07:07PM -0500, Tyler Baicar wrote:
> Add support for parsing the ARM Error Source Table and basic handling of
> errors reported through both memory mapped and system register
On Thu, Sep 23, 2021 at 12:22:52PM +0100, Will Deacon wrote:
> When pKVM is enabled, the hypervisor code at EL2 and its data structures
> are inaccessible to the host kernel and cannot be torn down or replaced
> as this would defeat the integrity properties which pKVM aims to provide.
> Furthermore,
On Thu, Jul 15, 2021 at 11:00:42AM +0100, Robin Murphy wrote:
> On 2021-07-15 10:44, Qu Wenruo wrote:
> >
> >
> > On 2021/7/15 5:28 PM, Robin Murphy wrote:
> > > On 2021-07-15 09:55, Qu Wenruo wrote:
> > > > Hi,
> > > >
> > > > Recently I'm playing around the Nvidia Xavier AGX board, which
> > >
On Fri, Jul 02, 2021 at 09:00:22AM -0700, Joe Perches wrote:
> On Fri, 2021-07-02 at 13:22 +0200, Peter Zijlstra wrote:
> > On Tue, Jun 22, 2021 at 05:42:49PM +0800, Zhu Lingshan wrote:
> > > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> []
> > > @@ -90,6 +90,27 @@ DEFINE_STATIC_CA
On Thu, Jun 03, 2021 at 07:33:47PM +0100, Will Deacon wrote:
> Introduce a new VM capability, KVM_CAP_ARM_PROTECTED_VM, which can be
> used to isolate guest memory from the host. For now, the EL2 portion is
> missing, so this documents and exposes the user ABI for the host.
>
> Signed-off-by: Will
On Thu, Jun 03, 2021 at 07:33:46PM +0100, Will Deacon wrote:
> Add support for a "linux,pkvm-guest-firmware-memory" reserved memory
> region, which can be used to identify a firmware image for protected
> VMs.
The idea that the guest's FW comes from the host's FW strikes me as
unusual; what's the
optimization, but in fact this allows symbol references on
> VHE-specific code paths to be dropped from the nVHE object.
>
> Expand the comment in has_vhe() to make this clearer, hopefully
> discouraging anybody from simplifying the code.
>
> Cc: David Brazdil
> Signed-off
ate_to_vhe stuff, passing 'kvm-arm.mode=protected' should make the
kernel stick to EL1, right? So this should only affect M1 (or other HW
with a similar impediment).
One minor comment below; otherwise:
Acked-by: Mark Rutland
>
> Cc: David Brazdil
> Signed-off-by: Will Deacon
g sure all the vcpus have the same
> register width.
>
> Reported-by: Steven Price
> Signed-off-by: Marc Zyngier
> Cc: sta...@vger.kernel.org
Looks good to me!
Acked-by: Mark Rutland
Mark.
> ---
>
> Notes:
> v2: Fix missing check against ARM64_HAS_32BIT_EL1 (M
On Thu, May 20, 2021 at 01:58:55PM +0100, Marc Zyngier wrote:
> On Thu, 20 May 2021 13:44:34 +0100,
> Mark Rutland wrote:
> >
> > On Thu, May 20, 2021 at 01:22:53PM +0100, Marc Zyngier wrote:
> > > It looks like we have tolerated creating mixed-width VMs since...
>
On Thu, May 20, 2021 at 01:22:53PM +0100, Marc Zyngier wrote:
> It looks like we have tolerated creating mixed-width VMs since...
> forever. However, that was never the intention, and we'd rather
> not have to support that pointless complexity.
>
> Forbid such a setup by making sure all the vcpus
On Mon, May 10, 2021 at 06:44:49PM +0100, Marc Zyngier wrote:
> On Mon, 10 May 2021 17:19:07 +0100,
> Mark Rutland wrote:
> >
> > On Mon, May 10, 2021 at 02:48:18PM +0100, Marc Zyngier wrote:
> > > As it turns out, not all the interrupt controllers are able to
>
On Mon, May 10, 2021 at 02:48:18PM +0100, Marc Zyngier wrote:
> As it turns out, not all the interrupt controllers are able to
> expose a vGIC maintenance interrupt as a discrete signal.
> And to be fair, it doesn't really matter as all we require is
> for *something* to kick us out of guest mode o
On Thu, Mar 11, 2021 at 11:35:29AM +, Mark Rutland wrote:
> Acked-by: Mark Rutland
Upon reflection, maybe I should spell my own name correctly:
Acked-by: Mark Rutland
... lest you decide to add a Mocked-by tag instead ;)
Mark.
init.S
> > +++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> > @@ -117,13 +117,7 @@ alternative_else_nop_endif
> > 	tlbi	alle2
> > dsb sy
> >
> > - /*
> > - * Preserve all the RES1 bits while settin
On Mon, Mar 08, 2021 at 01:30:53PM +, Will Deacon wrote:
> On Sun, Mar 07, 2021 at 05:24:21PM +0530, Anshuman Khandual wrote:
> > On 3/5/21 8:21 PM, Mark Rutland wrote:
> > > On Fri, Mar 05, 2021 at 08:06:09PM +0530, Anshuman Khandual wrote
ilar situations in EFI stub and KVM as well.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Marc Zyngier
> Cc: James Morse
> Cc: Suzuki K Poulose
> Cc: Ard Biesheuvel
> Cc: Mark Rutland
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: kvmarm@lists.cs.col
> > set, eg. set bit for PSCI_CPU_SUSPEND if psci_ops.cpu_suspend != NULL.
> >
> > Previously config was split into multiple global variables. Put
> > everything into a single struct for convenience.
> >
> > Reported-by: Mark Rutland
> > Signed-off-by: David Braz
On Mon, Dec 07, 2020 at 10:20:03AM +, Will Deacon wrote:
> On Fri, Dec 04, 2020 at 06:01:52PM +, Quentin Perret wrote:
> > On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> >
> > > > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > > > + /* Turn the MMU off */
> > > > +
On Thu, Dec 03, 2020 at 04:49:49PM +, Steven Price wrote:
> On 03/12/2020 16:09, Mark Rutland wrote:
> > On Fri, Nov 27, 2020 at 03:21:11PM +, Steven Price wrote:
> > > It's been a week, and I think the comments on v5 made it clear that
> > > enforcing PROT
On Fri, Nov 27, 2020 at 03:21:11PM +, Steven Price wrote:
> It's been a week, and I think the comments on v5 made it clear that
> enforcing PROT_MTE requirements on the VMM was probably the wrong
> approach. So since I've got swap working correctly without that I
> thought I'd post a v6 which h
On Wed, Dec 02, 2020 at 06:41:12PM +, David Brazdil wrote:
> Add a handler of PSCI SMCs in nVHE hyp code. The handler is initialized
> with the version used by the host's PSCI driver and the function IDs it
> was configured with. If the SMC function ID matches one of the
> configured PSCI calls
On Wed, Dec 02, 2020 at 06:41:02PM +, David Brazdil wrote:
> Make it possible to retrieve a copy of the psci_0_1_function_ids struct.
> This is useful for KVM if it is configured to intercept host's PSCI SMCs.
>
> Signed-off-by: David Brazdil
Acked-by: Mark Rutland
... ju
to
> other parts of the kernel. Exposing a struct avoids the need for
> bounds checking.
>
> Signed-off-by: David Brazdil
Acked-by: Mark Rutland
Mark.
> ---
> drivers/firmware/psci/psci.c | 29 ++---
> 1 file changed, 14 insertions(+), 15 deletions(-)
mall cleanup so that the function ID array is only used for
> v0.1 configurations.
>
> Signed-off-by: David Brazdil
Acked-by: Mark Rutland
Mark.
> ---
> drivers/firmware/psci/psci.c | 94 +++-
> 1 file changed, 60 insertions(+), 34 delet
On Tue, Dec 01, 2020 at 02:43:49PM +, David Brazdil wrote:
> > > > be just me, but if you agree please update so that it doesn't give
> > > > remote
> > > > idea that it is not valid on VHE enabled hardware.
> > > >
> > > > I was trying to run this on the hardware and was trying to understand
On Thu, Nov 26, 2020 at 03:54:18PM +, David Brazdil wrote:
> Add a handler of CPU_SUSPEND host PSCI SMCs. The SMC can either enter
> a sleep state indistinguishable from a WFI or a deeper sleep state that
> behaves like a CPU_OFF+CPU_ON except that the core is still considered
> online when asl
On Tue, Dec 01, 2020 at 01:19:13PM +, David Brazdil wrote:
> Hey Sudeep,
>
> > > diff --git a/Documentation/admin-guide/kernel-parameters.txt
> > > b/Documentation/admin-guide/kernel-parameters.txt
> > > index 526d65d8573a..06c89975c29c 100644
> > > --- a/Documentation/admin-guide/kernel-para
On Thu, Nov 26, 2020 at 03:54:03PM +, David Brazdil wrote:
> When a CPU is booted in EL2, the kernel checks for VHE support and
> initializes the CPU core accordingly. For nVHE it also installs the stub
> vectors and drops down to EL1.
>
> Once KVM gains the ability to boot cores without g
On Thu, Nov 26, 2020 at 03:54:02PM +, David Brazdil wrote:
> KVM currently initializes MAIR_EL2 to the value of MAIR_EL1. In
> preparation for initializing MAIR_EL2 before MAIR_EL1, move the constant
> into a shared header file. Since it is used for EL1 and EL2, rename to
> MAIR_ELx_SET.
>
> S
On Thu, Nov 26, 2020 at 03:54:01PM +, David Brazdil wrote:
> CPU index should never be negative. Change the signature of
> (set_)cpu_logical_map to take an unsigned int.
>
> Signed-off-by: David Brazdil
Is there a functional problem here, or is this just cleanup from
inspection?
Core code inc
On Thu, Nov 26, 2020 at 03:54:00PM +, David Brazdil wrote:
> Function IDs used by PSCI are configurable for v0.1 via DT/ACPI. If the
> host is using PSCI v0.1, KVM's host PSCI proxy needs to use the same IDs.
> Expose the array holding the information with a read-only accessor.
>
> Signed-off-
return -EINVAL;
> }
>
> +static u32 psci_get_version_0_1(void)
> +{
> + return PSCI_VERSION(0, 1);
> +}
Elsewhere in this file we've used a psci_${MAJOR}_${MINOR}_* naming
scheme.
To match that, I'd prefer we call this psci_0_1_get_version(), and
rename psci_get_ve
On Thu, Nov 05, 2020 at 03:34:01PM +0100, Ard Biesheuvel wrote:
> On Thu, 5 Nov 2020 at 15:30, Mark Rutland wrote:
> > On Thu, Nov 05, 2020 at 03:04:57PM +0100, Ard Biesheuvel wrote:
> > > On Thu, 5 Nov 2020 at 15:03, Mark Rutland wrote:
> >
> > > > That sa
On Thu, Nov 05, 2020 at 02:29:49PM +, Mark Brown wrote:
> On Thu, Nov 05, 2020 at 02:03:22PM +0000, Mark Rutland wrote:
> > On Thu, Nov 05, 2020 at 01:41:42PM +, Mark Brown wrote:
>
> > > It isn't obvious to me why we don't fall through to trying the SM
On Thu, Nov 05, 2020 at 03:04:57PM +0100, Ard Biesheuvel wrote:
> On Thu, 5 Nov 2020 at 15:03, Mark Rutland wrote:
> > On Thu, Nov 05, 2020 at 01:41:42PM +, Mark Brown wrote:
> > > On Thu, Nov 05, 2020 at 12:56:55PM +, Andre Przywara wrote:
> > That said, I'm
On Thu, Nov 05, 2020 at 01:41:42PM +, Mark Brown wrote:
> On Thu, Nov 05, 2020 at 12:56:55PM +, Andre Przywara wrote:
>
> > static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
> > {
> > + struct arm_smccc_res res;
> > unsigned long val;
> > - bool ok = arch_
On Fri, Oct 30, 2020 at 08:20:14AM +, Will Deacon wrote:
> On Fri, Oct 30, 2020 at 08:18:48AM +, Will Deacon wrote:
> > On Mon, Oct 26, 2020 at 01:49:30PM +0000, Mark Rutland wrote:
> > > In a subsequent patch we'll modify cpus_have_const_cap() to call
> >
On Mon, Oct 26, 2020 at 01:34:48PM +, Marc Zyngier wrote:
> The SPR setting code is now completely unused, including that dealing
> with banked AArch32 SPSRs. Cleanup time.
>
> Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/include/asm/
by hardcoding the two possible
> LR registers (LR_abt in X20, LR_und in X22).
>
> We also introduce new accessors for SPSR and CP15 registers.
>
> Signed-off-by: Marc Zyngier
Modulo comments on the prior patch for the AArch64 exception bits that
get carried along:
Acked-by: Mark R
On Mon, Oct 26, 2020 at 02:08:35PM +, Marc Zyngier wrote:
> On 2020-10-26 13:53, Mark Rutland wrote:
> > Assuming that there is no 16-bit HVC:
>
> It is actually impossible to have a 16bit encoding for HVC, as
> it always conveys a 16bit immediate, and you need some spa
On Mon, Oct 26, 2020 at 01:34:46PM +, Marc Zyngier wrote:
> Move the AArch64 exception injection code from EL1 to HYP, leaving
> only the ESR_EL1 updates to EL1. In order to cope with the differences
> between VHE and nVHE, two sets of system register accessors are provided.
>
> SPSR, ELR, PC a
Marc Zyngier
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/include/asm/kvm_host.h | 85 +++
> arch/arm64/kvm/sys_regs.c | 81 -
> 2 files changed, 85 insertions(+), 81 deletions(-)
>
> diff --git a/arch/arm64
fixups.
>
> Isn't that neat?
>
> Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/kvm/handle_exit.c| 17 -
> arch/arm64/kvm/hyp/include/hyp/switch.h | 15 +++
> 2 files changed, 15 insertions
On Mon, Oct 26, 2020 at 01:34:42PM +, Marc Zyngier wrote:
> In an effort to remove the vcpu PC manipulations from EL1 on nVHE
> systems, move kvm_skip_instr() to be HYP-specific. EL1's intent
> to increment PC post emulation is now signalled via a flag in the
> vcpu structure.
>
> Signed-off-b
assume you
know how to drive your favourite spellchecker. ;)
> this helper can equally be called from kvm_skip_instr32(), reducing
> the complexity at all the call sites.
>
> Signed-off-by: Marc Zyngier
Looks nice!
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/
atter what the ISA is. Take this opportunity to simplify it.
>
> Signed-off-by: Marc Zyngier
Assuming that there is no 16-bit HVC:
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/kvm/handle_exit.c | 16
> 1 file changed, 8 insertions(+), 8 deletions(-)
>
as a result of this patch.
Signed-off-by: Mark Rutland
Cc: David Brazdil
Cc: Marc Zyngier
Cc: Will Deacon
---
arch/arm64/include/asm/cpufeature.h | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h
b/arch/arm64/includ
yp context.
With this change, there's never a reason to access the cpu_hwcaps array
from hyp code, and we don't need to create an NVHE alias for this.
This should have no effect on non-hyp code.
Signed-off-by: Mark Rutland
Cc: David Brazdil
Cc: Marc Zyngier
Cc: Will Deacon
---
ar
o separate helpers.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland
Cc: David Brazdil
Cc: Marc Zyngier
Cc: Will Deacon
---
arch/arm64/include/asm/virt.h | 21 -
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/arch/arm
avoid creating an NVHE alias for the cpu_hwcaps array,
so we can catch if we accidentally introduce a runtime reference to
this (e.g. via cpus_have_cap()).
Since v1 [1]:
* Trivial rebase to v5.10-rc1
[1] https://lore.kernel.org/r/20201007125211.30043-1-mark.rutl...@arm.com
Mark Rutland (3):
On Tue, Oct 06, 2020 at 05:13:31PM +0100, Alexandru Elisei wrote:
> Hi Marc,
>
> Thank you for having a look at the patch!
>
> On 10/6/20 4:32 PM, Marc Zyngier wrote:
> > Hi Alex,
> >
> > On Tue, 06 Oct 2020 16:05:20 +0100,
> > Alexandru Elisei wrote:
> >> From ARM DDI 0487F.b, page D9-2807:
> >
On Wed, Aug 19, 2020 at 09:54:40AM +0100, Steven Price wrote:
> On 18/08/2020 15:41, Marc Zyngier wrote:
> > On 2020-08-17 09:41, Keqian Zhu wrote:
> We are discussing (re-)releasing the spec with the LPT parts added. If you
> have fundamental objections then please let me know.
Like Marc, I argued s
On Tue, Jun 30, 2020 at 10:16:07AM +1000, Gavin Shan wrote:
> Hi Mark,
>
> On 6/29/20 9:00 PM, Mark Rutland wrote:
> > On Mon, Jun 29, 2020 at 07:18:41PM +1000, Gavin Shan wrote:
> > > There are a set of inline functions defined in kvm_emulate.h. Those
> > > fu
On Mon, Jun 29, 2020 at 11:32:08AM +0100, Mark Rutland wrote:
> On Mon, Jun 29, 2020 at 07:18:40PM +1000, Gavin Shan wrote:
> > kvm/arm32 isn't supported since commit 541ad0150ca4 ("arm: Remove
> > 32bit KVM host support"). So HSR isn't meaningful sinc
On Mon, Jun 29, 2020 at 07:18:41PM +1000, Gavin Shan wrote:
> There are a set of inline functions defined in kvm_emulate.h. Those
> functions reads ESR from vCPU fault information struct and then operate
> on it. So it's tied with vCPU fault information and vCPU struct. It
> limits their usage scop
On Mon, Jun 29, 2020 at 07:18:40PM +1000, Gavin Shan wrote:
> kvm/arm32 isn't supported since commit 541ad0150ca4 ("arm: Remove
> 32bit KVM host support"). So HSR isn't meaningful since then. This
> renames HSR to ESR accordingly. This shouldn't cause any functional
> changes:
>
>* Rename kvm_
On Mon, Jun 22, 2020 at 11:25:41AM +0100, Marc Zyngier wrote:
> On 2020-06-22 10:15, Mark Rutland wrote:
> > On Mon, Jun 22, 2020 at 09:06:43AM +0100, Marc Zyngier wrote:
> I have folded in the following patch:
>
> diff --git a/arch/arm64/include/asm/kvm_ptrauth.h
> b/a
test as the ARM64_HAS_ADDRESS_AUTH capability is
> exactly this expression.
>
> Suggested-by: Mark Rutland
> Signed-off-by: Marc Zyngier
Looks good to me. One minor suggestion below, but either way:
Acked-by: Mark Rutland
> ---
> arch/arm64/include/asm/kvm_ptrauth.h | 26 +
On Mon, Jun 22, 2020 at 09:06:39AM +0100, Marc Zyngier wrote:
> While initializing EL2, enable Address Authentication if detected
> from EL1. We still use the EL1-provided keys though.
>
> Acked-by: Andrew Scull
> Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Mark.
>
rc Zyngier
It took me a while to spot that we switched the guest/host hcr_el2 value
in the __activate_traps() and __deactivate_traps() paths, but given that
this is only called in the __kvm_vcpu_run_*() paths called between
those, I agree this is sound. Given that:
Acked-by: Mark Rutland
Mark.
functionally equivalent and easier to follow, so:
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/kvm/reset.c | 21 ++---
> 1 file changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index d3b209023727..2a
this, and agree the limitation is pointless, so:
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/Kconfig | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 31380da53689..d719ea9c596d 100644
> --- a/a
On Mon, Jun 15, 2020 at 09:19:51AM +0100, Marc Zyngier wrote:
> While initializing EL2, switch Pointer Authentication if detected
> from EL1. We use the EL1-provided keys though.
Perhaps "enable address authentication", to avoid confusion with
context-switch, and since generic authentication canno
use in preemptible context, which is a sure sign that
> it would better be gone. Not to mention that a per-cpu
> pointer is faster to access at all times.
>
> Reported-by: Andrew Scull
> Signed-off-by: Marc Zyngier
From a quick scan, this looks sane to me, so
Hi Marc,
On Thu, Jun 04, 2020 at 02:33:54PM +0100, Marc Zyngier wrote:
> Even if we don't expose PtrAuth to a guest, the guest can still
> write to its SCTLR_EL1 register and set the En{I,D}{A,B} bits
> and execute PtrAuth instructions from the NOP space. This has
> the effect of trapping to EL2,
abling ptrauth.
> - * - Or an UNDEF is injected as ptrauth is not supported/enabled.
> + * If we land here, that is because we didn't fixup the access on exit
> + * by allowing the PtrAuth sysregs. The only way this happens is when
> +
sta...@vger.kernel.org
> Signed-off-by: Marc Zyngier
This looks sound to me given kvm_arch_vcpu_load() is surrounded with
get_cpu() .. put_cpu() and gets called when the thread is preempted.
Reviewed-by: Mark Rutland
Thanks,
Mark.
> ---
> arch/arm64/include/asm/kvm_emulate.h | 6
On Wed, May 27, 2020 at 10:34:09AM +0100, Marc Zyngier wrote:
> HI Mark,
>
> On 2020-05-19 11:44, Mark Rutland wrote:
> > On Wed, Apr 22, 2020 at 01:00:50PM +0100, Marc Zyngier wrote:
> > > -static unsigned long get_except64_pstate(struct kvm_vcpu *vcpu)
> > > +s
(30%)
> Minimal swapin time: 36.2 us
> Maximal swapin time: 55.7 ms
>
> Changelog
> =
> RFCv1 -> RFCv2
>* Rebase to 5.7.rc3
> * Performance data (Marc
> Zyngier)
>* R
On Fri, May 08, 2020 at 01:29:19PM +1000, Gavin Shan wrote:
> This supports asynchronous page fault for the guest. The design is
> similar to what x86 has: on receiving a PAGE_NOT_PRESENT signal from
> the host, the current task is either rescheduled or put into power
> saving mode. The task will b
On Fri, May 08, 2020 at 01:29:17PM +1000, Gavin Shan wrote:
> There are two stages of page faults and the stage one page fault is
> handled by guest itself. The guest is trapped to host when the page
> fault is caused by stage 2 page table, for example missing. The guest
> is suspended until the re
On Fri, May 08, 2020 at 01:29:16PM +1000, Gavin Shan wrote:
> This renames user_mem_abort() to kvm_handle_user_mem_abort(), and
> then exports it. The function will be used in asynchronous page fault
> to populate a page table entry once the corresponding page is populated
> from the backup device (
On Fri, May 08, 2020 at 01:29:14PM +1000, Gavin Shan wrote:
> There are a set of inline functions defined in kvm_emulate.h. Those
> functions reads ESR from vCPU fault information struct and then operate
> on it. So it's tied with vCPU fault information and vCPU struct. It
> limits their usage scop
On Fri, May 08, 2020 at 01:29:15PM +1000, Gavin Shan wrote:
> This replaces the variable names to make them self-explaining. The
> tracepoint isn't changed accordingly because they're part of ABI:
>
>* @hsr to @esr
>* @hsr_ec to @ec
>* Use kvm_vcpu_trap_get_class() helper if possible
>
On Fri, May 08, 2020 at 01:29:13PM +1000, Gavin Shan wrote:
> Since kvm/arm32 was removed, this renames kvm_vcpu_get_hsr() to
> kvm_vcpu_get_esr() to make it a bit more self-explaining, because the
> function returns ESR instead of HSR on aarch64. This shouldn't
> cause any functional changes.
>
> Sign
*/
> + BUG();
> + }
> +
> + *vcpu_pc(vcpu) = vbar + exc_offset + type;
>
> old = *vcpu_cpsr(vcpu);
> new = 0;
> @@ -105,9 +114,10 @@ static unsigned long get_except64_pstate(struct kvm_vcpu
> *vcpu)
Hi Andre,
On Thu, May 14, 2020 at 10:45:53AM +0100, Andre Przywara wrote:
> On arm and arm64 we expose the Motorola RTC emulation to the guest,
> but never advertised this in the device tree.
>
> EDK-2 seems to rely on this device, but on its hardcoded address. To
> make this more future-proof, a
On Tue, May 12, 2020 at 11:53:43AM +0100, Mark Rutland wrote:
> >
> > /* Clamp the IPA limit to the PA size supported by the kernel */
> > ipa_max = (pa_max > PHYS_MASK_SHIFT) ? PHYS_MASK_SHIFT : pa_max;
> > @@ -411,7 +411,8 @@ int kvm_arm_setup_stage2(str
On Tue, May 12, 2020 at 07:43:26AM +0530, Anshuman Khandual wrote:
> This replaces multiple open encoding (0x7) with ID_AA64MMFR0_PARANGE_MASK
> thus cleaning the clutter. It modifies an existing ID_AA64MMFR0 helper and
> introduces a new one i.e id_aa64mmfr0_iparange() and id_aa64mmfr0_parange()
>
On Tue, May 05, 2020 at 01:12:39PM +0100, Will Deacon wrote:
> On Tue, May 05, 2020 at 12:50:54PM +0100, Mark Rutland wrote:
> > On Tue, May 05, 2020 at 12:27:19PM +0100, Will Deacon wrote:
> > > On Tue, May 05, 2020 at 12:16:07PM +0100, Mark Rutland wrote:
> > > >