On Thu, Nov 08, 2018 at 02:38:43PM +, Peter Maydell wrote:
> On 8 November 2018 at 14:28, Alex Bennée wrote:
> >
> > Mark Rutland writes:
> >> One problem is that I couldn't spot when we advance the PC for an MMIO
> >> trap. I presume we do that in the ker
On Thu, Nov 08, 2018 at 12:40:11PM +, Alex Bennée wrote:
> Mark Rutland writes:
> > On Wed, Nov 07, 2018 at 06:01:20PM +0000, Mark Rutland wrote:
> >> On Wed, Nov 07, 2018 at 05:10:31PM +, Alex Bennée wrote:
> >> > Not all faults handled by handle_e
On Wed, Nov 07, 2018 at 06:01:20PM +, Mark Rutland wrote:
> On Wed, Nov 07, 2018 at 05:10:31PM +, Alex Bennée wrote:
> > Not all faults handled by handle_exit are instruction emulations. For
> > example a ESR_ELx_EC_IABT will result in the page tables being updated
> >
On Wed, Nov 07, 2018 at 05:10:31PM +, Alex Bennée wrote:
> Not all faults handled by handle_exit are instruction emulations. For
> example a ESR_ELx_EC_IABT will result in the page tables being updated
> but the instruction that triggered the fault hasn't actually executed
> yet. We use the
SYS_ID_AA64MMFR1_EL1);
> + u32 sr = sys_reg((u32)r->Op0, (u32)r->Op1,
> + (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> +
It might be worth factoring this into a helper (e.g. param_to_reg(p)),
since there are a few other places that this would
On Fri, Oct 19, 2018 at 08:36:45AM -0700, Kees Cook wrote:
> On Fri, Oct 19, 2018 at 4:24 AM, Will Deacon wrote:
> > Assuming we want this (Kees -- I was under the impression that everything in
> > Android would end up with the same key otherwise?), then the question is
> > do we want:
> >
> >
, only
the fixed-purpose cycle counter appears to work as expected.
Fix this by always stashing the host MDCR_EL2 value, regardless of VHE.
Fixes: 1e947bad0b63b351 ("arm64: KVM: Skip HYP setup when already running in HYP")
Signed-off-by: Mark Rutland
Cc: Christoffer Dall
Cc: James
On Fri, Oct 12, 2018 at 09:56:05AM +0100, Will Deacon wrote:
> On Fri, Oct 12, 2018 at 09:53:54AM +0100, Mark Rutland wrote:
> > On Thu, Oct 11, 2018 at 05:28:14PM +0100, Will Deacon wrote:
> > > On Fri, Oct 05, 2018 at 09:47:38AM +0100, Kristina Martsenko wrote:
>
On Thu, Oct 11, 2018 at 05:28:14PM +0100, Will Deacon wrote:
> On Fri, Oct 05, 2018 at 09:47:38AM +0100, Kristina Martsenko wrote:
> > +#define ESR_ELx_EC_PAC (0x09)
>
> Really minor nit: but shouldn't this be ESR_EL2_EC_PAC, since this trap
> can't occur at EL1 afaict?
It can also
whitepaper, available from the Arm security updates site [1].
Thanks,
Mark.
[1]
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability
Mark Rutland (2):
arm64: fix possible spectre-v1 write in ptrace_hbp_set_event()
KVM: arm/arm64: vgic: fix possible spectr
It's possible for userspace to control idx. Sanitize idx when using it
as an array index, to inhibit the potential spectre-v1 write gadget.
Found by smatch.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
---
arch/arm64/kernel/ptrace.c | 19 +++
1 file changed
(and/or future refactoring) will ensure this is
the case, and given this is a slow path it's better to always perform
the masking.
Found by smatch.
Signed-off-by: Mark Rutland
Cc: Christoffer Dall
Cc: Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu
---
virt/kvm/arm/vgic/vgic-mmio-v2.c | 3
compat tasks, and there is no way to actually
> prevent a compat task from issuing KVM ioctls.
>
> This patch changes this behaviour, by always registering a compat_ioctl
> method, even if KVM_COMPAT is not selected. In that case, the callback
> will always return -EINVAL.
>
> Re
Hi,
On Tue, May 29, 2018 at 03:20:47PM +0100, Dave Martin wrote:
> Currently, the {read,write}_sysreg_el*() accessors for accessing
> particular ELs' sysregs in the presence of VHE rely on some local
> hacks and define their system register encodings in a way that is
> inconsistent with the core
On Thu, May 31, 2018 at 02:00:11PM +0100, Marc Zyngier wrote:
> On 31/05/18 12:51, Mark Rutland wrote:
> > On Wed, May 30, 2018 at 01:47:02PM +0100, Marc Zyngier wrote:
> >> Set/Way handling is one of the ugliest corners of KVM. We shouldn't
> >> have to handle that,
andates them unconditionally.
>
> Let's remove these operations.
>
> Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Mark.
> ---
> virt/kvm/arm/mmu.c | 4
> 1 file changed, 4 deletions(-)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index ad1980d2118a
r how the folding logic works for ARM: is a pgd entry the
entire pud table?
Assuming so:
Acked-by: Mark Rutland
> +
> static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
> {
> pte_val(pte) |= L_PTE_S2_RDWR;
> diff --git a/arch/arm64/include/asm/kvm_mmu.h
> b/arch/arm64/include
em.
>
> Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm/include/asm/kvm_mmu.h | 12 ---
> arch/arm64/include/asm/kvm_mmu.h | 3 ---
> virt/kvm/arm/mmu.c | 35
> 3 files changed, 31 inse
te the
> icache is an unnecessary overhead.
>
> On such systems, we can safely leave the page as being executable.
>
> Acked-by: Catalin Marinas
> Signed-off-by: Marc Zyngier
Acked-by: Mark Rutland
Mark.
> ---
> arch/arm64/include/asm/pgtable-prot.h | 14 --
On Wed, May 30, 2018 at 01:47:02PM +0100, Marc Zyngier wrote:
> Set/Way handling is one of the ugliest corners of KVM. We shouldn't
> have to handle that, but better safe than sorry.
>
> Thankfully, FWB fixes this for us by not requiring any maintenance
> whatsoever, which means we don't have to
On Wed, May 30, 2018 at 01:47:01PM +0100, Marc Zyngier wrote:
> Up to ARMv8.3, the combination of Stage-1 and Stage-2 attributes
> results in the strongest attribute of the two stages. This means
> that the hypervisor has to perform quite a lot of cache maintenance
> just in case the guest has
; Signed-off-by: Marc Zyngier
With the fixup in swsusp_arch_suspend(), this looks good to me. FWIW:
Reviewed-by: Mark Rutland
Mark.
> ---
> arch/arm64/include/asm/cpufeature.h | 6 ++
> arch/arm64/kernel/cpu_errata.c | 2 +-
> arch/arm64/kernel/hibernate.c |
On Fri, May 25, 2018 at 12:08:28PM +0100, Marc Zyngier wrote:
> On 25/05/18 11:50, Mark Rutland wrote:
> > On Thu, May 10, 2018 at 12:13:47PM +0100, Mark Rutland wrote:
> >> For historical reasons, we open-code lm_alias() in kvm_ksym_ref().
> >>
> >> Let's
On Wed, May 23, 2018 at 09:42:56AM +0100, Suzuki K Poulose wrote:
> On 03/05/18 14:20, Mark Rutland wrote:
> > +#define __ptrauth_key_install(k, v)\
> > +do { \
> > + write_sysreg_s(v.lo, S
On Wed, May 23, 2018 at 09:48:28AM +0100, Suzuki K Poulose wrote:
>
> Mark,
>
> On 03/05/18 14:20, Mark Rutland wrote:
> > So that we can dynamically handle the presence of pointer authentication
> > functionality, wire up probing code in cpufeature.c.
> &
On Thu, May 17, 2018 at 11:35:47AM +0100, Marc Zyngier wrote:
> There is no need to perform cache maintenance operations when
> creating the HYP page tables if we have the multiprocessing
> extensions. ARMv7 mandates them with the virtualization support,
> and ARMv8 just mandates them
offer.d...@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Mark Rutland <mark.rutl...@arm.com>
Mark.
> ---
> arch/arm/include/asm/kvm_host.h | 12
> arch/arm64/include/asm/kvm_host.h | 23 +++
> arch/ar
ve_cb_end
> + get_vcpu_ptr x2, x0
> + ldr x0, [x2, #VCPU_WORKAROUND_FLAGS]
> +
> + /* Sanitize the argument and update the guest flags*/
Nit: space before the trailing '*/'. Either that or use a '//' comment.
Otherwise, this looks fine, so with that fixed:
Reviewed-by:
> KVM to disable ARCH_WORKAROUND_2 before entering the guest,
> and enable it when exiting it.
>
> Reviewed-by: Christoffer Dall <christoffer.d...@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Mark Rutland <mark.rutl...@arm.com>
Mark.
>
> Reviewed-by: Christoffer Dall <christoffer.d...@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Mark Rutland <mark.rutl...@arm.com>
Mark.
> ---
> arch/arm64/include/asm/kvm_asm.h | 27 +--
> 1 file changed, 25 inse
On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> If running on a system that performs dynamic SSBD mitigation, allow
> userspace to request the mitigation for itself. This is implemented
> as a prctl call, allowing the mitigation to be enabled or disabled at
> will for this
flag cannot be flipped while a task is in
userspace:
Reviewed-by: Mark Rutland <mark.rutl...@arm.com>
Mark.
> ---
> arch/arm64/include/asm/thread_info.h | 1 +
> arch/arm64/kernel/entry.S | 2 ++
> 2 files changed, 3 insertions(+)
>
> diff --git a/arch
out if we're doing dynamic mitigation.
>
> Think of it as a poor man's static key...
I guess in future we can magic up a more general asm static key if we
need them elsewhere.
> Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Mark Rutland <mark.rutl...@arm.co
->EL1 exceptions (and as with many other bits of the arm64
code, it's arguably misleading in the VHE case).
Perhaps ARM64_SSBD_KERNEL, which would align with the parameter name?
Not a big deal either way, and otherwise this looks good to me.
Regardless:
Reviewed-by: Mark Rutland <mark.rutl..
-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Mark Rutland <mark.rutl...@arm.com>
[...]
> +static void do_ssbd(bool state)
> +{
> + switch (psci_ops.conduit) {
> + case PSCI_CONDUIT_HVC:
> + arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_W
On Thu, May 24, 2018 at 12:00:58PM +0100, Mark Rutland wrote:
> On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
> > In order for the kernel to protect itself, let's call the SSBD mitigation
> > implemented by the higher exception level (either hypervisor or firmwa
I guess this may fix the issue I noted with the prior patch,
assuming we only set arm64_ssbd_callback_required for a CPU when the FW
supports the mitigation.
If so, if you fold this together with the prior patch:
Reviewed-by: Mark Rutland <mark.rutl...@arm.com>
Thanks,
Mark.
> ---
> ar
On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
> In order for the kernel to protect itself, let's call the SSBD mitigation
> implemented by the higher exception level (either hypervisor or firmware)
> on each transition between userspace and kernel.
>
> We must take the PSCI
On Wed, May 23, 2018 at 10:23:20AM +0100, Julien Grall wrote:
> Hi Marc,
>
> On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ec2ee720e33e..f33e6aed3037 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++
block.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Christoffer Dall <christoffer.d...@arm.com>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/include/asm/kvm_asm.h | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
Sinc
is enabled. Otherwise, it is
hidden.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrish...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
---
arch/arm64/include/asm/pointer_auth.h |
Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Andrew Jones <drjo...@redhat.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Ra
the LR value, and not the
FP.
This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari..
Now that all the necessary bits are in place for userspace, add the
necessary Kconfig logic to allow this to be enabled.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
---
arch/
is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrish
out address authentication, so we only
need to check APA and API. It is assumed that if all CPUs support an IMP
DEF algorithm, the same algorithm is used across all CPUs.
Note that when we implement KVM support, we will also need to ensure
that CPUs have uniform support for GPA and GPI.
Signed-off-by: M
instructions, triggering a trap to EL2,
resulting in noise from kvm_handle_unknown_ec(). So let's write up a
handler for the PAC trap, which silently injects an UNDEF into the
guest, as if the feature were really missing.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Reviewed-by: Andrew Jones
support for KVM guests, since KVM manages HCR_EL2
itself when running VMs.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Acked-by: Christoffer Dall <christoffer.d...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: Will
ESR_ELx.EC code used when the new instructions are affected by
configurable traps
This patch adds the relevant definitions to and
for these, to be used by subsequent patches.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc:
-mark.rutl...@arm.com
[5] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git
arm64/pointer-auth
[6] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git
Mark Rutland (10):
arm64: add pointer authentication register bits
arm64/kvm: consistently handle host HCR_EL2
more flags for the host, so
let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.
We now use mov_q to generate the HCR_EL2 value, as we do when
configuring other registers in head.S.
Signed-off-by: Mark Rutland
On Wed, Apr 25, 2018 at 12:23:32PM +0100, Catalin Marinas wrote:
> Hi Mark,
>
> On Tue, Apr 17, 2018 at 07:37:31PM +0100, Mark Rutland wrote:
> > diff --git a/arch/arm64/include/asm/mmu_context.h
> > b/arch/arm64/include/asm/mmu_context.h
> > index 39ec0b
On Fri, Apr 27, 2018 at 11:51:39AM +0200, Christoffer Dall wrote:
> On Tue, Apr 17, 2018 at 07:37:26PM +0100, Mark Rutland wrote:
> > In KVM we define the configuration of HCR_EL2 for a VHE HOST in
> > HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
>
> nit
this is
the case, and given this is a slow path it's better to always perform
the masking.
Found by smatch.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Christoffer Dall <cd...@kernel.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
virt/kvm/arm
.
Mark Rutland (3):
arm64: fix possible spectre-v1 in ptrace_hbp_get_event()
KVM: arm/arm64: vgic: fix possible spectre-v1 in vgic_get_irq()
KVM: arm/arm64: vgic: fix possible spectre-v1 in vgic_mmio_read_apr()
arch/arm64/kernel/ptrace.c | 14 ++
virt/kvm/arm/vgic/vgic-mmio
Hi Andrey,
On Fri, Apr 20, 2018 at 04:59:35PM +0200, Andrey Konovalov wrote:
> On Fri, Apr 20, 2018 at 10:13 AM, Marc Zyngier wrote:
> >> The issue is that
> >> clang doesn't know about the "S" asm constraint. I reported this to
> >> clang [2], and hopefully this will get
On Wed, Apr 18, 2018 at 03:19:26PM +0200, Andrew Jones wrote:
> On Tue, Apr 17, 2018 at 07:37:27PM +0100, Mark Rutland wrote:
> > @@ -1000,6 +1000,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r,
> > bool raz)
> >
On Tue, Apr 17, 2018 at 09:56:02PM +0200, Arnd Bergmann wrote:
> On Tue, Apr 17, 2018 at 8:37 PM, Mark Rutland <mark.rutl...@arm.com> wrote:
> > Currently, an architecture must either implement all of the mm hooks
> > itself, or use all of those provided by the asm-generic imp
Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Andrew Jones <drjo...@redhat.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Ra
the LR value, and not the
FP.
This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari..
Now that all the necessary bits are in place for userspace, add the
necessary Kconfig logic to allow this to be enabled.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
---
arch/
-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrish...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
---
arch/arm64/include/asm/pointer_auth.h | 8
arch/arm64/include/uapi/asm/ptra
is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrish
, allow each hook to be overridden individually,
by placing each under an #ifndef block. As architectures providing their
own hooks can't include this file today, this shouldn't adversely affect
any existing hooks.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Arnd Bergmann <a...@arn
out address authentication, so we only
need to check APA and API. It is assumed that if all CPUs support an IMP
DEF algorithm, the same algorithm is used across all CPUs.
Note that when we implement KVM support, we will also need to ensure
that CPUs have uniform support for GPA and GPI.
Signed-off-by: M
support for KVM guests, since KVM manages HCR_EL2
itself when running VMs.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Christoffer Dall <cd...@kernel.org>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: Will Deacon
instructions, triggering a trap to EL2,
resulting in noise from kvm_handle_unknown_ec(). So let's write up a
handler for the PAC trap, which silently injects an UNDEF into the
guest, as if the feature were really missing.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Christoffer Da
more flags for the host, so
let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.
We now use mov_q to generate the HCR_EL2 value, as we do when
configuring other registers in head.S.
Signed-off-by: Mark Rutland
ESR_ELx.EC code used when the new instructions are affected by
configurable traps
This patch adds the relevant definitions to and
for these, to be used by subsequent patches.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc:
-email-mark.rutl...@arm.com
[3] https://lkml.kernel.org/r/20171127163806.31435-1-mark.rutl...@arm.com
[4] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git
arm64/pointer-auth
[5] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git
pointer-auth
Mark Rutland (11
On Tue, Apr 10, 2018 at 05:05:40PM +0200, Christoffer Dall wrote:
> On Tue, Apr 10, 2018 at 11:51:19AM +0100, Mark Rutland wrote:
> > I think we also need to update kvm->arch.vttbr before updating
> > kvm->arch.vmid_gen, otherwise another CPU can come in, see that the
>
mance issues. A middle ground
> > is to convert the spinlock to a rwlock, and only take the read lock
> > on the fast path. If the check fails at that point, drop it and
> > acquire the write lock, rechecking the condition.
> >
> > This ensures that the above scenario do
On Tue, Feb 06, 2018 at 01:39:15PM +0100, Christoffer Dall wrote:
> On Mon, Nov 27, 2017 at 04:38:03PM +0000, Mark Rutland wrote:
> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index 525c01f48867..2205f0be3ced 100644
> > --- a/arch/arm64/kvm/h
On Mon, Apr 09, 2018 at 02:58:18PM +0200, Christoffer Dall wrote:
> Hi Mark,
>
> [Sorry for late reply]
>
> On Fri, Mar 09, 2018 at 02:28:38PM +0000, Mark Rutland wrote:
> > On Tue, Feb 06, 2018 at 01:38:47PM +0100, Christoffer Dall wrote:
> > > On Mon, Nov 27,
block.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/include/asm/kvm_asm.h | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64
On Thu, Mar 29, 2018 at 04:27:58PM +0100, Mark Rutland wrote:
> On Thu, Mar 29, 2018 at 11:00:24PM +0800, Shannon Zhao wrote:
> > From: zhaoshenglong <zhaoshengl...@huawei.com>
> >
> > Currently the VMID for some VM is allocated during VCPU entry/exit
> >
On Fri, Mar 16, 2018 at 04:52:08PM +, Nick Desaulniers wrote:
> + Sami (Google), Takahiro (Linaro)
>
> Just so I fully understand the problem enough to articulate it, we'd be
> looking for the compiler to keep the jump tables for speed (I would guess
> -fno-jump-tables would emit an if-else
On Fri, Mar 16, 2018 at 02:13:14PM +, Mark Rutland wrote:
> On Fri, Mar 16, 2018 at 02:49:00PM +0100, Andrey Konovalov wrote:
> > Hi!
>
> Hi,
>
> > I've recently tried to boot clang built kernel on real hardware
> > (Odroid C2 board) instead of using a VM. The
On Fri, Mar 16, 2018 at 02:49:00PM +0100, Andrey Konovalov wrote:
> Hi!
Hi,
> I've recently tried to boot clang built kernel on real hardware
> (Odroid C2 board) instead of using a VM. The issue that I stumbled
> upon is that arm64 kvm built with clang doesn't boot.
>
> Adding -fno-jump-tables
On Tue, Feb 06, 2018 at 01:38:47PM +0100, Christoffer Dall wrote:
> On Mon, Nov 27, 2017 at 04:38:04PM +0000, Mark Rutland wrote:
> > When pointer authentication is supported, a guest may wish to use it.
> > This patch adds the necessary KVM infrastructure for this to work, with
rly return:

	static inline void __flush_icache_all(void)
	{
		if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
			return;

		asm("ic ialluis");
		dsb(ish);
	}

... which minimizes indentation, and the diffstat.
The rest looks fine to me, so with the abo
Hi,
On Sat, Feb 24, 2018 at 06:09:53AM -0600, Shanker Donthineni wrote:
> +config ARM64_SKIP_CACHE_POU
> + bool "Enable support to skip cache POU operations"
Nit: s/POU/PoU/ in text
> + default y
> + help
> + Explicit point of unification cache operations can be eliminated
> +
On Thu, Feb 22, 2018 at 04:28:03PM +, Robin Murphy wrote:
> [Apologies to keep elbowing in, and if I'm being thick here...]
>
> On 22/02/18 15:22, Mark Rutland wrote:
> > On Thu, Feb 22, 2018 at 08:51:30AM -0600, Shanker Donthineni wrote:
> > > +#define CTR
On Thu, Feb 22, 2018 at 08:51:30AM -0600, Shanker Donthineni wrote:
> +#define CTR_B31_SHIFT 31
Since this is just a RES1 bit, I think we don't need a mnemonic for it,
but I'll defer to Will and Catalin on that.
> ENTRY(invalidate_icache_range)
> +#ifdef
On Wed, Feb 21, 2018 at 04:51:40PM +, Robin Murphy wrote:
> On 21/02/18 16:14, Shanker Donthineni wrote:
> [...]
> > > > @@ -1100,6 +1114,20 @@ static int cpu_copy_el2regs(void *__unused)
> > > > .enable = cpu_clear_disr,
> > > > },
> > > > #endif /*
On Wed, Feb 21, 2018 at 07:49:06AM -0600, Shanker Donthineni wrote:
> The DCache clean & ICache invalidation requirements for instructions
> to be data coherent are discoverable through new fields in CTR_EL0.
> The following two control bits DIC and IDC were defined for this
> purpose. No need to
-existence.
As noted in D7.2.67, when no LORegions are implemented, LoadLOAcquire
and StoreLORelease must behave as LoadAcquire and StoreRelease
respectively. We can ensure this by clearing LORC_EL1.EN when a CPU's
EL2 is first initialized, as the host kernel will not modify this.
Signed-off-by: Mark
On Tue, Feb 13, 2018 at 11:27:42AM +0100, Christoffer Dall wrote:
> Hi Mark,
Hi Christoffer,
> On Mon, Feb 12, 2018 at 11:14:24AM +, Mark Rutland wrote:
> > We don't currently limit guest accesses to the LOR registers, which we
> > neither virtualize nor context-switc
On Tue, Feb 06, 2018 at 01:39:06PM +0100, Christoffer Dall wrote:
> Hi Mark,
>
> On Mon, Nov 27, 2017 at 04:37:59PM +0000, Mark Rutland wrote:
> > To allow EL0 (and/or EL1) to use pointer authentication functionality,
> > we must ensure that pointer authentication inst
ented, LoadLOAcquire
and StoreLORelease must behave as LoadAcquire and StoreRelease
respectively. We can ensure this by clearing LORC_EL1.EN when a CPU's
EL2 is first initialized, as the host kernel will not modify this.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Reviewed-by: Vladim
Hi,
On Sun, Dec 10, 2017 at 08:03:43PM -0600, Shanker Donthineni wrote:
> +/**
> + * Errata workaround prior to disable MMU. Insert an ISB immediately prior
> + * to executing the MSR that will change SCTLR_ELn[M] from a value of 1 to 0.
> + */
> + .macro pre_disable_mmu_workaround
> +#ifdef
// SPDX-License-Identifier: GPL-2.0
> +// Copyright (C) 2017 Arm Ltd.
> +#ifndef __ASM_VMAP_STACK_H
> +#define __ASM_VMAP_STACK_H
> +
> +#include
> +#include
> +#include
> +#include
> +#include
I think we also need:
#include // for BUILD_BUG_ON()
#include
Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Yao
in
__tlb_switch_to_guest_vhe().
The now unused HCR_HOST_VHE_FLAGS definition is removed.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Reviewed-by: Christoffer Dall <cd...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/include/
(when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Christoffer Dall <cd...@linaro.org&
the LR value, and not the
FP.
This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari..
is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Suzuki K Poulose <suzuki.poul
HCR_EL2
itself.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Christoffer Dall <cd...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
ar
So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.
It is assumed that if all CPUs support an IMP DEF algorithm, the same
algorithm is used across all CPUs.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: C
sts and
userspace. As marking them with FTR_HIDDEN only hides them from
userspace, they are also protected with ifdeffery on
CONFIG_ARM64_POINTER_AUTHENTICATION.
Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Suzuki K Poulose <suzuki.poul...@arm.com>
Cc: Catalin Marinas <catal