[PATCH] KVM: arm64: fix ptrauth ID register masking logic

2019-05-01 Thread Kristina Martsenko
When a VCPU doesn't have pointer auth, we want to hide all four pointer
auth ID register fields from the guest, not just one of them.
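
The underlying problem is C operator precedence: ~ binds more tightly
than |, so the old expression negated only the APA mask before OR-ing in
the other three field masks, and the AND then cleared just the APA field.
A standalone illustration (field shifts as in ID_AA64ISAR1_EL1):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t val = (0x1UL << 4) | (0x1UL << 8) |	/* APA, API */
		       (0x1UL << 24) | (0x1UL << 28);	/* GPA, GPI */
	uint64_t bad = val, good = val;

	/* buggy: only the APA mask is negated, the rest is OR'd into it */
	bad &= ~(0xfUL << 4) | (0xfUL << 8) | (0xfUL << 24) | (0xfUL << 28);

	/* fixed: negate the combined mask */
	good &= ~((0xfUL << 4) | (0xfUL << 8) | (0xfUL << 24) | (0xfUL << 28));

	printf("bad  = %#llx\n", (unsigned long long)bad);	/* 0x11000100 */
	printf("good = %#llx\n", (unsigned long long)good);	/* 0 */
	return 0;
}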

Fixes: 384b40caa8af ("KVM: arm/arm64: Context-switch ptrauth registers")
Reported-by: Andrew Murray 
Fsck-up-by: Marc Zyngier 
Signed-off-by: Kristina Martsenko 
---
 arch/arm64/kvm/sys_regs.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9d02643bc601..857b226bcdde 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1088,10 +1088,10 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) {
val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
} else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) {
-   val &= ~(0xfUL << ID_AA64ISAR1_APA_SHIFT) |
-   (0xfUL << ID_AA64ISAR1_API_SHIFT) |
-   (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
-   (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+   val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_API_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPI_SHIFT));
}
 
return val;
-- 
2.11.0



Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value

2019-04-08 Thread Kristina Martsenko
On 08/04/2019 14:05, Amit Daniel Kachhap wrote:
> Hi James,
> 
> On 4/6/19 4:07 PM, James Morse wrote:
>> Hi Amit,
>>
>> On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland 
>>>
>>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>>> is a constant value. This works today, as the host HCR_EL2 value is
>>> always the same, but this will get in the way of supporting extensions
>>> that require HCR_EL2 bits to be set conditionally for the host.
>>>
>>> To allow such features to work without KVM having to explicitly handle
>>> every possible host feature combination, this patch has KVM save/restore
>>> the host HCR when switching to/from a guest HCR. The saving of the
>>> register is done once during CPU hypervisor initialisation, and the value
>>> is simply restored after the switch from the guest.
>>>
>>> To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
>>> kvm_call_hyp; this is needed for the non-VHE case.
>>>
>>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>>> to toggle the TGE bit with a RMW sequence, as we already do in
>>> __tlb_switch_to_guest_vhe().
>>>
>>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>>> and guest can now use this field in a common way.
>>
>> These HCR_EL2 flags have had me confused for quite a while.
>> I thought this was preserving the value that head.S or cpufeature.c had
>> set, and with ptrauth we couldn't know what this register should be
>> anymore, the host flags have to vary.
>>
>> Kristina's explanation of it[0] clarified things, and with a bit more
>> digging it appears we always set API/APK, even if the hardware doesn't
>> support the feature (as it's harmless). So we don't need to vary the
>> host flags...
> 
> API/APK is always set for NVHE host mode.
>>
>> My question is, what breaks if this patch isn't merged? (the MDCR change
>> is cleanup we can do because of this HCR change), is this HCR change just
>> cleanup too? If so, can we merge ptrauth without either, so we only make
>> the change when it's needed? (it will cause some changes in your patch 7,
>> but I can't see where you depend on the host flags).
> 
> Yes, you are right that this patch does not directly affect pointer
> authentication functionality, but it contains several optimizations and
> cleanups, such as:
> 
> * Removes assigning the static flags HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS
> from switch.c, so the switching functions are now more generic in nature.
> * Currently the variation in hcr_el2 flags is across modes (VHE/NVHE); any
> future conditional change to the host HCR_EL2 within those modes should not
> require code changes in switch.c.
> * The save of hcr_el2 is done at hyp init time, so it is not expensive
> switching-wise.
> 
> I am fine with posting it separately as well.

FWIW I think it makes sense to post the HCR and MDCR patches separately
from this series. That should make it clear that pointer auth does not
depend on these changes, and should make it easier to evaluate the
changes on their own.

Others' opinions are welcome as well.

>> I recall Christoffer wanting to keep the restored DAIF register value on
>> guest-exit static, to avoid extra loads/stores when we know what the
>> value would be. I think the same logic applies here.
> Yes, the saving of host registers once was suggested by Christoffer.

I'm not familiar with this, but James may be referring to
kvm_arm_vhe_guest_exit, which restores DAIF to a constant value. It
seems like originally the patch saved/restored DAIF [1], but it was
decided that a constant value was better.
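
For reference, kvm_arm_vhe_guest_exit looks roughly like the following (a
sketch from memory of arch/arm64/include/asm/kvm_host.h; details may
differ):

static inline void kvm_arm_vhe_guest_exit(void)
{
	/*
	 * Restore DAIF to a fixed, known-good host value, rather than
	 * saving and restoring whatever it happened to be around the
	 * guest run.
	 */
	local_daif_restore(DAIF_PROCCTX_NOIRQ);

	/* make sure configuration changes take effect before the host runs */
	isb();
}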

Thanks,
Kristina

[1] https://www.spinics.net/lists/arm-kernel/msg599798.html

>> You mentioned in the cover letter the series has some history to it!
>>
>>
>> Thanks,
>>
>> James
>>
>> [0] http://lore.kernel.org/r/7ec2f950-7587-5ecd-6caa-c2fd091ad...@arm.com
>>



Re: [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers

2019-04-04 Thread Kristina Martsenko
On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
> From: Mark Rutland 
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.

[...]

> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> new file mode 100644
> index 000..65f99e9
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> @@ -0,0 +1,106 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland 
> + * Amit Daniel Kachhap 
> + */
> +
> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
> +#define __ASM_KVM_PTRAUTH_ASM_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +#define __ptrauth_save_state(ctxt)						\
> +({										\
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
> +})
> +
> +#else /* __ASSEMBLY__ */
> +
> +#include 
> +
> +#ifdef   CONFIG_ARM64_PTR_AUTH
> +
> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
> +
> +/*
> + * CPU_AP*_EL1 values exceed the immediate offset range (512) of the stp
> + * instruction, so the macros below take CPU_APIAKEYLO_EL1 as a base and
> + * calculate the offset of the keys from it, avoiding an extra add
> + * instruction. These macros assume the key offsets are laid out in a
> + * specific increasing order.
> + */
> +.macro   ptrauth_save_state base, reg1, reg2
> + mrs_s   \reg1, SYS_APIAKEYLO_EL1
> + mrs_s   \reg2, SYS_APIAKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> + mrs_s   \reg1, SYS_APIBKEYLO_EL1
> + mrs_s   \reg2, SYS_APIBKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> + mrs_s   \reg1, SYS_APDAKEYLO_EL1
> + mrs_s   \reg2, SYS_APDAKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> + mrs_s   \reg1, SYS_APDBKEYLO_EL1
> + mrs_s   \reg2, SYS_APDBKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> + mrs_s   \reg1, SYS_APGAKEYLO_EL1
> + mrs_s   \reg2, SYS_APGAKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +.endm
> +
> +.macro   ptrauth_restore_state base, reg1, reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> + msr_s   SYS_APIAKEYLO_EL1, \reg1
> + msr_s   SYS_APIAKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> + msr_s   SYS_APIBKEYLO_EL1, \reg1
> + msr_s   SYS_APIBKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> + msr_s   SYS_APDAKEYLO_EL1, \reg1
> + msr_s   SYS_APDAKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> + msr_s   SYS_APDBKEYLO_EL1, \reg1
> + msr_s   SYS_APDBKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> + msr_s   SYS_APGAKEYLO_EL1, \reg1
> + msr_s   SYS_APGAKEYHI_EL1, \reg2
> +.endm
> +
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> + ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
> + and \reg1, \reg1, #(HCR_API | HCR_APK)
> + cbz \reg1, 1f
> + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> + ptrauth_restore_state   \reg1, \reg2, \reg3
> +1:

Nit: the label in assembly macros is usually a larger number (see
assembler.h or alternative.h for example). I think this is to avoid
future accidents like

cbz x0, 1f
ptrauth_switch_to_guest x1, x2, x3, x4
add x5, x5, x6
1:
...

where the code would incorrectly branch to the label inside
ptrauth_switch_to_guest, instead of the one after it.

Thanks,
Kristina

> +.endm
> +
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> + ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
> + and \reg1, \reg1, #(HCR_API | HCR_APK)
> + cbz \reg1, 2f
> + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> + ptra

Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

2019-03-26 Thread Kristina Martsenko
On 26/03/2019 04:03, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 3/26/19 1:34 AM, Kristina Martsenko wrote:
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland 
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>
>> [...]
>>
>>> +    if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>>> +    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>>> +    /* Verify that KVM startup matches the conditions for ptrauth */
>>> +    if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
>>> +    return -EINVAL;
>>> +    }
>>
>> I think this now needs to have "goto out;" instead of "return -EINVAL;",
>> since 5.1-rcX contains commit e761a927bc9a ("KVM: arm/arm64: Reset the
>> VCPU without preemption and vcpu state loaded") which changed some of
>> this code.
> ok missed the changes for this commit.

One more thing - I think the WARN_ON() here should be removed. Otherwise
if panic_on_warn is set then userspace can panic the kernel. I think
WARN_ON is only for internal kernel errors (see comment in
include/asm-generic/bug.h).
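
Putting that together with my earlier comment, the check could end up
looking something like this (just a sketch; `ret` and the `out` label come
from the surrounding kvm_reset_vcpu code after commit e761a927bc9a):

	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
		/* A userspace request KVM can't satisfy is not a kernel bug */
		if (!vcpu_has_ptrauth(vcpu)) {
			ret = -EINVAL;
			goto out;
		}
	}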

Thanks,
Kristina


Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

2019-03-25 Thread Kristina Martsenko
On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> This adds sections for KVM API extension for pointer authentication.
> A brief description of the usage of pointer authentication for KVM guests
> is added to the arm64 documentation.
> 
> Signed-off-by: Amit Daniel Kachhap 
> Cc: Mark Rutland 
> Cc: Christoffer Dall 
> Cc: Marc Zyngier 
> Cc: kvmarm@lists.cs.columbia.edu

I think it makes sense to also update the Kconfig symbol description for
CONFIG_ARM64_PTR_AUTH, since it currently only mentions userspace
support, but now the option also enables KVM guest support.

It's also worth mentioning that CONFIG_ARM64_VHE=y is required for guest
support.

Thanks,
Kristina


Re: [PATCH v7 8/10] KVM: arm64: Add capability to advertise ptrauth for guest

2019-03-25 Thread Kristina Martsenko
On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> This patch advertises the capability of pointer authentication
> when the system supports pointer authentication and VHE mode is present.
> 
> Signed-off-by: Amit Daniel Kachhap 
> Cc: Mark Rutland 
> Cc: Marc Zyngier 
> Cc: Christoffer Dall 
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/kvm/reset.c   | 4 ++++
>  include/uapi/linux/kvm.h | 1 +
>  2 files changed, 5 insertions(+)
> 
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 00f0639..a3b269e 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -92,6 +92,10 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>   case KVM_CAP_ARM_VM_IPA_SIZE:
>   r = kvm_ipa_limit;
>   break;
> + case KVM_CAP_ARM_PTRAUTH:
> + r = has_vhe() && system_supports_address_auth() &&
> + system_supports_generic_auth();
> + break;
>   default:
>   r = 0;
>   }
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 6d4ea4b..a553477 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_ARM_VM_IPA_SIZE 165
>  #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
>  #define KVM_CAP_HYPERV_CPUID 167
> +#define KVM_CAP_ARM_PTRAUTH 168

Since we now have two separate vcpu flags, then I think we also need two
capabilities here (one for address auth and one for generic auth). This
will allow us to support the features separately in the future if we
need to.
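
For example, the switch in kvm_arch_vm_ioctl_check_extension could grow
two entries along these lines (capability names illustrative, sketch
only):

	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
		r = has_vhe() && system_supports_address_auth();
		break;
	case KVM_CAP_ARM_PTRAUTH_GENERIC:
		r = has_vhe() && system_supports_generic_auth();
		break;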

Thanks,
Kristina


Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

2019-03-25 Thread Kristina Martsenko
On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> From: Mark Rutland 
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> The pointer authentication feature is only enabled when VHE is built
> into the kernel and present in the CPU implementation, so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However, the host key save is
> optimized and implemented inside the ptrauth instruction/register access
> trap handler.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both types of
> authentication to be present in a CPU.
> 
> This key switch is done from the guest enter/exit assembly as preparation
> for the upcoming in-kernel pointer authentication support. Hence, these
> key switching routines are not implemented in C code, as they may cause
> pointer authentication key signing errors in some situations.
> 
> Signed-off-by: Mark Rutland 
> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
> save host key in ptrauth exception trap]
> Signed-off-by: Amit Daniel Kachhap 
> Reviewed-by: Julien Thierry 
> Cc: Marc Zyngier 
> Cc: Christoffer Dall 
> Cc: kvmarm@lists.cs.columbia.edu

[...]

> +/* SPDX-License-Identifier: GPL-2.0
> + * arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland 
> + * Amit Daniel Kachhap 
> + */

I think the license needs to be in its own comment, like

/* SPDX-License-Identifier: GPL-2.0 */
/* arch/arm64/include/asm/kvm_ptrauth_asm.h: ...
 * ...
 */

> +
> +#ifndef __ASM_KVM_ASM_PTRAUTH_H
> +#define __ASM_KVM_ASM_PTRAUTH_H

__ASM_KVM_PTRAUTH_ASM_H ? (to match the file name)

> + if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> + test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> + /* Verify that KVM startup matches the conditions for ptrauth */
> + if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
> + return -EINVAL;
> + }

I think this now needs to have "goto out;" instead of "return -EINVAL;",
since 5.1-rcX contains commit e761a927bc9a ("KVM: arm/arm64: Reset the
VCPU without preemption and vcpu state loaded") which changed some of
this code.

> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>   vcpu_clear_wfe_traps(vcpu);
>   else
>   vcpu_set_wfe_traps(vcpu);
> +
> + kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);

This version of the series seems to have lost the arch/arm/ definition
of kvm_arm_vcpu_ptrauth_setup_lazy (previously
kvm_arm_vcpu_ptrauth_reset), so KVM no longer compiles for arch/arm/ :(

Thanks,
Kristina


Re: [PATCH v7 5/10] KVM: arm/arm64: preserve host MDCR_EL2 value

2019-03-25 Thread Kristina Martsenko
On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> Save the host MDCR_EL2 value during KVM HYP initialisation and restore
> it after every switch back from the guest. There should not be any
> change in functionality due to this.
> 
> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
> both host and guest can now use this field in a common way.
> 
> Signed-off-by: Amit Daniel Kachhap 
> Acked-by: Mark Rutland 
> Cc: Marc Zyngier 
> Cc: Mark Rutland 
> Cc: Christoffer Dall 
> Cc: kvmarm@lists.cs.columbia.edu

[...]

>  /**
> - * kvm_arm_init_debug - grab what we need for debug
> - *
> - * Currently the sole task of this function is to retrieve the initial
> - * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
> - * presumably been set-up by some knowledgeable bootcode.
> - *
> - * It is called once per-cpu during CPU hyp initialisation.
> - */
> -
> -void kvm_arm_init_debug(void)
> -{
> - __this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
> -}

The __kvm_get_mdcr_el2 function is no longer used anywhere, so can also
be removed.

Thanks,
Kristina


Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

2019-03-20 Thread Kristina Martsenko
On 20/03/2019 18:06, Julien Thierry wrote:
> 
> 
> On 20/03/2019 15:04, Kristina Martsenko wrote:
>> On 20/03/2019 13:37, Julien Thierry wrote:
>>> Hi Amit,
>>>
>>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>>> This adds sections for KVM API extension for pointer authentication.
>>>> A brief description of the usage of pointer authentication for KVM guests
>>>> is added to the arm64 documentation.
>>
>> [...]
>>
>>>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>>>> index 7de9eee..b5c66bc 100644
>>>> --- a/Documentation/virtual/kvm/api.txt
>>>> +++ b/Documentation/virtual/kvm/api.txt
>>>> @@ -2659,6 +2659,12 @@ Possible features:
>>>>  Depends on KVM_CAP_ARM_PSCI_0_2.
>>>>- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>>>  Depends on KVM_CAP_ARM_PMU_V3.
>>>> +  - KVM_ARM_VCPU_PTRAUTH_ADDRESS:
>>>> +  - KVM_ARM_VCPU_PTRAUTH_GENERIC:
>>>> +Enables Pointer authentication for the CPU.
>>>> +Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
>>>> +set, then the KVM guest allows the execution of pointer authentication
>>>> +instructions. Otherwise, KVM treats these instructions as undefined.
>>>>  
>>>
>>> Overall I feel one could easily get confused as to whether
>>> PTRAUTH_ADDRESS/GENERIC are two individual features, whether one is a
>>> superset of the other, if the names are just an alias of one another, etc...
>>>
>>> I think the doc should at least stress that *both* flags are
>>> required to enable ptrauth in a guest. However it raises the question,
>>> if we don't plan to support the features individually (because we
>>> can't), should we really expose two feature flags? It seems odd to
>>> introduce two flags that only do something if used together...
>>
>> Why can't we support the features individually? For example, if we ever
>> get a system where all CPUs support address authentication and none of
>> them support generic authentication, then we could still support address
>> authentication in the guest.
>>
>>
> 
> That's a good point, I didn't think of that.
> 
> Although, currently we don't have a way to detect that we are in such a
> configuration. So as is, both flags are required to enable either
> feature, and I feel the documentation should be clear on that aspect.

For now we only support enabling both features together, so both flags
need to be set. I agree that the documentation should be made clear on this.

In the future, if we need to, we can add "negative" cpucaps to detect
that a feature is absent on all CPUs.

> 
> Another option would be to introduce a flag that enables both for now,
> and if one day we decide to support the configuration you mentioned we
> could add "more modular" flags that allow you to control those features
> individually. While a bit cumbersome, I would find that less awkward
> than having two flags that only do something if both are present.

That would work too.

I find it more logical to have two flags since there are two features
(two ID register fields), and KVM enables two features for the guest.
The fact that KVM does not currently support enabling them separately is
a KVM implementation choice, and could change in the future.

Thanks,
Kristina


Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

2019-03-20 Thread Kristina Martsenko
On 20/03/2019 13:37, Julien Thierry wrote:
> Hi Amit,
> 
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> This adds sections for KVM API extension for pointer authentication.
>> A brief description of the usage of pointer authentication for KVM guests
>> is added to the arm64 documentation.

[...]

>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 7de9eee..b5c66bc 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2659,6 +2659,12 @@ Possible features:
>>Depends on KVM_CAP_ARM_PSCI_0_2.
>>  - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>Depends on KVM_CAP_ARM_PMU_V3.
>> +- KVM_ARM_VCPU_PTRAUTH_ADDRESS:
>> +- KVM_ARM_VCPU_PTRAUTH_GENERIC:
>> +  Enables Pointer authentication for the CPU.
>> +  Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
>> +  set, then the KVM guest allows the execution of pointer authentication
>> +  instructions. Otherwise, KVM treats these instructions as undefined.
>>  
> 
> Overall I feel one could easily get confused as to whether
> PTRAUTH_ADDRESS/GENERIC are two individual features, whether one is a
> superset of the other, if the names are just an alias of one another, etc...
> 
> I think the doc should at least stress that *both* flags are
> required to enable ptrauth in a guest. However it raises the question,
> if we don't plan to support the features individually (because we
> can't), should we really expose two feature flags? It seems odd to
> introduce two flags that only do something if used together...

Why can't we support the features individually? For example, if we ever
get a system where all CPUs support address authentication and none of
them support generic authentication, then we could still support address
authentication in the guest.

Thanks,
Kristina


Re: [PATCH v5 2/5] arm64/kvm: preserve host HCR_EL2/MDCR_EL2 value

2019-02-15 Thread Kristina Martsenko
On 14/02/2019 11:03, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 2/13/19 11:04 PM, Kristina Martsenko wrote:
>> On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
>>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>>> is a constant value. This works today, as the host HCR_EL2 value is
>>> always the same, but this will get in the way of supporting extensions
>>> that require HCR_EL2 bits to be set conditionally for the host.
>>>
>>> To allow such features to work without KVM having to explicitly handle
>>> every possible host feature combination, this patch has KVM save/restore
>>> the host HCR when switching to/from a guest HCR. The saving of the
>>> register is done once during CPU hypervisor initialisation, and the value
>>> is simply restored after the switch from the guest.
>>
>> Why is this patch needed? I couldn't find anything in this series that
>> sets HCR_EL2 conditionally for the host. It seems like the kernel still
>> always sets it to HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS.
> 
> This patch is not directly related to pointer authentication but just a
> helper to optimize save/restore. In this way save may be avoided for
> each switch and only restore is done. Patch 3 does set HCR_EL2 in VHE_RUN.

Patch 3 sets the HCR_EL2.{API,APK} bits for the *guest*, not the host.
This patch here adds saving/restoring for the *host* HCR_EL2. As far as
I can tell, the value of the host HCR_EL2 never changes.

Regarding save/restore, currently the kernel never saves the host
HCR_EL2, because it always restores HCR_EL2 to HCR_HOST_{,N}VHE_FLAGS (a
constant value!) when returning to the host. With this patch, we
effectively just save HCR_HOST_{,N}VHE_FLAGS into kvm_host_cpu_state,
and restore it from there when returning to the host.

Unless we actually change the host HCR_EL2 value to something other than
HCR_HOST_{,N}VHE_FLAGS somewhere in this series, this patch is unnecessary.
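
Concretely, on the VHE path the return to the host today is just a
constant write, something like the following (quoted from memory, from
__deactivate_traps_vhe; the NVHE path does the same with
HCR_HOST_NVHE_FLAGS):

	/* the host value is a compile-time constant, so nothing to save */
	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);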

>>
>> Looking back at v2 of the userspace pointer auth series, it seems that
>> the API/APK bits were set conditionally [1], so this patch would have
>> been needed to preserve HCR_EL2. But as of v3 of that series, the bits
>> have been set unconditionally through HCR_HOST_NVHE_FLAGS [2].
>>
>> Is there something else I've missed?
> Now HCR_EL2 is modified at switch time, and NVHE doesn't support
> ptrauth, so [2] doesn't make sense.

In case of NVHE, we do support pointer auth in the *host* userspace, so
the patch [2] is necessary. In case of NVHE we do not support pointer
auth for KVM *guests*.

Thanks,
Kristina

>> [1] https://lore.kernel.org/linux-arm-kernel/20171127163806.31435-6-mark.rutl...@arm.com/
>> [2] https://lore.kernel.org/linux-arm-kernel/20180417183735.56985-5-mark.rutl...@arm.com/



Re: [PATCH v5 5/5] arm64/kvm: control accessibility of ptrauth key registers

2019-02-13 Thread Kristina Martsenko
On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
> According to userspace settings, ptrauth key registers are conditionally
> present in the guest system register list, based on the user-specified
> flag KVM_ARM_VCPU_PTRAUTH.
> 
> Signed-off-by: Amit Daniel Kachhap 
> Cc: Mark Rutland 
> Cc: Christoffer Dall 
> Cc: Marc Zyngier 
> Cc: Kristina Martsenko 
> Cc: kvmarm@lists.cs.columbia.edu
> Cc: Ramana Radhakrishnan 
> Cc: Will Deacon 
> ---
>  Documentation/arm64/pointer-authentication.txt |  3 ++
>  arch/arm64/kvm/sys_regs.c  | 42 +++---
>  2 files changed, 34 insertions(+), 11 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 0529a7d..3be4ee1 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,3 +87,6 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
>  to be enabled. Without this flag, pointer authentication is not enabled
>  in KVM guests and attempted use of the feature will result in an UNDEFINED
>  exception being injected into the guest.
> +
> +Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will filter
> +out the authentication key registers from userspace.
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 2546a65..b46a78e 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1334,12 +1334,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>   { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>   { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>  
> - PTRAUTH_KEY(APIA),
> - PTRAUTH_KEY(APIB),
> - PTRAUTH_KEY(APDA),
> - PTRAUTH_KEY(APDB),
> - PTRAUTH_KEY(APGA),
> -
>   { SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>   { SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>   { SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> @@ -1491,6 +1485,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>   { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
>  };
>  
> +static const struct sys_reg_desc ptrauth_reg_descs[] = {
> + PTRAUTH_KEY(APIA),
> + PTRAUTH_KEY(APIB),
> + PTRAUTH_KEY(APDA),
> + PTRAUTH_KEY(APDB),
> + PTRAUTH_KEY(APGA),
> +};
> +
>  static bool trap_dbgidr(struct kvm_vcpu *vcpu,
>   struct sys_reg_params *p,
>   const struct sys_reg_desc *r)
> @@ -2093,6 +2095,8 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
>   r = find_reg(params, table, num);
>   if (!r)
>   r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
> + if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
> + r = find_reg(params, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
>  
>   if (likely(r)) {
>   perform_access(vcpu, params, r);
> @@ -2206,6 +2210,8 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>   r = find_reg_by_id(id, &params, table, num);
>   if (!r)
>   r = find_reg(¶ms, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
> + if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
> + r = find_reg(&params, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
>  
>   /* Not saved in the sys_reg array and not otherwise accessible? */
>   if (r && !(r->reg || r->get_user))
> @@ -2487,18 +2493,22 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd,
>  }
>  
>  /* Assumed ordered tables, see kvm_sys_reg_table_init. */
> -static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
> +static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind,
> + const struct sys_reg_desc *desc, unsigned int len)
>  {
>   const struct sys_reg_desc *i1, *i2, *end1, *end2;
>   unsigned int total = 0;
>   size_t num;
>   int err;
>  
> + if (desc == ptrauth_reg_descs && !kvm_arm_vcpu_ptrauth_allowed(vcpu))
> + return total;
> +
>   /* We check for duplicates here, to allow arch-specific overrides. */
>   i1 = get_target_table(vcpu->arch.target, true, &num);
>   end1 = i1 + num;
> - i2 = sys_reg_descs;
> - end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
> + i2 = desc;
> + end2 = desc + len;
>  
>   BUG_ON(i1 == end1 || i2 == end2);
>  
> @@ -2526,7 +2536,10 @@ unsigned long kvm_arm_num_sys_reg_des

Re: [PATCH v5 3/5] arm64/kvm: context-switch ptrauth register

2019-02-13 Thread Kristina Martsenko
On 31/01/2019 16:25, James Morse wrote:
> Hi Amit,
> 
> On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.

[...]

>> +void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
>> +  struct kvm_cpu_context *host_ctxt,
>> +  struct kvm_cpu_context *guest_ctxt)
>> +{
>> +if (!__ptrauth_is_enabled(vcpu))
>> +return;
>> +
> 
>> +ptrauth_keys_store((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);
> 
> We can't cast part of an array to a structure like this. What happens if the
> compiler inserts padding in struct-ptrauth_keys, or the struct randomization
> thing gets hold of it: https://lwn.net/Articles/722293/
> 
> If we want to use the helpers that take a struct-ptrauth_keys, we need
> to keep the keys in a struct-ptrauth_keys. To do this we'd need to
> provide accessors so that GET_ONE_REG() of APIAKEYLO_EL1 comes from the
> struct-ptrauth_keys, instead of the sys_reg array.

If I've understood correctly, the idea is to have a struct ptrauth_keys
in struct kvm_vcpu_arch, instead of having the keys in the
kvm_cpu_context->sys_regs array. This is to avoid having similar code in
__ptrauth_key_install/ptrauth_keys_switch and
__ptrauth_restore_key/__ptrauth_restore_state, and so that future
patches (that add pointer auth in the kernel) would only need to update
one place instead of two.

But it also means we'll have to special case pointer auth in
kvm_arm_sys_reg_set_reg/kvm_arm_sys_reg_get_reg and kvm_vcpu_arch. Is it
worth it? I'd prefer to keep the slight code duplication but avoid the
special casing.

> 
> 
> Wouldn't the host keys be available somewhere else? (they must get
> transferred to secondary CPUs somehow). Can we skip the save step when
> switching from the host?
> 
> 
>> +ptrauth_keys_switch((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
>> +}
> 

[...]

> 
>> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> index 03b36f1..301d332 100644
>> --- a/arch/arm64/kvm/hyp/switch.c
>> +++ b/arch/arm64/kvm/hyp/switch.c
>> @@ -483,6 +483,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>>  sysreg_restore_guest_state_vhe(guest_ctxt);
>>  __debug_switch_to_guest(vcpu);
>>  
>> +__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
>> +
>>  __set_guest_arch_workaround_state(vcpu);
>>  
>>  do {
>> @@ -494,6 +496,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>>  
>>  __set_host_arch_workaround_state(vcpu);
>>  
>> +__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
>> +
>>  sysreg_save_guest_state_vhe(guest_ctxt);
>>  
>>  __deactivate_traps(vcpu);
> 
> ...This makes me nervous...
> 
> __guest_enter() is a function that (might) change the keys, then we
> change them again here. We can't have any signed return address between
> these two points. I don't trust the compiler not to generate any.
> 
> ~
> 
> I had a chat with some friendly compiler folk... because there are two
> identical sequences in kvm_vcpu_run_vhe() and __kvm_vcpu_run_nvhe(), the
> compiler could move the common code to a function it then calls.
> Apparently this is called 'function outlining'.
> 
> If the compiler does this, and the guest changes the keys, I think we
> would fail the return address check.
> 
> Painting the whole thing with __no_ptrauth would solve this, but this
> code then becomes a target.
> Because the compiler can't anticipate the keys changing, we ought to
> treat them the same way we do the callee-saved registers, stack-pointer
> etc, and save/restore them in the __guest_enter() assembly code.
> 
> (we can still keep the save/restore in C, but call it from assembly so
> we know nothing new is going on the stack).

I agree that this should be called from assembly if we were building the
kernel with pointer auth. But as we are not doing that yet in this
series, can't we keep the calls in kvm_vcpu_run_vhe for now?

In general I would prefer if the keys were switched in
kvm_arch_vcpu_load/put for now, since the keys are currently only used
in userspace. Once in-kernel pointer auth support comes along, it can
move the switch into kvm_vcpu_run_vhe or __guest_enter/__guest_exit as
required.

Thanks,
Kristina


Re: [PATCH v5 4/5] arm64/kvm: add a userspace option to enable pointer authentication

2019-02-13 Thread Kristina Martsenko
On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
> This feature will allow the KVM guest to handle pointer authentication
> instructions, or to treat them as undefined if not set. It uses the
> existing vcpu API KVM_ARM_VCPU_INIT to
> supply this parameter instead of creating a new API.
> 
> A new register is not created to pass this parameter via
> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> supplied is enough to enable this feature.

[...]

> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index b200c14..b6950df 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -346,6 +346,10 @@ static inline int kvm_arm_have_ssbd(void)
>  static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu) {}
> +static inline bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
> +{
> + return false;
> +}

It seems like this is only ever called from arm64 code, so do we need an
arch/arm/ definition?

> +/**
> + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function will be used to enable/disable ptrauth in guest as
> + * configured by the KVM userspace API.
> + */
> +bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
> +{
> + return test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
> +}

I'm not sure, but should there also be something like

if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features) &&
!kvm_supports_ptrauth())
return -EINVAL;

in kvm_reset_vcpu?

Thanks,
Kristina


Re: [PATCH v5 3/5] arm64/kvm: context-switch ptrauth registers

2019-02-13 Thread Kristina Martsenko
Hi Amit,

(Please always Cc: everyone who commented on previous versions of the
series.)

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> The pointer authentication feature is only enabled when VHE is built
> into the kernel and present in the CPU implementation, so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). When the guest is scheduled on a
> physical CPU lacking the feature, these attempts will result in an UNDEF
> being taken by the guest.

[...]

>  /*
> + * Handle the guest trying to use a ptrauth instruction, or trying to
> + * access a ptrauth register.
> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> + if (has_vhe() && kvm_supports_ptrauth())
> + kvm_arm_vcpu_ptrauth_enable(vcpu);
> + else
> + kvm_inject_undefined(vcpu);
> +}
> +
> +/*
>   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> - * a NOP).
> + * a NOP), or guest EL1 access to a ptrauth register.

Doesn't guest EL1 access of ptrauth registers go through trap_ptrauth
instead?

>   */
>  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> - /*
> -  * We don't currently support ptrauth in a guest, and we mask the ID
> -  * registers to prevent well-behaved guests from trying to make use of
> -  * it.
> -  *
> -  * Inject an UNDEF, as if the feature really isn't present.
> -  */
> - kvm_inject_undefined(vcpu);
> + kvm_arm_vcpu_ptrauth_trap(vcpu);
>   return 1;
>  }
>  

[...]

> +static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
> +{
> + return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> + vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
> +}
> +
> +void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +   struct kvm_cpu_context *host_ctxt,
> +   struct kvm_cpu_context *guest_ctxt)
> +{
> + if (!__ptrauth_is_enabled(vcpu))
> + return;
> +
> + ptrauth_keys_store((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);
> + ptrauth_keys_switch((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
> +}
> +
> +void __no_ptrauth __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,

We don't call this code in the !VHE case anymore, so are the __hyp_text
annotations still needed?

> +  struct kvm_cpu_context *host_ctxt,
> +  struct kvm_cpu_context *guest_ctxt)
> +{
> + if (!__ptrauth_is_enabled(vcpu))
> + return;
> +
> + ptrauth_keys_store((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
> + ptrauth_keys_switch((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);
> +}

[...]

> @@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>   kvm_debug("SVE unsupported for guests, suppressing\n");
>  
>   val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> - } else if (id == SYS_ID_AA64ISAR1_EL1) {
> - const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> -  (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> -  (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> -  (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> - if (val & ptrauth_mask)
> - kvm_debug("ptrauth unsupported for guests, suppressing\n");
> - val &= ~ptrauth_mask;

If all CPUs support address authentication, but no CPUs support generic
authenticat

Re: [PATCH v5 2/5] arm64/kvm: preserve host HCR_EL2/MDCR_EL2 value

2019-02-13 Thread Kristina Martsenko
On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> the host HCR when switching to/from a guest HCR. The saving of the
> register is done once during CPU hypervisor initialisation, and the value
> is simply restored after the switch from the guest.

Why is this patch needed? I couldn't find anything in this series that
sets HCR_EL2 conditionally for the host. It seems like the kernel still
always sets it to HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS.

Looking back at v2 of the userspace pointer auth series, it seems that
the API/APK bits were set conditionally [1], so this patch would have
been needed to preserve HCR_EL2. But as of v3 of that series, the bits
have been set unconditionally through HCR_HOST_NVHE_FLAGS [2].

Is there something else I've missed?

Thanks,
Kristina

[1] https://lore.kernel.org/linux-arm-kernel/20171127163806.31435-6-mark.rutl...@arm.com/
[2] https://lore.kernel.org/linux-arm-kernel/20180417183735.56985-5-mark.rutl...@arm.com/


Re: [PATCH v5 1/5] arm64: Add utilities to save restore pointer authentication keys

2019-02-13 Thread Kristina Martsenko
On 31/01/2019 16:20, James Morse wrote:
> Hi Amit,
> 
> On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
>> The keys can be switched either from assembly, or from functions that do
>> not themselves have pointer authentication checks, so a GCC attribute is
>> added to enable this.
>>
>> A function ptrauth_keys_store is added which is similar to the existing
>> function ptrauth_keys_switch, but saves the key values in memory.
>> This may be useful for save/restore scenarios when CPU changes
>> privilege levels, suspend/resume etc.
> 
> 
>> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
>> index 15d4951..98441ce 100644
>> --- a/arch/arm64/include/asm/pointer_auth.h
>> +++ b/arch/arm64/include/asm/pointer_auth.h
>> @@ -11,6 +11,13 @@
>>  
>>  #ifdef CONFIG_ARM64_PTR_AUTH
>>  /*
>> + * Compile the function without pointer authentication instructions. This
>> + * allows pointer authentication to be enabled/disabled within the function
>> + * (but leaves the function unprotected by pointer authentication).
>> + */
>> +#define __no_ptrauth	__attribute__((target("sign-return-address=none")))
> 
> The documentation[0] for this says 'none' is the default. Will this only
> take effect once the kernel supports pointer-auth for the host? (Is this just
> documentation until then?)

Yes, I don't think this should be in this series, since we're not
building the kernel with pointer auth yet.

> 
> ('noptrauth' would fit with 'notrace' slightly better)

(But worse with e.g. __noreturn, __notrace_funcgraph, __init,
__always_inline, __exception. Not sure what the pattern is. Would
__noptrauth be better?)

Thanks,
Kristina

> 
> [0]
> https://gcc.gnu.org/onlinedocs/gcc/AArch64-Function-Attributes.html#AArch64-Function-Attributes
> 



Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

2018-12-10 Thread Kristina Martsenko
On 10/12/2018 20:22, Richard Henderson wrote:
> On 12/10/18 2:12 PM, Kristina Martsenko wrote:
>> The plan was to disable trapping, yes. However, after that thread there
>> was a retrospective change applied to the architecture, such that the
>> XPACLRI (and XPACD/XPACI) instructions are no longer trapped by
>> HCR_EL2.API. (The public documentation on this has not been updated
>> yet.) This means that no HINT-space instructions should trap anymore.
> 
> Ah, thanks for the update.  I'll update my QEMU patch set.
> 
>>> It seems like the header comment here, and
>> Sorry, which header comment?
> 
> Sorry, the patch commit message.

Ah ok. Still seems correct.

Kristina


Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

2018-12-10 Thread Kristina Martsenko
On 09/12/2018 14:53, Richard Henderson wrote:
> On 12/7/18 12:39 PM, Kristina Martsenko wrote:
>> From: Mark Rutland 
>>
>> In subsequent patches we're going to expose ptrauth to the host kernel
>> and userspace, but things are a bit trickier for guest kernels. For the
>> time being, let's hide ptrauth from KVM guests.
>>
>> Regardless of how well-behaved the guest kernel is, guest userspace
>> could attempt to use ptrauth instructions, triggering a trap to EL2,
>> resulting in noise from kvm_handle_unknown_ec(). So let's write up a
>> handler for the PAC trap, which silently injects an UNDEF into the
>> guest, as if the feature were really missing.
> 
> Reviewing the long thread that accompanied v5, I thought we were *not*
> going to trap PAuth instructions from the guest.
> 
> In particular, the OS distribution may legitimately be built to include
> hint-space nops.  This includes XPACLRI, which is used by the C++ exception
> unwinder and not controlled by SCTLR_EL1.EnI{A,B}.

The plan was to disable trapping, yes. However, after that thread there
was a retrospective change applied to the architecture, such that the
XPACLRI (and XPACD/XPACI) instructions are no longer trapped by
HCR_EL2.API. (The public documentation on this has not been updated
yet.) This means that no HINT-space instructions should trap anymore.
(The guest is expected to not set SCTLR_EL1.EnI{A,B} since
ID_AA64ISAR1_EL1.{APA,API} read as 0.)

> It seems like the header comment here, and
Sorry, which header comment?

>> +/*
>> + * Guest usage of a ptrauth instruction (which the guest EL1 did not turn 
>> into
>> + * a NOP).
>> + */
>> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>> +
> 
> here, need updating.

Changed it to "a trapped ptrauth instruction".

Kristina


Re: [PATCH v6 02/13] arm64: add pointer authentication register bits

2018-12-10 Thread Kristina Martsenko
On 09/12/2018 14:24, Richard Henderson wrote:
> On 12/7/18 12:39 PM, Kristina Martsenko wrote:
>>  #define SCTLR_ELx_DSSBS (1UL << 44)
>> +#define SCTLR_ELx_ENIA  (1 << 31)
> 
> 1U or 1UL lest you produce signed -0x80000000.

Thanks, this was setting all SCTLR bits above 31 as well... Now fixed.

> Otherwise,
> Reviewed-by: Richard Henderson 

Thanks for all the review!

Kristina


[PATCH v6 10/13] arm64: add prctl control for resetting ptrauth keys

2018-12-07 Thread Kristina Martsenko
Add an arm64-specific prctl to allow a thread to reinitialize its
pointer authentication keys to random values. This can be useful when
exec() is not used for starting new processes, to ensure that different
processes still have different keys.
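
For example, a process that launches workers without exec() could use the
prctl like this (a sketch; the constants come from the uapi header this
patch extends, guarded here in case the libc headers predate them):

#include <sys/prctl.h>

#ifndef PR_PAC_RESET_KEYS
#define PR_PAC_RESET_KEYS	54
#define PR_PAC_APIAKEY		(1UL << 0)
#endif

	/* reinitialize only the IA key */
	prctl(PR_PAC_RESET_KEYS, PR_PAC_APIAKEY, 0, 0, 0);

	/* an empty mask asks the kernel to reset all supported keys */
	prctl(PR_PAC_RESET_KEYS, 0, 0, 0, 0);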

Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/pointer_auth.h |  3 +++
 arch/arm64/include/asm/processor.h    |  4 +++
 arch/arm64/kernel/Makefile            |  1 +
 arch/arm64/kernel/pointer_auth.c      | 47 +++
 include/uapi/linux/prctl.h            |  8 ++
 kernel/sys.c                          |  8 ++
 6 files changed, 71 insertions(+)
 create mode 100644 arch/arm64/kernel/pointer_auth.c

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 89190d93c850..7797bc346c6b 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -59,6 +59,8 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
__ptrauth_key_install(APGA, keys->apga);
 }
 
+extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
+
 /*
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
@@ -82,6 +84,7 @@ do {								\
ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL)
 #define ptrauth_strip_insn_pac(lr) (lr)
 #define ptrauth_thread_init_user(tsk)
 #define ptrauth_thread_switch(tsk)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 6b0d4dff5012..40ccfb7605b6 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -46,6 +46,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -270,6 +271,9 @@ extern void __init minsigstksz_setup(void);
 #define SVE_SET_VL(arg)sve_set_current_vl(arg)
 #define SVE_GET_VL()   sve_get_current_vl()
 
+/* PR_PAC_RESET_KEYS prctl */
+#define PAC_RESET_KEYS(tsk, arg)   ptrauth_prctl_reset_keys(tsk, arg)
+
 /*
  * For CONFIG_GCC_PLUGIN_STACKLEAK
  *
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 4c8b13bede80..096740ab81d2 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -57,6 +57,7 @@ arm64-obj-$(CONFIG_CRASH_DUMP)+= crash_dump.o
 arm64-obj-$(CONFIG_CRASH_CORE) += crash_core.o
 arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)  += sdei.o
 arm64-obj-$(CONFIG_ARM64_SSBD) += ssbd.o
+arm64-obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
 
 obj-y  += $(arm64-obj-y) vdso/ probes/
 obj-m  += $(arm64-obj-m)
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
new file mode 100644
index ..b9f6f5f3409a
--- /dev/null
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
+{
+   struct ptrauth_keys *keys = &tsk->thread_info.keys_user;
+   unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
+ PR_PAC_APDAKEY | PR_PAC_APDBKEY;
+   unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
+
+   if (!system_supports_address_auth() && !system_supports_generic_auth())
+   return -EINVAL;
+
+   if (!arg) {
+   ptrauth_keys_init(keys);
+   ptrauth_keys_switch(keys);
+   return 0;
+   }
+
+   if (arg & ~key_mask)
+   return -EINVAL;
+
+   if (((arg & addr_key_mask) && !system_supports_address_auth()) ||
+   ((arg & PR_PAC_APGAKEY) && !system_supports_generic_auth()))
+   return -EINVAL;
+
+   if (arg & PR_PAC_APIAKEY)
+   get_random_bytes(&keys->apia, sizeof(keys->apia));
+   if (arg & PR_PAC_APIBKEY)
+   get_random_bytes(&keys->apib, sizeof(keys->apib));
+   if (arg & PR_PAC_APDAKEY)
+   get_random_bytes(&keys->apda, sizeof(keys->apda));
+   if (arg & PR_PAC_APDBKEY)
+   get_random_bytes(&keys->apdb, sizeof(keys->apdb));
+   if (arg & PR_PAC_APGAKEY)
+   get_random_bytes(&keys->apga, sizeof(keys->apga));
+
+   ptrauth_keys_switch(keys);
+
+   return 0;
+}
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index b17201edfa09..b4875a93363a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -220,4 +220,12 @@ struct prctl_mm_map {
 # define PR_SPEC_DISABLE   (1UL << 

[PATCH v6 09/13] arm64: perf: strip PAC when unwinding userspace

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

When the kernel is unwinding userspace callchains, we can't expect that
the userspace consumer of these callchains has the data necessary to
strip the PAC from the stored LR.

This patch has the kernel strip the PAC from user stackframes when the
in-kernel unwinder is used. This only affects the LR value, and not the
FP.

This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
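
For example, with 48-bit VAs and TBI0 enabled, the ptrauth_pac_mask()
below evaluates to GENMASK(54, 48), so stripping the LR is a single AND
(a sketch of the arithmetic only):

	/* VA_BITS = 48: the PAC sits in bits 54:48 of an EL0 pointer */
	mask = GENMASK(54, 48);		/* 0x007f000000000000 */
	lr   = buftail.lr & ~mask;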

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arch/arm64/include/asm/pointer_auth.h | 7 +++
 arch/arm64/kernel/perf_callchain.c    | 6 +-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 5721228836c1..89190d93c850 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -65,6 +65,12 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
  */
 #define ptrauth_pac_mask() GENMASK(54, VA_BITS)
 
+/* Only valid for EL0 TTBR0 instruction pointers */
+static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
+{
+   return ptr & ~ptrauth_pac_mask();
+}
+
 #define ptrauth_thread_init_user(tsk)  \
 do {   \
struct task_struct *__ptiu_tsk = (tsk); \
@@ -76,6 +82,7 @@ do {								\
ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_strip_insn_pac(lr) (lr)
 #define ptrauth_thread_init_user(tsk)
 #define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index bcafd7dcfe8b..94754f07f67a 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 
+#include <asm/pointer_auth.h>
 #include 
 
 struct frame_tail {
@@ -35,6 +36,7 @@ user_backtrace(struct frame_tail __user *tail,
 {
struct frame_tail buftail;
unsigned long err;
+   unsigned long lr;
 
/* Also check accessibility of one struct frame_tail beyond */
if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
@@ -47,7 +49,9 @@ user_backtrace(struct frame_tail __user *tail,
if (err)
return NULL;
 
-   perf_callchain_store(entry, buftail.lr);
+   lr = ptrauth_strip_insn_pac(buftail.lr);
+
+   perf_callchain_store(entry, lr);
 
/*
 * Frame pointers should strictly progress back up the stack
-- 
2.11.0
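(Illustration, not part of the patch: the strip above is a plain
mask-and-clear. A standalone sketch of the arithmetic, assuming
VA_BITS == 48 and TBI0 enabled, to match GENMASK(54, VA_BITS):)

#include <stdint.h>
#include <stdio.h>

#define VA_BITS		48	/* assumption for illustration */
/* Equivalent of GENMASK(54, VA_BITS): bits 54..48 inclusive. */
#define PAC_MASK	((((uint64_t)1 << (54 - VA_BITS + 1)) - 1) << VA_BITS)

int main(void)
{
	uint64_t lr = 0x0024ffffabcd1234ULL;	/* PAC bits 54:48 hold 0x24 */

	printf("mask     = %#018llx\n", (unsigned long long)PAC_MASK);
	printf("stripped = %#018llx\n",
	       (unsigned long long)(lr & ~PAC_MASK));	/* 0x0000ffffabcd1234 */
	return 0;
}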



[PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depends on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.

For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.

This patch adds a new structure with masks describing the location of
the PAC bits in userspace instruction and data pointers (i.e. those
addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
By clearing these bits from pointers (and replacing them with the value
of bit 55), userspace can acquire the PAC-less versions.

This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the address authentication feature is
enabled. Otherwise, the regset is hidden.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arch/arm64/include/asm/pointer_auth.h |  8 
 arch/arm64/include/uapi/asm/ptrace.h  |  7 +++
 arch/arm64/kernel/ptrace.c| 38 +++
 include/uapi/linux/elf.h  |  1 +
 4 files changed, 54 insertions(+)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index fc7ffe8e326f..5721228836c1 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -2,9 +2,11 @@
 #ifndef __ASM_POINTER_AUTH_H
 #define __ASM_POINTER_AUTH_H
 
+#include 
 #include 
 
 #include 
+#include 
 #include 
 
 #ifdef CONFIG_ARM64_PTR_AUTH
@@ -57,6 +59,12 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
__ptrauth_key_install(APGA, keys->apga);
 }
 
+/*
+ * The EL0 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
+ */
+#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
+
 #define ptrauth_thread_init_user(tsk)  \
 do {   \
struct task_struct *__ptiu_tsk = (tsk); \
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index a36227fdb084..c2f249bcd829 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -229,6 +229,13 @@ struct user_sve_header {
  SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags)\
: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))
 
+/* pointer authentication masks (NT_ARM_PAC_MASK) */
+
+struct user_pac_mask {
+   __u64   data_mask;
+   __u64   insn_mask;
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 1710a2d01669..6c1f63cb6c4e 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -46,6 +46,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -956,6 +957,30 @@ static int sve_set(struct task_struct *target,
 
 #endif /* CONFIG_ARM64_SVE */
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+static int pac_mask_get(struct task_struct *target,
+   const struct user_regset *regset,
+   unsigned int pos, unsigned int count,
+   void *kbuf, void __user *ubuf)
+{
+   /*
+* The PAC bits can differ across data and instruction pointers
+* depending on TCR_EL1.TBID*, which we may make use of in future, so
+* we expose separate masks.
+*/
+   unsigned long mask = ptrauth_pac_mask();
+   struct user_pac_mask uregs = {
+   .data_mask = mask,
+   .insn_mask = mask,
+   };
+
+   if (!system_supports_address_auth())
+   return -EINVAL;
+
+   return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
 enum aarch64_regset {
REGSET_GPR,
REGSET_FPR,
@@ -968,6 +993,9 @@ enum aarch64_regset {
 #ifdef CONFIG_ARM64_SVE
REGSET_SVE,
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+   REGSET_PAC_MASK,
+#endif
 };
 
 static const struct user_regset aarch64_regsets[] = {
@@ -1037,6 +1065,16 @@ static const struct user_regset aarch64_regsets[] = {
.get_size = sve_get_size,
},
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+   [REGSET_PAC_MASK] = {
+   .core_note_type = NT_ARM_PAC_MASK,
+   .n = sizeof(struct user_pac_mask) / sizeof(u64),
+   .size = sizeof(u64),
+   .align = sizeof(u64),
+   .get = pac_mask_get,
+   /* this cannot be set dynamically */
+   },
+#endif
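(Debugger-side sketch, not part of the patch: reading the masks from a
stopped tracee and stripping a pointer. The NT_ARM_PAC_MASK value is an
assumption, since the elf.h hunk is truncated above.)

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_ARM_PAC_MASK
#define NT_ARM_PAC_MASK	0x406	/* assumed regset number */
#endif

struct user_pac_mask {
	uint64_t data_mask;
	uint64_t insn_mask;
};

uint64_t strip_insn_pac(pid_t pid, uint64_t ptr)
{
	struct user_pac_mask masks;
	struct iovec iov = { .iov_base = &masks, .iov_len = sizeof(masks) };

	if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_PAC_MASK, &iov))
		return ptr;	/* regset hidden: no address auth */

	/* For EL0 pointers bit 55 is 0, so clearing the bits suffices. */
	return ptr & ~masks.insn_mask;
}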

[PATCH v6 07/13] arm64: add basic pointer authentication support

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey, APIBKey, APDAKey, APDBKey, and
APGAKey. The kernel maintains key values for each process (shared by all
threads within), which are initialised to random values at exec() time.

The ID_AA64ISAR1_EL1.{APA,API,GPA,GPI} fields are exposed to userspace,
to describe that pointer authentication instructions are available and
that the kernel is managing the keys. Two new hwcaps are added for the
same reason: PACA (for address authentication) and PACG (for generic
authentication).

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Tested-by: Adam Wallis 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Suzuki K Poulose 
Cc: Will Deacon 
---
 arch/arm64/include/asm/pointer_auth.h | 75 +++
 arch/arm64/include/asm/thread_info.h  |  4 ++
 arch/arm64/include/uapi/asm/hwcap.h   |  2 +
 arch/arm64/kernel/cpufeature.c| 13 ++
 arch/arm64/kernel/cpuinfo.c   |  2 +
 arch/arm64/kernel/process.c   |  4 ++
 6 files changed, 100 insertions(+)
 create mode 100644 arch/arm64/include/asm/pointer_auth.h

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
new file mode 100644
index ..fc7ffe8e326f
--- /dev/null
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __ASM_POINTER_AUTH_H
+#define __ASM_POINTER_AUTH_H
+
+#include 
+
+#include 
+#include 
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+/*
+ * Each key is a 128-bit quantity which is split across a pair of 64-bit
+ * registers (Lo and Hi).
+ */
+struct ptrauth_key {
+   unsigned long lo, hi;
+};
+
+/*
+ * We give each process its own keys, which are shared by all threads. The keys
+ * are inherited upon fork(), and reinitialised upon exec*().
+ */
+struct ptrauth_keys {
+   struct ptrauth_key apia;
+   struct ptrauth_key apib;
+   struct ptrauth_key apda;
+   struct ptrauth_key apdb;
+   struct ptrauth_key apga;
+};
+
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+{
+   if (system_supports_address_auth())
+   get_random_bytes(keys, sizeof(struct ptrauth_key) * 4);
+
+   if (system_supports_generic_auth())
+   get_random_bytes(&keys->apga, sizeof(struct ptrauth_key));
+}
+
+#define __ptrauth_key_install(k, v)\
+do {   \
+   struct ptrauth_key __pki_v = (v);   \
+   write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1); \
+   write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \
+} while (0)
+
+static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+   if (system_supports_address_auth()) {
+   __ptrauth_key_install(APIA, keys->apia);
+   __ptrauth_key_install(APIB, keys->apib);
+   __ptrauth_key_install(APDA, keys->apda);
+   __ptrauth_key_install(APDB, keys->apdb);
+   }
+
+   if (system_supports_generic_auth())
+   __ptrauth_key_install(APGA, keys->apga);
+}
+
+#define ptrauth_thread_init_user(tsk)  \
+do {   \
+   struct task_struct *__ptiu_tsk = (tsk); \
+   ptrauth_keys_init(&__ptiu_tsk->thread_info.keys_user);  \
+   ptrauth_keys_switch(&__ptiu_tsk->thread_info.keys_user);\
+} while (0)
+
+#define ptrauth_thread_switch(tsk) \
+   ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_thread_init_user(tsk)
+#define ptrauth_thread_switch(tsk)
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index cb2c10a8f0a8..ea9272fb52d4 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -28,6 +28,7 @@
 struct task_struct;
 
 #include 
+#include 
 #include 
 #include 
 
@@ -43,6 +44,9 @@ struct thread_info {
u64 ttbr0;  /* saved TTBR0_EL1 */
 #endif
	int		preempt_count;	/* 0 => preemptable, <0 => bug */
+#ifdef CONFIG_ARM64_PTR_AUTH
+   struct ptrauth_keys keys_user;
+#endif
 };
 
 #define thread_saved_pc(tsk)   \
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 2bcd6e4f3474..22efc70aa0a1 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -49,5 +49,7 @@
 #define HWCAP_ILRCPC   (1 << 26)
 #define HWCAP_FLAGM(1 << 27)
 #define HWCAP_SSBS (1 << 28)
+#define HWCAP_PACA (1 << 29)
+#define HWCAP_PACG (1 << 30)
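(Userspace detection sketch, not part of the patch; the HWCAP_PACG value
used to complete the truncated line above, and repeated below, is an
assumption from the hwcap numbering.)

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_PACA
#define HWCAP_PACA	(1 << 29)
#define HWCAP_PACG	(1 << 30)
#endif

int main(void)
{
	unsigned long hwcaps = getauxval(AT_HWCAP);

	printf("address auth: %s\n", (hwcaps & HWCAP_PACA) ? "yes" : "no");
	printf("generic auth: %s\n", (hwcaps & HWCAP_PACG) ? "yes" : "no");
	return 0;
}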

[PATCH v6 11/13] arm64: add ptrace regsets for ptrauth key management

2018-12-07 Thread Kristina Martsenko
Add two new ptrace regsets, which can be used to request and change the
pointer authentication keys of a thread. NT_ARM_PACA_KEYS gives access
to the instruction/data address keys, and NT_ARM_PACG_KEYS to the
generic authentication key.

The regsets are only exposed if the kernel is compiled with
CONFIG_CHECKPOINT_RESTORE=y, as the intended use case is checkpointing
and restoring processes that are using pointer authentication. Normally
applications or debuggers should not need to know the keys (and exposing
the keys is a security risk), so the regsets are not exposed by default.

Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/uapi/asm/ptrace.h | 18 +
 arch/arm64/kernel/ptrace.c   | 72 
 include/uapi/linux/elf.h |  2 +
 3 files changed, 92 insertions(+)

diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index c2f249bcd829..fafa7f6decf9 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -236,6 +236,24 @@ struct user_pac_mask {
__u64   insn_mask;
 };
 
+/* pointer authentication keys (NT_ARM_PACA_KEYS, NT_ARM_PACG_KEYS) */
+
+struct user_pac_address_keys {
+   __u64   apiakey_lo;
+   __u64   apiakey_hi;
+   __u64   apibkey_lo;
+   __u64   apibkey_hi;
+   __u64   apdakey_lo;
+   __u64   apdakey_hi;
+   __u64   apdbkey_lo;
+   __u64   apdbkey_hi;
+};
+
+struct user_pac_generic_keys {
+   __u64   apgakey_lo;
+   __u64   apgakey_hi;
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 6c1f63cb6c4e..f18f14c64d1e 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -979,6 +979,56 @@ static int pac_mask_get(struct task_struct *target,
 
return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
 }
+
+#ifdef CONFIG_CHECKPOINT_RESTORE
+static int pac_address_keys_get(struct task_struct *target,
+   const struct user_regset *regset,
+   unsigned int pos, unsigned int count,
+   void *kbuf, void __user *ubuf)
+{
+   if (!system_supports_address_auth())
+   return -EINVAL;
+
+   return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+   &target->thread_info.keys_user, 0, -1);
+}
+
+static int pac_address_keys_set(struct task_struct *target,
+   const struct user_regset *regset,
+   unsigned int pos, unsigned int count,
+   const void *kbuf, const void __user *ubuf)
+{
+   if (!system_supports_address_auth())
+   return -EINVAL;
+
+   return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+   &target->thread_info.keys_user, 0, -1);
+}
+
+static int pac_generic_keys_get(struct task_struct *target,
+   const struct user_regset *regset,
+   unsigned int pos, unsigned int count,
+   void *kbuf, void __user *ubuf)
+{
+   if (!system_supports_generic_auth())
+   return -EINVAL;
+
+   return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+   &target->thread_info.keys_user.apga, 0, -1);
+}
+
+static int pac_generic_keys_set(struct task_struct *target,
+   const struct user_regset *regset,
+   unsigned int pos, unsigned int count,
+   const void *kbuf, const void __user *ubuf)
+{
+   if (!system_supports_generic_auth())
+   return -EINVAL;
+
+   return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+   &target->thread_info.keys_user.apga, 0, -1);
+}
+#endif /* CONFIG_CHECKPOINT_RESTORE */
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 enum aarch64_regset {
@@ -995,6 +1045,10 @@ enum aarch64_regset {
 #endif
 #ifdef CONFIG_ARM64_PTR_AUTH
REGSET_PAC_MASK,
+#ifdef CONFIG_CHECKPOINT_RESTORE
+   REGSET_PACA_KEYS,
+   REGSET_PACG_KEYS,
+#endif
 #endif
 };
 
@@ -1074,6 +1128,24 @@ static const struct user_regset aarch64_regsets[] = {
.get = pac_mask_get,
/* this cannot be set dynamically */
},
+#ifdef CONFIG_CHECKPOINT_RESTORE
+   [REGSET_PACA_KEYS] = {
+   .core_note_type = NT_ARM_PACA_KEYS,
+   .n = sizeof(struct user_pac_address_keys) / sizeof(u64),
+   .size = sizeof(u64),
+   .align = sizeof(u64),
+   .get = pac_address_keys_get,
+   .set = pac_address_keys_set,
+   },
+	[REGSET_PACG_KEYS] = {
+		.core_note_type = NT_ARM_PACG_KEYS,
+		.n = sizeof(struct user_pac_generic_keys) / sizeof(u64),
+		.size = sizeof(u64),
+		.align = sizeof(u64),
+		.get = pac_generic_keys_get,
+		.set = pac_generic_keys_set,
+	},
+#endif
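(Checkpoint/restore sketch, not part of the patch: how a CRIU-like tool
might dump a tracee's address keys. The NT_ARM_PACA_KEYS value is an
assumption, as the elf.h hunk is not shown here; the PACG regset entry
completing the truncated diff above is likewise reconstructed by
symmetry with the PACA entry.)

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_ARM_PACA_KEYS
#define NT_ARM_PACA_KEYS	0x407	/* assumed regset number */
#endif

struct user_pac_address_keys {
	uint64_t apiakey_lo, apiakey_hi;
	uint64_t apibkey_lo, apibkey_hi;
	uint64_t apdakey_lo, apdakey_hi;
	uint64_t apdbkey_lo, apdbkey_hi;
};

long dump_address_keys(pid_t pid, struct user_pac_address_keys *keys)
{
	struct iovec iov = { .iov_base = keys, .iov_len = sizeof(*keys) };

	/* Fails with EINVAL if address auth is absent or the kernel was
	 * built without CONFIG_CHECKPOINT_RESTORE. */
	return ptrace(PTRACE_GETREGSET, pid, NT_ARM_PACA_KEYS, &iov);
}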

[PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

In subsequent patches we're going to expose ptrauth to the host kernel
and userspace, but things are a bit trickier for guest kernels. For the
time being, let's hide ptrauth from KVM guests.

Regardless of how well-behaved the guest kernel is, guest userspace
could attempt to use ptrauth instructions, triggering a trap to EL2,
resulting in noise from kvm_handle_unknown_ec(). So let's write up a
handler for the PAC trap, which silently injects an UNDEF into the
guest, as if the feature were really missing.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Reviewed-by: Andrew Jones 
Reviewed-by: Christoffer Dall 
Cc: Marc Zyngier 
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/kvm/handle_exit.c | 18 ++
 arch/arm64/kvm/sys_regs.c|  8 
 2 files changed, 26 insertions(+)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 35a81bebd02b..ab35929dcb3c 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -173,6 +173,23 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
return 1;
 }
 
+/*
+ * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
+ * a NOP).
+ */
+static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+   /*
+* We don't currently support ptrauth in a guest, and we mask the ID
+* registers to prevent well-behaved guests from trying to make use of
+* it.
+*
+* Inject an UNDEF, as if the feature really isn't present.
+*/
+   kvm_inject_undefined(vcpu);
+   return 1;
+}
+
 static exit_handle_fn arm_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX]  = kvm_handle_unknown_ec,
[ESR_ELx_EC_WFx]= kvm_handle_wfx,
@@ -195,6 +212,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64]  = kvm_handle_guest_debug,
[ESR_ELx_EC_FP_ASIMD]   = handle_no_fpsimd,
+   [ESR_ELx_EC_PAC]= kvm_handle_ptrauth,
 };
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 22fbbdbece3c..1ca592d38c3c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1040,6 +1040,14 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
kvm_debug("SVE unsupported for guests, suppressing\n");
 
val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+   } else if (id == SYS_ID_AA64ISAR1_EL1) {
+   const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_API_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+   if (val & ptrauth_mask)
+			kvm_debug("ptrauth unsupported for guests, suppressing\n");
+   val &= ~ptrauth_mask;
} else if (id == SYS_ID_AA64MMFR1_EL1) {
if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
			kvm_debug("LORegions unsupported for guests, suppressing\n");
-- 
2.11.0
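(For reference: the four 4-bit fields above fold into the single
constant 0xff000ff0. A standalone check, using the shift values from the
sysreg.h patch in this series:)

#include <assert.h>
#include <stdio.h>

#define ID_AA64ISAR1_GPI_SHIFT	28
#define ID_AA64ISAR1_GPA_SHIFT	24
#define ID_AA64ISAR1_API_SHIFT	8
#define ID_AA64ISAR1_APA_SHIFT	4

int main(void)
{
	const unsigned long ptrauth_mask =
		(0xfUL << ID_AA64ISAR1_APA_SHIFT) |
		(0xfUL << ID_AA64ISAR1_API_SHIFT) |
		(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
		(0xfUL << ID_AA64ISAR1_GPI_SHIFT);

	assert(ptrauth_mask == 0xff000ff0UL);
	printf("ptrauth_mask = %#lx\n", ptrauth_mask);
	return 0;
}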



[PATCH v6 12/13] arm64: enable pointer authentication

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

Now that all the necessary bits are in place for userspace, add the
necessary Kconfig logic to allow this to be enabled.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Will Deacon 
---
 arch/arm64/Kconfig | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ea2ab0330e3a..5279a8646fc6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1188,6 +1188,29 @@ config ARM64_CNP
 
 endmenu
 
+menu "ARMv8.3 architectural features"
+
+config ARM64_PTR_AUTH
+   bool "Enable support for pointer authentication"
+   default y
+   help
+ Pointer authentication (part of the ARMv8.3 Extensions) provides
+ instructions for signing and authenticating pointers against secret
+ keys, which can be used to mitigate Return Oriented Programming (ROP)
+ and other attacks.
+
+ This option enables these instructions at EL0 (i.e. for userspace).
+
+ Choosing this option will cause the kernel to initialise secret keys
+ for each process at exec() time, with these keys being
+ context-switched along with the process.
+
+ The feature is detected at runtime. If the feature is not present in
+ hardware it will not be advertised to userspace nor will it be
+ enabled.
+
+endmenu
+
 config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
-- 
2.11.0



[PATCH v6 13/13] arm64: docs: document pointer authentication

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Reviewed-by: Ramana Radhakrishnan 
Cc: Andrew Jones 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 Documentation/arm64/booting.txt|  8 +++
 Documentation/arm64/cpu-feature-registers.txt  |  8 +++
 Documentation/arm64/elf_hwcaps.txt | 12 
 Documentation/arm64/pointer-authentication.txt | 93 ++
 4 files changed, 121 insertions(+)
 create mode 100644 Documentation/arm64/pointer-authentication.txt

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 8d0df62c3fe0..8df9f4658d6f 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions 
must be met:
 ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
   - The DT or ACPI tables must describe a GICv2 interrupt controller.
 
+  For CPUs with pointer authentication functionality:
+  - If EL3 is present:
+SCR_EL3.APK (bit 16) must be initialised to 0b1
+SCR_EL3.API (bit 17) must be initialised to 0b1
+  - If the kernel is entered at EL1:
+HCR_EL2.APK (bit 40) must be initialised to 0b1
+HCR_EL2.API (bit 41) must be initialised to 0b1
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs.  All CPUs must
 enter the kernel in the same exception level.
diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
index 7964f03846b1..d4b4dd1fe786 100644
--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -184,12 +184,20 @@ infrastructure:
  x--x
  | Name |  bits   | visible |
  |--|
+ | GPI  | [31-28] |y|
+ |--|
+ | GPA  | [27-24] |y|
+ |--|
  | LRCPC| [23-20] |y|
  |--|
  | FCMA | [19-16] |y|
  |--|
  | JSCVT| [15-12] |y|
  |--|
+ | API  | [11-8]  |y|
+ |--|
+ | APA  | [7-4]   |y|
+ |--|
  | DPB  | [3-0]   |y|
  x--x
 
diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index ea819ae024dd..13d6691b37be 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -182,3 +182,15 @@ HWCAP_FLAGM
 HWCAP_SSBS
 
 Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
+
+HWCAP_PACA
+
+Functionality implied by ID_AA64ISAR1_EL1.APA == 0b0001 or
+ID_AA64ISAR1_EL1.API == 0b0001, as described by
+Documentation/arm64/pointer-authentication.txt.
+
+HWCAP_PACG
+
+Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
+ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
+Documentation/arm64/pointer-authentication.txt.
diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
new file mode 100644
index ..5baca42ba146
--- /dev/null
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -0,0 +1,93 @@
+Pointer authentication in AArch64 Linux
+===
+
+Author: Mark Rutland 
+Date: 2017-07-19
+
+This document briefly describes the provision of pointer authentication
+functionality in AArch64 Linux.
+
+
+Architecture overview
+-
+
+The ARMv8.3 Pointer Authentication extension adds primitives that can be
+used to mitigate certain classes of attack where an attacker can corrupt
+the contents of some memory (e.g. the stack).
+
+The extension uses a Pointer Authentication Code (PAC) to determine
+whether pointers have been modified unexpectedly. A PAC is derived from
+a pointer, another value (such as the stack pointer), and a secret key
+held in system registers.
+
+The extension adds instructions to insert a valid PAC into a pointer,
+and to verify/remove the PAC from a pointer. The PAC occupies a number
+of high-order bits of the pointer, which varies dependent on the
+configured virtual address size and whether pointer tagging is in use.

[PATCH v6 06/13] arm64/cpufeature: detect pointer authentication

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.

From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:

* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algorithm

This patch checks for both address and generic authentication,
separately. It is assumed that if all CPUs support an IMP DEF algorithm,
the same algorithm is used across all CPUs.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Suzuki K Poulose 
Cc: Will Deacon 
---
 arch/arm64/include/asm/cpucaps.h|  8 +++-
 arch/arm64/include/asm/cpufeature.h | 12 +
 arch/arm64/kernel/cpufeature.c  | 90 +
 3 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 6e2d254c09eb..62fc48604263 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,7 +54,13 @@
 #define ARM64_HAS_CRC3233
 #define ARM64_SSBS 34
 #define ARM64_WORKAROUND_1188873   35
+#define ARM64_HAS_ADDRESS_AUTH_ARCH36
+#define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 37
+#define ARM64_HAS_ADDRESS_AUTH 38
+#define ARM64_HAS_GENERIC_AUTH_ARCH39
+#define ARM64_HAS_GENERIC_AUTH_IMP_DEF 40
+#define ARM64_HAS_GENERIC_AUTH 41
 
-#define ARM64_NCAPS36
+#define ARM64_NCAPS42
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7e2ec64aa414..1c8393ffabff 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -514,6 +514,18 @@ static inline bool system_supports_cnp(void)
cpus_have_const_cap(ARM64_HAS_CNP);
 }
 
+static inline bool system_supports_address_auth(void)
+{
+   return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+   cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+}
+
+static inline bool system_supports_generic_auth(void)
+{
+   return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+   cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
+}
+
 #define ARM64_SSBD_UNKNOWN -1
 #define ARM64_SSBD_FORCE_DISABLE   0
 #define ARM64_SSBD_KERNEL  1
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index aec5ecb85737..f8e3c3568a79 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -141,9 +141,17 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
ARM64_FTR_END,
 };
@@ -1145,6 +1153,36 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 }
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+int __unused)
+{
+   u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+   bool api, apa;
+
+   apa = cpuid_feature_extract_unsigned_field(isar1,
+   ID_AA64ISAR1_APA_SHIFT) > 0;
+   api = cpuid_feature_extract_unsigned_field(isar1,
+   ID_AA64ISAR1_API_SHIFT) > 0;
+
+   return apa || api;
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+int __unused)
+{
+	u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+	bool gpa, gpi;
+
+	gpa = cpuid_feature_extract_unsigned_field(isar1,
+					ID_AA64ISAR1_GPA_SHIFT) > 0;
+	gpi = cpuid_feature_extract_unsigned_field(isar1,
+					ID_AA64ISAR1_GPI_SHIFT) > 0;
+
+	return gpa || gpi;
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
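(Sketch, not part of the patch: the body of has_generic_auth() above is
reconstructed by symmetry with has_address_auth(), as the email is
truncated here. Below, a simplified model of the 4-bit field extraction
both helpers rely on; the real cpuid_feature_extract_unsigned_field()
also handles other widths.)

#include <assert.h>

static unsigned int id_field(unsigned long reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;	/* 4-bit unsigned ID register field */
}

int main(void)
{
	unsigned long isar1 = 0x110;	/* made-up value: APA = 1, API = 1 */

	assert(id_field(isar1, 4) == 1);	/* ID_AA64ISAR1_APA_SHIFT */
	assert(id_field(isar1, 8) == 1);	/* ID_AA64ISAR1_API_SHIFT */
	return 0;
}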

[PATCH v6 03/13] arm64/kvm: consistently handle host HCR_EL2 flags

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

In KVM we define the configuration of HCR_EL2 for a VHE HOST in
HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
non-VHE host flags, and open-code HCR_RW. Further, in head.S we
open-code the flags for VHE and non-VHE configurations.

In future, we're going to want to configure more flags for the host, so
let's add an HCR_HOST_NVHE_FLAGS definition, and consistently use both
HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.

We now use mov_q to generate the HCR_EL2 value, as we do when
configuring other registers in head.S.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Reviewed-by: Christoffer Dall 
Cc: Catalin Marinas 
Cc: Marc Zyngier 
Cc: Will Deacon 
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h | 1 +
 arch/arm64/kernel/head.S | 5 ++---
 arch/arm64/kvm/hyp/switch.c  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6f602af5263c..c8825c5a8dd0 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -87,6 +87,7 @@
 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
 HCR_FMO | HCR_IMO)
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW)
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 /* TCR_EL2 Registers bits */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4471f570a295..b207a2ce4bc6 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -496,10 +496,9 @@ ENTRY(el2_setup)
 #endif
 
/* Hyp configuration. */
-   mov x0, #HCR_RW // 64-bit EL1
+   mov_q   x0, HCR_HOST_NVHE_FLAGS
cbz x2, set_hcr
-   orr x0, x0, #HCR_TGE// Enable Host Extensions
-   orr x0, x0, #HCR_E2H
+   mov_q   x0, HCR_HOST_VHE_FLAGS
 set_hcr:
msr hcr_el2, x0
isb
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 7cc175c88a37..f6e02cc4d856 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -157,7 +157,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
write_sysreg(mdcr_el2, mdcr_el2);
-   write_sysreg(HCR_RW, hcr_el2);
+   write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
-- 
2.11.0



[PATCH v6 01/13] arm64: add comments about EC exception levels

2018-12-07 Thread Kristina Martsenko
To make it clear which exceptions can't be taken to EL1 or EL2, add
comments next to the ESR_ELx_EC_* macro definitions.

Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/esr.h | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 676de2ec1762..23602a0083ad 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -29,23 +29,23 @@
 #define ESR_ELx_EC_CP14_MR (0x05)
 #define ESR_ELx_EC_CP14_LS (0x06)
 #define ESR_ELx_EC_FP_ASIMD(0x07)
-#define ESR_ELx_EC_CP10_ID (0x08)
+#define ESR_ELx_EC_CP10_ID (0x08)  /* EL2 only */
 /* Unallocated EC: 0x09 - 0x0B */
 #define ESR_ELx_EC_CP14_64 (0x0C)
 /* Unallocated EC: 0x0d */
 #define ESR_ELx_EC_ILL (0x0E)
 /* Unallocated EC: 0x0F - 0x10 */
 #define ESR_ELx_EC_SVC32   (0x11)
-#define ESR_ELx_EC_HVC32   (0x12)
-#define ESR_ELx_EC_SMC32   (0x13)
+#define ESR_ELx_EC_HVC32   (0x12)  /* EL2 only */
+#define ESR_ELx_EC_SMC32   (0x13)  /* EL2 and above */
 /* Unallocated EC: 0x14 */
 #define ESR_ELx_EC_SVC64   (0x15)
-#define ESR_ELx_EC_HVC64   (0x16)
-#define ESR_ELx_EC_SMC64   (0x17)
+#define ESR_ELx_EC_HVC64   (0x16)  /* EL2 and above */
+#define ESR_ELx_EC_SMC64   (0x17)  /* EL2 and above */
 #define ESR_ELx_EC_SYS64   (0x18)
 #define ESR_ELx_EC_SVE (0x19)
 /* Unallocated EC: 0x1A - 0x1E */
-#define ESR_ELx_EC_IMP_DEF (0x1f)
+#define ESR_ELx_EC_IMP_DEF (0x1f)  /* EL3 only */
 #define ESR_ELx_EC_IABT_LOW(0x20)
 #define ESR_ELx_EC_IABT_CUR(0x21)
 #define ESR_ELx_EC_PC_ALIGN(0x22)
@@ -68,7 +68,7 @@
 /* Unallocated EC: 0x36 - 0x37 */
 #define ESR_ELx_EC_BKPT32  (0x38)
 /* Unallocated EC: 0x39 */
-#define ESR_ELx_EC_VECTOR32(0x3A)
+#define ESR_ELx_EC_VECTOR32(0x3A)  /* EL2 only */
 /* Unallocted EC: 0x3B */
 #define ESR_ELx_EC_BRK64   (0x3C)
 /* Unallocated EC: 0x3D - 0x3F */
-- 
2.11.0



[PATCH v6 05/13] arm64: Don't trap host pointer auth use to EL2

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2.

This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access keys and permit EL0 use of instructions.
For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't
bother setting them.

This does not enable support for KVM guests, since KVM manages HCR_EL2
itself when running VMs.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Acked-by: Christoffer Dall 
Cc: Catalin Marinas 
Cc: Marc Zyngier 
Cc: Will Deacon 
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index c8825c5a8dd0..f9123fe8fcf3 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -24,6 +24,8 @@
 
 /* Hyp Configuration Register (HCR) bits */
 #define HCR_FWB(UL(1) << 46)
+#define HCR_API(UL(1) << 41)
+#define HCR_APK(UL(1) << 40)
 #define HCR_TEA(UL(1) << 37)
 #define HCR_TERR   (UL(1) << 36)
 #define HCR_TLOR   (UL(1) << 35)
@@ -87,7 +89,7 @@
 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
 HCR_FMO | HCR_IMO)
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
-#define HCR_HOST_NVHE_FLAGS (HCR_RW)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK)
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 /* TCR_EL2 Registers bits */
-- 
2.11.0



[PATCH v6 02/13] arm64: add pointer authentication register bits

2018-12-07 Thread Kristina Martsenko
From: Mark Rutland 

The ARMv8.3 pointer authentication extension adds:

* New fields in ID_AA64ISAR1 to report the presence of pointer
  authentication functionality.

* New control bits in SCTLR_ELx to enable this functionality.

* New system registers to hold the keys necessary for this
  functionality.

* A new ESR_ELx.EC code used when the new instructions are affected by
  configurable traps

This patch adds the relevant definitions to  and
 for these, to be used by subsequent patches.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Marc Zyngier 
Cc: Suzuki K Poulose 
Cc: Will Deacon 
---
 arch/arm64/include/asm/esr.h|  3 ++-
 arch/arm64/include/asm/sysreg.h | 30 ++
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 23602a0083ad..52233f00d53d 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -30,7 +30,8 @@
 #define ESR_ELx_EC_CP14_LS (0x06)
 #define ESR_ELx_EC_FP_ASIMD(0x07)
 #define ESR_ELx_EC_CP10_ID (0x08)  /* EL2 only */
-/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_PAC (0x09)  /* EL2 and above */
+/* Unallocated EC: 0x0A - 0x0B */
 #define ESR_ELx_EC_CP14_64 (0x0C)
 /* Unallocated EC: 0x0d */
 #define ESR_ELx_EC_ILL (0x0E)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 842fb9572661..cb6d7a2a2316 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -183,6 +183,19 @@
 #define SYS_TTBR1_EL1  sys_reg(3, 0, 2, 0, 1)
 #define SYS_TCR_EL1sys_reg(3, 0, 2, 0, 2)
 
+#define SYS_APIAKEYLO_EL1  sys_reg(3, 0, 2, 1, 0)
+#define SYS_APIAKEYHI_EL1  sys_reg(3, 0, 2, 1, 1)
+#define SYS_APIBKEYLO_EL1  sys_reg(3, 0, 2, 1, 2)
+#define SYS_APIBKEYHI_EL1  sys_reg(3, 0, 2, 1, 3)
+
+#define SYS_APDAKEYLO_EL1  sys_reg(3, 0, 2, 2, 0)
+#define SYS_APDAKEYHI_EL1  sys_reg(3, 0, 2, 2, 1)
+#define SYS_APDBKEYLO_EL1  sys_reg(3, 0, 2, 2, 2)
+#define SYS_APDBKEYHI_EL1  sys_reg(3, 0, 2, 2, 3)
+
+#define SYS_APGAKEYLO_EL1  sys_reg(3, 0, 2, 3, 0)
+#define SYS_APGAKEYHI_EL1  sys_reg(3, 0, 2, 3, 1)
+
 #define SYS_ICC_PMR_EL1sys_reg(3, 0, 4, 6, 0)
 
 #define SYS_AFSR0_EL1  sys_reg(3, 0, 5, 1, 0)
@@ -432,9 +445,13 @@
 
 /* Common SCTLR_ELx flags. */
 #define SCTLR_ELx_DSSBS(1UL << 44)
+#define SCTLR_ELx_ENIA (1 << 31)
+#define SCTLR_ELx_ENIB (1 << 30)
+#define SCTLR_ELx_ENDA (1 << 27)
 #define SCTLR_ELx_EE(1 << 25)
 #define SCTLR_ELx_IESB (1 << 21)
 #define SCTLR_ELx_WXN  (1 << 19)
+#define SCTLR_ELx_ENDB (1 << 13)
 #define SCTLR_ELx_I(1 << 12)
 #define SCTLR_ELx_SA   (1 << 3)
 #define SCTLR_ELx_C(1 << 2)
@@ -528,11 +545,24 @@
 #define ID_AA64ISAR0_AES_SHIFT 4
 
 /* id_aa64isar1 */
+#define ID_AA64ISAR1_GPI_SHIFT 28
+#define ID_AA64ISAR1_GPA_SHIFT 24
 #define ID_AA64ISAR1_LRCPC_SHIFT   20
 #define ID_AA64ISAR1_FCMA_SHIFT16
 #define ID_AA64ISAR1_JSCVT_SHIFT   12
+#define ID_AA64ISAR1_API_SHIFT 8
+#define ID_AA64ISAR1_APA_SHIFT 4
 #define ID_AA64ISAR1_DPB_SHIFT 0
 
+#define ID_AA64ISAR1_APA_NI0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED   0x1
+#define ID_AA64ISAR1_API_NI0x0
+#define ID_AA64ISAR1_API_IMP_DEF   0x1
+#define ID_AA64ISAR1_GPA_NI0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED   0x1
+#define ID_AA64ISAR1_GPI_NI0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF   0x1
+
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT 60
 #define ID_AA64PFR0_CSV2_SHIFT 56
-- 
2.11.0



[PATCH v6 00/13] ARMv8.3 pointer authentication userspace support

2018-12-07 Thread Kristina Martsenko
Hi,

This series adds support for the ARMv8.3 pointer authentication extension,
enabling userspace return address protection with GCC 7 and above.

(The previous version also had in-kernel pointer authentication patches
as RFC; these will be updated and sent at a later time.)

Changes since v5 [1]:
 - Exposed all 5 keys (not just APIAKey) [Will]
 - New prctl for reinitializing keys [Will]
 - New ptrace options for getting and setting keys [Will]
 - Keys now per-thread instead of per-mm [Catalin]
 - Fixed cpufeature detection for late CPUs [Suzuki]
 - Added comments for ESR_ELx_EC_* definitions [Will]
 - Rebased onto v4.20-rc5

This series is based on v4.20-rc5. The aarch64 bootwrapper [2] does the
necessary EL3 setup.

The patches are also available at:
  git://linux-arm.org/linux-km.git ptrauth-user


Extension Overview
==

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

If authentication succeeds, the code is removed, yielding the original pointer.
If authentication fails, bits are set in the pointer such that it is guaranteed
to cause a fault if used.

These instructions can make use of four keys:

* APIAKey (A.K.A. Instruction A key)
* APIBKey (A.K.A. Instruction B key)
* APDAKey (A.K.A. Data A key)
* APDBKey (A.K.A. Data B Key)

A subset of these instruction encodings have been allocated from the HINT
space, and will operate as NOPs on any ARMv8-A parts which do not feature the
extension (or if purposefully disabled by the kernel). Software using only this
subset of the instructions should function correctly on all ARMv8-A parts.

Additionally, instructions are added to authenticate small blocks of memory in
similar fashion, using APGAKey (A.K.A. Generic key).


This series
===

This series enables userspace to use any pointer authentication instructions,
using any of the 5 keys. The keys are initialised and maintained per-process
(shared by all threads).

For the time being, this series hides pointer authentication functionality from
KVM guests. Amit Kachhap is currently looking into supporting pointer
authentication in guests.

Setting uprobes on pointer authentication instructions is not yet supported, and
may cause the application to behave in unexpected ways.

Feedback and comments are welcome.

Thanks,
Kristina

[1] https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martse...@arm.com/
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git


Kristina Martsenko (3):
  arm64: add comments about EC exception levels
  arm64: add prctl control for resetting ptrauth keys
  arm64: add ptrace regsets for ptrauth key management

Mark Rutland (10):
  arm64: add pointer authentication register bits
  arm64/kvm: consistently handle host HCR_EL2 flags
  arm64/kvm: hide ptrauth from guests
  arm64: Don't trap host pointer auth use to EL2
  arm64/cpufeature: detect pointer authentication
  arm64: add basic pointer authentication support
  arm64: expose user PAC bit positions via ptrace
  arm64: perf: strip PAC when unwinding userspace
  arm64: enable pointer authentication
  arm64: docs: document pointer authentication

 Documentation/arm64/booting.txt|   8 ++
 Documentation/arm64/cpu-feature-registers.txt  |   8 ++
 Documentation/arm64/elf_hwcaps.txt |  12 +++
 Documentation/arm64/pointer-authentication.txt |  93 +
 arch/arm64/Kconfig |  23 ++
 arch/arm64/include/asm/cpucaps.h   |   8 +-
 arch/arm64/include/asm/cpufeature.h|  12 +++
 arch/arm64/include/asm/esr.h   |  17 ++--
 arch/arm64/include/asm/kvm_arm.h   |   3 +
 arch/arm64/include/asm/pointer_auth.h  |  93 +
 arch/arm64/include/asm/processor.h |   4 +
 arch/arm64/include/asm/sysreg.h|  30 +++
 arch/arm64/include/asm/thread_info.h   |   4 +
 arch/arm64/include/uapi/asm/hwcap.h|   2 +
 arch/arm64/include/uapi/asm/ptrace.h   |  25 ++
 arch/arm64/kernel/Makefile |   1 +
 arch/arm64/kernel/cpufeature.c | 103 +++
 arch/arm64/kernel/cpuinfo.c|   2 +
 arch/arm64/kernel/head.S   |   5 +-
 arch/arm64/kernel/perf_callchain.c |   6 +-
 arch/arm64/kernel/pointer_auth.c   |  47 +++
 arch/arm64/k
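(Illustration, not from the cover letter: a hand-written sketch of the
HINT-space sign/authenticate pairing that a ptrauth-aware compiler emits
around a function. hint #25 is PACIASP and hint #29 is AUTIASP; both
execute as NOPs on pre-8.3 parts, so the sequence is backwards
compatible. Not actual GCC output.)

unsigned long sign_and_auth_demo(unsigned long x)
{
	unsigned long ret;

	asm volatile(
	"	hint	#25\n"		/* paciasp: sign LR with SP and APIAKey */
	"	add	%0, %1, #1\n"	/* stand-in for the function body */
	"	hint	#29\n"		/* autiasp: authenticate and restore LR */
	: "=r" (ret) : "r" (x) : "x30");
	return ret;
}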

Re: [PATCH 00/17] ARMv8.3 pointer authentication support

2018-11-14 Thread Kristina Martsenko
On 13/11/2018 23:09, Kees Cook wrote:
> On Tue, Nov 13, 2018 at 10:17 AM, Kristina Martsenko
>  wrote:
>> When the PAC authentication fails, it doesn't actually generate an
>> exception, it just flips a bit in the high-order bits of the pointer,
>> making the pointer invalid. Then when the pointer is dereferenced (e.g.
>> as a function return address), it generates the usual type of exception
>> for an invalid address.
> 
> Ah! Okay, thanks. I missed that detail. :)
> 
> What area of memory ends up being addressable with such bit flips?
> (i.e. is the kernel making sure nothing executable ends up there?)

The address will be in between the user and kernel address ranges, so
it's guaranteed to be invalid and not address any memory.

Specifically, assuming a 48-bit VA configuration, user addresses must
have bits [55:48] clear and kernel addresses must have bits [63:48] set.
When authentication fails it will set two bits in those ranges to "10"
or "01", ensuring that the address is no longer a valid user or kernel
address.
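To make that concrete, a small standalone sketch (48-bit VAs and no tags
assumed; the error-code pattern is simplified to a single flipped PAC
bit):

#include <stdbool.h>
#include <stdint.h>

static bool user_va(uint64_t p)   { return (p >> 48) == 0; }
static bool kernel_va(uint64_t p) { return (p >> 48) == 0xffff; }

int main(void)
{
	uint64_t ptr    = 0x0000aaaabbbb0000ULL;	/* valid user VA */
	uint64_t failed = ptr | (1ULL << 54);		/* failed authentication */

	/* Neither check passes, so any dereference must fault. */
	return user_va(failed) || kernel_va(failed);
}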

>> So when a function return fails in user mode, the exception is handled
>> in __do_user_fault and a forced SIGSEGV is delivered to the task. When a
>> function return fails in kernel mode, the exception is handled in
>> __do_kernel_fault and the task is killed.
>>
>> This is different from stack protector as we don't panic the kernel, we
>> just kill the task. It would be difficult to panic as we don't have a
>> reliable way of knowing that the exception was caused by a PAC
>> authentication failure (we just have an invalid pointer with a specific
>> bit flipped). We also don't print out any PAC-related warning.
> 
> There are other "guesses" in __do_kernel_fault(), I think? Could a
> "PAC mismatch?" warning be included in the Oops if execution fails in
> the address range that PAC failures would resolve into?

Sounds reasonable to me, I'll add a warning.

Thanks,
Kristina


Re: [PATCH 00/17] ARMv8.3 pointer authentication support

2018-11-13 Thread Kristina Martsenko
(Sorry for the really late response!)

On 15/10/2018 23:42, Kees Cook wrote:
> On Fri, Oct 5, 2018 at 1:47 AM, Kristina Martsenko
>  wrote:
>> This series adds support for the ARMv8.3 pointer authentication
>> extension. The series contains Mark's original patches to enable pointer
>> authentication for userspace [1], followed by early RFC patches using
>> pointer authentication in the kernel.
> 
> It wasn't obvious to me where the PAC mismatch exceptions will be
> caught. I'm mainly curious to compare the PAC exception handling to
> the existing stack-protector panic(). Can you point me to which
> routines manage that? (Perhaps I just missed it in the series...)

When the PAC authentication fails, it doesn't actually generate an
exception, it just flips a bit in the high-order bits of the pointer,
making the pointer invalid. Then when the pointer is dereferenced (e.g.
as a function return address), it generates the usual type of exception
for an invalid address.

So when a function return fails in user mode, the exception is handled
in __do_user_fault and a forced SIGSEGV is delivered to the task. When a
function return fails in kernel mode, the exception is handled in
__do_kernel_fault and the task is killed.

This is different from stack protector as we don't panic the kernel, we
just kill the task. It would be difficult to panic as we don't have a
reliable way of knowing that the exception was caused by a PAC
authentication failure (we just have an invalid pointer with a specific
bit flipped). We also don't print out any PAC-related warning.

Thanks,
Kristina


Re: [PATCH v5 11/17] arm64: docs: document pointer authentication

2018-10-19 Thread Kristina Martsenko
On 19/10/2018 12:35, Catalin Marinas wrote:
> On Tue, Oct 16, 2018 at 05:14:39PM +0100, Kristina Martsenko wrote:
>> On 05/10/2018 10:04, Ramana Radhakrishnan wrote:
>>> On 05/10/2018 09:47, Kristina Martsenko wrote:
>>>> +Virtualization
>>>> +--
>>>> +
>>>> +Pointer authentication is not currently supported in KVM guests. KVM
>>>> +will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>>>> +the feature will result in an UNDEFINED exception being injected into
>>>> +the guest.
>>>
>>> However applications using instructions from the hint space will
>>> continue to work albeit without any protection (as they would just be
>>> nops) ?
>>
>> Mostly, yes. If the guest leaves SCTLR_EL1.EnIA unset (and
>> EnIB/EnDA/EnDB), then PAC* and AUT* instructions in the HINT space will
>> execute as NOPs. If the guest sets EnIA, then PAC*/AUT* instructions
>> will trap and KVM will inject an "Unknown reason" exception into the
>> guest (which will cause a Linux guest to send a SIGILL to the application).
> 
> I think that part is fine. If KVM (a fairly recent version with CPUID
> sanitisation) does not enable ptr auth, the CPUID should not advertise
> this feature either so the guest kernel should not enable it. For the
> above instructions in the HINT space, they will just be NOPs. If the
> guest kernel enables the feature regardless of the CPUID information, it
> deserves to get an "Unknown reason" exception.
> 
>> In the latter case we could instead pretend the instruction was a NOP
>> and not inject an exception, but trapping twice per every function would
>> probably be terrible for performance. The guest shouldn't be setting
>> EnIA anyway if ID_AA64ISAR1_EL1 reports that pointer authentication is
>> not present (because KVM has hidden it).
> 
> I don't think we should. The SCTLR_EL1 bits are RES0 unless you know
> that the feature is present via CPUID.
> 
>> The other special case is the XPACLRI instruction, which is also in the
>> HINT space. Currently it will trap and KVM will inject an exception into
>> the guest. We should probably change this to NOP instead, as that's what
>> applications will expect. Unfortunately there is no EnIA-like control to
>> make it NOP.
> 
> Very good catch. Basically if EL2 doesn't know about ptr auth (older
> distro), EL1 may or may not know but leaves SCTLR_EL1 disabled (based on
> CPUID), the default HCR_EL2 is to trap (I'm ignoring EL3 as that's like
> to have ptr auth enabled, being built for the specific HW). So a user
> app considering XPACLRI a NOP (or inoffensive) will get a SIGILL
> (injected by the guest kernel following the injection of "Unknown
> reason" exception by KVM).
> 
> Ramana, is XPACLRI commonly generated by gcc and expects it to be a NOP?
> Could we restrict it to only being used at run-time if the corresponding
> HWCAP is set? This means redefining this instruction as no longer in the
> NOP space.

I think an alternative solution is to just disable trapping of pointer
auth instructions in KVM. This will mean that the instructions will
behave the same in the guest as they do in the host. HINT-space
instructions (including XPACLRI) will behave as NOPs (or perform their
function, if enabled by the guest), and will not trap.

A side effect of disabling trapping is that keys may effectively leak
from one guest to another, since one guest may set a key and another
guest may use an instruction that uses that key. But this can be fixed
by zeroing the keys every time we enter a guest. We can additionally
trap key accesses (which is separate from instruction trapping), to have
guests fail more reliably and avoid restoring host keys on guest exit.

Things still won't work well on big.LITTLE systems with mismatched
pointer auth support between CPUs, but as Marc pointed out in the other
email, we can just disable KVM on such systems when we detect a pointer
auth mismatch.

If we want current stable kernels to support guests that use HINT-space
pointer auth instructions, we'll need to backport the above changes to
stable kernels as well.

Even if we restricted userspace to only use XPACLRI if the HWCAP is set,
current stable kernels would still not be able to handle the HINT-space
PAC/AUT instructions that GCC generates, if the guest is pointer auth
aware. None of the stable kernels have the CPUID sanitisation patches,
so the guest would enable pointer auth, which would cause the PAC/AUT
instructions to trap.

>> One option is for KVM to pretend the instruction was a NOP and return to
>> the guest. But if XPACLRI gets executed frequently, then the constant

Re: [PATCH v5 11/17] arm64: docs: document pointer authentication

2018-10-16 Thread Kristina Martsenko
On 05/10/2018 10:04, Ramana Radhakrishnan wrote:
> On 05/10/2018 09:47, Kristina Martsenko wrote:
>> From: Mark Rutland 
>>
>> Now that we've added code to support pointer authentication, add some
>> documentation so that people can figure out if/how to use it.
>>
>> Signed-off-by: Mark Rutland 
>> [kristina: update cpu-feature-registers.txt]
>> Signed-off-by: Kristina Martsenko 
>> Cc: Andrew Jones 
>> Cc: Catalin Marinas 
>> Cc: Ramana Radhakrishnan 
>> Cc: Will Deacon 
>> ---
>>   Documentation/arm64/booting.txt    |  8 +++
>>   Documentation/arm64/cpu-feature-registers.txt  |  4 ++
>>   Documentation/arm64/elf_hwcaps.txt |  5 ++
>>   Documentation/arm64/pointer-authentication.txt | 84 ++
>>   4 files changed, 101 insertions(+)
>>   create mode 100644 Documentation/arm64/pointer-authentication.txt
>>
>> diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
>> index 8d0df62c3fe0..8df9f4658d6f 100644
>> --- a/Documentation/arm64/booting.txt
>> +++ b/Documentation/arm64/booting.txt
>> @@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
>>   ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
>>     - The DT or ACPI tables must describe a GICv2 interrupt controller.
>>   +  For CPUs with pointer authentication functionality:
>> +  - If EL3 is present:
>> +    SCR_EL3.APK (bit 16) must be initialised to 0b1
>> +    SCR_EL3.API (bit 17) must be initialised to 0b1
>> +  - If the kernel is entered at EL1:
>> +    HCR_EL2.APK (bit 40) must be initialised to 0b1
>> +    HCR_EL2.API (bit 41) must be initialised to 0b1
>> +
>>   The requirements described above for CPU mode, caches, MMUs, architected
>>   timers, coherency and system registers apply to all CPUs.  All CPUs must
>>   enter the kernel in the same exception level.
>> diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
>> index 7964f03846b1..b165677ffab9 100644
>> --- a/Documentation/arm64/cpu-feature-registers.txt
>> +++ b/Documentation/arm64/cpu-feature-registers.txt
>> @@ -190,6 +190,10 @@ infrastructure:
>>    |--|
>>    | JSCVT    | [15-12] |    y    |
>>    |--|
>> + | API  | [11-8]  |    y    |
>> + |--|
>> + | APA  | [7-4]   |    y    |
>> + |--|
>>    | DPB  | [3-0]   |    y    |
>>    x--x
>> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
>> index d6aff2c5e9e2..95509a7b0ffe 100644
>> --- a/Documentation/arm64/elf_hwcaps.txt
>> +++ b/Documentation/arm64/elf_hwcaps.txt
>> @@ -178,3 +178,8 @@ HWCAP_ILRCPC
>>   HWCAP_FLAGM
>>     Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
>> +
>> +HWCAP_APIA
>> +
>> +    EL0 AddPac and Auth functionality using APIAKey_EL1 is enabled, as
>> +    described by Documentation/arm64/pointer-authentication.txt.
>> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
>> new file mode 100644
>> index ..8a9cb5713770
>> --- /dev/null
>> +++ b/Documentation/arm64/pointer-authentication.txt
>> @@ -0,0 +1,84 @@
>> +Pointer authentication in AArch64 Linux
>> +===
>> +
>> +Author: Mark Rutland 
>> +Date: 2017-07-19
>> +
>> +This document briefly describes the provision of pointer authentication
>> +functionality in AArch64 Linux.
>> +
>> +
>> +Architecture overview
>> +-
>> +
>> +The ARMv8.3 Pointer Authentication extension adds primitives that can be
>> +used to mitigate certain classes of attack where an attacker can corrupt
>> +the contents of some memory (e.g. the stack).
>> +
>> +The extension uses a Pointer Authentication Code (PAC) to determine
>> +whether pointers have been modified unexpectedly. A PAC is derived from
>> +a pointer, another value (such as the stack pointer), and a secret key
>> +held in system registers.
>> +
>> +The extension adds instructions to insert a valid PAC into a pointer,

Re: [RFC 17/17] arm64: compile the kernel with ptrauth -msign-return-address

2018-10-11 Thread Kristina Martsenko
On 05/10/2018 10:01, Ramana Radhakrishnan wrote:
> On 05/10/2018 09:47, Kristina Martsenko wrote:
>> Compile all functions with two ptrauth instructions: paciasp in the
>> prologue to sign the return address, and autiasp in the epilogue to
>> authenticate the return address. This should help protect the kernel
>> against attacks using return-oriented programming.
>>
>> CONFIG_ARM64_PTR_AUTH enables pointer auth for both userspace and the
>> kernel.
>>
>> Signed-off-by: Mark Rutland 
>> Signed-off-by: Kristina Martsenko 
>> ---
>>   arch/arm64/Makefile | 4 
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
>> index 106039d25e2f..dbcd43ea99d8 100644
>> --- a/arch/arm64/Makefile
>> +++ b/arch/arm64/Makefile
>> @@ -56,6 +56,10 @@ KBUILD_AFLAGS    += $(lseinstr) $(brokengasinst)
>>   KBUILD_CFLAGS    += $(call cc-option,-mabi=lp64)
>>   KBUILD_AFLAGS    += $(call cc-option,-mabi=lp64)
>>   +ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
>> +KBUILD_CFLAGS    += -msign-return-address=all
> 
> Glad to see this being done and being proposed for mainline.
> 
> I can see why you would prefer this though have you guys experimented at
> all with -msign-return-address=non-leaf as well ?

I've tried non-leaf and it works too. I'd be fine with switching to it,
I'm not sure which would be better for the kernel.

What kind of experiments did you have in mind? If I understand
correctly, then compared to non-leaf, "all" additionally protects leaf
functions that write to the stack. I don't know how many of those there
are in the kernel (or will be in the future). I also don't know the
additional performance impact of "all", as I don't think we have any
v8.3 hardware to test on yet. There is a minor code size impact (0.36%
on the current kernel), but I'm not sure how much that matters.
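
To make the leaf/non-leaf distinction concrete (hand-written C sketch,
not compiler output):

/* -msign-return-address=non-leaf would sign only f(), which spills LR;
 * =all also signs g(), a leaf that writes to the stack but makes no
 * call. */
int g(int x)
{
	int buf[4] = { x, 0, 0, 0 };	/* leaf, but stack-writing */
	return buf[0] + buf[3];
}

int f(int x)
{
	return g(x) + 1;		/* non-leaf: LR is spilled */
}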

> Orthogonally and just fair warning - the command lines for this are also
> being revised to provide ROP and JOP protection using BTI from v8.5-a
> during the GCC-9 timeframe but I suspect that's a different option.

Thanks. I expect it will be a separate Kconfig option to build the
kernel with BTI and pointer auth, yes.

> Reviewed-by: Ramana Radhakrishnan  

Thanks!

Kristina


[RFC 16/17] arm64: initialize and switch ptrauth kernel keys

2018-10-05 Thread Kristina Martsenko
Set up keys to use pointer auth in the kernel. Each task has its own
APIAKey, which is initialized during fork. The key is changed during
context switch and on kernel entry from EL0.

A function that changes the key cannot return, since the return address
would then be authenticated against a different key than the one that
signed it, so inline such functions.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/pointer_auth.h |  9 -
 arch/arm64/include/asm/ptrauth-asm.h  | 13 +
 arch/arm64/include/asm/thread_info.h  |  1 +
 arch/arm64/kernel/asm-offsets.c   |  1 +
 arch/arm64/kernel/entry.S |  4 
 arch/arm64/kernel/process.c   |  3 +++
 arch/arm64/kernel/smp.c   |  3 +++
 7 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 0634f06c3af2..e94ca7df8dab 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -50,12 +50,13 @@ do {
\
write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \
 } while (0)
 
-static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+static __always_inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
 {
if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
return;
 
__ptrauth_key_install(APIA, keys->apia);
+   isb();
 }
 
 static __always_inline void ptrauth_cpu_enable(void)
@@ -85,11 +86,17 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 
 #define ptrauth_task_init_user(tsk)\
ptrauth_keys_init(&(tsk)->thread_info.keys_user)
+#define ptrauth_task_init_kernel(tsk)  \
+   ptrauth_keys_init(&(tsk)->thread_info.keys_kernel)
+#define ptrauth_task_switch(tsk)   \
+   ptrauth_keys_switch(&(tsk)->thread_info.keys_kernel)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define __no_ptrauth
 #define ptrauth_strip_insn_pac(lr) (lr)
 #define ptrauth_task_init_user(tsk)
+#define ptrauth_task_init_kernel(tsk)
+#define ptrauth_task_switch(tsk)
 #define ptrauth_cpu_enable(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
diff --git a/arch/arm64/include/asm/ptrauth-asm.h b/arch/arm64/include/asm/ptrauth-asm.h
index f50bdfc4046c..3ef1cc8903d5 100644
--- a/arch/arm64/include/asm/ptrauth-asm.h
+++ b/arch/arm64/include/asm/ptrauth-asm.h
@@ -16,11 +16,24 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 alternative_else_nop_endif
.endm
 
+   .macro ptrauth_keys_install_kernel tsk, tmp
+alternative_if ARM64_HAS_ADDRESS_AUTH
+   ldr \tmp, [\tsk, #(TSK_TI_KEYS_KERNEL + PTRAUTH_KEY_APIALO)]
+   msr_s   SYS_APIAKEYLO_EL1, \tmp
+   ldr \tmp, [\tsk, #(TSK_TI_KEYS_KERNEL + PTRAUTH_KEY_APIAHI)]
+   msr_s   SYS_APIAKEYHI_EL1, \tmp
+   isb
+alternative_else_nop_endif
+   .endm
+
 #else /* CONFIG_ARM64_PTR_AUTH */
 
.macro ptrauth_keys_install_user tsk, tmp
.endm
 
+   .macro ptrauth_keys_install_kernel tsk, tmp
+   .endm
+
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_PTRAUTH_ASM_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index ea9272fb52d4..e3ec5345addc 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -46,6 +46,7 @@ struct thread_info {
	int preempt_count;  /* 0 => preemptable, <0 => bug */
 #ifdef CONFIG_ARM64_PTR_AUTH
struct ptrauth_keys keys_user;
+   struct ptrauth_keys keys_kernel;
 #endif
 };
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index b6be0dd037fd..6c61c9722b47 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -47,6 +47,7 @@ int main(void)
 #endif
 #ifdef CONFIG_ARM64_PTR_AUTH
  DEFINE(TSK_TI_KEYS_USER, offsetof(struct task_struct, thread_info.keys_user));
+  DEFINE(TSK_TI_KEYS_KERNEL,   offsetof(struct task_struct, thread_info.keys_kernel));
 #endif
   DEFINE(TSK_STACK,offsetof(struct task_struct, stack));
   BLANK();
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 1e925f6d2978..a4503da445f7 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -250,6 +250,10 @@ alternative_else_nop_endif
msr sp_el0, tsk
.endif
 
+   .if \el == 0
+   ptrauth_keys_install_kernel tsk, x20
+   .endif
+
/*
 * Registers that may be useful after this macro is invoked:
 *
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 857ae05cd04c..a866996610de 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -330,6 +330,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 */
fpsimd_flush_task_state(p);
 
+   ptrauth_task_init_kernel(p);
+
if (likely(!(p->flags & PF_KTHREAD))) {
 

[RFC 17/17] arm64: compile the kernel with ptrauth -msign-return-address

2018-10-05 Thread Kristina Martsenko
Compile all functions with two ptrauth instructions: paciasp in the
prologue to sign the return address, and autiasp in the epilogue to
authenticate the return address. This should help protect the kernel
against attacks using return-oriented programming.

CONFIG_ARM64_PTR_AUTH enables pointer auth for both userspace and the
kernel.
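
For illustration, the effect on a non-leaf function looks roughly like
the sketch below (generated instructions shown as comments; the exact
code depends on the compiler version):

extern void callee(void);

void caller(void)
{
	/* prologue:  paciasp                     ; sign LR against SP */
	/*            stp  x29, x30, [sp, #-16]!                       */
	callee();
	/* epilogue:  ldp  x29, x30, [sp], #16                         */
	/*            autiasp                     ; authenticate LR    */
	/*            ret                                              */
}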

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
---
 arch/arm64/Makefile | 4 
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 106039d25e2f..dbcd43ea99d8 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -56,6 +56,10 @@ KBUILD_AFLAGS+= $(lseinstr) $(brokengasinst)
 KBUILD_CFLAGS  += $(call cc-option,-mabi=lp64)
 KBUILD_AFLAGS  += $(call cc-option,-mabi=lp64)
 
+ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+KBUILD_CFLAGS  += -msign-return-address=all
+endif
+
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS+= -mbig-endian
 CHECKFLAGS += -D__AARCH64EB__
-- 
2.11.0



[RFC 15/17] arm64: enable ptrauth earlier

2018-10-05 Thread Kristina Martsenko
When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using pointer auth very early, so change the cpucap to
account for this.

A function that enables pointer auth cannot return: its prologue runs
before signing is enabled, so the epilogue would authenticate a return
address that was never signed. Inline such functions or compile them
without pointer auth.

Do not use the cpu_enable callback, to avoid compiling the whole
callchain down to cpu_enable without pointer auth.

Note the change in behavior: if the boot CPU has pointer auth and a late
CPU does not, we panic. Until now we would have just disabled pointer
auth in this case.

Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/cpufeature.h   |  9 +
 arch/arm64/include/asm/pointer_auth.h | 18 ++
 arch/arm64/kernel/cpufeature.c| 14 --
 arch/arm64/kernel/smp.c   |  7 ++-
 4 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1717ba1db35d..af4ca92a5fa9 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -292,6 +292,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  */
 #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
 
+/*
+ * CPU feature used early in the boot based on the boot CPU. It is safe for a
+ * late CPU to have this feature even though the boot CPU hasn't enabled it,
+ * although the feature will not be used by Linux in this case. If the boot CPU
+ * has enabled this feature already, then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_BOOT_CPU_FEATURE  \
+(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
const char *desc;
u16 capability;
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index e60f225d9fa2..0634f06c3af2 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -11,6 +11,13 @@
 
 #ifdef CONFIG_ARM64_PTR_AUTH
 /*
+ * Compile the function without pointer authentication instructions. This
+ * allows pointer authentication to be enabled/disabled within the function
+ * (but leaves the function unprotected by pointer authentication).
+ */
+#define __no_ptrauth   __attribute__((target("sign-return-address=none")))
+
+/*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit
  * registers (Lo and Hi).
  */
@@ -51,6 +58,15 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
__ptrauth_key_install(APIA, keys->apia);
 }
 
+static __always_inline void ptrauth_cpu_enable(void)
+{
+   if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+   return;
+
+   sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA);
+   isb();
+}
+
 /*
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
@@ -71,8 +87,10 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
ptrauth_keys_init(&(tsk)->thread_info.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define __no_ptrauth
 #define ptrauth_strip_insn_pac(lr) (lr)
 #define ptrauth_task_init_user(tsk)
+#define ptrauth_cpu_enable(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3157685aa56a..380ee01145e8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1040,15 +1040,10 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
 }
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-   sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 int __unused)
 {
-   u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+   u64 isar1 = read_sysreg(id_aa64isar1_el1);
bool api, apa;
 
apa = cpuid_feature_extract_unsigned_field(isar1,
@@ -1251,7 +1246,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "Address authentication (architected algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-   .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+   .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1261,7 +1256,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "Address authentication (IMP DEF algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-   .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+   .type = ARM64_CPUC

[RFC 14/17] arm64: unwind: strip PAC from kernel addresses

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

When we enable pointer authentication in the kernel, LR values saved to
the stack will have a PAC which we must strip in order to retrieve the
real return address.

Strip PACs when unwinding the stack in order to account for this.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/pointer_auth.h | 10 +++---
 arch/arm64/kernel/ptrace.c|  2 +-
 arch/arm64/kernel/stacktrace.c|  3 +++
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 5e40533f4ea2..e60f225d9fa2 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -55,12 +55,16 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
  */
-#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
+#define ptrauth_pac_mask_ttbr0()   GENMASK(54, VA_BITS)
+
+#define ptrauth_pac_mask_ttbr1()   (GENMASK(63, 56) | GENMASK(54, VA_BITS))
 
-/* Only valid for EL0 TTBR0 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-   return ptr & ~ptrauth_pac_mask();
+   if (ptr & BIT_ULL(55))
+   return ptr | ptrauth_pac_mask_ttbr1();
+   else
+   return ptr & ~ptrauth_pac_mask_ttbr0();
 }
 
 #define ptrauth_task_init_user(tsk)\
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index cb8246f8c603..bf4d6d384e4f 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -970,7 +970,7 @@ static int pac_mask_get(struct task_struct *target,
 * depending on TCR_EL1.TBID*, which we may make use of in future, so
 * we expose separate masks.
 */
-   unsigned long mask = ptrauth_pac_mask();
+   unsigned long mask = ptrauth_pac_mask_ttbr0();
struct user_pac_mask uregs = {
.data_mask = mask,
.insn_mask = mask,
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 4989f7ea1e59..44f6a64a8006 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -24,6 +24,7 @@
 #include 
 
 #include 
+#include <asm/pointer_auth.h>
 #include 
 #include 
 
@@ -56,6 +57,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
 
+   frame->pc = ptrauth_strip_insn_pac(frame->pc);
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
if (tsk->ret_stack &&
(frame->pc == (unsigned long)return_to_handler)) {
-- 
2.11.0



[PATCH v5 05/17] arm64/cpufeature: detect pointer authentication

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.

From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:

* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algorithm

For the moment we only care about address authentication, so we only
need to check APA and API. It is assumed that if all CPUs support an IMP
DEF algorithm, the same algorithm is used across all CPUs.

Note that when we implement KVM support, we will also need to ensure
that CPUs have uniform support for GPA and GPI.
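
For reference, the check reduces to extracting two 4-bit fields and
treating any non-zero value as "present". A standalone sketch (shift
values taken from the patch below; not kernel code):

#include <stdbool.h>
#include <stdint.h>

#define ID_AA64ISAR1_APA_SHIFT	4
#define ID_AA64ISAR1_API_SHIFT	8

/* Mirrors has_address_auth(): either algorithm being present is enough. */
static bool isar1_has_address_auth(uint64_t isar1)
{
	bool apa = ((isar1 >> ID_AA64ISAR1_APA_SHIFT) & 0xf) > 0;
	bool api = ((isar1 >> ID_AA64ISAR1_API_SHIFT) & 0xf) > 0;

	return apa || api;
}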

Signed-off-by: Mark Rutland 
[kristina: update cpucap numbers]
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Suzuki K Poulose 
Cc: Will Deacon 
---
 arch/arm64/include/asm/cpucaps.h |  5 -
 arch/arm64/kernel/cpufeature.c   | 47 
 2 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index ae1f70450fb2..276d4c95aa3c 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -51,7 +51,10 @@
 #define ARM64_SSBD 30
 #define ARM64_MISMATCHED_CACHE_TYPE31
 #define ARM64_HAS_STAGE2_FWB   32
+#define ARM64_HAS_ADDRESS_AUTH_ARCH33
+#define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 34
+#define ARM64_HAS_ADDRESS_AUTH 35
 
-#define ARM64_NCAPS33
+#define ARM64_NCAPS36
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e238b7932096..0dd171c7d71e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -142,6 +142,10 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
ARM64_FTR_END,
 };
@@ -1035,6 +1039,22 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
WARN_ON(val & (7 << 27 | 7 << 21));
 }
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+int __unused)
+{
+   u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+   bool api, apa;
+
+   apa = cpuid_feature_extract_unsigned_field(isar1,
+   ID_AA64ISAR1_APA_SHIFT) > 0;
+   api = cpuid_feature_extract_unsigned_field(isar1,
+   ID_AA64ISAR1_API_SHIFT) > 0;
+
+   return apa || api;
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
@@ -1222,6 +1242,33 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_enable_hw_dbm,
},
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+   {
+   .desc = "Address authentication (architected algorithm)",
+   .capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
+   .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+   .sys_reg = SYS_ID_AA64ISAR1_EL1,
+   .sign = FTR_UNSIGNED,
+   .field_pos = ID_AA64ISAR1_APA_SHIFT,
+   .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
+   .matches = has_cpuid_feature,
+   },
+   {
+   .desc = "Address authentication (IMP DEF algorithm)",
+   .capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
+   .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+   .sys_reg = SYS_ID_AA64ISAR1_EL1,
+   .sign = FTR_UNSIGNED,
+   .field_pos = ID_AA64ISAR1_API_SHIFT,
+   .min_field_value = ID_AA64ISAR1_API_IMP_DEF,
+   .matches = has_cpuid_feature,
+   },
+   {
+   .capability = ARM64_HAS_ADDRESS_AUTH,
+   .type = ARM64_CPUCAP_SYST

[PATCH v5 04/17] arm64: Don't trap host pointer auth use to EL2

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2.

This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access keys and permit EL0 use of instructions.
For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't
bother setting them.

This does not enable support for KVM guests, since KVM manages HCR_EL2
itself when running VMs.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Acked-by: Christoffer Dall 
Cc: Catalin Marinas 
Cc: Marc Zyngier 
Cc: Will Deacon 
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index f885f4e96002..1405bb24acac 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -24,6 +24,8 @@
 
 /* Hyp Configuration Register (HCR) bits */
 #define HCR_FWB(UL(1) << 46)
+#define HCR_API(UL(1) << 41)
+#define HCR_APK(UL(1) << 40)
 #define HCR_TEA(UL(1) << 37)
 #define HCR_TERR   (UL(1) << 36)
 #define HCR_TLOR   (UL(1) << 35)
@@ -87,7 +89,7 @@
 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
 HCR_FMO | HCR_IMO)
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
-#define HCR_HOST_NVHE_FLAGS (HCR_RW)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK)
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 /* TCR_EL2 Registers bits */
-- 
2.11.0



[PATCH v5 08/17] arm64: expose user PAC bit positions via ptrace

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depends on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.

For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.

This patch adds a new structure with masks describing the location of
the PAC bits in userspace instruction and data pointers (i.e. those
addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
By clearing these bits from pointers (and replacing them with the value
of bit 55), userspace can acquire the PAC-less versions.

This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the feature is enabled. Otherwise, it is
hidden.
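
For illustration, a debugger would consume the regset along these lines
(a sketch; the NT_ARM_PAC_MASK fallback value here is an assumption if
your headers predate this series, and error handling is elided):

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_ARM_PAC_MASK
#define NT_ARM_PAC_MASK	0x406
#endif

struct user_pac_mask {
	uint64_t data_mask;
	uint64_t insn_mask;
};

static uint64_t strip_insn_pac(pid_t pid, uint64_t ptr)
{
	struct user_pac_mask masks;
	struct iovec iov = { .iov_base = &masks, .iov_len = sizeof(masks) };

	ptrace(PTRACE_GETREGSET, pid, (void *)NT_ARM_PAC_MASK, &iov);

	/* Clear the PAC bits, replacing them with the value of bit 55. */
	if (ptr & (1UL << 55))
		return ptr | masks.insn_mask;
	return ptr & ~masks.insn_mask;
}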

Signed-off-by: Mark Rutland 
[kristina: cpus_have_cap -> cpus_have_const_cap]
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arch/arm64/include/asm/pointer_auth.h |  8 
 arch/arm64/include/uapi/asm/ptrace.h  |  7 +++
 arch/arm64/kernel/ptrace.c| 38 +++
 include/uapi/linux/elf.h  |  1 +
 4 files changed, 54 insertions(+)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 2aefedc31d9e..15486079e9ec 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -2,9 +2,11 @@
 #ifndef __ASM_POINTER_AUTH_H
 #define __ASM_POINTER_AUTH_H
 
+#include <linux/bitops.h>
 #include 
 
 #include 
+#include <asm/memory.h>
 #include 
 
 #ifdef CONFIG_ARM64_PTR_AUTH
@@ -49,6 +51,12 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys 
*keys)
__ptrauth_key_install(APIA, keys->apia);
 }
 
+/*
+ * The EL0 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
+ */
+#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
+
 #define mm_ctx_ptrauth_init(ctx) \
ptrauth_keys_init(&(ctx)->ptrauth_keys)
 
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 98c4ce55d9c3..4994d718771a 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -228,6 +228,13 @@ struct user_sve_header {
  SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags)\
: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))
 
+/* pointer authentication masks (NT_ARM_PAC_MASK) */
+
+struct user_pac_mask {
+   __u64   data_mask;
+   __u64   insn_mask;
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 6219486fa25f..cb8246f8c603 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -46,6 +46,7 @@
 #include 
 #include 
 #include 
+#include <asm/pointer_auth.h>
 #include 
 #include 
 #include 
@@ -958,6 +959,30 @@ static int sve_set(struct task_struct *target,
 
 #endif /* CONFIG_ARM64_SVE */
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+static int pac_mask_get(struct task_struct *target,
+   const struct user_regset *regset,
+   unsigned int pos, unsigned int count,
+   void *kbuf, void __user *ubuf)
+{
+   /*
+* The PAC bits can differ across data and instruction pointers
+* depending on TCR_EL1.TBID*, which we may make use of in future, so
+* we expose separate masks.
+*/
+   unsigned long mask = ptrauth_pac_mask();
+   struct user_pac_mask uregs = {
+   .data_mask = mask,
+   .insn_mask = mask,
+   };
+
+   if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+   return -EINVAL;
+
+   return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
 enum aarch64_regset {
REGSET_GPR,
REGSET_FPR,
@@ -970,6 +995,9 @@ enum aarch64_regset {
 #ifdef CONFIG_ARM64_SVE
REGSET_SVE,
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+   REGSET_PAC_MASK,
+#endif
 };
 
 static const struct user_regset aarch64_regsets[] = {
@@ -1039,6 +1067,16 @@ static const struct user_regset aarch64_regsets[] = {
.get_size = sve_get_size,
},
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+   [REGSET_PAC_MASK] = {
+   .core_note_type = NT_ARM_PAC_MASK,
+   .n = sizeof(struct user_pac_mask) / sizeof(u64),
+   .size = sizeof(u64),
+   .align = sizeof(u64),
+   .get = pac_mask_get,
+   /* this cannot be set dynamically */
+   },
+#endif
 };
 
 static const struct user_regset_view user_aarch64_view = {
diff --git a/include/uapi/linux/elf.h

[RFC 13/17] arm64: install user ptrauth keys at kernel exit time

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

This will mean we do more work per EL0 exception return, but it is a
stepping-stone to enabling keys within the kernel.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/pointer_auth.h |  7 +--
 arch/arm64/include/asm/ptrauth-asm.h  | 26 ++
 arch/arm64/kernel/asm-offsets.c   |  7 +++
 arch/arm64/kernel/entry.S |  9 +++--
 arch/arm64/kernel/process.c   |  1 -
 5 files changed, 41 insertions(+), 9 deletions(-)
 create mode 100644 arch/arm64/include/asm/ptrauth-asm.h

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index cedb03bd175b..5e40533f4ea2 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -64,16 +64,11 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_task_init_user(tsk)\
-   ptrauth_keys_init(&(tsk)->thread_info.keys_user); \
-   ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
-
-#define ptrauth_task_switch(tsk)   \
-   ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
+   ptrauth_keys_init(&(tsk)->thread_info.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_strip_insn_pac(lr) (lr)
 #define ptrauth_task_init_user(tsk)
-#define ptrauth_task_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/ptrauth-asm.h b/arch/arm64/include/asm/ptrauth-asm.h
new file mode 100644
index ..f50bdfc4046c
--- /dev/null
+++ b/arch/arm64/include/asm/ptrauth-asm.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_PTRAUTH_ASM_H
+#define __ASM_PTRAUTH_ASM_H
+
+#include 
+#include 
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+
+   .macro ptrauth_keys_install_user tsk, tmp
+alternative_if ARM64_HAS_ADDRESS_AUTH
+   ldr \tmp, [\tsk, #(TSK_TI_KEYS_USER + PTRAUTH_KEY_APIALO)]
+   msr_s   SYS_APIAKEYLO_EL1, \tmp
+   ldr \tmp, [\tsk, #(TSK_TI_KEYS_USER + PTRAUTH_KEY_APIAHI)]
+   msr_s   SYS_APIAKEYHI_EL1, \tmp
+alternative_else_nop_endif
+   .endm
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+
+   .macro ptrauth_keys_install_user tsk, tmp
+   .endm
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_PTRAUTH_ASM_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 323aeb5f2fe6..b6be0dd037fd 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -45,6 +45,9 @@ int main(void)
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
  DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0));
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(TSK_TI_KEYS_USER, offsetof(struct task_struct, thread_info.keys_user));
+#endif
   DEFINE(TSK_STACK,offsetof(struct task_struct, stack));
   BLANK();
  DEFINE(THREAD_CPU_CONTEXT,   offsetof(struct task_struct, thread.cpu_context));
@@ -169,5 +172,9 @@ int main(void)
  DEFINE(SDEI_EVENT_INTREGS,   offsetof(struct sdei_registered_event, interrupted_regs));
  DEFINE(SDEI_EVENT_PRIORITY,  offsetof(struct sdei_registered_event, priority));
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(PTRAUTH_KEY_APIALO,   offsetof(struct ptrauth_keys, apia.lo));
+  DEFINE(PTRAUTH_KEY_APIAHI,   offsetof(struct ptrauth_keys, apia.hi));
+#endif
   return 0;
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 09dbea221a27..1e925f6d2978 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -23,8 +23,9 @@
 #include 
 
 #include 
-#include 
 #include 
+#include 
+#include 
 #include 
 #include 
 #include 
@@ -33,8 +34,8 @@
 #include 
 #include 
 #include 
+#include 
 #include 
-#include 
 #include 
 
 /*
@@ -325,6 +326,10 @@ alternative_else_nop_endif
apply_ssbd 0, x0, x1
.endif
 
+   .if \el == 0
+   ptrauth_keys_install_user tsk, x0
+   .endif
+
msr elr_el1, x21// set up the return data
msr spsr_el1, x22
ldp x0, x1, [sp, #16 * 0]
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index fae52be66c92..857ae05cd04c 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -426,7 +426,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
contextidr_thread_switch(next);
entry_task_switch(next);
uao_thread_switch(next);
-   ptrauth_task_switch(next);
 
/*
 * Complete any pending TLB or cache maintenance on this CPU in case
-- 
2.11.0



[PATCH v5 03/17] arm64/kvm: hide ptrauth from guests

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

In subsequent patches we're going to expose ptrauth to the host kernel
and userspace, but things are a bit trickier for guest kernels. For the
time being, let's hide ptrauth from KVM guests.

Regardless of how well-behaved the guest kernel is, guest userspace
could attempt to use ptrauth instructions, triggering a trap to EL2,
resulting in noise from kvm_handle_unknown_ec(). So let's write up a
handler for the PAC trap, which silently injects an UNDEF into the
guest, as if the feature were really missing.

Signed-off-by: Mark Rutland 
[kristina: fix comment]
Signed-off-by: Kristina Martsenko 
Reviewed-by: Andrew Jones 
Reviewed-by: Christoffer Dall 
Cc: Marc Zyngier 
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/kvm/handle_exit.c | 18 ++
 arch/arm64/kvm/sys_regs.c|  8 
 2 files changed, 26 insertions(+)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index e5e741bfffe1..53759b3c165d 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -173,6 +173,23 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
return 1;
 }
 
+/*
+ * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
+ * a NOP).
+ */
+static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+   /*
+* We don't currently support ptrauth in a guest, and we mask the ID
+* registers to prevent well-behaved guests from trying to make use of
+* it.
+*
+* Inject an UNDEF, as if the feature really isn't present.
+*/
+   kvm_inject_undefined(vcpu);
+   return 1;
+}
+
 static exit_handle_fn arm_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX]  = kvm_handle_unknown_ec,
[ESR_ELx_EC_WFx]= kvm_handle_wfx,
@@ -195,6 +212,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64]  = kvm_handle_guest_debug,
[ESR_ELx_EC_FP_ASIMD]   = handle_no_fpsimd,
+   [ESR_ELx_EC_PAC]= kvm_handle_ptrauth,
 };
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 22fbbdbece3c..1ca592d38c3c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1040,6 +1040,14 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
kvm_debug("SVE unsupported for guests, suppressing\n");
 
val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+   } else if (id == SYS_ID_AA64ISAR1_EL1) {
+   const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_API_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+   if (val & ptrauth_mask)
+   kvm_debug("ptrauth unsupported for guests, 
suppressing\n");
+   val &= ~ptrauth_mask;
} else if (id == SYS_ID_AA64MMFR1_EL1) {
if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
kvm_debug("LORegions unsupported for guests, 
suppressing\n");
-- 
2.11.0



[PATCH v5 01/17] arm64: add pointer authentication register bits

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

The ARMv8.3 pointer authentication extension adds:

* New fields in ID_AA64ISAR1 to report the presence of pointer
  authentication functionality.

* New control bits in SCTLR_ELx to enable this functionality.

* New system registers to hold the keys necessary for this
  functionality.

* A new ESR_ELx.EC code used when the new instructions are affected by
  configurable traps.

This patch adds the relevant definitions to  and
 for these, to be used by subsequent patches.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Marc Zyngier 
Cc: Suzuki K Poulose 
Cc: Will Deacon 
---
 arch/arm64/include/asm/esr.h|  3 ++-
 arch/arm64/include/asm/sysreg.h | 30 ++
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index ce70c3ffb993..022785162281 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -30,7 +30,8 @@
 #define ESR_ELx_EC_CP14_LS (0x06)
 #define ESR_ELx_EC_FP_ASIMD(0x07)
 #define ESR_ELx_EC_CP10_ID (0x08)
-/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_PAC (0x09)
+/* Unallocated EC: 0x0A - 0x0B */
 #define ESR_ELx_EC_CP14_64 (0x0C)
 /* Unallocated EC: 0x0d */
 #define ESR_ELx_EC_ILL (0x0E)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index c1470931b897..343b7a3c59e0 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -171,6 +171,19 @@
 #define SYS_TTBR1_EL1  sys_reg(3, 0, 2, 0, 1)
 #define SYS_TCR_EL1sys_reg(3, 0, 2, 0, 2)
 
+#define SYS_APIAKEYLO_EL1  sys_reg(3, 0, 2, 1, 0)
+#define SYS_APIAKEYHI_EL1  sys_reg(3, 0, 2, 1, 1)
+#define SYS_APIBKEYLO_EL1  sys_reg(3, 0, 2, 1, 2)
+#define SYS_APIBKEYHI_EL1  sys_reg(3, 0, 2, 1, 3)
+
+#define SYS_APDAKEYLO_EL1  sys_reg(3, 0, 2, 2, 0)
+#define SYS_APDAKEYHI_EL1  sys_reg(3, 0, 2, 2, 1)
+#define SYS_APDBKEYLO_EL1  sys_reg(3, 0, 2, 2, 2)
+#define SYS_APDBKEYHI_EL1  sys_reg(3, 0, 2, 2, 3)
+
+#define SYS_APGAKEYLO_EL1  sys_reg(3, 0, 2, 3, 0)
+#define SYS_APGAKEYHI_EL1  sys_reg(3, 0, 2, 3, 1)
+
 #define SYS_ICC_PMR_EL1sys_reg(3, 0, 4, 6, 0)
 
 #define SYS_AFSR0_EL1  sys_reg(3, 0, 5, 1, 0)
@@ -419,9 +432,13 @@
 #define SYS_ICH_LR15_EL2   __SYS__LR8_EL2(7)
 
 /* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_ENIA (1 << 31)
+#define SCTLR_ELx_ENIB (1 << 30)
+#define SCTLR_ELx_ENDA (1 << 27)
 #define SCTLR_ELx_EE(1 << 25)
 #define SCTLR_ELx_IESB (1 << 21)
 #define SCTLR_ELx_WXN  (1 << 19)
+#define SCTLR_ELx_ENDB (1 << 13)
 #define SCTLR_ELx_I(1 << 12)
 #define SCTLR_ELx_SA   (1 << 3)
 #define SCTLR_ELx_C(1 << 2)
@@ -515,11 +532,24 @@
 #define ID_AA64ISAR0_AES_SHIFT 4
 
 /* id_aa64isar1 */
+#define ID_AA64ISAR1_GPI_SHIFT 28
+#define ID_AA64ISAR1_GPA_SHIFT 24
 #define ID_AA64ISAR1_LRCPC_SHIFT   20
 #define ID_AA64ISAR1_FCMA_SHIFT16
 #define ID_AA64ISAR1_JSCVT_SHIFT   12
+#define ID_AA64ISAR1_API_SHIFT 8
+#define ID_AA64ISAR1_APA_SHIFT 4
 #define ID_AA64ISAR1_DPB_SHIFT 0
 
+#define ID_AA64ISAR1_APA_NI0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED   0x1
+#define ID_AA64ISAR1_API_NI0x0
+#define ID_AA64ISAR1_API_IMP_DEF   0x1
+#define ID_AA64ISAR1_GPA_NI0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED   0x1
+#define ID_AA64ISAR1_GPI_NI0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF   0x1
+
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT 60
 #define ID_AA64PFR0_CSV2_SHIFT 56
-- 
2.11.0



[RFC 12/17] arm64: move ptrauth keys to thread_info

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

To use pointer authentication in the kernel, we'll need to switch keys
in the entry assembly. This patch moves the pointer auth keys into
thread_info to make this possible.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/mmu.h  |  5 -
 arch/arm64/include/asm/mmu_context.h  | 13 -
 arch/arm64/include/asm/pointer_auth.h | 13 +++--
 arch/arm64/include/asm/thread_info.h  |  4 
 arch/arm64/kernel/process.c   |  4 
 5 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index f6480ea7b0d5..dd320df0d026 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -25,15 +25,10 @@
 
 #ifndef __ASSEMBLY__
 
-#include <asm/pointer_auth.h>
-
 typedef struct {
atomic64_t  id;
void*vdso;
unsigned long   flags;
-#ifdef CONFIG_ARM64_PTR_AUTH
-   struct ptrauth_keys ptrauth_keys;
-#endif
 } mm_context_t;
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 983f80925566..387e810063c7 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -215,8 +215,6 @@ static inline void __switch_mm(struct mm_struct *next)
return;
}
 
-   mm_ctx_ptrauth_switch(&next->context);
-
check_and_switch_context(next, cpu);
 }
 
@@ -242,17 +240,6 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
-static inline void arch_bprm_mm_init(struct mm_struct *mm,
-struct vm_area_struct *vma)
-{
-   mm_ctx_ptrauth_init(&mm->context);
-}
-#define arch_bprm_mm_init arch_bprm_mm_init
-
-/*
- * We need to override arch_bprm_mm_init before including the generic hooks,
- * which are otherwise sufficient for us.
- */
 #include 
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index f5a4b075be65..cedb03bd175b 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -63,16 +63,17 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
return ptr & ~ptrauth_pac_mask();
 }
 
-#define mm_ctx_ptrauth_init(ctx) \
-   ptrauth_keys_init(&(ctx)->ptrauth_keys)
+#define ptrauth_task_init_user(tsk)\
+   ptrauth_keys_init(&(tsk)->thread_info.keys_user); \
+   ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
 
-#define mm_ctx_ptrauth_switch(ctx) \
-   ptrauth_keys_switch(&(ctx)->ptrauth_keys)
+#define ptrauth_task_switch(tsk)   \
+   ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_strip_insn_pac(lr) (lr)
-#define mm_ctx_ptrauth_init(ctx)
-#define mm_ctx_ptrauth_switch(ctx)
+#define ptrauth_task_init_user(tsk)
+#define ptrauth_task_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index cb2c10a8f0a8..ea9272fb52d4 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -28,6 +28,7 @@
 struct task_struct;
 
 #include 
+#include <asm/pointer_auth.h>
 #include 
 #include 
 
@@ -43,6 +44,9 @@ struct thread_info {
u64 ttbr0;  /* saved TTBR0_EL1 */
 #endif
	int preempt_count;  /* 0 => preemptable, <0 => bug */
+#ifdef CONFIG_ARM64_PTR_AUTH
+   struct ptrauth_keys keys_user;
+#endif
 };
 
 #define thread_saved_pc(tsk)   \
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 7f1628effe6d..fae52be66c92 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -57,6 +57,7 @@
 #include 
 #include 
 #include 
+#include <asm/pointer_auth.h>
 #include 
 
 #ifdef CONFIG_STACKPROTECTOR
@@ -425,6 +426,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
contextidr_thread_switch(next);
entry_task_switch(next);
uao_thread_switch(next);
+   ptrauth_task_switch(next);
 
/*
 * Complete any pending TLB or cache maintenance on this CPU in case
@@ -492,6 +494,8 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
 void arch_setup_new_exec(void)
 {
current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
+
+   ptrauth_task_init_user(current);
 }
 
 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
-- 
2.11.0



[PATCH v5 11/17] arm64: docs: document pointer authentication

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.

Signed-off-by: Mark Rutland 
[kristina: update cpu-feature-registers.txt]
Signed-off-by: Kristina Martsenko 
Cc: Andrew Jones 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 Documentation/arm64/booting.txt|  8 +++
 Documentation/arm64/cpu-feature-registers.txt  |  4 ++
 Documentation/arm64/elf_hwcaps.txt |  5 ++
 Documentation/arm64/pointer-authentication.txt | 84 ++
 4 files changed, 101 insertions(+)
 create mode 100644 Documentation/arm64/pointer-authentication.txt

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 8d0df62c3fe0..8df9f4658d6f 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
 ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
   - The DT or ACPI tables must describe a GICv2 interrupt controller.
 
+  For CPUs with pointer authentication functionality:
+  - If EL3 is present:
+SCR_EL3.APK (bit 16) must be initialised to 0b1
+SCR_EL3.API (bit 17) must be initialised to 0b1
+  - If the kernel is entered at EL1:
+HCR_EL2.APK (bit 40) must be initialised to 0b1
+HCR_EL2.API (bit 41) must be initialised to 0b1
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs.  All CPUs must
 enter the kernel in the same exception level.
diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
index 7964f03846b1..b165677ffab9 100644
--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -190,6 +190,10 @@ infrastructure:
 |--------------------------------------------------|
 | JSCVT                        | [15-12] |    y    |
 |--------------------------------------------------|
+| API                          | [11-8]  |    y    |
+|--------------------------------------------------|
+| APA                          | [7-4]   |    y    |
+|--------------------------------------------------|
 | DPB                          | [3-0]   |    y    |
 x--------------------------------------------------x
 
diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index d6aff2c5e9e2..95509a7b0ffe 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -178,3 +178,8 @@ HWCAP_ILRCPC
 HWCAP_FLAGM
 
 Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
+
+HWCAP_APIA
+
+EL0 AddPac and Auth functionality using APIAKey_EL1 is enabled, as
+described by Documentation/arm64/pointer-authentication.txt.
diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
new file mode 100644
index ..8a9cb5713770
--- /dev/null
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -0,0 +1,84 @@
+Pointer authentication in AArch64 Linux
+=======================================
+
+Author: Mark Rutland 
+Date: 2017-07-19
+
+This document briefly describes the provision of pointer authentication
+functionality in AArch64 Linux.
+
+
+Architecture overview
+-
+
+The ARMv8.3 Pointer Authentication extension adds primitives that can be
+used to mitigate certain classes of attack where an attacker can corrupt
+the contents of some memory (e.g. the stack).
+
+The extension uses a Pointer Authentication Code (PAC) to determine
+whether pointers have been modified unexpectedly. A PAC is derived from
+a pointer, another value (such as the stack pointer), and a secret key
+held in system registers.
+
+The extension adds instructions to insert a valid PAC into a pointer,
+and to verify/remove the PAC from a pointer. The PAC occupies a number
+of high-order bits of the pointer, which varies dependent on the
+configured virtual address size and whether pointer tagging is in use.
+
+A subset of these instructions have been allocated from the HINT
+encoding space. In the absence of the extension (or when disabled),
+these instructions behave as NOPs. Applications and libraries using
+these instructions operate correctly regardless of the presence of the
+extension.
+
+
+Basic support
+-
+
+When CONFIG_ARM64_PTR_AUTH is selected, and relevant HW support is
+present, the kernel will assign a random APIAKey value to each process
+at exec*() time. This key is shared by all threads within the process,
+and the key is preserved across fork(). Presence of functionality using
+APIAKey is advertised via HWCAP_APIA.
+
+Recent versions of GCC can compile code with APIAKey-based return
+address protection when passed the -msign-return-address option. This

[PATCH v5 10/17] arm64: enable pointer authentication

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

Now that all the necessary bits are in place for userspace, add the
necessary Kconfig logic to allow this to be enabled.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Will Deacon 
---
 arch/arm64/Kconfig | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1b1a0e95c751..8a6d44160fa8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1134,6 +1134,29 @@ config ARM64_RAS_EXTN
 
 endmenu
 
+menu "ARMv8.3 architectural features"
+
+config ARM64_PTR_AUTH
+   bool "Enable support for pointer authentication"
+   default y
+   help
+ Pointer authentication (part of the ARMv8.3 Extensions) provides
+ instructions for signing and authenticating pointers against secret
+ keys, which can be used to mitigate Return Oriented Programming (ROP)
+ and other attacks.
+
+ This option enables these instructions at EL0 (i.e. for userspace).
+
+ Choosing this option will cause the kernel to initialise secret keys
+ for each process at exec() time, with these keys being
+ context-switched along with the process.
+
+ The feature is detected at runtime. If the feature is not present in
+ hardware it will not be advertised to userspace nor will it be
+ enabled.
+
+endmenu
+
 config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
-- 
2.11.0



[PATCH v5 02/17] arm64/kvm: consistently handle host HCR_EL2 flags

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

In KVM we define the configuration of HCR_EL2 for a VHE host in
HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
non-VHE host flags, and open-code HCR_RW. Further, in head.S we
open-code the flags for VHE and non-VHE configurations.

In future, we're going to want to configure more flags for the host, so
let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.

We now use mov_q to generate the HCR_EL2 value, as we do when
configuring other registers in head.S.

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Reviewed-by: Christoffer Dall 
Cc: Catalin Marinas 
Cc: Marc Zyngier 
Cc: Will Deacon 
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h | 1 +
 arch/arm64/kernel/head.S | 5 ++---
 arch/arm64/kvm/hyp/switch.c  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index aa45df752a16..f885f4e96002 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -87,6 +87,7 @@
 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
 HCR_FMO | HCR_IMO)
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW)
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 /* TCR_EL2 Registers bits */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b0853069702f..651a06b1980f 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -494,10 +494,9 @@ ENTRY(el2_setup)
 #endif
 
/* Hyp configuration. */
-   mov x0, #HCR_RW // 64-bit EL1
+   mov_q   x0, HCR_HOST_NVHE_FLAGS
cbz x2, set_hcr
-   orr x0, x0, #HCR_TGE// Enable Host Extensions
-   orr x0, x0, #HCR_E2H
+   mov_q   x0, HCR_HOST_VHE_FLAGS
 set_hcr:
msr hcr_el2, x0
isb
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index ca46153d7915..a1c32c1f2267 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -157,7 +157,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
write_sysreg(mdcr_el2, mdcr_el2);
-   write_sysreg(HCR_RW, hcr_el2);
+   write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
-- 
2.11.0



[PATCH v5 06/17] asm-generic: mm_hooks: allow hooks to be overridden individually

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

Currently, an architecture must either implement all of the mm hooks
itself, or use all of those provided by the asm-generic implementation.
When an architecture only needs to override a single hook, it must copy
the stub implementations from the asm-generic version.

To avoid this repetition, allow each hook to be overridden individually,
by placing each under an #ifndef block. As architectures providing their
own hooks can't include this file today, this shouldn't adversely affect
any existing hooks.
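
As a usage sketch (mirroring what the arm64 patches later in this series
do), an architecture can now override a single hook in its own
asm/mmu_context.h and pick up the remaining stubs:

static inline void arch_bprm_mm_init(struct mm_struct *mm,
				     struct vm_area_struct *vma)
{
	/* arch-specific per-exec initialisation goes here */
}
#define arch_bprm_mm_init arch_bprm_mm_init

/* The #ifndef guards now skip only the overridden stub. */
#include <asm-generic/mm_hooks.h>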

Signed-off-by: Mark Rutland 
Signed-off-by: Kristina Martsenko 
Acked-by: Arnd Bergmann 
Cc: linux-a...@vger.kernel.org
---
 include/asm-generic/mm_hooks.h | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index 8ac4e68a12f0..2b3ee15d3702 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -7,31 +7,42 @@
 #ifndef _ASM_GENERIC_MM_HOOKS_H
 #define _ASM_GENERIC_MM_HOOKS_H
 
+#ifndef arch_dup_mmap
 static inline int arch_dup_mmap(struct mm_struct *oldmm,
struct mm_struct *mm)
 {
return 0;
 }
+#endif
 
+#ifndef arch_exit_mmap
 static inline void arch_exit_mmap(struct mm_struct *mm)
 {
 }
+#endif
 
+#ifndef arch_unmap
 static inline void arch_unmap(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long start, unsigned long end)
 {
 }
+#endif
 
+#ifndef arch_bprm_mm_init
 static inline void arch_bprm_mm_init(struct mm_struct *mm,
 struct vm_area_struct *vma)
 {
 }
+#endif
 
+#ifndef arch_vma_access_permitted
 static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
bool write, bool execute, bool foreign)
 {
/* by default, allow everything */
return true;
 }
+#endif
+
 #endif /* _ASM_GENERIC_MM_HOOKS_H */
-- 
2.11.0



[PATCH v5 07/17] arm64: add basic pointer authentication support

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey. The kernel maintains an APIAKey value
for each process (shared by all threads within), which is initialised to
a random value at exec() time.

To describe that address authentication instructions are available, the
ID_AA64ISAR1.{APA,API} fields are exposed to userspace. A new hwcap,
APIA, is added to describe that the kernel manages APIAKey.

Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
and will behave as NOPs. These may be made use of in future patches.

No support is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.
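
For completeness, userspace would probe for the new hwcap along these
lines (a sketch; HWCAP_APIA is assumed to come from the patched
asm/hwcap.h):

#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>

int main(void)
{
	if (getauxval(AT_HWCAP) & HWCAP_APIA)
		printf("kernel-managed APIAKey available\n");
	else
		printf("pointer authentication not advertised\n");
	return 0;
}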

Signed-off-by: Mark Rutland 
[kristina: init keys in arch_bprm_mm_init; add AA64ISAR1.API HWCAP_CAP; use sysreg_clear_set]
Signed-off-by: Kristina Martsenko 
Tested-by: Adam Wallis 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Suzuki K Poulose 
Cc: Will Deacon 
---
 arch/arm64/include/asm/mmu.h  |  5 +++
 arch/arm64/include/asm/mmu_context.h  | 16 -
 arch/arm64/include/asm/pointer_auth.h | 63 +++
 arch/arm64/include/uapi/asm/hwcap.h   |  1 +
 arch/arm64/kernel/cpufeature.c| 10 ++
 arch/arm64/kernel/cpuinfo.c   |  1 +
 6 files changed, 95 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/pointer_auth.h

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index dd320df0d026..f6480ea7b0d5 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -25,10 +25,15 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/pointer_auth.h>
+
 typedef struct {
atomic64_t  id;
void*vdso;
unsigned long   flags;
+#ifdef CONFIG_ARM64_PTR_AUTH
+   struct ptrauth_keys ptrauth_keys;
+#endif
 } mm_context_t;
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 39ec0b8a689e..983f80925566 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -29,7 +29,6 @@
 #include 
 #include 
 #include 
-#include <asm-generic/mm_hooks.h>
 #include 
 #include 
 #include 
@@ -216,6 +215,8 @@ static inline void __switch_mm(struct mm_struct *next)
return;
}
 
+   mm_ctx_ptrauth_switch(&next->context);
+
check_and_switch_context(next, cpu);
 }
 
@@ -241,6 +242,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
+static inline void arch_bprm_mm_init(struct mm_struct *mm,
+struct vm_area_struct *vma)
+{
+   mm_ctx_ptrauth_init(&mm->context);
+}
+#define arch_bprm_mm_init arch_bprm_mm_init
+
+/*
+ * We need to override arch_bprm_mm_init before including the generic hooks,
+ * which are otherwise sufficient for us.
+ */
+#include <asm-generic/mm_hooks.h>
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
new file mode 100644
index ..2aefedc31d9e
--- /dev/null
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __ASM_POINTER_AUTH_H
+#define __ASM_POINTER_AUTH_H
+
+#include <linux/random.h>
+
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+/*
+ * Each key is a 128-bit quantity which is split across a pair of 64-bit
+ * registers (Lo and Hi).
+ */
+struct ptrauth_key {
+   unsigned long lo, hi;
+};
+
+/*
+ * We give each process its own instruction A key (APIAKey), which is shared by
+ * all threads. This is inherited upon fork(), and reinitialised upon exec*().
+ * All other keys are currently unused, with APIBKey, APDAKey, and APDBKey
+ * instructions behaving as NOPs.
+ */
+struct ptrauth_keys {
+   struct ptrauth_key apia;
+};
+
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+{
+   if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+   return;
+
+   get_random_bytes(keys, sizeof(*keys));
+}
+
+#define __ptrauth_key_install(k, v)\
+do {   \
+   struct ptrauth_key __pki_v = (v);   \
+   write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1); \
+   write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \
+} while (0)
+
+static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+   if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+   return;
+
+   __ptrauth_key_install(APIA, keys->apia);
+}
+
+#define mm_ctx_ptrauth_init(ctx) \
+   ptrauth_keys_init(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_switch(ctx) \
+   ptrauth_keys_switch(&(ctx)->ptrauth_keys)
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+#define mm_ctx_ptrauth_init(ctx)
+#define mm_

[PATCH v5 09/17] arm64: perf: strip PAC when unwinding userspace

2018-10-05 Thread Kristina Martsenko
From: Mark Rutland 

When the kernel is unwinding userspace callchains, we can't expect that
the userspace consumer of these callchains has the data necessary to
strip the PAC from the stored LR.

This patch has the kernel strip the PAC from user stackframes when the
in-kernel unwinder is used. This only affects the LR value, and not the
FP.

This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
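
The stripping itself is just a bitwise mask. For a user (TTBR0) pointer,
assuming 48-bit VAs and TBI enabled so the PAC occupies bits 54:48, it
reduces to the following sketch:

#include <stdint.h>

#define VA_BITS		48
/* Bits 54:VA_BITS, i.e. GENMASK(54, VA_BITS) in kernel terms. */
#define PAC_MASK	(((1UL << (55 - VA_BITS)) - 1) << VA_BITS)

static uint64_t strip_user_pac(uint64_t lr)
{
	return lr & ~PAC_MASK;
}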

Signed-off-by: Mark Rutland 
[kristina: add pointer_auth.h #include]
Signed-off-by: Kristina Martsenko 
Cc: Catalin Marinas 
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arch/arm64/include/asm/pointer_auth.h | 7 +++
 arch/arm64/kernel/perf_callchain.c| 6 +-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 15486079e9ec..f5a4b075be65 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -57,6 +57,12 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
  */
 #define ptrauth_pac_mask() GENMASK(54, VA_BITS)
 
+/* Only valid for EL0 TTBR0 instruction pointers */
+static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
+{
+   return ptr & ~ptrauth_pac_mask();
+}
+
 #define mm_ctx_ptrauth_init(ctx) \
ptrauth_keys_init(&(ctx)->ptrauth_keys)
 
@@ -64,6 +70,7 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
ptrauth_keys_switch(&(ctx)->ptrauth_keys)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_strip_insn_pac(lr) (lr)
 #define mm_ctx_ptrauth_init(ctx)
 #define mm_ctx_ptrauth_switch(ctx)
 #endif /* CONFIG_ARM64_PTR_AUTH */
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index bcafd7dcfe8b..94754f07f67a 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -18,6 +18,7 @@
 #include <linux/perf_event.h>
 #include <linux/uaccess.h>
 
+#include <asm/pointer_auth.h>
 #include <asm/stacktrace.h>
 
 struct frame_tail {
@@ -35,6 +36,7 @@ user_backtrace(struct frame_tail __user *tail,
 {
struct frame_tail buftail;
unsigned long err;
+   unsigned long lr;
 
/* Also check accessibility of one struct frame_tail beyond */
if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
@@ -47,7 +49,9 @@ user_backtrace(struct frame_tail __user *tail,
if (err)
return NULL;
 
-   perf_callchain_store(entry, buftail.lr);
+   lr = ptrauth_strip_insn_pac(buftail.lr);
+
+   perf_callchain_store(entry, lr);
 
/*
 * Frame pointers should strictly progress back up the stack
-- 
2.11.0



[PATCH 00/17] ARMv8.3 pointer authentication support

2018-10-05 Thread Kristina Martsenko
The kernel is built with the -msign-return-address compiler option, which
adds two instructions to every function: one to sign the
return address at the beginning of the function, and one to authenticate
the return address just before returning to it. If authentication fails,
the return will cause an exception to be taken, followed by a
panic/oops. This should help protect the kernel against attacks using
return-oriented programming.
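
Concretely, the generated prologue/epilogue looks roughly like the
following (a sketch of typical -msign-return-address output, not code from
this series; both instructions sit in the HINT space, so they execute as
NOPs on pre-ARMv8.3 CPUs):

func:
	paciasp				// sign x30 (LR) with APIAKey, SP as modifier
	stp	x29, x30, [sp, #-16]!
	...
	ldp	x29, x30, [sp], #16
	autiasp				// authenticate x30; a bad PAC leaves a
					// non-canonical address in x30
	ret				// returning to it then faults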

Each task has its own pointer authentication key for use in the kernel,
initialized during fork. On systems without much entropy during early
boot, keys generated early may unfortunately be predictable. Ideally the
kernel should get early randomness from firmware. Currently, this should
be possible on UEFI systems that support EFI_RNG_PROTOCOL (via
LINUX_EFI_RANDOM_SEED_TABLE_GUID). ARMv8.5-A will also add Random Number
instructions that should help with this [3].

The kernel currently uses only APIAKey, and switches it on entry and
exit to userspace. If/when GCC gains support for generating APIBKey
instructions, it may be worth switching to APIBKey if there is a
performance benefit (if userspace only uses APIAKey).

This series is currently intended as an RFC. Some things I haven't yet
looked at include:
  - debug and trace (ftrace, kprobes, __builtin_return_address(n),
kdump, ...)
  - interaction with stack protector
  - suspend/resume
  - compiler without ptrauth support
  - separate kconfig option?

Feedback and comments are welcome.

Thanks,
Kristina

[1] https://lore.kernel.org/lkml/20180503132031.25705-1-mark.rutl...@arm.com/
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git
[3] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture-2018-developments-armv85a


Kristina Martsenko (3):
  arm64: enable ptrauth earlier
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth -msign-return-address

Mark Rutland (14):
  arm64: add pointer authentication register bits
  arm64/kvm: consistently handle host HCR_EL2 flags
  arm64/kvm: hide ptrauth from guests
  arm64: Don't trap host pointer auth use to EL2
  arm64/cpufeature: detect pointer authentication
  asm-generic: mm_hooks: allow hooks to be overridden individually
  arm64: add basic pointer authentication support
  arm64: expose user PAC bit positions via ptrace
  arm64: perf: strip PAC when unwinding userspace
  arm64: enable pointer authentication
  arm64: docs: document pointer authentication
  arm64: move ptrauth keys to thread_info
  arm64: install user ptrauth keys at kernel exit time
  arm64: unwind: strip PAC from kernel addresses

 Documentation/arm64/booting.txt|   8 ++
 Documentation/arm64/cpu-feature-registers.txt  |   4 +
 Documentation/arm64/elf_hwcaps.txt |   5 ++
 Documentation/arm64/pointer-authentication.txt |  84 
 arch/arm64/Kconfig |  23 ++
 arch/arm64/Makefile|   4 +
 arch/arm64/include/asm/cpucaps.h   |   5 +-
 arch/arm64/include/asm/cpufeature.h|   9 +++
 arch/arm64/include/asm/esr.h   |   3 +-
 arch/arm64/include/asm/kvm_arm.h   |   3 +
 arch/arm64/include/asm/mmu_context.h   |   3 +-
 arch/arm64/include/asm/pointer_auth.h  | 103 +
 arch/arm64/include/asm/ptrauth-asm.h   |  39 ++
 arch/arm64/include/asm/sysreg.h|  30 +++
 arch/arm64/include/asm/thread_info.h   |   5 ++
 arch/arm64/include/uapi/asm/hwcap.h|   1 +
 arch/arm64/include/uapi/asm/ptrace.h   |   7 ++
 arch/arm64/kernel/asm-offsets.c|   8 ++
 arch/arm64/kernel/cpufeature.c |  51 
 arch/arm64/kernel/cpuinfo.c|   1 +
 arch/arm64/kernel/entry.S  |  13 +++-
 arch/arm64/kernel/head.S   |   5 +-
 arch/arm64/kernel/perf_callchain.c |   6 +-
 arch/arm64/kernel/process.c|   6 ++
 arch/arm64/kernel/ptrace.c |  38 +
 arch/arm64/kernel/smp.c|  10 ++-
 arch/arm64/kernel/stacktrace.c |   3 +
 arch/arm64/kvm/handle_exit.c   |  18 +
 arch/arm64/kvm/hyp/switch.c|   2 +-
 arch/arm64/kvm/sys_regs.c  |   8 ++
 include/asm-generic/mm_hooks.h |  11 +++
 include/uapi/linux/elf.h   |   1 +
 32 files changed, 506 insertions(+), 11 deletions(-)
 create mode 100644 Documentation/arm64/pointer-authentication.txt
 create mode 100644 arch/arm64/include/asm/pointer_auth.h
 create mode 100644 arch/arm64/include/asm/ptrauth-asm.h

-- 
2.11.0



Re: [PATCHv4 06/10] arm64: add basic pointer authentication support

2018-06-08 Thread Kristina Martsenko
Hi Mark,

On 03/05/18 14:20, Mark Rutland wrote:
> This patch adds basic support for pointer authentication, allowing
> userspace to make use of APIAKey. The kernel maintains an APIAKey value
> for each process (shared by all threads within), which is initialised to
> a random value at exec() time.
> 
> To describe that address authentication instructions are available, the
> ID_AA64ISAR1.{APA,API} fields are exposed to userspace. A new hwcap,
> APIA, is added to describe that the kernel manages APIAKey.
> 
> Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
> and will behave as NOPs. These may be made use of in future patches.
> 
> No support is added for the generic key (APGAKey), though this cannot be
> trapped or made to behave as a NOP. Its presence is not advertised with
> a hwcap.
> 
> Signed-off-by: Mark Rutland 
> Cc: Catalin Marinas 
> Cc: Ramana Radhakrishnan 
> Cc: Suzuki K Poulose 
> Cc: Will Deacon 
> ---
>  arch/arm64/include/asm/mmu.h  |  5 +++
>  arch/arm64/include/asm/mmu_context.h  | 11 -
>  arch/arm64/include/asm/pointer_auth.h | 75 +++
>  arch/arm64/include/uapi/asm/hwcap.h   |  1 +
>  arch/arm64/kernel/cpufeature.c|  9 +
>  arch/arm64/kernel/cpuinfo.c   |  1 +
>  6 files changed, 101 insertions(+), 1 deletion(-)
>  create mode 100644 arch/arm64/include/asm/pointer_auth.h
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index dd320df0d026..f6480ea7b0d5 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -25,10 +25,15 @@
>  
>  #ifndef __ASSEMBLY__
>  
> +#include <asm/pointer_auth.h>
> +
>  typedef struct {
>   atomic64_t  id;
>   void*vdso;
>   unsigned long   flags;
> +#ifdef CONFIG_ARM64_PTR_AUTH
> + struct ptrauth_keys ptrauth_keys;
> +#endif
>  } mm_context_t;
>  
>  /*
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 39ec0b8a689e..83eadbc6b946 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -168,7 +168,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>  #define destroy_context(mm)  do { } while(0)
>  void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
>  
> -#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; })
> +static inline int init_new_context(struct task_struct *tsk,
> +struct mm_struct *mm)
> +{
> + atomic64_set(&mm->context.id, 0);
> + mm_ctx_ptrauth_init(&mm->context);
> +
> + return 0;
> +}
> 
>  #ifdef CONFIG_ARM64_SW_TTBR0_PAN
>  static inline void update_saved_ttbr0(struct task_struct *tsk,
> @@ -216,6 +223,8 @@ static inline void __switch_mm(struct mm_struct *next)
>   return;
>   }
>  
> + mm_ctx_ptrauth_switch(&next->context);
> +
>   check_and_switch_context(next, cpu);
>  }

It seems you've removed arch_dup_mmap here (as Catalin suggested [1]),
but forgotten to move the key initialization from init_new_context to
arch_bprm_mm_init. In my tests I'm seeing child processes get different
keys than the parent after a fork().

Kristina

[1] https://lkml.org/lkml/2018/4/25/506
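
For what it's worth, the fork() behaviour is easy to demonstrate from
userspace. A hedged sketch (it assumes an ARMv8.3 CPU, a toolchain that
accepts the pacia/autia mnemonics, and a kernel with this series applied;
autia does not fault on a key mismatch, it just sets an error pattern in
the pointer's top bits, so comparing values is enough):

#include <stdint.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static uint64_t pac_sign(uint64_t ptr, uint64_t mod)
{
	/* PACIA: sign ptr with APIAKey, using mod as the modifier */
	asm(".arch armv8.3-a\n\tpacia %0, %1" : "+r" (ptr) : "r" (mod));
	return ptr;
}

static uint64_t pac_auth(uint64_t ptr, uint64_t mod)
{
	/* AUTIA: strip the PAC if it matches, corrupt the pointer if not */
	asm(".arch armv8.3-a\n\tautia %0, %1" : "+r" (ptr) : "r" (mod));
	return ptr;
}

int main(void)
{
	uint64_t ptr = (uint64_t)&main;
	uint64_t signed_ptr = pac_sign(ptr, 0);	/* signed with the parent's key */

	if (fork() == 0) {
		puts(pac_auth(signed_ptr, 0) == ptr ?
		     "child: keys inherited (expected)" :
		     "child: keys differ (the bug above)");
		return 0;
	}
	wait(NULL);
	return 0;
}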


[PATCH] arm64: KVM: fix VTTBR_BADDR_MASK BUG_ON off-by-one

2017-11-14 Thread Kristina Martsenko
VTTBR_BADDR_MASK is used to sanity check the size and alignment of the
VTTBR address. It seems to currently be off by one, thereby only
allowing up to 47-bit addresses (instead of 48-bit) and also
insufficiently checking the alignment. This patch fixes it.

As an example, with 4k pages, before this patch we have:

  PHYS_MASK_SHIFT = 48
  VTTBR_X = 37 - 24 = 13
  VTTBR_BADDR_SHIFT = 13 - 1 = 12
  VTTBR_BADDR_MASK = ((1 << 35) - 1) << 12 = 0x7ffffffff000

Which is wrong, because the mask doesn't allow bit 47 of the VTTBR
address to be set, and only requires the address to be 12-bit (4k)
aligned, while it actually needs to be 13-bit (8k) aligned because we
concatenate two 4k tables.

With this patch, the mask becomes 0xffffffffe000, which is what we
want.
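
The arithmetic is easy to double-check with a few lines of standalone C (my
sketch, mirroring the 4k-page values above):

#include <stdio.h>

#define PHYS_MASK_SHIFT	48
#define VTTBR_X		13	/* 4k pages: 37 - 24 */

int main(void)
{
	unsigned long before = ((1UL << (PHYS_MASK_SHIFT - VTTBR_X)) - 1)
				<< (VTTBR_X - 1);
	unsigned long after  = ((1UL << (PHYS_MASK_SHIFT - VTTBR_X)) - 1)
				<< VTTBR_X;

	printf("before: 0x%lx\n", before);	/* 0x7ffffffff000 */
	printf("after:  0x%lx\n", after);	/* 0xffffffffe000 */
	return 0;
}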

Fixes: 0369f6a34b9f ("arm64: KVM: EL2 register definitions")
Cc:  # 3.11.x
Reviewed-by: Suzuki K Poulose 
Signed-off-by: Kristina Martsenko 
---
 arch/arm64/include/asm/kvm_arm.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 61d694c2eae5..555d463c0eaa 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -170,8 +170,7 @@
 #define VTCR_EL2_FLAGS		(VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN_FLAGS)
 #define VTTBR_X			(VTTBR_X_TGRAN_MAGIC - VTCR_EL2_T0SZ_IPA)
 
-#define VTTBR_BADDR_SHIFT	(VTTBR_X - 1)
-#define VTTBR_BADDR_MASK	(((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK	(((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_X)
 #define VTTBR_VMID_SHIFT  (UL(48))
 #define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT)
 
-- 
2.1.4
