Hi Suzuki,

On 06/29/2018 01:15 PM, Suzuki K Poulose wrote:
> On arm64, ID_AA64MMFR0_EL1.PARange encodes the maximum Physical
> Address range supported by the CPU. Add a helper to decode this
> to actual physical shift. If we hit an unallocated value, return
> the maximum range supported by the kernel.
> This is will be used by the KVM to set the VTCR_EL2.T0SZ, as it
s/is will/will/ and s/the KVM/KVM/
> is about to move its place. Having this helper keeps the code
> movement cleaner.
> 
> Cc: Catalin Marinas <catalin.mari...@arm.com>
> Cc: Marc Zyngier <marc.zyng...@arm.com>
> Cc: James Morse <james.mo...@arm.com>
> Cc: Christoffer Dall <cd...@kernel.org>
> Signed-off-by: Suzuki K Poulose <suzuki.poul...@arm.com>
> ---
> Changes since V2:
>  - Split the patch
>  - Limit the physical shift only for values unrecognized.
> ---
>  arch/arm64/include/asm/cpufeature.h | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1717ba1..855cf0e 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -530,6 +530,19 @@ void arm64_set_ssbd_mitigation(bool state);
>  static inline void arm64_set_ssbd_mitigation(bool state) {}
>  #endif
>  
> +static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
> +{
> +     switch (parange) {
> +     case 0: return 32;
> +     case 1: return 36;
> +     case 2: return 40;
> +     case 3: return 42;
> +     case 4: return 44;
> +     case 5: return 48;
> +     case 6: return 52;
> +     default: return CONFIG_ARM64_PA_BITS;
> +     }
> +}
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
>
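The PARange table itself matches the architected ID_AA64MMFR0_EL1.PARange
encodings. For anyone following the series, a caller would do roughly the
following to turn the CPU's reported PARange into a stage-2 T0SZ (just a
sketch using the generic cpufeature accessors; the macro and helper names
here are the current in-tree ones, not necessarily what the later patches
end up using):

	u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	int parange = cpuid_feature_extract_unsigned_field(mmfr0,
					ID_AA64MMFR0_PARANGE_SHIFT);
	u32 phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange);

	/* VTCR_EL2.T0SZ encodes the input address size as 64 - size */
	u64 t0sz = 64 - phys_shift;

Returning CONFIG_ARM64_PA_BITS for the unallocated encodings means the
result never exceeds what the kernel itself was built to address, which
looks like the right fallback.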

Reviewed-by: Eric Auger <eric.au...@redhat.com>

Thanks

Eric

