Re: [PATCH v4 14/21] arm64: Honor VHE being disabled from the command-line

2021-01-24 Thread Marc Zyngier
On Sat, 23 Jan 2021 14:07:53 +,
Catalin Marinas  wrote:
> 
> On Mon, Jan 18, 2021 at 09:45:26AM +, Marc Zyngier wrote:
> > diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> > index 59820f9b8522..bbab2148a2a2 100644
> > --- a/arch/arm64/kernel/hyp-stub.S
> > +++ b/arch/arm64/kernel/hyp-stub.S
> > @@ -77,13 +77,24 @@ SYM_CODE_END(el1_sync)
> >  SYM_CODE_START_LOCAL(mutate_to_vhe)
> > // Sanity check: MMU *must* be off
> > mrs x0, sctlr_el2
> > -   tbnz x0, #0, 1f
> > +   tbnz x0, #0, 2f
> >  
> > // Needs to be VHE capable, obviously
> > mrs x0, id_aa64mmfr1_el1
> > ubfx x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> > -   cbz x0, 1f
> > +   cbz x0, 2f
> >  
> > +   // Check whether VHE is disabled from the command line
> > +   adr_l   x1, id_aa64mmfr1_val
> > +   ldr x0, [x1]
> > +   adr_l   x1, id_aa64mmfr1_mask
> > +   ldr x1, [x1]
> > +   ubfx x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> > +   ubfx x1, x1, #ID_AA64MMFR1_VHE_SHIFT, #4
> > +   cbz x1, 1f
> > +   and x0, x0, x1
> > +   cbz x0, 2f
> > +1:
> 
> I can see the advantage here in separate id_aa64mmfr1_val/mask but we
> could use some asm offsets here and keep the pointer indirection simpler
> in C code. You'd just need something like 'adr_l mmfr1_ovrd + VAL_OFFSET'.
> 
> Anyway, if you have a strong preference for the current approach, leave
> it as is.

I've now moved over to a structure containing both val/mask, meaning
that we only need to keep a single pointer around in the various
feature descriptors. It certainly looks better.

Thanks,

M.

-- 
Without deviation from the norm, progress is not possible.


Re: [PATCH v4 14/21] arm64: Honor VHE being disabled from the command-line

2021-01-23 Thread Catalin Marinas
On Mon, Jan 18, 2021 at 09:45:26AM +, Marc Zyngier wrote:
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index 59820f9b8522..bbab2148a2a2 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -77,13 +77,24 @@ SYM_CODE_END(el1_sync)
>  SYM_CODE_START_LOCAL(mutate_to_vhe)
>   // Sanity check: MMU *must* be off
>   mrs x0, sctlr_el2
> - tbnz x0, #0, 1f
> + tbnz x0, #0, 2f
>  
>   // Needs to be VHE capable, obviously
>   mrs x0, id_aa64mmfr1_el1
>   ubfx x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> - cbz x0, 1f
> + cbz x0, 2f
>  
> + // Check whether VHE is disabled from the command line
> + adr_l   x1, id_aa64mmfr1_val
> + ldr x0, [x1]
> + adr_l   x1, id_aa64mmfr1_mask
> + ldr x1, [x1]
> + ubfx x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> + ubfx x1, x1, #ID_AA64MMFR1_VHE_SHIFT, #4
> + cbz x1, 1f
> + and x0, x0, x1
> + cbz x0, 2f
> +1:

I can see the advantage here in separate id_aa64mmfr1_val/mask but we
could use some asm offsets here and keep the pointer indirection simpler
in C code. You'd just need something like 'adr_l mmfr1_ovrd + VAL_OFFSET'.

Anyway, if you have a strong preference for the current approach, leave
it as is.

-- 
Catalin


Re: [PATCH v4 14/21] arm64: Honor VHE being disabled from the command-line

2021-01-18 Thread David Brazdil
On Mon, Jan 18, 2021 at 09:45:26AM +, Marc Zyngier wrote:
> Finally we can check whether VHE is disabled on the command line,
> and not enable it if that's the user's wish.
> 
> Signed-off-by: Marc Zyngier 
Acked-by: David Brazdil 

> ---
>  arch/arm64/kernel/hyp-stub.S | 17 ++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index 59820f9b8522..bbab2148a2a2 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -77,13 +77,24 @@ SYM_CODE_END(el1_sync)
>  SYM_CODE_START_LOCAL(mutate_to_vhe)
>   // Sanity check: MMU *must* be off
>   mrs x0, sctlr_el2
> - tbnz x0, #0, 1f
> + tbnz x0, #0, 2f
>  
>   // Needs to be VHE capable, obviously
>   mrs x0, id_aa64mmfr1_el1
>   ubfx x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> - cbz x0, 1f
> + cbz x0, 2f
>  
> + // Check whether VHE is disabled from the command line
> + adr_l   x1, id_aa64mmfr1_val
> + ldr x0, [x1]
> + adr_l   x1, id_aa64mmfr1_mask
> + ldr x1, [x1]
super nit: There's a ldr_l macro
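
The ldr_l macro folds the address computation and the load into one line, roughly (a sketch, not the final patch):

```
	// Before: compute the address, then load through it
	adr_l	x1, id_aa64mmfr1_val
	ldr	x0, [x1]

	// After: ldr_l does both in one macro invocation
	ldr_l	x0, id_aa64mmfr1_val
```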

> + ubfx x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> + ubfx x1, x1, #ID_AA64MMFR1_VHE_SHIFT, #4
> + cbz x1, 1f
> + and x0, x0, x1
> + cbz x0, 2f
> +1:
>   // Engage the VHE magic!
>   mov_q   x0, HCR_HOST_VHE_FLAGS
>   msr hcr_el2, x0
> @@ -152,7 +163,7 @@ skip_spe:
>   orr x0, x0, x1
>   msr spsr_el1, x0
>  
> -1:   eret
> +2:   eret
>  SYM_CODE_END(mutate_to_vhe)
>  
>  .macro invalid_vector label
> -- 
> 2.29.2
>