On Fri, Oct 09, 2020 at 12:42:53PM -0700, ira.we...@intel.com wrote:

> @@ -644,6 +663,8 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
>  
>       if ((tifp ^ tifn) & _TIF_SLD)
>               switch_to_sld(tifn);
> +
> +     pks_sched_in();
>  }
>  

You seem to have lost the comment proposed here:

  https://lkml.kernel.org/r/20200717083140.gw10...@hirez.programming.kicks-ass.net

It is useful and important information that, in the common case, the
cached value already matches and the wrmsr does not actually happen.

> diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
> index 3cf8f775f36d..30f65dd3d0c5 100644
> --- a/arch/x86/mm/pkeys.c
> +++ b/arch/x86/mm/pkeys.c
> @@ -229,3 +229,31 @@ u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags)
>  
>       return pk_reg;
>  }
> +
> +DEFINE_PER_CPU(u32, pkrs_cache);
> +
> +/**
> + * It should also be noted that the underlying WRMSR(MSR_IA32_PKRS) is not
> + * serializing but still maintains ordering properties similar to WRPKRU.
> + * The current SDM section on PKRS needs updating but should be the same as
> + * that of WRPKRU.  So to quote from the WRPKRU text:
> + *
> + *   WRPKRU will never execute transiently. Memory accesses
> + *   affected by PKRU register will not execute (even transiently)
> + *   until all prior executions of WRPKRU have completed execution
> + *   and updated the PKRU register.

(whitespace damage; space followed by tabstop)

> + */
> +void write_pkrs(u32 new_pkrs)
> +{
> +     u32 *pkrs;
> +
> +     if (!static_cpu_has(X86_FEATURE_PKS))
> +             return;
> +
> +     pkrs = get_cpu_ptr(&pkrs_cache);
> +     if (*pkrs != new_pkrs) {
> +             *pkrs = new_pkrs;
> +             wrmsrl(MSR_IA32_PKRS, new_pkrs);
> +     }
> +     put_cpu_ptr(pkrs);
> +}

that looks familiar... :-)
