On Tue, Oct 23, 2018 at 11:43 AM Chang S. Bae <chang.seok....@intel.com> wrote:
>
> From: Andy Lutomirski <l...@kernel.org>
>
> With the new FSGSBASE instructions, we can efficiently read and write
> the FSBASE and GSBASE in __switch_to().  Use that capability to preserve
> the full state.
>
> This will enable user code to do whatever it wants with the new
> instructions without any kernel-induced gotchas.  (There can still be
> architectural gotchas: movl %gs,%eax; movl %eax,%gs may change GSBASE
> if WRGSBASE was used, but users are expected to read the CPU manual
> before doing things like that.)
>
> This is a considerable speedup.  It seems to save about 100 cycles
> per context switch compared to the baseline 4.6-rc1 behavior on my
> Skylake laptop.
>
> [ chang: 5~10% performance improvements were seen by a context switch
>   benchmark that ran threads with different FS/GSBASE values. Minor
>   edit on the changelog. ]
>
> Signed-off-by: Andy Lutomirski <l...@kernel.org>
> Signed-off-by: Chang S. Bae <chang.seok....@intel.com>
> Reviewed-by: Andi Kleen <a...@linux.intel.com>
> Cc: H. Peter Anvin <h...@zytor.com>
> Cc: Thomas Gleixner <t...@linutronix.de>
> Cc: Ingo Molnar <mi...@kernel.org>
> Cc: Dave Hansen <dave.han...@linux.intel.com>
> ---
>  arch/x86/kernel/process_64.c | 34 ++++++++++++++++++++++++++++------
>  1 file changed, 28 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
> index fcf18046c3d6..1d975cadc256 100644
> --- a/arch/x86/kernel/process_64.c
> +++ b/arch/x86/kernel/process_64.c
> @@ -238,8 +238,18 @@ static __always_inline void save_fsgs(struct task_struct *task)
>  {
>         savesegment(fs, task->thread.fsindex);
>         savesegment(gs, task->thread.gsindex);
> -       save_base_legacy(task, task->thread.fsindex, FS);
> -       save_base_legacy(task, task->thread.gsindex, GS);
> +       if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
> +               /*
> +                * If FSGSBASE is enabled, we can't make any useful guesses
> +                * about the base, and user code expects us to save the current
> +                * value.  Fortunately, reading the base directly is efficient.
> +                */
> +               task->thread.fsbase = rdfsbase();
> +               task->thread.gsbase = rd_inactive_gsbase();
> +       } else {
> +               save_base_legacy(task, task->thread.fsindex, FS);
> +               save_base_legacy(task, task->thread.gsindex, GS);
> +       }
>  }
>
>  #if IS_ENABLED(CONFIG_KVM)
> @@ -318,10 +328,22 @@ static __always_inline void load_seg_legacy(unsigned short prev_index,
>  static __always_inline void x86_fsgsbase_load(struct thread_struct *prev,
>                                               struct thread_struct *next)
>  {
> -       load_seg_legacy(prev->fsindex, prev->fsbase,
> -                       next->fsindex, next->fsbase, FS);
> -       load_seg_legacy(prev->gsindex, prev->gsbase,
> -                       next->gsindex, next->gsbase, GS);
> +       if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
> +               /* Update the FS and GS selectors if they could have changed. */
> +               if (unlikely(prev->fsindex || next->fsindex))
> +                       loadseg(FS, next->fsindex);
> +               if (unlikely(prev->gsindex || next->gsindex))
> +                       loadseg(GS, next->gsindex);
> +
> +               /* Update the bases. */
> +               wrfsbase(next->fsbase);
> +               wr_inactive_gsbase(next->gsbase);

Aha, I see what you're doing with the FSGSBASE-optimized version being
out of line.  But it's way too unclear from the code.  You should name
the helper wrgsbase_inactive() or maybe __wrgsbase_inactive() to
emphasize that you're literally using the WRGSBASE instruction.  (Or
its Xen PV equivalent.  Hmm.)
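To make the suggestion concrete, here is a rough sketch of what a renamed
__wrgsbase_inactive() could look like.  This is only an illustration, not
the patch's actual code: it assumes the usual SWAPGS dance to reach the
inactive GSBASE on bare metal, and falls back to the MSR_KERNEL_GS_BASE
MSR on Xen PV, where SWAPGS is not available.  The helper names around it
(native_swapgs, wrgsbase, wrmsrl, cpu_feature_enabled) are the existing
kernel primitives; the exact guard for the PV case is up to the patch
author.

```c
/*
 * Sketch only: write the inactive GSBASE, i.e. the one that SWAPGS
 * would switch to.  Must be called with interrupts disabled, since a
 * SWAPGS pair with an interrupt in the middle would corrupt GSBASE.
 */
static __always_inline void __wrgsbase_inactive(unsigned long gsbase)
{
	lockdep_assert_irqs_disabled();

	if (!cpu_feature_enabled(X86_FEATURE_XENPV)) {
		/* Swap in the inactive GSBASE, write it, swap back. */
		native_swapgs();
		wrgsbase(gsbase);
		native_swapgs();
	} else {
		/* Xen PV has no SWAPGS; the MSR write is the fallback. */
		wrmsrl(MSR_KERNEL_GS_BASE, gsbase);
	}
}
```

With a name like that, the __switch_to() caller reads as "write the
WRGSBASE instruction's notion of the inactive base" rather than an
opaque wr_inactive_gsbase() that could be hiding anything.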
