On 06/12/17 07:00, Andy Lutomirski wrote:
> Add to commit message:
>
> Using a trampoline stack would be obnoxious for Xen PV because Xen PV
> enters entry_SYSCALL_64_after_hwframe on the stack indicated by sp0.
> This could be fixed, but I think it's nice to ensure the entry code
> can still work without a trampoline stack. So this patch doesn't
> use the entry trampoline stack on Xen.
>
> The regs != eregs check in sync_regs is optional, but I think it's
> better than copying the whole set of regs over itself.
>
> Signed-off-by: Andy Lutomirski <l...@kernel.org>
I verified that the crash occurs without the patch when started as a PV
guest. Your patch makes a PV guest boot again. An HVM guest is working,
too. You can add my:

Tested-by: Juergen Gross <jgr...@suse.com>


Juergen

> ---
>
> Boris, this is lightly tested and should fix the problem you're seeing.
>
>  arch/x86/include/asm/switch_to.h | 3 +++
>  arch/x86/kernel/traps.c          | 9 +++++----
>  2 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
> index fab453ad2460..cbc71e73bd32 100644
> --- a/arch/x86/include/asm/switch_to.h
> +++ b/arch/x86/include/asm/switch_to.h
> @@ -93,6 +93,9 @@ static inline void update_sp0(struct task_struct *task)
>  	/* On x86_64, sp0 always points to the entry trampoline stack, which is constant: */
>  #ifdef CONFIG_X86_32
>  	load_sp0(task->thread.sp0);
> +#else
> +	if (static_cpu_has(X86_FEATURE_XENPV))
> +		load_sp0(task_top_of_stack(task));
>  #endif
>  }
>
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index 40cc3dc5967a..ee9ca0ad4388 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -619,14 +619,15 @@ NOKPROBE_SYMBOL(do_int3);
>
>  #ifdef CONFIG_X86_64
>  /*
> - * Help handler running on IST stack to switch off the IST stack if the
> - * interrupted code was in user mode. The actual stack switch is done in
> - * entry_64.S
> + * Help handler running on a per-cpu (IST or entry trampoline) stack
> + * to switch to the normal thread stack if the interrupted code was in
> + * user mode. The actual stack switch is done in entry_64.S
>   */
>  asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs)
>  {
>  	struct pt_regs *regs = (struct pt_regs *)this_cpu_read(cpu_current_top_of_stack) - 1;
> -	*regs = *eregs;
> +	if (regs != eregs)
> +		*regs = *eregs;
>  	return regs;
>  }
>  NOKPROBE_SYMBOL(sync_regs);