[RFC PATCH 66/73] x86/pvm: Use new cpu feature to describe XENPV and PVM

2024-02-26 Thread Lai Jiangshan
not a paravirtual guest. Signed-off-by: Hou Wenlong Signed-off-by: Lai Jiangshan --- arch/x86/entry/entry_64.S | 5 ++--- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/paravirt.h | 14 +++--- arch/x86/kernel/pvm.c | 1 + arch/x86/xen

[RFC PATCH 56/73] x86/pvm: Relocate kernel image early in PVH entry

2024-02-26 Thread Lai Jiangshan
From: Hou Wenlong A PIE kernel runs at a high virtual address in the PVH entry, so the kernel image needs to be relocated early in the PVH entry for the PVM guest. Signed-off-by: Hou Wenlong Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/init.h | 5 + arch/x86/kernel

Re: [PATCH 21/36] x86/tdx: Remove TDX_HCALL_ISSUE_STI

2022-06-13 Thread Lai Jiangshan
On Wed, Jun 8, 2022 at 10:48 PM Peter Zijlstra wrote: > > Now that arch_cpu_idle() is expected to return with IRQs disabled, > avoid the useless STI/CLI dance. > > Per the specs this is supposed to work, but nobody has yet relied upon > this behaviour so broken implementations are possible. I'm tot

[PATCH V2 7/7] x86/entry: Convert SWAPGS to swapgs and remove the definition of SWAPGS

2022-03-02 Thread Lai Jiangshan
From: Lai Jiangshan XENPV doesn't use swapgs_restore_regs_and_return_to_usermode(), error_entry() or entry_SYSENTER_compat(), so the PV-aware SWAPGS in them can be changed to swapgs. There is no user of the SWAPGS macro anymore after this change. The INTERRUPT_RETU
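
For context, what this removes is (roughly) the last remnant of the patchable macro; a hedged sketch of the pre-patch definition in arch/x86/include/asm/irqflags.h, paraphrased rather than copied from the patch:

	#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV

Once every entry-code user is either native-only (plain swapgs) or routed through a XEN-specific path, the macro has no callers left and the indirection can go.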

[PATCH V2 5/7] x86/entry: Don't call error_entry for XENPV

2022-03-02 Thread Lai Jiangshan
From: Lai Jiangshan When in XENPV, the kernel is already on the task stack, and it can't fault in native_iret() or native_load_gs_index() since XENPV uses its own pvops for iret and load_gs_index(). And it doesn't need to switch CR3. So there is no reason to call error_entry() in XENPV
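
A minimal sketch of the resulting call site (paraphrased, not the exact hunk from this posting; xen_error_entry here names the trimmed-down XEN PV variant):

	/* Only non-XENPV needs the full GS/CR3/stack-switch work. */
	ALTERNATIVE "call error_entry; movq %rax, %rsp", \
		    "call xen_error_entry", X86_FEATURE_XENPV

Keying this on X86_FEATURE_XENPV at the call site leaves the native path untouched while letting XEN PV skip work it provably never needs.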

[PATCH 10/11] x86: Remove the definition of SWAPGS

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan There is no user of the pv-aware SWAPGS anymore. Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/irqflags.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h index 87761396e8cc..ac2e4cc47210 100644

[PATCH 11/11] x86/entry: Remove the branch in sync_regs()

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan On non-XENPV, sp0 is the trampoline stack, and sync_regs() is only called on non-XENPV since error_entry() is not called on XENPV, so the current stack must be the trampoline stack or one of the IST stacks, and the check in sync_regs() is unneeded. Signed-off-by: Lai Jiangshan

[PATCH 08/11] x86/entry: Use idtentry macro for entry_INT80_compat

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan entry_INT80_compat is identical to the idtentry macro except for special handling of %rax in the prologue. Add that prologue handling to idtentry and use idtentry for entry_INT80_compat. Signed-off-by: Lai Jiangshan --- arch/x86/entry/entry_64.S | 18 ++ arch/x86/entry

[PATCH 09/11] x86/entry: Convert SWAPGS to swapgs in entry_SYSENTER_compat()

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan XENPV has its own entry point for SYSENTER and doesn't use entry_SYSENTER_compat, so the PV-aware SWAPGS there can be changed to swapgs. Signed-off-by: Lai Jiangshan --- arch/x86/entry/entry_64_compat.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a

[PATCH 07/11] x86/entry: Convert SWAPGS to swapgs in error_entry()

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan XENPV doesn't use error_entry() anymore, so the pv-aware SWAPGS can be changed to native swapgs. Signed-off-by: Lai Jiangshan --- arch/x86/entry/entry_64.S | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/

[PATCH 06/11] x86/entry: Don't call error_entry for XENPV

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan When in XENPV, the kernel is already on the task stack, and it can't fault in native_iret() or native_load_gs_index() since XENPV uses its own pvops for iret and load_gs_index(). And it doesn't need to switch CR3. So there is no reason to call error_entry() in XENPV.

[PATCH 05/11] x86/entry: Move cld to the start of idtentry

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan Make it next to CLAC. Suggested-by: Peter Zijlstra Signed-off-by: Lai Jiangshan --- arch/x86/entry/entry_64.S | 8 +--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 72ceb4b70822..ee1d4adcdab0
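
A simplified sketch of the intended idtentry prologue shape (from memory of the macro, not the literal diff):

	SYM_CODE_START(\asmsym)
		UNWIND_HINT_IRET_REGS offset=\has_error_code*8
		ASM_CLAC
		cld			/* clear DF right next to CLAC */
		...

Clearing the direction flag unconditionally at the very top means no later path has to remember whether cld has already been done.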

[PATCH 04/11] x86/entry: move PUSH_AND_CLEAR_REGS out of error_entry

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan Moving PUSH_AND_CLEAR_REGS out of error_entry doesn't change any functionality, but it enlarges the object size: size arch/x86/entry/entry_64.o.before: text 17916, data 384, bss 0, dec 18300, hex 477c, arch/x86/entry/entry_64.o size --f
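
A sketch of where the register push lands after the move (illustrative only; macro arguments trimmed):

	.macro idtentry_body cfunc has_error_code:req
		PUSH_AND_CLEAR_REGS	/* previously done inside error_entry() */
		call	error_entry
		...

Because idtentry_body is expanded per entry point while error_entry() is a single shared function, the push sequence is now emitted many times, which is presumably where the size growth quoted above comes from.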

[PATCH 03/11] x86/entry: Switch the stack after error_entry() returns

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan error_entry() calls sync_regs() to settle/copy the pt_regs and switches the stack directly after sync_regs(). But error_entry() itself is also a function call, so the switch has to handle its return address as well, which makes the work complicated and tangled
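
The resulting call-site shape, as a hedged sketch (instruction details may differ from this posting):

	call	error_entry
	movq	%rax, %rsp	/* error_entry() returns the pt_regs pointer on the target stack */

With the new stack pointer handed back as a return value, error_entry() no longer has to juggle its own return address while copying and switching stacks, which is exactly the tangle the changelog describes.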

[PATCH 02/11] x86/traps: Move pt_regs only in fixup_bad_iret()

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan fixup_bad_iret() and sync_regs() have similar arguments and do similar work: they copy a full or partial pt_regs to another location, and the stack is switched after return. They are largely the same, except that fixup_bad_iret() copies not only the pt_regs but also the return address of error_entry

[PATCH 01/11] x86/entry: Use swapgs and native_iret directly in swapgs_restore_regs_and_return_to_usermode

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan swapgs_restore_regs_and_return_to_usermode() is now used only in native (non-XENPV) code, so it doesn't need the PV-aware SWAPGS and INTERRUPT_RETURN. Signed-off-by: Lai Jiangshan --- arch/x86/entry/entry_64.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
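
A before/after sketch of the tail of swapgs_restore_regs_and_return_to_usermode() (simplified, not the literal two-line diff):

	/* before: PV-aware, patchable forms */
	SWAPGS
	INTERRUPT_RETURN

	/* after: this path is native-only, so the raw forms suffice */
	swapgs
	jmp	native_iret

The macro forms only exist so XEN PV can substitute its own sequences; once XEN PV returns to user mode through its own helper (see xenpv_restore_regs_and_return_to_usermode below), the indirection here is pure overhead.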

[PATCH 00/11] x86/entry: Clean up entry code

2021-12-08 Thread Lai Jiangshan
From: Lai Jiangshan This patchset moves the stack-switch code to the place where error_entry() returns, disentangles error_entry() from XENPV and makes entry_INT80_compat use the idtentry macro. This patchset is highly related to XENPV, because it does the extra cleanup to convert SWAPGS to swapgs

[PATCH V6 03/49] x86/xen: Add xenpv_restore_regs_and_return_to_usermode()

2021-11-26 Thread Lai Jiangshan
From: Lai Jiangshan In the native case, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the trampoline stack. But XEN PV doesn't use a trampoline stack, so PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is also the kernel stack. Hence the source and destination stacks are identical in that case, which means re
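
A sketch of the helper this patch adds, reconstructed from the description (the real version also carries unwind hints and stackleak handling):

	SYM_CODE_START(xenpv_restore_regs_and_return_to_usermode)
		POP_REGS
		addq	$8, %rsp	/* skip regs->orig_ax */
		jmp	xen_iret
	SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode)

Since XEN PV stays on the kernel stack (TSS_sp0 is not a trampoline stack there), routing it through the common swapgs_restore_regs_and_return_to_usermode() would copy the stack onto itself; a dedicated, trivial return path avoids that.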

[PATCH V5 04/50] x86/xen: Add xenpv_restore_regs_and_return_to_usermode()

2021-11-10 Thread Lai Jiangshan
From: Lai Jiangshan In the native case, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the trampoline stack. But XEN PV doesn't use a trampoline stack, so PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is also the kernel stack. Hence the source and destination stacks are identical in that case, which means re

Re: [PATCH V4 04/50] x86/xen: Add xenpv_restore_regs_and_return_to_usermode()

2021-11-02 Thread Lai Jiangshan
On 2021/11/2 16:58, Borislav Petkov wrote: */ - ALTERNATIVE "", "jmp swapgs_restore_regs_and_return_to_usermode", \ + ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", \ Instead of sprinkling all those ALTERNATIVE calls everywhere, why don't you simply jump

[PATCH V4 04/50] x86/xen: Add xenpv_restore_regs_and_return_to_usermode()

2021-10-26 Thread Lai Jiangshan
From: Lai Jiangshan In the native case, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the trampoline stack. But XEN PV doesn't use a trampoline stack, so PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is also the kernel stack. Hence the source and destination stacks are identical in that case, which means re

[PATCH V3 04/49] x86/xen: Add xenpv_restore_regs_and_return_to_usermode()

2021-10-13 Thread Lai Jiangshan
From: Lai Jiangshan In the native case, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the trampoline stack. But XEN PV doesn't use a trampoline stack, so PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is also the kernel stack. Hence the source and destination stacks are identical in that case, which means re

[PATCH 1/4] x86/xen/entry: Rename xenpv_exc_nmi to noist_exc_nmi

2021-04-27 Thread Lai Jiangshan
From: Lai Jiangshan No functional change intended. Just rename it and move it to arch/x86/kernel/nmi.c so that we can reuse it later in the next patch for early NMI and KVM. Cc: Thomas Gleixner Cc: Paolo Bonzini Cc: Sean Christopherson Cc: Steven Rostedt Cc: Andi Kleen Cc