not a paravirtual guest.
Signed-off-by: Hou Wenlong
Signed-off-by: Lai Jiangshan
---
arch/x86/entry/entry_64.S | 5 ++---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/paravirt.h | 14 +++---
arch/x86/kernel/pvm.c | 1 +
arch/x86/xen
From: Hou Wenlong
For a PIE kernel, it runs at a high virtual address in the PVH entry, so
the kernel image needs to be relocated early in the PVH entry for the
PVM guest.
Signed-off-by: Hou Wenlong
Signed-off-by: Lai Jiangshan
---
arch/x86/include/asm/init.h | 5 +
arch/x86/kernel
On Wed, Jun 8, 2022 at 10:48 PM Peter Zijlstra wrote:
>
> Now that arch_cpu_idle() is expected to return with IRQs disabled,
> avoid the useless STI/CLI dance.
>
> Per the specs this is supposed to work, but nobody has yet relied upon
> this behaviour so broken implementations are possible.
I'm tot
From: Lai Jiangshan
XENPV doesn't use swapgs_restore_regs_and_return_to_usermode(),
error_entry() or entry_SYSENTER_compat(), so the PV-aware SWAPGS in
them can be changed to swapgs. There are no users of SWAPGS anymore
after this change.
The INTERRUPT_RETU
From: Lai Jiangshan
When in XENPV, it is already on the task stack, and neither
native_iret() nor native_load_gs_index() can fault, since XENPV uses
its own pvops for iret and load_gs_index(). And it doesn't need to
switch CR3. So there is no reason to call error_entry() in XENPV.
From: Lai Jiangshan
There is no user of the pv-aware SWAPGS anymore.
Signed-off-by: Lai Jiangshan
---
arch/x86/include/asm/irqflags.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 87761396e8cc..ac2e4cc47210 100644
From: Lai Jiangshan
In non-xenpv, sp0 is the trampoline stack. sync_regs() is called on
non-xenpv only, since error_entry() is not called on xenpv, so the
stack must be the trampoline stack or one of the IST stacks, and the
check in sync_regs() is unneeded.
Signed-off-by: Lai Jiangshan
From: Lai Jiangshan
entry_INT80_compat is identical to the idtentry macro except for
special handling of %rax in the prolog.
Add that prolog handling to idtentry and use idtentry for
entry_INT80_compat.
Signed-off-by: Lai Jiangshan
---
arch/x86/entry/entry_64.S | 18 ++
arch/x86/entry
From: Lai Jiangshan
XENPV has its own entry point for SYSENTER and doesn't use
entry_SYSENTER_compat, so the pv-aware SWAPGS can be changed to
swapgs.
Signed-off-by: Lai Jiangshan
---
arch/x86/entry/entry_64_compat.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a
From: Lai Jiangshan
XENPV doesn't use error_entry() anymore, so the pv-aware SWAPGS can be
changed to native swapgs.
Signed-off-by: Lai Jiangshan
---
arch/x86/entry/entry_64.S | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/
From: Lai Jiangshan
Make it next to CLAC.
Suggested-by: Peter Zijlstra
Signed-off-by: Lai Jiangshan
---
arch/x86/entry/entry_64.S | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 72ceb4b70822..ee1d4adcdab0
From: Lai Jiangshan
Moving PUSH_AND_CLEAR_REGS out of error_entry doesn't change any
functionality. It will enlarge the size:
size arch/x86/entry/entry_64.o.before:
   text    data     bss     dec     hex filename
  17916     384       0   18300    477c arch/x86/entry/entry_64.o
size --f
From: Lai Jiangshan
error_entry() calls sync_regs() to settle/copy the pt_regs and switches
the stack directly after sync_regs(). But error_entry() itself is also
a function call, so the switch also has to handle error_entry()'s own
return address, which makes the work complicated and tangly.
From: Lai Jiangshan
fixup_bad_iret() and sync_regs() take similar arguments and do similar
work: both copy a full or partial pt_regs to a new place and switch the
stack after return. They are quite the same, but fixup_bad_iret() not
only copies the pt_regs but also the return address of error_entry
From: Lai Jiangshan
swapgs_restore_regs_and_return_to_usermode() is used in native code
(non-xenpv) only now, so it doesn't need the PV-aware SWAPGS and
INTERRUPT_RETURN.
Signed-off-by: Lai Jiangshan
---
arch/x86/entry/entry_64.S | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
From: Lai Jiangshan
This patchset moves the stack-switch code to the place where
error_entry() returns, disentangles error_entry() from XENpv, and makes
entry_INT80_compat use the idtentry macro.
This patchset is highly related to XENpv, because it does the extra
cleanup to convert SWAPGS to swapgs
From: Lai Jiangshan
In the native case, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the
trampoline stack. But XEN pv doesn't use a trampoline stack, so
PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is also the kernel stack. Hence the
source and destination stacks are identical in that case, which means re
On 2021/11/2 16:58, Borislav Petkov wrote:
*/
- ALTERNATIVE "", "jmp swapgs_restore_regs_and_return_to_usermode", \
+ ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", \
Instead of sprinkling all those ALTERNATIVE calls everywhere,
why don't you simply jump
From: Lai Jiangshan
No functional change intended. Just rename it and move it to
arch/x86/kernel/nmi.c so that we can reuse it later in the next patch
for early NMI and KVM.
Cc: Thomas Gleixner
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Steven Rostedt
Cc: Andi Kleen
Cc