On 10/18/2025 12:02 AM, Michael Kelley wrote:
> From: Naman Jain <[email protected]> Sent: Friday, October 17, 2025 12:45 AM
>> Introduce a new mshv_vtl driver to provide an interface for Virtual
>> Machine Monitors like OpenVMM, and for its use as the OpenHCL paravisor
>> to control VTL0 (Virtual Trust Level 0).
>>
>> Expose devices and support IOCTLs for features like VTL creation,
>> VTL0 memory management, context switching, making hypercalls,
>> mapping the VTL0 address space to VTL2 userspace, getting new VMBus
>> messages and channel events in VTL2, etc.
>>
>> OpenVMM: https://openvmm.dev/guide/
>>
>> Changes since v8:
>> https://lore.kernel.org/all/20251013060353.67326-1-[email protected]/
>>
>> Addressed Sean's comments:
>> * Removed forcing SIGPENDING, and other minor changes, in
>>   mshv_vtl_ioctl_return_to_lower_vtl, after referring to Sean's earlier
>>   changes for xfer_to_guest_mode_handle_work.
>> * Rebased and resolved merge conflicts and compilation errors on the
>>   latest linux-next kernel tip, after Roman's Confidential VM changes,
>>   which merged recently. No functional changes.
> Did your testing against the latest linux-next include testing with
> CONFIG_X86_KERNEL_IBT=y? This is Indirect Branch Tracking, which would
> have generated a fault with your v7 series and earlier because of the
> indirect call instruction when doing VTL Return through the hypercall
> page (which doesn't have the needed ENDBR64 instruction). But now that
> VTL Return is doing a static call, that should be direct, which won't
> trigger an IBT fault.
>
> To confirm that you really are running with IBT enabled, you should see
>
>   [    0.047008] CET detected: Indirect Branch Tracking enabled
>
> in the VTL2 dmesg output. And "ibt" should appear in the "flags" output
> line of 'cat /proc/cpuinfo' (or the 'lscpu' command).
>
> Michael
Hi Michael,
I have now tested with and without IBT. With IBT enabled, I do see the
log you pasted in the VTL2 dmesg output, and there are no failures.
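For anyone following along, here is a minimal sketch of what IBT enforces
(made-up names, not driver code; it assumes CONFIG_X86_KERNEL_IBT=y, i.e. a
kernel built with -fcf-protection=branch):

<-snippet->
/*
 * IBT only tracks indirect branches: an indirect call must land on an
 * endbr64 instruction, which the compiler emits at the start of C functions
 * whose address can be taken.  A hand-written hypercall page has no endbr64,
 * so an indirect call into it raises a control-protection fault (#CP).
 * Direct calls, like the one the static call now generates for VTL Return,
 * are not checked.
 */
void ibt_demo(void (*indirect_target)(void))
{
	indirect_target();	/* must land on an endbr64 in the callee */
}
<-end->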
However, this additional testing uncovered another issue: a crash in
VTL0, some time after boot, caused by rbp being clobbered in the
mshv_vtl_return_hypercall() wrapper function.

Thanks a lot, Michael, for helping me offline to understand and identify
the issue.
Hi Peter, Paolo, Sean,
Here is the summary of the problem and the fix:
The assembly code calls mshv_vtl_return_hypercall() after handling rbp
properly. However, the current C wrapper updates rbp to rsp (frame
pointer setup) before making the static call, so the VTL switch happens
with rbp pointing at the VTL2 stack instead of the value the assembly
prepared, and VTL0 later crashes.
<-snippet->
arch/x86/hyperv/mshv_vtl_asm.S:

	/* make a hypercall to switch VTL */
	call mshv_vtl_return_hypercall

arch/x86/hyperv/hv_vtl.c:

noinstr void mshv_vtl_return_hypercall(void)
{
	asm volatile ("call " STATIC_CALL_TRAMP_STR(__mshv_vtl_return_hypercall) :
		      ASM_CALL_CONSTRAINT);
}
(gdb) disassemble mshv_vtl_return_hypercall
Dump of assembler code for function mshv_vtl_return_hypercall:
   0xffffffff886981a0 <+0>:	push   %rbp
   0xffffffff886981a1 <+1>:	mov    %rsp,%rbp
   0xffffffff886981a4 <+4>:	call   0xffffffff886a77a8 <__SCT____mshv_vtl_return_hypercall>
   0xffffffff886981a9 <+9>:	pop    %rbp
   0xffffffff886981aa <+10>:	jmp    0xffffffff886a7670 <__x86_return_thunk>
<-end->
This is fixed by removing ASM_CALL_CONSTRAINT from the above function.
That constraint tells the compiler the inline asm contains a call, which
(with frame pointers enabled) forces it to set up rbp first; dropping it
makes sure no rbp save/setup is emitted around the call instruction.
<-snippet->
(gdb) disassemble mshv_vtl_return_hypercall
Dump of assembler code for function mshv_vtl_return_hypercall:
   0xffffffff886981a0 <+0>:	call   0xffffffff886a77a8 <__SCT____mshv_vtl_return_hypercall>
   0xffffffff886981a5 <+5>:	jmp    0xffffffff886a7670 <__x86_return_thunk>
End of assembler dump.
<-end->
But then objtool reports a frame pointer warning; since the missing frame
setup is intentional here, I will need to add STACK_FRAME_NON_STANDARD_FP
to suppress it:

vmlinux.o: warning: objtool: mshv_vtl_return_hypercall+0x4: call without frame pointer save/setup
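For reference, STACK_FRAME_NON_STANDARD_FP() comes from <linux/objtool.h>:
with CONFIG_FRAME_POINTER=y it marks the function so objtool skips its
frame pointer validation, and it is a no-op otherwise. A minimal sketch
with made-up names (not the driver code):

<-snippet->
#include <linux/objtool.h>

/*
 * Hypothetical example: a thin wrapper that intentionally has no frame
 * pointer save/setup around the call it makes (example_target is assumed
 * to exist elsewhere).
 */
static void example_bare_call(void)
{
	asm volatile ("call example_target");
}
/* Tell objtool not to flag the missing frame pointer setup here. */
STACK_FRAME_NON_STANDARD_FP(example_bare_call);
<-end->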
During code review, I also found that CR2 handling was missing after the
mshv_vtl_return_hypercall call in the assembly, which I will
*additionally* fix in the next version.
I am pasting the diff (on top of this patch) at the end, which should fix
these issues. Please let me know if I should be doing this differently or
if you foresee any issues with this approach.
Regards,
Naman
------------------------
diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
index 636e9253b81e..c61d2dce4d68 100644
--- a/arch/x86/hyperv/hv_vtl.c
+++ b/arch/x86/hyperv/hv_vtl.c
@@ -258,9 +258,9 @@ DEFINE_STATIC_CALL_NULL(__mshv_vtl_return_hypercall, void (*)(void));
 noinstr void mshv_vtl_return_hypercall(void)
 {
-	asm volatile ("call " STATIC_CALL_TRAMP_STR(__mshv_vtl_return_hypercall) :
-		      ASM_CALL_CONSTRAINT);
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(__mshv_vtl_return_hypercall));
 }
+STACK_FRAME_NON_STANDARD_FP(mshv_vtl_return_hypercall);
 
 extern void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
diff --git a/arch/x86/hyperv/mshv_vtl_asm.S b/arch/x86/hyperv/mshv_vtl_asm.S
index 4085073a5876..5f4b511749f8 100644
--- a/arch/x86/hyperv/mshv_vtl_asm.S
+++ b/arch/x86/hyperv/mshv_vtl_asm.S
@@ -65,6 +65,9 @@ SYM_FUNC_START(__mshv_vtl_return_call)
 	mov 16(%rsp), %rcx
 	mov 24(%rsp), %rax
 
+	mov %rdx, MSHV_VTL_CPU_CONTEXT_rdx(%rax)
+	mov %cr2, %rdx
+	mov %rdx, MSHV_VTL_CPU_CONTEXT_cr2(%rax)
 	pop MSHV_VTL_CPU_CONTEXT_rcx(%rax)
 	pop MSHV_VTL_CPU_CONTEXT_rax(%rax)
 	add $16, %rsp