On 1/20/26 13:50, Jacob Pan wrote:
> Hi Mukesh,
>
> On Mon, 19 Jan 2026 22:42:15 -0800
> Mukesh R <[email protected]> wrote:
>> From: Mukesh Rathor <[email protected]>
>>
>> Implement passthru of PCI devices to unprivileged virtual machines
>> (VMs) when Linux is running as a privileged VM on the Microsoft Hyper-V
>> hypervisor. This support is made to fit within the workings of the VFIO
>> framework, and any VMM needing to use it must use the VFIO subsystem.
>> This supports both full device passthru and SR-IOV based VFs.
>>
>> There are 3 cases where Linux can run as a privileged VM (aka MSHV):
>> Baremetal root (meaning Hyper-V+Linux), L1VH, and Nested.
> I think some introduction/background to L1VH would help.
OK, I can add something, but L1VH was introduced quite thoroughly earlier;
search the mshv commit history for "l1vh".
>> At a high level, the hypervisor supports traditional mapped iommu
>> domains that use explicit map and unmap hypercalls for mapping and
>> unmapping guest RAM into the iommu subsystem.
> It may be clearer to state that the hypervisor supports Linux IOMMU
> paging domains through map/unmap hypercalls, mapping GPAs to HPAs using
> stage-2 I/O page tables.
sure.
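To make the paging-domain model concrete, here is a rough user-space sketch of
the bookkeeping such a domain performs: GPA -> HPA mappings at page granularity,
kept in sync with the stage-2 I/O page tables via explicit map/unmap hypercalls.
All names here (fake_hv_map_gpa and friends) are made up for illustration; this
is not the real Hyper-V hypercall ABI or the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model only: a paging domain tracks GPA->HPA mappings and
 * issues explicit map/unmap "hypercalls" (stubbed out here) so the
 * hypervisor's stage-2 I/O page tables stay in sync. */

#define FAKE_PAGE_SIZE 4096ULL
#define MAX_ENTRIES    64

struct s2_entry {
	uint64_t gpa;
	uint64_t hpa;
	int      valid;
};

static struct s2_entry s2_table[MAX_ENTRIES];

/* Stand-ins for the real map/unmap hypercalls (names are hypothetical). */
static int fake_hv_map_gpa(uint64_t gpa, uint64_t hpa)
{
	(void)gpa; (void)hpa;
	return 0;
}

static int fake_hv_unmap_gpa(uint64_t gpa)
{
	(void)gpa;
	return 0;
}

/* Map one page: issue the map hypercall, then record GPA->HPA. */
int s2_map(uint64_t gpa, uint64_t hpa)
{
	for (size_t i = 0; i < MAX_ENTRIES; i++) {
		if (!s2_table[i].valid) {
			if (fake_hv_map_gpa(gpa, hpa))
				return -1;
			s2_table[i] = (struct s2_entry){ gpa, hpa, 1 };
			return 0;
		}
	}
	return -1;		/* table full */
}

/* Translate a GPA; fills *hpa and returns 0, or -1 if unmapped. */
int s2_translate(uint64_t gpa, uint64_t *hpa)
{
	uint64_t base = gpa & ~(FAKE_PAGE_SIZE - 1);

	for (size_t i = 0; i < MAX_ENTRIES; i++) {
		if (s2_table[i].valid && s2_table[i].gpa == base) {
			*hpa = s2_table[i].hpa | (gpa & (FAKE_PAGE_SIZE - 1));
			return 0;
		}
	}
	return -1;
}

/* Unmap one page: issue the unmap hypercall and drop the entry. */
int s2_unmap(uint64_t gpa)
{
	for (size_t i = 0; i < MAX_ENTRIES; i++) {
		if (s2_table[i].valid && s2_table[i].gpa == gpa) {
			if (fake_hv_unmap_gpa(gpa))
				return -1;
			s2_table[i].valid = 0;
			return 0;
		}
	}
	return -1;		/* not mapped */
}
```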
>> Hyper-V also has a
>> concept of direct attach devices whereby the iommu subsystem simply
>> uses the guest HW page table (ept/npt/..). This series adds support
>> for both, and both are made to work in the VFIO type1 subsystem.
> This may warrant introducing a new IOMMU domain feature flag, as it
> performs mappings but does not support map/unmap semantics in the same
> way as a paging domain.
Yeah, I was hoping we could get by without it for now. At least in the case of
Cloud Hypervisor, the entire guest RAM is mapped anyway. We can document
it and work on enhancements, which become much easier once we have a baseline.
For now, it's a paging domain with all pages pinned.. :).
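A rough sketch of what "entire guest RAM is mapped anyway" looks like from the
VMM side: instead of mapping and unmapping on demand, the whole of guest RAM is
mapped (and pinned) once at startup. The chunking and the fake_map_range stub
below are hypothetical, purely for illustration; real VMMs map per memory
region through the VFIO type1 ioctls.

```c
#include <assert.h>
#include <stdint.h>

#define FAKE_PAGE_SIZE 4096ULL

/* Stub for whatever primitive pins and maps one run of guest pages. */
static int fake_map_range(uint64_t gpa, uint64_t len)
{
	(void)gpa; (void)len;
	return 0;
}

/* Map [0, ram_size) in fixed-size chunks at startup; returns the
 * number of map calls issued, or -1 on error.  Both ram_size and
 * chunk must be page aligned. */
long map_all_guest_ram(uint64_t ram_size, uint64_t chunk)
{
	long calls = 0;

	if (!chunk || ram_size % FAKE_PAGE_SIZE || chunk % FAKE_PAGE_SIZE)
		return -1;

	for (uint64_t gpa = 0; gpa < ram_size; gpa += chunk) {
		uint64_t len = (ram_size - gpa < chunk) ? ram_size - gpa
						        : chunk;

		if (fake_map_range(gpa, len))
			return -1;
		calls++;
	}
	return calls;
}
```

Once everything is mapped up front, a direct-attach domain that reuses the
guest HW page table behaves, from the VMM's point of view, much like a paging
domain whose mappings never change.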
>> While this Part I focuses on memory mappings, the upcoming Part II
>> will focus on irq bypass along with some minor irq remapping
>> updates.
>>
>> This patch series was tested using Cloud Hypervisor version 48. QEMU
>> support of MSHV is in the works, and that will be extended to include
>> PCI passthru and SR-IOV support in the near future.
>>
>> Based on: 8f0b4cce4481 (origin/hyperv-next)
>>
>> Thanks,
>> -Mukesh
>> Mukesh Rathor (15):
>>   iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
>>   x86/hyperv: cosmetic changes in irqdomain.c for readability
>>   x86/hyperv: add insufficient memory support in irqdomain.c
>>   mshv: Provide a way to get partition id if running in a VMM process
>>   mshv: Declarations and definitions for VFIO-MSHV bridge device
>>   mshv: Implement mshv bridge device for VFIO
>>   mshv: Add ioctl support for MSHV-VFIO bridge device
>>   PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg
>>   mshv: Import data structs around device domains and irq remapping
>>   PCI: hv: Build device id for a VMBus device
>>   x86/hyperv: Build logical device ids for PCI passthru hcalls
>>   x86/hyperv: Implement hyperv virtual iommu
>>   x86/hyperv: Basic interrupt support for direct attached devices
>>   mshv: Remove mapping of mmio space during map user ioctl
>>   mshv: Populate mmio mappings for PCI passthru
>>  MAINTAINERS                         |    1 +
>>  arch/arm64/include/asm/mshyperv.h   |   15 +
>>  arch/x86/hyperv/irqdomain.c         |  314 ++++++---
>>  arch/x86/include/asm/mshyperv.h     |   21 +
>>  arch/x86/kernel/pci-dma.c           |    2 +
>>  drivers/hv/Makefile                 |    3 +-
>>  drivers/hv/mshv_root.h              |   24 +
>>  drivers/hv/mshv_root_main.c         |  296 +++++++-
>>  drivers/hv/mshv_vfio.c              |  210 ++++++
>>  drivers/iommu/Kconfig               |    1 +
>>  drivers/iommu/Makefile              |    2 +-
>>  drivers/iommu/hyperv-iommu.c        | 1004 +++++++++++++++++++++------
>>  drivers/iommu/hyperv-irq.c          |  330 +++++++++
>>  drivers/pci/controller/pci-hyperv.c |  207 ++++--
>>  include/asm-generic/mshyperv.h      |    1 +
>>  include/hyperv/hvgdk_mini.h         |   11 +
>>  include/hyperv/hvhdk_mini.h         |  112 +++
>>  include/linux/hyperv.h              |    6 +
>>  include/uapi/linux/mshv.h           |   31 +
>>  19 files changed, 2182 insertions(+), 409 deletions(-)
>>  create mode 100644 drivers/hv/mshv_vfio.c
>>  create mode 100644 drivers/iommu/hyperv-irq.c