The kernel allows the user to switch the IOMMU domain of a device, e.g.,
between a DMA domain and an identity domain. When this happens in IOMMU
scalable mode, a pasid cache invalidation request is sent, but this
request is ignored by the vIOMMU. As a result the device stays bound to
the wrong address space and DMA fails. The issue exists in scalable mode
with both first stage and second stage translations, and with both
emulated and passthrough devices.
Taking a network device as an example, the sequence below triggers the issue:
1. start a guest with iommu=pt
2. echo 0000:01:00.0 > /sys/bus/pci/drivers/virtio-pci/unbind
3. echo DMA > /sys/kernel/iommu_groups/6/type
4. echo 0000:01:00.0 > /sys/bus/pci/drivers/virtio-pci/bind
5. Ping test
Fix this by switching the address space in the pasid cache invalidation
handler.
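Conceptually, the sync path re-evaluates the device's address space after
the cached pasid entry has been refreshed or cleared. A minimal sketch of
that idea, reusing only the helper names visible in the diff below (the
wrapper function name here is illustrative, not the actual code):

    /* Sketch only: once the cached pasid entry changes, re-select the
     * device's address space and replay mappings into it. */
    static void pasid_cache_entry_updated(VTDAddressSpace *vtd_as)
    {
        /* ... cached pasid entry was just refreshed or invalidated ... */
        vtd_switch_address_space(vtd_as);  /* pick DMA vs. identity AS */
        vtd_address_space_sync(vtd_as);    /* sync mappings to the new AS */
    }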
Fixes: 4a4f219e8a10 ("intel_iommu: add scalable-mode option to make scalable mode work")
Signed-off-by: Zhenzhong Duan <[email protected]>
---
hw/i386/intel_iommu.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index d656e9c256..30275a4f23 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3104,7 +3104,7 @@ static void vtd_pasid_cache_sync_locked(gpointer key, gpointer value,
* reset where the whole guest memory is treated as zeroed.
*/
pc_entry->valid = false;
- return;
+ goto switch_as;
}
/*
@@ -3134,6 +3134,10 @@ static void vtd_pasid_cache_sync_locked(gpointer key, gpointer value,
pc_entry->pasid_entry = pe;
pc_entry->valid = true;
+
+switch_as:
+ vtd_switch_address_space(vtd_as);
+ vtd_address_space_sync(vtd_as);
}
static void vtd_pasid_cache_sync(IntelIOMMUState *s, VTDPASIDCacheInfo *pc_info)
--
2.47.1