Hi Joerg,
One fix is queued for v5.18. It aims to:
- Calculate a feasible mask for non-aligned page-selective
IOTLB invalidation.
Please consider it for the iommu/fix branch.
Best regards,
Lu Baolu
David Stevens (1):
iommu/vt-d: Calculate mask for non-aligned flushes
drivers/iommu/
From: David Stevens
Calculate the appropriate mask for non-size-aligned page-selective
invalidation. Since psi uses the mask value to mask out the lower-order
bits of the target address, properly flushing the iotlb requires using a
mask value such that [pfn, pfn+pages) all lie within the flushed
size-aligned region.
dmar_insert_one_dev_info() returns the passed-in domain on success and
NULL on failure. This doesn't make much sense. Change it to return an
integer instead.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel/iommu.c | 24 +---
1 file changed, 9 insertions(+), 15 deletions(-)
diff --git a/d
On 2022/4/7 23:23, Jason Gunthorpe wrote:
While the comment was correct that this flag was intended to convey the
block no-snoop support in the IOMMU, it has become widely implemented and
used to mean the IOMMU supports IOMMU_CACHE as a map flag. Only the Intel
driver was different.
Now that the
On 2022/4/8 16:16, Tian, Kevin wrote:
From: Jason Gunthorpe
Sent: Thursday, April 7, 2022 11:24 PM
IOMMU_CACHE means "normal DMA to this iommu_domain's IOVA should
be cache
coherent" and is used by the DMA API. The definition allows for special
non-coherent DMA to exist - ie processing of the n
On 2022/4/8 16:05, Tian, Kevin wrote:
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 2f9891cb3d0014..1f930c0c225d94 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -540,6 +540,7 @@ struct dmar_domain {
u8 has_iotlb_device: 1;