On 2021-06-07 19:25, Nadav Amit wrote:
From: Nadav Amit <na...@vmware.com>

On virtual machines, software must flush the IOTLB after each page table
entry update.

The iommu_map_sg() code iterates through the given scatter-gather list
and invokes iommu_map() for each element, which in turn calls into the
vendor IOMMU driver through the iommu_ops callback. As a result, a
single sg mapping may lead to multiple IOTLB flushes.

Fix this by adding an amd_iommu_iotlb_sync_map() callback and flushing
only once, after all sg mappings have been set.

This commit follows and was inspired by commit 933fcd01e97e2
("iommu/vt-d: Add iotlb_sync_map callback").

Cc: Joerg Roedel <j...@8bytes.org>
Cc: Will Deacon <w...@kernel.org>
Cc: Jiajun Cao <caojia...@vmware.com>
Cc: Robin Murphy <robin.mur...@arm.com>
Cc: Lu Baolu <baolu...@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Nadav Amit <na...@vmware.com>
---
  drivers/iommu/amd/iommu.c | 15 ++++++++++++---
  1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 128f2e889ced..dd23566f1db8 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2027,6 +2027,16 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
        return ret;
  }
+static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
+                                    unsigned long iova, size_t size)
+{
+       struct protection_domain *domain = to_pdomain(dom);
+       struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+
+       if (ops->map)

Not too critical since you're only moving existing code around, but is ops->map ever not set? Either way the check ends up looking rather out-of-place here :/

It's not very clear what the original intent was - I do wonder whether it's supposed to be related to PAGE_MODE_NONE, but given that amd_iommu_map() has an explicit check and errors out early in that case (see the sketch appended after the quoted diff), we'd never get here anyway. Possibly something to come back and clean up later?

Robin.

+               domain_flush_np_cache(domain, iova, size);
+}
+
  static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
                         phys_addr_t paddr, size_t page_size, int iommu_prot,
                         gfp_t gfp)
@@ -2045,10 +2055,8 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
        if (iommu_prot & IOMMU_WRITE)
                prot |= IOMMU_PROT_IW;
-       if (ops->map) {
+       if (ops->map)
                ret = ops->map(ops, iova, paddr, page_size, prot, gfp);
-               domain_flush_np_cache(domain, iova, page_size);
-       }
        return ret;
  }
@@ -2249,6 +2257,7 @@ const struct iommu_ops amd_iommu_ops = {
        .attach_dev = amd_iommu_attach_device,
        .detach_dev = amd_iommu_detach_device,
        .map = amd_iommu_map,
+       .iotlb_sync_map = amd_iommu_iotlb_sync_map,
        .unmap = amd_iommu_unmap,
        .iova_to_phys = amd_iommu_iova_to_phys,
        .probe_device = amd_iommu_probe_device,
