From: Joerg Roedel <[email protected]>

The map and unmap functions of the IOMMU-API changed their
semantics: they no longer guarantee that the hardware
TLBs are synchronized with the page-table updates they make.

To make the conversion easier, new synchronized functions
have been introduced which provide these guarantees again,
until the code is converted to the new TLB-flush interface
of the IOMMU-API, which allows certain optimizations.

But for now, just convert this code to use the synchronized
functions so that it will behave as before.

Cc: Robin Murphy <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Nate Watterson <[email protected]>
Cc: Eric Auger <[email protected]>
Cc: Mitchel Humpherys <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
---
 drivers/iommu/dma-iommu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9d1cebe..38c41a2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -417,7 +417,7 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
        dma_addr -= iova_off;
        size = iova_align(iovad, size + iova_off);
 
-       WARN_ON(iommu_unmap(domain, dma_addr, size) != size);
+       WARN_ON(iommu_unmap_sync(domain, dma_addr, size) != size);
        iommu_dma_free_iova(cookie, dma_addr, size);
 }
 
@@ -572,7 +572,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
                sg_miter_stop(&miter);
        }
 
-       if (iommu_map_sg(domain, iova, sgt.sgl, sgt.orig_nents, prot)
+       if (iommu_map_sg_sync(domain, iova, sgt.sgl, sgt.orig_nents, prot)
                        < size)
                goto out_free_sg;
 
@@ -631,7 +631,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
        if (!iova)
                return IOMMU_MAPPING_ERROR;
 
-       if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
+       if (iommu_map_sync(domain, iova, phys - iova_off, size, prot)) {
                iommu_dma_free_iova(cookie, iova, size);
                return IOMMU_MAPPING_ERROR;
        }
@@ -791,7 +791,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
         * We'll leave any physical concatenation to the IOMMU driver's
         * implementation - it knows better than we do.
         */
-       if (iommu_map_sg(domain, iova, sg, nents, prot) < iova_len)
+       if (iommu_map_sg_sync(domain, iova, sg, nents, prot) < iova_len)
                goto out_free_iova;
 
        return __finalise_sg(dev, sg, nents, iova);
-- 
2.7.4
