From: Joerg Roedel <jroe...@suse.de>

The map and unmap functions of the IOMMU-API changed their
semantics: they no longer guarantee that the hardware
TLBs are synchronized with the page-table updates they made.

To make conversion easier, new synchronized functions have
been introduced which restore these guarantees, until the
code is converted to use the new TLB-flush interface of the
IOMMU-API, which allows certain optimizations.

But for now, just convert this code to use the synchronized
functions so that it will behave as before.

Cc: Ben Skeggs <bske...@redhat.com>
Cc: David Airlie <airl...@linux.ie>
Cc: dri-de...@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Signed-off-by: Joerg Roedel <jroe...@suse.de>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
index cd5adbe..3f0de47 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
@@ -322,8 +322,9 @@ gk20a_instobj_dtor_iommu(struct nvkm_memory *memory)
 
        /* Unmap pages from GPU address space and free them */
        for (i = 0; i < node->base.mem.size; i++) {
-               iommu_unmap(imem->domain,
-                           (r->offset + i) << imem->iommu_pgshift, PAGE_SIZE);
+               iommu_unmap_sync(imem->domain,
+                                (r->offset + i) << imem->iommu_pgshift,
+                                PAGE_SIZE);
                dma_unmap_page(dev, node->dma_addrs[i], PAGE_SIZE,
                               DMA_BIDIRECTIONAL);
                __free_page(node->pages[i]);
@@ -458,14 +459,15 @@ gk20a_instobj_ctor_iommu(struct gk20a_instmem *imem, u32 npages, u32 align,
        for (i = 0; i < npages; i++) {
                u32 offset = (r->offset + i) << imem->iommu_pgshift;
 
-               ret = iommu_map(imem->domain, offset, node->dma_addrs[i],
-                               PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
+               ret = iommu_map_sync(imem->domain, offset, node->dma_addrs[i],
+                                    PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
                if (ret < 0) {
                        nvkm_error(subdev, "IOMMU mapping failure: %d\n", ret);
 
                        while (i-- > 0) {
                                offset -= PAGE_SIZE;
-                               iommu_unmap(imem->domain, offset, PAGE_SIZE);
+                               iommu_unmap_sync(imem->domain, offset,
+                                                PAGE_SIZE);
                        }
                        goto release_area;
                }
-- 
2.7.4
