GFP_DMA32 allocation requests are satisfied from the DMA32 zone.
Therefore, pages allocated with GFP_DMA32 can never come from
Highmem.
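
For reference, a minimal sketch of the pattern this relies on (the
fill_invalid_pte() helper below is illustrative only, not part of the
driver; it assumes the page is allocated with GFP_DMA32 as described
above):

    #include <linux/gfp.h>
    #include <linux/mm.h>

    static int fill_invalid_pte(u32 invalid_pte)
    {
            struct page *page;
            u32 *v;
            int i;

            /* DMA32 zone pages are never highmem pages */
            page = alloc_page(GFP_DMA32 | __GFP_ZERO);
            if (!page)
                    return -ENOMEM;

            /* Lowmem pages have a permanent direct mapping, so
             * page_address() is valid without any kmap*() call.
             */
            v = page_address(page);
            for (i = 0; i < PAGE_SIZE / sizeof(u32); i++)
                    v[i] = invalid_pte;

            __free_page(page);
            return 0;
    }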

Avoid using calls to kmap() / kunmap() as kmap() is being
deprecated [1].

Avoid using calls to kmap_local_page() / kunmap_local() as the
code does not depend on the implicit disabling of migration
provided by local mappings, which would only add unnecessary
overhead here [2].

Hence, use a plain page_address() directly in the
psb_mmu_alloc_pd function.

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.we...@intel.com/
[2]: https://lwn.net/Articles/836503/

Suggested-by: Ira Weiny <ira.we...@intel.com>
Signed-off-by: Sumitra Sharma <sumitraar...@gmail.com>
---
 drivers/gpu/drm/gma500/mmu.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
index a70b01ccdf70..1a44dd062fd1 100644
--- a/drivers/gpu/drm/gma500/mmu.c
+++ b/drivers/gpu/drm/gma500/mmu.c
@@ -184,20 +184,15 @@ struct psb_mmu_pd *psb_mmu_alloc_pd(struct psb_mmu_driver *driver,
                pd->invalid_pte = 0;
        }
 
-       v = kmap_local_page(pd->dummy_pt);
+       v = page_address(pd->dummy_pt);
        for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
                v[i] = pd->invalid_pte;
 
-       kunmap_local(v);
-
-       v = kmap_local_page(pd->p);
+       v = page_address(pd->p);
        for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
                v[i] = pd->invalid_pde;
 
-       kunmap_local(v);
-
-       clear_page(kmap(pd->dummy_page));
-       kunmap(pd->dummy_page);
+       clear_page(page_address(pd->dummy_page));
 
        pd->tables = vmalloc_user(sizeof(struct psb_mmu_pt *) * 1024);
        if (!pd->tables)
-- 
2.25.1
