This is v4 of [1].

This patch provides a workaround for performing partial unmaps of a VM
region backed by huge pages. Since such partial unmaps are now
disallowed, the patch makes sure unmaps are done at backing-page
granularity, and then restores the regions left untouched by the
VM_BIND unmap operation.
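For illustration, the core idea is to widen the requested unmap range
to the enclosing folio boundaries, unmap the whole widened range, and
then remap the two leftover intervals on either side. Below is a
minimal sketch of that boundary arithmetic, assuming the kernel's
ALIGN()/ALIGN_DOWN() helpers; the function name expand_unmap_range()
and the prev_*/next_* outputs are made up for this cover letter and
are not the code in the patch:

  /* Hypothetical sketch only; not taken from the patch itself. */
  #include <linux/align.h>
  #include <linux/types.h>

  static void expand_unmap_range(u64 addr, u64 range, u64 folio_sz,
                                 u64 *prev_addr, u64 *prev_range,
                                 u64 *next_addr, u64 *next_range)
  {
          /* Widen the unmap range to folio (backing-page) boundaries. */
          u64 start = ALIGN_DOWN(addr, folio_sz);
          u64 end = ALIGN(addr + range, folio_sz);

          /* Untouched interval before the unmap, to be remapped. */
          *prev_addr = start;
          *prev_range = addr - start;

          /* Untouched interval after the unmap, to be remapped. */
          *next_addr = addr + range;
          *next_range = end - *next_addr;
  }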
A patch series with IGT tests to validate this functionality can be
found at [2].

Changelog:

v4:
 - Added VM lock for the expanded unmap region.
 - No longer pass the original VMA when calculating the expanded unmap
   address and ranges, or for calculating BO offsets, as these can be
   obtained from the map VA ops.
 - Added more comments and renamed the unmap boundary calculation
   function.
 - Calculate prev and next map offsets, sizes and addresses in the
   block prelude for the sake of clarity.
 - Addressed some minor style nits.
 - Rebased the patch onto the latest drm-misc.

v3:
 - Reworked the address logic so that the VAs of the prev and next
   gpuva ops are used in the calculations instead of those of the
   original unmap VMA.
 - Got rid of the return struct from get_map_unmap_intervals() and now
   derive the panthor_vm_map_pages() arguments by fiddling with the
   gpuvas' respective GEM object offsets.
 - Use folio_size() instead of folio_order(), because the latter
   implies page sizes from the CPU MMU's perspective rather than the
   GPU's.

v2:
 - Fixed a bug caused by confusion between the semantics of the gpuva
   prev and next op boundaries and those of the original VMA object.
 - Coalesced all unmap operations into a single one.
 - Refactored and simplified the code.

[1] https://lore.kernel.org/dri-devel/[email protected]/T/#t
[2] https://lore.kernel.org/igt-dev/[email protected]/T/#

Adrián Larumbe (1):
  drm/panthor: Support partial unmaps of huge pages

 drivers/gpu/drm/panthor/panthor_mmu.c | 99 ++++++++++++++++++++++++---
 1 file changed, 91 insertions(+), 8 deletions(-)


base-commit: d8684ae1cdcf848d21e00bc0e0de821d694a207b
prerequisite-patch-id: 3b0f61bfc22a616a205ff7c15d546d2049fd53de
-- 
2.51.2
