On 11/17/2025 9:06 AM, Kuehling, Felix wrote:
On 2025-11-17 10:00, Philip Yang wrote:

On 2025-11-14 18:26, Felix Kuehling wrote:

On 2025-11-13 16:09, Xiaogang.Chen wrote:
From: Xiaogang Chen <[email protected]>

Fixes: 7ef6b2d4b7e5 ("drm/amdkfd: remap unaligned svm ranges that have split")

When splitting svm ranges that have been mapped using huge pages, use the huge page size (2MB) to check the split range alignment, not prange->granularity, which is the migration granularity.

Signed-off-by: Xiaogang Chen <[email protected]>
---
  drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 16 ++++++++++++----
  1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 521c14c7a789..c60d8134db45 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -1146,11 +1146,14 @@ svm_range_split_tail(struct svm_range *prange, uint64_t new_last,
  {
      struct svm_range *tail = NULL;
      int r = svm_range_split(prange, prange->start, new_last, &tail);
+    bool huge_page_mapping = IS_ALIGNED(prange->start, 512);

Instead of hard-coding 512 here, it would be more future-proof to call amdgpu_vm_pt_num_entries(adev, AMDGPU_VM_PDB0). That's currently a static function in amdgpu_vm_pt.c. Christian, would you object to making that non-static?

We don't have a way to know whether prange has a huge page mapping; that depends on the virtual address being 2MB aligned and the physical pages being contiguous, which is decided inside amdgpu_vm_ptes_update. Even if prange->start is not 2MB aligned, prange could have huge page mappings.

Right, I guess the correct condition would check that the range contains at least one potential huge page. I.e., the start address aligned up to 2MB is smaller than the end address aligned down to 2MB, and the unaligned split is between those two aligned addresses.

If prange->start is not 2MB aligned, amdgpu_vm_ptes_update will use 2MB fragments for the aligned middle portion of the range if possible; the starting and ending sub-ranges of prange are not mapped with 2MB huge pages.

I think the check is on the start address of the split range: it is not aligned, and the whole split range is inside the 2MB-aligned part of prange (from the aligned-down split start address to the end of the split range is inside prange).
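The condition being discussed above could be sketched roughly as follows. This is a minimal illustration, not the actual kernel code: the helper names (range_has_potential_huge_page, split_needs_remap) are hypothetical, the ALIGN macros are simplified power-of-two versions of the kernel's, and start/last are page frame numbers with last inclusive, as in struct svm_range.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HUGE_PAGE_PAGES 512ULL          /* 2MB huge page / 4KB base pages */
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))
#define ALIGN_UP(x, a)   ALIGN_DOWN((x) + (a) - 1, (a))

/* The range contains at least one potential 2MB huge page iff the start
 * aligned up to a huge-page boundary is still below the end aligned down
 * to one (last is inclusive, hence +1). */
static bool range_has_potential_huge_page(uint64_t start, uint64_t last)
{
	uint64_t huge_start = ALIGN_UP(start, HUGE_PAGE_PAGES);
	uint64_t huge_end = ALIGN_DOWN(last + 1, HUGE_PAGE_PAGES);

	return huge_start < huge_end;
}

/* An unaligned split point needs a remap only if it falls strictly inside
 * the huge-page-aligned middle portion of the range; splits in the
 * unaligned head or tail sub-ranges cannot break a huge page mapping. */
static bool split_needs_remap(uint64_t start, uint64_t last, uint64_t split)
{
	uint64_t huge_start = ALIGN_UP(start, HUGE_PAGE_PAGES);
	uint64_t huge_end = ALIGN_DOWN(last + 1, HUGE_PAGE_PAGES);

	return (split % HUGE_PAGE_PAGES) != 0 &&
	       split > huge_start && split < huge_end;
}
```

This is only a sketch of the address arithmetic; as Philip notes, whether a huge page mapping actually exists also depends on physical contiguity decided inside amdgpu_vm_ptes_update, so the check is conservative.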

Regards

Xiaogang


  Felix



Regards,

Philip


Also, to improve the check of whether the range can really use huge pages, you could add a check that its size is at least 512 pages.
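For reference, that size check could look like the sketch below, assuming 4KB base pages so a 2MB huge page covers 512 of them. The helper name is hypothetical, and svm_range's last is inclusive, hence the +1.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HUGE_PAGE_PAGES 512ULL  /* 2MB / 4KB base pages */

/* Necessary (but not sufficient) condition: a range shorter than 512
 * pages can never contain a full 2MB huge page, so the remap check can
 * be skipped entirely for such ranges. */
static bool range_can_contain_huge_page(uint64_t start, uint64_t last)
{
	return (last - start + 1) >= HUGE_PAGE_PAGES;
}
```

Note this is only necessary, not sufficient: an unaligned range of exactly 512 pages still straddles two huge-page regions without fully covering either.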



        if (!r) {
          list_add(&tail->list, insert_list);
-        if (!IS_ALIGNED(new_last + 1, 1UL << prange->granularity))
-            list_add(&tail->update_list, remap_list);
+        if (huge_page_mapping) {
+            if (!IS_ALIGNED(tail->start, 512))

Make that one condition: if (huge_page_mapping && !IS_ALIGNED...)


+            list_add(&tail->update_list, remap_list);
+        }
      }
      return r;
  }
@@ -1162,11 +1165,16 @@ svm_range_split_head(struct svm_range *prange, uint64_t new_start,
      struct svm_range *head = NULL;
      int r = svm_range_split(prange, new_start, prange->last, &head);
+    bool huge_page_mapping = IS_ALIGNED(prange->start, 512);

Why the blank line before huge_page_mapping? It's part of your variable declarations.


+
      if (!r) {
          list_add(&head->list, insert_list);
-        if (!IS_ALIGNED(new_start, 1UL << prange->granularity))
-            list_add(&head->update_list, remap_list);
+        if (huge_page_mapping) {
+            if (!IS_ALIGNED(prange->start, 512))

Same as above.

Regards,
  Felix


+            list_add(&head->update_list, remap_list);
+        }
      }
+
      return r;
  }
