The min_page_size restriction only applies to pages inserted into the
GTT, and our paging structures need at most 4K, so simply ignore the
min_page_size restriction here; otherwise we might see severe
over-allocation on some devices.
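
To illustrate the over-allocation, here is a minimal userspace sketch
(not part of the patch; the 64K min_page_size figure and the
round_up_pow2() helper are just assumptions for the example):

    /* Illustrative only: how rounding a small allocation up to a large
     * min_page_size inflates it. Values are assumptions for the example.
     */
    #include <stdio.h>

    #define SZ_4K  4096u
    #define SZ_64K 65536u

    /* Round size up to the next multiple of page_size (a power of two). */
    static unsigned int round_up_pow2(unsigned int size, unsigned int page_size)
    {
            return (size + page_size - 1) & ~(page_size - 1);
    }

    int main(void)
    {
            unsigned int sz = SZ_4K;             /* one paging structure */
            unsigned int min_page_size = SZ_64K; /* assumed device restriction */

            /* Honouring the restriction: 4K gets rounded up to 64K. */
            printf("with min_page_size:    %u bytes\n",
                   round_up_pow2(sz, min_page_size));

            /* Ignoring it and using sz as the page size: stays at 4K. */
            printf("without min_page_size: %u bytes\n",
                   round_up_pow2(sz, sz));
            return 0;
    }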

v2(Thomas): add some commentary

Signed-off-by: Matthew Auld <matthew.a...@intel.com>
Cc: Thomas Hellström <thomas.hellst...@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellst...@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gtt.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 084ea65d59c0..f7e0352edb62 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -16,7 +16,19 @@ struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz)
 {
        struct drm_i915_gem_object *obj;
 
-       obj = i915_gem_object_create_lmem(vm->i915, sz, 0);
+       /*
+        * To avoid severe over-allocation when dealing with min_page_size
+        * restrictions, we override that behaviour here by allowing an object
+        * size and page layout which can be smaller. In practice this should be
+        * totally fine, since GTT paging structures are not typically inserted
+        * into the GTT.
+        *
+        * Note that we also hit this path for the scratch page, and for this
+        * case it might need to be 64K, but that should work fine here since
+        * we use the passed-in size as the page size, which should ensure it
+        * also has the same alignment.
+        */
+       obj = __i915_gem_object_create_lmem_with_ps(vm->i915, sz, sz, 0);
        /*
         * Ensure all paging structures for this vm share the same dma-resv
         * object underneath, with the idea that one object_lock() will lock
-- 
2.26.3
