On 02/05/2024 17:51, Boris Brezillon wrote:
> The heap ID is used to index the heap context pool, and allocating in
> the [1:MAX_HEAPS_PER_POOL] range leads to an off-by-one: an ID of
> MAX_HEAPS_PER_POOL points one past the end of the pool. The range was
> originally chosen to avoid returning a zero heap handle, but since the
> handle is formed as (vm_id << 16) | heap_id, with vm_id > 0, a valid
> heap handle can never be zero anyway.
> 
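[Not part of the patch: a standalone userspace sketch of the handle layout described above. The make_heap_handle() helper is hypothetical, not the driver's actual code; it only illustrates that with vm_id > 0 the composed handle is never zero, even when heap_id is 0.]

#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical illustration of the heap handle layout: upper 16 bits
 * carry the VM ID, lower 16 bits the heap ID.
 */
static uint32_t make_heap_handle(uint32_t vm_id, uint32_t heap_id)
{
	return (vm_id << 16) | heap_id;
}

int main(void)
{
	/* vm_id is always > 0, so even heap_id == 0 yields a non-zero handle. */
	assert(make_heap_handle(1, 0) != 0);
	/* Heap IDs 0 and 1 under the same VM still produce distinct handles. */
	assert(make_heap_handle(1, 0) != make_heap_handle(1, 1));
	return 0;
}
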
> v4:
> - s/XA_FLAGS_ALLOC1/XA_FLAGS_ALLOC/
> 
> v3:
> - Allocate in the [0:MAX_HEAPS_PER_POOL-1] range
> 
> v2:
> - New patch
> 
> Fixes: 9cca48fa4f89 ("drm/panthor: Add the heap logical block")
> Reported-by: Eric Smith <eric.sm...@collabora.com>
> Signed-off-by: Boris Brezillon <boris.brezil...@collabora.com>
> Tested-by: Eric Smith <eric.sm...@collabora.com>

Reviewed-by: Steven Price <steven.pr...@arm.com>

Thanks,
Steve

> ---
>  drivers/gpu/drm/panthor/panthor_heap.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
> index b0fc5b9ee847..95a1c6c9f35e 100644
> --- a/drivers/gpu/drm/panthor/panthor_heap.c
> +++ b/drivers/gpu/drm/panthor/panthor_heap.c
> @@ -323,7 +323,8 @@ int panthor_heap_create(struct panthor_heap_pool *pool,
>       if (!pool->vm) {
>               ret = -EINVAL;
>       } else {
> -             ret = xa_alloc(&pool->xa, &id, heap, XA_LIMIT(1, MAX_HEAPS_PER_POOL), GFP_KERNEL);
> +             ret = xa_alloc(&pool->xa, &id, heap,
> +                            XA_LIMIT(0, MAX_HEAPS_PER_POOL - 1), GFP_KERNEL);
>               if (!ret) {
>                       void *gpu_ctx = panthor_get_heap_ctx(pool, id);
>  
> @@ -543,7 +544,7 @@ panthor_heap_pool_create(struct panthor_device *ptdev, struct panthor_vm *vm)
>       pool->vm = vm;
>       pool->ptdev = ptdev;
>       init_rwsem(&pool->lock);
> -     xa_init_flags(&pool->xa, XA_FLAGS_ALLOC1);
> +     xa_init_flags(&pool->xa, XA_FLAGS_ALLOC);
>       kref_init(&pool->refcount);
>  
>       pool->gpu_contexts = panthor_kernel_bo_create(ptdev, vm, bosize,
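
[Side note, not part of the patch: a minimal kernel-module sketch of the XArray semantics the fix relies on. XA_FLAGS_ALLOC lets xa_alloc() hand out ID 0, and XA_LIMIT(0, N - 1) caps the range at exactly N IDs, matching an N-entry context array, whereas the old XA_FLAGS_ALLOC1 + XA_LIMIT(1, N) combination allocated in [1, N]. The module name, DEMO_MAX_IDS, and the placeholder entry below are made up for illustration.]

#include <linux/module.h>
#include <linux/xarray.h>

#define DEMO_MAX_IDS 4

static DEFINE_XARRAY_FLAGS(demo_xa, XA_FLAGS_ALLOC);

static int __init xa_limit_demo_init(void)
{
	u32 id;
	int i, ret;

	for (i = 0; i < DEMO_MAX_IDS + 1; i++) {
		/* Any non-NULL, properly aligned pointer works as a placeholder entry. */
		ret = xa_alloc(&demo_xa, &id, &demo_xa,
			       XA_LIMIT(0, DEMO_MAX_IDS - 1), GFP_KERNEL);
		if (ret) {
			/* The fifth allocation fails with -EBUSY: only IDs 0..3 exist. */
			pr_info("xa_alloc #%d failed with %d\n", i, ret);
			break;
		}
		pr_info("xa_alloc #%d got id %u\n", i, id);
	}
	return 0;
}

static void __exit xa_limit_demo_exit(void)
{
	xa_destroy(&demo_xa);
}

module_init(xa_limit_demo_init);
module_exit(xa_limit_demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("XA_LIMIT allocation range demo");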
