On 1/20/26 12:41, Tvrtko Ursulin wrote:
>
> On 20/01/2026 10:54, Christian König wrote:
>> Implement per-fence spinlocks, allowing implementations to not provide an
>> external spinlock to protect the fence's internal state. Instead, a spinlock
>> embedded into the fence structure itself is used in this case.
>>
>> Shared spinlocks have the problem that implementations need to guarantee
>> that the lock lives at least as long as all fences referencing it.
>>
>> Using a per-fence spinlock allows completely decoupling spinlock producer
>> and consumer lifetimes, simplifying the handling in most use cases.
>>
>> v2: improve naming, coverage and function documentation
>> v3: fix one additional locking issue in the selftests
>> v4: separate out some changes to make the patch smaller,
>> fix one amdgpu crash found by CI systems
>>
>> Signed-off-by: Christian König <[email protected]>
>> ---
>> drivers/dma-buf/dma-fence.c | 25 +++++++++++++++++-------
>> drivers/dma-buf/sync_debug.h | 2 +-
>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 2 +-
>> drivers/gpu/drm/drm_crtc.c | 2 +-
>> drivers/gpu/drm/drm_writeback.c | 2 +-
>> drivers/gpu/drm/nouveau/nouveau_fence.c | 3 ++-
>> drivers/gpu/drm/qxl/qxl_release.c | 3 ++-
>> drivers/gpu/drm/vmwgfx/vmwgfx_fence.c | 3 ++-
>> drivers/gpu/drm/xe/xe_hw_fence.c | 3 ++-
>
> i915 needed changes too, based on the kbuild report.
Going to take a look now.
> Have you seen my note about the RCU sparse warning as well?
Nope, I must have missed that mail.
...
>> +/**
>> + * dma_fence_spinlock - return pointer to the spinlock protecting the fence
>> + * @fence: the fence to get the lock from
>> + *
>> + * Return either the pointer to the embedded or the external spinlock.
>> + */
>> +static inline spinlock_t *dma_fence_spinlock(struct dma_fence *fence)
>> +{
>> + return test_bit(DMA_FENCE_FLAG_INLINE_LOCK_BIT, &fence->flags) ?
>> + &fence->inline_lock : fence->extern_lock;
>> +}
>
> You did not want to move this helper into "dma-buf: abstract fence locking" ?
I was avoiding that to keep the prerequisite patch smaller, because this change
here seemed independent of that.
But thinking about it I could make a third patch which introduces
dma_fence_spinlock() and changes all the container_of uses.
> I think that would have been better to keep everything mechanical in one
> patch, and then this patch which changes behaviour does not touch any drivers
> but only dma-fence core.
>
> Also, what about adding something like dma_fence_container_of() in that patch
> as well?
I would rather avoid that. Using the spinlock pointer with container_of
seemed like a bit of a hack to me in the first place, and I don't want to
encourage people to do that in new code either.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>> +
>> /**
>> * dma_fence_lock_irqsave - irqsave lock the fence
>> * @fence: the fence to lock
>> @@ -385,7 +403,7 @@ dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)
>> * Lock the fence, preventing it from changing to the signaled state.
>> */
>> #define dma_fence_lock_irqsave(fence, flags) \
>> - spin_lock_irqsave(fence->lock, flags)
>> + spin_lock_irqsave(dma_fence_spinlock(fence), flags)
>> /**
>> * dma_fence_unlock_irqrestore - unlock the fence and irqrestore
>> @@ -395,7 +413,7 @@ dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)
>> * Unlock the fence, allowing it to change its state to signaled again.
>> */
>> #define dma_fence_unlock_irqrestore(fence, flags) \
>> - spin_unlock_irqrestore(fence->lock, flags)
>> + spin_unlock_irqrestore(dma_fence_spinlock(fence), flags)
>> #ifdef CONFIG_LOCKDEP
>> bool dma_fence_begin_signalling(void);
>