On 11/7/25 09:33, Philipp Stanner wrote:
> On Tue, 2025-11-04 at 15:12 +0000, Tvrtko Ursulin wrote:
>>
>> On 31/10/2025 13:16, Christian König wrote:
>>> Just as proof of concept and minor cleanup.
> 
> That's by the way why I'm asking whether this series is intended as an
> RFC.
> 
>>>
>>> Signed-off-by: Christian König <[email protected]>
>>> ---
>>>   drivers/gpu/drm/scheduler/sched_fence.c | 11 +++++------
>>>   include/drm/gpu_scheduler.h             |  4 ----
>>>   2 files changed, 5 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>>> index 9391d6f0dc01..7a94e03341cb 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>>> @@ -156,19 +156,19 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>>     struct dma_fence *parent;
>>>     unsigned long flags;
>>>   
>>> -   spin_lock_irqsave(&fence->lock, flags);
>>> +   dma_fence_lock(f, flags);
>>
>> Moving to dma_fence_lock should either be a separate patch or squashed 
>> into the one which converts many other drivers. Even a separate patch
>> before that previous patch would be better.
> 
> Yes. +1

Now that I understand what was meant here, I agree as well.

> 
>>
>> Naming-wise, however, I still think dma_fence_lock_irqsave would probably
>> be better, to stick with the same pattern everyone is so used to.
> 
> I also think this would be better. Who knows, one day maybe someone
> really wants a lock that definitely must not be irqsave for some
> reason, and then the naming pattern would completely break.

Already done.
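
For reference, something along these lines (illustrative sketch only, not
the final implementation -- it assumes the fence still carries a
spinlock_t *lock pointer, possibly pointing at an internal lock when the
driver passes NULL to dma_fence_init()):

    /* Sketch: keep the familiar *_irqsave/*_irqrestore naming. */
    #define dma_fence_lock_irqsave(fence, flags)            \
            spin_lock_irqsave((fence)->lock, flags)

    #define dma_fence_unlock_irqrestore(fence, flags)       \
            spin_unlock_irqrestore((fence)->lock, flags)

That keeps the spin_lock_irqsave() pattern Tvrtko mentioned, so callers
read the same as before.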

> 
>>>
>>>
> 
> […]
> 
>>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>>>     fence->sched = entity->rq->sched;
>>>     seq = atomic_inc_return(&entity->fence_seq);
>>>     dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>>> -                  &fence->lock, entity->fence_context, seq);
>>> +                  NULL, entity->fence_context, seq);
>>>     dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>>> -                  &fence->lock, entity->fence_context + 1, seq);
>>> +                  NULL, entity->fence_context + 1, seq);
>>>   }
> 
> Do we agree that there is no benefit in porting the scheduler to the
> non-shared spinlock?

It saves something like 4 bytes :)

> 
> If so, I really prefer to not do it.

OK, going to drop it. On the other hand, if I'm not completely mistaken, it
slightly reduces the complexity.
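
Just to illustrate what we are talking about (rough sketch, assuming
dma_fence_init() falls back to an internal per-fence lock when it gets
NULL, as this series proposes):

    /* today: both fences share the lock embedded in drm_sched_fence */
    dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
                   &fence->lock, entity->fence_context, seq);
    dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
                   &fence->lock, entity->fence_context + 1, seq);

    /* with the patch: no external lock, so the spinlock_t member and its
     * initialization go away (the ~4 bytes), but the two fences then
     * presumably serialize on separate internal locks.
     */
    dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
                   NULL, entity->fence_context, seq);
    dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
                   NULL, entity->fence_context + 1, seq);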

Regards,
Christian.

> 
> 
> P.
