On 1/16/26 17:20, Alex Deucher wrote:
> We only want to stop the work queues, not mess with the
> pending list so just stop the work queues.
> 
> Signed-off-by: Alex Deucher <[email protected]>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 362ab2b344984..ed7f13752f462 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -6310,7 +6310,7 @@ static void amdgpu_device_halt_activities(struct amdgpu_device *adev,
>                       if (!amdgpu_ring_sched_ready(ring))
>                               continue;
>  
> -                     drm_sched_stop(&ring->sched, job ? &job->base : NULL);
> +                     drm_sched_wqueue_stop(&ring->sched);

I'm pretty sure that this will lead to a memory leak unless we either 
manually free the job or bring back the code that re-inserts it into the 
pending list.
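
For reference, drm_sched_stop() re-inserts the bad job into the pending 
list after parking the scheduler, so that the normal free_job path can 
reclaim it once it signals. A completely untested sketch of the minimum 
that would still be needed here (taking job_list_lock is my addition for 
clarity; drm_sched_stop() itself does the list_add with the workers 
already parked):

	drm_sched_wqueue_stop(&ring->sched);

	/*
	 * The timeout handler removed the bad job from the pending list,
	 * so put it back or nothing will ever free it.  Add at the head,
	 * it was the earliest job extracted.
	 */
	if (job && job->base.sched == &ring->sched) {
		spin_lock(&ring->sched.job_list_lock);
		list_add(&job->base.list, &ring->sched.pending_list);
		spin_unlock(&ring->sched.job_list_lock);
	}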

Regards,
Christian.

>  
>                       if (need_emergency_restart)
>                               amdgpu_job_stop_all_jobs_on_sched(&ring->sched);
> @@ -6394,7 +6394,7 @@ static int amdgpu_device_sched_resume(struct list_head *device_list,
>                       if (!amdgpu_ring_sched_ready(ring))
>                               continue;
>  
> -                     drm_sched_start(&ring->sched, 0);
> +                     drm_sched_wqueue_start(&ring->sched);
>               }
>  
>               if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled)
