On 04/24, Andrey Grodzovsky wrote:
>
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -227,9 +227,10 @@ void drm_sched_entity_do_release(struct 
> drm_gpu_scheduler *sched,
>               return;
>       /**
>        * The client will not queue more IBs during this fini, consume existing
> -      * queued IBs or discard them on SIGKILL
> +      * queued IBs or discard them when in death signal state since
> +      * wait_event_killable can't receive signals in that state.
>       */
> -     if ((current->flags & PF_SIGNALED) && current->exit_code == SIGKILL)
> +     if (current->flags & PF_SIGNALED)

please do not use PF_SIGNALED, it must die. Besides you can't rely on this flag
in multi-threaded case. current->exit_code doesn't look right too.

>               entity->fini_status = -ERESTARTSYS;
>       else
>               entity->fini_status = wait_event_killable(sched->job_scheduled,

So afaics the problem is that fatal_signal_pending() is not necessarily true
after SIGKILL was already dequeued and thus wait_event_killable(), right?

This was already discussed, but it is not clear what we can/should do. We can
probably change get_signal() to not dequeue SIGKILL or do something else to keep
fatal_signal_pending() == T for the exiting killed thread.

But in this case we probably also want to discriminate the "real" SIGKILL's from
group_exit/exec/coredump.

Oleg.

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Reply via email to