Hi Lucas,

Thank you for raising awareness of this issue publicly.
As soon as this patch showed up back in November of 2019, I objected to it privately. I suggested instead using a _list_ to store the "state" of all jobs of the same state. Then, at any time (timeout interrupt or whatever), we can atomically (irq spinlock) move the timed-out/bad job to the timedout/cleanup/bad-job list, wake someone up to deal with that list asynchronously, and return from the interrupt/etc. immediately. Then in due time, if any more interrupts or whatnot take place, the job will either be on the timeout list or not. If it is, the instigator backs off, since someone else (the list handler) is, or will be, awake and handling it (obviously a state variable may be kept as well). A rough sketch of what I mean is at the bottom of this mail.

This draws somewhat from my days with iSCSI, SCSI and SAS, 15 years ago, where a device can complete a job (task) at any time regardless of what the SCSI layer "thinks" the task's state is: timed-out, aborted, whatever.

It is a very simple and elegant solution which generalizes well.

Regards,
Luben

On 2020-02-10 11:55 a.m., Andrey Grodzovsky wrote:
> Lucas - ping on my question. I have also attached this temporary solution for
> etnaviv to clarify my point. If that is acceptable for now, I can at least
> do the same for v3d, where it requires a bit more code changes.
>
> Andrey
>
> On 2/6/20 10:49 AM, Andrey Grodzovsky wrote:
>>> Well, a revert would break our driver.
>>>
>>> The real solution is that somebody needs to sit down, gather ALL the
>>> requirements and then come up with a solution which is clean and works for
>>> everyone.
>>>
>>> Christian.
>>
>> I can take this on, as indeed our general design here becomes more and
>> more entangled as GPU reset scenarios grow in complexity (at least in the
>> AMD driver). Currently I am on a high-priority internal task which should
>> take me around a week or two to finish, and after that I can get to it.
>>
>> Regarding a temporary solution - I looked into the v3d and etnaviv use
>> cases, and we in AMD actually face the same scenario, where we decide to
>> skip the HW reset if the guilty job finished by the time we are processing
>> the timeout (see amdgpu_device_gpu_recover and the skip_hw_reset goto).
>> The difference is that we always call drm_sched_stop/start irrespective of
>> whether we are actually going to HW reset or not (same as extending the
>> timeout). I wonder if something like this can be done also for v3d and
>> etnaviv?
>>
>> Andrey
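P.S. To make the list idea above a bit more concrete, here is a rough, untested sketch. All of the names below (my_sched, my_job, my_sched_mark_timedout, and so on) are made up for illustration only; this is not drm_sched code or code taken from any driver, just the shape of what I have in mind:

/*
 * Hypothetical sketch only: per-state job lists, with timed-out jobs
 * atomically moved to a cleanup list and handled asynchronously.
 */
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

enum my_job_state {
	MY_JOB_PENDING,
	MY_JOB_TIMEDOUT,
};

struct my_job {
	struct list_head node;		/* links the job into exactly one state list */
	enum my_job_state state;
};

struct my_sched {
	spinlock_t lock;		/* protects the lists and job->state */
	struct list_head pending;	/* jobs believed to be running */
	struct list_head timedout;	/* jobs waiting for asynchronous cleanup */
	struct work_struct cleanup_work;
};

/* Asynchronous handler: drains the timedout list outside interrupt context. */
static void my_sched_cleanup_work(struct work_struct *work)
{
	struct my_sched *sched = container_of(work, struct my_sched, cleanup_work);
	struct my_job *job;
	unsigned long flags;

	spin_lock_irqsave(&sched->lock, flags);
	while (!list_empty(&sched->timedout)) {
		job = list_first_entry(&sched->timedout, struct my_job, node);
		list_del_init(&job->node);
		spin_unlock_irqrestore(&sched->lock, flags);

		/* reset/recovery/free for this one job goes here */

		spin_lock_irqsave(&sched->lock, flags);
	}
	spin_unlock_irqrestore(&sched->lock, flags);
}

/*
 * Called from timeout (or any other) context: atomically move the job to
 * the cleanup list and kick the handler.  Returns false if someone else
 * already moved it, so the caller simply backs off.
 */
static bool my_sched_mark_timedout(struct my_sched *sched, struct my_job *job)
{
	unsigned long flags;
	bool moved = false;

	spin_lock_irqsave(&sched->lock, flags);
	if (job->state == MY_JOB_PENDING) {
		job->state = MY_JOB_TIMEDOUT;
		list_move_tail(&job->node, &sched->timedout);
		moved = true;
	}
	spin_unlock_irqrestore(&sched->lock, flags);

	if (moved)
		queue_work(system_wq, &sched->cleanup_work);

	return moved;
}

static void my_sched_init(struct my_sched *sched)
{
	spin_lock_init(&sched->lock);
	INIT_LIST_HEAD(&sched->pending);
	INIT_LIST_HEAD(&sched->timedout);
	INIT_WORK(&sched->cleanup_work, my_sched_cleanup_work);
}

The only work done under the irq spinlock is the list move and the state flip; all the heavy lifting happens later in the worker, and any other instigator which finds the job already on the timedout list simply backs off.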