Reviewed-by: Maciej Falkowski <[email protected]>

You could add some lockdep assertions for dev_lock to make its locking
expectations easier to track and verify.
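
For example, something along these lines (just a sketch; where exactly
the assertions go is up to you):

        void aie2_hwctx_fini(struct amdxdna_hwctx *hwctx)
        {
                /* Callers are expected to hold dev_lock here */
                lockdep_assert_held(&hwctx->client->xdna->dev_lock);
                ...
        }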

Best regards,
Maciej

On 11/7/2025 7:10 PM, Lizhi Hou wrote:
The hardware context destroy function holds dev_lock while waiting for all
jobs to complete. The timeout job also needs to acquire dev_lock, which
leads to a deadlock.

Fix the issue by temporarily releasing dev_lock before waiting for all
jobs to finish, and reacquiring it afterward.
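
In other words, paraphrasing the two paths involved:

      destroy path                          timeout handler
      ------------                          ---------------
      mutex_lock(&xdna->dev_lock)
      wait_event(job_free_wq, ...)          mutex_lock(&xdna->dev_lock)
        /* never satisfied: the timed-out     /* never returns: the destroy
           job is freed by the timeout           path still holds dev_lock */
           handler, which is blocked */

Dropping the lock around the wait lets the timeout handler run and free the
job, which satisfies the wait condition.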

Fixes: 4fd6ca90fc7f ("accel/amdxdna: Refactor hardware context destroy routine")
Signed-off-by: Lizhi Hou <[email protected]>
---
  drivers/accel/amdxdna/aie2_ctx.c | 6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/accel/amdxdna/aie2_ctx.c b/drivers/accel/amdxdna/aie2_ctx.c
index bdc90fe8a47e..42d876a427c5 100644
--- a/drivers/accel/amdxdna/aie2_ctx.c
+++ b/drivers/accel/amdxdna/aie2_ctx.c
@@ -690,17 +690,19 @@ void aie2_hwctx_fini(struct amdxdna_hwctx *hwctx)
 
        xdna = hwctx->client->xdna;
        XDNA_DBG(xdna, "%s sequence number %lld", hwctx->name, hwctx->priv->seq);
-       drm_sched_entity_destroy(&hwctx->priv->entity);
-
        aie2_hwctx_wait_for_idle(hwctx);
 
        /* Request fw to destroy hwctx and cancel the rest pending requests */
        aie2_release_resource(hwctx);
 
+       mutex_unlock(&xdna->dev_lock);
+       drm_sched_entity_destroy(&hwctx->priv->entity);
+
        /* Wait for all submitted jobs to be completed or canceled */
        wait_event(hwctx->priv->job_free_wq,
                   atomic64_read(&hwctx->job_submit_cnt) ==
                   atomic64_read(&hwctx->job_free_cnt));
 
+       mutex_lock(&xdna->dev_lock);
        drm_sched_fini(&hwctx->priv->sched);
        aie2_ctx_syncobj_destroy(hwctx);
