On Fri, 30 Jun 2023 10:02:52 +0200
Boris Brezillon <boris.brezil...@collabora.com> wrote:

> In practice, I don't expect things to deadlock, because the VM resv is
> not supposed to be taken outside the VM context and the locking order
> is always the same (VM lock first, and then each shared BO
> taken/released independently), but I'm not super thrilled by this
> nested lock, and I'm wondering if we shouldn't have a pass collecting
> locks in a drm_exec context first, and then have
> the operations executed. IOW, something like this:
> 
>       drm_exec_init(exec, DRM_EXEC_IGNORE_DUPLICATES)
>       drm_exec_until_all_locked(exec) {
>               // Dummy GEM is the dummy GEM object I use to make the VM
>               // participate in the locking without having to teach
>               // drm_exec how to deal with raw dma_resv objects.
>               ret = drm_exec_lock_obj(exec, vm->dummy_gem);
>               drm_exec_retry_on_contention(exec);
>               if (ret)
>                       return ret;
> 
>               // Could take the form of drm_gpuva_sm_[un]map_acquire_locks()
>               // helpers

Nevermind, I implemented a driver-specific acquire_op_locks(), and it's
fairly simple with the gpuva iterator: we just have to iterate over all
VAs covered by the operation range and call drm_exec_lock_obj() on the
GEM object attached to each of those VAs. So it's probably not worth
providing a generic helper for that.
