On Fri, 13 Feb 2026 18:23:34 +0000
Adrián Larumbe <[email protected]> wrote:
> > +static int remap_evicted_vma(struct drm_gpuvm_bo *vm_bo,
> > +			     struct panthor_vma *evicted_vma,
> > +			     struct panthor_vm_op_ctx *op_ctx)
> > +{
> > +	struct panthor_vm *vm = container_of(vm_bo->vm, struct panthor_vm, base);
> > +	struct panthor_gem_object *bo = to_panthor_bo(vm_bo->obj);
> > +	struct drm_gpuva *va;
> > +	bool found = false;
> > +	int ret;
> > +
> > +	ret = panthor_vm_op_ctx_prealloc_pts(op_ctx);
> > +	if (ret)
> > +		goto out_cleanup;
> > +
> > +	/* Take op_lock to protect against va insertion/removal. */
> > +	mutex_lock(&vm->op_lock);
> > +	drm_gpuvm_bo_for_each_va(va, vm_bo) {
> > +		struct panthor_vma *vma = container_of(va, struct panthor_vma, base);
> > +
> > +		if (vma != evicted_vma)
> > +			continue;
> > +
> > +		/* We can't rely solely on pointer equality, because the VMA might have been
> > +		 * freed and a new one allocated at the same address. If the evicted bit
> > +		 * is still set, we're sure it's our VMA, because population/eviction is
> > +		 * serialized with the BO resv lock.
> > +		 */
>
> At first I thought that, to prevent VM_BIND operations from changing the
> VM's VMAs in the interval between select_evicted_vma() and
> remap_evicted_vma(), you could take the VM's operation lock around both of
> them in panthor_vm_restore_vmas(). But because the same lock is taken in
> the DMA signalling path and panthor_vm_op_ctx_prealloc_pts() can sleep, we
> cannot do that. Maybe you could add a note clarifying this?
Sure, how about
index 78b875a59d1e..9ce4077bef8e 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2395,7 +2395,11 @@ static int remap_evicted_vma(struct drm_gpuvm_bo *vm_bo,
 	if (ret)
 		goto out_cleanup;
 
-	/* Take op_lock to protect against va insertion/removal. */
+	/* Take op_lock to protect against va insertion/removal. Note that the
+	 * evicted_vma selection was done with the same lock held, but we had
+	 * to release it so we can allocate PTs, because this very same lock
+	 * is taken in a DMA-signalling path.
+	 */
 	mutex_lock(&vm->op_lock);
 	drm_gpuvm_bo_for_each_va(va, vm_bo) {
 		struct panthor_vma *vma = container_of(va, struct panthor_vma, base);
@@ -2403,10 +2407,11 @@ static int remap_evicted_vma(struct drm_gpuvm_bo *vm_bo,
 		if (vma != evicted_vma)
 			continue;
 
-		/* We can't rely solely on pointer equality, because the VMA might have been
-		 * freed and a new one allocated at the same address. If the evicted bit
-		 * is still set, we're sure it's our VMA, because population/eviction is
-		 * serialized with the BO resv lock.
+		/* Because we had to release the lock between the evicted_vma selection
+		 * and its repopulation, we can't rely solely on pointer equality (the
+		 * VMA might have been freed and a new one allocated at the same address).
+		 * If the evicted bit is still set, we're sure it's our VMA, because
+		 * population/eviction is serialized with the BO resv lock.
 		 */
 		if (vma->evicted)
 			found = true;
?