On 11/9/23 17:03, Christian König wrote:
On 11/9/23 16:50, Thomas Hellström wrote:
[SNIP]
Did we get any resolution on this?
FWIW, my take on this is that it would be possible to get GPUVM to work both
with and without internal refcounting; if with, the driver needs a vm close to
resolve cyclic references, if without, that's not necessary. If GPUVM is
allowed to refcount mappings and vm_bos, that comes with a slight performance
drop, but as Danilo pointed out, the VM lifetime problem when iterating over a
vm_bo's mappings becomes much easier and the code thus becomes easier to
maintain moving forward. That convinced me it's a good thing.
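To illustrate the "with internal refcounting" variant: since each mapping would
hold a VM reference, the refcount can't reach zero until the driver explicitly
tears the remaining mappings down, e.g. in its VM close path. A rough sketch
only; my_vm, my_vm_unmap_locked() and the locking are made up, and
drm_gpuvm_for_each_va_safe() / drm_gpuvm_put() are the helpers from this series
as I understand them:

/* Driver-side VM close: unmap everything so the mappings drop their VM
 * references and the reference cycle is broken, then drop the driver's own
 * reference. */
static void my_vm_close(struct my_vm *vm)
{
	struct drm_gpuva *va, *next;

	mutex_lock(&vm->lock);
	drm_gpuvm_for_each_va_safe(va, next, &vm->base)
		my_vm_unmap_locked(vm, va);	/* drops the mapping's VM reference */
	mutex_unlock(&vm->lock);

	drm_gpuvm_put(&vm->base);		/* drop the driver's own reference */
}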
I strongly believe you guys stumbled over one of the core problems with the VM
here and I think that reference counting is the right answer to solving this.
The big question is what is reference counted and in which direction the
dependency points, e.g. we have VM, BO, BO_VM and Mapping objects here.

Those patches suggest a counted Mapping -> VM reference, and I'm pretty sure
that isn't a good idea. What we should really have instead is a BO -> VM or
BO_VM -> VM reference. In other words, each BO which is part of the VM keeps a
reference to the VM.
We have both. Please see the subsequent patch introducing VM_BO structures for
that.
As I explained, mappings (struct drm_gpuva) keep a pointer to the VM they're
mapped in, and besides that it doesn't make sense to free a VM that still
contains mappings; the reference count ensures that. This simply ensures
memory safety.
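Concretely, as I understand the semantics after this series, the reference is
taken and dropped by the insert/remove helpers themselves, so driver code
roughly looks like this (the my_* names are made up):

static int my_map_locked(struct my_vm *vm, struct drm_gpuva *va)
{
	/* drm_gpuva_insert() takes a reference on the VM, so the VM cannot
	 * be freed while this mapping exists. */
	return drm_gpuva_insert(&vm->base, va);
}

static void my_unmap_locked(struct drm_gpuva *va)
{
	/* drm_gpuva_remove() drops the mapping's VM reference again. */
	drm_gpuva_remove(va);
}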
BTW: At least in amdgpu we can have BOs which (temporarily) don't have any
mappings, but are still considered part of the VM.
That should be possible.
Another issue Christian brought up is that something intended to be embeddable
(a base class) shouldn't really have its own refcount. I think that's a valid
point. If you at some point need to derive from multiple such structs each
having its own refcount, things will start to get weird. One way to resolve
that would be to have the driver's subclass provide get() and put() ops, and
export a destructor for the base class, rather than having the base class
provide the refcount and a destructor op.
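As a sketch of what I mean (all names are hypothetical; drm_gpuvm_destroy()
stands for an exported base-class destructor here, not for what the series
currently provides):

struct my_vm {
	struct drm_gpuvm base;
	struct kref refcount;		/* owned by the driver, not by GPUVM */
};

static void my_vm_release(struct kref *kref)
{
	struct my_vm *vm = container_of(kref, struct my_vm, refcount);

	drm_gpuvm_destroy(&vm->base);	/* hypothetical exported destructor */
	kfree(vm);
}

static void my_vm_put(struct my_vm *vm)
{
	kref_put(&vm->refcount, my_vm_release);
}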
GPUVM simply follows the same pattern we have with drm_gem_objects. And I
think it makes sense. Why would we want to embed two struct drm_gpuvm in a
single driver structure?
Well, I have never seen stuff like that in the kernel. Might be that this
works, but I would rather not try if avoidable.
That would also make it possible for the driver to decide the context for the
put() call: if the driver needs to be able to call put() from IRQ / atomic
context but the base class's destructor doesn't allow atomic context, the
driver can push freeing out to a work item if needed.
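Building on the earlier sketch, the driver-side put() could then stay callable
from IRQ / atomic context by deferring the actual teardown (again, all names
are made up):

/* my_vm additionally gets a struct work_struct free_work member, initialized
 * with INIT_WORK(&vm->free_work, my_vm_free_work). */

static void my_vm_free_work(struct work_struct *work)
{
	struct my_vm *vm = container_of(work, struct my_vm, free_work);

	/* Sleeping and taking locks is fine here. */
	drm_gpuvm_destroy(&vm->base);	/* hypothetical exported destructor */
	kfree(vm);
}

static void my_vm_release(struct kref *kref)
{
	struct my_vm *vm = container_of(kref, struct my_vm, refcount);

	queue_work(system_wq, &vm->free_work);
}

/* my_vm_put(), i.e. kref_put(&vm->refcount, my_vm_release), is now safe to
 * call from IRQ / atomic context since nothing in the release path sleeps. */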
Finally, the refcount overflow Christian pointed out. Limiting the number of
mappings sounds like a reasonable remedy to me.
Well that depends, I would rather avoid having a dependency for mappings.
Taking the CPU VM handling as an example: as far as I know, vm_area_structs
don't grab a reference to their mm_struct either. Instead they get
automatically destroyed when the mm_struct is destroyed.
Certainly, that would be possible. However, thinking about it, this might
cause huge trouble.

First of all, we'd still need to reference count a GPUVM and take a reference
for each VM_BO, as we do already. Now, instead of simply increasing the
reference count for each mapping as well, we'd need a *mandatory* driver
callback that is called when the GPUVM reference count drops to zero. Maybe
something like vm_destroy().

The reason is that GPUVM can't just remove all mappings from the tree, nor can
it free them by itself, since drivers might use them for tracking their
allocated page tables and/or other stuff.
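For illustration, such a callback might look roughly like this on the driver
side; vm_destroy itself and my_free_page_tables() are hypothetical, and
drm_gpuvm_for_each_va_safe() / drm_gpuva_remove() are the existing helpers as
I understand them (under this scheme mappings would not hold VM references):

/* Hypothetical GPUVM op, called once the GPUVM refcount drops to zero:
 *	void (*vm_destroy)(struct drm_gpuvm *gpuvm);
 */
static void my_vm_destroy(struct drm_gpuvm *gpuvm)
{
	struct my_vm *vm = container_of(gpuvm, struct my_vm, base);
	struct drm_gpuva *va, *next;

	drm_gpuvm_for_each_va_safe(va, next, gpuvm) {
		drm_gpuva_remove(va);
		my_free_page_tables(vm, va);	/* driver-side bookkeeping */
		kfree(va);
	}
	kfree(vm);
}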
Now, let's think about the scope this callback might be called from. When a
VM_BO is destroyed, the driver might hold a couple of locks (for Xe it would
be the VM's shared dma-resv lock and potentially the corresponding object's
dma-resv lock, if they're not the same already). If destroying this VM_BO
leads to the VM being destroyed, the driver's vm_destroy() callback would be
called with those locks held as well.

I feel like doing this finally opens the doors to locking hell entirely. I
think we should really avoid that.
Which makes sense in that case because when the mm_struct is gone the
vm_area_struct doesn't make sense any more either.
What we clearly need is a reference to prevent the VM, or at least the shared
resv, from going away too early.
Yeah, that was a good hint and we've covered that.
Regards,
Christian.
But I think all of this is fixable as follow-ups if needed, unless I'm missing
something crucial.
Fully agree, I think at this point we should go ahead and land this series.
Just my 2 cents.
/Thomas