On 10/15/2018 10:43 AM, Koenig, Christian wrote:
Hi Thomas,

1. You need a lock with the order lock->mmap_sem, because otherwise
you run into trouble when drm_vma_node_unmap() needs to be called.
Why is this? unmap_mapping_range() never takes the mmap_sem. It takes
a finer-level per-address-space lock.
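
For reference, drm_vma_node_unmap() boils down to roughly this
(paraphrased from drm_vma_manager.h; details may vary by kernel
version), and unmap_mapping_range() internally only takes the
per-address-space i_mmap lock:

static inline void drm_vma_node_unmap(struct drm_vma_offset_node *node,
                                      struct address_space *file_mapping)
{
        /* Zap all CPU PTEs pointing into this node's mmap range;
         * no mmap_sem involved, only the i_mmap lock inside. */
        if (drm_vma_node_has_offset(node))
                unmap_mapping_range(file_mapping,
                                    drm_vma_node_offset_addr(node),
                                    drm_vma_node_size(node) << PAGE_SHIFT,
                                    1);
}
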
Well, that is interesting. Where do we actually then have the order of
reservation->mmap_sem defined?

TTM doesn't. But some drivers using TTM do, which prompted the wording in the fault handler comments. As Daniel mentioned, though, to be able to share buffers throughout DRM we at some point need to agree on a locking order, and when we do, it's important that we consider all relevant use-cases and workarounds.

To me it certainly looks like the vm_ops implementations would be more straightforward if we were to use mmap_sem->bo_reserve, but that would again require a number of drivers to rework their copy_*_user code. If those drivers, however, insist on keeping their copy_*_user code, they probably need to fix up the recursive bo reservation anyway, for example by making sure not to copy from a vma that has VM_IO set.
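
For illustration, such a rework could keep the copy but split it into
a fast path with page faults disabled and a slow path that drops the
reservation before faulting; a rough, untested sketch (bo, dst,
user_ptr and bounce are placeholder names):

        /* Fast path: with the BO reserved, copy with page faults
         * disabled so we can never recurse into the fault handler. */
        pagefault_disable();
        not_copied = __copy_from_user_inatomic(dst, user_ptr, size);
        pagefault_enable();

        if (not_copied) {
                /* Slow path: back off, fault the data into a bounce
                 * buffer with no BO reserved (a nested bo_reserve in
                 * the fault path is now safe), then re-reserve and
                 * retry from the bounce buffer. */
                ttm_bo_unreserve(bo);
                if (copy_from_user(bounce, user_ptr, size))
                        return -EFAULT;
                /* ... re-reserve and redo the operation from 'bounce' ... */
        }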


Do you mean to temporarily pin the bo while page-table-entries are set
up?
That is basically what we already do by taking the reservation lock.

Otherwise whatever lock we introduce in the fault handler will also
need to block the bo from being moved..
Correct, let me explain it a bit differently:

We need one lock which is held while the BO backing store is replaced
and which is taken for faults and other stuff.

And we need another lock which is held while we allocate backing store
for the BO, do command submission and prepare moves.

As far as I can see the first one should be a simple mutex while the
second is the reservation object.
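
If I read the proposal right, that would look roughly like this
(sketch only; field names made up):

struct ttm_buffer_object {
        ...
        /* Proposed: taken while the backing store is replaced (moves,
         * swapout) and by CPU fault processing. A plain mutex, so it
         * nests fine under mmap_sem in the fault path. */
        struct mutex backing_lock;

        /* Existing: taken for backing store allocation, command
         * submission and move preparation. */
        struct reservation_object *resv;
        ...
};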

So why can't we use reservation for both, like today?

Thanks,
Thomas



Regards,
Christian.


On 10/15/2018 10:20 AM, Thomas Hellstrom wrote:
Hi!

Interesting discussion. Some comments below.

On 10/15/2018 09:41 AM, Koenig, Christian wrote:
Now amdgpu switched over
Well, that's exactly the point: amdgpu has *NOT* switched the order over.

like i915 did years earlier
I actually consider this a problem in i915 which should be fixed sooner
or later.

The locking in the fault handler as exposed by TTM is actually pointing
out a deeper issue which nobody seems to have correctly understood so
far.

Let me explain a bit:
1. You need a lock with the order lock->mmap_sem, because otherwise
you run into trouble when drm_vma_node_unmap() needs to be called.
Why is this? unmap_mapping_range() never takes the mmap_sem. It takes
a finer-level per-address-space lock.

2. But for CPU fault processing you also need a lock to determine the
actual address where a BO is currently located.
Do you mean to temporarily pin the bo while page-table-entries are set
up?

i915 has solved this by separating the locks and using the reservation
lock as fault processing lock.

Problem is that this clashes with filling reservation objects on demand
when another driver wants to use them.
Could you elaborate a bit on this, what locking scenarios are
conflicting etc.

So what we should probably do is to fix i915 to not use the reservation
object as fault processing lock any more, then separate the TTM handling
into two locks as well.
This sounds a bit strange to me. What we want to do in the fault
handler is to temporarily pin down the object so that we can set up
page-table entries, which is exactly what we accomplish with
reservation objects. Otherwise whatever lock we introduce in the fault
handler will also need to block the bo from being moved..

Thanks, Thomas.

And last, fix the reservation object logic to allow pinning DMA-bufs
on demand.

Christian.

On 10/15/2018 08:55 AM, Daniel Vetter wrote:
On Sun, Oct 14, 2018 at 8:20 PM Koenig, Christian
<christian.koe...@amd.com> wrote:
Hi Thomas,

that the access() handler took a shortcut when the new locking order
was established
There is no new locking order, the access handler is just for
debugging and ignores the correct locking order between mmap_sem and
bo_reserve.

That this is throwing a lockdep warning is perfectly possible. We
should
probably move that to a trylock instead.
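
Presumably something like this in ttm_bo_vm_access() (untested sketch;
reservation_object_trylock() is the ww_mutex trylock wrapper from
linux/reservation.h):

static int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
                            void *buf, int len, int write)
{
        struct ttm_buffer_object *bo = vma->vm_private_data;
        int ret = len;

        /* access() is called with mmap_sem held; don't sleep on
         * bo_reserve here, just give up if the BO is contended.
         * This is a debug interface only. */
        if (!reservation_object_trylock(bo->resv))
                return -EBUSY;

        /* ... copy to/from the backing store; on error set ret ... */

        reservation_object_unlock(bo->resv);
        return ret;
}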

bo_reserve()
copy_to_user() / copy_from_user()
bo_unreserve()
That one is illegal for a completely different reason.

The address accessed by copy_to_user()/copy_from_user() could be a BO
itself, so to resolve this we could end up locking a BO twice.
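
Spelled out, with a sleeping bo_reserve in the fault handler the
recursion looks roughly like this:

bo_reserve(A);
copy_from_user(dst, ptr, size);
  /* ptr points into a VMA backed by BO A itself (or by a BO some
   * other thread holds while it waits for A): */
  -> page fault
     -> ttm_bo_vm_fault()
        -> bo_reserve(A)        /* deadlock: A is already reserved */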

Adding a might_lock() to the beginning of ttm_bo_vm_fault as you
suggested doesn't work either, because at this point the mmap_sem is
still locked.

So lockdep would complain about the incorrect bo_reserve and
mmap_sem order.
I think Thomas' point is the one below:

Christian.

On 10/13/2018 09:04 PM, Thomas Hellstrom wrote:
Hi, Christian,

On 10/13/2018 07:36 PM, Christian König wrote:
Hi Thomas,

bo_reserve()
copy_to_user() / copy_from_user()
bo_unreserve()
That pattern is illegal for a number of reasons and the mmap_sem is
only one of them.

So the locking order must always be mmap_sem->bo_reservation. See
the
userptr implementation in amdgpu as well.
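
(For context, the userptr path needs mmap_sem for get_user_pages()
well before any reservation is taken; a simplified sketch, not the
literal amdgpu code:)

down_read(&current->mm->mmap_sem);
ret = get_user_pages(start, npages, gup_flags, pages, NULL);
up_read(&current->mm->mmap_sem);
...
/* The reservation is only taken later, during CS validation,
 * giving the order mmap_sem -> bo_reservation. */
ttm_bo_reserve(bo, true, false, NULL);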

Christian.
I'm not arguing against that, and since vmwgfx doesn't use that
pattern, the locking order doesn't really matter to me; it's even
possible to make the TTM fault() handler better behaved if we were
to fix the locking order to mmap_sem->bo_reserve.

My concern is that, since the _opposite_ locking order is (admittedly
vaguely) documented in the fault handler, the access() handler took a
shortcut when the new locking order was established, possibly without
the other TTM drivers having been audited for locking inversion: For
example, from a quick glance it looks like
nouveau_gem_pushbuf_reloc_apply() calls copy_from_user() with BOs
reserved (which IIRC was the typical use-case at the time this was
last lifted). And lockdep won't trip unless the access() callback is
actually called.

My point is that if AMD wants to enforce this locking order, then IMHO the
other drivers need to be audited and corrected if they are assuming
the locking order documented in fault(). A good way to catch such
drivers would be to add that might_lock().
^^ This one here. There's a bunch of drivers which trylock in the
fault handler, so that they _can_ do the

bo_reserve()
copy*user()
bo_unreserve()

pattern. Yes, the trylock will just loop forever if copy*user()
hits a bo that's itself already in the CS. IIRC I've complained about
this years back. Now amdgpu has switched over (like i915 did years
earlier), because it's the only thing that reliably works even when
facing evil userspace, but there's still radeon & nouveau. I think
Thomas argues that we should fix those, and I agree.
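
For reference, the workaround in TTM's fault handler looks roughly
like this (paraphrased from ttm_bo_vm.c of that era; details vary by
kernel version):

err = ttm_bo_reserve(bo, true, true, NULL);     /* trylock */
if (unlikely(err != 0)) {
        if (err != -EBUSY)
                return VM_FAULT_NOPAGE;

        if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
                if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
                        /* Drop mmap_sem, wait for the BO to become
                         * unreserved and have the fault retried. */
                        ttm_bo_get(bo);
                        up_read(&vmf->vma->vm_mm->mmap_sem);
                        (void) ttm_bo_wait_unreserved(bo);
                        ttm_bo_put(bo);
                }
                return VM_FAULT_RETRY;
        }
        return VM_FAULT_NOPAGE;
}

Since VM_FAULT_RETRY just re-runs the fault, a copy*user() that hits a
BO the same thread already reserved spins in this loop forever rather
than deadlocking outright.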

Once those are fixed I also think that a might_lock in the fault
handler should not blow up anymore. If it does, you have an inversion
still somewhere.

Aside: I think it'd be good to document this as part of struct
reservation_object, preferably with lockdep annotations, to make sure
no one gets this wrong. We need _every_ driver to obey this; otherwise
you just need the right buffer sharing combination to deadlock.
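
One possible shape for that annotation would be to prime lockdep once
at init time by taking the locks in the blessed order against a dummy
mm; a hypothetical sketch, not existing code:

static int __init reservation_lockdep_prime(void)
{
        struct reservation_object obj;
        struct mm_struct *mm = mm_alloc();

        if (!mm)
                return -ENOMEM;

        reservation_object_init(&obj);

        /* Record mmap_sem -> reservation as the one true order;
         * any driver acquiring them the other way round will then
         * trip lockdep at runtime. */
        down_read(&mm->mmap_sem);
        if (!reservation_object_lock(&obj, NULL))
                reservation_object_unlock(&obj);
        up_read(&mm->mmap_sem);

        reservation_object_fini(&obj);
        mmput(mm);
        return 0;
}
subsys_initcall(reservation_lockdep_prime);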

Cheers, Daniel

Thanks,
Thomas


On 10/12/2018 04:52 PM, Thomas Hellstrom wrote:
Hi, Felix,

It looks like there is a locking inversion in ttm_bo_vm_access()
where we take a sleeping bo_reserve() while holding mmap_sem().

Previously we've been assuming the other way around, or at least left
it undefined, allowing drivers to do

bo_reserve()
copy_to_user() / copy_from_user()
bo_unreserve()

I'm not sure the latter pattern is used in any drivers, though, and
I guess there are ways around it. So it might make sense to fix the
locking order at this point. In that case, perhaps one should add a

might_lock(&bo->resv->lock.base);

at the start of the TTM fault handler to trip lockdep on locking
order violations in situations where the access() callback isn't
commonly used...
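
i.e., roughly (sketch):

static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
{
        struct ttm_buffer_object *bo = vmf->vma->vm_private_data;

        /* mmap_sem is held here, so this records the
         * mmap_sem -> bo_reserve order for lockdep even on fault
         * paths where the subsequent reserve is only a trylock. */
        might_lock(&bo->resv->lock.base);

        /* ... the existing trylock / retry logic follows ... */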

/Thomas




_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel