Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
On Wed, Jul 08, 2020 at 05:37:19PM +0200, Daniel Vetter wrote: > On Wed, Jul 8, 2020 at 5:19 PM Alex Deucher wrote: > > > > On Wed, Jul 8, 2020 at 11:13 AM Daniel Vetter > > wrote: > > > > > > On Wed, Jul 8, 2020 at 4:57 PM Christian König > > > wrote: > > > > > > > > Could we merge this controlled by a separate config option? > > > > > > > > This way we could have the checks upstream without having to fix all the > > > > stuff before we do this? > > > > > > Since it's fully opt-in annotations nothing blows up if we don't merge > > > any annotations. So we could start merging the first 3 patches. After > > > that the fun starts ... > > > > > > My rough idea was that first I'd try to tackle display, thus far > > > there's 2 actual issues in drivers: > > > - amdgpu has some dma_resv_lock in commit_tail, plus a kmalloc. I > > > think those should be fairly easy to fix (I'd try a stab at them even) > > > - vmwgfx has a full on locking inversion with dma_resv_lock in > > > commit_tail, and that one is functional. Not just reading something > > > which we can safely assume to be invariant anyway (like the tmz flag > > > for amdgpu, or whatever it was). > > > > > > I've done a pile more annotations patches for other atomic drivers > > > now, so hopefully that flushes out any remaining offenders here. Since > > > some of the annotations are in helper code worst case we might need a > > > dev->mode_config.broken_atomic_commit flag to disable them. At least > > > for now I have 0 plans to merge any of these while there's known > > > unsolved issues. Maybe if some drivers take forever to get fixed we > > > can then apply some duct-tape for the atomic helper annotation patch. > > > Instead of a flag we can also copypasta the atomic_commit_tail hook, > > > leaving the annotations out and adding a huge warning about that. 
> > > > > > Next big chunk is the drm/scheduler annotations: > > > - amdgpu needs a full rework of display reset (but apparently in the > > > works) > > > > I think the display deadlock issues should be fixed in: > > https://cgit.freedesktop.org/drm/drm/commit/?id=cdaae8371aa9d4ea1648a299b1a75946b9556944 Oh btw you have some more memory allocations in that commit, so you just traded one deadlock for another one :-) -Daniel > > That's the reset/tdr inversion, there's two more: > - kmalloc, see > https://cgit.freedesktop.org/~danvet/drm/commit/?id=d9353cc3bf6111430a24188b92412dc49e7ead79 > - ttm_bo_reserve in the wrong place > https://cgit.freedesktop.org/~danvet/drm/commit/?id=a6c03176152625a2f9cf1e499aceb8b2217dc2a2 > - console_lock in the wrong spot > https://cgit.freedesktop.org/~danvet/drm/commit/?id=a6c03176152625a2f9cf1e499aceb8b2217dc2a2 > > Especially the last one I have no idea how to address really. > -Daniel > > > > > > Alex > > > > > - I read all the drivers, they all have the fairly cosmetic issue of > > > doing small allocations in their callbacks. > > > > > > I might end up typing the mempool we need for the latter issue, but > > > first still hoping for some actual test feedback from other drivers > > > using drm/scheduler. Again no intentions of merging these annotations > > > without the drivers being fixed first, or at least some duct-atpe > > > applied. > > > > > > Another option I've been thinking about, if there's cases where fixing > > > things properly is a lot of effort: We could do annotations for broken > > > sections (just the broken part, so we still catch bugs everywhere > > > else). They'd simply drop&reacquire the lock. We could then e.g. use > > > that in the amdgpu display reset code, and so still make sure that > > > everything else in reset doesn't get worse. But I think adding that > > > shouldn't be our first option. 
> > > > > > I'm not personally a big fan of the Kconfig or runtime option, only > > > upsets people since it breaks lockdep for them. Or they ignore it, and > > > we don't catch bugs, making it fairly pointless to merge. > > > > > > Cheers, Daniel > > > > > > > > > > > > > > Thanks, > > > > Christian. > > > > > > > > Am 07.07.20 um 22:12 schrieb Daniel Vetter: > > > > > Design is similar to the lockdep annotations for workers, but with > > > > > some twists: > > > > > > > > > > - We use a read-lock for the execution/worker/completion side, so that > > > > >this explicit annotation can be more liberally sprinkled around. > > > > >With read locks lockdep isn't going to complain if the read-side > > > > >isn't nested the same way under all circumstances, so ABBA > > > > > deadlocks > > > > >are ok. Which they are, since this is an annotation only. > > > > > > > > > > - We're using non-recursive lockdep read lock mode, since in recursive > > > > >read lock mode lockdep does not catch read side hazards. And we > > > > >_very_ much want read side hazards to be caught. For full details > > > > > of > > > > >this limitation see > > > > > > > > > >commit e91498589746065e3ae95d9a00b068e525eec34f > > > > >Author:
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
On Tue, 14 Jul 2020 at 02:39, Christian König wrote: > > Am 13.07.20 um 18:26 schrieb Daniel Vetter: > > Hi Christian, > > > > On Wed, Jul 08, 2020 at 04:57:21PM +0200, Christian König wrote: > >> Could we merge this controlled by a separate config option? > >> > >> This way we could have the checks upstream without having to fix all the > >> stuff before we do this? > > Discussions died out a bit, do you consider this a blocker for the first > > two patches, or good for an ack on these? > > Yes, I think the first two can be merged without causing any pain. Feel > free to add my ab on them. > > And the third one can go in immediately as well. Acked-by: Dave Airlie for the first 2 + indefinite explains. Dave. ___ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
Am 13.07.20 um 18:26 schrieb Daniel Vetter:

Hi Christian,

On Wed, Jul 08, 2020 at 04:57:21PM +0200, Christian König wrote:

Could we merge this controlled by a separate config option?

This way we could have the checks upstream without having to fix all the stuff before we do this?

Discussions died out a bit, do you consider this a blocker for the first two patches, or good for an ack on these?

Yes, I think the first two can be merged without causing any pain. Feel free to add my ab on them.

And the third one can go in immediately as well.

Thanks, Christian.

Like I said I don't plan to merge patches where I know it causes a lockdep splat with a driver still. At least for now.

Thanks, Daniel

Thanks, Christian.

Am 07.07.20 um 22:12 schrieb Daniel Vetter:

Design is similar to the lockdep annotations for workers, but with some twists:

- We use a read-lock for the execution/worker/completion side, so that this explicit annotation can be more liberally sprinkled around. With read locks lockdep isn't going to complain if the read-side isn't nested the same way under all circumstances, so ABBA deadlocks are ok. Which they are, since this is an annotation only.

- We're using non-recursive lockdep read lock mode, since in recursive read lock mode lockdep does not catch read side hazards. And we _very_ much want read side hazards to be caught. For full details of this limitation see

  commit e91498589746065e3ae95d9a00b068e525eec34f
  Author: Peter Zijlstra
  Date: Wed Aug 23 13:13:11 2017 +0200

      locking/lockdep/selftests: Add mixed read-write ABBA tests

- To allow nesting of the read-side explicit annotations we explicitly keep track of the nesting. lock_is_held() allows us to do that.

- The wait-side annotation is a write lock, and entirely done within dma_fence_wait() for everyone by default.

- To be able to freely annotate helper functions I want to make it ok to call dma_fence_begin/end_signalling from soft/hardirq context.
First attempt was using the hardirq locking context for the write side in lockdep, but this forces all normal spinlocks nested within dma_fence_begin/end_signalling to be spinlocks. That's bollocks.

The approach now is to simply check in_atomic(), and for these cases entirely rely on the might_sleep() check in dma_fence_wait(). That will catch any wrong nesting against spinlocks from soft/hardirq contexts.

The idea here is that every code path that's critical for eventually signalling a dma_fence should be annotated with dma_fence_begin/end_signalling. The annotation ideally starts right after a dma_fence is published (added to a dma_resv, exposed as a sync_file fd, attached to a drm_syncobj fd, or anything else that makes the dma_fence visible to other kernel threads), up to and including the dma_fence_wait(). Examples are irq handlers, the scheduler rt threads, the tail of execbuf (after the corresponding fences are visible), any workers that end up signalling dma_fences and really anything else. Not annotated should be code paths that only complete fences opportunistically as the gpu progresses, like e.g. shrinker/eviction code.

The main class of deadlocks this is supposed to catch are:

Thread A:

	mutex_lock(A);
	mutex_unlock(A);

	dma_fence_signal();

Thread B:

	mutex_lock(A);
	dma_fence_wait();
	mutex_unlock(A);

Thread B is blocked on A signalling the fence, but A never gets around to that because it cannot acquire the lock A.

Note that dma_fence_wait() is allowed to be nested within dma_fence_begin/end_signalling sections. To allow this to happen the read lock needs to be upgraded to a write lock, which means that if any other lock is acquired between the dma_fence_begin_signalling() call and the call to dma_fence_wait(), and is still held, this will result in an immediate lockdep complaint. The only other option would be to not annotate such calls, defeating the point. Therefore these annotations cannot be sprinkled over the code entirely mindlessly, to avoid false positives.
Originally I hoped that the cross-release lockdep extensions would alleviate the need for explicit annotations:

https://lwn.net/Articles/709849/

But there's a few reasons why that's not an option:

- It's not happening in upstream, since it got reverted due to too many false positives:

  commit e966eaeeb623f09975ef362c2866fae6f86844f9
  Author: Ingo Molnar
  Date: Tue Dec 12 12:31:16 2017 +0100

      locking/lockdep: Remove the cross-release locking checks

      This code (CONFIG_LOCKDEP_CROSSRELEASE=y and CONFIG_LOCKDEP_COMPLETIONS=y), while it
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
Hi Christian, On Wed, Jul 08, 2020 at 04:57:21PM +0200, Christian König wrote: > Could we merge this controlled by a separate config option? > > This way we could have the checks upstream without having to fix all the > stuff before we do this? Discussions died out a bit, do you consider this a blocker for the first two patches, or good for an ack on these? Like I said I don't plan to merge patches where I know it causes a lockdep splat with a driver still. At least for now. Thanks, Daniel > > Thanks, > Christian. > > Am 07.07.20 um 22:12 schrieb Daniel Vetter: > > Design is similar to the lockdep annotations for workers, but with > > some twists: > > > > - We use a read-lock for the execution/worker/completion side, so that > >this explicit annotation can be more liberally sprinkled around. > >With read locks lockdep isn't going to complain if the read-side > >isn't nested the same way under all circumstances, so ABBA deadlocks > >are ok. Which they are, since this is an annotation only. > > > > - We're using non-recursive lockdep read lock mode, since in recursive > >read lock mode lockdep does not catch read side hazards. And we > >_very_ much want read side hazards to be caught. For full details of > >this limitation see > > > >commit e91498589746065e3ae95d9a00b068e525eec34f > >Author: Peter Zijlstra > >Date: Wed Aug 23 13:13:11 2017 +0200 > > > >locking/lockdep/selftests: Add mixed read-write ABBA tests > > > > - To allow nesting of the read-side explicit annotations we explicitly > >keep track of the nesting. lock_is_held() allows us to do that. > > > > - The wait-side annotation is a write lock, and entirely done within > >dma_fence_wait() for everyone by default. > > > > - To be able to freely annotate helper functions I want to make it ok > >to call dma_fence_begin/end_signalling from soft/hardirq context. 
> >First attempt was using the hardirq locking context for the write > >side in lockdep, but this forces all normal spinlocks nested within > >dma_fence_begin/end_signalling to be spinlocks. That bollocks. > > > >The approach now is to simple check in_atomic(), and for these cases > >entirely rely on the might_sleep() check in dma_fence_wait(). That > >will catch any wrong nesting against spinlocks from soft/hardirq > >contexts. > > > > The idea here is that every code path that's critical for eventually > > signalling a dma_fence should be annotated with > > dma_fence_begin/end_signalling. The annotation ideally starts right > > after a dma_fence is published (added to a dma_resv, exposed as a > > sync_file fd, attached to a drm_syncobj fd, or anything else that > > makes the dma_fence visible to other kernel threads), up to and > > including the dma_fence_wait(). Examples are irq handlers, the > > scheduler rt threads, the tail of execbuf (after the corresponding > > fences are visible), any workers that end up signalling dma_fences and > > really anything else. Not annotated should be code paths that only > > complete fences opportunistically as the gpu progresses, like e.g. > > shrinker/eviction code. > > > > The main class of deadlocks this is supposed to catch are: > > > > Thread A: > > > > mutex_lock(A); > > mutex_unlock(A); > > > > dma_fence_signal(); > > > > Thread B: > > > > mutex_lock(A); > > dma_fence_wait(); > > mutex_unlock(A); > > > > Thread B is blocked on A signalling the fence, but A never gets around > > to that because it cannot acquire the lock A. > > > > Note that dma_fence_wait() is allowed to be nested within > > dma_fence_begin/end_signalling sections. To allow this to happen the > > read lock needs to be upgraded to a write lock, which means that any > > other lock is acquired between the dma_fence_begin_signalling() call and > > the call to dma_fence_wait(), and still held, this will result in an > > immediate lockdep complaint. 
The only other option would be to not > > annotate such calls, defeating the point. Therefore these annotations > > cannot be sprinkled over the code entirely mindless to avoid false > > positives. > > > > Originally I hope that the cross-release lockdep extensions would > > alleviate the need for explicit annotations: > > > > https://lwn.net/Articles/709849/ > > > > But there's a few reasons why that's not an option: > > > > - It's not happening in upstream, since it got reverted due to too > >many false positives: > > > > commit e966eaeeb623f09975ef362c2866fae6f86844f9 > > Author: Ingo Molnar > > Date: Tue Dec 12 12:31:16 2017 +0100 > > > > locking/lockdep: Remove the cross-release locking checks > > > >
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
On Thu, Jul 09, 2020 at 08:32:41AM +0100, Daniel Stone wrote: > Hi, > > On Wed, 8 Jul 2020 at 16:13, Daniel Vetter wrote: > > On Wed, Jul 8, 2020 at 4:57 PM Christian König > > wrote: > > > Could we merge this controlled by a separate config option? > > > > > > This way we could have the checks upstream without having to fix all the > > > stuff before we do this? > > > > Since it's fully opt-in annotations nothing blows up if we don't merge > > any annotations. So we could start merging the first 3 patches. After > > that the fun starts ... > > > > My rough idea was that first I'd try to tackle display, thus far > > there's 2 actual issues in drivers: > > - amdgpu has some dma_resv_lock in commit_tail, plus a kmalloc. I > > think those should be fairly easy to fix (I'd try a stab at them even) > > - vmwgfx has a full on locking inversion with dma_resv_lock in > > commit_tail, and that one is functional. Not just reading something > > which we can safely assume to be invariant anyway (like the tmz flag > > for amdgpu, or whatever it was). > > > > I've done a pile more annotations patches for other atomic drivers > > now, so hopefully that flushes out any remaining offenders here. Since > > some of the annotations are in helper code worst case we might need a > > dev->mode_config.broken_atomic_commit flag to disable them. At least > > for now I have 0 plans to merge any of these while there's known > > unsolved issues. Maybe if some drivers take forever to get fixed we > > can then apply some duct-tape for the atomic helper annotation patch. > > Instead of a flag we can also copypasta the atomic_commit_tail hook, > > leaving the annotations out and adding a huge warning about that. > > How about an opt-in drm_driver DRIVER_DEADLOCK_HAPPY flag? At first > this could just disable the annotations and nothing else, but as we > see the annotations gaining real-world testing and maturity, we could > eventually make it taint the kernel. 
You can do that pretty much per-driver, since the annotations are pretty much per-driver. No annotations in your code, no lockdep splat. Only if there's some dma_fence_begin/end_signalling() calls is there even the chance of a problem.

E.g. this round has the i915 patch dropped and *tada* intel-gfx-ci is happy (or at least a lot happier, there's some noise in there that's probably not from my stuff). So I guess if amd wants this, we could do a DRM_AMDGPU_MOAR_LOCKDEP Kconfig or similar. I haven't tested, but I think as long as we don't merge any of the amdgpu-specific patches, there's no splat in amdgpu. So with that I think that's plenty enough opt-in for each driver.

The only problem is a bit the shared helper code, like atomic helpers and drm scheduler. There we might need some opt-out (I don't think merging makes sense when most of the users are still broken).

-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
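Such a per-driver opt-in could look roughly like the following (hypothetical Kconfig sketch; the symbol name is taken from the DRM_AMDGPU_MOAR_LOCKDEP suggestion above, but the dependencies and help text are illustrative only):

```kconfig
config DRM_AMDGPU_MOAR_LOCKDEP
	bool "dma-fence signalling annotations in amdgpu (development only)"
	depends on DRM_AMDGPU && LOCKDEP
	default n
	help
	  Opt into the dma_fence_begin/end_signalling() annotations in
	  amdgpu code paths.  Known-broken paths will produce lockdep
	  splats until they are fixed, so leave this off unless you are
	  working on those paths.
```

The appeal of a Kconfig symbol over a runtime flag is that it cannot silently disable lockdep coverage for users who never asked for it, which is the concern raised elsewhere in this thread.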
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
Hi, On Wed, 8 Jul 2020 at 16:13, Daniel Vetter wrote: > On Wed, Jul 8, 2020 at 4:57 PM Christian König > wrote: > > Could we merge this controlled by a separate config option? > > > > This way we could have the checks upstream without having to fix all the > > stuff before we do this? > > Since it's fully opt-in annotations nothing blows up if we don't merge > any annotations. So we could start merging the first 3 patches. After > that the fun starts ... > > My rough idea was that first I'd try to tackle display, thus far > there's 2 actual issues in drivers: > - amdgpu has some dma_resv_lock in commit_tail, plus a kmalloc. I > think those should be fairly easy to fix (I'd try a stab at them even) > - vmwgfx has a full on locking inversion with dma_resv_lock in > commit_tail, and that one is functional. Not just reading something > which we can safely assume to be invariant anyway (like the tmz flag > for amdgpu, or whatever it was). > > I've done a pile more annotations patches for other atomic drivers > now, so hopefully that flushes out any remaining offenders here. Since > some of the annotations are in helper code worst case we might need a > dev->mode_config.broken_atomic_commit flag to disable them. At least > for now I have 0 plans to merge any of these while there's known > unsolved issues. Maybe if some drivers take forever to get fixed we > can then apply some duct-tape for the atomic helper annotation patch. > Instead of a flag we can also copypasta the atomic_commit_tail hook, > leaving the annotations out and adding a huge warning about that. How about an opt-in drm_driver DRIVER_DEADLOCK_HAPPY flag? At first this could just disable the annotations and nothing else, but as we see the annotations gaining real-world testing and maturity, we could eventually make it taint the kernel. Cheers, Daniel
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
On Wed, Jul 8, 2020 at 5:19 PM Alex Deucher wrote: > > On Wed, Jul 8, 2020 at 11:13 AM Daniel Vetter wrote: > > > > On Wed, Jul 8, 2020 at 4:57 PM Christian König > > wrote: > > > > > > Could we merge this controlled by a separate config option? > > > > > > This way we could have the checks upstream without having to fix all the > > > stuff before we do this? > > > > Since it's fully opt-in annotations nothing blows up if we don't merge > > any annotations. So we could start merging the first 3 patches. After > > that the fun starts ... > > > > My rough idea was that first I'd try to tackle display, thus far > > there's 2 actual issues in drivers: > > - amdgpu has some dma_resv_lock in commit_tail, plus a kmalloc. I > > think those should be fairly easy to fix (I'd try a stab at them even) > > - vmwgfx has a full on locking inversion with dma_resv_lock in > > commit_tail, and that one is functional. Not just reading something > > which we can safely assume to be invariant anyway (like the tmz flag > > for amdgpu, or whatever it was). > > > > I've done a pile more annotations patches for other atomic drivers > > now, so hopefully that flushes out any remaining offenders here. Since > > some of the annotations are in helper code worst case we might need a > > dev->mode_config.broken_atomic_commit flag to disable them. At least > > for now I have 0 plans to merge any of these while there's known > > unsolved issues. Maybe if some drivers take forever to get fixed we > > can then apply some duct-tape for the atomic helper annotation patch. > > Instead of a flag we can also copypasta the atomic_commit_tail hook, > > leaving the annotations out and adding a huge warning about that. 
> > > > Next big chunk is the drm/scheduler annotations: > > - amdgpu needs a full rework of display reset (but apparently in the works) > > I think the display deadlock issues should be fixed in: > https://cgit.freedesktop.org/drm/drm/commit/?id=cdaae8371aa9d4ea1648a299b1a75946b9556944

That's the reset/tdr inversion, there are a few more:

- kmalloc, see https://cgit.freedesktop.org/~danvet/drm/commit/?id=d9353cc3bf6111430a24188b92412dc49e7ead79
- ttm_bo_reserve in the wrong place https://cgit.freedesktop.org/~danvet/drm/commit/?id=a6c03176152625a2f9cf1e499aceb8b2217dc2a2
- console_lock in the wrong spot https://cgit.freedesktop.org/~danvet/drm/commit/?id=a6c03176152625a2f9cf1e499aceb8b2217dc2a2

Especially the last one I have no idea how to address really.

-Daniel

> > Alex > > > - I read all the drivers, they all have the fairly cosmetic issue of > > doing small allocations in their callbacks. > > > > I might end up typing the mempool we need for the latter issue, but > > first still hoping for some actual test feedback from other drivers > > using drm/scheduler. Again no intentions of merging these annotations > > without the drivers being fixed first, or at least some duct-tape > > applied. > > > > Another option I've been thinking about, if there's cases where fixing > > things properly is a lot of effort: We could do annotations for broken > > sections (just the broken part, so we still catch bugs everywhere > > else). They'd simply drop&reacquire the lock. We could then e.g. use > > that in the amdgpu display reset code, and so still make sure that > > everything else in reset doesn't get worse. But I think adding that > > shouldn't be our first option. > > > > I'm not personally a big fan of the Kconfig or runtime option, only > > upsets people since it breaks lockdep for them. Or they ignore it, and > > we don't catch bugs, making it fairly pointless to merge. > > > > Cheers, Daniel > > > > > > > > > > Thanks, > > > Christian.
> > > > > > Am 07.07.20 um 22:12 schrieb Daniel Vetter: > > > > Design is similar to the lockdep annotations for workers, but with > > > > some twists: > > > > > > > > - We use a read-lock for the execution/worker/completion side, so that > > > >this explicit annotation can be more liberally sprinkled around. > > > >With read locks lockdep isn't going to complain if the read-side > > > >isn't nested the same way under all circumstances, so ABBA deadlocks > > > >are ok. Which they are, since this is an annotation only. > > > > > > > > - We're using non-recursive lockdep read lock mode, since in recursive > > > >read lock mode lockdep does not catch read side hazards. And we > > > >_very_ much want read side hazards to be caught. For full details of > > > >this limitation see > > > > > > > >commit e91498589746065e3ae95d9a00b068e525eec34f > > > >Author: Peter Zijlstra > > > >Date: Wed Aug 23 13:13:11 2017 +0200 > > > > > > > >locking/lockdep/selftests: Add mixed read-write ABBA tests > > > > > > > > - To allow nesting of the read-side explicit annotations we explicitly > > > >keep track of the nesting. lock_is_held() allows us to do that. > > > > > > > > - The wait-side annotation is a write lock, and entirely done within > > > >dma_fence_wait()
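The kmalloc-in-the-signalling-path class of bugs discussed above, and the mempool idea floated for the scheduler callbacks, can be sketched roughly like this (kernel-context pseudocode, not a standalone program; every my_* name is made up for illustration):

```c
/* Kernel-context sketch only -- the my_* names are hypothetical.
 * The point: a path that must make forward progress towards
 * dma_fence_signal() must not call kmalloc(GFP_KERNEL), because
 * memory reclaim can end up waiting on the very fences this path is
 * supposed to signal.  A mempool filled at driver init sidesteps that. */

static mempool_t *my_job_pool;	/* created with mempool_create() at init */

static int my_sched_run_job(struct my_job *job)
{
	struct my_hw_packet *pkt;

	/* GFP_KERNEL here would be exactly the bug lockdep should
	 * flag; GFP_NOWAIT never enters reclaim and instead falls
	 * back to the pool's preallocated reserve. */
	pkt = mempool_alloc(my_job_pool, GFP_NOWAIT);
	if (!pkt)
		return -ENOMEM;

	my_hw_emit(pkt, job);	/* hypothetical hw submission */
	return 0;
}
```

This is why the thread calls the per-driver allocations "fairly cosmetic": the fix is mechanical once a shared mempool exists, but every driver callback has to be converted.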
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
On Wed, Jul 8, 2020 at 11:13 AM Daniel Vetter wrote: > > On Wed, Jul 8, 2020 at 4:57 PM Christian König > wrote: > > > > Could we merge this controlled by a separate config option? > > > > This way we could have the checks upstream without having to fix all the > > stuff before we do this? > > Since it's fully opt-in annotations nothing blows up if we don't merge > any annotations. So we could start merging the first 3 patches. After > that the fun starts ... > > My rough idea was that first I'd try to tackle display, thus far > there's 2 actual issues in drivers: > - amdgpu has some dma_resv_lock in commit_tail, plus a kmalloc. I > think those should be fairly easy to fix (I'd try a stab at them even) > - vmwgfx has a full on locking inversion with dma_resv_lock in > commit_tail, and that one is functional. Not just reading something > which we can safely assume to be invariant anyway (like the tmz flag > for amdgpu, or whatever it was). > > I've done a pile more annotations patches for other atomic drivers > now, so hopefully that flushes out any remaining offenders here. Since > some of the annotations are in helper code worst case we might need a > dev->mode_config.broken_atomic_commit flag to disable them. At least > for now I have 0 plans to merge any of these while there's known > unsolved issues. Maybe if some drivers take forever to get fixed we > can then apply some duct-tape for the atomic helper annotation patch. > Instead of a flag we can also copypasta the atomic_commit_tail hook, > leaving the annotations out and adding a huge warning about that. > > Next big chunk is the drm/scheduler annotations: > - amdgpu needs a full rework of display reset (but apparently in the works) I think the display deadlock issues should be fixed in: https://cgit.freedesktop.org/drm/drm/commit/?id=cdaae8371aa9d4ea1648a299b1a75946b9556944 Alex > - I read all the drivers, they all have the fairly cosmetic issue of > doing small allocations in their callbacks. 
> > I might end up typing the mempool we need for the latter issue, but > first still hoping for some actual test feedback from other drivers > using drm/scheduler. Again no intentions of merging these annotations > without the drivers being fixed first, or at least some duct-atpe > applied. > > Another option I've been thinking about, if there's cases where fixing > things properly is a lot of effort: We could do annotations for broken > sections (just the broken part, so we still catch bugs everywhere > else). They'd simply drop&reacquire the lock. We could then e.g. use > that in the amdgpu display reset code, and so still make sure that > everything else in reset doesn't get worse. But I think adding that > shouldn't be our first option. > > I'm not personally a big fan of the Kconfig or runtime option, only > upsets people since it breaks lockdep for them. Or they ignore it, and > we don't catch bugs, making it fairly pointless to merge. > > Cheers, Daniel > > > > > > Thanks, > > Christian. > > > > Am 07.07.20 um 22:12 schrieb Daniel Vetter: > > > Design is similar to the lockdep annotations for workers, but with > > > some twists: > > > > > > - We use a read-lock for the execution/worker/completion side, so that > > >this explicit annotation can be more liberally sprinkled around. > > >With read locks lockdep isn't going to complain if the read-side > > >isn't nested the same way under all circumstances, so ABBA deadlocks > > >are ok. Which they are, since this is an annotation only. > > > > > > - We're using non-recursive lockdep read lock mode, since in recursive > > >read lock mode lockdep does not catch read side hazards. And we > > >_very_ much want read side hazards to be caught. 
For full details of > > >this limitation see > > > > > >commit e91498589746065e3ae95d9a00b068e525eec34f > > >Author: Peter Zijlstra > > >Date: Wed Aug 23 13:13:11 2017 +0200 > > > > > >locking/lockdep/selftests: Add mixed read-write ABBA tests > > > > > > - To allow nesting of the read-side explicit annotations we explicitly > > >keep track of the nesting. lock_is_held() allows us to do that. > > > > > > - The wait-side annotation is a write lock, and entirely done within > > >dma_fence_wait() for everyone by default. > > > > > > - To be able to freely annotate helper functions I want to make it ok > > >to call dma_fence_begin/end_signalling from soft/hardirq context. > > >First attempt was using the hardirq locking context for the write > > >side in lockdep, but this forces all normal spinlocks nested within > > >dma_fence_begin/end_signalling to be spinlocks. That bollocks. > > > > > >The approach now is to simple check in_atomic(), and for these cases > > >entirely rely on the might_sleep() check in dma_fence_wait(). That > > >will catch any wrong nesting against spinlocks from soft/hardirq > > >contexts. > > > > > > The idea here is that every code path that's critical for
Re: [Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
On Wed, Jul 8, 2020 at 4:57 PM Christian König wrote:
>
> Could we merge this controlled by a separate config option?
>
> This way we could have the checks upstream without having to fix all the
> stuff before we do this?

Since it's fully opt-in annotations nothing blows up if we don't merge any annotations. So we could start merging the first 3 patches. After that the fun starts ...

My rough idea was that first I'd try to tackle display, thus far there's 2 actual issues in drivers:

- amdgpu has some dma_resv_lock in commit_tail, plus a kmalloc. I think those should be fairly easy to fix (I'd even take a stab at them)
- vmwgfx has a full on locking inversion with dma_resv_lock in commit_tail, and that one is functional. Not just reading something which we can safely assume to be invariant anyway (like the tmz flag for amdgpu, or whatever it was).

I've done a pile more annotation patches for other atomic drivers now, so hopefully that flushes out any remaining offenders here. Since some of the annotations are in helper code, worst case we might need a dev->mode_config.broken_atomic_commit flag to disable them. At least for now I have 0 plans to merge any of these while there are known unsolved issues. Maybe if some drivers take forever to get fixed we can then apply some duct-tape for the atomic helper annotation patch. Instead of a flag we can also copypasta the atomic_commit_tail hook, leaving the annotations out and adding a huge warning about that.

Next big chunk is the drm/scheduler annotations:

- amdgpu needs a full rework of display reset (but apparently in the works)
- I read all the drivers, they all have the fairly cosmetic issue of doing small allocations in their callbacks.

I might end up typing the mempool we need for the latter issue, but first still hoping for some actual test feedback from other drivers using drm/scheduler. Again, no intentions of merging these annotations without the drivers being fixed first, or at least some duct-tape applied.
Another option I've been thinking about, if there are cases where
fixing things properly is a lot of effort: we could do annotations for
broken sections (just the broken part, so we still catch bugs
everywhere else). They'd simply drop & reacquire the lock. We could
then e.g. use that in the amdgpu display reset code, and so still make
sure that everything else in reset doesn't get worse. But I think
adding that shouldn't be our first option.

I'm not personally a big fan of the Kconfig or runtime option; it only
upsets people since it breaks lockdep for them. Or they ignore it, and
we don't catch bugs, making it fairly pointless to merge.

Cheers, Daniel

> Thanks,
> Christian.
>
> On 07.07.20 at 22:12, Daniel Vetter wrote:
> > Design is similar to the lockdep annotations for workers, but with
> > some twists:
> >
> > - We use a read-lock for the execution/worker/completion side, so
> >   that this explicit annotation can be more liberally sprinkled
> >   around. With read locks lockdep isn't going to complain if the
> >   read-side isn't nested the same way under all circumstances, so
> >   ABBA deadlocks are ok. Which they are, since this is an annotation
> >   only.
> >
> > - We're using non-recursive lockdep read lock mode, since in
> >   recursive read lock mode lockdep does not catch read side hazards.
> >   And we _very_ much want read side hazards to be caught. For full
> >   details of this limitation see
> >
> >     commit e91498589746065e3ae95d9a00b068e525eec34f
> >     Author: Peter Zijlstra
> >     Date:   Wed Aug 23 13:13:11 2017 +0200
> >
> >         locking/lockdep/selftests: Add mixed read-write ABBA tests
> >
> > - To allow nesting of the read-side explicit annotations we
> >   explicitly keep track of the nesting. lock_is_held() allows us to
> >   do that.
> >
> > - The wait-side annotation is a write lock, and entirely done within
> >   dma_fence_wait() for everyone by default.
> >
> > - To be able to freely annotate helper functions I want to make it
> >   ok to call dma_fence_begin/end_signalling from soft/hardirq
> >   context. First attempt was using the hardirq locking context for
> >   the write side in lockdep, but this forces all normal spinlocks
> >   nested within dma_fence_begin/end_signalling to be spinlocks. That
> >   bollocks.
> >
> >   The approach now is to simply check in_atomic(), and for these
> >   cases entirely rely on the might_sleep() check in
> >   dma_fence_wait(). That will catch any wrong nesting against
> >   spinlocks from soft/hardirq contexts.
> >
> > The idea here is that every code path that's critical for eventually
> > signalling a dma_fence should be annotated with
> > dma_fence_begin/end_signalling. The annotation ideally starts right
> > after a dma_fence is published (added to a dma_resv, exposed as a
> > sync_file fd, attached to a drm_syncobj fd, or anything else that
> > makes the dma_fence visible to other kernel threads), up to and
> > including the dma_fence_wait(). Examples are irq handlers, the
> > scheduler rt threads, the tail of execbuf (after the corresponding
> > fences are visible), any workers that end up signalling dma_fences
> > and really anything else.
[Intel-gfx] [PATCH 01/25] dma-fence: basic lockdep annotations
Design is similar to the lockdep annotations for workers, but with some
twists:

- We use a read-lock for the execution/worker/completion side, so that
  this explicit annotation can be more liberally sprinkled around. With
  read locks lockdep isn't going to complain if the read-side isn't
  nested the same way under all circumstances, so ABBA deadlocks are
  ok. Which they are, since this is an annotation only.

- We're using non-recursive lockdep read lock mode, since in recursive
  read lock mode lockdep does not catch read side hazards. And we
  _very_ much want read side hazards to be caught. For full details of
  this limitation see

    commit e91498589746065e3ae95d9a00b068e525eec34f
    Author: Peter Zijlstra
    Date:   Wed Aug 23 13:13:11 2017 +0200

        locking/lockdep/selftests: Add mixed read-write ABBA tests

- To allow nesting of the read-side explicit annotations we explicitly
  keep track of the nesting. lock_is_held() allows us to do that.

- The wait-side annotation is a write lock, and entirely done within
  dma_fence_wait() for everyone by default.

- To be able to freely annotate helper functions I want to make it ok
  to call dma_fence_begin/end_signalling from soft/hardirq context.
  First attempt was using the hardirq locking context for the write
  side in lockdep, but this forces all normal spinlocks nested within
  dma_fence_begin/end_signalling to be spinlocks. That bollocks.

  The approach now is to simply check in_atomic(), and for these cases
  entirely rely on the might_sleep() check in dma_fence_wait(). That
  will catch any wrong nesting against spinlocks from soft/hardirq
  contexts.

The idea here is that every code path that's critical for eventually
signalling a dma_fence should be annotated with
dma_fence_begin/end_signalling.
The annotation ideally starts right after a dma_fence is published
(added to a dma_resv, exposed as a sync_file fd, attached to a
drm_syncobj fd, or anything else that makes the dma_fence visible to
other kernel threads), up to and including the dma_fence_wait().
Examples are irq handlers, the scheduler rt threads, the tail of
execbuf (after the corresponding fences are visible), any workers that
end up signalling dma_fences and really anything else. Not annotated
should be code paths that only complete fences opportunistically as
the gpu progresses, like e.g. shrinker/eviction code.

The main class of deadlocks this is supposed to catch are:

    Thread A:

        mutex_lock(A);
        mutex_unlock(A);

        dma_fence_signal();

    Thread B:

        mutex_lock(A);
        dma_fence_wait();
        mutex_unlock(A);

Thread B is blocked on A signalling the fence, but A never gets around
to that because it cannot acquire lock A.

Note that dma_fence_wait() is allowed to be nested within
dma_fence_begin/end_signalling sections. To allow this to happen the
read lock needs to be upgraded to a write lock, which means if any
other lock is acquired between the dma_fence_begin_signalling() call
and the call to dma_fence_wait(), and still held, this will result in
an immediate lockdep complaint. The only other option would be to not
annotate such calls, defeating the point. Therefore these annotations
cannot be sprinkled over the code entirely mindlessly, to avoid false
positives.
Originally I hoped that the cross-release lockdep extensions would
alleviate the need for explicit annotations:

    https://lwn.net/Articles/709849/

But there are a few reasons why that's not an option:

- It's not happening in upstream, since it got reverted due to too
  many false positives:

    commit e966eaeeb623f09975ef362c2866fae6f86844f9
    Author: Ingo Molnar
    Date:   Tue Dec 12 12:31:16 2017 +0100

        locking/lockdep: Remove the cross-release locking checks

        This code (CONFIG_LOCKDEP_CROSSRELEASE=y and
        CONFIG_LOCKDEP_COMPLETIONS=y), while it found a number of old
        bugs initially, was also causing too many false positives that
        caused people to disable lockdep - which is arguably a worse
        overall outcome.

- cross-release uses the complete() call to annotate the end of
  critical sections; for dma_fence that would be dma_fence_signal().
  But we do not want all dma_fence_signal() calls to be treated as
  critical, since many are opportunistic cleanup of gpu requests. If
  these get stuck there's still the main completion interrupt and
  workers who can unblock everyone. Automatically annotating all
  dma_fence_signal() calls would hence cause false positives.

- cross-release had some educated guesses for when a critical section
  starts, like fresh syscall or fresh work callback. This would again
  cause false positives without explicit annotations, since for
  dma_fence the critical sections only start when we publish a fence.

- Furthermore there can be cases where a thread never does a
  dma_fence_signal, but is still critical for reaching completion of
  fences. One example would be a scheduler kthread which picks up jobs
  and pushes them into hardware.