On Tue, Feb 03, 2015 at 11:00:56AM +0000, Chris Wilson wrote:
> On Tue, Feb 03, 2015 at 11:49:00AM +0100, Daniel Vetter wrote:
> > You can _never_ assert that a lock is not held, except in some very
> > restricted corner cases where it's guaranteed that your code is running
> > single-threaded (e.g. driver load before you've published any pointers
> > leading to that lock).
> 
> Except that the mistake here was that we thought we were already inside
> the strictly single threaded recovery phase. Seems a bit blasé not to
> mention that recovery includes several tricks to break locks.

Even if this check is placed after the wake_up calls it's still invalid:
only when we actually try to grab the mutex with mutex_lock do we enforce
enough synchronization to stall for any other lock holders. The scheduler
is free to honor our wake_up whenever it pleases.

Hence I stand by my assertion that, except in cases where it's trivially
true (i.e. driver load, when no other cpu could possibly have seen a pointer
to that lock yet), a check for unlockedness is wrong. The only reliable way
is to grab the lock (and hang if there's a bug).

We've had this exact bug in the past with hangcheck, years back when we
started to stress-test hangs: there was a mutex_trylock in the recovery
work, and we bailed out when it failed:

commit d54a02c041ccfdcfe3efcd1e5b90c6e8d5e7a8d9
Author: Daniel Vetter <daniel.vet...@ffwll.ch>
Date:   Wed Jul 4 22:18:39 2012 +0200

    drm/i915: don't trylock in the gpu reset code
    
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx