It turns out that treating ww_mutex_lock() as a trylock will fail to catch
real deadlock hazards like:

        mutex_lock(&A);                 ww_mutex_lock(&B, ctx);
        ww_mutex_lock(&B, ctx);         mutex_lock(&A);

The current lockdep code should be able to handle mixed lock ordering
of ww_mutexes as long as

 1) there is a top level nest lock that is acquired beforehand, and
 2) the nest lock and the ww_mutex are of the same lock class.

Any ww_mutex use cases that do not provide the above guarantee will
have to be modified to avoid lockdep problems.
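
For illustration, a conforming usage pattern looks roughly like the
following sketch (my_ww_class is a hypothetical ww_class; the acquire
context initialized against it serves as the top level nest lock of the
same class as the ww_mutexes):

        struct ww_acquire_ctx ctx;

        ww_acquire_init(&ctx, &my_ww_class);    /* top level nest lock  */
        ww_mutex_lock(&A, &ctx);                /* same class as ctx    */
        ww_mutex_lock(&B, &ctx);                /* order checked by ctx */
        ...
        ww_mutex_unlock(&B);
        ww_mutex_unlock(&A);
        ww_acquire_fini(&ctx);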

Revert the previous commit b058f2e4d0a7 ("locking/ww_mutex: Treat
ww_mutex_lock() like a trylock").

Fixes: b058f2e4d0a7 ("locking/ww_mutex: Treat ww_mutex_lock() like a trylock")
Signed-off-by: Waiman Long <long...@redhat.com>
---
 kernel/locking/mutex.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index bb89393cd3a2..622ebdfcd083 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -946,10 +946,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
        }
 
        preempt_disable();
-       /*
-        * Treat as trylock for ww_mutex.
-        */
-       mutex_acquire_nest(&lock->dep_map, subclass, !!ww_ctx, nest_lock, ip);
+       mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
        if (__mutex_trylock(lock) ||
            mutex_optimistic_spin(lock, ww_ctx, NULL)) {
-- 
2.18.1