From: Peter W. Morreale <[EMAIL PROTECTED]>

Remove the redundant attempt to take the lock.  While it is true that with
this patch the exit path performs an unnecessary xchg (in the event the
lock is granted without further traversal of the loop), experimentation
shows that we almost never encounter this situation.
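
For illustration only, a minimal user-space sketch of the slowpath control
flow this relies on; it is NOT the kernel code.  try_to_take(), task_state
and the state save/restore below are hypothetical stand-ins for
try_to_take_rt_mutex() and the xchg(&current->state, ...) pair in
rt_spin_lock_slowlock(), just to show why the early attempt is redundant:
the first pass of the loop makes the same attempt before ever sleeping,
so dropping it only costs the xchg pair on an immediately-grantable lock.

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_int lock_owner;   /* 0 == unowned           */
	static atomic_int task_state;   /* mock of current->state */

	static bool try_to_take(int me)
	{
		int expected = 0;
		return atomic_compare_exchange_strong(&lock_owner, &expected, me);
	}

	static void slowlock(int me)
	{
		/*
		 * The removed hunk would try_to_take() here and return early.
		 * Without it, the only extra cost when the lock is free is the
		 * state xchg pair around the loop.
		 */
		int saved_state = atomic_exchange(&task_state, 1 /* "UNINTERRUPTIBLE" */);

		for (;;) {
			if (try_to_take(me))
				break;          /* lock acquired */
			/* ... enqueue waiter, drop wait_lock, schedule() ... */
		}

		/* restore whatever state the task had on entry */
		atomic_exchange(&task_state, saved_state);
	}

	int main(void)
	{
		slowlock(42);
		printf("owner=%d\n", atomic_load(&lock_owner));
		return 0;
	}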

Signed-off-by: Peter W. Morreale <[EMAIL PROTECTED]>
---

 kernel/rtmutex.c |    6 ------
 1 files changed, 0 insertions(+), 6 deletions(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index ebdaa17..95c3644 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -718,12 +718,6 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
        spin_lock_irqsave(&lock->wait_lock, flags);
        init_lists(lock);
 
-       /* Try to acquire the lock again: */
-       if (try_to_take_rt_mutex(lock)) {
-               spin_unlock_irqrestore(&lock->wait_lock, flags);
-               return;
-       }
-
        BUG_ON(rt_mutex_owner(lock) == current);
 
        /*
