On 04/16/2013 05:10 AM, Ingo Molnar wrote:
* Waiman Long <waiman.l...@hp.com> wrote:

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3021,9 +3021,6 @@ static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
   */
  int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
  {
-       if (!sched_feat(OWNER_SPIN))
-               return 0;
-
        rcu_read_lock();
        while (owner_running(lock, owner)) {
                if (need_resched())
@@ -3040,6 +3037,27 @@ int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
         */
        return lock->owner == NULL;
  }
+
+/*
+ * Initial check for entering the mutex spinning loop
+ */
+int mutex_can_spin_on_owner(struct mutex *lock)
+{
+       int retval = 1;
+
+       if (!sched_feat(OWNER_SPIN))
+               return 0;
+
+       rcu_read_lock();
+       if (lock->owner)
+               retval = lock->owner->on_cpu;
+       rcu_read_unlock();
+       /*
+        * if lock->owner is not set, the mutex owner may have just acquired
+        * it and not set the owner yet or the mutex has been released.
+        */
+       return retval;
+}
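(For context: the idea is that this helper is called once, before entering the existing optimistic spin loop in __mutex_lock_common(), so spinning is skipped entirely when the current owner is not running on a CPU. The snippet below is a simplified sketch of such a call site, not the actual hunk from the series; the slowpath label and surrounding structure are assumed for illustration.)

	/* Don't bother spinning if the owner isn't on a CPU right now. */
	if (!mutex_can_spin_on_owner(lock))
		goto slowpath;

	for (;;) {
		struct task_struct *owner;

		/* If there is an owner, spin until it releases the lock. */
		owner = ACCESS_ONCE(lock->owner);
		if (owner && !mutex_spin_on_owner(lock, owner))
			break;

		/* ...otherwise try to grab the lock, or give up and sleep. */
	}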
The SCHED_FEAT_OWNER_SPIN flag was really just an early hack we did to make mutex spinning testable with and without the feature.
I see.

I'd suggest a preparatory patch that gets rid of that flag and moves these two
functions from sched/core.c to mutex.c where they belong.

This will also allow the removal of the mutex prototypes from sched.h.

Yes, I can certainly prepare a patch to remove SCHED_FEAT_OWNER_SPIN and move those functions back to mutex.c after my patch set goes in. As for the timing, do you want me to do it now, or can it wait? I will start my vacation later this week and will be back by the end of the month.
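For reference, a rough sketch of the shape such a preparatory patch could take (not the actual diff): both helpers, along with the owner_running() helper they use, would live in kernel/mutex.c as static functions, the sched_feat(OWNER_SPIN) check would go away, and the prototypes could then be dropped from <linux/sched.h>.

static noinline
int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
	rcu_read_lock();
	while (owner_running(lock, owner)) {
		if (need_resched())
			break;

		arch_mutex_cpu_relax();
	}
	rcu_read_unlock();

	/*
	 * We break out of the loop above on need_resched() and when the
	 * owner changed, which is a sign of heavy contention. Return
	 * success only when lock->owner is NULL.
	 */
	return lock->owner == NULL;
}

static inline int mutex_can_spin_on_owner(struct mutex *lock)
{
	int retval = 1;

	rcu_read_lock();
	if (lock->owner)
		retval = lock->owner->on_cpu;
	rcu_read_unlock();

	/*
	 * If lock->owner is not set, the owner may have just acquired
	 * the mutex and not yet set the field, or the mutex may have
	 * been released.
	 */
	return retval;
}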

Regards,
Longman