On Fri, May 31, 2019 at 03:12:13PM -0600, Jens Axboe wrote:
> On 5/30/19 2:03 AM, Peter Zijlstra wrote:

> > What is the purpose of that patch?! The Changelog doesn't mention any
> > benefit or performance gain. So why not revert it?
> 
> Yeah that is actually pretty weak. There are substantial performance
> gains for small IOs using this trick, the changelog should have
> included those. I guess that was left on the list...

OK. I've looked at the try_to_wake_up() path for these exact
conditions and we're certainly sub-optimal there, and I think we can put
much of this special case in there. Please see below.

> I know it's not super kosher, your patch, but I don't think it's that
> bad hidden in a generic helper.

How about the thing that Oleg proposed? That is, don't set a waiter when
we know the loop is polling? That would avoid the need for this
altogether, and it would also avoid any set_current_state() on the wait
side of things.
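
Something like the below, I mean; a sketch only, where the io_uring
details (io_ctx, cq_events_ready(), iopoll_one()) are made up for
illustration:

/*
 * Oleg's suggestion, roughly: if the caller is going to busy-poll for
 * completions anyway, don't register a waiter and don't touch
 * current->state at all; then there is nobody to wake up and no
 * self-wakeup trick is needed on the completion side.
 */
static int cq_wait(struct io_ctx *ctx, int min_events, bool polling)
{
	DEFINE_WAIT(wait);

	if (polling) {
		/* no waiter registered, no set_current_state() */
		while (!cq_events_ready(ctx, min_events))
			iopoll_one(ctx);
		return 0;
	}

	do {
		prepare_to_wait(&ctx->cq_wait, &wait, TASK_INTERRUPTIBLE);
		if (cq_events_ready(ctx, min_events))
			break;
		if (signal_pending(current))
			break;
		schedule();
	} while (1);
	finish_wait(&ctx->cq_wait, &wait);

	return signal_pending(current) ? -ERESTARTSYS : 0;
}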

Anyway, Oleg, do you see anything blatantly buggered with this patch?

(the stats were already dodgy for rq-stats, this patch makes them dodgy
for task-stats too)

---
 kernel/sched/core.c | 39 +++++++++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 102dfcf0a29a..474aa4c8e9d2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1990,6 +1990,29 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
        unsigned long flags;
        int cpu, success = 0;
 
+       if (p == current) {
+               /*
+                * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
+                * == smp_processor_id()'. Together this means we can special
+                * case the whole 'p->on_rq && ttwu_remote()' case below
+                * without taking any locks.
+                *
+                * In particular:
+                *  - we rely on Program-Order guarantees for all the ordering,
+                *  - we're serialized against set_special_state() by virtue of
+                *    it disabling IRQs (this allows not taking ->pi_lock).
+                */
+               if (!(p->state & state))
+                       goto out;
+
+               success = 1;
+               cpu = task_cpu(p); /* ttwu_stat() at 'out' wants @cpu */
+               trace_sched_waking(p);
+               p->state = TASK_RUNNING;
+               trace_sched_wakeup(p);
+               goto out;
+       }
+
        /*
         * If we are going to wake up a thread waiting for CONDITION we
         * need to ensure that CONDITION=1 done by the caller can not be
@@ -1999,7 +2022,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
        raw_spin_lock_irqsave(&p->pi_lock, flags);
        smp_mb__after_spinlock();
        if (!(p->state & state))
-               goto out;
+               goto unlock;
 
        trace_sched_waking(p);
 
@@ -2029,7 +2052,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
         */
        smp_rmb();
        if (p->on_rq && ttwu_remote(p, wake_flags))
-               goto stat;
+               goto unlock;
 
 #ifdef CONFIG_SMP
        /*
@@ -2089,12 +2112,16 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 #endif /* CONFIG_SMP */
 
        ttwu_queue(p, cpu, wake_flags);
-stat:
-       ttwu_stat(p, cpu, wake_flags);
-out:
+unlock:
        raw_spin_unlock_irqrestore(&p->pi_lock, flags);
 
-       return success;
+out:
+       if (success) {
+               ttwu_stat(p, cpu, wake_flags);
+               return true;
+       }
+
+       return false;
 }
 
 /**

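For the record, the self-wakeup pattern this fast path targets looks
something like this (illustrative only; should_poll() is made up):

	set_current_state(TASK_INTERRUPTIBLE);
	if (should_poll()) {
		/*
		 * Changed our minds about sleeping; wake ourselves up.
		 * This now hits the p == current fast path above:
		 * no ->pi_lock, no rq->lock, just p->state = TASK_RUNNING.
		 */
		wake_up_process(current);
	} else {
		schedule();
	}

With the old code that self-wakeup took the full ->pi_lock plus
ttwu_remote() rq->lock round-trip for what amounts to a single store
to p->state.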