When a task prepares to sleep and then aborts the sleep (by changing its state back), there is a small chance that a waker is spinning on that task's on_cpu flag, waiting for the flag to clear before performing the wakeup. The waker may keep spinning for a long time until the task actually sleeps at some later point, which then leads to a spurious wakeup.
This patch adds code to detect the change in task state and abort the wakeup operation, when appropriate, to free up the waker's CPU to do other useful work.

Signed-off-by: Waiman Long <waiman.l...@hp.com>
---
 kernel/sched/core.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7e548bd..e4b6e84 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2075,8 +2075,15 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
	 *
	 * This ensures that tasks getting woken will be fully ordered against
	 * their previous state and preserve Program Order.
+	 *
+	 * If the owning cpu decides not to sleep after all by changing back
+	 * its task state, we can return immediately.
	 */
-	smp_cond_acquire(!p->on_cpu);
+	smp_cond_acquire(!p->on_cpu || !(p->state & state));
+	if (!(p->state & state)) {
+		success = 0;
+		goto out;
+	}

	p->sched_contributes_to_load = !!task_contributes_to_load(p);
	p->state = TASK_WAKING;
--
1.7.1