On Fri, 2015-01-16 at 09:46 -0800, Tim Chen wrote:
> On Thu, 2015-01-15 at 20:58 -0500, Steven Rostedt wrote:
> >
> > Please add a comment here that says something like:
> >
> > /*
> >  * Don't bother moving it if the destination CPU is
> >  * not running a lower priority task.
> >  */
> >
> Okay. Updated in patch below.
>
> > > -	if (target != -1)
> > > +	if (target != -1 &&
> > > +	    p->prio < cpu_rq(target)->rt.highest_prio.curr)
> > > 		cpu = target;
> > > 	}
> > > 	rcu_read_unlock();
> > > @@ -1613,6 +1614,12 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
> > > 		break;
> > >
> > > 	lowest_rq = cpu_rq(cpu);
> > > +
> > > +	if (lowest_rq->rt.highest_prio.curr <= task->prio) {
> > > +		/* target rq has tasks of equal or higher priority, try again */
> > > +		lowest_rq = NULL;
> > > +		continue;
> >
> > This should just break out and not try again. The reason for the other
> > try again is the double_lock, which can release the locks and let a
> > process waiting on the lock sneak in and change the priorities. In this
> > case, though, no locks are released, so a retry is highly unlikely to do
> > anything differently and just wastes cycles.
>
> Agree. Updated in the patch below.
>
> Thanks.
>
> Tim
>
Steven and Peter, are you okay with the updated patch? Thanks.

Tim

> ---->8------
>
> From 5f676f7a351e85eb5cc64f1971dd03eca43b5271 Mon Sep 17 00:00:00 2001
> From: Tim Chen <tim.c.c...@linux.intel.com>
> Date: Fri, 12 Dec 2014 15:38:12 -0800
> Subject: [PATCH] sched-rt: Reduce rq lock contention by eliminating locking of
>  non-feasible target
> To: Peter Zijlstra <pet...@infradead.org>
> Cc: Andi Kleen <a...@firstfloor.org>, Ingo Molnar <mi...@elte.hu>,
>     Shawn Bohrer <sboh...@rgmadvisors.com>, Steven Rostedt <rost...@goodmis.org>,
>     Suruchi Kadu <suruchi.a.k...@intel.com>, Doug Nelson <doug.nel...@intel.com>,
>     linux-kernel@vger.kernel.org
>
> This patch adds checks that prevent futile attempts to move rt tasks
> to a CPU with active tasks of equal or higher priority. This reduces
> run queue lock contention and improves the performance of a well
> known OLTP benchmark by 0.7%.
>
> Signed-off-by: Tim Chen <tim.c.c...@linux.intel.com>
> ---
>  kernel/sched/rt.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index ee15f5a..46ebcb1 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1337,7 +1337,12 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
>  		    curr->prio <= p->prio)) {
>  		int target = find_lowest_rq(p);
>  
> -		if (target != -1)
> +		/*
> +		 * Don't bother moving it if the destination CPU is
> +		 * not running a lower priority task.
> +		 */
> +		if (target != -1 &&
> +		    p->prio < cpu_rq(target)->rt.highest_prio.curr)
>  			cpu = target;
>  	}
>  	rcu_read_unlock();
> @@ -1614,6 +1619,16 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>  
>  		lowest_rq = cpu_rq(cpu);
>  
> +		if (lowest_rq->rt.highest_prio.curr <= task->prio) {
> +			/*
> +			 * Target rq has tasks of equal or higher priority,
> +			 * retrying does not release any lock and is unlikely
> +			 * to yield a different result.
> +			 */
> +			lowest_rq = NULL;
> +			break;
> +		}
> +
>  		/* if the prio of this runqueue changed, try again */
>  		if (double_lock_balance(rq, lowest_rq)) {
>  			/*