[PATCH 18/27] sched: Update nohz rq clock before searching busiest group on load balancing

2012-12-29 Thread Frederic Weisbecker
While load balancing an rq target, we look for the busiest group.
This operation may require an up-to-date rq clock if we end up calling
scale_rt_power(). To this end, update it manually if the target is
running tickless.

DOUBT: don't we actually also need this in vanilla kernel, in case
this_cpu is in dyntick-idle mode?
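For context, the rq clock dependency comes from scale_rt_power(), which sizes
its RT-averaging window from rq->clock; on a tickless CPU a stale clock skews
the computed available power. The sketch below paraphrases that function as it
looked in this kernel generation -- it is not a verbatim copy, and field names
or rounding details may differ:

/*
 * Paraphrased sketch of scale_rt_power() (approximate, for illustration).
 * The key point is the read of rq->clock: if the CPU runs tickless and the
 * clock has not been refreshed, "total" is derived from a stale timestamp
 * and the returned power scaling is off.
 */
static unsigned long scale_rt_power(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	u64 total, available, age_stamp, avg;

	age_stamp = ACCESS_ONCE(rq->age_stamp);
	avg = ACCESS_ONCE(rq->rt_avg);

	/* Window length depends on an up-to-date rq->clock */
	total = sched_avg_period() + (rq->clock - age_stamp);

	if (unlikely(total < avg))
		available = 0;	/* keep the result non-negative */
	else
		available = total - avg;

	if (unlikely((s64)total < SCHED_POWER_SCALE))
		total = SCHED_POWER_SCALE;

	total >>= SCHED_POWER_SHIFT;

	return div_u64(available * SCHED_POWER_SCALE, total);
}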

Signed-off-by: Frederic Weisbecker 
Cc: Alessio Igor Bogani 
Cc: Andrew Morton 
Cc: Chris Metcalf 
Cc: Christoph Lameter 
Cc: Geoff Levand 
Cc: Gilad Ben Yossef 
Cc: Hakan Akkan 
Cc: Ingo Molnar 
Cc: Paul E. McKenney 
Cc: Paul Gortmaker 
Cc: Peter Zijlstra 
Cc: Steven Rostedt 
Cc: Thomas Gleixner 
---
 kernel/sched/fair.c |   13 +
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 698137d..473f50f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5023,6 +5023,19 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
schedstat_inc(sd, lb_count[idle]);
 
+	/*
+	 * find_busiest_group() may need an up-to-date cpu clock
+	 * (see scale_rt_power()). If the CPU is nohz, its clock
+	 * may be stale.
+	 */
+	if (tick_nohz_full_cpu(this_cpu)) {
+		local_irq_save(flags);
+		raw_spin_lock(&this_rq->lock);
+		update_rq_clock(this_rq);
+		raw_spin_unlock(&this_rq->lock);
+		local_irq_restore(flags);
+	}
+
 redo:
	group = find_busiest_group(&env, balance);
 
-- 
1.7.5.4
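
Purely as an illustration (not part of the patch), the guard added above could
be carried by a small helper; the name nohz_update_this_rq_clock() is made up
here, and this is only a sketch of the same pattern:

/*
 * Illustrative helper, not in the patch: refresh this_rq's clock when the
 * local CPU runs in full nohz mode, so that a subsequent call into
 * find_busiest_group()/scale_rt_power() sees a current timestamp.
 */
static inline void nohz_update_this_rq_clock(int this_cpu, struct rq *this_rq)
{
	unsigned long flags;

	if (!tick_nohz_full_cpu(this_cpu))
		return;

	local_irq_save(flags);
	raw_spin_lock(&this_rq->lock);
	update_rq_clock(this_rq);
	raw_spin_unlock(&this_rq->lock);
	local_irq_restore(flags);
}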
