On Tue, Nov 10, 2015 at 09:36:02AM +0900, [email protected] wrote:
> +++ b/kernel/sched/fair.c
> @@ -4419,10 +4419,11 @@ static void update_idle_cpu_load(struct rq *this_rq)
>  /*
>   * Called from tick_nohz_idle_exit() -- try and fix up the ticks we missed.
>   */
> -void update_cpu_load_nohz(void)
> +void update_cpu_load_nohz(int active)
>  {
>  	struct rq *this_rq = this_rq();
>  	unsigned long curr_jiffies = READ_ONCE(jiffies);
> +	unsigned long load = active ? weighted_cpuload(cpu_of(this_rq)) : 0;
>  	unsigned long pending_updates;
> 
>  	if (curr_jiffies == this_rq->last_load_update_tick)
> @@ -4433,10 +4434,11 @@ void update_cpu_load_nohz(void)
>  	if (pending_updates) {
>  		this_rq->last_load_update_tick = curr_jiffies;
>  		/*
> -		 * We were idle, this means load 0, the current load might be
> -		 * !0 due to remote wakeups and the sort.
> +		 * In the regular NOHZ case, we were idle, this means load 0.
> +		 * In the NOHZ_FULL case, we were non-idle, we should consider
> +		 * its weighted load.
>  		 */
> -		__update_cpu_load(this_rq, 0, pending_updates, 0);
> +		__update_cpu_load(this_rq, load, pending_updates, active);
>  	}
>  	raw_spin_unlock(&this_rq->lock);
>  }
Bah, so I did all the work to get the actual number of lost ticks in there, only to _then_ find out that's mostly pointless :-)

The problem is that update_idle_cpu_load() is called while idle (from another CPU), so it still needs the whole jiffy-based thing.

So I'll take this patch for now. Thanks.
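For anyone following along, here is a rough stand-alone sketch of what "the whole jiffy-based thing" does conceptually. This is not the kernel code: the real __update_cpu_load()/decay_load_missed() use a precomputed degrade-factor table rather than the loop below, and catch_up_cpu_load() is a made-up helper name used only for illustration. The idea is simply to decay each cpu_load[i] over the jiffies that were missed while idle, then fold the current load in with one regular update:

/*
 * Not kernel code: a stand-alone, simplified illustration of the
 * jiffy-based catch-up.  The kernel's __update_cpu_load() /
 * decay_load_missed() use a precomputed degrade-factor table instead
 * of the loop below, and catch_up_cpu_load() is an invented helper.
 */
#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

/*
 * Decay one cpu_load[idx] over "missed" idle (load == 0) ticks:
 * each tick, cpu_load[idx] *= (2^idx - 1) / 2^idx.
 */
static unsigned long decay_load_missed(unsigned long load,
				       unsigned long missed, int idx)
{
	while (missed--)
		load -= load >> idx;
	return load;
}

/*
 * Fold "pending_updates" missed ticks into cpu_load[], treating the
 * missed ticks as idle and applying this_load on the final tick.
 * Caller guarantees pending_updates >= 1 (the real code checks
 * "if (pending_updates)" before calling __update_cpu_load()).
 */
static void catch_up_cpu_load(unsigned long cpu_load[CPU_LOAD_IDX_MAX],
			      unsigned long this_load,
			      unsigned long pending_updates)
{
	int i;

	cpu_load[0] = this_load;	/* index 0 is the instantaneous load */
	for (i = 1; i < CPU_LOAD_IDX_MAX; i++) {
		unsigned long old;

		old = decay_load_missed(cpu_load[i], pending_updates - 1, i);
		/* one regular update with the current load for the last tick */
		cpu_load[i] = (old * ((1UL << i) - 1) + this_load) >> i;
	}
}

int main(void)
{
	unsigned long cpu_load[CPU_LOAD_IDX_MAX] = { 0, 800, 800, 800, 800 };
	int i;

	/* say 10 jiffies were missed while idle and the current load is 0 */
	catch_up_cpu_load(cpu_load, 0, 10);

	for (i = 0; i < CPU_LOAD_IDX_MAX; i++)
		printf("cpu_load[%d] = %lu\n", i, cpu_load[i]);
	return 0;
}

Since another CPU does this catch-up on behalf of the idle CPU, the number of missed jiffies has to be reconstructed from last_load_update_tick rather than counted tick by tick, which is why the jiffy-based path can't simply go away.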

