On 01/24/2013 06:08 PM, Ingo Molnar wrote:
> 
> * Alex Shi <alex....@intel.com> wrote:
> 
>> @@ -2539,7 +2539,11 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>>  void update_idle_cpu_load(struct rq *this_rq)
>>  {
>>      unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
>> +#if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
>> +    unsigned long load = (unsigned long)this_rq->cfs.runnable_load_avg;
>> +#else
>>      unsigned long load = this_rq->load.weight;
>> +#endif
> 
> I'd not make it conditional - just calculate runnable_load_avg 
> all the time (even if group scheduling is disabled) and use it 
> consistently. The last thing we want is to bifurcate scheduler 
> balancer behavior even further.

Very glad to see you back, Ingo! :)

This patch set follows my power-aware scheduling patchset. But for a
separate, workable runnable-load-based balancing, only the other 3
patches are needed; they were already sent to you in another patchset:

[patch v4 06/18] sched: give initial value for runnable avg of sched
[patch v4 07/18] sched: set initial load avg of new forked task
[patch v4 08/18] Revert "sched: Introduce temporary FAIR_GROUP_SCHED

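For reference, a minimal sketch of what the hunk could look like if the
conditional is dropped as Ingo suggests -- this assumes
cfs_rq->runnable_load_avg is maintained even when FAIR_GROUP_SCHED is
disabled, so the #ifdef simply goes away:

	void update_idle_cpu_load(struct rq *this_rq)
	{
		unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
		/* use the per-entity tracked runnable average unconditionally */
		unsigned long load = (unsigned long)this_rq->cfs.runnable_load_avg;
		...
	}
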

> 
> Thanks,
> 
>       Ingo
> 


-- 
Thanks
    Alex
