On 01/14/2013 08:01 PM, Morten Rasmussen wrote:
> >> static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
> >> {
> >> 	struct sched_entity *se = tg->se[cpu];
> >>
> >> 	if (!tg->parent)	/* the trivial, non-cgroup case */
On Fri, Jan 11, 2013 at 03:26:59AM +, Alex Shi wrote:
> On 01/10/2013 07:28 PM, Morten Rasmussen wrote:
> > On Sat, Jan 05, 2013 at 08:37:40AM +, Alex Shi wrote:
> >> effective_load calculates the load change as seen from the
> >> root_task_group. It needs to multiply by cfs_rq's tg_runnable_contrib
> >> when we turn to runnable load average balance.
effective_load calculates the load change as seen from the
root_task_group. It needs to multiply by cfs_rq's tg_runnable_contrib
when we turn to runnable load average balance.

Signed-off-by: Alex Shi
---
 kernel/sched/fair.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c