Now that we need the per-entity load tracking for load balancing,
trivially revert the patch which introduced the FAIR_GROUP_SCHED
dependence for load tracking.
Signed-off-by: Preeti U Murthy
---
 include/linux/sched.h | 7 +--
 kernel/sched/core.c   | 7 +--
 kernel/sched/fair.c
r Zijlstra and Ingo Molnar for their valuable feedback on v1
of the RFC which was the foundation for this version.
PATCH[1/2] Aims at enabling usage of Per-Entity-Load-Tracking for load balancing
PATCH[2/2] The crux of the patchset lies here.
---
Preeti U Murthy (2):
sched: Revert
ing through your suggestions, below is a patch which I wish to begin
with in my effort to integrate the per-entity load tracking metric with the
scheduler. I had posted a patchset earlier
(https://lkml.org/lkml/2012/10/25/162), but due to various drawbacks
I am redoing it along the lines o
On 10/26/2012 06:37 PM, Ingo Molnar wrote:
>
> * Peter Zijlstra wrote:
>
>> [...]
>>
>> So a sane series would introduce maybe two functions:
>> cpu_load() and task_load() and use those where we now use
>> rq->load.weight and p->se.load.weight for load balancing
>> purposes. Implement these f
On 10/26/2012 05:59 PM, Peter Zijlstra wrote:
> On Thu, 2012-10-25 at 23:42 +0530, Preeti U Murthy wrote:
> firstly, cfs_rq is the wrong place for a per-cpu load measure, secondly
> why add another load field instead of fixing the one we have?
Hmm.., rq->load.weight is the place.
>
ffected although there is less load on GP1. If yes, it
is a better *busy* gp.
*End Result: Better candidates for lb*
Rest of the patches: now that we have our busy sched group, let us load
balance with the aid of the new metric.
*End Result: Hopefully a more sensible movement of loads*
Hi Peter,
Thank you very much for your feedback.
On 10/25/2012 09:26 PM, Peter Zijlstra wrote:
> OK, so I tried reading a few patches and I'm completely failing.. maybe
> its late and my brain stopped working, but it simply doesn't make any
> sense.
>
> Most changelogs and comments aren't really
eshold. The call should be taken if the tasks can afford to be throttled.
This is why an additional metric has been included, which can determine how
long we can tolerate tasks not being moved even if the load is low.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 16 ++
Additional parameters which decide the amount of imbalance in the sched domain
calculated using PJT's metric are used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 36 +++-
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/kernel/
rent sched group is capable of pulling tasks upon
itself.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 33 +
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aafa3c1..67a916d 100644
--- a/ke
Make decisions based on PJT's metrics and the dependent metrics
about which tasks to move to reduce the imbalance.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
Additional parameters, calculated using PJT's metrics and its helpers,
are introduced to perform this function.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 34 +++---
1 file changed, 15 insertions(+), 19 deletions(-)
diff --git a/kernel/
Additional parameters, calculated using PJT's metrics and its helpers,
are introduced to perform this function.
Signed-off-by: Preeti U Murthy
---
 kernel/sched/fair.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/f
Make appropriate modifications in check_asym_packing to reflect PJT's
metric.
Signed-off-by: Preeti U Murthy
---
 kernel/sched/fair.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 68a6b1d..3b18f5f 100644
--- a/kernel/sched/fair.c
Additional parameters, calculated using PJT's metric, are used to aid
the decisions taken in fix_small_imbalance.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 54 +++
1 file changed, 33 insertions(+), 21 dele
e a balance between the loads of the
group and the number of tasks running on the group to decide the
busiest group in the sched_domain.
This means we will need to use PJT's metrics but with an
additional constraint.
Signed-off-by: Preeti U Murthy
---
kernel/sch
Additional parameters, calculated using PJT's metrics and its helpers,
are introduced to perform this function.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
Modify certain decisions in load_balance to use the imbalance
amount as calculated by PJT's metric.
Signed-off-by: Preeti U Murthy
---
 kernel/sched/fair.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bd
Additional parameters, calculated using PJT's metric, are used to decide
the busiest cpu in the chosen sched group.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
Additional parameters for deciding a sched group's imbalance status,
calculated using per-entity load tracking, are used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 22 --
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/kernel/
                               With_Patchset   Without_Patchset
 ------------------------------------------------------------------
 Average_number_of_migrations   0               46
 Average_number_of_records/s    9,71,114        9,45,158
With more memory-intensive workloads, a higher difference in the number of
migrations is seen without any
e a balance between the loads of the
group and the number of tasks running on the group to decide the
busiest group in the sched_domain.
This means we will need to use PJT's metrics but with an
additional constraint.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 22 +
eshold. The call should be taken if the tasks can afford to be throttled.
This is why an additional metric has been included, which can determine how
long we can tolerate tasks not being moved even if the load is low.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 16 ++
p which has to pull the tasks, which happens in find_busiest_group.
---
Preeti U Murthy (2):
sched: Prevent movement of short running tasks during load balancing
sched: Pick the apt busy sched group during load balancing
kernel/sched/fair.c | 38 +++---