Re: [PATCH v3 03/22] sched: fix find_idlest_group mess logical

2013-01-10 Thread Preeti U Murthy
On 01/05/2013 02:07 PM, Alex Shi wrote:
> There are 4 situations in the function:
> 1, no group allows the task;
>   so min_load = ULONG_MAX, this_load = 0, idlest = NULL
> 2, only the local group allows the task;
>   so min_load = ULONG_MAX, this_load assigned, idlest = NULL
> 3, only non-local groups allow the task;
>   so min_load assigned, this_load = 0, idlest != NULL
> 4, the local group plus another group allow the task;
>   so min_load assigned, this_load assigned, idlest != NULL
> 
> The current logic returns NULL in the first 3 scenarios,
> and still returns NULL in the 4th situation if the idlest
> group is heavier than the local group.
> 
> Actually, the groups in situations 2 and 3 are also eligible to
> host the task. And in the 4th situation, I agree with biasing
> toward the local group. Hence this patch.
> 
> Signed-off-by: Alex Shi 
> ---
>  kernel/sched/fair.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6d3a95d..3c7b09a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3181,6 +3181,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> int this_cpu, int load_idx)
>  {
>   struct sched_group *idlest = NULL, *group = sd->groups;
> + struct sched_group *this_group = NULL;
>   unsigned long min_load = ULONG_MAX, this_load = 0;
>   int imbalance = 100 + (sd->imbalance_pct-100)/2;
> 
> @@ -3215,14 +3216,19 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> 
>   if (local_group) {
>   this_load = avg_load;
> - } else if (avg_load < min_load) {
> + this_group = group;
> + }
> + if (avg_load < min_load) {
>   min_load = avg_load;
>   idlest = group;
>   }
>   } while (group = group->next, group != sd->groups);
> 
> - if (!idlest || 100*this_load < imbalance*min_load)
> - return NULL;
> + if (this_group && idlest != this_group)
> + /* Bias toward our group again */
> + if (100*this_load < imbalance*min_load)
> + idlest = this_group;
> +
>   return idlest;
>  }
> 
Reviewed-by: Preeti U Murthy

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



[PATCH v3 03/22] sched: fix find_idlest_group mess logical

2013-01-05 Thread Alex Shi
There are 4 situations in the function:
1, no group allows the task;
so min_load = ULONG_MAX, this_load = 0, idlest = NULL
2, only the local group allows the task;
so min_load = ULONG_MAX, this_load assigned, idlest = NULL
3, only non-local groups allow the task;
so min_load assigned, this_load = 0, idlest != NULL
4, the local group plus another group allow the task;
so min_load assigned, this_load assigned, idlest != NULL

The current logic returns NULL in the first 3 scenarios,
and still returns NULL in the 4th situation if the idlest
group is heavier than the local group.

Actually, the groups in situations 2 and 3 are also eligible to
host the task. And in the 4th situation, I agree with biasing
toward the local group. Hence this patch.

Signed-off-by: Alex Shi 
---
 kernel/sched/fair.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d3a95d..3c7b09a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3181,6 +3181,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
  int this_cpu, int load_idx)
 {
struct sched_group *idlest = NULL, *group = sd->groups;
+   struct sched_group *this_group = NULL;
unsigned long min_load = ULONG_MAX, this_load = 0;
int imbalance = 100 + (sd->imbalance_pct-100)/2;
 
@@ -3215,14 +3216,19 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 
if (local_group) {
this_load = avg_load;
-   } else if (avg_load < min_load) {
+   this_group = group;
+   }
+   if (avg_load < min_load) {
min_load = avg_load;
idlest = group;
}
} while (group = group->next, group != sd->groups);
 
-   if (!idlest || 100*this_load < imbalance*min_load)
-   return NULL;
+   if (this_group && idlest != this_group)
+   /* Bias toward our group again */
+   if (100*this_load < imbalance*min_load)
+   idlest = this_group;
+
return idlest;
 }
 
-- 
1.7.12
