Re: [PATCH v3 07/22] sched: set initial load avg of new forked task

2013-01-10 Thread Alex Shi
On 01/11/2013 01:10 PM, Preeti U Murthy wrote:
>> 	update_curr(cfs_rq);
>> -	enqueue_entity_load_avg(cfs_rq, se, flags & ENQUEUE_WAKEUP);
>> +	enqueue_entity_load_avg(cfs_rq, se, flags);
>> 	account_entity_enqueue(cfs_rq, se);
>> 	update_cfs_shares(cfs_rq);
>>
> I had seen in my experiments that forked tasks with an initial load of
> 0 would adversely affect the runqueue lengths. Since the load of these
> tasks takes some time to pick up, the cpus on which the forked tasks
> are scheduled could become candidates for "dst_cpu" many times, and
> their runqueue lengths increase considerably.
> 
> This patch solves the issue by making forked tasks contribute to the
> runqueue load right from the start.
> 
> Reviewed-by: Preeti U Murthy
> 

Thanks for review, Preeti! :)


-- 
Thanks Alex


Re: [PATCH v3 07/22] sched: set initial load avg of new forked task

2013-01-10 Thread Preeti U Murthy
On 01/05/2013 02:07 PM, Alex Shi wrote:
> A new task has no runnable sum at its first runnable time, which makes
> burst forking select only a few idle cpus to put the tasks on.
> Set the initial load avg of a newly forked task to its load weight to
> resolve this issue.
> 
> Signed-off-by: Alex Shi 
> ---
>  include/linux/sched.h |  1 +
>  kernel/sched/core.c   |  2 +-
>  kernel/sched/fair.c   | 11 +--
>  3 files changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 206bb08..fb7aab5 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1069,6 +1069,7 @@ struct sched_domain;
>  #else
>  #define ENQUEUE_WAKING   0
>  #endif
> +#define ENQUEUE_NEWTASK  8
> 
>  #define DEQUEUE_SLEEP	1
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 66c1718..66ce1f1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1705,7 +1705,7 @@ void wake_up_new_task(struct task_struct *p)
>  #endif
> 
>   rq = __task_rq_lock(p);
> - activate_task(rq, p, 0);
> + activate_task(rq, p, ENQUEUE_NEWTASK);
>   p->on_rq = 1;
>   trace_sched_wakeup_new(p, true);
>   check_preempt_curr(rq, p, WF_FORK);
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 895a3f4..5c545e4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1503,8 +1503,9 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>  /* Add the load generated by se into cfs_rq's child load-average */
>  static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
> struct sched_entity *se,
> -   int wakeup)
> +   int flags)
>  {
> + int wakeup = flags & ENQUEUE_WAKEUP;
>   /*
>* We track migrations using entity decay_count <= 0, on a wake-up
>* migration we use a negative decay count to track the remote decays
> @@ -1538,6 +1539,12 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>   update_entity_load_avg(se, 0);
>   }
> 
> + /*
> +  * set the initial load avg of a new task to its load weight
> +  * so that a fork burst does not make a few cpus too heavy
> +  */
> + if (flags & ENQUEUE_NEWTASK)
> + se->avg.load_avg_contrib = se->load.weight;
>   cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
>   /* we force update consideration on load-balancer moves */
>   update_cfs_rq_blocked_load(cfs_rq, !wakeup);
> @@ -1701,7 +1708,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>* Update run-time statistics of the 'current'.
>*/
>   update_curr(cfs_rq);
> - enqueue_entity_load_avg(cfs_rq, se, flags & ENQUEUE_WAKEUP);
> + enqueue_entity_load_avg(cfs_rq, se, flags);
>   account_entity_enqueue(cfs_rq, se);
>   update_cfs_shares(cfs_rq);
> 
I had seen in my experiments that forked tasks with an initial load of
0 would adversely affect the runqueue lengths. Since the load of these
tasks takes some time to pick up, the cpus on which the forked tasks
are scheduled could become candidates for "dst_cpu" many times, and
their runqueue lengths increase considerably.

This patch solves the issue by making forked tasks contribute to the
runqueue load right from the start.
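
To put a toy model behind that observation, here is a small user-space
sketch (plain C, not kernel code; the two-cpu setup, the 1024 weight
per established task, and the rule "dst_cpu is the cpu with the smaller
runnable_load_avg" are simplifying assumptions). A runqueue full of
zero-contribution forks keeps looking underloaded, so it keeps being
chosen as dst_cpu and its length grows:

#include <stdio.h>

struct toy_rq {
	int nr_running;         /* runqueue length */
	long runnable_load_avg; /* what the balancer compares */
};

int main(void)
{
	/* cpu0: 4 established tasks (1024 each); cpu1: 4 fresh forks (0 each) */
	struct toy_rq rq[2] = {
		{ .nr_running = 4, .runnable_load_avg = 4096 },
		{ .nr_running = 4, .runnable_load_avg = 0 },
	};

	for (int round = 0; round < 2; round++) {
		/* the cpu with the smaller load average becomes dst_cpu */
		int dst = rq[1].runnable_load_avg < rq[0].runnable_load_avg;
		int src = !dst;

		/* pull one established task (weight 1024) into dst_cpu */
		rq[src].nr_running--;
		rq[src].runnable_load_avg -= 1024;
		rq[dst].nr_running++;
		rq[dst].runnable_load_avg += 1024;

		printf("round %d: cpu0 len=%d avg=%ld | cpu1 len=%d avg=%ld\n",
		       round, rq[0].nr_running, rq[0].runnable_load_avg,
		       rq[1].nr_running, rq[1].runnable_load_avg);
	}
	return 0;
}

After two rounds cpu1 holds 6 of the 8 tasks while the load averages
look equal; with the patch the forks would have contributed their
weight up front and cpu1 would never have appeared underloaded.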

Reviewed-by: Preeti U Murthy



[PATCH v3 07/22] sched: set initial load avg of new forked task

2013-01-05 Thread Alex Shi
A new task has no runnable sum at its first runnable time, which makes
burst forking select only a few idle cpus to put the tasks on.
Set the initial load avg of a newly forked task to its load weight to
resolve this issue.

Signed-off-by: Alex Shi 
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   |  2 +-
 kernel/sched/fair.c   | 11 +--
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 206bb08..fb7aab5 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1069,6 +1069,7 @@ struct sched_domain;
 #else
 #define ENQUEUE_WAKING 0
 #endif
+#define ENQUEUE_NEWTASK	8
 
 #define DEQUEUE_SLEEP  1
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 66c1718..66ce1f1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1705,7 +1705,7 @@ void wake_up_new_task(struct task_struct *p)
 #endif
 
rq = __task_rq_lock(p);
-   activate_task(rq, p, 0);
+   activate_task(rq, p, ENQUEUE_NEWTASK);
p->on_rq = 1;
trace_sched_wakeup_new(p, true);
check_preempt_curr(rq, p, WF_FORK);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 895a3f4..5c545e4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1503,8 +1503,9 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 /* Add the load generated by se into cfs_rq's child load-average */
 static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
  struct sched_entity *se,
- int wakeup)
+ int flags)
 {
+   int wakeup = flags & ENQUEUE_WAKEUP;
/*
 * We track migrations using entity decay_count <= 0, on a wake-up
 * migration we use a negative decay count to track the remote decays
@@ -1538,6 +1539,12 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
update_entity_load_avg(se, 0);
}
 
+   /*
+    * set the initial load avg of a new task to its load weight
+    * so that a fork burst does not make a few cpus too heavy
+    */
+   if (flags & ENQUEUE_NEWTASK)
+   se->avg.load_avg_contrib = se->load.weight;
cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
/* we force update consideration on load-balancer moves */
update_cfs_rq_blocked_load(cfs_rq, !wakeup);
@@ -1701,7 +1708,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 * Update run-time statistics of the 'current'.
 */
update_curr(cfs_rq);
-   enqueue_entity_load_avg(cfs_rq, se, flags & ENQUEUE_WAKEUP);
+   enqueue_entity_load_avg(cfs_rq, se, flags);
account_entity_enqueue(cfs_rq, se);
update_cfs_shares(cfs_rq);
 
-- 
1.7.12
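
For readers following along outside the kernel tree, the burst-fork
effect the changelog describes can be reproduced with a minimal
user-space sketch (plain C; the cpu count, the NICE_0-style weight of
1024, and the pick-least-loaded placement rule are simplifying
stand-ins for the real select_task_rq path, not the kernel's actual
logic):

#include <stdio.h>

#define NR_CPUS  4
#define NR_FORKS 8

static void place_burst(long init_contrib)
{
	long cpu_load[NR_CPUS] = { 0 }; /* per-cpu runnable_load_avg */
	int cpu_len[NR_CPUS] = { 0 };   /* per-cpu runqueue length */

	for (int t = 0; t < NR_FORKS; t++) {
		/* pick the cpu that currently looks least loaded */
		int best = 0;
		for (int c = 1; c < NR_CPUS; c++)
			if (cpu_load[c] < cpu_load[best])
				best = c;

		/* enqueue: the new task adds init_contrib to the load avg */
		cpu_load[best] += init_contrib;
		cpu_len[best]++;
	}

	printf("init_contrib=%4ld -> queue lengths:", init_contrib);
	for (int c = 0; c < NR_CPUS; c++)
		printf(" %d", cpu_len[c]);
	printf("\n");
}

int main(void)
{
	place_burst(0);    /* old behaviour: all 8 forks pile onto cpu 0 */
	place_burst(1024); /* patched: the burst spreads 2-2-2-2 */
	return 0;
}

With a zero initial contribution, enqueueing a task never changes how
loaded its cpu appears, so every fork in the burst lands on cpu 0;
seeding load_avg_contrib with the load weight makes the burst spread.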
