Re: [PATCH v3 15/22] sched: log the cpu utilization at rq

2013-01-15 Thread Alex Shi
On 01/14/2013 09:59 PM, Morten Rasmussen wrote:
> On Fri, Jan 11, 2013 at 03:30:30AM +0000, Alex Shi wrote:
>> On 01/10/2013 07:40 PM, Morten Rasmussen wrote:
>  #undef P64
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ee015b8..7bfbd69 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>  
>  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>  {
> + u32 period;
>   __update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
>   __update_tg_runnable_avg(&rq->avg, &rq->cfs);
> +
> + period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
> + rq->util = rq->avg.runnable_avg_sum * 100 / period;
>>> The existing tg->runnable_avg and cfs_rq->tg_runnable_contrib variables
>>> both hold
>>> rq->avg.runnable_avg_sum / rq->avg.runnable_avg_period scaled by
>>> NICE_0_LOAD (1024). Why not use one of the existing variables instead of
>>> introducing a new one?
>>
>> we want an rq variable that reflects the utilization of the cpu, not of
>> the tg
> 
> It is the same thing for the root tg. You use exactly the same variables
> for calculating rq->util as are used to calculate both tg->runnable_avg and
> cfs_rq->tg_runnable_contrib in __update_tg_runnable_avg(). The only
> difference is that you scale by 100 while __update_tg_runnable_avg()
> scales by NICE_0_LOAD.

yes, the root tg->runnable_avg has the same meaning, but a normal tg's
does not; more importantly, it is hidden behind CONFIG_FAIR_GROUP_SCHED.
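
A rough sketch of that dependency, assuming the 3.8-era layout (the types
are stubbed and the field placement is abbreviated from memory, not
verbatim sched.h contents):

/* stub types so the sketch compiles standalone */
typedef struct { int counter; } atomic_t;
struct sched_avg { unsigned int runnable_avg_sum, runnable_avg_period; };

#ifdef CONFIG_FAIR_GROUP_SCHED
struct task_group {
	/* ... */
	atomic_t runnable_avg;	/* absent in !CONFIG_FAIR_GROUP_SCHED builds */
};
#endif

struct rq {
	/* ... */
	struct sched_avg avg;	/* always maintained on SMP builds */
	unsigned int util;	/* hence a separate, unconditional field */
};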


Re: [PATCH v3 15/22] sched: log the cpu utilization at rq

2013-01-14 Thread Morten Rasmussen
On Fri, Jan 11, 2013 at 03:30:30AM +0000, Alex Shi wrote:
> On 01/10/2013 07:40 PM, Morten Rasmussen wrote:
> >> >  #undef P64
> >> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> > index ee015b8..7bfbd69 100644
> >> > --- a/kernel/sched/fair.c
> >> > +++ b/kernel/sched/fair.c
> >> > @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
> >> >  
> >> >  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
> >> >  {
> >> > +	u32 period;
> >> > 	__update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
> >> > 	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
> >> > +
> >> > +	period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
> >> > +	rq->util = rq->avg.runnable_avg_sum * 100 / period;
> > The existing tg->runnable_avg and cfs_rq->tg_runnable_contrib variables
> > both hold
> > rq->avg.runnable_avg_sum / rq->avg.runnable_avg_period scaled by
> > NICE_0_LOAD (1024). Why not use one of the existing variables instead of
> > introducing a new one?
> 
> we want an rq variable that reflects the utilization of the cpu, not of
> the tg

It is the same thing for the root tg. You use exactly the same variables
for calculating rq->util as are used to calculate both tg->runnable_avg and
cfs_rq->tg_runnable_contrib in __update_tg_runnable_avg(). The only
difference is that you scale by 100 while __update_tg_runnable_avg()
scales by NICE_0_LOAD.
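
For the root tg the two values do track the same ratio; a standalone
sketch of the rescaling, assuming the 3.8-era __update_tg_runnable_avg()
(paraphrased from memory, with hypothetical sample numbers):

#include <stdio.h>

#define NICE_0_LOAD 1024u

int main(void)
{
	/* hypothetical decayed sample: runnable ~75% of the period */
	unsigned long long sum = 35000, period = 46700;

	/* root-tg contribution, roughly as __update_tg_runnable_avg()
	 * computes it: sum scaled by NICE_0_LOAD over period + 1 */
	unsigned int tg_contrib = sum * NICE_0_LOAD / (period + 1);

	/* what this patch stores in rq->util */
	unsigned int util = sum * 100 / period;

	/* same ratio on two scales: util ~= tg_contrib * 100 / 1024 */
	printf("tg_contrib=%u util=%u%% rescaled=%u%%\n",
	       tg_contrib, util, tg_contrib * 100 / NICE_0_LOAD);
	return 0;
}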

Morten

> -- 
> Thanks Alex


Re: [PATCH v3 15/22] sched: log the cpu utilization at rq

2013-01-10 Thread Alex Shi
On 01/10/2013 07:40 PM, Morten Rasmussen wrote:
>> >  #undef P64
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index ee015b8..7bfbd69 100644
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>> >  
>> >  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>> >  {
>> > +  u32 period;
>> >    __update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
>> >    __update_tg_runnable_avg(&rq->avg, &rq->cfs);
>> > +
>> > +  period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
>> > +  rq->util = rq->avg.runnable_avg_sum * 100 / period;
> The existing tg->runnable_avg and cfs_rq->tg_runnable_contrib variables
> both hold
> rq->avg.runnable_avg_sum / rq->avg.runnable_avg_period scaled by
> NICE_0_LOAD (1024). Why not use one of the existing variables instead of
> introducing a new one?

we want an rq variable that reflects the utilization of the cpu, not of
the tg
-- 
Thanks Alex


Re: [PATCH v3 15/22] sched: log the cpu utilization at rq

2013-01-10 Thread Morten Rasmussen
On Sat, Jan 05, 2013 at 08:37:44AM +0000, Alex Shi wrote:
> The cpu's utilization measures how busy the cpu is:
> util = cpu_rq(cpu)->avg.runnable_avg_sum
> / cpu_rq(cpu)->avg.runnable_avg_period;
> 
> Since util is never more than 1, we use its percentage value in later
> calculations, and set FULL_UTIL to 99%.
> 
> In later power-aware scheduling, we care about how busy the cpu is,
> not the weight of its load. Power consumption correlates with busy
> time, not with load weight.
> 
> Signed-off-by: Alex Shi 
> ---
>  kernel/sched/debug.c | 1 +
>  kernel/sched/fair.c  | 4 ++++
>  kernel/sched/sched.h | 4 ++++
>  3 files changed, 9 insertions(+)
> 
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 2cd3c1b..e4035f7 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -318,6 +318,7 @@ do { \
>  
>   P(ttwu_count);
>   P(ttwu_local);
> + P(util);
>  
>  #undef P
>  #undef P64
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ee015b8..7bfbd69 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>  
>  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>  {
> + u32 period;
>   __update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
>   __update_tg_runnable_avg(&rq->avg, &rq->cfs);
> +
> + period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
> + rq->util = rq->avg.runnable_avg_sum * 100 / period;

The existing tg->runnable_avg and cfs_rq->tg_runnable_contrib variables
both hold
rq->avg.runnable_avg_sum / rq->avg.runnable_avg_period scaled by
NICE_0_LOAD (1024). Why not use one of the existing variables instead of
introducing a new one?

Morten

>  }
>  
>  /* Add the load generated by se into cfs_rq's child load-average */
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 66b08a1..3c6e803 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -350,6 +350,9 @@ extern struct root_domain def_root_domain;
>  
>  #endif /* CONFIG_SMP */
>  
> +/* Take the cpu as fully loaded if its percentage util reaches 99 */
> +#define FULL_UTIL	99
> +
>  /*
>   * This is the main, per-CPU runqueue data structure.
>   *
> @@ -481,6 +484,7 @@ struct rq {
>  #endif
>  
>   struct sched_avg avg;
> + unsigned int util;
>  };
>  
>  static inline int cpu_of(struct rq *rq)
> -- 
> 1.7.12
> 


[PATCH v3 15/22] sched: log the cpu utilization at rq

2013-01-05 Thread Alex Shi
The cpu's utilization measures how busy the cpu is:
util = cpu_rq(cpu)->avg.runnable_avg_sum
/ cpu_rq(cpu)->avg.runnable_avg_period;

Since util is never more than 1, we use its percentage value in later
calculations, and set FULL_UTIL to 99%.

In later power-aware scheduling, we care about how busy the cpu is,
not the weight of its load. Power consumption correlates with busy
time, not with load weight.
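
For illustration, a minimal user-space sketch of the computation added to
fair.c below (the sample values are hypothetical; the kernel reads them
from struct sched_avg):

#include <stdio.h>

int main(void)
{
	/* hypothetical decayed sums from struct sched_avg */
	unsigned int runnable_avg_sum    = 23000;	/* busy time */
	unsigned int runnable_avg_period = 46700;	/* total time */

	/* guard against division by zero early after boot */
	unsigned int period = runnable_avg_period ? runnable_avg_period : 1;
	unsigned int util   = runnable_avg_sum * 100 / period;

	printf("cpu util = %u%%\n", util);	/* prints 49 here */
	return 0;
}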

Signed-off-by: Alex Shi 
---
 kernel/sched/debug.c | 1 +
 kernel/sched/fair.c  | 4 ++++
 kernel/sched/sched.h | 4 ++++
 3 files changed, 9 insertions(+)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 2cd3c1b..e4035f7 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -318,6 +318,7 @@ do { \
 
P(ttwu_count);
P(ttwu_local);
+   P(util);
 
 #undef P
 #undef P64
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ee015b8..7bfbd69 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 
 static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 {
+   u32 period;
	__update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
+
+   period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
+   rq->util = rq->avg.runnable_avg_sum * 100 / period;
 }
 
 /* Add the load generated by se into cfs_rq's child load-average */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 66b08a1..3c6e803 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -350,6 +350,9 @@ extern struct root_domain def_root_domain;
 
 #endif /* CONFIG_SMP */
 
+/* Take the cpu as fully loaded if its percentage util reaches 99 */
+#define FULL_UTIL	99
+
 /*
  * This is the main, per-CPU runqueue data structure.
  *
@@ -481,6 +484,7 @@ struct rq {
 #endif
 
struct sched_avg avg;
+   unsigned int util;
 };
 
 static inline int cpu_of(struct rq *rq)
-- 
1.7.12
