Re: [PATCH v2 00/12] sched/fair: Optimize and clean up sched averages

2016-05-02 Thread Yuyang Du
Hi,

This patch series should have no perceptible effect on load and util,
except that load's range is increased by a factor of 1024.
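
For reference, here is a minimal standalone sketch of where that factor of
1024 comes from: the extra fixed-point shift enabled by the last patch. The
names SCHED_FIXEDPOINT_SHIFT and scale_load() follow the later patches in
this series; the program itself is only illustrative, not kernel code.

#include <stdio.h>

/* 10 extra bits of weight resolution, i.e. a factor of 1024 */
#define SCHED_FIXEDPOINT_SHIFT	10
#define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)	/* user -> kernel scale */
#define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)	/* kernel -> user scale */

int main(void)
{
	long nice_0_weight = 1024;	/* user-visible weight of a nice-0 task */

	printf("user scale:   %ld\n", nice_0_weight);			/* 1024 */
	printf("kernel scale: %ld\n", scale_load(nice_0_weight));	/* 1048576 */
	printf("scaled down:  %ld\n", scale_load_down(scale_load(nice_0_weight)));
	return 0;
}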

My initial tests suggest as much; see the attached figures. The test
workloads run for 100us out of every 200us (a 50% duty cycle) and for
2000us out of every 8000us (25%). Again: fixed workload, fixed CPU,
and fixed frequency.
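
For reproducibility, below is a minimal sketch of this kind of periodic
workload: busy for run_ns out of every period_ns, pinned to one CPU. This
is an assumed stand-in for the test harness, not the actual scripts used
to produce the figures.

/* Hypothetical duty-cycle workload: busy-loop for run_ns, then sleep
 * for the rest of each period_ns, pinned to CPU 0. */
#define _GNU_SOURCE
#include <sched.h>
#include <time.h>
#include <unistd.h>

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	const long long run_ns = 100 * 1000LL;		/* e.g. run 100us... */
	const long long period_ns = 200 * 1000LL;	/* ...out of every 200us */
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);				/* fixed CPU */
	sched_setaffinity(0, sizeof(set), &set);

	for (;;) {
		long long start = now_ns();

		while (now_ns() - start < run_ns)
			;				/* busy: consume CPU */
		usleep((period_ns - run_ns) / 1000);	/* idle: rest of period */
	}
	return 0;
}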
 
In addition, of course, I believe the code should be cleaner and
more efficient after these patches.

Thanks,
Yuyang

On Tue, May 03, 2016 at 05:54:26AM +0800, Yuyang Du wrote:
> Hi Peter,
> 
> This patch series combines the previous cleanup and optimization
> series. As you and Ingo suggested, the increased kernel load scale
> is reinstated only when both 64BIT and FAIR_GROUP_SCHED are enabled.
> In addition, the changes include Vincent's fix, typo fixes, and
> changelog and comment rewording.
> 
> Thanks,
> Yuyang
> 
> Yuyang Du (12):
>   sched/fair: Optimize sum computation with a lookup table
>   sched/fair: Rename variable names for sched averages
>   sched/fair: Change the variable to hold the number of periods to
> 32bit integer
>   sched/fair: Add __always_inline compiler attribute to
> __accumulate_sum()
>   sched/fair: Optimize __update_sched_avg()
>   documentation: Add scheduler/sched-avg.txt
>   sched/fair: Generalize the load/util averages resolution definition
>   sched/fair: Remove SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE
>   sched/fair: Add introduction to the sched average metrics
>   sched/fair: Remove scale_load_down() for load_avg
>   sched/fair: Rename scale_load() and scale_load_down()
>   sched/fair: Enable increased scale for kernel load
> 
>  Documentation/scheduler/sched-avg.txt |  137 
>  include/linux/sched.h                 |   81 ++-
>  kernel/sched/core.c                   |    8 +-
>  kernel/sched/fair.c                   |  398 +
>  kernel/sched/sched.h                  |   48 ++--
>  5 files changed, 439 insertions(+), 233 deletions(-)
>  create mode 100644 Documentation/scheduler/sched-avg.txt
> 
> -- 
> 1.7.9.5


[PATCH v2 00/12] sched/fair: Optimize and clean up sched averages

2016-05-02 Thread Yuyang Du
Hi Peter,

This patch series combines the previous cleanup and optimization
series. As you and Ingo suggested, the increased kernel load scale
is reinstated only when both 64BIT and FAIR_GROUP_SCHED are enabled.
In addition, the changes include Vincent's fix, typo fixes, and
changelog and comment rewording.
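
For context, here is a sketch of how the reinstated scale would be gated.
This is illustrative and assumes the SCHED_FIXEDPOINT_SHIFT / scale_load()
names from the later patches; the authoritative definitions are in the
"Enable increased scale for kernel load" patch.

#define SCHED_FIXEDPOINT_SHIFT	10	/* fixed-point resolution, 1024 */

#if defined(CONFIG_64BIT) && defined(CONFIG_FAIR_GROUP_SCHED)
/* 64-bit arithmetic can afford 10 extra bits of weight resolution */
# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
#else
/* 32-bit: keep the user-visible resolution to avoid costly 64-bit math */
# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w)		(w)
# define scale_load_down(w)	(w)
#endif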

Thanks,
Yuyang

Yuyang Du (12):
  sched/fair: Optimize sum computation with a lookup table
  sched/fair: Rename variable names for sched averages
  sched/fair: Change the variable to hold the number of periods to
32bit integer
  sched/fair: Add __always_inline compiler attribute to
__accumulate_sum()
  sched/fair: Optimize __update_sched_avg()
  documentation: Add scheduler/sched-avg.txt
  sched/fair: Generalize the load/util averages resolution definition
  sched/fair: Remove SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE
  sched/fair: Add introduction to the sched average metrics
  sched/fair: Remove scale_load_down() for load_avg
  sched/fair: Rename scale_load() and scale_load_down()
  sched/fair: Enable increased scale for kernel load

 Documentation/scheduler/sched-avg.txt |  137 
 include/linux/sched.h                 |   81 ++-
 kernel/sched/core.c                   |    8 +-
 kernel/sched/fair.c                   |  398 +
 kernel/sched/sched.h                  |   48 ++--
 5 files changed, 439 insertions(+), 233 deletions(-)
 create mode 100644 Documentation/scheduler/sched-avg.txt

-- 
1.7.9.5