Michal Nazarewicz <m...@google.com> writes:

> From: Michal Nazarewicz <min...@mina86.com>
>
> sa->runnable_avg_sum is of type u32, so shifting it left by NICE_0_SHIFT
> bits is performed in 32-bit arithmetic: the high-order bits are silently
> discarded before the result is widened to u64 for div_u64().  Casting
> sa->runnable_avg_sum to u64 before the shift widens it first, so no bits
> are lost.
>
> Signed-off-by: Michal Nazarewicz <min...@mina86.com>
Reviewed-by: Ben Segall <bseg...@google.com>

> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index df77c60..50f1e170 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2153,7 +2153,7 @@ static inline void __update_tg_runnable_avg(struct sched_avg *sa,
>       long contrib;
>  
>       /* The fraction of a cpu used by this cfs_rq */
> -     contrib = div_u64(sa->runnable_avg_sum << NICE_0_SHIFT,
> +     contrib = div_u64((u64)sa->runnable_avg_sum << NICE_0_SHIFT,
>                         sa->runnable_avg_period + 1);
>       contrib -= cfs_rq->tg_runnable_contrib;