On Thu, Apr 28, 2016 at 11:19:19AM +0200, Peter Zijlstra wrote:
> On Tue, Apr 05, 2016 at 12:12:30PM +0800, Yuyang Du wrote:
> > Rename scale_load() and scale_load_down() to user_to_kernel_load()
> > and kernel_to_user_load() respectively, to allow the names to bear
> > what they are really about.
>
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -189,7 +189,7 @@ static void __update_inv_weight(struct load_weight *lw)
> >  	if (likely(lw->inv_weight))
> >  		return;
> >
> > -	w = scale_load_down(lw->weight);
> > +	w = kernel_to_user_load(lw->weight);
> >
> >  	if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
> >  		lw->inv_weight = 1;
> > @@ -213,7 +213,7 @@ static void __update_inv_weight(struct load_weight *lw)
> >   */
> >  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
> >  {
> > -	u64 fact = scale_load_down(weight);
> > +	u64 fact = kernel_to_user_load(weight);
> >  	int shift = WMULT_SHIFT;
> >
> >  	__update_inv_weight(lw);
[snip]

> Except these 3 really are not about user/kernel visible fixed point
> ranges _at_all_... :/

But aren't the above two falling back to user fixed-point precision? And
isn't the reason that we can't do this multiply/divide efficiently with
the increased fixed-point range used for kernel load?
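
To make that concrete, here is a minimal user-space sketch of the
reciprocal trick that __update_inv_weight()/__calc_delta() use to turn
the divide into a multiply-and-shift. The WMULT_* constants mirror
kernel/sched/fair.c; everything else (calc(), the sample values) is
illustrative, not the kernel code:

	#include <stdint.h>
	#include <stdio.h>

	#define WMULT_CONST	(~0U)	/* reciprocal scale, ~2^32 */
	#define WMULT_SHIFT	32
	#define EXTRA_RES	10	/* kernel vs. user resolution */

	/* delta_exec * weight / base, with the divide precomputed */
	static uint64_t calc(uint64_t delta_exec, uint32_t weight,
			     uint32_t base)
	{
		uint32_t inv = WMULT_CONST / base;	/* ~2^32 / base */

		/* (delta * weight * inv) >> 32 ~= delta * weight / base */
		return (delta_exec * weight * (uint64_t)inv) >> WMULT_SHIFT;
	}

	int main(void)
	{
		uint64_t delta = 4000000;	/* 4ms of runtime, in ns */
		uint32_t w_user = 88761;	/* nice -20 weight, user res */
		uint32_t w_kern = w_user << EXTRA_RES;

		/* user resolution: inv keeps ~16 significant bits */
		printf("user:   inv=%u delta'=%llu\n", WMULT_CONST / w_user,
		       (unsigned long long)calc(delta, 1024, w_user));

		/*
		 * Kernel resolution: base is 10 bits bigger, so inv is
		 * left with only ~6 significant bits (47 here), and the
		 * result drifts from 46146 above to 45898.
		 */
		printf("kernel: inv=%u delta'=%llu\n", WMULT_CONST / w_kern,
		       (unsigned long long)calc(delta, 1024 << EXTRA_RES, w_kern));

		return 0;
	}

If I read it right, that is the efficiency argument: keeping w and
inv_weight at user resolution leaves both within 32 bits, so
__calc_delta() can stick to 32x32->64 and mul_u64_u32_shr()-style
multiplies, whereas the extra 10 bits of kernel resolution would cost
the reciprocal those same 10 bits of precision (about 0.5% error in the
sketch above) and the operands would no longer fit the cheap multiplies.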