* Peter Zijlstra <pet...@infradead.org> [2010-05-31 10:33:16]:

> On Fri, 2010-04-16 at 15:58 +0200, Peter Zijlstra wrote:
> >
> > Hrmm, my brain seems muddled but I might have another solution, let me
> > ponder this for a bit..
>
> Right, so the thing I was thinking about is taking the group capacity
> into account when determining the capacity for a single cpu.
>
> Say the group contains all the SMT siblings, then use the group capacity
> (usually larger than 1024) and then distribute the capacity over the
> group members, preferring CPUs with higher individual cpu_power over
> those with less.
>
> So suppose you've got 4 siblings with cpu_power=294 each, then we assign
> capacity 1 to the first member, and the remaining 153 is insufficient,
> and thus we stop and the rest lives with 0 capacity.
>
> Now take the example that the first sibling would be running a heavy RT
> load, and its cpu_power would be reduced to say, 50, then we still got
> nearly 933 left over the others, which is still sufficient for one
> capacity, but because the first sibling is low, we'll assign it 0 and
> instead assign 1 to the second, again, leaving the third and fourth 0.

Hi Peter,

Thanks for the suggestion.
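Just to make sure I am reading the proposal correctly, the
distribution would behave something like the standalone sketch below
(userspace illustration only, not kernel code; DIV_ROUND_CLOSEST and
SCHED_LOAD_SCALE just mirror the kernel definitions):

/* Simulate distributing group capacity over SMT siblings,
 * preferring higher individual cpu_power.  Illustrative only. */
#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))
#define NR_SIBLINGS		4

int main(void)
{
	/* sibling0 runs a heavy RT load, so its power dropped to 50 */
	unsigned long cpu_power[NR_SIBLINGS] = { 50, 294, 294, 294 };
	unsigned long capacity[NR_SIBLINGS] = { 0, 0, 0, 0 };
	unsigned long group_power = 0, group_capacity;
	int i, j, best;

	for (i = 0; i < NR_SIBLINGS; i++)
		group_power += cpu_power[i];

	/* 932/1024 rounds to 1 unit of capacity for the whole group */
	group_capacity = DIV_ROUND_CLOSEST(group_power, SCHED_LOAD_SCALE);

	/* hand out one unit at a time to the strongest sibling left */
	for (i = 0; i < (int)group_capacity && i < NR_SIBLINGS; i++) {
		best = -1;
		for (j = 0; j < NR_SIBLINGS; j++)
			if (!capacity[j] &&
			    (best < 0 || cpu_power[j] > cpu_power[best]))
				best = j;
		capacity[best] = 1;
	}

	for (i = 0; i < NR_SIBLINGS; i++)
		printf("sibling%d: power=%lu capacity=%lu\n",
		       i, cpu_power[i], capacity[i]);
	return 0;
}

For the RT-loaded example this prints a capacity distribution of
{0,1,0,0}, matching your description.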
> If the group were a core group, the total would be much higher and we'd
> likely end up assigning 1 to each before we'd run out of capacity.

This is a tricky case because we are depending upon DIV_ROUND_CLOSEST
to decide whether to flag capacity as 0 or 1.  We will not have any
task movement until the capacity is depleted to quite a low value by
the RT task.  Having a threshold to flag 0/1 instead of
DIV_ROUND_CLOSEST, just as you have suggested for the power savings
case, may help here as well to move tasks to other idle cores.

> For power savings, we can lower the threshold and maybe use the maximal
> individual cpu_power in the group to base 1 capacity from.
>
> So, suppose the second example, where sibling0 has 50 and the others
> have 294, you'd end up with a capacity distribution of: {0,1,1,1}.

One challenge here is that if RT tasks run on more than one thread in
this group, we will have slightly different cpu powers.  Arranging
them from max to min and having a cutoff threshold should work.

Should we keep the RT scaling as a separate entity alongside
cpu_power, to simplify these thresholds?  Whenever we need to scale
group load by cpu power, we can take the product of cpu_power and
scale_rt_power, but in the cases where we compute capacity, we can
mark a 0 or 1 based just on whether scale_rt_power was less than
SCHED_LOAD_SCALE or not.

Alternatively, we can keep cpu_power as a product of all scaling
factors, as it is today, but also save the component scale factors
like scale_rt_power() and arch_scale_freq_power() so that they can be
used in load balance decisions.  Basically, in power save balance we
would give all threads a capacity of '1' unless the cpu_power was
reduced due to an RT task.  Similarly, in the non-power-save case, we
can flag {1,0,0,0} unless the first thread had RT scaling during the
last interval.

I am suggesting that we distinguish a reduction in cpu_power due to
architectural (hardware DVFS) reasons from one due to RT tasks, so
that it is easy to decide whether moving tasks to a sibling thread or
core can help or not.
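Roughly, the bookkeeping I have in mind for the second alternative is
something like this (again a userspace illustration only; the struct,
field, and function names are made up, not proposed interfaces):

/* Keep the RT scaling factor separate so capacity decisions can
 * look at it directly.  Illustrative only. */
#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL

struct sg_power {
	unsigned long cpu_power;	/* arch/freq scaling only */
	unsigned long rt_power;		/* saved scale_rt_power() value */
};

/* group load scaling still sees the combined product */
static unsigned long effective_power(const struct sg_power *sgp)
{
	return sgp->cpu_power * sgp->rt_power / SCHED_LOAD_SCALE;
}

/* capacity flag: 1 unless RT activity reduced this cpu during the
 * last interval (a small tolerance would avoid zeroing the capacity
 * for negligible RT activity) */
static unsigned long sg_capacity(const struct sg_power *sgp)
{
	return sgp->rt_power < SCHED_LOAD_SCALE ? 0 : 1;
}

int main(void)
{
	/* RT consumed half of the last interval on this thread */
	struct sg_power sibling = { .cpu_power = 1024, .rt_power = 512 };

	printf("effective power %lu, capacity %lu\n",
	       effective_power(&sibling), sg_capacity(&sibling));
	return 0;
}

--Vaidy

_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev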