On 6/18/2013 10:47 AM, David Lang wrote:


> It's bad enough trying to guess the needs of the processes, but if you also
> are reduced to guessing the capabilities of the cores, how can anything be
> made to work?

btw one way to look at this is to assume that (with some minimal hinting)
the CPU driver will do the right thing and get you just about the best
performance you can get (that is appropriate for the task at hand)...
and to not do anything in the scheduler proactively.

Now for big.LITTLE and other temporary or permanent asymmetries, we may want
to have a "max performance level" type indicator, and that's fair enough
(and this can be dynamic, since for thermal reasons it can change over time,
but on a somewhat slower timescale).
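
Purely as an illustration of what such an indicator could look like (nothing
that exists today; the names and the 0..1024 scale are invented for this
sketch), think of a per-cpu value that the CPU driver owns and updates on
thermal events, and that the scheduler only ever reads:

/*
 * Hypothetical sketch: a per-cpu "max performance level" owned by the
 * CPU driver.  The driver lowers/raises it as the thermal budget changes
 * (on a slow timescale); the scheduler only reads it when comparing cpus.
 */
#include <linux/percpu.h>
#include <linux/atomic.h>

static DEFINE_PER_CPU(atomic_t, cpu_max_perf) = ATOMIC_INIT(1024);

/* driver side: called when the thermal/power budget changes */
void cpu_driver_set_max_perf(int cpu, int level)
{
	atomic_set(&per_cpu(cpu_max_perf, cpu), level);
}

/* scheduler side: cheap read when comparing candidate cpus */
int cpu_max_perf_level(int cpu)
{
	return atomic_read(&per_cpu(cpu_max_perf, cpu));
}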


the hints I have in mind are not all that complex; the biggest issues we have
today are around task migration (the task migrates to a cold cpu... so with a
simple notifier chain on the new cpu as it accepts a task, we can bump it up)
and real time tasks (again, a simple notifier chain to get you to a
predictably high performance level), and with those two in place we're a long
way better off than we are today in terms of actual problems.
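
To make the notifier idea concrete, here is a minimal sketch using the stock
notifier API; the chain name, the two events and the driver callback are all
hypothetical, and the cpufreq/P-state driver would be the subscriber:

#include <linux/notifier.h>
#include <linux/printk.h>

/* hypothetical events raised by the scheduler */
#define SCHED_PERF_MIGRATE	0	/* task just landed on a (cold) cpu */
#define SCHED_PERF_RT		1	/* real time task about to run */

static ATOMIC_NOTIFIER_HEAD(sched_perf_chain);

/* the CPU driver registers itself here at init time */
int sched_perf_register(struct notifier_block *nb)
{
	return atomic_notifier_chain_register(&sched_perf_chain, nb);
}

/* called by the scheduler at the two points described above */
void sched_perf_notify(unsigned long event, int cpu)
{
	atomic_notifier_call_chain(&sched_perf_chain, event, &cpu);
}

/* example subscriber in a cpufreq/P-state driver */
static int perf_boost_notify(struct notifier_block *nb,
			     unsigned long event, void *data)
{
	int cpu = *(int *)data;

	/* driver specific: request a high performance state on 'cpu',
	 * e.g. set_target_pstate(cpu, MAX_PSTATE) -- hypothetical */
	pr_debug("perf boost on cpu %d for event %lu\n", cpu, event);
	return NOTIFY_OK;
}

static struct notifier_block perf_boost_nb = {
	.notifier_call = perf_boost_notify,
};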

For all the talk of ondemand (as ARM still uses that today)... that governor
puts you in either the lowest or the highest frequency over 95% of the time.
Other, non-cpufreq solutions like the one on Intel are a bit more advanced
(and will grow more so over time), but even there, in the grand scheme of
things, the scheduler shouldn't have to care anymore once those two
notifiers are in place.

