Well, after looking at your code, my advice would be:
"Do no evil!" :=)
When I implemented something like this on Symbolics Lisp Machines back
in the 1980s, I limited the scheduling boost for UI actions to a
bounded period of time. Perhaps something like that is going on here?
I did this because I found that occasionally some bit of code would do
something in a UI thread (typically the mouse-handling thread) that
consumed however much CPU was available while waiting for the
INTERESTING work to be computed by another thread. An unlimited
priority boost in the UI could also make the UI very difficult to debug.
So I had a macro that could be wrapped around various components of UI
code to boost the priority of the UI thread. It would boost it for at
most a maximum period of time, after which the priority would fall
back to normal.
It would ALSO boost it for a *minimum* period of time, the idea being
that if the user had just interacted, then completing the work implied
by that interaction would probably also be of interest to them. The
equivalent here would be to boost priority for any incoming events on
the main thread, up through some number of scheduler quanta.
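To make that concrete, here is a minimal Java sketch of the same idea
in today's terms. The class name, field names, and the MIN/MAX windows
are my own invention for illustration, not values from the original
macro:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of a bounded UI priority boost: raise the UI thread's priority
 * when a user event arrives, keep it raised for at least MIN_BOOST_MS
 * after the most recent event (so follow-on work stays responsive), but
 * drop back to normal no later than MAX_BOOST_MS after the boost began.
 */
public class BoundedUiBoost {
    private static final long MIN_BOOST_MS = 50;   // illustrative values,
    private static final long MAX_BOOST_MS = 500;  // not from the original macro

    private final Thread uiThread;
    private final int normalPriority;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private long boostStartedAt = 0;   // 0 == not currently boosted
    private long unboostDeadline = 0;

    public BoundedUiBoost(Thread uiThread) {
        this.uiThread = uiThread;
        this.normalPriority = uiThread.getPriority();
    }

    /** Call this whenever a UI event (touch, key, mouse) arrives. */
    public synchronized void onUserInteraction() {
        long now = System.currentTimeMillis();
        if (boostStartedAt == 0) {
            boostStartedAt = now;
            uiThread.setPriority(Thread.MAX_PRIORITY);
        }
        // At least MIN after this event, never more than MAX after the start.
        unboostDeadline = Math.min(now + MIN_BOOST_MS,
                                   boostStartedAt + MAX_BOOST_MS);
        timer.schedule(this::maybeUnboost,
                       Math.max(1, unboostDeadline - now),
                       TimeUnit.MILLISECONDS);
    }

    private synchronized void maybeUnboost() {
        // A later event may have pushed the deadline out; only the last
        // scheduled check actually drops the priority back to normal.
        if (boostStartedAt != 0 &&
                System.currentTimeMillis() >= unboostDeadline) {
            uiThread.setPriority(normalPriority);
            boostStartedAt = 0;
        }
    }
}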
This all worked very well, but wasn't a panacea. The real fix was
usually to write the application better.
Another factor to figure in here is the scheduling quantum. When the
foreground "breathes", it allows the background to run. There will
always be a minimum amount of time the scheduler will allocate to
anything it does decide to run; otherwise, you'd waste too much time
switching back and forth!
Anyway, I do agree with Robert Green that giving the scheduler
explicit information to aid its policy decisions would be a good
thing. You still have to consider how to handle "exclusive mode": do
you shut out non-foreground tasks entirely, even when the foreground
is idle? If you don't, a background task may end up blocking the
foreground for up to a scheduling quantum once the foreground has work
again.
On Apr 19, 3:32 pm, Mark Murphy <mmur...@commonsware.com> wrote:
We were told that, as of Android 1.6, background processes were put in a
Linux process scheduling class that limited how much CPU they would use.
A few weeks ago, I ran a benchmark test that seemed to validate this claim.
I have run more tests, and I am no longer confident in my earlier
conclusion. I can get a background process to significantly impact the
foreground process, more than would seem to be possible if the
background process were, indeed, CPU-limited.
Details, including sample code, can be found in the issue I opened that
was promptly closed:
http://code.google.com/p/android/issues/detail?id=7844
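For illustration, here is a rough single-process approximation of that
kind of test. It is not the sample code from the issue: it uses a
background-priority thread rather than a separate process, on the
assumption that THREAD_PRIORITY_BACKGROUND also places the thread in
the background scheduling group, and on a multi-core device the two
loops may simply land on different cores, so the comparison is most
meaningful on a single core. Run it off the main thread.

import android.os.Process;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Rough probe: spin a background-priority thread against the calling
 * (foreground) thread for a while and compare how much work each got
 * done. If the background cap is in effect, the background count should
 * be a small fraction of the foreground count.
 */
public class BackgroundCapProbe {

    public static String run(long durationMs) {
        final AtomicLong backgroundCount = new AtomicLong();
        final long deadline = System.currentTimeMillis() + durationMs;

        Thread background = new Thread(() -> {
            // Request background priority for this thread.
            Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
            while (System.currentTimeMillis() < deadline) {
                backgroundCount.incrementAndGet();
            }
        });
        background.start();

        // The calling thread keeps its normal priority and spins too.
        long foregroundCount = 0;
        while (System.currentTimeMillis() < deadline) {
            foregroundCount++;
        }

        try {
            background.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        return "foreground=" + foregroundCount
                + " background=" + backgroundCount.get()
                + " ratio=" + ((double) backgroundCount.get() / foregroundCount);
    }
}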
Clearly, the failed issue was my fault, for not running around screaming
about bugs in Android and not jumping to conclusions.
Anyway, if anyone else has any ideas on how we can prove whether
background processes are CPU-limited -- and if so, how come that's not
helping much -- please respond to this thread or shoot me an email
off-list if you prefer.
And, I apologize to anyone who took my prior advice regarding this CPU
utilization, as it looks like I screwed up big-time on that analysis.
--
Mark Murphy (a Commons Guy) | http://commonsware.com | http://twitter.com/commonsguy