On 09/08/2012 09:59, Jussi Kalliokoski wrote:
Hello David,
Hi Jussi,
On Thu, Aug 9, 2012 at 3:54 PM, David Bruant <bruan...@gmail.com
<mailto:bruan...@gmail.com>> wrote:
* The last source is your own content competing with itself for CPU.
*snip*
One question I have is whether different parts of your own content
(like different workers) should declare which priority they should
be given, or whether the application should be written to be
resistant to high CPU stress (e.g. doing little work besides the
audio work).
I'm sorry, not entirely sure I follow... :)
No worries, it wasn't really clear, I admit :-)
Your proposal defines an API between the developer and the system that
is based on assigning a priority and letting the system decide what to
do with it. I was suggesting that more (not all, but more) of the
responsibility should be put on the developer's shoulders rather than
letting the computer guess.
Since the only relevant case for priorities is the third one, I'd
like to question the relevance of the use case.
Is implementing per-browsing-context web worker priority worth the
result? Will we really be able to notice an improvement in audio
quality that often?
Yes. Especially on mobile devices it makes a world of difference:
for example, on a single-core phone you may have an audio app in the
foreground and a Twitter client in the background. If the Twitter
client decides to update its content, the audio will most likely
glitch, and that is most likely not what the user wanted.
We're back to the case of two pieces of content competing with each
other. An API shouldn't be able to influence that, for the reason
cited in the previous message (which you said you were worried about).
I know Firefox is currently doing work to reduce the work done by
background tabs (for instance, short setTimeouts are clamped to 1s
when the tab is in the background; other work is ongoing).
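As a toy model of that clamping behavior (the 1s minimum is Firefox's figure; the helper and constant names are made up for illustration):

```javascript
// Toy model of background-tab timer clamping: short timeouts are
// stretched to a minimum delay (1s in Firefox) when the tab is hidden.
const BACKGROUND_MIN_DELAY_MS = 1000;

// Hypothetical helper: the delay a requested setTimeout effectively gets.
function effectiveDelay(requestedMs, tabHidden) {
  return tabHidden ? Math.max(requestedMs, BACKGROUND_MIN_DELAY_MS)
                   : requestedMs;
}

console.log(effectiveDelay(50, false)); // 50: foreground timers run as asked
console.log(effectiveDelay(50, true));  // 1000: background timers are clamped
```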
Prioritizing between background and foreground tasks is an
implementation issue, not an issue that should require a web content
API, IMHO.
Once again, the only use case being discussed here is, most likely,
content competing against itself for CPU.
Here's the discussion thread on AudioWG [1] and a good article
exploring the subject of interaction between audio and the rest of the
system [2].
I haven't fully read the AudioWG thread (I will; meanwhile, if the
thread addresses my point, can you link to specific messages?), but I
have read the article.
Most of its points either don't apply to the web or are already on
the developer's shoulders.
* Blocking
=> Except for a couple of pathological exceptions (alert, prompt, sync
XHR), JavaScript has a non-blocking model.
* Poor worst-case-complexity algorithms
=> That's almost entirely on the developer's shoulders. Web platform
implementers already try to avoid such algorithms (which I've heard is
a dilemma in text-layout algorithms).
* Locking
=> The message passing model has no notion of locking.
* Memory allocation
=> Mostly on the developer's shoulders.
* Invisible things: garbage collection
=> GC could be "controlled" by a priority actually, but this needs to be
discussed with the JS engine folks.
* Page faults
=> You can't do anything about that on the web.
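On the allocation and GC points, the usual discipline in real-time audio code is to allocate all buffers up front and keep the per-callback path allocation-free, so the GC has nothing new to collect while audio is running. A minimal sketch (the function and names are illustrative, not any actual audio API):

```javascript
// Pre-allocate once, outside the real-time path, so the per-block
// code never allocates and never gives the GC new garbage to collect.
const BLOCK_SIZE = 128;
const scratch = new Float32Array(BLOCK_SIZE); // reused on every callback

// Hypothetical per-block processing: fill `out` from `input` with a
// gain, using only pre-allocated storage.
function processBlock(input, out, gain) {
  for (let i = 0; i < BLOCK_SIZE; i++) {
    scratch[i] = input[i] * gain; // no `new`, no arrays created here
  }
  out.set(scratch);
}

const input = new Float32Array(BLOCK_SIZE).fill(0.25);
const output = new Float32Array(BLOCK_SIZE);
processBlock(input, output, 2);
console.log(output[0]); // 0.5
```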
One thing that isn't explicitly written is that when doing audio in C,
you have shared memory between threads (hence locking), and my guess
is that it's a good source of problems. You don't have shared memory
in JS web workers, however. Transferables are a good step forward;
maybe a better thing to discuss would be how to move further in that
direction.
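Both points can be sketched with the global `structuredClone`, which applies the same algorithm `postMessage` uses: a plain clone gives the receiver an independent copy (no shared mutable state, hence nothing to lock), while a transfer list moves an ArrayBuffer without copying and detaches the sender's handle. (In real worker code the call would be `worker.postMessage(buf, [buf])`; the snippet below just exercises the same machinery in one place.)

```javascript
// 1) Copy semantics: postMessage structured-clones its argument, so
//    the receiver gets an independent copy -- no shared state, no locks.
const message = { gain: 0.5 };
const received = structuredClone(message); // same algorithm as postMessage
received.gain = 1.0;
console.log(message.gain); // 0.5: the sender's object is untouched

// 2) Transfer semantics: a transfer list moves the buffer instead of
//    copying it; the sender's handle is detached afterwards.
const buf = new ArrayBuffer(1024);
const moved = structuredClone(buf, { transfer: [buf] });
console.log(moved.byteLength); // 1024: the receiver owns the memory now
console.log(buf.byteLength);   // 0: the original is detached
```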
According to this article, it actually seems that the web platform is
well-suited for audio (no locks, no blocking), doesn't it?
The gain for audio is so significant
Has anyone done research on that? Do we have benchmarks, numbers? Or
is the "significant" hypothetical?
that a lot of the working group seems to think it's a good idea to
have a whole lot of (not very modular, to be honest) native DSP nodes
that can run in a priority thread just to get the audio running in a
priority thread, and I think priority-thread workers are a much
better idea.
I would be more in favor of browsers sharing with content how busy
the CPU is (in one way or another) so that the content can shut
things down itself and decide programmatically what is worth
running and what isn't.
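To make that concrete, here is an entirely hypothetical sketch (no such CPU-load signal exists in any browser today; the function, policy, and thresholds are invented for illustration) of content shedding its least important work under load:

```javascript
// Entirely hypothetical: assume the browser exposed a CPU load value
// in [0, 1]. Content could then decide programmatically what to run.
function planWork(cpuLoad, tasks) {
  // tasks: [{ name, priority }], lower priority number = more important.
  // Invented policy: under heavy load keep only the single most
  // important task; under moderate load keep the top two.
  const keep = cpuLoad > 0.8 ? 1 : cpuLoad > 0.5 ? 2 : tasks.length;
  return tasks
    .slice()                                  // don't mutate the caller's array
    .sort((a, b) => a.priority - b.priority)  // most important first
    .slice(0, keep)
    .map((t) => t.name);
}

const tasks = [
  { name: "audio", priority: 0 },      // must keep running
  { name: "visualizer", priority: 1 }, // nice to have
  { name: "analytics", priority: 2 },  // expendable
];

console.log(planWork(0.2, tasks)); // ["audio", "visualizer", "analytics"]
console.log(planWork(0.9, tasks)); // ["audio"]: everything else is shed
```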
Yes, that would be ideal. However I fear it's not good enough for audio.
Purely based on the article, it seems that the web platform does a
good job of helping developers write good real-time code (no blocking,
no locking, no built-in poor worst-case-complexity algorithms). The
other points (memory allocation, page faults) are either on the
developer's shoulders or at the system level, and priority would be
unlikely to help with them (if it does, I would be interested in
reading the related research on the topic). Priority could help with
GC (not doing it under pressure), but at the same time, GCs have
lately been undergoing tremendous improvements (incremental GC in
Chrome and now in Firefox, generational GC in Chrome and soon in FF),
so it would also need to be proven that the difference would be that
substantial.
Not having shared memory may be a bottleneck. Transferables help.
All in all, the article you linked to makes me more confident that the
web is close to being ready for real-time code.
It would be nice (a requirement?) to see actual research behind every
assumption about how a web worker priority mechanism would improve
audio quality.
David