For purposes of this discussion, let's leave audio capture alone, and focus 
solely on audio playback.

From reading the AudioFlinger source and experimenting with OpenSL ES code, 
here's what I believe happens under the hood:
    1. Each playback thread in AudioFlinger has a mix buffer, from which it 
writes to the appropriate sink.
    2. A mixer thread is a particular kind of playback thread with the extra 
ability to act as a "fast mixer" and send audio directly to the hardware.
    3. Each audio track is attached to a playback thread. Several tracks 
can run simultaneously on the same playback thread.
    4. There exists an AudioMixer class which also keeps a list of tracks, 
and is responsible for things like downmixing, resampling, etc.
    5. AudioMixer and MixerThread don't look like related things; they just 
both happen to have "mixer" in their names. (I've tried to sketch this 
mental model in code right after this list.)
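
To make that mental model concrete, here's a rough C++ sketch of the 
ownership structure I think I'm describing. All of the names below are 
mine, not AudioFlinger's, and the relationships are guesses, not facts:

    #include <cstdint>
    #include <memory>
    #include <vector>

    // One client audio stream; corresponds loosely to an AudioTrack.
    struct ClientTrack {
        std::vector<int16_t> pendingFrames;  // audio queued by the client
    };

    // Per-track processing (downmix, resample, sum); loosely AudioMixer.
    struct Mixer {
        std::vector<std::shared_ptr<ClientTrack>> tracks;
    };

    // One output thread; loosely PlaybackThread, with MixerThread as a
    // subclass that can also drive a "fast mixer" path.
    struct OutputThread {
        // cf. mActiveTracks
        std::vector<std::shared_ptr<ClientTrack>> activeTracks;
        // cf. mMixBuffer
        std::vector<int32_t> mixBuffer;
        // Does the thread even own a mixer? (questions 2 and 3 below)
        Mixer mixer;
    };

    int main() {
        // Several tracks attached to one playback thread (point 3 above).
        OutputThread thread;
        thread.mixBuffer.resize(1024);
        thread.activeTracks.push_back(std::make_shared<ClientTrack>());
        thread.activeTracks.push_back(std::make_shared<ClientTrack>());
        return 0;
    }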

Here are four closely related questions; hopefully it's okay to ask all of 
them in a single post.
    1. MixerThread derives from PlaybackThread. Is my assessment of their 
differences correct, and if not, why do both exist?
    2. What is the connection between AudioMixer, 
MixerThread, PlaybackThread and AudioTrack? In particular, when an audio 
track is playing audio (let's forget about preparation and 
initialization), what chain of function calls is invoked across them? 
Some graphical representation would be very helpful, if it's available.
    3. How do all of the above keep track of each other? In other words, do 
playback threads keep track of audio mixers that in turn keep track of 
audio tracks, etc.? What's the structure of object "ownership"?
    4. Finally, I see two interesting variables in the 
source. PlaybackThread has a variable named mMixBuffer, from which audio 
is read into the sink. PlaybackThread also has a vector 
called mActiveTracks, which contains a list of all track objects from 
which audio should be played. I read through the entire Threads.cpp, and 
unless I've missed something important, I don't see how audio from the 
active tracks eventually ends up in the mix buffer. In fact, I don't see 
a single method of PlaybackThread that reads data from the active tracks 
or writes that data into the mix buffer. Where does this magic happen? 
(I've sketched right after this list the kind of loop I was looking for.)
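
To be concrete about question 4, here is roughly the kind of loop I 
expected to find somewhere in PlaybackThread, or in something it delegates 
to. Again, this is my own illustration of what I was looking for, not 
actual AudioFlinger code:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Track {
        std::vector<int16_t> frames;  // pending client audio for one track
    };

    // Sum every active track into the mix buffer, then clamp the result
    // to 16-bit range before the buffer is written to the sink.
    void mixActiveTracks(const std::vector<Track>& activeTracks,
                         std::vector<int32_t>& mixBuffer) {
        std::fill(mixBuffer.begin(), mixBuffer.end(), 0);
        for (const Track& t : activeTracks) {
            const std::size_t n = std::min(t.frames.size(), mixBuffer.size());
            for (std::size_t i = 0; i < n; ++i) {
                mixBuffer[i] += t.frames[i];  // accumulate in 32 bits
            }
        }
        const int32_t kMin = -32768, kMax = 32767;
        for (int32_t& s : mixBuffer) {
            s = std::clamp(s, kMin, kMax);
        }
    }

    int main() {
        std::vector<Track> active(2);
        active[0].frames.assign(256, 1000);
        active[1].frames.assign(256, -250);
        std::vector<int32_t> mixBuffer(256);
        mixActiveTracks(active, mixBuffer);  // each mixBuffer[i] == 750
        return 0;
    }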

Hopefully, I'm not asking for an entire book chapter's worth of info. If 
you know of an article that covers some of this, please share a link.
