Dear Experts,

I have a question about audio-video synchronization during playback of
an audio-visual file on Gingerbread. From the sources, we can observe
that the video sub-system reads a timestamp from the audio sub-system
to decide whether to delay, render, or drop a frame. I am referring to
the AwesomePlayer::onVideoEvent method, in which the TimeSource ts is
set to mTimeSource, which is simply the mAudioPlayer object.
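
To make the comparison concrete, here is a small, self-contained
sketch of the lateness check as I understand it from the Gingerbread
sources. The function name, types and the 40 ms / 10 ms thresholds are
paraphrased from memory and meant as an illustration, not as the
authoritative AOSP code:

#include <cstdint>
#include <cstdio>

enum class FrameAction { Render, Drop, Postpone };

// 'nowUs' models the playback clock taken from the audio side
// (ts->getRealTimeUs(), adjusted by the real-time/media-time delta);
// 'frameTimeUs' is the video frame's presentation timestamp.
FrameAction decideFrameAction(int64_t nowUs, int64_t frameTimeUs) {
    const int64_t latenessUs = nowUs - frameTimeUs;

    if (latenessUs > 40000) {
        // More than ~40 ms late: drop the frame to catch up with audio.
        return FrameAction::Drop;
    }
    if (latenessUs < -10000) {
        // More than ~10 ms early: repost the video event and retry later.
        return FrameAction::Postpone;
    }
    // Close enough to the audio clock: render the frame now.
    return FrameAction::Render;
}

int main() {
    printf("%d\n", static_cast<int>(decideFrameAction(1050000, 1000000)));  // Drop
    printf("%d\n", static_cast<int>(decideFrameAction(1000000, 1020000)));  // Postpone
    printf("%d\n", static_cast<int>(decideFrameAction(1000000, 1005000)));  // Render
}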

The method used to retrieve the time is getRealTimeUs, which returns a
timestamp derived from the last packet read from the audio decoder in
fillBuffer. In practice, although that packet has been read, it may not
yet have been rendered because of buffering in the system, so the time
returned by this call may not exactly match the audio sample actually
being played out.
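
For reference, the following is a rough, compilable model of how I
read getRealTimeUs: the clock is derived from the frames consumed
through fillBuffer (i.e., handed to the AudioSink), pulled back by a
static latency estimate, rather than from the frames that have
actually reached the speaker. Field names and numbers here are
illustrative, not copied from AOSP:

#include <cstdint>
#include <cstdio>

struct AudioClockModel {
    int64_t numFramesConsumed = 0;  // frames pulled through fillBuffer so far
    int64_t latencyUs = 0;          // AudioSink/AudioTrack latency estimate
    int32_t sampleRate = 44100;

    // Called whenever fillBuffer hands 'frames' frames to the sink.
    void onFramesConsumed(int64_t frames) { numFramesConsumed += frames; }

    // Analogue of getRealTimeUs(): position implied by data consumption,
    // offset by the latency estimate. If the sink buffers more data than
    // 'latencyUs' accounts for, this clock runs ahead of the sample that
    // is actually audible at that instant.
    int64_t getRealTimeUs() const {
        return -latencyUs + (numFramesConsumed * 1000000) / sampleRate;
    }
};

int main() {
    AudioClockModel clock;
    clock.latencyUs = 80000;        // assume an 80 ms sink latency
    clock.onFramesConsumed(44100);  // one second of audio consumed
    printf("reported time: %lld us\n", (long long)clock.getRealTimeUs());
}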

Hence, my questions are:

1) Why did Android choose this as the audio time reference, rather
than the time reference of the underlying HAL or audio driver?

2) If the rationale is to have a uniform reference across multiple
HALs or lower-level implementations, what design assumptions were
made?

3) AudioPlayer::fillBuffer follows more of a "pull" model, where it is
invoked from a callback. If that is the case, at what intervals
(minimum and maximum allowed) should this callback, and hence
fillBuffer, be invoked? (The rough arithmetic I have in mind is
sketched below.)
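
To make question 3 concrete, this is the back-of-the-envelope
arithmetic I am assuming: if each AudioSink callback asks fillBuffer
for one sink buffer of data, the nominal callback period is
bufferFrames / sampleRate. The buffer size and sample rate below are
made-up examples, not values from any particular HAL:

#include <cstdint>
#include <cstdio>

int main() {
    const int32_t sampleRate   = 44100;  // Hz (assumed)
    const int32_t bufferFrames = 4096;   // frames per callback (assumed)

    // Nominal interval between successive fillBuffer invocations.
    const int64_t periodUs =
        (static_cast<int64_t>(bufferFrames) * 1000000) / sampleRate;

    printf("~%lld us (~%.1f ms) between callbacks\n",
           (long long)periodUs, periodUs / 1000.0);
}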

Your answers would help me understand and analyze the system better. I
look forward to your expert views and comments.

Many thanks in advance.

Best Regards,
Ganesh
