On Wed, 30 Jul 2008 03:50:57 -0400, Alexander Benz  
<[EMAIL PROTECTED]> wrote:
[...]
>> Actually it's worse than this.  The AVFrame must also be duplicated,
>> because some codecs re-use the pixel buffers.  Basically you can
>> only rely on having a single AVFrame decoded at a time.  This is
>> only an issue for some codecs, though.
[...]
>
> So basically, it's best to avoid doing frame fetching and decoding
> in parallel.
> But thanks for the insight! :-)

Actually, the design I chose for my needs has one thread that reads
packets, decodes them, and copies the resulting video frames; a second
thread that runs the YUV->RGB conversion and uploads each frame to
video memory to be displayed; and a third thread that feeds the audio
device.  In the first thread, I also check whether either the video or
audio queue is running dry, and conditionally copy packets into a
temporary queue so that I can find more of the depleted stream.
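Roughly, the first thread looks like this.  (A minimal sketch using
today's send/receive decoding API for illustration; frame_queue_push()
and packet_queue_push() are hypothetical placeholders, and the
queue-depletion check is omitted.)

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Hypothetical thread-safe queues -- not part of FFmpeg. */
extern void frame_queue_push(AVFrame *f);
extern void packet_queue_push(AVPacket *p);

static void demux_decode_thread(AVFormatContext *fmt,
                                AVCodecContext *vdec,
                                int video_idx, int audio_idx)
{
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == video_idx) {
            if (avcodec_send_packet(vdec, pkt) >= 0) {
                while (avcodec_receive_frame(vdec, frame) >= 0) {
                    /* Duplicate before queueing: some codecs re-use
                     * their pixel buffers for the next frame.
                     * av_frame_clone() takes a new reference, which
                     * keeps this buffer alive. */
                    frame_queue_push(av_frame_clone(frame));
                }
            }
        } else if (pkt->stream_index == audio_idx) {
            /* Give the audio thread its own reference. */
            packet_queue_push(av_packet_clone(pkt));
        }
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    av_frame_free(&frame);
}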

This requires more RAM and more video frame copying (but less packet
copying) than the design of ffplay, which copies video packets so they
can be decoded and displayed by the video thread one at a time, but it
provides better buffering.  This way I can survive a small processing
shortage without running out of either video or audio.  In an embedded
realtime environment, you would likely want to stick with the ffplay
design, because there you can guarantee that decoding one video frame
will happen on time.
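For completeness, the second thread's YUV->RGB step might look roughly
like this with libswscale.  (Again just a sketch: frame_queue_pop()
and upload_to_video_memory() are hypothetical placeholders.)

#include <libswscale/swscale.h>
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>

/* Hypothetical helpers -- not part of FFmpeg. */
extern AVFrame *frame_queue_pop(void);  /* blocks until a frame is ready */
extern void upload_to_video_memory(const uint8_t *rgb, int stride);

static void convert_thread(int w, int h, enum AVPixelFormat src_fmt)
{
    struct SwsContext *sws = sws_getContext(w, h, src_fmt,
                                            w, h, AV_PIX_FMT_RGB24,
                                            SWS_BILINEAR,
                                            NULL, NULL, NULL);
    uint8_t *rgb[4];
    int stride[4];
    av_image_alloc(rgb, stride, w, h, AV_PIX_FMT_RGB24, 1);

    for (;;) {
        AVFrame *f = frame_queue_pop();
        sws_scale(sws, (const uint8_t * const *)f->data, f->linesize,
                  0, h, rgb, stride);
        av_frame_free(&f);
        upload_to_video_memory(rgb[0], stride[0]);  /* e.g. a GL texture */
    }
}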


-- 
Michael Conrad
IntelliTree Solutions llc.
513-552-6362
[EMAIL PROTECTED]