Francesco Romani wrote:
Kind of. We're not so satisfied with such a structure, and I'm not satisfied
at all with our framebuffer handling.

There are quite a few interesting buffer architectures used in some open source frameworks that could be looked at.
This structure could have various attributes filled in depending on the processing stage the frame is in?

Yes, depending on filters, but the core is also allowed to change some values.
True. The cases where the core can update these values and where the filters themselves update them need to be sorted out.
For example, clock information derived from the container format could be added to frame_buffer_t. This tagged information would be retained through pre-processing, post-processing, and encoding, and used for AV sync in the final stage.
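
To make that concrete, here is a minimal sketch of the idea; the struct and field names below are hypothetical illustrations, not transcode's actual frame_buffer_t layout:

#include <stdint.h>

/* Hypothetical frame buffer; the real frame_buffer_t differs. */
typedef struct hypothetical_frame_buffer {
    uint8_t *video_buf;    /* decoded frame data                        */
    int      video_size;   /* size of the decoded data in bytes         */

    /* clock information tagged by the demuxer/decoder and carried
     * unchanged through the filter and encoding stages:               */
    int64_t  pts;          /* presentation timestamp, in clock ticks    */
    int64_t  dts;          /* decode timestamp, in clock ticks          */
    int32_t  clock_rate;   /* ticks per second, e.g. 90000 for MPEG-TS  */

    uint32_t attributes;   /* flags settable by the core and by filters */
} hypothetical_frame_buffer_t;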
How is AV sync achieved currently?

There are a few methods, and that already isn't optimal (see the -M option);
the default anyway is more or less ``get frames in decoded order and hope
for the best''.
I am doing just this.

transcode aims to support as many formats as possible for I/O; some of them
have no notion of timestamping at all (groups of images, plain YUV streams,
maybe YUV4MPEG2 too IIRC), so a more general kind of `virtual' timestamping
is needed.
According to what I read, YUV4MPEG2 seems to contain uncompressed YUV video data meant for MPEG encoding. Timestamping, in the case of transcoding, would be relevant for AV sync only, right?
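
If so, a `virtual' timestamp for such formats could simply be derived from the frame index and an assumed frame rate. A rough sketch (the helper below is my own, not transcode code):

#include <stdint.h>

/* Synthesize a virtual PTS for formats with no timing information
 * (image sequences, raw YUV): frame_index * (1/fps), expressed in
 * ticks of a virtual clock running at clock_rate ticks per second. */
static int64_t virtual_pts(int64_t frame_index,
                           int fps_num, int fps_den,
                           int64_t clock_rate)
{
    return frame_index * fps_den * clock_rate / fps_num;
}

For example, with a 25/1 frame rate and a 90 kHz virtual clock, frame 50 gets a PTS of 180000 ticks (2 seconds).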


Furthermore:
I am trying to get the hang of the whole file-format demux, decode, encode operation chain.
My interest is in trying to understand:
1. How the timestamps applicable to the audio/video data are stored with respect to a common clock
2. How they are passed through the filters
3. How they are used based upon the target transcode container format: although the audio/video encoding format might change in the output file, the timestamps (relative between audio/video decode and presentation) would be retained to achieve AV sync (see the sketch below)
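
To illustrate point 3, the way I picture it is that a timestamp carried from the demuxer is rescaled into the target container's time base rather than regenerated, so the relative audio/video offset survives the remux. A sketch under my own assumptions (not transcode API):

#include <stdint.h>
#include <stdio.h>

/* Rescale a timestamp from the source container's clock rate to the
 * target container's clock rate. */
static int64_t rescale_ts(int64_t ts, int64_t src_rate, int64_t dst_rate)
{
    return ts * dst_rate / src_rate;
}

int main(void)
{
    /* 90 kHz MPEG clock in, 1 kHz (millisecond) clock out */
    int64_t video_pts = rescale_ts(180000, 90000, 1000); /* -> 2000 ms */
    int64_t audio_pts = rescale_ts(178200, 90000, 1000); /* -> 1980 ms */

    /* the 20 ms relative A/V offset is preserved across the remux */
    printf("video %lld ms, audio %lld ms\n",
           (long long)video_pts, (long long)audio_pts);
    return 0;
}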

Any guidance from the community members would be appreciated.


