Hi Marton,

> 
> In the current implementation per-frame timecode is stored as 
> AV_PKT_DATA_STRINGS_METADATA side data, when AVPackets become AVFrames, the 
> AV_PKT_DATA_STRINGS_METADATA is automatically converted to entries in the 
> AVFrame->metadata AVDictionary. The dictionary key is "timecode".
> 
> There is no "standard" way to store per-frame timecode, neither in packets, 
> nor in frames (other than the frame side data AV_FRAME_DATA_GOP_TIMECODE, but 
> that is too specific to MPEG). Using AVFrame->metadata for this is also 
> non-standard, but it allows us to implement the feature without worrying too 
> much about defining / documenting it.
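
For reference, reading that metadata entry back from a decoded frame looks
something like this (a minimal sketch; av_dict_get() is the standard
AVDictionary lookup):

    #include "libavutil/dict.h"
    #include "libavutil/frame.h"

    /* Return the per-frame timecode string stored under the "timecode"
     * key in AVFrame->metadata, or NULL if none is present. */
    static const char *get_frame_timecode(const AVFrame *frame)
    {
        AVDictionaryEntry *e = av_dict_get(frame->metadata, "timecode",
                                           NULL, 0);
        return e ? e->value : NULL;
    }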

For what it’s worth, I’ve got timecode support implemented here, where I
turned the uint32 defined in libavutil/timecode.h into a new frame side data
type.  I’ve got the H.264 decoder extracting timecodes from SEI and creating
these, which are then fed to the decklink output, where they get converted
into the appropriate VANC packets.  It seems to be working pretty well,
although there are still a couple of edge cases to be ironed out with
interlaced content and PAFF streams.
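
Roughly, the decoder side looks like the following (a minimal sketch;
AV_FRAME_DATA_SMPTE_TIMECODE is just the name I’m using here for the new
side data type, and the actual SEI parsing is elided):

    #include <stdint.h>
    #include <string.h>
    #include "libavutil/frame.h"

    /* Attach the packed uint32 SMPTE timecode (the format from
     * libavutil/timecode.h) to a decoded frame as side data.
     * AV_FRAME_DATA_SMPTE_TIMECODE is the new type referred to above. */
    static int attach_timecode(AVFrame *frame, uint32_t tc)
    {
        AVFrameSideData *sd;

        sd = av_frame_new_side_data(frame, AV_FRAME_DATA_SMPTE_TIMECODE,
                                    sizeof(tc));
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd->data, &tc, sizeof(tc));
        return 0;
    }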

> 
> Also it is worth mentioning that the frame metadata is lost when encoding, so 
> the muxers won't have access to it, unless the encoders export it in some 
> way, such as packet metadata or side data (they currently don't).

Since for the moment I’m focused on the decoding case, I’ve changed the V210 
encoder to convert the AVFrame side data into AVPacket side data (so the 
decklink output can get access to the data), and when I hook in the decklink 
capture support I will be submitting patches for the H.264 and HEVC encoders.
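
The conversion itself is straightforward; in the encoder’s encode routine
it amounts to something like this (again a sketch, where
AV_PKT_DATA_SMPTE_TIMECODE is the hypothetical packet-side counterpart of
the frame side data type):

    #include <string.h>
    #include "libavcodec/avcodec.h"

    /* Propagate frame-level timecode side data onto the output packet
     * so downstream consumers (e.g. the decklink output) can see it. */
    static int propagate_timecode(const AVFrame *frame, AVPacket *pkt)
    {
        AVFrameSideData *sd;
        uint8_t *dst;

        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_SMPTE_TIMECODE);
        if (!sd)
            return 0; /* nothing to carry over */

        dst = av_packet_new_side_data(pkt, AV_PKT_DATA_SMPTE_TIMECODE,
                                      sd->size);
        if (!dst)
            return AVERROR(ENOMEM);
        memcpy(dst, sd->data, sd->size);
        return 0;
    }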

>> 
>> 2) Is there any reason not to make a valid timecode track (a la 
>> AVMEDIA_TYPE_DATA AVStream) with timecode packets? Would that conflict with 
>> the side data approach currently implemented?
> 
> I see no conflict; you might implement a timecode "track", but I don't see 
> why that would make your life any easier.

The whole notion of carrying this sort of data in a separate stream versus
as side data is a long-standing issue.  It impacts not just timecodes but
also stuff like closed captions, SCTE-104 triggers, and teletext.  In some
cases, like MOV, the data is carried in the container as a separate stream;
in other cases, like MPEG-2/H.264/HEVC, it’s carried within the video
stream.

At least for captions and timecodes the side data approach works fine in
the video stream case, but it’s problematic when data carried as side data
needs to be extracted into a separate stream.  The only way I could think
of to do it was to insert a split filter on the video stream and feed both
the actual video encoder and a second encoder instance, which throws away
the video frames and just acts on the side data to create the caption
stream.
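
To make that concrete, the invocation would look something like the
following (illustrative only; "ccextract" stands in for the hypothetical
caption-extracting encoder, which doesn’t exist today):

    ffmpeg -i input.ts \
        -filter_complex "[0:v]split[vid][cc]" \
        -map "[vid]" -c:v libx264 output.ts \
        -map "[cc]" -c ccextract captions.scc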

And of course you have the same problem in the other direction - if you
receive the timecodes/captions via a stream, how do you get them into side
data so they can be encoded by the video encoder?
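
For the caption case at least the target is well-defined: take the packets
from the data stream and attach their payloads to the outgoing video frames
as AV_FRAME_DATA_A53_CC side data before they reach the encoder.  A minimal
sketch, assuming the payload is already in A/53 cc_data format:

    #include <string.h>
    #include "libavutil/frame.h"

    /* Attach closed caption bytes (A/53 cc_data format) to the video
     * frame about to be encoded; encoders such as libx264 can pick up
     * AV_FRAME_DATA_A53_CC and embed it as SEI. */
    static int attach_captions(AVFrame *frame, const uint8_t *cc,
                               int cc_size)
    {
        AVFrameSideData *sd;

        sd = av_frame_new_side_data(frame, AV_FRAME_DATA_A53_CC, cc_size);
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd->data, cc, cc_size);
        return 0;
    }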

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com
