On 11/17/2012 11:30 AM, Anton Khirnov wrote:
>
> On Sat, 17 Nov 2012 11:12:34 -0800, John Stebbins <stebb...@jetheaddev.com> wrote:
>> On 11/16/2012 09:46 AM, Anton Khirnov wrote:
>>>
>>> On Fri, 2 Nov 2012 09:32:51 -0700, John Stebbins <stebb...@jetheaddev.com> wrote:
>>>> On 11/02/2012 08:20 AM, Diego Biurrun wrote:
>>>>> On Sun, Oct 14, 2012 at 04:52:12PM +0200, John Stebbins wrote:
>>>>>> pts should be that of the packet containing the presentation segment.
>>>>>> ---
>>>>>>  libavcodec/pgssubdec.c | 9 +++++++--
>>>>>>  1 file changed, 7 insertions(+), 2 deletions(-)
>>>>>
>>>>> ping, anybody?
>>>>>
>>>> Patch updated to apply cleanly to head.
>>>>
>>> The docs say pts is supposed to be in AV_TIME_BASE, while the packet pts
>>> is presumably in the decoder timebase.
>>>
>>> Not sure if the other subs decoders follow this though... Are you
>>> familiar with them?
>>>
>> None of the other subtitle decoders even set AVSubtitle->pts as far as I
>> can see. There are probably multiple problems here. Given that the
>> AVPacket documentation says the input packet pts is in
>> AVStream->time_base units, and AVStream is not available to the decoder,
>> how is this time_base conversion supposed to be performed?
>
> The caller can convert it to the decoder timebase. That is the same e.g.
> for audio, where you need to rescale the pts you send to the decoder for
> it to be able to handle delay (this is not done now iirc, but Justin has
> plans I think).
>
> FWIW I'm fine with the patch as is for now, until someone comes to resolve
> the whole subtitle mess.
Are you certain you are not talking about audio *encoder* delay above?
Currently, the audio encoder requires pts to be rescaled to the sample rate
in order to get the initial delay right. Note that this is also not well
documented, afaict.

If this is what is planned/intended, then the comment for AVPacket->pts
needs to change to reflect it. It should say something to the effect that
when passing an AVPacket to a decoder, the pts should be in
AVCodecContext->time_base units rather than AVStream->time_base units.
Maybe also say that if AVCodecContext->time_base is not set, the units
shall be AV_TIME_BASE when passing to a decoder.

The problem with the above is that I don't think it really applies
universally. Video decoders have a tendency to set time_base to
1/framerate, which is a horrible timebase for presentation timestamps; the
original stream time_base is always much finer grained. Variable framerate
video would be rendered incorrectly in this case, because the elementary
stream sometimes signals only a "nominal" framerate while the container
carries more accurate discrete timestamps. TBH, I like the approach used in
AVFrame, where pkt_pts is returned by the decoder unmodified (aside from
reordering).

Also, none of the subtitle decoders appear to set
AVCodecContext->time_base. It is not set in utils.c either, afaict.

If you can clarify these time_base issues, I would be happy to chip away at
some of these subtitle pts issues.

-- 
John
GnuPG fingerprint: D0EC B3DB C372 D1F1 0B01 83F0 49F1 D7B2 60D4 D0F7
_______________________________________________
libav-devel mailing list
libav-devel@libav.org
https://lists.libav.org/mailman/listinfo/libav-devel