Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-28 Thread Vittorio Giovara
On Fri, Feb 28, 2020 at 6:58 AM Nicolas George  wrote:

> Anton Khirnov (12020-02-28):
> > avpriv is not a necessary result of having multiple libraries. It is a
> > bug caused by bad API design. There is no fundamental reason for having
> > private interfaces.
> >
> > In my view, no new private interfaces should be accepted and all of the
> > existing ones should be gradually removed.
>
> It's a nice view to hold in theory; it's another thing to apply it when
> faced with an actual issue. You should realize how disparaging of other
> people's efforts an offhand comment like that can be.
>
> > I can give you several:
> > - it allows callers to use a subset of functionality without depending
> >   on a giant monsterlibrary; you don't want to depend on the entire
> >   libavcodec if you just want to resample; you don't want to depend on
> >   the entire libavformat if you just want some handy IO wrappers
>
> There are many ways to "depend": static linking, dynamic linking, using
> system libraries, shipping the source code, etc. Each of these ways offers
> better solutions to avoid an unnecessary "monsterlibrary".
>

Err, a monsterlibrary is a monsterlibrary regardless of how it is linked,
and it's good that you mention static linking, since that would be one of
the best reasons to keep the libraries separate.


> Also, I suspect very few projects use the FFmpeg libraries without
> libavcodec itself.
>

Err^2: wouldn't we want to change that? There are plenty of good APIs that
could be used in the wild, but aren't, because ffmpeg is a huge dependency
for any project.


> > - related to the previous points, it would allow us to use that
> >   functionality more easily internally without having everything depend
> >   on everything; people already do IO in libavcodec, but they can't use
> >   avio for it; if the libraries were split - they could
>
> If the libraries were merged, we could too. Splitting the libraries is
> only a mediocre proxy for good code organization.
>
> > - having our interfaces public forces us to make them more strict and
> >   explicit and generally be more careful about their design; that is
> >   generally a good thing - lavc and lavf would greatly benefit from
> >   having more internal structure
>
> Again, splitting the libraries is not necessary to have better code
> organization.
>

There are plenty of examples in which imposing constraints on the code or
the language forces developers to write better code.
For example, splitting the libraries would make sure that a private header
does not "leak" into a dependent library, as happened in a patch that was
published a few days ago.

> Let me be clear: the organization in thematic sub-directories is fine.
> The terrible waste is to produce separate dynamic objects at the end.
>

Hard disagree on that one, but also quite off-topic, let's get back on
subtitles.
-- 
Vittorio

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-28 Thread Nicolas George
Anton Khirnov (12020-02-28):
> avpriv is not a necessary result of having multiple libraries. It is a
> bug caused by bad API design. There is no fundamental reason for having
> private interfaces.
> 
> In my view, no new private interfaces should be accepted and all of the
> existing ones should be gradually removed.

It's a nice view to hold in theory; it's another thing to apply it when
faced with an actual issue. You should realize how disparaging of other
people's efforts an offhand comment like that can be.

> I can give you several:
> - it allows callers to use a subset of functionality without depending
>   on a giant monsterlibrary; you don't want to depend on the entire
>   libavcodec if you just want to resample; you don't want to depend on
>   the entire libavformat if you just want some handy IO wrappers

There are many ways to "depend": static linking, dynamic linking, using
system libraries, shipping the source code, etc. Each of these ways offers
better solutions to avoid an unnecessary "monsterlibrary".

Also, I suspect very few projects use the FFmpeg libraries without
libavcodec itself.

> - related to the previous points, it would allow us to use that
>   functionality more easily internally without having everything depend
>   on everything; people already do IO in libavcodec, but they can't use
>   avio for it; if the libraries were split - they could

If the libraries were merged, we could too. Splitting the libraries is
only a mediocre proxy for good code organization.

> - having our interfaces public forces us to make them more strict and
>   explicit and generally be more careful about their design; that is
>   generally a good thing - lavc and lavf would greatly benefit from
>   having more internal structure

Again, splitting the libraries is not necessary to have better code
organization.

Let me be clear: the organization in thematic sub-directories is fine.
The terrible waste is to produce separate dynamic objects at the end.

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-28 Thread Nicolas George
Jean-Baptiste Kempf (12020-02-27):
> Separating I/O from demuxers would bring a lot of interesting things
> for security and for custom protocols.

At the API level, yes, indeed. I personally would like demuxers to use
the same push/pull API the decoders have been made to use.
But that does not mean they have to live in a different library.
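
For reference, the decoder push/pull model referred to here is the
avcodec_send_packet()/avcodec_receive_frame() pair; a minimal sketch of the
loop (error handling and flushing omitted), the idea being that demuxers
could expose an analogous send/receive pair for packets:

#include <libavcodec/avcodec.h>

/* The decoder push/pull loop: packets are pushed in, frames are pulled out.
 * A demuxer-level analogue would push raw input and pull AVPackets. */
static int decode_push_pull(AVCodecContext *dec, AVPacket *pkt, AVFrame *frame)
{
    int ret = avcodec_send_packet(dec, pkt);                 /* push */
    if (ret < 0)
        return ret;

    while ((ret = avcodec_receive_frame(dec, frame)) >= 0) { /* pull */
        /* ... use the decoded frame ... */
        av_frame_unref(frame);
    }
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}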

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-28 Thread Clément Bœsch
On Fri, Feb 28, 2020 at 05:55:19AM +0100, Anton Khirnov wrote:
> Quoting Clément Bœsch (2020-02-27 19:36:24)
> > On Thu, Feb 27, 2020 at 12:35:03PM +0100, Anton Khirnov wrote:
> > [...]
> > > AFAIU one of the still-open questions for the subtitle redesign is what
> > > does it mean to decode or encode a subtitle.
> > 
> > There are multiple markups available for text subtitles, and there are
> > multiple ways of representing graphic rectangles for bitmap subtitles.
> > 
> > So for text subtitles, decoding and encoding respectively means
> > transforming them in a common markup to represent them all (currently ASS
> > is our "raw" representation) and back into their markup specifications. We
> > have a bunch of those already (subrip, microdvd, subviewer, ...).
> 
> Is it still true that ASS is a superset of everything? Is that likely to
> remain the case for the foreseeable future?
> 

Nah, it isn't, and actually never really was. It was just the best we had
at the time, and I believe it's still the best. The libass implementation
plays a huge role in making ASS the de-facto "standard" for subtitle
markup. The ability to represent the raw form as an AST, to allow custom
renderers, was discussed in the past. We could consider such a thing in the
future, but I'd bet that most users would convert that AST back into ASS
to send it to the libass renderer and not bother with it.

I'm definitely not opposed to considering alternate representations for raw
text markup, but currently I wouldn't consider that a real limitation. And
for once, this is not something I'd consider blocking for this refactor.

I'm not an ASS expert, but the two limitations I'm aware of are the timing
precision (but that's at the format level, not the markup) and the lack of
built-in furigana: WebVTT typically has those (look for the <ruby> tag).
There might be others.

> > For bitmap subtitles, decoding and encoding respectively means
> > transforming the bitstream into rectangle structures with RAW images
> > inside and back into the codec-specific bitstream.
> > 
> > > And one of the options is putting the AVPacket->"decoded subtitle"
> > > (whatever that is) and "decoded subtitle"->AVPacket conversions into a
> > > separate library.
> > 
> > And then you can't have them in libavfilter, so you can't have a sane
> > harmony with media including subtitle streams. It's problematic with many
> > basic use cases. One random example: if you're transcoding an audio/video
> > and somehow altering the timings within lavfi, you have to give the
> > subtitles.
> 
> I don't see why that necessarily follows from not using AVFrame.
> avfilter does not have to be tied to only using AVFrame forever for all
> eternity. It could have a different path for subtitles. Their handling
> is going to be pretty different in any case.

All of the filter API and builtins are designed around AVFrame; I'm pretty
sure this would cause a huge mess of duplication and nuisance for the users
and developers wanting to abstract away that complexity.

> Note that I'm not saying it SHOULD be done this way. I'm saying that it
> seems like an option that should not be disregarded without
> consideration.

Of course; and it won't surprise you that it was already considered and
discussed in the past (OK, it was about 10 years ago now).

-- 
Clément B.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Anton Khirnov
Quoting Nicolas George (2020-02-27 19:44:39)
> Vittorio Giovara (12020-02-27):
> > Joking aside, I see nothing wrong in having a bit more granular libraries;
> > subtitle handling could be a good example use case.
> 
> Seriously?
> 
> $ git grep avpriv | wc -l
> 1648
> 
> This is how much "nothing wrong" we already have because the libraries
> are split. And having to maintain ABI stability for private APIs is only
> one cause of problems among others.

avpriv is not a necessary result of having multiple libraries. It is a
bug caused by bad API design. There is no fundamental reason for having
private interfaces.

In my view, no new private interfaces should be accepted and all of the
existing ones should be gradually removed.

> 
> On the other side, would you be able to quote only one actual, practical
> benefit of having several libraries instead of one that could not be
> achieved more simply with configure options? I suspect not, because I
> have looked for them and not found.

I can give you several:
- it allows callers to use a subset of functionality without depending
  on a giant monsterlibrary; you don't want to depend on the entire
  libavcodec if you just want to resample; you don't want to depend on
  the entire libavformat if you just want some handy IO wrappers
- related to the previous points, it would allow us to use that
  functionality more easily internally without having everything depend
  on everything; people already do IO in libavcodec, but they can't use
  avio for it; if the libraries were split - they could
- having our interfaces public forces us to make them more strict and
  explicit and generally be more careful about their design; that is
  generally a good thing - lavc and lavf would greatly benefit from
  having more internal structure

-- 
Anton Khirnov

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Anton Khirnov
Quoting Clément Bœsch (2020-02-27 19:36:24)
> On Thu, Feb 27, 2020 at 12:35:03PM +0100, Anton Khirnov wrote:
> [...]
> > AFAIU one of the still-open questions for the subtitle redesign is what
> > does it mean to decode or encode a subtitle.
> 
> There are multiple markups available for text subtitles, and there are
> multiple ways of representing graphic rectangles for bitmap subtitles.
> 
> So for text subtitles, decoding and encoding respectively means
> transforming them in a common markup to represent them all (currently ASS
> is our "raw" representation) and back into their markup specifications. We
> have a bunch of those already (subrip, microdvd, subviewer, ...).

Is it still true that ASS is a superset of everything? Is that likely to
remain the case for the foreseeable future?

> 
> For bitmap subtitles, decoding and encoding respectively means
> transforming the bitstream into rectangle structures with RAW images
> inside and back into the codec-specific bitstream.
> 
> > And one of the options is putting the AVPacket->"decoded subtitle"
> > (whatever that is) and "decoded subtitle"->AVPacket conversions into a
> > separate library.
> 
> And then you can't have them in libavfilter, so you can't have a sane
> harmony with media including subtitle streams. It's problematic with many
> basic use cases. One random example: if you're transcoding an audio/video
> and somehow altering the timings within lavfi, you have to give the
> subtitles.

I don't see why that necessarily follows from not using AVFrame.
avfilter does not have to be tied to only using AVFrame forever for all
eternity. It could have a different path for subtitles. Their handling
is going to be pretty different in any case.

Note that I'm not saying it SHOULD be done this way. I'm saying that it
seems like an option that should not be disregarded without
consideration.

-- 
Anton Khirnov

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Jean-Baptiste Kempf
On Thu, Feb 27, 2020, at 19:44, Nicolas George wrote:
> On the other side, would you be able to quote only one actual, practical
> benefit of having several libraries instead of one that could not be
> achieved more simply with configure options? I suspect not, because I
> have looked for them and not found.

Separating I/O from demuxers would bring a lot of interesting things for 
security and for custom protocols.

But I agree on subtitles decoders, which is the topic here.

--
Jean-Baptiste Kempf - President
+33 672 704 734



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Nicolas George
Vittorio Giovara (12020-02-27):
> Joking aside, I see nothing wrong in having a bit more granular libraries;
> subtitle handling could be a good example use case.

Seriously?

$ git grep avpriv | wc -l
1648

This is how much "nothing wrong" we already have because the libraries
are split. And having to maintain ABI stability for private APIs is only
one cause of problems among others.

On the other side, would you be able to quote only one actual, practical
benefit of having several libraries instead of one that could not be
achieved more simply with configure options? I suspect not, because I
have looked for them and not found.

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Clément Bœsch
On Thu, Feb 27, 2020 at 07:36:24PM +0100, Clément Bœsch wrote:
[...]
> And then you can't have them in libavfilter, so you can't have a sane
> harmony with media including subtitle streams. It's problematic with many
> basic use cases. One random example: if you're transcoding an audio/video
> and somehow altering the timings within lavfi, you have to give the

give up*

> subtitles.
> 

-- 
Clément B.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Clément Bœsch
On Thu, Feb 27, 2020 at 12:35:03PM +0100, Anton Khirnov wrote:
[...]
> AFAIU one of the still-open questions for the subtitle redesign is what
> does it mean to decode or encode a subtitle.

There are multiple markups available for text subtitles, and there are
multiple ways of representing graphic rectangles for bitmap subtitles.

So for text subtitles, decoding and encoding respectively means
transforming them in a common markup to represent them all (currently ASS
is our "raw" representation) and back into their markup specifications. We
have a bunch of those already (subrip, microdvd, subviewer, ...).

For bitmap subtitles, decoding and encoding respectively means
transforming the bitstream into rectangle structures with RAW images
inside and back into the codec-specific bitstream.
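
For context, a minimal sketch of what that currently looks like with the
existing lavc calls (avcodec_decode_subtitle2() filling an AVSubtitle);
error handling is omitted:

#include <stdio.h>
#include <libavcodec/avcodec.h>

/* Current old-style subtitle decoding: one packet in, one AVSubtitle out,
 * holding either ASS text events or bitmap rectangles. */
static void decode_one_subtitle(AVCodecContext *dec, AVPacket *pkt)
{
    AVSubtitle sub;
    int got_sub = 0;

    if (avcodec_decode_subtitle2(dec, &sub, &got_sub, pkt) < 0 || !got_sub)
        return;

    for (unsigned i = 0; i < sub.num_rects; i++) {
        const AVSubtitleRect *r = sub.rects[i];
        if (r->type == SUBTITLE_ASS)
            printf("ASS event: %s\n", r->ass);   /* the "raw" text form */
        else if (r->type == SUBTITLE_BITMAP)
            printf("bitmap %dx%d at (%d,%d)\n", r->w, r->h, r->x, r->y);
    }
    avsubtitle_free(&sub);
}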

> And one of the options is putting the AVPacket->"decoded subtitle"
> (whatever that is) and "decoded subtitle"->AVPacket conversions into a
> separate library.

And then you can't have them in libavfilter, so you can't have a sane
harmony with media including subtitle streams. It's problematic with many
basic use cases. One random example: if you're transcoding an audio/video
and somehow altering the timings within lavfi, you have to give the
subtitles.

Having subtitles within libavfilter is not a fancy utopia to give
ourselves a reason to write a bunch of random filters; it actually helps
address real limitations in the current model.

What you are suggesting is basically what we already have: the few
subtitle-specific APIs already present in lavc are what you would want out
of it, but that won't solve the core issues.

-- 
Clément B.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Vittorio Giovara
On Thu, Feb 27, 2020 at 10:41 AM Hendrik Leppkes wrote:

> On Thu, Feb 27, 2020 at 12:35 PM Anton Khirnov  wrote:
> >
> > Quoting Hendrik Leppkes (2020-02-27 12:27:09)
> > > On Thu, Feb 27, 2020 at 12:18 PM Anton Khirnov 
> wrote:
> > > > Why does it need to be within AVFrame? I am still unconvinced that
> is a
> > > > good idea. What do we gain from storing them in the same struct?
> > > > It makes sense for audio and video, because they are similar in many
> > > > important aspects (and even then there are people saying that they
> > > > should be separate). Subtitles are even more different.
> > > >
> > >
> > > You gain a unified API, which we already have now, instead of a
> > > secondary API just for subtitles that's practically the same but
> > > accepts different structs.
> > > This makes everything a lot easier imho. You can decode whatever input
> > > data into an AVFrame, you can filter your AVFrame, still without
> > > needing special data paths, and only at the last step after all of
> > > that do you need to possibly care when it comes to output. If you're
> > > encoding again, you might not even have that.
> >
> > AFAIU one of the still-open questions for the subtitle redesign is what
> > does it mean to decode or encode a subtitle. And one of the options is
> > putting the AVPacket->"decoded subtitle" (whatever that is) and "decoded
> > subtitle"->AVPacket conversions into a separate library.
>
>
> FFmpeg really doesn't need *more* libraries.
>

libavio says hi

Joking aside, I see nothing wrong in having a bit more granular libraries;
subtitle handling could be a good example use case.
-- 
Vittorio

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Hendrik Leppkes
On Thu, Feb 27, 2020 at 12:35 PM Anton Khirnov  wrote:
>
> Quoting Hendrik Leppkes (2020-02-27 12:27:09)
> > On Thu, Feb 27, 2020 at 12:18 PM Anton Khirnov  wrote:
> > > Why does it need to be within AVFrame? I am still unconvinced that is a
> > > good idea. What do we gain from storing them in the same struct?
> > > It makes sense for audio and video, because they are similar in many
> > > important aspects (and even then there are people saying that they
> > > should be separate). Subtitles are even more different.
> > >
> >
> > You gain a unified API, which we already have now, instead of a
> > secondary API just for subtitles that's practically the same but
> > accepts different structs.
> > This makes everything a lot easier imho. You can decode whatever input
> > data into an AVFrame, you can filter your AVFrame, still without
> > needing special data paths, and only at the last step after all of
> > that do you need to possibly care when it comes to output. If you're
> > encoding again, you might not even have that.
>
> AFAIU one of the still-open questions for the subtitle redesign is what
> does it mean to decode or encode a subtitle. And one of the options is
> putting the AVPacket->"decoded subtitle" (whatever that is) and "decoded
> subtitle"->AVPacket conversions into a separate library.


FFmpeg really doesn't need *more* libraries.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Anton Khirnov
Quoting Hendrik Leppkes (2020-02-27 12:27:09)
> On Thu, Feb 27, 2020 at 12:18 PM Anton Khirnov  wrote:
> > Why does it need to be within AVFrame? I am still unconvinced that is a
> > good idea. What do we gain from storing them in the same struct?
> > It makes sense for audio and video, because they are similar in many
> > important aspects (and even then there are people saying that they
> > should be separate). Subtitles are even more different.
> >
> 
> You gain a unified API, which we already have now, instead of a
> secondary API just for subtitles that's practically the same but
> accepts different structs.
> This makes everything a lot easier imho. You can decode whatever input
> data into an AVFrame, you can filter your AVFrame, still without
> needing special data paths, and only at the last step after all of
> that do you need to possibly care when it comes to output. If you're
> encoding again, you might not even have that.

AFAIU one of the still-open questions for the subtitle redesign is what
does it mean to decode or encode a subtitle. And one of the options is
putting the AVPacket->"decoded subtitle" (whatever that is) and "decoded
subtitle"->AVPacket conversions into a separate library.
> 
> - Hendrik
> 

-- 
Anton Khirnov


Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Hendrik Leppkes
On Thu, Feb 27, 2020 at 12:18 PM Anton Khirnov  wrote:
>
> Quoting Clément Bœsch (2020-02-25 18:40:13)
> > On Sun, Feb 23, 2020 at 09:59:59PM +0100, Michael Niedermayer wrote:
> > [...]
> > > > The subtitles refactor requires to see the big picture and all the 
> > > > problems at
> > > > once.
> > >
> > > really ?
> > > just hypothetically, and playing the devil's advocate here.
> > > what would happen if one problem or set of problems is solved at a time ?
> >
> > The first requirement of everything following is to define a new
> > structure/API for holding the subtitles within the AVFrame (which has to
> > live in lavu and not lavc like current API). So you have to address all
> > the current limitations in that new API first, unless you're ready to
> > change that new API 10x in the near future. And even if you keep most of
> > the current design, you still have to at least come up with ways to remove
> > all the current hacks that would go away while moving to the new design.
>
> Why does it need to be within AVFrame? I am still unconvinced that is a
> good idea. What do we gain from storing them in the same struct?
> It makes sense for audio and video, because they are similar in many
> important aspects (and even then there are people saying that they
> should be separate). Subtitles are even more different.
>

You gain a unified API, which we already have now, instead of a
secondary API just for subtitles that's practically the same but
accepts different structs.
This makes everything a lot easier imho. You can decode whatever input
data into an AVFrame, you can filter your AVFrame, still without
needing special data paths, and only at the last step after all of
that do you need to possibly care when it comes to output. If you're
encoding again, you might not even have that.
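
To illustrate, a minimal sketch of that single data path using the existing
lavfi entry points (av_buffersrc_add_frame() / av_buffersink_get_frame());
under such a model a subtitle AVFrame would go through exactly the same
calls as audio and video (hypothetical for subtitles, since lavfi does not
accept them today):

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* One filtering step of the unified path: whatever kind of AVFrame was
 * decoded (video, audio, and, under this proposal, subtitles) goes
 * through the same two calls. */
static int filter_one_frame(AVFilterContext *buffersrc,
                            AVFilterContext *buffersink,
                            AVFrame *decoded, AVFrame *filtered)
{
    int ret = av_buffersrc_add_frame(buffersrc, decoded);    /* push in */
    if (ret < 0)
        return ret;

    while ((ret = av_buffersink_get_frame(buffersink, filtered)) >= 0) {
        /* ... hand the filtered frame to the encoder or renderer ... */
        av_frame_unref(filtered);
    }
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}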

- Hendrik


Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-27 Thread Anton Khirnov
Quoting Clément Bœsch (2020-02-25 18:40:13)
> On Sun, Feb 23, 2020 at 09:59:59PM +0100, Michael Niedermayer wrote:
> [...]
> > > The subtitles refactor requires to see the big picture and all the 
> > > problems at
> > > once. 
> > 
> > really ?
> > just hypothetically, and playing the devil's advocate here.
> > what would happen if one problem or set of problems is solved at a time ?
> 
> The first requirement of everything following is to define a new
> structure/API for holding the subtitles within the AVFrame (which has to
> live in lavu and not lavc like current API). So you have to address all
> the current limitations in that new API first, unless you're ready to
> change that new API 10x in the near future. And even if you keep most of
> the current design, you still have to at least come up with ways to remove
> all the current hacks that would go away while moving to the new design.

Why does it need to be within AVFrame? I am still unconvinced that is a
good idea. What do we gain from storing them in the same struct?
It makes sense for audio and video, because they are similar in many
important aspects (and even then there are people saying that they
should be separate). Subtitles are even more different.

-- 
Anton Khirnov

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-26 Thread Nicolas George
Michael Niedermayer (12020-02-26):
> I do think i misunderstand something here
> because if we have a video with a signpost shown from 0:00 to 1:00
> and another shown from 0:30 to 1:30 then the subtitles translating
> or commenting that would overlap.

The existence of signs implies that overlap does happen frequently and
needs to happen gracefully. The idea of speech synthesis implies that
splitting and merging cannot be used indiscriminately. Both are true,
they do not need to happen at the same time.

Yet, they can happen at the same time, if for example spoken dialogue
meant for speech synthesis is separate (with a different ASS style or
layer) from the signs.

Furthermore, speech synthesis was just one example among many to explain
why splitting and merging is not acceptable. There are many others. The
case of timed animations has been given.

> and also the video frames showing these signposts overlap , ehm i mean
> they dont overlap. That is what i do not understand.
> Video frames dont do that and its fine
> and then theres audio
> someone playing a note on the trumpet and another a note on the piano
> again we have 2 AVFrame overlapp i mean not overlapping.
> So why subtitles ?
> 
> and one could even argue why it would make sense for audio to be
> overlapping with this information about instruments and it is in 
> midi and mod files. And a filter writing notes for the instruments
> would benefit from this and simlar a midi encoder

You're hinting at the answer. If we worked with MIDI and mod files,
splitting or merging notes would be unacceptable. Same goes for frames:
if we were a vectorial drawing program, rasterizing the graphic objects
would be unacceptable. But we're not: we consider audio is just a stream
of sample going to the speakers, and if some codec tries to do something
fancy with notes, that's its problem and we don't try to help. Same goes
for video: it's just pixels going to the screen, we don't try to
preserve sprites.

But it's not the same with subtitles. Subtitles are not just a bunch of
pixels that get overlaid on top of the video. Well, they could be, but
it's not what the users expect. Subtitles are often hand-written,
partially or completely, and read directly. A tool that mangles it would
be useless for most usages.

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-26 Thread Michael Niedermayer
On Tue, Feb 25, 2020 at 06:40:13PM +0100, Clément Bœsch wrote:
> On Sun, Feb 23, 2020 at 09:59:59PM +0100, Michael Niedermayer wrote:
> [...]
> > > The subtitles refactor requires to see the big picture and all the 
> > > problems at
> > > once. 
> > 
> > really ?
> > just hypothetically, and playing the devil's advocate here.
> > what would happen if one problem or set of problems is solved at a time ?
> 
> The first requirement of everything following is to define a new
> structure/API for holding the subtitles within the AVFrame (which has to
> live in lavu and not lavc like current API). So you have to address all
> the current limitations in that new API first, unless you're ready to
> change that new API 10x in the near future. 

Yes, I realized this implication when I wrote my mail, and while it gave me
pause, I am not sure this is a problem. This would not necessarily be public
API for user applications to use, but rather a step toward implementing the
new final API, done that way to simplify things.

This, like other comments, is really just a suggestion to simplify
the work. If it doesn't simplify anything, it makes no sense, of course.



> And even if you keep most of
> the current design, you still have to at least come up with ways to remove
> all the current hacks that would go away while moving to the new design.
> 
> > 
> > Maybe the thinking should not be "what are all the things that might need
> > to be considered"
> > but rather "what is the minimum set of things that need to be considered"
> > to make the first step towards a better API/first git push
> > 
> > 
> > 
> > > Since the core change (subtitles in AVFrame) requires the introduction of
> > > a new subtitles structure and API, it also involve addressing the 
> > > shortcomings
> > > of the original API (or maybe we could tolerate a new API that actually 
> > > looks
> > > like the old?). So even if we ignore the subtitle-in-avframe thing, we 
> > > don't
> > > have a clear answer for a sane API that handles everything. Here is a
> > > non-exhaustive list of stuff that we have to take into account while 
> > > thinking
> > > about that:
> > > 
> > > - text subtitles with and without markup
> > 
> > > - sparsity, overlapping
> > 
> > heartbeat frames would eliminate sparsity
> 
> Yes, and like many aspects of this refactor, we need to come up with and
> formalize a convention. Of course I can make a suggestion, but there are
> many other cases and exceptions.
> 
> > what happens if you forbid overlapping ?
> 
> You can't, it's too common. The classic "Hello, hello" was already
> mentioned, but I could also mention subtitles used to "legend" the
> environment (you know, like, signposts and stuff) in addition to
> dialogues.

I do think I misunderstand something here,
because if we have a video with a signpost shown from 0:00 to 1:00
and another shown from 0:30 to 1:30, then the subtitles translating
or commenting on them would overlap.
And the video frames showing these signposts also overlap, ehm, I mean
they don't overlap. That is what I do not understand.
Video frames don't do that and it's fine;
and then there's audio:
someone playing a note on the trumpet and another a note on the piano,
again we have 2 AVFrames overlapping, I mean not overlapping.
So why subtitles?

And one could even argue why it would make sense for audio to be
overlapping, with this information about instruments, and it is in
MIDI and MOD files. And a filter writing out notes for the instruments
would benefit from this, and similarly a MIDI encoder.



[...]
> > > - bitmap subtitles and their potential colorspaces (each rectangle as an
> > >   AVFrame is way overkill but technically that's exactly what it is)
> > 
> > then a AVFrame needs to represent a collection of rectangles.
> > Its either 1 or N for the design i think.
> > Our current subtitle structures already have a similar design so this
> > wouldnt be really different.
> 
> Yeah, the new API prototype ended up being:
> 
> +#define AV_NUM_DATA_POINTERS 8
> +
> +/**
> + * This structure describes decoded subtitle rectangle
> + */
> +typedef struct AVFrameSubtitleRectangle {
> +int x, y;
> +int w, h;
> +
> +/* image data for bitmap subtitles, in AVFrame.format (AVPixelFormat) */
> +uint8_t *data[AV_NUM_DATA_POINTERS];
> +int linesize[AV_NUM_DATA_POINTERS];
> +
> +/* decoded text for text subtitles, in ASS */
> +char *text;
> +

> +int flags;

Is 32-bit flags enough?
Just bringing this up because an int64 is less ugly than a flags2.

Thanks

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Elect your leaders based on what they did after the last election, not
based on what they say before an election.




Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-25 Thread Michael Niedermayer
On Mon, Feb 24, 2020 at 08:48:23PM +0100, Nicolas George wrote:
> Michael Niedermayer (12020-02-24):
> > > No, they can't: being the same subtitle or not is part of the semantic.
> 
> > Does anyone else share this opinion?
> > 
> > I am asking because we need to resolve such differences of opinion to
> > move forward.
> > There's no way to design an API if such relatively fundamental things
> > have disagreements on them.
> 
> It's not a matter of opinion, it is actually quite obvious:
> 
> # 1
> # 00:00:10,000 --> 00:00:11,000
> # Hello.
> # 
> # 2
> # 00:00:11,000 --> 00:00:12,000
> # Hello.
> 
> … means that two people said Hello in quick succession while:
> 
> # 1
> # 00:00:10,000 --> 00:00:12,000
> # Hello.
> 
> … means that Hello was said only once, slowly.

Yes,
but the overlap is neither solving that nor sufficient,
nor does this work very well.

It doesn't work very well because when someone speaks really fast,
you display the text only for a short time and no one can read it.
That fails to achieve the main goal of a subtitle: allowing
someone to read it.
One can go on to list cases where this is ambiguous or not
enough.

But I think a better summary is that there are 2 really separate things:
1. The actual content
2. The way it is presented (loud, fast, fearful, whatever)

I think we should not, in our internal representation, use the duration
of display as the duration of the sound.
In particular, formats with strict random access points will always start
all subtitles at such a point anew, otherwise one could not seek to
that point; and that will produce subtitles where interpreting the
duration as a sound duration would not work well.


> 
> And it has practical consequences: Clément suggested a voice synthesis
> filter, that would change its output.
> 
> Some subtitles have overlap all over the place. I am thinking in
> particular of some animé fansub, with on-screen signs and onomatopoeia
> translated and cultural notes, all along with dialogue. De-overlapping
> would increase their size considerably, and cause actual dialogue to be
> split, which results in the problems I have explained above.

I think you are mixing things up.

Subtitle size matters in the muxed format; this is about the
representation in AVFrames. This would make no difference to what is
stored; in fact, the encoder searching for things it can merge, instead
of not doing that, could lead to smaller files.

Also for the subtitle rectangles we could even use reference counting
and reuse them as long as they did not change.
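
A sketch of what that reuse could look like with the existing AVBufferRef
refcounting; the caching scheme itself is hypothetical, just to illustrate
the idea:

#include <string.h>
#include <stdint.h>
#include <libavutil/buffer.h>

/* Hypothetical cache of one rendered rectangle: hand out new references
 * while the pixel content is unchanged, reallocate only when it changes. */
static AVBufferRef *reuse_or_replace_rect(AVBufferRef **cached,
                                          const uint8_t *pixels, int size)
{
    if (*cached && (*cached)->size == size &&
        !memcmp((*cached)->data, pixels, size))
        return av_buffer_ref(*cached);              /* unchanged: share it */

    av_buffer_unref(cached);                        /* changed: replace it */
    *cached = av_buffer_alloc(size);
    if (!*cached)
        return NULL;
    memcpy((*cached)->data, pixels, size);
    return av_buffer_ref(*cached);
}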



> 
> But I don't know why you are so focussed on this. Overlapping is not a

It's not a focus at all, just something I noticed when reading this
which IMHO could be avoided to maybe make the API simpler.

It's a suggestion, nothing else.


> problem, it's just something to keep in mind while designing the API,
> like the fact that bitmap subtitles have several rectangles. It's
> actually quite easy to handle.

I am not sure arbitrary overlapping AVFrames will not cause problems;
it's very different from the existing semantics.

Thanks

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Asymptotically faster algorithms should always be preferred if you have
asymptotical amounts of data



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-25 Thread Clément Bœsch
On Sun, Feb 23, 2020 at 09:59:59PM +0100, Michael Niedermayer wrote:
[...]
> > The subtitles refactor requires to see the big picture and all the problems 
> > at
> > once. 
> 
> really ?
> just hypothetically, and playing the devil's advocate here.
> what would happen if one problem or set of problems is solved at a time ?

The first requirement of everything following is to define a new
structure/API for holding the subtitles within the AVFrame (which has to
live in lavu and not lavc like current API). So you have to address all
the current limitations in that new API first, unless you're ready to
change that new API 10x in the near future. And even if you keep most of
the current design, you still have to at least come up with ways to remove
all the current hacks that would go away while moving to the new design.

> 
> Maybe the thinking should not be "what are all the things that might need
> to be considered"
> but rather "what is the minimum set of things that need to be considered"
> to make the first step towards a better API/first git push
> 
> 
> 
> > Since the core change (subtitles in AVFrame) requires the introduction of
> > a new subtitles structure and API, it also involve addressing the 
> > shortcomings
> > of the original API (or maybe we could tolerate a new API that actually 
> > looks
> > like the old?). So even if we ignore the subtitle-in-avframe thing, we don't
> > have a clear answer for a sane API that handles everything. Here is a
> > non-exhaustive list of stuff that we have to take into account while 
> > thinking
> > about that:
> > 
> > - text subtitles with and without markup
> 
> > - sparsity, overlapping
> 
> heartbeat frames would eliminate sparsity

Yes, and like many aspects of this refactor, we need to come up with and
formalize a convention. Of course I can make a suggestion, but there are
many other cases and exceptions.

> what happens if you forbid overlapping ?

You can't, it's too common. The classic "Hello, hello" was already
mentioned, but I could also mention subtitles used to "legend" the
environment (you know, like, signposts and stuff) in addition to
dialogues.

> > - different semantics for duration (duration available, no known duration,
> >   event-based clearing, ...)
> 
> This one is annoying (though similar to video where its just not so much an
> issue as video is generally regularly spaced)
> But does this actually impact the API in any way ?
> decoder -> avframe -> encoder

AVFrames always go through lavfi. I don't remember the details (it's been
about 2 years now), but the lack of semantics for duration was causing some
issues within lavfi.

> (if some information is missing some look 
> ahead/buffer/filter/converter/whatever may be needed but the API wouldnt 
> change i think and that should work with any API)
> 
> 
> > - closed captions / teletext
> 
> What happens if you ignore these at this stage?

I can't ignore them; the way we change the subtitle interface must address
their special behaviours. But I'd say my main issue with closed captions /
teletext was the same as with DVB subtitles: we don't have many tests.

Typically, the DVB subtitles hack we have had in ffmpeg.c, like, forever: I'm
dropping it, but I can't test it properly, since DVBsub coverage is almost
non-existent: http://coverage.ffmpeg.org/ (look for dvbsub and dvbsubdec)

Actually, if someone does improve subtitle coverage for formats I'm not
comfortable with (specifically cc and dvb), that would actually help A
LOT. At least I wouldn't have to speculate on how it should/could/would
behave.

BTW, if there is someone available to explain DVB subtitles to me, I'm all
ears. I understand that they have no duration, but random (partial?)
subtitle resets?

> > - bitmap subtitles and their potential colorspaces (each rectangle as an
> >   AVFrame is way overkill but technically that's exactly what it is)
> 
> then a AVFrame needs to represent a collection of rectangles.
> Its either 1 or N for the design i think.
> Our current subtitle structures already have a similar design so this
> wouldnt be really different.

Yeah, the new API prototype ended up being:

+#define AV_NUM_DATA_POINTERS 8
+
+/**
+ * This structure describes decoded subtitle rectangle
+ */
+typedef struct AVFrameSubtitleRectangle {
+int x, y;
+int w, h;
+
+/* image data for bitmap subtitles, in AVFrame.format (AVPixelFormat) */
+uint8_t *data[AV_NUM_DATA_POINTERS];
+int linesize[AV_NUM_DATA_POINTERS];
+
+/* decoded text for text subtitles, in ASS */
+char *text;
+
+int flags;
+} AVFrameSubtitleRectangle;
+
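
Purely to illustrate the colorspace question that follows, here is a
hypothetical sketch of how one PAL8 rectangle might be filled in with the
proposed struct, assuming it keeps the convention of the current
AVSubtitleRect (indexed pixels in data[0], the 256-entry RGBA palette in
data[1]); this is not settled API:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical: describe one PAL8 rectangle with the proposed struct,
 * mirroring the current AVSubtitleRect layout (indices in data[0],
 * 256 * 4 bytes of RGBA palette in data[1]). */
static void fill_pal8_rect(AVFrameSubtitleRectangle *r,
                           uint8_t *pixels, int stride, uint8_t *palette,
                           int x, int y, int w, int h)
{
    r->x = x;  r->y = y;
    r->w = w;  r->h = h;
    r->data[0]     = pixels;    /* 8-bit palette indices, one per pixel */
    r->linesize[0] = stride;
    r->data[1]     = palette;   /* AVPALETTE-style table, 1024 bytes    */
    r->linesize[1] = 0;
    r->text        = NULL;      /* bitmap rectangle, no text form       */
    r->flags       = 0;
}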

But then, do we use a fixed pixel format for all codecs? Is this really
enough when some subtitles are actually a bunch of image files inside a
"modern standard container"? (before you ask, yeah I saw that a few years
back in some broadcasting garbage thing).

What about PAL8 subtitles? We currently need to convert them into codecs,
and re-analyze them again during encoding to reconstitute the palette,

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Marton Balint



On Mon, 24 Feb 2020, Nicolas George wrote:


> Michael Niedermayer (12020-02-24):
> > > No, they can't: being the same subtitle or not is part of the semantic.
> 
> > Does anyone else share this opinion?
> > 
> > I am asking because we need to resolve such differences of opinion to
> > move forward.
> > There's no way to design an API if such relatively fundamental things
> > have disagreements on them.
> 
> It's not a matter of opinion, it is actually quite obvious:
> 
> # 1
> # 00:00:10,000 --> 00:00:11,000
> # Hello.
> #
> # 2
> # 00:00:11,000 --> 00:00:12,000
> # Hello.
> 
> … means that two people said Hello in quick succession while:


That is not the real issue (although the techniques normally used to
signal different speakers are coloring, alignment, or simply putting both
sentences in a single subtitle).

The real issue is that for animations like \move{} the rendering cannot be
split. So it seems that if we want to support animations, hard splitting is
not an option.




> Some subtitles have overlap all over the place. I am thinking in
> particular of some animé fansub, with on-screen signs and onomatopoeia
> translated and cultural notes, all along with dialogue. De-overlapping
> would increase their size considerably, and cause actual dialogue to be
> split, which results in the problems I have explained above.
> 
> But I don't know why you are so focussed on this. Overlapping is not a
> problem, it's just something to keep in mind while designing the API,
> like the fact that bitmap subtitles have several rectangles. It's
> actually quite easy to handle.


My problem with overlapping is that in order to render subtitles
at a given time you need more than one AVSubtitle. That is a
fundamental difference from audio or video AVFrames, where a single object
fully represents the media at a given time.

Maybe we should deal with collections of AVSubtitles which cover time
durations; this way you don't need to hard-merge the subtitle rectangles
but can still reference objects which fully describe the subtitles for a
time period.


Regards,
Marton

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Matt Zagrabelny
On Sat, Feb 22, 2020 at 2:47 AM Clément Bœsch  wrote:
>
> On Fri, Feb 14, 2020 at 03:26:30AM +, Soft Works wrote:
> > Hi,
> >
>
> Hi,
>
> > I am looking for some guidance regarding future plans about processing 
> > subtitle streams in filter graphs.
> >
> > Please correct me where I'm wrong - this is the situation as I've 
> > understood it so far:
> [...]
>
> Your analysis was pretty much on point. I've been away from FFmpeg development
> from around the time of that patchset. While I can't recommend a course of
> action, I can elaborate on what was blocking and missing. Beware that this is
> reconstructed from my unreliable memory and I may forget important points.
>
> Last state can be found at 
> https://github.com/ubitux/FFmpeg/tree/subtitles-new-api
>
> The last WIP commit includes a TODO.txt which I'm sharing here for the
> record:
>
> > TODO:
> > - heartbeat mechanism
> > - drop sub2video (needs heartbeat)
> > - properly deal with -ss and -t (need strim filter?)
> > - sub_start_display/sub_end_display needs to be honored
> > - find a test case for dvbsub as it's likely broken (ffmpeg.c hack is
> >   removed and should be replaced by a EAGAIN logic in lavc/utils.c)
> > - make it pass FATE:
> >   * fix cc/subcc
> >   * broke various other stuff
> > - Changelog/APIchanges
> > - proper API doxy
> > - update lavfi/subtitles?
> > - merge [avs]null filters
> > - filters doc
> > - avcodec_default_get_buffer2?
> > - how to transfer subtitle header down to libavfilter?
>
> The biggest TODO entry right now is the heartbeat mechanism which is required
> for being able to drop the sub2video hack. You've seen that discussed in the
> thread.
>
> Thing is, that branch is already a relatively invasive and may include
> controversial API change. Typically, the way I decided to handle subtitle
> text/rectangle allocation within AVSubtitle is "different" but I couldn't come
> up with a better solution. Basically, we have to fit them in AVFrame for a
> clean integration within FFmpeg ecosystem, but subtitles are not simple 
> buffers
> like audio and video can be: they have to be backed by more complex dynamic
> structures.
>
> Also unfortunately, addressing the problem through an iterative process is
> extremely difficult in the current situation due to historical technical debt.
> You may have noticed that the decode and encode subtitles API are a few
> generations behind the audio and video ones. The reason it wasn't modernized
> earlier was because it was already a pita in the past.
>
> The subtitles refactor requires to see the big picture and all the problems at
> once. Since the core change (subtitles in AVFrame) requires the introduction 
> of
> a new subtitles structure and API, it also involve addressing the shortcomings
> of the original API (or maybe we could tolerate a new API that actually looks
> like the old?). So even if we ignore the subtitle-in-avframe thing, we don't
> have a clear answer for a sane API that handles everything. Here is a
> non-exhaustive list of stuff that we have to take into account while thinking
> about that:
>
> - text subtitles with and without markup
> - sparsity, overlapping
> - different semantics for duration (duration available, no known duration,
>   event-based clearing, ...)
> - closed captions / teletext
> - bitmap subtitles and their potential colorspaces (each rectangle as an
>   AVFrame is way overkill but technically that's exactly what it is)
>
> This should give you a hint on why the task has been quite overwhelming.
> Subtitles were the reason I initially came into the multimedia world, and they
> might have played a role in why I distanced myself from it.
>
> That said, I'd say the main reason it was put in stand by was because I was
> kind of alone in that struggle. While I got a lot of support from people, I
> think the main help I needed would have been formalizing the API we wanted.
> Like, code and API gymnastic is not that much of a problem, but deciding on
> what to do, and what path we take to reach that point is/was the core issue.
>
> And to be honest, I never really made up my mind on abandoning the work. So 
> I'm
> calling it again: if someone is interested in addressing the problem once and
> for all, I can spend some time rebasing the current state and clarifying what 
> has
> been said in this mail in the details so we can work together on an API
> contract we want between FFmpeg and our users. When we have this, I think
> progress can be made again.


Nicolas and Clément, et al.,

Is financial support at all blocking progress in subtitle filters?

I'm afraid I don't have much ffmpeg coding expertise to contribute,
but I am interested in seeing better subtitle support in ffmpeg and am
looking to help where I can.

Let us know if there is anything else that non-coders could help with.

Thanks,

-m

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Nicolas George
Michael Niedermayer (12020-02-24):
> > No, they can't: being the same subtitle or not is part of the semantic.

> Does anyone else share this opinion?
> 
> I am asking because we need to resolve such differences of opinion to
> move forward.
> There's no way to design an API if such relatively fundamental things
> have disagreements on them.

It's not a matter of opinion, it is actually quite obvious:

# 1
# 00:00:10,000 --> 00:00:11,000
# Hello.
# 
# 2
# 00:00:11,000 --> 00:00:12,000
# Hello.

… means that two people said Hello in quick succession while:

# 1
# 00:00:10,000 --> 00:00:12,000
# Hello.

… means that Hello was said only once, slowly.

And it has practical consequences: Clément suggested a voice synthesis
filter, that would change its output.

Some subtitles have overlap all over the place. I am thinking in
particular of some animé fansub, with on-screen signs and onomatopoeia
translated and cultural notes, all along with dialogue. De-overlapping
would increase their size considerably, and cause actual dialogue to be
split, which results in the problems I have explained above.

But I don't know why you are so focussed on this. Overlapping is not a
problem, it's just something to keep in mind while designing the API,
like the fact that bitmap subtitles have several rectangles. It's
actually quite easy to handle.

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Michael Niedermayer
On Mon, Feb 24, 2020 at 12:17:37AM +0100, Nicolas George wrote:
> Marton Balint (12020-02-23):
> > Two overlapping subtitles can be broken into 3 non-overlapping subtitles,
> 
> No, they can't: being the same subtitle or not is part of the semantic.

Does anyone else share this opinion?

I am asking because we need to resolve such differences of opinion to
move forward.
There's no way to design an API if such relatively fundamental things
have disagreements on them.

Thanks

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Any man who breaks a law that conscience tells him is unjust and willingly 
accepts the penalty by staying in jail in order to arouse the conscience of 
the community on the injustice of the law is at that moment expressing the 
very highest respect for law. - Martin Luther King Jr



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-23 Thread Nicolas George
Marton Balint (12020-02-23):
> Two overlapping subtitles can be broken into 3 non-overlapping subtitles,

No, they can't: being the same subtitle or not is part of the semantic.

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-23 Thread Marton Balint



On Sun, 23 Feb 2020, Nicolas George wrote:


> Michael Niedermayer (12020-02-23):
> > really ?
> > just hypothetically, and playing the devil's advocate here.
> > what would happen if one problem or set of problems is solved at a time ?
> 
> Odds are a design decision made early would prove insufficient to solve
> a later problem.
> 
> > what happens if you forbid overlapping ?
> 
> We break perfectly valid subtitles. Unlike B-frames, overlapping
> subtitles is part of the semantic.


Two overlapping subtitles can be broken into 3 non-overlapping subtitles, 
the middle one containing both the rectangles of the two. The same can be 
done for more overlaps in a similar fashion.


Regards,
Marton

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-23 Thread Nicolas George
Michael Niedermayer (12020-02-23):
> really ?
> just hypothetically, and playing the devil's advocate here.
> what would happen if one problem or set of problems is solved at a time ?

Odds are a design decision made early would prove insufficient to solve
a later problem.

> what happens if you forbid overlapping ?

We break perfectly valid subtitles. Unlike B-frames, overlapping
subtitles is part of the semantic.

> > - different semantics for duration (duration available, no known duration,
> >   event-based clearing, ...)
> This one is annoying (though similar to video where its just not so much an
> issue as video is generally regularly spaced)
> But does this actually impact the API in any way ?
> decoder -> avframe -> encoder

decoder → avframe → filters → avframe → encoder

Many filters need timing to work.

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-23 Thread Michael Niedermayer
On Sat, Feb 22, 2020 at 09:47:20AM +0100, Clément Bœsch wrote:
> On Fri, Feb 14, 2020 at 03:26:30AM +, Soft Works wrote:
> > Hi,
> > 
> 
> Hi,
> 
> > I am looking for some guidance regarding future plans about processing 
> > subtitle streams in filter graphs.
> > 
> > Please correct me where I'm wrong - this is the situation as I've 
> > understood it so far:
> [...]
> 
> Your analysis was pretty much on point. I've been away from FFmpeg development
> from around the time of that patchset. While I can't recommend a course of
> action, I can elaborate on what was blocking and missing. Beware that this is
> reconstructed from my unreliable memory and I may forget important points.
> 
> Last state can be found at 
> https://github.com/ubitux/FFmpeg/tree/subtitles-new-api
> 
> The last WIP commit includes a TODO.txt which I'm sharing here for the
> record:
> 
> > TODO:
> > - heartbeat mechanism
> > - drop sub2video (needs heartbeat)
> > - properly deal with -ss and -t (need strim filter?)
> > - sub_start_display/sub_end_display needs to be honored
> > - find a test case for dvbsub as it's likely broken (ffmpeg.c hack is
> >   removed and should be replaced by a EAGAIN logic in lavc/utils.c)
> > - make it pass FATE:
> >   * fix cc/subcc
> >   * broke various other stuff
> > - Changelog/APIchanges
> > - proper API doxy
> > - update lavfi/subtitles?
> > - merge [avs]null filters
> > - filters doc
> > - avcodec_default_get_buffer2?
> > - how to transfer subtitle header down to libavfilter?
> 
> The biggest TODO entry right now is the heartbeat mechanism which is required
> for being able to drop the sub2video hack. You've seen that discussed in the
> thread.
> 
> Thing is, that branch is already relatively invasive and may include
> controversial API changes. Typically, the way I decided to handle subtitle
> text/rectangle allocation within AVSubtitle is "different" but I couldn't come
> up with a better solution. Basically, we have to fit them in AVFrame for a
> clean integration within FFmpeg ecosystem, but subtitles are not simple 
> buffers
> like audio and video can be: they have to be backed by more complex dynamic
> structures.
> 
> Also unfortunately, addressing the problem through an iterative process is
> extremely difficult in the current situation due to historical technical debt.
> You may have noticed that the decode and encode subtitles API are a few
> generations behind the audio and video ones. The reason it wasn't modernized
> earlier was because it was already a pita in the past.
> 

> The subtitles refactor requires seeing the big picture and all the problems at
> once.

really?
just hypothetically, and playing the devil's advocate here.
what would happen if one problem or set of problems were solved at a time?

Maybe the thinking should not be "what are all the things that might need
to be considered"
but rather "what is the minimum set of things that need to be considered"
to take the first step towards a better API / first git push.



> Since the core change (subtitles in AVFrame) requires the introduction of
> a new subtitles structure and API, it also involves addressing the shortcomings
> of the original API (or maybe we could tolerate a new API that actually looks
> like the old?). So even if we ignore the subtitle-in-avframe thing, we don't
> have a clear answer for a sane API that handles everything. Here is a
> non-exhaustive list of stuff that we have to take into account while thinking
> about that:
> 
> - text subtitles with and without markup

> - sparsity, overlapping

Heartbeat frames would eliminate sparsity.
What happens if you forbid overlapping?
I mean, if I just imagine for a moment that a video stream carries some data,
say a 256-color palette in 4 parts, and these get updated in a way that
overlaps in time like you describe for subtitles: this isn't a problem for
video, we just have the whole palette anywhere it is needed. And similarly,
a B-frame updates parts of the pixels of the previous and next frames, yet
our AVFrame contains whole bitmaps.

At the stage of encoding such a subtitle AVFrame back to "binary" data, the
encoder would have to merge identical subtitle parts, if that is supported
(see the sketch below).
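
A hedged sketch of that merge step, with purely illustrative structs (nothing
here is FFmpeg API): consecutive "whole state" frames whose rectangles are
bit-identical collapse back into one event with an extended duration.

    /* Hypothetical sketch only.  'blob' stands for the serialized rectangles
     * of one whole-state subtitle frame; identical consecutive states are
     * merged by extending the previous event instead of emitting a new one. */
    #include <stdint.h>
    #include <string.h>

    typedef struct StateEvent {
        const uint8_t *blob;   /* serialized rectangles of one frame */
        size_t         size;
        int64_t        start, end;
    } StateEvent;

    static int same_state(const StateEvent *a, const StateEvent *b)
    {
        return a->size == b->size && !memcmp(a->blob, b->blob, a->size);
    }

    /* Returns 1 if 'cur' was merged into 'prev' (duration extended),
     * 0 if the encoder should emit 'prev' and start a new event from 'cur'. */
    static int merge_or_flush(StateEvent *prev, const StateEvent *cur)
    {
        if (same_state(prev, cur)) {
            prev->end = cur->end;
            return 1;
        }
        return 0;
    }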


> - different semantics for duration (duration available, no known duration,
>   event-based clearing, ...)

This one is annoying (though similar to video, where it's just not so much an
issue as video is generally regularly spaced).
But does this actually impact the API in any way?
decoder -> avframe -> encoder
(If some information is missing, some look-ahead/buffer/filter/converter/
whatever may be needed, as sketched below, but the API wouldn't change, I
think, and that should work with any API.)
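
For what it's worth, a minimal sketch of such a look-ahead step, with
hypothetical structs (not FFmpeg API): events whose duration is unknown are
held back by one event and closed when the next one arrives, which matches
the "event-based clearing" case.

    /* Hypothetical sketch only: delay each event by one and derive a missing
     * duration from the pts of the following event (or clear request). */
    #include <stdint.h>

    typedef struct TimedEvent {
        int64_t pts;
        int64_t duration;   /* < 0 means "unknown, lasts until the next event" */
    } TimedEvent;

    typedef struct DurationFixer {
        TimedEvent pending;
        int        have_pending;
    } DurationFixer;

    /* Feed one event; returns 1 and fills *out when a fully timed event
     * is ready to be passed on. */
    static int fix_duration(DurationFixer *f, const TimedEvent *in, TimedEvent *out)
    {
        int ready = 0;
        if (f->have_pending) {
            *out = f->pending;
            if (out->duration < 0)
                out->duration = in->pts - out->pts;
            ready = 1;
        }
        f->pending      = *in;
        f->have_pending = 1;
        return ready;
    }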


> - closed captions / teletext

What happens if you ignore these at this stage?


> - bitmap subtitles and their potential colorspaces (each rectangle as an
>   AVFrame is way overkill but technically that's exactly what it is)

Then an AVFrame needs to represent a collection of rectangles.
It's either 1 or N for the 

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-22 Thread Clément Bœsch
On Sat, Feb 22, 2020 at 12:51:13PM +, Soft Works wrote:
[...]
> > > Would there be any filters at all that would operate on subtitles?
> > >  (other than rendering to a video surface)
> > 
> > Sure. A few ideas that come to my mind:
> > 
> > - rasterization (text subtitles to bitmap subtitles)
> > - ocr (bitmap subtitles to text)
> > - all kinds of text processing (possibly piped to some external tools)
> > - censoring bad words
> > - inserting "watermark" text
> > - timing processing: trimming, shift, scaling of time
> > - lorem ipsum or similar "source" filter (equivalent to our video test patterns)
> >   for testing purposes
> > - audio to text for auto captioning
> > - text to audio for audio synthesis
> > - concat multiple subtitle files (think of multiple episodes merged into
> >   one, and you want to do the same for subtitles)
> > - merge/overlap multiple subtitle tracks (think of multi-language
> >   subtitles)
> 
> I knew there would be reasonable ones. Maybe except the text-to-speech
> idea. I suppose you need to be a masochist to watch a full movie hearing
> synthesized speech ;-)

As a creator, you may not want to use your own voice (because of your
pronunciation, because you're mute, because you care about your anonymity,
etc.), and thus you would write subtitles (for accessibility) and use a
synth for the audio track. We already have something similar btw, see the
flite filter.

> 
> > [...]
> > > But when the primary purpose of having subtitles in filtergraphs would
> > > be to have them eventually converted to bitmaps, and given that it's
> > > really so extremely difficult and controversial to implement this,
> > > plus that there seems to be only moderate support for this from other
> > > developers- could it possibly be an easier and more pragmatic solution
> > > to convert the subtitles to images simply before they are entering the
> > filtergraph?
> > 
> > That means it's likely to be only available within the command line tool and
> > not the API. Unless you design a separated "libavsubtitle" (discussed in the
> > past several times), but you'll need at some point many interfaces with the
> > usual demuxing-decoding-encoding-muxing pipeline.
> 
> You're right, I was focused on the CLI, and first of all on the huge
> discrepancy in the required amount of work.
> 
> While the predominant model of ffmpeg development (patch-trial-and-error
> until it gets accepted) seems to have proven quite successful, I'm
> wondering whether in this case it wouldn't be a better strategy to agree
> on a plan before anybody spends more time on this?

Yes, that was my point earlier.

-- 
Clément B.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-22 Thread Soft Works
> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> 
> On Sat, Feb 22, 2020 at 10:59:46AM +, Soft Works wrote:
> [...]
> > Reading through the discussion around your patch was discouraging,
> > even destructive in some parts. I understand why you felt alone with
> > that and I wonder why nobody else chimed in. I mean, sometimes there
> > are extensive discussions about some of the least important video
> > formats in the world, while subtitles are a pretty fundamental thing...
> 
> I think the main reason is that subtitles are a different beast in the
> multimedia world, and most people intuitively understand this is not fun
> work at all. It's much more comfortable to work with audio and video since
> the framework design revolves around them.
> 
> > On the other hand - playing devil's advocate: Why even handle a
> > subtitle media type in filtergraphs?
> >
> 
> It's not only about lavfi: the whole framework works with AVFrame. If you
> use something else, you'll have to duplicate most of the APIs to handle
> subtitles as well. In the past, audio was actually separated, and unification
> with video was a relief. Going another path for subtitles is going to be
> extremely invasive, verbose, and annoying to maintain on API change.
> 
> > Would there be any filters at all that would operate on subtitles?
> >  (other than rendering to a video surface)
> 
> Sure. A few ideas that come to my mind:
> 
> - rasterization (text subtitles to bitmap subtitles)
> - ocr (bitmap subtitles to text)
> - all kinds of text processing (possibly piped to some external tools)
> - censoring bad words
> - inserting "watermark" text
> - timing processing: trimming, shift, scaling of time
> - lorem ipsum or similar "source" filter (equivalent to our video test patterns)
>   for testing purposes
> - audio to text for auto captioning
> - text to audio for audio synthesis
> - concat multiple subtitle files (think of multiple episodes merged into
>   one, and you want to do the same for subtitles)
> - merge/overlap multiple subtitle tracks (think of multi-language
>   subtitles)

I knew there would be reasonable ones. Maybe except the text-to-speech
idea. I suppose you need to be a masochist to watch a full movie hearing
synthesized speech ;-)

> [...]
> > But when the primary purpose of having subtitles in filtergraphs would
> > be to have them eventually converted to bitmaps, and given that it's
> > really so extremely difficult and controversial to implement this,
> > plus that there seems to be only moderate support for this from other
> > developers- could it possibly be an easier and more pragmatic solution
> > to convert the subtitles to images simply before they are entering the
> filtergraph?
> 
> That means it's likely to be only available within the command line tool and
> not the API. Unless you design a separated "libavsubtitle" (discussed in the
> past several times), but you'll need at some point many interfaces with the
> usual demuxing-decoding-encoding-muxing pipeline.

You're right, I was focused on the CLI, and first of all on the huge
discrepancy in the required amount of work.

While the predominant model of ffmpeg development (patch-trial-and-error
until it gets accepted) seems to have proven quite successful, I'm
wondering whether in this case it wouldn't be a better strategy to agree
on a plan before anybody spends more time on this?

softworkz





Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-22 Thread Clément Bœsch
On Sat, Feb 22, 2020 at 10:59:46AM +, Soft Works wrote:
[...]
> Reading through the discussion around your patch was discouraging, 
> even destructive in some parts. I understand why you felt alone with that
> and I wonder why nobody else chimed in. I mean, sometimes there are 
> extensive discussions about some of the least important video formats in 
> the world, while subtitles are a pretty fundamental thing...

I think the main reason is that subtitles are a different beast in the
multimedia world, and most people intuitively understand this is not fun
work at all. It's much more comfortable to work with audio and video since
the framework design revolves around them.

> On the other hand - playing devil's advocate: Why even handle a subtitle 
> media type in filtergraphs?
> 

It's not only about lavfi: the whole framework works with AVFrame. If you
use something else, you'll have to duplicate most of the APIs to handle
subtitles as well. In the past, audio was actually separated, and
unification with video was a relief. Going another path for subtitles is
going to be extremely invasive, verbose, and annoying to maintain on API
change.

> Would there be any filters at all that would operate on subtitles?
>  (other than rendering to a video surface)

Sure. A few ideas that come to my mind:

- rasterization (text subtitles to bitmap subtitles)
- ocr (bitmap subtitles to text)
- all kinds of text processing (possibly piped to some external tools)
- censoring bad words
- inserting "watermark" text
- timing processing: trimming, shift, scaling of time
- lorem ipsum or similar "source" filter (equivalent to our video test patterns)
  for testing purposes
- audio to text for auto captioning
- text to audio for audio synthesis
- concat multiple subtitle files (think of multiple episodes merged into
  one, and you want to do the same for subtitles)
- merge/overlap multiple subtitle tracks (think of multi-language
  subtitles)

[...]
> But when the primary purpose of having subtitles in filtergraphs would be 
> to have them eventually converted to bitmaps, and given that it's really so 
> extremely difficult and controversial to implement this, plus that there
> seems to be only moderate support for this from other developers- 
> could it possibly be an easier and more pragmatic solution to convert
> the subtitles to images simply before they are entering the filtergraph?

That means it's likely to be only available within the command line tool
and not the API. Unless you design a separate "libavsubtitle" (discussed
in the past several times), but you'll need at some point many interfaces
with the usual demuxing-decoding-encoding-muxing pipeline.

Regards,

-- 
Clément B.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-22 Thread Soft Works
> -Original Message-
> From: Clément Bœsch 
> Sent: Saturday, February 22, 2020 9:47 AM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Cc: Soft Works 
> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> 
> On Fri, Feb 14, 2020 at 03:26:30AM +, Soft Works wrote:
> > Hi,
> >
> 
> Hi,
> 
> > I am looking for some guidance regarding future plans about processing
> subtitle streams in filter graphs.
> >
> > Please correct me where I'm wrong - this is the situation as I've understood
> it so far:
> [...]
> 
> Your analysis was pretty much on point. I've been away from FFmpeg
> development from around the time of that patchset. While I can't
> recommend a course of action, I can elaborate on what was blocking and
> missing. Beware that this is reconstructed from my unreliable memory and I
> may forget important points.
> 
> Last state can be found at
> https://github.com/ubitux/FFmpeg/tree/subtitles-new-api
> 
> The last WIP commit includes a TODO.txt which I'm sharing here for the
> record:
> 
> > TODO:
> > - heartbeat mechanism
> > - drop sub2video (needs heartbeat)
> > - properly deal with -ss and -t (need strim filter?)
> > - sub_start_display/sub_end_display needs to be honored
> > - find a test case for dvbsub as it's likely broken (ffmpeg.c hack is
> >   removed and should be replaced by a EAGAIN logic in lavc/utils.c)
> > - make it pass FATE:
> >   * fix cc/subcc
> >   * broke various other stuff
> > - Changelog/APIchanges
> > - proper API doxy
> > - update lavfi/subtitles?
> > - merge [avs]null filters
> > - filters doc
> > - avcodec_default_get_buffer2?
> > - how to transfer subtitle header down to libavfilter?
> 
> The biggest TODO entry right now is the heartbeat mechanism which is
> required for being able to drop the sub2video hack. You've seen that
> discussed in the thread.
> 
> Thing is, that branch is already relatively invasive and may include
> controversial API changes. Typically, the way I decided to handle subtitle
> text/rectangle allocation within AVSubtitle is "different" but I couldn't come
> up with a better solution. Basically, we have to fit them in AVFrame for a
> clean integration within FFmpeg ecosystem, but subtitles are not simple
> buffers like audio and video can be: they have to be backed by more
> complex dynamic structures.
> 
> Also unfortunately, addressing the problem through an iterative process is
> extremely difficult in the current situation due to historical technical debt.
> You may have noticed that the decode and encode subtitles API are a few
> generations behind the audio and video ones. The reason it wasn't
> modernized earlier was because it was already a pita in the past.
> 
> The subtitles refactor requires seeing the big picture and all the problems at
> once. Since the core change (subtitles in AVFrame) requires the introduction
> of a new subtitles structure and API, it also involves addressing the
> shortcomings of the original API (or maybe we could tolerate a new API that
> actually looks like the old?). So even if we ignore the subtitle-in-avframe
> thing, we don't have a clear answer for a sane API that handles everything.
> Here is a non-exhaustive list of stuff that we have to take into account while
> thinking about that:
> 
> - text subtitles with and without markup
> - sparsity, overlapping
> - different semantics for duration (duration available, no known duration,
>   event-based clearing, ...)
> - closed captions / teletext
> - bitmap subtitles and their potential colorspaces (each rectangle as an
>   AVFrame is way overkill but technically that's exactly what it is)
> 
> This should give you a hint on why the task has been quite overwhelming.
> Subtitles were the reason I initially came into the multimedia world, and they
> might have played a role in why I distanced myself from it.
> 
> That said, I'd say the main reason it was put on standby was because I was
> kind of alone in that struggle. While I got a lot of support from people, I
> think the main help I needed would have been formalizing the API we wanted.
> Like, code and API gymnastics is not that much of a problem, but deciding on
> what to do, and what path we take to reach that point is/was the core issue.
> 
> And to be honest, I never really made up my mind on abandoning the work.
> So I'm calling it again: if someone is interested in addressing the problem
> once and for all, I can spend some time rebasing the current state and
> clarifying what has been said in this mail in the details so we can work
> together on an API contract we want bet

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-22 Thread Clément Bœsch
On Fri, Feb 14, 2020 at 03:26:30AM +, Soft Works wrote:
> Hi,
> 

Hi,

> I am looking for some guidance regarding future plans about processing 
> subtitle streams in filter graphs.
> 
> Please correct me where I'm wrong - this is the situation as I've understood 
> it so far:
[...]

Your analysis was pretty much on point. I've been away from FFmpeg development
from around the time of that patchset. While I can't recommend a course of
action, I can elaborate on what was blocking and missing. Beware that this is
reconstructed from my unreliable memory and I may forget important points.

Last state can be found at 
https://github.com/ubitux/FFmpeg/tree/subtitles-new-api

The last WIP commit includes a TODO.txt which I'm sharing here for the
record:

> TODO:
> - heartbeat mechanism
> - drop sub2video (needs heartbeat)
> - properly deal with -ss and -t (need strim filter?)
> - sub_start_display/sub_end_display needs to be honored
> - find a test case for dvbsub as it's likely broken (ffmpeg.c hack is
>   removed and should be replaced by a EAGAIN logic in lavc/utils.c)
> - make it pass FATE:
>   * fix cc/subcc
>   * broke various other stuff
> - Changelog/APIchanges
> - proper API doxy
> - update lavfi/subtitles?
> - merge [avs]null filters
> - filters doc
> - avcodec_default_get_buffer2?
> - how to transfer subtitle header down to libavfilter?

The biggest TODO entry right now is the heartbeat mechanism which is required
for being able to drop the sub2video hack. You've seen that discussed in the
thread.

Thing is, that branch is already relatively invasive and may include
controversial API changes. Typically, the way I decided to handle subtitle
text/rectangle allocation within AVSubtitle is "different" but I couldn't come
up with a better solution. Basically, we have to fit them in AVFrame for a
clean integration within the FFmpeg ecosystem, but subtitles are not simple buffers
like audio and video can be: they have to be backed by more complex dynamic
structures.

Also unfortunately, addressing the problem through an iterative process is
extremely difficult in the current situation due to historical technical debt.
You may have noticed that the decode and encode subtitles API are a few
generations behind the audio and video ones. The reason it wasn't modernized
earlier was because it was already a pita in the past.
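
To make the gap concrete, a compressed illustration (the entry points below
are the real public ones, but all setup and error handling is omitted):
subtitle decoding still fills an AVSubtitle, while audio/video go through
send/receive and yield AVFrames, which is what lavfi consumes.

    /* Illustration of the API generation gap; error handling simplified. */
    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    static void decode_one(AVCodecContext *sub_ctx, AVCodecContext *vid_ctx,
                           AVPacket *pkt, AVFrame *frame)
    {
        if (sub_ctx) {                      /* subtitle path: no AVFrame */
            AVSubtitle sub;
            int got = 0;
            if (avcodec_decode_subtitle2(sub_ctx, &sub, &got, pkt) >= 0 && got)
                avsubtitle_free(&sub);      /* ...use sub.rects before freeing */
        }
        if (vid_ctx) {                      /* audio/video path: AVFrame based */
            if (avcodec_send_packet(vid_ctx, pkt) >= 0)
                while (avcodec_receive_frame(vid_ctx, frame) >= 0)
                    av_frame_unref(frame);  /* ...would feed a filtergraph here */
        }
    }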

The subtitles refactor requires seeing the big picture and all the problems at
once. Since the core change (subtitles in AVFrame) requires the introduction of
a new subtitles structure and API, it also involves addressing the shortcomings
of the original API (or maybe we could tolerate a new API that actually looks
like the old?). So even if we ignore the subtitle-in-avframe thing, we don't
have a clear answer for a sane API that handles everything. Here is a
non-exhaustive list of stuff that we have to take into account while thinking
about that:

- text subtitles with and without markup
- sparsity, overlapping
- different semantics for duration (duration available, no known duration,
  event-based clearing, ...)
- closed captions / teletext
- bitmap subtitles and their potential colorspaces (each rectangle as an
  AVFrame is way overkill but technically that's exactly what it is)
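
To make the last point more tangible, here is one hypothetical shape such an
AVFrame-carried payload could take; this is explicitly not the design of the
subtitles-new-api branch and not an FFmpeg API, just an illustration of why a
flat data/linesize buffer is not enough:

    /* Hypothetical illustration only (mirrors what AVSubtitle carries today):
     * a refcounted payload an AVFrame could reference for subtitles. */
    #include <stdint.h>

    enum HypSubType { HYP_SUB_BITMAP, HYP_SUB_TEXT, HYP_SUB_ASS };

    typedef struct HypSubRect {
        int x, y, w, h;           /* placement of a bitmap rectangle       */
        uint8_t *data[4];         /* paletted or RGBA planes               */
        int linesize[4];
        int nb_colors;
        char *text;               /* plain text, when type is HYP_SUB_TEXT */
        char *ass;                /* ASS markup, when type is HYP_SUB_ASS  */
        enum HypSubType type;
    } HypSubRect;

    typedef struct HypSubPayload {
        int64_t start_display_time;  /* relative to the frame pts             */
        int64_t end_display_time;    /* may be unknown (event-based clearing) */
        unsigned nb_rects;           /* N rectangles per event, not just 1    */
        HypSubRect **rects;
    } HypSubPayload;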

This should give you a hint on why the task has been quite overwhelming.
Subtitles were the reason I initially came into the multimedia world, and they
might have played a role in why I distanced myself from it.

That said, I'd say the main reason it was put on standby was because I was
kind of alone in that struggle. While I got a lot of support from people, I
think the main help I needed would have been formalizing the API we wanted.
Like, code and API gymnastics is not that much of a problem, but deciding on
what to do, and what path we take to reach that point is/was the core issue.

And to be honest, I never really made up my mind on abandoning the work. So I'm
calling it again: if someone is interested in addressing the problem once and
for all, I can spend some time rebasing the current state and clarifying in
detail what has been said in this mail, so we can work together on an API
contract we want between FFmpeg and our users. When we have this, I think
progress can be made again.

Regards,

-- 
Clément B.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-17 Thread Soft Works
> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Nicolas George
> Sent: Monday, February 17, 2020 8:37 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> 
> Soft Works (12020-02-14):
> > I am looking for some guidance regarding future plans about processing
> > subtitle streams in filter graphs.
> >
> > Please correct me where I'm wrong - this is the situation as I've
> > understood it so far:
> >
> > - Currently, ffmpeg filter graphs do not support processing subtitle
> > streams
> > - This is why filters like 'subtitles' and 'ass' need to open, read
> > and parse the media file a second time in parallel instead of just
> > taking the subtitle events from ffmpeg's demuxing
> > - For graphical subtitles, there exists the so-called 'sub2video'
> > workaround which is injecting the graphical subtitle overlay
> > images into the video filtergraph by declaring them as video
> > - The 'sub2video' was meant to exist until filtering would support
> > subtitle streams
> > - A while ago, Clement Boesch submitted a patch for adding subtitle
> > streams to filtergraph processing
> >
> (https://patchwork.ffmpeg.org/project/ffmpeg/patch/20161102220934.2601
> > 0-...@pkh.me/)
> > - I read through all the discussion about it, but nothing happened
> > afterwards and I couldn't find any indication about why it didn't get
> > merged
> 
> This looks accurate.
> 
> > I'm asking because I'm intending to implement a subtitle filter that
> > operates on in-stream data rather than a separate input and that will
> > render text on transparent frames for later overlay.
> >
> > The possible options that I have identified so far for creating that
> > kind of "subtitle rendering filter" would be:
> >
> > - Create a video source filter and implement some hack to get the
> > subtitle data from the decoder to that filter - or...
> > - Jump on the sub2video implementation and extend it to render overlay
> > images  in case of text subtitles - or...
> > - check out the situation with regards to adding subtitle filter
> > support in ffmpeg and ask about plans for this
> >
> > That's where I stand right now. Does  it even make sense, the way I
> > summarized it?`
> 
> I don't speak for the project as a whole, but I am quite confident that "some
> hack" would be accepted if and only if it is generic enough to be useful for
> many people, not just your use case. And tweaks to sub2video are "some
> hack" too.
> 
> If you want to implement real support for subtitles, that would be greatly
> appreciated, but I have to warn you it is a very difficult and intensive task.
> Otherwise it would have already been done. I can summarize where the
> difficulty resides:
> 
> - lavfi works with AVFrame, therefore subtitles need to be encoded into
>   AVFrame. This is the work of Clément that you found; he only started
>   on the rest.
> 
> - lavfi is not ready to have a third media type: there are parts that
>   strongly assume audio or video, and parts that merge the audio and
>   video case but cannot handle a different type.
> 
> - The utility filters that only work on metadata, like setpts, need to
>   be ported to the new media type. We don't want too much code
>   duplication, a more elegant approach needs to be found. Possibly make
>   the media type part of a first round of format negotiation.
> 
> - We need to decide the format negotiation for filters. Do we
>   automatically insert text→bitmap renderer or does it need to be
>   explicit?
> 
> - Subtitles streams are sparse, lavfi is not designed for that, and it
>   is a problem when subtitles are interleaved with audio or video. The
>   best way to solve it is dummy heartbeat frames. They can be
>   automatically generated by the framework with an API to connect two
>   buffersrc: when a video frame is added on one, generate a heartbeat
>   subtitle frame on the other if necessary.
> 
> Feel free to ask details on any of these points: I do not have the courage to
> start working on it for now, but I have thought about it.

Hi Nicolas,

Thanks a lot for your comments. To be honest, I pretty much feel the same
about it. It was painful enough to read through the discussion about Clement's
patch, and I'm neither ready for this nor do I have the time.

My idea would be rather to take a look at the latest work from Clement and
see whether it is in a state that is close to what I need - from your lines it
rather seems that it is not.

I guess my best bet will be

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-17 Thread Nicolas George
Soft Works (12020-02-14):
> I am looking for some guidance regarding future plans about processing
> subtitle streams in filter graphs.
> 
> Please correct me where I'm wrong - this is the situation as I've
> understood it so far:
> 
> - Currently, ffmpeg filter graphs do not support processing subtitle
> streams
> - This is why filters like 'subtitles' and 'ass' need to open, read
> and parse the media file a second time in parallel instead of just
> taking the subtitle events from ffmpeg's demuxing
> - For graphical subtitles, there exists the so-called 'sub2video'
> workaround which is injecting the graphical subtitle overlay
> images into the video filtergraph by declaring them as video
> - The 'sub2video' was meant to exist until filtering would support
> subtitle streams
> - A while ago, Clement Boesch submitted a patch for adding subtitle
> streams to filtergraph processing
> (https://patchwork.ffmpeg.org/project/ffmpeg/patch/20161102220934.26010-...@pkh.me/)
> - I read through all the discussion about it, but nothing happened
> afterwards and I couldn't find any indication about why it didn't get
> merged

This looks accurate.

> I'm asking because I'm intending to implement a subtitle filter that
> operates on in-stream data rather than a separate input and that will
> render text on transparent frames for later overlay.
> 
> The possible options that I have identified so far for creating that
> kind of "subtitle rendering filter" would be:
> 
> - Create a video source filter and implement some hack to get the
> subtitle data from the decoder to that filter - or...
> - Jump on the sub2video implementation and extend it to render overlay
> images  in case of text subtitles - or...
> - check out the situation with regards to adding subtitle filter
> support in ffmpeg and ask about plans for this
> 
> That's where I stand right now. Does it even make sense, the way I
> summarized it?

I don't speak for the project as a whole, but I am quite confident that
"some hack" would be accepted if and only if it is generic enough to be
useful for many people, not just your use case. And tweaks to sub2video
are "some hack" too.

If you want to implement real support for subtitles, that would be
greatly appreciated, but I have to warn you it is a very difficult and
intensive task. Otherwise it would have already been done. I can
summarize where the difficulty resides:

- lavfi works with AVFrame, therefore subtitles need to be encoded into
  AVFrame. This is the work of Clément that you found; he only started
  on the rest.

- lavfi is not ready to have a third media type: there are parts that
  strongly assume audio or video, and parts that merge the audio and
  video case but cannot handle a different type.

- The utility filters that only work on metadata, like setpts, need to
  be ported to the new media type. We don't want too much code
  duplication, a more elegant approach needs to be found. Possibly make
  the media type part of a first round of format negotiation.

- We need to decide the format negotiation for filters. Do we
  automatically insert text→bitmap renderer or does it need to be
  explicit?

- Subtitles streams are sparse, lavfi is not designed for that, and it
  is a problem when subtitles are interleaved with audio or video. The
  best way to solve it is dummy heartbeat frames. They can be
  automatically generated by the framework with an API to connect two
  buffersrc: when a video frame is added on one, generate a heartbeat
  subtitle frame on the other if necessary.
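
A hedged sketch of that last point, with every name hypothetical (nothing
below exists in lavfi): the paired subtitle source re-emits its last state
whenever the video source advances past it.

    /* Hypothetical sketch only: heartbeat generation for a sparse
     * subtitle buffersrc linked to a video buffersrc. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct HypSubSrc {
        int64_t     last_pts;    /* pts of the last subtitle frame pushed    */
        const void *last_state;  /* last real subtitle frame, NULL for empty */
    } HypSubSrc;

    /* Stand-in for "inject this subtitle state at pts into the filtergraph". */
    static int hyp_push_sub(HypSubSrc *src, const void *state, int64_t pts)
    {
        src->last_pts = pts;
        (void)state;
        return 0;
    }

    /* Called by the framework for each video frame added to the linked
     * video buffersrc. */
    static int hyp_heartbeat(HypSubSrc *src, int64_t video_pts)
    {
        if (video_pts <= src->last_pts)
            return 0;            /* a subtitle frame already covers this pts */
        return hyp_push_sub(src, src->last_state, video_pts);
    }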

Feel free to ask details on any of these points: I do not have the
courage to start working on it for now, but I have thought about it.

Regards,

-- 
  Nicolas George



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-16 Thread Soft Works
> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Paul B Mahol
> Sent: Sunday, February 16, 2020 4:12 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> 
> On 2/16/20, Soft Works  wrote:
> >> -Original Message-
> >> From: ffmpeg-devel  On Behalf Of
> >> Paul B Mahol
> >> Sent: Sunday, February 16, 2020 3:58 PM
> >> To: FFmpeg development discussions and patches  >> de...@ffmpeg.org>
> >> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> >>
> >> On 2/16/20, Paul B Mahol  wrote:
> >> > On 2/16/20, Soft Works  wrote:
> >> >>> -Original Message-
> >> >>> From: ffmpeg-devel  On
> Behalf
> >> Of
> >> >>> Paul B Mahol
> >> >>> Sent: Sunday, February 16, 2020 11:33 AM
> >> >>> To: FFmpeg development discussions and patches  >> >>> de...@ffmpeg.org>
> >> >>> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> >> >>>
> >> >>> On 2/14/20, Soft Works  wrote:
> >> >>> > Hi,
> >> >>> >
> >> >>> > I am looking for some guidance regarding future plans about
> >> >>> > processing subtitle streams in filter graphs.
> >> >>> >
> >> >>
> >> >> [...]
> >> >>
> >> >>> >
> >> >>> > That's where I stand right now. Does  it even make sense, the
> >> >>> > way I summarized it?`
> >> >>>
> >> >>> Very nice summarization.
> >> >>> Main developer working on this is very busy.
> >> >>> Your best bet is to start from his branch and continue work on it.
> >> >>
> >> >> Thanks Paul, main developer is Clement I suppose?
> >> >>
> >> >> I could not find any related branches on the main git, did you
> >> >> simply mean the patch on Patchwork that I referenced or does there
> >> >> exist some later work?
> >> >
> >> > Yes, on github.
> >>
> >> Specifically Clement fork of ffmpeg.
> >
> > Where do you see that?
> >
> > Only thing I found is https://github.com/cboesch-gpsw but there are no
> > repositories.
> > Unfortunately the forks list is limited, so I couldn’t see if there
> > might be one under a different name.
> >
> 
> Under nick ubitux.

Now I got it! 

Thanks for your patience ;-)

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-16 Thread Paul B Mahol
On 2/16/20, Soft Works  wrote:
>> -Original Message-
>> From: ffmpeg-devel  On Behalf Of
>> Paul B Mahol
>> Sent: Sunday, February 16, 2020 3:58 PM
>> To: FFmpeg development discussions and patches > de...@ffmpeg.org>
>> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
>>
>> On 2/16/20, Paul B Mahol  wrote:
>> > On 2/16/20, Soft Works  wrote:
>> >>> -Original Message-
>> >>> From: ffmpeg-devel  On Behalf
>> Of
>> >>> Paul B Mahol
>> >>> Sent: Sunday, February 16, 2020 11:33 AM
>> >>> To: FFmpeg development discussions and patches > >>> de...@ffmpeg.org>
>> >>> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
>> >>>
>> >>> On 2/14/20, Soft Works  wrote:
>> >>> > Hi,
>> >>> >
>> >>> > I am looking for some guidance regarding future plans about
>> >>> > processing subtitle streams in filter graphs.
>> >>> >
>> >>
>> >> [...]
>> >>
>> >>> >
>> >>> > That's where I stand right now. Does  it even make sense, the way
>> >>> > I summarized it?`
>> >>>
>> >>> Very nice summarization.
>> >>> Main developer working on this is very busy.
>> >>> Your best bet is to start from his branch and continue work on it.
>> >>
>> >> Thanks Paul, main developer is Clement I suppose?
>> >>
>> >> I could not find any related branches on the main git, did you simply
>> >> mean the patch on Patchwork that I referenced or does there exist
>> >> some later work?
>> >
>> > Yes, on github.
>>
>> Specifically Clement fork of ffmpeg.
>
> Where do you see that?
>
> Only thing I found is https://github.com/cboesch-gpsw but there are no
> repositories.
> Unfortunately the forks list is limited, so I couldn’t see if there might be
> one under a different name.
>

Under nick ubitux.



Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-16 Thread Soft Works
> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Paul B Mahol
> Sent: Sunday, February 16, 2020 3:58 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> 
> On 2/16/20, Paul B Mahol  wrote:
> > On 2/16/20, Soft Works  wrote:
> >>> -Original Message-
> >>> From: ffmpeg-devel  On Behalf
> Of
> >>> Paul B Mahol
> >>> Sent: Sunday, February 16, 2020 11:33 AM
> >>> To: FFmpeg development discussions and patches  >>> de...@ffmpeg.org>
> >>> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> >>>
> >>> On 2/14/20, Soft Works  wrote:
> >>> > Hi,
> >>> >
> >>> > I am looking for some guidance regarding future plans about
> >>> > processing subtitle streams in filter graphs.
> >>> >
> >>
> >> [...]
> >>
> >>> >
> >>> > That's where I stand right now. Does  it even make sense, the way
> >>> > I summarized it?`
> >>>
> >>> Very nice summarization.
> >>> Main developer working on this is very busy.
> >>> Your best bet is to start from his branch and continue work on it.
> >>
> >> Thanks Paul, main developer is Clement I suppose?
> >>
> >> I could not find any related branches on the main git, did you simply
> >> mean the patch on Patchwork that I referenced or does there exist
> >> some later work?
> >
> > Yes, on github.
> 
> Specifically Clement fork of ffmpeg.

Where do you see that?

Only thing I found is https://github.com/cboesch-gpsw but there are no 
repositories.
Unfortunately the forks list is limited, so I couldn’t see if there might be 
one under a different name.


Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-16 Thread Paul B Mahol
On 2/16/20, Paul B Mahol  wrote:
> On 2/16/20, Soft Works  wrote:
>>> -Original Message-
>>> From: ffmpeg-devel  On Behalf Of
>>> Paul B Mahol
>>> Sent: Sunday, February 16, 2020 11:33 AM
>>> To: FFmpeg development discussions and patches >> de...@ffmpeg.org>
>>> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
>>>
>>> On 2/14/20, Soft Works  wrote:
>>> > Hi,
>>> >
>>> > I am looking for some guidance regarding future plans about processing
>>> > subtitle streams in filter graphs.
>>> >
>>
>> [...]
>>
>>> >
>>> > That's where I stand right now. Does  it even make sense, the way I
>>> > summarized it?`
>>>
>>> Very nice summarization.
>>> Main developer working on this is very busy.
>>> Your best bet is to start from his branch and continue work on it.
>>
>> Thanks Paul, main developer is Clement I suppose?
>>
>> I could not find any related branches on the main git, did you simply
>> mean
>> the patch on Patchwork that I referenced or does there exist some later
>> work?
>
> Yes, on github.

Specifically Clement fork of ffmpeg.

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-16 Thread Paul B Mahol
On 2/16/20, Soft Works  wrote:
>> -Original Message-
>> From: ffmpeg-devel  On Behalf Of
>> Paul B Mahol
>> Sent: Sunday, February 16, 2020 11:33 AM
>> To: FFmpeg development discussions and patches > de...@ffmpeg.org>
>> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
>>
>> On 2/14/20, Soft Works  wrote:
>> > Hi,
>> >
>> > I am looking for some guidance regarding future plans about processing
>> > subtitle streams in filter graphs.
>> >
>
> [...]
>
>> >
>> > That's where I stand right now. Does  it even make sense, the way I
>> > summarized it?`
>>
>> Very nice summarization.
>> Main developer working on this is very busy.
>> Your best bet is to start from his branch and continue work on it.
>
> Thanks Paul, main developer is Clement I suppose?
>
> I could not find any related branches on the main git, did you simply mean
> the patch on Patchwork that I referenced or does there exist some later
> work?

Yes, on github.

>
> Thanks again,
> softworkz

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-16 Thread Soft Works
> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Paul B Mahol
> Sent: Sunday, February 16, 2020 11:33 AM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] Status and Plans for Subtitle Filters
> 
> On 2/14/20, Soft Works  wrote:
> > Hi,
> >
> > I am looking for some guidance regarding future plans about processing
> > subtitle streams in filter graphs.
> >

[...]

> >
> > That's where I stand right now. Does  it even make sense, the way I
> > summarized it?`
> 
> Very nice summarization.
> Main developer working on this is very busy.
> Your best bet is to start from his branch and continue work on it.

Thanks Paul, main developer is Clement I suppose?

I could not find any related branches on the main git, did you simply mean 
the patch on Patchwork that I referenced or does there exist some later work?

Thanks again,
softworkz

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-16 Thread Paul B Mahol
On 2/14/20, Soft Works  wrote:
> Hi,
>
> I am looking for some guidance regarding future plans about processing
> subtitle streams in filter graphs.
>
> Please correct me where I'm wrong - this is the situation as I've understood
> it so far:
>
> - Currently, ffmpeg filter graphs do not support processing subtitle streams
> - This is why filters like 'subtitles' and 'ass' need to open, read and
> parse the media file a second time in parallel instead of just taking the
> subtitle events from ffmpeg's demuxing
> - For graphical subtitles, there exists the so-called 'sub2video' workaround
> which is injecting the graphical subtitle overlay images into the video
> filtergraph by declaring them as video
> - The 'sub2video' was meant to exist until filtering would support subtitle
> streams
> - A while ago, Clement Boesch submitted a patch for adding subtitle streams
> to filtergraph processing
> (https://patchwork.ffmpeg.org/project/ffmpeg/patch/20161102220934.26010-...@pkh.me/)
> - I read through all the discussion about it, but nothing happened
> afterwards and I couldn't find any indication about why it didn't get merged
>
>
> I'm asking because I'm intending to implement a subtitle filter that
> operates on in-stream data rather than a separate input and that will render
> text on transparent frames for later overlay.
>
> The possible options that I have identified so far for creating that kind of
> "subtitle rendering filter" would be:
>
> - Create a video source filter and implement some hack to get the subtitle
> data from the decoder to that filter - or...
> - Jump on the sub2video implementation and extend it to render overlay
> images  in case of text subtitles - or...
> - check out the situation with regards to adding subtitle filter support in
> ffmpeg and ask about plans for this
>
> That's where I stand right now. Does it even make sense, the way I
> summarized it?

Very nice summarization.
Main developer working on this is very busy.
Your best bet is to start from his branch and continue work on it.

>
> Thank you very much,
>
> softworkz
>
>
>
> .