On Sunday, 10 June 2012 at 16:24 +0200, Anton Khirnov wrote:
> On Sun, 10 Jun 2012 15:08:35 +0200, David Girault <[email protected]> wrote:
> > 
> > Yes, it was very similar (igsmenudec.c was started as a fork of
> > pgssubdec.c), and thus my first implementation used the subtitle API.
> > But I switched to another media type for two main reasons:
> > - I don't want to change the subtitle API,
> 
> That's not a very convincing reason without more information =p
> Why not?

First, the subtitle API may already be used by a lot of players, so I'm
cautious about touching it.

Second, I would prefer to design a new, more generic overlay API, and
then migrate the subtitle API to that generic framework. That makes it
easier to move the existing subtitle decoders to the new overlay API.

> 
> > - avplay can only display one subtitle at a time, and this is very
> > limiting for displaying Blu-ray.
> 
> I don't think the design of our public API should be adapted to what
> avplay (itself primarily a testing toy) can or cannot do currently.
> 

I agree with that, but avplay is a very good usage example, so it
should also demonstrate the use of interactive overlays.

> > 
> > I think subtitles and menus are two different types of a generic
> > overlay object. Both are displayed over the video, but they differ
> > in how they are used.
> > 
> > Regarding Blu-ray playback (my first priority), the player must deal
> > with at least four planes (which may be displayed at the same time):
> > - video plane,
> > - PIP video plane,
> > - Subtitle (or more generic 'presentation') plane,
> > - Interactive graphics plane.
> > 
> > With this in mind, we could either extend the subtitle API or create
> > a new 'parent' API covering both subtitles and menus. I have no
> > preference; I don't know the Libav internals that well. I started
> > this project as a quick hack to suit my needs.
> > 
> 
> I wouldn't want two pairs of almost identical structs in our public API
> without a good reason. So far I don't see such a reason, so I'd prefer
> to just extend the subtitle API. We can add AVMEDIA_TYPE_OVERLAY as an
> alias for AVMEDIA_TYPE_SUBTITLE and maybe even rename/alias the subtitle
> structs/functions to use the overlay name.
> 

I see only one reason to have both APIs: compatibility while all the
existing subtitle codecs are migrated to the new overlay API. After
that, the old subtitle API should either be reworked to use the new
overlay API internally, or simply be deprecated.

I think we can add AVMEDIA_TYPE_OVERLAY now, and later change
AVMEDIA_TYPE_SUBTITLE to be an alias for AVMEDIA_TYPE_OVERLAY.

Additionally, I think a field or flag (or something like that) must be
added to the overlay stream so that applications can detect whether it
is an interactive menu overlay, a subtitle overlay, or some other
overlay type. That allows proper selection of which stream to play (an
interactive stream may always be selected according to language, but
not always displayed, like a popup menu). avprobe could use the same
flag to display better information about the overlay stream.


> > > 
> > > From my quick glance through the patchset, it seems that the main
> > > (only?) difference from subs is that those menus are interactive.
> > 
> > Yes, mainly. A subtitle is displayed for a short time; a menu can
> > stay up indefinitely. Subtitles have no interactivity; menus do.
> > Interactivity means:
> > - redisplaying the menu with other pictures in response to key
> > events,
> > - returning the HDMV commands of the currently selected button when
> > it is 'activated'.
> > 
> 
> Libav wasn't exactly designed with user interactivity in mind, so any
> attempts at it will most probably be a bit hacky. But still we should
> try to reduce the hackiness level as much as possible.
> 

I agree; I have authoring tools in mind that would use a standard,
non-hacky Libav API ;)

A well-designed API for user interactivity would allow many more
features in the applications using it (on-screen display, channel
selection and more). Currently, the only limiting factor is the way
received overlays are displayed: GL textures seem much better suited
than what avplay currently uses (an SDL YUV overlay).

> > > 
> > > If I'm reading it correctly, you're implementing interactivity by the
> > > player constructing its own packets. The first point here is that this
> > > is also public API and thus needs to be documented. The second is that
> > > this approach looks quite hacky to me. Using packet side data might be
> > > better.
> > > 
> > 
> > It's a hack, yes. But it works well without touching the current
> > avplay architecture too much. I treat the 'display segment' frame in
> > the stream as the first interactive command executed; it results in
> > the first AVOverlay being displayed. Injecting another 'display
> > segment' with the key embedded (or attached as extra data) is the
> > simplest way to do the job for now. But we could instead call the
> > decode API with no frame at all (the player may not know how to
> > create such a frame), only with the 'interactive key' data set.
> > 
> 
> Moving this into packet side data shouldn't be hard and will IMO be
> cleaner.
> 

I agree too, but I don't know how to do it yet. I need to look deeper
into the Libav structures...

David


_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel