On Sun, 10 Jun 2012 15:08:35 +0200, David Girault <[email protected]> wrote:
> 
> Yes, it was very similar (igsmenudec.c was started as a fork of
> pgssubdec.c), and thus my first implementation used the subtitle API.
> But I switched to another media type for two main reasons:
> - I didn't want to change the subtitle API,

That's not a very convincing reason without more information =p
Why not?

> - avplay can only display one subtitle at a time, and this is very
> limiting for displaying Blu-ray.

I don't think the design of our public API should be adapted to what
avplay (itself primarily a testing toy) can or cannot do currently.

> 
> I think subtitles and menus are different types of a generic overlay
> object. Both are displayed over the video, but they differ in the way
> they are used.
> 
> Regarding Blu-ray playback (my first priority), the player must deal
> with at least 4 planes (that may be displayed at the same time):
> - video plane,
> - PIP video plane,
> - Subtitle (or more generic 'presentation') plane,
> - Interactive graphics plane.
> 
> Having this in mind, we may extend the subtitle API or create a new
> 'parent' API for both subtitles and menus. I have no preference; I
> don't know the Libav internals that well. I started this project as a
> quick hack to suit my needs.
> 

I wouldn't want two pairs of almost identical structs in our public API
without a good reason. So far I don't see such a reason, so I'd prefer
to just extend the subtitle API. We can add AVMEDIA_TYPE_OVERLAY as an
alias for AVMEDIA_TYPE_SUBTITLE and maybe even rename/alias the subtitle
structs/functions to use the overlay name.

> > 
> > From my quick glance through the patchset, it seems that the main
> > (only?) difference from subs is that those menus are interactive.
> 
> Yes, mainly. A subtitle may be displayed for a short time; a menu can
> stay up for an infinite time. Subtitles don't have any interactivity,
> menus do. Interactivity means:
> - redisplaying the menu with other pictures in response to key events,
> - returning the HDMV commands for the currently selected button when
> it is 'activated'.
> 

Libav wasn't exactly designed with user interactivity in mind, so any
attempts at it will most probably be a bit hacky. But we should still
try to reduce the hackiness level as much as possible.

> > 
> > If I'm reading it correctly, you're implementing interactivity by the
> > player constructing its own packets. The first point here is that this
> > is also public API and thus needs to be documented. The second is that
> > this approach looks quite hacky to me. Using packet side data might be
> > better.
> > 
> 
> It's a hack, yes. But it works well without touching the current
> avplay architecture too much. I see the 'display segment' frame in
> the stream as the first interactive command executed; it results in
> the first AVOverlay being displayed. Injecting another 'display
> segment' (with embedded or extra 'key' data attached) is the simplest
> way to do the job for now. But we could also call the decode API with
> no frame at all (the player may not know how to create such a frame),
> only with the 'interactive key' data set.
> 

Moving this into packet side data shouldn't be hard and will IMO be
cleaner.

-- 
Anton Khirnov
_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel
