Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-04-20 Thread Teemu Tuominen

On 03/25/2011 05:23 PM, Edward Hervey wrote:

   Yes, with playbin2/decodebin2 you can change the order of
codecs/plugins being used. By default it will use the one with the
highest rank matching the stream to decode, but you can connect to the
'autoplug-factories' signal and reorder those plugins to have it use the
software one or the hardware one.

Thanks!

I was planning to familiarize myself with this as soon as I found some 
time. At first sight, the element ranking mechanism in GStreamer seems 
to need some generalization. The auto-select feature may work within 
some bins, but AFAIK the bins don't share any common API for this. A 
static rank bundled with elements that are distributed in a 
decentralized way does not seem effective.


I asked questions about this a few days ago on #gstreamer and had to 
step back and think about which real use cases could benefit from 
centralized or even dynamic ranking of elements. I'm not very 
experienced with GStreamer in general, but I have designed and 
implemented multimedia middleware for Windows and addressed the same 
common problems there. So GStreamer might surprise me eventually, but I 
want to propose that a centralized ranking mechanism be considered and 
perhaps dealt with as part of the UMMS effort.


* The number of MeeGo adaptations supporting varying devices in public 
OSS will grow. They are all heading toward using as much of the generic 
core as they can. These devices will need their own specific GStreamer 
pipelines, elements, or just sub-components such as OMX. At the moment 
there are areas where adaptations need to collaborate on 
element-selection logic above GStreamer, and the hacks will eventually 
flood in if not refactored. The core packaging can carry only a single 
ranking for elements. Instead of patching the framework(s) themselves, 
adaptations could just provide a configuration. So far simple, but with 
the requirement to be expandable the configuration becomes complex and 
requires a suitable language.
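To make the idea concrete, here is a minimal sketch of what an adaptation-provided ranking configuration and its loader could look like. The file format, element names, and rank values are all invented for illustration; GStreamer has no such configuration file today.

```python
import xml.etree.ElementTree as ET

# Hypothetical per-adaptation ranking override file; every element
# name, attribute, and rank value here is invented for illustration.
CONFIG = """
<ranking adaptation="exampleboard">
  <element name="omxh264dec" rank="300"/>
  <element name="ffdec_h264" rank="200"/>
  <element name="theoradec" rank="0"/>
</ranking>
"""

def load_rank_overrides(xml_text):
    """Return {element_name: rank} parsed from the adaptation config."""
    root = ET.fromstring(xml_text)
    return {e.get("name"): int(e.get("rank")) for e in root.iter("element")}

overrides = load_rank_overrides(CONFIG)
print(overrides["omxh264dec"])  # 300
```

A rank of 0 would mean "never auto-plug this element"; the core would apply these overrides on top of the static ranks shipped with the elements.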


* The request is not limited to codecs; it covers sources and filters 
as well. One cannot tell the type of content before it gets examined. 
Depending on the content and transport, the detection might get heavy 
or even impossible. In these cases a predefined pipeline for a 
particular use case is an acceptable solution, but it should be 
possible to fix such a pipeline in place without implementing new 
branches of MeeGo applications, or even worse, of the framework.


* In terms of dynamic configuration for element ranking, the feature 
would above all be a great development tool. But I can also see 
scenarios that could benefit from it in terms of framework 
generalization - though I'm afraid it would mean creating just another 
bin.


I would propose an API that keeps the complexity out of the exported 
functions and gathers it into a configuration held in a tree hierarchy 
of use cases. Regardless of the API actually defined, I wonder whether 
the ranking configuration described above could be solved within UMMS 
in general and become part of the configuration that is needed in other 
senses as well.


So, let's consider a sequence where a custom GStreamer pipeline gets 
created. The source is somewhat specific, so let's skip it. In the next 
step the MIME type should be known, and the content may contain 
uncompressed streams, so the list of suitable elements is long. The 
decision that currently categorizes this list in GStreamer is the 
selection of a suitable GstBin for the use case. Instead, the 
application, the Qt backend, or a UMMS wrapper could pick up an 
interface and fetch information about the elements that can be 
connected. This information can be based on an external configuration 
containing categories that describe the complete tree hierarchy of use 
cases; XML, for example, is suitable and expandable for this. As the 
code continues adding new elements to the pipeline, the available 
configuration for the next step narrows the hierarchy and provides the 
available options at each step. This approach allows backend developers 
to think 80/20: concentrate only on the most common cases and 
transparently provide support for the options shown in the UI.
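As a sketch of such a use-case tree (the schema, element names, and MIME strings are invented; this is not an existing UMMS or GStreamer format), a backend could query the options for each construction step like this:

```python
import xml.etree.ElementTree as ET

# Invented sketch of a use-case tree: each step narrows the options
# available for the next stage of pipeline construction.
USECASES = """
<usecase name="playback">
  <step name="demux">
    <option element="qtdemux" mime="video/quicktime"/>
    <option element="tsdemux" mime="video/mpegts"/>
  </step>
  <step name="decode">
    <option element="omxh264dec" mime="video/x-h264"/>
    <option element="ffdec_h264" mime="video/x-h264"/>
  </step>
</usecase>
"""

def options_for(tree, step_name, mime):
    """List the candidate elements a backend may plug for one step."""
    step = tree.find(f"step[@name='{step_name}']")
    return [o.get("element") for o in step.findall("option")
            if o.get("mime") == mime]

tree = ET.fromstring(USECASES)
print(options_for(tree, "decode", "video/x-h264"))
```

The backend walks one step at a time; whatever the configuration lists for a step is exactly what the settings UI can transparently offer.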


Let's consider one of today's use cases with the camera application. 
The application is not interested in the source and simply initiates 
the API, which falls back to its defaults via the gstreamer/camerabin 
logic. The viewfinder gets created, and the formats for the image and 
video to be captured are visible in a settings dialog. The user browses 
the web, downloads and installs new support for MeeGo to encode some 
specific format, then goes to the camera application's settings dialog 
again and sees that the format is now available to use. This is the 
remaining 20% that users want to control, while the device adaptation 
provides the 80% so that users can remain ignorant of the details. 
However, elements with high ranks can get installed in the same way, or 
as part of other applications, without notice, and an application that 
relies on the automation becomes unreliable. An external configuration 
could be used to blacklist the elements that aren
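The blacklist idea (cut off above) could be sketched roughly like this; the factory names and data shapes are invented stand-ins for what an autoplugger would see:

```python
# Sketch: apply an adaptation-provided blacklist when auto-plugging,
# so that a newly installed high-rank element cannot silently take
# over a working use case. All names here are illustrative only.
BLACKLIST = {"buggyvendordec"}

def filter_factories(factories, blacklist=BLACKLIST):
    """Drop blacklisted factories, keeping the rank order of the rest."""
    return [f for f in factories if f[0] not in blacklist]

# (name, rank) pairs as an autoplugger might see them, highest first
candidates = [("buggyvendordec", 512), ("omxh264dec", 300),
              ("ffdec_h264", 200)]
print(filter_factories(candidates)[0])  # ('omxh264dec', 300)
```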

Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-04-11 Thread Brendan Le Foll
On 11 April 2011 10:21, Stefan Kost  wrote:
> My suggestion is to avoid this scenario totally. Having different
> pipelines depending on systems state is increasing the complexity of
> your system a lot. Apply the KISS pattern. Multimedia is complex enough
> due to the sheer amount of formats, codecs, profiles etc.

At the moment you can't use GStreamer to play Blu-ray. Because of the
content protection, this is probably not going to work legally any time
soon as a nice autopluggable bin for GStreamer (as is currently the
case for DVD playback, if I understand correctly). Therefore this is
evil, but at least a viable short-term solution.

-- 
Brendan Le Foll
http://www.madeo.co.uk
___
MeeGo-dev mailing list
MeeGo-dev@meego.com
http://lists.meego.com/listinfo/meego-dev
http://wiki.meego.com/Mailing_list_guidelines


Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-04-11 Thread Stefan Kost
hi,

On 25.03.2011 18:28, Cory T. Tusar wrote:
> On 03/25/2011 11:23 AM, Edward Hervey wrote:
> >> On Fri, 2011-03-25 at 10:58 -0400, Cory T. Tusar wrote:
> >>> On 03/17/2011 06:57 AM, Stefan Kost wrote:
> >>
> >> 
> >>
> >>> In "7 Transparency" you need to highlight what your proposal adds
> >>> to the existing features.
> >>> * Transport protocol: handled e.g. by gstreamer already, standards
> >>> like DLNA specify subsets for interoperability already
> >>> * Transparent Encapsulation and Multiplexing: could you please
> >>> elaborate why one would need the non-automatic mode. I think it does
> >>> not make sense to let the application specify what format the stream
> >>> is in, if the media-framework can figure it out (in almost all of
> >>> the cases). In some corner cases one can e.g. use custom pipelines
> >>> and specify the format (e.g. a ringtone playback service might do
> >>> that if it knows the format already).
> >>
> >> As a possible example (pulled from recent experience), automagic
> >> determination of stream parameters takes time (and CPU cycles).  A
> >> "non-automatic" mode would be (was) helpful in instances where the
> >> application knows exactly what type of stream to expect, and there is
> >> a requirement for an absolute minimum of start-up time between the user
> >> pressing the "Play" button and video appearing on the screen.
> >>
> >   A lot of improvement has gone into GStreamer over the past year to
> > speed up the pre-roll/typefinding/setup of playback pipelines. This was
> > mainly to get gst-discoverer to be faster than exiftool to get
> > information about media files, which it now is ... considering it also
> > decodes the first audio/video frame(s).
> >   The only case I can think of where you would gain time would be for
> > live mpeg-ts streams where you could provide the PAT/PMT information
> > which you would have cached previously (in order not to have to wait for
> > the next occurrence). But that would still require you to wait for
> > next keyframe to appear unless you already have a few seconds live
> > back-buffer on the machine (in which case you would also have cached
> > PAT/PMT).
> >   Did you have another use-case in mind ?
>
> Pretty much the above, or slight variations thereof.
>
> Short version: there were product requirements regarding startup time
> and display of the first keyframe received over the network within N
> milliseconds.  Explicit knowledge of stream type when constructing the
> decode pipeline proved helpful in meeting those requirements (this
> particular case was with a GStreamer pipeline on Moblin).
>
> I'm not arguing against automatic detection - it's what works and works
> well in a vast majority of cases - just leave the "power-user" option
> of explicitly specifying codec use / buffer sizing / etc. available for
> times when it's needed.

Maybe we could make progress by filing a requirement in featurezilla? I
also wonder how far off the target we are.
I believe before changing things it would be good to have a test case
at hand that shows by how much the target is missed and that avoiding
auto-detection would meet the target (by saving enough time).
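A hypothetical shape for such a test case, with a stand-in for the real pipeline preroll and an invented target value:

```python
import time

# Hypothetical harness for the test case suggested above: measure
# start-to-first-frame time and compare it against the product target.
# play_until_first_frame is a stand-in for the real pipeline code, and
# TARGET_MS is an invented requirement, not a MeeGo number.
TARGET_MS = 500.0

def measure_startup(play_until_first_frame):
    """Return elapsed milliseconds until the callback reports a frame."""
    t0 = time.monotonic()
    play_until_first_frame()
    return (time.monotonic() - t0) * 1000.0

# Fake a 10 ms "preroll" so the harness can be exercised standalone.
elapsed = measure_startup(lambda: time.sleep(0.01))
print(elapsed < TARGET_MS)  # True
```

Running the same harness with and without auto-detection would show directly whether skipping typefinding saves enough time to matter.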

>
> >>> * Transparent Target: What's the role of the UMMS here? How does
> >>> the URI make sense here. Are you suggesting to use something like
> >>> opengl://localdisplay/0/0/854/480? MAFW was introducing renderers,
> >>> where a local renderer would render well locally and one could
> >>> e.g. have a UPnP DLNA renderer or a media recorder.
> >>> * Transparent Resource Management: That makes a lot of sense and
> >>> so far was planned to be done on QT MultimediaKit
> >>> * Attended and Non Attended execution: This sounds like having a
> >>> media recording service in the platform.
> >>>
> >>> "8 Audio Video Control"
> >>> This is a media player interface. Most of the things make sense.
> >>> Below those that might need more thinking
> >>> * Codec Selection: please don't. This is something that we need to
> >>> solve below and not push to the application or even to the user.
> >>
> >> Agreed, in part.  As a general rule, the underlying detection and codec
> >> selection should be transparent to an application, however there are
> >> corner cases where this may not be desirable, and specific selection of
> >> a codec may be necessary.
> >>
> >> Consider a system which has an external (to the main CPU)
> >> PowerDrain-5000(tm) video processor capable of both MPEG-2 and MPEG-4
> >> decode.  If the system is in a commanded low-power state, it may be
> >> more prudent to decode standard-definition MPEG-2 content in software
> >> on the main CPU and leave the external video processor powered-down.
> >> However, when decode of MPEG-4 content is desired, soft-decode may not
> >> be feasible and the external video hardware needs to be used.
> >>
> >> In instances, as above, where the system has multiple codecs (hardware
> >> and software) capable of decoding given content, is ther

Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-25 Thread Niels Mayer
FYI, I just proposed a BOF that will partially be on this topic from
an app developer's point of view:

http://sf2011.meego.com/program/sessions/bof-developing-qml-youtube-api-and-internet-video
BOF: Developing in QML for YouTube API and Internet video
...
[...] demonstrate powerful techniques for working with YouTube feeds
and for displaying YouTube videos in QML-based apps. Discussion on
ideas, implementation strategies and applications of Internet video in
MeeGo; focus will be on QML implementation, providing touch-interfaces
for streaming media browsing, hybrid implementations involving Flash
embedded in WebKit implementations, issues with using QtMultimediaKit
Player, and discussion of alternatives.
 [...]
e. Open discussion on media players including issues with using
QtMultimediaKit Player in QML, and discussion of alternatives such as
Grilo ( 
http://sf2011.meego.com/program/sessions/grilo-enhancing-multimedia-experience-meego
), MAFW ( http://www.grancanariadesktopsummit.org/node/219 ), gst123 (
http://space.twc.de/~stefan/gst123.php ) and the Media Lovin' Toolkit
( http://www.mltframework.org/ ) which is the basis of the amazing
http://wiki.meego.com/MeeGo-Lem#The_OpenShot_Video_Editor .
...

Any suggestions or changes? Let me know... it's not midnight yet! :),
especially from the point of view of using the aforementioned
playback/loop/clip tools from QML.

Niels
http://nielsmayer.com


Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-25 Thread Cory T. Tusar

On 03/25/2011 11:23 AM, Edward Hervey wrote:
>> On Fri, 2011-03-25 at 10:58 -0400, Cory T. Tusar wrote:
>>> On 03/17/2011 06:57 AM, Stefan Kost wrote:
>> 
>> 
>> 
>>> In "7 Transparency" you need to highlight what your proposal adds to the
>>> existing features.
>>> * Transport protocol: handled e.g. by gstreamer already, standards like
>>> DLNA specify subsets for interoperability already
>>> * Transparent Encapsulation and Multiplexing: could you please elaborate
>>> why one would need the non-automatic mode. I think it does not make
>>> sense to let the application specify what format the stream is in, if
>>> the media-framework can figure it (in almost all of the cases). In some
>>> corner cases one can e.g. use custom pipelines and specify the format
>>> (e.g. a ringtone playback service might do that if it knows the format
>>> already).
>> 
>> As a possible example (pulled from recent experience), automagic
>> determination of stream parameters takes time (and CPU cycles).  A
>> "non-automatic" mode would be (was) helpful in instances where the
>> application knows exactly what type of stream to expect, and there is
>> a requirement for an absolute minimum of start-up time between the user
>> pressing the "Play" button and video appearing on the screen.
>> 
>   A lot of improvement has gone into GStreamer over the past year to
> speed up the pre-roll/typefinding/setup of playback pipelines. This was
> mainly to get gst-discoverer to be faster than exiftool to get
> information about media files, which it now is ... considering it also
> decodes the first audio/video frame(s).
>   The only case I can think of where you would gain time would be for
> live mpeg-ts streams where you could provide the PAT/PMT information
> which you would have cached previously (in order not to have to wait for
> the next occurrence). But that would still require you to wait for the
> next keyframe to appear unless you already have a few seconds live
> back-buffer on the machine (in which case you would also have cached
> PAT/PMT).
>   Did you have another use-case in mind ?

Pretty much the above, or slight variations thereof.

Short version: there were product requirements regarding startup time
and display of the first keyframe received over the network within N
milliseconds.  Explicit knowledge of stream type when constructing the
decode pipeline proved helpful in meeting those requirements (this
particular case was with a GStreamer pipeline on Moblin).

I'm not arguing against automatic detection - it's what works and works
well in a vast majority of cases - just leave the "power-user" option
of explicitly specifying codec use / buffer sizing / etc. available for
times when it's needed.

>>> * Transparent Target: What's the role of the UMMS here? How does the URI
>>> make sense here. Are you suggesting to use something like
>>> opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where
>>> a local renderer would render well locally and one could e.g. have a
>>> UPnP DLNA renderer or a media recorder.
>>> * Transparent Resource Management: That makes a lot of sense and so far
>>> was planned to be done on QT MultimediaKit
>>> * Attended and Non Attended execution: This sounds like having a media
>>> recording service in the platform.
>>>
>>> "8 Audio Video Control"
>>> This is a media player interface. Most of the things make sense. Below
>>> those that might need more thinking
>>> * Codec Selection: please don't. This is something that we need to solve
>>> below and not push to the application or even to the user.
>> 
>> Agreed, in part.  As a general rule, the underlying detection and codec
>> selection should be transparent to an application, however there are
>> corner cases where this may not be desirable, and specific selection of
>> a codec may be necessary.
>> 
>> Consider a system which has an external (to the main CPU)
>> PowerDrain-5000(tm) video processor capable of both MPEG-2 and MPEG-4
>> decode.  If the system is in a commanded low-power state, it may be
>> more prudent to decode standard-definition MPEG-2 content in software on
>> the main CPU and leave the external video processor powered-down.
>> However, when decode of MPEG-4 content is desired, soft-decode may not
>> be feasible and the external video hardware needs to be used.
>> 
>> In instances, as above, where the system has multiple codecs (hardware
>> and software) capable of decoding given content, is there envisioned
>> some method of specifying codec priority so that a given method of
>> decode is used preferentially?
>> 
>   Yes, with playbin2/decodebin2 you can change the order of
> codecs/plugins being used. By default it will use the one with the
> highest rank matching the stream to decode, but you can connect to the
> 'autoplug-factories' signal and reorder those plugins to have it use the
> software one or the hardware one.
>   Another way to go around that problem would be

Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-25 Thread Edward Hervey
On Fri, 2011-03-25 at 10:58 -0400, Cory T. Tusar wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 03/17/2011 06:57 AM, Stefan Kost wrote:
> 
> 
> 
> > In "7 Transparency" you need to highlight what your proposal adds to the
> > existing features.
> > * Transport protocol: handled e.g. by gstreamer already, standards like
> > DLNA specify subsets for interoperability already
> > * Transparent Encapsulation and Multiplexing: could you please elaborate
> > why one would need the non-automatic mode. I think it does not make
> > sense to let the application specify what format the stream is in, if
> > the media-framework can figure it (in almost all of the cases). In some
> > corner cases one can e.g. use custom pipelines and specify the format
> > (e.g. a ringtone playback service might do that if it knows the format
> > already).
> 
> As a possible example (pulled from recent experience), automagic
> determination of stream parameters takes time (and CPU cycles).  A
> "non-automatic" mode would be (was) helpful in instances where the
> application knows exactly what type of stream to expect, and there is
> a requirement for an absolute minimum of start-up time between the user
> pressing the "Play" button and video appearing on the screen.

  A lot of improvement has gone into GStreamer over the past year to
speed up the pre-roll/typefinding/setup of playback pipelines. This was
mainly to get gst-discoverer to be faster than exiftool to get
information about media files, which it now is ... considering it also
decodes the first audio/video frame(s).
  The only case I can think of where you would gain time would be for
live mpeg-ts streams where you could provide the PAT/PMT information
which you would have cached previously (in order not to have to wait for
the next occurrence). But that would still require you to wait for the
next keyframe to appear unless you already have a few seconds live
back-buffer on the machine (in which case you would also have cached
PAT/PMT).
  Did you have another use-case in mind ?
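The PAT/PMT caching idea could be sketched as a small cache keyed by service, with the tables themselves replaced by invented stand-in dictionaries:

```python
import time

# Sketch of the caching idea above: remember the PAT/PMT (program
# maps) of known live mpeg-ts services so a later tune-in can
# configure the demuxer immediately instead of waiting for the next
# table occurrence. The structures are simplified stand-ins, not real
# mpeg-ts tables.
class ProgramMapCache:
    def __init__(self, max_age_s=3600.0):
        self.max_age_s = max_age_s
        self._cache = {}  # service_id -> (timestamp, tables)

    def store(self, service_id, tables):
        self._cache[service_id] = (time.monotonic(), tables)

    def lookup(self, service_id):
        """Return cached tables, or None if unknown or too old."""
        entry = self._cache.get(service_id)
        if entry is None:
            return None
        ts, tables = entry
        if time.monotonic() - ts > self.max_age_s:
            del self._cache[service_id]
            return None
        return tables

cache = ProgramMapCache()
cache.store(0x100, {"pmt_pid": 0x1000, "video_pid": 0x1011})
print(cache.lookup(0x100))
```

As Edward notes, this only helps with table acquisition; the wait for the next keyframe remains unless a live back-buffer is kept as well.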

> 
> > * Transparent Target: What's the role of the UMMS here? How does the URI
> > make sense here. Are you suggesting to use something like
> > opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where
> > a local renderer would render well locally and one could e.g. have a
> > UPnP DLNA renderer or a media recorder.
> > * Transparent Resource Management: That makes a lot of sense and so far
> > was planned to be done on QT MultimediaKit
> > * Attended and Non Attended execution: This sounds like having a media
> > recording service in the platform.
> >
> > "8 Audio Video Control"
> > This is a media player interface. Most of the things make sense. Below
> > those that might need more thinking
> > * Codec Selection: please don't. This is something that we need to solve
> > below and not push to the application or even to the user.
> 
> Agreed, in part.  As a general rule, the underlying detection and codec
> selection should be transparent to an application, however there are
> corner cases where this may not be desirable, and specific selection of
> a codec may be necessary.
> 
> Consider a system which has an external (to the main CPU)
> PowerDrain-5000(tm) video processor capable of both MPEG-2 and MPEG-4
> decode.  If the system is in a commanded low-power state, it may be
> more prudent to decode standard-definition MPEG-2 content in software on
> the main CPU and leave the external video processor powered-down.
> However, when decode of MPEG-4 content is desired, soft-decode may not
> be feasible and the external video hardware needs to be used.
> 
> In instances, as above, where the system has multiple codecs (hardware
> and software) capable of decoding given content, is there envisioned
> some method of specifying codec priority so that a given method of
> decode is used preferentially?

  Yes, with playbin2/decodebin2 you can change the order of
codecs/plugins being used. By default it will use the one with the
highest rank matching the stream to decode, but you can connect to the
'autoplug-factories' signal and reorder those plugins to have it use the
software one or the hardware one.
  Another way to go around that problem would be to have the software
plugin only accept SD streams in input (via its pad template caps) and
have a higher rank than the hardware one, which would make the system
automatically pick up the SW plugin for SD content, but use the HW one
for HD content.
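A pure-Python model of that caps-plus-rank trick (the factory names, ranks, and SD boundary are invented; real GStreamer expresses the restriction via the element's pad template caps):

```python
# Model of the approach Edward describes: the software decoder
# advertises only SD resolutions in its (template) caps and carries a
# higher rank, so plain rank-based autoplugging picks software for SD
# streams and falls through to hardware for HD. Names and ranks are
# invented stand-ins.
FACTORIES = [
    # (name, rank, accepts(width, height))
    ("swdec", 512, lambda w, h: w <= 720 and h <= 576),  # SD only
    ("hwdec", 256, lambda w, h: True),                   # anything
]

def pick_decoder(width, height):
    """Highest-ranked factory whose caps accept the stream."""
    usable = [(name, rank) for name, rank, accepts in FACTORIES
              if accepts(width, height)]
    return max(usable, key=lambda t: t[1])[0] if usable else None

print(pick_decoder(720, 576))    # swdec  (SD -> software)
print(pick_decoder(1920, 1080))  # hwdec  (HD -> hardware)
```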

> 
> > * Buffer Strategy: same as before. Buffering strategy depends on the
> > use-case and media. The application needs to express whether it's a
> > media-player/media-editor/.. and from that we need to derive this.
> 
> But not all use-cases may have the same requirements.  Again, from
> recent experience, my system's requirements for low-latency may or may
> not match yours.  That's not to say that providing some sane defaults
> that cover a majority of expected use cases isn't a go

Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-25 Thread Cory T. Tusar

On 03/17/2011 06:57 AM, Stefan Kost wrote:



> In "7 Transparency" you need to highlight what your proposal adds to the
> existing features.
> * Transport protocol: handled e.g. by gstreamer already, standards like
> DLNA specify subsets for interoperability already
> * Transparent Encapsulation and Multiplexing: could you please elaborate
> why one would need the non-automatic mode. I think it does not make
> sense to let the application specify what format the stream is in, if
> the media-framework can figure it (in almost all of the cases). In some
> corner cases one can e.g. use custom pipelines and specify the format
> (e.g. a ringtone playback service might do that if it knows the format
> already).

As a possible example (pulled from recent experience), automagic
determination of stream parameters takes time (and CPU cycles).  A
"non-automatic" mode would be (was) helpful in instances where the
application knows exactly what type of stream to expect, and there is
a requirement for an absolute minimum of start-up time between the user
pressing the "Play" button and video appearing on the screen.

> * Transparent Target: What's the role of the UMMS here? How does the URI
> make sense here. Are you suggesting to use something like
> opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where
> a local renderer would render well locally and one could e.g. have a
> UPnP DLNA renderer or a media recorder.
> * Transparent Resource Management: That makes a lot of sense and so far
> was planned to be done on QT MultimediaKit
> * Attended and Non Attended execution: This sounds like having a media
> recording service in the platform.
>
> "8 Audio Video Control"
> This is a media player interface. Most of the things make sense. Below
> those that might need more thinking
> * Codec Selection: please don't. This is something that we need to solve
> below and not push to the application or even to the user.

Agreed, in part.  As a general rule, the underlying detection and codec
selection should be transparent to an application, however there are
corner cases where this may not be desirable, and specific selection of
a codec may be necessary.

Consider a system which has an external (to the main CPU)
PowerDrain-5000(tm) video processor capable of both MPEG-2 and MPEG-4
decode.  If the system is in a commanded low-power state, it may be
more prudent to decode standard-definition MPEG-2 content in software on
the main CPU and leave the external video processor powered-down.
However, when decode of MPEG-4 content is desired, soft-decode may not
be feasible and the external video hardware needs to be used.

In instances, as above, where the system has multiple codecs (hardware
and software) capable of decoding given content, is there envisioned
some method of specifying codec priority so that a given method of
decode is used preferentially?
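One possible shape for such a policy, sketched with invented names and a made-up feasibility table rather than any real platform API:

```python
# Sketch of the PowerDrain-5000 scenario: prefer software decode for
# SD MPEG-2 while in a commanded low-power state, but use the external
# hardware decoder whenever software decode is not feasible. The
# feasibility table and names are invented for illustration.
SOFT_DECODABLE = {("mpeg2", "sd")}  # what the main CPU can handle

def choose_decoder(codec, definition, low_power):
    """Pick 'software' or 'hardware' given codec, resolution class,
    and the current system power state."""
    feasible_in_sw = (codec, definition) in SOFT_DECODABLE
    if low_power and feasible_in_sw:
        return "software"  # keep the external processor powered down
    return "hardware"

print(choose_decoder("mpeg2", "sd", low_power=True))   # software
print(choose_decoder("mpeg4", "sd", low_power=True))   # hardware
```

A codec-priority mechanism, as asked about above, would amount to letting such a policy reorder the candidate decoders before autoplugging.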

> * Buffer Strategy: same as before. Buffering strategy depends on the
> use-case and media. The application needs to express whether it's a
> media-player/media-editor/.. and from that we need to derive this.

But not all use-cases may have the same requirements.  Again, from
recent experience, my system's requirements for low-latency may or may
not match yours.  That's not to say that providing some sane defaults
that cover a majority of expected use cases isn't a good idea, just
don't restrict the application to those and those alone.



> "15 GStreamer"
> It is GStreamer (with an upper case 'S') :) In general please spell check
> the section.
> Regarding the three weak points:
> * smooth fast forward is a seek_event with a rate>1.0. There might be
> elements not properly implementing that, but I fail to understand how
> you can fix that on higher layers instead of in the elements. It might
> make sense to define extended compliance criteria for base adaptation
> vendors to ensure consistent behavior and features.

+1.
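For reference, the arithmetic behind a rate > 1.0 seek can be sketched like this (a simplification of the real GStreamer segment math, not the actual API):

```python
# After a seek event with rate > 1.0, the stream position advances
# rate times faster than the wall clock. Plain-arithmetic stand-in
# for the GStreamer segment/running-time calculation.
def stream_position(seek_start_s, rate, elapsed_clock_s):
    """Stream position after playing elapsed_clock_s at given rate."""
    return seek_start_s + rate * elapsed_clock_s

# 2x fast-forward from 10 s: after 5 s of wall-clock time we are at 20 s.
print(stream_position(10.0, 2.0, 5.0))  # 20.0
```

Whether the result is "smooth" is then entirely up to the elements honoring the rate, which is why the fix belongs in the elements, not in a higher layer.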

-Cory


-- 
Cory T. Tusar
Senior Software Engineer
Videon Central, Inc.
2171 Sandy Drive
State College, PA 16803
(814) 235- x316
(814) 235-1118 fax


"There are two ways of constructing a software design.  One way is to
 make it so simple that there are obviously no deficiencies, and the
 other way is to make it so complicated that there are no obvious
 deficiencies."  --Sir Charles Anthony Richard Hoare



Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-25 Thread Iago Toral Quiroga
On Fri, 25-03-2011 at 12:07 +0200, Stefan Kost wrote:
> Hi,
> 
> On 24.03.2011 23:57, Dominig Ar Foll wrote:
> > Apologies for responding in dual posting. I would like as much as
> > possible to concentrate that type of chat on the TV mailing list but I
> > would not want to leave a question open.
> >
> > Regarding the already existing projects. Are you aware of MAFW on
> > Maemo5
> > http://www.grancanariadesktopsummit.org/node/219
> > The implementation might not be perfect, but the concept behind is
> > sane.
> >
> > No, I did not know it, and I thank you for the link. I have so far
> > only written a requirements specification, and any ideas for moving
> > toward good implementation specifications are welcome. I will dig
> > into their documentation.
> 
> MAFW on Fremantle was mostly done by Igalia. Some of the people now
> work on the Grilo open source project, where you might be able to talk
> to them (on IRC).

Grilo actually expands on the ideas introduced by MAFW, and we think it
fixes many of its shortcomings while keeping the good ideas behind it. I
elaborated on the reasons we had to create Grilo, and on what it
provides to multimedia solution developers, in a post when we first
announced the project [1]; it is an old post and some things are
outdated (like the link to the repository), but the main ideas are
there. I also wrote an article explaining the purpose of Grilo [2] that
you might be interested in reading.

BTW, I sent a paper to the MeeGo Conference in Dublin with the idea of
introducing Grilo to the MeeGo community there. Unfortunately it was not
accepted, although I did give a lightning talk on the topic. I've sent
the proposal again for the San Francisco event [3]; hopefully I will be
luckier this time around :)

I would love to see Grilo included in the multimedia stack of MeeGo, I
think it would be a great tool for multimedia solution developers and we
(Igalia) would be glad to work on making that possible if the MeeGo
community welcomes the addition.

Iago

[1]http://blogs.igalia.com/itoral/2010/02/10/grilo/
[2]http://www.gnomejournal.org/article/103/grilo-integrating-multimedia-content-in-your-application
[3]http://sf2011.meego.com/program/sessions/grilo-enhancing-multimedia-experience-meego



Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-25 Thread Stefan Kost
Hi again,

On 25.03.2011 12:07, Stefan Kost wrote:
> Hi,
>
> On 24.03.2011 23:57, Dominig Ar Foll wrote:

> There is no architectural issue regarding Blu-ray in our stack. It is
> more a legal mess. I believe that, given the time and money, you could
> write Blu-ray support for GStreamer in a similar fashion as we have
> the DVD support now.
>> The idea presented here is that the UMMS can decide which pipeline to
>> call on depending on the URL or the detected stream type, without
>> requiring prior knowledge from the application about the pipeline
>> configuration/selection.
> That means you would like to have a bunch of different multimedia
> frameworks on a device and then use an appropriate one depending on the
> URI. E.g. use gstreamer for some formats and use mplayer for some
> others. While that might sound like a good idea, I don't think it is one
> for several reasons:
> - you will need to abstract the different APIs (well, that's actually
> your proposal)
> - you increase the size and complexity of the multimedia stack
> - more difficult to debug (e.g. different tools needed)
> - testing is a lot more difficult
> - users might get annoyed by small incompatibilities (seeking works
> differently depending on the media)
> - you need to do base adaptation several times (integrate codecs,
> rendering etc. in several frameworks)
>
> There might be more reasons that speak against such an approach, but
> already the last one would be major enough for me.
Can't resist: while speaking with a colleague, he came up with a good
metaphor for the above. Is there an ideal car? No, there isn't. So
instead of fixing the car to be what we want, we have a "simpler"
solution - we take three cars, string them together, put an extra seat
on the roof and proxy the controls. Then we can drive the Fiat in the
city, the Porsche on the motorway, and the van when we need more space
for transportation. Of course, finding a parking space gets tricky ...

Stefan


Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-25 Thread Stefan Kost
Hi,

On 24.03.2011 23:57, Dominig Ar Foll wrote:
> Apologies for responding in a dual posting. I would like as much as
> possible to concentrate this type of chat on the TV mailing list, but I
> would not want to leave a question open.
>
> Regarding the already existing projects. Are you aware of MAFW on
> Maemo5
> http://www.grancanariadesktopsummit.org/node/219
> The implementation might not be perfect, but the concept behind is
> sane.
>
> No, I did not know it, and I thank you for the link. I have so far only
> written a requirement specification, and any ideas for moving toward good
> implementation specifications are welcome. I will dig into their
> documentation.

MAFW on Fremantle was mostly done by Igalia. Some of those people now work
on the Grilo open source project, where you might be able to talk to them
(on IRC).

>
> The picture in "6 Position in MeeGo" looks quite arbitrary to me. Do the
> colors have special semantics (maybe add a small legend below)?
>
> No, the colours were just imported from a slide and were just there to
> help identify the blocks. The main idea of that graph is to make
> very clear that the proposed concept does not plan to create a
> universal audio/video pipeline, but has the goal of being able to
> integrate multiple video pipelines under a unified umbrella. In
> particular, it aims at enabling non open source pipelines to
> coexist with public pipelines.
>
>
> In "7 Transparency" you need to highlight what your proposal adds
> to the
> existing features.
>
> Chapter 7, "Transparency", regroups the need to provide certain types
> of services in a transparent manner to the application. My goal is
> to enable applications to play multimedia content without knowing much
> about that content. E.g. if you write an application which needs to
> access a live TV service, you will have a different pipeline in the US
> than when that same application is run in Europe. The requirement
> of transparency is applied to the type of source and target, in a very
> similar manner as when you print on Linux today: your application
> knows very little about the printer but can still print.
Which part of the pipeline do you think is not well handled right
now? If you have concrete examples for illustration, I would encourage
you to add them. I believe architecturally we don't miss anything major
here.

>
> * Transport protocol: handled e.g. by GStreamer already; standards
> like
> DLNA specify subsets for interoperability already
>
>
> I am afraid that GStreamer cannot do today everything that I would
> love it to do. It does pretty well on most internet formats, but
> playbin2 has a very limited set of supported services when it comes to
> broadcast or IPTV. Furthermore, by default it does not support any
> smooth streaming feature or protection.
The GStreamer people already have a smooth streaming implementation. There
are two different things:
1) missing architecture to implement a feature
2) missing implementation of a certain feature
I believe the GStreamer architecture is pretty solid for adding extra
streaming protocols, containers, codecs etc.
Regarding content protection, I believe it should be done outside of
GStreamer. As I said, it is not media specific. One idea would be to
implement a virtual file system with the related access rights and
process isolation. This would allow running an unmodified media pipeline.

> But I agree that GStreamer is a great tool and I would certainly see
> it as one of the strong candidates to implement the first open source
> audio/video pipeline under a UMMS framework.
Just to be clear - I am not saying that GStreamer is the tool for
everything. But integrating too many things in parallel might not be
beneficial either. Thus your document needs to point out the
missing parts explicitly. Then people can help you identify
existing implementations (or where they believe the feature should be
added). Then we can also identify things that are completely missing.

We also have to keep in mind that people need to be able to understand
our multimedia stack. Right now I think it makes sense:

QtMultimediaKit
* high level qt/c++ api that focuses on particular use cases
* might apply constraints to keep the api simple
QtGStreamer
* the full feature set of gstreamer bound to a qt style api

GStreamer
* high level api (playbin2, camerabin(2), decodebin2, gnonlin, rtpbin, ...)
* open and closed multimedia components

Kernel
* audio/video i/o, network, accelerated codecs, ...

>  
>
> * Transparent Encapsulation and Multiplexing: could you please
> elaborate
> why one would need the non-automatic mode. I think it does not make
> sense to let the application specify what format the stream is in, if
> the media-framework can figure it (in almost all of the cases). In
> some
> corner cases one can e.g. use custom pipelines and specify the format
> (e.g. a ringtone playback servic

Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-24 Thread Dmytro Poplavskiy
Hi

> 
> "14 QT-Multimedia"
> Seriously, don't even consider to stack it on top of qt-multimedia.
> We're still embedded. You could propose to implement it as part of QT
> multimedia though (or having it at the same level).
> 

For me, the purpose/level of UMMS sounds very similar to that of QtMultimediaKit:
both provide an API for a defined set of cases and provide the integration with
Qt graphics systems (QWidgets/QGraphicsView/QML/SceneGraph).
Both are thin portable API layers on top of existing frameworks, with
similar pros and cons of such a solution.

>* DRM can be implemented outside of GStreamer. still I don't fully
>understand what the issue here might be.

As I understand it, for the DRM case the playback should be done in a separate
"protected" process:
an application should not be able to touch any data in the pipeline.
Probably it can be done transparently inside playbin2 while keeping
the API/behavior the same, but it may not be trivial.

Regards
  Dmytro.


Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-24 Thread Dominig Ar Foll
Apologies for responding in a dual posting. I would like as much as possible
to concentrate this type of chat on the TV mailing list, but I would not
want to leave a question open.

 Regarding already existing projects: are you aware of MAFW on Maemo5?
> http://www.grancanariadesktopsummit.org/node/219
> The implementation might not be perfect, but the concept behind it is sane.
>
No, I did not know it, and I thank you for the link. I have so far only
written a requirement specification, and any ideas for moving toward good
implementation specifications are welcome. I will dig into their
documentation.

>
> The picture in "6 Position in MeeGo" looks quite arbitrary to me. Do the
> colors have special semantics (maybe add a small legend below)?
>
No, the colours were just imported from a slide and were just there to help
identify the blocks. The main idea of that graph is to make very clear that
the proposed concept does not plan to create a universal audio/video pipeline,
but has the goal of being able to integrate multiple video pipelines under a
unified umbrella. In particular, it aims at enabling non open source
pipelines to coexist with public pipelines.

>
> In "7 Transparency" you need to highlight what your proposal adds to the
> existing features.
>
Chapter 7, "Transparency", regroups the need to provide certain types of
services in a transparent manner to the application. My goal is to
enable applications to play multimedia content without knowing much about that
content. E.g. if you write an application which needs to access a live TV
service, you will have a different pipeline in the US than when that same
application is run in Europe. The requirement of transparency is applied to
the type of source and target, in a very similar manner as when you print on
Linux today: your application knows very little about the printer but can
still print.

* Transport protocol: handled e.g. by GStreamer already; standards like
> DLNA specify subsets for interoperability already
>

I am afraid that GStreamer cannot do today everything that I would love it
to do. It does pretty well on most internet formats, but playbin2 has a
very limited set of supported services when it comes to broadcast or IPTV.
Furthermore, by default it does not support any smooth streaming feature or
protection.
But I agree that GStreamer is a great tool and I would certainly see it as
one of the strong candidates to implement the first open source audio/video
pipeline under a UMMS framework.


> * Transparent Encapsulation and Multiplexing: could you please elaborate
> why one would need the non-automatic mode. I think it does not make
> sense to let the application specify what format the stream is in, if
> the media-framework can figure it (in almost all of the cases). In some
> corner cases one can e.g. use custom pipelines and specify the format
> (e.g. a ringtone playback service might do that if it knows the format
> already).
>

Multimedia assets come in multiple modes of transport and multiplexing (from
HTTP to live DVB) in MPEG2-TS, MP4, QuickTime or Flash. Automatic
detection is sometimes possible and sometimes not. Furthermore, some video
pipelines can handle many formats well, while some other formats will impose
an alternative pipeline (Blu-ray is a good example).
The idea presented here is that the UMMS can decide which pipeline to call
on, depending on the URL or the detected stream type, without requiring prior
knowledge from the application about the pipeline configuration/selection.
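The dispatch described above - choosing a pipeline from the URI scheme or detected stream type - could be sketched roughly as below. This is only an illustration of the idea, not anything UMMS specifies: `PIPELINE_RANKING`, `select_pipeline`, and the backend names are all hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical ranking table: URI scheme -> ordered list of candidate
# pipeline backends. A platform adaptation could ship this as pure
# configuration instead of patching selection logic into the framework.
PIPELINE_RANKING = {
    "http":   ["gstreamer-playbin2"],
    "dvb":    ["soc-broadcast-pipeline", "gstreamer-playbin2"],
    "bluray": ["vendor-bluray-pipeline"],
}

def select_pipeline(uri, ranking=PIPELINE_RANKING):
    """Return the highest-ranked backend for a media URI, or None."""
    scheme = urlparse(uri).scheme
    candidates = ranking.get(scheme, [])
    return candidates[0] if candidates else None
```

With such a table, `select_pipeline("dvb://ch1")` would pick the SoC broadcast pipeline while plain HTTP playback stays on playbin2, and the application never sees the difference.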


> * Transparent Target: Whats the role of the UMMS here? How does the URI
> make sense here. Are you suggesting to use something like
> opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where
> a local renderer would render well locally and one could e.g. have a
> UPnP DLNA renderer or a media recorder.
>

Once again, the goal here is to decouple the application from prior
knowledge of the video pipeline. I am proposing to add to the
traditional targets for playing video not only an OpenGL texture, but
also a DLNA target and video in overlay. The latter is a speciality of SoCs,
but is mandatory when it comes to running HD video on a low energy system or
respecting tight security requirements.
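A URI-addressed target of the kind quoted above (opengl://localdisplay/0/0/854/480) could be decomposed like this. The scheme://display/x/y/width/height convention and the `parse_target_uri` helper are assumptions made for the sketch, not a documented UMMS format:

```python
from urllib.parse import urlparse

def parse_target_uri(uri):
    """Split a render-target URI into (kind, display, (x, y, w, h)).

    Assumes the hypothetical layout scheme://display/x/y/width/height,
    e.g. an OpenGL texture, DLNA renderer, or SoC overlay plane could
    each be addressed by a different scheme.
    """
    parts = urlparse(uri)
    x, y, w, h = (int(p) for p in parts.path.strip("/").split("/"))
    return parts.scheme, parts.netloc, (x, y, w, h)
```

Under this convention the overlay vs. texture decision becomes a property of the target URI, so the application code stays identical across SoCs.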


> * Transparent Resource Management: That makes a lot of sense and so far
> was planned to be done on QT MultimediaKit
>

Yes. It makes sense, and on SoCs it's even more critical.


> * Attended and Non Attended execution: This sounds like having a media
> recording service in the platform.
>

Yes, that is exactly what it is.

>
> "8 Audio Video Control"
> This is a media player interface. Most of the things make sense. Below
> those that might need more thinking
> * Codec Selection: please don't. This is something that we need to solve
> below and not push to the application or even to the user.
>

In general I do agree, but sometimes you need to specify. In particular when
you have multiple streams in the same multiplex (e.g. Dolby 7.1 and simpl

Re: [MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-17 Thread Stefan Kost
Hi Dominig,


On 16.03.2011 11:08, ext Dominig ar Foll wrote:
> Hello,
>
> As many of you may not follow the TV mailing list, I am posting information
> about a proposal which might also be of interest outside of the TV world.
>
> When you start to play video on Linux, you realise quickly that no
> solution can cover the full need of playing video for a real full TV
> application. If you try to create a Consumer Electronics (CE) device, you
> will likely use a System on Chip (SoC) which provides a concept of video
> overlay; this helps performance greatly but makes any
> integration very specific.
>
> The proposed specification has been written for TV but could be valuable
> to any device which either wants to be fully compliant with TV
> requirements when playing videos or uses a SoC with hardware acceleration.
>
> The proposed specification aims at creating a generic service for Linux
> (I will start with MeeGo TV) which could unify the playout of a video and
> make transparent the support of the various hardware helpers provided by
> some chips, and in particular the support of the overlay video common on SoCs.
>
> Please let me know your feedback, preferably on the MeeGo TV mailing
> list.
>
> The file can be found here:
>   http://wiki.meego.com/File:Meego_Unified_MultiMedia_Service_V0.4.odt

Regarding already existing projects: are you aware of MAFW on Maemo5?
http://www.grancanariadesktopsummit.org/node/219
The implementation might not be perfect, but the concept behind it is sane.

The picture in "6 Position in MeeGo" looks quite arbitrary to me. Do the
colors have special semantics (maybe add a small legend below)?

In "7 Transparency" you need to highlight what your proposal adds to the
existing features.
* Transport protocol: handled e.g. by GStreamer already; standards like
DLNA specify subsets for interoperability already
* Transparent Encapsulation and Multiplexing: could you please elaborate
on why one would need the non-automatic mode. I think it does not make
sense to let the application specify what format the stream is in if
the media framework can figure it out (in almost all of the cases). In some
corner cases one can e.g. use custom pipelines and specify the format
(e.g. a ringtone playback service might do that if it knows the format
already).
* Transparent Target: What's the role of the UMMS here? How does the URI
make sense here? Are you suggesting to use something like
opengl://localdisplay/0/0/854/480? MAFW introduced renderers, where
a local renderer would render locally and one could e.g. have a
UPnP DLNA renderer or a media recorder.
* Transparent Resource Management: That makes a lot of sense and so far
was planned to be done in Qt MultimediaKit
* Attended and Non Attended execution: This sounds like having a media
recording service in the platform.

"8 Audio Video Control"
This is a media player interface. Most of the things make sense. Below
those that might need more thinking
* Codec Selection: please don't. This is something that we need to solve
below and not push to the application or even to the user.
* Buffer Strategy: same as before. Buffering strategy depends on the
use-case and media. The application needs to express whether its a
media-player/media-editor/.. and from that we need to derive this.
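Deriving the buffering strategy from the declared application role, as suggested above, could look like the following sketch. The role names and numbers are invented for illustration; nothing here is a real platform policy:

```python
# Hypothetical buffering presets keyed by the role the application
# declares, instead of exposing low-level buffering knobs in the API.
BUFFER_POLICY = {
    "media-player": {"prebuffer_ms": 2000, "low_watermark": 0.10},
    "media-editor": {"prebuffer_ms": 0,    "low_watermark": 0.0},
    "live-tv":      {"prebuffer_ms": 500,  "low_watermark": 0.05},
}

def buffer_policy(role):
    """Return the buffering preset for a declared application role.

    Unknown roles fall back to the generic media-player preset.
    """
    return BUFFER_POLICY.get(role, BUFFER_POLICY["media-player"])
```

The point of the indirection is that the platform can retune these presets per device (e.g. a SoC with small memory) without any application change.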

"9 Restricted Access Mode"
Most of those are needed as platform wide services. E.g. Parental
Control would also be needed for Internet access.

"11 Developer and Linux friendly"
* Backwards compatible ...: My suggestion is to take inspiration from
existing components, but only do any emulation if someone really needs
it. It is usually possible to some extent, but what's the point?
* Device and Domain independence: Again, how does UMMS improve the
situation here?

"12 Typical use cases"
I think it would be helpful to have before and after stories here to
highlight the benefits of your concept.

"13 D-Bus"
Be careful with generic statements like "D-Bus can be a bit slow ...".
Stick with facts and avoid myths.

"14 QT-Multimedia"
Seriously, don't even consider stacking it on top of qt-multimedia.
We're still embedded. You could propose to implement it as part of Qt
Multimedia though (or have it at the same level).

"15 GStreamer"
It is GStreamer (with an upper case 'S') :) In general, please spell check
the section.
Regarding the three weak points:
* Smooth fast forward is a seek event with a rate > 1.0. There might be
elements not properly implementing that, but I fail to understand how
you can fix that in higher layers instead of in the elements. It might
make sense to define extended compliance criteria for base adaptation
vendors to ensure consistent behavior and features.
* DRM can be implemented outside of GStreamer. Still, I don't fully
understand what the issue here might be.
* Push/pull: GStreamer is a library; you can do lots of things with it.
If you want to use it to broadcast media, you can do that very well. Some
known examples: rygel (a UPnP media server), gst-rtsp-server. J
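For reference, the rate > 1.0 seek mentioned above maps onto a flushing seek event in GStreamer. The helper below only assembles the parameters one would hand to gst_element_seek() / Gst.Element.seek(); the string constants name the real GStreamer enums, but the function itself is an illustrative sketch so it runs without the library:

```python
def trick_mode_seek_args(rate, position_ns=0):
    """Assemble the arguments for a flushing trick-mode seek.

    In real code these would be passed to gst_element_seek();
    here they are returned as a dict so the sketch is self-contained.
    """
    if rate == 0.0:
        raise ValueError("seek rate must be non-zero")
    return {
        "rate": rate,                        # > 1.0: smooth fast forward
        "format": "GST_FORMAT_TIME",         # seek in stream time
        "flags": "GST_SEEK_FLAG_FLUSH",      # flush so playback resumes fast
        "start_type": "GST_SEEK_TYPE_SET",   # absolute start position
        "start": position_ns,
        "stop_type": "GST_SEEK_TYPE_NONE",   # play to the end
        "stop": -1,
    }
```

A 2x fast forward from the current position is then just a matter of re-sending this event with rate=2.0, which is why misbehaving elements (not the API) are the thing to fix.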

[MeeGo-dev] Candidate specification for a generic video player from the TV list

2011-03-16 Thread Dominig ar Foll
Hello,

As many of you may not follow the TV mailing list, I am posting information
about a proposal which might also be of interest outside of the TV world.

When you start to play video on Linux, you realise quickly that no
solution can cover the full need of playing video for a real full TV
application. If you try to create a Consumer Electronics (CE) device, you
will likely use a System on Chip (SoC) which provides a concept of video
overlay; this helps performance greatly but makes any
integration very specific.

The proposed specification has been written for TV but could be valuable
to any device which either wants to be fully compliant with TV
requirements when playing videos or uses a SoC with hardware acceleration.

The proposed specification aims at creating a generic service for Linux
(I will start with MeeGo TV) which could unify the playout of a video and
make transparent the support of the various hardware helpers provided by
some chips, and in particular the support of the overlay video common on SoCs.

Please let me know your feedback, preferably on the MeeGo TV mailing
list.

The file can be found here:
  http://wiki.meego.com/File:Meego_Unified_MultiMedia_Service_V0.4.odt

-- Dominig ar Foll MeeGo TV Open Source Technology Centre Intel SSG
