RE: [st-ericsson] v4l2 vs omx for camera

2011-03-01 Thread Edward Hervey
On Mon, 2011-02-28 at 09:50 +0100, Marek Szyprowski wrote:
 Hello,
[...]
 
 I'm not sure that highmem is the right solution. First, it would force
 systems with a rather small amount of memory (like 256M) to use highmem just
 to support DMA-allocatable memory. It also doesn't solve the issue of the
 specific memory requirements of our DMA hardware (the multimedia codec needs
 video memory buffers from 2 physical banks).

  Could you explain why a codec would require memory buffers from 2
physical banks?

  Thanks,

   Edward

 
 The relocation issue has already been addressed in the last CMA patch series.
 Michal managed to create a framework that allows relocating any pages from
 the CMA area on demand.
 
 Best regards
 --
 Marek Szyprowski
 Samsung Poland R&D Center
 
 


--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: [st-ericsson] v4l2 vs omx for camera

2011-03-01 Thread Marek Szyprowski
Hello,

On Tuesday, March 01, 2011 11:26 AM Edward Hervey wrote:

 On Mon, 2011-02-28 at 09:50 +0100, Marek Szyprowski wrote:
  Hello,
 [...]
 
  I'm not sure that highmem is the right solution. First, it would force
  systems with a rather small amount of memory (like 256M) to use highmem just
  to support DMA-allocatable memory. It also doesn't solve the issue of the
  specific memory requirements of our DMA hardware (the multimedia codec needs
  video memory buffers from 2 physical banks).
 
   Could you explain why a codec would require memory buffers from 2
  physical banks?
 

Well, that is rather a question for the hardware engineer who designed it.

I suspect the buffers have been split into 2 regions and placed in 2 different
memory banks to achieve the performance required to decode/encode full HD
H.264 movies. The video codec has 2 AXI master interfaces and I expect it is
able to perform 2 transactions to memory at once.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center




RE: [st-ericsson] v4l2 vs omx for camera

2011-02-28 Thread Marek Szyprowski
Hello,

On Saturday, February 26, 2011 8:20 PM Nicolas Pitre wrote:

 On Sat, 26 Feb 2011, Kyungmin Park wrote:
 
  On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.wall...@linaro.org 
  wrote:
   2011/2/24 Edward Hervey bilb...@gmail.com:
  
    What *needs* to be solved is an API for data allocation/passing at the
   kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
   userspace (like GStreamer) can pass around, monitor and know about.
  
   I think the patches sent out from ST-Ericsson's Johan Mossberg to
   linux-mm for HWMEM (hardware memory) deals exactly with buffer
   passing, pinning of buffers and so on. The CMA (Contiguous Memory
   Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
   so CMA provides buffers, HWMEM passes them around.
  
   Johan, when you re-spin the HWMEM patchset, can you include
   linaro-dev and linux-media in the CC? I think there is *much* interest
   in this mechanism, people just don't know from the name what it
   really does. Maybe it should be called mediamem or something
   instead...
 
  To Marek,
 
  Can you also update the CMA status and plan?
 
  The important thing is that Russell still doesn't agree with CMA, since it
  doesn't solve the ARM issue of different memory attribute mappings. Of
  course, there's no way to solve the ARM issue.
 
 There are at least two ways to solve that issue, and I have suggested
 both on the lak mailing list already.
 
 1) Make the direct mapped kernel memory usable by CMA mapped through a
page-sized two-level page table mapping which would allow for solving
the attributes conflict on a per page basis.

That's the solution I'm working on now. I also suggested this in the last
CMA discussion; however, there was no response on whether this is the right
way to go.
 2) Use highmem more aggressively and allow only highmem pages for CMA.
This makes it quite easy to ensure that the target page(s) for CMA have
no kernel mappings and therefore no attribute conflict. Furthermore,
highmem pages are always relocatable for making physically contiguous
segments available.

I'm not sure that highmem is the right solution. First, it would force
systems with a rather small amount of memory (like 256M) to use highmem just
to support DMA-allocatable memory. It also doesn't solve the issue of the
specific memory requirements of our DMA hardware (the multimedia codec needs
video memory buffers from 2 physical banks).

The relocation issue has already been addressed in the last CMA patch series.
Michal managed to create a framework that allows relocating any pages from
the CMA area on demand.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center




Re: [st-ericsson] v4l2 vs omx for camera

2011-02-28 Thread Laurent Pinchart
On Saturday 26 February 2011 13:12:42 Hans Verkuil wrote:
 On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
  2011/2/24 Edward Hervey bilb...@gmail.com:
   What *needs* to be solved is an API for data allocation/passing at the
   kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
   userspace (like GStreamer) can pass around, monitor and know about.
  
  I think the patches sent out from ST-Ericsson's Johan Mossberg to
  linux-mm for HWMEM (hardware memory) deals exactly with buffer
  passing, pinning of buffers and so on. The CMA (Contigous Memory
  Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
  so CMA provides buffers, HWMEM pass them around.
  
  Johan, when you re-spin the HWMEM patchset, can you include
  linaro-dev and linux-media in the CC?
 
 Yes, please. This sounds promising and we at linux-media would very much
 like to take a look at this. I hope that the CMA + HWMEM combination is
 exactly what we need.

Once again let me restate what I've been saying for some time: CMA must be
*optional*. Not all hardware needs contiguous memory. I'll have a look at the
next HWMEM version.

-- 
Regards,

Laurent Pinchart


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-28 Thread Hans Verkuil
On Sunday, February 27, 2011 20:49:37 Arnd Bergmann wrote:
 On Saturday 26 February 2011, Edward Hervey wrote:
   Are there any gstreamer/linaro/etc core developers attending the ELC in
   San Francisco in April? I think it might be useful to get together
   before, during or after the conference and see if we can turn this
   discussion in something more concrete.
   
   It seems to me that there is an overall agreement of what should be
   done, but that we are far from anything concrete.
  
  I will be there and this was definitely a topic I intended to talk about.
  
  See you there.
 
 I'll also be there. Should we organize an official BOF session for this and
 invite more people?

I think that is an excellent idea. Do you want to organize that? (Always the
penalty for suggesting this first :-) )

Regards,

Hans


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-28 Thread Laurent Pinchart
On Sunday 27 February 2011 20:49:37 Arnd Bergmann wrote:
 On Saturday 26 February 2011, Edward Hervey wrote:
   Are there any gstreamer/linaro/etc core developers attending the ELC in
   San Francisco in April? I think it might be useful to get together
   before, during or after the conference and see if we can turn this
   discussion in something more concrete.
   
   It seems to me that there is an overall agreement of what should be
   done, but that we are far from anything concrete.
   
  I will be there and this was definitely a topic I intended to talk about.
  
  See you there.
 
 I'll also be there. Should we organize an official BOF session for this and
 invite more people?

Any chance of an IRC backchannel and a live audio/video stream for those of
us who won't be there?

-- 
Regards,

Laurent Pinchart


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-28 Thread Hans Verkuil
On Monday, February 28, 2011 11:11:47 Laurent Pinchart wrote:
 On Saturday 26 February 2011 13:12:42 Hans Verkuil wrote:
  On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
   2011/2/24 Edward Hervey bilb...@gmail.com:
    What *needs* to be solved is an API for data allocation/passing at the
    kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
    userspace (like GStreamer) can pass around, monitor and know about.
   
   I think the patches sent out from ST-Ericsson's Johan Mossberg to
   linux-mm for HWMEM (hardware memory) deals exactly with buffer
   passing, pinning of buffers and so on. The CMA (Contiguous Memory
   Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
   so CMA provides buffers, HWMEM passes them around.
   
   Johan, when you re-spin the HWMEM patchset, can you include
   linaro-dev and linux-media in the CC?
  
  Yes, please. This sounds promising and we at linux-media would very much
  like to take a look at this. I hope that the CMA + HWMEM combination is
  exactly what we need.
 
 Once again let me restate what I've been saying for some time: CMA must be
 *optional*. Not all hardware needs contiguous memory. I'll have a look at the
 next HWMEM version.

Yes, it is optional when you look at specific hardware. On a kernel level 
however it is functionality that is required in order to support all the 
hardware. There is little point in solving one issue and not the other.

Regards,

Hans


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-28 Thread Laurent Pinchart
On Monday 28 February 2011 11:21:52 Hans Verkuil wrote:
 On Monday, February 28, 2011 11:11:47 Laurent Pinchart wrote:
  On Saturday 26 February 2011 13:12:42 Hans Verkuil wrote:
   On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
2011/2/24 Edward Hervey bilb...@gmail.com:
  What *needs* to be solved is an API for data allocation/passing at 
 the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and
 that userspace (like GStreamer) can pass around, monitor and know
 about.

I think the patches sent out from ST-Ericsson's Johan Mossberg to
linux-mm for HWMEM (hardware memory) deals exactly with buffer
passing, pinning of buffers and so on. The CMA (Contiguous Memory
Allocator) has been slightly modified to fit hand-in-glove with
HWMEM, so CMA provides buffers, HWMEM passes them around.

Johan, when you re-spin the HWMEM patchset, can you include
linaro-dev and linux-media in the CC?
   
   Yes, please. This sounds promising and we at linux-media would very
   much like to take a look at this. I hope that the CMA + HWMEM
   combination is exactly what we need.
  
  Once again let me restate what I've been saying for some time: CMA must
  be *optional*. Not all hardware needs contiguous memory. I'll have a look
  at the next HWMEM version.
 
 Yes, it is optional when you look at specific hardware. On a kernel level
 however it is functionality that is required in order to support all the
 hardware. There is little point in solving one issue and not the other.

I agree. What I meant is that we need to make sure there's no HWMEM -> CMA
dependency.

-- 
Regards,

Laurent Pinchart


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-27 Thread Arnd Bergmann
On Saturday 26 February 2011, Edward Hervey wrote:
  Are there any gstreamer/linaro/etc core developers attending the ELC in San
  Francisco in April? I think it might be useful to get together before,
  during or after the conference and see if we can turn this discussion in
  something more concrete.
  
  It seems to me that there is an overall agreement of what should be done,
  but that we are far from anything concrete.
 
  I will be there and this was definitely a topic I intended to talk about.
  See you there.

I'll also be there. Should we organize an official BOF session for this and
invite more people?

Arnd



Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Hans Verkuil
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
 2011/2/24 Edward Hervey bilb...@gmail.com:
 
   What *needs* to be solved is an API for data allocation/passing at the
  kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
  userspace (like GStreamer) can pass around, monitor and know about.
 
 I think the patches sent out from ST-Ericsson's Johan Mossberg to
 linux-mm for HWMEM (hardware memory) deals exactly with buffer
 passing, pinning of buffers and so on. The CMA (Contiguous Memory
 Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
 so CMA provides buffers, HWMEM passes them around.
 
 Johan, when you re-spin the HWMEM patchset, can you include
 linaro-dev and linux-media in the CC?

Yes, please. This sounds promising and we at linux-media would very much like
to take a look at this. I hope that the CMA + HWMEM combination is exactly
what we need.

Regards,

Hans

 I think there is *much* interest
 in this mechanism, people just don't know from the name what it
 really does. Maybe it should be called mediamem or something
 instead...
 
 Yours,
 Linus Walleij
 
 

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Felipe Contreras
Hi,

On Fri, Feb 18, 2011 at 6:39 PM, Robert Fekete robert.fek...@linaro.org wrote:
 To make a long story short:
 Different vendors provide custom OpenMax solutions for say Camera/ISP. In
 the Linux eco-system there is V4L2 doing much of this work already and is
 evolving with mediacontroller as well. Then there is the integration in
 Gstreamer... Which solution is the best way forward? Current discussions so
 far put V4L2 greatly in favor of OMX.
 Please bear in mind that OpenMAX as a concept is more like GStreamer in many
 senses. The question is whether camera drivers should have OMX or V4L2 as
 the driver front end. This may perhaps apply to video codecs as well. Then
 there is how best to make use of this in GStreamer in order to achieve
 zero-copy, highly efficient multimedia pipelines. Is gst-omx the way
 forward?

 Let the discussion continue...

We are talking about 3 different layers here which don't necessarily
overlap. You could have a v4l2 driver, which is wrapped in an OpenMAX
IL library, which is wrapped again by gst-openmax. Each layer is
different. The problem here is the OMX layer, which is often
ill-conceived.

First of all, you have to remember that whatever OMX is supposed to
provide, that doesn't apply to camera; you can argue that there's some
value in audio/video encoding/decoding, as the interfaces are very
simple and easy to standardize, but that's not the case with camera. I
haven't worked with OMX camera interfaces, but AFAIK it's very
incomplete and vendors have to implement their own interfaces, which
defeats the purpose of OMX. So OMX provides nothing in the camera
case.

Secondly, there's no OMX kernel interface. You still need something
between kernel to user-space, the only established interface is v4l2.
So, even if you choose OMX in user-space, the sensible choice in
kernel-space is v4l2, otherwise you would end up with some custom
interface which is never good.

And third, as Laurent already pointed out, OpenMAX is _not_ open. The
community has no say in what happens, everything is decided by a
consortium, you need to pay money to be in it, to access their
bugzilla, to subscribe to their mailing lists, and to get access to
their conformance test.

If you forget all the marketing mumbo jumbo about OMX, at the end of the
day what is provided is a bunch of headers (and a document explaining
how to use them). We (the Linux community) can come up with a bunch of
headers too; in fact, we already do much more than that with v4l2. The
only part missing is encoders/decoders, which if needed could be added
very easily (Samsung already does, AFAIK). Right?

Cheers.

-- 
Felipe Contreras


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Felipe Contreras
Hi,

On Thu, Feb 24, 2011 at 3:04 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
 2011/2/23 Sachin Gupta sachin.gu...@linaro.org:

  The imaging coprocessor in today's platforms has a general-purpose DSP
  attached to it. I have seen some work being done to use this DSP for
  graphics/audio processing in case the camera use case is not being tried, or
  also if the camera use cases do not consume the full bandwidth of this
  DSP. I am not sure how v4l2 would fit in such an architecture.

 Earlier in this thread I discussed TI's DSPbridge.

 In drivers/staging/tidspbridge
 http://omappedia.org/wiki/DSPBridge_Project
 you find the TI hackers happy at work with providing a DSP accelerator
 subsystem.

 Isn't it possible for a V4L2 component to use this interface (or something
 more evolved, generic) as backend for assorted DSP offloading?

Yes it is, and it has been part of my to-do list for some time now.

 So using one kernel framework does not exclude using another one
 at the same time. Whereas something like DSPbridge will load firmware
 into DSP accelerators and provide control/datapath for that, this can
 in turn be used by some camera or codec which in turn presents a
 V4L2 or ALSA interface.

 Yes, something along those lines can be done.

 While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP
 instead.

 The hardest part will be to identify the missing V4L2 API pieces and design
 and add them. I don't think the actual driver code will be particularly hard.
 It should be nothing more than a thin front-end for the DSP. Of course, that's
 just theory at the moment :-)

The pieces are known already. I started a project called gst-dsp,
which I plan to split into the gst part, and the part that
communicates with the DSP, this part can move to kernel side with a
v4l2 interface.

It's easier to identify the code in the patches for FFmpeg:
http://article.gmane.org/gmane.comp.video.ffmpeg.devel/116798

 The problem is that someone has to do the actual work for the initial driver.
 And I expect that it will be a substantial amount of work. Future drivers
 should be *much* easier, though.

 A good argument for doing this work is that this API can hide which parts of
 the video subsystem are hardware and which are software. The application
 really doesn't care how it is organized. What is done in hardware on one SoC
 might be done on a DSP instead on another SoC. But the end result is pretty
 much the same.

Exactly.

-- 
Felipe Contreras


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Felipe Contreras
On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart
laurent.pinch...@ideasonboard.com wrote:
  Perhaps GStreamer experts would like to comment on the future plans ahead
  for zero copying/IPC and low power HW use cases? Could Gstreamer adapt
  some ideas from OMX IL making OMX IL obsolete?

 perhaps OMX should adapt some of the ideas from GStreamer ;-)

 I'd very much like to see GStreamer (or something else, maybe lower level,
 but community-maintained) replace OMX.

Yes, it would be great to have something that wraps all the hardware
acceleration and could have support for software codecs too, all in a
standard interface. It would also be great if this interface would be
used in the upper layers like GStreamer, VLC, etc. Kind of what OMX
was supposed to be, but open [1].

Oh wait, I'm describing FFmpeg :) (supports v4l2, VA-API, VDPAU,
DirectX, and soon OMAP3 DSP)

Cheers.

[1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&target=gst-openmax.png

-- 
Felipe Contreras


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Hans Verkuil
On Saturday, February 26, 2011 14:38:50 Felipe Contreras wrote:
 On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart
 laurent.pinch...@ideasonboard.com wrote:
   Perhaps GStreamer experts would like to comment on the future plans ahead
   for zero copying/IPC and low power HW use cases? Could Gstreamer adapt
   some ideas from OMX IL making OMX IL obsolete?
 
  perhaps OMX should adapt some of the ideas from GStreamer ;-)
 
  I'd very much like to see GStreamer (or something else, maybe lower level,
  but community-maintained) replace OMX.
 
 Yes, it would be great to have something that wraps all the hardware
 acceleration and could have support for software codecs too, all in a
 standard interface. It would also be great if this interface would be
 used in the upper layers like GStreamer, VLC, etc. Kind of what OMX
 was supposed to be, but open [1].
 
 Oh wait, I'm describing FFmpeg :) (supports v4l2, VA-API, VDPAU,
 DirectX, and soon OMAP3 DSP)
 
 Cheers.
 
 [1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&target=gst-openmax.png
 
 

Are there any gstreamer/linaro/etc core developers attending the ELC in San
Francisco in April? I think it might be useful to get together before, during
or after the conference and see if we can turn this discussion in something
more concrete.

It seems to me that there is an overall agreement of what should be done, but
that we are far from anything concrete.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Kyungmin Park
On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.wall...@linaro.org wrote:
 2011/2/24 Edward Hervey bilb...@gmail.com:

  What *needs* to be solved is an API for data allocation/passing at the
 kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
 userspace (like GStreamer) can pass around, monitor and know about.

 I think the patches sent out from ST-Ericsson's Johan Mossberg to
 linux-mm for HWMEM (hardware memory) deals exactly with buffer
 passing, pinning of buffers and so on. The CMA (Contiguous Memory
 Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
 so CMA provides buffers, HWMEM passes them around.

 Johan, when you re-spin the HWMEM patchset, can you include
 linaro-dev and linux-media in the CC? I think there is *much* interest
 in this mechanism, people just don't know from the name what it
 really does. Maybe it should be called mediamem or something
 instead...

To Marek,

Can you also update the CMA status and plan?

The important thing is that Russell still doesn't agree with CMA, since it
doesn't solve the ARM issue of different memory attribute mappings. Of
course, there's no way to solve the ARM issue.

We really need the memory solution for multimedia.

Thank you,
Kyungmin Park



 Yours,
 Linus Walleij



Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Edward Hervey
On Sat, 2011-02-26 at 14:47 +0100, Hans Verkuil wrote:
 On Saturday, February 26, 2011 14:38:50 Felipe Contreras wrote:
  On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart
  laurent.pinch...@ideasonboard.com wrote:
    Perhaps GStreamer experts would like to comment on the future plans ahead
    for zero copying/IPC and low power HW use cases? Could Gstreamer adapt
    some ideas from OMX IL making OMX IL obsolete?
  
   perhaps OMX should adapt some of the ideas from GStreamer ;-)
  
   I'd very much like to see GStreamer (or something else, maybe lower
   level, but community-maintained) replace OMX.
  
  Yes, it would be great to have something that wraps all the hardware
  acceleration and could have support for software codecs too, all in a
  standard interface. It would also be great if this interface would be
  used in the upper layers like GStreamer, VLC, etc. Kind of what OMX
  was supposed to be, but open [1].
  
  Oh wait, I'm describing FFmpeg :) (supports v4l2, VA-API, VDPAU,
  DirectX, and soon OMAP3 DSP)
  
  Cheers.
  
  [1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&target=gst-openmax.png
  
  
 
 Are there any gstreamer/linaro/etc core developers attending the ELC in San
 Francisco in April? I think it might be useful to get together before, during
 or after the conference and see if we can turn this discussion in something
 more concrete.
 
 It seems to me that there is an overall agreement of what should be done, but
 that we are far from anything concrete.
 

  I will be there and this was definitely a topic I intended to talk about.
  See you there.

 Edward

 Regards,
 
   Hans
 




Re: [st-ericsson] v4l2 vs omx for camera

2011-02-26 Thread Nicolas Pitre
On Sat, 26 Feb 2011, Kyungmin Park wrote:

 On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.wall...@linaro.org 
 wrote:
  2011/2/24 Edward Hervey bilb...@gmail.com:
 
   What *needs* to be solved is an API for data allocation/passing at the
  kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
  userspace (like GStreamer) can pass around, monitor and know about.
 
  I think the patches sent out from ST-Ericsson's Johan Mossberg to
  linux-mm for HWMEM (hardware memory) deals exactly with buffer
  passing, pinning of buffers and so on. The CMA (Contiguous Memory
  Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
  so CMA provides buffers, HWMEM passes them around.
 
  Johan, when you re-spin the HWMEM patchset, can you include
  linaro-dev and linux-media in the CC? I think there is *much* interest
  in this mechanism, people just don't know from the name what it
  really does. Maybe it should be called mediamem or something
  instead...
 
 To Marek,
 
 Can you also update the CMA status and plan?
 
 The important thing is that Russell still doesn't agree with CMA, since it
 doesn't solve the ARM issue of different memory attribute mappings. Of
 course, there's no way to solve the ARM issue.

There are at least two ways to solve that issue, and I have suggested 
both on the lak mailing list already.

1) Make the direct mapped kernel memory usable by CMA mapped through a 
   page-sized two-level page table mapping which would allow for solving 
   the attributes conflict on a per page basis.

2) Use highmem more aggressively and allow only highmem pages for CMA.
   This makes it quite easy to ensure that the target page(s) for CMA have
   no kernel mappings and therefore no attribute conflict. Furthermore,
   highmem pages are always relocatable for making physically contiguous
   segments available.


Nicolas

Re: [st-ericsson] v4l2 vs omx for camera

2011-02-25 Thread Linus Walleij
2011/2/24 Edward Hervey bilb...@gmail.com:

  What *needs* to be solved is an API for data allocation/passing at the
 kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
 userspace (like GStreamer) can pass around, monitor and know about.

I think the patches sent out from ST-Ericsson's Johan Mossberg to
linux-mm for HWMEM (hardware memory) deals exactly with buffer
passing, pinning of buffers and so on. The CMA (Contiguous Memory
Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
so CMA provides buffers, HWMEM passes them around.

Johan, when you re-spin the HWMEM patchset, can you include
linaro-dev and linux-media in the CC? I think there is *much* interest
in this mechanism, people just don't know from the name what it
really does. Maybe it should be called mediamem or something
instead...

Yours,
Linus Walleij


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Linus Walleij
2011/2/23 Sachin Gupta sachin.gu...@linaro.org:

 The imaging coprocessor in today's platforms has a general-purpose DSP
 attached to it. I have seen some work being done to use this DSP for
 graphics/audio processing in case the camera use case is not being tried, or
 also if the camera use cases do not consume the full bandwidth of this
 DSP. I am not sure how v4l2 would fit in such an architecture.

Earlier in this thread I discussed TI's DSPbridge.

In drivers/staging/tidspbridge
http://omappedia.org/wiki/DSPBridge_Project
you find the TI hackers happy at work with providing a DSP accelerator
subsystem.

Isn't it possible for a V4L2 component to use this interface (or something
more evolved, generic) as backend for assorted DSP offloading?

So using one kernel framework does not exclude using another one
at the same time. Whereas something like DSPbridge will load firmware
into DSP accelerators and provide control/datapath for that, this can
in turn be used by some camera or codec which in turn presents a
V4L2 or ALSA interface.

Yours,
Linus Walleij


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Hans Verkuil
On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
 On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete
 robert.fek...@linaro.org wrote:
  Hi,
 
  In order to expand this knowledge outside of Linaro I took the Liberty of
  inviting both linux-media@vger.kernel.org and
  gstreamer-de...@lists.freedesktop.org. For any newcomer I really recommend
  to do some catch-up reading on
  http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
  (v4l2 vs omx for camera thread) before making any comments. And sign up
  for Linaro-dev while you are at it :-)
 
  To make a long story short:
  Different vendors provide custom OpenMax solutions for say Camera/ISP. In
  the Linux eco-system there is V4L2 doing much of this work already and is
  evolving with mediacontroller as well. Then there is the integration in
  Gstreamer...Which solution is the best way forward. Current discussions so
  far puts V4L2 greatly in favor of OMX.
  Please have in mind that OpenMAX as a concept is more like GStreamer in many
  senses. The question is whether Camera drivers should have OMX or V4L2 as
  the driver front end? This may perhaps apply to video codecs as well. Then
  there is how to in best of ways make use of this in GStreamer in order to
  achieve no copy highly efficient multimedia pipelines. Is gst-omx the way
  forward?
 
 just fwiw, there were some patches to make v4l2src work with userptr
 buffers in case the camera has an mmu and can handle any random
 non-physically-contiguous buffer..  so there is in theory no reason
 why a gst capture pipeline could not be zero copy and capture directly
 into buffers allocated from the display

V4L2 also allows userspace to pass pointers to contiguous physical memory.
On TI systems this memory is usually obtained via the out-of-tree cmem module.

 Certainly a more general way to allocate buffers that any of the hw
 blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
 could use, and possibly share across-process for some zero copy DRI
 style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?

There are two parts to this: first of all you need a way to allocate large
buffers. A CMA patch series that does this is available, but not yet merged.
I'm not sure of the latest status of this series.

The other part is that everyone can use and share these buffers. There isn't
anything for this yet. We have discussed this in the past and we need something
generic for this that all subsystems can use. It's not a good idea to tie this
to any specific framework like GEM. Instead any subsystem should be able to use
the same subsystem-independent buffer pool API.

The actual code is probably not too bad, but trying to coordinate this over all
subsystems is not an easy task.

 
 
  Let the discussion continue...
 
 
  On 17 February 2011 14:48, Laurent Pinchart
  laurent.pinch...@ideasonboard.com wrote:
 
  On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
   On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
 OMX's main purpose is to handle multimedia hardware and offer an
 interface to that HW that looks identical independent of the vendor
 delivering that hardware, much like the v4l2 or USB subsystems try to
 do. And yes, optimally it should be implemented in drivers/omx in
 Linux and a user space library on top of that.
   
Thanks for clarifying this part, it was unclear to me. The reason
being
that it seems OMX does not imply userspace/kernelspace separation, and
I was thinking more of it as a userspace lib. Now my understanding is
that if e.g. OpenMAX defines a certain data structure, say for a PCM
frame or whatever, then that exact struct is supposed to be used by
the
kernelspace/userspace interface, and defined in the include file
exported
by the kernel.
   
 It might be that some alignment also needs to be made between v4l2
 and other OSs' implementations, to ease developing drivers for many OSs
 (sorry I don't know these details, but you ST-E guys should know).
   
The basic conflict I would say is that Linux has its own API+ABI,
which
is defined by V4L and ALSA through a community process without much
thought about any existing standard APIs. (In some cases also
predating
them.)
   
 By the way, IL is about to finalize version 1.2 of OpenMAX IL, which
 is more than a year's work of aligning all vendors and fixing unclear
 and buggy parts.
   
I suspect that the basic problem with Khronos OpenMAX right now is
how to handle communities - for example the X consortium had
something like the same problem a while back, only member companies
could partake in the standard process, and they need of course to pay
an upfront fee for that, and the majority of these companies didn't
exactly send Linux community members to the meetings.
   
  

Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Laurent Pinchart
On Tuesday 22 February 2011 03:44:19 Clark, Rob wrote:
 On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
  In order to expand this knowledge outside of Linaro I took the Liberty of
  inviting both linux-media@vger.kernel.org and
  gstreamer-de...@lists.freedesktop.org. For any newcomer I really
  recommend to do some catch-up reading on
  http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
  (v4l2 vs omx for camera thread) before making any comments. And sign up
  for Linaro-dev while you are at it :-)
  
  To make a long story short:
  Different vendors provide custom OpenMax solutions for say Camera/ISP. In
  the Linux eco-system there is V4L2 doing much of this work already and is
  evolving with mediacontroller as well. Then there is the integration in
  Gstreamer...Which solution is the best way forward. Current discussions
  so far puts V4L2 greatly in favor of OMX.
  Please have in mind that OpenMAX as a concept is more like GStreamer in
  many senses. The question is whether Camera drivers should have OMX or
  V4L2 as the driver front end? This may perhaps apply to video codecs as
  well. Then there is how to in best of ways make use of this in GStreamer
  in order to achieve no copy highly efficient multimedia pipelines. Is
  gst-omx the way forward?
 
 just fwiw, there were some patches to make v4l2src work with userptr
 buffers in case the camera has an mmu and can handle any random
 non-physically-contiguous buffer..  so there is in theory no reason
 why a gst capture pipeline could not be zero copy and capture directly
 into buffers allocated from the display
 
 Certainly a more general way to allocate buffers that any of the hw
 blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
 could use, and possibly share across-process for some zero copy DRI
 style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?

This is something we first discussed in the end of 2009. We need to get people 
from different subsystems around the same table, with memory management 
specialists (especially for ARM), and lay the ground for a common memory 
management system. Discussions on the V4L2 side called this the global buffers 
pool (see http://lwn.net/Articles/353044/ for instance, more information can 
be found in the linux-media list archives).

[snip]


  Let the discussion continue...
  
  On 17 February 2011 14:48, Laurent Pinchart wrote:
  On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
   On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
 OMX's main purpose is to handle multimedia hardware and offer an
 interface to that HW that looks identical independent of the
 vendor delivering that hardware, much like the v4l2 or USB
 subsystems try to do. And yes, optimally it should be implemented
 in drivers/omx in Linux and a user space library on top of that.

Thanks for clarifying this part, it was unclear to me. The reason
being
that it seems OMX does not imply userspace/kernelspace separation,
and I was thinking more of it as a userspace lib. Now my
understanding is that if e.g. OpenMAX defines a certain data
structure, say for a PCM frame or whatever, then that exact struct
is supposed to be used by the
kernelspace/userspace interface, and defined in the include file
exported
by the kernel.

 It might be that some alignment also needs to be made between v4l2
 and other OSs' implementations, to ease developing drivers for many OSs
 (sorry I don't know these details, but you ST-E guys should know).

The basic conflict I would say is that Linux has its own API+ABI,
which
is defined by V4L and ALSA through a community process without much
thought about any existing standard APIs. (In some cases also
predating
them.)

 By the way, IL is about to finalize version 1.2 of OpenMAX IL,
 which is more than a year's work of aligning all vendors and fixing
 unclear and buggy parts.

I suspect that the basic problem with Khronos OpenMAX right now is
how to handle communities - for example the X consortium had
something like the same problem a while back, only member companies
could partake in the standard process, and they need of course to
pay an upfront fee for that, and the majority of these companies
didn't exactly send Linux community members to the meetings.

And now all the companies who took part in OpenMAX somehow end up
having to do a lot of upfront community work if they want to drive
the API:s in a certain direction, discuss it again with the V4L and
ALSA maintainers and so on. Which takes a lot of time and patience
with uncertain outcome, since this process is autonomous from
Khronos. Nobody seems to be doing this; I haven't seen a single patch
aimed at trying to unify the APIs so far. I don't know 

Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Laurent Pinchart
On Thursday 24 February 2011 14:17:12 Hans Verkuil wrote:
 On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
  On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
   Hi,
   
   In order to expand this knowledge outside of Linaro I took the Liberty
   of inviting both linux-media@vger.kernel.org and
   gstreamer-de...@lists.freedesktop.org. For any newcomer I really
   recommend to do some catch-up reading on
   http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
   (v4l2 vs omx for camera thread) before making any comments. And sign
   up for Linaro-dev while you are at it :-)
   
   To make a long story short:
   Different vendors provide custom OpenMax solutions for say Camera/ISP.
   In the Linux eco-system there is V4L2 doing much of this work already
   and is evolving with mediacontroller as well. Then there is the
   integration in Gstreamer...Which solution is the best way forward.
   Current discussions so far puts V4L2 greatly in favor of OMX.
   Please have in mind that OpenMAX as a concept is more like GStreamer in
   many senses. The question is whether Camera drivers should have OMX or
   V4L2 as the driver front end? This may perhaps apply to video codecs
   as well. Then there is how to in best of ways make use of this in
   GStreamer in order to achieve no copy highly efficient multimedia
   pipelines. Is gst-omx the way forward?
  
  just fwiw, there were some patches to make v4l2src work with userptr
  buffers in case the camera has an mmu and can handle any random
  non-physically-contiguous buffer..  so there is in theory no reason
  why a gst capture pipeline could not be zero copy and capture directly
  into buffers allocated from the display
 
 V4L2 also allows userspace to pass pointers to contiguous physical memory.
 On TI systems this memory is usually obtained via the out-of-tree cmem
 module.

On the OMAP3 the ISP doesn't require physically contiguous memory. User 
pointers can be used quite freely, except that they introduce cache management 
issues on ARM when speculative prefetching comes into play (those issues are 
currently ignored completely).

  Certainly a more general way to allocate buffers that any of the hw
  blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
  could use, and possibly share across-process for some zero copy DRI
  style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?
 
 There are two parts to this: first of all you need a way to allocate large
 buffers. The CMA patch series is available (but not yet merged) that does
 this. I'm not sure of the latest status of this series.

Some platforms don't require contiguous memory. What we need is a way to 
allocate memory in the kernel with various options, and use that memory in 
various drivers (V4L2, GPU, ...)

 The other part is that everyone can use and share these buffers. There
 isn't anything for this yet. We have discussed this in the past and we
 need something generic for this that all subsystems can use. It's not a
 good idea to tie this to any specific framework like GEM. Instead any
 subsystem should be able to use the same subsystem-independent buffer pool
 API.
 
 The actual code is probably not too bad, but trying to coordinate this over
 all subsystems is not an easy task.

[snip]

  Not even different vendors' omx camera implementations are
  compatible.. there seems to be too much variance in ISP architecture
  and features for this.
  
  Another point, and possibly the reason that TI went the OMX camera
  route, was that a userspace API made it possible to move the camera
  driver all to a co-processor (with advantages of reduced interrupt
  latency for SIMCOP processing, and a larger part of the code being OS
  independent)..  doing this in a kernel mode driver would have required
  even more of syslink in the kernel.
  
  But maybe it would be nice to have a way to have sensor driver on the
  linux side, pipelined with hw and imaging drivers on a co-processor
  for various algorithms and filters with configuration all exposed to
  userspace thru MCF.. I'm not immediately sure how this would work, but
  it sounds nice at least ;-)
 
 MCF? What does that stand for?

Media Controller Framework I guess.

   The question is if the Linux kernel and V4L2 is ready to incorporate
   several HW(DSP, CPU, ISP, xxHW) in an imaging pipeline for instance.
   The reason Embedded Vendors provide custom solutions is to implement
   low power non(or minimal) CPU intervention pipelines where dedicated
   HW does the work most of the time(like full screen Video Playback).
   
   A common way of managing memory would of course also be necessary as
   well, like hwmem(search for hwmem in Linux-mm) handles to pass buffers
   in between different drivers and processes all the way from
   sources(camera, video parser/decode) to sinks(display, hdmi, video
   encoders(record))
  
  (ahh, ok, you have some of the same thoughts as I do regarding sharing
  buffers 

Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Kyungmin Park
On Thu, Feb 24, 2011 at 10:17 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
 On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete
 robert.fek...@linaro.org wrote:
  Hi,
 
  In order to expand this knowledge outside of Linaro I took the Liberty of
  inviting both linux-media@vger.kernel.org and
  gstreamer-de...@lists.freedesktop.org. For any newcomer I really recommend
  to do some catch-up reading on
  http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
  (v4l2 vs omx for camera thread) before making any comments. And sign up
  for Linaro-dev while you are at it :-)
 
  To make a long story short:
  Different vendors provide custom OpenMax solutions for say Camera/ISP. In
  the Linux eco-system there is V4L2 doing much of this work already and is
  evolving with mediacontroller as well. Then there is the integration in
  Gstreamer...Which solution is the best way forward. Current discussions so
  far puts V4L2 greatly in favor of OMX.
  Please have in mind that OpenMAX as a concept is more like GStreamer in 
  many
  senses. The question is whether Camera drivers should have OMX or V4L2 as
  the driver front end? This may perhaps apply to video codecs as well. Then
  there is how to in best of ways make use of this in GStreamer in order to
  achieve no copy highly efficient multimedia pipelines. Is gst-omx the way
  forward?

 just fwiw, there were some patches to make v4l2src work with userptr
 buffers in case the camera has an mmu and can handle any random
 non-physically-contiguous buffer..  so there is in theory no reason
 why a gst capture pipeline could not be zero copy and capture directly
 into buffers allocated from the display

 V4L2 also allows userspace to pass pointers to contiguous physical memory.
 On TI systems this memory is usually obtained via the out-of-tree cmem module.

 Certainly a more general way to allocate buffers that any of the hw
 blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
 could use, and possibly share across-process for some zero copy DRI
 style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?

 There are two parts to this: first of all you need a way to allocate large
 buffers. The CMA patch series is available (but not yet merged) that does 
 this.
 I'm not sure of the latest status of this series.
The ARM maintainer still doesn't agree with these patches, since they
don't solve the problem of ARM mappings with different memory attributes,
but we will try to send the CMA v9 patch soon.

We really do require a physical memory management module. Each
chip vendor uses their own implementation:
we call our approach CMA, others call theirs cmem, carveout,
hwmem and so on.

I think Laurent's approach is a similar one.

We will try again to get CMA merged.

Thank you,
Kyungmin Park



 The other part is that everyone can use and share these buffers. There isn't
 anything for this yet. We have discussed this in the past and we need 
 something
 generic for this that all subsystems can use. It's not a good idea to tie this
 to any specific framework like GEM. Instead any subsystem should be able to 
 use
 the same subsystem-independent buffer pool API.

 The actual code is probably not too bad, but trying to coordinate this over 
 all
 subsystems is not an easy task.


 
  Let the discussion continue...
 
 
  On 17 February 2011 14:48, Laurent Pinchart
  laurent.pinch...@ideasonboard.com wrote:
 
  On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
   On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
 OMX's main purpose is to handle multimedia hardware and offer an
 interface to that HW that looks identical independent of the vendor
 delivering that hardware, much like the v4l2 or USB subsystems try to
 do. And yes, optimally it should be implemented in drivers/omx in
 Linux and a user space library on top of that.
   
Thanks for clarifying this part, it was unclear to me. The reason
being
that it seems OMX does not imply userspace/kernelspace separation, and
I was thinking more of it as a userspace lib. Now my understanding is
that if e.g. OpenMAX defines a certain data structure, say for a PCM
frame or whatever, then that exact struct is supposed to be used by
the
kernelspace/userspace interface, and defined in the include file
exported
by the kernel.
   
 It might be that some alignment also needs to be made between v4l2
 and other OSs' implementations, to ease developing drivers for many OSs
 (sorry I don't know these details, but you ST-E guys should know).
   
The basic conflict I would say is that Linux has its own API+ABI,
which
is defined by V4L and ALSA through a community process without much
thought about any existing standard APIs. (In some cases also
predating
them.)
   
 By the way IL is about to 

Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Edward Hervey
Hi,

On Fri, 2011-02-18 at 17:39 +0100, Robert Fekete wrote:
 Hi,
 
 In order to expand this knowledge outside of Linaro I took the Liberty
 of inviting both linux-media@vger.kernel.org and
 gstreamer-de...@lists.freedesktop.org. For any newcomer I really
 recommend to do some catch-up reading on
 http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
 (v4l2 vs omx for camera thread) before making any comments. And sign
 up for Linaro-dev while you are at it :-)
 
 To make a long story short:
 Different vendors provide custom OpenMax solutions for say Camera/ISP.
 In the Linux eco-system there is V4L2 doing much of this work already
 and is evolving with mediacontroller as well. Then there is the
 integration in Gstreamer...Which solution is the best way forward.
 Current discussions so far puts V4L2 greatly in favor of OMX.
 Please have in mind that OpenMAX as a concept is more like GStreamer
 in many senses. The question is whether Camera drivers should have OMX
 or V4L2 as the driver front end? This may perhaps apply to video
 codecs as well. Then there is how to in best of ways make use of this
 in GStreamer in order to achieve no copy highly efficient multimedia
 pipelines. Is gst-omx the way forward?
 
 Let the discussion continue...
 

  I'll try to summarize here my perspective from a GStreamer point of
view. You wanted some, here it is :) This is a summary to answering
everything in this mail thread at this time. You can go straight to the
last paragraphs for a summary.

  The question to be asked, imho, is not omx or v4l2 or gstreamer, but
rather: what purpose does each of those APIs/interfaces serve, when do
they make sense, and how can they interact in the most efficient way
possible?

  Looking at the bigger picture, the end goal for all of us is to make
the best usage of whatever hardware/IP/silicon is available all the way up
to end-user applications/use-cases, and to do so in the most efficient way
possible (whether in terms of memory/cpu/power usage at the lower
levels, or in terms of manpower and flexibility at the higher
levels).

  Will GStreamer be as cpu/memory efficient as a pure OMX solution ? No,
I seriously doubt we'll break down all the fundamental notions in
GStreamer to make it use 0 cpu when running some processing.

  Can GStreamer provide higher flexibility than a pure OMX solution ?
Definitely, unless you already have plugins for accessing all the other hw
systems out there: (de)muxers, RTP (de)payloaders, jitter buffers,
network components, auto-pluggers, convenience elements, and the
application interaction that GStreamer has been improving over the past
10 years. All that is far from trivial.
  And as Rob Clark said, you can drop HW-specific gst plugins in
and have them work with all existing applications; the same applies to all
the other peripheral existing *and* future plugins you need to make a
final application. So there you benefit from all the work done by the
non-hw-centric community.

  Can we make GStreamer use as little cpu/overhead as possible without
breaking the fundamental concepts it provides ? Definitely.
  There are quite a few examples out there of zero-memcpy gst plugins
wrapping hw-accelerated systems for a ridiculously small amount of cpu
(they just take an opaque buffer and pass it down; that's 300-500 cpu
instructions for a properly configured setup, if my memory serves me
right). And efforts have been going on for the past 2 years to make
GStreamer overall consume as little cpu as possible, making it as
lockless as possible and so forth. The ongoing GStreamer 0.11/1.0
effort will break down even more barriers for even more
efficient usage.

  Can OMX provide a better interface than v4l2 for video sources ?
Possible, but doubtful. The V4L2 people have been working at it for ages
and it works for a *lot* of devices out there. It is the interface one
expects to use on Linux-based systems: you write your kernel drivers
with a v4l2 interface and people can use them straight away on any linux
setup.

  Do Hardware/Silicon vendors want to write kernel/userspace drivers for
their hw-accelerated codecs in all variants available out there ? No
way, they've got better things to do; they need to choose one.
  Is OMX the best API out there for providing hw-accelerated codecs ?
Not in my opinion. Efforts like libva/vdpau are better in that regard,
but for most ARM SoCs ... OMX is the closest thing to a '''standard'''.
And they (Khronos) don't even provide reference implementations, so you
end up with a bunch of header files that everybody {mis|ab}uses.



  So where does this leave us ?

  * OMX is here for HW-accelerated codecs and vendors are unlikely
to switch away from it, but there are other systems popping up that will
use other APIs (libva, vdpau, ...).
  * V4L2 has a long-standing and evolving interface people expect for
video sources on linux-based systems. Making OMX provide an interface
as robust/tested as that is going to be hard.
  * 

Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Edward Hervey
On Thu, 2011-02-24 at 21:19 +0100, Edward Hervey wrote:
 
   Will GStreamer be as cpu/memory efficient as a pure OMX solution ?
 No,
 I seriously doubt we'll break down all the fundamental notions in
 GStreamer to make it use 0 cpu when running some processing. 

  I blame late night mails...

  I meant Will GStreamer be capable of zero-cpu usage like OMX is
in some situations?. The answer still stands.

  But regarding memory usage, GStreamer can do zero-memcpy provided the
underlying layers have a mechanism it can use.

   Edward




Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Clark, Rob
On Thu, Feb 24, 2011 at 7:10 AM, Laurent Pinchart
laurent.pinch...@ideasonboard.com wrote:
 On Thursday 24 February 2011 14:04:19 Hans Verkuil wrote:
 On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
  2011/2/23 Sachin Gupta sachin.gu...@linaro.org:
   The imaging coprocessor in today's platforms has a general purpose
   DSP attached to it. I have seen some work being done to use this DSP
   for graphics/audio processing in case the camera use case is not
   being tried, or if the camera use cases do not consume the full
   bandwidth of this DSP. I am not sure how v4l2 would fit in such an
   architecture,
 
  Earlier in this thread I discussed TI:s DSPbridge.
 
  In drivers/staging/tidspbridge
  http://omappedia.org/wiki/DSPBridge_Project
  you find the TI hackers happy at work with providing a DSP accelerator
  subsystem.
 
  Isn't it possible for a V4L2 component to use this interface (or
  something more evolved, generic) as backend for assorted DSP offloading?
 
  So using one kernel framework does not exclude using another one
  at the same time. Whereas something like DSPbridge will load firmware
  into DSP accelerators and provide control/datapath for that, this can
  in turn be used by some camera or codec which in turn presents a
  V4L2 or ALSA interface.

 Yes, something along those lines can be done.

 While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP
 instead.

 The hardest part will be to identify the missing V4L2 API pieces and design
 and add them. I don't think the actual driver code will be particularly
 hard. It should be nothing more than a thin front-end for the DSP. Of
 course, that's just theory at the moment :-)

 The problem is that someone has to do the actual work for the initial
 driver. And I expect that it will be a substantial amount of work. Future
 drivers should be *much* easier, though.

 A good argument for doing this work is that this API can hide which parts
 of the video subsystem are hardware and which are software. The
 application really doesn't care how it is organized. What is done in
 hardware on one SoC might be done on a DSP instead on another SoC. But the
 end result is pretty much the same.

 I think the biggest issue we will have here is that part of the inter-
 processors communication stack lives in userspace in most recent SoCs (OMAP4
 comes to mind for instance). This will make implementing a V4L2 driver that
 relies on IPC difficult.

 It's probably time to start seriously thinking about userspace
 drivers/librairies/middlewares/frameworks/whatever, at least to clearly tell
 chip vendors what the Linux community expects.


I suspect more of the IPC framework needs to move down to the kernel..
this is the only way I can see to move the virt-phys address
translation to a trusted layer.  I'm not sure how others would feel
about pushing more of the IPC stack down to the kernel, but at least
it would make it easier for a v4l2 driver to leverage the
coprocessors..

BR,
-R

 --
 Regards,

 Laurent Pinchart



Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Clark, Rob
On Thu, Feb 24, 2011 at 2:19 PM, Edward Hervey bilb...@gmail.com wrote:

  What *needs* to be solved is an API for data allocation/passing at the
 kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
 userspace (like GStreamer) can pass around, monitor and know about.

yes yes yes yes!!

vaapi/vdpau is half way there, as they cover sharing buffers with
X/GL..  but sadly they ignore camera.  There are a few other
inconveniences with vaapi and possibly vdpau.. at least we'd prefer to
have an API that covered decoding config data like SPS/PPS and not just
slice data, since config data NALUs are already decoded by our
accelerators..

  That is a *massive* challenge on its own. The choice of using
 GStreamer or not ... is what you want to do once that challenge is
 solved.

  Regards,

    Edward

 P.S. GStreamer for Android already works :
 http://www.elinux.org/images/a/a4/Android_and_Gstreamer.ppt


yeah, I'm aware of that.. someone please convince google to pick it up
and drop stagefright so we can only worry about a single framework
between android and linux  (and then I look forward to playing with
pitivi on an android phone :-))

BR,
-R

 ___
 gstreamer-devel mailing list
 gstreamer-de...@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel



Re: [st-ericsson] v4l2 vs omx for camera

2011-02-24 Thread Clark, Rob
On Thu, Feb 24, 2011 at 7:17 AM, Hans Verkuil hverk...@xs4all.nl wrote:
 There are two parts to this: first of all you need a way to allocate large
 buffers. The CMA patch series is available (but not yet merged) that does 
 this.
 I'm not sure of the latest status of this series.

 The other part is that everyone can use and share these buffers. There isn't
 anything for this yet. We have discussed this in the past and we need 
 something
 generic for this that all subsystems can use. It's not a good idea to tie this
 to any specific framework like GEM. Instead any subsystem should be able to 
 use
 the same subsystem-independent buffer pool API.

yeah, doesn't need to be GEM.. but should at least inter-operate so we
can share buffers with the display/gpu..

[snip]
 But maybe it would be nice to have a way to have sensor driver on the
 linux side, pipelined with hw and imaging drivers on a co-processor
 for various algorithms and filters with configuration all exposed to
 userspace thru MCF.. I'm not immediately sure how this would work, but
 it sounds nice at least ;-)

 MCF? What does that stand for?


sorry, v4l2 media controller framework

BR,
-R


Re: [st-ericsson] v4l2 vs omx for camera

2011-02-21 Thread Clark, Rob
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete
robert.fek...@linaro.org wrote:
 Hi,

 In order to expand this knowledge outside of Linaro I took the Liberty of
 inviting both linux-media@vger.kernel.org and
 gstreamer-de...@lists.freedesktop.org. For any newcomer I really recommend
 to do some catch-up reading on
 http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
 (v4l2 vs omx for camera thread) before making any comments. And sign up
 for Linaro-dev while you are at it :-)

 To make a long story short:
 Different vendors provide custom OpenMax solutions for say Camera/ISP. In
 the Linux eco-system there is V4L2 doing much of this work already and is
 evolving with mediacontroller as well. Then there is the integration in
 GStreamer... Which solution is the best way forward? Current discussions so
 far put V4L2 greatly in favor of OMX.
 Please bear in mind that OpenMAX as a concept is more like GStreamer in many
 senses. The question is whether camera drivers should have OMX or V4L2 as
 the driver front end. This may perhaps apply to video codecs as well. Then
 there is the question of how best to make use of this in GStreamer in order to
 achieve zero-copy, highly efficient multimedia pipelines. Is gst-omx the way
 forward?

just fwiw, there were some patches to make v4l2src work with userptr
buffers in case the camera has an mmu and can handle any random
non-physically-contiguous buffer..  so there is in theory no reason
why a gst capture pipeline could not be zero copy and capture directly
into buffers allocated from the display

Certainly a more general way to allocate buffers that any of the hw
blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc.)
could use, and possibly share across processes for some zero-copy DRI
style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?


 Let the discussion continue...


 On 17 February 2011 14:48, Laurent Pinchart
 laurent.pinch...@ideasonboard.com wrote:

 On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
  On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
   On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
OMX main purpose is to handle multimedia hardware and offer an
interface to that HW that looks identical independent of the vendor
delivering that hardware, much like the v4l2 or USB subsystems try to
do. And yes, optimally it should be implemented in drivers/omx in Linux
and a user space library on top of that.
  
   Thanks for clarifying this part, it was unclear to me. The reason being
   that it seems OMX does not imply userspace/kernelspace separation, and
   I was thinking more of it as a userspace lib. Now my understanding is
   that if e.g. OpenMAX defines a certain data structure, say for a PCM
   frame or whatever, then that exact struct is supposed to be used by the
   kernelspace/userspace interface, and defined in the include file
   exported by the kernel.
  
It might be that some alignment also needs to be made between v4l2
and other OSs' implementations, to ease developing drivers for many OSs
(sorry I don't know these details, but you ST-E guys should know).
  
   The basic conflict I would say is that Linux has its own API+ABI, which
   is defined by V4L and ALSA through a community process without much
   thought about any existing standard APIs. (In some cases also
   predating them.)
  
By the way, IL is about to finalize version 1.2 of OpenMAX IL, which
is more than a year's work of aligning all vendors and fixing unclear
and buggy parts.
  
   I suspect that the basic problem with Khronos OpenMAX right now is
   how to handle communities - for example the X consortium had
   something like the same problem a while back: only member companies
   could partake in the standards process, and they needed of course to pay
   an upfront fee for that, and the majority of these companies didn't
   exactly send Linux community members to the meetings.
  
   And now all the companies who took part in OpenMAX somehow
   end up having to do a lot of upfront community work if they want
   to drive the APIs in a certain direction, discuss it again with the
   V4L and ALSA maintainers and so on. Which takes a lot of time and
   patience with uncertain outcome, since this process is autonomous
   from Khronos. Nobody seems to be doing this; I haven't seen a single
   patch aimed at trying to unify the APIs so far. I don't know if it'd
   be welcome.
  
   This, coupled with strict delivery deadlines and a marketing will
   to state conformance to OpenMAX, of course leads companies into
   solutions breaking the Linux kernelspace API to be able to present
   this.

 From my experience with OMX, one of the issues is that companies usually
 extend the API to fulfill their platform's needs, without going through
 any standardization process. Coupled with the lack of an open and free
 reference implementation and test tools, this