Re: [PATCH/RFC v2 0/4] Meta-data video device type

2016-05-23 Thread Guennadi Liakhovetski
Hi Sakari,

On Fri, 13 May 2016, Sakari Ailus wrote:

> Hi Hans and Laurent,
> 
> On Fri, May 13, 2016 at 11:26:22AM +0200, Hans Verkuil wrote:
> > On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > > Hello,
> > > 
> > > This RFC patch series is a second attempt at adding support for passing
> > > statistics data to userspace using a standard API.
> > > 
> > > The core requirements haven't changed. Statistics data capture requires
> > > zero-copy and decoupling statistics buffers from image buffers, in order
> > > to make statistics data available to userspace as soon as they're
> > > captured. For those reasons the early consensus we have reached is to use
> > > a video device node with a buffer queue to pass statistics buffers using
> > > the V4L2 API, and this new RFC version doesn't challenge that.
> > > 
> > > The major change compared to the previous version is how the first patch
> > > has been split in two. Patch 1/4 now adds a new metadata buffer type and
> > > format (including their support in videobuf2-v4l2), usable with regular
> > > V4L2 video device nodes, while patch 2/4 adds the new metadata video
> > > device type. Metadata buffer queues are thus usable on both the regular
> > > V4L2 device nodes and the new metadata device nodes.
> > > 
> > > This change was driven by the fact that an important category of use cases
> > > doesn't differentiate between metadata and image data in hardware at the
> > > DMA engine level. With such hardware (CSI-2 receivers in particular, but
> > > other bus types could also fall into this category) a stream containing
> > > both metadata and image data virtual streams is transmitted over a single
> > > physical link. The receiver demultiplexes, filters and routes the virtual
> > > streams to further hardware blocks, and in many cases, directly to DMA
> > > engines that are part of the receiver. Those DMA engines can capture a
> > > single virtual stream to memory, with as many DMA engines physically
> > > present in the device as the number of virtual streams that can be
> > > captured concurrently. All those DMA engines are usually identical and
> > > don't care about the type of data they receive and capture. For that
> > > reason limiting the metadata buffer type to metadata device nodes would
> > > require creating two device nodes for each DMA engine (and possibly more
> > > later if we need to capture other types of data). Not only would this
> > > make the API more complex to use for applications, it wouldn't bring any
> > > added value as the video and metadata device nodes associated with a DMA
> > > engine couldn't be used concurrently anyway, as they both correspond to
> > > the same hardware resource.
> > > 
> > > For this reason the ability to capture metadata on a video device node is
> > > useful and desired, and is implemented in patch 1/4 using a dedicated
> > > video buffer queue. In the CSI-2 case a driver will create two buffer
> > > queues internally for the same DMA engine, and can select which one to use
> > > based on the buffer type passed for instance to the REQBUFS ioctl (details
> > > still need to be discussed here).
> > 
> > Not quite. It still has only one vb2_queue, you just change the type
> > depending on what mode it is in (video or metadata). Similar to raw vs
> > sliced VBI.
> > 
> > In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue
> > type depending on whether raw or sliced VBI is requested. That's probably
> > where I would do this for video vs meta as well.
> > 
> > There is one big thing missing here: how does userspace know in this case
> > whether it will get metadata or video? Who decides which CSI virtual
> > stream is routed
> 
> My first impression would be to say by formats, so that's actually defined
> by the user. The media bus formats do not have such separation between image
> and metadata formats either.

I'm still not sure whether we actually need different formats for 
metadata. E.g. on CSI-2 I expect metadata to use the 8-bit embedded 
non-image Data Type - on all cameras. So, what should the CSI-2 bridge 
sink pad be configured with? Some sensor-specific type or just a format, 
telling it what to capture on the CSI-2 bus?
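
To make the question concrete, the configuration I have in mind would be
something like the below (a sketch only; MEDIA_BUS_FMT_META_8 is a
made-up media bus code for the 8-bit embedded Data Type, nothing of the
sort exists today):

#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

/* Configure the CSI-2 bridge sink pad to capture embedded data lines. */
static int csi2_set_meta_format(int csi2_fd, unsigned int sink_pad,
				unsigned int bytes_per_line,
				unsigned int num_lines)
{
	struct v4l2_subdev_format fmt = {
		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
		.pad = sink_pad,
	};

	fmt.format.code = MEDIA_BUS_FMT_META_8;	/* made-up code */
	fmt.format.width = bytes_per_line;
	fmt.format.height = num_lines;

	return ioctl(csi2_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
}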

> VIDIOC_ENUM_FMT should be amended with media bus code as well so that the
> user can figure out which format corresponds to a given media bus code.

I'm not sure what you mean by this correspondence; could you elaborate on 
this a bit, please?

> > to which video node?
> 
> I think that should be considered as a separate problem, albeit it will
> require a solution as well. And it's a much bigger problem than this one.

Yes, we did want to revive the stream routing work, didn't we? ;-)

But let me add one more use case for consideration: UVC. Some UVC cameras 
include per-frame (meta)data in the private part of 

Re: [PATCH/RFC v2 0/4] Meta-data video device type

2016-05-17 Thread Sakari Ailus
Hi Laurent,

On Mon, May 16, 2016 at 12:21:17PM +0300, Laurent Pinchart wrote:
> Hi Hans,
> 
> On Friday 13 May 2016 11:26:22 Hans Verkuil wrote:
> > On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > > Hello,
> > > 
> > > This RFC patch series is a second attempt at adding support for passing
> > > statistics data to userspace using a standard API.
> > > 
> > > The core requirements haven't changed. Statistics data capture requires
> > > zero-copy and decoupling statistics buffers from image buffers, in order
> > > to make statistics data available to userspace as soon as they're
> > > captured. For those reasons the early consensus we have reached is to use
> > > a video device node with a buffer queue to pass statistics buffers using
> > > the V4L2 API, and this new RFC version doesn't challenge that.
> > > 
> > > The major change compared to the previous version is how the first patch
> > > has been split in two. Patch 1/4 now adds a new metadata buffer type and
> > > format (including their support in videobuf2-v4l2), usable with regular
> > > V4L2 video device nodes, while patch 2/4 adds the new metadata video
> > > device type. Metadata buffer queues are thus usable on both the regular
> > > V4L2 device nodes and the new metadata device nodes.
> > > 
> > > This change was driven by the fact that an important category of use cases
> > > doesn't differentiate between metadata and image data in hardware at the
> > > DMA engine level. With such hardware (CSI-2 receivers in particular, but
> > > other bus types could also fall into this category) a stream containing
> > > both metadata and image data virtual streams is transmitted over a single
> > > physical link. The receiver demultiplexes, filters and routes the virtual
> > > streams to further hardware blocks, and in many cases, directly to DMA
> > > engines that are part of the receiver. Those DMA engines can capture a
> > > single virtual stream to memory, with as many DMA engines physically
> > > present in the device as the number of virtual streams that can be
> > > captured concurrently. All those DMA engines are usually identical and
> > > don't care about the type of data they receive and capture. For that
> > > reason limiting the metadata buffer type to metadata device nodes would
> > > require creating two device nodes for each DMA engine (and possibly more
> > > later if we need to capture other types of data). Not only would this
> > > make the API more complex to use for applications, it wouldn't bring any
> > > added value as the video and metadata device nodes associated with a DMA
> > > engine couldn't be used concurrently anyway, as they both correspond to
> > > the same hardware resource.
> > > 
> > > For this reason the ability to capture metadata on a video device node is
> > > useful and desired, and is implemented in patch 1/4 using a dedicated
> > > video buffer queue. In the CSI-2 case a driver will create two buffer queues
> > > internally for the same DMA engine, and can select which one to use based
> > > on the buffer type passed for instance to the REQBUFS ioctl (details
> > > still need to be discussed here).
> > 
> > Not quite. It still has only one vb2_queue, you just change the type
> > depending on what mode it is in (video or metadata). Similar to raw vs
> > sliced VBI.
> > 
> > In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue
> > type depending on whether raw or sliced VBI is requested. That's probably
> > where I would do this for video vs meta as well.
> 
> That sounds good to me. I didn't know we had support for changing the type of 
> a vb2 queue at runtime, that's good news :-)
> 
> > There is one big thing missing here: how does userspace know in this case
> > whether it will get metadata or video? Who decides which CSI virtual stream
> > is routed to which video node?
> 
> I've replied to Sakari's e-mail about this.
> 
> > > A device that contains DMA engines dedicated to
> > > metadata would create a single buffer queue and implement metadata capture
> > > only.
> > > 
> > > Patch 2/4 then adds a dedicated metadata device node type that is limited
> > > to metadata capture. Support for metadata on video device nodes isn't
> > > removed though, both device node types support metadata capture. I have
> > > included this patch as the code existed in the previous version of the
> > > series (and was explicitly requested during review of an earlier
> > > version), but I don't really see what value this would bring compared to
> > > just using video device nodes.
> > 
> > I'm inclined to agree with you.
> 
Great :-) Should I just drop patch 2/4 then? Sakari, Mauro, would that be 
fine with you?

I do encourage dropping that patch, yes!

> 
> > > As before patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
> > > statistics format as an example, and the new patch 4/4 adds support for
> > > the histogram engine to the VSP1 driver. The implementation uses a
> > > 

Re: [PATCH/RFC v2 0/4] Meta-data video device type

2016-05-16 Thread Laurent Pinchart
Hi Hans,

On Friday 13 May 2016 11:26:22 Hans Verkuil wrote:
> On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > Hello,
> > 
> > This RFC patch series is a second attempt at adding support for passing
> > statistics data to userspace using a standard API.
> > 
> > The core requirements haven't changed. Statistics data capture requires
> > zero-copy and decoupling statistics buffers from image buffers, in order
> > to make statistics data available to userspace as soon as they're
> > captured. For those reasons the early consensus we have reached is to use
> > a video device node with a buffer queue to pass statistics buffers using
> > the V4L2 API, and this new RFC version doesn't challenge that.
> > 
> > The major change compared to the previous version is how the first patch
> > has been split in two. Patch 1/4 now adds a new metadata buffer type and
> > format (including their support in videobuf2-v4l2), usable with regular
> > V4L2 video device nodes, while patch 2/4 adds the new metadata video
> > device type. Metadata buffer queues are thus usable on both the regular
> > V4L2 device nodes and the new metadata device nodes.
> > 
> > This change was driven by the fact that an important category of use cases
> > doesn't differentiate between metadata and image data in hardware at the
> > DMA engine level. With such hardware (CSI-2 receivers in particular, but
> > other bus types could also fall into this category) a stream containing
> > both metadata and image data virtual streams is transmitted over a single
> > physical link. The receiver demultiplexes, filters and routes the virtual
> > streams to further hardware blocks, and in many cases, directly to DMA
> > engines that are part of the receiver. Those DMA engines can capture a
> > single virtual stream to memory, with as many DMA engines physically
> > present in the device as the number of virtual streams that can be
> > captured concurrently. All those DMA engines are usually identical and
> > don't care about the type of data they receive and capture. For that
> > reason limiting the metadata buffer type to metadata device nodes would
> > require creating two device nodes for each DMA engine (and possibly more
> > later if we need to capture other types of data). Not only would this
> > make the API more complex to use for applications, it wouldn't bring any
> > added value as the video and metadata device nodes associated with a DMA
> > engine couldn't be used concurrently anyway, as they both correspond to
> > the same hardware resource.
> > 
> > For this reason the ability to capture metadata on a video device node is
> > useful and desired, and is implemented in patch 1/4 using a dedicated
> > video buffer queue. In the CSI-2 case a driver will create two buffer queues
> > internally for the same DMA engine, and can select which one to use based
> > on the buffer type passed for instance to the REQBUFS ioctl (details
> > still need to be discussed here).
> 
> Not quite. It still has only one vb2_queue, you just change the type
> depending on what mode it is in (video or metadata). Similar to raw vs
> sliced VBI.
> 
> In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue
> type depending on whether raw or sliced VBI is requested. That's probably
> where I would do this for video vs meta as well.

That sounds good to me. I didn't know we had support for changing the type of 
a vb2 queue at runtime, that's good news :-)

> There is one big thing missing here: how does userspace know in this case
> whether it will get metadata or video? Who decides which CSI virtual stream
> is routed to which video node?

I've replied to Sakari's e-mail about this.

> > A device that contains DMA engines dedicated to
> > metadata would create a single buffer queue and implement metadata capture
> > only.
> > 
> > Patch 2/4 then adds a dedicated metadata device node type that is limited
> > to metadata capture. Support for metadata on video device nodes isn't
> > removed though, both device node types support metadata capture. I have
> > included this patch as the code existed in the previous version of the
> > series (and was explicitly requested during review of an earlier
> > version), but I don't really see what value this would bring compared to
> > just using video device nodes.
> 
> I'm inclined to agree with you.

Great :-) Should I just drop patch 2/4 then? Sakari, Mauro, would that be 
fine with you?

> > As before patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
> > statistics format as an example, and the new patch 4/4 adds support for
> > the histogram engine to the VSP1 driver. The implementation uses a
> > metadata device node, and switching to a video device node wouldn't
> > require more than applying the following one-liner patch.
> > 
> > -   histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
> > +   histo->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
> 
> You probably mean replacing this:
> 
>  

Re: [PATCH/RFC v2 0/4] Meta-data video device type

2016-05-16 Thread Laurent Pinchart
Hello,

On Friday 13 May 2016 12:52:42 Sakari Ailus wrote:
> On Fri, May 13, 2016 at 11:26:22AM +0200, Hans Verkuil wrote:
> > On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > > Hello,
> > > 
> > > This RFC patch series is a second attempt at adding support for passing
> > > statistics data to userspace using a standard API.
> > > 
> > > The core requirements haven't changed. Statistics data capture requires
> > > zero-copy and decoupling statistics buffers from image buffers, in
> > > order to make statistics data available to userspace as soon as they're
> > > captured. For those reasons the early consensus we have reached is to
> > > use a video device node with a buffer queue to pass statistics buffers
> > > using the V4L2 API, and this new RFC version doesn't challenge that.
> > > 
> > > The major change compared to the previous version is how the first patch
> > > has been split in two. Patch 1/4 now adds a new metadata buffer type
> > > and format (including their support in videobuf2-v4l2), usable with
> > > regular V4L2 video device nodes, while patch 2/4 adds the new metadata
> > > video device type. Metadata buffer queues are thus usable on both the
> > > regular V4L2 device nodes and the new metadata device nodes.
> > > 
> > > This change was driven by the fact that an important category of use
> > > cases doesn't differentiate between metadata and image data in hardware
> > > at the DMA engine level. With such hardware (CSI-2 receivers in
> > > particular, but other bus types could also fall into this category) a
> > > stream containing both metadata and image data virtual streams is
> > > transmitted over a single physical link. The receiver demultiplexes,
> > > filters and routes the virtual streams to further hardware blocks, and
> > > in many cases, directly to DMA engines that are part of the receiver.
> > > Those DMA engines can capture a single virtual stream to memory, with as
> > > many DMA engines physically present in the device as the number of
> > > virtual streams that can be captured concurrently. All those DMA engines
> > > are usually identical and don't care about the type of data they receive
> > > and capture. For that reason limiting the metadata buffer type to
> > > metadata device nodes would require creating two device nodes for each
> > > DMA engine (and possibly more later if we need to capture other types
> > > of data). Not only would this make the API more complex to use for
> > > applications, it wouldn't bring any added value as the video and
> > > metadata device nodes associated with a DMA engine couldn't be used
> > > concurrently anyway, as they both correspond to the same hardware
> > > resource.
> > > 
> > > For this reason the ability to capture metadata on a video device node
> > > is useful and desired, and is implemented in patch 1/4 using a dedicated
> > > video buffer queue. In the CSI-2 case a driver will create two buffer
> > > queues internally for the same DMA engine, and can select which one to
> > > use based on the buffer type passed for instance to the REQBUFS ioctl
> > > (details still need to be discussed here).
> > 
> > Not quite. It still has only one vb2_queue, you just change the type
> > depending on what mode it is in (video or metadata). Similar to raw vs
> > sliced VBI.
> > 
> > In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue
> > type depending on whether raw or sliced VBI is requested. That's probably
> > where I would do this for video vs meta as well.
> > 
> > There is one big thing missing here: how does userspace know in this case
> > whether it will get metadata or video? Who decides which CSI virtual
> > stream is routed
> 
> My first impression would be to say by formats, so that's actually defined
> by the user. The media bus formats do not have such separation between image
> and metadata formats either.

It should be configured by userspace, as ultimately it's userspace that knows 
which virtual channels it wants to capture. This is somewhat analogous to the 
input selection API: a capture driver that supports multiple inputs can't 
decide on its own which input(s) to capture.

We might be able to make an educated guess in the kernel if we have access to 
information about what the source produces, but that would just be a default 
configuration that userspace will override when needed (and I'm not even sure 
that would be a good idea, I'm brainstorming here).

In order to configure virtual channel filtering/selection userspace would need 
to know what the source produces. This knowledge could be hardcoded in 
applications that deal with known hardware, but in the general case we 
probably need a new API to expose that information to userspace.

The next question is how to configure filtering and routing. We also need a 
new API here, and it could make sense to discuss it along with the subdev 
routing API I proposed a while back. I don't know whether the two problems 
could be solved by the 

Re: [PATCH/RFC v2 0/4] Meta-data video device type

2016-05-13 Thread Sakari Ailus
Hi Hans and Laurent,

On Fri, May 13, 2016 at 11:26:22AM +0200, Hans Verkuil wrote:
> On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > Hello,
> > 
> > This RFC patch series is a second attempt at adding support for passing
> > statistics data to userspace using a standard API.
> > 
> > The core requirements haven't changed. Statistics data capture requires
> > zero-copy and decoupling statistics buffers from image buffers, in order to
> > make statistics data available to userspace as soon as they're captured. For
> > those reasons the early consensus we have reached is to use a video device
> > node with a buffer queue to pass statistics buffers using the V4L2 API, and
> > this new RFC version doesn't challenge that.
> > 
> > The major change compared to the previous version is how the first patch has
> > been split in two. Patch 1/4 now adds a new metadata buffer type and format
> > (including their support in videobuf2-v4l2), usable with regular V4L2 video
> > device nodes, while patch 2/4 adds the new metadata video device type.
> > Metadata buffer queues are thus usable on both the regular V4L2 device nodes
> > and the new metadata device nodes.
> > 
> > This change was driven by the fact that an important category of use cases
> > doesn't differentiate between metadata and image data in hardware at the DMA
> > engine level. With such hardware (CSI-2 receivers in particular, but other
> > bus types could also fall into this category) a stream containing both
> > metadata and image data virtual streams is transmitted over a single
> > physical link. The receiver demultiplexes, filters and routes the virtual
> > streams to further hardware blocks, and in many cases, directly to DMA
> > engines that are part of the receiver. Those DMA engines can capture a
> > single virtual stream to memory, with as many DMA engines physically
> > present in the device as the number of virtual streams that can be
> > captured concurrently. All those DMA engines are usually identical and
> > don't care about the type of data they receive and capture. For that
> > reason limiting the metadata buffer type to metadata device nodes would
> > require creating two device nodes for each DMA engine (and possibly more
> > later if we need to capture other types of data). Not only would this
> > make the API more complex to use for applications, it wouldn't bring any
> > added value as the video and metadata device nodes associated with a DMA
> > engine couldn't be used concurrently anyway, as they both correspond to
> > the same hardware resource.
> > 
> > For this reason the ability to capture metadata on a video device node is
> > useful and desired, and is implemented in patch 1/4 using a dedicated
> > video buffer queue. In the CSI-2 case a driver will create two buffer queues
> > internally for the same DMA engine, and can select which one to use based on
> > the buffer type passed for instance to the REQBUFS ioctl (details still need
> > to be discussed here).
> 
> Not quite. It still has only one vb2_queue, you just change the type depending
> on what mode it is in (video or metadata). Similar to raw vs sliced VBI.
> 
> In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue type
> depending on whether raw or sliced VBI is requested. That's probably where I
> would do this for video vs meta as well.
> 
> There is one big thing missing here: how does userspace know in this case
> whether it will get metadata or video? Who decides which CSI virtual stream
> is routed

My first impression would be to say by formats, so that's actually defined
by the user. The media bus formats do not have such separation between image
and metadata formats either.

VIDIOC_ENUM_FMT should be amended with media bus code as well so that the
user can figure out which format corresponds to a given media bus code.
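
To be concrete, I'd see it roughly as follows (a sketch only; the
mbus_code field doesn't exist in struct v4l2_fmtdesc today and would have
to be carved out of the reserved array):

struct v4l2_fmtdesc {
	__u32	index;		/* format number */
	__u32	type;		/* enum v4l2_buf_type */
	__u32	flags;
	__u8	description[32];
	__u32	pixelformat;	/* fourcc */
	__u32	mbus_code;	/* new: matching media bus code */
	__u32	reserved[3];	/* was reserved[4] */
};

The user could then pick the pixel format whose mbus_code matches what the
subdev side of the pipeline has been configured with.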

> to which video node?

I think that should be considered as a separate problem, albeit it will
require a solution as well. And it's a much bigger problem than this one.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ai...@iki.fi XMPP: sai...@retiisi.org.uk


Re: [PATCH/RFC v2 0/4] Meta-data video device type

2016-05-13 Thread Hans Verkuil
On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> Hello,
> 
> This RFC patch series is a second attempt at adding support for passing
> statistics data to userspace using a standard API.
> 
> The core requirements haven't changed. Statistics data capture requires
> zero-copy and decoupling statistics buffers from image buffers, in order to
> make statistics data available to userspace as soon as they're captured. For
> those reasons the early consensus we have reached is to use a video device
> node with a buffer queue to pass statistics buffers using the V4L2 API, and
> this new RFC version doesn't challenge that.
> 
> The major change compared to the previous version is how the first patch has
> been split in two. Patch 1/4 now adds a new metadata buffer type and format
> (including their support in videobuf2-v4l2), usable with regular V4L2 video
> device nodes, while patch 2/4 adds the new metadata video device type.
> Metadata buffer queues are thus usable on both the regular V4L2 device nodes
> and the new metadata device nodes.
> 
> This change was driven by the fact that an important category of use cases
> doesn't differentiate between metadata and image data in hardware at the DMA
> engine level. With such hardware (CSI-2 receivers in particular, but other bus
> types could also fall into this category) a stream containing both metadata
> and image data virtual streams is transmitted over a single physical link. The
> receiver demultiplexes, filters and routes the virtual streams to further
> hardware blocks, and in many cases, directly to DMA engines that are part of
> the receiver. Those DMA engines can capture a single virtual stream to memory,
> with as many DMA engines physically present in the device as the number of
> virtual streams that can be captured concurrently. All those DMA engines are
> usually identical and don't care about the type of data they receive and
> capture. For that reason limiting the metadata buffer type to metadata device
> nodes would require creating two device nodes for each DMA engine (and
> possibly more later if we need to capture other types of data). Not only would
> this make the API more complex to use for applications, it wouldn't bring any
> added value as the video and metadata device nodes associated with a DMA
> engine couldn't be used concurrently anyway, as they both correspond to the
> same hardware resource.
> 
> For this reason the ability to capture metadata on a video device node is
> useful and desired, and is implemented in patch 1/4 using a dedicated
> video buffer queue. In the CSI-2 case a driver will create two buffer queues
> internally for the same DMA engine, and can select which one to use based on
> the buffer type passed for instance to the REQBUFS ioctl (details still need
> to be discussed here).

Not quite. It still has only one vb2_queue, you just change the type depending
on what mode it is in (video or metadata). Similar to raw vs sliced VBI.

In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue type
depending on whether raw or sliced VBI is requested. That's probably where I
would do this for video vs meta as well.
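
Roughly like this in the driver, for what it's worth (a sketch only,
skipping locking and error handling; 'mydev' is a stand-in for the
driver's device structure, and the s_fmt op for the metadata type is the
one patch 1/4 would introduce):

static int mydrv_s_fmt_meta_cap(struct file *file, void *fh,
				struct v4l2_format *f)
{
	struct mydev *dev = video_drvdata(file);

	/* The queue type may only change while no buffers are allocated. */
	if (vb2_is_busy(&dev->queue))
		return -EBUSY;

	/* Switch the single vb2 queue from video to metadata capture,
	 * just like the raw vs sliced VBI case. */
	dev->queue.type = V4L2_BUF_TYPE_META_CAPTURE;

	/* ... program the DMA engine for the metadata format ... */

	return 0;
}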

There is one big thing missing here: how does userspace know in this case
whether it will get metadata or video? Who decides which CSI virtual stream
is routed to which video node?

> A device that contains DMA engines dedicated to
> metadata would create a single buffer queue and implement metadata capture
> only.
> 
> Patch 2/4 then adds a dedicated metadata device node type that is limited to
> metadata capture. Support for metadata on video device nodes isn't removed
> though, both device node types support metadata capture. I have included this
> patch as the code existed in the previous version of the series (and was
> explicitly requested during review of an earlier version), but I don't really
> see what value this would bring compared to just using video device nodes.

I'm inclined to agree with you.

> As before patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
> statistics format as an example, and the new patch 4/4 adds support for the
> histogram engine to the VSP1 driver. The implementation uses a metadata device
> node, and switching to a video device node wouldn't require more than applying
> the following one-liner patch.
> 
> -   histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
> +   histo->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

You probably mean replacing this:

histo->video.vfl_type = VFL_TYPE_META;

by this:

histo->video.vfl_type = VFL_TYPE_GRABBER;

Regards,

Hans

> 
> Besides whether patch 2/4 should be included or not (I would prefer dropping
> it) and how to select the active queue on a multi-type video device node
> (through the REQBUFS ioctl or through a different means), one point that
> remains to be discussed is what information to include in the metadata
> format. Patch 1/4 defines the new metadata format as
> 
> 

[PATCH/RFC v2 0/4] Meta-data video device type

2016-05-11 Thread Laurent Pinchart
Hello,

This RFC patch series is a second attempt at adding support for passing
statistics data to userspace using a standard API.

The core requirements haven't changed. Statistics data capture requires
zero-copy and decoupling statistics buffers from image buffers, in order to
make statistics data available to userspace as soon as they're captured. For
those reasons the early consensus we have reached is to use a video device
node with a buffer queue to pass statistics buffers using the V4L2 API, and
this new RFC version doesn't challenge that.

The major change compared to the previous version is how the first patch has
been split in two. Patch 1/4 now adds a new metadata buffer type and format
(including their support in videobuf2-v4l2), usable with regular V4L2 video
device nodes, while patch 2/4 adds the new metadata video device type.
Metadata buffer queues are thus usable on both the regular V4L2 device nodes
and the new metadata device nodes.

This change was driven by the fact that an important category of use cases
doesn't differentiate between metadata and image data in hardware at the DMA
engine level. With such hardware (CSI-2 receivers in particular, but other bus
types could also fall into this category) a stream containing both metadata
and image data virtual streams is transmitted over a single physical link. The
receiver demultiplexes, filters and routes the virtual streams to further
hardware blocks, and in many cases, directly to DMA engines that are part of
the receiver. Those DMA engines can capture a single virtual stream to memory,
with as many DMA engines physically present in the device as the number of
virtual streams that can be captured concurrently. All those DMA engines are
usually identical and don't care about the type of data they receive and
capture. For that reason limiting the metadata buffer type to metadata device
nodes would require creating two device nodes for each DMA engine (and
possibly more later if we need to capture other types of data). Not only would
this make the API more complex to use for applications, it wouldn't bring any
added value as the video and metadata device nodes associated with a DMA
engine couldn't be used concurrently anyway, as they both correspond to the
same hardware resource.

For this reason the ability to capture metadata on a video device node is
useful and desired, and is implemented in patch 1/4 using a dedicated video
buffer queue. In the CSI-2 case a driver will create two buffer queues
internally for the same DMA engine, and can select which one to use based on
the buffer type passed for instance to the REQBUFS ioctl (details still need
to be discussed here). A device that contains DMA engines dedicated to
metadata would create a single buffer queue and implement metadata capture
only.
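
From the application side that selection would look along these lines (a
sketch of the intended usage only, details to be settled; the buffer type
is the one introduced in patch 1/4):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request metadata buffers on a regular video device node; the buffer
 * type selects which of the two internal queues the DMA engine uses. */
int request_meta_buffers(const char *devnode)
{
	struct v4l2_requestbuffers reqbufs = {
		.count = 4,
		.type = V4L2_BUF_TYPE_META_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
	};
	int fd = open(devnode, O_RDWR);

	if (fd < 0 || ioctl(fd, VIDIOC_REQBUFS, &reqbufs) < 0)
		return -1;

	return fd;
}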

Patch 2/4 then adds a dedicated metadata device node type that is limited to
metadata capture. Support for metadata on video device nodes isn't removed
though, both device node types support metadata capture. I have included this
patch as the code existed in the previous version of the series (and was
explicitly requested during review of an earlier version), but I don't really
see what value this would bring compared to just using video device nodes.

As before patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
statistics format as an example, and the new patch 4/4 adds support for the
histogram engine to the VSP1 driver. The implementation uses a metadata device
node, and switching to a video device node wouldn't require more than applying
the following one-liner patch.

-   histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
+   histo->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

Besides whether patch 2/4 should be included or not (I would prefer dropping
it) and how to select the active queue on a multi-type video device node
(through the REQBUFS ioctl or through a different means), one point that
remains to be discussed is what information to include in the metadata
format. Patch 1/4 defines the new metadata format as

struct v4l2_meta_format {
__u32   dataformat;
__u32   buffersize;
__u8reserved[24];
} __attribute__ ((packed));

but at least in the CSI-2 case metadata is, as image data, transmitted in
lines and the receiver needs to be programmed with the line length and the
number of lines for proper operation. We started discussing this on IRC but
haven't reached a conclusion yet.
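
One strawman floated there (a sketch only, field names and semantics are
entirely up for discussion) would be to add the line geometry to the
format while keeping the structure size unchanged:

struct v4l2_meta_format {
	__u32	dataformat;
	__u32	bytesperline;	/* line length in bytes */
	__u32	lines;		/* number of lines per buffer */
	__u32	buffersize;	/* total buffer size in bytes */
	__u8	reserved[16];
} __attribute__ ((packed));

That would let drivers for line-based receivers derive the DMA
configuration from the format alone.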

Laurent Pinchart (4):
  v4l: Add metadata buffer type and format
  v4l: Add metadata video device type
  v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine
  v4l: vsp1: Add HGO support

 Documentation/DocBook/media/v4l/dev-meta.xml   |  97 
 .../DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml | 307 +
 Documentation/DocBook/media/v4l/pixfmt.xml |   9 +
 Documentation/DocBook/media/v4l/v4l2.xml   |   1