Hi Sakari,

On 09/24/2012 08:26 PM, Sakari Ailus wrote:
> On Mon, Sep 24, 2012 at 06:51:41PM +0200, Sylwester Nawrocki wrote:
>> On 09/24/2012 03:44 PM, Sakari Ailus wrote:
>>> How about using a separate video buffer queue for the purpose? That would
>>> provide a nice way to pass it to the user space where it's needed. It'd also
>>> play nicely together with the frame layout descriptors.
>>
>> It's tempting, but doing frame synchronisation in user space in this case
>> would have been painful, if at all possible in a reliable manner. It would
>> have significantly complicated both applications and the drivers.
> 
> Let's face it: applications that are interested in this information have to
> do exactly the same frame number matching with the statistics buffers. Just
> stitching the data to the same video buffer isn't a generic solution.

Let me list the disadvantages of using a separate buffer queue:

1. significant complication of the driver:
    - a video node must be added, with all its memory and file ops,
    - more complicated VIDIOC_STREAMON logic; the MIPI CSI receiver needs
      to care about the data pipeline details (power, streaming, ...);
2. more processing overhead due to handling a second /dev/video node;
3. much more complex device handling in user space.

All this for virtually nothing but the two 4-byte integers we are interested
in from the Embedded Data stream.

And the advantages:

1. A more generic solution: no need to invent new fourccs for standard image
   data formats with metadata (a new fourcc is needed anyway for the device-
   specific image data, e.g. JPEG/YUV/.../YUV/JPEG/meta-data, and we can
   choose to use multi-planar only for non-standard formats and a separate
   meta-data buffer queue for the others);
2. Probably other host IF/ISP drivers would implement it this way, or would
   they?
3. What else could be added?

Currently I don't see a justification for using a separate video node as the
embedded data frame grabber. I don't expect it to be useful for us in the
future, not for ISPs that process sensor data separately from the host CPU.
Moreover, this MIPI-CSIS device has a maximum buffer size of only 4 KiB,
which effectively limits its possible use cases.

I don't think there is a need to force host drivers to use either separate
buffer queues or the multi-planar API, especially in the case of non-standard
hybrid data formats. I'm ready to discuss the separate buffer queue approach
once we have a real use case for it; I don't think these two methods are
mutually exclusive. Until then I would prefer not to live with an awkward
solution.

>> VIDIOC_STREAMON, VIDIOC_QBUF/DQBUF calls would have been at least roughly
>> synchronized, and applications would have to know somehow which video nodes
>> need to be opened together. I guess things like that could be abstracted
>> in a library, but what do we really gain for such effort ?
>> And now I can just ask kernel for 2-planar buffers where everything is in
>> place..
> 
> That's equally good --- some hardware can only do that after all, but do you
> need the callback in that case, if there's a single destination buffer
> anyway? Wouldn't the frame layout descriptor have enough information to do
> this?

There are as many buffers as the user requested with REQBUFS. In the VSYNC
interrupt of one device a buffer is configured for the other device. With
each frame interrupt a different buffer is used, the one that the DMA engine
actually writes data to. Data is copied from the MIPI-CSIS internal
ioremapped buffer to a buffer owned by the host interface driver, and the
callback is used for dynamically switching buffers at the subdev.


Regards,
-- 
Sylwester Nawrocki
Samsung Poland R&D Center