I'm trying to understand how the media framework and V4L2 share the 
responsibility of configuring a video device.  Referring to the ISP code on 
Laurent's media-0004-omap3isp branch, the video device is now split up into 
several devices.  Suppose you have a sensor delivering raw Bayer data to the 
CCDC: I could get this raw data from the /dev/video2 device (named "OMAP3 ISP 
CCDC output"), or I could get YUV data from the previewer or resizer.  But I 
would no longer have a single device where I could ENUM_FMT and see that I 
could get either.  Correct?

Having settled on a particular video device, (how) do regular controls (i.e. 
VIDIOC_[S|G]_CTRL) work?  I don't see any support for them in ispvideo.c.  Has 
it just not been implemented yet, or is the application expected to access the 
subdevs individually?
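
For instance, is the expectation that I open the sensor's subdev node directly 
and do something like the following?  (The device path and control are only 
examples to illustrate the question.)

/*
 * Sketch: get/set a control on the sensor's subdev node instead of on the
 * ISP video node.  Path and control ID are hypothetical.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/v4l-subdev8", O_RDWR);  /* hypothetical sensor subdev */
        struct v4l2_control ctrl;

        if (fd < 0)
                return 1;

        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = V4L2_CID_EXPOSURE;

        if (ioctl(fd, VIDIOC_G_CTRL, &ctrl) == 0) {
                ctrl.value += 1;
                ioctl(fd, VIDIOC_S_CTRL, &ctrl);
        }

        close(fd);
        return 0;
}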

Basically the same question for CROPCAP: isp_video_cropcap passes it on to the 
last link in the chain, but none of the subdevs in the ISP implement a cropcap 
function yet.  Does this still need to be written?
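
Again, just to show what I'm trying to do (path assumed; I realize the call 
may simply fail as long as no subdev implements cropcap):

/*
 * Sketch: query cropping capabilities through the video node, which
 * isp_video_cropcap forwards to the connected subdev.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video2", O_RDWR);
        struct v4l2_cropcap cap;

        if (fd < 0)
                return 1;

        memset(&cap, 0, sizeof(cap));
        cap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        if (ioctl(fd, VIDIOC_CROPCAP, &cap) == 0)
                printf("bounds: %ux%u\n", cap.bounds.width, cap.bounds.height);
        else
                perror("VIDIOC_CROPCAP");

        close(fd);
        return 0;
}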

-- 
Michael Jones

MATRIX VISION GmbH, Talstrasse 16, DE-71570 Oppenweiler
Register court: Amtsgericht Stuttgart, HRB 271090
Managing directors: Gerhard Thullner, Werner Armingeon, Uwe Furtner