We have a solution to this, which is to make them the sensor's private data and 
let the ISP access them by sharing the same structure definition. 

But I don't think it's a good idea, because it is so driver-specific that it 
cannot be reused by others.
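For illustration, the shared definition could look something like the sketch below. This is hypothetical: the struct name, field types, and enum values are my assumptions based on the parameter list in my previous mail, not the actual driver's definitions.

```c
#include <assert.h>

/* Hypothetical shared definition of the sensor's acquisition-interface
 * parameters; all names and values here are illustrative only. */
enum ycbcr_seq { SEQ_YCBYCR, SEQ_YCRYCB, SEQ_CBYCRY, SEQ_CRYCBY };
enum bayer_pat { PAT_RGRG_GBGB, PAT_GRGR_BGBG, PAT_GBGB_RGRG, PAT_BGBG_GRGR };

struct sensor_if_params {
    unsigned int bus_width;   /* width of the bus between sensor and ISP */
    unsigned int field_sel;   /* field selection: even or odd */
    enum ycbcr_seq ycseq;     /* YCbCr component order */
    unsigned int conv422;     /* 4:2:2 to 4:4:4 conversion type */
    enum bayer_pat bpat;      /* Bayer sampling sequence */
    unsigned int hpol;        /* horizontal sync polarity */
    unsigned int vpol;        /* vertical sync polarity */
    unsigned int edge;        /* sampling edge (rising or falling) */
};

/* The sensor driver would fill this in as its private data, and the
 * ISP driver would read it through the same structure definition. */
static struct sensor_if_params example_params = {
    .bus_width = 8,
    .ycseq     = SEQ_YCBYCR,
    .bpat      = PAT_RGRG_GBGB,
};
```

The problem with this scheme is exactly what I describe above: both drivers must agree on a private structure layout, so it cannot be shared with other sensor or bridge drivers.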

Best Regards
Jinlu Yu
UMG UPSG PRC
INET: 8758 1603
TEL:  86 10 8217 1603
FAX:  86 10 8286 1400
-----Original Message-----
From: linux-media-ow...@vger.kernel.org 
[mailto:linux-media-ow...@vger.kernel.org] On Behalf Of Yu, Jinlu
Sent: September 27, 2009 22:31
To: Mauro Carvalho Chehab
Cc: linux-media@vger.kernel.org
Subject: RE: [PATCH 0/5] V4L2 patches for Intel Moorestown Camera Imaging

Hi, Mauro

Thank you for your suggestion on this.

Now I have another problem. The ISP needs the following parameters from the 
sensor to set up the acquisition interface, but I cannot find a suitable subdev 
ioctl to get them from the sensor driver. 

bus_width: width of the bus connecting the sensor and the ISP
field_sel: field selection, even or odd
ycseq: YCbCr sequence, YCbCr or YCrCb or CbYCrY or CrYCbY
conv422: subsampling type, co-sited 4:4:4, non-co-sited 4:4:4, or color 
interpolation
bpat: Bayer sampling sequence, RGRG GBGB or GRGR BGBG or ...
hpol: horizontal sync polarity
vpol: vertical sync polarity
edge: sampling edge

Best Regards
Jinlu Yu
-----Original Message-----
From: Mauro Carvalho Chehab [mailto:mche...@infradead.org] 
Sent: September 24, 2009 19:45
To: Yu, Jinlu
Cc: linux-media@vger.kernel.org
Subject: Re: [PATCH 0/5] V4L2 patches for Intel Moorestown Camera Imaging

On Thu, 24 Sep 2009 19:21:40 +0800
"Yu, Jinlu" <jinlu...@intel.com> wrote:

> Hi, Hans/Guennadi
> 
> I am modifying these drivers to comply with the v4l2 framework. I have finished 
> replacing our buffer management code with the utility functions from videobuf-core.c 
> and videobuf-dma-contig.c. Now I am working on the subdev. One thing I am 
> sure of is that each sensor should be registered as a v4l2_subdev, and the ISP (Image 
> Signal Processor) should be registered as a v4l2_device acting as the bridge device. 
> 
> But we have two ways to deal with the relationship between the sensor and the ISP, and we 
> don't know which one is better. Could you help me with this?
> 
> No.1. Register the ISP as a video_device (/dev/video0) and treat each of the 
> sensors (SOC and RAW) as an input of the ISP. If I want to change the sensor, 
> I use VIDIOC_S_INPUT to change the input from sensor A to sensor B. But I have 
> a concern about this ioctl. I didn't find any code related to HW pipeline 
> status checking or HW register setting in the implementation of this ioctl (e.g. 
> vino_s_input in /drivers/media/video/vino.c). So don't I have to stream off 
> the HW pipeline and change the HW register settings for the new input? Or is 
> it the application's responsibility to stream off the pipeline and renegotiate 
> the parameters for the new input?
> 
> No.2. Combine the SOC sensor with the ISP as channel one and 
> register it as /dev/video0, and combine the RAW sensor with the ISP 
> as channel two and register it as /dev/video1. Of course, only one channel works 
> at a time due to a HW restriction. When I want to change the sensor 
> (e.g. from the SOC sensor to the RAW sensor), I just close /dev/video0 and open 
> /dev/video1.

No. 1 seems better. Since you need to re-negotiate parameters when
switching from one sensor to another, if an app tries to change from one
input to another while streaming, you should just return -EBUSY if it is not
possible to switch (for example, if the selected format/resolution/frame rate
is incompatible).
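The rule above can be sketched roughly as follows. This is a hypothetical illustration, not code from any real driver: the struct and function names are invented, and a real implementation would hang off the driver's own state and the v4l2 ioctl handlers.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical sketch of the S_INPUT policy described above:
 * refuse to switch inputs while the pipeline is streaming,
 * since that would require renegotiating parameters. */
struct cam_dev {
    int streaming;      /* nonzero while the HW pipeline is active */
    int current_input;  /* index of the currently selected sensor */
};

static int cam_s_input(struct cam_dev *dev, int input)
{
    if (input == dev->current_input)
        return 0;
    /* Cannot renegotiate format/resolution/frame rate mid-stream. */
    if (dev->streaming)
        return -EBUSY;
    dev->current_input = input;
    /* ...reprogram the acquisition interface for the new sensor... */
    return 0;
}
```

With this policy the application must stream off, switch the input, renegotiate the format, and stream on again.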



Cheers,
Mauro