Re: V4L2 analogue gain control

2018-08-25 Thread Pavel Machek
Hi!

> I've been looking at various image sensor drivers to see how they expose
> gains, in particular analogue ones. What I found in 4.18 looks like a
> mess to me.
> 
> In particular, my interest is about separation of analogue vs digital
> gain and an understanding of what effect a change in gain has on the
> brightness of an image. The latter is characterized in the following
> table in the "linear" column.
> 
> driver  | CID | register name               | min | max  | def | linear | comments
> --------+-----+-----------------------------+-----+------+-----+--------+---------
> adv7343 | G   | ADV7343_DAC2_OUTPUT_LEVEL   | -64 | 64   | 0   |        |
> adv7393 | G   | ADV7393_DAC123_OUTPUT_LEVEL | -64 | 64   | 0   |        |
> imx258  | A   | IMX258_REG_ANALOG_GAIN      | 0   | 8191 | 0   |        |
> imx274  | G   | multiple                    |     |      |     | yes    | [1]
> mt9m032 | G   | MT9M032_GAIN_ALL            | 0   | 127  | 64  | no     | [2]
> mt9m111 | G   | GLOBAL_GAIN                 | 0   | 252  | 32  | no     | [3]
> mt9p031 | G   | MT9P031_GLOBAL_GAIN         | 8   | 1024 | 8   | no     | [4]
> mt9v011 | G   | multiple                    | 0   | 4063 | 32  |        |
> mt9v032 | G   | MT9V032_ANALOG_GAIN         | 16  | 64   | 16  | no     | [5]
> ov13858 | A   | OV13858_REG_ANALOG_GAIN     | 0   | 8191 | 128 |        |
> ov2685  | A   | OV2685_REG_GAIN             | 0   | 2047 | 54  |        |
> ov5640  | G   | OV5640_REG_AEC_PK_REAL_GAIN | 0   | 1023 | 0   |        |
> ov5670  | A   | OV5670_REG_ANALOG_GAIN      | 0   | 8191 | 128 |        |
> ov5695  | A   | OV5695_REG_ANALOG_GAIN      | 16  | 248  | 248 |        |
> mt9m001 | G   | MT9M001_GLOBAL_GAIN         | 0   | 127  | 64  | no     |
> mt9v022 | G   | MT9V022_ANALOG_GAIN         | 0   | 127  | 64  |        |
> 
> CID:
>   A -> V4L2_CID_ANALOGUE_GAIN
>   G -> V4L2_CID_GAIN, no V4L2_CID_ANALOGUE_GAIN present
> step: always 1
> comments:
> [1] controls a product of analogue and digital gain, the value scales
> roughly linearly
> [2] code comments contradict data sheet
> [3] it is not clear whether it also controls a digital gain.
> [4] controls a combination of analogue and digital gain
> [5] analogue only
> 
> The documentation (extended-controls.rst) says that the digital gain is
> supposed to be a linear fixed-point number with 0x100 meaning factor 1.
> The situation for analogue is much less precise.
> 
> Typically, the number of analogue gains is much smaller than the number
> of digital gains. No driver exposes more than 13 bits for the analogue
> gain and half of them use at most 8 bits.
> 
> Can we give more structure to the analogue gain as exposed by V4L2?
> Ideally, I'd like to query a driver for the possible gain values if
> there are few (say < 256) and their factors (which are often given in
> data sheets). The nature of gains though is that they are often similar
> to floating point numbers (2 ** exp * (1 + mant / precision)), which
> makes it difficult to represent them using min/max/step/default.
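
To make that last point concrete, expanding such an encoding into actual gain
factors might look like the following minimal sketch. It assumes a hypothetical
7-bit code split into a 3-bit exponent and a 4-bit mantissa (precision = 16),
not the encoding of any particular sensor; the resulting factors are clearly
not equally spaced, which is why min/max/step/default fits them poorly:

#include <stdio.h>

int main(void)
{
	for (unsigned int code = 0; code < 128; code++) {
		unsigned int exp = code >> 4;	/* upper 3 bits: exponent */
		unsigned int mant = code & 0xf;	/* lower 4 bits: mantissa */
		/* 2 ** exp * (1 + mant / precision), precision = 16 */
		double factor = (double)(1u << exp) * (1.0 + mant / 16.0);

		printf("code %3u -> gain x%.4f\n", code, factor);
	}
	return 0;
}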

Yes, it would be nice to have uniform controls for that. And it would
be good if a mapping to the "ISO" sensitivity known from digital photography existed.

> Would it be reasonable to add a new V4L2_CID_ANALOGUE_GAIN_MENU that
> claims linearity and uses fixed-point numbers like
> V4L2_CID_DIGITAL_GAIN? There already is the integer menu
> V4L2_CID_AUTO_EXPOSURE_BIAS, but it also affects the exposure.
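
For what it's worth, a driver-side sketch of that idea could reuse the existing
integer-menu helper. V4L2_CID_ANALOGUE_GAIN_MENU is the proposed (not yet
existing) control, and the handler, ops and table values below are made-up
placeholders, not any particular sensor:

	/* Hypothetical per-sensor table of selectable analogue gain factors,
	 * as 8.8 fixed point with 0x100 meaning 1.0x (mirroring the
	 * V4L2_CID_DIGITAL_GAIN convention).
	 */
	static const s64 sensor_analogue_gains[] = {
		0x100, 0x180, 0x200, 0x300, 0x400, 0x600, 0x800,
	};

	/* Registration in the driver's control setup (handler and ops names
	 * are placeholders).
	 */
	v4l2_ctrl_new_int_menu(&sensor->ctrl_handler, &sensor_ctrl_ops,
			       V4L2_CID_ANALOGUE_GAIN_MENU,
			       ARRAY_SIZE(sensor_analogue_gains) - 1, 0,
			       sensor_analogue_gains);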

I'm not sure if a linear scale is really appropriate. You can expect a
camera to do ISO100 or ISO200, but if your camera supports ISO4800,
you don't really expect it to also support ISO4801.

./drivers/media/i2c/et8ek8/et8ek8_driver.c already does that.

IOW a logarithmic scale would be more appropriate; min/max would be
nice, and a step expressed in the same logarithmic units (like the 0.1EV
steps used below) would make more sense than a linear one.

> An important application is implementing a custom gain control when the
> built-in auto exposure is not applicable.

Looking at et8ek8 again, perhaps that's the right solution? Userland
just sets the gain, and the driver automatically selects the best
analog/digital gain combination.

/*
 * This table describes what should be written to the sensor register
 * for each gain value. The gain (index in the table) is in terms of
 * 0.1EV, i.e. 10 indexes in the table give 2 times more gain [0] in
 * the analog gain, [1] in the digital gain
 *
 * Analog gain [dB] = 20*log10(regvalue/32); 0x20..0x100
 */
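
For illustration, the mapping that comment describes could be reproduced with a
few lines (a sketch derived from the formula above, not the actual et8ek8
table): index i corresponds to a total gain of 2^(i/10), and the analogue part
is regvalue/32 clamped to the 0x20..0x100 range, with anything beyond 8x left
to the digital gain:

#include <math.h>
#include <stdio.h>

int main(void)
{
	for (int i = 0; i <= 40; i++) {		/* 0 .. +4 EV in 0.1 EV steps */
		double gain = pow(2.0, i / 10.0);
		unsigned int reg = (unsigned int)(gain * 32.0 + 0.5);

		if (reg > 0x100)
			reg = 0x100;	/* remaining gain would be digital */

		printf("index %2d: gain x%.3f analog reg 0x%03x\n", i, gain, reg);
	}
	return 0;
}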
   

Best regards,
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html




cron job: media_tree daily build: OK

2018-08-25 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:                   Sun Aug 26 05:00:12 CEST 2018
media-tree git hash:    da2048b7348a0be92f706ac019e022139e29495e
media_build git hash:   baf45935ffad914f33faf751ad9f4d0dd276c021
v4l-utils git hash:     015ca7524748fa7cef296102c68b631b078b63c6
edid-decode git hash:   b2da1516df3cc2756bfe8d1fa06d7bf2562ba1f4
gcc version:            i686-linux-gcc (GCC) 8.1.0
sparse version:         0.5.2
smatch version:         0.5.1
host hardware:          x86_64
host os:                4.17.0-1-amd64

linux-git-arm-at91: OK
linux-git-arm-davinci: OK
linux-git-arm-multi: OK
linux-git-arm-pxa: OK
linux-git-arm-stm32: OK
linux-git-arm64: OK
linux-git-i686: OK
linux-git-mips: OK
linux-git-powerpc64: OK
linux-git-sh: OK
linux-git-x86_64: OK
Check COMPILE_TEST: OK
linux-2.6.36.4-i686: OK
linux-2.6.36.4-x86_64: OK
linux-2.6.37.6-i686: OK
linux-2.6.37.6-x86_64: OK
linux-2.6.38.8-i686: OK
linux-2.6.38.8-x86_64: OK
linux-2.6.39.4-i686: OK
linux-2.6.39.4-x86_64: OK
linux-3.0.101-i686: OK
linux-3.0.101-x86_64: OK
linux-3.1.10-i686: OK
linux-3.1.10-x86_64: OK
linux-3.2.102-i686: OK
linux-3.2.102-x86_64: OK
linux-3.3.8-i686: OK
linux-3.3.8-x86_64: OK
linux-3.4.113-i686: OK
linux-3.4.113-x86_64: OK
linux-3.5.7-i686: OK
linux-3.5.7-x86_64: OK
linux-3.6.11-i686: OK
linux-3.6.11-x86_64: OK
linux-3.7.10-i686: OK
linux-3.7.10-x86_64: OK
linux-3.8.13-i686: OK
linux-3.8.13-x86_64: OK
linux-3.9.11-i686: OK
linux-3.9.11-x86_64: OK
linux-3.10.108-i686: OK
linux-3.10.108-x86_64: OK
linux-3.11.10-i686: OK
linux-3.11.10-x86_64: OK
linux-3.12.74-i686: OK
linux-3.12.74-x86_64: OK
linux-3.13.11-i686: OK
linux-3.13.11-x86_64: OK
linux-3.14.79-i686: OK
linux-3.14.79-x86_64: OK
linux-3.15.10-i686: OK
linux-3.15.10-x86_64: OK
linux-3.16.57-i686: OK
linux-3.16.57-x86_64: OK
linux-3.17.8-i686: OK
linux-3.17.8-x86_64: OK
linux-3.18.115-i686: OK
linux-3.18.115-x86_64: OK
linux-3.19.8-i686: OK
linux-3.19.8-x86_64: OK
linux-4.18-i686: OK
linux-4.18-x86_64: OK
apps: OK
spec-git: OK
sparse: WARNINGS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Sunday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Sunday.tar.bz2

The Media Infrastructure API from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/index.html


Re: [PATCH v2 5/5] media: ov5640: fix restore of last mode set

2018-08-25 Thread jacopo mondi
Hi Hugues,
 one more comment on this patch..

On Mon, Aug 13, 2018 at 12:19:46PM +0200, Hugues Fruchet wrote:
> Mode setting depends on the last mode set, in particular
> because of the exposure calculation when the downscale mode
> changes between subsampling and scaling.
> At stream on, the last mode was wrongly set to the current mode,
> so no change was detected and the exposure calculation
> was not done; fix this.
>
> Signed-off-by: Hugues Fruchet 
> ---
>  drivers/media/i2c/ov5640.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
> index c110a6a..923cc30 100644
> --- a/drivers/media/i2c/ov5640.c
> +++ b/drivers/media/i2c/ov5640.c
> @@ -225,6 +225,7 @@ struct ov5640_dev {
>   struct v4l2_mbus_framefmt fmt;
>
>   const struct ov5640_mode_info *current_mode;
> + const struct ov5640_mode_info *last_mode;
>   enum ov5640_frame_rate current_fr;
>   struct v4l2_fract frame_interval;
>
> @@ -1628,6 +1629,9 @@ static int ov5640_set_mode(struct ov5640_dev *sensor,
>   bool auto_exp =  sensor->ctrls.auto_exp->val == V4L2_EXPOSURE_AUTO;
>   int ret;
>
> + if (!orig_mode)
> + orig_mode = mode;
> +

Am I wrong or with the introduction of last_mode we could drop the
'orig_mode' parameter (which has confused me already :/ ) from the
set_mode() function?

Just set 'orig_mode = sensor->last_mode' here and make sure last_mode
is initialized properly at probe time...

Or is there some other value in keeping the orig_mode parameter here?
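
A rough sketch of what I mean (untested, only to illustrate the suggestion):

static int ov5640_set_mode(struct ov5640_dev *sensor,
			   const struct ov5640_mode_info *mode)
{
	/* Compare against whatever was last programmed into the sensor,
	 * so callers only need to pass the new mode.
	 */
	const struct ov5640_mode_info *orig_mode = sensor->last_mode;

	/* ... rest of the function unchanged ... */
}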

Thanks
   j

>   dn_mode = mode->dn_mode;
>   orig_dn_mode = orig_mode->dn_mode;
>
> @@ -1688,6 +1692,7 @@ static int ov5640_set_mode(struct ov5640_dev *sensor,
>   return ret;
>
>   sensor->pending_mode_change = false;
> + sensor->last_mode = mode;
>
>   return 0;
>
> @@ -2551,7 +2556,8 @@ static int ov5640_s_stream(struct v4l2_subdev *sd, int 
> enable)
>
>   if (sensor->streaming == !enable) {
>   if (enable && sensor->pending_mode_change) {
> - ret = ov5640_set_mode(sensor, sensor->current_mode);
> + ret = ov5640_set_mode(sensor, sensor->last_mode);
> +
>   if (ret)
>   goto out;
>
> --
> 2.7.4
>




Re: [PATCHv18 24/35] videobuf2-core: embed media_request_object

2018-08-25 Thread Sakari Ailus
On Tue, Aug 14, 2018 at 04:20:36PM +0200, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> Make vb2_buffer a request object.
> 
> Signed-off-by: Hans Verkuil 

Acked-by: Sakari Ailus 

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi


Re: [PATCHv18 23/35] vb2: add init_buffer buffer op

2018-08-25 Thread Sakari Ailus
On Tue, Aug 14, 2018 at 04:20:35PM +0200, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> We need to initialize the request_fd field in struct vb2_v4l2_buffer
> to -1 instead of the default of 0. So we need to add a new op that
> is called when struct vb2_v4l2_buffer is allocated.
> 
> Signed-off-by: Hans Verkuil 
> Reviewed-by: Mauro Carvalho Chehab 

Acked-by: Sakari Ailus 

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi


Re: [PATCHv18 22/35] videodev2.h: Add request_fd field to v4l2_buffer

2018-08-25 Thread Sakari Ailus
On Tue, Aug 14, 2018 at 04:20:34PM +0200, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> When queuing buffers allow for passing the request that should
> be associated with this buffer.
> 
> If V4L2_BUF_FLAG_REQUEST_FD is set, then request_fd is used as
> the file descriptor.
> 
> If a buffer is stored in a request, but not yet queued to the
> driver, then V4L2_BUF_FLAG_IN_REQUEST is set.
> 
> Signed-off-by: Hans Verkuil 
> Reviewed-by: Mauro Carvalho Chehab 

Acked-by: Sakari Ailus 

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi


Re: camss: camera controls missing on vfe interfaces

2018-08-25 Thread Sakari Ailus
On Thu, Dec 14, 2017 at 02:49:11PM -0200, Mauro Carvalho Chehab wrote:
> Em Mon, 20 Nov 2017 11:59:59 +0100
> Daniel Mack  escreveu:
> 
> > Hi Todor,
> > 
> > Thanks for following up!
> > 
> > On Monday, November 20, 2017 09:32 AM, Todor Tomov wrote:
> > > On 15.11.2017 21:31, Daniel Mack wrote:  
> > >> Todor et all,
> > >>
> > >> Any hint on how to tackle this?
> > >>
> > >> I can contribute patches, but I'd like to understand what the idea is.
> > >>
> > >>
> > >> Thanks,
> > >> Daniel
> > >>
> > >>
> > >> On Thursday, October 26, 2017 06:11 PM, Daniel Mack wrote:  
> > >>> Hi Todor,
> > >>>
> > >>> When using the camss driver trough one of its /dev/videoX device nodes,
> > >>> applications are currently unable to see the video controls the camera
> > >>> sensor exposes.
> > >>>
> > >>> Same goes for other ioctls such as VIDIOC_ENUM_FMT, so the only valid
> > >>> resolution setting for applications to use is the one that was
> > >>> previously set through the media controller layer. Applications usually
> > >>> query the available formats and then pick one using the standard V4L2
> > >>> APIs, and many can't easily be forced to use a specific one.
> > >>>
> > >>> If I'm getting this right, could you explain what's the rationale here?
> > >>> Is that simply a missing feature or was that approach chosen on purpose?
> > >>>  
> > > 
> > > It is not a missing feature, it is more of a missing userspace 
> > > implementation.
> > > When working with a media oriented device driver, the userspace has to
> > > configure the media pipeline too, and if controls are exposed by the subdev
> > > nodes, the userspace has to configure them on the subdev nodes.
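
In practice, "configure them on the subdev nodes" boils down to issuing the
control ioctls against the sub-device node instead of the video node. A
minimal userspace sketch (the /dev/v4l-subdev0 path and the gain value are
made-up examples):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_control ctrl = {
		.id = V4L2_CID_ANALOGUE_GAIN,	/* or any control the sensor exposes */
		.value = 100,			/* hypothetical value */
	};
	int fd = open("/dev/v4l-subdev0", O_RDWR);	/* hypothetical node */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0) {
		perror("VIDIOC_S_CTRL");
		return 1;
	}
	return 0;
}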
> > > 
> > > As there weren't a lot of media oriented drivers, there is no generic
> > > implementation/support for this in the userspace (at least I'm not aware
> > > of any). There have been discussions about adding such functionality in
> > > libv4l so that applications which do not support media configuration can
> > > still use these drivers. I'm not sure whether a decision on this was taken
> > > or whether there was just no one to actually do the work. Probably Laurent,
> > > Mauro or Hans know more about what the plans for this were.
> > 
> > Hmm, that's not good.
> > 
> > Considering the use-case in our application, the pipeline is set up once
> > and considered more or less static, and then applications such as the
> > Chrome browsers make use of the high-level VFE interface. If there are
> > no controls exposed on that interface, they are not available to the
> > application. Patching all userspace applications is an uphill battle
> > that can't be won I'm afraid.
> > 
> > Is there any good reason not to expose the sensor controls on the VFE? I
> > guess it would be easy to do, right?
> 
> Sorry for the late answer. I'm usually very busy in Q4, but this year it
> was atypical.
> 
> A little historic is needed in order to answer this question.
> Up to very recently, V4L2 drivers that are media-controller centric,
> i.e. whose sub-devices are controlled directly by subdev devnodes,
> were used only on specialized hardware, with special V4L2 applications
> designed for them. In other words, this was not designed to be used by
> generic applications (although we always wanted a solution for it), and
> it was never a real problem so far.
> 
> However, with the advent of cheap SoC hardware with complex media
> processors on it, the scenario changed recently, and we need some
> discussions upstream about the best way to solve it.
> 
> The original idea, back when the media controller was introduced,
> was to add support in libv4l. But this never happened, and it turned
> out to be way more complex than originally foreseen.
> 
> As you're pointing out, in such scenarios one alternative is to expose
> subdev controls also on the /dev/video devnode that is controlling the
> pipeline streaming. However, depending on the pipeline, this may not be
> possible, as the same control could be implemented in more than one block
> inside the pipeline. When such a case happens, the proper solution is to
> pinpoint which sub-device will handle the control via the subdev API.
> 
> However, in several scenarios (like, for instance, an RPi3 with
> a single camera sensor), the pipeline is simple enough to either
> avoid such conflicts, or to have an obvious subdevice that would
> be handling such a control.
> 
> From my PoV, in cases like the RPi3, the best option is to just implement
> control propagation inside the pipeline. However, other media core
> developers think otherwise.
> 
> If you can provide us a broader view of the issues you're facing, your
> use case scenario and your pipelines, that would be valuable input for
> our discussions about the best way to solve this.
> 
> Please note, however, that this is not the best time for such
> discussions, as several core developers will be on vacation in the
> coming days.

I marked

Re: [PATCH] media: aptina-pll: allow approximating the requested pix_clock

2018-08-25 Thread Sakari Ailus
On Fri, Aug 24, 2018 at 02:05:17PM +0200, Helmut Grohne wrote:
> Hi Laurent,
> 
> Thank you for taking the time to reply to my patch and to my earlier
> questions.
> 
> On Thu, Aug 23, 2018 at 01:12:15PM +0200, Laurent Pinchart wrote:
> > Could you please share numbers, ideally when run in kernel space ?
> 
> Can you explain the benefits of profiling this inside the kernel rather
> than outside? Doing it outside is much simpler, so I'd like to
> understand your reasons. The next question is what kind of system to
> profile on. I guess doing it on an X86 will not help. Typical users will
> use some arm CPU and integer division on arm is slower than on x86. Is a
> Cortex A9 ok?
> 
> If you can provide more relevant inputs, that'd also help. I was able to
> derive an example input from the dt bindings for mt9p031 (ext=6MHz,
> pix=96MHz) and also used random inputs for testing. Getting more
> real-world inputs would help in producing a useful benchmark.
> 
> > This patch is very hard to review as you rewrite the whole thing, removing
> > all the documentation for the existing algorithm, without documenting the
> > new one.
> 
> That's not entirely fair. Unlike the original algorithm, I've split the
> code into multiple functions and documented their pre- and post-conditions.
> In a sense, that's actually more documentation. At least, you now see what
> the functions are supposed to be doing.
> 
> Inside the algorithm, I've often written which precondition (in)equation
> I used in which particular step.
> 
> The variable naming choice makes it very clear that the first step is
> reducing the value ranges for n, m, and p1 as much as possible before
> proceeding to an actual parameter computation.
> 
> There was little choice but to remove much of the old algorithm, as the
> approach of using gcd is numerically unstable. I actually kept
> everything until the first gcd computation.
> 
> > There's also no example of a failing case with the existing code that works 
> > with yours.
> 
> That is an unfortunate omission. I should have thought of this and
> should have given it upfront. I'm sorry.
> 
> Take for instance MT9M024. The data sheet
> (http://www.mouser.com/ds/2/308/MT9M024-D-606228.pdf) allows deducing
> the following limits:
> 
>   const struct aptina_pll_limits mt9m024_limits = {
>   	.ext_clock_min = 6000000,
>   	.ext_clock_max = 50000000,
>   	.int_clock_min = 2000000,
>   	.int_clock_max = 24000000,
>   	.out_clock_min = 384000000,
>   	.out_clock_max = 768000000,
>   	.pix_clock_max = 74250000,
>   	.n_min = 1,
>   	.n_max = 63,
>   	.m_min = 32,
>   	.m_max = 255,
>   	.p1_min = 4,
>   	.p1_max = 16,
>   };
> 
> Now if you choose ext_clock and pix_clock maximal within the given
> limits, the existing aptina_pll_calculate gives up. Lowering the
> pix_clock does not help either. Even down to 73 MHz, it is unable to
> find any pll configuration.
> 
> The new algorithm finds a solution (n=11, m=98, p1=6) with 7.5 kHz
> error. Incidentally, that solution is close to the one given by the
> vendor tool (n=22, m=196, p1=6).

These values don't seem valid for 6 MHz --- the frequency after the PLL is
less than 384 MHz. Did you use a different external clock frequency?

> 
> > Could you please document the algorithm in details (especially explaining 
> > the 
> > background ideas), and share at least one use case, with with numbers for 
> > all 
> > the input and output parameters ?
> 
> I'll try to improve the documentation in the next version. Summarizing
> the idea is something I can do, but beyond that I don't see much to add
> beyond prose.
> 
> Ahead of posting a V2, let me suggest this:
> 
> /* The first part of the algorithm reduces the given aptina_pll_limits
>  * for n, m and p1 using the (in)equalities and the concrete values for
>  * ext_clock and pix_clock in order to reduce the search space.
>  *
>  * The main loop iterates over all remaining possible p1 values and
>  * computes the necessary out_clock frequency. The ext_clock / out_clock
>  * ratio is then approximated with n / m within their respective bounds.
>  * For each parameter choice, the preconditions must be rechecked,
>  * because integer rounding errors may result in violating some of the
>  * preconditions. The parameter set with the least frequency error is
>  * returned.
>  */
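
In code, that main loop could look roughly like the self-contained sketch
below. It is a simplified illustration, not the actual patch: the int_clock
recheck is omitted, helper functions are spelled out inline, and the limits
are hard-coded. With the MT9M024 limits above and ext_clock at its 50 MHz
maximum it lands on the same n=11, m=98, p1=6 solution mentioned earlier
(about 7.6 kHz error):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Example limits/inputs; the real code takes them as parameters. */
	const uint64_t ext_clock = 50000000, pix_clock = 74250000;
	const uint64_t out_clock_min = 384000000, out_clock_max = 768000000;
	const unsigned int n_min = 1, n_max = 63, m_min = 32, m_max = 255;
	const unsigned int p1_min = 4, p1_max = 16;
	uint64_t best_error = UINT64_MAX;
	unsigned int best_n = 0, best_m = 0, best_p1 = 0;

	for (unsigned int p1 = p1_min; p1 <= p1_max; p1++) {
		uint64_t out_clock = pix_clock * p1;

		/* Skip p1 values whose required out_clock is out of range. */
		if (out_clock < out_clock_min || out_clock > out_clock_max)
			continue;

		for (unsigned int n = n_min; n <= n_max; n++) {
			/* Approximate out_clock/ext_clock by m/n: round m. */
			uint64_t m = (out_clock * n + ext_clock / 2) / ext_clock;
			uint64_t achieved, error;

			if (m < m_min || m > m_max)
				continue;

			/* Recheck the result: rounding may have moved the
			 * achieved pixel clock away from the target.
			 */
			achieved = ext_clock * m / (n * p1);
			error = achieved > pix_clock ? achieved - pix_clock
						     : pix_clock - achieved;
			if (error < best_error) {
				best_error = error;
				best_n = n;
				best_m = (unsigned int)m;
				best_p1 = p1;
			}
		}
	}

	printf("n=%u m=%u p1=%u (pix_clock error %llu Hz)\n",
	       best_n, best_m, best_p1, (unsigned long long)best_error);
	return 0;
}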
> 
> Is this what you are looking for?
> 
> Helmut

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi

