Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Russell King - ARM Linux
On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >>>I really don't think expecting the user to understand and configure
> >>>the pipeline is a sane way forward.  Think about it - should the
> >>>user need to know that, because they have a bayer-only CSI data
> >>>source, that there is only one path possible, and if they try to
> >>>configure a different path, then things will just error out?
> >>>
> >>>For the case of imx219 connected to iMX6, it really is as simple as
> >>>"there is only one possible path" and all the complexity of the media
> >>>interfaces/subdevs is completely unnecessary.  Every other block in
> >>>the graph is just noise.
> >>>
> >>>The fact is that these dot graphs show a complex picture, but reality
> >>>is somewhat different - there's only relatively few paths available
> >>>depending on the connected source and the rest of the paths are
> >>>completely useless.
> >>>
> >>
> >>I totally disagree there. Raw bayer requires passthrough yes, but for
> >>all other media bus formats on a mipi csi-2 bus, and all other media
> >>bus formats on 8-bit parallel buses, the conersion pipelines can be
> >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> >
> >... which only makes sense _if_ your source can produce those formats.
> >We don't actually disagree on that.
> 
> ...and there are lots of those sources! You should try getting out of
> your imx219 shell some time, and have a look around! :)

If you think that, you are insulting me.  I've been thinking about this
from the "big picture" point of view.  If you think I'm only thinking
about this from only the bayer point of view, you're wrong.

Given what Mauro has said, I'm convinced that the media controller stuff
is a complete failure for usability, and adding further drivers using it
is a mistake.

I counter your accusation by saying that you are actually so focused on
the media controller way of doing things that you can't see the bigger
picture here.

So, tell me how the user can possibly use iMX6 video capture without
resorting to opening up a terminal and using media-ctl to manually
configure the pipeline.  How is the user going to control the source
device without using media-ctl to find the subdev node, and then using
v4l2-ctl on it.  How is the user supposed to know which /dev/video*
node they should be opening with their capture application?

If you can actually respond to the points that I've been raising about
end user usability, then we can have a discussion.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.


cron job: media_tree daily build: ERRORS

2017-03-11 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:   Sun Mar 12 05:00:15 CET 2017
media-tree git hash:700ea5e0e0dd70420a04e703ff264cc133834cba
media_build git hash:   bc4c2a205c087c8deff3cd14ed663c4767dd2016
v4l-utils git hash: ca6a0c099399cc51ecfacff7ef938be6ef73b8b3
gcc version:i686-linux-gcc (GCC) 6.2.0
sparse version: v0.5.0-3553-g78b2ea6
smatch version: v0.5.0-3553-g78b2ea6
host hardware:  x86_64
host os:4.9.0-164

linux-git-arm-at91: OK
linux-git-arm-davinci: OK
linux-git-arm-multi: OK
linux-git-arm-pxa: OK
linux-git-blackfin-bf561: OK
linux-git-i686: OK
linux-git-m32r: OK
linux-git-mips: OK
linux-git-powerpc64: OK
linux-git-sh: OK
linux-git-x86_64: OK
linux-2.6.36.4-i686: WARNINGS
linux-2.6.37.6-i686: WARNINGS
linux-2.6.38.8-i686: WARNINGS
linux-2.6.39.4-i686: WARNINGS
linux-3.0.60-i686: WARNINGS
linux-3.1.10-i686: WARNINGS
linux-3.2.37-i686: WARNINGS
linux-3.3.8-i686: WARNINGS
linux-3.4.27-i686: WARNINGS
linux-3.5.7-i686: WARNINGS
linux-3.6.11-i686: WARNINGS
linux-3.7.4-i686: WARNINGS
linux-3.8-i686: WARNINGS
linux-3.9.2-i686: WARNINGS
linux-3.10.1-i686: WARNINGS
linux-3.11.1-i686: ERRORS
linux-3.12.67-i686: ERRORS
linux-3.13.11-i686: WARNINGS
linux-3.14.9-i686: WARNINGS
linux-3.15.2-i686: WARNINGS
linux-3.16.7-i686: WARNINGS
linux-3.17.8-i686: WARNINGS
linux-3.18.7-i686: WARNINGS
linux-3.19-i686: WARNINGS
linux-4.0.9-i686: WARNINGS
linux-4.1.33-i686: WARNINGS
linux-4.2.8-i686: WARNINGS
linux-4.3.6-i686: WARNINGS
linux-4.4.22-i686: WARNINGS
linux-4.5.7-i686: WARNINGS
linux-4.6.7-i686: WARNINGS
linux-4.7.5-i686: WARNINGS
linux-4.8-i686: OK
linux-4.9-i686: OK
linux-4.10.1-i686: OK
linux-4.11-rc1-i686: OK
linux-2.6.36.4-x86_64: WARNINGS
linux-2.6.37.6-x86_64: WARNINGS
linux-2.6.38.8-x86_64: WARNINGS
linux-2.6.39.4-x86_64: WARNINGS
linux-3.0.60-x86_64: WARNINGS
linux-3.1.10-x86_64: WARNINGS
linux-3.2.37-x86_64: WARNINGS
linux-3.3.8-x86_64: WARNINGS
linux-3.4.27-x86_64: WARNINGS
linux-3.5.7-x86_64: WARNINGS
linux-3.6.11-x86_64: WARNINGS
linux-3.7.4-x86_64: WARNINGS
linux-3.8-x86_64: WARNINGS
linux-3.9.2-x86_64: WARNINGS
linux-3.10.1-x86_64: WARNINGS
linux-3.11.1-x86_64: ERRORS
linux-3.12.67-x86_64: ERRORS
linux-3.13.11-x86_64: WARNINGS
linux-3.14.9-x86_64: WARNINGS
linux-3.15.2-x86_64: WARNINGS
linux-3.16.7-x86_64: WARNINGS
linux-3.17.8-x86_64: WARNINGS
linux-3.18.7-x86_64: WARNINGS
linux-3.19-x86_64: WARNINGS
linux-4.0.9-x86_64: WARNINGS
linux-4.1.33-x86_64: WARNINGS
linux-4.2.8-x86_64: WARNINGS
linux-4.3.6-x86_64: WARNINGS
linux-4.4.22-x86_64: WARNINGS
linux-4.5.7-x86_64: WARNINGS
linux-4.6.7-x86_64: WARNINGS
linux-4.7.5-x86_64: WARNINGS
linux-4.8-x86_64: WARNINGS
linux-4.9-x86_64: WARNINGS
linux-4.10.1-x86_64: WARNINGS
linux-4.11-rc1-x86_64: OK
apps: WARNINGS
spec-git: OK
sparse: WARNINGS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Sunday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Sunday.tar.bz2

The Media Infrastructure API from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/index.html


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:

On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:



On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:

I really don't think expecting the user to understand and configure
the pipeline is a sane way forward.  Think about it - should the
user need to know that, because they have a bayer-only CSI data
source, that there is only one path possible, and if they try to
configure a different path, then things will just error out?

For the case of imx219 connected to iMX6, it really is as simple as
"there is only one possible path" and all the complexity of the media
interfaces/subdevs is completely unnecessary.  Every other block in
the graph is just noise.

The fact is that these dot graphs show a complex picture, but reality
is somewhat different - there's only relatively few paths available
depending on the connected source and the rest of the paths are
completely useless.



I totally disagree there. Raw bayer requires passthrough yes, but for
all other media bus formats on a mipi csi-2 bus, and all other media
bus formats on 8-bit parallel buses, the conersion pipelines can be
used for scaling, CSC, rotation, and motion-compensated de-interlacing.


... which only makes sense _if_ your source can produce those formats.
We don't actually disagree on that.


...and there are lots of those sources! You should try getting out of
your imx219 shell some time, and have a look around! :)



Let me re-state.  If the source can _only_ produce bayer, then there is
_only_ _one_ possible path, and all the overhead of the media controller
stuff is totally unnecessary.

Or, are you going to tell me that the user should have the right to
configure paths through the iMX6 hardware that are not permitted by the
iMX6 manuals for the data format being produced by the sensor?


Anyway, no the user is not allowed to configure a path that is not
allowed by the hardware, such as attempting to pass raw bayer through
an Image Converter path.

I guess you are simply commenting that for users of bayer sensors, the
other pipelines can be "confusing". But I trust you're not saying those
other pipelines should therefore not be present, which would be a
completely nutty argument.

Steve



Re: [PATCH v5 00/39] i.MX Media Driver

2017-03-11 Thread Steve Longerbeam



On 03/10/2017 12:13 PM, Russell King - ARM Linux wrote:

Version 5 gives me no v4l2 controls exposed through the video device
interface.

Just like with version 4, version 5 is completely useless with IMX219:

imx6-mipi-csi2: LP-11 timeout, phy_state = 0x0200
ipu1_csi0: pipeline start failed with -110
imx6-mipi-csi2: LP-11 timeout, phy_state = 0x0200
ipu1_csi0: pipeline start failed with -110
imx6-mipi-csi2: LP-11 timeout, phy_state = 0x0200
ipu1_csi0: pipeline start failed with -110



If it's too difficult to get the imx219 csi-2 transmitter into the
LP-11 state on power on, perhaps the csi-2 receiver can be a little
more lenient on the transmitter and make the LP-11 timeout a warning
instead of error-out.

Can you try the attached change on top of the version 5 patchset?

If that doesn't work then you're just going to have to fix the bug
in imx219.

Steve
diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
index d8f931e..720bf4d 100644
--- a/drivers/staging/media/imx/imx6-mipi-csi2.c
+++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
@@ -224,11 +224,8 @@ static int csi2_dphy_wait_stopstate(struct csi2_dev *csi2)
 
 	ret = readl_poll_timeout(csi2->base + CSI2_PHY_STATE, reg,
  (reg & mask) == mask, 0, 50);
-	if (ret) {
-		v4l2_err(>sd, "LP-11 timeout, phy_state = 0x%08x\n", reg);
-		return ret;
-	}
-
+	if (ret)
+		v4l2_warn(>sd, "LP-11 timeout, phy_state = 0x%08x\n", reg);
 	return 0;
 }
 


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 03:14 PM, Russell King - ARM Linux wrote:

On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:

This situation is there since 2009. If I remember well, you tried to write
such generic plugin in the past, but never finished it, apparently because
it is too complex. Others tried too over the years.

The last trial was done by Jacek, trying to cover just the exynos4 driver.
Yet, even such limited scope plugin was not good enough, as it was never
merged upstream. Currently, there's no such plugins upstream.

If we can't even merge a plugin that solves it for just *one* driver,
I have no hope that we'll be able to do it for the generic case.


This is what really worries me right now about the current proposal for
iMX6.  What's being proposed is to make the driver exclusively MC-based.



I don't see anything wrong with that.



What that means is that existing applications are _not_ going to work
until we have some answer for libv4l2, and from what you've said above,
it seems that this has been attempted multiple times over the last _8_
years, and each time it's failed.

When thinking about it, it's quite obvious why merely trying to push
the problem into userspace fails:

  If we assert that the kernel does not have sufficient information to
  make decisions about how to route and control the hardware, then under
  what scenario does a userspace library have sufficient information to
  make those decisions?

So, merely moving the problem into userspace doesn't solve anything.

Loading the problem onto the user in the hope that the user knows
enough to properly configure it also doesn't work - who is going to
educate the user about the various quirks of the hardware they're
dealing with?


Documentation?



I don't think pushing it into platform specific libv4l2 plugins works
either - as you say above, even just trying to develop a plugin for
exynos4 seems to have failed, so what makes us think that developing
a plugin for iMX6 is going to succeed?  Actually, that's exactly where
the problem lies.

Is "iMX6 plugin" even right?  That only deals with the complexity of
one part of the system - what about the source device, which as we
have already seen can be a tuner or a camera with its own multiple
sub-devices.  What if there's a video improvement chip in the chain
as well - how is a "generic" iMX6 plugin supposed to know how to deal
with that?

It seems to me that what's required is not an "iMX6 plugin" but a
separate plugin for each platform - or worse.  Consider boards like
the Raspberry Pi, where users can attach a variety of cameras.  I
don't think this approach scales.  (This is relevant: the iMX6 board
I have here has a RPi compatible connector for a MIPI CSI2 camera.
In fact, the IMX219 module I'm using _is_ a RPi camera, it's the RPi
NoIR Camera V2.)

The iMX6 problem is way larger than just "which subdev do I need to
configure for control X" - if you look at the dot graphs both Steve
and myself have supplied, you'll notice that there are eight (yes,
8) video capture devices.


There are 4 video nodes (per IPU):

- unconverted capture from CSI0
- unconverted capture from CSI1
- scaled, CSC, and/or rotated capture from PRP ENC
- scaled, CSC, rotated, and/or de-interlaced capture from PRP VF


Configuring the imx6 pipelines are not that difficult. I've put quite
a bit of detail in the media doc, so it should become clear to any
user with MC knowledge (even those with absolutely no knowledge of
imx) to quickly start getting working pipelines.



  Let's say that we can solve the subdev

problem in libv4l2.  There's another problem lurking here - libv4l2
is /dev/video* based.  How does it know which /dev/video* device to
open?

We don't open by sensor, we open by /dev/video*.  In my case, there
is only one correct /dev/video* node for the attached sensor, the
other seven are totally irrelevant.  For other situations, there may
be the choice of three functional /dev/video* nodes.

Right now, for my case, there isn't the information exported from the
kernel to know which is the correct one, since that requires knowing
which virtual channel the data is going to be sent over the CSI2
interface.  That information is not present in DT, or anywhere.


It is described in the media doc:

"This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
channel outputs."



It only comes from system knowledge - in my case, I know that the IMX219
is currently being configured to use virtual channel 0.  SMIA cameras
are also configurable.  Then there's CSI2 cameras that can produce
different formats via different virtual channels (eg, JPEG compressed
image on one channel while streaming a RGB image via the other channel.)

Whether you can use one or three in _this_ scenario depends on the
source format - again, another bit of 

Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Russell King - ARM Linux
On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> This situation is there since 2009. If I remember well, you tried to write
> such generic plugin in the past, but never finished it, apparently because
> it is too complex. Others tried too over the years. 
> 
> The last trial was done by Jacek, trying to cover just the exynos4 driver. 
> Yet, even such limited scope plugin was not good enough, as it was never
> merged upstream. Currently, there's no such plugins upstream.
> 
> If we can't even merge a plugin that solves it for just *one* driver,
> I have no hope that we'll be able to do it for the generic case.

This is what really worries me right now about the current proposal for
iMX6.  What's being proposed is to make the driver exclusively MC-based.

What that means is that existing applications are _not_ going to work
until we have some answer for libv4l2, and from what you've said above,
it seems that this has been attempted multiple times over the last _8_
years, and each time it's failed.

When thinking about it, it's quite obvious why merely trying to push
the problem into userspace fails:

  If we assert that the kernel does not have sufficient information to
  make decisions about how to route and control the hardware, then under
  what scenario does a userspace library have sufficient information to
  make those decisions?

So, merely moving the problem into userspace doesn't solve anything.

Loading the problem onto the user in the hope that the user knows
enough to properly configure it also doesn't work - who is going to
educate the user about the various quirks of the hardware they're
dealing with?

I don't think pushing it into platform specific libv4l2 plugins works
either - as you say above, even just trying to develop a plugin for
exynos4 seems to have failed, so what makes us think that developing
a plugin for iMX6 is going to succeed?  Actually, that's exactly where
the problem lies.

Is "iMX6 plugin" even right?  That only deals with the complexity of
one part of the system - what about the source device, which as we
have already seen can be a tuner or a camera with its own multiple
sub-devices.  What if there's a video improvement chip in the chain
as well - how is a "generic" iMX6 plugin supposed to know how to deal
with that?

It seems to me that what's required is not an "iMX6 plugin" but a
separate plugin for each platform - or worse.  Consider boards like
the Raspberry Pi, where users can attach a variety of cameras.  I
don't think this approach scales.  (This is relevant: the iMX6 board
I have here has a RPi compatible connector for a MIPI CSI2 camera.
In fact, the IMX219 module I'm using _is_ a RPi camera, it's the RPi
NoIR Camera V2.)

The iMX6 problem is way larger than just "which subdev do I need to
configure for control X" - if you look at the dot graphs both Steve
and myself have supplied, you'll notice that there are eight (yes,
8) video capture devices.  Let's say that we can solve the subdev
problem in libv4l2.  There's another problem lurking here - libv4l2
is /dev/video* based.  How does it know which /dev/video* device to
open?

We don't open by sensor, we open by /dev/video*.  In my case, there
is only one correct /dev/video* node for the attached sensor, the
other seven are totally irrelevant.  For other situations, there may
be the choice of three functional /dev/video* nodes.

Right now, for my case, there isn't the information exported from the
kernel to know which is the correct one, since that requires knowing
which virtual channel the data is going to be sent over the CSI2
interface.  That information is not present in DT, or anywhere.  It
only comes from system knowledge - in my case, I know that the IMX219
is currently being configured to use virtual channel 0.  SMIA cameras
are also configurable.  Then there's CSI2 cameras that can produce
different formats via different virtual channels (eg, JPEG compressed
image on one channel while streaming a RGB image via the other channel.)

Whether you can use one or three in _this_ scenario depends on the
source format - again, another bit of implementation specific
information that userspace would need to know.  Kernel space should
know that, and it's discoverable by testing which paths accept the
source format - but that doesn't tell you ahead of time which
/dev/video* node to open.

So, the problem space we have here is absolutely huge, and merely
having a plugin that activates when you open a /dev/video* node
really doesn't solve it.

All in all, I really don't think "lets hope someone writes a v4l2
plugin to solve it" is ever going to be successful.  I don't even
see that there will ever be a userspace application that is anything
more than a representation of the dot graphs that users can use to
manually configure the capture system with system knowledge.

I think everyone needs to take a step back and think long and hard
about this from the system usability perspective - I 

Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Pavel Machek
Hi!

> > > The rationale is that we should support the simplest use cases first.
> > > 
> > > In the case of the first MC-based driver (and several subsequent
> > > ones), the simplest use case required MC, as it was meant to suport
> > > a custom-made sophisticated application that required fine control
> > > on each component of the pipeline and to allow their advanced
> > > proprietary AAA userspace-based algorithms to work.  
> > 
> > The first MC based driver (omap3isp) supports what the hardware can do, it
> > does not support applications as such.
> 
> All media drivers support a subset of what the hardware can do. The
> question is if such subset covers the use cases or not.
> 
> The current MC-based drivers (except for uvc) took a patch to offer a
> more advanced API, to allow direct control to each IP module, as it was
> said, by the time we merged the OMAP3 driver, that, for the N9/N900 camera
> to work, it was mandatory to access the pipeline's individual components.
> 
> Such approach require that some userspace software will have knowledge
> about some hardware details, in order to setup pipelines and send controls
> to the right components. That makes really hard to have a generic user
> friendly application to use such devices.

Well. Even if you propagate controls to the right components, there's
still a lot application needs to know about the camera
subsystem. Focus lengths, for example. Speed of the focus
coil. Whether or not aperture controls are available. If they are not,
what is the fixed aperture.

Dunno. Knowing what control to apply on what subdevice does not look
like the hardest part of camera driver. Yes, it would be a tiny bit
easier if I would have just one device to deal with, but fcam-dev
has cca 2 lines of C++ code. 

> In the case of V4L2 controls, when there's no subdev API, the main
> driver (e. g. the driver that creates the /dev/video nodes) sends a
> multicast message to all bound I2C drivers. The driver(s) that need 
> them handle it. When the same control may be implemented on different
> drivers, the main driver sends a unicast message to just one
> driver[1].

Dunno. There's quite common to have two flashes. In that case, will
application control both at the same time?

> There's nothing wrong with this approach: it works, it is simpler,
> it is generic. So, if it covers most use cases, why not allowing it
> for usecases where a finer control is not a requirement?

Because the resulting interface is quite ugly?

> That's why I'm saying that I'm OK on merging any patch that would allow
> setting controls via the /dev/video interface on MC-based drivers when
> compiled without subdev API. I may also consider merging patches allowing

So.. userspace will now have to detect if subdev is available or not,
and access hardware in different ways?

> > The original plan was and continues to be sound, it's just that there have
> > always been too few hands to implement it. :-(
> 
> If there are no people to implement a plan, it doesn't matter how good
> the plan is, it won't work.

If the plan is good, someone will do it.
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


signature.asc
Description: Digital signature


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Pavel Machek
Hi!

> > > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > > time doesn't make much sense. We could disable one in favor of the
> > > other, either at compilation time or at runtime.  
> > 
> > Right. If the subdev API is disabled, then you have to inherit the subdev
> > controls in the bridge driver (how else would you be able to access them?).
> > And that's the usual case.
> > 
> > If you do have the subdev API enabled, AND you use the MC, then the
> > intention clearly is to give userspace full control and inheriting controls
> > no longer makes any sense (and is highly confusing IMHO).
> 
> I tend to agree with that.

Well, having different userspace interface according to config options
is strange. I believe the right solution is to make complex drivers
depend on CONFIG_VIDEO_V4L2_SUBDEV_API...

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


signature.asc
Description: Digital signature


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Russell King - ARM Linux
On Sat, Mar 11, 2017 at 11:06:55AM -0800, Steve Longerbeam wrote:
> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >>>I really don't think expecting the user to understand and configure
> >>>the pipeline is a sane way forward.  Think about it - should the
> >>>user need to know that, because they have a bayer-only CSI data
> >>>source, that there is only one path possible, and if they try to
> >>>configure a different path, then things will just error out?
> >>>
> >>>For the case of imx219 connected to iMX6, it really is as simple as
> >>>"there is only one possible path" and all the complexity of the media
> >>>interfaces/subdevs is completely unnecessary.  Every other block in
> >>>the graph is just noise.
> >>>
> >>>The fact is that these dot graphs show a complex picture, but reality
> >>>is somewhat different - there's only relatively few paths available
> >>>depending on the connected source and the rest of the paths are
> >>>completely useless.
> >>>
> >>
> >>I totally disagree there. Raw bayer requires passthrough yes, but for
> >>all other media bus formats on a mipi csi-2 bus, and all other media
> >>bus formats on 8-bit parallel buses, the conersion pipelines can be
> >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> >
> >... which only makes sense _if_ your source can produce those formats.
> >We don't actually disagree on that.
> >
> >Let me re-state.  If the source can _only_ produce bayer, then there is
> >_only_ _one_ possible path, and all the overhead of the media controller
> >stuff is totally unnecessary.
> >
> >Or, are you going to tell me that the user should have the right to
> >configure paths through the iMX6 hardware that are not permitted by the
> >iMX6 manuals for the data format being produced by the sensor?
> >
> 
> Russell, I'm not following you. The imx6 pipelines allow for many
> different sources, not just the inx219 that only outputs bayer. You
> seem to be saying that those other pipelines should not be present
> because they don't support raw bayer.

What I'm saying is this:

_If_ you have a sensor connected that can _only_ produce bayer, _then_
there is only _one_ possible path through the imx6 pipelines that is
legal.  Offering other paths from the source is noise, because every
other path can't be used with a bayer source.

_If_ you have a sensor connected which can produce RGB or YUV formats,
_then_ other paths are available, and pipeline needs to be configured
to select the appropriate path with the desired features.

So, in the case of a bayer source, offering the user the chance to
manually configure the _single_ allowable route through the tree is
needless complexity.  Forcing the user to have to use the subdev
interfaces to configure the camera is needless complexity.  Such a
source can only ever be used with one single /dev/video* node.

Moreover, this requires user education, and this brings me on to much
larger concerns.  We seem to be saying "this is too complicated, the
user can work it out!"

We've been here with VGA devices.  Remember the old days when you had
to put mode lines into the Xorg.conf, or go through a lengthy setup
process to get X running?  It wasn't very user-friendly.  We seem to
be making the same mistake here.

Usability comes first and foremost - throwing complex problems at
users is not a solution.

Now, given that this media control API has been around for several
years, and the userspace side of the story has not really improved
(according to Mauro, several attempts have been made, every single
attempt so far has failed, even for specific hardware) it seems to me
that using the media control API is a very poor choice for the very
simple reason that _no one_ knows how to configure a system using it.
Hans thoughts of getting some funding to look at this aspect is a
good idea, but I really wonder, given the history so far, how long
this will take - and whether it _ever_ will get solved.

If it doesn't get solved, then we're stuck with quite a big problem.

So, I suggest that we don't merge any further media-controller based
kernel code _until_ we have the userspace side sorted out.  Merging
the kernel side drivers when we don't even know that the userspace
API is functionally usable in userspace beyond test programs is
utterly absurd - what if it turns out that no one can write v4l
plugins that sort out the issues that have been highlighted throughout
these discussions.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 12:26 PM, Pavel Machek wrote:

Hi!


I tend to agree with that.


I agree as well.

This is in line with how existing drivers behave, too.



Well, sounds like there is consensus on this topic. I guess I'll
go ahead and remove the control inheritance support. I suppose
having a control appear in two places (subdev and video nodes) can
be confusing.


I guess that's way to go. It is impossible to change userland APIs
once the patch is merged...


Ok, not including myself, it's now 4 in favor of removing, 1 against...

Steve


Re: [PATCH v5 18/39] [media] v4l: subdev: Add function to validate frame interval

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 05:41 AM, Sakari Ailus wrote:

Hi Steve,

On Thu, Mar 09, 2017 at 08:52:58PM -0800, Steve Longerbeam wrote:

If the pads on both sides of a link specify a frame interval, then
those frame intervals should match. Create the exported function
v4l2_subdev_link_validate_frame_interval() to verify this. This
function can be called in a subdevice's media_entity_operations
or v4l2_subdev_pad_ops link_validate callbacks.

Signed-off-by: Steve Longerbeam 


If your only goal is to configure frame dropping on a sub-device, I suggest
to implement s_frame_interval() on the pads of that sub-device only. The
frames are then dropped according to the configured frame rates between the
sink and source pads. Say, configuring sink for 1/30 s and source 1/15 would
drop half of the incoming frames.

Considering that supporting specific frame interval on most sub-devices adds
no value or is not the interface through which it the frame rate configured,
I think it is overkill to change the link validation to expect otherwise.



Well, while I think this function might still have validity in the 
future, I do agree with you that a subdev that has no control over

frame rate has no business implementing the get|set ops.

In the imx-media subdevs, the only one that can affect frame rate (via
frame skipping) is the CSI. So I'll go ahead and remove the
[gs]_frame_interval ops from the others. I can remove this patch as
a result.


Steve


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Pavel Machek
Hi!

> >>I tend to agree with that.
> >
> >I agree as well.
> >
> >This is in line with how existing drivers behave, too.
> 
> 
> Well, sounds like there is consensus on this topic. I guess I'll
> go ahead and remove the control inheritance support. I suppose
> having a control appear in two places (subdev and video nodes) can
> be confusing.

I guess that's way to go. It is impossible to change userland APIs
once the patch is merged...
Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


[PATCH] staging: atomisp: remove redundant check for client being null

2017-03-11 Thread Colin King
From: Colin Ian King 

The previous statement checks if client is null, so the null check
when assigning dev is redundant and can be removed.

Detected by CoverityScan, CID#1416555 ("Logically Dead Code")

Signed-off-by: Colin Ian King 
---
 .../staging/media/atomisp/platform/intel-mid/atomisp_gmin_platform.c| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/drivers/staging/media/atomisp/platform/intel-mid/atomisp_gmin_platform.c 
b/drivers/staging/media/atomisp/platform/intel-mid/atomisp_gmin_platform.c
index 65513ca..2929492 100644
--- a/drivers/staging/media/atomisp/platform/intel-mid/atomisp_gmin_platform.c
+++ b/drivers/staging/media/atomisp/platform/intel-mid/atomisp_gmin_platform.c
@@ -354,7 +354,7 @@ static struct gmin_subdev *gmin_subdev_add(struct 
v4l2_subdev *subdev)
if (!client)
return NULL;
 
-   dev = client ? >dev : NULL;
+   dev = >dev;
 
for (i=0; i < MAX_SUBDEVS && gmin_subdevs[i].subdev; i++)
;
-- 
2.10.2



[PATCH] staging: atomisp: clean up return logic, remove redunant code

2017-03-11 Thread Colin King
From: Colin Ian King 

There is no need to check if ret is non-zero, remove this
redundant check and just return the error status from the call
to mt9m114_write_reg_array.

Detected by CoverityScan, CID#1416577 ("Identical code for
different branches")

Signed-off-by: Colin Ian King 
---
 drivers/staging/media/atomisp/i2c/mt9m114.c | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/staging/media/atomisp/i2c/mt9m114.c 
b/drivers/staging/media/atomisp/i2c/mt9m114.c
index 8762124..a555aec 100644
--- a/drivers/staging/media/atomisp/i2c/mt9m114.c
+++ b/drivers/staging/media/atomisp/i2c/mt9m114.c
@@ -444,12 +444,8 @@ static int mt9m114_set_suspend(struct v4l2_subdev *sd)
 static int mt9m114_init_common(struct v4l2_subdev *sd)
 {
struct i2c_client *client = v4l2_get_subdevdata(sd);
-   int ret;
 
-   ret = mt9m114_write_reg_array(client, mt9m114_common, PRE_POLLING);
-   if (ret)
-   return ret;
-   return ret;
+   return mt9m114_write_reg_array(client, mt9m114_common, PRE_POLLING);
 }
 
 static int power_ctrl(struct v4l2_subdev *sd, bool flag)
-- 
2.10.2



Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:

On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:



On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:

I really don't think expecting the user to understand and configure
the pipeline is a sane way forward.  Think about it - should the
user need to know that, because they have a bayer-only CSI data
source, that there is only one path possible, and if they try to
configure a different path, then things will just error out?

For the case of imx219 connected to iMX6, it really is as simple as
"there is only one possible path" and all the complexity of the media
interfaces/subdevs is completely unnecessary.  Every other block in
the graph is just noise.

The fact is that these dot graphs show a complex picture, but reality
is somewhat different - there's only relatively few paths available
depending on the connected source and the rest of the paths are
completely useless.



I totally disagree there. Raw bayer requires passthrough yes, but for
all other media bus formats on a mipi csi-2 bus, and all other media
bus formats on 8-bit parallel buses, the conersion pipelines can be
used for scaling, CSC, rotation, and motion-compensated de-interlacing.


... which only makes sense _if_ your source can produce those formats.
We don't actually disagree on that.

Let me re-state.  If the source can _only_ produce bayer, then there is
_only_ _one_ possible path, and all the overhead of the media controller
stuff is totally unnecessary.

Or, are you going to tell me that the user should have the right to
configure paths through the iMX6 hardware that are not permitted by the
iMX6 manuals for the data format being produced by the sensor?



Russell, I'm not following you. The imx6 pipelines allow for many
different sources, not just the inx219 that only outputs bayer. You
seem to be saying that those other pipelines should not be present
because they don't support raw bayer.

Steve


Re: [PATCH v5 15/39] [media] v4l2: add a frame interval error event

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 10:51 AM, Russell King - ARM Linux wrote:

On Sat, Mar 11, 2017 at 10:14:49AM -0800, Steve Longerbeam wrote:

On 03/11/2017 03:39 AM, Hans Verkuil wrote:

It's fine to use an internal event as long as the end-user doesn't
see it. But if you lose vsyncs, then you never capture another frame,
right?


No, that's not correct. By loss of vertical sync, I mean the IPU
captures portions of two different frames, resulting in a permanent
"split image", with one frame containing portions of two consecutive
images. Or, the video rolls continuously, if you remember the old CRT
television sets of yore, it's the same rolling effect.


I have seen that rolling effect, but the iMX6 regains correct sync
within one complete "roll" just fine here with IMX219.  However, it
has always recovered.

So, I don't think there's a problem with the iMX6 part of the
processing, and so I don't think we should cripple the iMX6 capture
drivers for this problem.


And there is no crippling going on. Measuring the frame intervals
adds very little overhead.

Steve


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Russell King - ARM Linux
On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >I really don't think expecting the user to understand and configure
> >the pipeline is a sane way forward.  Think about it - should the
> >user need to know that, because they have a bayer-only CSI data
> >source, that there is only one path possible, and if they try to
> >configure a different path, then things will just error out?
> >
> >For the case of imx219 connected to iMX6, it really is as simple as
> >"there is only one possible path" and all the complexity of the media
> >interfaces/subdevs is completely unnecessary.  Every other block in
> >the graph is just noise.
> >
> >The fact is that these dot graphs show a complex picture, but reality
> >is somewhat different - there's only relatively few paths available
> >depending on the connected source and the rest of the paths are
> >completely useless.
> >
> 
> I totally disagree there. Raw bayer requires passthrough yes, but for
> all other media bus formats on a mipi csi-2 bus, and all other media
> bus formats on 8-bit parallel buses, the conersion pipelines can be
> used for scaling, CSC, rotation, and motion-compensated de-interlacing.

... which only makes sense _if_ your source can produce those formats.
We don't actually disagree on that.

Let me re-state.  If the source can _only_ produce bayer, then there is
_only_ _one_ possible path, and all the overhead of the media controller
stuff is totally unnecessary.

Or, are you going to tell me that the user should have the right to
configure paths through the iMX6 hardware that are not permitted by the
iMX6 manuals for the data format being produced by the sensor?

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.


Re: [PATCH v5 15/39] [media] v4l2: add a frame interval error event

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 10:51 AM, Russell King - ARM Linux wrote:

On Sat, Mar 11, 2017 at 10:14:49AM -0800, Steve Longerbeam wrote:

On 03/11/2017 03:39 AM, Hans Verkuil wrote:

It's fine to use an internal event as long as the end-user doesn't
see it. But if you lose vsyncs, then you never capture another frame,
right?


No, that's not correct. By loss of vertical sync, I mean the IPU
captures portions of two different frames, resulting in a permanent
"split image", with one frame containing portions of two consecutive
images. Or, the video rolls continuously, if you remember the old CRT
television sets of yore, it's the same rolling effect.


I have seen that rolling effect, but the iMX6 regains correct sync
within one complete "roll" just fine here with IMX219.  However, it
has always recovered.


I've seen permanent split images, and rolling that continues for
minutes.




So, I don't think there's a problem with the iMX6 part of the
processing, and so I don't think we should cripple the iMX6 capture
drivers for this problem.

It seems to me that the problem is with the source.


The the problem is almost surely in the CSI. It's handling of the
bt.656 sync codes is broken in some way.

Steve


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:

On Sat, Mar 11, 2017 at 10:08:23AM -0800, Steve Longerbeam wrote:

On 03/11/2017 07:32 AM, Sakari Ailus wrote:

Hi Mauro and Hans,

On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:

Em Sat, 11 Mar 2017 12:32:43 +0100
Hans Verkuil  escreveu:


On 10/03/17 16:09, Mauro Carvalho Chehab wrote:

Em Fri, 10 Mar 2017 13:54:28 +0100
Hans Verkuil  escreveu:


Devices that have complex pipeline that do essentially require using the
Media controller interface to configure them are out of that scope.



Way too much of how the MC devices should be used is in the minds of developers.
There is a major lack for good detailed documentation, utilities, compliance
test (really needed!) and libv4l plugins.


Unfortunately, we merged an incomplete MC support at the Kernel. We knew
all the problems with MC-based drivers and V4L2 applications by the time
it was developed, and we requested Nokia developers (with was sponsoring MC
develoment, on that time) to work on a solution to allow standard V4L2
applications to work with MC based boards.

Unfortunately, we took the decision to merge MC without that, because
Nokia was giving up on Linux development, and we didn't want to lose the
2 years of discussions and work around it, as Nokia employers were leaving
the company. Also, on that time, there was already some patches floating
around adding backward support via libv4l. Unfortunately, those patches
were never finished.

The net result is that MC was merged with some huge gaps, including
the lack of a proper solution for a generic V4L2 program to work
with V4L2 devices that use the subdev API.

That was not that bad by then, as MC was used only on cell phones
that run custom-made applications.

The reallity changed, as now, we have lots of low cost SoC based
boards, used for all sort of purposes. So, we need a quick solution
for it.

In other words, while that would be acceptable support special apps
on really embedded systems, it is *not OK* for general purpose SoC
harware[1].

[1] I'm calling "general purpose SoC harware" those ARM boards
like Raspberry Pi that are shipped to the mass and used by a wide
range of hobbyists and other people that just wants to run Linux on
ARM. It is possible to buy such boards for a very cheap price,
making them to be used not only on special projects, where a custom
made application could be interesting, but also for a lot of
users that just want to run Linux on a low cost ARM board, while
keeping using standard V4L2 apps, like "camorama".

That's perhaps one of the reasons why it took a long time for us to
start receiving drivers upstream for such hardware: it is quite
intimidating and not logical to require developers to implement
on their drivers 2 complex APIs (MC, subdev) for those
hardware that most users won't care. From user's perspective,
being able to support generic applications like "camorama" and
"zbar" is all they want.

In summary, I'm pretty sure we need to support standard V4L2
applications on boards like Raspberry Pi and those low-cost
SoC-based boards that are shipped to end users.


Anyway, regarding this specific patch and for this MC-aware driver: no, you
shouldn't inherit controls from subdevs. It defeats the purpose.


Sorry, but I don't agree with that. The subdev API is an optional API
(and even the MC API can be optional).

I see the rationale for using MC and subdev APIs on cell phones,
ISV and other embedded hardware, as it will allow fine-tuning
the driver's support to allow providing the required quality for
certain custom-made applications. but on general SoC hardware,
supporting standard V4L2 applications is a need.

Ok, perhaps supporting both subdev API and V4L2 API at the same
time doesn't make much sense. We could disable one in favor of the
other, either at compilation time or at runtime.


Right. If the subdev API is disabled, then you have to inherit the subdev
controls in the bridge driver (how else would you be able to access them?).
And that's the usual case.

If you do have the subdev API enabled, AND you use the MC, then the
intention clearly is to give userspace full control and inheriting controls
no longer makes any sense (and is highly confusing IMHO).


I tend to agree with that.


I agree as well.

This is in line with how existing drivers behave, too.


Well, sounds like there is consensus on this topic. I guess I'll
go ahead and remove the control inheritance support. I suppose
having a control appear in two places (subdev and video nodes) can
be confusing.


I would say _don't_ do that until there are tools/libraries in place
that are able to support controlling subdevs, otherwise it's just
going to be another reason for me to walk away from this stuff, and
stick with a version that does work sensibly.


As for the configurability vs. ease-of-use debate, I added the
control inheritance to make it a little easier on the 

Re: [PATCH v5 15/39] [media] v4l2: add a frame interval error event

2017-03-11 Thread Russell King - ARM Linux
On Sat, Mar 11, 2017 at 10:14:49AM -0800, Steve Longerbeam wrote:
> On 03/11/2017 03:39 AM, Hans Verkuil wrote:
> >It's fine to use an internal event as long as the end-user doesn't
> >see it. But if you lose vsyncs, then you never capture another frame,
> >right?
> 
> No, that's not correct. By loss of vertical sync, I mean the IPU
> captures portions of two different frames, resulting in a permanent
> "split image", with one frame containing portions of two consecutive
> images. Or, the video rolls continuously, if you remember the old CRT
> television sets of yore, it's the same rolling effect.

I have seen that rolling effect, but the iMX6 regains correct sync
within one complete "roll" just fine here with IMX219.  However, it
has always recovered.

So, I don't think there's a problem with the iMX6 part of the
processing, and so I don't think we should cripple the iMX6 capture
drivers for this problem.

It seems to me that the problem is with the source.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Russell King - ARM Linux
On Sat, Mar 11, 2017 at 10:08:23AM -0800, Steve Longerbeam wrote:
> On 03/11/2017 07:32 AM, Sakari Ailus wrote:
> >Hi Mauro and Hans,
> >
> >On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
> >>Em Sat, 11 Mar 2017 12:32:43 +0100
> >>Hans Verkuil  escreveu:
> >>
> >>>On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> Em Fri, 10 Mar 2017 13:54:28 +0100
> Hans Verkuil  escreveu:
> 
> >>Devices that have complex pipeline that do essentially require using the
> >>Media controller interface to configure them are out of that scope.
> >>
> >
> >Way too much of how the MC devices should be used is in the minds of 
> >developers.
> >There is a major lack for good detailed documentation, utilities, 
> >compliance
> >test (really needed!) and libv4l plugins.
> 
> Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> all the problems with MC-based drivers and V4L2 applications by the time
> it was developed, and we requested Nokia developers (with was sponsoring 
> MC
> develoment, on that time) to work on a solution to allow standard V4L2
> applications to work with MC based boards.
> 
> Unfortunately, we took the decision to merge MC without that, because
> Nokia was giving up on Linux development, and we didn't want to lose the
> 2 years of discussions and work around it, as Nokia employers were leaving
> the company. Also, on that time, there was already some patches floating
> around adding backward support via libv4l. Unfortunately, those patches
> were never finished.
> 
> The net result is that MC was merged with some huge gaps, including
> the lack of a proper solution for a generic V4L2 program to work
> with V4L2 devices that use the subdev API.
> 
> That was not that bad by then, as MC was used only on cell phones
> that run custom-made applications.
> 
> The reallity changed, as now, we have lots of low cost SoC based
> boards, used for all sort of purposes. So, we need a quick solution
> for it.
> 
> In other words, while that would be acceptable support special apps
> on really embedded systems, it is *not OK* for general purpose SoC
> harware[1].
> 
> [1] I'm calling "general purpose SoC harware" those ARM boards
> like Raspberry Pi that are shipped to the mass and used by a wide
> range of hobbyists and other people that just wants to run Linux on
> ARM. It is possible to buy such boards for a very cheap price,
> making them to be used not only on special projects, where a custom
> made application could be interesting, but also for a lot of
> users that just want to run Linux on a low cost ARM board, while
> keeping using standard V4L2 apps, like "camorama".
> 
> That's perhaps one of the reasons why it took a long time for us to
> start receiving drivers upstream for such hardware: it is quite
> intimidating and not logical to require developers to implement
> on their drivers 2 complex APIs (MC, subdev) for those
> hardware that most users won't care. From user's perspective,
> being able to support generic applications like "camorama" and
> "zbar" is all they want.
> 
> In summary, I'm pretty sure we need to support standard V4L2
> applications on boards like Raspberry Pi and those low-cost
> SoC-based boards that are shipped to end users.
> 
> >Anyway, regarding this specific patch and for this MC-aware driver: no, 
> >you
> >shouldn't inherit controls from subdevs. It defeats the purpose.
> 
> Sorry, but I don't agree with that. The subdev API is an optional API
> (and even the MC API can be optional).
> 
> I see the rationale for using MC and subdev APIs on cell phones,
> ISV and other embedded hardware, as it will allow fine-tuning
> the driver's support to allow providing the required quality for
> certain custom-made applications. but on general SoC hardware,
> supporting standard V4L2 applications is a need.
> 
> Ok, perhaps supporting both subdev API and V4L2 API at the same
> time doesn't make much sense. We could disable one in favor of the
> other, either at compilation time or at runtime.
> >>>
> >>>Right. If the subdev API is disabled, then you have to inherit the subdev
> >>>controls in the bridge driver (how else would you be able to access them?).
> >>>And that's the usual case.
> >>>
> >>>If you do have the subdev API enabled, AND you use the MC, then the
> >>>intention clearly is to give userspace full control and inheriting controls
> >>>no longer makes any sense (and is highly confusing IMHO).
> >>
> >>I tend to agree with that.
> >
> >I agree as well.
> >
> >This is in line with how existing drivers behave, too.
> 
> Well, sounds like there is consensus on this topic. I guess I'll
> go 

Re: [PATCH v5 21/39] UAPI: Add media UAPI Kbuild file

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 05:49 AM, Sakari Ailus wrote:

Hi Steve,

On Thu, Mar 09, 2017 at 08:53:01PM -0800, Steve Longerbeam wrote:

Add an empty UAPI Kbuild file for media UAPI headers.

Signed-off-by: Steve Longerbeam 


The existing V4L2 UAPI headers are under include/uapi/linux. Could you use
that directory instead?

I actually wouldn't really object doing this but it should have been done in
2002 or so when the first V4L2 header was added. Now the benefit is
questionable.



Agreed, I think the current headers should be moved to uapi/media
eventually, but for now I'll go ahead and put under uapi/linux.

Steve




Re: [PATCH v5 15/39] [media] v4l2: add a frame interval error event

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 03:39 AM, Hans Verkuil wrote:

On 10/03/17 19:37, Steve Longerbeam wrote:

Hi Hans,

On 03/10/2017 04:03 AM, Hans Verkuil wrote:

On 10/03/17 05:52, Steve Longerbeam wrote:

Add a new FRAME_INTERVAL_ERROR event to signal that a video capture or
output device has measured an interval between the reception or transmit
completion of two consecutive frames of video that is outside the nominal
frame interval by some tolerance value.


Reading back what was said on this I agree with Sakari that this doesn't
belong here.

Userspace can detect this just as easily (if not easier) with a timeout.




Unfortunately measuring frame intervals from userland is not accurate
enough for i.MX6.

The issue here is that the IPUv3, specifically the CSI unit, can
permanently lose vertical sync if there are truncated frames sent
on the bt.656 bus. We have seen a single missing line of video cause
loss of vertical sync. The only way to correct this is to shutdown
the IPU capture hardware and restart, which can be accomplished
simply by restarting streaming from userland.

There are no other indicators from the sensor about these short
frame events (believe me, we've exhausted all avenues with the ADV718x).
And the IPUv3 DMA engine has no status indicators for short frames
either. So the only way to detect them is by measuring frame intervals.

The intervals have to be able to resolve a single line of missing video.
With a PAL video source that requires better than 58 usec accuracy.

There is too much uncertainty to resolve this at user level. The
driver is able to resolve this by measuring intervals between hardware
interrupts as long as interrupt latency is reasonably low, and we
have another method using the i.MX6 hardware input capture support
that can measure these intervals very accurately with no errors
introduced by interrupt latency.

I made this event a private event to imx-media driver in a previous
iteration, so I can return it to a private event, but this can't be
done at user level.


It's fine to use an internal event as long as the end-user doesn't
see it. But if you lose vsyncs, then you never capture another frame,
right?


No, that's not correct. By loss of vertical sync, I mean the IPU
captures portions of two different frames, resulting in a permanent
"split image", with one frame containing portions of two consecutive
images. Or, the video rolls continuously, if you remember the old CRT
television sets of yore, it's the same rolling effect.



So userspace can detect that (i.e. no new frames arrive) and
it can timeout on that. Or you detect it in the driver and restart there,
or call vb2_queue_error().



There is no timeout, the frames keep coming, but they are split images
or rolling.


Anything really as long as this event isn't user-visible :-)


The event must be user visible, otherwise the user has no indication
the error, and can't correct it by stream restart.

Steve


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Steve Longerbeam



On 03/11/2017 07:32 AM, Sakari Ailus wrote:

Hi Mauro and Hans,

On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:

Em Sat, 11 Mar 2017 12:32:43 +0100
Hans Verkuil  escreveu:


On 10/03/17 16:09, Mauro Carvalho Chehab wrote:

Em Fri, 10 Mar 2017 13:54:28 +0100
Hans Verkuil  escreveu:


Devices that have complex pipeline that do essentially require using the
Media controller interface to configure them are out of that scope.



Way too much of how the MC devices should be used is in the minds of developers.
There is a major lack for good detailed documentation, utilities, compliance
test (really needed!) and libv4l plugins.


Unfortunately, we merged an incomplete MC support at the Kernel. We knew
all the problems with MC-based drivers and V4L2 applications by the time
it was developed, and we requested Nokia developers (with was sponsoring MC
develoment, on that time) to work on a solution to allow standard V4L2
applications to work with MC based boards.

Unfortunately, we took the decision to merge MC without that, because
Nokia was giving up on Linux development, and we didn't want to lose the
2 years of discussions and work around it, as Nokia employers were leaving
the company. Also, on that time, there was already some patches floating
around adding backward support via libv4l. Unfortunately, those patches
were never finished.

The net result is that MC was merged with some huge gaps, including
the lack of a proper solution for a generic V4L2 program to work
with V4L2 devices that use the subdev API.

That was not that bad by then, as MC was used only on cell phones
that run custom-made applications.

The reallity changed, as now, we have lots of low cost SoC based
boards, used for all sort of purposes. So, we need a quick solution
for it.

In other words, while that would be acceptable support special apps
on really embedded systems, it is *not OK* for general purpose SoC
harware[1].

[1] I'm calling "general purpose SoC harware" those ARM boards
like Raspberry Pi that are shipped to the mass and used by a wide
range of hobbyists and other people that just wants to run Linux on
ARM. It is possible to buy such boards for a very cheap price,
making them to be used not only on special projects, where a custom
made application could be interesting, but also for a lot of
users that just want to run Linux on a low cost ARM board, while
keeping using standard V4L2 apps, like "camorama".

That's perhaps one of the reasons why it took a long time for us to
start receiving drivers upstream for such hardware: it is quite
intimidating and not logical to require developers to implement
on their drivers 2 complex APIs (MC, subdev) for those
hardware that most users won't care. From user's perspective,
being able to support generic applications like "camorama" and
"zbar" is all they want.

In summary, I'm pretty sure we need to support standard V4L2
applications on boards like Raspberry Pi and those low-cost
SoC-based boards that are shipped to end users.


Anyway, regarding this specific patch and for this MC-aware driver: no, you
shouldn't inherit controls from subdevs. It defeats the purpose.


Sorry, but I don't agree with that. The subdev API is an optional API
(and even the MC API can be optional).

I see the rationale for using MC and subdev APIs on cell phones,
ISV and other embedded hardware, as it will allow fine-tuning
the driver's support to allow providing the required quality for
certain custom-made applications. but on general SoC hardware,
supporting standard V4L2 applications is a need.

Ok, perhaps supporting both subdev API and V4L2 API at the same
time doesn't make much sense. We could disable one in favor of the
other, either at compilation time or at runtime.


Right. If the subdev API is disabled, then you have to inherit the subdev
controls in the bridge driver (how else would you be able to access them?).
And that's the usual case.

If you do have the subdev API enabled, AND you use the MC, then the
intention clearly is to give userspace full control and inheriting controls
no longer makes any sense (and is highly confusing IMHO).


I tend to agree with that.


I agree as well.

This is in line with how existing drivers behave, too.



Well, sounds like there is consensus on this topic. I guess I'll
go ahead and remove the control inheritance support. I suppose
having a control appear in two places (subdev and video nodes) can
be confusing.

As for the configurability vs. ease-of-use debate, I added the
control inheritance to make it a little easier on the user, but,
as the dot graphs below will show, the user already needs quite
a lot of knowledge of the architecture already, in order to setup
the different pipelines. So perhaps the control inheritance is
rather pointless anyway.









This way, if the subdev API is disabled, the driver will be
functional for V4L2-based applications that don't support neither
MC or subdev 

usb_video.c: 0bda:579f Realtek - corrupted frames and low FPS of captured mjpeg video frames

2017-03-11 Thread Dmitrii Shcherbakov
Hi everybody,

I have a usb camera built into my laptop (hardware details here
https://paste.ubuntu.com/24126969/) and I am looking for some guidance
on how to debug it further - any feedback is highly appreciated.

I am aware that this hardware is probably buggy/does not follow the
UVC spec/vendor only cared about Windows etc.

I am trying to capture mjpeg-encoded video at various resolutions and
frame rates via ffmpeg using v4l2 and the uvc kernel driver.

Test results (kernel logs with uvc driver in verbose mode and ffmpeg
output are included):
https://paste.ubuntu.com/24126930/
https://paste.ubuntu.com/24126960/

uname -r
4.11.0-041100rc1-generic

Conclusions:
- using any resolution higher than 640x480 results in corrupted bottom
half of the image (grey and green artifacts)
- frame rate is low on any resolution, even when I specify 640x480
with 30 or 15 fps via v4l2 it is nowhere near that point

frame=  179 fps=7.7 q=-1.0 Lsize=3882kB time=00:00:23.06
bitrate=1378.6kbits/s speed=0.994x

- the kernel log is filled with the following messages

мар 07 00:15:31 blade kernel: uvcvideo: uvc_v4l2_mmap
мар 07 00:15:31 blade kernel: uvcvideo: Allocated 5 URB buffers of
32x512 bytes each.
мар 07 00:15:31 blade kernel: uvcvideo: frame 1 stats: 0/0/1 packets,
0/0/1 pts (!early initial), 0/1 scr, last pts/stc/sof
6446951/6832111/975
мар 07 00:15:31 blade kernel: uvcvideo: Marking buffer as bad (error bit set).
мар 07 00:15:31 blade kernel: uvcvideo: Frame complete (FID bit toggled).
мар 07 00:15:31 blade kernel: uvcvideo: frame 2 stats: 0/0/1 packets,
0/0/1 pts (!early initial), 0/0 scr, last pts/stc/sof 3617775107/0/0
мар 07 00:15:31 blade kernel: uvcvideo: Marking buffer as bad (error bit set).
мар 07 00:15:31 blade kernel: uvcvideo: Frame complete (EOF found).
мар 07 00:15:31 blade kernel: uvcvideo: Dropping payload (out of sync).
мар 07 00:15:31 blade kernel: uvcvideo: frame 3 stats: 0/0/2 packets,
1/1/2 pts (!early initial), 0/1 scr, last pts/stc/sof
8413602/8799007/1106
мар 07 00:15:31 blade kernel: uvcvideo: Frame complete (EOF found).
мар 07 00:15:32 blade kernel: uvcvideo: Dropping payload (out of sync).
мар 07 00:15:32 blade kernel: uvcvideo: Marking buffer as bad (error bit set).
мар 07 00:15:32 blade kernel: uvcvideo: Dropping payload (out of sync).

- some entity types were not initialized at module loading time

мар 06 20:47:35 blade kernel: uvcvideo: Found UVC 1.00 device USB
Camera (0bda:579f)
мар 06 20:47:35 blade kernel: uvcvideo: Forcing device quirks to 0x80
by module parameter for testing purpose.
мар 06 20:47:35 blade kernel: uvcvideo: Please report required quirks
to the linux-uvc-devel mailing list.
мар 06 20:47:35 blade kernel: uvcvideo 1-7:1.0: Entity type for entity
Extension 4 was not initialized!
мар 06 20:47:35 blade kernel: uvcvideo 1-7:1.0: Entity type for entity
Processing 2 was not initialized!
мар 06 20:47:35 blade kernel: uvcvideo 1-7:1.0: Entity type for entity
Camera 1 was not initialized!
мар 06 20:47:35 blade kernel: usbcore: registered new interface driver uvcvideo

- tried different quirks non of which made any difference (I can try a
specific one or a combination if there are any ideas). I cannot
diagnose what it is so this was just random poking.

- a similar thread for XPS 12 with a Realtek webcam
https://www.spinics.net/lists/linux-media/msg73476.html

- the buffer is marked as bad in uvc_video_decode_start based upon a
UVC_STREAM_ERR flag:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/usb/uvc/uvc_video.c#n1004

If anybody has any pointers/suggestions, please let me know.

Thanks in advance!


usb_video.c: 0bda:579f Realtek - corrupted frames and low FPS of captured mjpeg video frames

2017-03-11 Thread Dmitrii Shcherbakov
Hi everybody,

I have a usb camera built into my laptop (hardware details here
https://paste.ubuntu.com/24126969/) and I am looking for some guidance
on how to debug it further - any feedback is highly appreciated.

I am aware that this hardware is probably buggy/does not follow the
UVC spec/vendor only cared about Windows etc.

I am trying to capture mjpeg-encoded video at various resolutions and
frame rates via ffmpeg using v4l2 and the uvc kernel driver.

Test results (kernel logs with uvc driver in verbose mode and ffmpeg
output are included):
https://paste.ubuntu.com/24126930/
https://paste.ubuntu.com/24126960/

uname -r
4.11.0-041100rc1-generic

Conclusions:
- using any resolution higher than 640x480 results in corrupted bottom
half of the image (grey and green artifacts)
- frame rate is low on any resolution, even when I specify 640x480
with 30 or 15 fps via v4l2 it is nowhere near that point

frame=  179 fps=7.7 q=-1.0 Lsize=3882kB time=00:00:23.06
bitrate=1378.6kbits/s speed=0.994x

- the kernel log is filled with the following messages

мар 07 00:15:31 blade kernel: uvcvideo: uvc_v4l2_mmap
мар 07 00:15:31 blade kernel: uvcvideo: Allocated 5 URB buffers of
32x512 bytes each.
мар 07 00:15:31 blade kernel: uvcvideo: frame 1 stats: 0/0/1 packets,
0/0/1 pts (!early initial), 0/1 scr, last pts/stc/sof
6446951/6832111/975
мар 07 00:15:31 blade kernel: uvcvideo: Marking buffer as bad (error bit set).
мар 07 00:15:31 blade kernel: uvcvideo: Frame complete (FID bit toggled).
мар 07 00:15:31 blade kernel: uvcvideo: frame 2 stats: 0/0/1 packets,
0/0/1 pts (!early initial), 0/0 scr, last pts/stc/sof 3617775107/0/0
мар 07 00:15:31 blade kernel: uvcvideo: Marking buffer as bad (error bit set).
мар 07 00:15:31 blade kernel: uvcvideo: Frame complete (EOF found).
мар 07 00:15:31 blade kernel: uvcvideo: Dropping payload (out of sync).
мар 07 00:15:31 blade kernel: uvcvideo: frame 3 stats: 0/0/2 packets,
1/1/2 pts (!early initial), 0/1 scr, last pts/stc/sof
8413602/8799007/1106
мар 07 00:15:31 blade kernel: uvcvideo: Frame complete (EOF found).
мар 07 00:15:32 blade kernel: uvcvideo: Dropping payload (out of sync).
мар 07 00:15:32 blade kernel: uvcvideo: Marking buffer as bad (error bit set).
мар 07 00:15:32 blade kernel: uvcvideo: Dropping payload (out of sync).

- some entity types were not initialized at module loading time

мар 06 20:47:35 blade kernel: uvcvideo: Found UVC 1.00 device USB
Camera (0bda:579f)
мар 06 20:47:35 blade kernel: uvcvideo: Forcing device quirks to 0x80
by module parameter for testing purpose.
мар 06 20:47:35 blade kernel: uvcvideo: Please report required quirks
to the linux-uvc-devel mailing list.
мар 06 20:47:35 blade kernel: uvcvideo 1-7:1.0: Entity type for entity
Extension 4 was not initialized!
мар 06 20:47:35 blade kernel: uvcvideo 1-7:1.0: Entity type for entity
Processing 2 was not initialized!
мар 06 20:47:35 blade kernel: uvcvideo 1-7:1.0: Entity type for entity
Camera 1 was not initialized!
мар 06 20:47:35 blade kernel: usbcore: registered new interface driver uvcvideo

- tried different quirks non of which made any difference (I can try a
specific one or a combination if there are any ideas). I cannot
diagnose what it is so this was just random poking.

- a similar thread for XPS 12 with a Realtek webcam
https://www.spinics.net/lists/linux-media/msg73476.html

- the buffer is marked as bad in uvc_video_decode_start based upon a
UVC_STREAM_ERR flag:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/usb/uvc/uvc_video.c#n1004

If anybody has any pointers/suggestions, please let me know.

Thanks in advance!


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Russell King - ARM Linux
On Sat, Mar 11, 2017 at 05:32:29PM +0200, Sakari Ailus wrote:
> My understanding of the i.MX6 case is the hardware is configurable enough
> to warrant the use of the Media controller API. Some patches indicate
> there are choices to be made in data routing.

The iMX6 does have configurable data routing, but in some scenarios
(eg, when receiving bayer data) there's only one possible routing.

> Steve: could you enlighten us on the topic, by e.g. doing media-ctl
> --print-dot and sending the results to the list? What kind of different IP
> blocks are there and what do they do? A pointer to hardware documentation
> wouldn't hurt either (if it's available).

Attached for the imx219 camera.  Note that although the CSI2 block has
four outputs, each output is dedicated to a CSI virtual channel, so
they can not be arbitarily assigned without configuring the sensor.

Since the imx219 only produces bayer, the graph is also showing the
_only_ possible routing for the imx219 configured for CSI virtual
channel 0.

The iMX6 manuals are available on the 'net.

https://community.nxp.com/docs/DOC-101840

There are several chapters that cover the capture side:

* MIPI CSI2
* IPU CSI2 gasket
* IPU

The IPU not only performs capture, but also display as well.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
digraph board {
	rankdir=TB
	n0001 [label="{{ 0 |  1} | ipu1_csi0_mux\n/dev/v4l-subdev0 | { 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0001:port2 -> n0044:port0
	n0005 [label="{{ 0 |  1} | ipu2_csi1_mux\n/dev/v4l-subdev1 | { 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0005:port2 -> n0068:port0 [style=dashed]
	n0009 [label="{{ 0 |  1} | ipu1_vdic\n/dev/v4l-subdev2 | { 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0009:port2 -> n0011:port0 [style=dashed]
	n000d [label="{{ 0 |  1} | ipu2_vdic\n/dev/v4l-subdev3 | { 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n000d:port2 -> n0027:port0 [style=dashed]
	n0011 [label="{{ 0} | ipu1_ic_prp\n/dev/v4l-subdev4 | { 1 |  2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0011:port1 -> n0015:port0 [style=dashed]
	n0011:port2 -> n001e:port0 [style=dashed]
	n0015 [label="{{ 0} | ipu1_ic_prpenc\n/dev/v4l-subdev5 | { 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n0015:port1 -> n0018 [style=dashed]
	n0018 [label="ipu1_ic_prpenc capture\n/dev/video0", shape=box, style=filled, fillcolor=yellow]
	n001e [label="{{ 0} | ipu1_ic_prpvf\n/dev/v4l-subdev6 | { 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n001e:port1 -> n0021 [style=dashed]
	n0021 [label="ipu1_ic_prpvf capture\n/dev/video1", shape=box, style=filled, fillcolor=yellow]
	n0027 [label="{{ 0} | ipu2_ic_prp\n/dev/v4l-subdev7 | { 1 |  2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0027:port1 -> n002b:port0 [style=dashed]
	n0027:port2 -> n0034:port0 [style=dashed]
	n002b [label="{{ 0} | ipu2_ic_prpenc\n/dev/v4l-subdev8 | { 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n002b:port1 -> n002e [style=dashed]
	n002e [label="ipu2_ic_prpenc capture\n/dev/video2", shape=box, style=filled, fillcolor=yellow]
	n0034 [label="{{ 0} | ipu2_ic_prpvf\n/dev/v4l-subdev9 | { 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n0034:port1 -> n0037 [style=dashed]
	n0037 [label="ipu2_ic_prpvf capture\n/dev/video3", shape=box, style=filled, fillcolor=yellow]
	n003d [label="{{ 1} | imx219 0-0010\n/dev/v4l-subdev11 | { 0}}", shape=Mrecord, style=filled, fillcolor=green]
	n003d:port0 -> n0058:port0
	n0040 [label="{{} | imx219 pixel 0-0010\n/dev/v4l-subdev10 | { 0}}", shape=Mrecord, style=filled, fillcolor=green]
	n0040:port0 -> n003d:port1 [style=bold]
	n0044 [label="{{ 0} | ipu1_csi0\n/dev/v4l-subdev12 | { 1 |  2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0044:port2 -> n0048
	n0044:port1 -> n0011:port0 [style=dashed]
	n0044:port1 -> n0009:port0 [style=dashed]
	n0048 [label="ipu1_csi0 capture\n/dev/video4", shape=box, style=filled, fillcolor=yellow]
	n004e [label="{{ 0} | ipu1_csi1\n/dev/v4l-subdev13 | { 1 |  2}}", shape=Mrecord, style=filled, fillcolor=green]
	n004e:port2 -> n0052 [style=dashed]
	n004e:port1 -> n0011:port0 [style=dashed]
	n004e:port1 -> n0009:port0 [style=dashed]
	n0052 [label="ipu1_csi1 capture\n/dev/video5", shape=box, style=filled, fillcolor=yellow]
	n0058 [label="{{ 0} | imx6-mipi-csi2\n/dev/v4l-subdev14 | { 1 |  2 |  3 |  4}}", shape=Mrecord, style=filled, fillcolor=green]
	n0058:port1 -> n0001:port0
	n0058:port2 -> n004e:port0 [style=dashed]
	n0058:port3 -> n005e:port0 [style=dashed]
	n0058:port4 -> n0005:port0 [style=dashed]
	n005e [label="{{ 0} | 

Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Sakari Ailus
Hi Mauro and Hans,

On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
> Em Sat, 11 Mar 2017 12:32:43 +0100
> Hans Verkuil  escreveu:
> 
> > On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> > > Em Fri, 10 Mar 2017 13:54:28 +0100
> > > Hans Verkuil  escreveu:
> > >   
> > >>> Devices that have complex pipeline that do essentially require using the
> > >>> Media controller interface to configure them are out of that scope.
> > >>> 
> > >>
> > >> Way too much of how the MC devices should be used is in the minds of 
> > >> developers.
> > >> There is a major lack for good detailed documentation, utilities, 
> > >> compliance
> > >> test (really needed!) and libv4l plugins.  
> > > 
> > > Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> > > all the problems with MC-based drivers and V4L2 applications by the time
> > > it was developed, and we requested Nokia developers (with was sponsoring 
> > > MC
> > > develoment, on that time) to work on a solution to allow standard V4L2
> > > applications to work with MC based boards.
> > > 
> > > Unfortunately, we took the decision to merge MC without that, because 
> > > Nokia was giving up on Linux development, and we didn't want to lose the
> > > 2 years of discussions and work around it, as Nokia employers were leaving
> > > the company. Also, on that time, there was already some patches floating
> > > around adding backward support via libv4l. Unfortunately, those patches
> > > were never finished.
> > > 
> > > The net result is that MC was merged with some huge gaps, including
> > > the lack of a proper solution for a generic V4L2 program to work
> > > with V4L2 devices that use the subdev API.
> > > 
> > > That was not that bad by then, as MC was used only on cell phones
> > > that run custom-made applications. 
> > > 
> > > The reallity changed, as now, we have lots of low cost SoC based
> > > boards, used for all sort of purposes. So, we need a quick solution
> > > for it.
> > > 
> > > In other words, while that would be acceptable support special apps
> > > on really embedded systems, it is *not OK* for general purpose SoC
> > > harware[1].
> > > 
> > > [1] I'm calling "general purpose SoC harware" those ARM boards
> > > like Raspberry Pi that are shipped to the mass and used by a wide
> > > range of hobbyists and other people that just wants to run Linux on
> > > ARM. It is possible to buy such boards for a very cheap price,
> > > making them to be used not only on special projects, where a custom
> > > made application could be interesting, but also for a lot of
> > > users that just want to run Linux on a low cost ARM board, while
> > > keeping using standard V4L2 apps, like "camorama".
> > > 
> > > That's perhaps one of the reasons why it took a long time for us to
> > > start receiving drivers upstream for such hardware: it is quite 
> > > intimidating and not logical to require developers to implement
> > > on their drivers 2 complex APIs (MC, subdev) for those
> > > hardware that most users won't care. From user's perspective,
> > > being able to support generic applications like "camorama" and
> > > "zbar" is all they want.
> > > 
> > > In summary, I'm pretty sure we need to support standard V4L2 
> > > applications on boards like Raspberry Pi and those low-cost 
> > > SoC-based boards that are shipped to end users.
> > >   
> > >> Anyway, regarding this specific patch and for this MC-aware driver: no, 
> > >> you
> > >> shouldn't inherit controls from subdevs. It defeats the purpose.  
> > > 
> > > Sorry, but I don't agree with that. The subdev API is an optional API
> > > (and even the MC API can be optional).
> > > 
> > > I see the rationale for using MC and subdev APIs on cell phones,
> > > ISV and other embedded hardware, as it will allow fine-tuning
> > > the driver's support to allow providing the required quality for
> > > certain custom-made applications. but on general SoC hardware,
> > > supporting standard V4L2 applications is a need.
> > > 
> > > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > > time doesn't make much sense. We could disable one in favor of the
> > > other, either at compilation time or at runtime.  
> > 
> > Right. If the subdev API is disabled, then you have to inherit the subdev
> > controls in the bridge driver (how else would you be able to access them?).
> > And that's the usual case.
> > 
> > If you do have the subdev API enabled, AND you use the MC, then the
> > intention clearly is to give userspace full control and inheriting controls
> > no longer makes any sense (and is highly confusing IMHO).
> 
> I tend to agree with that.

I agree as well.

This is in line with how existing drivers behave, too.

> 
> > > 
> > > This way, if the subdev API is disabled, the driver will be
> > > functional for V4L2-based applications that don't support neither
> > > MC or subdev APIs.  
> > 
> > I'm not sure if it makes 

Re: [PATCH v5 21/39] UAPI: Add media UAPI Kbuild file

2017-03-11 Thread Sakari Ailus
Hi Steve,

On Thu, Mar 09, 2017 at 08:53:01PM -0800, Steve Longerbeam wrote:
> Add an empty UAPI Kbuild file for media UAPI headers.
> 
> Signed-off-by: Steve Longerbeam 

The existing V4L2 UAPI headers are under include/uapi/linux. Could you use
that directory instead?

I actually wouldn't really object doing this but it should have been done in
2002 or so when the first V4L2 header was added. Now the benefit is
questionable.

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi XMPP: sai...@retiisi.org.uk


Re: [PATCH v5 18/39] [media] v4l: subdev: Add function to validate frame interval

2017-03-11 Thread Sakari Ailus
Hi Steve,

On Thu, Mar 09, 2017 at 08:52:58PM -0800, Steve Longerbeam wrote:
> If the pads on both sides of a link specify a frame interval, then
> those frame intervals should match. Create the exported function
> v4l2_subdev_link_validate_frame_interval() to verify this. This
> function can be called in a subdevice's media_entity_operations
> or v4l2_subdev_pad_ops link_validate callbacks.
> 
> Signed-off-by: Steve Longerbeam 

If your only goal is to configure frame dropping on a sub-device, I suggest
to implement s_frame_interval() on the pads of that sub-device only. The
frames are then dropped according to the configured frame rates between the
sink and source pads. Say, configuring sink for 1/30 s and source 1/15 would
drop half of the incoming frames.

Considering that supporting specific frame interval on most sub-devices adds
no value or is not the interface through which it the frame rate configured,
I think it is overkill to change the link validation to expect otherwise.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ai...@iki.fi XMPP: sai...@retiisi.org.uk


Re: [PATCHv5 00/16] atmel-isi/ov7670/ov2640: convert to standalone drivers

2017-03-11 Thread Sakari Ailus
On Sat, Mar 11, 2017 at 12:23:12PM +0100, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> This patch series converts the soc-camera atmel-isi to a standalone V4L2
> driver.

Patches 5 and 13:

Acked-by: Sakari Ailus 

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi XMPP: sai...@retiisi.org.uk


Re: [PATCHv5 07/16] atmel-isi: remove dependency of the soc-camera framework

2017-03-11 Thread Sakari Ailus
Hi Hans,

On Sat, Mar 11, 2017 at 12:23:19PM +0100, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> This patch converts the atmel-isi driver from a soc-camera driver to a driver
> that is stand-alone.
> 
> Signed-off-by: Hans Verkuil 
> ---
>  drivers/media/platform/soc_camera/Kconfig |3 +-
>  drivers/media/platform/soc_camera/atmel-isi.c | 1209 
> +++--
>  2 files changed, 714 insertions(+), 498 deletions(-)
> 
> diff --git a/drivers/media/platform/soc_camera/Kconfig 
> b/drivers/media/platform/soc_camera/Kconfig
> index 86d74788544f..a37ec91b026e 100644
> --- a/drivers/media/platform/soc_camera/Kconfig
> +++ b/drivers/media/platform/soc_camera/Kconfig
> @@ -29,9 +29,8 @@ config VIDEO_SH_MOBILE_CEU
>  
>  config VIDEO_ATMEL_ISI
>   tristate "ATMEL Image Sensor Interface (ISI) support"
> - depends on VIDEO_DEV && SOC_CAMERA
> + depends on VIDEO_V4L2 && OF && HAS_DMA
>   depends on ARCH_AT91 || COMPILE_TEST
> - depends on HAS_DMA
>   select VIDEOBUF2_DMA_CONTIG
>   ---help---
> This module makes the ATMEL Image Sensor Interface available
> diff --git a/drivers/media/platform/soc_camera/atmel-isi.c 
> b/drivers/media/platform/soc_camera/atmel-isi.c
> index 46de657c3e6d..a6d60c2e207d 100644
> --- a/drivers/media/platform/soc_camera/atmel-isi.c
> +++ b/drivers/media/platform/soc_camera/atmel-isi.c

...

> +static int isi_graph_init(struct atmel_isi *isi)
> +{
> + struct v4l2_async_subdev **subdevs = NULL;
> + int ret;
> +
> + /* Parse the graph to extract a list of subdevice DT nodes. */
> + ret = isi_graph_parse(isi, isi->dev->of_node);
> + if (ret < 0) {
> + dev_err(isi->dev, "Graph parsing failed\n");
> + goto done;
> + }
> +
> + if (!ret) {
> + dev_err(isi->dev, "No subdev found in graph\n");
> + goto done;
> + }
> +
> + /* Register the subdevices notifier. */
> + subdevs = devm_kzalloc(isi->dev, sizeof(*subdevs), GFP_KERNEL);
> + if (subdevs == NULL) {
> + ret = -ENOMEM;
> + goto done;
> + }
> +
> + subdevs[0] = >entity.asd;
> +
> + isi->notifier.subdevs = subdevs;
> + isi->notifier.num_subdevs = 1;
> + isi->notifier.bound = isi_graph_notify_bound;
> + isi->notifier.unbind = isi_graph_notify_unbind;
> + isi->notifier.complete = isi_graph_notify_complete;
> +
> + ret = v4l2_async_notifier_register(>v4l2_dev, >notifier);
> + if (ret < 0) {
> + dev_err(isi->dev, "Notifier registration failed\n");
> + goto done;
> + }
> +
> + ret = 0;

You can replace this by

return 0;

And remove the if () below.

> +
> +done:
> + if (ret < 0) {
> + v4l2_async_notifier_unregister(>notifier);
> + of_node_put(isi->entity.node);
> + }
> +
> + return ret;
> +}
> +

Acked-by: Sakari Ailus 

-- 
Regards,

Sakari Ailus
e-mail: sakari.ai...@iki.fi XMPP: sai...@retiisi.org.uk


Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Mauro Carvalho Chehab
Em Sat, 11 Mar 2017 12:32:43 +0100
Hans Verkuil  escreveu:

> On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> > Em Fri, 10 Mar 2017 13:54:28 +0100
> > Hans Verkuil  escreveu:
> >   
> >>> Devices that have complex pipeline that do essentially require using the
> >>> Media controller interface to configure them are out of that scope.
> >>> 
> >>
> >> Way too much of how the MC devices should be used is in the minds of 
> >> developers.
> >> There is a major lack for good detailed documentation, utilities, 
> >> compliance
> >> test (really needed!) and libv4l plugins.  
> > 
> > Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> > all the problems with MC-based drivers and V4L2 applications by the time
> > it was developed, and we requested Nokia developers (with was sponsoring MC
> > develoment, on that time) to work on a solution to allow standard V4L2
> > applications to work with MC based boards.
> > 
> > Unfortunately, we took the decision to merge MC without that, because 
> > Nokia was giving up on Linux development, and we didn't want to lose the
> > 2 years of discussions and work around it, as Nokia employers were leaving
> > the company. Also, on that time, there was already some patches floating
> > around adding backward support via libv4l. Unfortunately, those patches
> > were never finished.
> > 
> > The net result is that MC was merged with some huge gaps, including
> > the lack of a proper solution for a generic V4L2 program to work
> > with V4L2 devices that use the subdev API.
> > 
> > That was not that bad by then, as MC was used only on cell phones
> > that run custom-made applications. 
> > 
> > The reallity changed, as now, we have lots of low cost SoC based
> > boards, used for all sort of purposes. So, we need a quick solution
> > for it.
> > 
> > In other words, while that would be acceptable support special apps
> > on really embedded systems, it is *not OK* for general purpose SoC
> > harware[1].
> > 
> > [1] I'm calling "general purpose SoC harware" those ARM boards
> > like Raspberry Pi that are shipped to the mass and used by a wide
> > range of hobbyists and other people that just wants to run Linux on
> > ARM. It is possible to buy such boards for a very cheap price,
> > making them to be used not only on special projects, where a custom
> > made application could be interesting, but also for a lot of
> > users that just want to run Linux on a low cost ARM board, while
> > keeping using standard V4L2 apps, like "camorama".
> > 
> > That's perhaps one of the reasons why it took a long time for us to
> > start receiving drivers upstream for such hardware: it is quite 
> > intimidating and not logical to require developers to implement
> > on their drivers 2 complex APIs (MC, subdev) for those
> > hardware that most users won't care. From user's perspective,
> > being able to support generic applications like "camorama" and
> > "zbar" is all they want.
> > 
> > In summary, I'm pretty sure we need to support standard V4L2 
> > applications on boards like Raspberry Pi and those low-cost 
> > SoC-based boards that are shipped to end users.
> >   
> >> Anyway, regarding this specific patch and for this MC-aware driver: no, you
> >> shouldn't inherit controls from subdevs. It defeats the purpose.  
> > 
> > Sorry, but I don't agree with that. The subdev API is an optional API
> > (and even the MC API can be optional).
> > 
> > I see the rationale for using MC and subdev APIs on cell phones,
> > ISV and other embedded hardware, as it will allow fine-tuning
> > the driver's support to allow providing the required quality for
> > certain custom-made applications. but on general SoC hardware,
> > supporting standard V4L2 applications is a need.
> > 
> > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > time doesn't make much sense. We could disable one in favor of the
> > other, either at compilation time or at runtime.  
> 
> Right. If the subdev API is disabled, then you have to inherit the subdev
> controls in the bridge driver (how else would you be able to access them?).
> And that's the usual case.
> 
> If you do have the subdev API enabled, AND you use the MC, then the
> intention clearly is to give userspace full control and inheriting controls
> no longer makes any sense (and is highly confusing IMHO).

I tend to agree with that.

> > 
> > This way, if the subdev API is disabled, the driver will be
> > functional for V4L2-based applications that don't support neither
> > MC or subdev APIs.  
> 
> I'm not sure if it makes sense for the i.MX driver to behave differently
> depending on whether the subdev API is enabled or disabled. I don't know
> enough of the hardware to tell if it would ever make sense to disable the
> subdev API.

Yeah, I don't know enough about it either. The point is: this is
something that the driver maintainer and driver users should
decide if it either makes sense or 

Re: [PATCH v2 1/2] libv4lconvert: make it clear about the criteria for needs_conversion

2017-03-11 Thread Sakari Ailus
On Sat, Mar 11, 2017 at 06:31:47AM -0300, Mauro Carvalho Chehab wrote:
> While there is already a comment about the always_needs_conversion
> logic at libv4lconvert, the comment is not clear enough. Also,
> the decision of needing a conversion or not is actually at the
> supported_src_pixfmts[] table.
> 
> Improve the comments to make it clearer about what criteria should be
> used with regards to exposing formats to userspace.
> 
> Suggested-by: Sakari Ailus 
> Signed-off-by: Mauro Carvalho Chehab 

Acked-by: Sakari Ailus 

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi XMPP: sai...@retiisi.org.uk


Re: [PATCH v5 15/39] [media] v4l2: add a frame interval error event

2017-03-11 Thread Hans Verkuil
On 10/03/17 19:37, Steve Longerbeam wrote:
> Hi Hans,
> 
> On 03/10/2017 04:03 AM, Hans Verkuil wrote:
>> On 10/03/17 05:52, Steve Longerbeam wrote:
>>> Add a new FRAME_INTERVAL_ERROR event to signal that a video capture or
>>> output device has measured an interval between the reception or transmit
>>> completion of two consecutive frames of video that is outside the nominal
>>> frame interval by some tolerance value.
>>
>> Reading back what was said on this I agree with Sakari that this doesn't
>> belong here.
>>
>> Userspace can detect this just as easily (if not easier) with a timeout.
>>
> 
> 
> Unfortunately measuring frame intervals from userland is not accurate
> enough for i.MX6.
> 
> The issue here is that the IPUv3, specifically the CSI unit, can
> permanently lose vertical sync if there are truncated frames sent
> on the bt.656 bus. We have seen a single missing line of video cause
> loss of vertical sync. The only way to correct this is to shutdown
> the IPU capture hardware and restart, which can be accomplished
> simply by restarting streaming from userland.
> 
> There are no other indicators from the sensor about these short
> frame events (believe me, we've exhausted all avenues with the ADV718x).
> And the IPUv3 DMA engine has no status indicators for short frames
> either. So the only way to detect them is by measuring frame intervals.
> 
> The intervals have to be able to resolve a single line of missing video.
> With a PAL video source that requires better than 58 usec accuracy.
> 
> There is too much uncertainty to resolve this at user level. The
> driver is able to resolve this by measuring intervals between hardware
> interrupts as long as interrupt latency is reasonably low, and we
> have another method using the i.MX6 hardware input capture support
> that can measure these intervals very accurately with no errors
> introduced by interrupt latency.
> 
> I made this event a private event to imx-media driver in a previous
> iteration, so I can return it to a private event, but this can't be
> done at user level.

It's fine to use an internal event as long as the end-user doesn't
see it. But if you lose vsyncs, then you never capture another frame,
right? So userspace can detect that (i.e. no new frames arrive) and
it can timeout on that. Or you detect it in the driver and restart there,
or call vb2_queue_error().

Anything really as long as this event isn't user-visible :-)

Regards,

Hans

> 
> Steve
> 
> 
>>
>>>
>>> Signed-off-by: Steve Longerbeam 
>>> ---
>>>  Documentation/media/uapi/v4l/vidioc-dqevent.rst | 6 ++
>>>  Documentation/media/videodev2.h.rst.exceptions  | 1 +
>>>  include/uapi/linux/videodev2.h  | 1 +
>>>  3 files changed, 8 insertions(+)
>>>
>>> diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst 
>>> b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>> index 8d663a7..dc77363 100644
>>> --- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>> +++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>> @@ -197,6 +197,12 @@ call.
>>>  the regions changes. This event has a struct
>>>  :c:type:`v4l2_event_motion_det`
>>>  associated with it.
>>> +* - ``V4L2_EVENT_FRAME_INTERVAL_ERROR``
>>> +  - 7
>>> +  - This event is triggered when the video capture or output device
>>> +has measured an interval between the reception or transmit
>>> +completion of two consecutive frames of video that is outside
>>> +the nominal frame interval by some tolerance value.
>>>  * - ``V4L2_EVENT_PRIVATE_START``
>>>- 0x0800
>>>- Base event number for driver-private events.
>>> diff --git a/Documentation/media/videodev2.h.rst.exceptions 
>>> b/Documentation/media/videodev2.h.rst.exceptions
>>> index e11a0d0..c7d8fad 100644
>>> --- a/Documentation/media/videodev2.h.rst.exceptions
>>> +++ b/Documentation/media/videodev2.h.rst.exceptions
>>> @@ -459,6 +459,7 @@ replace define V4L2_EVENT_CTRL event-type
>>>  replace define V4L2_EVENT_FRAME_SYNC event-type
>>>  replace define V4L2_EVENT_SOURCE_CHANGE event-type
>>>  replace define V4L2_EVENT_MOTION_DET event-type
>>> +replace define V4L2_EVENT_FRAME_INTERVAL_ERROR event-type
>>>  replace define V4L2_EVENT_PRIVATE_START event-type
>>>
>>>  replace define V4L2_EVENT_CTRL_CH_VALUE ctrl-changes-flags
>>> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
>>> index 45184a2..cf5a0d0 100644
>>> --- a/include/uapi/linux/videodev2.h
>>> +++ b/include/uapi/linux/videodev2.h
>>> @@ -2131,6 +2131,7 @@ struct v4l2_streamparm {
>>>  #define V4L2_EVENT_FRAME_SYNC4
>>>  #define V4L2_EVENT_SOURCE_CHANGE5
>>>  #define V4L2_EVENT_MOTION_DET6
>>> +#define V4L2_EVENT_FRAME_INTERVAL_ERROR7
>>>  #define V4L2_EVENT_PRIVATE_START0x0800
>>>
>>>  /* Payload for V4L2_EVENT_VSYNC */
>>>
>>
> 



Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Hans Verkuil
On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> Em Fri, 10 Mar 2017 13:54:28 +0100
> Hans Verkuil  escreveu:
> 
>>> Devices that have complex pipeline that do essentially require using the
>>> Media controller interface to configure them are out of that scope.
>>>   
>>
>> Way too much of how the MC devices should be used is in the minds of 
>> developers.
>> There is a major lack for good detailed documentation, utilities, compliance
>> test (really needed!) and libv4l plugins.
> 
> Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> all the problems with MC-based drivers and V4L2 applications by the time
> it was developed, and we requested Nokia developers (with was sponsoring MC
> develoment, on that time) to work on a solution to allow standard V4L2
> applications to work with MC based boards.
> 
> Unfortunately, we took the decision to merge MC without that, because 
> Nokia was giving up on Linux development, and we didn't want to lose the
> 2 years of discussions and work around it, as Nokia employers were leaving
> the company. Also, on that time, there was already some patches floating
> around adding backward support via libv4l. Unfortunately, those patches
> were never finished.
> 
> The net result is that MC was merged with some huge gaps, including
> the lack of a proper solution for a generic V4L2 program to work
> with V4L2 devices that use the subdev API.
> 
> That was not that bad by then, as MC was used only on cell phones
> that run custom-made applications. 
> 
> The reallity changed, as now, we have lots of low cost SoC based
> boards, used for all sort of purposes. So, we need a quick solution
> for it.
> 
> In other words, while that would be acceptable support special apps
> on really embedded systems, it is *not OK* for general purpose SoC
> harware[1].
> 
> [1] I'm calling "general purpose SoC harware" those ARM boards
> like Raspberry Pi that are shipped to the mass and used by a wide
> range of hobbyists and other people that just wants to run Linux on
> ARM. It is possible to buy such boards for a very cheap price,
> making them to be used not only on special projects, where a custom
> made application could be interesting, but also for a lot of
> users that just want to run Linux on a low cost ARM board, while
> keeping using standard V4L2 apps, like "camorama".
> 
> That's perhaps one of the reasons why it took a long time for us to
> start receiving drivers upstream for such hardware: it is quite 
> intimidating and not logical to require developers to implement
> on their drivers 2 complex APIs (MC, subdev) for those
> hardware that most users won't care. From user's perspective,
> being able to support generic applications like "camorama" and
> "zbar" is all they want.
> 
> In summary, I'm pretty sure we need to support standard V4L2 
> applications on boards like Raspberry Pi and those low-cost 
> SoC-based boards that are shipped to end users.
> 
>> Anyway, regarding this specific patch and for this MC-aware driver: no, you
>> shouldn't inherit controls from subdevs. It defeats the purpose.
> 
> Sorry, but I don't agree with that. The subdev API is an optional API
> (and even the MC API can be optional).
> 
> I see the rationale for using MC and subdev APIs on cell phones,
> ISV and other embedded hardware, as it will allow fine-tuning
> the driver's support to allow providing the required quality for
> certain custom-made applications. but on general SoC hardware,
> supporting standard V4L2 applications is a need.
> 
> Ok, perhaps supporting both subdev API and V4L2 API at the same
> time doesn't make much sense. We could disable one in favor of the
> other, either at compilation time or at runtime.

Right. If the subdev API is disabled, then you have to inherit the subdev
controls in the bridge driver (how else would you be able to access them?).
And that's the usual case.

If you do have the subdev API enabled, AND you use the MC, then the
intention clearly is to give userspace full control and inheriting controls
no longer makes any sense (and is highly confusing IMHO).

> 
> This way, if the subdev API is disabled, the driver will be
> functional for V4L2-based applications that don't support neither
> MC or subdev APIs.

I'm not sure if it makes sense for the i.MX driver to behave differently
depending on whether the subdev API is enabled or disabled. I don't know
enough of the hardware to tell if it would ever make sense to disable the
subdev API.

Regards,

Hans

> 
>> As mentioned, I will attempt to try and get some time to work on this
>> later this year. Fingers crossed.
> 
> That will be good, and, once we have a solution that works, we can
> work on cleanup the code, but, until then, drivers for arm-based boards
> sold to end consumers should work out of the box with standard V4L2 apps.
> 
> While we don't have that, I'm OK to merge patches adding such support
> upstream.
> 
> Thanks,
> Mauro
> 



Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

2017-03-11 Thread Mauro Carvalho Chehab
Em Sat, 11 Mar 2017 00:37:14 +0200
Sakari Ailus  escreveu:

> Hi Mauro (and others),
> 
> On Fri, Mar 10, 2017 at 12:53:42PM -0300, Mauro Carvalho Chehab wrote:
> > Em Fri, 10 Mar 2017 15:20:48 +0100
> > Hans Verkuil  escreveu:
> >   
> > >   
> > > > As I've already mentioned, from talking about this with Mauro, it seems
> > > > Mauro is in agreement with permitting the control inheritence... I wish
> > > > Mauro would comment for himself, as I can't quote our private discussion
> > > > on the subject.
> > > 
> > > I can't comment either, not having seen his mail and reasoning.  
> > 
> > The rationale is that we should support the simplest use cases first.
> > 
> > In the case of the first MC-based driver (and several subsequent
> > ones), the simplest use case required MC, as it was meant to suport
> > a custom-made sophisticated application that required fine control
> > on each component of the pipeline and to allow their advanced
> > proprietary AAA userspace-based algorithms to work.  
> 
> The first MC based driver (omap3isp) supports what the hardware can do, it
> does not support applications as such.

All media drivers support a subset of what the hardware can do. The
question is if such subset covers the use cases or not.

The current MC-based drivers (except for uvc) took a patch to offer a
more advanced API, to allow direct control to each IP module, as it was
said, by the time we merged the OMAP3 driver, that, for the N9/N900 camera
to work, it was mandatory to access the pipeline's individual components.

Such approach require that some userspace software will have knowledge
about some hardware details, in order to setup pipelines and send controls
to the right components. That makes really hard to have a generic user
friendly application to use such devices.

Non-MC based drivers control the hardware via a portable interface with
doesn't require any knowledge about the hardware specifics, as either the
Kernel or some firmware at the device will set any needed pipelines.

In the case of V4L2 controls, when there's no subdev API, the main
driver (e. g. the driver that creates the /dev/video nodes) sends a
multicast message to all bound I2C drivers. The driver(s) that need 
them handle it. When the same control may be implemented on different
drivers, the main driver sends a unicast message to just one driver[1].

[1] There are several non-MC drivers that have multiple ways to
control some things, like doing scaling or adjust volume levels at
either the bridge driver or at a subdriver.

There's nothing wrong with this approach: it works, it is simpler,
it is generic. So, if it covers most use cases, why not allowing it
for usecases where a finer control is not a requirement?

> Adding support to drivers for different "operation modes" --- this is
> essentially what is being asked for --- is not an approach which could serve
> either purpose (some functionality with simple interface vs. fully support
> what the hardware can do, with interfaces allowing that) adequately in the
> short or the long run.

Why not?

> If we are missing pieces in the puzzle --- in this case the missing pieces
> in the puzzle are a generic pipeline configuration library and another
> library that, with the help of pipeline autoconfiguration would implement
> "best effort" service for regular V4L2 on top of the MC + V4L2 subdev + V4L2
> --- then these pieces need to be impelemented. The solution is
> *not* to attempt to support different types of applications in each driver
> separately. That will make writing drivers painful, error prone and is
> unlikely ever deliver what either purpose requires.
> 
> So let's continue to implement the functionality that the hardware supports.
> Making a different choice here is bound to create a lasting conflict between
> having to change kernel interface behaviour and the requirement of
> supporting new functionality that hasn't been previously thought of, pushing
> away SoC vendors from V4L2 ecosystem. This is what we all do want to avoid.

This situation is there since 2009. If I remember well, you tried to write
such generic plugin in the past, but never finished it, apparently because
it is too complex. Others tried too over the years. 

The last trial was done by Jacek, trying to cover just the exynos4 driver. 
Yet, even such limited scope plugin was not good enough, as it was never
merged upstream. Currently, there's no such plugins upstream.

If we can't even merge a plugin that solves it for just *one* driver,
I have no hope that we'll be able to do it for the generic case.

That's why I'm saying that I'm OK on merging any patch that would allow
setting controls via the /dev/video interface on MC-based drivers when
compiled without subdev API. I may also consider merging patches allowing
to change the behavior on runtime, when compiled with subdev API.

> As far as i.MX6 driver goes, it is always possible to implement i.MX6 

[PATCHv5 14/16] em28xx: drop last soc_camera link

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

The em28xx driver still used the soc_camera.h header for the ov2640
driver. Since this driver no longer uses soc_camera, that include can
be removed.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/usb/em28xx/em28xx-camera.c | 9 -
 1 file changed, 9 deletions(-)

diff --git a/drivers/media/usb/em28xx/em28xx-camera.c 
b/drivers/media/usb/em28xx/em28xx-camera.c
index 89c890ba7dd6..63aaa577a742 100644
--- a/drivers/media/usb/em28xx/em28xx-camera.c
+++ b/drivers/media/usb/em28xx/em28xx-camera.c
@@ -23,7 +23,6 @@
 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -43,13 +42,6 @@ static unsigned short omnivision_sensor_addrs[] = {
I2C_CLIENT_END
 };
 
-static struct soc_camera_link camlink = {
-   .bus_id = 0,
-   .flags = 0,
-   .module_name = "em28xx",
-   .unbalanced_power = true,
-};
-
 /* FIXME: Should be replaced by a proper mt9m111 driver */
 static int em28xx_initialize_mt9m111(struct em28xx *dev)
 {
@@ -421,7 +413,6 @@ int em28xx_init_camera(struct em28xx *dev)
.type = "ov2640",
.flags = I2C_CLIENT_SCCB,
.addr = client->addr,
-   .platform_data = ,
};
struct v4l2_subdev_format format = {
.which = V4L2_SUBDEV_FORMAT_ACTIVE,
-- 
2.11.0



[PATCHv5 15/16] sama5d3 dts: enable atmel-isi

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

This illustrates the changes needed to the dts in order to hook up the
ov7670. I don't plan on merging this.

Signed-off-by: Hans Verkuil 
---
 arch/arm/boot/dts/at91-sama5d3_xplained.dts | 59 ++---
 arch/arm/boot/dts/sama5d3.dtsi  |  4 +-
 2 files changed, 56 insertions(+), 7 deletions(-)

diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts 
b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
index c51fc652f6c7..c6b07f83578b 100644
--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
@@ -65,18 +65,51 @@
status = "okay";
};
 
+   isi0: isi@f0034000 {
+   status = "okay";
+   port {
+   #address-cells = <1>;
+   #size-cells = <0>;
+   isi_0: endpoint {
+   remote-endpoint = <_0>;
+   bus-width = <8>;
+   vsync-active = <1>;
+   hsync-active = <1>;
+   };
+   };
+   };
+
i2c0: i2c@f0014000 {
pinctrl-0 = <_i2c0_pu>;
-   status = "okay";
+   status = "disabled";
};
 
i2c1: i2c@f0018000 {
status = "okay";
 
+   ov7670: camera@21 {
+   compatible = "ovti,ov7670";
+   reg = <0x21>;
+   pinctrl-names = "default";
+   pinctrl-0 = <_pck0_as_isi_mck 
_sensor_power _sensor_reset>;
+   reset-gpios = < 11 
GPIO_ACTIVE_LOW>;
+   powerdown-gpios = < 13 
GPIO_ACTIVE_HIGH>;
+   clocks = <>;
+   clock-names = "xclk";
+   assigned-clocks = <>;
+   assigned-clock-rates = <2500>;
+
+   port {
+   ov7670_0: endpoint {
+   remote-endpoint = 
<_0>;
+   };
+   };
+   };
+
pmic: act8865@5b {
compatible = "active-semi,act8865";
reg = <0x5b>;
-   status = "disabled";
+   status = "okay";
 
regulators {
vcc_1v8_reg: DCDC_REG1 {
@@ -130,7 +163,7 @@
pwm0: pwm@f002c000 {
pinctrl-names = "default";
pinctrl-0 = <_pwm0_pwmh0_0 
_pwm0_pwmh1_0>;
-   status = "okay";
+   status = "disabled";
};
 
usart0: serial@f001c000 {
@@ -143,7 +176,7 @@
};
 
uart0: serial@f0024000 {
-   status = "okay";
+   status = "disabled";
};
 
mmc1: mmc@f800 {
@@ -181,7 +214,7 @@
i2c2: i2c@f801c000 {
dmas = <0>, <0>;/* Do not use DMA for 
i2c2 */
pinctrl-0 = <_i2c2_pu>;
-   status = "okay";
+   status = "disabled";
};
 
macb1: ethernet@f802c000 {
@@ -200,6 +233,22 @@
};
 
pinctrl@f200 {
+   camera_sensor {
+   pinctrl_pck0_as_isi_mck: 
pck0_as_isi_mck-0 {
+   atmel,pins =
+   ; /* ISI_MCK */
+   };
+
+   pinctrl_sensor_power: sensor_power-0 {
+   atmel,pins =
+   ;
+   };
+
+  

[PATCHv5 12/16] ov2640: use standard clk and enable it.

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Convert v4l2_clk to normal clk and enable the clock.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/i2c/ov2640.c | 31 ++-
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/media/i2c/ov2640.c b/drivers/media/i2c/ov2640.c
index 83f88efbce69..0445963c5fae 100644
--- a/drivers/media/i2c/ov2640.c
+++ b/drivers/media/i2c/ov2640.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -24,7 +25,6 @@
 #include 
 #include 
 
-#include 
 #include 
 #include 
 #include 
@@ -284,7 +284,7 @@ struct ov2640_priv {
struct v4l2_subdev  subdev;
struct v4l2_ctrl_handlerhdl;
u32 cfmt_code;
-   struct v4l2_clk *clk;
+   struct clk  *clk;
const struct ov2640_win_size*win;
 
struct gpio_desc *resetb_gpio;
@@ -1051,14 +1051,11 @@ static int ov2640_probe(struct i2c_client *client,
return -ENOMEM;
}
 
-   priv->clk = v4l2_clk_get(>dev, "xvclk");
-   if (IS_ERR(priv->clk))
-   return -EPROBE_DEFER;
-
-   if (!client->dev.of_node) {
-   dev_err(>dev, "Missing platform_data for driver\n");
-   ret = -EINVAL;
-   goto err_clk;
+   if (client->dev.of_node) {
+   priv->clk = devm_clk_get(>dev, "xvclk");
+   if (IS_ERR(priv->clk))
+   return -EPROBE_DEFER;
+   clk_prepare_enable(priv->clk);
}
 
ret = ov2640_probe_dt(client, priv);
@@ -1074,25 +1071,25 @@ static int ov2640_probe(struct i2c_client *client,
priv->subdev.ctrl_handler = >hdl;
if (priv->hdl.error) {
ret = priv->hdl.error;
-   goto err_clk;
+   goto err_hdl;
}
 
ret = ov2640_video_probe(client);
if (ret < 0)
-   goto err_videoprobe;
+   goto err_hdl;
 
ret = v4l2_async_register_subdev(>subdev);
if (ret < 0)
-   goto err_videoprobe;
+   goto err_hdl;
 
dev_info(>dev, "OV2640 Probed\n");
 
return 0;
 
-err_videoprobe:
+err_hdl:
v4l2_ctrl_handler_free(>hdl);
 err_clk:
-   v4l2_clk_put(priv->clk);
+   clk_disable_unprepare(priv->clk);
return ret;
 }
 
@@ -1101,9 +1098,9 @@ static int ov2640_remove(struct i2c_client *client)
struct ov2640_priv   *priv = to_ov2640(client);
 
v4l2_async_unregister_subdev(>subdev);
-   v4l2_clk_put(priv->clk);
-   v4l2_device_unregister_subdev(>subdev);
v4l2_ctrl_handler_free(>hdl);
+   v4l2_device_unregister_subdev(>subdev);
+   clk_disable_unprepare(priv->clk);
return 0;
 }
 
-- 
2.11.0



[PATCHv5 16/16] at91-sama5d3_xplained.dts: select ov2640

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

This patch replaces the ov7670 with the ov2640. This patch is not
meant to be merged but is for demonstration purposes only.

Signed-off-by: Hans Verkuil 
---
 arch/arm/boot/dts/at91-sama5d3_xplained.dts | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts 
b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
index c6b07f83578b..6951bd6a11d7 100644
--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
@@ -71,7 +71,7 @@
#address-cells = <1>;
#size-cells = <0>;
isi_0: endpoint {
-   remote-endpoint = <_0>;
+   remote-endpoint = <_0>;
bus-width = <8>;
vsync-active = <1>;
hsync-active = <1>;
@@ -87,20 +87,20 @@
i2c1: i2c@f0018000 {
status = "okay";
 
-   ov7670: camera@21 {
-   compatible = "ovti,ov7670";
-   reg = <0x21>;
+   ov2640: camera@30 {
+   compatible = "ovti,ov2640";
+   reg = <0x30>;
pinctrl-names = "default";
pinctrl-0 = <_pck0_as_isi_mck 
_sensor_power _sensor_reset>;
-   reset-gpios = < 11 
GPIO_ACTIVE_LOW>;
-   powerdown-gpios = < 13 
GPIO_ACTIVE_HIGH>;
+   resetb-gpios = < 11 
GPIO_ACTIVE_LOW>;
+   pwdn-gpios = < 13 
GPIO_ACTIVE_HIGH>;
clocks = <>;
-   clock-names = "xclk";
+   clock-names = "xvclk";
assigned-clocks = <>;
assigned-clock-rates = <2500>;
 
port {
-   ov7670_0: endpoint {
+   ov2640_0: endpoint {
remote-endpoint = 
<_0>;
};
};
-- 
2.11.0



[PATCHv5 05/16] ov7670: add devicetree support

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Add DT support. Use it to get the reset and pwdn pins (if there are any).
Tested with one sensor requiring reset/pwdn and one sensor that doesn't
have reset/pwdn pins.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/i2c/ov7670.c | 40 ++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
index 912ff09c6100..7270c68ed18a 100644
--- a/drivers/media/i2c/ov7670.c
+++ b/drivers/media/i2c/ov7670.c
@@ -17,6 +17,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 #include 
 #include 
 #include 
@@ -229,6 +231,8 @@ struct ov7670_info {
};
struct ov7670_format_struct *fmt;  /* Current format */
struct clk *clk;
+   struct gpio_desc *resetb_gpio;
+   struct gpio_desc *pwdn_gpio;
int min_width;  /* Filter out smaller sizes */
int min_height; /* Filter out smaller sizes */
int clock_speed;/* External clock speed (MHz) */
@@ -591,8 +595,6 @@ static int ov7670_init(struct v4l2_subdev *sd, u32 val)
return ov7670_write_array(sd, ov7670_default_regs);
 }
 
-
-
 static int ov7670_detect(struct v4l2_subdev *sd)
 {
unsigned char v;
@@ -1549,6 +1551,27 @@ static const struct ov7670_devtype ov7670_devdata[] = {
},
 };
 
+static int ov7670_init_gpio(struct i2c_client *client, struct ov7670_info 
*info)
+{
+   info->pwdn_gpio = devm_gpiod_get_optional(>dev, "powerdown",
+   GPIOD_OUT_LOW);
+   if (IS_ERR(info->pwdn_gpio)) {
+   dev_info(>dev, "can't get %s GPIO\n", "powerdown");
+   return PTR_ERR(info->pwdn_gpio);
+   }
+
+   info->resetb_gpio = devm_gpiod_get_optional(>dev, "reset",
+   GPIOD_OUT_LOW);
+   if (IS_ERR(info->resetb_gpio)) {
+   dev_info(>dev, "can't get %s GPIO\n", "reset");
+   return PTR_ERR(info->resetb_gpio);
+   }
+
+   usleep_range(3000, 5000);
+
+   return 0;
+}
+
 static int ov7670_probe(struct i2c_client *client,
const struct i2c_device_id *id)
 {
@@ -1594,6 +1617,10 @@ static int ov7670_probe(struct i2c_client *client,
return -EPROBE_DEFER;
clk_prepare_enable(info->clk);
 
+   ret = ov7670_init_gpio(client, info);
+   if (ret)
+   goto clk_disable;
+
info->clock_speed = clk_get_rate(info->clk) / 100;
if (info->clock_speed < 10 || info->clock_speed > 48) {
ret = -EINVAL;
@@ -1693,9 +1720,18 @@ static const struct i2c_device_id ov7670_id[] = {
 };
 MODULE_DEVICE_TABLE(i2c, ov7670_id);
 
+#if IS_ENABLED(CONFIG_OF)
+static const struct of_device_id ov7670_of_match[] = {
+   { .compatible = "ovti,ov7670", },
+   { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, ov7670_of_match);
+#endif
+
 static struct i2c_driver ov7670_driver = {
.driver = {
.name   = "ov7670",
+   .of_match_table = of_match_ptr(ov7670_of_match),
},
.probe  = ov7670_probe,
.remove = ov7670_remove,
-- 
2.11.0



[PATCHv5 10/16] ov2640: update bindings

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Update the bindings for this device based on a working DT example.

Signed-off-by: Hans Verkuil 
Acked-by: Rob Herring 
Acked-by: Sakari Ailus 
---
 .../devicetree/bindings/media/i2c/ov2640.txt   | 23 +-
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/Documentation/devicetree/bindings/media/i2c/ov2640.txt 
b/Documentation/devicetree/bindings/media/i2c/ov2640.txt
index c429b5bdcaa0..989ce6cb6ac3 100644
--- a/Documentation/devicetree/bindings/media/i2c/ov2640.txt
+++ b/Documentation/devicetree/bindings/media/i2c/ov2640.txt
@@ -1,8 +1,8 @@
 * Omnivision OV2640 CMOS sensor
 
-The Omnivision OV2640 sensor support multiple resolutions output, such as
-CIF, SVGA, UXGA. It also can support YUV422/420, RGB565/555 or raw RGB
-output format.
+The Omnivision OV2640 sensor supports multiple resolutions output, such as
+CIF, SVGA, UXGA. It also can support the YUV422/420, RGB565/555 or raw RGB
+output formats.
 
 Required Properties:
 - compatible: should be "ovti,ov2640"
@@ -20,26 +20,21 @@ 
Documentation/devicetree/bindings/media/video-interfaces.txt.
 Example:
 
i2c1: i2c@f0018000 {
-   ov2640: camera@0x30 {
+   ov2640: camera@30 {
compatible = "ovti,ov2640";
reg = <0x30>;
-
pinctrl-names = "default";
-   pinctrl-0 = <_pck1 _ov2640_pwdn 
_ov2640_resetb>;
-
-   resetb-gpios = < 24 GPIO_ACTIVE_LOW>;
-   pwdn-gpios = < 29 GPIO_ACTIVE_HIGH>;
-
-   clocks = <>;
+   pinctrl-0 = <_pck0_as_isi_mck 
_sensor_power _sensor_reset>;
+   resetb-gpios = < 11 GPIO_ACTIVE_LOW>;
+   pwdn-gpios = < 13 GPIO_ACTIVE_HIGH>;
+   clocks = <>;
clock-names = "xvclk";
-
-   assigned-clocks = <>;
+   assigned-clocks = <>;
assigned-clock-rates = <2500>;
 
port {
ov2640_0: endpoint {
remote-endpoint = <_0>;
-   bus-width = <8>;
};
};
};
-- 
2.11.0



[PATCHv5 11/16] ov2640: convert from soc-camera to a standard subdev sensor driver.

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Convert ov2640 to a standard subdev driver. The soc-camera driver no longer
uses this driver, so it can safely be converted.

Note: the s_power op has been dropped: this never worked. When the last open()
is closed, then the power is turned off, and when it is opened again the power
is turned on again, but the old state isn't restored.

Someone else can figure out in the future how to get this working correctly,
but I don't want to spend more time on this.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/i2c/Kconfig   | 11 
 drivers/media/i2c/Makefile  |  1 +
 drivers/media/i2c/{soc_camera => }/ov2640.c | 89 +
 drivers/media/i2c/soc_camera/Kconfig|  6 --
 drivers/media/i2c/soc_camera/Makefile   |  1 -
 5 files changed, 27 insertions(+), 81 deletions(-)
 rename drivers/media/i2c/{soc_camera => }/ov2640.c (94%)

diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
index cee1dae6e014..db2c63f592c5 100644
--- a/drivers/media/i2c/Kconfig
+++ b/drivers/media/i2c/Kconfig
@@ -520,6 +520,17 @@ config VIDEO_APTINA_PLL
 config VIDEO_SMIAPP_PLL
tristate
 
+config VIDEO_OV2640
+   tristate "OmniVision OV2640 sensor support"
+   depends on VIDEO_V4L2 && I2C && GPIOLIB
+   depends on MEDIA_CAMERA_SUPPORT
+   help
+ This is a Video4Linux2 sensor-level driver for the OmniVision
+ OV2640 camera.
+
+ To compile this driver as a module, choose M here: the
+ module will be called ov2640.
+
 config VIDEO_OV2659
tristate "OmniVision OV2659 sensor support"
depends on VIDEO_V4L2 && I2C
diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile
index 5bc7bbeb5499..50af1e11c85a 100644
--- a/drivers/media/i2c/Makefile
+++ b/drivers/media/i2c/Makefile
@@ -57,6 +57,7 @@ obj-$(CONFIG_VIDEO_VP27SMPX) += vp27smpx.o
 obj-$(CONFIG_VIDEO_SONY_BTF_MPX) += sony-btf-mpx.o
 obj-$(CONFIG_VIDEO_UPD64031A) += upd64031a.o
 obj-$(CONFIG_VIDEO_UPD64083) += upd64083.o
+obj-$(CONFIG_VIDEO_OV2640) += ov2640.o
 obj-$(CONFIG_VIDEO_OV7640) += ov7640.o
 obj-$(CONFIG_VIDEO_OV7670) += ov7670.o
 obj-$(CONFIG_VIDEO_OV9650) += ov9650.o
diff --git a/drivers/media/i2c/soc_camera/ov2640.c b/drivers/media/i2c/ov2640.c
similarity index 94%
rename from drivers/media/i2c/soc_camera/ov2640.c
rename to drivers/media/i2c/ov2640.c
index b9a0069f5b33..83f88efbce69 100644
--- a/drivers/media/i2c/soc_camera/ov2640.c
+++ b/drivers/media/i2c/ov2640.c
@@ -24,8 +24,8 @@
 #include 
 #include 
 
-#include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -287,7 +287,6 @@ struct ov2640_priv {
struct v4l2_clk *clk;
const struct ov2640_win_size*win;
 
-   struct soc_camera_subdev_desc   ssdd_dt;
struct gpio_desc *resetb_gpio;
struct gpio_desc *pwdn_gpio;
 };
@@ -677,13 +676,8 @@ static int ov2640_reset(struct i2c_client *client)
 }
 
 /*
- * soc_camera_ops functions
+ * functions
  */
-static int ov2640_s_stream(struct v4l2_subdev *sd, int enable)
-{
-   return 0;
-}
-
 static int ov2640_s_ctrl(struct v4l2_ctrl *ctrl)
 {
struct v4l2_subdev *sd =
@@ -744,10 +738,16 @@ static int ov2640_s_register(struct v4l2_subdev *sd,
 static int ov2640_s_power(struct v4l2_subdev *sd, int on)
 {
struct i2c_client *client = v4l2_get_subdevdata(sd);
-   struct soc_camera_subdev_desc *ssdd = soc_camera_i2c_to_desc(client);
struct ov2640_priv *priv = to_ov2640(client);
 
-   return soc_camera_set_power(>dev, ssdd, priv->clk, on);
+   gpiod_direction_output(priv->pwdn_gpio, !on);
+   if (on && priv->resetb_gpio) {
+   /* Active the resetb pin to perform a reset pulse */
+   gpiod_direction_output(priv->resetb_gpio, 1);
+   usleep_range(3000, 5000);
+   gpiod_direction_output(priv->resetb_gpio, 0);
+   }
+   return 0;
 }
 
 /* Select the nearest higher resolution for capture */
@@ -994,26 +994,6 @@ static struct v4l2_subdev_core_ops ov2640_subdev_core_ops 
= {
.s_power= ov2640_s_power,
 };
 
-static int ov2640_g_mbus_config(struct v4l2_subdev *sd,
-   struct v4l2_mbus_config *cfg)
-{
-   struct i2c_client *client = v4l2_get_subdevdata(sd);
-   struct soc_camera_subdev_desc *ssdd = soc_camera_i2c_to_desc(client);
-
-   cfg->flags = V4L2_MBUS_PCLK_SAMPLE_RISING | V4L2_MBUS_MASTER |
-   V4L2_MBUS_VSYNC_ACTIVE_HIGH | V4L2_MBUS_HSYNC_ACTIVE_HIGH |
-   V4L2_MBUS_DATA_ACTIVE_HIGH;
-   cfg->type = V4L2_MBUS_PARALLEL;
-   cfg->flags = soc_camera_apply_board_flags(ssdd, cfg);
-
-   return 0;
-}
-
-static struct v4l2_subdev_video_ops ov2640_subdev_video_ops = {
-   .s_stream   = ov2640_s_stream,
-   .g_mbus_config  = ov2640_g_mbus_config,
-};
-
 static const 

[PATCHv5 13/16] ov2640: add MC support

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

The MC support is needed by the em28xx driver.

Signed-off-by: Hans Verkuil 
---
 drivers/media/i2c/ov2640.c | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/media/i2c/ov2640.c b/drivers/media/i2c/ov2640.c
index 0445963c5fae..d1f04a4ca2ac 100644
--- a/drivers/media/i2c/ov2640.c
+++ b/drivers/media/i2c/ov2640.c
@@ -282,6 +282,9 @@ struct ov2640_win_size {
 
 struct ov2640_priv {
struct v4l2_subdev  subdev;
+#if defined(CONFIG_MEDIA_CONTROLLER)
+   struct media_pad pad;
+#endif
struct v4l2_ctrl_handlerhdl;
u32 cfmt_code;
struct clk  *clk;
@@ -1063,6 +1066,7 @@ static int ov2640_probe(struct i2c_client *client,
goto err_clk;
 
v4l2_i2c_subdev_init(>subdev, client, _subdev_ops);
+   priv->subdev.flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
v4l2_ctrl_handler_init(>hdl, 2);
v4l2_ctrl_new_std(>hdl, _ctrl_ops,
V4L2_CID_VFLIP, 0, 1, 1, 0);
@@ -1073,19 +1077,28 @@ static int ov2640_probe(struct i2c_client *client,
ret = priv->hdl.error;
goto err_hdl;
}
+#if defined(CONFIG_MEDIA_CONTROLLER)
+   priv->pad.flags = MEDIA_PAD_FL_SOURCE;
+   priv->subdev.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+   ret = media_entity_pads_init(>subdev.entity, 1, >pad);
+   if (ret < 0)
+   goto err_hdl;
+#endif
 
ret = ov2640_video_probe(client);
if (ret < 0)
-   goto err_hdl;
+   goto err_videoprobe;
 
ret = v4l2_async_register_subdev(>subdev);
if (ret < 0)
-   goto err_hdl;
+   goto err_videoprobe;
 
dev_info(>dev, "OV2640 Probed\n");
 
return 0;
 
+err_videoprobe:
+   media_entity_cleanup(>subdev.entity);
 err_hdl:
v4l2_ctrl_handler_free(>hdl);
 err_clk:
@@ -1099,6 +1112,7 @@ static int ov2640_remove(struct i2c_client *client)
 
v4l2_async_unregister_subdev(>subdev);
v4l2_ctrl_handler_free(>hdl);
+   media_entity_cleanup(>subdev.entity);
v4l2_device_unregister_subdev(>subdev);
clk_disable_unprepare(priv->clk);
return 0;
-- 
2.11.0



[PATCHv5 03/16] ov7670: fix g/s_parm

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Drop unnecesary memset. Drop the unnecessary extendedmode check and
set the V4L2_CAP_TIMEPERFRAME capability.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/i2c/ov7670.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
index 9af8d3b8f848..50e4466a2b37 100644
--- a/drivers/media/i2c/ov7670.c
+++ b/drivers/media/i2c/ov7670.c
@@ -1046,7 +1046,6 @@ static int ov7670_g_parm(struct v4l2_subdev *sd, struct 
v4l2_streamparm *parms)
if (parms->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
return -EINVAL;
 
-   memset(cp, 0, sizeof(struct v4l2_captureparm));
cp->capability = V4L2_CAP_TIMEPERFRAME;
info->devtype->get_framerate(sd, >timeperframe);
 
@@ -1061,9 +1060,8 @@ static int ov7670_s_parm(struct v4l2_subdev *sd, struct 
v4l2_streamparm *parms)
 
if (parms->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
return -EINVAL;
-   if (cp->extendedmode != 0)
-   return -EINVAL;
 
+   cp->capability = V4L2_CAP_TIMEPERFRAME;
return info->devtype->set_framerate(sd, tpf);
 }
 
-- 
2.11.0



[PATCHv5 06/16] atmel-isi: document device tree bindings

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Document the device tree bindings for this hardware.

Mostly copied from the atmel-isc bindings.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 .../devicetree/bindings/media/atmel-isi.txt| 96 +-
 1 file changed, 58 insertions(+), 38 deletions(-)

diff --git a/Documentation/devicetree/bindings/media/atmel-isi.txt 
b/Documentation/devicetree/bindings/media/atmel-isi.txt
index 251f008f220c..65249bbd5c00 100644
--- a/Documentation/devicetree/bindings/media/atmel-isi.txt
+++ b/Documentation/devicetree/bindings/media/atmel-isi.txt
@@ -1,51 +1,71 @@
-Atmel Image Sensor Interface (ISI) SoC Camera Subsystem
---
-
-Required properties:
-- compatible: must be "atmel,at91sam9g45-isi"
-- reg: physical base address and length of the registers set for the device;
-- interrupts: should contain IRQ line for the ISI;
-- clocks: list of clock specifiers, corresponding to entries in
-  the clock-names property;
-- clock-names: must contain "isi_clk", which is the isi peripherial clock.
-
-ISI supports a single port node with parallel bus. It should contain one
+Atmel Image Sensor Interface (ISI)
+--
+
+Required properties for ISI:
+- compatible: must be "atmel,at91sam9g45-isi".
+- reg: physical base address and length of the registers set for the device.
+- interrupts: should contain IRQ line for the ISI.
+- clocks: list of clock specifiers, corresponding to entries in the clock-names
+   property; please refer to clock-bindings.txt.
+- clock-names: required elements: "isi_clk".
+- pinctrl-names, pinctrl-0: please refer to pinctrl-bindings.txt.
+
+ISI supports a single port node with parallel bus. It shall contain one
 'port' child node with child 'endpoint' node. Please refer to the bindings
 defined in Documentation/devicetree/bindings/media/video-interfaces.txt.
 
-Example:
-   isi: isi@f0034000 {
-   compatible = "atmel,at91sam9g45-isi";
-   reg = <0xf0034000 0x4000>;
-   interrupts = <37 IRQ_TYPE_LEVEL_HIGH 5>;
+Endpoint node properties
+
 
-   clocks = <_clk>;
-   clock-names = "isi_clk";
+- bus-width: <8> or <10> (mandatory)
+- hsync-active (default: active high)
+- vsync-active (default: active high)
+- pclk-sample (default: sample on falling edge)
+- remote-endpoint: A phandle to the bus receiver's endpoint node (mandatory).
 
-   pinctrl-names = "default";
-   pinctrl-0 = <_isi>;
-
-   port {
-   #address-cells = <1>;
-   #size-cells = <0>;
+Example:
 
-   isi_0: endpoint {
-   remote-endpoint = <_0>;
-   bus-width = <8>;
-   };
+isi: isi@f0034000 {
+   compatible = "atmel,at91sam9g45-isi";
+   reg = <0xf0034000 0x4000>;
+   interrupts = <37 IRQ_TYPE_LEVEL_HIGH 5>;
+   pinctrl-names = "default";
+   pinctrl-0 = <_isi_data_0_7>;
+   clocks = <_clk>;
+   clock-names = "isi_clk";
+   status = "ok";
+   port {
+   #address-cells = <1>;
+   #size-cells = <0>;
+   isi_0: endpoint {
+   remote-endpoint = <_0>;
+   bus-width = <8>;
+   vsync-active = <1>;
+   hsync-active = <1>;
};
};
+};
+
+i2c1: i2c@f0018000 {
+   status = "okay";
 
-   i2c1: i2c@f0018000 {
-   ov2640: camera@0x30 {
-   compatible = "ovti,ov2640";
-   reg = <0x30>;
+   ov2640: camera@30 {
+   compatible = "ovti,ov2640";
+   reg = <0x30>;
+   pinctrl-names = "default";
+   pinctrl-0 = <_pck0_as_isi_mck _sensor_power 
_sensor_reset>;
+   resetb-gpios = < 11 GPIO_ACTIVE_LOW>;
+   pwdn-gpios = < 13 GPIO_ACTIVE_HIGH>;
+   clocks = <>;
+   clock-names = "xvclk";
+   assigned-clocks = <>;
+   assigned-clock-rates = <2500>;
 
-   port {
-   ov2640_0: endpoint {
-   remote-endpoint = <_0>;
-   bus-width = <8>;
-   };
+   port {
+   ov2640_0: endpoint {
+   remote-endpoint = <_0>;
+   bus-width = <8>;
};
};
};
+};
-- 
2.11.0



[PATCHv5 07/16] atmel-isi: remove dependency of the soc-camera framework

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

This patch converts the atmel-isi driver from a soc-camera driver to a driver
that is stand-alone.

Signed-off-by: Hans Verkuil 
---
 drivers/media/platform/soc_camera/Kconfig |3 +-
 drivers/media/platform/soc_camera/atmel-isi.c | 1209 +++--
 2 files changed, 714 insertions(+), 498 deletions(-)

diff --git a/drivers/media/platform/soc_camera/Kconfig 
b/drivers/media/platform/soc_camera/Kconfig
index 86d74788544f..a37ec91b026e 100644
--- a/drivers/media/platform/soc_camera/Kconfig
+++ b/drivers/media/platform/soc_camera/Kconfig
@@ -29,9 +29,8 @@ config VIDEO_SH_MOBILE_CEU
 
 config VIDEO_ATMEL_ISI
tristate "ATMEL Image Sensor Interface (ISI) support"
-   depends on VIDEO_DEV && SOC_CAMERA
+   depends on VIDEO_V4L2 && OF && HAS_DMA
depends on ARCH_AT91 || COMPILE_TEST
-   depends on HAS_DMA
select VIDEOBUF2_DMA_CONTIG
---help---
  This module makes the ATMEL Image Sensor Interface available
diff --git a/drivers/media/platform/soc_camera/atmel-isi.c 
b/drivers/media/platform/soc_camera/atmel-isi.c
index 46de657c3e6d..a6d60c2e207d 100644
--- a/drivers/media/platform/soc_camera/atmel-isi.c
+++ b/drivers/media/platform/soc_camera/atmel-isi.c
@@ -22,18 +22,22 @@
 #include 
 #include 
 #include 
-
-#include 
-#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
 #include 
 #include 
+#include 
 
 #include "atmel-isi.h"
 
-#define MAX_BUFFER_NUM 32
 #define MAX_SUPPORT_WIDTH  2048
 #define MAX_SUPPORT_HEIGHT 2048
-#define VID_LIMIT_BYTES(16 * 1024 * 1024)
 #define MIN_FRAME_RATE 15
 #define FRAME_INTERVAL_MILLI_SEC   (1000 / MIN_FRAME_RATE)
 
@@ -65,9 +69,37 @@ struct frame_buffer {
struct list_head list;
 };
 
+struct isi_graph_entity {
+   struct device_node *node;
+
+   struct v4l2_async_subdev asd;
+   struct v4l2_subdev *subdev;
+};
+
+/*
+ * struct isi_format - ISI media bus format information
+ * @fourcc:Fourcc code for this format
+ * @mbus_code: V4L2 media bus format code.
+ * @bpp:   Bytes per pixel (when stored in memory)
+ * @swap:  Byte swap configuration value
+ * @support:   Indicates format supported by subdev
+ * @skip:  Skip duplicate format supported by subdev
+ */
+struct isi_format {
+   u32 fourcc;
+   u32 mbus_code;
+   u8  bpp;
+   u32 swap;
+
+   boolsupport;
+   boolskip;
+};
+
+
 struct atmel_isi {
/* Protects the access of variables shared with the ISR */
-   spinlock_t  lock;
+   spinlock_t  irqlock;
+   struct device   *dev;
void __iomem*regs;
 
int sequence;
@@ -76,7 +108,7 @@ struct atmel_isi {
struct fbd  *p_fb_descriptors;
dma_addr_t  fb_descriptors_phys;
struct  list_head dma_desc_head;
-   struct isi_dma_desc dma_desc[MAX_BUFFER_NUM];
+   struct isi_dma_desc dma_desc[VIDEO_MAX_FRAME];
boolenable_preview_path;
 
struct completion   complete;
@@ -90,9 +122,22 @@ struct atmel_isi {
struct list_headvideo_buffer_list;
struct frame_buffer *active;
 
-   struct soc_camera_host  soc_host;
+   struct v4l2_device  v4l2_dev;
+   struct video_device *vdev;
+   struct v4l2_async_notifier  notifier;
+   struct isi_graph_entity entity;
+   struct v4l2_format  fmt;
+
+   struct isi_format   **user_formats;
+   unsigned intnum_user_formats;
+   const struct isi_format *current_fmt;
+
+   struct mutexlock;
+   struct vb2_queuequeue;
 };
 
+#define notifier_to_isi(n) container_of(n, struct atmel_isi, notifier)
+
 static void isi_writel(struct atmel_isi *isi, u32 reg, u32 val)
 {
writel(val, isi->regs + reg);
@@ -102,107 +147,46 @@ static u32 isi_readl(struct atmel_isi *isi, u32 reg)
return readl(isi->regs + reg);
 }
 
-static u32 setup_cfg2_yuv_swap(struct atmel_isi *isi,
-   const struct soc_camera_format_xlate *xlate)
-{
-   if (xlate->host_fmt->fourcc == V4L2_PIX_FMT_YUYV) {
-   /* all convert to YUYV */
-   switch (xlate->code) {
-   case MEDIA_BUS_FMT_VYUY8_2X8:
-   return ISI_CFG2_YCC_SWAP_MODE_3;
-   case MEDIA_BUS_FMT_UYVY8_2X8:
-   return ISI_CFG2_YCC_SWAP_MODE_2;
-   case MEDIA_BUS_FMT_YVYU8_2X8:
-   return 

[PATCHv5 08/16] atmel-isi: move out of soc_camera to atmel

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Move this out of the soc_camera directory into the atmel directory
where it belongs.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/platform/Makefile  |  1 +
 drivers/media/platform/atmel/Kconfig | 11 ++-
 drivers/media/platform/atmel/Makefile|  1 +
 drivers/media/platform/{soc_camera => atmel}/atmel-isi.c |  0
 drivers/media/platform/{soc_camera => atmel}/atmel-isi.h |  0
 drivers/media/platform/soc_camera/Kconfig| 10 --
 drivers/media/platform/soc_camera/Makefile   |  1 -
 7 files changed, 12 insertions(+), 12 deletions(-)
 rename drivers/media/platform/{soc_camera => atmel}/atmel-isi.c (100%)
 rename drivers/media/platform/{soc_camera => atmel}/atmel-isi.h (100%)

diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index 8959f6e6692a..c491731f5909 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_VIDEO_XILINX)+= xilinx/
 obj-$(CONFIG_VIDEO_RCAR_VIN)   += rcar-vin/
 
 obj-$(CONFIG_VIDEO_ATMEL_ISC)  += atmel/
+obj-$(CONFIG_VIDEO_ATMEL_ISI)  += atmel/
 
 ccflags-y += -I$(srctree)/drivers/media/i2c
 
diff --git a/drivers/media/platform/atmel/Kconfig 
b/drivers/media/platform/atmel/Kconfig
index 867dca22a473..9bd0f19b127f 100644
--- a/drivers/media/platform/atmel/Kconfig
+++ b/drivers/media/platform/atmel/Kconfig
@@ -6,4 +6,13 @@ config VIDEO_ATMEL_ISC
select REGMAP_MMIO
help
   This module makes the ATMEL Image Sensor Controller available
-  as a v4l2 device.
\ No newline at end of file
+  as a v4l2 device.
+
+config VIDEO_ATMEL_ISI
+   tristate "ATMEL Image Sensor Interface (ISI) support"
+   depends on VIDEO_V4L2 && OF && HAS_DMA
+   depends on ARCH_AT91 || COMPILE_TEST
+   select VIDEOBUF2_DMA_CONTIG
+   ---help---
+ This module makes the ATMEL Image Sensor Interface available
+ as a v4l2 device.
diff --git a/drivers/media/platform/atmel/Makefile 
b/drivers/media/platform/atmel/Makefile
index 9d7c999d434d..27000d099a5e 100644
--- a/drivers/media/platform/atmel/Makefile
+++ b/drivers/media/platform/atmel/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_VIDEO_ATMEL_ISC) += atmel-isc.o
+obj-$(CONFIG_VIDEO_ATMEL_ISI) += atmel-isi.o
diff --git a/drivers/media/platform/soc_camera/atmel-isi.c 
b/drivers/media/platform/atmel/atmel-isi.c
similarity index 100%
rename from drivers/media/platform/soc_camera/atmel-isi.c
rename to drivers/media/platform/atmel/atmel-isi.c
diff --git a/drivers/media/platform/soc_camera/atmel-isi.h 
b/drivers/media/platform/atmel/atmel-isi.h
similarity index 100%
rename from drivers/media/platform/soc_camera/atmel-isi.h
rename to drivers/media/platform/atmel/atmel-isi.h
diff --git a/drivers/media/platform/soc_camera/Kconfig 
b/drivers/media/platform/soc_camera/Kconfig
index a37ec91b026e..0c581aa1b18a 100644
--- a/drivers/media/platform/soc_camera/Kconfig
+++ b/drivers/media/platform/soc_camera/Kconfig
@@ -26,13 +26,3 @@ config VIDEO_SH_MOBILE_CEU
select SOC_CAMERA_SCALE_CROP
---help---
  This is a v4l2 driver for the SuperH Mobile CEU Interface
-
-config VIDEO_ATMEL_ISI
-   tristate "ATMEL Image Sensor Interface (ISI) support"
-   depends on VIDEO_V4L2 && OF && HAS_DMA
-   depends on ARCH_AT91 || COMPILE_TEST
-   select VIDEOBUF2_DMA_CONTIG
-   ---help---
- This module makes the ATMEL Image Sensor Interface available
- as a v4l2 device.
-
diff --git a/drivers/media/platform/soc_camera/Makefile 
b/drivers/media/platform/soc_camera/Makefile
index 7633a0f2f66f..07a451e8b228 100644
--- a/drivers/media/platform/soc_camera/Makefile
+++ b/drivers/media/platform/soc_camera/Makefile
@@ -6,5 +6,4 @@ obj-$(CONFIG_SOC_CAMERA_SCALE_CROP) += soc_scale_crop.o
 obj-$(CONFIG_SOC_CAMERA_PLATFORM)  += soc_camera_platform.o
 
 # soc-camera host drivers have to be linked after camera drivers
-obj-$(CONFIG_VIDEO_ATMEL_ISI)  += atmel-isi.o
 obj-$(CONFIG_VIDEO_SH_MOBILE_CEU)  += sh_mobile_ceu_camera.o
-- 
2.11.0



[PATCHv5 02/16] ov7670: call v4l2_async_register_subdev

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Add v4l2-async support for this driver.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/i2c/ov7670.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
index 56cfb5ca9c95..9af8d3b8f848 100644
--- a/drivers/media/i2c/ov7670.c
+++ b/drivers/media/i2c/ov7670.c
@@ -1636,10 +1636,9 @@ static int ov7670_probe(struct i2c_client *client,
V4L2_EXPOSURE_AUTO);
sd->ctrl_handler = >hdl;
if (info->hdl.error) {
-   int err = info->hdl.error;
+   ret = info->hdl.error;
 
-   v4l2_ctrl_handler_free(>hdl);
-   return err;
+   goto hdl_free;
}
/*
 * We have checked empirically that hw allows to read back the gain
@@ -1651,7 +1650,15 @@ static int ov7670_probe(struct i2c_client *client,
v4l2_ctrl_cluster(2, >saturation);
v4l2_ctrl_handler_setup(>hdl);
 
+   ret = v4l2_async_register_subdev(>sd);
+   if (ret < 0)
+   goto hdl_free;
+
return 0;
+
+hdl_free:
+   v4l2_ctrl_handler_free(>hdl);
+   return ret;
 }
 
 
-- 
2.11.0



[PATCHv5 04/16] ov7670: get xclk

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Get the clock for this sensor.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 drivers/media/i2c/ov7670.c | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
index 50e4466a2b37..912ff09c6100 100644
--- a/drivers/media/i2c/ov7670.c
+++ b/drivers/media/i2c/ov7670.c
@@ -10,6 +10,7 @@
  * This file may be distributed under the terms of the GNU General
  * Public License, version 2.
  */
+#include 
 #include 
 #include 
 #include 
@@ -227,6 +228,7 @@ struct ov7670_info {
struct v4l2_ctrl *hue;
};
struct ov7670_format_struct *fmt;  /* Current format */
+   struct clk *clk;
int min_width;  /* Filter out smaller sizes */
int min_height; /* Filter out smaller sizes */
int clock_speed;/* External clock speed (MHz) */
@@ -1587,13 +1589,24 @@ static int ov7670_probe(struct i2c_client *client,
info->pclk_hb_disable = true;
}
 
+   info->clk = devm_clk_get(>dev, "xclk");
+   if (IS_ERR(info->clk))
+   return -EPROBE_DEFER;
+   clk_prepare_enable(info->clk);
+
+   info->clock_speed = clk_get_rate(info->clk) / 100;
+   if (info->clock_speed < 10 || info->clock_speed > 48) {
+   ret = -EINVAL;
+   goto clk_disable;
+   }
+
/* Make sure it's an ov7670 */
ret = ov7670_detect(sd);
if (ret) {
v4l_dbg(1, debug, client,
"chip found @ 0x%x (%s) is not an ov7670 chip.\n",
client->addr << 1, client->adapter->name);
-   return ret;
+   goto clk_disable;
}
v4l_info(client, "chip found @ 0x%02x (%s)\n",
client->addr << 1, client->adapter->name);
@@ -1656,6 +1669,8 @@ static int ov7670_probe(struct i2c_client *client,
 
 hdl_free:
v4l2_ctrl_handler_free(>hdl);
+clk_disable:
+   clk_disable_unprepare(info->clk);
return ret;
 }
 
@@ -1667,6 +1682,7 @@ static int ov7670_remove(struct i2c_client *client)
 
v4l2_device_unregister_subdev(sd);
v4l2_ctrl_handler_free(>hdl);
+   clk_disable_unprepare(info->clk);
return 0;
 }
 
-- 
2.11.0



[PATCHv5 09/16] ov2640: fix colorspace handling

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

The colorspace is independent of whether YUV or RGB is sent to the SoC.
Fix this.

Signed-off-by: Hans Verkuil 
---
 drivers/media/i2c/soc_camera/ov2640.c | 23 +++
 1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/media/i2c/soc_camera/ov2640.c 
b/drivers/media/i2c/soc_camera/ov2640.c
index 56de18263359..b9a0069f5b33 100644
--- a/drivers/media/i2c/soc_camera/ov2640.c
+++ b/drivers/media/i2c/soc_camera/ov2640.c
@@ -794,10 +794,11 @@ static int ov2640_set_params(struct i2c_client *client, 
u32 *width, u32 *height,
dev_dbg(>dev, "%s: Selected cfmt YUYV (YUV422)", 
__func__);
selected_cfmt_regs = ov2640_yuyv_regs;
break;
-   default:
case MEDIA_BUS_FMT_UYVY8_2X8:
+   default:
dev_dbg(>dev, "%s: Selected cfmt UYVY", __func__);
selected_cfmt_regs = ov2640_uyvy_regs;
+   break;
}
 
/* reset hardware */
@@ -865,17 +866,7 @@ static int ov2640_get_fmt(struct v4l2_subdev *sd,
mf->width   = priv->win->width;
mf->height  = priv->win->height;
mf->code= priv->cfmt_code;
-
-   switch (mf->code) {
-   case MEDIA_BUS_FMT_RGB565_2X8_BE:
-   case MEDIA_BUS_FMT_RGB565_2X8_LE:
-   mf->colorspace = V4L2_COLORSPACE_SRGB;
-   break;
-   default:
-   case MEDIA_BUS_FMT_YUYV8_2X8:
-   case MEDIA_BUS_FMT_UYVY8_2X8:
-   mf->colorspace = V4L2_COLORSPACE_JPEG;
-   }
+   mf->colorspace  = V4L2_COLORSPACE_SRGB;
mf->field   = V4L2_FIELD_NONE;
 
return 0;
@@ -897,17 +888,17 @@ static int ov2640_set_fmt(struct v4l2_subdev *sd,
ov2640_select_win(>width, >height);
 
mf->field   = V4L2_FIELD_NONE;
+   mf->colorspace  = V4L2_COLORSPACE_SRGB;
 
switch (mf->code) {
case MEDIA_BUS_FMT_RGB565_2X8_BE:
case MEDIA_BUS_FMT_RGB565_2X8_LE:
-   mf->colorspace = V4L2_COLORSPACE_SRGB;
+   case MEDIA_BUS_FMT_YUYV8_2X8:
+   case MEDIA_BUS_FMT_UYVY8_2X8:
break;
default:
mf->code = MEDIA_BUS_FMT_UYVY8_2X8;
-   case MEDIA_BUS_FMT_YUYV8_2X8:
-   case MEDIA_BUS_FMT_UYVY8_2X8:
-   mf->colorspace = V4L2_COLORSPACE_JPEG;
+   break;
}
 
if (format->which == V4L2_SUBDEV_FORMAT_ACTIVE)
-- 
2.11.0



[PATCHv5 01/16] ov7670: document device tree bindings

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

Add binding documentation and add that file to the MAINTAINERS entry.

Signed-off-by: Hans Verkuil 
Acked-by: Sakari Ailus 
---
 .../devicetree/bindings/media/i2c/ov7670.txt   | 43 ++
 MAINTAINERS|  1 +
 2 files changed, 44 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/media/i2c/ov7670.txt

diff --git a/Documentation/devicetree/bindings/media/i2c/ov7670.txt 
b/Documentation/devicetree/bindings/media/i2c/ov7670.txt
new file mode 100644
index ..826b6563b009
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/i2c/ov7670.txt
@@ -0,0 +1,43 @@
+* Omnivision OV7670 CMOS sensor
+
+The Omnivision OV7670 sensor supports multiple resolutions output, such as
+CIF, SVGA, UXGA. It also can support the YUV422/420, RGB565/555 or raw RGB
+output formats.
+
+Required Properties:
+- compatible: should be "ovti,ov7670"
+- clocks: reference to the xclk input clock.
+- clock-names: should be "xclk".
+
+Optional Properties:
+- reset-gpios: reference to the GPIO connected to the resetb pin, if any.
+  Active is low.
+- powerdown-gpios: reference to the GPIO connected to the pwdn pin, if any.
+  Active is high.
+
+The device node must contain one 'port' child node for its digital output
+video port, in accordance with the video interface bindings defined in
+Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+Example:
+
+   i2c1: i2c@f0018000 {
+   ov7670: camera@21 {
+   compatible = "ovti,ov7670";
+   reg = <0x21>;
+   pinctrl-names = "default";
+   pinctrl-0 = <_pck0_as_isi_mck 
_sensor_power _sensor_reset>;
+   reset-gpios = < 11 GPIO_ACTIVE_LOW>;
+   powerdown-gpios = < 13 GPIO_ACTIVE_HIGH>;
+   clocks = <>;
+   clock-names = "xclk";
+   assigned-clocks = <>;
+   assigned-clock-rates = <2500>;
+
+   port {
+   ov7670_0: endpoint {
+   remote-endpoint = <_0>;
+   };
+   };
+   };
+   };
diff --git a/MAINTAINERS b/MAINTAINERS
index 83a42ef1d1a7..93500928ca4f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9273,6 +9273,7 @@ L:linux-media@vger.kernel.org
 T: git git://linuxtv.org/media_tree.git
 S: Maintained
 F: drivers/media/i2c/ov7670.c
+F: Documentation/devicetree/bindings/media/i2c/ov7670.txt
 
 ONENAND FLASH DRIVER
 M: Kyungmin Park 
-- 
2.11.0



[PATCHv5 00/16] atmel-isi/ov7670/ov2640: convert to standalone drivers

2017-03-11 Thread Hans Verkuil
From: Hans Verkuil 

This patch series converts the soc-camera atmel-isi to a standalone V4L2
driver.

The same is done for the ov7670 and ov2640 sensor drivers: the ov7670 was
used to test the atmel-isi driver. The ov2640 is needed because the em28xx
driver has a soc_camera include dependency. Both ov7670 and ov2640 sensors
have been tested with the atmel-isi driver.

The first 5 patches improve the ov7670 sensor driver, mostly adding modern
features such as DT support.

The next three convert the atmel-isi and move it out of soc_camera.

The following 6 patches convert ov2640 and drop the soc_camera dependency
in em28xx. I have tested that this works with my 'SpeedLink Vicious And
Divine Laplace webcam'.

The last two patches add isi support to the DT: the first for the ov7670
sensor, the second modifies it for the ov2640 sensor.

These two final patches are for demonstration purposes only, I do not plan
on merging them.

Tested with my sama5d3-Xplained board, the ov2640 sensor and two ov7670
sensors: one with and one without reset/pwdn pins. Also tested with my
em28xx-based webcam.

I'd like to get this in for 4.12. Fingers crossed.

Regards,

Hans

Changes since v4:
- the ov2640 colorspace fixes were inexplicably part of an atmel-isi patch.
  Split it off as a separate patch.
- add V4L2_SUBDEV_FL_HAS_DEVNODE to ov2640.
- drop #if defined(CONFIG_MEDIA_CONTROLLER) guard around media_entity_cleanup
  in ov2640.

Changes since v3:
- ov2640/ov7670: call clk_disable_unprepare where needed. I assumed this was
  done by the devm_clk_get cleanup, but that wasn't the case.
- bindings: be even more explicit about which properties are mandatory.
- ov2640/ov7670: drop unused bus-width from the dts binding examples and from
  the actual dts patches.

Changes since v2:
- Incorporated Sakari's and Rob's device tree bindings comments.
- ov2640: dropped the reset/power changes. These actually broke the em28xx
  and there was really nothing wrong with it.
- merged the "ov2640: allow use inside em28xx" into patches 10 and 11.
  It really shouldn't have been a separate patch in the first place.
- rebased on top of 4.11-rc1.

Changes since v1:

- Dropped MC support from atmel-isi and ov7670: not needed to make this
  work. Only for the ov2640 was it kept since the em28xx driver requires it.
- Use devm_clk_get instead of clk_get.
- The ov7670 lower limit of the clock speed is 10 MHz instead of 12. Adjust
  accordingly.



Hans Verkuil (16):
  ov7670: document device tree bindings
  ov7670: call v4l2_async_register_subdev
  ov7670: fix g/s_parm
  ov7670: get xclk
  ov7670: add devicetree support
  atmel-isi: document device tree bindings
  atmel-isi: remove dependency of the soc-camera framework
  atmel-isi: move out of soc_camera to atmel
  ov2640: fix colorspace handling
  ov2640: update bindings
  ov2640: convert from soc-camera to a standard subdev sensor driver.
  ov2640: use standard clk and enable it.
  ov2640: add MC support
  em28xx: drop last soc_camera link
  sama5d3 dts: enable atmel-isi
  at91-sama5d3_xplained.dts: select ov2640

 .../devicetree/bindings/media/atmel-isi.txt|   96 +-
 .../devicetree/bindings/media/i2c/ov2640.txt   |   23 +-
 .../devicetree/bindings/media/i2c/ov7670.txt   |   43 +
 MAINTAINERS|1 +
 arch/arm/boot/dts/at91-sama5d3_xplained.dts|   59 +-
 arch/arm/boot/dts/sama5d3.dtsi |4 +-
 drivers/media/i2c/Kconfig  |   11 +
 drivers/media/i2c/Makefile |1 +
 drivers/media/i2c/{soc_camera => }/ov2640.c|  149 +--
 drivers/media/i2c/ov7670.c |   75 +-
 drivers/media/i2c/soc_camera/Kconfig   |6 -
 drivers/media/i2c/soc_camera/Makefile  |1 -
 drivers/media/platform/Makefile|1 +
 drivers/media/platform/atmel/Kconfig   |   11 +-
 drivers/media/platform/atmel/Makefile  |1 +
 drivers/media/platform/atmel/atmel-isi.c   | 1384 
 .../platform/{soc_camera => atmel}/atmel-isi.h |0
 drivers/media/platform/soc_camera/Kconfig  |   11 -
 drivers/media/platform/soc_camera/Makefile |1 -
 drivers/media/platform/soc_camera/atmel-isi.c  | 1167 -
 drivers/media/usb/em28xx/em28xx-camera.c   |9 -
 21 files changed, 1687 insertions(+), 1367 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/media/i2c/ov7670.txt
 rename drivers/media/i2c/{soc_camera => }/ov2640.c (92%)
 create mode 100644 drivers/media/platform/atmel/atmel-isi.c
 rename drivers/media/platform/{soc_camera => atmel}/atmel-isi.h (100%)
 delete mode 100644 drivers/media/platform/soc_camera/atmel-isi.c

-- 
2.11.0



Re: [PATCH 2/3] libv4lconvert: by default, offer the original format to the client

2017-03-11 Thread Mauro Carvalho Chehab
Em Sat, 11 Mar 2017 06:21:40 -0300
Mauro Carvalho Chehab  escreveu:

> The libv4lconvert part of libv4l was meant to provide a common
> place to handle weird proprietary formats. With time, we also
> added support to other standard formats, in order to help
> V4L2 applications that are not performance sensitive to support
> all V4L2 formats.
> 
> Yet, the hole idea is to let userspace to decide to implement
> their own format conversion code when it needs either more
> performance or more quality than what libv4lconvert provides.
> 
> In other words, applications should have the right to decide
> between using a libv4lconvert emulated format or to implement
> the decoding themselves for non-proprietary formats,
> as this may have significative performance impact.
> 
> At the application side, deciding between them is just a matter
> of looking at the V4L2_FMT_FLAG_EMULATED flag.
> 
> Yet, we don't want to have a myriad of format converters
> everywhere for the proprietary formats, like V4L2_PIX_FMT_KONICA420,
> V4L2_PIX_FMT_SPCA501, etc. So, let's offer only the emulated
> variant for those weird stuff.
> 
> So, this patch changes the libv4lconvert default behavior to
> show emulated formats, except for the explicit ones marked as
> such.
> 
> Signed-off-by: Mauro Carvalho Chehab 
> Acked-by: Sakari Ailus 

Please ignore this patch. It is wrong. I sent a version 2 of this
series without it.

Regards,
Mauro


Re: [PATCH] libv4lconvert: by default, offer the original format to the client

2017-03-11 Thread Mauro Carvalho Chehab
Em Fri, 10 Mar 2017 17:00:59 -0800
Troy Kisky  escreveu:

> On 3/10/2017 12:53 PM, Mauro Carvalho Chehab wrote:
> > The libv4lconvert part of libv4l was meant to provide a common
> > place to handle weird proprietary formats. With time, we also
> > added support to other standard formats, in order to help
> > V4L2 applications that are not performance sensitive to support
> > all V4L2 formats.
> > 
> > Yet, the hole idea is to let userspace to decide to implement
> > their own format conversion code when it needs either more
> > performance or more quality than what libv4lconvert provides.
> > 
> > In other words, applications should have the right to decide
> > between using a libv4lconvert emulated format or to implement
> > the decoding themselves for non-proprietary formats,
> > as this may have significative performance impact.
> > 
> > At the application side, deciding between them is just a matter
> > of looking at the V4L2_FMT_FLAG_EMULATED flag.
> > 
> > Yet, we don't want to have a miriad of format converters
> > everywhere for the proprietary formats, like V4L2_PIX_FMT_KONICA420,
> > V4L2_PIX_FMT_SPCA501, etc. So, let's offer only the emulated
> > variant for those weird stuff.
> > 
> > So, this patch changes the libv4lconvert behavior, for
> > all non-proprietary formats, including Bayer, to offer both
> > the original format and the emulated ones.
> > 
> > Signed-off-by: Mauro Carvalho Chehab 
> > ---
> >  lib/libv4lconvert/libv4lconvert.c | 17 -
> >  1 file changed, 8 insertions(+), 9 deletions(-)
> > 
> > diff --git a/lib/libv4lconvert/libv4lconvert.c 
> > b/lib/libv4lconvert/libv4lconvert.c
> > index da718918b030..d87d6b91a838 100644
> > --- a/lib/libv4lconvert/libv4lconvert.c
> > +++ b/lib/libv4lconvert/libv4lconvert.c
> > @@ -118,10 +118,10 @@ static const struct v4lconvert_pixfmt 
> > supported_src_pixfmts[] = {
> > { V4L2_PIX_FMT_OV511,0,  7,  7, 1 },
> > { V4L2_PIX_FMT_OV518,0,  7,  7, 1 },
> > /* uncompressed bayer */
> > -   { V4L2_PIX_FMT_SBGGR8,   8,  8,  8, 1 },
> > -   { V4L2_PIX_FMT_SGBRG8,   8,  8,  8, 1 },
> > -   { V4L2_PIX_FMT_SGRBG8,   8,  8,  8, 1 },
> > -   { V4L2_PIX_FMT_SRGGB8,   8,  8,  8, 1 },
> > +   { V4L2_PIX_FMT_SBGGR8,   8,  8,  8, 0 },
> > +   { V4L2_PIX_FMT_SGBRG8,   8,  8,  8, 0 },
> > +   { V4L2_PIX_FMT_SGRBG8,   8,  8,  8, 0 },
> > +   { V4L2_PIX_FMT_SRGGB8,   8,  8,  8, 0 },
> > { V4L2_PIX_FMT_STV0680,  8,  8,  8, 1 },
> > /* compressed bayer */
> > { V4L2_PIX_FMT_SPCA561,  0,  9,  9, 1 },
> > @@ -178,7 +178,7 @@ struct v4lconvert_data 
> > *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
> > /* This keeps tracks of devices which have only formats for which apps
> >most likely will need conversion and we can thus safely add software
> >processing controls without a performance impact. */  
> 
> 
> The bottom part seems unwanted.
> Before, if any source format did not need conversion then 
> always_needs_conversion = 0
> else always_needs_conversion = 1
> 
> Now
> if any source format needs conversion then always_needs_conversion = 1
> else always_needs_conversion = 0

Hmm... you're right. I dropped that part at "PATH v2".

I also added a patch improving the documentation about this logic,
with is not too clear, IMHO.

Thanks,
Mauro


[PATCH v2 1/2] libv4lconvert: make it clear about the criteria for needs_conversion

2017-03-11 Thread Mauro Carvalho Chehab
While there is already a comment about the always_needs_conversion
logic at libv4lconvert, the comment is not clear enough. Also,
the decision of needing a conversion or not is actually at the
supported_src_pixfmts[] table.

Improve the comments to make it clearer about what criteria should be
used with regards to exposing formats to userspace.

Suggested-by: Sakari Ailus 
Signed-off-by: Mauro Carvalho Chehab 
---
 lib/libv4lconvert/libv4lconvert.c | 21 -
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/lib/libv4lconvert/libv4lconvert.c 
b/lib/libv4lconvert/libv4lconvert.c
index da718918b030..2718446ff239 100644
--- a/lib/libv4lconvert/libv4lconvert.c
+++ b/lib/libv4lconvert/libv4lconvert.c
@@ -74,8 +74,15 @@ const struct libv4l_dev_ops *v4lconvert_get_default_dev_ops()
 static void v4lconvert_get_framesizes(struct v4lconvert_data *data,
unsigned int pixelformat, int index);
 
-/* Note for proper functioning of v4lconvert_enum_fmt the first entries in
-   supported_src_pixfmts must match with the entries in supported_dst_pixfmts 
*/
+/*
+ * Notes:
+ * 1) for proper functioning of v4lconvert_enum_fmt the first entries in
+ *supported_src_pixfmts must match with the entries in
+ *supported_dst_pixfmts.
+ * 2) The field needs_conversion should be zero, *except* for device-specific
+ *formats, where it doesn't make sense for applications to have their
+ *own decoders.
+ */
 #define SUPPORTED_DST_PIXFMTS \
/* fourcc   bpp rgb yuv needs  */ \
/*  rankrankconversion */ \
@@ -175,9 +182,13 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int 
fd, void *dev_ops_pri
int i, j;
struct v4lconvert_data *data = calloc(1, sizeof(struct 
v4lconvert_data));
struct v4l2_capability cap;
-   /* This keeps tracks of devices which have only formats for which apps
-  most likely will need conversion and we can thus safely add software
-  processing controls without a performance impact. */
+   /*
+* This keeps tracks of device-specific formats for which apps most
+* likely don't know. If all a driver can offer are proprietary
+* formats, a conversion is needed anyway. We can thus safely
+* add software processing controls without much concern about a
+* performance impact.
+*/
int always_needs_conversion = 1;
 
if (!data) {
-- 
2.9.3



[PATCH v2 2/2] libv4lconvert: expose bayer formats

2017-03-11 Thread Mauro Carvalho Chehab
The libv4lconvert part of libv4l was meant to provide a common
place to handle weird proprietary formats. With time, we also
added support to other standard formats, in order to help
V4L2 applications that are not performance sensitive to support
all V4L2 formats.

Yet, the hole idea is to let userspace to decide to implement
their own format conversion code when it needs either more
performance or more quality than what libv4lconvert provides.

In other words, applications should have the right to decide
between using a libv4lconvert emulated format or to implement
the decoding themselves for non-proprietary formats,
as this may have significative performance impact.

At the application side, deciding between them is just a matter
of looking at the V4L2_FMT_FLAG_EMULATED flag.

Currently, the bayer formats, if present, are not shown to the
applications, with prevents them to use more optimized code to
handle it. Change them to be shown to userspace.

Signed-off-by: Mauro Carvalho Chehab 
Acked-by: Sakari Ailus 
---
 lib/libv4lconvert/libv4lconvert.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/libv4lconvert/libv4lconvert.c 
b/lib/libv4lconvert/libv4lconvert.c
index 2718446ff239..4965e38754e2 100644
--- a/lib/libv4lconvert/libv4lconvert.c
+++ b/lib/libv4lconvert/libv4lconvert.c
@@ -125,10 +125,10 @@ static const struct v4lconvert_pixfmt 
supported_src_pixfmts[] = {
{ V4L2_PIX_FMT_OV511,0,  7,  7, 1 },
{ V4L2_PIX_FMT_OV518,0,  7,  7, 1 },
/* uncompressed bayer */
-   { V4L2_PIX_FMT_SBGGR8,   8,  8,  8, 1 },
-   { V4L2_PIX_FMT_SGBRG8,   8,  8,  8, 1 },
-   { V4L2_PIX_FMT_SGRBG8,   8,  8,  8, 1 },
-   { V4L2_PIX_FMT_SRGGB8,   8,  8,  8, 1 },
+   { V4L2_PIX_FMT_SBGGR8,   8,  8,  8, 0 },
+   { V4L2_PIX_FMT_SGBRG8,   8,  8,  8, 0 },
+   { V4L2_PIX_FMT_SGRBG8,   8,  8,  8, 0 },
+   { V4L2_PIX_FMT_SRGGB8,   8,  8,  8, 0 },
{ V4L2_PIX_FMT_STV0680,  8,  8,  8, 1 },
/* compressed bayer */
{ V4L2_PIX_FMT_SPCA561,  0,  9,  9, 1 },
-- 
2.9.3



[PATCH 3/3] libv4lconvert: make it clear about the criteria for needs_conversion

2017-03-11 Thread Mauro Carvalho Chehab
While there is already a comment about the always_needs_conversion
logic at libv4lconvert, the comment is not clear enough. Also,
the decision of needing a conversion or not is actually at the
supported_src_pixfmts[] table.

Improve the comments to make it clearer about what criteria should be
used with regards to exposing formats to userspace.

Suggested-by: Sakari Ailus 
Signed-off-by: Mauro Carvalho Chehab 
---
 lib/libv4lconvert/libv4lconvert.c | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/lib/libv4lconvert/libv4lconvert.c 
b/lib/libv4lconvert/libv4lconvert.c
index d87d6b91a838..5cb852fc7169 100644
--- a/lib/libv4lconvert/libv4lconvert.c
+++ b/lib/libv4lconvert/libv4lconvert.c
@@ -74,8 +74,15 @@ const struct libv4l_dev_ops *v4lconvert_get_default_dev_ops()
 static void v4lconvert_get_framesizes(struct v4lconvert_data *data,
unsigned int pixelformat, int index);
 
-/* Note for proper functioning of v4lconvert_enum_fmt the first entries in
-   supported_src_pixfmts must match with the entries in supported_dst_pixfmts 
*/
+/*
+ * Notes:
+ * 1) for proper functioning of v4lconvert_enum_fmt the first entries in
+ *supported_src_pixfmts must match with the entries in
+ *supported_dst_pixfmts.
+ * 2) The field needs_conversion should be zero, *except* for device-specific
+ *formats, where it doesn't make sense for applications to have their
+ *own decoders.
+ */
 #define SUPPORTED_DST_PIXFMTS \
/* fourcc   bpp rgb yuv needs  */ \
/*  rankrankconversion */ \
@@ -175,9 +182,11 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int 
fd, void *dev_ops_pri
int i, j;
struct v4lconvert_data *data = calloc(1, sizeof(struct 
v4lconvert_data));
struct v4l2_capability cap;
-   /* This keeps tracks of devices which have only formats for which apps
-  most likely will need conversion and we can thus safely add software
-  processing controls without a performance impact. */
+   /*
+* This keeps tracks of device-specific formats for which apps most
+* likely don't know. We can thus safely add software processing
+* controls without much concern about a performance impact.
+*/
int always_needs_conversion = 0;
 
if (!data) {
-- 
2.9.3



[PATCH 1/3] libv4lconvert: expose bayer formats

2017-03-11 Thread Mauro Carvalho Chehab
Currently, the bayer formats, if present, are not shown to the
applications, with prevents them to use more optimized code to
handle it.

Signed-off-by: Mauro Carvalho Chehab 
Acked-by: Sakari Ailus 
---
 lib/libv4lconvert/libv4lconvert.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/libv4lconvert/libv4lconvert.c 
b/lib/libv4lconvert/libv4lconvert.c
index da718918b030..6cfaf6edbc40 100644
--- a/lib/libv4lconvert/libv4lconvert.c
+++ b/lib/libv4lconvert/libv4lconvert.c
@@ -118,10 +118,10 @@ static const struct v4lconvert_pixfmt 
supported_src_pixfmts[] = {
{ V4L2_PIX_FMT_OV511,0,  7,  7, 1 },
{ V4L2_PIX_FMT_OV518,0,  7,  7, 1 },
/* uncompressed bayer */
-   { V4L2_PIX_FMT_SBGGR8,   8,  8,  8, 1 },
-   { V4L2_PIX_FMT_SGBRG8,   8,  8,  8, 1 },
-   { V4L2_PIX_FMT_SGRBG8,   8,  8,  8, 1 },
-   { V4L2_PIX_FMT_SRGGB8,   8,  8,  8, 1 },
+   { V4L2_PIX_FMT_SBGGR8,   8,  8,  8, 0 },
+   { V4L2_PIX_FMT_SGBRG8,   8,  8,  8, 0 },
+   { V4L2_PIX_FMT_SGRBG8,   8,  8,  8, 0 },
+   { V4L2_PIX_FMT_SRGGB8,   8,  8,  8, 0 },
{ V4L2_PIX_FMT_STV0680,  8,  8,  8, 1 },
/* compressed bayer */
{ V4L2_PIX_FMT_SPCA561,  0,  9,  9, 1 },
-- 
2.9.3



[PATCH 2/3] libv4lconvert: by default, offer the original format to the client

2017-03-11 Thread Mauro Carvalho Chehab
The libv4lconvert part of libv4l was meant to provide a common
place to handle weird proprietary formats. With time, we also
added support to other standard formats, in order to help
V4L2 applications that are not performance sensitive to support
all V4L2 formats.

Yet, the hole idea is to let userspace to decide to implement
their own format conversion code when it needs either more
performance or more quality than what libv4lconvert provides.

In other words, applications should have the right to decide
between using a libv4lconvert emulated format or to implement
the decoding themselves for non-proprietary formats,
as this may have significative performance impact.

At the application side, deciding between them is just a matter
of looking at the V4L2_FMT_FLAG_EMULATED flag.

Yet, we don't want to have a myriad of format converters
everywhere for the proprietary formats, like V4L2_PIX_FMT_KONICA420,
V4L2_PIX_FMT_SPCA501, etc. So, let's offer only the emulated
variant for those weird stuff.

So, this patch changes the libv4lconvert default behavior to
show emulated formats, except for the explicit ones marked as
such.

Signed-off-by: Mauro Carvalho Chehab 
Acked-by: Sakari Ailus 
---
 lib/libv4lconvert/libv4lconvert.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/lib/libv4lconvert/libv4lconvert.c 
b/lib/libv4lconvert/libv4lconvert.c
index 6cfaf6edbc40..d87d6b91a838 100644
--- a/lib/libv4lconvert/libv4lconvert.c
+++ b/lib/libv4lconvert/libv4lconvert.c
@@ -178,7 +178,7 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int 
fd, void *dev_ops_pri
/* This keeps tracks of devices which have only formats for which apps
   most likely will need conversion and we can thus safely add software
   processing controls without a performance impact. */
-   int always_needs_conversion = 1;
+   int always_needs_conversion = 0;
 
if (!data) {
fprintf(stderr, "libv4lconvert: error: out of memory!\n");
@@ -208,10 +208,9 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int 
fd, void *dev_ops_pri
if (j < ARRAY_SIZE(supported_src_pixfmts)) {
data->supported_src_formats |= 1ULL << j;
v4lconvert_get_framesizes(data, fmt.pixelformat, j);
-   if (!supported_src_pixfmts[j].needs_conversion)
-   always_needs_conversion = 0;
-   } else
-   always_needs_conversion = 0;
+   if (supported_src_pixfmts[j].needs_conversion)
+   always_needs_conversion = 1;
+   }
}
 
data->no_formats = i;
-- 
2.9.3