RE: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-20 Thread Karicheri, Muralidharan
Hi,

I guess this is only one part of the required API support for setting
bus configuration, for which I had sent an RFC some time back. I am sure
we need to set the bus image/data format in the vpfe/vpbe drivers of DMxxx.
I am starting to do more upstream work for the vpfe capture and display
drivers and would like to submit an updated RFC for bus configuration. I am
not sure if someone is already working on that RFC.

Looks like we need to have two APIs at the sub-device level for handling
this: one for the image data format (which is addressed by this RFC) and the
other for hardware signals like polarities, bus type, etc. Any comments?

BTW, I didn't have a chance to go over Guennadi's RFC for the bus image
format so far and hope to spend some time on this next week.

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
phone: 301-407-9583
email: m-kariche...@ti.com

-Original Message-
From: Hans Verkuil [mailto:hverk...@xs4all.nl]
Sent: Friday, November 20, 2009 7:29 AM
To: Guennadi Liakhovetski
Cc: Linux Media Mailing List; Laurent Pinchart; Sakari Ailus; Karicheri,
Muralidharan
Subject: Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring
v4l2 subdev pixel and frame formats

On Thursday 19 November 2009 23:33:22 Guennadi Liakhovetski wrote:
 Hi Hans

 On Sun, 15 Nov 2009, Hans Verkuil wrote:

 [snip]

 +s32 v4l2_imgbus_bytes_per_line(u32 width,
 +   const struct v4l2_imgbus_pixelfmt *imgf)
 +{
 +	switch (imgf->packing) {
 +	case V4L2_IMGBUS_PACKING_NONE:
 +		return width * imgf->bits_per_sample / 8;
 +case V4L2_IMGBUS_PACKING_2X8_PADHI:
 +case V4L2_IMGBUS_PACKING_2X8_PADLO:
 +case V4L2_IMGBUS_PACKING_EXTEND16:
 +return width * 2;
 +}
 +return -EINVAL;
 +}
 +EXPORT_SYMBOL(v4l2_imgbus_bytes_per_line);
   
As you know, I am not convinced that this code belongs in the core.
I do not think this translation from IMGBUS to PIXFMT is generic
enough. However, if you just make this part of soc-camera then I am
OK with this.
  
   Are you referring to a specific function like
   v4l2_imgbus_bytes_per_line or to the whole v4l2-imagebus.c?
 
  I'm referring to the whole file.
 
   The whole file and the
   v4l2_imgbus_get_fmtdesc() function must be available to all drivers,
   not just to soc-camera, if we want to use {enum,g,s,try}_imgbus_fmt
   API in other drivers too, and we do want to use them, if we want to
   re-use client drivers.
 
  The sub-device drivers do not need this source. They just need to
  report the supported image bus formats. And I am far from convinced
  that other bridge drivers can actually reuse your v4l2-imagebus.c code.

 You mean, all non-soc-camera bridge drivers only handle special client
 formats, no generic pass-through?

That's correct. It's never been a problem until now. Usually the format is
fixed, so there is nothing to configure.

 What about other SoC v4l host drivers,
 not using soc-camera, and willing to switch to v4l2-subdev? Like OMAPs,
 etc? I'm sure they would want to be able to use the pass-through mode

And if they can reuse your code, then we will rename it to v4l2-busimage.c

But I have my doubts about that. I don't like that code, but I also don't
have the time to think about a better alternative. As long as it is
soc-camera specific, then I don't mind. And if omap3 can reuse it, then I
clearly was wrong and we can rename it and make it part of the core
framework.

  If they can, then we can always rename it from e.g. soc-imagebus.c to
  v4l2-imagebus.c. Right now I prefer to keep it inside soc-camera where
  it clearly does work and when other people start implementing imagebus
  support, then we can refer them to the work you did in soc-camera and
  we'll see what happens.

 You know how it happens - some authors do not know about some hidden
 code, during the review no one realises that they are re-implementing
 that... Eventually you end up with duplicated customised sub-optimal
 code. Fresh example - the whole soc-camera framework:-) I only learned
 about int-device after soc-camera has already been submitted in its
 submission form. And I did ask on lists whether there was any code for
 such systems:-)

All the relevant omap developers are CC-ed in this discussion, and I'm also
paying fairly close attention to anything SoC related.

 I do not quite understand what disturbs you about making this API global.
 It is a completely internal API - no exposure to user-space. We can
 modify or remove it any time.

 Then think about wider exposure, testing. If you like we can make it a
 separate module and make soc-camera select it. And we can always degrade
 it back to soc-camera-specific:-)

Making this API global means that it becomes part of the framework. And I
want to pay a lot more attention to that code than we did in the past. So I
have to be convinced that it is code that is really reusable 

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-19 Thread Guennadi Liakhovetski
Hi Hans

On Sun, 15 Nov 2009, Hans Verkuil wrote:

[snip]

+s32 v4l2_imgbus_bytes_per_line(u32 width,
+  const struct v4l2_imgbus_pixelfmt *imgf)
+{
+	switch (imgf->packing) {
+	case V4L2_IMGBUS_PACKING_NONE:
+		return width * imgf->bits_per_sample / 8;
+   case V4L2_IMGBUS_PACKING_2X8_PADHI:
+   case V4L2_IMGBUS_PACKING_2X8_PADLO:
+   case V4L2_IMGBUS_PACKING_EXTEND16:
+   return width * 2;
+   }
+   return -EINVAL;
+}
+EXPORT_SYMBOL(v4l2_imgbus_bytes_per_line);
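For readers following along, the arithmetic in v4l2_imgbus_bytes_per_line() above can be tried out in a small user-space sketch. The enum names mirror the patch's V4L2_IMGBUS_PACKING_* values but are otherwise illustrative, and -1 stands in for -EINVAL:

```c
/*
 * User-space sketch of v4l2_imgbus_bytes_per_line(): samples packed
 * 1:1 need width * bits_per_sample / 8 bytes per line, while every
 * "two bytes per sample" packing needs width * 2.
 */
enum imgbus_packing {
	PACKING_NONE,		/* samples stored back-to-back */
	PACKING_2X8_PADHI,	/* 2 bytes/sample, pad bits high */
	PACKING_2X8_PADLO,	/* 2 bytes/sample, pad bits low */
	PACKING_EXTEND16,	/* sample extended to 16 bits */
};

static long bytes_per_line(unsigned int width, unsigned int bits_per_sample,
			   enum imgbus_packing packing)
{
	switch (packing) {
	case PACKING_NONE:
		return (long)width * bits_per_sample / 8;
	case PACKING_2X8_PADHI:
	case PACKING_2X8_PADLO:
	case PACKING_EXTEND16:
		return (long)width * 2;
	}
	return -1;	/* stands in for -EINVAL */
}
```

So a 640-pixel line of 10-bit data in any of the two-byte packings occupies 1280 bytes, regardless of which bridge copies it.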
   
   As you know, I am not convinced that this code belongs in the core. I do
   not think this translation from IMGBUS to PIXFMT is generic enough.
   However, if you just make this part of soc-camera then I am OK with this.
  
  Are you referring to a specific function like v4l2_imgbus_bytes_per_line 
  or to the whole v4l2-imagebus.c?
 
 I'm referring to the whole file.
 
  The whole file and the  
  v4l2_imgbus_get_fmtdesc() function must be available to all drivers, not 
  just to soc-camera, if we want to use {enum,g,s,try}_imgbus_fmt API in 
  other drivers too, and we do want to use them, if we want to re-use client 
  drivers.
 
 The sub-device drivers do not need this source. They just need to report
 the supported image bus formats. And I am far from convinced that other bridge
 drivers can actually reuse your v4l2-imagebus.c code.

You mean, all non-soc-camera bridge drivers only handle special client 
formats, no generic pass-through? What about other SoC v4l host drivers, 
not using soc-camera, and willing to switch to v4l2-subdev? Like OMAPs, 
etc? I'm sure they would want to be able to use the pass-through mode

 If they can, then we can always rename it from e.g. soc-imagebus.c to
 v4l2-imagebus.c. Right now I prefer to keep it inside soc-camera where it
 clearly does work and when other people start implementing imagebus support,
 then we can refer them to the work you did in soc-camera and we'll see what
 happens.

You know how it happens - some authors do not know about some hidden code, 
during the review no one realises that they are re-implementing that... 
Eventually you end up with duplicated customised sub-optimal code. Fresh 
example - the whole soc-camera framework:-) I only learned about 
int-device after soc-camera has already been submitted in its submission 
form. And I did ask on lists whether there was any code for such 
systems:-)

I do not quite understand what disturbs you about making this API global. 
It is a completely internal API - no exposure to user-space. We can modify 
or remove it any time.

Then think about wider exposure, testing. If you like we can make it a 
separate module and make soc-camera select it. And we can always degrade 
it back to soc-camera-specific:-)

   One other comment to throw into the pot: what about calling this just
   V4L2_BUS_FMT...? So imgbus becomes just bus. For some reason I find
   imgbus a bit odd. Probably because I think of it more as a video bus or
   even as a more general data bus. For all I know it might be used in the
   future to choose between different types of histogram data or something
   like that.
  
  It might well be not the best namespace choice. But just "bus" OTOH seems
  way too generic to me. Maybe some "(multi)mediabus"? Or is even that too
  generic? It certainly depends on the scope which we foresee for this API.
 
 Hmm, I like that: 'mediabus'. Much better IMHO than imagebus. Image bus is
 too specific to sensor, I think. Media bus is more generic (also for video
 and audio formats), but it still clearly refers to the media data flowing
 over the bus rather than e.g. control data.

Well, do we really think it might ever become relevant for audio? We're 
having problems adopting it generically for video even:-)

+   V4L2_IMGBUS_FMT_YUYV,
+   V4L2_IMGBUS_FMT_YVYU,
+   V4L2_IMGBUS_FMT_UYVY,
+   V4L2_IMGBUS_FMT_VYUY,
+   V4L2_IMGBUS_FMT_VYUY_SMPTE170M_8,
+   V4L2_IMGBUS_FMT_VYUY_SMPTE170M_16,
+   V4L2_IMGBUS_FMT_RGB555,
+   V4L2_IMGBUS_FMT_RGB555X,
+   V4L2_IMGBUS_FMT_RGB565,
+   V4L2_IMGBUS_FMT_RGB565X,
+   V4L2_IMGBUS_FMT_SBGGR8,
+   V4L2_IMGBUS_FMT_SGBRG8,
+   V4L2_IMGBUS_FMT_SGRBG8,
+   V4L2_IMGBUS_FMT_SRGGB8,
+   V4L2_IMGBUS_FMT_SBGGR10,
+   V4L2_IMGBUS_FMT_SGBRG10,
+   V4L2_IMGBUS_FMT_SGRBG10,
+   V4L2_IMGBUS_FMT_SRGGB10,
+   V4L2_IMGBUS_FMT_GREY,
+   V4L2_IMGBUS_FMT_Y16,
+   V4L2_IMGBUS_FMT_Y10,
+   V4L2_IMGBUS_FMT_SBGGR10_2X8_PADHI_BE,
+   V4L2_IMGBUS_FMT_SBGGR10_2X8_PADLO_BE,
+   V4L2_IMGBUS_FMT_SBGGR10_2X8_PADHI_LE,
+   V4L2_IMGBUS_FMT_SBGGR10_2X8_PADLO_LE,
   
   Obviously the meaning of these formats needs to be documented in this
   header as well. Are all these imgbus formats used?

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-15 Thread Hans Verkuil
Hi Guennadi,

Apologies for the late reply, but better late than never :-)

On Thursday 12 November 2009 09:08:42 Guennadi Liakhovetski wrote:
 Hi Hans
 
 Thanks for the review
 
 On Wed, 11 Nov 2009, Hans Verkuil wrote:
 
  Hi Guennadi,
  
  OK, I've looked at this again. See my comments below.
  
  On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
    Video subdevices, like cameras, decoders, connect to video bridges over
    specialised busses. Data is being transferred over these busses in
    various formats, which only loosely correspond to fourcc codes,
    describing how video data is stored in RAM. This is not a one-to-one
    correspondence, therefore we cannot use fourcc codes to configure
    subdevice output data formats. This patch adds codes for several such
    on-the-bus formats and an API, similar to the familiar .s_fmt(),
    .g_fmt(), .try_fmt(), .enum_fmt() API for configuring those codes.
    After all users of the old API in struct v4l2_subdev_video_ops are
    converted, the API will be removed.
   
   Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
   ---
drivers/media/video/Makefile|2 +-
drivers/media/video/v4l2-imagebus.c |  218 
   +++
include/media/v4l2-imagebus.h   |   84 ++
include/media/v4l2-subdev.h |   10 ++-
4 files changed, 312 insertions(+), 2 deletions(-)
create mode 100644 drivers/media/video/v4l2-imagebus.c
create mode 100644 include/media/v4l2-imagebus.h
   
   diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
   index 7a2dcc3..62d8907 100644
   --- a/drivers/media/video/Makefile
   +++ b/drivers/media/video/Makefile
   @@ -10,7 +10,7 @@ stkwebcam-objs  :=  stk-webcam.o stk-sensor.o

omap2cam-objs:=  omap24xxcam.o omap24xxcam-dma.o

   -videodev-objs:=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o
    +videodev-objs	:=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o v4l2-imagebus.o

# V4L2 core modules

   diff --git a/drivers/media/video/v4l2-imagebus.c 
   b/drivers/media/video/v4l2-imagebus.c
   new file mode 100644
   index 000..e0a3a83
   --- /dev/null
   +++ b/drivers/media/video/v4l2-imagebus.c
   @@ -0,0 +1,218 @@
   +/*
   + * Image Bus API
   + *
   + * Copyright (C) 2009, Guennadi Liakhovetski g.liakhovet...@gmx.de
   + *
   + * This program is free software; you can redistribute it and/or modify
   + * it under the terms of the GNU General Public License version 2 as
   + * published by the Free Software Foundation.
   + */
   +
    +#include <linux/kernel.h>
    +#include <linux/module.h>
    +
    +#include <media/v4l2-device.h>
    +#include <media/v4l2-imagebus.h>
   +
    +static const struct v4l2_imgbus_pixelfmt imgbus_fmt[] = {
    +	[V4L2_IMGBUS_FMT_YUYV] = {
    +		.fourcc			= V4L2_PIX_FMT_YUYV,
    +		.colorspace		= V4L2_COLORSPACE_JPEG,
    +		.name			= "YUYV",
    +		.bits_per_sample	= 8,
    +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
    +		.order			= V4L2_IMGBUS_ORDER_LE,
    +	}, [V4L2_IMGBUS_FMT_YVYU] = {
    +		.fourcc			= V4L2_PIX_FMT_YVYU,
    +		.colorspace		= V4L2_COLORSPACE_JPEG,
    +		.name			= "YVYU",
    +		.bits_per_sample	= 8,
    +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
    +		.order			= V4L2_IMGBUS_ORDER_LE,
    +	}, [V4L2_IMGBUS_FMT_UYVY] = {
    +		.fourcc			= V4L2_PIX_FMT_UYVY,
    +		.colorspace		= V4L2_COLORSPACE_JPEG,
    +		.name			= "UYVY",
    +		.bits_per_sample	= 8,
    +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
    +		.order			= V4L2_IMGBUS_ORDER_LE,
    +	}, [V4L2_IMGBUS_FMT_VYUY] = {
    +		.fourcc			= V4L2_PIX_FMT_VYUY,
    +		.colorspace		= V4L2_COLORSPACE_JPEG,
    +		.name			= "VYUY",
    +		.bits_per_sample	= 8,
    +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
    +		.order			= V4L2_IMGBUS_ORDER_LE,
    +	}, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_8] = {
    +		.fourcc			= V4L2_PIX_FMT_VYUY,
    +		.colorspace		= V4L2_COLORSPACE_SMPTE170M,
    +		.name			= "VYUY in SMPTE170M",
    +		.bits_per_sample	= 8,
    +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
    +		.order			= V4L2_IMGBUS_ORDER_LE,
    +	}, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_16] = {
    +		.fourcc			= V4L2_PIX_FMT_VYUY,
    +		.colorspace		= V4L2_COLORSPACE_SMPTE170M,
    +		.name			= "VYUY in SMPTE170M, 16bit",
    +		.bits_per_sample	= 16,
    +		.packing

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-12 Thread Guennadi Liakhovetski
Hi Hans

Thanks for the review

On Wed, 11 Nov 2009, Hans Verkuil wrote:

 Hi Guennadi,
 
 OK, I've looked at this again. See my comments below.
 
 On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
  Video subdevices, like cameras, decoders, connect to video bridges over
  specialised busses. Data is being transferred over these busses in various
  formats, which only loosely correspond to fourcc codes, describing how
  video data is stored in RAM. This is not a one-to-one correspondence,
  therefore we cannot use fourcc codes to configure subdevice output data
  formats. This patch adds codes for several such on-the-bus formats and an
  API, similar to the familiar .s_fmt(), .g_fmt(), .try_fmt(), .enum_fmt()
  API for configuring those codes. After all users of the old API in struct
  v4l2_subdev_video_ops are converted, the API will be removed.
  
  Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
  ---
   drivers/media/video/Makefile|2 +-
   drivers/media/video/v4l2-imagebus.c |  218 
  +++
   include/media/v4l2-imagebus.h   |   84 ++
   include/media/v4l2-subdev.h |   10 ++-
   4 files changed, 312 insertions(+), 2 deletions(-)
   create mode 100644 drivers/media/video/v4l2-imagebus.c
   create mode 100644 include/media/v4l2-imagebus.h
  
  diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
  index 7a2dcc3..62d8907 100644
  --- a/drivers/media/video/Makefile
  +++ b/drivers/media/video/Makefile
  @@ -10,7 +10,7 @@ stkwebcam-objs:=  stk-webcam.o stk-sensor.o
   
   omap2cam-objs  :=  omap24xxcam.o omap24xxcam-dma.o
   
  -videodev-objs  :=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o
  +videodev-objs	:=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o v4l2-imagebus.o
   
   # V4L2 core modules
   
  diff --git a/drivers/media/video/v4l2-imagebus.c 
  b/drivers/media/video/v4l2-imagebus.c
  new file mode 100644
  index 000..e0a3a83
  --- /dev/null
  +++ b/drivers/media/video/v4l2-imagebus.c
  @@ -0,0 +1,218 @@
  +/*
  + * Image Bus API
  + *
  + * Copyright (C) 2009, Guennadi Liakhovetski g.liakhovet...@gmx.de
  + *
  + * This program is free software; you can redistribute it and/or modify
  + * it under the terms of the GNU General Public License version 2 as
  + * published by the Free Software Foundation.
  + */
  +
  +#include <linux/kernel.h>
  +#include <linux/module.h>
  +
  +#include <media/v4l2-device.h>
  +#include <media/v4l2-imagebus.h>
  +
  +static const struct v4l2_imgbus_pixelfmt imgbus_fmt[] = {
  +	[V4L2_IMGBUS_FMT_YUYV] = {
  +		.fourcc			= V4L2_PIX_FMT_YUYV,
  +		.colorspace		= V4L2_COLORSPACE_JPEG,
  +		.name			= "YUYV",
  +		.bits_per_sample	= 8,
  +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
  +		.order			= V4L2_IMGBUS_ORDER_LE,
  +	}, [V4L2_IMGBUS_FMT_YVYU] = {
  +		.fourcc			= V4L2_PIX_FMT_YVYU,
  +		.colorspace		= V4L2_COLORSPACE_JPEG,
  +		.name			= "YVYU",
  +		.bits_per_sample	= 8,
  +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
  +		.order			= V4L2_IMGBUS_ORDER_LE,
  +	}, [V4L2_IMGBUS_FMT_UYVY] = {
  +		.fourcc			= V4L2_PIX_FMT_UYVY,
  +		.colorspace		= V4L2_COLORSPACE_JPEG,
  +		.name			= "UYVY",
  +		.bits_per_sample	= 8,
  +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
  +		.order			= V4L2_IMGBUS_ORDER_LE,
  +	}, [V4L2_IMGBUS_FMT_VYUY] = {
  +		.fourcc			= V4L2_PIX_FMT_VYUY,
  +		.colorspace		= V4L2_COLORSPACE_JPEG,
  +		.name			= "VYUY",
  +		.bits_per_sample	= 8,
  +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
  +		.order			= V4L2_IMGBUS_ORDER_LE,
  +	}, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_8] = {
  +		.fourcc			= V4L2_PIX_FMT_VYUY,
  +		.colorspace		= V4L2_COLORSPACE_SMPTE170M,
  +		.name			= "VYUY in SMPTE170M",
  +		.bits_per_sample	= 8,
  +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
  +		.order			= V4L2_IMGBUS_ORDER_LE,
  +	}, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_16] = {
  +		.fourcc			= V4L2_PIX_FMT_VYUY,
  +		.colorspace		= V4L2_COLORSPACE_SMPTE170M,
  +		.name			= "VYUY in SMPTE170M, 16bit",
  +		.bits_per_sample	= 16,
  +		.packing		= V4L2_IMGBUS_PACKING_NONE,
  +		.order			= V4L2_IMGBUS_ORDER_LE,
  +	}, [V4L2_IMGBUS_FMT_RGB555] = {
  +		.fourcc

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-10 Thread Guennadi Liakhovetski
On Tue, 10 Nov 2009, Laurent Pinchart wrote:

 On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
  Video subdevices, like cameras, decoders, connect to video bridges over
  specialised busses. Data is being transferred over these busses in various
  formats, which only loosely correspond to fourcc codes, describing how
   video data is stored in RAM. This is not a one-to-one correspondence,
   therefore we cannot use fourcc codes to configure subdevice output data
   formats. This patch adds codes for several such on-the-bus formats and an
   API, similar to the familiar .s_fmt(), .g_fmt(), .try_fmt(), .enum_fmt()
   API for configuring those codes. After all users of the old API in struct
   v4l2_subdev_video_ops are converted, the API will be removed.
 
 [snip]
 
  diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
  index 04193eb..1e86f39 100644
  --- a/include/media/v4l2-subdev.h
  +++ b/include/media/v4l2-subdev.h
 
 [snip]
 
  @@ -206,6 +207,8 @@ struct v4l2_subdev_audio_ops {
  
  s_routing: see s_routing in audio_ops, except this version is for video
  devices.
  +
  +   enum_imgbus_fmt: enumerate pixel formats provided by a video data
 
 Do we need to make that dynamic (and O(n)), or could we use a static array
 of image bus frame formats?

The current soc-camera uses a static array. It works for its users, but I 
do not know about others.
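The static-array approach mentioned above can be sketched in a few lines; the names below are illustrative, not the actual soc-camera or subdev API:

```c
#include <stddef.h>

/*
 * Sketch of the pattern soc-camera uses: the subdev keeps a static
 * table of the bus codes it can emit, and the enumeration callback
 * just indexes it.  -1 stands in for -EINVAL.
 */
enum pixelcode { CODE_YUYV, CODE_UYVY, CODE_SBGGR10 };

static const enum pixelcode sensor_codes[] = {
	CODE_YUYV, CODE_UYVY, CODE_SBGGR10,
};

/* O(1) per call; fails once the index runs past the table. */
static int enum_imgbus_fmt(int index, enum pixelcode *code)
{
	if (index < 0 ||
	    (size_t)index >= sizeof(sensor_codes) / sizeof(sensor_codes[0]))
		return -1;
	*code = sensor_codes[index];
	return 0;
}
```

A caller enumerates by looping from index 0 until the call fails, which is exactly the O(n) enumeration being questioned above.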

   source */
   struct v4l2_subdev_video_ops {
  int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32
   config); @@ -227,6 +230,11 @@ struct v4l2_subdev_video_ops {
  int (*s_crop)(struct v4l2_subdev *sd, struct v4l2_crop *crop);
  int (*g_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
  int (*s_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
  +   int (*enum_imgbus_fmt)(struct v4l2_subdev *sd, int index,
  +  enum v4l2_imgbus_pixelcode *code);
  +   int (*g_imgbus_fmt)(struct v4l2_subdev *sd, struct v4l2_imgbus_framefmt
   *fmt);
  +   int (*try_imgbus_fmt)(struct v4l2_subdev *sd, struct 
  v4l2_imgbus_framefmt
   *fmt);
  +   int (*s_imgbus_fmt)(struct v4l2_subdev *sd, struct v4l2_imgbus_framefmt
  *fmt); };
 
 Obviously those calls will need to be moved to pad operations later.

Right.

 They will 
 be exposed to userspace through ioctls on the media controller device and/or 
 the subdevices, so the v4l2_imgbus_pixelcode type shouldn't be an enum.

Ok, will use __u32 for it then, just as for all other enum types...
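The point being agreed on here can be shown with a hedged sketch: once the ops are exposed through ioctls, the format code must be a fixed-width integer, because the size of a C enum is not part of a stable ABI. The struct layout below is illustrative, not the actual v4l2_imgbus_framefmt definition (uint32_t stands in for the kernel's __u32):

```c
#include <stdint.h>

/*
 * Illustrative user-space-visible frame format: every field is a
 * fixed-width integer, so the layout is identical for all compilers
 * and architectures of the same word order.
 */
struct imgbus_framefmt {
	uint32_t width;
	uint32_t height;
	uint32_t code;	/* a V4L2_IMGBUS_FMT_* value, not an enum field */
};
```

Had `code` been declared as an enum, its size (and thus the struct layout) could legally differ between compilers, breaking the ioctl ABI.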

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-10 Thread Hans Verkuil
Hi Guennadi,

OK, I've looked at this again. See my comments below.

On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
 Video subdevices, like cameras, decoders, connect to video bridges over
 specialised busses. Data is being transferred over these busses in various
 formats, which only loosely correspond to fourcc codes, describing how video
 data is stored in RAM. This is not a one-to-one correspondence, therefore we
 cannot use fourcc codes to configure subdevice output data formats. This patch
 adds codes for several such on-the-bus formats and an API, similar to the
 familiar .s_fmt(), .g_fmt(), .try_fmt(), .enum_fmt() API for configuring those
 codes. After all users of the old API in struct v4l2_subdev_video_ops are
 converted, the API will be removed.
 
 Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
 ---
  drivers/media/video/Makefile|2 +-
  drivers/media/video/v4l2-imagebus.c |  218 
 +++
  include/media/v4l2-imagebus.h   |   84 ++
  include/media/v4l2-subdev.h |   10 ++-
  4 files changed, 312 insertions(+), 2 deletions(-)
  create mode 100644 drivers/media/video/v4l2-imagebus.c
  create mode 100644 include/media/v4l2-imagebus.h
 
 diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
 index 7a2dcc3..62d8907 100644
 --- a/drivers/media/video/Makefile
 +++ b/drivers/media/video/Makefile
 @@ -10,7 +10,7 @@ stkwebcam-objs  :=  stk-webcam.o stk-sensor.o
  
  omap2cam-objs:=  omap24xxcam.o omap24xxcam-dma.o
  
 -videodev-objs:=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o
 +videodev-objs	:=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o v4l2-imagebus.o
  
  # V4L2 core modules
  
 diff --git a/drivers/media/video/v4l2-imagebus.c 
 b/drivers/media/video/v4l2-imagebus.c
 new file mode 100644
 index 000..e0a3a83
 --- /dev/null
 +++ b/drivers/media/video/v4l2-imagebus.c
 @@ -0,0 +1,218 @@
 +/*
 + * Image Bus API
 + *
 + * Copyright (C) 2009, Guennadi Liakhovetski g.liakhovet...@gmx.de
 + *
 + * This program is free software; you can redistribute it and/or modify
 + * it under the terms of the GNU General Public License version 2 as
 + * published by the Free Software Foundation.
 + */
 +
 +#include <linux/kernel.h>
 +#include <linux/module.h>
 +
 +#include <media/v4l2-device.h>
 +#include <media/v4l2-imagebus.h>
 +
 +static const struct v4l2_imgbus_pixelfmt imgbus_fmt[] = {
 +	[V4L2_IMGBUS_FMT_YUYV] = {
 +		.fourcc			= V4L2_PIX_FMT_YUYV,
 +		.colorspace		= V4L2_COLORSPACE_JPEG,
 +		.name			= "YUYV",
 +		.bits_per_sample	= 8,
 +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
 +		.order			= V4L2_IMGBUS_ORDER_LE,
 +	}, [V4L2_IMGBUS_FMT_YVYU] = {
 +		.fourcc			= V4L2_PIX_FMT_YVYU,
 +		.colorspace		= V4L2_COLORSPACE_JPEG,
 +		.name			= "YVYU",
 +		.bits_per_sample	= 8,
 +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
 +		.order			= V4L2_IMGBUS_ORDER_LE,
 +	}, [V4L2_IMGBUS_FMT_UYVY] = {
 +		.fourcc			= V4L2_PIX_FMT_UYVY,
 +		.colorspace		= V4L2_COLORSPACE_JPEG,
 +		.name			= "UYVY",
 +		.bits_per_sample	= 8,
 +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
 +		.order			= V4L2_IMGBUS_ORDER_LE,
 +	}, [V4L2_IMGBUS_FMT_VYUY] = {
 +		.fourcc			= V4L2_PIX_FMT_VYUY,
 +		.colorspace		= V4L2_COLORSPACE_JPEG,
 +		.name			= "VYUY",
 +		.bits_per_sample	= 8,
 +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
 +		.order			= V4L2_IMGBUS_ORDER_LE,
 +	}, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_8] = {
 +		.fourcc			= V4L2_PIX_FMT_VYUY,
 +		.colorspace		= V4L2_COLORSPACE_SMPTE170M,
 +		.name			= "VYUY in SMPTE170M",
 +		.bits_per_sample	= 8,
 +		.packing		= V4L2_IMGBUS_PACKING_2X8_PADHI,
 +		.order			= V4L2_IMGBUS_ORDER_LE,
 +	}, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_16] = {
 +		.fourcc			= V4L2_PIX_FMT_VYUY,
 +		.colorspace		= V4L2_COLORSPACE_SMPTE170M,
 +		.name			= "VYUY in SMPTE170M, 16bit",
 +		.bits_per_sample	= 16,
 +		.packing		= V4L2_IMGBUS_PACKING_NONE,
 +		.order			= V4L2_IMGBUS_ORDER_LE,
 +	}, [V4L2_IMGBUS_FMT_RGB555] = {
 +		.fourcc			= V4L2_PIX_FMT_RGB555,
 +		.colorspace		= V4L2_COLORSPACE_SRGB,

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-06 Thread Hans Verkuil

 On Fri, 6 Nov 2009, Hans Verkuil wrote:

 On Thursday 05 November 2009 19:56:04 Guennadi Liakhovetski wrote:
  On Thu, 5 Nov 2009, Hans Verkuil wrote:
 
   On Thursday 05 November 2009 17:51:50 Guennadi Liakhovetski wrote:
On Thu, 5 Nov 2009, Hans Verkuil wrote:
   
     On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
      Video subdevices, like cameras, decoders, connect to video bridges
      over specialised busses. Data is being transferred over these busses
      in various formats, which only loosely correspond to fourcc codes,
      describing how video data is stored in RAM. This is not a one-to-one
      correspondence, therefore we cannot use fourcc codes to configure
      subdevice output data formats. This patch adds codes for several such
      on-the-bus formats and an API, similar to the familiar .s_fmt(),
      .g_fmt(), .try_fmt(), .enum_fmt() API for configuring those codes.
      After all users of the old API in struct v4l2_subdev_video_ops are
      converted, the API will be removed.

     OK, this seems to completely disregard points raised in my earlier bus
     and data format negotiation RFC, which is available here once
     www.mail-archive.org is working again:

     http://www.mail-archive.com/linux-media%40vger.kernel.org/msg09644.html

     BTW, ignore the 'Video timings' section of that RFC. That part is
     wrong.

     The big problem I have with this proposal is the unholy mixing of bus
     and memory formatting. That should be completely separated. Only the
     bridge knows how a bus format can be converted into which memory
     (pixel) formats.

    Please, explain why only the bridge knows about that.

    My model is the following:

    1. we define various data formats on the bus. Each such format
    variation gets a unique identification.

    2. given a data format ID the data format is perfectly defined. This
    means, you do not have to have a special knowledge about this specific
    format to be able to handle it in some _generic_ way. A typical such
    generic handling on a bridge is, for instance, copying the data into
    memory one-to-one. For example, if a sensor delivers 10 bit monochrome
    data over an eight bit bus as follows

    y7 y6 y5 y4 y3 y2 y1 y0   xx xx xx xx xx xx y9 y8 ...

    then _any_ bridge, capable of just copying data from the bus bytewise
    into RAM will be able to produce little-endian 10-bit grey pixel format
    in RAM. This handling is _not_ bridge specific. This is what I call
    packing.

   Of course it is bridge dependent. It is the bridge that takes data from
   the bus and puts it in memory. In many cases that is done very simply by
   bytewise copying. Other bridges can do RGB to YUV or vice versa
   conversions or can do endianness conversion or can do JPEG/MPEG
   compression on the fly or whatever else hardware designers will think
   of.

   It's no doubt true for the SoCs you have been working with, but it is
   not so simple in general.

  Ok, I forgot to mention one more point in the model:

  4. Each bridge has _two_ ways to process data: data-format-specific and
  generic (pass-through). It's the _former_ one that is bridge specific,
  quite right! For a bridge to be able to process a data format in a
  _special_ way, it doesn't need v4l2_imgbus_pixelfmt; that is only needed
  for data-formats that bridges do _not_ know specifically. In that
  _generic_ case it is not bridge-specific and a bridge driver can just
  look into the respective v4l2_imgbus_pixelfmt descriptor.

  Consider the following: a bridge can process N formats in a specific way.
  It knows which bits in which order represent which colours, etc. In such
  a case you just tell the driver "format X" and that's all it has to know
  about it to be able to handle it.

  The sensor, connected to the bridge, can also provide format Y, which the
  bridge doesn't know about. So what, there's then no way to use that
  format? Or do we have to add a _special_ handling rule for each format to
  each bridge driver?...

    3. Therefore, each bridge, capable of handling of some generic data
    using some specific packing, can perfectly look through data-format
    descriptors, see if it finds any with the supported packing, and if so,
    it _then_ knows that it can use that specific data format and the
    specific packing to produce the resulting pixel format from the format
    descriptor.

     A bus format is also separate from the colorspace: that is an
     independent piece of data.

    Sure. TBH, I do not quite see how enum v4l2_colorspace is actually
    used. Is it uniquely defined by each pixel format? So, it can be
    derived from that? Then it is indeed redundant. Can drop, don't care
    about it that much.

   It's independent from the pixel format. So the same pixel (or bus)
   format can have different colorspaces.

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-05 Thread Guennadi Liakhovetski
On Thu, 5 Nov 2009, Hans Verkuil wrote:

 On Thursday 05 November 2009 17:51:50 Guennadi Liakhovetski wrote:
  On Thu, 5 Nov 2009, Hans Verkuil wrote:
  
   On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
    Video subdevices, like cameras, decoders, connect to video bridges over
    specialised busses. Data is being transferred over these busses in
    various formats, which only loosely correspond to fourcc codes,
    describing how video data is stored in RAM. This is not a one-to-one
    correspondence, therefore we cannot use fourcc codes to configure
    subdevice output data formats. This patch adds codes for several such
    on-the-bus formats and an API, similar to the familiar .s_fmt(),
    .g_fmt(), .try_fmt(), .enum_fmt() API for configuring those codes.
    After all users of the old API in struct v4l2_subdev_video_ops are
    converted, the API will be removed.
   
   OK, this seems to completely disregard points raised in my earlier bus
   and data format negotiation RFC, which is available here once
   www.mail-archive.org is working again:
   
   http://www.mail-archive.com/linux-media%40vger.kernel.org/msg09644.html
   
   BTW, ignore the 'Video timings' section of that RFC. That part is wrong.
   
   The big problem I have with this proposal is the unholy mixing of bus and
   memory formatting. That should be completely separated. Only the bridge
   knows how a bus format can be converted into which memory (pixel) formats.
  
  Please, explain why only the bridge knows about that.
  
  My model is the following:
  
  1. we define various data formats on the bus. Each such format variation 
  gets a unique identification.
  
  2. given a data format ID the data format is perfectly defined. This 
  means, you do not have to have a special knowledge about this specific 
  format to be able to handle it in some _generic_ way. A typical such 
  generic handling on a bridge is, for instance, copying the data into 
  memory one-to-one. For example, if a sensor delivers 10 bit monochrome 
  data over an eight bit bus as follows
  
  y7 y6 y5 y4 y3 y2 y1 y0   xx xx xx xx xx xx y9 y8 ...
  
  then _any_ bridge, capable of just copying data from the bus bytewise into 
  RAM will be able to produce little-endian 10-bit grey pixel format in RAM. 
  This handling is _not_ bridge specific. This is what I call packing.
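
[Editorial note: a minimal, hypothetical C sketch of the packing described
above; it is not from the patch, and the function name is illustrative. It
models what a bytewise-copying bridge effectively produces for the
10-bit-over-8-bit example.]

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Hypothetical illustration of the 2X8_PADHI packing for 10-bit
 * monochrome data on an 8-bit bus: the first bus byte carries y7..y0,
 * the second carries y9 y8 in its two LSBs (upper bits are padding).
 * A bridge that merely copies bus bytes into RAM produces exactly this
 * little-endian 16-bit layout; the loop below just models that copy.
 */
static void y10_padhi_to_grey16(const uint8_t *bus, uint16_t *ram,
				size_t pixels)
{
	size_t i;

	for (i = 0; i < pixels; i++)
		ram[i] = bus[2 * i] |
			 ((uint16_t)(bus[2 * i + 1] & 0x03) << 8);
}
```

On a little-endian host the resulting uint16_t array is byte-for-byte
identical to the raw bus stream, which is why this case needs no
bridge-specific knowledge.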
 
 Of course it is bridge dependent. It is the bridge that takes data from the
 bus and puts it in memory. In many cases that is done very simply by bytewise
 copying. Other bridges can do RGB to YUV or vice versa conversions or can do
 endianness conversion or can do JPEG/MPEG compression on the fly or whatever
 else hardware designers will think of.
 
 It's no doubt true for the SoCs you have been working with, but it is not so
 simple in general.

Ok, I forgot to mention one more point in the model:

4. Each bridge has _two_ ways to process data: data-format-specific and 
generic (pass-through). It's the _former_ one that is bridge specific, 
quite right! To process a data format in a _special_ way a bridge doesn't 
need v4l2_imgbus_pixelfmt; the descriptor is only needed for data formats 
that the bridge does _not_ know specifically. In that _generic_ case the 
handling is not bridge-specific, and a bridge driver can just look into 
the respective v4l2_imgbus_pixelfmt descriptor.

Consider the following: a bridge can process N formats in a specific way. 
It knows which bits in which order represent which colours, etc. In such a 
case you just tell the driver format X and that's all it has to know 
about it to be able to handle it.

The sensor, connected to the bridge, can also provide format Y, which the 
bridge doesn't know about. So what, there's then no way to use that 
format? Or do we have to add a _special_ handling rule for each format to 
each bridge driver?...

  3. Therefore, each bridge, capable of handling of some generic data 
  using some specific packing, can perfectly look through data-format 
  descriptors, see if it finds any with the supported packing, and if so, it 
  _then_ knows, that it can use that specific data format and the specific 
  packing to produce the resulting pixel format from the format descriptor.
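
[Editorial note: point 3 could look roughly like the following sketch. The
names imgbus_desc, find_generic_fmt and the packing values are loosely
modelled on the patch but are illustrative, not the actual API.]

```c
#include <stddef.h>

/* Illustrative stand-ins for the packings the patch defines. */
enum imgbus_packing {
	PACKING_NONE,
	PACKING_2X8_PADHI,
	PACKING_2X8_PADLO,
	PACKING_EXTEND16,
};

struct imgbus_desc {
	unsigned int code;		/* bus data format ID */
	enum imgbus_packing packing;	/* how samples land in RAM */
};

/*
 * Generic fallback: scan the sensor's format descriptors and return the
 * first one whose packing the bridge can handle by plain copying, or
 * NULL if the bridge would have to know the format specifically.
 */
static const struct imgbus_desc *
find_generic_fmt(const struct imgbus_desc *descs, size_t n_descs,
		 const enum imgbus_packing *supported, size_t n_supported)
{
	size_t i, j;

	for (i = 0; i < n_descs; i++)
		for (j = 0; j < n_supported; j++)
			if (descs[i].packing == supported[j])
				return &descs[i];
	return NULL;
}
```

The bridge driver only has to declare which packings it supports; everything
else comes from the format descriptor table.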
  
   A bus format is also separate from the colorspace: that is an independent
   piece of data.
  
  Sure. TBH, I do not quite see how enum v4l2_colorspace is actually used.
  Is it uniquely defined by each pixel format? So, it can be derived from
  that? Then it is indeed redundant. Can drop, don't care about it that much.
 
 It's independent from the pixel format. So the same pixel (or bus) format can
 have different colorspaces.

Then I do not understand what a colourspace means in the v4l context. You
mean a YUV format can belong to a JPEG, or an sRGB space?...

   Personally I would just keep using v4l2_pix_format, except
   that the fourcc field refers to a bus image format rather 

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-05 Thread Hans Verkuil
On Thursday 05 November 2009 19:56:04 Guennadi Liakhovetski wrote:
 On Thu, 5 Nov 2009, Hans Verkuil wrote:
 
  On Thursday 05 November 2009 17:51:50 Guennadi Liakhovetski wrote:
   On Thu, 5 Nov 2009, Hans Verkuil wrote:
   
On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
 Video subdevices, like cameras, decoders, connect to video bridges over
 specialised busses. Data is being transferred over these busses in various
 formats, which only loosely correspond to fourcc codes, describing how video
 data is stored in RAM. This is not a one-to-one correspondence, therefore we
 cannot use fourcc codes to configure subdevice output data formats. This patch
 adds codes for several such on-the-bus formats and an API, similar to the
 familiar .s_fmt(), .g_fmt(), .try_fmt(), .enum_fmt() API, for configuring
 those codes. After all users of the old API in struct v4l2_subdev_video_ops
 are converted, the API will be removed.

OK, this seems to completely disregard points raised in my earlier bus and
data format negotiation RFC, which is available here once
www.mail-archive.org is working again:

http://www.mail-archive.com/linux-media%40vger.kernel.org/msg09644.html

BTW, ignore the 'Video timings' section of that RFC. That part is wrong.

The big problem I have with this proposal is the unholy mixing of bus and
memory formatting. That should be completely separated. Only the bridge
knows how a bus format can be converted into which memory (pixel) formats.
   
   Please, explain why only the bridge knows about that.
   
   My model is the following:
   
   1. we define various data formats on the bus. Each such format variation 
   gets a unique identification.
   
   2. given a data format ID the data format is perfectly defined. This 
   means, you do not have to have a special knowledge about this specific 
   format to be able to handle it in some _generic_ way. A typical such 
   generic handling on a bridge is, for instance, copying the data into 
   memory one-to-one. For example, if a sensor delivers 10 bit monochrome 
   data over an eight bit bus as follows
   
   y7 y6 y5 y4 y3 y2 y1 y0   xx xx xx xx xx xx y9 y8 ...
   
    then _any_ bridge, capable of just copying data from the bus bytewise into
    RAM will be able to produce little-endian 10-bit grey pixel format in RAM.
    This handling is _not_ bridge specific. This is what I call packing.
  
  Of course it is bridge dependent. It is the bridge that takes data from the
  bus and puts it in memory. In many cases that is done very simply by bytewise
  copying. Other bridges can do RGB to YUV or vice versa conversions or can do
  endianness conversion or can do JPEG/MPEG compression on the fly or whatever
  else hardware designers will think of.
  
  It's no doubt true for the SoCs you have been working with, but it is not so
  simple in general.
 
 Ok, I forgot to mention one more point in the model:
 
 4. Each bridge has _two_ ways to process data: data-format-specific and 
 generic (pass-through). It's the _former_ one that is bridge specific, 
 quite right! For a bridge to be able to process a data format, that it can 
 process in a _special_ way, it doesn't need v4l2_imgbus_pixelfmt, it's 
 only for data-formats, that bridges do _not_ know specifically they need 
 it. In that _generic_ case it is not bridge-specific and a bridge driver 
 can just look into the respective v4l2_imgbus_pixelfmt descriptor.
 
 Consider the following: a bridge can process N formats in a specific way. 
 It knows which bits in which order represent which colours, etc. In such a 
 case you just tell the driver format X and that's all it has to know 
 about it to be able to handle it.
 
 The sensor, connected to the bridge, can also provide format Y, which the 
 bridge doesn't know about. So what, there's then no way to use that 
 format? Or do we have to add a _special_ handling rule for each format to 
 each bridge driver?...
 
   3. Therefore, each bridge, capable of handling of some generic data
   using some specific packing, can perfectly look through data-format
   descriptors, see if it finds any with the supported packing, and if so, it
   _then_ knows, that it can use that specific data format and the specific
   packing to produce the resulting pixel format from the format descriptor.
   
A bus format is also separate from the colorspace: that is an independent
piece of data.
    
    Sure. TBH, I do not quite see how enum v4l2_colorspace is actually used.
    Is it uniquely defined by each pixel format? So, it can be derived from
    that? Then it is indeed redundant. Can drop, don't care about it that much.
  
  It's independent from the pixel format. So the same pixel (or bus) format can
  have different colorspaces.
 
 Then I do not understand what a 

Re: [PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-11-05 Thread Guennadi Liakhovetski
On Fri, 6 Nov 2009, Hans Verkuil wrote:

 On Thursday 05 November 2009 19:56:04 Guennadi Liakhovetski wrote:
  On Thu, 5 Nov 2009, Hans Verkuil wrote:
  
   On Thursday 05 November 2009 17:51:50 Guennadi Liakhovetski wrote:
On Thu, 5 Nov 2009, Hans Verkuil wrote:

 On Friday 30 October 2009 15:01:27 Guennadi Liakhovetski wrote:
  Video subdevices, like cameras, decoders, connect to video bridges over
  specialised busses. Data is being transferred over these busses in various
  formats, which only loosely correspond to fourcc codes, describing how video
  data is stored in RAM. This is not a one-to-one correspondence, therefore we
  cannot use fourcc codes to configure subdevice output data formats. This
  patch adds codes for several such on-the-bus formats and an API, similar to
  the familiar .s_fmt(), .g_fmt(), .try_fmt(), .enum_fmt() API, for
  configuring those codes. After all users of the old API in struct
  v4l2_subdev_video_ops are converted, the API will be removed.
 
 OK, this seems to completely disregard points raised in my earlier bus and
 data format negotiation RFC, which is available here once
 www.mail-archive.org is working again:
 
 http://www.mail-archive.com/linux-media%40vger.kernel.org/msg09644.html
 
 BTW, ignore the 'Video timings' section of that RFC. That part is wrong.
 
 The big problem I have with this proposal is the unholy mixing of bus and
 memory formatting. That should be completely separated. Only the bridge
 knows how a bus format can be converted into which memory (pixel) formats.

Please, explain why only the bridge knows about that.

My model is the following:

1. we define various data formats on the bus. Each such format variation
gets a unique identification.

2. given a data format ID the data format is perfectly defined. This
means, you do not have to have a special knowledge about this specific
format to be able to handle it in some _generic_ way. A typical such
generic handling on a bridge is, for instance, copying the data into
memory one-to-one. For example, if a sensor delivers 10 bit monochrome
data over an eight bit bus as follows

y7 y6 y5 y4 y3 y2 y1 y0   xx xx xx xx xx xx y9 y8 ...

then _any_ bridge, capable of just copying data from the bus bytewise into
RAM will be able to produce little-endian 10-bit grey pixel format in RAM.
This handling is _not_ bridge specific. This is what I call packing.
   
    Of course it is bridge dependent. It is the bridge that takes data from
    the bus and puts it in memory. In many cases that is done very simply by
    bytewise copying. Other bridges can do RGB to YUV or vice versa
    conversions or can do endianness conversion or can do JPEG/MPEG
    compression on the fly or whatever else hardware designers will think of.
    
    It's no doubt true for the SoCs you have been working with, but it is not
    so simple in general.
  
  Ok, I forgot to mention one more point in the model:
  
  4. Each bridge has _two_ ways to process data: data-format-specific and 
  generic (pass-through). It's the _former_ one that is bridge specific, 
  quite right! For a bridge to be able to process a data format, that it can 
  process in a _special_ way, it doesn't need v4l2_imgbus_pixelfmt, it's 
  only for data-formats, that bridges do _not_ know specifically they need 
  it. In that _generic_ case it is not bridge-specific and a bridge driver 
  can just look into the respective v4l2_imgbus_pixelfmt descriptor.
  
  Consider the following: a bridge can process N formats in a specific way. 
  It knows which bits in which order represent which colours, etc. In such a 
  case you just tell the driver format X and that's all it has to know 
  about it to be able to handle it.
  
  The sensor, connected to the bridge, can also provide format Y, which the 
  bridge doesn't know about. So what, there's then no way to use that 
  format? Or do we have to add a _special_ handling rule for each format to 
  each bridge driver?...
  
3. Therefore, each bridge, capable of handling of some generic data
using some specific packing, can perfectly look through data-format
descriptors, see if it finds any with the supported packing, and if so, it
_then_ knows, that it can use that specific data format and the specific
packing to produce the resulting pixel format from the format descriptor.

 A bus format is also separate from the colorspace: that is an independent
 piece of data.

Sure. TBH, I do not quite see how enum v4l2_colorspace is actually used.
Is it uniquely defined by each pixel format? So, it can be derived from
that? Then it is indeed redundant. Can 

[PATCH/RFC 7/9 v2] v4l: add an image-bus API for configuring v4l2 subdev pixel and frame formats

2009-10-30 Thread Guennadi Liakhovetski
Video subdevices, like cameras, decoders, connect to video bridges over
specialised busses. Data is being transferred over these busses in various
formats, which only loosely correspond to fourcc codes, describing how video
data is stored in RAM. This is not a one-to-one correspondence, therefore we
cannot use fourcc codes to configure subdevice output data formats. This patch
adds codes for several such on-the-bus formats and an API, similar to the
familiar .s_fmt(), .g_fmt(), .try_fmt(), .enum_fmt() API for configuring those
codes. After all users of the old API in struct v4l2_subdev_video_ops are
converted, the API will be removed.

Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
---
 drivers/media/video/Makefile|2 +-
 drivers/media/video/v4l2-imagebus.c |  218 +++
 include/media/v4l2-imagebus.h   |   84 ++
 include/media/v4l2-subdev.h |   10 ++-
 4 files changed, 312 insertions(+), 2 deletions(-)
 create mode 100644 drivers/media/video/v4l2-imagebus.c
 create mode 100644 include/media/v4l2-imagebus.h

diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
index 7a2dcc3..62d8907 100644
--- a/drivers/media/video/Makefile
+++ b/drivers/media/video/Makefile
@@ -10,7 +10,7 @@ stkwebcam-objs:=  stk-webcam.o stk-sensor.o
 
 omap2cam-objs  :=  omap24xxcam.o omap24xxcam-dma.o
 
-videodev-objs  :=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o
+videodev-objs  :=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o v4l2-imagebus.o
 
 # V4L2 core modules
 
diff --git a/drivers/media/video/v4l2-imagebus.c b/drivers/media/video/v4l2-imagebus.c
new file mode 100644
index 0000000..e0a3a83
--- /dev/null
+++ b/drivers/media/video/v4l2-imagebus.c
@@ -0,0 +1,218 @@
+/*
+ * Image Bus API
+ *
+ * Copyright (C) 2009, Guennadi Liakhovetski g.liakhovet...@gmx.de
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+#include <media/v4l2-device.h>
+#include <media/v4l2-imagebus.h>
+
+static const struct v4l2_imgbus_pixelfmt imgbus_fmt[] = {
+   [V4L2_IMGBUS_FMT_YUYV] = {
+   .fourcc = V4L2_PIX_FMT_YUYV,
+   .colorspace = V4L2_COLORSPACE_JPEG,
+   .name   = "YUYV",
+   .bits_per_sample= 8,
+   .packing= V4L2_IMGBUS_PACKING_2X8_PADHI,
+   .order  = V4L2_IMGBUS_ORDER_LE,
+   }, [V4L2_IMGBUS_FMT_YVYU] = {
+   .fourcc = V4L2_PIX_FMT_YVYU,
+   .colorspace = V4L2_COLORSPACE_JPEG,
+   .name   = "YVYU",
+   .bits_per_sample= 8,
+   .packing= V4L2_IMGBUS_PACKING_2X8_PADHI,
+   .order  = V4L2_IMGBUS_ORDER_LE,
+   }, [V4L2_IMGBUS_FMT_UYVY] = {
+   .fourcc = V4L2_PIX_FMT_UYVY,
+   .colorspace = V4L2_COLORSPACE_JPEG,
+   .name   = "UYVY",
+   .bits_per_sample= 8,
+   .packing= V4L2_IMGBUS_PACKING_2X8_PADHI,
+   .order  = V4L2_IMGBUS_ORDER_LE,
+   }, [V4L2_IMGBUS_FMT_VYUY] = {
+   .fourcc = V4L2_PIX_FMT_VYUY,
+   .colorspace = V4L2_COLORSPACE_JPEG,
+   .name   = "VYUY",
+   .bits_per_sample= 8,
+   .packing= V4L2_IMGBUS_PACKING_2X8_PADHI,
+   .order  = V4L2_IMGBUS_ORDER_LE,
+   }, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_8] = {
+   .fourcc = V4L2_PIX_FMT_VYUY,
+   .colorspace = V4L2_COLORSPACE_SMPTE170M,
+   .name   = "VYUY in SMPTE170M",
+   .bits_per_sample= 8,
+   .packing= V4L2_IMGBUS_PACKING_2X8_PADHI,
+   .order  = V4L2_IMGBUS_ORDER_LE,
+   }, [V4L2_IMGBUS_FMT_VYUY_SMPTE170M_16] = {
+   .fourcc = V4L2_PIX_FMT_VYUY,
+   .colorspace = V4L2_COLORSPACE_SMPTE170M,
+   .name   = "VYUY in SMPTE170M, 16bit",
+   .bits_per_sample= 16,
+   .packing= V4L2_IMGBUS_PACKING_NONE,
+   .order  = V4L2_IMGBUS_ORDER_LE,
+   }, [V4L2_IMGBUS_FMT_RGB555] = {
+   .fourcc = V4L2_PIX_FMT_RGB555,
+   .colorspace = V4L2_COLORSPACE_SRGB,
+   .name   = RGB555,
+   .bits_per_sample= 8,
+   .packing=