Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-04 Thread Laurent Pinchart
Hi Chris,

On Thursday 04 May 2017 12:23:32 Chris Brandt wrote:
> On Wednesday, May 03, 2017, jmondi wrote:
> > I have a proposal here: as the original driver only supported "image
> > fetch mode" (i.e. it accepts data in YUYV with component ordering
> > arbitrarily swapped), as a first step we may want to replicate this, ignoring
> > data sync fetch mode (Chris, you have a driver for this you are already
> > using in your BSP, so I guess it's less urgent to support it, right?).
> 
> 
> My "driver" (if you can call it that) is basically 20 lines of code that
> sets up the registers for either "image capture" mode or "data fetch" mode.
> The main reason for the code was that the current CEU driver only supported
> "image capture", which resulted in separate Y and CbCr output buffers, and
> then the app would have to merge them back together, which was a waste of
> time (and memory).
> 
> My customers simply wanted the packed format
> that came out of the camera. So, I created the 20 lines of code and we
> abandoned the CEU driver in the kernel.
> 
> Also, the LCD controller (VDC5) can display a YCbCr frame buffer directly if
> it's packed...but not if it's separate Y/CbCr buffers.
> 
> Not to mention, if you just have a black and white camera (Y only), the
> Y/CbCr split function is totally useless and cuts your B&W image in half
> for no reason.
> 
> My point: You can do the "image capture" mode (CAMCR:JPG=0) first, but no
> one will actually use the driver until the "data fetch" mode (CAMCR:JPG=1)
> is implemented. 

Thanks for the feedback. I think that, given a YUV sensor, implementing 
support for both image capture and data fetch modes won't be difficult. 
Jacopo, are you OK with that? If you implement image capture mode first, data 
fetch mode should be just a small additional patch.
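The mode switch discussed above boils down to programming a single CAMCR field. A compilable sketch of that selection follows; the register offset, the JPG field position, and the helper names are illustrative assumptions, not taken from the RZ/A1H manual:

```c
#include <stdint.h>

/* Hypothetical layout: CEU_CAMCR offset and JPG field position are
 * assumptions for illustration only. */
#define CEU_CAMCR          0x00
#define CEU_CAMCR_JPG_MASK (0x3u << 4)
#define CEU_CAMCR_JPG(x)   (((uint32_t)(x) & 0x3u) << 4)

static uint32_t ceu_regs[64]; /* stand-in for the ioremapped register block */

static void ceu_write(unsigned int off, uint32_t val)
{
	ceu_regs[off / 4] = val;
}

static uint32_t ceu_read(unsigned int off)
{
	return ceu_regs[off / 4];
}

/* Select the capture mode: JPG = 0 is image capture mode (YUV conversion
 * to planar buffers), JPG = 1 is data fetch mode (packed data written to
 * memory as-is). */
static void ceu_set_mode(unsigned int jpg)
{
	uint32_t camcr = ceu_read(CEU_CAMCR);

	camcr &= ~CEU_CAMCR_JPG_MASK;
	camcr |= CEU_CAMCR_JPG(jpg);
	ceu_write(CEU_CAMCR, camcr);
}
```

The read-modify-write keeps the other CAMCR fields intact, which is what a real driver would need when toggling only the capture mode.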

-- 
Regards,

Laurent Pinchart



RE: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-04 Thread Chris Brandt
Hi Jacopo,

On Wednesday, May 03, 2017, jmondi wrote:
> I have a proposal here: as the original driver only supported "image fetch
> mode" (i.e. it accepts data in YUYV with component ordering arbitrarily
> swapped), as a first step we may want to replicate this, ignoring data
> sync fetch mode (Chris, you have a driver for this you are already using
> in your BSP, so I guess it's less urgent to support it, right?).

My "driver" (if you can call it that) is basically 20 lines of code that sets
up the registers for either "image capture" mode or "data fetch" mode. The main
reason for the code was that the current CEU driver only supported "image
capture", which resulted in separate Y and CbCr output buffers, and then the
app would have to merge them back together, which was a waste of time (and memory).

My customers simply wanted the packed format that came out of the camera.
So, I created the 20 lines of code and we abandoned the CEU driver in the 
kernel.

Also, the LCD controller (VDC5) can display a YCbCr frame buffer directly if
it's packed...but not if it's separate Y/CbCr buffers.

Not to mention, if you just have a black and white camera (Y only), the Y/CbCr
split function is totally useless and cuts your B&W image in half for no reason.


My point: You can do the "image capture" mode (CAMCR:JPG=0) first, but no one
will actually use the driver until the "data fetch" mode (CAMCR:JPG=1) is
implemented.


Chris



Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-03 Thread Laurent Pinchart
Hi Jacopo,

On Wednesday 03 May 2017 18:14:03 jmondi wrote:
> On Wed, May 03, 2017 at 06:06:19PM +0300, Laurent Pinchart wrote:
> > On Wednesday 03 May 2017 11:52:36 jmondi wrote:
> >> On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> >>> Hi Jacopo,
> >>> 
> >>> Thank you for the patch.
> >> 
> >> [snip]
> >> 
>  +/**
>  + * Collect formats supported by CEU and sensor subdevice
>  + */
>  +static int sh_ceu_init_active_formats(struct sh_ceu_dev *pcdev)
>  +{
>  +struct v4l2_subdev *sensor = pcdev->sensor;
>  +struct sh_ceu_fmt *active_fmts;
>  +unsigned int n_active_fmts;
>  +struct sh_ceu_fmt *fmt;
>  +unsigned int i;
>  +
>  +struct v4l2_subdev_mbus_code_enum mbus_code = {
>  +.which = V4L2_SUBDEV_FORMAT_ACTIVE,
>  +.index = 0,
>  +};
>  +
>  +/* Count how many formats are supported by CEU and sensor subdevice */
>  +n_active_fmts = 0;
>  +while (!v4l2_subdev_call(sensor, pad, enum_mbus_code,
>  + NULL, &mbus_code)) {
>  +mbus_code.index++;
>  +
>  +fmt = get_ceu_fmt_from_mbus(mbus_code.code);
> >>> 
> >>> This is not correct. You will get one pixel format per media bus
> >>> format supported by the sensor. The CEU supports
> >>> 
> >>> 1. converting packed YUV 4:2:2 (YUYV) to YUV 4:2:2 planar (CAMCR.JPG =
> >>> 00) or capturing raw data (CAMCR.JPG = 01)
> >>> 
> >>> 2. outputting YUV 4:2:2 planar to memory (CDOCR.CDS = 1) or
> >>> downsampling it to YUV 4:2:0 planar (CDOCR.CDS = 0)
> >>> 
> >>> 3. swapping bytes, words and long words at the output (CDOCR.COLS,
> >>> CDOCR.COWS and CDOCR.COBS fields)
> >>> 
> >>> This effectively makes it possible to
> >>> 
> >>> - capture the raw data produced by the sensor
> >>> - capture YUV images produced by the sensor in packed YUV 4:2:2
> >>> format, from any Y/U/V order to any Y/U/V order
> >>> - capture YUV images produced by the sensor in planar YUV 4:2:2 or
> >>> planar YUV 4:2:0, from any Y/U/V order to any Y/U/V order
> >>> 
> >>> The list of formats you create needs to reflect that. This means that
> >>> 
> >>> - if the sensor can produce a packed YUV 4:2:2 format, the CEU will be
> >>> able to output YUYV, UYVY, YVYU, VYUY, NV12, NV21, NV16 and NV61
> >>> 
> >>> - for every non-YUV format produced by the sensor, the CEU will be
> >>> able to output raw data
> >>> 
> >>> The former is easy. You just need to list the formats produced by the
> >>> sensor, and if one of them is packed YUV 4:2:2, flag that in the
> >>> sh_ceu_dev structure, and allow all the above output YUV formats. You
> >>> don't need to create an instance-specific list of those formats in the
> >>> sh_ceu_dev structure, as they will be either all available or all
> >>> non-available.
> >>> 
> >>> The latter is a bit more complex, you need a list of media bus code to
> >>> pixel format in the driver. I'd recommend counting the non-YUV packed
> >>> formats produced by the sensor, allocating an array of sh_ceu_fmt
> >>> pointers with that number of entries, and make each entry point to the
> >>> corresponding entry in the global sh_ceu_fmts array. The global
> >>> sh_ceu_fmts needs to be extended with common sensor formats (raw Bayer
> >>> and JPEG come to mind).
> >>> 
> >>> If all sensors currently used with the CEU can produce packed YUV, you
> >>> could implement YUV support only to start with, leaving raw capture
> >>> support for later.
> >> 
> >> Ok, thanks for explaining.
> >> 
> >> I have a proposal here: as the original driver only supported "image
> >> fetch mode" (i.e. it accepts data in YUYV with component ordering
> >> arbitrarily swapped), as a first step we may want to replicate this,
> >> ignoring data sync fetch mode (Chris, you have a driver for this you are
> >> already using in your BSP, so I guess it's less urgent to support it,
> >> right?).
> >> 
> >> Also, regarding output memory format I am sure we can produce planar
> >> formats as NV12/21 and NV16/61, but I'm not sure I have found a way to
> >> output packed YUYV (and its permutations) to memory, if not by considering
> >> it a binary format, and thus using data sync fetch mode.
> > 
> > As I understand it, outputting packed YUV is indeed done using data sync
> > (or enable, I'd have to check) fetch mode, and swapping components to
> > convert between different YUYV orders is done through the CDOCR.COLS,
> > CDOCR.COWS and CDOCR.COBS bits.
> 
> That's what I wanted to have confirmed, thanks.
> 
> >> So, as a first step my proposal is the following one:
> >> Accept any YUYV bus format from sensor, and support planar ones as
> >> memory output formats with down-sampling support (NV12/21 and NV16/61
> >> then).
> > 
> > You can start with that, but from the same YUV bus format, you should be
> > able to output packed YUV easily using data sync fetch mode. That can be
> > implemented as a second step, it will just result in extending the hardcoded
> > list of YUV pixel formats supported by the driver (as any YUV bus format can
> > produce any of the four NV variants and any of the four packed YUV
> > variants).

Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-03 Thread jmondi
Hi Laurent,

On Wed, May 03, 2017 at 06:06:19PM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> On Wednesday 03 May 2017 11:52:36 jmondi wrote:
> > On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> > > Hi Jacopo,
> > >
> > > Thank you for the patch.
> >
> > [snip]
> >
> > >> +/**
> > >> + * Collect formats supported by CEU and sensor subdevice
> > >> + */
> > >> +static int sh_ceu_init_active_formats(struct sh_ceu_dev *pcdev)
> > >> +{
> > >> +struct v4l2_subdev *sensor = pcdev->sensor;
> > >> +struct sh_ceu_fmt *active_fmts;
> > >> +unsigned int n_active_fmts;
> > >> +struct sh_ceu_fmt *fmt;
> > >> +unsigned int i;
> > >> +
> > >> +struct v4l2_subdev_mbus_code_enum mbus_code = {
> > >> +.which = V4L2_SUBDEV_FORMAT_ACTIVE,
> > >> +.index = 0,
> > >> +};
> > >> +
> > >> +/* Count how many formats are supported by CEU and sensor subdevice */
> > >> +n_active_fmts = 0;
> > >> +while (!v4l2_subdev_call(sensor, pad, enum_mbus_code,
> > >> + NULL, &mbus_code)) {
> > >> +mbus_code.index++;
> > >> +
> > >> +fmt = get_ceu_fmt_from_mbus(mbus_code.code);
> > >
> > > This is not correct. You will get one pixel format per media bus
> > > format supported by the sensor. The CEU supports
> > >
> > > 1. converting packed YUV 4:2:2 (YUYV) to YUV 4:2:2 planar (CAMCR.JPG = 00)
> > > or capturing raw data (CAMCR.JPG = 01)
> > >
> > > 2. outputting YUV 4:2:2 planar to memory (CDOCR.CDS = 1) or downsampling
> > > it to YUV 4:2:0 planar (CDOCR.CDS = 0)
> > >
> > > 3. swapping bytes, words and long words at the output (CDOCR.COLS,
> > > CDOCR.COWS and CDOCR.COBS fields)
> > >
> > > This effectively makes it possible to
> > >
> > > - capture the raw data produced by the sensor
> > > - capture YUV images produced by the sensor in packed YUV 4:2:2 format,
> > > from any Y/U/V order to any Y/U/V order
> > > - capture YUV images produced by the sensor in planar YUV 4:2:2 or planar
> > > YUV 4:2:0, from any Y/U/V order to any Y/U/V order
> > >
> > > The list of formats you create needs to reflect that. This means that
> > >
> > > - if the sensor can produce a packed YUV 4:2:2 format, the CEU will be
> > > able to output YUYV, UYVY, YVYU, VYUY, NV12, NV21, NV16 and NV61
> > >
> > > - for every non-YUV format produced by the sensor, the CEU will be able to
> > > output raw data
> > >
> > > The former is easy. You just need to list the formats produced by the
> > > sensor, and if one of them is packed YUV 4:2:2, flag that in the
> > > sh_ceu_dev structure, and allow all the above output YUV formats. You
> > > don't need to create an instance-specific list of those formats in the
> > > sh_ceu_dev structure, as they will be either all available or all
> > > non-available.
> > >
> > > The latter is a bit more complex, you need a list of media bus code to
> > > pixel format in the driver. I'd recommend counting the non-YUV packed
> > > formats produced by the sensor, allocating an array of sh_ceu_fmt
> > > pointers with that number of entries, and make each entry point to the
> > > corresponding entry in the global sh_ceu_fmts array. The global
> > > sh_ceu_fmts needs to be extended with common sensor formats (raw Bayer
> > > and JPEG come to mind).
> > >
> > > If all sensors currently used with the CEU can produce packed YUV, you
> > > could implement YUV support only to start with, leaving raw capture
> > > support for later.
> >
> > Ok, thanks for explaining.
> >
> > I have a proposal here: as the original driver only supported "image
> > fetch mode" (i.e. it accepts data in YUYV with component ordering arbitrarily
> > swapped), as a first step we may want to replicate this, ignoring data
> > sync fetch mode (Chris, you have a driver for this you are already
> > using in your BSP, so I guess it's less urgent to support it, right?).
> >
> > Also, regarding output memory format I am sure we can produce planar formats
> > as NV12/21 and NV16/61, but I'm not sure I have found a way to output
> > packed YUYV (and its permutations) to memory, if not by considering it a
> > binary format, and thus using data sync fetch mode.
>
> As I understand it, outputting packed YUV is indeed done using data sync (or
> enable, I'd have to check) fetch mode, and swapping components to convert
> between different YUYV orders is done through the CDOCR.COLS, CDOCR.COWS and
> CDOCR.COBS bits.

That's what I wanted to have confirmed, thanks.

>
> > So, as a first step my proposal is the following one:
> > Accept any YUYV bus format from sensor, and support planar ones as memory
> > output formats with down-sampling support (NV12/21 and NV16/61 then).
>
> You can start with that, but from the same YUV bus format, you should be able
> to output packed YUV easily using data sync fetch mode. That can be
> implemented as a second step, it will just result in extending the hardcoded
> list of YUV pixel formats supported by the driver (as any YUV bus format can
> produce any of the four NV variants and any of the four packed YUV variants).

Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-03 Thread Laurent Pinchart
Hi Jacopo,

On Wednesday 03 May 2017 11:52:36 jmondi wrote:
> On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> > Hi Jacopo,
> > 
> > Thank you for the patch.
> 
> [snip]
> 
> >> +/**
> >> + * Collect formats supported by CEU and sensor subdevice
> >> + */
> >> +static int sh_ceu_init_active_formats(struct sh_ceu_dev *pcdev)
> >> +{
> >> +  struct v4l2_subdev *sensor = pcdev->sensor;
> >> +  struct sh_ceu_fmt *active_fmts;
> >> +  unsigned int n_active_fmts;
> >> +  struct sh_ceu_fmt *fmt;
> >> +  unsigned int i;
> >> +
> >> +  struct v4l2_subdev_mbus_code_enum mbus_code = {
> >> +  .which = V4L2_SUBDEV_FORMAT_ACTIVE,
> >> +  .index = 0,
> >> +  };
> >> +
> >> +  /* Count how many formats are supported by CEU and sensor subdevice */
> >> +  n_active_fmts = 0;
> >> +  while (!v4l2_subdev_call(sensor, pad, enum_mbus_code,
> >> +   NULL, &mbus_code)) {
> >> +  mbus_code.index++;
> >> +
> >> +  fmt = get_ceu_fmt_from_mbus(mbus_code.code);
> > 
> > This is not correct. You will get one pixel format per media bus
> > format supported by the sensor. The CEU supports
> > 
> > 1. converting packed YUV 4:2:2 (YUYV) to YUV 4:2:2 planar (CAMCR.JPG = 00)
> > or capturing raw data (CAMCR.JPG = 01)
> > 
> > 2. outputting YUV 4:2:2 planar to memory (CDOCR.CDS = 1) or downsampling
> > it to YUV 4:2:0 planar (CDOCR.CDS = 0)
> > 
> > 3. swapping bytes, words and long words at the output (CDOCR.COLS,
> > CDOCR.COWS and CDOCR.COBS fields)
> > 
> > This effectively makes it possible to
> > 
> > - capture the raw data produced by the sensor
> > - capture YUV images produced by the sensor in packed YUV 4:2:2 format,
> > from any Y/U/V order to any Y/U/V order
> > - capture YUV images produced by the sensor in planar YUV 4:2:2 or planar
> > YUV 4:2:0, from any Y/U/V order to any Y/U/V order
> > 
> > The list of formats you create needs to reflect that. This means that
> > 
> > - if the sensor can produce a packed YUV 4:2:2 format, the CEU will be
> > able to output YUYV, UYVY, YVYU, VYUY, NV12, NV21, NV16 and NV61
> > 
> > - for every non-YUV format produced by the sensor, the CEU will be able to
> > output raw data
> > 
> > The former is easy. You just need to list the formats produced by the
> > sensor, and if one of them is packed YUV 4:2:2, flag that in the
> > sh_ceu_dev structure, and allow all the above output YUV formats. You
> > don't need to create an instance-specific list of those formats in the
> > sh_ceu_dev structure, as they will be either all available or all
> > non-available.
> > 
> > The latter is a bit more complex, you need a list of media bus code to
> > pixel format in the driver. I'd recommend counting the non-YUV packed
> > formats produced by the sensor, allocating an array of sh_ceu_fmt
> > pointers with that number of entries, and make each entry point to the
> > corresponding entry in the global sh_ceu_fmts array. The global
> > sh_ceu_fmts needs to be extended with common sensor formats (raw Bayer
> > and JPEG come to mind).
> > 
> > If all sensors currently used with the CEU can produce packed YUV, you
> > could implement YUV support only to start with, leaving raw capture
> > support for later.
> 
> Ok, thanks for explaining.
> 
> I have a proposal here: as the original driver only supported "image
> fetch mode" (i.e. it accepts data in YUYV with component ordering arbitrarily
> swapped), as a first step we may want to replicate this, ignoring data
> sync fetch mode (Chris, you have a driver for this you are already
> using in your BSP, so I guess it's less urgent to support it, right?).
> 
> Also, regarding output memory format I am sure we can produce planar formats
> as NV12/21 and NV16/61, but I'm not sure I have found a way to output
> packed YUYV (and its permutations) to memory, if not by considering it a
> binary format, and thus using data sync fetch mode.

As I understand it, outputting packed YUV is indeed done using data sync (or 
enable, I'd have to check) fetch mode, and swapping components to convert 
between different YUYV orders is done through the CDOCR.COLS, CDOCR.COWS and 
CDOCR.COBS bits.

> So, as a first step my proposal is the following one:
> Accept any YUYV bus format from sensor, and support planar ones as memory
> output formats with down-sampling support (NV12/21 and NV16/61 then).

You can start with that, but from the same YUV bus format, you should be able 
to output packed YUV easily using data sync fetch mode. That can be 
implemented as a second step, it will just result in extending the hardcoded 
list of YUV pixel formats supported by the driver (as any YUV bus format can 
produce any of the four NV variants and any of the four packed YUV variants).

The third step would be to enumerate the non-YUV formats supported by the 
sensor, and create a list of corresponding pixel formats that can be supported 
in data sync fetch mode. I'd leave that out for now.
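The "hardcoded list of YUV pixel formats" mentioned in the second step can be sketched as a static table. The fourcc macro below is a stand-in for the kernel's v4l2_fourcc() from <linux/videodev2.h>, reproduced so the sketch builds on its own:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the kernel's v4l2_fourcc() macro. */
#define v4l2_fourcc(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* The eight output formats any packed YUV bus format can produce: the four
 * packed YUYV permutations (data sync fetch mode plus CDOCR swapping) and
 * the four NV planar variants (image capture mode, with optional 4:2:0
 * downsampling). */
static const uint32_t ceu_yuv_out_fmts[] = {
	v4l2_fourcc('Y', 'U', 'Y', 'V'),
	v4l2_fourcc('U', 'Y', 'V', 'Y'),
	v4l2_fourcc('Y', 'V', 'Y', 'U'),
	v4l2_fourcc('V', 'Y', 'U', 'Y'),
	v4l2_fourcc('N', 'V', '1', '2'),
	v4l2_fourcc('N', 'V', '2', '1'),
	v4l2_fourcc('N', 'V', '1', '6'),
	v4l2_fourcc('N', 'V', '6', '1'),
};

/* Enumeration helper in the spirit of VIDIOC_ENUM_FMT: return the fourcc
 * at 'index', or 0 when the index runs past the table. */
static uint32_t ceu_enum_yuv_fmt(size_t index)
{
	if (index >= sizeof(ceu_yuv_out_fmts) / sizeof(ceu_yuv_out_fmts[0]))
		return 0;
	return ceu_yuv_out_fmts[index];
}
```

Because all eight entries are either all available or all unavailable, a single flag in the device structure (sensor produces packed YUV: yes/no) is enough to gate the whole table.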

So, if you only implement the first two steps, you 

Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-03 Thread jmondi
Hi Laurent,

On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> Thank you for the patch.

[snip]

> > +/**
> > + * Collect formats supported by CEU and sensor subdevice
> > + */
> > +static int sh_ceu_init_active_formats(struct sh_ceu_dev *pcdev)
> > +{
> > +   struct v4l2_subdev *sensor = pcdev->sensor;
> > +   struct sh_ceu_fmt *active_fmts;
> > +   unsigned int n_active_fmts;
> > +   struct sh_ceu_fmt *fmt;
> > +   unsigned int i;
> > +
> > +   struct v4l2_subdev_mbus_code_enum mbus_code = {
> > +   .which = V4L2_SUBDEV_FORMAT_ACTIVE,
> > +   .index = 0,
> > +   };
> > +
> > +   /* Count how many formats are supported by CEU and sensor subdevice */
> > +   n_active_fmts = 0;
> > +   while (!v4l2_subdev_call(sensor, pad, enum_mbus_code,
> > +NULL, &mbus_code)) {
> > +   mbus_code.index++;
> > +
> > +   fmt = get_ceu_fmt_from_mbus(mbus_code.code);
>
> This is not correct. You will get one pixel format per media bus format
> supported by the sensor. The CEU supports
>
> 1. converting packed YUV 4:2:2 (YUYV) to YUV 4:2:2 planar (CAMCR.JPG = 00) or
> capturing raw data (CAMCR.JPG = 01)
>
> 2. outputting YUV 4:2:2 planar to memory (CDOCR.CDS = 1) or downsampling it to
> YUV 4:2:0 planar (CDOCR.CDS = 0)
>
> 3. swapping bytes, words and long words at the output (CDOCR.COLS, CDOCR.COWS
> and CDOCR.COBS fields)
>
> This effectively makes it possible to
>
> - capture the raw data produced by the sensor
> - capture YUV images produced by the sensor in packed YUV 4:2:2 format, from
> any Y/U/V order to any Y/U/V order
> - capture YUV images produced by the sensor in planar YUV 4:2:2 or planar YUV
> 4:2:0, from any Y/U/V order to any Y/U/V order
>
> The list of formats you create needs to reflect that. This means that
>
> - if the sensor can produce a packed YUV 4:2:2 format, the CEU will be able to
> output YUYV, UYVY, YVYU, VYUY, NV12, NV21, NV16 and NV61
>
> - for every non-YUV format produced by the sensor, the CEU will be able to
> output raw data
>
> The former is easy. You just need to list the formats produced by the sensor,
> and if one of them is packed YUV 4:2:2, flag that in the sh_ceu_dev structure,
> and allow all the above output YUV formats. You don't need to create an
> instance-specific list of those formats in the sh_ceu_dev structure, as they
> will be either all available or all non-available.
>
> The latter is a bit more complex, you need a list of media bus code to pixel
> format in the driver. I'd recommend counting the non-YUV packed formats
> produced by the sensor, allocating an array of sh_ceu_fmt pointers with that
> number of entries, and make each entry point to the corresponding entry in the
> global sh_ceu_fmts array. The global sh_ceu_fmts needs to be extended with
> common sensor formats (raw Bayer and JPEG come to mind).
>
> If all sensors currently used with the CEU can produce packed YUV, you could
> implement YUV support only to start with, leaving raw capture support for
> later.

Ok, thanks for explaining.

I have a proposal here: as the original driver only supported "image
fetch mode" (i.e. it accepts data in YUYV with component ordering arbitrarily
swapped), as a first step we may want to replicate this, ignoring data
sync fetch mode (Chris, you have a driver for this you are already
using in your BSP, so I guess it's less urgent to support it, right?).

Also, regarding output memory format I am sure we can produce planar formats
as NV12/21 and NV16/61, but I'm not sure I have found a way to output
packed YUYV (and its permutations) to memory, if not by considering it a
binary format, and thus using data sync fetch mode.

So, as a first step my proposal is the following one:
Accept any YUYV bus format from sensor, and support planar ones as memory output
formats with down-sampling support (NV12/21 and NV16/61 then).

The way I am building the supported format list is indeed wrong, and
as you suggested I should just try to make sure the sensor can output
a YUV422 bus format. From there I can output planar memory formats.
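The "make sure the sensor can output a YUV422 bus format" check could look like the sketch below, with a plain array standing in for the repeated v4l2_subdev enum_mbus_code calls; the enum names are stand-ins for the kernel's MEDIA_BUS_FMT_* codes:

```c
#include <stdbool.h>
#include <stddef.h>

enum mbus_code {        /* stand-ins for MEDIA_BUS_FMT_* codes */
	BUS_YUYV8_2X8,
	BUS_UYVY8_2X8,
	BUS_YVYU8_2X8,
	BUS_VYUY8_2X8,
	BUS_SBGGR8,     /* a raw Bayer code, not packed YUV */
};

/* Walk the sensor's enumerated bus codes and flag whether any of them is
 * packed YUV 4:2:2 -- the condition under which the whole hardcoded YUV
 * output format list becomes available. */
static bool sensor_has_packed_yuv(const enum mbus_code *codes, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		switch (codes[i]) {
		case BUS_YUYV8_2X8:
		case BUS_UYVY8_2X8:
		case BUS_YVYU8_2X8:
		case BUS_VYUY8_2X8:
			return true;
		default:
			break;
		}
	}
	return false;
}
```

In the real driver the loop body would be the v4l2_subdev_call(sensor, pad, enum_mbus_code, ...) iteration shown in the quoted patch, with the per-code bookkeeping reduced to setting this one flag.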

One last thing: I am not that sure how I can produce NV21/61 from
bus formats that do not already present the CbCr components in the
desired order (which is Cr then Cb for NV61/21).
E.g. can I produce NV21 from YUYV through byte/word/long word swapping
(CDOCR.COLS/CDOCR.COWS/CDOCR.COBS)? Won't I be swapping the luminance
components as well? (See figure 46.45 in the RZ/A1H TRM.)

If I cannot do that, I should restrict swapped planar format
availability based on the ability of the sensor to produce
chrominance components in the desired order
(e.g. if the sensor does not support producing YVYU, I cannot produce
NV21 or NV61). Is this ok?
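If swapping does turn out to disturb luma, the fallback proposed here could be gated roughly as below. This is a sketch only; the flag names, bitmask values, and helper are made up for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative format bits; a real driver would use V4L2 fourccs. */
#define FMT_NV12 (1u << 0)
#define FMT_NV21 (1u << 1)
#define FMT_NV16 (1u << 2)
#define FMT_NV61 (1u << 3)

/* Gate the planar output list on the sensor's abilities: the Cb-first
 * formats need any packed YUV bus order, while the Cr-first ones
 * (NV21/NV61) are only offered when the sensor can emit Cr leading
 * (YVYU or VYUY on the bus). */
static uint32_t ceu_planar_fmts(bool has_packed_yuv, bool has_cr_first)
{
	uint32_t fmts = 0;

	if (!has_packed_yuv)
		return 0;

	fmts |= FMT_NV12 | FMT_NV16;
	if (has_cr_first)
		fmts |= FMT_NV21 | FMT_NV61;

	return fmts;
}
```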

Thanks
 j
>
> > +   if (!fmt)
> > +   continue;
> > +
> > +   fmt->active = 1;
> > +   n_active_fmts++;
> > +   }
> > +
> > +   if (n_active_fmts == 0)
> > +   return -ENXIO;
> > +
> > +   pcdev->n_active_fmts = 

Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-02 Thread Laurent Pinchart
Hi Jacopo,

On Tuesday 02 May 2017 11:48:42 jmondi wrote:
> On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> > Hi Jacopo,
> 
> [snip]
> 
> > > + pcdev = devm_kzalloc(&pdev->dev, sizeof(*pcdev), GFP_KERNEL);
> > 
> > devm_kzalloc() is harmful here. You're embedding a struct video_device
> > inside struct sh_ceu_dev, and the video device can outlive the platform
> > driver remove() call if the device node is kept open by userspace when
> > the device is removed. You should use kzalloc() and implement a .release
> > callback for the video_device to kfree() the memory.
> 
> Does this apply to video_unregister_device() as well?
> Should I keep it registered after platform driver removal?

No, the video device must be unregistered in the platform driver's remove() 
callback. As explained in my previous reply the driver private data structure 
can still be reachable if a userspace application has the device node opened 
when it gets unregistered, so you can't free it in the remove() callback.

> Other drivers (eg atmel-isc and pxa_camera) unregister the video
> device in their sensor unbind callbacks.
> As the video device has a pointer to fops embedded, if we unregister it at
> sensor unbind time, does the hypothetical application having an open
> reference to the device node lose the ability to properly close it?

An application will always be able to close an open file handle, and that will 
always end up in the v4l2_file_operations release() handler, which can access 
the driver's private data structure (including the video_device instance it 
embeds). That's why you can't free the memory before the last file handle is 
closed.
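The lifetime rule described here can be illustrated outside the kernel with a plain refcount: the private structure is freed by a release callback when the last user drops its reference, never in remove(). In the real driver this maps to kzalloc() plus a video_device .release callback calling kfree(); every name below is a userspace stand-in, not a kernel API:

```c
#include <stdlib.h>

/* Userspace stand-in for a driver private structure embedding a device
 * with a release callback. */
struct fake_dev {
	int refcount;
	void (*release)(struct fake_dev *dev);
};

static void fake_dev_release(struct fake_dev *dev)
{
	free(dev); /* only safe once no file handle can reach us */
}

/* Drop one reference; the release callback runs when the last one goes. */
static void fake_dev_put(struct fake_dev *dev)
{
	if (--dev->refcount == 0)
		dev->release(dev);
}

static struct fake_dev *fake_dev_alloc(void)
{
	struct fake_dev *dev = calloc(1, sizeof(*dev));

	if (!dev)
		return NULL;
	dev->refcount = 1; /* the driver's own reference, dropped in remove() */
	dev->release = fake_dev_release;
	return dev;
}
```

With this pattern, platform remove() unregisters the device and drops the driver's reference, but an application still holding the device node open keeps its own reference, so the memory survives until that handle is closed.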

-- 
Regards,

Laurent Pinchart



Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-02 Thread Laurent Pinchart
Hi Jacopo,

On Monday 01 May 2017 16:37:54 jmondi wrote:
> On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> > Hi Jacopo,
> > 
> > Thank you for the patch.
> > 
> > This is a partial review, as some of the comments will result in large
> > changes that would make review of some of the current code pointless.
> > There are, however, enough comments to keep you busy for a bit :-)
> 
> Indeed
> 
> > By the way, you might want to keep your development history in a private
> > git branch, in order to bisect issues when you'll finally be able to test
> > the driver on real hardware. Otherwise we'll end up at v7, the driver
> > won't work, and you will have no idea why.
> 
> As many of your comments are on existing code that I blindly took from
> the existing driver, I decided to start fresh with a new driver from
> scratch. Next patches I will send will be based on your comments on this
> series, and I take the opportunity to ask some questions on some
> parts of code that were already in the driver and I'm failing to fully
> understand.

Please try to keep the structure of the driver you posted if possible, 
otherwise I'll have to review completely new code, which will be more time 
consuming.

> To speed things up, I'm going to reply/make questions on code as I'm
> rewriting it, so let's start from the bottom of the driver.
> 
> > On Thursday 27 Apr 2017 10:42:27 Jacopo Mondi wrote:
> > > Add driver for SH Mobile CEU driver, based on
> > > drivers/media/platform/soc_camera/sh_mobile_ceu_camera.c
> > > 
> > > The original driver was based on soc-camera framework.
> > > 
> > > Signed-off-by: Jacopo Mondi 
> > > ---
> > > 
> > >  drivers/media/platform/Kconfig |9 +
> > >  drivers/media/platform/Makefile|2 +
> > >  drivers/media/platform/sh_ceu_camera.c | 1597 +
> > >  3 files changed, 1608 insertions(+)
> > >  create mode 100644 drivers/media/platform/sh_ceu_camera.c

[snip]

> > > diff --git a/drivers/media/platform/Makefile
> > > b/drivers/media/platform/Makefile index 349ddf6..a2b9aa6 100644
> > > --- a/drivers/media/platform/Makefile
> > > +++ b/drivers/media/platform/Makefile
> > > @@ -25,6 +25,8 @@
> > >  obj-$(CONFIG_VIDEO_CODA) += coda/
> > > 
> > >  obj-$(CONFIG_VIDEO_SH_VEU)   += sh_veu.o
> > > +obj-$(CONFIG_VIDEO_SH_MOBILE_CEU)+= sh_ceu_camera.o
> > > +
> 
> This will be renamed in "ceu_camera.c" as well

How about renesas-ceu.c ?

> > >  obj-$(CONFIG_VIDEO_MEM2MEM_DEINTERLACE)  += m2m-deinterlace.o
> > >  obj-$(CONFIG_VIDEO_S3C_CAMIF)+= s3c-camif/
> > > 
> > > diff --git a/drivers/media/platform/sh_ceu_camera.c
> > > b/drivers/media/platform/sh_ceu_camera.c new file mode 100644
> > > index 000..07fe0e7
> > > --- /dev/null
> > > +++ b/drivers/media/platform/sh_ceu_camera.c

[snip]

> > > +struct sh_ceu_dev {
> > > + struct video_device vdev;
> > > + struct v4l2_device  v4l2_dev;
> > > +
> > > + struct v4l2_subdev *sensor;
> > > +
> > > + struct v4l2_async_notifier notifier;
> > > +
> > > + struct v4l2_async_subdev asd;
> > > + struct v4l2_async_subdev *asds[1];
> > > +
> > > + struct vb2_queue vb2_vq;
> > > +
> > > + struct mutex mlock;
> > 
> > You need to document what this lock protects.
> > 
> > > +
> > > + unsigned int irq;
> > > + void __iomem *base;
> > > + size_t video_limit;
> > > + size_t buf_total;
> > > +
> > > + spinlock_t lock;/* Protects video buffer lists */
> > 
> > Please list which field(s) the lock protects explicitly.
> > 
> > > + struct list_head capture;
> > > + struct vb2_v4l2_buffer *active;
> > > +
> > > + enum v4l2_field field;
> > > + struct v4l2_format v4l2_fmt;
> > 
> > v4l2_format is quite a big structure, and you don't need most of it. I
> > would store v4l2_pix_format only.
> > 
> > > + unsigned int n_active_fmts;
> > > + struct sh_ceu_fmt **active_fmts;
> > > + struct sh_ceu_fmt *current_fmt;
> > 
> > These two should be const.
> 
> Why both of them? I understand the list of active formats is built
> during driver initialization, but the "current_fmt" is set during
> "set_fmt" and can change at run time, right?

Please see Geert's reply on this.

> > > +
> > > + struct sh_ceu_info *pdata;
> > > + struct completion complete;
> > > +
> > > + u32 cflcr;
> > > +
> > > + /* static max sizes either from platform data or default */
> > > + int max_width;
> > > + int max_height;
> > 
> > No platform sets non-default max_width or max_height values, just replace
> > them with #define's.
> 
> yup
> 
> > > +
> > > + int sequence;
> > > + unsigned long flags;
> > > +
> > > + unsigned int image_mode:1;
> > > + unsigned int is_16bit:1;
> > > + unsigned int frozen:1;
> > > +};

[snip]

> > > +}
> > > +static int sh_ceu_probe(struct platform_device *pdev)
> > > +{
> > > + struct sh_ceu_dev *pcdev;
> > > + struct resource *res;
> > > + void __iomem *base;
> > > + unsigned int irq;
> > > + int err;
> > > +
> > > + res = 

Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-02 Thread jmondi
Hi Laurent,

On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
>

[snip]

> > +   pcdev = devm_kzalloc(&pdev->dev, sizeof(*pcdev), GFP_KERNEL);
>
> devm_kzalloc() is harmful here. You're embedding a struct video_device inside
> struct sh_ceu_dev, and the video device can outlive the platform driver
> remove() call if the device node is kept open by userspace when the device is
> removed. You should use kzalloc() and implement a .release callback for the
> video_device to kfree() the memory.
>

Does this apply to video_unregister_device() as well?
Should I keep it registered after platform driver removal?

Other drivers (eg atmel-isc and pxa_camera) unregister the video
device in their sensor unbind callbacks.
As the video device has a pointer to fops embedded, if we unregister it at
sensor unbind time, does the hypothetical application having an open
reference to the device node lose the ability to properly close it?

Thanks
   j

> Note that devm_ioremap_resource() and devm_request_irq() are fine as the MMIO
> and interrupts must not be used after the remove() function returns.
>
> > +   if (!pcdev) {
> > +   dev_err(&pdev->dev, "Could not allocate pcdev\n");
> > +   return -ENOMEM;
> > +   }
> > +
> > +   mutex_init(&pcdev->mlock);
> > +   INIT_LIST_HEAD(&pcdev->capture);
> > +   spin_lock_init(&pcdev->lock);
> > +   init_completion(&pcdev->complete);
> > +
> > +   pcdev->pdata = pdev->dev.platform_data;
> > +   if (pdev->dev.of_node && !pcdev->pdata) {
>
> If there's an OF node there's no platform data; you can code this as
>
>   if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) {
>   ...
>   } else if (pcdev->pdata) {
>   ...
>   } else {
>   ...
>   }
>
> The IS_ENABLED(CONFIG_OF) will make sure the code is not compiled in for non-
> OF platforms.
>
> > +   /* TODO: implement OF parsing */
> > +   } else if (pcdev->pdata && !pdev->dev.of_node) {
> > +   /* TODO: implement per-device bus flags */
>
> Is this needed ? I don't expect any new feature to be implemented for non-OF
> platforms.
>
> > +   pcdev->max_width = pcdev->pdata->max_width;
> > +   pcdev->max_height = pcdev->pdata->max_height;
> > +   pcdev->flags = pcdev->pdata->flags;
> > +   pcdev->asd.match_type = V4L2_ASYNC_MATCH_I2C;
> > +   pcdev->asd.match.i2c.adapter_id =
> > +   pcdev->pdata->sensor_i2c_adapter_id;
> > +   pcdev->asd.match.i2c.address = pcdev->pdata->sensor_i2c_address;
> > +   } else {
> > +   dev_err(&pdev->dev, "CEU platform data not set.\n");
> > +   return -EINVAL;
> > +   }
> > +
> > +   pcdev->field = V4L2_FIELD_NONE;
> > +
> > +   if (!pcdev->max_width)
> > +   pcdev->max_width = 2560;
> > +   if (!pcdev->max_height)
> > +   pcdev->max_height = 1920;
> > +
> > +   base = devm_ioremap_resource(&pdev->dev, res);
> > +   if (IS_ERR(base))
> > +   return PTR_ERR(base);
> > +
> > +   pcdev->irq = irq;
> > +   pcdev->base = base;
> > +   pcdev->video_limit = 0; /* only enabled if second resource exists */
> > +
> > +   res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> > +   if (res) {
> > +   err = dma_declare_coherent_memory(&pdev->dev, res->start,
> > + res->start,
> > + resource_size(res),
> > + DMA_MEMORY_MAP |
> > + DMA_MEMORY_EXCLUSIVE);
> > +   if (!err) {
> > +   dev_err(&pdev->dev, "Unable to declare CEU memory.\n");
> > +   return -ENXIO;
> > +   }
> > +
> > +   pcdev->video_limit = resource_size(res);
>
> You can get rid of this, we now have CMA that can be used unconditionally to
> allocate physically contiguous memory. Don't forget to remove the
> corresponding resource from SH board code that instantiate CEU devices.
>
> > +   }
> > +
> > +   /* request irq */
>
> I'm not sure the comment is really valuable :-)
>
> > +   err = devm_request_irq(&pdev->dev, pcdev->irq, sh_ceu_irq,
> > +  0, dev_name(&pdev->dev), pcdev);
> > +   if (err) {
> > +   dev_err(&pdev->dev, "Unable to register CEU interrupt.\n");
> > +   goto exit_release_mem;
> > +   }
> > +
> > +   pm_suspend_ignore_children(&pdev->dev, true);
> > +   pm_runtime_enable(&pdev->dev);
> > +   pm_runtime_resume(&pdev->dev);
> > +
> > +   dev_set_drvdata(&pdev->dev, pcdev);
> > +   err = v4l2_device_register(&pdev->dev, &pcdev->v4l2_dev);
> > +   if (err)
> > +   goto exit_free_clk;
> > +
> > +   pcdev->asds[0] = &pcdev->asd;
> > +   pcdev->notifier.v4l2_dev = &pcdev->v4l2_dev;
> > +   pcdev->notifier.subdevs = pcdev->asds;
> > +   pcdev->notifier.num_subdevs = 1;
> > +   pcdev->notifier.bound = sh_ceu_sensor_bound;
> > +   pcdev->notifier.unbind = sh_ceu_sensor_unbind;
> > +
> > +   err = v4l2_async_notifier_register(&pcdev->v4l2_dev,
> > +  &pcdev->notifier);
> > +   if (err)
> > +   

Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-05-01 Thread jmondi
Hi Laurent,
   thanks for the extensive review

On Thu, Apr 27, 2017 at 02:47:14PM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> Thank you for the patch.
>
> This is a partial review, as some of the comments will result in large changes
> that would make review of some of the current code pointless. There are however
> enough comments to keep you busy for a bit :-)
>

Indeed

> By the way, you might want to keep your development history in a private git
> branch, in order to bisect issues when you'll finally be able to test the
> driver on real hardware. Otherwise we'll end up at v7, the driver won't work,
> and you will have no idea why.
>

As many of your comments are on code that I blindly took from the existing
driver, I have decided to start fresh and rewrite the driver from scratch.
The next patches I send will be based on your comments on this series, and
I'll take the opportunity to ask some questions about parts of the code that
were already in the driver and that I fail to fully understand.

To speed things up, I'm going to reply and ask questions about the code as
I'm rewriting it, so let's start from the bottom of the driver.

> On Thursday 27 Apr 2017 10:42:27 Jacopo Mondi wrote:
> > Add driver for SH Mobile CEU driver, based on
> > drivers/media/platform/soc_camera/sh_mobile_ceu_camera.c
> >
> > The original driver was based on soc-camera framework.
> >
> > Signed-off-by: Jacopo Mondi 
> > ---
> >  drivers/media/platform/Kconfig |9 +
> >  drivers/media/platform/Makefile|2 +
> >  drivers/media/platform/sh_ceu_camera.c | 1597 +
> >  3 files changed, 1608 insertions(+)
> >  create mode 100644 drivers/media/platform/sh_ceu_camera.c
> >
> > diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
> > index c9106e1..afb10fd 100644
> > --- a/drivers/media/platform/Kconfig
> > +++ b/drivers/media/platform/Kconfig
> > @@ -114,6 +114,15 @@ config VIDEO_S3C_CAMIF
> >   To compile this driver as a module, choose M here: the module
> >   will be called s3c-camif.
> >
> > +config VIDEO_SH_MOBILE_CEU
> > +   tristate "SuperH Mobile CEU Interface driver"
>
> Maybe this should be renamed to include the RZ family ?
>

I have renamed the config symbol to RENESAS_CEU and changed all instances
of "sh_ceu" in the driver's code to "ceu_camera", or just "ceu" where
appropriate (e.g. "ceu_device" or "ceu_buffer").

> > +   depends on VIDEO_DEV && VIDEO_V4L2 && HAS_DMA && HAVE_CLK
>
> You don't use the clock API directly, I think you can drop HAVE_CLK.
>
> > +   depends on ARCH_SHMOBILE || COMPILE_TEST
> > +   depends on HAS_DMA
>
> This is already included above.
>
> > +   select VIDEOBUF2_DMA_CONTIG
> > +   ---help---
> > + This is a v4l2 driver for the SuperH Mobile CEU Interface
> > +
> >  source "drivers/media/platform/soc_camera/Kconfig"
> >  source "drivers/media/platform/exynos4-is/Kconfig"
> >  source "drivers/media/platform/am437x/Kconfig"
> > diff --git a/drivers/media/platform/Makefile
> > b/drivers/media/platform/Makefile index 349ddf6..a2b9aa6 100644
> > --- a/drivers/media/platform/Makefile
> > +++ b/drivers/media/platform/Makefile
> > @@ -25,6 +25,8 @@ obj-$(CONFIG_VIDEO_CODA)  += coda/
> >
> >  obj-$(CONFIG_VIDEO_SH_VEU) += sh_veu.o
> >
> > +obj-$(CONFIG_VIDEO_SH_MOBILE_CEU)  += sh_ceu_camera.o
> > +

This will be renamed to "ceu_camera.c" as well

> >  obj-$(CONFIG_VIDEO_MEM2MEM_DEINTERLACE)+= m2m-deinterlace.o
> >
> >  obj-$(CONFIG_VIDEO_S3C_CAMIF)  += s3c-camif/
> > diff --git a/drivers/media/platform/sh_ceu_camera.c
> > b/drivers/media/platform/sh_ceu_camera.c new file mode 100644
> > index 000..07fe0e7
> > --- /dev/null
> > +++ b/drivers/media/platform/sh_ceu_camera.c
> > @@ -0,0 +1,1597 @@
> > +/*
> > + * V4L2 Driver for SuperH Mobile CEU interface
> > + *
> > + * Copyright (C) 2017 Jacopo Mondi 
> > + *
> > + * Based on soc-camera driver "soc_camera/sh_mobile_ceu_camera.c"
> > + *
> > + * Copyright (C) 2008 Magnus Damm
> > + *
> > + * Based on V4L2 Driver for PXA camera host - "pxa_camera.c",
> > + *
> > + * Copyright (C) 2006, Sascha Hauer, Pengutronix
> > + * Copyright (C) 2008, Guennadi Liakhovetski 
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + */
> > +
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
>
> Please sort headers alphabetically, it makes locating duplicates easier.
>

yup

> > +
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> 

Re: [RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-04-27 Thread Laurent Pinchart
Hi Jacopo,

Thank you for the patch.

This is a partial review, as some of the comments will result in large changes 
that would make review of some of the current code pointless. There are however 
enough comments to keep you busy for a bit :-)

By the way, you might want to keep your development history in a private git 
branch, in order to bisect issues when you'll finally be able to test the 
driver on real hardware. Otherwise we'll end up at v7, the driver won't work, 
and you will have no idea why.

On Thursday 27 Apr 2017 10:42:27 Jacopo Mondi wrote:
> Add driver for SH Mobile CEU driver, based on
> drivers/media/platform/soc_camera/sh_mobile_ceu_camera.c
> 
> The original driver was based on soc-camera framework.
> 
> Signed-off-by: Jacopo Mondi 
> ---
>  drivers/media/platform/Kconfig |9 +
>  drivers/media/platform/Makefile|2 +
>  drivers/media/platform/sh_ceu_camera.c | 1597 +
>  3 files changed, 1608 insertions(+)
>  create mode 100644 drivers/media/platform/sh_ceu_camera.c
> 
> diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
> index c9106e1..afb10fd 100644
> --- a/drivers/media/platform/Kconfig
> +++ b/drivers/media/platform/Kconfig
> @@ -114,6 +114,15 @@ config VIDEO_S3C_CAMIF
> To compile this driver as a module, choose M here: the module
> will be called s3c-camif.
> 
> +config VIDEO_SH_MOBILE_CEU
> + tristate "SuperH Mobile CEU Interface driver"

Maybe this should be renamed to include the RZ family ?

> + depends on VIDEO_DEV && VIDEO_V4L2 && HAS_DMA && HAVE_CLK

You don't use the clock API directly, I think you can drop HAVE_CLK.

> + depends on ARCH_SHMOBILE || COMPILE_TEST
> + depends on HAS_DMA

This is already included above.

> + select VIDEOBUF2_DMA_CONTIG
> + ---help---
> +   This is a v4l2 driver for the SuperH Mobile CEU Interface
> +
>  source "drivers/media/platform/soc_camera/Kconfig"
>  source "drivers/media/platform/exynos4-is/Kconfig"
>  source "drivers/media/platform/am437x/Kconfig"
> diff --git a/drivers/media/platform/Makefile
> b/drivers/media/platform/Makefile index 349ddf6..a2b9aa6 100644
> --- a/drivers/media/platform/Makefile
> +++ b/drivers/media/platform/Makefile
> @@ -25,6 +25,8 @@ obj-$(CONFIG_VIDEO_CODA)+= coda/
> 
>  obj-$(CONFIG_VIDEO_SH_VEU)   += sh_veu.o
> 
> +obj-$(CONFIG_VIDEO_SH_MOBILE_CEU)+= sh_ceu_camera.o
> +
>  obj-$(CONFIG_VIDEO_MEM2MEM_DEINTERLACE)  += m2m-deinterlace.o
> 
>  obj-$(CONFIG_VIDEO_S3C_CAMIF)+= s3c-camif/
> diff --git a/drivers/media/platform/sh_ceu_camera.c
> b/drivers/media/platform/sh_ceu_camera.c new file mode 100644
> index 000..07fe0e7
> --- /dev/null
> +++ b/drivers/media/platform/sh_ceu_camera.c
> @@ -0,0 +1,1597 @@
> +/*
> + * V4L2 Driver for SuperH Mobile CEU interface
> + *
> + * Copyright (C) 2017 Jacopo Mondi 
> + *
> + * Based on soc-camera driver "soc_camera/sh_mobile_ceu_camera.c"
> + *
> + * Copyright (C) 2008 Magnus Damm
> + *
> + * Based on V4L2 Driver for PXA camera host - "pxa_camera.c",
> + *
> + * Copyright (C) 2006, Sascha Hauer, Pengutronix
> + * Copyright (C) 2008, Guennadi Liakhovetski 
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 

Please sort headers alphabetically, it makes locating duplicates easier.

> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +/* register offsets for sh7722 / sh7723 */
> +
> +#define CAPSR  0x00 /* Capture start register */
> +#define CAPCR  0x04 /* Capture control register */
> +#define CAMCR  0x08 /* Capture interface control register */
> +#define CMCYR  0x0c /* Capture interface cycle  register */
> +#define CAMOR  0x10 /* Capture interface offset register */
> +#define CAPWR  0x14 /* Capture interface width register */
> +#define CAIFR  0x18 /* Capture interface input format register */
> +#define CSTCR  0x20 /* Camera strobe control register (<= sh7722) */
> +#define CSECR  0x24 /* Camera strobe emission count register (<= sh7722) */
> +#define CRCNTR 0x28 /* CEU register control register */
> +#define CRCMPR 0x2c /* CEU register forcible control register */
> +#define CFLCR  0x30 /* Capture filter control register */
> +#define CFSZR  0x34 /* Capture filter size clip register */
> +#define CDWDR  0x38 /* Capture destination width register */
> +#define CDAYR  0x3c /* Capture data address Y register */

[RFC v2 1/2] media: platform: Add SH CEU camera interface driver

2017-04-27 Thread Jacopo Mondi
Add driver for SH Mobile CEU driver, based on
drivers/media/platform/soc_camera/sh_mobile_ceu_camera.c

The original driver was based on soc-camera framework.

Signed-off-by: Jacopo Mondi 
---
 drivers/media/platform/Kconfig |9 +
 drivers/media/platform/Makefile|2 +
 drivers/media/platform/sh_ceu_camera.c | 1597 
 3 files changed, 1608 insertions(+)
 create mode 100644 drivers/media/platform/sh_ceu_camera.c

diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index c9106e1..afb10fd 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -114,6 +114,15 @@ config VIDEO_S3C_CAMIF
  To compile this driver as a module, choose M here: the module
  will be called s3c-camif.
 
+config VIDEO_SH_MOBILE_CEU
+   tristate "SuperH Mobile CEU Interface driver"
+   depends on VIDEO_DEV && VIDEO_V4L2 && HAS_DMA && HAVE_CLK
+   depends on ARCH_SHMOBILE || COMPILE_TEST
+   depends on HAS_DMA
+   select VIDEOBUF2_DMA_CONTIG
+   ---help---
+ This is a v4l2 driver for the SuperH Mobile CEU Interface
+
 source "drivers/media/platform/soc_camera/Kconfig"
 source "drivers/media/platform/exynos4-is/Kconfig"
 source "drivers/media/platform/am437x/Kconfig"
diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index 349ddf6..a2b9aa6 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -25,6 +25,8 @@ obj-$(CONFIG_VIDEO_CODA)  += coda/
 
 obj-$(CONFIG_VIDEO_SH_VEU) += sh_veu.o
 
+obj-$(CONFIG_VIDEO_SH_MOBILE_CEU)  += sh_ceu_camera.o
+
 obj-$(CONFIG_VIDEO_MEM2MEM_DEINTERLACE)+= m2m-deinterlace.o
 
 obj-$(CONFIG_VIDEO_S3C_CAMIF)  += s3c-camif/
diff --git a/drivers/media/platform/sh_ceu_camera.c 
b/drivers/media/platform/sh_ceu_camera.c
new file mode 100644
index 000..07fe0e7
--- /dev/null
+++ b/drivers/media/platform/sh_ceu_camera.c
@@ -0,0 +1,1597 @@
+/*
+ * V4L2 Driver for SuperH Mobile CEU interface
+ *
+ * Copyright (C) 2017 Jacopo Mondi 
+ *
+ * Based on soc-camera driver "soc_camera/sh_mobile_ceu_camera.c"
+ *
+ * Copyright (C) 2008 Magnus Damm
+ *
+ * Based on V4L2 Driver for PXA camera host - "pxa_camera.c",
+ *
+ * Copyright (C) 2006, Sascha Hauer, Pengutronix
+ * Copyright (C) 2008, Guennadi Liakhovetski 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/* register offsets for sh7722 / sh7723 */
+
+#define CAPSR  0x00 /* Capture start register */
+#define CAPCR  0x04 /* Capture control register */
+#define CAMCR  0x08 /* Capture interface control register */
+#define CMCYR  0x0c /* Capture interface cycle  register */
+#define CAMOR  0x10 /* Capture interface offset register */
+#define CAPWR  0x14 /* Capture interface width register */
+#define CAIFR  0x18 /* Capture interface input format register */
+#define CSTCR  0x20 /* Camera strobe control register (<= sh7722) */
+#define CSECR  0x24 /* Camera strobe emission count register (<= sh7722) */
+#define CRCNTR 0x28 /* CEU register control register */
+#define CRCMPR 0x2c /* CEU register forcible control register */
+#define CFLCR  0x30 /* Capture filter control register */
+#define CFSZR  0x34 /* Capture filter size clip register */
+#define CDWDR  0x38 /* Capture destination width register */
+#define CDAYR  0x3c /* Capture data address Y register */
+#define CDACR  0x40 /* Capture data address C register */
+#define CDBYR  0x44 /* Capture data bottom-field address Y register */
+#define CDBCR  0x48 /* Capture data bottom-field address C register */
+#define CBDSR  0x4c /* Capture bundle destination size register */
+#define CFWCR  0x5c /* Firewall operation control register */
+#define CLFCR  0x60 /* Capture low-pass filter control register */
+#define CDOCR  0x64 /* Capture data output control register */
+#define CDDCR  0x68 /* Capture data complexity level register */
+#define CDDAR  0x6c /* Capture data complexity level address register */
+#define CEIER  0x70 /* Capture event interrupt enable register */
+#define CETCR  0x74 /* Capture event flag clear register */
+#define CSTSR  0x7c /* Capture status register */
+#define CSRTR  0x80 /* Capture software reset register */
+#define CDSSR  0x84 /* Capture data size register */
+#define CDAYR2 0x90 /* Capture data address Y register 2 */
+#define CDACR2 0x94 /* Capture data