Re: [PATCH v7] media: vb2: Take queue or device lock in mmap-related vb2 ioctl handlers

2013-08-06 Thread Hans Verkuil
On 08/06/2013 10:10 PM, Laurent Pinchart wrote:
> The vb2_fop_mmap() and vb2_fop_get_unmapped_area() functions are plug-in
> implementations of the mmap() and get_unmapped_area() file operations
> that call vb2_mmap() and vb2_get_unmapped_area() on the queue
> associated with the video device. Neither the
> vb2_fop_mmap/vb2_fop_get_unmapped_area nor the
> v4l2_mmap/vb2_get_unmapped_area functions in the V4L2 core take any
> lock, leading to race conditions between mmap/get_unmapped_area and
> other buffer-related ioctls such as VIDIOC_REQBUFS.
> 
> Fix it by taking the queue or device lock around the vb2_mmap() and
> vb2_get_unmapped_area() calls.
> 
> Signed-off-by: Laurent Pinchart 

Acked-by: Hans Verkuil 

Thanks!

Hans

> ---
>  drivers/media/v4l2-core/videobuf2-core.c | 18 ++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
> index 9fc4bab..c9b50c7 100644
> --- a/drivers/media/v4l2-core/videobuf2-core.c
> +++ b/drivers/media/v4l2-core/videobuf2-core.c
> @@ -2578,8 +2578,15 @@ EXPORT_SYMBOL_GPL(vb2_ioctl_expbuf);
>  int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma)
>  {
>   struct video_device *vdev = video_devdata(file);
> + struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
> + int err;
>  
> - return vb2_mmap(vdev->queue, vma);
> + if (lock && mutex_lock_interruptible(lock))
> + return -ERESTARTSYS;
> + err = vb2_mmap(vdev->queue, vma);
> + if (lock)
> + mutex_unlock(lock);
> + return err;
>  }
>  EXPORT_SYMBOL_GPL(vb2_fop_mmap);
>  
> @@ -2685,8 +2692,15 @@ unsigned long vb2_fop_get_unmapped_area(struct file *file, unsigned long addr,
>   unsigned long len, unsigned long pgoff, unsigned long flags)
>  {
>   struct video_device *vdev = video_devdata(file);
> + struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
> + int ret;
>  
> - return vb2_get_unmapped_area(vdev->queue, addr, len, pgoff, flags);
> + if (lock && mutex_lock_interruptible(lock))
> + return -ERESTARTSYS;
> + ret = vb2_get_unmapped_area(vdev->queue, addr, len, pgoff, flags);
> + if (lock)
> + mutex_unlock(lock);
> + return ret;
>  }
>  EXPORT_SYMBOL_GPL(vb2_fop_get_unmapped_area);
>  #endif
> 



Re: [RFC v3 09/13] [media] exynos5-fimc-is: Add the hardware pipeline control

2013-08-06 Thread Sachin Kamat
Hi Arun,

On 4 August 2013 20:30, Sylwester Nawrocki  wrote:
> Hi Arun,
>
> On 08/02/2013 05:02 PM, Arun Kumar K wrote:
>>
>> This patch adds the crucial hardware pipeline control for the
>> fimc-is driver. All the subdev nodes will call these pipeline
>> interfaces to reach the hardware. This module is responsible for
>> configuring and maintaining the hardware pipeline involving
>> multiple sub-IPs like ISP, DRC, scalers, ODC, 3DNR, FD etc.
>>
>> Signed-off-by: Arun Kumar K
>> Signed-off-by: Kilyeon Im
>> ---
>>   .../media/platform/exynos5-is/fimc-is-pipeline.c   | 1961
>> 
>>   .../media/platform/exynos5-is/fimc-is-pipeline.h   |  129 ++
>>   2 files changed, 2090 insertions(+)
>>   create mode 100644 drivers/media/platform/exynos5-is/fimc-is-pipeline.c
>>   create mode 100644 drivers/media/platform/exynos5-is/fimc-is-pipeline.h

[snip]

>> +   setfile_name, &is->pdev->dev);
>> +   if (ret != 0) {
>> +   pr_err("Setfile %s not found\n", setfile_name);
>
>
> dev_err(), please. I'm a bit tired of commenting on this excessive pr_*
> usage. Please really make sure dev_err()/v4l2_err() is used where possible.

pr_* should be used only when a device pointer is not available. In all
other cases dev_* takes priority.
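
For the quoted pr_err() above, the dev_err() form would look roughly like
this (a sketch, assuming "is" is the fimc_is context that owns the platform
device, as in the quoted request_firmware() call):

	dev_err(&is->pdev->dev, "Setfile %s not found\n", setfile_name);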

-- 
With warm regards,
Sachin


Re: [RFC v3 10/13] [media] exynos5-fimc-is: Add the hardware interface module

2013-08-06 Thread Arun Kumar K
Hi Sylwester,

On Sun, Aug 4, 2013 at 8:33 PM, Sylwester Nawrocki
 wrote:
> Hi Arun,
>
>
> On 08/02/2013 05:02 PM, Arun Kumar K wrote:
>>
>> The hardware interface module finally sends the commands to the
>> FIMC-IS firmware and runs the interrupt handler for getting the
>> responses.
>>
>> Signed-off-by: Arun Kumar K
>> Signed-off-by: Kilyeon Im
>> ---

[snip]

>> +static int itf_get_state(struct fimc_is_interface *itf,
>> +   unsigned long state)
>> +{
>> +   int ret = 0;
>> +   unsigned long flags;
>> +
>> +   spin_lock_irqsave(&itf->slock_state, flags);
>> +   ret = test_bit(state, &itf->state);
>
>
> Shouldn't it be __test_bit() ?
>

__test_bit() is not available!
In include/asm-generic/bitops/non-atomic.h, all other ops
are prefixed with __xxx(), but it's just test_bit().
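
Just to illustrate the naming asymmetry (a minimal sketch; the bitmap and
the pr_debug() call are only for illustration):

	unsigned long state = 0;

	__set_bit(3, &state);		/* non-atomic modifier, __ prefix */
	set_bit(4, &state);		/* atomic counterpart, no prefix */
	if (test_bit(3, &state))	/* read helper: only one spelling */
		pr_debug("bit 3 is set\n");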

Regards
Arun


Re: [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread John Stultz
On Tue, Aug 6, 2013 at 5:15 AM, Rob Clark  wrote:
> On Tue, Aug 6, 2013 at 7:31 AM, Tom Cooksey  wrote:
>>
>>> > So in some respects, there is a constraint on how buffers which will
>>> > be drawn to using the GPU are allocated. I don't really like the idea
>>> > of teaching the display controller DRM driver about the GPU buffer
>>> > constraints, even if they are fairly trivial like this. If the same
>>> > display HW IP is being used on several SoCs, it seems wrong somehow
>>> > to enforce those GPU constraints if some of those SoCs don't have a
>>> > GPU.
>>>
>>> Well, I suppose you could get min_pitch_alignment from devicetree, or
>>> something like this..
>>>
>>> In the end, the easy solution is just to make the display allocate to
>>> the worst-case pitch alignment.  In the early days of dma-buf
>>> discussions, we kicked around the idea of negotiating or
>>> programmatically describing the constraints, but that didn't really
>>> seem like a bounded problem.
>>
>> Yeah - I was around for some of those discussions and agree it's not
>> really an easy problem to solve.
>>
>>
>>
>>> > We may also then have additional constraints when sharing buffers
>>> > between the display HW and video decode or even camera ISP HW.
>>> > Programmatically describing buffer allocation constraints is very
>>> > difficult and I'm not sure you can actually do it - there's some
>>> > pretty complex constraints out there! E.g. I believe there's a
>>> > platform where Y and UV planes of the reference frame need to be in
>>> > separate DRAM banks for real-time 1080p decode, or something like
>>> > that?
>>>
>>> yes, this was discussed.  This is different from pitch/format/size
>>> constraints.. it is really just a placement constraint (ie. where do
>>> the physical pages go).  IIRC the conclusion was to use a dummy
>>> device with its own CMA pool for attaching the Y vs UV buffers.
>>>
>>> > Anyway, I guess my point is that even if we solve how to allocate
>>> > buffers which will be shared between the GPU and display HW such that
>>> > both sets of constraints are satisfied, that may not be the end of
>>> > the story.
>>> >
>>>
>>> that was part of the reason to punt this problem to userspace ;-)
>>>
>>> In practice, the kernel drivers don't usually know too much about
>>> the dimensions/format/etc.. that is really userspace level knowledge.
>>> There are a few exceptions when the kernel needs to know how to setup
>>> GTT/etc for tiled buffers, but normally this sort of information is up
>>> at the next level up (userspace, and drm_framebuffer in case of
>>> scanout).  Userspace media frameworks like GStreamer already have a
>>> concept of format/caps negotiation.  For non-display<->gpu sharing, I
>>> think this is probably where this sort of constraint negotiation
>>> should be handled.
>>
>> I agree that user-space will know which devices will access the buffer
>> and thus can figure out at least a common pixel format. Though I'm not
>> so sure userspace can figure out more low-level details like alignment
>> and placement in physical memory, etc.
>
> well, let's divide things up into two categories:
>
> 1) the arrangement and format of pixels.. ie. what userspace would
> need to know if it mmap's a buffer.  This includes pixel format,
> stride, etc.  This should be negotiated in userspace, it would be
> crazy to try to do this in the kernel.
>
> 2) the physical placement of the pages.  Ie. whether it is contiguous
> or not.  Which bank the pages in the buffer are placed in, etc.  This
> is not visible to userspace.  This is the purpose of the attach step,
> so you know all the devices involved in sharing up front before
> allocating the backing pages.  (Or in the worst case, if you have a
> "late attacher" you at least know when no device is doing dma access
> to a buffer and can reallocate and move the buffer.)  A long time

One concern I know the Android folks have expressed previously (and
correct me if it's no longer an objection) is that this attach-time
in-kernel constraint solving / moving or reallocating buffers is
likely to hurt determinism.  If I understood, their perspective was
that userland knows the device path the buffers will travel through,
so why not leverage that knowledge, rather than having the kernel
sort it out for itself after the fact.

The concern about determinism even makes them hesitant about CMA, over
things like carveout, as they don't want to be moving pages around at
allocation time (which could hurt reasonable use cases like the time
it takes to launch a camera app - which is quite important). Though
maybe this concern will lessen as more CMA solutions ship.

I worry some of this split between fully general solutions vs
hard-coded known constraints is somewhat intractable. But what might
make it easier to get android folks interested in approaches like the
attach-time allocation / relocating on late-attach you're proposing is
if there is maybe some way, as you suggested, to hint 

Re: [PATCH 2/2] media: s5p-mfc: remove DT hacks and simplify initialization code

2013-08-06 Thread Tomasz Figa
Hi Kukjin,

On Wednesday 07 of August 2013 07:13:09 Kukjin Kim wrote:
> On 08/06/13 19:22, Kamil Debski wrote:
> > Hi Kukjin,
> > 
> > This patch looks good.
> > 
> > Best wishes,
> > Kamil Debski
> > 
> >> From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
> >> Sent: Monday, August 05, 2013 2:27 PM
> >> 
> >> This patch removes custom initialization of reserved memory regions
> >> from the s5p-mfc driver. Memory initialization can now be handled by
> >> generic code.
> >> 
> >> Signed-off-by: Marek Szyprowski
> > 
> > Acked-by: Kamil Debski
> 
> Kamil, thanks for your ack.
> 
> Applied.

This is a nice cleanup, but I don't think it should be applied yet, 
because it depends on patch 1/2, which needs an Ack from DT maintainers.

Best regards,
Tomasz



Re: [PATCH 2/2] media: s5p-mfc: remove DT hacks and simplify initialization code

2013-08-06 Thread Kukjin Kim

On 08/06/13 19:22, Kamil Debski wrote:

Hi Kukjin,

This patch looks good.

Best wishes,
Kamil Debski


From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
Sent: Monday, August 05, 2013 2:27 PM

This patch removes custom initialization of reserved memory regions
from the s5p-mfc driver. Memory initialization can now be handled by
generic code.

Signed-off-by: Marek Szyprowski


Acked-by: Kamil Debski


Kamil, thanks for your ack.

Applied.
- Kukjin


[PATCH v7] media: vb2: Take queue or device lock in mmap-related vb2 ioctl handlers

2013-08-06 Thread Laurent Pinchart
The vb2_fop_mmap() and vb2_fop_get_unmapped_area() functions are plug-in
implementations of the mmap() and get_unmapped_area() file operations
that call vb2_mmap() and vb2_get_unmapped_area() on the queue
associated with the video device. Neither the
vb2_fop_mmap/vb2_fop_get_unmapped_area nor the
v4l2_mmap/vb2_get_unmapped_area functions in the V4L2 core take any
lock, leading to race conditions between mmap/get_unmapped_area and
other buffer-related ioctls such as VIDIOC_REQBUFS.

Fix it by taking the queue or device lock around the vb2_mmap() and
vb2_get_unmapped_area() calls.

Signed-off-by: Laurent Pinchart 
---
 drivers/media/v4l2-core/videobuf2-core.c | 18 ++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
index 9fc4bab..c9b50c7 100644
--- a/drivers/media/v4l2-core/videobuf2-core.c
+++ b/drivers/media/v4l2-core/videobuf2-core.c
@@ -2578,8 +2578,15 @@ EXPORT_SYMBOL_GPL(vb2_ioctl_expbuf);
 int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma)
 {
struct video_device *vdev = video_devdata(file);
+   struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
+   int err;
 
-   return vb2_mmap(vdev->queue, vma);
+   if (lock && mutex_lock_interruptible(lock))
+   return -ERESTARTSYS;
+   err = vb2_mmap(vdev->queue, vma);
+   if (lock)
+   mutex_unlock(lock);
+   return err;
 }
 EXPORT_SYMBOL_GPL(vb2_fop_mmap);
 
@@ -2685,8 +2692,15 @@ unsigned long vb2_fop_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
 {
struct video_device *vdev = video_devdata(file);
+   struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
+   int ret;
 
-   return vb2_get_unmapped_area(vdev->queue, addr, len, pgoff, flags);
+   if (lock && mutex_lock_interruptible(lock))
+   return -ERESTARTSYS;
+   ret = vb2_get_unmapped_area(vdev->queue, addr, len, pgoff, flags);
+   if (lock)
+   mutex_unlock(lock);
+   return ret;
 }
 EXPORT_SYMBOL_GPL(vb2_fop_get_unmapped_area);
 #endif
-- 
Regards,

Laurent Pinchart



Re: [PATCH v6 04/10] media: vb2: Take queue or device lock in vb2_fop_mmap()

2013-08-06 Thread Laurent Pinchart
Hi Hans,

On Tuesday 06 August 2013 12:39:27 Hans Verkuil wrote:
> On Mon 5 August 2013 19:53:23 Laurent Pinchart wrote:
> > The vb2_fop_mmap() function is a plug-in implementation of the mmap()
> > file operation that calls vb2_mmap() on the queue associated with the
> > video device. Neither the vb2_fop_mmap() function nor the v4l2_mmap()
> > mmap handler in the V4L2 core take any lock, leading to race conditions
> > between mmap() and other buffer-related ioctls such as VIDIOC_REQBUFS.
> > 
> > Fix it by taking the queue or device lock around the vb2_mmap() call.
> 
> Hi Laurent,
> 
> Can you do the same for vb2_fop_get_unmapped_area()?

Sure. I'll repost a v7 of this patch that fixes both mmap and 
get_unmapped_area.

> > Signed-off-by: Laurent Pinchart
> > 
> > ---
> > 
> >  drivers/media/v4l2-core/videobuf2-core.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
> > index 9fc4bab..bd4bade 100644
> > --- a/drivers/media/v4l2-core/videobuf2-core.c
> > +++ b/drivers/media/v4l2-core/videobuf2-core.c
> > @@ -2578,8 +2578,15 @@ EXPORT_SYMBOL_GPL(vb2_ioctl_expbuf);
> > 
> >  int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma)
> >  {
> >  
> > struct video_device *vdev = video_devdata(file);
> > 
> > +   struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
> > +   int err;
> > 
> > -   return vb2_mmap(vdev->queue, vma);
> > +   if (lock && mutex_lock_interruptible(lock))
> > +   return -ERESTARTSYS;
> > +   err = vb2_mmap(vdev->queue, vma);
> > +   if (lock)
> > +   mutex_unlock(lock);
> > +   return err;
> > 
> >  }
> >  EXPORT_SYMBOL_GPL(vb2_fop_mmap);
-- 
Regards,

Laurent Pinchart


Re: [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Rob Clark
On Tue, Aug 6, 2013 at 1:38 PM, Tom Cooksey  wrote:
>
>> >> ... This is the purpose of the attach step,
>> >> so you know all the devices involved in sharing up front before
>> >> allocating the backing pages. (Or in the worst case, if you have a
>> >> "late attacher" you at least know when no device is doing dma access
>> >> to a buffer and can reallocate and move the buffer.)  A long time
>> >> back, I had a patch that added a field or two to 'struct
>> >> device_dma_parameters' so that it could be known if a device
>> >> required contiguous buffers.. looks like that never got merged, so
>> >> I'd need to dig that back up and resend it.  But the idea was to
>> >> have the 'struct device' encapsulate all the information that would
>> >> be needed to do-the-right-thing when it comes to placement.
>> >
>> > As I understand it, it's up to the exporting device to allocate the
>> > memory backing the dma_buf buffer. I guess the latest possible point
>> > you can allocate the backing pages is when map_dma_buf is first
>> > called? At that point the exporter can iterate over the current set
>> > of attachments, programmatically determine all the constraints of
>> > all the attached drivers and attempt to allocate the backing pages
>> > in such a way as to satisfy all those constraints?
>>
>> yes, this is the idea..  possibly some room for some helpers to help
>> out with this, but that is all under the hood from userspace
>> perspective
>>
>> > Didn't you say that programmatically describing device placement
>> > constraints was an unbounded problem? I guess we would have to
>> > accept that it's not possible to describe all possible constraints
>> > and instead find a way to describe the common ones?
>>
>> well, the point I'm trying to make is that by dividing your constraints
>> into two groups, one that impacts and is handled by userspace, and one
>> that is in the kernel (ie. where the pages go), you cut down the
>> number of permutations that the kernel has to care about considerably.
>>  And the kernel already cares about, for example, what range of addresses
>> a device can dma to/from.  I think really the only thing missing
>> is the max # of sglist entries (contiguous or not)
>
> I think it's more than physically contiguous or not.
>
> For example, it can be more efficient to use large page sizes on
> devices with IOMMUs to reduce TLB traffic. I think the size and even
> the availability of large pages varies between different IOMMUs.

sure.. but I suppose if we can spiff out dma_params to express "I need
contiguous", perhaps we can add some way to express "I prefer
as-contiguous-as-possible".. either way, this is about where the pages
are placed, and not about the layout of pixels within the page, so it
should be in the kernel.  It's something that is missing, but I believe
that it belongs in dma_params and hidden behind dma_alloc_*() for
simple drivers.
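
As a rough illustration of the kind of dma_params extension being talked
about here (a sketch of an unmerged idea: only the first two fields exist
in today's struct device_dma_parameters, the rest are hypothetical):

	struct device_dma_parameters {
		unsigned int max_segment_size;
		unsigned long segment_boundary_mask;
		/* hypothetical additions along the lines discussed above: */
		unsigned int max_segment_count;	/* max # of sglist entries */
		bool require_contiguous;	/* "I need contiguous" */
		bool prefer_contiguous;		/* "as contiguous as possible" */
	};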

> There's also the issue of buffer stride alignment. As I say, if the
> buffer is to be written by a tile-based GPU like Mali, it's more
> efficient if the buffer's stride is aligned to the max AXI bus burst
> length. Though I guess a buffer stride only makes sense as a concept
> when interpreting the data as a linear-layout 2D image, so perhaps
> belongs in user-space along with format negotiation?
>

Yeah.. this isn't about where the pages go, but about the arrangement
within a page.

And, well, except for hw that supports the same tiling (or
compressed-fb) in display+gpu, you probably aren't sharing tiled
buffers.

>
>> > One problem with this is it duplicates a lot of logic in each
>> > driver which can export a dma_buf buffer. Each exporter will need to
>> > do pretty much the same thing: iterate over all the attachments,
>> > determine all the constraints (assuming that can be done) and
>> > allocate pages such that the lowest-common-denominator is satisfied.
>> >
>> > Perhaps rather than duplicating that logic in every driver, we could
>> > instead move allocation of the backing pages into dma_buf itself?
>> >
>>
>> I tend to think it is better to add helpers as we see common patterns
>> emerge, which drivers can opt-in to using.  I don't think that we
>> should move allocation into dma_buf itself, but it would perhaps be
>> useful to have dma_alloc_*() variants that could allocate for multiple
>> devices.
>
> A helper could work I guess, though I quite like the idea of having
> dma_alloc_*() variants which take a list of devices to allocate memory
> for.
>
>
>> That would help for simple stuff, although I'd suspect
>> eventually a GPU driver will move away from that.  (Since you probably
>> want to play tricks w/ pools of pages that are pre-zero'd and in the
>> correct cache state, use spare cycles on the gpu or dma engine to
>> pre-zero uncached pages, and games like that.)
>
> So presumably you're talking about a GPU driver being the exporter
> here? If so, how could the GPU driver do these kinds of tricks on
> memory shared with another device?

cron job: media_tree daily build: WARNINGS

2013-08-06 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:   Tue Aug  6 19:00:17 CEST 2013
git branch: test
git hash:   dfb9f94e8e5e7f73c8e2bcb7d4fb1de57e7c333d
gcc version:i686-linux-gcc (GCC) 4.8.1
sparse version: v0.4.5-rc1
host hardware:  x86_64
host os:3.9-7.slh.1-amd64

linux-git-arm-at91: OK
linux-git-arm-davinci: OK
linux-git-arm-exynos: OK
linux-git-arm-mx: OK
linux-git-arm-omap: OK
linux-git-arm-omap1: OK
linux-git-arm-pxa: OK
linux-git-blackfin: OK
linux-git-i686: OK
linux-git-m32r: OK
linux-git-mips: OK
linux-git-powerpc64: OK
linux-git-sh: OK
linux-git-x86_64: OK
linux-2.6.31.14-i686: WARNINGS
linux-2.6.32.27-i686: WARNINGS
linux-2.6.33.7-i686: WARNINGS
linux-2.6.34.7-i686: WARNINGS
linux-2.6.35.9-i686: WARNINGS
linux-2.6.36.4-i686: WARNINGS
linux-2.6.37.6-i686: WARNINGS
linux-2.6.38.8-i686: WARNINGS
linux-2.6.39.4-i686: WARNINGS
linux-3.0.60-i686: WARNINGS
linux-3.10-i686: OK
linux-3.1.10-i686: WARNINGS
linux-3.2.37-i686: WARNINGS
linux-3.3.8-i686: WARNINGS
linux-3.4.27-i686: WARNINGS
linux-3.5.7-i686: WARNINGS
linux-3.6.11-i686: WARNINGS
linux-3.7.4-i686: WARNINGS
linux-3.8-i686: WARNINGS
linux-3.9.2-i686: WARNINGS
linux-2.6.31.14-x86_64: WARNINGS
linux-2.6.32.27-x86_64: WARNINGS
linux-2.6.33.7-x86_64: WARNINGS
linux-2.6.34.7-x86_64: WARNINGS
linux-2.6.35.9-x86_64: WARNINGS
linux-2.6.36.4-x86_64: WARNINGS
linux-2.6.37.6-x86_64: WARNINGS
linux-2.6.38.8-x86_64: WARNINGS
linux-2.6.39.4-x86_64: WARNINGS
linux-3.0.60-x86_64: WARNINGS
linux-3.10-x86_64: OK
linux-3.1.10-x86_64: WARNINGS
linux-3.2.37-x86_64: WARNINGS
linux-3.3.8-x86_64: WARNINGS
linux-3.4.27-x86_64: WARNINGS
linux-3.5.7-x86_64: WARNINGS
linux-3.6.11-x86_64: WARNINGS
linux-3.7.4-x86_64: WARNINGS
linux-3.8-x86_64: WARNINGS
linux-3.9.2-x86_64: WARNINGS
apps: WARNINGS
spec-git: OK
sparse version: v0.4.5-rc1
sparse: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Tuesday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Tuesday.tar.bz2

The Media Infrastructure API from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/media.html


Re: mceusb Fintek ir transmitter only works when X is not running

2013-08-06 Thread Rajil Saraswat
>
> It's not about whether there is enough bandwidth, it's about whether
> issuing more usb urbs would overflow the bandwidth allocated to other
> devices (whether in use or not). Make sure you have
> CONFIG_USB_EHCI_TT_NEWSCHED defined in your kernel.
>

This did the trick! After enabling this along with
CONFIG_USB_EHCI_ROOT_HUB, irsend works. Both TX ports are
working independently.

Thanks a lot.


RE: width and height of JPEG compressed images

2013-08-06 Thread Thomas Vajzovic
Hi,

On 24 July 2013 10:30 Sylwester Nawrocki wrote:
> On 07/22/2013 10:40 AM, Thomas Vajzovic wrote:
>> On 21 July 2013 21:38 Sylwester Nawrocki wrote:
>>> On 07/19/2013 10:28 PM, Sakari Ailus wrote:
 On Sat, Jul 06, 2013 at 09:58:23PM +0200, Sylwester Nawrocki wrote:
> On 07/05/2013 10:22 AM, Thomas Vajzovic wrote:
>
>> The hardware reads AxB sensor pixels from its array, resamples
>> them to CxD image pixels, and then compresses them to ExF bytes.
>>
>> If the sensor driver is only told the user's requested sizeimage, it
>> can be made to factorize (ExF) into (E,F) itself, but then both the
>> parallel interface and the 2D DMA peripheral need to be told the
>> particular factorization that it has chosen.
>>
>> If the user requests sizeimage which cannot be satisfied (eg: a prime
>> number) then it will need to return (E,F) to the bridge driver which
>> does not multiply exactly to sizeimage.  Because of this the bridge
>> driver must set the corrected value of sizeimage which it returns to
>> userspace to the product ExF.
>
> Ok, let's consider following data structure describing the frame:
>
> struct v4l2_frame_desc_entry {
>   u32 flags;
>   u32 pixelcode;
>   u32 samples_per_line;
>   u32 num_lines;
>   u32 size;
> };
>
> I think we could treat the frame descriptor as being at a lower level in
> the protocol stack than struct v4l2_mbus_framefmt.
>
> Then the bridge would set size and pixelcode and the subdev would
> return (E, F) in (samples_per_frame, num_lines) and adjust size if
> required. Number of bits per sample can be determined by pixelcode.
>
> It needs to be considered that for some sensor drivers it might not
> be immediately clear what the samples_per_line, num_lines values are.
> In such a case those fields could be left zeroed and the bridge driver
> could signal such a condition as a more or less critical error. At the
> end of the day the specific sensor driver would need to be updated to
> interwork with a bridge that requires samples_per_line, num_lines.

I think we ought to try to consider the four cases:

1D sensor and 1D bridge: already works

2D sensor and 2D bridge: my use case

1D sensor and 2D bridge, 2D sensor and 1D bridge:
Perhaps both of these cases could be made to work by setting:
num_lines = 1; samples_per_line = ((size * 8) / bpp);

(Obviously this would also require the appropriate pull-up/down
on the second sync input on a 2D bridge).

Since the frame descriptor interface is still new and used in so
few drivers, is it reasonable to expect them all to be fixed to
do this?
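
To make the 1D fallback above concrete, here is a sketch using the
v4l2_frame_desc_entry layout quoted earlier in this thread (that structure
is still a proposal under discussion, not a merged API; bpp and sizeimage
are assumed to come from the negotiated format):

	struct v4l2_frame_desc_entry entry = {
		.pixelcode = V4L2_MBUS_FMT_JPEG_1X8,	/* example bus code */
		.size = sizeimage,		/* requested by the bridge */
		.num_lines = 1,			/* 1D: whole frame is one "line" */
	};

	entry.samples_per_line = (entry.size * 8) / bpp;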

> Not sure if we need to add image width and height in pixels to the
> above structure. It wouldn't make much sense when a single frame
> carries multiple images, e.g. interleaved YUV and compressed image
> data at different resolutions.

If image size were here then we are duplicating get_fmt/set_fmt.
But then, by having pixelcode here we are already duplicating part
of get_fmt/set_fmt.  If the bridge changes pixelcode and calls
set_frame_desc then is this equivalent to calling set_fmt?
I would like to see as much data normalization as possible and
eliminate the redundancy.

>> Whatever mechanism is chosen needs to have corresponding get/set/try
>> methods to be used when the user calls
>> VIDIOC_G_FMT/VIDIOC_S_FMT/VIDIOC_TRY_FMT.
>
> Agreed, it seems we need some sort of negotiation of those low
> level parameters.

Should there be set/get/try function pointers, or should the struct
include an enum member like v4l2_subdev_format.which to determine
which operation is to be performed?

Personally I think that it is a bit ugly having two different
function pointers for set_fmt/get_fmt but then a structure member
to determine between set/try.  IMHO it should be three function
pointers or one function with a three-valued enum in the struct.
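
A sketch of the single-callback option, just to show the shape of the idea
(the names are hypothetical and reuse the v4l2_frame_desc_entry proposal
from this thread):

	enum v4l2_frame_desc_op {
		V4L2_FRAME_DESC_OP_TRY,
		V4L2_FRAME_DESC_OP_SET,
		V4L2_FRAME_DESC_OP_GET,
	};

	struct v4l2_subdev_frame_desc_ops {
		int (*frame_desc)(struct v4l2_subdev *sd,
				  enum v4l2_frame_desc_op op,
				  struct v4l2_frame_desc_entry *fd);
	};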

Best regards,
Tom

--
Mr T. Vajzovic
Software Engineer
Infrared Integrated Systems Ltd
Visit us at www.irisys.co.uk


Re: [Linaro-mm-sig] [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Daniel Vetter
On Tue, Aug 6, 2013 at 2:18 PM, Lucas Stach  wrote:
> I strongly disagree with exposing low-level hardware details like tiling
> to userspace. If we have to do the negotiation of those things in
> userspace we will end up having to pipe that information through
> things like the wayland protocol. I don't see how this could ever be
> considered a good idea.
>
> I would rather see kernel drivers negotiating those things at dmabuf
> attach time in way invisible to userspace. I agree that this negotiation
> thing isn't easy to get right for the plethora of different hardware
> constraints we see today, but I would rather see this in-kernel, where
> we have the chance to fix things up if needed, than in a fixed userspace
> interface.

The problem with negotiating tiling in the kernel, for drm/i915 at least,
is that userspace needs to know the tiling and, for allocation, actually
be in charge of it. The approach used by prime thus far is to
simply use linear buffers between different gpus. If you want
something better I guess you need a fairly tightly integrated
userspace driver, so I don't see how passing around the tiling
information should be an issue for those cases.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


RE: width and height of JPEG compressed images

2013-08-06 Thread Thomas Vajzovic
Hi,

On 26 July 2013 10:07 Sakari Ailus wrote:
> On Wed, Jul 24, 2013 at 10:39:11AM +0200, Sylwester Nawrocki wrote:
>> On 07/24/2013 09:47 AM, Thomas Vajzovic wrote:
>>> On 23 July 2013 23:21 Sakari Ailus wrote:
 On Sun, Jul 21, 2013 at 10:38:18PM +0200, Sylwester Nawrocki wrote:
> On 07/19/2013 10:28 PM, Sakari Ailus wrote:
>> On Sat, Jul 06, 2013 at 09:58:23PM +0200, Sylwester Nawrocki wrote:
>>> On 07/05/2013 10:22 AM, Thomas Vajzovic wrote:
>>>
 The hardware reads AxB sensor pixels from its array, resamples
 them to CxD image pixels, and then compresses them to ExF bytes.

>>> sensor matrix (AxB pixels) ->  binning/skipping (CxD pixels) ->
>>> -> JPEG compression (width = C, height = D, sizeimage ExF bytes)
>>
>> Does the user need to specify ExF, for other purposes than
>> limiting the size of the image? I would leave this up to the
>> sensor driver (with reasonable alignment). The sensor driver
>> would tell about this to the receiver through
>
> AFAIU ExF is closely related to the memory buffer size, so the
> sensor driver itself wouldn't have enough information to fix up ExF, 
> would it ?

 If the desired sizeimage is known, F can be calculated if E is
 fixed, say
 1024 should probably work for everyone, shouldn't it?
>>>
>>> It's a nice clean idea (and I did already consider it) but it
>>> reduces the flexibility of the system as a whole.
>>>
>>> Suppose an embedded device wants to send the compressed image over a
>>> network in packets of 1500 bytes, and they want to allow 3 packets
>>> per frame.  Your proposal limits sizeimage to a multiple of 1K, so
>>> they have to set sizeimage to 4K when they want 4.5K, meaning that
>>> they waste 500 bytes of bandwidth every frame.
>>>
>>> You could say "tough luck, extra overhead like this is something you
>>> should expect if you want to use a general purpose API like V4L2",
>>> but why make it worse if we can make it better?
>>
>> I entirely agree with that. Another issue with a fixed number of samples
>> per line is that the internal (FIFO) line buffer size of the transmitter
>> devices will vary; for example, some devices might have a line buffer
>> smaller than the value we have arbitrarily chosen. I'd expect the
>> optimal number of samples per line to vary among different devices and
>> use cases.
>
> I guess the sensor driver could factor the size as well (provided it
> can choose an arbitrary size) but then to be fully generic, I think
> alignment must also be taken care of. Many receivers might require
> width to be even but some might have tighter requirements. They have
> a minimum width, too.

> Making this work in the generic case might not be worth the time
> and effort of being able to shave up to 1 kiB off of video buffer
> allocations.

I think that a good enough solution here is that the code within each
sensor driver that does the factorization has to be written to account
for whatever reasonable restrictions a bridge might require.

E.g. if userspace requests 49 bytes, it doesn't give back 7x7,
because it knows that some bridges don't like odd numbers.

A sensor driver author would have to do a quick survey of bridges to
see what was likely to be problematic.  A bit of common sense would
solve the vast majority of cases.  After that if the bridge didn't
like what the sensor set, then the whole operation would fail.  The
user would then have to make a feature request to the sensor driver
author saying "can you please tweak it to work with such-a-bridge".

This solution is only slightly more complicated than picking a fixed
width, and I think that the advantage is worth the extra complication.
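
A sketch of what such factorization code might look like inside a sensor
driver (the function name and the "even width, at most 1024" policy are
illustrative only):

	/* Split a requested sizeimage into E x F with E even, rounding the
	 * total up as little as possible when no exact split exists. */
	static void sensor_factor_sizeimage(u32 sizeimage, u32 *e, u32 *f)
	{
		u32 w;

		for (w = 1024; w >= 2; w -= 2) {	/* even widths only */
			if (sizeimage % w == 0) {
				*e = w;
				*f = sizeimage / w;
				return;
			}
		}
		/* e.g. 49 bytes: no even divisor, so round the size up
		 * slightly rather than returning 7x7. */
		*e = 2;
		*f = DIV_ROUND_UP(sizeimage, 2);
	}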

> Remember v4l2_buffer.length is different from
>  v4l2_pix_format.sizeimage.
> Hmm. Yes --- so to the sensor goes desired maximum size, and back
> you'd get ExF (i.e. buffer length) AND the size of the image.

I really don't understand this last paragraph. Try adding coffee ;-)

Best regards,
Tom

--
Mr T. Vajzovic
Software Engineer
Infrared Integrated Systems Ltd
Visit us at www.irisys.co.uk


Re: [Linaro-mm-sig] [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Rob Clark
On Tue, Aug 6, 2013 at 10:36 AM, Lucas Stach  wrote:
> On Tuesday 06.08.2013 at 10:14 -0400, Rob Clark wrote:
>> On Tue, Aug 6, 2013 at 8:18 AM, Lucas Stach  wrote:
> > On Tuesday 06.08.2013 at 12:31 +0100, Tom Cooksey wrote:
>> >> Hi Rob,
>> >>
>> >> +lkml
>> >>
>> >> > >> On Fri, Jul 26, 2013 at 11:58 AM, Tom Cooksey 
>> >> > >> wrote:
>> >> > >> >> >  * It abuses flags parameter of DRM_IOCTL_MODE_CREATE_DUMB to
>> >> > >> >> >also allocate buffers for the GPU. Still not sure how to
>> >> > >> >> >resolve this as we don't use DRM for our GPU driver.
>> >> > >> >>
>> >> > >> >> any thoughts/plans about a DRM GPU driver?  Ideally long term
>> >> > >> >> (esp. once the dma-fence stuff is in place), we'd have
>> >> > >> >> gpu-specific drm (gpu-only, no kms) driver, and SoC/display
>> >> > >> >> specific drm/kms driver, using prime/dmabuf to share between
>> >> > >> >> the two.
>> >> > >> >
>> >> > >> > The "extra" buffers we were allocating from armsoc DDX were really
>> >> > >> > being allocated through DRM/GEM so we could get an flink name
>> >> > >> > for them and pass a reference to them back to our GPU driver on
>> >> > >> > the client side. If it weren't for our need to access those
>> >> > >> > extra off-screen buffers with the GPU we wouldn't need to
>> >> > >> > allocate them with DRM at all. So, given they are really "GPU"
>> >> > >> > buffers, it does absolutely make sense to allocate them in a
>> >> > >> > different driver to the display driver.
>> >> > >> >
>> >> > >> > However, to avoid unnecessary memcpys & related cache
>> >> > >> > maintenance ops, we'd also like the GPU to render into buffers
>> >> > >> > which are scanned out by the display controller. So let's say
>> >> > >> > we continue using DRM_IOCTL_MODE_CREATE_DUMB to allocate scan
>> >> > >> > out buffers with the display's DRM driver but a custom ioctl
>> >> > >> > on the GPU's DRM driver to allocate non scanout, off-screen
>> >> > >> > buffers. Sounds great, but I don't think that really works
>> >> > >> > with DRI2. If we used two drivers to allocate buffers, which
>> >> > >> > of those drivers do we return in DRI2ConnectReply? Even if we
>> >> > >> > solve that somehow, GEM flink names are name-spaced to a
>> >> > >> > single device node (AFAIK). So when we do a DRI2GetBuffers,
>> >> > >> > how does the EGL in the client know which DRM device owns GEM
>> >> > >> > flink name "1234"? We'd need some pretty dirty hacks.
>> >> > >>
>> >> > >> You would return the name of the display driver allocating the
>> >> > >> buffers.  On the client side you can use generic ioctls to go from
>> >> > >> flink -> handle -> dmabuf.  So the client side would end up opening
>> >> > >> both the display drm device and the gpu, but without needing to know
>> >> > >> too much about the display.
>> >> > >
>> >> > > I think the bit I was missing was that a GEM bo for a buffer imported
>> >> > > using dma_buf/PRIME can still be flink'd. So the display controller's
>> >> > > DRM driver allocates scan-out buffers via the DUMB buffer allocate
>> >> > > ioctl. Those scan-out buffers can then be exported from the
>> >> > > display's DRM driver and imported into the GPU's DRM driver using
>> >> > > PRIME. Once imported into the GPU's driver, we can use flink to get a
>> >> > > name for that buffer within the GPU DRM driver's name-space to return
>> >> > > to the DRI2 client. That same namespace is also what DRI2 back-
>> >> > > buffers are allocated from, so I think that could work... Except...
>> >> >
>> >> > (and.. the general direction is that things will move more to just use
>> >> > dmabuf directly, ie. wayland or dri3)
>> >>
>> >> I agree, DRI2 is the only reason why we need a system-wide ID. I also
>> >> prefer buffers to be passed around by dma_buf fd, but we still need to
>> >> support DRI2 and will do for some time I expect.
>> >>
>> >>
>> >>
>> >> > >> > Anyway, that latter case also gets quite difficult. The "GPU"
>> >> > >> > DRM driver would need to know the constraints of the display
>> >> > >> > controller when allocating buffers intended to be scanned out.
>> >> > >> > For example, pl111 typically isn't behind an IOMMU and so
>> >> > >> > requires physically contiguous memory. We'd have to teach the
>> >> > >> > GPU's DRM driver about the constraints of the display HW. Not
>> >> > >> > exactly a clean driver model. :-(
>> >> > >> >
>> >> > >> > I'm still a little stuck on how to proceed, so any ideas
>> >> > >> > would greatly appreciated! My current train of thought is
>> >> > >> > having a kind of SoC-specific DRM driver which allocates
>> >> > >> > buffers for both display and GPU within a single GEM
>> >> > >> > namespace. That SoC-specific DRM driver could then know the
>> >> > >> > constraints of both the GPU and the display HW. We could then
>> >> > >> > use PRIME to export buffers allocated with the SoC DRM driver
>> >> > >> > and import them into the GPU and/or display DRM driver.
>> >> > >>
>> >> > >> Usually if 

Re: [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Rob Clark
On Tue, Aug 6, 2013 at 10:03 AM, Tom Cooksey  wrote:
> Hi Rob,
>
>> >> > We may also then have additional constraints when sharing buffers
>> >> > between the display HW and video decode or even camera ISP HW.
>> >> > Programmatically describing buffer allocation constraints is very
>> >> > difficult and I'm not sure you can actually do it - there's some
>> >> > pretty complex constraints out there! E.g. I believe there's a
>> >> > platform where Y and UV planes of the reference frame need to be
>> >> > in separate DRAM banks for real-time 1080p decode, or something
>> >> > like that?
>> >>
>> >> yes, this was discussed.  This is different from pitch/format/size
>> >> constraints.. it is really just a placement constraint (ie. where
>> >> do the physical pages go).  IIRC the conclusion was to use a dummy
>> >> device with its own CMA pool for attaching the Y vs UV buffers.
>> >>
>> >> > Anyway, I guess my point is that even if we solve how to allocate
>> >> > buffers which will be shared between the GPU and display HW such
>> >> > that both sets of constraints are satisfied, that may not be the
>> >> > end of the story.
>> >> >
>> >>
>> >> that was part of the reason to punt this problem to userspace ;-)
>> >>
>> >> In practice, the kernel drivers don't usually know too much about
>> >> the dimensions/format/etc.. that is really userspace level
>> >> knowledge. There are a few exceptions when the kernel needs to know
>> >> how to setup GTT/etc for tiled buffers, but normally this sort of
>> >> information is up at the next level up (userspace, and
>> >> drm_framebuffer in case of scanout).  Userspace media frameworks
>> >> like GStreamer already have a concept of format/caps negotiation.
>> >> For non-display<->gpu sharing, I think this is probably where this
>> >> sort of constraint negotiation should be handled.
>> >
>> > I agree that user-space will know which devices will access the
>> > buffer and thus can figure out at least a common pixel format.
>> > Though I'm not so sure userspace can figure out more low-level
>> > details like alignment and placement in physical memory, etc.
>> >
>>
>> well, let's divide things up into two categories:
>>
>> 1) the arrangement and format of pixels.. ie. what userspace would
>> need to know if it mmap's a buffer.  This includes pixel format,
>> stride, etc.  This should be negotiated in userspace, it would be
>> crazy to try to do this in the kernel.
>
> Absolutely. Pixel format has to be negotiated by user-space as in
> most cases, user-space can map the buffer and thus will need to
> know how to interpret the data.
>
>
>
>> 2) the physical placement of the pages.  Ie. whether it is contiguous
>> or not.  Which bank the pages in the buffer are placed in, etc.  This
>> is not visible to userspace.
>
> Seems sensible to me.
>
>
>> ... This is the purpose of the attach step,
>> so you know all the devices involved in sharing up front before
>> allocating the backing pages. (Or in the worst case, if you have a
>> "late attacher" you at least know when no device is doing dma access
>> to a buffer and can reallocate and move the buffer.)  A long time
>> back, I had a patch that added a field or two to 'struct
>> device_dma_parameters' so that it could be known if a device required
>> contiguous buffers.. looks like that never got merged, so I'd need to
>> dig that back up and resend it.  But the idea was to have the 'struct
>> device' encapsulate all the information that would be needed to
>> do-the-right-thing when it comes to placement.
>
> As I understand it, it's up to the exporting device to allocate the
> memory backing the dma_buf buffer. I guess the latest possible point
> you can allocate the backing pages is when map_dma_buf is first
> called? At that point the exporter can iterate over the current set
> of attachments, programmatically determine all the constraints of
> all the attached drivers and attempt to allocate the backing pages
> in such a way as to satisfy all those constraints?

yes, this is the idea..  possibly some room for some helpers to help
out with this, but that is all under the hood from userspace
perspective
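
A rough sketch of that allocate-at-first-map flow for a hypothetical
exporter (struct my_buffer and the my_* helpers, and the single "needs
contiguous" constraint, are placeholders; a real exporter would merge
whatever constraints it actually cares about):

	struct my_buffer {
		size_t size;
		struct page **pages;	/* backing pages, NULL until first map */
	};

	static struct sg_table *my_map_dma_buf(struct dma_buf_attachment *attach,
					       enum dma_data_direction dir)
	{
		struct my_buffer *buf = attach->dmabuf->priv;
		struct dma_buf_attachment *a;
		bool contig = false;

		mutex_lock(&attach->dmabuf->lock);
		if (!buf->pages) {
			/* First mapping: walk the attachments made so far
			 * and merge their placement constraints. */
			list_for_each_entry(a, &attach->dmabuf->attachments, node)
				contig |= my_dev_needs_contiguous(a->dev);
			buf->pages = my_alloc_backing(buf->size, contig);
		}
		mutex_unlock(&attach->dmabuf->lock);

		return my_map_for_device(buf, attach->dev, dir);
	}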

> Didn't you say that programmatically describing device placement
> constraints was an unbounded problem? I guess we would have to
> accept that it's not possible to describe all possible constraints
> and instead find a way to describe the common ones?

well, the point I'm trying to make is that by dividing your constraints
into two groups, one that impacts and is handled by userspace, and one
that is in the kernel (ie. where the pages go), you cut down the
number of permutations that the kernel has to care about considerably.
 And the kernel already cares about, for example, what range of addresses
a device can dma to/from.  I think really the only thing missing
is the max # of sglist entries (contiguous or not)

> One problem with this is it duplicates a lot of logic in each
> driver which can export a dma_buf buffer. Each e

Re: [Linaro-mm-sig] [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Lucas Stach
On Tuesday 06.08.2013 at 10:14 -0400, Rob Clark wrote:
> On Tue, Aug 6, 2013 at 8:18 AM, Lucas Stach  wrote:
> On Tuesday 06.08.2013 at 12:31 +0100, Tom Cooksey wrote:
> >> Hi Rob,
> >>
> >> +lkml
> >>
> >> > >> On Fri, Jul 26, 2013 at 11:58 AM, Tom Cooksey 
> >> > >> wrote:
> >> > >> >> >  * It abuses flags parameter of DRM_IOCTL_MODE_CREATE_DUMB to
> >> > >> >> >also allocate buffers for the GPU. Still not sure how to
> >> > >> >> >resolve this as we don't use DRM for our GPU driver.
> >> > >> >>
> >> > >> >> any thoughts/plans about a DRM GPU driver?  Ideally long term
> >> > >> >> (esp. once the dma-fence stuff is in place), we'd have
> >> > >> >> gpu-specific drm (gpu-only, no kms) driver, and SoC/display
> >> > >> >> specific drm/kms driver, using prime/dmabuf to share between
> >> > >> >> the two.
> >> > >> >
> >> > >> > The "extra" buffers we were allocating from armsoc DDX were really
> >> > >> > being allocated through DRM/GEM so we could get an flink name
> >> > >> > for them and pass a reference to them back to our GPU driver on
> >> > >> > the client side. If it weren't for our need to access those
> >> > >> > extra off-screen buffers with the GPU we wouldn't need to
> >> > >> > allocate them with DRM at all. So, given they are really "GPU"
> >> > >> > buffers, it does absolutely make sense to allocate them in a
> >> > >> > different driver to the display driver.
> >> > >> >
> >> > >> > However, to avoid unnecessary memcpys & related cache
> >> > >> > maintenance ops, we'd also like the GPU to render into buffers
> >> > >> > which are scanned out by the display controller. So let's say
> >> > >> > we continue using DRM_IOCTL_MODE_CREATE_DUMB to allocate scan
> >> > >> > out buffers with the display's DRM driver but a custom ioctl
> >> > >> > on the GPU's DRM driver to allocate non scanout, off-screen
> >> > >> > buffers. Sounds great, but I don't think that really works
> >> > >> > with DRI2. If we used two drivers to allocate buffers, which
> >> > >> > of those drivers do we return in DRI2ConnectReply? Even if we
> >> > >> > solve that somehow, GEM flink names are name-spaced to a
> >> > >> > single device node (AFAIK). So when we do a DRI2GetBuffers,
> >> > >> > how does the EGL in the client know which DRM device owns GEM
> >> > >> > flink name "1234"? We'd need some pretty dirty hacks.
> >> > >>
> >> > >> You would return the name of the display driver allocating the
> >> > >> buffers.  On the client side you can use generic ioctls to go from
> >> > >> flink -> handle -> dmabuf.  So the client side would end up opening
> >> > >> both the display drm device and the gpu, but without needing to know
> >> > >> too much about the display.
> >> > >
> >> > > I think the bit I was missing was that a GEM bo for a buffer imported
> >> > > using dma_buf/PRIME can still be flink'd. So the display controller's
> >> > > DRM driver allocates scan-out buffers via the DUMB buffer allocate
> >> > > ioctl. Those scan-out buffers can then be exported from the
> >> > > display's DRM driver and imported into the GPU's DRM driver using
> >> > > PRIME. Once imported into the GPU's driver, we can use flink to get a
> >> > > name for that buffer within the GPU DRM driver's name-space to return
> >> > > to the DRI2 client. That same namespace is also what DRI2 back-
> >> > > buffers are allocated from, so I think that could work... Except...
> >> >
> >> > (and.. the general direction is that things will move more to just use
> >> > dmabuf directly, ie. wayland or dri3)
> >>
> >> I agree, DRI2 is the only reason why we need a system-wide ID. I also
> >> prefer buffers to be passed around by dma_buf fd, but we still need to
> >> support DRI2 and will do for some time I expect.
> >>
> >>
> >>
> >> > >> > Anyway, that latter case also gets quite difficult. The "GPU"
> >> > >> > DRM driver would need to know the constraints of the display
> >> > >> > controller when allocating buffers intended to be scanned out.
> >> > >> > For example, pl111 typically isn't behind an IOMMU and so
> >> > >> > requires physically contiguous memory. We'd have to teach the
> >> > >> > GPU's DRM driver about the constraints of the display HW. Not
> >> > >> > exactly a clean driver model. :-(
> >> > >> >
> >> > >> > I'm still a little stuck on how to proceed, so any ideas
> >> > >> > would greatly appreciated! My current train of thought is
> >> > >> > having a kind of SoC-specific DRM driver which allocates
> >> > >> > buffers for both display and GPU within a single GEM
> >> > >> > namespace. That SoC-specific DRM driver could then know the
> >> > >> > constraints of both the GPU and the display HW. We could then
> >> > >> > use PRIME to export buffers allocated with the SoC DRM driver
> >> > >> > and import them into the GPU and/or display DRM driver.
> >> > >>
> >> > >> Usually if the display drm driver is allocating the buffers that
> >> > >> might be scanned out, it just needs to have minimal knowledge of
> >> > >> 

Re: [Linaro-mm-sig] [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Rob Clark
On Tue, Aug 6, 2013 at 8:18 AM, Lucas Stach  wrote:
> On Tuesday 06.08.2013 at 12:31 +0100, Tom Cooksey wrote:
>> Hi Rob,
>>
>> +lkml
>>
>> > >> On Fri, Jul 26, 2013 at 11:58 AM, Tom Cooksey 
>> > >> wrote:
>> > >> >> >  * It abuses flags parameter of DRM_IOCTL_MODE_CREATE_DUMB to
>> > >> >> >also allocate buffers for the GPU. Still not sure how to
>> > >> >> >resolve this as we don't use DRM for our GPU driver.
>> > >> >>
>> > >> >> any thoughts/plans about a DRM GPU driver?  Ideally long term
>> > >> >> (esp. once the dma-fence stuff is in place), we'd have
>> > >> >> gpu-specific drm (gpu-only, no kms) driver, and SoC/display
>> > >> >> specific drm/kms driver, using prime/dmabuf to share between
>> > >> >> the two.
>> > >> >
>> > >> > The "extra" buffers we were allocating from armsoc DDX were really
>> > >> > being allocated through DRM/GEM so we could get an flink name
>> > >> > for them and pass a reference to them back to our GPU driver on
>> > >> > the client side. If it weren't for our need to access those
>> > >> > extra off-screen buffers with the GPU we wouldn't need to
>> > >> > allocate them with DRM at all. So, given they are really "GPU"
>> > >> > buffers, it does absolutely make sense to allocate them in a
>> > >> > different driver to the display driver.
>> > >> >
>> > >> > However, to avoid unnecessary memcpys & related cache
>> > >> > maintenance ops, we'd also like the GPU to render into buffers
>> > >> > which are scanned out by the display controller. So let's say
>> > >> > we continue using DRM_IOCTL_MODE_CREATE_DUMB to allocate scan
>> > >> > out buffers with the display's DRM driver but a custom ioctl
>> > >> > on the GPU's DRM driver to allocate non scanout, off-screen
>> > >> > buffers. Sounds great, but I don't think that really works
>> > >> > with DRI2. If we used two drivers to allocate buffers, which
>> > >> > of those drivers do we return in DRI2ConnectReply? Even if we
>> > >> > solve that somehow, GEM flink names are name-spaced to a
>> > >> > single device node (AFAIK). So when we do a DRI2GetBuffers,
>> > >> > how does the EGL in the client know which DRM device owns GEM
>> > >> > flink name "1234"? We'd need some pretty dirty hacks.
>> > >>
>> > >> You would return the name of the display driver allocating the
>> > >> buffers.  On the client side you can use generic ioctls to go from
>> > >> flink -> handle -> dmabuf.  So the client side would end up opening
>> > >> both the display drm device and the gpu, but without needing to know
>> > >> too much about the display.
>> > >
>> > > I think the bit I was missing was that a GEM bo for a buffer imported
>> > > using dma_buf/PRIME can still be flink'd. So the display controller's
>> > > DRM driver allocates scan-out buffers via the DUMB buffer allocate
>> > > ioctl. Those scan-out buffers can then be exported from the
>> > > display's DRM driver and imported into the GPU's DRM driver using
>> > > PRIME. Once imported into the GPU's driver, we can use flink to get a
>> > > name for that buffer within the GPU DRM driver's name-space to return
>> > > to the DRI2 client. That same namespace is also what DRI2 back-
>> > > buffers are allocated from, so I think that could work... Except...
>> >
>> > (and.. the general direction is that things will move more to just use
>> > dmabuf directly, ie. wayland or dri3)
>>
>> I agree, DRI2 is the only reason why we need a system-wide ID. I also
>> prefer buffers to be passed around by dma_buf fd, but we still need to
>> support DRI2 and will do for some time I expect.
>>
>>
>>
>> > >> > Anyway, that latter case also gets quite difficult. The "GPU"
>> > >> > DRM driver would need to know the constraints of the display
>> > >> > controller when allocating buffers intended to be scanned out.
>> > >> > For example, pl111 typically isn't behind an IOMMU and so
>> > >> > requires physically contiguous memory. We'd have to teach the
>> > >> > GPU's DRM driver about the constraints of the display HW. Not
>> > >> > exactly a clean driver model. :-(
>> > >> >
>> > >> > I'm still a little stuck on how to proceed, so any ideas
>> > >> > would greatly appreciated! My current train of thought is
>> > >> > having a kind of SoC-specific DRM driver which allocates
>> > >> > buffers for both display and GPU within a single GEM
>> > >> > namespace. That SoC-specific DRM driver could then know the
>> > >> > constraints of both the GPU and the display HW. We could then
>> > >> > use PRIME to export buffers allocated with the SoC DRM driver
>> > >> > and import them into the GPU and/or display DRM driver.
>> > >>
>> > >> Usually if the display drm driver is allocating the buffers that
>> > >> might be scanned out, it just needs to have minimal knowledge of
>> > >> the GPU (pitch alignment constraints).  I don't think we need a
>> > >> 3rd device just to allocate buffers.
>> > >
>> > > While Mali can render to pretty much any buffer, there is a mild
>> > > performance improvement to be had if 

Re: [RFC v3 09/13] [media] exynos5-fimc-is: Add the hardware pipeline control

2013-08-06 Thread Arun Kumar K
Hi Sylwester,

On Sun, Aug 4, 2013 at 8:30 PM, Sylwester Nawrocki
 wrote:
> Hi Arun,
>
> On 08/02/2013 05:02 PM, Arun Kumar K wrote:
>>
>> This patch adds the crucial hardware pipeline control for the
>> fimc-is driver. All the subdev nodes will call these pipeline
>> interfaces to reach the hardware. This module is responsible for
>> configuring and maintaining the hardware pipeline involving
>> multiple sub-IPs like ISP, DRC, scalers, ODC, 3DNR, FD etc.
>>
>> Signed-off-by: Arun Kumar K
>> Signed-off-by: Kilyeon Im
>> ---

[snip]


>> +static int fimc_is_pipeline_isp_setparams(struct fimc_is_pipeline
>> *pipeline,
>> +   unsigned int enable)
>> +{
>> +   struct isp_param *isp_param =&pipeline->is_region->parameter.isp;
>> +   struct fimc_is *is = pipeline->is;
>> +   unsigned int indexes, lindex, hindex;
>> +   unsigned int sensor_width, sensor_height, scc_width, scc_height;
>> +   unsigned int crop_x, crop_y, isp_width, isp_height;
>> +   unsigned int sensor_ratio, output_ratio;
>> +   int ret;
>> +
>> +   /* Crop calculation */
>> +   sensor_width = pipeline->sensor_width;
>> +   sensor_height = pipeline->sensor_height;
>> +   scc_width = pipeline->scaler_width[SCALER_SCC];
>> +   scc_height = pipeline->scaler_height[SCALER_SCC];
>> +   isp_width = sensor_width;
>> +   isp_height = sensor_height;
>> +   crop_x = crop_y = 0;
>> +
>> +   sensor_ratio = sensor_width * 1000 / sensor_height;
>> +   output_ratio = scc_width * 1000 / scc_height;
>> +
>> +   if (sensor_ratio == output_ratio) {
>> +   isp_width = sensor_width;
>> +   isp_height = sensor_height;
>> +   } else if (sensor_ratio<  output_ratio) {
>> +   isp_height = (sensor_width * scc_height) / scc_width;
>> +   isp_height = ALIGN(isp_height, 2);
>> +   crop_y = ((sensor_height - isp_height)>>  1)&  0xFFFE;
>
>
> nit: Use ~1U instead of 0xFFFE.
>
>
>> +   } else {
>> +   isp_width = (sensor_height * scc_width) / scc_height;
>> +   isp_width = ALIGN(isp_width, 4);
>> +   crop_x =  ((sensor_width - isp_width)>>  1)&  0xFFFE;
>
>
> Ditto.
>
>> +   }
>> +   pipeline->isp_width = isp_width;
>> +   pipeline->isp_height = isp_height;
>> +
>> +   indexes = hindex = lindex = 0;
>> +
>> +   isp_param->otf_output.cmd = OTF_OUTPUT_COMMAND_ENABLE;
>> +   isp_param->otf_output.width = pipeline->sensor_width;
>> +   isp_param->otf_output.height = pipeline->sensor_height;
>> +   isp_param->otf_output.format = OTF_OUTPUT_FORMAT_YUV444;
>> +   isp_param->otf_output.bitwidth = OTF_OUTPUT_BIT_WIDTH_12BIT;
>> +   isp_param->otf_output.order = OTF_INPUT_ORDER_BAYER_GR_BG;
>> +   lindex |= LOWBIT_OF(PARAM_ISP_OTF_OUTPUT);
>> +   hindex |= HIGHBIT_OF(PARAM_ISP_OTF_OUTPUT);
>> +   indexes++;
>
>
> All right, let's stop this hindex/lindex/indexes madness. I've already
> commented on that IIRC. Nevertheless, this should be replaced with proper
> bitmap operations. A similar issue has been fixed in commit
>
>
>> +   isp_param->dma1_output.cmd = DMA_OUTPUT_COMMAND_DISABLE;
>> +   lindex |= LOWBIT_OF(PARAM_ISP_DMA1_OUTPUT);
>> +   hindex |= HIGHBIT_OF(PARAM_ISP_DMA1_OUTPUT);
>> +   indexes++;
>> +
>> +   isp_param->dma2_output.cmd = DMA_OUTPUT_COMMAND_DISABLE;
>> +   lindex |= LOWBIT_OF(PARAM_ISP_DMA2_OUTPUT);
>> +   hindex |= HIGHBIT_OF(PARAM_ISP_DMA2_OUTPUT);
>> +   indexes++;
>> +
>> +   if (enable)
>> +   isp_param->control.bypass = CONTROL_BYPASS_DISABLE;
>> +   else
>> +   isp_param->control.bypass = CONTROL_BYPASS_ENABLE;
>> +   isp_param->control.cmd = CONTROL_COMMAND_START;
>> +   isp_param->control.run_mode = 1;
>> +   lindex |= LOWBIT_OF(PARAM_ISP_CONTROL);
>> +   hindex |= HIGHBIT_OF(PARAM_ISP_CONTROL);
>> +   indexes++;
>> +
>> +   isp_param->dma1_input.cmd = DMA_INPUT_COMMAND_BUF_MNGR;
>> +   isp_param->dma1_input.width = sensor_width;
>> +   isp_param->dma1_input.height = sensor_height;
>> +   isp_param->dma1_input.dma_crop_offset_x = crop_x;
>> +   isp_param->dma1_input.dma_crop_offset_y = crop_y;
>> +   isp_param->dma1_input.dma_crop_width = isp_width;
>> +   isp_param->dma1_input.dma_crop_height = isp_height;
>> +   isp_param->dma1_input.bayer_crop_offset_x = 0;
>> +   isp_param->dma1_input.bayer_crop_offset_y = 0;
>> +   isp_param->dma1_input.bayer_crop_width = 0;
>> +   isp_param->dma1_input.bayer_crop_height = 0;
>> +   isp_param->dma1_input.user_min_frametime = 0;
>> +   isp_param->dma1_input.user_max_frametime = 6;
>> +   isp_param->dma1_input.wide_frame_gap = 1;
>> +   isp_param->dma1_input.frame_gap = 4096;
>> +   isp_param->dma1_input.line_gap = 45;
>> +   isp_param->dma1_input.order = DMA_INPUT_ORDER_GR_BG;
>> +   isp_param->dma1_input.p
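For illustration, a rough sketch of what the bitmap-operations suggestion in
the review comment above could look like, replacing the lindex/hindex/indexes
bookkeeping with the kernel bitmap API.  The structure name, the bitmap size
and the helpers are made up; only the PARAM_ISP_* indices come from the driver:

#include <linux/bitmap.h>
#include <linux/bitops.h>

#define FIMC_IS_MAX_PARAMS	64	/* illustrative upper bound */

struct fimc_is_param_set {
	DECLARE_BITMAP(changed, FIMC_IS_MAX_PARAMS);
};

/* instead of: lindex |= LOWBIT_OF(x); hindex |= HIGHBIT_OF(x); indexes++; */
static void fimc_is_mark_param(struct fimc_is_param_set *p, unsigned int idx)
{
	set_bit(idx, p->changed);
}

/* instead of passing (indexes, lindex, hindex) around separately */
static void fimc_is_apply_params(struct fimc_is_param_set *p)
{
	unsigned int idx;

	for_each_set_bit(idx, p->changed, FIMC_IS_MAX_PARAMS) {
		/* write parameter 'idx' to the IS firmware parameter region */
	}
	bitmap_zero(p->changed, FIMC_IS_MAX_PARAMS);
}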

Re: Syntek webcams and out-of-tree driver

2013-08-06 Thread Ondrej Zary
On Tuesday 06 August 2013, Hans de Goede wrote:
> Hi,
>
> On 08/05/2013 11:19 PM, Ondrej Zary wrote:
> > Hello,
> > the in-kernel stkwebcam driver (by Jaime Velasco Juan and Nicolas VIVIEN)
> > supports only two webcam types (USB IDs 0x174f:0xa311 and 0x05e1:0x0501).
> > There are many other Syntek webcam types that are not supported by this
> > driver (such as 0x174f:0x6a31 in Asus F5RL laptop).
> >
> > There is an out-of-tree GPL driver called stk11xx (by Martin Roos and
> > also Nicolas VIVIEN) at http://sourceforge.net/projects/syntekdriver/
> > which supports more webcams. It can even be compiled for the latest
> > kernels using the patch below and seems to work somehow (slow and buggy
> > but better than nothing) with the Asus F5RL.
>
> I took a quick look and there are a number of issues with this driver:
>
> 1) It conflicts usb-id wise with the new stk1160 driver (which supports
> usb-id 05e1:0408) so support for that usb-id, and any code only used for
> that id will need to be removed
>
> 2) "seems to work somehow (slow and buggy)" is not really the quality
> we aim for with in-kernel drivers. We definitely will want to remove
> any usb-ids, and any code only used for those ids, where there is overlap
> with the existing stkwebcam driver, to avoid regressions.
>
> 3) It does in-kernel bayer decoding, which is not acceptable; it needs to
> be modified to produce buffers with raw bayer data (libv4l will take care
> of the bayer decoding in userspace).
>
> 4) It is not using any of the new kernel infrastructure we have been adding
> over time, like the control-framework, videobuf2, etc. It would be best
> to convert this to a gspca sub driver (of which there are many already,
> which can serve as examples), so that it will use all the existing
> framework code.

Yes, this would be the best way - only extract the HW-dependent parts.
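For reference, a very rough skeleton of what such a gspca sub-driver could look
like once the HW-dependent register setup is lifted out of stk11xx.  The
174f:6a31 ID is the one from this thread; the driver name, the single raw bayer
mode and the empty callbacks are placeholders only:

#include <linux/module.h>
#include "gspca.h"

struct sd {
	struct gspca_dev gspca_dev;	/* !! must be the first member */
};

static const struct v4l2_pix_format vga_mode[] = {
	{640, 480, V4L2_PIX_FMT_SBGGR8, V4L2_FIELD_NONE,
		.bytesperline = 640,
		.sizeimage = 640 * 480,
		.colorspace = V4L2_COLORSPACE_SRGB},
};

static int sd_config(struct gspca_dev *gspca_dev,
		     const struct usb_device_id *id)
{
	gspca_dev->cam.cam_mode = vga_mode;
	gspca_dev->cam.nmodes = ARRAY_SIZE(vga_mode);
	return 0;
}

/* the stk11xx bridge/sensor setup would go into these two callbacks */
static int sd_init(struct gspca_dev *gspca_dev) { return 0; }
static int sd_start(struct gspca_dev *gspca_dev) { return 0; }

static void sd_pkt_scan(struct gspca_dev *gspca_dev, u8 *data, int len)
{
	/* hand the raw bayer payload to the framework; libv4l debayers it */
	gspca_frame_add(gspca_dev, INTER_PACKET, data, len);
}

static const struct sd_desc sd_desc = {
	.name     = "stk6a31",		/* placeholder name */
	.config   = sd_config,
	.init     = sd_init,
	.start    = sd_start,
	.pkt_scan = sd_pkt_scan,
};

static const struct usb_device_id device_table[] = {
	{ USB_DEVICE(0x174f, 0x6a31) },
	{ }
};
MODULE_DEVICE_TABLE(usb, device_table);

static int sd_probe(struct usb_interface *intf,
		    const struct usb_device_id *id)
{
	return gspca_dev_probe(intf, id, &sd_desc, sizeof(struct sd),
			       THIS_MODULE);
}

static struct usb_driver sd_driver = {
	.name       = "stk6a31",
	.id_table   = device_table,
	.probe      = sd_probe,
	.disconnect = gspca_disconnect,
};
module_usb_driver(sd_driver);

MODULE_LICENSE("GPL");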

> As a minimum, issues 1-3 need to be addressed before this can be merged. An
> alternative / better approach might be to simply lift only the code for
> your camera, and add a new gspca driver supporting only your camera.
>
> Either way, since none of the v4l developers has a laptop with such a
> camera, you will need to do most of the work yourself, as we cannot test.
>
> So congratulations, you've just become a v4l kernel developer :)

Unfortunately the laptop isn't mine. I'll have it only for a while but will 
try to do something.

> Regards,
>
> Hans
>
> > Is there any possibility that this driver could be merged into the
> > kernel? The code could probably be simplified a lot and integrated into
> > gspca.
> >
> >
> > diff -urp syntekdriver-code-107-trunk-orig/driver/stk11xx.h
> > syntekdriver-code-107-trunk//driver/stk11xx.h ---
> > syntekdriver-code-107-trunk-orig/driver/stk11xx.h   2012-03-10
> > 10:03:12.0 +0100 +++
> > syntekdriver-code-107-trunk//driver/stk11xx.h   2013-08-05
> > 22:50:00.0 +0200 @@ -33,6 +33,7 @@
> >
> >   #ifndef STK11XX_H
> >   #define STK11XX_H
> > +#include 
> >
> >   #define DRIVER_NAME   "stk11xx"   
> > /**< Name of this driver */
> >   #define DRIVER_VERSION"v3.0.0"
> > /**< Version of this driver */
> > @@ -316,6 +317,7 @@ struct stk11xx_video {
> >* @struct usb_stk11xx
> >*/
> >   struct usb_stk11xx {
> > +   struct v4l2_device v4l2_dev;
> > struct video_device *vdev;  /**< Pointer on a V4L2 
> > video device */
> > struct usb_device *udev;/**< Pointer on a USB 
> > device */
> > struct usb_interface *interface;/**< Pointer on a USB interface 
> > */
> > diff -urp syntekdriver-code-107-trunk-orig/driver/stk11xx-v4l.c
> > syntekdriver-code-107-trunk//driver/stk11xx-v4l.c ---
> > syntekdriver-code-107-trunk-orig/driver/stk11xx-v4l.c   2012-03-10
> > 09:54:57.0 +0100 +++
> > syntekdriver-code-107-trunk//driver/stk11xx-v4l.c   2013-08-05
> > 22:51:12.0 +0200 @@ -1498,9 +1498,17 @@ int
> > v4l_stk11xx_register_video_device(st
> >   {
> > int err;
> >
> > +   err = v4l2_device_register(&dev->interface->dev, &dev->v4l2_dev);
> > +   if (err < 0) {
> > +   STK_ERROR("couldn't register v4l2_device\n");
> > +   kfree(dev);
> > +   return err;
> > +   }
> > +
> > strcpy(dev->vdev->name, DRIVER_DESC);
> >
> > -   dev->vdev->parent = &dev->interface->dev;
> > +// dev->vdev->parent = &dev->interface->dev;
> > +   dev->vdev->v4l2_dev = &dev->v4l2_dev;
> > dev->vdev->fops = &v4l_stk11xx_fops;
> > dev->vdev->release = video_device_release;
> > dev->vdev->minor = -1;
> > @@ -1533,6 +1541,7 @@ int v4l_stk11xx_unregister_video_device(
> >
> > video_set_drvdata(dev->vdev, NULL);
> > video_unregister_device(dev->vdev);
> > +   v4l2_device_unregister(&dev->v4l2_dev);
> >
> > return 0;
> >   }



-- 
Ondrej Zary
--
To unsubscribe from this list: send the 

Re: [Linaro-mm-sig] [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Lucas Stach
On Tuesday, 06.08.2013 at 12:31 +0100, Tom Cooksey wrote:
> Hi Rob,
> 
> +lkml
> 
> > >> On Fri, Jul 26, 2013 at 11:58 AM, Tom Cooksey 
> > >> wrote:
> > >> >> >  * It abuses flags parameter of DRM_IOCTL_MODE_CREATE_DUMB to
> > >> >> >also allocate buffers for the GPU. Still not sure how to 
> > >> >> >resolve this as we don't use DRM for our GPU driver.
> > >> >>
> > >> >> any thoughts/plans about a DRM GPU driver?  Ideally long term
> > >> >> (esp. once the dma-fence stuff is in place), we'd have 
> > >> >> gpu-specific drm (gpu-only, no kms) driver, and SoC/display
> > >> >> specific drm/kms driver, using prime/dmabuf to share between
> > >> >> the two.
> > >> >
> > >> > The "extra" buffers we were allocating from armsoc DDX were really
> > >> > being allocated through DRM/GEM so we could get an flink name
> > >> > for them and pass a reference to them back to our GPU driver on
> > >> > the client side. If it weren't for our need to access those
> > >> > extra off-screen buffers with the GPU we wouldn't need to
> > >> > allocate them with DRM at all. So, given they are really "GPU"
> > >> > buffers, it does absolutely make sense to allocate them in a
> > >> > different driver to the display driver.
> > >> >
> > >> > However, to avoid unnecessary memcpys & related cache
> > >> > maintenance ops, we'd also like the GPU to render into buffers
> > >> > which are scanned out by the display controller. So let's say
> > >> > we continue using DRM_IOCTL_MODE_CREATE_DUMB to allocate scan
> > >> > out buffers with the display's DRM driver but a custom ioctl
> > >> > on the GPU's DRM driver to allocate non scanout, off-screen
> > >> > buffers. Sounds great, but I don't think that really works
> > >> > with DRI2. If we used two drivers to allocate buffers, which
> > >> > of those drivers do we return in DRI2ConnectReply? Even if we
> > >> > solve that somehow, GEM flink names are name-spaced to a
> > >> > single device node (AFAIK). So when we do a DRI2GetBuffers,
> > >> > how does the EGL in the client know which DRM device owns GEM
> > >> > flink name "1234"? We'd need some pretty dirty hacks.
> > >>
> > >> You would return the name of the display driver allocating the
> > >> buffers.  On the client side you can use generic ioctls to go from
> > >> flink -> handle -> dmabuf.  So the client side would end up opening
> > >> both the display drm device and the gpu, but without needing to know
> > >> too much about the display.
> > >
> > > I think the bit I was missing was that a GEM bo for a buffer imported
> > > using dma_buf/PRIME can still be flink'd. So the display controller's
> > > DRM driver allocates scan-out buffers via the DUMB buffer allocate
> > > ioctl. Those scan-out buffers can then be exported from the
> > > display's DRM driver and imported into the GPU's DRM driver using
> > > PRIME. Once imported into the GPU's driver, we can use flink to get a
> > > name for that buffer within the GPU DRM driver's name-space to return
> > > to the DRI2 client. That same namespace is also what DRI2 back-
> > > buffers are allocated from, so I think that could work... Except...
> > 
> > (and.. the general direction is that things will move more to just use
> > dmabuf directly, ie. wayland or dri3)
> 
> I agree, DRI2 is the only reason why we need a system-wide ID. I also
> prefer buffers to be passed around by dma_buf fd, but we still need to
> support DRI2 and will do for some time I expect.
> 
> 
> 
> > >> > Anyway, that latter case also gets quite difficult. The "GPU"
> > >> > DRM driver would need to know the constraints of the display
> > >> > controller when allocating buffers intended to be scanned out.
> > >> > For example, pl111 typically isn't behind an IOMMU and so
> > >> > requires physically contiguous memory. We'd have to teach the
> > >> > GPU's DRM driver about the constraints of the display HW. Not
> > >> > exactly a clean driver model. :-(
> > >> >
> > >> > I'm still a little stuck on how to proceed, so any ideas
> > >> > would greatly appreciated! My current train of thought is
> > >> > having a kind of SoC-specific DRM driver which allocates
> > >> > buffers for both display and GPU within a single GEM
> > >> > namespace. That SoC-specific DRM driver could then know the
> > >> > constraints of both the GPU and the display HW. We could then
> > >> > use PRIME to export buffers allocated with the SoC DRM driver
> > >> > and import them into the GPU and/or display DRM driver.
> > >>
> > >> Usually if the display drm driver is allocating the buffers that
> > >> might be scanned out, it just needs to have minimal knowledge of 
> > >> the GPU (pitch alignment constraints).  I don't think we need a 
> > >> 3rd device just to allocate buffers.
> > >
> > > While Mali can render to pretty much any buffer, there is a mild
> > > performance improvement to be had if the buffer stride is aligned to
> > > the AXI bus's max burst length when drawing to the buffer.
> > 
> > I suspect the display con

Re: [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-06 Thread Rob Clark
On Tue, Aug 6, 2013 at 7:31 AM, Tom Cooksey  wrote:
>
>> > So in some respects, there is a constraint on how buffers which will
>> > be drawn to using the GPU are allocated. I don't really like the idea
>> > of teaching the display controller DRM driver about the GPU buffer
>> > constraints, even if they are fairly trivial like this. If the same
>> > display HW IP is being used on several SoCs, it seems wrong somehow
>> > to enforce those GPU constraints if some of those SoCs don't have a
>> > GPU.
>>
>> Well, I suppose you could get min_pitch_alignment from devicetree, or
>> something like this..
>>
>> In the end, the easy solution is just to make the display allocate to
>> the worst-case pitch alignment.  In the early days of dma-buf
>> discussions, we kicked around the idea of negotiating or
>> programatically describing the constraints, but that didn't really
>> seem like a bounded problem.
>
> Yeah - I was around for some of those discussions and agree it's not
> really an easy problem to solve.
>
>
>
>> > We may also then have additional constraints when sharing buffers
>> > between the display HW and video decode or even camera ISP HW.
>> > Programmatically describing buffer allocation constraints is very
>> > difficult and I'm not sure you can actually do it - there's some
>> > pretty complex constraints out there! E.g. I believe there's a
>> > platform where Y and UV planes of the reference frame need to be in
>> > separate DRAM banks for real-time 1080p decode, or something like
>> > that?
>>
>> yes, this was discussed.  This is different from pitch/format/size
>> constraints.. it is really just a placement constraint (ie. where do
>> the physical pages go).  IIRC the conclusion was to use a dummy
>> device with its own CMA pool for attaching the Y vs UV buffers.
>>
>> > Anyway, I guess my point is that even if we solve how to allocate
>> > buffers which will be shared between the GPU and display HW such that
>> > both sets of constraints are satisfied, that may not be the end of
>> > the story.
>> >
>>
>> that was part of the reason to punt this problem to userspace ;-)
>>
>> In practice, the kernel drivers doesn't usually know too much about
>> the dimensions/format/etc.. that is really userspace level knowledge.
>> There are a few exceptions when the kernel needs to know how to setup
>> GTT/etc for tiled buffers, but normally this sort of information is up
>> at the next level up (userspace, and drm_framebuffer in case of
>> scanout).  Userspace media frameworks like GStreamer already have a
>> concept of format/caps negotiation.  For non-display<->gpu sharing, I
>> think this is probably where this sort of constraint negotiation
>> should be handled.
>
> I agree that user-space will know which devices will access the buffer
> and thus can figure out at least a common pixel format. Though I'm not
> so sure userspace can figure out more low-level details like alignment
> and placement in physical memory, etc.

well, let's divide things up into two categories:

1) the arrangement and format of pixels.. ie. what userspace would
need to know if it mmap's a buffer.  This includes pixel format,
stride, etc.  This should be negotiated in userspace, it would be
crazy to try to do this in the kernel.

2) the physical placement of the pages.  Ie. whether it is contiguous
or not.  Which bank the pages in the buffer are placed in, etc.  This
is not visible to userspace.  This is the purpose of the attach step,
so you know all the devices involved in sharing up front before
allocating the backing pages.  (Or in the worst case, if you have a
"late attacher" you at least know when no device is doing dma access
to a buffer and can reallocate and move the buffer.)  A long time
back, I had a patch that added a field or two to 'struct
device_dma_parameters' so that it could be known if a device required
contiguous buffers.. looks like that never got merged, so I'd need to
dig that back up and resend it.  But the idea was to have the 'struct
device' encapsulate all the information that would be needed to
do-the-right-thing when it comes to placement.
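To make that concrete, a small sketch of the attach-time check described
above.  The per-device contiguity predicate is hypothetical (the
device_dma_parameters field mentioned never made it upstream), so it is
stubbed out here; walking dmabuf->attachments assumes the exporter holds the
dma-buf lock:

#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/list.h>

/* hypothetical: would read a flag such as the one described above from
 * dev->dma_parms; hard-wired because no such field exists in mainline */
static bool importer_needs_contig(struct device *dev)
{
	return false;
}

/*
 * Exporter-side decision made before the first map: every device that will
 * do DMA has already called dma_buf_attach(), so the backing pages can be
 * placed (CMA vs. scattered) to satisfy the strictest importer.
 */
static bool buffer_must_be_contiguous(struct dma_buf *dmabuf)
{
	struct dma_buf_attachment *att;
	bool contig = false;

	/* caller is assumed to hold dmabuf->lock */
	list_for_each_entry(att, &dmabuf->attachments, node)
		if (importer_needs_contig(att->dev))
			contig = true;

	return contig;
}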

> Anyway, assuming user-space can figure out how a buffer should be
> stored in memory, how does it indicate this to a kernel driver and
> actually allocate it? Which ioctl on which device does user-space
> call, with what parameters? Are you suggesting using something like
> ION which exposes the low-level details of how buffers are laid out in
> physical memory to userspace? If not, what?

no, userspace should not need to know this.  And having a central
driver that knows this for all the other drivers in the system doesn't
really solve anything and isn't really scalable.  At best you might
want, in some cases, a flag you can pass when allocating.  For
example, some of the drivers have a 'SCANOUT' flag that can be passed
when allocating a GEM buffer, as a hint to the kernel that 'if this hw
requires contig memory for scanout, allocate this buffer contig'.  But
r


Re: [PATCH v6 04/10] media: vb2: Take queue or device lock in vb2_fop_mmap()

2013-08-06 Thread Hans Verkuil
On Mon 5 August 2013 19:53:23 Laurent Pinchart wrote:
> The vb2_fop_mmap() function is a plug-in implementation of the mmap()
> file operation that calls vb2_mmap() on the queue associated with the
> video device. Neither the vb2_fop_mmap() function nor the v4l2_mmap()
> mmap handler in the V4L2 core take any lock, leading to race conditions
> between mmap() and other buffer-related ioctls such as VIDIOC_REQBUFS.
> 
> Fix it by taking the queue or device lock around the vb2_mmap() call.

Hi Laurent,

Can you do the same for vb2_fop_get_unmapped_area()?

Thanks!

Hans

> 
> Signed-off-by: Laurent Pinchart 
> ---
>  drivers/media/v4l2-core/videobuf2-core.c | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/media/v4l2-core/videobuf2-core.c 
> b/drivers/media/v4l2-core/videobuf2-core.c
> index 9fc4bab..bd4bade 100644
> --- a/drivers/media/v4l2-core/videobuf2-core.c
> +++ b/drivers/media/v4l2-core/videobuf2-core.c
> @@ -2578,8 +2578,15 @@ EXPORT_SYMBOL_GPL(vb2_ioctl_expbuf);
>  int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma)
>  {
>   struct video_device *vdev = video_devdata(file);
> + struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
> + int err;
>  
> - return vb2_mmap(vdev->queue, vma);
> + if (lock && mutex_lock_interruptible(lock))
> + return -ERESTARTSYS;
> + err = vb2_mmap(vdev->queue, vma);
> + if (lock)
> + mutex_unlock(lock);
> + return err;
>  }
>  EXPORT_SYMBOL_GPL(vb2_fop_mmap);
>  
> 
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v6 02/10] Documentation: media: Clarify the VIDIOC_CREATE_BUFS format requirements

2013-08-06 Thread Hans Verkuil
On Mon 5 August 2013 19:53:21 Laurent Pinchart wrote:
> The VIDIOC_CREATE_BUFS ioctl takes a format argument that must contain a
> valid format supported by the driver. Clarify the documentation.
> 
> Signed-off-by: Laurent Pinchart 

Acked-by: Hans Verkuil 

Regards,

Hans

> ---
>  .../DocBook/media/v4l/vidioc-create-bufs.xml   | 41 
> ++
>  1 file changed, 26 insertions(+), 15 deletions(-)
> 
> diff --git a/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml 
> b/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml
> index cd99436..9b700a5 100644
> --- a/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml
> +++ b/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml
> @@ -62,18 +62,29 @@ addition to the VIDIOC_REQBUFS 
> ioctl, when a tighter
>  control over buffers is required. This ioctl can be called multiple times to
>  create buffers of different sizes.
>  
> -To allocate device buffers applications initialize relevant fields 
> of
> -the v4l2_create_buffers structure. They set the
> -type field in the
> -&v4l2-format; structure, embedded in this
> -structure, to the respective stream or buffer type.
> -count must be set to the number of required 
> buffers.
> -memory specifies the required I/O method. The
> -format field shall typically be filled in using
> -either the VIDIOC_TRY_FMT or
> -VIDIOC_G_FMT ioctl(). Additionally, applications can 
> adjust
> -sizeimage fields to fit their specific needs. The
> -reserved array must be zeroed.
> +To allocate the device buffers applications must initialize the
> +relevant fields of the v4l2_create_buffers 
> structure.
> +The count field must be set to the number of
> +requested buffers, the memory field specifies the
> +requested I/O method and the reserved array must 
> be
> +zeroed.
> +
> +The format field specifies the image 
> format
> +that the buffers must be able to handle. The application has to fill in this
> +&v4l2-format;. Usually this will be done using the
> +VIDIOC_TRY_FMT or VIDIOC_G_FMT 
> ioctl()
> +to ensure that the requested format is supported by the driver. Unsupported
> +formats will result in an error.
> +
> +The buffers created by this ioctl will have as minimum size the 
> size
> +defined by the format.pix.sizeimage field. If the
> +format.pix.sizeimage field is less than the 
> minimum
> +required for the given format, then sizeimage 
> will be
> +increased by the driver to that minimum to allocate the buffers. If it is
> +larger, then the value will be used as-is. The same applies to the
> +sizeimage field of the
> +v4l2_plane_pix_format structure in the case of
> +multiplanar formats.
>  
>  When the ioctl is called with a pointer to this structure the 
> driver
>  will attempt to allocate up to the requested number of buffers and store the
> @@ -144,9 +155,9 @@ mapped I/O.
>
>   EINVAL
>   
> -   The buffer type (type field) or the
> -requested I/O method (memory) is not
> -supported.
> +   The buffer type (format.type field),
> +requested I/O method (memory) or format
> +(format field) is not valid.
>   
>
>  
> 
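As a usage illustration of the rules spelled out above, a minimal sketch of an
application creating extra buffers with VIDIOC_CREATE_BUFS, using the current
format obtained from VIDIOC_G_FMT (single-planar capture assumed, error
handling trimmed):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int create_extra_buffers(int fd, unsigned int count)
{
	struct v4l2_format fmt;
	struct v4l2_create_buffers create;

	/* the currently set format is by definition supported by the driver */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
		return -1;

	memset(&create, 0, sizeof(create));
	create.count  = count;
	create.memory = V4L2_MEMORY_MMAP;
	create.format = fmt;	/* must describe a format the driver supports */
	if (ioctl(fd, VIDIOC_CREATE_BUFS, &create) < 0)
		return -1;

	/* create.index is the index of the first new buffer, create.count
	 * the number of buffers actually allocated */
	return create.index;
}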
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [GIT PULL for v3.11-rc5] media fixes

2013-08-06 Thread Mauro Carvalho Chehab
On Mon, 5 Aug 2013 16:53:54 -0300,
Mauro Carvalho Chehab wrote:

> Hi Linus,
> 
> Please pull from:
>   git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media 
> v4l_for_linus
> 
> for some driver fixes (em28xx, coda, usbtv, s5p, hdpvr and ml86v7667) and
> a fix for media DocBook.
> 
> Thanks!
> Mauro
> 
> -
> 
> The following changes since commit 1b2c14b44adcb7836528640bfdc40bf7499d987d:
> 
Gah! It seems that my emailer replaced everything below this line with the
content of .signature when I clicked the combo-box to switch to my Samsung
email address[1]. Sorry for that.

[1] maybe as a side-effect of this bug: 
https://bugs.freedesktop.org/show_bug.cgi?id=66515

Anyway, let me redo the pull request below.

Thanks,
Mauro

-

Hi Linus,
 
Please pull from:
  git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media 
v4l_for_linus

for some driver fixes (em28xx, coda, usbtv, s5p, hdpvr and ml86v7667) and
a fix for media DocBook.

-

The following changes since commit 1b2c14b44adcb7836528640bfdc40bf7499d987d:

  MAINTAINERS & ABI: Update to point to my new email (2013-07-08 11:04:11 -0300)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media 
v4l_for_linus

for you to fetch changes up to f813b5775b471b656382ae8f087bb34dc894261f:

  [media] em28xx: fix assignment of the eeprom data (2013-07-26 12:28:03 -0300)


Alban Browaeys (1):
  [media] em28xx: fix assignment of the eeprom data

Alexander Shiyan (1):
  [media] media: coda: Fix DT driver data pointer for i.MX27

Alexey Khoroshilov (1):
  [media] hdpvr: fix iteration over uninitialized lists in hdpvr_probe()

Andrzej Hajda (2):
  [media] DocBook: upgrade media_api DocBook version to 4.2
  [media] v4l2: added missing mutex.h include to v4l2-ctrls.h

Hans Verkuil (2):
  [media] ml86v7667: fix compile warning: 'ret' set but not used
  [media] usbtv: fix dependency

John Sheu (1):
  [media] s5p-mfc: Fix input/output format reporting

Lubomir Rintel (2):
  [media] usbtv: Fix deinterlacing
  [media] usbtv: Throw corrupted frames away

Sachin Kamat (1):
  [media] s5p-g2d: Fix registration failure

 Documentation/DocBook/media_api.tmpl |  4 +-
 drivers/media/i2c/ml86v7667.c|  4 +-
 drivers/media/platform/coda.c|  2 +-
 drivers/media/platform/s5p-g2d/g2d.c |  1 +
 drivers/media/platform/s5p-mfc/s5p_mfc_dec.c | 79 +++-
 drivers/media/platform/s5p-mfc/s5p_mfc_enc.c | 46 ++--
 drivers/media/usb/em28xx/em28xx-i2c.c|  2 +-
 drivers/media/usb/hdpvr/hdpvr-core.c | 11 ++--
 drivers/media/usb/usbtv/Kconfig  |  2 +-
 drivers/media/usb/usbtv/usbtv.c  | 51 +-
 include/media/v4l2-ctrls.h   |  1 +
 11 files changed, 101 insertions(+), 102 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 7/9] qv4l2: added resize to frame size in Capture menu

2013-08-06 Thread Bård Eirik Winther
This will resize the CaptureWin to the original frame size.
It also works with maximized windows.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/capture-win.cpp | 12 
 utils/qv4l2/capture-win.h   |  3 +++
 utils/qv4l2/qv4l2.cpp   |  6 --
 utils/qv4l2/qv4l2.h |  1 +
 4 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/utils/qv4l2/capture-win.cpp b/utils/qv4l2/capture-win.cpp
index 33f7084..3bd6549 100644
--- a/utils/qv4l2/capture-win.cpp
+++ b/utils/qv4l2/capture-win.cpp
@@ -61,6 +61,18 @@ void CaptureWin::buildWindow(QWidget *videoSurface)
vbox->setSpacing(b);
 }
 
+void CaptureWin::resetSize()
+{
+   if (isMaximized())
+   showNormal();
+
+   int w = m_curWidth;
+   int h = m_curHeight;
+   m_curWidth = -1;
+   m_curHeight = -1;
+   resize(w, h);
+}
+
 QSize CaptureWin::getMargins()
 {
int l, t, r, b;
diff --git a/utils/qv4l2/capture-win.h b/utils/qv4l2/capture-win.h
index dd19f2d..eea0335 100644
--- a/utils/qv4l2/capture-win.h
+++ b/utils/qv4l2/capture-win.h
@@ -78,6 +78,9 @@ public:
void enableScaling(bool enable);
static QSize scaleFrameSize(QSize window, QSize frame);
 
+public slots:
+   void resetSize();
+
 protected:
void closeEvent(QCloseEvent *event);
void buildWindow(QWidget *videoSurface);
diff --git a/utils/qv4l2/qv4l2.cpp b/utils/qv4l2/qv4l2.cpp
index 50ba07a..1a476f0 100644
--- a/utils/qv4l2/qv4l2.cpp
+++ b/utils/qv4l2/qv4l2.cpp
@@ -142,11 +142,14 @@ ApplicationWindow::ApplicationWindow() :
m_scalingAct->setCheckable(true);
m_scalingAct->setChecked(true);
connect(m_scalingAct, SIGNAL(toggled(bool)), this, 
SLOT(enableScaling(bool)));
+   m_resetScalingAct = new QAction("Resize to Frame Size", this);
+   m_resetScalingAct->setStatusTip("Resizes the capture window to match 
frame size");
 
QMenu *captureMenu = menuBar()->addMenu("&Capture");
captureMenu->addAction(m_capStartAct);
captureMenu->addAction(m_showFramesAct);
captureMenu->addAction(m_scalingAct);
+   captureMenu->addAction(m_resetScalingAct);
 
if (CaptureWinGL::isSupported()) {
m_renderMethod = QV4L2_RENDER_GL;
@@ -211,8 +214,6 @@ void ApplicationWindow::setDevice(const QString &device, 
bool rawOpen)
 
newCaptureWin();
 
-   m_capture->setMinimumSize(150, 50);
-
QWidget *w = new QWidget(m_tabs);
m_genTab = new GeneralTab(device, *this, 4, w);
 
@@ -360,6 +361,7 @@ void ApplicationWindow::newCaptureWin()
 
m_capture->enableScaling(m_scalingAct->isChecked());
 connect(m_capture, SIGNAL(close()), this, SLOT(closeCaptureWin()));
+   connect(m_resetScalingAct, SIGNAL(triggered()), m_capture, 
SLOT(resetSize()));
 }
 
 void ApplicationWindow::capVbiFrame()
diff --git a/utils/qv4l2/qv4l2.h b/utils/qv4l2/qv4l2.h
index 1402673..179cecb 100644
--- a/utils/qv4l2/qv4l2.h
+++ b/utils/qv4l2/qv4l2.h
@@ -187,6 +187,7 @@ private:
QAction *m_showAllAudioAct;
QAction *m_audioBufferAct;
QAction *m_scalingAct;
+   QAction *m_resetScalingAct;
QString m_filename;
QSignalMapper *m_sigMapper;
QTabWidget *m_tabs;
-- 
1.8.3.2

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 9/9] qv4l2: add pixel aspect ratio support for CaptureWin

2013-08-06 Thread Bård Eirik Winther
Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/capture-win.cpp | 36 ++--
 utils/qv4l2/capture-win.h   |  6 
 utils/qv4l2/general-tab.cpp | 68 +
 utils/qv4l2/general-tab.h   |  4 +++
 utils/qv4l2/qv4l2.cpp   | 21 ++
 utils/qv4l2/qv4l2.h |  1 +
 6 files changed, 122 insertions(+), 14 deletions(-)

diff --git a/utils/qv4l2/capture-win.cpp b/utils/qv4l2/capture-win.cpp
index 3abb6cb..7538756 100644
--- a/utils/qv4l2/capture-win.cpp
+++ b/utils/qv4l2/capture-win.cpp
@@ -30,6 +30,7 @@
 #define MIN_WIN_SIZE_HEIGHT 120
 
 bool CaptureWin::m_enableScaling = true;
+double CaptureWin::m_pixelAspectRatio = 1.0;
 
 CaptureWin::CaptureWin() :
m_curWidth(-1),
@@ -76,6 +77,14 @@ void CaptureWin::resetSize()
resize(w, h);
 }
 
+int CaptureWin::actualFrameWidth(int width)
+{
+   if (m_enableScaling)
+   return (int)((double)width * m_pixelAspectRatio);
+
+   return width;
+}
+
 QSize CaptureWin::getMargins()
 {
int l, t, r, b;
@@ -108,7 +117,7 @@ void CaptureWin::resize(int width, int height)
m_curHeight = height;
 
QSize margins = getMargins();
-   width += margins.width();
+   width = actualFrameWidth(width) + margins.width();
height += margins.height();
 
QDesktopWidget *screen = QApplication::desktop();
@@ -130,25 +139,36 @@ void CaptureWin::resize(int width, int height)
 
 QSize CaptureWin::scaleFrameSize(QSize window, QSize frame)
 {
-   int actualFrameWidth = frame.width();;
-   int actualFrameHeight = frame.height();
+   int actualWidth;
+   int actualHeight = frame.height();
 
if (!m_enableScaling) {
window.setWidth(frame.width());
window.setHeight(frame.height());
+   actualWidth = frame.width();
+   } else {
+   actualWidth = CaptureWin::actualFrameWidth(frame.width());
}
 
double newW, newH;
if (window.width() >= window.height()) {
-   newW = (double)window.width() / actualFrameWidth;
-   newH = (double)window.height() / actualFrameHeight;
+   newW = (double)window.width() / actualWidth;
+   newH = (double)window.height() / actualHeight;
} else {
-   newH = (double)window.width() / actualFrameWidth;
-   newW = (double)window.height() / actualFrameHeight;
+   newH = (double)window.width() / actualWidth;
+   newW = (double)window.height() / actualHeight;
}
double resized = std::min(newW, newH);
 
-   return QSize((int)(actualFrameWidth * resized), (int)(actualFrameHeight 
* resized));
+   return QSize((int)(actualWidth * resized), (int)(actualHeight * 
resized));
+}
+
+void CaptureWin::setPixelAspectRatio(double ratio)
+{
+   m_pixelAspectRatio = ratio;
+   QResizeEvent *event = new QResizeEvent(QSize(width(), height()), 
QSize(width(), height()));
+   QCoreApplication::sendEvent(this, event);
+   delete event;
 }
 
 void CaptureWin::closeEvent(QCloseEvent *event)
diff --git a/utils/qv4l2/capture-win.h b/utils/qv4l2/capture-win.h
index 1bfb1e1..e8f0ada 100644
--- a/utils/qv4l2/capture-win.h
+++ b/utils/qv4l2/capture-win.h
@@ -76,6 +76,7 @@ public:
static bool isSupported() { return false; }
 
void enableScaling(bool enable);
+   void setPixelAspectRatio(double ratio);
static QSize scaleFrameSize(QSize window, QSize frame);
 
 public slots:
@@ -99,6 +100,11 @@ protected:
 */
static bool m_enableScaling;
 
+   /**
+* @note Aspect ratio it taken care of by scaling, frame size is for 
square pixels only!
+*/
+   static double m_pixelAspectRatio;
+
 signals:
void close();
 
diff --git a/utils/qv4l2/general-tab.cpp b/utils/qv4l2/general-tab.cpp
index 5cfaf07..c404a3b 100644
--- a/utils/qv4l2/general-tab.cpp
+++ b/utils/qv4l2/general-tab.cpp
@@ -53,6 +53,7 @@ GeneralTab::GeneralTab(const QString &device, v4l2 &fd, int 
n, QWidget *parent)
m_tvStandard(NULL),
m_qryStandard(NULL),
m_videoTimings(NULL),
+   m_pixelAspectRatio(NULL),
m_qryTimings(NULL),
m_freq(NULL),
m_vidCapFormats(NULL),
@@ -210,6 +211,23 @@ GeneralTab::GeneralTab(const QString &device, v4l2 &fd, 
int n, QWidget *parent)
connect(m_qryTimings, SIGNAL(clicked()), 
SLOT(qryTimingsClicked()));
}
 
+   if (!isRadio() && !isVbi()) {
+   m_pixelAspectRatio = new QComboBox(parent);
+   m_pixelAspectRatio->addItem("Autodetect");
+   m_pixelAspectRatio->addItem("Square");
+   m_pixelAspectRatio->addItem("NTSC/PAL-M/PAL-60");
+   m_pixelAspectRatio->addItem("NTSC/PAL-M/PAL-60, Anamorphic");
+   m_pixelAspectRatio->addItem("PAL/SECAM");
+   m_pixelAspectRatio->addItem("PAL/SECAM, Anamorphic");
+
+   

RE: [PATCH 2/2] media: s5p-mfc: remove DT hacks and simplify initialization code

2013-08-06 Thread Kamil Debski
Hi Kukjin,

This patch looks good.

Best wishes,
Kamil Debski

> From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
> Sent: Monday, August 05, 2013 2:27 PM
> 
> This patch removes custom initialization of reserved memory regions
> from the s5p-mfc driver. Memory initialization can now be handled by
> generic code.
> 
> Signed-off-by: Marek Szyprowski 

Acked-by: Kamil Debski 

> ---
>  drivers/media/platform/s5p-mfc/s5p_mfc.c |   75 ++
> 
>  1 file changed, 15 insertions(+), 60 deletions(-)
> 
> diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c
> b/drivers/media/platform/s5p-mfc/s5p_mfc.c
> index a130dcd..696e0e0 100644
> --- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
> +++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
> @@ -1011,51 +1011,11 @@ static int match_child(struct device *dev, void
> *data)  {
>   if (!dev_name(dev))
>   return 0;
> - return !strcmp(dev_name(dev), (char *)data);
> + return !!strstr(dev_name(dev), (char *)data);
>  }
> 
>  static void *mfc_get_drv_data(struct platform_device *pdev);
> 
> -static int s5p_mfc_alloc_memdevs(struct s5p_mfc_dev *dev) -{
> - unsigned int mem_info[2] = { };
> -
> - dev->mem_dev_l = devm_kzalloc(&dev->plat_dev->dev,
> - sizeof(struct device), GFP_KERNEL);
> - if (!dev->mem_dev_l) {
> - mfc_err("Not enough memory\n");
> - return -ENOMEM;
> - }
> - device_initialize(dev->mem_dev_l);
> - of_property_read_u32_array(dev->plat_dev->dev.of_node,
> - "samsung,mfc-l", mem_info, 2);
> - if (dma_declare_coherent_memory(dev->mem_dev_l, mem_info[0],
> - mem_info[0], mem_info[1],
> - DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) == 0)
{
> - mfc_err("Failed to declare coherent memory for\n"
> - "MFC device\n");
> - return -ENOMEM;
> - }
> -
> - dev->mem_dev_r = devm_kzalloc(&dev->plat_dev->dev,
> - sizeof(struct device), GFP_KERNEL);
> - if (!dev->mem_dev_r) {
> - mfc_err("Not enough memory\n");
> - return -ENOMEM;
> - }
> - device_initialize(dev->mem_dev_r);
> - of_property_read_u32_array(dev->plat_dev->dev.of_node,
> - "samsung,mfc-r", mem_info, 2);
> - if (dma_declare_coherent_memory(dev->mem_dev_r, mem_info[0],
> - mem_info[0], mem_info[1],
> - DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) == 0)
{
> - pr_err("Failed to declare coherent memory for\n"
> - "MFC device\n");
> - return -ENOMEM;
> - }
> - return 0;
> -}
> -
>  /* MFC probe function */
>  static int s5p_mfc_probe(struct platform_device *pdev)  { @@ -1107,25
> +1067,20 @@ static int s5p_mfc_probe(struct platform_device *pdev)
>   goto err_res;
>   }
> 
> - if (pdev->dev.of_node) {
> - ret = s5p_mfc_alloc_memdevs(dev);
> - if (ret < 0)
> - goto err_res;
> - } else {
> - dev->mem_dev_l = device_find_child(&dev->plat_dev->dev,
> - "s5p-mfc-l", match_child);
> - if (!dev->mem_dev_l) {
> - mfc_err("Mem child (L) device get failed\n");
> - ret = -ENODEV;
> - goto err_res;
> - }
> - dev->mem_dev_r = device_find_child(&dev->plat_dev->dev,
> - "s5p-mfc-r", match_child);
> - if (!dev->mem_dev_r) {
> - mfc_err("Mem child (R) device get failed\n");
> - ret = -ENODEV;
> - goto err_res;
> - }
> + dev->mem_dev_l = device_find_child(&dev->plat_dev->dev, "-l",
> +match_child);
> + if (!dev->mem_dev_l) {
> + mfc_err("Mem child (L) device get failed\n");
> + ret = -ENODEV;
> + goto err_res;
> + }
> +
> + dev->mem_dev_r = device_find_child(&dev->plat_dev->dev, "-r",
> +match_child);
> + if (!dev->mem_dev_r) {
> + mfc_err("Mem child (R) device get failed\n");
> + ret = -ENODEV;
> + goto err_res;
>   }
> 
>   dev->alloc_ctx[0] = vb2_dma_contig_init_ctx(dev->mem_dev_l);
> --
> 1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 5/9] qv4l2: create function getMargins

2013-08-06 Thread Bård Eirik Winther
Created a function to get the total margins (window frame) in pixels
outside the actual video frame being displayed.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/capture-win.cpp | 14 ++
 utils/qv4l2/capture-win.h   |  1 +
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/utils/qv4l2/capture-win.cpp b/utils/qv4l2/capture-win.cpp
index 2d57909..8722066 100644
--- a/utils/qv4l2/capture-win.cpp
+++ b/utils/qv4l2/capture-win.cpp
@@ -54,16 +54,22 @@ void CaptureWin::buildWindow(QWidget *videoSurface)
vbox->setSpacing(b);
 }
 
+QSize CaptureWin::getMargins()
+{
+   int l, t, r, b;
+   layout()->getContentsMargins(&l, &t, &r, &b);
+   return QSize(l + r, t + b + m_information.minimumSizeHint().height() + 
layout()->spacing());
+}
+
 void CaptureWin::setMinimumSize(int minw, int minh)
 {
QDesktopWidget *screen = QApplication::desktop();
QRect resolution = screen->screenGeometry();
QSize maxSize = maximumSize();
 
-   int l, t, r, b;
-   layout()->getContentsMargins(&l, &t, &r, &b);
-   minw += l + r;
-   minh += t + b + m_information.minimumSizeHint().height() + 
layout()->spacing();
+   QSize margins = getMargins();
+   minw += margins.width();
+   minh += margins.height();
 
if (minw > resolution.width())
minw = resolution.width();
diff --git a/utils/qv4l2/capture-win.h b/utils/qv4l2/capture-win.h
index ca60244..f662bd3 100644
--- a/utils/qv4l2/capture-win.h
+++ b/utils/qv4l2/capture-win.h
@@ -78,6 +78,7 @@ public:
 protected:
void closeEvent(QCloseEvent *event);
void buildWindow(QWidget *videoSurface);
+   QSize getMargins();
 
/**
 * @brief A label that can is used to display capture information.
-- 
1.8.3.2

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 6/9] qv4l2: add video scaling for CaptureWin

2013-08-06 Thread Bård Eirik Winther
Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/capture-win-gl.cpp | 26 ++--
 utils/qv4l2/capture-win-gl.h   |  7 
 utils/qv4l2/capture-win-qt.cpp | 23 ++-
 utils/qv4l2/capture-win-qt.h   |  5 +++
 utils/qv4l2/capture-win.cpp| 93 --
 utils/qv4l2/capture-win.h  | 14 ++-
 utils/qv4l2/qv4l2.cpp  | 22 --
 utils/qv4l2/qv4l2.h|  2 +
 8 files changed, 159 insertions(+), 33 deletions(-)

diff --git a/utils/qv4l2/capture-win-gl.cpp b/utils/qv4l2/capture-win-gl.cpp
index c8238c5..27ff3d3 100644
--- a/utils/qv4l2/capture-win-gl.cpp
+++ b/utils/qv4l2/capture-win-gl.cpp
@@ -43,6 +43,15 @@ void CaptureWinGL::stop()
 #endif
 }
 
+void CaptureWinGL::resizeEvent(QResizeEvent *event)
+{
+#ifdef HAVE_QTGL
+   QSize margins = getMargins();
+   m_videoSurface.setSize(width() - margins.width(), height() - 
margins.height());
+#endif
+   event->accept();
+}
+
 void CaptureWinGL::setFrame(int width, int height, __u32 format, unsigned char 
*data, const QString &info)
 {
 #ifdef HAVE_QTGL
@@ -109,11 +118,22 @@ void CaptureWinGLEngine::initializeGL()
checkError("InitializeGL");
 }
 
+void CaptureWinGLEngine::setSize(int width, int height)
+{
+   QSize sizedFrame = CaptureWin::scaleFrameSize(QSize(width, height), 
QSize(m_frameWidth, m_frameHeight));
+
+   width = sizedFrame.width();
+   height = sizedFrame.height();
+
+   if (width > 0 && height > 0) {
+   setMaximumSize(width, height);
+   resizeGL(width, height);
+   }
+}
 
 void CaptureWinGLEngine::resizeGL(int width, int height)
 {
-   // Resizing is disabled by setting viewport equal to frame size
-   glViewport(0, 0, m_frameWidth, m_frameHeight);
+   glViewport(0, 0, width, height);
 }
 
 void CaptureWinGLEngine::setFrame(int width, int height, __u32 format, 
unsigned char *data)
@@ -123,8 +143,6 @@ void CaptureWinGLEngine::setFrame(int width, int height, 
__u32 format, unsigned
m_frameWidth = width;
m_frameHeight = height;
m_frameFormat = format;
-
-   QGLWidget::setMaximumSize(m_frameWidth, m_frameHeight);
}
 
m_frameData = data;
diff --git a/utils/qv4l2/capture-win-gl.h b/utils/qv4l2/capture-win-gl.h
index 6e64269..0c3ff8b 100644
--- a/utils/qv4l2/capture-win-gl.h
+++ b/utils/qv4l2/capture-win-gl.h
@@ -23,6 +23,8 @@
 #include "qv4l2.h"
 #include "capture-win.h"
 
+#include 
+
 #ifdef HAVE_QTGL
 #define GL_GLEXT_PROTOTYPES
 #include 
@@ -42,6 +44,7 @@ public:
void stop();
void setFrame(int width, int height, __u32 format, unsigned char *data);
bool hasNativeFormat(__u32 format);
+   void setSize(int width, int height);
 
 protected:
void paintGL();
@@ -90,6 +93,10 @@ public:
bool hasNativeFormat(__u32 format);
static bool isSupported();
 
+protected:
+   void resizeEvent(QResizeEvent *event);
+
+private:
 #ifdef HAVE_QTGL
CaptureWinGLEngine m_videoSurface;
 #endif
diff --git a/utils/qv4l2/capture-win-qt.cpp b/utils/qv4l2/capture-win-qt.cpp
index 63c77d5..f746379 100644
--- a/utils/qv4l2/capture-win-qt.cpp
+++ b/utils/qv4l2/capture-win-qt.cpp
@@ -24,6 +24,8 @@ CaptureWinQt::CaptureWinQt() :
 {
 
CaptureWin::buildWindow(&m_videoSurface);
+   m_scaledFrame.setWidth(0);
+   m_scaledFrame.setHeight(0);
 }
 
 CaptureWinQt::~CaptureWinQt()
@@ -31,6 +33,19 @@ CaptureWinQt::~CaptureWinQt()
delete m_frame;
 }
 
+void CaptureWinQt::resizeEvent(QResizeEvent *event)
+{
+   if (m_frame->bits() == NULL)
+   return;
+
+   QPixmap img = QPixmap::fromImage(*m_frame);
+   m_scaledFrame = scaleFrameSize(QSize(m_videoSurface.width(), 
m_videoSurface.height()),
+  QSize(m_frame->width(), 
m_frame->height()));
+   img = img.scaled(m_scaledFrame.width(), m_scaledFrame.height(), 
Qt::IgnoreAspectRatio);
+   m_videoSurface.setPixmap(img);
+   QWidget::resizeEvent(event);
+}
+
 void CaptureWinQt::setFrame(int width, int height, __u32 format, unsigned char 
*data, const QString &info)
 {
QImage::Format dstFmt;
@@ -41,6 +56,8 @@ void CaptureWinQt::setFrame(int width, int height, __u32 
format, unsigned char *
if (m_frame->width() != width || m_frame->height() != height || 
m_frame->format() != dstFmt) {
delete m_frame;
m_frame = new QImage(width, height, dstFmt);
+   m_scaledFrame = scaleFrameSize(QSize(m_videoSurface.width(), 
m_videoSurface.height()),
+  QSize(m_frame->width(), 
m_frame->height()));
}
 
if (data == NULL || !supported)
@@ -49,7 +66,11 @@ void CaptureWinQt::setFrame(int width, int height, __u32 
format, unsigned char *
memcpy(m_frame->bits(), data, m_frame->numBytes());
 
m_information.setText(info);
-   m_videoSurface.setPixmap(QPi

[PATCH 8/9] qv4l2: add hotkey for reset scaling to frame size

2013-08-06 Thread Bård Eirik Winther
Adds hotkey CTRL + F for both CaptureWin and main Capture menu.
Resets the scaling of CaptureWin to fit frame size.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/capture-win.cpp | 3 +++
 utils/qv4l2/capture-win.h   | 1 +
 utils/qv4l2/qv4l2.cpp   | 1 +
 3 files changed, 5 insertions(+)

diff --git a/utils/qv4l2/capture-win.cpp b/utils/qv4l2/capture-win.cpp
index 3bd6549..3abb6cb 100644
--- a/utils/qv4l2/capture-win.cpp
+++ b/utils/qv4l2/capture-win.cpp
@@ -38,6 +38,8 @@ CaptureWin::CaptureWin() :
setWindowTitle("V4L2 Capture");
m_hotkeyClose = new QShortcut(Qt::CTRL+Qt::Key_W, this);
connect(m_hotkeyClose, SIGNAL(activated()), this, SLOT(close()));
+   m_hotkeyScaleReset = new QShortcut(Qt::CTRL+Qt::Key_F, this);
+   connect(m_hotkeyScaleReset, SIGNAL(activated()), this, 
SLOT(resetSize()));
 }
 
 CaptureWin::~CaptureWin()
@@ -48,6 +50,7 @@ CaptureWin::~CaptureWin()
layout()->removeWidget(this);
delete layout();
delete m_hotkeyClose;
+   delete m_hotkeyScaleReset;
 }
 
 void CaptureWin::buildWindow(QWidget *videoSurface)
diff --git a/utils/qv4l2/capture-win.h b/utils/qv4l2/capture-win.h
index eea0335..1bfb1e1 100644
--- a/utils/qv4l2/capture-win.h
+++ b/utils/qv4l2/capture-win.h
@@ -104,6 +104,7 @@ signals:
 
 private:
QShortcut *m_hotkeyClose;
+   QShortcut *m_hotkeyScaleReset;
int m_curWidth;
int m_curHeight;
 };
diff --git a/utils/qv4l2/qv4l2.cpp b/utils/qv4l2/qv4l2.cpp
index 1a476f0..c94b0a8 100644
--- a/utils/qv4l2/qv4l2.cpp
+++ b/utils/qv4l2/qv4l2.cpp
@@ -144,6 +144,7 @@ ApplicationWindow::ApplicationWindow() :
connect(m_scalingAct, SIGNAL(toggled(bool)), this, 
SLOT(enableScaling(bool)));
m_resetScalingAct = new QAction("Resize to Frame Size", this);
m_resetScalingAct->setStatusTip("Resizes the capture window to match 
frame size");
+   m_resetScalingAct->setShortcut(Qt::CTRL+Qt::Key_F);
 
QMenu *captureMenu = menuBar()->addMenu("&Capture");
captureMenu->addAction(m_capStartAct);
-- 
1.8.3.2

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 2/9] qv4l2: fix YUY2 shader

2013-08-06 Thread Bård Eirik Winther
Fixed the YUY2 shaders to support scaling.
The new solution has cleaner shader code and texture upload
uses a better format for OpenGL.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/capture-win-gl.cpp | 68 ++
 1 file changed, 35 insertions(+), 33 deletions(-)

diff --git a/utils/qv4l2/capture-win-gl.cpp b/utils/qv4l2/capture-win-gl.cpp
index c499f1f..6071410 100644
--- a/utils/qv4l2/capture-win-gl.cpp
+++ b/utils/qv4l2/capture-win-gl.cpp
@@ -1,5 +1,5 @@
 /*
- * The YUY2 shader code was copied and simplified from face-responder. The 
code is under public domain:
+ * The YUY2 shader code is based on face-responder. The code is under public 
domain:
  * 
https://bitbucket.org/nateharward/face-responder/src/0c3b4b957039d9f4bf1da09b9471371942de2601/yuv42201_laplace.frag?at=master
  *
  * All other OpenGL code:
@@ -446,47 +446,51 @@ QString CaptureWinGLEngine::shader_YUY2_invariant(__u32 
format)
 {
switch (format) {
case V4L2_PIX_FMT_YUYV:
-   return QString("y = (luma_chroma.r - 0.0625) * 1.1643;"
-  "if (mod(xcoord, 2.0) == 0.0) {"
-  "   u = luma_chroma.a;"
-  "   v = texture2D(tex, vec2(pixelx + texl_w, 
pixely)).a;"
+   return QString("if (mod(xcoord, 2.0) == 0.0) {"
+  "   luma_chroma = texture2D(tex, vec2(pixelx, 
pixely));"
+  "   y = (luma_chroma.r - 0.0625) * 1.1643;"
   "} else {"
-  "   v = luma_chroma.a;"
-  "   u = texture2D(tex, vec2(pixelx - texl_w, 
pixely)).a;"
+  "   luma_chroma = texture2D(tex, vec2(pixelx - 
texl_w, pixely));"
+  "   y = (luma_chroma.b - 0.0625) * 1.1643;"
   "}"
+  "u = luma_chroma.g - 0.5;"
+  "v = luma_chroma.a - 0.5;"
   );
 
case V4L2_PIX_FMT_YVYU:
-   return QString("y = (luma_chroma.r - 0.0625) * 1.1643;"
-  "if (mod(xcoord, 2.0) == 0.0) {"
-  "   v = luma_chroma.a;"
-  "   u = texture2D(tex, vec2(pixelx + texl_w, 
pixely)).a;"
+   return QString("if (mod(xcoord, 2.0) == 0.0) {"
+  "   luma_chroma = texture2D(tex, vec2(pixelx, 
pixely));"
+  "   y = (luma_chroma.r - 0.0625) * 1.1643;"
   "} else {"
-  "   u = luma_chroma.a;"
-  "   v = texture2D(tex, vec2(pixelx - texl_w, 
pixely)).a;"
+  "   luma_chroma = texture2D(tex, vec2(pixelx - 
texl_w, pixely));"
+  "   y = (luma_chroma.b - 0.0625) * 1.1643;"
   "}"
+  "u = luma_chroma.a - 0.5;"
+  "v = luma_chroma.g - 0.5;"
   );
 
case V4L2_PIX_FMT_UYVY:
-   return QString("y = (luma_chroma.a - 0.0625) * 1.1643;"
-  "if (mod(xcoord, 2.0) == 0.0) {"
-  "   u = luma_chroma.r;"
-  "   v = texture2D(tex, vec2(pixelx + texl_w, 
pixely)).r;"
+   return QString("if (mod(xcoord, 2.0) == 0.0) {"
+  "   luma_chroma = texture2D(tex, vec2(pixelx, 
pixely));"
+  "   y = (luma_chroma.g - 0.0625) * 1.1643;"
   "} else {"
-  "   v = luma_chroma.r;"
-  "   u = texture2D(tex, vec2(pixelx - texl_w, 
pixely)).r;"
+  "   luma_chroma = texture2D(tex, vec2(pixelx - 
texl_w, pixely));"
+  "   y = (luma_chroma.a - 0.0625) * 1.1643;"
   "}"
+  "u = luma_chroma.r - 0.5;"
+  "v = luma_chroma.b - 0.5;"
   );
 
case V4L2_PIX_FMT_VYUY:
-   return QString("y = (luma_chroma.a - 0.0625) * 1.1643;"
-  "if (mod(xcoord, 2.0) == 0.0) {"
-  "   v = luma_chroma.r;"
-  "   u = texture2D(tex, vec2(pixelx + texl_w, 
pixely)).r;"
+   return QString("if (mod(xcoord, 2.0) == 0.0) {"
+  "   luma_chroma = texture2D(tex, vec2(pixelx, 
pixely));"
+  "   y = (luma_chroma.g - 0.0625) * 1.1643;"
   "} else {"
-  "   u = luma_chroma.r;"
-  "   v = texture2D(tex, vec2(pixelx - texl_w, 
pixely)).r;"
+  

[PATCH 1/9] qv4l2: generalized opengl include guards

2013-08-06 Thread Bård Eirik Winther
Created a general QtGL makefile condition and use config.h
in the code to check whether QtGL is present.

Signed-off-by: Bård Eirik Winther 
---
 configure.ac   |  6 --
 utils/qv4l2/Makefile.am|  4 ++--
 utils/qv4l2/capture-win-gl.cpp | 12 ++--
 utils/qv4l2/capture-win-gl.h   |  6 --
 4 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/configure.ac b/configure.ac
index 5a0bb5f..e83baa4 100644
--- a/configure.ac
+++ b/configure.ac
@@ -132,7 +132,9 @@ else
 fi
 
 PKG_CHECK_MODULES(QTGL, [QtOpenGL >= 4.4 gl], [qt_pkgconfig_gl=true], 
[qt_pkgconfig_gl=false])
-if test "x$qt_pkgconfig_gl" = "xfalse"; then
+if test "x$qt_pkgconfig_gl" = "xtrue"; then
+   AC_DEFINE([HAVE_QTGL], [1], [qt has opengl support])
+else
AC_MSG_WARN(Qt4 OpenGL or higher is not available)
 fi
 
@@ -249,9 +251,9 @@ AM_CONDITIONAL([WITH_LIBDVBV5], [test x$enable_libdvbv5 = 
xyes])
 AM_CONDITIONAL([WITH_LIBV4L], [test x$enable_libv4l != xno])
 AM_CONDITIONAL([WITH_V4LUTILS], [test x$enable_v4lutils != xno])
 AM_CONDITIONAL([WITH_QV4L2], [test ${qt_pkgconfig} = true -a x$enable_qv4l2 != 
xno])
-AM_CONDITIONAL([WITH_QV4L2_GL], [test WITH_QV4L2 -a ${qt_pkgconfig_gl} = true])
 AM_CONDITIONAL([WITH_V4L_PLUGINS], [test x$enable_libv4l != xno -a 
x$enable_shared != xno])
 AM_CONDITIONAL([WITH_V4L_WRAPPERS], [test x$enable_libv4l != xno -a 
x$enable_shared != xno])
+AM_CONDITIONAL([WITH_QTGL], [test ${qt_pkgconfig_gl} = true])
 
 # append -static to libtool compile and link command to enforce static libs
 AS_IF([test x$enable_libdvbv5 != xyes], [AC_SUBST([ENFORCE_LIBDVBV5_STATIC], 
["-static"])])
diff --git a/utils/qv4l2/Makefile.am b/utils/qv4l2/Makefile.am
index 3aed18c..58ac097 100644
--- a/utils/qv4l2/Makefile.am
+++ b/utils/qv4l2/Makefile.am
@@ -7,8 +7,8 @@ nodist_qv4l2_SOURCES = moc_qv4l2.cpp moc_general-tab.cpp 
moc_capture-win.cpp moc
 qv4l2_LDADD = ../../lib/libv4l2/libv4l2.la 
../../lib/libv4lconvert/libv4lconvert.la ../libv4l2util/libv4l2util.la \
   ../libmedia_dev/libmedia_dev.la
 
-if WITH_QV4L2_GL
-qv4l2_CPPFLAGS = $(QTGL_CFLAGS) -DENABLE_GL
+if WITH_QTGL
+qv4l2_CPPFLAGS = $(QTGL_CFLAGS)
 qv4l2_LDFLAGS = $(QTGL_LIBS)
 else
 qv4l2_CPPFLAGS = $(QT_CFLAGS)
diff --git a/utils/qv4l2/capture-win-gl.cpp b/utils/qv4l2/capture-win-gl.cpp
index 52412c7..c499f1f 100644
--- a/utils/qv4l2/capture-win-gl.cpp
+++ b/utils/qv4l2/capture-win-gl.cpp
@@ -26,7 +26,7 @@
 
 CaptureWinGL::CaptureWinGL()
 {
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
CaptureWin::buildWindow(&m_videoSurface);
 #endif
CaptureWin::setWindowTitle("V4L2 Capture (OpenGL)");
@@ -38,14 +38,14 @@ CaptureWinGL::~CaptureWinGL()
 
 void CaptureWinGL::stop()
 {
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
m_videoSurface.stop();
 #endif
 }
 
 void CaptureWinGL::setFrame(int width, int height, __u32 format, unsigned char 
*data, const QString &info)
 {
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
m_videoSurface.setFrame(width, height, format, data);
 #endif
m_information.setText(info);
@@ -53,7 +53,7 @@ void CaptureWinGL::setFrame(int width, int height, __u32 
format, unsigned char *
 
 bool CaptureWinGL::hasNativeFormat(__u32 format)
 {
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
return m_videoSurface.hasNativeFormat(format);
 #else
return false;
@@ -62,14 +62,14 @@ bool CaptureWinGL::hasNativeFormat(__u32 format)
 
 bool CaptureWinGL::isSupported()
 {
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
return true;
 #else
return false;
 #endif
 }
 
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
 CaptureWinGLEngine::CaptureWinGLEngine() :
m_frameHeight(0),
m_frameWidth(0),
diff --git a/utils/qv4l2/capture-win-gl.h b/utils/qv4l2/capture-win-gl.h
index 08e72b2..6e64269 100644
--- a/utils/qv4l2/capture-win-gl.h
+++ b/utils/qv4l2/capture-win-gl.h
@@ -18,10 +18,12 @@
 #ifndef CAPTURE_WIN_GL_H
 #define CAPTURE_WIN_GL_H
 
+#include 
+
 #include "qv4l2.h"
 #include "capture-win.h"
 
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
 #define GL_GLEXT_PROTOTYPES
 #include 
 #include 
@@ -88,7 +90,7 @@ public:
bool hasNativeFormat(__u32 format);
static bool isSupported();
 
-#ifdef ENABLE_GL
+#ifdef HAVE_QTGL
CaptureWinGLEngine m_videoSurface;
 #endif
 };
-- 
1.8.3.2

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 0/9] qv4l2: scaling, pixel aspect ratio and render fixes

2013-08-06 Thread Bård Eirik Winther
The patch series depends on the qv4l2 ALSA and OpenGL patch series.

This adds scaling and aspect ratio support to the qv4l2 CaptureWin.
In that regard it fixes a lot of other issues that would otherwise make scaling
render incorrectly. It also fixes some issues with the original OpenGL patch 
series,
as well as adding tweaks and improvements left out in the original patches.

Some of the changes/improvements:
- CaptureWin have scaling support for video frames for all renderers
- CaptureWin support pixel aspect ratio scaling
- Aspect ratio and scaling can be changed during capture
- Reset and disable scaling options
- CaptureWin's setMinimumSize is now resize, which resizes the window to the 
frame size given
  and minimum size is set automatically
- The YUY2 shader programs are rewritten and has the resizing issue fixed
- The Show Frames option in Capture menu can be toggled during capture
- Added a hotkey:
CTRL + F : (size to video 'F'rame)
   When either the main window or capture window is selected
   this will reset the scaling to fit the frame size.
   This option is also available in the Capture menu.

Pixel Aspect Ratio Modes:
- Autodetect (if not supported this assumes square pixels)
- Square
- NTSC/PAL-M/PAL-60
- NTSC/PAL-M/PAL-60, Anamorphic
- PAL/SECAM
- PAL/SECAM, Anamorphic

Performance:
  All tests are done using the 3.10 kernel with OpenGL enabled and desktop 
effects disabled.
  Testing was done on an Intel i7-2600S (with Turbo Boost disabled)
  using the integrated Intel HD 2000 graphics processor. The motherboard is an 
ASUS P8H77-I
  with 2x2GB CL 9-9-9-24 DDR3 RAM. The capture card is a Cisco test card with 4 
HDMI
  inputs connected using PCIe2.0x8. All video input streams used for testing are
  progressive HD (1920x1080) with 60fps.

  FPS for every input for a given number of streams
  (BGR3, YU12 and YV12 are emulated using the CPU):
          1 STREAM   2 STREAMS   3 STREAMS   4 STREAMS
  RGB3        60          60          60          60
  BGR3        60          60          60          58
  YUYV        60          60          60          60
  YU12        60          60          60          60
  YV12        60          60          60          60

Sidenote:
- Performing scaling and colorspace conversion for 1080p60 using the CPU
  can cause a noticeable drop in framerate. Using OpenGL instead is recommended.



[PATCH 3/9] qv4l2: fix black screen with opengl after capture

2013-08-06 Thread Bård Eirik Winther
Fixes an issue where the frame image would become black when the window
was being resized or moved.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/capture-win-gl.cpp | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/utils/qv4l2/capture-win-gl.cpp b/utils/qv4l2/capture-win-gl.cpp
index 6071410..c8238c5 100644
--- a/utils/qv4l2/capture-win-gl.cpp
+++ b/utils/qv4l2/capture-win-gl.cpp
@@ -253,6 +253,12 @@ void CaptureWinGLEngine::paintGL()
changeShader();
 
if (m_frameData == NULL) {
+   glBegin(GL_QUADS);
+   glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0, 0);
+   glTexCoord2f(1.0f, 0.0f); glVertex2f(m_frameWidth, 0);
+   glTexCoord2f(1.0f, 1.0f); glVertex2f(m_frameWidth, 
m_frameHeight);
+   glTexCoord2f(0.0f, 1.0f); glVertex2f(0, m_frameHeight);
+   glEnd();
return;
}
 
-- 
1.8.3.2



[PATCH 4/9] qv4l2: show frames option can be toggled during capture

2013-08-06 Thread Bård Eirik Winther
Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/qv4l2.cpp | 79 +++
 utils/qv4l2/qv4l2.h   |  2 +-
 2 files changed, 43 insertions(+), 38 deletions(-)

diff --git a/utils/qv4l2/qv4l2.cpp b/utils/qv4l2/qv4l2.cpp
index 5510041..ee0c22d 100644
--- a/utils/qv4l2/qv4l2.cpp
+++ b/utils/qv4l2/qv4l2.cpp
@@ -404,7 +404,7 @@ void ApplicationWindow::capVbiFrame()
m_capStartAct->setChecked(false);
return;
}
-   if (m_showFrames) {
+   if (showFrames()) {
for (unsigned y = 0; y < m_vbiHeight; y++) {
__u8 *p = data + y * m_vbiWidth;
__u8 *q = m_capImage->bits() + y * 
m_capImage->bytesPerLine();
@@ -448,7 +448,7 @@ void ApplicationWindow::capVbiFrame()
m_tv = tv;
}
status = QString("Frame: %1 Fps: %2").arg(++m_frame).arg(m_fps);
-   if (m_showFrames)
+   if (showFrames())
m_capture->setFrame(m_capImage->width(), m_capImage->height(),
m_capDestFormat.fmt.pix.pixelformat, 
m_capImage->bits(), status);
 
@@ -491,7 +491,7 @@ void ApplicationWindow::capFrame()
if (m_saveRaw.openMode())
m_saveRaw.write((const char *)m_frameData, s);
 
-   if (!m_showFrames)
+   if (!showFrames())
break;
if (m_mustConvert)
err = v4lconvert_convert(m_convertData, 
&m_capSrcFormat, &m_capDestFormat,
@@ -515,7 +515,7 @@ void ApplicationWindow::capFrame()
if (again)
return;
 
-   if (m_showFrames) {
+   if (showFrames()) {
if (m_mustConvert)
err = v4lconvert_convert(m_convertData, 
&m_capSrcFormat, &m_capDestFormat,
 (unsigned char 
*)m_buffers[buf.index].start, buf.bytesused,
@@ -544,7 +544,7 @@ void ApplicationWindow::capFrame()
if (again)
return;
 
-   if (m_showFrames) {
+   if (showFrames()) {
if (m_mustConvert)
err = v4lconvert_convert(m_convertData, 
&m_capSrcFormat, &m_capDestFormat,
 (unsigned char 
*)buf.m.userptr, buf.bytesused,
@@ -590,10 +590,10 @@ void ApplicationWindow::capFrame()
  .arg((m_totalAudioLatency.tv_sec * 1000 + 
m_totalAudioLatency.tv_usec / 1000) / m_frame));
}
 #endif
-   if (displaybuf == NULL && m_showFrames)
+   if (displaybuf == NULL && showFrames())
status.append(" Error: Unsupported format.");
 
-   if (m_showFrames)
+   if (showFrames())
m_capture->setFrame(m_capImage->width(), m_capImage->height(),
m_capDestFormat.fmt.pix.pixelformat, 
displaybuf, status);
 
@@ -776,6 +776,15 @@ void ApplicationWindow::stopCapture()
refresh();
 }
 
+bool ApplicationWindow::showFrames()
+{
+   if (m_showFramesAct->isChecked() && !m_capture->isVisible())
+   m_capture->show();
+   if (!m_showFramesAct->isChecked() && m_capture->isVisible())
+   m_capture->hide();
+   return m_showFramesAct->isChecked();
+}
+
 void ApplicationWindow::startOutput(unsigned)
 {
 }
@@ -849,7 +858,6 @@ void ApplicationWindow::capStart(bool start)
m_capImage = NULL;
return;
}
-   m_showFrames = m_showFramesAct->isChecked();
m_frame = m_lastFrame = m_fps = 0;
m_capMethod = m_genTab->capMethod();
 
@@ -857,7 +865,6 @@ void ApplicationWindow::capStart(bool start)
v4l2_format fmt;
v4l2_std_id std;
 
-   m_showFrames = false;
g_fmt_sliced_vbi(fmt);
g_std(std);
fmt.fmt.sliced.service_set = (std & V4L2_STD_625_50) ?
@@ -896,14 +903,14 @@ void ApplicationWindow::capStart(bool start)
m_vbiHeight = fmt.fmt.vbi.count[0] + 
fmt.fmt.vbi.count[1];
m_vbiSize = m_vbiWidth * m_vbiHeight;
m_frameData = new unsigned char[m_vbiSize];
-   if (m_showFrames) {
-   m_capture->setMinimumSize(m_vbiWidth, m_vbiHeight);
-   m_capImage = new QImage(m_vbiWidth, m_vbiHeight, 
dstFmt);
-   m_capImage->fill(0);
-   m_capture->setFrame(m_capImage->width(), 
m_capImage->height(),
-   
m_capDestFormat.fmt.pix.pixelformat, m_capImage->bits(), "No frame");
+   m_capture->setMinimumSize(m_vbiWidth, m_vbiHeight);
+   m_capImage = new QImage(m_vbiWidth, m_vbiHeight, dstFmt);
+   m_capImage->fill(0);
+   m_capture->setFrame(m_capIm

[PATCHv2 0/5] qv4l2: add ALSA audio playback

2013-08-06 Thread Bård Eirik Winther
The qv4l2 test utility now supports ALSA playback of audio.
This allows for PCM playback during capture for supported devices.

This requires at least the OpenGL patch series' "qv4l2: add Capture menu" patch.
A device must be ALSA compatible in order to be used with the qv4l2.
The ALSA implementation requires ALSA on the system. If ALSA support is not 
present,
then this feature will not be compiled in.

Changelog v2:
- Fixed the A-V average measuring
- ALSA is always compiled in but uses include guards from config.h instead

Some of the changes/improvements:
- Capturing will also capture audio
- Added audio controls to the capture menu
- Selectable audio devices (can also have no audio)
- Automatically find corresponding audio source for a given video device if 
applicable
- Supports radio, video and audio devices that use PCM.
- Bug fixes

Known issues:
- Sometimes when generating the audio in and out device lists,
  it may take some time for the combo boxes to render correctly.
- If the audio causes underruns/overruns, try increasing the audio buffer.
- Not all audio input/output combinations will work, depending on the system and
devices.
- The A-V difference in ms is not always correct, but should still help as an
  indicator (see the sketch below).

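As an illustration of how such an A-V offset can be derived from the
timestamps this series exposes (this is a sketch only, not the exact
computation done in qv4l2; buf is assumed to be the v4l2_buffer that was
just dequeued):

#include <sys/time.h>
#include <linux/videodev2.h>
#include "alsa_stream.h"

/* Sketch: estimate the audio-video offset in milliseconds by comparing the
 * last ALSA capture timestamp (alsa_thread_timestamp() from this series)
 * with the timestamp of the video buffer that was just dequeued.
 */
static long av_offset_ms(const struct v4l2_buffer &buf)
{
	struct timeval atv;

	alsa_thread_timestamp(&atv);	/* reports 0/0 if the ALSA thread is not running */
	if (atv.tv_sec == 0 && atv.tv_usec == 0)
		return 0;

	return (atv.tv_sec - buf.timestamp.tv_sec) * 1000 +
	       (atv.tv_usec - buf.timestamp.tv_usec) / 1000;
}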


[PATCHv2 3/5] qv4l2: fix a bug where the alsa thread never stops

2013-08-06 Thread Bård Eirik Winther
If the output audio device never read the buffer then the alsa thread
would continue to fill it up and never stop when the capture stops.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/alsa_stream.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/utils/qv4l2/alsa_stream.c b/utils/qv4l2/alsa_stream.c
index 3e33b5e..fbff4cb 100644
--- a/utils/qv4l2/alsa_stream.c
+++ b/utils/qv4l2/alsa_stream.c
@@ -437,7 +437,7 @@ static snd_pcm_sframes_t writebuf(snd_pcm_t *handle, char 
*buf, long len)
 {
 snd_pcm_sframes_t r;
 
-while (1) {
+while (!stop_alsa) {
r = snd_pcm_writei(handle, buf, len);
if (r == len)
return 0;
-- 
1.8.3.2



[PATCHv2 2/5] qv4l2: new ALSA stream source code

2013-08-06 Thread Bård Eirik Winther
Code copied from xawtv3

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/alsa_stream.c | 645 ++
 utils/qv4l2/alsa_stream.h |   5 +
 2 files changed, 650 insertions(+)
 create mode 100644 utils/qv4l2/alsa_stream.c
 create mode 100644 utils/qv4l2/alsa_stream.h

diff --git a/utils/qv4l2/alsa_stream.c b/utils/qv4l2/alsa_stream.c
new file mode 100644
index 000..3e33b5e
--- /dev/null
+++ b/utils/qv4l2/alsa_stream.c
@@ -0,0 +1,645 @@
+/*
+ *  ALSA streaming support
+ *
+ *  Originally written by:
+ *  Copyright (c) by Devin Heitmueller 
+ * for usage at tvtime
+ *  Derived from the alsa-driver test tool latency.c:
+ *Copyright (c) by Jaroslav Kysela 
+ *
+ *  Copyright (c) 2011 - Mauro Carvalho Chehab 
+ * Ported to xawtv, with bug fixes and improvements
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, write to the Free Software
+ *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 USA
+ *
+ */
+
+#include "config.h"
+
+#ifdef HAVE_ALSA_ASOUNDLIB_H
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "alsa_stream.h"
+
+#define ARRAY_SIZE(a) (sizeof(a)/sizeof(*(a)))
+
+/* Private vars to control alsa thread status */
+static int stop_alsa = 0;
+
+/* Error handlers */
+snd_output_t *output = NULL;
+FILE *error_fp;
+int verbose = 0;
+
+struct final_params {
+int bufsize;
+int rate;
+int latency;
+int channels;
+};
+
+static int setparams_stream(snd_pcm_t *handle,
+   snd_pcm_hw_params_t *params,
+   snd_pcm_format_t format,
+   int *channels,
+   const char *id)
+{
+int err;
+
+err = snd_pcm_hw_params_any(handle, params);
+if (err < 0) {
+   fprintf(error_fp,
+   "alsa: Broken configuration for %s PCM: no configurations 
available: %s\n",
+   snd_strerror(err), id);
+   return err;
+}
+
+err = snd_pcm_hw_params_set_access(handle, params,
+  SND_PCM_ACCESS_RW_INTERLEAVED);
+if (err < 0) {
+   fprintf(error_fp, "alsa: Access type not available for %s: %s\n", id,
+   snd_strerror(err));
+   return err;
+}
+
+err = snd_pcm_hw_params_set_format(handle, params, format);
+if (err < 0) {
+   fprintf(error_fp, "alsa: Sample format not available for %s: %s\n", id,
+  snd_strerror(err));
+   return err;
+}
+
+retry:
+err = snd_pcm_hw_params_set_channels(handle, params, *channels);
+if (err < 0) {
+   if (strcmp(id, "capture") == 0 && *channels == 2) {
+   *channels = 1;
+   goto retry; /* Retry with mono capture */
+   }
+   fprintf(error_fp, "alsa: Channels count (%i) not available for %s: 
%s\n",
+   *channels, id, snd_strerror(err));
+   return err;
+}
+
+return 0;
+}
+
+static void getparams_periods(snd_pcm_t *handle,
+ snd_pcm_hw_params_t *params,
+ unsigned int *usecs,
+ unsigned int *count,
+ const char *id)
+{
+unsigned min = 0, max = 0;
+
+snd_pcm_hw_params_get_periods_min(params, &min, 0);
+snd_pcm_hw_params_get_periods_max(params, &max, 0);
+if (min && max) {
+   if (verbose)
+   fprintf(error_fp, "alsa: %s periods range between %u and %u. Want: 
%u\n",
+   id, min, max, *count);
+   if (*count < min)
+   *count = min;
+   if (*count > max)
+   *count = max;
+}
+
+min = max = 0;
+snd_pcm_hw_params_get_period_time_min(params, &min, 0);
+snd_pcm_hw_params_get_period_time_max(params, &max, 0);
+if (min && max) {
+   if (verbose)
+   fprintf(error_fp, "alsa: %s period time range between %u and %u. 
Want: %u\n",
+   id, min, max, *usecs);
+   if (*usecs < min)
+   *usecs = min;
+   if (*usecs > max)
+   *usecs = max;
+}
+}
+
+static int setparams_periods(snd_pcm_t *handle,
+ snd_pcm_hw_params_t *params,
+ unsigned int *usecs,
+ unsigned int *count,
+ const char *id)
+{
+int err;
+
+err = snd_pcm_hw_params_set_period_time_near(handle, params, usecs, 0);
+if (err < 0) {

[PATCHv2 1/5] qv4l2: alter capture menu

2013-08-06 Thread Bård Eirik Winther
Corrected "Use OpenGL Render" to "Use OpenGL Rendering" and removed the separator line.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/qv4l2.cpp | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/utils/qv4l2/qv4l2.cpp b/utils/qv4l2/qv4l2.cpp
index 4dc5a3e..275b399 100644
--- a/utils/qv4l2/qv4l2.cpp
+++ b/utils/qv4l2/qv4l2.cpp
@@ -131,12 +131,11 @@ ApplicationWindow::ApplicationWindow() :
QMenu *captureMenu = menuBar()->addMenu("&Capture");
captureMenu->addAction(m_capStartAct);
captureMenu->addAction(m_showFramesAct);
-   captureMenu->addSeparator();
 
if (CaptureWinGL::isSupported()) {
m_renderMethod = QV4L2_RENDER_GL;
 
-   m_useGLAct = new QAction("Use Open&GL Render", this);
+   m_useGLAct = new QAction("Use Open&GL Rendering", this);
m_useGLAct->setStatusTip("Use GPU with OpenGL for video capture 
if set.");
m_useGLAct->setCheckable(true);
m_useGLAct->setChecked(true);
-- 
1.8.3.2



[PATCHv2 4/5] qv4l2: add ALSA stream to qv4l2

2013-08-06 Thread Bård Eirik Winther
Changes the ALSA streaming code to work with qv4l2 and allows it to
be compiled in. qv4l2 does not use the streaming function yet.

Signed-off-by: Bård Eirik Winther 
---
 configure.ac  |  7 +++
 utils/qv4l2/Makefile.am   |  8 ++--
 utils/qv4l2/alsa_stream.c | 28 
 utils/qv4l2/alsa_stream.h | 13 ++---
 4 files changed, 47 insertions(+), 9 deletions(-)

diff --git a/configure.ac b/configure.ac
index d74da61..5a0bb5f 100644
--- a/configure.ac
+++ b/configure.ac
@@ -136,6 +136,13 @@ if test "x$qt_pkgconfig_gl" = "xfalse"; then
AC_MSG_WARN(Qt4 OpenGL or higher is not available)
 fi
 
+PKG_CHECK_MODULES(ALSA, [alsa], [alsa_pkgconfig=true], [alsa_pkgconfig=false])
+if test "x$alsa_pkgconfig" = "xtrue"; then
+   AC_DEFINE([HAVE_ALSA], [1], [alsa library is present])
+else
+   AC_MSG_WARN(ALSA library not available)
+fi
+
 AC_SUBST([JPEG_LIBS])
 
 # The dlopen() function is in the C library for *BSD and in
diff --git a/utils/qv4l2/Makefile.am b/utils/qv4l2/Makefile.am
index 22d4c17..3aed18c 100644
--- a/utils/qv4l2/Makefile.am
+++ b/utils/qv4l2/Makefile.am
@@ -1,10 +1,11 @@
 bin_PROGRAMS = qv4l2
 
 qv4l2_SOURCES = qv4l2.cpp general-tab.cpp ctrl-tab.cpp vbi-tab.cpp 
v4l2-api.cpp capture-win.cpp \
-  capture-win-qt.cpp capture-win-qt.h capture-win-gl.cpp capture-win-gl.h \
+  capture-win-qt.cpp capture-win-qt.h capture-win-gl.cpp capture-win-gl.h 
alsa_stream.c alsa_stream.h \
   raw2sliced.cpp qv4l2.h capture-win.h general-tab.h vbi-tab.h v4l2-api.h 
raw2sliced.h
 nodist_qv4l2_SOURCES = moc_qv4l2.cpp moc_general-tab.cpp moc_capture-win.cpp 
moc_vbi-tab.cpp qrc_qv4l2.cpp
-qv4l2_LDADD = ../../lib/libv4l2/libv4l2.la 
../../lib/libv4lconvert/libv4lconvert.la ../libv4l2util/libv4l2util.la
+qv4l2_LDADD = ../../lib/libv4l2/libv4l2.la 
../../lib/libv4lconvert/libv4lconvert.la ../libv4l2util/libv4l2util.la \
+  ../libmedia_dev/libmedia_dev.la
 
 if WITH_QV4L2_GL
 qv4l2_CPPFLAGS = $(QTGL_CFLAGS) -DENABLE_GL
@@ -14,6 +15,9 @@ qv4l2_CPPFLAGS = $(QT_CFLAGS)
 qv4l2_LDFLAGS = $(QT_LIBS)
 endif
 
+qv4l2_CPPFLAGS += $(ALSA_CFLAGS)
+qv4l2_LDFLAGS += $(ALSA_LIBS) -pthread
+
 EXTRA_DIST = exit.png fileopen.png qv4l2_24x24.png qv4l2_64x64.png qv4l2.png 
qv4l2.svg snapshot.png \
   video-television.png fileclose.png qv4l2_16x16.png qv4l2_32x32.png 
qv4l2.desktop qv4l2.qrc record.png \
   saveraw.png qv4l2.pro
diff --git a/utils/qv4l2/alsa_stream.c b/utils/qv4l2/alsa_stream.c
index fbff4cb..dd01d1a 100644
--- a/utils/qv4l2/alsa_stream.c
+++ b/utils/qv4l2/alsa_stream.c
@@ -26,9 +26,10 @@
  *
  */
 
-#include "config.h"
+#include 
 
-#ifdef HAVE_ALSA_ASOUNDLIB_H
+#ifdef HAVE_ALSA
+#include "alsa_stream.h"
 
 #include 
 #include 
@@ -40,12 +41,12 @@
 #include 
 #include 
 #include 
-#include "alsa_stream.h"
 
 #define ARRAY_SIZE(a) (sizeof(a)/sizeof(*(a)))
 
 /* Private vars to control alsa thread status */
 static int stop_alsa = 0;
+static snd_htimestamp_t timestamp;
 
 /* Error handlers */
 snd_output_t *output = NULL;
@@ -202,6 +203,13 @@ static int setparams_set(snd_pcm_t *handle,
id, snd_strerror(err));
return err;
 }
+
+err = snd_pcm_sw_params_set_tstamp_mode(handle, swparams, 
SND_PCM_TSTAMP_ENABLE);
+if (err < 0) {
+   fprintf(error_fp, "alsa: Unable to enable timestamps for %s: %s\n",
+   id, snd_strerror(err));
+}
+
 err = snd_pcm_sw_params(handle, swparams);
 if (err < 0) {
fprintf(error_fp, "alsa: Unable to set sw params for %s: %s\n",
@@ -422,7 +430,8 @@ static int setparams(snd_pcm_t *phandle, snd_pcm_t *chandle,
 static snd_pcm_sframes_t readbuf(snd_pcm_t *handle, char *buf, long len)
 {
 snd_pcm_sframes_t r;
-
+snd_pcm_uframes_t frames;
+snd_pcm_htimestamp(handle, &frames, &timestamp);
 r = snd_pcm_readi(handle, buf, len);
 if (r < 0 && r != -EAGAIN) {
r = snd_pcm_recover(handle, r, 0);
@@ -453,6 +462,7 @@ static snd_pcm_sframes_t writebuf(snd_pcm_t *handle, char 
*buf, long len)
len -= r;
snd_pcm_wait(handle, 100);
 }
+return -1;
 }
 
 static int alsa_stream(const char *pdevice, const char *cdevice, int latency)
@@ -642,4 +652,14 @@ int alsa_thread_is_running(void)
 return alsa_is_running;
 }
 
+void alsa_thread_timestamp(struct timeval *tv)
+{
+   if (alsa_thread_is_running()) {
+   tv->tv_sec = timestamp.tv_sec;
+   tv->tv_usec = timestamp.tv_nsec / 1000;
+   } else {
+   tv->tv_sec = 0;
+   tv->tv_usec = 0;
+   }
+}
 #endif
diff --git a/utils/qv4l2/alsa_stream.h b/utils/qv4l2/alsa_stream.h
index c68fd6d..b736ec3 100644
--- a/utils/qv4l2/alsa_stream.h
+++ b/utils/qv4l2/alsa_stream.h
@@ -1,5 +1,12 @@
-int alsa_thread_startup(const char *pdevice, const char *cdevice, int latency,
-   FILE *__error_fp,
-   int __verbose);
+#ifndef ALSA_STREAM_H
+#define ALSA_STREAM_H
+
+#include 
+#include 
+
+int alsa_thread_startup(const char *pdevice, const ch

[PATCHv2 5/5] qv4l2: add ALSA audio playback

2013-08-06 Thread Bård Eirik Winther
The qv4l2 test utility now supports ALSA playback of audio.
This allows for PCM playback during capture for supported devices.

Signed-off-by: Bård Eirik Winther 
---
 utils/qv4l2/general-tab.cpp | 296 +++-
 utils/qv4l2/general-tab.h   |  38 ++
 utils/qv4l2/qv4l2.cpp   | 141 -
 utils/qv4l2/qv4l2.h |   9 ++
 4 files changed, 480 insertions(+), 4 deletions(-)

diff --git a/utils/qv4l2/general-tab.cpp b/utils/qv4l2/general-tab.cpp
index 10b14ca..5cfaf07 100644
--- a/utils/qv4l2/general-tab.cpp
+++ b/utils/qv4l2/general-tab.cpp
@@ -30,6 +30,16 @@
 
 #include 
 #include 
+#include 
+
+bool GeneralTab::m_fullAudioName = false;
+
+enum audioDeviceAdd {
+   AUDIO_ADD_NO,
+   AUDIO_ADD_READ,
+   AUDIO_ADD_WRITE,
+   AUDIO_ADD_READWRITE
+};
 
 GeneralTab::GeneralTab(const QString &device, v4l2 &fd, int n, QWidget 
*parent) :
QGridLayout(parent),
@@ -48,12 +58,16 @@ GeneralTab::GeneralTab(const QString &device, v4l2 &fd, int 
n, QWidget *parent)
m_vidCapFormats(NULL),
m_frameSize(NULL),
m_vidOutFormats(NULL),
-   m_vbiMethods(NULL)
+   m_vbiMethods(NULL),
+   m_audioInDevice(NULL),
+   m_audioOutDevice(NULL)
 {
+   m_device.append(device);
setSpacing(3);
 
setSizeConstraint(QLayout::SetMinimumSize);
 
+
if (querycap(m_querycap)) {
addLabel("Device:");
addLabel(device + (useWrapper() ? " (wrapped)" : ""), 
Qt::AlignLeft);
@@ -132,6 +146,42 @@ GeneralTab::GeneralTab(const QString &device, v4l2 &fd, 
int n, QWidget *parent)
updateAudioOutput();
}
 
+   if (hasAlsaAudio()) {
+   m_audioInDevice = new QComboBox(parent);
+   m_audioOutDevice = new QComboBox(parent);
+   
m_audioInDevice->setSizeAdjustPolicy(QComboBox::AdjustToContents);
+   
m_audioOutDevice->setSizeAdjustPolicy(QComboBox::AdjustToContents);
+
+   if (createAudioDeviceList()) {
+   addLabel("Audio Input Device");
+   connect(m_audioInDevice, SIGNAL(activated(int)), 
SLOT(changeAudioDevice()));
+   addWidget(m_audioInDevice);
+
+   addLabel("Audio Output Device");
+   connect(m_audioOutDevice, SIGNAL(activated(int)), 
SLOT(changeAudioDevice()));
+   addWidget(m_audioOutDevice);
+
+   if (isRadio()) {
+   setAudioDeviceBufferSize(75);
+   } else {
+   v4l2_fract fract;
+   if (!v4l2::get_interval(fract)) {
+   // Default values are for 30 FPS
+   fract.numerator = 33;
+   fract.denominator = 1000;
+   }
+   // Standard capacity is two frames
+   setAudioDeviceBufferSize((fract.numerator * 
2000) / fract.denominator);
+   }
+   } else {
+   fprintf(stderr, "BANNA\n");
+   delete m_audioInDevice;
+   delete m_audioOutDevice;
+   m_audioInDevice = NULL;
+   m_audioOutDevice = NULL;
+   }
+   }
+
if (needsStd) {
v4l2_std_id tmp;
 
@@ -370,6 +420,180 @@ done:
setRowStretch(rowCount() - 1, 1);
 }
 
+void GeneralTab::showAllAudioDevices(bool use)
+{
+   QString oldIn(m_audioInDevice->currentText());
+   QString oldOut(m_audioOutDevice->currentText());
+
+   m_fullAudioName = use;
+   if (oldIn == NULL || oldOut == NULL || !createAudioDeviceList())
+   return;
+
+   // Select a similar device as before the listings method change
+   // check by comparing old selection with any matching in the new list
+   bool setIn = false, setOut = false;
+   int listSize = std::max(m_audioInDevice->count(), 
m_audioOutDevice->count());
+
+   for (int i = 0; i < listSize; i++) {
+   QString 
oldInCmp(oldIn.left(std::min(m_audioInDevice->itemText(i).length(), 
oldIn.length())));
+   QString 
oldOutCmp(oldOut.left(std::min(m_audioOutDevice->itemText(i).length(), 
oldOut.length())));
+
+   if (!setIn && i < m_audioInDevice->count()
+   && m_audioInDevice->itemText(i).startsWith(oldInCmp)) {
+   setIn = true;
+   m_audioInDevice->setCurrentIndex(i);
+   }
+
+   if (!setOut && i < m_audioOutDevice->count()
+   && m_audioOutDevice->itemText(i).startsWith(oldOutCmp)) {
+   setOut = true;
+   m_audioOutDevice->setCurrentIndex(i);
+   }
+   }
+}
+
+bool GeneralTab::filter

[GIT PULL] New features for 3.12

2013-08-06 Thread Kamil Debski
The following changes since commit b43ea8068d2090cb1e44632c8a938ab40d2c7419:

  [media] cx23885: Fix TeVii S471 regression since introduction of ts2020
(2013-07-30 17:23:24 -0300)

are available in the git repository at:

  git://linuxtv.org/kdebski/media.git new-for-3.12-2nd

for you to fetch changes up to 8f55301a822a27f9c30c87284ff1d9e13aa1ea31:

  s5p-mfc: Add support for VP8 encoder (2013-08-01 11:57:56 +0200)


Arun Kumar K (7):
  s5p-mfc: Update v6 encoder buffer sizes
  s5p-mfc: Rename IS_MFCV6 macro
  s5p-mfc: Add register definition file for MFC v7
  s5p-mfc: Core support for MFC v7
  s5p-mfc: Update driver for v7 firmware
  V4L: Add VP8 encoder controls
  s5p-mfc: Add support for VP8 encoder

Sylwester Nawrocki (1):
  V4L: Add support for integer menu controls with standard menu items

 Documentation/DocBook/media/v4l/controls.xml   |  168
+++-
 .../devicetree/bindings/media/s5p-mfc.txt  |1 +
 Documentation/video4linux/v4l2-controls.txt|   21 +--
 drivers/media/platform/s5p-mfc/regs-mfc-v6.h   |4 +-
 drivers/media/platform/s5p-mfc/regs-mfc-v7.h   |   61 +++
 drivers/media/platform/s5p-mfc/s5p_mfc.c   |   32 
 drivers/media/platform/s5p-mfc/s5p_mfc_cmd.c   |2 +-
 drivers/media/platform/s5p-mfc/s5p_mfc_cmd_v6.c|3 +
 drivers/media/platform/s5p-mfc/s5p_mfc_common.h|   23 ++-
 drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c  |   12 +-
 drivers/media/platform/s5p-mfc/s5p_mfc_dec.c   |   18 ++-
 drivers/media/platform/s5p-mfc/s5p_mfc_enc.c   |  107 -
 drivers/media/platform/s5p-mfc/s5p_mfc_opr.c   |2 +-
 drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c|  149 +++--
 drivers/media/v4l2-core/v4l2-ctrls.c   |   67 +++-
 include/uapi/linux/v4l2-controls.h |   29 
 16 files changed, 642 insertions(+), 57 deletions(-)
 create mode 100644 drivers/media/platform/s5p-mfc/regs-mfc-v7.h



Re: How to express planar formats with mediabus format code?

2013-08-06 Thread Guennadi Liakhovetski
Hi Su Jiaquan,

On Tue, 6 Aug 2013, Su Jiaquan wrote:

> Hi Guennadi,
> 
> Thanks for the reply! Please see my description inline.
> 
> On Mon, Aug 5, 2013 at 5:02 AM, Guennadi Liakhovetski
>  wrote:
> > Hi Su Jiaquan
> >
> > On Sun, 4 Aug 2013, Su Jiaquan wrote:
> >
> >> Hi,
> >>
> >> I know the title looks crazy, but here is our problem:
> >>
> >> In our SoC based ISP, the hardware can be divide to several blocks.
> >> Some blocks can do color space conversion(raw to YUV
> >> interleave/planar), others can do the pixel
> >> re-order(interleave/planar/semi-planar conversion, UV planar switch).
> >> We use one subdev to describe each of them, then came the problem: How
> >> can we express the planar formats with mediabus format code?
> >
> > Could you please explain more exactly what you mean? How are those your
> > blocks connected? How do they exchange data? If they exchange data over a
> > serial bus, then I don't think planar formats make sense, right? Or do
> > your blocks really output planes one after another, reordering data
> > internally? That would be odd... If OTOH your blocks output data to RAM,
> > and the next block takes data from there, then you use V4L2_PIX_FMT_*
> > formats to describe them and any further processing block should be a
> > mem2mem device. Wouldn't this work?
> 
> These two hardware blocks are both located inside of ISP, and is
> connected by a hardware data bus.
> 
> Actually, there are three blocks inside ISP: One is close to sensor,
> and can do color space conversion(RGB->YUV), we call it IPC; The other
> two are at back end, which are basically DMA Engine, and they are
> identical. When data flow out of IPC, it can go into each one of these
> DMA Engines and finally into RAM. Whether the DMA Engine is turned
> on/off and the output format can be controlled independently. Since
> they are DMA Engines, they have some basic pixel reordering
> ability(i.e. interleave->planar/semi-planar).
> 
> In our H/W design, when we want to get YUV semi-planar format, the IPC
> output should be configured to interleave, and the DMA engine will do
> the interleave->semi-planar job. If we want planar / interleave
> format, the IPC will output planar format directly, DMA engine simply
> send the data to RAM, and don't do any re-order. So in the planar
> output case, media-bus formats can't express the format of the data
> between IPC and DMA Engine, that's the problem we meet.

Ok, so do I understand you correctly that, in the case where the IPC outputs 
planar data, you have:

1. your sensor is sending data to IPC

Then one of the following happens

2a. IPC stores the complete frame first, and only when the frame is 
complete, it first outputs the Y plane and then the UV plane.

A slight optimisation of this would be

2b. it outputs Y components as pixels arrive and stores UV data 
internally, and at the end of the frame it sends out UV

If this is indeed the case, well, then I'm on the same page with you - I 
don't know a standard solution for this, sorry. It seems to me that you 
will indeed need a new mediabus pixel code for this - either a generic one 
or a vendor-specific one. Let's see what others say.
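
Just to illustrate what I mean, a vendor-specific code could look roughly
like the sketch below. The code name, its numeric value and the function
are all made up for the example; nothing here is an existing definition:

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/v4l2-mediabus.h>
#include <media/v4l2-subdev.h>

/* Hypothetical code for "8-bit YUV 4:2:0 sent as separate planes over the
 * internal bus"; a real submission would have to reserve a proper value
 * in v4l2-mediabus.h.
 */
#define V4L2_MBUS_FMT_YUV420_PLANAR8	0x8001

/* The IPC subdev could then advertise it next to an existing interleaved
 * code (driver and function names are illustrative only):
 */
static int ipc_enum_mbus_code(struct v4l2_subdev *sd,
			      struct v4l2_subdev_fh *fh,
			      struct v4l2_subdev_mbus_code_enum *code)
{
	static const u32 codes[] = {
		V4L2_MBUS_FMT_YUYV8_2X8,	/* interleaved output */
		V4L2_MBUS_FMT_YUV420_PLANAR8,	/* planar output, sketch */
	};

	if (code->index >= ARRAY_SIZE(codes))
		return -EINVAL;
	code->code = codes[code->index];
	return 0;
}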

Thanks
Guennadi

> We want to adopt a formal solution before we send our patch to the
> community, that's where our headache comes.
> >
> > Thanks
> > Guennadi
> >
> >> I understand at beginning, media-bus was designed to describe the data
> >> link between camera sensor and camera controller, where sensor is
> >> described in subdev. So interleave formats looks good enough at that
> >> time. But now as Media-controller is introduced, subdev can describe a
> >> much wider range of hardware, which is not limited to camera sensor.
> >> So now planar formats are possible to be passed between subdevs.
> >>
> >> I think the problem we meet can be very common for SoC based ISP
> >> solutions, what do you think about it?
> >>
> >> there are many possible solution for it:
> >>
> >> 1> change the definition of v4l2_subdev_format::format, use v4l2_format;
> >>
> >> 2> extend the mediabus format code, add planar format code;
> >>
> >> 3> use a extra bit to tell the meaning of v4l2_mbus_framefmt::code, is
> >> it in mediabus-format or in fourcc
> >>
> >>  Do you have any suggestions?
> >>
> >>  Thanks a lot!
> >>
> >
> > ---
> > Guennadi Liakhovetski, Ph.D.
> > Freelance Open-Source Software Developer
> > http://www.open-technology.de/
> 
> Thanks!
> 
> Jiaquan
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: How to express planar formats with mediabus format code?

2013-08-06 Thread Su Jiaquan
Hi Guennadi,

Thanks for the reply! Please see my description inline.

On Mon, Aug 5, 2013 at 5:02 AM, Guennadi Liakhovetski
 wrote:
> Hi Su Jiaquan
>
> On Sun, 4 Aug 2013, Su Jiaquan wrote:
>
>> Hi,
>>
>> I know the title looks crazy, but here is our problem:
>>
>> In our SoC based ISP, the hardware can be divide to several blocks.
>> Some blocks can do color space conversion(raw to YUV
>> interleave/planar), others can do the pixel
>> re-order(interleave/planar/semi-planar conversion, UV planar switch).
>> We use one subdev to describe each of them, then came the problem: How
>> can we express the planar formats with mediabus format code?
>
> Could you please explain more exactly what you mean? How are those your
> blocks connected? How do they exchange data? If they exchange data over a
> serial bus, then I don't think planar formats make sense, right? Or do
> your blocks really output planes one after another, reordering data
> internally? That would be odd... If OTOH your blocks output data to RAM,
> and the next block takes data from there, then you use V4L2_PIX_FMT_*
> formats to describe them and any further processing block should be a
> mem2mem device. Wouldn't this work?

These two hardware blocks are both located inside the ISP and are
connected by a hardware data bus.

Actually, there are three blocks inside the ISP: one is close to the sensor
and can do color space conversion (RGB->YUV); we call it IPC. The other
two are at the back end; they are basically DMA engines and are
identical. When data flows out of the IPC, it can go into either of these
DMA engines and finally into RAM. Whether a DMA engine is turned on/off
and its output format can both be controlled independently. Since
they are DMA engines, they have some basic pixel reordering
ability (i.e. interleave -> planar/semi-planar).

In our H/W design, when we want to get the YUV semi-planar format, the IPC
output should be configured to interleaved, and the DMA engine will do
the interleave -> semi-planar job. If we want a planar / interleaved
format, the IPC will output the planar format directly; the DMA engine simply
sends the data to RAM and does not do any re-ordering. So in the planar
output case, media-bus formats can't express the format of the data
between the IPC and the DMA engine; that's the problem we meet.

We want to adopt a formal solution before we send our patch to the
community; that's where our headache comes from.
>
> Thanks
> Guennadi
>
>> I understand at beginning, media-bus was designed to describe the data
>> link between camera sensor and camera controller, where sensor is
>> described in subdev. So interleave formats looks good enough at that
>> time. But now as Media-controller is introduced, subdev can describe a
>> much wider range of hardware, which is not limited to camera sensor.
>> So now planar formats are possible to be passed between subdevs.
>>
>> I think the problem we meet can be very common for SoC based ISP
>> solutions, what do you think about it?
>>
>> there are many possible solution for it:
>>
>> 1> change the definition of v4l2_subdev_format::format, use v4l2_format;
>>
>> 2> extend the mediabus format code, add planar format code;
>>
>> 3> use a extra bit to tell the meaning of v4l2_mbus_framefmt::code, is
>> it in mediabus-format or in fourcc
>>
>>  Do you have any suggestions?
>>
>>  Thanks a lot!
>>
>
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/

Thanks!

Jiaquan


Re: [PATCH v3] drm/exynos: Add fallback option to get non physically continous memory for fb

2013-08-06 Thread Sylwester Nawrocki
Vikas,

On 08/06/2013 07:23 AM, Vikas Sajjan wrote:
> While trying to get boot-logo up on exynos5420 SMDK which has eDP panel
> connected with resolution 2560x1600, following error occured even with
> IOMMU enabled:
> [0.88] [drm:lowlevel_buffer_allocate] *ERROR* failed to allocate buffer.
> [0.89] [drm] Initialized exynos 1.0.0 20110530 on minor 0
> 
> To address the cases where physically continous memory MAY NOT be a
> mandatory requirement for fb, the patch adds a feature to get non physically
> continous memory for fb if IOMMU is supported and if CONTIG memory allocation
> fails.
> 
> Signed-off-by: Vikas Sajjan 
> Signed-off-by: Arun Kumar 
> Reviewed-by: Rob Clark 
> ---
> changes since v2:
>   - addressed comments given by Tomasz Figa .
> 
> changes since v1:
>- Modified to add the fallback patch if CONTIG alloc fails as suggested
>by Rob Clark robdcl...@gmail.com and Tomasz Figa 
> .
> 
>- changed the commit message.
> ---
>  drivers/gpu/drm/exynos/exynos_drm_fbdev.c |   14 --
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c 
> b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
> index 8e60bd6..faec77e 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
> @@ -16,6 +16,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "exynos_drm_drv.h"
>  #include "exynos_drm_fb.h"
> @@ -165,8 +166,17 @@ static int exynos_drm_fbdev_create(struct drm_fb_helper 
> *helper,
>  
>   size = mode_cmd.pitches[0] * mode_cmd.height;
>  
> - /* 0 means to allocate physically continuous memory */
> - exynos_gem_obj = exynos_drm_gem_create(dev, 0, size);
> + exynos_gem_obj = exynos_drm_gem_create(dev, EXYNOS_BO_CONTIG, size);
> + /*
> +  * If IOMMU is supported then try to get buffer from non physically
> +  * continous memory area.

s/continous/continuous

or better

s/continous/contiguous

Otherwise the patch looks good. But please note it has nothing to do
with linux-media@vger.kernel.org; please drop this mailing list from
Cc, as otherwise I constantly need to mark those patches as Not Applicable
in patchwork.

Thanks!
Sylwester

> +  */
> + if (IS_ERR(exynos_gem_obj) && is_drm_iommu_supported(dev)) {
> + dev_warn(&pdev->dev, "contiguous FB allocation failed, falling 
> back to non-contiguous\n");
> + exynos_gem_obj = exynos_drm_gem_create(dev, EXYNOS_BO_NONCONTIG,
> + size);
> + }
> +
>   if (IS_ERR(exynos_gem_obj)) {
>   ret = PTR_ERR(exynos_gem_obj);
>   goto err_release_framebuffer;


Re: mceusb Fintek ir transmitter only works when X is not running

2013-08-06 Thread Sean Young
On Mon, Aug 05, 2013 at 11:57:58PM +0100, Rajil Saraswat wrote:
> >
> > Why are you doing this?
> >
> > -snip-
> 
> My initial guess was that X was claiming over the ir device, so I
> wanted to disable ir device as an input device.

X may open the input device, but that does not affect IR transmission.

> > X case where it does not work:
> >
> >> 880118d1f240 2548275209 S Io:2:008:1 -115:1 3 = 9f0802
> >> 880118d1f240 2548275281 E Io:2:008:1 -28 0
> >> 880118d1fb40 2548286204 S Io:2:008:1 -115:1 86 = 84ffb458 8b840a8b 
> >> 0a8b8420 8b0a8b84 0a8b0a8b 840a8b0a 8b84208b 208b840a
> >> 880118d1fb40 2548286310 E Io:2:008:1 -28 0
> >
> > All the urb submissions result in an error -28: ENOSPC. These errors aren't
> > logged by default. I'm not sure about why this would happen.
> >
> > According to Documentation/usb/error-codes.txt:
> >
> > -ENOSPC This request would overcommit the usb bandwidth reserved
> > for periodic transfers (interrupt, isochronous).
> >
> > Could you try putting the device on its own bus (i.e root hub which does
> > not share bus with another device, see lsusb output).
> >
> 
> 
> Unfortunately, this is a laptop with few usb ports. I have tried
> moving devices around but still end-up on the same bus (02). I am
> running the OS off the 1TB usb harddisk ( Western Digital
> Technologies) connected on the same bus.
> 
> # lsusb
> Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
> Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
> Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
> Bus 001 Device 003: ID 05ca:1814 Ricoh Co., Ltd HD Webcam
> Bus 002 Device 003: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse
> Bus 002 Device 009: ID 1934:5168 Feature Integration Technology Inc.
> (Fintek) F71610A or F71612A Consumer Infrared Receiver/Transceiver
> Bus 002 Device 005: ID 1058:0748 Western Digital Technologies, Inc. My
> Passport 1TB USB 3.0
> Bus 002 Device 006: ID 413c:8187 Dell Computer Corp. DW375 Bluetooth Module
> Bus 002 Device 007: ID 0a5c:5800 Broadcom Corp. BCM5880 Secure
> Applications Processor

That is a lot of devices; can you try with fewer devices connected? 

> The disk is quite responsive
> #hdparm -Tt /dev/sdb3
> 
> /dev/sdb3:
>  Timing cached reads:   4896 MB in  2.00 seconds = 2449.42 MB/sec
>  Timing buffered disk reads:  90 MB in  3.04 seconds =  29.58 MB/sec

It's not about whether there is enough bandwidth, it's about whether
issuing more usb urbs would overflow the bandwidth allocated to other
devices (whether in use or not). Make sure you have 
CONFIG_USB_EHCI_TT_NEWSCHED defined in your kernel.
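
For reference, this is roughly where that error comes from on the driver
side. A minimal sketch of the pattern (the function name below is made up;
it is not the actual mceusb code):

#include <linux/errno.h>
#include <linux/usb.h>

/* Sketch: submitting an interrupt (periodic) URB. -ENOSPC from
 * usb_submit_urb() means the host controller could not reserve periodic
 * bandwidth for the endpoint, regardless of how much raw throughput is
 * still free on the bus.
 */
static int ir_tx_submit(struct usb_device *udev, struct urb *tx_urb)
{
	int ret = usb_submit_urb(tx_urb, GFP_ATOMIC);

	if (ret == -ENOSPC)
		dev_err(&udev->dev,
			"no periodic bandwidth left, TX urb rejected\n");
	return ret;
}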

> > If that does not work, could you capture the usbmon output while starting
> > X and then irsend, to see if your X config somehow affects it.
> 
> The usbmon capture (Xstart.txt) is attached as requested. I ran a
> script which rotated on channel numbers and simultaneously started X.
> The channels initially changed but stopped when I logged into the X
> session.

Thanks. Only the IR transmit URB submissions result in the ENOSPC error.


Sean


Re: cron job: media_tree daily build: ERRORS

2013-08-06 Thread Hans Verkuil
On Fri 2 August 2013 20:12:03 Hans Verkuil wrote:
> This message is generated daily by a cron job that builds media_tree for
> the kernels and architectures in the list below.

Hard-core mailing list readers will have noticed that the daily build email
was missing for a few days. That was because I moved my mail server to another
host and missed a single configuration line, which prevented remote SMTP logins.

I fixed it this morning, and today's daily build should be able to post its
results again.

I also fixed the errors for kernels <2.6.38 yesterday, so hopefully everything
should compile again.

Hans

> 
> Results of the daily build of media_tree:
> 
> date: Fri Aug  2 19:00:23 CEST 2013
> git branch:   test
> git hash: dfb9f94e8e5e7f73c8e2bcb7d4fb1de57e7c333d
> gcc version:  i686-linux-gcc (GCC) 4.8.1
> sparse version:   v0.4.5-rc1
> host hardware:x86_64
> host os:  3.9-7.slh.1-amd64
> 
> linux-git-arm-at91: OK
> linux-git-arm-davinci: OK
> linux-git-arm-exynos: OK
> linux-git-arm-mx: OK
> linux-git-arm-omap: OK
> linux-git-arm-omap1: OK
> linux-git-arm-pxa: OK
> linux-git-blackfin: OK
> linux-git-i686: OK
> linux-git-m32r: OK
> linux-git-mips: ERRORS
> linux-git-powerpc64: OK
> linux-git-sh: OK
> linux-git-x86_64: OK
> linux-2.6.31.14-i686: ERRORS
> linux-2.6.32.27-i686: ERRORS
> linux-2.6.33.7-i686: ERRORS
> linux-2.6.34.7-i686: ERRORS
> linux-2.6.35.9-i686: ERRORS
> linux-2.6.36.4-i686: ERRORS
> linux-2.6.37.6-i686: ERRORS
> linux-2.6.38.8-i686: ERRORS
> linux-2.6.39.4-i686: WARNINGS
> linux-3.0.60-i686: OK
> linux-3.10-i686: OK
> linux-3.1.10-i686: OK
> linux-3.2.37-i686: OK
> linux-3.3.8-i686: OK
> linux-3.4.27-i686: WARNINGS
> linux-3.5.7-i686: WARNINGS
> linux-3.6.11-i686: WARNINGS
> linux-3.7.4-i686: WARNINGS
> linux-3.8-i686: WARNINGS
> linux-3.9.2-i686: WARNINGS
> linux-2.6.31.14-x86_64: ERRORS
> linux-2.6.32.27-x86_64: ERRORS
> linux-2.6.33.7-x86_64: ERRORS
> linux-2.6.34.7-x86_64: ERRORS
> linux-2.6.35.9-x86_64: ERRORS
> linux-2.6.36.4-x86_64: ERRORS
> linux-2.6.37.6-x86_64: ERRORS
> linux-2.6.38.8-x86_64: ERRORS
> linux-2.6.39.4-x86_64: WARNINGS
> linux-3.0.60-x86_64: OK
> linux-3.10-x86_64: OK
> linux-3.1.10-x86_64: OK
> linux-3.2.37-x86_64: OK
> linux-3.3.8-x86_64: OK
> linux-3.4.27-x86_64: WARNINGS
> linux-3.5.7-x86_64: WARNINGS
> linux-3.6.11-x86_64: WARNINGS
> linux-3.7.4-x86_64: WARNINGS
> linux-3.8-x86_64: WARNINGS
> linux-3.9.2-x86_64: WARNINGS
> apps: WARNINGS
> spec-git: OK
> sparse version:   v0.4.5-rc1
> sparse: ERRORS
> 
> Detailed results are available here:
> 
> http://www.xs4all.nl/~hverkuil/logs/Friday.log
> 
> Full logs are available here:
> 
> http://www.xs4all.nl/~hverkuil/logs/Friday.tar.bz2
> 
> The Media Infrastructure API from this daily build is here:
> 
> http://www.xs4all.nl/~hverkuil/spec/media.html
> 