Re: [PATCH] Revert 95f408bb Ryzen DMA related RiSC engine stall fixes

2018-12-06 Thread Alex Deucher
On Thu, Dec 6, 2018 at 1:05 PM Mauro Carvalho Chehab  wrote:
>
> On Thu, 06 Dec 2018 18:18:23 +0100
> Markus Dobel  wrote:
>
> > Hi everyone,
> >
> > I will try whether the mentioned hack fixes the issue for me over the
> > weekend (but I assume it will, as it effectively removes the function).
>
> It should, but it keeps a few changes. I just want to be sure that what
> would be left won't cause issues. If this works, the logic that solves
> the Ryzen DMA issue will be contained in a single place, making it
> easier to maintain.
>
> >
> > Just in case this is of interest, I have neither Ryzen nor Intel, but an HP
> > Microserver G7 with an AMD Turion II Neo N54L, so the machine is more on
> > the slow side.
>
> Good to know. It would probably be worth checking whether this Ryzen
> bug occurs with all versions of it or with just a subset.
> I mean: maybe it only affects first-gen Ryzen and doesn't
> affect Ryzen 2 (or vice versa).

The original commit also mentions that some Xeons are affected too.  Seems
like this is potentially an issue on the device side rather than the
platform side.

>
> The PCI quirks logic will likely need to detect the PCI IDs of
> the memory controllers found on the buggy CPUs, in order to enable
> the quirk only for the affected ones.
>
> It could be worth talking with AMD people in order to be sure about
> the differences at the DMA engine side.
>

It's not clear to me what the PCI or platform quirk would do.  The
workaround seems to be in the driver, not on the platform.

Alex


Re: [Linaro-mm-sig] [PATCH 4/8] dma-buf: add peer2peer flag

2018-04-25 Thread Alex Deucher
On Wed, Apr 25, 2018 at 2:41 AM, Christoph Hellwig <h...@infradead.org> wrote:
> On Wed, Apr 25, 2018 at 02:24:36AM -0400, Alex Deucher wrote:
>> > It has a non-coherent transaction mode (which the chipset can opt to
>> > not implement and still flush), to make sure the AGP horror show
>> > doesn't happen again and GPU folks are happy with PCIe. That's at
>> > least my understanding from digging around in amd the last time we had
>> > coherency issues between intel and amd gpus. GPUs have some bits
>> > somewhere (in the pagetables, or in the buffer object description
>> > table created by userspace) to control that stuff.
>>
>> Right.  We have a bit in the GPU page table entries that determines
>> whether we snoop the CPU's cache or not.
>
> I can see how that works with the GPU on the same SOC or SOC set as the
> CPU.  But how is that going to work for a GPU that is a plain old PCIe
> card?  The cache snooping in that case is happening in the PCIe root
> complex.

I'm not a PCI expert, but as far as I know, the device sends either a
snooped or non-snooped transaction on the bus.  I think the
transaction descriptor supports a no-snoop attribute.  Our GPUs have
supported this feature for probably 20 years if not more, going back
to PCI.  Using non-snooped transactions gives lower latency and higher
throughput compared to snooped transactions.

Alex


Re: [Linaro-mm-sig] [PATCH 4/8] dma-buf: add peer2peer flag

2018-04-25 Thread Alex Deucher
On Wed, Apr 25, 2018 at 2:13 AM, Daniel Vetter  wrote:
> On Wed, Apr 25, 2018 at 7:48 AM, Christoph Hellwig  wrote:
>> On Tue, Apr 24, 2018 at 09:32:20PM +0200, Daniel Vetter wrote:
>>> Out of curiosity, how much virtual flushing stuff is there still out
>>> there? At least in drm we've pretty much ignored this, and seem to be
>>> getting away without a huge uproar (at least from driver developers
>>> and users, core folks are less amused about that).
>>
>> As I've just been wading through the code, the following architectures
>> have non-coherent dma that flushes by virtual address for at least some
>> platforms:
>>
>>  - arm [1], arm64, hexagon, nds32, nios2, parisc, sh, xtensa, mips,
>>powerpc
>>
>> These have non-coherent dma ops that flush by physical address:
>>
>>  - arc, arm [1], c6x, m68k, microblaze, openrisc, sparc
>>
>> And these do not have non-coherent dma ops at all:
>>
>>  - alpha, h8300, riscv, unicore32, x86
>>
>> [1] arm seems to do both virtually and physically based ops, further
>> audit needed.
>>
>> Note that using virtual addresses in the cache flushing interface
>> doesn't mean that the cache actually is virtually indexed, but it at
>> least allows for the possibility.
>>
>>> > I think the most important thing about such a buffer object is that
>>> > it can distinguish the underlying mapping types.  While
>>> > dma_alloc_coherent, dma_alloc_attrs with DMA_ATTR_NON_CONSISTENT,
>>> > dma_map_page/dma_map_single/dma_map_sg and dma_map_resource all give
>>> > back a dma_addr_t, they are in no way interchangeable.  And trying to
>>> > stuff them all into a structure like struct scatterlist that has
>>> > no indication what kind of mapping you are dealing with is just
>>> > asking for trouble.
>>>
>>> Well the idea was to have 1 interface to allow all drivers to share
>>> buffers with anything else, no matter how exactly they're allocated.
>>
>> Isn't that interface supposed to be dmabuf?  Currently dma_map leaks
>> a scatterlist through the sg_table in dma_buf_map_attachment /
>> ->map_dma_buf, but looking at a few of the callers it seems like they
>> really do not even want a scatterlist to start with, but check that
>> it contains a physically contiguous range first.  So kicking the
>> scatterlist out of there will probably improve the interface in general.
>
> I think by number most drm drivers require contiguous memory (or an
> iommu that makes it look contiguous). But there's plenty others who
> have another set of pagetables on the gpu itself and can
> scatter-gather. Usually it's the former for display/video blocks, and
> the latter for rendering.
>
>>> dma-buf has all the functions for flushing, so you can have coherent
>>> mappings, non-coherent mappings and pretty much anything else. Or well
>>> could, because in practice people hack up layering violations until it
>>> works for the 2-3 drivers they care about. On top of that there's the
>>> small issue that x86 insists that dma is coherent (and that's true for
>>> most devices, including v4l drivers you might want to share stuff
>>> with), and gpus really, really really do want to make almost
>>> everything incoherent.
>>
>> How do discrete GPUs manage to be incoherent when attached over PCIe?
>
> It has a non-coherent transaction mode (which the chipset can opt to
> not implement and still flush), to make sure the AGP horror show
> doesn't happen again and GPU folks are happy with PCIe. That's at
> least my understanding from digging around in amd the last time we had
> coherency issues between intel and amd gpus. GPUs have some bits
> somewhere (in the pagetables, or in the buffer object description
> table created by userspace) to control that stuff.

Right.  We have a bit in the GPU page table entries that determines
whether we snoop the CPU's cache or not.

Alex

>
> For anything on the SoC it's presented as pci device, but that's
> extremely fake, and we can definitely do non-snooped transactions on
> drm/i915. Again, controlled by a mix of pagetables and
> userspace-provided buffer object description tables.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> ___
> amd-gfx mailing list
> amd-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [Linaro-mm-sig] [PATCH 4/8] dma-buf: add peer2peer flag

2018-04-25 Thread Alex Deucher
On Wed, Apr 25, 2018 at 1:48 AM, Christoph Hellwig  wrote:
> On Tue, Apr 24, 2018 at 09:32:20PM +0200, Daniel Vetter wrote:
>> Out of curiosity, how much virtual flushing stuff is there still out
>> there? At least in drm we've pretty much ignored this, and seem to be
>> getting away without a huge uproar (at least from driver developers
>> and users, core folks are less amused about that).
>
> As I've just been wading through the code, the following architectures
> have non-coherent dma that flushes by virtual address for at least some
> platforms:
>
>  - arm [1], arm64, hexagon, nds32, nios2, parisc, sh, xtensa, mips,
>powerpc
>
> These have non-coherent dma ops that flush by physical address:
>
>  - arc, arm [1], c6x, m68k, microblaze, openrisc, sparc
>
> And these do not have non-coherent dma ops at all:
>
>  - alpha, h8300, riscv, unicore32, x86
>
> [1] arm seems to do both virtually and physically based ops, further
> audit needed.
>
> Note that using virtual addresses in the cache flushing interface
> doesn't mean that the cache actually is virtually indexed, but it at
> least allows for the possibility.
>
>> > I think the most important thing about such a buffer object is that
>> > it can distinguish the underlying mapping types.  While
>> > dma_alloc_coherent, dma_alloc_attrs with DMA_ATTR_NON_CONSISTENT,
>> > dma_map_page/dma_map_single/dma_map_sg and dma_map_resource all give
>> > back a dma_addr_t, they are in no way interchangeable.  And trying to
>> > stuff them all into a structure like struct scatterlist that has
>> > no indication what kind of mapping you are dealing with is just
>> > asking for trouble.
>>
>> Well the idea was to have 1 interface to allow all drivers to share
>> buffers with anything else, no matter how exactly they're allocated.
>
> Isn't that interface supposed to be dmabuf?  Currently dma_map leaks
> a scatterlist through the sg_table in dma_buf_map_attachment /
> ->map_dma_buf, but looking at a few of the callers it seems like they
> really do not even want a scatterlist to start with, but check that
> it contains a physically contiguous range first.  So kicking the
> scatterlist out of there will probably improve the interface in general.
>
>> dma-buf has all the functions for flushing, so you can have coherent
>> mappings, non-coherent mappings and pretty much anything else. Or well
>> could, because in practice people hack up layering violations until it
>> works for the 2-3 drivers they care about. On top of that there's the
>> small issue that x86 insists that dma is coherent (and that's true for
>> most devices, including v4l drivers you might want to share stuff
>> with), and gpus really, really really do want to make almost
>> everything incoherent.
>
> How do discrete GPUs manage to be incoherent when attached over PCIe?

They can do CPU cache snooped (coherent) or non-snooped (incoherent)
DMA.  Also for things like APUs, they show up as a PCIe device, but
the actual GPU core is part of the same die as the CPU, and they have
their own special paths to memory, etc.  The fact that they show up as
PCIe devices is mostly for enumeration purposes.

Alex

> ___
> Linaro-mm-sig mailing list
> linaro-mm-...@lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/linaro-mm-sig


Re: [PATCH 2/8] PCI: Add pci_find_common_upstream_dev()

2018-03-29 Thread Alex Deucher
Sorry, didn't mean to drop the lists here. Re-adding.

On Wed, Mar 28, 2018 at 4:05 PM, Alex Deucher <alexdeuc...@gmail.com> wrote:
> On Wed, Mar 28, 2018 at 3:53 PM, Logan Gunthorpe <log...@deltatee.com> wrote:
>>
>>
>> On 28/03/18 01:44 PM, Christian König wrote:
>>> Well, isn't that exactly what dma_map_resource() is good for? As far as
>>> I can see it makes sure IOMMU is aware of the access route and
>>> translates a CPU address into a PCI Bus address.
>>
>>> I'm using that with the AMD IOMMU driver and at least there it works
>>> perfectly fine.
>>
>> Yes, it would be nice, but no arch has implemented this yet. We are just
>> lucky in the x86 case because that arch is simple and doesn't need to do
>> anything for P2P (partially due to the Bus and CPU addresses being the
>> same). But in the general case, you can't rely on it.
>
> Could we do something for the arches where it works?  I feel like peer
> to peer has dragged out for years because everyone is trying to boil
> the ocean for all arches.  There are a huge number of use cases for
> peer to peer on these "simple" architectures which actually represent
> a good deal of the users that want this.
>
> Alex
>
>>
>>>>> Yeah, but not for ours. See if you want to do real peer 2 peer you need
>>>>> to keep both the operation as well as the direction into account.
>>>> Not sure what you are saying here... I'm pretty sure we are doing "real"
>>>> peer 2 peer...
>>>>
>>>>> For example when you can do writes between A and B that doesn't mean
>>>>> that writes between B and A work. And reads are generally less likely to
>>>>> work than writes. etc...
>>>> If both devices are behind a switch then the PCI spec guarantees that A
>>>> can both read and write B and vice versa.
>>>
>>> Sorry to say that, but I know a whole bunch of PCI devices which
>>> horribly ignore that.
>>
>> Can you elaborate? As far as the device is concerned it shouldn't know
>> whether a request comes from a peer or from the host. If it does do
>> crazy stuff like that it's well out of spec. It's up to the switch (or
>> root complex if good support exists) to route the request to the device
>> and it's the root complex that tends to be what drops the load requests
>> which causes the asymmetries.
>>
>> Logan


Re: [PATCH 1/7] cx231xx: Add second frontend option

2018-01-12 Thread Alex Deucher
On Fri, Jan 12, 2018 at 11:19 AM, Brad Love  wrote:
> Include the ability to add a second dvb_attach-style frontend to the cx231xx
> USB bridge. All current boards are set to use frontend[0]. Changes are
> backwards compatible with current behaviour.
>
> Signed-off-by: Brad Love 
> ---
>  drivers/media/usb/cx231xx/cx231xx-dvb.c | 173 
> ++--
>  1 file changed, 97 insertions(+), 76 deletions(-)
>
> diff --git a/drivers/media/usb/cx231xx/cx231xx-dvb.c 
> b/drivers/media/usb/cx231xx/cx231xx-dvb.c
> index cb4209f..4c6d2f4 100644
> --- a/drivers/media/usb/cx231xx/cx231xx-dvb.c
> +++ b/drivers/media/usb/cx231xx/cx231xx-dvb.c
> @@ -55,7 +55,7 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
>  #define CX231XX_DVB_MAX_PACKETS 64
>
>  struct cx231xx_dvb {
> -   struct dvb_frontend *frontend;
> +   struct dvb_frontend *frontend[2];

Maybe define something like CX231XX_MAX_FRONTEND and use it here
rather than using a hardcoded 2.

Alex


>
> /* feed count management */
> struct mutex lock;
> @@ -386,17 +386,17 @@ static int attach_xc5000(u8 addr, struct cx231xx *dev)
> cfg.i2c_adap = cx231xx_get_i2c_adap(dev, dev->board.tuner_i2c_master);
> cfg.i2c_addr = addr;
>
> -   if (!dev->dvb->frontend) {
> +   if (!dev->dvb->frontend[0]) {
> dev_err(dev->dev, "%s/2: dvb frontend not attached. Can't 
> attach xc5000\n",
> dev->name);
> return -EINVAL;
> }
>
> -   fe = dvb_attach(xc5000_attach, dev->dvb->frontend, );
> +   fe = dvb_attach(xc5000_attach, dev->dvb->frontend[0], );
> if (!fe) {
> dev_err(dev->dev, "%s/2: xc5000 attach failed\n", dev->name);
> -   dvb_frontend_detach(dev->dvb->frontend);
> -   dev->dvb->frontend = NULL;
> +   dvb_frontend_detach(dev->dvb->frontend[0]);
> +   dev->dvb->frontend[0] = NULL;
> return -EINVAL;
> }
>
> @@ -408,9 +408,9 @@ static int attach_xc5000(u8 addr, struct cx231xx *dev)
>
>  int cx231xx_set_analog_freq(struct cx231xx *dev, u32 freq)
>  {
> -   if ((dev->dvb != NULL) && (dev->dvb->frontend != NULL)) {
> +   if ((dev->dvb != NULL) && (dev->dvb->frontend[0] != NULL)) {
>
> -   struct dvb_tuner_ops *dops = 
> >dvb->frontend->ops.tuner_ops;
> +   struct dvb_tuner_ops *dops = 
> >dvb->frontend[0]->ops.tuner_ops;
>
> if (dops->set_analog_params != NULL) {
> struct analog_parameters params;
> @@ -421,7 +421,7 @@ int cx231xx_set_analog_freq(struct cx231xx *dev, u32 freq)
> /*params.audmode = ;   */
>
> /* Set the analog parameters to set the frequency */
> -   dops->set_analog_params(dev->dvb->frontend, );
> +   dops->set_analog_params(dev->dvb->frontend[0], 
> );
> }
>
> }
> @@ -433,15 +433,15 @@ int cx231xx_reset_analog_tuner(struct cx231xx *dev)
>  {
> int status = 0;
>
> -   if ((dev->dvb != NULL) && (dev->dvb->frontend != NULL)) {
> +   if ((dev->dvb != NULL) && (dev->dvb->frontend[0] != NULL)) {
>
> -   struct dvb_tuner_ops *dops = 
> >dvb->frontend->ops.tuner_ops;
> +   struct dvb_tuner_ops *dops = 
> >dvb->frontend[0]->ops.tuner_ops;
>
> if (dops->init != NULL && !dev->xc_fw_load_done) {
>
> dev_dbg(dev->dev,
> "Reloading firmware for XC5000\n");
> -   status = dops->init(dev->dvb->frontend);
> +   status = dops->init(dev->dvb->frontend[0]);
> if (status == 0) {
> dev->xc_fw_load_done = 1;
> dev_dbg(dev->dev,
> @@ -481,17 +481,29 @@ static int register_dvb(struct cx231xx_dvb *dvb,
> dvb_register_media_controller(>adapter, dev->media_dev);
>
> /* Ensure all frontends negotiate bus access */
> -   dvb->frontend->ops.ts_bus_ctrl = cx231xx_dvb_bus_ctrl;
> +   dvb->frontend[0]->ops.ts_bus_ctrl = cx231xx_dvb_bus_ctrl;
> +   if (dvb->frontend[1])
> +   dvb->frontend[1]->ops.ts_bus_ctrl = cx231xx_dvb_bus_ctrl;
>
> dvb->adapter.priv = dev;
>
> /* register frontend */
> -   result = dvb_register_frontend(>adapter, dvb->frontend);
> +   result = dvb_register_frontend(>adapter, dvb->frontend[0]);
> if (result < 0) {
> dev_warn(dev->dev,
>"%s: dvb_register_frontend failed (errno = %d)\n",
>dev->name, result);
> -   goto fail_frontend;
> +   goto fail_frontend0;
> +   }
> +
> +   if (dvb->frontend[1]) {
> +   result = dvb_register_frontend(>adapter, 
> dvb->frontend[1]);
> +   if (result < 0) {
> + 

Re: [trivial PATCH] treewide: Align function definition open/close braces

2017-12-18 Thread Alex Deucher
On Sun, Dec 17, 2017 at 7:28 PM, Joe Perches <j...@perches.com> wrote:
> Some functions definitions have either the initial open brace and/or
> the closing brace outside of column 1.
>
> Move those braces to column 1.
>
> This allows various function analyzers like gnu complexity to work
> properly for these modified functions.
>
> Miscellanea:
>
> o Remove extra trailing ; and blank line from xfs_agf_verify
>
> Signed-off-by: Joe Perches <j...@perches.com>
> ---
> git diff -w shows no difference other than the above 'Miscellanea'
>
> (this is against -next, but it applies against Linus' tree
>  with a couple offsets)
>
>  arch/x86/include/asm/atomic64_32.h   |  2 +-
>  drivers/acpi/custom_method.c |  2 +-
>  drivers/acpi/fan.c   |  2 +-
>  drivers/gpu/drm/amd/display/dc/core/dc.c |  2 +-

For amdgpu:
Acked-by: Alex Deucher <alexander.deuc...@amd.com>

>  drivers/media/i2c/msp3400-kthreads.c |  2 +-
>  drivers/message/fusion/mptsas.c  |  2 +-
>  drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c |  2 +-
>  drivers/net/wireless/ath/ath9k/xmit.c|  2 +-
>  drivers/platform/x86/eeepc-laptop.c  |  2 +-
>  drivers/rtc/rtc-ab-b5ze-s3.c |  2 +-
>  drivers/scsi/dpt_i2o.c   |  2 +-
>  drivers/scsi/sym53c8xx_2/sym_glue.c  |  2 +-
>  fs/locks.c   |  2 +-
>  fs/ocfs2/stack_user.c|  2 +-
>  fs/xfs/libxfs/xfs_alloc.c|  5 ++---
>  fs/xfs/xfs_export.c  |  2 +-
>  kernel/audit.c   |  6 +++---
>  kernel/trace/trace_printk.c  |  4 ++--
>  lib/raid6/sse2.c | 14 +++---
>  sound/soc/fsl/fsl_dma.c  |  2 +-
>  20 files changed, 30 insertions(+), 31 deletions(-)
>
> diff --git a/arch/x86/include/asm/atomic64_32.h 
> b/arch/x86/include/asm/atomic64_32.h
> index 97c46b8169b7..d4d4883080fa 100644
> --- a/arch/x86/include/asm/atomic64_32.h
> +++ b/arch/x86/include/asm/atomic64_32.h
> @@ -122,7 +122,7 @@ static inline long long atomic64_read(const atomic64_t *v)
> long long r;
> alternative_atomic64(read, "=" (r), "c" (v) : "memory");
> return r;
> - }
> +}
>
>  /**
>   * atomic64_add_return - add and return
> diff --git a/drivers/acpi/custom_method.c b/drivers/acpi/custom_method.c
> index c68e72414a67..e967c1173ba3 100644
> --- a/drivers/acpi/custom_method.c
> +++ b/drivers/acpi/custom_method.c
> @@ -94,7 +94,7 @@ static void __exit acpi_custom_method_exit(void)
>  {
> if (cm_dentry)
> debugfs_remove(cm_dentry);
> - }
> +}
>
>  module_init(acpi_custom_method_init);
>  module_exit(acpi_custom_method_exit);
> diff --git a/drivers/acpi/fan.c b/drivers/acpi/fan.c
> index 6cf4988206f2..3563103590c6 100644
> --- a/drivers/acpi/fan.c
> +++ b/drivers/acpi/fan.c
> @@ -219,7 +219,7 @@ fan_set_cur_state(struct thermal_cooling_device *cdev, 
> unsigned long state)
> return fan_set_state_acpi4(device, state);
> else
> return fan_set_state(device, state);
> - }
> +}
>
>  static const struct thermal_cooling_device_ops fan_cooling_ops = {
> .get_max_state = fan_get_max_state,
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc.c
> index d1488d5ee028..1e0d1e7c5324 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
> @@ -461,7 +461,7 @@ static void disable_dangling_plane(struct dc *dc, struct 
> dc_state *context)
>   
> **/
>
>  struct dc *dc_create(const struct dc_init_data *init_params)
> - {
> +{
> struct dc *dc = kzalloc(sizeof(*dc), GFP_KERNEL);
> unsigned int full_pipe_count;
>
> diff --git a/drivers/media/i2c/msp3400-kthreads.c 
> b/drivers/media/i2c/msp3400-kthreads.c
> index 4dd01e9f553b..dc6cb8d475b3 100644
> --- a/drivers/media/i2c/msp3400-kthreads.c
> +++ b/drivers/media/i2c/msp3400-kthreads.c
> @@ -885,7 +885,7 @@ static int msp34xxg_modus(struct i2c_client *client)
>  }
>
>  static void msp34xxg_set_source(struct i2c_client *client, u16 reg, int in)
> - {
> +{
> struct msp_state *state = to_state(i2c_get_clientdata(client));
> int source, matrix;
>
> diff --git a/drivers

Re: [PATCH] dma-buf: Cleanup comments on dma_buf_map_attachment()

2017-11-01 Thread Alex Deucher
On Wed, Nov 1, 2017 at 10:06 AM, Liviu Dudau <liviu.du...@arm.com> wrote:
> Mappings need to be unmapped by calling dma_buf_unmap_attachment() and
> not by calling dma_buf_map_attachment() again. Also fix some spelling
> mistakes.
>
> Signed-off-by: Liviu Dudau <liviu.du...@arm.com>

Reviewed-by: Alex Deucher <alexander.deuc...@amd.com>

> ---
>  drivers/dma-buf/dma-buf.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index bc1cb284111cb..1792385405f0e 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -351,13 +351,13 @@ static inline int is_dma_buf_file(struct file *file)
>   *
>   * 2. Userspace passes this file-descriptors to all drivers it wants this 
> buffer
>   *to share with: First the filedescriptor is converted to a _buf 
> using
> - *dma_buf_get(). The the buffer is attached to the device using
> + *dma_buf_get(). Then the buffer is attached to the device using
>   *dma_buf_attach().
>   *
>   *Up to this stage the exporter is still free to migrate or reallocate 
> the
>   *backing storage.
>   *
> - * 3. Once the buffer is attached to all devices userspace can inniate DMA
> + * 3. Once the buffer is attached to all devices userspace can initiate DMA
>   *access to the shared buffer. In the kernel this is done by calling
>   *dma_buf_map_attachment() and dma_buf_unmap_attachment().
>   *
> @@ -617,7 +617,7 @@ EXPORT_SYMBOL_GPL(dma_buf_detach);
>   * Returns sg_table containing the scatterlist to be returned; returns 
> ERR_PTR
>   * on error. May return -EINTR if it is interrupted by a signal.
>   *
> - * A mapping must be unmapped again using dma_buf_map_attachment(). Note that
> + * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that
>   * the underlying backing storage is pinned for as long as a mapping exists,
>   * therefore users/importers should not hold onto a mapping for undue 
> amounts of
>   * time.
> --
> 2.14.3
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: how to link up audio bus from media controller driver to soc dai bus?

2017-06-12 Thread Alex Deucher
On Mon, Jun 12, 2017 at 3:15 PM, Tim Harvey  wrote:
> Greetings,
>
> I'm working on a media controller driver for the tda1997x HDMI
> receiver which provides an audio bus supporting I2S/SPDIF/OBA/HBR/DST.
> I'm unclear how to bind the audio bus to a SoC's audio bus, for
> example the IMX6 SSI (I2S) bus. I thought perhaps it was via a
> simple-audio-card device-tree binding but that appears to require an
> ALSA codec to bind to?
>
> Can anyone point me to an example of a media controller device driver
> that supports audio and video and how the audio is bound to a I2S bus?

I'm not sure if this is what you are looking for or not, but on some
AMD APUs, we have an i2s bus and codec attached to the GPU rather than
as a standalone device.  The audio DMA engine and interrupts are
controlled via the GPU's mmio aperture, but we expose the audio DMA
engine and i2c interface via alsa.  We use the MFD (Multi-Function
Device) kernel infrastructure to do this.  The GPU driver loads and
probes the audio capabilities and triggers the hotplug of the i2s and
audio dma engine.

For the GPU side see:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
for the audio side:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/amd/acp-pcm-dma.c

Alex


Re: DRM Atomic property for color-space conversion

2017-03-16 Thread Alex Deucher
On Thu, Mar 16, 2017 at 10:07 AM, Ville Syrjälä  wrote:
> On Tue, Jan 31, 2017 at 03:55:41PM +, Brian Starkey wrote:
>> On Tue, Jan 31, 2017 at 05:15:46PM +0200, Ville Syrjälä wrote:
>> >On Tue, Jan 31, 2017 at 12:33:29PM +, Brian Starkey wrote:
>> >> Hi,
>> >>
>> >> On Mon, Jan 30, 2017 at 03:35:13PM +0200, Ville Syrjälä wrote:
>> >> >On Fri, Jan 27, 2017 at 05:23:24PM +, Brian Starkey wrote:
>> >> >> Hi,
>> >> >>
>> >> >> We're looking to enable the per-plane color management hardware in
>> >> >> Mali-DP with atomic properties, which has sparked some conversation
>> >> >> around how to handle YCbCr formats.
>> >> >>
>> >> >> As it stands today, it's assumed that a driver will implicitly "do the
>> >> >> right thing" to display a YCbCr buffer.
>> >> >>
>> >> >> YCbCr data often uses different gamma curves and signal ranges (e.g.
>> >> >> BT.601, BT.709, BT.2020, studio range, full-range), so it's desirable
>> >> >> to be able to explicitly control the YCbCr to RGB conversion process
>> >> >> from userspace.
>> >> >>
>> >> >> We're proposing adding a "CSC" (color-space conversion) property to
>> >> >> control this - primarily per-plane for framebuffer->pipeline CSC, but
>> >> >> perhaps one per CRTC too for devices which have an RGB pipeline and
>> >> >> want to output in YUV to the display:
>> >> >>
>> >> >> Name: "CSC"
>> >> >> Type: ENUM | ATOMIC;
>> >> >> Enum values (representative):
>> >> >> "default":
>> >> >> Same behaviour as now. "Some kind" of YCbCr->RGB conversion
>> >> >> for YCbCr buffers, bypass for RGB buffers
>> >> >> "disable":
>> >> >> Explicitly disable all colorspace conversion (i.e. use an
>> >> >> identity matrix).
>> >> >> "YCbCr to RGB: BT.709":
>> >> >> Only valid for YCbCr formats. CSC in accordance with BT.709
>> >> >> using [16..235] for (8-bit) luma values, and [16..240] for
>> >> >> 8-bit chroma values. For 10-bit formats, the range limits are
>> >> >> multiplied by 4.
>> >> >> "YCbCr to RGB: BT.709 full-swing":
>> >> >> Only valid for YCbCr formats. CSC in accordance with BT.709,
>> >> >> but using the full range of each channel.
>> >> >> "YCbCr to RGB: Use CTM":*
>> >> >> Only valid for YCbCr formats. Use the matrix applied via the
>> >> >> plane's CTM property
>> >> >> "RGB to RGB: Use CTM":*
>> >> >> Only valid for RGB formats. Use the matrix applied via the
>> >> >> plane's CTM property
>> >> >> "Use CTM":*
>> >> >> Valid for any format. Use the matrix applied via the plane's
>> >> >> CTM property
>> >> >> ... any other values for BT.601, BT.2020, RGB to YCbCr etc. etc. as
>> >> >> they are required.
>> >> >
>> >> >Having some RGB2RGB and YCBCR2RGB things in the same property seems
>> >> >weird. I would just go with something very simple like:
>> >> >
>> >> >YCBCR_TO_RGB_CSC:
>> >> >* BT.601
>> >> >* BT.709
>> >> >* custom matrix
>> >> >
>> >>
>> >> I think we've agreed in #dri-devel that this CSC property
>> >> can't/shouldn't be mapped on-to the existing (hardware implementing
>> >> the) CTM property - even in the case of per-plane color management -
>> >> because CSC needs to be done before DEGAMMA.
>> >>
>> >> So, I'm in favour of going with what you suggested in the first place:
>> >>
>> >> A new YCBCR_TO_RGB_CSC property, enum type, with a list of fixed
>> >> conversions. I'd drop the custom matrix for now, as we'd need to add
>> >> another property to attach the custom matrix blob too.
>> >>
>> >> I still think we need a way to specify whether the source data range
>> >> is broadcast/full-range, so perhaps the enum list should be expanded
>> >> to all combinations of BT.601/BT.709 + broadcast/full-range.
>> >
>> >Sounds reasonable. Not that much full range YCbCr stuff out there
>> >perhaps. Well, apart from jpegs I suppose. But no harm in being able
>> >to deal with it.
>> >
>> >>
>> >> (I'm not sure what the canonical naming for broadcast/full-range is,
>> >> we call them narrow and wide)
>> >
>> >We tend to call them full vs. limited range. That's how our
>> >"Broadcast RGB" property is defined as well.
>> >
>>
>> OK, using the same ones sounds sensible.
>>
>> >>
>> >> >And trying to use the same thing for the crtc stuff is probably not
>> >> >going to end well. Like Daniel said we already have the
>> >> >'Broadcast RGB' property muddying the waters there, and that stuff
>> >> >also ties in with what colorspace we signal to the sink via
>> >> >infoframes/whatever the DP thing was called. So my gut feeling is
>> >> >that trying to use the same property everywhere will just end up
>> >> >messy.
>> >>
>> >> Yeah, agreed. If/when someone wants to add CSC on the output of a CRTC
>> >> (after GAMMA), we can add a new property.
>> >>
>> >> That makes me wonder about calling this one SOURCE_YCBCR_TO_RGB_CSC to
>> >> be explicit that it describes the source data. Then we can later add
>> >> 

Re: Enabling peer to peer device transactions for PCIe devices

2016-11-25 Thread Alex Deucher
On Fri, Nov 25, 2016 at 2:34 PM, Jason Gunthorpe  wrote:
> On Fri, Nov 25, 2016 at 12:16:30PM -0500, Serguei Sagalovitch wrote:
>
>> b) Allocation may not  have CPU address  at all - only GPU one.
>
> But you don't expect RDMA to work in the case, right?
>
> GPU people need to stop doing this windowed memory stuff :)
>

Blame 32 bit systems and GPUs with tons of vram :)

I think resizable bars are finally coming in a useful way so this
should go away soon.

Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Linaro-mm-sig] [RFC 0/1] drm/pl111: Initial drm/kms driver for pl111

2013-08-07 Thread Alex Deucher
On Wed, Aug 7, 2013 at 1:33 PM, Tom Cooksey tom.cook...@arm.com wrote:

   Didn't you say that programmatically describing device placement
   constraints was an unbounded problem? I guess we would have to
   accept that it's not possible to describe all possible constraints
   and instead find a way to describe the common ones?
 
  well, the point I'm trying to make, is by dividing your constraints
  into two groups, one that impacts and is handled by userspace, and
  one that is in the kernel (ie. where the pages go), you cut down
  the number of permutations that the kernel has to care about
   considerably. And kernel already cares about, for example, what
  range of addresses that a device can dma to/from.  I think really
  the only thing missing is the max # of sglist entries (contiguous
  or not)
 
  I think it's more than physically contiguous or not.
 
  For example, it can be more efficient to use large page sizes on
  devices with IOMMUs to reduce TLB traffic. I think the size and even
  the availability of large pages varies between different IOMMUs.

 sure.. but I suppose if we can spiff out dma_params to express "I need
 contiguous", perhaps we can add some way to express "I prefer
 as-contiguous-as-possible".. either way, this is about where the pages
 are placed, and not about the layout of pixels within the page, so
 should be in kernel.  It's something that is missing, but I believe
 that it belongs in dma_params and hidden behind dma_alloc_*() for
 simple drivers.

 Thinking about it, isn't this more a property of the IOMMU? I mean,
 are there any cases where an IOMMU had a large page mode but you
 wouldn't want to use it? So when allocating the memory, you'd have to
 take into account not just the constraints of the devices themselves,
 but also of any IOMMUs any of the devices sit behind?


  There's also the issue of buffer stride alignment. As I say, if the
  buffer is to be written by a tile-based GPU like Mali, it's more
  efficient if the buffer's stride is aligned to the max AXI bus burst
  length. Though I guess a buffer stride only makes sense as a concept
  when interpreting the data as a linear-layout 2D image, so perhaps
  belongs in user-space along with format negotiation?
 

 Yeah.. this isn't about where the pages go, but about the arrangement
 within a page.

 And, well, except for hw that supports the same tiling (or
 compressed-fb) in display+gpu, you probably aren't sharing tiled
 buffers.

 You'd only want to share a buffer between devices if those devices can
 understand the same pixel format. That pixel format can't be device-
 specific or opaque, it has to be explicit. I think drm_fourcc.h is
 what defines all the possible pixel formats. This is the enum I used
 in EGL_EXT_image_dma_buf_import at least. So if we get to the point
 where multiple devices can understand a tiled or compressed format, I
 assume we could just add that format to drm_fourcc.h and possibly
 v4l2's v4l2_mbus_pixelcode enum in v4l2-mediabus.h.

 For user-space to negotiate a common pixel format and now stride
 alignment, I guess it will obviously need a way to query what pixel
 formats a device supports and what its stride alignment requirements
 are.

 I don't know v4l2 very well, but it certainly seems the pixel format
 can be queried using V4L2_SUBDEV_FORMAT_TRY when attempting to set
 a particular format. I couldn't however find a way to retrieve a list
 of supported formats - it seems the mechanism is to try out each
 format in turn to determine if it is supported. Is that right?

 There doesn't however seem a way to query what stride constraints a
 V4l2 device might have. Does HW abstracted by v4l2 typically have
 such constraints? If so, how can we query them such that a buffer
 allocated by a DRM driver can be imported into v4l2 and used with
 that HW?

 Turning to DRM/KMS, it seems the supported formats of a plane can be
 queried using drm_mode_get_plane. However, there doesn't seem to be a
 way to query the supported formats of a crtc? If display HW only
 supports scanning out from a single buffer (like pl111 does), I think
 it won't have any planes and a fb can only be set on the crtc. In
 which case, how should user-space query which pixel formats that crtc
 supports?

 Assuming user-space can query the supported formats and find a common
 one, it will need to allocate a buffer. Looks like
 drm_mode_create_dumb can do that, but it only takes a bpp parameter,
 there's no format parameter. I assume then that user-space defines
 the format and tells the DRM driver which format the buffer is in
 when creating the fb with drm_mode_fb_cmd2, which does take a format
 parameter? Is that right?

 As with v4l2, DRM doesn't appear to have a way to query the stride
 constraints? Assuming there is a way to query the stride constraints,
 there also isn't a way to specify them when creating a buffer with
 DRM, though perhaps the existing pitch parameter of
 drm_mode_create_dumb could be used to 

Re: HD-Audio Generic HDMI/DP on wheezy

2013-05-09 Thread Alex Deucher
On Thu, May 9, 2013 at 11:04 AM, pierre pduran...@libertysurf.fr wrote:
 Hi,

 Some difficulty on wheezy, on my computer:
 product: Inspiron 620
 vendor: Dell Inc.
 version: 00
 serial: D9V135J
 width: 64 bits

 My sound card is now defined as "Caicos HDMI Audio [Radeon HD 6400 Series]
 Digital Stereo (HDMI)"; on Squeeze, it was "HD-Audio Generic Digital Stereo
 (HDMI)".
 It works, but I'm not able to get analogue output, only HDMI / DisplayPort,
 which I can't use.

You need to enable the audio parameter in the radeon driver.  Boot with:
radeon.audio=1
on the kernel command line in grub.
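A hedged example session (paths follow Debian/wheezy conventions; the sysfs path assumes the radeon module exposes the parameter as readable):

```shell
# Current value of the module parameter (0 = audio off, 1 = on).
cat /sys/module/radeon/parameters/audio

# Persist it by appending radeon.audio=1 to the kernel command line,
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.audio=1"
# then regenerate the grub config and reboot:
sudo update-grub
```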

Alex


Re: HD-Audio Generic HDMI/DP on wheezy

2013-05-09 Thread Alex Deucher
On Thu, May 9, 2013 at 1:41 PM, pierre pduran...@libertysurf.fr wrote:
 Thanks for your quick answer, but after modifying /etc/default/grub with
 GRUB_CMDLINE_LINUX="radeon.audio=1" and update-grub, then reboot, nothing got
 better:

Perhaps I misunderstood what you were asking about.  radeon.audio=1
just enables audio routing via HDMI.  What exactly are you trying to
do?  Discrete graphics cards only support audio via HDMI or DP.

Alex



 May  9 19:32:37 retraite kernel: [6.660223] snd_hda_intel :01:00.1:
 irq 45 for MSI/MSI-X
 May  9 19:32:37 retraite kernel: [6.660242] snd_hda_intel :01:00.1:
 setting latency timer to 64
 May  9 19:32:37 retraite kernel: [6.684425] HDMI status: Codec=0 Pin=3
 Presence_Detect=0 ELD_Valid=0
 May  9 19:32:37 retraite kernel: [6.684523] input: HD-Audio Generic
 HDMI/DP,pcm=3 as
 /devices/pci:00/:00:01.0/:01:00.1/sound/card1/input6
 May  9 19:32:37 retraite kernel: [6.996110] WARNING: You are using an
 experimental version of the media stack.
 May  9 19:32:37 retraite kernel: [6.996111] As the driver is
 backported to an older kernel, it doesn't offer
 May  9 19:32:37 retraite kernel: [6.996112] enough quality for its
 usage in production.
 May  9 19:32:37 retraite kernel: [6.996113] Use it with care.
 May  9 19:32:37 retraite kernel: [6.996114] Latest git patches (needed
 if you report a bug to linux-media@vger.kernel.org):
 May  9 19:32:37 retraite kernel: [6.996115]
 02615ed5e1b2283db2495af3cf8f4ee172c77d80 [media] cx88: make core less
 verbose
 May  9 19:32:37 retraite kernel: [6.996116]
 a3b60209e7dd4db05249a9fb27940bb6705cd186 [media] em28xx: fix oops at
 em28xx_dvb_bus_ctrl()
 May  9 19:32:37 retraite kernel: [6.996117]
 4494f0fdd825958d596d05a4bd577df94b149038 [media] s5c73m3: fix indentation of
 the help section in Kconfig
 May  9 19:32:37 retraite kernel: [7.055361] WARNING: You are using an
 experimental version of the media stack.
 May  9 19:32:37 retraite kernel: [7.055363] As the driver is
 backported to an older kernel, it doesn't offer
 May  9 19:32:37 retraite kernel: [7.055364] enough quality for its
 usage in production.
 May  9 19:32:37 retraite kernel: [7.055365] Use it with care.
 May  9 19:32:37 retraite kernel: [7.055366] Latest git patches (needed
 if you report a bug to linux-media@vger.kernel.org):
 May  9 19:32:37 retraite kernel: [7.055367]
 02615ed5e1b2283db2495af3cf8f4ee172c77d80 [media] cx88: make core less
 verbose
 May  9 19:32:37 retraite kernel: [7.055369]
 a3b60209e7dd4db05249a9fb27940bb6705cd186 [media] em28xx: fix oops at
 em28xx_dvb_bus_ctrl()
 May  9 19:32:37 retraite kernel: [7.055370]
 4494f0fdd825958d596d05a4bd577df94b149038 [media] s5c73m3: fix indentation of
 the help section in Kconfig

 Pierre

 Le 09/05/2013 18:12, Alex Deucher a écrit :

 On Thu, May 9, 2013 at 11:04 AM, pierre pduran...@libertysurf.fr wrote:

 Hi,

 Some difficulty on wheezy, on my computer:
 product: Inspiron 620
 vendor: Dell Inc.
 version: 00
 serial: D9V135J
 width: 64 bits

 My sound card is now defined as "Caicos HDMI Audio [Radeon HD 6400 Series]
 Digital Stereo (HDMI)"; on Squeeze, it was "HD-Audio Generic Digital Stereo
 (HDMI)".
 It works, but I'm not able to get analogue output, only HDMI / DisplayPort,
 which I can't use.

 You need to enable the audio parameter in the radeon driver.  Boot with:
 radeon.audio=1
 on the kernel command line in grub.

 Alex




Re: CDF meeting @FOSDEM report

2013-02-06 Thread Alex Deucher
On Wed, Feb 6, 2013 at 6:11 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
 Hi,

 On 2013-02-06 00:27, Laurent Pinchart wrote:
 Hello,

 We've hosted a CDF meeting at the FOSDEM on Sunday morning. Here's a summary
 of the discussions.

 Thanks for the summary. I've been on a longish leave, and just got back,
 so I haven't read the recent CDF discussions on lists yet. I thought
 I'll start by replying to this summary first =).

 0. Abbreviations
 

 DBI - Display Bus Interface, a parallel video control and data bus that
 transmits data using parallel data, read/write, chip select and address
 signals, similarly to 8051-style microcontroller parallel busses. This is a
 mixed video control and data bus.

 DPI - Display Pixel Interface, a parallel video data bus that transmits data
 using parallel data, h/v sync and clock signals. This is a video data bus
 only.

 DSI - Display Serial Interface, a serial video control and data bus that
 transmits data using one or more differential serial lines. This is a mixed
 video control and data bus.

 In case you'll re-use these abbrevs in later posts, I think it would be
 good to mention that DPI is a one-way bus, whereas DBI and DSI are
 two-way (perhaps that's implicit with control bus, though).

 1. Goals
 

 The meeting started with a brief discussion about the CDF goals.

 Tomi Valkeinin and Tomasz Figa have sent RFC patches to show their views of
 what CDF could/should be. Many others have provided very valuable feedback.
 Given the early development stage propositions were sometimes contradictory,
 and focused on different areas of interest. We have thus started the meeting
 with a discussion about what CDF should try to achieve, and what it 
 shouldn't.

 CDF has two main purposes. The original goal was to support display panels in
 a platform- and subsystem-independent way. While mostly useful for embedded
 systems, the emergence of platforms such as Intel Medfield and ARM-based PCs
 that blends the embedded and PC worlds makes panel support useful for the PC
 world as well.

 The second purpose is to provide a cross-subsystem interface to support video
 encoders. The idea originally came from a generalisation of the original RFC
 that supported panels only. While encoder support is considered as lower
 priority than display panel support by developers focussed on display
 controller driver (Intel, Renesas, ST Ericsson, TI), companies that produce
 video encoders (Analog Devices, and likely others) don't share that point of
 view and would like to provide a single encoder driver that can be used in
 both KMS and V4L2 drivers.

 What is an encoder? Something that takes a video signal in, and lets the
 CPU store the received data to memory? Isn't that a decoder?

 Or do you mean something that takes a video signal in, and outputs a
 video signal in another format? (transcoder?)

In KMS parlance, we have two objects: a crtc and an encoder.  A crtc
reads data from memory and produces a data stream with display timing.
The encoder then takes that data stream and timing from the crtc and
converts it to some sort of physical signal (LVDS, TMDS, DP, etc.).  It's
not always a perfect match to the hardware.  For example, a lot of GPUs
have a DVO encoder which feeds a secondary encoder like a sil164
DVO-to-TMDS encoder.

Alex


Re: Asus PVR-416

2012-07-30 Thread Alex Deucher
On Mon, Jul 30, 2012 at 6:57 AM, Jerry Haggard xen2x...@gmail.com wrote:
 I've been trying to get an ASUS PVR-416 card to work with MythTV .25 on
 Scientific Linux 6.  I have a bttv card working, my setup works in general,
 etc, and the driver attempts to load.  But when I check dmesg, I keep
 getting firmware load errors and checksum errors. I've tried every firmware
 I could find.  I've used the one from Atrpms, I've downloaded the correctly
 named firmware from ivtv, but no luck.  Anyone know anything about this
 card?  I've tried cutting the drivers myself like it says in the directions
 at mythtv.org. This is supposed to be a supported card, does anyone have any
 experience with it?

I've got one and it worked years ago, but I haven't used it since.
IIRC, the initial blackbird support was added using this card, but I'm
not sure what the current state is.

Alex


Re: ATI theatre 750 HD tuner USB stick

2012-07-04 Thread Alex Deucher
On Wed, Jul 4, 2012 at 9:27 AM, Fisher Grubb fisher.gr...@gmail.com wrote:
 Hi all,

 I was in contact with AMD today regarding this tuner having no
 support on Linux and I was given a link for a feedback form and told
 to get specific needs from www.linuxtv.org to help the cause and if
 there were enough people, then the AMD developers may help.

 Of course I wouldn't be surprised if people will have to reverse
 engineer it from the windows drivers but I thought I would mention it.
  I could not find any info on this 750 HD on www.linuxtv.org regarding
 where it stands.  What help is needed for it?

Unfortunately, I don't think there is much AMD can do.  We sold our
multimedia DTV division to Broadcom several years ago.  IANAL, so I
don't know exactly what rights we retained for the IP.  You may need
to talk to Broadcom now.

Alex


 Fisher

 On Wed, Jul 4, 2012 at 11:21 PM, Fisher Grubb fisher.gr...@gmail.com wrote:
 Hi all,

 My name is Fisher Grubb; I have an ATI (now AMD) Theatre 750 HD based
 TV tuner USB stick.  I don't think this ATI chipset is supported by
 LinuxTV, and I have had no joy searching Google, as others also hit a
 dead end.

 I have put USB bus dump for that device and the chip part numbers at the 
 bottom.

 Please may I have a quick reply if someone looks at this, thanks.

 Model number is U5071, manufacturer site: 
 http://www.geniatech.com/pa/u5071.asp

 I think this is a very impressive piece of hardware as it can do:
 Analogue TV, DVB, AV capture and S video capture.  There is an IR
 receiver on board and came with IR remote control.

 I'm happy to provide any info on it that you may want, such as a picture
 of the board & chips.  I'm almost finished with my electronics degree, so I
 can do hardware probing if someone gives me something specific to
 look for.  I'm also happy to run software or commands to dump stuff, or
 even help with dumping things from the Windows drivers if I'm given
 directions, etc.

 Thank you,

 Fisher

 lsusb: Bus 002 Device 021: ID 0438:ac14 Advanced Micro Devices, Inc.

 sudo lsusb -vd 0438:ac14

 Bus 002 Device 021: ID 0438:ac14 Advanced Micro Devices, Inc.
 Device Descriptor:
   bLength18
   bDescriptorType 1
   bcdUSB   2.00
   bDeviceClass0 (Defined at Interface level)
   bDeviceSubClass 0
   bDeviceProtocol 0
   bMaxPacketSize064
   idVendor   0x0438 Advanced Micro Devices, Inc.
   idProduct  0xac14
   bcdDevice1.00
   iManufacturer   1 AMD
   iProduct2 Cali TV Card
   iSerial 3 1234-5678
   bNumConfigurations  1
   Configuration Descriptor:
 bLength 9
 bDescriptorType 2
 wTotalLength   97
 bNumInterfaces  1
 bConfigurationValue 1
 iConfiguration  0
 bmAttributes 0x80
   (Bus Powered)
 MaxPower  500mA
 Interface Descriptor:
   bLength 9
   bDescriptorType 4
   bInterfaceNumber0
   bAlternateSetting   0
   bNumEndpoints   5
   bInterfaceClass   255 Vendor Specific Class
   bInterfaceSubClass255 Vendor Specific Subclass
   bInterfaceProtocol255 Vendor Specific Protocol
   iInterface  0
   Endpoint Descriptor:
 bLength 7
 bDescriptorType 5
 bEndpointAddress 0x81  EP 1 IN
 bmAttributes2
   Transfer TypeBulk
   Synch Type   None
   Usage Type   Data
 wMaxPacketSize 0x0200  1x 512 bytes
 bInterval   0
   Endpoint Descriptor:
 bLength 7
 bDescriptorType 5
 bEndpointAddress 0x82  EP 2 IN
 bmAttributes2
   Transfer TypeBulk
   Synch Type   None
   Usage Type   Data
 wMaxPacketSize 0x0200  1x 512 bytes
 bInterval   0
   Endpoint Descriptor:
 bLength 7
 bDescriptorType 5
 bEndpointAddress 0x03  EP 3 OUT
 bmAttributes2
   Transfer TypeBulk
   Synch Type   None
   Usage Type   Data
 wMaxPacketSize 0x0200  1x 512 bytes
 bInterval   3
   Endpoint Descriptor:
 bLength 7
 bDescriptorType 5
 bEndpointAddress 0x84  EP 4 IN
 bmAttributes2
   Transfer TypeBulk
   Synch Type   None
   Usage Type   Data
 wMaxPacketSize 0x0200  1x 512 bytes
 bInterval   0
   Endpoint Descriptor:
 bLength 7
 bDescriptorType 5
 

Re: Unknown eMPIA tuner

2012-04-10 Thread Alex Deucher
On Tue, Apr 10, 2012 at 3:49 PM, Stefan Monnier
monn...@iro.umontreal.ca wrote:
 Ok, so it's an em2874/drx-j/tda18271 design, which in terms of the
 components is very similar to the PCTV 80e (which I believe Mauro got
 into staging recently).  I would probably recommend looking at that
 code as a starting point.

 Any pointers to actual file names?

 That said, you'll need to figure out the correct IF frequency, the
 drx-j configuration block, the GPIO layout, and the correct tuner
 config.  If those terms don't mean anything to you, then you are best
 to wait until some developer stumbles across the device and has the
 time to add the needed support.

 The words aren't meaningless to me, but not too far either.  Maybe if
 someone could give me pointers as to how I could try to figure out the
 corresponding info, I could give it a try.

Probably useful to start with the config from a similar card (lots of
vendors use the same reference design) and see how much of it works,
then tweak from there until you get it fully working.

Alex


Re: Display hotplug

2011-10-25 Thread Alex Deucher
On Tue, Oct 25, 2011 at 5:09 AM, James Courtier-Dutton
james.dut...@gmail.com wrote:
 Hi,

 Does anyone know when X will support display hotplug?
 I have a PC connected via HDMI to a TV.
 Unless I turn the TV on first, I have to reboot the PC before it
 displays anything on the HDMI TV.

It's been supported on all modern KMS drivers (radeon, intel, nouveau),
assuming you have a recent enough userspace to deal with the hotplug
uevents.

Alex


Re: [RFC PATCH] HDMI:Support for EDID parsing in kernel.

2011-03-24 Thread Alex Deucher
On Thu, Mar 24, 2011 at 3:13 PM, Guennadi Liakhovetski
g.liakhovet...@gmx.de wrote:
 On Thu, 24 Mar 2011, K, Mythri P wrote:

 Hi Jesse,

 On Wed, Mar 23, 2011 at 8:48 PM, Jesse Barnes jbar...@virtuousgeek.org 
 wrote:
  On Wed, 23 Mar 2011 18:58:27 +0530
  K, Mythri P mythr...@ti.com wrote:
 
  Hi Dave,
 
  On Wed, Mar 23, 2011 at 6:16 AM, Dave Airlie airl...@gmail.com wrote:
   On Wed, Mar 23, 2011 at 3:32 AM, Mythri P K mythr...@ti.com wrote:
   Adding support for common EDID parsing in kernel.
  
   EDID - Extended display identification data is a data structure 
   provided by
   a digital display to describe its capabilities to a video source, This 
   a
   standard supported by CEA and VESA.
  
   There are several custom implementations for parsing EDID in kernel, 
   some
   of them are present in fbmon.c, drm_edid.c, sh_mobile_hdmi.c, Ideally
   parsing of EDID should be done in a library, which is agnostic of the
   framework (V4l2, DRM, FB)  which is using the functionality, just 
   based on
   the raw EDID pointer with size/segment information.
  
   With other RFC's such as the one below, which tries to standardize 
   HDMI API's
   It would be better to have a common EDID code in one place.It also 
   helps to
   provide better interoperability with variety of TV/Monitor may be even 
   by
   listing out quirks which might get missed with several custom 
   implementation
   of EDID.
   http://permalink.gmane.org/gmane.linux.drivers.video-input-infrastructure/30401
  
   This patch tries to add functions to parse some portion EDID (detailed 
   timing,
   monitor limits, AV delay information, deep color mode support, Audio 
   and VSDB)
   If we can align on this library approach i can enhance this library to 
   parse
   other blocks and probably we could also add quirks from other 
   implementation
   as well.
  
  
   If you want to take this approach, you need to start from the DRM EDID 
   parser,
   its the most well tested and I can guarantee its been plugged into more 
   monitors
   than any of the others. There is just no way we would move the DRM 
   parser to a
   library one that isn't derived from it + enhancements, as we'd throw 
   away the
   years of testing and the regression count would be way too high.
  
  I had a look at the DRM EDID code, but for quirks it looks pretty much 
  the same.
  yes i could take quirks and other DRM tested code and enhance, but
  still the code has to do away with struct drm_display_mode
  which is very much custom to DRM.
 
  If that's the only issue you have, we could easily rename that
  structure or add conversion funcs to a smaller structure if that's what
  you need.
 
  Dave's point is that we can't ditch the existing code without
  introducing a lot of risk; it would be better to start a library-ized
  EDID codebase from the most complete one we have already, i.e. the DRM
  EDID code.

 Does the DRM EDID-parser also process blocks beyond the first one and
 also parses SVD entries similar to what I've recently added to fbdev? Yes,
 we definitely need a common EDID parses, and maybe we'll have to collect
 various pieces from different implementations.

At the moment there is only limited support for looking up things like
the hdmi block and checking for audio.

Alex


 Thanks
 Guennadi

 
 This sounds good. If we can remove the DRM-dependent portion to have a
 library-ized EDID code, that would be perfect. The main intention of
 having a library is: instead of having several different implementations
 in the kernel, all doing the same EDID parsing, if we could have one
 single implementation, it would help in better testing and interoperability.

  Do you really think the differences between your code and the existing
  DRM code are irreconcilable?
 
 On the contrary, if there is a library-ized EDID parser using the
 drm_edid code, and there are any delta fields (parsing the video block in
 the CEA extension for Short Video Descriptors, the vendor block for AV
 delay / deep color information, etc.) that are parsed in the RFC I posted,
 I would be happy to add them.

 Thanks and regards,
 Mythri.
  --
  Jesse Barnes, Intel Open Source Technology Center
 


 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/



Re: [RFC PATCH] HDMI:Support for EDID parsing in kernel.

2011-03-22 Thread Alex Deucher
Adding dri-devel.

On Tue, Mar 22, 2011 at 1:32 PM, Mythri P K mythr...@ti.com wrote:
 Adding support for common EDID parsing in kernel.

 EDID - Extended display identification data is a data structure provided by
 a digital display to describe its capabilities to a video source, This a
 standard supported by CEA and VESA.

 There are several custom implementations for parsing EDID in kernel, some
 of them are present in fbmon.c, drm_edid.c, sh_mobile_hdmi.c, Ideally
 parsing of EDID should be done in a library, which is agnostic of the
 framework (V4l2, DRM, FB)  which is using the functionality, just based on
 the raw EDID pointer with size/segment information.

 With other RFC's such as the one below, which tries to standardize HDMI API's
 It would be better to have a common EDID code in one place.It also helps to
 provide better interoperability with variety of TV/Monitor may be even by
 listing out quirks which might get missed with several custom implementation
 of EDID.
 http://permalink.gmane.org/gmane.linux.drivers.video-input-infrastructure/30401

 This patch tries to add functions to parse some portion EDID (detailed timing,
 monitor limits, AV delay information, deep color mode support, Audio and VSDB)
 If we can align on this library approach i can enhance this library to parse
 other blocks and probably we could also add quirks from other implementation
 as well.

 Signed-off-by: Mythri P K mythr...@ti.com
 ---
  arch/arm/include/asm/edid.h |  243 ++
  drivers/video/edid.c        |  340 
 +++
  2 files changed, 583 insertions(+), 0 deletions(-)
  create mode 100644 arch/arm/include/asm/edid.h
  create mode 100644 drivers/video/edid.c

 diff --git a/arch/arm/include/asm/edid.h b/arch/arm/include/asm/edid.h
 new file mode 100644
 index 000..843346a
 --- /dev/null
 +++ b/arch/arm/include/asm/edid.h
 @@ -0,0 +1,243 @@
 +/*
 + * edid.h
 + *
 + * Copyright (C) 2011 Texas Instruments
 + * Author: Mythri P K mythr...@ti.com
 + *
 + * This program is free software; you can redistribute it and/or modify it
 + * under the terms of the GNU General Public License version 2 as published 
 by
 + * the Free Software Foundation.
 + *
 + * This program is distributed in the hope that it will be useful, but 
 WITHOUT
 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 + * more details.
 + *
 + * You should have received a copy of the GNU General Public License along 
 with
 + * this program.  If not, see http://www.gnu.org/licenses/.
 + * History:
 + */
 +
 +#ifndef _EDID_H_
 +#define _EDID_H_
 +
 +/* HDMI EDID Length */
 +#define HDMI_EDID_MAX_LENGTH                   512
 +
 +/* HDMI EDID Extension Data Block Tags  */
 +#define HDMI_EDID_EX_DATABLOCK_TAG_MASK                0xE0
 +#define HDMI_EDID_EX_DATABLOCK_LEN_MASK                0x1F
 +
 +#define EDID_TIMING_DESCRIPTOR_SIZE            0x12
 +#define EDID_DESCRIPTOR_BLOCK0_ADDRESS         0x36
 +#define EDID_DESCRIPTOR_BLOCK1_ADDRESS         0x80
 +#define EDID_SIZE_BLOCK0_TIMING_DESCRIPTOR     4
 +#define EDID_SIZE_BLOCK1_TIMING_DESCRIPTOR     4
 +
 +/* EDID Detailed Timing        Info 0 begin offset */
 +#define HDMI_EDID_DETAILED_TIMING_OFFSET       0x36
 +
 +#define HDMI_EDID_PIX_CLK_OFFSET               0
 +#define HDMI_EDID_H_ACTIVE_OFFSET              2
 +#define HDMI_EDID_H_BLANKING_OFFSET            3
 +#define HDMI_EDID_V_ACTIVE_OFFSET              5
 +#define HDMI_EDID_V_BLANKING_OFFSET            6
 +#define HDMI_EDID_H_SYNC_OFFSET                        8
 +#define HDMI_EDID_H_SYNC_PW_OFFSET             9
 +#define HDMI_EDID_V_SYNC_OFFSET                        10
 +#define HDMI_EDID_V_SYNC_PW_OFFSET             11
 +#define HDMI_EDID_H_IMAGE_SIZE_OFFSET          12
 +#define HDMI_EDID_V_IMAGE_SIZE_OFFSET          13
 +#define HDMI_EDID_H_BORDER_OFFSET              15
 +#define HDMI_EDID_V_BORDER_OFFSET              16
 +#define HDMI_EDID_FLAGS_OFFSET                 17
 +
 +/* HDMI EDID DTDs */
 +#define HDMI_EDID_MAX_DTDS                     4
 +
 +/* HDMI EDID DTD Tags */
 +#define HDMI_EDID_DTD_TAG_MONITOR_NAME         0xFC
 +#define HDMI_EDID_DTD_TAG_MONITOR_SERIALNUM    0xFF
 +#define HDMI_EDID_DTD_TAG_MONITOR_LIMITS       0xFD
 +#define HDMI_EDID_DTD_TAG_STANDARD_TIMING_DATA 0xFA
 +#define HDMI_EDID_DTD_TAG_COLOR_POINT_DATA     0xFB
 +#define HDMI_EDID_DTD_TAG_ASCII_STRING         0xFE
 +
 +#define HDMI_IMG_FORMAT_MAX_LENGTH             20
 +#define HDMI_AUDIO_FORMAT_MAX_LENGTH           10
 +
 +/* HDMI EDID Extenion Data Block Values: Video */
 +#define HDMI_EDID_EX_VIDEO_NATIVE              0x80
 +#define HDMI_EDID_EX_VIDEO_MASK                        0x7F
 +#define HDMI_EDID_EX_VIDEO_MAX                 35
 +
 +#define STANDARD_HDMI_TIMINGS_NB               34
 +#define STANDARD_HDMI_TIMINGS_VESA_START       15
 +
 +#ifdef __cplusplus
  +extern "C" {
 +#endif
 

Re: Yet another memory provider: can linaro organize a meeting?

2011-03-16 Thread Alex Deucher
On Wed, Mar 16, 2011 at 4:52 AM, Laurent Pinchart
laurent.pinch...@ideasonboard.com wrote:
 Hi Alex,

 On Tuesday 15 March 2011 17:47:47 Alex Deucher wrote:

 [snip]

 FWIW, I have yet to see any v4l developers ever email the dri mailing
 list while discussing GEM, TTM, or the DRM, all the while conjecturing
 on aspects of it they admit to not fully understanding.  For future
 reference, the address is: dri-de...@lists.freedesktop.org.  We are
 happy to answer questions.

 Please don't see any malice there. Even though the topic has been on our table
 for quite some time now, we're only starting to actively work on it. The first
 step is to gather our requirements (this will likely be done this week, during
 the V4L2 brainstorming meeting in Warsaw). We will then of course contact
 DRM/DRI developers.

Sorry, it came out a little harsher than I wanted.  I just want to
avoid duplication of effort if possible.

Alex


Re: Yet another memory provider: can linaro organize a meeting?

2011-03-16 Thread Alex Deucher
On Wed, Mar 16, 2011 at 3:37 AM, Li Li eggon...@gmail.com wrote:
 Sorry but I feel the discussion is a bit off the point. We're not
 going to compare the pros and cons of current code (GEM/TTM, HWMEM,
 UMP, CMA, VCM, CMEM, PMEM, etc.)

 The real problem is to find a suitable unified memory management
 module for various kinds of HW components (including CPU, VPU, GPU,
 camera, FB/OVL, etc.), especially for ARM based SOC. Some HW requires
 physical continuous big chunk of memory (e.g. some VPU  OVL); while
 others could live with DMA chain (e.g. some powerful GPU has built-in
 MMU).

 So, what's current situation?

 1) As Hans mentioned, there're GEM  TTM in upstream kernel, under the
 DRM framework (w/ KMS, etc.). This works fine on conventional (mostly
 Xorg-based) Linux distribution.

 2) But DRM (or GEM/TTM) is still too heavy and complex for some
 embedded OSes, which only want a cheaper memory management module. So...

 2.1) Google uses PMEM in Android - However PMEM was removed from
 upstream kernel for well-known reasons;

 2.2) Qualcomm writes a hybrid KGSL-based DRM+PMEM solution - However,
 KGSL was shamed on the dri-devel list because of their closed user-space
 binary.

 2.3) ARM starts UMP/MaliDRM for both of Android and X11/DRI2 - This
 makes things even more complicated. (Therefore I personally think this
 is actually a shame for ARM to create another private SW. As a leader
 of Linaro, ARM should think more and coordinate better with partners
 to come up with a unified solution to make our lives easier.)

 2.4) Other companies also have their own private solutions because
 nobody can get a STANDARD interface from upstream, including Marvell,
 TI, Freescale.



 In general, it would be highly appreciated if Linaro guys could sit
 down together around a table, co-work with silicon vendors and
 upstream Linux kernel maintainers to make a unified (and cheaper than
 GEM/TTM/DRM) memory management module. This module should be reviewed
 carefully and strong enough to replace any other private memory
 manager mentioned above. It should replace PMEM for Android (with
 respect to Gralloc). And it could even be leveraged in DRM framework
 (as a primitive memory allocation provider under GEM).

 Anyway, such a module is necessary, because user space applications
 cannot exchange enough information through a single virtual address
 (across different per-process virtual address spaces). Gstreamer, V4L and
 any other middleware could keep using a single virtual address within the
 same process. But a global handle/ID is also necessary for sharing
 buffers between processes.

 Furthermore, besides those well-known basic features, some advanced
 APIs should be provided for application to map the same physical
 memory region into another process, with 1) manageable fine
 CACHEable/BUFFERable attributes and cache flush mechanism (for
 performance); 2) lock/unlock synchronization; 3) swap/migration
 ability (optional in current stage, as those buffer are often expected
 to stay in RAM for better performance).

 Finally, and most important, THIS MODULE SHOULD BE PUSHED TO
 UPSTREAM (sorry, please ignore all the nonsense I wrote above if we
 can achieve this) so that everyone treats it as a de facto well
 supported memory management module. Thus all companies could transition
 from their current private designs to this public one. And, let's cheer
 for the end of this damn chaos!

FWIW, I don't know if a common memory management API is possible.  On
the GPU side we tried, but there ended up being too many weird
hardware quirks from vendor to vendor (types of memory addressable,
strange tiling formats, etc.).  You might be able to come up with some
kind of basic framework like TTM, but by the time you add the
necessary quirks for various hw, it may be bigger than you want.
That's why we have GEM and TTM and driver specific memory management
ioctls in the drm.
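For illustration, the split Alex describes -- a small shared core plus per-vendor quirks exposed through driver-specific ioctls -- might look roughly like this in C. All names here are invented for the sketch, not actual drm structures:

```c
#include <stdint.h>

/* Shared bookkeeping every driver could reuse. */
struct common_bo {
    uint64_t size;
    uint32_t domains;        /* where the buffer may live */
};

/* The per-vendor oddities: strange tiling formats, addressability
 * limits, etc. -- the part that bloats a one-size-fits-all manager. */
enum tiling { TILE_LINEAR, TILE_VENDOR_A_MACRO, TILE_VENDOR_B_BANKED };

struct vendor_quirks {
    enum tiling tiling;
    uint32_t    align;       /* minimum placement alignment */
};

/* A driver-specific ioctl layer would bind the two together. */
struct driver_bo {
    struct common_bo     base;
    struct vendor_quirks quirks;
};
```

The quirks struct is the part that grows without bound as new hardware arrives, which is the argument above against a single framework.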

Alex


 Thanks,
 Lea

 On Wed, Mar 16, 2011 at 12:47 AM, Alex Deucher alexdeuc...@gmail.com wrote:
 On Tue, Mar 15, 2011 at 12:07 PM, Robert Fekete
 robert.fek...@linaro.org wrote:
 On 8 March 2011 20:23, Laurent Pinchart
 laurent.pinch...@ideasonboard.com wrote:
 Hi Andy,

 On Tuesday 08 March 2011 20:12:45 Andy Walls wrote:
 On Tue, 2011-03-08 at 16:52 +0100, Laurent Pinchart wrote:

 [snip]

It really shouldn't be that hard to get everyone involved together
and settle on a single solution (either based on an existing
proposal or create a 'the best of' vendor-neutral solution).
  
   Single might be making the problem impossibly hard to solve well.
   One-size-fits-all solutions have a tendency to fall short on meeting
   someone's critical requirement.  I will agree that less than n, for
   some small n, is certainly desirable.
  
   The memory allocators and managers are ideally satisfying the
   requirements imposed by device hardware, what userspace applications
   are expected to do with the buffers, and system performance.  (And
   maybe the platform architecture, I/O

Re: Yet another memory provider: can linaro organize a meeting?

2011-03-15 Thread Alex Deucher
On Tue, Mar 15, 2011 at 12:07 PM, Robert Fekete
robert.fek...@linaro.org wrote:
 On 8 March 2011 20:23, Laurent Pinchart
 laurent.pinch...@ideasonboard.com wrote:
 Hi Andy,

 On Tuesday 08 March 2011 20:12:45 Andy Walls wrote:
 On Tue, 2011-03-08 at 16:52 +0100, Laurent Pinchart wrote:

 [snip]

It really shouldn't be that hard to get everyone involved together
and settle on a single solution (either based on an existing
proposal or create a 'the best of' vendor-neutral solution).
  
   Single might be making the problem impossibly hard to solve well.
   One-size-fits-all solutions have a tendency to fall short on meeting
   someone's critical requirement.  I will agree that less than n, for
   some small n, is certainly desirable.
  
   The memory allocators and managers are ideally satisfying the
   requirements imposed by device hardware, what userspace applications
   are expected to do with the buffers, and system performance.  (And
   maybe the platform architecture, I/O bus, and dedicated video memory?)
 
  In the embedded world, a very common use case is to capture video data
  from an ISP (V4L2+MC), process it in a DSP (V4L2+M2M, tidspbridge, ...)
  and display it on the GPU (OpenGL/ES). We need to be able to share a
  data buffer between the ISP and the DSP, and another buffer between the
  DSP and the GPU. If processing is not required, sharing a data buffer
  between the ISP and the GPU is required. Achieving zero-copy requires a
  single memory management solution used by the ISP, the DSP and the GPU.

 Ah.  I guess I misunderstood what was meant by memory provider to some
 extent.

 So what I read is a common way of providing in kernel persistent buffers
 (buffer objects? buffer entities?) for drivers and userspace
 applications to pass around by reference (no copies).  Userspace may or
 may not want to see the contents of the buffer objects.

 Exactly. How that memory is allocated is irrelevant here, and we can have
 several different allocators as long as the buffer objects can be managed
 through a single API. That API will probably have to expose buffer properties
 related to allocation, in order for all components in the system to verify
 that the buffers are suitable for their needs, but the allocation process
 itself is irrelevant.

 So I understand now why a single solution is desirable.


 Exactly,

 It is important to know that there are 3 topics of discussion which
 all are a separate topic of its own:

 1. The actual memory allocator
 2. In-kernel API
 3. Userland API

 Explained:
 1. This is how you acquire the actual physical or virtual memory,
 defrag, swap, etc. This can be enhanced by CMA, hotswap, memory
 regions or whatever and the main topic for a system wide memory
 allocator does not deal much with how this is done.
 2. The in-kernel API is important from a device driver point of view in
 order to resolve buffers and pin memory when used (enabling defrag when
 unpinned)
 3. Userland API deals with alloc/free, import/export(IPC), security,
 and set-domain capabilities among others and is meant to pass buffers
 between processes in userland and enable no-copy data paths.

 We need to resolve 2. and 3.
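To make topics 2 and 3 concrete, here is a toy, in-process sketch of the kind of userland API being discussed: opaque per-process handles, export to a global ID for passing between processes, and lock/unlock for pinning before CPU access. Every name here is invented; a real implementation would live in the kernel and the global ID would survive IPC:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_BUFS 16

struct buf {
    size_t   size;
    uint32_t flags;     /* e.g. cacheable, physically contiguous */
    int      locked;    /* pinned for CPU access */
};

static struct buf pool[MAX_BUFS];
static int used[MAX_BUFS];

typedef int buf_handle_t;

/* Allocate a buffer and hand back an opaque handle. */
buf_handle_t buf_alloc(size_t size, uint32_t flags)
{
    for (int i = 0; i < MAX_BUFS; i++) {
        if (!used[i]) {
            used[i] = 1;
            pool[i] = (struct buf){ .size = size, .flags = flags };
            return i;
        }
    }
    return -1;
}

/* Export/import model the global ID another process would use. */
int buf_export(buf_handle_t h)         { return used[h] ? h : -1; }
buf_handle_t buf_import(int global_id) { return used[global_id] ? global_id : -1; }

/* Pinning: while locked, the buffer must not be defragged/migrated. */
int buf_lock(buf_handle_t h)   { pool[h].locked = 1; return 0; }
int buf_unlock(buf_handle_t h) { pool[h].locked = 0; return 0; }
```

A zero-copy path would then be: producer allocates and exports, consumer imports the same buffer and locks it around CPU access.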

 GEM/TTM is mentioned in this thread and there is an overlap of what is
 happening within DRM/DRI/GEM/TTM/KMS and V4L2. The whole idea behind
 DRM is to have one device driver for everything (well at least 2D/3D,
 video codecs, display output/composition), while on a SoC all this is
 on several drivers/IP's. A V4L2 device cannot resolve a GEM handle.
 GEM only lives inside one DRM device (AFAIK). GEM is also mainly for
 dedicated memory-less graphics cards while TTM mainly targets
 advanced Graphics Card with dedicated memory. From a SoC point of view
 DRM looks very fluffy and not quite slimmed for an embedded device,
 and you cannot get GEM/TTM without bringing in all of DRM/DRI. KMS on
 the other hand is very attractive as a framebuffer device replacer. It
 is not an easy task to decide on a multimedia user interface for a SoC
 vendor.

Modern GPUs are basically SoCs: 3D engine, video decode, hdmi packet
engines, audio, dma engine, display blocks, etc. with a shared memory
controller.  Also the AMD fusion and Intel moorestown SoCs are not too
different from ARM-based SoCs and we are supporting them with the drm.
I expect we'll see the x86 and ARM/MIPS based SoCs continue to get
closer together.

What are you basing your fluffy statement on?  We recently merged a
set of patches from qualcomm to support platform devices in the drm
and Dave added support for USB devices. Qualcomm also has an open
source drm for their snapdragon GPUs (although the userspace driver is
closed) and they are using that on their SoCs.


 Uniting the frameworks within the kernel will likely fail (too big of a
 task) but a common system-wide memory manager would for sure make life
 easier, enabling the possibility to pass buffers between drivers (and
 user-land as well). In order for No-copy to 

Re: [ANN] Agenda for the Warsaw meeting.

2011-03-13 Thread Alex Deucher
On Sun, Mar 13, 2011 at 8:31 AM, Hans Verkuil hverk...@xs4all.nl wrote:
 Agenda for V4L2 brainstorm meeting in Warsaw, March 16-18 2011.

 Purpose of the meeting: to brainstorm about current V4L2 API limitations
 with regards to required functionality. Ideally the results of the meeting
 are actual solutions to these problems, but at the very least we should
 have a consensus of what direction to take and who will continue working
 on each problem. The hope is that this meeting will save us endless email
 and irc discussions.

 It is *not* a summit meeting, so any conclusions need to be discussed and
 approved on the mailinglist.

 The basic outline is the same as during previous meetings: the first day we
 go through all the agenda points and make sure everyone understands the
 problem. Smaller issues will be discussed and decided, more complex issues
 are just discussed.

 The second day we go in depth into the complex issues and try to come up with
 ideas that might work. The last day we translate all the agenda items into
 actions.

 This approach worked well in the past and it ensures that we end up with
 something concrete.

 Those who have a vested interest in an agenda item should be prepared to
 explain their take on it and if necessary have a presentation ready.

 Besides the main agenda I also added a few items falling under the category
 'if time permits'.

 Attendees:

 Samsung Poland R&D Center:
  Kamil Debski k.deb...@samsung.com
  Sylwester Nawrocki s.nawro...@samsung.com
  Tomasz Stanislawski t.stanisl...@samsung.com
  Marek Szyprowski (Organizer) m.szyprow...@samsung.com

 Cisco Systems Norway:
  Martin Bugge marbu...@cisco.com
  Hans Verkuil (Chair) hverk...@xs4all.nl

 Nokia:
  Sakari Ailus sakari.ai...@maxwell.research.nokia.com

 Ideas On Board:
  Laurent Pinchart laurent.pinch...@ideasonboard.com

 ST-Ericsson:
  Willy Poisson willy.pois...@stericsson.com

 Samsung System.LSI Korea
  Jonghun Han jonghun@samsung.com
  Jaeryul Oh jaeryul...@samsung.com

 Samsung DMC Korea:
   Seung-Woo Kim sw0312@samsung.com

 Freelance:
  Guennadi Liakhovetski g.liakhovet...@gmx.de


 Agenda:

 1) Compressed format API for MPEG, H.264, etc. Also need to discuss what to
   do with weird 'H.264 inside MJPEG' muxed formats.
   (Hans, Laurent, Samsung)

 2) Small architecture enhancements:
        - Acquiring subdevs from other devices using subdev pool
          http://www.mail-archive.com/linux-media@vger.kernel.org/msg21831.html
          (Tomasz)
        - Introducing subdev hierarchy. Below there is a link to a driver
          using it:
          http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/28885/focus=28890
          (Tomasz)
        - Allow per-filehandle control handlers.
          http://www.spinics.net/lists/linux-media/msg27975.html
          (Jaeryul)
        - How to control FrameBuffer device as v4l2 sub-device?
          http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/29442/focus=29570
          (Jaeryul)
        - Which interface is better for Mixer of Exynos, frame buffer or V4l2?
          http://www.mail-archive.com/linux-media@vger.kernel.org/msg28549.html
          (Jaeryul)
        - Entity information ioctl
          Some drivers (namely the uvcvideo driver) will need to report 
 driver-specific
          information about each entity (the UVC entity GUID, the UVC controls 
 it
          supports, ...). We need an API for that.
          (Laurent)

 3) Pipeline configuration, cropping and scaling:

   http://www.mail-archive.com/linux-media@vger.kernel.org/msg27956.html
   http://www.mail-archive.com/linux-media@vger.kernel.org/msg26630.html

   (Everyone)

 4) HDMI receiver/transmitter API support

   Some hotplug/CEC code can be found here:

   http://www.mail-archive.com/linux-media@vger.kernel.org/msg28549.html

   CEC RFC from Cisco Systems Norway:

   http://www.mail-archive.com/linux-media@vger.kernel.org/msg29241.html

   Hopefully we can post an initial HDMI RFC as well on Monday.

   (Martin, Hans, Samsung, ST-Ericsson)

 5) Sensor/Flash/Snapshot functionality.

   http://www.mail-archive.com/linux-media@vger.kernel.org/msg28192.html
   http://www.mail-archive.com/linux-media@vger.kernel.org/msg28490.html

   - Sensor blanking/pixel-clock/frame-rate settings (including
     enumeration/discovery)

   - Multiple video buffer queues per device (currently implemented in the
     OMAP 3 ISP driver in non-standard way).

   - Synchronising parameters (e.g. exposure time and gain) on given
     frames. Some sensors support this on hardware. There are many use cases
     which benefit from this, for example this one:

     URL:http://fcam.garage.maemo.org/

   - Flash synchronisation (might fall under the above topic).

   - Frame metadata. It is important for the control algorithms (exposure,
     white balance, for example), to know which sensor settings have been
     used to expose a given frame. Many sensors do support 

Re: Yet another memory provider: can linaro organize a meeting?

2011-03-13 Thread Alex Deucher
On Tue, Mar 8, 2011 at 12:23 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 On Tuesday, March 08, 2011 15:01:10 Andy Walls wrote:
 On Tue, 2011-03-08 at 09:13 +0100, Hans Verkuil wrote:
  Hi all,
 
  We had a discussion yesterday regarding ways in which linaro can assist
  V4L2 development. One topic was that of sorting out memory providers like
  GEM and HWMEM.
 
  Today I learned of yet another one: UMP from ARM.
 
  http://blogs.arm.com/multimedia/249-making-the-mali-gpu-device-driver-open-source/page__cid__133__show__newcomment/
 
  This is getting out of hand. I think that organizing a meeting to solve 
  this
  mess should be on the top of the list. Companies keep on solving the same
  problem time and again and since none of it enters the mainline kernel any
  driver using it is also impossible to upstream.
 
  All these memory-related modules have the same purpose: make it possible to
  allocate/reserve large amounts of memory and share it between different
  subsystems (primarily framebuffer, GPU and V4L).

 I'm not sure that's the entire story regarding what the current
 allocators for GPU do.  TTM and GEM create in kernel objects that can be
 passed between applications.  TTM apparently has handling for VRAM
 (video RAM).  GEM uses anonymous userspace memory that can be swapped
 out.

 TTM:
 http://lwn.net/Articles/257417/
 http://www.x.org/wiki/ttm
 http://nouveau.freedesktop.org/wiki/TTMMemoryManager?action=AttachFiledo=gettarget=mm.pdf
 http://nouveau.freedesktop.org/wiki/TTMMemoryManager?action=AttachFiledo=gettarget=xdevconf2006.pdf

 GEM:
 http://lwn.net/Articles/283798/

 GEM vs. TTM:
 http://lwn.net/Articles/283793/


 The current TTM and GEM allocators appear to have API and buffer
 processing and management functions tied in with memory allocation.

 TTM has fences for event notification of buffer processing completion.
 (maybe something v4l2 can do with v4l2_events?)

 GEM tries to avoid mapping buffers to userspace. (sounds like the v4l2 mem
 to mem API?)


 Thanks to the good work of developers on the LMML in the past year or
 two, V4L2 has separated out some of that functionality from video buffer
 allocation:

       video buffer queue management and userspace access (videobuf2)
       memory to memory buffer transformation/movement (m2m)
       event notification (VIDIOC_SUBSCRIBE_EVENT)

       http://lwn.net/Articles/389081/
       http://lwn.net/Articles/420512/


  It really shouldn't be that hard to get everyone involved together and 
  settle
  on a single solution (either based on an existing proposal or create a 'the
  best of' vendor-neutral solution).


 Single might be making the problem impossibly hard to solve well.
 One-size-fits-all solutions have a tendency to fall short on meeting
 someone's critical requirement.  I will agree that less than n, for
 some small n, is certainly desirable.

 Actually, I think we really need one solution. I don't see how you can have
 it any other way without requiring e.g. gstreamer to support multiple
 'solutions'.

 The memory allocators and managers are ideally satisfying the
 requirements imposed by device hardware, what userspace applications are
 expected to do with the buffers, and system performance.  (And maybe the
 platform architecture, I/O bus, and dedicated video memory?)


 We discussed this before at the V4L2 brainstorm meeting in Oslo. The idea
 was to have opaque buffer IDs (more like cookies) that both kernel and
 userspace can use. You have some standard operations you can do with that
 and tied to the buffer is the knowledge (probably a set of function pointers
 in practice) of how to do each operation. So a buffer referring to video
 memory might have different code to e.g. obtain the virtual address compared
 to a buffer residing in regular memory.

 This way you would hide all these details while still have enough flexibility.
 It also allows you to support 'hidden' memory. It is my understanding that on
 some platforms (particular those used for set-top boxes) the video buffers are
 on memory that is not accessible from the CPU (rights management related). But
 apparently you still have to be able to refer to it. I may be wrong, it's
 something I was told.

A related example is vram on GPUs.  Often, the CPU can only mmap the
region of vram that is covered by the PCI framebuffer BAR, but the GPU
can access the entire vram pool.  As such, in order to access the
buffer with the CPU, you either have to migrate it to a mappable
region of vram using the GPU (using a dma engine or a blit), or
migrate the buffer to another memory pool (such as gart memory -
system memory that is remapped into a linear aperture on the GPU).
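A rough pseudo-C sketch of that flow (all names invented for illustration; real drivers such as radeon handle this through TTM placement and eviction):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of where a GPU buffer object lives. */
enum bo_domain { DOMAIN_VRAM, DOMAIN_GTT };

struct bo {
    enum bo_domain domain;
    uint64_t offset;   /* offset of the buffer within its pool */
};

/* The CPU can only reach vram that lies below the PCI BAR size;
 * gart memory is always CPU-visible system memory. */
static bool cpu_can_map(const struct bo *bo, uint64_t bar_size)
{
    return bo->domain == DOMAIN_GTT || bo->offset < bar_size;
}

/*
 * Before an mmap, a CPU-invisible buffer must be migrated: either
 * blitted into the visible part of vram by the GPU, or evicted out
 * to gart/system memory.  This stub just models the second option.
 */
static void prepare_cpu_access(struct bo *bo, uint64_t bar_size)
{
    if (!cpu_can_map(bo, bar_size))
        bo->domain = DOMAIN_GTT;   /* stand-in for the real migration */
}
```

The point is only that CPU visibility is a property of where the buffer currently sits, so a migration step may precede any CPU mapping.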

Alex




  I am currently aware of the following solutions floating around the net
  that all solve different parts of the problem:
 
  In the kernel: GEM and TTM.
  Out-of-tree: HWMEM, UMP, CMA, VCM, CMEM, PMEM.

 Prior to a meeting one would probably want to capture for each
 allocator:

 1. What are 

Re: Yet another memory provider: can linaro organize a meeting?

2011-03-13 Thread Alex Deucher
On Tue, Mar 8, 2011 at 9:01 AM, Andy Walls awa...@md.metrocast.net wrote:
 On Tue, 2011-03-08 at 09:13 +0100, Hans Verkuil wrote:
 Hi all,

 We had a discussion yesterday regarding ways in which linaro can assist
 V4L2 development. One topic was that of sorting out memory providers like
 GEM and HWMEM.

 Today I learned of yet another one: UMP from ARM.

 http://blogs.arm.com/multimedia/249-making-the-mali-gpu-device-driver-open-source/page__cid__133__show__newcomment/

 This is getting out of hand. I think that organizing a meeting to solve this
 mess should be on the top of the list. Companies keep on solving the same
 problem time and again and since none of it enters the mainline kernel any
 driver using it is also impossible to upstream.

 All these memory-related modules have the same purpose: make it possible to
 allocate/reserve large amounts of memory and share it between different
 subsystems (primarily framebuffer, GPU and V4L).

 I'm not sure that's the entire story regarding what the current
 allocators for GPU do.  TTM and GEM create in kernel objects that can be
 passed between applications.  TTM apparently has handling for VRAM
 (video RAM).  GEM uses anonymous userspace memory that can be swapped
 out.

TTM can handle pretty much any type of memory; it's just a basic
memory manager.  You specify things like cacheability attributes when you
set up the pools.

Generally on GPUs we see 3 types of buffers:
1. video ram connected to the GPU.  Often some or all of that pool is
not accessible by the CPU.  The GPU usually provides a mechanism to
migrate the buffer to a pool or region that is accessible to the CPU.
Vram buffers are usually mapped uncached write-combined.
2. GART/GTT (Graphics Address Remapping Table) memory.  This is
DMAable system memory that is mapped into the GPU's address space and
remapped to look linear to the GPU.  It can either be cached or
uncached pages depending on the GPU's capabilities and what the
buffers are used for.
3. unpinned system pages.  Depending on the GPU, they have to be
copied to DMAable memory before the GPU can access them.

The DRI protocol (used for communication between GPU acceleration
drivers) doesn't really care what the underlying memory manager is.
It just passes around handles.
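Those three pools are commonly expressed as placement domains on each buffer. A sketch loosely modelled on TTM-style placement (the flag and struct names are invented, not the real kernel API):

```c
#include <stdint.h>

/* Hypothetical domain flags for the three pools described above. */
#define PLACE_VRAM   (1u << 0)  /* dedicated video ram, often uncached WC  */
#define PLACE_GTT    (1u << 1)  /* system pages remapped through the GART  */
#define PLACE_SYSTEM (1u << 2)  /* plain system pages, copied before DMA   */

struct placement {
    uint32_t allowed;    /* domains the buffer may live in */
    uint32_t preferred;  /* domain to try first            */
};

/* A scanout buffer must stay in vram; a texture can fall back to gart. */
static struct placement scanout_placement(void)
{
    return (struct placement){ .allowed = PLACE_VRAM,
                               .preferred = PLACE_VRAM };
}

static struct placement texture_placement(void)
{
    return (struct placement){ .allowed = PLACE_VRAM | PLACE_GTT,
                               .preferred = PLACE_VRAM };
}
```

The manager then evicts and migrates buffers between the allowed domains under memory pressure, while the handles passed over the DRI protocol stay stable.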

Alex


 TTM:
 http://lwn.net/Articles/257417/
 http://www.x.org/wiki/ttm
 http://nouveau.freedesktop.org/wiki/TTMMemoryManager?action=AttachFiledo=gettarget=mm.pdf
 http://nouveau.freedesktop.org/wiki/TTMMemoryManager?action=AttachFiledo=gettarget=xdevconf2006.pdf

 GEM:
 http://lwn.net/Articles/283798/

 GEM vs. TTM:
 http://lwn.net/Articles/283793/


 The current TTM and GEM allocators appear to have API and buffer
 processing and management functions tied in with memory allocation.

 TTM has fences for event notification of buffer processing completion.
 (maybe something v4l2 can do with v4l2_events?)

 GEM tries to avoid mapping buffers to userspace. (sounds like the v4l2 mem
 to mem API?)


 Thanks to the good work of developers on the LMML in the past year or
 two, V4L2 has separated out some of that functionality from video buffer
 allocation:

        video buffer queue management and userspace access (videobuf2)
        memory to memory buffer transformation/movement (m2m)
        event notification (VIDIOC_SUBSCRIBE_EVENT)

        http://lwn.net/Articles/389081/
        http://lwn.net/Articles/420512/


 It really shouldn't be that hard to get everyone involved together and settle
 on a single solution (either based on an existing proposal or create a 'the
 best of' vendor-neutral solution).


 Single might be making the problem impossibly hard to solve well.
 One-size-fits-all solutions have a tendency to fall short on meeting
 someone's critical requirement.  I will agree that less than n, for
 some small n, is certainly desirable.

 The memory allocators and managers are ideally satisfying the
 requirements imposed by device hardware, what userspace applications are
 expected to do with the buffers, and system performance.  (And maybe the
 platform architecture, I/O bus, and dedicated video memory?)



 I am currently aware of the following solutions floating around the net
 that all solve different parts of the problem:

 In the kernel: GEM and TTM.
 Out-of-tree: HWMEM, UMP, CMA, VCM, CMEM, PMEM.

 Prior to a meeting one would probably want to capture for each
 allocator:

 1. What are the attributes of the memory allocated by this allocator?

 2. For what domain was this allocator designed: GPU, video capture,
 video decoder, etc.

 3. How are applications expected to use objects from this allocator?

 4. What are the estimated sizes and lifetimes of objects that would be
 allocated this allocator?

 5. Beyond memory allocation, what other functionality is built into this
 allocator: buffer queue management, event notification, etc.?

 6. Of the requirements that this allocator satisfies, what are the
 performance critical requirements?


 Maybe there are 

Re: [RFC] HDMI-CEC proposal

2011-03-02 Thread Alex Deucher
On Wed, Mar 2, 2011 at 4:13 AM, Hans Verkuil hansv...@cisco.com wrote:
 Hi Alex,

 On Tuesday, March 01, 2011 18:52:28 Alex Deucher wrote:
 On Tue, Mar 1, 2011 at 4:59 AM, Martin Bugge (marbugge)
 marbu...@cisco.com wrote:
  Author: Martin Bugge marbu...@cisco.com
  Date:  Tue, 1 March 2010
  ==
 
  This is a proposal for adding a Consumer Electronic Control (CEC) API to
  V4L2.
  This document describes the changes and new ioctls needed.
 
  Version 1.0 (This is first version)
 
  Background
  ==
  CEC is a protocol that provides high-level control functions between
 various
  audiovisual products.
  It is an optional supplement to the High-Definition Multimedia Interface
  Specification (HDMI).
  Physical layer is a one-wire bidirectional serial bus that uses the
  industry-standard AV.link protocol.
 
  In short: CEC uses pin 13 on the HDMI connector to transmit and receive
  small data-packets
           (maximum 16 bytes including a 1 byte header) at low data rates
  (~400 bits/s).
 
  A CEC device may have any of 15 logical addresses (0 - 14).
  (address 15 is broadcast and some addresses are reserved)
 

 It would be nice if this was not tied to v4l as we'll start seeing CEC
 support show up in GPUs soon as well.

 As mentioned in other emails it is my firm belief that mixing APIs is a bad
 idea. I've never seen that work in practice. That said, I do think that any
 userspace CEC library shouldn't be tied to V4L allowing it to be used by GPUs.


Right.  That was my concern.  You are probably more of an expert on
CEC so I'll leave the API to you, but as it's going to show up in
GPUs, I'd rather not re-invent the wheel to support it on the GPU side
in some incompatible manner if it can be avoided.

 It would also be interesting to see if i2c HDMI receiver/transmitter drivers
 can be used by both subsystems. This would make a lot of sense.

There are already several i2c tmds drivers in the drm tree:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=tree;f=drivers/gpu/drm/i2c;h=9eb6dad3ffa6cac6dfc07afb0b8526049416398b;hb=HEAD
And a few in the intel kms driver that could be broken out as
independent drivers.  See the dvo_*.c files in:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=tree;f=drivers/gpu/drm/i915;h=19e4b8fe8f5413f0c5d5059d8b2561eafab9e5dd;hb=HEAD
Still they are tied to the drm as they are used as kms encoders.



 Apologies if I asked this before, but are you planning to attend the ELC in
 San Francisco? If so, then we should sit together and compare the subsystems
 and see if we can work something out.

Probably not, but I'll know more soon.

Alex


 Regards,

        Hans


 Alex

 
  References
  ==
  [1] High-Definition Multimedia Interface Specification version 1.3a,
     Supplement 1 Consumer Electronic Control (CEC).
     http://www.hdmi.org/manufacturer/specification.aspx
 
  [2]
  http://www.hdmi.org/pdf/whitepaper/DesigningCECintoYourNextHDMIProduct.pdf
 
 
  Proposed solution
  =
 
  Two new ioctls:
     VIDIOC_CEC_CAP (read)
     VIDIOC_CEC_CMD (read/write)
 
  VIDIOC_CEC_CAP:
  ---
 
  struct v4l2_cec_cap {
        __u32 logicaldevices;
        __u32 reserved[7];
  };
 
  The capability ioctl will return the number of logical devices/addresses
  which can be
  simultaneously supported on this HW.
     0:       This HW doesn't support CEC.
     1 - 14: This HW supports n logical devices simultaneously.
 
  VIDIOC_CEC_CMD:
  ---
 
  struct v4l2_cec_cmd {
     __u32 cmd;
     __u32 reserved[7];
     union {
         struct {
             __u32 index;
             __u32 enable;
             __u32 addr;
         } conf;
         struct {
             __u32 len;
             __u8  msg[16];
             __u32 status;
         } data;
         __u32 raw[8];
     };
  };
 
  Alternatively the data struct could be:
         struct {
             __u8  initiator;
             __u8  destination;
             __u8  len;
             __u8  msg[15];
             __u32 status;
         } data;
 
  Commands:
 
  #define V4L2_CEC_CMD_CONF  (1)
  #define V4L2_CEC_CMD_TX    (2)
  #define V4L2_CEC_CMD_RX    (3)
 
  Tx status field:
 
  #define V4L2_CEC_STAT_TX_OK            (0)
  #define V4L2_CEC_STAT_TX_ARB_LOST      (1)
  #define V4L2_CEC_STAT_TX_RETRY_TIMEOUT (2)
 
  The command ioctl is used both for configuration and to receive/transmit
  data.
 
  * The configuration command must be done for each logical device address
   which is to be enabled on this HW. Maximum number of logical devices
   is found with the capability ioctl.
     conf:
          index:  0 - number_of_logical_devices-1
          enable: true/false
          addr:   logical address
 
   By default all logical devices are disabled.
 
  * Tx/Rx command
     data:
          len:    length of message (data + header)
          msg:    the raw CEC message received/transmitted
          status: when the driver is in blocking

Re: [RFC] HDMI-CEC proposal

2011-03-01 Thread Alex Deucher
On Tue, Mar 1, 2011 at 4:59 AM, Martin Bugge (marbugge)
marbu...@cisco.com wrote:
 Author: Martin Bugge marbu...@cisco.com
 Date:  Tue, 1 March 2010
 ==

 This is a proposal for adding a Consumer Electronic Control (CEC) API to
 V4L2.
 This document describes the changes and new ioctls needed.

 Version 1.0 (This is first version)

 Background
 ==
 CEC is a protocol that provides high-level control functions between various
 audiovisual products.
 It is an optional supplement to the High-Definition Multimedia Interface
 Specification (HDMI).
 Physical layer is a one-wire bidirectional serial bus that uses the
 industry-standard AV.link protocol.

 In short: CEC uses pin 13 on the HDMI connector to transmit and receive
 small data-packets
          (maximum 16 bytes including a 1 byte header) at low data rates
 (~400 bits/s).

 A CEC device may have any of 15 logical addresses (0 - 14).
 (address 15 is broadcast and some addresses are reserved)


It would be nice if this was not tied to v4l as we'll start seeing CEC
support show up in GPUs soon as well.

Alex


 References
 ==
 [1] High-Definition Multimedia Interface Specification version 1.3a,
    Supplement 1 Consumer Electronic Control (CEC).
    http://www.hdmi.org/manufacturer/specification.aspx

 [2]
 http://www.hdmi.org/pdf/whitepaper/DesigningCECintoYourNextHDMIProduct.pdf


 Proposed solution
 =

 Two new ioctls:
    VIDIOC_CEC_CAP (read)
    VIDIOC_CEC_CMD (read/write)

 VIDIOC_CEC_CAP:
 ---

 struct v4l2_cec_cap {
       __u32 logicaldevices;
       __u32 reserved[7];
 };

 The capability ioctl will return the number of logical devices/addresses
 which can be
 simultaneously supported on this HW.
    0:       This HW doesn't support CEC.
    1 - 14: This HW supports n logical devices simultaneously.

 VIDIOC_CEC_CMD:
 ---

 struct v4l2_cec_cmd {
    __u32 cmd;
    __u32 reserved[7];
    union {
        struct {
            __u32 index;
            __u32 enable;
            __u32 addr;
        } conf;
        struct {
            __u32 len;
            __u8  msg[16];
            __u32 status;
        } data;
        __u32 raw[8];
    };
 };

 Alternatively the data struct could be:
        struct {
            __u8  initiator;
            __u8  destination;
            __u8  len;
            __u8  msg[15];
            __u32 status;
        } data;

 Commands:

 #define V4L2_CEC_CMD_CONF  (1)
 #define V4L2_CEC_CMD_TX    (2)
 #define V4L2_CEC_CMD_RX    (3)

 Tx status field:

 #define V4L2_CEC_STAT_TX_OK            (0)
 #define V4L2_CEC_STAT_TX_ARB_LOST      (1)
 #define V4L2_CEC_STAT_TX_RETRY_TIMEOUT (2)

 The command ioctl is used both for configuration and to receive/transmit
 data.

 * The configuration command must be done for each logical device address
  which is to be enabled on this HW. Maximum number of logical devices
  is found with the capability ioctl.
    conf:
         index:  0 - number_of_logical_devices-1
         enable: true/false
         addr:   logical address

  By default all logical devices are disabled.

 * Tx/Rx command
    data:
         len:    length of message (data + header)
         msg:    the raw CEC message received/transmitted
         status: when the driver is in blocking mode it gives the result for
 transmit.

 Events
 --

 In the case of non-blocking mode the driver will issue the following events:

 V4L2_EVENT_CEC_TX
 V4L2_EVENT_CEC_RX

 V4L2_EVENT_CEC_TX
 -----------------
  * transmit is complete with the following status:
 Add an additional struct to the struct v4l2_event

 struct v4l2_event_cec_tx {
       __u32 status;
 };

 V4L2_EVENT_CEC_RX
 -----------------
  * received a complete message


 Comments ?

           Martin Bugge

 --
 Martin Bugge - Tandberg (now a part of Cisco)
 --

 --
 To unsubscribe from this list: send the line unsubscribe linux-media in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html



Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-12 Thread Alex Deucher
On Wed, Feb 9, 2011 at 7:51 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Wed, 2011-02-09 at 02:12 -0500, Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
  On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
  On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
   Just two quick notes. I'll try to do a full review this weekend.
  
   On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
   ==
    Introduction
   ==
  
   The purpose of this RFC is to discuss the driver for a TV output 
   interface
   available in upcoming Samsung SoC. The HW is able to generate digital 
   and
   analog signals. Current version of the driver supports only digital 
   output.
  
   Internally the driver uses videobuf2 framework, and CMA memory 
   allocator.
   Not
   all of them are merged by now, but I decided to post the sources to 
   start
   discussion driver's design.
 
  
   Cisco (i.e. a few colleagues and myself) are working on this. We hope 
   to post
   an RFC by the end of this month. We also have a proposal for CEC 
   support in
   the pipeline.
 
  Any reason to not use the drm kms APIs for modesetting, display
  configuration, and hotplug support?  We already have the
  infrastructure in place for complex display configurations and
  generating events for hotplug interrupts.  It would seem to make more
  sense to me to fix any deficiencies in the KMS APIs than to spin a new
  API.  Things like CEC would be a natural fit since a lot of desktop
  GPUs support hdmi audio/3d/etc. and are already using kms.
 
  Alex
 
  I'll toss one out: lack of API documentation for driver or application
  developers to use.
 
 
  When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
  possibly get rid of reliance on the ivtv X video driver
  http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
  was really sparse.
 
  DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
  the userland API wasn't fleshed out.  GEM was talked about a bit in
  there as well, IIRC.
 
  TTM documentation was essentially non-existant.
 
  I can't find any KMS documentation either.
 
  I recall having to read much of the drm code, and having to look at the
  radeon driver, just to tease out what the DRM ioctls needed to do.
 
  Am I missing a Documentation source for the APIs?
 

 Documentation is somewhat sparse compared to some other APIs.  Mostly
 inline kerneldoc comments in the core functions.  It would be nice to
 improve things.   The modesetting API is very similar to the xrandr
 API in the xserver.

 At the moment a device specific surface manager (Xorg ddx, or some
 other userspace lib) is required to use kms due to device specific
 requirements with respect to memory management and alignment for
 acceleration.  The kms modesetting ioctls are common across all kms
 drm drivers, but the memory management ioctls are device specific.
 GEM itself is an Intel-specific memory manager, although radeon uses
 similar ioctls.  TTM is used internally by radeon, nouveau, and svga
 for managing memory gpu accessible memory pools.  Drivers are free to
 use whatever memory manager they want; an existing one shared with a
 v4l or platform driver, TTM, or something new.  There is no generic
 userspace kms driver/lib although Dave and others have done some work
 to support that, but it's really hard to make a generic interface
 flexible enough to handle all the strange acceleration requirements of
 GPUs.

 All of the above unfortunately says to me that the KMS API has a fairly
 tightly coupled set of userspace components, because userspace
 applications need to have details about the specific underlying hardware
 embedded in the application to effectively use the API.


At the moment, the only thing that uses the APIs are X-like things,
Xorg, but also, wayland and graphical boot managers like plymouth.
However, embedded devices with graphics often have similar usage
models so the APIs would work for them as well.  I'm sorry if I gave
the wrong impression, I was not implying you should use kms for video
capture, but rather it should be considered for video output type
things.  Right now just about every embedded device out there uses
some device specific hack (either a hacked up kernel fb interface or
some proprietary ioctls) to support video output and framebuffers.
The hardware is not that different from desktop hardware.

 If so, that's not really conducive to getting application developers to
 write applications to the API, since applications will get tied to
 specific sets of hardware.

 Lack of documentation on the API for userpace application writers to use
 exacerbates that issue, as there are no clearly stated guarantees on

        device node conventions
        ioctl's
                arguments and bounds on the arguments
                expected error return values

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Alex Deucher
On Wed, Feb 9, 2011 at 3:59 AM, Hans Verkuil hansv...@cisco.com wrote:
 On Tuesday, February 08, 2011 16:28:32 Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:

 snip

    The driver supports an interrupt. It is used to detect plug/unplug
 events
  in
  kernel debugs.  The API for detection of such an events in V4L2 API is to
 be
  defined.
 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to
 post
  an RFC by the end of this month. We also have a proposal for CEC support
 in
  the pipeline.

 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.

 There are various reasons for not going down that road. The most important one
 is that mixing APIs is actually a bad idea. I've done that once in the past
 and I've regretted ever since. The problem with doing that is that it is
 pretty hard on applications who have to mix two different styles of API,
 somehow know where to find the documentation for each and know that both APIs
 can in fact be used on the same device.

 Now, if there was a lot of code that could be shared, then that might be
 enough reason to go that way, but in practice there is very little overlap.
 Take CEC: all the V4L API will do is to pass the CEC packets from kernel to
 userspace and vice versa. There is no parsing at all. This is typically used
 by embedded apps that want to do their own CEC processing.

 An exception might be a PCI(e) card with HDMI input/output that wants to
 handle CEC internally. At that point we might look at sharing CEC parsing
 code. A similar story is true for EDID handling.

 One area that might be nice to look at would be to share drivers for HDMI
 receivers and transmitters. However, the infrastructure for such drivers is
 wildly different between how it is used for GPUs versus V4L and has been for
 10 years or so. I also suspect that most GPUs have their own internal HDMI
 implementation so code sharing will probably be quite limited.


You don't need to worry about the rest of the 3D and acceleration
stuff to use the kms modesetting API.  For video output, you have a
timing generator, an encoder that translates a bitstream into
voltages, and a connector that you plug into a monitor.  Additionally
you may want to read an edid or generate a hotplug event and use some
modeline handling helpers.  The kms api provides core modesetting code
and a set of modesetting driver callbacks for crtcs, encoders, and
connectors.  The hardware implementations will vary, but modesetting
is the same.  From drm_crtc_helper.h:

The driver provides the following callbacks for the crtc.  The crtc
loosely refers to the part of the display pipe that generates timing
and framebuffer scanout position.

struct drm_crtc_helper_funcs {
/*
 * Control power levels on the CRTC.  If the mode passed in is
 * unsupported, the provider must use the next lowest power level.
 */
void (*dpms)(struct drm_crtc *crtc, int mode);
void (*prepare)(struct drm_crtc *crtc);
void (*commit)(struct drm_crtc *crtc);

/* Provider can fixup or change mode timings before modeset occurs */
bool (*mode_fixup)(struct drm_crtc *crtc,
   struct drm_display_mode *mode,
   struct drm_display_mode *adjusted_mode);
/* Actually set the mode */
int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode, int x, int y,
struct drm_framebuffer *old_fb);

/* Move the crtc on the current fb to the given position *optional* */
int (*mode_set_base)(struct drm_crtc *crtc, int x, int y,
 struct drm_framebuffer *old_fb);
int (*mode_set_base_atomic)(struct drm_crtc *crtc,
struct drm_framebuffer *fb, int x, int y,
enum mode_set_atomic);

/* reload the current crtc LUT */
void (*load_lut)(struct drm_crtc *crtc);

/* disable crtc when not in use - more explicit than dpms off */
void (*disable)(struct drm_crtc *crtc);
};

encoders take the bitstream from the crtc and convert it into a set of
voltages understood by the monitor, e.g., TMDS or LVDS encoders.  The
callbacks follow a similar pattern to crtcs.

struct drm_encoder_helper_funcs {
void (*dpms)(struct drm_encoder *encoder, int mode);
void (*save)(struct drm_encoder *encoder);
void (*restore)(struct drm_encoder

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Alex Deucher
On Wed, Feb 9, 2011 at 2:43 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 On Wednesday, February 09, 2011 20:00:38 Matt Turner wrote:
 On Wed, Feb 9, 2011 at 7:12 AM, Alex Deucher alexdeuc...@gmail.com wrote:
  On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
  On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
  On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
   Just two quick notes. I'll try to do a full review this weekend.
  
   On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
   ==
    Introduction
   ==
  
   The purpose of this RFC is to discuss the driver for a TV output 
   interface
   available in upcoming Samsung SoC. The HW is able to generate digital 
   and
   analog signals. Current version of the driver supports only digital 
   output.
  
   Internally the driver uses videobuf2 framework, and CMA memory 
   allocator.
   Not
   all of them are merged by now, but I decided to post the sources to 
   start
   discussion driver's design.
 
  
   Cisco (i.e. a few colleagues and myself) are working on this. We hope 
   to post
   an RFC by the end of this month. We also have a proposal for CEC 
   support in
   the pipeline.
 
  Any reason to not use the drm kms APIs for modesetting, display
  configuration, and hotplug support?  We already have the
  infrastructure in place for complex display configurations and
  generating events for hotplug interrupts.  It would seem to make more
  sense to me to fix any deficiencies in the KMS APIs than to spin a new
  API.  Things like CEC would be a natural fit since a lot of desktop
  GPUs support hdmi audio/3d/etc. and are already using kms.
 
  Alex
 
  I'll toss one out: lack of API documentation for driver or application
  developers to use.
 
 
  When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
  possibly get rid of reliance on the ivtv X video driver
  http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
  was really sparse.
 
  DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
  the userland API wasn't fleshed out.  GEM was talked about a bit in
  there as well, IIRC.
 
  TTM documentation was essentially non-existant.
 
  I can't find any KMS documentation either.
 
  I recall having to read much of the drm code, and having to look at the
  radeon driver, just to tease out what the DRM ioctls needed to do.
 
  Am I missing a Documentation source for the APIs?

 Yes,

 My summer of code project's purpose was to create something of a
 tutorial for writing a KMS driver. The code, split out into something
 like 15 step-by-step patches, and accompanying documentation are
 available from Google's website.

 http://code.google.com/p/google-summer-of-code-2010-xorg/downloads/detail?name=Matt_Turner.tar.gz

 Nice!

 What I still don't understand is if and how this is controlled via userspace.

 Is there some documentation of the userspace API somewhere?

At the moment, it's only used by Xorg ddxes and the plymouth
bootsplash.  For details see:
http://cgit.freedesktop.org/plymouth/tree/src/plugins/renderers/drm
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/drmmode_display.c


 My repository (doesn't include the documentation) is available here:
 http://git.kernel.org/?p=linux/kernel/git/mattst88/glint.git;a=summary

 There's a 'rebased' branch that contains API changes required for the
 code to work with 2.6.37~.

 I hope it's useful to you.

 I can't imagine how the lack of documentation of a used and tested API
 could be a serious reason to write your own.

 That never was the main reason. It doesn't help, though.

 That makes absolutely no
 sense to me, so I hope you'll decide to use KMS.

 No, we won't. A GPU driver != a V4L driver. The primary purpose of a V4L2
 display driver is to output discrete frames from memory to some device. This
 may be a HDMI transmitter, a SDTV transmitter, a memory-to-memory codec, an
 FPGA, whatever. In other words, there is not necessarily a monitor on the other
 end. We have for some time now V4L2 APIs to set up video formats. The original
 ioctl was VIDIOC_G/S_STD to select PAL/NTSC/SECAM. The new ones are
 VIDIOC_G/S_DV_PRESETS which set up standard formats (1080p60, 720p60, etc) and
 DV_TIMINGS which can be used for custom bt.656/1120 digital video timings.

 Trying to mix KMS into the V4L2 API is just a recipe for disaster. Just think
 about what it would mean for DRM if DRM would use the V4L2 API for setting
 video modes. That would be a disaster as well.

I still think there's room for cooperation here.  There are GPUs out
there that have a capture interface and a full gamut of display output
options in addition to a 3D engine.  Besides conventional desktop
stuff, you could use this sort of setup to capture frames, run them
through shader-based filters/transforms and render them to memory to
be scanned out by display hardware, or dma'ed

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Alex Deucher
On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
 Just two quick notes. I'll try to do a full review this weekend.

 On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
 ==
  Introduction
 ==

 The purpose of this RFC is to discuss the driver for a TV output interface
 available in upcoming Samsung SoC. The HW is able to generate digital and
 analog signals. Current version of the driver supports only digital output.

 Internally the driver uses the videobuf2 framework and the CMA memory
 allocator. Not all of them are merged by now, but I decided to post the
 sources to start a discussion on the driver's design.

 ==
  Hardware description
 ==

 The SoC contains a few HW sub-blocks:

 1. Video Processor (VP). It is used for processing of NV12 data.  An image
 stored in RAM is accessed by DMA. Pixels are cropped and scaled.
 Additionally, post-processing operations like brightness, sharpness and
 contrast adjustments could be performed. The output in YCbCr444 format is
 sent to the Mixer.

 2. Mixer (MXR). The piece of hardware responsible for mixing and blending
 multiple data inputs before passing them to an output device.  The MXR is
 capable of handling up to three image layers. One is the output of the VP.
 The other two are images in RGB format (multiple variants are supported).
 The layers are scaled, cropped and blended with a background color.  The
 blending factor and the layers' priority are controlled by the MXR's
 registers. The output is passed either to HDMI or TVOUT.

 3. HDMI. The piece of HW responsible for generation of HDMI packets. It
 takes
 pixel data from the mixer and transforms it into data frames. The output is sent
 to HDMIPHY interface.

 4. HDMIPHY. Physical interface for HDMI. Its duty is sending HDMI packets to
 the HDMI connector. Basically, it contains a PLL that produces the source clock for
 Mixer, VP and HDMI during streaming.

 5. TVOUT. Generation of TV analog signal. (driver not implemented)

 6. VideoDAC. Modulator for TVOUT signal. (driver not implemented)


 The diagram below depicts connection between all HW pieces.
                     +-----------+
 NV12 data ---dma-->|   Video   |
                     | Processor |
                     +-----------+
                           |
                           V
                     +-----------+
 RGB data  ---dma-->|           |
                     |   Mixer   |
 RGB data  ---dma-->|           |
                     +-----------+
                           |
                           * dmux
                          / \
                   +------* *------+
                   |               |
                   V               V
             +-----------+   +-----------+
             |    HDMI   |   |   TVOUT   |
             +-----------+   +-----------+
                   |               |
                   V               V
             +-----------+   +-----------+
             |  HDMIPHY  |   |  VideoDAC |
             +-----------+   +-----------+
                   |               |
                   V               V
                 HDMI          Composite
              connector        connector


 ==
  Driver interface
 ==

 The posted driver implements three V4L2 nodes. Every video node implements a
 V4L2 output buffer. One of the nodes corresponds to the input of the Video
 Processor. The other two nodes correspond to the RGB inputs of the Mixer.
 All nodes share the same output. It is one of the Mixer's outputs: TVOUT or
 HDMI. Changing the output in one layer using S_OUTPUT would change the
 outputs of all other video nodes. The same thing happens if one tries to
 reconfigure the output, i.e. by calling S_DV_PRESET. However it is not
 possible to change or reconfigure the output while streaming. To sum up,
 all features in the posted version of the driver go as follows:

 1. QUERYCAP
 2. S_FMT, G_FMT - single and multiplanar API
   a) node named video0 supports formats NV12, NV12M, NV12T (tiled version of
 NV12), NV12MT (multiplane version of NV12T).
   b) nodes named graph0 and graph1 support formats RGB565, ARGB1555,
 ARGB4444, ARGB8888.

 graph0? Do you perhaps mean fb0? I haven't heard about nodes names 'graph'
 before.

 3. Buffer with USERPTR and MMAP memory.
 4. Streaming and buffer control. (STREAMON, STREAMOFF, REQBUF, QBUF, DQBUF)
 5. OUTPUT enumeration.
 6. DV preset control (SET, GET, ENUM). Currently modes 480P59_94, 720P59_94,
 1080P30, 1080P59_94 and 1080P60 work.
 7. Positioning layer's window on output display using S_CROP, G_CROP,
 CROPCAP.
 8. Positioning and cropping data in buffer using S_CROP, G_CROP, CROPCAP
 with buffer type OVERLAY. *

 TODOs:
 - add analog TVOUT driver
 - add S_OUTPUT
 - add S_STD ioctl
 - add control of alpha blending / chroma keying via V4L2 controls
 - add controls for luminance curve and sharpness in VP
 - consider exporting all output functionalities to separate video node
 - consider 

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Alex Deucher
On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
  Just two quick notes. I'll try to do a full review this weekend.
 
  On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
  ==
   Introduction
  ==
 
  The purpose of this RFC is to discuss the driver for a TV output interface
  available in upcoming Samsung SoC. The HW is able to generate digital and
  analog signals. Current version of the driver supports only digital 
  output.
 
  Internally the driver uses videobuf2 framework, and CMA memory allocator.
  Not
  all of them are merged by now, but I decided to post the sources to start
  discussion driver's design.

 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to 
  post
  an RFC by the end of this month. We also have a proposal for CEC support in
  the pipeline.

 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.

 Alex

 I'll toss one out: lack of API documentation for driver or application
 developers to use.


 When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
 possibly get rid of reliance on the ivtv X video driver
 http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
 was really sparse.

 DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
 the userland API wasn't fleshed out.  GEM was talked about a bit in
 there as well, IIRC.

 TTM documentation was essentially non-existant.

 I can't find any KMS documentation either.

 I recall having to read much of the drm code, and having to look at the
 radeon driver, just to tease out what the DRM ioctls needed to do.

 Am I missing a Documentation source for the APIs?


Documentation is somewhat sparse compared to some other APIs.  Mostly
inline kerneldoc comments in the core functions.  It would be nice to
improve things.   The modesetting API is very similar to the xrandr
API in the xserver.

At the moment a device specific surface manager (Xorg ddx, or some
other userspace lib) is required to use kms due to device specific
requirements with respect to memory management and alignment for
acceleration.  The kms modesetting ioctls are common across all kms
drm drivers, but the memory management ioctls are device specific.
GEM itself is an Intel-specific memory manager, although radeon uses
similar ioctls.  TTM is used internally by radeon, nouveau, and svga
for managing memory gpu accessible memory pools.  Drivers are free to
use whatever memory manager they want; an existing one shared with a
v4l or platform driver, TTM, or something new.  There is no generic
userspace kms driver/lib although Dave and others have done some work
to support that, but it's really hard to make a generic interface
flexible enough to handle all the strange acceleration requirements of
GPUs.  kms does however provide a legacy kernel fb interface.

While the documentation is not great, the modesetting API is solid and
it would be nice to get more people involved and working on it (or at
least looking at it) rather than starting something equivalent from
scratch or implementing a device specific modesetting API.  If you
have any questions about it, please ask on dri-devel (CCed).

Alex



 For V4L2 and DVB on ther other hand, one can point to pretty verbose
 documentation that application developers can use:

        http://linuxtv.org/downloads/v4l-dvb-apis/



 Regards,
 Andy





Re: [PATCH v10 8/8] davinci vpbe: Readme text for Dm6446 vpbe

2010-12-23 Thread Alex Deucher
On Thu, Dec 23, 2010 at 6:55 AM, Manjunath Hadli manjunath.ha...@ti.com wrote:
 Please refer to this file for detailed documentation of
 davinci vpbe v4l2 driver

 Signed-off-by: Manjunath Hadli manjunath.ha...@ti.com
 Acked-by: Muralidharan Karicheri m-kariche...@ti.com
 Acked-by: Hans Verkuil hverk...@xs4all.nl
 ---
  Documentation/video4linux/README.davinci-vpbe |   93 +
  1 files changed, 93 insertions(+), 0 deletions(-)
  create mode 100644 Documentation/video4linux/README.davinci-vpbe

 diff --git a/Documentation/video4linux/README.davinci-vpbe 
 b/Documentation/video4linux/README.davinci-vpbe
 new file mode 100644
 index 000..7a460b0
 --- /dev/null
 +++ b/Documentation/video4linux/README.davinci-vpbe
 @@ -0,0 +1,93 @@
 +
 +                VPBE V4L2 driver design
 + ==========================================
 +
 + File partitioning
 + -
 + V4L2 display device driver
 +         drivers/media/video/davinci/vpbe_display.c
 +         drivers/media/video/davinci/vpbe_display.h
 +
 + VPBE display controller
 +         drivers/media/video/davinci/vpbe.c
 +         drivers/media/video/davinci/vpbe.h
 +
 + VPBE venc sub device driver
 +         drivers/media/video/davinci/vpbe_venc.c
 +         drivers/media/video/davinci/vpbe_venc.h
 +         drivers/media/video/davinci/vpbe_venc_regs.h
 +
 + VPBE osd driver
 +         drivers/media/video/davinci/vpbe_osd.c
 +         drivers/media/video/davinci/vpbe_osd.h
 +         drivers/media/video/davinci/vpbe_osd_regs.h
 +
 + Functional partitioning
 + ---
 +
 + Consists of the following (in the same order as the list under file
 + partitioning):-
 +
 + 1. V4L2 display driver
 +    Implements creation of video2 and video3 device nodes and
 +    provides v4l2 device interface to manage VID0 and VID1 layers.
 +
 + 2. Display controller
 +    Loads up VENC, OSD and external encoders such as ths8200. It provides
 +    a set of API calls to V4L2 drivers to set the output/standards
 +    in the VENC or external sub devices. It also provides
 +    a device object to access the services from OSD subdevice
 +    using sub device ops. The connection of external encoders to VENC LCD
 +    controller port is done at init time based on default output and standard
 +    selection or at run time when application change the output through
 +    V4L2 IOCTLs.
 +
 +    When connected to an external encoder, vpbe controller is also 
 responsible
 +    for setting up the interface between VENC and external encoders based on
 +    board specific settings (specified in board-xxx-evm.c). This allows
 +    interfacing external encoders such as ths8200. The setup_if_config()
 +    is implemented for this as well as configure_venc() (part of the next 
 patch)
 +    API to set timings in VENC for a specific display resolution. As of this
 +    patch series, the interconnection and enabling and setting of the 
 external
 +    encoders is not present, and would be a part of the next patch series.
 +
 + 3. VENC subdevice module
 +    Responsible for setting outputs provided through internal DACs and also
 +    setting timings at LCD controller port when external encoders are 
 connected
 +    at the port or LCD panel timings required. When external encoder/LCD 
 panel
 +    is connected, the timings for a specific standard/preset is retrieved 
 from
 +    the board specific table and the values are used to set the timings in
 +    venc using non-standard timing mode.
 +
 +    Support LCD Panel displays using the VENC. For example to support a Logic
 +    PD display, it requires setting up the LCD controller port with a set of
 +    timings for the resolution supported and setting the dot clock. So we 
 could
 +    add the available outputs as a board specific entry (i.e add the 
 LogicPD
 +    output name to board-xxx-evm.c). A table of timings for various LCDs
 +    supported can be maintained in the board specific setup file to support
 +    various LCD displays. As of this patch a basic driver is present, and this
 +    support for external encoders and displays forms a part of the next
 +    patch series.
 +

For 2. and 3., any plans to eventually migrate to a KMS interface?  It
seems like a lot of device specific controls here.

Alex


 + 4. OSD module
 +    OSD module implements all OSD layer management and hardware specific
 +    features. The VPBE module interacts with the OSD for enabling and
 +    disabling appropriate features of the OSD.
 +
 + Current status:-
 +
 + A fully functional working version of the V4L2 driver is available. This
 + driver has been tested with NTSC and PAL standards and buffer streaming.
 +
 + Following are TBDs.
 +
 + vpbe display controller
 +    - Add support for external encoders.
 +    - add support for selecting external encoder as default at probe time.
 +
 + vpbe venc sub device
 +    - add timings for supporting ths8200
 +    - add support for LogicPD LCD.
 +
 

Re: problems with using the -rc kernel in the git tree

2010-12-19 Thread Alex Deucher
On Sun, Dec 19, 2010 at 6:56 PM, Theodore Kilgore
kilg...@banach.math.auburn.edu wrote:

 Hans,

 Thanks for the helpful advice about how to set up a git tree for current
 development so that I can get back into things.

 However, there is a problem with that -rc kernel, at least as far as my
 hardware is concerned. So if I am supposed to use it to work on camera
 stuff there is an obstacle.

 I started by copying my .config file over to the tree, and then running
 make oldconfig (as you said and as I would have done anyway).

 The problem seems to be centered right here (couple of lines
 from .config follow)

 CONFIG_DRM_RADEON=m
 # CONFIG_DRM_RADEON_KMS is not set

 I have a Radeon video card, obviously. Specifically, it is (extract from X
 config file follows)

 # Device configured by xorgconfig:

 Section Device
    Identifier  ATI Radeon HD 3200
    Driver      radeon

 Now, what happens is that with the kernel configuration (see above) I
 cannot start X in the -rc kernel. I get bumped out with an error
 message (details below) whereas that _was_ my previous configuration
 setting.

 But if in the config for the -rc kernel I change the second line by
 turning on CONFIG_DRM_RADEON_KMS the situation is even worse. Namely, the
 video cuts off during the boot process, with the monitor going blank and
 flashing up a message that it lost signal. After that the only thing to do
 is a hard reset, which strangely does not result in any check for a dirty
 file system, showing that things _really_ got screwed. These problems with
 the video cutting off at boot are with booting into the _terminal_, BTW. I
 do not and never have made a practice of booting into X. I start X from
 the command line after boot. Thus, the video cutting off during boot has
 nothing to do with X at all, AFAICT.

 So as I said there are two alternatives, both of them quite unpleasant.

 Here is what the crash message is on the screen from the attempt to start
 up X, followed by what seem to be the relevant lines from the log file,
 with slightly more detail.

 Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
 (==) Log file: /var/log/Xorg.0.log, Time: Sun Dec 19 14:32:12 2010
 (==) Using config file: /etc/X11/xorg.conf
 (==) Using system config directory /usr/share/X11/xorg.conf.d
 (II) [KMS] drm report modesetting isn't supported.
 (EE) RADEON(0): Unable to map MMIO aperture. Invalid argument (22)
 (EE) RADEON(0): Memory map the MMIO region failed
 (EE) Screen(s) found, but none have a usable configuration.

 Fatal server error:
 no screens found

 Please consult the The X.Org Foundation support
         at http://wiki.x.org
  for help.
 Please also check the log file at /var/log/Xorg.0.log for additional
 information.

 xinit: giving up
 xinit: unable to connect to X server: Connection refused
 xinit: server error
 xinit: unable to connect to X server: Connection refused
 xinit: server error
 kilg...@khayyam:~$

 And the following, too, from the log file, which perhaps contains one or two
 more details:

 [    48.050] (--) using VT number 7

 [    48.052] (II) [KMS] drm report modesetting isn't supported.
 [    48.052] (II) RADEON(0): TOTO SAYS feaf
 [    48.052] (II) RADEON(0): MMIO registers at 0xfeaf: size 64KB
 [    48.052] (EE) RADEON(0): Unable to map MMIO aperture. Invalid argument (22)
 [    48.052] (EE) RADEON(0): Memory map the MMIO region failed
 [    48.052] (II) UnloadModule: radeon
 [    48.052] (EE) Screen(s) found, but none have a usable configuration.
 [    48.052]
 Fatal server error:
 [    48.052] no screens found
 [    48.052]

 There are a couple of suggestions about things to try, such as compiling
 with CONFIG_DRM_RADEON_KMS and then passing the parameter modeset=0 to the
 radeon module. But that does not seem to help, either.

 The help screens in make menuconfig do not seem to praise the
 CONFIG_DRM_RADEON_KMS very highly, and seem to indicate that this is still
 a very experimental feature.

 There are no such equivalent problems with my current kernel, which is a
 home-compiled 2.6.35.7.

 I realize that this is a done decision, but it is exactly this kind of
 thing that I had in mind when we had the Great Debate on the linux-media
 list about whether to use hg or git. My position was to let hardware
 support people to run hg with the compatibility layer for recent kernels
 (and 2.6.35.7 is certainly recent!). Well, the people who had such a
 position did not win. So now here is unfortunately the foreseeable result.
 An experimental kernel with some totally unrelated bug which affects my
 hardware and meanwhile stops all progress.

If you enable radeon KMS, you need to enable fbcon in your kernel or
you will lose video when the radeon kms driver loads since it controls
the video device and provides a legacy kernel fbdev interface.  As 
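
In .config terms, the advice above amounts to something like the following
sketch (option names as they appeared in kernels of that era;
CONFIG_FRAMEBUFFER_CONSOLE is fbcon):

```
CONFIG_DRM_RADEON=m
CONFIG_DRM_RADEON_KMS=y
# KMS takes the console away from vgacon, so the kernel must provide
# a framebuffer console on top of the KMS fbdev emulation:
CONFIG_FRAMEBUFFER_CONSOLE=y
```

Without the last option the KMS driver grabs the display at load time but
nothing draws the text console, which matches the blank-screen-at-boot
symptom described above.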

Re: rtl2832u support

2010-12-06 Thread Alex Deucher
On Mon, Dec 6, 2010 at 4:29 PM, Jan Hoogenraad
jan-conceptro...@hoogenraad.net wrote:
 I haven't seen any data sheets. On the other hand, Antti was able to create
 a separate (tuner vs. demod) driver, except for IR.

 http://linuxtv.org/hg/~anttip/rtl2831u/

 http://linuxtv.org/hg/~anttip/qt1010/

 I haven't seen a data sheet, but I doubt it would be of more use than
 using this example code.
 My mail contacts (latest in may 2008) focused on getting the code to work
 and on signing off the code.

 I'll give my mail contact a try, especially if you have questions that
 cannot be found in the current code base.

FWIW, in my experience, working code is preferable to datasheets if
you have to pick.  Often times datasheets are produced pre-silicon and
aren't always updated properly with the final changes.

Alex


 Maxim Levitsky wrote:

 On Mon, 2010-12-06 at 16:45 +0100, Jan Hoogenraad wrote:

 Could the tree from

 http://linuxtv.org/hg/~jhoogenraad/rtl2831-r2

 which is really just an older version of the same code, go into staging
 than as well ?

 Yes, but the problem is that due to the shady license status of the
 'windows' driver, I am afraid to look seriously at it.
 Up till now, I only experimented with IR code.

 Jan, since you have contacts with Realtek, maybe it would be possible to
 get datasheet for their hardware?

 And the above code is guaranteed not to work on my card because even
 their 'windows' driver v1.4 doesn't work here.
 Only 2.0 driver works.

 And you said that you couldn't separate demod from bridge?
 Is that necessary?
 I have seen a few drivers that don't separate it in v4l source.

 Tuners are of course another story.

 Best regards,
        Maxim Levitsky



 For that one, I have the signoff by RealTek already.

 Mauro Carvalho Chehab wrote:

 Em 20-11-2010 20:37, Maxim Levitsky escreveu:

 Do we have a common agreement that this driver can go to staging
 as-is?

 If yes, I have patch ready, just need to know where to send it (It is
 around 1 MB).

 Yes, if people are interested in later fixing the issues. As Antti said
 he already broke the driver into more consistent parts, maybe his tree
 may be a start.


 I would like to volunteer to clean up the driver for eventual merge.
 At least I can start right away with low-hanging fruit.

 Ok, Seems fine for me.

 I took the driver from


 http://www.turnovfree.net/~stybla/linux/v4l-dvb/lv5tdlx/20101102_RTL2832_2836_2840_LINUX+RC-Dongle.rar


 And it looks very recent, so that means that Realtek actually continues
 to develop it.

 It would be better to try to sync with Realtek, to be sure that they'll
 continue to develop the upstream driver after having it merged. Otherwise,
 someone will need to do the manual sync, and this can be very painful.

 Greg KH, maybe you know how to contact Realtek to figure out the best
 strategy in handling this code.

 Meanwhile, lets put that into staging.
 (The above driver doesn't compile due to changes in RC code, but it can
 be removed (that is what I did for now) or ported to the new rc-core,
 which is what I will do very soon).

 Just send the patches. The best approach is to submit them via linux-media,
 and say, in a TODO file, that patches for it should be submitted to
 linux-me...@vger.kernel.org.
 Something similar to drivers/staging/tm6000/TODO.

 Cheers,
 Mauro
 --
 To unsubscribe from this list: send the line unsubscribe linux-media
 in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html

 --
 Jan Hoogenraad
 Hoogenraad Interface Services
 Postbus 2717
 3500 GS Utrecht



Re: [PATCH 09/10] MCDE: Add build files and bus

2010-12-04 Thread Alex Deucher
 ignorance of display device drivers.

 All APIs have to be provided, these are user space API requirements.
 KMS has a generic FB implementation. But most of KMS is modeled on
 desktop/PC graphics cards. And while we might squeeze MCDE in to look
 like a PC card, it might also just make things more complex and
 keep us from doing things that are not possible in PC architecture.

 Ok, so you have identified a flaw with the existing KMS code. You should
 most certainly not try to make your driver fit into the flawed model by
 making it look like a PC. Instead, you are encouraged to fix the problems
 with KMS to make sure it can also meet your requirements. The reason
 why it doesn't do that today is that all the existing users are PC
 hardware and we don't build infrastructure that we expect to be used
 in the future but don't need yet. It would be incorrect anyway.

 Can you describe the shortcomings of the KMS code? I've added the dri-devel
 list to Cc, to get the attention of the right people.

This doesn't seem that different from the graphics chips we support
with kms.  I don't think it would require much work to use KMS.  One
thing we considered, but never ended up implementing was a generic
overlay API for KMS.  Most PC hardware still has overlays, but we
don't use them much any more on the desktop side.  It may be
worthwhile to design an appropriate API for them for these type of
hardware.

To elaborate on the current KMS design, we have:
crtcs - the display controller.  these map to the timing generators
and scanout hardware
encoders - the hw that takes the bitstream from the display controller
and converts it to the appropriate format for the connected display.
connector - the physical interface that a display attaches to (VGA,
LVDS, eDP, HDMI-A, etc.)

Modern PC hardware is pretty complex.  I've blogged about some of the
recent radeon display hardware:
http://www.botchco.com/agd5f/?p=51
Moreover, each oem designs different boards with vastly different
display configurations.  It gets more complex with things like
advanced color management and DP (DisplayPort) 1.2 that introduces
things like daisy-chaining monitors and tunnelling USB and audio over
DP.



 Alex Deucher noted in a previous post that we also have the option of
 implementing the KMS ioctls. We are looking at both options. And having
 our own framebuffer driver might make sense since it is a very basic
 driver, and it will allow us to easily extend support for things like
 partial updates for display panels with on-board memory. These panels
 with memory (like DSI command mode displays) are one of the reasons why
 KMS is not the perfect match, since we want to expose features available
 for these types of displays.

 Ok.

From what I understood so far, you have a single multi-channel display
controller (mcde_hw.c) that drives the hardware.
Each controller can have multiple frame buffers attached to it, which
in turn can have multiple displays attached to each of them, but your
current configuration only has one of each, right?
  
   Correct, channels A/B (crtcs) can have two blended framebuffers plus
   background color, channels C0/C1 can have one framebuffer.
 
  We might still be talking about different things here, not sure.

 In short,
 KMS connector = MCDE port
 KMS encoder = MCDE channel
 KMS crtc = MCDE overlay

 Any chance you could change the identifiers in the code for this
 without confusing other people?

  Looking at the representation in sysfs, you should probably aim
  for something like
 
  /sys/devices/axi/axi0/mcde_controller
                                /chnlA
                                        /dspl_crtc0
                                                /fb0
                                                /fb1
                                                /v4l_0
                                        /dspl_dbi0
                                                /fb2
                                                /v4l_1
                                /chnlB
                                        /dspl_ctrc1
                                                /fb3
                                /chnlC
                                        /dspl_lcd0
                                                /fb4
                                                /v4l_2
 
  Not sure if that is close to what your hardware would really
  look like. My point is that all the objects that you are
  dealing with as a device driver should be represented hierarchically
  according to how you probe them.

 Yes, mcde_bus should be connected to mcde, this is a bug. The display
 drivers will placed in this bus, since this is where they are probed
 like platform devices, by name (unless driver can do MIPI standard
 probing or something). Framebuffers/V4L2 overlay devices can't be
 put in same hierarchy, since they have multiple parents in case
 the same framebuffer is cloned to multiple displays for example.
 But I think I

Re: [PATCH 00/10] MCDE: Add frame buffer device driver

2010-11-12 Thread Alex Deucher
On Fri, Nov 12, 2010 at 8:18 AM, Jimmy RUBIN jimmy.ru...@stericsson.com wrote:
 Hi Alex,

 Good point, we are looking at this for possible future improvements but for 
 the moment we feel like
 the structure of drm does not add any simplifications for our driver. We have 
 the display manager (MCDE DSS = KMS) and the memory manager (HWMEM = GEM) 
 that could be migrated to drm framework. But we do not have drm drivers for 
 3D hw and this also makes drm a less obvious choice at the moment.


You don't have to use the drm strictly for 3D hardware.  Historically
that's why it was written, but with kms, it also provides an interface
for complex display systems.  fbdev doesn't really deal properly with
multiple display controllers or connectors that are dynamically
re-routeable at runtime.  I've seen a lot of gross hacks to fbdev to
support this kind of stuff in the past, so it'd be nice to use the
interface we now have for it if you need that functionality.
Additionally, you can use the shared memory manager to both the
display side and v4l side.  While the current drm drivers use GEM
externally, there's no requirement that a kms driver has to use GEM.
radeon and nouveau use ttm internally for example.  Something to
consider.  I just want to make sure people are aware of the interface
and what it's capable of.

Alex

 Jimmy

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: den 10 november 2010 15:43
 To: Jimmy RUBIN
 Cc: linux-fb...@vger.kernel.org; linux-arm-ker...@lists.infradead.org; 
 linux-media@vger.kernel.org; Linus WALLEIJ; Dan JOHANSSON
 Subject: Re: [PATCH 00/10] MCDE: Add frame buffer device driver

 On Wed, Nov 10, 2010 at 7:04 AM, Jimmy Rubin jimmy.ru...@stericsson.com 
 wrote:
 These set of patches contains a display sub system framework (DSS) which is 
 used to
 implement the frame buffer device interface and a display device
 framework that is used to add support for different type of displays
 such as LCD, HDMI and so on.

 For complex display hardware, you may want to consider using the drm
 kms infrastructure rather than the kernel fb interface.  It provides
 an API for complex display hardware (multiple encoders, display
 controllers, etc.) and also provides a legacy kernel fb interface for
 compatibility.  See:
 Documentation/DocBook/drm.tmpl
 drivers/gpu/drm/
 in the kernel tree.

 Alex


 The current implementation supports DSI command mode displays.

 Below is a short summary of the files in this patchset:

 mcde_fb.c
 Implements the frame buffer device driver.

 mcde_dss.c
 Contains the implementation of the display sub system framework (DSS).
 This API is used by the frame buffer device driver.

 mcde_display.c
 Contains default implementations of the functions in the display driver
 API. A display driver may override the necessary functions to function
 properly. A simple display driver is implemented in display-generic_dsi.c.

 display-generic_dsi.c
 Sample driver for a DSI command mode display.

 mcde_bus.c
 Implementation of the display bus. A display device is probed when both
 the display driver and display configuration have been registered with
 the display bus.

 mcde_hw.c
 Hardware abstraction layer of MCDE. All code that communicates directly
 with the hardware resides in this file.

 board-mop500-mcde.c
 The configuration of the display and the frame buffer device is handled
 in this file

 NOTE: These set of patches replaces the patches already sent out for review.

 RFC:[PATCH 1/2] Video: Add support for MCDE frame buffer driver
 RFC:[PATCH 2/2] Ux500: Add support for MCDE frame buffer driver

 The old patchset was too large to be handled by the mailing lists.

 Jimmy Rubin (10):
  MCDE: Add hardware abstraction layer
  MCDE: Add configuration registers
  MCDE: Add pixel processing registers
  MCDE: Add formatter registers
  MCDE: Add dsi link registers
  MCDE: Add generic display
  MCDE: Add display subsystem framework
  MCDE: Add frame buffer device driver
  MCDE: Add build files and bus
  ux500: MCDE: Add platform specific data

  arch/arm/mach-ux500/Kconfig                    |    8 +
  arch/arm/mach-ux500/Makefile                   |    1 +
  arch/arm/mach-ux500/board-mop500-mcde.c        |  209 ++
  arch/arm/mach-ux500/board-mop500-regulators.c  |   28 +
  arch/arm/mach-ux500/board-mop500.c             |    3 +
  arch/arm/mach-ux500/devices-db8500.c           |   68 +
  arch/arm/mach-ux500/include/mach/db8500-regs.h |    7 +
  arch/arm/mach-ux500/include/mach/devices.h     |    1 +
  arch/arm/mach-ux500/include/mach/prcmu-regs.h  |    1 +
  arch/arm/mach-ux500/include/mach/prcmu.h       |    3 +
  arch/arm/mach-ux500/prcmu.c                    |  129 ++
  drivers/video/Kconfig                          |    2 +
  drivers/video/Makefile                         |    1 +
  drivers/video/mcde/Kconfig                     |   39 +
  drivers/video/mcde/Makefile                    |   12 +
  drivers/video/mcde/display-generic_dsi.c

Re: [PATCH 00/10] MCDE: Add frame buffer device driver

2010-11-12 Thread Alex Deucher
On Fri, Nov 12, 2010 at 11:46 AM, Marcus LORENTZON
marcus.xm.lorent...@stericsson.com wrote:
 Hi Alex,
 Do you have any idea of how we should use KMS without being a real drm 3D 
 device? Do you mean that we should use the KMS ioctls on for display driver? 
 Or do you mean that we should expose a /dev/drmX device only capable of KMS 
 and no GEM?


In this case I was only speaking of using the kms ioctls and fbdev
emulation for modesetting as your device seems to have a fairly
complex display engine.  As for 2D/3D/video accel, that's up to you.
Each drm driver does it differently depending on how they handle
command buffers.  Intel and AMD have different sets of ioctls for
submitting 2D/3D/video commands from userspace acceleration drivers
and a different set of ioctls for memory management.

 What if we were to add a drm driver for 3D later on. Is it possible to have a 
 separate drm device for display and one for 3D, but still share GEM like 
 buffers between these devices? It looks like GEM handles are device-relative. 
 This is a vital use case for us. And we really don't like to entangle our 
 MCDE display driver, memory manager and 3D driver without a good reason. 
 Today they are maintained as independent drivers without code dependencies. 
 Would this still be possible using drm? Or does drm require memory manager, 
 3D and display to be one driver? I can see the drm=graphics card on desktop 
 machines. But embedded UMA systems doesn't really have this dependency. You 
 can switch memory manager, 3D driver, display manager in menuconfig 
 independently of the other drivers. Not that it's used like that on one 
 particular HW, but for different HW you can use different parts. In drm it 
 looks like all these pieces belong together.


No one has done anything like that, but I don't think it would be an
issue, you'd just need some sort of way to get buffers in your display
driver or your 3D driver, so I'd assume they would depend on your
memory manager.  Right now the userspace 2D/3D accel drivers all talk
to the drm independently depending on what they need to do.  Whatever
your userspace stack looks like could do something similar, call into
one set of ioctls for memory, another set for modesetting, and another
for accel.  As long as the kernel memory manager is common, you should
be able to pass buffer handles between all of them.  If you wanted
separate memory managers for each, things get a bit trickier, but
that's up to you.

 Do you think the driver should live in the gpu/drm folder, even though it's 
 not a gpu driver?


gpu is kind of a broad term.  It encompasses 3D, display, video, etc.
I don't think it really matters.

 Do you know of any other driver that use DRM/KMS API but not being a PC-style 
 graphics card that we could look at for inspiration?

Jordan Crouse submitted some patches for Qualcomm snapdragon a while
back although it was mostly a shim for a userspace accel driver.  He
did implement platform support in the drm however:
http://git.kernel.org/?p=linux/kernel/git/airlied/drm-2.6.git;a=commit;h=dcdb167402cbdca1d021bdfa5f63995ee0a79317


 And GEM, is that the only way of exposing graphics buffers to user space in 
 drm? Or is it possible (is it ok) to expose another similar API? You 
 mentioned that there are TTM and GEM, do both expose user space APIs for 
 things like sharing buffers between processes, security, cache management, 
 defragmentation? Or are these type of features defined by DRM and not TTM/GEM?


GEM is not a requirement, it just happens that all the current drm
drivers use variants of it for their external memory management
interface.  However, they are free to implement the memory manager
however they like.

Alex

 /BR
 /Marcus

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: den 12 november 2010 16:53
 To: Jimmy RUBIN
 Cc: linux-fb...@vger.kernel.org; linux-arm-ker...@lists.infradead.org;
 linux-media@vger.kernel.org; Linus WALLEIJ; Dan JOHANSSON; Marcus
 LORENTZON
 Subject: Re: [PATCH 00/10] MCDE: Add frame buffer device driver

 On Fri, Nov 12, 2010 at 8:18 AM, Jimmy RUBIN
 jimmy.ru...@stericsson.com wrote:
  Hi Alex,
 
  Good point, we are looking at this for possible future improvements
 but for the moment we feel like
  the structure of drm does not add any simplifications for our driver.
 We have the display manager (MCDE DSS = KMS) and the memory manager
 (HWMEM = GEM) that could be migrated to drm framework. But we do not
 have drm drivers for 3D hw and this also makes drm a less obvious
 choice at the moment.
 

 You don't have to use the drm strictly for 3D hardware.  Historically
 that's why it was written, but with kms, it also provides an interface
 for complex display systems.  fbdev doesn't really deal properly with
 multiple display controllers or connectors that are dynamically
 re-routeable at runtime.  I've seen a lot of gross hacks to fbdev to
 support this kind of stuff in the past, so it'd

Re: [PATCH 00/10] MCDE: Add frame buffer device driver

2010-11-10 Thread Alex Deucher
On Wed, Nov 10, 2010 at 7:04 AM, Jimmy Rubin jimmy.ru...@stericsson.com wrote:
 These set of patches contains a display sub system framework (DSS) which is 
 used to
 implement the frame buffer device interface and a display device
 framework that is used to add support for different type of displays
 such as LCD, HDMI and so on.

For complex display hardware, you may want to consider using the drm
kms infrastructure rather than the kernel fb interface.  It provides
an API for complex display hardware (multiple encoders, display
controllers, etc.) and also provides a legacy kernel fb interface for
compatibility.  See:
Documentation/DocBook/drm.tmpl
drivers/gpu/drm/
in the kernel tree.

Alex


 The current implementation supports DSI command mode displays.

 Below is a short summary of the files in this patchset:

 mcde_fb.c
 Implements the frame buffer device driver.

 mcde_dss.c
 Contains the implementation of the display sub system framework (DSS).
 This API is used by the frame buffer device driver.

 mcde_display.c
 Contains default implementations of the functions in the display driver
 API. A display driver may override the necessary functions to function
 properly. A simple display driver is implemented in display-generic_dsi.c.

 display-generic_dsi.c
 Sample driver for a DSI command mode display.

 mcde_bus.c
 Implementation of the display bus. A display device is probed when both
 the display driver and display configuration have been registered with
 the display bus.

 mcde_hw.c
 Hardware abstraction layer of MCDE. All code that communicates directly
 with the hardware resides in this file.

 board-mop500-mcde.c
 The configuration of the display and the frame buffer device is handled
 in this file

 NOTE: These set of patches replaces the patches already sent out for review.

 RFC:[PATCH 1/2] Video: Add support for MCDE frame buffer driver
 RFC:[PATCH 2/2] Ux500: Add support for MCDE frame buffer driver

 The old patchset was too large to be handled by the mailing lists.

 Jimmy Rubin (10):
  MCDE: Add hardware abstraction layer
  MCDE: Add configuration registers
  MCDE: Add pixel processing registers
  MCDE: Add formatter registers
  MCDE: Add dsi link registers
  MCDE: Add generic display
  MCDE: Add display subsystem framework
  MCDE: Add frame buffer device driver
  MCDE: Add build files and bus
  ux500: MCDE: Add platform specific data

  arch/arm/mach-ux500/Kconfig                    |    8 +
  arch/arm/mach-ux500/Makefile                   |    1 +
  arch/arm/mach-ux500/board-mop500-mcde.c        |  209 ++
  arch/arm/mach-ux500/board-mop500-regulators.c  |   28 +
  arch/arm/mach-ux500/board-mop500.c             |    3 +
  arch/arm/mach-ux500/devices-db8500.c           |   68 +
  arch/arm/mach-ux500/include/mach/db8500-regs.h |    7 +
  arch/arm/mach-ux500/include/mach/devices.h     |    1 +
  arch/arm/mach-ux500/include/mach/prcmu-regs.h  |    1 +
  arch/arm/mach-ux500/include/mach/prcmu.h       |    3 +
  arch/arm/mach-ux500/prcmu.c                    |  129 ++
  drivers/video/Kconfig                          |    2 +
  drivers/video/Makefile                         |    1 +
  drivers/video/mcde/Kconfig                     |   39 +
  drivers/video/mcde/Makefile                    |   12 +
  drivers/video/mcde/display-generic_dsi.c       |  152 ++
  drivers/video/mcde/dsi_link_config.h           | 1486 ++
  drivers/video/mcde/mcde_bus.c                  |  259 +++
  drivers/video/mcde/mcde_config.h               | 2156 
  drivers/video/mcde/mcde_display.c              |  427 
  drivers/video/mcde/mcde_dss.c                  |  353 
  drivers/video/mcde/mcde_fb.c                   |  697 +++
  drivers/video/mcde/mcde_formatter.h            |  782 
  drivers/video/mcde/mcde_hw.c                   | 2528 
 
  drivers/video/mcde/mcde_mod.c                  |   67 +
  drivers/video/mcde/mcde_pixelprocess.h         | 1137 +++
  include/video/mcde/mcde.h                      |  387 
  include/video/mcde/mcde_display-generic_dsi.h  |   34 +
  include/video/mcde/mcde_display.h              |  139 ++
  include/video/mcde/mcde_dss.h                  |   78 +
  include/video/mcde/mcde_fb.h                   |   54 +
  31 files changed, 11248 insertions(+), 0 deletions(-)
  create mode 100644 arch/arm/mach-ux500/board-mop500-mcde.c
  create mode 100644 drivers/video/mcde/Kconfig
  create mode 100644 drivers/video/mcde/Makefile
  create mode 100644 drivers/video/mcde/display-generic_dsi.c
  create mode 100644 drivers/video/mcde/dsi_link_config.h
  create mode 100644 drivers/video/mcde/mcde_bus.c
  create mode 100644 drivers/video/mcde/mcde_config.h
  create mode 100644 drivers/video/mcde/mcde_display.c
  create mode 100644 drivers/video/mcde/mcde_dss.c
  create mode 100644 drivers/video/mcde/mcde_fb.c
  create mode 100644 drivers/video/mcde/mcde_formatter.h
  create mode 100644 drivers/video/mcde/mcde_hw.c
  create 

Re: rtl2832u support

2010-10-19 Thread Alex Deucher
On Tue, Oct 19, 2010 at 4:27 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 On Tuesday, October 19, 2010 21:26:13 Devin Heitmueller wrote:
 On Tue, Oct 19, 2010 at 1:42 PM, Damjan Marion damjan.mar...@gmail.com 
 wrote:
 
  Hi,
 
  Is there any special reason why driver for rtl2832u DVB-T receiver chipset 
  is not included into v4l-dvb?
 
  Realtek published source code under GPL:
 
  MODULE_AUTHOR("Realtek");
  MODULE_DESCRIPTION("Driver for the RTL2832U DVB-T / RTL2836 DTMB USB2.0 device");
  MODULE_VERSION("1.4.2");
  MODULE_LICENSE("GPL");

 Unfortunately, in most cases much more is required than having a
 working driver under the GPL in order for it to be accepted upstream.
 In some cases it can mean a developer spending a few hours cleaning up
 whitespace and indentation, and in other cases it means significant
 work to the driver is required.

 The position the LinuxTV team has taken is that they would rather have
 no upstream driver at all than to have a driver which doesn't have the
 right indentation or other aesthetic problems which have no bearing on
 how well the driver actually works.

 This is one of the big reasons KernelLabs has tens of thousands of
 lines of code adding support for a variety of devices with many happy
 users (who are willing to go through the trouble to compile from
 source), but the code cannot be accepted upstream.  I just cannot find
 the time to do the idiot work.

 Bullshit. First of all these rules are those of the kernel community
 as a whole and *not* linuxtv as such, and secondly you can upstream such
 drivers in the staging tree. If you want to move it out of staging, then
 it will take indeed more work since the quality requirements are higher
 there.

 Them's the rules for kernel development.

 I've done my share of coding style cleanups. I never understand why people
 dislike doing that. In my experience it always greatly improves the code
 (i.e. I can actually understand it) and it tends to highlight the remaining
 problematic areas in the driver.

 Of course, I can also rant for several paragraphs about companies throwing
 code over the wall without bothering to actually do the remaining work to
 get it mainlined. The very least they can do is to sponsor someone to do the
 work for them.

To start, I appreciate the kernel coding style requirements.  I think
it makes the code much easier to read and more consistent across the
kernel tree.  But, just to play devil's advocate, it's a fair amount
of work to write a driver especially if the hw is complex.  It's much
easier to share a common codebase between different OSs because it
reduces the maintenance burden and makes it easier to support new asic
variants.  This is especially true if you are a small company with
limited resources.  It annoys me somewhat when IHVs put in the effort
to actually produce a GPLed Linux driver and the community shits on
them for not writing it from scratch to match the kernel style
requirements.  Lets face it, there are a lot of hw specs out there
with no driver.  A working driver with source is vastly more useful.
It would be nice if every company out there had the resources to
develop a nice clean native Linux driver, but right now that's not
always the case.

Alex


Re: [PATCH] Illuminators and status LED controls

2010-09-08 Thread Alex Deucher
On Wed, Sep 8, 2010 at 2:58 PM, Peter Korsgaard jac...@sunsite.dk wrote:
 Andy == Andy Walls awa...@md.metrocast.net writes:

 Hi,

  Andy Incandescent and Halogen lamps that affect an image coming into a
  Andy camera are *not* LEDs that blink or flash automatically based on
  Andy driver or system trigger events.  They are components of a video
  Andy capture system with which a human attempts to adjust the
  Andy appearance of an image of a subject by changing the subject's
  Andy environment.  These illuminators are not some generically
  Andy connected device, but controlled by GPIO's on the camera's bridge
  Andy or sensor chip itself.  Such an illuminator will essentially be
  Andy used only in conjunction with the camera.

 Agreed.

  Andy Status LEDs integrated into webcam devices that are not generically
  Andy connected devices but controlled with GPIOs on the camera's bridge or
  Andy sensor chip will also essentially be used only in conjunction with the
  Andy camera.

 Or for any other usage the user envision - E.G. I could imagine using
 the status led of the webcam in my macbook for hard disk or wifi
 activity. I'm sure other people can come up with more creative use cases
 as well.

  Andy Turning these sorts of camera-specific illuminators and LEDs on and off
  Andy should be as simple to implement for an application developer as it is
  Andy to grasp the concept of turning a light bulb on and off.

 The point is that the logic rarely needs to be in the v4l
 applications. The status LEDs should by default go on when the v4l
 device is active and go off again when not, like it used to do. A v4l
 application developer would normally not want to worry about such
 things, and only care about the video data.

 But if a user wants something special / non-standard, then it's just a
 matter of changing LED trigger in /sys/class/leds/..
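For reference, the sysfs trigger file Peter mentions lists every available trigger and marks the active one in brackets (e.g. "none kbd-scrolllock [heartbeat]").  A minimal userspace sketch of finding the active trigger — purely illustrative, the trigger names are examples, not a kernel interface definition:

```c
/* Userspace sketch (not kernel code): parse the contents of
 * /sys/class/leds/<name>/trigger, where the active trigger is the
 * bracketed entry, e.g. "none kbd-scrolllock [heartbeat]". */
#include <stddef.h>
#include <string.h>

/* Copy the active (bracketed) trigger name into out; return 0 on
 * success, -1 if none is marked active or out is too small. */
static int active_trigger(const char *listing, char *out, size_t outlen)
{
    const char *open = strchr(listing, '[');
    const char *close = open ? strchr(open, ']') : NULL;
    size_t len;

    if (!open || !close)
        return -1;
    len = (size_t)(close - open) - 1;
    if (len + 1 > outlen)
        return -1;
    memcpy(out, open + 1, len);
    out[len] = '\0';
    return 0;
}
```

Changing the behaviour is then just writing another listed name back to the same file, which is why a v4l application never has to know the LED exists.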

I agree with Peter here.  I don't see why a video app would care about
blinking an LED while capturing.  I suspect most apps won't bother to
implement it, or it will be a card specific mess (depending on what
the hw actually provides).  Shouldn't the driver just turn it on or
blink it when capturing is active?  Why should apps care?
Plus, each app may implement some different behavior or some may not
implement it at all which will further confuse users.

Alex


  Andy The LED interface seems more appropriate to use when the LEDs are
  Andy connected more generically and will likely be used more generically,
  Andy such as in an embedded system.

 The LED subsystem certainly has it uses in embedded, but it's also used
 on PCs - As an example the ath9k wireless driver exports a number of
 LEDs. I find the situation with the wlan LEDs pretty comparable to
 status LEDs on v4l devices.

   And yes, application developers must use the correct API to control
   stuff.

   Why should kernel duplicate interfaces just because
   user land don't want to use two different interfaces? Doesn't this sound 
 a bit ... strange at least?

  Andy Why should the kernel push multiple APIs on application developers to
  Andy control a complex federation of small devices all connected behind a
  Andy single bridge chip, which the user perceives as a single device?  (BTW a
  Andy USB microscope is such a federation which doesn't work at all without
  Andy proper subject illumination.)

 Because otherwise you end up with a bunch of incompatible
 custom interfaces - E.G. a microphone built into a webcam is handled
 through ALSA, not v4l, so it works with any sound recording program.

  Andy V4L2 controls are how desktop V4L2 applications currently control
  Andy aspects of a incoming image.  Forcing the use of the LED interface in
  Andy sysfs to control one aspect of that would be a departure from the norm
  Andy for the existing V4L2 desktop applications.

  Andy Forcing the use of the LED interface also brings along the complication
  Andy of proper association of the illuminator or LED sysfs control node to
  Andy the proper video capture/control device node.  I have a laptop with a
  Andy built in webcam with a status LED and a USB connected microscope with
  Andy two illuminators.  What are the steps for an application to discover 
 the
  Andy correct light for the video device and what settings that light is
  Andy capable of: using V4L2 controls? using the LED interface?

 Again, for status LEDs I don't see any reason why a standard v4l tool
 would care. As I mentioned above, illuminators are a different story
 (comparable to a gain setting imho).

  Andy How does one go about associating LEDs and Illuminators to video device
  Andy nodes using the LED sysfs interface?  I'm betting it's not as simple 
 for
  Andy applications that use V4L2 controls.

 I would imagine each video device would have a (number of) triggers,
 similar to how it's done for E.G. the wlan stuff - Something like
 video0-active. The status LED of the video0 device would default to
 that 

Re: Unknown CX23885 device

2010-07-27 Thread Alex Deucher
On Tue, Jul 27, 2010 at 3:21 PM, Christian Iversen
chriv...@iversen-net.dk wrote:
 (please CC, I'm not subscribed yet)

 Hey Linux-DVB people

 I'm trying to make an as-of-yet unsupported CX23885 device work in Linux.

 I've tested that the device is not supported using the newest snapshot
 of the DVB drivers. They did support a bunch of extra devices compared
 to the standard ubuntu driver, but to no avail.

 This is what I know about the device:

 ### physical description ###

 The device is a small mini-PCIe device currently installed in my
 Thinkpad T61p notebook. It did not originate there, but I managed to fit it
 in.

How are you attaching the video/audio/antenna/etc. input to the pcie
card?  I don't imagine the card is much use without external
connectors.

Alex


 It has an Avermedia logo on top, but no other discernable markings.
 I've tried removing the chip cover, but I can't see any other major chips
 than the cx23885. I can take a second look, if I know what to look for.

 ### pci info ###

 $ sudo lspci -s 02:00.0 -vv
 02:00.0 Multimedia video controller: Conexant Systems, Inc. CX23885 PCI
 Video and Audio Decoder (rev 02)
        Subsystem: Avermedia Technologies Inc Device c139
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
 Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort-
 TAbort- MAbort- SERR- PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at d7a0 (64-bit, non-prefetchable) [size=2M]
        Capabilities: [40] Express (v1) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s 64ns,
 L1 1us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset-
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal-
 Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr-
 TransPend-
                LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1,
 Latency L0 2us, L1 4us
                        ClockPM- Suprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain-
 CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+
 DLActive- BWMgmt- ABWMgmt-
        Capabilities: [80] Power Management version 2
                Flags: PMEClk- DSI+ D1+ D2+ AuxCurrent=0mA
 PME(D0+,D1+,D2+,D3hot+,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [90] Vital Product Data ?
        Capabilities: [a0] Message Signalled Interrupts: Mask- 64bit+
 Queue=0/0 Enable-
                Address:   Data: 
        Capabilities: [100] Advanced Error Reporting ?
        Capabilities: [200] Virtual Channel ?
        Kernel driver in use: cx23885
        Kernel modules: cx23885


 I've tried several different card=X settings for modprobe cx23885, and a
 few of them result in creation of /dev/dvb devices, but none of them really
 seem to work.

 What can I try for a next step?

 --
 Med venlig hilsen
 Christian Iversen

 ___
 linux-dvb users mailing list
 For V4L/DVB development, please use instead linux-media@vger.kernel.org
 linux-...@linuxtv.org
 http://www.linuxtv.org/cgi-bin/mailman/listinfo/linux-dvb



Re: Idea of a v4l - fb interface driver

2010-05-28 Thread Alex Deucher
On Fri, May 28, 2010 at 4:21 AM, Guennadi Liakhovetski
g.liakhovet...@gmx.de wrote:
 On Thu, 27 May 2010, Alex Deucher wrote:

 On Thu, May 27, 2010 at 2:56 AM, Guennadi Liakhovetski 
 g.liakhovet...@gmx.de wrote:

 ...

  Ok, let me explain what exactly I meant. Above I referred to display
  drivers, which is not the same as a framebuffer controller driver or
  whatever you would call it. By framebuffer controller driver I mean a
  driver for the actual graphics engine on a certain graphics card or an
  SoC. This is the part, that reads data from the actual framebuffer and
  outputs it to some hardware interface to a display device. Now this
  interface can be a VGA or a DVI connector, it can be one of several bus
  types, used with various LCD displays. In many cases this is all you have
  to do to get the output to your display. But in some cases the actual
  display on the other side of this bus also requires a driver. That can be
  some kind of a smart display, it can be a panel with an attached display
  controller, that must be at least configured, say, over SPI, it can be a
  display, attached to the host over the MIPI DSI bus, and implementing some
  proprietary commands. In each of these cases you will have to write a
  display driver for this specific display or controller type, and your
  framebuffer driver will have to interface with that display driver. Now,
  obviously, those displays can be connected to a variety of host systems,
  in which case you will want to reuse that display driver. This means,
  there has to be a standard fb-driver - display-driver API. AFAICS, this is
  currently not implemented in fbdev, please, correct me if I am wrong.


Another API to consider is the drm kms (kernel modesetting) interface.
  The kms API deals properly with advanced display hardware and
 properly handles crtcs, encoders, and connectors.  It also provides
 fbdev api emulation.

 Well, is KMS planned as a replacement for both fbdev and user-space
 graphics drivers? I mean, if you'd be writing a new fb driver for a
 relatively simple embedded SoC, would KMS apriori be a preferred API?

It's become the de facto standard for X, and things like EGL are being
built on top of the API.  As for kms vs fbdev, kms provides a nice
API for complex display setups with multiple display controllers and
connectors, while fbdev assumes one monitor/connector/encoder per
device.  The fbdev and console stuff has yet to take advantage of this
flexibility; I'm not sure what will happen there.  fbdev emulation is
provided by kms, but it has to hide the complexity of the attached
displays.  For an SoC with a single encoder and display, there's
probably not much advantage over fbdev, but if you have an SoC that
can do TMDS and LVDS and possibly analog TV out, it gets more
interesting.
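A toy model of the split described above — purely illustrative, not the real DRM structures or API — shows why re-routing outputs is natural in kms and awkward in fbdev's one-pipe-per-device view:

```c
/* Toy model of the KMS display pipe: a connector (physical output) is
 * fed by an encoder, which scans out from a crtc.  Re-routing an output
 * is just repointing these links; fbdev has no equivalent notion. */
struct crtc      { const char *name; };
struct encoder   { const char *name; const struct crtc *crtc; };
struct connector { const char *name; const struct encoder *encoder; };

/* Which crtc (if any) currently drives this connector?
 * Returns 0 (NULL) for an unrouted connector, i.e. an output
 * that is currently switched off. */
static const struct crtc *crtc_for_connector(const struct connector *c)
{
    return (c && c->encoder) ? c->encoder->crtc : 0;
}
```

Turning one output off and another on is then just clearing one connector's encoder link and setting another's at runtime.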

drm has historically been tied to pci, but Jordan Crouse recently
posted changes to support platform devices:
http://lists.freedesktop.org/archives/dri-devel/2010-May/000887.html

Alex


Re: Idea of a v4l - fb interface driver

2010-05-28 Thread Alex Deucher
On Fri, May 28, 2010 at 3:15 PM, Florian Tobias Schandinat
florianschandi...@gmx.de wrote:
 Alex Deucher wrote:

 On Fri, May 28, 2010 at 4:21 AM, Guennadi Liakhovetski
 g.liakhovet...@gmx.de wrote:

 On Thu, 27 May 2010, Alex Deucher wrote:


 Another API to consider is the drm kms (kernel modesetting) interface.
  The kms API deals properly with advanced display hardware and
 properly handles crtcs, encoders, and connectors.  It also provides
 fbdev api emulation.

 Well, is KMS planned as a replacement for both fbdev and user-space
 graphics drivers? I mean, if you'd be writing a new fb driver for a
 relatively simple embedded SoC, would KMS apriori be a preferred API?

 It's become the de facto standard for X, and things like EGL are being
 built on top of the API.  As for kms vs fbdev, kms provides a nice
 API for complex display setups with multiple display controllers and
 connectors, while fbdev assumes one monitor/connector/encoder per
 device.  The fbdev and console stuff has yet to take advantage of this
 flexibility; I'm not sure what will happen there.  fbdev emulation is
 provided by kms, but it has to hide the complexity of the attached
 displays.  For an SoC with a single encoder and display, there's
 probably not much advantage over fbdev, but if you have an SoC that
 can do TMDS and LVDS and possibly analog TV out, it gets more
 interesting.

 Well hiding complexity is actually the job of an API. I don't see any need
 for major changes in fbdev for complex display setups. In most cases as a
 userspace application you really don't want to be bothered how many
 different output devices you have and control each individually, you just
 want an area to draw and to know/control what area the user is expected to
 see and that's already provided in fbdev.

Users want to be able to change their display config on the fly.

 If the user wants the same content on multiple outputs just configure the
 driver to do so.

KMS provide an API to do that and a nice internal abstraction for handling it.

 If he wants different (independent) content on each output, just provide
 multiple /dev/fbX devices. I admit that we could use a controlling interface
 here that decides which user (application) might draw at a time to the
 interface which they currently only do if they are the active VT.
 If you want 2 or more outputs to be merged as one just configure this in the
 driver.
 The only thing that is impossible to do in fbdev is controlling 2 or more
 independent display outputs that access the same buffer. But that's not an
 issue I think.
 The things above only could use a unification of how to set them up on
 module load time (as only limited runtime changes are permitted given that we
 must always be able to support a mode that we once entered during runtime).


What about changing outputs on the fly (turn off VGA, turn on DVI,
switch between multi-head and single-head, etc.), or encoders shared
between multiple connectors (think a single DAC shared between a VGA
and a TV port)?  How do you expose them easily as separate fbdevs?
Lots of stuff is doable with fbdev, but it's nicer with kms.

Alex

 The thing that's really missing in fbdev is a way to allow hardware
 acceleration for userspace.


 Regards,

 Florian Tobias Schandinat




Re: Idea of a v4l - fb interface driver

2010-05-27 Thread Alex Deucher
On Thu, May 27, 2010 at 2:56 AM, Guennadi Liakhovetski
g.liakhovet...@gmx.de wrote:
 (adding V4L ML to CC and preserving the complete reply for V4L readers)

 On Thu, 27 May 2010, Jaya Kumar wrote:

 On Wed, May 26, 2010 at 10:09 PM, Guennadi Liakhovetski
 g.liakhovet...@gmx.de wrote:
  Problem: Currently the standard way to provide graphical output on various
  (embedded) displays like LCDs is to use a framebuffer driver. The
  interface is well supported and widely adopted in the user-space, many
  applications, including the X-server, various libraries like directfb,
  gstreamer, mplayer, etc. In the kernel space, however, the subsystem has a
  number of problems. It is unmaintained. The infrastructure is not being
  further developed, every specific hardware driver is being supported by
  the respective architecture community. But as video output hardware

 I understand the issue you are raising, but to be clear there are
 several developers, Geert, Krzysztof, and others who are helping with
 the role of fbdev maintainer while Tony is away. If you meant that it
 has no specific currently active maintainer person, when you wrote
 unmaintained, then I agree that is correct.

 Exactly, I just interpreted this excerpt from MAINTAINERS:

 FRAMEBUFFER LAYER
 L:      linux-fb...@vger.kernel.org
 W:      http://linux-fbdev.sourceforge.net/
 S:      Orphan

 We're not sure where
 Tony is and we hope he's okay and that he'll be back soon. But if you
 meant that it is not maintained as in bugs aren't being fixed, then
 I'd have to slightly disagree. Maybe not as fast as commercial
 organizations seem to think should come for free, but still they are
 being worked on.

  evolves, more complex displays and buses appear and have to be supported,
  the subsystem shows its aging. For example, there is currently no way to
  write reusable across multiple platforms display drivers.

 At first I misread your point as talking about multi-headed displays
 which you're correct is not so great in fbdev. But write reusable
 across multi-platform display driver, I did not understand fully. I
 maintain a fbdev driver, broadsheetfb, that we're using on arm and x86
 without problems and my presumption is other fbdev drivers are also
 capable of this unless the author made it explicitly platform
 specific.

 Ok, let me explain what exactly I meant. Above I referred to display
 drivers, which is not the same as a framebuffer controller driver or
 whatever you would call it. By framebuffer controller driver I mean a
 driver for the actual graphics engine on a certain graphics card or an
 SoC. This is the part, that reads data from the actual framebuffer and
 outputs it to some hardware interface to a display device. Now this
 interface can be a VGA or a DVI connector, it can be one of several bus
 types, used with various LCD displays. In many cases this is all you have
 to do to get the output to your display. But in some cases the actual
 display on the other side of this bus also requires a driver. That can be
 some kind of a smart display, it can be a panel with an attached display
 controller, that must be at least configured, say, over SPI, it can be a
 display, attached to the host over the MIPI DSI bus, and implementing some
 proprietary commands. In each of these cases you will have to write a
 display driver for this specific display or controller type, and your
 framebuffer driver will have to interface with that display driver. Now,
 obviously, those displays can be connected to a variety of host systems,
 in which case you will want to reuse that display driver. This means,
 there has to be a standard fb-driver - display-driver API. AFAICS, this is
 currently not implemented in fbdev, please, correct me if I am wrong.


Another API to consider is the drm kms (kernel modesetting) interface.
 The kms API deals properly with advanced display hardware and
properly handles crtcs, encoders, and connectors.  It also provides
fbdev api emulation.

Alex



 In my experience with adding defio to the fbdev infra, the
 fbdev community seemed quite good and I did not notice any aging
 problems. I realize there's probably issues that you're encountering
 where fbdev might be weak, this is good, and if you raise them
 specifically, I think the community can work together to address the
 issues.

 
  OTOH V4L2 has a standard video output driver support, it is not very
  widely used, in the userspace I know only of gstreamer, that somehow
  supports video-output v4l2 devices in latest versions. But, being a part
  of the v4l2 subsystem, these drivers already now can take a full advantage
  of all v4l2 APIs, including the v4l2-subdev API for the driver reuse.
 
  So, how can we help graphics driver developers on the one hand by
  providing them with a capable driver framework (v4l2) and on the other
  hand by simplifying the task of interfacing to the user-space?

 I think it would help if there were more specific elaborations on the
 

Re: ATI TV Wonder 650 PCI development

2010-02-15 Thread Alex Deucher
On Mon, Feb 15, 2010 at 3:57 PM, Samuel Cantrell
samuelcantr...@gmail.com wrote:
 Hello,

 I have an ATI TV Wonder 650 PCI card, and have started to work on the
 wiki page on LinuxTV.org regarding it. I want to *attempt* to write a
 driver for it (more like, take a look at the code and run), and have
 printed off some information on the wiki. I need to get pictures up of
 the card and lspci output, etc.

 Is there anyone else more experienced at writing drivers that could
 perhaps help?

 http://www.linuxtv.org/pipermail/linux-dvb/2007-October/021228.html
 says that three pieces of documentation are missing. I've
 emailed Samsung regarding the tuner module on the card, as I could not
 find it on their website. I checked some of their affiliates as well,
 but still had no luck. I've emailed AMD/ATI regarding the card and
 technical documentation.


Who did you contact?   gpudriverdevsupport AT amd DOT com is the devel
address you probably want.  I looked into documentation for the newer
theatre chips when I started at AMD, but unfortunately, I'm not sure
how much we can release since we sold most of our multimedia IP to
Marvell last year.  I'm not sure what the status of the theatre chips
is now.

Documentation for the older theatre and theatre 200 asics was released
under NDA years ago which resulted in the theatre support in the
opensource radeon Xorg driver and gatos projects.  Now that we have a
proper KMS driver for radeon, someone could port the old userspace
theatre code to the kernel for proper v4l support on AIW radeon cards.

Alex

 Is it likely that that the tuner module has an XC3028 in it? In the
 same linux-dvb message thread noted above, someone speculated that
 there is a XC3028. As the v4l tree has XC3028 support, if this is
 true, wouldn't that help at least a little bit?

 Thanks.

 Sam



Re: ATI TV Wonder 650 PCI development

2010-02-15 Thread Alex Deucher
On Mon, Feb 15, 2010 at 5:34 PM, Devin Heitmueller
dheitmuel...@kernellabs.com wrote:
 On Mon, Feb 15, 2010 at 5:21 PM, Alex Deucher alexdeuc...@gmail.com wrote:
 Who did you contact?   gpudriverdevsupport AT amd DOT com is the devel
 address you probably want.  I looked into documentation for the newer
 theatre chips when I started at AMD, but unfortunately, I'm not sure
 how much we can release since we sold most of our multimedia IP to
 Marvell last year.  I'm not sure what the status of the theatre chips
 is now.

 Documentation for the older theatre and theatre 200 asics was released
 under NDA years ago which resulted in the theatre support in the
 opensource radeon Xorg driver and gatos projects.  Now that we have a
 proper KMS driver for radeon, someone could port the old userspace
 theatre code to the kernel for proper v4l support on AIW radeon cards.

 Alex

 For what it's worth, I actually did have a contact at the ATI/AMD
 division that made the Theatre 312/314/316, and I was able to get
 access to both the docs and reference driver sources under NDA.
 However, the division in question was sold off to Broadcom, and I
 couldn't get the rights needed to do a GPL driver nor to get
 redistribution rights on the firmware.  In fact, they couldn't even
 tell me who actually *held* the rights for the reference driver code.

 At that point, I decided that it just wasn't worth the effort for such
 an obscure design.

Ah right, I meant Broadcom, not Marvell.

Alex


Re: ATI TV Wonder 650 PCI development

2010-02-15 Thread Alex Deucher
On Mon, Feb 15, 2010 at 5:43 PM, Samuel Cantrell
samuelcantr...@gmail.com wrote:
 Perhaps we could contact Broadcom regarding the Theatre 312?

 Am I making too much of this? I do have a Pinnacle 800i which works
 with Linux, I was just also wanting to get this card to work. Should I
 just drop it?


Can't hurt to try, but I'm not sure how much luck you'll have.

Alex


 Thanks.

 Sam

 On Mon, Feb 15, 2010 at 2:37 PM, Alex Deucher alexdeuc...@gmail.com wrote:
 On Mon, Feb 15, 2010 at 5:34 PM, Devin Heitmueller
 dheitmuel...@kernellabs.com wrote:
 On Mon, Feb 15, 2010 at 5:21 PM, Alex Deucher alexdeuc...@gmail.com wrote:
 Who did you contact?   gpudriverdevsupport AT amd DOT com is the devel
 address you probably want.  I looked into documentation for the newer
 theatre chips when I started at AMD, but unfortunately, I'm not sure
 how much we can release since we sold most of our multimedia IP to
 Marvell last year.  I'm not sure what the status of the theatre chips
 is now.

 Documentation for the older theatre and theatre 200 asics was released
 under NDA years ago which resulted in the theatre support in the
 opensource radeon Xorg driver and gatos projects.  Now that we have a
 proper KMS driver for radeon, someone could port the old userspace
 theatre code to the kernel for proper v4l support on AIW radeon cards.

 Alex

 For what it's worth, I actually did have a contact at the ATI/AMD
 division that made the Theatre 312/314/316, and I was able to get
 access to both the docs and reference driver sources under NDA.
 However, the division in question was sold off to Broadcom, and I
 couldn't get the rights needed to do a GPL driver nor to get
 redistribution rights on the firmware.  In fact, they couldn't even
 tell me who actually *held* the rights for the reference driver code.

 At that point, I decided that it just wasn't worth the effort for such
 an obscure design.

 Ah right, I meant Broadcom, not Marvell.

 Alex




Re: Replace Mercurial with GIT as SCM

2009-12-01 Thread Alex Deucher
On Tue, Dec 1, 2009 at 9:59 AM, Patrick Boettcher
pboettc...@kernellabs.com wrote:
 Hi all,

 I would like to start a discussion which ideally results in either changing
 the SCM of v4l-dvb to git _or_ leaving everything as it is today with
 mercurial.

 To start right away: I'm in favour of using GIT because of difficulties I
 have with my daily work with v4l-dvb. It is in my nature to make mistakes,
 so I need a tool which assists me in fixing those; I have not found a simple
 way to do my stuff with HG.

 I'm helping out myself using a citation which basically describes why
 GIT fits the/my needs better than HG (*):

 The culture of mercurial is one of immutability. This is quite a good
 thing, and it's one of my favorite aspects of gnu arch. If I commit
 something, I like to know that it's going to be there. Because of this,
 there are no tools to manipulate history by default.

 git is all about manipulating history. There's rebase, commit amend,
 reset, filter-branch, and probably other commands I'm not thinking of,
 many of which make it into day-to-day workflows. Then again, there's
 reflog, which adds a big safety net around this mutability.

 The first paragraph here describes exactly my problem and the second
 describes how to solve it.

 My suggestion is not to have the full Linux Kernel source as a new base for
 v4l-dvb development, but only to replace the current v4l-dvb hg with a GIT
 one. Importing all the history and everything.

 Unfortunately it will change nothing for Mauro's job.

 I also understand that it does not give a lot to people who haven't used GIT
 until now other than a new SCM to learn. But believe me, once you've done a
 rebase when Mauro has asked you to rebuild your tree before he can merge it,
 you will see what I mean.

 I'm waiting for comments.

I prefer git myself, but I'm not really actively working on v4l at the
moment, so, I leave it up to the active devs.  One nice thing about
git is the ability to maintain patch authorship.

Alex


Re: ENE CIR driver

2009-11-23 Thread Alex Deucher
On Sat, Nov 21, 2009 at 1:30 PM, Jarod Wilson ja...@wilsonet.com wrote:
 On Nov 21, 2009, at 12:18 PM, Alex Deucher wrote:

 Does anyone know if there is a driver or documentation available for
 the ENE CIR controller that's incorporated into many of their keyboard
 controllers?  If there is no driver but documentation, are there
 drivers for other CIR controllers that could be used as a reference?

 Maxim Levitsky authored lirc_ene0100, which is in the lirc tarball and my 
 lirc git tree now, with the intention of submitting it for upstream kernel 
 inclusion once (if?) we get the base lirc bits accepted.


Excellent.  thanks for the heads up.

Alex


ENE CIR driver

2009-11-21 Thread Alex Deucher
Does anyone know if there is a driver or documentation available for
the ENE CIR controller that's incorporated into many of their keyboard
controllers?  If there is no driver but documentation, are there
drivers for other CIR controllers that could be used as a reference?

Thanks,

Alex


Re: [PATCH] Add support for Compro VideoMate E800 (DVB-T part only)

2009-09-06 Thread Alex Deucher
On Sun, Sep 6, 2009 at 11:33 AM, geroin22geroi...@yandex.ru wrote:



 Nothing new, just adding Compro VideoMate E800 (DVB-T part only).
 Tested with Ubuntu 9.04, kernel 2.6.28; works well.


Please add your Signed-off-by.

 diff -Naur a/linux/Documentation/video4linux/CARDLIST.cx23885
 b/linux/Documentation/video4linux/CARDLIST.cx23885
 --- a/linux/Documentation/video4linux/CARDLIST.cx23885  2009-09-01
 16:43:46.0 +0300
 +++ b/linux/Documentation/video4linux/CARDLIST.cx23885  2009-09-06
 15:37:13.373793025 +0300
 @@ -23,3 +23,4 @@
  22 - Mygica X8506 DMB-TH                                 [14f1:8651]
  23 - Magic-Pro ProHDTV Extreme 2                         [14f1:8657]
  24 - Hauppauge WinTV-HVR1850                             [0070:8541]
 + 25 - Compro VideoMate E800                               [1858:e800]
 diff -Naur a/linux/drivers/media/video/cx23885/cx23885-cards.c
 b/linux/drivers/media/video/cx23885/cx23885-cards.c
 --- a/linux/drivers/media/video/cx23885/cx23885-cards.c 2009-09-01
 16:43:46.0 +0300
 +++ b/linux/drivers/media/video/cx23885/cx23885-cards.c 2009-09-06
 15:35:23.434293199 +0300
 @@ -211,6 +211,10 @@
                .portb          = CX23885_MPEG_ENCODER,
                .portc          = CX23885_MPEG_DVB,
        },
 +        [CX23885_BOARD_COMPRO_VIDEOMATE_E800] = {
 +               .name           = "Compro VideoMate E800",
 +               .portc          = CX23885_MPEG_DVB,
 +       },
 };
 const unsigned int cx23885_bcount = ARRAY_SIZE(cx23885_boards);

 @@ -342,6 +346,10 @@
                .subvendor = 0x0070,
                .subdevice = 0x8541,
                .card      = CX23885_BOARD_HAUPPAUGE_HVR1850,
 +        }, {
 +               .subvendor = 0x1858,
 +               .subdevice = 0xe800,
 +               .card      = CX23885_BOARD_COMPRO_VIDEOMATE_E800,
        },
 };
 const unsigned int cx23885_idcount = ARRAY_SIZE(cx23885_subids);
 @@ -537,6 +545,7 @@
        case CX23885_BOARD_HAUPPAUGE_HVR1500Q:
        case CX23885_BOARD_LEADTEK_WINFAST_PXDVR3200_H:
        case CX23885_BOARD_COMPRO_VIDEOMATE_E650F:
 +        case CX23885_BOARD_COMPRO_VIDEOMATE_E800:
                /* Tuner Reset Command */
                bitmask = 0x04;
                break;
 @@ -688,6 +697,7 @@
                break;
        case CX23885_BOARD_LEADTEK_WINFAST_PXDVR3200_H:
        case CX23885_BOARD_COMPRO_VIDEOMATE_E650F:
 +        case CX23885_BOARD_COMPRO_VIDEOMATE_E800:
                /* GPIO-2  xc3028 tuner reset */

                /* The following GPIO's are on the internal AVCore (cx25840) */
 @@ -912,6 +922,7 @@
        case CX23885_BOARD_HAUPPAUGE_HVR1255:
        case CX23885_BOARD_HAUPPAUGE_HVR1210:
        case CX23885_BOARD_HAUPPAUGE_HVR1850:
 +        case CX23885_BOARD_COMPRO_VIDEOMATE_E800:
        default:
                ts2->gen_ctrl_val  = 0xc; /* Serial bus + punctured clock */
                ts2->ts_clk_en_val = 0x1; /* Enable TS_CLK */
 @@ -928,6 +939,7 @@
        case CX23885_BOARD_LEADTEK_WINFAST_PXDVR3200_H:
        case CX23885_BOARD_COMPRO_VIDEOMATE_E650F:
        case CX23885_BOARD_NETUP_DUAL_DVBS2_CI:
 +        case CX23885_BOARD_COMPRO_VIDEOMATE_E800:
                dev->sd_cx25840 = v4l2_i2c_new_subdev(&dev->v4l2_dev,
                                &dev->i2c_bus[2].i2c_adap,
                                "cx25840", "cx25840", 0x88 >> 1, NULL);
 diff -Naur a/linux/drivers/media/video/cx23885/cx23885-dvb.c b/linux/drivers/media/video/cx23885/cx23885-dvb.c
 --- a/linux/drivers/media/video/cx23885/cx23885-dvb.c 2009-09-01 16:43:46.0 +0300
 +++ b/linux/drivers/media/video/cx23885/cx23885-dvb.c 2009-09-06 16:09:17.154602943 +0300
 @@ -744,6 +744,7 @@
        }
        case CX23885_BOARD_LEADTEK_WINFAST_PXDVR3200_H:
        case CX23885_BOARD_COMPRO_VIDEOMATE_E650F:
 +        case CX23885_BOARD_COMPRO_VIDEOMATE_E800:
                i2c_bus = &dev->i2c_bus[0];

                fe0-dvb.frontend = dvb_attach(zl10353_attach,
 diff -Naur a/linux/drivers/media/video/cx23885/cx23885.h b/linux/drivers/media/video/cx23885/cx23885.h
 --- a/linux/drivers/media/video/cx23885/cx23885.h       2009-09-01 16:43:46.0 +0300
 +++ b/linux/drivers/media/video/cx23885/cx23885.h       2009-09-06 15:36:40.229792022 +0300
 @@ -79,6 +79,7 @@
 #define CX23885_BOARD_MYGICA_X8506             22
 #define CX23885_BOARD_MAGICPRO_PROHDTVE2       23
 #define CX23885_BOARD_HAUPPAUGE_HVR1850        24
 +#define CX23885_BOARD_COMPRO_VIDEOMATE_E800    25

 #define GPIO_0 0x0001
 #define GPIO_1 0x0002


 --
 To unsubscribe from this list: send the line unsubscribe linux-media in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html



Re: [PULL] http://kernellabs.com/hg/~mkrufky/cx23885

2009-08-04 Thread Alex Deucher
On Tue, Aug 4, 2009 at 3:50 PM, Michael Krufky <mkru...@kernellabs.com> wrote:
 On Tue, Aug 4, 2009 at 3:47 PM, Alex Deucher <alexdeuc...@gmail.com> wrote:
 On Tue, Aug 4, 2009 at 3:33 PM, Michael Krufky <mkru...@linuxtv.org> wrote:
 Mauro,

 Please pull from:

 http://kernellabs.com/hg/~mkrufky/cx23885

 for the following fixes:

 - cx23885: Enable mplayer pvr:// usage

 I'm not too familiar with mplayer's v4l support, but why not fix
 mplayer rather than adding a fake audio interface to the driver?
 Wouldn't that potentially confuse users or other apps?

 Thats a good question, Alex.

 The answer, for now, is conformity amongst the v4l2 drivers.

 Mplayer has been around for a good long time, and any v4l2 mpeg
 encoder that doesn't do this is broken in the various userspace
 applications.

 I agree that we should fix userspace also -- but fix the kernel first,
 so that older userspace works as-is.

er... yeah, but you are reinforcing broken behavior; rather than
fixing more drivers, why not submit an mplayer patch and tell users
they need a newer version of mplayer/etc. to work with their card?
Otherwise we'll end up with tons of drivers with fake audio interfaces
and mplayer will never get fixed and users will complain that the
audio interface doesn't work.

Alex