Re: [RFC v2 0/5] Common Display Framework

2013-02-08 Thread Marcus Lorentzon

On 02/08/2013 11:51 AM, Tomi Valkeinen wrote:

On 2013-02-04 12:05, Marcus Lorentzon wrote:


As discussed at FOSDEM, I will give a DSI bus with the full feature set
a try.

Please do, but as a reminder I want to raise the issues I see with a DSI
bus:

- A device can be a child of only one bus. So if DSI is used only for
video, the device is a child of, say, i2c bus, and thus there's no DSI
bus. How to configure and use DSI in this case?

- If DSI is used for both control and video, we have two separate APIs
for the bus. What I mean here is that for the video-only case above, we
need a video-only-API for DSI. This API should contain all necessary
methods to configure DSI. But we need similar methods for the control
API also.

So, I hope you come up with some solution for this, but as I see it,
the simplest and clearest option is to have one video_source style
entity for the DSI bus itself, which is used for both control and
video.


Thanks. It is not that I'm totally against the video source stuff, and
I share your concerns; none of the solutions are perfect. It just
doesn't feel right to create this dummy source device without
investigating the DSI bus route. But I will try to write up some
history/problem description and ask Greg KH for guidance. If he has a
strong opinion either way, there is not much more to discuss ;)


/BR
/Marcus

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC v2 0/5] Common Display Framework

2013-02-04 Thread Marcus Lorentzon

On 02/02/2013 12:35 AM, Laurent Pinchart wrote:

Hi Marcus,

On Tuesday 08 January 2013 18:08:19 Marcus Lorentzon wrote:

On 01/08/2013 05:36 PM, Tomasz Figa wrote:

On Tuesday 08 of January 2013 11:12:26 Marcus Lorentzon wrote:

[...]

But it is not perfect. After a couple of products we realized that most
panel drivers want an easy way to send a bunch of init commands in one
go. So I think it should be an op for sending an array of commands at
once. Something like

struct dsi_cmd {
	enum mipi_pkt_type type; /* MIPI DSI, DCS, SetPacketLen, ... */
	u8 cmd;
	int data_len;
	u8 *data;
};

struct dsi_ops {
	int (*dsi_write)(source, int num_cmds, struct dsi_cmd *cmds);
	...
};

Do you have DSI IP(s) that can handle a list of commands? Or would all
DSI transmitter drivers need to iterate over the commands manually? In
the latter case a lower-level API might be easier to implement in DSI
transmitter drivers. Helper functions could provide the higher-level
API you proposed.


The HW has a FIFO, so it can handle a few. Currently we use the low
level type of call, with one call per command. But we have found DSI
command mode panels that don't accept any commands during the update
(write start + continues). And so we must use a mutex/state machine to
exclude any async calls that would send DSI commands during an update.
But if you need to send more than one command per frame this will be
hard (like CABC and backlight commands). It will be a ping-pong between
update and command calls. One option is to expose the mutex to the
caller so it can make many calls before the next update grabs the mutex
again.
So maybe we could create a helper that handles the op for a list of
commands, on top of another op for a single command that you actually
have to implement.
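A user-space mock of that split might look as follows. Every name here (dsi_source, write_one, the locked flag) is invented for illustration and is not from any proposed CDF API; the point is only that the batch helper takes the bus lock once for the whole array, so an async caller such as a backlight or CABC update cannot interleave:

```c
#include <stddef.h>

/* All names below (dsi_source, write_one, locked, ...) are invented for
 * this sketch; they are not part of the proposed CDF API. */

enum mipi_pkt_type { MIPI_DCS_WRITE, MIPI_GEN_WRITE };

struct dsi_cmd {
	enum mipi_pkt_type type;
	unsigned char cmd;
	int data_len;
	unsigned char *data;
};

struct dsi_source {
	int locked;	/* stand-in for the mutex shared with the update path */
	int (*write_one)(struct dsi_source *src, const struct dsi_cmd *cmd);
	int sent;	/* demo bookkeeping */
};

/* The batch helper: take the bus lock once, send the whole array,
 * release. An async caller (CABC, backlight) can then never slip a
 * command in between two commands of the sequence or into an update. */
static int dsi_write_cmds(struct dsi_source *src, int num_cmds,
			  const struct dsi_cmd *cmds)
{
	int i, ret = 0;

	src->locked = 1;
	for (i = 0; i < num_cmds && ret == 0; i++)
		ret = src->write_one(src, &cmds[i]);
	src->locked = 0;
	return ret;
}

/* Demo single-command op: the only op a driver would have to implement. */
static int demo_write_one(struct dsi_source *src, const struct dsi_cmd *cmd)
{
	(void)cmd;
	if (!src->locked)
		return -1;	/* would race with an ongoing update */
	src->sent++;
	return 0;
}

static int demo_panel_init(void)
{
	struct dsi_source src = { 0, demo_write_one, 0 };
	const struct dsi_cmd init[] = {
		{ MIPI_DCS_WRITE, 0x11, 0, NULL },	/* exit sleep mode */
		{ MIPI_DCS_WRITE, 0x29, 0, NULL },	/* display on */
	};

	return dsi_write_cmds(&src, 2, init) ? -1 : src.sent;
}
```

A driver would then implement only the single-command op, and the list op plus the per-packet-type helpers could be shared code.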

Yes, this should be flexible enough to cover most of (or even the
whole) DSI specification.

However I'm not sure whether the dsi_write name would be appropriate,
since DSI packet types also include reads and special transactions. So,
according to DSI terminology, maybe dsi_transaction would be better?

Or dsi_transfer or dsi_xfer? Does the DSI bus have a concept of
transactions?


No transactions. And I don't want to mix reads and writes. It should
be similar to I2C and other control busses. So one read and one write
op should be fine. And then a bunch of helpers on top for callers to
use, like one per major DSI packet type.

I think read should still be separate. At least on my HW, read and
write are quite different, but all transactions are very much the same
in HW setup. The ... was dsi_write etc ;) Something like
set_max_packet_size should maybe be an op, since only the implementer
of the video source knows what the max read return packet size for that
DSI IP is. The panels don't know that. Maybe another op to retrieve
some info about the caps of the video source would help there. Then a
helper could call that and then the dsi_write one.

If panels (or helper functions) need information about the DSI transmitter
capabilities, sure, we can add an op.


Yes, a video source op for getting caps would be OK too. But so far
the only limits I have found are the read/write sizes. But if anyone
else has other limits, please list them so we can add them to this
struct dsi_host_caps.
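For illustration, such a caps struct and a clamping helper might look like this. The field and function names are hypothetical, and only the size limits discussed so far are included:

```c
/* Hypothetical dsi_host_caps: so far the thread has only identified
 * transfer-size limits, so that is all this sketch carries. */
struct dsi_host_caps {
	int max_write_len;	/* FIFO-limited payload bytes per write */
	int max_read_len;	/* max read-return packet size of the DSI IP */
};

/* A generic read helper could query the caps and clamp the request
 * before issuing SetMaxReturnPacketSize, instead of every panel driver
 * guessing what its transmitter supports. */
static int dsi_clamp_read_len(const struct dsi_host_caps *caps, int want)
{
	return want < caps->max_read_len ? want : caps->max_read_len;
}
```

The ops-for-caps approach keeps the panel drivers portable across DSI IPs with different FIFO depths.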

And I think I still prefer the dsi_bus in favor of the abstract video
source. It just looks like a home-made bus with bus ops ... can't you
do something similar using the normal driver framework? enable/disable
looks like suspend/resume, register/unregister_vid_src is like
bus_(un)register_device, ... and the video source anyway seems
unattached to the panel stuff with the find_video_source call.

The Linux driver framework is based on control busses. DSI usually handles
both control and video transfer, but the control and data busses can also be
separate (think DPI + SPI/I2C for instance). In that case the panel will be a
child of its control bus master, and will need a separate interface to access
video data operations. As a separate video interface is thus needed, I think
we should use it for DSI as well.

My initial proposal included a DBI bus (as I don't have any DSI hardware - DBI
and DSI can be used interchangeably in this discussion, as they both share the
characteristic of having a common control + data bus), and panels were children
of the DBI bus master. The DBI bus API was only used for control, not for data
transfers. Tomi then removed the DBI bus and moved the control operations to
the video source, turning the DBI panels into platform devices. I still favor
my initial approach, but I can agree to drop the DBI bus if there's a
consensus on that. Video bus operations will be separate in any case.


As discussed at FOSDEM, I will give a DSI bus with the full feature set a try.

BTW. Who got the action to ask Greg about devices with multiple 
parents/buses?



Also, as discussed in previous posts, some panels might use DSI only for
video data and another interface (I2C, SPI

Re: [RFC v2 0/5] Common Display Framework

2013-01-08 Thread Marcus Lorentzon

On 01/08/2013 09:18 AM, Laurent Pinchart wrote:

On Thursday 27 December 2012 15:43:34 Tomasz Figa wrote:

  On Monday 24 of December 2012 15:12:28 Laurent Pinchart wrote:

On Friday 21 December 2012 11:00:52 Tomasz Figa wrote:

  On Tuesday 18 of December 2012 08:31:30 Vikas Sajjan wrote:

On 17 December 2012 20:55, Laurent Pinchart wrote:

  Hi Vikas,
  
  Sorry for the late reply. I now have more time to work on CDF, so

  delays should be much shorter.
  
  On Thursday 06 December 2012 10:51:15 Vikas Sajjan wrote:

Hi Laurent,

I was thinking of porting CDF to the samsung EXYNOS 5250 platform.
What I found is that the exynos display controller is a MIPI DSI
based controller.

But if I look at the CDF patches, there is only support for MIPI DBI
based display controllers.

So my question is: do we have any generic framework for MIPI DSI
based display controllers? Basically I wanted to know how to go
about porting CDF for such a kind of display controller.
  
  MIPI DSI support is not available yet. The only reason for that is
  that I don't have any MIPI DSI hardware to write and test the code
  with :-)
  
  The common display framework should definitely support MIPI DSI. I
  think the existing MIPI DBI code could be used as a base, so the
  implementation shouldn't be too hard.
  
  Yeah, I was also thinking along similar lines; below is my thought
  for MIPI DSI support in CDF.

o MIPI DSI support as part of the CDF framework will expose:
  - mipi_dsi_register_device(mipi_device) (will be called from the
    mach-xxx-dt.c file)
  - mipi_dsi_register_driver(mipi_driver, bus ops) (will be called
    from the platform specific init driver call), where the bus ops
    will be:
      o read data
      o write data
      o write command
  - MIPI DSI will be registered with bus_register()

When the MIPI DSI probe is called, it (e.g., Exynos or OMAP MIPI DSI)
will initialize the MIPI DSI HW IP.

This probe will also parse the DT file for a MIPI DSI based panel, add
the panel device (device_add()) to the kernel and register the display
entity with its control and video ops with CDF.

I can give this a try.
  
  I am currently in the process of reworking the Exynos MIPI DSIM code
  and the s6e8ax0 LCD driver to use the v2 RFC of the Common Display
  Framework. I have most of the work done; I just have to solve several
  remaining problems.

Do you already have code that you can publish? I'm particularly
interested (and I think Tomi Valkeinen would be as well) in looking at
the DSI operations you expose to DSI sinks (panels, transceivers, ...).
  
  Well, I'm afraid this might be a little below your expectations, but
  here's an initial RFC of the part defining just the DSI bus. I need a
  bit more time for the patches for the Exynos MIPI DSI master and
  s6e8ax0 LCD.

No worries. I was particularly interested in the DSI operations you needed to
export, they seem pretty simple. Thank you for sharing the code.


FYI,
here is STE DSI API:
http://www.igloocommunity.org/gitweb/?p=kernel/igloo-kernel.git;a=blob;f=include/video/mcde.h;h=499ce5cfecc9ad77593e761cdcc1624502f28432;hb=HEAD#l361

But it is not perfect. After a couple of products we realized that most 
panel drivers want an easy way to send a bunch of init commands in one 
go. So I think it should be an op for sending an array of commands at 
once. Something like


struct dsi_cmd {
	enum mipi_pkt_type type; /* MIPI DSI, DCS, SetPacketLen, ... */
	u8 cmd;
	int data_len;
	u8 *data;
};

struct dsi_ops {
	int (*dsi_write)(source, int num_cmds, struct dsi_cmd *cmds);
	...
};

The rest of the DSI write API could be made helpers on top of this one
op. This grouping also allows a driver to describe its intent to send a
bunch of commands together, which might be of interest with a mode set
(if you need to synchronize a bunch of commands with a mode set, like
setting smart panel rotation in sync with a new framebuffer in DSI
video mode).


I also looked at the video source in Tomi's git tree
(http://gitorious.org/linux-omap-dss2/linux/blobs/work/dss-dev-model-cdf/include/video/display.h).
I think I would prefer a single setup op taking a struct dsi_config as
argument. Then each DSI formatter/encoder driver could decide the best
way to set that up. We have something similar at
http://www.igloocommunity.org/gitweb/?p=kernel/igloo-kernel.git;a=blob;f=include/video/mcde.h;h=499ce5cfecc9ad77593e761cdcc1624502f28432;hb=HEAD#l118
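A sketch of what such a single setup op could look like, with all field names invented (only loosely inspired by the mcde.h config linked above); a shared validation helper would keep the per-driver configure implementations small:

```c
/* Hypothetical one-shot configuration, in the spirit of the single
 * setup op suggested above; every field name here is invented. */
struct dsi_config {
	int num_data_lanes;		/* 1..4 for most DSI IPs */
	unsigned long hs_clk_hz;	/* high-speed bit clock */
	unsigned long lp_clk_hz;	/* low-power escape clock */
	int video_mode;			/* 1 = video mode, 0 = command mode */
};

struct dsi_video_source_ops {
	/* one op; each formatter/encoder driver maps it to its registers */
	int (*configure)(void *source, const struct dsi_config *cfg);
};

/* Common sanity checking could live in a shared helper so each driver
 * only deals with values that are at least plausible. */
static int dsi_config_valid(const struct dsi_config *cfg)
{
	if (cfg->num_data_lanes < 1 || cfg->num_data_lanes > 4)
		return 0;
	if (cfg->lp_clk_hz == 0 || cfg->hs_clk_hz <= cfg->lp_clk_hz)
		return 0;
	return 1;
}
```

Compared with many small setter ops, the single-struct setup lets a driver program its registers in whatever order its IP requires.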


And I think I still prefer the dsi_bus in favor of the abstract video 
source. It just looks like a home made bus with bus-ops ... can't you do 

Re: [RFC v2 0/5] Common Display Framework

2013-01-08 Thread Marcus Lorentzon

On 01/08/2013 05:36 PM, Tomasz Figa wrote:

On Tuesday 08 of January 2013 11:12:26 Marcus Lorentzon wrote:

[...]

But it is not perfect. After a couple of products we realized that most
panel drivers want an easy way to send a bunch of init commands in one
go. So I think it should be an op for sending an array of commands at
once. Something like

struct dsi_cmd {
	enum mipi_pkt_type type; /* MIPI DSI, DCS, SetPacketLen, ... */
	u8 cmd;
	int data_len;
	u8 *data;
};

struct dsi_ops {
	int (*dsi_write)(source, int num_cmds, struct dsi_cmd *cmds);
	...
};

Yes, this should be flexible enough to cover most of (or even the
whole) DSI specification.

However I'm not sure whether the dsi_write name would be appropriate,
since DSI packet types also include reads and special transactions. So,
according to DSI terminology, maybe dsi_transaction would be better?


I think read should still be separate. At least on my HW read and write 
are quite different. But all transactions are very much

Re: [RFC v2 0/5] Common Display Framework

2012-12-18 Thread Marcus Lorentzon

On 12/18/2012 06:04 AM, Dave Airlie wrote:

Many developers showed interest in the first RFC, and I've had the opportunity
to discuss it with most of them. I would like to thank (in no particular
order) Tomi Valkeinen for all the time he spent helping me to draft v2, Marcus
Lorentzon for his useful input during Linaro Connect Q4 2012, and Linaro for
inviting me to Connect and providing a venue to discuss this topic.


So this might be a bit off topic but this whole CDF triggered me
looking at stuff I generally avoid:

I like the effort. Right now it seems like the x86 and ARM display
subsystems are quite different in terms of DRM driver (and HW) design.
I think this is partly due to little information being shared about
these different architectures and the ideas behind the choices made. I
hope some discussion will light up both sides. And an early discussion
will hopefully give you less pain when CDF drivers start to get pushed
your way.

The biggest problem I'm having currently with the whole ARM graphics
and output world is the proliferation of platform drivers for every
little thing. The whole ordering of operations with respect to things
like suspend/resume or dynamic power management is going to be a real
nightmare if there are dependencies between the drivers. How do you
enforce ordering of s/r operations between all the various components?

Could you give an example? Personally I don't think it is that many. I
might not have counted the platform devices in all the ARM drivers, but
the STE one has one per HW IP block (1 DSS + 3 DSI encoders/formatters).
Then of course there are all these panel devices. But I hope that when
CDF is finished we will have DSI devices on the DSI bus and DBI devices
on the DBI bus. I think most vendors have used platform devices for
these since they normally can't be probed in a generic way. But as they
are off-SoC, I feel this is not the best choice. And then many of the
panels are I2C devices (control bus), which I guess is similar to x86
encoders/connectors?
Another part of the difference, I feel, is that in x86 a DRM device is
most likely a PCI device, and as such has one huge driver for all IPs on
that board. The closest thing we get to that in ARM is probably the DSS
(a collection of IPs on the SoC, like 3D, 2D, display output, encoders).
But it doesn't feel right to create a single driver for all of these.
And as you know, the 3D is often even from a separate vendor. All this
leads up to a slight increase in the number of devices and drivers. The
right way, I feel, but you are welcome to show a better one.

The other thing I'd like you guys to do is kill the idea of fbdev and
v4l drivers that are shared with the drm codebase, really just
implement fbdev and v4l on top of the drm layer, some people might
think this is some sort of maintainer thing, but really nothing else
makes sense, and having these shared display frameworks just to avoid
having using drm/kms drivers seems totally pointless. Fix the drm
fbdev emulation if an fbdev interface is needed. But creating a fourth
framework because our previous 3 frameworks didn't work out doesn't
seem like a situation I want to get behind too much.

I have no intention of using CDF outside the KMS connector/encoder, and
I have not heard Laurent talk about this either. Personally I see CDF as
helpers to create and reuse connector/encoder drivers between SoCs,
instead of each SoC doing its own panel drivers (which would be about a
hundred, times the number of supported SoCs). We probably need to
discuss the connector/encoder mappings to CDF/panels. But I think we
first need to flush out the higher level details, like control bus vs.
data bus vs. display entities. While I like the generic way of the
display entities, I also like the pure bus/device/driver model without
too many generalizations.
Do you have any support in the x86 world that could be compared to
mobile phone DSI/DBI/DPI panels? That is, different encoder/LCD-driver
chips between the on-chip/CPU/SoC CRTC and the external LCD depending
on the product (mobile/netbook/...), or is it all HDMI/DP/LVDS etc. on
x86? And if you do, how do you model/setup/share all those in the DRM
driver? Or is it manageable (< 10) and not up in the hundreds of
different encoders/LCD-drivers?


/BR
/Marcus



Re: [Linaro-mm-sig] [PATCH] RFC: dma-buf: userspace mmap support

2012-03-19 Thread Marcus Lorentzon

On 03/19/2012 05:56 PM, Alan Cox wrote:

display controller will be reading the front buffer, but the GPU
might also need to read that front buffer. So perhaps adding
read-only & read-write access flags to prepare could also be
interpreted as shared & exclusive accesses, if we went down this
route for synchronization that is. :-)

mmap includes read/write info so probably using that works out. It also
means that you have the stuff mapped in a way that will bus error or
segfault anyone who goofs rather than give them the usual 'deep
weirdness' behaviour you get with mishandling of caching bits.

Alan

mmap only gives you this info at the time of the mmap call.
prepare/finish would give you this info for each CPU access to the
buffer (assuming the mmap lasts multiple frames). Which means that you
can optimize and have zero cache syncs for frames where the CPU doesn't
touch the buffer at all. If you use the mmap info, then you are forced
to sync the cache before each GPU access to the buffer. For example,
sub-texture updates in glyph caches: they will only rarely change, so
you don't want to sync the CPU cache each time the buffer is used by
the GPU, just because mmap says the CPU has write access.
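The saving can be made concrete with a toy model (no real dma-buf API here, all names invented): count cache syncs when the exporter learns about CPU accesses through prepare/finish bracketing, versus having to assume a dirty buffer before every device access because mmap once granted write access:

```c
/* Toy model only; toy_buf and friends are invented for illustration. */
struct toy_buf {
	int cpu_dirty;	/* set by the prepare/finish bracketing */
	int syncs;	/* cache maintenance operations performed */
};

/* Userspace brackets its writes; the exporter just records the fact. */
static void toy_prepare_write(struct toy_buf *b) { b->cpu_dirty = 1; }
static void toy_finish_write(struct toy_buf *b)  { (void)b; }

/* Device access only pays for a sync when the CPU actually touched the
 * buffer since the last device access. */
static void toy_device_access(struct toy_buf *b)
{
	if (b->cpu_dirty) {
		b->syncs++;
		b->cpu_dirty = 0;
	}
}

static int toy_glyph_cache_frames(void)
{
	struct toy_buf b = { 0, 0 };

	toy_prepare_write(&b);	/* one sub-texture update ... */
	toy_finish_write(&b);
	toy_device_access(&b);	/* frame 1: sync needed */
	toy_device_access(&b);	/* frames 2..3: no CPU touch, no sync */
	toy_device_access(&b);
	return b.syncs;		/* 1 sync instead of one per frame */
}
```

With only the mmap-time R/W information, the exporter would have had to sync on every one of the three device accesses.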


/Marcus



Re: [Linaro-mm-sig] [PATCH] RFC: dma-buf: userspace mmap support

2012-03-16 Thread Marcus Lorentzon

On 03/15/2012 02:32 AM, Rob Clark wrote:

From: Rob Clark <r...@ti.com>
[snip]
In all cases, the mmap() call is allowed to fail, and the associated
dma_buf_ops are optional (mmap() will fail if at least the mmap()
op is not implemented by the exporter, but in either case the
{prepare,finish}_access() ops are optional).

I sort of understand this approach. It allows some implementations
(ARM/Android) to move forward. But how would an application act if mmap
fails? What is the option? How can the application detect if mmap is
possible or not? Or is this mmap only supposed to be used from device
specific libs like libdrm-xx/libv4l2-xx/libgralloc?


Can mmap fail for one buffer, but not another? Can it fail for a buffer
that has successfully been mmapped once before (except for the usual
ENOMEM/EAGAIN etc.)?

For now the prepare/finish access ioctls are kept simple with no
argument, although there is possibility to add additional ioctls
(or simply change the existing ioctls from _IO() to _IOW()) later
to provide optimization to allow userspace to specify a region of
interest.

I like the idea of simple, assume-the-worst, no-args versions of
begin/end access. But once we move forward, I don't just want the
region, but also the access type (R/W). R/W info allows the driver to
make cache management optimizations that are otherwise impossible. For
example, if a CPU with no allocate-on-write policy just writes, a
write-buffer flush is enough before switching to a HW read. And (at
least on ARM) a cache clean can be done on the whole cache for large
areas, but invalidate has to be done line by line. Eliminating the need
to invalidate, especially if the region is small, compared to
invalidating the entire buffer line by line, can make a huge difference.
But I would like these in a separate ioctl to keep the normal case
simple. Maybe as a separate patch even.
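As a hypothetical sketch of that later extension, the argless _IO() ioctls could grow a struct argument carrying a region of interest plus R/W flags. The struct layout, flag values and ioctl numbers below are invented, not part of this patch:

```c
#include <sys/ioctl.h>

/* Invented extension: a region of interest plus access-type flags, so
 * the exporter can skip or shrink cache maintenance. Nothing here is
 * from Rob's patch; names, numbers and layout are illustrative only. */
struct dma_buf_sync_region {
	unsigned long long offset;	/* start of the touched region */
	unsigned long long len;		/* 0 could mean "whole buffer" */
	unsigned int flags;		/* READ/WRITE flags below */
};

#define DMA_BUF_SYNC_READ	(1u << 0)
#define DMA_BUF_SYNC_WRITE	(1u << 1)

#define DMA_BUF_IOCTL_PREPARE_ACCESS_RGN \
	_IOW('Z', 2, struct dma_buf_sync_region)
#define DMA_BUF_IOCTL_FINISH_ACCESS_RGN \
	_IOW('Z', 3, struct dma_buf_sync_region)
```

Keeping these as new ioctl numbers (rather than changing the existing _IO() ones) would preserve the simple no-arg path for callers that don't care.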


For a final patch, dma-buf.h would need to be split into what is
exported to userspace, and what is kernel private, but I wanted to
get feedback on the idea of requiring userspace to bracket access
first (vs. limiting this to coherent mappings or exporters who play
page faltings plus PTE shoot-down games) before I split the header
which would cause conflicts with other pending dma-buf patches.  So
flame-on!

Why not just guard the kernel parts with __KERNEL__ or something? Or
are there guidelines preventing this?



[snip]

+
+static long dma_buf_ioctl(struct file *file, unsigned int cmd,
+		unsigned long arg)
+{
+	struct dma_buf *dmabuf;
+
+	if (!is_dma_buf_file(file))
+		return -EINVAL;
+
+	dmabuf = file->private_data;
+
+	switch (_IOC_NR(cmd)) {
+	case _IOC_NR(DMA_BUF_IOCTL_PREPARE_ACCESS):
+		if (dmabuf->ops->prepare_access)
+			return dmabuf->ops->prepare_access(dmabuf);
+		return 0;
+	case _IOC_NR(DMA_BUF_IOCTL_FINISH_ACCESS):
+		if (dmabuf->ops->finish_access)
+			return dmabuf->ops->finish_access(dmabuf);
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+

Multiple empty lines

  static int dma_buf_release(struct inode *inode, struct file *file)
  {
struct dma_buf *dmabuf;
@@ -45,6 +85,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
  }

  static const struct file_operations dma_buf_fops = {
+   .mmap   = dma_buf_mmap,
+   .unlocked_ioctl = dma_buf_ioctl,
.release= dma_buf_release,
  };

diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index a885b26..cbdff81 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -34,6 +34,17 @@
  struct dma_buf;
  struct dma_buf_attachment;

+/* TODO: dma-buf.h should be the userspace visible header, and
+ * dma-buf-priv.h (?) the kernel internal header.. for now just stuff
+ * these here to avoid conflicting with other patches..
+ *
+ * For now, no arg to keep things simple, but we could consider adding an
+ * optional region of interest later.
+ */
+#define DMA_BUF_IOCTL_PREPARE_ACCESS	_IO('Z', 0)
+#define DMA_BUF_IOCTL_FINISH_ACCESS	_IO('Z', 1)
+
+

Multiple empty lines

  /**
   * struct dma_buf_ops - operations possible on struct dma_buf
   * @attach: [optional] allows different devices to 'attach' themselves to the
@@ -49,6 +60,13 @@ struct dma_buf_attachment;
   * @unmap_dma_buf: decreases usecount of buffer, might deallocate scatter
   *   pages.
   * @release: release this buffer; to be called after the last dma_buf_put.
+ * @mmap: [optional, allowed to fail] operation called if userspace calls
+ *  mmap() on the dmabuf fd.  Note that userspace should use the
+ *  DMA_BUF_PREPARE_ACCESS / DMA_BUF_FINISH_ACCESS ioctls before/after
+ *  sw access to the buffer, to give the exporter an opportunity to
+ *  deal with cache maintenance.
+ * @prepare_access: [optional] handler for PREPARE_ACCESS ioctl.
+ * @finish_access: 

Re: [PATCH 09/10] MCDE: Add build files and bus

2010-12-17 Thread Marcus Lorentzon

On 12/17/2010 12:22 PM, Arnd Bergmann wrote:

* When I talk about a bus, I mean 'struct bus_type', which identifies
all devices with a uniform bus interface to their parent device
(think: PCI, USB, I2C). You seem to think of a bus as a specific
instance of that bus type, i.e. the device that is the parent of
all the connected devices. If you have only one instance of a bus
in any system, and they are all using the same driver, do not add
a bus_type for it.
A good reason to add a bus_type would be e.g. if the display
driver uses interfaces to the dss that are common among multiple
dss drivers from different vendors, but the actual display drivers
are identical. This does not seem to be the case.

   

Correct, I refer to the device, not the type or driver. I used a bus
type since it allowed me to set up a default implementation for each
driver callback. So all drivers get a generic implementation by
default, and override it when that is not enough. Maybe you have a
better way of getting the same behavior.
 

One solution that I like is to write a module with the common code as
a library, exporting all the default actions. The specific drivers
can then fill their operations structure by listing the defaults
or by providing their own functions to replace them, which in turn
can call the default functions. This is e.g. what libata does.

   

Will do.
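The libata-style pattern Arnd describes can be sketched in a few lines; all names below are made up for illustration:

```c
/* Common module exports generic implementations of every callback. */
static int generic_display_power(int on)
{
	(void)on;
	return 0;	/* generic power sequence would go here */
}

static int generic_display_update(void)
{
	return 0;	/* generic update path would go here */
}

struct display_drv_ops {
	int (*power)(int on);
	int (*update)(void);
};

/* A specific display driver lists the defaults it wants and overrides
 * the rest, optionally calling back into the generic version. */
static int quirky_panel_update(void)
{
	/* panel quirk handled first, then the common path */
	return generic_display_update() + 1;
}

static const struct display_drv_ops quirky_panel_ops = {
	.power  = generic_display_power,	/* default is fine */
	.update = quirky_panel_update,		/* overridden */
};
```

Compared with a bus_type that injects defaults, this keeps the wiring explicit in each driver's ops table, which is exactly the property Arnd points at in libata.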



We are now taking a step back and starting all over. We were almost as
fresh on this HW block as you are now when we started implementing the
driver earlier this year. I think all of us benefit from now having a
better understanding of the customer requirements and the HW itself;
there are some nice quirks ;). Anyway, we will restart the patches and
RFC only the MCDE HW part of the driver, implementing basic fb support
for one display board, as you suggested initially. It's a nice step
towards making the patches easier to review and gives us some time to
prepare the DSS stuff. That remake was done today, so I think the patch
will be sent out soon. (I'm going on vacation for 3 weeks btw.)
 

Ok, sounds great! I'm also starting a 3 week vacation, but will be at the
Linaro sprint in Dallas.

My feeling now, after understanding about it some more, is that it would
actually be better to start with a KMS implementation instead of a classic
frame buffer. Ideally you wouldn't even need the frame buffer driver or
the multiplexing between the two then, but still get all the benefits
from the new KMS infrastructure.

   
I will look at it, we might still post a fb-mcde_hw first though, since 
it's so little work.



DSS gives access to all display devices probed on the virtual mcde
dss bus, or platform bus with a specific type of devices if you like.
All calls to DSS operate on a display device, like create an
overlay (=framebuffer), request an update, set power mode, etc.
All calls to DSS related to the display itself, and not only
framebuffer scanout, will be passed on to the display driver of the
display device in question. All DSS calls related only to overlays,
like buffer pointers, position, rotation etc., are handled directly by
DSS calling mcde_hw.

You could see mcde_hw as a physical level driver and mcde_dss closer
to a logical driver, delegating display specific decisions to the
display driver. Another analogy is mcde_hw is host driver and display
drivers are client device drivers. And DSS is a collection of logic
to manage the interaction between host and client devices.

 

The way you describe it, I would picture it differently:

+----------+  +----+-----+-----+  +-------+
| mcde_hw  |  | fb | kms | v4l |  | displ |
+----------+--+----+-----+-----+--+-------+
|    HW    |           mcde_dss           |
+----------+------------------------------+

In this model, the dss is the core module that everything else
links to. The hw driver talks to the actual hardware and to the
dss. The three front-ends only talk to the dss, but not to the
individual display drivers or to the hw code directly (i.e. they
don't use their exported symbols or internal data structures).
The display drivers only talk to the dss, but not to the front-ends
or the hw drivers.

Would this be a correct representation of your modules?

   

Hmm, mcde_hw does not link to dss. It should be FB -> DSS -> display
driver -> MCDE_HW -> HW IO (+ DSS -> MCDE_HW). My picture is how the
code should be used. Anything else you find in the code is a violation
of that layering.
 

I don't think it makes any sense to have the DSS sit on top of the
display drivers, since that means it has to know about all of them
and loading the DSS module would implicitly have to load all the
display modules below it, even for the displays that are not present.

   
DSS does not have a static dependency on display drivers. DSS is just a 
convenience library for handling the correct display driver call 
sequences, instead of each user (fbdev/KMS/V4L2) having to duplicate 
this code.



Moreover, I don't yet see the 

Re: [PATCH 09/10] MCDE: Add build files and bus

2010-12-16 Thread Marcus Lorentzon

On 11/26/2010 12:24 PM, Arnd Bergmann wrote:

[dri people: please have a look at the KMS discussion way below]

On Thursday 25 November 2010 19:00:26 Marcus LORENTZON wrote:
   

-Original Message-
From: Arnd Bergmann [mailto:a...@arndb.de]
Sent: den 25 november 2010 17:48
To: Marcus LORENTZON
Cc: linux-arm-ker...@lists.infradead.org; Jimmy RUBIN; linux-
me...@vger.kernel.org; Dan JOHANSSON; Linus WALLEIJ
Subject: Re: [PATCH 09/10] MCDE: Add build files and bus

On Thursday 25 November 2010, Marcus LORENTZON wrote:
   

From: Arnd Bergmann [mailto:a...@arndb.de]
 

On Wednesday 10 November 2010, Jimmy Rubin wrote:
   

This patch adds support for the MCDE, Memory-to-display
 

controller,
   

found in the ST-Ericsson ux500 products.

 

[note: please configure your email client properly so it keeps
proper attribution of text and does not rewrap the citations
incorrectly. Wrap your own text after 70 characters]
   
I'm now using Thunderbird; please let me know if it's better than my 
previous webmail client, as neither has many features for reply formatting.

All devices that you cannot probe by asking hardware or firmware are
normally
considered platform devices. Then again, a platform device is usually
identified by its resources, i.e. MMIO addresses and interrupts, which
I guess your display does not have.
   

Then we might be on the right track to model them as devices on a
platform bus. Since most displays/panels can't be probed plug-and-play,
devices instead have to be statically declared in
board-xx.c files in the mach-ux500 folder. Or is the platform bus a
singleton? Or can we define a new platform bus device?
Displays like HDMI TV-sets are not considered for device/driver
in mcde. Instead there will be a hdmi-chip-device/driver on the
mcde bus. So all devices and drivers on this bus are static.
 

I think I need to clarify two things:

* When I talk about a bus, I mean 'struct bus_type', which identifies
   all devices with a uniform bus interface to their parent device
   (think: PCI, USB, I2C). You seem to think of a bus as a specific
   instance of that bus type, i.e. the device that is the parent of
   all the connected devices. If you have only one instance of a bus
   in any system, and they are all using the same driver, do not add
   a bus_type for it.
   A good reason to add a bus_type would be e.g. if the display
   driver uses interfaces to the dss that are common among multiple
   dss drivers from different vendors, but the actual display drivers
   are identical. This does not seem to be the case.
   
Correct, I refer to the device, not the type or driver. I used a bus type 
since it allowed me to set up a default implementation for each driver 
callback. So all drivers get the generic implementation by default, and 
override it only when that is not enough. Maybe you have a better way of 
getting the same behavior.
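As a sketch of the "default callback" pattern being described here, in plain userspace C (the op names are illustrative, not MCDE's actual ops): registration fills in a generic implementation for any callback the driver left NULL, so a driver overrides only what it needs.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical display driver ops -- names are made up for illustration. */
struct display_ops {
	int (*set_power)(int on);
	int (*update)(void);
};

/* Generic implementations used when a driver leaves a callback NULL. */
static int generic_set_power(int on) { (void)on; return 0; }
static int generic_update(void) { return 0; }

/* Fill in defaults at registration; drivers override only what they need. */
static void register_display_driver(struct display_ops *ops)
{
	if (!ops->set_power)
		ops->set_power = generic_set_power;
	if (!ops->update)
		ops->update = generic_update;
}

/* A driver that overrides just one callback. */
static int my_update(void) { return 42; }
```

After `register_display_driver()` runs, `set_power` points at the generic default while `update` keeps the driver's own implementation.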

* When you say that the devices are static, I hope you do not mean
   static in the C language sense. We used to allow devices to be
   declared as static struct and registered using
   platform_device_register (or other bus specific functions). This
   is no longer valid and we are removing the existing users, do not
   add new ones. When creating a platform device, use
   platform_device_register_simple or platform_device_register_resndata.

I'm not sure what you mean with drivers being static. Predefining
the association between displays and drivers in per-machine files is
fine, but since this is really board specific, it would be better
to eventually do this through data passed from the boot loader, so
you don't have to have a machine file for every combination of displays
that is in the field.
   
I guess you have read the ARM discussion about static platform_devices. But, 
yes, I mean static in the C-language sense. I will adapt to whatever Russell 
King says is the right way in ARM SoCs.
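For reference, the registration style Arnd suggests might look roughly like this board-file sketch (kernel-only code, not runnable outside the kernel; the "sample-panel" name, the initcall level, and the absence of resources are all assumptions for illustration):

```c
#include <linux/err.h>
#include <linux/init.h>
#include <linux/platform_device.h>

static struct platform_device *panel_pdev;

static int __init board_add_panel(void)
{
	/* a panel typically has no MMIO/IRQ resources, hence NULL, 0 */
	panel_pdev = platform_device_register_simple("sample-panel", -1,
						     NULL, 0);
	return IS_ERR(panel_pdev) ? PTR_ERR(panel_pdev) : 0;
}
arch_initcall(board_add_panel);
```

This replaces a file-scope `static struct platform_device` plus `platform_device_register()` pair with a dynamically allocated device.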

Staging it in a way that adds all the display drivers later than the
base driver is a good idea, but it would be helpful to also add the
infrastructure at the later stage. Maybe you can try to simplify the
code for now by hardcoding the single display and remove the dynamic
registration. You still have the code, so once the base code looks
good for inclusion, we can talk about it in the context of adding
dynamic display support back in, possibly in exactly the way you are
proposing now, but perhaps in an entirely different way if we come up
with a better solution.
   

What about starting with MCDE HW, which is the core HW driver doing
all real work? And then continue with the infrastructure + some displays
+ drivers ...

This is already the order in which you submitted them, I don't see a
difference here. I was not asking to delay any of the code, just to put
them in a logical order.
   
We are now taking a step back and starting all over. We were almost as 
fresh on this HW block as you are now when we started implementing the 
driver earlier this year

RE: [PATCH 09/10] MCDE: Add build files and bus

2010-11-25 Thread Marcus LORENTZON
Hi Arnd,
Comments inline.

 -Original Message-
 From: Arnd Bergmann [mailto:a...@arndb.de]
 Sent: den 12 november 2010 17:23
 To: linux-arm-ker...@lists.infradead.org
 Cc: Jimmy RUBIN; linux-fb...@vger.kernel.org; linux-
 me...@vger.kernel.org; Dan JOHANSSON; Linus WALLEIJ
 Subject: Re: [PATCH 09/10] MCDE: Add build files and bus
 
 On Wednesday 10 November 2010, Jimmy Rubin wrote:
  This patch adds support for the MCDE, Memory-to-display controller,
  found in the ST-Ericsson ux500 products.
 
  This patch adds the necessary build files for MCDE and the bus that
  all displays are connected to.
 
 
 Can you explain why you need a bus for this?

The idea of the bus is to make it easier to add new panel/display support using 
the normal device/driver model. Boards add devices at init, and drivers are 
selected in config. If we were to model the real physical bus structure it 
would be very complex, hard to use. We use our own bus, but it's really a 
virtual bus for adding display devices and drivers to MCDE host. Is there any 
generic virtual bus type we should use instead of our own type? If you have 
another idea of how to add display devices and drivers without MCDE code 
modifications, please let us know. A virtual bus just gives us a nice framework 
to do this.
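The board-adds-devices / config-selects-drivers flow described above can be sketched as a toy name-matching bus in plain C (all names and the fixed-size tables are illustrative, not the MCDE implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of a virtual display bus: boards register devices,
 * drivers are registered from config, binding is by name match. */
struct drv { const char *name; };
struct dev { const char *name; const struct drv *drv; };

#define MAX_ENTRIES 8
static struct dev *bus_devs[MAX_ENTRIES];
static const struct drv *bus_drvs[MAX_ENTRIES];
static int ndevs, ndrvs;

static void try_bind(struct dev *d)
{
	for (int i = 0; i < ndrvs; i++)
		if (strcmp(d->name, bus_drvs[i]->name) == 0)
			d->drv = bus_drvs[i];
}

/* Board code adds devices at init... */
static void bus_add_device(struct dev *d)
{
	bus_devs[ndevs++] = d;
	try_bind(d);
}

/* ...and a driver binds to any already-registered matching device. */
static void bus_add_driver(const struct drv *drv)
{
	bus_drvs[ndrvs++] = drv;
	for (int i = 0; i < ndevs; i++)
		if (!bus_devs[i]->drv)
			try_bind(bus_devs[i]);
}
```

The point of the model: device and driver registration can happen in either order, and binding happens as soon as both sides are present, which is exactly what the device/driver model gives you over hard-coded calls into MCDE.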

 With the code you currently have, there is only a single driver
 associated
 with this bus type, and also just a single device that gets registered
 here!

And yes, currently there's only one single driver. But there's a lot more 
coming in the pipe. Like some LCD, DPI, DBI, DSI display drivers. And once all 
different u8500 boards are pushed, there will be multiple boards registering 
devices with different setups. But one thing at a time.

 +static int __init mcde_subsystem_init(void)
 +{
 +   int ret;
 +   pr_info("MCDE subsystem init begin\n");
 +
 +   /* MCDE module init sequence */
 +   ret = mcde_init();
 +   if (ret)
 +   return ret;
 +   ret = mcde_display_init();
 +   if (ret)
 +   goto mcde_display_failed;
 +   ret = mcde_dss_init();
 +   if (ret)
 +   goto mcde_dss_failed;
 +   ret = mcde_fb_init();
 +   if (ret)
 +   goto mcde_fb_failed;
 +   pr_info("MCDE subsystem init done\n");
 +
 +   return 0;
 +mcde_fb_failed:
 +   mcde_dss_exit();
 +mcde_dss_failed:
 +   mcde_display_exit();
 +mcde_display_failed:
 +   mcde_exit();
 +   return ret;
 +}
 
 Splitting up the module into four sub-modules and then initializing
 everything from one place indicates that something is done wrong
 on a global scale.
 
 If you indeed need a bus, that should be a separate module that gets
 loaded first and then has the other modules build on top of.

Yes, that's the general idea. We don't completely understand the correct way of 
making sure one module gets loaded before another without selecting an init 
level, like the fs_initcall below that you commented on. I guess most of our 
submodules should be initialized during subsys_init, but we have not found how 
to specify the module init order inside the subsys init level. Maybe you can 
shed some light on this? Makefile order seems like a fragile way of defining 
init order dependencies.
Do you think static init calls from machine subsys init are a better solution?
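The ordering problem being discussed can be demonstrated outside the kernel: GCC/Clang constructor priorities run lower numbers first regardless of link (Makefile) order, which is similar in spirit to initcall levels. A minimal userspace sketch (function names are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Records the order in which the init functions actually ran. */
static char init_order[8];

static void record(char c)
{
	size_t n = strlen(init_order);
	init_order[n] = c;
}

/* "core" must come up before "dss", so it gets the lower priority,
 * even though it appears later in the file (and could be in a later
 * object file on the link line). */
__attribute__((constructor(102))) static void dss_init(void)  { record('d'); }
__attribute__((constructor(101))) static void core_init(void) { record('c'); }
```

Within one kernel init level there is no such numeric sub-priority, which is exactly the gap the mail is asking about.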

 I'm not sure how the other parts layer on top of one another, can you
 provide some more insight?

+----------------------------+
| mcde_fb/mcde_kms/mcde_v4l2 |
+---------------+------------+
|    mcde_dss   |
+   +-----------+
|   | disp drvs |
+---+-----------+
|    mcde hw    |
+---------------+
|      HW       |
+---------------+

 From what I understood so far, you have a single multi-channel display
 controller (mcde_hw.c) that drives the hardware.
 Each controller can have multiple frame buffers attached to it, which
 in turn can have multiple displays attached to each of them, but your
 current configuration only has one of each, right?

Correct, channels A/B (crtcs) can have two blended framebuffers plus 
background color, channels C0/C1 can have one framebuffer.
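To make "two blended framebuffers plus background color" concrete, here is an illustrative per-channel source-over blend in C. This shows what the composition means, not MCDE's register-level behavior; the rounding scheme is an assumption.

```c
#include <assert.h>
#include <stdint.h>

/* out = src*a + dst*(1-a), with alpha in 0..255 and rounding. */
static uint8_t blend(uint8_t src, uint8_t dst, uint8_t alpha)
{
	return (uint8_t)((src * alpha + dst * (255 - alpha) + 127) / 255);
}

/* Two overlay framebuffers composed over a background color,
 * bottom overlay first, then the top overlay over the result. */
static uint8_t compose_pixel(uint8_t bg, uint8_t fb0, uint8_t a0,
			     uint8_t fb1, uint8_t a1)
{
	uint8_t out = blend(fb0, bg, a0);	/* bottom overlay over background */
	return blend(fb1, out, a1);		/* top overlay over the result */
}
```

With `a0 = a1 = 0` the background color passes through untouched; with `a0 = 255` the bottom framebuffer fully replaces it.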

 Right now you have a single top-level bus device for the displays,
 maybe that can be integrated into the controller so the displays are
 properly rooted below the hardware that drives them.

Not sure I understand 100%. Routing is independent of bus structure. Routing 
could change dynamically during runtime reconfiguration using future KMS for 
example. Bus is only for connecting display devices with drivers. Of course we 
could have one bus per port/connector. But then the code would be more complex 
and we would end up with one bus/device/driver per connector (or in some rare 
cases 2-4 on DBI/DSI connectors).

 The frame buffer device also looks weird. Right now you only seem
 to have a single frame buffer registered to a driver in the same
 module. Is that frame buffer not dependent on a controller?

MCDE framebuffers only depend on MCDE DSS. DSS 

RE: [PATCH 09/10] MCDE: Add build files and bus

2010-11-25 Thread Marcus LORENTZON
 -Original Message-
 From: Arnd Bergmann [mailto:a...@arndb.de]
 Sent: den 25 november 2010 17:48
 To: Marcus LORENTZON
 Cc: linux-arm-ker...@lists.infradead.org; Jimmy RUBIN; linux-
 me...@vger.kernel.org; Dan JOHANSSON; Linus WALLEIJ
 Subject: Re: [PATCH 09/10] MCDE: Add build files and bus

 On Thursday 25 November 2010, Marcus LORENTZON wrote:
  From: Arnd Bergmann [mailto:a...@arndb.de]
   On Wednesday 10 November 2010, Jimmy Rubin wrote:
This patch adds support for the MCDE, Memory-to-display
 controller,
found in the ST-Ericsson ux500 products.
   
This patch adds the necessary build files for MCDE and the bus
 that
all displays are connected to.
   
  
   Can you explain why you need a bus for this?
 
  The idea of the bus is to make it easier to add new panel/display
 support
  using the normal device/driver model. Boards add devices at init, and
  drivers are selected in config. If we were to model the real
 physical
  bus structure it would be very complex, hard to use. We use our own
 bus,
  but it's really a virtual bus for adding display devices and drivers
 to
  MCDE host. Is there any generic virtual bus type we should use
 instead
  of our own type? If you have another idea of how to add display
 devices
  and drivers without MCDE code modifications, please let us know. A
 virtual
  bus just give us a nice framework to do this.

 All devices that you cannot probe by asking hardware or firmware are
 normally
 considered platform devices. Then again, a platform device is usually
 identified by its resources, i.e. MMIO addresses and interrupts, which
 I guess your display does not have.

Then we might be on the right track to model them as devices on a platform bus. 
Since most displays/panels can't be probed plug-and-play, devices instead have 
to be statically declared in board-xx.c files in the mach-ux500 folder. Or is 
the platform bus a singleton? Or can we define a new platform bus device?
Displays like HDMI TV-sets are not considered for device/driver in mcde. 
Instead there will be a hdmi-chip-device/driver on the mcde bus. So all devices 
and drivers on this bus are static.

   With the code you currently have, there is only a single driver
   associated
   with this bus type, and also just a single device that gets
 registered
   here!
 
  And yes, currently there's only one single driver. But there's a lot
 more
  coming in the pipe. Like some LCD, DPI, DBI, DSI display drivers. And
 once
  all different u8500 boards are pushed, there will be multiple boards
  registering devices with different setups. But one thing at a time.

 Your approach is making it hard to review the code in context. Adding a
 device driver that uses existing infrastructure is usually
 straightforward,
 but adding infrastructure is a hard thing and needs to be done with
 great
 care, especially if you add infrastructure before it actually is
 needed.

 Staging it in a way that adds all the display drivers later than the
 base driver is a good idea, but it would be helpful to also add the
 infrastructure at the later stage. Maybe you can try to simplify the
 code for now by hardcoding the single display and remove the dynamic
 registration. You still have the code, so once the base code looks
 good for inclusion, we can talk about it in the context of adding
 dynamic display support back in, possibly in exactly the way you are
 proposing now, but perhaps in an entirely different way if we come up
 with a better solution.

What about starting with MCDE HW, which is the core HW driver doing all real 
work? And then continue with the infrastructure + some displays + drivers ...
The only problem is that we would then have a driver that can't be used from 
user space, because I don't think I can find anyone with enough time to write a 
display driver + framebuffer on top of mcde_hw (which is what the existing code does).

 On a more abstract level, when you want to add large chunks of code
 to the kernel, you often cannot find anyone to review and understand
 all of it at once. Splitting a subsystem into ten patches by file
 level rarely helps, so instead you should ideally have a series of
 patches that each add working code that enable more features.

 This is of course more work for you, especially if you did not consider
 it while writing the code in the first place. Still, you should
 always try hard to make code easy to review as much as possible,
 because you need to work with reviewers both to get code in and to
 make sure the quality ends up as good as possible.

   +static int __init mcde_subsystem_init(void)
   +{
   +   int ret;
    +   pr_info("MCDE subsystem init begin\n");
   +
   +   /* MCDE module init sequence */
   +   ret = mcde_init();
   +   if (ret)
   +   return ret;
   +   ret = mcde_display_init();
   +   if (ret)
   +   goto mcde_display_failed;
   +   ret = mcde_dss_init();
   +   if (ret)
   +   goto

RE: [PATCH 00/10] MCDE: Add frame buffer device driver

2010-11-13 Thread Marcus LORENTZON
Hi Hans,
MCDE is for both video and graphics. That is, it supports YUV and RGB buffers 
to be blended onto a background during scanout. And as most SOCs it supports 
normal CRTC type of continuous scanout like LCD and MIPI DPI/DSI video mode and 
command mode scanout like MIPI DBI/DSI. I guess you have seen the slides of 
U8500 published at the last V4L2 summit in Helsinki 
(http://linuxtv.org/downloads/presentations/summit_jun_2010/ste_V4L2_developer_meeting.pdf).
 And as you might remember I had the action to propose a way of posting GEM 
type of buffers to V4L2 devices. That's kind of what I'm working on. First we 
just had to get the basic support up for customers using framebuffer API. Even 
though it's an old API it can do most of what you need on a mobile device. And 
the work to add framebuffer support was not a big deal once we had MCDE DSS 
driver framework in place. This is what Jimmy is now pushing.

The next step we are working on is investigating how to extend the support for 
all the other features of MCDE. The two obvious choices are V4L2 and DRM. Both 
have their pros and cons. DRM has wide support in the community for window 
systems like X and now lately Wayland (go Kristian ;). I have not had time to 
investigate video overlay support in KMS yet. But my feeling is that desktop 
drivers are moving away from overlays and I guess userspace OpenGL composition 
is replacing overlays, blitters and other 2D HW rapidly. But my personal 
feeling is that SOC platform vendors try to do both. So personally I still see 
big possibilities for blitter/overlay HW, especially for power efficiency since 
these HW blocks can do basic composition with less gates and lower clock rates 
- less power.

To enable this type of HW we need APIs that can be used by ordinary userspace 
frameworks. Here KMS and GEM allow us to handle resolution changes, memory 
management and inter-process buffer sharing. But the structure of DRM still 
looks a bit too daunting for MCDE. Of course we could squeeze MCDE into the DRM 
framework; we just have to understand what we have to use of DRM and what is 
actually valuable to reuse. It's not a big deal to maintain a driver like MCDE. 
On the other hand we might want to use the KMS API. And Alex presented a new 
option I had not considered before: simply implement the KMS ioctl API. Both options 
are something we are looking at right now.
Then there's the video stuff. Like video/camera overlays. Here userspace 
frameworks push us in the direction of V4L2 (mainly gstreamer). For example 
exposing overlays as V4L2 output devices. V4L2 media controller, mem2mem, 
output devices also look like the best option for building a complete open 
source OpenWFC solution on top of Linux kernel APIs, giving us a standard API 
for user space blitter/overlay use to be used in _parallel_ with OpenGL(ES). 
Note that OpenWFC is a rendering API for blitters and overlays, not a standard 
window system as many confuse it with.

The only question left is where do we control the muxing of 
channels/overlays/encoders/outputs etc. The V4L2 media controller API looks 
like the most flexible solution to do this. But KMS still has some features for 
controlling these types of connections for display outputs. And if we choose to 
go for KMS for resolution changes, TV mode settings etc., there's little reason 
to go down the V4L2 media controller path. And the V4L2 media controller path 
is still in early development.

I guess these lists are a good place to start learning others' views on DRM/KMS 
vs. V4L2/MC for SoC display subsystems like MCDE DSS. Of course, these are not 
exclusive, but we still have to look at the value of supporting both APIs in 
parallel.

/BR
/Marcus

From: Hans Verkuil [hverk...@xs4all.nl]
Sent: Saturday, November 13, 2010 12:54 PM
To: Marcus LORENTZON
Cc: Alex Deucher; Jimmy RUBIN; linux-fb...@vger.kernel.org; 
linux-arm-ker...@lists.infradead.org; linux-media@vger.kernel.org; Linus 
WALLEIJ; Dan JOHANSSON
Subject: Re: [PATCH 00/10] MCDE: Add frame buffer device driver

Hi Marcus,

Is your display system 'just' for graphics output? Or can it also do video? I
ask because many SoC display systems are designed for video output with a
separate graphics layer that can be blended in. Usually the video output is
handled through the video4linux API and the graphics through fbdev. Using drm
is not yet common for SoC (I can't remember seeing anything of that kind, but
I never actively looked for it either). With the increasing complexity of
SoC graphics parts I am sure drm will become much more relevant.

A separate issue is memory handling. V4L and graphics drivers share similar
problems. It's my intention to start looking into this some time next year.
It all seems quite messy at the moment.

Regards,

Hans

On Friday, November 12, 2010 17:46:53 Marcus LORENTZON wrote:
 Hi Alex,
 Do you have any idea of how we should use KMS without being a real drm 3D 
 device? Do you

RE: [PATCH 00/10] MCDE: Add frame buffer device driver

2010-11-12 Thread Marcus LORENTZON
Hi Alex,
Do you have any idea of how we should use KMS without being a real drm 3D 
device? Do you mean that we should use the KMS ioctls in our display driver? Or 
do you mean that we should expose a /dev/drmX device only capable of KMS and no 
GEM?

What if we were to add a drm driver for 3D later on? Is it possible to have a 
separate drm device for display and one for 3D, but still share GEM-like 
buffers between these devices? It looks like GEM handles are device-relative. 
This is a vital use case for us. And we really don't want to entangle our MCDE 
display driver, memory manager and 3D driver without a good reason. Today they 
are maintained as independent drivers without code dependencies. Would this 
still be possible using drm? Or does drm require the memory manager, 3D and 
display to be one driver? I can see the drm = graphics card model on desktop 
machines, but embedded UMA systems don't really have this dependency. You can 
switch the memory manager, 3D driver and display manager in menuconfig 
independently of the other drivers. Not that it's used like that on one 
particular HW, but for different HW you can use different parts. In drm it 
looks like all these pieces belong together.

Do you think the driver should live in the gpu/drm folder, even though it's 
not a gpu driver?

Do you know of any other driver that use DRM/KMS API but not being a PC-style 
graphics card that we could look at for inspiration?

And GEM, is that the only way of exposing graphics buffers to user space in 
drm? Or is it possible (is it OK) to expose another similar API? You mentioned 
that there are TTM and GEM; do both expose user space APIs for things like 
sharing buffers between processes, security, cache management and 
defragmentation? Or are these types of features defined by DRM and not TTM/GEM?

/BR
/Marcus

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: den 12 november 2010 16:53
 To: Jimmy RUBIN
 Cc: linux-fb...@vger.kernel.org; linux-arm-ker...@lists.infradead.org;
 linux-media@vger.kernel.org; Linus WALLEIJ; Dan JOHANSSON; Marcus
 LORENTZON
 Subject: Re: [PATCH 00/10] MCDE: Add frame buffer device driver
 
 On Fri, Nov 12, 2010 at 8:18 AM, Jimmy RUBIN
 jimmy.ru...@stericsson.com wrote:
  Hi Alex,
 
  Good point, we are looking at this for possible future improvements
 but for the moment we feel like
  the structure of drm does not add any simplifications for our driver.
 We have the display manager (MCDE DSS = KMS) and the memory manager
 (HWMEM = GEM) that could be migrated to drm framework. But we do not
 have drm drivers for 3D hw and this also makes drm a less obvious
 choice at the moment.
 
 
 You don't have to use the drm strictly for 3D hardware.  historically
 that's why it was written, but with kms, it also provides an interface
 for complex display systems.  fbdev doesn't really deal properly with
 multiple display controllers or connectors that are dynamically
 re-routeable at runtime.  I've seen a lot of gross hacks to fbdev to
 support this kind of stuff in the past, so it'd be nice to use the
 interface we now have for it if you need that functionality.
 Additionally, you can use the shared memory manager to both the
 display side and v4l side.  While the current drm drivers use GEM
 externally, there's no requirement that a kms driver has to use GEM.
 radeon and nouveau use ttm internally for example.  Something to
 consider.  I just want to make sure people are aware of the interface
 and what it's capable of.
 
 Alex
 
  Jimmy
 
  -Original Message-
  From: Alex Deucher [mailto:alexdeuc...@gmail.com]
  Sent: den 10 november 2010 15:43
  To: Jimmy RUBIN
  Cc: linux-fb...@vger.kernel.org; linux-arm-
 ker...@lists.infradead.org; linux-media@vger.kernel.org; Linus WALLEIJ;
 Dan JOHANSSON
  Subject: Re: [PATCH 00/10] MCDE: Add frame buffer device driver
 
  On Wed, Nov 10, 2010 at 7:04 AM, Jimmy Rubin
 jimmy.ru...@stericsson.com wrote:
  This set of patches contains a display sub system framework (DSS)
 which is used to
  implement the frame buffer device interface and a display device
  framework that is used to add support for different types of displays
  such as LCD, HDMI and so on.
 
  For complex display hardware, you may want to consider using the drm
  kms infrastructure rather than the kernel fb interface.  It provides
  an API for complex display hardware (multiple encoders, display
  controllers, etc.) and also provides a legacy kernel fb interface for
  compatibility.  See:
  Documentation/DocBook/drm.tmpl
  drivers/gpu/drm/
  in the kernel tree.
 
  Alex
 
 
  The current implementation supports DSI command mode displays.
 
  Below is a short summary of the files in this patchset:
 
  mcde_fb.c
  Implements the frame buffer device driver.
 
  mcde_dss.c
  Contains the implementation of the display sub system framework
 (DSS).
  This API is used by the frame buffer device driver.
 
  mcde_display.c
  Contains default implementations