RE: [PATCH] RFC: omapdrm DRM/KMS driver for TI OMAP platforms

2011-09-07 Thread Tomi Valkeinen
On Wed, 2011-09-07 at 15:00 +0900, Inki Dae wrote:
 Hello, Rob.
 
  -Original Message-
  From: robdcl...@gmail.com [mailto:robdcl...@gmail.com] On Behalf Of Rob
  Clark
  Sent: Tuesday, September 06, 2011 1:05 AM
  To: Inki Dae
  Cc: dri-devel@lists.freedesktop.org; linaro-...@lists.linaro.org;
  Valkeinen, Tomi
  Subject: Re: [PATCH] RFC: omapdrm DRM/KMS driver for TI OMAP platforms
  
  On Mon, Sep 5, 2011 at 4:58 AM, Inki Dae inki@samsung.com wrote:

snip

   How about moving codes above into interrupt handler for vblank? maybe there
  
  I should mention a couple of things:
  
  1) drm vblank stuff isn't really used currently.. it is actually DSS2
  layer that is registering for the display related interrupt(s).  I'm
  not really sure how to handle this best.  I suppose the DRM driver
  could *also* register for these interrupts, but that seems a bit odd..
  
 
 The DRM framework supports only one interrupt handler; this issue should
 be resolved. Currently I made our driver use its own request_irq, not the
 DRM framework's. We only set drm_device->irq_enabled to 1, and the
 interrupt handler is registered in the display controller or HDMI driver
 respectively. But I'm not sure this way is best, so I will look into it
 more. Anyway, the current DRM framework should be fixed to handle
 multiple IRQs.

I am actually going to hide the display subsystem interrupts, as they
are really a DSS2 driver internal thing, and create some kind of
notifications for the events that the users of DSS2 need to see. I don't
know how that affects the OMAP DRM driver, but it should be a much
cleaner interface.

  Also, I guess it is also worth mentioning.. when it comes to vblank,
  there are two fundamentally different sorts of displays we deal with.
  Something like DVI/HDMI/etc which need constant refreshing.  In these
  cases we constantly scan-out the buffer until the next page
  flip+vsync.  And smart panels, where they get scanned out once and
  then DSS is no longer reading the scanout buffer until we manually
  trigger an update.
  
 
 Is the smart panel a CPU-interface based LCD panel that has its own
 framebuffer internally?

I don't like the smart panel term very much, but yes, it has an
internal framebuffer. The panels are connected via DSI command mode or
the DBI bus. They can be updated with the CPU, but normally they are not,
and they use the same display controller hardware as other displays to
update the screen.

The biggest difference between these smart panels and conventional
ones is that pixel data is only sent to the panel when needed (i.e. when
somebody has changed the pixel data); the panel keeps showing the last
sent frame on the screen independently, and the SoC can even go to
sleep.

 Tomi


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Proposal for a low-level Linux display framework

2011-09-15 Thread Tomi Valkeinen
Hi,

I am the author of OMAP display driver, and while developing it I've
often felt that there's something missing in Linux's display area. I've
been planning to write a post about this for a few years already, but I
never got to it. So here goes at last!

---

First I want to (try to) briefly describe what we have on OMAP, to give
a bit of background for my point of view, and to have an example HW.

The display subsystem (DSS) hardware on OMAP handles only showing pixels
on a display, so it doesn't contain anything that produces pixels, like
3D blocks or accelerated copying. All it does is fetch pixels from SDRAM,
possibly apply some modifications to them (color format conversions,
etc.), and output them to a display.

The hardware has multiple overlays, which are like hardware windows.
They fetch pixels from SDRAM, and output them in a certain area on the
display (possibly with scaling). Multiple overlays can be composited
into one output.

So we may have something like this, where all overlays read pixels from
separate areas in memory, and all overlays go to the LCD display:

 .-----.     .------.        .-----.
 | mem |---->| ovl0 |--.---->| LCD |
 '-----'     '------'  |     '-----'
 .-----.     .------.  |
 | mem |---->| ovl1 |--+
 '-----'     '------'  |
 .-----.     .------.  |     .-----.
 | mem |---->| ovl2 |--'     | TV  |
 '-----'     '------'        '-----'

The LCD display can be a rather simple one, like a standard monitor or a
simple panel directly connected to the parallel RGB output, or a more
complex one. A complex panel needs something more than just
turn-it-on-and-go. This may involve sending and receiving messages
between the OMAP and the panel, but more generally, there's a need for
custom code that handles the particular panel. And the complex panel is
not necessarily a panel at all; it may be a buffer chip between the OMAP
and the actual panel.

The software side can be divided into three parts: the lower level
omapdss driver, the lower level panel drivers, and higher level drivers
like omapfb, v4l2 and omapdrm.

The omapdss driver handles the OMAP DSS hardware, and offers a
kernel-internal API which the higher-level drivers use. omapdss does not
know anything about fb or drm; it just offers core display services.

The panel drivers handle particular panels/chips. A panel driver may be
very simple in the case of a conventional display, basically doing pretty
much nothing, or it may be a bigger piece of code that handles
communication with the panel.

The higher level drivers handle buffers and tell omapdss things like
where to find the pixels, what size the overlays should be, and use the
omapdss API to turn displays on/off, etc.

---

There are two things that I'm proposing to improve the Linux display
support:

First, there should be a bunch of common video structs and helpers that
are independent of any higher-level framework. Things like video
timings, mode databases, and EDID seem to be implemented multiple times
in the kernel. There shouldn't be anything in them that depends on any
particular display framework, so they could be implemented just once and
all the frameworks could use them.

Second, I think there could be use for a common low-level display
framework. Currently the lower-level code (display HW handling, etc.)
and higher-level code (buffer management, policies, etc.) usually seem
to be tied together, as in the fb framework or DRM. Granted, the
frameworks do not force that, and for OMAP we indeed have omapfb and
omapdrm using the lower-level omapdss. But I don't see that this is
anything OMAP-specific as such.

I think the lower level framework could have components something like
this (the naming is OMAP oriented, of course):

overlay - a hardware window, gets pixels from memory, possibly does
format conversions, scaling, etc.

overlay compositor - composes multiple overlays into one output,
possibly doing things like translucency.

output - gets the pixels from the overlay compositor, and sends them out
according to particular video timings when using a conventional video
interface, or via any other means when using non-conventional video
buses like DSI command mode.

display - handles an external display. For conventional displays this
wouldn't do much, but for complex ones it does whatever is needed by
that particular display.

This is something similar to what DRM has, I believe. The biggest
difference is that the display can be a full-blown driver for a complex
piece of HW.

This kind of low-level framework would be good for two purposes: 1) I
think it's a good division generally, having the low-level HW driver
separate from the higher-level buffer/policy management, and 2) fb, drm,
v4l2 or any possible future framework could all use the same low-level
framework.

---

Now, I'm quite sure the above framework could work quite well with any
OMAP like hardware, with unified memory (i.e. the video buffers are in
SDRAM) and 3D 

Re: Proposal for a low-level Linux display framework

2011-09-15 Thread Tomi Valkeinen
On Thu, 2011-09-15 at 09:59 -0500, Keith Packard wrote:
 On Thu, 15 Sep 2011 15:07:05 +0300, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:
 
  This was a very rough and quite short proposal, but I'm happy to improve
  and extend it if it's not totally shot down.
 
 Jesse Barnes has put together a proposal much like this to work within
 the existing DRM environment. This is pretty much the last piece of
 missing mode-setting functionality that we know of, making DRM capable
 of fully supporting existing (and planned) devices.
 
 Here's a link to some older discussion on the issue, things have changed
 a bit since then and we had a long talk about this during the X
 Developers' Conference this week in Chicago. Expect an update to his
 proposal in the coming weeks.
 
 http://lists.freedesktop.org/archives/dri-devel/2011-April/010559.html
 

Thanks for the link.

Right, DRM already has the components I described in my proposal, and
adding overlays brings it even closer. However, I think there are two
major differences:

1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
the plan is to make DRM the core Linux display framework, upon which
everything else is built, and fb and v4l2 are changed to use DRM.

But even if it was done like that, I see that it's combining two
separate things: 1) the lower level HW control, and 2) the upper level
buffer management, policies and userspace interfaces.

2) It's missing the panel driver part. This is rather important on
embedded systems, as the panels often are not dummy panels, but they
need things like custom initialization, sending commands to adjust
backlight, etc.

 Tomi




Re: Proposal for a low-level Linux display framework

2011-09-15 Thread Tomi Valkeinen
On Thu, 2011-09-15 at 10:50 -0500, Keith Packard wrote:
 On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:
 
  1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
  the plan is to make DRM the core Linux display framework, upon which
  everything else is built, and fb and v4l2 are changed to use DRM.
 
 I'd like to think we could make DRM the underlying display framework;
 it already exposes an fb interface, and with overlays, a bit more of the
 v4l2 stuff is done as well. Certainly eliminating three copies of mode
 setting infrastructure would be nice...

Ok, sounds good to me. We (as in OMAP display people) are already
planning to take DRM into use, so no problem there.

  But even if it was done like that, I see that it's combining two
  separate things: 1) the lower level HW control, and 2) the upper level
  buffer management, policies and userspace interfaces.
 
 Those are split between the DRM layer and the underlying device driver,
 which provides both kernel (via fb) and user space interfaces.

I'm not so familiar with DRM, but by device driver do you mean a driver
for the hardware which handles the display output (gfx card or whatever
it is on that platform)?

If so, it sounds good. That quite well matches what omapdss driver does
currently for us. But we still have semi-complex omapdrm between omapdss
and the standard drm layer.

Rob, would you say omapdrm is more of a DRM wrapper for omapdss than a
real separate entity? If so, then we could possibly in the future (when
nobody else uses omapdss) change omapdss to support DRM natively (or
make omapdrm support OMAP HW natively, whichever way =).

  2) It's missing the panel driver part. This is rather important on
  embedded systems, as the panels often are not dummy panels, but they
  need things like custom initialization, sending commands to adjust
  backlight, etc.
 
 We integrate the panel (and other video output) drivers into the device
 drivers. With desktop chips, they're not easily separable. None of the
 desktop output drivers are simple; things like DisplayPort require link
 training, and everyone needs EDID. We share some of that code in the DRM
 layer today, and it would be nice to share even more.

I don't think we're speaking of the same kind of panel drivers. I think
there are two different drivers here:

1) output drivers, which handle the output from the SoC / gfx card: for
example DVI, DisplayPort, MIPI DPI/DBI/DSI.

2) panel drivers, which handle panel-specific things. Each panel may
support custom commands and features, for which we need a dedicated
driver. This driver is not platform specific; it should work on any
platform that has the output the panel uses.

As an example, DSI command mode displays can be quite complex:

The DSI bus is a half-duplex serial bus, and while it's designed for
displays, you could easily use it for any communication between the SoC
and the peripheral.

The panel could have a feature like content adaptive backlight control,
which would be configured via the DSI bus by sending a particular
command to the panel (possibly after first reading something from the
panel). The panel driver would accomplish this in much the same way one
uses, say, i2c: it would use the platform's DSI support to send and
receive packets.

Or a more complex scenario (but still a realistic one; been there, done
that) is sending the image to the panel in multiple parts, sending
configuration commands to the panel between the parts (and still getting
it all done in time to avoid tearing).

And to complicate things more, there are DSI bus features like LP mode
(low power, basically a low-speed mode) and HS mode (high speed),
virtual channel IDs, and whatnot, each of which a panel may need used in
a particular manner. Some panels may require that initial configuration
be done in LP mode, or that configuration commands be sent to a certain
virtual channel ID.

The point is that we cannot have a standard MIPI DSI command mode panel
driver that would work for all DSI command mode panels; instead we need
(in the worst case) a separate driver for each panel.

The same goes, to a lesser extent, for other panels too. Some are
configured via i2c or SPI.

 Tomi


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: Proposal for a low-level Linux display framework

2011-09-16 Thread Tomi Valkeinen
On Thu, 2011-09-15 at 19:55 -0500, Keith Packard wrote:
 On Thu, 15 Sep 2011 20:21:15 +0300, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:
 
  2) panel drivers, handles panel specific things. Each panel may support
  custom commands and features, for which we need a dedicated driver. And
  this driver is not platform specific, but should work with any platform
  which has the output used with the panel.
 
 Right, we've got DDC ports (which are just i2c) and DisplayPort aux
 channel stuff.
 
 The DDC stuff is abstracted out and shared across the drivers, but the
 DisplayPort aux channel code is not -- it's duplicated in every output
 driver. 

I feel that you are still talking about the output driver, not the
panel. DDC and DP aux are part of the connector entity in DRM, right?
But there's no separate display entity behind the connector which would
handle the peculiarities of a particular panel/display, say the DSI
panel model L33T from AcmeCorp.

So, as I see it, DDC and DP aux belong to the output driver, and the
panel driver uses them to do whatever is needed for a particular panel.

  DSI bus is a half-duplex serial bus, and while it's designed for
  displays you could use it easily for any communication between the SoC
  and the peripheral.
 
 Yeah, HDMI uses DDC for all kinds of crazy stuff in the CE world.

But that is still more or less standard HDMI stuff, isn't it? So you
implement it once for HDMI, and then it works with all HDMI monitors?

Or is there some way to implement custom behavior for one particular
HDMI monitor? Is this custom behavior in a kernel driver or handled in
userspace?

  The point is that we cannot have standard MIPI DSI command mode panel
  driver which would work for all DSI cmd mode panels, but we need (in
  the worst case) separate driver for each panel.
 
 It sounds like we do want to share code for those bits, much like we
 have DDC split out now. And, we should do something about the
 DisplayPort aux channel stuff to avoid duplicating it everywhere.

Yep. What I had in mind for DSI with my low-level framework would be a
mipi_dsi component that offers services for using the DSI bus. Each
platform which supports DSI would implement the DSI support for its
HW. Then the DSI panel driver could do things like:

dsi->write(dev, virtual_channel_id, buf, len);

dsi->set_max_return_packet_size(dev, 10);
dsi->read(dev, virtual_channel_id, read_cmd, recv_buf, len);

An example DSI command mode panel driver can be found in
drivers/video/omap2/displays/panel-taal.c, which uses omapdss' DSI
functions directly but could quite easily use a common DSI interface and
thus be platform independent.

 I'm not sure a common interface to all of these different
 channels makes sense, but surely a DSI library and an aux channel
 library would fit nicely alongside the existing DDC library.

What do you mean by "channel"? Any video or command bus going to the
display? Yes, I think they are quite different, and I don't see a point
in trying to make a common interface for them.

DSI is in many ways a real bus. You can connect multiple peripherals to
one DSI bus (though it needs a DSI hub), and communicate with them using
their virtual channel IDs. And quite often there are DSI chips that
transform the DSI packets into some other form. Some real example
configurations:

Plain DSI panel:

[SoC] ---DSI--- [DSI panel]

DSI-2-DisplayPort converter chip:

[SoC] ---DSI--- [DSI chip] ---DP--- [DP monitor]

DSI buffer chip supporting two DSI panels:

[SoC] ---DSI--- [DSI chip] -+--DSI--- [DSI panel 1]
                            '--DSI--- [DSI panel 2]

It would be nice to be able to model this somehow neatly with device
drivers. For example, the DSI panel from the first example could be used
in the two-panel configuration, and if (and when) the panel requires
custom configuration, the same panel driver could be used in both cases.
In the first case the panel driver would use the DSI support from the
SoC; in the third case the panel driver would use the DSI support from
the DSI chip (which would, in turn, use the DSI support from the SoC).

 Tomi





Re: Proposal for a low-level Linux display framework

2011-09-19 Thread Tomi Valkeinen
On Fri, 2011-09-16 at 17:53 +0100, Alan Cox wrote:
  I'm not sure a common interface to all of these different
  channels makes sense, but surely a DSI library and an aux channel
  library would fit nicely alongside the existing DDC library.
 
 DSI and the various other MIPI bits tend to be horribly panel and device
 specific. In one sense, yes, it's a standard with standard commands,
 processes, queries etc.; on the other hand, a lot of stuff is oriented
 around the 'it's a fixed-configuration unit, we don't need queries' view.

I think it's a bit more complex than that. True, there are MIPI
standards: for video there are DPI, DBI and DSI, and for commands there
is DCS. And, as you mentioned, many panels need custom initialization,
support only parts of DCS, or have other quirks.

However, I think the biggest thing to realize here is that DSI
peripherals can be anything. It's not always so that you have a DSI bus
and a single panel connected to it. You can have hubs, buffer chips,
etc.

We need DSI peripheral device drivers for those; a single DSI output
driver cannot work. And this is also the most important point in my
original proposal.

 There also tends to be a lot of vendor magic initialisation logic, both
 chipset and device dependent, and often 'plumbing dependent' on the SoC

While I have DSI experience with only one SoC, I do believe it'd be
possible to create a DSI API that would allow us to have platform
independent DSI peripheral drivers.

Can you share any SoC side DSI peculiarities on Intel's SoCs?

 Tomi




Re: Proposal for a low-level Linux display framework

2011-09-19 Thread Tomi Valkeinen
On Sun, 2011-09-18 at 23:53 -0700, Keith Packard wrote:
 On Mon, 19 Sep 2011 09:33:34 +0300, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:
 
  I think it's a bit more complex than that. True, there are MIPI
  standards, for the video there are DPI, DBI, DSI, and for the commands
  there is DCS. And, as you mentioned, many panels need custom
  initialization, or support only parts of the DCS, or have other
  quirks.
 
 So DSI is more like i2c than the DisplayPort aux channel or DDC. That

Well, not quite. DSI is like DisplayPort and the DisplayPort aux channel
combined; there's only one bus, DSI, which is used to transfer both
video data and commands.

For DSI video mode, the transfer is somewhat like with traditional
displays: video data is sent as a constant stream according to a pixel
clock. However, before the video stream is enabled, the bus can be used
for bi-directional communication. And even when the video stream is
enabled, there can be other communication during the blanking periods.

For DSI command mode, the transfer is a bit like high-speed i2c:
messages are sent when needed (when userspace gives the command to
update), without any strict timings. In practice this means that the
peripheral needs to have framebuffer memory of its own, which it uses to
refresh the actual panel (or to send the pixels onward to another
peripheral).

As the use patterns of these two types of displays are quite different,
we have the terms auto-update and manual-update displays for them.

 seems fine; you can create a DSI infrastructure like the i2c
 infrastructure and then just have your display drivers use it to talk to
 the panel. We might eventually end up with some shared DRM code to deal
 with common DSI functions for display devices, like the EDID code
 today, but that doesn't need to happen before you can write your first
 DSI-using display driver.

One difference between i2c and DSI is that i2c is independent of the
video path, so it's easy to keep it separate from DRM. But for DSI the
data is produced by the video hardware using the overlays, encoders,
etc. I don't quite see how we could have an i2c-like separate DSI API
that wasn't part of DRM.

And even in the simpler case of MIPI DPI, which is a traditional
parallel RGB interface, a panel may need custom configuration via, say,
i2c or SPI. We can, of course, create an i2c device driver for the
panel, but how is that then connected to DRM? The i2c driver may need to
know things like when the display is enabled/disabled, about backlight
changes, or about any other display-related event.

Is there a way for the i2c driver to get these events, and to add new
properties to DRM (say, if the panel has a feature configured via i2c,
but we'd like it to be visible to userspace via the DRM driver)?

 Tomi




Re: Proposal for a low-level Linux display framework

2011-09-21 Thread Tomi Valkeinen
On Tue, 2011-09-20 at 23:20 +0200, Patrik Jakobsson wrote:

 Ok, not sure I understand the complexity of DSI. Can overlay composition
 occur after/at the DSI stage (through MCS perhaps)? Or is it a matter of
 panels requiring special scanout buffer formats that for instance V4L needs
 to know about in order to overlay stuff properly? Or am I getting it all 
 wrong?

I don't know what MCS is. But DSI is just a bi-directional transfer
protocol between the SoC and the peripheral; you can send arbitrary data
over it.

Normally the SoC composes a pixel stream using overlays and whatnot,
which goes to the DSI hardware, which then serializes the data and sends
it to the peripheral.

But you can as well send any data, like commands, and the peripheral can
respond with any relevant data.

 Tomi




Re: drm pixel formats update

2011-11-29 Thread Tomi Valkeinen
On Tue, 2011-11-29 at 13:10 +0100, Laurent Pinchart wrote:

 To make it perfectly clear, I want to emphasize that I'm not trying to replace
 DRM, FBDEV and V4L2 with a new shared subsystem. What I would like to see in
 the near future is collaboration and sharing of core features that make
 sense to share. I believe we should address this from a pain-point point of
 view. The one that led me to writing this e-mail is that developing a driver
 that implements both the DRM and FBDEV APIs for video output currently
 requires various similar structures, and code to translate between all of
 them. That code can't be shared between multiple drivers, is error-prone, and
 painful to maintain.

We've been in the same situation with the OMAP display driver for years.
The same low-level display driver is used by the FB, V4L2 and now DRM
drivers, so I didn't have much choice but to implement omapdss versions
of things like video timings and color formats.

I've been planning to write code for this almost as long as omapdss has
had this problem, but there's always been
yet-another-important-thing-to-do. So it's good that somebody finally
brought this up =).

I'm happy to test any code related to this with omapdss.

 Tomi





Re: [PATCH 0/2] omap/drm: dmm/tiler support for GEM buffers

2011-12-07 Thread Tomi Valkeinen
Hi,

On Mon, 2011-12-05 at 19:19 -0600, Rob Clark wrote:
 From: Rob Clark r...@ti.com
 
 Support for DMM and tiled buffers.  The DMM/TILER block in omap4+ SoC
 provides support for remapping physically discontiguous buffers for
 various DMA initiators (DSS, IVAHD, etc) which do not otherwise support
 non-physically contiguous buffers, as well as providing support for
 tiled buffers.
 
 See the descriptions in the following two patches for more details.

Why is the tiler/dmm driver integrated into the drm driver?

 Tomi





Re: [PATCH 0/2] omap/drm: dmm/tiler support for GEM buffers

2011-12-08 Thread Tomi Valkeinen
On Wed, 2011-12-07 at 07:29 -0600, Rob Clark wrote:
 On Wed, Dec 7, 2011 at 3:31 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  Hi,
 
  On Mon, 2011-12-05 at 19:19 -0600, Rob Clark wrote:
  From: Rob Clark r...@ti.com
 
  Support for DMM and tiled buffers.  The DMM/TILER block in omap4+ SoC
  provides support for remapping physically discontiguous buffers for
  various DMA initiators (DSS, IVAHD, etc) which do not otherwise support
  non-physically contiguous buffers, as well as providing support for
  tiled buffers.
 
  See the descriptions in the following two patches for more details.
 
  Why is the tiler/dmm driver integrated into the drm driver?
 
 Basically because of a big list of reasons to keep it integrated, and
 no good reason that I could think of to make it a standalone driver.

Well, I think it's good architecture to keep independent things
separate. Also, we have display frameworks in the kernel other than DRM.

 1) Because the function/usage is most like a GTT in other systems..
 the usage is really graphics/multimedia related so GEM is a natural
 way to expose it to userspace.  Other places we want to use tiler
 buffers, like camera, are neatly handled by using dmabuf to export the
 GEM buffer to a different device.
 
 2) We went down the separate driver path in the past, and it really
 exposes a lot of problems.  See the hacks that were done in the past
 to get wakeup/resume sequencing correct when tiler was a separate
 driver.  (hint: the table of page addresses needs to be reprogrammed
 before any access to buffer mapped in DMM is done.. this can be
 accomplished quite simply by restoring the LUT before enabling any
 video pipes when it is in a single driver... although that is still in
 the TODO)
 
 3) Doing some of the more advanced stuff, like page flipping using
 DMM's synchronized refill to update page addresses synchronized with
 scannout will, I think, end up being some kinda weird API.. I don't
 think I'd want to make that a public API exported by one driver
 consumed by another, but not such a problem if it is just something
 used internally by one driver.
 
 4) The LUT isn't really big enough to be managed statically like we
 did in the past.. it needs to be managed dynamically, mapping and
 evicting buffers.  This is something that is typical with other gfx
 drivers in their management of their GTT..
 
 5) I wouldn't really want to duplicate the userspace mmap'ing games
 for 2d buffers in a lot of different drivers.

I can't really argue with your points, as I'm not familiar with the
problems with the tiler. So I'm not questioning the decision to
integrate the tiler code into DRM, just expressing my thoughts =).

If the tiler driver had only a kernel-internal API and no userspace
APIs, I don't see why it'd be much more difficult to have it as a
separate driver than integrated into DRM. Well, except if the tiler code
itself uses features offered by DRM extensively, and having the tiler
code as an independent driver would mean replicating all those features.

I guess it's not quite a fair comparison, but I'm comparing the tiler to
the vrfb driver (well, lib is perhaps a better word for it), which
doesn't depend on any framework and can be used by any kernel component.

 Tomi





Re: [PATCH] OMAPDSS: HDMI: hot plug detect fix

2012-02-21 Thread Tomi Valkeinen
On Mon, 2012-02-20 at 15:03 -0600, Rob Clark wrote:
 From: Rob Clark r...@ti.com
 
 The "OMAPDSS: HDMI: PHY burnout fix" commit switched the HDMI driver
 over to using a GPIO for plug detect.  Unfortunately the ->detect()
 method was not also updated, causing HDMI to no longer work for the
 omapdrm driver (because it would actually check whether a connection was
 detected before attempting to enable the display).

Thanks, I totally missed that.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-03-06 Thread Tomi Valkeinen
On Mon, 2012-03-05 at 10:54 -0600, Rob Clark wrote:
 From: Andy Gross andy.gr...@ti.com
 
 Register OMAP DRM/KMS platform device, and reserve a CMA region for
 the device to use for buffer allocation.  DMM is split into a
 separate device using hwmod.
 
 Signed-off-by: Andy Gross andy.gr...@ti.com
 Signed-off-by: Rob Clark r...@ti.com
 ---
  arch/arm/plat-omap/Makefile   |2 +-
  arch/arm/plat-omap/common.c   |3 +-
  arch/arm/plat-omap/drm.c  |   83 +
  arch/arm/plat-omap/include/plat/drm.h |   64 +
  4 files changed, 150 insertions(+), 2 deletions(-)
  create mode 100644 arch/arm/plat-omap/drm.c
  create mode 100644 arch/arm/plat-omap/include/plat/drm.h
 
 diff --git a/arch/arm/plat-omap/Makefile b/arch/arm/plat-omap/Makefile
 index 9a58461..b86e6cb 100644
 --- a/arch/arm/plat-omap/Makefile
 +++ b/arch/arm/plat-omap/Makefile
 @@ -4,7 +4,7 @@
  
  # Common support
  obj-y := common.o sram.o clock.o devices.o dma.o mux.o \
 -  usb.o fb.o counter_32k.o
 +  usb.o fb.o counter_32k.o drm.o
  obj-m :=
  obj-n :=
  obj-  :=
 diff --git a/arch/arm/plat-omap/common.c b/arch/arm/plat-omap/common.c
 index 06383b5..0d87dab 100644
 --- a/arch/arm/plat-omap/common.c
 +++ b/arch/arm/plat-omap/common.c
 @@ -21,10 +21,10 @@
  #include <plat/board.h>
  #include <plat/vram.h>
  #include <plat/dsp.h>
 +#include <plat/drm.h>
  
  #include <plat/omap-secure.h>
  
 -
  #define NO_LENGTH_CHECK 0x
  
  struct omap_board_config_kernel *omap_board_config __initdata;
 @@ -65,6 +65,7 @@ const void *__init omap_get_var_config(u16 tag, size_t *len)
  
  void __init omap_reserve(void)
  {
 + omapdrm_reserve_vram();
   omapfb_reserve_sdram_memblock();
   omap_vram_reserve_sdram_memblock();
   omap_dsp_reserve_sdram_memblock();
 diff --git a/arch/arm/plat-omap/drm.c b/arch/arm/plat-omap/drm.c

As Tony said, mach-omap2 is probably a better place. fb.c is in
plat-omap because it supports both OMAP1 and OMAP2+.

 new file mode 100644
 index 000..28279df
 --- /dev/null
 +++ b/arch/arm/plat-omap/drm.c
 @@ -0,0 +1,83 @@
 +/*
 + * DRM/KMS device registration for TI OMAP platforms
 + *
 + * Copyright (C) 2012 Texas Instruments
 + * Author: Rob Clark rob.cl...@linaro.org
 + *
 + * This program is free software; you can redistribute it and/or modify it
 + * under the terms of the GNU General Public License version 2 as published by
 + * the Free Software Foundation.
 + *
 + * This program is distributed in the hope that it will be useful, but WITHOUT
 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 + * more details.
 + *
 + * You should have received a copy of the GNU General Public License along with
 + * this program.  If not, see <http://www.gnu.org/licenses/>.
 + */
 +
 +#include <linux/module.h>
 +#include <linux/kernel.h>
 +#include <linux/mm.h>
 +#include <linux/init.h>
 +#include <linux/platform_device.h>
 +#include <linux/dma-mapping.h>
 +#ifdef CONFIG_CMA
 +#  include <linux/dma-contiguous.h>
 +#endif
 +
 +#include <plat/omap_device.h>
 +#include <plat/omap_hwmod.h>
 +
 +#include <plat/drm.h>
 +
 +#if defined(CONFIG_DRM_OMAP) || defined(CONFIG_DRM_OMAP_MODULE)
 +
 +static struct omap_drm_platform_data omapdrm_platdata;
 +
 +static struct platform_device omap_drm_device = {
 + .dev = {
 + .coherent_dma_mask = DMA_BIT_MASK(32),
 + .platform_data = &omapdrm_platdata,
 + },
 + .name = "omapdrm",
 + .id = 0,

Can there be more than one omapdrm device? I guess not. If so, the id
should be -1.

 +};
 +
 +static int __init omap_init_gpu(void)
 +{
 + struct omap_hwmod *oh = NULL;
 + struct platform_device *pdev;
 +
 + /* lookup and populate the DMM information, if present - OMAP4+ */
 + oh = omap_hwmod_lookup("dmm");
 +
 + if (oh) {
 + pdev = omap_device_build(oh->name, -1, oh, NULL, 0, NULL, 0,
 + false);
 + WARN(IS_ERR(pdev), "Could not build omap_device for %s\n",
 + oh->name);
 + }
 +
 + return platform_device_register(&omap_drm_device);
 +
 +}

The function is called omap_init_gpu(), but it doesn't do anything
related to the gpu... And aren't DMM and DRM totally separate things,
why are they bunched together like that?

 +arch_initcall(omap_init_gpu);
 +
 +void omapdrm_reserve_vram(void)
 +{
 +#ifdef CONFIG_CMA
 + /*
 +  * Create private 32MiB contiguous memory area for omapdrm.0 device
 +  * TODO revisit size.. if uc/wc buffers are allocated from CMA pages
 +  * then the amount of memory we need goes up..
 +  */
 + dma_declare_contiguous(&omap_drm_device.dev, 32 * SZ_1M, 0, 0);

What does this actually do? Does it reserve the memory, i.e. it's not
usable for others? If so, shouldn't there be a way for the user to
configure it?

 +#else
 

Re: [PATCH] omap2+: add drm device

2012-03-06 Thread Tomi Valkeinen
On Tue, 2012-03-06 at 08:01 -0600, Rob Clark wrote:
 On Tue, Mar 6, 2012 at 7:26 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:

  Can there be more than one omapdrm device? I guess not. If so, the id
  should be -1.
 
 in the past, we have used multiple devices (using the platform-data to
 divide up the dss resources), although this was sort of a
 special-case.  But in theory you could have multiple.  (Think of a
 multi-seat scenario, where multiple compositors need to run and each
 need to be drm-master of their own device to do mode-setting on their
 corresponding display.)
 
 I think if we use .id = -1, then we'd end up with a funny looking
 bus-id for the device: platform:omapdrm:-1

I don't know about that, but it's the way platform devices are meant to
be used if there can be only one instance. If the case where there are
multiple omapdrm devices is rather theoretical, or only used for certain
debugging scenarios or such, I think having the id as -1 in the mainline
is cleaner.

  +};
  +
  +static int __init omap_init_gpu(void)
  +{
  + struct omap_hwmod *oh = NULL;
  + struct platform_device *pdev;
  +
  + /* lookup and populate the DMM information, if present - OMAP4+ */
  + oh = omap_hwmod_lookup("dmm");
  +
  + if (oh) {
  + pdev = omap_device_build(oh->name, -1, oh, NULL, 0, NULL, 0,
  + false);
  + WARN(IS_ERR(pdev), "Could not build omap_device for %s\n",
  + oh->name);
  + }
  +
  + return platform_device_register(&omap_drm_device);
  +
  +}
 
  The function is called omap_init_gpu(), but it doesn't do anything
  related to the gpu... And aren't DMM and DRM totally separate things,
  why are they bunched together like that?
 
 only legacy.. product branches also have sgx initialization here.  But
 I can change that to omap_init_drm()
 
 DMM is managed by the drm driver (and exposed to userspace as drm/gem
 buffers, shared with other devices via dma-buf, etc).  It is only
 split out to a separate device to play along with hwmod.

I have to say I don't know much about DMM, but my understanding is that
DMM is a bigger entity, of which TILER is only a small part, and DMM
manages all memory accesses.

Can there be other users for the DMM than DRM? I know there could be
other users for the TILER, and I know you want to combine that with the
DRM driver, but I'm wondering about the other parts of DMM than the
TILER.

Somehow having a DMM driver inside omapdrm sounds strange.

  +arch_initcall(omap_init_gpu);
  +
  +void omapdrm_reserve_vram(void)
  +{
  +#ifdef CONFIG_CMA
  + /*
  +  * Create private 32MiB contiguous memory area for omapdrm.0 device
  +  * TODO revisit size.. if uc/wc buffers are allocated from CMA pages
  +  * then the amount of memory we need goes up..
  +  */
  + dma_declare_contiguous(&omap_drm_device.dev, 32 * SZ_1M, 0, 0);
 
  What does this actually do? Does it reserve the memory, i.e. it's not
  usable for others? If so, shouldn't there be a way for the user to
  configure it?
 
 It can be re-used by others.. see http://lwn.net/Articles/479297/

The link didn't tell much. I looked at the patches, and
dma_declare_contiguous allocates the memory with memblock_alloc. So is
that allocated memory available for any users? If so, why does the
function want a device pointer as an argument...

Is the memory available for normal kmalloc, or only available via the
CMA functions? Surely there must be some downside to the above? =) And
if so, it should be configurable. You could have a board with no display
at all, and you probably don't want to have 32MB allocated for DRM in
that case.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-03-07 Thread Tomi Valkeinen
On Tue, 2012-03-06 at 09:29 -0600, Rob Clark wrote:
 On Tue, Mar 6, 2012 at 8:35 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  On Tue, 2012-03-06 at 08:01 -0600, Rob Clark wrote:
  On Tue, Mar 6, 2012 at 7:26 AM, Tomi Valkeinen tomi.valkei...@ti.com 
  wrote:
 
   Can there be more than one omapdrm device? I guess not. If so, the id
   should be -1.
 
  in the past, we have used multiple devices (using the platform-data to
  divide up the dss resources), although this was sort of a
  special-case.  But in theory you could have multiple.  (Think of a
  multi-seat scenario, where multiple compositors need to run and each
  need to be drm-master of their own device to do mode-setting on their
  corresponding display.)
 
  I think if we use .id = -1, then we'd end up with a funny looking
  bus-id for the device: platform:omapdrm:-1
 
  I don't know about that, but it's the way platform devices are meant to
  be used if there can be only one instance. If the case where there are
  multiple omapdrm devices is rather theoretical, or only used for certain
  debugging scenarios or such, I think having the id as -1 in the mainline
  is cleaner.
 
 well, I don't care much one way or another, but need to check if there
 is a small patch needed in drm core code that generates the bus-id for
 .id = -1..

Hmm, why does the drm core care about it?

And generally, I think if the bus id is -1, there is no bus id in a
sense. For example, with bus id 0 you'll get a device omapdrm.0. With
bus id -1 you'll get a device omapdrm.

   +arch_initcall(omap_init_gpu);
   +
   +void omapdrm_reserve_vram(void)
   +{
   +#ifdef CONFIG_CMA
   + /*
   +  * Create private 32MiB contiguous memory area for omapdrm.0 device
   +  * TODO revisit size.. if uc/wc buffers are allocated from CMA pages
   +  * then the amount of memory we need goes up..
   +  */
   + dma_declare_contiguous(&omap_drm_device.dev, 32 * SZ_1M, 0, 0);
  
   What does this actually do? Does it reserve the memory, i.e. it's not
   usable for others? If so, shouldn't there be a way for the user to
   configure it?
 
  It can be re-used by others.. see http://lwn.net/Articles/479297/
 
  The link didn't tell much. I looked at the patches, and
  dma_declare_contiguous allocates the memory with memblock_alloc. So is
  that allocated memory available for any users? If so, why does the
  function want a device pointer as an argument...
 
  Is the memory available for normal kmalloc, or only available via the
  CMA functions? Surely there must be some downside to the above? =) And
  if so, it should be configurable. You could have a board with no display
  at all, and you probably don't want to have 32MB allocated for DRM in
  that case.
 
 Basically the short version is memory from a CMA carveout can be
 re-used for unpinnable memory.  Not kmalloc, but it can be used for
 anon userspace pages, for example.  Userspace needs memory too.

Okay, let me ask the other way. Is 32MB enough for everyone? Hardcoding
a value like that without any possibility to adjust it just sounds like
a rather bad thing.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-03-07 Thread Tomi Valkeinen
On Tue, 2012-03-06 at 09:50 -0600, Gross, Andy wrote:
 
 
 On Tue, Mar 6, 2012 at 8:35 AM, Tomi Valkeinen tomi.valkei...@ti.com
 wrote:
 
 
 I have to say I don't know much about DMM, but my understanding is that
 DMM is a bigger entity, of which TILER is only a small part, and DMM
 manages all memory accesses.
 
 Can there be other users for the DMM than DRM? I know there could be
 other users for the TILER, and I know you want to combine that with the
 DRM driver, but I'm wondering about the other parts of DMM than the
 TILER.
 
 Somehow having a DMM driver inside omapdrm sounds strange.
 
 
 The DMM does indeed contain a number of entities.  However, the TILER
 portion is the only part that requires a driver.  All other register
 modifications (LISA map settings, EMIF, etc) are done statically in
 the loader or u-boot and never changed again.  As such, DMM has become
 synonymous with TILER.

Ok. Well, as I said, I don't know much about that, just sounds rather
strange to me =).

Does this "DMM has become synonymous" mean that people just started
calling the TILER "DMM", and thus the name has stuck, or are there technical
reasons to handle it as DMM in the kernel? If the former, and if TILER
is the technically exact name for it, perhaps it would make sense to
call it TILER, as that's what (at least I) would be looking for if I
read the TRM and try to find the code for the TILER.

 Tomi






Re: [PATCH] omap2+: add drm device

2012-03-07 Thread Tomi Valkeinen
On Wed, 2012-03-07 at 07:06 -0600, Rob Clark wrote:
 On Wed, Mar 7, 2012 at 5:59 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:

  Hmm, why does the drm core care about it?
 
 Because it is the one generating the bus-id.. see
 drivers/gpu/drm/drm_platform.c drm_platform_set_busid()
 
 Anyways, it's just a detail about how libdrm/drmOpen() can
 differentiate between multiple instances of the same driver, similar
 to how PCI bus-id is used in the desktop world.  It is not difficult
 to change in drm_platform_set_busid(), although not sure if that would
 be considered an ABI change to userspace.  (Maybe it is less critical,
 I'm under the impression that other platform-drm users didn't even
 realize we had a bus-id.)

Ok. Well, I'm fine with id 0 also, if it makes sense in the DRM side. It
was just something that caught my eye.

  Okay, let me ask the other way. Is 32MB enough for everyone? Hardcoding
  a value like that without any possibility to adjust it just sounds like
  a rather bad thing.
 
 The main requirement is that, on omap3 or before (platforms without
 DMM) you need enough to allocate all of your scanout buffers.
 
 I'm not against having a bootarg to adjust, although I strongly prefer
 sane defaults and not requiring a million bootargs just to boot in
 some usable fashion.

Well, those are two different things. I'm fine with a default of 32MB.
My point was, what if I need more, or I don't need any.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-03-07 Thread Tomi Valkeinen
On Wed, 2012-03-07 at 09:59 -0600, Gross, Andy wrote:
 On Wed, Mar 7, 2012 at 6:05 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
 
  Does this DMM has become synonymous mean that people just started
  calling TILER DMM, and thus the name has stuck, or are there technical
  reasons to handle it as DMM in the kernel? If the former, and if TILER
  is the technically exact name for it, perhaps it would make sense to
  call it TILER, as that's what (at least I) would be looking for if I
  read the TRM and try to find the code for the TILER.
 
   Tomi
 
 Well the breakdown of the DMM/Tiler kind of goes like this.  The DMM
 defines a set of registers used to manipulate the PAT table that backs
 the address space with physical memory.  The Tiler portion of the DMM
 really just describes the address space that is exposed.  Depending on
 what address range you access, you'll get a different mode of access
 and rotation.  Management of the address space is attributed to the
 Tiler as well and that is where we have managed the address space and
 provided APIs to reserve address space and pin/unpin memory to those
 regions.
 
 From a hardware perspective, we need access to the resources provided
 by the DMM so that we can do the PAT programming and that can only be
 gotten from the hwmod entry.  And if you use the hwmod device entry,
 you have to match the name used for that entry.

Ok. Thanks for the detailed explanation =).

 Tomi





Re: [PATCH] omap2+: add drm device

2012-03-14 Thread Tomi Valkeinen
Hi,

On Tue, 2012-03-13 at 15:34 -0500, Rob Clark wrote:
 From: Andy Gross andy.gr...@ti.com
 
 Register OMAP DRM/KMS platform device, and reserve a CMA region for
 the device to use for buffer allocation.  DMM is split into a
 separate device using hwmod.

What's the diff with this and the previous one?

I see you still have the platform data there. You didn't comment on
that. Where is it used after the board files are gone when we use DT?

And how about the size of the contiguous memory area, it was left a bit
unclear to me why it cannot be dynamic.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-03-14 Thread Tomi Valkeinen
On Wed, 2012-03-14 at 07:55 -0500, Rob Clark wrote:
 On Wed, Mar 14, 2012 at 7:38 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  Hi,
 
  On Tue, 2012-03-13 at 15:34 -0500, Rob Clark wrote:
  From: Andy Gross andy.gr...@ti.com
 
  Register OMAP DRM/KMS platform device, and reserve a CMA region for
  the device to use for buffer allocation.  DMM is split into a
  separate device using hwmod.
 
  What's the diff with this and the previous one?
 
 Moving drm.c to mach-omap2 (header could not move because
 omap_reserve() is in plat-omap.. but this seems to be the same as what
 is done for dspbridge).
 
  I see you still have the platform data there. You didn't comment on
  that. Where is it used after the board files are gone when we use DT?
 
 I was kind-of thinking that there would be some DT-data-structure
 glue somewhere.. not sure if this goes in mach-omap2 or in the driver
 itself, but presumably it is needed somewhere.
 
 It is only really special cases where some board specific device-tree
 data is needed, so it seems like this is ok to handle later.

Afaik DT data should only contain information about the hardware. This
is SW configuration, so I think DT people won't accept things like that.

  And how about the size of the contiguous memory area, it was left a bit
  unclear to me why it cannot be dynamic.
 
 I don't think there is anything preventing adding a bootarg, but I
 think it is not essential so ok to add later

Well, maybe not essential to you =). But you are reserving 32MB memory,
which is quite a big amount. Even if the reserved memory can be used for
some other purposes, it's still a big chunk of special memory being
reserved even if the user doesn't use or have a display at all.

Well, it's not an issue for me either, but I just feel that doing things
like that without allowing the user to avoid it is a bit bad thing.

Btw, do you know why the dma_declare_contiguous() takes a dev pointer as
an argument, if the memory is not private to that device? (at least I
understood from you that the memory can be used for other purposes).

 Tomi




Re: [PATCH] omap2+: add drm device

2012-03-14 Thread Tomi Valkeinen
On Wed, 2012-03-14 at 08:16 -0500, Rob Clark wrote:
 On Wed, Mar 14, 2012 at 8:07 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  On Wed, 2012-03-14 at 07:55 -0500, Rob Clark wrote:
  On Wed, Mar 14, 2012 at 7:38 AM, Tomi Valkeinen tomi.valkei...@ti.com 
  wrote:
   Hi,
  
   On Tue, 2012-03-13 at 15:34 -0500, Rob Clark wrote:
   From: Andy Gross andy.gr...@ti.com
  
   Register OMAP DRM/KMS platform device, and reserve a CMA region for
   the device to use for buffer allocation.  DMM is split into a
   separate device using hwmod.
  
   What's the diff with this and the previous one?
 
  Moving drm.c to mach-omap2 (header could not move because
  omap_reserve() is in plat-omap.. but this seems to be the same as what
  is done for dspbridge).
 
   I see you still have the platform data there. You didn't comment on
   that. Where is it used after the board files are gone when we use DT?
 
  I was kind-of thinking that there would be some DT-data-structure
  glue somewhere.. not sure if this goes in mach-omap2 or in the driver
  itself, but presumably it is needed somewhere.
 
  It is only really special cases where some board specific device-tree
  data is needed, so it seems like this is ok to handle later.
 
  Afaik DT data should only contain information about the hardware. This
  is SW configuration, so I think DT people won't accept things like that.
 
 hmm, then where *does* it go.. it is a bit too much for bootargs..

I have no idea =). And I may be wrong, but my understanding is that the
only thing DT data should have is information about the HW
configuration.

But is that kind of configuration needed at boot time? Why cannot the
userspace do the config? What configs are actually needed at boot time,
before userspace takes control? The only thing coming to my mind is to
define the display used for the console.

   And how about the size of the contiguous memory area, it was left a bit
   unclear to me why it cannot be dynamic.
 
  I don't think there is anything preventing adding a bootarg, but I
  think it is not essential so ok to add later
 
  Well, maybe not essential to you =). But you are reserving 32MB memory,
  which is quite a big amount. Even if the reserved memory can be used for
  some other purposes, it's still a big chunk of special memory being
  reserved even if the user doesn't use or have a display at all.
 
 If you didn't have display, and were tight on memory, wouldn't you
 just disable the display device in your kernel config?

Sure, if you want to modify your kernel. But you could as well use the
same kernel binary, and just say vram=0M which disables the vram
allocation. For DRM you would _have_ to modify your kernel.

Actually, I just realized vram=0M doesn't work, as the code checks for
!= 0, and just skips the vram argument when it's 0 =). So my code sucks
also...

  Well, it's not an issue for me either, but I just feel that doing things
  like that without allowing the user to avoid it is a bit bad thing.
 
  Btw, do you know why the dma_declare_contiguous() takes a dev pointer as
  an argument, if the memory is not private to that device? (at least I
  understood from you that the memory can be used for other purposes).
 
 contiguous use of the memory is private to the device.  There is also
 a global CMA pool, from which all dma_alloc_xyz() which is not
 associated w/ a per-device pool comes from.. but the advantage of a
 per-device-pool is it puts an upper limit on how much dma memory that
 device can allocate so that it cannot starve other devices.

Ah, I see. So the 32MB you reserve there is not visible as contiguous
memory to anyone else than omapdrm, but userspace can still use the 32MB
area for pages that can be moved out as needed.

But then, if it's private, it's also rather bad. If I have a kernel with
omapfb and omapdrm enabled as modules, but I never use omapdrm, the 32MB
is still always reserved and away from other contiguous memory use.

Also, I just realized that besides the memory reservation being fixed,
it's also hidden. If I enable omapdrm in the kernel config, I get no
indication that 32MB of mem is being reserved. Perhaps a Kconfig option
for it, like with OMAP VRAM, and then a boot arg which will override the
Kconfig value.

Well, as I said, it's not an issue for me and from my side it can be
improved later.

 Tomi




Re: [PATCH] omap2+: add drm device

2012-03-15 Thread Tomi Valkeinen
On Wed, 2012-03-14 at 10:06 -0500, Rob Clark wrote:

  Well, as I said, it's not an issue for me and from my side it can be
  improved later.
 
 yeah, when CMA is actually merged, there are a few other things I'd
 like to do to, incl converting omapfb over to use CMA and remove
 omap_vram.. but I guess those will be other patches.

Right, I just realized CMA is not in the kernel, nor does it seem to be
in linux-next. Is there a reason why you want it already merged?
Wouldn't it be easier to get it in only when it can actually be used?
Especially if there's room for improvement.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-03-16 Thread Tomi Valkeinen
On Thu, 2012-03-15 at 07:32 -0500, Rob Clark wrote:
 On Thu, Mar 15, 2012 at 3:46 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  On Wed, 2012-03-14 at 10:06 -0500, Rob Clark wrote:
 
   Well, as I said, it's not an issue for me and from my side it can be
   improved later.
 
  yeah, when CMA is actually merged, there are a few other things I'd
  like to do to, incl converting omapfb over to use CMA and remove
  omap_vram.. but I guess those will be other patches.
 
  Right, I just realized CMA is not in the kernel, nor does it seem to be
  in the linux-next. Is there a reason why you want it already merged?
  Wouldn't it be easier to get it in only when it can actually be used.
  Especially if there's room for improvement.
 
 Some folks are already pulling CMA into product kernels for various
 reasons.. keeping this w/ #ifdef CONFIG_CMA guards seemed like it
 would make their life a bit easier.
 
 But if people feel strongly about it, I can strip that out.

Well, I wouldn't say feel strongly, but... I think the mainline kernel
should have code only for the mainline kernel, not for some custom
kernels. And the code is not testable in any way, not even compilable. I
think all code going in should be tested and compiled. Also, if the CMA
code is not in, who says it won't change? Perhaps the CMA function won't
even exist in the version going into mainline.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-05-24 Thread Tomi Valkeinen
Hi,

On Wed, 2012-05-23 at 15:08 -0500, Andy Gross wrote:
 Register OMAP DRM/KMS platform device.  DMM is split into a
 separate device using hwmod.
 
 Signed-off-by: Andy Gross andy.gr...@ti.com

snip

 +static int __init omap_init_drm(void)
 +{
 + struct omap_hwmod *oh = NULL;
 + struct platform_device *pdev;
 +
 + /* lookup and populate the DMM information, if present - OMAP4+ */
 + oh = omap_hwmod_lookup("dmm");
 +
 + if (oh) {
 + pdev = omap_device_build(oh->name, -1, oh, NULL, 0, NULL, 0,
 + false);
 + WARN(IS_ERR(pdev), "Could not build omap_device for %s\n",
 + oh->name);
 + }
 +
 + return platform_device_register(&omap_drm_device);
 +
 +}

I still don't like fixing the tiler to drm. I would like to have basic
tiler support in omapfb also, but with this approach I'll need to
duplicate the code. And even if we disregard omapfb, wouldn't it be
architecturally better to have the tiler as a separate independent
library/driver?

 +struct omap_drm_platform_data {
 + struct omap_kms_platform_data *kms_pdata;
 +};

This one is missing struct omap_dmm_platform_data *dmm_pdata, so you
didn't just move the struct. Is that on purpose?

 Tomi





Re: [PATCH] omap2+: add drm device

2012-05-24 Thread Tomi Valkeinen
On Thu, 2012-05-24 at 00:27 -0600, Clark, Rob wrote:
 On Thu, May 24, 2012 at 12:01 AM, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:
  Hi,
 
  On Wed, 2012-05-23 at 15:08 -0500, Andy Gross wrote:
  Register OMAP DRM/KMS platform device.  DMM is split into a
  separate device using hwmod.
 
  Signed-off-by: Andy Gross andy.gr...@ti.com
 
  snip
 
  +static int __init omap_init_drm(void)
  +{
  + struct omap_hwmod *oh = NULL;
  + struct platform_device *pdev;
  +
  + /* lookup and populate the DMM information, if present - OMAP4+ */
  + oh = omap_hwmod_lookup("dmm");
  +
  + if (oh) {
  + pdev = omap_device_build(oh->name, -1, oh, NULL, 0, NULL, 0,
  + false);
  + WARN(IS_ERR(pdev), "Could not build omap_device for %s\n",
  + oh->name);
  + }
  +
  + return platform_device_register(&omap_drm_device);
  +
  +}
 
  I still don't like fixing the tiler to drm. I would like to have basic
  tiler support in omapfb also, but with this approach I'll need to
  duplicate the code. And even if we disregard omapfb, wouldn't it be
  architecturally better to have the tiler as a separate independent
  library/driver?
 
 Not easily, at least not if we want to manage to use tiler/dmm in a
 more dynamic way, or to enable some additional features which are
 still on the roadmap (like reprogramming dmm synchronized w/ scanout,
 or some things which are coming if future hw generations).  We need
 one place to keep track of which buffers are potentially evictable to
 make room for mapping a new buffer.  And if you look at the tricks
 that go on with mmap'ing tiled buffers to userspace, you *really*
 don't want to duplicate that in N different drivers.

So why can't all that code be in a tiler library/driver?

 Fortunately with dmabuf there is not really a need for N different
 drivers to need to use tiler/dmm directly.  The dmabuf mechanism
 provides what they need to import GEM buffers from omapdrm.  That may
 not really help omapfb because fbdev doesn't have a concept of
 importing buffers.  But OTOH this is unnecessary, because drm provides
 an fbdev interface for legacy apps.  The best thing I'd recommend is,
 if you miss some features of omapfb in the drm fbdev implementation,
 is to send some patches to add this missing features.

Well, at least currently omapfb and omapdrm work quite differently, if
I've understood right. Can we make a full omapfb layer on top of
omapdrm? With multiple framebuffers mapped to one or more overlays,
support for all the ioctls, etc?

I guess we'd still need to have omapfb driver to keep the module
parameters and behavior the same. Can omapdrm be used from inside the
kernel by another driver?

 Tomi





Re: [PATCH] omap2+: add drm device

2012-05-24 Thread Tomi Valkeinen
On Thu, 2012-05-24 at 10:05 +0300, Tomi Valkeinen wrote:
 On Thu, 2012-05-24 at 00:27 -0600, Clark, Rob wrote:
  On Thu, May 24, 2012 at 12:01 AM, Tomi Valkeinen tomi.valkei...@ti.com 
  wrote:
   Hi,
  
   On Wed, 2012-05-23 at 15:08 -0500, Andy Gross wrote:
   Register OMAP DRM/KMS platform device.  DMM is split into a
   separate device using hwmod.
  
   Signed-off-by: Andy Gross andy.gr...@ti.com
  
   snip
  
   +static int __init omap_init_drm(void)
   +{
   + struct omap_hwmod *oh = NULL;
   + struct platform_device *pdev;
   +
   + /* lookup and populate the DMM information, if present - OMAP4+ */
   + oh = omap_hwmod_lookup("dmm");
   +
   + if (oh) {
   + pdev = omap_device_build(oh->name, -1, oh, NULL, 0, NULL, 0,
   + false);
   + WARN(IS_ERR(pdev), "Could not build omap_device for %s\n",
   + oh->name);
   + }
   +
   + return platform_device_register(&omap_drm_device);
   +
   +}
  
   I still don't like fixing the tiler to drm. I would like to have basic
   tiler support in omapfb also, but with this approach I'll need to
   duplicate the code. And even if we disregard omapfb, wouldn't it be
   architecturally better to have the tiler as a separate independent
   library/driver?
  
  Not easily, at least not if we want to manage to use tiler/dmm in a
  more dynamic way, or to enable some additional features which are
  still on the roadmap (like reprogramming dmm synchronized w/ scanout,
  or some things which are coming if future hw generations).  We need
  one place to keep track of which buffers are potentially evictable to
  make room for mapping a new buffer.  And if you look at the tricks
  that go on with mmap'ing tiled buffers to userspace, you *really*
  don't want to duplicate that in N different drivers.
 
 So why can't all that code be in a tiler library/driver?

And I think we've discussed about this before, so sorry if I'm repeating
myself. I just find it odd that we are not able to create a nice
separate lib/driver for the tiler, which is a separate piece of HW that
multiple drivers might want to use.

 Tomi



signature.asc
Description: This is a digitally signed message part
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH] omap2+: add drm device

2012-05-24 Thread Tomi Valkeinen
On Thu, 2012-05-24 at 02:35 -0600, Rob Clark wrote:
 On Thu, May 24, 2012 at 1:05 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  On Thu, 2012-05-24 at 00:27 -0600, Clark, Rob wrote:
  On Thu, May 24, 2012 at 12:01 AM, Tomi Valkeinen tomi.valkei...@ti.com 
  wrote:
   Hi,
  
   On Wed, 2012-05-23 at 15:08 -0500, Andy Gross wrote:
   Register OMAP DRM/KMS platform device.  DMM is split into a
   separate device using hwmod.
  
   Signed-off-by: Andy Gross andy.gr...@ti.com
  
   snip
  
    +static int __init omap_init_drm(void)
    +{
    +	struct omap_hwmod *oh = NULL;
    +	struct platform_device *pdev;
    +
    +	/* lookup and populate the DMM information, if present - OMAP4+ */
    +	oh = omap_hwmod_lookup("dmm");
    +
    +	if (oh) {
    +		pdev = omap_device_build(oh->name, -1, oh, NULL, 0, NULL, 0,
    +					 false);
    +		WARN(IS_ERR(pdev), "Could not build omap_device for %s\n",
    +		     oh->name);
    +	}
    +
    +	return platform_device_register(&omap_drm_device);
    +}
  
   I still don't like fixing the tiler to drm. I would like to have basic
   tiler support in omapfb also, but with this approach I'll need to
   duplicate the code. And even if we disregard omapfb, wouldn't it be
   architecturally better to have the tiler as a separate independent
   library/driver?
 
  Not easily, at least not if we want to manage to use tiler/dmm in a
  more dynamic way, or to enable some additional features which are
  still on the roadmap (like reprogramming dmm synchronized w/ scanout,
  or some things which are coming if future hw generations).  We need
  one place to keep track of which buffers are potentially evictable to
  make room for mapping a new buffer.  And if you look at the tricks
  that go on with mmap'ing tiled buffers to userspace, you *really*
  don't want to duplicate that in N different drivers.
 
  So why can't all that code be in a tiler library/driver?
 
 Possibly the placement stuff could be in a library..  the
 mmap/faulting stuff I think would be harder to split.  With it split
 out in a separate lib, it becomes logistically a bit more of a
 headache to change APIs, etc.  Basically it just makes more work and
 is unnecessary.

Unnecessary for you, but maybe not for those who want to use omapfb.

  Fortunately with dmabuf there is not really a need for N different
  drivers to need to use tiler/dmm directly.  The dmabuf mechanism
  provides what they need to import GEM buffers from omapdrm.  That may
  not really help omapfb because fbdev doesn't have a concept of
  importing buffers.  But OTOH this is unnecessary, because drm provides
  an fbdev interface for legacy apps.  The best thing I'd recommend is,
  if you miss some features of omapfb in the drm fbdev implementation,
   to send some patches to add these missing features.
 
  Well, at least currently omapfb and omapdrm work quite differently, if
  I've understood right. Can we make a full omapfb layer on top of
  omapdrm? With multiple framebuffers mapped to one or more overlays,
  support for all the ioctls, etc?
 
 Well, there is still room to add your own fb_ioctl() fxn, so I guess
 in principle it should be possible to add whatever custom ioctls are
 required.
 
 Typically you have a single fbdev device for a single drm device,
 although I suppose nothing prevents creating more.  I'd probably want
 to encourage users more towards using KMS directly for multi-display
 cases because you have a lot more options/flexibility that way.

Sure, but we can't force people to use omapdrm instead of omapfb. And
omapfb is not going to disappear. So obviously we should recommend using
omapdrm, but on the other hand, I don't see any problem in adding new
features to omapfb if they are easily implemented (using, for example, a
separate tiler driver).

  I guess we'd still need to have omapfb driver to keep the module
  parameters and behavior the same. Can omapdrm be used from inside the
  kernel by another driver?
 
 Hmm, I'm not quite sure what you have in mind, but it sounds a bit
 hacky..  I'd guess if you need 100% backwards compatibility even down
 to kernel cmdline / module params, then you probably want to use
 omapfb.  But there isn't really need to add new features to omapfb in
 that case.

I was thinking of making omapfb use omapdrm, instead of omapdss. I mean,
not planning to do that, just wondering if that would be possible.

 Off the top of my head, I guess that 80-90% compatibility would
 probably be reasonable to add to omapdrm's fbdev..  and that the last
 10-20% would be too hacky/invasive to justify adding to omapdrm.

I think it should be 99.9% - 100% or nothing. If it's only 80-90%
compatible, then it's not compatible =).

 Tomi




Re: [PATCH] omap2+: add drm device

2012-05-24 Thread Tomi Valkeinen
On Thu, 2012-05-24 at 02:44 -0600, Rob Clark wrote:

 but other drivers *can* use tiler, thanks to dmabuf.. I have omap4iss
 v4l2 camera working w/ tiler buffers on my pandaboard, for example.
 
 Maybe fbdev is an exception to the rule because it has no way for
 userspace to pass it a buffer to use.  But on the other hand it is a
  legacy API so I'm not sure if it is worth losing too much sleep over
 that.

I'm not that familiar with dmabuf, but are you saying it's not possible
for a kernel driver to request the buffers? They _must_ come from the
userspace?

Anyway, even if it would be possible, if the tiler is a part of omapdrm
we need omapdrm to get and use the tiler buffers. And we can't have
omapdrm running when using omapfb, because they both use omapdss.

 Tomi





Re: [PATCH] omap2+: add drm device

2012-05-24 Thread Tomi Valkeinen
On Thu, 2012-05-24 at 10:09 -0500, Gross, Andy wrote:
 On Thu, May 24, 2012 at 7:13 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  On Thu, 2012-05-24 at 02:44 -0600, Rob Clark wrote:
 
  but other drivers *can* use tiler, thanks to dmabuf.. I have omap4iss
  v4l2 camera working w/ tiler buffers on my pandaboard, for example.
 
  Maybe fbdev is an exception to the rule because it has no way for
  userspace to pass it a buffer to use.  But on the other hand it is a
   legacy API so I'm not sure if it is worth losing too much sleep over
  that.
 
  I'm not that familiar with dmabuf, but are you saying it's not possible
  for a kernel driver to request the buffers? They _must_ come from the
  userspace?
 
  Anyway, even if it would be possible, if the tiler is a part of omapdrm
  we need omapdrm to get and use the tiler buffers. And we can't have
  omapdrm running when using omapfb, because they both use omapdss.
 
 And that is a good point.  The DSS is kind of a sticking point to anyone
 having to enable DRM to get Tiler.  However, omapfb doesn't currently utilize
 DMM/Tiler features.  Can't we defer generalizing until there is a
 requirement for it?

Sure. I just brought it up because it'd be nice and it'd be better
architecturally. However, as I said, I'm not familiar with the related
problems, so I take your word that it's not simple =).

 Tomi





Re: [PATCH] omap2+: add drm device

2012-06-11 Thread Tomi Valkeinen
On Mon, 2012-06-11 at 09:51 -0500, Gross, Andy wrote:
 Tomi,
 
 
 So at this point, are you OK with deferring a split of the DMM until
 it necessary to do so (if ever)?  I'd like to get this patch in so
 that people have a working omapdrm device when they enable the config
 options.

Yes, I'm ok with it.

 Tomi





Re: [PATCH] staging: drm/omap: add rotation properties

2012-06-29 Thread Tomi Valkeinen
On Wed, 2012-06-27 at 09:06 -0500, Rob Clark wrote:
 From: Rob Clark r...@ti.com
 
 Use tiled buffers for rotated/reflected scanout, with CRTC and plane
 properties as the interface for userspace to configure rotation.
 
 Signed-off-by: Rob Clark r...@ti.com

 +/* this should probably be in drm-core to standardize amongst drivers */
 +#define DRM_ROTATE_0	0
 +#define DRM_ROTATE_90	1
 +#define DRM_ROTATE_180	2
 +#define DRM_ROTATE_270	3
 +#define DRM_REFLECT_X	4
 +#define DRM_REFLECT_Y	5

Are both reflect X and Y needed? You can get all the possible
orientations with just one of the reflects.

And I think the word mirror represents nicely what the reflect does,
i.e. if you look at the mirror, the image you see is flipped
horizontally.

 Tomi





Re: [RFC 0/3] solving omapdrm/omapdss layering issues

2012-07-31 Thread Tomi Valkeinen
Hi,

On Fri, 2012-07-27 at 20:07 -0500, Rob Clark wrote:
 From: Rob Clark r...@ti.com
 
 I've been working for the better part of the week on solving some of
 the omapdss vs kms mismatches, which is one of the bigger remaining
 issues in the TODO before moving omapdrm out of staging.
 
 The biggest place that this shows up is in GO bit handling.  Basically,
 some of the scanout related registers in DSS hw block are only shadow
 registers, and if the GO bit is set during the vblank the hw copies
 into the actual registers (not accessible to CPU) and clears the GO
 bit.  When the GO bit is set, there is no safe way to update these
 registers without undefined results.  So omapdss tries to be friendly
 and abstract this, by buffering up the updated state, and applying it

It's not really about being friendly. Omapdss tries to do as little as
possible, while still supporting all its HW features. The shadow
registers are the tricky part that creates this mess.

 on the next vblank once the GO bit is cleared.  But this causes all
 sorts of mayhem at the omapdrm layer, which would like to unpin the
 previous scanout buffer(s) on the next vblank (or endwin) irq.  Due
 to the buffering in omapdss, we have no way to know on a vblank if we
 have switched to the scanout buffer or not.  Basically it works ok as
  long as userspace is only ever updating one layer (either crtc or drm
 plane) at a time.  But throw together hw mouse cursor (drm plane)
 plus a window manager like compiz which does page flips, or wayland
 (weston drm compositor) with hw composition (drm plane), and things
 start to fail in a big way.
 
 I've tried a few approaches to preserve the omapdss more or less as it
 is, by adding callbacks for when GO bit is cleared, etc.  But the
 sequencing of setting up connector/encoder/crtc is not really what
 omapdss expects, and would generally end up confusing the apply
 layer in omapdss (it would end up not programming various registers
 because various dirty flags would get cleared, for example mgr updated
 before overlay connected, etc).

Can you give more info what the problem is? It shouldn't end up not
programming registers, except if there's a bug there.

Didn't the apply-id stuff I proposed some months ago have enough stuff
to make this work?

The thing about shadow registers is that we need to manage them in one
central place. And the same shadow registers are used for both the
composition stuff (overlays etc) and output stuff (video timings 
configuration). If omapdrm handles the composition shadow registers, it
also needs to handle all the other shadow registers.

 Finally, in frustration, this afternoon I hit upon an idea.  Why not
 just use the dispc code in omapdss, which is basically a stateless
 layer of helper functions, and bypass the stateful layer of omapdss.

If you do this, you'll need to implement all the stuff of the stateful
layer in omapdrm. You can't just call the dispc funcs and expect things
to work reliably. Things like enabling/disabling overlays with fifomerge
requires possibly multiple vsyncs. And output related shadow registers
may be changed separately from the composition side.

The apply.c is not there to make life easy for omapdss's users,
it's there to make the DSS hardware work properly =).

 Since KMS helper functions already give us the correct sequence for
 setting up the hardware, this turned out to be rather easy.  And fit
 it quite nicely with my mechanism to queue up updates when the GO bit
 is not clear.  And, unlike my previous attempts, it actually worked..
 not only that, but it worked on the first boot!

Well, not having read the omapdrm code, I'm not in the best position to
comment, but I fear that while it seems to work, you have lots of corner
cases where you'll get glitches. We had a lot simpler configuration
model in the omapdss for long time, until the small corner cases started
to pile up and new features started to cause problems, and I wrote the
current apply mechanism.

 So I am pretty happy about how this is shaping up.  Not only is it
  simpler than my previous attempts, and solves a few tricky buffer
 unpin related issues.  But it also makes it very easy to wire in the
 missing userspace vblank event handling without resorting to duct-
 tape.

Why is giving an event from omapdss at vsync duct-tapy?

 Obviously there is stuff still missing, and some hacks.  This is
 really just a proof of concept at this stage.  But I wanted to send an
 RFC so we could start discussing how to move forward.  Ie. could we
 reasonably add support to build dispc as a library of stateless helper
 functions, sharing it and the panel drivers between omapdrm and the
 legacy omapdss based drivers.  Or is there no clean way to do that, in
 which case we should just copy the code we need into omapdrm, and
 leave the deprecated omapdss as it is for legacy drivers.

I agree that we need to get omapdrm work well, as it's (or will be) the
most important display framework. And if 

Re: [RFC 0/3] solving omapdrm/omapdss layering issues

2012-08-01 Thread Tomi Valkeinen
On Tue, 2012-07-31 at 09:45 -0500, Rob Clark wrote:
 On Tue, Jul 31, 2012 at 8:40 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:

  It's not really about being friendly. Omapdss tries to do as little as
  possible, while still supporting all its HW features. The shadow
  registers are the tricky part that creates this mess.
 
 What I mean by 'friendly' is it tries to abstract this for simple
 users, like an fbdev driver.  But this really quickly breaks down w/ a

No, that's not what omapdss tries to do. I'm not trying to hide the
shadow registers and the GO bit behind the omapdss API, I'm just trying
to make it work.

The omapdss API was made with omapfb, so it's true that the API may not
be good for omapdrm. But I'm happy to change the API.

 And btw, I think the current mapping of drm_encoder to mgr in omapdrm
 is not correct.  I'm just in the process of shuffling things around.
 I think drm/kms actually maps quite nicely to the underlying hardware
 with the following arrangement:
 
  drm_plane -> ovl
  drm_crtc -> mgr
  drm_encoder -> DSI/DPI/HDMI/VENC encoder
  drm_connector -> pretty much what we call a panel driver today

Hmm, what was the arrangement earlier?

I guess the fact is that DRM concepts do not really match the OMAP DSS
hardware, and we'll have to use whatever gives us least problems.

 It would be quite useful if you could look at the omap_drm_apply
 mechanism I had in omapdrm, because that seems like a quite
 straightforward way to deal w/ shadowed registers.  I think it will

Yes, it seems straightforward, but it's not =).

I had a look at your omapdrm-on-dispc-2 branch. What you are doing there
is quite similar to what omapdss was doing earlier. It's not going to
work reliably with multiple outputs and fifomerge.

Configuring things like overlay color mode are quite simple. They only
affect that one overlay. Also things like manager default bg color are
simple, they affect only that one manager.

But enabling/disabling an overlay or a manager, changing the destination
mgr of an overlay, fifomerge... Those are not simple. You can't do them
directly, as you do in your branch.

As an example, consider the case of enabling an overlay (vid1), and
moving fifo buffers from currently enabled overlay (gfx) to vid1: you'll
first need to take the fifo buffers from gfx, set GO, and wait for the
settings to take effect. Only then you can set the fifo buffers for
vid1, enable it and set GO bit.

I didn't write omapdss's apply.c for fun or to make omapfb simpler. I
made it because the shadow register system is complex, and we need to
handle the tricky cases somewhere.

So, as I said before, I believe you'll just end up writing similar code
to what is currently in apply.c. It won't be as simple as your current
branch.

Also, as I mentioned earlier, you'll also need to handle the output side
of the shadow registers. These come from the output drivers (DPI, DSI,
etc, and indirectly from panel drivers). They are not currently handled
in the best manner in omapdss, but Archit is working on that and in his
version apply.c will handle also those properly.

About your code, I see you have these pre and post apply callbacks that
handle the configuration. Wouldn't it be rather easy to have omapdss's
apply.c call these?

And then one thing I don't think you've considered is manual update
displays. Of course, one option is to not support those with omapdrm,
but that's quite a big decision. omapdss's apply.c handles those also.

Also, can you check again my mail "Re: OMAPDSS vsyncs/apply" (Sat, 12 May
2012 10:01:24), about the request_config() suggestion. I think that would
be somewhat similar to your pre/post callbacks. I'll try to write some
prototype for the request_config suggestion so that it's easier to
understand.

 Tomi






Re: [RFC 0/3] solving omapdrm/omapdss layering issues

2012-08-01 Thread Tomi Valkeinen
On Wed, 2012-08-01 at 11:53 -0500, Rob Clark wrote:

 Ok.. this would help.  I'll take a look.  I do request that
 interfaces/panels don't set any mgr/timing related registers.  I had
 to comment all this stuff out in my prototype.  Really we want to set
 the timings separately on the crtc (mgr) / encoder (interface) /
 connector (panel.. not sure if it is needed, though?).  KMS will take
 care of propagating the timings through the pipeline.

If we only had auto-update displays, and only the video timings were
shadow register, it'd work. Unfortunately we have other registers as
shadow registers also, like DISPC_CONTROL1, DISPC_CONFIG1 and
DISPC_DIVISOR1.

But we should think if this could be somehow be changed, so that all the
shadow register info would come from one place. I do find it a bit
unlikely with a quick thought, though.

Well, hmm. Perhaps... Omapdrm (or omapfb etc) doesn't really need to
know about the values of those registers, it just needs to control the
GO bit. So perhaps if we had some method to inform omapdrm that these
things have changed, and omapdrm would then set the GO bit as soon as
possible.

But there are some tricky stuff, like the divisors... Well, we need to
think about this.

 Tomi





Re: [RFC 0/3] solving omapdrm/omapdss layering issues

2012-08-02 Thread Tomi Valkeinen
On Wed, 2012-08-01 at 09:25 -0500, Rob Clark wrote:
 On Wed, Aug 1, 2012 at 4:21 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:

  I guess the fact is that DRM concepts do not really match the OMAP DSS
  hardware, and we'll have to use whatever gives us least problems.
 
 Actually, I think it does map fairly well to the hardware.. at least
 more so than to omapdss ;-)

Hm, I'm not sure I understand, omapdss concepts map directly to the
hardware.

 The one area that kms mismatches a bit is decoupling of ovl from mgr
 that we have in our hw..  I've partially solved that a while back w/

What do you mean with that? Ovls and mgrs are one entity in KMS? Didn't
the drm_plane stuff separate these?

 For enabling/disabling an output (manager+encoder), this is relatively
 infrequent, so it can afford to block to avoid races.  (Like userspace
 enabling and then rapidly disabling an output part way through the
 enable.)  But enabling/disabling an overlay, or adjusting position or
 scanout address must not block.  And ideally, if possible, switching
 an overlay between two managers should not block.

Adjusting the position or address of the buffer are simple, they can be
easily done without any blocking.

But ovl enable/disable and switching an ovl to another mgr do (possibly)
take multiple vsyncs (and in the switch case, vsyncs of two separate
outputs). So if those do not block, we'll need to handle them as a state
machine and try to avoid races etc. It'll be interesting.

However, we can sometimes do those operations immediately. So I think we
should have these conditional fast-paths in the code, and do them in
non-blocking manner when possible.

 I'll look again, but as far as I remember that at least wasn't
 addressing the performance issues from making overlay enable/disable

Right, it wasn't addressing those issues. But your branch doesn't really
address those issues either, as it doesn't handle the problems related
to ovl enable/disable.

 synchronous.  And fixing that would, I expect, trigger the same
 problems that I already spent a few days debugging before switching
 over to handle apply in omapdrm.

I was under the impression that the apply mechanism, the caching and
setting of the configs, was the major issue you had. But you're hinting
that the actual problem is related to ovl enable/disable? I haven't
tried fixing the ovl enable/disable, as I didn't know it's an issue for
omapdrm. Or are they both equally big issues?

 Tomi





Re: [RFC 0/3] solving omapdrm/omapdss layering issues

2012-08-02 Thread Tomi Valkeinen
On Thu, 2012-08-02 at 07:45 -0500, Rob Clark wrote:
 On Thu, Aug 2, 2012 at 2:13 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
  On Wed, 2012-08-01 at 09:25 -0500, Rob Clark wrote:
  On Wed, Aug 1, 2012 at 4:21 AM, Tomi Valkeinen tomi.valkei...@ti.com 
  wrote:
 
   I guess the fact is that DRM concepts do not really match the OMAP DSS
   hardware, and we'll have to use whatever gives us least problems.
 
  Actually, I think it does map fairly well to the hardware.. at least
  more so than to omapdss ;-)
 
  Hm, I'm not sure I understand, omapdss concepts map directly to the
  hardware.
 
 I think it is mainly exposing the encoder and panel as two separate
 entities.. which seems to be what Archit is working on

I still don't follow =) They are separate entities. Omapdss models the
HW quite directly, I think. It doesn't expose everything, though, as the
output drivers (dsi.c, dpi.c etc) are used via the panel drivers.

 in case of something like DVI bridge from DPI, this seems pretty
 straightforward.. only the connector needs to know about DDC stuff,
 which i2c to use and that sort of thing.  So at kms level we would
 have (for example) an omap_dpi_encoder which would be the same for DPI
 panel (connector) or DPI-DVI bridge.  For HDMI I'm still looking
 through the code to see how this would work.  Honestly I've looked
 less at this part of code and encoder related registers in the TRM,
 compared to the ovl/mgr parts, but at least from the 'DSS overview'
 picture in the TRM it seems to make sense ;-)
 
 KMS even exposes the idea that certain crtcs can connect to only
 certain encoders.  Or that you could you could have certain connectors
 switched between encoders.  For example if you had a hw w/ DPI out,
 and some mux to switch that back and forth between a DPI lcd panel and
 a DPI-DVI bridge.  (Ok, I'm not aware of any board that actually does
 this, but it is in theory possible.)  So we could expose possible
 video chain topologies to userspace in this way.

OMAP3 SDP board has such a setup, with manual switch to select between
LCD and DVI.

 The other thing is that we don't need to propagate timings from the
 panel up to the mgr at the dss level.. kms is already handling this
 for us.  In my latest version, which I haven't pushed, I removed the
 'struct omap_overlay_mgr' ptr from 'struct omap_dss_device'.

You're thinking only about simple DPI cases. Consider this example, with
a DSI-to-DP bridge chip. What we have is the following flow of data:

DISPC -> DSI -> DSI-2-DP -> DP monitor

The timings you are thinking about are in the DISPC, but here they are
only one part of the link. And here the DISPC timings are not actually
the timings the user is interested in. The user wants his
timings to be between DSI-2-DP chip and the DP monitor.

Timings programmed to DISPC are not the same. The timings for DISPC come
from the DSI driver, and they may be very different than the user's
timings. With DSI video mode, the DISPC timings would have some
resemblance to the user's timings, mainly the time to send one line
would be the same. With DSI cmd mode, the DISPC timings would be
something totally different, most likely with 0 blank times and as fast
pixel clock as possible.

What omapdss does currently is that you set the user's timings to the
right side of the chain, which propagate back to DSS. This allows the
DSI-2-DP bridge use DSI timings that work optimally for the bridge, and
DSI driver will use DISPC timings that work optimally for it.

And it's not only about timings above, but also other settings related
to the busses between the components. Clock dividers, polarities, stuff
like that.

 I think the problem was there were some cases, like ovl updates before
 setting the mgr, where the user_info_dirty flag would be cleared but
 the registers not updated.  This is what I meant by sequence of

Hmm, I'd appreciate more info about this if you can give. Sounds like a
bug, that shouldn't happen.

So you mean that you have an ovl, with no manager set. And you change
the settings of the ovl before assigning it to a mgr? That's something I
haven't really tested, so it could bug, true.

 operations at KMS level confusing omapdss.  This should be fixable
 with some debugging.  Although getting rid of the state tracking at
 omapdss level altogether was a much simpler solution, and is the one I
 prefer :-)

Yes, I don't prefer the state tracking either (we didn't have it in
earlier versions of omapdss), but I still don't see an alternative to it if
we want to support all the stuff we have.

 Tomi





Re: [RFC 0/5] Generic panel framework

2012-08-17 Thread Tomi Valkeinen
Hi,

On Fri, 2012-08-17 at 02:49 +0200, Laurent Pinchart wrote:

 I will appreciate all reviews, comments, criticisms, ideas, remarks, ... If

Oookay, where to start... ;)

A few cosmetic/general comments first.

I find the file naming a bit strange. You have panel.c, which is the
core framework, panel-dbi.c, which is the DBI bus, panel-r61517.c, which
is driver for r61517 panel...

Perhaps something in this direction (in order): panel-core.c,
mipi-dbi-bus.c, panel-r61517.c? And we probably end up with quite a lot
of panel drivers, perhaps we should already divide these into separate
directories, and then we wouldn't need to prefix each panel with
panel- at all.

---

Should we aim for DT only solution from the start? DT is the direction
we are going, and I feel the older platform data stuff would be
deprecated soon.

---

Something missing from the intro is how this whole thing should be used.
It doesn't help if we know how to turn on the panel, we also need to
display something on it =). So I think some kind of diagram/example of
how, say, drm would use this thing, and also how the SoC specific DBI
bus driver would be done, would clarify things.

---

We have discussed face to face about the different hardware setups and
scenarios that we should support, but I'll list some of them here for
others:

1) We need to support chains of external display chips and panels. A
simple example is a chip that takes DSI in, and outputs DPI. In that
case we'd have a chain of SoC -> DSI2DPI -> DPI panel.

In final products I think two external devices is the maximum (at least
I've never seen three devices in a row), but in theory and in
development environments the chain can be arbitrarily long. Also the
connections are not necessarily 1-to-1, but a device can take one input
while it has two outputs, or a device can take two inputs.

Now, I think two external devices is a must requirement. I'm not sure if
supporting more is an important requirement. However, if we support two
devices, it could be that it's trivial to change the framework to
support n devices.

2) Panels and display chips are all but standard. They very often have
their own sequences how to do things, have bugs, or implement some
feature in slightly different way than some other panel. This is why the
panel driver should be able to control or define the way things happen.

As an example, Sharp LQ043T1DG01 panel
(www.sharpsme.com/download/LQ043T1DG01-SP-072106.pdf). It is enabled with
the following sequence:

- Enable VCC and AVDD regulators
- Wait min 50ms
- Enable full video stream (pck, syncs, pixels) from SoC
- Wait min 0.5ms
- Set DISP GPIO, which turns on the display panel

Here we could split the enabling of panel to two parts, prepare (in this
case starts regulators and waits 50ms) and finish (wait 0.5ms and set
DISP GPIO), and the upper layer would start the video stream in between.

I realize this could be done with the PANEL_ENABLE_* levels in your RFC,
but I don't think the concepts quite match:

- PANEL_ENABLE_BLANK level is needed for smart panels, as we need to
configure them and send the initial frame at that operating level. With
dummy panels there's really no such level, there's just one enable
sequence that is always done right away.

- I find waiting at the beginning of a function very ugly (what are we
waiting for?) and we'd need that when changing the panel to
PANEL_ENABLE_ON level.

- It's still limited if the panel is a stranger one (see following
example).

Consider the following theoretical panel enable example, taken to absurd
level just to show the general problem:

- Enable regulators
- Enable video stream
- Wait 50ms
- Disable video stream
- Set enable GPIO
- Enable video stream

This one would be rather impossible with the upper layer handling the
enabling of the video stream. Thus I see that the panel driver needs to
control the sequences, and the Sharp panel driver's enable would look
something like:

regulator_enable(...);
sleep();
dpi_enable_video();
sleep();
gpio_set(..);

Note that even with this model we still need the PANEL_ENABLE levels you
have.

---

I'm not sure I understand the panel unload problem you mentioned. Nobody
should have direct references to the panel functions, so there shouldn't
be any automatic references that would prevent module unloading. So when
the user does rmmod panel-mypanel, the panel driver's remove will be
called. It'll unregister itself from the panel framework, which causes
notifications and the display driver will stop using the panel. After
that nobody has pointers to the panel, and it can safely be unloaded.

It could cause some locking issues, though. First the panel's remove
could take a lock, but the remove sequence would cause the display
driver to call disable on the panel, which could again try to take the
same lock...

 Tomi



signature.asc
Description: This is a digitally signed message part
___
dri-devel mailing list

Re: [RFC 3/5] video: panel: Add MIPI DBI bus support

2012-08-17 Thread Tomi Valkeinen
On Fri, 2012-08-17 at 02:49 +0200, Laurent Pinchart wrote:

 +/* -----------------------------------------------------------------------
 + * Bus operations
 + */
 +
 +void panel_dbi_write_command(struct panel_dbi_device *dev, unsigned long cmd)
 +{
 +	dev->bus->ops->write_command(dev->bus, cmd);
 +}
 +EXPORT_SYMBOL_GPL(panel_dbi_write_command);
 +
 +void panel_dbi_write_data(struct panel_dbi_device *dev, unsigned long data)
 +{
 +	dev->bus->ops->write_data(dev->bus, data);
 +}
 +EXPORT_SYMBOL_GPL(panel_dbi_write_data);
 +
 +unsigned long panel_dbi_read_data(struct panel_dbi_device *dev)
 +{
 +	return dev->bus->ops->read_data(dev->bus);
 +}
 +EXPORT_SYMBOL_GPL(panel_dbi_read_data);

I'm not that familiar with how to implement bus drivers, can you
describe in pseudo code how the SoC's DBI driver would register these?


I think write/read data functions are a bit limited. Shouldn't they be
something like write_data(const u8 *buf, int size) and read_data(u8
*buf, int len)?

Something that's totally missing is configuring the DBI bus. There are a
bunch of timing related values that need to be configured. See
include/video/omapdss.h struct rfbi_timings. While the struct is OMAP
specific, if I recall right most of the values match to the MIPI DBI
spec.

And this makes me wonder: you use the DBI bus for a SYS-80 panel. The busses
may look similar in operation, but are they really so similar when you
take into account the timings (and perhaps something else; it's been
years since I read the MIPI DBI spec)?


Then there's the start_transfer. This is something I'm not sure what is
the best way to handle it, but the same problems that I mentioned in the
previous post related to enable apply here also. For example, what if
the panel needs to be updated in two parts? This is done on the Nokia N9.
From the panel's perspective, it'd be best to handle it somewhat like this
(although asynchronously, probably):

write_update_area(0, 0, xres, yres / 2);
write_memory_start()
start_pixel_transfer();

wait_transfer_done();

write_update_area(0, yres / 2, xres, yres / 2);
write_memory_start()
start_pixel_transfer();

Why I said I'm not sure about this is that it does complicate things, as
the actual pixel data often comes from the display subsystem hardware,
which should probably be controlled by the display driver.


I think there also needs to be some kind of transfer_done notifier, for
both the display driver and the panel driver. Although if the display
driver handles starting the actual pixel transfer, then it'll get the
transfer_done via whatever interrupt the SoC has.
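As a sketch of what such a transfer_done notifier could look like, here's a simplified userspace model (names and the fixed-size listener array are hypothetical; a kernel version would presumably use notifier chains or completion callbacks):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical transfer_done notifier: both the display driver and the
 * panel driver register a callback that fires when the pixel transfer
 * completes. */
#define MAX_LISTENERS 4

typedef void (*done_cb)(void *ctx);

struct transfer_notifier {
	done_cb cb[MAX_LISTENERS];
	void *ctx[MAX_LISTENERS];
	int n;
};

static int notifier_register(struct transfer_notifier *nf, done_cb cb,
			     void *ctx)
{
	if (nf->n >= MAX_LISTENERS)
		return -1;
	nf->cb[nf->n] = cb;
	nf->ctx[nf->n] = ctx;
	nf->n++;
	return 0;
}

/* Would be called from the SoC's transfer-complete interrupt handler. */
static void notifier_fire(struct transfer_notifier *nf)
{
	for (int i = 0; i < nf->n; i++)
		nf->cb[i](nf->ctx[i]);
}

/* Example listener: counts completed transfers. */
static int done_count;
static void count_done(void *ctx) { (void)ctx; done_count++; }
```

With this shape, the display driver that starts the pixel transfer and the panel driver that needs to send the next write_update_area can both hook the same event.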


Also as food for thought: videomode timings do not really make sense
for DBI panels, at least when you just consider the DBI side. There's
really just the resolution of the display, and then the DBI timings. No
pck, syncs, etc. Of course in the end there's the actual panel, which
does have these video timings. But often they cannot be configured, and
often you don't even know them as the specs don't tell them.

 Tomi





Re: [RFC 3/5] video: panel: Add MIPI DBI bus support

2012-08-17 Thread Tomi Valkeinen
On Fri, 2012-08-17 at 12:02 +0200, Laurent Pinchart wrote:
 Hi Tomi,
 
 Thank you for the review.
 
 On Friday 17 August 2012 12:03:02 Tomi Valkeinen wrote:
  On Fri, 2012-08-17 at 02:49 +0200, Laurent Pinchart wrote:
   +/* ---------------------------------------------------------------------
   + * Bus operations
   + */
   +
   +void panel_dbi_write_command(struct panel_dbi_device *dev, unsigned long cmd)
   +{
   +	dev->bus->ops->write_command(dev->bus, cmd);
   +}
   +EXPORT_SYMBOL_GPL(panel_dbi_write_command);
   +
   +void panel_dbi_write_data(struct panel_dbi_device *dev, unsigned long data)
   +{
   +	dev->bus->ops->write_data(dev->bus, data);
   +}
   +EXPORT_SYMBOL_GPL(panel_dbi_write_data);
   +
   +unsigned long panel_dbi_read_data(struct panel_dbi_device *dev)
   +{
   +	return dev->bus->ops->read_data(dev->bus);
   +}
   +EXPORT_SYMBOL_GPL(panel_dbi_read_data);
  
  I'm not that familiar with how to implement bus drivers, can you
  describe in pseudo code how the SoC's DBI driver would register these?
 
 Sure.
 
 The DBI bus driver first needs to create a panel_dbi_bus_ops instance:
 
 static const struct panel_dbi_bus_ops sh_mobile_lcdc_dbi_bus_ops = {
 .write_command = lcdc_dbi_write_command,
 .write_data = lcdc_dbi_write_data,
 .read_data = lcdc_dbi_read_data,
 };

Thanks for the example, I think it cleared up things a bit.

As I mentioned earlier, I really think "panel" is not the right name here.
While the whole framework may be called the panel framework, the bus drivers
are not panels, and we should also support external chips, which are not
panels either.

  I think write/read data functions are a bit limited. Shouldn't they be
  something like write_data(const u8 *buf, int size) and read_data(u8
  *buf, int len)?
 
 Good point. My hardware doesn't support multi-byte read/write operations 
 directly so I haven't thought about adding those.

OMAP HW doesn't support it either. Well, that's not quite true, as OMAP's
system DMA could be used to write a buffer to the DBI output. But that's
really the same as doing the write with a loop on the CPU.

But first, the data type should be byte, not unsigned long. How would
you write 8 bits or 16 bits with your API? And second, if the function
takes just u8, you'd need lots of calls to do simple writes. 

 Can your hardware group command + data writes in a single operation? If so
 we should expose that at the API level as well.

No it can't. But with DCS that is a common operation, so we could have
some helpers to send command + data with one call. Then again, I'd hope
to have DCS somehow as a separate library, which would then use
DBI/DSI/whatnot to actually send the data.

I'm not quite sure how easy that is because of the differences between
the busses.

 Is DBI limited to 8-bit data transfers for commands ? Pixels can be 
 transferred 16-bit at a time, commands might as well. While DCS only 
 specifies 
 8-bit command/data, DBI panels that are not DCS compliant can use 16-bit 
 command/data (the R61505 panel, albeit a SYS-80 panel, does so).

I have to say I don't remember much about DBI =). Looking at OMAP's
driver, which was made for omap2 and hasn't been much updated since, I
see that there are 4 modes, 8/9/12/16 bits. I think that defines how
many of the parallel data lines are used.

However, I don't think that matters for the panel driver when it wants
to send data. The panel driver should just call dbi_write(buf, buf_len),
and the dbi driver would send the data in the buffer according to the
bus width.

Also note that some chips need to change the bus width on the fly. The
chip used on N800 wants configuration to be done with 8-bits, and pixel
transfers with 16-bits. Who knows why...

So I think this, and generally most of the configuration, should be
somewhat dynamic, so that the panel driver can change them when it
needs.
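A minimal model of such dynamic configuration, with the bus width as the example (all names are hypothetical; a real op set would cover the other rfbi_timings-style parameters too). It mirrors the N800 case of 8-bit configuration and 16-bit pixel transfers:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical DBI bus with a runtime-configurable width: the panel
 * driver switches to 8-bit mode for configuration and to 16-bit mode
 * for pixel data. */
struct dbi_bus {
	int width;		/* number of data lines in use: 8 or 16 */
	uint32_t last_word;	/* models what ends up on the data lines */
};

static int dbi_set_bus_width(struct dbi_bus *bus, int width)
{
	/* 9- and 12-bit modes left out, as in the omap driver. */
	if (width != 8 && width != 16)
		return -1;
	bus->width = width;
	return 0;
}

/* The bus driver masks the value to the configured number of lines,
 * like OMAP taking the lowest n bits of a 32-bit register write. */
static void dbi_write_word(struct dbi_bus *bus, uint32_t val)
{
	bus->last_word = val & ((1u << bus->width) - 1);
}
```

The panel driver would then call dbi_set_bus_width() itself at the points its chip requires, instead of the width being fixed at probe time.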

  Something that's totally missing is configuring the DBI bus. There are a
  bunch of timing related values that need to be configured. See
  include/video/omapdss.h struct rfbi_timings. While the struct is OMAP
  specific, if I recall right most of the values match to the MIPI DBI
  spec.
 
 I've left that out currently, and thought about passing that information as
 platform data to the DBI bus driver. That's the easiest solution, but I agree
 that it's a hack. Panels should expose their timing requirements to the DBI
 host. API wise that wouldn't be difficult (we only need to add timing
 information to the panel platform data and add a function to the DBI API to
 retrieve it); one of the challenges might be to express it in a way that's
 both universal enough and easy to use for DBI bus drivers.

As I pointed above, I think the panel driver shouldn't expose it, but
the panel driver should somehow set it. Or at least allowed to change it
in some manner. This is actually again, the same problem as with enable
and transfer: who controls what's going on.

How I think it should work is something like

Re: [RFC 0/5] Generic panel framework

2012-08-17 Thread Tomi Valkeinen
On Fri, 2012-08-17 at 13:10 +0200, Laurent Pinchart wrote:

 What kind of directory structure do you have in mind ? Panels are already 
 isolated in drivers/video/panel/ so we could already ditch the panel- prefix 
 in drivers.

The same directory also contains files for the framework and buses. But
perhaps there's no need for additional directories if the amount of
non-panel files is small. And you can easily see from the name that they
are not panel drivers (e.g. mipi_dbi_bus.c).

 Would you also create include/video/panel/ ?

Perhaps that would be good. Having all the files prefixed with
panel- is not bad as such, but it just feels superfluous.

  ---
  
  Should we aim for DT only solution from the start? DT is the direction we
  are going, and I feel the older platform data stuff would be deprecated
  soon.
 
 Don't forget about non-ARM architectures :-/ We need panel drivers for SH as
 well, which doesn't use DT. I don't think that would be a big issue; a
 DT-compliant solution should be easy to use through board code and platform
 data as well.

I didn't forget about them as I didn't even think about them ;). I
somehow had the impression that other architectures would also use DT,
sooner or later. I could be mistaken, though.

And true, it's not a big issue to support both DT and non-DT versions,
but I've been porting omap stuff to DT while keeping the old platform
data stuff there too, and it just feels burdensome. For very simple
panels it's easy, but when you're passing lots of parameters the code
starts to get long.

  This one would be rather impossible with the upper layer handling the
  enabling of the video stream. Thus I see that the panel driver needs to
  control the sequences, and the Sharp panel driver's enable would look
  something like:
  
  regulator_enable(...);
  msleep(50);
  dpi_enable_video();
  usleep_range(500, 1000);
  gpio_set(...);
 
 I have to admit I have no better solution to propose at the moment, even if I
 don't really like making the panel control the video stream. When several
 devices are present in the chain all of them might have similar annoying
 requirements, and my feeling is that the resulting code will be quite messy.
 At the end of the day the only way to really find out is to write an
 implementation.

If we have a chain of devices, and each device uses the bus interface
from the previous device in the chain, there shouldn't be a problem. In
that model each device can handle the task however is best for it.

I think the problems come from the divided control we'll have. I mean,
if the panel driver would decide itself what to send to its output, and
it would provide the data (think of an i2c device), this would be very
simple. And it actually is for things like configuration data etc, but
not so for video stream.

  It could cause some locking issues, though. First the panel's remove
  could take a lock, but the remove sequence would cause the display
  driver to call disable on the panel, which could again try to take the
  same lock...
 
 We have two possible ways of calling panel operations, either directly
 (panel->bus->ops->enable(...)) or indirectly (panel_enable(...)).
 
 The former is what V4L2 currently does with subdevs, and requires display
 drivers to hold a reference to the panel. The latter can do without a direct
 reference only if we use a global lock, which is something I would like to

Wouldn't panel_enable() just do the same panel->bus->ops->enable()
anyway, and both require a panel reference? I don't see the difference.

 avoid. A panel-wide lock wouldn't work, as the access function would need to 
 take the lock on a panel instance that can be removed at any time.

Can't this be handled with some kind of get/put refcounting? If there's
a ref, it can't be removed.
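Roughly, the get/put idea could look like this (a hypothetical userspace model; in the kernel this would likely be built on kref and the driver core):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical panel refcounting: a display driver takes a reference
 * with panel_get(); removal is refused (or deferred) while references
 * are outstanding. */
struct panel {
	int refcount;
	bool registered;
};

static bool panel_get(struct panel *p)
{
	if (!p->registered)
		return false;	/* already unregistered, can't be used */
	p->refcount++;
	return true;
}

static void panel_put(struct panel *p)
{
	assert(p->refcount > 0);
	p->refcount--;
}

/* Called from the panel driver's remove(); fails while the panel is
 * still in use, so the caller must first notify users to let go. */
static bool panel_try_unregister(struct panel *p)
{
	if (p->refcount > 0)
		return false;
	p->registered = false;
	return true;
}
```

In the rmmod scenario discussed above, remove() would first send the "going away" notification, the display driver would disable the panel and drop its reference, and only then would the unregister succeed.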

Generally about locks, if we define that panel ops may only be called
exclusively, does it simplify things? I think we can make such
requirements, as there should be only one display framework that handles
the panel. Then we don't need locking for things like enable/disable.

Of course we need to be careful about things where calls come from
outside the display framework. I guess one such thing is rmmod, but if
that causes a notification to the display framework, which again handles
locking, it shouldn't be a problem.

Another thing to be careful about is if the panel internally uses irqs,
workqueues, sysfs files or such. In that case it needs to handle
locking.

 Tomi





Re: [RFC 3/5] video: panel: Add MIPI DBI bus support

2012-08-17 Thread Tomi Valkeinen
On Fri, 2012-08-17 at 14:33 +0200, Laurent Pinchart wrote:

  But first, the data type should be byte, not unsigned long. How would
  you write 8 bits or 16 bits with your API?
 
 u8 and u16 both fit in an unsigned long :-) Please see below.

Ah, I see, so the driver would just discard 24 or 16 bits from the
ulong. I somehow thought that if you have an 8-bit bus and you call the
write with a ulong, 4 bytes would be written.

  Then again, I'd hope to have DCS somehow as a separate library, which would
  then use DBI/DSI/whatnot to actually send the data.
  
  I'm not quite sure how easy that is because of the differences between
  the busses.
  
   Is DBI limited to 8-bit data transfers for commands ? Pixels can be
   transferred 16-bit at a time, commands might as well. While DCS only
   specifies 8-bit command/data, DBI panels that are not DCS compliant can
   use 16-bit command/data (the R61505 panel, albeit a SYS-80 panel, does
   so).
  
  I have to say I don't remember much about DBI =). Looking at OMAP's
  driver, which was made for omap2 and hasn't been much updated since, I
  see that there are 4 modes, 8/9/12/16 bits. I think that defines how
  many of the parallel data lines are used.
 
 SYS-80 also has an 18-bits mode, where bits 0 and 9 are always ignored when 
 transferring instructions and data other than pixels (for pixels the 18-bits 
 bus width can be used to transfer RGB666 in a single clock cycle).
 
 See page 87 of 
 http://read.pudn.com/downloads91/sourcecode/others/348230/e61505_103a.pdf.
 
  However, I don't think that matters for the panel driver when it wants
  to send data. The panel driver should just call dbi_write(buf, buf_len),
  and the dbi driver would send the data in the buffer according to the
  bus width.
 
 According to the DCS specification, commands and parameters are transferred
 using 8-bit data. Non-DCS panels can however use wider commands and
 parameters (all commands and parameters are 16-bits wide for the R61505 for
 instance).
 
 We can add an API to switch the DBI bus width on the fly. For Renesas
 hardware this would just require shifting bits around to output the 8-bit or
 16-bit commands on the right data lines (the R61505 uses D17-D9 in 8-bit mode,
 while the DCS specification mentions D7-D0) based on how the panel is
 connected and on which lines the panel expects data.
 
 As commands can be expressed on either 8 or 16 bits I would use a 16-bit type
 for them.

I won't put my head on the block, but I don't think DBI has any
restriction on the size of the command. A command just means a data
transfer while keeping the D/CX line low, and data when the line is
high. Similar transfers for n bytes can be done in both modes.

 For parameters, we can either express everything as u8 * in the DBI bus 
 operations, or use a union similar to i2c_smbus_data
 
 union i2c_smbus_data {
 __u8 byte;
 __u16 word;
 __u8 block[I2C_SMBUS_BLOCK_MAX + 2]; /* block[0] is used for length */
/* and one more for user-space compatibility */
 };

There's no DBI_BLOCK_MAX, so at least an identical union won't work. I
think it's simplest to have a u8 * function as a base, and then a few
helpers to write the most common datatypes.

So we could have on the lowest level something like:

dbi_write_command(u8 *buf, size_t size);
dbi_write_data(u8 *buf, size_t size);

And possible helpers:

dbi_write_data(u8 *cmd_buf, size_t cmd_size, u8 *data_buf, size_t data_size);

dbi_write_dcs(u8 cmd, u8 *data, size_t size);

And variations:

dbi_write_dcs_0(u8 cmd);
dbi_write_dcs_1(u8 cmd, u8 data);

etc. So a simple helper to send 16 bits would be something like:

dbi_write_data_u16(u16 data)
{
	/* or are the bytes the other way around... */
	u8 buf[2] = { data & 0xff, (data >> 8) & 0xff };
	return dbi_write_data(buf, 2);
}
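For what it's worth, here's a compilable version of such a 16-bit helper, assuming the two-argument dbi_write_data(buf, size) primitive above and low-byte-first order (the byte order is a guess, as noted; it would need checking against the spec):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Captures what the low-level primitive would put on the bus, so the
 * helper's behaviour can be inspected. */
static uint8_t tx_buf[16];
static size_t tx_len;

/* Stand-in for the low-level dbi_write_data(u8 *buf, size_t size). */
static int dbi_write_data_buf(const uint8_t *buf, size_t size)
{
	memcpy(tx_buf, buf, size);
	tx_len = size;
	return 0;
}

/* Helper: send one 16-bit value, low byte first. The byte order is an
 * assumption here, not taken from the MIPI DBI spec. */
static int dbi_write_data_u16(uint16_t data)
{
	uint8_t buf[2] = { data & 0xff, (data >> 8) & 0xff };

	return dbi_write_data_buf(buf, 2);
}
```

The dbi_write_dcs_* variants above would be thin wrappers of the same shape, each packing its arguments into a small buffer and calling the base function.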


 Helper functions would be available to perform 8-bit, 16-bit or n*8 bits 
 transfers.
 
 Would that work for your use cases ?
 
  Also note that some chips need to change the bus width on the fly. The
  chip used on N800 wants configuration to be done with 8-bits, and pixel
  transfers with 16-bits. Who knows why...
 
 On which data lines is configuration performed ? D7-D0 ?

I guess so, but does it matter? All the bus driver needs to know is how
to send 8/16/... bit data. On OMAP we just write the data to a 32 bit
register, and the HW takes the lowest n bits. Do the bits represent the
data lines directly on Renesas?

The omap driver actually only implements 8 and 16 bit modes, not the 9
and 12 bit modes. I'm not sure what kind of shifting is needed for
those.

   We might just need to provide fake timings. Video mode timings are at the
   core of display support in all drivers so we can't just get rid of them.
   The h/v front/back porch and sync won't be used by display drivers for
   DBI/DSI panels anyway.
  
  Right. But we should probably think if we can, at the panel level, easily
  separate conventional panels and 

Re: [RFC 0/5] Generic panel framework

2012-08-20 Thread Tomi Valkeinen
On Sat, 2012-08-18 at 03:16 +0200, Laurent Pinchart wrote:
 Hi Tomi,

 mipi-dbi-bus might not belong to include/video/panel/ though, as it can be 
 used for non-panel devices (at least in theory). The future mipi-dsi-bus 
 certainly will.

They are both display busses. So while they could be used for anything,
I find it quite unlikely as there are much better alternatives for
generic bus needs.

 Would you be able to send incremental patches on top of v2 to implement the
 solution you have in mind? It would be neat if you could also implement
 mipi-dsi-bus for the OMAP DSS and test the code with a real device :-)

Yes, I'd like to try this out on OMAP, both DBI and DSI. However, I fear
it'll be quite complex due to the dependencies we have all around in the
current driver. We're working on simplifying things so that it'll be
easier to try things like the panel framework, so we're going in the
right direction.

  Generally about locks, if we define that panel ops may only be called
  exclusively, does it simplify things? I think we can make such
  requirements, as there should be only one display framework that handles
  the panel. Then we don't need locking for things like enable/disable.
 
 Pushing locking to callers would indeed simplify panel drivers, but we need
 to make sure we won't need to expose a panel to several callers in the future.

I have a feeling that would be a bad idea.

Display related stuff is quite sensitive to any delays, so any extra
transactions over, say, the DSI bus could cause a noticeable glitch on the
screen. I'm not sure what are all the possible ops that a panel can
offer, but I think all that affect the display or could cause delays
should be handled by one controlling entity (drm or such). The
controlling entity needs to handle locking anyway, so in that sense I
don't think it's an extra burden for it.

The things that come to my mind that could possibly cause calls to the
panel outside drm: debugfs, sysfs, audio, backlight. Of those, I think
backlight should go through drm. Audio, no idea. debugfs and sysfs
locking needs to be handled by the panel driver, and they are a bit
problematic as I guess having them requires full locking.

 Tomi





Re: [PATCH] drm/omap: add more new timings fields

2012-09-09 Thread Tomi Valkeinen
On Fri, 2012-09-07 at 12:59 -0500, Rob Clark wrote:
 From: Rob Clark r...@ti.com
 
 Without these, DVI is broken.
 
 Signed-off-by: Rob Clark r...@ti.com
 ---
 Greg, it looks like the omapdss changes which added these fields, as
 well as the interlaced field, were merged in Linux 3.5-rc5.  So I
 think both this and the 'update for interlaced' patch are needed for
 both 3.6 and 3.5.

The omapdss timing and interlace changes were merged in the 3.6 merge
window; they were not merged in the 3.5 merge window (much less in
3.5-rc5)...

 Tomi





Re: [PATCH v2] of: Add videomode helper

2012-09-13 Thread Tomi Valkeinen
On Wed, 2012-07-04 at 09:56 +0200, Sascha Hauer wrote:
 This patch adds a helper function for parsing videomodes from the devicetree.
 The videomode can be either converted to a struct drm_display_mode or a
 struct fb_videomode.

I have more or less the same generic comment for this as for the power
sequences series discussed: this would add panel specific information
into DT data, instead of the driver handling it. But, as also discussed
in the thread, there are differing opinions on which way is better.

 +int of_get_video_mode(struct device_node *np, struct drm_display_mode *dmode,
 + struct fb_videomode *fbmode);

From the caller's point of view I think it'd make more sense to have
separate functions to get the drm mode or the fb mode. I don't think
there's a case where the caller would want both.

Then again, even better would be to have a common video mode struct used
by both fb and drm... But I think that's been on the todo list of
Laurent for a long time already =).

 Tomi





Re: [PATCH v4] of: Add videomode helper

2012-09-19 Thread Tomi Valkeinen
On Wed, 2012-09-19 at 10:20 +0200, Steffen Trumtrar wrote:
 This patch adds a helper function for parsing videomodes from the devicetree.
 The videomode can be either converted to a struct drm_display_mode or a
 struct fb_videomode.
 
 Signed-off-by: Sascha Hauer s.ha...@pengutronix.de
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 ---
 
 Hi!
 
 changes since v3:
   - print error messages
   - free alloced memory
   - general cleanup
 
 Regards
 Steffen
 
  .../devicetree/bindings/video/displaymode  |   74 +
  drivers/of/Kconfig |5 +
  drivers/of/Makefile|1 +
  drivers/of/of_videomode.c  |  283 
 
  include/linux/of_videomode.h   |   56 
  5 files changed, 419 insertions(+)
  create mode 100644 Documentation/devicetree/bindings/video/displaymode
  create mode 100644 drivers/of/of_videomode.c
  create mode 100644 include/linux/of_videomode.h
 
 diff --git a/Documentation/devicetree/bindings/video/displaymode 
 b/Documentation/devicetree/bindings/video/displaymode
 new file mode 100644
 index 000..990ca52
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/video/displaymode
 @@ -0,0 +1,74 @@
 +videomode bindings
 +==================
 +
 +Required properties:
 + - hactive, vactive: Display resolution
 + - hfront-porch, hback-porch, hsync-len: Horizontal display timing parameters
 +   in pixels
 + - vfront-porch, vback-porch, vsync-len: Vertical display timing parameters
 +   in lines
 + - clock: displayclock in Hz
 +
 +Optional properties:
 + - width-mm, height-mm: Display dimensions in mm
 + - hsync-active-high (bool): Hsync pulse is active high
 + - vsync-active-high (bool): Vsync pulse is active high
 + - interlaced (bool): This is an interlaced mode
 + - doublescan (bool): This is a doublescan mode
 +
 +There are different ways of describing a display mode. The devicetree
 +representation corresponds to the one commonly found in datasheets for
 +displays. The description of the display and its mode is split in two parts:
 +first the display properties like size in mm, and (optionally) multiple
 +subnodes with the supported modes.
 +
 +Example:
 +
 +	display@0 {
 +		width-mm = <800>;
 +		height-mm = <480>;
 +		modes {
 +			mode0: mode@0 {
 +				/* 1920x1080p24 */
 +				clock = <5200>;
 +				hactive = <1920>;
 +				vactive = <1080>;
 +				hfront-porch = <25>;
 +				hback-porch = <25>;
 +				hsync-len = <25>;
 +				vback-porch = <2>;
 +				vfront-porch = <2>;
 +				vsync-len = <2>;
 +				hsync-active-high;
 +			};
 +		};
 +	};
 +
 +Every property also supports the use of ranges, so the commonly used
 +datasheet description with <min typ max> tuples can be used.
 +
 +Example:
 +
 +	mode1: mode@1 {
 +		/* 1920x1080p24 */
 +		clock = <14850>;
 +		hactive = <1920>;
 +		vactive = <1080>;
 +		hsync-len = <0 44 60>;
 +		hfront-porch = <80 88 95>;
 +		hback-porch = <100 148 160>;
 +		vfront-porch = <0 4 6>;
 +		vback-porch = <0 36 50>;
 +		vsync-len = <0 5 6>;
 +	};
 +
 +The videomode can be linked to a connector via phandles. The connector has to
 +support the display- and default-mode-property to link to and select a mode.
 +
 +Example:
 +
 +	hdmi@0012 {
 +		status = "okay";
 +		display = <&benq>;
 +		default-mode = <&mode1>;
 +	};
 +
 diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
 index dfba3e6..a3acaa3 100644
 --- a/drivers/of/Kconfig
 +++ b/drivers/of/Kconfig
 @@ -83,4 +83,9 @@ config OF_MTD
   depends on MTD
   def_bool y
  
 +config OF_VIDEOMODE
 + def_bool y
 + help
 +   helper to parse videomodes from the devicetree
 +
  endmenu # OF
 diff --git a/drivers/of/Makefile b/drivers/of/Makefile
 index e027f44..80e6db3 100644
 --- a/drivers/of/Makefile
 +++ b/drivers/of/Makefile
 @@ -11,3 +11,4 @@ obj-$(CONFIG_OF_MDIO)   += of_mdio.o
  obj-$(CONFIG_OF_PCI) += of_pci.o
  obj-$(CONFIG_OF_PCI_IRQ)  += of_pci_irq.o
  obj-$(CONFIG_OF_MTD) += of_mtd.o
 +obj-$(CONFIG_OF_VIDEOMODE)   += of_videomode.o
 diff --git a/drivers/of/of_videomode.c b/drivers/of/of_videomode.c
 new file mode 100644
 index 000..52bfc74
 --- /dev/null
 +++ b/drivers/of/of_videomode.c
 @@ -0,0 +1,283 @@
 +/*
 + * OF helpers for parsing display modes
 + *
 + * Copyright (c) 2012 Sascha Hauer s.ha...@pengutronix.de, Pengutronix
 + * Copyright (c) 2012 Steffen Trumtrar s.trumt...@pengutronix.de, 
 Pengutronix
 + *
 + * This file is released under the 

Re: [RFC 0/5] Generic panel framework

2012-09-19 Thread Tomi Valkeinen
Hi,

On Fri, 2012-08-17 at 02:49 +0200, Laurent Pinchart wrote:
 Hi everybody,
 
 While working on DT bindings for the Renesas Mobile SoC display controller
 (a.k.a. LCDC) I quickly realized that display panel implementation based on
 board code callbacks would need to be replaced by a driver-based panel
 framework.

I thought I'd try to approach the common panel framework by creating
device tree examples that OMAP would need. I only thought about the
connections and organization, not individual properties, so I have not
filled any compatible or such properties.

What I have below is first DT data for OMAP SoC (more or less what omap4
has), then display examples of different board setups. The hardware
examples I have here are all real, so no imaginary use cases =).

We don't need to implement support for all these at once, but I think
the DT data structure should be right from the start, so I'm posting
this to get discussion about the format.


OMAP SoC


So here's first the SoC specific display nodes. OMAP has a DSS (display
subsystem) block, which contains the following elements:

- DISPC (display controller) reads the pixels from memory and outputs
them using specified video timings. DISPC has three outputs, LCD0, LCD1
and TV. These are SoC internal outputs, they do not go outside the SoC.

- DPI gets its data from DISPC's LCD0, and outputs MIPI DPI (parallel
RGB)

- Two independent DSI modules, which get their data from LCD0 or LCD1,
and output MIPI DSI (a serial two-way video bus)

- HDMI, gets data from DISPC's TV output and outputs HDMI

/ {
	ocp {
		dss {
			dispc {
				dss-lcd0: output@0 {
				};

				dss-lcd1: output@1 {
				};

				dss-tv: output@2 {
				};
			};

			dpi: dpi {
				video-source = <&dss-lcd0>;
			};

			dsi0: dsi@0 {
				video-source = <&dss-lcd0>;
			};

			dsi1: dsi@1 {
				video-source = <&dss-lcd1>;
			};

			hdmi: hdmi {
				video-source = <&dss-tv>;
			};
		};
	};
};

I have defined all the relevant nodes, and video-source property is used
to point to the source for video data. I also define aliases for the SoC
outputs so that panels can use them.

One thing to note is that the video sources for some of the blocks, like
DSI, are not hardcoded in the HW, so dsi0 could get its data from LCD0
or LCD1. However, I don't think they are usually changed during runtime,
and the dss driver cannot change them independently for sure (meaning
that some upper layer should tell it how to change the config). Thus I
specify sane defaults here, but the board dts files can of course
override the video sources.

Another thing to note is that we have more outputs from OMAP than we
have outputs from DISPC. This means that the same video source is used
by multiple sinks (LCD0 used by DPI and DSI0). DPI and DSI0 cannot be
used at the same time, obviously.

And third thing to note, DISPC node defines outputs explicitly, as it
has multiple outputs, whereas the external outputs do not as they have
only one output. Thus the node's output is implicitly the node itself.
So, instead of having:

dsi0: dsi@0 {
	video-source = <&dss-lcd0>;
	dsi0-out0: output@0 {
	};
};

I have:

dsi0: dsi@0 {
	video-source = <&dss-lcd0>;
};

Of this I'm a bit unsure. I believe in most cases there's only one
output, so it'd be nice to have a shorter representation, but I'm not
sure if it's good to handle the cases for single and multiple outputs
differently. Or if it's good to mix control and data busses, as, for
example, dsi0 can be used as both control and data bus. Having the
output defined explicitly would separate the control and data bus nodes.


Simple DPI panel


Here a board has a DPI panel, which is controlled via i2c. Panel nodes
are children of the control bus, so in this case we define the panel
under i2c2.

i2c2 {
	dpi-lcd-panel {
		video-source = <&dpi>;
	};
};


HDMI


OMAP has a HDMI output, but it cannot be connected directly to an HDMI
cable. TI uses the tpd12s015 chip on its boards, which provides ESD
protection, level-shifting and whatnot (I'm an SW guy, google for the
chip to read the details =). tpd12s015 has a few GPIOs and power supplies
that need to be controlled, so we need a driver for it.

There's no control bus for the tpd12s015, so it's a platform device. Then
there's the device for the HDMI monitor; its DDC lines are connected
to OMAP's i2c4, thus the HDMI monitor device is a child of the i2c bus.

/ {
hdmi-connector: tpd12s015 {
 

Re: [PATCH 1/2 v6] of: add helper to parse display timings

2012-10-08 Thread Tomi Valkeinen
Hi,

On Thu, 2012-10-04 at 19:59 +0200, Steffen Trumtrar wrote:
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 ---
  .../devicetree/bindings/video/display-timings.txt  |  222 
 
  drivers/of/Kconfig |5 +
  drivers/of/Makefile|1 +
  drivers/of/of_display_timings.c|  183 
  include/linux/of_display_timings.h |   85 
  5 files changed, 496 insertions(+)
  create mode 100644 
 Documentation/devicetree/bindings/video/display-timings.txt
  create mode 100644 drivers/of/of_display_timings.c
  create mode 100644 include/linux/of_display_timings.h
 
 diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
 b/Documentation/devicetree/bindings/video/display-timings.txt
 new file mode 100644
 index 000..45e39bd
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/video/display-timings.txt
 @@ -0,0 +1,222 @@
 +display-timings bindings
 +========================
 +
 +display-timings-node
 +--------------------
 +
 +required properties:
 + - none
 +
 +optional properties:
 + - default-timing: the default timing value
 +
 +timings-subnode
 +---------------
 +
 +required properties:
 + - hactive, vactive: Display resolution
 + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
 in
 +   lines
 + - clock: displayclock in Hz
 +
 +optional properties:
 + - hsync-active-high (bool): Hsync pulse is active high
 + - vsync-active-high (bool): Vsync pulse is active high
 + - de-active-high (bool): Data-Enable pulse is active high
 + - pixelclk-inverted (bool): pixelclock is inverted
 + - interlaced (bool)
 + - doublescan (bool)

I think bool should generally be used for things that are on/off, like
interlace. For hsync-active-high and the others I'd rather have 0/1 values,
as others have already suggested.
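The distinction argued here can be made concrete with a toy model in plain C. This is NOT the kernel of_* API, just a sketch of why a presence-only boolean cannot tell "false" from "unspecified", while a 0/1-valued property keeps the two distinct:

```c
#include <assert.h>
#include <string.h>

/* Toy model (NOT the kernel of_* API): a property list where a boolean
 * property is encoded by mere presence, while a value property carries
 * an explicit 0/1. */
struct prop { const char *name; int value; };

static const struct prop *find_prop(const struct prop *props, int n,
				    const char *name)
{
	for (int i = 0; i < n; i++)
		if (strcmp(props[i].name, name) == 0)
			return &props[i];
	return NULL;
}

/* Presence-style bool: returns 1 if present, 0 otherwise -- "absent"
 * and "false" are indistinguishable. */
static int read_bool(const struct prop *props, int n, const char *name)
{
	return find_prop(props, n, name) != NULL;
}

/* Value-style property: returns 1 and stores the value if found,
 * 0 if absent -- "explicitly 0" and "absent" stay distinct. */
static int read_u32(const struct prop *props, int n, const char *name,
		    int *out)
{
	const struct prop *p = find_prop(props, n, name);

	if (!p)
		return 0;
	*out = p->value;
	return 1;
}
```

With presence-style flags, omitting a property and setting it false look identical; a valued polarity property keeps "explicitly active-low" distinct from "not specified", which is the point being made for the sync polarities.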

 +There are different ways of describing the capabilities of a display. The 
 devicetree
 +representation corresponds to the one commonly found in datasheets for 
 displays.
 +If a display supports multiple signal timings, the default-timing can be 
 specified.
 +
 +The parameters are defined as
 +
 +struct signal_timing
 +====================
 +
 +  +----------+-------------------------------------+----------+-------+
 +  |          |        ↑                            |          |       |
 +  |          |        |vback_porch                 |          |       |
 +  |          |        ↓                            |          |       |
 +  +----------#######################################----------+-------+
 +  |          #        ↑                            #          |       |
 +  |          #        |                            #          |       |
 +  |  hback   #        |                            #  hfront  | hsync |
 +  |   porch  #        |       hactive              #  porch   |  len  |
 +  |<-------->#<-------+--------------------------->#<-------->|<----->|
 +  |          #        |                            #          |       |
 +  |          #        |vactive                     #          |       |
 +  |          #        |                            #          |       |
 +  |          #        ↓                            #          |       |
 +  +----------#######################################----------+-------+
 +  |          |        ↑                            |          |       |
 +  |          |        |vfront_porch                |          |       |
 +  |          |        ↓                            |          |       |
 +  +----------+-------------------------------------+----------+-------+
 +  |          |        ↑                            |          |       |
 +  |          |        |vsync_len                   |          |       |
 +  |          |        ↓                            |          |       |
 +  +----------+-------------------------------------+----------+-------+
 +
 +
 +Example:
 +
 + display-timings {
 + 	default-timing = <&timing0>;
 + 	timing0: 1920p24 {
 + 		/* 1920x1080p24 */

I think this is commonly called 1080p24.

 + 		clock = <5200>;
 + 		hactive = <1920>;
 + 		vactive = <1080>;
 + 		hfront-porch = <25>;
 + 		hback-porch = <25>;
 + 		hsync-len = <25>;
 + 		vback-porch = <2>;
 + 		vfront-porch = <2>;
 + 		vsync-len = <2>;
 + 		hsync-active-high;
 + 	};
 + };
 +
 +Every property also supports the use of ranges, so the commonly used 
 datasheet
 

Re: [PATCH 1/2 v6] of: add helper to parse display timings

2012-10-08 Thread Tomi Valkeinen
On Mon, 2012-10-08 at 10:07 +0300, Tomi Valkeinen wrote:
 Hi,

 I don't see you setting disp->default_timing to OF_DEFAULT_TIMING in
 case there's no default_timing found.
 
 Or, at least I presume OF_DEFAULT_TIMING is meant to mark non-existing
 default timing. The name OF_DEFAULT_TIMING is not very descriptive to
 me.

Ah, I see now from the second patch how this is meant to be used. So if
there's no default timing in DT data, disp->default_timing is 0, meaning
the first entry. And the caller of of_get_videomode() will use
OF_DEFAULT_TIMING as index to get the default mode.

So I think it's ok.
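The index-resolution behaviour worked out above fits in a few lines of standalone C. The sentinel's value and the struct layout here are assumptions for illustration, not the real `of_display_timings.h` definitions:

```c
#include <assert.h>

/* Sketch of the index resolution discussed above.  The sentinel value
 * and the struct layout are assumptions for illustration; the real
 * definitions live in <linux/of_display_timings.h>. */
#define OF_DEFAULT_TIMING_SENTINEL (-1)	/* hypothetical value */

struct display_timings_model {
	int num_timings;
	int default_timing;	/* stays 0 when DT names no default timing */
};

/* The sentinel selects whatever slot the DT marked as default; since
 * default_timing is 0 when nothing was marked, the sentinel then simply
 * picks the first entry. */
static int resolve_timing_index(const struct display_timings_model *d,
				int index)
{
	if (index == OF_DEFAULT_TIMING_SENTINEL)
		index = d->default_timing;
	if (index < 0 || index >= d->num_timings)
		return -1;	/* out of range */
	return index;
}
```

So with no default in the DT data, the caller asking for the default mode falls through to entry 0, which matches the behaviour accepted above.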

 Tomi



signature.asc
Description: This is a digitally signed message part
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 2/2 v6] of: add generic videomode description

2012-10-08 Thread Tomi Valkeinen
On Thu, 2012-10-04 at 19:59 +0200, Steffen Trumtrar wrote:
 Get videomode from devicetree in a format appropriate for the
 backend. drm_display_mode and fb_videomode are supported atm.
 Uses the display signal timings from of_display_timings
 
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 ---
  drivers/of/Kconfig   |5 +
  drivers/of/Makefile  |1 +
  drivers/of/of_videomode.c|  212 
 ++
  include/linux/of_videomode.h |   41 
  4 files changed, 259 insertions(+)
  create mode 100644 drivers/of/of_videomode.c
  create mode 100644 include/linux/of_videomode.h
 
 diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
 index 646deb0..74282e2 100644
 --- a/drivers/of/Kconfig
 +++ b/drivers/of/Kconfig
 @@ -88,4 +88,9 @@ config OF_DISPLAY_TIMINGS
   help
 helper to parse display timings from the devicetree
  
 +config OF_VIDEOMODE
 + def_bool y
 + help
 +   helper to get videomodes from the devicetree
 +
  endmenu # OF
 diff --git a/drivers/of/Makefile b/drivers/of/Makefile
 index c8e9603..09d556f 100644
 --- a/drivers/of/Makefile
 +++ b/drivers/of/Makefile
 @@ -12,3 +12,4 @@ obj-$(CONFIG_OF_PCI)+= of_pci.o
  obj-$(CONFIG_OF_PCI_IRQ)  += of_pci_irq.o
  obj-$(CONFIG_OF_MTD) += of_mtd.o
  obj-$(CONFIG_OF_DISPLAY_TIMINGS) += of_display_timings.o
 +obj-$(CONFIG_OF_VIDEOMODE) += of_videomode.o
 diff --git a/drivers/of/of_videomode.c b/drivers/of/of_videomode.c
 new file mode 100644
 index 000..76ac16e
 --- /dev/null
 +++ b/drivers/of/of_videomode.c
 @@ -0,0 +1,212 @@
 +/*
 + * generic videomode helper
 + *
 + * Copyright (c) 2012 Steffen Trumtrar s.trumt...@pengutronix.de, 
 Pengutronix
 + *
 + * This file is released under the GPLv2
 + */
 +#include <linux/of.h>
 +#include <linux/fb.h>
 +#include <linux/slab.h>
 +#include <drm/drm_mode.h>
 +#include <linux/of_display_timings.h>
 +#include <linux/of_videomode.h>
 +
 +void dump_fb_videomode(struct fb_videomode *m)
 +{
 +	pr_debug("fb_videomode = %d %d %d %d %d %d %d %d %d %d %d %d %d\n",
 +		 m->refresh, m->xres, m->yres, m->pixclock, m->left_margin,
 +		 m->right_margin, m->upper_margin, m->lower_margin,
 +		 m->hsync_len, m->vsync_len, m->sync, m->vmode, m->flag);
 +}
 +
 +void dump_drm_displaymode(struct drm_display_mode *m)
 +{
 +	pr_debug("drm_displaymode = %d %d %d %d %d %d %d %d %d\n",
 +		 m->hdisplay, m->hsync_start, m->hsync_end, m->htotal,
 +		 m->vdisplay, m->vsync_start, m->vsync_end, m->vtotal,
 +		 m->clock);
 +}
 +
 +int videomode_from_timing(struct display_timings *disp, struct videomode *vm,
 +			  int index)
 +{
 +	struct signal_timing *st = NULL;
 +
 +	if (!vm)
 +		return -EINVAL;
 +
 +	st = display_timings_get(disp, index);
 +
 +	if (!st) {
 +		pr_err("%s: no signal timings found\n", __func__);
 +		return -EINVAL;
 +	}
 +
 +	vm->pixelclock = signal_timing_get_value(&st->pixelclock, 0);
 +	vm->hactive = signal_timing_get_value(&st->hactive, 0);
 +	vm->hfront_porch = signal_timing_get_value(&st->hfront_porch, 0);
 +	vm->hback_porch = signal_timing_get_value(&st->hback_porch, 0);
 +	vm->hsync_len = signal_timing_get_value(&st->hsync_len, 0);
 +
 +	vm->vactive = signal_timing_get_value(&st->vactive, 0);
 +	vm->vfront_porch = signal_timing_get_value(&st->vfront_porch, 0);
 +	vm->vback_porch = signal_timing_get_value(&st->vback_porch, 0);
 +	vm->vsync_len = signal_timing_get_value(&st->vsync_len, 0);
 +
 +	vm->vah = st->vsync_pol_active_high;
 +	vm->hah = st->hsync_pol_active_high;
 +	vm->interlaced = st->interlaced;
 +	vm->doublescan = st->doublescan;
 +
 +	return 0;
 +}
 +
 +int of_get_videomode(struct device_node *np, struct videomode *vm, int index)
 +{
 +	struct display_timings *disp;
 +	int ret = 0;
 +
 +	disp = of_get_display_timing_list(np);
 +
 +	if (!disp) {
 +		pr_err("%s: no timings specified\n", __func__);
 +		return -EINVAL;
 +	}
 +
 +	if (index == OF_DEFAULT_TIMING)
 +		index = disp->default_timing;
 +
 +	ret = videomode_from_timing(disp, vm, index);
 +
 +	if (ret)
 +		return ret;
 +
 +	display_timings_release(disp);
 +
 +	if (!vm) {
 +		pr_err("%s: could not get videomode %d\n", __func__, index);
 +		return -EINVAL;
 +	}
 +
 +	return 0;
 +}
 +EXPORT_SYMBOL_GPL(of_get_videomode);
 +
 +#if defined(CONFIG_DRM)
 +int videomode_to_display_mode(struct videomode *vm,
 +			      struct drm_display_mode *dmode)
 +{
 +	memset(dmode, 0, sizeof(*dmode));
 +
 +	dmode->hdisplay = vm->hactive;
 +	dmode->hsync_start = dmode->hdisplay + vm->hfront_porch;
 +	dmode->hsync_end = dmode->hsync_start + vm->hsync_len;
 +	dmode->htotal = dmode->hsync_end + vm->hback_porch;
 +
 +	dmode->vdisplay = vm->vactive;
 +	dmode->vsync_start = dmode->vdisplay + vm->vfront_porch;
 +	dmode->vsync_end = 

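The quoted conversion function is cut off above, but the blanking arithmetic it performs is simple enough to sketch standalone. The struct definitions below are simplified stand-ins (not the kernel types), with field names following the quoted patch:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structs in the quoted patch. */
struct videomode_model {
	int hactive, hfront_porch, hsync_len, hback_porch;
	int vactive, vfront_porch, vsync_len, vback_porch;
};

struct display_mode_model {
	int hdisplay, hsync_start, hsync_end, htotal;
	int vdisplay, vsync_start, vsync_end, vtotal;
};

/* Porch and sync lengths accumulate into the sync positions and totals,
 * mirroring what videomode_to_display_mode() does in the patch. */
static void to_display_mode(const struct videomode_model *vm,
			    struct display_mode_model *d)
{
	d->hdisplay = vm->hactive;
	d->hsync_start = d->hdisplay + vm->hfront_porch;
	d->hsync_end = d->hsync_start + vm->hsync_len;
	d->htotal = d->hsync_end + vm->hback_porch;

	d->vdisplay = vm->vactive;
	d->vsync_start = d->vdisplay + vm->vfront_porch;
	d->vsync_end = d->vsync_start + vm->vsync_len;
	d->vtotal = d->vsync_end + vm->vback_porch;
}
```

Each sync edge is the previous edge plus a porch or sync length, so htotal/vtotal end up as active + front porch + sync + back porch, matching the diagram in the binding document.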
Re: [PATCH 1/2 v6] of: add helper to parse display timings

2012-10-08 Thread Tomi Valkeinen
On Mon, 2012-10-08 at 14:04 +0200, Laurent Pinchart wrote:
 Hi Tomi,
 
 On Monday 08 October 2012 12:01:18 Tomi Valkeinen wrote:
  On Mon, 2012-10-08 at 10:25 +0200, Guennadi Liakhovetski wrote:
   In general, I might be misunderstanding something, but don't we have to
   distinguish between 2 types of information about display timings: (1) is
   defined by the display controller requirements, is known to the display
   driver and doesn't need to be present in timings DT. We did have some of
   these parameters in board data previously, because we didn't have proper
   display controller drivers... (2) is board specific configuration, and is
   such it has to be present in DT.
   
   In that way, doesn't interlaced belong to type (1) and thus doesn't need
   to be present in DT?
  
  As I see it, this DT data is about the display (most commonly LCD
  panel), i.e. what video mode(s) the panel supports. If things were done
  my way, the panel's supported timings would be defined in the driver for
  the panel, and DT would be left to describe board specific data, but
  this approach has its benefits.
 
 What about dumb DPI panels ? They will all be supported by a single driver, 
 would you have the driver contain information about all known DPI panels ? DT 
 seems a good solution to me in this case.

Yes, I would have a table in the driver for all the devices it supports,
which would describe the device specific parameters.

But I don't have a problem with DT solution. Both methods have their
pros and cons, and perhaps DT based solution is more practical.

 For complex panels where the driver will support a single (or very few) model 
 I agree that specifying the timings in DT isn't needed.
 
  Thus, if you connect an interlaced panel to your board, you need to tell
  the display controller that this panel requires interlace signal. Also,
  pixel clock source doesn't make sense in this context, as this doesn't
  describe the actual used configuration, but only what the panel
  supports.
  
  Of course, if this is about describing the hardware, the default-mode
  property doesn't really fit in...
 
 Maybe we should rename it to native-mode then ?

Hmm, right, if it means native mode, then it is describing the hardware.
But would it make sense to require that the native mode is the first
mode in the list, then? This would make the separate
default-mode/native-mode property not needed.

 Tomi





Re: [PATCH 1/3] drm/omap: add support for ARCH_MULTIPLATFORM

2012-10-29 Thread Tomi Valkeinen
On 2012-10-29 10:31, Rob Clark wrote:
 From: Rob Clark r...@ti.com
 
 Remove usage of plat/cpu.h and get information from platform data
 instead.  This enables omapdrm to be built with ARCH_MULTIPLATFORM.
 
 Signed-off-by: Rob Clark r...@ti.com
 ---
  arch/arm/mach-omap2/drm.c|7 +++
  drivers/staging/omapdrm/Kconfig  |2 +-
  drivers/staging/omapdrm/omap_dmm_tiler.h |1 -
  drivers/staging/omapdrm/omap_drv.c   |6 +-
  drivers/staging/omapdrm/omap_drv.h   |2 ++
  include/linux/platform_data/omap_drm.h   |1 +
  6 files changed, 16 insertions(+), 3 deletions(-)
 
 diff --git a/arch/arm/mach-omap2/drm.c b/arch/arm/mach-omap2/drm.c
 index 72e0f01b..49a7ffb 100644
 --- a/arch/arm/mach-omap2/drm.c
 +++ b/arch/arm/mach-omap2/drm.c
 @@ -23,15 +23,20 @@
  #include <linux/init.h>
  #include <linux/platform_device.h>
  #include <linux/dma-mapping.h>
 +#include <linux/platform_data/omap_drm.h>
  
  #include <plat/omap_device.h>
  #include <plat/omap_hwmod.h>
 +#include <plat/cpu.h>
  
  #if defined(CONFIG_DRM_OMAP) || defined(CONFIG_DRM_OMAP_MODULE)
  
 +static struct omap_drm_platform_data platform_data;
 +
  static struct platform_device omap_drm_device = {
   .dev = {
   .coherent_dma_mask = DMA_BIT_MASK(32),
 + .platform_data = platform_data,
   },
   .name = "omapdrm",
   .id = 0,
 @@ -52,6 +57,8 @@ static int __init omap_init_drm(void)
   oh-name);
   }
  
 + platform_data.omaprev = GET_OMAP_REVISION();
 +
   return platform_device_register(omap_drm_device);
  
  }
 diff --git a/drivers/staging/omapdrm/Kconfig b/drivers/staging/omapdrm/Kconfig
 index 81a7cba..b724a41 100644
 --- a/drivers/staging/omapdrm/Kconfig
 +++ b/drivers/staging/omapdrm/Kconfig
 @@ -2,7 +2,7 @@
  config DRM_OMAP
  	tristate "OMAP DRM"
  	depends on DRM && !CONFIG_FB_OMAP2
 -	depends on ARCH_OMAP2PLUS
 +	depends on ARCH_OMAP2PLUS || ARCH_MULTIPLATFORM

If you remove the omap include dependencies, is there any reason to keep
ARCH_OMAP2PLUS here? And if you remove that, you don't need
ARCH_MULTIPLATFORM either.

omapdrm is not a driver for OMAP, even if the name suggests so. It's a
driver for display subsystem hardware that happens to be used in OMAP
(with the help of the omapdss driver), and for the TILER memory system
used in OMAP.

I just recently removed the omap dependencies from the omapdss driver,
and it now compiles fine on an x86 config also.

 Tomi






Re: [RFC 0/5] Generic panel framework

2012-10-31 Thread Tomi Valkeinen
On 2012-10-31 15:13, Laurent Pinchart wrote:

 OMAP SoC
 

 So here's first the SoC specific display nodes. OMAP has a DSS (display
 subsystem) block, which contains the following elements:

 - DISPC (display controller) reads the pixels from memory and outputs
 them using specified video timings. DISPC has three outputs, LCD0, LCD1
 and TV. These are SoC internal outputs, they do not go outside the SoC.

 - DPI gets its data from DISPC's LCD0, and outputs MIPI DPI (parallel
 RBG)

 - Two independent DSI modules, which get their data from LCD0 or LCD1,
 and output MIPI DSI (a serial two-way video bus)

 - HDMI, gets data from DISPC's TV output and outputs HDMI

 / {
  ocp {
  dss {
  dispc {
  dss-lcd0: output@0 {
  };

  dss-lcd1: output@1 {
  };

  dss-tv: output@2 {
  };
  };

  dpi: dpi {
  video-source = <&dss-lcd0>;
  };

  dsi0: dsi@0 {
  video-source = <&dss-lcd0>;
  };

  dsi1: dsi@1 {
  video-source = <&dss-lcd1>;
  };

  hdmi: hdmi {
  video-source = <&dss-tv>;
  };
  };
  };
 };

 I have defined all the relevant nodes, and video-source property is used
 to point to the source for video data. I also define aliases for the SoC
 outputs so that panels can use them.

 One thing to note is that the video sources for some of the blocks, like
 DSI, are not hardcoded in the HW, so dsi0 could get its data from LCD0
 or LCD1.
 
 What about the sources that are hardwired in hardware ? Shouldn't those be 
 hardcoded in the driver instead ?

Even if both the DSI and the DISPC are parts of OMAP DSS, and part of
the SoC, they are separate IPs. We should look at them the same way we'd
consider chips that are outside the SoC.

So things that are internal to a device can (and I think should) be
hardcoded in the driver, but integration details, the connections
between the IPs, etc, should be described in the DT data.

Then again, we do have (and need) a driver for the dss node in the
above DT data. This dss represents dss_core, a glue IP that contains
the rest of the DSS blocks and muxes and such. It could be argued that
this dss_core driver does indeed know the integration details, and thus
there's no need to represent them in the DT data.

However, I do think that we should represent the DISPC outputs with
generic display entities inside CPF, just like DSI and the panels. And
we do need to set the connections between these entities. So the
question is, do we get those connections from the DT data, or are they
hardcoded in the dss_core driver.

I don't currently have strong opinions on either direction. Both make
sense to me. But I think this is SoC driver implementation specific, and
the common panel framework doesn't need to force this to either
direction. Both should be possible from CPF's point of view.

 However, I don't think they are usually changed during runtime, and the dss
 driver cannot change them independently for sure (meaning that some upper
 layer should tell it how to change the config). Thus I specify sane defaults
 here, but the board dts files can of course override the video sources.
 
 I'm not sure whether default settings like those really belong to the DT. I'm 
 no expert on that topic though.

I agree. But I also don't have a good solution how the driver would find
good choices for these settings...

 Another thing to note is that we have more outputs from OMAP than we have
 outputs from DISPC. This means that the same video source is used by
 multiple sinks (LCD0 used by DPI and DSI0). DPI and DSI0 cannot be used at
 the same time, obviously.
 
 It might not be really obvious, as I don't see what prevents DPI and DSI0 from 
 being used at the same time :-) Do they share physical pins ?

I think they do, but even if they didn't, there's just one source for
two outputs. So if the SoC design is such that the only video source for
DPI is LCD0, and the only video source for DSI0 is LCD0, and presuming
you can't send the video data to both destinations, then only one of DPI
and DSI0 can be enabled at the same time.

Even if the LCD0 could send the pixel stream to both DPI and DSI0, it'd
be interesting, because the original video timings and pixel clock are
programmed in the LCD0 output, and thus both DPI and DSI0 get the same
timings. If DPI would want to use some other mode, DSI would most likely
go nuts.

So my opinion is that we should only allow 1:1 connections between
sources and sinks. If a component has multiple outputs, and even if
those outputs give the exact same data, it should define 

Re: [PATCH 12/12] OMAPDSS: DPI: always use DSI PLL if available

2012-11-06 Thread Tomi Valkeinen
On 2012-11-05 16:21, Rob Clark wrote:
 On 11/05/2012 02:55 AM, Tomi Valkeinen wrote:
 But even then, choosing the manager is not easy, as whoever chooses the
 manager needs to observe all the possible displays used at the same
 time...
 
 Right. I was wondering if omapfb/omapdrm could understand the 'all
 possible displays information' better compared to a panel's probe.
 
 Even omapdrm/omapfb can't be perfect because we could insert a panel
 driver module at any time, and omapfb/omapdrm may miss that out.
 True, omapdrm/fb may have a better idea. It's still unclear though.
 Currently we have quite strict order in the sequence the modules need to
 be loaded, which is quite bad and causes issues. We should make things
 more dynamic, so that the initialization of the drivers could happen
 more freely.

 But that creates more problems: when booting up, omapfb starts. But
 omapfb can't know if all the panel drivers have already been loaded.
 omapfb may see that DVI is the default display, but what should it do if
 DVI doesn't have a driver yet? It could wait, but perhaps the driver for
 DVI will never even be loaded.
 
 The encoder which is connected to the crtc (manager) is picked by
 combination of encoder-possible_crtcs bitmask and
 connector-best_encoder().  We could keep things limited so that the
 association of crtc to encoder (manager to output, roughly) never
 changes, but this isn't really the right thing to do.  It is better that
 the dssdev not rely on knowing the manager it is attached to at probe
 time, but instead grab resources more dynamically.
 
 Also, at the moment we don't really have any notification to userspace
 about new encoders/connectors showing up (or conversely, being
 removed).  Only about existing connectors being plugged/unplugged.  The
 closest analogy is perhaps the USB display devices, but even there it is
 only the entire drm device that is plugged/unplugged.  And TBH I don't
 really see the point in supporting panel drivers being dynamically
 loaded.  It isn't like someone is dynamically soldering on a new display
 connector to some board that is running.  I think omapfb or omapdrm
 probe should trigger registering the compiled-in panel drivers, so that
 it can be sure that the dssdev's pop up before it goes and creates drm
 connector objects.  Currently we have to hack around this in omapdrm
 with late_initcall() to ensure the panel drivers are probed first, but
 that is an ugly hack that I'd like to get rid of.

We have panel devices and panel drivers, each of which can appear at any
time. Both are needed for the panel probe to happen. If we don't support
device hotplugging (dynamic creation of devices), we need to use
late_initcall for omapfb/drm. At least I don't see any other option.

You say that omapdrm should trigger registering of the drivers. How
would that work? Do you mean that the panel drivers would register
themselves to some common list, and omapdrm would go through this list
when drm is loaded, calling probe for the items in the list? I guess
that's doable, but... It's not how kernel drivers are supposed to work,
and so it doesn't sound like a very clean approach to me.
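For illustration, the "common list" scheme being questioned here might look like the following toy sketch in plain C. All names are hypothetical; this is not real omapdrm code:

```c
#include <assert.h>

/* Toy sketch of the "register into a common list, walk it at DRM load
 * time" idea discussed above -- hypothetical names, not real omapdrm
 * code. */
#define MAX_PANEL_DRIVERS 8

struct panel_driver_model {
	const char *name;
	int probed;
};

static struct panel_driver_model *registered[MAX_PANEL_DRIVERS];
static int nregistered;

/* Each panel driver would call this from its module init, whenever
 * that happens to run. */
static int panel_driver_register(struct panel_driver_model *drv)
{
	if (nregistered >= MAX_PANEL_DRIVERS)
		return -1;
	registered[nregistered++] = drv;
	return 0;
}

/* The DRM driver walks the list once it loads, probing everything that
 * registered earlier -- sidestepping the module init-order problem. */
static int probe_all_panels(void)
{
	int n = 0;

	for (int i = 0; i < nregistered; i++) {
		registered[i]->probed = 1;
		n++;
	}
	return n;
}
```

The scheme works regardless of which module loads first, which is exactly why it was proposed; the objection above is that this side-steps the driver model rather than working with it.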

I think we should support proper hotplugging of the panels. This would
fix the problem about init order, but it would also give us device
hotplug support. Obviously nobody is going to solder panel to a running
board, but I don't see any reason why panels, or, more likely, panels on
add-on boards (like the capes being discussed on the omap ml) would not
be hotpluggable using whatever connector is used in the particular use case.

And even if we don't support removing of the devices, things like the
add-on capes could cause the panel on the cape to be identified at some
late time (the panel is not described in the board file or DT data, but
found at runtime depending on the ID of the cape). This would add
another step to the init sequence that should be just right, if we don't
support hotplug.

Yes, I know it's not simple =). And I'm fine with simpler approach for
the time being, but I'd like full hotplug to be the future goal. At
least the common panel framework should not create restrictions about
this, even if drm wouldn't allow device hotplug.

 Tomi



Re: [PATCH 12/12] OMAPDSS: DPI: always use DSI PLL if available

2012-11-07 Thread Tomi Valkeinen
On 2012-11-07 16:32, Rob Clark wrote:
 On Wed, Nov 7, 2012 at 4:01 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:

 Hotplugging is not some abstract future scenario, we already have
 hardware that could use it. For example, omap3 SDP board has a
 switchable output to DVI or LCD panel. In this case we know what the two
 options are, but the disabled component is still effectively removed
 from the system, and plugged back in when it's enabled.
 
 I would look at this as two different connectors which can not be used
 at the same time.  You have this scenario with desktop graphics cards.

Yes, that's an option with fixed amount of display devices. But doesn't
work for capes.

 Hotplug is not a high priority item, but I do wish we get it supported
 in common panel framework. Then it's at least possible to extend drm in
 the future to support it.



 Anyway, this makes me wonder... omapdrm currently maps the elements of
 the whole video pipeline to drm elements (encoder, connector, etc).
 Would it make more sense to just map the DISPC to these drm elements?
 Connector would then be the output from DISPC.
 
 I think:
 
   plane-overlay
   crtc-manager
 
 is pretty clear.  And really
 
   encoder-output
 
 should be the way it is..  on the branch w/ omapdss/omapdrm kms

I'm not so sure. The output (dpi/dsi/hdmi...) is the second step in our
chain. The primary output is in the DISPC module, the overlay manager.
That's where the timings, pixel clock, etc. are programmed. The second
step, our output, is really a converter IP. It receives the primary
output, converts it and outputs something else. Just like an external
converter chip would do.

And the output can be quite a bit anything. For example, with DBI or DSI
command mode outputs we don't have any of the conventional video
timings. It doesn't make sense to program, say, video blanking periods
to those outputs. But even with DBI and DSI we do have video blanking
periods in the DISPC's output, the ovl mgr.

Of course, at the end of the chain we have a panel that uses normal
video timings (well, most likely but not necessarily), and so we could
program those timings at the end of the chain, in the block before the
panel. But even then the encoder doesn't really map to the DSS's output
block, as the DSS's output block may not have the conventional timings
(like DBI), or they may be something totally different than what we get
in the end of the chain to the panel.

So I think mapping encoder to output will not work with multiple display
blocks in a chain. Thus I'd see the encoder would better match the
DISPC's output, or alternatively perhaps the block which is just before
the panel (whatever that is, sometimes it can be OMAP's DSI/HDMI/etc).
However, the latter may be a bit strange as the block could be an
external component, possibly hotpluggable.

 re-write, this is how it is for plane/crtc, except for now:
 
   encoder+connector-dssdev
 
 Basically the encoder is doing the control stuff (power on/off, set
 timings, etc), and the connector is only doing non control stuff
 (detect, reading edid, etc).
 
 But I think this will probably change a bit as CFP comes into the
 picture.  Currently the drm connector is somewhat a passive element,
 but I think this will have to change a bit w/ CFP.
 
 This would map the drm elements to the static hardware blocks, and the
 meaning of those blocks would be quite similar to what they are in the
 desktop world (I guess).

 The panel driver, the external chips, and the DSS internal output blocks
 (dsi, dpi, ...) would be handled separately from those drm elements. The
 DSS internal blocks are static, of course, but they can be effectively
 considered the same way as external chips.
 
 I think dsi/dpi/etc map to encoder.  The big question is where the
 panel's fit.  But to userspace somehow this should look like
 connectors.  I think:
 
   encoder-output
   connector-panel
 
 could work.. although connector is less passive than KMS currently
 assumes.  And panel could really be a whole chain in the case of
 bridge chips, etc.  I don't know, maybe there are better ways.  But I
 think userspace really just wants to know which monitor which is
 basically connector.

Hmm yes. Well, even if we map encoder and connector to the ovl manager,
the userspace could see which monitor there is. Obviously we need to
make changes for that to work, but as a model it feels a lot more
natural to me than using output and panel for encoder and connector.

Perhaps it's wrong to say "map connector to ovl mgr". It would be more
like "this connector observes the chain connected to this ovl mgr", even
though the connector wouldn't observe any block in the chain directly.
Just observing the plug in/out status, etc.

But I think encoder really maps quite well directly to the output side
of overlay manager.

 The omapdrm driver needs of course to access those separate elements
 also, but that shouldn't be a problem. If omapdrm needs to call a
 function

Re: [PATCH 12/12] OMAPDSS: DPI: always use DSI PLL if available

2012-11-08 Thread Tomi Valkeinen
On 2012-11-07 21:18, Rob Clark wrote:
 On Wed, Nov 7, 2012 at 9:13 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:
 On 2012-11-07 16:32, Rob Clark wrote:
 On Wed, Nov 7, 2012 at 4:01 AM, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:

 Hotplugging is not some abstract future scenario, we already have
 hardware that could use it. For example, omap3 SDP board has a
 switchable output to DVI or LCD panel. In this case we know what the two
 options are, but the disabled component is still effectively removed
 from the system, and plugged back in when it's enabled.

 I would look at this as two different connectors which can not be used
 at the same time.  You have this scenario with desktop graphics cards.

 Yes, that's an option with fixed amount of display devices. But doesn't
 work for capes.
 
 Only if capes are hotpluggable.. otherwise probe what cape(s?) are
 present at boot time (is this possible to detect a cape from sw?), and
 create the associated connector(s).

Well, a cape can be anything. For beaglebone they have capes with
eeprom, and you can detect it.

The reason I'd like to have hotplug is that it would simplify panel
drivers in the case where we have multiple possible panels for the same
output, like the DVI/LCD case above for omap3 SDP.

If we don't have hotplug, then both DVI and LCD panel devices are
present at the same time, and they will share resources. In the minimum
they are sharing the video output, but more often than not they share
gpios/powers/etc.

It's normal that a driver will acquire resources for its device in its
probe, and thus we would have two drivers acquiring the same resources
at boot time, leading to the other driver failing. We currently manage
this by acquiring the resources late, only when the panel is being
enabled. But I think that's rather ugly.
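The resource-sharing workaround described here can be modelled in a few lines. This is a toy sketch in plain C with hypothetical names, not the actual omapdss code:

```c
#include <assert.h>

/* Toy model of the late-acquisition workaround described above: two
 * panel devices share one resource (e.g. a GPIO), so each grabs it only
 * in enable() rather than in probe(). Hypothetical names throughout. */
struct shared_resource { int owner; };	/* 0 = free */

struct panel_model {
	int id;				/* non-zero panel identifier */
	struct shared_resource *res;
	int enabled;
};

/* Acquire the shared resource late, at enable time, so that both panel
 * drivers can probe successfully even though only one can run. */
static int panel_enable(struct panel_model *p)
{
	if (p->res->owner != 0 && p->res->owner != p->id)
		return -1;		/* the other panel holds it */
	p->res->owner = p->id;
	p->enabled = 1;
	return 0;
}

static void panel_disable(struct panel_model *p)
{
	p->enabled = 0;
	p->res->owner = 0;		/* release for the other panel */
}
```

Both probes succeed, and the conflict only surfaces when the second panel is enabled while the first still holds the resource; that deferred failure is the ugliness being complained about.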

It would be much cleaner if the panel device does not exist at all if
the panel is disconnected, and is created only when it is connected.
This of course creates the problem of who is responsible for creating
the panel device, and what triggers it. I think that's case specific,
and for capes, it'd be the cape driver.

But then again, I guess it's acceptable that we don't allow changing the
plugged-in panels at runtime. The user would have to select them with
kernel parameters or such. I guess this would be ok for capes and
development boards. I'm not aware of a production board that would
switch panels at runtime, although I know these were on the table in Nokia.

 Anyways, I think we are stretching a bit hard for use cases for
 hot-pluggable panels..  I just prefer to ignore hotplug for now and
 come back to it when there is a more legitimate use-case.

Ok, fair enough. But let's keep hotplug in mind, and if we're going to
create code that would make hotplug impossible to implement, let's stop
for a moment and think if we can do that in some other way.

 I'm not so sure. The output (dpi/dsi/hdmi...) is the second step in our
 chain. The primary output is in the DISPC module, the overlay manager.
 That's where the timings, pixel clock, etc. are programmed. The second
 step, our output, is really a converter IP. It receives the primary
 output, converts it and outputs something else. Just like an external
 converter chip would do.

 
 the timings, pixel clock, vblank/framedone irqs, that all maps to crtc.
 
 encoder == converter, so I think this fits.  An encoder takes pixel
 data from a CRTC and converts it to a format suitable for any attached
 connectors (from drm docbook)

Oh, ok. Then what you say makes sense. I thought the encoder contains the
timings, as you said previously: "Basically the encoder is doing the
control stuff (power on/off, set timings, etc)".

In the case we have a long chain of display blocks, encoder could cover
all of them, not just the output block of OMAP? I don't know where the
panel goes, though.

 And the output can be quite a bit anything. For example, with DBI or DSI
 command mode outputs we don't have any of the conventional video
 timings. It doesn't make sense to program, say, video blanking periods
 to those outputs. But even with DBI and DSI we do have video blanking
 periods in the DISPC's output, the ovl mgr.
 
 the encoder has mode_fixup() which can alter the timings that end up
 getting set, which might be a way to account for this.  I guess really
 it should be the panel driver that is telling the encoder what
 adjusted timings to give to the crtc.. so the panel driver doesn't
 quite map to connector.

The panel can't say what timings to give to the crtc; it depends on what's
between the crtc and the panel. In the case of OMAP DSI, the dsi driver
needs to specify the timings for the crtc, based on the panel.

 Tomi




signature.asc
Description: OpenPGP digital signature
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH] drm/omap: use omapdss low level API

2012-11-19 Thread Tomi Valkeinen
On 2012-11-16 14:19, Greg KH wrote:
 On Thu, Nov 15, 2012 at 06:00:58PM -0600, Rob Clark wrote:
 This patch changes the omapdrm KMS to bypass the omapdss compat
 layer and use the core omapdss API directly.  This solves some layering
 issues that would cause unpin confusion vs GO bit status, because we
 would not know whether a particular pageflip or overlay update has hit
 the screen or not.  Now instead we explicitly manage the GO bits in
 dispc and handle the vblank/framedone interrupts ourselves so that we
 always know which buffers are being scanned out at any given time, and
 so on.

 As an added bonus, we no longer leave the last overlay buffer pinned
 when the display is disabled, and have been able to add the previously
 missing vblank event handling.

 Signed-off-by: Rob Clark robdcl...@gmail.com
 ---
 Note: this patch applies on top of staging-next plus the OMAPDSS:
 create compat layer patch series:

 http://comments.gmane.org/gmane.linux.ports.arm.omap/89435
 
 Hm, I can't take that patch set in staging-next, so how should this go
 in?

I wonder, could the omapdrm part go in for -rc2? It's only a driver in
staging. I think that would be the easiest way. Perhaps it'd be even
possible to split the patch into two, of which the first half would not
depend on omapdss, but would prepare omapdrm for the change, thus making
the patch for -rc2 smaller.

Otherwise it's more tricky... Either wait for 3.9, or get omapdss,
omapfb and omapdrm merged via the same tree. But which tree would that be?
omapdss and omapfb usually go through the fbdev tree. I don't know what its
maintainer thinks of merging a drm driver. And if the omapdrm depends on
new drm changes, I guess fbdev is out of the question.

 Tomi






Re: [PATCH v12 1/6] video: add display_timing and videomode

2012-11-21 Thread Tomi Valkeinen
Hi,

On 2012-11-20 17:54, Steffen Trumtrar wrote:
 Add display_timing structure and the corresponding helper functions. This allows
 the description of a display via its supported timing parameters.
 
 Every timing parameter can be specified as a single value or a range
 <min typ max>.
 
 Also, add helper functions to convert from display timings to a generic
 videomode structure. This videomode can then be converted to the corresponding
 subsystem mode representation (e.g. fb_videomode).

Sorry for reviewing this so late.

One thing I'd like to see is some explanation of the structs involved.
For example, in this patch you present structs videomode, display_timing
and display_timings without giving any hint what they represent.

I'm not asking for you to write a long documentation, but perhaps the
header files could include a few lines of comments above the structs,
explaining the idea.

 +void display_timings_release(struct display_timings *disp)
 +{
 +	if (disp->timings) {
 +		unsigned int i;
 +
 +		for (i = 0; i < disp->num_timings; i++)
 +			kfree(disp->timings[i]);
 +		kfree(disp->timings);
 +	}
 +	kfree(disp);
 +}
 +EXPORT_SYMBOL_GPL(display_timings_release);

Perhaps this will become clearer after reading the following patches,
but it feels a bit odd to add a release function, without anything in
this patch that would actually allocate the timings.

 diff --git a/drivers/video/videomode.c b/drivers/video/videomode.c
 new file mode 100644
 index 000..e24f879
 --- /dev/null
 +++ b/drivers/video/videomode.c
 @@ -0,0 +1,46 @@
 +/*
 + * generic display timing functions
 + *
 + * Copyright (c) 2012 Steffen Trumtrar s.trumt...@pengutronix.de, 
 Pengutronix
 + *
 + * This file is released under the GPLv2
 + */
 +
 +#include <linux/export.h>
 +#include <linux/errno.h>
 +#include <linux/display_timing.h>
 +#include <linux/kernel.h>
 +#include <linux/videomode.h>
 +
 +int videomode_from_timing(const struct display_timings *disp,
 +			  struct videomode *vm, unsigned int index)
 +{
 +	struct display_timing *dt;
 +
 +	dt = display_timings_get(disp, index);
 +	if (!dt)
 +		return -EINVAL;
 +
 +	vm->pixelclock = display_timing_get_value(&dt->pixelclock, 0);
 +	vm->hactive = display_timing_get_value(&dt->hactive, 0);
 +	vm->hfront_porch = display_timing_get_value(&dt->hfront_porch, 0);
 +	vm->hback_porch = display_timing_get_value(&dt->hback_porch, 0);
 +	vm->hsync_len = display_timing_get_value(&dt->hsync_len, 0);
 +
 +	vm->vactive = display_timing_get_value(&dt->vactive, 0);
 +	vm->vfront_porch = display_timing_get_value(&dt->vfront_porch, 0);
 +	vm->vback_porch = display_timing_get_value(&dt->vback_porch, 0);
 +	vm->vsync_len = display_timing_get_value(&dt->vsync_len, 0);

Shouldn't all these calls get the typical value, with index 1?

 +
 +	vm->vah = dt->vsync_pol_active;
 +	vm->hah = dt->hsync_pol_active;
 +	vm->de = dt->de_pol_active;
 +	vm->pixelclk_pol = dt->pixelclk_pol;
 +
 +	vm->interlaced = dt->interlaced;
 +	vm->doublescan = dt->doublescan;
 +
 +	return 0;
 +}
 +
 +EXPORT_SYMBOL_GPL(videomode_from_timing);
 diff --git a/include/linux/display_timing.h b/include/linux/display_timing.h
 new file mode 100644
 index 000..d5bf03f
 --- /dev/null
 +++ b/include/linux/display_timing.h
 @@ -0,0 +1,70 @@
 +/*
 + * Copyright 2012 Steffen Trumtrar s.trumt...@pengutronix.de
 + *
 + * description of display timings
 + *
 + * This file is released under the GPLv2
 + */
 +
 +#ifndef __LINUX_DISPLAY_TIMINGS_H
 +#define __LINUX_DISPLAY_TIMINGS_H
 +
 +#include <linux/types.h>

What is this needed for? u32? I don't see it defined in types.h

 +
 +struct timing_entry {
 + u32 min;
 + u32 typ;
 + u32 max;
 +};
 +
 +struct display_timing {
 + struct timing_entry pixelclock;
 +
 + struct timing_entry hactive;
 + struct timing_entry hfront_porch;
 + struct timing_entry hback_porch;
 + struct timing_entry hsync_len;
 +
 + struct timing_entry vactive;
 + struct timing_entry vfront_porch;
 + struct timing_entry vback_porch;
 + struct timing_entry vsync_len;
 +
 + unsigned int vsync_pol_active;
 + unsigned int hsync_pol_active;
 + unsigned int de_pol_active;
 + unsigned int pixelclk_pol;
 + bool interlaced;
 + bool doublescan;
 +};
 +
 +struct display_timings {
 + unsigned int num_timings;
 + unsigned int native_mode;
 +
 + struct display_timing **timings;
 +};
 +
 +/*
 + * placeholder function until ranges are really needed
 + * the index parameter should then be used to select one of [min typ max]
 + */
 +static inline u32 display_timing_get_value(const struct timing_entry *te,
 +					   unsigned int index)
 +{
 +	return te->typ;
 +}

Why did you opt for a placeholder here? It feels trivial to me to have
support to get the min/typ/max value properly.

 +static inline struct display_timing 

Re: [PATCH v12 2/6] video: add of helper for videomode

2012-11-21 Thread Tomi Valkeinen
On 2012-11-20 17:54, Steffen Trumtrar wrote:

 +timings subnode
 +---
 +
 +required properties:
 + - hactive, vactive: Display resolution
 + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
 in
 +   lines
 + - clock-frequency: display clock in Hz
 +
 +optional properties:
 + - hsync-active: Hsync pulse is active low/high/ignored
 + - vsync-active: Vsync pulse is active low/high/ignored
 + - de-active: Data-Enable pulse is active low/high/ignored
 + - pixelclk-inverted: pixelclock is inverted/non-inverted/ignored

Inverted relative to what? And isn't this a bool? The pixel clock is either
normal (whatever that is) or inverted. It can't be "not used".

I guess the normal case is that pixel data is driven on the rising edge of the
pixel clock? If that's common knowledge, I guess it doesn't need to be
mentioned. But I always have to verify from the documentation what "normal"
means on this particular panel/SoC =).

 + - interlaced (bool)
 + - doublescan (bool)
 +
 +All the optional properties that are not bool follow the following logic:
 +1: high active
 +0: low active
 +omitted: not used on hardware

Perhaps it's obvious, but no harm being explicit: mention that the bool
properties are off if omitted.

And I didn't read the rest of the patches yet, so perhaps this is
already correct, but as I think this framework is usable without DT
also, the meanings of the fields in the structs should be explained in
the header files also in case they are not obvious.

 +Example:
 +
 +	display-timings {
 +		native-mode = <&timing0>;
 +		timing0: 1920p24 {

This should still be 1080p24, not 1920p24 =).

 +			/* 1920x1080p24 */
 +			clock-frequency = <5200>;
 +			hactive = <1920>;
 +			vactive = <1080>;
 +			hfront-porch = <25>;
 +			hback-porch = <25>;
 +			hsync-len = <25>;
 +			vback-porch = <2>;
 +			vfront-porch = <2>;
 +			vsync-len = <2>;
 +			hsync-active = <1>;
 +		};
 +	};
 +

 diff --git a/include/linux/of_display_timings.h 
 b/include/linux/of_display_timings.h
 new file mode 100644
 index 000..2b4fa0a
 --- /dev/null
 +++ b/include/linux/of_display_timings.h
 @@ -0,0 +1,20 @@
 +/*
 + * Copyright 2012 Steffen Trumtrar s.trumt...@pengutronix.de
 + *
 + * display timings of helpers
 + *
 + * This file is released under the GPLv2
 + */
 +
 +#ifndef __LINUX_OF_DISPLAY_TIMINGS_H
 +#define __LINUX_OF_DISPLAY_TIMINGS_H
 +
 +#include <linux/display_timing.h>
 +#include <linux/of.h>

No need to include these, just add struct ...;.

 +#define OF_USE_NATIVE_MODE -1
 +
 +struct display_timings *of_get_display_timings(const struct device_node *np);
 +int of_display_timings_exists(const struct device_node *np);
 +
 +#endif
 diff --git a/include/linux/of_videomode.h b/include/linux/of_videomode.h
 new file mode 100644
 index 000..4de5fcc
 --- /dev/null
 +++ b/include/linux/of_videomode.h
 @@ -0,0 +1,18 @@
 +/*
 + * Copyright 2012 Steffen Trumtrar s.trumt...@pengutronix.de
 + *
 + * videomode of-helpers
 + *
 + * This file is released under the GPLv2
 + */
 +
 +#ifndef __LINUX_OF_VIDEOMODE_H
 +#define __LINUX_OF_VIDEOMODE_H
 +
 +#include <linux/videomode.h>
 +#include <linux/of.h>

Same here.

 +int of_get_videomode(const struct device_node *np, struct videomode *vm,
 +  int index);
 +
 +#endif /* __LINUX_OF_VIDEOMODE_H */
 

 Tomi






Re: [PATCH v12 3/6] fbmon: add videomode helpers

2012-11-21 Thread Tomi Valkeinen
On 2012-11-20 17:54, Steffen Trumtrar wrote:

 diff --git a/include/linux/fb.h b/include/linux/fb.h
 index c7a9571..920cbe3 100644
 --- a/include/linux/fb.h
 +++ b/include/linux/fb.h
 @@ -14,6 +14,7 @@
  #include <linux/backlight.h>
  #include <linux/slab.h>
  #include <asm/io.h>
 +#include <linux/videomode.h>

No need for this, just add struct xxx;.

  struct vm_area_struct;
  struct fb_info;
 @@ -714,6 +715,11 @@ extern void fb_destroy_modedb(struct fb_videomode 
 *modedb);
  extern int fb_find_mode_cvt(struct fb_videomode *mode, int margins, int rb);
  extern unsigned char *fb_ddc_read(struct i2c_adapter *adapter);
  
 +#if IS_ENABLED(CONFIG_VIDEOMODE)
 +extern int fb_videomode_from_videomode(const struct videomode *vm,
 +struct fb_videomode *fbmode);
 +#endif
 +
  /* drivers/video/modedb.c */
  #define VESA_MODEDB_SIZE 34
  extern void fb_var_to_videomode(struct fb_videomode *mode,
 






Re: [PATCH v12 4/6] fbmon: add of_videomode helpers

2012-11-21 Thread Tomi Valkeinen
On 2012-11-20 17:54, Steffen Trumtrar wrote:
 Add helper to get fb_videomode from devicetree.
 
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 Reviewed-by: Thierry Reding thierry.red...@avionic-design.de
 Acked-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Philipp Zabel p.za...@pengutronix.de
 Reviewed-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  drivers/video/fbmon.c |   42 +-
  include/linux/fb.h|7 +++
  2 files changed, 48 insertions(+), 1 deletion(-)


 diff --git a/include/linux/fb.h b/include/linux/fb.h
 index 920cbe3..41b5e49 100644
 --- a/include/linux/fb.h
 +++ b/include/linux/fb.h
 @@ -15,6 +15,8 @@
  #include <linux/slab.h>
  #include <asm/io.h>
  #include <linux/videomode.h>
 +#include <linux/of.h>
 +#include <linux/of_videomode.h>

Guess what? =)

To be honest, I don't know what the general opinion is about including
header files from header files. But I always leave them out if they are
not strictly needed.

  struct vm_area_struct;
  struct fb_info;
 @@ -715,6 +717,11 @@ extern void fb_destroy_modedb(struct fb_videomode 
 *modedb);
  extern int fb_find_mode_cvt(struct fb_videomode *mode, int margins, int rb);
  extern unsigned char *fb_ddc_read(struct i2c_adapter *adapter);
  
 +#if IS_ENABLED(CONFIG_OF_VIDEOMODE)
 +extern int of_get_fb_videomode(const struct device_node *np,
 +struct fb_videomode *fb,
 +unsigned int index);
 +#endif
  #if IS_ENABLED(CONFIG_VIDEOMODE)
  extern int fb_videomode_from_videomode(const struct videomode *vm,
  struct fb_videomode *fbmode);

Do you really need these #ifs in the header files? They do make it look
a bit messy. If somebody uses the functions and CONFIG_VIDEOMODE is not
enabled, he'll get a linker error anyway.

 Tomi






Re: [PATCH v12 5/6] drm_modes: add videomode helpers

2012-11-21 Thread Tomi Valkeinen
On 2012-11-20 17:54, Steffen Trumtrar wrote:
 Add conversion from videomode to drm_display_mode
 
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 Reviewed-by: Thierry Reding thierry.red...@avionic-design.de
 Acked-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Philipp Zabel p.za...@pengutronix.de
 Reviewed-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  drivers/gpu/drm/drm_modes.c |   37 +
  include/drm/drmP.h  |6 ++
  2 files changed, 43 insertions(+)
 

 diff --git a/include/drm/drmP.h b/include/drm/drmP.h
 index 3fd8280..de2f6cf 100644
 --- a/include/drm/drmP.h
 +++ b/include/drm/drmP.h
 @@ -56,6 +56,7 @@
  #include <linux/cdev.h>
  #include <linux/mutex.h>
  #include <linux/slab.h>
 +#include <linux/videomode.h>
  #if defined(__alpha__) || defined(__powerpc__)
  #include <asm/pgtable.h> /* For pte_wrprotect */
  #endif
 @@ -1454,6 +1455,11 @@ extern struct drm_display_mode *
  drm_mode_create_from_cmdline_mode(struct drm_device *dev,
 struct drm_cmdline_mode *cmd);
  
 +#if IS_ENABLED(CONFIG_VIDEOMODE)
 +extern int drm_display_mode_from_videomode(const struct videomode *vm,
 +struct drm_display_mode *dmode);
 +#endif
 +
  /* Modesetting support */
  extern void drm_vblank_pre_modeset(struct drm_device *dev, int crtc);
  extern void drm_vblank_post_modeset(struct drm_device *dev, int crtc);
 

And the same comments apply to this header file.

 Tomi






Re: [PATCH v12 6/6] drm_modes: add of_videomode helpers

2012-11-21 Thread Tomi Valkeinen
On 2012-11-20 17:54, Steffen Trumtrar wrote:
 Add helper to get drm_display_mode from devicetree.
 
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 Reviewed-by: Thierry Reding thierry.red...@avionic-design.de
 Acked-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Philipp Zabel p.za...@pengutronix.de
 Reviewed-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  drivers/gpu/drm/drm_modes.c |   35 ++-
  include/drm/drmP.h  |6 ++
  2 files changed, 40 insertions(+), 1 deletion(-)
 

 diff --git a/include/drm/drmP.h b/include/drm/drmP.h
 index de2f6cf..377280f 100644
 --- a/include/drm/drmP.h
 +++ b/include/drm/drmP.h
 @@ -56,6 +56,7 @@
  #include <linux/cdev.h>
  #include <linux/mutex.h>
  #include <linux/slab.h>
 +#include <linux/of.h>
  #include <linux/videomode.h>
  #if defined(__alpha__) || defined(__powerpc__)
  #include <asm/pgtable.h> /* For pte_wrprotect */
 @@ -1459,6 +1460,11 @@ drm_mode_create_from_cmdline_mode(struct drm_device 
 *dev,
  extern int drm_display_mode_from_videomode(const struct videomode *vm,
  struct drm_display_mode *dmode);
  #endif
 +#if IS_ENABLED(CONFIG_OF_VIDEOMODE)
 +extern int of_get_drm_display_mode(const struct device_node *np,
 +struct drm_display_mode *dmode,
 +unsigned int index);
 +#endif
  
  /* Modesetting support */
  extern void drm_vblank_pre_modeset(struct drm_device *dev, int crtc);

And the same comments here also.

 Tomi







Re: [PATCH v12 1/6] video: add display_timing and videomode

2012-11-21 Thread Tomi Valkeinen
On 2012-11-21 18:11, Steffen Trumtrar wrote:
 Hi,
 
 On Wed, Nov 21, 2012 at 01:37:08PM +0200, Tomi Valkeinen wrote:

 +#include <linux/types.h>

 What is this needed for? u32? I don't see it defined in types.h

 
 Yes, u32. What would be the right header for that if not types.h?

Yes, it looks like u32 comes via linux/types.h. It's defined elsewhere,
but linux/types.h sounds like the most reasonable header for it.

 Tomi






Re: [PATCH v12 4/6] fbmon: add of_videomode helpers

2012-11-21 Thread Tomi Valkeinen
On 2012-11-21 18:24, Steffen Trumtrar wrote:
 On Wed, Nov 21, 2012 at 02:49:30PM +0200, Tomi Valkeinen wrote:

 @@ -715,6 +717,11 @@ extern void fb_destroy_modedb(struct fb_videomode 
 *modedb);
  extern int fb_find_mode_cvt(struct fb_videomode *mode, int margins, int 
 rb);
  extern unsigned char *fb_ddc_read(struct i2c_adapter *adapter);
  
 +#if IS_ENABLED(CONFIG_OF_VIDEOMODE)
 +extern int of_get_fb_videomode(const struct device_node *np,
 +  struct fb_videomode *fb,
 +  unsigned int index);
 +#endif
  #if IS_ENABLED(CONFIG_VIDEOMODE)
  extern int fb_videomode_from_videomode(const struct videomode *vm,
struct fb_videomode *fbmode);

 Do you really need these #ifs in the header files? They do make it look
 a bit messy. If somebody uses the functions and CONFIG_VIDEOMODE is not
 enabled, he'll get a linker error anyway.

 
 Well, I don't remember at the moment who requested this, but it was not my
 idea to put them there. So, this is a matter of style I guess.
 But maybe I understood that wrong.

Right, one reviewer says this way, and another says that way =).

With the header files I've made I only use #ifs with #else, when I want
to give a static inline empty/no-op implementation for the function in
case the feature is not compiled into the kernel.

As you said, matter of taste. Up to you.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2012-11-23 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:
 From: Laurent Pinchart laurent.pinchart+rene...@ideasonboard.com
 
 Hi everybody,
 
 Here's the second RFC of what was previously known as the Generic Panel
 Framework.

Nice work! Thanks for working on this.

I was doing some testing with the code, seeing how to use it in omapdss.
Here are some thoughts:

In your model the DSS gets the panel devices connected to it from
platform data. After the DSS and the panel drivers are loaded, DSS gets
a notification and connects DSS and the panel.

I think it's a bit limited way. First of all, it'll make the DT data a
bit more complex (although this is not a major problem). With your
model, you'll need something like:

soc-base.dtsi:

dss {
dpi0: dpi {
};
};

board.dts:

dpi0 {
	panel = <&dpi-panel>;
};

/ {
dpi-panel: dpi-panel {
...panel data...;
};
};

Second, it'll prevent hotplug, and even if real hotplug would not be
supported, it'll prevent cases where the connected panel must be found
dynamically (like reading ID from eeprom).

Third, it kinda creates a cyclical dependency: the DSS needs to know
about the panel and calls ops in the panel, and the panel calls ops in
the DSS. I'm not sure if this is an actual problem, but I usually find
it simpler if calls are done only in one direction.


What I suggest is take a simpler approach, something alike to how
regulators or gpios are used, even if slightly more complex than those:
the entity that has a video output (SoC's DSS, external chips) offers
that video output as resource. It doesn't know or care who uses it. The
user of the video output (panel, external chips) will find the video
output (to which it is connected in the HW) by some means, and will use
different operations on that output to operate the device.

This would give us something like the following DT data:

soc-base.dtsi:

dss {
dpi0: dpi {
};
};

board.dts:

/ {
dpi-panel: dpi-panel {
source = <&dpi0>;
...panel data...;
};
};

The panel driver would do something like this in its probe:

int dpi_panel_probe()
{
// Find the video source, increase ref
src = get_video_source_from_of("source");

// Reserve the video source for us. others can still get and
// observe it, but cannot use it as video data source.
// I think this should cascade upstream, so that after this call
// each video entity from the panel to the SoC's CRTC is
// reserved and locked for this video pipeline.
reserve_video_source(src);

// set DPI HW configuration, like DPI data lines. The
// configuration would come from panel's platform data
set_dpi_config(src, config);

// register this panel as a display.
register_display(this);
}


The DSS's dpi driver would do something like:

int dss_dpi_probe()
{
// register as a DPI video source
register_video_source(this);
}

A DSI-2-DPI chip would do something like:

int dsi2dpi_probe()
{
// get, reserve and config the DSI bus from SoC
src = get_video_source_from_of("source");
reserve_video_source(src);
set_dsi_config(src, config);

// register as a DPI video source
register_video_source(this);
}


Here we wouldn't have similar display_entity as you have, but video
sources and displays. Video sources are elements in the video pipeline,
and a video source is used only by the next downstream element. The last
element in the pipeline would not be a video source, but a display,
which would be used by the upper layer.

Video source's ops would deal with things related to the video bus in
question, like configuring data lanes, sending DSI packets, etc. The
display ops would be more high level things, like enable, update, etc.
Actually, I guess you could consider the display to represent and deal
with the whole pipeline, while video source deals with the bus between
two display entities.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2012-11-26 Thread Tomi Valkeinen
On 2012-11-23 21:56, Thierry Reding wrote:
 On Thu, Nov 22, 2012 at 10:45:31PM +0100, Laurent Pinchart wrote:
 [...]
 Display entities are accessed by driver using notifiers. Any driver can
 register a display entity notifier with the CDF, which then calls the 
 notifier
 when a matching display entity is registered. The reason for this 
 asynchronous
 mode of operation, compared to how drivers acquire regulator or clock
 resources, is that the display entities can use resources provided by the
 display driver. For instance a panel can be a child of the DBI or DSI bus
 controlled by the display device, or use a clock provided by that device. We
 can't defer the display device probe until the panel is registered and also
 defer the panel device probe until the display is registered. As most display
 drivers need to handle output devices hotplug (HDMI monitors for instance),
 handling other display entities through a notification system seemed to be 
 the
 easiest solution.

 Note that this brings a different issue after registration, as display
 controller and display entity drivers would take a reference to each other.
 Those circular references would make driver unloading impossible. One 
 possible
 solution to this problem would be to simulate an unplug event for the display
 entity, to force the display driver to release the display entities it uses. 
 We
 would need a userspace API for that though. Better solutions would of course
 be welcome.
 
 Maybe I don't understand all of the underlying issues correctly, but a
 parent/child model would seem like a better solution to me. We discussed
 this back when designing the DT bindings for Tegra DRM and came to the
 conclusion that the output resource of the display controller (RGB,
 HDMI, DSI or TVO) was the most suitable candidate to be the parent of
 the panel or display attached to it. The reason for that decision was
 that it keeps the flow of data or addressing of nodes consistent. So the
 chain would look something like this (on Tegra):
 
  CPU
  +-host1x
    +-dc
      +-rgb
      | +-panel
      +-hdmi
        +-monitor
 
 In a natural way this makes the output resource the master of the panel
 or display. From a programming point of view this becomes quite easy to
 implement and is very similar to how other busses like I2C or SPI are
 modelled. In device tree these would be represented as subnodes, while
 with platform data some kind of lookup could be done like for regulators
 or alternatively a board setup registration mechanism like what's in
 place for I2C or SPI.

You didn't explicitly say it, but I presume you are talking about the
device model for panels, not just how to refer to the outputs.

How would you deal with a, say, DPI panel that is controlled via I2C or
SPI? You can have the panel device be both a panel device, child of a
RGB output, and an i2c device.

The model you propose is currently used in omapdss, and while it seems
simple and logical, it's not that simple with panels/chips with separate
control and data busses.

I think it makes more sense to consider the device as a child of the
control bus. So a DPI panel controlled via I2C is an I2C device, and it
just happens to use a DPI video output as a resource (like it could use
a regulator, gpio, etc).

 Tomi






Re: [PATCHv15 2/7] video: add display_timing and videomode

2012-11-26 Thread Tomi Valkeinen
On 2012-11-26 11:07, Steffen Trumtrar wrote:

 +/*
 + * Subsystem independent description of a videomode.
 + * Can be generated from struct display_timing.
 + */
 +struct videomode {
 + u32 pixelclock; /* pixelclock in Hz */

I don't know if this is of any importance, but the linux clock framework
manages clock rates with unsigned long. Would it be better to use the
same type here?

 + u32 hactive;
 + u32 hfront_porch;
 + u32 hback_porch;
 + u32 hsync_len;
 +
 + u32 vactive;
 + u32 vfront_porch;
 + u32 vback_porch;
 + u32 vsync_len;
 +
 + u32 hah;/* hsync active high */
 + u32 vah;/* vsync active high */
 + u32 de; /* data enable */
 + u32 pixelclk_pol;

What values will these 4 have? Aren't these booleans?

The "data enable" comment should be "data enable active high".

The next patch's devtree binding documentation says the values
may be on/off/ignored. Is that represented here somehow, i.e. are the values
0/1/2 or so? If not, what does it mean if the value is left out of the
devtree data, meaning "not used on hardware"?

There's also a doubleclk value mentioned in the devtree bindings doc,
but not shown here.

I think this videomode struct is the one that display drivers are going
to use (?), in addition to the display_timing, so it would be good to
document it well.

 Tomi






Re: [PATCHv15 3/7] video: add of helper for display timings/videomode

2012-11-26 Thread Tomi Valkeinen
Hi,

On 2012-11-26 11:07, Steffen Trumtrar wrote:
 This adds support for reading display timings from DT into a struct
 display_timings. The of_display_timing implementation supports multiple
 subnodes. All children are read into an array, that can be queried.
 
 If no native mode is specified, the first subnode will be used.
 
 For cases where the graphics driver knows there can be only one
 mode description or where the driver only supports one mode, a helper
 function of_get_videomode is added, that gets a struct videomode from DT.
 
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 Signed-off-by: Philipp Zabel p.za...@pengutronix.de
 Acked-by: Stephen Warren swar...@nvidia.com
 Reviewed-by: Thierry Reding thierry.red...@avionic-design.de
 Acked-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Philipp Zabel p.za...@pengutronix.de
 Reviewed-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 Acked-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  .../devicetree/bindings/video/display-timing.txt   |  107 ++
  drivers/video/Kconfig  |   15 ++
  drivers/video/Makefile |2 +
  drivers/video/of_display_timing.c  |  219 
 
  drivers/video/of_videomode.c   |   54 +
  include/linux/of_display_timing.h  |   20 ++
  include/linux/of_videomode.h   |   18 ++
  7 files changed, 435 insertions(+)
  create mode 100644 Documentation/devicetree/bindings/video/display-timing.txt
  create mode 100644 drivers/video/of_display_timing.c
  create mode 100644 drivers/video/of_videomode.c
  create mode 100644 include/linux/of_display_timing.h
  create mode 100644 include/linux/of_videomode.h
 
 diff --git a/Documentation/devicetree/bindings/video/display-timing.txt 
 b/Documentation/devicetree/bindings/video/display-timing.txt
 new file mode 100644
 index 000..e238f27
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/video/display-timing.txt
 @@ -0,0 +1,107 @@
 +display-timing bindings
 +===
 +
 +display-timings node
 +
 +
 +required properties:
 + - none
 +
 +optional properties:
 + - native-mode: The native mode for the display, in case multiple modes are
 + provided. When omitted, assume the first node is the native.
 +
 +timing subnode
 +--
 +
 +required properties:
 + - hactive, vactive: display resolution
 + - hfront-porch, hback-porch, hsync-len: horizontal display timing parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: vertical display timing parameters 
 in
 +   lines
 + - clock-frequency: display clock in Hz
 +
 +optional properties:
 + - hsync-active: hsync pulse is active low/high/ignored
 + - vsync-active: vsync pulse is active low/high/ignored
 + - de-active: data-enable pulse is active low/high/ignored
 + - pixelclk-inverted: pixelclock is inverted (active on falling edge)/
 + non-inverted (active on rising edge)/
 +  ignored (ignore property)

I think hsync-active and vsync-active are clear, and commonly used, and
they are used for both drm and fb mode conversions in later patches.

de-active is not used in drm and fb mode conversions, but I think it's
also clear.

pixelclk-inverted is not used in the mode conversions. It's also a bit
unclear to me. What does it mean that pix clock is active on rising
edge? The pixel data is driven on rising edge? How about the sync
signals and DE, when are they driven? Does your HW have any settings
related to those?

OMAP has the invert pclk setting, but it also has a setting to define
when the sync signals are driven. The options are:
- syncs are driven on rising edge of pclk
- syncs are driven on falling edge of pclk
- syncs are driven on the opposite edge of pclk compared to the pixel data

For DE there's no setting, except the active high/low.

And if I'm not mistaken, if the optional properties are not defined,
they are not ignored, but left to the default 0. Which means active low,
or active on rising edge(?). I think it would be good to have an
undefined value for the properties.

 + - interlaced (bool): boolean to enable interlaced mode
 + - doublescan (bool): boolean to enable doublescan mode
 + - doubleclk (bool)

As I mentioned in the other mail, doubleclk is not used nor documented here.

 +All the optional properties that are not bool use the following logic:
 +1: high active
 +0: low active
 +omitted: not used on hardware
 +
 +There are different ways of describing the capabilities of a display. The
 +devicetree representation corresponds to the one commonly found in datasheets
 +for displays. If a display supports multiple signal timings, the native-mode
 +can be specified.
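
For illustration, a display-timings node following the binding above could look like this (the label and all timing values are invented, not taken from the patch):

```dts
display-timings {
	native-mode = <&timing0>;

	timing0: timing-800x480 {
		hactive = <800>;
		vactive = <480>;
		hfront-porch = <40>;
		hback-porch = <88>;
		hsync-len = <48>;
		vfront-porch = <13>;
		vback-porch = <32>;
		vsync-len = <3>;
		clock-frequency = <33500000>;
		hsync-active = <0>;	/* active low */
		vsync-active = <0>;	/* active low */
		/* de-active and pixelclk-inverted omitted: not used */
	};
};
```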

I have some of the same concerns for this series than with 

Re: [PATCHv15 3/7] video: add of helper for display timings/videomode

2012-11-26 Thread Tomi Valkeinen
On 2012-11-26 18:10, Steffen Trumtrar wrote:
 Hi,
 
 On Mon, Nov 26, 2012 at 04:38:36PM +0200, Tomi Valkeinen wrote:

 +optional properties:
 + - hsync-active: hsync pulse is active low/high/ignored
 + - vsync-active: vsync pulse is active low/high/ignored
 + - de-active: data-enable pulse is active low/high/ignored
 + - pixelclk-inverted: pixelclock is inverted (active on falling edge)/
 +   non-inverted (active on rising edge)/
 +ignored (ignore property)

 I think hsync-active and vsync-active are clear, and commonly used, and
 they are used for both drm and fb mode conversions in later patches.

 de-active is not used in drm and fb mode conversions, but I think it's
 also clear.

 pixelclk-inverted is not used in the mode conversions. It's also a bit
 unclear to me. What does it mean that pix clock is active on rising
 edge? The pixel data is driven on rising edge? How about the sync
 signals and DE, when are they driven? Does your HW have any settings
 related to those?

 
 Those are properties commonly found in display specs. That is why they are 
 here.
 If the GPU does not support the property it can be omitted.

So what does the pixelclk-inverted mean? Normally the SoC drives pixel
data on rising edge, and the panel samples it at falling edge? And
vice-versa for inverted? Or the other way around?

When is hsync/vsync set? On rising or falling edge of pclk?

My point here is that the pixelclk-inverted is not crystal clear thing,
like the hsync/vsync/de-active values are.

And while thinking about this, I realized that the meaning of
pixelclk-inverted depends on what component it is applied to. Presuming
normal pixclk means pixel data on rising edge, the meaning of that
depends on whether we consider the SoC or the panel. The panel needs to
sample the data on the other edge from the one the SoC uses to drive the
data.

Does the videomode describe the panel, or does it describe the settings
programmed to the SoC?

 OMAP has the invert pclk setting, but it also has a setting to define
 when the sync signals are driven. The options are:
 - syncs are driven on rising edge of pclk
 - syncs are driven on falling edge of pclk
 - syncs are driven on the opposite edge of pclk compared to the pixel data

 For DE there's no setting, except the active high/low.

 And if I'm not mistaken, if the optional properties are not defined,
 they are not ignored, but left to the default 0. Which means active low,
 or active on rising edge(?). I think it would be good to have a
 undefined value for the properties.

 
 Yes. As mentioned in my other mail, the intention of the omitted properties
 does not propagate properly. Omitted must be a value < 0, so it is clear at a
 later stage that this property shall not be used, and isn't unintentionally
 considered to be active low.

Ok. Just note that the values are currently stored into u32, and I don't
think using negative error values with u32 is a good idea.

 I have some of the same concerns with this series as with the
 interpreted power sequences (on fbdev): when you publish the DT
 bindings, it's somewhat final version then, and fixing it later will be
 difficult. Of course, video modes are much clearer than the power
 sequences, so it's possible there won't be any problems with the DT
 bindings.

 
 The binding is pretty much at the bare minimum after a lot of discussion about
 the properties. Even if the binding changes, I think it will rather grow than
 shrink. Take the doubleclock property for example. It got here mistakenly,
 because we had a display that has this feature.

Right. That's why I would leave the pixelclock-inverted out for now, if
we're not totally sure how it's defined. Of course, it could be just me
who is not understanding the pixclk-inverted =).

 However, I'd still feel safer if the series would be split to non-DT and
 DT parts. The non-DT parts could be merged quite easily, and people
 could start using them in their kernels. This should expose
 bugs/problems related to the code.

 The DT part could be merged later, when there's confidence that the
 timings are good for all platforms.

 Or, alternatively, all the non-common bindings (de-active, pck
 invert,...) that are not used for fbdev or drm currently could be left
 out for now. But I'd still prefer merging it in two parts.
 
 I don't say that I'm against it, but the whole reason for the series was
 getting the display timings from a DT into a graphics driver. And I think
 I remember seeing some other attempts at achieving this, but all specific
 to one special case. There is even already a mainline driver that uses an 
 older
 version of the DT bindings (vt8500-fb).

I think it'd be very useful even without DT bindings. But yes, I
understand your need for it.

You're now in v15 of the series. Are you sure v16 will be good enough to
freeze the DT bindings, if 15 previous versions weren't? =). Perhaps I'm
just overly cautious with DT

Re: [RFC v2 2/5] video: panel: Add DPI panel support

2012-11-28 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:

 +static void panel_dpi_release(struct display_entity *entity)
 +{
 + struct panel_dpi *panel = to_panel_dpi(entity);
 +
 + kfree(panel);
 +}
 +
 +static int panel_dpi_remove(struct platform_device *pdev)
 +{
 + struct panel_dpi *panel = platform_get_drvdata(pdev);
 +
 + platform_set_drvdata(pdev, NULL);
 + display_entity_unregister(&panel->entity);
 +
 + return 0;
 +}
 +
 +static int __devinit panel_dpi_probe(struct platform_device *pdev)
 +{
 + const struct panel_dpi_platform_data *pdata = pdev->dev.platform_data;
 + struct panel_dpi *panel;
 + int ret;
 +
 + if (pdata == NULL)
 + return -ENODEV;
 +
 + panel = kzalloc(sizeof(*panel), GFP_KERNEL);
 + if (panel == NULL)
 + return -ENOMEM;
 +
 + panel->pdata = pdata;
 + panel->entity.dev = &pdev->dev;
 + panel->entity.release = panel_dpi_release;
 + panel->entity.ops.ctrl = &panel_dpi_control_ops;
 +
 + ret = display_entity_register(&panel->entity);
 + if (ret < 0) {
 + kfree(panel);
 + return ret;
 + }
 +
 + platform_set_drvdata(pdev, panel);
 +
 + return 0;
 +}
 +
 +static const struct dev_pm_ops panel_dpi_dev_pm_ops = {
 +};
 +
 +static struct platform_driver panel_dpi_driver = {
 + .probe = panel_dpi_probe,
 + .remove = panel_dpi_remove,
 + .driver = {
 + .name = "panel_dpi",
 + .owner = THIS_MODULE,
 + .pm = &panel_dpi_dev_pm_ops,
 + },
 +};

I'm not sure of how the free/release works. The release func is called
when the ref count drops to zero. But... The object in question, the
panel_dpi struct which contains the display entity, is not only about
data, it's also about code located in this module.

So I don't see anything preventing this module from being unloaded while
some other component is holding a ref to the display entity. While it's
holding the ref, it's valid to call ops in the display entity, but the
code for the ops in this module is already unloaded.

I don't really know how the kref can be used properly in this use case...
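
For what it's worth, the usual kernel pattern for this is to pin the providing module while a reference is held; a rough, untested sketch along those lines (assuming an entity->owner field recorded at registration time, as in the later CDF code):

```c
/* Sketch only -- not from the CDF series. Pin the module that provides
 * the ops before handing out a reference, so its code cannot be unloaded
 * while a user may still call through entity->ops.
 */
struct display_entity *display_entity_get(struct display_entity *entity)
{
	if (entity == NULL)
		return NULL;

	/* Fails if the owning module is already on its way out. */
	if (!try_module_get(entity->owner))
		return NULL;

	kref_get(&entity->ref);
	return entity;
}

void display_entity_put(struct display_entity *entity)
{
	struct module *owner = entity->owner;

	kref_put(&entity->ref, display_entity_release);
	module_put(owner);
}
```

Even this only protects callers that take a reference first; it doesn't solve the case where the release function itself lives in the unloaded module.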

 Tomi




signature.asc
Description: OpenPGP digital signature
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [RFC v2 1/5] video: Add generic display entity core

2012-11-28 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:
 +/**
 + * display_entity_get_modes - Get video modes supported by the display entity
 + * @entity: The display entity
 + * @modes: Pointer to an array of modes
 + *
 + * Fill the modes argument with a pointer to an array of video modes. The 
 array
 + * is owned by the display entity.
 + *
 + * Return the number of supported modes on success (including 0 if no mode is
 + * supported) or a negative error code otherwise.
 + */
 +int display_entity_get_modes(struct display_entity *entity,
 +  const struct videomode **modes)
 +{
 + if (!entity->ops.ctrl || !entity->ops.ctrl->get_modes)
 + return 0;
 +
 + return entity->ops.ctrl->get_modes(entity, modes);
 +}
 +EXPORT_SYMBOL_GPL(display_entity_get_modes);
 +
 +/**
 + * display_entity_get_size - Get display entity physical size
 + * @entity: The display entity
 + * @width: Physical width in millimeters
 + * @height: Physical height in millimeters
 + *
 + * When applicable, for instance for display panels, retrieve the display
 + * physical size in millimeters.
 + *
 + * Return 0 on success or a negative error code otherwise.
 + */
 +int display_entity_get_size(struct display_entity *entity,
 + unsigned int *width, unsigned int *height)
 +{
 + if (!entity->ops.ctrl || !entity->ops.ctrl->get_size)
 + return -EOPNOTSUPP;
 +
 + return entity->ops.ctrl->get_size(entity, width, height);
 +}
 +EXPORT_SYMBOL_GPL(display_entity_get_size);

How do you envision these to be used with, say, DVI monitors with EDID
data? Should each panel driver, that manages a device with EDID, read
and parse the EDID itself? I guess that shouldn't be too difficult with
a common EDID lib, but that will only expose some of the information
found from EDID. Should the upper levels also have a way to get the raw
EDID data, in addition to funcs like above?
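
One possible shape for that, sketched on top of the ops above (get_edid is hypothetical, not part of this series), would be an extra op beside get_modes/get_size that returns the raw EDID block:

```c
/* Hypothetical: expose raw EDID in addition to parsed modes.
 * Returns the number of bytes copied into buf, or a negative error code.
 */
int display_entity_get_edid(struct display_entity *entity,
			    u8 *buf, size_t len)
{
	if (!entity->ops.ctrl || !entity->ops.ctrl->get_edid)
		return -EOPNOTSUPP;

	return entity->ops.ctrl->get_edid(entity, buf, len);
}
```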

 Tomi






Re: [RFC v2 3/5] video: display: Add MIPI DBI bus support

2012-11-30 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:
 From: Laurent Pinchart laurent.pinchart+rene...@ideasonboard.com
 
 Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  drivers/video/display/Kconfig|4 +
  drivers/video/display/Makefile   |1 +
 drivers/video/display/mipi-dbi-bus.c |  228 ++
  include/video/display.h  |5 +
  include/video/mipi-dbi-bus.h |  125 +++
  5 files changed, 363 insertions(+), 0 deletions(-)
  create mode 100644 drivers/video/display/mipi-dbi-bus.c
  create mode 100644 include/video/mipi-dbi-bus.h

I've been doing some omapdss testing with CDF and DSI, and I have some
thoughts about the bus stuff. I already told these to Laurent, but I'll
write them to the mailing list also for discussion.

So with the current CDF model we have separate control and video buses.
The control bus is represented as proper Linux bus, and video bus is
represented via custom display_entity. This sounds good on paper, and I
also agreed to this approach when we were planning CDF.

However, now I doubt that approach.

First, I want to list some examples of devices with different bus
configurations:

1) Panel without any control, only video bus
2) Panel with separate control and video buses, e.g. i2c for control,
DPI for video
3) Panel with the same control and video buses, like DSI or DBI.

The first one is simple, it's just a platform device. No questions there.

The second one can be a bit tricky. Say, if we have a panel controlled
via i2c, and DSI/DBI used for video. The problem here is that with the
current model, DSI/DBI would be represented as a real bus, for control.
But in this case there's only the video path.

So if all the DSI/DBI bus configuration is handled on the DSI/DBI
control bus side, how can it be handled with only the video bus? And if
we add the same bus configuration to the video bus side as we have on
control bus side, then we have duplicated the API, and it's also
somewhat confusing. I don't have any good suggestion for this.

Third one is kinda clear, but I feel slightly uneasy about it. In theory
we can have separate control and video buses, which use the same HW
transport. However, I feel that we'll have some trouble with the
implementation, as we'll then have two more or less independent users
for the HW transport. I can't really point out why this would not be
possible to implement, but I have a gut feeling that it will be
difficult, at least for DSI.

So I think my question is: what does it give us to have separate control
and video buses, and what does the Linux bus give us with the control bus?

I don't see us ever having a case where a device would use one of the
display buses only for control. So either the display bus is only used
for video, or it's used for both control and video. And the display bus
is always 1-to-1, so we're talking about really simple bus here.

I believe things would be much simpler if we just have one entity for
the display buses, which support both video and (when available)
control. What would be the downsides of this approach versus the current
CDF proposal?

 Tomi






Re: [PATCHv15 2/7] video: add display_timing and videomode

2012-12-08 Thread Tomi Valkeinen
On 2012-12-06 12:07, Grant Likely wrote:
 On Mon, 26 Nov 2012 16:39:58 +0100, Steffen Trumtrar 
 s.trumt...@pengutronix.de wrote:
 On Mon, Nov 26, 2012 at 02:37:26PM +0200, Tomi Valkeinen wrote:
 On 2012-11-26 11:07, Steffen Trumtrar wrote:

 +/*
 + * Subsystem independent description of a videomode.
 + * Can be generated from struct display_timing.
 + */
 +struct videomode {
 +  u32 pixelclock; /* pixelclock in Hz */

 I don't know if this is of any importance, but the linux clock framework
 manages clock rates with unsigned long. Would it be better to use the
 same type here?


 Hm, I don't know. Anyone? u32 should be large enough for a pixelclock.
 
 4GHz is a pretty large pixel clock. I have no idea how conceivable it is
 that hardware will get to that speed. However, if it will ever be
 larger, then you'll need to account for that in the DT binding so that
 the pixel clock can be specified using 2 cells.

I didn't mention the type because of the size of the field, but only
because to me it makes sense to use the same type for clock rates all
around the kernel. In many cases the value will be passed to clk_set_rate().

I can't see any real issues with u32, though.

 Tomi






Re: [PATCHv15 3/7] video: add of helper for display timings/videomode

2012-12-10 Thread Tomi Valkeinen
On 2012-12-07 16:12, Philipp Zabel wrote:
 Hi,
 
 Am Montag, den 26.11.2012, 18:56 +0200 schrieb Tomi Valkeinen:

 So what does the pixelclk-inverted mean? Normally the SoC drives pixel
 data on rising edge, and the panel samples it at falling edge? And
 vice-versa for inverted? Or the other way around?

 When is hsync/vsync set? On rising or falling edge of pclk?

 My point here is that the pixelclk-inverted is not crystal clear thing,
 like the hsync/vsync/de-active values are.

  And while thinking about this, I realized that the meaning of
  pixelclk-inverted depends on what component it is applied to. Presuming
  normal pixclk means pixel data on rising edge, the meaning of that
  depends on whether we consider the SoC or the panel. The panel needs to
  sample the data on the other edge from the one the SoC uses to drive the
  data.

 Does the videomode describe the panel, or does it describe the settings
 programmed to the SoC?
 
 How about calling this property pixelclk-active, active high meaning
 driving pixel data on rising edges and sampling on falling edges (the
 pixel clock is high between driving and sampling the data), and active
 low meaning driving on falling edges and sampling on rising edges?
 It is the same from the SoC perspective and from the panel perspective,
 and it mirrors the usage of the other *-active properties.

This sounds good to me. It's not quite correct, as neither the pixel
clock nor the pixel data is really active when the clock is high/low,
but it still makes sense and is clear (at least with a short description).

 Tomi






[RFC 0/6] Common Display Framework-T

2012-12-14 Thread Tomi Valkeinen
Hi,

I have been testing Common Display Framework on OMAP, and making changes that
I've discussed in the posts I've sent in reply to the CDF series from Laurent.
While my CDF code is rather hacky and not at all ready, I wanted to post the
code for comments and also as a reference code to my posts.

So here is CDF-T (Tomi-edition =). 

Changes compared to Laurent's CDF
-

- DBI bus is removed. I don't have any DBI devices, and as I said in my posts,
  I didn't see need for a real bus for DBI (or DSI). I thus also removed the
  DBI panels.

- Split the display_entity into two parts: video_source, which is used to
  connect the display chips and panels, and display_entity, which represent
  either the panel or the whole pipeline (I'm not sure which is better).

- Added ops for DVI and DSI

- Added driver for TFP410, a DPI to DVI converter chip
- Added driver for generic DVI monitor
- Added driver for Taal, a DSI command mode panel

- Removed DISPLAY_ENTITY_STREAM_SINGLE_SHOT, and instead there's an update() op.
  It's perhaps possible to use the single-shot method, but I went for a
  separate op to get an update-done callback implemented easily.

Notes about the code


As I said, the code is rather untidy, but I think it's more or less
understandable. I've skipped adding any helper funcs, for example to call the
ops, so it's easier for me to change the API.

I also (ab)used some of the omapdss panel headers to ease my hacking for
omapdss+cdf. These should be quite clear.

The code, including hacks to omapdss to make it use CDF, can be found from:
git://gitorious.org/linux-omap-dss2/linux.git work/dss-dev-model-cdf

The DSI part is very unfinished. I am able to use it to start up the Taal
panel, and send frames to the panel, but it's really missing lots of stuff.

About display_entity  video_source
---

I've discussed this split in previous posts, but I'll describe it here again.

As I see it, the connections between the display blocks, and the use of the
panel (and thus the whole pipeline) are very different things, and I think they
should be separated. So in my version I have struct video_source, used to
connect the blocks, and display_entity, representing the panel or the whole
pipeline. display_entity is probably not a good name for it anymore, but I
didn't come up with a good one yet.

As an example, let's look at chip-tfp410.c and panel-dvi.c.

tfp410 uses two video_sources, one for input and one for output. The input
comes from some other display block, in my case from OMAP display subsystem.
OMAP DSS has registered a DPI video_source, and the tfp410 driver will get this
source using video_source_find().

tfp410 registers its output as a video source, using video_source_register.
panel-dvi will in turn use video_source_find() to get it.

Both drivers use video_source to configure the input bus, i.e. tfp410
configures the DPI bus between OMAP DSS and TFP410 using, for example,
set_data_lines op to configure the number of datalines on the DPI bus.

With the video_sources in place, the whole video pipeline can be used. However,
we still need to expose an API so that somebody can actually use the pipeline.
This is done with display_entity. display_entity contains higher level ops,
which are not bus specific. The display_entity is registered by the panel at
the end of the chain.

In my model the display_entity ops go to the panel driver, which then calls ops
in the input video source to do the work. Laurent has objected to this model,
and would rather have the display_entity ops go to the DSS side (if I
understood right), which would then propagate forward towards the panel. I have
still kept my model, as I don't see the downsides with my model, nor do I see
any use for propagating the ops from DSS to the panel. But I'm happy to hear
examples of how it is beneficial and could be used.


About the bus model
---

Laurent's version uses a linux bus for DBI. The idea here is that DBI is the
control bus for the panel/chip, and should thus be represented by a real bus.
While I agreed to this approach when we discussed about it, I now am slightly
against it.

My reason is that DBI (or DSI or any other similar bus) is not really a control
bus, it is a video bus. It _can_ be used for control, but video is the main
purpose. This has the following practical issues:

- There's never use for DBI only for control. DBI is always used for either
  video only, or control+video.

- If DBI is used only for video, there's no DBI bus. How to configure DBI in
  this case?

- If DBI is used for control and video, we have two separate APIs for the bus.
  In theory it's possible to handle this, but in practice it may be impossible,
  especially for more complex busses like DSI.

So in my version I added DSI as a plain video_source, without a real linux bus.
I think this model is a lot simpler, and works better.

 Tomi

Tomi Valkeinen (6):
  video: add

[RFC 1/6] video: add display-core

2012-12-14 Thread Tomi Valkeinen
Signed-off-by: Tomi Valkeinen tomi.valkei...@ti.com
---
 drivers/video/display/display-core.c |  207 ++
 include/video/display.h  |  166 +++
 2 files changed, 373 insertions(+)
 create mode 100644 drivers/video/display/display-core.c
 create mode 100644 include/video/display.h

diff --git a/drivers/video/display/display-core.c 
b/drivers/video/display/display-core.c
new file mode 100644
index 000..5f8be30
--- /dev/null
+++ b/drivers/video/display/display-core.c
@@ -0,0 +1,207 @@
+/*
+ * Display Core
+ *
+ * Copyright (C) 2012 Renesas Solutions Corp.
+ *
+ * Contacts: Laurent Pinchart laurent.pinch...@ideasonboard.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/videomode.h>
+
+#include <video/display.h>
+
+/* -----------------------------------------------------------------------------
+ * Display Entity
+ */
+
+static LIST_HEAD(display_entity_list);
+static DEFINE_MUTEX(display_entity_mutex);
+
+struct display_entity *display_entity_get_first(void)
+{
+	if (list_empty(&display_entity_list))
+		return NULL;
+
+	return list_first_entry(&display_entity_list, struct display_entity,
+				list);
+}
+EXPORT_SYMBOL(display_entity_get_first);
+
+int display_entity_set_state(struct display_entity *entity,
+			     enum display_entity_state state)
+{
+	int ret;
+
+	if (entity->state == state)
+		return 0;
+
+	if (!entity->ops || !entity->ops->set_state)
+		return 0;
+
+	ret = entity->ops->set_state(entity, state);
+	if (ret < 0)
+		return ret;
+
+	entity->state = state;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(display_entity_set_state);
+
+int display_entity_get_modes(struct display_entity *entity,
+			     const struct videomode **modes)
+{
+	if (!entity->ops || !entity->ops->get_modes)
+		return 0;
+
+	return entity->ops->get_modes(entity, modes);
+}
+EXPORT_SYMBOL_GPL(display_entity_get_modes);
+
+int display_entity_get_size(struct display_entity *entity,
+			    unsigned int *width, unsigned int *height)
+{
+	if (!entity->ops || !entity->ops->get_size)
+		return -EOPNOTSUPP;
+
+	return entity->ops->get_size(entity, width, height);
+}
+EXPORT_SYMBOL_GPL(display_entity_get_size);
+
+static void display_entity_release(struct kref *ref)
+{
+	struct display_entity *entity =
+		container_of(ref, struct display_entity, ref);
+
+	if (entity->release)
+		entity->release(entity);
+}
+
+struct display_entity *display_entity_get(struct display_entity *entity)
+{
+	if (entity == NULL)
+		return NULL;
+
+	kref_get(&entity->ref);
+	return entity;
+}
+EXPORT_SYMBOL_GPL(display_entity_get);
+
+void display_entity_put(struct display_entity *entity)
+{
+	kref_put(&entity->ref, display_entity_release);
+}
+EXPORT_SYMBOL_GPL(display_entity_put);
+
+int __must_check __display_entity_register(struct display_entity *entity,
+					   struct module *owner)
+{
+	kref_init(&entity->ref);
+	entity->owner = owner;
+	entity->state = DISPLAY_ENTITY_STATE_OFF;
+
+	mutex_lock(&display_entity_mutex);
+	list_add(&entity->list, &display_entity_list);
+	mutex_unlock(&display_entity_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__display_entity_register);
+
+void display_entity_unregister(struct display_entity *entity)
+{
+	mutex_lock(&display_entity_mutex);
+	list_del(&entity->list);
+	mutex_unlock(&display_entity_mutex);
+
+	display_entity_put(entity);
+}
+EXPORT_SYMBOL_GPL(display_entity_unregister);
+
+/* -----------------------------------------------------------------------------
+ * Video Source
+ */
+
+static LIST_HEAD(video_source_list);
+static DEFINE_MUTEX(video_source_mutex);
+
+int __must_check __video_source_register(struct video_source *src,
+					 struct module *owner)
+{
+	kref_init(&src->ref);
+	src->owner = owner;
+
+	mutex_lock(&video_source_mutex);
+	list_add(&src->list, &video_source_list);
+	mutex_unlock(&video_source_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__video_source_register);
+
+void video_source_unregister(struct video_source *src)
+{
+	mutex_lock(&video_source_mutex);
+	list_del(&src->list);
+	mutex_unlock(&video_source_mutex);
+
+	video_source_put(src);
+}
+EXPORT_SYMBOL_GPL(video_source_unregister);
+
+
+static void video_source_release(struct kref *ref)
+{
+   struct video_source *src

[RFC 2/6] video: add generic dpi panel

2012-12-14 Thread Tomi Valkeinen
Signed-off-by: Tomi Valkeinen tomi.valkei...@ti.com
---
 drivers/video/display/panel-dpi.c |  155 +
 include/video/panel-dpi.h |   25 ++
 2 files changed, 180 insertions(+)
 create mode 100644 drivers/video/display/panel-dpi.c
 create mode 100644 include/video/panel-dpi.h

diff --git a/drivers/video/display/panel-dpi.c 
b/drivers/video/display/panel-dpi.c
new file mode 100644
index 000..824cd88
--- /dev/null
+++ b/drivers/video/display/panel-dpi.c
@@ -0,0 +1,155 @@
+/*
+ * DPI Display Panel
+ *
+ * Copyright (C) 2012 Renesas Solutions Corp.
+ *
+ * Contacts: Laurent Pinchart laurent.pinch...@ideasonboard.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <video/display.h>
+#include <video/panel-dpi.h>
+
+struct panel_dpi {
+	struct display_entity entity;
+	struct video_source *src;
+	const struct panel_dpi_platform_data *pdata;
+};
+
+#define to_panel_dpi(p)		container_of(p, struct panel_dpi, entity)
+
+static int panel_dpi_set_state(struct display_entity *entity,
+			       enum display_entity_state state)
+{
+	struct panel_dpi *panel = to_panel_dpi(entity);
+	struct video_source *src = panel->src;
+
+	switch (state) {
+	case DISPLAY_ENTITY_STATE_OFF:
+	case DISPLAY_ENTITY_STATE_STANDBY:
+		src->common_ops->set_stream(src,
+				DISPLAY_ENTITY_STREAM_STOPPED);
+		break;
+
+	case DISPLAY_ENTITY_STATE_ON:
+		src->common_ops->set_stream(src,
+				DISPLAY_ENTITY_STREAM_CONTINUOUS);
+		break;
+	}
+
+	return 0;
+}
+
+static int panel_dpi_get_modes(struct display_entity *entity,
+			       const struct videomode **modes)
+{
+	struct panel_dpi *panel = to_panel_dpi(entity);
+
+	*modes = &panel->pdata->mode;
+	return 1;
+}
+
+static int panel_dpi_get_size(struct display_entity *entity,
+			      unsigned int *width, unsigned int *height)
+{
+	struct panel_dpi *panel = to_panel_dpi(entity);
+
+	*width = panel->pdata->width;
+	*height = panel->pdata->height;
+	return 0;
+}
+
+static const struct display_entity_control_ops panel_dpi_control_ops = {
+   .set_state = panel_dpi_set_state,
+   .get_modes = panel_dpi_get_modes,
+   .get_size = panel_dpi_get_size,
+};
+
+static void panel_dpi_release(struct display_entity *entity)
+{
+   struct panel_dpi *panel = to_panel_dpi(entity);
+
+   kfree(panel);
+}
+
+static int panel_dpi_remove(struct platform_device *pdev)
+{
+	struct panel_dpi *panel = platform_get_drvdata(pdev);
+
+	display_entity_unregister(&panel->entity);
+
+	if (panel->src) {
+		video_source_put(panel->src);
+		panel->src = NULL;
+	}
+
+	platform_set_drvdata(pdev, NULL);
+
+	return 0;
+}
+
+static int __devinit panel_dpi_probe(struct platform_device *pdev)
+{
+	const struct panel_dpi_platform_data *pdata = pdev->dev.platform_data;
+	struct panel_dpi *panel;
+	int ret;
+
+	if (pdata == NULL)
+		return -ENODEV;
+
+	panel = kzalloc(sizeof(*panel), GFP_KERNEL);
+	if (panel == NULL)
+		return -ENOMEM;
+
+	panel->pdata = pdata;
+
+	panel->src = video_source_find(pdata->video_source);
+	if (panel->src == NULL) {
+		printk("failed to get video source\n");
+		kfree(panel);
+		return -EINVAL;
+	}
+
+	panel->src->ops.dpi->set_data_lines(panel->src, 24);
+	panel->src->ops.dpi->set_videomode(panel->src, &pdata->mode);
+
+	panel->entity.dev = &pdev->dev;
+	panel->entity.release = panel_dpi_release;
+	panel->entity.ops = &panel_dpi_control_ops;
+
+	ret = display_entity_register(&panel->entity);
+	if (ret < 0) {
+		kfree(panel);
+		return ret;
+	}
+
+	platform_set_drvdata(pdev, panel);
+
+	return 0;
+}
+
+static const struct dev_pm_ops panel_dpi_dev_pm_ops = {
+};
+
+static struct platform_driver panel_dpi_driver = {
+	.probe = panel_dpi_probe,
+	.remove = panel_dpi_remove,
+	.driver = {
+		.name = "panel_dpi",
+		.owner = THIS_MODULE,
+		.pm = &panel_dpi_dev_pm_ops,
+	},
+};
+
+module_platform_driver(panel_dpi_driver);
+
+MODULE_AUTHOR("Laurent Pinchart laurent.pinch...@ideasonboard.com");
+MODULE_DESCRIPTION("DPI Display Panel");
+MODULE_LICENSE("GPL");
diff --git a/include/video/panel-dpi.h b/include/video/panel-dpi.h
new file mode 100644
index 000..0c5856e
--- /dev/null
+++ b/include/video/panel-dpi.h
@@ -0,0 +1,25 @@
+/*
+ * DPI Display Panel

[RFC 3/6] video: add tfp410

2012-12-14 Thread Tomi Valkeinen
Signed-off-by: Tomi Valkeinen tomi.valkei...@ti.com
---
 drivers/video/display/chip-tfp410.c |  164 +++
 include/video/omap-panel-tfp410.h   |4 +
 2 files changed, 168 insertions(+)
 create mode 100644 drivers/video/display/chip-tfp410.c

diff --git a/drivers/video/display/chip-tfp410.c 
b/drivers/video/display/chip-tfp410.c
new file mode 100644
index 0000000..2f8bdb6
--- /dev/null
+++ b/drivers/video/display/chip-tfp410.c
@@ -0,0 +1,164 @@
+/*
+ * TFP410 DPI-to-DVI bridge
+ *
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Contacts: Tomi Valkeinen <tomi.valkei...@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/gpio.h>
+#include <linux/platform_device.h>
+
+#include <video/display.h>
+#include <video/omap-panel-tfp410.h>
+
+struct tfp410_data {
+   struct video_source *in;
+
+   struct video_source out;
+
+   int power_down_gpio;
+};
+
+#define to_tfp410(p) container_of(p, struct tfp410_data, out)
+
+static int tfp410_set_stream(struct video_source *src,
+enum video_source_stream_state state)
+{
+   struct tfp410_data *data = to_tfp410(src);
+   struct video_source *in = data->in;
+   int r;
+
+   r = in->common_ops->set_stream(in, state);
+   if (r)
+   return r;
+
+   switch (state) {
+   case DISPLAY_ENTITY_STREAM_STOPPED:
+   printk("tfp410 set_stream: STOPPED\n");
+
+   gpio_set_value_cansleep(data->power_down_gpio, 0);
+
+   break;
+
+   case DISPLAY_ENTITY_STREAM_CONTINUOUS:
+   printk("tfp410 set_stream: CONTINUOUS\n");
+
+   gpio_set_value_cansleep(data->power_down_gpio, 1);
+
+   break;
+
+   default:
+   printk("tfp410 set_stream error\n");
+   break;
+   }
+
+   return 0;
+}
+
+static int tfp410_set_vm(struct video_source *src, const struct videomode *vm)
+{
+   struct tfp410_data *data = to_tfp410(src);
+   struct video_source *in = data->in;
+
+   printk("tfp410 set vm\n");
+
+   return in->ops.dpi->set_videomode(in, vm);
+}
+
+static const struct common_video_source_ops tfp410_common_ops = {
+   .set_stream = tfp410_set_stream,
+};
+
+static const struct dvi_video_source_ops tfp410_dvi_ops = {
+   .set_videomode = tfp410_set_vm,
+};
+
+static void tfp410_release(struct video_source *src)
+{
+   printk("tfp410 entity release\n");
+}
+
+static int __devinit tfp410_probe(struct platform_device *pdev)
+{
+   const struct tfp410_platform_data *pdata = pdev->dev.platform_data;
+   struct tfp410_data *data;
+   int r;
+
+   if (pdata == NULL)
+   return -ENODEV;
+
+   data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+   if (data == NULL)
+   return -ENOMEM;
+
+   data->power_down_gpio = pdata->power_down_gpio;
+
+   r = devm_gpio_request_one(&pdev->dev, pdata->power_down_gpio,
+   GPIOF_OUT_INIT_LOW, "tfp410 pd");
+   if (r) {
+   printk("failed to request pd gpio\n");
+   return r;
+   }
+
+   /* setup input */
+
+   data->in = video_source_find(pdata->video_source);
+   if (data->in == NULL) {
+   printk("failed to get video source\n");
+   return -EINVAL;
+   }
+
+   data->in->ops.dpi->set_data_lines(data->in, 24);
+
+   /* setup output */
+
+   data->out.dev = &pdev->dev;
+   data->out.release = tfp410_release;
+   data->out.common_ops = &tfp410_common_ops;
+   data->out.ops.dvi = &tfp410_dvi_ops;
+   data->out.name = pdata->video_output;
+
+   r = video_source_register(&data->out);
+   if (r < 0) {
+   printk("tfp410 entity register failed\n");
+   video_source_put(data->in);
+   return r;
+   }
+
+   platform_set_drvdata(pdev, data);
+
+   return 0;
+}
+
+static int tfp410_remove(struct platform_device *pdev)
+{
+   struct tfp410_data *data = platform_get_drvdata(pdev);
+
+   video_source_unregister(&data->out);
+
+   video_source_put(data->in);
+
+   return 0;
+}
+
+static struct platform_driver tfp410_driver = {
+   .probe = tfp410_probe,
+   .remove = tfp410_remove,
+   .driver = {
+   .name = "tfp410",
+   .owner = THIS_MODULE,
+   },
+};
+
+module_platform_driver(tfp410_driver);
+
+MODULE_AUTHOR("Tomi Valkeinen <tomi.valkei...@ti.com>");
+MODULE_DESCRIPTION("TFP410 DPI-to-DVI Bridge");
+MODULE_LICENSE("GPL");
diff --git a/include/video/omap-panel-tfp410.h 
b/include/video/omap-panel-tfp410.h
index b5b05f4..18f2b46 100644
--- a/include/video/omap-panel-tfp410.h
+++ b/include/video/omap-panel-tfp410.h
@@ -30,6 +30,10 @@ struct omap_dss_device;
  */
 struct tfp410_platform_data

[RFC 4/6] video: add generic dvi monitor

2012-12-14 Thread Tomi Valkeinen
Signed-off-by: Tomi Valkeinen <tomi.valkei...@ti.com>
---
 drivers/video/display/panel-dvi.c |  164 +
 include/video/panel-dvi.h |   18 
 2 files changed, 182 insertions(+)
 create mode 100644 drivers/video/display/panel-dvi.c
 create mode 100644 include/video/panel-dvi.h

diff --git a/drivers/video/display/panel-dvi.c 
b/drivers/video/display/panel-dvi.c
new file mode 100644
index 0000000..01cea09
--- /dev/null
+++ b/drivers/video/display/panel-dvi.c
@@ -0,0 +1,164 @@
+/*
+ * Generic DVI monitor
+ *
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Contacts: Tomi Valkeinen <tomi.valkei...@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
+#include <video/display.h>
+#include <video/panel-dvi.h>
+
+struct panel_data {
+   struct display_entity entity;
+   struct video_source *in;
+};
+
+#define to_panel(p) container_of(p, struct panel_data, entity)
+
+static int panel_dvi_set_state(struct display_entity *entity,
+  enum display_entity_state state)
+{
+   struct panel_data *data = to_panel(entity);
+   struct video_source *in = data->in;
+
+   switch (state) {
+   case DISPLAY_ENTITY_STATE_OFF:
+   case DISPLAY_ENTITY_STATE_STANDBY:
+   in->common_ops->set_stream(in, DISPLAY_ENTITY_STREAM_STOPPED);
+   break;
+
+   case DISPLAY_ENTITY_STATE_ON:
+   in->common_ops->set_stream(in, DISPLAY_ENTITY_STREAM_CONTINUOUS);
+   break;
+   }
+
+   return 0;
+}
+
+static const struct videomode vga_mode = {
+   .pixelclock = 23500,
+
+   .hactive = 640,
+   .hfront_porch = 48,
+   .hback_porch = 80,
+   .hsync_len = 32,
+
+   .vactive = 480,
+   .vfront_porch = 3,
+   .vback_porch = 7,
+   .vsync_len = 4,
+
+   .hah = true,
+   .vah = true,
+   .de = true,
+};
+
+static int panel_dvi_get_modes(struct display_entity *entity,
+  const struct videomode **modes)
+{
+   //struct panel_data *data = to_panel(entity);
+
+   *modes = &vga_mode;
+   return 1;
+}
+
+static int panel_dvi_get_size(struct display_entity *entity,
+ unsigned int *width, unsigned int *height)
+{
+   //struct panel_data *data = to_panel(entity);
+
+   *width = 10;
+   *height = 10;
+   return 0;
+}
+
+static const struct display_entity_control_ops panel_dvi_control_ops = {
+   .set_state = panel_dvi_set_state,
+   .get_modes = panel_dvi_get_modes,
+   .get_size = panel_dvi_get_size,
+};
+
+static void panel_dvi_release(struct display_entity *entity)
+{
+   printk("panel dvi release\n");
+}
+
+static int __devinit panel_dvi_probe(struct platform_device *pdev)
+{
+   const struct panel_dvi_platform_data *pdata = pdev->dev.platform_data;
+   struct panel_data *data;
+   int ret;
+
+   if (pdata == NULL)
+   return -ENODEV;
+
+   data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+   if (data == NULL)
+   return -ENOMEM;
+
+   /* setup input */
+   data->in = video_source_find(pdata->video_source);
+   if (data->in == NULL) {
+   printk("failed to get video source\n");
+   return -EINVAL;
+   }
+
+   /* setup default mode */
+   data->in->ops.dvi->set_videomode(data->in, &vga_mode);
+
+   /* setup panel entity */
+
+   data->entity.dev = &pdev->dev;
+   data->entity.release = panel_dvi_release;
+   data->entity.ops = &panel_dvi_control_ops;
+
+   ret = display_entity_register(&data->entity);
+   if (ret < 0) {
+   video_source_put(data->in);
+   return ret;
+   }
+
+   platform_set_drvdata(pdev, data);
+
+   return 0;
+}
+
+static int panel_dvi_remove(struct platform_device *pdev)
+{
+   struct panel_data *data = platform_get_drvdata(pdev);
+
+   display_entity_unregister(&data->entity);
+
+   video_source_put(data->in);
+
+   return 0;
+}
+
+
+static const struct dev_pm_ops panel_dvi_dev_pm_ops = {
+};
+
+static struct platform_driver panel_dvi_driver = {
+   .probe = panel_dvi_probe,
+   .remove = panel_dvi_remove,
+   .driver = {
+   .name = "panel_dvi",
+   .owner = THIS_MODULE,
+   .pm = &panel_dvi_dev_pm_ops,
+   },
+};
+
+module_platform_driver(panel_dvi_driver);
+
+MODULE_AUTHOR("Tomi Valkeinen <tomi.valkei...@ti.com>");
+MODULE_DESCRIPTION("DVI Monitor");
+MODULE_LICENSE("GPL");
diff --git a/include/video/panel-dvi.h b/include/video/panel-dvi.h
new file mode 100644
index 000..ab88380
--- /dev/null
+++ b/include/video/panel-dvi.h
@@ -0,0 +1,18 @@
+/*
+ * DVI Display Panel
+ *
+ * This program is free software; you can

[RFC 5/6] video: add taal panel

2012-12-14 Thread Tomi Valkeinen
Signed-off-by: Tomi Valkeinen <tomi.valkei...@ti.com>
---
 drivers/video/display/panel-taal.c   |  383 ++
 include/video/omap-panel-nokia-dsi.h |4 +-
 2 files changed, 385 insertions(+), 2 deletions(-)
 create mode 100644 drivers/video/display/panel-taal.c

diff --git a/drivers/video/display/panel-taal.c 
b/drivers/video/display/panel-taal.c
new file mode 100644
index 0000000..f1c2196
--- /dev/null
+++ b/drivers/video/display/panel-taal.c
@@ -0,0 +1,383 @@
+/*
+ * Taal DSI command mode panel
+ *
+ * Copyright (C) 2012 Texas Instruments
+ * Author: Tomi Valkeinen <tomi.valkei...@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define DEBUG
+
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/sched.h>
+#include <linux/gpio.h>
+#include <linux/mutex.h>
+#include <linux/platform_device.h>
+#include <linux/videomode.h>
+
+#include <video/omapdss.h>
+#include <video/display.h>
+#include <video/omap-panel-nokia-dsi.h>
+#include <video/mipi_display.h>
+
+/* DSI Virtual channel. Hardcoded for now. */
+#define TCH 0
+
+#define DCS_READ_NUM_ERRORS	0x05
+#define DCS_BRIGHTNESS		0x51
+#define DCS_CTRL_DISPLAY	0x53
+#define DCS_WRITE_CABC		0x55
+#define DCS_READ_CABC		0x56
+#define DCS_GET_ID1		0xda
+#define DCS_GET_ID2		0xdb
+#define DCS_GET_ID3		0xdc
+
+struct taal_data {
+   struct platform_device *pdev;
+   struct video_source *src;
+   struct display_entity entity;
+
+   struct mutex lock;
+
+   unsigned long   hw_guard_end;   /* next value of jiffies when we can
+* issue the next sleep in/out command
+*/
+   unsigned long   hw_guard_wait;  /* max guard time in jiffies */
+
+   /* panel HW configuration from DT or platform data */
+   int reset_gpio;
+
+   /* runtime variables */
+   bool enabled;
+
+   bool te_enabled;
+
+   int channel;
+
+   bool cabc_broken;
+   unsigned cabc_mode;
+
+   bool intro_printed;
+};
+
+static void hw_guard_start(struct taal_data *td, int guard_msec)
+{
+   td->hw_guard_wait = msecs_to_jiffies(guard_msec);
+   td->hw_guard_end = jiffies + td->hw_guard_wait;
+}
+
+static void hw_guard_wait(struct taal_data *td)
+{
+   unsigned long wait = td->hw_guard_end - jiffies;
+
+   if ((long)wait > 0 && wait <= td->hw_guard_wait) {
+   set_current_state(TASK_UNINTERRUPTIBLE);
+   schedule_timeout(wait);
+   }
+}
+
+static int taal_dcs_read_1(struct taal_data *td, u8 dcs_cmd, u8 *data)
+{
+   int r;
+   u8 buf[1];
+   struct video_source *src = td->src;
+
+   r = src->ops.dsi->dcs_read(src, td->channel, dcs_cmd, buf, 1);
+   if (r < 0)
+   return r;
+
+   *data = buf[0];
+
+   return 0;
+}
+
+static int taal_dcs_write_0(struct taal_data *td, u8 dcs_cmd)
+{
+   struct video_source *src = td->src;
+
+   return src->ops.dsi->dcs_write(src, td->channel, &dcs_cmd, 1);
+}
+
+static int taal_sleep_out(struct taal_data *td)
+{
+   int r;
+
+   hw_guard_wait(td);
+
+   r = taal_dcs_write_0(td, MIPI_DCS_EXIT_SLEEP_MODE);
+   if (r)
+   return r;
+
+   hw_guard_start(td, 120);
+
+   msleep(5);
+
+   return 0;
+}
+
+static int taal_get_id(struct taal_data *td, u8 *id1, u8 *id2, u8 *id3)
+{
+   int r;
+
+   r = taal_dcs_read_1(td, DCS_GET_ID1, id1);
+   if (r)
+   return r;
+   r = taal_dcs_read_1(td, DCS_GET_ID2, id2);
+   if (r)
+   return r;
+   r = taal_dcs_read_1(td, DCS_GET_ID3, id3);
+   if (r)
+   return r;
+
+   return 0;
+}
+
+static void taal_hw_reset(struct taal_data *td)
+{
+   if (!gpio_is_valid(td->reset_gpio))
+   return;
+
+   gpio_set_value(td->reset_gpio, 1);
+   udelay(10);
+   /* reset the panel */
+   gpio_set_value(td->reset_gpio, 0);
+   /* assert reset */
+   udelay(10);
+   gpio_set_value(td->reset_gpio, 1);
+   /* wait after releasing reset */
+   msleep(5);
+}
+
+#define to_panel(p) container_of(p, struct taal_data, entity)
+
+static int taal_set_state(struct display_entity *entity,
+  enum display_entity_state state)
+{
+   struct taal_data *td = to_panel(entity);
+   struct video_source *src = td->src;
+   int r

[RFC 6/6] video: add makefile kconfig

2012-12-14 Thread Tomi Valkeinen
Signed-off-by: Tomi Valkeinen <tomi.valkei...@ti.com>
---
 drivers/video/Kconfig  |1 +
 drivers/video/Makefile |1 +
 drivers/video/display/Kconfig  |   26 ++
 drivers/video/display/Makefile |5 +
 4 files changed, 33 insertions(+)
 create mode 100644 drivers/video/display/Kconfig
 create mode 100644 drivers/video/display/Makefile

diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
index c5b7bcf..e91f03e 100644
--- a/drivers/video/Kconfig
+++ b/drivers/video/Kconfig
@@ -2442,6 +2442,7 @@ source "drivers/video/omap/Kconfig"
 source "drivers/video/omap2/Kconfig"
 source "drivers/video/exynos/Kconfig"
 source "drivers/video/backlight/Kconfig"
+source "drivers/video/display/Kconfig"
 
 if VT
source drivers/video/console/Kconfig
diff --git a/drivers/video/Makefile b/drivers/video/Makefile
index b936b00..0a4cfea 100644
--- a/drivers/video/Makefile
+++ b/drivers/video/Makefile
@@ -14,6 +14,7 @@ fb-objs   := $(fb-y)
 obj-$(CONFIG_VT)		+= console/
 obj-$(CONFIG_LOGO)		+= logo/
 obj-y				+= backlight/
+obj-y				+= display/
 
 obj-$(CONFIG_EXYNOS_VIDEO) += exynos/
 
diff --git a/drivers/video/display/Kconfig b/drivers/video/display/Kconfig
new file mode 100644
index 0000000..1d1a590
--- /dev/null
+++ b/drivers/video/display/Kconfig
@@ -0,0 +1,26 @@
+menuconfig DISPLAY_CORE
+   tristate "Display Core"
+   ---help---
+ Support common display framework for graphics devices.
+
+if DISPLAY_CORE
+
+config DISPLAY_PANEL_DPI
+   tristate "DPI (Parallel) Display Panels"
+   ---help---
+ Support for simple digital (parallel) pixel interface panels. Those
+ panels receive pixel data through a parallel bus and have no control
+ bus.
+
+ If you are in doubt, say N.
+
+config DISPLAY_PANEL_DVI
+   tristate "DVI Monitor"
+
+config DISPLAY_PANEL_TAAL
+   tristate "Taal DSI command mode panel"
+
+config DISPLAY_CHIP_TFP410
+   tristate "DPI to DVI chip"
+
+endif # DISPLAY_CORE
diff --git a/drivers/video/display/Makefile b/drivers/video/display/Makefile
new file mode 100644
index 0000000..ac97dfd
--- /dev/null
+++ b/drivers/video/display/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_DISPLAY_CORE) += display-core.o
+obj-$(CONFIG_DISPLAY_PANEL_DPI) += panel-dpi.o
+obj-$(CONFIG_DISPLAY_PANEL_DVI) += panel-dvi.o
+obj-$(CONFIG_DISPLAY_PANEL_TAAL) += panel-taal.o
+obj-$(CONFIG_DISPLAY_CHIP_TFP410) += chip-tfp410.o
-- 
1.7.10.4

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [RFC] drm/lcdc: add TI LCD Controller DRM driver

2012-12-17 Thread Tomi Valkeinen
On 2012-12-14 02:04, Rob Clark wrote:
 A simple DRM/KMS driver for the TI LCD Controller found in various
 smaller TI parts (AM33xx, OMAPL138, etc).  This driver uses the
 CMA helpers.  Currently only the TFP410 DVI encoder is supported
 (tested with beaglebone + DVI cape).  There are also various LCD
 displays, for which support can be added (as I get hw to test on),
 and an external i2c HDMI encoder found on some boards.
 
 The display controller supports a single CRTC.  And the encoder+
 connector are split out into sub-devices.  Depending on which LCD
 or external encoder is actually present, the appropriate output
 module(s) will be loaded.
 
 Signed-off-by: Rob Clark <robdcl...@gmail.com>
 ---
  drivers/gpu/drm/Kconfig|   2 +
  drivers/gpu/drm/Makefile   |   1 +
  drivers/gpu/drm/lcdc/Kconfig   |  11 +
  drivers/gpu/drm/lcdc/Makefile  |   8 +
  drivers/gpu/drm/lcdc/lcdc_crtc.c   | 544 +
  drivers/gpu/drm/lcdc/lcdc_drv.c| 604 
 +
  drivers/gpu/drm/lcdc/lcdc_drv.h| 162 ++
  drivers/gpu/drm/lcdc/lcdc_regs.h   | 154 ++
  drivers/gpu/drm/lcdc/lcdc_tfp410.c | 424 ++
  drivers/gpu/drm/lcdc/lcdc_tfp410.h |  26 ++
  10 files changed, 1936 insertions(+)
  create mode 100644 drivers/gpu/drm/lcdc/Kconfig
  create mode 100644 drivers/gpu/drm/lcdc/Makefile
  create mode 100644 drivers/gpu/drm/lcdc/lcdc_crtc.c
  create mode 100644 drivers/gpu/drm/lcdc/lcdc_drv.c
  create mode 100644 drivers/gpu/drm/lcdc/lcdc_drv.h
  create mode 100644 drivers/gpu/drm/lcdc/lcdc_regs.h
  create mode 100644 drivers/gpu/drm/lcdc/lcdc_tfp410.c
  create mode 100644 drivers/gpu/drm/lcdc/lcdc_tfp410.h

"lcdc" is quite a common term. The directory should perhaps be something
like "ti-lcdc"?

Probably not relevant, but I wonder if the same LCDC was used in
omap1... It'd be nice to get rid of the omap1 fb driver.

I'm not very enthusiastic about adding ti-lcdc specific panel/chip
drivers. It's not really a big deal if it's only kernel code, but you
add device-tree bindings also, which is an external API that you need to
support after adding it.

I'd rather see the energy put to common display framework, and get this
whole panel/chip driver issue solved in a generic manner.

 Tomi






Re: [RFC 0/6] Common Display Framework-T

2012-12-20 Thread Tomi Valkeinen
On 2012-12-19 15:21, Laurent Pinchart wrote:
 Hi Tomi,
 
 On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
 Hi,

 I have been testing Common Display Framework on OMAP, and making changes
 that I've discussed in the posts I've sent in reply to the CDF series from
 Laurent. While my CDF code is rather hacky and not at all ready, I wanted
 to post the code for comments and also as a reference code to my posts.

 So here is CDF-T (Tomi-edition =).
 
 We've discussed your approach extensively face-to-face today so I won't 
 review 
 the patches in detail, but I will instead summarize our discussion to make 
 sure we understood each other (and let other developers jump in).

I have some comments =). But mostly it looks good.

 For the purpose of this discussion the term display controller driver (or 
 just display controller) refers to both the low-level driver layer that 
 communicates directly with the display controller hardware, and to the higher-
 level driver layer that implements and exposes the userspace API (FBDEV, KMS 
 and/or V4L). Those layers can be implemented in multiple kernel modules (such 
 as in the OMAP DSS case, with omapdss for the low-level layer and omapdrm, 
 omapfb and omapvout for the API-level layer) or a single kernel module.
 
 Control model
 -
 
 The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
 model.png shows the CDF control model.
 
 The panel object depicted on the figure doesn't need to be a panel in the 
 stricter sense but could be any chain of off-SoC (both on-board or off-board) 
 display entities. It however helps thinking about it as a panel and doesn't 
 hurt the model.

I don't think it needs to be off-soc. The dispc and the panel in the
image could be any two components, in- or off-soc.

 The panel is controlled through abstract control requests. Those requests are 
 used to retrieve panel information (such as the physical size, the supported 
 video modes, EDID information, ...), set the panel configuration (such as the 
 active video timings) or control the panel operation state 
 (enabling/disabling 
 the panel, controlling panel blanking and power management, ...). They are 
 exposed by the panel using function pointers, and called by other kernel 
 components in response to userspace requests (through the FBDEV, KMS or V4L2 
 APIs) or in-kernel events (for instance hotplug notifications).
 
 In response to the control requests the panel driver will communicate with 
 the 
 panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ..., not shown 
 on the figure) and will control the video stream it receives on its input.
 
 The panel is connected at the hardware level to a video source (shown as a 
 green hashed rectangle) that provides it with a video stream. The video 
 stream 
 flows from the video source to the panel and is directly controlled by its 
 source, as shown by the green arrow from the display controller to the video 
 stream. The video source exposes stream control operations as function 
 pointers that are used by the panel to control the video stream, as shown by 
 the green arrow from the panel to the video source.
 
 The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
 model-2.png shows the call flow across entities when the panel is a pipeline 
 made of more than a single entity. In this case the SoC (on the left of the 
 dashed line) outputs a video stream on a DSI bus connected to a DSI to LVDS 
 transmitter. The output of the DSI to LVDS transmitter is connected to an 
 LVDS 
 panel (or, more accurately, an LVDS panel module made of an LVDS panel 
 controller and a panel).

Here also I don't see any reason to separate in- or off-soc components.
I think the call from DISPC, which now goes to the transmitter, should
first go to the DPI/DSI block. Whether the DPI/DSI block is in- or
off-soc should be irrelevant regarding CDF.

 The transmitter and panel module are seen by the display controller and 
 userspace API implementations as a single entity that exposes control request 
 operations and controls its input video stream. When a control request is 

I don't like the sound of this. I think the CDF shouldn't care how the
userspace API is implemented. There's no reason in CDF level to separate
in- or out-soc entities, nor expose only one entity. If DRM requires
mapping DRM's crtc, encoder and connector to, respectively, dispc,
DPI/DSI, the-rest, it should be able to do that, but CDF shouldn't force
that model.

Of course, the implementor of the particular SoC display driver could
decide to use one display entity that covers both dispc and DPI/DSI
blocks. But at least I would like to have OMAP's DISPC as a display
entity (or actually multiple entities, one for each DISPC output), and
the various in-SoC DPI-to-something encoders as display entities.

 performed (outermost green arrow) the DSI to LVDS transmitter will propagate 
 it to the panel, possibly mangling the input

Re: [RFC v2 0/5] Common Display Framework

2012-12-20 Thread Tomi Valkeinen
On 2012-12-19 16:57, Jani Nikula wrote:

 It just seems to me that, at least from a DRM/KMS perspective, adding
 another layer (=CDF) for HDMI or DP (or legacy outputs) would be
 overengineering it. They are pretty well standardized, and I don't see
 there would be a need to write multiple display drivers for them. Each
 display controller has one, and can easily handle any chip specific
 requirements right there. It's my gut feeling that an additional
 framework would just get in the way. Perhaps there could be more common
 HDMI/DP helper style code in DRM to reduce overlap across KMS drivers,
 but that's another thing.
 
 So is the HDMI/DP drivers using CDF a more interesting idea from a
 non-DRM perspective? Or, put another way, is it more of an alternative
 to using DRM? Please enlighten me if there's some real benefit here that
 I fail to see!

The use of CDF is an option, not something that has to be done. A DRM
driver developer may use it if it gives benefit for him for that
particular driver.

I don't know much about desktop display hardware, but I guess that using
CDF would not really give much there. In some cases it could, if the IPs
used on the graphics card are something that are used elsewhere also
(sounds quite unlikely, though). In that case there could be separate
drivers for the IPs.

And note that CDF is not really about the dispc side, i.e. the part that
creates the video stream from pixels in the memory. It's more about the
components after that, and how to connect those components.
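As a rough illustration of what "connecting those components" means in practice, here is a minimal userspace sketch (plain C, simplified names — not the actual CDF API) of how stream control chains through the pipeline, the way chip-tfp410.c earlier in this thread forwards set_stream to its input source:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the CDF video-source chain: each entity
 * exposes set_stream() and may forward the request upstream. */

enum stream_state { STREAM_STOPPED, STREAM_CONTINUOUS };

struct video_source {
	struct video_source *in;	/* upstream source, NULL for the dispc */
	enum stream_state state;
	int (*set_stream)(struct video_source *src, enum stream_state state);
};

/* Leaf source (the display controller output): just records the state. */
static int dispc_set_stream(struct video_source *src, enum stream_state state)
{
	src->state = state;
	return 0;
}

/* Bridge (e.g. a DPI-to-DVI encoder): propagate upstream first, then
 * apply the local state, mirroring tfp410_set_stream() above. */
static int bridge_set_stream(struct video_source *src, enum stream_state state)
{
	int r = src->in->set_stream(src->in, state);
	if (r)
		return r;
	src->state = state;
	return 0;
}
```

Enabling the panel then only requires calling set_stream on the last entity in the chain; the request ripples back to the display controller.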

 For DSI panels (or DSI-to-whatever bridges) it's of course another
 story. You typically need a panel specific driver. And here I see the
 main point of the whole CDF: decoupling display controllers and the
 panel drivers, and sharing panel (and converter chip) specific drivers
 across display controllers. Making it easy to write new drivers, as
 there would be a model to follow. I'm definitely in favour of coming up
 with some framework that would tackle that.

Right. But if you implement drivers for DSI panels with CDF for, say,
OMAP, I think it's simpler to use CDF also for HDMI/DP on OMAP.
Otherwise it'll be a mishmash with two different models.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2012-12-20 Thread Tomi Valkeinen
On 2012-12-19 17:26, Rob Clark wrote:
 On Wed, Dec 19, 2012 at 8:57 AM, Jani Nikula
 jani.nik...@linux.intel.com wrote:

 Hi Laurent -

 On Tue, 18 Dec 2012, Laurent Pinchart laurent.pinch...@ideasonboard.com 
 wrote:
 Hi Jani,

 On Monday 17 December 2012 18:53:37 Jani Nikula wrote:
 I can see the need for a framework for DSI panels and such (in fact Tomi
 and I have talked about it like 2-3 years ago already!) but what is the
 story for HDMI and DP? In particular, what's the relationship between
 DRM and CDF here? Is there a world domination plan to switch the DRM
 drivers to use this framework too? ;) Do you have some rough plans how
 DRM and CDF should work together in general?

 There's always a world domination plan, isn't there ? :-)

 I certainly want CDF to be used by DRM (or more accurately KMS). That's what
 the C stands for, common refers to sharing panel and other display entity
 drivers between FBDEV, KMS and V4L2.

 I currently have no plan to expose CDF internals to userspace through the 
 KMS
 API. We might have to do so later if the hardware complexity grows in such a
 way that finer control than what KMS provides needs to be exposed to
 userspace, but I don't think we're there yet. The CDF API will thus only be
 used internally in the kernel by display controller drivers. The KMS core
 might get functions to handle common display entity operations, but the bulk
 of the work will be in the display controller drivers to start with. We will
 then see what can be abstracted in KMS helper functions.

 Regarding HDMI and DP, I imagine HDMI and DP drivers that would use the CDF
 API. That's just a thought for now, I haven't tried to implement them, but 
 it
 would be nice to handle HDMI screens and DPI/DBI/DSI panels in a generic 
 way.

 Do you have thoughts to share on this topic ?

 It just seems to me that, at least from a DRM/KMS perspective, adding
 another layer (=CDF) for HDMI or DP (or legacy outputs) would be
 overengineering it. They are pretty well standardized, and I don't see
 there would be a need to write multiple display drivers for them. Each
 display controller has one, and can easily handle any chip specific
 requirements right there. It's my gut feeling that an additional
 framework would just get in the way. Perhaps there could be more common
 HDMI/DP helper style code in DRM to reduce overlap across KMS drivers,
 but that's another thing.

 So is the HDMI/DP drivers using CDF a more interesting idea from a
 non-DRM perspective? Or, put another way, is it more of an alternative
 to using DRM? Please enlighten me if there's some real benefit here that
 I fail to see!
 
 fwiw, I think there are at least a couple cases where multiple SoC's
 have the same HDMI IP block.
 
 And, there are also external HDMI encoders (for example connected over
 i2c) that can also be shared between boards.  So I think there will be
 a number of cases where CDF is appropriate for HDMI drivers.  Although
 trying to keep this all independent of DRM (as opposed to just
 something similar to what drivers/gpu/i2c is today) seems a bit
 overkill for me.  Being able to use the helpers in drm and avoiding an
 extra layer of translation seems like the better option to me.  So my
 vote would be drivers/gpu/cdf.

Well, we need to think about that. I would like to keep CDF independent
of DRM. I don't like tying different components/frameworks together if
there's no real need for that.

Also, something that Laurent mentioned in our face-to-face discussions:
Some IPs/chips can be used for other purposes than with DRM.

He had an example of a board, that (if I understood right) gets video
signal from somewhere outside the board, processes the signal with some
IPs/chips, and then outputs the signal. So there's no framebuffer, and
the image is not stored anywhere. I think the framework used in these
cases is always v4l2.

The IPs/chips in the above model may be the exact same IPs/chips that
are used with normal display. If the CDF was tied to DRM, using the
same drivers for normal and these streaming cases would probably not be
possible.

 Tomi






Re: CDF meeting @FOSDEM report

2013-02-06 Thread Tomi Valkeinen
 
 
 Display panels are connected to a video bus that transmits video data and 
 optionally to a control bus. Those two busses can be separate physical 
 interfaces or combined into a single physical interface.
 
 The Linux device model represents the system as a tree of devices (not to be 
 confused by the device tree, abbreviated as DT). The tree is organized around 
 control busses, with every device being a child of its control bus master. 
 For 
 instance an I2C device will be a child of its I2C controller device, which 
 can 
 itself be a child of its parent PCI device.
 
 Display panels will be represented as Linux devices. They will have a single 
 parent from the Linux device model point of view, but will be potentially 
 connected to multiple physical busses. CDF thus needs to define what bus to 
 select as the Linux parent bus.
 
 In theory any physical bus that the device is attached to can be selected as 
 the parent bus. However, selecting a video data bus would depart from the 
 traditional Linux device model that uses control busses only. This caused 
 concern among several people who argued that not presenting the device to the 
 kernel as attached to its control bus would bring issues in embedded system. 
 Unlike on PC systems where the control bus master is usually the same 
 physical 
 device as the data bus master, embedded systems are made of a potentially 
 complex assembly of completely unrelated devices. Not representing an I2C-
 controlled panel as a child of its I2C master in DT was thus frowned upon, even 
 though no clear agreement was reached on the subject.

I've been thinking that a good rule of thumb would be that the device
must be somewhat usable after the parent bus is ready. So for, say, a
DPI+SPI panel, when the SPI bus is set up the driver can send messages to
the panel, perhaps read an ID or such, even if the actual video cannot
be shown yet (presuming the DPI bus is still missing).

Of course there are the funny cases, as always. Say, a DSI panel,
controlled via i2c, and the panel gets its functional clock from the DSI
bus's clock. In that case both busses need to be up and running before
the panel can do anything.

 - Combined video and control busses
 
 When the two busses are combined in a single physical bus the panel device 
 will obviously be represented as a child of that single physical bus. 
 
 In such cases the control bus could expose video bus control methods. This 
 would remove the need for a video source as proposed by Tomi Valkeinen in his 
 CDF model. However, if the bus can be used for video data transfer in 
 combination with a different control bus, a video source corresponding to the 
 data bus will be needed.

I think this is always the case. If a bus can be used for control and
video data, you can always use it only for video data.

 No decision has been taken on whether to use a video source in addition to the 
 control bus in the combined busses case. Experimentation will be needed, and 
 the right solution might depend on the bus type.
 
 - Multiple control busses
 
 One panel was mentioned as being connected to a DSI bus and an I2C bus. The 
 DSI bus is used for both control and video, and the I2C bus for control only. 
 Configuring the panel requires sending commands through both DSI and I2C. The 

I have luckily not encountered such a device. However, many of the DSI
devices do have i2c control as an option. From the device's point of
view, both can be used at the same time, but I think usually it's saner
to just pick one and use it.

The driver for the device should support both control busses, though.
Probably 99% of the driver code is common for both cases.

 6. Miscellaneous
 
 
 - If the OMAP3 DSS driver is used as a model for the DSI support 
 implementation, Daniel Vetter requested the DSI bus lock semaphore to be 
 killed as it prevents lockdep from working correctly (reference needed ;-)).

I don't think OMAP DSS should be used as a model. It has too much legacy
crap that should be rewritten. However, it can be used as a reference to
see what kind of features are needed, as it does support both video and
command mode DSI modes, and has been used with many different kinds of
DSI panels and DSI transcoders.

As for the semaphore, sure, it can be removed, although I'm not aware of
this lockdep problem. If there's a problem it should be fixed in any case.

 - Do we need to support chaining several encoders ? We can come up with 
 several theoretical use cases, some of them probably exist in real hardware, 
 but the details are still a bit fuzzy.

If encoder means the same as the transcoder term I used earlier, then
yes, I think so.

As I wrote, I'd like to model the OMAP DSS internal components with CDF.
The internal IP blocks are in no way different than external IP blocks,
they just happen to be integrated into OMAP. The same display IPs are
used with multiple different TI SoCs.

Also, the IPs vary between TI SoCs (for ex

Re: CDF meeting @FOSDEM report

2013-02-06 Thread Tomi Valkeinen
On 2013-02-06 14:11, Jani Nikula wrote:
 On Wed, 06 Feb 2013, Tomi Valkeinen tomi.valkei...@ti.com wrote:
 6. Miscellaneous
 

 - If the OMAP3 DSS driver is used as a model for the DSI support 
 implementation, Daniel Vetter requested the DSI bus lock semaphore to be 
 killed as it prevents lockdep from working correctly (reference needed ;-)).
 
 [...]
 
 As for the semaphore, sure, it can be removed, although I'm not aware of
 this lockdep problem. If there's a problem it should be fixed in any case.
 
 The problem is that lockdep does not support semaphores out of the
 box. I'm not sure how hard it would be to manually lockdep annotate the
 bus lock, and whether it would really work. In any case, as I think we
 learned in the past, getting locking right in a DSI command mode panel
 driver with an asynchronous update callback, DSI bus lock, and a driver
 data specific mutex can be a PITA. Lockdep would be extremely useful
 there.
 
 AFAICS simply replacing the semaphore with a mutex would work for all
 other cases except DSI command mode display update, unless you're
 prepared to wait in the call until the next tearing effect interrupt
 plus framedone. Which would suck. I think you and I have talked about
 this part in the past...

A mutex requires locking and unlocking to happen from the same thread. But
I guess that's what you meant: the problem would be with display
update, where the framedone callback is used to release the bus lock.

The semaphore could probably be changed to use wait queues, but isn't
that more or less what a semaphore already does?

And I want to point out to those not familiar with omapdss, that the DSI
bus lock in question does not protect any data in memory, but is an
indication that the DSI bus is currently in use. The bus lock can be
used to wait until the bus is free again.

I guess one option would be to disallow any waiting for the bus lock. If
the panel driver tried to acquire the bus lock while the lock is already
taken, the call would fail. This would move the handling of exclusivity
to the user of the panel (the DRM driver, I guess), which already has to
handle the framedone event.

The above would require that everything the panel does should be managed
by the drm driver. Currently this is not the case for OMAP, as the panel
driver can get calls via sysfs, or via backlight driver, or via (gpio)
interrupts.

I don't really know what would be the best option here. On one hand
requiring all panel calls to be managed by drm would be nice and simple.
But it is a bit limiting when thinking about complex display chips. Will
that work for all cases? I'm not sure.

 Tomi




signature.asc
Description: OpenPGP digital signature
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: CDF meeting @FOSDEM report

2013-02-06 Thread Tomi Valkeinen
On 2013-02-06 16:44, Alex Deucher wrote:
 On Wed, Feb 6, 2013 at 6:11 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:

 What is an encoder? Something that takes a video signal in, and lets the
 CPU store the received data to memory? Isn't that a decoder?

 Or do you mean something that takes a video signal in, and outputs a
 video signal in another format? (transcoder?)
 
 In KMS parlance, we have two objects a crtc and an encoder.  A crtc
 reads data from memory and produces a data stream with display timing.
  The encoder then takes that datastream and timing from the crtc and
 converts it some sort of physical signal (LVDS, TMDS, DP, etc.).  It's

Isn't the video stream between CRTC and encoder just as physical? It
just happens to be inside the GPU.

This is the case for OMAP, at least, where DISPC could be considered
CRTC, and DSI/HDMI/etc could be considered encoder. The stream between
DISPC and DSI/HDMI is plain parallel RGB signal. The video stream could
as well be outside OMAP.

 not always a perfect match to the hardware.  For example a lot of GPUs
 have a DVO encoder which feeds a secondary encoder like an sil164 DVO
 to TMDS encoder.

Right. I think mapping the DRM entities to CDF ones is one of the bigger
question marks we have with CDF. While I'm no expert on DRM, I think we
have the following options:

1. Force DRM's model to CDF, meaning one encoder.

2. Extend DRM to support multiple encoders in a chain.

3. Support multiple encoders in a chain in CDF, but somehow map them to
a single encoder in DRM side.

I really dislike the first option, as it would severely limit where CDF
can be used, or would force you to write some kind of combined drivers,
so that you can have one encoder driver running multiple encoder devices.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2013-02-09 Thread Tomi Valkeinen
On 2013-02-04 12:05, Marcus Lorentzon wrote:

 As discussed at FOSDEM I will give DSI bus with full feature set a
 try.

Please do, but as a reminder I want to raise the issues I see with a DSI
bus:

- A device can be a child of only one bus. So if DSI is used only for
video, the device is a child of, say, i2c bus, and thus there's no DSI
bus. How to configure and use DSI in this case?

- If DSI is used for both control and video, we have two separate APIs
for the bus. What I mean here is that for the video-only case above, we
need a video-only-API for DSI. This API should contain all necessary
methods to configure DSI. But we need similar methods for the control
API also.

So, I hope you come up with some solution for this, but as I see it,
it's easily the most simple and clear option to have one video_source
style entity for the DSI bus itself, which is used for both control and
video.

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-09 Thread Tomi Valkeinen
On 2013-02-08 14:40, Marcus Lorentzon wrote:

 I agree, but I think it should be
 setup-enable-video_on-video_off-disable-setup-...
 I don't think there is any interface parameters that should be changed
 while link is enabled. And if there are, they should be identified and
 split out into a separate operation.

Hmm. At least on OMAP and DSI video mode, it is possible to change at
least timings on the fly. But yes, generally the link has to be disabled
first before changing any parameters.

 In OMAP you can configure the DSI pins quite freely. We have the
 following struct:

 struct omap_dsi_pin_config {
 int num_pins;
 /*
  * pin numbers in the following order:
  * clk+, clk-
  * data1+, data1-
  * data2+, data2-
  * ...
  */
 int pins[OMAP_DSS_MAX_DSI_PINS];
 };

 Do you reroute after boot? Or is this just board/product setup. We
 have some pinmuxing as well and DPhy sharing, but we have never used it
 after init/boot. If not runtime, I think this could be put in DT config
 for product instead of under dynamic control from panel.

The pin config with the struct above is done later, when the panel
driver configures the DSI bus. So in OMAP we have two distinct things
that need to be configured:

- The actual pin muxing, i.e. selecting the functions for pin N on the
OMAP package. The functions for a single pin could be for example GPIO
70, DSI DX0, UART1_CTS, etc. This is normally done once during board init.

- DSI pin configuration in the display subsystem. This is used to map
the pins to DSI functions. I.e. DSI DX0 pin is mapped to DATA1+. This is
done by the DSS driver, when the panel driver gives it the parameters.

So the first muxing basically assigns the pin to DSI in general, and
then DSI will internally route the pin to an actual DSI function.

Whether the muxing needs to be changed at runtime... I'm not sure. On
OMAP the DSI pin config also tells how many lanes are used. So if a DSI
panel would first want to use only one lane, and later change to n
lanes, we'd need this kind of function.

I think it conceptually fits better if the pin config data is passed to
the panel via DT data, and the panel then gives the config to the DSI
master. It's just a part of the DSI bus parameters, like, say, clock
speed or whether to use HS or LP. That way the DT node for the panel
contains the information about the panel. (versus having pin config data
in the DSI master, in which case the DSI master's node contains data
about a specific DSI panel).

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-09 Thread Tomi Valkeinen
On 2013-02-08 15:28, Marcus Lorentzon wrote:

 When we do that we stop-setup-start during blanking. So our DSS is
 optimized to be able to do that without getting blocked. All DSI video
 mode panels (and DPI) products we have done so far have not had any
 issue with that (as long as DSI HS clock is set to continuous). I think
 this approach is less platform dependant, as long as there is no SoC
 that take more than a blanking period to reconfigure.

So do you stop, setup and start the link with the CPU, and this has to
happen during blanking? Isn't that prone to errors? Or did you mean that
the hardware handles that automatically?

In OMAP DSS there are so-called shadow registers, that can be programmed
at any time. Then you set a bit (the GO bit), which tells the hardware to
take the new settings into use at the next vblank.

From the DSI driver's perspective the link is never stopped when
reconfiguring the video timings. However, many other settings have to be
configured when the link is disabled.

 In OMAP you can configure the DSI pins quite freely. We have the
 following struct:

 struct omap_dsi_pin_config {
  int num_pins;
  /*
   * pin numbers in the following order:
   * clk+, clk-
   * data1+, data1-
   * data2+, data2-
   * ...
   */
  int pins[OMAP_DSS_MAX_DSI_PINS];
 };


 I think it still is OMAP specifics and doesn't belong in the panel
 drivers any longer. If you revisit this requirement in the CDF context
 where DSI ifc parameters should describe how to interface with a panel
 outside the SoC, there can't really be any dependencies on SoC internal
 routing. As you say, this is inside pinmux, so how can that affect the
 SoC external interface? I would suggest moving this to dispc-dsilink DT
 settings that are activated on dsilink-enable/disable. At least that is
 how I plan to solve similar STE SoC specific DSI config settings that
 are not really CDF panel generic, like some DPhy trim settings. They do
 depend on the panel and clock speed, but they are more product specific
 than panel driver specific. Then if there are these type of settings
 that every SoC have, then we could look at standardize those. But for
 starters I would try to keep it in product/board-DT per DSI link. So we
 should try to differentiate between DSI host and slave bus params and
 keep slave params in panel driver.

Ok, I think I was being a bit vague here. I explained the OMAP DSI
routing not because I meant that this API is specific to that, but
because it explains why this kind of routing info is needed, and a
bitmask is not enough.

If you look at the omap_dsi_pin_config struct, there's nothing OMAP
specific there (except the names =). All it tells is that this device
uses N DSI pins, and the device's clk+ function should be connected to
pin X on the DSI master, clk- should be connected to pin Y, etc. X and Y
are integers, and what they mean is specific to the DSI master.

When the DSI master is OMAP's DSI, the OMAP DSI driver does the pin
configuration as I explained. When the DSI master is something else,
say, a DSI bridge, it does whatever it needs to do (which could be
nothing) to assign a particular DSI function to a pin.

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-11 Thread Tomi Valkeinen
On 2013-02-08 16:54, Marcus Lorentzon wrote:
 On 02/08/2013 03:02 PM, Tomi Valkeinen wrote:
 On 2013-02-08 15:28, Marcus Lorentzon wrote:

 When we do that we stop-setup-start during blanking. So our DSS is
 optimized to be able to do that without getting blocked. All DSI video
 mode panels (and DPI) products we have done so far have not had any
 issue with that (as long as DSI HS clock is set to continuous). I think
 this approach is less platform dependant, as long as there is no SoC
 that take more than a blanking period to reconfigure.
 So do you stop, setup and start the link with the CPU, and this has to
 happen during blanking? Isn't that prone to errors? Or did you mean that
 the hardware handles that automatically?

 In OMAP DSS there are so-called shadow registers, that can be programmed
 at any time. Then you set a bit (the GO bit), which tells the hardware to
 take the new settings into use at the next vblank.

  From DSI driver's perspective the link is never stopped when
 reconfiguring the video timings. However, many other settings have to be
 configured when the link is disabled.
 
 Yeah, you lucky guys with the GO bit ;). No, we actually do CPU
 stop,setup,start. But since it is video mode, master is driving the sync
 so it is not a hard deadline. It is enough to restart before pixels
 start to degrade. On an LCD that is not so much time, but on an OLED it
 could be 10 secs :). Anyway, we have had several mass products with this
 soft solution and it has worked well.

Ah, ok. But in that case what you said in an earlier mail is not quite
correct: I think this approach is less platform dependant, as long as
there is no SoC that take more than a blanking period to reconfigure.
So in your approach the reconfiguration doesn't have to be done inside
the blanking period, but before the panel's picture starts to fade?

I don't know... It's early Monday morning, and not enough coffee yet,
but I get a bit of an uneasy feeling when I think that your method of
reconfiguring would be the only thing allowed by the CDF API.

Some SoCs do support reconfiguring on the fly, without disabling the
link. It would not be nice if we didn't allow this to happen. And
actually, we're not only talking about SoCs here, but also any display
devices, like external buffer chips etc. They may also offer means to
change configs on the fly.

Well, I don't have any hard point about this at the moment, but I think
we should think different approaches how the configuration can be done.

 I understand, but removing the omap prefix doesn't mean it has to go in
 the panel slave port/bus settings. I still can't see why this should be
 configuration on the panel driver and not the DSI master driver. Number
 of pins might be useful since you might start with one lane and then
 activate the rest. But partial muxing (pre pinmux) doesn't seem to be
 something the panel should control or know anything about. Sounds like
 normal platform/DT data per product/board.

I think one case where this kind of pin configuration is needed, and
which also dictates that all panel related configuration has to be in
the panel's data, not in the DSI master's data, is hotplug.

If you have a board that has two panels connected to the same video
output, probably via some kind of mux, at least for the sensitive pins
like DSI, only one of the panels can be enabled at a time. The panels
can have different wiring, and thus the panel driver would need to
configure everything related to the bus when it's starting up.

The same also happens if you have a true hotplug, i.e. you can remove
the panel totally and plug in a new one. Again the wiring can be
different, and needs to be set up.

And, as I said, this means that all relevant data about the video bus
has to be in the panel's data, so that each panel can have its own set
of, say, pin configuration.

Hotplug is not a use case we should aim to support for the CDF v1, but I
think we should strive not to prevent hotplug either. So if we can
design the API so that hotplug is possible, I say let's do that.

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-11 Thread Tomi Valkeinen
On 2013-02-11 11:31, Marcus Lorentzon wrote:

 Ok, so what about a compromise which I think would work for both HWs
 ... we allow the configure operation during video on, then each HW
 driver could decide if this means you have to stop or not to do the
 changes required. But then it is also important that we keep all config
 in one struct and not split it up. Otherwise HW like ours that has to be
 stopped could need to stop once for each setting/operation called.
 So in short, allow configure(bus_params) during video on, keep all
 bus_params in the struct. It is in kernel struct so it can easily be
 changed/refactored.

Right, we need some way to group the configuration parameters. Either
one big struct, or something like start_config() and end_config(). I
think we could also think what parameters make no sense to be configured
on the fly, and possibly have those separately. Although it may be a bit
difficult to think of all the (weird) use cases.

 Again, this probing and bus muxing is platform/bus specific and not
 panel specific. Any panel of that type will only ever work on Omap (or

The parameters used for configuration itself is platform specific, and
that's why it needs to be defined in the DT data. But the API itself is
not platform specific. The parameters are information about how the
panel is connected, just like, say, number of data lines is for DPI.
Which is also something I think should be configured by the panel.

 someone else implementing the same muxing features) as I see it. So why

No, it works for everyone. Well, at least I still don't see anything
omap or platform specific in the API. Of course, if the platform/device
doesn't support modifying the pin functions, then the function does nothing.

 not just put that config on the bus master, dispc? I still can't see how
 this is panel config. You are only pushing CDF API and meta data to
 describe something that is only needed by one bus master. I have never

The information about how the panel is connected (the wiring) has to be
somewhere in the DT data. We could have the info in the DSI master or in
the DSI slave. Or, in some platforms where the DSS is not involved in
the muxing/config, the info could be totally separate, in the boards
pinmuxing data.

I think all those work ok for normal cases without hotplug. But if you
have hotplug, then you need separate pinmuxing/config data for each panel.

You could possibly have a list of panels in your DSI master node,
containing the muxing data, but that sounds rather hacky, and also very
hardcoded, i.e. you need to know each panel that is ever going to be
connected.

If, on the other hand, you have the info in the panel node, it just
works. When a new panel is connected, the relevant panel DT data comes
from somewhere (it's not relevant where), and it tells the DSI master
how it's connected.

Something like this is probably needed for the BeagleBone capes, if you
have happened to follow the discussion. Although it could be argued that
the capes may perhaps not be runtime-hotswappable, and thus the
configuration can be done only once during boot after the cape has been
probed. But I'd rather not design the API so that we prevent hot swapping.

 seen any DSI slave that can change their pin config. And since there is

Well, if omap is the only SoC/device out there that supports configuring
the pin functions, and everybody is against the API, I'm not going to
press it.

But then again, I think similar configuration support may be needed even
for the normal pinmuxing, even in the case where you can't reconfigure
the DSI pin functions. You still need to mux the pins (perhaps, say,
between DSI and a GPIO), depending on how many lanes the panel uses.

In fact, speaking about all pins in general, I'm not very fond of having
a static pinmuxing in the board DT data, handled by the board setup
code. I think generally the pins should be muxed to safe-mode by the
board setup code, and then configured to their proper function by the
driver when it is initializing.

 no generic hot plug detect of DSI panels, at least not before DSI bus is
 available, I have to assume this probing it very platform specific. We
 have some products that provide 1-2 gpios to specify what panel is
 available, some use I2C sensor probing to then assume the panel plugged.

Yes, the hotplug mechanism is platform/board specific. But that's not
relevant here.

 At least in the first step I don't think this type of hot plug should be
 in the API. Then once the base panel driver is in we could discuss
 different solutions for you hot plug scenario.

Yes, as I said, I also think we shouldn't aim for hotplug in the v1. But
we also shouldn't prevent it.

 Tomi






Re: Multiple parents in device driver model and Common Display Framework (CDF)

2013-02-12 Thread Tomi Valkeinen
Hi Marcus,

On 2013-02-12 17:04, Marcus Lorentzon wrote:

 Now we have some different types of panels which are attached on a
 combination of busses:
 
 a) control:DBI, data:DBI
 b) control:I2C/SPI, data:I2C/SPI
 c) control:static setup, data:DPI
 d) control:I2C/SPI, data:DPI
 e) control:DSI, data:DSI
 f) control:I2C, data:DSI
 g) control:DSI+I2C, data:DSI
 
 As you could guess, g) is our issue. We have a device family that has
 two control bus attachments and one data bus. The kernel device/driver
 model only supports a single parent device which is normally the bus
 device.
 
 We will write drivers for all device types a-g implementing the CDF API.
 Those with only I2C/SPI bus attachemnt will of course be I2C drivers
 registered to CDF, same probably goes for DBI and DSI panels if we
 create a new bus type for them (if not - platform devices). But the CDF
 drivers still need some way to access the bus/host operations. So we
 will have to create an API for the exposed operations possible for the
 MIPI type busses (DBI/DSI/DPI), some control and some data bus related.
 For this problem we have discussed a few solutions, which is where we
 need your guidance:
 
 1) Due to the fact there is no support for multiple parents in the
 device driver model and the fact that there are no DSI/DBI busses in the
 kernel, Tomi has come up with a sort of logical parent device for
 displays (see video source section, top section is CDF API):
 http://gitorious.org/linux-omap-dss2/linux/blobs/work/dss-dev-model-cdf/include/video/display.h

When I made that, I didn't even have in mind the case g).

I made it because I think we have issues with case f) also (and, well,
in some sense we have issues with all cases. see below). If we have a
full linux bus for DSI and DBI, I don't quite understand how we should
manage f), because we have both I2C and DSI busses to which the display
device should belong.

I also had these points in my mind why I chose the video_source approach
in my version:

- The display busses are very specialized, point-to-point busses, so a
real linux bus doesn't give very much, I think.

- You never have a video bus used only for control, for example, a panel
controlled via DSI but video data sent via DPI. Yes, possible in theory,
but that would be rather insane.

- We anyway need some kind of non-bus approach for simple video data
busses like DPI. And if we have that, the slightly more complex video
busses like DSI fit quite easily in.

- We need something to represent all the data busses (see below).

 Pros: Simple, easy to implement, merging all bus types into one logical
 bus (simplicity)
 Cons: Diverging from device driver model, merging all bus types into one
 logical bus (scalability, maintainability), has to make a copy of many
 things already in device driver model (pm, enumeration, registration,
 relations, ...), solution only really needed for one special type (g)

It's not only for g). We need something similar for all cases. We need
to represent the chain of display devices with something, which is based
on the data busses. The control bus plays no role in this chain (except
when the data and control busses happen to be the same).

My video_source really represents the data bus, but offers some extended
features so that it also offers control bus operations for those video
busses that have control and data in the same bus.

If we go for a full DSI/DBI linux bus, we still need something to
represent the video bus. Then we'll have two separate entities for DSI
control (the real bus) and for DSI data (video_source or similar), which
in the end go via the same physical DSI bus.

 Tomi






Re: CDF meeting @FOSDEM report

2013-02-14 Thread Tomi Valkeinen
On 2013-02-13 00:45, Stéphane Marchesin wrote:

 So, a part which is completely omitted in this thread is how to handle
 suspend/resume ordering. If you have multiple encoders which need to
 be turned on/off in a given order at suspend/resume, how do you handle
 that given the current scheme where they are just separate platform
 drivers in drivers/video?
 
 This problems occurs with drm/exynos in current 3.8 kernels for
 example. On that platform, the DP driver and the FIMD driver will
 suspend/resume in random order, and therefore fail resuming half the
 time. Is there something which could be done in CDF to address that?

I don't think we have a perfect solution for this, but I think we can
handle this by using PM notifiers, PM_SUSPEND_PREPARE and PM_POST_SUSPEND.

The code that manages the whole chain should register to those
notifiers, and disable or enable the display devices accordingly. This
way the devices are enabled and disabled in the right order, and also
(hopefully) so that the control busses are operational.

 Tomi






Re: [pull] tilcdc-next for 3.9

2013-02-18 Thread Tomi Valkeinen
On 2013-02-18 01:02, Rob Clark wrote:
 Hi Dave,
 
 Here is pull request for tilcdc drm driver..  it also includes a
 handful of dependent patches from me, plus a couple fixes from Daniel
 Vetter which haven't showed up yet in drm-next.

Why is the TFP410 driver integrated into the tilcdc, but the TDA998x is
a generic driver?

I think the DT bindings do not match the DT guidelines. For example,
compatible = "tilcdc,slave" should be "ti,tilcdc,slave". And TI-specific
properties should also be prefixed with "ti,".

Who is going to maintain this now that you're no longer in TI? I presume
you may do some small stuff, but I think this driver needs quite a lot
of further development in the future.

But my main concern for this series is still that it creates custom
panel stuff, and adds DT bindings for them. Which means we need to
support those custom DT bindings in the future, even though it's quite
sure that CDF should be used also for this driver, changing the bindings.

 Tomi



Re: [pull] tilcdc-next for 3.9

2013-02-18 Thread Tomi Valkeinen
On 2013-02-18 14:26, Rob Clark wrote:

 So I'm not going to get too hung up on supporting current DT bindings
 in the future when we have something better.  But I needed something

I may be mistaken, but my understanding is that DT bindings are like
kernel's userspace APIs. After they have been merged, they have to work
in the future kernels also.

 Tomi



Re: [pull] tilcdc-next for 3.9

2013-02-18 Thread Tomi Valkeinen
On 2013-02-18 14:35, Rob Clark wrote:
 On Mon, Feb 18, 2013 at 7:32 AM, Tomi Valkeinen to...@iki.fi wrote:
 On 2013-02-18 14:26, Rob Clark wrote:

 So I'm not going to get too hung up on supporting current DT bindings
 in the future when we have something better.  But I needed something

 I may be mistaken, but my understanding is that DT bindings are like
 the kernel's userspace APIs: once they have been merged, they have to
 keep working in future kernels as well.
 
 and that probably *eventually* makes sense when dts files are kicked
 out of the kernel.  For now, it doesn't really seem useful, unlike
 maintaining compatibility for userspace ABI's.

Why does that matter? AFAIK, the dts files are in the kernel only for
convenience and as examples; you could just as well have the DT data in
your bootloader.

Or has there been a clear decision that, while the dts files are in the
kernel, they are considered unstable?

 Tomi



Re: [PATCH v17 2/7] video: add display_timing and videomode

2013-02-18 Thread Tomi Valkeinen
Hi Steffen,

On 2013-01-25 11:01, Steffen Trumtrar wrote:

 +/* VESA display monitor timing parameters */
 +#define VESA_DMT_HSYNC_LOW   BIT(0)
 +#define VESA_DMT_HSYNC_HIGH  BIT(1)
 +#define VESA_DMT_VSYNC_LOW   BIT(2)
 +#define VESA_DMT_VSYNC_HIGH  BIT(3)
 +
 +/* display specific flags */
 +#define DISPLAY_FLAGS_DE_LOW		BIT(0)	/* data enable flag */
 +#define DISPLAY_FLAGS_DE_HIGH		BIT(1)
 +#define DISPLAY_FLAGS_PIXDATA_POSEDGE	BIT(2)	/* drive data on pos. edge */
 +#define DISPLAY_FLAGS_PIXDATA_NEGEDGE	BIT(3)	/* drive data on neg. edge */
 +#define DISPLAY_FLAGS_INTERLACED	BIT(4)
 +#define DISPLAY_FLAGS_DOUBLESCAN	BIT(5)

snip

 + unsigned int dmt_flags; /* VESA DMT flags */
 + unsigned int data_flags; /* video data flags */

Why did you go for this approach? To be able to represent
true/false/not-specified?

Would it be simpler to just have a single flags field? What do we gain
by keeping the two separate?

Also, should the above say rising edge/falling edge instead of positive
edge/negative edge?

 Tomi



Re: [PATCH] omapdrm: Make fixed resolution panels work

2013-02-18 Thread Tomi Valkeinen
On 2013-02-18 09:23, Archit Taneja wrote:

 diff --git a/drivers/staging/omapdrm/omap_connector.c
 b/drivers/staging/omapdrm/omap_connector.c
 index 4cc9ee7..839c6e4 100644
 --- a/drivers/staging/omapdrm/omap_connector.c
 +++ b/drivers/staging/omapdrm/omap_connector.c
 @@ -110,6 +110,11 @@ static enum drm_connector_status
 omap_connector_detect(
  ret = connector_status_connected;
  else
  ret = connector_status_disconnected;
 +	} else if (dssdev->type == OMAP_DISPLAY_TYPE_DPI ||
 +			dssdev->type == OMAP_DISPLAY_TYPE_DBI ||
 +			dssdev->type == OMAP_DISPLAY_TYPE_SDI ||
 +			dssdev->type == OMAP_DISPLAY_TYPE_DSI) {
 +		ret = connector_status_connected;

 hmm, why not just have a default detect fxn for panel drivers which
 always returns true?  Seems a bit cleaner.. otherwise, I guess we
 could just change omap_connector so that if dssdev->detect is null,
 assume always connected.  Seems maybe a slightly better way since
 currently omap_connector doesn't really know too much about different
 sorts of panels.  But I'm not too picky if that is a big pain because
 we probably end up changing all this as CDF starts coming into
 existence.

 Same goes for check_timings/set_timings..  it seems a bit cleaner just
 to have default fxns in the panel drivers that do-the-right-thing,
 although I suppose it might be a bit more convenient for out-of-tree
 panel drivers to just assume fixed panel if these fxns are null.

Yeah, I can't make up my mind about the null pointer approach. It's nice
to be able to signal with a null pointer that the panel doesn't support
the given operation, or that it doesn't make sense for it, but handling
the null value makes the calling code a bit more complex.

For example, our VENC driver doesn't know whether there's a TV connected
or not, so neither true nor false makes sense as the detect return value
there. Or, for a DSI command mode panel, set_timings doesn't really make
much sense.

 Maybe having default functions in omapdss might be a better idea. There
 is an option of adding default functions in omap_dss_register_driver()
 (drivers/video/omap2/dss/core.c).
 
 Tomi, do you think it's fine to add some more default panel driver funcs?

We can add default funcs. However, adding them in
omap_dss_register_driver is not so nice, as it makes the omap_dss_driver
(which is essentially an ops struct) non-constable. Instead, we should
move the setting of the default funcs to the drivers.

If I'm not mistaken, we could manage that with a helper macro. C allows
multiple designated initializers for the same struct field; the compiler
accepts it and only the last assignment takes effect. So:

static struct foo zab = {
.bar = 123, /* ignored */
.bar = 234, /* used */
};

And so we could have:

static struct omap_dss_driver my_panel_driver = {
OMAPDSS_DEFAULT_PANEL_OPS, /* macro for default ops */

.enable = my_panel_enable,
/* ... */
};

 Tomi





