[PATCH v2] of: Add videomode helper

2012-08-02 Thread Stephen Warren
On 07/05/2012 08:51 AM, Rob Herring wrote:
> On 07/04/2012 02:56 AM, Sascha Hauer wrote:
>> This patch adds a helper function for parsing videomodes from the devicetree.
>> The videomode can be either converted to a struct drm_display_mode or a
>> struct fb_videomode.

>> diff --git a/Documentation/devicetree/bindings/video/displaymode 
>> b/Documentation/devicetree/bindings/video/displaymode

>> +Example:
>> +
>> + display@0 {
>> + /* 1920x1080p24 */
>> + clock = <5200>;
> 
> Should this use the clock binding? You probably need both constraints
> and clock binding though.

I don't think the clock binding is appropriate here. This binding
specifies a desired video mode, and should specify just the expected
clock frequency for that mode. Perhaps any tolerance imposed by the
specification used to calculate the mode timing could be specified too,
but the allowed tolerance (for a mode to be recognized by the display)
is more driven by the receiving device than the transmitting device.

The clock bindings should come into play in the display controller that
sends a signal in that display mode. Something like:

mode: hd1080p {
	clock = <5200>;
	xres = <1920>;
	yres = <1080>;

};

display-controller {
	pixel-clock-source = <&clk ...>; // common clock bindings here
	default-mode = <&mode>;
};

> Often you don't know the frequency up front and/or have limited control
> of the frequency (i.e. integer dividers). Then you have to adjust the
> margins to get the desired refresh rate. To do that, you need to know
> the ranges of values a panel can support. Perhaps you just assume you
> can increase the right-margin and lower-margins as I think you will hit
> pixel clock frequency max before any limit on margins.

I think this is more usually dealt with in HW design and matching
components.

The PLL/... feeding the display controller is going to have some known
specifications that imply which pixel clocks it can generate. HW
designers will pick a panel that accepts a clock the display controller
can generate. The driver for the display controller will just generate
as close a pixel clock as it can to the rate specified in the mode
definition. I believe it'd be unusual for a display driver to start
fiddling with front-back porch (or margin) widths to munge the timing;
that kind of thing probably worked OK with analog displays, but with
digital displays where each pixel is clocked even in the margins, I
think that would cause more problems than it solved.

Similarly for external displays, the display controller will just pick
the nearest clock it can. If it can't generate a close enough clock, it
will just refuse to set the mode. In fact, a display controller driver
would typically validate this when the set of legal modes was enumerated
when initially detecting the display, so user-space typically wouldn't
even request invalid modes.


[PATCH v2] of: Add videomode helper

2012-08-02 Thread Stephen Warren
On 07/04/2012 01:56 AM, Sascha Hauer wrote:
> This patch adds a helper function for parsing videomodes from the devicetree.
> The videomode can be either converted to a struct drm_display_mode or a
> struct fb_videomode.

> diff --git a/Documentation/devicetree/bindings/video/displaymode 
> b/Documentation/devicetree/bindings/video/displaymode

> +Required properties:
> + - xres, yres: Display resolution
> + - left-margin, right-margin, hsync-len: Horizontal Display timing parameters
> +   in pixels
> +   upper-margin, lower-margin, vsync-len: Vertical display timing parameters
> +   in lines

Perhaps bike-shedding, but...

For the margin property names, wouldn't it be better to use terminology
more commonly used for video timings rather than Linux FB naming. In
other words naming like:

hactive, hsync-len, hfront-porch, hback-porch?
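Concretely, the example mode from this thread re-spelled with those names might look like the following (a sketch only; the property names are the ones suggested above, with values carried over from the patch's example, where right/left margins map to front/back porches):

```dts
mode@0 {
	/* 1920x1080p24 */
	clock = <5200>;
	hactive = <1920>;
	vactive = <1080>;
	hfront-porch = <25>;	/* was right-margin */
	hback-porch = <25>;	/* was left-margin */
	hsync-len = <25>;
	vfront-porch = <2>;	/* was lower-margin */
	vback-porch = <2>;	/* was upper-margin */
	vsync-len = <2>;
};
```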

> + - clock: displayclock in Hz
> +
> +Optional properties:
> + - width-mm, height-mm: Display dimensions in mm
> + - hsync-active-high (bool): Hsync pulse is active high
> + - vsync-active-high (bool): Vsync pulse is active high
> + - interlaced (bool): This is an interlaced mode
> + - doublescan (bool): This is a doublescan mode
> +
> +There are different ways of describing a display mode. The devicetree
> +representation corresponds to the one used by the Linux Framebuffer
> +framework described here in Documentation/fb/framebuffer.txt. This
> +representation has been chosen because it's the only format which does
> +not allow for inconsistent parameters. Unlike the Framebuffer framework
> +the devicetree has the clock in Hz instead of ps.

As Rob mentioned, I think there shouldn't be any mention of Linux FB here.

> +
> +Example:
> +
> + display@0 {

This node appears to describe a video mode, not a display, hence the
node name seems wrong.

Many displays can support multiple different video modes. As mentioned
elsewhere, properties like display width/height are properties of the
display not the mode.

So, I might expect something more like the following (various overhead
properties like reg/#address-cells etc. elided for simplicity):

disp: display {
	width-mm = <...>;
	height-mm = <...>;

	modes {
		mode@0 {
			/* 1920x1080p24 */
			clock = <5200>;
			xres = <1920>;
			yres = <1080>;
			left-margin = <25>;
			right-margin = <25>;
			hsync-len = <25>;
			lower-margin = <2>;
			upper-margin = <2>;
			vsync-len = <2>;
			hsync-active-high;
		};
		mode@1 {
		};
	};
};

display-connector {
	display = <&disp>;
	// interface-specific properties such as pixel format here
};

Finally, have you considered just using an EDID instead of creating
something custom? I know that creating an EDID is harder than writing a
few simple properties into a DT, but EDIDs have the following advantages:

a) They're already standardized and very common.

b) They allow other information such as a display's HDMI audio
capabilities to be represented, which can then feed into an ALSA driver.

c) The few LCD panel datasheets I've seen actually quote their
specification as an EDID already, so deriving the EDID may actually be easy.

d) People familiar with displays are almost certainly familiar with
EDID's mode representation. There are many ways of representing display
modes (sync position vs. porch widths; htotal specified directly vs.
calculated from its components; etc.).
Not everyone will be familiar with all representations. Conversion
errors are less likely if the target is EDID's familiar format.

e) You'll end up with exactly the same data as if you have a DDC-based
external monitor rather than an internal panel, so you end up getting to
a common path in display handling code much more quickly.



Re: Tegra DRM device tree bindings

2012-07-06 Thread Stephen Warren
On 07/05/2012 06:15 AM, Thierry Reding wrote:
> Here's a new proposal that should address all issues collected
> during the discussion of the previous one:
> 
> From tegra20.dtsi:
...

At a quick glance, that all seems pretty reasonable now.

> One problem I've come across when trying to get some rudimentary
> code working with this is that there's no longer a device which the
> DRM driver can bind to, because the top-level device (host1x) now
> has a separate driver.

Can't you just have the host1x driver trigger the instantiation of the
DRM driver? In other words, the host1x node is both the plain host1x
driver and the DRM driver. So, after initializing anything for host1x
itself, just end host1x's probe() with a call to
drm_platform_init(), passing in host1x's own pdev.

After all, it seems like the whole point of representing host1x in the
DT is to expose the graphics HW, so you'd need the DRM device in that
case.
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 11:19 AM, Lucas Stach wrote:
...
> CMA is just a way of providing large contiguous address space blocks in
> a dynamic fashion. ...
> 
> TTM though solves more advanced matters, like buffer synchronisation
> between 3D and 2D block of hardware ...
> 
> IMHO the best solution would be to use CMA as a flexible replacement of
> the static carveout area and put TTM on top of this ...

Ah right, thanks for the explanation. That makes sense to me now.


Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 05:12 AM, Thierry Reding wrote:
> On Wed, Jun 27, 2012 at 05:59:55PM +0200, Lucas Stach wrote:
>> Am Mittwoch, den 27.06.2012, 16:44 +0200 schrieb Thierry Reding:
...
>>> In the ideal case I would want to not have a carveout size at
>>> all. However there may be situations where you need to make
>>> sure some driver can allocate a given amount of memory. Having
>>> to specify this using a kernel command-line parameter is
>>> cumbersome because it may require changes to the bootloader or
>>> whatever. So if you know that a particular board always needs
>>> 128 MiB of carveout, then it makes sense to specify it on a
>>> per-board basis.
>> 
>> If we go with CMA, this is a non-issue, as CMA allows to use the
>> contig area for normal allocations and only purges them if it
>> really needs the space for contig allocs.
> 
> CMA certainly sounds like the most simple approach. While it may
> not be suited for 3D graphics or multimedia processing later on, I
> think we could use it at a starting point to get basic framebuffer
> and X support up and running. We can always move to something more
> advanced like TTM later.

I thought the whole purpose of CMA was to act as the infra-structure
to provide buffers to 3D, camera, etc. in particular allowing sharing
of buffers between them. In other words, isn't CMA the memory manager?
If there's some deficiency with CMA for 3D graphics, it seems like
that should be raised with those designing CMA. Or, am I way off base
with my expectations of CMA?


Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 12:18 AM, Hiroshi Doyu wrote:
> On Wed, 27 Jun 2012 16:44:14 +0200
> Thierry Reding  wrote:
> 
>>> I think that "coherent_pool" can be used only when the amount of
>>> contiguous memory is short in your system. Otherwise even unnecessary.
>>>
>>> Could you explain a bit more why you want carveout size on per-board basis?
>>
>> In the ideal case I would want to not have a carveout size at all.
>> However there may be situations where you need to make sure some driver
>> can allocate a given amount of memory. Having to specify this using a
>> kernel command-line parameter is cumbersome because it may require
>> changes to the bootloader or whatever. So if you know that a particular
>> board always needs 128 MiB of carveout, then it makes sense to specify
>> it on a per-board basis.
> 
> Hm...I could understand somewhat;) but DT can also specify "bootargs"
> in dts file, which can support per-board-wide spec too, like the above
> sum of carveout needed from all drivers. I just want to avoid
> introducing a new parameter additionaly if we can make use of the
> existing mechanism.

The bootargs in the DT file is usually provided by (over-written by) the
bootloader. If we start requiring lots of random kernel command-line
arguments, that makes it more effort for the user of the bootloader
(e.g. distribution bootloader scripts, etc.) to create the kernel
command-line. I'd prefer to avoid that as much as possible. That said,
using a standardized command-line option that is (or will be) used by
all (ARM?) SoCs for the same purpose is reasonable, because there's
commonality there.


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/27/2012 06:44 AM, Hiroshi Doyu wrote:
...
> I think that there are 2 cases:
> 
>   (1) discontiguous memory with IOMMU
>   (2) contiguous memory without IOMMU(called "carveout" in general?)
...
> For (2), although memory is mostly anonymous one, we may need to know
> how much to allocate, where we only need "size". This size is not from
> h/w feature, but it depends on the system load/usage. So I think that
> this size can be passed from kernel command line? For exmaple, we can
> specify how much contiguous memory is necessary with putting
> "coherent_pool=??M" in the kernel command line as below:
> 
>   coherent_pool=nn[KMG]   [ARM,KNL]
>   Sets the size of memory pool for coherent, atomic dma
>   allocations.

I guess if that's the standard way of initializing CMA, then that's fine.

It'd be nice if there was a way to specify that from the DT too; that
way the user/distro/bootloader constructing the kernel command-line
wouldn't have to remember to add "random" (potentially
Tegra-/board-specific) extra arguments onto the command-line; the Tegra
command-line in the upstream kernel is quite clean right now, especially
compared to the enormous number of options we require downstream :-(


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/27/2012 08:29 AM, Hiroshi Doyu wrote:
> Could you explain a bit more why you want carveout size on per-board basis?

Different boards have different amounts of memory, and are sometimes
targeted at different use-cases (e.g. server with simple display buffer,
vs. consumer-oriented device intended to play games with OpenGL
allocating lots of textures).


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/26/2012 11:07 PM, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 04:48:14PM -0600, Stephen Warren wrote:
...
> I actually like what you proposed above a lot, so if you don't
> mind either way I'll go with that proposal. Keeping the connector
> nodes as children of the outputs has the advantage of being able to
> reference them if we need it at some point. But it is also
> redundant in that a single output doesn't usually (never?) driver
> more than one connector.

Yes, I believe that each output is 1:1 with (the video portion of) a
connector. The display controllers obviously aren't 1:1.

> The same issue will have to be addressed for the CSI and VI nodes,
> but as I currently use neither of those I don't feel qualified to
> propose a binding for them. Also for the VI part we're completely
> missing documentation. Maybe somebody could push this to be
> released as well?

I did file a bug noting the request for VI documentation. At this
point in time, it's too early to say what, if anything, will come of that.

> If I understand correctly, most of the host1x children can also be 
> chained in a processing pipeline to do postprocessing an video
> input for example. I suppose that's generic and doesn't need to be
> represented in DT either, right?

Yes, I believe that's something internal to the driver.


Re: Tegra DRM device tree bindings

2012-06-27 Thread Stephen Warren
On 06/26/2012 08:32 PM, Mark Zhang wrote:
>> On 06/26/2012 07:46 PM, Mark Zhang wrote:
> On Tue, 26 Jun 2012 12:55:13 +0200
> Thierry Reding  wrote:
>> ...
 I'm not sure I understand how information about the carveout would be
 obtained from the IOMMU API, though.
>>>
>>> I think that can be similar with current gart implementation. Define 
>>> carveout as:
>>>
>>> carveout {
>>> compatible = "nvidia,tegra20-carveout";
>>> size = <0x1000>;
>>> };
>>>
>>> Then create a file such like "tegra-carveout.c" to get these definitions and
>> register itself as platform device's iommu instance.
>>
>> The carveout isn't a HW object, so it doesn't seem appropriate to define a DT
>> node to represent it.
> 
> Yes. But I think it's better to export the size of carveout as a configurable 
> item.
> So we need to define this somewhere. How about define carveout as a property 
> of gart?

There already exists a way of preventing Linux from using certain chunks
of memory; the /memreserve/ syntax. From a brief look at the dtc source,
it looks like /memreserve/ entries can have labels, which implies that a
property in the GART node could refer to the /memreserve/ entry by
phandle in order to know what memory regions to use.
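A sketch of what that could look like (the property name is hypothetical, and whether dtc can really resolve a labelled /memreserve/ entry into a property reference would need verifying; the region is simply restated here):

```dts
/memreserve/ 0x10000000 0x01000000;	/* 16 MiB kept out of Linux's allocator */

gart {
	compatible = "nvidia,tegra20-gart";
	/* hypothetical: tell the GART driver which reserved region to manage */
	carveout-region = <0x10000000 0x01000000>;
};
```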


Re: Tegra DRM device tree bindings

2012-06-27 Thread Stephen Warren
On 06/26/2012 07:46 PM, Mark Zhang wrote:
>>> On Tue, 26 Jun 2012 12:55:13 +0200
>>> Thierry Reding  wrote:
...
>> I'm not sure I understand how information about the carveout would be
>> obtained from the IOMMU API, though.
> 
> I think that can be similar with current gart implementation. Define carveout 
> as: 
> 
> carveout {
> compatible = "nvidia,tegra20-carveout";
> size = <0x1000>;
> };
> 
> Then create a file such like "tegra-carveout.c" to get these definitions and 
> register itself as platform device's iommu instance.

The carveout isn't a HW object, so it doesn't seem appropriate to define
a DT node to represent it.


Re: Tegra DRM device tree bindings

2012-06-27 Thread Stephen Warren
On 06/26/2012 01:51 PM, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 12:10:42PM -0600, Stephen Warren wrote:
>> On 06/26/2012 04:55 AM, Thierry Reding wrote:
>>> Hi,
>>> 
>>> while I haven't got much time to work on the actual code right
>>> now, I think it might still be useful if we could get the
>>> device tree binding to a point where everybody is happy with
>>> it. That'll also save me some time once I get to writing the
>>> code because I won't have to redo it over again. =)
>>> 
>>> So here's the current proposal:
>>> 
>>> host1x {
>>> 	compatible = "nvidia,tegra20-host1x", "simple-bus";
>>> 	reg = <0x5000 0x00024000>;
>>> 	interrupts = <0 64 0x04   /* cop syncpt */
>>> 	              0 65 0x04   /* mpcore syncpt */
>>> 	              0 66 0x04   /* cop general */
>>> 	              0 67 0x04>; /* mpcore general */
>>> 
>>> 	#address-cells = <1>;
>>> 	#size-cells = <1>;
>>> 
>>> 	ranges = <0x5400 0x5400 0x0400>;
>>> 
>>> 	status = "disabled";
>> 
>> The idea behind status="disabled" is that some HW only makes
>> sense to use on particular boards. This concept really only
>> applies to HW modules that drive external interfaces on the SoC,
>> which in turn the board can choose whether to connect anything to
>> (or whether to even connect to any external pins using the
>> pinmux, or not).
>> 
>> As such, I don't think it makes sense to set status="disabled"
>> on host1x, nor many other internal-only engines such as say mpe,
>> epp, i2sp, gr2d, gr3d, dc1, dc2.
> 
> What about power management and resource usage? If a board for
> instance doesn't need gr3d at all it could just leave it at status
> = "disabled" to not have the corresponding driver loaded and not
> waste the power and resources.

The driver should be turning off all the clocks and power if the
devices aren't actually used (or not turning it on in the first
place). I guess there will be some slight overhead for the
device/driver instantiation.

However, in all likelihood, all/most boards will enable this feature
once it's in place. For very resource-constrained scenarios without
display, presumably one would be building a custom kernel without the
display driver enabled, so the DT content for display wouldn't ever
come into play.

>>> Board DTS files could then extend this with board-specific
>>> requirements and connectors. The following describes the Medcom
>>> Wide:
>> 
>>> connectors {
>>> 	#address-cells = <1>;
>>> 	#size-cells = <0>;
>>> };
>> 
>> The connector seems to be a property of the individual output
>> resources. I'd expect to see the connector configuration be a
>> child of the outputs that a particular board had enabled;
>> something more like:
>> 
>> host1x {
>> 	rgb {
>> 		status = "okay";
>> 
>> 		connector@0 {
>> 			nvidia,edid = /incbin/("tegra-medcom.edid");
>> 		};
>> 	};
>> 	hdmi {
>> 		status = "okay";
>> 
>> 		connector@0 {
>> 			nvidia,ddc-i2c-bus = <&tegra_i2c1>;
>> 		};
>> 	};
>> };
>> 
>> Perhaps even completely omit the connector node, and put the
>> properties directly within the rgb/hdmi node itself. After all
>> the HDMI output really is the connector as far as Tegra goes.
> 
> Heh. I seem to remember you objecting to this in a previous
> series[0] which is actually the reason that I moved them to the
> top-level in the first place. =)
> 
> Thierry
> 
> [0]: http://www.spinics.net/lists/linux-tegra/msg05298.html

I don't think I was objecting to exactly what I wrote above; in that
email, there were already separate connector nodes, but they were
placed directly inside the host1x node (at that time called graphics),
so still rather disconnected from the HW they represent.

The argument for sharing e.g. an HDMI port between both video and
audio still exists though. That said, I think now I'd still be
inclined to put all the connector information for video into the
hdmi/rgb/tvo nodes. If DT does grow to represent the user-connectors
at the top-level perhaps in conjunction with drivers/extcon, perhaps
the hdmi node can point at the extcon node with a phandle, or
vice-versa, to set up the link between the components of the
user-visible port.

Still, the decision here is possibly a little arbitrary; many schemes
would work. I think at this point I don't care /too/ strongly about
which is used, so the separate-connectors-at-top-level concept in your
email is probably OK. I wonder if the hdmi node doesn't need a phandle
pointing at the connector node though, so they can both "find" each other?
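For instance (property names hypothetical, just to show the cross-linking idea):

```dts
hdmi: hdmi {
	status = "okay";
	/* hypothetical: link the output to its user-visible port */
	nvidia,connector = <&hdmi_connector>;
};

connectors {
	hdmi_connector: connector@0 {
		/* back-reference so the connector can find its output too */
		output = <&hdmi>;
	};
};
```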


Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 08:32 PM, Mark Zhang wrote:
>> On 06/26/2012 07:46 PM, Mark Zhang wrote:
> On Tue, 26 Jun 2012 12:55:13 +0200
> Thierry Reding  wrote:
>> ...
 I'm not sure I understand how information about the carveout would be
 obtained from the IOMMU API, though.
>>>
>>> I think that can be similar with current gart implementation. Define 
>>> carveout as:
>>>
>>> carveout {
>>> compatible = "nvidia,tegra20-carveout";
>>> size = <0x1000>;
>>> };
>>>
>>> Then create a file such like "tegra-carveout.c" to get these definitions and
>> register itself as platform device's iommu instance.
>>
>> The carveout isn't a HW object, so it doesn't seem appropriate to define a DT
>> node to represent it.
> 
> Yes. But I think it's better to export the size of carveout as a configurable 
> item.
> So we need to define this somewhere. How about define carveout as a property 
> of gart?

There already exists a way of preventing Linux from using certain chunks
of memory; the /memreserve/ syntax. From a brief look at the dtc source,
it looks like /memreserve/ entries can have labels, which implies that a
property in the GART node could refer to the /memreserve/ entry by
phandle in order to know what memory regions to use.


Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 07:46 PM, Mark Zhang wrote:
>>> On Tue, 26 Jun 2012 12:55:13 +0200
>>> Thierry Reding  wrote:
...
>> I'm not sure I understand how information about the carveout would be
>> obtained from the IOMMU API, though.
> 
> I think that can be similar with current gart implementation. Define carveout 
> as: 
> 
> carveout {
> compatible = "nvidia,tegra20-carveout";
> size = <0x1000>;
> };
> 
> Then create a file such like "tegra-carveout.c" to get these definitions and 
> register itself as platform device's iommu instance.

The carveout isn't a HW object, so it doesn't seem appropriate to define
a DT node to represent it.


Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 01:51 PM, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 12:10:42PM -0600, Stephen Warren wrote:
>> On 06/26/2012 04:55 AM, Thierry Reding wrote:
>>> Hi,
>>> 
>>> while I haven't got much time to work on the actual code right
>>> now, I think it might still be useful if we could get the
>>> device tree binding to a point where everybody is happy with
>>> it. That'll also save me some time once I get to writing the
>>> code because I won't have to redo it over again. =)
>>> 
>>> So here's the current proposal:
>>> 
>>> host1x {
>>> 	compatible = "nvidia,tegra20-host1x", "simple-bus";
>>> 	reg = <0x5000 0x00024000>;
>>> 	interrupts = <0 64 0x04   /* cop syncpt */
>>> 		      0 65 0x04   /* mpcore syncpt */
>>> 		      0 66 0x04   /* cop general */
>>> 		      0 67 0x04>; /* mpcore general */
>>> 
>>> 	#address-cells = <1>;
>>> 	#size-cells = <1>;
>>> 
>>> 	ranges = <0x5400 0x5400 0x0400>;
>>> 
>>> 	status = "disabled";
>> 
>> The idea behind status="disabled" is that some HW only makes
>> sense to use on particular boards. This concept really only
>> applies to HW modules that drive external interfaces on the SoC,
>> which in turn the board can choose whether to connect anything to
>> (or whether to even connect to any external pins using the
>> pinmux, or not).
>> 
>> As such, I don't think it makes sense to set status="disabled"
>> on host1x, nor many other internal-only engines such as say mpe,
>> epp, i2sp, gr2d, gr3d, dc1, dc2.
> 
> What about power management and resource usage? If a board for
> instance doesn't need gr3d at all it could just leave it at status
> = "disabled" to not have the corresponding driver loaded and not
> waste the power and resources.

The driver should be turning off all the clocks and power if the
devices aren't actually used (or not turning it on in the first
place). I guess there will be some slight overhead for the
device/driver instantiation.

However, in all likelihood, all/most boards will enable this feature
once it's in place. For very resource-constrained scenarios without
display, presumably one would be building a custom kernel without the
display driver enabled, so the DT content for display wouldn't ever
come into play.

>>> Board DTS files could then extend this with board-specific
>>> requirements and connectors. The following describes the Medcom
>>> Wide:
>> 
>>> connectors {
>>> 	#address-cells = <1>;
>>> 	#size-cells = <0>;
>>> 
>>> };
>> 
>> The connector seems to be a property of the individual output
>> resources. I'd expect to see the connector configuration be a
>> child of the outputs that a particular board had enabled;
>> something more like:
>> 
>> host1x {
>> 	rgb {
>> 		status = "okay";
>> 
>> 		connector@0 {
>> 			nvidia,edid = /incbin/("tegra-medcom.edid");
>> 		};
>> 	};
>> 	hdmi {
>> 		status = "okay";
>> 
>> 		connector@0 {
>> 			nvidia,ddc-i2c-bus = <&tegra_i2c1>;
>> 		};
>> 	};
>> };
>> 
>> Perhaps even completely omit the connector node, and put the
>> properties directly within the rgb/hdmi node itself. After all
>> the HDMI output really is the connector as far as Tegra goes.
> 
> Heh. I seem to remember you objecting to this in a previous
> series[0] which is actually the reason that I moved them to the
> top-level in the first place. =)
> 
> Thierry
> 
> [0]: http://www.spinics.net/lists/linux-tegra/msg05298.html

I don't think I was objecting to exactly what I wrote above; in that
email, there were already separate connector nodes, but they were
placed directly inside the host1x node (at that time called graphics),
so still rather disconnected from the HW they represent.

The argument for sharing e.g. an HDMI port between both video and
audio still exists though. That said, I think now I'd still be
inclined to put all the connector information for video into the
hdmi/rgb/tvo nodes. If DT does grow to represent the user-connectors
at the top-level perhaps in conjunction with drivers/extcon, perhaps
the hdmi node can point at the extcon node with a phandle, or
vice-versa, to set up the link between the components of the
user-visible port.

Still, the decision here is possibly a little arbitrary; many schemes
would work. I think at this point I don't care /too/ strongly about
which is used, so the separate-connectors-at-top-level concept in your
email is probably OK. I wonder if the hdmi node doesn't need a phandle
pointing at the connector node though, so they can both "find" each other?
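
A rough sketch of that phandle link, purely for illustration (the node and property names below are invented, not an agreed binding):

```dts
connectors {
	hdmi_con: connector@0 {
		nvidia,ddc-i2c-bus = <&tegra_i2c1>;
	};
};

host1x {
	hdmi {
		status = "okay";
		/* hypothetical back-reference so output and port find each other */
		nvidia,connector = <&hdmi_con>;
	};
};
```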


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 01:31 PM, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 11:43:38AM -0600, Stephen Warren wrote:
>> On 06/26/2012 07:41 AM, Thierry Reding wrote:
>>> On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström
>>> wrote:
>>>> On 26.06.2012 13:55, Thierry Reding wrote:
>> ...
>>>>> status = "disabled";
>>>>> 
>>>>> gart = <&gart>;
>>>>> 
>>>>> /* video-encoding/decoding */
>>>>> mpe {
>>>>> 	reg = <0x5404 0x0004>;
>>>>> 	interrupts = <0 68 0x04>;
>>>>> 	status = "disabled";
>>>>> };
>>>> 
>>>> 
>>>> The client device interrupts are not very interesting, so
>>>> they could be left out, too. Display controller related are
>>>> probably an exception to this.
>>> 
>>> If the interrupts aren't used at all we should drop them.
>> 
>> I disagree here; "used" is most likely something specific to a 
>> particular OS's drivers. The HW always has the interrupts, and
>> hence they should be described in DT.
> 
> Okay, I see. Does the same apply to the COP interrupts of the
> host1x node in your opinion? I don't know if it makes sense to
> describe something that's not reachable from the CPU. Yet it is
> defined in the GIC.

This probably applies to the interrupts too. The TRM does indicate
that the GIC has 4 interrupt IDs allocated to host1x. I recall Terje
saying that two of them weren't usable by the CPU though. Those two
points seem inconsistent. Terje, can you please explain further?
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 01:27 PM, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 11:41:43AM -0600, Stephen Warren wrote:
>> On 06/26/2012 08:02 AM, Terje Bergström wrote:
>>> On 26.06.2012 16:41, Thierry Reding wrote:
>>> 
>>>> On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström
>>>> wrote:
>>>>> We also assign certain host1x common resources per device
>>>>> by convention, f.ex. sync points, channels etc. We
>>>>> currently encode that information in the device node (3D
>>>>> uses sync point number X, 2D uses numbers Y and Z). The
>>>>> information is not actually describing hardware, as it
>>>>> just describes the convention, so I'm not sure if device
>>>>> tree is the proper place for it.
>>>> Are they configurable? If so I think we should provide for
>>>> them being specified in the device tree. They are still
>>>> hardware resources being assigned to devices.
>>> 
>>> Yes, they're configurable, and there's nothing hardware
>>> specific in the assignment of a sync point to a particular use.
>>> It's all just a software agreement. That's why I'm a bit
>>> hesitant on putting it in device trees, which are supposed to
>>> only describe hardware.
>> 
>> So I think that the DT can describe the existence of sync-points 
>> (presumably include a node for the sync-point HW device if it's 
>> separate). However, since the usage of each sync-point is
>> entirely arbitrary, that seems like something which should be
>> either assigned dynamically at run-time, or at least
>> managed/assigned in SW at runtime somehow, rather than hard-coded
>> into DT; it's more policy than HW.
> 
> The sync-points are part of the host1x device as I understand it.
> If their usage is truly generic, then we can probably ignore them
> safely. Maybe it'd make sense to carry a property that defines the
> number of sync points available for the host1x hardware represented
> by the DT?

I would assume this can safely be inferred from the compatible value;
nvidia,tegra20-host1x vs. nvidia,tegra30-host1x, and so there's no
need to represent this in DT. I would assume (and it's certainly just
an assumption) that there are numerous other small details that are
different between the two SoCs, and so the driver will need to have
some table mapping from compatible value to various information
anyway. Terje, can you confirm/deny this (and hopefully use your
knowledge of any future chips to guide the decision without giving
anything away)?


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 04:55 AM, Thierry Reding wrote:
> Hi,
> 
> while I haven't got much time to work on the actual code right now, I
> think it might still be useful if we could get the device tree binding
> to a point where everybody is happy with it. That'll also save me some
> time once I get to writing the code because I won't have to redo it over
> again. =)
> 
> So here's the current proposal:
> 
>   host1x {
>   compatible = "nvidia,tegra20-host1x", "simple-bus";
>   reg = <0x5000 0x00024000>;
>   interrupts = <0 64 0x04   /* cop syncpt */
> 0 65 0x04   /* mpcore syncpt */
> 0 66 0x04   /* cop general */
> 0 67 0x04>; /* mpcore general */
> 
>   #address-cells = <1>;
>   #size-cells = <1>;
> 
>   ranges = <0x5400 0x5400 0x0400>;
> 
>   status = "disabled";

The idea behind status="disabled" is that some HW only makes sense to
use on particular boards. This concept really only applies to HW modules
that drive external interfaces on the SoC, which in turn the board can
choose whether to connect anything to (or whether to even connect to any
external pins using the pinmux, or not).

As such, I don't think it makes sense to set status="disabled" on
host1x, nor many other internal-only engines such as say mpe, epp, i2sp,
gr2d, gr3d, dc1, dc2.

However it does make sense for the output resources rgb, hdmi, tvo, dsi.
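
Put another way, the split would look roughly like this (a sketch only, assuming the usual SoC .dtsi / board .dts layering and an hdmi label on the node):

```dts
/* SoC .dtsi: internal engines enabled, external outputs disabled */
host1x {
	gr3d {
		/* no status property: enabled by default */
	};

	hdmi: hdmi {
		status = "disabled";
	};
};

/* Board .dts: enable only the outputs this board actually wires up */
&hdmi {
	status = "okay";
};
```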

>   /* outputs */
>   rgb {
>   compatible = "nvidia,tegra20-rgb";
>   status = "disabled";
>   };
...
> The rgb node is something that I don't quite know how to handle yet.
> Since it is really part of the display controller and uses its register
> space, it isn't quite correct to represent it as a separate device. But
> we will need a separate node to make it available as a connector, which
> will become more obvious below.

Are you referring to the DC_COM_PIN_OUTPUT* registers? Sorry, I'm not at
all familiar with our display HW yet.

Some possible solutions spring to mind:

a) The output nodes don't have to be direct children of host1x. Instead,
each DC could have an rgb child node that represents its own individual
output capability.
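
For instance, option (a) might look something like this (a made-up sketch, reusing the abbreviated addresses from this thread):

```dts
host1x {
	dc1: dc@5420 {
		compatible = "nvidia,tegra20-dc";
		reg = <0x5420 0x0004>;

		/* the DC's own output, instead of a top-level rgb node */
		rgb {
			compatible = "nvidia,tegra20-rgb";
			status = "disabled";
		};
	};
};
```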

b) If the RGB-related registers in DC are completely independent of any
other DC registers and grouped together well enough, we can just carve a
chunk out of the DC register space and give that to the RGB node instead:

i.e. not:

>   dc1: dc@5420 {
>   compatible = "nvidia,tegra20-dc";
>   reg = <0x5420 0x0004>;
>   interrupts = <0 73 0x04>;
>   status = "disabled";
>   };


but something more like (the completely made up example):

dc1: dc@5420 {
compatible = "nvidia,tegra20-dc";
reg = <0x5420 0x0002 0x54203000 0x1>;
interrupts = <0 73 0x04>;
status = "disabled";
};

rgb {
compatible = "nvidia,tegra20-rgb";
reg = <0x5422 0x0001>;
status = "disabled";
};

c) The rgb node could simply reference the dc nodes using a phandle, and
call into the dc driver to obtain RGB configuration services from it:

rgb {
compatible = "nvidia,tegra20-rgb";
status = "disabled";
nvidia,dcs = <&dc1 &dc2>;
};

By the way, if the RGB registers are in the DC, aren't there two
separate RGB outputs? Certainly the TRM implies that both DCs can be
driving LCDs, by reducing the width of the LCD signals that each DC uses
(lower bits-per-pixel, or perhaps DDR encoding on the data lines).

> Board DTS files could then extend this with board-specific requirements
> and connectors. The following describes the Medcom Wide:

>   connectors {
>   #address-cells = <1>;
>   #size-cells = <0>;
> 
>   };

The connector seems to be a property of the individual output resources.
I'd expect to see the connector configuration be a child of the outputs
that a particular board had enabled; something more like:

host1x {
rgb {
status = "okay";

connector@0 {
nvidia,edid = /incbin/("tegra-medcom.edid");
};
};
hdmi {
status = "okay";

connector@0 {
nvidia,ddc-i2c-bus = <&tegra_i2c1>;
};
};
};

Perhaps even completely omit the connector node, and put the properties
directly within the rgb/hdmi node itself. After all the HDMI output
really is the connector as far as Tegra goes.


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 07:01 AM, Terje Bergström wrote:
> On 26.06.2012 13:55, Thierry Reding wrote:
...
>> An alternative would be to call of_platform_populate() from the host1x

[alternative to making the host1x node contain compatible="simple-bus".]

>> driver. This has the advantage that it could integrate better with the
>> host1x bus implementation that Terje is working on, but it also needs
>> additional code to tear down the devices when the host1x driver is
>> unloaded because a module reload would try to create duplicate devices
>> otherwise.
> 
> Yes, we already have a bus_type for nvhost, and we have nvhost_device
> and nvhost_driver that derive from device and device_driver
> respectively. They all accommodate some host1x client device common
> behavior and data that we need to store. We use the bus_type also to
> match each device and driver together, but the matching is version
> sensitive. For example, Tegra2 3D needs different driver than Tegra3 3D.

I'd certainly like to see some upstream discussion re: why exactly we
have a custom bus type here. What does it do that a regular platform bus
doesn't do? Are those features something that would be generally useful
to add to the existing platform bus? Can we instead add hooks into
platform bus rather than creating a new bus? I recall you saying that
the nvhost_bus duplicated a lot of code from the existing platform bus.


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 07:41 AM, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström wrote:
>> On 26.06.2012 13:55, Thierry Reding wrote:
...
>>> status = "disabled";
>>> 
>>> gart = <&gart>;
>>> 
>>> /* video-encoding/decoding */
>>> mpe {
>>> 	reg = <0x5404 0x0004>;
>>> 	interrupts = <0 68 0x04>;
>>> 	status = "disabled";
>>> };
>> 
>> 
>> The client device interrupts are not very interesting, so they
>> could be left out, too. Display controller related are probably
>> an exception to this.
> 
> If the interrupts aren't used at all we should drop them.

I disagree here; "used" is most likely something specific to a
particular OS's drivers. The HW always has the interrupts, and hence
they should be described in DT.


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 08:02 AM, Terje Bergström wrote:
> On 26.06.2012 16:41, Thierry Reding wrote:
> 
>> On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström wrote:
>>> We also assign certain host1x common resources per device by convention,
>>> f.ex. sync points, channels etc. We currently encode that information in
>>> the device node (3D uses sync point number X, 2D uses numbers Y and Z).
>>> The information is not actually describing hardware, as it just
>>> describes the convention, so I'm not sure if device tree is the proper
>>> place for it.
>> Are they configurable? If so I think we should provide for them being
>> specified in the device tree. They are still hardware resources being
>> assigned to devices.
> 
> Yes, they're configurable, and there's nothing hardware specific in the
> assignment of a sync point to a particular use. It's all just a software
> agreement. That's why I'm a bit hesitant on putting it in device trees,
> which are supposed to only describe hardware.

So I think that the DT can describe the existence of sync-points
(presumably include a node for the sync-point HW device if it's
separate). However, since the usage of each sync-point is entirely
arbitrary, that seems like something which should be either assigned
dynamically at run-time, or at least managed/assigned in SW at runtime
somehow, rather than hard-coded into DT; it's more policy than HW.

>>> Either way is fine for me. The full addresses are more familiar to me as
>>> we tend to use them internally.
>
>> Using the OF mechanism for translating the host1x bus addresses,
>> relative to the host1x base address, to CPU addresses seems "purer", but
>> either way should work fine.
> 
> I'll let you decide, as I don't have a strong opinion either way. I
> guess whatever is the more common way wins.

I'd certainly prefer all the nodes to use the full/absolute address.
That way, the DT will exactly match the addresses in the documentation.
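
For illustration, the two styles side by side (using the same abbreviated addresses as elsewhere in this thread; the ranges values are made up):

```dts
/* Style 1: absolute child addresses, matching the documentation directly */
host1x {
	ranges;   /* identity mapping */

	mpe@5404 {
		reg = <0x5404 0x0004>;
	};
};

/* Style 2: child addresses relative to host1x, translated via ranges */
host1x {
	ranges = <0x0 0x5404 0x0008>;   /* illustrative only */

	mpe@0 {
		reg = <0x0 0x0004>;
	};
};
```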


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-07 Thread Stephen Warren
On 05/07/2012 02:50 AM, Terje Bergström wrote:
> On 25.04.2012 12:45, Thierry Reding wrote:
> 
>> +/ {
>> +   ...
>> +
>> +   /* host1x */
>> +   host1x: host1x@5000 {
>> +   compatible = "nvidia,tegra20-host1x";
>> +   reg = <0x5000 0x00024000>;
>> +   interrupts = <0 64 0x04   /* cop syncpt */
>> + 0 65 0x04   /* mpcore syncpt */
>> + 0 66 0x04   /* cop general */
>> + 0 67 0x04>; /* mpcore general */
>> +   };
>> +
>> +   /* video-encoding/decoding */
>> +   mpe@5404 {
>> +   reg = <0x5404 0x0004>;
>> +   interrupts = <0 68 0x04>;
>> +   };
>> +
> 
> (...)
> 
> Hi Thierry,
> 
> I have still lots of questions regarding how device trees work. I'm now
> just trying to match the device tree structure with hardware - let me
> know if that goes wrong.
> 
> There's a hierarchy in the hardware, which should be represented in the
> device trees. All of the hardware are client modules for host1x - with
> the exception of host1x obviously. CPU has two methods for accessing the
> hardware: clients' register aperture and host1x channels. Both of these
> operate via host1x hardware.
> 
> We should define host1x bus in the device tree, and move all nodes
> except host1x under that bus.

I think the host1x node /is/ that bus.
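
i.e., in the binding proposed earlier in the thread, the client modules already sit under host1x, which acts as the bus (sketch only):

```dts
host1x {
	compatible = "nvidia,tegra20-host1x", "simple-bus";
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	/* client modules are children of the host1x bus */
	mpe@5404 {
		reg = <0x5404 0x0004>;
		interrupts = <0 68 0x04>;
	};
};
```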


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-03 Thread Stephen Warren
On 04/25/2012 03:45 AM, Thierry Reding wrote:
> This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
> currently has rudimentary GEM support and can run a console on the
> framebuffer as well as X using the xf86-video-modesetting driver. Only
> the RGB output is supported.
> 
> HDMI support was taken from NVIDIA's Linux kernel tree but it doesn't
> quite work. EDID data can be retrieved but the output doesn't properly
> activate the connected TV.
> 
> The DSI and TVO outputs and the HOST1X driver are just stubs that setup
> the corresponding resources but don't do anything useful yet.

I'm mainly going to comment on the device tree bindings here; hopefully
Jon and Terje can chime in on the code itself.

> diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
> b/Documentation/devicetree/bindings/gpu/drm/tegra.txt

> +Example:
> +
> +/memreserve/ 0x0e000000 0x02000000;
> +
> +...
> +
> +/ {
> + ...
> +
> + /* host1x */
> + host1x: host1x@50000000 {
> + compatible = "nvidia,tegra20-host1x";
> + reg = <0x50000000 0x00024000>;
> + interrupts = <0 64 0x04   /* cop syncpt */
> +   0 65 0x04   /* mpcore syncpt */
> +   0 66 0x04   /* cop general */
> +   0 67 0x04>; /* mpcore general */
> + };

The host1x module is apparently a register bus, behind which all the
other modules sit. According to the address map in the TRM, "all the
other modules" appears to include all of MPE, VI, CSI, EPP, ISP, GR2D,
GR3D, DISPLAY A/B, HDMI, TVO, DSI, plus VG, VS, VCI, DSIB on Tegra30.

That said, I believe Terje wasn't convinced that all those modules are
really host1x children, just some. Can you check please, Terje?

Also, I wonder if host1x is really the parent of these modules,
register-bus-access-wise, or whether the bus covers the "graphic host
registers" at 0x50000000-0x50023fff?

As such, I think the DT nodes for all those modules should be within the
host1x node (or perhaps graphics host node, pending above discussion):

host1x: host1x@50000000 {
mpe@54040000 { ... };
vi@54080000 { ... };
...
};

host1x can easily instantiate all the child nodes simply by calling
of_platform_populate() at the end of its probe; see
sound/soc/tegra/tegra30_ahub.c for an example.

> + /* video-encoding/decoding */
> + mpe@54040000 {
> + reg = <0x54040000 0x00040000>;
> + interrupts = <0 68 0x04>;
> + };

We'll probably end up needing a phandle from each of these nodes to
host1x, and a channel ID, so the drivers for these nodes can register
themselves with host1x. However, it's probably OK to defer the DT
binding for this until we actually start working on command-channels.

> + /* graphics host */
> + graphics@54000000 {
> + compatible = "nvidia,tegra20-graphics";
> +
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;

I don't think those 3 properties are needed, unless there are child
nodes with registers.

> + display-controllers = <&disp1 &disp2>;
> + carveout = <0x0e000000 0x02000000>;
> + host1x = <&host1x>;
> + gart = <&gart>;
> +
> + connectors {

I'm not sure that it makes sense to put the connectors nodes underneath
the graphics host node; the connectors aren't objects with registers
that are accessed through a bus managed by the graphics node. Equally,
the connectors could be useful outside of the graphics driver - e.g. the
audio driver might need to refer to an HDMI connector.

Instead, I'd probably put the connector nodes at the top level of the
device tree, and have graphics contain a property that lists the
phandles of all available connectors.
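
A sketch of that layout (node and property names here are illustrative, not a settled binding):

```dts
/ {
	/* Connectors described at the top level of the tree. */
	lvds_connector: lvds-connector {
		output = <&lvds>;
		edid = /incbin/("machine.edid");
	};

	hdmi_connector: hdmi-connector {
		output = <&hdmi>;
		ddc = <&i2c2>;
		hpd-gpio = <&gpio 111 0>; /* PN7 */
	};

	graphics {
		/* The graphics node merely lists which connectors exist. */
		connectors = <&lvds_connector &hdmi_connector>;
	};
};
```

This keeps the connectors reachable by other bindings (e.g. an HDMI audio driver) without routing through the graphics node.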

> + #address-cells = <1>;
> + #size-cells = <0>;
> +
> + connector@0 {
> + reg = <0>;
> + edid = /incbin/("machine.edid");
> + output = <&lvds>;
> + };

I think each of these needs some kind of compatible value to indicate
their type. Also, the ability to represent both HDMI video and audio
streams so that a sound card binding could use the HDMI connector too. I
see that drivers/extcon/ has appeared recently, and I wonder if the
connector nodes in DT shouldn't be handled by that subsystem?

> + connector@1 {
> + reg = <1>;
> + output = <&hdmi>;
> + ddc = <&i2c2>;
> +
> + hpd-gpio = <&gpio 111 0>; /* PN7 */
> + };
> + };
> + };
> +};

> diff --git a/arch/arm/mach-tegra/board-dt-tegra20.c 
> b/arch/arm/mach-tegra/board-dt-tegra20.c

> + { "host1x", "pll_c", 144000000, true },
> + { "disp1",  "pl

Re: [RFC v2 3/5] i2c: Add of_i2c_get_adapter() function

2012-04-25 Thread Stephen Warren
On 04/25/2012 03:45 AM, Thierry Reding wrote:
> This function resolves an OF device node to an I2C adapter registered
> with the I2C core.

I think this is doing the same thing as a patch I posted recently:
http://www.spinics.net/lists/linux-i2c/msg07808.html

What's the advantage of one way over the other?



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-16 Thread Stephen Warren
On 04/16/2012 01:03 PM, Thierry Reding wrote:
...
> I've been looking about for tools to generate EDID data but didn't find
> anything useful. Does anyone know of any tool that's more convenient than
> manually filling a struct edid and writing that to a file?

Sorry, no.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-16 Thread Stephen Warren
On 04/16/2012 12:48 PM, Thierry Reding wrote:
> * Stephen Warren wrote:
...
>>> Has there been any discussion as to how EDID data would best be represented
>>> in DT? Should it just be a binary blob or rather some textual 
>>> representation?
>>
>> I think a binary blob makes sense - that's the exact same format it'd
>> have if read over the DDC I2C bus.
> 
> DTC has /incbin/ for that. Is arch/arm/boot/dts still the correct place for
> EDID blobs? I could add tegra-medcom.edid if that's okay.

As far as I know, yes.

Perhaps we'll want to start putting stuff in SoC-specific
sub-directories given the number of files we'll end up with here
(irrespective of EDID etc.), but I haven't seen any move towards that yet.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-16 Thread Stephen Warren
On 04/15/2012 02:39 AM, Thierry Reding wrote:
> * Stephen Warren wrote:
>> On 04/13/2012 03:14 AM, Thierry Reding wrote:
>>> display-controllers = <&disp1 &disp2>;
>>> outputs = <&lvds &hdmi &tvo &dsi>;
>>
>> I don't think you need both the child nodes and those two properties.
>>
>> In other words, I think you either want:
>>
>>  graphics@54000000 {
>>  ... a bunch of child nodes
>>  };
>>
>> or you want:
>>
>>  disp1 : dc@54200000 {
>>  ...
>>  };
>>  disp2 : dc@54240000 {
>>  ...
>>  };
>>  ... all the other graphics nodes
>>
>>  graphics@54000000 {
>>  display-controllers = <&disp1 &disp2>;
>>  outputs = <&lvds &hdmi &tvo &dsi>;
>>  };
>>
>> In the former case, presumably the drivers for the child nodes would
>> make some API call into the parent node and just register themselves
>> directly as a certain type of driver, so avoiding the
>> display-controllers/outputs properties.
> 
> I think I like the former better. The way I understand it the children of the
> graphics node will have to be registered explicitly by the DRM driver because
> of_platform_populate() doesn't work recursively. That would ensure that the
> DRM driver can setup the CRTC and output registries before the other devices
> call back into the DRM to register themselves.

Yes, with the first way, the DRM driver will have to call
of_platform_populate() recursively to make this work.

The thing here is that the device tree should model hardware, not be
designed purely to match the device registration needs of the DRM
driver. I'm not sure that it's correct to model all those devices as
children of the top-level graphics object; I /think/ all the devices are
flat on a single bus, and hence not children of each-other. That all
said, I guess having the nodes as children isn't too far off how the HW
is designed (even if the register accesses aren't on a child bus, the
modules at least logically are grouped together in an umbrella
situation), so I wouldn't push back on the first option above that you
prefer.

>>> /* initial configuration */
>>> configuration {
>>> lvds {
>>> display-controller = <&disp1>;
>>> output = <&lvds>;
>>> };
>>>
>>> hdmi {
>>> display-controller = <&disp2>;
>>> output = <&hdmi>;
>>> };
>>> };
>>> };
>>>
>>> I added an additional node for the initial configuration so that the driver
>>> knows which mapping to setup at boot.
>>
>> Isn't that kind of thing usually set up by the video= KMS-related kernel
>> command-line option? See Documentation/fb/modedb.txt. Again here, I
>> think the actual display controllers would be allocated to whichever
>> outputs get used on a first-come first-served basis, so no need for the
>> display-controller property above either way.
> 
> Boards should still be able to boot and display a console on the "standard"
> output even if the user doesn't provide a video= option. Shouldn't there be a
> way for a board DTS to specify what the default (or even allowed) connections
> are?

Why wouldn't the default be to light up all outputs that have an
attached display, or an algorithm something like:

* If internal LCD is present, use that
* Else, if HDMI display plugged in, use that
...

> Evaluation hardware like the Harmony might have LVDS, HDMI and VGA connectors
> to provide for a wide range of use cases. The Plutux for instance has only an
> HDMI connector and the Medcom has only LVDS. For the Medcom it would be quite
> confusing for people to suddenly see an HDMI-1 connector pop up f.e. in
> xrandr. It would be equally useless for the Plutux to show up as supporting
> an LVDS or VGA connector.

So the device tree for those devices would disable (or not include) the
connectors that were not present on the board.
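
For instance (hypothetical labels; `status` is the standard DT mechanism for disabling a node), a board DTS could override the SoC-level .dtsi like this:

```dts
/* Plutux: only the HDMI output is wired up on this board. */
&hdmi {
	status = "okay";
};

&lvds {
	status = "disabled";	/* no LVDS panel fitted */
};
```

With that, the driver would never register an LVDS connector on the Plutux, so nothing spurious shows up in xrandr.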

...
> I see. Maybe this could be used for board-specific configuration? For
> example, the Plutux could have something like this:
> 
>   connectors {
>   hdmi {
>   output = <&hdmi>;
>   ddc = <&i2c2>;
>   };
>   };
> 
>

Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-14 Thread Stephen Warren
On 04/13/2012 03:14 AM, Thierry Reding wrote:
> * Stephen Warren wrote:
>> On 04/12/2012 11:44 AM, Thierry Reding wrote:
> [...]
>> And given that, I don't think we should name the node after some
>> OS-specific software concept. Device tree is intended to model hardware.
> [...]
>>> Maybe one solution would be to have a top-level DRM device with a register
>>> map from 0x54000000 to 0x547fffff, which the TRM designates as "host
>>> registers". Then subnodes could be used for the subdevices.
>>
>> Ah yes, just what I was thinking above :-)
> 
> I came up with the following:
> 
>   /* host1x */
>   host1x : host1x@50000000 {
>   reg = <0x50000000 0x00024000>;
>   interrupts = <0 64 0x04   /* cop syncpt */
> 0 65 0x04   /* mpcore syncpt */
> 0 66 0x04   /* cop general */
> 0 67 0x04>; /* mpcore general */
>   };
> 
>   /* graphics host */
>   graphics@54000000 {
>   compatible = "nvidia,tegra20-graphics";
> 
>   #address-cells = <1>;
>   #size-cells = <1>;
>   ranges = <0 0x54000000 0x08000000>;
> 
>   host1x = <&host1x>;
> 
>   /* video-encoding/decoding */
>   mpe@54040000 {
>   reg = <0x54040000 0x00040000>;
>   interrupts = <0 68 0x04>;
>   };
... [a bunch of nodes for graphics-related HW modules]

That all looks reasonable.

>   display-controllers = <&disp1 &disp2>;
>   outputs = <&lvds &hdmi &tvo &dsi>;

I don't think you need both the child nodes and those two properties.

In other words, I think you either want:

graphics@54000000 {
... a bunch of child nodes
};

or you want:

disp1 : dc@54200000 {
...
};
disp2 : dc@54240000 {
...
};
... all the other graphics nodes

graphics@54000000 {
display-controllers = <&disp1 &disp2>;
outputs = <&lvds &hdmi &tvo &dsi>;
};

In the former case, presumably the drivers for the child nodes would
make some API call into the parent node and just register themselves
directly as a certain type of driver, so avoiding the
display-controllers/outputs properties.

>   /* initial configuration */
>   configuration {
>   lvds {
>   display-controller = <&disp1>;
>   output = <&lvds>;
>   };
> 
>   hdmi {
>   display-controller = <&disp2>;
>   output = <&hdmi>;
>   };
>   };
>   };
> 
> I added an additional node for the initial configuration so that the driver
> knows which mapping to setup at boot.

Isn't that kind of thing usually set up by the video= KMS-related kernel
command-line option? See Documentation/fb/modedb.txt. Again here, I
think the actual display controllers would be allocated to whichever
outputs get used on a first-come first-served basis, so no need for the
display-controller property above either way.

> What I don't quite see yet is where to
> attach EDID data or pass the phandle to the I2C controller for DDC/EDID
> probing.

I think there's a third node type.

1) Nodes for the display controllers in Tegra (Also known as CRTCs I
believe)

2) Nodes for the video outputs from the Tegra chips (what you have as
outputs above)

3) Connectors, which aren't represented in your DT above yet.

I think you need to add additional nodes for (3). These are where you'd
represent all the EDID/I2C/... stuff. Then, the nodes for the outputs
will point at the nodes for the connectors using phandles, or vice-versa.
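
A rough sketch of the three node types and their phandle links (node names and properties are illustrative only):

```dts
/* (1) display controllers (CRTCs) */
disp1: dc@54200000 {
	...
};

/* (2) output blocks on the SoC */
hdmi: hdmi@54280000 {
	...
};

/* (3) board-level connectors, carrying the EDID/DDC details */
hdmi-connector {
	output = <&hdmi>;		/* which output drives this connector */
	ddc = <&i2c2>;			/* I2C bus for DDC/EDID probing */
	hpd-gpio = <&gpio 111 0>;	/* hotplug detect */
};
```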

(IIRC, although I didn't look at them in detail, the sdrm patches
recently mentioned something about splitting up the various object types
in DRM and allowing them to be represented separately, and I vaguely
recall something along the lines of the types I mention above. I might
expect to have a 1:1 mapping between the DRM object types and DT nodes.)

> The initial configuration is certainly not the right place. Perhaps
> the outputs property should be made a node instead:
> 
>   outputs {
>   lvds_out {
>   output = <&lvds>;
>   edid = <&edid>;
>   };
> 
>   hdmi_out {
>   output = <&hdmi>;
>   ddc = <&i2c2>;
>   };
>   };
> 
> But then "outputs" should probably become something like "connectors"
> instead and the initial configuration refers to the "_out" phandles.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Stephen Warren
On 04/12/2012 11:44 AM, Thierry Reding wrote:
> * Stephen Warren wrote:
>> On 04/12/2012 12:50 AM, Thierry Reding wrote:
>>> drm {
>>> compatible = "nvidia,tegra20-drm";
>>
>> I don't think having an explicit "drm" node is the right approach; drm
>> is after all a SW term and the DT should be describing HW. Having some
>> kind of top-level node almost certainly makes sense, but naming it
>> something related to "tegra display" rather than "drm" would be appropriate.
> 
> In this case there really isn't a HW device that can be represented. But in
> the end it's still the DRM driver that needs to bind to the device. However
> the other graphics devices (MPE, VI/CSI, EPP, GR2D and GR3D) probably need
> to be bound against.

Well, everything graphics-related appears to be grouped within some
container. In the memory map, there's the range 0x54000000-0x547fffff, and
in the Tegra20 TRM, the first diagram in section 29 "Display Controller"
again lumps all the modules together into one box.

I'd say it's perfectly legitimate to create a device tree node to
represent this aggregate display/graphics-related engine. So, I think
that yes there is a real HW device to represent.

And given that, I don't think we should name the node after some
OS-specific software concept. Device tree is intended to model hardware.

> Would it be possible for someone at NVIDIA to provide some more details about
> what those other devices are? GR2D and GR3D seem obvious, MPE might be video
> decoding, VI/CSI video input and camera interface? As to EPP I have no idea.

MPE is something video encode/decode related. VI/CSI are indeed camera
related. I'm afraid I don't personally know any more details than that,
and even if I did, I'm only allowed to discuss what's in the TRM :-(

I don't think those modules should have much impact on the DRM driver
design if that's what you're worried about. I /believe/ they all just
interact directly with memory rather than the other components,
irrespective of what that first diagram in section 29 might imply.

> Maybe one solution would be to have a top-level DRM device with a register
> map from 0x54000000 to 0x547fffff, which the TRM designates as "host
> registers". Then subnodes could be used for the subdevices.

Ah yes, just what I was thinking above :-)

I don't think you'd /have/ to make the nodes sub-nodes; you could use
phandles to refer from the top-level node to the other components. One
might end up having to use phandles anyway to represent the routing from
DCA or DCB to MIPI or HDMI anyway, so it's possible that using phandles
everywhere might be simpler and more consistent than parent/child
relationships for some things and phandles for other things.

>>> lvds {
>>> compatible = "...";
>>> dc = <&disp1>;
>>> };
>>
>> Aren't the outputs separate HW blocks too, such that they would have
>> their own compatible/reg properties and their own drivers, and be
>> outside the top-level drm/display node?
> 
> The RGB output is programmed via the display controller registers. For HDMI,
> TVO and DSI there are indeed separate sets of registers in addition to the
> display controller's. So perhaps for those more nodes would be required:
> 
>   hdmi : hdmi@5428 {
>   compatible = "nvidia,tegra20-hdmi";
>   reg = <0x5428 0x0004>;
>   };

Yes, looks reasonable.

> And hook that up with the HDMI output node of the "DRM" node:
> 
>   drm {
>   hdmi {
>   compatible = "...";
>   connector = <&hdmi>;
>   dc = <&disp2>;
>   };
>   };
>
> Maybe with this setup we no longer need the "compatible" property since it
> will already be inherent in the "connector" property. There will have to be
> special handling for the RGB output, which could be the default if the
> "connector" property is missing.

I suspect you'd have something more like:

tegra-graphics {
output-resources = <&hdmi &tvo &dsi ... >;
display-controllers = <&disp1 &disp2>;
};

i.e. just a list of all extant devices. Each should provide some common
API so you could just map from phandle to of_node to device object, and
call the appropriate APIs on it.

Finally note that although I didn't really represent it correct above,
there are at least 3 levels of object/hierarchy here:

Display controllers reads from RAM and form a

[RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Stephen Warren
On 04/12/2012 11:44 AM, Thierry Reding wrote:
> * Stephen Warren wrote:
>> On 04/12/2012 12:50 AM, Thierry Reding wrote:
>>> drm {
>>> compatible = "nvidia,tegra20-drm";
>>
>> I'm don't think having an explicit "drm" node is the right approach; drm
>> is after all a SW term and the DT should be describing HW. Having some
>> kind of top-level node almost certainly makes sense, but naming it
>> something related to "tegra display" than "drm" would be appropriate.
> 
> In this case there really isn't a HW device that can be represented. But in
> the end it's still the DRM driver that needs to bind to the device. However
> the other graphics devices (MPE, VI/CSI, EPP, GR2D and GR3D) probably need
> to be bound against.

Well, everything graphics-related appears to be grouped within some
container. In the memory map, there's the range 0x5400-547f, and
in the Tegra20 TRM, the first diagram in section 29 "Display Controller"
again lumps all the modules together into one box.

I'd say it's perfectly legitimate to create a device tree node to
represent this aggregate display/graphics-related engine. So, I think
that yes there is a real HW device to represent.

And given that, I don't think we should name the node after some
OS-specific software concept. Device tree is intended to model hardware.

> Would it be possible for someone at NVIDIA to provide some more details about
> what those other devices are? GR2D and GR3D seem obvious, MPE might be video
> decoding, VI/CSI video input and camera interface? As to EPP I have no idea.

MPE is something video encode/decode related. VI/CSI are indeed camera
related. I'm afraid I don't personally know any more details than that,
and even if I did, I'm only allowed to discuss what's in the TRM :-(

I don't think those modules should have much impact on the DRM driver
design if that's what you're worried about. I /believe/ they all just
interact directly with memory rather than the other components,
irrespective of what that first diagram in section 29 might imply.

> Maybe one solution would be to have a top-level DRM device with a register
> map from 0x5400 to 0x547f, which the TRM designates as "host
> registers". Then subnodes could be used for the subdevices.

Ah yes, just what I was thinking above :-)

I don't think you'd /have/ to make the nodes sub-nodes; you could use
phandles to refer from the top-level node to the other components. One
might end up having to use phandles anyway to represent the routing from
DCA or DCB to MIPI or HDMI, so it's possible that using phandles
everywhere might be simpler and more consistent than parent/child
relationships for some things and phandles for other things.
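
For the DCA/DCB-to-output routing, a phandle-based description might look
something like this (the property name and label names are purely
illustrative here, not a proposal):

```dts
hdmi: hdmi@5428 {
	compatible = "nvidia,tegra20-hdmi";
	/* either display controller can be routed to this output */
	nvidia,possible-sources = <&disp1 &disp2>;
};
```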

>>> lvds {
>>> compatible = "...";
>>> dc = <&disp1>;
>>> };
>>
>> Aren't the outputs separate HW blocks too, such that they would have
>> their own compatible/reg properties and their own drivers, and be
>> outside the top-level drm/display node?
> 
> The RGB output is programmed via the display controller registers. For HDMI,
> TVO and DSI there are indeed separate sets of registers in addition to the
> display controller's. So perhaps for those more nodes would be required:
> 
>   hdmi : hdmi@5428 {
>   compatible = "nvidia,tegra20-hdmi";
>   reg = <0x5428 0x0004>;
>   };

Yes, looks reasonable.

> And hook that up with the HDMI output node of the "DRM" node:
> 
>   drm {
>   hdmi {
>   compatible = "...";
>   connector = <&hdmi>;
>   dc = <&disp2>;
>   };
>   };
>
> Maybe with this setup we no longer need the "compatible" property since it
> will already be inherent in the "connector" property. There will have to be
> special handling for the RGB output, which could be the default if the
> "connector" property is missing.

I suspect you'd have something more like:

tegra-graphics {
output-resources = <&hdmi &tvo &dsi ... >;
display-controllers = <&disp1 &disp2>;
};

i.e. just a list of all extant devices. Each should provide some common
API so you could just map from phandle to of_node to device object, and
call the appropriate APIs on it.
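
The phandle-to-device step could be an untested sketch along these lines;
of_parse_phandle() and of_find_device_by_node() are real kernel APIs, but
the property name and function name below are made up:

```c
static struct platform_device *
tegra_graphics_get_output(struct device_node *np, int index)
{
	struct device_node *out;
	struct platform_device *pdev;

	/* follow the index'th phandle in the output-resources property */
	out = of_parse_phandle(np, "output-resources", index);
	if (!out)
		return NULL;

	pdev = of_find_device_by_node(out);
	of_node_put(out);
	return pdev;
}
```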

Finally note that although I didn't really represent it correctly above,
there are at least 3 levels of object/hierarchy here:

Display controllers read from RAM and form a

[RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Stephen Warren
On 04/12/2012 12:50 AM, Thierry Reding wrote:
> * Stephen Warren wrote:
>> On 04/11/2012 06:10 AM, Thierry Reding wrote:
>>> This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
>>> currently has rudimentary GEM support and can run a console on the
>>> framebuffer as well as X using the xf86-video-modesetting driver.
>>> Only the RGB output is supported. Quite a lot of things still need
>>> to be worked out and there is a lot of room for cleanup.
...
>>> diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
>>> b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
...
>> This doesn't seem right, and couples back to my assertion above that the
>> two display controller modules probably deserve separate device objects,
>> named e.g. tegradc.*.
> 
> I think I understand where you're going with this. Does the following look
> more correct?
> 
>   disp1 : dc@5420 {
>   compatible = "nvidia,tegra20-dc";
>   reg = <0x5420, 0x0004>;
>   interrupts = <0 73 0x04>;
>   };
> 
>   disp2 : dc@5424 {
>   compatible = "nvidia,tegra20-dc";
>   reg = <0x5424, 0x0004>;
>   interrupts = <0 74 0x04>;
>   };

Those look good.

>   drm {
>   compatible = "nvidia,tegra20-drm";

I don't think having an explicit "drm" node is the right approach; drm
is after all a SW term and the DT should be describing HW. Having some
kind of top-level node almost certainly makes sense, but naming it
something related to "tegra display" rather than "drm" would be appropriate.

>   lvds {
>   compatible = "...";
>   dc = <&disp1>;
>   };

Aren't the outputs separate HW blocks too, such that they would have
their own compatible/reg properties and their own drivers, and be
outside the top-level drm/display node?

I believe the mapping between the output this node represents and the
display controller ("dc" above) that it uses is not static; the
connectivity should be set up at runtime, and possibly dynamically
alterable via xrandr or equivalent.

>   hdmi {
>   compatible = "...";
>   dc = <&disp2>;
>   };
>   };

>>> +static int tegra_drm_parse_dt(struct platform_device *pdev)
>>> +{
>> ...
>>> +   pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
>>> +   if (!pdata)
>>> +   return -ENOMEM;
>> ...
>>> +   dev->platform_data = pdata;
>>
>> I don't think you should assign to dev->platform_data. If you do, then I
>> think the following could happen:
>>
>> * During first probe, the assignment above happens
>> * Module is removed, hence device removed, hence dev->platform_data
>> freed, but not zero'd out
>> * Module is re-inserted, finds that dev->platform_data!=NULL and
>> proceeds to use it.
> 
> Actually the code does zero out platform_data in tegra_drm_remove(). In fact
> I did test module unloading and reloading and it works properly. But it
> should probably be zeroed in case drm_platform_init() fails as well.
>
>> Instead, the active platform data should probably be stored in a
>> tegra_drm struct that's stored in the dev's private data.
>> tegra_drm_probe() might then look more like:
>>
>> struct tegra_drm *tdev;
>>
>> tdev = devm_kzalloc();
>> tdev->pdata = pdev->dev.platform_data;
>> if (!tdev->pdata)
>> tdev->pdata = tegra_drm_parse_dt();
>> if (!tdev->pdata)
>> return -EINVAL;
>>
>> dev_set_drvdata(dev, tdev);
>>
>> This is safe, since probe() will never assume that dev_get_drvdata()
>> might contain something valid before probe() sets it.
> 
> I prefer my approach over storing the data in an extra field because the
> device platform_data field is where everybody would expect it. Furthermore
> this wouldn't be relevant if we decided not to support non-DT setups.

Drivers are expected to use pre-existing platform data, if provided.
This might happen in order to work around bugs in device tree content.
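
Spelled out, the drvdata-based probe sketched above might read as follows
(untested; the tegra_drm struct and tegra_drm_parse_dt() are just the
names from the quoted code, and everything else is standard driver-core
API):

```c
static int tegra_drm_probe(struct platform_device *pdev)
{
	struct tegra_drm *tdev;

	tdev = devm_kzalloc(&pdev->dev, sizeof(*tdev), GFP_KERNEL);
	if (!tdev)
		return -ENOMEM;

	/* prefer pre-existing platform data; fall back to parsing DT */
	tdev->pdata = pdev->dev.platform_data;
	if (!tdev->pdata)
		tdev->pdata = tegra_drm_parse_dt(pdev);
	if (!tdev->pdata)
		return -EINVAL;

	platform_set_drvdata(pdev, tdev);
	return 0;
}
```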


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Stephen Warren
On 04/11/2012 06:10 AM, Thierry Reding wrote:
> This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
> currently has rudimentary GEM support and can run a console on the
> framebuffer as well as X using the xf86-video-modesetting driver.
> Only the RGB output is supported. Quite a lot of things still need
> to be worked out and there is a lot of room for cleanup.

I'll let Jon Mayo comment on the actual driver implementation, since
he's a lot more familiar with Tegra's display hardware. However, I have
some general comments below.

>  .../devicetree/bindings/gpu/drm/tegra.txt  |   24 +
>  arch/arm/mach-tegra/board-dt-tegra20.c |3 +
>  arch/arm/mach-tegra/tegra2_clocks.c|8 +-
>  drivers/gpu/drm/Kconfig|2 +
>  drivers/gpu/drm/Makefile   |1 +
>  drivers/gpu/drm/tegra/Kconfig  |   10 +
>  drivers/gpu/drm/tegra/Makefile |5 +
>  drivers/gpu/drm/tegra/tegra_drv.c  | 2241 
>  drivers/gpu/drm/tegra/tegra_drv.h  |  184 ++
>  include/drm/tegra_drm.h|   44 +

Splitting this patch into two, between arch/arm and drivers/gpu would be
a good idea.

> diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
> b/Documentation/devicetree/bindings/gpu/drm/tegra.txt

> + drm@5420 {
> + compatible = "nvidia,tegra20-drm";

This doesn't seem right; there isn't a "DRM" hardware module on Tegra,
since "DRM" is a Linux/software-specific term.

I'd at least expect to see this compatible flag be renamed to something
more like "nvidia,tegra20-dc" (dc==display controller).

Since Tegra has two display controller modules (I believe identical?),
and numerous other independent(?) blocks, I'd expect to see multiple
nodes in device tree, one per hardware block, such that each block gets
its own device and driver. That said, I'm not familiar enough with
Tegra's display and graphics HW to know if this makes sense. Jon, what's
your take here? The clock change below, and in particular the original
code there that we use downstream, lends weight to my argument.

> + reg = < 0x5420 0x0004/* display A */
> + 0x5424 0x0004/* display B */
> + 0x5800 0x0200 >; /* GART aperture */
> + interrupts = < 0 73 0x04/* display A */
> +0 74 0x04 >; /* display B */
> +
> + lvds {
> + type = "rgb";

These sub-nodes probably want a "compatible" property rather than a
"type" property.

> + size = <345 194>;
> +
> + default-mode {
> + pixel-clock = <61715000>;
> + vertical-refresh = <50>;
> + resolution = <1366 768>;
> + bits-per-pixel = <16>;
> + horizontal-timings = <4 136 2 36>;
> + vertical-timings = <2 4 21 10>;
> + };
> + };

I imagine that quite a bit of thought needs to be put into the output
part of the binding in order to:

* Model the outputs/connectors separately from display controllers.
* Make sure that the basic infra-structure for representing an output is
general enough to be extensible to all the kinds of outputs we support,
not just the LVDS output.
* We were wondering about putting an EDID into the DT to represent the
display modes, so that all outputs had EDIDs rather than "real" monitors
having EDIDs, and fixed internal displays having some other
representation of capabilities.

I'm hoping that Jon will drive this.

> diff --git a/arch/arm/mach-tegra/tegra2_clocks.c 
> b/arch/arm/mach-tegra/tegra2_clocks.c

> -	PERIPH_CLK("disp1", "tegradc.0", NULL, 27, 0x138, 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and process_id */
> -	PERIPH_CLK("disp2", "tegradc.1", NULL, 26, 0x13c, 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and process_id */
> +	PERIPH_CLK("disp1", "tegra-drm", NULL, 27, 0x138, 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and process_id */
> +	PERIPH_CLK("disp2", "tegra-drm", NULL, 26, 0x13c, 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and process_id */

This doesn't seem right, and couples back to my assertion above that the
two display controller modules probably deserve separate device objects,
named e.g. tegradc.*.

> diff --git a/drivers/gpu/drm/tegra/Kconfig b/drivers/gpu/drm/tegra/Kconfig
> new file mode 100644
> index 000..f3382c9
> --- /dev/null
> +++ b/drivers/gpu/drm/tegra/Kconfig
> @@ -0,0 +1,10 @@
> +config DRM_TEGRA
> + tristate 

Re: [RFC 3/4] drm: fixed: Add dfixed_frac() macro

2012-04-11 Thread Stephen Warren
On 04/11/2012 06:10 AM, Thierry Reding wrote:
> This commit is taken from the Chromium tree and was originally written
> by Robert Morell.

Maybe just cherry-pick it from there? That way, the git authorship will
show up as Robert.
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [RFC 2/4] iommu: tegra/gart: Add device tree support

2012-04-11 Thread Stephen Warren
On 04/11/2012 06:10 AM, Thierry Reding wrote:
> This commit adds device tree support for the GART hardware available on
> NVIDIA Tegra 20 SoCs.
> 
> Signed-off-by: Thierry Reding 
> ---
>  arch/arm/boot/dts/tegra20.dtsi |6 ++
>  arch/arm/mach-tegra/board-dt-tegra20.c |1 +
>  drivers/iommu/tegra-gart.c |   10 ++
>  3 files changed, 17 insertions(+)

I think I'd prefer at least the tegra20.dtsi change split out into a
separate patch so I can queue it in a "dt" topic branch in the Tegra repo.

It might be a good idea to split this into two on an arch/arm vs.
drivers/iommu boundary. Looking at Olof's branches for 3.4, that split
is what happened there.

Finally, there should be a binding documentation file in
Documentation/devicetree/bindings in order to specify the number of reg
property entries needed, and their meaning, since there's more than 1
(even though you did comment it nicely in the .dtsi file)
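
Such a binding document might look roughly like this (a sketch only; the
register-bank/aperture split is inferred from the quoted .dtsi comments,
and the details would need checking against the hardware documentation):

```
NVIDIA Tegra 20 GART

Required properties:
- compatible : "nvidia,tegra20-gart"
- reg : Two entries:
  - first entry  : the GART register bank
  - second entry : the GART aperture (the remappable I/O virtual
                   address window)
```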


RE: [alsa-devel] [PATCH v3] pass ELD to HDMI/DP audio driver

2011-08-05 Thread Stephen Warren
Wu Fengguang wrote at Friday, August 05, 2011 6:50 AM:
> On Fri, Aug 05, 2011 at 02:03:41AM +0800, Keith Packard wrote:
> > On Thu, 4 Aug 2011 17:40:24 +0800, Wu Fengguang  
> > wrote:
...
> > > You may wonder why the mode parameter is needed in intel_write_eld().
> > > This is because the ELD field aud_synch_delay (ie. A/V sync delay) may
> > > have different values in progressive/interleaved display modes.
> >
> > Ok, so you can't write ELD data until the display is active, which
> > happens at mode_set time.
> >
> > Do you need to provide ELD when the display is inactive? Is this only to
> > enable audio output when the display is not on? In that case, we will
> 
> Good questions!  In general the audio functionalities should not
> depend on the display activeness. There are even audio-only HDMI
> devices. So I'll need to make intel_write_eld() work even without
> information about the current display mode.

Is there such a thing as an audio-only HDMI signal though; the audio data
is transferred inside the blanking region of the HDMI video signal, which
kinda implies that there needs to be some video content (even if the pixel
data is e.g. hard-coded black rather than being scanned out from a frame-
buffer) so as to define where those blanking regions are within the signal.

-- 
nvpublic

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

