[RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-14 Thread Stephen Warren
On 12/13/2012 11:09 PM, Terje Bergström wrote:
> On 13.12.2012 19:58, Stephen Warren wrote:
>> On 12/13/2012 01:57 AM, Thierry Reding wrote:
>>> After some more discussion with Stephen on IRC we came to the
>>> conclusion that the easiest might be to have tegra-drm call into
>>> host1x with something like:
>>>
>>> void host1x_set_drm_device(struct host1x *host1x, struct device
>>> *dev);
>>
>> If host1x is registering the dummy device that causes tegradrm to be
>> instantiated, then presumably there's no need for the API above, since
>> host1x will already have the struct device * for tegradrm, since it
>> created it?
> 
> I didn't add the dummy device in my latest patch set. I first set out to
> add it, and moved the drm global data to be drvdata of that device. Then
> I noticed that it doesn't actually help at all.
> 
> The critical accesses to the global data are from probes of DC, HDMI,
> etc.

OK

> They want to get the global data by getting drvdata of their parent.

There's no reason that /has/ to be so; they can get any information from
wherever it is, rather than trying to require it to be somewhere it isn't.

> The dummy device is not their parent - host1x is. There's no
> connection between the dummy and the real client devices.

It's quite possible for the client devices to ask their actual parent
(host1x) for the tegradrm struct device *, by calling a host1x API, and
have host1x implement that API by returning the dummy/virtual device
that it created. That should be just 1-2 lines of code to implement in
host1x, plus maybe a couple more to correctly return -EPROBE_DEFER
when appropriate.
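That host1x-side getter might look like the following standalone sketch. The function and field names are illustrative, not from any posted patch, and the kernel types are stubbed so the fragment compiles outside the kernel:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for kernel types and macros so the sketch builds standalone;
 * in the kernel these come from <linux/device.h> and <linux/err.h>. */
struct device { void *drvdata; };
struct host1x { struct device *drm_dev; };

#define EPROBE_DEFER 517 /* matches include/linux/errno.h */
#define ERR_PTR(err) ((struct device *)(long)(err))
#define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-4095)

/* The "1-2 lines of code" in host1x: hand back the dummy/virtual device
 * it created for tegradrm, or ask the caller to defer its probe until
 * that device exists. */
struct device *host1x_get_drm_device(struct host1x *host1x)
{
	if (!host1x->drm_dev)
		return ERR_PTR(-EPROBE_DEFER);
	return host1x->drm_dev;
}
```

A DC or HDMI probe would call this on its parent and simply propagate the error pointer's value while tegradrm is not ready yet.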


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


[RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-13 Thread Stephen Warren
On 12/13/2012 01:57 AM, Thierry Reding wrote:
> On Thu, Dec 13, 2012 at 10:48:55AM +0200, Terje Bergström wrote:
>> On 12.12.2012 18:08, Thierry Reding wrote:
>>> I've briefly discussed this with Stephen on IRC because I
>>> thought I had remembered him objecting to the idea of adding a
>>> dummy device just for this purpose. It turns out, however, that
>>> what he didn't like was to add a dummy node to the DT just to
>>> make this happen, but he has no (strong) objections to a dummy
>>> platform device.
>>> 
>>> While I'm not very happy about that solution, I've been going
>>> over it for a week now and haven't come up with any better
>>> alternative that doesn't have its own disadvantages. So perhaps
>>> we should go ahead and implement that. For the host1x driver
>>> this really just means creating a platform device and adding it
>>> to the system, with some of the fields tweaked to make things
>>> work.
>> 
>> Even the virtual device is not too beautiful. The problem is that
>> the virtual device is not the physical parent for DC, HDMI, etc., so
>> dev_get_drvdata(pdev->dev.parent) returns the data from host1x
>> device, not the virtual device.
>> 
>> We'll post with something that goes around this, but it's not
>> going to be too pretty. Let's try to find the solution once we
>> get the code out.
> 
> After some more discussion with Stephen on IRC we came to the
> conclusion that the easiest might be to have tegra-drm call into
> host1x with something like:
> 
> void host1x_set_drm_device(struct host1x *host1x, struct device
> *dev);

If host1x is registering the dummy device that causes tegradrm to be
instantiated, then presumably there's no need for the API above, since
host1x will already have the struct device * for tegradrm, since it
created it?

> Once the dummy device has been properly set up, have each client
> use
> 
> struct device *host1x_get_drm_device(struct host1x *host1x);
> 
> to obtain a pointer to the dummy device, such that it can receive
> the driver-private data using dev_get_drvdata() on the returned
> device. As long as tegra-drm hasn't registered with host1x yet, the
> above function could return ERR_PTR(-EPROBE_DEFER), so that
> dependencies are automatically handled. This is required because,
> tegra-drm not being the parent of the child devices, it cannot be
> guaranteed that it is probed before any of the children.
> 
> That way we should be able to get around any circular
> dependencies, since we only call into host1x from tegra-drm but not
> the other way around.
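Put together, the proposed flow might look like this minimal standalone sketch. Only the two API names come from the discussion above; everything else (struct fields, the client probe helper) is invented for illustration, with kernel types stubbed so it compiles on its own:

```c
#include <assert.h>
#include <stddef.h>

/* Stubs for kernel types/macros so the pattern compiles standalone. */
struct device { void *drvdata; };
struct host1x { struct device *drm_dev; };

#define EPROBE_DEFER 517
#define ERR_PTR(err) ((struct device *)(long)(err))
#define PTR_ERR(ptr) ((long)(ptr))
#define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-4095)

/* tegra-drm registers its dummy device once it has been set up... */
void host1x_set_drm_device(struct host1x *host1x, struct device *dev)
{
	host1x->drm_dev = dev;
}

/* ...and each client asks host1x for it, deferring until then. */
struct device *host1x_get_drm_device(struct host1x *host1x)
{
	return host1x->drm_dev ? host1x->drm_dev : ERR_PTR(-EPROBE_DEFER);
}

/* What a client probe (DC, HDMI, ...) might do with the answer. */
int tegra_client_probe_sketch(struct host1x *parent)
{
	struct device *drm = host1x_get_drm_device(parent);

	if (IS_ERR(drm))
		return (int)PTR_ERR(drm); /* usually -EPROBE_DEFER */
	/* dev_get_drvdata(drm) would yield the DRM-global data here */
	return 0;
}
```

Since only tegra-drm calls into host1x, the circular dependency never materializes: clients that probe too early just get deferred and retried.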






First version of host1x intro

2012-12-06 Thread Stephen Warren
On 12/06/2012 01:13 AM, Mark Zhang wrote:
> On 12/06/2012 04:00 PM, Lucas Stach wrote:
>> Am Donnerstag, den 06.12.2012, 15:49 +0800 schrieb Mark Zhang:
> [...]
>>>
>>> OK. So these relocation addresses are used to let userspace tell the
>>> kernel which buffers mentioned in the command stream should be
>>> relocated to addresses which host1x clients are able to reach.
>>>
>> Yep, preferably all buffers referenced by a command stream should
>> already be set up in such a position (CMA with Tegra2) or the relocation
>> should be nothing more than setting up IOMMU page tables (Tegra3).
>>
>>> I'm also wondering: if our driver understands the contents of the
>>> commands, maybe we can find all the addresses in the command stream;
>>> that way, we would not need userspace to tell us which addresses
>>> need to be relocated, right?
>>
>> No. How will the kernel ever know which buffer gets referenced in a
>> command stream? All the kernel sees is a command stream with
>> something like "blit data to address 0xADDR" in it. The only info that
>> you can gather from that is that there must be some buffer to blit into.
>> Neither do you know which buffer the stuff should be going to, nor can
>> you know if you blit to offset zero in this buffer. It's perfectly valid
>> to only use a subregion of a buffer.
>>
>> Or maybe I'm misunderstanding you and you mean it the other way around.
>> You don't let userspace dictate the addresses, the relocation
>> information just tells the kernel to find the addresses of the
>> referenced buffers for you and insert them, instead of the dummy
>> information, into the command stream.
> 
> Yes, I think this is what I mean. No dummy information in the command
> stream; userspace just fills in the address it uses (actually the CPU
> address of the buffer) in the command stream, and our driver must have
> a hash table or something containing the buffer address pairs
> (CPU address, DMA address), so our driver can find the DMA address for
> every buffer and then modify the addresses in the command stream.

Typically there would be no CPU address; there's no need in most cases
to ever map most buffers to the CPU.

Automatically parsing the buffer sounds like an interesting idea, but
then the kernel relocation code would have to know the meaning of every
single register or command-stream "method" in order to know which of
them take a buffer address as an argument. I am not familiar with this
HW specifically, so perhaps it's much more regular than I think and it's
actually easy to do that, but I imagine it'd be a PITA to implement that
(although perhaps we have to for the command-stream validation stuff
anyway?). Also, it'd be a lot quicker at least for larger command
buffers to go straight to the few locations in the command stream where
a buffer is referenced (i.e. use the side-band metadata for relocation)
rather than having the CPU re-read the entire command stream in the
kernel to parse it.
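The side-band approach can be sketched as follows. The struct layout and names are hypothetical, purely to illustrate why patching a few known offsets beats re-parsing the whole stream:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical side-band relocation entry supplied by userspace:
 * "the command word at cmd_offset refers to buffer 'handle', at
 * target_offset bytes into that buffer". */
struct reloc {
	uint32_t cmd_offset;    /* word index into the command stream */
	uint32_t handle;        /* which buffer object */
	uint32_t target_offset; /* byte offset within that buffer */
};

/* Kernel-side patching: visit only the referenced words instead of
 * parsing the entire stream. iova_by_handle maps a buffer handle to
 * the device (CMA/IOMMU) address the kernel chose for it. */
static void apply_relocs(uint32_t *cmds, const struct reloc *relocs,
			 unsigned int n, const uint32_t *iova_by_handle)
{
	for (unsigned int i = 0; i < n; i++)
		cmds[relocs[i].cmd_offset] =
			iova_by_handle[relocs[i].handle] +
			relocs[i].target_offset;
}
```

The cost is proportional to the number of relocations, not the length of the command buffer, which is the point Stephen makes above.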




[PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-12-03 Thread Stephen Warren
On 12/03/2012 08:00 PM, Mark Zhang wrote:
> On 11/28/2012 02:37 PM, Mark Zhang wrote:
>> On 11/28/2012 05:39 AM, Stephen Warren wrote:
>>> On 11/27/2012 11:17 AM, Stephen Warren wrote:
>>>> On 11/26/2012 08:16 PM, Mark Zhang wrote:
>>>>> On 11/27/2012 06:37 AM, Stephen Warren wrote:
>>>>>> On 11/22/2012 12:37 PM, Thierry Reding wrote:
>>>>>>> Instead of using the stride derived from the display mode, use the pitch
>>>>>>> associated with the currently active framebuffer. This fixes a bug where
>>>>>>> the LCD display content would be skewed when enabling HDMI with a video
>>>>>>> mode different from that of the LCD.
>>>>>>
>>>>>> This patch certainly doesn't cause any additional issues for me, so:
>>>>>>
>>>>>> Tested-by: Stephen Warren 
>>>>>>
>>>>>> However, it still doesn't allow both Cardhu's LCD panel and external
>>>>>> HDMI port (1080p) to be active at once. If I boot with both enabled, or
>>>>>> boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
>>>>>> are active, then some kind of display corruption starts; it looks like a
>>>>>> clocking issue or perhaps memory underflow.
>>>>>
>>>>> I haven't observed this issue. What kind of display corruption do
>>>>> you mean? Did it recover after some seconds, or was the display on
>>>>> the LVDS panel always corrupted?
>>>>>
>>>>> During my testing, I connected HDMI while booting cardhu and I can see
>>>>> the LVDS and HDMI working with no corruptions.
>>>>
>>>> For your viewing pleasure (and playing with my new phone) :-)
>>>> http://www.youtube.com/watch?v=ZJxJnONz7DA
>>>>
>>>> The external monitor is 1920x1200 I believe.
>>>
>>> Jon Mayo says the corruption in the video is display (memory fetch)
>>> underflow. Perhaps this is because (IIRC) the BCT I'm using on Cardhu
>>> programs the memory controller at a slow rate, and the bootloader and/or
>>> kernel is supposed to bump up the rate to the max, but that's not
>>> implemented anywhere yet upstream. If you're testing with "fastboot"
>>> instead of U-Boot, that might be re-programming the memory frequencies,
>>> and hence avoiding this.
>>>
>>
>> All right, I just tested the framebuffer console and "xinit"; I didn't
>> install the whole Ubuntu image.
>>
>> I'll install Ubuntu on my Cardhu and see whether I have this kind of
>> issue.
> 
> Hi swarren, I installed ubuntu 12.04 in l4t and didn't observe the issue
> you described. The display worked with no corruptions. I can show you
> the video if you want.
> 
> What I used for testing is a cardhu board with our downstream U-Boot.
> 
> But the HDMI didn't work. The HDMI monitor showed this: "CANNOT DISPLAY
> THIS VIDEO MODE, CHANGE COMPUTER DISPLAY INPUT TO 1920x1080@60HZ". So
> it sounds like the clock setting has some problems... I'll have a look
> at this.

Oh, I thought I'd followed up on this - Jon Mayo said it was display
underflow due to lack of memory bandwidth. IIRC, this may be due to the
BCT programming the memory controller for conservative settings that
don't require non-default voltages from the PMIC, with the expectation
that the bootloader or kernel will reprogram everything for correct
performance.


[RFC v2 1/8] video: tegra: Add nvhost driver

2012-12-03 Thread Stephen Warren
On 12/01/2012 07:58 AM, Thierry Reding wrote:
> On Sat, Dec 01, 2012 at 01:31:02PM +0200, Terje Bergström wrote:
...
>> I was thinking of definitions like this:
>> 
>> static inline u32 host1x_sync_cfpeek_ctrl_cfpeek_addr_f(u32 v)
>> {
>> 	return (v & 0x1ff) << 0;
>> }
>> 
>> versus
>> 
>> #define host1x_sync_cfpeek_ctrl_cfpeek_addr_f(v) \
>> 	(((v) >> 16) & 0x3ff)
>> 
>> Both of these produce the same machine code and have the same usage,
>> but the former has type checking and code coverage analysis and is
>> (in my eyes) clearer. In both of these cases the usage is like this:
>> 
>> writel(host1x_sync_cfpeek_ctrl_cfpeek_ena_f(1) |
>>        host1x_sync_cfpeek_ctrl_cfpeek_channr_f(chid) |
>>        host1x_sync_cfpeek_ctrl_cfpeek_addr_f(rd_ptr),
>>        m->sync_aperture + host1x_sync_cfpeek_ctrl_r());
> 
> Again there's no precedent for doing this with static inline
> functions. You can do the same with macros. Type checking isn't an
> issue in these cases since we're talking about bitfields for which
> no proper type exists.

I suspect the inline functions could encode signed-vs-unsigned fields,
perhaps catch u8 variables when they should have been u32, etc.?
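A standalone illustration of the point (field name borrowed from the thread, semantics simplified): the inline function gives the compiler a real prototype, so the caller's argument is converted through a checked u32 parameter and the body is visible to coverage tools, while the macro accepts anything:

```c
#include <assert.h>
#include <stdint.h>

/* Inline-function style: a typed u32 parameter and a function body
 * that coverage tools can attribute hits to. */
static inline uint32_t cfpeek_addr_f(uint32_t v)
{
	return (v & 0x1ff) << 0;
}

/* Macro style: same machine code, but no parameter type and no
 * per-function coverage granularity. */
#define CFPEEK_ADDR_F(v) (((v) & 0x1ff) << 0)
```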


[RFC v2 1/8] video: tegra: Add nvhost driver

2012-12-03 Thread Stephen Warren
On 11/30/2012 03:38 AM, Thierry Reding wrote:
> On Fri, Nov 30, 2012 at 10:56:39AM +0200, Terje Bergstr?m wrote:
>> On 29.11.2012 13:47, Thierry Reding wrote:
>>> On Thu, Nov 29, 2012 at 12:21:04PM +0200, Terje Bergstr?m
>>> wrote:
 Tegra20 and Tegra30 are compatible, but future chips are not.
 I was hoping we would be ready in upstream kernel for future
 chips.
>>> 
>>> I think we should ignore that problem for now. Generally
>>> planning for any possible combination of incompatibilities
>>> leads to overgeneralized designs that require precisely these
>>> kinds of indirections.
>>> 
>>> Once some documentation for Tegra 40 materializes we can start
>>> thinking about how to encapsulate the incompatible code.
>> 
>> I think here our perspectives differ a lot. That is natural
>> considering the company I work for and company you work for, so
>> let's try to sync the perspective.
>> 
>> In my reality, whatever is in market is old news and I barely
>> work on them anymore. Upstreaming activity is the exception. 90%
>> of my time is spent dealing with future chips which I know cannot
>> be handled without this split to logical and physical driver
>> parts.
>> 
>> For you, Tegra2 and Tegra3 are the reality.
> 
> To be fair, Tegra2 and Tegra3 are the reality for *everyone*
> *outside* NVIDIA.
> 
> It's great that you spend most of your time working with future
> chips, but unless you submit the code for inclusion or review
> nobody upstream needs to be concerned about the implications. Most
> people don't have time to waste so we naturally try to keep the
> maintenance burden to a minimum.
> 
> The above implies that when you submit code it shouldn't contain
> pieces that prepare for possible future extensions which may or may
> not be submitted (the exception being if such changes are part of a
> series where subsequent patches actually use them). The outcome is
> that the amount of cruft in the mainline kernel is kept to a
> minimum. And that's a very good thing.

I think there's room for letting Terje's complete knowledge of future
chips guide the design of the current code that's sent upstream.
Certainly we shouldn't add a ton of unnecessary abstraction layers
right now that aren't needed for Tegra20/30, but if there's some
decision that doesn't affect the bloat, opaqueness, ... of the current
code but one choice is better for future development without serious
negatives for the current code, it's pretty reasonable to make that
decision rather than the other.

(That all said, I haven't really followed the details of this
particular point, so I can't say how my comment applies to any
decisions being made right now - just that we shouldn't blanket reject
future knowledge when making decisions)

After all, making the right decision now will reduce the number/size
of patches later, and hence reduce code churn and reviewer load.








[RFC v2 2/8] video: tegra: Add syncpoint wait and interrupts

2012-11-29 Thread Stephen Warren
On 11/29/2012 01:44 AM, Thierry Reding wrote:
> On Mon, Nov 26, 2012 at 03:19:08PM +0200, Terje Bergstrom wrote:

>> diff --git a/drivers/video/tegra/host/host1x/host1x_intr.c
>> b/drivers/video/tegra/host/host1x/host1x_intr.c
> [...]
>> +/* Spacing between sync registers */
>> +#define REGISTER_STRIDE 4
> 
> Erm... no. The usual way you should be doing this is either make
> the register definitions account for the stride or use accessors
> that apply the stride. You should be doing the latter anyway to
> make accesses. For example:
> 
> static inline void host1x_syncpt_writel(struct host1x *host1x,
>                                         unsigned long value,
>                                         unsigned long offset)
> {
> 	writel(value, host1x->regs + SYNCPT_BASE + offset);
> }
> 
> static inline unsigned long host1x_syncpt_readl(struct host1x *host1x,
>                                                 unsigned long offset)
> {
> 	return readl(host1x->regs + SYNCPT_BASE + offset);
> }
> 
> Alternatively, if you want to pass the register index instead of
> the offset, you can use just multiply the offset in that function:
> 
> writel(value, host1x->regs + SYNCPT_BASE + (offset << 2));
> 
> The same can also be done with the non-syncpt registers.

It seems like reasonable documentation to replace "<< 2" with "*
REGISTER_STRIDE" here.
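As a concrete example of that suggestion, both spellings compute the same offset; the named constant just documents that sync registers are 32-bit words. This is a minimal sketch, not code from the series:

```c
#include <assert.h>
#include <stdint.h>

#define REGISTER_STRIDE 4 /* spacing between 32-bit sync registers */

/* Byte offset of the Nth syncpoint register within the aperture. */
static inline uint32_t syncpt_offset(unsigned int index)
{
	return index * REGISTER_STRIDE; /* identical to index << 2 */
}
```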


[RFC v2 1/8] video: tegra: Add nvhost driver

2012-11-29 Thread Stephen Warren
On 11/29/2012 04:47 AM, Thierry Reding wrote:
> On Thu, Nov 29, 2012 at 12:21:04PM +0200, Terje Bergström wrote:
>> On 28.11.2012 23:23, Thierry Reding wrote:
>>> This could be problematic. Since drivers/video and
>>> drivers/gpu/drm are separate trees, this would entail a
>>> continuous burden on keeping both trees synchronized. While I
>>> realize that eventually it might be better to put the host1x
>>> driver in a separate place to accomodate for its use by other
>>> subsystems, I'm not sure moving it here right away is the best 
>>> approach.
>> 
>> I understand your point, but I hope also that we'd end up with
>> something that can be used as basis for the downstream kernel to
>> migrate to upstream stack.
>> 
>> The key point here is to make the API between nvhost and tegradrm
>> as small and robust to changes as possible.
> 
> I agree. But I also fear that there will be changes eventually and 
> having both go in via different tree requires those trees to be
> merged in a specific order to avoid breakage should the API change.
> This will be particularly ugly in linux-next.
> 
> That's why I explicitly proposed to take this into
> drivers/gpu/drm/tegra for the time being, until we can be
> reasonably sure that the API is fixed. Then I'm fine with moving it
> wherever seems the best fit. Even then there might be the
> occasional dependency, but they should get fewer and fewer as the
> code matures.

It is acceptable for one maintainer to ack patches, and another
maintainer to merge a series that touches both "their own" code and
code owned by another tree. This should of course only be needed when
inter-module APIs change; changes to code within a module shouldn't
require this.



[RFC v2 1/8] video: tegra: Add nvhost driver

2012-11-29 Thread Stephen Warren
On 11/29/2012 03:21 AM, Terje Bergstr?m wrote:
> On 28.11.2012 23:23, Thierry Reding wrote:
...
>>> + regs = platform_get_resource(dev, IORESOURCE_MEM, 0);
>>> + intr0 = platform_get_resource(dev, IORESOURCE_IRQ, 0);
>>> + intr1 = platform_get_resource(dev, IORESOURCE_IRQ, 1);
>>> +
>>> + if (!regs || !intr0 || !intr1) {
>>
>> I prefer to have these checked for explicitly, one by one for
>> readability and potentially more useful diagnostics.
> 
> Can do.
> 
>> Also you should be using platform_get_irq() for interrupts. Furthermore
>> the host1x DT node (and the TRM) name the interrupts "syncpt" and
>> "general", so maybe those would be more useful variable names than
>> "intr0" and "intr1".
>>
>> But since you don't use them anyway they shouldn't be part of this
>> patch.
> 
> True. I might as well delete the general interrupt altogether, as
> we don't use it for any real purpose.

Do make sure the interrupts still are part of the DT binding though, so
that the binding fully describes the HW, and the interrupt is available
to retrieve if we ever do use it in the future.

>>> + for (i = 0; i < pdata->num_clks; i++)
>>> + clk_prepare_enable(pdata->clk[i]);
>>> + nvhost_syncpt_reset(&host->syncpt);
>>> + for (i = 0; i < pdata->num_clks; i++)
>>> + clk_disable_unprepare(pdata->clk[i]);
>>
>> Stephen already hinted at this when discussing the AUXDATA. You should
>> explicitly request the clocks.
> 
> I'm not too happy about that idea. The clock management code is generic
> for all modules, and that's why it's driven by a data structure. Now
> Stephen and you ask me to unroll the loops and make copies of the code
> to facilitate different modules and different SoCs.

You can still create tables of clocks inside the driver and loop over
them. So, loop unrolling isn't related to my comments at least. It's
just that clk_get() shouldn't take its parameters from platform data.

But if these are clocks for (arbitrary) child modules (that may or may
not exist dynamically), why aren't the drivers for the child modules
managing them?


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel






[PATCH v2] drm: tegra: Add maintainers entry

2012-11-28 Thread Stephen Warren
On 11/28/2012 12:18 PM, Thierry Reding wrote:
> Add myself as the maintainer of the NVIDIA Tegra DRM driver.

Aside from one comment below,

Acked-by: Stephen Warren <swar...@nvidia.com>

> +L:   dri-devel@lists.freedesktop.org

Should linux-tegra@vger.kernel.org also be CC'd so that everything
Tegra-related goes to one place?


[RFC v2 8/8] drm: tegra: Add gr2d device

2012-11-28 Thread Stephen Warren
On 11/28/2012 07:45 AM, Terje Bergström wrote:
> On 28.11.2012 16:06, Lucas Stach wrote:
>> Why do even need/use dma-buf for this use case? This is all one DRM
>> device, even if we separate host1x and gr2d as implementation modules.
> 
> I didn't want to implement dependency to drm gem objects in nvhost, but
> we have thought about doing that. dma-buf brings quite a lot of
> overhead, so implementing support for gem buffers would make the
> sequence a bit leaner.
> 
> nvhost already has infra to support multiple memory managers.
> 
>> So standard way of doing this is:
>> 1. create gem object for pushbuffer
>> 2. create fake mmap offset for gem obj
>> 3. map pushbuf using the fake offset on the drm device
>> 4. at submit time zap the mapping
>>
>> You need this logic anyway, as normally we don't rely on userspace to
>> sync gpu and cpu, but use the kernel to handle the concurrency issues.
> 
> Taking a step back - 2D streams are actually very short, in the order of
> <100 bytes. Just copying them to kernel space would actually be faster
> than doing MMU operations.

I'm not sure it's a good idea to have one buffer submission mechanism
for the 2D class and another for the 3D/... class, nor to bet that 2D
streams will always be short.






[PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-11-27 Thread Stephen Warren
On 11/27/2012 11:17 AM, Stephen Warren wrote:
> On 11/26/2012 08:16 PM, Mark Zhang wrote:
>> On 11/27/2012 06:37 AM, Stephen Warren wrote:
>>> On 11/22/2012 12:37 PM, Thierry Reding wrote:
>>>> Instead of using the stride derived from the display mode, use the pitch
>>>> associated with the currently active framebuffer. This fixes a bug where
>>>> the LCD display content would be skewed when enabling HDMI with a video
>>>> mode different from that of the LCD.
>>>
>>> This patch certainly doesn't cause any additional issues for me, so:
>>>
>>> Tested-by: Stephen Warren <swar...@nvidia.com>
>>>
>>> However, it still doesn't allow both Cardhu's LCD panel and external
>>> HDMI port (1080p) to be active at once. If I boot with both enabled, or
>>> boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
>>> are active, then some kind of display corruption starts; it looks like a
>>> clocking issue or perhaps memory underflow.
>>
>> I haven't observed this issue. What kind of display corruption you mean?
>> Did it recover after some seconds or the display in LVDS panel was
>> always corrupted?
>>
>> During my testing, I connected HDMI while booting cardhu and I can see
>> the LVDS and HDMI working with no corruptions.
> 
> For your viewing pleasure (and playing with my new phone) :-)
> http://www.youtube.com/watch?v=ZJxJnONz7DA
> 
> The external monitor is 1920x1200 I believe.

Jon Mayo says the corruption in the video is display (memory fetch)
underflow. Perhaps this is because (IIRC) the BCT I'm using on Cardhu
programs the memory controller at a slow rate, and the bootloader and/or
kernel is supposed to bump up the rate to the max, but that's not
implemented anywhere yet upstream. If you're testing with "fastboot"
instead of U-Boot, that might be re-programming the memory frequencies,
and hence avoiding this.

I guess we have a fun time ahead of us with mode validation and memory
controller programming.


[PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-11-27 Thread Stephen Warren
On 11/26/2012 08:16 PM, Mark Zhang wrote:
> On 11/27/2012 06:37 AM, Stephen Warren wrote:
>> On 11/22/2012 12:37 PM, Thierry Reding wrote:
>>> Instead of using the stride derived from the display mode, use the pitch
>>> associated with the currently active framebuffer. This fixes a bug where
>>> the LCD display content would be skewed when enabling HDMI with a video
>>> mode different from that of the LCD.
>>
>> This patch certainly doesn't cause any additional issues for me, so:
>>
>> Tested-by: Stephen Warren <swar...@nvidia.com>
>>
>> However, it still doesn't allow both Cardhu's LCD panel and external
>> HDMI port (1080p) to be active at once. If I boot with both enabled, or
>> boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
>> are active, then some kind of display corruption starts; it looks like a
>> clocking issue or perhaps memory underflow.
> 
> I haven't observed this issue. What kind of display corruption you mean?
> Did it recover after some seconds or the display in LVDS panel was
> always corrupted?
> 
> During my testing, I connected HDMI while booting cardhu and I can see
> the LVDS and HDMI working with no corruptions.

For your viewing pleasure (and playing with my new phone) :-)
http://www.youtube.com/watch?v=ZJxJnONz7DA

The external monitor is 1920x1200 I believe.


[RFC v2 5/8] ARM: tegra: Add auxiliary data for nvhost

2012-11-27 Thread Stephen Warren
On 11/26/2012 11:33 PM, Terje Bergström wrote:
> On 27.11.2012 01:39, Stephen Warren wrote:
>> Clock names shouldn't be passed in platform data; instead, clk_get()
>> should be passed the device object and device-relative (i.e. not global)
>> clock name. I expect if the driver is fixed to make this change, the
>> changes to tegra*_clocks_data.c won't be needed either.
> 
> Isn't this code doing exactly that - getting a device relative clock,
> nvhost_module_init() in nvhost.acm.c:
> 
> (...)
>   /* initialize clocks to known state */
>   while (pdata->clocks[i].name && i < NVHOST_MODULE_MAX_CLOCKS) {
>   long rate = pdata->clocks[i].default_rate;
>   struct clk *c;
> 
>   c = devm_clk_get(&dev->dev, pdata->clocks[i].name);

The line above is getting the (device-relative) clock name from platform
data, rather than using some fixed name as it should be.








[RFC v2 5/8] ARM: tegra: Add auxiliary data for nvhost

2012-11-26 Thread Stephen Warren
On 11/26/2012 06:19 AM, Terje Bergstrom wrote:
> Add SoC specific auxiliary data to host1x and gr2d. nvhost uses
> this data.
> 
> Signed-off-by: Terje Bergstrom <tbergst...@nvidia.com>
> Signed-off-by: Arto Merilainen <amerilai...@nvidia.com>

Arto's S-o-b really should be first and yours last since you're (Terje)
the one who touched the patches last.

> diff --git a/arch/arm/mach-tegra/board-dt-tegra20.c 
> b/arch/arm/mach-tegra/board-dt-tegra20.c

I think none of the changes to board-dt-tegra*.c should be made.

AUXDATA is a temporary measure to keep things working during the
transition to device tree. We want to remove entries from the AUXDATA
tables rather than add them. The only thing that's stopping us from
doing so right now is the lack of DT-based clock lookups, which hence
require devices to have a specific name.

> +static struct nvhost_device_data tegra_host1x_info = {
> + .clocks = { {"host1x", UINT_MAX} },

> +static struct nvhost_device_data tegra_gr2d_info = {
> + .clocks = { {"gr2d", UINT_MAX, true}, {"epp", UINT_MAX, true} },

Clock names shouldn't be passed in platform data; instead, clk_get()
should be passed the device object and device-relative (i.e. not global)
clock name. I expect if the driver is fixed to make this change, the
changes to tegra*_clocks_data.c won't be needed either.



[PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-11-26 Thread Stephen Warren
On 11/22/2012 12:37 PM, Thierry Reding wrote:
> Instead of using the stride derived from the display mode, use the pitch
> associated with the currently active framebuffer. This fixes a bug where
> the LCD display content would be skewed when enabling HDMI with a video
> mode different from that of the LCD.

This patch certainly doesn't cause any additional issues for me, so:

Tested-by: Stephen Warren <swar...@nvidia.com>

However, it still doesn't allow both Cardhu's LCD panel and external
HDMI port (1080p) to be active at once. If I boot with both enabled, or
boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
are active, then some kind of display corruption starts; it looks like a
clocking issue or perhaps memory underflow.

Mark, can you please investigate this. I haven't tested to see if the
issue is Tegra30-specific, or also happens on Tegra20.






[PATCH] drm: Add NVIDIA Tegra30 support

2012-11-16 Thread Stephen Warren
On 11/15/2012 09:58 PM, Mark Zhang wrote:
> This patch is based on Thierry's drm patch for Tegra20:
> - [PATCH v2 0/6] Device tree updates for host1x support
> - [PATCH v3 0/2] NVIDIA Tegra DRM driver

> diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
> index 216cd0f..6e9f1b4 100644
> --- a/drivers/gpu/drm/tegra/dc.c
> +++ b/drivers/gpu/drm/tegra/dc.c
> @@ -819,6 +819,7 @@ static int tegra_dc_remove(struct platform_device *pdev)
>  
>  static struct of_device_id tegra_dc_of_match[] = {
> { .compatible = "nvidia,tegra20-dc", },
> + { .compatible = "nvidia,tegra30-dc", },

The Tegra30 entries should come first in the table due to the order the
kernel (currently) matches compatible properties against these tables.
The same comment applies to all 3 tables.

With the patch manually applied (due to the whitespace issues I
mentioned earlier), and this ordering issue fixed,

Tested-by: Stephen Warren <swar...@nvidia.com>

(On Cardhu, with no HDMI plugged in; see my next email for details).


[PATCH] drm: Add NVIDIA Tegra30 support

2012-11-16 Thread Stephen Warren
On 11/15/2012 09:58 PM, Mark Zhang wrote:
> This patch is based on Thierry's drm patch for Tegra20:
> - [PATCH v2 0/6] Device tree updates for host1x support
> - [PATCH v3 0/2] NVIDIA Tegra DRM driver
> 
> It adds the support for NVIDIA Tegra30.

Mark, I tried to apply this for testing locally, but it doesn't apply.

For some reason, all the whitespace in the context has been replaced
with spaces. Are your local copies of dc.c and host1x.c indented with
spaces for some reason, or did the MS Exchange server corrupt your patch
as you sent it (I've previously only observed inbound corruption...)




[PATCH v3 0/2] NVIDIA Tegra DRM driver

2012-11-15 Thread Stephen Warren
On 11/15/2012 02:28 PM, Thierry Reding wrote:
> This third version of the patch series cleans up some minor issues that
> were found in the previous versions, such as deferred probe not working
> properly and the display remaining enabled after the driver is removed.
> 
> I've also put the two patches in this series into the tegra/drm/for-3.8
> branch of my repository on gitorious[0].

The series,

Tested-by: Stephen Warren <swar...@nvidia.com>

(on Harmony, using the HDMI output)





[PATCH v8 2/6] video: add of helper for videomode

2012-11-13 Thread Stephen Warren
On 11/13/2012 04:08 AM, Thierry Reding wrote:
> On Mon, Nov 12, 2012 at 04:37:02PM +0100, Steffen Trumtrar wrote:
>> This adds support for reading display timings from DT or/and
>> convert one of those timings to a videomode. The
>> of_display_timing implementation supports multiple children where
>> each property can have up to 3 values. All children are read into
>> an array, that can be queried. of_get_videomode converts exactly
>> one of that timings to a struct videomode.

>> diff --git
>> a/Documentation/devicetree/bindings/video/display-timings.txt
>> b/Documentation/devicetree/bindings/video/display-timings.txt

>> + - clock-frequency: displayclock in Hz
> 
> "display clock"?

I /think/ I had suggested naming this clock-frequency before so that
the property name would be more standardized; other bindings use that
same name. But I'm not too attached to the name I guess.
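For reference, a device tree fragment using the binding under discussion might look like the sketch below. The property names follow the quoted binding (with `clock-frequency` in Hz, as discussed); the node names and the timing numbers themselves are made-up example values.

```dts
/* Illustrative fragment only — the timing values are invented. */
display-timings {
	native-mode = <&timing0>;

	timing0: timing0 {
		clock-frequency = <33260000>;	/* display clock in Hz */
		hactive = <800>;
		vactive = <480>;
		hfront-porch = <40>;
		hback-porch = <88>;
		hsync-len = <128>;
		vfront-porch = <13>;
		vback-porch = <32>;
		vsync-len = <3>;
	};
};
```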


[PATCH v2 0/2] NVIDIA Tegra DRM driver

2012-11-13 Thread Stephen Warren
On 11/12/2012 11:47 PM, Thierry Reding wrote:
> On Mon, Nov 12, 2012 at 05:17:18PM -0700, Stephen Warren wrote:
>> On 11/12/2012 02:55 PM, Thierry Reding wrote:
>>> This second version of this patch series addresses all the
>>> comments received so far. Most notably it takes advantage of
>>> the debugfs helpers provided by the DRM core. Oddly enough this
>>> actually increases the line count, but that's because the
>>> helpers don't fit with the subdevices approach as implemented
>>> by this driver. However some quick discussions with Rob Clark
>>> showed that Tegra DRM is not special in this respect but other
>>> drivers may need the same functionality. Eventually the
>>> debugfs code could be reworked on top of helpers that are
>>> better suited at the design of embedded, multi-device DRM
>>> drivers.
>>> 
>>> Other than that there is some removal of code that was actually
>>> supposed to go into a later patch because it has dependencies
>>> that haven't been merged yet and some moving around of #defines
>>> and the device tree bindings documentation. Finally the driver
>>> now uses the DRM core's drm_compat_ioctl() instead of a custom
>>> and unimplemented (!) version.
>> 
>> The series,
>> 
>> Tested-by: Stephen Warren 
>> 
>> (on the Harmony board's HDMI output; I'll test other
>> boards/outputs later).
> 
> You also gave an Acked-by for the DT binding documentation in the
> first version of this patchset, does it apply to the rest of the
> patch as well? That is, can I add it to patch 1?

I didn't actually read the rest of the patch since there are many
people much more familiar with the host1x/... code who will provide
useful feedback.

However, yes, I think it's fine to include my ack in the patch - I
believe it's common to ack only parts of patches.


[PATCH v2 1/2] drm: Add NVIDIA Tegra20 support

2012-11-13 Thread Stephen Warren
On 11/13/2012 02:37 AM, Thierry Reding wrote:
> On Tue, Nov 13, 2012 at 04:49:24PM +0800, Mark Zhang wrote:
>> On 11/13/2012 03:48 PM, Thierry Reding wrote:
>>> * PGP Signed by an unknown key
>>> 
>>> On Tue, Nov 13, 2012 at 03:15:47PM +0800, Mark Zhang wrote:
 On 11/13/2012 05:55 AM, Thierry Reding wrote:
> This commit adds a KMS driver for the Tegra20 SoC. This
> includes basic support for host1x and the two display
> controllers found on the Tegra20 SoC. Each display
> controller can drive a separate RGB/LVDS output.

> diff --git a/drivers/gpu/drm/tegra/Kconfig
> b/drivers/gpu/drm/tegra/Kconfig new file mode 100644 index
> 000..be1daf7 --- /dev/null +++
> b/drivers/gpu/drm/tegra/Kconfig @@ -0,0 +1,23 @@ +config
> DRM_TEGRA +   tristate "NVIDIA Tegra DRM" +
> depends on DRM && OF && ARCH_TEGRA +   select
> DRM_KMS_HELPER +   select DRM_GEM_CMA_HELPER +
> select DRM_KMS_CMA_HELPER
 
 Just for curious, according to my testing, why the
 "CONFIG_CMA" is not enabled while DRM_GEM_CMA_HELPER &
 DRM_KMS_CMA_HELPER are enabled here?
>>> 
>>> The reason is that CMA doesn't actually provide any API for
>>> drivers to use and in fact unless you use very large buffers
>>> you could indeed run this code on top of a non-CMA kernel and
>>> it will likely even work.
>>> 
>> 
>> Okay. But I think it's better to turn on CMA defaultly. During
>> my testing, it's hard to allocate more 2MB without CMA...
> 
> CMA is enabled by default in one of the Tegra default
> configuration patches in my tegra/next branch. I will submit that
> patch to Stephen when the 3.8 cycle starts, so that it'll be
> automatically enabled along with the DRM driver.
> 
> But I don't think it makes sense to couple it to the DRM_TEGRA
> symbol as it isn't strictly required.

OK, I guess that approach makes sense; most people will just use the
defconfig and hence get a useful kernel, while flexibility will not be
lost if someone really wants it.

Note that I have less than 1 week left to apply patches for 3.8. I
hope that if tegradrm makes it into the drm tree for 3.8, so will the
defconfig and other enablement patches to activate it.


[PATCH v2 1/2] drm: Add NVIDIA Tegra20 support

2012-11-13 Thread Stephen Warren
On 11/13/2012 12:15 AM, Mark Zhang wrote:
> On 11/13/2012 05:55 AM, Thierry Reding wrote:
>> This commit adds a KMS driver for the Tegra20 SoC. This includes basic
>> support for host1x and the two display controllers found on the Tegra20
>> SoC. Each display controller can drive a separate RGB/LVDS output.

Mark, when you're replying to a long patch, it's useful to quote only
the code you're directly responding to; slowly scrolling through your
email to find your comments is rather error-prone and time-consuming.

Thanks for the feedback though.


[PATCH v2 0/2] NVIDIA Tegra DRM driver

2012-11-12 Thread Stephen Warren
On 11/12/2012 02:55 PM, Thierry Reding wrote:
> This second version of this patch series addresses all the comments
> received so far. Most notably it takes advantage of the debugfs helpers
> provided by the DRM core. Oddly enough this actually increases the line
> count, but that's because the helpers don't fit with the subdevices
> approach as implemented by this driver. However some quick discussions
> with Rob Clark showed that Tegra DRM is not special in this respect but
> other drivers may need the same functionality. Eventually the debugfs
> code could be reworked on top of helpers that are better suited at the
> design of embedded, multi-device DRM drivers.
> 
> Other than that there is some removal of code that was actually supposed
> to go into a later patch because it has dependencies that haven't been
> merged yet and some moving around of #defines and the device tree
> bindings documentation. Finally the driver now uses the DRM core's
> drm_compat_ioctl() instead of a custom and unimplemented (!) version.

The series,

Tested-by: Stephen Warren 

(on the Harmony board's HDMI output; I'll test other boards/outputs later).


[PATCH v8 2/6] video: add of helper for videomode

2012-11-12 Thread Stephen Warren
On 11/12/2012 08:37 AM, Steffen Trumtrar wrote:
> This adds support for reading display timings from DT or/and convert one of 
> those
> timings to a videomode.
> The of_display_timing implementation supports multiple children where each
> property can have up to 3 values. All children are read into an array, that
> can be queried.
> of_get_videomode converts exactly one of that timings to a struct videomode.

> diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
> b/Documentation/devicetree/bindings/video/display-timings.txt

The device tree bindings look reasonable to me, so,

Acked-by: Stephen Warren 


[PATCH 1/2] drm: Add NVIDIA Tegra20 support

2012-11-09 Thread Stephen Warren
On 11/09/2012 06:59 AM, Thierry Reding wrote:
> This commit adds a KMS driver for the Tegra20 SoC. This includes basic
> support for host1x and the two display controllers found on the Tegra20
> SoC. Each display controller can drive a separate RGB/LVDS output.

I applied these two patches and the two arch/arm/mach-tegra patches you
posted, directly on top of next-20121109, and I see the following build
failure:

> drivers/gpu/drm/tegra/output.c: In function 'tegra_output_init':
> drivers/gpu/drm/tegra/output.c:166:9: error: 'struct tegra_output' has no 
> member named 'display'
> drivers/gpu/drm/tegra/output.c:166:3: error: implicit declaration of function 
> 'of_get_display'
> drivers/gpu/drm/tegra/output.c:167:20: error: 'struct tegra_output' has no 
> member named 'display'
> drivers/gpu/drm/tegra/output.c:168:25: error: 'struct tegra_output' has no 
> member named 'display'
> drivers/gpu/drm/tegra/output.c:179:13: error: 'struct tegra_output' has no 
> member named 'display'
> drivers/gpu/drm/tegra/output.c:180:3: error: implicit declaration of function 
> 'display_put'
> drivers/gpu/drm/tegra/output.c:180:21: error: 'struct tegra_output' has no 
> member named 'display'
> drivers/gpu/drm/tegra/output.c:257:20: error: 'struct tegra_output' has no 
> member named 'display'
> drivers/gpu/drm/tegra/output.c: In function 'tegra_output_exit':
> drivers/gpu/drm/tegra/output.c:272:20: error: 'struct tegra_output' has no 
> member named 'display'

Does this depend on something not in linux-next?


[PATCH 1/2] drm: Add NVIDIA Tegra20 support

2012-11-09 Thread Stephen Warren
On 11/09/2012 06:59 AM, Thierry Reding wrote:
> This commit adds a KMS driver for the Tegra20 SoC. This includes basic
> support for host1x and the two display controllers found on the Tegra20
> SoC. Each display controller can drive a separate RGB/LVDS output.

> diff --git 
> a/Documentation/devicetree/bindings/gpu/drm/nvidia,tegra20-host1x.txt 
> b/Documentation/devicetree/bindings/gpu/drm/nvidia,tegra20-host1x.txt
> new file mode 100644
> index 000..b4fa934
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/gpu/drm/nvidia,tegra20-host1x.txt

"drm" is a Linux-specific term, so shouldn't really be used as the
directory name for a binding. bindings/gpu/nvidia,tegra20-host1x.txt
would probably be just fine.

Aside from that, the bindings,
Acked-by: Stephen Warren 

I don't really know anything about DRM or our display HW, so I haven't
reviewed the code at all. I certainly ack the concept of adding the
driver though! I have asked various other people at NVIDIA to give a
quick review of the code.


[PATCH v7 2/8] of: add helper to parse display timings

2012-11-01 Thread Stephen Warren
On 10/31/2012 03:28 AM, Steffen Trumtrar wrote:

Patch description? The patch defines the DT binding as well, which isn't
mentioned in the patch subject.

> new file mode 100644
> index 000..04c94a3
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/video/display-timings.txt

> +Usage in backend
> +

Everything before this point in the binding docs looks reasonable to me.
Everything after this point is Linux-specific/internal implementation
detail, and hence shouldn't be in the binding document.

I only read the DT binding.



[PATCH 1/2 v6] of: add helper to parse display timings

2012-10-08 Thread Stephen Warren
On 10/08/2012 06:20 AM, Tomi Valkeinen wrote:
> On Mon, 2012-10-08 at 14:04 +0200, Laurent Pinchart wrote:
>> On Monday 08 October 2012 12:01:18 Tomi Valkeinen wrote:
>>> On Mon, 2012-10-08 at 10:25 +0200, Guennadi Liakhovetski
>>> wrote:
...
>>> Of course, if this is about describing the hardware, the
>>> default-mode property doesn't really fit in...
>> 
>> Maybe we should rename it to native-mode then ?
> 
> Hmm, right, if it means native mode, then it is describing the
> hardware. But would it make sense to require that the native mode
> is the first mode in the list, then? This would make the separate 
> default-mode/native-mode property not needed.

I'm not sure whether device tree guarantees that nodes are enumerated
in a specific order. If it does, then that may be a reasonable solution.
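
For concreteness, the two options being weighed might look roughly like
this in a DT source fragment (node and property names follow the draft
binding under discussion and are illustrative, not final):

```dts
display-timings {
	/* Option A: an explicit native-mode (née default-mode) phandle */
	native-mode = <&timing0>;

	timing0: timing@0 {
		/* the panel's native timing */
	};

	timing1: timing@1 {
		/* an alternate timing */
	};

	/* Option B would drop the property entirely and define the
	 * first child node (timing@0 here) as the native mode,
	 * relying on child-node order being preserved. */
};
```

Option A stays valid regardless of how a flattened tree enumerates its
children, which is the concern raised above.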



[PATCH 1/2 v6] of: add helper to parse display timings

2012-10-08 Thread Stephen Warren
On 10/08/2012 02:25 AM, Guennadi Liakhovetski wrote:
> On Fri, 5 Oct 2012, Stephen Warren wrote:
> 
>> On 10/04/2012 03:35 PM, Guennadi Liakhovetski wrote:
>>> Hi Steffen
>>>
>>> Sorry for chiming in so late in the game, but I've long been wanting to 
>>> have a look at this and compare with what we do for V4L2, so, this seems a 
>>> great opportunity to me:-)
>>>
>>> On Thu, 4 Oct 2012, Steffen Trumtrar wrote:
>>
>>>> diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
>>>> b/Documentation/devicetree/bindings/video/display-timings.txt
>>
>>>> +timings-subnode
>>>> +---
>>>> +
>>>> +required properties:
>>>> + - hactive, vactive: Display resolution
>>>> + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
>>>> parameters
>>>> +   in pixels
>>>> +   vfront-porch, vback-porch, vsync-len: Vertical display timing 
>>>> parameters in
>>>> +   lines
>>>> + - clock: displayclock in Hz
>>>
>>> You're going to hate me for this, but eventually we want to actually 
>>> reference clock objects in our DT bindings. For now, even if you don't 
>>> want to actually add clock phandles and stuff here, I think, using the 
>>> standard "clock-frequency" property would be much better!
>>
>> In a definition of a display timing, we will never need to use the clock
>> binding; the clock binding would be used by the HW module that is
>> generating a timing, not by the timing definition itself.
> 
> You mean clock consumer bindings will be in the display device DT node? 
> And the display-timings node will be its child?

Yes

...
>>>> + - interlaced (bool)
>>>
>>> Is "interlaced" a property of the hardware, i.e. of the board? Can the 
>>> same display controller on one board require interlaced data and on 
>>> another board - progressive?
>>
>> Interlace is a property of a display mode. It's quite possible for a
>> particular display controller to switch between interlace and
>> progressive output at run-time. For example, reconfiguring the output
>> between 480i, 720p, 1080i, 1080p modes. Admittedly, if you're talking to
>> a built-in LCD display, you're probably always going to be driving the
>> single mode required by the panel, and that mode will likely always be
>> progressive. However, since this binding attempts to describe any
>> display timing, I think we still need this property per mode.
> 
> But why do you need this in the DT then at all?

Because the driver for the display controller has no idea what display
or panel will be connected to it.

> If it's fixed, as required 
> per display controller, then its driver will know it. If it's runtime 
> configurable, then it's a purely software parameter and doesn't depend on 
> the board?

interlace-vs-progressive isn't "fixed, as required per display
controller", but is a property of the mode being sent by the display
controller, and the requirements for that mode are driven by the
panel/display connected to the display controller, not the display
controller, in general.

...
>>> BTW, I'm not very familiar with display 
>>> interfaces, but for interlaced you probably sometimes use a field signal, 
>>> whose polarity you also want to specify here? We use a "field-even-active" 
>>> integer property for it.
>>
>> I think that's a property of the display controller itself, rather than
>> an individual mode, although I'm not 100% certain. My assertion is that
>> the physical interface that the display controller is driving will
>> determine whether embedded or separate sync is used, and in the separate
>> sync case, how the field signal is defined, and that all interlace modes
>> driven over that interface will use the same field signal definition.
> 
> In general, I might be misunderstanding something, but don't we have to 
> distinguish between 2 types of information about display timings: (1) is 
> defined by the display controller requirements, is known to the display 
> driver and doesn't need to be present in timings DT. We did have some of 
> these parameters in board data previously, because we didn't have proper 
> display controller drivers...

Yes, there probably is data of that kind, but the display mode timings
binding only addresses standardized display timing information, not
controller-specific information, and hence doesn't cover this case.

> (2) is board specific configuration, and is
> such it has to be present in DT.

Certainly, yes.

> In that way, doesn't "interlaced" belong to type (1) and thus doesn't need 
> to be present in DT?

I don't believe so.
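
Putting the properties quoted above together, a single timing subnode
for, say, a 1080i mode could look like the following sketch. All values
are illustrative, and note that the thread leaves open whether the
clock property ends up named clock or clock-frequency:

```dts
timing@0 {
	hactive = <1920>;
	vactive = <540>;		/* lines per field in this sketch */
	hfront-porch = <88>;
	hsync-len = <44>;
	hback-porch = <148>;
	vfront-porch = <2>;
	vsync-len = <5>;
	vback-porch = <15>;
	clock-frequency = <74250000>;	/* pixel clock in Hz */
	interlaced;			/* boolean, per mode as argued above */
};
```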


[PATCH 1/2 v6] of: add helper to parse display timings

2012-10-05 Thread Stephen Warren
On 10/05/2012 10:16 AM, Steffen Trumtrar wrote:
> On Thu, Oct 04, 2012 at 12:47:16PM -0600, Stephen Warren wrote:
>> On 10/04/2012 11:59 AM, Steffen Trumtrar wrote:
...
>>> +   for_each_child_of_node(timings_np, entry) {
>>> +   struct signal_timing *st;
>>> +
>>> +   st = of_get_display_timing(entry);
>>> +
>>> +   if (!st)
>>> +   continue;
>>
>> I wonder if that shouldn't be an error?
> 
> In the sense of a pr_err not a -EINVAL I presume?! It is a little bit quiet in
> case of a faulty spec, that is right.

I did mean return an error; if we try to parse something and can't,
shouldn't we return an error?

I suppose it may be possible to limp on and use whatever subset of modes
could be parsed and drop the others, which is what this code does, but
the code after the loop would definitely return an error if zero timings
were parseable.
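The compromise described here (skip faulty timing nodes, but fail when none
at all parse) can be sketched in plain C. Every name below is a hypothetical
stand-in for the patch's helpers, not the real OF API:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a parsed timing node; a zero clock marks a faulty spec. */
struct display_timing { int pixelclock; };

/* Stand-in for of_get_display_timing(): NULL on a faulty spec. */
static struct display_timing *parse_timing(struct display_timing *candidate)
{
	return (candidate && candidate->pixelclock > 0) ? candidate : NULL;
}

/* Limp on past faulty entries, but return an error (-EINVAL, i.e. -22)
 * if zero timings were parseable, as the code after the loop does. */
static int parse_all(struct display_timing **nodes, int n,
		     struct display_timing **out, int *n_out)
{
	int parsed = 0;

	for (int i = 0; i < n; i++) {
		struct display_timing *st = parse_timing(nodes[i]);

		if (!st)
			continue;	/* a pr_err/warning would go here */
		out[parsed++] = st;
	}
	*n_out = parsed;
	return parsed ? 0 : -22;
}
```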


[PATCH 1/2 v6] of: add helper to parse display timings

2012-10-05 Thread Stephen Warren
On 10/04/2012 03:35 PM, Guennadi Liakhovetski wrote:
> Hi Steffen
> 
> Sorry for chiming in so late in the game, but I've long been wanting to 
> have a look at this and compare with what we do for V4L2, so, this seems a 
> great opportunity to me:-)
> 
> On Thu, 4 Oct 2012, Steffen Trumtrar wrote:

>> diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
>> b/Documentation/devicetree/bindings/video/display-timings.txt

>> +timings-subnode
>> +---
>> +
>> +required properties:
>> + - hactive, vactive: Display resolution
>> + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
>> parameters
>> +   in pixels
>> +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
>> in
>> +   lines
>> + - clock: displayclock in Hz
> 
> You're going to hate me for this, but eventually we want to actually 
> reference clock objects in our DT bindings. For now, even if you don't 
> want to actually add clock phandles and stuff here, I think, using the 
> standard "clock-frequency" property would be much better!

In a definition of a display timing, we will never need to use the clock
binding; the clock binding would be used by the HW module that is
generating a timing, not by the timing definition itself.

That said, your comment about renaming the property to avoid any kind of
conceptual conflict is still quite valid. This is bike-shedding, but
"pixel-clock" might be more in line with typical video mode terminology,
although there's certainly preference in DT for using the generic term
clock-frequency that you proposed. Either is fine by me.

>> +optional properties:
>> + - hsync-active-high (bool): Hsync pulse is active high
>> + - vsync-active-high (bool): Vsync pulse is active high
> 
> For the above two we also considered using bool properties but eventually 
> settled down with integer ones:
> 
> - hsync-active = <1>
> 
> for active-high and 0 for active low. This has the added advantage of 
> being able to omit this property in the .dts, which then doesn't mean, 
> that the polarity is active low, but rather, that the hsync line is not 
> used on this hardware. So, maybe it would be good to use the same binding 
> here too?

I agree. This also covers the case where analog display connectors often
use polarity to differentiate similar modes, yet digital connectors
often always use a fixed polarity since the receiving device can
"measure" the signal in more complete ways.

If the board HW inverts these lines, the same property names can exist
in the display controller itself, and the two values XORd together to
yield the final output polarity.
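To make the XOR remark concrete, a minimal sketch (names invented for
illustration, not a real kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Combine the mode's hsync-active-high flag with a board-level
 * "this line is inverted" flag: the polarity actually driven on the
 * pin is simply the XOR of the two. */
static bool hsync_output_active_high(bool mode_active_high,
				     bool board_inverts_line)
{
	return mode_active_high ^ board_inverts_line;
}
```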

>> + - de-active-high (bool): Data-Enable pulse is active high
>> + - pixelclk-inverted (bool): pixelclock is inverted
> 
> We don't (yet) have a de-active property in V4L, don't know whether we'll 
> ever have to distinguish between what some datasheets call "HREF" and 
> HSYNC in DT, but maybe similarly to the above an integer would be 
> preferred. As for pixclk, we call the property "pclk-sample" and it's also 
> an integer.

Thinking about this more: de-active-high is likely to be a
board-specific property and hence something in the display controller,
not in the mode definition?

>> + - interlaced (bool)
> 
> Is "interlaced" a property of the hardware, i.e. of the board? Can the 
> same display controller on one board require interlaced data and on 
> another board - progressive?

Interlace is a property of a display mode. It's quite possible for a
particular display controller to switch between interlace and
progressive output at run-time. For example, reconfiguring the output
between 480i, 720p, 1080i, 1080p modes. Admittedly, if you're talking to
a built-in LCD display, you're probably always going to be driving the
single mode required by the panel, and that mode will likely always be
progressive. However, since this binding attempts to describe any
display timing, I think we still need this property per mode.
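As a sketch of why interlace belongs with the mode rather than the
controller: if it travels with the mode, switching 1080i to 1080p is just
selecting a different mode. Field names here are illustrative only, not the
real drm_display_mode layout:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-mode record: interlace is a property of the mode. */
struct mode { int vactive; bool interlaced; };

/* An interlaced mode scans half the active lines per field. */
static int scanned_lines_per_field(const struct mode *m)
{
	return m->interlaced ? m->vactive / 2 : m->vactive;
}
```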

> BTW, I'm not very familiar with display 
> interfaces, but for interlaced you probably sometimes use a field signal, 
> whose polarity you also want to specify here? We use a "field-even-active" 
> integer property for it.

I think that's a property of the display controller itself, rather than
an individual mode, although I'm not 100% certain. My assertion is that
the physical interface that the display controller is driving will
determine whether embedded or separate sync is used, and in the separate
sync case, how the field signal is defined, and that all interlace modes
driven over that interface will use the same field signal definition.




[PATCH 2/2 v6] of: add generic videomode description

2012-10-04 Thread Stephen Warren
On 10/04/2012 11:59 AM, Steffen Trumtrar wrote:
> Get videomode from devicetree in a format appropriate for the
> backend. drm_display_mode and fb_videomode are supported atm.
> Uses the display signal timings from of_display_timings

> +++ b/drivers/of/of_videomode.c

> +int videomode_from_timing(struct display_timings *disp, struct videomode *vm,

> + st = display_timings_get(disp, index);
> +
> + if (!st) {

It's a little odd to leave a blank line between those two lines.

Only half of the code in this file seems OF-related; the routines to
convert a timing to a videomode or drm display mode seem like they'd be
useful outside device tree, so I wonder if putting them into
of_videomode.c is the correct thing to do. Still, it's probably not a
big deal.


[PATCH 1/2 v6] of: add helper to parse display timings

2012-10-04 Thread Stephen Warren
On 10/04/2012 11:59 AM, Steffen Trumtrar wrote:

A patch description would be useful for something like this.

> diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
> b/Documentation/devicetree/bindings/video/display-timings.txt
> new file mode 100644
...
> +Usage in backend
> +----------------

Everything before that point in the file looks fine to me.

Everything after this point in the file seems to be Linux-specific
implementation details. Does it really belong in the DT binding
documentation, rather than some Linux-specific documentation file?

> +struct drm_display_mode
> +=======================
> +
> +  [ASCII timing diagram, top portion: the outer htotal x vtotal frame
> +   with the margin/porch region above the active area; garbled in the
> +   archive]
I suspect the entire horizontal box above (and the entire vertical box
all the way down the left-hand side) should be on the bottom/right
instead of top/left. The reason I think this is because all of
vsync_start, vsync_end, vdisplay have to be referenced to some known
point, which is usually zero or the start of the timing definition, /or/
there would be some value indicating the size of the top margin/porch
in order to say where those other values are referenced to.

> +  [ASCII timing diagram, remainder: the active hdisplay x vdisplay
> +   area inside the frame, with arrows marking hsync_start/hsync_end,
> +   vsync_start/vsync_end, and the htotal span along the bottom;
> +   garbled in the archive]

> diff --git a/drivers/of/of_display_timings.c b/drivers/of/of_display_timings.c

> +static int parse_property(struct device_node *np, char *name,
> + struct timing_entry *result)

> + if (cells == 1)
> + ret = of_property_read_u32_array(np, name, &result->typ, cells);

Should that branch not just set result->min/max to typ as well?
Presumably it'd prevent any code that interprets struct timing_entry
from having to check if those values were 0 or not?
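The suggestion can be sketched like this; struct timing_entry mirrors the
patch, while the helper itself is hypothetical:

```c
#include <assert.h>

/* Mirrors the patch's struct timing_entry: typical plus min/max bounds. */
struct timing_entry { unsigned int min, typ, max; };

/* When the property has a single cell, copy the typical value into min
 * and max as well, so consumers never need to special-case zeroes. */
static void fill_single_cell(struct timing_entry *e, unsigned int typ)
{
	e->typ = typ;
	e->min = typ;
	e->max = typ;
}
```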

> + else if (cells == 3)
> + ret = of_property_read_u32_array(np, name, &result->min, cells);




[PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-10-03 Thread Stephen Warren
On 10/02/2012 10:06 PM, Leela Krishna Amudala wrote:
On Mon, Oct 1, 2012 at 9:50 PM, Stephen Warren wrote:
>> On 09/30/2012 11:29 PM, Leela Krishna Amudala wrote:
>>> Hello Stephen Warren,
>>>
>>> The binding names that I use in my dts file should match with the
>>> names given in 
>>> http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html
>>> right?
>>
>> I don't think so; the binding in that link is for example:
>>
>>> + - xres, yres: Display resolution
>>> + - left-margin, right-margin, hsync-len: Horizontal Display timing 
>>> parameters
>>> +   in pixels
>>> +   upper-margin, lower-margin, vsync-len: Vertical display timing 
>>> parameters in
>>> +   lines
>>> + - clock: displayclock in Hz
>>
>> i.e. a bunch of separate properties, one for each value needed to
>> describe the display timing. However, your patch contains:
> 
> I mean to say that even I have to use separate properties for each one
> instead of grouping them.
> Also the names should match with the ones given in the example..?

Yes. The patch I pointed to isn't supposed to be just an example, but
/the/ standard way of representing display timings.





[PATCH 1/2] of: add helper to parse display specs

2012-10-01 Thread Stephen Warren
On 10/01/2012 01:16 PM, Mitch Bradley wrote:
> On 10/1/2012 6:53 AM, Stephen Warren wrote:
>> On 09/24/2012 09:35 AM, Steffen Trumtrar wrote:
>>> Parse a display-node with timings and hardware-specs from devicetree.
>>
>>> diff --git a/Documentation/devicetree/bindings/video/display 
>>> b/Documentation/devicetree/bindings/video/display
>>> new file mode 100644
>>> index 000..722766a
>>> --- /dev/null
>>> +++ b/Documentation/devicetree/bindings/video/display
>>
>> This should be display.txt.
>>
>>> @@ -0,0 +1,208 @@
>>> +display bindings
>>> +================
>>> +
>>> +display-node
>>> +------------
>>
>> I'm not personally convinced about the direction this is going. While I
>> think it's reasonable to define DT bindings for displays, and DT
>> bindings for display modes, I'm not sure that it's reasonable to couple
>> them together into a single binding.
>>
>> I think creating a well-defined timing binding first will be much
>> simpler than doing so within the context of a display binding; the
>> scope/content of a general display binding seems much less well-defined
>> to me at least, for reasons I mentioned before.
>>
>>> +required properties:
>>> + - none
>>> +
>>> +optional properties:
>>> + - default-timing: the default timing value
>>> + - width-mm, height-mm: Display dimensions in mm
>>
>>> + - hsync-active-high (bool): Hsync pulse is active high
>>> + - vsync-active-high (bool): Vsync pulse is active high
>>
>> At least those two properties should exist in the display timing instead
>> (or perhaps as well). There are certainly cases where different similar
>> display modes are differentiated by hsync/vsync polarity more than
>> anything else. This is probably more likely with analog display
>> connectors than digital, but I see no reason why a DT binding for
>> display timing shouldn't cover both.
>>
>>> + - de-active-high (bool): Data-Enable pulse is active high
>>> + - pixelclk-inverted (bool): pixelclock is inverted
>>
>>> + - pixel-per-clk
>>
>> pixel-per-clk is probably something that should either be part of the
>> timing definition, or something computed internally to the display
>> driver based on rules for the signal type, rather than something
>> represented in DT.
>>
>> The above comment assumes this property is intended to represent DVI's
>> requirement for pixel clock doubling for low-pixel-clock-rate modes. If
>> it's something to do with e.g. a single-data-rate vs. double-data-rate
>> property of the underlying physical connection, that's most likely
>> something that should be defined in a binding specific to e.g. LVDS,
>> rather than something generic.
>>
>>> + - link-width: number of channels (e.g. LVDS)
>>> + - bpp: bits-per-pixel
>>> +
>>> +timings-subnode
>>> +---
>>> +
>>> +required properties:
>>> +subnodes that specify
>>> + - hactive, vactive: Display resolution
>>> + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
>>> parameters
>>> +   in pixels
>>> +   vfront-porch, vback-porch, vsync-len: Vertical display timing 
>>> parameters in
>>> +   lines
>>> + - clock: displayclock in Hz
>>> +
>>> +There are different ways of describing a display and its capabilities. The 
>>> devicetree
>>> +representation corresponds to the one commonly found in datasheets for 
>>> displays.
>>> +The description of the display and its timing is split in two parts: first 
>>> the display
>>> +properties like size in mm and (optionally) multiple subnodes with the 
>>> supported timings.
>>> +If a display supports multiple signal timings, the default-timing can be 
>>> specified.
>>> +
>>> +Example:
>>> +
>>> +   display@0 {
>>> +   width-mm = <800>;
>>> +   height-mm = <480>;
>>> +   default-timing = <&timing0>;
>>> +   timings {
>>> +   timing0: timing@0 {
>>
>> If you're going to use a unit address ("@0") to ensure that node names
>> are unique (which is not mandatory), then each node also needs a reg
>> property with matching value, and #address-cells/#size-cells in the
>> parent. Instead, you could name the nodes something unique based on the
>> mode name to avoid this, e.g. 1080p24 { ... }.

[PATCH 1/2] of: add helper to parse display specs

2012-10-01 Thread Stephen Warren
On 09/24/2012 09:35 AM, Steffen Trumtrar wrote:
> Parse a display-node with timings and hardware-specs from devicetree.

> diff --git a/Documentation/devicetree/bindings/video/display 
> b/Documentation/devicetree/bindings/video/display
> new file mode 100644
> index 000..722766a
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/video/display

This should be display.txt.

> @@ -0,0 +1,208 @@
> +display bindings
> +================
> +
> +display-node
> +------------

I'm not personally convinced about the direction this is going. While I
think it's reasonable to define DT bindings for displays, and DT
bindings for display modes, I'm not sure that it's reasonable to couple
them together into a single binding.

I think creating a well-defined timing binding first will be much
simpler than doing so within the context of a display binding; the
scope/content of a general display binding seems much less well-defined
to me at least, for reasons I mentioned before.

> +required properties:
> + - none
> +
> +optional properties:
> + - default-timing: the default timing value
> + - width-mm, height-mm: Display dimensions in mm

> + - hsync-active-high (bool): Hsync pulse is active high
> + - vsync-active-high (bool): Vsync pulse is active high

At least those two properties should exist in the display timing instead
(or perhaps as well). There are certainly cases where different similar
display modes are differentiated by hsync/vsync polarity more than
anything else. This is probably more likely with analog display
connectors than digital, but I see no reason why a DT binding for
display timing shouldn't cover both.

> + - de-active-high (bool): Data-Enable pulse is active high
> + - pixelclk-inverted (bool): pixelclock is inverted

> + - pixel-per-clk

pixel-per-clk is probably something that should either be part of the
timing definition, or something computed internally to the display
driver based on rules for the signal type, rather than something
represented in DT.

The above comment assumes this property is intended to represent DVI's
requirement for pixel clock doubling for low-pixel-clock-rate modes. If
it's something to do with e.g. a single-data-rate vs. double-data-rate
property of the underlying physical connection, that's most likely
something that should be defined in a binding specific to e.g. LVDS,
rather than something generic.

> + - link-width: number of channels (e.g. LVDS)
> + - bpp: bits-per-pixel
> +
> +timings-subnode
> +---
> +
> +required properties:
> +subnodes that specify
> + - hactive, vactive: Display resolution
> + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing parameters
> +   in pixels
> +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
> in
> +   lines
> + - clock: displayclock in Hz
> +
> +There are different ways of describing a display and its capabilities. The 
> devicetree
> +representation corresponds to the one commonly found in datasheets for 
> displays.
> +The description of the display and its timing is split in two parts: first 
> the display
> +properties like size in mm and (optionally) multiple subnodes with the 
> supported timings.
> +If a display supports multiple signal timings, the default-timing can be 
> specified.
> +
> +Example:
> +
> + display@0 {
> + width-mm = <800>;
> + height-mm = <480>;
> + default-timing = <&timing0>;
> + timings {
> + timing0: timing@0 {

If you're going to use a unit address ("@0") to ensure that node names
are unique (which is not mandatory), then each node also needs a reg
property with matching value, and #address-cells/#size-cells in the
parent. Instead, you could name the nodes something unique based on the
mode name to avoid this, e.g. 1080p24 { ... }.
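A minimal sketch of the first option, in case it helps: unit addresses made
valid by matching reg properties, plus #address-cells/#size-cells in the
parent (node contents elided):

```dts
timings {
	#address-cells = <1>;
	#size-cells = <0>;

	timing@0 {
		reg = <0>;
		/* hactive, vactive, clock-frequency, ... */
	};

	timing@1 {
		reg = <1>;
	};
};
```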



[PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-10-01 Thread Stephen Warren
On 09/30/2012 11:29 PM, Leela Krishna Amudala wrote:
> Hello Stephen Warren,
> 
> The binding names that I use in my dts file should match with the
> names given in 
> http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html
> right?

I don't think so; the binding in that link is for example:

> + - xres, yres: Display resolution
> + - left-margin, right-margin, hsync-len: Horizontal Display timing parameters
> +   in pixels
> +   upper-margin, lower-margin, vsync-len: Vertical display timing parameters 
> in
> +   lines
> + - clock: displayclock in Hz

i.e. a bunch of separate properties, one for each value needed to
describe the display timing. However, your patch contains:

>>> + - samsung,fimd-display: This property should specify the phandle of the
>>> +   display device node which holds the video interface timing with the
>>> +   below mentioned properties.
>>> +
>>> +   - lcd-htiming: Specifies the horizontal timing for the overlay. The
>>> + horizontal timing includes four parameters in the following order.
>>> +
>>> + - horizontal back porch (in number of lcd clocks)
>>> + - horizontal front porch (in number of lcd clocks)
>>> + - hsync pulse width (in number of lcd clocks)
>>> + - Display panels X resolution.

A single lcd-htiming property, which contains 4 values. (and a similar
construct for the vertical timing).

That seems entirely different to me...




Re: [PATCH 1/2] of: add helper to parse display specs

2012-10-01 Thread Stephen Warren
On 09/24/2012 09:35 AM, Steffen Trumtrar wrote:
 Parse a display-node with timings and hardware-specs from devictree.

 diff --git a/Documentation/devicetree/bindings/video/display 
 b/Documentation/devicetree/bindings/video/display
 new file mode 100644
 index 000..722766a
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/video/display

This should be display.txt.

> @@ -0,0 +1,208 @@
> +display bindings
> +==
> +
> +display-node
> +

I'm not personally convinced about the direction this is going. While I
think it's reasonable to define DT bindings for displays, and DT
bindings for display modes, I'm not sure that it's reasonable to couple
them together into a single binding.

I think creating a well-defined timing binding first will be much
simpler than doing so within the context of a display binding; the
scope/content of a general display binding seems much less well-defined
to me at least, for reasons I mentioned before.

> +required properties:
> + - none
> +
> +optional properties:
> + - default-timing: the default timing value
> + - width-mm, height-mm: Display dimensions in mm

> + - hsync-active-high (bool): Hsync pulse is active high
> + - vsync-active-high (bool): Vsync pulse is active high

At least those two properties should exist in the display timing instead
(or perhaps as well). There are certainly cases where different similar
display modes are differentiated by hsync/vsync polarity more than
anything else. This is probably more likely with analog display
connectors than digital, but I see no reason why a DT binding for
display timing shouldn't cover both.

> + - de-active-high (bool): Data-Enable pulse is active high
> + - pixelclk-inverted (bool): pixelclock is inverted

> + - pixel-per-clk

pixel-per-clk is probably something that should either be part of the
timing definition, or something computed internally to the display
driver based on rules for the signal type, rather than something
represented in DT.

The above comment assumes this property is intended to represent DVI's
requirement for pixel clock doubling for low-pixel-clock-rate modes. If
it's something to do with e.g. a single-data-rate vs. double-data-rate
property of the underlying physical connection, that's most likely
something that should be defined in a binding specific to e.g. LVDS,
rather than something generic.

> + - link-width: number of channels (e.g. LVDS)
> + - bpp: bits-per-pixel
> +
> +timings-subnode
> +---
> +
> +required properties:
> +subnodes that specify
> + - hactive, vactive: Display resolution
> + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing parameters
> +   in pixels
> +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters in
> +   lines
> + - clock: displayclock in Hz
> +
> +There are different ways of describing a display and its capabilities. The devicetree
> +representation corresponds to the one commonly found in datasheets for displays.
> +The description of the display and its timing is split in two parts: first the display
> +properties like size in mm and (optionally) multiple subnodes with the supported timings.
> +If a display supports multiple signal timings, the default-timing can be specified.
> +
> +Example:
> +
> +	display@0 {
> +		width-mm = <800>;
> +		height-mm = <480>;
> +		default-timing = <&timing0>;
> +		timings {
> +			timing0: timing@0 {

If you're going to use a unit address (@0) to ensure that node names
are unique (which is not mandatory), then each node also needs a "reg"
property with matching value, and #address-cells/#size-cells in the
parent. Instead, you could name the nodes something unique based on the
mode name to avoid this, e.g. "1080p24 { ... }".
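Concretely, the two ways to keep such node names valid might look like this (editorial sketch, hypothetical values):

```dts
/* Option 1: keep the unit address, add the matching reg property and
 * the #address-cells/#size-cells in the parent node */
timings {
	#address-cells = <1>;
	#size-cells = <0>;
	timing0: timing@0 {
		reg = <0>;
	};
};

/* Option 2: drop the unit address and use a unique descriptive name */
timings {
	timing0: 1080p24 {
	};
};
```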



[PATCH v4] of: Add videomode helper

2012-09-24 Thread Stephen Warren
On 09/24/2012 12:26 PM, Rob Herring wrote:
> On 09/24/2012 10:45 AM, Stephen Warren wrote:
>> On 09/24/2012 07:42 AM, Rob Herring wrote:
>>> On 09/19/2012 03:20 AM, Steffen Trumtrar wrote:
>>>> This patch adds a helper function for parsing videomodes from the 
>>>> devicetree.
>>>> The videomode can be either converted to a struct drm_display_mode or a
>>>> struct fb_videomode.
>>
>>>> +++ b/Documentation/devicetree/bindings/video/displaymode
>>>> @@ -0,0 +1,74 @@
>>>> +videomode bindings
>>>> +==
>>>> +
>>>> +Required properties:
>>>> + - hactive, vactive: Display resolution
>>>> + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
>>>> parameters
>>>> +   in pixels
>>>> +   vfront-porch, vback-porch, vsync-len: Vertical display timing 
>>>> parameters in
>>>> +   lines
>>>> + - clock: displayclock in Hz
>>>
>>> A major piece missing is the LCD controller to display interface width
>>> and component ordering.
>>
>> I thought this binding was solely defining the timing of the video
>> signal (hence "video mode"). Any definition of the physical interface to
>> the LCD/display-connector is something entirely orthogonal, so it seems
>> entirely reasonable to represent that separately.
> 
> It is not orthogonal because in many cases the LCD panel defines the
> mode.

The LCD panel itself defines both the mode and the physical interface
properties. The mode does not imply anything about the physical
interface, nor does the physical interface imply anything about the
mode. So, they are in fact orthogonal. In other words, just because you
need both sets of information, doesn't make the two pieces of
information correlated.

>>>> +Example:
>>>> +
>>>> +  display@0 {
>>>
>>> It would be useful to have a compatible string here. We may not always
>>> know the panel type or have a fixed panel though. We could define
>>> "generic-lcd" or something for cases where the panel type is unknown.
>>>
>>>> +  width-mm = <800>;
>>>> +  height-mm = <480>;
>>
>> I would hope that everything in the example above this point is just
>> that - an example, and this binding only covers the display mode
>> definition - i.e. that part of the example below.
>>
> 
> It's fairly clear this binding is being defined based on what Linux
> supports vs. what the h/w looks like.
> 
>> If that's not the intent, as Rob says, there's a /ton/ of stuff missing.
> 
> Assuming not, what all is missing?

Everything related to the physical interface:

* For DSI, whatever it needs to be configured.
* For LVDS, e.g. number of lanes of R, G, B.
* Perhaps multi-pumping rates (# of clock signals to send each data
value for, to satisfy any minimum clock rates)
* Built-in displays typically need to be coupled with a backlight and
all the associated control of that.
* Pinctrl interaction.

and probably a bunch of other things I haven't thought about.



[PATCH v4] of: Add videomode helper

2012-09-24 Thread Stephen Warren
On 09/24/2012 07:42 AM, Rob Herring wrote:
> On 09/19/2012 03:20 AM, Steffen Trumtrar wrote:
>> This patch adds a helper function for parsing videomodes from the devicetree.
>> The videomode can be either converted to a struct drm_display_mode or a
>> struct fb_videomode.

>> +++ b/Documentation/devicetree/bindings/video/displaymode
>> @@ -0,0 +1,74 @@
>> +videomode bindings
>> +==
>> +
>> +Required properties:
>> + - hactive, vactive: Display resolution
>> + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
>> parameters
>> +   in pixels
>> +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
>> in
>> +   lines
>> + - clock: displayclock in Hz
> 
> A major piece missing is the LCD controller to display interface width
> and component ordering.

I thought this binding was solely defining the timing of the video
signal (hence "video mode"). Any definition of the physical interface to
the LCD/display-connector is something entirely orthogonal, so it seems
entirely reasonable to represent that separately.

>> +Example:
>> +
>> +display@0 {
> 
> It would be useful to have a compatible string here. We may not always
> know the panel type or have a fixed panel though. We could define
> "generic-lcd" or something for cases where the panel type is unknown.
> 
>> +width-mm = <800>;
>> +height-mm = <480>;

I would hope that everything in the example above this point is just
that - an example, and this binding only covers the display mode
definition - i.e. that part of the example below.

If that's not the intent, as Rob says, there's a /ton/ of stuff missing.

>> +modes {
>> +mode0: mode@0 {
>> +/* 1920x1080p24 */
>> +clock = <5200>;
>> +hactive = <1920>;
>> +vactive = <1080>;
>> +hfront-porch = <25>;
>> +hback-porch = <25>;
>> +hsync-len = <25>;
>> +vback-porch = <2>;
>> +vfront-porch = <2>;
>> +vsync-len = <2>;
>> +hsync-active-high;
>> +};
>> +};
>> +};



[PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-09-21 Thread Stephen Warren
On 09/21/2012 01:22 AM, Inki Dae wrote:
> 2012/9/21 Stephen Warren :
>> On 09/21/2012 05:22 AM, Leela Krishna Amudala wrote:
>>> This patch adds device tree based discovery support for exynos DRM-FIMD
>>> driver which includes driver modification to handle platform data in
>>> both the cases with DT and non-DT, Also adds the documentation for bindings.
>>
>>> diff --git a/Documentation/devicetree/bindings/drm/exynos/fimd.txt 
>>> b/Documentation/devicetree/bindings/drm/exynos/fimd.txt
>> ...
>>> + - samsung,fimd-display: This property should specify the phandle of the
>>> +   display device node which holds the video interface timing with the
>>> +   below mentioned properties.
>>> +
>>> +   - lcd-htiming: Specifies the horizontal timing for the overlay. The
>>> + horizontal timing includes four parameters in the following order.
>>> +
>>> + - horizontal back porch (in number of lcd clocks)
>>> + - horizontal front porch (in number of lcd clocks)
>>> + - hsync pulse width (in number of lcd clocks)
>>> + - Display panels X resolution.
>>> +
>>> +   - lcd-vtiming: Specifies the vertical timing for the overlay. The
>>> + vertical timing includes four parameters in the following order.
>>> +
>>> + - vertical back porch (in number of lcd lines)
>>> + - vertical front porch (in number of lcd lines)
>>> + - vsync pulse width (in number of lcd clocks)
>>> + - Display panels Y resolution.
>>
>> Should this not use the new videomode timings that are under discussion at:
>>
>> http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html
>>
> 
> ok, I agree with you. then the videomode helper is going to be merged
> to mainline(3.6)? if so, this patch should be reworked based on the
> videomode helper.

I think the videomode helpers would be merged for 3.7 at the very
earliest; 3.6 is cooked already. Given there are still some comments on
the binding, perhaps it won't happen until 3.8, but it'd be best to ask
on that thread so that people more directly involved with the status can
answer.


[PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-09-21 Thread Stephen Warren
On 09/21/2012 05:22 AM, Leela Krishna Amudala wrote:
> This patch adds device tree based discovery support for exynos DRM-FIMD
> driver which includes driver modification to handle platform data in
> both the cases with DT and non-DT, Also adds the documentation for bindings.

> diff --git a/Documentation/devicetree/bindings/drm/exynos/fimd.txt 
> b/Documentation/devicetree/bindings/drm/exynos/fimd.txt
...
> + - samsung,fimd-display: This property should specify the phandle of the
> +   display device node which holds the video interface timing with the
> +   below mentioned properties.
> +
> +   - lcd-htiming: Specifies the horizontal timing for the overlay. The
> + horizontal timing includes four parameters in the following order.
> +
> + - horizontal back porch (in number of lcd clocks)
> + - horizontal front porch (in number of lcd clocks)
> + - hsync pulse width (in number of lcd clocks)
> + - Display panels X resolution.
> +
> +   - lcd-vtiming: Specifies the vertical timing for the overlay. The
> + vertical timing includes four parameters in the following order.
> +
> + - vertical back porch (in number of lcd lines)
> + - vertical front porch (in number of lcd lines)
> + - vsync pulse width (in number of lcd clocks)
> + - Display panels Y resolution.

Should this not use the new videomode timings that are under discussion at:

http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html



Kconfig DRM_USB/DRM_UDL, and select vs. depends, and causing Tegra USB to be disabled

2012-09-04 Thread Stephen Warren
On 09/04/2012 02:00 PM, Guenter Roeck wrote:
> On Tue, Sep 04, 2012 at 01:19:12PM -0600, Stephen Warren wrote:
>> With respect to the following commits:
>>
>> df0b344 drm/usb: select USB_SUPPORT in Kconfig
>> 8f057d7 gpu/mfd/usb: Fix USB randconfig problems
>>
>> ... which end up with the following in next-20120904:
>>
>> config DRM_USB
>> depends on DRM
>> depends on USB_ARCH_HAS_HCD
>> select USB
>> select USB_SUPPORT
>>
>> config DRM_UDL
>> depends on DRM && EXPERIMENTAL
>> depends on USB_ARCH_HAS_HCD
>> select DRM_USB
>>
>> Surely this is backwards; these should be dependencies, not selects? In
>> other words:
>>
>> config DRM_USB
>> depends on DRM && USB
>>
>> config DRM_UDL
>> depends on DRM && EXPERIMENTAL && USB
>> select DRM_USB
>>
>> or perhaps:
>>
>> config DRM_USB
>> depends on DRM && USB
>>
>> config DRM_UDL
>> depends on DRM && EXPERIMENTAL && DRM_USB
>>
>> The problem here is that currently, the dependency logic for USB:
>>
>> config USB
>>  depends on USB_ARCH_HAS_HCD
>>
>> ... is duplicated into each of DRM_USB and DRM_UDL, thus requiring both
>> of those to be edited should the dependencies for USB ever change.
>
> This should be fixed with in https://patchwork.kernel.org/patch/1373371/ (drm:
> udl: usb: Fix recursive Kconfig dependency), which should make it into the 
> next
> iteration of linux-next.

Yes, this does appear to solve all the problems for me. Thanks.

I still tend to believe that drivers should probably depend on things
rather than select them, but given the common precedent for "select USB"
that exists here, others clearly don't agree!

Sorry; accidentally sent the email too early last time:-(


Kconfig DRM_USB/DRM_UDL, and select vs. depends, and causing Tegra USB to be disabled

2012-09-04 Thread Stephen Warren
With respect to the following commits:

df0b344 drm/usb: select USB_SUPPORT in Kconfig
8f057d7 gpu/mfd/usb: Fix USB randconfig problems

... which end up with the following in next-20120904:

config DRM_USB
depends on DRM
depends on USB_ARCH_HAS_HCD
select USB
select USB_SUPPORT

config DRM_UDL
depends on DRM && EXPERIMENTAL
depends on USB_ARCH_HAS_HCD
select DRM_USB

Surely this is backwards; these should be dependencies, not selects? In
other words:

config DRM_USB
depends on DRM && USB

config DRM_UDL
depends on DRM && EXPERIMENTAL && USB
select DRM_USB

or perhaps:

config DRM_USB
depends on DRM && USB

config DRM_UDL
depends on DRM && EXPERIMENTAL && DRM_USB

The problem here is that currently, the dependency logic for USB:

config USB
depends on USB_ARCH_HAS_HCD

... is duplicated into each of DRM_USB and DRM_UDL, thus requiring both
of those to be edited should the dependencies for USB ever change.

The current state of the code also causes some strange problem with
ARM's tegra_defconfig, whereby running "make tegra_defconfig" will
result in USB support fully enabled in .config as expected, yet
subsequently running "make oldconfig" will cause all USB support to be
removed from .config. For some reason, the above DRM logic is causing
CONFIG_USB_ARCH_HAS_HCD not to be selected (perhaps it isn't evaluated
because USB is selected, so there's no need to evaluate USB's
dependencies?). Arguably, this is a deficiency in Tegra's Kconfig, in
that it probably should say:

select USB_ARCH_HAS_EHCI

not:

select USB_ARCH_HAS_EHCI if USB_SUPPORT

... but it has contained the latter for quite some time, and it's always
worked before somehow.


[alsa-devel] [RFC] ASoC: snd_soc_jack for HDMI audio: does it make sense?

2012-08-23 Thread Stephen Warren
On 08/23/2012 07:44 PM, Ricardo Neri wrote:
> On 08/22/2012 02:55 AM, Takashi Iwai wrote:
>> At Tue, 21 Aug 2012 19:58:02 -0500,
>> Ricardo Neri wrote:
...
>>> Maybe the lack of audio support in drm is because the audio users should
>>> not talk to drm directly but to a lower level component (omapdrm,
>>> omapdss?). However, today there exists video technology supports audio
>>> as well, such as DisplayPort or HDMI. Could it make more sense now to
>>> provide audio support?
>>
>> The reason is that the audio and video handling are already separated
>> in the hardware level, at least, for desktop graphics.
> 
> Separated in what sense? Do they have separate register banks in

For NVIDIA desktop GPUs, this is certainly true, and I think so for any
Intel Azalia/HDA controller. The separate register banks are in
different PCI functions on the chip. The Intel Azalia/HDA spec also
architects specific ways that the audio and video parts interact (i.e.
ELD representation of EDID data, unsolicited response messages when the
video state changes, etc.) Oh, I see Takashi mentioned this below.

> independent power domains?

Most likely yes.

> Can audio an video work with complete
> independence. What happens if someone decides to power off video. Is the
> audio able to continue if required?

I believe audio DMA isn't affected by the video state, but I'm not 100%
sure of that.

>> The audio infoframe is passed via ELD to the audio controller part
>> upon plug/unplugging via HD-audio unsolicited event, and the audio
>> driver sets up the stuff according to the given ELD.  Thus no extra
>> interface between drm and ALSA was required in the kernel API level,
>> so far.
> 
> I see that the unsolicited event is used only to parse the EDID,
> correct? It also notifies about the jack status. Hence, if there the
> cable is disconnected the application will know and act accordingly. Is
> this understanding correct?

The kernel will know, and then passes the information on to user-space.


Re: [PATCH v2] of: Add videomode helper

2012-08-05 Thread Stephen Warren
On 08/03/2012 01:38 AM, Sascha Hauer wrote:
> Hi Stephen,
> 
> On Thu, Aug 02, 2012 at 01:35:40PM -0600, Stephen Warren wrote:
>> On 07/04/2012 01:56 AM, Sascha Hauer wrote:
>>> This patch adds a helper function for parsing videomodes from the
>>> devicetree.
>>> The videomode can be either converted to a struct drm_display_mode or a
>>> struct fb_videomode.
>>
>>> diff --git a/Documentation/devicetree/bindings/video/displaymode
>>> b/Documentation/devicetree/bindings/video/displaymode
>>
>>> +Required properties:
>>> + - xres, yres: Display resolution
>>> + - left-margin, right-margin, hsync-len: Horizontal Display timing parameters
>>> +   in pixels
>>> +   upper-margin, lower-margin, vsync-len: Vertical display timing parameters in
>>> +   lines
>>
>> Perhaps bike-shedding, but...
>>
>> For the margin property names, wouldn't it be better to use terminology
>> more commonly used for video timings rather than Linux FB naming. In
>> other words naming like:
>>
>> hactive, hsync-len, hfront-porch, hback-porch?
> 
> Can do. Just to make sure:
> 
> hactive == xres
> hsync-len == hsync-len
> hfront-porch == right-margin
> hback-porch == left-margin

I believe so yes.

>> a) They're already standardized and very common.
> 
> Indeed, that's a big plus for EDID. I have no intention of removing EDID
> data from the devicetree. There are situations where EDID is handy, for
> example when you get EDID data provided by your vendor.
> 
>> b) They allow other information such as a display's HDMI audio
>> capabilities to be represented, which can then feed into an ALSA driver.
>>
>> c) The few LCD panel datasheets I've seen actually quote their
>> specification as an EDID already, so deriving the EDID may actually be easy.
>>
>> d) People familiar with displays are almost certainly familiar with
>> EDID's mode representation. There are many ways of representing display
>> modes (sync position vs. porch widths, htotal specified rather than
>> specifying all the components and hence htotal being calculated etc.).
>> Not everyone will be familiar with all representations. Conversion
>> errors are less likely if the target is EDID's familiar format.
> 
> You seem to think about a different class of displays for which EDID
> indeed is a better way to handle. What I have to deal with here mostly
> are dumb displays which:
> 
> - can only handle their native resolution
> - Have no audio capabilities at all
> - come with a datasheet which specify a min/typ/max triplet for
>   xres,hsync,..., often enough they are scanned to pdf from some previously
>   printed paper.
> 
> These displays are very common on embedded devices, probably that's the
> reason I did not even think about the possibility that a single display
> might have different modes.

That's true, but as I mentioned, there are at least some dumb panels
(both I've seen recently) whose specification provides the EDID. I don't
know how common that is though, I must admit.

 e) You'll end up with exactly the same data as if you have a DDC-based
 external monitor rather than an internal panel, so you end up getting to
 a common path in display handling code much more quickly.
 
 All we have in our display driver currently is:
 
   edidp = of_get_property(np, edid, imxpd-edid_len);
   if (edidp) {
   imxpd-edid = kmemdup(edidp, imxpd-edid_len, GFP_KERNEL);
   } else {
   ret = of_get_video_mode(np, imxpd-mode, NULL);
   if (!ret)
   imxpd-mode_valid = 1;
   }

Presumably there's more to it though; something later checks
imxpd-mode_valid and if false, parses the EDID and sets up imxpd-mode,
etc.
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH v2] of: Add videomode helper

2012-08-03 Thread Stephen Warren
On 08/03/2012 01:38 AM, Sascha Hauer wrote:
> Hi Stephen,
> 
> On Thu, Aug 02, 2012 at 01:35:40PM -0600, Stephen Warren wrote:
>> On 07/04/2012 01:56 AM, Sascha Hauer wrote:
>>> This patch adds a helper function for parsing videomodes from the 
>>> devicetree.
>>> The videomode can be either converted to a struct drm_display_mode or a
>>> struct fb_videomode.
>>
>>> diff --git a/Documentation/devicetree/bindings/video/displaymode 
>>> b/Documentation/devicetree/bindings/video/displaymode
>>
>>> +Required properties:
>>> + - xres, yres: Display resolution
>>> + - left-margin, right-margin, hsync-len: Horizontal Display timing 
>>> parameters
>>> +   in pixels
>>> +   upper-margin, lower-margin, vsync-len: Vertical display timing 
>>> parameters in
>>> +   lines
>>
>> Perhaps bike-shedding, but...
>>
>> For the margin property names, wouldn't it be better to use terminology
>> more commonly used for video timings rather than Linux FB naming. In
>> other words naming like:
>>
>> hactive, hsync-len, hfront-porch, hback-porch?
> 
> Can do. Just to make sure:
> 
> hactive == xres
> hsync-len == hsync-len
> hfront-porch == right-margin
> hback-porch == left-margin

I believe so yes.

>> a) They're already standardized and very common.
> 
> Indeed, that's a big plus for EDID. I have no intention of removing EDID
> data from the devicetree. There are situations where EDID is handy, for
> example when you get EDID data provided by your vendor.
> 
>>
>> b) They allow other information such as a display's HDMI audio
>> capabilities to be represented, which can then feed into an ALSA driver.
>>
>> c) The few LCD panel datasheets I've seen actually quote their
>> specification as an EDID already, so deriving the EDID may actually be easy.
>>
>> d) People familiar with displays are almost certainly familiar with
>> EDID's mode representation. There are many ways of representing display
>> modes (sync position vs. porch widths, htotal specified rather than
>> specifying all the components and hence htotal being calculated etc.).
>> Not everyone will be familiar with all representations. Conversion
>> errors are less likely if the target is EDID's familiar format.
> 
> You seem to think about a different class of displays for which EDID
> indeed is a better way to handle. What I have to deal with here mostly
> are dumb displays which:
> 
> - can only handle their native resolution
> - Have no audio capabilities at all
> - come with a datasheet which specify a min/typ/max triplet for
>   xres,hsync,..., often enough they are scanned to pdf from some previously
>   printed paper.
> 
> These displays are very common on embedded devices, probably that's the
> reason I did not even think about the possibility that a single display
> might have different modes.

That's true, but as I mentioned, there are at least some dumb panels
(both I've seen recently) whose specification provides the EDID. I don't
know how common that is though, I must admit.

>> e) You'll end up with exactly the same data as if you have a DDC-based
>> external monitor rather than an internal panel, so you end up getting to
>> a common path in display handling code much more quickly.
> 
> All we have in our display driver currently is:
> 
>   edidp = of_get_property(np, "edid", &imxpd->edid_len);
>   if (edidp) {
>   imxpd->edid = kmemdup(edidp, imxpd->edid_len, GFP_KERNEL);
>   } else {
>   ret = of_get_video_mode(np, &imxpd->mode, NULL);
>   if (!ret)
>   imxpd->mode_valid = 1;
>   }

Presumably there's more to it though; something later checks
imxpd->mode_valid and if false, parses the EDID and sets up imxpd->mode,
etc.


[PATCH v2] of: Add videomode helper

2012-08-02 Thread Stephen Warren
On 07/05/2012 08:51 AM, Rob Herring wrote:
> On 07/04/2012 02:56 AM, Sascha Hauer wrote:
>> This patch adds a helper function for parsing videomodes from the devicetree.
>> The videomode can be either converted to a struct drm_display_mode or a
>> struct fb_videomode.

>> diff --git a/Documentation/devicetree/bindings/video/displaymode 
>> b/Documentation/devicetree/bindings/video/displaymode

>> +Example:
>> +
>> + display@0 {
>> + /* 1920x1080p24 */
>> + clock = <52000000>;
> 
> Should this use the clock binding? You probably need both constraints
> and clock binding though.

I don't think the clock binding is appropriate here. This binding
specifies a desired video mode, and should specify just the expected
clock frequency for that mode. Perhaps any tolerance imposed by the
specification used to calculate the mode timing could be specified too,
but the allowed tolerance (for a mode to be recognized by the display)
is more driven by the receiving device than the transmitting device.

The clock bindings should come into play in the display controller that
sends a signal in that display mode. Something like:

mode: hd1080p {
        clock = <52000000>;
        xres = <1920>;
        yres = <1080>;

};

display-controller {
        pixel-clock-source = <&clk ...>; // common clock bindings here
        default-mode = <&mode>;

> Often you don't know the frequency up front and/or have limited control
> of the frequency (i.e. integer dividers). Then you have to adjust the
> margins to get the desired refresh rate. To do that, you need to know
> the ranges of values a panel can support. Perhaps you just assume you
> can increase the right-margin and lower-margins as I think you will hit
> pixel clock frequency max before any limit on margins.

I think this is more usually dealt with in HW design and matching
components.

The PLL/... feeding the display controller is going to have some known
specifications that imply which pixel clocks it can generate. HW
designers will pick a panel that accepts a clock the display controller
can generate. The driver for the display controller will just generate
as near of a pixel clock as it can to the rate specified in the mode
definition. I believe it'd be unusual for a display driver to start
fiddling with front-back porch (or margin) widths to munge the timing;
that kind of thing probably worked OK with analog displays, but with
digital displays where each pixel is clocked even in the margins, I
think that would cause more problems than it solved.

Similarly for external displays, the display controller will just pick
the nearest clock it can. If it can't generate a close enough clock, it
will just refuse to set the mode. In fact, a display controller driver
would typically validate this when the set of legal modes was enumerated
when initially detecting the display, so user-space typically wouldn't
even request invalid modes.


[PATCH v2] of: Add videomode helper

2012-08-02 Thread Stephen Warren
On 07/04/2012 01:56 AM, Sascha Hauer wrote:
> This patch adds a helper function for parsing videomodes from the devicetree.
> The videomode can be either converted to a struct drm_display_mode or a
> struct fb_videomode.

> diff --git a/Documentation/devicetree/bindings/video/displaymode 
> b/Documentation/devicetree/bindings/video/displaymode

> +Required properties:
> + - xres, yres: Display resolution
> + - left-margin, right-margin, hsync-len: Horizontal Display timing parameters
> +   in pixels
> +   upper-margin, lower-margin, vsync-len: Vertical display timing parameters 
> in
> +   lines

Perhaps bike-shedding, but...

For the margin property names, wouldn't it be better to use terminology
more commonly used for video timings rather than Linux FB naming. In
other words naming like:

hactive, hsync-len, hfront-porch, hback-porch?

> + - clock: displayclock in Hz
> +
> +Optional properties:
> + - width-mm, height-mm: Display dimensions in mm
> + - hsync-active-high (bool): Hsync pulse is active high
> + - vsync-active-high (bool): Vsync pulse is active high
> + - interlaced (bool): This is an interlaced mode
> + - doublescan (bool): This is a doublescan mode
> +
> +There are different ways of describing a display mode. The devicetree 
> representation
> +corresponds to the one used by the Linux Framebuffer framework described 
> here in
> +Documentation/fb/framebuffer.txt. This representation has been chosen 
> because it's
> +the only format which does not allow for inconsistent parameters. Unlike the 
> Framebuffer
> +framework the devicetree has the clock in Hz instead of ps.

As Rob mentioned, I think there shouldn't be any mention of Linux FB here.

> +
> +Example:
> +
> + display@0 {

This node appears to describe a video mode, not a display, hence the
node name seems wrong.

Many displays can support multiple different video modes. As mentioned
elsewhere, properties like display width/height are properties of the
display not the mode.

So, I might expect something more like the following (various overhead
properties like reg/#address-cells etc. elided for simplicity):

disp: display {
        width-mm = <...>;
        height-mm = <...>;
        modes {
                mode@0 {
                        /* 1920x1080p24 */
                        clock = <52000000>;
                        xres = <1920>;
                        yres = <1080>;
                        left-margin = <25>;
                        right-margin = <25>;
                        hsync-len = <25>;
                        lower-margin = <2>;
                        upper-margin = <2>;
                        vsync-len = <2>;
                        hsync-active-high;
                };
                mode@1 {
                };
        };
};

display-connector {
        display = <&disp>;
        // interface-specific properties such as pixel format here
};

Finally, have you considered just using an EDID instead of creating
something custom? I know that creating an EDID is harder than writing a
few simple properties into a DT, but EDIDs have the following advantages:

a) They're already standardized and very common.

b) They allow other information such as a display's HDMI audio
capabilities to be represented, which can then feed into an ALSA driver.

c) The few LCD panel datasheets I've seen actually quote their
specification as an EDID already, so deriving the EDID may actually be easy.

d) People familiar with displays are almost certainly familiar with
EDID's mode representation. There are many ways of representing display
modes (sync position vs. porch widths, htotal specified rather than
specifying all the components and hence htotal being calculated etc.).
Not everyone will be familiar with all representations. Conversion
errors are less likely if the target is EDID's familiar format.

e) You'll end up with exactly the same data as if you have a DDC-based
external monitor rather than an internal panel, so you end up getting to
a common path in display handling code much more quickly.







Tegra DRM device tree bindings

2012-07-06 Thread Stephen Warren
On 07/05/2012 06:15 AM, Thierry Reding wrote:
> Here's a new proposal that should address all issues collected
> during the discussion of the previous one:
> 
> From tegra20.dtsi:
...

At a quick glance, that all seems pretty reasonable now.

> One problem I've come across when trying to get some rudimentary
> code working with this is that there's no longer a device which the
> DRM driver can bind to, because the top-level device (host1x) now
> has a separate driver.

Can't you just have the host1x driver trigger the instantiation of the
DRM driver? In other words, the host1x node is both the plain host1x
driver and the DRM driver. So, after initializing anything for host1x
itself, just end host1x's probe() with something like a call to
drm_platform_init(), passing in host1x's own pdev.

After all, it seems like the whole point of representing host1x in the
DT is to expose the graphics HW, so you'd need the DRM device in that
case.






Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 11:19 AM, Lucas Stach wrote:
...
> CMA is just a way of providing large contiguous address space blocks in
> a dynamic fashion. ...
> 
> TTM though solves more advanced matters, like buffer synchronisation
> between 3D and 2D block of hardware ...
> 
> IMHO the best solution would be to use CMA as a flexible replacement of
> the static carveout area and put TTM on top of this ...

Ah right, thanks for the explanation. That makes sense to me now.


Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 05:12 AM, Thierry Reding wrote:
> On Wed, Jun 27, 2012 at 05:59:55PM +0200, Lucas Stach wrote:
>> Am Mittwoch, den 27.06.2012, 16:44 +0200 schrieb Thierry Reding:
...
>>> In the ideal case I would want to not have a carveout size at
>>> all. However there may be situations where you need to make
>>> sure some driver can allocate a given amount of memory. Having
>>> to specify this using a kernel command-line parameter is
>>> cumbersome because it may require changes to the bootloader or
>>> whatever. So if you know that a particular board always needs
>>> 128 MiB of carveout, then it makes sense to specify it on a
>>> per-board basis.
>> 
>> If we go with CMA, this is a non-issue, as CMA allows to use the
>> contig area for normal allocations and only purges them if it
>> really needs the space for contig allocs.
> 
> CMA certainly sounds like the most simple approach. While it may
> not be suited for 3D graphics or multimedia processing later on, I
> think we could use it at a starting point to get basic framebuffer
> and X support up and running. We can always move to something more
> advanced like TTM later.

I thought the whole purpose of CMA was to act as the infrastructure
to provide buffers to 3D, camera, etc. in particular allowing sharing
of buffers between them. In other words, isn't CMA the memory manager?
If there's some deficiency with CMA for 3D graphics, it seems like
that should be raised with those designing CMA. Or, am I way off base
with my expectations of CMA?


Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 12:18 AM, Hiroshi Doyu wrote:
> On Wed, 27 Jun 2012 16:44:14 +0200
> Thierry Reding  wrote:
> 
>>> I think that "coherent_pool" can be used only when the amount of
>>> contiguous memory is short in your system. Otherwise even unnecessary.
>>>
>>> Could you explain a bit more why you want carveout size on per-board basis?
>>
>> In the ideal case I would want to not have a carveout size at all.
>> However there may be situations where you need to make sure some driver
>> can allocate a given amount of memory. Having to specify this using a
>> kernel command-line parameter is cumbersome because it may require
>> changes to the bootloader or whatever. So if you know that a particular
>> board always needs 128 MiB of carveout, then it makes sense to specify
>> it on a per-board basis.
> 
> Hm...I could understand somewhat;) but DT can also specify "bootargs"
> in dts file, which can support per-board-wide spec too, like the above
> sum of carveout needed from all drivers. I just want to avoid
> introducing a new parameter additionaly if we can make use of the
> existing mechanism.

The bootargs in the DT file is usually provided by (over-written by) the
bootloader. If we start requiring lots of random kernel command-line
arguments, that makes it more effort for the user of the bootloader
(e.g. distribution bootloader scripts, etc.) to create the kernel
command-line. I'd prefer to avoid that as much as possible. That said,
using a standardized command-line option that is (or will be) used by
all (ARM?) SoCs for the same purpose is reasonable, because there's
commonality there.


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/26/2012 11:07 PM, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 04:48:14PM -0600, Stephen Warren wrote:
...
> I actually like what you proposed above a lot, so if you don't
> mind either way I'll go with that proposal. Keeping the connector
> nodes as children of the outputs has the advantage of being able to
> reference them if we need it at some point. But it is also
> redundant in that a single output doesn't usually (never?) drive
> more than one connector.

Yes, I believe that each output is 1:1 with (the video portion of) a
connector. The display controllers obviously aren't 1:1.

> The same issue will have to be addressed for the CSI and VI nodes,
> but as I currently use neither of those I don't feel qualified to
> propose a binding for them. Also for the VI part we're completely
> missing documentation. Maybe somebody could push this to be
> released as well?

I did file a bug noting the request for VI documentation. At this
point in time, it's too early to say what, if anything, will come of that.

> If I understand correctly, most of the host1x children can also be
> chained in a processing pipeline to do postprocessing on video
> input, for example. I suppose that's generic and doesn't need to be
> represented in DT either, right?

Yes, I believe that's something internal to the driver.


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/27/2012 08:29 AM, Hiroshi Doyu wrote:
> Could you explain a bit more why you want carveout size on per-board basis?

Different boards have different amounts of memory, and are sometimes
targeted at different use-cases (e.g. server with simple display buffer,
vs. consumer-oriented device intended to play games with OpenGL
allocating lots of textures).

