Re: [PATCH v2 18/24] i2c: mpc: OF clock lookup for MPC512x

2013-07-18 Thread Russell King - ARM Linux
On Thu, Jul 18, 2013 at 10:20:52PM +0200, Gerhard Sittig wrote:
> + /* enable clock for the I2C peripheral (non fatal) */
> + clk = of_clk_get_by_name(node, "per");
> + if (!IS_ERR(clk)) {
> + clk_prepare_enable(clk);
> + clk_put(clk);
> + }
> +

This kind of hacked up approach to the clk API is exactly the thing I
really don't like seeing.  I don't know what it is... is the clk API
somehow difficult to use or what's the problem with doing stuff correctly?

1. Get the clock in your probe function.
2. Prepare it at the appropriate time.
3. Enable it appropriately.  (or if you want to combine 2 and 3, use
   clk_prepare_enable().)
4. Ensure that enables/disables and prepares/unprepares are appropriately
   balanced.
5. 'put' the clock in your remove function.

Certainly do not get-enable-put a clock.  You're supposed to hold on to
the clock all the time that you're actually using it.

Final point - if you want to make it non-fatal, don't play games like:

	clk = clk_get(whatever);
	if (IS_ERR(clk))
		clk = NULL;

	...
	if (clk)
		clk_prepare(clk);

Do this instead:

	clk = clk_get(whatever);
	...

	if (!IS_ERR(clk))
		clk_prepare(clk);
etc.

(And on this subject, I'm considering whether to make a change to the
clk API where clk_prepare() and clk_enable() return zero when passed
an error pointer - this means drivers with optional clocks don't have
to burden themselves with these kinds of checks.)
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 1/3] dmaengine: add dma_get_slave_sg_limits()

2013-07-18 Thread Russell King - ARM Linux
On Thu, Jul 18, 2013 at 11:46:39AM -0500, Joel Fernandes wrote:
> The API is optionally implemented by dmaengine drivers and when
> unimplemented will return a NULL pointer. A client driver using
> this API provides the required dma channel, address width, and
> burst size of the transfer. dma_get_slave_sg_limits() returns an
> SG limits structure with the maximum number and size of SG segments
> that the given channel can handle.

Please look at what's already in struct device:

struct device {
	...
	struct device_dma_parameters *dma_parms;
	...
};

This provides:

struct device_dma_parameters {
	/*
	 * a low level driver may set these to teach IOMMU code about
	 * sg limitations.
	 */
	unsigned int max_segment_size;
	unsigned long segment_boundary_mask;
};

Now, these are helpfully accessed via:

dma_get_max_seg_size(dev)
dma_set_max_seg_size(dev)
dma_get_seg_boundary(dev)
dma_set_seg_boundary(dev, mask)

Drivers already use these to work out how to construct the scatterlist
before passing it to the DMA API, which means that they should also be
used when creating a scatterlist for the DMA engine (think about it -
you have to use the DMA API to map the buffers for the DMA engine too.)

So, we already have two properties defined on a per-device basis: the
maximum size of a scatterlist segment, and the boundary over which any
segment must not cross.

The former ties up with your max_seg_len() property, though arguably it
may depend on the DMA engine access size.  The problem with implementing
this new API though is that the subsystems (such as SCSI) which already
use dma_get_max_seg_size() will be at odds with what is possible via the
DMA engine.

I strongly suggest using the infrastructure at device level and not
implementing some private DMA engine API to convey this information.

As for the maximum number of scatterlist entries, really that's a bug in
the DMA engine implementations if they can't accept arbitrary lengths.
I've created DMA engine drivers for implementations where you have to
program each segment individually, ones which can have the current and
next segments, as well as those which can walk a list.  Provided you get
informed of a transfer being completed, there really is no reason for a
DMA engine driver to limit the number of scatterlist entries that it
will accept.


Re: [PATCH v1 05/24] clk: wrap I/O access for improved portability

2013-07-18 Thread Russell King - ARM Linux
On Thu, Jul 18, 2013 at 09:04:02AM +0200, Gerhard Sittig wrote:
> The common clock API assumes (it's part of the contract) that
> there are potentially expensive operations like get, put, prepare
> and unprepare, as well as swift and non-blocking operations like
> enable and disable.

Let's get something straight here, because what you've said above is
wrong.

1. clk_get() and clk_put() are NOT part of the common clock API.
   They're separate - they're part of the clk API, and the infrastructure
   behind that is clkdev, which is a separately owned thing (by me.)

2. The "contract" of the clk API is defined by the clk API, not by some
   random implementation like the common clock API.  The clk API is
   maintained by myself, and is described in include/linux/clk.h

3. clk_prepare() and clk_unprepare() are functions which MUST only be called
   from contexts where sleeping is permitted.  These functions MAY sleep
   for whatever reason they require to, and as long as they require to.
   (This is the whole reason these two functions were created in the
   first place.)

4. clk_enable() and clk_disable() MAY be called from any context, but
   MUST never sleep.  If you need to talk over a non-atomic bus for these,
   then these functions should be no-ops, and the code which does that
   must be executed from the clk_prepare()/clk_unprepare() operations.

That is the "clk API" contract.  The CCF has no bearing on this; if it
disagrees, then the CCF is buggy and is non-conformant.


Re: [RFC PATCH v5 0/1] drivers: mfd: Versatile Express SPC support

2013-07-17 Thread Russell King - ARM Linux
On Wed, Jul 17, 2013 at 11:57:55AM -0400, Nicolas Pitre wrote:
> The sanest location at this point might simply be 
> drivers/platform/vexpress_spc.c or drivers/platform/vexpress/spc.c 
> depending on whether or not more such driver glue is expected in the 
> vexpress future.  No point putting "arm" in the path, especially if this 
> is later reused on arm64.

You wouldn't be making that argument if it were arch/arm64 and arch/arm32 -
you'd probably be arguing that "arm" made perfect sense.

Don't get too hung up on names please, it's really not worth the time
and effort being so pedantic, and being so pedantic leads to "pointless
churn" when someone comes along and wants to pedantically change the
names because they no longer match the users.


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sun, Jul 14, 2013 at 12:16:58AM +0200, Sylwester Nawrocki wrote:
> On 07/13/2013 11:02 PM, Russell King - ARM Linux wrote:
>> On Sat, Jul 13, 2013 at 10:43:29PM +0200, Sylwester Nawrocki wrote:
>>> I wasn't aware of it, thanks. I've seen a patch from Jiada Wang, it seems
>>> they're working on v4 with clock object reference counting. Presumably we
>>> need both clk_get() to be taking reference on the module and reference
>>> counted clk free, e.g. in cases where clock provider is a hot-pluggable
>>> device. It might be too paranoid though, I haven't seen hardware
>>> configurations where a clock source could be unplugged safely when whole
>>> system is running.
>>
>> I'm not going to accept refcounting being thrown into clk_get().  The
>> clkdev API already has refcounting, as much as it needs to.  It just
>> needs people to use the hooks that I provided back in 2008 when I
>> created the clkdev API for doing _precisely_ this job.
>>
>> Have a read through these commits, which back up my statement above:
>>
>> 0318e693d3a56836632bf1a2cfdafb7f34bcc703 - initial commit of the clkdev API
>> d72fbdf01fc77628c0b837d0dd2fd564fa26ede6 - converting Integrator to clkdev API
>>
>> and it will show you how to do refcounting.  The common clk API just
>> needs to stop defining __clk_get() and __clk_put() to be an empty
>> function and implement them appropriately for its clk implementation,
>> like they were always meant to be.
>
> Sure, I've already seen the above commits. This is how I understood it
> as well - __clk_get(), __clk_put() need to be defined by the common clk
> API, since it is going to replace all the arch specific implementations.
>
>> __clk_get() and __clk_put() are the clk-implementation specific parts
>> of the clkdev API, because the clkdev API is utterly divorced from the
>> internals of what a 'struct clk' actually is.  clkdev just treats a
>> 'struct clk' as a completely opaque type and never bothers poking
>> about inside it.
>
> OK, but at the clock's implementation side we may need, e.g. to use kref
> to keep track of the clock's state, and free it only when it is unprepared,
> all its children are unregistered, etc. ? Is it not what you stated in
> your comment to patch [1] ?

If you want to do refcounting on a clock (which you run into problems
as I highlighted earlier in this thread) then yes, you need to use a
kref, and take a reference in __clk_get() and drop it in __clk_put()
to make things safe.

Whether you also take a reference on the module supplying the clock or
not depends on whether you need to keep the code around to manipulate that
clock while there are users of it.

As I've already said - consider the possibilities with this scenario.
Here's one:

  A clock consumer is using a clock, but the clock supplier has been
  removed.  The clock consumer wants to change the state of the clock
  in some way - eg, system is suspending.  clk_disable() doesn't fail,
  but on resume, clk_enable() does... (and how many people assume that
  clk_enable() never fails?)  And who knows what rate the clock is now
  producing or whether it's even producing anything...

I'm not saying don't do the refcounting thing I mentioned back in June.
I'm merely pointing out the issues that there are.  There isn't a one
right answer here because each has their own advantages and disadvantages
(and problems.)


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 10:43:29PM +0200, Sylwester Nawrocki wrote:
> I wasn't aware of it, thanks. I've seen a patch from Jiada Wang, it seems
> they're working on v4 with clock object reference counting. Presumably we
> need both clk_get() to be taking reference on the module and reference
> counted clk free, e.g. in cases where clock provider is a hot-pluggable
> device. It might be too paranoid though, I haven't seen hardware
> configurations where a clock source could be unplugged safely when whole
> system is running.

I'm not going to accept refcounting being thrown into clk_get().  The
clkdev API already has refcounting, as much as it needs to.  It just
needs people to use the hooks that I provided back in 2008 when I
created the clkdev API for doing _precisely_ this job.

Have a read through these commits, which back up my statement above:

0318e693d3a56836632bf1a2cfdafb7f34bcc703 - initial commit of the clkdev API
d72fbdf01fc77628c0b837d0dd2fd564fa26ede6 - converting Integrator to clkdev API

and it will show you how to do refcounting.  The common clk API just
needs to stop defining __clk_get() and __clk_put() to be an empty
function and implement them appropriately for its clk implementation,
like they were always meant to be.

__clk_get() and __clk_put() are the clk-implementation specific parts
of the clkdev API, because the clkdev API is utterly divorced from the
internals of what a 'struct clk' actually is.  clkdev just treats a
'struct clk' as a completely opaque type and never bothers poking
about inside it.


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 07:44:58PM +0200, Sebastian Hesselbarth wrote:
> On 07/13/2013 01:12 PM, Russell King - ARM Linux wrote:
>> When I designed the clk API, I arranged for things like clk_get() to
>> take a reference on the module if the clock was supplied by a module.
>> You'll see some evidence of that by looking back in the git history
>> for arch/arm/mach-integrator/include/mach/clkdev.h.
>
> Last time I checked, clkdev API neither implements referencing nor
> unregister.

*Sigh*.

a613163dff04cbfcb7d66b06ef4a5f65498ee59b:
arch/arm/mach-integrator/include/mach/clkdev.h:
-struct clk {
-   unsigned long   rate;
-   const struct clk_ops*ops;
-   struct module   *owner;
-   const struct icst_params *params;
-   void __iomem*vcoreg;
-   void*data;
-};
-
-static inline int __clk_get(struct clk *clk)
-{
-   return try_module_get(clk->owner);
-}
-
-static inline void __clk_put(struct clk *clk)
-{
-   module_put(clk->owner);
-}

70ee65771424829fd092a1df9afcc7e24c94004b:
arch/arm/mach-integrator/impd1.c:
 static int impd1_probe(struct lm_device *dev)
...
-   for (i = 0; i < ARRAY_SIZE(impd1->vcos); i++) {
-   impd1->vcos[i].ops = &impd1_clk_ops,
-   impd1->vcos[i].owner = THIS_MODULE,
-   impd1->vcos[i].params = &impd1_vco_params,
-   impd1->vcos[i].data = impd1;
-   }
-   impd1->vcos[0].vcoreg = impd1->base + IMPD1_OSC1;
-   impd1->vcos[1].vcoreg = impd1->base + IMPD1_OSC2;
-
-   impd1->clks[0] = clkdev_alloc(&impd1->vcos[0], NULL, "lm%x:01000",
-   dev->id);
-   impd1->clks[1] = clkdev_alloc(&fixed_14745600, NULL, "lm%x:00100",
-   dev->id);
-   impd1->clks[2] = clkdev_alloc(&fixed_14745600, NULL, "lm%x:00200",
-   dev->id);
-   for (i = 0; i < ARRAY_SIZE(impd1->clks); i++)
-   clkdev_add(impd1->clks[i]);
...
 static void impd1_remove(struct lm_device *dev)
...
-   for (i = 0; i < ARRAY_SIZE(impd1->clks); i++)
-   clkdev_drop(impd1->clks[i]);

drivers/clk/clkdev.c:
/*
 * clkdev_drop - remove a clock dynamically allocated
 */
void clkdev_drop(struct clk_lookup *cl)
{
	mutex_lock(&clocks_mutex);
	list_del(&cl->node);
	mutex_unlock(&clocks_mutex);
	kfree(cl);
}
EXPORT_SYMBOL(clkdev_drop);

No, of course, I'm imagining all the above!

Now, today, the IM-PD/1 support got broken in respect of ensuring that
things are properly refcounted in the rush to convert things to this
wonderful new common clk API - because it's oh so much better.  Yes,
right, I'd forgotten the number one rule of kernel programming - always
sacrifice correctness when we get a new fantastic hip idea!  Silly me.

> This is on Mike's list and IIRC there already has been
> a proposal for unregister. Si5351 was the first clkdev driver ever
> that could possibly be unloaded, so there may be still some features
> missing.

Utter rubbish - it's not the first as you can see from the above.
Integrator had this support since the clk and clkdev APIs came along,
because the IM-PD/1 module was implemented as a module, and it has to
create and register clocks for its peripherals.

What you've found is a backwards step with the common clk API, nothing
more.


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 08:25:15AM -0600, Daniel Drake wrote:
> I guess the IRE falls into the same category as the DCON - we won't
> implement it for now, but knowing where it might fit in is useful.

I don't see much need at the moment for IRE.  IRE isn't going to be
useful for general graphics rotation as it only supports up to 16bpp
RGB.

It looks to me like this module was designed to rotate YUV/camera
images.  If we wanted to rotate a video image, then it would probably
be more efficient to use this, but it will cause a conversion of the
image format in doing so.


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 12:56:50PM +0200, Sylwester Nawrocki wrote:
> On 07/13/2013 10:35 AM, Jean-Francois Moine wrote:
>> On Fri, 12 Jul 2013 13:00:23 -0600 Daniel Drake  wrote:
>>> On Fri, Jul 12, 2013 at 12:39 PM, Jean-Francois Moine wrote:
>
 - the phandles to the clocks does not tell how the clock may be set by
the driver (it is an array index in the 510).
>>>
>>> In the other threads on clock selection, we decided that that exact
>>> information on how to select a clock (i.e. register bits/values) must
>>> be in the driver, not in the DT. What was suggested there is a
>>> well-documented scheme for clock naming, so the driver knows which
>>> clock is which. That is defined in the proposed DT binding though the
>>> "valid clock names" part. For an example driver implementation of this
>>> you can see my patch "armada_drm clock selection - try 2".
>>
>> OK.
>>
>> Sorry to go back to this thread.
>>
>> I use my Cubox for daily jobs as a desktop computer. My kernel is a DT
>> driven 3.10.0. The dove-drm, tda998x and si5351 (clock) are kernel
>> modules. I set 3 clocks in the DT for the LCD0: lcdclk, axi and extclk0
>
> Hmm, a bit off topic question, does it work when the si5351 module gets
> removed ? I can't see anything preventing clock provider module from being
> removed why any of its clocks is used and clk_unregister() function is
> currently unimplemented.

When I designed the clk API, I arranged for things like clk_get() to
take a reference on the module if the clock was supplied by a module.
You'll see some evidence of that by looking back in the git history
for arch/arm/mach-integrator/include/mach/clkdev.h.

If the common clk API has been designed without any thought to clocks
supplied by modules and making sure that in-use clocks don't go away,
then it's going to be a real pain to sort that out.  I don't think
refcounting clocks makes sense (what do you do with an enabled clock
that you then remove the module for but it's still being used?  Do you
just shut it down anyway?  Do you leave it running?  What about
subsequent calls?).

I think this is one case where taking a reference on the module supplying
it makes total sense.

>> (si5351). Normally, the external clock is used, but, sometimes, the
>> si5351 module is not yet initialized when the drm driver starts. So,
>> for 1920x1080p, it uses the lcdclk which sets the LCD clock to 133333
>> (400/3) instead of 148500. As a result, display and sound still work
>> correctly on my TV set thru HDMI.
>>
>> So, it would be nice to have 2 usable clocks per LCD, until we find a
>> good way to initialize the modules in the right order at startup time.
>
> Doesn't deferred probing help here ?

Indeed it does.  Just because one TV works with such a wrong clock does not
mean that all TVs and HDMI compatible monitors will do.  It's 11% out.

The reason that audio works is because of the way the HDMI transmitter works
- it can calculate the CTS value to send (by measuring it) and it sends that
value over the link so the TV can regenerate the correct audio clock from
the HDMI clock.

Whether that's going to be universally true, I don't know - it depends on
how much audio data gets sent via each frame.  As the HDMI clock is slower,
there would need to be more audio data sent.

>>  lcd0: lcd-controller@82 {
>>  compatible = "marvell,dove-lcd0";
> [...]
>>  };
>>
>>  lcd1: lcd-controller@81 {
>>  compatible = "marvell,dove-lcd1";
> [...]
>>  };
>
> Using different compatible strings to indicate multiple instances of same
> hardware doesn't seem right. Unless LCD0, LCD1 are really different pieces

They aren't.  They're 100% identical in the Armada 510.

> of hardware incompatible with each other I think it would be more correct
> to use same compatible property and DT node aliases, similarly as is now
> done with e.g. I2C busses:
>
>   aliases {
>   lcd0 = &lcd_0;  
>   lcd1 = &lcd_1;  
>   };
>
>   lcd_0: lcd-controller@82 {
>   compatible = "marvell,dove-lcd";

I'd much prefer marvell,armada-510-lcd rather than using the codenames for
the devices.  Otherwise we're going to run into totally different things
being used for different devices (eg, armada-xp...)


Re: Sharing *.dtsi between Linux architectures?

2013-07-12 Thread Russell King - ARM Linux
On Fri, Jul 12, 2013 at 01:58:35PM -0600, Stephen Warren wrote:
> Is there a (possibly just proposed) mechanism in place to allow *.dts
> from multiple Linux architectures to share common *.dtsi files?

Don't forget that the long term goal is to move these files out of the
kernel source, which means that they'll be a separately managed project,
possibly with a different file structure.


Re: [PATCH V3] of: Set the DMA mask to 64 bits when dma_addr_t is 64-bits

2013-07-08 Thread Russell King - ARM Linux
On Mon, Jul 08, 2013 at 06:03:57PM +0800, Ming Lei wrote:
> On Sat, Jul 6, 2013 at 5:15 PM, Russell King - ARM Linux wrote:
> > On Sat, Jul 06, 2013 at 10:21:05AM +0800, Ming Lei wrote:
> >> But please see below words in Documentation/DMA-API.txt:
> >>
> >>Further, the physical address of the memory must be within the
> >>dma_mask of the device (the dma_mask represents a bit mask of the
> >>addressable region for the device.  I.e., if the physical address of
> >>the memory anded with the dma_mask is still equal to the physical
> >>address, then the device can perform DMA to the memory).
> >>
> >> You may argue that the description is about dev->dma_mask, but the
> >> usage should be same.
> >>
> >> In fact, most of devices in ARM SoC(even with LPAE enabled) doesn't
> >> support 64bit DMA, so I don't see the point that the default mask is set
> >> as 64bit.
> >
> > There's another couple of issues there:
> >
> > (a) Linux assumes that physical memory starts at location zero.  There
> > are various places in the kernel that assume that this holds true:
> >
> > max_dma_pfn = (dma_mask >> PAGE_SHIFT) + 1
> >
> > One notable place is the block layer.  I've suggested to Santosh a
> > way to fix this but I need to help him a little more with it.
> >
> > (b) Device DMA addresses are a *separate* address space to the physical
> > address space.  Device DMA addresses are the addresses which need to
> > be programmed into the device to perform DMA to the identified
> > location.
> >
> > What (b) means is that if you are ending up with a 64-bit address to be
> > programmed into a 32-bit only register, there is something very very
> > wrong.  What this also means is that a 32-bit DMA mask should suffice,
> > because if the DMA address results in a 32-bit address _for the DMA
> > device_ then we are within its capabilities.
> >
> > In any case, the work around is not to set the DMA mask to be 64-bit.
> 
> So how about working around the problem by setting arm_dma_limit as
> (phys_addr_t)0xffffffff if CONFIG_ZONE_DMA is unset? Or other better
> solutions?
> 
> Otherwise enabling LPAE may cause system boot failure because
> mmc card may not work when arm_dma_limit becomes (u64)-1.

Well, working around it by bodging it means that the bodges will stay
and the problem will become even worse later, and we won't have the
weight of people saying it doesn't work to use as leverage to persuade
people that DMA masks need to change.  Sorry.


Re: [PATCH V3] of: Set the DMA mask to 64 bits when dma_addr_t is 64-bits

2013-07-08 Thread Russell King - ARM Linux
On Fri, Jul 05, 2013 at 12:33:21PM -0700, Laura Abbott wrote:
> On 7/3/2013 7:15 AM, Ming Lei wrote:
>> On Sat, Apr 27, 2013 at 5:32 AM, Rob Herring  wrote:
>>> On 04/26/2013 03:31 PM, Laura Abbott wrote:
 Currently, of_platform_device_create_pdata always sets the
 coherent DMA mask to 32 bits. On ARM systems without CONFIG_ZONE_DMA,
 arm_dma_limit gets set to ~0 or 0xffffffff on LPAE based
 systems. Since arm_dma_limit represents the smallest dma_mask
 on the system, the default of 32 bits prevents any dma_coherent
 allocation from succeeding unless clients manually set the
 dma mask first. Rather than make every client on an LPAE system set
 the mask manually, account for the size of dma_addr_t when setting
 the coherent mask.

 Signed-off-by: Laura Abbott 
 ---
   drivers/of/platform.c |2 +-
   1 files changed, 1 insertions(+), 1 deletions(-)

 diff --git a/drivers/of/platform.c b/drivers/of/platform.c
 index 0970505..5f0ba94 100644
 --- a/drivers/of/platform.c
 +++ b/drivers/of/platform.c
 @@ -214,7 +214,7 @@ struct platform_device *of_platform_device_create_pdata(
   #if defined(CONFIG_MICROBLAZE)
 dev->archdata.dma_mask = 0xffffffffUL;
   #endif
 - dev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
 + dev->dev.coherent_dma_mask = DMA_BIT_MASK(sizeof(dma_addr_t) * 8);
>>>
>>> This is going to change the mask from 32 to 64 bits on 64-bit powerpc
>>> and others possibly. Maybe it doesn't matter. I think it doesn't, but
>>> I'm not sure enough to apply for 3.10. So I'll queue it for 3.11.
>>
>> Without the patch, LPAE enabled board may not boot at all, but looks
>> it still isn't in -next tree.
>>
>> But I am wondering if it is a correct approach, because enabling LPAE
>> doesn't mean the I/O devices can support DMA to/from 64bit address, and
>> it is very probably devices can't do it at all.
>>
>
> The problem is the way the arm_dma_limit is set up, all dma allocations  
> are currently broken regardless of if the actual device supports 64-bit  
> addresses or not.

Please explain this statement.

> I previously asked about the arm_dma_limit and was told that the current  
> behavior is the correct approach (see  
> https://lists.ozlabs.org/pipermail/devicetree-discuss/2013-April/032729.html 
> and  
> https://lists.ozlabs.org/pipermail/devicetree-discuss/2013-April/032690.html) 
>
> The point here is to set the mask to something reasonable such that  
> allocations can succeed by default. If devices can't use part of the  
> memory space they can adjust the limit accordingly.

The point is arm_dma_limit is that it sets the limit of the memory you
get from the page allocators when you ask for a GFP_DMA allocation.

In the case of no GFP_DMA zone, all memory is available for allocations.
Hence, arm_dma_limit is set to the absolute maximum physical memory
address which an allocator could return.

In the case of a 32-bit only system, this is 32-bits.  In the case of a
LPAE system, it is 64-bit.  (This follows because arm_dma_limit is
phys_addr_t, and the size of this follows the bit size of the system.)

The only questionable case is where we have a LPAE system with a GFP_DMA
zone and arm_dma_limit is set to 0xffffffff - this limit is probably
too low.

Now the reason for the arm_dma_limit is very simple: if the streaming
APIs are passed a buffer outside of the DMA mask, we must reallocate
that memory.  We must have a way to get memory which is within the DMA
mask.  That's what passing GFP_DMA does.  If you have no GFP_DMA zone,
then any memory could be returned.  In the case of a LPAE system with
memory both above and below the 32-bit address range, and you try to
allocate a buffer which happens to be above the 32-bit boundary, what
do you do?  Do you just retry the allocation and hope for the best?
What if that allocation also came out above the 32-bit boundary?  Do
you continue gobbling up system memory until you get a page which you
can DMA to and then free all those buffers you unsuccessfully obtained?

We have a massive problem with DMA masks in the Linux kernel, because
of the x86-ism of assuming that physical memory always starts at
address zero.  This assumption simply is not true on ARM, even more so
with LPAE, which is why people are starting to see a problem.

Consider this case: you have an existing 32-bit DMA controller.  This
controller sets a 32-bit DMA mask, because that's all it is capable of
addressing.  It gets used on 32-bit systems, and it works fine.

It gets put onto a LPAE system where memory starts at the 4GB boundary.
The DMA controller can address the first 4GB of memory because that's
how it's wired up.  The DMA controller still has a 32-bit DMA mask
because as far as the DMA controller is concerned, nothing has changed.
But now things fail because the kernel assumes that a DMA mask is
something to do with a physical address.

The solution to this is 

Re: [PATCH V3] of: Set the DMA mask to 64 bits when dma_addr_t is 64-bits

2013-07-06 Thread Russell King - ARM Linux
On Sat, Jul 06, 2013 at 10:21:05AM +0800, Ming Lei wrote:
> But please see below words in Documentation/DMA-API.txt:
> 
>Further, the physical address of the memory must be within the
>dma_mask of the device (the dma_mask represents a bit mask of the
>addressable region for the device.  I.e., if the physical address 
> of
>the memory anded with the dma_mask is still equal to the physical
>address, then the device can perform DMA to the memory).
> 
> You may argue that the description is about dev->dma_mask, but the
> usage should be same.
> 
> In fact, most of devices in ARM SoC(even with LPAE enabled) doesn't
> support 64bit DMA, so I don't see the point that the default mask is set
> as 64bit.

There's another couple of issues there:

(a) Linux assumes that physical memory starts at location zero.  There
are various places in the kernel that assume that this holds true:

max_dma_pfn = (dma_mask >> PAGE_SHIFT) + 1

One notable place is the block layer.  I've suggested to Santosh a
way to fix this but I need to help him a little more with it.

(b) Device DMA addresses are a *separate* address space to the physical
address space.  Device DMA addresses are the addresses which need to
be programmed into the device to perform DMA to the identified
location.

What (b) means is that if you are ending up with a 64-bit address to be
programmed into a 32-bit only register, there is something very very
wrong.  What this also means is that a 32-bit DMA mask should suffice,
because if the DMA address results in a 32-bit address _for the DMA
device_ then we are within its capabilities.

In any case, the work around is not to set the DMA mask to be 64-bit.
Think about it.  What if you have 8GB of physical memory in a LPAE
system, but your DMA controllers can only be programmed with a 32-bit
address?

Lastly, think about what I said last night about most of the ARM drivers
being rather buggy in that they do not call either dma_set_mask() or
dma_set_coherent_mask().

So, I think we're well past the point where we need to stop assuming that
DMA masks somehow relate to physical addresses, and that they can be used
to indicate a range of physical addresses starting at zero and extending
up to and including the mask value.  To call such a thing a "mask" is
absolutely absurd.


Re: [PATCH V3] of: Set the DMA mask to 64 bits when dma_addr_t is 64-bits

2013-07-05 Thread Russell King - ARM Linux
On Wed, Jul 03, 2013 at 10:15:50PM +0800, Ming Lei wrote:
> 
> Without the patch, LPAE enabled board may not boot at all, but looks
> it still isn't in -next tree.
> 
> But I am wondering if it is a correct approach, because enabling LPAE
> doesn't mean the I/O devices can support DMA to/from 64bit address, and
> it is very probably devices can't do it at all.
> 
> Another way is to always set arm_dma_limit as 0x when
> CONFIG_ZONE_DMA is unset, and let driver override device's dma
> mask if the device supports 64bit DMA.

Okay, here's a teach-in on DMA masks, this is how it's supposed to work:

1. The bus code which creates the devices sets a default mask appropriate
   for the bus type.  For PCI, this is 32-bit, for ISA, it's 24-bit.

2. Before performing DMA, drivers should query the capabilities of the
   platform using dma_set_mask() for streaming transfers, and
   dma_set_coherent_mask() for coherent transfers.  Note: the DMA API
   specifies that a mask accepted by dma_set_mask() will also be
   accepted by dma_set_coherent_mask() - in other words, it is
   permissible _not_ to check the return value of dma_set_coherent_mask()
   iff that same mask is currently in use via dma_set_mask().

   (Side note: I have a patch in my tree which introduces
   dma_set_mask_and_coherent() which sets both.)

3. Drivers should attempt to set a mask appropriate for the device.
   If the device can drive 32-bit addresses, it requests a 32-bit
   mask.  If only 24-bit, a 24-bit mask.  If it wants to do more
   than 32-bit, it should set a larger mask.

   (See the "DMA addressing limitations" section of
Documentation/DMA-API-HOWTO.txt).

4. PCI drivers use this as part of a negotiation with the rest of the
   system; you can plug a 64-bit capable PCI card into a 32-bit only
   capable host, and it should work - the driver should try to
   request the 64-bit mask first, if that succeeds then it can use
   DMA to 64-bit addresses.  If it fails, then it should then try to
   set a 32-bit mask.

So, that's more or less what's currently "known" of DMA masks.  What a lot
of ARM drivers do is totally ignore (3) and just assume that the platform
code set the mask up appropriately.  This is an incorrect assumption, and
it's something I'm slowly fixing where I have the knowledge.

What we shouldn't be relying upon is the default DMA mask being correct.

Last point is - the device actually doing DMA should be the one which the
DMA mask counts for above; if the device is just a user of DMA, then its
DMA mask should not matter (if it does, it's likely because of x86/PCI
assumptions made in the subsystem code) and actually should _not_ be set.
In other words, the DMA engine probe function should set the DMA mask on
the DMA controller, and you should do DMA mappings against the DMA
controller's struct device, not against the IO peripheral's struct
device.


Re: Best practice device tree design for display subsystems/DRM

2013-07-05 Thread Russell King
On Fri, Jul 05, 2013 at 09:37:34AM +0100, Grant Likely wrote:
> Alternatively, you can have the same effect with a property or set of
> properties in the controller node that contains phandles to the
> required devices. That would provide the driver with the same
> information about which devices must be present.

How do you go from phandle to something-that-the-driver-for-that-device-
has-setup?

From what I can see, you can go from phandle to OF node, but no further.

I'm guessing we'd need some kind of "registry" for sub-drivers with
these structures to register their devices OF node plus "shared" data
so that other drivers can find it.  "shared" data might be a
standardized operations struct or something similar to 'struct
device_driver' but for componentised devices.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-04 Thread Russell King
On Thu, Jul 04, 2013 at 10:58:17AM +0200, Sascha Hauer wrote:
> On Thu, Jul 04, 2013 at 09:40:52AM +0100, Russell King wrote:
> > Wrong.  Please read the example with the diagrams I gave.  Consider
> > what happens if you have two display devices connected to a single
> > output, one which fixes the allowable mode and one which _can_
> > reformat the selected mode.
> 
> What you describe here is a forced clone mode. This could be described
> in the devicetree so that a driver wouldn't start before all connected
> displays (links) are present, but this should be limited to the affected
> path, not to the whole componentized device.

Okay, to throw a recent argument back at you: so what in this scenario
if you have a driver for the fixed-mode device but not the other device?

It's exactly the same problem which you were describing to Sebastian
just a moment ago with drivers missing from the supernode approach -
you can't start if one of those "forced clone" drivers is missing.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-04 Thread Russell King
On Thu, Jul 04, 2013 at 10:33:07AM +0200, Sascha Hauer wrote:
> A componentized device never completes and it doesn't have to. A
> componentized device can start once there is a path from an input
> (crtc, i2s unit) to an output (connector, speaker).

Sorry for the incomplete reply.

If you read all the messages in this thread, then you will realise that
DRM does not support an incremental startup approach.  It needs to know
everything at the point of "load".

> Without supernode you can just start once you have everything between
> the crtc and lvds nodes. If later a hdmi device joins in then you can
> either notify the users (provided the DRM/KMS API supports it) or just
> ignore it until the DRM device gets reopened.

It's not a case that you can ignore it until the "DRM device gets reopened"
because the DRM device never shuts down.  You'd have to ignore it until
you tear down what you have already registered into DRM, causing all
the display hardware to be shutdown, and then re-"load" DRM.

To make this work, you would have to modify not only DRM to allow that,
but also the framebuffer layer too.  Are you volunteering? :)

I don't think that Sebastian nor myself have either the motivation nor
the time available to go down that route of majorly rewriting kernel
subsystems.

Not only that but I believe it to be an unsafe approach as I've already
outlined.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-04 Thread Russell King
On Thu, Jul 04, 2013 at 10:33:07AM +0200, Sascha Hauer wrote:
> On Wed, Jul 03, 2013 at 10:52:49AM +0100, Russell King wrote:
> > > > Sorry but I'd like to say that this cannot be used commonly. Shouldn't 
> > > > you
> > > > really consider Linux framebuffer or other subsystems? The above dtsi 
> > > > file
> > > > is specific to DRM subsystem. And I think the dtsi file has no any
> > > > dependency on certain subsystem so board dtsi file should be considered 
> > > > for
> > > > all device drivers based on other subsystems: i.e., Linux framebuffer, 
> > > > DRM,
> > > > and so no. So I *strongly* object to it. All we have to do is to keep 
> > > > the
> > > > dtsi file as is, and to find other better way that can be used commonly 
> > > > in
> > > > DRM.
> > > 
> > > +1 for not encoding the projected usecase of the graphics subsystem into
> > > the devicetree. Whether the two LCD controllers shall be used together
> > > or separately should not affect the devicetree. devicetree is about
> > > hardware description, not configuration.
> > 
> > And if we listen to that argument, then this problem is basically
> > impossible to solve sanely.
> > 
> > Are we really saying that we have no acceptable way to represent
> > componentized devices in DT?  If that's true, then DT fails to represent
> > quite a lot of ARM hardware, and frankly we shouldn't be using it.  I
> > can't believe that's true though.
> > 
> > The problem is that even with an ASoC like approach, that doesn't work
> > here because there's no way to know how many "components" to expect.
> > That's what the "supernode" is doing - telling us what components group
> > together to form a device.
> 
> A componentized device never completes and it doesn't have to. A
> componentized device can start once there is a path from an input
> (crtc, i2s unit) to an output (connector, speaker).

Wrong.  Please read the example with the diagrams I gave.  Consider
what happens if you have two display devices connected to a single
output, one which fixes the allowable mode and one which _can_
reformat the selected mode.

If you go down that path, you risk driving the LCD panel with
inappropriate timings which may damage it.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-03 Thread Russell King
On Wed, Jul 03, 2013 at 08:43:20PM +0900, Inki Dae wrote:
> In case of fbdev, framebuffer driver would use lcd0 or lcd1 driver, or lcd0
> and lcd1 drivers which are placed in drivers/video/backlight/.

No, that's totally wrong.  Framebuffer drivers are not backlights.
Framebuffer drivers go in drivers/video not drivers/video/backlight.

> And let's assume the following:
> 
> On board A
>  Display controller - lcd 0
> On board B
>  Display controller - lcd 1
> On board C
>  Display controller - lcd 0 and lcd 1
> 
> Without the super node, user could configure Linux kernel through menuconfig
> like below;
>   board A:  enabling lcd 0, and disabling lcd 1,
>   board B: disabling lcd 0, and enabling lcd 1,
>   board C: enabling lcd 0 and lcd 1.

I don't think so.  By using menuconfig, you completely miss the point of
using DT - which is to allow us to have a single kernel image which can
support multiple boards with different configurations, even different
SoCs.

> All we have to do is to configure menuconfig to enable only drivers for
> certain board. Why does fbdev need the super node? Please give me comments
> if there is my missing point.

fbdev needs the supernode _OR_ some way to specify that information which
you're putting into menuconfig, because what's correct for the way one
board is physically wired is not correct for how another board is
physically wired.

With that information in menuconfig, you get a kernel image which can
support board A, or board B, or board C but not a single kernel image
which can support board A and board B and board C by loading that very
same kernel image onto all three boards with just a different DT image.

This is the *whole* point of ARM moving over to DT.

If we wanted to use menuconfig to sort these kinds of board specific
details, we wouldn't be investing so much time and effort into moving
over to DT for ARM.  In fact, we used to use menuconfig to sort out
some of these kinds of details, and we've firmly decided that this is
the wrong approach.

Today, there is a very strong push towards having a single kernel image
which runs on every (modern) ARM board with DT describing not only the
board level hardware but also the internal SoC as well.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-03 Thread Russell King
On Wed, Jul 03, 2013 at 12:52:37PM +0200, Sebastian Hesselbarth wrote:
> But honestly, I see no way around it and it is the only way
> to allow to even have the decision for one or two cards at all.
> There is no way for auto-probing the users intention...

It's not _just_ about the user's intention - forget that, because really
it's to do with solving a much bigger question, and that question is:

How do we know when all the components are present?

In other words, how do we know that we have LCD0 and LCD1 connected to
the DCON, which is connected to a LCD and/or a HDMI transceiver.  How
do we know that the on-board VGA DACs are wired up and to be used?
How do we know which I2C bus the VGA port is connected to, and whether
to expect an I2C bus?

Let's look at the Cubox setup (sorry, but you _will_ have to use a
fixed-width font for this):

CPU bus
  |
  +-I2C ------ TDA998X --(HDMI)--> Display
  |               |
  |       (RGB888+clock+sync)
  +-LCD0 -.      /
  |        +--DCON ---(VGA)---> not wired
  +-LCD1 (unused)-'

DCON can allegedly route the data from LCD0 or LCD1 to the parallel
interface which the TDA998x sits on, and/or the VGA interface.  In
the case of other setups, the TDA998x may be a LCD panel.

The OLPC setup (which seems to be the more common case in terms of the
on-SoC device structure):

CPU bus
|
+-LCD ---(RGB666+clock+sync)> LCD panel

and I believe an HDMI transceiver somewhere.

In the above diagrams, "LCD" and "LCD0"/"LCD1" are essentially all the
same basic IP block, so they should use the same driver code.  Moreover,
each named element is a separate platform device.

In the first, to drive that correctly, you need the following before
"loading" the display system:
1. LCD0, and optionally LCD1 and DCON to be found and known to display driver.
2. I2C driver to be probed and available for use.
3. TDA998x to be found and known to display driver.
Only once you have all those components can you allow display to "load".

Now consider the case where the TDA998x is not present but the parallel
interface is connected directly to a LCD panel.  This then becomes:
1. LCD0, and optionally LCD1 and DCON to be found and known to display driver.
2. LCD panel details known to display driver.

If the VGA port is being used, then both of these cases need to be
supplemented with:
N. I2C bus for VGA DDC to be probed and available for use.
N+1. DCON must be known to the display driver.
N+2. LCD1 required if different display modes on the devices are required.

In the OLPC case, it's just:
1. LCD to be found and known to display driver.
2. LCD panel details known to display driver.

What you should be getting from the above is that the platform devices
which are required for any kind of display subsystem driver to initialize
is not really a function of the "software" use case, but how (a) the
board hardware has been designed and put together, and (b) the internal
structure of the SoC.

Moreover, the problem which we're facing is this: how does a display
driver know which platform devices to expect from a DT description to
make the decision that all parts required for the physical wiring of
the board are now present.

Consider this too: what if you have a LCD panel on your RGB888 interface
which is also connected to a HDMI transceiver which can do scaling and
re-syncing (basically format conversion - the TDA998x appears to have
this capability), and you drive it with a mode suitable for HDMI but not
the LCD panel because the driver doesn't know that there's a LCD panel
also connected?  This is why I feel that the hotplug idea is actually
rather unsafe, and if we go down that path we're building in that risk.

(And I think the OLPC guys may be have exactly that kind of setup...)

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-03 Thread Russell King
On Wed, Jul 03, 2013 at 06:48:41PM +0900, Inki Dae wrote:
> That's not whether we can write device driver or not. dtsi is common spot in
> other subsystems. Do you think the cardX node is meaningful to other
> subsystems?

Yes, because fbdev could also use it to solve the same problem which we're
having with DRM.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-03 Thread Russell King
On Wed, Jul 03, 2013 at 11:02:42AM +0200, Sascha Hauer wrote:
> On Wed, Jul 03, 2013 at 05:57:18PM +0900, Inki Dae wrote:
> > > video {
> > >   /* Single video card w/ multiple lcd controllers */
> > >   card0 {
> > >   compatible = "marvell,armada-510-display";
> > >   reg = <0 0x3f00 0x100>; /* video-mem hole */
> > >   /* later: linux,video-memory-size = <0x100>; */
> > >   marvell,video-devices = <&lcd0 &lcd1 &dcon>;
> > >   };
> > > 
> > >   /* OR: Multiple video card w/ single lcd controllers */
> > >   card0 {
> > >   compatible = "marvell,armada-510-display";
> > >   ...
> > >   marvell,video-devices = <&lcd0>;
> > >   };
> > > 
> > >   card1 {
> > >   compatible = "marvell,armada-510-display";
> > >   ...
> > >   marvell,video-devices = <&lcd1>;
> > >   };
> > > };
> > 
> > Sorry but I'd like to say that this cannot be used commonly. Shouldn't you
> > really consider Linux framebuffer or other subsystems? The above dtsi file
> > is specific to DRM subsystem. And I think the dtsi file has no any
> > dependency on certain subsystem so board dtsi file should be considered for
> > all device drivers based on other subsystems: i.e., Linux framebuffer, DRM,
> > and so no. So I *strongly* object to it. All we have to do is to keep the
> > dtsi file as is, and to find other better way that can be used commonly in
> > DRM.
> 
> +1 for not encoding the projected usecase of the graphics subsystem into
> the devicetree. Whether the two LCD controllers shall be used together
> or separately should not affect the devicetree. devicetree is about
> hardware description, not configuration.

And if we listen to that argument, then this problem is basically
impossible to solve sanely.

Are we really saying that we have no acceptable way to represent
componentized devices in DT?  If that's true, then DT fails to represent
quite a lot of ARM hardware, and frankly we shouldn't be using it.  I
can't believe that's true though.

The problem is that even with an ASoC like approach, that doesn't work
here because there's no way to know how many "components" to expect.
That's what the "supernode" is doing - telling us what components group
together to form a device.

Moreover, if you pay attention to my proposal, what you will realise is
that it isn't DRM specific - it's totally subsystem agnostic.  All it's
doing is collecting a set of other devices together and only then
publishing a device representing the full set of sub-devices.

Now think about that: what is DRM specific about that solution?  What
is the DRM specific about "collecting a set of devices together and
publishing a new device" ?

How is this not "describing the hardware"?  If I attach a HDMI transceiver
to the DCON which is then connected to LCD0, is it not "describing the
hardware" to put into DT that LCD0, DCON, and the HDMI transceiver are
all connected together and therefore are required?  One of the points of
DT after all is that it can and should be used to represent the
relationship between devices.

No - using the tree approach doesn't work, because LCD0, LCD1 and DCON
are all on the same physical bus, but are themselves connected together.
If you like, there are multiple hierarchies here - there's the bus
hierarchy, and then there's the device hierarchy.  Both of these
hierarchies need to be represented in DT, otherwise you're not describing
the hardware properly.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-02 Thread Russell King
On Wed, Jul 03, 2013 at 08:02:05AM +1000, Dave Airlie wrote:
> Have you also considered how suspend/resume works in such a place,
> where every driver is independent? The ChromeOS guys have bitched
> before about the exynos driver which is has lots of sub-drivers, how
> do you control the s/r ordering in a crazy system like that? I'd
> prefer a central driver, otherwise there is too many moving parts.

From earlier in the evolution of Armada DRM, that has also been my
preferred idea - though probably not quite how people think.

My idea was to have a separate "driver" assemble all the constituent
parts, and then register the "armada-drm" platform device providing
via platform resources and/or platform data all the necessary
information (maybe not even bothering to decode the OF nodes but just
provide a collection of nodes for each constituent part.)

Such a thing could be turned into a generic solution for all the
multi-part drivers.  If we use Sebastian's idea of using phandles
(it seems there's a precedent for it with the direction v4l2 is
going to solve a similar problem) then we likely have a standard
way of describing component-ized display setups in DT.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-02 Thread Russell King
On Tue, Jul 02, 2013 at 09:57:32PM +0200, Sebastian Hesselbarth wrote:
> I am against a super node which contains lcd and dcon/ire nodes. You can
> enable those devices on a per board basis. We add them to dove.dtsi but
> disable them by default (status = "disabled").
>
> The DRM driver itself should get a video-card node outside of  
> soc/internal-regs where you can put e.g. video memory hole (or video
> mem size if it will be taken from RAM later)
>
> About the unusual case, I guess we should try to get both lcd
> controllers into one DRM driver. Then support mirror or screen
> extension X already provides. For those applications where you want
> X on one lcd and some other totally different video stream - wait
> for someone to come up with a request or proposal.

Well, all I can say then is that the onus is on those who want to treat
the components as separate devices to come up with some foolproof way
to solve this problem which doesn't involve making assumptions about
the way that devices are probed and doesn't end up creating artificial
restrictions on how the devices can be used - and doesn't end up burdening
the common case with lots of useless complexity that they don't need.

It's _that_ case which needs to come up with a proposal about how to
handle it because you _can't_ handle it at the moment in any sane
manner which meets the criteria I've set out above, and at the moment
the best proposal by far to resolve that is the "super node" approach.

There is _no_ way in the device model to combine several individual
devices together into one logical device safely when the subsystem
requires that there be a definite point where everything is known.
That applies even more so with -EPROBE_DEFER.  With the presence of
such a thing, there is now no logical point where any code can say
definitively that the system has technically finished booting and all
resources are known.

That's the problem - if you don't do the super-node approach, you end
up with lots of individual devices which you have to figure out some
way of combining, and coping with missing ones which might not be
available in the order you want them to be, etc.

That's the advantage of the "super node" approach - it's a container
to tell you what's required in order to complete the creation of the
logical device, and you can parse the sub-nodes to locate the
information you need.

An alternative as I see it is that DRM - and not only DRM but also
the DRM API and Xorg - would need to evolve hotplug support for the
various parts of the display subsystem.  Do we have enough people
with sufficient knowledge and willingness to be able to make all
that happen?  I don't think we do, and I don't see that there's any
funding out there to make such a project happen, which would make it
a volunteer/spare time effort.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-02 Thread Russell King
On Tue, Jul 02, 2013 at 08:42:55PM +0200, Jean-Francois Moine wrote:
> It seems that you did not look at the NVIDIA Tegra driver (I got its
> general concept for my own driver, but I used a simple atomic counter):
> 
> - at probe time, the main driver (drivers/gpu/host1x/drm/drm.c) scans
>   the DT and finds its external modules. These ones are put in a
>   "clients" list.
> 
> - when loaded, the other modules register themselves into the main
>   driver. This last one checks if each module is in the "client" list.
>   If so, the module is moved from the "client" to an "active" list".
> 
> - when the "client" list is empty, this means all modules are started,
>   and, so, the main driver starts the drm stuff.
> 
> The active list is kept for module unloading.

Please tell me how this works with the two LCD controllers if you wish
to drive the two LCD controllers as entirely separate devices.  Given
that the above requires the use of global data in the driver, how do
you distinguish between the two?

> Putting "phandle"s in the 'display' seems more flexible (I did not do so
> because I knew the hardware - 2 LCDs and the dcon/ire).

Except you haven't looked at the bigger picture - the Armada 510 is
unusual in that it has two LCD controllers and the DCON.  All of the
other SoCs using this IP block that I've been able to research have
only one LCD controller and no DCON.  I don't think they even have an
IRE (image rotation engine) either.

Neither have you considered the case where you may wish to keep the
two LCD controllers entirely separate (eg, you want X to drive one
but something else on the other.)  X drives the DRM device as a whole,
including all CRTCs which make up that device - with them combined
into one DRM device, you can't ask X to leave one controller alone
because you're doing something else with it.  (This is just the simple
extension of the common case of a single LCD controller, so it's
really nothing special.)

So, the unusual case _is_ the Armada 510 with its two LCD controllers
and DCON which we _could_ work out some way of wrapping up into one
DRM device, or we could just ignore the special case, ignore the DCON
and just keep the two LCD CRTCs as two separate and independent DRM
devices.

I'm actually starting to come towards the conclusion that we should go
for the easiest solution, which is the one I just mentioned, and forget
trying to combine these devices into one super DRM driver.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-02 Thread Russell King
On Tue, Jul 02, 2013 at 12:54:41PM -0600, Daniel Drake wrote:
> On Tue, Jul 2, 2013 at 12:43 PM, Russell King  wrote:
> > I will point out that relying on driver probing orders has already been
> > stated by driver model people to be unsafe.  This is why I will not
> > adopt such a solution for my driver; it is a bad design.
> 
> Just to clarify, what you're objecting to is effectively the
> following? Because it is not guaranteed in the future that the probe
> order will be the same as the platform_driver_register() calls?

Correct.  Consider what happens if the devices are registered after
the driver(s) have been registered, which may not be in the correct
order.

-- 
Russell King


Re: Best practice device tree design for display subsystems/DRM

2013-07-02 Thread Russell King
On Tue, Jul 02, 2013 at 11:43:59AM -0600, Daniel Drake wrote:
> exynos seems to take a the same approach. Components are separate in
> the device tree, and each component is implemented as a platform
> driver or i2c driver. However all the drivers are built together in
> the same module, and the module_init sequence is careful to initialise
> all of the output component drivers before loading the DRM driver. The
> output component driver store their findings in global structures.

I will point out that relying on driver probing orders has already been
stated by driver model people to be unsafe.  This is why I will not
adopt such a solution for my driver; it is a bad design.

-- 
Russell King


Re: [PATCH v2 1/5] phy: Add driver for Exynos MIPI CSIS/DSIM DPHYs

2013-06-27 Thread Russell King - ARM Linux
On Thu, Jun 27, 2013 at 09:17:13AM +0300, Felipe Balbi wrote:
> On Wed, Jun 26, 2013 at 05:00:34PM +0200, Sylwester Nawrocki wrote:
> > Hi,
> > 
> > On 06/25/2013 05:06 PM, Felipe Balbi wrote:
> > >> +static struct platform_driver exynos_video_phy_driver = {
> > >> > +  .probe  = exynos_video_phy_probe,
> > >
> > > you *must* provide a remove method. drivers with NULL remove are
> > > non-removable :-)
> > 
> > Actually the remove() callback can be NULL, it's just missing module_exit
> > function that makes a module not unloadable.
> 
> look at the implementation of platform_drv_remove():
> 
>  499 static int platform_drv_remove(struct device *_dev)
>  500 {
>  501 struct platform_driver *drv = to_platform_driver(_dev->driver);
>  502 struct platform_device *dev = to_platform_device(_dev);
>  503 int ret;
>  504 
>  505 ret = drv->remove(dev);
>  506 if (ACPI_HANDLE(_dev))
>  507 acpi_dev_pm_detach(_dev, true);
>  508 
>  509 return ret;
>  510 }
> 
> that's not a conditional call right :-)

Wrong.

	if (drv->remove)
		drv->driver.remove = platform_drv_remove;

The function you quote will only be used if drv->remove is non-NULL.
You do not need to provide a remove method.


Re: [PATCH v4 02/10] pinctrl: mvebu: dove pinctrl driver

2013-06-18 Thread Russell King - ARM Linux
On Tue, Jun 18, 2013 at 05:02:49PM +0200, Linus Walleij wrote:
> Nowadays I would do the above with regmap_update_bits().
> 
> Mutual exclusion for read-modify-write of individual bits in a
> register is one of those cases where doing a regmap over
> a memory-mapped register range makes a lot of sense.
> (drivers/mfd/syscon.c being a nice example)

So, for that solution we need to have some kind of global regmap per
register or somesuch.  Then you run into regmap needing a struct
device - well, with a shared register, which struct device do you
use, or do you have to invent one?

That sounds more heavy-weight than is really necessary.


Re: [PATCH v4 02/10] pinctrl: mvebu: dove pinctrl driver

2013-06-18 Thread Russell King - ARM Linux
On Thu, Sep 13, 2012 at 05:41:44PM +0200, Sebastian Hesselbarth wrote:
> +#define DOVE_GLOBAL_CONFIG_1	(DOVE_SB_REGS_VIRT_BASE | 0xe802C)
> +#define  DOVE_TWSI_ENABLE_OPTION1	BIT(7)
> +#define DOVE_GLOBAL_CONFIG_2	(DOVE_SB_REGS_VIRT_BASE | 0xe8030)
> +#define  DOVE_TWSI_ENABLE_OPTION2	BIT(20)
> +#define  DOVE_TWSI_ENABLE_OPTION3	BIT(21)
> +#define  DOVE_TWSI_OPTION3_GPIO	BIT(22)
...
> +static int dove_twsi_ctrl_set(struct mvebu_mpp_ctrl *ctrl,
> + unsigned long config)
> +{
> + unsigned long gcfg1 = readl(DOVE_GLOBAL_CONFIG_1);
> + unsigned long gcfg2 = readl(DOVE_GLOBAL_CONFIG_2);
> +
> + gcfg1 &= ~DOVE_TWSI_ENABLE_OPTION1;
> + gcfg2 &= ~(DOVE_TWSI_ENABLE_OPTION2 | DOVE_TWSI_ENABLE_OPTION2);
> +
> + switch (config) {
> + case 1:
> + gcfg1 |= DOVE_TWSI_ENABLE_OPTION1;
> + break;
> + case 2:
> + gcfg2 |= DOVE_TWSI_ENABLE_OPTION2;
> + break;
> + case 3:
> + gcfg2 |= DOVE_TWSI_ENABLE_OPTION3;
> + break;
> + }
> +
> + writel(gcfg1, DOVE_GLOBAL_CONFIG_1);
> + writel(gcfg2, DOVE_GLOBAL_CONFIG_2);
> +
> + return 0;
> +}

So, I've just been thinking about the LCD clocking on the Armada 510,
and found that there's dividers for the internal LCD clocks in the
global config 1/2 registers.  So I grepped the kernel source for
references to these, expecting to find something in drivers/clk, but
found the above.

Now, the reason that I'm replying to this is that we're again falling
into the same trap that we did with SA1100 devices.  Back in those
days, there wasn't so much of a problem because the kernel was basically
single threaded when it came to code like the above on such platforms.
There was no kernel preemption.

However, today's kernel is sometimes SMP, commonly with kernel preemption
enabled, maybe even RT.  This makes things like the above sequence a
problem where a multifunction register is read, modified and then
written back.

Consider two threads doing this, and a preemption event happening in the
middle of this sequence to another thread also doing a read-modify-write
of the same register.  Which one wins depends on the preemption sequence,
but ultimately one loses out.

Any access to such registers needs careful thought, and protection in some
manner.

Maybe what we need is something like this:

static DEFINE_SPINLOCK(io_lock);
static void modifyl(u32 new, u32 mask, void __iomem *reg)
{
unsigned long flags;
u32 val;

spin_lock_irqsave(&io_lock, flags);
val = readl(reg) & ~mask;
val |= new & mask;
writel(val, reg);
spin_unlock_irqrestore(&io_lock, flags);
}

in order to provide arbitrated access to these kinds of multifunction
registers in a safe, platform agnostic way.

Comments?
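A userspace model of the modifyl() proposal shows why the arbitration works. A pthread mutex stands in for the spinlock, fake_reg for the memory-mapped register, and the masks and values are invented for the demo: with the lock held around the whole read-modify-write, two threads updating disjoint fields can no longer lose each other's bits.

```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t io_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t fake_reg;   /* stand-in for the shared register */

/* Same shape as the modifyl() sketch, with the lock protecting the
 * whole read-modify-write sequence. */
static void modifyl_demo(uint32_t val, uint32_t mask)
{
    pthread_mutex_lock(&io_lock);
    fake_reg = (fake_reg & ~mask) | (val & mask);
    pthread_mutex_unlock(&io_lock);
}

static void *set_low(void *unused)
{
    (void)unused;
    for (int i = 0; i < 10000; i++)
        modifyl_demo(0x12, 0x00FF);     /* field in bits 7:0 */
    return NULL;
}

static void *set_high(void *unused)
{
    (void)unused;
    for (int i = 0; i < 10000; i++)
        modifyl_demo(0x3400, 0xFF00);   /* field in bits 15:8 */
    return NULL;
}

static uint32_t run_demo(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, set_low, NULL);
    pthread_create(&b, NULL, set_high, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return fake_reg;    /* both fields survive under the lock */
}
```

Without the mutex, an interleaved read-modify-write could make either thread overwrite the other's field — exactly the SA1100-era failure mode described above.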


Re: [PATCH v2 06/11] ARM:stixxxx: Add STiH415 SOC support

2013-06-13 Thread Russell King - ARM Linux
On Tue, Jun 11, 2013 at 07:50:31AM +0100, Srinivas KANDAGATLA wrote:
> You are right, It does not make sense to use BIT() macro for field which
> has more than 1 bit. I think using mix of both BIT() and the old style
> will make code look bit confusing to reader, Also no other mach code in
> the kernel use BIT while configuring L2 controller. So am going to drop
> the idea of using BIT here and leave the code as it is.

I'd suggest putting a comment in the code to that effect.  With the way
"cleanups" get done, I wouldn't be surprised if this attracts a lot of
people wanting to do a trivial "1 << bit" -> "BIT(bit)" conversions.

One of the problems of open source is that you can say "no" to a patch
like that until you're blue in the face, but it will eventually make
its way in via some path.

Just one of the reasons I consider BIT() to be evil and an inappropriate
macro.


Re: [PATCH v2 06/11] ARM:stixxxx: Add STiH415 SOC support

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 12:46:59PM +0100, Srinivas KANDAGATLA wrote:
> > +   aux_ctrl = (0x1 << L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT) |
> > +   (0x1 << L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT) |
> > +   (0x1 << L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT) |
> > +   (way_size << L2X0_AUX_CTRL_WAY_SIZE_SHIFT);
> > 
> > 
> > 
> > #include 
> > Linus Walleij would write use  BIT() here
> 
> I will use BIT() macro.

Without checking those fields... BIT() is only appropriate if you're
really talking about single bits.  If you have a field of more than a
single bit which you happen to be setting to '1' then it's not
appropriate to use BIT().
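The distinction can be made concrete with a small self-contained example. The helper macros below are local copies in the style of the kernel's, and the register layout is hypothetical, invented purely for the demo:

```c
#include <stdint.h>

/* Local kernel-style helper so the demo needs no kernel headers.
 * BIT() is only right for genuine single-bit flags; a multi-bit field
 * wants an explicitly shifted value (or GENMASK()/FIELD_PREP() in
 * modern kernel code), even when the value written happens to be 1. */
#define BIT(n)              (1U << (n))

/* Hypothetical register layout: bit 22 is a one-bit flag,
 * bits 18:17 are a two-bit "way size" field. */
#define DEMO_FLAG           BIT(22)
#define DEMO_WAY_SIZE_SHIFT 17

static uint32_t demo_aux_ctrl(uint32_t way_size)
{
    /* The flag is a single bit, so BIT() is appropriate; the field is
     * not, so it keeps the "value << shift" form the driver uses. */
    return DEMO_FLAG | (way_size << DEMO_WAY_SIZE_SHIFT);
}
```

Writing the field as BIT(DEMO_WAY_SIZE_SHIFT) would only be correct for the value 1, and would silently break the moment the field needs any other value.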


Re: [PATCH v2 01/11] serial:st-asc: Add ST ASC driver.

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 10:21:00AM +0100, Srinivas KANDAGATLA wrote:
> This patch adds support to ASC (asynchronous serial controller)
> driver, which is basically a standard serial driver. This IP is common
> across all the ST parts for settop box platforms.
> 
> ASC is embedded in ST COMMS IP block. It supports Rx & Tx functionality.
> It support all industry standard baud rates.

Your driver is not POSIX compliant.

> + for (; count != 0; count--) {
> + c = asc_in(port, ASC_RXBUF);
> + flag = TTY_NORMAL;
> + port->icount.rx++;
> +
> + if (unlikely(c & ASC_RXBUF_FE)) {
> + if (c == ASC_RXBUF_FE) {
> + port->icount.brk++;
> + if (uart_handle_break(port))
> + continue;
> + flag = TTY_BREAK;
> + } else {
> + port->icount.frame++;
> + flag = TTY_FRAME;
> + }
> + } else if (ascport->check_parity &&
> +unlikely(c & ASC_RXBUF_PE)) {
> + port->icount.parity++;
> + flag = TTY_PARITY;
> + }
> +
> + if (uart_handle_sysrq_char(port, c))
> + continue;
> + tty_insert_flip_char(tport, c & 0xff, flag);
> + }
> + if (overrun) {
> + port->icount.overrun++;
> + tty_insert_flip_char(tport, 0, TTY_OVERRUN);
> + }

No support for ignoring error conditions.  No support for ignoring all
input... and:

> +static void asc_set_termios(struct uart_port *port, struct ktermios *termios,
> + struct ktermios *old)
> +{
> + struct asc_port *ascport = to_asc_port(port);
> + unsigned int baud;
> + u32 ctrl_val;
> + tcflag_t cflag;
> + unsigned long flags;
> +
> + port->uartclk = clk_get_rate(ascport->clk);
> +
> + baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16);
> + cflag = termios->c_cflag;
> +
> + spin_lock_irqsave(&port->lock, flags);
> +
> + /* read control register */
> + ctrl_val = asc_in(port, ASC_CTL);
> +
> + /* stop serial port and reset value */
> + asc_out(port, ASC_CTL, (ctrl_val & ~ASC_CTL_RUN));
> + ctrl_val = ASC_CTL_RXENABLE | ASC_CTL_FIFOENABLE;
> +
> + /* reset fifo rx & tx */
> + asc_out(port, ASC_TXRESET, 1);
> + asc_out(port, ASC_RXRESET, 1);
> +
> + /* set character length */
> + if ((cflag & CSIZE) == CS7) {
> + ctrl_val |= ASC_CTL_MODE_7BIT_PAR;
> + } else {
> + ctrl_val |= (cflag & PARENB) ?  ASC_CTL_MODE_8BIT_PAR :
> + ASC_CTL_MODE_8BIT;
> + }
> +
> + ascport->check_parity = (cflag & PARENB) ? 1 : 0;
> +
> + /* set stop bit */
> + ctrl_val |= (cflag & CSTOPB) ? ASC_CTL_STOP_2BIT : ASC_CTL_STOP_1BIT;
> +
> + /* odd parity */
> + if (cflag & PARODD)
> + ctrl_val |= ASC_CTL_PARITYODD;
> +
> + /* hardware flow control */
> + if ((cflag & CRTSCTS) && ascport->hw_flow_control)
> + ctrl_val |= ASC_CTL_CTSENABLE;

This doesn't reflect those facts back into the termios structure to
indicate that they aren't supported.

Consider using uart_port's ignore and read status masks to implement
the break, framing, parity and overrun checking in your interrupt
handler using the same methodology as drivers like 8250, amba-pl011
etc.  That will help you get these code sequences correct.
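A toy model of that methodology, reduced to standalone C: the RX_* flags are invented for the demo (a real driver uses the hardware's own status bits), read_status_mask holds the error conditions the termios settings ask the driver to report, and ignore_status_mask the conditions whose characters must be dropped.

```c
#include <stdint.h>

/* Invented RX status flags, carried in the high bits above the data. */
#define RX_FE (1u << 8)     /* framing error */
#define RX_PE (1u << 9)     /* parity error  */
#define RX_OE (1u << 10)    /* overrun       */

struct demo_port {
    uint32_t read_status_mask;      /* error conditions to report */
    uint32_t ignore_status_mask;    /* error conditions to drop   */
};

/* Returns 1 if the character should be passed to the tty layer,
 * 0 if the termios configuration says it must be dropped. */
static int demo_rx_char(const struct demo_port *port, uint32_t c)
{
    c &= port->read_status_mask | 0xffu;    /* data bits + wanted errors */
    if (c & port->ignore_status_mask)
        return 0;
    return 1;
}
```

With the masks computed once in set_termios (from IGNPAR, IGNBRK, INPCK and friends), the interrupt handler reduces to this simple filtering, which is how the drivers named above get the POSIX semantics right.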


Re: [PATCH] ARM: tegra: add basic SecureOS support

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 05:11:15PM +0900, Alexandre Courbot wrote:
> On Sat, Jun 8, 2013 at 1:33 AM, Stephen Warren  wrote:
> >>> I think we need to separate the concept of support for *a* secure
> >>> monitor, from support for a *particular* secure monitor.
> >>
> >> Agreed. In this case, can we assume that support for a specific secure
> >> monitor is not arch-specific, and that this patch should be moved
> >> outside of arch-tegra and down to arch/arm? In other words, the ABI of
> >> a particular secure monitor should be the same no matter the chip,
> >> shouldn't it?
> >
> > I would like to believe that the Trusted Foundations monitor had the
> > same ABI irrespective of which Soc it was running on. However, I have
> > absolutely no idea at all if that's true. Even if there's some common
> > subset of the ABI that is identical across all SoCs, I wouldn't be too
> > surprised if there were custom extensions for each different SoC, or
> > just perhaps even each product.
> >
> > Can you research this and find out the answer?
> 
> Will do. Information about TF is scarce unfortunately.

The answer is... there isn't a common ABI.  That is something I pressed
ARM Ltd for when this stuff first appeared and they were adamant that
they were not going to specify any kind of ABI for this interface.

The net result is that everyone has invented their own interfaces around
it.  Some pass all arguments in registers, some pass arguments in memory
and pass pointers to those memory locations, and those memory locations
have to be flushed in some way.


Re: [PATCH] ARM: tegra: add basic SecureOS support

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 04:47:22PM +0900, Alexandre Courbot wrote:
> One could remove the naked attribute and put there registers into the
> clobber list, but then the function will be inlined and we will have
> to ensure the parameters end up in the right register (and having a
> function that cannot be inlined is convenient in that respect). So as
> far as I can tell, having the function naked and saving the registers
> ourselves seems to be the most convenient way around this.

If you use such a large clobber list, you risk the compiler barfing on
you that it's run out of registers.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-09 Thread Russell King - ARM Linux
On Sun, Jun 09, 2013 at 11:09:59PM +0100, luke.leighton wrote:
> this is all a rather round-about way to say that for those people who
> heard and are thinking of heeding russell's call to "be silent and to
> ignore me"

Okay, so you've just misrepresented me in the above comment.  I never
said anything of the sort.  The closest that I've come to a comment like
that is asking you to stop wasting people's time.

So, given your displayed inability to properly convey what people have
said to you in a discussion in your own replies, is there *any* reason
that anyone should trust you to communicate their position to any other
third party?

In some ways, I'm *glad* that you've passed everything on verbatim,
because it means that it's (hopefully) free from any misrepresentations
such as the above.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 08:18:14PM +0100, Luke Kenneth Casson Leighton wrote:
> On Fri, Jun 7, 2013 at 7:26 PM, Russell King - ARM Linux
>  wrote:
> > Luke Leighton on the other hand is demanding that we
> 
>  no demands have been made, russell: i've informed you of an immovable
> deadline which will pass beyond which the opportunity being presented
> is lost.

" well, tough.  get me up to speed, *fast*.  please stop wasting time
like this: get me up to speed."

That is a demand.  On the other hand this would not be:

"Can someone please get me up to speed on this?"  That is a request.

Sorry, you've just lost some more credibility.

Please let those who are already working with Allwinner continue to
work with them rather than spending time needlessly with someone who
clearly doesn't have any idea about what they say even from one moment
to the next.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 08:04:26PM +0100, luke.leighton wrote:
> On Fri, Jun 7, 2013 at 7:54 PM, Olof Johansson  wrote:
> > By demanding
> 
>  a-a-ah, no demands made.

" well, tough.  get me up to speed, *fast*.  please stop wasting time
like this: get me up to speed."

That is a demand.  Stop trolling.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 08:02:03PM +0100, luke.leighton wrote:
>  well, tough.  get me up to speed, *fast*.

No, not unless you're willing to *pay* someone to spend time teaching you,
because you are asking to be *taught* about the current situation, so
you're asking someone to do some _work_ _for_ _you_.  If you're not
willing to do that, then all the information is out there in the public
domain for you to learn from yourself.

> please stop wasting time like this:

Pot. Black.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 02:49:28PM +, joem wrote:
> > > SoC vendors are free to join the discussion, and many SoC vendors are part
> > > of the kernel community, so calling this unilateral is plain wrong.
> > 
> >  you're free to believe that, vladimir.  i've explained why that
> > hasn't happened, in prior messages.  can we move forward, please?
> 
> I prefer it if the "vladimir" troll (and fellow trolls)
> make requests for one of us to make or do something instead of
> constantly whining and wasting time.
> 
> After all we are attached to funds and resources ready to
> deploy if something sounds a good idea and gets a vote.
> 
> To vladimir - please put your thoughts in a blog where it belongs.
> If its important, I'm sure others would offer comment
> and take you up on your thoughts.

I think your position is confused.  Everything that Vladimir (in his
three posts) in this thread so far have been correct.  Vladimir is not
the one doing any trolling in this thread.

Vladimir has not requested anything.  He hasn't whined.  He hasn't
wasted time.  He has stated the following in _three_ short succinct
emails:
(a) no one gets to stipulate timescales unless they're willing
to financially sponsor the work.
(b) the adopted position of the Linux kernel developers.

Luke Leighton on the other hand is demanding that we (Linux kernel
developers) come up with proposals within three days so that Luke can
act as a middle man between us and Allwinner, and is blaming the Linux
kernel community for the situation with Allwinner.

As you seem to have your facts wrong, I can only conclude that you
are also trolling... I hope I'm wrong about that and you've just made
an innocent mistake.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 09:48:22AM +0200, Vladimir Pantelic wrote:
> luke.leighton wrote:
>>   3 days remaining on the clock.
>
> what catastrophic thing will happen when the time runs out?

Maybe the world will explode into tiny small bits?  Probably not.  I
suspect nothing of any relevance to us.

As has already been pointed out, Allwinner is already working with
people in the kernel community to move things forward, so the right
things are already happening.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 10:40:37AM +0200, Vladimir Pantelic wrote:
> luke.leighton wrote:
> >   so.
> >
> >   coming back to what you said earlier: i'm formulating what to say to
> > allwinner [and need to pre-send something by monday so that they can
> > consider it before the meeting].  so far, it consists of:
> >
> > * device-tree is what the linux kernel community has come up with, it
> > is equivalent to FEX.
> >
> > * the linux kernel community would like to apologise for not
> > consulting with you (allwinner) on the decision to only accept device
> > tree
>
> apologize? WTF?

(I don't seem to have the original mail).

Definitely not.

The way this generally works is that discussions happen in public on
mailing lists, and people who have an interest get involved in the
discussion.  If someone decides to avoid the mailing lists because they
want to be secret about XYZ then they won't be part of the decision
making process - but that's not _our_ problem.  That is _their_ problem.

In the case of DT, there was quite a lengthy discussion about it and
its adoption.

So, either the discussion took place _before_ there was any interest in
the ARM kernel developments from Allwinner, or they themselves _chose_
not to be represented in the discussion by not being on the mailing list
or participating in the discussion.

There is nothing for us to apologise for here.

(Occasionally, the kernel community *is* a dictatorship - for example,
when Linus threatens to delete all the ARM defconfigs, or when Linus
decides that we're running out of control when adding more than 10k
lines of new code per kernel release, or - in the same way - when I see
something I really don't like, but thankfully those are fairly rare.)


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 09:02:43AM +0100, luke.leighton wrote:
>  ok.  so.  we come back to the question again: what shall i propose to
> them that they consider doing, and what benefit would it be to them to
> do so?
> 
>  i cannot go to them and say "you have to do this [insert proposal
> here]" - it has to be more subtle than that.

It seems that you don't have to do anything at all.  From what Arnd and
others have said, they are _already_ talking to, and working with people
in the kernel community to move their code base forward, move to DT, and
they are already starting to send their own patches for the mainline
kernel themselves.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-06 Thread Russell King - ARM Linux
On Thu, Jun 06, 2013 at 01:22:04PM +0100, luke.leighton wrote:
> On Thu, Jun 6, 2013 at 1:19 AM, Henrik Nordström
>  wrote:
> > Thu 2013-06-06 at 00:54 +0100, luke.leighton wrote:
> >
> >> > Not really the case. Actually the opposite. DT have this as well, and
> >> > integrated in device probing. Allwinner need to hack every driver used
> >> > to add their gpio requests to have pinmuxing triggered.
> >>
> >>  augh.  ok.  solutions.  what are the solutions here?
> >
> > What I said before.
> 
>  idea: hook into devicetree gpio functions to allow script-fex gpio
> functions to gain access in a separate module?  that sort of thing.
> 
> > Go with DT for the kernel. There is no need for two configuration
> > mechanisms doing the same thing. Disguise it in fex form (and
> > translator) if too hard for people with a DOS editior to configure.
> 
>  what methods for doing that.  i need proposals.  4 days on the clock.

No.

If you want to set time scales, please put up money to fund the work to
come up with those proposals.

Virtually no one here is a charity; the charity days of open source are
long gone.  Everyone needs money to put food in their mouths, and the
way this works is that those who pay the most get the time.  That's the
nature of a open and free market.

What's more is that you have been given some good proposals already:
converting the fex description to DT for the kernel.  Wow, that means you
can use the work which has already been done in the mainline kernel for
free!  How cool is that!


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-06 Thread Russell King - ARM Linux
On Thu, Jun 06, 2013 at 01:24:57PM +0100, luke.leighton wrote:
> On Thu, Jun 6, 2013 at 1:01 AM, Tomasz Figa  wrote:
> 
> > I don't see any other solution here than moving all the Allwinner code to
> > DT (as it has been suggested in this thread several times already), as
> > this is the only hardware description method supported by ARM Linux.
> 
>  i repeat again: please state, explicitly and unequivocably that you -
> linux kernel developers - are happy that the reach of linux and
> gnu/linux OSes is dramatically reduced due to this intransigent
> position.

If companies are going to go off and invent the square wheel, and that
makes *them* suffer the loss of being able to merge back into the
mainline kernel, thereby making *their* job of moving forward with
their kernel versions much harder, then yes, we *are* happy.

Companies will always do idiotic things; it's *them* and their users
who have to live with the results of that bad decision making process.

Eventually, the pain of being outside the mainline kernel will become
too great, especially if their users demand things like kernel upgrades
from them.  Eventually, they will change.

And no, this isn't an intransigent position.  This is reality given
that ARM already has way too much code in the Linux kernel and we're
trying to reduce that through the use of technologies like DT.  The
last thing we need right now is for another "DT" like implementation
to come along which is only used on a minority of platforms - even if
the platform it is used on is successful.

The way this works is this:
- you either go with the policies which are set, or
- you change the policy as a whole because you have a technically
  superior solution

What that means in this case is either you adopt DT, or you convince
everyone that DT isn't the solution, but your solution is, and we adopt
your solution for *everything* instead.

If neither of those are possible, then that's really not our problem,
and it's not _us_ who are being "intransigent".


Re: [PATCH] ARM: tegra: add basic SecureOS support

2013-06-06 Thread Russell King - ARM Linux
On Thu, Jun 06, 2013 at 04:28:07PM +0900, Alexandre Courbot wrote:
> +static int __attribute__((used)) __tegra_smc_stack[10];
> +
> +/*
> + * With EABI, subtype and arg already end up in r0, r1 and r2 as they are
> + * function arguments, but we prefer to play safe here and explicitly move
> + * these values into the expected registers anyway. mov instructions without
> + * any side-effect are turned into nops by the assembler, which limits
> + * overhead.

No they aren't.  They will be preserved as:
mov r0, r0
mov r1, r1
mov r2, r2

Moreover, things will go wrong if the compiler decides for whatever reason
to move 'arg' into r0 before calling your code sequence.  So really this
is quite fragile.

There's also no point in mentioning EABI in the above paragraph; all ARM
ABIs under Linux have always placed the arguments in r0..r3 and then on
the stack.  You can assert that this is always true by using the __asmeq()
macro.

Also...

> + */
> +static void tegra_generic_smc(u32 type, u32 subtype, u32 arg)
> +{
> + asm volatile(
> + ".arch_extension sec\n\t"
> + "ldr r3, =__tegra_smc_stack\n\t"
> + "stmia  r3, {r4-r12, lr}\n\t"

using a statically allocated area to save the registers in this way is
probably a bad move; although the hotplug code guarantees that there
will be no concurrency between CPU hotplug operations, this sets a
precident for it being used in places where no such protection exists.

What is wrong with stacking r4-r12, lr onto the SVC stack?  You don't
save the SVC stack pointer, so one can only assume that your SMC
implementation preserves this (if it doesn't, that's yet another bug
with this assembly code.)

Combining these two issues, you're probably far better off using an
__attribute__((naked)) function for this - which means you get to
write the entire function in assembly code without any compiler
interference:

static void __attribute__((naked)) tegra_generic_smc(u32 type, u32 subtype, u32 arg)
{
asm volatile(
".arch_extension sec\n\t"
"stmfd  sp!, {r4 - r12, lr}\n\t"
__asmeq("%0", "r0")
__asmeq("%1", "r1")
__asmeq("%2", "r2")
"mov    r3, #0\n\t"
"mov    r4, #0\n\t"
"dsb\n\t"
"smc    #0\n\t"
"ldmfd  sp!, {r4 - r12, pc}"
:
: "r" (type), "r" (subtype), "r" (arg)
: "memory");
}


Re: getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-05 Thread Russell King - ARM Linux
On Wed, Jun 05, 2013 at 05:38:45PM -0400, Lennart Sorensen wrote:
> I haven't personally dealt with any nvidia arm devices, so I have no
> idea how those are turning out, nor have I looked much at the marvell
> ones yet (even though I have a cubox sitting on my desk I intend to play
> around with).

Cubox is Dove, which dates from way before DT on ARM, and there's
relatively few people working on Dove support, so progress towards
DT support is rather slow there.  I believe it to be at the point
where it's possible to boot the Cubox using DT, but not all features
are supported.

But then, not all features of the Dove SoC are supported in mainline
either; the port essentially stopped after a little more than basic
support was merged.  Eg, no hardware monitoring driver, no video
drivers, etc...


Re: getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-05 Thread Russell King - ARM Linux
On Wed, Jun 05, 2013 at 03:00:13PM -0600, Stephen Warren wrote:
> 2) Having U-Boot itself read a DT and configure itself, just like the
> kernel does. This is relatively new, and only supported by a few boards
> (all Tegra to some extent, and a couple each Samsung Exynos and Xilinx
> boards). I suspect/guess this is the kind of thing that Luke was
> referring to re: U-Boot fex integration.

Reading what I have of this thread, this is just another case of
$company runs off and does their own unique way of doing something,
which is in a totally different direction from that path chosen by
Linux kernel developers, and Linux kernel developers are _expected_
to roll over and accept $company's unique way of doing it.

$company could have assisted with the DT effort, helping to sort out
the common arch issues (which haven't been all that much), converting
drivers to DT and such like.  But no, instead they've gone off and
created their own thing.

I wonder how many more of these cases there needs to be before people
get the message that Linux kernel developers *do* *not* like this
behaviour, and if you do this, then chances are you're going to be
stuck with having code which isn't able to be merged into mainline.

And I don't buy the argument that we were still sorting out DT.  DT has
been well defined for many many years before we started using it on ARM.
It has been used for years on both PowerPC and Sparc architectures to
describe their hardware, and all of the DT infrastructure was already
present in the kernel.  What was left for us were:

* converting our platform-data based drivers to use data from DT.
* come up with ways of dealing with SoC issues such as clock
  representation, pin muxing and such like in DT.

But no... all that had to be created in this custom fex stuff which now
presents a barrier with getting support for something merged.

And somehow people make out that this is _our_ problem...


Re: [PATCH 2/3] ARM: msm: Re-organize platsmp to make it extensible

2013-06-04 Thread Russell King - ARM Linux
On Mon, Jun 03, 2013 at 05:19:44PM -0700, Rohit Vaswani wrote:
> + sc1_base_ptr = of_iomap(dn, 0);
> + if (sc1_base_ptr) {
> + writel_relaxed(0, sc1_base_ptr + VDD_SC1_ARRAY_CLAMP_GFS_CTL);
> + writel_relaxed(0, sc1_base_ptr + SCSS_CPU1CORE_RESET);
> + writel_relaxed(3, sc1_base_ptr + SCSS_DBG_STATUS_CORE_PWRDUP);
> + mb();
> + iounmap(sc1_base_ptr);

If you need to fiddle with power rails and resets for your secondary
core, you don't need any of the pen_release stuff, and you really
should get rid of it.  The pen_release stuff is only there for
platforms where there's no proper way of controlling the secondary
CPUs except by using a software method.


Re: [PATCH 2/2] net: mv643xx_eth: proper initialization for Kirkwood SoCs

2013-05-24 Thread Russell King - ARM Linux
On Fri, May 24, 2013 at 01:01:25PM -0400, Jason Cooper wrote:
> On Fri, May 24, 2013 at 01:03:25PM +0200, Linus Walleij wrote:
> > IMO: if you want to go down that road, what you really want is not
> > ever more expressible device trees, but real open firmware,
> > or ACPI or UEFI that can interpret and run bytecode as some
> > "bios" for you. With DT coming from OF maybe this is a natural
> > progression of things, but one has to realize when we reach the
> > point where what we really want is a bios. Then your time is
> > likely better spent with Tianocore or something than with the
> > kernel.
> 
> shudder.  I like embedded systems because the *don't* have a bios.

Then you're into scenarios like I have with my laptop, where - those
of you who check the nightly build results will have noticed - one
of my serial ports doesn't always exist.  That's because the ACPI data
in the BIOS is *wrong*.  It reports that it has been enabled when it
hasn't, and the disassembled byte code is at fault here.

The fix?  God knows.  As far as I'm concerned as a user, or even as an
OS developer, it's unfixable without getting the ACPI data structures
changed, and that's not something I can do.

Do you really want that on ARM?  Given the fiasco with the location of
the registers, are you sure you want to place more trust in that
direction?  Does it give you a warm fuzzy feeling to know that you
might have to work out some way to patch vendor supplied bytecode?


Re: [PATCH V2 1/6] drivers: bus: add a new driver for WEIM

2013-05-23 Thread Russell King - ARM Linux
On Thu, May 23, 2013 at 04:16:13PM +0800, Huang Shijie wrote:
> + /* get the clock */
> + weim->clk = devm_clk_get(&pdev->dev, NULL);
> + if (IS_ERR(weim->clk))
> + goto weim_err;
> +
> + clk_prepare_enable(weim->clk);

I notice people are getting lazy about this.  clk_prepare_enable() can
return an error...


Re: [RFC 1/8] serial:st-asc: Add ST ASC driver.

2013-05-10 Thread Russell King - ARM Linux
On Wed, May 08, 2013 at 09:36:13AM -0700, Greg KH wrote:
> On Wed, May 08, 2013 at 06:31:48PM +0200, Arnd Bergmann wrote:
> > On Wednesday 08 May 2013, Greg KH wrote:
> > > > just mention there is not hardware reason to not use the generic ttySx
> > > > in place of ttyAS as we have only one IP that handle serial on this
> > > > family of SoC
> > > > 
> > > > personally I'll switch to ttySx
> > > 
> > > Great, then you can use the same major/minor range as well, so there's
> > > no more objection from me about this :)
> > 
> > Does that work these days when you have kernel with multiple built-in
> > uart drivers?
> 
> It "should", as the major/minor registration should only happen when the
> hardware is found, but I haven't tested it out, so I can't say for sure.

serial stuff has never operated like that.  More specifically, it's a
limitation with the tty stuff that the way stuff works is that a
tty driver can only drive a single bunch of contiguous minor numbers.
No interleaving is allowed.

That limitation has existed for years, and I don't see it going away.
As long as that limitation exists, you can only ever have one serial
driver driving a set of contiguous minor numbers.

There has been an attempt to "work around" this by making the 8250
driver "special" which was a complete hack to get it to work.  That
was while I maintained this stuff and I outright refused to make one
serial driver magically special.


Re: [PATCH V2] ARM: bcm281xx: Add L2 support for Rev A2 chips

2013-05-10 Thread Russell King - ARM Linux
On Thu, May 09, 2013 at 01:44:34PM -0700, Olof Johansson wrote:
> On Thu, May 02, 2013 at 05:57:33PM -0700, Christian Daudt wrote:
> > Rev A2 SoCs have an unorthodox memory re-mapping and this needs
> > to be reflected in the cache operations.
> > This patch adds new outer cache functions for the l2x0 driver
> > to support this SoC revision. It also adds a new compatible
> > value for the cache to enable this functionality.
> > 
> > Updates from V1:
> > - remove section 1 altogether and note that in comments
> > - simplify section selection caused by section 1 removal
> > - BUG_ON just in case section 1 shows up
> > 
> > Signed-off-by: Christian Daudt 
> 
> This patch mostly covers code that is on Russells plate, so please feed
> this to his tracker to be picked up by him. Feel free to add:

Yes, but there's a problem.  I don't have bcm11351.dtsi to apply this
patch to.  This is one of the problems of splitting the SoC stuff
from core stuff - either I start doing cross-merges (which cause Linus
and everyone else pain as we've seen this time around) or we have to
stuff everything through a single tree.

Not happy.

Not applying until after -rc1.


Re: [RFC patch 4/8] genirq: generic chip: Cache per irq bit mask

2013-05-03 Thread Russell King - ARM Linux
On Fri, May 03, 2013 at 09:50:50PM -, Thomas Gleixner wrote:
> - u32 mask = ~(1 << (d->irq - gc->irq_base));
> + u32 mask = ~(d->mask);

I suspect the above changes are all a result of a search-and-replace
which results in the cosmetic parens remaining.  Would be nice to get
rid of them too as they're completely unnecessary.


Re: [RFC patch 7/8] genirq: generic chip: Add linear irq domain support

2013-05-03 Thread Russell King - ARM Linux
On Fri, May 03, 2013 at 09:50:53PM -, Thomas Gleixner wrote:
> + /* Init mask cache ? */
> + if (dgc->gc_flags & IRQ_GC_INIT_MASK_CACHE) {
> + raw_spin_lock_irqsave(&gc->lock, flags);
> + gc->mask_cache = irq_reg_readl(gc->reg_base + ct->regs.mask);
> + raw_spin_unlock_irqrestore(&gc->lock, flags);
> + }

This looks a little weird to me - it seems that it'll re-read this
each time any irq is mapped in the domain, which is probably not
wanted.


Re: [PATCH v2 1/5] irqchip: add support for Marvell Orion SoCs

2013-05-03 Thread Russell King - ARM Linux
On Fri, May 03, 2013 at 01:48:35AM +0200, Sebastian Hesselbarth wrote:
> This patch adds an irqchip driver for the main interrupt controller found
> on Marvell Orion SoCs (Kirkwood, Dove, Orion5x, Discovery Innovation).
> Corresponding device tree documentation is also added.
> 
> Signed-off-by: Sebastian Hesselbarth 
> ---
> Note: This patch triggers a checkpatch warning for
>   WARNING: Avoid CamelCase: 
> 
> Changelog:
> v1->v2:
> - rename compatible string to "marvell,orion-intc" (Suggested by Jason Gunthorpe)
> - request mem regions for irq base registers (Suggested by Jason Gunthorpe)
> - make orion_handle_irq static (Suggested by Jason Gunthorpe)
> - make use of IRQCHIP_DECLARE (Suggested by Jason Gunthorpe)

It would still be a good idea to convert this to use the generic chip
stuff which Thomas created, though exactly how is hard to see at the
moment.

> +static void orion_irq_mask(struct irq_data *irqd)
> +{
> + unsigned int irq = irqd_to_hwirq(irqd);
> + unsigned int irq_off = irq % 32;
> + int reg = irq / 32;
> + u32 val;
> +
> + val = readl(orion_irq_base[reg] + ORION_IRQ_MASK);
> + writel(val & ~(1 << irq_off), orion_irq_base[reg] + ORION_IRQ_MASK);
> +}

This could be replaced with irq_gc_mask_clr_bit().

> +
> +static void orion_irq_unmask(struct irq_data *irqd)
> +{
> + unsigned int irq = irqd_to_hwirq(irqd);
> + unsigned int irq_off = irq % 32;
> + int reg = irq / 32;
> + u32 val;
> +
> + val = readl(orion_irq_base[reg] + ORION_IRQ_MASK);
> + writel(val | (1 << irq_off), orion_irq_base[reg] + ORION_IRQ_MASK);
> +}

This with irq_gc_mask_set_bit().

> +
> +static struct irq_chip orion_irq_chip = {
> + .name   = "orion_irq",
> + .irq_mask   = orion_irq_mask,
> + .irq_unmask = orion_irq_unmask,
> +};
> +
> +static int orion_irq_map(struct irq_domain *d, unsigned int virq,
> +  irq_hw_number_t hw)
> +{
> + irq_set_chip_and_handler(virq, &orion_irq_chip,
> +  handle_level_irq);
> + set_irq_flags(virq, IRQF_VALID | IRQF_PROBE);

This is where it starts to get tricky, because I can't see how you'd
merge the irq_alloc_generic_chip() and irq_setup_generic_chip() with
this.  Maybe you don't need to do anything here and just do all that
in orion_of_init() instead?  But then you seem to need to know the
virq range before hand, and I can't see how that's known.  Maybe Thomas
can provide some enlightenment about how the gc irqchip stuff works
with the irq domain stuff...

However, you wouldn't need the statically defined orion_irq_chip nor
orion_irq_base[] either, because those would be held within the gc
irqchip stuff and stored in the upper layer.


Re: [PATCH] irqchip: add support for Marvell Orion SoCs

2013-05-02 Thread Russell King - ARM Linux
On Thu, May 02, 2013 at 08:54:20PM +0200, Sebastian Hesselbarth wrote:
> On 05/02/13 20:45, Russell King - ARM Linux wrote:
>> On Thu, May 02, 2013 at 08:33:48PM +0200, Sebastian Hesselbarth wrote:
>>> On 05/02/2013 08:25 PM, Sebastian Hesselbarth wrote:
>>>> This patch adds an irqchip driver for the main interrupt controller found
>>>> on Marvell Orion SoCs (Kirkwood, Dove, Orion5x, Discovery Innovation).
>>>> Corresponding device tree documentation is also added.
>>>>
>>>> Signed-off-by: Sebastian Hesselbarth
>>>> ---
>>>> Note: This patch triggers a checkpatch warning for
>>>> WARNING: Avoid CamelCase:
>>>>
>>> [...]
>>>> diff --git a/include/linux/irqchip/orion.h b/include/linux/irqchip/orion.h
>>>> new file mode 100644
>>>> index 000..04f7bab
>>>> --- /dev/null
>>>> +++ b/include/linux/irqchip/orion.h
>>>> @@ -0,0 +1,18 @@
>>>> +/*
>>>> + * Marvell Orion SoCs IRQ chip driver header.
>>>> + *
>>>> + * Sebastian Hesselbarth
>>>> + *
>>>> + * This file is licensed under the terms of the GNU General Public
>>>> + * License version 2.  This program is licensed "as is" without any
>>>> + * warranty of any kind, whether express or implied.
>>>> + */
>>>> +
>>>> +#ifndef __LINUX_IRQCHIP_ORION_H
>>>> +#define __LINUX_IRQCHIP_ORION_H
>>>> +
>>>> +#include
>>>
>>> First review by myself. The above include is a left-over and
>>> will be removed in a v2.
>>
>> You still need your first level IRQ handlers marked with __exception_irq_entry
>> which is defined in the above file.
>>
>
> Russell,
>
> I know and it is marked with __exception_irq_entry. The above is in
> include/linux/irqchip/orion.h and only used for .init_irq in machine
> descriptor later.
>
> The irq handler is never exposed to the board file itself, but set
> within orion_init_irq. This approach has been taked by
> irqchip/irq-gic.c and irqchip/irq-vic.c rather than adding
> .handle_irq to the machine descriptor.

But I don't find an asm/exception.h include in 
drivers/irqchip/whateveryour.cfileiscalled


Re: [PATCH] irqchip: add support for Marvell Orion SoCs

2013-05-02 Thread Russell King - ARM Linux
On Thu, May 02, 2013 at 08:33:48PM +0200, Sebastian Hesselbarth wrote:
> On 05/02/2013 08:25 PM, Sebastian Hesselbarth wrote:
>> This patch adds an irqchip driver for the main interrupt controller found
>> on Marvell Orion SoCs (Kirkwood, Dove, Orion5x, Discovery Innovation).
>> Corresponding device tree documentation is also added.
>>
>> Signed-off-by: Sebastian Hesselbarth
>> ---
>> Note: This patch triggers a checkpatch warning for
>>WARNING: Avoid CamelCase:
>>
> [...]
>> diff --git a/include/linux/irqchip/orion.h b/include/linux/irqchip/orion.h
>> new file mode 100644
>> index 000..04f7bab
>> --- /dev/null
>> +++ b/include/linux/irqchip/orion.h
>> @@ -0,0 +1,18 @@
>> +/*
>> + * Marvell Orion SoCs IRQ chip driver header.
>> + *
>> + * Sebastian Hesselbarth
>> + *
>> + * This file is licensed under the terms of the GNU General Public
>> + * License version 2.  This program is licensed "as is" without any
>> + * warranty of any kind, whether express or implied.
>> + */
>> +
>> +#ifndef __LINUX_IRQCHIP_ORION_H
>> +#define __LINUX_IRQCHIP_ORION_H
>> +
>> +#include
>
> First review by myself. The above include is a left-over and
> will be removed in a v2.

You still need your first level IRQ handlers marked with __exception_irq_entry
which is defined in the above file.


Re: [PATCH 2/2] thermal: db8500_cpufreq_cooling: remove usage of IS_ERR_OR_NULL()

2013-04-26 Thread Russell King
On Fri, Apr 26, 2013 at 12:19:08PM +0200, Fabio Baltieri wrote:
> On Thu, Apr 25, 2013 at 06:46:35PM +0100, Russell King wrote:
> > On Thu, Apr 25, 2013 at 10:13:34AM -0400, Eduardo Valentin wrote:
> > > diff --git a/drivers/thermal/db8500_cpufreq_cooling.c b/drivers/thermal/db8500_cpufreq_cooling.c
> > > index 21419851..786d192 100644
> > > --- a/drivers/thermal/db8500_cpufreq_cooling.c
> > > +++ b/drivers/thermal/db8500_cpufreq_cooling.c
> > > @@ -37,7 +37,7 @@ static int db8500_cpufreq_cooling_probe(struct platform_device *pdev)
> > >   cpumask_set_cpu(0, &mask_val);
> > >   cdev = cpufreq_cooling_register(&mask_val);
> > >  
> > > - if (IS_ERR_OR_NULL(cdev)) {
> > > + if (IS_ERR(cdev)) {
> > >   dev_err(&pdev->dev, "Failed to register cooling device\n");
> > >   return PTR_ERR(cdev);
> > 
> > Correct.  cpufreq_cooling_register() returns either an error-pointer or
> > a valid pointer.
> 
> Acked-by: Fabio Baltieri 
> 
> ...but more of these are going to appear forever, was the proposed
> removal of IS_ERR_OR_NULL rejected?  Can't find any new message about
> that on the list.

We first need to get rid of the existing users of it - I've got rid of
most of those in arch/arm - but it seems that I never pushed that in the
last merge window.

Really it needs the cooperation of everyone to make it happen across the
tree with everyone removing IS_ERR_OR_NULL() in their subsystem.  There
are currently 366 instances of this macro being used in the entire tree
which is far too many to even mark it as deprecated.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:


Re: [PATCH] of: Set the DMA mask to 64 bits on ARM LPAE systems

2013-04-25 Thread Russell King - ARM Linux
On Thu, Apr 25, 2013 at 10:09:57AM -0700, Laura Abbott wrote:
> I thought about this as well but in arch/arm/mm/mm.h:
>
> #ifdef CONFIG_ZONE_DMA
> extern phys_addr_t arm_dma_limit;
> #else
> #define arm_dma_limit ((phys_addr_t)~0)
> #endif
>
> arm_dma_limit is explicitly cast to phys_addr_t, which means that  
> arm_dma_limit will be always be sizeof(phys_addr_t) regardless of  
> sizeof(dma_addr_t).

This is the size of the DMA zone, which controls the _minimum_ DMA mask
that can be used in the system.  DMA masks must always be greater than
this limit.


Re: [PATCH 2/2] thermal: db8500_cpufreq_cooling: remove usage of IS_ERR_OR_NULL()

2013-04-25 Thread Russell King
On Thu, Apr 25, 2013 at 10:13:34AM -0400, Eduardo Valentin wrote:
> diff --git a/drivers/thermal/db8500_cpufreq_cooling.c b/drivers/thermal/db8500_cpufreq_cooling.c
> index 21419851..786d192 100644
> --- a/drivers/thermal/db8500_cpufreq_cooling.c
> +++ b/drivers/thermal/db8500_cpufreq_cooling.c
> @@ -37,7 +37,7 @@ static int db8500_cpufreq_cooling_probe(struct platform_device *pdev)
>   cpumask_set_cpu(0, &mask_val);
>   cdev = cpufreq_cooling_register(&mask_val);
>  
> - if (IS_ERR_OR_NULL(cdev)) {
> + if (IS_ERR(cdev)) {
>   dev_err(&pdev->dev, "Failed to register cooling device\n");
>   return PTR_ERR(cdev);

Correct.  cpufreq_cooling_register() returns either an error-pointer or
a valid pointer.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:


Re: [PATCH 1/2] GPIO: clps711x: Rewrite driver for using generic GPIO code

2013-04-25 Thread Russell King - ARM Linux
On Thu, Apr 25, 2013 at 03:31:07PM +0400, Alexander Shiyan wrote:
> > On Sat, Apr 13, 2013 at 08:21:39AM +0400, Alexander Shiyan wrote:
> > > +static void __init clps711x_add_gpio(void)
> > > +{
> > > + const struct resource clps711x_gpio0_res[] = {
> > > + DEFINE_RES_MEM(PADR, SZ_1),
> > > + DEFINE_RES_MEM(PADDR, SZ_1),
> > > + };
> ...
> > Don't do this - this is incredibly wasteful.
> > 
> > C lesson 1: do not put unmodified initialised structures onto the stack.
> > 
> > What the C compiler will do with the above is exactly the same as this
> > for each structure:
> > 
> > static const struct resource private_clps711x_gpio4_res[] = {
> > DEFINE_RES_MEM(PEDR, SZ_1),
> > DEFINE_RES_MEM(PEDDR, SZ_1),
> > };
> > 
> > static void __init clps711x_add_gpio(void)
> > {
> > const struct resource clps711x_gpio4_res[] = private_clps711x_gpio4_res;
> > ...
> > 
> > which will in itself be a call out to memcpy, or an inlined memcpy.  Now
> > do you see why it's wasteful?  You might as well write the thing in that
> > way to start with and save the additional code to copy the structures.
> > 
> > The other way to do this, which will probably be far more space efficient:
> > 
> > static phys_addr_t gpio_addrs[][2] = {
> > { PADR, PADDR },
> > { PBDR, PBDDR },
> > ...
> > };
> > 
> > static void __init clps711x_add_gpio(void)
> > {
> > struct resource gpio_res[2];
> > unsigned i;
> > 
> > gpio_res[0].flags = IORESOURCE_MEM;
> > gpio_res[1].flags = IORESOURCE_MEM;
> > 
> > for (i = 0; i < ARRAY_SIZE(gpio_addrs); i++) {
> > gpio_res[0].start = gpio_addrs[i][0];
> > gpio_res[0].end = gpio_res[0].start;
> > gpio_res[1].start = gpio_addrs[i][1];
> > gpio_res[1].end = gpio_res[1].start;
> > 
> > platform_device_register_simple("clps711x-gpio", i,
> > gpio_res, ARRAY_SIZE(gpio_res));
> > }
> > }
> > 
> > which results in smaller storage and most probably also smaller code size
> > too.
> 
> Very strange results with this change.
> So, I add a debug output before "platform_device_register_simple" for
> print resource and in "__request_resource" for print parent.
> This is output.
> gpio 0 [mem 0x8000] [mem 0x8040]
> resource: root [??? 0xe3a0f000-0x flags 0xb0]
> clps711x-gpio.0: failed to claim resource 0
> 
> The first line is seems correct. But I do not understand why
> parent is wrong here. Normally it should be as:
> resource: root [mem 0x-0x].
> And it shows normal with my first version.
> Have anyone any ideas?

memset(&gpio_res, 0, sizeof(gpio_res));


Re: [RFC v2] video: ARM CLCD: Add DT & CDF support

2013-04-22 Thread Russell King - ARM Linux
On Thu, Apr 18, 2013 at 06:33:21PM +0100, Pawel Moll wrote:
> This patch adds basic DT bindings for the PL11x CLCD cells
> and make their fbdev driver use them, together with the
> Common Display Framework.
> 
> The DT provides information about the hardware configuration
> and limitations (eg. the largest supported resolution)
> but the video modes come exclusively from the Common
> Display Framework drivers, referenced to by the standard CDF
> binding.
> 
> Signed-off-by: Pawel Moll 

Much better.

I will point out though that there be all sorts of worms here when you
come to the previous ARM evaluation boards (which is why the capabilities
stuff got written in the first place) where there's a horrid mixture of
BGR/RGB ordering at various levels of the system - some of which must be
set correctly because the CLCD output isn't strictly used as R bits,
G bits and B bits (to support different formats from the CLCD's native
formats).


Re: [RFC PATCH] drivers: bus: add ARM CCI support

2013-04-22 Thread Russell King - ARM Linux
On Thu, Apr 18, 2013 at 01:54:04PM -0400, Nicolas Pitre wrote:
> On Thu, 18 Apr 2013, Stephen Boyd wrote:
> 
> > On 04/11/13 07:47, Lorenzo Pieralisi wrote:
> > > diff --git a/drivers/bus/arm-cci.c b/drivers/bus/arm-cci.c
> > > new file mode 100644
> > > index 000..81953de
> > > --- /dev/null
> > > +++ b/drivers/bus/arm-cci.c
> > [...]
> > > +static void notrace cci_port_control(unsigned int port, bool enable)
> > > +{
> > > + void __iomem *base = ports[port].base;
> > > +
> > > + if (!base)
> > > + return;
> > > +
> > > + writel_relaxed(enable, base + CCI_PORT_CTRL);
> > > + while (readl_relaxed(cci_ctrl_base + CCI_CTRL_STATUS) & 0x1)
> > > + ;
> > 
> > cpu_relax()?
> 
> In some cases there is no more cache coherence when this is called and 
> the hardware might not be in a good state to cope with whatever action 
> people might be tempted to insert into cpu_relax().  After the CCI is 
> disabled it is important to keep a low profile and not touch anything 
> global. With some early hardware revision, even a DMB here was harmful.

It would be useful if there was a comment in the code explaining this.
As it stands, you _will_ get a stream of patches from kernel janitors
itching to add cpu_relax() there.

If you're going to do something different from the norm, it _needs_ to
definitely be commented and explained.


Re: [PATCHv8 07/19] arm: pci: add a align_resource hook

2013-04-19 Thread Russell King - ARM Linux
On Mon, Apr 15, 2013 at 12:36:09PM -0400, Jason Cooper wrote:
> Russell,
> 
> Thanks for applying this patch (7683/1) to your tree.  I see it's in
> your for-next, which I understand *isn't* stable.

That is correct.

>   029baf1 ARM: 7683/1: pci: add a align_resource hook
> 
> Is there a stable branch I could depend on for the rest of this series
> (particularly, patch 10)?  It looks like you applied it to your misc
> branch, but that's not currently available.

I will have to look at that; as I'm still catching up with mail three
days after having returned and there's still quite an amount outstanding,
it's going to be a while before I can look at this.  Given that we're at
-rc7 and the amount of outstanding mail, it's likely that this may happen
after the merge window has opened.



Re: [PATCH 1/2] GPIO: clps711x: Rewrite driver for using generic GPIO code

2013-04-19 Thread Russell King - ARM Linux
On Sat, Apr 13, 2013 at 08:21:39AM +0400, Alexander Shiyan wrote:
> +static void __init clps711x_add_gpio(void)
> +{
> + const struct resource clps711x_gpio0_res[] = {
> + DEFINE_RES_MEM(PADR, SZ_1),
> + DEFINE_RES_MEM(PADDR, SZ_1),
> + };
> + const struct resource clps711x_gpio1_res[] = {
> + DEFINE_RES_MEM(PBDR, SZ_1),
> + DEFINE_RES_MEM(PBDDR, SZ_1),
> + };
> + const struct resource clps711x_gpio2_res[] = {
> + DEFINE_RES_MEM(PCDR, SZ_1),
> + DEFINE_RES_MEM(PCDDR, SZ_1),
> + };
> + const struct resource clps711x_gpio3_res[] = {
> + DEFINE_RES_MEM(PDDR, SZ_1),
> + DEFINE_RES_MEM(PDDDR, SZ_1),
> + };
> + const struct resource clps711x_gpio4_res[] = {
> + DEFINE_RES_MEM(PEDR, SZ_1),
> + DEFINE_RES_MEM(PEDDR, SZ_1),
> + };

Don't do this - this is incredibly wasteful.

C lesson 1: do not put unmodified initialised structures onto the stack.

What the C compiler will do with the above is exactly the same as this
for each structure:

static const struct resource private_clps711x_gpio4_res[] = {
DEFINE_RES_MEM(PEDR, SZ_1),
DEFINE_RES_MEM(PEDDR, SZ_1),
};

static void __init clps711x_add_gpio(void)
{
const struct resource clps711x_gpio4_res[] = private_clps711x_gpio4_res;
...

which will in itself be a call out to memcpy, or an inlined memcpy.  Now
do you see why it's wasteful?  You might as well write the thing in that
way to start with and save the additional code to copy the structures.

The other way to do this, which will probably be far more space efficient:

static phys_addr_t gpio_addrs[][2] = {
{ PADR, PADDR },
{ PBDR, PBDDR },
...
};

static void __init clps711x_add_gpio(void)
{
struct resource gpio_res[2];
unsigned i;

gpio_res[0].flags = IORESOURCE_MEM;
gpio_res[1].flags = IORESOURCE_MEM;

for (i = 0; i < ARRAY_SIZE(gpio_addrs); i++) {
gpio_res[0].start = gpio_addrs[i][0];
gpio_res[0].end = gpio_res[0].start;
gpio_res[1].start = gpio_addrs[i][1];
gpio_res[1].end = gpio_res[1].start;

platform_device_register_simple("clps711x-gpio", i,
gpio_res, ARRAY_SIZE(gpio_res));
}
}

which results in smaller storage and most probably also smaller code size
too.


Re: [RFC 06/10] video: ARM CLCD: Add DT & CDF support

2013-04-18 Thread Russell King - ARM Linux
On Wed, Apr 17, 2013 at 04:17:18PM +0100, Pawel Moll wrote:
> +#if defined(CONFIG_OF)
> +static int clcdfb_of_get_tft_parallel_panel(struct clcd_panel *panel,
> + struct display_entity_interface_params *params)
> +{
> + int r = params->p.tft_parallel.r_bits;
> + int g = params->p.tft_parallel.g_bits;
> + int b = params->p.tft_parallel.b_bits;
> +
> + /* Bypass pixel clock divider, data output on the falling edge */
> + panel->tim2 = TIM2_BCD | TIM2_IPC;
> +
> + /* TFT display, vert. comp. interrupt at the start of the back porch */
> + panel->cntl |= CNTL_LCDTFT | CNTL_LCDVCOMP(1);
> +
> + if (params->p.tft_parallel.r_b_swapped)
> + panel->cntl |= CNTL_BGR;

NAK.  Do not set this explicitly.  Note the driver talks about this being
"the old way" - this should not be used with the panel capabilities - and
in fact it will be overwritten.

Instead, you need to encode this into the capabilities by masking the
below with CLCD_CAP_RGB or CLCD_CAP_BGR depending on the ordering.


Re: [PATCH 1/6] of/pci: Provide support for parsing PCI DT ranges property

2013-03-23 Thread Russell King - ARM Linux
On Sat, Mar 23, 2013 at 01:04:53PM +0900, Jingoo Han wrote:
> - switch ((pci_space >> 24) & 0x3) {
> - case 1: /* PCI IO space */
> + if (iter.flags & IORESOURCE_IO) {

Please look at how IORESOURCE_* stuff is defined:
#define IORESOURCE_TYPE_BITS0x1f00  /* Resource type */
#define IORESOURCE_IO   0x0100  /* PCI/ISA I/O ports */
#define IORESOURCE_MEM  0x0200
#define IORESOURCE_REG  0x0300  /* Register offsets */
#define IORESOURCE_IRQ  0x0400
#define IORESOURCE_DMA  0x0800
#define IORESOURCE_BUS  0x1000

Notice that it's not an array of bits.

So this should be:
if ((iter.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO) {

> + iter.cpu_addr = iter.pci_addr;
> + } else if (iter.flags & IORESOURCE_MEM) {

And this should be:
} else if ((iter.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM) {

> + if (iter.flags & IORESOURCE_IO) {

Same here.

> + } else if (iter.flags & IORESOURCE_MEM) {

And again here.

> - switch ((pci_space >> 24) & 0x3) {
> - case 1: /* PCI IO space */
> + if (iter.flags & IORESOURCE_IO) {

ditto.

> + iter.cpu_addr = iter.pci_addr;
> + } else if (flags & IORESOURCE_MEM) {

ditto.


Re: [PATCH 03/11] ARM: timer-sp: convert to use CLKSRC_OF init

2013-03-22 Thread Russell King - ARM Linux
On Thu, Mar 21, 2013 at 09:31:01PM -0500, Rob Herring wrote:
> Perhaps re-writing it like this would be more clear:
> 
> if (irq_num == 2){
>   __sp804_clockevents_init(base + TIMER_2_BASE, irq, clk1, name);
>   __sp804_clocksource_and_sched_clock_init(base, name, clk0, 1);
> } else {
>   __sp804_clockevents_init(base, irq, clk0, name);
>   __sp804_clocksource_and_sched_clock_init(base + TIMER_2_BASE,
>   name, clk1, 1);
> }

Yes, that is definitely much clearer than the original patch.


Re: [PATCH 2/5] arm: mvebu: Select DMA_BOUNCE when LPAE is selected in Kconfig

2013-03-22 Thread Russell King - ARM Linux
On Thu, Mar 21, 2013 at 05:26:15PM +0100, Gregory CLEMENT wrote:
> From: Lior Amsalem 
> 
> For mvebu IOs are 32 bits and we have 40 bits memory due to LPAE so
> make sure we give 32 bits addresses to the IOs.
> 
> Signed-off-by: Lior Amsalem 
> Tested-by: Franklin 
> Signed-off-by: Gregory CLEMENT 

Oh god no.  Please move away from the addition of DMABOUNCE - that code
creaks, doesn't have highmem support, and is known to give problems on
various platforms.

Instead, please rely on using the DMA mask and such like, just like on
x86.


Re: [PATCH 11/11] devtree: add binding documentation for sp804

2013-03-21 Thread Russell King - ARM Linux
On Wed, Mar 20, 2013 at 05:54:11PM -0500, Rob Herring wrote:
> +Optional properties:
> +- arm,sp804-has-irq = <#>: In the case of only 1 timer irq line connected, this
> + specifies if the irq connection is for timer 1 or timer 2. A value of 1
> + or 2 should be used.

This fails to mention that it has any bearing on the clocks.


Re: [PATCH 02/11] ARM: remove extra timer-sp control register clearing

2013-03-21 Thread Russell King - ARM Linux
On Wed, Mar 20, 2013 at 05:54:02PM -0500, Rob Herring wrote:
> From: Rob Herring 
> 
> The timer-sp initialization code clears the control register before
> initializing the timers, so every platform doing this is redundant.
> 
> For unused timers, we should not care what state they are in.

NAK.  We do care what state they're in.  What if they have their interrupt
enable bit set, and IRQ is shared with the clock event timer?

No, this patch is just wrong.


Re: [PATCH 03/11] ARM: timer-sp: convert to use CLKSRC_OF init

2013-03-21 Thread Russell King - ARM Linux
On Wed, Mar 20, 2013 at 05:54:03PM -0500, Rob Herring wrote:
> + clk0 = of_clk_get(np, 0);
> + if (IS_ERR(clk0))
> + clk0 = NULL;
> +
> + /* Get the 2nd clock if the timer has 2 timer clocks */
> + if (of_count_phandle_with_args(np, "clocks", "#clock-cells") == 3) {
> + clk1 = of_clk_get(np, 1);
> + if (IS_ERR(clk1)) {
> + pr_err("sp804: %s clock not found: %d\n", np->name,
> + (int)PTR_ERR(clk1));
> + return;
> + }
> + } else
> + clk1 = clk0;
> +
> + irq = irq_of_parse_and_map(np, 0);
> + if (irq <= 0)
> + return;
> +
> + of_property_read_u32(np, "arm,sp804-has-irq", &irq_num);
> + if (irq_num == 2)
> + tmr2_evt = true;
> +
> + __sp804_clockevents_init(base + (tmr2_evt ? TIMER_2_BASE : 0),
> +  irq, tmr2_evt ? clk1 : clk0, name);
> + __sp804_clocksource_and_sched_clock_init(base + (tmr2_evt ? 0 : TIMER_2_BASE),
> +  name, tmr2_evt ? clk0 : clk1, 1);

This just looks totally screwed to me.

A SP804 cell has two timers, and has one clock input and two clock
enable inputs.  The clock input is common to both timers.  The timers
only count on the rising edge of the clock input when the enable
input is high.  (the APB PCLK also matters too...)

Now, the clock enable inputs are controlled by the SP810 system
controller to achieve different clock rates for each.  So, we *can*
view an SP804 as having two clock inputs.

However, the two clock inputs do not depend on whether one or the
other has an IRQ or not.  Timer 1 is always clocked by TIMCLK &
TIMCLKEN1.  Timer 2 is always clocked by TIMCLK & TIMCLKEN2.

Using the logic above, the clocks depend on how the IRQs are wired
up... really?  Can you see from my description above why that is
screwed?  What bearing does the IRQ have on the wiring of the
clock inputs?

And, by doing this aren't you embedding into the DT description
something specific to the Linux implementation for this device -
namely the decision by you that the IRQs determine how the clocks
get used?

So... NAK.


Re: [PATCH 3/4 v2] net: mvmdio: enhance driver to support SMI error/done interrupts

2013-03-15 Thread Russell King - ARM Linux
On Thu, Mar 14, 2013 at 07:08:34PM +0100, Florian Fainelli wrote:
> + if (dev->err_interrupt == NO_IRQ) {
...
> + init_waitqueue_head(&dev->smi_busy_wait);
> +
> + dev->err_interrupt = platform_get_irq(pdev, 0);
> + if (dev->err_interrupt != -ENXIO) {
...
> + } else
> + dev->err_interrupt = NO_IRQ;


FYI, NO_IRQ is not supposed to be used anymore (we're supposed to be
removing it).  platform_get_irq() returns negative numbers for failure,
so why not test for < 0 in both the above tests, or maybe <= 0 (as
IRQ0 is also not supposed to be valid.)?


Re: [PATCH 1/1] ARM: OMAP: gpmc: request CS address space for ethernet chips

2013-03-12 Thread Russell King - ARM Linux
On Sun, Mar 10, 2013 at 06:18:22PM +0100, Javier Martinez Canillas wrote:
> +static int gpmc_probe_ethernet_child(struct platform_device *pdev,
> +  struct device_node *child)
> +{
> + int ret, cs;
> + unsigned long base;
> + struct resource res;
> + struct platform_device *of_dev;
> +
> + if (of_property_read_u32(child, "reg", &cs) < 0) {
> + dev_err(&pdev->dev, "%s has no 'reg' property\n",
> + child->full_name);
> + return -ENODEV;
> + }
> +
> + if (of_address_to_resource(child, 0, &res)) {
> + dev_err(&pdev->dev, "%s has malformed 'reg' property\n",
> + child->full_name);
> + return -ENODEV;
> + }
> +
> + ret = gpmc_cs_request(cs, resource_size(&res), &base);
> + if (IS_ERR_VALUE(ret)) {

NAK.  ret < 0 is the correct way to test here.  Don't use IS_ERR_VALUE
unless you have a _very_ good reason to.  That's a good bit of advice
for everything in linux/err.h.  Don't use *anything* from there without
a damned good reason.


Re: omap: IORESOURCE_IRQ flags not set when defining a GPIO-IRQ from DT

2013-03-01 Thread Russell King - ARM Linux
On Fri, Mar 01, 2013 at 04:41:37PM -0600, Jon Hunter wrote:
> Thanks for the history. For OMAP I see SMC_IRQ_FLAGS getting defined as
> follows in smc91x.h ...
> 
> #ifndef SMC_IRQ_FLAGS
> #define SMC_IRQ_FLAGS   IRQF_TRIGGER_RISING
> #endif
> 
> And so for OMAP devices using smc91x, it is always being configured as
> rising-edge. So it would be good to move OMAP to use a dynamic
> configuration too.

Yep.  I think that was requested back when I did the work from those
which remained...


Re: omap: IORESOURCE_IRQ flags not set when defining a GPIO-IRQ from DT

2013-03-01 Thread Russell King - ARM Linux
On Fri, Mar 01, 2013 at 05:17:57PM +0100, Javier Martinez Canillas wrote:
> >> unsigned long irq_flags = SMC_IRQ_FLAGS;
> >> ...
> >>if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK)
> >> irq_flags = ires->flags & IRQF_TRIGGER_MASK;
> >>
> >> while smsc911x driver's probe function uses the flags from the
> >> resource unconditionally:
> >>
> >> irq_flags = irq_res->flags & IRQF_TRIGGER_MASK;
> >>
> >> So, at the end both will set irq_flags to whatever is on the
> >> IORESOURCE_IRQ struct resource flags member.
> >
> > Actually, that's not the case for smc91x. By default SMC_IRQ_FLAGS != -1
> > (for omap) and so it will not set irq_flags to ires->flags &
> > IRQF_TRIGGER_MASK. However, if I force irq_flags to be -1, then I see
> > that irq_flags are to 0.

smc91x is complicated by the fact that it started off life before there
was any possibility to pass IRQ flags through resources.  So we ended
up with smc91x.h containing _lots_ of platform specific data, and the
driver could only be built for one platform.

I fixed that by sorting out this IRQ passing method, and changing smc91x
to support both the fixed configuration, and the dynamic-through-IRQflags
method.

There is no reason for any other driver to support the fixed method; that
would be a completely backwards step.


Re: [RFC PATCH 7/9] thermal: tegra30: add tegra30 thermal driver

2013-02-19 Thread Russell King - ARM Linux
On Mon, Feb 18, 2013 at 07:30:29PM +0800, Wei Ni wrote:
> +static struct tegra_thermal_data * __devinit thermal_tegra_dt_parse_pdata(

__dev* no longer exists.

> + tdata = devm_kzalloc(&pdev->dev, sizeof(*tdata), GFP_KERNEL);
> + if (!tdata) {
> + dev_err(&pdev->dev, "Can't allocate platform data\n");
> + return NULL;
> + }
> + memset(tdata, 0, sizeof(*tdata));

Useless memset.  k*z*alloc already zeros the memory before returning.

> +static int tegra30_thermal_probe(struct platform_device *pdev)
> +{
> + struct tegra_thermal_data *pdata = pdev->dev.platform_data;

You read pdata here

> + struct thermal_zone *tz;
> + struct thermal_sensor *ts;
> + static struct thermal_cooling_device *cdev;
> + int ret;
> +
> + pdata = thermal_tegra_dt_parse_pdata(pdev);

and immediately overwrite it here.

> + if (!pdata) {
> + dev_err(&pdev->dev, "Get platform data failed.\n");
> + return -EINVAL;
> + }
> +
> + /* Create a thermal zone */
> + tz = create_thermal_zone("tz_tegra", NULL);
> + if (!tz) {
> + dev_err(&pdev->dev, "Create thermal_zone failed.\n");
> + return -EINVAL;
> + }
> +
> + pdata->tz = tz;

This isn't how you deal with driver data.  Set driver data against a
platform device using platform_set_drvdata(pdev, tz).

> +static int tegra30_thermal_remove(struct platform_device *pdev)
> +{
> + struct tegra_thermal_data *pdata = pdev->dev.platform_data;

and use platform_get_drvdata() here - and don't use pdata->tz.
struct thermal_zone *tz = platform_get_drvdata(pdev);


Re: [PATCH 2/4] dmaengine: dw_dmac: move to generic DMA binding

2013-02-16 Thread Russell King - ARM Linux
On Sat, Feb 16, 2013 at 10:07:39AM +, Arnd Bergmann wrote:
> On Saturday 16 February 2013, Viresh Kumar wrote:
> > On 15 February 2013 23:51, Arnd Bergmann  wrote:
> > > +static bool dw_dma_generic_filter(struct dma_chan *chan, void *param)
> > >  {
> > 
> > > +   dws->cfg_hi = 0x;
> > > +   dws->cfg_lo = 0x;
> > 
> > s/0x/-1 ?
> 
> It's an 'unsigned int'. While -1 would work here, I always find it a little
> odd to rely on that feature of the C language.

However, relying on an 'int' being 32-bits is also rather odd, and
probably much more dubious too.  If you want to set all bits in an
int, the portable way to do that is ~0.


Re: [PATCH v2 8/8] reset: Add driver for gpio-controlled reset pins

2013-02-14 Thread Russell King - ARM Linux
On Wed, Feb 13, 2013 at 06:34:32PM +0100, Philipp Zabel wrote:
> Signed-off-by: Philipp Zabel 

Just be aware that PXA has a "gpio reset" facility which is used for SoC
reset - see arch/arm/mach-pxa/reset.c

The use of gpio-reset would be ambiguous... or maybe PXA's usage could be
combined somehow with this?


Re: [PATCH RFC 1/7] platform: add a device node

2013-02-10 Thread Russell King - ARM Linux
On Sun, Feb 10, 2013 at 02:49:21AM +0100, Javier Martinez Canillas wrote:
> I knew this would be controversial and that's why I didn't mean it to be a 
> patch
> but a RFC :)
> 
> The problem basically is that you have to associate the platform device with 
> its
> corresponding DT device node because it can be used in the driver probe 
> function.

When DT is being used, doesn't DT create the platform devices for you,
with the device node already set correctly?

Manually creating platform devices and adding DT nodes to it sounds like
the wrong thing to be doing.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-09 Thread Russell King - ARM Linux
On Sat, Feb 09, 2013 at 09:35:53PM +0530, Sekhar Nori wrote:
> On 2/1/2013 11:52 PM, Matt Porter wrote:
> > +   ret = of_address_to_resource(node, 1, &res);
> 
> of_address_to_resource() needs 
> 
> > +   if (IS_ERR_VALUE(ret))
> 
> This needs 

More importantly, is this the correct way to check for errors from
of_address_to_resource() ?  Grepping the source shows that either less
than 0 or non-zero are the commonly used conditions here.


Re: [PATCH 2/2] dmaengine: OMAP: Register SDMA controller with Device Tree DMA driver

2013-02-09 Thread Russell King - ARM Linux
On Wed, Feb 06, 2013 at 03:03:16PM -0600, Jon Hunter wrote:
> @@ -673,7 +702,7 @@ static int omap_dma_init(void)
>  {
>   int rc = platform_driver_register(&omap_dma_driver);
>  
> - if (rc == 0) {
> + if ((rc == 0) && (!of_have_populated_dt())) {

Looks good, the worst I can find is this over-use of parens...


Re: [PATCH 2/5] spi: pl022: use generic DMA slave configuration if possible

2013-02-08 Thread Russell King - ARM Linux
On Thu, Feb 07, 2013 at 10:15:48PM +0100, Arnd Bergmann wrote:
> On Thursday 07 February 2013 21:19:04 Linus Walleij wrote:
> > On Thu, Feb 7, 2013 at 8:42 PM, Arnd Bergmann  wrote:
> > > On Thursday 07 February 2013, Linus Walleij wrote:
> > 
> > >> Actually I once read about a feature where the kernel provides
> > >> a static page full of zeroes or something like this, that would be
> > >> ideal to use in cases like this, then all of this dummy page
> > >> allocation and freeing can be deleted.
> > >
> > > You mean empty_zero_page? That only works if this page is
> > > read-only from the perspective of the DMA controller, but
> > > then it would be a good fit, yes.
> > 
> > That's actually how it's used.
> > 
> > SPI is symmetric, and in the DMA case we're not poking
> > data into the buffers from the CPU so the controller need
> > something - anything - to stream to the block.
> > 
> > If we can use that page we'll even save a few remaps.
> 
> I'm slightly worried about the caching effects though. The
> idea of the empty-zero page is that all user processes get
> it when they read a page before they write to it, so the
> data in it can essentially always be cache-hot.
> 
> If we do DMA from that page to a device what would be the
> overhead of flushing the (clean) cache lines?

If it's DMA _to_ a device, then we will only ever clean the lines prior to
a transfer, never invalidate them.  So that's not really a concern.  (There
better not be any dirty cache lines associated with the empty zero page
either.)


Re: [PATCH 2/5] spi: pl022: use generic DMA slave configuration if possible

2013-02-08 Thread Russell King - ARM Linux
On Thu, Feb 07, 2013 at 07:29:17PM +0100, Linus Walleij wrote:
> Actually I once read about a feature where the kernel provides
> a static page full of zeroes or something like this, that would be
> ideal to use in cases like this, then all of this dummy page
> allocation and freeing can be deleted.

I think you're thinking of empty_zero_page which is used to provide the
initial BSS pages for user apps.


Re: [PATCH 06/14] ARM: pci: Keep pci_common_init() around after init

2013-02-06 Thread Russell King - ARM Linux
On Tue, Feb 05, 2013 at 09:41:48PM +0100, Thierry Reding wrote:
> On Wed, Jan 09, 2013 at 09:43:06PM +0100, Thierry Reding wrote:
> > When using deferred driver probing, PCI host controller drivers may
> > actually require this function after the init stage.
> > 
> > Signed-off-by: Thierry Reding 
> > ---
> >  arch/arm/kernel/bios32.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> Russell,
> 
> Can this patch and patch 7 (ARM: pci: Allow passing per-controller
> private data) of this series be applied for 3.9? Thomas uses them in his
> Marvell PCIe series as well and it would allow to reduce the complexity
> of the dependencies.

It'll need to go into the patch system in that case...


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Tue, Feb 05, 2013 at 04:30:45PM +0100, Linus Walleij wrote:
> On Mon, Feb 4, 2013 at 10:54 PM, Cyril Chemparathy  wrote:
> > On 02/04/2013 04:11 PM, Linus Walleij wrote:
> 
> >> Cyril, just stack up the cookies and take a sweep over them to see
> >> which ones are baked when the NAPI poll comes in -> problem
> >> solved.
> >
> > You're assuming that cookies complete in order.  That is not necessarily
> > true.
> 
> So put them on a wait list? Surely you will have a list of pending
> cookies and pick from the front of the queue if there isn't a hole on
> queue position 0.

Not quite.  The key is the cookie system DMA engine employs to indicate
when a cookie is complete.

Cookies between the "issued sequence" and "completed sequence" are defined
to be in progress, everything else is defined to be completed.

This means that if "completed sequence" is 1, and "issued sequence" is 5,
then cookies with values 2, 3, 4, 5 are in progress.  You can't mark
sequence 4 as being complete until 2 and 3 have completed.

If we need out-of-order completion, then that's a problem for the DMA
engine API, because you'd need to radically change the way "completion"
is marked.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Tue, Feb 05, 2013 at 04:47:05PM +, Mark Brown wrote:
> On Tue, Feb 05, 2013 at 05:21:48PM +0100, Linus Walleij wrote:
> 
> > For IRQ mode, use the completion callback to push each cookie
> > to NAPI, and thus let the IRQ drive the traffic.
> 
> The whole purpose of NAPI is to avoid taking interrupts for completion
> of transfers.  Anything that generates interrupts when NAPI is in
> polling mode is defeating the point.

Yes, but you're missing one very important point.  If your DMA engine
is decoupled from the network device, and the DMA engine relies upon
interrupts to queue further transfers to move data into RAM, then you
have two options:

1. Receive these interrupts so you can update the DMA engine for
   further data transfer.
2. Don't receive these interrupts, and cause the network device to
   error out with a FIFO overrun because its DMA requests have not
   been serviced. (which may raise an interrupt.)


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 04:54:45PM -0500, Cyril Chemparathy wrote:
> You're assuming that cookies complete in order.  That is not necessarily  
> true.

Under what circumstances is that not true?


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 09:47:38PM +, Arnd Bergmann wrote:
> On Monday 04 February 2013, Linus Walleij wrote:
> > So I think the above concerns are moot. The callback we can
> > set on cookies is entirely optional, and it's even implemented by
> > each DMA engine, and some may not even support it but require
> > polling, and then it won't even be implemented by the driver.
> 
> Just to ensure that everybody is talking about the same thing here:
> Is it just the callback that is optional, or also the interrupt
> coming from the hardware?

If everyone implements stuff correctly, both.  The callback most certainly
is optional as things stand.  The interrupt - that depends on the DMA
engine.

Some DMA engines you can't avoid it because you need to reprogram the
hardware with the next+1 transfer upon completion of an existing transfer.
Others may allow you to chain transfers in hardware.  That's all up to
how the DMA engine driver is implemented and how the hardware behaves.

Now, there's another problem here: that is, people abuse the API.  People
don't pass DMA_CTRL_ACK | DMA_PREP_INTERRUPT into their operations by
default.  People like typing '0'.

The intention of the "DMA_PREP_INTERRUPT" is significant here: it means
"ask the hardware to send an interrupt upon completion of this transfer".

Because so many people like to type '0' instead in their DMA engine
clients, it means that this flag is utterly useless today - you have to
ignore it.  So there's _no_ way for client drivers to actually tell
a DMA engine driver which _doesn't_ need to signal interrupts at the end
of every transfer not to do so.

So yes, the DMA engine API supports it.  Whether the _implementations_
themselves do is very much hit and miss (and in reality is much more
miss than hit.)


Re: [PATCH V2 1/6] pinctrl: pinctrl-single: use arch_initcall and module_exit

2013-02-05 Thread Russell King - ARM Linux
On Tue, Feb 05, 2013 at 06:36:34AM +, Vishwanathrao Badarkhe, Manish wrote:
> I made following changes, in order to update "dip->p" pointer with 
> correct value:
> 
> -   if (!dpi->p) {
> +   if (IS_ERR_OR_NULL(dpi->p)) {
> dpi->p = devm_pinctrl_get(dev);
> -   if (IS_ERR_OR_NULL(dpi->p)) {
> -   int ret = PTR_ERR(dpi->p);
> -
> -   dev_dbg(dev, "no pinctrl handle\n");
> -   /* Only return deferrals */
> -   if (ret == -EPROBE_DEFER)
> -   return ret;
> -   return 0;
> -   }
> +   ret = PTR_ERR(dpi->p);
> +   dev_dbg(dev, "no pinctrl handle\n");
> +   /* Only return deferrals */
> +   if (ret == -EPROBE_DEFER)
> +   return ret;
> +   return 0;
> 
> Is this intended change?

The above looks totally broken to me.

Oh, it's using IS_ERR_OR_NULL(), so it's bound to be broken.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-04 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 06:47:12PM +0200, Felipe Balbi wrote:
> Hi,
> 
> On Mon, Feb 04, 2013 at 08:36:38PM +0300, Sergei Shtylyov wrote:
> >In my eyes, getting rid of the mess doesn't justify breaking the rules 
> > that
> > Russell formulated above.
> 
> MUSB is no PCI, there is no single, standard interface to the DMA
> engine (unlike Synopsys' dwc-usb3 and dwc-usb2, where we don't use DMA
> engine), every DMA engine comes with its own set of registers, its own
> programming model and so forth.

Err, before you get too wrapped up in that argument... if you think there's
anything like a common hardware interface for DMA on PCI, you'd be wrong.
There isn't.

What there is is a bus protocol for it.  How the device decides to perform
DMA is up to the device; there's no standard register definitions across
all devices.  Yes, some classes have a standard register set (like USB,
and ATA) but others are totally device specific (eg, network chips).

However, on PCI, the DMA support is always integrated with the rest of
the device itself.  That is not the case here.

So, bringing PCI up into this discussion is totally irrelevant.  As I've
already explained, this was brought up in the _previous_ discussion as
a reason why _not_ to use the DMA engine API for this.

So, can we please leave PCI totally out of this?  It didn't belong here
2-3 years ago, and it sure enough doesn't belong here now.  It's nothing
but pure distraction.  And the more this goes on, the more I can see,
yet again, the CPPI 4.1 going nowhere at all.

As I can see it, there's nothing more to talk about.  The issue has been
extensively talked to death.  What it needs now is not _more_ talk, it
needs someone to actually go and _do_ something useful with it.

I realise that may be difficult given the mess that TI must still be in
internally since the upheaval of OMAP, but it sounds like there's a
different group needing this stuff to work...


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-04 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 05:41:53PM +0200, Felipe Balbi wrote:
> Hi,
> 
> On Fri, Feb 01, 2013 at 09:30:03PM +, Russell King - ARM Linux wrote:
> > > > > I guess to make the MUSB side simpler we would need musb-dma-engine 
> > > > > glue
> > > > > to map dmaengine to the private MUSB API. Then we would have some
> > > > > starting point to also move inventra (and anybody else) to dmaengine
> > > > > API.
> > > > 
> > > >Why? Inventra is a dedicated device's private DMA controller, why 
> > > > make
> > > > universal DMA driver for it?
> > > 
> > > because it doesn't make sense to support multiple DMA APIs. We can check
> > > from MUSB's registers if it was configured with Inventra DMA support and
> > > based on that we can register MUSB's own DMA Engine to dmaengine API.
> > 
> > Hang on.  This is one of the DMA implementations which is closely
> > coupled with the USB and only the USB?  If it is...
> > 
> > I thought this had been discussed _extensively_ before.  I thought the
> > resolution on it was:
> > 1. It would not use the DMA engine API.
> > 2. It would not live in arch/arm.
> > 3. It would be placed nearby the USB driver it's associated with.
> > 
> > (1) because we don't use APIs just for the hell of it - think.  Do we
> > use the DMA engine API for PCI bus mastering ethernet controllers?  No.
> > Do we use it for PCI bus mastering SCSI controllers?  No.  Because the
> > DMA is integral to the rest of the device.
> 
> that's not really a fair comparison, however. MUSB is used with several
> DMA engines.

I only mentioned it because it _was_ brought up as an argument against
using the DMA engine API in the previous discussions.  I'm just reminding
people what was discussed.

> Considering all of the above, it's far better to use DMA engine and get
> rid of all the mess.

Which is what both you and I have been saying for the last 3 or so years
on this subject...


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 08:27:42PM +0400, Sergei Shtylyov wrote:
>> There are two people on this thread CC list who were also involved or
>> CC'd on the mails from the thread in 2010...  Tony and Felipe.
>> Unfortunately, the person who agreed to do the work is no longer in the
>> land of the living.  Yes I know it's inconvenient for people to die
>> when they've still got lots of important work to do but that's what can
>> happen...
>
>Hm... wasn't it David Brownell? He's the only person who I know has 
> died recently who has dealt with DaVinci, MUSB and the releated stuff.

Actually, it wasn't David who was going to do it - that's where the email
thread gets messy because the mailer David was using makes no distinction
in text format between what bits of text make up the original email being
replied to, and which bits of text are written by David.

It might have been Felipe; there appears to be an email from Felipe saying
that as the immediate parent to David's email.  But that's not really the
point here.  The point is that _someone_ agreed to put the work in, and
_that_ agreement is what caused the patch to be discarded.

And, as I've already explained, you brought up the subject of it being
discarded shortly after, and it got discussed there _again_, and the
same things were said _again_ by at least two people about it being in
drivers/dma.

But anyway, that's all past history.  What was said back then about it
being elsewhere in the tree is just as relevant today as it was back
then.  The only difference is that because that message wasn't received,
it's now two years later with no progress on that.  And that's got
*nothing* what so ever to do with me.

I know people like to blame me just because I'm apparently the focus of
the blame culture, but really this is getting beyond a joke.

So, I want an apology from you for your insistance that I'm to blame
for this.  Moreover, _I_ want to know what is going to happen in the
future with this so that I don't end up being blamed anymore for the
lack of progress on this issue.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Fri, Feb 01, 2013 at 10:41:08AM -0800, Tony Lindgren wrote:
> * Matt Porter  [130201 10:25]:
> > Move mach-davinci/dma.c to common/edma.c so it can be used
> > by OMAP (specifically AM33xx) as well.
> 
> I think this should rather go to drivers/dma/?

Yes, it should, but just like OMAP, there's a conversion effort that needs
to be gone through.  It has one point - and only one point - which allows
its continued existence under arch/arm, and that is it already exists
there.

If it was new code, the answer would be a definite NACK, but it isn't.
It's pre-existing code which is already in mainline.  It's merely being
moved.

Another plus point for it is that there does seem to be a DMA engine
driver for it, so hopefully we'll see it killed off in arch/arm soon.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Fri, Feb 01, 2013 at 01:59:59PM -0500, Matt Porter wrote:
> On Fri, Feb 01, 2013 at 07:52:46PM +, Sergei Shtylyov wrote:
> > Hello.
> > 
> > On 02/01/2013 09:49 PM, Matt Porter wrote:
> > 
> > >>> Move mach-davinci/dma.c to common/edma.c so it can be used
> > >>> by OMAP (specifically AM33xx) as well.
> > 
> > >> I think this should rather go to drivers/dma/?
> > 
> > > No, this is the private EDMA API. It's the analogous thing to
> > > the private OMAP dma API that is in plat-omap/dma.c. The actual
> > > dmaengine driver is in drivers/dma/edma.c as a wrapper around
> > > this...same way OMAP DMA engine conversion is being done.
> > 
> >   Keeps me wondering why we couldn't have the same with CPPI 4.1 when I 
> > proposed
> > that, instead of waiting indefinitely for TI to convert it to drivers/dma/
> > directly. We could have working MUSB DMA on OMAP-L1x/Sitara all this 
> > time... Sigh.
> 
> That is a shame. Yeah, I've pointed out that I was doing this exactly
> the same way as was acceptable for the OMAP DMA conversion since it was
> in RFC. The reasons are sound since in both cases, we have many drivers
> to convert that need to continue using the private DMA APIs.

It's very simple.

The OMAP DMA stuff in plat-omap/dma.c has existed for a long time, well
before things like the DMA engine API came into being.  Same with a lot
of the DMA code in arch/arm.  It is therefore in the privileged position
of having "grandfather" rights when it comes to existing in mainline.

However, today what we're finding is that these private per-platform APIs
are sub-optimal; they prevent the peripheral drivers in the drivers/
subtree from being re-used on other SoCs.  So, even through they pre-exist
the DMA engine support, they're being gradually converted to the API.

Moreover, we have _far_ too much code in arch/arm.  It's something that
has been attracted complaints from Linus.  It's something that's got us
close to having updates to arch/arm refused during merge windows.  It's
got us close to having parts of arch/arm deleted from the mainline kernel.

The long term plan is _still_ to remove arch/arm/*omap*/dma.c and move
the whole thing to drivers/dma.  That can only happen when everything is
converted (or the drivers which aren't converted have been retired and
deleted) - and provided that TI decide to continue funding that work.
That's still uncertain since the whole TI OMAP thing imploded two months
ago.

Now, CPPI is brand new code to arch/arm - always has been.  It post-dates
the DMA engine API.  And it's been said many times about moving it to
drivers/dma.  The problem is Sergei doesn't want to do it - he's anti the
DMA engine API for whatever reasons he can dream up.  He professes no
knowledge of my dislike for having it in arch/arm, yet there's emails
from years back showing that he knew about it.  TI knows about it; Ajay
about it.  Yet... well... history speaks volumes about this.

Now, there may be a new problem with CPPI.  That being we're now requiring
DT support.  _Had_ this been done prior to the push for DT, then it would
not have been a concern, because then the problem becomes one for DT.
But now that OMAP is being converted to DT, and DT is The Way To Go now,
that's an additional problem that needs to be grappled with - or it may
make things easier.

However, as I've said now many times: CPPI is _not_ going in arch/arm.
Period.  That makes it not my problem.  Period.

I really have nothing further to say on CPPI; everything that can be said
has already been said.  If there's still pressure to have it in arch/arm,
I will _NOT_ respond to it, because there's nothing to be said about it
which hasn't been said before.  The only reason it's still being pressed
for is a total reluctance to do anything about it being in arch/arm.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 10:18:51AM +, Russell King - ARM Linux wrote:
> On Sat, Feb 02, 2013 at 06:09:24AM +0400, Sergei Shtylyov wrote:
> > Hello.
> >
> > On 02-02-2013 4:44, Russell King - ARM Linux wrote:
> >
> >>>>> On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
> >>>>>>> good point, do you wanna send some patches ?
> >
> >>>>>>  I have already sent them countless times and even stuck CPPI 4.1 
> >>>>>> support (in
> >>>>>> arch/arm/common/cppi41.c) in Russell's patch system. TI requested to 
> >>>>>> remove the
> >>>>>> patch. :-(
> >
> >>>>> sticking into arch/arm/common/ wasn't a nice move. But then again, so
> >>>>> wasn't asking for the patch to be removed :-s
> >
> >>>> Err, patches don't get removed, they get moved to 'discarded'.
> >
> >>> Any chance to bring it back to life? :-)
> >>> Although... drivers/usb/musb/cppi41.c would need to be somewhat
> >>> reworked for at least AM35x and I don't have time. But that may change,
> >>> of course.
> >
> >> Right, I've just looked back at the various meeting minutes from December
> >> 2010 when the CPPI stuff was discussed.  Yes, I archive these things and
> >> all email discussions for referencing in cases like this.
> >
> >Thanks.
> >
> >> Unfortunately, they do not contain any useful information other than the
> >> topic having been brought up.  At that point, the CPPI stuff was in
> >> mach-davinci, and I had suggested moving it into drivers/dma.
> >
> >I don't remember that, probably was out of the loop again.

Here you go - this goes back even _further_ - November 2009 - on the
mailing list.  The entire thread:

http://lists.arm.linux.org.uk/lurker/thread/20091102.105759.a54cf3f5.en.html

And selected emails from it:

http://lists.arm.linux.org.uk/lurker/message/20091102.103706.38c029b5.en.html
On Mon, Nov 02, 2009 at 10:37:06AM +, I wrote:
| On Mon, Nov 02, 2009 at 04:27:59PM +0530, Gupta, Ajay Kumar wrote:
| > Another option is to create arch/arm/ti-common to place all TI platform's
| > common software, such as CPPI4.1 used both in DA8xx and AM3517.
| 
| No thanks.  I really don't see why we should allow TI to have yet more
| directories scattered throughout the tree that are out of place with
| existing conventions.
| 
| And what is this CPPI thing anyway?
| 
| http://acronyms.thefreedictionary.com/CPPI
| 
| "Communications Port Programming Interface" seems to be about the best
| applicable one from that list!
| 
| If it's a USB DMA device (from the patches I can find, that seems to be
| the case) then why can't it live in drivers/usb or drivers/dma ?

And again:

http://lists.arm.linux.org.uk/lurker/message/20091102.115458.61cde450.en.html
On Mon, Nov 02, 2009 at 11:54:58AM +, I wrote:
| On Mon, Nov 02, 2009 at 04:27:59PM +0530, Gupta, Ajay Kumar wrote:
| > CPPI4.1 DMA engine can be used either by USB or by Ethernet interface though
| > currently only USB is using it but in future even Ethernet devices may use
| > it.
| 
| drivers/dma does seem to be the right place for this.

http://lists.arm.linux.org.uk/lurker/message/20091102.110217.adec3ca7.en.html
Even Felipe Balbi said so:
| you might want to provide support for it via drivers/dma and for the
| musb stuff, you just add the wrappers musb uses. See how tusb6010_omap.c
| uses OMAP's system dma which is also used by any other driver which
| requests a dma channel.

And it seems that _YOU_ did get the message - see your quoted text in:
http://lists.arm.linux.org.uk/lurker/message/20091230.132240.ecd56b3d.en.html
> We're currently having it there but the matter is it should be shared
> between different platforms, so arch/arm/common/ seems like the right
> place (which Russell didn't like, suggesting ill suited for that
> drivers/dma/ instead).

See - you acknowledge here that I don't like it.  So you _KNOW_ my views
on it in December 2009, contrary to what you're saying in this thread.

Yet, you persisted with putting it in arch/arm/common:

http://lists.arm.linux.org.uk/lurker/message/20100515.181453.472c7c10.en.html
| Changes since the previous version:
| - moved everything from arch/arm/mach-davinci/ to arch/arm/common/;
| - s/CONFIG_CPPI41/CONFIG_TI_CPPI41/, made that option invisible;
| - added #include  for kzalloc();
| - switched alloc_queue() and cppi41_queue_free() to using bit operations;
| - replaced 'static' linking_ram[] by local variable in cppi41_queue_mgr_init();
| - fixed pr_debug() in cppi41_dma_ctrlr_init() t

Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 06:09:24AM +0400, Sergei Shtylyov wrote:
> Hello.
>
> On 02-02-2013 4:44, Russell King - ARM Linux wrote:
>
>>>>> On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
>>>>>>> good point, do you wanna send some patches ?
>
>>>>>>  I have already sent them countless times and even stuck CPPI 4.1 
>>>>>> support (in
>>>>>> arch/arm/common/cppi41.c) in Russell's patch system. TI requested to 
>>>>>> remove the
>>>>>> patch. :-(
>
>>>>> sticking into arch/arm/common/ wasn't a nice move. But then again, so
>>>>> wasn't asking for the patch to be removed :-s
>
>>>> Err, patches don't get removed, they get moved to 'discarded'.
>
>>> Any chance to bring it back to life? :-)
>>> Although... drivers/usb/musb/cppi41.c would need to be somewhat
>>> reworked for at least AM35x and I don't have time. But that may change,
>>> of course.
>
>> Right, I've just looked back at the various meeting minutes from December
>> 2010 when the CPPI stuff was discussed.  Yes, I archive these things and
>> all email discussions for referencing in cases like this.
>
>Thanks.
>
>> Unfortunately, they do not contain any useful information other than the
>> topic having been brought up.  At that point, the CPPI stuff was in
>> mach-davinci, and I had suggested moving it into drivers/dma.
>
>I don't remember that, probably was out of the loop again.
>
>> The result of that was to say that it doesn't fit the DMA engine APIs.
>
>I remember this as a discussion happening post me sending the patch to 
> the patch system and it being discarded...
>
>> So someone came up with the idea of putting it in arch/arm/common - which
>
>Probably was me. There was also idea of putting it into 
> drivers/usb/musb/ -- which TI indeed followed in its Arago project. I 
> firmly denied that suggestion.
>
>> I frankly ignored by email (how long have we been saying "no drivers in
>> arch/arm" ?)
>
>But there *are* drivers there! And look at edma.c which is about to be 
> moved there... Anyway, I haven't seen such warnings, probably was too 
> late in the game.

I've already objected about the header moving to some random place in
arch/arm/include.  Really, edma.c needs to find another home too - but
there's a difference here.  edma.c is already present under arch/arm.
CPPI is _not_.  CPPI is new code appearing under arch/arm (you can see
that for yourself by looking at the diffstat of 6305/1... it doesn't
move files, it adds new code.)

>> Now, it would've been discussed in that meeting, but unfortunately no
>> record exists of that.  What does follow that meeting is a discussion
>> trail.  From what I can see there, it looks to me like the decision
>> was taken to move it to the DMA engine API, and work on sorting out MUSB
>> was going to commence.
>
>> The last email in that says "I'll get to that soon"... and that is also
>> the final email I have on this topic.  I guess if nothing has happened...
>> Shrug, that's someone elses problem.
>
>Well, as usual... :-(
>
>> Anyway, the answer for putting it in arch/arm/common hasn't changed,
>> and really, where we are now, post Linus having a moan about the size
>> of arch/arm, that answer is even more concrete in the negative.  It's
>> 54K of code which should not be under arch/arm at all.
>
>> Anyway, if you need to look at the patch, it's 6305/1.  Typing into the
>> summary search box 'cppi' found it in one go.
>
>Thanks, I remember this variant was under arch/arm/common/.
>Now however, I see what happened to that variant in somewhat different 
> light. Looks like it was entirely your decision to discard the patch, 
> without TI's request...

Firstly, it is *my* prerogative to say no to anything in arch/arm, and I
really don't have to give reasons for it if I choose to.

Secondly, it *was* discussed with TI, and the following thread of
discussion (threaded to the minutes email) shows that *something* was
going to happen _as a result of that meeting_ to address the problem of
it being under arch/arm.  And *therefore* it was discarded from the patch
system - because there was expectation that it was going to get fixed.

For christ sake, someone even agreed to do it.  Even a target was mentioned,
of 2.6.39.  That was mentioned on 7th December 2010.  And 6305/1 was
discarded on 8th December 2010.  Cause and effect.

And yes, *you* were n

Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-01 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 04:07:59AM +0400, Sergei Shtylyov wrote:
> Hello.
>
> On 02-02-2013 1:30, Russell King - ARM Linux wrote:
>
>>> On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
>>>>> good point, do you wanna send some patches ?
>
>>>> I have already sent them countless times and even stuck CPPI 4.1 
>>>> support (in
>>>> arch/arm/common/cppi41.c) in Russell's patch system. TI requested to 
>>>> remove the
>>>> patch. :-(
>
>>> sticking into arch/arm/common/ wasn't a nice move. But then again, so
>>> wasn't asking for the patch to be removed :-s
>
>> Err, patches don't get removed, they get moved to 'discarded'.
>
>Any chance to bring it back to life? :-)
>Although... drivers/usb/musb/cppi41.c would need to be somewhat 
> reworked for at least AM35x and I don't have time. But that may change, 
> of course.

Right, I've just looked back at the various meeting minutes from December
2010 when the CPPI stuff was discussed.  Yes, I archive these things and
all email discussions for referencing in cases like this.

Unfortunately, they do not contain any useful information other than the
topic having been brought up.  At that point, the CPPI stuff was in
mach-davinci, and I had suggested moving it into drivers/dma.

The result of that was to say that it doesn't fit the DMA engine APIs.
So someone came up with the idea of putting it in arch/arm/common - which
I frankly ignored by email (how long have we been saying "no drivers in
arch/arm" ?)

Now, it would've been discussed in that meeting, but unfortunately no
record exists of that.  What does follow that meeting is a discussion
trail.  From what I can see there, it looks to me like the decision
was taken to move it to the DMA engine API, and work on sorting out MUSB
was going to commence.

The last email in that says "I'll get to that soon"... and that is also
the final email I have on this topic.  I guess if nothing has happened...
Shrug, that's someone else's problem.

Anyway, the answer for putting it in arch/arm/common hasn't changed,
and really, where we are now, post Linus having a moan about the size
of arch/arm, that answer is even more concrete in the negative.  It's
54K of code which should not be under arch/arm at all.

Anyway, if you need to look at the patch, it's 6305/1.  Typing into the
summary search box 'cppi' found it in one go.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-01 Thread Russell King - ARM Linux
On Fri, Feb 01, 2013 at 10:56:00PM +0200, Felipe Balbi wrote:
> hi,
> 
> On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
> > > good point, do you wanna send some patches ?
> > 
> >I have already sent them countless times and even stuck CPPI 4.1 support 
> > (in
> > arch/arm/common/cppi41.c) in Russell's patch system. TI requested to remove 
> > the
> > patch. :-(
> 
> sticking into arch/arm/common/ wasn't a nice move. But then again, so
> wasn't asking for the patch to be removed :-s

Err, patches don't get removed, they get moved to 'discarded'.

> > > I guess to make the MUSB side simpler we would need musb-dma-engine glue
> > > to map dmaengine to the private MUSB API. Then we would have some
> > > starting point to also move inventra (and anybody else) to dmaengine
> > > API.
> > 
> >Why? Inventra is a dedicated device's private DMA controller, why make
> > universal DMA driver for it?
> 
> because it doesn't make sense to support multiple DMA APIs. We can check
> from MUSB's registers if it was configured with Inventra DMA support and
> based on that we can register MUSB's own DMA Engine to dmaengine API.

Hang on.  This is one of the DMA implementations which is closely
coupled with the USB and only the USB?  If it is...

I thought this had been discussed _extensively_ before.  I thought the
resolution on it was:
1. It would not use the DMA engine API.
2. It would not live in arch/arm.
3. It would be placed nearby the USB driver it's associated with.

(1) because we don't use APIs just for the hell of it - think.  Do we
use the DMA engine API for PCI bus mastering ethernet controllers?  No.
Do we use it for PCI bus mastering SCSI controllers?  No.  Because the
DMA is integral to the rest of the device.

The DMA engine API only makes sense if the DMA engine is a shared
system resource.
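
For reference, the "shared system resource" model argued for above is the slave
dmaengine client pattern: a driver requests a channel from a system-wide DMA
controller and submits descriptors to it. The following is a rough, hypothetical
client-side sketch of that pattern, not code from any driver in this thread; the
device name, channel name "rx", and FIFO address are invented for illustration.

```c
/*
 * Hedged sketch of a slave dmaengine client.  Assumes a DT node with
 * "dmas"/"dma-names" properties naming an "rx" channel; the FIFO
 * address below is a placeholder, not a real peripheral.
 */
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/completion.h>

static void my_dma_done(void *param)
{
	/* transfer finished; wake up the waiter */
	complete(param);
}

static int my_start_rx(struct device *dev, dma_addr_t buf, size_t len,
		       struct completion *done)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_DEV_TO_MEM,
		.src_addr	= 0x48000000,	/* assumed peripheral FIFO */
		.src_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.src_maxburst	= 16,
	};
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;

	/* channel is looked up via the device tree, not hard-coded */
	chan = dma_request_slave_channel(dev, "rx");
	if (!chan)
		return -ENODEV;

	dmaengine_slave_config(chan, &cfg);

	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_DEV_TO_MEM,
					   DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	desc->callback = my_dma_done;
	desc->callback_param = done;
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}
```

The contrast with MUSB's Inventra controller is exactly the one made above: a
DMA block reachable only through one device's register window gains nothing from
this channel-request indirection.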
