Re: [PATCH] brcmnand: Fix up the flash cache register offset for older controllers
On Wed, Jul 05, 2017 at 11:15:01AM -0700, Florian Fainelli wrote:
> On 07/05/2017 10:46 AM, Karl Beldan wrote:
> > From: Karl Beldan
> >
> > Tested on BCM{63138,6838,63268} and cross checked with the various
> > *_map_part.h which the brcmnand_regs_v* in brcmnand.c have historically
> > been derived from.
>
> BCM63138 is using a 7.0 controller, 6838 uses a 5.0 controller, but has
> a separate flash cache register which does indeed end up at 0x400 bytes
> off the main FLASH block, and finally 63268 does have a v4.0 controller
> and the flash cache is also in a separate register that makes it end up
> at 0x400.
>
> Your change, as proposed would break chips like 7425 which use 5.0
> controller with the flash cache at 0x200 bytes.
>
> The binding describes an optional flash-cache register cell that you can
> specify, so that's probably what you want to do here?
>

I wasn't aware the flash cache offset was variable but the flash-cache
binding you are pointing to makes it obvious ;).

Karl
[PATCH] brcmnand: Fix up the flash cache register offset for older controllers
From: Karl Beldan

Tested on BCM{63138,6838,63268} and cross checked with the various
*_map_part.h which the brcmnand_regs_v* in brcmnand.c have historically
been derived from.

Cc: Brian Norris
Cc: Kamal Dasu
Cc: Boris Brezillon
Cc: Richard Weinberger
Cc: David Woodhouse
Cc: Marek Vasut
Cc: Cyrille Pitchen
Signed-off-by: Karl Beldan
---
 drivers/mtd/nand/brcmnand/brcmnand.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/mtd/nand/brcmnand/brcmnand.c b/drivers/mtd/nand/brcmnand/brcmnand.c
index 7419c5c..e6371ff6 100644
--- a/drivers/mtd/nand/brcmnand/brcmnand.c
+++ b/drivers/mtd/nand/brcmnand/brcmnand.c
@@ -250,7 +250,7 @@ static const u16 brcmnand_regs_v40[] = {
 	[BRCMNAND_OOB_READ_10_BASE]	= 0x130,
 	[BRCMNAND_OOB_WRITE_BASE]	= 0x30,
 	[BRCMNAND_OOB_WRITE_10_BASE]	= 0,
-	[BRCMNAND_FC_BASE]		= 0x200,
+	[BRCMNAND_FC_BASE]		= 0x400,
 };
 
 /* BRCMNAND v5.0 */
@@ -280,7 +280,7 @@ static const u16 brcmnand_regs_v50[] = {
 	[BRCMNAND_OOB_READ_10_BASE]	= 0x130,
 	[BRCMNAND_OOB_WRITE_BASE]	= 0x30,
 	[BRCMNAND_OOB_WRITE_10_BASE]	= 0x140,
-	[BRCMNAND_FC_BASE]		= 0x200,
+	[BRCMNAND_FC_BASE]		= 0x400,
 };
 
 /* BRCMNAND v6.0 - v7.1 */
--
2.10.1
[RESEND PATCH] MIPS: head: Reorder instructions missing a delay slot
In this sequence the 'move' is assumed to be in the delay slot of the
'beq', but head.S is in reorder mode and the former gets pushed one
'nop' farther by the assembler. The corrected behavior made booting with
a UHI-supplied dtb erratic.

Fixes: 15f37e158892 ("MIPS: store the appended dtb address in a variable")
Cc:
Cc: Ralf Baechle
Cc: Jonas Gorski
Signed-off-by: Karl Beldan
---
 arch/mips/kernel/head.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
index cf05220..d1bb506 100644
--- a/arch/mips/kernel/head.S
+++ b/arch/mips/kernel/head.S
@@ -106,8 +106,8 @@ NESTED(kernel_entry, 16, sp)	# kernel entry point
 	beq	t0, t1, dtb_found
 #endif
 	li	t1, -2
-	beq	a0, t1, dtb_found
 	move	t2, a1
+	beq	a0, t1, dtb_found
 
 	li	t2, 0
 dtb_found:
--
2.10.1
[PATCH] MIPS: head: Reorder instructions missing a delay slot
In this sequence the 'move' is assumed to be in the delay slot of the
'beq', but head.S is in reorder mode and the former gets pushed one
'nop' farther by the assembler. The corrected behavior made booting with
a UHI-supplied dtb erratic.

Fixes: 15f37e158892 ("MIPS: store the appended dtb address in a variable")
Cc:
Cc: Ralf Baechle
Cc: Jonas Gorski
Signed-off-by: Karl Beldan
---
 arch/mips/kernel/head.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
index cf05220..d1bb506 100644
--- a/arch/mips/kernel/head.S
+++ b/arch/mips/kernel/head.S
@@ -106,8 +106,8 @@ NESTED(kernel_entry, 16, sp)	# kernel entry point
 	beq	t0, t1, dtb_found
 #endif
 	li	t1, -2
-	beq	a0, t1, dtb_found
 	move	t2, a1
+	beq	a0, t1, dtb_found
 
 	li	t2, 0
dtb_found:
--
2.10.1
Re: [PATCH v2 01/46] mtdpart: Propagate _get/put_device()
On Thu, Dec 15, 2016 at 08:51:06AM +0100, Richard Weinberger wrote:
> On 15.12.2016 08:09, Karl Beldan wrote:
> >>> I think this should also go into -stable.
> >>
> >> Why? Do you have real use cases that are broken by this? I understand
> >
> > I do, some code adding partitions on a gluebi master.
>
> What exactly are you doing?
>
> >> this is a problem, but I'm curious on how this satisfies the stable
> >> rules.
> >>
> >> Also, note that this isn't a regression; it's been broken forever and
> >> apparently no one noticed. IMO that raises the bar a bit (but not
> >> impossibly so) for -stable.
> >>
> >
> > I just encountered the bug yesterday and yes it is obvious it has been
> > broken forever.
> > I don't have strong opinion about these things so no worries.
>
> If existing stuff is broken, and you can trigger it. Please let us
> know. Then it should go into -stable.
>

I thought that's what I already did. Anyways, it didn't require much
imagination to come up with a script triggering the issue so here you
are:

#{{
modprobe nandsim
modprobe gluebi
ubiattach -p /dev/mtd1 -b 1
ubimkvol /dev/ubi0 -S 4 -N vol_0
mtdpart add /dev/mtd2 vol_0_0 0 0x1000
tail -F /dev/mtd2 &
while :; do dd bs=1 count=1 if=/dev/mtd3 >/dev/null 2>&1 || break; done
kill -9 %1
# Oops
#}}

Karl
Re: [PATCH v2 01/46] mtdpart: Propagate _get/put_device()
On Wed, Dec 14, 2016 at 9:09 PM, Brian Norris wrote: > On Wed, Dec 14, 2016 at 07:24:46PM +0000, Karl Beldan wrote: >> On Wed, Sep 28, 2016 at 8:16 PM, Brian Norris >> wrote: >> > On Wed, Sep 21, 2016 at 12:15:31PM +0200, Boris Brezillon wrote: >> >> On Wed, 21 Sep 2016 11:43:56 +0200 >> >> Daniel Walter wrote: >> >> >> >> > From: Richard Weinberger >> >> > >> >> > If the master device has callbacks for _get/put_device() >> >> > and this MTD has slaves a get_mtd_device() call on paritions >> >> > will never issue the registered callbacks. >> >> > Fix this by propagating _get/put_device() down. >> >> >> >> Brian, can we have this one queued for 4.9? I can't take it in my tree >> >> if you want, but it's probably better if it's in the mtd tree. >> > >> > Applied this patch to l2-mtd.git >> > >> >> I think this should also go into -stable. > > Why? Do you have real use cases that are broken by this? I understand I do, some code adding partitions on a gluebi master. > this is a problem, but I'm curious on how this satisfies the stable > rules. > > Also, note that this isn't a regression; it's been broken forever and > apparently no one noticed. IMO that raises the bar a bit (but not > impossibly so) for -stable. > I just encountered the bug yesterday and yes it is obvious it has been broken forever. I don't have strong opinion about these things so no worries. Karl > Anyway, if we decide to do this, you'll also want to include the git > hash and applicable kernel versions, per Option 2 [1]. > > Brian > > [1] Documentation/stable_kernel_rules.txt.
Re: [PATCH v2 01/46] mtdpart: Propagate _get/put_device()
On Wed, Sep 28, 2016 at 8:16 PM, Brian Norris wrote: > On Wed, Sep 21, 2016 at 12:15:31PM +0200, Boris Brezillon wrote: >> On Wed, 21 Sep 2016 11:43:56 +0200 >> Daniel Walter wrote: >> >> > From: Richard Weinberger >> > >> > If the master device has callbacks for _get/put_device() >> > and this MTD has slaves a get_mtd_device() call on paritions >> > will never issue the registered callbacks. >> > Fix this by propagating _get/put_device() down. >> >> Brian, can we have this one queued for 4.9? I can't take it in my tree >> if you want, but it's probably better if it's in the mtd tree. > > Applied this patch to l2-mtd.git > I think this should also go into -stable.
Re: [PATCH v5 2/2] drm: tilcdc: clear the sync lost bit in crtc isr
Hi,

On Mon, Oct 31, 2016 at 2:19 PM, Bartosz Golaszewski wrote:
> The frame synchronization error happens when the DMA engine attempts
> to read what it believes to be the first word of the video buffer but
> it cannot be recognized as such or when the LCDC is starved of data
> due to insufficient bandwidth of the system interconnect.
>

This also happens when the framebuffer boundaries are "incorrect", e.g.
when flipping while the crtc is enabled. The driver doesn't handle it
yet, so it is worked around by toggling LCDC_RASTER_ENABLE as you do in
this change. This is worth noting because it is how modetest, for
example, can "seem" to work with v1 when handling SYNC_LOST, and is
partly why the "vsynced page flipping" of modetest won't work either
with v1 ATM. I haven't looked at the different trees since early
September but my guess would be that nobody has reworked this yet.

> On some SoCs (notably: da850) the memory settings do not meet the
> LCDC throughput requirements even after increasing the memory
> controller command re-ordering and the LDCD master peripheral
> priority, sometimes causing the sync lost error (typically when
> changing the resolution).
>
> When the sync lost error occurs, simply reset the input FIFO in the
> DMA controller unless a sync lost flood is detected in which case
> disable the interrupt. 
> > Signed-off-by: Bartosz Golaszewski > --- > drivers/gpu/drm/tilcdc/tilcdc_crtc.c | 50 > ++-- > drivers/gpu/drm/tilcdc/tilcdc_regs.h | 1 + > 2 files changed, 37 insertions(+), 14 deletions(-) > > diff --git a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c > b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c > index 937697d..c4c6323 100644 > --- a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c > +++ b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c > @@ -163,7 +163,7 @@ static void tilcdc_crtc_enable_irqs(struct drm_device > *dev) > > if (priv->rev == 1) { > tilcdc_set(dev, LCDC_RASTER_CTRL_REG, > - LCDC_V1_UNDERFLOW_INT_ENA); > + LCDC_V1_UNDERFLOW_INT_ENA | LCDC_V1_SYNC_LOST_ENA); > tilcdc_set(dev, LCDC_DMA_CTRL_REG, > LCDC_V1_END_OF_FRAME_INT_ENA); > } else { > @@ -181,7 +181,9 @@ static void tilcdc_crtc_disable_irqs(struct drm_device > *dev) > /* disable irqs that we might have enabled: */ > if (priv->rev == 1) { > tilcdc_clear(dev, LCDC_RASTER_CTRL_REG, > - LCDC_V1_UNDERFLOW_INT_ENA | LCDC_V1_PL_INT_ENA); > +LCDC_V1_UNDERFLOW_INT_ENA | > +LCDC_V1_PL_INT_ENA | > +LCDC_V1_SYNC_LOST_ENA); > tilcdc_clear(dev, LCDC_DMA_CTRL_REG, > LCDC_V1_END_OF_FRAME_INT_ENA); > } else { > @@ -885,24 +887,44 @@ irqreturn_t tilcdc_crtc_irq(struct drm_crtc *crtc) > wake_up(&tilcdc_crtc->frame_done_wq); > } > > - if (stat & LCDC_SYNC_LOST) { > - dev_err_ratelimited(dev->dev, "%s(0x%08x): Sync lost", > - __func__, stat); > - tilcdc_crtc->frame_intact = false; > - if (tilcdc_crtc->sync_lost_count++ > > - SYNC_LOST_COUNT_LIMIT) { > - dev_err(dev->dev, "%s(0x%08x): Sync lost > flood detected, disabling the interrupt", __func__, stat); > - tilcdc_write(dev, LCDC_INT_ENABLE_CLR_REG, > -LCDC_SYNC_LOST); > - } > - } > - > /* Indicate to LCDC that the interrupt service routine has > * completed, see 13.3.6.1.6 in AM335x TRM. 
> */ > tilcdc_write(dev, LCDC_END_OF_INT_IND_REG, 0); > } > > + if (stat & LCDC_SYNC_LOST) { > + dev_err_ratelimited(dev->dev, "%s(0x%08x): Sync lost", > + __func__, stat); > + > + tilcdc_crtc->frame_intact = false; > + if (tilcdc_crtc->sync_lost_count++ > SYNC_LOST_COUNT_LIMIT) { > + dev_err(dev->dev, > + "%s(0x%08x): Sync lost flood detected, > disabling the interrupt", > + __func__, stat); > + if (priv->rev == 2) > + tilcdc_write(dev, LCDC_INT_ENABLE_CLR_REG, > +LCDC_SYNC_LOST); > + else if (priv->rev == 1) > + tilcdc_clear(dev, LCDC_RASTER_CTRL_REG, > +LCDC_V1_SYNC_LOST_ENA); > + } > + > + if (priv->rev == 2) { > + /* > +* Indicate to LCDC that the interrupt service routine
Re: [PATCH 0/2] ARM: davinci: initial infrastructure for LCDC
On Wed, Oct 05, 2016 at 03:05:30PM +0200, Bartosz Golaszewski wrote: > After discussing the matter with Laurent Pinchart it turned out that > using ti,tilcdc,panel was wrong and we should go with the new > simple-vga-dac driver proposed by Maxime Ripard and currently being > reviewed. > > The da850-lcdk board on which I'm working has a THS8135 video DAC for > which the new driver seems to be best suited and we'll be able to > query the connected display for supported modes instead of hardcoding > them in the dt as is needed for the panel driver. > I meant to point to these new changes but then it slipped my mind, it is clearly the way to go. Regards, Karl > In the meantime I'm posting two patches based on Karl Beldan's > previous work that can already be merged. > > The first one adds OF_DEV_AUXDATA entry to da8xx-dt.c. I changed the > compatible string to the new one we're introducing in the tilcdc > driver. > > The second adds the lcd pins and the display node to da850.dtsi. As > suggested by Sekhar: I moved the pins node, which was previously in > da850-lcdk.dts, to da850.dtsi. I also squashed Karl's two patches and > removed the panel node. > > Tested on a da850-lcdk with an LCD display connected over VGA with > two patches already posted to the drm mailing list: > > drm: tilcdc: add a da850-specific compatible string > drm: tilcdc: add a workaround for failed clk_set_rate() > > and some additional work-in-progress/hacks on top of that. > > Karl Beldan (2): > ARM: davinci: da8xx-dt: add OF_DEV_AUXDATA entry for lcdc > ARM: dts: da850: add a node for the LCD controller > > arch/arm/boot/dts/da850.dtsi | 29 + > arch/arm/mach-davinci/da8xx-dt.c | 1 + > 2 files changed, 30 insertions(+) > > -- > 2.9.3 >
ERROR: "ftrace_graph_ret_addr" [arch/x86/oprofile/oprofile.ko] undefined!
Hi, This has already been reported by Fengguang Wu's kbuild test robot: https://lists.01.org/pipermail/kbuild-all/2016-July/022203.html but it seems to have slipped through. Karl
Re: [PATCH 3/6] ARM: dts: da850-lcdk: enable the LCD controller
On Fri, Sep 30, 2016 at 11:42:14AM +0200, Bartosz Golaszewski wrote:
> 2016-09-29 20:40 GMT+02:00 Karl Beldan :
> > Hi,
> >
> > On Thu, Sep 29, 2016 at 06:31:52PM +0200, Bartosz Golaszewski wrote:
> >> From: Karl Beldan
> >>
> >> This adds the pins used by the LCD controller, and uses 'tilcdc,panel'
> >> with some default timings for 800x600.
> >>
> >> Tested on an LCDK connected on the VGA port (the LCDC is connected to
> >> this port via a THS8135).
> >>
> >> Signed-off-by: Karl Beldan
> >> [Bartosz:
> >> - fixed whitespace errors
> >> - tweaked the description
> >
> > The description tweak you mention is the removal of an erratum which is
> > in the mentioned commit I put on github @
> > (https://github.com/kbeldan/linux/commit/b7720bc983c00a083dece119f68ea9d2f522c6c4)
> > it included an erratum wrt FIFO threshold I think is worth keeping:
> > {
> > There is an erratum (fifo-th) "LCDC: Underflow During Initialization":
> > [...]
> > "This problem may occur if the LCDC FIFO threshold size (
> > LCDDMA_CTRL[TH_FIFO_READY]) is left at its default value after reset.
> > Increasing the FIFO threshold size will reduce or eliminate underflows.
> > Setting the threshold size to 256 double words or larger is
> > recommended."
> > }
>
> Isn't this the issue that is fixed by changing the memory priority for lcdc?
>

It is possible that the erratum and the memory priority settings try to
address the symptoms of the same underlying issue; it is impossible to
state with the publicly available information. However, the erratum
relates to the LCDC register settings, namely the fifo-th property of
panel-info in the dts, which is really different from the memory
priority adjustments in the SYSCFG and DDR_CTL.

Regards,
Karl

> >
> >> - fixed the incorrect hback-porch value
> >
> > It can't be a fix, this value depends on the monitor connected.
> >
>
> Thanks, I'm new to drm. From reading the datasheet it seemed to me
> that this depends on the resolution. FWIW it seems that most LCDs are
> able to adjust to this themselves - I tested with two different
> displays and the value I introduced worked on both while the previous
> one shifted the image to the right. I'll look into that.
>
> >> - other minor tweaks]
> >
> > I didn't see any other change while diffing.
> >
>
> Dropped the refresh rate from the timings node name.
>
> Thanks,
> Bartosz
Re: [PATCH 4/6] ARM: dts: da850-lcdk: add support for 1024x768 resolution
On Fri, Sep 30, 2016 at 11:37:57AM +0200, Bartosz Golaszewski wrote: > 2016-09-29 20:58 GMT+02:00 Karl Beldan : > > Hi, > > > > On Thu, Sep 29, 2016 at 06:31:53PM +0200, Bartosz Golaszewski wrote: > >> Add svga timings for 1024x768 resolution to the da850-lcdk > >> device tree. > >> > > > > [snip] > > > > > Why do you also call 1024x768 svga ? > > > > Thanks, should have been xga. will fix in v2. > > > I don't think the LCDK can cope with this resolution at this frequency > > (in terms of mem bandwidth), at least that's what I observed back in > > August. If confirmed I think it is worth mentioning in the log at least, > > but then I doubt adding this config would be useful. > > > > Thanks for the heads up. How would that manifest itself? This seems to > work fine for me - I'm not getting any warnings on a simple system - > maybe if I added some additional memory load it would complain. > A mere dmesg > /dev/tty0 (or repeatedly cat-ting a file to /dev/tty0) should suffice to make the issue visible and trigger FIFO underflows. Regards, Karl
Re: [PATCH 6/6] ARM: da850: adjust memory settings for tilcdc
On Fri, Sep 30, 2016 at 9:31 AM, Bartosz Golaszewski wrote: > 2016-09-29 21:07 GMT+02:00 Karl Beldan : >> Hi, >> >> On Thu, Sep 29, 2016 at 06:31:55PM +0200, Bartosz Golaszewski wrote: >>> Default memory settings of da850 do not meet the throughput/latency >>> requirements of tilcdc. This results in the image displayed being >>> incorrect and the following warning being displayed by the LCDC >>> drm driver: >>> >>> tilcdc da8xx_lcdc.0: tilcdc_crtc_irq(0x0020): FIFO underfow >>> >>> Reconfigure the LCDC priority to the highest. This is a workaround >>> for the da850-lcdk board which has the LCD controller enabled in >>> the device tree, but a long-term, system-wide fix is needed for >>> all davinci boards. >>> >>> This patch has been modified for mainline linux. It comes from a >>> downstream TI release for da850[1]. >>> >>> Original author: Vishwanathrao Badarkhe, Manish >>> >>> [1] >>> http://arago-project.org/git/projects/linux-davinci.git?p=projects/linux-davinci.git;a=commitdiff;h=b9bd39a34cc02c3ba2fc15539a2f0bc2b68d25da;hp=6f6c795faa6366a4ebc1037a0235edba6018a991 >>> >>> Signed-off-by: Bartosz Golaszewski >>> --- >> >> FWIW, the quirks could be applied conditionnally depending on the lcdc >> node presence in the DT, a bit like: >> https://github.com/kbeldan/linux/commit/cf15572ffef8e8a0d8110b3f6b29bd401d0538be >> https://github.com/kbeldan/linux/commit/07e4fff9958bc1625a96791dce284c163fbe9c43 >> >> >> Regards, >> Karl > > Hi Karl, > > I decided to post the simplest possible way of this to get the lcdc > working upstream. In parallel I'm working on a system-wide way of > applying such quirks not only limited to device tree nodes' presence. Ok, that'd be a good thing, apparently people have been looking to do such a thing for the am335x as well. Regards, Karl > Thanks for the info! > > Best regards, > Bartosz Golaszewski
Re: [PATCH 6/6] ARM: da850: adjust memory settings for tilcdc
Hi,

On Thu, Sep 29, 2016 at 06:31:55PM +0200, Bartosz Golaszewski wrote:
> Default memory settings of da850 do not meet the throughput/latency
> requirements of tilcdc. This results in the image displayed being
> incorrect and the following warning being displayed by the LCDC
> drm driver:
>
> tilcdc da8xx_lcdc.0: tilcdc_crtc_irq(0x0020): FIFO underfow
>
> Reconfigure the LCDC priority to the highest. This is a workaround
> for the da850-lcdk board which has the LCD controller enabled in
> the device tree, but a long-term, system-wide fix is needed for
> all davinci boards.
>
> This patch has been modified for mainline linux. It comes from a
> downstream TI release for da850[1].
>
> Original author: Vishwanathrao Badarkhe, Manish
>
> [1]
> http://arago-project.org/git/projects/linux-davinci.git?p=projects/linux-davinci.git;a=commitdiff;h=b9bd39a34cc02c3ba2fc15539a2f0bc2b68d25da;hp=6f6c795faa6366a4ebc1037a0235edba6018a991
>
> Signed-off-by: Bartosz Golaszewski
> ---

FWIW, the quirks could be applied conditionally depending on the lcdc
node presence in the DT, a bit like:
https://github.com/kbeldan/linux/commit/cf15572ffef8e8a0d8110b3f6b29bd401d0538be
https://github.com/kbeldan/linux/commit/07e4fff9958bc1625a96791dce284c163fbe9c43

Regards,
Karl

> arch/arm/mach-davinci/da8xx-dt.c | 43 ++
> arch/arm/mach-davinci/include/mach/da8xx.h | 4 +++
> 2 files changed, 47 insertions(+)
>
> diff --git a/arch/arm/mach-davinci/da8xx-dt.c
> b/arch/arm/mach-davinci/da8xx-dt.c
> index f8ecc02..9d29670 100644
> --- a/arch/arm/mach-davinci/da8xx-dt.c
> +++ b/arch/arm/mach-davinci/da8xx-dt.c
> @@ -44,9 +44,52 @@ static struct of_dev_auxdata da850_auxdata_lookup[]
> __initdata = {
>
> #ifdef CONFIG_ARCH_DAVINCI_DA850
>
> +/*
> + * Adjust the default memory settings to cope with the LCDC
> + *
> + * REVISIT: This issue occurs on other davinci boards as well. Find
> + * a proper system-wide fix. 
> + */ > +static void da850_lcdc_adjust_memory_bandwidth(void) > +{ > + void __iomem *cfg_mstpri1_base; > + void __iomem *cfg_mstpri2_base; > + void __iomem *emifb; > + u32 val; > + > + /* > + * Default master priorities in reg 0 are all lower by default than LCD > + * which is set below to 0. Hence don't need to change here. > + */ > + > + /* set EDMA30TC0 and TC1 to lower than LCDC (4 < 0) */ > + cfg_mstpri1_base = DA8XX_SYSCFG0_VIRT(DA8XX_MSTPRI1_REG); > + val = __raw_readl(cfg_mstpri1_base); > + val &= 0x00FF; > + val |= 4 << 8; /* 0-high, 7-low priority*/ > + val |= 4 << 12;/* 0-high, 7-low priority*/ > + __raw_writel(val, cfg_mstpri1_base); > + > + /* > + * Reconfigure the LCDC priority to the highest to ensure that > + * the throughput/latency requirements for the LCDC are met. > + */ > + cfg_mstpri2_base = DA8XX_SYSCFG0_VIRT(DA8XX_MSTPRI2_REG); > + > + val = __raw_readl(cfg_mstpri2_base); > + val &= 0x0fff; > + __raw_writel(val, cfg_mstpri2_base); > + > + /* set BPRIO */ > + emifb = ioremap(DA8XX_DDR_CTL_BASE, SZ_4K); > + __raw_writel(0x20, emifb + DA8XX_PBBPR_REG); > + iounmap(emifb); > +} > + > static void __init da850_init_machine(void) > { > of_platform_default_populate(NULL, da850_auxdata_lookup, NULL); > + da850_lcdc_adjust_memory_bandwidth(); > } > > static const char *const da850_boards_compat[] __initconst = { > diff --git a/arch/arm/mach-davinci/include/mach/da8xx.h > b/arch/arm/mach-davinci/include/mach/da8xx.h > index f9f9713..5549eff 100644 > --- a/arch/arm/mach-davinci/include/mach/da8xx.h > +++ b/arch/arm/mach-davinci/include/mach/da8xx.h > @@ -56,6 +56,8 @@ extern unsigned int da850_max_speed; > #define DA8XX_SYSCFG0_VIRT(x)(da8xx_syscfg0_base + (x)) > #define DA8XX_JTAG_ID_REG0x18 > #define DA8XX_HOST1CFG_REG 0x44 > +#define DA8XX_MSTPRI1_REG0x114 > +#define DA8XX_MSTPRI2_REG0x118 > #define DA8XX_CHIPSIG_REG0x174 > #define DA8XX_CFGCHIP0_REG 0x17c > #define DA8XX_CFGCHIP1_REG 0x180 > @@ -79,6 +81,8 @@ extern unsigned int da850_max_speed; > 
#define DA8XX_AEMIF_CTL_BASE 0x6800 > #define DA8XX_SHARED_RAM_BASE0x8000 > #define DA8XX_ARM_RAM_BASE 0x > +#define DA8XX_DDR_CTL_BASE 0xB000 > +#define DA8XX_PBBPR_REG 0x0020 > > void da830_init(void); > void da850_init(void); > -- > 2.7.4 > > > ___ > linux-arm-kernel mailing list > linux-arm-ker...@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Re: [PATCH 4/6] ARM: dts: da850-lcdk: add support for 1024x768 resolution
Hi, On Thu, Sep 29, 2016 at 06:31:53PM +0200, Bartosz Golaszewski wrote: > Add svga timings for 1024x768 resolution to the da850-lcdk > device tree. > > Signed-off-by: Bartosz Golaszewski > --- > arch/arm/boot/dts/da850-lcdk.dts | 15 +-- > 1 file changed, 13 insertions(+), 2 deletions(-) > > diff --git a/arch/arm/boot/dts/da850-lcdk.dts > b/arch/arm/boot/dts/da850-lcdk.dts > index 6ca5d48..6e4288c 100644 > --- a/arch/arm/boot/dts/da850-lcdk.dts > +++ b/arch/arm/boot/dts/da850-lcdk.dts > @@ -70,8 +70,8 @@ > }; > > display-timings { > - native-mode = <&svga_timings>; > - svga_timings: 800x600 { > + native-mode = <&svga_timing0>; > + svga_timing0: 800x600 { > clock-frequency = <3750>; > hactive = <800>; > hback-porch = <140>; > @@ -82,6 +82,17 @@ > vfront-porch = <1>; > vsync-len = <4>; > }; > + svga_timing1: 1024x768 { > + clock-frequency = <7200>; > + hactive = <1024>; > + hback-porch = <140>; > + hfront-porch = <40>; > + hsync-len = <128>; > + vactive = <768>; > + vback-porch = <23>; > + vfront-porch = <1>; > + vsync-len = <4>; > + }; Why do you also call 1024x768 svga ? I don't think the LCDK can cope with this resolution at this frequency (in terms of mem bandwidth), at least that's what I observed back in August. If confirmed I think it is worth mentioning in the log at least, but then I doubt adding this config would be useful. Regards, Karl > }; > }; > }; > -- > 2.7.4 > > > ___ > linux-arm-kernel mailing list > linux-arm-ker...@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Re: [PATCH 3/6] ARM: dts: da850-lcdk: enable the LCD controller
Hi, On Thu, Sep 29, 2016 at 06:31:52PM +0200, Bartosz Golaszewski wrote: > From: Karl Beldan > > This adds the pins used by the LCD controller, and uses 'tilcdc,panel' > with some default timings for 800x600. > > Tested on an LCDK connected on the VGA port (the LCDC is connected to > this port via a THS8135). > > Signed-off-by: Karl Beldan > [Bartosz: > - fixed whitespace errors > - tweaked the description The description tweak you mention is the removal of an erratum which is in the mentioned commit I put on github @ (https://github.com/kbeldan/linux/commit/b7720bc983c00a083dece119f68ea9d2f522c6c4) it included an erratum wrt FIFO threshold I think is worth keeping: { There is an erratum (fifo-th) "LCDC: Underflow During Initialization": [...] "This problem may occur if the LCDC FIFO threshold size ( LCDDMA_CTRL[TH_FIFO_READY]) is left at its default value after reset. Increasing the FIFO threshold size will reduce or eliminate underflows. Setting the threshold size to 256 double words or larger is recommended." } > - fixed the incorrect hback-porch value It can't be a fix, this value depends on the monitor connected. > - other minor tweaks] I didn't see any other change while diffing. 
Regards, Karl Beldan > Signed-off-by: Bartosz Golaszewski > --- > arch/arm/boot/dts/da850-lcdk.dts | 60 > > 1 file changed, 60 insertions(+) > > diff --git a/arch/arm/boot/dts/da850-lcdk.dts > b/arch/arm/boot/dts/da850-lcdk.dts > index 7b8ab21..6ca5d48 100644 > --- a/arch/arm/boot/dts/da850-lcdk.dts > +++ b/arch/arm/boot/dts/da850-lcdk.dts > @@ -50,6 +50,40 @@ > system-clock-frequency = <24576000>; > }; > }; > + > + panel { > + compatible = "ti,tilcdc,panel"; > + pinctrl-names = "default"; > + pinctrl-0 = <&lcd_pins>; > + status = "okay"; > + > + panel-info { > + ac-bias = <0>; > + ac-bias-intrpt= <0>; > + dma-burst-sz = <16>; > + bpp = <16>; > + fdd = <255>; > + sync-edge = <0>; > + sync-ctrl = <0>; > + raster-order = <0>; > + fifo-th = <5>; > + }; > + > + display-timings { > + native-mode = <&svga_timings>; > + svga_timings: 800x600 { > + clock-frequency = <3750>; > + hactive = <800>; > + hback-porch = <140>; > + hfront-porch = <40>; > + hsync-len = <128>; > + vactive = <600>; > + vback-porch = <23>; > + vfront-porch = <1>; > + vsync-len = <4>; > + }; > + }; > + }; > }; > > &pmx_core { > @@ -84,6 +118,28 @@ > 0x30 0x0110 0x0ff0 > >; > }; > + > + lcd_pins: pinmux_lcd_pins { > + pinctrl-single,bits = < > + /* > + * LCD_D[2], LCD_D[3], LCD_D[4], LCD_D[5], > + * LCD_D[6], LCD_D[7] > + */ > + 0x40 0x2200 0xff00 > + /* > + * LCD_D[10], LCD_D[11], LCD_D[12], LCD_D[13], > + * LCD_D[14], LCD_D[15], LCD_D[0], LCD_D[1] > + */ > + 0x44 0x 0x > + /* LCD_D[8], LCD_D[9] */ > + 0x48 0x0022 0x00ff > + > + /* LCD_PCLK */ > + 0x48 0x0200 0x0f00 > + /* LCD_AC_ENB_CS, LCD_VSYNC, LCD_HSYNC */ > + 0x4c 0x0222 0x0fff > + >; > + }; > }; > > &serial2 { > @@ -219,3 +275,7 @@ > }; > }; > }; > + > +&lcdc { > + status = "okay"; > +}; > -- > 2.7.4 > > > ___ > linux-arm-kernel mailing list > linux-arm-ker...@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Re: [PATCH 3/4] ARM: dts: da850-lcdk: Add NAND to DT
On Tue, Aug 16, 2016 at 11:20:29PM +, Karl Beldan wrote: > On Wed, Aug 10, 2016 at 05:23:23PM +0530, Sekhar Nori wrote: > > On Wednesday 10 August 2016 04:49 PM, Karl Beldan wrote: > > > On Wed, Aug 10, 2016 at 03:01:30PM +0530, Sekhar Nori wrote: > > >> On Wednesday 10 August 2016 02:34 PM, Karl Beldan wrote: > > >>> On Wed, Aug 10, 2016 at 02:01:57PM +0530, Sekhar Nori wrote: > > >>>> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > > >>>>> This adds DT support for the NAND connected to the SoC AEMIF. > > >>>>> The parameters (timings, ecc) are the same as what the board ships > > >>>>> with > > >>>>> (default AEMIF timings, 1bit ECC) and improvements will be handled in > > >>>>> due course. > > >>>> > > >>>> I disagree that we need to be compatible to the software that ships > > >>>> with > > >>>> the board. Thats software was last updated 3 years ago. Instead I would > > >>>> concern with what the hardware supports. So, if the hardware can > > >>>> support > > >>>> 4-bit ECC, I would use that. > > >>>> > > >>> I am not saying we _need_ to be compatible. > > >> > > >> Alright then, please drop references to what software the board ships > > >> with in the commit message and in the patch itself. > > >> > > > > > > I hadn't seen this comment before sending v2. > > > > > >>> > > >>>> If driver is broken for 4-bit ECC, please fix that up first. > > >>>> > > >>> Since this issue is completely separate from my DT improvements > > >>> I'll stick to resubmitting the series, applying my LCDK changes to the > > >>> EVM too, besides you'll be able to compare the behavior without ECC > > >>> discrepancies. > > >>> I took note that you are likely to not apply without the ECC fix. > > >> > > >> Yeah, I would not like to apply with 1-bit ECC now and then change to > > >> 4-bit ECC soon after. 
> > >> > > > > > > Both mityomapl138 from mainline and hawkboard from TI's BSP release > > > include the comment: > > > - "4 bit mode is not supported with 16 bit NAND" > > > It is not clear whether they imply that the HW has issues or if it's SW > > > > At least the TRM says "The EMIFA supports 4-bit ECC on 8-bit/16-bit NAND > > Flash.". And then the TRM goes on to describe how 4-bit ECC is supposed > > to work for 16-bit NAND. I could not find any errata related to this in > > the errata document. > > > > > only, but 4-bits ECC is a different matter and I hope you'll integrate > > > the current changes prior to tackling it. > > > > Lets apply everything together. This is a new feature, not a bug fix. > > There is no need to rush it. Also, for the NAND part on LCDK, per the > > micron datasheet, minimum of 4-bit ECC is needed. > > > > FYI my patches are related to a small contracting work spanning 20 days > amounting to 15 real days (from scratch). ATM my SoW not only doesn't > include this 'new feature' but excludes it. So I cannot promise I'll > look into it, though it shouldn't be too big a thing. I took a look at it yesterday, cf. https://lkml.org/lkml/2016/8/29/71 Karl > It seemed clear that you wanted to squeeze it with the rest, but after > setting things up for the EVM and observing mainline was broken with > 4bits ECC on an 8bits NAND I thought it was impossible nobody hadn't > noticed it .. so I dug in the mail archive, only to find there had > already been some alerts raised some months ago on the list, so I picked > up my next item. > > > I will take a look at other patches in this series. > > > > Ok, thanks. > > Rgds, > Karl
[PATCH] mtd: nand: davinci: Reinitialize the HW ECC engine in 4bit hwctl
This fixes subpage writes when using 4-bit HW ECC.

There have been numerous reports about ECC errors with devices using
this driver for a while. Also the 4-bit ECC has been reported as broken
with subpages in [1] and with 16-bit NANDs in the driver and in mach*
board files, both in mainline and in the vendor BSPs.

What I saw with 4-bit ECC on a 16-bit NAND (on an LCDK), which got me to
try reinitializing the ECC engine:
- R/W on whole pages properly generates/checks RS code
- try writing the 1st subpage only of a blank page: the subpage is well
  written and the RS code properly generated; re-reading the same page,
  the HW detects some ECC error; reading the same page again, no ECC
  error is detected

Note that the ECC engine is already reinitialized in the 1-bit case.

Tested on my LCDK with UBI+UBIFS using subpages.

This could potentially get rid of the issue worked around in [1].

[1] 28c015a9daab ("mtd: davinci-nand: disable subpage write for keystone-nand")

Signed-off-by: Karl Beldan
---
 drivers/mtd/nand/davinci_nand.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/mtd/nand/davinci_nand.c b/drivers/mtd/nand/davinci_nand.c
index cc07ba0..27fa8b8 100644
--- a/drivers/mtd/nand/davinci_nand.c
+++ b/drivers/mtd/nand/davinci_nand.c
@@ -240,6 +240,9 @@ static void nand_davinci_hwctl_4bit(struct mtd_info *mtd, int mode)
 	unsigned long flags;
 	u32 val;
 
+	/* Reset ECC hardware */
+	davinci_nand_readl(info, NAND_4BIT_ECC1_OFFSET);
+
 	spin_lock_irqsave(&davinci_nand_lock, flags);
 
 	/* Start 4-bit ECC calculation for read/write */
--
2.9.2
Re: [PATCH v2 4/4] ARM: dts: da850-lcdk: Add NAND to DT
On Thu, Aug 18, 2016 at 03:27:52PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 04:30 PM, Karl Beldan wrote: > > This adds DT support for the NAND connected to the SoC AEMIF. > > The parameters (timings, ecc) are the same as what the board ships with > > (default AEMIF timings, 1bit ECC) and improvements will be handled in > > due course. > > This passed elementary tests hashing a 20MB file on top of ubifs on my > > LCDK. > > > > Signed-off-by: Karl Beldan > > --- > > arch/arm/boot/dts/da850-lcdk.dts | 107 > > +++ > > 1 file changed, 107 insertions(+) > > > > diff --git a/arch/arm/boot/dts/da850-lcdk.dts > > b/arch/arm/boot/dts/da850-lcdk.dts > > index dbcca0b..a3f9845 100644 > > --- a/arch/arm/boot/dts/da850-lcdk.dts > > +++ b/arch/arm/boot/dts/da850-lcdk.dts > > @@ -27,6 +27,27 @@ > > > +&aemif { > > + pinctrl-names = "default"; > > + pinctrl-0 = <&nand_pins>; > > + status = "ok"; > > + cs3 { > > + #address-cells = <2>; > > + #size-cells = <1>; > > + clock-ranges; > > + ranges; > > + > > + ti,cs-chipselect = <3>; > > + > > + nand@200,0 { > > + compatible = "ti,davinci-nand"; > > + #address-cells = <1>; > > + #size-cells = <1>; > > + reg = <0 0x0200 0x0200 > > + 1 0x 0x8000>; > > + > > + ti,davinci-chipselect = <1>; > > + ti,davinci-mask-ale = <0>; > > + ti,davinci-mask-cle = <0>; > > + ti,davinci-mask-chipsel = <0>; > > + > > + /* > > +* nand_ecc_strength_good will emit a warning > > +* but the LCDK ships with these settings [1]. > > +* Also HW 4bits ECC with 16bits NAND seems to > > +* require some attention. > > +* > > +* ATM nand_davinci_probe handling of nand-ecc-* > > +* is broken, e.g. 
> > +* chip.ecc.strength = pdata->ecc_bits occurs after > > +* scan_ident(), otherwise I would have used: > > +* nand-ecc-mode = "hw"; > > +* nand-ecc-strength = <1>; > > +* nand-ecc-step-size = <512>; > > +*/ > > + ti,davinci-nand-buswidth = <16>; > > + ti,davinci-ecc-mode = "hw"; > > + ti,davinci-ecc-bits = <1>; > > + ti,davinci-nand-use-bbt; > > As discussed before, will apply this patch after 4-bit ECC support is > sorted out. > Ok. BTW it seems I might have the opportunity to look into this. > > + > > + /* > > +* LCDK original partitions: > > +* 0x-0x0002 : "u-boot env" > > +* 0x0002-0x000a : "u-boot" > > +* 0x000a-0x002a : "kernel" > > +* 0x002a-0x2000 : "filesystem" > > +* > > +* The 1st NAND block being guaranted to be valid w/o > > ECC (> 1k cycles), > > +* it makes a perfect candidate as an SPL for the > > BootROM to jump to. > > +* However the OMAP-L132/L138 Bootloader doc SPRAB41E > > reads: > > +* "To boot from NAND Flash, the AIS should be written > > to NAND block 1 > > +* (NAND block 0 is not used by default)", which > > matches the LCDK > > +* original partitioning. > > FWIW, silicon version 2.1 supports booting from block 0, but needs new > boot pin settings not possible in stock LCDK. > That's a convenient improvement. > > +* Also, the LCDK ships with only the u-boot partition > > provisioned and > > +* boots on it in its default configuration while using > > the MMC for the > > +* kernel and rootfs, so preserve that one as is for > > now. > > +* [1]: Ensur
Re: [PATCH 2/2] ARM: davinci_all_defconfig: enable modules for sound on LCDK
On Wed, Aug 17, 2016 at 04:08:03PM +0530, Sekhar Nori wrote: > On Wednesday 17 August 2016 03:43 PM, Karl Beldan wrote: > > On Wed, Aug 17, 2016 at 02:20:51PM +0530, Sekhar Nori wrote: > >> On Thursday 11 August 2016 01:38 AM, Karl Beldan wrote: > >>> This is the minimal set of additional modules required to support audio > >>> on the LCDK. > >>> > >>> Signed-off-by: Karl Beldan > >> > >> This patch does not apply because Kevin already added some missing audio > >> modules through commit "ARM: davinci_all_defconfig: enable DA850 audio > >> as modules" > >> > >> Can you please rebase on master branch of my tree[1] and resend? > >> > > > > I checked out your tree. > > With Kevin's patch the needed modules get selected without needing 2/2. > > > > However I don't know why he used: > > CONFIG_SND_DA850_SOC_EVM=m > > instead of: > > CONFIG_SND_DAVINCI_SOC_MCASP=m > > CONFIG_SND_SOC_TLV320AIC3X=m > > The former seems to me to be pre-DT style config. I used the latter in > > 2/2 and thought that's what we'd start switching to ? Unless it is for > > pre-DT compatibility ? > > I don't think this is related to DT. As you said, > CONFIG_SND_DA850_SOC_EVM selects CONFIG_SND_DAVINCI_SOC_MCASP and > CONFIG_SND_SOC_TLV320AIC3X anyway. > > To be honest, I am okay either way. Copying Peter to see if he has any > preference. > Ok, I dropped 2/2 in v2 which will rely on Kevin's change. Rgds, Karl
Re: [PATCH 1/2] ARM: dts: da850-lcdk: Audio support via simple-card
On Wed, Aug 17, 2016 at 03:24:42PM +0530, Sekhar Nori wrote: > On Thursday 11 August 2016 01:38 AM, Karl Beldan wrote: > > The LCDK embeds a TLV320AIC3106 connected to the SoC McASP for analog > > audio. This relies on the 'dummy' regulator as the codec is always on. > > Quality is good with arecord -pipe- aplay on Line In/Line Out. > > The MIC path doesn't seem to work yet. > > I tested the patch and I could not get the MIC to work as well. I > suggest dropping the MIC path parts till that is debugged and root caused. > Ok. > > Signed-off-by: Karl Beldan > > --- > > arch/arm/boot/dts/da850-lcdk.dts | 75 > > > > 1 file changed, 75 insertions(+) > > > +&i2c0 { > > + pinctrl-names = "default"; > > + pinctrl-0 = <&i2c0_pins>; > > + clock-frequency = <10>; > > + status = "okay"; > > + > > + tlv320aic3106: tlv320aic3106@18 { > > + #sound-dai-cells = <0>; > > + compatible = "ti,tlv320aic3106"; > > + reg = <0x18>; > > + status = "okay"; > > + > > + ai3x-micbias-vg = ; > > It will be nice to add the codec regulators as well (and that in turn > needs the PMIC to be populated). Although I wont insist on that since > the regulators are optional properties. > In the patch comment I stated it relied on the dummy regulator as the codec is always on but maybe I should have been more precise: it is not out of convenience to keep things on, it is by HW design: I mentioned it in the comment of "ARM: dts: da850: Add basic DTS for the LCDK": "the main PMIC, different and hard-wired on the LCDK (the LDOs and DCDCs are always on)" after reviewing the schematics. I could add that in the patch comment. Thanks for reviewing. Rgds, Karl
Re: [PATCH 2/2] ARM: davinci_all_defconfig: enable modules for sound on LCDK
On Wed, Aug 17, 2016 at 02:20:51PM +0530, Sekhar Nori wrote: > On Thursday 11 August 2016 01:38 AM, Karl Beldan wrote: > > This is the minimal set of additional modules required to support audio > > on the LCDK. > > > > Signed-off-by: Karl Beldan > > This patch does not apply because Kevin already added some missing audio > modules through commit "ARM: davinci_all_defconfig: enable DA850 audio > as modules" > > Can you please rebase on master branch of my tree[1] and resend? > I checked out your tree. With Kevin's patch the needed modules get selected without needing 2/2. However I don't know why he used: CONFIG_SND_DA850_SOC_EVM=m instead of: CONFIG_SND_DAVINCI_SOC_MCASP=m CONFIG_SND_SOC_TLV320AIC3X=m The former seems to me to be pre-DT style config. I used the latter in 2/2 and thought that's what we'd start switching to ? Unless it is for pre-DT compatibility ? Karl
Re: [PATCH 3/4] ARM: dts: da850-lcdk: Add NAND to DT
On Wed, Aug 10, 2016 at 05:23:23PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 04:49 PM, Karl Beldan wrote: > > On Wed, Aug 10, 2016 at 03:01:30PM +0530, Sekhar Nori wrote: > >> On Wednesday 10 August 2016 02:34 PM, Karl Beldan wrote: > >>> On Wed, Aug 10, 2016 at 02:01:57PM +0530, Sekhar Nori wrote: > >>>> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > >>>>> This adds DT support for the NAND connected to the SoC AEMIF. > >>>>> The parameters (timings, ecc) are the same as what the board ships with > >>>>> (default AEMIF timings, 1bit ECC) and improvements will be handled in > >>>>> due course. > >>>> > >>>> I disagree that we need to be compatible to the software that ships with > >>>> the board. Thats software was last updated 3 years ago. Instead I would > >>>> concern with what the hardware supports. So, if the hardware can support > >>>> 4-bit ECC, I would use that. > >>>> > >>> I am not saying we _need_ to be compatible. > >> > >> Alright then, please drop references to what software the board ships > >> with in the commit message and in the patch itself. > >> > > > > I hadn't seen this comment before sending v2. > > > >>> > >>>> If driver is broken for 4-bit ECC, please fix that up first. > >>>> > >>> Since this issue is completely separate from my DT improvements > >>> I'll stick to resubmitting the series, applying my LCDK changes to the > >>> EVM too, besides you'll be able to compare the behavior without ECC > >>> discrepancies. > >>> I took note that you are likely to not apply without the ECC fix. > >> > >> Yeah, I would not like to apply with 1-bit ECC now and then change to > >> 4-bit ECC soon after. > >> > > > > Both mityomapl138 from mainline and hawkboard from TI's BSP release > > include the comment: > > - "4 bit mode is not supported with 16 bit NAND" > > It is not clear whether they imply that the HW has issues or if it's SW > > At least the TRM says "The EMIFA supports 4-bit ECC on 8-bit/16-bit NAND > Flash.". 
And then the TRM goes > to work for 16-bit NAND. I could not find any errata related to this in > the errata document. > > > only, but 4-bits ECC is a different matter and I hope you'll integrate > > the current changes prior to tackling it. > > Lets apply everything together. This is a new feature, not a bug fix. > There is no need to rush it. Also, for the NAND part on LCDK, per the > micron datasheet, minimum of 4-bit ECC is needed. > FYI my patches are related to a small contracting job spanning 20 days, amounting to 15 real days (from scratch). ATM my SoW not only doesn't include this 'new feature' but excludes it. So I cannot promise I'll look into it, though it shouldn't be too big a thing. It seemed clear that you wanted to squeeze it in with the rest, but after setting things up for the EVM and observing that mainline was broken with 4-bit ECC on an 8-bit NAND, I thought it impossible that nobody had noticed it... so I dug into the mail archive, only to find some alerts had already been raised on the list some months ago, so I picked up my next item. > I will take a look at other patches in this series. > Ok, thanks. Rgds, Karl
Re: [PATCH 2/4] ARM: dts: da850: Add an aemif node
On Wed, Aug 10, 2016 at 02:04:48PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 02:04 PM, Karl Beldan wrote: > > On Wed, Aug 10, 2016 at 01:59:26PM +0530, Sekhar Nori wrote: > >> On Wednesday 10 August 2016 01:56 PM, Karl Beldan wrote: > >>> On Wed, Aug 10, 2016 at 01:42:01PM +0530, Sekhar Nori wrote: > >>>> On Wednesday 10 August 2016 01:37 PM, Karl Beldan wrote: > >>>>> On Wed, Aug 10, 2016 at 01:32:03PM +0530, Sekhar Nori wrote: > >>>>>> On Wednesday 10 August 2016 01:18 PM, Sekhar Nori wrote: > >>>>>>> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > >>>>>>>> Currently the davinci da8xx boards use the mach-davinci aemif code. > >>>>>>>> Instantiating an aemif node into the DT allows to use the ti-aemif > >>>>>>>> memory driver and is another step to better DT support. > >>>>>>>> Also it will allow to properly pass the emif timings via DT. > >>>>>>>> > >>>>>>>> Signed-off-by: Karl Beldan > >>>>>>>> --- > >>>>>>>> arch/arm/boot/dts/da850.dtsi | 10 ++ > >>>>>>>> 1 file changed, 10 insertions(+) > >>>>>>>> > >>>>>>>> diff --git a/arch/arm/boot/dts/da850.dtsi > >>>>>>>> b/arch/arm/boot/dts/da850.dtsi > >>>>>>>> index bc10e7e..f62928c 100644 > >>>>>>>> --- a/arch/arm/boot/dts/da850.dtsi > >>>>>>>> +++ b/arch/arm/boot/dts/da850.dtsi > >>>>>>>> @@ -411,6 +411,16 @@ > >>>>>>>> dma-names = "tx", "rx"; > >>>>>>>> }; > >>>>>>>> }; > >>>>>>>> +aemif: aemif@6800 { > >>>>>>>> +compatible = "ti,da850-aemif"; > >>>>>>>> +#address-cells = <2>; > >>>>>>>> +#size-cells = <1>; > >>>>>>>> + > >>>>>>>> +reg = <0x6800 0x8000>; > >>>>>>>> +ranges = <0 0 0x6000 0x0800 > >>>>>>>> + 1 0 0x6800 0x8000>; > >>>>>>>> +status = "disabled"; > >>>>>>>> +}; > >>>>>>>> nand_cs3@6200 { > >>>>>>>> compatible = "ti,davinci-nand"; > >>>>>>>> reg = <0x6200 0x807ff > >>>>>>> > >>>>>>> The nand node should be part of aemif node like it is done for > >>>>>>> keystone > >>>>>>> boards. > >>>>>> > >>>>>> Actually, can you move the nand node out of da850.dtsi completely. 
Its > >>>>>> much better to keep da850.dtsi restricted to soc-internal devices and > >>>>>> keep the board level devices like NAND flash in .dts file. > >>>>>> > >>>>>> Similarly, can you move the NAND pinmux definitions too to the > >>>>>> da850-evm.dts file? > >>>>>> > >>>>>> There is advantage in keeping common pinmux definitions in da850.dtsi > >>>>>> so > >>>>>> each board doe not have to repeat them. But AEMIF is an exception as > >>>>>> its > >>>>>> usage can really be varied (NAND, NOR, SRAM, other). Plus, different > >>>>>> boards are likely to use different chip selects so coming up with some > >>>>>> pinmux definitions which will be reused widely is really unlikely. > >>>>>> > >>>>> This is exactly what I just did for the LCDK. > >>>>> If everybody is happy with it I will do the same for the evm as I put it > >>>>> in the cover letter. > >>>> > >>>> Yes please. We dont want duplication of data between da850.dtsi and > >>>> da850-lcdk.dts files. > >>>> > >>> Then I'll wait for this series to be applied and then apply my changes > >>> to the EVM while retiring the nand_cs3 together. > >> > >> No, I prefer the fixup happens first. In the same series, you can first > >> fixup existing EVM and then add LCDK support. > >> > > > > Well in that case you'll have to do the testing since I only have an > > LCDK. I should be able to send the series within the hour. > > Sure. I can test it. > Yesterday I got my hands on an EVM TI just sent and could test it on it. The change proper is fine, but I was surprised mainline was broken wrt 4-bit ECC on top of 8bits NANDs, so I tested with 1-bit ECC, 'enough' for this device. FYI, the NAND socket had a nand: device found, Manufacturer ID: 0x2c, Chip ID: 0xdc nand: Micron MT29F4G08AAC nand: 512 MiB, SLC, erase size: 128 KiB, page size: 2048, OOB size: 64 Karl
Re: [PATCH v2 3/4] ARM: dts: da850,da850-evm: Add an aemif node and use it for the NAND
On Wed, Aug 10, 2016 at 11:00:31AM +, Karl Beldan wrote: > Currently the davinci da8xx boards use the mach-davinci aemif code. > Instantiating an aemif node into the DT allows to use the ti-aemif > memory driver and is another step to better DT support. > This change adds an aemif node in the dtsi while retiring the nand_cs3 > node. The NAND is now instantiated in the dts as a subnode of the aemif > one along with its pins. > > Signed-off-by: Karl Beldan > --- > arch/arm/boot/dts/da850-evm.dts | 49 > - > arch/arm/boot/dts/da850.dtsi| 19 +++- > 2 files changed, 52 insertions(+), 16 deletions(-) > > diff --git a/arch/arm/boot/dts/da850-evm.dts b/arch/arm/boot/dts/da850-evm.dts > index 1a15db8..eedcc59 100644 > --- a/arch/arm/boot/dts/da850-evm.dts > +++ b/arch/arm/boot/dts/da850-evm.dts > @@ -29,6 +29,20 @@ > 0x04 0x00011000 0x000ff000 > >; > }; > + nand_pins: nand_pins { > + pinctrl-single,bits = < > + /* EMA_WAIT[0], EMA_OE, EMA_WE, > EMA_CS[4], EMA_CS[3] */ > + 0x1c 0x10110110 0xf0ff0ff0 > + /* > + * EMA_D[0], EMA_D[1], EMA_D[2], > + * EMA_D[3], EMA_D[4], EMA_D[5], > + * EMA_D[6], EMA_D[7] > + */ > + 0x24 0x 0x > + /* EMA_A[1], EMA_A[2] */ > + 0x30 0x0110 0x0ff0 > + >; > + }; > }; > serial0: serial@42000 { > status = "okay"; > @@ -131,11 +145,6 @@ > status = "okay"; > }; > }; > - nand_cs3@6200 { > - status = "okay"; > - pinctrl-names = "default"; > - pinctrl-0 = <&nand_cs3_pins>; > - }; > vbat: fixedregulator@0 { > compatible = "regulator-fixed"; > regulator-name = "vbat"; It has no functional impact, but with this change nand_cs3_pins becomes useless and I forgot to get rid of it, so I'll send a v3. Karl
[PATCH v2 1/4] ARM: davinci: da8xx-dt: Add ti-aemif lookup for clock matching
Many davinci boards (da830 and da850 families) don't have their clocks in DT yet, so without an explicit clk_lookup registration they cannot resolve an unnamed aemif clock, and the ti-aemif memory driver probe fails. The current aemif lookup entry resolving to the same clock, 'CLK(NULL, "aemif", &aemif_clk)', remains, as it is still in use (davinci_nand gets a named clock "aemif"). This change will allow switching from the mach-davinci aemif code to the ti-aemif memory driver. Signed-off-by: Karl Beldan --- arch/arm/mach-davinci/da850.c| 1 + arch/arm/mach-davinci/da8xx-dt.c | 1 + 2 files changed, 2 insertions(+) diff --git a/arch/arm/mach-davinci/da850.c b/arch/arm/mach-davinci/da850.c index 2398862..3477d30 100644 --- a/arch/arm/mach-davinci/da850.c +++ b/arch/arm/mach-davinci/da850.c @@ -485,6 +485,7 @@ static struct clk_lookup da850_clks[] = { CLK("da8xx_lcdc.0", "fck", &lcdc_clk), CLK("da830-mmc.0", NULL, &mmcsd0_clk), CLK("da830-mmc.1", NULL, &mmcsd1_clk), + CLK("ti-aemif", NULL, &aemif_clk), CLK(NULL, "aemif",&aemif_clk), CLK(NULL, "usb11",&usb11_clk), CLK(NULL, "usb20",&usb20_clk), diff --git a/arch/arm/mach-davinci/da8xx-dt.c b/arch/arm/mach-davinci/da8xx-dt.c index ca99711..c9f7e92 100644 --- a/arch/arm/mach-davinci/da8xx-dt.c +++ b/arch/arm/mach-davinci/da8xx-dt.c @@ -37,6 +37,7 @@ static struct of_dev_auxdata da850_auxdata_lookup[] __initdata = { OF_DEV_AUXDATA("ti,davinci-dm6467-emac", 0x01e2, "davinci_emac.1", NULL), OF_DEV_AUXDATA("ti,da830-mcasp-audio", 0x01d0, "davinci-mcasp.0", NULL), + OF_DEV_AUXDATA("ti,da850-aemif", 0x6800, "ti-aemif", NULL), {} }; -- 2.9.2
Re: [PATCH 1/4] memory: ti-aemif: Get a named clock rather than an unnamed one
On Wed, Aug 10, 2016 at 07:00:20AM +, Karl Beldan wrote: > On Tue, Aug 09, 2016 at 05:15:15PM +0000, Karl Beldan wrote: > > Many davinci boards (da830 and da850 families) don't have their clocks > > in DT yet and won't be successful in getting an unnamed aemif clock. > > Also the sole current users of ti-aemif (keystone boards) use 'aemif' as > > their aemif device clock clock-name and should remain unaffected. > > > > Signed-off-by: Karl Beldan > > --- > > drivers/memory/ti-aemif.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c > > index a579a0f..c251fc8 100644 > > --- a/drivers/memory/ti-aemif.c > > +++ b/drivers/memory/ti-aemif.c > > @@ -345,7 +345,7 @@ static int aemif_probe(struct platform_device *pdev) > > > > platform_set_drvdata(pdev, aemif); > > > > - aemif->clk = devm_clk_get(dev, NULL); > > + aemif->clk = devm_clk_get(dev, "aemif"); > > Looking further it seems to me that the struct clk_lookup da850_clks > registered by davinci_clk_init() should be enough to clk_get() unnamed > clocks using only the dev name. I look into what's going on but it > would make this patch unnecessary. > Ok, just saw what's happening, this patch is unnecessary, v2 will follow. Karl
[PATCH v2 2/4] ARM: davinci_all_defconfig: Enable AEMIF as a module
This enables the use of the memory/ti-aemif.c driver. ATM most davinci boards use the mach-davinci aemif code which gets in the way of genericity and proper DT boot. Signed-off-by: Karl Beldan --- arch/arm/configs/davinci_all_defconfig | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm/configs/davinci_all_defconfig b/arch/arm/configs/davinci_all_defconfig index d037d3d..2802897 100644 --- a/arch/arm/configs/davinci_all_defconfig +++ b/arch/arm/configs/davinci_all_defconfig @@ -192,6 +192,8 @@ CONFIG_RTC_CLASS=y CONFIG_RTC_DRV_OMAP=m CONFIG_DMADEVICES=y CONFIG_TI_EDMA=y +CONFIG_MEMORY=y +CONFIG_TI_AEMIF=m CONFIG_EXT2_FS=y CONFIG_EXT3_FS=y CONFIG_XFS_FS=m -- 2.9.2
[PATCH 2/2] ARM: davinci_all_defconfig: enable modules for sound on LCDK
This is the minimal set of additional modules required to support audio on the LCDK. Signed-off-by: Karl Beldan --- arch/arm/configs/davinci_all_defconfig | 4 1 file changed, 4 insertions(+) diff --git a/arch/arm/configs/davinci_all_defconfig b/arch/arm/configs/davinci_all_defconfig index 9980d1d..0357e39 100644 --- a/arch/arm/configs/davinci_all_defconfig +++ b/arch/arm/configs/davinci_all_defconfig @@ -128,6 +128,10 @@ CONFIG_LOGO=y CONFIG_SOUND=m CONFIG_SND=m CONFIG_SND_SOC=m +CONFIG_SND_EDMA_SOC=m +CONFIG_SND_DAVINCI_SOC_MCASP=m +CONFIG_SND_SOC_TLV320AIC3X=m +CONFIG_SND_SIMPLE_CARD=m CONFIG_HID=m CONFIG_HID_A4TECH=m CONFIG_HID_APPLE=m -- 2.9.2
[PATCH 1/2] ARM: dts: da850-lcdk: Audio support via simple-card
The LCDK embeds a TLV320AIC3106 connected to the SoC McASP for analog audio. This relies on the 'dummy' regulator as the codec is always on. Quality is good with arecord -pipe- aplay on Line In/Line Out. The MIC path doesn't seem to work yet. Signed-off-by: Karl Beldan --- arch/arm/boot/dts/da850-lcdk.dts | 75 1 file changed, 75 insertions(+) diff --git a/arch/arm/boot/dts/da850-lcdk.dts b/arch/arm/boot/dts/da850-lcdk.dts index dbcca0b..35b31d9 100644 --- a/arch/arm/boot/dts/da850-lcdk.dts +++ b/arch/arm/boot/dts/da850-lcdk.dts @@ -6,6 +6,7 @@ /dts-v1/; #include "da850.dtsi" #include +#include / { model = "DA850/AM1808/OMAP-L138 LCDK"; @@ -23,10 +24,50 @@ device_type = "memory"; reg = <0xc000 0x0800>; }; + + sound { + compatible = "simple-audio-card"; + simple-audio-card,name = "DA850/OMAP-L138 LCDK"; + simple-audio-card,widgets = + "Microphone", "Mic Jack", + "Line", "Line In", + "Line", "Line Out"; + simple-audio-card,routing = + "MIC3L", "Mic Jack", + "MIC3R", "Mic Jack", + "Mic Jack", "Mic Bias", + "LINE1L", "Line In", + "LINE1R", "Line In", + "Line Out", "LLOUT", + "Line Out", "RLOUT"; + simple-audio-card,format = "dsp_b"; + simple-audio-card,bitclock-master = <&link0_codec>; + simple-audio-card,frame-master = <&link0_codec>; + simple-audio-card,bitclock-inversion; + + simple-audio-card,cpu { + sound-dai = <&mcasp0>; + system-clock-frequency = <24576000>; + }; + + link0_codec: simple-audio-card,codec { + sound-dai = <&tlv320aic3106>; + system-clock-frequency = <24576000>; + }; + }; }; &pmx_core { status = "okay"; + + mcasp0_pins: pinmux_mcasp0_pins { + pinctrl-single,bits = < + /* AHCLKX AFSX ACLKX */ + 0x00 0x00101010 0x00f0f0f0 + /* ARX13 ARX14 */ + 0x04 0x0110 0x0ff0 + >; + }; }; &serial2 { @@ -68,3 +109,37 @@ cd-gpios = <&gpio 64 GPIO_ACTIVE_HIGH>; status = "okay"; }; + +&i2c0 { + pinctrl-names = "default"; + pinctrl-0 = <&i2c0_pins>; + clock-frequency = <10>; + status = "okay"; + + tlv320aic3106: tlv320aic3106@18 { + #sound-dai-cells = <0>; + compatible = 
"ti,tlv320aic3106"; + reg = <0x18>; + status = "okay"; + + ai3x-micbias-vg = ; + }; +}; + +&mcasp0 { + #sound-dai-cells = <0>; + pinctrl-names = "default"; + pinctrl-0 = <&mcasp0_pins>; + status = "okay"; + + op-mode = <0>; /* DAVINCI_MCASP_IIS_MODE */ + tdm-slots = <2>; + serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ + 0 0 0 0 + 0 0 0 0 + 0 0 0 0 + 0 1 2 0 + >; + tx-num-evt = <32>; + rx-num-evt = <32>; +}; -- 2.9.2
Re: [PATCH 2/4] ARM: dts: da850: Add an aemif node
On Wed, Aug 10, 2016 at 01:42:01PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 01:37 PM, Karl Beldan wrote: > > On Wed, Aug 10, 2016 at 01:32:03PM +0530, Sekhar Nori wrote: > >> On Wednesday 10 August 2016 01:18 PM, Sekhar Nori wrote: > >>> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > >>>> Currently the davinci da8xx boards use the mach-davinci aemif code. > >>>> Instantiating an aemif node into the DT allows to use the ti-aemif > >>>> memory driver and is another step to better DT support. > >>>> Also it will allow to properly pass the emif timings via DT. > >>>> > >>>> Signed-off-by: Karl Beldan > >>>> --- > >>>> arch/arm/boot/dts/da850.dtsi | 10 ++ > >>>> 1 file changed, 10 insertions(+) > >>>> > >>>> diff --git a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi > >>>> index bc10e7e..f62928c 100644 > >>>> --- a/arch/arm/boot/dts/da850.dtsi > >>>> +++ b/arch/arm/boot/dts/da850.dtsi > >>>> @@ -411,6 +411,16 @@ > >>>> dma-names = "tx", "rx"; > >>>> }; > >>>> }; > >>>> +aemif: aemif@6800 { > >>>> +compatible = "ti,da850-aemif"; > >>>> +#address-cells = <2>; > >>>> +#size-cells = <1>; > >>>> + > >>>> +reg = <0x6800 0x8000>; > >>>> +ranges = <0 0 0x6000 0x0800 > >>>> + 1 0 0x6800 0x8000>; > >>>> +status = "disabled"; > >>>> +}; > >>>> nand_cs3@6200 { > >>>> compatible = "ti,davinci-nand"; > >>>> reg = <0x6200 0x807ff > >>> > >>> The nand node should be part of aemif node like it is done for keystone > >>> boards. > >> > >> Actually, can you move the nand node out of da850.dtsi completely. Its > >> much better to keep da850.dtsi restricted to soc-internal devices and > >> keep the board level devices like NAND flash in .dts file. > >> > >> Similarly, can you move the NAND pinmux definitions too to the > >> da850-evm.dts file? > >> > >> There is advantage in keeping common pinmux definitions in da850.dtsi so > >> each board doe not have to repeat them. 
But AEMIF is an exception as its > >> usage can really be varied (NAND, NOR, SRAM, other). Plus, different > >> boards are likely to use different chip selects so coming up with some > >> pinmux definitions which will be reused widely is really unlikely. > >> > > This is exactly what I just did for the LCDK. > > If everybody is happy with it I will do the same for the evm as I put it > > in the cover letter. > > Yes please. We dont want duplication of data between da850.dtsi and > da850-lcdk.dts files. > Then I'll wait for this series to be applied and then apply my changes to the EVM while retiring the nand_cs3 together. Karl
Re: [PATCH 2/4] ARM: dts: da850: Add an aemif node
On Wed, Aug 10, 2016 at 01:59:26PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 01:56 PM, Karl Beldan wrote: > > On Wed, Aug 10, 2016 at 01:42:01PM +0530, Sekhar Nori wrote: > >> On Wednesday 10 August 2016 01:37 PM, Karl Beldan wrote: > >>> On Wed, Aug 10, 2016 at 01:32:03PM +0530, Sekhar Nori wrote: > >>>> On Wednesday 10 August 2016 01:18 PM, Sekhar Nori wrote: > >>>>> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > >>>>>> Currently the davinci da8xx boards use the mach-davinci aemif code. > >>>>>> Instantiating an aemif node into the DT allows to use the ti-aemif > >>>>>> memory driver and is another step to better DT support. > >>>>>> Also it will allow to properly pass the emif timings via DT. > >>>>>> > >>>>>> Signed-off-by: Karl Beldan > >>>>>> --- > >>>>>> arch/arm/boot/dts/da850.dtsi | 10 ++ > >>>>>> 1 file changed, 10 insertions(+) > >>>>>> > >>>>>> diff --git a/arch/arm/boot/dts/da850.dtsi > >>>>>> b/arch/arm/boot/dts/da850.dtsi > >>>>>> index bc10e7e..f62928c 100644 > >>>>>> --- a/arch/arm/boot/dts/da850.dtsi > >>>>>> +++ b/arch/arm/boot/dts/da850.dtsi > >>>>>> @@ -411,6 +411,16 @@ > >>>>>>dma-names = "tx", "rx"; > >>>>>>}; > >>>>>>}; > >>>>>> + aemif: aemif@6800 { > >>>>>> + compatible = "ti,da850-aemif"; > >>>>>> + #address-cells = <2>; > >>>>>> + #size-cells = <1>; > >>>>>> + > >>>>>> + reg = <0x6800 0x8000>; > >>>>>> + ranges = <0 0 0x6000 0x0800 > >>>>>> +1 0 0x6800 0x8000>; > >>>>>> + status = "disabled"; > >>>>>> + }; > >>>>>>nand_cs3@6200 { > >>>>>>compatible = "ti,davinci-nand"; > >>>>>>reg = <0x6200 0x807ff > >>>>> > >>>>> The nand node should be part of aemif node like it is done for keystone > >>>>> boards. > >>>> > >>>> Actually, can you move the nand node out of da850.dtsi completely. Its > >>>> much better to keep da850.dtsi restricted to soc-internal devices and > >>>> keep the board level devices like NAND flash in .dts file. 
> >>>> > >>>> Similarly, can you move the NAND pinmux definitions too to the > >>>> da850-evm.dts file? > >>>> > >>>> There is advantage in keeping common pinmux definitions in da850.dtsi so > >>>> each board doe not have to repeat them. But AEMIF is an exception as its > >>>> usage can really be varied (NAND, NOR, SRAM, other). Plus, different > >>>> boards are likely to use different chip selects so coming up with some > >>>> pinmux definitions which will be reused widely is really unlikely. > >>>> > >>> This is exactly what I just did for the LCDK. > >>> If everybody is happy with it I will do the same for the evm as I put it > >>> in the cover letter. > >> > >> Yes please. We dont want duplication of data between da850.dtsi and > >> da850-lcdk.dts files. > >> > > Then I'll wait for this series to be applied and then apply my changes > > to the EVM while retiring the nand_cs3 together. > > No, I prefer the fixup happens first. In the same series, you can first > fixup existing EVM and then add LCDK support. > Well in that case you'll have to do the testing since I only have an LCDK. I should be able to send the series within the hour. Karl
Re: [PATCH 2/4] ARM: dts: da850: Add an aemif node
On Wed, Aug 10, 2016 at 02:04:48PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 02:04 PM, Karl Beldan wrote: > > On Wed, Aug 10, 2016 at 01:59:26PM +0530, Sekhar Nori wrote: > >> On Wednesday 10 August 2016 01:56 PM, Karl Beldan wrote: > >>> On Wed, Aug 10, 2016 at 01:42:01PM +0530, Sekhar Nori wrote: > >>>> On Wednesday 10 August 2016 01:37 PM, Karl Beldan wrote: > >>>>> On Wed, Aug 10, 2016 at 01:32:03PM +0530, Sekhar Nori wrote: > >>>>>> On Wednesday 10 August 2016 01:18 PM, Sekhar Nori wrote: > >>>>>>> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > >>>>>>>> Currently the davinci da8xx boards use the mach-davinci aemif code. > >>>>>>>> Instantiating an aemif node into the DT allows to use the ti-aemif > >>>>>>>> memory driver and is another step to better DT support. > >>>>>>>> Also it will allow to properly pass the emif timings via DT. > >>>>>>>> > >>>>>>>> Signed-off-by: Karl Beldan > >>>>>>>> --- > >>>>>>>> arch/arm/boot/dts/da850.dtsi | 10 ++ > >>>>>>>> 1 file changed, 10 insertions(+) > >>>>>>>> > >>>>>>>> diff --git a/arch/arm/boot/dts/da850.dtsi > >>>>>>>> b/arch/arm/boot/dts/da850.dtsi > >>>>>>>> index bc10e7e..f62928c 100644 > >>>>>>>> --- a/arch/arm/boot/dts/da850.dtsi > >>>>>>>> +++ b/arch/arm/boot/dts/da850.dtsi > >>>>>>>> @@ -411,6 +411,16 @@ > >>>>>>>> dma-names = "tx", "rx"; > >>>>>>>> }; > >>>>>>>> }; > >>>>>>>> +aemif: aemif@6800 { > >>>>>>>> +compatible = "ti,da850-aemif"; > >>>>>>>> +#address-cells = <2>; > >>>>>>>> +#size-cells = <1>; > >>>>>>>> + > >>>>>>>> +reg = <0x6800 0x8000>; > >>>>>>>> +ranges = <0 0 0x6000 0x0800 > >>>>>>>> + 1 0 0x6800 0x8000>; > >>>>>>>> +status = "disabled"; > >>>>>>>> +}; > >>>>>>>> nand_cs3@6200 { > >>>>>>>> compatible = "ti,davinci-nand"; > >>>>>>>> reg = <0x6200 0x807ff > >>>>>>> > >>>>>>> The nand node should be part of aemif node like it is done for > >>>>>>> keystone > >>>>>>> boards. > >>>>>> > >>>>>> Actually, can you move the nand node out of da850.dtsi completely. 
Its > >>>>>> much better to keep da850.dtsi restricted to soc-internal devices and > >>>>>> keep the board level devices like NAND flash in .dts file. > >>>>>> > >>>>>> Similarly, can you move the NAND pinmux definitions too to the > >>>>>> da850-evm.dts file? > >>>>>> > >>>>>> There is advantage in keeping common pinmux definitions in da850.dtsi > >>>>>> so > >>>>>> each board doe not have to repeat them. But AEMIF is an exception as > >>>>>> its > >>>>>> usage can really be varied (NAND, NOR, SRAM, other). Plus, different > >>>>>> boards are likely to use different chip selects so coming up with some > >>>>>> pinmux definitions which will be reused widely is really unlikely. > >>>>>> > >>>>> This is exactly what I just did for the LCDK. > >>>>> If everybody is happy with it I will do the same for the evm as I put it > >>>>> in the cover letter. > >>>> > >>>> Yes please. We dont want duplication of data between da850.dtsi and > >>>> da850-lcdk.dts files. > >>>> > >>> Then I'll wait for this series to be applied and then apply my changes > >>> to the EVM while retiring the nand_cs3 together. > >> > >> No, I prefer the fixup happens first. In the same series, you can first > >> fixup existing EVM and then add LCDK support. > >> > > > > Well in that case you'll have to do the testing since I only have an > > LCDK. I should be able to send the series within the hour. > > Sure. I can test it. > The aemif/davinci_nand drivers don't configure AWCCR, yet davinci_nand relies on EM_WAIT for RDY/nBUSY, so for the moment I keep the default settings, but I configure the EM_WAIT pins in the pinctrl. I did it for the LCDK, and it is not done for the EVM. Since the EVM schematics are not public can you tell which EM_WAIT pins are connected ? Karl
Re: [PATCH 2/4] ARM: dts: da850: Add an aemif node
On Wed, Aug 10, 2016 at 01:18:51PM +0530, Sekhar Nori wrote: > On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > > Currently the davinci da8xx boards use the mach-davinci aemif code. > > Instantiating an aemif node into the DT allows to use the ti-aemif > > memory driver and is another step to better DT support. > > Also it will allow to properly pass the emif timings via DT. > > > > Signed-off-by: Karl Beldan > > --- > > arch/arm/boot/dts/da850.dtsi | 10 ++ > > 1 file changed, 10 insertions(+) > > > > diff --git a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi > > index bc10e7e..f62928c 100644 > > --- a/arch/arm/boot/dts/da850.dtsi > > +++ b/arch/arm/boot/dts/da850.dtsi > > @@ -411,6 +411,16 @@ > > dma-names = "tx", "rx"; > > }; > > }; > > + aemif: aemif@6800 { > > + compatible = "ti,da850-aemif"; > > + #address-cells = <2>; > > + #size-cells = <1>; > > + > > + reg = <0x6800 0x8000>; > > + ranges = <0 0 0x6000 0x0800 > > + 1 0 0x6800 0x8000>; > > + status = "disabled"; > > + }; > > nand_cs3@6200 { > > compatible = "ti,davinci-nand"; > > reg = <0x6200 0x807ff > > The nand node should be part of aemif node like it is done for keystone > boards. > Hmm, this is exactly what I have done. Karl
Re: [PATCH 2/4] ARM: dts: da850: Add an aemif node
On Wed, Aug 10, 2016 at 01:32:03PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 01:18 PM, Sekhar Nori wrote: > > On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > >> Currently the davinci da8xx boards use the mach-davinci aemif code. > >> Instantiating an aemif node into the DT allows to use the ti-aemif > >> memory driver and is another step to better DT support. > >> Also it will allow to properly pass the emif timings via DT. > >> > >> Signed-off-by: Karl Beldan > >> --- > >> arch/arm/boot/dts/da850.dtsi | 10 ++ > >> 1 file changed, 10 insertions(+) > >> > >> diff --git a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi > >> index bc10e7e..f62928c 100644 > >> --- a/arch/arm/boot/dts/da850.dtsi > >> +++ b/arch/arm/boot/dts/da850.dtsi > >> @@ -411,6 +411,16 @@ > >>dma-names = "tx", "rx"; > >>}; > >>}; > >> + aemif: aemif@6800 { > >> + compatible = "ti,da850-aemif"; > >> + #address-cells = <2>; > >> + #size-cells = <1>; > >> + > >> + reg = <0x6800 0x8000>; > >> + ranges = <0 0 0x6000 0x0800 > >> +1 0 0x6800 0x8000>; > >> + status = "disabled"; > >> + }; > >>nand_cs3@6200 { > >>compatible = "ti,davinci-nand"; > >>reg = <0x6200 0x807ff > > > > The nand node should be part of aemif node like it is done for keystone > > boards. > > Actually, can you move the nand node out of da850.dtsi completely. Its > much better to keep da850.dtsi restricted to soc-internal devices and > keep the board level devices like NAND flash in .dts file. > > Similarly, can you move the NAND pinmux definitions too to the > da850-evm.dts file? > > There is advantage in keeping common pinmux definitions in da850.dtsi so > each board doe not have to repeat them. But AEMIF is an exception as its > usage can really be varied (NAND, NOR, SRAM, other). Plus, different > boards are likely to use different chip selects so coming up with some > pinmux definitions which will be reused widely is really unlikely. > This is exactly what I just did for the LCDK. 
If everybody is happy with it I will do the same for the EVM, as mentioned in the cover letter. Karl
Re: [PATCH 3/4] ARM: dts: da850-lcdk: Add NAND to DT
On Tue, Aug 09, 2016 at 05:15:17PM +, Karl Beldan wrote: > This adds DT support for the NAND connected to the SoC AEMIF. > The parameters (timings, ecc) are the same as what the board ships with > (default AEMIF timings, 1bit ECC) and improvements will be handled in > due course. > This passed elementary tests hashing a 20MB file on top of ubifs on my > LCDK. > > Signed-off-by: Karl Beldan > --- > arch/arm/boot/dts/da850-lcdk.dts | 108 > +++ > 1 file changed, 108 insertions(+) > > diff --git a/arch/arm/boot/dts/da850-lcdk.dts > b/arch/arm/boot/dts/da850-lcdk.dts > index dbcca0b..033380b 100644 > --- a/arch/arm/boot/dts/da850-lcdk.dts > +++ b/arch/arm/boot/dts/da850-lcdk.dts > @@ -27,6 +27,27 @@ > > &pmx_core { > status = "okay"; > + > + nand_pins: nand_pins { > + pinctrl-single,bits = < > + /* EMA_WAIT[0], EMA_OE, EMA_WE, EMA_CS[3] */ > + 0x1c 0x10110010 0xf0ff00f0 > + /* > + * EMA_D[0], EMA_D[1], EMA_D[2], > + * EMA_D[3], EMA_D[4], EMA_D[5], > + * EMA_D[6], EMA_D[7] > + */ > + 0x24 0x 0x > + /* > + * EMA_D[8], EMA_D[9], EMA_D[10], > + * EMA_D[11], EMA_D[12], EMA_D[13], > + * EMA_D[14], EMA_D[15] > + */ > + 0x20 0x 0x > + /* EMA_A[1], EMA_A[2] */ > + 0x30 0x0110 0x0ff0 > + >; > + }; > }; > > &serial2 { > @@ -68,3 +89,90 @@ > cd-gpios = <&gpio 64 GPIO_ACTIVE_HIGH>; > status = "okay"; > }; > + > +&aemif { > + pinctrl-names = "default"; > + pinctrl-0 = <&nand_pins>; > + status = "ok"; > + cs2 { Typo, should be cs3. Karl > + #address-cells = <2>; > + #size-cells = <1>; > + clock-ranges; > + ranges; > + > + ti,cs-chipselect = <2>; > + > + nand@200,0 { > + compatible = "ti,davinci-nand"; > + #address-cells = <1>; > + #size-cells = <1>; > + reg = <0 0x0200 0x0200 > +1 0x 0x8000>; > + > + ti,davinci-chipselect = <1>; > + ti,davinci-mask-ale = <0>; > + ti,davinci-mask-cle = <0>; > + ti,davinci-mask-chipsel = <0>; > + > + /* > + * nand_ecc_strength_good will emit a warning > + * but the LCDK ships with these settings [1]. 
> + * Also HW 4bits ECC with 16bits NAND seems to > + * require some attention. > + * > + * ATM nand_davinci_probe handling of nand-ecc-* > + * is broken, e.g. > + * chip.ecc.strength = pdata->ecc_bits occurs after > + * scan_ident(), otherwise I would have used: > + * nand-ecc-mode = "hw"; > + * nand-ecc-strength = <1>; > + * nand-ecc-step-size = <512>; > + */ > + ti,davinci-ecc-mode = "hw"; > + ti,davinci-ecc-bits = <1>; > + > + nand-bus-width= <16>; > + nand-on-flash-bbt; > + > + /* > + * LCDK original partitions: > + * 0x-0x0002 : "u-boot env" > + * 0x0002-0x000a : "u-boot" > + * 0x000a-0x002a : "kernel" > + * 0x002a-0x2000 : "filesystem" > + * > + * The 1st NAND block being guaranted to be valid w/o > ECC (> 1k cycles), > + * it makes a perfect candidate as an SPL for the > BootROM to jump to. > + * However the OMAP-L132/L138 Bootloader doc SPRAB41E > reads: > + * "To boot from NAND Flash, the AIS should be written > to NAND block 1 > + * (NAND block 0 is not
Re: [PATCH 2/4] ARM: dts: da850: Add an aemif node
On Wed, Aug 10, 2016 at 09:28:48AM +, Karl Beldan wrote: > On Wed, Aug 10, 2016 at 02:04:48PM +0530, Sekhar Nori wrote: > > On Wednesday 10 August 2016 02:04 PM, Karl Beldan wrote: > > > On Wed, Aug 10, 2016 at 01:59:26PM +0530, Sekhar Nori wrote: > > >> On Wednesday 10 August 2016 01:56 PM, Karl Beldan wrote: > > >>> On Wed, Aug 10, 2016 at 01:42:01PM +0530, Sekhar Nori wrote: > > >>>> On Wednesday 10 August 2016 01:37 PM, Karl Beldan wrote: > > >>>>> On Wed, Aug 10, 2016 at 01:32:03PM +0530, Sekhar Nori wrote: > > >>>>>> On Wednesday 10 August 2016 01:18 PM, Sekhar Nori wrote: > > >>>>>>> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > > >>>>>>>> Currently the davinci da8xx boards use the mach-davinci aemif code. > > >>>>>>>> Instantiating an aemif node into the DT allows to use the ti-aemif > > >>>>>>>> memory driver and is another step to better DT support. > > >>>>>>>> Also it will allow to properly pass the emif timings via DT. > > >>>>>>>> > > >>>>>>>> Signed-off-by: Karl Beldan > > >>>>>>>> --- > > >>>>>>>> arch/arm/boot/dts/da850.dtsi | 10 ++ > > >>>>>>>> 1 file changed, 10 insertions(+) > > >>>>>>>> > > >>>>>>>> diff --git a/arch/arm/boot/dts/da850.dtsi > > >>>>>>>> b/arch/arm/boot/dts/da850.dtsi > > >>>>>>>> index bc10e7e..f62928c 100644 > > >>>>>>>> --- a/arch/arm/boot/dts/da850.dtsi > > >>>>>>>> +++ b/arch/arm/boot/dts/da850.dtsi > > >>>>>>>> @@ -411,6 +411,16 @@ > > >>>>>>>>dma-names = "tx", "rx"; > > >>>>>>>>}; > > >>>>>>>>}; > > >>>>>>>> + aemif: aemif@6800 { > > >>>>>>>> + compatible = "ti,da850-aemif"; > > >>>>>>>> + #address-cells = <2>; > > >>>>>>>> + #size-cells = <1>; > > >>>>>>>> + > > >>>>>>>> + reg = <0x6800 0x8000>; > > >>>>>>>> + ranges = <0 0 0x6000 0x0800 > > >>>>>>>> +1 0 0x6800 0x8000>; > > >>>>>>>> + status = "disabled"; > > >>>>>>>> + }; > > >>>>>>>>nand_cs3@6200 { > > >>>>>>>>compatible = "ti,davinci-nand"; > > >>>>>>>>reg = <0x6200 0x807ff > > >>>>>>> > > >>>>>>> The nand node should be part of aemif node like it is done 
for > > >>>>>>> keystone > > >>>>>>> boards. > > >>>>>> > > >>>>>> Actually, can you move the nand node out of da850.dtsi completely. > > >>>>>> Its > > >>>>>> much better to keep da850.dtsi restricted to soc-internal devices and > > >>>>>> keep the board level devices like NAND flash in .dts file. > > >>>>>> > > >>>>>> Similarly, can you move the NAND pinmux definitions too to the > > >>>>>> da850-evm.dts file? > > >>>>>> > > >>>>>> There is advantage in keeping common pinmux definitions in > > >>>>>> da850.dtsi so > > >>>>>> each board doe not have to repeat them. But AEMIF is an exception as > > >>>>>> its > > >>>>>> usage can really be varied (NAND, NOR, SRAM, other). Plus, different > > >>>>>> boards are likely to use different chip selects so coming up with > > >>>>>> some > > >>>>>> pinmux definitions which will be reused widely is really unlikely. > > >>>>>> > > >>>>> This is exactly what I just did for the LCDK. > > >>>>> If everybody is happy with it I will do the same for the evm as I put > > >>>>> it > > >>>>> in the cover letter. > > >>>> > > >>>> Yes please. We dont want duplication of data between da850.dtsi and > > >>>> da850-lcdk.dts files. > > >>>> > > >>> Then I'll wait for this series to be applied and then apply my changes > > >>> to the EVM while retiring the nand_cs3 together. > > >> > > >> No, I prefer the fixup happens first. In the same series, you can first > > >> fixup existing EVM and then add LCDK support. > > >> > > > > > > Well in that case you'll have to do the testing since I only have an > > > LCDK. I should be able to send the series within the hour. > > > > Sure. I can test it. > > > The aemif/davinci_nand drivers don't configure AWCCR, yet davinci_nand > relies on EM_WAIT for RDY/nBUSY, so for the moment I keep the default > settings, but I configure the EM_WAIT pins in the pinctrl. I did it for > the LCDK, and it is not done for the EVM. Since the EVM schematics are > not public can you tell which EM_WAIT pins are connected ? 
> Also the device name is nand_cs3 but the pin muxing also enables CS4 both in Linux and U-Boot; can you tell whether it is needed? Or maybe you can share the schematics and I'll check it myself? Karl
Re: [PATCH 3/4] ARM: dts: da850-lcdk: Add NAND to DT
On Wed, Aug 10, 2016 at 03:01:30PM +0530, Sekhar Nori wrote: > On Wednesday 10 August 2016 02:34 PM, Karl Beldan wrote: > > On Wed, Aug 10, 2016 at 02:01:57PM +0530, Sekhar Nori wrote: > >> On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > >>> This adds DT support for the NAND connected to the SoC AEMIF. > >>> The parameters (timings, ecc) are the same as what the board ships with > >>> (default AEMIF timings, 1bit ECC) and improvements will be handled in > >>> due course. > >> > >> I disagree that we need to be compatible with the software that ships with > >> the board. That software was last updated 3 years ago. Instead I would > >> be concerned with what the hardware supports. So, if the hardware can support > >> 4-bit ECC, I would use that. > >> > > I am not saying we _need_ to be compatible. > > Alright then, please drop references to what software the board ships > with in the commit message and in the patch itself. > I hadn't seen this comment before sending v2. > > > >> If driver is broken for 4-bit ECC, please fix that up first. > >> > > Since this issue is completely separate from my DT improvements > > I'll stick to resubmitting the series, applying my LCDK changes to the > > EVM too, besides you'll be able to compare the behavior without ECC > > discrepancies. > > I took note that you are likely to not apply without the ECC fix. > > Yeah, I would not like to apply with 1-bit ECC now and then change to > 4-bit ECC soon after. > Both mityomapl138 from mainline and hawkboard from TI's BSP release include the comment: - "4 bit mode is not supported with 16 bit NAND" It is not clear whether they imply that the HW has issues or if it's SW only, but 4-bit ECC is a different matter and I hope you'll integrate the current changes prior to tackling it. Karl
[PATCH v2 3/4] ARM: dts: da850,da850-evm: Add an aemif node and use it for the NAND
Currently the davinci da8xx boards use the mach-davinci aemif code. Instantiating an aemif node into the DT allows to use the ti-aemif memory driver and is another step to better DT support. This change adds an aemif node in the dtsi while retiring the nand_cs3 node. The NAND is now instantiated in the dts as a subnode of the aemif one along with its pins. Signed-off-by: Karl Beldan --- arch/arm/boot/dts/da850-evm.dts | 49 - arch/arm/boot/dts/da850.dtsi| 19 +++- 2 files changed, 52 insertions(+), 16 deletions(-) diff --git a/arch/arm/boot/dts/da850-evm.dts b/arch/arm/boot/dts/da850-evm.dts index 1a15db8..eedcc59 100644 --- a/arch/arm/boot/dts/da850-evm.dts +++ b/arch/arm/boot/dts/da850-evm.dts @@ -29,6 +29,20 @@ 0x04 0x00011000 0x000ff000 >; }; + nand_pins: nand_pins { + pinctrl-single,bits = < + /* EMA_WAIT[0], EMA_OE, EMA_WE, EMA_CS[4], EMA_CS[3] */ + 0x1c 0x10110110 0xf0ff0ff0 + /* +* EMA_D[0], EMA_D[1], EMA_D[2], +* EMA_D[3], EMA_D[4], EMA_D[5], +* EMA_D[6], EMA_D[7] +*/ + 0x24 0x 0x + /* EMA_A[1], EMA_A[2] */ + 0x30 0x0110 0x0ff0 + >; + }; }; serial0: serial@42000 { status = "okay"; @@ -131,11 +145,6 @@ status = "okay"; }; }; - nand_cs3@6200 { - status = "okay"; - pinctrl-names = "default"; - pinctrl-0 = <&nand_cs3_pins>; - }; vbat: fixedregulator@0 { compatible = "regulator-fixed"; regulator-name = "vbat"; @@ -250,3 +259,33 @@ &edma1 { ti,edma-reserved-slot-ranges = <32 90>; }; + +&aemif { + pinctrl-names = "default"; + pinctrl-0 = <&nand_pins>; + status = "ok"; + cs3 { + #address-cells = <2>; + #size-cells = <1>; + clock-ranges; + ranges; + + ti,cs-chipselect = <3>; + + nand@200,0 { + compatible = "ti,davinci-nand"; + #address-cells = <1>; + #size-cells = <1>; + reg = <0 0x0200 0x0200 + 1 0x 0x8000>; + + ti,davinci-chipselect = <1>; + ti,davinci-mask-ale = <0>; + ti,davinci-mask-cle = <0>; + ti,davinci-mask-chipsel = <0>; + ti,davinci-ecc-mode = "hw"; + ti,davinci-ecc-bits = <4>; + ti,davinci-nand-use-bbt; + }; + }; +}; diff --git 
a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi index bc10e7e..e73fd64 100644 --- a/arch/arm/boot/dts/da850.dtsi +++ b/arch/arm/boot/dts/da850.dtsi @@ -411,17 +411,14 @@ dma-names = "tx", "rx"; }; }; - nand_cs3@6200 { - compatible = "ti,davinci-nand"; - reg = <0x6200 0x807ff - 0x6800 0x8000>; - ti,davinci-chipselect = <1>; - ti,davinci-mask-ale = <0>; - ti,davinci-mask-cle = <0>; - ti,davinci-mask-chipsel = <0>; - ti,davinci-ecc-mode = "hw"; - ti,davinci-ecc-bits = <4>; - ti,davinci-nand-use-bbt; + aemif: aemif@6800 { + compatible = "ti,da850-aemif"; + #address-cells = <2>; + #size-cells = <1>; + + reg = <0x6800 0x8000>; + ranges = <0 0 0x6000 0x0800 + 1 0 0x6800 0x8000>; status = "disabled"; }; }; -- 2.9.2
Re: [PATCH 1/4] memory: ti-aemif: Get a named clock rather than an unnamed one
On Tue, Aug 09, 2016 at 05:15:15PM +, Karl Beldan wrote: > Many davinci boards (da830 and da850 families) don't have their clocks > in DT yet and won't be successful in getting an unnamed aemif clock. > Also the sole current users of ti-aemif (keystone boards) use 'aemif' as > their aemif device clock clock-name and should remain unaffected. > > Signed-off-by: Karl Beldan > --- > drivers/memory/ti-aemif.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c > index a579a0f..c251fc8 100644 > --- a/drivers/memory/ti-aemif.c > +++ b/drivers/memory/ti-aemif.c > @@ -345,7 +345,7 @@ static int aemif_probe(struct platform_device *pdev) > > platform_set_drvdata(pdev, aemif); > > - aemif->clk = devm_clk_get(dev, NULL); > + aemif->clk = devm_clk_get(dev, "aemif"); Looking further it seems to me that the struct clk_lookup da850_clks registered by davinci_clk_init() should be enough to clk_get() unnamed clocks using only the dev name. I'll look into what's going on, but that would make this patch unnecessary. Karl
[PATCH v2 0/4] Add DT support for NAND to LCDK
Changes from v1:
- s/cs2/cs3
- kept "ti,.."-only nand properties (the adjustments made by
  nand_davinci_probe are broken)
- replaced v1 1/4: "memory: ti-aemif: Get a named clock rather than an
  unnamed one" with v2 1/4: "davinci: da8xx-dt: Add ti-aemif lookup for
  clock matching"
- make the same improvements for the EVM (and retire nand_cs3)

Karl Beldan (4):
  ARM: davinci: da8xx-dt: Add ti-aemif lookup for clock matching
  ARM: davinci_all_defconfig: Enable AEMIF as a module
  ARM: dts: da850,da850-evm: Add an aemif node and use it for the NAND
  ARM: dts: da850-lcdk: Add NAND to DT

 arch/arm/boot/dts/da850-evm.dts        |  49 +++---
 arch/arm/boot/dts/da850-lcdk.dts       | 107 +
 arch/arm/boot/dts/da850.dtsi           |  19 +++---
 arch/arm/configs/davinci_all_defconfig |   2 +
 arch/arm/mach-davinci/da850.c          |   1 +
 arch/arm/mach-davinci/da8xx-dt.c       |   1 +
 6 files changed, 163 insertions(+), 16 deletions(-)
--
2.9.2
Re: [PATCH 3/4] ARM: dts: da850-lcdk: Add NAND to DT
On Wed, Aug 10, 2016 at 02:01:57PM +0530, Sekhar Nori wrote: > On Tuesday 09 August 2016 10:45 PM, Karl Beldan wrote: > > This adds DT support for the NAND connected to the SoC AEMIF. > > The parameters (timings, ecc) are the same as what the board ships with > > (default AEMIF timings, 1bit ECC) and improvements will be handled in > > due course. > > I disagree that we need to be compatible with the software that ships with > the board. That software was last updated 3 years ago. Instead I would > be concerned with what the hardware supports. So, if the hardware can support > 4-bit ECC, I would use that. > I am not saying we _need_ to be compatible. > If driver is broken for 4-bit ECC, please fix that up first. > Since this issue is completely separate from my DT improvements I'll stick to resubmitting the series, applying my LCDK changes to the EVM too; besides, you'll be able to compare the behavior without ECC discrepancies. I took note that you are likely to not apply without the ECC fix. Karl > > This passed elementary tests hashing a 20MB file on top of ubifs on my > > LCDK. > > > > Signed-off-by: Karl Beldan > > Thanks, > Sekhar
[PATCH v2 4/4] ARM: dts: da850-lcdk: Add NAND to DT
This adds DT support for the NAND connected to the SoC AEMIF. The parameters (timings, ecc) are the same as what the board ships with (default AEMIF timings, 1bit ECC) and improvements will be handled in due course. This passed elementary tests hashing a 20MB file on top of ubifs on my LCDK. Signed-off-by: Karl Beldan --- arch/arm/boot/dts/da850-lcdk.dts | 107 +++ 1 file changed, 107 insertions(+) diff --git a/arch/arm/boot/dts/da850-lcdk.dts b/arch/arm/boot/dts/da850-lcdk.dts index dbcca0b..a3f9845 100644 --- a/arch/arm/boot/dts/da850-lcdk.dts +++ b/arch/arm/boot/dts/da850-lcdk.dts @@ -27,6 +27,27 @@ &pmx_core { status = "okay"; + + nand_pins: nand_pins { + pinctrl-single,bits = < + /* EMA_WAIT[0], EMA_OE, EMA_WE, EMA_CS[3] */ + 0x1c 0x10110010 0xf0ff00f0 + /* +* EMA_D[0], EMA_D[1], EMA_D[2], +* EMA_D[3], EMA_D[4], EMA_D[5], +* EMA_D[6], EMA_D[7] +*/ + 0x24 0x 0x + /* +* EMA_D[8], EMA_D[9], EMA_D[10], +* EMA_D[11], EMA_D[12], EMA_D[13], +* EMA_D[14], EMA_D[15] +*/ + 0x20 0x 0x + /* EMA_A[1], EMA_A[2] */ + 0x30 0x0110 0x0ff0 + >; + }; }; &serial2 { @@ -68,3 +89,89 @@ cd-gpios = <&gpio 64 GPIO_ACTIVE_HIGH>; status = "okay"; }; + +&aemif { + pinctrl-names = "default"; + pinctrl-0 = <&nand_pins>; + status = "ok"; + cs3 { + #address-cells = <2>; + #size-cells = <1>; + clock-ranges; + ranges; + + ti,cs-chipselect = <3>; + + nand@200,0 { + compatible = "ti,davinci-nand"; + #address-cells = <1>; + #size-cells = <1>; + reg = <0 0x0200 0x0200 + 1 0x 0x8000>; + + ti,davinci-chipselect = <1>; + ti,davinci-mask-ale = <0>; + ti,davinci-mask-cle = <0>; + ti,davinci-mask-chipsel = <0>; + + /* +* nand_ecc_strength_good will emit a warning +* but the LCDK ships with these settings [1]. +* Also HW 4bits ECC with 16bits NAND seems to +* require some attention. +* +* ATM nand_davinci_probe handling of nand-ecc-* +* is broken, e.g. 
+* chip.ecc.strength = pdata->ecc_bits occurs after +* scan_ident(), otherwise I would have used: +* nand-ecc-mode = "hw"; +* nand-ecc-strength = <1>; +* nand-ecc-step-size = <512>; +*/ + ti,davinci-nand-buswidth = <16>; + ti,davinci-ecc-mode = "hw"; + ti,davinci-ecc-bits = <1>; + ti,davinci-nand-use-bbt; + + /* +* LCDK original partitions: +* 0x-0x0002 : "u-boot env" +* 0x0002-0x000a : "u-boot" +* 0x000a-0x002a : "kernel" +* 0x002a-0x2000 : "filesystem" +* +* The 1st NAND block being guaranteed to be valid w/o ECC (> 1k cycles), +* it makes a perfect candidate as an SPL for the BootROM to jump to. +* However the OMAP-L132/L138 Bootloader doc SPRAB41E reads: +* "To boot from NAND Flash, the AIS should be written to NAND block 1 +* (NAND block 0 is not used by default)", which matches the LCDK +* original partitioning. +* Also, the LCDK ships with only the u-boot partition provisioned and +* boots on it in its default configuration while using the MMC for the +* kernel and rootfs, so preserve that one as is for now. +* [1]: Ensuring for example that U-Boot LCDK SPL can handle it properly +
[PATCH 3/4] ARM: dts: da850-lcdk: Add NAND to DT
This adds DT support for the NAND connected to the SoC AEMIF. The parameters (timings, ecc) are the same as what the board ships with (default AEMIF timings, 1bit ECC) and improvements will be handled in due course. This passed elementary tests hashing a 20MB file on top of ubifs on my LCDK. Signed-off-by: Karl Beldan --- arch/arm/boot/dts/da850-lcdk.dts | 108 +++ 1 file changed, 108 insertions(+) diff --git a/arch/arm/boot/dts/da850-lcdk.dts b/arch/arm/boot/dts/da850-lcdk.dts index dbcca0b..033380b 100644 --- a/arch/arm/boot/dts/da850-lcdk.dts +++ b/arch/arm/boot/dts/da850-lcdk.dts @@ -27,6 +27,27 @@ &pmx_core { status = "okay"; + + nand_pins: nand_pins { + pinctrl-single,bits = < + /* EMA_WAIT[0], EMA_OE, EMA_WE, EMA_CS[3] */ + 0x1c 0x10110010 0xf0ff00f0 + /* +* EMA_D[0], EMA_D[1], EMA_D[2], +* EMA_D[3], EMA_D[4], EMA_D[5], +* EMA_D[6], EMA_D[7] +*/ + 0x24 0x 0x + /* +* EMA_D[8], EMA_D[9], EMA_D[10], +* EMA_D[11], EMA_D[12], EMA_D[13], +* EMA_D[14], EMA_D[15] +*/ + 0x20 0x 0x + /* EMA_A[1], EMA_A[2] */ + 0x30 0x0110 0x0ff0 + >; + }; }; &serial2 { @@ -68,3 +89,90 @@ cd-gpios = <&gpio 64 GPIO_ACTIVE_HIGH>; status = "okay"; }; + +&aemif { + pinctrl-names = "default"; + pinctrl-0 = <&nand_pins>; + status = "ok"; + cs2 { + #address-cells = <2>; + #size-cells = <1>; + clock-ranges; + ranges; + + ti,cs-chipselect = <2>; + + nand@200,0 { + compatible = "ti,davinci-nand"; + #address-cells = <1>; + #size-cells = <1>; + reg = <0 0x0200 0x0200 + 1 0x 0x8000>; + + ti,davinci-chipselect = <1>; + ti,davinci-mask-ale = <0>; + ti,davinci-mask-cle = <0>; + ti,davinci-mask-chipsel = <0>; + + /* +* nand_ecc_strength_good will emit a warning +* but the LCDK ships with these settings [1]. +* Also HW 4bits ECC with 16bits NAND seems to +* require some attention. +* +* ATM nand_davinci_probe handling of nand-ecc-* +* is broken, e.g. 
+* chip.ecc.strength = pdata->ecc_bits occurs after +* scan_ident(), otherwise I would have used: +* nand-ecc-mode = "hw"; +* nand-ecc-strength = <1>; +* nand-ecc-step-size = <512>; +*/ + ti,davinci-ecc-mode = "hw"; + ti,davinci-ecc-bits = <1>; + + nand-bus-width= <16>; + nand-on-flash-bbt; + + /* +* LCDK original partitions: +* 0x-0x0002 : "u-boot env" +* 0x0002-0x000a : "u-boot" +* 0x000a-0x002a : "kernel" +* 0x002a-0x2000 : "filesystem" +* +* The 1st NAND block being guaranted to be valid w/o ECC (> 1k cycles), +* it makes a perfect candidate as an SPL for the BootROM to jump to. +* However the OMAP-L132/L138 Bootloader doc SPRAB41E reads: +* "To boot from NAND Flash, the AIS should be written to NAND block 1 +* (NAND block 0 is not used by default)", which matches the LCDK +* original partitioning. +* Also, the LCDK ships with only the u-boot partition provisioned and +* boots on it in its default configuration while using the MMC for the +* kernel and rootfs, so preserve that one as is for now. +* [1]: Ensuring for example that U-Boot LCDK SPL can handle it properly +* and
[PATCH 0/4] Add DT support for NAND to LCDK
Hi,

This does not follow the approach of the current da8xx boards; instead it
uses the more generic and DT-friendly ti-aemif memory driver. I can do the
same for the da850-evm and retire the dts nand_cs3 instances.

Karl Beldan (4):
  memory: ti-aemif: Get a named clock rather than an unnamed one
  ARM: dts: da850: Add an aemif node
  ARM: dts: da850-lcdk: Add NAND to DT
  ARM: davinci_all_defconfig: Enable AEMIF as a module

 arch/arm/boot/dts/da850-lcdk.dts       | 108 +
 arch/arm/boot/dts/da850.dtsi           |  10 +++
 arch/arm/configs/davinci_all_defconfig |   2 +
 drivers/memory/ti-aemif.c              |   2 +-
 4 files changed, 121 insertions(+), 1 deletion(-)
--
2.9.2
[PATCH 4/4] ARM: davinci_all_defconfig: Enable AEMIF as a module
This enables the use of the memory/ti-aemif.c driver. ATM most davinci
boards use the mach-davinci aemif code, which gets in the way of
genericity and proper DT boot.

Signed-off-by: Karl Beldan
---
 arch/arm/configs/davinci_all_defconfig | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/configs/davinci_all_defconfig b/arch/arm/configs/davinci_all_defconfig
index d037d3d..2802897 100644
--- a/arch/arm/configs/davinci_all_defconfig
+++ b/arch/arm/configs/davinci_all_defconfig
@@ -192,6 +192,8 @@ CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_OMAP=m
 CONFIG_DMADEVICES=y
 CONFIG_TI_EDMA=y
+CONFIG_MEMORY=y
+CONFIG_TI_AEMIF=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 CONFIG_XFS_FS=m
--
2.9.2
[PATCH 1/4] memory: ti-aemif: Get a named clock rather than an unnamed one
Many davinci boards (da830 and da850 families) don't have their clocks
in DT yet and won't be successful in getting an unnamed aemif clock.
Also the sole current users of ti-aemif (keystone boards) use 'aemif'
as the clock-name of their aemif device clock and should remain
unaffected.

Signed-off-by: Karl Beldan
---
 drivers/memory/ti-aemif.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c
index a579a0f..c251fc8 100644
--- a/drivers/memory/ti-aemif.c
+++ b/drivers/memory/ti-aemif.c
@@ -345,7 +345,7 @@ static int aemif_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, aemif);
 
-	aemif->clk = devm_clk_get(dev, NULL);
+	aemif->clk = devm_clk_get(dev, "aemif");
 	if (IS_ERR(aemif->clk)) {
 		dev_err(dev, "cannot get clock 'aemif'\n");
 		return PTR_ERR(aemif->clk);
--
2.9.2
[PATCH 2/4] ARM: dts: da850: Add an aemif node
Currently the davinci da8xx boards use the mach-davinci aemif code.
Instantiating an aemif node in the DT makes it possible to use the
ti-aemif memory driver and is another step towards better DT support.
It will also allow the emif timings to be passed properly via DT.

Signed-off-by: Karl Beldan
---
 arch/arm/boot/dts/da850.dtsi | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi
index bc10e7e..f62928c 100644
--- a/arch/arm/boot/dts/da850.dtsi
+++ b/arch/arm/boot/dts/da850.dtsi
@@ -411,6 +411,16 @@
 			dma-names = "tx", "rx";
 		};
 	};
+	aemif: aemif@6800 {
+		compatible = "ti,da850-aemif";
+		#address-cells = <2>;
+		#size-cells = <1>;
+
+		reg = <0x6800 0x8000>;
+		ranges = <0 0 0x6000 0x0800
+			  1 0 0x6800 0x8000>;
+		status = "disabled";
+	};
 	nand_cs3@6200 {
 		compatible = "ti,davinci-nand";
 		reg = <0x6200 0x807ff
--
2.9.2
Re: [PATCH 3.4 140/176] lib/checksum.c: fix carry in csum_tcpudp_nofold
On Thu, Apr 09, 2015 at 04:46:28PM +0800, l...@kernel.org wrote: > From: karl beldan > > 3.4.107-rc1 review patch. If anyone has any objections, please let me know. > > -- > > > commit 150ae0e94634714b23919f0c333fee28a5b199d5 upstream. > Hi Zefan, The above patch introduced a build error for some archs, please consider applying 9ce357795ef208faa0d59894d9d119a7434e37f3 "lib/checksum.c: fix build for generic csum_tcpudp_nofold" as well. Karl -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH 3.12 065/122] lib/checksum.c: fix carry in csum_tcpudp_nofold
On Wed, Feb 18, 2015 at 09:40:23AM +, David Laight wrote: > From: Karl Beldan > > On Tue, Feb 17, 2015 at 12:04:22PM +, David Laight wrote: > > > > +static inline u32 from64to32(u64 x) > > > > +{ > > > > + /* add up 32-bit and 32-bit for 32+c bit */ > > > > + x = (x & 0x) + (x >> 32); > > > > + /* add up carry.. */ > > > > + x = (x & 0x) + (x >> 32); > > > > + return (u32)x; > > > > +} > > > > > > As a matter of interest, does the compiler optimise away the > > > second (x & 0x) ? > > > The code could just be: > > > x = (x & 0x) + (x >> 32); > > > return x + (x >> 32); > > > > > > > On my side, from what I've seen so far, your version results in better > > assembly, esp. with clang, but my first version > > http://article.gmane.org/gmane.linux.kernel/1875407: > > x += (x << 32) + (x >> 32); > > return (__force __wsum)(x >> 32); > > resulted in even better assembly, I just verified with gcc/clang, > > x86_64/ARM and -O1,2,3. > > The latter looks to have a shorter dependency chain as well. > Although I'd definitely include a comment saying that it is equivalent > to the two lines in the current patch. > > Does either compiler manage to use a rotate for the two shifts? > Using '(x << 32) | (x >> 32)' might convince it to do so. > That would reduce it to three 'real' instructions and a register rename. > gcc and clang rotate for tile (just checked gcc) and x86_64, not for arm (and IMHO rightly so). Both '|' and '+' yielded the same asm for those 3 archs. Karl
Re: [PATCH 3.12 065/122] lib/checksum.c: fix carry in csum_tcpudp_nofold
On Tue, Feb 17, 2015 at 12:04:22PM +, David Laight wrote: > > +static inline u32 from64to32(u64 x) > > +{ > > + /* add up 32-bit and 32-bit for 32+c bit */ > > + x = (x & 0x) + (x >> 32); > > + /* add up carry.. */ > > + x = (x & 0x) + (x >> 32); > > + return (u32)x; > > +} > > As a matter of interest, does the compiler optimise away the > second (x & 0x) ? > The code could just be: > x = (x & 0x) + (x >> 32); > return x + (x >> 32); > On my side, from what I've seen so far, your version results in better assembly, esp. with clang, but my first version http://article.gmane.org/gmane.linux.kernel/1875407: x += (x << 32) + (x >> 32); return (__force __wsum)(x >> 32); resulted in even better assembly; I just verified with gcc/clang, x86_64/ARM and -O1,2,3. Karl
[PATCH] lib/checksum.c: fix build for generic csum_tcpudp_nofold
Fixed commit added from64to32 under _#ifndef do_csum_ but used it under
_#ifndef csum_tcpudp_nofold_, breaking some builds (Fengguang's robot
reported TILEGX's).

Move from64to32 under the latter.

Fixes: 150ae0e94634 ("lib/checksum.c: fix carry in csum_tcpudp_nofold")
Reported-by: kbuild test robot
Signed-off-by: Karl Beldan
Cc: Eric Dumazet
Cc: David S. Miller
---
 lib/checksum.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/lib/checksum.c b/lib/checksum.c
index fcf3894..8b39e86 100644
--- a/lib/checksum.c
+++ b/lib/checksum.c
@@ -47,15 +47,6 @@ static inline unsigned short from32to16(unsigned int x)
 	return x;
 }
 
-static inline u32 from64to32(u64 x)
-{
-	/* add up 32-bit and 32-bit for 32+c bit */
-	x = (x & 0xffffffff) + (x >> 32);
-	/* add up carry.. */
-	x = (x & 0xffffffff) + (x >> 32);
-	return (u32)x;
-}
-
 static unsigned int do_csum(const unsigned char *buff, int len)
 {
 	int odd;
@@ -190,6 +181,15 @@ csum_partial_copy(const void *src, void *dst, int len, __wsum sum)
 EXPORT_SYMBOL(csum_partial_copy);
 
 #ifndef csum_tcpudp_nofold
+static inline u32 from64to32(u64 x)
+{
+	/* add up 32-bit and 32-bit for 32+c bit */
+	x = (x & 0xffffffff) + (x >> 32);
+	/* add up carry.. */
+	x = (x & 0xffffffff) + (x >> 32);
+	return (u32)x;
+}
+
 __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
 			unsigned short len,
 			unsigned short proto,
--
2.0.1
Re: [PATCH v2] lib/checksum.c: fix carry in csum_tcpudp_nofold
On Wed, Jan 28, 2015 at 10:32:49PM -0800, David Miller wrote: > From: Karl Beldan > Date: Wed, 28 Jan 2015 10:58:11 +0100 > > > The carry from the 64->32bits folding was dropped, e.g with: > > saddr=0xFFFFFFFF daddr=0xFF0000FF len=0xFFFF proto=0 sum=1, > > csum_tcpudp_nofold returned 0 instead of 1. > > > > Signed-off-by: Karl Beldan > > Applied, thanks. Fengguang's robot reported I managed to break the build; this occurs when using a non-generic do_csum together with the generic csum_tcpudp_nofold. I guess a v3 is irrelevant now, should I send another patch for the build fix only? Sorry for that. Karl
[PATCH v2] lib/checksum.c: fix carry in csum_tcpudp_nofold
The carry from the 64->32bits folding was dropped, e.g with: saddr=0xFFFFFFFF daddr=0xFF0000FF len=0xFFFF proto=0 sum=1, csum_tcpudp_nofold returned 0 instead of 1. Signed-off-by: Karl Beldan Cc: Al Viro Cc: Eric Dumazet Cc: Arnd Bergmann Cc: Mike Frysinger Cc: net...@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: sta...@vger.kernel.org --- lib/checksum.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/lib/checksum.c b/lib/checksum.c index 129775e..fcf3894 100644 --- a/lib/checksum.c +++ b/lib/checksum.c @@ -47,6 +47,15 @@ static inline unsigned short from32to16(unsigned int x) return x; } +static inline u32 from64to32(u64 x) +{ + /* add up 32-bit and 32-bit for 32+c bit */ + x = (x & 0xffffffff) + (x >> 32); + /* add up carry.. */ + x = (x & 0xffffffff) + (x >> 32); + return (u32)x; +} + static unsigned int do_csum(const unsigned char *buff, int len) { int odd; @@ -195,8 +204,7 @@ __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr, #else s += (proto + len) << 8; #endif - s += (s >> 32); - return (__force __wsum)s; + return (__force __wsum)from64to32(s); } EXPORT_SYMBOL(csum_tcpudp_nofold); #endif -- 2.0.1
Re: [PATCH] lib/checksum.c: fix carry in csum_tcpudp_nofold
On Tue, Jan 27, 2015 at 10:03:32PM +0000, Al Viro wrote: > On Tue, Jan 27, 2015 at 04:25:16PM +0100, Karl Beldan wrote: > > The carry from the 64->32bits folding was dropped, e.g with: > > saddr=0xFFFFFFFF daddr=0xFF0000FF len=0xFFFF proto=0 sum=1 > > > > Signed-off-by: Karl Beldan > > Cc: Mike Frysinger > > Cc: Arnd Bergmann > > Cc: linux-kernel@vger.kernel.org > > Cc: Stable > > --- > > lib/checksum.c | 4 ++-- > > 1 file changed, 2 insertions(+), 2 deletions(-) > > > > diff --git a/lib/checksum.c b/lib/checksum.c > > index 129775e..4b5adf2 100644 > > --- a/lib/checksum.c > > +++ b/lib/checksum.c > > @@ -195,8 +195,8 @@ __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr, > > #else > > s += (proto + len) << 8; > > #endif > > - s += (s >> 32); > > - return (__force __wsum)s; > > + s += (s << 32) + (s >> 32); > > + return (__force __wsum)(s >> 32); > > Umm... I _think_ it's correct, but it needs a better commit message. AFAICS, what we have is that s is guaranteed to be (a << 32) + b, with a being small. What we want is something congruent to a + b modulo 0xffff. And yes, in case when a + b >= 2^32, the original variant fails - it yields a + b - 2^32, which is one less than what's needed. New one results first in (a + b)(2^32+1) mod 2^64, then that divided by 2^32. If a + b <= 2^32 - 1, the first product is less than 2^64 and dividing it by 2^32 yields a + b. If a + b = 2^32 + c, c is guaranteed to be small and we first get 2^32 * c + 2^32 + 1, then c + 1, i.e. a + b - 0xffffffff, i.e. a + b - 0x10001 * 0xffff, so the congruence holds in all cases. > > IOW, I think the fix is correct, but it really needs analysis in the commit message. My take on this was "somewhat" simpler: s = a31..a0 b31..b0 = (a << 32) + b, as you put it. Here, however, I don't assume that a is "small"; instead I assume the 64-bit accumulator has never overflowed, which is trivial to verify since we only add three 32-bit values and two 16-bit values into a 64-bit one. 
Now we just want (a + b + carry(a + b)) % 2^32, and here I assume (a + b + carry(a + b)) % 2^32 == (a + b) % 2^32 + carry(a + b), I guess this is the trick, and it is easy to convince oneself of it with: 0xffff + 0xffff == 0x1fffe ==> ((u32)-1 + (u32)-1 + 1) % 2^32 == 0xfffffffe % 2^32 + 1 The carry pushed out on the MSb side by the s += addition gets pushed back in on the LSb side of the upper 32 bits, and this carry cannot make the upper side overflow. If this explanation is acceptable, I can reword the commit message with it. Sorry if my initial commit log lacked details, and thanks for your detailed input. Karl
[PATCH] lib/checksum.c: fix carry in csum_tcpudp_nofold
The carry from the 64->32bits folding was dropped, e.g with: saddr=0xFFFFFFFF daddr=0xFF0000FF len=0xFFFF proto=0 sum=1 Signed-off-by: Karl Beldan Cc: Mike Frysinger Cc: Arnd Bergmann Cc: linux-kernel@vger.kernel.org Cc: Stable --- lib/checksum.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/lib/checksum.c b/lib/checksum.c index 129775e..4b5adf2 100644 --- a/lib/checksum.c +++ b/lib/checksum.c @@ -195,8 +195,8 @@ __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr, #else s += (proto + len) << 8; #endif - s += (s >> 32); - return (__force __wsum)s; + s += (s << 32) + (s >> 32); + return (__force __wsum)(s >> 32); } EXPORT_SYMBOL(csum_tcpudp_nofold); #endif -- 2.0.1
Re: About dma_sync_single_for_{cpu,device}
On 7/31/12, Russell King - ARM Linux wrote: > On Tue, Jul 31, 2012 at 09:31:13PM +0200, Karl Beldan wrote: >> On 7/31/12, Russell King - ARM Linux wrote: >> > On Tue, Jul 31, 2012 at 08:45:57AM +0200, Karl Beldan wrote: >> >> I was expecting the following to work: >> >> addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); >> >> dma_sync_single_for_device(dev, buffer, pattern_size, >> >> DMA_FROM_DEVICE); >> >> dev_send(buffer); >> >> // wait for irq (don't peek in the buffer) ... got irq >> >> dma_sync_single_for_cpu(dev, buffer, pattern_size, DMA_FROM_DEVICE); >> >> if (!xfer_done(buffer)) // not RAM value >> >> dma_sync_single_for_device(dev, buffer, pattern_size, >> >> DMA_FROM_DEVICE); >> >> [...] >> > >> >> Hi Russell, >> >> >> > First point is that you clearly do not understand the DMA API at all. >> > The >> > DMA API has the idea of buffer ownership. Only the owner may access >> > the >> > buffer: >> > >> Are you saying that this scenario does not work ? >> We are taking some liberties with the DMA API, we're more using some >> of its funcs rather than _using_ it ;). >> The question was not whether this was a proper usage of the API, but >> why that scenario would not lead to the expected results .. and now >> I've found the culprit peek I am happy. > > If you abuse the API don't expect your stuff to work in future kernel > versions. > Of course. > It seems that the overall tone of your reply is "what we have now works, > we don't care if it's correct, sod you." > Not at all : { On 7/31/12, Karl Beldan wrote: > I might use something different in a not so distant future, yet, for > the time being, this way of doing as its advantages. } and during this time I might stick to the API. 
I am not comfortable telling people how they should take things, especially when I asked for their help, all the more when they put effort into the exchange as experts on the matter; still, please do not assume I did not care for your advice, which I value highly, as, needless to say, does the community. > Fine, I won't spend any more time on this. Just don't ever think about > merging it into mainline, thanks. > Submitting for merge while taking such liberties .. that I would not dare ;) this really was a down-to-earth technical question, not the disguised start of a merge request. I do think that taking such liberties and feeling out the limits before settling on a super-safe API is not a bad thing, e.g. it can flush out nasty hidden bugs earlier; it is often beneficial, to me at least. Thanks for your feedback, Karl
Re: About dma_sync_single_for_{cpu,device}
On 7/31/12, Russell King - ARM Linux wrote: > On Tue, Jul 31, 2012 at 08:45:57AM +0200, Karl Beldan wrote: >> I was expecting the following to work: >> addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); >> dma_sync_single_for_device(dev, buffer, pattern_size, DMA_FROM_DEVICE); >> dev_send(buffer); >> // wait for irq (don't peek in the buffer) ... got irq >> dma_sync_single_for_cpu(dev, buffer, pattern_size, DMA_FROM_DEVICE); >> if (!xfer_done(buffer)) // not RAM value >> dma_sync_single_for_device(dev, buffer, pattern_size, >> DMA_FROM_DEVICE); >> [...] > Hi Russell, > First point is that you clearly do not understand the DMA API at all. The > DMA API has the idea of buffer ownership. Only the owner may access the > buffer: > Are you saying that this scenario does not work ? We are taking some liberties with the DMA API, we're more using some of its funcs rather than _using_ it ;). The question was not whether this was a proper usage of the API, but why that scenario would not lead to the expected results .. and now I've found the culprit peek I am happy. [...] > So, there is absolutely no noeed what so ever to follow dma_map_single() > with dma_sync_single_for_device(). > Cleaning a wide address range while invalidating a small one ? [...] > Fourthly, remember that the CPU deals with cache lines, and dirty cache > lines may be written back in their _entirety_ - which means that data > DMA'd from a device in the same cache line that you've modified via the > CPU will not work (either the data in the cache line has to be invalidated > and therefore the CPU update discarded, or the cache line has to be flushed > back to RAM and the DMA'd data is overwritten.) Hence why the buffer > ownership rules are extremely important. > Of course. > The solution for this fourth point is to use coherent DMA memory for things > like ring buffers and the like, which does not suffer from this. 
> I might use something different in a not so distant future, yet, for the time being, this way of doing has its advantages. [...] > Even with that comment, the idea of buffer ownership must be preserved by > drivers, and they must follow that rule. The fact that some ARM CPU > do not respect the ownership entirely is worked around inside the DMA API > and is of no interest to drivers. Feroceon is not one CPU which does this > though. > It was kind of a last resort explanation for a cache line filled out of the blue .. before I spotted the culprit peek this morning. Thanks for your input, Karl
Re: About dma_sync_single_for_{cpu,device}
On Tue, Jul 31, 2012 at 09:34:01AM +0200, Clemens Ladisch wrote: > Karl Beldan wrote: > > On 7/31/12, Clemens Ladisch wrote: > >> Karl Beldan wrote: > >>> To tx a chunk of data from the SoC => network device, we: > >>> - prepare a buffer with a leading header embedding a pattern, > >>> - trigger the xfer and wait for an irq > >>> // The device updates the pattern and then triggers an irq > >>> - upon irq we check the pattern for the xfer completion > >>> > >>> I was expecting the following to work: > >>> addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); > >> > >> If both the CPU and the device write to the buffer, you must use > >> DMA_BIDIRECTIONAL. > > > > This does not work (tested): seems to me BIDIRECTIONAL would just > > add invalidate, and invalidate before the ram has been updated, as > > stated, does not work. > > Please show the exact sequence of dma_* calls, and also show when and > how the CPU and the device access the buffer. > Hmm, so I just spotted a line where we peek in the buffer after invalidating .. cannot believe I missed it .. so sorry for the noise .. now it's working. I felt I would find the culprit right after posting ;) Thanks Clemens! -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: About dma_sync_single_for_{cpu,device}
On 7/31/12, Clemens Ladisch wrote: > Karl Beldan wrote: >> To tx a chunk of data from the SoC => network device, we: >> - prepare a buffer with a leading header embedding a pattern, >> - trigger the xfer and wait for an irq >> // The device updates the pattern and then triggers an irq >> - upon irq we check the pattern for the xfer completion >> >> I was expecting the following to work: >> addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); > > If both the CPU and the device write to the buffer, you must use > DMA_BIDIRECTIONAL. > Hi Clemens, This does not work (tested): seems to me BIDIRECTIONAL would just add invalidate, and invalidate before the ram has been updated, as stated, does not work. In fact the immediately following sync_for_device is intended to cater for what DMA_BIDIRECTIONAL would provide (though it is not implementation agnostic), only invalidating a smaller address range. Regards, Karl
Re: About dma_sync_single_for_{cpu,device}
Hi, (This is an email originally addressed to the linux-kernel mailing-list.) On our board we've got an MV78200 and a network device between which we xfer memory chunks via the ddram with an external dma controller. To handle these xfers we're using the dma API. To tx a chunk of data from the SoC => network device, we : - prepare a buffer with a leading header embedding a pattern, - trigger the xfer and wait for an irq // The device updates the pattern and then triggers an irq - upon irq we check the pattern for the xfer completion I was expecting the following to work: addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); dma_sync_single_for_device(dev, buffer, pattern_size, DMA_FROM_DEVICE); dev_send(buffer); // wait for irq (don't peek in the buffer) ... got irq dma_sync_single_for_cpu(dev, buffer, pattern_size, DMA_FROM_DEVICE); if (!xfer_done(buffer)) // not RAM value dma_sync_single_for_device(dev, buffer, pattern_size, DMA_FROM_DEVICE); [...] But this does not work (the buffer pattern does not reflect the ddram value). On the other hand, the following works: [...] // wait for irq (don't peek in the buffer) ... got irq dma_sync_single_for_device(dev, buffer, pattern_size, DMA_FROM_DEVICE); if (!xfer_done(buffer)) // RAM value [...] Looking at dma-mapping.c:__dma_page_cpu_to_{dev,cpu}() and proc-feroceon.S: feroceon_dma_{,un}map_area this behavior is not surprising. The sync_for_cpu calls the unmap which just invalidates the outer cache while the sync_for_device invalidates both inner and outer. It seems that: - we need to invalidate after the RAM has been updated - we need to invalidate with sync_single_for_device rather than sync_single_for_cpu to check the value Is it correct ? Maybe the following comment in dma-mapping.c explains the situation : /* * The DMA API is built upon the notion of "buffer ownership". A buffer * is either exclusively owned by the CPU (and therefore may be accessed * by it) or exclusively owned by the DMA device. 
These helper functions * represent the transitions between these two ownership states. * * Note, however, that on later ARMs, this notion does not work due to * speculative prefetches. We model our approach on the assumption that * the CPU does do speculative prefetches, which means we clean caches * before transfers and delay cache invalidation until transfer completion. * */ Thanks for your input, Regards, Karl
Re: About dma_sync_single_for_{cpu,device}
On Mon, Jul 30, 2012 at 10:24:01PM +0200, karl.bel...@gmail.com wrote: > I was expecting the following to work: > addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); Sorry, I forgot this (invalidate): dma_sync_single_for_device(dev, buffer, pattern_size, DMA_FROM_DEVICE); > dev_send(buffer); > // wait for irq (don't peek in the buffer) ... got irq > dma_sync_single_for_cpu(dev, buffer, pattern_size, > DMA_FROM_DEVICE); > if (!xfer_done(buffer)) // not RAM value > dma_sync_single_for_device(dev, buffer, pattern_size, > DMA_FROM_DEVICE); > [...] Karl
About dma_sync_single_for_{cpu,device}
Hi, On our board we've got an MV78200 and a network device between which we xfer memory chunks via the ddram with an external dma controller. To handle these xfers we're using the dma API. To tx a chunk of data from the SoC => network device, we: - prepare a buffer with a leading header embedding a pattern, - trigger the xfer and wait for an irq // The device updates the pattern and then triggers an irq - upon irq we check the pattern for the xfer completion I was expecting the following to work: addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); dev_send(buffer); // wait for irq (don't peek in the buffer) ... got irq dma_sync_single_for_cpu(dev, buffer, pattern_size, DMA_FROM_DEVICE); if (!xfer_done(buffer)) // not RAM value dma_sync_single_for_device(dev, buffer, pattern_size, DMA_FROM_DEVICE); [...] But this does not work (the buffer pattern does not reflect the ddram value). On the other hand, the following works: [...] // wait for irq (don't peek in the buffer) ... got irq dma_sync_single_for_device(dev, buffer, pattern_size, DMA_FROM_DEVICE); if (!xfer_done(buffer)) // RAM value [...] Looking at dma-mapping.c:__dma_page_cpu_to_{dev,cpu}() and proc-feroceon.S: feroceon_dma_{,un}map_area this behavior is not surprising. The sync_for_cpu calls the unmap which just invalidates the outer cache while the sync_for_device invalidates both inner and outer. It seems that: - we need to invalidate after the RAM has been updated - we need to invalidate with sync_single_for_device rather than sync_single_for_cpu to check the value Is this correct? Maybe the following comment in dma-mapping.c explains the situation: /* * The DMA API is built upon the notion of "buffer ownership". A buffer * is either exclusively owned by the CPU (and therefore may be accessed * by it) or exclusively owned by the DMA device. These helper functions * represent the transitions between these two ownership states. 
* * Note, however, that on later ARMs, this notion does not work due to * speculative prefetches. We model our approach on the assumption that * the CPU does do speculative prefetches, which means we clean caches * before transfers and delay cache invalidation until transfer completion. * */ Thanks for your input, Regards, Karl