Re: [PATCH] arch/riscv: disable too many harts before pick main boot hart

2019-09-19 Thread Xiang Wang






‐‐‐ Original Message ‐‐‐
On Thursday, September 19, 2019 at 6:25 PM, Paul Walmsley  wrote:

> On Fri, 6 Sep 2019, Xiang Wang wrote:
>
> > From 12300865d1103618c9d4c375f7d7fbe601b6618c Mon Sep 17 00:00:00 2001
> > From: Xiang Wang me...@hardenedlinux.org
> > Date: Fri, 6 Sep 2019 11:56:09 +0800
> > Subject: [PATCH] arch/riscv: disable too many harts before pick main boot 
> > hart
> > These harts with id greater than or equal to CONFIG_NR_CPUS need to be 
> > disabled.
> > But pick the main Hart can choose any one. So, before pick the main hart, 
> > you
> > need to disable the hart with id greater than or equal to CONFIG_NR_CPUS.
> > Signed-off-by: Xiang Wang me...@hardenedlinux.org
>
> Thanks, here's what I'm planning to queue for v5.4-rc1. Please let me
> know ASAP if you want to change the patch description.
>
> -   Paul

No need to change.

>
> From: Xiang Wang me...@hardenedlinux.org
>
>
> Date: Fri, 6 Sep 2019 11:56:09 +0800
> Subject: [PATCH] arch/riscv: disable excess harts before picking main boot 
> hart
>
> Harts with id greater than or equal to CONFIG_NR_CPUS need to be
> disabled. But the kernel can pick any hart as the main hart. So,
> before picking the main hart, the kernel must disable harts with ids
> greater than or equal to CONFIG_NR_CPUS.
>
> Signed-off-by: Xiang Wang me...@hardenedlinux.org
> Reviewed-by: Palmer Dabbelt pal...@sifive.com
> [paul.walms...@sifive.com: updated to apply; cleaned up patch
> description]
> Signed-off-by: Paul Walmsley paul.walms...@sifive.com
>
> arch/riscv/kernel/head.S | 8 +---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
> index 15a9189f91ad..72f89b7590dd 100644
> --- a/arch/riscv/kernel/head.S
> +++ b/arch/riscv/kernel/head.S
> @@ -63,6 +63,11 @@ _start_kernel:
> li t0, SR_FS
> csrc CSR_SSTATUS, t0
>
> +#ifdef CONFIG_SMP
> +   li t0, CONFIG_NR_CPUS
> +   bgeu a0, t0, .Lsecondary_park
> +#endif
> +
> /* Pick one hart to run the main boot sequence */
> la a3, hart_lottery
> li a2, 1
> @@ -154,9 +159,6 @@ relocate:
>
> .Lsecondary_start:
> #ifdef CONFIG_SMP
> -   li a1, CONFIG_NR_CPUS
> -   bgeu a0, a1, .Lsecondary_park
> -
> /* Set trap vector to spin forever to help debug */
> la a3, .Lsecondary_park
> csrw CSR_STVEC, a3
> --
> 2.23.0
>




Re: arch/riscv: disable too many harts before pick main boot hart

2019-09-18 Thread Xiang Wang






‐‐‐ Original Message ‐‐‐
On Saturday, September 14, 2019 at 3:04 AM, Palmer Dabbelt  wrote:

> On Thu, 05 Sep 2019 23:51:15 PDT (-0700), me...@hardenedlinux.org wrote:
>
> > From 12300865d1103618c9d4c375f7d7fbe601b6618c Mon Sep 17 00:00:00 2001
> > From: Xiang Wang me...@hardenedlinux.org
> > Date: Fri, 6 Sep 2019 11:56:09 +0800
> > Subject: [PATCH] arch/riscv: disable too many harts before pick main boot 
> > hart
> > These harts with id greater than or equal to CONFIG_NR_CPUS need to be 
> > disabled.
> > But pick the main Hart can choose any one. So, before pick the main hart, 
> > you
> > need to disable the hart with id greater than or equal to CONFIG_NR_CPUS.
> >
> > Signed-off-by: Xiang Wang me...@hardenedlinux.org
> >
> > --
> >
> > arch/riscv/kernel/head.S | 8 +---
> > 1 file changed, 5 insertions(+), 3 deletions(-)
> > diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
> > index 0f1ba17e476f..cfffea38eb17 100644
> > --- a/arch/riscv/kernel/head.S
> > +++ b/arch/riscv/kernel/head.S
> > @@ -63,6 +63,11 @@ _start_kernel:
> > li t0, SR_FS
> > csrc sstatus, t0
> > +#ifdef CONFIG_SMP
> > +   li t0, CONFIG_NR_CPUS
> > +   bgeu a0, t0, .Lsecondary_park
> > +#endif
> > +
> > /* Pick one hart to run the main boot sequence */
> > la a3, hart_lottery
> > li a2, 1
> > @@ -154,9 +159,6 @@ relocate:
> > .Lsecondary_start:
> > #ifdef CONFIG_SMP
> > -   li a1, CONFIG_NR_CPUS
> > -   bgeu a0, a1, .Lsecondary_park
> > -
> > /* Set trap vector to spin forever to help debug */
> > la a3, .Lsecondary_park
> > csrw CSR_STVEC, a3
>
> It would be better to decouple the hart masks from NR_CPUS, as there's really
> no reason for these to be the same.

That would be a new feature: it needs new state such as a disabled_hart_mask.
This patch only fixes a bug; it does not add a new feature.

>
> Reviewed-by: Palmer Dabbelt pal...@sifive.com




[PATCH] arch/riscv: disable too many harts before pick main boot hart

2019-09-05 Thread Xiang Wang
From 12300865d1103618c9d4c375f7d7fbe601b6618c Mon Sep 17 00:00:00 2001
From: Xiang Wang 
Date: Fri, 6 Sep 2019 11:56:09 +0800
Subject: [PATCH] arch/riscv: disable too many harts before pick main boot hart

Harts with id greater than or equal to CONFIG_NR_CPUS need to be disabled.
But the kernel can pick any hart as the main hart. So, before picking the main
hart, the kernel must disable harts with ids greater than or equal to CONFIG_NR_CPUS.

Signed-off-by: Xiang Wang 
---
 arch/riscv/kernel/head.S | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 0f1ba17e476f..cfffea38eb17 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -63,6 +63,11 @@ _start_kernel:
li t0, SR_FS
csrc sstatus, t0

+#ifdef CONFIG_SMP
+   li t0, CONFIG_NR_CPUS
+   bgeu a0, t0, .Lsecondary_park
+#endif
+
/* Pick one hart to run the main boot sequence */
la a3, hart_lottery
li a2, 1
@@ -154,9 +159,6 @@ relocate:

 .Lsecondary_start:
 #ifdef CONFIG_SMP
-   li a1, CONFIG_NR_CPUS
-   bgeu a0, a1, .Lsecondary_park
-
/* Set trap vector to spin forever to help debug */
la a3, .Lsecondary_park
csrw CSR_STVEC, a3
--
2.20.1









Re: [PATCH 2/2] i2c: designware: enable High-speed mode for pcidrv

2015-10-21 Thread Xiang Wang
2015-10-15 17:40 GMT+08:00 Andy Shevchenko :
> On Thu, 2015-10-15 at 11:32 +0300, Jarkko Nikula wrote:
>> On 10/15/2015 08:46 AM, Xiang Wang wrote:
>> >
>> > In conclusion, we have 2 solutions to set the i2c controller speed
>> > mode (pci driver):
>> > 1) use hardcode value in pci driver
>> > 2) use frequency setting of "i2c device" in ACPI table (more
>> > flexible,
>> > but looks a bit strange)
>> >
>> > Do you have any preference/suggestions for above solutions? Thanks
>>
>> I don't think we can hard code especially the high-speed mode because
>>
>> most typically buses are populated with slower devices.
>>
>> Things are a bit more clear when ACPI provides timing parameters for
>> the
>> bus (for standard and fast speed modes at the moment in
>> i2c-designware-platdrv.c: dw_i2c_acpi_configure()) but still I think
>> the
>> ACPI namespace walk may be needed against potential BIOS
>> misconfigurations. For instance if it provides timing parameters for
>> all
>> speeds but there are devices with lower speed on the same bus.
>>
>> I'd take these timing parameters as configuration data for bus
>> features
>> but actual speed (speed bits in IC_CON register) is defined
>> separately.
>> To me it looks only way to achieve that is to pick slowest device
>> from
>> I2cSerialBus resource descriptors.
>
> Should it (ACPI walk) be done in PCI case as well? If so, then it needs
> to be done up to i2c-core. There you may adjust the bus speed whenever
> slave device is enumerated.
>
I think the "ACPI walk for bus speed" also works for the PCI case, and it
would be good to do it in i2c-core. That way we can get the minimum device
speed for the bus, and I2C bus drivers can use it to set the corresponding
mode in the I2C controller.
Waiting for comments from others.

> For PCI case you have still to have hardcoded values and it should be
> maximum supported by the bus I think. When you have implemented above
> algorithm you may do this safely. Am I missing something?
Agreed. It is safer to set a hardcoded maximum supported speed of the bus
for the PCI case. The I2C bus driver can then use
MIN(speed_get_by_ACPI_walk, hardcoded_max_supported_speed) as the bus speed.

>
> --
> Andy Shevchenko 
> Intel Finland Oy



-- 
Regards,
Xiang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 2/2] i2c: designware: enable High-speed mode for pcidrv

2015-10-14 Thread Xiang Wang
2015-10-12 16:31 GMT+08:00 Andy Shevchenko :
> On Mon, 2015-10-12 at 15:41 +0800, Xiang Wang wrote:
>> Hi, Andy
>> Thanks for your comments.
>>
>> [Andy] Don't see a relationship between PCI driver and this ACPI
>> stuff.
>> Although this is a pci driver, we may enumerate the i2c devices from
>> DSDT table while i2c controllers are enumerated via PCI. In this
>> scenario, in DSDT, there are descriptions of i2c devices as well as
>> i2c controllers. The ACPI node of i2c controllers are bond to i2c PCI
>> devices via pci-acpi glue.
>> So if we want to determine the i2c devices' settings (e.g. bus
>> speed),
>> we should leverage ACPI.
>>
>> Above is also the reason why the ACPI stuff is put in
>> i2c-designware-core: i2c_dw_acpi_setup_speed can be used by both plat
>> and pci driver. Thanks
>
> Wait, first of all let's divide parameters to two groups: a) to be
> applied to host driver, and b) to be applied to slave devices.
>
> The drivers/i2c/busses/i2c-designware* is about host driver parameters.
>
> Thus, PCI driver comes with hardcoded values since it's enumerated via
> PCI, and platform driver utilizes ACPI values.
>
> For slave devices everything is done in i2c-core.c.
>
> So, what exactly you are trying to enhance?
I agree with your analysis: the bus speed mode is a host-controller
parameter, and we can hardcode it in the PCI driver. However:
1. The bus speed mode is a bit different from the other parameters: it can
be derived from the speed settings of the I2C devices in ACPI
(I2cSerialBus). E.g. if an I2C device runs at 3 MHz, we use High-speed mode
for that bus.
2. If we hardcode the speed setting in the PCI driver, we lose flexibility:
a high-speed device may be connected to this I2C bus on one board, but to
another I2C bus on a different board design.

In this patch, we enumerate the I2C devices in the ACPI table to determine
the frequency setting, then set the corresponding speed mode for the I2C
controller. The ACPI code is common to the PCI and platform drivers, so if
the board design changes, only the BIOS needs to change.

In conclusion, we have two ways to set the I2C controller speed mode in the
PCI driver:
1) use a hardcoded value in the PCI driver;
2) use the frequency settings of the I2C devices in the ACPI table (more
flexible, but it looks a bit unusual).

Do you have any preference or suggestions? Thanks
>
>>
>> 2015-10-09 17:31 GMT+08:00 Andy Shevchenko <
>> andriy.shevche...@linux.intel.com>:
>> > On Fri, 2015-10-09 at 16:47 +0800, wangx...@gmail.com wrote:
>> > > From: Xiang Wang 
>> > >
>> > > 1. Support setting hs_hcnt and hs_lcnt
>> > > 2. Get bus speed mode from ACPI companion of the
>> > > i2c controller.
>> > >
>> > > Signed-off-by: Xiang Wang 
>> > > ---
>> > >  drivers/i2c/busses/i2c-designware-pcidrv.c | 7 +++
>> > >  1 file changed, 7 insertions(+)
>> > >
>> > > diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c
>> > > b/drivers/i2c/busses/i2c-designware-pcidrv.c
>> > > index 6643d2d..0f4c0c4 100644
>> > > --- a/drivers/i2c/busses/i2c-designware-pcidrv.c
>> > > +++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
>> > > @@ -54,8 +54,10 @@ enum dw_pci_ctl_id_t {
>> > >  struct dw_scl_sda_cfg {
>> > >   u32 ss_hcnt;
>> > >   u32 fs_hcnt;
>> > > + u32 hs_hcnt;
>> > >   u32 ss_lcnt;
>> > >   u32 fs_lcnt;
>> > > + u32 hs_lcnt;
>> > >   u32 sda_hold;
>> > >  };
>> > >
>> > > @@ -237,8 +239,10 @@ static int i2c_dw_pci_probe(struct pci_dev
>> > > *pdev,
>> > >   cfg = controller->scl_sda_cfg;
>> > >   dev->ss_hcnt = cfg->ss_hcnt;
>> > >   dev->fs_hcnt = cfg->fs_hcnt;
>> > > + dev->hs_hcnt = cfg->hs_hcnt;
>> > >   dev->ss_lcnt = cfg->ss_lcnt;
>> > >   dev->fs_lcnt = cfg->fs_lcnt;
>> > > + dev->hs_lcnt = cfg->hs_lcnt;
>> > >   dev->sda_hold_time = cfg->sda_hold;
>> > >   }
>> > >
>> > > @@ -246,6 +250,9 @@ static int i2c_dw_pci_probe(struct pci_dev
>> > > *pdev,
>> > >
>> > >   dev->tx_fifo_depth = controller->tx_fifo_depth;
>> > >   dev->rx_fifo_depth = controller->rx_fifo_depth;
>> > > +
>> > > + i2c_dw_acpi_setup_speed(&pdev->dev, dev);
>> >
>> > Don't see a relationship between PCI driver and this ACPI stuff.
>> >
>> > > +
>> > >   r = i2c_dw_init(dev);
>> > >   if (r)
>> > >   return r;
>> >
>> > --
>> > Andy Shevchenko 
>> > Intel Finland Oy
>>
>>
>>
>
> --
> Andy Shevchenko 
> Intel Finland Oy



-- 
Regards,
Xiang


Re: [PATCH 2/2] i2c: designware: enable High-speed mode for pcidrv

2015-10-12 Thread Xiang Wang
Hi, Andy
Thanks for your comments.

[Andy] Don't see a relationship between PCI driver and this ACPI stuff.
Although this is a PCI driver, we may enumerate the I2C devices from the
DSDT table while the I2C controllers are enumerated via PCI. In this
scenario the DSDT describes both the I2C devices and the I2C controllers,
and the ACPI nodes of the I2C controllers are bound to the I2C PCI devices
via the pci-acpi glue. So if we want to determine the I2C devices'
settings (e.g. bus speed), we should leverage ACPI.

That is also why the ACPI code is placed in i2c-designware-core:
i2c_dw_acpi_setup_speed can be used by both the platform and the PCI
driver. Thanks

2015-10-09 17:31 GMT+08:00 Andy Shevchenko :
> On Fri, 2015-10-09 at 16:47 +0800, wangx...@gmail.com wrote:
>> From: Xiang Wang 
>>
>> 1. Support setting hs_hcnt and hs_lcnt
>> 2. Get bus speed mode from ACPI companion of the
>> i2c controller.
>>
>> Signed-off-by: Xiang Wang 
>> ---
>>  drivers/i2c/busses/i2c-designware-pcidrv.c | 7 +++
>>  1 file changed, 7 insertions(+)
>>
>> diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c
>> b/drivers/i2c/busses/i2c-designware-pcidrv.c
>> index 6643d2d..0f4c0c4 100644
>> --- a/drivers/i2c/busses/i2c-designware-pcidrv.c
>> +++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
>> @@ -54,8 +54,10 @@ enum dw_pci_ctl_id_t {
>>  struct dw_scl_sda_cfg {
>>   u32 ss_hcnt;
>>   u32 fs_hcnt;
>> + u32 hs_hcnt;
>>   u32 ss_lcnt;
>>   u32 fs_lcnt;
>> + u32 hs_lcnt;
>>   u32 sda_hold;
>>  };
>>
>> @@ -237,8 +239,10 @@ static int i2c_dw_pci_probe(struct pci_dev
>> *pdev,
>>   cfg = controller->scl_sda_cfg;
>>   dev->ss_hcnt = cfg->ss_hcnt;
>>   dev->fs_hcnt = cfg->fs_hcnt;
>> + dev->hs_hcnt = cfg->hs_hcnt;
>>   dev->ss_lcnt = cfg->ss_lcnt;
>>   dev->fs_lcnt = cfg->fs_lcnt;
>> + dev->hs_lcnt = cfg->hs_lcnt;
>>   dev->sda_hold_time = cfg->sda_hold;
>>   }
>>
>> @@ -246,6 +250,9 @@ static int i2c_dw_pci_probe(struct pci_dev *pdev,
>>
>>   dev->tx_fifo_depth = controller->tx_fifo_depth;
>>   dev->rx_fifo_depth = controller->rx_fifo_depth;
>> +
>> + i2c_dw_acpi_setup_speed(&pdev->dev, dev);
>
> Don't see a relationship between PCI driver and this ACPI stuff.
>
>> +
>>   r = i2c_dw_init(dev);
>>   if (r)
>>   return r;
>
> --
> Andy Shevchenko 
> Intel Finland Oy



-- 
Regards,
Xiang


[PATCH] dmaengine: increase privatecnt in dma_get_any_slave_channel

2014-08-21 Thread Xiang Wang
From: Xiang Wang 

A DMA channel can go through the following calling sequence:
1. register dma device
dma_async_device_register
  -> if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
device->privatecnt++;

2. request channel
dma_request_slave_channel
  -> of_dma_request_slave_channel
-> of_dma_xlate
  -> dma_get_any_slave_channel in dma drivers
note that device->privatecnt is not changed during this.

3. release channel
dma_release_channel
  -> if (--chan->device->privatecnt == 0)
dma_cap_clear(DMA_PRIVATE,
chan->device->cap_mask);

So if we request a channel and then release it, DMA_PRIVATE is cleared
unexpectedly. This patch increases privatecnt in step 2, just as
__dma_request_channel does.

Signed-off-by: Xiang Wang 
---
 drivers/dma/dmaengine.c |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index ed610b4..01bb372 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -557,6 +557,9 @@ struct dma_chan *dma_get_any_slave_channel(struct 
dma_device *device)
}
}
 
+   if (chan && dma_has_cap(DMA_PRIVATE, device->cap_mask))
+   device->privatecnt++;
+
mutex_unlock(&dma_list_mutex);
 
return chan;
-- 
1.7.5.4



[PATCH] mmc: sdhci-pxav3: fix error handling of sdhci_add_host

2014-07-16 Thread Xiang Wang
From: Xiang Wang 

Commit 0dcaa2499b7d111bd70da5b0976c34210c850fb3 improved the error handling
of sdhci_add_host. However, the "err_of_parse" and "err_cd_req" labels
should be placed after "pm_runtime_disable(&pdev->dev)", because the
failures they handle occur before runtime PM is enabled.

Signed-off-by: Xiang Wang 
---
 drivers/mmc/host/sdhci-pxav3.c |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/mmc/host/sdhci-pxav3.c b/drivers/mmc/host/sdhci-pxav3.c
index 2fd73b3..0644655 100644
--- a/drivers/mmc/host/sdhci-pxav3.c
+++ b/drivers/mmc/host/sdhci-pxav3.c
@@ -380,11 +380,11 @@ static int sdhci_pxav3_probe(struct platform_device *pdev)
 
return 0;
 
-err_of_parse:
-err_cd_req:
 err_add_host:
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
+err_of_parse:
+err_cd_req:
clk_disable_unprepare(clk);
clk_put(clk);
 err_clk_get:
-- 
1.7.5.4



Re: [PATCH v2] dma: mmp_pdma: fix a memory alloc error

2013-07-31 Thread Xiang Wang
2013/7/29 Vinod Koul :
> On Thu, Jul 25, 2013 at 10:22:50AM +0800, Xiang Wang wrote:
>
> Btw its generally a good idea to remove the parts not required for further
> discussion
>
>> Would you please explain a little more about why we should use
>> GFP_NOWAIT here? This memory is not dedicated for DMA controller. Do
>> we have special reasons to use GFP_NOWAIT? Thanks!
> There are few reasons
> - the dma requests can be triggered from atomic context
> - so for above you can argue to use the GFP_ATOMIC and the guideline for dma
>   drivers [1] was discussed briefly
>
> ~Vinod
> [1]: http://lkml.indiana.edu/hypermail/linux/kernel/0911.1/02316.html
>
> --
Hi, Vinod
Thanks for your explanation. But I want to double-check: the memory
allocation in my patch is in the probe function; does it still need to use
GFP_NOWAIT? Thanks!

-- 
Regards,
Xiang


Re: [PATCH v2] dma: mmp_pdma: fix a memory alloc error

2013-07-24 Thread Xiang Wang
2013/7/5 Vinod Koul :
> On Tue, Jun 18, 2013 at 02:21:58PM +0800, Xiang Wang wrote:
>> From: Xiang Wang 
>>
>> pdev->phy is of type "struct mmp_pdma_phy *". But when
>> allocating memory for it, "struct mmp_pdma_chan" is used
>> by mistake.
>>
>> Signed-off-by: Xiang Wang 
>> ---
>>  drivers/dma/mmp_pdma.c |2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
>> index c26699f..18ac57f 100644
>> --- a/drivers/dma/mmp_pdma.c
>> +++ b/drivers/dma/mmp_pdma.c
>> @@ -801,7 +801,7 @@ static int mmp_pdma_probe(struct platform_device *op)
>>   }
>>
>>   pdev->phy = devm_kzalloc(pdev->dev,
>> - dma_channels * sizeof(struct mmp_pdma_chan), GFP_KERNEL);
>> + dma_channels * sizeof(*pdev->phy), GFP_KERNEL);
> While at it, can you make the flag GFP_NOWAIT. that is the recommendation for
> dmac drivers
>
> --
> ~Vinod

Hi, Vinod
Would you please explain a little more about why we should use GFP_NOWAIT
here? This memory is not dedicated to the DMA controller. Is there a
special reason to use GFP_NOWAIT? Thanks!

-- 
Regards,
Xiang


Re: [PATCH V3 0/2] dma: mmp_pdma: Fix phy channels not protected issue

2013-07-24 Thread Xiang Wang
Hi, maintainers
What's the status of this patch set (V3)? Sorry for the nudge. Thanks!

2013/6/18 Xiang Wang :
> From: Xiang Wang 
>
> This patch set deals with the issues that 1) phy channels are not protected
> in mmp_pdma. 2) dma request<->channel mapping is not cleared when a phy chan
> is freed.
>
> Xiang Wang (2):
>   dma: mmp_pdma: add protect when alloc/free phy channels
>   dma: mmp_pdma: clear DRCMR when free a phy channel
>
>  drivers/dma/mmp_pdma.c |   48 
> 
>  1 files changed, 32 insertions(+), 16 deletions(-)
>
> --
> 1.7.5.4
>



-- 
Regards,
Xiang


Re: [PATCH V2] dma: mmp_pdma: support for getting residual bytes

2013-07-10 Thread Xiang Wang
2013/7/5 Vinod Koul :
> On Tue, Jun 18, 2013 at 04:41:20PM +0800, Xiang Wang wrote:
>> From: Xiang Wang 
>>
>> In some of our drivers (e.g. UART) we may stop a running DMA
>> before it finishes. So we need APIs to know how many bytes
>> have been transferred.
>>
>> Signed-off-by: Xiang Wang 
>> ---
>>  drivers/dma/mmp_pdma.c |   88 
>> +++
>>  1 files changed, 80 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
>> index c26699f..57cd047 100644
>> --- a/drivers/dma/mmp_pdma.c
>> +++ b/drivers/dma/mmp_pdma.c
>> @@ -104,7 +104,8 @@ struct mmp_pdma_chan {
>>   spinlock_t desc_lock;   /* Descriptor list lock */
>>   struct list_head chain_pending; /* Link descriptors queue for pending 
>> */
>>   struct list_head chain_running; /* Link descriptors queue for running 
>> */
>> - bool idle;  /* channel statue machine */
>> + enum dma_status status; /* channel status machine */
>> + u32 bytes_residue;
>>
>>   struct dma_pool *desc_pool; /* Descriptors pool */
>>  };
>> @@ -270,7 +271,7 @@ static void start_pending_queue(struct mmp_pdma_chan 
>> *chan)
>>   struct mmp_pdma_desc_sw *desc;
>>
>>   /* still in running, irq will start the pending list */
>> - if (!chan->idle) {
>> + if (chan->status == DMA_IN_PROGRESS) {
>>   dev_dbg(chan->dev, "DMA controller still busy\n");
>>   return;
>>   }
>> @@ -307,7 +308,64 @@ static void start_pending_queue(struct mmp_pdma_chan 
>> *chan)
>>*/
>>   set_desc(chan->phy, desc->async_tx.phys);
>>   enable_chan(chan->phy);
>> - chan->idle = false;
>> + chan->status = DMA_IN_PROGRESS;
>> + chan->bytes_residue = 0;
>> +}
>> +
>> +/*
>> + * Get the number of pending bytes. Should be called with desc_lock held
>> + * because we are accessing desc list.
>> + */
>> +static u32 mmp_pdma_get_bytes_residue(struct mmp_pdma_chan *chan)
>> +{
>> + u32 reg, orig_pos, cur_pos, ddadr, residue = 0;
>> + bool running_desc_found = false;
>> + struct mmp_pdma_desc_sw *desc_sw;
>> +
>> + /*
>> +  * When a phy channel is unavailable, maybe it has been freed, return
>> +  * last stored value for safe.
>> +  */
>> + if (!chan->phy)
>> + return chan->bytes_residue;
>> +
>> + reg = (chan->phy->idx << 4) + DDADR;
>> + ddadr = readl_relaxed(chan->phy->base + reg);
>> +
>> + /* iterate over all descriptors to sum up the number of pending bytes 
>> */
> and why?
>
> Residue does not mean sum of all pending bytes across all descriptors 
> submitted
> You need to find the residue of current descriptor only and return
>
Here are my thoughts: we ask the DMA engine to transfer x bytes, and it
creates n descriptors for the transfer, which is transparent to us. So
when we want to know how many bytes are left, the DMA engine should sum
the pending bytes across all of those descriptors.
What's your opinion?
>> + list_for_each_entry(desc_sw, &chan->chain_running, node) {
>> + /* for the case of a running descriptor */
>> + if (desc_sw->desc.ddadr == ddadr && !running_desc_found) {
>> + switch (chan->dir) {
>> + case DMA_DEV_TO_MEM:
>> + case DMA_MEM_TO_MEM:
>> + reg = (chan->phy->idx << 4) + DTADR;
>> + cur_pos = readl_relaxed(chan->phy->base + reg);
>> + orig_pos = desc_sw->desc.dtadr;
>> + break;
>> +
>> + case DMA_MEM_TO_DEV:
>> + reg = (chan->phy->idx << 4) + DSADR;
>> + cur_pos = readl_relaxed(chan->phy->base + reg);
>> + orig_pos = desc_sw->desc.dsadr;
>> + break;
>> +
>> + default:
>> + cur_pos = 0;
>> + orig_pos = 0;
> This makes no sense...
What do you recommend for the default case? Just return 0?
>
>> + }
>> + residue = (u32)(desc_sw->desc.dcmd & DCMD_LENGTH)
>> +

Re: [PATCH V2] dma: mmp_pdma: support for getting residual bytes

2013-07-03 Thread Xiang Wang
Hi, all
Do you have any comments about this patch? Thanks!

2013/6/18 Xiang Wang :
> From: Xiang Wang 
>
> In some of our drivers (e.g. UART) we may stop a running DMA
> before it finishes. So we need APIs to know how many bytes
> have been transferred.
>
> Signed-off-by: Xiang Wang 
> ---
>  drivers/dma/mmp_pdma.c |   88 +++
>  1 files changed, 80 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
> index c26699f..57cd047 100644
> --- a/drivers/dma/mmp_pdma.c
> +++ b/drivers/dma/mmp_pdma.c
> @@ -104,7 +104,8 @@ struct mmp_pdma_chan {
> spinlock_t desc_lock;   /* Descriptor list lock */
> struct list_head chain_pending; /* Link descriptors queue for pending 
> */
> struct list_head chain_running; /* Link descriptors queue for running 
> */
> -   bool idle;  /* channel statue machine */
> +   enum dma_status status; /* channel status machine */
> +   u32 bytes_residue;
>
> struct dma_pool *desc_pool; /* Descriptors pool */
>  };
> @@ -270,7 +271,7 @@ static void start_pending_queue(struct mmp_pdma_chan 
> *chan)
> struct mmp_pdma_desc_sw *desc;
>
> /* still in running, irq will start the pending list */
> -   if (!chan->idle) {
> +   if (chan->status == DMA_IN_PROGRESS) {
> dev_dbg(chan->dev, "DMA controller still busy\n");
> return;
> }
> @@ -307,7 +308,64 @@ static void start_pending_queue(struct mmp_pdma_chan 
> *chan)
>  */
> set_desc(chan->phy, desc->async_tx.phys);
> enable_chan(chan->phy);
> -   chan->idle = false;
> +   chan->status = DMA_IN_PROGRESS;
> +   chan->bytes_residue = 0;
> +}
> +
> +/*
> + * Get the number of pending bytes. Should be called with desc_lock held
> + * because we are accessing desc list.
> + */
> +static u32 mmp_pdma_get_bytes_residue(struct mmp_pdma_chan *chan)
> +{
> +   u32 reg, orig_pos, cur_pos, ddadr, residue = 0;
> +   bool running_desc_found = false;
> +   struct mmp_pdma_desc_sw *desc_sw;
> +
> +   /*
> +* When a phy channel is unavailable, maybe it has been freed, return
> +* last stored value for safe.
> +*/
> +   if (!chan->phy)
> +   return chan->bytes_residue;
> +
> +   reg = (chan->phy->idx << 4) + DDADR;
> +   ddadr = readl_relaxed(chan->phy->base + reg);
> +
> +   /* iterate over all descriptors to sum up the number of pending bytes 
> */
> +   list_for_each_entry(desc_sw, &chan->chain_running, node) {
> +   /* for the case of a running descriptor */
> +   if (desc_sw->desc.ddadr == ddadr && !running_desc_found) {
> +   switch (chan->dir) {
> +   case DMA_DEV_TO_MEM:
> +   case DMA_MEM_TO_MEM:
> +   reg = (chan->phy->idx << 4) + DTADR;
> +   cur_pos = readl_relaxed(chan->phy->base + 
> reg);
> +   orig_pos = desc_sw->desc.dtadr;
> +   break;
> +
> +   case DMA_MEM_TO_DEV:
> +   reg = (chan->phy->idx << 4) + DSADR;
> +   cur_pos = readl_relaxed(chan->phy->base + 
> reg);
> +   orig_pos = desc_sw->desc.dsadr;
> +   break;
> +
> +   default:
> +   cur_pos = 0;
> +   orig_pos = 0;
> +   }
> +   residue = (u32)(desc_sw->desc.dcmd & DCMD_LENGTH)
> +   + orig_pos - cur_pos;
> +   running_desc_found = true;
> +   continue;
> +   }
> +
> +   /* for the case of following un-started descriptors*/
> +   if (running_desc_found)
> +   residue += (u32)(desc_sw->desc.dcmd & DCMD_LENGTH);
> +   }
> +
> +   return residue;
>  }
>
>
> @@ -381,7 +439,7 @@ static int mmp_pdma_alloc_chan_resources(struct dma_chan 
> *dchan)
> chan->phy->vchan = NULL;
> chan->phy = NULL;
> }
> -   chan->idle = true;
> +   chan->status = DMA_SUCCESS;
> chan->dev_addr = 0;
> return 1;
>  }
> @@ -409,7 +467,7 @@ static void mmp_pdma_f

Re: [PATCH v2] dma: mmp_pdma: fix a memory alloc error

2013-07-03 Thread Xiang Wang
Hi,
Do you have any comments about this patch? Have modified according to
Vinod's advice. Thanks!

2013/6/18 Xiang Wang :
> From: Xiang Wang 
>
> pdev->phy is of type "struct mmp_pdma_phy *". But when
> allocating memory for it, "struct mmp_pdma_chan" is used
> by mistake.
>
> Signed-off-by: Xiang Wang 
> ---
>  drivers/dma/mmp_pdma.c |2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
> index c26699f..18ac57f 100644
> --- a/drivers/dma/mmp_pdma.c
> +++ b/drivers/dma/mmp_pdma.c
> @@ -801,7 +801,7 @@ static int mmp_pdma_probe(struct platform_device *op)
> }
>
> pdev->phy = devm_kzalloc(pdev->dev,
> -   dma_channels * sizeof(struct mmp_pdma_chan), GFP_KERNEL);
> +   dma_channels * sizeof(*pdev->phy), GFP_KERNEL);
> if (pdev->phy == NULL)
> return -ENOMEM;
>
> --
> 1.7.5.4
>



-- 
Regards,
Xiang


Re: [PATCH V3 0/2] dma: mmp_pdma: Fix phy channels not protected issue

2013-07-03 Thread Xiang Wang
Hi, all
Do you have any comments about this patch set? Thanks!

2013/6/18 Xiang Wang :
> From: Xiang Wang 
>
> This patch set deals with the issues that 1) phy channels are not protected
> in mmp_pdma. 2) dma request<->channel mapping is not cleared when a phy chan
> is freed.
>
> Xiang Wang (2):
>   dma: mmp_pdma: add protect when alloc/free phy channels
>   dma: mmp_pdma: clear DRCMR when free a phy channel
>
>  drivers/dma/mmp_pdma.c |   48 
> 
>  1 files changed, 32 insertions(+), 16 deletions(-)
>
> --
> 1.7.5.4
>



-- 
Regards,
Xiang


Re: [PATCH] dma: mmp_pdma: support for getting residual bytes

2013-06-18 Thread Xiang Wang
2013/6/17 Vinod Koul :
> On Fri, May 31, 2013 at 04:21:25PM +0800, Xiang Wang wrote:
>> From: Xiang Wang 
>>
>> In some of our drivers (e.g. UART) we may stop a running DMA
>> before it finishes. So we need to know how many bytes have
>> been transferred.
>>
>> Signed-off-by: Xiang Wang 
>> ---
>>  drivers/dma/mmp_pdma.c |   77 
>> +++-
>>  1 files changed, 69 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
>> index 76a8dcf..0c623cc 100644
>> --- a/drivers/dma/mmp_pdma.c
>> +++ b/drivers/dma/mmp_pdma.c
>> @@ -104,7 +104,8 @@ struct mmp_pdma_chan {
>>   spinlock_t desc_lock;   /* Descriptor list lock */
>>   struct list_head chain_pending; /* Link descriptors queue for pending 
>> */
>>   struct list_head chain_running; /* Link descriptors queue for running 
>> */
>> - bool idle;  /* channel statue machine */
>> + enum dma_status status; /* channel status machine */
>> + u32 bytes_residue;
>>
>>   struct dma_pool *desc_pool; /* Descriptors pool */
>>  };
>> @@ -270,7 +271,7 @@ static void start_pending_queue(struct mmp_pdma_chan 
>> *chan)
>>   struct mmp_pdma_desc_sw *desc;
>>
>>   /* still in running, irq will start the pending list */
>> - if (!chan->idle) {
>> + if (chan->status == DMA_IN_PROGRESS) {
>>   dev_dbg(chan->dev, "DMA controller still busy\n");
> why are you not reading the residue and filling it here as well
In patch V2, the user can query the residue bytes on the fly. We only
need to read and save the value at certain key points where the phy
channel is released (its registers will no longer be accessible).
>
>>   return;
>>   }
>> @@ -307,7 +308,55 @@ static void start_pending_queue(struct mmp_pdma_chan 
>> *chan)
>>*/
>>   set_desc(chan->phy, desc->async_tx.phys);
>>   enable_chan(chan->phy);
>> - chan->idle = false;
>> + chan->status = DMA_IN_PROGRESS;
>> + chan->bytes_residue = 0;
>> +}
>> +
>> +/*
>> + * Get the number of bytes untransferred by DMA.
>   ^
> The word which might fit better is pending
Updated in patch V2. Thanks!
>> + */
>> +static u32 mmp_pdma_get_bytes_residue(struct mmp_pdma_chan *chan)
>> +{
>> + u32 reg, orig_pos, cur_pos, residue = 0;
>> + struct mmp_pdma_desc_sw *desc_sw;
>> + struct list_head *list_chain = NULL;
>> +
>> + /*
>> +  * When a phy channel is unavailable, maybe its destroied, return last
> typo^
Updated in patch V2. Thanks!
>> +  * stored value for safe.
>> +  */
>> + if (!chan || !chan->phy)
>> + return chan->bytes_residue;
>> +
>> + if (!list_empty(&chan->chain_running))
>> + list_chain = &chan->chain_running;
>> + else
>> + return 0;
>> +
>> + desc_sw = list_first_entry(list_chain, struct mmp_pdma_desc_sw, node);
>> +
>> + switch (chan->dir) {
>> + case DMA_DEV_TO_MEM:
>> + reg = (chan->phy->idx << 4) + DTADR;
>> + cur_pos = readl_relaxed(chan->phy->base + reg);
>> + orig_pos = desc_sw->desc.dtadr;
>> + break;
> empty line here and below pls
Updated in patch V2. Thanks!
>
>> + case DMA_MEM_TO_DEV:
>> + reg = (chan->phy->idx << 4) + DSADR;
>> + cur_pos = readl_relaxed(chan->phy->base + reg);
>> + orig_pos = desc_sw->desc.dsadr;
>> + break;
>> + case DMA_MEM_TO_MEM:
>> + reg = (chan->phy->idx << 4) + DTADR;
>> + cur_pos = readl_relaxed(chan->phy->base + reg);
>> + orig_pos = desc_sw->desc.dtadr;
>> + break;
>> + default:
>> + return 0;
>> + }
>> +
>> + residue = (u32)(desc_sw->desc.dcmd & DCMD_LENGTH) + orig_pos - cur_pos;
>> + return residue;
>>  }
>>
>>
>> @@ -381,7 +430,7 @@ static int mmp_pdma_alloc_chan_resources(struct dma_chan 
>> *dchan)
>>   chan->phy->vchan = NULL;
>>   chan->phy = NULL;
>>   }
>> - chan->idle = true;
>> + chan->status = DMA_SUCCESS;
>>

[PATCH V2] dma: mmp_pdma: support for getting residual bytes

2013-06-18 Thread Xiang Wang
From: Xiang Wang 

In some of our drivers (e.g. UART) we may stop a running DMA
before it finishes. So we need APIs to know how many bytes
have been transferred.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |   88 +++
 1 files changed, 80 insertions(+), 8 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index c26699f..57cd047 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -104,7 +104,8 @@ struct mmp_pdma_chan {
spinlock_t desc_lock;   /* Descriptor list lock */
struct list_head chain_pending; /* Link descriptors queue for pending */
struct list_head chain_running; /* Link descriptors queue for running */
-   bool idle;  /* channel statue machine */
+   enum dma_status status; /* channel status machine */
+   u32 bytes_residue;
 
struct dma_pool *desc_pool; /* Descriptors pool */
 };
@@ -270,7 +271,7 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
struct mmp_pdma_desc_sw *desc;
 
/* still in running, irq will start the pending list */
-   if (!chan->idle) {
+   if (chan->status == DMA_IN_PROGRESS) {
dev_dbg(chan->dev, "DMA controller still busy\n");
return;
}
@@ -307,7 +308,64 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
 */
set_desc(chan->phy, desc->async_tx.phys);
enable_chan(chan->phy);
-   chan->idle = false;
+   chan->status = DMA_IN_PROGRESS;
+   chan->bytes_residue = 0;
+}
+
+/*
+ * Get the number of pending bytes. Should be called with desc_lock held
+ * because we are accessing desc list.
+ */
+static u32 mmp_pdma_get_bytes_residue(struct mmp_pdma_chan *chan)
+{
+   u32 reg, orig_pos, cur_pos, ddadr, residue = 0;
+   bool running_desc_found = false;
+   struct mmp_pdma_desc_sw *desc_sw;
+
+   /*
+* When a phy channel is unavailable (it may have been freed), return
+* the last stored value to be safe.
+*/
+   if (!chan->phy)
+   return chan->bytes_residue;
+
+   reg = (chan->phy->idx << 4) + DDADR;
+   ddadr = readl_relaxed(chan->phy->base + reg);
+
+   /* iterate over all descriptors to sum up the number of pending bytes */
+   list_for_each_entry(desc_sw, &chan->chain_running, node) {
+   /* for the case of a running descriptor */
+   if (desc_sw->desc.ddadr == ddadr && !running_desc_found) {
+   switch (chan->dir) {
+   case DMA_DEV_TO_MEM:
+   case DMA_MEM_TO_MEM:
+   reg = (chan->phy->idx << 4) + DTADR;
+   cur_pos = readl_relaxed(chan->phy->base + reg);
+   orig_pos = desc_sw->desc.dtadr;
+   break;
+
+   case DMA_MEM_TO_DEV:
+   reg = (chan->phy->idx << 4) + DSADR;
+   cur_pos = readl_relaxed(chan->phy->base + reg);
+   orig_pos = desc_sw->desc.dsadr;
+   break;
+
+   default:
+   cur_pos = 0;
+   orig_pos = 0;
+   }
+   residue = (u32)(desc_sw->desc.dcmd & DCMD_LENGTH)
+   + orig_pos - cur_pos;
+   running_desc_found = true;
+   continue;
+   }
+
+   /* for the case of following un-started descriptors */
+   if (running_desc_found)
+   residue += (u32)(desc_sw->desc.dcmd & DCMD_LENGTH);
+   }
+
+   return residue;
 }
 
 
@@ -381,7 +439,7 @@ static int mmp_pdma_alloc_chan_resources(struct dma_chan 
*dchan)
chan->phy->vchan = NULL;
chan->phy = NULL;
}
-   chan->idle = true;
+   chan->status = DMA_SUCCESS;
chan->dev_addr = 0;
return 1;
 }
@@ -409,7 +467,7 @@ static void mmp_pdma_free_chan_resources(struct dma_chan 
*dchan)
 
dma_pool_destroy(chan->desc_pool);
chan->desc_pool = NULL;
-   chan->idle = true;
+   chan->status = DMA_SUCCESS;
chan->dev_addr = 0;
if (chan->phy) {
chan->phy->vchan = NULL;
@@ -588,9 +646,16 @@ static int mmp_pdma_control(struct dma_chan *dchan, enum 
dma_ctrl_cmd cmd,
spin_lock_irqsave(&chan->desc_lock, flags);
mmp_pdma_free_desc_list(chan, &chan->chain_pending);
mmp_pdma_free_desc_list(chan, &chan->chain_running);
+   chan->bytes_

Re: [PATCH 1/2] dma: mmp_pdma: add protect when alloc/free phy channels

2013-06-18 Thread Xiang Wang
2013/6/17 Vinod Koul :
> On Mon, Jun 03, 2013 at 04:02:08PM +0800, Xiang Wang wrote:
>> From: Xiang Wang 
>>
>> In mmp pdma, phy channels are allocated/freed dynamically
>> and frequently. But no proper protection is added.
>> Conflict will happen when multi-users are requesting phy
>> channels at the same time. Use spinlock to protect.
>>
>> Signed-off-by: Xiang Wang 
>> ---
>>  drivers/dma/mmp_pdma.c |   41 +
>>  1 files changed, 25 insertions(+), 16 deletions(-)
>>
>
>> +static void free_phy(struct mmp_pdma_chan *pchan)
> pls namespace this
Updated in patch set V3. Thanks!
>> +{
>> + struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
>> + unsigned long flags;
> empty line pls
Updated in patch set V3. Thanks!
>
>> + if (!pchan->phy)
>> + return;
>> +
>> + spin_lock_irqsave(&pdev->phy_lock, flags);
>> + pchan->phy->vchan = NULL;
>> + pchan->phy = NULL;
>> + spin_unlock_irqrestore(&pdev->phy_lock, flags);
>> +}
>> +
> rest looks okay
>
> --
> ~Vinod



--
Regards,
Xiang


Re: [PATCH] dma: mmp_pdma: fix a memory alloc error

2013-06-18 Thread Xiang Wang
2013/6/17 Vinod Koul :
> On Tue, May 28, 2013 at 08:05:13PM +0800, Xiang Wang wrote:
>> From: Xiang Wang 
>>
>> pdev->phy is of type "struct mmp_pdma_phy *". But when
>> allocating memory for it, "struct mmp_pdma_chan" is used
>> by mistake.
>>
>> Signed-off-by: Xiang Wang 
>> ---
>>  drivers/dma/mmp_pdma.c |2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
>> index c26699f..76a8dcf 100644
>> --- a/drivers/dma/mmp_pdma.c
>> +++ b/drivers/dma/mmp_pdma.c
>> @@ -801,7 +801,7 @@ static int mmp_pdma_probe(struct platform_device *op)
>>   }
>>
>>   pdev->phy = devm_kzalloc(pdev->dev,
>> - dma_channels * sizeof(struct mmp_pdma_chan), GFP_KERNEL);
>> + dma_channels * sizeof(struct mmp_pdma_phy), GFP_KERNEL);
> why not sizeof(*pdev-phy), that way you dont make above error
>
> --
> ~Vinod
Hi,
Updated patch V2 to use sizeof(*pdev->phy). Thanks!

--
Regards,
Xiang


[PATCH V3 0/2] dma: mmp_pdma: Fix phy channels not protected issue

2013-06-17 Thread Xiang Wang
From: Xiang Wang 

This patch set deals with two issues: 1) phy channels are not protected
in mmp_pdma; 2) the dma request<->channel mapping is not cleared when a
phy channel is freed.

Xiang Wang (2):
  dma: mmp_pdma: add protect when alloc/free phy channels
  dma: mmp_pdma: clear DRCMR when free a phy channel

 drivers/dma/mmp_pdma.c |   48 
 1 files changed, 32 insertions(+), 16 deletions(-)

-- 
1.7.5.4



[PATCH V3 2/2] dma: mmp_pdma: clear DRCMR when free a phy channel

2013-06-17 Thread Xiang Wang
From: Xiang Wang 

In mmp pdma, phy channels are allocated/freed dynamically.
The mapping from DMA request to DMA channel number in DRCMR
should be cleared when a phy channel is freed. Otherwise
conflicts will happen when:
1. A uses channel 2 and frees it when finished, but A still
maps to channel 2 in its DRCMR entry.
2. Another user B then gets channel 2, so B maps to channel 2
in its DRCMR entry too.
The datasheet states "Do not map two active
requests to the same channel since it produces unpredictable
results", and we observed exactly that during testing.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |6 ++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 226158d..2844eaf 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -252,10 +252,16 @@ static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
 {
struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
unsigned long flags;
+   u32 reg;
 
if (!pchan->phy)
return;
 
+   /* clear the channel mapping in DRCMR */
+   reg = pchan->phy->vchan->drcmr;
+   reg = ((reg < 64) ? 0x0100 : 0x1100) + ((reg & 0x3f) << 2);
+   writel(0, pchan->phy->base + reg);
+
spin_lock_irqsave(&pdev->phy_lock, flags);
pchan->phy->vchan = NULL;
pchan->phy = NULL;
-- 
1.7.5.4



[PATCH V3 1/2] dma: mmp_pdma: add protect when alloc/free phy channels

2013-06-17 Thread Xiang Wang
From: Xiang Wang 

In mmp pdma, phy channels are allocated/freed dynamically
and frequently, but no proper locking is in place.
Conflicts happen when multiple users request phy channels
at the same time. Use a spinlock to protect the alloc/free paths.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |   42 ++
 1 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index c26699f..226158d 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -121,6 +121,7 @@ struct mmp_pdma_device {
struct device   *dev;
struct dma_device   device;
struct mmp_pdma_phy *phy;
+   spinlock_t phy_lock; /* protect alloc/free phy channels */
 };
 
 #define tx_to_mmp_pdma_desc(tx) container_of(tx, struct mmp_pdma_desc_sw, 
async_tx)
@@ -219,6 +220,7 @@ static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan 
*pchan)
int prio, i;
struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
struct mmp_pdma_phy *phy;
+   unsigned long flags;
 
/*
 * dma channel priorities
@@ -227,6 +229,8 @@ static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan 
*pchan)
 * ch 8 - 11, 24 - 27  <--> (2)
 * ch 12 - 15, 28 - 31  <--> (3)
 */
+
+   spin_lock_irqsave(&pdev->phy_lock, flags);
for (prio = 0; prio <= (((pdev->dma_channels - 1) & 0xf) >> 2); prio++) 
{
for (i = 0; i < pdev->dma_channels; i++) {
if (prio != ((i & 0xf) >> 2))
@@ -234,14 +238,30 @@ static struct mmp_pdma_phy *lookup_phy(struct 
mmp_pdma_chan *pchan)
phy = &pdev->phy[i];
if (!phy->vchan) {
phy->vchan = pchan;
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
return phy;
}
}
}
 
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
return NULL;
 }
 
+static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
+{
+   struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
+   unsigned long flags;
+
+   if (!pchan->phy)
+   return;
+
+   spin_lock_irqsave(&pdev->phy_lock, flags);
+   pchan->phy->vchan = NULL;
+   pchan->phy = NULL;
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
+}
+
 /* desc->tx_list ==> pending list */
 static void append_pending_queue(struct mmp_pdma_chan *chan,
struct mmp_pdma_desc_sw *desc)
@@ -277,10 +297,7 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
 
if (list_empty(&chan->chain_pending)) {
/* chance to re-fetch phy channel with higher prio */
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   mmp_pdma_free_phy(chan);
dev_dbg(chan->dev, "no pending list\n");
return;
}
@@ -377,10 +394,7 @@ static int mmp_pdma_alloc_chan_resources(struct dma_chan 
*dchan)
dev_err(chan->dev, "unable to allocate descriptor pool\n");
return -ENOMEM;
}
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   mmp_pdma_free_phy(chan);
chan->idle = true;
chan->dev_addr = 0;
return 1;
@@ -411,10 +425,7 @@ static void mmp_pdma_free_chan_resources(struct dma_chan 
*dchan)
chan->desc_pool = NULL;
chan->idle = true;
chan->dev_addr = 0;
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   mmp_pdma_free_phy(chan);
return;
 }
 
@@ -581,10 +592,7 @@ static int mmp_pdma_control(struct dma_chan *dchan, enum 
dma_ctrl_cmd cmd,
switch (cmd) {
case DMA_TERMINATE_ALL:
disable_chan(chan->phy);
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   mmp_pdma_free_phy(chan);
spin_lock_irqsave(&chan->desc_lock, flags);
mmp_pdma_free_desc_list(chan, &chan->chain_pending);
mmp_pdma_free_desc_list(chan, &chan->chain_running);
@@ -777,6 +785,8 @@ static int mmp_pdma_probe(struct platform_device *op)
return -ENOMEM;
pdev->dev = &op->dev;
 
+   spin_lock_init(&pdev->phy_lock);
+
iores = platform_get_resource(op, IORESOURCE_MEM, 0);
if (!iores)

[PATCH v2] dma: mmp_pdma: fix a memory alloc error

2013-06-17 Thread Xiang Wang
From: Xiang Wang 

pdev->phy is of type "struct mmp_pdma_phy *". But when
allocating memory for it, "struct mmp_pdma_chan" is used
by mistake.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index c26699f..18ac57f 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -801,7 +801,7 @@ static int mmp_pdma_probe(struct platform_device *op)
}
 
pdev->phy = devm_kzalloc(pdev->dev,
-   dma_channels * sizeof(struct mmp_pdma_chan), GFP_KERNEL);
+   dma_channels * sizeof(*pdev->phy), GFP_KERNEL);
if (pdev->phy == NULL)
return -ENOMEM;
 
-- 
1.7.5.4

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 0/2] dma: mmp_pdma: Fix phy channels not protected issue

2013-06-17 Thread Xiang Wang
2013/6/6 Xiang Wang :
> From: Xiang Wang 
>
> This patch set deals with the issues that 1) phy channels are not protected
> in mmp_pdma. 2) dma request<->channel mapping is not cleared when a phy chan
> is freed.
>
> Xiang Wang (2):
>   dma: mmp_pdma: add protect when alloc/free phy channels
>   dma: mmp_pdma: clear DRCMR when free a phy channel
>
>  drivers/dma/mmp_pdma.c |   48 
> 
>  1 files changed, 32 insertions(+), 16 deletions(-)
>
> --
> 1.7.5.4
>
Hi, all
Do you have any comments about this patch set? Thanks!

--
Regards,
Xiang


Re: [PATCH] dma: mmp_pdma: fix a memory alloc error

2013-06-17 Thread Xiang Wang
2013/5/31 Xiang Wang :
> 2013/5/28 Andy Shevchenko :
>> On Tue, May 28, 2013 at 3:05 PM, Xiang Wang  wrote:
>>> From: Xiang Wang 
>>>
>>> pdev->phy is of type "struct mmp_pdma_phy *". But when
>>> allocating memory for it, "struct mmp_pdma_chan" is used
>>> by mistake.
>>
>> Have you tested it?
>>
>>> Signed-off-by: Xiang Wang 
>>> ---
>>>  drivers/dma/mmp_pdma.c |2 +-
>>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
>>> index c26699f..76a8dcf 100644
>>> --- a/drivers/dma/mmp_pdma.c
>>> +++ b/drivers/dma/mmp_pdma.c
>>> @@ -801,7 +801,7 @@ static int mmp_pdma_probe(struct platform_device *op)
>>> }
>>>
>>> pdev->phy = devm_kzalloc(pdev->dev,
>>> -   dma_channels * sizeof(struct mmp_pdma_chan), GFP_KERNEL);
>>> +   dma_channels * sizeof(struct mmp_pdma_phy), GFP_KERNEL);
>>
>> For me it seems you did get how it works.
>>
>> --
>> With Best Regards,
>> Andy Shevchenko
>
> Hi, Andy
> Yes. I've tested on marvell pxa988 platform using dmatest.ko.

Hi, All
Do you have any comments about this patch? Thanks!

--
Regards,
Xiang


Re: [PATCH] dma: mmp_pdma: support for getting residual bytes

2013-06-06 Thread Xiang Wang
2013/6/6 Andy Shevchenko :
> On Thu, Jun 6, 2013 at 6:09 AM, Xiang Wang  wrote:
>> 2013/6/3 Andy Shevchenko :
>>> On Mon, Jun 3, 2013 at 6:22 AM, Xiang Wang  wrote:
>>>> 2013/5/31 Andy Shevchenko :
>>>>> On Fri, May 31, 2013 at 11:21 AM, Xiang Wang  wrote:
>>>>>> In some of our drivers (e.g. UART) we may stop a running DMA
>>>>>> before it finishes. So we need to know how many bytes have
>>>>>> been transferred.
>
> []
>
>>>> Why I use a saved value (chan->bytes_residue)?
>>>> In current mmp pdma driver, a phy channel will be freed after the
>>>> transmission finishes (chan->phy is set to NULL). So we cannot get the
>>>> physical channel information after we call DMA_TERMINATE_ALL or DMA
>>>> finishes itself.
>>>
>>> I don't see any contradiction to workflow.
>>> So, If you call device_tx_status() when transfer is completed or
>>> aborted you will get 0 as a residue, which is correct.
>
>>>>>> @@ -637,7 +692,8 @@ static enum dma_status mmp_pdma_tx_status(struct 
>>>>>> dma_chan *dchan,
>>>>>> unsigned long flags;
>>>>>>
>>>>>> spin_lock_irqsave(&chan->desc_lock, flags);
>>>>>> -   ret = dma_cookie_status(dchan, cookie, txstate);
>>>>>> +   ret = chan->status;
>>>>>> +   dma_set_residue(txstate, chan->bytes_residue);
>>>>>> spin_unlock_irqrestore(&chan->desc_lock, flags);
>>>>>
>>>>> Besides my patch which removes this spinlock I think the workflow
>>>>> should be something like
>>>>>
>>>>> status = dma_cookie_status()
>>>>> if status == DMA_SUCCESS or !txstate:
>>>>> return status
>>>>>
>>>>> dma_set_residue()
>>>>> return status
>>>>>
>>>>> Because there is no reason to return residue of successfully finished
>>>>> transfer. It should be 0.
>>>
>>>> There is one exception from my point of view. When we are receiving
>>>> data from peripheral devices, we usually start a DMA transaction with
>>>> a target length of 4K for example. When a timed-out event occurs in
>>>> peripheral device, it will notify DMA controller and DMA controller
>>>> will send out a End of Receive interrupt (Marvell specific?).
>>>
>>> Might be your hardware specifics, in our case we have got a timeout
>>> interrupt from uart controller.
>>>
>>>> In such situation, DMA status is also DMA_SUCCESS. But the residual
>>>> bytes is not 0 and the user must query it.
>>>
>>> Which sounds wrong approach.
>>>
>>> P.S. take a look at  drivers/tty/serial/8250/8250_dma.c
>
>> When peripheral device (e.g. UART) is using DMA controller, we should
>> have 2 solutions to deal with trailing bytes:
>
>> 1. Timeout interrupt from UART controller.
>> In this case, we usually pause DMA and read out data from DMA buffer
>> in uart irq handler.
>
> Hmm... When UART timeout occurs you have trailing bytes in the UART
> FIFO. Correct? What do you mean under "DMA buffer" ?
> I think you have to read bytes from UART FIFO.
I didn't describe it clearly. We have to deal with two kinds of data in
this situation: 1) the data that DMA has already moved but that has not
reached the target length (so no DMA interrupt is sent out). This is
what I mean by "DMA buffer". When we receive a timeout interrupt from
the UART controller, we pause DMA and read out this data. 2) The data
in the UART FIFO.

>
>> 2. DMA controller handles trailing bytes for us.
>> This is the case I mentioned in my previous email. "When a timed-out
>> event occurs in peripheral device, it will notify DMA controller and
>> DMA controller will send out a End of Receive interrupt"
>> I think we should know how many residual bytes in this case even the
>> DMA status is DMA_SUCCESS.
>
> I'm still not convinced that you have implement such algorithm. Anyway,
> if I got it correctly you have something like "Receive FIFO Occupancy
> Register (FOR)" in UART. When you got timeout interrupt, you may get
> residue and value from FOR, sum them, program DMA to transfer the
> trailing bytes.
> What did I miss?
This case is another working mode of the peripheral device (e.g. UART).
If we configure the UART to work in this mode, the hardware (DMA
controller and UART controller) will deal with the trailing bytes for us.
The workflow is likely to be:
a. When a timeout occurs in the UART, it notifies the DMA controller to
move the trailing bytes in the UART FIFO. (software transparent)
b. After DMA finishes moving all bytes in the UART FIFO, it sends out
an End of Receive (EOR) interrupt.
So in this case, software sees an EOR interrupt from the DMA controller
instead of a timeout interrupt from the UART controller.
Actually the Marvell PXA988 UART controller supports this mode and I've tested it. :)
>
> --
> With Best Regards,
> Andy Shevchenko



--
Regards,
Xiang


Re: [PATCH] dma: mmp_pdma: support for getting residual bytes

2013-06-05 Thread Xiang Wang
2013/6/3 Andy Shevchenko :
> On Mon, Jun 3, 2013 at 6:22 AM, Xiang Wang  wrote:
>> 2013/5/31 Andy Shevchenko :
>>> On Fri, May 31, 2013 at 11:21 AM, Xiang Wang  wrote:
>>>> In some of our drivers (e.g. UART) we may stop a running DMA
>>>> before it finishes. So we need to know how many bytes have
>>>> been transferred.
>>>
>>> Couple of comments below.
>>>
>>>> --- a/drivers/dma/mmp_pdma.c
>>>> +++ b/drivers/dma/mmp_pdma.c
>>>
>>>> @@ -589,7 +638,13 @@ static int mmp_pdma_control(struct dma_chan *dchan, 
>>>> enum dma_ctrl_cmd cmd,
>>>> mmp_pdma_free_desc_list(chan, &chan->chain_pending);
>>>> mmp_pdma_free_desc_list(chan, &chan->chain_running);
>>>> spin_unlock_irqrestore(&chan->desc_lock, flags);
>>>> -   chan->idle = true;
>>>> +   chan->status = DMA_SUCCESS;
>>>> +   chan->bytes_residue = 0;
>>>> +   break;
>>>> +   case DMA_PAUSE:
>>>> +   disable_chan(chan->phy);
>>>> +   chan->status = DMA_PAUSED;
>>>> +   chan->bytes_residue = mmp_pdma_get_bytes_residue(chan);
>>>
>>> Does it mean user has to do DMA_PAUSE first to get more or less
>>> accurate residue?
>>> Logically that sound correct, but in general we may allow user to get
>>> approximate residue value of on going transfer.
>
>> Your comment makes sense. But if the user is allowed to query the
>> residue value in real-time, we cannot just return a saved value to
>> him.
>
> Right.
>
>> Why I use a saved value (chan->bytes_residue)?
>> In current mmp pdma driver, a phy channel will be freed after the
>> transmission finishes (chan->phy is set to NULL). So we cannot get the
>> physical channel information after we call DMA_TERMINATE_ALL or DMA
>> finishes itself.
>
> I don't see any contradiction to workflow.
> So, If you call device_tx_status() when transfer is completed or
> aborted you will get 0 as a residue, which is correct.
>
>> That is to say, when the use queries the channel information at these
>> points, the chan->phy is usually NULL.
>
>>>> @@ -637,7 +692,8 @@ static enum dma_status mmp_pdma_tx_status(struct 
>>>> dma_chan *dchan,
>>>> unsigned long flags;
>>>>
>>>> spin_lock_irqsave(&chan->desc_lock, flags);
>>>> -   ret = dma_cookie_status(dchan, cookie, txstate);
>>>> +   ret = chan->status;
>>>> +   dma_set_residue(txstate, chan->bytes_residue);
>>>> spin_unlock_irqrestore(&chan->desc_lock, flags);
>>>
>>> Besides my patch which removes this spinlock I think the workflow
>>> should be something like
>>>
>>> status = dma_cookie_status()
>>> if status == DMA_SUCCESS or !txstate:
>>> return status
>>>
>>> dma_set_residue()
>>> return status
>>>
>>> Because there is no reason to return residue of successfully finished
>>> transfer. It should be 0.
>
>> There is one exception from my point of view. When we are receiving
>> data from peripheral devices, we usually start a DMA transaction with
>> a target length of 4K for example. When a timed-out event occurs in
>> peripheral device, it will notify DMA controller and DMA controller
>> will send out a End of Receive interrupt (Marvell specific?).
>
> Might be your hardware specifics, in our case we have got a timeout
> interrupt from uart controller.
>
>> In such situation, DMA status is also DMA_SUCCESS. But the residual
>> bytes is not 0 and the user must query it.
>
> Which sounds wrong approach.
>
> P.S. take a look at  drivers/tty/serial/8250/8250_dma.c
Hi, Andy
When a peripheral device (e.g. UART) is using the DMA controller, we
have two ways to deal with trailing bytes:
1. Timeout interrupt from the UART controller.
In this case, we usually pause DMA and read out data from the DMA buffer
in the UART irq handler.
2. The DMA controller handles the trailing bytes for us.
This is the case I mentioned in my previous email: "When a timed-out
event occurs in peripheral device, it will notify DMA controller and
DMA controller will send out a End of Receive interrupt".
I think we should know how many residual bytes remain in this case even
if the DMA status is DMA_SUCCESS.

Thanks!
>
> --
> With Best Regards,
> Andy Shevchenko



--
Regards,
Xiang


[PATCH v2 1/2] dma: mmp_pdma: add protect when alloc/free phy channels

2013-06-05 Thread Xiang Wang
From: Xiang Wang 

In mmp pdma, phy channels are allocated/freed dynamically
and frequently, but no proper locking is in place.
Conflicts happen when multiple users request phy channels
at the same time. Use a spinlock to protect the alloc/free paths.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |   42 ++
 1 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index c26699f..84e51a1 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -121,6 +121,7 @@ struct mmp_pdma_device {
struct device   *dev;
struct dma_device   device;
struct mmp_pdma_phy *phy;
+   spinlock_t phy_lock; /* protect alloc/free phy channels */
 };
 
 #define tx_to_mmp_pdma_desc(tx) container_of(tx, struct mmp_pdma_desc_sw, 
async_tx)
@@ -219,6 +220,7 @@ static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan 
*pchan)
int prio, i;
struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
struct mmp_pdma_phy *phy;
+   unsigned long flags;
 
/*
 * dma channel priorities
@@ -227,6 +229,8 @@ static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan 
*pchan)
 * ch 8 - 11, 24 - 27  <--> (2)
 * ch 12 - 15, 28 - 31  <--> (3)
 */
+
+   spin_lock_irqsave(&pdev->phy_lock, flags);
for (prio = 0; prio <= (((pdev->dma_channels - 1) & 0xf) >> 2); prio++) 
{
for (i = 0; i < pdev->dma_channels; i++) {
if (prio != ((i & 0xf) >> 2))
@@ -234,14 +238,30 @@ static struct mmp_pdma_phy *lookup_phy(struct 
mmp_pdma_chan *pchan)
phy = &pdev->phy[i];
if (!phy->vchan) {
phy->vchan = pchan;
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
return phy;
}
}
}
 
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
return NULL;
 }
 
+static void free_phy(struct mmp_pdma_chan *pchan)
+{
+   struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
+   unsigned long flags;
+
+   if (!pchan->phy)
+   return;
+
+   spin_lock_irqsave(&pdev->phy_lock, flags);
+   pchan->phy->vchan = NULL;
+   pchan->phy = NULL;
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
+}
+
 /* desc->tx_list ==> pending list */
 static void append_pending_queue(struct mmp_pdma_chan *chan,
struct mmp_pdma_desc_sw *desc)
@@ -277,10 +297,7 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
 
if (list_empty(&chan->chain_pending)) {
/* chance to re-fetch phy channel with higher prio */
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
dev_dbg(chan->dev, "no pending list\n");
return;
}
@@ -377,10 +394,7 @@ static int mmp_pdma_alloc_chan_resources(struct dma_chan 
*dchan)
dev_err(chan->dev, "unable to allocate descriptor pool\n");
return -ENOMEM;
}
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
chan->idle = true;
chan->dev_addr = 0;
return 1;
@@ -411,10 +425,7 @@ static void mmp_pdma_free_chan_resources(struct dma_chan 
*dchan)
chan->desc_pool = NULL;
chan->idle = true;
chan->dev_addr = 0;
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
return;
 }
 
@@ -581,10 +592,7 @@ static int mmp_pdma_control(struct dma_chan *dchan, enum 
dma_ctrl_cmd cmd,
switch (cmd) {
case DMA_TERMINATE_ALL:
disable_chan(chan->phy);
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
spin_lock_irqsave(&chan->desc_lock, flags);
mmp_pdma_free_desc_list(chan, &chan->chain_pending);
mmp_pdma_free_desc_list(chan, &chan->chain_running);
@@ -777,6 +785,8 @@ static int mmp_pdma_probe(struct platform_device *op)
return -ENOMEM;
pdev->dev = &op->dev;
 
+   spin_lock_init(&pdev->phy_lock);
+
iores = platform_get_resource(op, IORESOURCE_MEM, 0);
if (!iores)
return -EINVAL;
-- 
1.7.5.4



[PATCH v2 2/2] dma: mmp_pdma: clear DRCMR when free a phy channel

2013-06-05 Thread Xiang Wang
From: Xiang Wang 

In mmp pdma, phy channels are allocated and freed dynamically.
The mapping from DMA request line to DMA channel number in DRCMR
should be cleared when a phy channel is freed. Otherwise conflicts
arise:
1. Device A uses channel 2 and frees it when its transfer
finishes, but A's DRCMR entry still maps to channel 2.
2. Device B then gets channel 2, so B's DRCMR entry also maps
to channel 2.
The datasheet states: "Do not map two active requests to the same
channel since it produces unpredictable results", and we observed
exactly that during testing.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |6 ++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 84e51a1..410dc26 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -252,10 +252,16 @@ static void free_phy(struct mmp_pdma_chan *pchan)
 {
struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
unsigned long flags;
+   u32 reg;
 
if (!pchan->phy)
return;
 
+   /* clear the channel mapping in DRCMR */
+   reg = pchan->phy->vchan->drcmr;
+   reg = ((reg < 64) ? 0x0100 : 0x1100) + ((reg & 0x3f) << 2);
+   writel(0, pchan->phy->base + reg);
+
spin_lock_irqsave(&pdev->phy_lock, flags);
pchan->phy->vchan = NULL;
pchan->phy = NULL;
-- 
1.7.5.4



[PATCH v2 0/2] dma: mmp_pdma: Fix phy channels not protected issue

2013-06-05 Thread Xiang Wang
From: Xiang Wang 

This patch set addresses two issues in mmp_pdma: 1) phy channels
are not protected against concurrent allocation/free; 2) the DMA
request<->channel mapping in DRCMR is not cleared when a phy
channel is freed.

Xiang Wang (2):
  dma: mmp_pdma: add protect when alloc/free phy channels
  dma: mmp_pdma: clear DRCMR when free a phy channel

 drivers/dma/mmp_pdma.c |   48 
 1 files changed, 32 insertions(+), 16 deletions(-)

-- 
1.7.5.4



[PATCH 1/2] dma: mmp_pdma: add protect when alloc/free phy channels

2013-06-03 Thread Xiang Wang
From: Xiang Wang 

In mmp pdma, phy channels are allocated and freed dynamically
and frequently, but no locking protects them. Conflicts arise
when multiple users request phy channels at the same time. Use
a spinlock to protect the phy channel alloc/free paths.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |   41 +
 1 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index c26699f..21820c9 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -121,6 +121,7 @@ struct mmp_pdma_device {
struct device   *dev;
struct dma_device   device;
struct mmp_pdma_phy *phy;
+   spinlock_t phy_lock; /* protect alloc/free phy channels */
 };
 
 #define tx_to_mmp_pdma_desc(tx) container_of(tx, struct mmp_pdma_desc_sw, async_tx)
@@ -219,6 +220,7 @@ static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan *pchan)
int prio, i;
struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
struct mmp_pdma_phy *phy;
+   unsigned long flags;
 
/*
 * dma channel priorities
@@ -227,6 +229,8 @@ static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan *pchan)
 * ch 8 - 11, 24 - 27  <--> (2)
 * ch 12 - 15, 28 - 31  <--> (3)
 */
+
+   spin_lock_irqsave(&pdev->phy_lock, flags);
	for (prio = 0; prio <= (((pdev->dma_channels - 1) & 0xf) >> 2); prio++) {
for (i = 0; i < pdev->dma_channels; i++) {
if (prio != ((i & 0xf) >> 2))
@@ -234,14 +238,29 @@ static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan *pchan)
phy = &pdev->phy[i];
if (!phy->vchan) {
phy->vchan = pchan;
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
return phy;
}
}
}
 
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
return NULL;
 }
 
+static void free_phy(struct mmp_pdma_chan *pchan)
+{
+   struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
+   unsigned long flags;
+   if (!pchan->phy)
+   return;
+
+   spin_lock_irqsave(&pdev->phy_lock, flags);
+   pchan->phy->vchan = NULL;
+   pchan->phy = NULL;
+   spin_unlock_irqrestore(&pdev->phy_lock, flags);
+}
+
 /* desc->tx_list ==> pending list */
 static void append_pending_queue(struct mmp_pdma_chan *chan,
struct mmp_pdma_desc_sw *desc)
@@ -277,10 +296,7 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
 
if (list_empty(&chan->chain_pending)) {
/* chance to re-fetch phy channel with higher prio */
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
dev_dbg(chan->dev, "no pending list\n");
return;
}
@@ -377,10 +393,7 @@ static int mmp_pdma_alloc_chan_resources(struct dma_chan *dchan)
dev_err(chan->dev, "unable to allocate descriptor pool\n");
return -ENOMEM;
}
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
chan->idle = true;
chan->dev_addr = 0;
return 1;
@@ -411,10 +424,7 @@ static void mmp_pdma_free_chan_resources(struct dma_chan *dchan)
chan->desc_pool = NULL;
chan->idle = true;
chan->dev_addr = 0;
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
return;
 }
 
@@ -581,10 +591,7 @@ static int mmp_pdma_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
switch (cmd) {
case DMA_TERMINATE_ALL:
disable_chan(chan->phy);
-   if (chan->phy) {
-   chan->phy->vchan = NULL;
-   chan->phy = NULL;
-   }
+   free_phy(chan);
spin_lock_irqsave(&chan->desc_lock, flags);
mmp_pdma_free_desc_list(chan, &chan->chain_pending);
mmp_pdma_free_desc_list(chan, &chan->chain_running);
@@ -777,6 +784,8 @@ static int mmp_pdma_probe(struct platform_device *op)
return -ENOMEM;
pdev->dev = &op->dev;
 
+   spin_lock_init(&pdev->phy_lock);
+
iores = platform_get_resource(op, IORESOURCE_MEM, 0);
if (!iores)
return -EINVAL;
-- 
1.7.5.4



[PATCH 2/2] dma: mmp_pdma: clear DRCMR when free a phy channel

2013-06-03 Thread Xiang Wang
From: Xiang Wang 

In mmp pdma, phy channels are allocated and freed dynamically.
The mapping from DMA request line to DMA channel number in DRCMR
should be cleared when a phy channel is freed. Otherwise conflicts
arise:
1. Device A uses channel 2 and frees it when its transfer
finishes, but A's DRCMR entry still maps to channel 2.
2. Device B then gets channel 2, so B's DRCMR entry also maps
to channel 2.
The datasheet states: "Do not map two active requests to the same
channel since it produces unpredictable results", and we observed
exactly that during testing.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |6 ++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 21820c9..4cd7276 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -252,9 +252,15 @@ static void free_phy(struct mmp_pdma_chan *pchan)
 {
struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
unsigned long flags;
+   u32 reg;
if (!pchan->phy)
return;
 
+   /* clear the channel mapping in DRCMR */
+   reg = pchan->phy->vchan->drcmr;
+   reg = (((reg) < 64) ? 0x0100 : 0x1100) + (((reg) & 0x3f) << 2);
+   writel(0, pchan->phy->base + reg);
+
spin_lock_irqsave(&pdev->phy_lock, flags);
pchan->phy->vchan = NULL;
pchan->phy = NULL;
-- 
1.7.5.4



[PATCH 0/2] dma: mmp_pdma: Fix phy channels not protected issue

2013-06-03 Thread Xiang Wang
From: Xiang Wang 

This patch set addresses two issues in mmp_pdma: 1) phy channels
are not protected against concurrent allocation/free; 2) the DMA
request<->channel mapping in DRCMR is not cleared when a phy
channel is freed.

Xiang Wang (2):
  dma: mmp_pdma: add protect when alloc/free phy channels
  dma: mmp_pdma: clear DRCMR when free a phy channel

 drivers/dma/mmp_pdma.c |   47 +++
 1 files changed, 31 insertions(+), 16 deletions(-)

-- 
1.7.5.4



Re: [PATCH] dma: mmp_pdma: support for getting residual bytes

2013-06-02 Thread Xiang Wang
2013/5/31 Andy Shevchenko :
> On Fri, May 31, 2013 at 11:21 AM, Xiang Wang  wrote:
>> In some of our drivers (e.g. UART) we may stop a running DMA
>> before it finishes. So we need to know how many bytes have
>> been transferred.
>
> Couple of comments below.
>
>> --- a/drivers/dma/mmp_pdma.c
>> +++ b/drivers/dma/mmp_pdma.c
>
>> @@ -589,7 +638,13 @@ static int mmp_pdma_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
>> mmp_pdma_free_desc_list(chan, &chan->chain_pending);
>> mmp_pdma_free_desc_list(chan, &chan->chain_running);
>> spin_unlock_irqrestore(&chan->desc_lock, flags);
>> -   chan->idle = true;
>> +   chan->status = DMA_SUCCESS;
>> +   chan->bytes_residue = 0;
>> +   break;
>> +   case DMA_PAUSE:
>> +   disable_chan(chan->phy);
>> +   chan->status = DMA_PAUSED;
>> +   chan->bytes_residue = mmp_pdma_get_bytes_residue(chan);
>
> Does it mean user has to do DMA_PAUSE first to get more or less
> accurate residue?
> Logically that sound correct, but in general we may allow user to get
> approximate residue value of on going transfer.
Hi Andy,
Your comment makes sense. But if the user is allowed to query the
residue value in real time, we cannot just return a saved value.
Why do I use a saved value (chan->bytes_residue)?
In the current mmp pdma driver, a phy channel is freed as soon as
the transfer finishes (chan->phy is set to NULL), so the physical
channel registers cannot be read after DMA_TERMINATE_ALL or after
the DMA finishes on its own. That is to say, when the user queries
the channel at those points, chan->phy is usually NULL.
>
>> break;
>> case DMA_SLAVE_CONFIG:
>> if (cfg->direction == DMA_DEV_TO_MEM) {
>> @@ -637,7 +692,8 @@ static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan,
>> unsigned long flags;
>>
>> spin_lock_irqsave(&chan->desc_lock, flags);
>> -   ret = dma_cookie_status(dchan, cookie, txstate);
>> +   ret = chan->status;
>> +   dma_set_residue(txstate, chan->bytes_residue);
>> spin_unlock_irqrestore(&chan->desc_lock, flags);
>
> Besides my patch which removes this spinlock I think the workflow
> should be something like
>
> status = dma_cookie_status()
> if status == DMA_SUCCESS or !txstate:
> return status
>
> dma_set_residue()
> return status
>
> Because there is no reason to return residue of successfully finished
> transfer. It should be 0.
Hi Andy,
There is one exception, from my point of view. When we receive
data from a peripheral device, we usually start a DMA transaction
with a target length of, say, 4K. When a timeout occurs in the
peripheral device, it notifies the DMA controller, which sends out
an End of Receive interrupt (Marvell specific?).
In that situation the DMA status is also DMA_SUCCESS, but the
residual byte count is not 0 and the user must be able to query it.
Thanks!
>
> --
> With Best Regards,
> Andy Shevchenko


[PATCH] dma: mmp_pdma: support for getting residual bytes

2013-05-31 Thread Xiang Wang
From: Xiang Wang 

In some of our drivers (e.g. UART) we may stop a running DMA
before it finishes. So we need to know how many bytes have
been transferred.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |   77 +++-
 1 files changed, 69 insertions(+), 8 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 76a8dcf..0c623cc 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -104,7 +104,8 @@ struct mmp_pdma_chan {
spinlock_t desc_lock;   /* Descriptor list lock */
struct list_head chain_pending; /* Link descriptors queue for pending */
struct list_head chain_running; /* Link descriptors queue for running */
-   bool idle;  /* channel statue machine */
+   enum dma_status status; /* channel status machine */
+   u32 bytes_residue;
 
struct dma_pool *desc_pool; /* Descriptors pool */
 };
@@ -270,7 +271,7 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
struct mmp_pdma_desc_sw *desc;
 
/* still in running, irq will start the pending list */
-   if (!chan->idle) {
+   if (chan->status == DMA_IN_PROGRESS) {
dev_dbg(chan->dev, "DMA controller still busy\n");
return;
}
@@ -307,7 +308,55 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
 */
set_desc(chan->phy, desc->async_tx.phys);
enable_chan(chan->phy);
-   chan->idle = false;
+   chan->status = DMA_IN_PROGRESS;
+   chan->bytes_residue = 0;
+}
+
+/*
+ * Get the number of bytes untransferred by DMA.
+ */
+static u32 mmp_pdma_get_bytes_residue(struct mmp_pdma_chan *chan)
+{
+   u32 reg, orig_pos, cur_pos, residue = 0;
+   struct mmp_pdma_desc_sw *desc_sw;
+   struct list_head *list_chain = NULL;
+
+   /*
+    * When the phy channel is unavailable (maybe it was destroyed),
+    * return the last stored value to be safe.
+    */
+   if (!chan || !chan->phy)
+   return chan->bytes_residue;
+
+   if (!list_empty(&chan->chain_running))
+   list_chain = &chan->chain_running;
+   else
+   return 0;
+
+   desc_sw = list_first_entry(list_chain, struct mmp_pdma_desc_sw, node);
+
+   switch (chan->dir) {
+   case DMA_DEV_TO_MEM:
+   reg = (chan->phy->idx << 4) + DTADR;
+   cur_pos = readl_relaxed(chan->phy->base + reg);
+   orig_pos = desc_sw->desc.dtadr;
+   break;
+   case DMA_MEM_TO_DEV:
+   reg = (chan->phy->idx << 4) + DSADR;
+   cur_pos = readl_relaxed(chan->phy->base + reg);
+   orig_pos = desc_sw->desc.dsadr;
+   break;
+   case DMA_MEM_TO_MEM:
+   reg = (chan->phy->idx << 4) + DTADR;
+   cur_pos = readl_relaxed(chan->phy->base + reg);
+   orig_pos = desc_sw->desc.dtadr;
+   break;
+   default:
+   return 0;
+   }
+
+   residue = (u32)(desc_sw->desc.dcmd & DCMD_LENGTH) + orig_pos - cur_pos;
+   return residue;
 }
 
 
@@ -381,7 +430,7 @@ static int mmp_pdma_alloc_chan_resources(struct dma_chan *dchan)
chan->phy->vchan = NULL;
chan->phy = NULL;
}
-   chan->idle = true;
+   chan->status = DMA_SUCCESS;
chan->dev_addr = 0;
return 1;
 }
@@ -409,7 +458,7 @@ static void mmp_pdma_free_chan_resources(struct dma_chan *dchan)
 
dma_pool_destroy(chan->desc_pool);
chan->desc_pool = NULL;
-   chan->idle = true;
+   chan->status = DMA_SUCCESS;
chan->dev_addr = 0;
if (chan->phy) {
chan->phy->vchan = NULL;
@@ -589,7 +638,13 @@ static int mmp_pdma_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
mmp_pdma_free_desc_list(chan, &chan->chain_pending);
mmp_pdma_free_desc_list(chan, &chan->chain_running);
spin_unlock_irqrestore(&chan->desc_lock, flags);
-   chan->idle = true;
+   chan->status = DMA_SUCCESS;
+   chan->bytes_residue = 0;
+   break;
+   case DMA_PAUSE:
+   disable_chan(chan->phy);
+   chan->status = DMA_PAUSED;
+   chan->bytes_residue = mmp_pdma_get_bytes_residue(chan);
break;
case DMA_SLAVE_CONFIG:
if (cfg->direction == DMA_DEV_TO_MEM) {
@@ -637,7 +692,8 @@ static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan,
unsigned long flags;
 
spin_lock_irqsave(&chan->desc_lock, flags);
-   ret = dma_cookie_status(dchan, cookie, txstate);
+   ret = chan->status;
+   dma_set_residue(txstate, chan->bytes_residue);

Re: [PATCH] dma: mmp_pdma: fix a memory alloc error

2013-05-30 Thread Xiang Wang
2013/5/28 Andy Shevchenko :
> On Tue, May 28, 2013 at 3:05 PM, Xiang Wang  wrote:
>> From: Xiang Wang 
>>
>> pdev->phy is of type "struct mmp_pdma_phy *". But when
>> allocating memory for it, "struct mmp_pdma_chan" is used
>> by mistake.
>
> Have you tested it?
>
>> Signed-off-by: Xiang Wang 
>> ---
>>  drivers/dma/mmp_pdma.c |2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
>> index c26699f..76a8dcf 100644
>> --- a/drivers/dma/mmp_pdma.c
>> +++ b/drivers/dma/mmp_pdma.c
>> @@ -801,7 +801,7 @@ static int mmp_pdma_probe(struct platform_device *op)
>> }
>>
>> pdev->phy = devm_kzalloc(pdev->dev,
>> -   dma_channels * sizeof(struct mmp_pdma_chan), GFP_KERNEL);
>> +   dma_channels * sizeof(struct mmp_pdma_phy), GFP_KERNEL);
>
> For me it seems you did get how it works.
>
> --
> With Best Regards,
> Andy Shevchenko

Hi Andy,
Yes, I've tested it on the Marvell pxa988 platform using dmatest.ko.


[PATCH] dma: mmp_pdma: fix a memory alloc error

2013-05-28 Thread Xiang Wang
From: Xiang Wang 

pdev->phy is of type "struct mmp_pdma_phy *". But when
allocating memory for it, "struct mmp_pdma_chan" is used
by mistake.

Signed-off-by: Xiang Wang 
---
 drivers/dma/mmp_pdma.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index c26699f..76a8dcf 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -801,7 +801,7 @@ static int mmp_pdma_probe(struct platform_device *op)
}
 
pdev->phy = devm_kzalloc(pdev->dev,
-   dma_channels * sizeof(struct mmp_pdma_chan), GFP_KERNEL);
+   dma_channels * sizeof(struct mmp_pdma_phy), GFP_KERNEL);
if (pdev->phy == NULL)
return -ENOMEM;
 
-- 
1.7.5.4
