RE: [RFC PATCH v3 00/16] DMA Engine support for AM33XX

2012-10-21 Thread Bedia, Vaibhav
On Fri, Oct 19, 2012 at 22:16:15, Porter, Matt wrote:
> On Fri, Oct 19, 2012 at 12:02:42PM +, Bedia, Vaibhav wrote:
> > On Fri, Oct 19, 2012 at 16:45:58, Porter, Matt wrote:
> > > On Fri, Oct 19, 2012 at 10:26:20AM +, Bedia, Vaibhav wrote:
> > [...]
> > > > 
> > > > I didn't see all the patches that you posted on the edma-dmaengine-v3
> > > > branch, but I do see them on the edma-dmaengine-am33xx-v3 branch.
> > > 
> > > I see I referenced the wrong branch in the cover letter. Thanks for
> > > testing and noticing this. Sorry to make you hunt for the correct
> > > branch in that repo. ;) 
> > > 
> > 
> > No problem.
> > 
> > > https://github.com/ohporter/linux/tree/edma-dmaengine-am33xx-v3
> > > is indeed the correct branch for those wanting to pull this in or
> > > grab some of the not-to-be-merged drivers I used for testing.
> > > 
> > > > I added a couple of patches to enable earlyprintk and build the DTB
> > > > appended kernel image uImage-dtb.am335x-evm
> > > > 
> > > > Here's what I see
> > > > 
> > > > [...]
> > > 
> > > 
> > > 
> > > > [0.175354] edma: probe of 4900.edma failed with error -16
> > > 
> > > I missed an uninitialized pdata case in the bug fixes mentioned in
> > > the changelog and the folks previously failing the same way didn't
> > > hit the case I suspect you are hitting. Can you try this and let me
> > > know how it works?
> > > 
> > 
> > That doesn't help :(
> 
> Ok, so I dumped my Linaro toolchain, which had been masking this issue: you
> got unlucky with it on the EVM, whereas I was lucky. After switching
> toolchains I was able to reproduce the problem. Pantelis Antoniou suggested
> some changes, and the following fixes the issue for me, verified on both
> BeagleBone and EVM. Let me know if that works on your end and I'll
> incorporate some version of it in the next update.

Heh I would not have suspected the toolchain so early ;)

> 
> diff --git a/arch/arm/common/edma.c b/arch/arm/common/edma.c
> index b761b7a..6ed394f 100644
> --- a/arch/arm/common/edma.c
> +++ b/arch/arm/common/edma.c
> @@ -1598,6 +1598,8 @@ static struct of_dma_filter_info edma_filter_info = {
>  static int __init edma_probe(struct platform_device *pdev)
>  {
>  	struct edma_soc_info	**info = pdev->dev.platform_data;
> +	struct edma_soc_info	*ninfo[EDMA_MAX_CC] = {NULL, NULL};
> +	struct edma_soc_info	tmpinfo;
>  	s8			(*queue_priority_mapping)[2];
>  	s8			(*queue_tc_mapping)[2];
>  	int			i, j, off, ln, found = 0;
> @@ -1614,15 +1616,13 @@ static int __init edma_probe(struct platform_device *pdev)
>  	char			irq_name[10];
>  	struct device_node	*node = pdev->dev.of_node;
>  	struct device		*dev = &pdev->dev;
> -	struct edma_soc_info	*pdata;
>   int ret;
>  
>   if (node) {
> - pdata = devm_kzalloc(dev,
> -  sizeof(struct edma_soc_info),
> -  GFP_KERNEL);
> - edma_of_parse_dt(dev, node, pdata);
> - info = &pdata;
> + info = ninfo;
> + edma_of_parse_dt(dev, node, &tmpinfo);
> + info[0] = &tmpinfo;
> +
>   dma_cap_set(DMA_SLAVE, edma_filter_info.dma_cap);
>   of_dma_controller_register(dev->of_node,
>  of_dma_simple_xlate,
> 

With the above diff, the kernel boots fine on the EVM.

> > Looking at the original crash log, I suspect something is not correct
> > with the irq portion, probably in the DT or the driver. 
> > 
> > "genirq: Flags mismatch irq 28.  (edma) vs.  (edma)"
> > 
> > The warning below that is coming is due to the fail case in edma_probe not
> > tracking the request_irq status properly, but IMO that's a separate issue.
> 
> It is a separate issue, indeed. My ideal goal was to avoid changing
> anything in this existing davinci dma implementation, which is why the
> error paths were unmodified. Since I'm having to rework a few more things
> I'll look at those and generate an improved version.
> 
> Russ Dill also made some good simplification/cleanup suggestions for the
> OF parsing on IRC, which I'll incorporate in the next version.

Ok, sounds good.

Regards,
Vaibhav
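The request_irq error-path issue touched on above lends itself to devres: if
the IRQs are requested with devm_request_irq(), a later probe failure needs no
manual unwinding. A minimal sketch, assuming a hypothetical helper name and
reusing the driver's existing handler names (this is not the actual edma.c
code):

#include <linux/interrupt.h>
#include <linux/platform_device.h>

/* Hypothetical sketch: with devm_request_irq() both IRQs are released
 * automatically when probe fails, so the fail path no longer has to
 * remember which request_irq() calls succeeded. */
static irqreturn_t dma_irq_handler(int irq, void *data);	/* existing handler */
static irqreturn_t dma_ccerr_handler(int irq, void *data);	/* existing handler */

static int edma_request_irqs(struct platform_device *pdev, int irq, int err_irq)
{
	struct device *dev = &pdev->dev;
	int ret;

	ret = devm_request_irq(dev, irq, dma_irq_handler, 0, "edma", dev);
	if (ret)
		return ret;		/* nothing acquired yet, nothing to undo */

	/* If this fails, devres frees the first IRQ as probe unwinds. */
	return devm_request_irq(dev, err_irq, dma_ccerr_handler, 0, "edma_error", dev);
}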


Re: process hangs on do_exit when oom happens

2012-10-21 Thread Balbir Singh
On Mon, Oct 22, 2012 at 7:46 AM, Qiang Gao  wrote:
> I don't know whether the process will exit eventually, but this stack lasts
> for hours, which is obviously abnormal.
> The situation: we use a command called "cglimit" to fork-and-exec the worker
> process, and "cglimit" sets some limits on the worker via cgroups. For now we
> limit the memory, and we also use the cpu cgroup, but with no limit, so when
> the worker is running the cgroup directories look like the following:
>
> /cgroup/memory/worker : this directory limits the memory
> /cgroup/cpu/worker : no limit, but the worker process is in it.
>
> For some reason (some other process we didn't consider), the worker process
> triggered the global oom-killer, not the cgroup oom-killer. Then the worker
> process hangs there.
>
> Actually, if we don't put the worker process into the cpu cgroup, this
> never happens.
>

You said you don't use CPU limits, right? Can you also send the output
of /proc/sched_debug and your /etc/cgconfig.conf? If the OOM is not
caused by the cgroup memory limit and the global system is under memory
pressure, 2.6.32 can trigger a global OOM.

Also

1. Have you turned off swapping (it seems like it)?
2. Do you have a NUMA policy set up for this task?

Can you also share the .config (I'm not sure whether any special patches
are being used) for the kernel version you've mentioned.

Balbir


Re: process hangs on do_exit when oom happens

2012-10-21 Thread Qiang Gao
I don't know whether the process will exit eventually, but this stack
lasts for hours, which is obviously abnormal.
The situation: we use a command called "cglimit" to fork-and-exec the
worker process, and "cglimit" sets some limits on the worker via
cgroups. For now we limit the memory, and we also use the cpu cgroup,
but with no limit, so when the worker is running the cgroup directories
look like the following:

/cgroup/memory/worker : this directory limits the memory
/cgroup/cpu/worker : no limit, but the worker process is in it.

For some reason (some other process we didn't consider), the worker
process triggered the global oom-killer, not the cgroup oom-killer.
Then the worker process hangs there.

Actually, if we don't put the worker process into the cpu cgroup, this
never happens.

On Sat, Oct 20, 2012 at 12:04 AM, Michal Hocko  wrote:
>
> On Wed 17-10-12 18:23:34, gaoqiang wrote:
> > I couldn't find anything useful with Google, so I'm here for help.
> >
> > When this happens: I use memcg to limit the memory use of a
> > process, and when the memcg cgroup ran out of memory,
> > the process was oom-killed. However, it cannot really complete
> > exiting. Here is some information:
>
> How many tasks are in the group and what kind of memory do they use?
> Is it possible that you were hit by the same issue as described in commit
> 79dfdacc ("memcg: make oom_lock 0 and 1 based rather than counter")?
>
> > OS version: CentOS 6.2, kernel 2.6.32-220.7.1
>
> Your kernel is quite old and you should probably be asking your
> distribution to help you out. There have been many fixes since 2.6.32.
> Are you able to reproduce the same issue with the current vanilla kernel?
>
> > /proc/pid/stack
> > ---
> >
> > [] __cond_resched+0x2a/0x40
> > [] unmap_vmas+0xb49/0xb70
> > [] exit_mmap+0x7e/0x140
> > [] mmput+0x58/0x110
> > [] exit_mm+0x11d/0x160
> > [] do_exit+0x1ad/0x860
> > [] do_group_exit+0x41/0xb0
> > [] get_signal_to_deliver+0x1e8/0x430
> > [] do_notify_resume+0xf4/0x8b0
> > [] int_signal+0x12/0x17
> > [] 0x
>
> This looks strange because this is just the exit path, which shouldn't
> deadlock or anything. Is this stack stable? Have you tried checking it
> a few more times?
>
> --
> Michal Hocko
> SUSE Labs


Re: [RFC PATCH 4/4] sdio: pm: set device's power state after driver runtime suspended it

2012-10-21 Thread Aaron Lu
On 10/22/2012 03:57 AM, Rafael J. Wysocki wrote:
> On Saturday 20 of October 2012 15:15:41 Aaron Lu wrote:
>> On Fri, Oct 19, 2012 at 08:08:38PM +0200, Rafael J. Wysocki wrote:
>>> On Friday 19 of October 2012 01:39:25 Rafael J. Wysocki wrote:
 On Friday 12 of October 2012 11:12:41 Aaron Lu wrote:
> In the sdio bus level runtime callback function, after calling the driver's
> runtime suspend callback, we check whether the device supports platform
> level power management, and if so, a proper power state is chosen by the
> corresponding platform callback and then set.
>
> Platform level runtime wakeup is also set up: if the device is enabled for
> runtime wakeup by its driver, the platform arms it with the ability to
> generate a wakeup event.
>
> Signed-off-by: Aaron Lu 
> ---
>  drivers/mmc/core/sdio_bus.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 47 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/mmc/core/sdio_bus.c b/drivers/mmc/core/sdio_bus.c
> index aaec9e2..d83dea8 100644
> --- a/drivers/mmc/core/sdio_bus.c
> +++ b/drivers/mmc/core/sdio_bus.c
> @@ -23,6 +23,7 @@
>  
>  #include "sdio_cis.h"
>  #include "sdio_bus.h"
> +#include "sdio.h"
>  #include "sdio_acpi.h"
>  
>  /* show configuration fields */
> @@ -194,10 +195,54 @@ static int sdio_bus_remove(struct device *dev)
>  }
>  
>  #ifdef CONFIG_PM
> +
> +static int sdio_bus_runtime_suspend(struct device *dev)
> +{
> + int ret;
> + sdio_power_t state;
> +
> + ret = pm_generic_runtime_suspend(dev);
> + if (ret)
> + goto out;
> +
> + if (!platform_sdio_power_manageable(dev))
> + goto out;
> +
> + platform_sdio_run_wake(dev, true);
> +
> + state = platform_sdio_choose_power_state(dev);
> + if (state == SDIO_POWER_ERROR) {
> + ret = -EIO;
> + goto out;
> + }
> +
> + ret = platform_sdio_set_power_state(dev, state);
> +
> +out:
> + return ret;
> +}
> +
> +static int sdio_bus_runtime_resume(struct device *dev)
> +{
> + int ret;
> +
> + if (platform_sdio_power_manageable(dev)) {
> + platform_sdio_run_wake(dev, false);
> + ret = platform_sdio_set_power_state(dev, SDIO_D0);
> + if (ret)
> + goto out;
> + }
> +
> + ret = pm_generic_runtime_resume(dev);
> +
> +out:
> + return ret;
> +}
> +

 Most likely we will need to make analogous changes for other bus types that
 don't support power management natively, like platform, SPI, I2C etc.  In all
 of them the _runtime_suspend() and _runtime_resume() routines will look
 almost exactly the same except for the platform_sdio_ prefix.

 For this reason, I think it would be better to simply define two functions
 acpi_pm_runtime_suspend() and acpi_pm_runtime_resume() that will do all of
 the ACPI-specific operations related to runtime suspend/resume.  Then, we
 will be able to use these functions for all of the bus types in question
 in the same way (we may also need to add analogous functions for system
 suspend/resume handling).
>>>
>>> Something like in the (totally untested) patch below.
>>
>> Looks good to me.
>> I'll test the code and put it into v2 of the patchset with your
>> sign-off, is it OK?
> 
> I'd rather do it a bit differently in the signed-off version (I'm working
> on these patches, they should be ready around Tuesday), but if you can test

OK, thanks.

> it in its current form, that'd be useful too.

I was planning to test it some time later, so it looks like I can directly
test your signed-off version :-)

Thanks,
Aaron
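If the ACPI-specific steps are factored into the acpi_pm_runtime_suspend()/
acpi_pm_runtime_resume() helpers proposed above, the sdio bus callbacks would
collapse to roughly the sketch below. This is only an illustration of the idea
under that assumption; the helpers do not exist yet and this is not Rafael's
actual patch:

#include <linux/pm_runtime.h>

static int sdio_bus_runtime_suspend(struct device *dev)
{
	int ret;

	/* Run the driver's own runtime suspend callback first. */
	ret = pm_generic_runtime_suspend(dev);
	if (ret)
		return ret;

	/* Proposed helper: choose/set the ACPI power state and arm wakeup. */
	return acpi_pm_runtime_suspend(dev);
}

static int sdio_bus_runtime_resume(struct device *dev)
{
	int ret;

	/* Proposed helper: put the device back into D0 and disarm wakeup. */
	ret = acpi_pm_runtime_resume(dev);
	if (ret)
		return ret;

	return pm_generic_runtime_resume(dev);
}

The same pair of callbacks could then be reused essentially verbatim by the
platform, SPI and I2C bus types mentioned above.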



Re: [PATCH v1] mmc: fix async request mechanism for sequential read scenarios

2012-10-21 Thread Per Forlin
On Mon, Oct 15, 2012 at 5:36 PM, Konstantin Dorfman
 wrote:
> The main assumption of the async request design is that the file
> system adds block requests to the block device queue asynchronously
> without waiting for completion (see the Rationale section of
> https://wiki.linaro.org/WorkingGroups/Kernel/Specs
> /StoragePerfMMC-async-req).
>
> We found out that in case of sequential read operations this is not
> the case, due to the read ahead mechanism.
> When mmcqd reports completion of a request, there should be
> a context switch to allow the insertion of the next read-ahead BIOs
> into the block layer. Since mmcqd tries to fetch another request
> immediately after the completion of the previous request it gets NULL
> and starts waiting for the completion of the previous request.
> This wait on completion gives the FS the opportunity to insert the next
> request but the MMC layer is already blocked on the previous request
> completion and is not aware of the new request waiting to be fetched.
I thought that I could trigger a context switch in order to give
execution time for the FS to add the new request to the MMC queue.
I made a simple hack to call yield() in case the request gets NULL. I
thought it may give the FS layer enough time to add a new request to
the MMC queue. This would not delay the MMC transfer since the yield()
is done in parallel with an ongoing transfer. Anyway it was just meant
to be a simple test.

One yield() was not enough. Just as a sanity check I added an msleep() as
well, and that was enough to let the FS add a new request. Would it be
possible to gain throughput by delaying the fetch of the new request, to
avoid unnecessary NULL requests?

If (ongoing request is read AND size is max read ahead AND new request
is NULL) yield();
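
In C, that heuristic might look roughly like the sketch below; the function,
its arguments and the threshold are illustrative only, not the actual
mmc_queue_thread() code:

#include <linux/blkdev.h>
#include <linux/sched.h>

/* Illustrative only: give the FS one scheduling slot to queue the next
 * read-ahead request before mmcqd blocks on the previous completion. */
static void mmc_maybe_yield_for_readahead(struct request *prev_req,
					  struct request *next_req,
					  unsigned int max_read_ahead_bytes)
{
	if (next_req || !prev_req)
		return;

	if (rq_data_dir(prev_req) == READ &&
	    blk_rq_bytes(prev_req) >= max_read_ahead_bytes)
		yield();	/* runs in parallel with the ongoing transfer */
}

As noted above, a single yield() was not always enough in practice, so a real
fix would need a firmer handshake with the block layer than this.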

BR
Per

>
> This patch fixes the above behavior and allows the async request mechanism
> to work in case of sequential read scenarios.
> The main idea is to replace the blocking wait for a completion with an
> event driven mechanism and add a new event of new_request.
> When the block layer notifies the MMC layer on a new request, we check
> for the above case where MMC layer is waiting on a previous request
> completion and the current request is NULL.
> In such a case the new_request event will be triggered to wake up
> the waiting thread. The MMC layer will then fetch the new request
> and, after its preparation, will go back to waiting on the completion.
>
> Our tests showed that this fix improves the read sequential throughput
> by 16%.
>
> Signed-off-by: Konstantin Dorfman 
>
> diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
> index 172a768..4d6431b 100644
> --- a/drivers/mmc/card/block.c
> +++ b/drivers/mmc/card/block.c
> @@ -112,17 +112,6 @@ struct mmc_blk_data {
>
>  static DEFINE_MUTEX(open_lock);
>
> -enum mmc_blk_status {
> -   MMC_BLK_SUCCESS = 0,
> -   MMC_BLK_PARTIAL,
> -   MMC_BLK_CMD_ERR,
> -   MMC_BLK_RETRY,
> -   MMC_BLK_ABORT,
> -   MMC_BLK_DATA_ERR,
> -   MMC_BLK_ECC_ERR,
> -   MMC_BLK_NOMEDIUM,
> -};
> -
>  module_param(perdev_minors, int, 0444);
>  MODULE_PARM_DESC(perdev_minors, "Minors numbers to allocate per device");
>
> @@ -1224,6 +1213,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
> }
>
> mqrq->mmc_active.mrq = &brq->mrq;
> +   mqrq->mmc_active.mrq->sync_data = &mq->sync_data;
> mqrq->mmc_active.err_check = mmc_blk_err_check;
>
> mmc_queue_bounce_pre(mqrq);
> @@ -1284,9 +1274,12 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
> areq = &mq->mqrq_cur->mmc_active;
> } else
> areq = NULL;
> -   areq = mmc_start_req(card->host, areq, (int *) &status);
> -   if (!areq)
> +   areq = mmc_start_data_req(card->host, areq, (int *)&status);
> +   if (!areq) {
> +   if (status == MMC_BLK_NEW_REQUEST)
> +   return status;
> return 0;
> +   }
>
> mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
> brq = &mq_rq->brq;
> @@ -1295,6 +1288,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
> mmc_queue_bounce_post(mq_rq);
>
> switch (status) {
> +   case MMC_BLK_NEW_REQUEST:
> +   BUG_ON(1); /* should never get here */
> case MMC_BLK_SUCCESS:
> case MMC_BLK_PARTIAL:
> /*
> @@ -1367,7 +1362,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
>  * prepare it again and resend.
>  */
> mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq);
> -   mmc_start_req(card->host, &mq_rq->mmc_active, NULL);
> +   mmc_start_

Re: [PATCH] mmc: fix async request mechanism for sequential read scenarios

2012-10-21 Thread Per Forlin
On Sun, Oct 14, 2012 at 6:17 PM, Konstantin Dorfman
 wrote:
> On Thu, 11 Oct 2012 17:19:01 +0200, Per Forlin 
> wrote:
> Hello Per,
>
>>I would like to start with some basic comments.
>>
>>1. Is this read sequential issue specific to MMC?
>>2. Or is it common with all other block-drivers that gets data from
>>the block layer (SCSI/SATA etc) ?
>>If (#2) can the issue be addressed inside the block layer instead?
>>
>>BR
>>Per
> This issue is specific to MMC; other block drivers are probably not using
> the MMC mechanism for async requests (or they have more kernel threads for
> processing incoming blk requests).
> I think, since MMC actively fetches requests from the block layer queue,
> the solution has nothing to do with the block layer context.
>
>>
>>On Tue, Oct 2, 2012 at 5:39 PM, Konstantin Dorfman
>> wrote:
>>> The main assumption of the async request design is that the file
>>> system adds block requests to the block device queue asynchronously
>>> without waiting for completion (see the Rationale section of
>>> https://wiki.linaro.org/WorkingGroups/Kernel/Specs
>>> /StoragePerfMMC-async-req).
>>>
>>> We found out that in case of sequential read operations this is not
>>> the case, due to the read ahead mechanism.
>>Would it be possible to improve this mechanism to achieve the same result?
>>Allow an outstanding read ahead request on top of the current ongoing one.
>>
>
> I need to look into this mechanism, but at first glance such
> behaviour may be the result of libc/vfs/fs decisions and too complex
> compared to the patch we are talking about.
One observation I have made is that setting the mmc_req_size to half of
READ_AHEAD changes the way the block layer adds requests to the MMC
queue.

Extract from 
https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req#Unresolved_issues

Forcing the mmc host driver to set mmc_req_size to 64k results in this behaviour.

dd if=/dev/mmcblk0 of=/dev/null bs=4k count=256
 [mmc_queue_thread] req d955f9b0 blocks 32
 [mmc_queue_thread] req   (null) blocks 0
 [mmc_queue_thread] req   (null) blocks 0
 [mmc_queue_thread] req d955f9b0 blocks 64
 [mmc_queue_thread] req   (null) blocks 0
 [mmc_queue_thread] req d955f8d8 blocks 128
 [mmc_queue_thread] req   (null) blocks 0
 [mmc_queue_thread] req d955f9b0 blocks 128
 [mmc_queue_thread] req d955f800 blocks 128
 [mmc_queue_thread] req d955f8d8 blocks 128
 [mmc_queue_thread] req d955fec0 blocks 128
 [mmc_queue_thread] req d955f800 blocks 128
 [mmc_queue_thread] req d955f9b0 blocks 128
 [mmc_queue_thread] req d967cd30 blocks 128


This shows that the block layer can add requests in a more asynchronous
manner. I have not investigated that mechanism enough to say what can
be done.
Do you have an explanation for why the block layer behaves like this?
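
For reference, capping the per-request size as described above would normally
be done where the host driver sets up its mmc_host geometry in probe; a rough
sketch with illustrative values (64 KiB requests, 512-byte blocks), not taken
from any particular host driver:

#include <linux/mmc/host.h>

/* Illustrative only: limit every block-layer request handed to this host
 * to 64 KiB, which is what produced the dd trace shown above. */
static void example_limit_request_size(struct mmc_host *mmc)
{
	mmc->max_req_size = 64 * 1024;			/* bytes per request */
	mmc->max_blk_size = 512;			/* bytes per block */
	mmc->max_blk_count = mmc->max_req_size / 512;	/* blocks per request */
	mmc->max_seg_size = mmc->max_req_size;		/* one segment can span it all */
}

The MMC queue setup derives its block-layer limits from these host fields,
which is why such a cap changes how requests reach mmcqd.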

BR
Per

>
>
> --
> Konstantin Dorfman,
> QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. is a member
> of Code Aurora Forum, hosted by The Linux Foundation
>


Re: [RFC PATCH 4/4] sdio: pm: set device's power state after driver runtime suspended it

2012-10-21 Thread Rafael J. Wysocki
On Saturday 20 of October 2012 15:15:41 Aaron Lu wrote:
> On Fri, Oct 19, 2012 at 08:08:38PM +0200, Rafael J. Wysocki wrote:
> > On Friday 19 of October 2012 01:39:25 Rafael J. Wysocki wrote:
> > > On Friday 12 of October 2012 11:12:41 Aaron Lu wrote:
> > > > In the sdio bus level runtime callback function, after calling the driver's
> > > > runtime suspend callback, we check whether the device supports platform
> > > > level power management, and if so, a proper power state is chosen by the
> > > > corresponding platform callback and then set.
> > > >
> > > > Platform level runtime wakeup is also set up: if the device is enabled for
> > > > runtime wakeup by its driver, the platform arms it with the ability to
> > > > generate a wakeup event.
> > > > 
> > > > Signed-off-by: Aaron Lu 
> > > > ---
> > > >  drivers/mmc/core/sdio_bus.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++--
> > > >  1 file changed, 47 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/drivers/mmc/core/sdio_bus.c b/drivers/mmc/core/sdio_bus.c
> > > > index aaec9e2..d83dea8 100644
> > > > --- a/drivers/mmc/core/sdio_bus.c
> > > > +++ b/drivers/mmc/core/sdio_bus.c
> > > > @@ -23,6 +23,7 @@
> > > >  
> > > >  #include "sdio_cis.h"
> > > >  #include "sdio_bus.h"
> > > > +#include "sdio.h"
> > > >  #include "sdio_acpi.h"
> > > >  
> > > >  /* show configuration fields */
> > > > @@ -194,10 +195,54 @@ static int sdio_bus_remove(struct device *dev)
> > > >  }
> > > >  
> > > >  #ifdef CONFIG_PM
> > > > +
> > > > +static int sdio_bus_runtime_suspend(struct device *dev)
> > > > +{
> > > > +   int ret;
> > > > +   sdio_power_t state;
> > > > +
> > > > +   ret = pm_generic_runtime_suspend(dev);
> > > > +   if (ret)
> > > > +   goto out;
> > > > +
> > > > +   if (!platform_sdio_power_manageable(dev))
> > > > +   goto out;
> > > > +
> > > > +   platform_sdio_run_wake(dev, true);
> > > > +
> > > > +   state = platform_sdio_choose_power_state(dev);
> > > > +   if (state == SDIO_POWER_ERROR) {
> > > > +   ret = -EIO;
> > > > +   goto out;
> > > > +   }
> > > > +
> > > > +   ret = platform_sdio_set_power_state(dev, state);
> > > > +
> > > > +out:
> > > > +   return ret;
> > > > +}
> > > > +
> > > > +static int sdio_bus_runtime_resume(struct device *dev)
> > > > +{
> > > > +   int ret;
> > > > +
> > > > +   if (platform_sdio_power_manageable(dev)) {
> > > > +   platform_sdio_run_wake(dev, false);
> > > > +   ret = platform_sdio_set_power_state(dev, SDIO_D0);
> > > > +   if (ret)
> > > > +   goto out;
> > > > +   }
> > > > +
> > > > +   ret = pm_generic_runtime_resume(dev);
> > > > +
> > > > +out:
> > > > +   return ret;
> > > > +}
> > > > +
> > > 
> > > Most likely we will need to make analogous changes for other bus types that
> > > don't support power management natively, like platform, SPI, I2C etc.  In all
> > > of them the _runtime_suspend() and _runtime_resume() routines will look
> > > almost exactly the same except for the platform_sdio_ prefix.
> > > 
> > > For this reason, I think it would be better to simply define two functions
> > > acpi_pm_runtime_suspend() and acpi_pm_runtime_resume() that will do all of
> > > the ACPI-specific operations related to runtime suspend/resume.  Then, we
> > > will be able to use these functions for all of the bus types in question
> > > in the same way (we may also need to add analogous functions for system
> > > suspend/resume handling).
> > 
> > Something like in the (totally untested) patch below.
> 
> Looks good to me.
> I'll test the code and put it into v2 of the patchset with your
> sign-off, is it OK?

I'd rather do it a bit differently in the signed-off version (I'm working
on these patches, they should be ready around Tuesday), but if you can test
it in its current form, that'd be useful too.

Thanks,
Rafael


-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.