Re: Fwd: [PATCH v3 0/5] mmc: Add access to RPMB partition

2012-11-15 Thread Linus Walleij
On Wed, Nov 14, 2012 at 10:14 PM, Krishna Konda kko...@codeaurora.org wrote:
 On Wed, 2012-11-14 at 12:58 -0800, Krishna Konda wrote:

 The goal of this patch series is to offer access to the MMC RPMB
 (Replay Protected Memory Block) partition.
 The RPMB partition is generally used to store secure data.
 It is accessible through a trusted mechanism described in
 JEDEC standard JESD84-A441.

 Hi Loic, Chris, are there any plans to merge these patchsets? I did not
 see this in mmc-next.

I was sort of wondering the same.

Krishna, could you provide your Acked-by/Reviewed-by
tag so as to convince Chris that this is a nice feature?

Yours,
Linus Walleij


[PATCH] mmc: sdhci: check the quirk HISPD_BIT for SD3.0 card

2012-11-15 Thread Jaehoon Chung
If the SD card is an SD3.0 (or later) card, ctrl is set for the UHS-I modes.
But in the case of Samsung SoCs, SDHCI_CTRL_HISPD is not used.
So we need to check the SDHCI_QUIRK_NO_HISPD_BIT quirk.

Signed-off-by: Jaehoon Chung jh80.ch...@samsung.com
Signed-off-by: Kyungmin Park kyungmin.p...@samsung.com
---
 drivers/mmc/host/sdhci.c |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 6f6534e..b42d551 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -1418,11 +1418,12 @@ static void sdhci_do_set_ios(struct sdhci_host *host, 
struct mmc_ios *ios)
unsigned int clock;
 
/* In case of UHS-I modes, set High Speed Enable */
-   if ((ios->timing == MMC_TIMING_MMC_HS200) ||
+   if (((ios->timing == MMC_TIMING_MMC_HS200) ||
    (ios->timing == MMC_TIMING_UHS_SDR50) ||
    (ios->timing == MMC_TIMING_UHS_SDR104) ||
    (ios->timing == MMC_TIMING_UHS_DDR50) ||
-   (ios->timing == MMC_TIMING_UHS_SDR25))
+   (ios->timing == MMC_TIMING_UHS_SDR25)) &&
+   !(host->quirks & SDHCI_QUIRK_NO_HISPD_BIT))
ctrl |= SDHCI_CTRL_HISPD;
 
ctrl_2 = sdhci_readw(host, SDHCI_HOST_CONTROL2);
-- 
1.7.4.1


RE: [PATCH v8 2/2] mmc: support packed write command for eMMC4.5 device

2012-11-15 Thread Seungwon Jeon
Hi Maya,

Thanks for the reminder.
I'm struggling with something, and I haven't checked it further since v8.

Hi Chris,

As we know, NAND flash write is slower than read, because write is a costly
operation.
A write may require a block erase. Of course, if there is a free block to
write to, no erase happens.
In the worst case, a write may involve copying existing data as well as
erasing a block.
So, in NAND block management, many small writes are disadvantageous.
That is why, when writing large amounts of data at once, eMMC gets a
significant speed advantage.
Packed write helps here by gathering the data, and we actually measured a
performance gain.

There are several bus speed modes; in HS200 mode, for example, the clock
runs at up to 200MHz.
However, that has little or no effect on write performance, and the async
transfer patch is in the same situation.
I think packed write is the candidate that can improve write performance
right now.
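
To illustrate the idea only (a rough sketch with made-up structures; this is
not the real eMMC4.5 packed header layout or the kernel implementation):
several small writes are described by one header and then issued as a single
large transfer, so the device sees one big write instead of many small ones.

	/* Illustrative sketch only: gather small writes into one packed transfer. */
	struct packed_entry {
		unsigned int sector;		/* start sector of one small write */
		unsigned int nr_sectors;	/* length of that write */
	};

	struct packed_hdr {
		unsigned int nr_entries;	/* how many writes are packed */
		struct packed_entry entry[16];	/* hypothetical per-transfer limit */
	};

	/* Pack up to 16 queued writes; returns how many were actually packed. */
	static int pack_writes(struct packed_hdr *hdr,
			       const struct packed_entry *queued, int nr_queued)
	{
		int i;

		for (i = 0; i < nr_queued && i < 16; i++)
			hdr->entry[i] = queued[i];
		hdr->nr_entries = i;
		return i;
	}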

If you want to measure the write improvement, you need to check the
following two parameters:
/sys/block/mmcblk0/queue/max_hw_sectors_kb
/sys/block/mmcblk0/queue/max_sectors_kb
'max_hw_sectors_kb' is the maximum request size the host controller can
gather in the device driver.
If this size is already large enough, the effect of packing may not be
noticeable.

Thanks,
Seungwon Jeon
On Wednesday, November 14, 2012, me...@codeaurora.org wrote:
 Hi Chris,
 
 The amount of improvement from the packed commands, as from any other
 eMMC4.5 feature, depends on several parameters:
 1. The card support of this feature. If the card supports only the feature
 interface, then you'll see no improvement when using the feature.
 2. The benchmark tool used. Since the packed command preparation is
 stopped due to a FLUSH request, a benchmark that issues many FLUSH
 requests can result in a small amount of packing and you will see no
 improvement.
 
 You can use the following patch to get the packed commands statistics:
 http://marc.info/?l=linux-mmcm=134374508625826w=2
 With this patch you will be able to see the amount of packing and what
 caused the packed preparation to stop.
 
 We tested the packed commands feature with SanDisk cards and got
 improvement of 30% when using lmdd and tiotest. We don't use iozone for
 sequential tests but if you'll send me the exact command that you use we
 can try it as well.
 
 It is true that packed commands can cause read degradation in read-write
 collisions. However, it is only natural that with a longer write request a
 read request has to wait longer and its latency will increase. I believe
 that it is not our duty to decide whether this is a reason to exclude the
 feature. Everyone should make their own decision on whether they want to
 benefit from the write improvement while risking the read-write collision
 scenarios.
 eMMC4.5 introduces the HPI and stop transmission to overcome the
 degradation of read latency due to write (regardless of the packed
 commands).
 The packing control is our own enhancement that we believe can also be
 used to overcome this degradation. It is tunable and requires a specific
 enabling, so it can be the developer’s decision whether to use it or not.
 Since it is not a standard feature we can discuss separately if it should
 be accepted or not and what is the best way to use it.
 
 Packed commands are not the only eMMC4.5 feature that can cause degradation
 in specific scenarios. If we look at the cache feature, it causes
 degradation of almost a half in random operations when FLUSH is being
 used.
 When using the following iozone command with cache enabled, you will see
 degradation in the iozone results:
 ./data/iozone -i0 -i2 -r4k -s50m -O -o -I -f /data/mmc0/file3
 However, cache support was accepted regardless of this degradation, and it
 is the developer’s responsibility to decide whether to use this feature or not.
 
 To summarize, all eMMC4.5 features that were added are tunable and
 disabled by default.
 I believe that whoever enables a certain feature will do all the required
 testing to determine whether they can benefit from it in their own
 environment.
 
 Thanks,
 Maya
 
 On Tue, November 13, 2012 6:54 pm, Chris Ball wrote:
  Hi Maya,
 
  On Sun, Nov 04 2012, me...@codeaurora.org wrote:
  Packed commands is a mandatory eMMC4.5 feature and is supported by all
 the card vendors.
 
  We're still only talking about using packed writes, though, right?
 
  It was proven to be beneficial for eMMC4.5 cards and harmless for
  non-eMMC4.5 cards.
 
  My understanding is that write packing causes a regression in read
 performance that can be tuned/fixed by your num_wr_reqs_to_start_packing
 tunable (and read packing causes a read regression with current eMMC 4.5
 cards).  Is that wrong?
 
  I don't see a point in holding it back while it can be enabled or
  disabled by a flag, and most of the code it adds is guarded in specific
 functions and is not active when packed commands are disabled.
 
  Earlier in the thread I 

RE: [PATCH v3] mmc: fix async request mechanism for sequential read scenarios

2012-11-15 Thread Seungwon Jeon
I checked this commit.
v3 doesn't work with the dw_mmc host driver.
It produces a driver warning, while v1 is fine.
I'm looking for the reason. And could you check the minor point below?

On Tuesday, November 13, 2012, Seungwon Jeon tgih@samsung.com
 Hi Konstantin,
 
 I'm looking into this patch.
 Idea looks good as a whole.
 
 On Tuesday, November 13, 2012, Konstantin Dorfman kdorf...@codeaurora.org 
 wrote:
   When the current request is running on the bus and the next request
   fetched by mmcqd is NULL, the mmc context (mmcqd thread) gets blocked
   until the current request completes. This means that if a new request
   comes in while the mmcqd thread is blocked, this new request cannot be
   prepared in parallel with the current ongoing request. This may add
   latency to the start of the new request.
 
   This change allows waking up the MMC thread (which is waiting for the
   current running request to complete). Once the MMC thread is woken up,
   the new request can be fetched and prepared in parallel with the current
   running request, which means the new request can be started immediately
   after the current running request completes.
 
  With this change read throughput is improved by 16%.
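 
   (For illustration only, not the actual patch code: a rough sketch of the
   wait loop this description implies, using the context_info/is_new_req
   names from the patch and changelog; is_done_rcv, the wait queue and the
   function name are assumed.)
 
	/* Sketch: block until the running request finishes OR a new request arrives. */
	static int mmc_wait_for_data_req_done(struct mmc_context_info *context)
	{
		while (1) {
			wait_event_interruptible(context->wait,
						 context->is_done_rcv ||
						 context->is_new_req);
			if (context->is_done_rcv) {
				context->is_done_rcv = false;
				return 0;	/* current request completed */
			}
			if (context->is_new_req)
				return MMC_BLK_NEW_REQUEST; /* go fetch/prepare it */
		}
	}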
 
  Signed-off-by: Konstantin Dorfman kdorf...@codeaurora.org
  ---
  v3:
  - new MMC_QUEUE_NEW_REQUEST flag to mark new request case
  - lock added to update is_new_req flag
  - condition for sending new req notification changed
  - Moved start waiting for new req notification after
fetching NULL
  v2:
  - Removed synchronization flags
  - removed lock from done()
  - flags names changed
  v1:
  - Initial submit
 
  diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
  index 172a768..34d8bd9 100644
  --- a/drivers/mmc/card/block.c
  +++ b/drivers/mmc/card/block.c
  @@ -112,17 +112,6 @@ struct mmc_blk_data {
 
   static DEFINE_MUTEX(open_lock);
 
  -enum mmc_blk_status {
  -   MMC_BLK_SUCCESS = 0,
  -   MMC_BLK_PARTIAL,
  -   MMC_BLK_CMD_ERR,
  -   MMC_BLK_RETRY,
  -   MMC_BLK_ABORT,
  -   MMC_BLK_DATA_ERR,
  -   MMC_BLK_ECC_ERR,
  -   MMC_BLK_NOMEDIUM,
  -};
  -
   module_param(perdev_minors, int, 0444);
    MODULE_PARM_DESC(perdev_minors, "Minors numbers to allocate per device");
 
  @@ -1225,6 +1214,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req 
  *mqrq,
 
   mqrq->mmc_active.mrq = &brq->mrq;
   mqrq->mmc_active.err_check = mmc_blk_err_check;
   +   mqrq->mmc_active.mrq->context_info = &mq->context_info;
 
  mmc_queue_bounce_pre(mqrq);
   }
  @@ -1284,9 +1274,12 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, 
  struct request *rqc)
   areq = &mq->mqrq_cur->mmc_active;
   } else
   areq = NULL;
   -   areq = mmc_start_req(card->host, areq, (int *) &status);
   -   if (!areq)
   +   areq = mmc_start_req(card->host, areq, (int *)&status);
   +   if (!areq) {
   +   if (status == MMC_BLK_NEW_REQUEST)
   +   mq->flags |= MMC_QUEUE_NEW_REQUEST;
   return 0;
   +   }
 
  mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
   brq = &mq_rq->brq;
  @@ -1295,6 +1288,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, 
  struct request *rqc)
  mmc_queue_bounce_post(mq_rq);
 
  switch (status) {
  +   case MMC_BLK_NEW_REQUEST:
  +   BUG(); /* should never get here */
  case MMC_BLK_SUCCESS:
  case MMC_BLK_PARTIAL:
  /*
  @@ -1367,7 +1362,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, 
  struct request *rqc)
   * prepare it again and resend.
   */
  mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq);
   -   mmc_start_req(card->host, &mq_rq->mmc_active, NULL);
   +   mmc_start_req(card->host, &mq_rq->mmc_active,
   +   (int *)&status);
 Here, we'll try to resend the previous request, which did not complete
 successfully.
 And then at the top of the do~while loop, mmc_start_req will be called for
 the current request to get the status.
 The above change looks unnecessary. Is there any reason for it?
 
 Thanks,
 Seungwon Jeon
 
  }
  } while (ret);
 
  @@ -1407,6 +1403,8 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, 
  struct request *req)
  goto out;
  }
 
   +   mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
  +
   if (req && req->cmd_flags & REQ_DISCARD) {
   /* complete ongoing async transfer before issuing discard */
   if (card->host->areq)
  @@ -1426,9 +1424,10 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, 
  struct request *req)
  }
 
   out:
   -   if (!req)
   +   if (!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST))
   /* release host only when there are no more requests */
   mmc_release_host(card->host);
  +
  return ret;
   }
 
  diff --git 

Re: [PATCH] mmc: Standardise capability type

2012-11-15 Thread Linus Walleij
On Wed, Nov 14, 2012 at 1:35 PM, Lee Jones lee.jo...@linaro.org wrote:

 There are discrepancies with regard to how MMC capabilities
 are carried throughout the subsystem. Let's standardise them
 to alleviate any confusion.

 Cc: Chris Ball c...@laptop.org
 Cc: linux-mmc@vger.kernel.org
 Signed-off-by: Lee Jones lee.jo...@linaro.org

Looks good to me, and these are obviously
u32 bitfields by design, so:
Reviewed-by: Linus Walleij linus.wall...@linaro.org

Yours,
Linus Walleij


Re: [PATCH] mmc: Standardise capability type

2012-11-15 Thread Arnd Bergmann
On Thursday 15 November 2012, Linus Walleij wrote:
 
 On Wed, Nov 14, 2012 at 1:35 PM, Lee Jones lee.jo...@linaro.org wrote:
 
   There are discrepancies with regard to how MMC capabilities
   are carried throughout the subsystem. Let's standardise them
   to alleviate any confusion.
 
  Cc: Chris Ball c...@laptop.org
  Cc: linux-mmc@vger.kernel.org
  Signed-off-by: Lee Jones lee.jo...@linaro.org
 
 Looks good to me, and these are obviously
 u32 bitfields by design, so:
 Reviewed-by: Linus Walleij linus.wall...@linaro.org

Acked-by: Arnd Bergmann a...@arndb.de


Re: [PATCH v1] mmc: fix async request mechanism for sequential read scenarios

2012-11-15 Thread Per Förlin
On 11/14/2012 04:15 PM, Konstantin Dorfman wrote:
 Hello Per,
 
 On 11/13/2012 11:10 PM, Per Forlin wrote:
 On Tue, Oct 30, 2012 at 1:19 PM, Konstantin Dorfman
 kdorf...@codeaurora.org wrote:
 Hello,

 On 10/29/2012 11:40 PM, Per Forlin wrote:
 Hi,

  I would like to move the focus of my concerns from root cause analysis
  to the actual patch.
  My first reflection is that the patch is relatively complex and some
  of the code looks duplicated.
  Would it be possible to simplify it and re-use the current execution flow?

 Is the following flow feasible?

 in mmc_start_req():
 --
  if (rqc == NULL && not_resend)
   wait_for_both_mmc_and_arrival_of_new_req
 else
   wait_only_for_mmc

 if (arrival_of_new_req) {
set flag to indicate fetch-new_req
   return NULL;
 }
 -

 in queue.c:
 if (fetch-new_req)
   don't overwrite previous req.

 BR
 Per

  You are talking about using both waiting mechanisms, the old one (wait on
  completion) and the new one, wait_event_interruptible()? But how will the
  done() callback, called in host controller irq context, know which one is
  relevant for the request?
 
  I think it is better to use a single wait point for the mmc thread.

  I have sketched a patch to better explain my point. It's not tested; it
  barely compiles :)
 The patch tries to introduce your feature and still keep the same code
 path. And it exposes an API that could be used by SDIO as well.
 The intention of my sketch patch is only to explain what I tried to
 visualize in the pseudo code previously in this thread.
  The outcome of your final patch should be documented here, I think:
  Documentation/mmc/mmc-async-req.txt
 This document is ready; I'm attaching it to this mail, and it will be
 included in the next version of the patch (or a RESEND).

 Here follows my patch sketch:
 
 diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
 index e360a97..08036a1 100644
 --- a/drivers/mmc/card/queue.c
 +++ b/drivers/mmc/card/queue.c
 @@ -66,6 +66,8 @@ static int mmc_queue_thread(void *d)
   spin_unlock_irq(q->queue_lock);

   if (req || mq->mqrq_prev->req) {
 + if (!req)
 + mmc_prefetch_start(&mq->mqrq_prev->mmc_active, true);
   set_current_state(TASK_RUNNING);
   mq->issue_fn(mq, req);
   } else {
 @@ -79,11 +81,13 @@ static int mmc_queue_thread(void *d)
   }

   /* Current request becomes previous request and vice versa. */
 - mq->mqrq_prev->brq.mrq.data = NULL;
 - mq->mqrq_prev->req = NULL;
 - tmp = mq->mqrq_prev;
 - mq->mqrq_prev = mq->mqrq_cur;
 - mq->mqrq_cur = tmp;
 + if (!mmc_prefetch_pending(&mq->mqrq_prev->mmc_active)) {
 + mq->mqrq_prev->brq.mrq.data = NULL;
 + mq->mqrq_prev->req = NULL;
 + tmp = mq->mqrq_prev;
 + mq->mqrq_prev = mq->mqrq_cur;
 + mq->mqrq_cur = tmp;
 + }
   } while (1);
   up(&mq->thread_sem);

 @@ -109,10 +113,44 @@ static void mmc_request_fn(struct request_queue *q)
   return;
   }

 + if (mq->prefetch_enable) {
 + spin_lock(&mq->prefetch_lock);
 + if (mq->prefetch_completion)
 + complete(mq->prefetch_completion);
 + mq->prefetch_pending = true;
 + spin_unlock(&mq->prefetch_lock);
 + }
 +
   if (!mq->mqrq_cur->req && !mq->mqrq_prev->req)
   wake_up_process(mq->thread);
  }

 +static void mmc_req_init(struct mmc_async_req *areq, struct completion *compl)
 +{
 + struct mmc_queue *mq =
 + container_of(areq->prefetch, struct mmc_queue, prefetch);
 +
 + spin_lock(&mq->prefetch_lock);
 + mq->prefetch_completion = compl;
 + if (mq->prefetch_pending)
 + complete(mq->prefetch_completion);
 + spin_unlock(&mq->prefetch_lock);
 +}
 +
 +static void mmc_req_start(struct mmc_async_req *areq, bool enable)
 +{
 + struct mmc_queue *mq =
 + container_of(areq->prefetch, struct mmc_queue, prefetch);
 + mq->prefetch_enable = enable;
 +}
 +
 +static bool mmc_req_pending(struct mmc_async_req *areq)
 +{
 + struct mmc_queue *mq =
 + container_of(areq->prefetch, struct mmc_queue, prefetch);
 + return mq->prefetch_pending;
 +}
 +
  static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
  {
   struct scatterlist *sg;
 @@ -166,6 +204,12 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
   int ret;
   struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
   struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
 + spin_lock_init(&mq->prefetch_lock);
 + mq->prefetch.wait_req_init = mmc_req_init;
 + mq->prefetch.wait_req_start = mmc_req_start;
 + mq->prefetch.wait_req_pending = mmc_req_pending;
 + mqrq_cur->mmc_active.prefetch = &mq->prefetch;
 + 

[PATCH] mmc: Limit MMC speed to 52MHz if not HS200

2012-11-15 Thread Al Cooper
If the caps2 host capabilities do not indicate support for MMC
HS200, don't allow clock speeds above 52MHz. Currently, for MMC, the
clock speed is set to the lesser of the max speed the eMMC module
supports (card->ext_csd.hs_max_dtr) or the max base clock of the
host controller (host->f_max, based on BASE_CLK_FREQ in the host
CAPS register). This means that a host controller that doesn't
support HS200 mode but has a base clock of 100MHz and an eMMC module
that supports HS200 speeds will end up using a 100MHz clock.
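
As a worked example with the numbers above (assuming the HS200-capable
module reports ext_csd.hs_max_dtr = 200MHz): max_dtr today becomes
min(200MHz, 100MHz) = 100MHz, well above the 52MHz allowed in non-HS200
high-speed mode; with the check below it is clamped to 52MHz in that case.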

Signed-off-by: Al Cooper alcoop...@gmail.com
---
 drivers/mmc/core/mmc.c |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 7cc4638..6d669c3 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1051,6 +1051,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
if (mmc_card_highspeed(card) || mmc_card_hs200(card)) {
if (max_dtr > card->ext_csd.hs_max_dtr)
max_dtr = card->ext_csd.hs_max_dtr;
+   if (mmc_card_highspeed(card) && (max_dtr > 52000000))
+   max_dtr = 52000000;
} else if (max_dtr > card->csd.max_dtr) {
max_dtr = card->csd.max_dtr;
}
-- 
1.7.6




[PATCH] mmc,sdio: Fix the panic due to devname NULL when calling pm_runtime_set_active()

2012-11-15 Thread Chuansheng Liu
Subject: [PATCH] mmc,sdio: Fix the panic due to devname NULL when calling 
pm_runtime_set_active()

We met a panic like the one below:
1[   15.067350] BUG: unable to handle kernel NULL pointer dereference at   
(null)
1[   15.074455] IP: [c1496a42] strlen+0x12/0x20
4[   15.078803] *pde = 
0[   15.081674] Oops:  [#1] PREEMPT SMP
4[   15.101676] Pid: 5, comm: kworker/u:0 Tainted: G C  
3.0.34-140729-g7f9d5c5 #1 Intel Corporation Medfield/BKB2
4[   15.112282] EIP: 0060:[c1496a42] EFLAGS: 00010046 CPU: 0
4[   15.117760] EIP is at strlen+0x12/0x20
4[   15.121496] EAX:  EBX: f344cc04 ECX:  EDX: f344cc04
4[   15.127754] ESI: c12bcee0 EDI:  EBP: f586fe74 ESP: f586fe70
4[   15.134013]  DS: 007b ES: 007b FS: 00d8 GS:  SS: 0068
0[   15.139406] Process kworker/u:0 (pid: 5, ti=f586e000 task=f585b440 
task.ti=f586e000)
0[   15.147140] Stack:
4[   15.149141]  f344cc04 f586feb0 c12bcf12  f586fe9c  
0007 0082
4[   15.156877]  0092 0002 c1b01ee4 f586feb8 c1635f31 f3b42330 
c12bcee0 f344cc04
4[   15.164616]  f586fed0 c152f815 f3b42330 f3b42328  f344cc04 
f589b804 
0[   15.172351] Call Trace:
4[   15.174810]  [c12bcf12] ftrace_raw_event_runtime_pm_status+0x32/0x140
4[   15.181411]  [c1635f31] ? sdio_enable_wide.part.8+0x61/0x80
4[   15.187145]  [c12bcee0] ? perf_trace_runtime_pm_usage+0x1a0/0x1a0
4[   15.193407]  [c152f815] __update_runtime_status+0x65/0x90
4[   15.198968]  [c1531170] __pm_runtime_set_status+0xe0/0x1b0
4[   15.204621]  [c1637366] mmc_attach_sdio+0x2f6/0x410
4[   15.209666]  [c162f520] mmc_rescan+0x240/0x2b0
4[   15.214270]  [c12643ce] process_one_work+0xfe/0x3f0
4[   15.219311]  [c1242754] ? wake_up_process+0x14/0x20
4[   15.224357]  [c162f2e0] ? mmc_detect_card_removed+0x80/0x80
4[   15.230091]  [c12649c1] worker_thread+0x121/0x2f0
4[   15.234958]  [c12648a0] ? rescuer_thread+0x1e0/0x1e0
4[   15.240091]  [c12684cd] kthread+0x6d/0x80
4[   15.244264]  [c1268460] ? __init_kthread_worker+0x30/0x30
4[   15.245485]  [c186dc3a] kernel_thread_helper+0x6/0x10

The reason is that pm_runtime_set_active() is called before the device name
is set; the dev name is only set later, in mmc_add_card().

So when pm_runtime_set_active() is called, it ends up doing strlen() on a
NULL devname, which triggers the panic.

Set the dev name before calling pm_runtime_set_active(). Although this
duplicates what mmc_add_card() does later, it does not break the original
design (commit 81968561b).

Signed-off-by: liu chuansheng chuansheng@intel.com
---
 drivers/mmc/core/sdio.c |9 +
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
index 2273ce6..73746af 100644
--- a/drivers/mmc/core/sdio.c
+++ b/drivers/mmc/core/sdio.c
@@ -1104,6 +1104,15 @@ int mmc_attach_sdio(struct mmc_host *host)
 */
if (host->caps & MMC_CAP_POWER_OFF_CARD) {
/*
+* pm_runtime_set_active() will use strlen(dev_name),
+* so we must set the name in advance to avoid the
+* crash, even though mmc_add_card() sets it again
+* later.
+*/
+   dev_set_name(&card->dev, "%s:%04x", mmc_hostname(card->host),
+   card->rca);
+
+   /*
 * Let runtime PM core know our card is active
 */
err = pm_runtime_set_active(&card->dev);
-- 
1.7.0.4


