[PATCH] mmc: sdhci: fix regulator control on suspend/resume

2011-04-06 Thread Jaehoon Chung
Currently sdhci_resume_host() always enables the regulator after resuming
(if there is a host->vmmc), even if it was not enabled before suspend.
Record at suspend time whether the regulator was enabled; I assume that
if the regulator is enabled, a card is inserted, and then we need to
restore "vmmc" on resume.
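The bookkeeping the patch introduces can be sketched stand-alone (the names host/vmmc/restore_vmmc mirror the kernel structs, but this is an illustrative model, not real sdhci code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the patch's logic: only disable a regulator that
 * was on, remember that fact, and re-enable only in that case on resume. */
struct regulator { bool enabled; };
struct host { struct regulator *vmmc; bool restore_vmmc; };

static void suspend_host(struct host *h)
{
	/* Disable (and remember to restore) only a regulator that was on. */
	if (h->vmmc && h->vmmc->enabled) {
		h->vmmc->enabled = false;
		h->restore_vmmc = true;
	}
}

static void resume_host(struct host *h)
{
	/* Re-enable only if we disabled it ourselves at suspend time. */
	if (h->vmmc && h->restore_vmmc) {
		h->vmmc->enabled = true;
		h->restore_vmmc = false;
	}
}
```

A regulator that was already off before suspend stays off across the cycle, which is the behavior the old unconditional regulator_enable() broke.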

Signed-off-by: Jaehoon Chung 
Signed-off-by: Kyungmin Park 
---
 drivers/mmc/host/sdhci.c  |8 +---
 include/linux/mmc/sdhci.h |1 +
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 9e15f41..70cbbd6 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -1663,8 +1663,10 @@ int sdhci_suspend_host(struct sdhci_host *host, pm_message_t state)
 
free_irq(host->irq, host);
 
-   if (host->vmmc)
+   if (host->vmmc && regulator_is_enabled(host->vmmc)) {
ret = regulator_disable(host->vmmc);
+   host->restore_vmmc = true;
+   }
 
return ret;
 }
@@ -1675,8 +1677,8 @@ int sdhci_resume_host(struct sdhci_host *host)
 {
int ret;
 
-   if (host->vmmc) {
-   int ret = regulator_enable(host->vmmc);
+   if (host->vmmc && host->restore_vmmc) {
+   ret = regulator_enable(host->vmmc);
if (ret)
return ret;
}
diff --git a/include/linux/mmc/sdhci.h b/include/linux/mmc/sdhci.h
index 83bd9f7..1c38261 100644
--- a/include/linux/mmc/sdhci.h
+++ b/include/linux/mmc/sdhci.h
@@ -92,6 +92,7 @@ struct sdhci_host {
const struct sdhci_ops *ops;/* Low level hw interface */
 
struct regulator *vmmc; /* Power regulator */
+   bool restore_vmmc;  /* Restore vmmc */
 
/* Internal data */
struct mmc_host *mmc;   /* MMC structure */
-- 
--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: pre-defined multiple block read/write vs. open ended multiple block read/write

2011-04-06 Thread Gao, Yunpeng
>>> So, the question is, why does the current mmc driver not use the
>>> pre-defined multiple block read/write commands? Any comments?
>>>
>>
>> While I really don't know, I would guess because the SD spec has a
>> different definition of ACMD23 than MMC, so perhaps avoiding
>> ACMD/CMD23 was chosen as the simpler, more common alternative.
>> Alternatively, the difference might imply that at one time it was an
>> optional feature. I'll try digging up an early MMCA spec to confirm...
>
>Ok let me correct myself.
>
>For SD:
>ACMD23 - SET_WR_BLK_ERASE_COUNT may speed up multiple-block writes,
>but CMD12 is still needed after the write.
>CMD23 - optional, but mandatory on UHS104 cards. CMD23 support is noted
>in the SCR register.

Thanks for the response.
I also checked the SD spec; it seems CMD23 was not supported in SD 2.0, but 
was added in SD 3.0 as an optional command and is mandatory on UHS104 cards.

>> At least one new MMC feature now (reliable writes) requires CMD23, and
>> at the very least using SET_BLOCK_COUNT instead of STOP_TRANSMISSION
>> would make that code path more homogeneous. Certainly the overhead is
>> the same for MMC - either send an extra command before or after the
>> transfer.

Agree. Although the open-ended mode has better compatibility for MMC/SD cards,
it does prevent some new eMMC features from being supported,
such as Context Management, the Data Tag Mechanism and so on.

So I think it makes sense to have an alternate way to enable the pre-defined 
multiple block read/write mode.
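The two modes differ only in where the extra command sits. A stand-alone sketch (opcodes per the MMC/SD specs; the helper function itself is illustrative, not a kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Open-ended multi-block transfers must be closed with CMD12
 * (STOP_TRANSMISSION) afterwards; pre-defined transfers announce the
 * block count up front with CMD23 (SET_BLOCK_COUNT) and need no CMD12. */
#define MMC_SET_BLOCK_COUNT      23
#define MMC_READ_MULTIPLE_BLOCK  18
#define MMC_STOP_TRANSMISSION    12

static size_t build_read_sequence(int use_cmd23, int seq[3])
{
	size_t n = 0;

	if (use_cmd23)
		seq[n++] = MMC_SET_BLOCK_COUNT;   /* arg = block count */
	seq[n++] = MMC_READ_MULTIPLE_BLOCK;
	if (!use_cmd23)
		seq[n++] = MMC_STOP_TRANSMISSION; /* close open-ended transfer */
	return n;
}
```

Both sequences are two commands long, which matches the point above that the overhead is the same either way: one extra command before or after the data transfer.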

>> I am actually working on a patch to do just that...
>>
>Which I will send shortly.

Very nice. Glad to hear that.

Thanks.

Regards,
Yunpeng



Re: Boot partition support patch set.

2011-04-06 Thread Andrei Warkentin
On Tue, Apr 5, 2011 at 8:09 PM, Andrei Warkentin  wrote:
> This is the latest version of the MMC device partition support.
> It relies on a few other changes that were discussed recently.
>
> Thanks,
> A
>
> TOC:
>  [PATCH 1/4] MMC: Rename erase_timeout to cmd_timeout.
>  [PATCH 2/4] MMC: SDHCI R1B command handling + MMC_CAP_ERASE.
>  [PATCH 3/4] MMC: Allow setting CMD timeout for CMD6 (SWITCH).
>  [PATCH 4/4] MMC: MMC boot partitions support.
>

Any problems, comments?

A


[PATCH v2 06/12] mmc: move error code in mmc_block_issue_rw_rq to a separate function.

2011-04-06 Thread Per Forlin
Break out code without functional changes. This simplifies the code
and makes way for handling two parallel requests.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  225 +++---
 1 files changed, 132 insertions(+), 93 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index e606dec..f5db000 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -79,6 +79,13 @@ struct mmc_blk_data {
 
 static DEFINE_MUTEX(open_lock);
 
+enum mmc_blk_status {
+   MMC_BLK_SUCCESS = 0,
+   MMC_BLK_RETRY,
+   MMC_BLK_DATA_ERR,
+   MMC_BLK_CMD_ERR,
+};
+
 module_param(perdev_minors, int, 0444);
 MODULE_PARM_DESC(perdev_minors, "Minors numbers to allocate per device");
 
@@ -413,116 +420,148 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
mmc_queue_bounce_pre(mqrq);
 }
 
+static enum mmc_blk_status mmc_blk_get_status(struct mmc_blk_request *brq,
+ struct request *req,
+ struct mmc_card *card,
+ struct mmc_blk_data *md)
+{
+   struct mmc_command cmd;
+   u32 status;
+   enum mmc_blk_status ret = MMC_BLK_SUCCESS;
+
+   /*
+* Check for errors here, but don't jump to cmd_err
+* until later as we need to wait for the card to leave
+* programming mode even when things go wrong.
+*/
+   if (brq->cmd.error || brq->data.error || brq->stop.error) {
+   if (brq->data.blocks > 1 && rq_data_dir(req) == READ) {
+   /* Redo read one sector at a time */
+   printk(KERN_WARNING "%s: retrying using single "
+  "block read, brq %p\n",
+  req->rq_disk->disk_name, brq);
+   ret = MMC_BLK_RETRY;
+   goto out;
+   }
+   status = get_card_status(card, req);
+   }
+
+   if (brq->cmd.error) {
+   printk(KERN_ERR "%s: error %d sending read/write "
+  "command, response %#x, card status %#x\n",
+  req->rq_disk->disk_name, brq->cmd.error,
+  brq->cmd.resp[0], status);
+   }
+
+   if (brq->data.error) {
+   if (brq->data.error == -ETIMEDOUT && brq->mrq.stop)
+   /* 'Stop' response contains card status */
+   status = brq->mrq.stop->resp[0];
+   printk(KERN_ERR "%s: error %d transferring data,"
+  " sector %u, nr %u, card status %#x\n",
+  req->rq_disk->disk_name, brq->data.error,
+  (unsigned)blk_rq_pos(req),
+  (unsigned)blk_rq_sectors(req), status);
+   }
+
+   if (brq->stop.error) {
+   printk(KERN_ERR "%s: error %d sending stop command, "
+  "response %#x, card status %#x\n",
+  req->rq_disk->disk_name, brq->stop.error,
+  brq->stop.resp[0], status);
+   }
+
+   if (!mmc_host_is_spi(card->host) && rq_data_dir(req) != READ) {
+   do {
+   int err;
+
+   cmd.opcode = MMC_SEND_STATUS;
+   cmd.arg = card->rca << 16;
+   cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
+   err = mmc_wait_for_cmd(card->host, &cmd, 5);
+   if (err) {
+   printk(KERN_ERR "%s: error %d requesting status\n",
+  req->rq_disk->disk_name, err);
+   ret = MMC_BLK_CMD_ERR;
+   goto out;
+   }
+   /*
+* Some cards mishandle the status bits,
+* so make sure to check both the busy
+* indication and the card state.
+*/
+   } while (!(cmd.resp[0] & R1_READY_FOR_DATA) ||
+(R1_CURRENT_STATE(cmd.resp[0]) == 7));
+
+#if 0
+   if (cmd.resp[0] & ~0x0900)
+   printk(KERN_ERR "%s: status = %08x\n",
+  req->rq_disk->disk_name, cmd.resp[0]);
+   if (mmc_decode_status(cmd.resp)) {
+   ret = MMC_BLK_CMD_ERR;
+   goto out;
+   }
+
+#endif
+   }
+
+   if (brq->cmd.error || brq->stop.error || brq->data.error) {
+   if (rq_data_dir(req) == READ)
+   ret = MMC_BLK_DATA_ERR;
+   else
+   ret = MMC_BLK_CMD_ERR;
+   }
+ out:
+   return ret;
+
+}
+
 static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 {
struct mmc_blk_data *md = mq->data;
struct mmc_card *card = md->queue.

[PATCH v2 08/12] mmc: add handling for two parallel block requests in issue_rw_rq

2011-04-06 Thread Per Forlin
Change mmc_blk_issue_rw_rq() to become asynchronous.
The execution flow looks like this: the mmc-queue calls
issue_rw_rq(), which sends the request to the host and returns
to the mmc-queue. The mmc-queue then calls issue_rw_rq() again
with a new request. This new request is prepared in issue_rw_rq(),
which then waits for the active request to complete before pushing
the new one to the host. When the mmc-queue is empty it calls
issue_rw_rq() with req=NULL to finish off the active request
without starting a new one.
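That flow can be modeled stand-alone (a toy, not kernel code: preparation and the host are stubbed out; only the wait/start/drain ordering is shown):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the asynchronous issue_rw_rq() flow: a new request is
 * handed in while the previous one is in flight, and a NULL request
 * just drains the active one without starting another. */
struct req { int id; int done; };

static struct req *active;	/* request currently at the "host" */
static int completed[8];
static size_t ncompleted;

static void issue_rw_rq(struct req *rqc)
{
	if (!rqc && !active)
		return;
	if (active) {		/* wait for the ongoing request to finish */
		active->done = 1;
		completed[ncompleted++] = active->id;
	}
	active = rqc;		/* start the new one (or drain with NULL) */
}
```

Each call overlaps "preparing the next" with "completing the current", which is the win the real patch is after.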

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  157 +++---
 drivers/mmc/card/queue.c |2 +-
 2 files changed, 134 insertions(+), 25 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index f5db000..4b530ae 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -516,24 +516,75 @@ static enum mmc_blk_status mmc_blk_get_status(struct mmc_blk_request *brq,
 
 }
 
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 {
struct mmc_blk_data *md = mq->data;
struct mmc_card *card = md->queue.card;
-   struct mmc_blk_request *brq = &mq->mqrq_cur->brq;
-   int ret = 1, disable_multi = 0;
+   struct mmc_blk_request *brqc = &mq->mqrq_cur->brq;
+   struct mmc_blk_request *brqp = &mq->mqrq_prev->brq;
+   struct mmc_queue_req  *mqrqp = mq->mqrq_prev;
+   struct request *rqp = mqrqp->req;
+   int ret = 0;
+   int disable_multi = 0;
enum mmc_blk_status status;
 
-   mmc_claim_host(card->host);
+   if (!rqc && !rqp)
+   return 0;
 
-   do {
-   mmc_blk_rw_rq_prep(mq->mqrq_cur, card, disable_multi, mq);
-   mmc_wait_for_req(card->host, &brq->mrq);
+   if (rqc) {
+   /* Claim host for the first request in a serie of requests */
+   if (!rqp)
+   mmc_claim_host(card->host);
 
-   mmc_queue_bounce_post(mq->mqrq_cur);
-   status = mmc_blk_get_status(brq, req, card, md);
+   /* Prepare a new request */
+   mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
+   mmc_pre_req(card->host, &brqc->mrq, !rqp);
+   }
+   do {
+   /*
+* If there is an ongoing request, indicated by rqp, wait for
+* it to finish before starting a new one.
+*/
+   if (rqp)
+   mmc_wait_for_req_done(&brqp->mrq);
+   else {
+   /* start a new asynchronous request */
+   mmc_start_req(card->host, &brqc->mrq);
+   goto out;
+   }
+   status = mmc_blk_get_status(brqp, rqp, card, md);
+   if (status != MMC_BLK_SUCCESS) {
+   mmc_post_req(card->host, &brqp->mrq, -EINVAL);
+   mmc_queue_bounce_post(mqrqp);
+   if (rqc)
+   mmc_post_req(card->host, &brqc->mrq, -EINVAL);
+   }
 
switch (status) {
+   case MMC_BLK_SUCCESS:
+   /*
+* A block was successfully transferred.
+*/
+
+   /*
+* All data is transferred without errors.
+* Defer mmc post processing and _blk_end_request
+* until after the new request is started.
+*/
+   if (blk_rq_bytes(rqp) == brqp->data.bytes_xfered)
+   break;
+
+   mmc_post_req(card->host, &brqp->mrq, 0);
+   mmc_queue_bounce_post(mqrqp);
+
+   spin_lock_irq(&md->lock);
+   ret = __blk_end_request(rqp, 0,
+   brqp->data.bytes_xfered);
+   spin_unlock_irq(&md->lock);
+
+   if (rqc)
+   mmc_post_req(card->host, &brqc->mrq, -EINVAL);
+   break;
case MMC_BLK_CMD_ERR:
goto cmd_err;
break;
@@ -548,27 +599,73 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 * read a single sector.
 */
spin_lock_irq(&md->lock);
-   ret = __blk_end_request(req, -EIO,
-   brq->data.blksz);
+   ret = __blk_end_request(rqp, -EIO, brqp->data.blksz);
spin_unlock_irq(&md->lock);
-
+   if (rqc && !ret)
+   mmc_pre_req(card->host, &brqc->mrq, false);
break;
-

[PATCH v2 10/12] omap_hsmmc: use original sg_len for dma_unmap_sg

2011-04-06 Thread Per Forlin
Don't use the sg_len returned by dma_map_sg() as the argument to
dma_unmap_sg(). Pass the original sg_len to both dma_map_sg()
and dma_unmap_sg().
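The reason, illustrated stand-alone (not the real DMA API): dma_map_sg() may coalesce adjacent entries and return a smaller count, but dma_unmap_sg() must be told how many entries were originally handed to dma_map_sg(), not how many mappings came back.

```c
#include <assert.h>

/* Pretend IOMMU that merges every pair of adjacent sg entries, so the
 * mapped count is smaller than the original entry count. */
static int fake_map_sg(int nents)
{
	return (nents + 1) / 2;
}

/* Unmap is only correct when given the original entry count; passing
 * the (smaller) mapped count leaks mappings. */
static int fake_unmap_sg(int orig_nents, int passed_nents)
{
	return passed_nents == orig_nents;
}
```

Passing host->dma_len (the mapped count) instead of data->sg_len is exactly the mismatch this patch removes.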

Signed-off-by: Per Forlin 
---
 drivers/mmc/host/omap_hsmmc.c |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
index 259ece0..ad3731a 100644
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -959,7 +959,8 @@ static void omap_hsmmc_dma_cleanup(struct omap_hsmmc_host *host, int errno)
spin_unlock(&host->irq_lock);
 
if (host->use_dma && dma_ch != -1) {
-   dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, host->dma_len,
+   dma_unmap_sg(mmc_dev(host->mmc), host->data->sg,
+   host->data->sg_len,
omap_hsmmc_get_dma_dir(host, host->data));
omap_free_dma(dma_ch);
}
@@ -1343,7 +1344,7 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data)
return;
}
 
-   dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->dma_len,
+   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
omap_hsmmc_get_dma_dir(host, data));
 
req_in_progress = host->req_in_progress;
-- 
1.7.4.1



[PATCH v2 09/12] mmc: test: add random fault injection in core.c

2011-04-06 Thread Per Forlin
This simple fault injection proved to be very useful for testing
the error handling in block.c's rw_rq(). It may also be useful for
testing whether the host driver handles pre_req() and post_req()
correctly in case of errors.
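The core of the injection step (mmc_should_fail_request() below) is just picking one of a small table of data errors at random. A stand-alone sketch, with rand() standing in for the kernel's random32():

```c
#include <assert.h>
#include <stdlib.h>

/* Same error table as the patch: timeout, CRC/illegal sequence, I/O. */
static const int data_errors[] = {
	-110,	/* -ETIMEDOUT */
	-84,	/* -EILSEQ    */
	-5,	/* -EIO       */
};

/* Pick a random entry from the table, as the injected data->error. */
static int pick_data_error(void)
{
	return data_errors[rand() %
			   (int)(sizeof(data_errors) / sizeof(data_errors[0]))];
}
```

In the real patch this only fires when should_fail() says so and the request actually carries data, so normal operation is unaffected.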

Signed-off-by: Per Forlin 
---
 drivers/mmc/core/core.c|   54 
 drivers/mmc/core/debugfs.c |5 
 include/linux/mmc/host.h   |4 ++-
 lib/Kconfig.debug  |   11 +
 4 files changed, 73 insertions(+), 1 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index e88dd36..85296df 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -23,6 +23,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -82,6 +84,56 @@ static void mmc_flush_scheduled_work(void)
flush_workqueue(workqueue);
 }
 
+#ifdef CONFIG_FAIL_MMC_REQUEST
+
+static DECLARE_FAULT_ATTR(fail_mmc_request);
+
+static int __init setup_fail_mmc_request(char *str)
+{
+   return setup_fault_attr(&fail_mmc_request, str);
+}
+__setup("fail_mmc_request=", setup_fail_mmc_request);
+
+static void mmc_should_fail_request(struct mmc_host *host,
+   struct mmc_request *mrq)
+{
+   struct mmc_command *cmd = mrq->cmd;
+   struct mmc_data *data = mrq->data;
+   static const int data_errors[] = {
+   -ETIMEDOUT,
+   -EILSEQ,
+   -EIO,
+   };
+
+   if (!data)
+   return;
+
+   if (cmd->error || data->error || !host->make_it_fail ||
+   !should_fail(&fail_mmc_request, data->blksz * data->blocks))
+   return;
+
+   data->error = data_errors[random32() % ARRAY_SIZE(data_errors)];
+   data->bytes_xfered = (random32() % (data->bytes_xfered >> 9)) << 9;
+}
+
+static int __init fail_mmc_request_debugfs(void)
+{
+   return init_fault_attr_dentries(&fail_mmc_request,
+   "fail_mmc_request");
+}
+
+late_initcall(fail_mmc_request_debugfs);
+
+#else /* CONFIG_FAIL_MMC_REQUEST */
+
+static inline void mmc_should_fail_request(struct mmc_host *host,
+  struct mmc_data *data)
+{
+}
+
+#endif /* CONFIG_FAIL_MMC_REQUEST */
+
+
 /**
  * mmc_request_done - finish processing an MMC request
  * @host: MMC host which completed request
@@ -108,6 +160,8 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
cmd->error = 0;
host->ops->request(host, mrq);
} else {
+   mmc_should_fail_request(host, mrq);
+
led_trigger_event(host->led, LED_OFF);
 
pr_debug("%s: req done (CMD%u): %d: %08x %08x %08x %08x\n",
diff --git a/drivers/mmc/core/debugfs.c b/drivers/mmc/core/debugfs.c
index 998797e..588e76f 100644
--- a/drivers/mmc/core/debugfs.c
+++ b/drivers/mmc/core/debugfs.c
@@ -188,6 +188,11 @@ void mmc_add_host_debugfs(struct mmc_host *host)
root, &host->clk_delay))
goto err_node;
 #endif
+#ifdef CONFIG_FAIL_MMC_REQUEST
+   if (!debugfs_create_u8("make-it-fail", S_IRUSR | S_IWUSR,
+  root, &host->make_it_fail))
+   goto err_node;
+#endif
return;
 
 err_node:
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index c056a3d..8b2b44b 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -251,7 +251,9 @@ struct mmc_host {
 #endif
 
struct dentry   *debugfs_root;
-
+#ifdef CONFIG_FAIL_MMC_REQUEST
+   u8  make_it_fail;
+#endif
unsigned long   private[0] cacheline_aligned;
 };
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index df9234c..180620b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1059,6 +1059,17 @@ config FAIL_IO_TIMEOUT
  Only works with drivers that use the generic timeout handling,
  for others it wont do anything.
 
+config FAIL_MMC_REQUEST
+   bool "Fault-injection capability for MMC IO"
+   select DEBUG_FS
+   depends on FAULT_INJECTION
+   help
+ Provide fault-injection capability for MMC IO.
+ This will make the mmc core return data errors. This is
+ useful for testing the error handling in the mmc block device
+ and how the mmc host driver handles retries from
+ the block device.
+
 config FAULT_INJECTION_DEBUG_FS
bool "Debugfs entries for fault-injection capabilities"
depends on FAULT_INJECTION && SYSFS && DEBUG_FS
-- 
1.7.4.1



[PATCH v2 07/12] mmc: add a second mmc queue request member

2011-04-06 Thread Per Forlin
Add a second mmc queue request instance to make way for two active
block requests: one request may be active while the other request
is being prepared.
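The cur/prev pairing amounts to double buffering: two slots whose roles swap, so the slot that just finished can be reused to prepare the next request. A minimal stand-alone model (names follow mqrq_cur/mqrq_prev, but this is not the kernel struct):

```c
#include <assert.h>

/* One slot per in-flight-or-being-prepared request. */
struct mmc_queue_req_model { int busy; };

struct queue_model {
	struct mmc_queue_req_model slots[2];
	struct mmc_queue_req_model *cur;	/* being prepared  */
	struct mmc_queue_req_model *prev;	/* at the host     */
};

static void queue_init(struct queue_model *q)
{
	q->slots[0].busy = q->slots[1].busy = 0;
	q->cur = &q->slots[0];
	q->prev = &q->slots[1];
}

/* After issuing, swap roles: the prepared request becomes the active
 * one, and the finished slot is free to prepare the next request. */
static void queue_swap(struct queue_model *q)
{
	struct mmc_queue_req_model *tmp = q->cur;

	q->cur = q->prev;
	q->prev = tmp;
}
```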

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/queue.c |   44 ++--
 drivers/mmc/card/queue.h |3 ++-
 2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 40e18b5..eef3510 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -130,6 +130,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
u64 limit = BLK_BOUNCE_HIGH;
int ret;
struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
+   struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
 
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
limit = *mmc_dev(host)->dma_mask;
@@ -140,7 +141,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
return -ENOMEM;
 
memset(&mq->mqrq_cur, 0, sizeof(mq->mqrq_cur));
+   memset(&mq->mqrq_prev, 0, sizeof(mq->mqrq_prev));
mq->mqrq_cur = mqrq_cur;
+   mq->mqrq_prev = mqrq_prev;
mq->queue->queuedata = mq;
 
blk_queue_prep_rq(mq->queue, mmc_prep_request);
@@ -181,9 +184,17 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
"allocate bounce cur buffer\n",
mmc_card_name(card));
}
+   mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+   if (!mqrq_prev->bounce_buf) {
+   printk(KERN_WARNING "%s: unable to "
+   "allocate bounce prev buffer\n",
+   mmc_card_name(card));
+   kfree(mqrq_cur->bounce_buf);
+   mqrq_cur->bounce_buf = NULL;
+   }
}
 
-   if (mqrq_cur->bounce_buf) {
+   if (mqrq_cur->bounce_buf && mqrq_prev->bounce_buf) {
blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
blk_queue_max_segments(mq->queue, bouncesz / 512);
@@ -198,11 +209,19 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
if (ret)
goto cleanup_queue;
 
+   mqrq_prev->sg = mmc_alloc_sg(1, &ret);
+   if (ret)
+   goto cleanup_queue;
+
+   mqrq_prev->bounce_sg =
+   mmc_alloc_sg(bouncesz / 512, &ret);
+   if (ret)
+   goto cleanup_queue;
}
}
 #endif
 
-   if (!mqrq_cur->bounce_buf) {
+   if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) {
blk_queue_bounce_limit(mq->queue, limit);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
@@ -213,6 +232,10 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
if (ret)
goto cleanup_queue;
 
+
+   mqrq_prev->sg = mmc_alloc_sg(host->max_segs, &ret);
+   if (ret)
+   goto cleanup_queue;
}
 
sema_init(&mq->thread_sem, 1);
@@ -229,6 +252,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
  free_bounce_sg:
kfree(mqrq_cur->bounce_sg);
mqrq_cur->bounce_sg = NULL;
+   kfree(mqrq_prev->bounce_sg);
+   mqrq_prev->bounce_sg = NULL;
 
  cleanup_queue:
kfree(mqrq_cur->sg);
@@ -236,6 +261,11 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;
 
+   kfree(mqrq_prev->sg);
+   mqrq_prev->sg = NULL;
+   kfree(mqrq_prev->bounce_buf);
+   mqrq_prev->bounce_buf = NULL;
+
blk_cleanup_queue(mq->queue);
return ret;
 }
@@ -245,6 +275,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
struct request_queue *q = mq->queue;
unsigned long flags;
struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+   struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
 
/* Make sure the queue isn't suspended, as that will deadlock */
mmc_queue_resume(mq);
@@ -267,6 +298,15 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;
 
+   kfree(mqrq_prev->bounce_sg);
+   mqrq_prev->bounce_sg = NULL;
+
+   kfree(mqrq_prev->sg);
+   mqrq_prev->sg = NULL;
+
+   kfree(mqrq_prev->

[PATCH v2 05/12] mmc: add a block request prepare function

2011-04-06 Thread Per Forlin
Break out code from mmc_blk_issue_rw_rq() to create a block request
prepare function. This doesn't change any functionality, and it
helps when handling more than one active block request.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  170 -
 1 files changed, 91 insertions(+), 79 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index ec4e432..e606dec 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -324,97 +324,109 @@ out:
return err ? 0 : 1;
 }
 
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+  struct mmc_card *card,
+  int disable_multi,
+  struct mmc_queue *mq)
 {
-   struct mmc_blk_data *md = mq->data;
-   struct mmc_card *card = md->queue.card;
-   struct mmc_blk_request *brq = &mq->mqrq_cur->brq;
-   int ret = 1, disable_multi = 0;
+   u32 readcmd, writecmd;
+   struct mmc_blk_request *brq = &mqrq->brq;
+   struct request *req = mqrq->req;
 
-   mmc_claim_host(card->host);
+   memset(brq, 0, sizeof(struct mmc_blk_request));
 
-   do {
-   struct mmc_command cmd;
-   u32 readcmd, writecmd, status = 0;
-
-   memset(brq, 0, sizeof(struct mmc_blk_request));
-   brq->mrq.cmd = &brq->cmd;
-   brq->mrq.data = &brq->data;
-
-   brq->cmd.arg = blk_rq_pos(req);
-   if (!mmc_card_blockaddr(card))
-   brq->cmd.arg <<= 9;
-   brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
-   brq->data.blksz = 512;
-   brq->stop.opcode = MMC_STOP_TRANSMISSION;
-   brq->stop.arg = 0;
-   brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
-   brq->data.blocks = blk_rq_sectors(req);
+   brq->mrq.cmd = &brq->cmd;
+   brq->mrq.data = &brq->data;
 
-   /*
-* The block layer doesn't support all sector count
-* restrictions, so we need to be prepared for too big
-* requests.
-*/
-   if (brq->data.blocks > card->host->max_blk_count)
-   brq->data.blocks = card->host->max_blk_count;
+   brq->cmd.arg = blk_rq_pos(req);
+   if (!mmc_card_blockaddr(card))
+   brq->cmd.arg <<= 9;
+   brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+   brq->data.blksz = 512;
+   brq->stop.opcode = MMC_STOP_TRANSMISSION;
+   brq->stop.arg = 0;
+   brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+   brq->data.blocks = blk_rq_sectors(req);
 
-   /*
-* After a read error, we redo the request one sector at a time
-* in order to accurately determine which sectors can be read
-* successfully.
+   /*
+* The block layer doesn't support all sector count
+* restrictions, so we need to be prepared for too big
+* requests.
+*/
+   if (brq->data.blocks > card->host->max_blk_count)
+   brq->data.blocks = card->host->max_blk_count;
+
+   /*
+* After a read error, we redo the request one sector at a time
+* in order to accurately determine which sectors can be read
+* successfully.
+*/
+   if (disable_multi && brq->data.blocks > 1)
+   brq->data.blocks = 1;
+
+   if (brq->data.blocks > 1) {
+   /* SPI multiblock writes terminate using a special
+* token, not a STOP_TRANSMISSION request.
 */
-   if (disable_multi && brq->data.blocks > 1)
-   brq->data.blocks = 1;
-
-   if (brq->data.blocks > 1) {
-   /* SPI multiblock writes terminate using a special
-* token, not a STOP_TRANSMISSION request.
-*/
-   if (!mmc_host_is_spi(card->host)
-   || rq_data_dir(req) == READ)
-   brq->mrq.stop = &brq->stop;
-   readcmd = MMC_READ_MULTIPLE_BLOCK;
-   writecmd = MMC_WRITE_MULTIPLE_BLOCK;
-   } else {
-   brq->mrq.stop = NULL;
-   readcmd = MMC_READ_SINGLE_BLOCK;
-   writecmd = MMC_WRITE_BLOCK;
-   }
-   if (rq_data_dir(req) == READ) {
-   brq->cmd.opcode = readcmd;
-   brq->data.flags |= MMC_DATA_READ;
-   } else {
-   brq->cmd.opcode = writecmd;
-   brq->data.flags |= MMC_DATA_WRITE;
-   }
+   if (!mmc_host_is_spi(card->host)
+  

[PATCH v2 04/12] mmc: add member in mmc queue struct to hold request data

2011-04-06 Thread Per Forlin
The way the request data is organized in the mmc queue struct, it
only allows processing of one request at a time. This patch adds a
new struct to hold mmc queue request data such as the sg list,
request, block request and bounce buffers, and updates any functions
depending on the mmc queue struct. This lays the groundwork for
using multiple active requests on one mmc queue.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  105 +
 drivers/mmc/card/queue.c |  129 --
 drivers/mmc/card/queue.h |   30 ---
 3 files changed, 138 insertions(+), 126 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 61d233a..ec4e432 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -165,13 +165,6 @@ static const struct block_device_operations mmc_bdops = {
.owner  = THIS_MODULE,
 };
 
-struct mmc_blk_request {
-   struct mmc_request  mrq;
-   struct mmc_command  cmd;
-   struct mmc_command  stop;
-   struct mmc_data data;
-};
-
 static u32 mmc_sd_num_wr_blocks(struct mmc_card *card)
 {
int err;
@@ -335,7 +328,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 {
struct mmc_blk_data *md = mq->data;
struct mmc_card *card = md->queue.card;
-   struct mmc_blk_request brq;
+   struct mmc_blk_request *brq = &mq->mqrq_cur->brq;
int ret = 1, disable_multi = 0;
 
mmc_claim_host(card->host);
@@ -344,72 +337,72 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
struct mmc_command cmd;
u32 readcmd, writecmd, status = 0;
 
-   memset(&brq, 0, sizeof(struct mmc_blk_request));
-   brq.mrq.cmd = &brq.cmd;
-   brq.mrq.data = &brq.data;
+   memset(brq, 0, sizeof(struct mmc_blk_request));
+   brq->mrq.cmd = &brq->cmd;
+   brq->mrq.data = &brq->data;
 
-   brq.cmd.arg = blk_rq_pos(req);
+   brq->cmd.arg = blk_rq_pos(req);
if (!mmc_card_blockaddr(card))
-   brq.cmd.arg <<= 9;
-   brq.cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
-   brq.data.blksz = 512;
-   brq.stop.opcode = MMC_STOP_TRANSMISSION;
-   brq.stop.arg = 0;
-   brq.stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
-   brq.data.blocks = blk_rq_sectors(req);
+   brq->cmd.arg <<= 9;
+   brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+   brq->data.blksz = 512;
+   brq->stop.opcode = MMC_STOP_TRANSMISSION;
+   brq->stop.arg = 0;
+   brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+   brq->data.blocks = blk_rq_sectors(req);
 
/*
 * The block layer doesn't support all sector count
 * restrictions, so we need to be prepared for too big
 * requests.
 */
-   if (brq.data.blocks > card->host->max_blk_count)
-   brq.data.blocks = card->host->max_blk_count;
+   if (brq->data.blocks > card->host->max_blk_count)
+   brq->data.blocks = card->host->max_blk_count;
 
/*
 * After a read error, we redo the request one sector at a time
 * in order to accurately determine which sectors can be read
 * successfully.
 */
-   if (disable_multi && brq.data.blocks > 1)
-   brq.data.blocks = 1;
+   if (disable_multi && brq->data.blocks > 1)
+   brq->data.blocks = 1;
 
-   if (brq.data.blocks > 1) {
+   if (brq->data.blocks > 1) {
/* SPI multiblock writes terminate using a special
 * token, not a STOP_TRANSMISSION request.
 */
if (!mmc_host_is_spi(card->host)
|| rq_data_dir(req) == READ)
-   brq.mrq.stop = &brq.stop;
+   brq->mrq.stop = &brq->stop;
readcmd = MMC_READ_MULTIPLE_BLOCK;
writecmd = MMC_WRITE_MULTIPLE_BLOCK;
} else {
-   brq.mrq.stop = NULL;
+   brq->mrq.stop = NULL;
readcmd = MMC_READ_SINGLE_BLOCK;
writecmd = MMC_WRITE_BLOCK;
}
if (rq_data_dir(req) == READ) {
-   brq.cmd.opcode = readcmd;
-   brq.data.flags |= MMC_DATA_READ;
+   brq->cmd.opcode = readcmd;
+   brq->data.flags |= MMC_DA

[PATCH v2 12/12] mmci: implement pre_req() and post_req()

2011-04-06 Thread Per Forlin
pre_req() runs dma_map_sg() and prepares the dma descriptor for the
next mmc data transfer. post_req() runs dma_unmap_sg(). If pre_req()
is not called before mmci_request(), mmci_request() will prepare the
cache and dma just as it did before. Using pre_req() and post_req()
is optional for mmci.
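The safety check this relies on is the cookie handshake visible in mmci_get_next_data() below: a prepared transfer is reused only when the request's cookie matches the one stored at pre_req() time. A toy stand-alone version (illustrative names, not the driver's actual types):

```c
#include <assert.h>

/* Mimics host->next_data: what pre_req() left behind. */
struct next_data { int cookie; int prepared; };

/* Decide whether a prepared dma job can be reused for this request. */
static int use_prepared(const struct next_data *next, int host_cookie)
{
	if (!host_cookie)
		return 0;	/* request never went through pre_req() */
	if (host_cookie != next->cookie)
		return 0;	/* stale or mismatched preparation */
	return next->prepared;
}
```

A zero or mismatched cookie falls back to the old synchronous path, which is what keeps pre_req()/post_req() optional.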

Signed-off-by: Per Forlin 
---
 drivers/mmc/host/mmci.c |  146 ++
 drivers/mmc/host/mmci.h |8 +++
 2 files changed, 141 insertions(+), 13 deletions(-)

diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
index b4a7e4f..985c77d 100644
--- a/drivers/mmc/host/mmci.c
+++ b/drivers/mmc/host/mmci.c
@@ -320,7 +320,8 @@ static void mmci_dma_unmap(struct mmci_host *host, struct 
mmc_data *data)
dir = DMA_FROM_DEVICE;
}
 
-   dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
+   if (!data->host_cookie)
+   dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
 
/*
 * Use of DMA with scatter-gather is impossible.
@@ -338,7 +339,8 @@ static void mmci_dma_data_error(struct mmci_host *host)
dmaengine_terminate_all(host->dma_current);
 }
 
-static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
+static int mmci_dma_prep_data(struct mmci_host *host, struct mmc_data *data,
+ struct mmci_host_next *next)
 {
struct variant_data *variant = host->variant;
struct dma_slave_config conf = {
@@ -349,13 +351,20 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
.src_maxburst = variant->fifohalfsize >> 2, /* # of words */
.dst_maxburst = variant->fifohalfsize >> 2, /* # of words */
};
-   struct mmc_data *data = host->data;
struct dma_chan *chan;
struct dma_device *device;
struct dma_async_tx_descriptor *desc;
int nr_sg;
 
-   host->dma_current = NULL;
+   /* Check if next job is already prepared */
+   if (data->host_cookie && !next &&
+   host->dma_current && host->dma_desc_current)
+   return 0;
+
+   if (!next) {
+   host->dma_current = NULL;
+   host->dma_desc_current = NULL;
+   }
 
if (data->flags & MMC_DATA_READ) {
conf.direction = DMA_FROM_DEVICE;
@@ -370,7 +379,7 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
return -EINVAL;
 
/* If less than or equal to the fifo size, don't bother with DMA */
-   if (host->size <= variant->fifosize)
+   if (data->blksz * data->blocks <= variant->fifosize)
return -EINVAL;
 
device = chan->device;
@@ -384,14 +393,38 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
if (!desc)
goto unmap_exit;
 
-   /* Okay, go for it. */
-   host->dma_current = chan;
+   if (next) {
+   next->dma_chan = chan;
+   next->dma_desc = desc;
+   } else {
+   host->dma_current = chan;
+   host->dma_desc_current = desc;
+   }
+
+   return 0;
 
+ unmap_exit:
+   if (!next)
+   dmaengine_terminate_all(chan);
+   dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction);
+   return -ENOMEM;
+}
+
+static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
+{
+   int ret;
+   struct mmc_data *data = host->data;
+
+   ret = mmci_dma_prep_data(host, host->data, NULL);
+   if (ret)
+   return ret;
+
+   /* Okay, go for it. */
dev_vdbg(mmc_dev(host->mmc),
 "Submit MMCI DMA job, sglen %d blksz %04x blks %04x flags %08x\n",
 data->sg_len, data->blksz, data->blocks, data->flags);
-   dmaengine_submit(desc);
-   dma_async_issue_pending(chan);
+   dmaengine_submit(host->dma_desc_current);
+   dma_async_issue_pending(host->dma_current);
 
datactrl |= MCI_DPSM_DMAENABLE;
 
@@ -406,14 +439,90 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
writel(readl(host->base + MMCIMASK0) | MCI_DATAENDMASK,
   host->base + MMCIMASK0);
return 0;
+}
 
-unmap_exit:
-   dmaengine_terminate_all(chan);
-   dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction);
-   return -ENOMEM;
+static void mmci_get_next_data(struct mmci_host *host, struct mmc_data *data)
+{
+   struct mmci_host_next *next = &host->next_data;
+
+   if (data->host_cookie && data->host_cookie != next->cookie) {
+   printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d"
+  " host->next_data.cookie %d\n",
+  __func__, data->host_cookie, host->next_data.cookie);
+   data->host_cookie = 0;
+   }
+
+   if (!data->host_cookie)
+   return;
+
+   

[PATCH v2 11/12] omap_hsmmc: add support for pre_req and post_req

2011-04-06 Thread Per Forlin
pre_req() runs dma_map_sg(); post_req() runs dma_unmap_sg().
If pre_req() is not called before omap_hsmmc_request(), request()
will prepare the cache just as it did before.
Use of pre_req() and post_req() is optional.

Signed-off-by: Per Forlin 
---
 drivers/mmc/host/omap_hsmmc.c |   87 +++--
 1 files changed, 83 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
index ad3731a..2116c09 100644
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -141,6 +141,11 @@
 #define OMAP_HSMMC_WRITE(base, reg, val) \
__raw_writel((val), (base) + OMAP_HSMMC_##reg)
 
+struct omap_hsmmc_next {
+   unsigned intdma_len;
+   s32 cookie;
+};
+
 struct omap_hsmmc_host {
struct  device  *dev;
struct  mmc_host*mmc;
@@ -184,6 +189,7 @@ struct omap_hsmmc_host {
int reqs_blocked;
int use_reg;
int req_in_progress;
+   struct omap_hsmmc_next  next_data;
 
struct  omap_mmc_platform_data  *pdata;
 };
@@ -1344,8 +1350,9 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data)
return;
}
 
-   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-   omap_hsmmc_get_dma_dir(host, data));
+   if (!data->host_cookie)
+   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+omap_hsmmc_get_dma_dir(host, data));
 
req_in_progress = host->req_in_progress;
dma_ch = host->dma_ch;
@@ -1363,6 +1370,45 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data)
}
 }
 
+static int omap_hsmmc_pre_dma_transfer(struct omap_hsmmc_host *host,
+  struct mmc_data *data,
+  struct omap_hsmmc_next *next)
+{
+   int dma_len;
+
+   if (!next && data->host_cookie &&
+   data->host_cookie != host->next_data.cookie) {
+   printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d"
+  " host->next_data.cookie %d\n",
+  __func__, data->host_cookie, host->next_data.cookie);
+   data->host_cookie = 0;
+   }
+
+   /* Check if next job is already prepared */
+   if (next ||
+   (!next && data->host_cookie != host->next_data.cookie)) {
+   dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg,
+data->sg_len,
+omap_hsmmc_get_dma_dir(host, data));
+
+   } else {
+   dma_len = host->next_data.dma_len;
+   host->next_data.dma_len = 0;
+   }
+
+
+   if (dma_len == 0)
+   return -EINVAL;
+
+   if (next) {
+   next->dma_len = dma_len;
+   data->host_cookie = ++next->cookie < 0 ? 1 : next->cookie;
+   } else
+   host->dma_len = dma_len;
+
+   return 0;
+}
+
 /*
  * Routine to configure and start DMA for the MMC card
  */
@@ -1396,9 +1442,10 @@ static int omap_hsmmc_start_dma_transfer(struct omap_hsmmc_host *host,
mmc_hostname(host->mmc), ret);
return ret;
}
+   ret = omap_hsmmc_pre_dma_transfer(host, data, NULL);
+   if (ret)
+   return ret;
 
-   host->dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg,
-   data->sg_len, omap_hsmmc_get_dma_dir(host, data));
host->dma_ch = dma_ch;
host->dma_sg_idx = 0;
 
@@ -1478,6 +1525,35 @@ omap_hsmmc_prepare_data(struct omap_hsmmc_host *host, struct mmc_request *req)
return 0;
 }
 
+static void omap_hsmmc_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
+   int err)
+{
+   struct omap_hsmmc_host *host = mmc_priv(mmc);
+   struct mmc_data *data = mrq->data;
+
+   if (host->use_dma) {
+   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+omap_hsmmc_get_dma_dir(host, data));
+   data->host_cookie = 0;
+   }
+}
+
+static void omap_hsmmc_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
+  bool is_first_req)
+{
+   struct omap_hsmmc_host *host = mmc_priv(mmc);
+
+   if (mrq->data->host_cookie) {
+   mrq->data->host_cookie = 0;
+   return;
+   }
+
+   if (host->use_dma)
+   if (omap_hsmmc_pre_dma_transfer(host, mrq->data,
+   &host->next_data))
+   mrq->data->host_cookie = 0;
+}
+
 /*
  * Request function. for read/write operation
  */
@@ -1926,6 +2002,8 @@ static int omap_hsmmc_disable_fclk(struct mmc_host *mmc, int lazy)
 static const struct mmc_host_ops omap_hsmmc_ops = {
.enable = omap_hsmmc_enable_fclk,
 

[PATCH v2 03/12] mmc: mmc_test: add test for none blocking transfers

2011-04-06 Thread Per Forlin
Add four tests for read and write performance at different
transfer sizes, 4k to 4M:
 * Read using blocking mmc requests
 * Read using non-blocking mmc requests
 * Write using blocking mmc requests
 * Write using non-blocking mmc requests

The host driver must support pre_req() and post_req()
in order to run the non-blocking test cases.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/mmc_test.c |  312 +-
 1 files changed, 304 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index 466cdb5..1000383 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define RESULT_OK  0
 #define RESULT_FAIL1
@@ -51,10 +52,12 @@ struct mmc_test_pages {
  * struct mmc_test_mem - allocated memory.
  * @arr: array of allocations
  * @cnt: number of allocations
+ * @size_min_cmn: lowest common size in array of allocations
  */
 struct mmc_test_mem {
struct mmc_test_pages *arr;
unsigned int cnt;
+   unsigned int size_min_cmn;
 };
 
 /**
@@ -148,6 +151,21 @@ struct mmc_test_card {
struct mmc_test_general_result  *gr;
 };
 
+enum mmc_test_prep_media {
+   MMC_TEST_PREP_NONE = 0,
+   MMC_TEST_PREP_WRITE_FULL = 1 << 0,
+   MMC_TEST_PREP_ERASE = 1 << 1,
+};
+
+struct mmc_test_multiple_rw {
+   unsigned int *bs;
+   unsigned int len;
+   unsigned int size;
+   bool do_write;
+   bool do_nonblock_req;
+   enum mmc_test_prep_media prepare;
+};
+
 /***/
 /*  General helper functions   */
 /***/
@@ -307,6 +325,7 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz,
unsigned long max_seg_page_cnt = DIV_ROUND_UP(max_seg_sz, PAGE_SIZE);
unsigned long page_cnt = 0;
unsigned long limit = nr_free_buffer_pages() >> 4;
+   unsigned int min_cmn = 0;
struct mmc_test_mem *mem;
 
if (max_page_cnt > limit)
@@ -350,6 +369,12 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz,
mem->arr[mem->cnt].page = page;
mem->arr[mem->cnt].order = order;
mem->cnt += 1;
+   if (!min_cmn)
+   min_cmn = PAGE_SIZE << order;
+   else
+   min_cmn = min(min_cmn,
+ (unsigned int) (PAGE_SIZE << order));
+
if (max_page_cnt <= (1UL << order))
break;
max_page_cnt -= 1UL << order;
@@ -360,6 +385,7 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz,
break;
}
}
+   mem->size_min_cmn = min_cmn;
 
return mem;
 
@@ -386,7 +412,6 @@ static int mmc_test_map_sg(struct mmc_test_mem *mem, unsigned long sz,
do {
for (i = 0; i < mem->cnt; i++) {
unsigned long len = PAGE_SIZE << mem->arr[i].order;
-
if (len > sz)
len = sz;
if (len > max_seg_sz)
@@ -725,6 +750,94 @@ static int mmc_test_check_broken_result(struct mmc_test_card *test,
 }
 
 /*
+ * Tests nonblock transfer with certain parameters
+ */
+static void mmc_test_nonblock_reset(struct mmc_request *mrq,
+   struct mmc_command *cmd,
+   struct mmc_command *stop,
+   struct mmc_data *data)
+{
+   memset(mrq, 0, sizeof(struct mmc_request));
+   memset(cmd, 0, sizeof(struct mmc_command));
+   memset(data, 0, sizeof(struct mmc_data));
+   memset(stop, 0, sizeof(struct mmc_command));
+
+   mrq->cmd = cmd;
+   mrq->data = data;
+   mrq->stop = stop;
+}
+static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
+ struct scatterlist *sg, unsigned sg_len,
+ unsigned dev_addr, unsigned blocks,
+ unsigned blksz, int write, int count)
+{
+   struct mmc_request mrq1;
+   struct mmc_command cmd1;
+   struct mmc_command stop1;
+   struct mmc_data data1;
+
+   struct mmc_request mrq2;
+   struct mmc_command cmd2;
+   struct mmc_command stop2;
+   struct mmc_data data2;
+
+   struct mmc_request *cur_mrq;
+   struct mmc_request *prev_mrq;
+   int i;
+   int ret = 0;
+
+   if (!test->card->host->ops->pre_req ||
+   !test->card->host->ops->post_req)
+   return -RESULT_UNSUP_HOST;
+
+   mmc_test_nonblock_reset(&mrq1, &cmd1, &stop1, &data1);
+   mmc_test_nonblock_reset(&mrq2, &cmd2, &stop2, &data2);
+
+   cur_mrq = &mrq1;
+  

[PATCH v2 02/12] mmc: mmc_test: add debugfs file to list all tests

2011-04-06 Thread Per Forlin
Add a debugfs file "testlist" to print all available tests

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/mmc_test.c |   30 ++
 1 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index f5cedec..466cdb5 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -2447,6 +2447,32 @@ static const struct file_operations mmc_test_fops_test = {
.release= single_release,
 };
 
+static int mtf_testlist_show(struct seq_file *sf, void *data)
+{
+   int i;
+
+   mutex_lock(&mmc_test_lock);
+
+   for (i = 0; i < ARRAY_SIZE(mmc_test_cases); i++)
+   seq_printf(sf, "%d:\t%s\n", i+1, mmc_test_cases[i].name);
+
+   mutex_unlock(&mmc_test_lock);
+
+   return 0;
+}
+
+static int mtf_testlist_open(struct inode *inode, struct file *file)
+{
+   return single_open(file, mtf_testlist_show, inode->i_private);
+}
+
+static const struct file_operations mmc_test_fops_testlist = {
+   .open   = mtf_testlist_open,
+   .read   = seq_read,
+   .llseek = seq_lseek,
+   .release= single_release,
+};
+
 static void mmc_test_free_file_test(struct mmc_card *card)
 {
struct mmc_test_dbgfs_file *df, *dfs;
@@ -2476,6 +2502,10 @@ static int mmc_test_register_file_test(struct mmc_card *card)
file = debugfs_create_file("test", S_IWUSR | S_IRUGO,
card->debugfs_root, card, &mmc_test_fops_test);
 
+   if (card->debugfs_root)
+   file = debugfs_create_file("testlist", S_IRUGO,
+   card->debugfs_root, card, &mmc_test_fops_testlist);
+
if (IS_ERR_OR_NULL(file)) {
dev_err(&card->dev,
"Can't create file. Perhaps debugfs is disabled.\n");
-- 
1.7.4.1

--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v2 01/12] mmc: add none blocking mmc request function

2011-04-06 Thread Per Forlin
Previously there has been only one function, mmc_wait_for_req,
to start a request and wait for it to complete. This patch adds
 * mmc_start_req - starts a request without waiting
 * mmc_wait_for_req_done - waits until the request is done
 * mmc_pre_req - asks the host driver to prepare for the next job
 * mmc_post_req - asks the host driver to clean up after a completed job

The intention is to use pre_req() and post_req() to do cache maintenance
while a request is active. pre_req() can be called while a request is active
to minimize the latency of starting the next job. post_req() can be used after
the next job is started to clean up the previous request. This minimizes the
host driver's request-end latency. post_req() is typically used before ending
the block request and handing the buffer over to the block layer.

Add a host-private member in mmc_data to be used by
pre_req to mark the data. The host driver will then
check this mark to see whether the data is prepared or not.

Signed-off-by: Per Forlin 
---
 drivers/mmc/core/core.c  |   78 --
 include/linux/mmc/core.h |9 +-
 include/linux/mmc/host.h |9 +
 3 files changed, 85 insertions(+), 11 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 1f453ac..e88dd36 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -198,30 +198,88 @@ mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 
 static void mmc_wait_done(struct mmc_request *mrq)
 {
-   complete(mrq->done_data);
+   complete(&mrq->completion);
 }
 
 /**
- * mmc_wait_for_req - start a request and wait for completion
+ * mmc_pre_req - Prepare for a new request
+ * @host: MMC host to prepare command
+ * @mrq: MMC request to prepare for
+ * @is_first_req: true if there is no previously started request
+ * that may run in parallel with this call, otherwise false
+ *
+ * mmc_pre_req() is called prior to mmc_start_req() to let the
+ * host prepare for the new request. Preparation of a request may be
+ * performed while another request is running on the host.
+ */
+void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq,
+bool is_first_req)
+{
+   if (host->ops->pre_req)
+   host->ops->pre_req(host, mrq, is_first_req);
+}
+EXPORT_SYMBOL(mmc_pre_req);
+
+/**
+ * mmc_post_req - Post process a completed request
+ * @host: MMC host to post process command
+ * @mrq: MMC request to post process for
+ * @err: Error; if non-zero, clean up any resources made in pre_req
+ *
+ * Let the host post process a completed request. Post processing of
+ * a request may be performed while another request is running.
+ */
+void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq, int err)
+{
+   if (host->ops->post_req)
+   host->ops->post_req(host, mrq, err);
+}
+EXPORT_SYMBOL(mmc_post_req);
+
+/**
+ * mmc_start_req - start a request
  * @host: MMC host to start command
  * @mrq: MMC request to start
  *
- * Start a new MMC custom command request for a host, and wait
- * for the command to complete. Does not attempt to parse the
- * response.
+ * Start a new MMC custom command request for a host.
+ * Does not wait for the command to complete.
  */
-void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
+void mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
 {
-   DECLARE_COMPLETION_ONSTACK(complete);
-
-   mrq->done_data = &complete;
+   init_completion(&mrq->completion);
mrq->done = mmc_wait_done;
 
mmc_start_request(host, mrq);
+}
+EXPORT_SYMBOL(mmc_start_req);
 
-   wait_for_completion(&complete);
+/**
+ * mmc_wait_for_req_done - wait for completion of request
+ * @mrq: MMC request to wait for
+ *
+ * Wait for the command to complete. Does not attempt to parse the
+ * response.
+ */
+void mmc_wait_for_req_done(struct mmc_request *mrq)
+{
+   wait_for_completion(&mrq->completion);
 }
+EXPORT_SYMBOL(mmc_wait_for_req_done);
 
+/**
+ * mmc_wait_for_req - start a request and wait for completion
+ * @host: MMC host to start command
+ * @mrq: MMC request to start
+ *
+ * Start a new MMC custom command request for a host, and wait
+ * for the command to complete. Does not attempt to parse the
+ * response.
+ */
+void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+   mmc_start_req(host, mrq);
+   mmc_wait_for_req_done(mrq);
+}
 EXPORT_SYMBOL(mmc_wait_for_req);
 
 /**
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 07f27af..5bbfb71 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -117,6 +117,7 @@ struct mmc_data {
 
unsigned intsg_len; /* size of scatter list */
struct scatterlist  *sg;/* I/O scatter list */
+   s32 host_cookie;

[PATCH v2 00/12] mmc: use nonblock mmc requests to minimize latency

2011-04-06 Thread Per Forlin
How significant is the cache maintenance overhead?
It depends: eMMCs are much faster now than a few years ago, while
cache maintenance costs more due to multiple cache levels and
speculative cache pre-fetch. In relative terms, the cost of handling
the caches has increased, and it is now a bottleneck when dealing with
fast eMMC together with DMA.

The intention of introducing non-blocking mmc requests is to minimize the
time between one mmc request ending and another mmc request starting. In the
current implementation the MMC controller is idle while dma_map_sg and
dma_unmap_sg are processing. Introducing non-blocking mmc requests makes it
possible to prepare the caches for the next job in parallel with an active
mmc request.

This is done by making issue_rw_rq() non-blocking.
The increase in throughput is proportional to the time it takes to
prepare a request (the major part of the preparation is dma_map_sg and
dma_unmap_sg) and to how fast the memory is. The faster the MMC/SD is,
the more significant the prepare-request time becomes. Measurements on U5500
and Panda with eMMC and SD show a significant performance gain for large
reads when running in DMA mode. In the PIO case the performance is unchanged.

There are two optional hooks, pre_req() and post_req(), that the host driver
may implement in order to move work to before and after the actual mmc_request
function is called. In the DMA case pre_req() may do dma_map_sg() and prepare
the DMA descriptor, and post_req() runs dma_unmap_sg().

Details on measurements from IOZone and mmc_test:
https://wiki.linaro.org/WorkingGroups/KernelConsolidation/Specs/StoragePerfMMC-async-req

Changes since v1:
 * Add support for omap_hsmmc
 * Add test in mmc_test to compare performance with
   and without none blocking request.
 * Add random fault injection in mmc core to exercise error
   handling in the mmc block code.
 * Fix several issues in the mmc block error handling.
 * Add a host_cookie member in mmc_data to be used by
   pre_req to mark the data. The host driver will then
   check this mark to see if the data is prepared or not.
 * Previous patch subject was
   "add double buffering for mmc block requests".

Per Forlin (12):
  mmc: add none blocking mmc request function
  mmc: mmc_test: add debugfs file to list all tests
  mmc: mmc_test: add test for none blocking transfers
  mmc: add member in mmc queue struct to hold request data
  mmc: add a block request prepare function
  mmc: move error code in mmc_block_issue_rw_rq to a separate function.
  mmc: add a second mmc queue request member
  mmc: add handling for two parallel block requests in issue_rw_rq
  mmc: test: add random fault injection in core.c
  omap_hsmmc: use original sg_len for dma_unmap_sg
  omap_hsmmc: add support for pre_req and post_req
  mmci: implement pre_req() and post_req()

 drivers/mmc/card/block.c  |  493 +++--
 drivers/mmc/card/mmc_test.c   |  342 -
 drivers/mmc/card/queue.c  |  171 +--
 drivers/mmc/card/queue.h  |   31 ++-
 drivers/mmc/core/core.c   |  132 ++-
 drivers/mmc/core/debugfs.c|5 +
 drivers/mmc/host/mmci.c   |  146 +++-
 drivers/mmc/host/mmci.h   |8 +
 drivers/mmc/host/omap_hsmmc.c |   90 +++-
 include/linux/mmc/core.h  |9 +-
 include/linux/mmc/host.h  |   13 +-
 lib/Kconfig.debug |   11 +
 12 files changed, 1172 insertions(+), 279 deletions(-)

-- 
1.7.4.1



Re: [PATCH v2 2/2] mmc: Check CAP_SYS_ADMIN for destructive ioctl ACMDs

2011-04-06 Thread John Calixto
On Wed, 6 Apr 2011, Michał Mirosław wrote:

> 2011/4/6 John Calixto :
> > On Tue, 5 Apr 2011, Andrei Warkentin wrote:
> >> Could you check the SD behavior for undefined ACMDs? If I do ACMD25,
> >> and ACMD25 is not defined, will it be executed as CMD25? This is the
> >> MMC behavior as I have mentioned.
> > The SD spec has a similar paragraph about unspecified ACMD opcodes -
> > they should be treated as the equivalent CMD opcodes.  However, all of
> > the opcodes that I listed are indeed specified.  So, if the card
> > identifies itself as an SD card, it must implement those opcodes.  If
> > it identifies itself as an SD card, and it does *not* implement the ACMD
> > opcodes I listed, then it likely "fell out the back door of the
> > manufacturing facility and landed on a store shelf" ;) ... its
> > quality/correct performance/reliability is unlikely.
> 
> Not all specified ACMDs are mandatory. SD spec 2.00 says that only
> 6,13,41,42,51 are mandatory for all cards. This is less than the list
> you prepared in the patch.
> 

Hmm... In several places of the full SD spec, and even in the redacted
"SD Specifications, Part 1, Physical Layer, Simplified Specification,
Version 3.01", it states the security functions are mandatory for all SD
cards except ROM cards.  For ROM cards, the security ACMDs are expected
to be treated as illegal commands, and do not fall back to their CMD
equivalents.

> It's better not to make those exceptions and instead prepare an
> userspace daemon that opens mmcblkX (non-exclusively) and forwards the
> requests from unprivileged users to the device (or even implements
> application specific API).
> 

OK, this sounds reasonable to me (although I still do not see why
non-root, non-privileged users have open() access to /dev/mmcblk0).

I will replace the opcode filter with an unconditional check for
CAP_SYS_RAWIO.  With this capability check, I think I can skip checking
for SD vs. MMC.  It would be nice for this ioctl to have a more general
purpose than just enabling secure operations on SD cards (see Arnd
Bergmann's intended use as a passthrough for virtual machines).

Regarding permissions, the next iteration of this patch will:

- Be a single patch containing the main ioctl functionality *and* this
  capability verification (checking "CAP_SYS_RAWIO", per Alan Cox in
  this thread).

- Only have the check for CAP_SYS_RAWIO.  It will *not* have an
  opcode-specific filter, nor a filter limiting it to just SD cards.

Is that satisfactory?

John

Re: [PATCH v2 1/2] mmc: Add ioctl to let userspace apps send ACMDs

2011-04-06 Thread Michał Mirosław
On 6 April 2011 at 19:37, John Calixto wrote:
> On Wed, 6 Apr 2011, Michał Mirosław wrote:
>> 2011/4/5 John Calixto :
>> > +#ifdef CONFIG_COMPAT
>> > +struct sd_ioc_cmd32 {
>> > +       u32 write_flag;
>> > +       u32 opcode;
>> > +       u32 arg;
>> > +       u32 response[4];
>> > +       u32 flags;
>> > +       u32 blksz;
>> > +       u32 blocks;
>> > +       u32 postsleep_us;
>> > +       u32 force_timeout_ns;
>> > +       compat_uptr_t data_ptr;
>> > +};
>> > +#define SD_IOC_ACMD32 _IOWR(MMC_BLOCK_MAJOR, 0, struct sd_ioc_cmd32)
>> [...]
>>
>> Since you're implementing a new ioctl, you can make the structure the
>> same for 64- and 32-bit archs and avoid all this compat crap.
> I was also less than pleased with this, but I chose to implement the
> compat crap because it allow a "natural" userspace API for both the
> kernel32+user32 and kernel64+user64 environments.  The ugliness in the
> kernel is just when you have defined CONFIG_COMPAT.  I think 32-bit-only
> is still an important target for this functionality (think set-top media
> players, mobile devices; basically, anything running on ARM) so always
> having to cast your data pointer to 64-bit is not appealing.  I suspect
> it will be very unlikely to see people using this in the kernel64+user32
> environment.

The problem is only with the data_ptr field. If you make it the same
size on both 32-bit and 64-bit arches, then you don't need all that
compat code for user32 on kernel64, except for pointer translation
when a 32-bit process is calling.

Best Regards,
Michał Mirosław


Re: pre-defined multiple block read/write vs. open ended multiple block read/write

2011-04-06 Thread Andrei Warkentin
On Wed, Apr 6, 2011 at 10:33 AM, Andrei Warkentin  wrote:
> On Wed, Apr 6, 2011 at 8:54 AM, Gao, Yunpeng  wrote:
>> Currently, the mmc driver uses open ended multiple block read/write 
>> (CMD18/25+CMD12) commands and not use pre-defined multiple block read/write 
>> (CMD23+CMD18/25) commands.
>>
>> According to the feedback from some MMC/SD device vendors, they prefer the 
>> pre-defined multiple block read/write commands since it gives the device a 
>> chance to do some internal optimization.
>>
>> Also I found on the latest eMMC 4.5 Spec, quite a few new features requires 
>> using of CMD23. That means, these new features can not be implemented with 
>> the open ended multiple block read/write commands.
>>
>> So, the question is, why the current mmc driver does not use the pre-defined 
>> multiple block read/write commands? Any comments?
>>
>
> While I really don't know, I would guess because the SD spec has a
> different definition of ACMD23 than MMC, so perhaps avoiding
> ACMD/CMD23 was chosen as the simpler, more common alternative.
> Alternatively, the difference might imply that at one time it was an
> optional feature. I'll try digging up an early MMCA spec to confirm...

Ok let me correct myself.

For SD:
ACMD23 - SET_WR_BLK_ERASE_COUNT may speed up the multiple-block
write, but CMD12 is still needed after the write.
CMD23 - optional, but mandatory on UHS104 cards. CMD23 support is
indicated in the SCR register.

>
> At least one new MMC feature now (reliable writes) requires CMD23, and
> at the very least using SET_BLOCK_COUNT instead of STOP_TRANSMISSION
> would make that code path more homogeneous. Certainly the overhead is
> the same for MMC - either send an extra command before or after the
> transfer. For SD you do get an extra CMD55 before CMD23, since it's
> ACMD23 we're talking about.
>
> I am actually working on a patch to do just that...
>

Which I will send shortly.

A


Re: Dynamic MMC device naming vs. bootloaders

2011-04-06 Thread Andrei Warkentin
On Wed, Apr 6, 2011 at 12:32 PM, Stephen Warren  wrote:
> Andrei Warkentin wrote at Wednesday, April 06, 2011 11:19 AM:
>> On Wed, Apr 6, 2011 at 11:59 AM, Stephen Warren  wrote:
>> > However, isn't it just a fluke that this will work; registering the 
>> > internal
>> > host controller first will I assume start probing of any attached devices 
>> > on
>> > that controller first, but does it actually guarantee that such probing 
>> > will
>> > also complete first, which I believe is the necessary condition for the 
>> > mmcblk
>> > device to be assigned ID 0?
>> >
>>
>> The device index is only assigned if the mmc block driver is started
>> on a detected card. ...
>>
>> > Equally, if there were two controllers with fixed/internal MMC and/or two
>> > controllers which supported pluggable SD cards, the race issue would still
>> > exist?
>>
>> I think if you had two controllers and you plugged two cards in at the
>> "same time", then you would have  a race condition, as both would
>> mmc_detect_change (effectively schedule_work to an ordered wq), and it
>> would depend on which card change IRQ occured first. It seems like
>> different hosts use different delays for when the work will be done,
>> so if you have different hosts, you can make this even more obvious.
>> You'd have to really try, though, I think. I guess if you are never
>> going to support multiple cards on one host, you might as well tie the
>> block index to host index.
>
> The case I care about most right now is a cold kernel boot. This is
> basically the same as plugging two SD cards in at (exactly) the same time;
> the time being when the SD platform driver is registered. The fact that
> that on my board, one is actually eMMC and one really an SD card that's
> already plugged in pre-boot isn't really that relevant.
>
> So, if I interpret your statements correctly, you're agreeing that simply
> registering the host controller for eMMC first doesn't guarantee that
> the eMMC will be block device ID 0, albeit in practice that does seem to
> be true a large enough percentage of the time not to notice any problem.
>

The call path is:

mmc_add_host
mmc_start_host
mmc_detect_change schedules mmc_rescan work with no delay

So two consecutive mmc_add_host calls for two different hosts imply that
the first mmc_rescan will happen for the first added host. So the
answer to your question is no - you do seem to have that guarantee.

A


Re: [PATCH v2 1/2] mmc: Add ioctl to let userspace apps send ACMDs

2011-04-06 Thread John Calixto
On Wed, 6 Apr 2011, Michał Mirosław wrote:
> 2011/4/5 John Calixto :
> > +       /*
> > +        * Only allow ACMDs on the whole block device, not on partitions.  
> > This
> > +        * prevents overspray between sibling partitions.
> > +        */
> > +       if (bdev != bdev->bd_contains)
> > +               return -EPERM;
> 
> This should at least also check CAP_SYS_ADMIN capability.
> 

Yep.  I broke out the capability check into a separate patch - I'll
reply to your permissions-related comments on that thread.

> > +
> > +       md = mmc_blk_get(bdev->bd_disk);
> > +       if (!md)
> > +               return -EINVAL;
> > +
> > +       card = md->queue.card;
> > +       if (IS_ERR(card))
> > +               return PTR_ERR(card);
> > +
> > +       host = card->host;
> > +       mmc_claim_host(host);
> > +
> > +       err = mmc_app_cmd(host, card);
> > +       if (err)
> > +               goto acmd_done;
> > +
> > +       mrq.cmd = &cmd;
> > +       mrq.data = &data;
> > +
> > +       if (copy_from_user(&sdic, sdic_ptr, sizeof(sdic))) {
> > +               err = -EFAULT;
> > +               goto acmd_done;
> > +       }
> 
> You should first copy and verify ioctl's data and only then claim host
> and send commands. Preferably the check-and-copy should be separate
> function.
> 

Will do.

> > +
> > +#ifdef CONFIG_COMPAT
> > +struct sd_ioc_cmd32 {
> > +       u32 write_flag;
> > +       u32 opcode;
> > +       u32 arg;
> > +       u32 response[4];
> > +       u32 flags;
> > +       u32 blksz;
> > +       u32 blocks;
> > +       u32 postsleep_us;
> > +       u32 force_timeout_ns;
> > +       compat_uptr_t data_ptr;
> > +};
> > +#define SD_IOC_ACMD32 _IOWR(MMC_BLOCK_MAJOR, 0, struct sd_ioc_cmd32)
> [...]
> 
> Since you're implementing a new ioctl, you can make the structure the
> same for 64- and 32-bit archs and avoid all this compat crap.
> 

I was also less than pleased with this, but I chose to implement the
compat crap because it allows a "natural" userspace API for both the
kernel32+user32 and kernel64+user64 environments.  The ugliness in the
kernel appears only when CONFIG_COMPAT is defined.  I think 32-bit-only
is still an important target for this functionality (think set-top media
players, mobile devices; basically, anything running on ARM), so always
having to cast your data pointer to 64-bit is not appealing.  I suspect
it will be very unlikely to see people using this in the kernel64+user32
environment.

Cheers,

John

RE: Dynamic MMC device naming vs. bootloaders

2011-04-06 Thread Stephen Warren
Andrei Warkentin wrote at Wednesday, April 06, 2011 11:19 AM:
> On Wed, Apr 6, 2011 at 11:59 AM, Stephen Warren  wrote:
> > However, isn't it just a fluke that this will work; registering the internal
> > host controller first will I assume start probing of any attached devices on
> > that controller first, but does it actually guarantee that such probing will
> > also complete first, which I believe is the necessary condition for the
> > mmcblk device to be assigned ID 0?
> >
> 
> The device index is only assigned if the mmc block driver is started
> on a detected card. ...
> 
> > Equally, if there were two controllers with fixed/internal MMC and/or two
> > controllers which supported pluggable SD cards, the race issue would still
> > exist?
> 
> I think if you had two controllers and you plugged two cards in at the
> "same time", then you would have  a race condition, as both would
> mmc_detect_change (effectively schedule_work to an ordered wq), and it
> would depend on which card change IRQ occurred first. It seems like
> different hosts use different delays for when the work will be done,
> so if you have different hosts, you can make this even more obvious.
> You'd have to really try, though, I think. I guess if you are never
> going to support multiple cards on one host, you might as well tie the
> block index to host index.

The case I care about most right now is a cold kernel boot. This is
basically the same as plugging two SD cards in at (exactly) the same time;
the time being when the SD platform driver is registered. The fact that
that on my board, one is actually eMMC and one really an SD card that's
already plugged in pre-boot isn't really that relevant.

So, if I interpret your statements correctly, you're agreeing that simply
registering the host controller for eMMC first doesn't guarantee that
the eMMC will be block device ID 0, albeit in practice that does seem to
be true a large enough percentage of the time not to notice any problem.

Am I correct?

Thanks.

-- 
nvpublic

--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Dynamic MMC device naming vs. bootloaders

2011-04-06 Thread Andrei Warkentin
On Wed, Apr 6, 2011 at 11:59 AM, Stephen Warren  wrote:
>
> Ah. I do see now that Tegra's equivalent platform_device registration
> order was changed between the two kernels that exhibit different
> behavior, so re-ordering might work. I'll try it out.
>
> However, isn't it just a fluke that this will work; registering the internal
> host controller first will I assume start probing of any attached devices on
> that controller first, but does it actually guarantee that such probing will
> also complete first, which I believe is the necessary condition for the mmcblk
> device to be assigned ID 0?
>

The device index is only assigned if the mmc block driver is started
on a detected card. This means that if you remove a card from host 0,
then the card on host 1 will be detected first. I would guess most
platform code registers an emmc/esd-containing host as the first to add.
Being first to add implies being first to resume, so the ordering
should stick across PM states (and safe MMC resume).

> Equally, if there were two controllers with fixed/internal MMC and/or two
> controllers which supported pluggable SD cards, the race issue would still
> exist?

I think if you had two controllers and you plugged two cards in at the
"same time", then you would have  a race condition, as both would
mmc_detect_change (effectively schedule_work to an ordered wq), and it
would depend on which card change IRQ occurred first. It seems like
different hosts use different delays for when the work will be done,
so if you have different hosts, you can make this even more obvious.
You'd have to really try, though, I think. I guess if you are never
going to support multiple cards on one host, you might as well tie the
block index to host index.

A


RE: Dynamic MMC device naming vs. bootloaders

2011-04-06 Thread Stephen Warren
Kishore Kadiyala wrote at Wednesday, April 06, 2011 4:11 AM:
> 
> On Wed, Apr 6, 2011 at 2:58 AM, Stephen Warren  wrote:
> > Kishore Kadiyala wrote at Tuesday, April 05, 2011 12:38 AM:
> >>
> >> Hi Stephen,
> >>
> >> On Mon, Apr 4, 2011 at 11:09 PM, Stephen Warren  wrote:
> >> > Chris et. al.,
> >> >
> >> > I'm working on an ARM system that contains at least two MMC/SD devices;
> >> > specifically the board has an internal MMC device, and an SD card slot,
> >> > although the SoC has four MMC/SD hosts IIRC.
> >> >
> >> > The kernel's naming of these devices is dynamic. If the SD card is not
> >> > plugged in, the internal MMC is always known as mmcblk0. If the SD card
> >> > is plugged in too, sometimes the internal MMC is mmcblk0 and sometimes
> >> > it's mmcblk1. I assume this is timing related; 2.6.37 usually seemed to
> >> > name them "in order", whereas 2.6.38 usually seems to name them
> >> > "backwards".
> >> >
> >> > This causes problems with the bootloader scripts I'm using, which assume
> >> > that the internal MMC is always device 0 and the SD slot is always
> >> > device 1, and hence provide kernel command-line argument
> >> > root=/dev/mmcblk0p3 or root=/dev/mmcblk1p3 based on whether it booted
> >> > from SD or internal MMC (SD is searched for a valid image first by the
> >> > bootloader).
> >>
> >> Just follow this thread which discusses the same but for OMAP4 controller
> >> http://www.mail-archive.com/linux-omap@vger.kernel.org/msg47853.html
> >
> > Kishore, thanks for the pointer. However, I don't see anything in that 
> > thread
> > that affects MMC block device IDs. The thread ended with the original poster
> > requesting the patch be dropped, since he'd made a mistake in his bootloader
> > settings.
> >
> > Just perhaps the registration order change is enough to change the timing of
> > device (memory device, not host controller?) probing, which just happens to
> > affect the ID assignment?
> >
> >> One solution could be make your internal MMC always registered as mmcblk0
> >> and the removable one as next device.
> >
> > That's essentially what the patch I gave previously does.
> 
> Your change was in mmc/card/block.c, but you can handle this by
> changing the sequence during device registration.
> 
> Following change in the patch does that
> https://patchwork.kernel.org/patch/595861/

Ah. I do see now that Tegra's equivalent platform_device registration order was
changed between the two kernels that exhibit different behavior, so re-ordering
might work. I'll try it out.

However, isn't it just a fluke that this will work; registering the internal
host controller first will I assume start probing of any attached devices on
that controller first, but does it actually guarantee that such probing will
also complete first, which I believe is the necessary condition for the mmcblk
device to be assigned ID 0?

Equally, if there were two controllers with fixed/internal MMC and/or two
controllers which supported pluggable SD cards, the race issue would still
exist?

Thanks for the pointer!

-- 
nvpublic



Re: pre-defined multiple block read/write vs. open ended multiple block read/write

2011-04-06 Thread Andrei Warkentin
On Wed, Apr 6, 2011 at 8:54 AM, Gao, Yunpeng  wrote:
> Currently, the mmc driver uses the open-ended multiple block read/write
> (CMD18/25+CMD12) commands and does not use the pre-defined multiple block
> read/write (CMD23+CMD18/25) commands.
>
> According to feedback from some MMC/SD device vendors, they prefer the
> pre-defined multiple block read/write commands since they give the device
> a chance to do some internal optimization.
>
> I also found that in the latest eMMC 4.5 spec, quite a few new features
> require the use of CMD23. That means these new features cannot be
> implemented with the open-ended multiple block read/write commands.
>
> So, the question is: why does the current mmc driver not use the
> pre-defined multiple block read/write commands? Any comments?
>

While I really don't know, I would guess it's because the SD spec has a
different definition of ACMD23 than MMC, so avoiding ACMD/CMD23 was
chosen as the simpler, more common alternative. Alternatively, the
difference might imply that at one time it was an optional feature. I'll
try digging up an early MMCA spec to confirm...

At least one new MMC feature now (reliable writes) requires CMD23, and
at the very least using SET_BLOCK_COUNT instead of STOP_TRANSMISSION
would make that code path more homogeneous. Certainly the overhead is
the same for MMC - either send an extra command before or after the
transfer. For SD you do get an extra CMD55 before CMD23, since it's
ACMD23 we're talking about.

I am actually working on a patch to do just that...

A


pre-defined multiple block read/write vs. open ended multiple block read/write

2011-04-06 Thread Gao, Yunpeng
Currently, the mmc driver uses the open-ended multiple block read/write
(CMD18/25+CMD12) commands and does not use the pre-defined multiple block
read/write (CMD23+CMD18/25) commands.

According to feedback from some MMC/SD device vendors, they prefer the
pre-defined multiple block read/write commands since they give the device
a chance to do some internal optimization.

I also found that in the latest eMMC 4.5 spec, quite a few new features
require the use of CMD23. That means these new features cannot be
implemented with the open-ended multiple block read/write commands.

So, the question is: why does the current mmc driver not use the
pre-defined multiple block read/write commands? Any comments?

Thanks.

Regards,
Yunpeng


Re: [PATCH v2 2/2] mmc: Check CAP_SYS_ADMIN for destructive ioctl ACMDs

2011-04-06 Thread Michał Mirosław
2011/4/6 John Calixto :
> On Tue, 5 Apr 2011, Andrei Warkentin wrote:
>> Could you check the SD behavior for undefined ACMDs? If I do ACMD25,
>> and ACMD25 is not defined, will it be executed as CMD25? This is the
>> MMC behavior as I have mentioned.
> The SD spec has a similar paragraph about unspecified ACMD opcodes -
> they should be treated as the equivalent CMD opcodes.  However, all of
> the opcodes that I listed are indeed specified.  So, if the card
> identifies itself as an SD card, it must implement those opcodes.  If
> it identifies itself as an SD card, and it does *not* implement the ACMD
> opcodes I listed, then it likely "fell out the back door of the
> manufacturing facility and landed on a store shelf" ;) ... its
> quality/correct performance/reliability is unlikely.

Not all specified ACMDs are mandatory. SD spec 2.00 says that only
ACMD6, 13, 41, 42, and 51 are mandatory for all cards. That is a shorter
list than the one you prepared in the patch.

It's better not to make those exceptions and instead prepare a userspace
daemon that opens mmcblkX (non-exclusively) and forwards the requests
from unprivileged users to the device (or even implements an
application-specific API).

Best Regards,
Michał Mirosław


Re: Dynamic MMC device naming vs. bootloaders

2011-04-06 Thread Kishore Kadiyala
On Wed, Apr 6, 2011 at 2:58 AM, Stephen Warren  wrote:
> Kishore Kadiyala wrote at Tuesday, April 05, 2011 12:38 AM:
>>
>> Hi Stephen,
>>
>> On Mon, Apr 4, 2011 at 11:09 PM, Stephen Warren  wrote:
>> > Chris et. al.,
>> >
>> > I'm working on an ARM system that contains at least two MMC/SD devices;
>> > specifically the board has an internal MMC device, and an SD card slot,
>> > although the SoC has four MMC/SD hosts IIRC.
>> >
>> > The kernel's naming of these devices is dynamic. If the SD card is not
>> > plugged in, the internal MMC is always known as mmcblk0. If the SD card
>> > is plugged in too, sometimes the internal MMC is mmcblk0 and sometimes it's
>> > mmcblk1. I assume this is timing related; 2.6.37 usually seemed to name
>> > them "in order", whereas 2.6.38 usually seems to name them "backwards".
>> >
>> > This causes problems with the bootloader scripts I'm using, which assume
>> > that the internal MMC is always device 0 and the SD slot is always
>> > device 1, and hence provide kernel command-line argument root=/dev/mmcblk0p3
>> > or root=/dev/mmcblk1p3 based on whether it booted from SD or internal MMC
>> > (SD is searched for a valid image first by the bootloader).
>>
>> Just follow this thread which discusses the same but for OMAP4 controller
>> http://www.mail-archive.com/linux-omap@vger.kernel.org/msg47853.html
>
> Kishore, thanks for the pointer. However, I don't see anything in that thread
> that affects MMC block device IDs. The thread ended with the original poster
> requesting the patch be dropped, since he'd made a mistake in his bootloader
> settings.
>
> Just perhaps the registration order change is enough to change the timing of
> device (memory device, not host controller?) probing, which just happens to
> affect the ID assignment?
>
>> One solution could be make your internal MMC always registered as mmcblk0
>> and the removable one as next device.
>
> That's essentially what the patch I gave previously does.

Your change was in mmc/card/block.c, but you can handle this by
changing the sequence during device registration.

Following change in the patch does that
https://patchwork.kernel.org/patch/595861/

diff --git a/arch/arm/mach-omap2/board-4430sdp.c
b/arch/arm/mach-omap2/board-4430sdp.c
index 1a943be..f914099 100644
--- a/arch/arm/mach-omap2/board-4430sdp.c
+++ b/arch/arm/mach-omap2/board-4430sdp.c
@@ -349,11 +349,6 @@  static struct twl4030_usb_data omap4_usbphy_data = {

 static struct omap2_hsmmc_info mmc[] = {
{
-   .mmc= 1,
-   .caps   = MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA,
-   .gpio_wp= -EINVAL,
-   },
-   {
.mmc= 2,
.caps   =  MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA,
.gpio_cd= -EINVAL,
@@ -361,19 +356,24 @@  static struct omap2_hsmmc_info mmc[] = {
.nonremovable   = true,
.ocr_mask   = MMC_VDD_29_30,
},
+   {
+   .mmc= 1,
+   .caps   = MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA,
+   .gpio_wp= -EINVAL,
+   },
{}  /* Terminator */
 };

Regards,
Kishore

>
> --
> nvpublic
>
>


Re: [PATCH v2 1/2] mmc: Add ioctl to let userspace apps send ACMDs

2011-04-06 Thread Michał Mirosław
2011/4/5 John Calixto :
> Sending ACMDs from userspace is useful for such things as:
>
>    - The security application of an SD card (SD Specification, Part 3,
>      Security)
>
>    - SD passthrough for virtual machines
>
> Tested on TI PCIxx12 (SDHCI), Sigma Designs SMP8652 SoC, TI OMAP3621
> SoC, TI OMAP3630 SoC, Samsung S5PC110 SoC, Qualcomm MSM7200A SoC.
>
> Signed-off-by: John Calixto 
> ---
>  drivers/mmc/card/block.c  |  191 +
>  drivers/mmc/core/sd_ops.c |    3 +-
>  include/linux/mmc/core.h  |    1 +
>  include/linux/mmc/sd.h    |   16 
>  4 files changed, 210 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
> index 61d233a..c2e107c 100644
> --- a/drivers/mmc/card/block.c
> +++ b/drivers/mmc/card/block.c
> @@ -31,6 +31,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>
>  #include 
>  #include 
> @@ -158,11 +160,200 @@ mmc_blk_getgeo(struct block_device *bdev, struct 
> hd_geometry *geo)
>        return 0;
>  }
>
> +static int mmc_blk_ioctl_acmd(struct block_device *bdev,
> +       struct sd_ioc_cmd __user *sdic_ptr)
> +{
> +       struct sd_ioc_cmd sdic;
> +       struct mmc_blk_data *md;
> +       struct mmc_host *host;
> +       struct mmc_card *card;
> +       struct mmc_command cmd = {0};
> +       struct mmc_data data = {0};
> +       struct mmc_request mrq = {0};
> +       struct scatterlist sg = {0};
> +       unsigned char *blocks = NULL;
> +       size_t data_bytes;
> +       int err;
> +
> +       /*
> +        * Only allow ACMDs on the whole block device, not on partitions.  
> This
> +        * prevents overspray between sibling partitions.
> +        */
> +       if (bdev != bdev->bd_contains)
> +               return -EPERM;

This should at least also check CAP_SYS_ADMIN capability.

> +
> +       md = mmc_blk_get(bdev->bd_disk);
> +       if (!md)
> +               return -EINVAL;
> +
> +       card = md->queue.card;
> +       if (IS_ERR(card))
> +               return PTR_ERR(card);
> +
> +       host = card->host;
> +       mmc_claim_host(host);
> +
> +       err = mmc_app_cmd(host, card);
> +       if (err)
> +               goto acmd_done;
> +
> +       mrq.cmd = &cmd;
> +       mrq.data = &data;
> +
> +       if (copy_from_user(&sdic, sdic_ptr, sizeof(sdic))) {
> +               err = -EFAULT;
> +               goto acmd_done;
> +       }

You should first copy and verify the ioctl's data, and only then claim
the host and send commands. Preferably the check-and-copy should be a
separate function.

> +
> +       cmd.opcode = sdic.opcode;
> +       cmd.arg = sdic.arg;
> +       cmd.flags = sdic.flags;
> +
> +       data.sg = &sg;
> +       data.sg_len = 1;
> +       data.blksz = sdic.blksz;
> +       data.blocks = sdic.blocks;
> +
> +       data_bytes = data.blksz * data.blocks;
> +       blocks = kzalloc(data_bytes, GFP_KERNEL);
> +       if (!blocks) {
> +               err = -ENOMEM;
> +               goto acmd_done;
> +       }
> +       sg_init_one(data.sg, blocks, data_bytes);
> +
> +
> +       if (copy_from_user(blocks, sdic_ptr->data_ptr, data_bytes)) {
> +               err = -EFAULT;
> +               goto acmd_done;
> +       }
> +       if (sdic.write_flag)
> +               data.flags = MMC_DATA_WRITE;
> +       else
> +               data.flags = MMC_DATA_READ;
> +
> +       /* data.flags must already be set before doing this. */
> +       mmc_set_data_timeout(&data, card);
> +       /* Allow overriding the timeout_ns for empirical tuning. */
> +       if (sdic.force_timeout_ns)
> +               data.timeout_ns = sdic.force_timeout_ns;
> +
> +       mmc_wait_for_req(host, &mrq);
> +
> +       if (cmd.error) {
> +               dev_err(mmc_dev(host), "%s: cmd error %d\n",
> +                                               __func__, cmd.error);
> +               err = cmd.error;
> +               goto acmd_done;
> +       }
> +       if (data.error) {
> +               dev_err(mmc_dev(host), "%s: data error %d\n",
> +                                               __func__, data.error);
> +               err = data.error;
> +               goto acmd_done;
> +       }
> +
> +       /*
> +        * According to the SD specs, some commands require a delay after
> +        * issuing the command.
> +        */
> +       if (sdic.postsleep_us)
> +               udelay(sdic.postsleep_us);
> +
> +       if (copy_to_user(&(sdic_ptr->response), cmd.resp, sizeof(cmd.resp))) {
> +               err = -EFAULT;
> +               goto acmd_done;
> +       }
> +
> +       if (!sdic.write_flag) {
> +               if (copy_to_user(sdic_ptr->data_ptr, blocks, data_bytes)) {
> +                       err = -EFAULT;
> +                       goto acmd_done;
> +               }
> +       }
> +
> +acmd_done:
> +       kfree(blocks);
> +       mmc_release_host(host);
> +       mmc_blk_put(md);
> +       return err;
> +}
> +
> +static int mmc_blk_ioctl(struct block_device *bdev, f