RE: [PATCH] exynos: mmc: use correct variable for MODULE_DEVICE_TABLE

2012-11-01 Thread Seungwon Jeon
On Wednesday, October 31, 2012, Sergei Trofimovich sly...@gentoo.org wrote:
 From: Sergei Trofimovich sly...@gentoo.org
 
 Found by gcc:
 
 linux-2.6/drivers/mmc/host/dw_mmc-exynos.c: At top level:
 linux-2.6/drivers/mmc/host/dw_mmc-exynos.c:226:1: error: '__mod_of_device_table' aliased to undefined symbol 'dw_mci_pltfm_match'
 
 CC: Chris Ball c...@laptop.org
 CC: Thomas Abraham thomas.abra...@linaro.org
 CC: Will Newton will.new...@imgtec.com
 CC: linux-mmc@vger.kernel.org
 CC: linux-ker...@vger.kernel.org
 Signed-off-by: Sergei Trofimovich sly...@gentoo.org
 ---
Acked-by: Seungwon Jeon tgih@samsung.com

I suggest changing the prefix of the subject:
'mmc: dw_mmc-exynos' instead of 'exynos: mmc'.

Thanks,
Seungwon Jeon




[RFC/PATCH v3 0/2] ROW scheduling Algorithm

2012-11-01 Thread tlinder
From: Tatyana Brokhman tlin...@codeaurora.org

The ROW scheduling algorithm will be used in mobile devices as the
default block layer IO scheduling algorithm. ROW stands for READ Over
WRITE, which is the main request dispatch policy of this algorithm.

The ROW IO scheduler was developed with the needs of mobile devices in
mind. In mobile devices we favor user experience above everything else,
thus we want to give READ IO requests as much priority as possible.
Mobile devices won't have as many parallel threads as desktops;
usually it's a single thread, or at most 2 threads working
simultaneously for read & write. Favoring READ requests over WRITEs
greatly decreases the READ latency.

The main idea of the ROW scheduling policy is:
give READ requests priority over WRITE requests, with WRITE starvation
kept in mind.

Below you'll find a small comparison of ROW to existing schedulers.
The test that was run for these measurements is parallel lmdd read and write.
The tests were performed on:
kernel version: 3.4
Underlying device driver: mmc
Host controller: msm-sdcc
Card: standard eMMC NAND flash

--------------------------------------------------------------------------
 Algorithm |  Throughput [mb/sec]  |  Worst case latency [msec]  |
           |   READ   |   WRITE    |    READ     |    WRITE      |
--------------------------------------------------------------------------
 Noop      |  12.12   |   25.18    |    4407     |     4804      |
 Deadline  |  12.02   |   24.6     |     705     |     5130      |
 CFQ       |  20.81   |   15.23    |     230     |     9370      |
 ROW       |  27.75   |   15.34    |      85     |    12025      |
--------------------------------------------------------------------------

Tatyana Brokhman (2):
  block: Adding ROW scheduling algorithm
  block: compile ROW statically into the kernel

 Documentation/block/row-iosched.txt |  186 ++
 block/Kconfig.iosched   |   22 ++
 block/Makefile  |1 +
 block/row-iosched.c |  686 +++
 4 files changed, 895 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/block/row-iosched.txt
 create mode 100644 block/row-iosched.c

-- 
1.7.6
--
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. 
Is a member of Code Aurora Forum, hosted by the Linux Foundation


[RFC/PATCH v3 2/2] block: compile ROW statically into the kernel

2012-11-01 Thread tlinder
From: Tatyana Brokhman tlin...@codeaurora.org

ROW is a new scheduling algorithm. Like the existing scheduling
algorithms, it should be compiled into the kernel statically, giving the
user the ability to switch to it without recompiling the kernel.

Signed-off-by: Tatyana Brokhman tlin...@codeaurora.org

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 5a747e2..401f42d 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -23,6 +23,7 @@ config IOSCHED_DEADLINE
 
 config IOSCHED_ROW
	tristate "ROW I/O scheduler"
+   default y
---help---
  The ROW I/O scheduler gives priority to READ requests over the
  WRITE requests when dispatching, without starving WRITE requests.
-- 
1.7.6
--
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. 
Is a member of Code Aurora Forum, hosted by the Linux Foundation
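A quick illustration of what "switch without recompilation" means in
practice: once ROW is built in, a runtime tool can select it through the
sysfs scheduler attribute of a block device. This is a hypothetical
userspace sketch; the device path is only an example.

/* Select the ROW elevator for one device via sysfs (illustrative). */
#include <stdio.h>

int main(void)
{
	/* Example path; substitute the target block device. */
	FILE *f = fopen("/sys/block/mmcblk0/queue/scheduler", "w");

	if (!f) {
		perror("open scheduler attribute");
		return 1;
	}
	fputs("row", f);	/* elevator name registered by row-iosched */
	fclose(f);
	return 0;
}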


[RFC/PATCH v3 1/2] block: Adding ROW scheduling algorithm

2012-11-01 Thread tlinder
From: Tatyana Brokhman tlin...@codeaurora.org

This patch adds the implementation of a new scheduling algorithm - ROW.
The policy of this algorithm is to prioritize READ requests over WRITE
requests as much as possible without starving the WRITE requests.

Signed-off-by: Tatyana Brokhman tlin...@codeaurora.org

diff --git a/Documentation/block/row-iosched.txt 
b/Documentation/block/row-iosched.txt
new file mode 100644
index 000..343c276
--- /dev/null
+++ b/Documentation/block/row-iosched.txt
@@ -0,0 +1,186 @@
+Introduction
+
+
+The ROW scheduling algorithm will be used in mobile devices as the
+default block layer IO scheduling algorithm. ROW stands for READ Over
+WRITE, which is the main request dispatch policy of this algorithm.
+
+The ROW IO scheduler was developed with the needs of mobile devices in
+mind. In mobile devices we favor user experience above everything else,
+thus we want to give READ IO requests as much priority as possible.
+The main idea of the ROW scheduling policy is just that:
+- If there are READ requests in the pipe, dispatch them, while keeping
+write starvation in mind.
+
+Software description
+
+The elevator defines a registering mechanism for different IO
+schedulers to implement. This makes implementing a new algorithm quite
+straightforward and requires almost no changes to the block/elevator
+framework. A new IO scheduler just has to implement a set of callback
+functions defined by the elevator.
+These callbacks cover all the required IO operations such as
+adding/removing requests to/from the scheduler, merging two requests,
+dispatching a request etc.
+
+Design
+==
+
+The requests are kept in queues according to their priority. The
+dispatching of requests is done in a Round Robin manner with a
+different slice for each queue. The dispatch quantum for a specific
+queue is set according to the queue's priority. READ queues are
+given a bigger dispatch quantum than the WRITE queues, within a
+dispatch cycle.
+
+At the moment there are 6 types of queues the requests are
+distributed to:
+-  High priority READ queue
+-  High priority Synchronous WRITE queue
+-  Regular priority READ queue
+-  Regular priority Synchronous WRITE queue
+-  Regular priority WRITE queue
+-  Low priority READ queue
+
+The marking of a request as high/low priority is done by the
+application adding the request, not by the scheduler. See TODO section.
+If the request is not marked in any way (high/low), the scheduler
+assigns it to one of the regular priority queues:
+read/write/sync write.
+
+If in a certain dispatch cycle one of the queues was empty and didn't
+use its quantum, that queue will be marked as un-served. If we're in
+the middle of a dispatch cycle dispatching from queue Y and a request
+arrives for queue X that was un-served in the previous cycle, and X's
+priority is higher than Y's, queue Y will be preempted in favor of
+queue X.
+
+For READ request queues, the ROW IO scheduler allows idling within a
+dispatch quantum in order to give the application a chance to insert
+more requests. Idling means adding some extra time for serving a
+certain queue even if the queue is empty. Idling is enabled if
+the ROW IO scheduler identifies that the application is inserting
+requests at a high frequency.
+Not all queues can idle. The ROW scheduler exposes an enablement struct
+for idling.
+For idling on READ queues, the ROW IO scheduler uses a timer mechanism.
+When the timer expires, we schedule a delayed work that will signal the
+device driver to fetch another request for dispatch.
+
+The ROW scheduler will support additional services for block devices
+that support Urgent Requests. That is, the scheduler may inform the
+device driver of urgent requests using a newly defined callback.
+In addition it will support rescheduling of requests that were
+interrupted, for example when the device driver issues a long write
+request and a sudden urgent request is received by the scheduler.
+The scheduler will inform the device driver about the urgent request,
+so the device driver can stop the current write request and serve the
+urgent request. In such a case the device driver may also insert back
+into the scheduler the remainder of the interrupted write request, such
+that the scheduler may continue sending urgent requests without the
+need to interrupt the ongoing write again and again. The write
+remainder will be sent later on according to the scheduler policy.
+
+SMP/multi-core
+==
+At the moment the code is accessed from 2 contexts:
+- Application context (from the block/elevator layer): adding requests.
+- Device driver context: dispatching the requests and notifying on
+  completion.
+
+One lock is used to synchronize between the two. This lock is provided
+by the block device driver along with the dispatch queue.
+
+Performance
+===
+Several performance tests were run in order to compare the ROW
+scheduler to existing scheduling algorithms:
+
+1. 
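For readers skimming the design notes above, here is a small
self-contained sketch (not the patch code) of the round-robin-with-quanta
dispatch idea. The queue names mirror the ROWQ_PRIO_* identifiers that
appear in row-iosched.c; the quantum values and the loop itself are
illustrative assumptions.

/* Illustrative model of ROW's dispatch cycle; not row-iosched.c. */
#include <stdbool.h>

enum row_queue_prio {
	ROWQ_PRIO_HIGH_READ = 0,
	ROWQ_PRIO_REG_READ,
	ROWQ_PRIO_HIGH_SWRITE,
	ROWQ_PRIO_REG_SWRITE,
	ROWQ_PRIO_REG_WRITE,
	ROWQ_PRIO_LOW_READ,
	ROWQ_PRIO_LOW_SWRITE,
	ROWQ_MAX_PRIO,
};

/* READ queues get bigger quanta, so they dominate each cycle
 * (values here are assumptions for illustration only). */
static const int quantum[ROWQ_MAX_PRIO] = { 100, 75, 5, 4, 4, 3, 2 };

struct rowq_model {
	int pending;	/* requests currently queued */
	bool unserved;	/* had nothing to dispatch last cycle */
};

/* One dispatch cycle: serve each queue up to its quantum; queues that
 * dispatched nothing are marked un-served, so a request arriving for
 * them later may preempt a lower-priority queue mid-cycle. */
static void dispatch_cycle(struct rowq_model q[ROWQ_MAX_PRIO])
{
	int i, served;

	for (i = 0; i < ROWQ_MAX_PRIO; i++) {
		for (served = 0; q[i].pending > 0 && served < quantum[i];
		     served++)
			q[i].pending--;		/* "dispatch" one request */
		q[i].unserved = (served == 0);
	}
}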

[RFC/PATCH v2 0/2] Adding support for urgent requests handling

2012-11-01 Thread tlinder
This patch set adds support in the block & elevator layers for handling
urgent requests.
An urgent request notification is passed to the underlying driver (eMMC
for example) and causes interruption of the current low-priority request
in order to execute the urgent one.
The interrupted request is inserted back into the scheduler's internal
data structures.

Tatyana Brokhman (2):
  block: Add support for reinsert a dispatched req
  block: Add API for urgent request handling

 block/blk-core.c |   68 -
 block/blk-settings.c |   12 
 block/blk.h  |   11 +++
 block/elevator.c |   35 +++
 include/linux/blkdev.h   |6 
 include/linux/elevator.h |8 +
 6 files changed, 138 insertions(+), 2 deletions(-)

-- 
1.7.6
--
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. 
Is a member of Code Aurora Forum, hosted by the Linux Foundation

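To make the intended driver-side flow concrete, here is a minimal
hypothetical sketch of how an underlying driver might use the APIs
introduced below (blk_urgent_request(), blk_reinsert_request() and
blk_reinsert_req_sup()). The my_* helpers are placeholders for
device-specific code, not part of this patch set.

/* Hypothetical driver-side use of the urgent/reinsert APIs. */
#include <linux/blkdev.h>

static void my_stop_transfer(struct request *rq) { /* device-specific */ }
static void my_start_transfer(struct request *rq) { /* device-specific */ }

static struct request *my_current_rq;	/* request now on the bus */

/* Called by __blk_run_queue() when the scheduler reports an urgent
 * request pending (queue lock held, as for request_fn). */
static void my_urgent_fn(struct request_queue *q)
{
	struct request *urgent;

	/* Interrupt the ongoing low-priority transfer, if any, and hand
	 * its remainder back to the scheduler. */
	if (my_current_rq && blk_reinsert_req_sup(q)) {
		my_stop_transfer(my_current_rq);
		blk_reinsert_request(q, my_current_rq);
		my_current_rq = NULL;
	}

	/* Per the patch set, the next request fetched after the urgent
	 * notification is the urgent one. */
	urgent = blk_fetch_request(q);
	if (urgent) {
		my_current_rq = urgent;
		my_start_transfer(urgent);
	}
}

static void my_setup_queue(struct request_queue *q)
{
	blk_urgent_request(q, my_urgent_fn);	/* register the handler */
}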


[RFC/PATCH v2 1/2] block: Add support for reinsert a dispatched req

2012-11-01 Thread tlinder
From: Tatyana Brokhman tlin...@codeaurora.org

Add support for reinserting a dispatched request back into the
scheduler's internal data structures.
Add an API for verifying whether the current scheduler supports
the request-reinsertion mechanism.

Signed-off-by: Tatyana Brokhman tlin...@codeaurora.org

diff --git a/block/blk-core.c b/block/blk-core.c
index b421289..8881e46 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1177,6 +1177,45 @@ void blk_requeue_request(struct request_queue *q, struct request *rq)
 }
 EXPORT_SYMBOL(blk_requeue_request);
 
+/**
+ * blk_reinsert_request() - Insert a request back to the scheduler
+ * @q: request queue
+ * @rq:	request to be inserted
+ *
+ * This function inserts the request back to the scheduler as if
+ * it was never dispatched.
+ *
+ * Return: 0 on success, error code on fail
+ */
+int blk_reinsert_request(struct request_queue *q, struct request *rq)
+{
+   blk_delete_timer(rq);
+   blk_clear_rq_complete(rq);
+   trace_block_rq_requeue(q, rq);
+
+   if (blk_rq_tagged(rq))
+   blk_queue_end_tag(q, rq);
+
+   BUG_ON(blk_queued_rq(rq));
+
+   return elv_reinsert_request(q, rq);
+}
+EXPORT_SYMBOL(blk_reinsert_request);
+
+/**
+ * blk_reinsert_req_sup() - check whether the scheduler supports
+ *  reinsertion of requests
+ * @q: request queue
+ *
+ * Returns true if the current scheduler supports reinserting
+ * request. False otherwise
+ */
+bool blk_reinsert_req_sup(struct request_queue *q)
+{
+	return q->elevator->type->ops.elevator_reinsert_req_fn ? true : false;
+}
+EXPORT_SYMBOL(blk_reinsert_req_sup);
+
 static void add_acct_request(struct request_queue *q, struct request *rq,
 int where)
 {
diff --git a/block/elevator.c b/block/elevator.c
index 9b1d42b..121a351 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -539,6 +539,36 @@ void elv_requeue_request(struct request_queue *q, struct request *rq)
__elv_add_request(q, rq, ELEVATOR_INSERT_REQUEUE);
 }
 
+/**
+ * elv_reinsert_request() - Insert a request back to the scheduler
+ * @q: request queue where request should be inserted
+ * @rq:	request to be inserted
+ *
+ * This function returns the request back to the scheduler to be
+ * inserted as if it was never dispatched
+ *
+ * Return: 0 on success, error code on failure
+ */
+int elv_reinsert_request(struct request_queue *q, struct request *rq)
+{
+	if (!q->elevator->type->ops.elevator_reinsert_req_fn)
+		return -EPERM;
+	/*
+	 * it already went through dequeue, we need to decrement the
+	 * in_flight count again
+	 */
+	if (blk_account_rq(rq)) {
+		q->in_flight[rq_is_sync(rq)]--;
+		if (rq->cmd_flags & REQ_SORTED)
+			elv_deactivate_rq(q, rq);
+	}
+
+	rq->cmd_flags &= ~REQ_STARTED;
+	q->nr_sorted++;
+
+	return q->elevator->type->ops.elevator_reinsert_req_fn(q, rq);
+}
+
 void elv_drain_elevator(struct request_queue *q)
 {
static int printed;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1756001..e725303 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -722,6 +722,8 @@ extern struct request *blk_get_request(struct request_queue *, int, gfp_t);
 extern struct request *blk_make_request(struct request_queue *, struct bio *,
gfp_t);
 extern void blk_requeue_request(struct request_queue *, struct request *);
+extern int blk_reinsert_request(struct request_queue *q, struct request *rq);
+extern bool blk_reinsert_req_sup(struct request_queue *q);
 extern void blk_add_request_payload(struct request *rq, struct page *page,
unsigned int len);
 extern int blk_rq_check_limits(struct request_queue *q, struct request *rq);
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index c03af76..f70d05d 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -22,6 +22,8 @@ typedef void (elevator_bio_merged_fn) (struct request_queue *,
 typedef int (elevator_dispatch_fn) (struct request_queue *, int);
 
 typedef void (elevator_add_req_fn) (struct request_queue *, struct request *);
+typedef int (elevator_reinsert_req_fn) (struct request_queue *,
+   struct request *);
 typedef struct request *(elevator_request_list_fn) (struct request_queue *, struct request *);
 typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *);
 typedef int (elevator_may_queue_fn) (struct request_queue *, int);
@@ -47,6 +49,8 @@ struct elevator_ops
 
elevator_dispatch_fn *elevator_dispatch_fn;
elevator_add_req_fn *elevator_add_req_fn;
+   elevator_reinsert_req_fn *elevator_reinsert_req_fn;
+
elevator_activate_req_fn *elevator_activate_req_fn;
elevator_deactivate_req_fn *elevator_deactivate_req_fn;
 
@@ -123,6 +127,7 @@ extern void 

[RFC/PATCH v2 2/2] block: Add API for urgent request handling

2012-11-01 Thread tlinder
From: Tatyana Brokhman tlin...@codeaurora.org

This patch adds support in the block & elevator layers for handling
urgent requests.
An urgent request notification is passed to the underlying driver (eMMC
for example) and causes interruption of the current low-priority request
in order to execute the urgent one.

Signed-off-by: Tatyana Brokhman tlin...@codeaurora.org

diff --git a/block/blk-core.c b/block/blk-core.c
index 8881e46..ba11425 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -306,7 +306,23 @@ void __blk_run_queue(struct request_queue *q)
if (unlikely(blk_queue_stopped(q)))
return;
 
-	q->request_fn(q);
+	/*
+	 * Notify the driver of urgent request pending under the following
+	 * conditions:
+	 * 1. There isn't already an urgent request in flight, meaning
+	 * previously notified urgent request completed (!q->notified_urgent)
+	 * 2. The driver and the current scheduler support urgent request
+	 * handling
+	 * 3. There is an urgent request pending in the scheduler
+	 */
+	if (!q->notified_urgent &&
+	    q->elevator->type->ops.elevator_is_urgent_fn &&
+	    q->urgent_request_fn &&
+	    q->elevator->type->ops.elevator_is_urgent_fn(q)) {
+		q->urgent_request_fn(q);
+		q->notified_urgent = true;
+	} else
+		q->request_fn(q);
 }
 EXPORT_SYMBOL(__blk_run_queue);
 
@@ -2227,8 +2243,17 @@ struct request *blk_fetch_request(struct request_queue *q)
struct request *rq;
 
rq = blk_peek_request(q);
-   if (rq)
+   if (rq) {
+   /*
+* Assumption: the next request fetched from scheduler after we
+* notified urgent request pending - will be the urgent one
+*/
+		if (q->notified_urgent && !q->urgent_req) {
+			q->urgent_req = rq;
+   (void)blk_mark_rq_urgent(rq);
+   }
blk_start_request(rq);
+   }
return rq;
 }
 EXPORT_SYMBOL(blk_fetch_request);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 779bb76..8d07e06 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -100,6 +100,18 @@ void blk_queue_lld_busy(struct request_queue *q, lld_busy_fn *fn)
 EXPORT_SYMBOL_GPL(blk_queue_lld_busy);
 
 /**
+ * blk_urgent_request - Set an urgent_request handler function for queue
+ * @q: queue
+ * @fn:	handler for urgent requests
+ *
+ */
+void blk_urgent_request(struct request_queue *q, request_fn_proc *fn)
+{
+	q->urgent_request_fn = fn;
+}
+EXPORT_SYMBOL(blk_urgent_request);
+
+/**
  * blk_set_default_limits - reset limits to default values
  * @lim:  the queue_limits structure to reset
  *
diff --git a/block/blk.h b/block/blk.h
index ca51543..5fba856 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -42,6 +42,7 @@ void blk_add_timer(struct request *);
  */
 enum rq_atomic_flags {
REQ_ATOM_COMPLETE = 0,
+   REQ_ATOM_URGENT = 1,
 };
 
 /*
@@ -58,6 +59,16 @@ static inline void blk_clear_rq_complete(struct request *rq)
 	clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
 }
 
+static inline int blk_mark_rq_urgent(struct request *rq)
+{
+	return test_and_set_bit(REQ_ATOM_URGENT, &rq->atomic_flags);
+}
+
+static inline void blk_clear_rq_urgent(struct request *rq)
+{
+	clear_bit(REQ_ATOM_URGENT, &rq->atomic_flags);
+}
+
 /*
  * Internal elevator interface
  */
diff --git a/block/elevator.c b/block/elevator.c
index 121a351..dc2bc75 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -743,6 +743,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
+   if (blk_mark_rq_urgent(rq)) {
+		q->notified_urgent = false;
+		q->urgent_req = NULL;
+   }
+   blk_clear_rq_urgent(rq);
/*
 * request is released from the driver, io must be done
 */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index e725303..962ee54 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -303,6 +303,7 @@ struct request_queue {
struct request_list root_rl;
 
request_fn_proc *request_fn;
+   request_fn_proc *urgent_request_fn;
make_request_fn *make_request_fn;
prep_rq_fn  *prep_rq_fn;
unprep_rq_fn*unprep_rq_fn;
@@ -391,6 +392,8 @@ struct request_queue {
 #endif
 
struct queue_limits limits;
+   boolnotified_urgent;
+   struct request  *urgent_req;
 
/*
 * sg stuff
@@ -894,6 +897,7 @@ extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn,
 extern struct request_queue *blk_init_queue(request_fn_proc *, spinlock_t *);
 extern struct request_queue *blk_init_allocated_queue(struct request_queue *,
  request_fn_proc *, 

[RFC/PATCH v2 0/2] block:row: Adding support for urgent requests handling

2012-11-01 Thread tlinder
This patch set adds support for handling urgent requests in the
ROW algorithm. It depends on 2 previously uploaded patch sets:
1. ROW scheduling Algorithm
2. Adding support for urgent requests handling

Tatyana Brokhman (2):
  row: Adding support for reinsert already dispatched req
  row: Add support for urgent request handling

 block/row-iosched.c |   78 ++-
 1 files changed, 77 insertions(+), 1 deletions(-)

-- 
1.7.6
--
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. 
Is a member of Code Aurora Forum, hosted by the Linux Foundation



[RFC/PATCH v2 1/2] block:row: Adding support for reinsert already dispatched req

2012-11-01 Thread tlinder
From: Tatyana Brokhman tlin...@codeaurora.org

Add support for reinserting an already dispatched request back into the
scheduler's internal data structures.
The request will be reinserted back into the queue (head) it was
dispatched from, as if it was never dispatched.

Signed-off-by: Tatyana Brokhman tlin...@codeaurora.org

diff --git a/block/row-iosched.c b/block/row-iosched.c
index b7965c6..62789a4 100644
--- a/block/row-iosched.c
+++ b/block/row-iosched.c
@@ -273,6 +273,33 @@ static void row_add_request(struct request_queue *q,
 }
 
 /*
+ * row_reinsert_req() - Reinsert request back to the scheduler
+ * @q: dispatch queue
+ * @rq:	request to add
+ *
+ * Reinsert the given request back to the queue it was
+ * dispatched from as if it was never dispatched.
+ *
+ * Returns 0 on success, error code otherwise
+ */
+static int row_reinsert_req(struct request_queue *q,
+   struct request *rq)
+{
+	struct row_data    *rd = q->elevator->elevator_data;
+	struct row_queue   *rqueue = RQ_ROWQ(rq);
+
+	/* Verify rqueue is legitimate */
+	BUG_ON(rqueue != rd->row_queues[rqueue->prio].rqueue);
+
+	list_add(&rq->queuelist, &rqueue->fifo);
+	rd->nr_reqs[rq_data_dir(rq)]++;
+
+	row_log_rowq(rd, rqueue->prio, "request reinserted");
+
+   return 0;
+}
+
+/**
  * row_remove_request() -  Remove given request from scheduler
  * @q: requests queue
  * @rq:	request to remove
@@ -656,6 +683,7 @@ static struct elevator_type iosched_row = {
.elevator_merge_req_fn  = row_merged_requests,
.elevator_dispatch_fn   = row_dispatch_requests,
.elevator_add_req_fn= row_add_request,
+   .elevator_reinsert_req_fn   = row_reinsert_req,
.elevator_former_req_fn = elv_rb_former_request,
.elevator_latter_req_fn = elv_rb_latter_request,
.elevator_set_req_fn= row_set_request,
-- 
1.7.6
--
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. 
Is a member of Code Aurora Forum, hosted by the Linux Foundation



[RFC/PATCH v2 2/2] block:row: Add support for urgent request handling

2012-11-01 Thread tlinder
From: Tatyana Brokhman tlin...@codeaurora.org

This patch adds support for handling urgent requests.
A ROW queue can be marked as urgent, so that if it was un-served and a
request is added to it, this triggers issuing an urgent request
notification to the mmc driver.

Signed-off-by: Tatyana Brokhman tlin...@codeaurora.org

diff --git a/block/row-iosched.c b/block/row-iosched.c
index 62789a4..fdf7618 100644
--- a/block/row-iosched.c
+++ b/block/row-iosched.c
@@ -27,6 +27,8 @@
 #include <linux/blktrace_api.h>
 #include <linux/jiffies.h>
 
+#include "blk.h"
+
 /*
  * enum row_queue_prio - Priorities of the ROW queues
  *
@@ -58,6 +60,17 @@ static const bool queue_idling_enabled[] = {
false,  /* ROWQ_PRIO_LOW_SWRITE */
 };
 
+/* Flags indicating whether the queue can notify on urgent requests */
+static const bool urgent_queues[] = {
+   true,   /* ROWQ_PRIO_HIGH_READ */
+   true,   /* ROWQ_PRIO_REG_READ */
+   false,  /* ROWQ_PRIO_HIGH_SWRITE */
+   false,  /* ROWQ_PRIO_REG_SWRITE */
+   false,  /* ROWQ_PRIO_REG_WRITE */
+   false,  /* ROWQ_PRIO_LOW_READ */
+   false,  /* ROWQ_PRIO_LOW_SWRITE */
+};
+
 /* Default values for row queues quantums in each dispatch cycle */
 static const int queue_quantum[] = {
100,/* ROWQ_PRIO_HIGH_READ */
@@ -269,7 +282,13 @@ static void row_add_request(struct request_queue *q,
 		rqueue->idle_data.idle_trigger_time =
 			jiffies + msecs_to_jiffies(rd->read_idle.freq);
 	}
-	row_log_rowq(rd, rqueue->prio, "added request");
+	if (urgent_queues[rqueue->prio] &&
+	    row_rowq_unserved(rd, rqueue->prio)) {
+		row_log_rowq(rd, rqueue->prio,
+			     "added urgent req curr_queue = %d",
+			     rd->curr_queue);
+	} else
+		row_log_rowq(rd, rqueue->prio, "added request");
 }
 
 /*
@@ -289,7 +308,12 @@ static int row_reinsert_req(struct request_queue *q,
struct row_queue   *rqueue = RQ_ROWQ(rq);
 
/* Verify rqueue is legitimate */
-	BUG_ON(rqueue != rd->row_queues[rqueue->prio].rqueue);
+	if (rqueue->prio >= ROWQ_MAX_PRIO) {
+		pr_err("\n\nROW BUG: row_reinsert_req() rqueue->prio = %d\n",
+		       rqueue->prio);
+		blk_dump_rq_flags(rq, "");
+		return -EIO;
+	}
 
 	list_add(&rq->queuelist, &rqueue->fifo);
 	rd->nr_reqs[rq_data_dir(rq)]++;
@@ -299,6 +323,29 @@ static int row_reinsert_req(struct request_queue *q,
return 0;
 }
 
+/*
+ * row_urgent_pending() - Return TRUE if there is an urgent
+ *   request on scheduler
+ * @q: dispatch queue
+ *
+ */
+static bool row_urgent_pending(struct request_queue *q)
+{
+	struct row_data *rd = q->elevator->elevator_data;
+	int i;
+
+	for (i = 0; i < ROWQ_MAX_PRIO; i++)
+		if (urgent_queues[i] && row_rowq_unserved(rd, i) &&
+		    !list_empty(&rd->row_queues[i].rqueue.fifo)) {
+			row_log_rowq(rd, i,
+				     "Urgent request pending (curr=%i)",
+				     rd->curr_queue);
+			return true;
+		}
+
+	return false;
+}
+
 /**
  * row_remove_request() -  Remove given request from scheduler
  * @q: requests queue
@@ -684,6 +731,7 @@ static struct elevator_type iosched_row = {
.elevator_dispatch_fn   = row_dispatch_requests,
.elevator_add_req_fn= row_add_request,
.elevator_reinsert_req_fn   = row_reinsert_req,
+   .elevator_is_urgent_fn  = row_urgent_pending,
.elevator_former_req_fn = elv_rb_former_request,
.elevator_latter_req_fn = elv_rb_latter_request,
.elevator_set_req_fn= row_set_request,
-- 
1.7.6
--
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc. 
Is a member of Code Aurora Forum, hosted by the Linux Foundation



[PATCH v2] mmc: fix async request mechanism for sequential read scenarios

2012-11-01 Thread Konstantin Dorfman
When the current request is running on the bus and the next request
fetched by mmcqd is NULL, the mmc context (the mmcqd thread) gets
blocked until the current request completes. This means that if a new
request comes in while the mmcqd thread is blocked, this new request
cannot be prepared in parallel with the ongoing request. This may add
latency before the new request is started.

This change makes it possible to wake up the MMC thread (which is
waiting for the currently running request to complete). Once the
MMC thread is woken up, the new request can be fetched and prepared in
parallel with the running request, which means the new request can
be started immediately after the current one completes.

With this change, read throughput is improved by 16%.

Signed-off-by: Konstantin Dorfman kdorf...@codeaurora.org
---
 drivers/mmc/card/block.c |   26 +---
 drivers/mmc/card/queue.c |   26 ++-
 drivers/mmc/card/queue.h |3 +
 drivers/mmc/core/core.c  |  102 -
 include/linux/mmc/card.h |   12 +
 include/linux/mmc/core.h |   15 +++
 6 files changed, 163 insertions(+), 21 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 172a768..0e9bedb 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -112,17 +112,6 @@ struct mmc_blk_data {
 
 static DEFINE_MUTEX(open_lock);
 
-enum mmc_blk_status {
-   MMC_BLK_SUCCESS = 0,
-   MMC_BLK_PARTIAL,
-   MMC_BLK_CMD_ERR,
-   MMC_BLK_RETRY,
-   MMC_BLK_ABORT,
-   MMC_BLK_DATA_ERR,
-   MMC_BLK_ECC_ERR,
-   MMC_BLK_NOMEDIUM,
-};
-
 module_param(perdev_minors, int, 0444);
 MODULE_PARM_DESC(perdev_minors, Minors numbers to allocate per device);
 
@@ -1225,6 +1214,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 
 	mqrq->mmc_active.mrq = &brq->mrq;
 	mqrq->mmc_active.err_check = mmc_blk_err_check;
+	mqrq->mmc_active.mrq->context_info = &mq->context_info;
 
mmc_queue_bounce_pre(mqrq);
 }
@@ -1284,9 +1274,12 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 		areq = &mq->mqrq_cur->mmc_active;
 	} else
 		areq = NULL;
-	areq = mmc_start_req(card->host, areq, (int *) &status);
-	if (!areq)
+	areq = mmc_start_req(card->host, areq, (int *)&status);
+	if (!areq) {
+		if (status == MMC_BLK_NEW_REQUEST)
+			mq->flags |= MMC_QUEUE_NEW_REQUEST;
 		return 0;
+	}
 
mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
 	brq = &mq_rq->brq;
@@ -1295,6 +1288,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
mmc_queue_bounce_post(mq_rq);
 
switch (status) {
+   case MMC_BLK_NEW_REQUEST:
+   BUG_ON(1); /* should never get here */
case MMC_BLK_SUCCESS:
case MMC_BLK_PARTIAL:
/*
@@ -1367,7 +1362,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 * prepare it again and resend.
 */
mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq);
-			mmc_start_req(card->host, &mq_rq->mmc_active, NULL);
+			mmc_start_req(card->host, &mq_rq->mmc_active,
+				      (int *)&status);
}
} while (ret);
 
@@ -1406,6 +1401,7 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
ret = 0;
goto out;
}
+	mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
 
 	if (req && req->cmd_flags & REQ_DISCARD) {
/* complete ongoing async transfer before issuing discard */
@@ -1426,7 +1422,7 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
}
 
 out:
-	if (!req)
+	if (!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST))
 		/* release host only when there are no more requests */
 		mmc_release_host(card->host);
return ret;
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index fadf52e..7375476 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -22,7 +22,6 @@
 
 #define MMC_QUEUE_BOUNCESZ 65536
 
-#define MMC_QUEUE_SUSPENDED	(1 << 0)
 
 /*
  * Prepare a MMC request. This just filters out odd stuff.
@@ -63,11 +62,17 @@ static int mmc_queue_thread(void *d)
 		set_current_state(TASK_INTERRUPTIBLE);
 		req = blk_fetch_request(q);
 		mq->mqrq_cur->req = req;
+		if (!req && mq->mqrq_prev->req)
+			mq->context_info.is_waiting_last_req = true;
 		spin_unlock_irq(q->queue_lock);
 
 		if (req || mq->mqrq_prev->req) {
 			set_current_state(TASK_RUNNING);
   
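The heart of the change is that the context waiting for the running
transfer can now be woken by either of two events: "current request
done" or "new request arrived". A rough sketch of that wait loop is
below; the field names follow the context_info usage visible in the
diff, but the wait-queue plumbing is an assumption since the
mmc/core/core.c hunk is truncated above.

/* Sketch of the dual-event wait; not the literal patch code. */
#include <linux/wait.h>

struct mmc_ctx_sketch {
	bool is_done_rcv;		/* current request completed */
	bool is_new_req;		/* mmcqd fetched a new request */
	bool is_waiting_last_req;	/* set by mmcqd before blocking */
	wait_queue_head_t wait;
};

static int wait_for_data_req_done(struct mmc_ctx_sketch *ctx)
{
	if (wait_event_interruptible(ctx->wait,
				     ctx->is_done_rcv || ctx->is_new_req))
		return -EINTR;		/* interrupted by a signal */

	if (ctx->is_done_rcv) {
		ctx->is_done_rcv = false;
		return 0;		/* complete the current request */
	}

	/* Woken by a new request: return so the caller can prepare it
	 * in parallel (the MMC_BLK_NEW_REQUEST path in block.c). */
	ctx->is_new_req = false;
	return -EAGAIN;
}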

Re: [PATCH 2/4 v4] MMC/SD: Add callback function to detect card

2012-11-01 Thread Johan Rudholm
Hi Jerry,

2012/10/30  r66...@freescale.com:
 From: Jerry Huang chang-ming.hu...@freescale.com

 In order to check whether the card has been removed, the function
 mmc_send_status() will send command CMD13 to card and ask the card
 to send its status register to sdhc driver, which will generate
 many interrupts repeatedly and make the system performance bad.

 Therefore, add callback function get_cd() to check whether
 the card has been removed when the driver has this callback function.

 If the card is present, 1 will return, if the card is absent, 0 will return.
 If the controller will not support this feature, -ENOSYS will return.

In what cases is the performance affected by this? I believe this
function is called only on some errors and on detect?

Also, we've seen cases where the card detect GPIO cannot be trusted to
tell whether the card is present or not, for instance in the corner
case when an SD-card is removed very slowly...

Kind regards, Johan


Re: sdhci vccq regulator support drops UHS-I flags

2012-11-01 Thread Daniel Drake
On Tue, Oct 30, 2012 at 3:07 PM, Philip Rakity prak...@nvidia.com wrote:
 The intent is if there is no regulator should hit this if statement.

 +   if (IS_ERR(host->vqmmc)) {
 +       pr_info("%s: no vqmmc regulator found\n", mmc_hostname(mmc));
 +       host->vqmmc = NULL;

There's a bug here then.
When there is no regulator, host->vqmmc is NULL.
NULL is not an error recognised by IS_ERR.

Double checked this with a simple test:

#include <stdio.h>

#define MAX_ERRNO	4095
#define IS_ERR_VALUE(x) ((x) >= (unsigned long)-MAX_ERRNO)

static inline long IS_ERR(const void *ptr)
{
	return IS_ERR_VALUE((unsigned long)ptr);
}

int main(void)
{
	if (IS_ERR(NULL))
		printf("NULL is error\n");
	else
		printf("NULL is not error\n");
}



result: NULL is not error
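In other words, once host->vqmmc has been reset to NULL, any later
IS_ERR() test treats it as a valid regulator. One way to keep the "no
regulator" and "real regulator" states distinguishable is the stock
IS_ERR_OR_NULL() helper from <linux/err.h>; the surrounding function is
only an illustration, not the sdhci code.

/* Illustrative guard: skip both NULL and ERR_PTR() values. */
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/regulator/consumer.h>

static void enable_vqmmc_sketch(struct regulator *vqmmc)
{
	if (IS_ERR_OR_NULL(vqmmc))
		return;			/* absent or lookup failed */

	if (regulator_enable(vqmmc))	/* vqmmc is a real regulator */
		pr_warn("vqmmc enable failed\n");
}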


RE: [PATCH 2/4 v4] MMC/SD: Add callback function to detect card

2012-11-01 Thread Huang Changming-R66093
Hi, Johan,
When the quirk SDHCI_QUIRK_BROKEN_CARD_DETECTION is set, the driver will
poll the card status with the command CMD13 periodically; many
interrupts will then be generated, which affects performance.

Yes, in some cases the card-detect GPIO can't be trusted, so I implement
this callback only in the eSDHC-of driver. That is to say, the callback
is implemented only when the platform supports it; if not, the driver
continues to send the command CMD13.

 Hi Jerry,
 
 2012/10/30  r66...@freescale.com:
  From: Jerry Huang chang-ming.hu...@freescale.com
 
  In order to check whether the card has been removed, the function
  mmc_send_status() will send command CMD13 to card and ask the card to
  send its status register to sdhc driver, which will generate many
  interrupts repeatedly and make the system performance bad.
 
  Therefore, add callback function get_cd() to check whether the card
  has been removed when the driver has this callback function.
 
  If the card is present, 1 will return, if the card is absent, 0 will
 return.
  If the controller will not support this feature, -ENOSYS will return.
 
 In what cases is the performance affected by this? I believe this
 function is called only on some errors and on detect?
 
 Also, we've seen cases where the card detect GPIO cannot be trusted to
 tell wether the card is present or not, for instance in the corner case
 when an SD-card is removed very slowly...
 
 Kind regards, Johan


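For reference, a minimal sketch of the kind of get_cd() callback being
discussed: reading a platform card-detect line instead of polling the
card with CMD13. The GPIO number and active-low polarity are
illustrative assumptions, not part of Jerry's patch.

/* Illustrative get_cd(): report card presence from a GPIO. */
#include <linux/errno.h>
#include <linux/gpio.h>
#include <linux/mmc/host.h>

#define MY_CD_GPIO	42	/* hypothetical card-detect line */

static int my_esdhc_get_cd(struct mmc_host *mmc)
{
	if (!gpio_is_valid(MY_CD_GPIO))
		return -ENOSYS;	/* platform can't report presence;
				   fall back to CMD13 polling */

	/* Active-low card detect: 0 on the pin means card present. */
	return !gpio_get_value(MY_CD_GPIO);
}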


RE: [PATCH V3] mmc: omap_hsmmc: Enable HSPE bit for high speed cards

2012-11-01 Thread Hebbar, Gururaja
On Thu, Nov 01, 2012 at 01:46:26, Balbi, Felipe wrote:
 Hi,
 
 On Thu, Nov 01, 2012 at 01:21:36AM +0530, Venkatraman S wrote:
  On Wed, Oct 31, 2012 at 5:56 PM, Felipe Balbi ba...@ti.com wrote:
   Hi,
  
   On Wed, Oct 31, 2012 at 05:27:36PM +0530, Hebbar, Gururaja wrote:
   HSMMC IP on AM33xx need a special setting to handle High-speed cards.
   Other platforms like TI81xx, OMAP4 may need this as-well. This depends
   on the HSMMC IP timing closure done for the high speed cards.
  
   From AM335x TRM (SPRUH73F - 18.3.12 Output Signals Generation)
  
   The MMC/SD/SDIO output signals can be driven on either falling edge or
   rising edge depending on the SD_HCTL[2] HSPE bit. This feature allows
   to reach better timing performance, and thus to increase data transfer
   frequency.
  
   There are few pre-requisites for enabling the HSPE bit
   - Controller should support High-Speed-Enable Bit and
   - Controller should not be using DDR Mode and
   - Controller should advertise that it supports High Speed in
 capabilities register and
    - MMC/SD clock coming out of controller > 25MHz
  
   Note:
   The implementation reuses the output of calc_divisor() so as to reduce
   code addition.
  
   Signed-off-by: Hebbar, Gururaja gururaja.heb...@ti.com
  
   this looks good to my eyes, hopefully I haven't missed anything:
  
   Reviewed-by: Felipe Balbi ba...@ti.com
  
  
  Except for the excessively verbose comments which are just duplicating the 
  code,
  Quote
   +  * Enable High-Speed Support
   +  * Pre-Requisites
   +  *  - Controller should support High-Speed-Enable Bit
   +  *  - Controller should not be using DDR Mode
   +  *  - Controller should advertise that it supports High Speed
   +  *in capabilities register
   +  *  - MMC/SD clock coming out of controller > 25MHz
   +  */
  /Quote
  
  I'm ok with this patch as well. I'm putting a few patches under test
  including this one,
  and will send it to Chris as part of that series.
  I'll strip out the above mentioned comments, unless there are any
  objections.
 
 please don't. Detailing the pre-requisites for getting HSP mode working
 isn't bad at all. Should someone decide to change the behavior and ends
 up breaking it, the comment will help putting things back together.
 
 my 2 cents, you've got the final decision though.

Same here. The description is required in the commit message since it
will help during git bisect.

 
 -- 
 balbi
 


Regards, 
Gururaja
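For context, the prerequisites discussed above collapse into a single
condition on the HCTL register value. A sketch follows; SD_HCTL[2] =
HSPE comes from the quoted TRM, while the helper and its inputs are
illustrative (the real patch derives them from the controller
capabilities and the calc_divisor() result).

/* Illustrative HSPE decision; not the omap_hsmmc patch itself. */
#include <linux/types.h>

#define HSPE	(1 << 2)	/* SD_HCTL[2], per the AM335x TRM */

static u32 hctl_with_hspe(u32 hctl, bool ip_has_hspe, bool ddr_mode,
			  bool caps_highspeed, unsigned long card_clk_hz)
{
	hctl &= ~HSPE;

	/* Enable only when the IP has the bit, DDR mode is off, the
	 * controller advertises high speed, and the card clock coming
	 * out of the controller is above 25 MHz. */
	if (ip_has_hspe && !ddr_mode && caps_highspeed &&
	    card_clk_hz > 25000000UL)
		hctl |= HSPE;

	return hctl;
}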