Re: [patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-09-21 Thread Mathieu Desnoyers
* Steven Rostedt ([EMAIL PROTECTED]) wrote:
> On Tue, Sep 18, 2007 at 05:13:28PM -0400, Mathieu Desnoyers wrote:
> > +void blk_probe_disarm(void)
> > +{
> > +   int i, err;
> > +
> > +   for (i = 0; i < ARRAY_SIZE(probe_array); i++) {
> > +   err = marker_disarm(probe_array[i].name);
> > +   BUG_ON(err);
> > +   err = IS_ERR(marker_probe_unregister(probe_array[i].name));
> > +   BUG_ON(err);
> > +   }
> > +}
> 
> As well as changing these to WARN_ON.
> 
Yep.
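
i.e. keeping the same loop, but letting a failed disarm leave a warning
and a backtrace in the log instead of halting the kernel:

void blk_probe_disarm(void)
{
	int i, err;

	for (i = 0; i < ARRAY_SIZE(probe_array); i++) {
		err = marker_disarm(probe_array[i].name);
		WARN_ON(err);
		err = IS_ERR(marker_probe_unregister(probe_array[i].name));
		WARN_ON(err);
	}
}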

> -- Steve
> 

-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-09-20 Thread Steven Rostedt
On Tue, Sep 18, 2007 at 05:13:28PM -0400, Mathieu Desnoyers wrote:
> +void blk_probe_disarm(void)
> +{
> + int i, err;
> +
> + for (i = 0; i < ARRAY_SIZE(probe_array); i++) {
> + err = marker_disarm(probe_array[i].name);
> + BUG_ON(err);
> + err = IS_ERR(marker_probe_unregister(probe_array[i].name));
> + BUG_ON(err);
> + }
> +}

As well as changing these to WARN_ON.

-- Steve

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-09-18 Thread Mathieu Desnoyers
Here is the first stage of a port of blktrace to the Linux Kernel Markers.
The advantage of this port is that it minimizes the impact on the running
system when blktrace is not active.

A few remarks: this patch has the positive effect of removing some code
from the block io tracing hot paths, minimizing the i-cache impact in a
system where the io tracing is compiled in but inactive.

It also moves the blk tracing code from a header (and therefore from the
body of the instrumented functions) to a separate C file.

On the other hand, as soon as one device has to be traced, all devices have
to execute the tracing function call when they pass by the instrumentation
site. This is slower than the previous inline function, which tested the
condition quickly.

It does not make the code smaller, since I left all the specialized
tracing functions for requests, bio, generic and remap, which would go away
once a generic infrastructure is in place to serialize the information
passed to the marker. This is mostly why I consider it a step towards the
full improvements that the markers could bring.
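
To make the mechanics concrete: at each instrumentation site the inline
blk_add_trace_*() call is replaced by a trace_mark() with a printf-style
format string, and the serializing work moves into a varargs probe in
block/blktrace.c. A schematic probe is sketched below; the prototype and
the final hand-off are approximations, not the literal code of this patch:

#include <stdarg.h>

/*
 * Sketch of one varargs probe handler.  The format string registered
 * with the marker ("%p %p" for request events) tells the probe how to
 * pull its arguments back out with va_arg().
 */
static void blk_probe_request(const char *fmt, ...)
{
	va_list args;
	struct request_queue *q;
	struct request *rq;

	va_start(args, fmt);
	q  = va_arg(args, struct request_queue *);
	rq = va_arg(args, struct request *);
	va_end(args);

	/* placeholder: the real probe calls into the blktrace core */
	pr_debug("blk marker: q=%p rq=%p\n", q, rq);
}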

Signed-off-by: Mathieu Desnoyers <[EMAIL PROTECTED]>
Acked-by: "Frank Ch. Eigler" <[EMAIL PROTECTED]>
CC: Jens Axboe <[EMAIL PROTECTED]>
---

 block/Kconfig|1 
 block/blktrace.c |  343 ++-
 block/elevator.c |6 
 block/ll_rw_blk.c|   35 ++--
 drivers/block/cciss.c|4 
 drivers/md/dm.c  |   14 -
 fs/bio.c |6 
 include/linux/blktrace_api.h |  145 +-
 mm/bounce.c  |4 
 mm/highmem.c |2 
 10 files changed, 388 insertions(+), 172 deletions(-)

Index: linux-2.6-lttng/block/elevator.c
===================================================================
--- linux-2.6-lttng.orig/block/elevator.c   2007-09-18 10:08:11.000000000 -0400
+++ linux-2.6-lttng/block/elevator.c   2007-09-18 13:18:26.000000000 -0400
@@ -32,7 +32,7 @@
 #include <linux/init.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
-#include <linux/blktrace_api.h>
+#include <linux/marker.h>
 #include <linux/hash.h>
 
 #include <asm/uaccess.h>
@@ -548,7 +548,7 @@ void elv_insert(struct request_queue *q,
unsigned ordseq;
int unplug_it = 1;
 
-   blk_add_trace_rq(q, rq, BLK_TA_INSERT);
+   trace_mark(blk_request_insert, "%p %p", q, rq);
 
rq->q = q;
 
@@ -735,7 +735,7 @@ struct request *elv_next_request(struct 
 * not be passed by new incoming requests
 */
rq->cmd_flags |= REQ_STARTED;
-   blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
+   trace_mark(blk_request_issue, "%p %p", q, rq);
}
 
if (!q->boundary_rq || q->boundary_rq == rq) {
Index: linux-2.6-lttng/block/ll_rw_blk.c
===================================================================
--- linux-2.6-lttng.orig/block/ll_rw_blk.c   2007-09-18 10:09:51.000000000 -0400
+++ linux-2.6-lttng/block/ll_rw_blk.c   2007-09-18 13:18:26.000000000 -0400
@@ -28,6 +28,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
+#include <linux/marker.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
 #include <linux/scatterlist.h>
@@ -1570,7 +1571,7 @@ void blk_plug_device(struct request_queu
 
	if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
		mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-   blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+   trace_mark(blk_plug_device, "%p %p %d", q, NULL, 0);
}
 }
 
@@ -1636,7 +1637,7 @@ static void blk_backing_dev_unplug(struc
 * devices don't necessarily have an ->unplug_fn defined
 */
if (q->unplug_fn) {
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1648,7 +1649,7 @@ static void blk_unplug_work(struct work_
struct request_queue *q =
container_of(work, struct request_queue, unplug_work);
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1658,7 +1659,7 @@ static void blk_unplug_timeout(unsigned 
 {
struct request_queue *q = (struct request_queue *)data;
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+   trace_mark(blk_pdu_unplug_timer, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
	kblockd_schedule_work(&q->unplug_work);
@@ -2178,7 +2179,7 @@ rq_starved:

rq_init(q, rq);
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+   trace_mark(blk_get_request, "%p %p %d", q, bio, rw);
 out:
return rq;
 }
@@ -2208,7 +2209,7 @@ static struct request *get_request_wait(
if (!rq) {
struct io_context 

[patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-09-17 Thread Mathieu Desnoyers
Here is the first stage of a port of blktrace to the Linux Kernel Markers.
The advantage of this port is that it minimizes the impact on the running
system when blktrace is not active.

A few remarks: this patch has the positive effect of removing some code
from the block io tracing hot paths, minimizing the i-cache impact in a
system where the io tracing is compiled in but inactive.

It also moves the blk tracing code from a header (and therefore from the
body of the instrumented functions) to a separate C file.

On the other hand, as soon as one device has to be traced, all devices have
to execute the tracing function call when they pass by the instrumentation
site. This is slower than the previous inline function, which tested the
condition quickly.

It does not make the code smaller, since I left all the specialized
tracing functions for requests, bio, generic and remap, which would go away
once a generic infrastructure is in place to serialize the information
passed to the marker. This is mostly why I consider it a step towards the
full improvements that the markers could bring.

Signed-off-by: Mathieu Desnoyers <[EMAIL PROTECTED]>
Acked-by: "Frank Ch. Eigler" <[EMAIL PROTECTED]>
CC: Jens Axboe <[EMAIL PROTECTED]>
---

 block/Kconfig|1 
 block/blktrace.c |  343 ++-
 block/elevator.c |6 
 block/ll_rw_blk.c|   35 ++--
 drivers/block/cciss.c|4 
 drivers/md/dm.c  |   14 -
 fs/bio.c |6 
 include/linux/blktrace_api.h |  145 +-
 mm/bounce.c  |4 
 mm/highmem.c |2 
 10 files changed, 388 insertions(+), 172 deletions(-)

Index: linux-2.6-lttng/block/elevator.c
===================================================================
--- linux-2.6-lttng.orig/block/elevator.c   2007-09-17 14:02:48.000000000 -0400
+++ linux-2.6-lttng/block/elevator.c   2007-09-17 14:03:12.000000000 -0400
@@ -32,7 +32,7 @@
 #include <linux/init.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
-#include <linux/blktrace_api.h>
+#include <linux/marker.h>
 #include <linux/hash.h>
 
 #include <asm/uaccess.h>
@@ -548,7 +548,7 @@ void elv_insert(struct request_queue *q,
unsigned ordseq;
int unplug_it = 1;
 
-   blk_add_trace_rq(q, rq, BLK_TA_INSERT);
+   trace_mark(blk_request_insert, "%p %p", q, rq);
 
rq->q = q;
 
@@ -735,7 +735,7 @@ struct request *elv_next_request(struct 
 * not be passed by new incoming requests
 */
rq->cmd_flags |= REQ_STARTED;
-   blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
+   trace_mark(blk_request_issue, "%p %p", q, rq);
}
 
if (!q->boundary_rq || q->boundary_rq == rq) {
Index: linux-2.6-lttng/block/ll_rw_blk.c
===================================================================
--- linux-2.6-lttng.orig/block/ll_rw_blk.c   2007-09-17 14:02:48.000000000 -0400
+++ linux-2.6-lttng/block/ll_rw_blk.c   2007-09-17 14:03:12.000000000 -0400
@@ -28,6 +28,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
+#include <linux/marker.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
 #include <linux/scatterlist.h>
@@ -1559,7 +1560,7 @@ void blk_plug_device(struct request_queu
 
	if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
		mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-   blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+   trace_mark(blk_plug_device, "%p %p %d", q, NULL, 0);
}
 }
 
@@ -1625,7 +1626,7 @@ static void blk_backing_dev_unplug(struc
 * devices don't necessarily have an ->unplug_fn defined
 */
if (q->unplug_fn) {
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1637,7 +1638,7 @@ static void blk_unplug_work(struct work_
struct request_queue *q =
container_of(work, struct request_queue, unplug_work);
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1647,7 +1648,7 @@ static void blk_unplug_timeout(unsigned 
 {
struct request_queue *q = (struct request_queue *)data;
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+   trace_mark(blk_pdu_unplug_timer, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
	kblockd_schedule_work(&q->unplug_work);
@@ -2160,7 +2161,7 @@ rq_starved:

rq_init(q, rq);
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+   trace_mark(blk_get_request, "%p %p %d", q, bio, rw);
 out:
return rq;
 }
@@ -2190,7 +2191,7 @@ static struct request *get_request_wait(
if (!rq) {
struct io_context 

Re: [patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-08-30 Thread Mathieu Desnoyers
* Christoph Hellwig ([EMAIL PROTECTED]) wrote:
> On Mon, Aug 27, 2007 at 12:05:44PM -0400, Mathieu Desnoyers wrote:
> > Here is the first stage of a port of blktrace to the Linux Kernel Markers.
> > The advantage of this port is that it minimizes the impact on the running
> > system when blktrace is not active.
> > 
> > A few remarks: this patch has the positive effect of removing some code
> > from the block io tracing hot paths, minimizing the i-cache impact in a
> > system where the io tracing is compiled in but inactive.
> > 
> > It also moves the blk tracing code from a header (and therefore from the
> > body of the instrumented functions) to a separate C file.
> > 
> > On the other hand, as soon as one device has to be traced, all devices have
> > to execute the tracing function call when they pass by the instrumentation
> > site. This is slower than the previous inline function, which tested the
> > condition quickly.
> > 
> > It does not make the code smaller, since I left all the specialized
> > tracing functions for requests, bio, generic and remap, which would go away
> > once a generic infrastructure is in place to serialize the information
> > passed to the marker. This is mostly why I consider it a step towards the
> > full improvements that the markers could bring.
> 
> I like this as it moves the whole tracing code out of line.  It would
> be nice if we could make blktrace a module with this, but we'd need
> to change the interface away from an ioctl on the block device for that.
> 
> Btw, something that really shows here, and what I noticed in my sputrace
> as well, is that there is a lot of boilerplate code due to the varargs
> trace handlers.  We really need some way to auto-generate the boilerplate
> for the trace function to avoid coding this up everywhere.

Or we can use a vprintk-like function to parse the format string and
serialize the information into trace buffers. I prefer this latter
option because, overall, it localizes the probes in a few small
functions instead of duplicating the memory and instruction-cache
footprint across multiple serializing functions.
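
Roughly along these lines (a sketch only; the probe prototype is
hypothetical, and the trace_buffer_write_*() helpers are placeholders
for the tracer's buffer API):

#include <stdarg.h>

extern void trace_buffer_write_int(int v);	/* placeholder */
extern void trace_buffer_write_ptr(void *p);	/* placeholder */

/*
 * One generic probe walks the marker format string the way vprintk
 * does and copies each argument into the trace buffer, instead of one
 * hand-written serializing function per event.
 */
static void generic_serialize_probe(const char *fmt, ...)
{
	va_list args;
	const char *p;

	va_start(args, fmt);
	for (p = fmt; *p; p++) {
		if (*p != '%' || p[1] == '\0')
			continue;
		switch (*++p) {
		case 'd':
			trace_buffer_write_int(va_arg(args, int));
			break;
		case 'p':
			trace_buffer_write_ptr(va_arg(args, void *));
			break;
		default:	/* extend with other conversions */
			break;
		}
	}
	va_end(args);
}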

I have the code ready, but I do not want to flood LKML with patches
either.

Mathieu


-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-08-30 Thread Christoph Hellwig
On Mon, Aug 27, 2007 at 12:05:44PM -0400, Mathieu Desnoyers wrote:
> Here is the first stage of a port of blktrace to the Linux Kernel Markers.
> The advantage of this port is that it minimizes the impact on the running
> system when blktrace is not active.
> 
> A few remarks: this patch has the positive effect of removing some code
> from the block io tracing hot paths, minimizing the i-cache impact in a
> system where the io tracing is compiled in but inactive.
> 
> It also moves the blk tracing code from a header (and therefore from the
> body of the instrumented functions) to a separate C file.
> 
> On the other hand, as soon as one device has to be traced, all devices have
> to execute the tracing function call when they pass by the instrumentation
> site. This is slower than the previous inline function, which tested the
> condition quickly.
> 
> It does not make the code smaller, since I left all the specialized
> tracing functions for requests, bio, generic and remap, which would go away
> once a generic infrastructure is in place to serialize the information
> passed to the marker. This is mostly why I consider it a step towards the
> full improvements that the markers could bring.

I like this as it moves the whole tracing code out of line.  It would
be nice if we could make blktrace a module with this, but we'd need
to change the interface away from an ioctl on the block device for that.

Btw, something that really shows here, and what I noticed in my sputrace
as well, is that there is a lot of boilerplate code due to the varargs
trace handlers.  We really need some way to auto-generate the boilerplate
for the trace function to avoid coding this up everywhere.
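
For illustration, the kind of stub such auto-generation could produce
(a sketch only, with hypothetical names; nothing like this exists in
the patch as posted):

#include <stdarg.h>

/*
 * Hypothetical helper macro: expands to a varargs probe handler that
 * unpacks two arguments and hands them to a typed callback, so each
 * event does not need hand-written unpacking code.
 */
#define DEFINE_PROBE_2ARG(name, type1, type2, handler)		\
static void name(const char *fmt, ...)				\
{								\
	va_list args;						\
	type1 a1;						\
	type2 a2;						\
								\
	va_start(args, fmt);					\
	a1 = va_arg(args, type1);				\
	a2 = va_arg(args, type2);				\
	va_end(args);						\
	handler(a1, a2);					\
}

/* typed handler, implemented elsewhere (hypothetical) */
void blk_trace_request_insert(struct request_queue *q,
			      struct request *rq);

DEFINE_PROBE_2ARG(blk_probe_request_insert,
		  struct request_queue *, struct request *,
		  blk_trace_request_insert)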
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-08-27 Thread Mathieu Desnoyers
Here is the first stage of a port of blktrace to the Linux Kernel Markers.
The advantage of this port is that it minimizes the impact on the running
system when blktrace is not active.

A few remarks: this patch has the positive effect of removing some code
from the block io tracing hot paths, minimizing the i-cache impact in a
system where the io tracing is compiled in but inactive.

It also moves the blk tracing code from a header (and therefore from the
body of the instrumented functions) to a separate C file.

On the other hand, as soon as one device has to be traced, all devices have
to execute the tracing function call when they pass by the instrumentation
site. This is slower than the previous inline function, which tested the
condition quickly.

It does not make the code smaller, since I left all the specialized
tracing functions for requests, bio, generic and remap, which would go away
once a generic infrastructure is in place to serialize the information
passed to the marker. This is mostly why I consider it a step towards the
full improvements that the markers could bring.

Signed-off-by: Mathieu Desnoyers <[EMAIL PROTECTED]>
Reviewed-by: "Frank Ch. Eigler" <[EMAIL PROTECTED]>
CC: Jens Axboe <[EMAIL PROTECTED]>
---

 block/Kconfig|1 
 block/blktrace.c |  343 ++-
 block/elevator.c |6 
 block/ll_rw_blk.c|   35 ++--
 drivers/block/cciss.c|4 
 drivers/md/dm.c  |   14 -
 fs/bio.c |6 
 include/linux/blktrace_api.h |  145 +-
 mm/bounce.c  |4 
 mm/highmem.c |2 
 10 files changed, 388 insertions(+), 172 deletions(-)

Index: linux-2.6-lttng/block/elevator.c
===================================================================
--- linux-2.6-lttng.orig/block/elevator.c   2007-08-24 17:21:23.000000000 -0400
+++ linux-2.6-lttng/block/elevator.c   2007-08-24 17:48:22.000000000 -0400
@@ -32,7 +32,7 @@
 #include <linux/init.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
-#include <linux/blktrace_api.h>
+#include <linux/marker.h>
 #include <linux/hash.h>
 
 #include <asm/uaccess.h>
@@ -548,7 +548,7 @@ void elv_insert(struct request_queue *q,
unsigned ordseq;
int unplug_it = 1;
 
-   blk_add_trace_rq(q, rq, BLK_TA_INSERT);
+   trace_mark(blk_request_insert, "%p %p", q, rq);
 
rq->q = q;
 
@@ -727,7 +727,7 @@ struct request *elv_next_request(struct 
 * not be passed by new incoming requests
 */
rq->cmd_flags |= REQ_STARTED;
-   blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
+   trace_mark(blk_request_issue, "%p %p", q, rq);
}
 
if (!q->boundary_rq || q->boundary_rq == rq) {
Index: linux-2.6-lttng/block/ll_rw_blk.c
===================================================================
--- linux-2.6-lttng.orig/block/ll_rw_blk.c   2007-08-24 17:29:47.000000000 -0400
+++ linux-2.6-lttng/block/ll_rw_blk.c   2007-08-24 18:01:12.000000000 -0400
@@ -28,6 +28,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
+#include <linux/marker.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
 
@@ -1555,7 +1556,7 @@ void blk_plug_device(struct request_queu
 
	if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
		mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-   blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+   trace_mark(blk_plug_device, "%p %p %d", q, NULL, 0);
}
 }
 
@@ -1621,7 +1622,7 @@ static void blk_backing_dev_unplug(struc
 * devices don't necessarily have an ->unplug_fn defined
 */
if (q->unplug_fn) {
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1633,7 +1634,7 @@ static void blk_unplug_work(struct work_
struct request_queue *q =
container_of(work, struct request_queue, unplug_work);
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1643,7 +1644,7 @@ static void blk_unplug_timeout(unsigned 
 {
struct request_queue *q = (struct request_queue *)data;
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+   trace_mark(blk_pdu_unplug_timer, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
	kblockd_schedule_work(&q->unplug_work);
@@ -2156,7 +2157,7 @@ rq_starved:

rq_init(q, rq);
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+   trace_mark(blk_get_request, "%p %p %d", q, bio, rw);
 out:
return rq;
 }
@@ -2186,7 +2187,7 @@ static struct request *get_request_wait(
if (!rq) {
struct io_context *ioc;
 

[patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-08-20 Thread Mathieu Desnoyers
Here is the first stage of a port of blktrace to the Linux Kernel Markers.
The advantage of this port is that it minimizes the impact on the running
system when blktrace is not active.

A few remarks: this patch has the positive effect of removing some code
from the block io tracing hot paths, minimizing the i-cache impact in a
system where the io tracing is compiled in but inactive.

It also moves the blk tracing code from a header (and therefore from the
body of the instrumented functions) to a separate C file.

On the other hand, as soon as one device has to be traced, all devices have
to execute the tracing function call when they pass by the instrumentation
site. This is slower than the previous inline function, which tested the
condition quickly.

It does not make the code smaller, since I left all the specialized
tracing functions for requests, bio, generic and remap, which would go away
once a generic infrastructure is in place to serialize the information
passed to the marker. This is mostly why I consider it a step towards the
full improvements that the markers could bring.

Signed-off-by: Mathieu Desnoyers <[EMAIL PROTECTED]>
CC: Jens Axboe <[EMAIL PROTECTED]>
---

 block/Kconfig|1 
 block/blktrace.c |  342 ++-
 block/elevator.c |6 
 block/ll_rw_blk.c|   28 +--
 drivers/block/cciss.c|4 
 drivers/md/dm.c  |   14 -
 fs/bio.c |6 
 include/linux/blktrace_api.h |  144 +-
 mm/bounce.c  |4 
 mm/highmem.c |2 
 10 files changed, 383 insertions(+), 168 deletions(-)

Index: linux-2.6-lttng/block/elevator.c
===================================================================
--- linux-2.6-lttng.orig/block/elevator.c   2007-08-07 11:03:19.000000000 -0400
+++ linux-2.6-lttng/block/elevator.c   2007-08-07 11:43:37.000000000 -0400
@@ -32,7 +32,7 @@
 #include <linux/init.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
-#include <linux/blktrace_api.h>
+#include <linux/marker.h>
 #include <linux/hash.h>
 
 #include <asm/uaccess.h>
@@ -548,7 +548,7 @@ void elv_insert(struct request_queue *q,
unsigned ordseq;
int unplug_it = 1;
 
-   blk_add_trace_rq(q, rq, BLK_TA_INSERT);
+   trace_mark(blk_request_insert, "%p %p", q, rq);
 
rq->q = q;
 
@@ -727,7 +727,7 @@ struct request *elv_next_request(struct 
 * not be passed by new incoming requests
 */
rq->cmd_flags |= REQ_STARTED;
-   blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
+   trace_mark(blk_request_issue, "%p %p", q, rq);
}
 
if (!q->boundary_rq || q->boundary_rq == rq) {
Index: linux-2.6-lttng/block/ll_rw_blk.c
===================================================================
--- linux-2.6-lttng.orig/block/ll_rw_blk.c   2007-08-07 11:03:39.000000000 -0400
+++ linux-2.6-lttng/block/ll_rw_blk.c   2007-08-07 11:43:37.000000000 -0400
@@ -28,6 +28,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
+#include <linux/marker.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
 
@@ -1555,7 +1556,7 @@ void blk_plug_device(struct request_queu
 
	if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
		mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-   blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+   trace_mark(blk_plug_device, "%p %p %d", q, NULL, 0);
}
 }
 
@@ -1621,7 +1622,7 @@ static void blk_backing_dev_unplug(struc
 * devices don't necessarily have an ->unplug_fn defined
 */
if (q->unplug_fn) {
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1633,7 +1634,7 @@ static void blk_unplug_work(struct work_
struct request_queue *q =
container_of(work, struct request_queue, unplug_work);
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1643,7 +1644,7 @@ static void blk_unplug_timeout(unsigned 
 {
struct request_queue *q = (struct request_queue *)data;
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+   trace_mark(blk_pdu_unplug_timer, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
	kblockd_schedule_work(&q->unplug_work);
@@ -2156,7 +2157,7 @@ rq_starved:

rq_init(q, rq);
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+   trace_mark(blk_get_request, "%p %p %d", q, bio, rw);
 out:
return rq;
 }
@@ -2186,7 +2187,7 @@ static struct request *get_request_wait(
if (!rq) {
struct io_context *ioc;
 
-   blk_add_trace_generic(q, 

[patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-08-12 Thread Mathieu Desnoyers
Here is the first stage of a port of blktrace to the Linux Kernel Markers.
The advantage of this port is that it minimizes the impact on the running
system when blktrace is not active.

A few remarks: this patch has the positive effect of removing some code
from the block io tracing hot paths, minimizing the i-cache impact in a
system where the io tracing is compiled in but inactive.

It also moves the blk tracing code from a header (and therefore from the
body of the instrumented functions) to a separate C file.

On the other hand, as soon as one device has to be traced, all devices have
to execute the tracing function call when they pass by the instrumentation
site. This is slower than the previous inline function, which tested the
condition quickly.

It does not make the code smaller, since I left all the specialized
tracing functions for requests, bio, generic and remap, which would go away
once a generic infrastructure is in place to serialize the information
passed to the marker. This is mostly why I consider it a step towards the
full improvements that the markers could bring.

Signed-off-by: Mathieu Desnoyers <[EMAIL PROTECTED]>
CC: Jens Axboe <[EMAIL PROTECTED]>
---

 block/Kconfig|1 
 block/blktrace.c |  342 ++-
 block/elevator.c |6 
 block/ll_rw_blk.c|   28 +--
 drivers/block/cciss.c|4 
 drivers/md/dm.c  |   14 -
 fs/bio.c |6 
 include/linux/blktrace_api.h |  144 +-
 mm/bounce.c  |4 
 mm/highmem.c |2 
 10 files changed, 383 insertions(+), 168 deletions(-)

Index: linux-2.6-lttng/block/elevator.c
===================================================================
--- linux-2.6-lttng.orig/block/elevator.c   2007-08-07 11:03:19.000000000 -0400
+++ linux-2.6-lttng/block/elevator.c   2007-08-07 11:43:37.000000000 -0400
@@ -32,7 +32,7 @@
 #include <linux/init.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
-#include <linux/blktrace_api.h>
+#include <linux/marker.h>
 #include <linux/hash.h>
 
 #include <asm/uaccess.h>
@@ -548,7 +548,7 @@ void elv_insert(struct request_queue *q,
unsigned ordseq;
int unplug_it = 1;
 
-   blk_add_trace_rq(q, rq, BLK_TA_INSERT);
+   trace_mark(blk_request_insert, "%p %p", q, rq);
 
rq->q = q;
 
@@ -727,7 +727,7 @@ struct request *elv_next_request(struct 
 * not be passed by new incoming requests
 */
rq->cmd_flags |= REQ_STARTED;
-   blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
+   trace_mark(blk_request_issue, "%p %p", q, rq);
}
 
if (!q->boundary_rq || q->boundary_rq == rq) {
Index: linux-2.6-lttng/block/ll_rw_blk.c
===================================================================
--- linux-2.6-lttng.orig/block/ll_rw_blk.c   2007-08-07 11:03:39.000000000 -0400
+++ linux-2.6-lttng/block/ll_rw_blk.c   2007-08-07 11:43:37.000000000 -0400
@@ -28,6 +28,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
+#include <linux/marker.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
 
@@ -1555,7 +1556,7 @@ void blk_plug_device(struct request_queu
 
	if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
		mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-   blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+   trace_mark(blk_plug_device, "%p %p %d", q, NULL, 0);
}
 }
 
@@ -1621,7 +1622,7 @@ static void blk_backing_dev_unplug(struc
 * devices don't necessarily have an ->unplug_fn defined
 */
if (q->unplug_fn) {
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1633,7 +1634,7 @@ static void blk_unplug_work(struct work_
struct request_queue *q =
container_of(work, struct request_queue, unplug_work);
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1643,7 +1644,7 @@ static void blk_unplug_timeout(unsigned 
 {
struct request_queue *q = (struct request_queue *)data;
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+   trace_mark(blk_pdu_unplug_timer, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
	kblockd_schedule_work(&q->unplug_work);
@@ -2156,7 +2157,7 @@ rq_starved:

rq_init(q, rq);
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+   trace_mark(blk_get_request, "%p %p %d", q, bio, rw);
 out:
return rq;
 }
@@ -2186,7 +2187,7 @@ static struct request *get_request_wait(
if (!rq) {
struct io_context *ioc;
 
-   blk_add_trace_generic(q, 

[patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-07-13 Thread Mathieu Desnoyers
Here is the first stage of a port of blktrace to the Linux Kernel Markers.
The advantage of this port is that it minimizes the impact on the running
system when blktrace is not active.

A few remarks: this patch has the positive effect of removing some code
from the block io tracing hot paths, minimizing the i-cache impact in a
system where the io tracing is compiled in but inactive.

It also moves the blk tracing code from a header (and therefore from the
body of the instrumented functions) to a separate C file.

On the other hand, as soon as one device has to be traced, all devices have
to execute the tracing function call when they pass by the instrumentation
site. This is slower than the previous inline function, which tested the
condition quickly.

It does not make the code smaller, since I left all the specialized
tracing functions for requests, bio, generic and remap, which would go away
once a generic infrastructure is in place to serialize the information
passed to the marker. This is mostly why I consider it a step towards the
full improvements that the markers could bring.

Signed-off-by: Mathieu Desnoyers <[EMAIL PROTECTED]>
CC: Jens Axboe <[EMAIL PROTECTED]>
---

 block/Kconfig|1 
 block/blktrace.c |  342 ++-
 block/elevator.c |6 
 block/ll_rw_blk.c|   28 +--
 drivers/block/cciss.c|4 
 drivers/md/dm.c  |   14 -
 fs/bio.c |6 
 include/linux/blktrace_api.h |  146 +-
 mm/bounce.c  |4 
 mm/highmem.c |2 
 10 files changed, 385 insertions(+), 168 deletions(-)

Index: linux-2.6-lttng/block/elevator.c
===================================================================
--- linux-2.6-lttng.orig/block/elevator.c   2007-07-13 17:33:58.000000000 -0400
+++ linux-2.6-lttng/block/elevator.c   2007-07-13 17:34:05.000000000 -0400
@@ -32,7 +32,7 @@
 #include <linux/init.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
-#include <linux/blktrace_api.h>
+#include <linux/marker.h>
 #include <linux/hash.h>
 
 #include <asm/uaccess.h>
@@ -547,7 +547,7 @@
unsigned ordseq;
int unplug_it = 1;
 
-   blk_add_trace_rq(q, rq, BLK_TA_INSERT);
+   trace_mark(blk_request_insert, "%p %p", q, rq);
 
rq->q = q;
 
@@ -726,7 +726,7 @@
 * not be passed by new incoming requests
 */
rq->cmd_flags |= REQ_STARTED;
-   blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
+   trace_mark(blk_request_issue, "%p %p", q, rq);
}
 
if (!q->boundary_rq || q->boundary_rq == rq) {
Index: linux-2.6-lttng/block/ll_rw_blk.c
===
--- linux-2.6-lttng.orig/block/ll_rw_blk.c	2007-07-13 17:33:58.000000000 -0400
+++ linux-2.6-lttng/block/ll_rw_blk.c	2007-07-13 17:54:03.000000000 -0400
@@ -28,6 +28,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
+#include <linux/marker.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
 
@@ -1551,7 +1552,7 @@
 
if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-   blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+   trace_mark(blk_plug_device, "%p %p %d", q, NULL, 0);
}
 }
 
@@ -1617,7 +1618,7 @@
 * devices don't necessarily have an ->unplug_fn defined
 */
if (q->unplug_fn) {
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1628,7 +1629,7 @@
 {
request_queue_t *q = container_of(work, request_queue_t, unplug_work);
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1638,7 +1639,7 @@
 {
request_queue_t *q = (request_queue_t *)data;
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+   trace_mark(blk_pdu_unplug_timer, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
kblockd_schedule_work(&q->unplug_work);
@@ -2150,7 +2151,7 @@

rq_init(q, rq);
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+   trace_mark(blk_get_request, "%p %p %d", q, bio, rw);
 out:
return rq;
 }
@@ -2180,7 +2181,7 @@
if (!rq) {
struct io_context *ioc;
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_SLEEPRQ);
+   trace_mark(blk_sleep_request, "%p %p %d", q, bio, rw);
 
__generic_unplug_device(q);
spin_unlock_irq(q->queue_lock);
@@ -2254,7 +2255,7 @@
  */
 void blk_requeue_request(request_queue_t *q, struct request *rq)
 {
-   

[patch 4/4] Port of blktrace to the Linux Kernel Markers.

2007-07-03 Thread Mathieu Desnoyers
Here is a proof-of-concept patch, for demonstration purposes, of moving
blktrace to the markers.

A few remarks: this patch has the positive effect of removing some code
from the block io tracing hot paths, minimizing the i-cache impact on a
system where the io tracing is compiled in but inactive.

It also moves the blk tracing code from a header (and therefore from the
body of the instrumented functions) to a separate C file.

One caveat: as soon as one device has to be traced, every device has to
fall into the tracing function call. This is slower than the previous
inline function, which tested the condition quickly. If this becomes a
show stopper, it could be fixed by adding the possibility to test a
supplementary condition, dependent on the marker context, at the marker
site, just after the enable/disable test.
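
To make that suggestion concrete, the conditional test could take the
following shape; trace_mark_cond() and marker_armed() are hypothetical,
nothing in this patchset provides them:

/*
 * Hypothetical variant: 'cond' sits between the enable/disable test
 * and the probe call, so untraced queues would skip the call.
 */
#define trace_mark_cond(name, cond, fmt, args...)		\
	do {							\
		if (unlikely(marker_armed(name)) && (cond))	\
			trace_mark(name, fmt, ## args);		\
	} while (0)

A call site would then read, for instance:

	trace_mark_cond(blk_plug_device, q->blk_trace != NULL,
			"%p %p %d", q, NULL, 0);

(trace_mark() re-tests the armed state internally, so this sketch
double-tests; a real implementation would fold the condition into the
marker site itself.)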

It does not make the code smaller, since I left all the specialized
tracing functions for requests, bio, generic and remap, which would go
away once a generic infrastructure is in place to serialize the
information passed to the marker. This is mostly why I consider it a
proof of concept.

Signed-off-by: Mathieu Desnoyers <[EMAIL PROTECTED]>
CC: Jens Axboe <[EMAIL PROTECTED]>
---

 block/Kconfig|1 
 block/blktrace.c |  281 ++-
 block/elevator.c |6 
 block/ll_rw_blk.c|   27 ++--
 drivers/block/cciss.c|4 
 drivers/md/dm.c  |   14 +-
 fs/bio.c |4 
 include/linux/blktrace_api.h |  146 +-
 mm/bounce.c  |4 
 mm/highmem.c |2 
 10 files changed, 322 insertions(+), 167 deletions(-)

Index: linux-2.6-lttng/block/elevator.c
===
--- linux-2.6-lttng.orig/block/elevator.c	2007-06-15 16:13:49.000000000 -0400
+++ linux-2.6-lttng/block/elevator.c	2007-06-15 16:14:14.000000000 -0400
@@ -32,7 +32,7 @@
 #include <linux/init.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
-#include <linux/blktrace_api.h>
+#include <linux/marker.h>
 #include <linux/hash.h>
 
 #include <asm/uaccess.h>
@@ -547,7 +547,7 @@
unsigned ordseq;
int unplug_it = 1;
 
-   blk_add_trace_rq(q, rq, BLK_TA_INSERT);
+   trace_mark(blk_request_insert, "%p %p", q, rq);
 
rq->q = q;
 
@@ -726,7 +726,7 @@
 * not be passed by new incoming requests
 */
rq->cmd_flags |= REQ_STARTED;
-   blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
+   trace_mark(blk_request_issue, "%p %p", q, rq);
}
 
if (!q->boundary_rq || q->boundary_rq == rq) {
Index: linux-2.6-lttng/block/ll_rw_blk.c
===
--- linux-2.6-lttng.orig/block/ll_rw_blk.c	2007-06-15 16:13:49.000000000 -0400
+++ linux-2.6-lttng/block/ll_rw_blk.c	2007-06-15 16:14:14.000000000 -0400
@@ -28,6 +28,7 @@
 #include <linux/task_io_accounting_ops.h>
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
+#include <linux/marker.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
 
@@ -1551,7 +1552,7 @@
 
if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-   blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+   trace_mark(blk_plug_device, "%p %p %d", q, NULL, 0);
}
 }
 
@@ -1617,7 +1618,7 @@
 * devices don't necessarily have an ->unplug_fn defined
 */
if (q->unplug_fn) {
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1628,7 +1629,7 @@
 {
request_queue_t *q = container_of(work, request_queue_t, unplug_work);
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+   trace_mark(blk_pdu_unplug_io, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
q->unplug_fn(q);
@@ -1638,7 +1639,7 @@
 {
request_queue_t *q = (request_queue_t *)data;
 
-   blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+   trace_mark(blk_pdu_unplug_timer, "%p %p %d", q, NULL,
q->rq.count[READ] + q->rq.count[WRITE]);
 
kblockd_schedule_work(&q->unplug_work);
@@ -2150,7 +2151,7 @@

rq_init(q, rq);
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+   trace_mark(blk_get_request, "%p %p %d", q, bio, rw);
 out:
return rq;
 }
@@ -2180,7 +2181,7 @@
if (!rq) {
struct io_context *ioc;
 
-   blk_add_trace_generic(q, bio, rw, BLK_TA_SLEEPRQ);
+   trace_mark(blk_sleep_request, "%p %p %d", q, bio, rw);
 
__generic_unplug_device(q);
spin_unlock_irq(q->queue_lock);
@@ -2254,7 +2255,7 @@
  */
 void blk_requeue_request(request_queue_t *q, struct request 
