On 11/01/2016 03:05 PM, Jens Axboe wrote:
+void blk_stat_init(struct blk_rq_stat *stat)
+{
+       __blk_stat_init(stat, ktime_to_ns(ktime_get()));
+}
+
+static bool __blk_stat_is_current(struct blk_rq_stat *stat, s64 now)
+{
+       return (now & BLK_STAT_NSEC_MASK) == (stat->time & BLK_STAT_NSEC_MASK);
+}
+
+bool blk_stat_is_current(struct blk_rq_stat *stat)
+{
+       return __blk_stat_is_current(stat, ktime_to_ns(ktime_get()));
+}
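
To make the windowing scheme above concrete: the helpers bucket timestamps by masking off the low bits of the nanosecond clock, so two readings count as "current" relative to each other iff they land in the same window. Below is a rough userspace sketch of that idea; the 2^27 ns (~134 ms) window and the STAT_WINDOW_NSEC / same_window() names are illustrative stand-ins rather than the constants from the patch, and clock_gettime(CLOCK_MONOTONIC) only stands in for ktime_get():

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define STAT_WINDOW_NSEC (1ULL << 27)              /* illustrative window size */
#define STAT_NSEC_MASK   (~(STAT_WINDOW_NSEC - 1)) /* clears the low bits */

static uint64_t now_ns(void)
{
	struct timespec ts;

	/* Userspace stand-in for ktime_to_ns(ktime_get()). */
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Mirrors the __blk_stat_is_current() check: same window iff the high bits match. */
static bool same_window(uint64_t a, uint64_t b)
{
	return (a & STAT_NSEC_MASK) == (b & STAT_NSEC_MASK);
}

int main(void)
{
	uint64_t start = now_ns();

	printf("still in the start window: %d\n", same_window(start, now_ns()));
	return 0;
}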

Hello Jens,

What is the performance impact of these patches? In my experience, introducing ktime_get() in the I/O path of high-performance I/O devices measurably slows down I/O. According to https://lkml.org/lkml/2016/4/21/107, a single ktime_get() call takes about 100 ns.
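
To put a rough number on that, a userspace micro-benchmark along the lines of the sketch below can measure the per-call cost of a monotonic clock read. It is only an approximation: clock_gettime(CLOCK_MONOTONIC) stands in for the in-kernel ktime_get(), and the result depends heavily on whether the TSC/vDSO fast path is available on the machine.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 10000000ULL

static uint64_t now_ns(void)
{
	struct timespec ts;

	/* Userspace stand-in for ktime_get(). */
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
	volatile uint64_t sink = 0;	/* keeps the calls from being optimized away */
	uint64_t start, end, i;

	start = now_ns();
	for (i = 0; i < ITERATIONS; i++)
		sink += now_ns();
	end = now_ns();

	printf("~%.1f ns per clock read\n",
	       (double)(end - start) / ITERATIONS);
	return 0;
}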

Bart.
