Re: [PATCH 0/4] lightnvm: base 2.0 implementation

2018-02-08 Thread Javier Gonzalez
> On 5 Feb 2018, at 13.15, Matias Bjørling  wrote:
> 
> Hi,
> 
> A couple of patches for 2.0 support for the lightnvm subsystem. They
> form the basis for integrating 2.0 support.
> 
> For the rest of the support, Javier has code that implements report
> chunk and sets up the LBA format data structure. He also has a bunch
> of patches that bring pblk up to speed.
> 
> The first two patches are preparation for the 2.0 work. The third patch
> implements the 2.0 data structures, the geometry command, and exposes
> the sysfs attributes that come with the 2.0 specification. Note that
> the attributes between 1.2 and 2.0 are different, and it is expected
> that user space shall use the version sysfs attribute to know which
> attributes will be available.
> 
> The last patch implements support for using the nvme namespace logical
> block and metadata fields and syncs them with the internal lightnvm
> identify structures.
> 
> -Matias
> 
> Matias Bjørling (4):
>  lightnvm: make 1.2 data structures explicit
>  lightnvm: flatten nvm_id_group into nvm_id
>  lightnvm: add 2.0 geometry identification
>  nvme: lightnvm: add late setup of block size and metadata
> 
> drivers/lightnvm/core.c  |  27 ++-
> drivers/nvme/host/core.c |   2 +
> drivers/nvme/host/lightnvm.c | 508 ---
> drivers/nvme/host/nvme.h |   2 +
> include/linux/lightnvm.h |  64 +++---
> 5 files changed, 426 insertions(+), 177 deletions(-)
> 
> --
> 2.11.0

Thanks for posting these. I have started rebasing my patches on top of
the new geometry - it is a bit different from how I implemented it, but
I'll take care of it.

I'll review as I go - some of the changes I have might make sense to
squash in your patches to keep a clean history...

I'll add a couple of patches abstracting the geometry so that at the
core.c level we only work with a single geometry structure. This is the
way it is done in the early patches I pointed you to before. Then come
patches building bottom-up support for the new features in 2.0.

Javier


signature.asc
Description: Message signed with OpenPGP


Re: [PATCH 0/4] lightnvm: base 2.0 implementation

2018-02-08 Thread Matias Bjørling

On 02/08/2018 10:35 AM, Javier Gonzalez wrote:

On 5 Feb 2018, at 13.15, Matias Bjørling  wrote:

Hi,

A couple of patches for 2.0 support for the lightnvm subsystem. They
form the basis for integrating 2.0 support.

For the rest of the support, Javier has code that implements report
chunk and sets up the LBA format data structure. He also has a bunch
of patches that bring pblk up to speed.

The first two patches are preparation for the 2.0 work. The third patch
implements the 2.0 data structures, the geometry command, and exposes
the sysfs attributes that come with the 2.0 specification. Note that
the attributes between 1.2 and 2.0 are different, and it is expected
that user space shall use the version sysfs attribute to know which
attributes will be available.

The last patch implements support for using the nvme namespace logical
block and metadata fields and syncs them with the internal lightnvm
identify structures.

-Matias

Matias Bjørling (4):
  lightnvm: make 1.2 data structures explicit
  lightnvm: flatten nvm_id_group into nvm_id
  lightnvm: add 2.0 geometry identification
  nvme: lightnvm: add late setup of block size and metadata

drivers/lightnvm/core.c  |  27 ++-
drivers/nvme/host/core.c |   2 +
drivers/nvme/host/lightnvm.c | 508 ---
drivers/nvme/host/nvme.h |   2 +
include/linux/lightnvm.h |  64 +++---
5 files changed, 426 insertions(+), 177 deletions(-)

--
2.11.0


Thanks for posting these. I have started rebasing my patches on top of
the new geometry - it is a bit different from how I implemented it, but
I'll take care of it.

I'll review as I go - some of the changes I have might make sense to
squash in your patches to keep a clean history...



Thanks.


I'll add a couple of patches abstracting the geometry so that at the
core.c level we only work with a single geometry structure. This is the
way it is done in the early patches I pointed you to before. Then come
patches building bottom-up support for the new features in 2.0.



Yep, I was expecting that. I skipped that part since it went into pblk 
and you already had some patches for it.



Javier





Re: [PATCH V2] lightnvm: pblk: add padding distribution sysfs attribute

2018-02-08 Thread Matias Bjørling

On 02/06/2018 12:54 PM, hans.ml.holmb...@owltronix.com wrote:

From: Hans Holmberg 

When pblk receives a sync, all data up to that point in the write buffer
must be committed to persistent storage, and as flash memory comes with
a minimal write size there is a significant cost involved, both in terms
of time for completing the sync and in terms of write amplification
(sectors padded to fill up to the minimal write size).

In order to get a better understanding of the costs involved for syncs,
add a sysfs attribute to pblk: padded_dist, showing a normalized
distribution of padded sectors. In order to facilitate measurements of
specific workloads during the lifetime of the pblk instance, the
distribution can be reset by writing 0 to the attribute.

Do this by introducing counters for each possible padding:
{0..(minimal write size - 1)} and calculate the normalized distribution
when showing the attribute.

Signed-off-by: Hans Holmberg 
Signed-off-by: Javier González 
Rearranged total_buckets statement in pblk_sysfs_get_padding_dist
Signed-off-by: Matias Bjørling 
---

Changes since V1:

* Picked up Matias' rearrangement of the total_buckets statement
* Fixed build problems reported by kbuild on i386 by using sector_div
   instead of / when calculating the padding distribution and turning
   nr_flush into atomic64_t (which makes more sense anyway)

  drivers/lightnvm/pblk-init.c  | 16 +++-
  drivers/lightnvm/pblk-rb.c| 17 +
  drivers/lightnvm/pblk-sysfs.c | 86 ++-
  drivers/lightnvm/pblk.h   |  6 ++-
  4 files changed, 112 insertions(+), 13 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 7eedc5d..bf9bc31 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -921,6 +921,7 @@ static void pblk_free(struct pblk *pblk)
  {
pblk_luns_free(pblk);
pblk_lines_free(pblk);
+   kfree(pblk->pad_dist);
pblk_line_meta_free(pblk);
pblk_core_free(pblk);
pblk_l2p_free(pblk);
@@ -998,11 +999,13 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct 
gendisk *tdisk,
pblk->pad_rst_wa = 0;
pblk->gc_rst_wa = 0;
  
+	atomic64_set(&pblk->nr_flush, 0);

+   pblk->nr_flush_rst = 0;
+
  #ifdef CONFIG_NVM_DEBUG
atomic_long_set(&pblk->inflight_writes, 0);
atomic_long_set(&pblk->padded_writes, 0);
atomic_long_set(&pblk->padded_wb, 0);
-   atomic_long_set(&pblk->nr_flush, 0);
atomic_long_set(&pblk->req_writes, 0);
atomic_long_set(&pblk->sub_writes, 0);
atomic_long_set(&pblk->sync_writes, 0);
@@ -1034,10 +1037,17 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct 
gendisk *tdisk,
goto fail_free_luns;
}
  
+	pblk->pad_dist = kzalloc((pblk->min_write_pgs - 1) * sizeof(atomic64_t),

+GFP_KERNEL);
+   if (!pblk->pad_dist) {
+   ret = -ENOMEM;
+   goto fail_free_line_meta;
+   }
+
ret = pblk_core_init(pblk);
if (ret) {
pr_err("pblk: could not initialize core\n");
-   goto fail_free_line_meta;
+   goto fail_free_pad_dist;
}
  
  	ret = pblk_l2p_init(pblk);

@@ -1097,6 +1107,8 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct 
gendisk *tdisk,
pblk_l2p_free(pblk);
  fail_free_core:
pblk_core_free(pblk);
+fail_free_pad_dist:
+   kfree(pblk->pad_dist);
  fail_free_line_meta:
pblk_line_meta_free(pblk);
  fail_free_luns:
diff --git a/drivers/lightnvm/pblk-rb.c b/drivers/lightnvm/pblk-rb.c
index 7044b55..8b14340 100644
--- a/drivers/lightnvm/pblk-rb.c
+++ b/drivers/lightnvm/pblk-rb.c
@@ -437,9 +437,7 @@ static int pblk_rb_may_write_flush(struct pblk_rb *rb, 
unsigned int nr_entries,
if (bio->bi_opf & REQ_PREFLUSH) {
struct pblk *pblk = container_of(rb, struct pblk, rwb);
  
-#ifdef CONFIG_NVM_DEBUG

-   atomic_long_inc(&pblk->nr_flush);
-#endif
+   atomic64_inc(&pblk->nr_flush);
if (pblk_rb_flush_point_set(&pblk->rwb, bio, mem))
*io_ret = NVM_IO_OK;
}
@@ -620,14 +618,17 @@ unsigned int pblk_rb_read_to_bio(struct pblk_rb *rb, 
struct nvm_rq *rqd,
pr_err("pblk: could not pad page in write bio\n");
return NVM_IO_ERR;
}
-   }
  
-	atomic64_add(pad, &((struct pblk *)

-   (container_of(rb, struct pblk, rwb)))->pad_wa);
+   if (pad < pblk->min_write_pgs)
+   atomic64_inc(&pblk->pad_dist[pad - 1]);
+   else
+   pr_warn("pblk: padding more than min. sectors\n");
+
+   atomic64_add(pad, &pblk->pad_wa);
+   }
  
  #ifdef CONFIG_NVM_DEBUG

-   atomic_long_add(pad, &((struct pblk *)
-   (container_of(rb, struct pblk, rwb)))->padded_writes);
+  

Re: [PATCH 3/4] lightnvm: add 2.0 geometry identification

2018-02-08 Thread Javier Gonzalez
> On 5 Feb 2018, at 13.15, Matias Bjørling  wrote:
> 
> Implement the geometry data structures for 2.0 and enable a drive
> to be identified as one, including exposing the appropriate 2.0
> sysfs entries.
> 
> Signed-off-by: Matias Bjørling 
> ---
> drivers/lightnvm/core.c  |   2 +-
> drivers/nvme/host/lightnvm.c | 334 +--
> include/linux/lightnvm.h |  11 +-
> 3 files changed, 295 insertions(+), 52 deletions(-)
> 
> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
> index c72863b36439..250e74dfa120 100644
> --- a/drivers/lightnvm/core.c
> +++ b/drivers/lightnvm/core.c
> @@ -934,7 +934,7 @@ static int nvm_init(struct nvm_dev *dev)
>   pr_debug("nvm: ver:%x nvm_vendor:%x\n",
>   dev->identity.ver_id, dev->identity.vmnt);
> 
> - if (dev->identity.ver_id != 1) {
> + if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) {
>   pr_err("nvm: device not supported by kernel.");
>   goto err;
>   }
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 6412551ecc65..a9c010655ccc 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -184,6 +184,58 @@ struct nvme_nvm_bb_tbl {
>   __u8blk[0];
> };
> 
> +struct nvme_nvm_id20_addrf {
> + __u8grp_len;
> + __u8pu_len;
> + __u8chk_len;
> + __u8lba_len;
> + __u8resv[4];
> +};
> +
> +struct nvme_nvm_id20 {
> + __u8mjr;
> + __u8mnr;
> + __u8resv[6];
> +
> + struct nvme_nvm_id20_addrf lbaf;
> +
> + __u32   mccap;
> + __u8resv2[12];
> +
> + __u8wit;
> + __u8resv3[31];
> +
> + /* Geometry */
> + __u16   num_grp;
> + __u16   num_pu;
> + __u32   num_chk;
> + __u32   clba;
> + __u8resv4[52];
> +
> + /* Write data requirements */
> + __u32   ws_min;
> + __u32   ws_opt;
> + __u32   mw_cunits;
> + __u32   maxoc;
> + __u32   maxocpu;
> + __u8resv5[44];
> +
> + /* Performance related metrics */
> + __u32   trdt;
> + __u32   trdm;
> + __u32   twrt;
> + __u32   twrm;
> + __u32   tcrst;
> + __u32   tcrsm;
> + __u8resv6[40];
> +
> + /* Reserved area */
> + __u8resv7[2816];
> +
> + /* Vendor specific */
> + __u8vs[1024];
> +};
> 

All __u16, __u32 should be __le16, __le32

Javier


signature.asc
Description: Message signed with OpenPGP


[PATCH] blk: optimization for classic polling

2018-02-08 Thread Nitesh Shetty
This removes the dependency on interrupts to wake up the task: while
polling for IO completion, set the task state to TASK_RUNNING if
need_resched() returns true.
Earlier, the polling task used to sleep, relying on an interrupt to wake
it up. This made some IO take very long when interrupt coalescing is
enabled in NVMe.

Reference:
http://lists.infradead.org/pipermail/linux-nvme/2018-February/015435.html
Signed-off-by: Nitesh Shetty 
---
 fs/block_dev.c | 16 
 fs/direct-io.c |  8 ++--
 fs/iomap.c | 10 +++---
 3 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 4a181fc..a87d8b7 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -236,9 +236,13 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct 
iov_iter *iter,
set_current_state(TASK_UNINTERRUPTIBLE);
if (!READ_ONCE(bio.bi_private))
break;
-   if (!(iocb->ki_flags & IOCB_HIPRI) ||
-   !blk_poll(bdev_get_queue(bdev), qc))
+   if (!(iocb->ki_flags & IOCB_HIPRI))
io_schedule();
+   else if (!blk_poll(bdev_get_queue(bdev), qc)) {
+   if (need_resched())
+   set_current_state(TASK_RUNNING);
+   io_schedule();
+   }
}
__set_current_state(TASK_RUNNING);
 
@@ -401,9 +405,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter 
*iter, int nr_pages)
if (!READ_ONCE(dio->waiter))
break;
 
-   if (!(iocb->ki_flags & IOCB_HIPRI) ||
-   !blk_poll(bdev_get_queue(bdev), qc))
+   if (!(iocb->ki_flags & IOCB_HIPRI))
io_schedule();
+   else if (!blk_poll(bdev_get_queue(bdev), qc)) {
+   if (need_resched())
+   set_current_state(TASK_RUNNING);
+   io_schedule();
+   }
}
__set_current_state(TASK_RUNNING);
 
diff --git a/fs/direct-io.c b/fs/direct-io.c
index a0ca9e4..c815ac9 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -518,9 +518,13 @@ static struct bio *dio_await_one(struct dio *dio)
__set_current_state(TASK_UNINTERRUPTIBLE);
dio->waiter = current;
spin_unlock_irqrestore(&dio->bio_lock, flags);
-   if (!(dio->iocb->ki_flags & IOCB_HIPRI) ||
-   !blk_poll(dio->bio_disk->queue, dio->bio_cookie))
+   if (!(dio->iocb->ki_flags & IOCB_HIPRI))
io_schedule();
+   else if (!blk_poll(dio->bio_disk->queue, dio->bio_cookie)) {
+   if (need_resched())
+   __set_current_state(TASK_RUNNING);
+   io_schedule();
+   }
/* wake up sets us TASK_RUNNING */
spin_lock_irqsave(&dio->bio_lock, flags);
dio->waiter = NULL;
diff --git a/fs/iomap.c b/fs/iomap.c
index afd1635..b51569d 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1072,10 +1072,14 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
break;
 
if (!(iocb->ki_flags & IOCB_HIPRI) ||
-   !dio->submit.last_queue ||
-   !blk_poll(dio->submit.last_queue,
-dio->submit.cookie))
+   !dio->submit.last_queue)
io_schedule();
+   else if (!blk_poll(dio->submit.last_queue,
+dio->submit.cookie)) {
+   if (need_resched())
+   set_current_state(TASK_RUNNING);
+   io_schedule();
+   }
}
__set_current_state(TASK_RUNNING);
}
-- 
2.7.4



Re: [PATCH v2] blk-throttle: fix race between blkcg_bio_issue_check and cgroup_rmdir

2018-02-08 Thread Tejun Heo
Hello, Joseph.

On Thu, Feb 08, 2018 at 10:29:43AM +0800, Joseph Qi wrote:
> So you mean checking css->refcnt to prevent the further use of
> blkg_get? I think it makes sense.

Yes.

> IMO, we should use css_tryget_online instead, and rightly after taking

Not really.  An offline css still can have a vast amount of IO to
drain and write out.

> queue_lock. Because there may be more uses of blkg_get in blk_throtl_bio
> in the future. Actually it already has two now. One is in
> blk_throtl_assoc_bio, and the other is in throtl_qnode_add_bio.
> What do you think of this?

As long as we avoid making the bypass paths expensive, whatever makes
the code easier to read and maintain would be better.  css ref ops are
extremely cheap anyway.

Thanks.

-- 
tejun


Re: [PATCH] blk: optimization for classic polling

2018-02-08 Thread Keith Busch
On Sun, May 30, 2083 at 09:51:06AM +0530, Nitesh Shetty wrote:
> This removes the dependency on interrupts to wake up the task: while
> polling for IO completion, set the task state to TASK_RUNNING if
> need_resched() returns true.
> Earlier, the polling task used to sleep, relying on an interrupt to
> wake it up. This made some IO take very long when interrupt coalescing
> is enabled in NVMe.
> 
> Reference:
> http://lists.infradead.org/pipermail/linux-nvme/2018-February/015435.html
> Signed-off-by: Nitesh Shetty 
> ---
>  fs/block_dev.c | 16 
>  fs/direct-io.c |  8 ++--
>  fs/iomap.c | 10 +++---
>  3 files changed, 25 insertions(+), 9 deletions(-)

I think it'd be simpler to have blk_poll set it back to running if
need_resched is true rather than repeat this pattern across all the
callers:

---
diff --git a/block/blk-mq.c b/block/blk-mq.c
index df93102e2149..40285fe1c8ad 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3164,6 +3164,7 @@ static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, 
struct request *rq)
cpu_relax();
}
 
+   set_current_state(TASK_RUNNING);
return false;
 }
 
--


Re: [PATCH V2 2/8] blk-mq: introduce BLK_MQ_F_GLOBAL_TAGS

2018-02-08 Thread Bart Van Assche
On Mon, 2018-02-05 at 23:20 +0800, Ming Lei wrote:
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 55c0a745b427..385bbec73804 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -81,6 +81,17 @@ static bool blk_mq_sched_restart_hctx(struct blk_mq_hw_ctx 
> *hctx)
>   } else
>   clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
>  
> + /* need to restart all hw queues for global tags */
> + if (hctx->flags & BLK_MQ_F_GLOBAL_TAGS) {
> + struct blk_mq_hw_ctx *hctx2;
> + int i;
> +
> + queue_for_each_hw_ctx(hctx->queue, hctx2, i)
> + if (blk_mq_run_hw_queue(hctx2, true))
> + return true;
> + return false;
> + }
> +
>   return blk_mq_run_hw_queue(hctx, true);
>  }

It seems weird to me that, no matter which hardware queue a restart is
requested for (the hctx argument), the above loop starts by examining
the hardware queue with index 0. Will this cause fairness and/or cache
line bouncing problems?

Thanks,

Bart.






Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread t...@kernel.org
On Thu, Feb 08, 2018 at 01:09:57AM +, Bart Van Assche wrote:
> On Wed, 2018-02-07 at 23:48 +, Bart Van Assche wrote:
> > With this patch applied I see requests for which it seems like the timeout 
> > handler
> > did not get invoked: [ ... ]
> 
> I just noticed the following in the system log, which is probably the reason 
> why some
> requests got stuck:
> 
> Feb  7 15:16:26 ubuntu-vm kernel: BUG: unable to handle kernel NULL pointer 
> dereference at   (null)
> Feb  7 15:16:26 ubuntu-vm kernel: IP: scsi_times_out+0x17/0x2c0 [scsi_mod]
> Feb  7 15:16:26 ubuntu-vm kernel: PGD 0 P4D 0  
> Feb  7 15:16:26 ubuntu-vm kernel: Oops:  [#1] PREEMPT SMP
> Feb  7 15:16:26 ubuntu-vm kernel: Modules linked in: dm_service_time ib_srp 
> libcrc32c crc32c_generic scsi_transport_srp target_core_pscsi 
> target_core_file ib_srpt target_core_iblock target_core_mod
> rdma_rxe ip6_udp_tunnel udp_tunnel ib_umad ib_uverbs scsi_debug brd 
> mq_deadline kyber_iosched deadline_iosched cfq_iosched bfq crct10dif_pclmul 
> crc32_pclmul af_packet ghash_clmulni_intel pcbc
> aesni_intel aes_x86_64 crypto_simd cryptd glue_helper serio_raw 
> virtio_console virtio_balloon sg button i2c_piix4 dm_multipath dm_mod dax 
> scsi_dh_rdac scsi_dh_emc scsi_dh_alua ib_iser rdma_cm iw_cm
> ib_cm ib_core configfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi 
> ip_tables x_tables ipv6 autofs4 ext4 crc16 mbcache jbd2 sd_mod virtio_net 
> virtio_blk virtio_scsi sr_mod cdrom ata_generic
> pata_acpi psmouse crc32c_intel i2c_core atkbd
> Feb  7 15:16:26 ubuntu-vm kernel: uhci_hcd ehci_hcd intel_agp ata_piix 
> intel_gtt agpgart libata virtio_pci usbcore virtio_ring virtio scsi_mod 
> usb_common unix
> Feb  7 15:16:26 ubuntu-vm kernel: CPU: 0 PID: 146 Comm: kworker/0:1H Not 
> tainted 4.15.0-dbg+ #1
> Feb  7 15:16:26 ubuntu-vm kernel: Hardware name: QEMU Standard PC (i440FX + 
> PIIX, 1996), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
> Feb  7 15:16:26 ubuntu-vm kernel: Workqueue: kblockd blk_mq_timeout_work
> Feb  7 15:16:26 ubuntu-vm kernel: RIP: 0010:scsi_times_out+0x17/0x2c0 
> [scsi_mod]
> Feb  7 15:16:26 ubuntu-vm kernel: RSP: 0018:98f0c02abd58 EFLAGS: 00010246
> Feb  7 15:16:26 ubuntu-vm kernel: RAX:  RBX: 965de2b3a2c0 
> RCX: 
> Feb  7 15:16:26 ubuntu-vm kernel: RDX: c0094740 RSI:  
> RDI: 965de2b3a2c0
> Feb  7 15:16:26 ubuntu-vm kernel: RBP: 965de2b3a438 R08: fffc 
> R09: 0007
> Feb  7 15:16:26 ubuntu-vm kernel: R10:  R11:  
> R12: 0002
> Feb  7 15:16:26 ubuntu-vm kernel: R13:  R14: 965de2a44218 
> R15: 965de2a48728
> Feb  7 15:16:26 ubuntu-vm kernel: FS:  () 
> GS:965dffc0() knlGS:
> Feb  7 15:16:26 ubuntu-vm kernel: CS:  0010 DS:  ES:  CR0: 
> 80050033
> Feb  7 15:16:26 ubuntu-vm kernel: CR2:  CR3: 3ce0f003 
> CR4: 003606f0
> Feb  7 15:16:26 ubuntu-vm kernel: DR0:  DR1:  
> DR2: 
> Feb  7 15:16:26 ubuntu-vm kernel: DR3:  DR6: fffe0ff0 
> DR7: 0400
> Feb  7 15:16:26 ubuntu-vm kernel: Call Trace:
> Feb  7 15:16:26 ubuntu-vm kernel: blk_mq_terminate_expired+0x42/0x80
> Feb  7 15:16:26 ubuntu-vm kernel: bt_iter+0x3d/0x50
> Feb  7 15:16:26 ubuntu-vm kernel: blk_mq_queue_tag_busy_iter+0xe9/0x200
> Feb  7 15:16:26 ubuntu-vm kernel: blk_mq_timeout_work+0x181/0x2e0
> Feb  7 15:16:26 ubuntu-vm kernel: process_one_work+0x21c/0x6d0
> Feb  7 15:16:26 ubuntu-vm kernel: worker_thread+0x35/0x380
> Feb  7 15:16:26 ubuntu-vm kernel: kthread+0x117/0x130
> Feb  7 15:16:26 ubuntu-vm kernel: ret_from_fork+0x24/0x30
> Feb  7 15:16:26 ubuntu-vm kernel: Code: ff ff 0f ff e9 fd fe ff ff 90 66 2e 
> 0f 1f 84 00 00 00 00 00 41 55 41 54 55 48 8d af 78 01 00 00 53 48 8b 87 b0 01 
> 00 00 48 89 fb <4c> 8b 28 0f 1f 44 00 00 65
> 8b 05 6a b5 f8 3f 83 f8 0f 0f 87 ed  
> Feb  7 15:16:26 ubuntu-vm kernel: RIP: scsi_times_out+0x17/0x2c0 [scsi_mod] 
> RSP: 98f0c02abd58
> Feb  7 15:16:26 ubuntu-vm kernel: CR2: 
> Feb  7 15:16:26 ubuntu-vm kernel: ---[ end trace ce6c20d95bab450e ]---

So, that's dereferencing %rax which is NULL.  That gotta be the ->host
deref in the following.

  enum blk_eh_timer_return scsi_times_out(struct request *req)
  {
  struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(req);
  enum blk_eh_timer_return rtn = BLK_EH_NOT_HANDLED;
  struct Scsi_Host *host = scmd->device->host;
  ...

That sounds more like a scsi hotplug but than an issue in the timeout
code unless we messed up @req pointer to begin with.

Thanks.

-- 
tejun


Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread t...@kernel.org
On Thu, Feb 08, 2018 at 07:39:40AM -0800, t...@kernel.org wrote:
> That sounds more like a scsi hotplug but than an issue in the timeout
   ^bug
> code unless we messed up @req pointer to begin with.

-- 
tejun


Re: [PATCH RESEND] blk-throttle: dispatch more sync writes in block throttle layer

2018-02-08 Thread Tejun Heo
Hello,

On Thu, Feb 08, 2018 at 11:39:19AM +0800, xuejiufei wrote:
> Hi Tejun,
> 
> Could you please kindly review this patch or give some advice?

I don't have anything against it but let's wait for Shaohua.

Thanks.

-- 
tejun


[PATCH v6 0/9] bcache: device failure handling improvement

2018-02-08 Thread Coly Li
Hi maintainers and folks,

This patch set tries to improve bcache device failure handling,
including cache device and backing device failures.

The basic idea to handle failed cache device is,
- Unregister cache set
- Detach all backing devices which are attached to this cache set
- Stop all the detached bcache devices (configurable)
- Stop all flash only volume on the cache set
The above process is named 'cache set retire' by me. The result of a
cache set retire is that the cache set and bcache devices are all
removed (configurable); subsequent I/O requests will fail immediately to
notify the upper layer or user space code that the cache device has
failed or is disconnected.

One patch from the v5 patch set was merged into bcache-for-next, so it
is no longer in the v6 patch set. The changes in the v6 patch set are
only typo fixes, which were pointed out by Nix, Michael and other
developers.

So far all patches have been peer reviewed. Thank you all, bcache
developers!

Changelog:
v6: fix typo and mistaken spelling.
v5: replace patch "bcache: stop all attached bcache devices for a retired
cache set" from v4 patch set by "bcache: add stop_when_cache_set_failed
option to backing device" from v5 patch set.
fix issues from v4 patch set.
improve kernel message format, remove redundant prefix string.
v4: add per-cached_dev option stop_attached_devs_on_fail to avoid stopping
attached bcache device from a retiring cache set.
v3: fix detach issue find in v2 patch set.
v2: fixes all problems found in v1 review.
add patches to handle backing device failure.
add one more patch to set writeback_rate_update_seconds range.
include a patch from Junhui Tang.
v1: the initial version, only handles cache device failure.

Coly Li
---

Coly Li (8):
  bcache: fix cached_dev->count usage for bch_cache_set_error()
  bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set
  bcache: stop dc->writeback_rate_update properly
  bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags
  bcache: add stop_when_cache_set_failed option to backing device
  bcache: add backing_request_endio() for bi_end_io of attached backing
device I/O
  bcache: add io_disable to struct cached_dev
  bcache: stop bcache device when backing device is offline

Tang Junhui (1):
  bcache: fix inaccurate io state for detached bcache devices

 drivers/md/bcache/alloc.c |   3 +-
 drivers/md/bcache/bcache.h|  44 -
 drivers/md/bcache/btree.c |  10 ++-
 drivers/md/bcache/io.c|  16 +++-
 drivers/md/bcache/journal.c   |   4 +-
 drivers/md/bcache/request.c   | 185 --
 drivers/md/bcache/super.c | 205 ++
 drivers/md/bcache/sysfs.c |  55 +++-
 drivers/md/bcache/util.h  |   6 --
 drivers/md/bcache/writeback.c |  92 ---
 drivers/md/bcache/writeback.h |   2 -
 11 files changed, 543 insertions(+), 79 deletions(-)

-- 
2.16.1



[PATCH v6 1/9] bcache: fix cached_dev->count usage for bch_cache_set_error()

2018-02-08 Thread Coly Li
When bcache metadata I/O fails, bcache will call bch_cache_set_error()
to retire the whole cache set. The expected behavior when retiring a
cache set is to unregister the cache set, unregister all backing devices
attached to this cache set, then remove the sysfs entries of the cache
set and all attached backing devices, and finally release the memory of
structs cache_set, cache, cached_dev and bcache_device.

In my testing, when a journal I/O failure is triggered by a disconnected
cache device, sometimes the cache set cannot be retired, and its sysfs
entry /sys/fs/bcache/ still exists and the backing device also
references it. This is not the expected behavior.

When metadata I/O fails, the call sequence to retire the whole cache set is,
bch_cache_set_error()
bch_cache_set_unregister()
bch_cache_set_stop()
__cache_set_unregister() <- called as callback by calling
closure_queue(&c->caching)
cache_set_flush()<- called as a callback when refcount
of cache_set->caching is 0
cache_set_free() <- called as a callback when refcount
of cache_set->cl is 0
bch_cache_set_release()  <- called as a callback when refcount
of cache_set->kobj is 0

I find that if the kernel thread bch_writeback_thread() quits its
while-loop when kthread_should_stop() is true and searched_full_index is
false, the closure callback cache_set_flush() set by continue_at() will
never be called. The result is that bcache fails to retire the whole
cache set.

cache_set_flush() will be called when the refcount of closure c->caching
is 0, and in function bcache_device_detach() the refcount of closure
c->caching is released to 0 by closure_put(). In the metadata error code
path, function bcache_device_detach() is called by
cached_dev_detach_finish(). This is a callback routine called when
cached_dev->count is 0. This refcount is decreased by cached_dev_put().

The above dependency indicates that cache_set_flush() will be called
when the refcount of cache_set->cl is 0, and the refcount of
cache_set->cl reaches 0 when the refcount of cached_dev->count is 0.

The reason why sometimes cached_dev->count is not 0 (when metadata I/O
fails and bch_cache_set_error() is called) is that in
bch_writeback_thread() the refcount of cached_dev is not decreased
properly.

In bch_writeback_thread(), cached_dev_put() is called only when
searched_full_index is true and cached_dev->writeback_keys is empty,
i.e. there is no dirty data on the cache. Most of the time this is
correct, but when bch_writeback_thread() quits the while-loop while the
cache is still dirty, the current code forgets to call cached_dev_put()
before the kernel thread exits. This is why sometimes cache_set_flush()
is not executed and the cache set fails to be retired.

The reason to call cached_dev_put() in bch_writeback_rate() is that when
the cache device changes from clean to dirty, cached_dev_get() is
called, to make sure that during writeback operations both the backing
and cache devices won't be released.

Adding the following code in bch_writeback_thread() does not work,

   static int bch_writeback_thread(void *arg)
   {
	...
+	if (atomic_read(&dc->has_dirty))
+		cached_dev_put();
+
	return 0;
   }
because the writeback kernel thread can be woken up and started via the
sysfs entry:
echo 1 > /sys/block/bcache/bcache/writeback_running
It is difficult to check whether the backing device is dirty without a
race and extra locking. So the above modification would introduce a
potential refcount underflow in some conditions.

The correct fix is to take the cached dev refcount when creating the
kernel thread, and put it before the kernel thread exits. Then bcache
does not need to take a cached dev refcount when the cache turns from
clean to dirty, or to put a cached dev refcount when the cache turns
from dirty to clean. The writeback kernel thread is always safe to
reference data structures from the cache set, cache and cached device
(because a refcount of the cached device is already taken for it), and
no matter whether the kernel thread is stopped by I/O errors or a system
reboot, cached_dev->count can always be used correctly.

The patch is simple, but understanding how it works is quite complicated.

Changelog:
v2: set dc->writeback_thread to NULL in this patch, as suggested by Hannes.
v1: initial version for review.

Signed-off-by: Coly Li 
Reviewed-by: Hannes Reinecke 
Cc: Michael Lyle 
Cc: Junhui Tang 
---
 drivers/md/bcache/super.c |  1 -
 drivers/md/bcache/writeback.c | 11 ---
 drivers/md/bcache/writeback.h |  2 --
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 312895788036..9b745c5c1980 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1054,7 +1054,6 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct 
cache_set *c,
if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {

[PATCH v6 3/9] bcache: stop dc->writeback_rate_update properly

2018-02-08 Thread Coly Li
struct delayed_work writeback_rate_update in struct cached_dev is a
delayed worker that calls update_writeback_rate() periodically (the
interval is defined by dc->writeback_rate_update_seconds).

When a metadata I/O error happens on the cache device, the bcache error
handling routine bch_cache_set_error() will call
bch_cache_set_unregister() to retire the whole cache set. On the
unregister code path, this delayed work is stopped by calling
cancel_delayed_work_sync(&dc->writeback_rate_update).

dc->writeback_rate_update is a delayed work unlike the others in bcache.
In its routine update_writeback_rate(), this delayed work re-arms
itself. That means that when cancel_delayed_work_sync() returns, this
delayed work can still execute several seconds later, as defined by
dc->writeback_rate_update_seconds.

The problem is that after cancel_delayed_work_sync() returns, the cache
set unregister code path continues and releases the memory of struct
cache_set. Then, when the delayed work is scheduled to run,
__update_writeback_rate() references the already released cache_set
memory and triggers a NULL pointer dereference fault.

This patch introduces two more bcache device flags,
- BCACHE_DEV_WB_RUNNING
  bit set:  bcache device is in writeback mode and running; it is OK for
dc->writeback_rate_update to re-arm itself.
  bit clear: bcache device is trying to stop dc->writeback_rate_update;
this delayed work should not re-arm itself, and should quit.
- BCACHE_DEV_RATE_DW_RUNNING
  bit set:  routine update_writeback_rate() is executing.
  bit clear: routine update_writeback_rate() has quit.

This patch also adds a function cancel_writeback_rate_update_dwork() to
wait for dc->writeback_rate_update to quit before canceling it by
calling cancel_delayed_work_sync(). In order to avoid a deadlock caused
by dc->writeback_rate_update quitting unexpectedly, after time_out
seconds this function gives up and calls cancel_delayed_work_sync()
anyway.

Here is how this patch uses the above flags to stop the self re-arming
delayed work properly.

update_writeback_rate() sets BCACHE_DEV_RATE_DW_RUNNING at its beginning
and clears it at its end. cancel_writeback_rate_update_dwork() first clears
flag BCACHE_DEV_WB_RUNNING.

Then, before calling cancel_delayed_work_sync(), it waits until flag
BCACHE_DEV_RATE_DW_RUNNING is clear. So by the time
cancel_delayed_work_sync() is called, dc->writeback_rate_update has either
already re-armed itself, or quit after seeing BCACHE_DEV_WB_RUNNING cleared.
In both cases the delayed work routine update_writeback_rate() won't be
executed after cancel_delayed_work_sync() returns.

Inside update_writeback_rate(), flag BCACHE_DEV_WB_RUNNING is checked before
calling schedule_delayed_work(). If this flag is cleared, it means someone
is about to stop the delayed work. Because flag BCACHE_DEV_RATE_DW_RUNNING
is already set and cancel_writeback_rate_update_dwork() has to wait for it
to be cleared before calling cancel_delayed_work_sync(), we don't need to
worry about a race condition here.

If update_writeback_rate() is scheduled to run after checking
BCACHE_DEV_RATE_DW_RUNNING and before calling cancel_delayed_work_sync()
in cancel_writeback_rate_update_dwork(), it is also safe, because at this
moment BCACHE_DEV_WB_RUNNING has already been cleared with a memory barrier.
As mentioned previously, update_writeback_rate() will see
BCACHE_DEV_WB_RUNNING is clear and quit immediately.

Because update_writeback_rate() has further dependencies on struct
cache_set memory, dc->writeback_rate_update is not a simple self re-arming
delayed work. After trying many different methods (e.g. holding dc->count,
or using locks), this is the only way I found that properly stops the
dc->writeback_rate_update delayed work.

Changelog:
v3: change values of BCACHE_DEV_WB_RUNNING and BCACHE_DEV_RATE_DW_RUNNING
to bit index, for test_bit().
v2: Try to fix the race issue which is pointed out by Junhui.
v1: The initial version for review

Signed-off-by: Coly Li 
Reviewed-by: Junhui Tang 
Cc: Michael Lyle 
Cc: Hannes Reinecke 
---
 drivers/md/bcache/bcache.h|  9 +
 drivers/md/bcache/super.c | 39 +++
 drivers/md/bcache/sysfs.c |  3 ++-
 drivers/md/bcache/writeback.c | 29 -
 4 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 12e5197f186c..b5ddb848cd31 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -258,10 +258,11 @@ struct bcache_device {
struct gendisk  *disk;
 
unsigned long   flags;
-#define BCACHE_DEV_CLOSING 0
-#define BCACHE_DEV_DETACHING   1
-#define BCACHE_DEV_UNLINK_DONE 2
-
+#define BCACHE_DEV_CLOSING 0
+#define BCACHE_DEV_DETACHING   1
+#define BCACHE_DEV_UNLINK_DONE 2
+#define BCACHE_DEV_WB_RUNNING  3
+#define BCACHE_DEV_RATE_DW_RUNNING 4
unsignednr_stripes;
unsignedstripe_siz

[PATCH v6 2/9] bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set

2018-02-08 Thread Coly Li
In patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
cached_dev_get() is called when creating dc->writeback_thread, and
cached_dev_put() is called when exiting dc->writeback_thread. This
modification works well unless people detach the bcache device manually by
'echo 1 > /sys/block/bcache/bcache/detach'
because this sysfs interface only calls bch_cached_dev_detach(), which wakes
up dc->writeback_thread but does not stop it. The reason is that before patch
"bcache: fix cached_dev->count usage for bch_cache_set_error()", inside
bch_writeback_thread(), cached_dev_put() was called when the cache was not
dirty after writeback, and cached_dev_get() was called in
cached_dev_make_request() when a new write request turned the cache from
clean to dirty. Since we no longer operate on dc->count in these locations,
the refcount dc->count cannot be dropped after the cache becomes clean, and
cached_dev_detach_finish() won't be called to detach the bcache device.

This patch fixes the issue by checking whether BCACHE_DEV_DETACHING is
set inside bch_writeback_thread(). If this bit is set and cache is clean
(no existing writeback_keys), break the while-loop, call cached_dev_put()
and quit the writeback thread.

Please note that if the cache is still dirty, the writeback thread should
continue to perform writeback even when BCACHE_DEV_DETACHING is set; this is
the original design of manual detach.

It is safe to do the following check without locking; let me explain why,
+   if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
+   (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {

If the kernel thread does not sleep and continues to run because the
conditions are not updated in time on the running CPU core, it just consumes
more CPU cycles and does no harm. This should-sleep-but-run case is safe
here. We only need to focus on the should-run-but-sleep case, in which the
writeback thread goes to sleep by mistake while it should continue to run.
1, First of all, no matter whether the writeback thread is hung or not,
   kthread_stop() from cached_dev_detach_finish() will wake it up and
   terminate it by making kthread_should_stop() return true. And in normal
   run time, the bit at index BCACHE_DEV_DETACHING is always cleared, so the
   condition
!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)
   is always true and can be treated as a constant.
2, If one of the following conditions is true, the writeback thread should
   go to sleep,
   "!atomic_read(&dc->has_dirty)" or "!dc->writeback_running)"
   Each of them independently controls whether the writeback thread should
   sleep; let's analyse them one by one.
2.1 condition "!atomic_read(&dc->has_dirty)"
   If dc->has_dirty is set from 0 to 1 on another CPU core, bcache will
   call bch_writeback_queue() immediately or call bch_writeback_add() which
   indirectly calls bch_writeback_queue() too. In bch_writeback_queue(),
   wake_up_process(dc->writeback_thread) is called. It sets the writeback
   thread's task state to TASK_RUNNING, with an implicit memory barrier,
   and then tries to wake up the writeback thread.
   In the writeback thread, its task state is set to TASK_INTERRUPTIBLE
   before doing the condition check. If the other CPU core sets the
   TASK_RUNNING state after the writeback thread sets TASK_INTERRUPTIBLE,
   the writeback thread will be scheduled to run very soon because its state
   is no longer TASK_INTERRUPTIBLE. If the other CPU core sets the
   TASK_RUNNING state before the writeback thread sets TASK_INTERRUPTIBLE,
   the implicit memory barrier of wake_up_process() will make sure the
   modification of dc->has_dirty on the other CPU core is updated and
   observed on the CPU core of the writeback thread. Therefore the condition
   check will correctly evaluate to false, and the writeback code continues
   without sleeping.
2.2 condition "!dc->writeback_running)"
   dc->writeback_running can be changed via a sysfs file; every time it is
   modified, a bch_writeback_queue() call always follows. So the change is
   always observed on the CPU core of the writeback thread. If
   dc->writeback_running is changed from 0 to 1 on another CPU core, this
   condition check will observe the modification and allow the writeback
   thread to continue to run without sleeping.
Now we can see that even without locking protection, the multiple-condition
check is safe here; no deadlock or process hang-up will happen.

I composed a separate patch because the patch "bcache: fix cached_dev->count
usage for bch_cache_set_error()" already has a "Reviewed-by:" from Hannes
Reinecke. Also this fix is not trivial, so it deserves a separate patch.

Signed-off-by: Coly Li 
Reviewed-by: Michael Lyle 
Cc: Hannes Reinecke 
Cc: Huijun Tang 
---
 drivers/md/bcache/writeback.c | 20 +---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index b280c134dd4d..4dbeaaa575bf 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -565,9 +5

[PATCH v6 4/9] bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags

2018-02-08 Thread Coly Li
When too many I/Os fail on the cache device, bch_cache_set_error() is called
in the error handling code path to retire the whole problematic cache set. If
new I/O requests continue to come in and take the refcount dc->count, the
cache set won't be retired immediately, which is a problem.

Furthermore, several kernel threads and self-armed kernel workers may still
be running after bch_cache_set_error() is called. It takes quite a while to
wait for them to stop, or they won't stop at all. They also prevent the
cache set from being retired.

The solution in this patch is to add a per-cache-set flag to disable I/O
requests on this cache set and all attached backing devices. New incoming I/O
requests can then be rejected in *_make_request() before taking a refcount,
and kernel threads and self-armed kernel workers can stop very quickly when
the flag bit CACHE_SET_IO_DISABLE is set.

Because bcache also does internal I/O for writeback, garbage collection,
bucket allocation and journaling, this kind of I/O should be disabled after
bch_cache_set_error() is called. So closure_bio_submit() is modified to
check whether CACHE_SET_IO_DISABLE is set in cache_set->flags. If set,
closure_bio_submit() sets bio->bi_status to BLK_STS_IOERR and returns
without calling generic_make_request().

A sysfs interface is also added to set or clear the CACHE_SET_IO_DISABLE bit
in cache_set->flags, to disable or enable cache set I/O for debugging. It is
helpful for triggering more corner-case issues of a failed cache device.

Changelog
v3, change CACHE_SET_IO_DISABLE from 4 to 3, since it is bit index.
remove "bcache: " prefix when printing out kernel message.
v2, more changes by previous review,
- Use CACHE_SET_IO_DISABLE of cache_set->flags, suggested by Junhui.
- Check CACHE_SET_IO_DISABLE in bch_btree_gc() to stop a while-loop, this
  is reported by and inspired from the original patch of Pavel Vazharov.
v1, initial version.

Signed-off-by: Coly Li 
Reviewed-by: Hannes Reinecke 
Cc: Junhui Tang 
Cc: Michael Lyle 
Cc: Pavel Vazharov 
---
 drivers/md/bcache/alloc.c |  3 ++-
 drivers/md/bcache/bcache.h| 18 ++
 drivers/md/bcache/btree.c | 10 +++---
 drivers/md/bcache/io.c|  2 +-
 drivers/md/bcache/journal.c   |  4 ++--
 drivers/md/bcache/request.c   | 26 +++---
 drivers/md/bcache/super.c |  6 +-
 drivers/md/bcache/sysfs.c | 20 
 drivers/md/bcache/util.h  |  6 --
 drivers/md/bcache/writeback.c | 35 +++
 10 files changed, 101 insertions(+), 29 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 458e1d38577d..004cc3cc6123 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -287,7 +287,8 @@ do {
\
break;  \
\
mutex_unlock(&(ca)->set->bucket_lock);  \
-   if (kthread_should_stop()) {\
+   if (kthread_should_stop() ||\
+   test_bit(CACHE_SET_IO_DISABLE, &ca->set->flags)) {  \
set_current_state(TASK_RUNNING);\
return 0;   \
}   \
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index b5ddb848cd31..56179fff1e59 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -475,10 +475,15 @@ struct gc_stat {
  *
  * CACHE_SET_RUNNING means all cache devices have been registered and journal
  * replay is complete.
+ *
+ * CACHE_SET_IO_DISABLE is set when bcache is stopping the whold cache set, all
+ * external and internal I/O should be denied when this flag is set.
+ *
  */
 #define CACHE_SET_UNREGISTERING0
 #defineCACHE_SET_STOPPING  1
 #defineCACHE_SET_RUNNING   2
+#define CACHE_SET_IO_DISABLE   3
 
 struct cache_set {
struct closure  cl;
@@ -868,6 +873,19 @@ static inline void wake_up_allocators(struct cache_set *c)
wake_up_process(ca->alloc_thread);
 }
 
+static inline void closure_bio_submit(struct cache_set *c,
+ struct bio *bio,
+ struct closure *cl)
+{
+   closure_get(cl);
+   if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags))) {
+   bio->bi_status = BLK_STS_IOERR;
+   bio_endio(bio);
+   return;
+   }
+   generic_make_request(bio);
+}
+
 /* Forward declarations */
 
 void bch_count_io_errors(struct cache *, blk_status_t, int, const char *);
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index fad9fe8817eb..8ca50f387a1d 10064

[PATCH v6 7/9] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O

2018-02-08 Thread Coly Li
In order to catch I/O errors of the backing device, a separate bi_end_io
callback is required. Then a per-backing-device counter can record the number
of I/O errors and retire the backing device if the counter reaches a
per-backing-device I/O error limit.

This patch adds backing_request_endio() to the bcache backing device I/O
code path; this is a preparation for more complicated backing device failure
handling. So far there is no real code logic change; I make this change a
separate patch to make sure it is stable and reliable for further work.

Changelog:
v2: Fix code comments typo, remove a redundant bch_writeback_add() line
added in v4 patch set.
v1: indeed this is new added in this patch set.

Signed-off-by: Coly Li 
Reviewed-by: Hannes Reinecke 
Cc: Junhui Tang 
Cc: Michael Lyle 
---
 drivers/md/bcache/request.c   | 93 +++
 drivers/md/bcache/super.c |  1 +
 drivers/md/bcache/writeback.c |  1 +
 3 files changed, 79 insertions(+), 16 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index e09c5ae745be..9c6dda3b0068 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -139,6 +139,7 @@ static void bch_data_invalidate(struct closure *cl)
}
 
op->insert_data_done = true;
+   /* get in bch_data_insert() */
bio_put(bio);
 out:
continue_at(cl, bch_data_insert_keys, op->wq);
@@ -630,6 +631,38 @@ static void request_endio(struct bio *bio)
closure_put(cl);
 }
 
+static void backing_request_endio(struct bio *bio)
+{
+   struct closure *cl = bio->bi_private;
+
+   if (bio->bi_status) {
+   struct search *s = container_of(cl, struct search, cl);
+   /*
+* If a bio has REQ_PREFLUSH for writeback mode, it is
+* speically assembled in cached_dev_write() for a non-zero
+* write request which has REQ_PREFLUSH. we don't set
+* s->iop.status by this failure, the status will be decided
+* by result of bch_data_insert() operation.
+*/
+   if (unlikely(s->iop.writeback &&
+bio->bi_opf & REQ_PREFLUSH)) {
+   char buf[BDEVNAME_SIZE];
+
+   bio_devname(bio, buf);
+   pr_err("Can't flush %s: returned bi_status %i",
+   buf, bio->bi_status);
+   } else {
+   /* set to orig_bio->bi_status in bio_complete() */
+   s->iop.status = bio->bi_status;
+   }
+   s->recoverable = false;
+   /* should count I/O error for backing device here */
+   }
+
+   bio_put(bio);
+   closure_put(cl);
+}
+
 static void bio_complete(struct search *s)
 {
if (s->orig_bio) {
@@ -644,13 +677,21 @@ static void bio_complete(struct search *s)
}
 }
 
-static void do_bio_hook(struct search *s, struct bio *orig_bio)
+static void do_bio_hook(struct search *s,
+   struct bio *orig_bio,
+   bio_end_io_t *end_io_fn)
 {
struct bio *bio = &s->bio.bio;
 
bio_init(bio, NULL, 0);
__bio_clone_fast(bio, orig_bio);
-   bio->bi_end_io  = request_endio;
+   /*
+* bi_end_io can be set separately somewhere else, e.g. the
+* variants in,
+* - cache_bio->bi_end_io from cached_dev_cache_miss()
+* - n->bi_end_io from cache_lookup_fn()
+*/
+   bio->bi_end_io  = end_io_fn;
bio->bi_private = &s->cl;
 
bio_cnt_set(bio, 3);
@@ -676,7 +717,7 @@ static inline struct search *search_alloc(struct bio *bio,
s = mempool_alloc(d->c->search, GFP_NOIO);
 
closure_init(&s->cl, NULL);
-   do_bio_hook(s, bio);
+   do_bio_hook(s, bio, request_endio);
 
s->orig_bio = bio;
s->cache_miss   = NULL;
@@ -743,10 +784,11 @@ static void cached_dev_read_error(struct closure *cl)
trace_bcache_read_retry(s->orig_bio);
 
s->iop.status = 0;
-   do_bio_hook(s, s->orig_bio);
+   do_bio_hook(s, s->orig_bio, backing_request_endio);
 
/* XXX: invalidate cache */
 
+   /* I/O request sent to backing device */
closure_bio_submit(s->iop.c, bio, cl);
}
 
@@ -859,7 +901,7 @@ static int cached_dev_cache_miss(struct btree *b, struct 
search *s,
bio_copy_dev(cache_bio, miss);
cache_bio->bi_iter.bi_size  = s->insert_bio_sectors << 9;
 
-   cache_bio->bi_end_io= request_endio;
+   cache_bio->bi_end_io= backing_request_endio;
cache_bio->bi_private   = &s->cl;
 
bch_bio_map(cache_bio, NULL);
@@ -872,14 +914,16 @@ static int cached_dev_cache_miss(struct btree *b, struct 
search *s,
s->cache_miss   = miss;
s->iop.bio  = cache_bio;
bio

[PATCH v6 6/9] bcache: fix inaccurate io state for detached bcache devices

2018-02-08 Thread Coly Li
From: Tang Junhui 

When we run I/O on a detached device and run iostat to show the I/O status,
it will normally show something like below (some fields omitted):
Device: ... avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd... 15.89 0.531.820.202.23   1.81  52.30
bcache0... 15.89   115.420.000.000.00   2.40  69.60
but after the I/O has stopped, there are still very large avgqu-sz and
%util values, as below:
Device: ... avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
bcache0   ...  0   5326.320.000.000.00   0.00 100.10

The reason for this issue is that only generic_start_io_acct() is called,
and generic_end_io_acct() is never called, for a detached device in
cached_dev_make_request(). See the code:
//start generic_start_io_acct()
generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
if (cached_dev_get(dc)) {
//will callback generic_end_io_acct()
}
else {
//will not call generic_end_io_acct()
}

This patch calls generic_end_io_acct() at the end of I/O for detached
devices, so the I/O state is shown correctly.

(Modified to use GFP_NOIO in kzalloc() by Coly Li)

Changelog:
v2: fix typo.
v1: the initial version.

Signed-off-by: Tang Junhui 
Reviewed-by: Coly Li 
Reviewed-by: Hannes Reinecke 
Reviewed-by: Michael Lyle 
---
 drivers/md/bcache/request.c | 58 +++--
 1 file changed, 51 insertions(+), 7 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 02296bda6384..e09c5ae745be 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -986,6 +986,55 @@ static void cached_dev_nodata(struct closure *cl)
continue_at(cl, cached_dev_bio_complete, NULL);
 }
 
+struct detached_dev_io_private {
+   struct bcache_device*d;
+   unsigned long   start_time;
+   bio_end_io_t*bi_end_io;
+   void*bi_private;
+};
+
+static void detached_dev_end_io(struct bio *bio)
+{
+   struct detached_dev_io_private *ddip;
+
+   ddip = bio->bi_private;
+   bio->bi_end_io = ddip->bi_end_io;
+   bio->bi_private = ddip->bi_private;
+
+   generic_end_io_acct(ddip->d->disk->queue,
+   bio_data_dir(bio),
+   &ddip->d->disk->part0, ddip->start_time);
+
+   kfree(ddip);
+
+   bio->bi_end_io(bio);
+}
+
+static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
+{
+   struct detached_dev_io_private *ddip;
+   struct cached_dev *dc = container_of(d, struct cached_dev, disk);
+
+   /*
+* no need to call closure_get(&dc->disk.cl),
+* because upper layer had already opened bcache device,
+* which would call closure_get(&dc->disk.cl)
+*/
+   ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
+   ddip->d = d;
+   ddip->start_time = jiffies;
+   ddip->bi_end_io = bio->bi_end_io;
+   ddip->bi_private = bio->bi_private;
+   bio->bi_end_io = detached_dev_end_io;
+   bio->bi_private = ddip;
+
+   if ((bio_op(bio) == REQ_OP_DISCARD) &&
+   !blk_queue_discard(bdev_get_queue(dc->bdev)))
+   bio->bi_end_io(bio);
+   else
+   generic_make_request(bio);
+}
+
 /* Cached devices - read & write stuff */
 
 static blk_qc_t cached_dev_make_request(struct request_queue *q,
@@ -1028,13 +1077,8 @@ static blk_qc_t cached_dev_make_request(struct 
request_queue *q,
else
cached_dev_read(dc, s);
}
-   } else {
-   if ((bio_op(bio) == REQ_OP_DISCARD) &&
-   !blk_queue_discard(bdev_get_queue(dc->bdev)))
-   bio_endio(bio);
-   else
-   generic_make_request(bio);
-   }
+   } else
+   detached_dev_do_request(d, bio);
 
return BLK_QC_T_NONE;
 }
-- 
2.16.1



[PATCH v6 5/9] bcache: add stop_when_cache_set_failed option to backing device

2018-02-08 Thread Coly Li
When there are too many I/O errors on the cache device, the current bcache
code will retire the whole cache set and detach all bcache devices. But the
detached bcache devices are not stopped, which is problematic when bcache is
in writeback mode.

If the retired cache set has dirty data for the backing devices, continuing
to write to the bcache device will write to the backing device directly. If
the LBA of a write request has a dirty version cached on the cache device,
then the next time the cache device is re-registered and the backing device
re-attached to it, the stale dirty data on the cache device will be written
to the backing device and overwrite the latest directly written data. This
situation causes silent data corruption.

But we cannot simply stop all attached bcache devices when the cache set is
broken or disconnected. For example, consider using bcache to accelerate an
email service. In such a workload, if the cache device is broken but no
dirty data is lost, keeping the bcache device alive and permitting the email
service to continue accessing user data might be a better response to the
cache device failure.

Nix  points out the issue and provides the above example
to explain why it might be necessary to not stop the bcache device for a
broken cache device. Pavel Goran  provides a brilliant
suggestion to provide "always" and "auto" options for the per-cached-device
sysfs file stop_when_cache_set_failed. If the cache set is retiring and the
backing device has no dirty data in the cache, it should be safe to keep the
bcache device alive. In this case, if stop_when_cache_set_failed is set to
"auto", the device failure handling code will not stop this bcache device,
permitting applications to access the backing device through an unattached
bcache device.

Changelog:
v3: fix typos pointed out by Nix.
v2: change option values of stop_when_cache_set_failed from 1/0 to
"auto"/"always".
v1: initial version, stop_when_cache_set_failed can be 0 (not stop) or 1
(always stop).

Signed-off-by: Coly Li 
Reviewed-by: Michael Lyle 
Cc: Nix 
Cc: Pavel Goran 
Cc: Junhui Tang 
Cc: Hannes Reinecke 
---
 drivers/md/bcache/bcache.h |  9 +
 drivers/md/bcache/super.c  | 82 --
 drivers/md/bcache/sysfs.c  | 17 ++
 3 files changed, 98 insertions(+), 10 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 56179fff1e59..7c2b836732e9 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -287,6 +287,12 @@ struct io {
sector_tlast;
 };
 
+enum stop_on_failure {
+   BCH_CACHED_DEV_STOP_AUTO = 0,
+   BCH_CACHED_DEV_STOP_ALWAYS,
+   BCH_CACHED_DEV_STOP_MODE_MAX,
+};
+
 struct cached_dev {
struct list_headlist;
struct bcache_devicedisk;
@@ -379,6 +385,8 @@ struct cached_dev {
unsignedwriteback_rate_i_term_inverse;
unsignedwriteback_rate_p_term_inverse;
unsignedwriteback_rate_minimum;
+
+   enum stop_on_failurestop_when_cache_set_failed;
 };
 
 enum alloc_reserve {
@@ -924,6 +932,7 @@ void bch_write_bdev_super(struct cached_dev *, struct 
closure *);
 
 extern struct workqueue_struct *bcache_wq;
 extern const char * const bch_cache_modes[];
+extern const char * const bch_stop_on_failure_modes[];
 extern struct mutex bch_register_lock;
 extern struct list_head bch_cache_sets;
 
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a1abeebc7643..52d5012948c9 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -47,6 +47,14 @@ const char * const bch_cache_modes[] = {
NULL
 };
 
+/* Default is -1; we skip past it for stop_when_cache_set_failed */
+const char * const bch_stop_on_failure_modes[] = {
+   "default",
+   "auto",
+   "always",
+   NULL
+};
+
 static struct kobject *bcache_kobj;
 struct mutex bch_register_lock;
 LIST_HEAD(bch_cache_sets);
@@ -1189,6 +1197,9 @@ static int cached_dev_init(struct cached_dev *dc, 
unsigned block_size)
max(dc->disk.disk->queue->backing_dev_info->ra_pages,
q->backing_dev_info->ra_pages);
 
+   /* default to auto */
+   dc->stop_when_cache_set_failed = BCH_CACHED_DEV_STOP_AUTO;
+
bch_cached_dev_request_init(dc);
bch_cached_dev_writeback_init(dc);
return 0;
@@ -1465,25 +1476,76 @@ static void cache_set_flush(struct closure *cl)
closure_return(cl);
 }
 
+/*
+ * This function is only called when CACHE_SET_IO_DISABLE is set, which means
+ * cache set is unregistering due to too many I/O errors. In this condition,
+ * the bcache device might be stopped, it depends on stop_when_cache_set_failed
+ * value and whether the broken cache has dirty data:
+ *
+ * dc->stop_when_cache_set_faileddc->has_dirty   stop bcache device
+ *  BCH_CACHED_STOP_AUTO   0   NO
+ *  BCH_CACHED_STOP_AUTO   1   YES
+ *  BCH_CACHED_DEV_STOP_ALWAYS 0 

[PATCH v6 9/9] bcache: stop bcache device when backing device is offline

2018-02-08 Thread Coly Li
Currently bcache does not handle backing device failure: if the backing
device is offline and disconnected from the system, its bcache device can
still be accessed. If the bcache device is in writeback mode, I/O requests
can even succeed if they hit the cache device. That is to say, when and how
bcache handles an offline backing device is undefined.

This patch tries to handle backing device offline in a rather simple way,
- Add a cached_dev->status_update_thread kernel thread to update the backing
  device status every second.
- Add cached_dev->offline_seconds to record how many seconds the backing
  device has been observed to be offline. If the backing device is offline
  for BACKING_DEV_OFFLINE_TIMEOUT (30) seconds, set dc->io_disable to 1 and
  call bcache_device_stop() to stop the bcache device linked to the offline
  backing device.

Now if a backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT seconds,
its bcache device will be removed; user-space applications writing to it
will then get errors immediately and can handle the device failure in time.

This patch is quite simple and does not handle more complicated situations.
Once the bcache device is stopped, users need to recover the backing
device, then register and attach it manually.

Changelog:
v2: remove "bcache: " prefix when calling pr_warn().
v1: initial version.

Signed-off-by: Coly Li 
Reviewed-by: Hannes Reinecke 
Cc: Michael Lyle 
Cc: Junhui Tang 
---
 drivers/md/bcache/bcache.h |  2 ++
 drivers/md/bcache/super.c  | 55 ++
 2 files changed, 57 insertions(+)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index dbc4fb48c754..e465a661f32e 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -344,6 +344,7 @@ struct cached_dev {
 
struct keybuf   writeback_keys;
 
+   struct task_struct  *status_update_thread;
/*
 * Order the write-half of writeback operations strongly in dispatch
 * order.  (Maintain LBA order; don't allow reads completing out of
@@ -391,6 +392,7 @@ struct cached_dev {
 #define DEFAULT_CACHED_DEV_ERROR_LIMIT 64
atomic_tio_errors;
unsignederror_limit;
+   unsignedoffline_seconds;
 };
 
 enum alloc_reserve {
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 1c5b7074bd6c..ea25cef924ff 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -654,6 +654,11 @@ static int ioctl_dev(struct block_device *b, fmode_t mode,
 unsigned int cmd, unsigned long arg)
 {
struct bcache_device *d = b->bd_disk->private_data;
+   struct cached_dev *dc = container_of(d, struct cached_dev, disk);
+
+   if (dc->io_disable)
+   return -EIO;
+
return d->ioctl(d, mode, cmd, arg);
 }
 
@@ -864,6 +869,45 @@ static void calc_cached_dev_sectors(struct cache_set *c)
c->cached_dev_sectors = sectors;
 }
 
+#define BACKING_DEV_OFFLINE_TIMEOUT 5
+static int cached_dev_status_update(void *arg)
+{
+   struct cached_dev *dc = arg;
+   struct request_queue *q;
+   char buf[BDEVNAME_SIZE];
+
+   /*
+* If this delayed worker is stopping outside, directly quit here.
+* dc->io_disable might be set via sysfs interface, so check it
+* here too.
+*/
+   while (!kthread_should_stop() && !dc->io_disable) {
+   q = bdev_get_queue(dc->bdev);
+   if (blk_queue_dying(q))
+   dc->offline_seconds++;
+   else
+   dc->offline_seconds = 0;
+
+   if (dc->offline_seconds >= BACKING_DEV_OFFLINE_TIMEOUT) {
+   pr_err("%s: device offline for %d seconds",
+   bdevname(dc->bdev, buf),
+   BACKING_DEV_OFFLINE_TIMEOUT);
+   pr_err("%s: disable I/O request due to backing "
+   "device offline", dc->disk.name);
+   dc->io_disable = true;
+   /* let others know earlier that io_disable is true */
+   smp_mb();
+   bcache_device_stop(&dc->disk);
+   break;
+   }
+
+   schedule_timeout_interruptible(HZ);
+   }
+
+   dc->status_update_thread = NULL;
+   return 0;
+}
+
 void bch_cached_dev_run(struct cached_dev *dc)
 {
struct bcache_device *d = &dc->disk;
@@ -906,6 +950,15 @@ void bch_cached_dev_run(struct cached_dev *dc)
if (sysfs_create_link(&d->kobj, &disk_to_dev(d->disk)->kobj, "dev") ||
sysfs_create_link(&disk_to_dev(d->disk)->kobj, &d->kobj, "bcache"))
pr_debug("error creating sysfs link");
+
+   dc->status_update_thread = kthread_run(cached_dev_status_update,
+  dc,
+ "bca

[PATCH v6 8/9] bcache: add io_disable to struct cached_dev

2018-02-08 Thread Coly Li
If a bcache device is configured in writeback mode, the current code does
not handle write I/O errors on the backing device properly.

In writeback mode, a write request is written to the cache device and later
flushed to the backing device. If the I/O fails when writing from the cache
device to the backing device, the bcache code just ignores the error and the
upper layer is NOT notified that the backing device is broken.

This patch tries to handle backing device failure in the same way the cache
device failure is handled,
- Add an error counter 'io_errors' and an error limit 'error_limit' to
  struct cached_dev. Add another flag, io_disable, to struct cached_dev to
  disable I/O on the problematic backing device.
- When an I/O error happens on the backing device, increase the io_errors
  counter. If io_errors reaches error_limit, set cached_dev->io_disable to
  true and stop the bcache device.

The result is that if the backing device is broken or disconnected and I/O
errors reach its error limit, the backing device will be disabled and the
associated bcache device will be removed from the system.

Changelog:
v2: remove "bcache: " prefix in pr_error(), and use correct name string to
print out bcache device gendisk name.
v1: indeed this is new added in v2 patch set.

Signed-off-by: Coly Li 
Reviewed-by: Hannes Reinecke 
Cc: Michael Lyle 
Cc: Junhui Tang 
---
 drivers/md/bcache/bcache.h  |  6 ++
 drivers/md/bcache/io.c  | 14 ++
 drivers/md/bcache/request.c | 14 --
 drivers/md/bcache/super.c   | 21 +
 drivers/md/bcache/sysfs.c   | 15 ++-
 5 files changed, 67 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 7c2b836732e9..dbc4fb48c754 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -366,6 +366,7 @@ struct cached_dev {
unsignedsequential_cutoff;
unsignedreadahead;
 
+   unsignedio_disable:1;
unsignedverify:1;
unsignedbypass_torture_test:1;
 
@@ -387,6 +388,9 @@ struct cached_dev {
unsignedwriteback_rate_minimum;
 
enum stop_on_failurestop_when_cache_set_failed;
+#define DEFAULT_CACHED_DEV_ERROR_LIMIT 64
+   atomic_tio_errors;
+   unsignederror_limit;
 };
 
 enum alloc_reserve {
@@ -896,6 +900,7 @@ static inline void closure_bio_submit(struct cache_set *c,
 
 /* Forward declarations */
 
+void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio);
 void bch_count_io_errors(struct cache *, blk_status_t, int, const char *);
 void bch_bbio_count_io_errors(struct cache_set *, struct bio *,
  blk_status_t, const char *);
@@ -923,6 +928,7 @@ int bch_bucket_alloc_set(struct cache_set *, unsigned,
 struct bkey *, int, bool);
 bool bch_alloc_sectors(struct cache_set *, struct bkey *, unsigned,
   unsigned, unsigned, bool);
+bool bch_cached_dev_error(struct cached_dev *dc);
 
 __printf(2, 3)
 bool bch_cache_set_error(struct cache_set *, const char *, ...);
diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
index 8013ecbcdbda..7fac97ae036e 100644
--- a/drivers/md/bcache/io.c
+++ b/drivers/md/bcache/io.c
@@ -50,6 +50,20 @@ void bch_submit_bbio(struct bio *bio, struct cache_set *c,
 }
 
 /* IO errors */
+void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio)
+{
+   char buf[BDEVNAME_SIZE];
+   unsigned errors;
+
+   WARN_ONCE(!dc, "NULL pointer of struct cached_dev");
+
+   errors = atomic_add_return(1, &dc->io_errors);
+   if (errors < dc->error_limit)
+   pr_err("%s: IO error on backing device, unrecoverable",
+   bio_devname(bio, buf));
+   else
+   bch_cached_dev_error(dc);
+}
 
 void bch_count_io_errors(struct cache *ca,
 blk_status_t error,
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 9c6dda3b0068..03245e6980a6 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -637,6 +637,8 @@ static void backing_request_endio(struct bio *bio)
 
if (bio->bi_status) {
struct search *s = container_of(cl, struct search, cl);
+   struct cached_dev *dc = container_of(s->d,
+struct cached_dev, disk);
/*
 * If a bio has REQ_PREFLUSH for writeback mode, it is
 * specially assembled in cached_dev_write() for a non-zero
@@ -657,6 +659,7 @@ static void backing_request_endio(struct bio *bio)
}
s->recoverable = false;
/* should count I/O error for backing device here */
+   bch_count_backing_io_errors(dc, bio);
}
 
bio_put(bio);
@@ -1065,8 +1068,14 @@ static void detatched_dev_end_io(struct bio *b

Re: [PATCH] blk: optimization for classic polling

2018-02-08 Thread Sagi Grimberg



I think it'd be simpler to have blk_poll set it back to running if
need_resched is true rather than repeat this pattern across all the
callers:

---
diff --git a/block/blk-mq.c b/block/blk-mq.c
index df93102e2149..40285fe1c8ad 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3164,6 +3164,7 @@ static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq)
cpu_relax();
}
  
+	set_current_state(TASK_RUNNING);

return false;
  }
  
--


Nice!


Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread Bart Van Assche
On Thu, 2018-02-08 at 07:39 -0800, t...@kernel.org wrote:
> On Thu, Feb 08, 2018 at 01:09:57AM +, Bart Van Assche wrote:
> > On Wed, 2018-02-07 at 23:48 +, Bart Van Assche wrote:
> > > With this patch applied I see requests for which it seems like the 
> > > timeout handler
> > > did not get invoked: [ ... ]
> > 
> > I just noticed the following in the system log, which is probably the 
> > reason why some
> > requests got stuck:
> > 
> > Feb  7 15:16:26 ubuntu-vm kernel: BUG: unable to handle kernel NULL pointer 
> > dereference at   (null)
> > Feb  7 15:16:26 ubuntu-vm kernel: IP: scsi_times_out+0x17/0x2c0 [scsi_mod]
> > Feb  7 15:16:26 ubuntu-vm kernel: PGD 0 P4D 0  
> > Feb  7 15:16:26 ubuntu-vm kernel: Oops:  [#1] PREEMPT SMP
> > Feb  7 15:16:26 ubuntu-vm kernel: Modules linked in: dm_service_time ib_srp 
> > libcrc32c crc32c_generic scsi_transport_srp target_core_pscsi 
> > target_core_file ib_srpt target_core_iblock
> > target_core_mod
> > rdma_rxe ip6_udp_tunnel udp_tunnel ib_umad ib_uverbs scsi_debug brd 
> > mq_deadline kyber_iosched deadline_iosched cfq_iosched bfq crct10dif_pclmul 
> > crc32_pclmul af_packet ghash_clmulni_intel pcbc
> > aesni_intel aes_x86_64 crypto_simd cryptd glue_helper serio_raw 
> > virtio_console virtio_balloon sg button i2c_piix4 dm_multipath dm_mod dax 
> > scsi_dh_rdac scsi_dh_emc scsi_dh_alua ib_iser rdma_cm
> > iw_cm
> > ib_cm ib_core configfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi 
> > ip_tables x_tables ipv6 autofs4 ext4 crc16 mbcache jbd2 sd_mod virtio_net 
> > virtio_blk virtio_scsi sr_mod cdrom
> > ata_generic
> > pata_acpi psmouse crc32c_intel i2c_core atkbd
> > Feb  7 15:16:26 ubuntu-vm kernel: uhci_hcd ehci_hcd intel_agp ata_piix 
> > intel_gtt agpgart libata virtio_pci usbcore virtio_ring virtio scsi_mod 
> > usb_common unix
> > Feb  7 15:16:26 ubuntu-vm kernel: CPU: 0 PID: 146 Comm: kworker/0:1H Not 
> > tainted 4.15.0-dbg+ #1
> > Feb  7 15:16:26 ubuntu-vm kernel: Hardware name: QEMU Standard PC (i440FX + 
> > PIIX, 1996), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
> > Feb  7 15:16:26 ubuntu-vm kernel: Workqueue: kblockd blk_mq_timeout_work
> > Feb  7 15:16:26 ubuntu-vm kernel: RIP: 0010:scsi_times_out+0x17/0x2c0 
> > [scsi_mod]
> > Feb  7 15:16:26 ubuntu-vm kernel: RSP: 0018:98f0c02abd58 EFLAGS: 
> > 00010246
> > Feb  7 15:16:26 ubuntu-vm kernel: RAX:  RBX: 
> > 965de2b3a2c0 RCX: 
> > Feb  7 15:16:26 ubuntu-vm kernel: RDX: c0094740 RSI: 
> >  RDI: 965de2b3a2c0
> > Feb  7 15:16:26 ubuntu-vm kernel: RBP: 965de2b3a438 R08: 
> > fffc R09: 0007
> > Feb  7 15:16:26 ubuntu-vm kernel: R10:  R11: 
> >  R12: 0002
> > Feb  7 15:16:26 ubuntu-vm kernel: R13:  R14: 
> > 965de2a44218 R15: 965de2a48728
> > Feb  7 15:16:26 ubuntu-vm kernel: FS:  () 
> > GS:965dffc0() knlGS:
> > Feb  7 15:16:26 ubuntu-vm kernel: CS:  0010 DS:  ES:  CR0: 
> > 80050033
> > Feb  7 15:16:26 ubuntu-vm kernel: CR2:  CR3: 
> > 3ce0f003 CR4: 003606f0
> > Feb  7 15:16:26 ubuntu-vm kernel: DR0:  DR1: 
> >  DR2: 
> > Feb  7 15:16:26 ubuntu-vm kernel: DR3:  DR6: 
> > fffe0ff0 DR7: 0400
> > Feb  7 15:16:26 ubuntu-vm kernel: Call Trace:
> > Feb  7 15:16:26 ubuntu-vm kernel: blk_mq_terminate_expired+0x42/0x80
> > Feb  7 15:16:26 ubuntu-vm kernel: bt_iter+0x3d/0x50
> > Feb  7 15:16:26 ubuntu-vm kernel: blk_mq_queue_tag_busy_iter+0xe9/0x200
> > Feb  7 15:16:26 ubuntu-vm kernel: blk_mq_timeout_work+0x181/0x2e0
> > Feb  7 15:16:26 ubuntu-vm kernel: process_one_work+0x21c/0x6d0
> > Feb  7 15:16:26 ubuntu-vm kernel: worker_thread+0x35/0x380
> > Feb  7 15:16:26 ubuntu-vm kernel: kthread+0x117/0x130
> > Feb  7 15:16:26 ubuntu-vm kernel: ret_from_fork+0x24/0x30
> > Feb  7 15:16:26 ubuntu-vm kernel: Code: ff ff 0f ff e9 fd fe ff ff 90 66 2e 
> > 0f 1f 84 00 00 00 00 00 41 55 41 54 55 48 8d af 78 01 00 00 53 48 8b 87 b0 
> > 01 00 00 48 89 fb <4c> 8b 28 0f 1f 44 00 00
> > 65
> > 8b 05 6a b5 f8 3f 83 f8 0f 0f 87 ed  
> > Feb  7 15:16:26 ubuntu-vm kernel: RIP: scsi_times_out+0x17/0x2c0 [scsi_mod] 
> > RSP: 98f0c02abd58
> > Feb  7 15:16:26 ubuntu-vm kernel: CR2: 
> > Feb  7 15:16:26 ubuntu-vm kernel: ---[ end trace ce6c20d95bab450e ]---
> 
> So, that's dereferencing %rax which is NULL.  That gotta be the ->host
> deref in the following.
> 
>   enum blk_eh_timer_return scsi_times_out(struct request *req)
>   {
> struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(req);
> enum blk_eh_timer_return rtn = BLK_EH_NOT_HANDLED;
> struct Scsi_Host *host = scmd->device->host;
>   ...
> 
> That sounds more like a scsi hotplug bug than an issue in the timeout
> code unless we messed up @req pointer to begin with.

I don't

Re: [PATCH 0/5] blk-mq/scsi-mq: support global tags & introduce force_blk_mq

2018-02-08 Thread Ming Lei
On Thu, Feb 08, 2018 at 08:00:29AM +0100, Hannes Reinecke wrote:
> On 02/07/2018 03:14 PM, Kashyap Desai wrote:
> >> -Original Message-
> >> From: Ming Lei [mailto:ming@redhat.com]
> >> Sent: Wednesday, February 7, 2018 5:53 PM
> >> To: Hannes Reinecke
> >> Cc: Kashyap Desai; Jens Axboe; linux-block@vger.kernel.org; Christoph
> >> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun Easi; Omar
> > Sandoval;
> >> Martin K . Petersen; James Bottomley; Christoph Hellwig; Don Brace;
> > Peter
> >> Rivera; Paolo Bonzini; Laurence Oberman
> >> Subject: Re: [PATCH 0/5] blk-mq/scsi-mq: support global tags & introduce
> >> force_blk_mq
> >>
> >> On Wed, Feb 07, 2018 at 07:50:21AM +0100, Hannes Reinecke wrote:
> >>> Hi all,
> >>>
> >>> [ .. ]
> >
> > Could you share us your patch for enabling global_tags/MQ on
>  megaraid_sas
> > so that I can reproduce your test?
> >
> >> See below perf top data. "bt_iter" is consuming 4 times more CPU.
> >
> > Could you share us what the IOPS/CPU utilization effect is after
>  applying the
> > patch V2? And your test script?
>  Regarding CPU utilization, I need to test one more time. Currently
>  system is in used.
> 
>  I run below fio test on total 24 SSDs expander attached.
> 
>  numactl -N 1 fio jbod.fio --rw=randread --iodepth=64 --bs=4k
>  --ioengine=libaio --rw=randread
> 
>  Performance dropped from 1.6 M IOPs to 770K IOPs.
> 
> >>> This is basically what we've seen with earlier iterations.
> >>
> >> Hi Hannes,
> >>
> >> As I mentioned in another mail[1], Kashyap's patch has a big issue,
> > which
> >> causes only reply queue 0 used.
> >>
> >> [1] https://marc.info/?l=linux-scsi&m=151793204014631&w=2
> >>
> >> So could you guys run your performance test again after fixing the
> > patch?
> > 
> > Ming -
> > 
> > I tried after change you requested.  Performance drop is still unresolved.
> > From 1.6 M IOPS to 770K IOPS.
> > 
> > See below data. All 24 reply queue is in used correctly.
> > 
> > IRQs / 1 second(s)
> > IRQ#  TOTAL  NODE0   NODE1  NAME
> >  360  16422  0   16422  IR-PCI-MSI 70254653-edge megasas
> >  364  15980  0   15980  IR-PCI-MSI 70254657-edge megasas
> >  362  15979  0   15979  IR-PCI-MSI 70254655-edge megasas
> >  345  15696  0   15696  IR-PCI-MSI 70254638-edge megasas
> >  341  15659  0   15659  IR-PCI-MSI 70254634-edge megasas
> >  369  15656  0   15656  IR-PCI-MSI 70254662-edge megasas
> >  359  15650  0   15650  IR-PCI-MSI 70254652-edge megasas
> >  358  15596  0   15596  IR-PCI-MSI 70254651-edge megasas
> >  350  15574  0   15574  IR-PCI-MSI 70254643-edge megasas
> >  342  15532  0   15532  IR-PCI-MSI 70254635-edge megasas
> >  344  15527  0   15527  IR-PCI-MSI 70254637-edge megasas
> >  346  15485  0   15485  IR-PCI-MSI 70254639-edge megasas
> >  361  15482  0   15482  IR-PCI-MSI 70254654-edge megasas
> >  348  15467  0   15467  IR-PCI-MSI 70254641-edge megasas
> >  368  15463  0   15463  IR-PCI-MSI 70254661-edge megasas
> >  354  15420  0   15420  IR-PCI-MSI 70254647-edge megasas
> >  351  15378  0   15378  IR-PCI-MSI 70254644-edge megasas
> >  352  15377  0   15377  IR-PCI-MSI 70254645-edge megasas
> >  356  15348  0   15348  IR-PCI-MSI 70254649-edge megasas
> >  337  15344  0   15344  IR-PCI-MSI 70254630-edge megasas
> >  343  15320  0   15320  IR-PCI-MSI 70254636-edge megasas
> >  355  15266  0   15266  IR-PCI-MSI 70254648-edge megasas
> >  335  15247  0   15247  IR-PCI-MSI 70254628-edge megasas
> >  363  15233  0   15233  IR-PCI-MSI 70254656-edge megasas
> > 
> > 
> > Average:  CPU  %usr  %nice   %sys  %iowait  %steal  %irq  %soft  %guest  %gnice  %idle
> > Average:   18  3.80   0.00  14.78    10.08    0.00  0.00   4.01    0.00    0.00  67.33
> > Average:   19  3.26   0.00  15.35    10.62    0.00  0.00   4.03    0.00    0.00  66.74
> > Average:   20  3.42   0.00  14.57    10.67    0.00  0.00   3.84    0.00    0.00  67.50
> > Average:   21  3.19   0.00  15.60    10.75    0.00  0.00   4.16    0.00    0.00  66.30
> > Average:   22  3.58   0.00  15.15    10.66    0.00  0.00   3.51    0.00    0.00  67.11
> > Average:   23  3.34   0.00  15.36    10.63    0.00  0.00   4.17    0.00    0.00  66.50
> > Average:   24  3.50   0.00  14.58    10.93    0.00  0.00   3.85    0.00    0.00  67.13
> > Average:   25  3.20   0.00  14.68    10.86    0.00  0.00   4.31    0.00    0.00  66.95
> > Average:   26  3.27   0.00  14.80    10.70    0.00  0.00   3.68    0.00    0.00  67.55
> > Average:   27  3.58   0.00  15.36    10.80    0.00  0.00   3

Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread t...@kernel.org
Hello, Bart.

On Thu, Feb 08, 2018 at 04:31:43PM +, Bart Van Assche wrote:
> > That sounds more like a scsi hotplug bug than an issue in the timeout
> > code unless we messed up @req pointer to begin with.
> 
> I don't think that this is related to SCSI hotplugging: this crash does not
> occur with the v4.15 block layer core and it does not occur with my timeout
> handler rework patch applied either. I think that means that we cannot
> exclude the block layer core timeout handler rework as a possible cause.
> 
> The disassembler output is as follows:
> 
> (gdb) disas /s scsi_times_out
> Dump of assembler code for function scsi_times_out:
> drivers/scsi/scsi_error.c:
> 282 {
>    0x5bd0 <+0>:  push   %r13
>    0x5bd2 <+2>:  push   %r12
>    0x5bd4 <+4>:  push   %rbp
> ./include/linux/blk-mq.h:
> 300     return rq + 1;
>    0x5bd5 <+5>:  lea    0x178(%rdi),%rbp
> drivers/scsi/scsi_error.c:
> 282 {
>    0x5bdc <+12>: push   %rbx
> 283     struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(req);
> 284     enum blk_eh_timer_return rtn = BLK_EH_NOT_HANDLED;
> 285     struct Scsi_Host *host = scmd->device->host;
>    0x5bdd <+13>: mov    0x1b0(%rdi),%rax
> 282 {
>    0x5be4 <+20>: mov    %rdi,%rbx
> 283     struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(req);
> 284     enum blk_eh_timer_return rtn = BLK_EH_NOT_HANDLED;
> 285     struct Scsi_Host *host = scmd->device->host;
>    0x5be7 <+23>: mov    (%rax),%r13
>    0x5bea <+26>: nopl   0x0(%rax,%rax,1)
> [ ... ]
> (gdb) print /x sizeof(struct request)
> $2 = 0x178
> (gdb) print &(((struct scsi_cmnd*)0)->device)
> $4 = (struct scsi_device **) 0x38 
> (gdb) print &(((struct scsi_device*)0)->host)   
> $5 = (struct Scsi_Host **) 0x0
> 
> The crash is reported at address scsi_times_out+0x17 == scsi_times_out+23. The
> instruction at that address tries to dereference scsi_cmnd.device (%rax). The
> register dump shows that that pointer has the value NULL. The only function I
> know of that clears the scsi_cmnd.device pointer is scsi_req_init(). The only
> caller of that function in the SCSI core is scsi_initialize_rq(). That 
> function
> has two callers, namely scsi_init_command() and blk_get_request(). However,
> the scsi_cmnd.device pointer is not cleared when a request finishes. This is
> why I think that the above crash report indicates that scsi_times_out() was
> called for a request that was being reinitialized and not by device 
> hotplugging.

I could be misreading it but scsi_cmnd->device dereference should be
the following.

0x5bdd <+13>: mov    0x1b0(%rdi),%rax

%rdi is @req, 0x1b0(%rdi) seems to be the combined arithmetic of
blk_mq_rq_to_pdu() and ->device dereference - 0x178 + 0x38.  The
faulting access is (%rax), which is deref'ing host from device.
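For readers following the offset arithmetic, a small stand-alone sketch, with mock struct layouts sized to match the disassembly quoted above (0x178 and 0x38 come from the gdb output; these are not the real kernel definitions):

```c
#include <stddef.h>

/* sizeof(struct request) == 0x178 per the gdb output above. */
struct mock_request { char pad[0x178]; };

/* offsetof(struct scsi_cmnd, device) == 0x38 per the gdb output above. */
struct mock_scsi_cmnd {
	char pad[0x38];
	void *device;
};

/* blk_mq_rq_to_pdu(): the driver pdu lives immediately after the request. */
static void *rq_to_pdu(struct mock_request *rq)
{
	return rq + 1;
}

/* The compiler folds pdu conversion plus the ->device load into a single
 * 0x1b0(%rdi) access relative to the request pointer: 0x178 + 0x38. */
static int combined_offset(void)
{
	return (int)(sizeof(struct mock_request) +
		     offsetof(struct mock_scsi_cmnd, device));
}
```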

Thanks.

-- 
tejun


Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread Bart Van Assche
On Thu, 2018-02-08 at 09:00 -0800, t...@kernel.org wrote:
> On Thu, Feb 08, 2018 at 04:31:43PM +, Bart Van Assche wrote:
> > The crash is reported at address scsi_times_out+0x17 == scsi_times_out+23. 
> > The
> > instruction at that address tries to dereference scsi_cmnd.device (%rax). 
> > The
> > register dump shows that that pointer has the value NULL. The only function 
> > I
> > know of that clears the scsi_cmnd.device pointer is scsi_req_init(). The 
> > only
> > caller of that function in the SCSI core is scsi_initialize_rq(). That 
> > function
> > has two callers, namely scsi_init_command() and blk_get_request(). However,
> > the scsi_cmnd.device pointer is not cleared when a request finishes. This is
> > why I think that the above crash report indicates that scsi_times_out() was
> > called for a request that was being reinitialized and not by device 
> > hotplugging.
> 
> I could be misreading it but scsi_cmnd->device dereference should be
> the following.
> 
> 0x5bdd <+13>: mov    0x1b0(%rdi),%rax
> 
> %rdi is @req, 0x1b0(%rdi) seems to be the combined arithmetic of
> blk_mq_rq_to_pdu() and ->device dereference - 0x178 + 0x38.  The
> faulting access is (%rax), which is deref'ing host from device.

Hello Tejun,

I think "dereferencing a pointer" means reading the memory location that 
pointer points
at? Anyway, I think we both interpret the crash report in the same way, namely 
that it
means that scmd->device == NULL.

Thanks,

Bart.





Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread t...@kernel.org
Hello, Bart.

On Thu, Feb 08, 2018 at 05:10:45PM +, Bart Van Assche wrote:
> I think "dereferencing a pointer" means reading the memory location that 
> pointer points
> at? Anyway, I think we both interpret the crash report in the same way, 
> namely that it
> means that scmd->device == NULL.

No, what I'm trying to say is that it's the device->host reference,
not the scmd->device reference.

Thanks.

-- 
tejun


Re: [PATCH 1/5] blk-mq: tags: define several fields of tags as pointer

2018-02-08 Thread Bart Van Assche
On Sat, 2018-02-03 at 12:21 +0800, Ming Lei wrote:
> diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
> index 61deab0b5a5a..a68323fa0c02 100644
> --- a/block/blk-mq-tag.h
> +++ b/block/blk-mq-tag.h
> @@ -11,10 +11,14 @@ struct blk_mq_tags {
>   unsigned int nr_tags;
>   unsigned int nr_reserved_tags;
>  
> - atomic_t active_queues;
> + atomic_t *active_queues;
> + atomic_t __active_queues;
>  
> - struct sbitmap_queue bitmap_tags;
> - struct sbitmap_queue breserved_tags;
> + struct sbitmap_queue *bitmap_tags;
> + struct sbitmap_queue *breserved_tags;
> +
> + struct sbitmap_queue __bitmap_tags;
> + struct sbitmap_queue __breserved_tags;
>  
>   struct request **rqs;
>   struct request **static_rqs;

This is getting ugly: multiple pointers that either all point at an element
inside struct blk_mq_tags or all point at a member outside blk_mq_tags. Have
you considered to introduce a new structure for these members (active_queues,
bitmap_tags and breserved_tags) such that only a single new pointer has to
be introduced in struct blk_mq_tags? Additionally, why does every
blk_mq_tags instance have its own static_rqs array if tags are shared
across hardware queues? blk_mq_get_tag() allocates a tag either from
bitmap_tags or from breserved_tags. So if these tag sets are shared I
don't think that it is necessary to have one static_rqs array per struct
blk_mq_tags instance.
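For illustration, one shape the suggested refactoring could take, with mock types standing in for the kernel ones; this is only a sketch of the review comment, not the actual patch, and all names here are hypothetical:

```c
#include <stddef.h>

/* Mock stand-in for struct sbitmap_queue. */
struct mock_sbitmap_queue { int depth; };

/* Gather the members that may be shared across hardware queues into one
 * structure, so blk_mq_tags needs a single pointer instead of three. */
struct shared_tag_state {
	int active_queues;			/* atomic_t in the kernel */
	struct mock_sbitmap_queue bitmap_tags;
	struct mock_sbitmap_queue breserved_tags;
};

struct mock_blk_mq_tags {
	unsigned int nr_tags;
	struct shared_tag_state *state;		/* one pointer, not three */
	struct shared_tag_state __state;	/* used when not shared */
};

/* Point at the shared state if one is given, else at the embedded copy. */
static void mock_tags_init(struct mock_blk_mq_tags *tags,
			   struct shared_tag_state *shared)
{
	tags->state = shared ? shared : &tags->__state;
}
```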

Thanks,

Bart.





Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread Bart Van Assche
On Thu, 2018-02-08 at 09:19 -0800, t...@kernel.org wrote:
> Hello, Bart.
> 
> On Thu, Feb 08, 2018 at 05:10:45PM +, Bart Van Assche wrote:
> > I think "dereferencing a pointer" means reading the memory location that 
> > pointer points
> > at? Anyway, I think we both interpret the crash report in the same way, 
> > namely that it
> > means that scmd->device == NULL.
> 
> No, what I'm trying to say is that it's the device->host reference,
> not the scmd->device reference.

Again, I think that means that we agree about the interpretation of the crash
report. The big question here is what the next step should be to analyze this
further and/or to avoid that this crash can occur?

Thanks,

Bart.







Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

2018-02-08 Thread Danil Kipnis
On Wed, Feb 7, 2018 at 6:32 PM, Bart Van Assche  wrote:
> On Wed, 2018-02-07 at 18:18 +0100, Roman Penyaev wrote:
>> So the question is: are there real life setups where
>> some of the local IB network members can be untrusted?
>
> Hello Roman,
>
> You may want to read more about the latest evolutions with regard to network
> security. An article that I can recommend is the following: "Google reveals
> own security regime policy trusts no network, anywhere, ever"
> (https://www.theregister.co.uk/2016/04/06/googles_beyondcorp_security_policy/).
>
> If data-centers would start deploying RDMA among their entire data centers
> (maybe they are already doing this) then I think they will want to restrict
> access to block devices to only those initiator systems that need it.
>
> Thanks,
>
> Bart.
>
>

Hi Bart,

thanks for the link to the article. To the best of my understanding,
the guys suggest authenticating the devices first and only then
authenticating the users who use the devices in order to get access to
a corporate service. They also mention in the presentation the current
trend of moving corporate services into the cloud. But I think this is
not about the devices the cloud itself is built of. Isn't a cloud
first built out of devices connected via IB, and then users (and their
devices) are provided access to the services of that cloud as a whole?
If a malicious user has already plugged his device into an IB switch of
the cloud's internal infrastructure, isn't it game over anyway? Couldn't
he just take the hard drives instead of mapping them?

Thanks,

Danil.


Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread t...@kernel.org
On Thu, Feb 08, 2018 at 05:37:46PM +, Bart Van Assche wrote:
> On Thu, 2018-02-08 at 09:19 -0800, t...@kernel.org wrote:
> > Hello, Bart.
> > 
> > On Thu, Feb 08, 2018 at 05:10:45PM +, Bart Van Assche wrote:
> > > I think "dereferencing a pointer" means reading the memory location that 
> > > pointer points
> > > at? Anyway, I think we both interpret the crash report in the same way, 
> > > namely that it
> > > means that scmd->device == NULL.
> > 
> > No, what I'm trying to say is that it's the device->host reference,
> > not the scmd->device reference.
> 
> Again, I think that means that we agree about the interpretation of the crash
> report. The big question here is what the next step should be to analyze this
> further and/or to avoid that this crash can occur?

Heh, sorry about not being clear.  What I'm trying to say is that
scmd->device != NULL && device->host == NULL.  Or was this what you
were saying all along?

Thanks.

-- 
tejun


Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread Bart Van Assche
On Thu, 2018-02-08 at 09:40 -0800, t...@kernel.org wrote:
> Heh, sorry about not being clear.  What I'm trying to say is that
> scmd->device != NULL && device->host == NULL.  Or was this what you
> were saying all along?

What I agree with is that the request pointer (req argument) is stored in %rdi
and that mov 0x1b0(%rdi),%rax loads scmd->device into %rax. Since RIP ==
scsi_times_out+0x17, since the instruction at that address tries to dereference
%rax and since the register dump shows that %rax == NULL I think that means that
scmd->device == NULL.

Thanks,

Bart.






Re: [PATCH v2] blk-mq: Fix race between resetting the timer and completion handling

2018-02-08 Thread t...@kernel.org
On Thu, Feb 08, 2018 at 05:48:32PM +, Bart Van Assche wrote:
> On Thu, 2018-02-08 at 09:40 -0800, t...@kernel.org wrote:
> > Heh, sorry about not being clear.  What I'm trying to say is that
> > scmd->device != NULL && device->host == NULL.  Or was this what you
> > were saying all along?
> 
> What I agree with is that the request pointer (req argument) is stored in %rdi
> and that mov 0x1b0(%rdi),%rax loads scmd->device into %rax. Since RIP ==
> scsi_times_out+0x17, since the instruction at that address tries to 
> dereference
> %rax and since the register dump shows that %rax == NULL I think that means 
> that
> scmd->device == NULL.

Ah, I was completely confused about which one had to be NULL.  You're
absolutely right.  Let's continue earlier in the thread.

Thanks.

-- 
tejun


Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

2018-02-08 Thread Bart Van Assche
On Thu, 2018-02-08 at 18:38 +0100, Danil Kipnis wrote:
> thanks for the link to the article. To the best of my understanding,
> the guys suggest authenticating the devices first and only then
> authenticating the users who use the devices in order to get access to
> a corporate service. They also mention in the presentation the current
> trend of moving corporate services into the cloud. But I think this is
> not about the devices the cloud itself is built of. Isn't a cloud
> first built out of devices connected via IB, and then users (and their
> devices) are provided access to the services of that cloud as a whole?
> If a malicious user has already plugged his device into an IB switch of
> the cloud's internal infrastructure, isn't it game over anyway? Couldn't
> he just take the hard drives instead of mapping them?

Hello Danil,

It seems like we each have been focussing on different aspects of the article.
The reason I referred to that article is because I read the following in
that article: "Unlike the conventional perimeter security model, BeyondCorp
doesn’t gate access to services and tools based on a user’s physical location
or the originating network [ ... ] The zero trust architecture spells trouble
for traditional attacks that rely on penetrating a tough perimeter to waltz
freely within an open internal network." Suppose e.g. that an organization
decides to use RoCE or iWARP for connectivity between block storage initiator
systems and block storage target systems and that it has a single company-
wide Ethernet network. If the target system does not restrict access based
on initiator IP address then any penetrator would be able to access all the
block devices exported by the target after a SoftRoCE or SoftiWARP initiator
driver has been loaded. If the target system however restricts access based
on the initiator IP address then that would make it harder for a penetrator
to access the exported block storage devices. Instead of just penetrating the
network access, IP address spoofing would have to be used or access would
have to be obtained to a system that has been granted access to the target
system.

Thanks,

Bart.




[PATCH v7 1/2] xfs: remove assert to check bytes returned

2018-02-08 Thread Goldwyn Rodrigues
From: Goldwyn Rodrigues 

Since we can return less than count in case of partial direct
writes, remove the ASSERT.

Signed-off-by: Goldwyn Rodrigues 
---
 fs/xfs/xfs_file.c | 6 --
 1 file changed, 6 deletions(-)

Changes since v6:
 - Reordered to before direct write fix

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 8601275cc5e6..8fc4dbf66910 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -590,12 +590,6 @@ xfs_file_dio_aio_write(
ret = iomap_dio_rw(iocb, from, &xfs_iomap_ops, xfs_dio_write_end_io);
 out:
xfs_iunlock(ip, iolock);
-
-   /*
-* No fallback to buffered IO on errors for XFS, direct IO will either
-* complete fully or fail.
-*/
-   ASSERT(ret < 0 || ret == count);
return ret;
 }
 
-- 
2.16.1



[PATCH v7 2/2] Return bytes transferred for partial direct I/O

2018-02-08 Thread Goldwyn Rodrigues
From: Goldwyn Rodrigues 

When direct I/O encounters an error midway, it returns that error even
if part of the data was written. Instead, it should return the number of
bytes transferred so far.

Test case for filesystems (with ENOSPC):
1. Create an almost full filesystem
2. Create a file, say /mnt/lastfile, until the filesystem is full.
3. Direct write() with count > sizeof /mnt/lastfile.

Result: write() returns -ENOSPC. However, file content has data written
in step 3.

Added a sysctl entry, dio_short_writes, which is on by default. It can
be turned off to support applications which expect either an error or
the full byte count submitted as the return value of their write calls.

This fixes fstest generic/472.
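Under the short-write convention this patch introduces, callers are expected to resubmit the remainder, just as with any POSIX write(2). A caller-side sketch of that pattern; write_all() is a hypothetical helper for illustration, not part of the patch:

```c
#include <errno.h>
#include <unistd.h>

/* Write the whole buffer, resubmitting after partial writes.
 * Returns the number of bytes written (== count on success), or a
 * negative errno if nothing at all was written. */
static ssize_t write_all(int fd, const char *buf, size_t count)
{
	size_t done = 0;

	while (done < count) {
		ssize_t n = write(fd, buf + done, count - done);

		if (n > 0) {
			done += (size_t)n;	/* short write: keep going */
			continue;
		}
		if (n < 0 && errno == EINTR)
			continue;		/* interrupted, no progress */
		/* hard error (or an unexpected zero-byte write) */
		return done ? (ssize_t)done : (n < 0 ? -errno : -EIO);
	}
	return (ssize_t)done;
}
```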

Signed-off-by: Goldwyn Rodrigues 
---
 Documentation/sysctl/fs.txt | 14 ++
 fs/block_dev.c  |  2 +-
 fs/direct-io.c  |  7 +--
 fs/iomap.c  | 23 ---
 include/linux/fs.h  |  1 +
 kernel/sysctl.c |  9 +
 6 files changed, 42 insertions(+), 14 deletions(-)

Changes since v1:
 - incorporated iomap and block devices

Changes since v2:
 - realized that file size was not increasing when performing a (partial)
   direct I/O because end_io function was receiving the error instead of
   size. Fixed.

Changes since v3:
 - [hch] initialize transferred with dio->size and use transferred instead
   of dio->size.

Changes since v4:
 - Refreshed to v4.14

Changes since v5:
 - Added /proc/sys/fs/dio_short_writes (default 1) to guard older applications
   which expect write(fd, buf, count) returns either count or error.

Changes since v6:
 - Corrected documentation
 - Re-ordered patch

diff --git a/Documentation/sysctl/fs.txt b/Documentation/sysctl/fs.txt
index 6c00c1e2743f..21582f675985 100644
--- a/Documentation/sysctl/fs.txt
+++ b/Documentation/sysctl/fs.txt
@@ -22,6 +22,7 @@ Currently, these files are in /proc/sys/fs:
 - aio-max-nr
 - aio-nr
 - dentry-state
+- dio_short_writes
 - dquot-max
 - dquot-nr
 - file-max
@@ -76,6 +77,19 @@ dcache isn't pruned yet.
 
 ==
 
+dio_short_writes:
+
+When direct I/O encounters a transient error, it returns the
+error code, even if it has performed part of the write. If this
+flag is on (the default), the write instead returns the number of
+bytes written so far, per write(2) semantics. However, some older
+applications still treat a direct write as failed if not all of
+the submitted I/O completed, i.e. write(fd, buf, count) != count.
+This option can be disabled on such systems in order to support
+existing applications which do not expect short writes.
+
+==
+
 dquot-max & dquot-nr:
 
 The file dquot-max shows the maximum number of cached disk
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 4a181fcb5175..49d94360bb51 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -409,7 +409,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 
if (!ret)
ret = blk_status_to_errno(dio->bio.bi_status);
-   if (likely(!ret))
+   if (likely(dio->size))
ret = dio->size;
 
bio_put(&dio->bio);
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 3aafb3343a65..9bd15be64c25 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -151,6 +151,7 @@ struct dio {
 } cacheline_aligned_in_smp;
 
 static struct kmem_cache *dio_cache __read_mostly;
+unsigned int sysctl_dio_short_writes = 1;
 
 /*
  * How many pages are in the queue?
@@ -262,7 +263,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
ret = dio->page_errors;
if (ret == 0)
ret = dio->io_error;
-   if (ret == 0)
+   if (!sysctl_dio_short_writes && (ret == 0))
ret = transferred;
 
if (dio->end_io) {
@@ -310,7 +311,9 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
}
 
kmem_cache_free(dio_cache, dio);
-   return ret;
+   if (!sysctl_dio_short_writes)
+   return ret;
+   return transferred ? transferred : ret;
 }
 
 static void dio_aio_complete_work(struct work_struct *work)
diff --git a/fs/iomap.c b/fs/iomap.c
index 47d29ccffaef..a8d6908dc0de 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -716,23 +716,24 @@ static ssize_t iomap_dio_complete(struct iomap_dio *dio)
struct kiocb *iocb = dio->iocb;
struct inode *inode = file_inode(iocb->ki_filp);
loff_t offset = iocb->ki_pos;
-   ssize_t ret;
+   ssize_t err;
+   ssize_t transferred = dio->size;
 
if (dio->end_io) {
-   ret = dio->end_io(iocb,
-   dio->error ? dio->error : dio->size,
-   dio->flags);
+   err = dio->end_io(iocb,
+ (transferred && sysctl_dio_short_writes) ?
+   transferre

[LSF/MM ATTEND] State of blk-mq I/O scheduling and next steps

2018-02-08 Thread Omar Sandoval
Hi,

I'd like to attend LSF/MM to talk about the state of I/O scheduling in
blk-mq. I can present some results about mq-deadline and Kyber on our
production workloads. I'd also like to talk about what's next, in
particular, improvements and features for Kyber. Finally, I'd like to
participate in Ming's proposed blk-mq topics.

Thanks!


Re: [PATCH v2] blk-throttle: fix race between blkcg_bio_issue_check and cgroup_rmdir

2018-02-08 Thread Joseph Qi
Hi Tejun,

On 18/2/8 23:23, Tejun Heo wrote:
> Hello, Joseph.
> 
> On Thu, Feb 08, 2018 at 10:29:43AM +0800, Joseph Qi wrote:
>> So you mean checking css->refcnt to prevent the further use of
>> blkg_get? I think it makes sense.
> 
> Yes.
> 
>> IMO, we should use css_tryget_online instead, right after taking
> 
> Not really.  An offline css still can have a vast amount of IO to
> drain and write out.
> 
IIUC, we have to detect that blkcg_css_offline, which does the blkg_put,
is now in progress. Since percpu_ref_kill_and_confirm() in kill_css()
sets the __PERCPU_REF_DEAD flag, we can use that to avoid the race. IOW,
if __PERCPU_REF_DEAD is set, we know the blkcg css is going offline and
continuing to access the blkg may risk a double free. Thus we choose to
skip these IOs.
I don't get how css_tryget helps here, since it doesn't check the
__PERCPU_REF_DEAD flag. Also, css_tryget can't prevent the blkcg css
from offlining, since the race happens while blkcg_css_offline is in
progress. Am I missing something here?

Thanks,
Joseph

>> queue_lock. Because there may be more use of blkg_get in blk_throtl_bio
>> in the future. Actually it already has two now. One is in
>> blk_throtl_assoc_bio, and the other is in throtl_qnode_add_bio.
>> What do you think of this?
> 
> As long as we avoid making the bypass paths expensive, whatever makes
> the code easier to read and maintain would be better.  css ref ops are
> extremely cheap anyway.
> 
> Thanks.
> 


Re: [PATCH v7 1/2] xfs: remove assert to check bytes returned

2018-02-08 Thread Darrick J. Wong
On Thu, Feb 08, 2018 at 12:59:47PM -0600, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues 
> 
> Since we can return less than count in case of partial direct
> writes, remove the ASSERT.
> 
> Signed-off-by: Goldwyn Rodrigues 

Looks ok,
Reviewed-by: Darrick J. Wong 

--D


RE: [PATCH 0/5] blk-mq/scsi-mq: support global tags & introduce force_blk_mq

2018-02-08 Thread Kashyap Desai
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Thursday, February 8, 2018 10:23 PM
> To: Hannes Reinecke
> Cc: Kashyap Desai; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun Easi; Omar Sandoval;
> Martin K . Petersen; James Bottomley; Christoph Hellwig; Don Brace; Peter
> Rivera; Paolo Bonzini; Laurence Oberman
> Subject: Re: [PATCH 0/5] blk-mq/scsi-mq: support global tags & introduce
> force_blk_mq
>
> On Thu, Feb 08, 2018 at 08:00:29AM +0100, Hannes Reinecke wrote:
> > On 02/07/2018 03:14 PM, Kashyap Desai wrote:
> > >> -Original Message-
> > >> From: Ming Lei [mailto:ming@redhat.com]
> > >> Sent: Wednesday, February 7, 2018 5:53 PM
> > >> To: Hannes Reinecke
> > >> Cc: Kashyap Desai; Jens Axboe; linux-block@vger.kernel.org;
> > >> Christoph Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun
> > >> Easi; Omar
> > > Sandoval;
> > >> Martin K . Petersen; James Bottomley; Christoph Hellwig; Don Brace;
> > > Peter
> > >> Rivera; Paolo Bonzini; Laurence Oberman
> > >> Subject: Re: [PATCH 0/5] blk-mq/scsi-mq: support global tags &
> > >> introduce force_blk_mq
> > >>
> > >> On Wed, Feb 07, 2018 at 07:50:21AM +0100, Hannes Reinecke wrote:
> > >>> Hi all,
> > >>>
> > >>> [ .. ]
> > >
> > > Could you share us your patch for enabling global_tags/MQ on
> > > megaraid_sas so that I can reproduce your test?
> > >
> > >> See below perf top data. "bt_iter" is consuming 4 times more CPU.
> > >
> > > Could you share us what the IOPS/CPU utilization effect is after
> > > applying the patch V2? And your test script?
> >  Regarding CPU utilization, I need to test one more time.
> >  Currently the system is in use.
> > 
> >  I run below fio test on total 24 SSDs expander attached.
> > 
> >  numactl -N 1 fio jbod.fio --rw=randread --iodepth=64 --bs=4k
> >  --ioengine=libaio --rw=randread
> > 
> >  Performance dropped from 1.6 M IOPs to 770K IOPs.
> > 
> > >>> This is basically what we've seen with earlier iterations.
> > >>
> > >> Hi Hannes,
> > >>
> > >> As I mentioned in another mail[1], Kashyap's patch has a big issue,
> > >> which causes only reply queue 0 to be used.
> > >>
> > >> [1] https://marc.info/?l=linux-scsi&m=151793204014631&w=2
> > >>
> > >> So could you guys run your performance test again after fixing the
> > >> patch?
> > >
> > > Ming -
> > >
> > > I tried after the change you requested.  Performance drop is still
> > > unresolved: from 1.6 M IOPS to 770K IOPS.
> > >
> > > See below data. All 24 reply queues are in use correctly.
> > >
> > > IRQs / 1 second(s)
> > > IRQ#  TOTAL  NODE0   NODE1  NAME
> > >  360  16422  0   16422  IR-PCI-MSI 70254653-edge megasas
> > >  364  15980  0   15980  IR-PCI-MSI 70254657-edge megasas
> > >  362  15979  0   15979  IR-PCI-MSI 70254655-edge megasas
> > >  345  15696  0   15696  IR-PCI-MSI 70254638-edge megasas
> > >  341  15659  0   15659  IR-PCI-MSI 70254634-edge megasas
> > >  369  15656  0   15656  IR-PCI-MSI 70254662-edge megasas
> > >  359  15650  0   15650  IR-PCI-MSI 70254652-edge megasas
> > >  358  15596  0   15596  IR-PCI-MSI 70254651-edge megasas
> > >  350  15574  0   15574  IR-PCI-MSI 70254643-edge megasas
> > >  342  15532  0   15532  IR-PCI-MSI 70254635-edge megasas
> > >  344  15527  0   15527  IR-PCI-MSI 70254637-edge megasas
> > >  346  15485  0   15485  IR-PCI-MSI 70254639-edge megasas
> > >  361  15482  0   15482  IR-PCI-MSI 70254654-edge megasas
> > >  348  15467  0   15467  IR-PCI-MSI 70254641-edge megasas
> > >  368  15463  0   15463  IR-PCI-MSI 70254661-edge megasas
> > >  354  15420  0   15420  IR-PCI-MSI 70254647-edge megasas
> > >  351  15378  0   15378  IR-PCI-MSI 70254644-edge megasas
> > >  352  15377  0   15377  IR-PCI-MSI 70254645-edge megasas
> > >  356  15348  0   15348  IR-PCI-MSI 70254649-edge megasas
> > >  337  15344  0   15344  IR-PCI-MSI 70254630-edge megasas
> > >  343  15320  0   15320  IR-PCI-MSI 70254636-edge megasas
> > >  355  15266  0   15266  IR-PCI-MSI 70254648-edge megasas
> > >  335  15247  0   15247  IR-PCI-MSI 70254628-edge megasas
> > >  363  15233  0   15233  IR-PCI-MSI 70254656-edge megasas
> > >
> > >
> > > Average:    CPU   %usr  %nice   %sys  %iowait  %steal   %irq  %soft  %guest  %gnice  %idle
> > > Average:     18   3.80   0.00  14.78    10.08    0.00   0.00   4.01    0.00    0.00  67.33
> > > Average:     19   3.26   0.00  15.35    10.62    0.00   0.00   4.03    0.00    0.00  66.74
> > > Average:     20   3.42   0.00  14.57    10.67    0.00   0.00   3.84    0.00    0.00  67.50
> > > Average:     21   3.19   0.00  15.60    10.75    0.00   0.00   4.16    0.00    0.00  66.30
> > > Average:     22 

Re: [PATCH 0/5] blk-mq/scsi-mq: support global tags & introduce force_blk_mq

2018-02-08 Thread Ming Lei
On Fri, Feb 09, 2018 at 10:28:23AM +0530, Kashyap Desai wrote:
> [ .. ]