Re: [PATCH] block: fix RO partition with RW disk

2019-08-05 Thread Dongli Zhang
Hi Junxiao, While this is reported by md, is it possible to reproduce the error on purpose with another device (e.g., loop) and add a test to blktests? Dongli Zhang On 8/6/19 4:01 AM, Junxiao Bi wrote: > When md raid1 was used with imsm metadata, during the boot stage, > the raid devic

Re: [PATCH 4/5] nvme: wait until all completed request's complete fn is called

2019-07-23 Thread Dongli Zhang
>mrs_used > 0) in ib_destroy_qp_user() may be triggered. > > Fix this issue by using blk_mq_tagset_drain_completed_request. > Should this be blk_mq_tagset_wait_completed_request rather than blk_mq_tagset_drain_completed_request? Dongli Zhang

Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)

2019-06-24 Thread Dongli Zhang
ed, after boot: # cat /proc/cpuinfo | grep processor processor : 0 # cat /sys/block/nvme0n1/mq/0/cpu_list 0 # cat /sys/block/nvme0n1/mq/1/cpu_list 1 # cat /sys/block/nvme0n1/mq/2/cpu_list 2 # cat /sys/block/nvme0n1/mq/3/cpu_list 3 # cat /proc/interrupts | grep nvme 24: 11 PCI-

Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug

2019-06-24 Thread Dongli Zhang
w patch explaining above for removing a cpu and it is unfortunately not merged yet. https://patchwork.kernel.org/patch/10889307/ Another thing is that during initialization, the hctx->cpumask should already be set even when the cpu is offline. Would you please explain the case where hctx->cpumask is not set correctly, e.g., how to reproduce it with a kvm guest running scsi/virtio/nvme/loop? Dongli Zhang

Re: io_uring is zero copy or not?

2019-04-14 Thread Dongli Zhang
ro-copy feature? Doesn't it depend on the definition of 'copy'? There is still a copy from the original userspace pages to the registered buffer? Dongli Zhang > > From the code, kernel directly get the userspace virtual address's > pages and store them: > > 2475 pr

Re: [PATCH] blk-mq: Wait for for hctx requests on CPU unplug

2019-04-07 Thread Dongli Zhang
_mq_ctx->cpu = 2 for blk_mq_hw_ctx->queue_num = 0 Although blk_mq_ctx->cpu = 2 is only mapped to blk_mq_hw_ctx->queue_num = 2 in this case, its ctx->rq_lists[type] will however be moved to blk_mq_hw_ctx->queue_num = 3 during the 1st call of blk_mq_hctx_notify_dead(). This pa

Re: [PATCH] blk-mq: set plug->rq_count 0 after sort in blk_mq_flush_plug_list

2019-04-06 Thread Dongli Zhang
-linus&id=bcc816dfe51ab86ca94663c7b225f2d6eb0fddb9 Dongli Zhang > > Signed-off-by: Shenghui Wang > --- > block/blk-mq.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/block/blk-mq.c b/block/blk-mq.c > index f3b0d33bdf88..e5d32f186e81 100644 > --- a/block/blk-mq.c >

Re: [PATCH] blk-mq: Wait for for hctx requests on CPU unplug

2019-04-06 Thread Dongli Zhang
r all inflight requests to complete, is it possible to re-map (migrate) those requests to another hctx whose cpus (ctx) are still online, e.g., to extract the bios and re-map those bios to another ctx (e.g., cpu0)? One drawback I can see is that if 63 out of 64 cpus are suddenly offline, cpu0 would be stuck. Dongli Zhang

Re: [PATCH] block: Fix a race between tag iteration and hardware queue changes

2019-04-02 Thread Dongli Zhang
ad of making __blk_mq_update_nr_hw_queues() wait until > q->q_usage_counter == 0 is globally visible, do not iterate over tags > if the request queue is frozen. > > Cc: Christoph Hellwig > Cc: Hannes Reinecke > Cc: James Smart > Cc: Ming Lei > Cc: Jianchao Wang > Cc: Keith

Re: [PATCH blktests 2/2] loop/001: verify all partitions are removed

2019-03-21 Thread Dongli Zhang
On 3/22/19 7:26 AM, Omar Sandoval wrote: > On Fri, Mar 15, 2019 at 10:00:27AM +0800, Dongli Zhang wrote: >> >> >> On 3/15/19 1:55 AM, Omar Sandoval wrote: >>> On Thu, Mar 14, 2019 at 07:45:17PM +0800, Dongli Zhang wrote: >>>> loop/001 does n

Re: loop: Too many partitions and OOM killer?

2019-03-20 Thread Dongli Zhang
log printed to /var/log/syslog and I would assume blkid is not involved. This is just a test on ubuntu 16.04.5. I am not sure about the env of syzbot. Dongli Zhang

Re: [PATCH blktests 2/2] loop/001: verify all partitions are removed

2019-03-14 Thread Dongli Zhang
On 3/15/19 1:55 AM, Omar Sandoval wrote: > On Thu, Mar 14, 2019 at 07:45:17PM +0800, Dongli Zhang wrote: >> loop/001 does not test whether all partitions are removed successfully >> during loop device partition scanning. As a result, the regression >> introduced by 0da

[PATCH blktests 2/2] loop/001: verify all partitions are removed

2019-03-14 Thread Dongli Zhang
left is 0. Signed-off-by: Dongli Zhang --- tests/loop/001 | 5 + tests/loop/001.out | 1 + 2 files changed, 6 insertions(+) diff --git a/tests/loop/001 b/tests/loop/001 index 47f760a..a0326b7 100755 --- a/tests/loop/001 +++ b/tests/loop/001 @@ -4,6 +4,9 @@ # # Test loop device

[PATCH blktests 1/2] loop/001: fix typo 'paritition' to 'partition'

2019-03-14 Thread Dongli Zhang
Signed-off-by: Dongli Zhang --- tests/loop/001 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/loop/001 b/tests/loop/001 index b70e8c0..47f760a 100755 --- a/tests/loop/001 +++ b/tests/loop/001 @@ -2,7 +2,7 @@ # SPDX-License-Identifier: GPL-3.0+ # Copyright (C) 2017

Re: NVMe: Regression: write zeros corrupts ext4 file system

2019-03-11 Thread Dongli Zhang
git, there is no call of nvme_write_zeros(). Perhaps there is some special configuration required to trigger nvme_write_zeros() on purpose during "git clone" to involve nvme_cmd_write_zeroes on the kernel side? My test nvme image is only about 5GB. Dongli Zhang > > QEMU versio

[PATCH for-next 1/1] blk-mq: use HCTX_TYPE_DEFAULT but not 0 to index blk_mq_tag_set->map

2019-02-27 Thread Dongli Zhang
Replace set->map[0] with set->map[HCTX_TYPE_DEFAULT] to avoid hardcoding. Signed-off-by: Dongli Zhang --- block/blk-mq.c | 14 +++--- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index 54535f4..4e502db 100644 --- a/block/blk-mq.c

[PATCH v3 2/2] loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part()

2019-02-22 Thread Dongli Zhang
x deadlock when calling blkdev_reread_part()") Signed-off-by: Dongli Zhang Reviewed-by: Jan Kara --- Changed since v1: * move the setting of lo->lo_state to Lo_unbound after partscan has finished as well (suggested by Jan Kara) Changed since v2: * Put the code inline in __loop_clr

[PATCH v3 1/2] loop: do not print warn message if partition scan is successful

2019-02-22 Thread Dongli Zhang
Do not print warn message when the partition scan returns 0. Fixes: d57f3374ba48 ("loop: Move special partition reread handling in loop_clr_fd()") Signed-off-by: Dongli Zhang Reviewed-by: Jan Kara --- drivers/block/loop.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)

[PATCH v3 0/2] loop: fix two issues introduced by prior commit

2019-02-22 Thread Dongli Zhang
e. Changed since v1: * move the setting of lo->lo_state to Lo_unbound after partscan has finished as well (suggested by Jan Kara) Changed since v2: * Put the code inline in __loop_clr_fd() but not introduce new function (suggested by Jan Kara) Dongli Zhang

Re: [PATCH v2 2/2] loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part()

2019-02-22 Thread Dongli Zhang
On 02/22/2019 07:47 PM, Jan Kara wrote: > On Thu 21-02-19 23:33:43, Dongli Zhang wrote: >> Commit 0da03cab87e6 >> ("loop: Fix deadlock when calling blkdev_reread_part()") moves >> blkdev_reread_part() out of the loop_ctl_mutex. However, >>

[PATCH v2 2/2] loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part()

2019-02-21 Thread Dongli Zhang
x deadlock when calling blkdev_reread_part()") Signed-off-by: Dongli Zhang --- Changed since v1: * move the setting of lo->lo_state to Lo_unbound after partscan has finished as well (suggested by Jan Kara) drivers/block/loop.c | 26 ++ 1 file changed, 22 ins

[PATCH v2 1/2] loop: do not print warn message if partition scan is successful

2019-02-21 Thread Dongli Zhang
Do not print warn message when the partition scan returns 0. Fixes: d57f3374ba48 ("loop: Move special partition reread handling in loop_clr_fd()") Signed-off-by: Dongli Zhang Reviewed-by: Jan Kara --- drivers/block/loop.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)

[PATCH v2 0/2] loop: fix two issues introduced by prior commit

2019-02-21 Thread Dongli Zhang
e. Changed since v1: * move the setting of lo->lo_state to Lo_unbound after partscan has finished as well (suggested by Jan Kara) Dongli Zhang

Re: [PATCH 2/2] loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part()

2019-02-21 Thread Dongli Zhang
On 02/21/2019 07:30 PM, Jan Kara wrote: > On Thu 21-02-19 12:17:35, Dongli Zhang wrote: >> Commit 0da03cab87e6 >> ("loop: Fix deadlock when calling blkdev_reread_part()") moves >> blkdev_reread_part() out of the loop_ctl_mutex. However, >>

[PATCH 2/2] loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part()

2019-02-20 Thread Dongli Zhang
x deadlock when calling blkdev_reread_part()") Signed-off-by: Dongli Zhang --- drivers/block/loop.c | 15 --- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 7908673..736e55b 100644 --- a/drivers/block/loop.c +++ b/

[PATCH 1/2] loop: do not print warn message if partition scan is successful

2019-02-20 Thread Dongli Zhang
Do not print warn message when the partition scan returns 0. Fixes: d57f3374ba48 ("loop: Move special partition reread handling in loop_clr_fd()") Signed-off-by: Dongli Zhang --- drivers/block/loop.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/bl

[PATCH 0/2] loop: fix two issues introduced by prior commit

2019-02-20 Thread Dongli Zhang
on scan is successful. [PATCH 2/2] loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part() [PATCH 2/2] fixes 0da03cab87e6 ("loop: Fix deadlock when calling blkdev_reread_part()") to not set GENHD_FL_NO_PART_SCAN before partition scan when detaching loop device from the file. Thank yo

Re: [Xen-devel] [PATCH v6 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-02-18 Thread Dongli Zhang
Hi Konrad, On 1/17/19 11:29 PM, Konrad Rzeszutek Wilk wrote: > On Tue, Jan 15, 2019 at 09:20:36AM +0100, Roger Pau Monné wrote: >> On Tue, Jan 15, 2019 at 12:41:44AM +0800, Dongli Zhang wrote: >>> The xenstore 'ring-page-order' is used globally for each blkback queue

Re: [Xen-devel] [PATCH 1/1] xen-blkback: do not wake up shutdown_wq after xen_blkif_schedule() is stopped

2019-01-17 Thread Dongli Zhang
Hi Roger, On 01/17/2019 04:20 PM, Roger Pau Monné wrote: > On Thu, Jan 17, 2019 at 10:10:00AM +0800, Dongli Zhang wrote: >> Hi Roger, >> >> On 2019/1/16 10:52 PM, Roger Pau Monné wrote: >>> On Wed, Jan 16, 2019 at 09:47:41PM +0800, Dongli Zhang wrote: >

Re: [Xen-devel] [PATCH 1/1] xen-blkback: do not wake up shutdown_wq after xen_blkif_schedule() is stopped

2019-01-16 Thread Dongli Zhang
Hi Roger, On 2019/1/16 10:52 PM, Roger Pau Monné wrote: > On Wed, Jan 16, 2019 at 09:47:41PM +0800, Dongli Zhang wrote: >> There is no need to wake up xen_blkif_schedule() as kthread_stop() is able >> to already wake up the kernel thread. >> >> Signed-off-by: Dongli Zhan

Re: [Xen-devel] [PATCH v5 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-16 Thread Dongli Zhang
On 2019/1/17 12:32 AM, Konrad Rzeszutek Wilk wrote: > On Tue, Jan 08, 2019 at 04:24:32PM +0800, Dongli Zhang wrote: >> oops. Please ignore this v5 patch. >> >> I just realized Linus suggested in an old email not to use BUG()/BUG_ON() in >> the code. >> >> I

[PATCH 1/1] xen-blkback: do not wake up shutdown_wq after xen_blkif_schedule() is stopped

2019-01-16 Thread Dongli Zhang
There is no need to wake up xen_blkif_schedule() as kthread_stop() already wakes up the kernel thread. Signed-off-by: Dongli Zhang --- drivers/block/xen-blkback/xenbus.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/block/xen-blkback/xenbus.c b

[PATCH v6 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-14 Thread Dongli Zhang
ing() to read xenstore 'ring-page-order' only once. Signed-off-by: Dongli Zhang --- Changed since v1: * change the order of xenstore read in read_per_ring_refs * use xenbus_read_unsigned() in connect_ring() Changed since v2: * simplify the condition check as "(err != 1 &

[PATCH v6 1/2] xen/blkback: add stack variable 'blkif' in connect_ring()

2019-01-14 Thread Dongli Zhang
As 'be->blkif' is used many times in connect_ring(), the stack variable 'blkif' is added to substitute for 'be->blkif'. Suggested-by: Paul Durrant Signed-off-by: Dongli Zhang Reviewed-by: Paul Durrant Reviewed-by: Roger Pau Monné --- driver

Re: [Xen-devel] [PATCH v4 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-08 Thread Dongli Zhang
Hi Roger, On 01/07/2019 11:27 PM, Roger Pau Monné wrote: > On Mon, Jan 07, 2019 at 10:07:34PM +0800, Dongli Zhang wrote: >> >> >> On 01/07/2019 10:05 PM, Dongli Zhang wrote: >>> >>> >>> On 01/07/2019 08:01 PM, Roger Pau Monné wrote: >>>

Re: [Xen-devel] [PATCH v5 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-08 Thread Dongli Zhang
oops. Please ignore this v5 patch. I just realized Linus suggested in an old email not to use BUG()/BUG_ON() in the code. I will switch to the WARN() solution and resend again. Sorry for the junk email. Dongli Zhang On 2019/1/8 4:15 PM, Dongli Zhang wrote: > The xenstore 'ring-page-o

[PATCH v5 1/2] xen/blkback: add stack variable 'blkif' in connect_ring()

2019-01-08 Thread Dongli Zhang
As 'be->blkif' is used many times in connect_ring(), the stack variable 'blkif' is added to substitute for 'be->blkif'. Suggested-by: Paul Durrant Signed-off-by: Dongli Zhang Reviewed-by: Paul Durrant Reviewed-by: Roger Pau Monné --- driver

[PATCH v5 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-08 Thread Dongli Zhang
ing() to read xenstore 'ring-page-order' only once. Signed-off-by: Dongli Zhang --- Changed since v1: * change the order of xenstore read in read_per_ring_refs * use xenbus_read_unsigned() in connect_ring() Changed since v2: * simplify the condition check as "(err != 1 &

Re: [Xen-devel] [PATCH v4 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-07 Thread Dongli Zhang
On 01/07/2019 10:05 PM, Dongli Zhang wrote: > > > On 01/07/2019 08:01 PM, Roger Pau Monné wrote: >> On Mon, Jan 07, 2019 at 01:35:59PM +0800, Dongli Zhang wrote: >>> The xenstore 'ring-page-order' is used globally for each blkback queue and >>> there

Re: [Xen-devel] [PATCH v4 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-07 Thread Dongli Zhang
On 01/07/2019 08:01 PM, Roger Pau Monné wrote: > On Mon, Jan 07, 2019 at 01:35:59PM +0800, Dongli Zhang wrote: >> The xenstore 'ring-page-order' is used globally for each blkback queue and >> therefore should be read from xenstore only once. However, it is obtained

[PATCH v4 1/2] xen/blkback: add stack variable 'blkif' in connect_ring()

2019-01-06 Thread Dongli Zhang
As 'be->blkif' is used many times in connect_ring(), the stack variable 'blkif' is added to substitute for 'be->blkif'. Suggested-by: Paul Durrant Signed-off-by: Dongli Zhang --- drivers/block/xen-blkback/xenbus.c | 27 ++- 1 file change

[PATCH v4 2/2] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-06 Thread Dongli Zhang
ing() to read xenstore 'ring-page-order' only once. Signed-off-by: Dongli Zhang --- Changed since v1: * change the order of xenstore read in read_per_ring_refs * use xenbus_read_unsigned() in connect_ring() Changed since v2: * simplify the condition check as "(err != 1 &

Re: [PATCH v3 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2019-01-03 Thread Dongli Zhang
Ping? Dongli Zhang On 12/19/2018 09:23 PM, Dongli Zhang wrote: > The xenstore 'ring-page-order' is used globally for each blkback queue and > therefore should be read from xenstore only once. However, it is obtained > in read_per_ring_refs() which might be called multip

[PATCH v3 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2018-12-19 Thread Dongli Zhang
ing() to read xenstore 'ring-page-order' only once. Signed-off-by: Dongli Zhang --- Changed since v1: * change the order of xenstore read in read_per_ring_refs * use xenbus_read_unsigned() in connect_ring() Changed since v2: * simplify the condition check as "(err != 1 &

Re: [Xen-devel] [PATCH v2 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2018-12-18 Thread Dongli Zhang
On 12/18/2018 11:13 PM, Roger Pau Monné wrote: > On Tue, Dec 18, 2018 at 07:31:59PM +0800, Dongli Zhang wrote: >> Hi Roger, >> >> On 12/18/2018 05:33 PM, Roger Pau Monné wrote: >>> On Tue, Dec 18, 2018 at 08:55:38AM +0800, Dongli Zhang wrote: >>>> The x

Re: [Xen-devel] [PATCH v2 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2018-12-18 Thread Dongli Zhang
Hi Roger, On 12/18/2018 05:33 PM, Roger Pau Monné wrote: > On Tue, Dec 18, 2018 at 08:55:38AM +0800, Dongli Zhang wrote: >> The xenstore 'ring-page-order' is used globally for each blkback queue and >> therefore should be read from xenstore only once. Howe

[PATCH v2 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2018-12-17 Thread Dongli Zhang
ing() to read xenstore 'ring-page-order' only once. Signed-off-by: Dongli Zhang --- Changed since v1: * change the order of xenstore read in read_per_ring_refs(suggested by Roger Pau Monne) * use xenbus_read_unsigned() in connect_ring() (suggested by Roger Pau Monne

Re: [Xen-devel] [PATCH 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2018-12-07 Thread Dongli Zhang
On 12/07/2018 11:15 PM, Paul Durrant wrote: >> -Original Message- >> From: Dongli Zhang [mailto:dongli.zh...@oracle.com] >> Sent: 07 December 2018 15:10 >> To: Paul Durrant ; linux-ker...@vger.kernel.org; >> xen-de...@lists.xenproject.org; linux-b

Re: [Xen-devel] [PATCH 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2018-12-07 Thread Dongli Zhang
Hi Paul, On 12/07/2018 05:39 PM, Paul Durrant wrote: >> -Original Message- >> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf >> Of Dongli Zhang >> Sent: 07 December 2018 04:18 >> To: linux-ker...@vger.kernel.org; xen-de...@list

[PATCH 1/1] xen/blkback: rework connect_ring() to avoid inconsistent xenstore 'ring-page-order' set by malicious blkfront

2018-12-06 Thread Dongli Zhang
ing() to read xenstore 'ring-page-order' only once. Signed-off-by: Dongli Zhang --- drivers/block/xen-blkback/xenbus.c | 49 -- 1 file changed, 31 insertions(+), 18 deletions(-) diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blk

Re: [Xen-devel] [PATCH] xen-blkfront: use old rinfo after enomem during migration

2018-12-03 Thread Dongli Zhang
Hi Manjunath, On 12/04/2018 10:49 AM, Manjunath Patil wrote: > On 12/3/2018 6:16 PM, Boris Ostrovsky wrote: > >> On 12/3/18 8:14 PM, Dongli Zhang wrote: >>> Hi Boris, >>> >>> On 12/04/2018 12:07 AM, Boris Ostrovsky wrote: >>>> On 12/2/18 3:31

Re: [Xen-devel] [PATCH] xen-blkfront: use old rinfo after enomem during migration

2018-12-03 Thread Dongli Zhang
mq(), we >> might hit it in setup_blkring() as well. >> We should add the similar change to blkif_sring struct as well. > > > Won't you have a similar issue with other frontends, say, netfront? I think the kmalloc failed not because of OOM. In fact, the size of "blkfront_ring_info" is large. When domU has 4 queues/rings, the size of 4 blkfront_ring_info can be about 300+ KB. There is a chance that a kmalloc() of 300+ KB would fail. About netfront, a kmalloc() of 8 'struct netfront_queue' seems to consume <70 KB? Dongli Zhang

Re: [PATCH v9] virtio_blk: add discard and write zeroes support

2018-11-01 Thread Dongli Zhang
Hi Daniel, Other than crosvm, is there any version of qemu (e.g., work-in-progress repositories on github) where I can try this feature? Thank you very much! Dongli Zhang On 11/02/2018 06:40 AM, Daniel Verkamp wrote: > From: Changpeng Liu > > In commit 88c85538, "v

Re: [PATCH 17/17] null_blk: remove legacy IO path

2018-10-11 Thread Dongli Zhang
tutorial? Dongli Zhang On 10/12/2018 12:59 AM, Jens Axboe wrote: > We're planning on removing this code completely, kill the old > path. > > Signed-off-by: Jens Axboe > --- > drivers/block/null_blk_main.c | 96 +++ > 1 file changed, 6 i

Re: nvme-pci: number of queues off by one

2018-10-07 Thread Dongli Zhang
_queues - 2) << 16)); > returns 0x60006 when num_queues is 8. Finally, nr_io_queues is set to 6+1=7 in nvme_set_queue_count() in VM kernel. I do not know how to paraphrase this in the world of nvme. Dongli Zhang On 10/08/2018 01:59 PM, Dongli Zhang wrote: > I can reproduce with q

Re: nvme-pci: number of queues off by one

2018-10-07 Thread Dongli Zhang
enable-kvm \ -net nic -net user,hostfwd=tcp::5022-:22 \ -device nvme,drive=nvme1,serial=deadbeaf1,num_queues=8 \ -drive file=/path-to-img/nvme.disk,if=none,id=nvme1 Dongli Zhang On 10/08/2018 01:05 PM, Prasun Ratn wrote: > Hi > > I have an NVMe SSD that has 8 hw

Re: [PATCH] blk-mq: complete req in softirq context in case of single queue

2018-09-26 Thread Dongli Zhang
ia xen event channel). I have an extra basic question perhaps not related to this patch: Why not delay other cases in softirq as well? (perhaps this is a question about mq in general, not about this patch). Thank you very much! Dongli Zhang > > So if all IOs are completed in hardirq context, i