Hi Junxiao,
While this is reported with md, is it possible to reproduce the error on purpose
with another device (e.g., loop) and add a test to blktests?
Dongli Zhang
On 8/6/19 4:01 AM, Junxiao Bi wrote:
> When md raid1 was used with imsm metadata, during the boot stage,
> the raid devic
> mrs_used > 0) in ib_destroy_qp_user() may be triggered.
>
> Fix this issue by using blk_mq_tagset_drain_completed_request.
>
Should it be blk_mq_tagset_wait_completed_request rather than
blk_mq_tagset_drain_completed_request?
Dongli Zhang
ed, after boot:
# cat /proc/cpuinfo | grep processor
processor : 0
# cat /sys/block/nvme0n1/mq/0/cpu_list
0
# cat /sys/block/nvme0n1/mq/1/cpu_list
1
# cat /sys/block/nvme0n1/mq/2/cpu_list
2
# cat /sys/block/nvme0n1/mq/3/cpu_list
3
# cat /proc/interrupts | grep nvme
24: 11 PCI-
w patch explaining the above for removing a cpu, and unfortunately it is not
merged yet.
https://patchwork.kernel.org/patch/10889307/
Another thing: during initialization, hctx->cpumask should already be set, even
when the cpu is offline. Would you please explain the case where hctx->cpumask
is not set correctly, e.g., how to reproduce it with a kvm guest running
scsi/virtio/nvme/loop?
Dongli Zhang
ro-copy feature?
Doesn't it depend on the definition of 'copy'?
Isn't there still a copy from the original userspace pages to the registered buffer?
Dongli Zhang
>
> From the code, the kernel directly gets the pages backing the userspace
> virtual address and stores them:
>
> 2475 pr
_mq_ctx->cpu = 2 for blk_mq_hw_ctx->queue_num = 0
Although blk_mq_ctx->cpu = 2 is mapped only to blk_mq_hw_ctx->queue_num = 2
in this case, its ctx->rq_lists[type] will nevertheless be moved to
blk_mq_hw_ctx->queue_num = 3 during the 1st call of
blk_mq_hctx_notify_dead().
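The mis-mapping described above can be sketched with a toy model (plain Python,
NOT kernel code; the data structures and the splice step are simplifications of
what blk_mq_hctx_notify_dead() really does):

```python
# Toy model of the blk-mq software-queue (ctx) to hardware-queue (hctx)
# mapping; this is NOT kernel code, just an illustration of the symptom.

NR_CPUS = 4

# Correct mapping in this example: the ctx of cpu i belongs to hctx i.
ctx_to_hctx = {cpu: cpu for cpu in range(NR_CPUS)}

# Pending requests per software queue (a stand-in for ctx->rq_lists[type]).
rq_lists = {cpu: [] for cpu in range(NR_CPUS)}
rq_lists[2] = ["rq_a", "rq_b"]   # requests queued on cpu2's ctx

def hctx_notify_dead(dead_cpu, hctx_num):
    """Simplified stand-in for blk_mq_hctx_notify_dead(): splice the dead
    cpu's pending requests onto the given hctx's dispatch list."""
    moved, rq_lists[dead_cpu] = rq_lists[dead_cpu], []
    return {"hctx": hctx_num, "moved": moved}

# The callback is invoked once per hctx.  If the first invocation happens
# for hctx 3 and (wrongly) claims cpu2's ctx, the requests land on hctx 3
# even though ctx_to_hctx[2] == 2.
first_call = hctx_notify_dead(dead_cpu=2, hctx_num=3)
assert first_call["hctx"] != ctx_to_hctx[2]   # moved to the wrong hctx
print(first_call)
```

This only models the symptom (requests spliced onto a hctx the ctx is not
mapped to); the real fix is in the kernel's cpu-hotplug callback ordering.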
This pa
-linus&id=bcc816dfe51ab86ca94663c7b225f2d6eb0fddb9
Dongli Zhang
>
> Signed-off-by: Shenghui Wang
> ---
> block/blk-mq.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index f3b0d33bdf88..e5d32f186e81 100644
> --- a/block/blk-mq.c
>
r all inflight requests to
complete, is it possible to re-map (migrate) those requests to another hctx whose
cpus (ctx) are still online, e.g., to extract the bios and re-map those bios to
another ctx (e.g., cpu0)?
One drawback I can see is that if 63 out of 64 cpus suddenly go offline, cpu0
would be stuck.
Dongli Zhang
ad of making __blk_mq_update_nr_hw_queues() wait until
> q->q_usage_counter == 0 is globally visible, do not iterate over tags
> if the request queue is frozen.
>
> Cc: Christoph Hellwig
> Cc: Hannes Reinecke
> Cc: James Smart
> Cc: Ming Lei
> Cc: Jianchao Wang
> Cc: Keith
On 3/22/19 7:26 AM, Omar Sandoval wrote:
> On Fri, Mar 15, 2019 at 10:00:27AM +0800, Dongli Zhang wrote:
>>
>>
>> On 3/15/19 1:55 AM, Omar Sandoval wrote:
>>> On Thu, Mar 14, 2019 at 07:45:17PM +0800, Dongli Zhang wrote:
>>>> loop/001 does n
log printed to /var/log/syslog, and I would assume blkid is not
involved.
This is just a test on Ubuntu 16.04.5. I am not sure about the environment of syzbot.
Dongli Zhang
On 3/15/19 1:55 AM, Omar Sandoval wrote:
> On Thu, Mar 14, 2019 at 07:45:17PM +0800, Dongli Zhang wrote:
>> loop/001 does not test whether all partitions are removed successfully
>> during loop device partition scanning. As a result, the regression
>> introduced by 0da
left is 0.
Signed-off-by: Dongli Zhang
---
tests/loop/001 | 5 +
tests/loop/001.out | 1 +
2 files changed, 6 insertions(+)
diff --git a/tests/loop/001 b/tests/loop/001
index 47f760a..a0326b7 100755
--- a/tests/loop/001
+++ b/tests/loop/001
@@ -4,6 +4,9 @@
#
# Test loop device
Signed-off-by: Dongli Zhang
---
tests/loop/001 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/loop/001 b/tests/loop/001
index b70e8c0..47f760a 100755
--- a/tests/loop/001
+++ b/tests/loop/001
@@ -2,7 +2,7 @@
# SPDX-License-Identifier: GPL-3.0+
# Copyright (C) 2017
git, there is no call of
nvme_write_zeros().
Perhaps some special configuration is required to trigger
nvme_write_zeros() on purpose during "git clone" so that
nvme_cmd_write_zeroes is involved on the kernel side?
My test nvme image is only about 5GB.
Dongli Zhang
>
> QEMU versio
Replace set->map[0] with set->map[HCTX_TYPE_DEFAULT] to avoid hardcoding.
Signed-off-by: Dongli Zhang
---
block/blk-mq.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 54535f4..4e502db 100644
--- a/block/blk-mq.c
x deadlock when calling blkdev_reread_part()")
Signed-off-by: Dongli Zhang
Reviewed-by: Jan Kara
---
Changed since v1:
* move the setting of lo->lo_state to Lo_unbound after partscan has finished
as well
(suggested by Jan Kara)
Changed since v2:
* Put the code inline in __loop_clr
Do not print warn message when the partition scan returns 0.
Fixes: d57f3374ba48 ("loop: Move special partition reread handling in
loop_clr_fd()")
Signed-off-by: Dongli Zhang
Reviewed-by: Jan Kara
---
drivers/block/loop.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
e.
Changed since v1:
* move the setting of lo->lo_state to Lo_unbound after partscan has finished
as well
(suggested by Jan Kara)
Changed since v2:
* Put the code inline in __loop_clr_fd() but not introduce new function
(suggested by Jan Kara)
Dongli Zhang
On 02/22/2019 07:47 PM, Jan Kara wrote:
> On Thu 21-02-19 23:33:43, Dongli Zhang wrote:
>> Commit 0da03cab87e6
>> ("loop: Fix deadlock when calling blkdev_reread_part()") moves
>> blkdev_reread_part() out of the loop_ctl_mutex. However,
>>
x deadlock when calling blkdev_reread_part()")
Signed-off-by: Dongli Zhang
---
Changed since v1:
* move the setting of lo->lo_state to Lo_unbound after partscan has finished
as well
(suggested by Jan Kara)
drivers/block/loop.c | 26 ++
1 file changed, 22 ins
e.
Changed since v1:
* move the setting of lo->lo_state to Lo_unbound after partscan has finished
as well
(suggested by Jan Kara)
Dongli Zhang
On 02/21/2019 07:30 PM, Jan Kara wrote:
> On Thu 21-02-19 12:17:35, Dongli Zhang wrote:
>> Commit 0da03cab87e6
>> ("loop: Fix deadlock when calling blkdev_reread_part()") moves
>> blkdev_reread_part() out of the loop_ctl_mutex. However,
>>
x deadlock when calling blkdev_reread_part()")
Signed-off-by: Dongli Zhang
---
drivers/block/loop.c | 15 ---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 7908673..736e55b 100644
--- a/drivers/block/loop.c
+++ b/
Do not print warn message when the partition scan returns 0.
Fixes: d57f3374ba48 ("loop: Move special partition reread handling in
loop_clr_fd()")
Signed-off-by: Dongli Zhang
---
drivers/block/loop.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/bl
on scan is successful.
[PATCH 2/2] loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part()
[PATCH 2/2] fixes 0da03cab87e6 ("loop: Fix deadlock when calling
blkdev_reread_part()") so that GENHD_FL_NO_PART_SCAN is not set before the
partition scan when detaching the loop device from the file.
Thank yo
Hi Konrad,
On 1/17/19 11:29 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 15, 2019 at 09:20:36AM +0100, Roger Pau Monné wrote:
>> On Tue, Jan 15, 2019 at 12:41:44AM +0800, Dongli Zhang wrote:
>>> The xenstore 'ring-page-order' is used globally for each blkback queue
Hi Roger,
On 01/17/2019 04:20 PM, Roger Pau Monné wrote:
> On Thu, Jan 17, 2019 at 10:10:00AM +0800, Dongli Zhang wrote:
>> Hi Roger,
>>
>> On 2019/1/16 10:52 PM, Roger Pau Monné wrote:
>>> On Wed, Jan 16, 2019 at 09:47:41PM +0800, Dongli Zhang wrote:
&g
Hi Roger,
On 2019/1/16 10:52 PM, Roger Pau Monné wrote:
> On Wed, Jan 16, 2019 at 09:47:41PM +0800, Dongli Zhang wrote:
>> There is no need to wake up xen_blkif_schedule() as kthread_stop() is already
>> able to wake up the kernel thread.
>>
>> Signed-off-by: Dongli Zhan
On 2019/1/17 12:32 AM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 08, 2019 at 04:24:32PM +0800, Dongli Zhang wrote:
>> oops. Please ignore this v5 patch.
>>
>> I just realized Linus suggested in an old email not to use BUG()/BUG_ON() in
>> the code.
>>
>> I
There is no need to wake up xen_blkif_schedule() as kthread_stop() is already
able to wake up the kernel thread.
Signed-off-by: Dongli Zhang
---
drivers/block/xen-blkback/xenbus.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/block/xen-blkback/xenbus.c
b
ing() to read xenstore 'ring-page-order' only
once.
Signed-off-by: Dongli Zhang
---
Changed since v1:
* change the order of xenstore read in read_per_ring_refs
* use xenbus_read_unsigned() in connect_ring()
Changed since v2:
* simplify the condition check as "(err != 1 &
As 'be->blkif' is used many times in connect_ring(), the stack variable
'blkif' is added to substitute for 'be->blkif'.
Suggested-by: Paul Durrant
Signed-off-by: Dongli Zhang
Reviewed-by: Paul Durrant
Reviewed-by: Roger Pau Monné
---
driver
Hi Roger,
On 01/07/2019 11:27 PM, Roger Pau Monné wrote:
> On Mon, Jan 07, 2019 at 10:07:34PM +0800, Dongli Zhang wrote:
>>
>>
>> On 01/07/2019 10:05 PM, Dongli Zhang wrote:
>>>
>>>
>>> On 01/07/2019 08:01 PM, Roger Pau Monné wrote:
>>>
oops. Please ignore this v5 patch.
I just realized Linus suggested in an old email not to use BUG()/BUG_ON() in the
code.
I will switch to the WARN() solution and resend again.
Sorry for the junk email.
Dongli Zhang
On 2019/1/8 4:15 PM, Dongli Zhang wrote:
> The xenstore 'ring-page-o
On 01/07/2019 10:05 PM, Dongli Zhang wrote:
>
>
> On 01/07/2019 08:01 PM, Roger Pau Monné wrote:
>> On Mon, Jan 07, 2019 at 01:35:59PM +0800, Dongli Zhang wrote:
>>> The xenstore 'ring-page-order' is used globally for each blkback queue and
>>> there
On 01/07/2019 08:01 PM, Roger Pau Monné wrote:
> On Mon, Jan 07, 2019 at 01:35:59PM +0800, Dongli Zhang wrote:
>> The xenstore 'ring-page-order' is used globally for each blkback queue and
>> therefore should be read from xenstore only once. However, it is obtained
As 'be->blkif' is used many times in connect_ring(), the stack variable
'blkif' is added to substitute for 'be->blkif'.
Suggested-by: Paul Durrant
Signed-off-by: Dongli Zhang
---
drivers/block/xen-blkback/xenbus.c | 27 ++-
1 file change
Ping?
Dongli Zhang
On 12/19/2018 09:23 PM, Dongli Zhang wrote:
> The xenstore 'ring-page-order' is used globally for each blkback queue and
> therefore should be read from xenstore only once. However, it is obtained
> in read_per_ring_refs() which might be called multip
On 12/18/2018 11:13 PM, Roger Pau Monné wrote:
> On Tue, Dec 18, 2018 at 07:31:59PM +0800, Dongli Zhang wrote:
>> Hi Roger,
>>
>> On 12/18/2018 05:33 PM, Roger Pau Monné wrote:
>>> On Tue, Dec 18, 2018 at 08:55:38AM +0800, Dongli Zhang wrote:
>>>> The x
Hi Roger,
On 12/18/2018 05:33 PM, Roger Pau Monné wrote:
> On Tue, Dec 18, 2018 at 08:55:38AM +0800, Dongli Zhang wrote:
>> The xenstore 'ring-page-order' is used globally for each blkback queue and
>> therefore should be read from xenstore only once. Howe
ing() to read xenstore 'ring-page-order' only
once.
Signed-off-by: Dongli Zhang
---
Changed since v1:
* change the order of xenstore read in read_per_ring_refs(suggested by Roger
Pau Monne)
* use xenbus_read_unsigned() in connect_ring() (suggested by Roger Pau Monne
On 12/07/2018 11:15 PM, Paul Durrant wrote:
>> -Original Message-
>> From: Dongli Zhang [mailto:dongli.zh...@oracle.com]
>> Sent: 07 December 2018 15:10
>> To: Paul Durrant ; linux-ker...@vger.kernel.org;
>> xen-de...@lists.xenproject.org; linux-b
Hi Paul,
On 12/07/2018 05:39 PM, Paul Durrant wrote:
>> -Original Message-
>> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
>> Of Dongli Zhang
>> Sent: 07 December 2018 04:18
>> To: linux-ker...@vger.kernel.org; xen-de...@list
ing() to read xenstore 'ring-page-order' only
once.
Signed-off-by: Dongli Zhang
---
drivers/block/xen-blkback/xenbus.c | 49 --
1 file changed, 31 insertions(+), 18 deletions(-)
diff --git a/drivers/block/xen-blkback/xenbus.c
b/drivers/block/xen-blk
Hi Manjunath,
On 12/04/2018 10:49 AM, Manjunath Patil wrote:
> On 12/3/2018 6:16 PM, Boris Ostrovsky wrote:
>
>> On 12/3/18 8:14 PM, Dongli Zhang wrote:
>>> Hi Boris,
>>>
>>> On 12/04/2018 12:07 AM, Boris Ostrovsky wrote:
>>>> On 12/2/18 3:31
mq(), we
>> might hit it in setup_blkring() as well.
>> We should add the similar change to blkif_sring struct as well.
>
>
> Won't you have a similar issue with other frontends, say, netfront?
I think the kmalloc failed not because of OOM.
In fact, the size of "blkfront_ring_info" is large. When domU has 4
queues/rings, the size of 4 blkfront_ring_info structs can be about 300+ KB.
There is a chance that a kmalloc() of 300+ KB would fail.
As for netfront, a kmalloc() of 8 'struct netfront_queue' seems to consume <70 KB?
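As a back-of-envelope check (the ~75 KB per-ring figure is an assumption
inferred from the 300+ KB total above, not a measured value), the allocation
order such a kmalloc() needs can be computed:

```python
# Back-of-envelope check: a single kmalloc() for 4 rings of ~75 KB each
# needs a physically contiguous buddy block, whose order can fail to be
# satisfiable under fragmentation even with plenty of free memory.
import math

PAGE_SIZE = 4096
ring_info_size = 75 * 1024          # assumed per-ring size (~75 KB)
nr_rings = 4

total = ring_info_size * nr_rings   # one kmalloc() covering all rings
pages = math.ceil(total / PAGE_SIZE)
order = math.ceil(math.log2(pages)) # buddy allocator order required

print(total // 1024, "KB ->", pages, "pages -> order", order)
```

So a ~300 KB allocation already requires an order-7 (512 KB) contiguous
block, which is much harder to satisfy than the handful of pages netfront
needs per queue.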
Dongli Zhang
Hi Daniel,
Other than crosvm, is there any version of qemu (e.g., work-in-progress
repositories on github) where I can try this feature?
Thank you very much!
Dongli Zhang
On 11/02/2018 06:40 AM, Daniel Verkamp wrote:
> From: Changpeng Liu
>
> In commit 88c85538, "v
tutorial?
Dongli Zhang
On 10/12/2018 12:59 AM, Jens Axboe wrote:
> We're planning on removing this code completely, kill the old
> path.
>
> Signed-off-by: Jens Axboe
> ---
> drivers/block/null_blk_main.c | 96 +++
> 1 file changed, 6 i
_queues - 2) << 16));
> returns 0x60006 when num_queues is 8.
Finally, nr_io_queues is set to 6+1=7 in nvme_set_queue_count() in the VM kernel.
I do not know how to phrase this in nvme terms.
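For what it's worth, the arithmetic can be sketched as follows (the bit layout
is the zero-based queue counts from the NVMe "Number of Queues" feature result;
the min()+1 step mirrors what nvme_set_queue_count() does, as I understand it):

```python
# Sketch of the arithmetic above.  In the NVMe "Number of Queues" feature,
# the completion result encodes the allocated queue counts zero-based:
# bits 15:0  = submission queues allocated - 1 (NSQA)
# bits 31:16 = completion queues allocated - 1 (NCQA)
# With num_queues=8 on the QEMU side, the device reports (num_queues - 2)
# in both fields.
num_queues = 8                      # QEMU -device nvme,num_queues=8

result = ((num_queues - 2) << 16) | (num_queues - 2)
print(hex(result))                  # 0x60006

# What the host then derives: take the smaller of the two zero-based
# counts and add 1 to get the usable number of I/O queues.
nsqa = result & 0xffff
ncqa = (result >> 16) & 0xffff
nr_io_queues = min(nsqa, ncqa) + 1
print(nr_io_queues)                 # 7
```

That is, 0x60006 is just "6 completion queues and 6 submission queues,
zero-based", and 6+1=7 is the resulting nr_io_queues.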
Dongli Zhang
On 10/08/2018 01:59 PM, Dongli Zhang wrote:
> I can reproduce with q
enable-kvm \
-net nic -net user,hostfwd=tcp::5022-:22 \
-device nvme,drive=nvme1,serial=deadbeaf1,num_queues=8 \
-drive file=/path-to-img/nvme.disk,if=none,id=nvme1
Dongli Zhang
On 10/08/2018 01:05 PM, Prasun Ratn wrote:
> Hi
>
> I have an NVMe SSD that has 8 hw
ia xen event
channel).
I have an extra basic question, perhaps not related to this patch:
why not delay the other cases in softirq as well? (Perhaps this is a question
about mq rather than about this patch.)
Thank you very much!
Dongli Zhang
>
> So if all IOs are completed in hardirq context, i