Re: [PATCH for-6.0 v2 6/8] hw/block/nvme: update dmsrl limit on namespace detachment

2021-04-09 Thread Thomas Huth
On 06/04/2021 09.24, Klaus Jensen wrote: On Apr 6 09:10, Philippe Mathieu-Daudé wrote: On 4/5/21 7:54 PM, Klaus Jensen wrote: From: Klaus Jensen The Non-MDTS DMSRL limit must be recomputed when namespaces are detached. Fixes: 645ce1a70cb6 ("hw/block/nvme: support namespace attachment

Re: General question about parsing an rbd filename

2021-04-09 Thread Connor Kuehl
On 4/9/21 9:27 AM, Markus Armbruster wrote: Connor Kuehl writes: block/rbd.c hints that: * Configuration values containing :, @, or = can be escaped with a * leading "\". Right now, much of the parsing code will allow anyone to escape _anything_ so long as it's preceded by '\'. Is

Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Klaus Jensen
On Apr 10 00:30, Keith Busch wrote: On Fri, Apr 09, 2021 at 01:55:01PM +0200, Klaus Jensen wrote: On Apr 9 20:05, Minwoo Im wrote: > On 21-04-09 13:14:02, Gollu Appalanaidu wrote: > > NSZE is the total size of the namespace in logical blocks. So the max > > addressable logical block is NLB

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread John Snow
On 4/9/21 5:57 AM, Max Reitz wrote: Just as a PS, in a reply to one of Vladimir’s mails (da048f58-43a6-6811-6ad2-0d7899737...@redhat.com) I was wondering whether it even makes sense for mirror to do all the stuff it does in mirror_complete() to do it there.  Aren’t all of those things that

Re: [RFC PATCH v2 02/11] python: qemu: pass the wrapper field from QEMUQtestmachine to QEMUMachine

2021-04-09 Thread John Snow
On 4/9/21 12:07 PM, Emanuele Giuseppe Esposito wrote: diff --git a/python/qemu/machine.py b/python/qemu/machine.py index c721e07d63..18d32ebe45 100644 --- a/python/qemu/machine.py +++ b/python/qemu/machine.py @@ -109,7 +109,7 @@ def __init__(self,   self._binary = binary  

[PULL 08/10] mirror: Do not enter a paused job on completion

2021-04-09 Thread Kevin Wolf
From: Max Reitz Currently, it is impossible to complete jobs on standby (i.e. paused ready jobs), but actually the only thing in mirror_complete() that does not work quite well with a paused job is the job_enter() at the end. If we make it conditional, this function works just fine even if the

[PULL 10/10] test-blockjob: Test job_wait_unpaused()

2021-04-09 Thread Kevin Wolf
From: Max Reitz Create a job that remains on STANDBY after a drained section, and see that invoking job_wait_unpaused() will get it unstuck. Signed-off-by: Max Reitz Message-Id: <20210409120422.144040-5-mre...@redhat.com> Signed-off-by: Kevin Wolf --- tests/unit/test-blockjob.c | 121

[PULL 06/10] hw/block/fdc: Fix 'fallback' property on sysbus floppy disk controllers

2021-04-09 Thread Kevin Wolf
From: Philippe Mathieu-Daudé Setting the 'fallback' property corrupts the QOM instance state (FDCtrlSysBus) because it accesses an incorrect offset (it uses the offset of the FDCtrlISABus state). Cc: qemu-sta...@nongnu.org Fixes: a73275dd6fc ("fdc: Add fallback option") Signed-off-by: Philippe

[PULL 07/10] mirror: Move open_backing_file to exit_common

2021-04-09 Thread Kevin Wolf
From: Max Reitz This is a graph change and therefore should be done in job-finalize (which is what invokes mirror_exit_common()). Signed-off-by: Max Reitz Message-Id: <20210409120422.144040-2-mre...@redhat.com> Signed-off-by: Kevin Wolf --- block/mirror.c | 22 -- 1 file

[PULL 09/10] job: Allow complete for jobs on standby

2021-04-09 Thread Kevin Wolf
From: Max Reitz The only job that implements .complete is the mirror job, and it can handle completion requests just fine while the job is paused. Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1945635 Signed-off-by: Max Reitz Message-Id: <20210409120422.144040-4-mre...@redhat.com>

[PULL 00/10] Block layer fixes for 6.0-rc3

2021-04-09 Thread Kevin Wolf
The following changes since commit ce69aa92d71e13db9c3702a8e8305e8d2463aeb8: Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2021-04-08 16:45:31 +0100) are available in the Git repository at: git://repo.or.cz/qemu/kevin.git tags/for-upstream for you to

Re: [PATCH 0/4] job: Allow complete for jobs on standby

2021-04-09 Thread Kevin Wolf
On 09.04.2021 at 14:04, Max Reitz wrote: > Hi, > > We sometimes have a problem with jobs remaining on STANDBY after a drain > (for a short duration), so if the user immediately issues a > block-job-complete, that will fail. > > (See also >

[PULL 03/10] iotests/qsd-jobs: Filter events in the first test

2021-04-09 Thread Kevin Wolf
From: Max Reitz The job may or may not be ready before the 'quit' is issued. Whether it is or not is irrelevant; for the purpose of the test, it only needs to still be there. Filter the job status change and READY events from the output so it becomes reliable. Reported-by: Peter Maydell

[PULL 05/10] iotests: Test mirror-top filter permissions

2021-04-09 Thread Kevin Wolf
From: Max Reitz Add a test accompanying commit 53431b9086b2832ca1aeff0c55e186e9ed79bd11 ("block/mirror: Fix mirror_top's permissions"). Signed-off-by: Max Reitz Message-Id: <20210331122815.51491-1-mre...@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy Reviewed-by: Eric Blake

[PULL 04/10] iotests: add test for removing persistent bitmap from backing file

2021-04-09 Thread Kevin Wolf
From: Vladimir Sementsov-Ogievskiy Just demonstrate one of x-blockdev-reopen usecases. We can't simply remove persistent bitmap from RO node (for example from backing file), as we need to remove it from the image too. So, we should reopen the node first. Signed-off-by: Vladimir

[PULL 02/10] block/rbd: fix memory leak in qemu_rbd_co_create_opts()

2021-04-09 Thread Kevin Wolf
From: Stefano Garzarella When we allocate 'q_namespace', we forgot to set 'has_q_namespace' to true. This can cause several issues, including a memory leak, since qapi_free_BlockdevCreateOptions() does not deallocate that memory, as reported by valgrind: 13 bytes in 1 blocks are definitely

[PULL 01/10] block/rbd: fix memory leak in qemu_rbd_connect()

2021-04-09 Thread Kevin Wolf
From: Stefano Garzarella In qemu_rbd_connect(), 'mon_host' is allocated by qemu_rbd_mon_host() using g_strjoinv(), but it's only freed in the error path, leaking memory in the success path as reported by valgrind: 80 bytes in 4 blocks are definitely lost in loss record 5,028 of 6,516 at

[PATCH for-6.0 1/2] block/nbd: fix channel object leak

2021-04-09 Thread Roman Kagan
nbd_free_connect_thread leaks the channel object if it hasn't been stolen. Unref it and fix the leak. Signed-off-by: Roman Kagan --- block/nbd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/block/nbd.c b/block/nbd.c index c26dc5a54f..d86df3afcb 100644 --- a/block/nbd.c +++ b/block/nbd.c

Re: [RFC PATCH v2 04/11] qemu-iotests: delay QMP socket timers

2021-04-09 Thread Emanuele Giuseppe Esposito
On 08/04/2021 21:03, Paolo Bonzini wrote: On Thu, 8 Apr 2021, 18:06, Emanuele Giuseppe Esposito wrote: On 08/04/2021 17:40, Paolo Bonzini wrote: > On 07/04/21 15:50, Emanuele Giuseppe Esposito wrote: >>   def

[PATCH for-6.0 0/2] block/nbd: assorted bugfixes

2021-04-09 Thread Roman Kagan
A couple of bugfixes to block/nbd that look appropriate for 6.0. Roman Kagan (2): block/nbd: fix channel object leak block/nbd: ensure ->connection_thread is always valid block/nbd.c | 59 +++-- 1 file changed, 30 insertions(+), 29

Re: [RFC PATCH v2 02/11] python: qemu: pass the wrapper field from QEMUQtestmachine to QEMUMachine

2021-04-09 Thread Emanuele Giuseppe Esposito
diff --git a/python/qemu/machine.py b/python/qemu/machine.py index c721e07d63..18d32ebe45 100644 --- a/python/qemu/machine.py +++ b/python/qemu/machine.py @@ -109,7 +109,7 @@ def __init__(self,   self._binary = binary   self._args = list(args) -    self._wrapper = wrapper + 

[PATCH for-6.0 2/2] block/nbd: ensure ->connection_thread is always valid

2021-04-09 Thread Roman Kagan
Simplify lifetime management of BDRVNBDState->connection_thread by delaying the possible cleanup of it until the BDRVNBDState itself goes away. This also fixes possible use-after-free in nbd_co_establish_connection when it races with nbd_co_establish_connection_cancel. Signed-off-by: Roman Kagan

Re: [RFC PATCH v2 01/11] python: qemu: add timer parameter for qmp.accept socket

2021-04-09 Thread Emanuele Giuseppe Esposito
--- a/python/qemu/qtest.py +++ b/python/qemu/qtest.py @@ -138,9 +138,9 @@ def _pre_launch(self) -> None:   super()._pre_launch()   self._qtest = QEMUQtestProtocol(self._qtest_path, server=True) -    def _post_launch(self) -> None: +    def _post_launch(self, timer) -> None:

Re: [PATCH v3] hw/block/nvme: add device self test command support

2021-04-09 Thread Keith Busch
On Wed, Mar 31, 2021 at 02:54:27PM +0530, Gollu Appalanaidu wrote: > This is to add support for Device Self Test Command (DST) and > DST Log Page. Refer NVM Express specification 1.4b section 5.8 > ("Device Self-test command") Please don't write change logs that just say what you did. I can read

Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Keith Busch
On Fri, Apr 09, 2021 at 01:55:01PM +0200, Klaus Jensen wrote: > On Apr 9 20:05, Minwoo Im wrote: > > On 21-04-09 13:14:02, Gollu Appalanaidu wrote: > > > NSZE is the total size of the namespace in logical blocks. So the max > > > addressable logical block is NLB minus 1. So your starting logical

[PATCH v3 2/2] block/rbd: Add an escape-aware strchr helper

2021-04-09 Thread Connor Kuehl
Sometimes the parser needs to further split a token it has collected from the token input stream. Right now, it does a cursory check to see if the relevant characters appear in the token to determine if it should break it down further. However, qemu_rbd_next_tok() will escape characters as it
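The idea of an escape-aware `strchr` can be sketched in a few lines; this is a generic illustration of the technique, not the helper actually added by the patch:

```c
#include <assert.h>
#include <stddef.h>

/* Return the first unescaped occurrence of 'c' in 's', treating a
 * preceding backslash as an escape; NULL if none is found. */
static const char *strchr_unescaped(const char *s, char c)
{
    for (; *s; s++) {
        if (*s == '\\' && s[1] != '\0') {
            s++;            /* skip the escaped character */
            continue;
        }
        if (*s == c) {
            return s;
        }
    }
    return NULL;
}
```

With such a helper, the parser can decide whether a token really contains a delimiter before splitting it further, instead of matching escaped characters by accident.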

[PATCH v3 1/2] iotests/231: Update expected deprecation message

2021-04-09 Thread Connor Kuehl
The deprecation message in the expected output has technically been wrong since the wrong version of a patch was applied to it. Because of this, the test fails. Correct the expected output so that it passes. Signed-off-by: Connor Kuehl Reviewed-by: Max Reitz --- tests/qemu-iotests/231.out | 4

[PATCH v3 0/2] Fix segfault in qemu_rbd_parse_filename

2021-04-09 Thread Connor Kuehl
Connor Kuehl (2): iotests/231: Update expected deprecation message block/rbd: Add an escape-aware strchr helper block/rbd.c| 20 ++-- tests/qemu-iotests/231 | 4 tests/qemu-iotests/231.out | 7 --- 3 files changed, 26 insertions(+), 5

Re: [PATCH v2 03/10] util/async: aio_co_enter(): do aio_co_schedule in general case

2021-04-09 Thread Roman Kagan
On Thu, Apr 08, 2021 at 06:54:30PM +0300, Roman Kagan wrote: > On Thu, Apr 08, 2021 at 05:08:20PM +0300, Vladimir Sementsov-Ogievskiy wrote: > > With the following patch we want to call aio_co_wake() from thread. > > And it works bad. > > Assume we have no iothreads. > > Assume we have a coroutine

Re: General question about parsing an rbd filename

2021-04-09 Thread Markus Armbruster
Connor Kuehl writes: > Hi, > > block/rbd.c hints that: > >> * Configuration values containing :, @, or = can be escaped with a >> * leading "\". > > Right now, much of the parsing code will allow anyone to escape > _anything_ so long as it's preceded by '\'. > > Is this the intended behavior?

Re: [PATCH v2 2/2] block/rbd: Add an escape-aware strchr helper

2021-04-09 Thread Max Reitz
On 09.04.21 16:05, Connor Kuehl wrote: On 4/6/21 9:24 AM, Max Reitz wrote: On 01.04.21 23:01, Connor Kuehl wrote: [..] diff --git a/block/rbd.c b/block/rbd.c index 9071a00e3f..c0e4d4a952 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -134,6 +134,22 @@ static char *qemu_rbd_next_tok(char *src,

Re: [PATCH v2 2/2] block/rbd: Add an escape-aware strchr helper

2021-04-09 Thread Connor Kuehl
On 4/6/21 9:24 AM, Max Reitz wrote: On 01.04.21 23:01, Connor Kuehl wrote: [..] diff --git a/block/rbd.c b/block/rbd.c index 9071a00e3f..c0e4d4a952 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -134,6 +134,22 @@ static char *qemu_rbd_next_tok(char *src, char delim, char **p)   return src;

Re: [PATCH v2 3/3] iotests/041: block-job-complete on user-paused job

2021-04-09 Thread Max Reitz
On 09.04.21 15:29, Max Reitz wrote: Expand test_pause() to check what happens when issuing block-job-complete on a job that is on STANDBY because it has been paused by the user. (This should be an error, and in particular not hang job_wait_unpaused().) Signed-off-by: Max Reitz ---

Re: iotests 041 intermittent failure (netbsd)

2021-04-09 Thread Philippe Mathieu-Daudé
On 4/9/21 1:37 PM, Kevin Wolf wrote: > On 09.04.2021 at 12:31, Daniel P. Berrangé wrote: >> On Fri, Apr 09, 2021 at 12:22:26PM +0200, Philippe Mathieu-Daudé wrote: >>> On 4/9/21 11:43 AM, Peter Maydell wrote: Just hit this (presumably intermittent) 041 failure running the

[PATCH v2 2/3] test-blockjob: Test job_wait_unpaused()

2021-04-09 Thread Max Reitz
Create a job that remains on STANDBY after a drained section, and see that invoking job_wait_unpaused() will get it unstuck. Signed-off-by: Max Reitz --- tests/unit/test-blockjob.c | 140 + 1 file changed, 140 insertions(+) diff --git

[PATCH v2 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Max Reitz
block-job-complete can only be applied when the job is READY, not when it is on STANDBY (ready, but paused). Draining a job technically pauses it (which makes a READY job enter STANDBY), and ending the drained section does not synchronously resume it, but only schedules the job, which will then
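The mechanism described — the drained section ends, the resume is only scheduled, and a waiter must poll until the job actually unpauses — can be modeled with a toy event loop. This is a sketch of the idea only; `DemoJob` and `demo_poll` are not QEMU's job or AIO APIs:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy job: a drain bumps pause_count, and ending the drained section
 * only schedules the resume rather than performing it synchronously. */
typedef struct {
    int pause_count;
    bool resume_scheduled;
} DemoJob;

/* One iteration of the event loop: run any scheduled work. */
static void demo_poll(DemoJob *job)
{
    if (job->resume_scheduled) {
        job->resume_scheduled = false;
        job->pause_count--;
    }
}

/* Sketch of the job_wait_unpaused() idea: poll until the scheduled
 * resume has actually run; returns the number of iterations needed. */
static int demo_job_wait_unpaused(DemoJob *job)
{
    int iterations = 0;
    while (job->pause_count > 0) {
        demo_poll(job);
        iterations++;
    }
    return iterations;
}
```

A job that was never paused passes through the wait immediately, while one still in STANDBY is unstuck by letting the event loop run its scheduled resume.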

[PATCH v2 4/3] iotests: Test completion immediately after drain

2021-04-09 Thread Max Reitz
Test what happens when you have multiple busy block jobs, drain all (via an empty transaction), and immediately issue a block-job-complete on one of the jobs. Sometimes it will still be in STANDBY, in which case block-job-complete used to fail. It should not. Signed-off-by: Max Reitz ---

[PATCH v2 3/3] iotests/041: block-job-complete on user-paused job

2021-04-09 Thread Max Reitz
Expand test_pause() to check what happens when issuing block-job-complete on a job that is on STANDBY because it has been paused by the user. (This should be an error, and in particular not hang job_wait_unpaused().) Signed-off-by: Max Reitz --- tests/qemu-iotests/041 | 13 - 1

[PATCH v2 0/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Max Reitz
Hi, v1: https://lists.nongnu.org/archive/html/qemu-block/2021-04/msg00215.html Alternative: https://lists.nongnu.org/archive/html/qemu-block/2021-04/msg00261.html Compared to v1, I’ve added an aio_wait_kick() to job_pause_point() (as suggested by Kevin) and adjusted the error message on

Re: [PATCH 2/2] hw/block/nvme: drain namespaces on sq deletion

2021-04-09 Thread Gollu Appalanaidu
On Thu, Apr 08, 2021 at 09:37:09PM +0200, Klaus Jensen wrote: From: Klaus Jensen For most commands, when issuing an AIO, the BlockAIOCB is stored in the NvmeRequest aiocb pointer when the AIO is issued. The main use of this is cancelling AIOs when deleting submission queues (it is currently

Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Minwoo Im
On 21-04-09 14:36:19, Klaus Jensen wrote: > On Apr 9 21:31, Minwoo Im wrote: > > On 21-04-09 13:55:01, Klaus Jensen wrote: > > > On Apr 9 20:05, Minwoo Im wrote: > > > > On 21-04-09 13:14:02, Gollu Appalanaidu wrote: > > > > > NSZE is the total size of the namespace in logical blocks. So the max

Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Klaus Jensen
On Apr 9 21:31, Minwoo Im wrote: On 21-04-09 13:55:01, Klaus Jensen wrote: On Apr 9 20:05, Minwoo Im wrote: > On 21-04-09 13:14:02, Gollu Appalanaidu wrote: > > NSZE is the total size of the namespace in logical blocks. So the max > > addressable logical block is NLB minus 1. So your starting

Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Minwoo Im
On 21-04-09 13:55:01, Klaus Jensen wrote: > On Apr 9 20:05, Minwoo Im wrote: > > On 21-04-09 13:14:02, Gollu Appalanaidu wrote: > > > NSZE is the total size of the namespace in logical blocks. So the max > > > addressable logical block is NLB minus 1. So your starting logical > > > block is equal

[PATCH 4/4] test-blockjob: Test job_wait_unpaused()

2021-04-09 Thread Max Reitz
Create a job that remains on STANDBY after a drained section, and see that invoking job_wait_unpaused() will get it unstuck. Signed-off-by: Max Reitz --- tests/unit/test-blockjob.c | 121 + 1 file changed, 121 insertions(+) diff --git

[PATCH 0/4] job: Allow complete for jobs on standby

2021-04-09 Thread Max Reitz
Hi, We sometimes have a problem with jobs remaining on STANDBY after a drain (for a short duration), so if the user immediately issues a block-job-complete, that will fail. (See also https://lists.nongnu.org/archive/html/qemu-block/2021-04/msg00215.html, which this series is an alternative for.)

[PATCH 3/4] job: Allow complete for jobs on standby

2021-04-09 Thread Max Reitz
The only job that implements .complete is the mirror job, and it can handle completion requests just fine while the job is paused. Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1945635 Signed-off-by: Max Reitz --- job.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff

[PATCH 1/4] mirror: Move open_backing_file to exit_common

2021-04-09 Thread Max Reitz
This is a graph change and therefore should be done in job-finalize (which is what invokes mirror_exit_common()). Signed-off-by: Max Reitz --- block/mirror.c | 22 -- 1 file changed, 8 insertions(+), 14 deletions(-) diff --git a/block/mirror.c b/block/mirror.c index

[PATCH 5/4] iotests: Test completion immediately after drain

2021-04-09 Thread Max Reitz
Test what happens when you have multiple busy block jobs, drain all (via an empty transaction), and immediately issue a block-job-complete on one of the jobs. Sometimes it will still be in STANDBY, in which case block-job-complete used to fail. It should not. Signed-off-by: Max Reitz ---

[PATCH 2/4] mirror: Do not enter a paused job on completion

2021-04-09 Thread Max Reitz
Currently, it is impossible to complete jobs on standby (i.e. paused ready jobs), but actually the only thing in mirror_complete() that does not work quite well with a paused job is the job_enter() at the end. If we make it conditional, this function works just fine even if the mirror job is

Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Klaus Jensen
On Apr 9 20:05, Minwoo Im wrote: On 21-04-09 13:14:02, Gollu Appalanaidu wrote: NSZE is the total size of the namespace in logical blocks. So the max addressable logical block is NLB minus 1. So if your starting logical block is equal to NSZE, it is out of range. Signed-off-by: Gollu

Re: [PATCH 2/2] hw/block/nvme: drain namespaces on sq deletion

2021-04-09 Thread Klaus Jensen
On Apr 9 20:09, Minwoo Im wrote: On 21-04-08 21:37:09, Klaus Jensen wrote: From: Klaus Jensen For most commands, when issuing an AIO, the BlockAIOCB is stored in the NvmeRequest aiocb pointer when the AIO is issued. The main use of this is cancelling AIOs when deleting submission queues (it

Re: iotests 041 intermittent failure (netbsd)

2021-04-09 Thread Kevin Wolf
On 09.04.2021 at 12:31, Daniel P. Berrangé wrote: > On Fri, Apr 09, 2021 at 12:22:26PM +0200, Philippe Mathieu-Daudé wrote: > > On 4/9/21 11:43 AM, Peter Maydell wrote: > > > Just hit this (presumably intermittent) 041 failure running > > > the build-and-test on the tests/vm netbsd setup.

Re: [PATCH 1/2] hw/block/nvme: store aiocb in compare

2021-04-09 Thread Minwoo Im
On 21-04-08 21:37:08, Klaus Jensen wrote: > From: Klaus Jensen > > nvme_compare() fails to store the aiocb from the blk_aio_preadv() call. > Fix this. > > Fixes: 0a384f923f51 ("hw/block/nvme: add compare command") > Cc: Gollu Appalanaidu > Signed-off-by: Klaus Jensen Reviewed-by: Minwoo Im

Re: [PATCH 2/2] hw/block/nvme: drain namespaces on sq deletion

2021-04-09 Thread Minwoo Im
On 21-04-08 21:37:09, Klaus Jensen wrote: > From: Klaus Jensen > > For most commands, when issuing an AIO, the BlockAIOCB is stored in the > NvmeRequest aiocb pointer when the AIO is issued. The main use of this > is cancelling AIOs when deleting submission queues (it is currently not > used for

Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Minwoo Im
On 21-04-09 13:14:02, Gollu Appalanaidu wrote: > NSZE is the total size of the namespace in logical blocks. So the max > addressable logical block is NLB minus 1. So if your starting logical > block is equal to NSZE, it is out of range. > > Signed-off-by: Gollu Appalanaidu > --- > hw/block/nvme.c

Re: [PATCH 1/2] hw/block/nvme: store aiocb in compare

2021-04-09 Thread Gollu Appalanaidu
On Thu, Apr 08, 2021 at 09:37:08PM +0200, Klaus Jensen wrote: From: Klaus Jensen nvme_compare() fails to store the aiocb from the blk_aio_preadv() call. Fix this. Fixes: 0a384f923f51 ("hw/block/nvme: add compare command") Cc: Gollu Appalanaidu Signed-off-by: Klaus Jensen --- hw/block/nvme.c

Re: iotests 041 intermittent failure (netbsd)

2021-04-09 Thread Daniel P . Berrangé
On Fri, Apr 09, 2021 at 12:22:26PM +0200, Philippe Mathieu-Daudé wrote: > On 4/9/21 11:43 AM, Peter Maydell wrote: > > Just hit this (presumably intermittent) 041 failure running > > the build-and-test on the tests/vm netbsd setup. Does it look > > familiar to anybody? > > This one is known as

Re: iotests 041 intermittent failure (netbsd)

2021-04-09 Thread Philippe Mathieu-Daudé
On 4/9/21 11:43 AM, Peter Maydell wrote: > Just hit this (presumably intermittent) 041 failure running > the build-and-test on the tests/vm netbsd setup. Does it look > familiar to anybody? This one is known as the mysterious failure:

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Max Reitz
On 09.04.21 12:07, Vladimir Sementsov-Ogievskiy wrote: 09.04.2021 12:51, Max Reitz wrote: On 08.04.21 19:26, Vladimir Sementsov-Ogievskiy wrote: 08.04.2021 20:04, John Snow wrote: On 4/8/21 12:58 PM, Vladimir Sementsov-Ogievskiy wrote: job-complete command is async. Can we instead just add a

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Kevin Wolf
On 09.04.2021 at 11:31, Max Reitz wrote: > On 08.04.21 18:55, John Snow wrote: > > On 4/8/21 12:20 PM, Max Reitz wrote: > > > +    /* Similarly, if the job is still drained, waiting will not > > > help either */ > > > +    if (job->pause_count > 0) { > > > +    error_setg(errp, "Job

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Vladimir Sementsov-Ogievskiy
09.04.2021 12:51, Max Reitz wrote: On 08.04.21 19:26, Vladimir Sementsov-Ogievskiy wrote: 08.04.2021 20:04, John Snow wrote: On 4/8/21 12:58 PM, Vladimir Sementsov-Ogievskiy wrote: job-complete command is async. Can we instead just add a boolean like job->completion_requested, and set it if

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Max Reitz
On 09.04.21 11:44, Kevin Wolf wrote: On 08.04.2021 at 18:55, John Snow wrote: On 4/8/21 12:20 PM, Max Reitz wrote: block-job-complete can only be applied when the job is READY, not when it is on STANDBY (ready, but paused). Draining a job technically pauses it (which makes a READY

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Max Reitz
On 08.04.21 19:26, Vladimir Sementsov-Ogievskiy wrote: 08.04.2021 20:04, John Snow wrote: On 4/8/21 12:58 PM, Vladimir Sementsov-Ogievskiy wrote: job-complete command is async. Can we instead just add a boolean like job->completion_requested, and set it if job-complete called in STANDBY

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Kevin Wolf
On 08.04.2021 at 18:55, John Snow wrote: > On 4/8/21 12:20 PM, Max Reitz wrote: > > block-job-complete can only be applied when the job is READY, not when > > it is on STANDBY (ready, but paused). Draining a job technically pauses > > it (which makes a READY job enter STANDBY), and

iotests 041 intermittent failure (netbsd)

2021-04-09 Thread Peter Maydell
Just hit this (presumably intermittent) 041 failure running the build-and-test on the tests/vm netbsd setup. Does it look familiar to anybody? TEST iotest-qcow2: 041 [fail] QEMU -- "/home/qemu/qemu-test.bx6kgg/build/tests/qemu-iotests/../../qemu-system-aarch64" -nodefaults -display

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Max Reitz
On 08.04.21 18:58, Vladimir Sementsov-Ogievskiy wrote: 08.04.2021 19:20, Max Reitz wrote: block-job-complete can only be applied when the job is READY, not when it is on STANDBY (ready, but paused).  Draining a job technically pauses it (which makes a READY job enter STANDBY), and ending the

Re: [PATCH for-6.0? 1/3] job: Add job_wait_unpaused() for block-job-complete

2021-04-09 Thread Max Reitz
On 08.04.21 18:55, John Snow wrote: On 4/8/21 12:20 PM, Max Reitz wrote: block-job-complete can only be applied when the job is READY, not when it is on STANDBY (ready, but paused).  Draining a job technically pauses it (which makes a READY job enter STANDBY), and ending the drained section

[PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based

2021-04-09 Thread Gollu Appalanaidu
NSZE is the total size of the namespace in logical blocks. So the max addressable logical block is NLB minus 1. So if your starting logical block is equal to NSZE, it is out of range. Signed-off-by: Gollu Appalanaidu --- hw/block/nvme.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff
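The off-by-one under discussion can be shown with a standalone range check. The helper below is an illustrative sketch, not the patched QEMU code; NVMe commands carry NLB as a zeroes-based field, so the 1-based block count is NLB + 1:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bounds check: NSZE is the namespace size in logical
 * blocks, so the last valid LBA is NSZE - 1.  A request starting at
 * SLBA == NSZE is therefore out of range even for a single block. */
static bool lba_range_valid(uint64_t slba, uint16_t nlb_zeroes_based,
                            uint64_t nsze)
{
    uint64_t nlb = (uint64_t)nlb_zeroes_based + 1;  /* 1-based count */

    /* phrased to avoid overflow in slba + nlb */
    return slba < nsze && nlb <= nsze - slba;
}
```

Writing the comparison as `nlb <= nsze - slba` (after checking `slba < nsze`) sidesteps the wraparound that a naive `slba + nlb > nsze` could hit with a huge SLBA.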

[PATCH v2] hw/block/nvme: map prp fix if prp2 contains non-zero offset

2021-04-09 Thread Padmakar Kalghatgi
From: padmakar nvme_map_prp needs to calculate the number of list entries based on the offset value. For the subsequent PRP2 list, we need to ensure the number of entries is within the MAX number of PRP entries for a page. Signed-off-by: Padmakar Kalghatgi --- -v2: removed extraneous

Re: [PATCH] hw/block/nvme: map prp fix if prp2 contains non-zero offset

2021-04-09 Thread Klaus Jensen
On Apr 9 06:38, Keith Busch wrote: On Thu, Apr 08, 2021 at 09:53:13PM +0530, Padmakar Kalghatgi wrote: +/* + * The first PRP list entry, pointed by PRP2 can contain + * offsets. Hence, we need to calculate the number of entries in + * prp2 based