Re: [PATCH v3 3/7] block: add max_hw_transfer to BlockLimits
On 16.06.21 15:18, Paolo Bonzini wrote:
> On 15/06/21 18:18, Max Reitz wrote:
>>> }
>>> +/* Returns the maximum hardware transfer length, in bytes; guaranteed nonzero */
>>> +uint64_t blk_get_max_hw_transfer(BlockBackend *blk)
>>> +{
>>> + BlockDriverState *bs = blk_bs(blk);
>>> + uint64_t max = INT_MAX;
>>> +
>>> + if (bs) {
>>> + max = MIN_NON_ZERO(bs->bl.max_hw_transfer, bs->bl.max_transfer);
>>> + }
>>> + return max;
>>
>> Both `max_hw_transfer` and `max_transfer` can be 0, so this could return 0, contrary to what the comment above promises.
>>
>> Should `max` be initialized to 0 with a `MIN_NON_ZERO(max, INT_MAX)` here (like `blk_get_max_transfer()` does it)?
>
> Yes, something to that effect.
>
>> (As for the rest, I think aligning to the request alignment makes sense, but then again we don’t do that for max_transfer either, so... this at least wouldn’t be a new bug.
>
> Ok, will do. I will also add a new patch to align max_transfer to the request alignment.
>
>> Regarding the comment, checkpatch complains about it, so it should be fixed so that /* is on its own line.
>
> That makes it different from every other comment in block_int.h though. Is it okay to fix all of them in a follow-up?

The reason it’s different is that the comment style in question was added to checkpatch only relatively recently.

I can’t speak for others, but I’m a simple person. I just do what makes checkpatch happy. :)

Given that the checkpatch complaint is only a warning, I think it’s OK to keep the comment as it is here, and perhaps optionally fix all comments in block_int.h in a follow-up. I don’t think we need to fix existing comments, but, well, it wouldn’t be wrong.

Max
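For readers following the thread, the agreed direction — start `max` at 0 and clamp with `MIN_NON_ZERO(max, INT_MAX)` on return, the way `blk_get_max_transfer()` does — would look roughly like this (a sketch only, not necessarily the exact code that lands in the next revision of the patch):

/* Returns the maximum hardware transfer length, in bytes; guaranteed nonzero */
uint64_t blk_get_max_hw_transfer(BlockBackend *blk)
{
    BlockDriverState *bs = blk_bs(blk);
    uint64_t max = 0;

    if (bs) {
        max = MIN_NON_ZERO(bs->bl.max_hw_transfer, bs->bl.max_transfer);
    }
    /* Clamp to INT_MAX; MIN_NON_ZERO also turns a 0 result into INT_MAX. */
    return MIN_NON_ZERO(max, INT_MAX);
}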
Re: [PATCH v3 3/7] block: add max_hw_transfer to BlockLimits
On 15/06/21 18:18, Max Reitz wrote:
>> }
>> +/* Returns the maximum hardware transfer length, in bytes; guaranteed nonzero */
>> +uint64_t blk_get_max_hw_transfer(BlockBackend *blk)
>> +{
>> + BlockDriverState *bs = blk_bs(blk);
>> + uint64_t max = INT_MAX;
>> +
>> + if (bs) {
>> + max = MIN_NON_ZERO(bs->bl.max_hw_transfer, bs->bl.max_transfer);
>> + }
>> + return max;
>
> Both `max_hw_transfer` and `max_transfer` can be 0, so this could return 0, contrary to what the comment above promises.
>
> Should `max` be initialized to 0 with a `MIN_NON_ZERO(max, INT_MAX)` here (like `blk_get_max_transfer()` does it)?

Yes, something to that effect.

> (As for the rest, I think aligning to the request alignment makes sense, but then again we don’t do that for max_transfer either, so... this at least wouldn’t be a new bug.

Ok, will do. I will also add a new patch to align max_transfer to the request alignment.

> Regarding the comment, checkpatch complains about it, so it should be fixed so that /* is on its own line.

That makes it different from every other comment in block_int.h though. Is it okay to fix all of them in a follow-up?

Paolo

> Speaking of checkpatch, now that I ran it, it also complains about the new line in bdrv_merge_limits() exceeding 80 characters, so that should be fixed, too.)
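For reference, the two comment layouts being discussed are the single-line form used in the patch and the form checkpatch asks for, with the opening /* on its own line (shown here purely to illustrate the style question, not as a change request):

/* Returns the maximum hardware transfer length, in bytes; guaranteed nonzero */

versus:

/*
 * Returns the maximum hardware transfer length, in bytes; guaranteed nonzero
 */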
Re: [PATCH V3 5/6] block/rbd: add write zeroes support
On Wed, May 19, 2021 at 4:28 PM Peter Lieven wrote:
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 37 -
> 1 file changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index 0d8612a988..ee13f08a74 100644
> --- a/block/rbd.c
> +++ b/block/rbd.c
> @@ -63,7 +63,8 @@ typedef enum {
> RBD_AIO_READ,
> RBD_AIO_WRITE,
> RBD_AIO_DISCARD,
> -RBD_AIO_FLUSH
> +RBD_AIO_FLUSH,
> +RBD_AIO_WRITE_ZEROES
> } RBDAIOCmd;
>
> typedef struct BDRVRBDState {
> @@ -705,6 +706,10 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
> }
> }
>
> +#ifdef LIBRBD_SUPPORTS_WRITE_ZEROES
> +bs->supported_zero_flags = BDRV_REQ_MAY_UNMAP;

I wonder if we should also set BDRV_REQ_NO_FALLBACK here since librbd does not really have a notion of non-efficient explicit zeroing.

Thanks,

Ilya
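A sketch of what Ilya's suggestion would amount to in qemu_rbd_open() — the flag names are existing block-layer request flags, but whether advertising NO_FALLBACK here is correct is exactly the question being raised:

#ifdef LIBRBD_SUPPORTS_WRITE_ZEROES
    /*
     * librbd has no slow, emulated zeroing path (per the comment above),
     * so the driver could advertise NO_FALLBACK in addition to MAY_UNMAP.
     */
    bs->supported_zero_flags = BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK;
#endif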
Re: [PATCH V3 1/6] block/rbd: bump librbd requirement to luminous release
On Wed, May 19, 2021 at 4:26 PM Peter Lieven wrote:
>
> even luminous (version 12.2) is unmaintained for over 3 years now.
> Bump the requirement to get rid of the ifdef'ry in the code.
> Qemu 6.1 dropped the support for RHEL-7 which was the last supported
> OS that required an older librbd.
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 120
> meson.build | 7 ++-
> 2 files changed, 13 insertions(+), 114 deletions(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index 26f64cce7c..6b1cbe1d75 100644
> --- a/block/rbd.c
> +++ b/block/rbd.c
> @@ -55,24 +55,10 @@
> * leading "\".
> */
>
> -/* rbd_aio_discard added in 0.1.2 */
> -#if LIBRBD_VERSION_CODE >= LIBRBD_VERSION(0, 1, 2)
> -#define LIBRBD_SUPPORTS_DISCARD
> -#else
> -#undef LIBRBD_SUPPORTS_DISCARD
> -#endif
> -
> #define OBJ_MAX_SIZE (1UL << OBJ_DEFAULT_OBJ_ORDER)
>
> #define RBD_MAX_SNAPS 100
>
> -/* The LIBRBD_SUPPORTS_IOVEC is defined in librbd.h */
> -#ifdef LIBRBD_SUPPORTS_IOVEC
> -#define LIBRBD_USE_IOVEC 1
> -#else
> -#define LIBRBD_USE_IOVEC 0
> -#endif
> -
> typedef enum {
> RBD_AIO_READ,
> RBD_AIO_WRITE,
> @@ -84,7 +70,6 @@ typedef struct RBDAIOCB {
> BlockAIOCB common;
> int64_t ret;
> QEMUIOVector *qiov;
> -char *bounce;
> RBDAIOCmd cmd;
> int error;
> struct BDRVRBDState *s;
> @@ -94,7 +79,6 @@ typedef struct RADOSCB {
> RBDAIOCB *acb;
> struct BDRVRBDState *s;
> int64_t size;
> -char *buf;
> int64_t ret;
> } RADOSCB;
>
> @@ -342,13 +326,9 @@ static int qemu_rbd_set_keypairs(rados_t cluster, const char *keypairs_json,
>
> static void qemu_rbd_memset(RADOSCB *rcb, int64_t offs)
> {
> -if (LIBRBD_USE_IOVEC) {
> -RBDAIOCB *acb = rcb->acb;
> -iov_memset(acb->qiov->iov, acb->qiov->niov, offs, 0,
> - acb->qiov->size - offs);
> -} else {
> -memset(rcb->buf + offs, 0, rcb->size - offs);
> -}
> +RBDAIOCB *acb = rcb->acb;
> +iov_memset(acb->qiov->iov, acb->qiov->niov, offs, 0,
> + acb->qiov->size - offs);
> }
>
> /* FIXME Deprecate and remove keypairs or make it available in QMP. */
> @@ -504,13 +484,6 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
>
> g_free(rcb);
>
> -if (!LIBRBD_USE_IOVEC) {
> -if (acb->cmd == RBD_AIO_READ) {
> -qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
> -}
> -qemu_vfree(acb->bounce);
> -}
> -
> acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
>
> qemu_aio_unref(acb);
> @@ -878,28 +851,6 @@ static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
> rbd_finish_bh, rcb);
> }
>
> -static int rbd_aio_discard_wrapper(rbd_image_t image,
> - uint64_t off,
> - uint64_t len,
> - rbd_completion_t comp)
> -{
> -#ifdef LIBRBD_SUPPORTS_DISCARD
> -return rbd_aio_discard(image, off, len, comp);
> -#else
> -return -ENOTSUP;
> -#endif
> -}
> -
> -static int rbd_aio_flush_wrapper(rbd_image_t image,
> - rbd_completion_t comp)
> -{
> -#ifdef LIBRBD_SUPPORTS_AIO_FLUSH
> -return rbd_aio_flush(image, comp);
> -#else
> -return -ENOTSUP;
> -#endif
> -}
> -
> static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
> int64_t off,
> QEMUIOVector *qiov,
> @@ -922,21 +873,6 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
>
> rcb = g_new(RADOSCB, 1);
>
> -if (!LIBRBD_USE_IOVEC) {
> -if (cmd == RBD_AIO_DISCARD || cmd == RBD_AIO_FLUSH) {
> -acb->bounce = NULL;
> -} else {
> -acb->bounce = qemu_try_blockalign(bs, qiov->size);
> -if (acb->bounce == NULL) {
> -goto failed;
> -}
> -}
> -if (cmd == RBD_AIO_WRITE) {
> -qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
> -}
> -rcb->buf = acb->bounce;
> -}
> -
> acb->ret = 0;
> acb->error = 0;
> acb->s = s;
> @@ -950,7 +886,7 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
> }
>
> switch (cmd) {
> -case RBD_AIO_WRITE: {
> +case RBD_AIO_WRITE:
> /*
> * RBD APIs don't allow us to write more than actual size, so in order
> * to support growing images, we resize the image before write
> @@ -962,25 +898,16 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
> goto failed_completion;
> }
> }
> -#ifdef LIBRBD_SUPPORTS_IOVEC
> -r = rbd_aio_writev(s->image, qiov->iov, qiov->niov, off, c);
> -#else
> -r = rbd_aio_write(s->image, off, size, rcb->buf, c);
> -#endif
> +r = rbd_aio_writev(s->image, qiov->iov, qiov->niov, off, c);
> break;
>
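The cover text explains the rationale for requiring luminous; as a rough illustration of how a build system can enforce such a minimum, a compile-time probe against librbd's version macros (the same macros visible in the removed hunk above) might look like the following — the version numbers are placeholders, not necessarily what the meson.build hunk actually checks:

/* Hypothetical probe: fail to compile against a pre-luminous librbd. */
#include <rbd/librbd.h>

#if LIBRBD_VERSION_CODE < LIBRBD_VERSION(1, 12, 0)
#error "librbd is too old, Ceph luminous (12.2) or newer is required"
#endif

int main(void)
{
    return 0;
}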
Re: [PATCH v3 0/2] hw/nvme: namespace parameter for EUI-64
On Jun 14 22:18, Heinrich Schuchardt wrote:
> The EUI-64 field is the only identifier for NVMe namespaces in UEFI device paths.
> Add a new namespace property "eui64", that provides the user the option to specify the EUI-64.
>
> v3: use 52-54-00-00-00-00-00-00 as starting values for generating EUI-64s
>
> Heinrich Schuchardt (2):
>   hw/nvme: namespace parameter for EUI-64
>   hw/nvme: default for namespace EUI-64
>
>  docs/system/nvme.rst |  6 +
>  hw/core/machine.c    |  1 +
>  hw/nvme/ctrl.c       | 58 ++--
>  hw/nvme/ns.c         | 11 +
>  hw/nvme/nvme.h       |  3 +++
>  5 files changed, 56 insertions(+), 23 deletions(-)
>
> --
> 2.30.2

LGTM.

Reviewed-by: Klaus Jensen
Re: [PATCH v5 06/16] qemu-iotests: delay QMP socket timers
16.06.2021 10:09, Emanuele Giuseppe Esposito wrote:
> On 15/06/2021 09:57, Vladimir Sementsov-Ogievskiy wrote:
>> 14.06.2021 13:36, Emanuele Giuseppe Esposito wrote:
>>> On 04/06/2021 11:17, Emanuele Giuseppe Esposito wrote:
>>>> Attaching gdbserver implies that the qmp socket should wait indefinitely
>>>> for an answer from QEMU. For Timeout class, create a @contextmanager that
>>>> switches Timeout with NoTimeout (empty context manager) so that if --gdb
>>>> is set, no timeout will be triggered.
>>>>
>>>> Signed-off-by: Emanuele Giuseppe Esposito
>>>> Reviewed-by: Vladimir Sementsov-Ogievskiy
>>>> ---
>>>>  tests/qemu-iotests/iotests.py | 9 -
>>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
>>>> index c86f239d81..d4bfd8f1d6 100644
>>>> --- a/tests/qemu-iotests/iotests.py
>>>> +++ b/tests/qemu-iotests/iotests.py
>>>> @@ -486,6 +486,13 @@ def __exit__(self, exc_type, value, traceback):
>>>>      def timeout(self, signum, frame):
>>>>          raise Exception(self.errmsg)
>>>>
>>>> +@contextmanager
>>>> +def NoTimeout():
>>>> +    yield
>>>> +
>>>> +if qemu_gdb:
>>>> +    Timeout = NoTimeout
>>>> +
>>>
>>> @Vladimir or anyone expert enough in python: This fix above works, but I just noticed that makes pylint (test 297) fail. My bad, I should have tried it before.
>>
>> I think, just make mypy ignore it, like:
>>
>> Timeout = NoTimeout  # type: ignore
>
> I think I am going to drop this change. This series already has patch 2 to ignore another pylint warning, plus another separate patch was sent to silence a warning that pops out with newer versions of pylint.

pylint complaints should not be a reason for a bad pattern. Shadowing the whole class is not good either, but it is at least a separate small hack, instead of silencing the whole logic of an existing, well-defined class internally with the help of a global variable.

--
Best regards,
Vladimir
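As a rough illustration of the kind of "separate small hack" being contrasted with shadowing the class, a helper that leaves Timeout itself untouched could look like this (a sketch using the qemu_gdb flag and the Timeout class from iotests.py; it is not the fix that was actually applied):

from contextlib import contextmanager

@contextmanager
def maybe_timeout(seconds, errmsg="Timeout"):
    # Use the normal Timeout unless --gdb is set; with gdbserver attached
    # we deliberately wait forever for QMP answers.
    if qemu_gdb:
        yield
    else:
        with Timeout(seconds, errmsg):
            yield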
Re: [PATCH v5 06/16] qemu-iotests: delay QMP socket timers
On 15/06/2021 09:57, Vladimir Sementsov-Ogievskiy wrote:
> 14.06.2021 13:36, Emanuele Giuseppe Esposito wrote:
>> On 04/06/2021 11:17, Emanuele Giuseppe Esposito wrote:
>>> Attaching gdbserver implies that the qmp socket should wait indefinitely
>>> for an answer from QEMU. For Timeout class, create a @contextmanager that
>>> switches Timeout with NoTimeout (empty context manager) so that if --gdb
>>> is set, no timeout will be triggered.
>>>
>>> Signed-off-by: Emanuele Giuseppe Esposito
>>> Reviewed-by: Vladimir Sementsov-Ogievskiy
>>> ---
>>>  tests/qemu-iotests/iotests.py | 9 -
>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
>>> index c86f239d81..d4bfd8f1d6 100644
>>> --- a/tests/qemu-iotests/iotests.py
>>> +++ b/tests/qemu-iotests/iotests.py
>>> @@ -486,6 +486,13 @@ def __exit__(self, exc_type, value, traceback):
>>>      def timeout(self, signum, frame):
>>>          raise Exception(self.errmsg)
>>>
>>> +@contextmanager
>>> +def NoTimeout():
>>> +    yield
>>> +
>>> +if qemu_gdb:
>>> +    Timeout = NoTimeout
>>> +
>>
>> @Vladimir or anyone expert enough in python: This fix above works, but I just noticed that makes pylint (test 297) fail. My bad, I should have tried it before.
>
> I think, just make mypy ignore it, like:
>
> Timeout = NoTimeout  # type: ignore

I think I am going to drop this change. This series already has patch 2 to ignore another pylint warning, plus another separate patch was sent to silence a warning that pops out with newer versions of pylint.

So once this gets in, feel free to add a patch with this change.

Emanuele

>> The reason for that is
>>
>> +    Timeout = NoTimeout
>>
>> They obviously have different types.
>>
>> +iotests.py:507: error: Cannot assign to a type
>> +iotests.py:507: error: Incompatible types in assignment (expression has type "Callable[[], _GeneratorContextManager[Any]]", variable has type "Type[Timeout]")
>> +Found 2 errors in 1 file (checked 1 source file)
>>
>> Any ideas on how to fix this? Otherwise I will get rid of it.
>>
>> Thank you,
>> Emanuele
>>
>>>  def file_pattern(name):
>>>      return "{0}-{1}".format(os.getpid(), name)
>>>
>>> @@ -575,7 +582,7 @@ class VM(qtest.QEMUQtestMachine):
>>>      def __init__(self, path_suffix=''):
>>>          name = "qemu%s-%d" % (path_suffix, os.getpid())
>>> -        timer = 15.0
>>> +        timer = 15.0 if not qemu_gdb else None
>>>          super().__init__(qemu_prog, qemu_opts, name=name,
>>>                           base_temp_dir=test_dir,
>>>                           socket_scm_helper=socket_scm_helper,