Re: [PATCH v6 07/11] block: use int64_t instead of int in driver write_zeroes handlers

2021-09-23 Thread Vladimir Sementsov-Ogievskiy

23.09.2021 23:33, Eric Blake wrote:
> On Fri, Sep 03, 2021 at 01:28:03PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > We are generally moving to int64_t for both offset and bytes parameters
> > on all io paths.
> > 
> > The main motivation is the realization of a 64-bit write_zeroes operation
> > for fast zeroing of large disk chunks, up to the whole disk.
> > 
> > We chose a signed type, to be consistent with off_t (which is signed) and
> > with the possibility of a signed return type (where a negative value
> > means an error).
> > 
> > So, convert the bytes parameter of the driver write_zeroes handlers to
> > int64_t.
> > 
> > The only caller of all the updated functions is bdrv_co_do_pwrite_zeroes().
> > 
> > bdrv_co_do_pwrite_zeroes() itself is of course OK with the widening of
> > the callee parameter type. Also, bdrv_co_do_pwrite_zeroes()'s
> > max_write_zeroes is limited to INT_MAX. So, the updated functions are
> > all safe; they will not get a "bytes" value larger than before.
> > 
> > Still, let's look through all the updated functions and add assertions
> > to the ones which are actually unprepared for values larger than INT_MAX.
> > For these drivers, also set an explicit max_pwrite_zeroes limit.
> > 
> [snip]
> > 
> > At this point all block drivers are prepared to support 64-bit
> > write-zero requests, or have explicitly set max_pwrite_zeroes.
> 
> The long commit message is essential, but the analysis looks sane.
> 
> > 
> > Signed-off-by: Vladimir Sementsov-Ogievskiy 
> > ---
> 
> > +++ b/block/iscsi.c
> 
> > @@ -1250,11 +1250,21 @@ coroutine_fn iscsi_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
> >  iscsi_co_init_iscsitask(iscsilun, &iTask);
> >  retry:
> >  if (use_16_for_ws) {
> > +/*
> > + * iscsi_writesame16_task num_blocks argument is uint32_t. We rely here
> > + * on our max_pwrite_zeroes limit.
> > + */
> > +assert(nb_blocks < UINT32_MAX);
> >  iTask.task = iscsi_writesame16_task(iscsilun->iscsi, iscsilun->lun, lba,
> >  iscsilun->zeroblock, iscsilun->block_size,
> >  nb_blocks, 0, !!(flags & BDRV_REQ_MAY_UNMAP),
> >  0, 0, iscsi_co_generic_cb, &iTask);
> 
> Should this be <= instead of < ?
> 
> >  } else {
> > +/*
> > + * iscsi_writesame10_task num_blocks argument is uint16_t. We rely here
> > + * on our max_pwrite_zeroes limit.
> > + */
> > +assert(nb_blocks < UINT16_MAX);
> >  iTask.task = iscsi_writesame10_task(iscsilun->iscsi, iscsilun->lun, lba,
> >  iscsilun->zeroblock, iscsilun->block_size,
> >  nb_blocks, 0, !!(flags & BDRV_REQ_MAY_UNMAP),
> 
> here too.  The 16-bit limit is where we're most likely to run into
> someone actually trying to zeroize that much at once.
> 
> > +++ b/block/nbd.c
> > @@ -1407,15 +1407,17 @@ static int nbd_client_co_pwritev(BlockDriverState *bs, int64_t offset,
> >  }
> >  
> >  static int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
> > -   int bytes, BdrvRequestFlags flags)
> > +   int64_t bytes, BdrvRequestFlags flags)
> >  {
> >  BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
> >  NBDRequest request = {
> >  .type = NBD_CMD_WRITE_ZEROES,
> >  .from = offset,
> > -.len = bytes,
> > +.len = bytes,  /* .len is uint32_t actually */
> >  };
> >  
> > +assert(bytes < UINT32_MAX); /* rely on max_pwrite_zeroes */
> 
> And again.  Here, you happen to get by with < because we clamped
> bl.max_pwrite_zeroes at BDRV_REQUEST_MAX_BYTES, which is INT_MAX
> rounded down.  But I had to check; whereas using <= would be less
> worrisome, even if we never get a request that large.
> 
> If you agree with my analysis, I can make that change while preparing
> my pull request.

I agree, <= should be the right thing, thanks!

> Reviewed-by: Eric Blake 




--
Best regards,
Vladimir



Re: [PATCH v6 07/11] block: use int64_t instead of int in driver write_zeroes handlers

2021-09-23 Thread Eric Blake
On Thu, Sep 23, 2021 at 03:33:45PM -0500, Eric Blake wrote:
> > +++ b/block/nbd.c
> > @@ -1407,15 +1407,17 @@ static int nbd_client_co_pwritev(BlockDriverState *bs, int64_t offset,
> >  }
> >  
> >  static int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
> > -   int bytes, BdrvRequestFlags flags)
> > +   int64_t bytes, BdrvRequestFlags flags)
> >  {
> >  BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
> >  NBDRequest request = {
> >  .type = NBD_CMD_WRITE_ZEROES,
> >  .from = offset,
> > -.len = bytes,
> > +.len = bytes,  /* .len is uint32_t actually */
> >  };
> >  
> > +assert(bytes < UINT32_MAX); /* rely on max_pwrite_zeroes */
> 
> And again.  Here, you happen to get by with < because we clamped
> bl.max_pwrite_zeroes at BDRV_REQUEST_MAX_BYTES, which is INT_MAX
> rounded down.  But I had to check; whereas using <= would be less
> worrisome, even if we never get a request that large.

Whoops, I was reading a local patch of mine.  Upstream has merely:

uint32_t max = MIN_NON_ZERO(NBD_MAX_BUFFER_SIZE, s->info.max_block);

bs->bl.max_pdiscard = QEMU_ALIGN_DOWN(INT_MAX, min);
bs->bl.max_pwrite_zeroes = max;

which is an even smaller limit than BDRV_REQUEST_MAX_BYTES (and
obviously one we're trying to raise).  But the point remains that
using <= rather than < will make it easier to review the code where we
raise the limits (either up to the 4G-1 limit of the current protocol,
or with protocol extensions to finally get to 64-bit requests).

> 
> If you agree with my analysis, I can make that change while preparing
> my pull request.
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v6 07/11] block: use int64_t instead of int in driver write_zeroes handlers

2021-09-23 Thread Eric Blake
On Fri, Sep 03, 2021 at 01:28:03PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are generally moving to int64_t for both offset and bytes parameters
> on all io paths.
> 
> The main motivation is the realization of a 64-bit write_zeroes operation
> for fast zeroing of large disk chunks, up to the whole disk.
> 
> We chose a signed type, to be consistent with off_t (which is signed) and
> with the possibility of a signed return type (where a negative value
> means an error).
> 
> So, convert the bytes parameter of the driver write_zeroes handlers to
> int64_t.
> 
> The only caller of all the updated functions is bdrv_co_do_pwrite_zeroes().
> 
> bdrv_co_do_pwrite_zeroes() itself is of course OK with the widening of
> the callee parameter type. Also, bdrv_co_do_pwrite_zeroes()'s
> max_write_zeroes is limited to INT_MAX. So, the updated functions are
> all safe; they will not get a "bytes" value larger than before.
> 
> Still, let's look through all the updated functions and add assertions
> to the ones which are actually unprepared for values larger than INT_MAX.
> For these drivers, also set an explicit max_pwrite_zeroes limit.
> 
[snip]
> 
> At this point all block drivers are prepared to support 64-bit
> write-zero requests, or have explicitly set max_pwrite_zeroes.

The long commit message is essential, but the analysis looks sane.

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy 
> ---

> +++ b/block/iscsi.c

> @@ -1250,11 +1250,21 @@ coroutine_fn iscsi_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
>  iscsi_co_init_iscsitask(iscsilun, &iTask);
>  retry:
>  if (use_16_for_ws) {
> +/*
> + * iscsi_writesame16_task num_blocks argument is uint32_t. We rely here
> + * on our max_pwrite_zeroes limit.
> + */
> +assert(nb_blocks < UINT32_MAX);
>  iTask.task = iscsi_writesame16_task(iscsilun->iscsi, iscsilun->lun, lba,
>  iscsilun->zeroblock, iscsilun->block_size,
>  nb_blocks, 0, !!(flags & BDRV_REQ_MAY_UNMAP),
>  0, 0, iscsi_co_generic_cb, &iTask);

Should this be <= instead of < ?

>  } else {
> +/*
> + * iscsi_writesame10_task num_blocks argument is uint16_t. We rely here
> + * on our max_pwrite_zeroes limit.
> + */
> +assert(nb_blocks < UINT16_MAX);
>  iTask.task = iscsi_writesame10_task(iscsilun->iscsi, iscsilun->lun, lba,
>  iscsilun->zeroblock, iscsilun->block_size,
>  nb_blocks, 0, !!(flags & BDRV_REQ_MAY_UNMAP),

here too.  The 16-bit limit is where we're most likely to run into
someone actually trying to zeroize that much at once.

> +++ b/block/nbd.c
> @@ -1407,15 +1407,17 @@ static int nbd_client_co_pwritev(BlockDriverState *bs, int64_t offset,
>  }
>  
>  static int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
> -   int bytes, BdrvRequestFlags flags)
> +   int64_t bytes, BdrvRequestFlags flags)
>  {
>  BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
>  NBDRequest request = {
>  .type = NBD_CMD_WRITE_ZEROES,
>  .from = offset,
> -.len = bytes,
> +.len = bytes,  /* .len is uint32_t actually */
>  };
>  
> +assert(bytes < UINT32_MAX); /* rely on max_pwrite_zeroes */

And again.  Here, you happen to get by with < because we clamped
bl.max_pwrite_zeroes at BDRV_REQUEST_MAX_BYTES, which is INT_MAX
rounded down.  But I had to check; whereas using <= would be less
worrisome, even if we never get a request that large.

If you agree with my analysis, I can make that change while preparing
my pull request.

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




[PATCH v6 07/11] block: use int64_t instead of int in driver write_zeroes handlers

2021-09-03 Thread Vladimir Sementsov-Ogievskiy
We are generally moving to int64_t for both offset and bytes parameters
on all io paths.

The main motivation is the realization of a 64-bit write_zeroes operation
for fast zeroing of large disk chunks, up to the whole disk.

We chose a signed type, to be consistent with off_t (which is signed) and
with the possibility of a signed return type (where a negative value
means an error).

So, convert the bytes parameter of the driver write_zeroes handlers to
int64_t.

The only caller of all the updated functions is bdrv_co_do_pwrite_zeroes().

bdrv_co_do_pwrite_zeroes() itself is of course OK with the widening of
the callee parameter type. Also, bdrv_co_do_pwrite_zeroes()'s
max_write_zeroes is limited to INT_MAX. So, the updated functions are
all safe; they will not get a "bytes" value larger than before.

Still, let's look through all the updated functions and add assertions
to the ones which are actually unprepared for values larger than INT_MAX.
For these drivers, also set an explicit max_pwrite_zeroes limit.

Let's go:

blkdebug: calculations can't overflow, thanks to
  bdrv_check_qiov_request() in the generic layer. rule_check() and
  bdrv_co_pwrite_zeroes() both have 64-bit arguments.

blklogwrites: passes to blk_log_writes_co_log(), which has a 64-bit
  argument.

blkreplay, copy-on-read, filter-compress: pass to
  bdrv_co_pwrite_zeroes(), which is OK.

copy-before-write: calls cbw_do_copy_before_write() and
  bdrv_co_pwrite_zeroes(); both have 64-bit arguments.

file-posix: both handlers call raw_do_pwrite_zeroes(), which is updated.
  In raw_do_pwrite_zeroes() the calculations are OK due to
  bdrv_check_qiov_request(); bytes goes to RawPosixAIOData::aio_nbytes,
  which is uint64_t.
  Check also where that uint64_t gets handed:
  handle_aiocb_write_zeroes_block() passes a uint64_t[2] to
  ioctl(BLKZEROOUT), handle_aiocb_write_zeroes() calls do_fallocate(),
  which takes off_t (and we compile to always have 64-bit off_t), as
  does handle_aiocb_write_zeroes_unmap(). All look safe.

gluster: bytes goes to GlusterAIOCB::size, which is int64_t, and
  glfs_zerofill_async() works with off_t.

iscsi: Aha, here we deal with iscsi_writesame16_task(), which has a
  uint32_t num_blocks argument, and iscsi_writesame10_task(), which has
  a uint16_t argument. Add comments and assertions, and clarify the
  max_pwrite_zeroes calculation.
  The iscsi_allocmap_() functions already have int64_t arguments.
  is_byte_request_lun_aligned() is simple to update, so do it.

mirror_top: passes to bdrv_mirror_top_do_write(), which has a uint64_t
  argument.

nbd: Aha, here we have a protocol limitation, and NBDRequest::len is
  uint32_t. max_pwrite_zeroes is cleanly set to a 32-bit value, so we
  are OK for now.

nvme: Again, a protocol limitation. And no inherent limit for
  write-zeroes at all. But from the code that calculates cdw12 it's
  obvious that we do have a limit and alignment. Let's clarify it.
  Also, the code is obviously not prepared to handle bytes=0. Let's
  handle this case too.
  The trace events are already 64-bit.

preallocate: passes to handle_write() and bdrv_co_pwrite_zeroes(), both
  64-bit.

rbd: passes to qemu_rbd_start_co(), which is 64-bit.

qcow2: offset + bytes and alignment still work well (thanks to
  bdrv_check_qiov_request()), so the tail calculation is OK.
  qcow2_subcluster_zeroize() has a 64-bit argument, so it should be OK.
  Trace events updated.

qed: qed_co_request() wants an int nb_sectors. Also, the code uses
  size_t for the request length, which may be 32-bit. So, let's just
  keep INT_MAX as a limit (aligning it down to
  pwrite_zeroes_alignment) and not care.

raw-format: is OK. raw_adjust_offset() and bdrv_co_pwrite_zeroes() are
  both 64-bit.

throttle: both throttle_group_co_io_limits_intercept() and
  bdrv_co_pwrite_zeroes() are 64-bit.

vmdk: passes to vmdk_pwritev(), which is 64-bit.

quorum: passes to quorum_co_pwritev(), which is 64-bit.

Hooray!

At this point all block drivers are prepared to support 64-bit
write-zero requests, or have explicitly set max_pwrite_zeroes.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
 include/block/block_int.h |  2 +-
 block/blkdebug.c  |  2 +-
 block/blklogwrites.c  |  4 ++--
 block/blkreplay.c |  2 +-
 block/copy-before-write.c |  2 +-
 block/copy-on-read.c  |  2 +-
 block/file-posix.c|  6 +++---
 block/filter-compress.c   |  2 +-
 block/gluster.c   |  6 +++---
 block/iscsi.c | 30 --
 block/mirror.c|  2 +-
 block/nbd.c   |  6 --
 block/nvme.c  | 24 +---
 block/preallocate.c   |  2 +-
 block/qcow2.c |  2 +-
 block/qed.c   |  9 -
 block/quorum.c|  2 +-
 block/raw-format.c|  2 +-
 block/rbd.c   |  4 ++--
 block/throttle.c  |  2 +-
 block/vmdk.c  |  2 +-
 block/trace-events|  4 ++--
 22 files changed, 78 insertions(+), 41 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index 6c47985d5f..112a42ae8f 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -300,7 +300,7