On 4/17/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
> This fixes at least one overflow in qcow2_process_discards.

It's worth calling out how long the problem of passing >2G discard
requests has been present (my reply to the cover letter traced it to
0b919fae, which tracked a 64-bit discard region but passed it to
bdrv_discard(), which took an 'int' sector count; I'm not sure whether
the later conversion from sector-based to byte-based interfaces made a
difference).
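
To make the failure mode concrete, here is a toy stand-in (not the
actual qcow2 or block-layer code, just an illustration of how a 64-bit
request length silently wraps when it hits an 'int' parameter):

#include <inttypes.h>
#include <stdio.h>

/* Hypothetical stand-in for the old narrow interface; not a real QEMU
 * prototype. */
static void old_discard(int64_t offset, int bytes)
{
    printf("discarding %d bytes at %" PRId64 "\n", bytes, offset);
}

int main(void)
{
    int64_t len = 3LL * 1024 * 1024 * 1024;  /* a 3G run to discard */

    /* Implicit int64_t -> int conversion: 3G does not fit in 31 bits,
     * so the callee sees a garbage (negative) length. */
    old_discard(0, len);
    return 0;
}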


> @@ -2664,14 +2664,15 @@ static void coroutine_fn bdrv_pdiscard_co_entry(void *opaque)
>      aio_wait_kick();
>  }
>  
> -int coroutine_fn bdrv_co_pdiscard(BdrvChild *child, int64_t offset, int bytes)
> +int coroutine_fn bdrv_co_pdiscard(BdrvChild *child, int64_t offset,
> +                                  int64_t bytes)
>  {
>      BdrvTrackedRequest req;
>      int max_pdiscard, ret;
>      int head, tail, align;
>      BlockDriverState *bs = child->bs;
>  
> -    if (!bs || !bs->drv) {
> +    if (!bs || !bs->drv || !bdrv_is_inserted(bs)) {

This change seems unrelated? Oh, it's because you are inlining the rest
of what bdrv_check_byte_request used to do.
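
For anyone following along, the helper was roughly doing this
(paraphrased from memory, not a verbatim copy of block/io.c), which is
why the medium check is the only piece that still needs an explicit
home once the call goes away:

/* Paraphrased from memory -- not a verbatim copy of block/io.c */
static int bdrv_check_byte_request(BlockDriverState *bs, int64_t offset,
                                   size_t size)
{
    if (size > BDRV_REQUEST_MAX_BYTES) {
        return -EIO;
    }
    if (!bdrv_is_inserted(bs)) {
        return -ENOMEDIUM;
    }
    if (offset < 0) {
        return -EIO;
    }
    return 0;
}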

>          return -ENOMEDIUM;
>      }
>  
> @@ -2679,9 +2680,8 @@ int coroutine_fn bdrv_co_pdiscard(BdrvChild *child, int64_t offset, int bytes)
>          return -EPERM;
>      }
>  
> -    ret = bdrv_check_byte_request(bs, offset, bytes);
> -    if (ret < 0) {
> -        return ret;

If we keep this call in place, we can flag if there were any other
callers that were passing truncated 64-bit quantities. But I also agree
that now that we are switching to a 64-bit interface, we no longer have
to check whether callers were properly limiting their requests.

Hmm - I just realized that bdrv_check_byte_request() takes a size_t
(rather than int64_t) size argument - could this result in any other
truncations on a 32-bit platform that don't affect 64-bit platforms?
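
Something like this toy (hypothetical, standalone) is what I am
worried about on a 32-bit host:

#include <stdint.h>
#include <stdio.h>

/* Mimics a size_t parameter like bdrv_check_byte_request()'s; not the
 * real function. */
static void check(size_t size)
{
    printf("size seen by callee: %zu\n", size);
}

int main(void)
{
    int64_t bytes = (int64_t)UINT32_MAX + 512;  /* just over 4G */

    /* With a 64-bit size_t nothing is lost; with a 32-bit size_t the
     * value silently truncates to 511 and the range check is fooled. */
    check(bytes);
    return 0;
}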

> @@ -2711,12 +2711,13 @@ int coroutine_fn bdrv_co_pdiscard(BdrvChild *child, int64_t offset, int bytes)
>          goto out;
>      }
>  
> -    max_pdiscard = QEMU_ALIGN_DOWN(MIN_NON_ZERO(bs->bl.max_pdiscard, INT_MAX),
> +    max_pdiscard = QEMU_ALIGN_DOWN(MIN_NON_ZERO(bs->bl.max_pdiscard,
> +                                                BDRV_REQUEST_MAX_BYTES),
>                                     align);

This change is a no-op, since BDRV_REQUEST_MAX_BYTES is already INT_MAX
aligned down to sector size, and align is at least sector size.
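
Rough arithmetic backing that up, if I have the definitions right
(512-byte BDRV_SECTOR_SIZE, BDRV_REQUEST_MAX_BYTES being INT_MAX
rounded down to a sector):

#include <assert.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SIZE 512
/* My reading of BDRV_REQUEST_MAX_BYTES: INT_MAX rounded down to a sector. */
#define MAX_BYTES   (INT_MAX / SECTOR_SIZE * SECTOR_SIZE)

/* Same effect as QEMU_ALIGN_DOWN for positive values. */
static int64_t align_down(int64_t x, int64_t a)
{
    return x / a * a;
}

int main(void)
{
    /* For every power-of-two alignment of at least one sector, INT_MAX
     * and MAX_BYTES align down to the same value: any multiple of such
     * an alignment is a multiple of the sector size, and no multiple of
     * the sector size lies strictly between MAX_BYTES and INT_MAX. */
    for (int64_t a = SECTOR_SIZE; a <= MAX_BYTES; a *= 2) {
        assert(align_down(INT_MAX, a) == align_down(MAX_BYTES, a));
    }
    printf("no behavioral difference\n");
    return 0;
}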

>      assert(max_pdiscard >= bs->bl.request_alignment);
>  
>      while (bytes > 0) {
> -        int num = bytes;
> +        int64_t num = bytes;
>  
>          if (head) {
>              /* Make small requests to get to alignment boundaries. */
> @@ -2778,7 +2779,7 @@ out:
>      return ret;
>  }
>  
> -int bdrv_pdiscard(BdrvChild *child, int64_t offset, int bytes)
> +int bdrv_pdiscard(BdrvChild *child, int64_t offset, int64_t bytes)
>  {
>      Coroutine *co;
>      DiscardCo rwco = {
> 

I'm not sure the patch is perfect, but I definitely agree that we want
to support a 64-bit discard length (where the block layer fragments the
request as needed) rather than the current 32-bit discard length (where
callers have to be careful not to suffer from truncation).
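
That is, roughly this shape (simplified sketch with a hypothetical
driver_pdiscard() stand-in; the real loop also trims head/tail
fragments to the discard alignment):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a driver callback still limited to int. */
static int driver_pdiscard(int64_t offset, int num)
{
    printf("driver discard: offset=%lld num=%d\n", (long long)offset, num);
    return 0;
}

/* Simplified sketch of the block-layer fragmentation; not the actual
 * bdrv_co_pdiscard(). */
static int discard_all(int64_t offset, int64_t bytes, int max_pdiscard)
{
    while (bytes > 0) {
        /* Each driver call stays under the 32-bit limit... */
        int num = bytes < max_pdiscard ? (int)bytes : max_pdiscard;
        int ret = driver_pdiscard(offset, num);
        if (ret < 0) {
            return ret;
        }
        /* ...while the interface the callers see is fully 64-bit. */
        offset += num;
        bytes -= num;
    }
    return 0;
}

int main(void)
{
    /* A 5G discard becomes three driver calls on a ~2G-limited driver. */
    return discard_all(0, 5LL * 1024 * 1024 * 1024, INT_MAX / 512 * 512);
}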

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
