On Sun, Jan 27, 2019 at 12:51:44PM +0000, Stefan Hajnoczi wrote:
> On Fri, Jan 25, 2019 at 05:18:13PM +0100, Stefano Garzarella wrote:
> > On Fri, Jan 25, 2019 at 02:58:56PM +0000, Stefan Hajnoczi wrote:
> > > On Thu, Jan 24, 2019 at 06:23:22PM +0100, Stefano Garzarella wrote:
> > > > +            virtio_error(vdev, "virtio-blk discard/wzeroes header too short");
> > > > +            return -1;
> > > > +        }
> > > > +
> > > > +        sector = virtio_ldq_p(VIRTIO_DEVICE(req->dev), &dwz_hdr.sector);
> > > > +        bytes = virtio_ldl_p(VIRTIO_DEVICE(req->dev),
> > > > +                             &dwz_hdr.num_sectors) << BDRV_SECTOR_BITS;
> > > 
> > > Please handle num_sectors << BDRV_SECTOR_BITS overflow so that we don't
> > > need to worry about what happens with an overflowed value later on.
> > > 
> > 
> > Should I also check if num_sectors is less than "max_discard_sectors" or
> > "max_write_zeroes_sectors"?
> 
> Yes, anything that virtio_blk_sect_range_ok() doesn't check needs to be
> checked manually in this patch.  I can't see a reason for allowing
> requests through that exceed the limit.
> 
> > Do you think it makes sense to flush the MultiReqBuffer before calling
> > blk_aio_pdiscard() or blk_aio_pwrite_zeroes()?
> 
> I don't think that is necessary since virtio-blk doesn't guarantee
> ordering (the legacy barrier feature isn't supported by QEMU).
> 
> Even if the MultiReqBuffer is flushed, -drive aio=threads (the default
> setting) could still end up submitting discard/write zeroes before
> "earlier" requests - it depends on host kernel thread scheduler
> decisions.
> 

Thanks for the details!
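
Just to make sure I understood the two checks, something like this sketch?
(The helper name and standalone constants here are illustrative, not the
actual patch code.)

```c
/* Sketch of the combined validation discussed above: widen to 64 bits
 * before shifting so num_sectors << BDRV_SECTOR_BITS cannot overflow,
 * then compare the byte count against the configured limit. */
#include <stdbool.h>
#include <stdint.h>

#define BDRV_SECTOR_BITS 9

/* max_sectors stands in for conf.max_discard_sectors or
 * conf.max_write_zeroes_sectors, depending on the request type. */
static bool dwz_request_valid(uint32_t num_sectors, uint32_t max_sectors)
{
    /* A 32-bit num_sectors shifted by 9 can overflow 32 bits, so the
     * shift is done in 64-bit arithmetic. */
    uint64_t bytes = (uint64_t)num_sectors << BDRV_SECTOR_BITS;

    /* Reject requests exceeding the device's advertised limit. */
    return bytes <= ((uint64_t)max_sectors << BDRV_SECTOR_BITS);
}
```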

Stefano
