On Fri, Oct 10, 2025 at 06:33:40PM +0200, Kevin Wolf wrote:
> Am 10.09.2025 um 19:57 hat Stefan Hajnoczi geschrieben:
> > The io_uring_prep_readv2/writev2() man pages recommend using the
> > non-vectored read/write operations when possible for performance
> > reasons.
> > 
> > I didn't measure a significant difference but it doesn't hurt to have
> > this optimization in place.
> > 
> > Suggested-by: Eric Blake <[email protected]>
> > Signed-off-by: Stefan Hajnoczi <[email protected]>
> > ---
> >  block/io_uring.c | 29 ++++++++++++++++++++++++-----
> >  1 file changed, 24 insertions(+), 5 deletions(-)
> > 
> > diff --git a/block/io_uring.c b/block/io_uring.c
> > index dd930ee57e..bbefbddcc0 100644
> > --- a/block/io_uring.c
> > +++ b/block/io_uring.c
> > @@ -49,12 +49,24 @@ static void luring_prep_sqe(struct io_uring_sqe *sqe, void *opaque)
> >  #ifdef HAVE_IO_URING_PREP_WRITEV2
> >      {
> >          int luring_flags = (flags & BDRV_REQ_FUA) ? RWF_DSYNC : 0;
> > -        io_uring_prep_writev2(sqe, fd, qiov->iov,
> > -                              qiov->niov, offset, luring_flags);
> > +        if (luring_flags != 0 || qiov->niov > 1) {
> > +            io_uring_prep_writev2(sqe, fd, qiov->iov,
> > +                                  qiov->niov, offset, luring_flags);
> > +        } else {
> > +            /* The man page says non-vectored is faster than vectored */
> > +            struct iovec *iov = qiov->iov;
> > +            io_uring_prep_write(sqe, fd, iov->iov_base, iov->iov_len, offset);
> > +        }
> >      }
> >  #else
> >          assert(flags == 0);
> > -        io_uring_prep_writev(sqe, fd, qiov->iov, qiov->niov, offset);
> > +        if (qiov->niov > 1) {
> > +            io_uring_prep_writev(sqe, fd, qiov->iov, qiov->niov, offset);
> > +        } else {
> > +            /* The man page says non-vectored is faster than vectored */
> > +            struct iovec *iov = qiov->iov;
> > +            io_uring_prep_write(sqe, fd, iov->iov_base, iov->iov_len, offset);
> > +        }
> >  #endif
> 
> We have a lot of duplication in this now. Let's use the #ifdef a little
> more locally:
> 
>     {
>         int luring_flags = (flags & BDRV_REQ_FUA) ? RWF_DSYNC : 0;
>         if (luring_flags != 0 || qiov->niov > 1) {
> #ifdef HAVE_IO_URING_PREP_WRITEV2
>             io_uring_prep_writev2(sqe, fd, qiov->iov,
>                                   qiov->niov, offset, luring_flags);
> #else
>             assert(luring_flags == 0);
>             io_uring_prep_writev(sqe, fd, qiov->iov, qiov->niov, offset);
> #endif
>         } else {
>             /* The man page says non-vectored is faster than vectored */
>             struct iovec *iov = qiov->iov;
>             io_uring_prep_write(sqe, fd, iov->iov_base, iov->iov_len, offset);
>         }
>     }

Will fix in v5. Thanks!
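
For completeness, the read side of luring_prep_sqe() would presumably get
the same shape (a sketch only, not the actual v5 hunk, assuming the read
path has no flags to deal with):

    if (qiov->niov > 1) {
        io_uring_prep_readv(sqe, fd, qiov->iov, qiov->niov, offset);
    } else {
        /* The man page says non-vectored is faster than vectored */
        struct iovec *iov = qiov->iov;
        io_uring_prep_read(sqe, fd, iov->iov_base, iov->iov_len, offset);
    }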

Stefan
