On 2/18/26 9:06 AM, Stefan Hajnoczi wrote:
> On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
>> On 13.02.26 at 5:05 PM, Kevin Wolf wrote:
>>> On 13.02.2026 at 15:26, Jens Axboe wrote:
>>>> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
>>>> block I/O coroutine inline on the vCPU thread because
>>>> qemu_get_current_aio_context() returns the main AioContext when BQL is
>>>> held. The coroutine calls luring_co_submit() which queues an SQE via
>>>> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
>>>> in gsource_prepare() on the main loop thread.
>>>
>>> Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
>>> in the recent changes (or I guess worker threads in theory, but I don't
>>> think there are any that actually make use of aio_add_sqe()).
>>>
>>>> Since the coroutine ran inline (not via aio_co_schedule()), no BH is
>>>> scheduled and aio_notify() is never called. The main loop remains asleep
>>>> in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
>>>> the next timer fires.
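[Editorial sketch, not part of the patch: a standalone illustration of the
queued-vs-submitted distinction, assuming liburing. An SQE obtained with
io_uring_get_sqe() sits in the SQ ring, invisible to the kernel, until
someone calls io_uring_submit() - just as fdmon_io_uring_add_sqe() queues
work that only gsource_prepare() submits.

    #include <liburing.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        struct io_uring_cqe *cqe;
        struct io_uring_sqe *sqe;

        io_uring_queue_init(8, &ring, 0);

        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_nop(sqe);

        /* Queued but not submitted: the kernel has not seen it yet. */
        printf("SQEs pending in SQ ring: %u\n", io_uring_sq_ready(&ring));
        printf("peek before submit: %d\n", io_uring_peek_cqe(&ring, &cqe));

        /* Only now does the request reach the kernel. */
        io_uring_submit(&ring);
        io_uring_wait_cqe(&ring, &cqe);
        printf("nop completed, res=%d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }

This prints one pending SQE and -EAGAIN from the peek before the submit,
and a completed nop with res=0 after it.]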
>>>>
>>>> Fix this by calling aio_notify() after queuing the SQE. This wakes the
>>>> main loop via the eventfd so it can run gsource_prepare() and submit the
>>>> pending SQE promptly.
>>>>
>>>> This is a generic fix that benefits all devices using aio=io_uring.
>>>> Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
>>>> MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
>>>> main loop after queuing block I/O.
>>>>
>>>> This is usually a bit hard to detect, as it also relies on the ppoll
>>>> loop not waking up for other activity, and micro benchmarks tend not to
>>>> see it because they don't have any real processing time. With a
>>>> synthetic test case that has a few usleep() calls to simulate
>>>> processing of read data, it's very noticeable. The example below reads
>>>> 128MB with O_DIRECT in 128KB chunks in batches of 16, with a 1ms delay
>>>> before each batch submit and a 1ms delay after processing each completion.
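[Editorial sketch: the actual iotest source is not included in this
thread; the following is a hypothetical reconstruction under the stated
parameters, assuming liburing.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define TOTAL (128ULL * 1024 * 1024)  /* 128MB */
    #define CHUNK (128 * 1024)            /* 128KB chunks */
    #define BATCH 16                      /* reads per batch */

    int main(int argc, char **argv)
    {
        struct io_uring ring;
        void *bufs[BATCH];
        unsigned long long off = 0;
        int fd, i;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <device>\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        io_uring_queue_init(BATCH, &ring, 0);
        for (i = 0; i < BATCH; i++) {
            if (posix_memalign(&bufs[i], 4096, CHUNK)) {
                return 1;
            }
        }

        while (off < TOTAL) {
            int inflight;

            usleep(1000);  /* 1ms delay before each batch submit */
            for (i = 0; i < BATCH && off < TOTAL; i++, off += CHUNK) {
                struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                io_uring_prep_read(sqe, fd, bufs[i], CHUNK, off);
            }
            inflight = io_uring_submit(&ring);

            while (inflight-- > 0) {
                struct io_uring_cqe *cqe;

                io_uring_wait_cqe(&ring, &cqe);
                if (cqe->res < 0) {
                    fprintf(stderr, "read failed: %d\n", cqe->res);
                }
                io_uring_cqe_seen(&ring, cqe);
                usleep(1000);  /* 1ms delay after each completion */
            }
        }

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }

The 1ms sleeps simulate per-batch and per-completion processing time,
during which nothing else wakes QEMU's main loop, exposing the
unsubmitted-SQE stall.]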
>>>> Running it on /dev/sda yields:
>>>>
>>>> time sudo ./iotest /dev/sda
>>>>
>>>> ________________________________________________________
>>>> Executed in   25.76 secs      fish           external
>>>>    usr time    6.19 millis  783.00 micros    5.41 millis
>>>>    sys time   12.43 millis  642.00 micros   11.79 millis
>>>>
>>>> while on a virtio-blk or NVMe device we get:
>>>>
>>>> time sudo ./iotest /dev/vdb
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.25 secs      fish           external
>>>>    usr time    1.40 millis    0.30 millis    1.10 millis
>>>>    sys time   17.61 millis    1.43 millis   16.18 millis
>>>>
>>>> time sudo ./iotest /dev/nvme0n1
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.26 secs      fish           external
>>>>    usr time    6.11 millis    0.52 millis    5.59 millis
>>>>    sys time   13.94 millis    1.50 millis   12.43 millis
>>>>
>>>> where the latter are consistent. If we run the same test but keep the
>>>> socket for the ssh connection busy with continuous activity, then the
>>>> sda test looks as follows:
>>>>
>>>> time sudo ./iotest /dev/sda
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.23 secs      fish           external
>>>>    usr time    2.70 millis   39.00 micros    2.66 millis
>>>>    sys time    4.97 millis  977.00 micros    3.99 millis
>>>>
>>>> as now the ppoll loop is woken all the time anyway.
>>>>
>>>> After this fix, on an idle system:
>>>>
>>>> time sudo ./iotest /dev/sda
>>>>
>>>> ________________________________________________________
>>>> Executed in    1.30 secs      fish           external
>>>>    usr time    2.14 millis    0.14 millis    2.00 millis
>>>>    sys time   16.93 millis    1.16 millis   15.76 millis
>>>>
>>>> Signed-off-by: Jens Axboe <[email protected]>
>>>> ---
>>>>  util/fdmon-io_uring.c | 8 ++++++++
>>>>  1 file changed, 8 insertions(+)
>>>>
>>>> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
>>>> index d0b56127c670..96392876b490 100644
>>>> --- a/util/fdmon-io_uring.c
>>>> +++ b/util/fdmon-io_uring.c
>>>> @@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
>>>>  
>>>>      trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
>>>>                                   cqe_handler);
>>>> +
>>>> +    /*
>>>> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
>>>> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
>>>> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
>>>> +     * this notify, ppoll() can sleep up to 499ms before submitting.
>>>> +     */
>>>> +    aio_notify(ctx);
>>>>  }
>>>
>>> Makes sense to me.
>>>
>>> At first I wondered if we should use defer_call() for the aio_notify()
>>> to batch the submission, but of course holding the BQL will already take
>>> care of that. And in iothreads where there is no BQL, the aio_notify()
>>> shouldn't make a difference anyway because we're already in the right
>>> thread.
>>>
>>> I suppose the other variation could be to have another io_uring_enter()
>>> call here (but then probably really through defer_call()) to avoid
>>> waiting for another CPU to submit the request in its main loop. But I
>>> don't really have an intuition if that would make things better or worse
>>> in the common case.
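[Editorial sketch of the defer_call() variation Kevin describes, assuming
QEMU's defer_call() API; the helper name and the way the ring is reached
from the AioContext are illustrative only, not QEMU's actual internals:

    /* Hypothetical: submit once at the end of the outermost
     * defer_call_begin()/defer_call_end() section, instead of waking
     * the main loop with aio_notify().
     */
    static void fdmon_io_uring_deferred_submit(void *opaque)
    {
        AioContext *ctx = opaque;

        io_uring_submit(fdmon_io_uring_ring(ctx)); /* illustrative accessor */
    }

    /* in fdmon_io_uring_add_sqe(), instead of aio_notify(ctx): */
    defer_call(fdmon_io_uring_deferred_submit, ctx);

Note this would mean io_uring_enter() being called from the vCPU thread,
which is exactly what Jens advises against below.]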
> 
> It's possible to call io_uring_enter(). QEMU currently doesn't use
> IORING_SETUP_SINGLE_ISSUER, so it's okay for multiple threads to call
> io_uring_enter() on the same io_uring fd.

I would not recommend that, see below.

> I experimented with IORING_SETUP_SINGLE_ISSUER (as well as
> IORING_SETUP_COOP_TASKRUN and IORING_SETUP_TASKRUN_FLAG) in the past and
> didn't measure a performance improvement:
> https://lore.kernel.org/qemu-devel/[email protected]/
> 
> Jens, any advice regarding these flags?

None other than "yes, you should use them" - it's an expanding area of
"let's make that faster", so if you tested on something older, that may
be why: we didn't have a lot of these optimizations back then. We're
toying with getting
rid of the uring_lock for SINGLE_ISSUER, for example.

Hence I think having multiple threads do enter is a design mistake, and
one that might snowball down the line and make it harder to step back
and make SINGLE_ISSUER work for you. Certain features also end up being
gated behind DEFER_TASKRUN, which requires SINGLE_ISSUER as well.
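
[Editorial sketch: a minimal ring setup with these flags, assuming
liburing. The kernel rejects DEFER_TASKRUN without SINGLE_ISSUER, and
such a ring must then only be entered from the one issuing thread:

    #include <liburing.h>

    static int setup_single_issuer_ring(struct io_uring *ring,
                                        unsigned entries)
    {
        struct io_uring_params p = {
            .flags = IORING_SETUP_SINGLE_ISSUER |
                     IORING_SETUP_DEFER_TASKRUN,
        };

        return io_uring_queue_init_params(entries, ring, &p);
    }
]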

tldr - don't have multiple threads do enter on the same ring, ever, if
it can be avoided. It's a design mistake.

-- 
Jens Axboe
