On 22/11/2016 17:31, Stefan Hajnoczi wrote:
> +static bool try_poll_mode(AioContext *ctx, bool enable)
> +{
> +    if (enable && aio_poll_max_ns && ctx->poll_disable_cnt == 0) {
> +        /* See qemu_soonest_timeout() uint64_t hack */
> +        int64_t max_ns = MIN((uint64_t)aio_compute_timeout(ctx),
> +                             (uint64_t)aio_poll_max_ns);
> +
> +        if (max_ns) {
> +            poll_set_started(ctx, true);
> +
> +            if (run_poll_handlers(ctx, max_ns)) {
> +                return true;
> +            }
> +        }
> +    }
> +
> +    poll_set_started(ctx, false);
You could do a single iteration even if enable == false (which I'd rename
to "blocking", BTW, because poll_started can be false on exit even if
enable == true).

In fact, since (like virtio_queue_host_notifier_aio_poll_end) all
.io_poll_end() callbacks are going to poll once more, what about adding
here:

    return run_poll_handlers(ctx, 0);

or just an instance of the loop, without qemu_clock_get_ns and the
tracepoints:

    return run_poll_handlers_once(ctx);

and removing from patch 10 the

    /* Handle any buffers that snuck in after we finished polling */
    virtio_queue_host_notifier_aio_poll(n);

?

Thanks,

Paolo