On 11/15/2016 04:32 AM, Stefan Hajnoczi wrote:
> On Mon, Nov 14, 2016 at 09:52:00PM +0100, Paolo Bonzini wrote:
>> On 14/11/2016 21:12, Karl Rister wrote:
>>>      256    46,929
>>>      512    35,627
>>>    1,024    46,477
>>>    2,000    35,247
>>>    2,048    46,322
>>>    4,000    46,540
>>>    4,096    46,368
>>>    8,000    47,054
>>>    8,192    46,671
>>>   16,000    46,466
>>>   16,384    32,504
>>>   32,000    20,620
>>>   32,768    20,807
>>
>> Huh, it breaks down exactly when it should start going faster
>> (10^9/46000 = ~21000).
>
> Could it be because we're not breaking the polling loop for BHs, new
> timers, or aio_notify()?
>
> Once that is fixed polling should achieve maximum performance when
> QEMU_AIO_MAX_POLL_NS is at least as long as the duration of a request.
>
> This is logical if there are enough pinned CPUs so the polling thread
> can run flat out.
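(Editorial aside: Paolo's ~21,000 ns figure is the mean time per request at the plateau throughput of roughly 46,000 IOPS from the table above. A quick sketch of the arithmetic, in Python:)

```python
# Mean request duration at the observed plateau throughput.
# Once the polling window (QEMU_AIO_POLL_MAX_NS) is longer than this,
# a request should typically complete within a single poll, so the
# drop in IOPS at 32,000 ns is the opposite of what was expected.
NS_PER_SECOND = 10**9
plateau_iops = 46_000  # approximate value taken from the table above

mean_request_ns = NS_PER_SECOND / plateau_iops
print(round(mean_request_ns))  # -> 21739, i.e. ~21,000 ns
```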
I removed all the pinning and restored the guest to a "normal"
configuration.

QEMU_AIO_POLL_MAX_NS      IOPs
               unset    25,553
                   1    28,684
                   2    38,213
                   4    29,413
                   8    38,612
                  16    30,578
                  32    30,145
                  64    41,637
                 128    28,554
                 256    29,661
                 512    39,178
               1,024    29,644
               2,048    37,190
               4,096    29,838
               8,192    38,581
              16,384    37,793
              32,768    20,332
              65,536    35,755

-- 
Karl Rister <kris...@redhat.com>
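(Editorial aside: Stefan's suggestion above is that the busy-poll loop should break out as soon as work arrives, e.g. via aio_notify(), rather than spinning for the full QEMU_AIO_POLL_MAX_NS budget. A minimal sketch of that shape, in Python rather than QEMU's C, with a hypothetical `poll_for_events` helper standing in for the real loop:)

```python
import threading
import time

def poll_for_events(event: threading.Event, max_poll_ns: int) -> bool:
    """Busy-poll until an event arrives or the time budget expires.

    Hypothetical illustration of the idea discussed in the thread,
    not QEMU's actual implementation: the loop exits early the moment
    the event fires (analogous to seeing aio_notify()) instead of
    always burning the full polling window.
    """
    deadline = time.monotonic_ns() + max_poll_ns
    while time.monotonic_ns() < deadline:
        if event.is_set():   # work arrived: break the poll loop early
            return True
    return False             # budget exhausted with nothing to do
```

With this shape, raising the budget past the mean request duration costs nothing when requests complete sooner, because the loop returns as soon as one does.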