On Thu, Jan 12, 2017 at 04:41:00PM +0200, Sagi Grimberg wrote:
> 
> >>**Note: when I ran multiple threads on more CPUs the performance
> >>degradation phenomenon disappeared, but I tested on a VM with
> >>qemu emulation backed by null_blk, so I figured I had some other
> >>bottleneck somewhere (that's why I asked for some more testing).
> >
> >That could be because of the vmexits: every MMIO access in the guest
> >triggers a vmexit, and if you poll with a low budget you do more MMIOs,
> >hence more vmexits.
> >
> >Did you do testing only in qemu or with real H/W as well?
> 
> I tried once. IIRC, I saw the same phenomena...

JFTR, I tried my AHCI irq_poll patch on the Qemu emulation and the read
throughput dropped from ~1GB/s to ~350MB/s. But this could be related to
Qemu's I/O weirdness as well, I think. I'll try on real hardware tomorrow.
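
That would fit the budget point quoted above: with irq_poll, a handler
that doesn't use up its budget completes the poll and re-arms the device
interrupt, which usually means an MMIO write. A minimal sketch of such a
handler (the foo_* names are hypothetical, made up for the example;
irq_poll_complete() is the real API):

    /*
     * Called with a budget of completions to process.  If we drain the
     * queue before the budget runs out, we complete the poll and re-arm
     * the device interrupt -- typically an MMIO write.  The smaller the
     * budget, the more often we take this re-arm path, i.e. more MMIOs,
     * and in a guest every one of them is a vmexit.
     */
    static int foo_irq_poll(struct irq_poll *iop, int budget)
    {
            struct foo_queue *q = container_of(iop, struct foo_queue, iop);
            int done = foo_process_completions(q, budget);

            if (done < budget) {
                    irq_poll_complete(iop);
                    writel(1, q->reg_irq_enable);   /* MMIO: re-arm IRQ */
            }
            return done;
    }

On the host, comparing 'perf kvm stat' runs (or perf stat -e
kvm:kvm_exit) with different budgets should show whether the exit count
really explains the drop.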

-- 
Johannes Thumshirn                                          Storage
jthumsh...@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850