On Thu, Jan 12, 2017 at 01:44:05PM +0200, Sagi Grimberg wrote:
[...]
> It's pretty basic:
> --
> [global]
> group_reporting
> cpus_allowed=0
> cpus_allowed_policy=split
> rw=randrw
> bs=4k
> numjobs=4
> iodepth=32
> runtime=60
> time_based
> loops=1
> ioengine=libaio
> direct=1
> invalidate=1
> randrepeat=1
> norandommap
> exitall
> 
> [job]
> --
> 
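
Aside: assuming the snippet above is saved as a job file (say
poll.fio; the name is mine), running it is just:

--
$ fio poll.fio
--
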
> **Note: when I ran multiple threads on more CPUs, the performance
> degradation disappeared, but I tested on a VM with qemu emulation
> backed by null_blk, so I figured I had some other bottleneck
> somewhere (that's why I asked for some more testing).

That could be because of vmexits: every MMIO access in the guest
triggers a vmexit, so if you poll with a low budget you do more MMIOs
and hence get more vmexits.
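
To make the arithmetic concrete, here's a standalone sketch (not
driver code; all names are made up for illustration): assuming one
completion-queue doorbell write, i.e. one MMIO, per poll round,
draining N completions with budget B costs ceil(N/B) doorbell MMIOs,
and in a guest each of those is a vmexit:

--
#include <stdio.h>

static int doorbell_mmios;      /* each of these is a vmexit in a guest */

/* Process up to 'budget' completions, then ring the CQ doorbell once. */
static int poll_round(int *pending, int budget)
{
        int done = 0;

        while (done < budget && *pending > 0) {
                (*pending)--;           /* "handle" one completion */
                done++;
        }
        if (done)
                doorbell_mmios++;       /* stands in for writel(head, doorbell) */
        return done;
}

int main(void)
{
        const int budgets[] = { 4, 64, 256 };

        for (int i = 0; i < 3; i++) {
                int pending = 1024;     /* CQEs to drain */

                doorbell_mmios = 0;
                while (pending > 0)
                        poll_round(&pending, budgets[i]);
                printf("budget %3d: %4d doorbell MMIOs for 1024 CQEs\n",
                       budgets[i], doorbell_mmios);
        }
        return 0;
}
--

With budget 4 that's 256 doorbell MMIOs for 1024 completions versus 4
with budget 256, so on real hardware the difference is a few register
writes, but under qemu it's a few hundred extra vmexits.
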

Did you test only in qemu, or on real H/W as well?

> 
> Note that I ran randrw because I was backed by null_blk. When
> testing with a real nvme device, you should run either randread or
> randwrite, and if you do a write, you can't run it multi-threaded
> (well, you can, but you'll get unpredictable performance...).

Noted, thanks.
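
For my own notes, my reading of that for real H/W would be roughly the
following tweak to the job file above (untested):

--
; reads can stay multi-threaded:
rw=randread
numjobs=4
; for writes, drop to a single job for predictable results:
; rw=randwrite
; numjobs=1
--
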

Byte,
        Johannes
-- 
Johannes Thumshirn                                          Storage
jthumsh...@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850