I must have been blind; of course your test case in the description is
fine. Sorry for even asking.

I'll give it a try to see whether the improvement is also reflected on
some HW I can access; if not, thanks for the offer to test it on your side.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847948

Title:
  Improve NVMe guest performance on Bionic QEMU

Status in The Ubuntu-power-systems project:
  Triaged
Status in linux package in Ubuntu:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in linux source package in Bionic:
  New
Status in qemu source package in Bionic:
  Triaged

Bug description:
  == Comment: #0 - Murilo Opsfelder Araujo  - 2019-10-11 14:16:14 ==

  ---Problem Description---
  Back-port the following patches to Bionic QEMU to improve NVMe guest
  performance by more than 200%:

  "vfio-pci: Allow mmap of MSIX BAR"
  https://git.qemu.org/?p=qemu.git;a=commit;h=ae0215b2bb56a9d5321a185dde133bfdd306a4c0

  "ppc/spapr, vfio: Turn off MSIX emulation for VFIO devices"
  https://git.qemu.org/?p=qemu.git;a=commit;h=fcad0d2121976df4b422b4007a5eb7fcaac01134
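
  One way to check whether a given qemu tree already carries both
  commits (a sketch, assuming a local git checkout of qemu with these
  commits fetched):

  host$ cd qemu
  host$ git merge-base --is-ancestor ae0215b2bb56a9d5321a185dde133bfdd306a4c0 HEAD && echo "vfio-pci mmap patch present"
  host$ git merge-base --is-ancestor fcad0d2121976df4b422b4007a5eb7fcaac01134 HEAD && echo "spapr MSIX patch present"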
   
  ---uname output---
  na
   
  ---Additional Hardware Info---
  0030:01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 172Xa/172Xb (rev 01)

   
  Machine Type = AC922 
   
  ---Debugger---
  A debugger is not configured
   
  ---Steps to Reproduce---
   Install or set up a guest image and boot it.

  Once the guest is running, pass through the NVMe disk to the guest
  using the following XML:

  host$ cat nvme-disk.xml
  <hostdev mode='subsystem' type='pci' managed='no'>
      <driver name='vfio'/>
      <source>
          <address domain='0x0030' bus='0x01' slot='0x00' function='0x0'/>
      </source>
  </hostdev>

  host$ virsh attach-device <domain> nvme-disk.xml --live
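
  Since the XML uses managed='no', libvirt does not rebind the device
  for you; it has to be bound to vfio-pci before the attach. One way to
  do that through sysfs (a sketch, assuming the 0030:01:00.0 address
  from above and that the device is currently bound to a host driver):

  host$ echo 0030:01:00.0 > /sys/bus/pci/devices/0030:01:00.0/driver/unbind
  host$ echo vfio-pci > /sys/bus/pci/devices/0030:01:00.0/driver_override
  host$ echo 0030:01:00.0 > /sys/bus/pci/drivers_probe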

  On the guest, run fio benchmarks:

  guest$ fio --direct=1 --rw=randrw --refill_buffers --norandommap \
      --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=16 \
      --runtime=60 --name=job1 --filename=/dev/nvme0n1 --numjobs=4
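
  Note that --rw=randrw combined with --rwmixread=100 makes this a 100%
  random-read workload, which is why only READ lines appear in the
  results below; the first bw value is the aggregate across all jobs,
  followed by the min-max range of the individual jobs.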

  Without the patches, results are similar for numjobs=4 and
  numjobs=64, respectively:

     READ: bw=385MiB/s (404MB/s), 78.0MiB/s-115MiB/s (81.8MB/s-120MB/s), io=11.3GiB (12.1GB), run=30001-30001msec
     READ: bw=382MiB/s (400MB/s), 2684KiB/s-12.6MiB/s (2749kB/s-13.2MB/s), io=11.2GiB (12.0GB), run=30001-30009msec

  With the two patches applied, performance improved significantly for
  the numjobs=4 and numjobs=64 cases, respectively:

     READ: bw=1191MiB/s (1249MB/s), 285MiB/s-309MiB/s (299MB/s-324MB/s), io=34.9GiB (37.5GB), run=30001-30001msec
     READ: bw=4273MiB/s (4481MB/s), 49.7MiB/s-113MiB/s (52.1MB/s-119MB/s), io=125GiB (134GB), run=30001-30005msec
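
  Worked out from the numbers above, that is roughly a 3.1x speedup
  (1191/385, about +209%) for numjobs=4 and roughly an 11x speedup
  (4273/382) for numjobs=64, consistent with the "more than 200%"
  figure at the top of this report.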

   
  Userspace tool common name: qemu 

  Userspace rpm: qemu 
   
  The userspace tool has the following bit modes: 64-bit 

  Userspace tool obtained from project website:  na

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-power-systems/+bug/1847948/+subscriptions
