For a single-disk setup, I'm seeing a tiny (possibly insignificant)
improvement. There's no obvious regression, so I'll go ahead and mark
this verified.

dannf@d05-3:~$ tail -n2 fio.out.*
==> fio.out.new <==
Disk stats (read/write):
  sda: ios=45/80729, merge=0/4929748, ticks=856/3320144, in_queue=3321592, util=100.00%

==> fio.out.old <==
Disk stats (read/write):
  sda: ios=0/80887, merge=0/4933228, ticks=0/3324968, in_queue=3325680, util=99.93%

I've requested testing with a larger array to confirm the scaling
improvements.

** Tags removed: verification-needed-zesty
** Tags added: verification-done-zesty

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1708734

Title:
  hisi_sas performance improvements

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Zesty:
  Fix Committed

Bug description:
  [Impact]
  Recent changes to the upstream driver improve write performance scalability 
with large numbers of disks.

  [Test Case]
  Attach SSDs to the controller, and run fio with the attached configuration 
file (adjust disk names as appropriate). You should see a nearly 2x improvement.
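  The attached configuration file is not reproduced here. For illustration only, a fio job along these lines could exercise write throughput on one of the attached disks (the device name, block size, queue depth, and runtime below are assumptions, not the attached configuration):

  ```ini
  ; Hypothetical fio job file for a sequential-write test.
  ; Adjust filename= to match an SSD attached to the hisi_sas controller.
  [global]
  ioengine=libaio
  direct=1
  rw=write
  bs=4k
  iodepth=32
  runtime=60
  time_based

  [sda-write]
  filename=/dev/sda
  ```

  To test scaling, one would add a job section per disk; the per-disk "Disk stats (read/write)" lines like those quoted above appear at the end of fio's output.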

  [Regression Risk]
  The fixes are localized to the hisi_sas driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1708734/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
