Cameron Harr wrote:
Vladislav Bolkhovitin wrote:
Cameron Harr wrote:
Ok, I've done some testing with elevator=noop, with scst_threads=[1,2,3] and srpt thread=[0,1]. I ran random writes with both 4 KB and 512 B blocks, 60 s per test. Unfortunately, I can't seem to reproduce the numbers I had before - I believe the reporting mechanism I used earlier (a script that reads /proc/diskstats) gave me invalid results. This time I calculated IOPS directly from the FIO results. One interesting note: in almost every case srpt thread=1 gives better performance.
Strange, indeed.

Do you use the latest SVN trunk?
Almost - it was svn rev 532.
Did you use the real drives or NULLIO?
Real drives
What is your FIO script?
A variation on this:
fio/fio --rw=randwrite --bs=512 --size=20G --loops=10 --name=randwrite_512_sdc --numjobs=64 --runtime=60 --direct=1 --group_reporting --randrepeat=0 --softrandommap=1 --ioengine=libaio --iodepth=16 --filename=/dev/sdb --filename=/dev/sdc

AFAIK, libaio currently isn't the best ioengine from a performance POV. Can you try other ioengines, especially "sync"? Also, why did you choose the other options, especially "iodepth"?
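For reference, a sketch of the same workload switched to the "sync" ioengine (this is an illustrative command line, not a tested invocation - note that iodepth beyond 1 has no effect with synchronous engines, so it is dropped here and numjobs alone provides the parallelism):

```shell
# Same random-write workload as above, but with ioengine=sync.
# WARNING: writes directly to /dev/sdb and /dev/sdc - destructive.
fio --rw=randwrite --bs=512 --size=20G --loops=10 --name=randwrite_512_sync \
    --numjobs=64 --runtime=60 --direct=1 --group_reporting --randrepeat=0 \
    --softrandommap=1 --ioengine=sync --filename=/dev/sdb --filename=/dev/sdc
```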

To better interpret the results I also need "vmstat 1" and "top d1" output during the runs from all initiators and the target. Top should show stats for all CPUs, not only the aggregate value it shows by default.

How do you calculate IOPS rate?
I divide the sum (if more than one) of the "ios=" values from a particular test by the runtime.

I would rather use the "iops=" value reported by fio.
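To make the "ios= divided by runtime" approach concrete, here is a minimal sketch; the fio output lines below are invented sample data for illustration (real values come from your own run):

```shell
# Hypothetical fio per-disk stat lines (numbers invented for illustration):
cat > /tmp/fio_out.txt <<'EOF'
  sdb: ios=0/131072, merge=0/0, ticks=0/52000, in_queue=52000, util=99.50%
  sdc: ios=0/131072, merge=0/0, ticks=0/51000, in_queue=51000, util=99.10%
EOF

runtime=60  # seconds per test
# Sum the write-side "ios=" counts (the value after the '/') across
# both devices, then divide by the runtime to get aggregate IOPS:
total=$(( $(grep -o 'ios=[0-9]*/[0-9]*' /tmp/fio_out.txt \
            | cut -d/ -f2 | paste -sd+ -) ))
echo "$(( total / runtime )) IOPS"
```

With these sample numbers the sum is 262144 ios over 60 s, i.e. 4369 IOPS (integer division); the "iops=" value fio prints avoids this hand calculation entirely.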


_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general

