Vladislav Bolkhovitin wrote:
Cameron Harr wrote:

Ok, I've done some testing with elevator=noop, scst_threads set to 1, 2, and 3, and srpt thread set to 0 and 1. I ran random writes with both 4k and 512B blocks, 60s per test. Unfortunately, I can't seem to reproduce the numbers I had before - I believe the reporting mechanism I used earlier (a script based on /proc/diskstats) gave me invalid results. This time I have calculated IOPS straight from the FIO results. One interesting note is that in almost every case srpt thread=1 gives better performance.

Strange, indeed.

Do you use the latest SVN trunk?
Almost - it was svn rev 532.

Did you use the real drives or NULLIO?
Real drives

What is your FIO script?
A variation on this:
fio/fio --rw=randwrite --bs=512 --size=20G --loops=10 --name=randwrite_512_sdc --numjobs=64 --runtime=60 --direct=1 --group_reporting --randrepeat=0 --softrandommap=1 --ioengine=libaio --iodepth=16 --filename=/dev/sdb --filename=/dev/sdc
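
(For reference, a sweep like the one described above could be scripted roughly as below. This is only a sketch, not the actual test script: how scst_threads and the srpt "thread" option get switched between runs is assumed to be a module reload, and the result file names are made up.)

    #!/bin/bash
    # Sketch of the parameter sweep; the drive list is the two devices
    # from the fio command above.
    DRIVES="/dev/sdb /dev/sdc"

    for bs in 4k 512; do
      for scst_threads in 1 2 3; do
        for srpt_thread in 0 1; do
          # ASSUMPTION: parameters are changed on the target by reloading
          # the modules, e.g.:
          #   modprobe -r ib_srpt scst
          #   modprobe scst scst_threads=$scst_threads
          #   modprobe ib_srpt thread=$srpt_thread
          fio --rw=randwrite --bs=$bs --size=20G --loops=10 \
              --name=randwrite_$bs --numjobs=64 --runtime=60 --direct=1 \
              --group_reporting --randrepeat=0 --softrandommap=1 \
              --ioengine=libaio --iodepth=16 \
              $(for d in $DRIVES; do printf -- '--filename=%s ' $d; done) \
              > result_${bs}_${scst_threads}_${srpt_thread}.log
        done
      done
    done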



How do you calculate IOPS rate?
I sum the "ios=" counts reported for a test (when there is more than one) and divide by the runtime.
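
(In shell terms that amounts to roughly the following. The log file name is hypothetical, and the pattern assumes fio's per-disk stats lines print the counts as "ios=READ/WRITE", which can differ between fio versions.)

    # Sum all "ios=" counts (read and write) from one fio result file
    # and divide by the 60s runtime used above.
    grep -o 'ios=[0-9/]*' result_512_3_1.log \
      | awk -F'[=/]' -v t=60 '{ for (i = 2; i <= NF; i++) sum += $i }
                              END { printf "iops=%.2f\n", sum / t }'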

It would be interesting to see "vmstat 1" and "top d1" output during the runs. Top should show stats for all CPUs, not only the aggregate value.
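
(One way to capture those for a 60-second run, as a sketch; the log names are made up, and batch-mode top only shows per-CPU rows if the per-CPU view was toggled with '1' and saved to the toprc beforehand:)

    # Collect vmstat and per-CPU top output for the duration of a test.
    vmstat 1 60       > vmstat.log &
    top -b -d 1 -n 60 > top.log    &
    wait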

type       bs   drives  scst_threads  srptthread       iops
randwrite  4k   1       1             0            51134.20
randwrite  4k   1       1             1            63461.86
randwrite  4k   1       2             0            52383.10
randwrite  4k   1       2             1            54065.52
randwrite  4k   1       3             0            48827.27
randwrite  4k   1       3             1            52703.82
randwrite  4k   2       1             0            64619.11
randwrite  4k   2       1             1            62605.09
randwrite  4k   2       2             0            67961.56
randwrite  4k   2       2             1            78884.72
randwrite  4k   2       3             0            70340.04
randwrite  4k   2       3             1            76253.60
randwrite  4k   3       1             0            53777.02
randwrite  4k   3       1             1            64661.21
randwrite  4k   3       2             0            91073.05
randwrite  4k   3       2             1            90127.98
randwrite  4k   3       3             0            92012.13
randwrite  4k   3       3             1            96848.61
randwrite  512  1       1             0            55040.20
randwrite  512  1       1             1            62057.33
randwrite  512  1       2             0            60237.05
randwrite  512  1       2             1            63465.54
randwrite  512  1       3             0            58716.01
randwrite  512  1       3             1            60089.11
randwrite  512  2       1             0            64978.41
randwrite  512  2       1             1            64018.47
randwrite  512  2       2             0            78128.56
randwrite  512  2       2             1            94561.47
randwrite  512  2       3             0            82526.52
randwrite  512  2       3             1           105874.51
randwrite  512  3       1             0            56730.70
randwrite  512  3       1             1            62147.04
randwrite  512  3       2             0            87507.15
randwrite  512  3       2             1            95781.40
randwrite  512  3       3             0            91645.99
randwrite  512  3       3             1           114164.39



