Vladislav Bolkhovitin wrote:
I think srptthread=0 performs worse in this case because with it part of the processing is done in SIRQ context, and the scheduler seems to run it on the same CPU as fct0-worker, which does the data-transfer job to your SSD device. That thread always consumes about 100% CPU, so the SIRQ processing gets less CPU time, hence lower overall performance.

So, try pinning fctX-worker, the SCST threads, and SIRQ processing to different CPUs and check again. You can set thread affinity using the utility from http://www.kernel.org/pub/linux/kernel/people/rml/cpu-affinity/; for how to set IRQ affinity, see Documentation/IRQ-affinity.txt in your kernel tree.
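For reference, the pinning could be sketched with the more common util-linux taskset instead of the utility linked above. The pgrep patterns and the IRQ number are assumptions for illustration only; the real IRQ numbers for your HCA are listed in /proc/interrupts.

```shell
# Sketch only: thread names come from this thread, but the exact
# pgrep patterns and the IRQ number are illustrative assumptions.

# Pin the fct0-worker thread to CPU 7
taskset -pc 7 "$(pgrep -f fct0-worker)"

# Pin an SCST I/O thread to CPU 4
taskset -pc 4 "$(pgrep -f scsi_tgt0)"

# Steer IRQ 24 (example number; check /proc/interrupts)
# to CPUs 1-3 (hex cpumask 0e)
echo 0e > /proc/irq/24/smp_affinity
```

Note that /proc/irq/N/smp_affinity takes a hexadecimal CPU bitmask and requires root, and that irqbalance may rewrite it unless it is stopped or itself confined.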

I ran with the two fct-worker threads pinned to CPUs 7 and 8, the scsi_tgt threads pinned to CPUs 4, 5, or 6, and irqbalance pinned to CPUs 1-3. I wasn't sure whether I should also adjust the eight ksoftirqd processes, since there is one per CPU. From these results I don't see a big difference, but they still give srptthread=1 a slight performance advantage.

type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=1 iops=74990.87
type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=1 iops=84005.58
type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=1 iops=72369.04
type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=1 iops=91147.19
type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=1 iops=70463.27
type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=1 iops=91755.24
type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=0 iops=68000.68
type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=0 iops=87982.08
type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=0 iops=73380.33
type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=0 iops=87223.54
type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=0 iops=70918.08
type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=0 iops=88843.35
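As a quick sanity check, the per-setting averages can be computed with a small awk script. This is a hypothetical helper, not part of any test harness; the heredoc below is just the srptthread and iops columns copied from the table, with the other fields dropped for brevity.

```shell
# Average the iops column per srptthread setting from the results above.
avg=$(awk '
{
  # Parse key=value pairs on each line
  for (i = 1; i <= NF; i++) {
    split($i, kv, "=")
    if (kv[1] == "srptthread") t = kv[2]
    if (kv[1] == "iops")       v = kv[2]
  }
  sum[t] += v; n[t]++
}
END {
  for (t in sum)
    printf "srptthread=%s avg_iops=%.2f\n", t, sum[t] / n[t]
}' <<'EOF' | sort
srptthread=1 iops=74990.87
srptthread=1 iops=84005.58
srptthread=1 iops=72369.04
srptthread=1 iops=91147.19
srptthread=1 iops=70463.27
srptthread=1 iops=91755.24
srptthread=0 iops=68000.68
srptthread=0 iops=87982.08
srptthread=0 iops=73380.33
srptthread=0 iops=87223.54
srptthread=0 iops=70918.08
srptthread=0 iops=88843.35
EOF
)
echo "$avg"
```

On this data the averages come out to roughly 80.8K iops for srptthread=1 versus 79.4K for srptthread=0, which matches the slight advantage noted above.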
