Hello,
Scenario:
--------
RHEL 5.5 x86_64 server on an HP ProLiant DL380 G6.
Local RAID: RAID10 across 12 fast SAS disks.
SAN storage: IBM XIV connected through four 4Gbit/s HBAs using
round-robin multipathing, as set up by the xiv_attach tool. The queue
depth for the involved devices has been increased to 192 per device.
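For reference, the queue depth was raised roughly like this (sdc is
just a placeholder; the actual path devices are the ones listed by
"multipath -ll", and xiv_attach may well have done it differently):

    cat /sys/block/sdc/device/queue_depth         # current value
    echo 192 > /sys/block/sdc/device/queue_depth  # raise to 192
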
Observations:
------------
When I do a single "dd if=/dev/cciss/c0d0foo of=/dev/null bs=512k" on the
local storage, I get around 400MB/s.
When I do a single "dd if=/dev/mpath/bar of=/dev/null bs=512k" on the XIV
storage, I get around 100MB/s.
Now, if I run a number of dd jobs in parallel against the XIV storage,
the total throughput rises to around 450MB/s. In comparison, throughput
for the local RAID doesn't change when I run multiple instances of dd
in parallel.
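The parallel runs were along these lines (job count and device names
are illustrative):

    for dev in /dev/mpath/bar1 /dev/mpath/bar2 /dev/mpath/bar3 /dev/mpath/bar4
    do
        dd if=$dev of=/dev/null bs=512k &
    done
    wait
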
When monitoring the SCSI queues for the XIV devices while performing
I/O, the queue limits are never reached. Changing readahead values
with "blockdev" changes the situation.
Questions:
---------
Why is there such a significant difference in single-process throughput
between the two kinds of storage, given that both storage systems
perform more or less similarly at peak?
What should I do to improve single-process I/O performance on the XIV?
Could it be related to the multipath setup?
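Regarding the multipath angle, this is roughly how I've been inspecting
the path setup (the map name is a placeholder; rr_min_io in
/etc/multipath.conf is the knob I'm wondering about, since it controls
how many consecutive I/Os go to one path before round-robin switches):

    multipath -ll bar                   # paths, state and selector policy
    dmsetup table bar                   # round-robin parameters actually loaded
    grep rr_min_io /etc/multipath.conf
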
(This question has also been posted on IBM's storage forum at
https://www.ibm.com/developerworks/forums/thread.jspa?threadID=345438 .)
--
Regards,
Troels Arvin <[email protected]>
http://troels.arvin.dk/