Jody -
In OpenOffice the graphs don't display well. I agree with your
interpretation; there are worries here. I wouldn't start the obdfilter
survey until we have learned how to tune sgpdd-survey further.
For the runs that Eric Barton did, the critical parameters were the MF
(a read-ahead factor) and the readahead setting. What did you set those to?
I have one question about the IO kit: could it write to many regions
(say 10,000) with fewer threads? I think this is a realistic situation
under Lustre load on larger clusters.
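For reference, sgpdd-survey in the Lustre iokit takes its sweep ranges from environment variables, so a many-regions/few-threads run should be expressible directly. The sketch below is from memory: the variable names (scsidevs, size, rszlo/rszhi, crglo/crghi, thrlo/thrhi) and the device paths are assumptions to be checked against the script itself before running.

```shell
# Hypothetical sgpdd-survey invocation: sweep up to ~10,000 concurrent
# regions (crg) while capping the thread count (thr) well below that.
# Variable names are recalled from the iokit script, not verified here.
SGPDEV="/dev/sg0 /dev/sg1"        # example sg devices; site-specific

size=8192 \
rszlo=1024 rszhi=4096 \
crglo=1024 crghi=10240 \
thrlo=64 thrhi=256 \
scsidevs="$SGPDEV" ./sgpdd-survey
```

If the script accepts crg values that large, this would answer the question empirically; if it clamps threads to the region count, that would show up in the survey output header.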
Finally, I have forwarded the graphs to DDN - I hope they can comment.
- Peter -
Jody McIntyre wrote:
Hi all,
Attached is my first attempt at characterizing the performance of a DDN
S2A 9500 under various loads using sgpdd-survey.
I tested IO sizes from 512K to 4M with reads, writes with writeback
cache enabled, and writes with writeback cache disabled. As can be seen
from the graphs, performance really does improve with 4M IOs, especially
for writes without WB. However, even with 4M, reads and writes without
WB never reach the nice "plateau" seen with the S2A 8500 and 1M IOs.
I will include DDN settings (there's even a tab set aside for them) but
I'm not sure what's useful - can anyone suggest what would be good to
include? I have 'showall' output from the controller, but that is far too
voluminous (and also contains sensitive information).
Also if anyone can suggest tunings that are likely to improve either
read or write performance, please let me know and I will try them if
possible. We have done the standard tuning shown at:
https://mail.clusterfs.com/wikis/lustre/LustreDdnTuning
I plan on performing an obdfilter-survey on the same hardware, and am in
the process of doing a similar study on Thumper hardware as well.
Cheers,
Jody
------------------------------------------------------------------------
_______________________________________________
Lustre-devel mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-devel