Hi Sébastien,
If I understand you correctly, you are running your application on two
different MPIs on two different clusters with two different IB vendors.
Could you make the comparison more "apples to apples"-ish?
For instance:
- run the same version of Open MPI on both clusters
- run the same version of MVAPICH2 on both clusters
- run the same number of MPI ranks in both tests
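For a directly comparable number, you could also run the same point-to-point
latency test on both machines (osu_latency from the OSU micro-benchmarks does
this thoroughly). A minimal ping-pong sketch is below; it is illustrative, not
a tuned benchmark, and the warm-up and iteration counts are arbitrary:

/* pingpong.c -- minimal MPI ping-pong latency sketch.
 * Compile: mpicc -O2 pingpong.c -o pingpong
 * Run:     mpirun -np 2 ./pingpong   (extra ranks, if any, just idle)
 */
#include <mpi.h>
#include <stdio.h>

#define WARMUP     1000
#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Warm-up exchanges so connection setup is not timed. */
    for (int i = 0; i < WARMUP; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    /* Timed ping-pong between ranks 0 and 1. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Half the average round-trip time is the one-way latency. */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / ITERATIONS / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}

If this microbenchmark reports similar latencies on both clusters, the
difference you are seeing is more likely in the MPI stack or in how your
application drives it than in the IB hardware itself.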
> Hello,
> Open MPI 1.4.3 on Mellanox InfiniBand hardware gives a latency of 250
> microseconds with 256 MPI ranks on super-computer A (name is colosse).
> The same software gives a latency of 10 microseconds with MVAPICH2 and
> QLogic InfiniBand hardware with 512 MPI ranks on super-computer B (name is g