Hello all,

I have been doing some MPI benchmarking on an Infiniband cluster.

Specs are:
12 cores/node
2.9 GHz/core
Infiniband interconnect (TCP also available)

Some runtime numbers (192 cores total, 16 nodes):

IntelMPI:                          0.4 seconds
OpenMPI 3.1.3 (--mca btl ^tcp):    2.5 seconds
OpenMPI 3.1.3 (--mca btl ^openib): 26 seconds
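
In case it helps narrow this down, I was also planning to run a bare two-rank
ping-pong test (standard MPI calls only, nothing application-specific) under
each stack, to see whether the gap is in raw interconnect latency/bandwidth or
somewhere in the application. A rough sketch of what I had in mind (file name
and message size are just placeholders):

/* pingpong.c - minimal MPI ping-pong sketch to compare raw
 * interconnect performance between MPI stacks.
 * Build: mpicc -O2 pingpong.c -o pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int iters  = 1000;
    const int nbytes = 1 << 20;          /* 1 MiB messages */
    char *buf = malloc(nbytes);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 ranks on different nodes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes the message back */
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %.3f us for %d-byte messages\n",
               1e6 * (t1 - t0) / iters, nbytes);

    free(buf);
    MPI_Finalize();
    return 0;
}

I would launch it with one rank on each of two nodes so the traffic actually
crosses the fabric, e.g. mpirun -np 2 --hostfile $HOSTS --map-by node ./pingpong.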

As you can see, there are some issues with these numbers: the OpenMPI run over
InfiniBand (^tcp) is roughly 6x slower than IntelMPI, and the TCP run
(^openib) is about 65x slower. I suspect user error may be to blame.
Are there any additional arguments I should be passing to mpirun in order
to get some more performance out of OpenMPI? My command currently looks
like:

mpirun -np 192 --hostfile $HOSTS --mca btl ^tcp ./<application>
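
For comparison, I was also going to try explicitly listing the btls and
pinning ranks rather than just excluding tcp, roughly along these lines (flag
names written from memory, so apologies if any are off for 3.1.x):

mpirun -np 192 --hostfile $HOSTS --mca btl openib,vader,self \
       --bind-to core --map-by core --report-bindings ./<application>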

Any advice would be appreciated,

Thanks,
Cooper


Cooper Burns
Senior Research Engineer
(608) 230-1551
convergecfd.com