I'll let the Myricom guys answer further about message-passing
optimizations, but some general tips:
- Yes, using processor affinity might help. Add "--mca
mpi_paffinity_alone 1" to your mpirun command line (see the example
after this list) and see if that helps.
- I *believe* that HP MPI uses processor affinity by default.
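
For concreteness, here's a minimal sketch of such a command line. The
process count (-np 16) and solver name (simpleFoam) are just
placeholders; substitute whatever matches your decomposed case:

    # Hypothetical example: ask Open MPI to bind each MPI process
    # to its own processor before the solver starts.
    mpirun --mca mpi_paffinity_alone 1 -np 16 simpleFoam -parallel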
Dear openmpi-ers,
I recently installed Open MPI to run OpenFOAM 1.5 on our Myrinet
cluster. I saw great performance improvements compared to Open MPI
1.2.6; however, it is still a little behind the commercial HP MPI.
Are there further tips for fine-tuning the parameters passed to
mpirun for this setup?