Re: [OMPI users] openmpi 1.3.3 with OpenFOAM

2009-09-02 Thread Jeff Squyres
I'll let the Myricom guys answer further about message-passing
optimizations, but here are some general tips:


- Yes, using processor affinity might help.  Add
"--mca mpi_paffinity_alone 1" to your mpirun command line and see if
that helps (an example command line follows this list).
- I *believe* that HP MPI uses processor affinity by default -- Open
MPI does not.
- If you are using fewer MPI processes than you have processors, you
might want to spread the MPI processes out across the processor
sockets.  Open MPI 1.3.x does not yet have good controls for this; you
may need to do it manually with some shell scripting (we're putting
better processor affinity controls in future versions) -- see the
sketch after this list.
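
For example, a run that already uses the CM PML and MX MTL could have
the affinity flag added like this (the process count, hostfile name,
and solver name below are only placeholders for illustration):

mpirun -np 8 --hostfile myhosts \
    --mca pml cm --mca mtl mx \
    --mca mpi_paffinity_alone 1 \
    ./simpleFoam -parallel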
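
And here is a minimal sketch of the kind of shell scripting meant in
the last point, assuming a two-socket node whose sockets correspond to
NUMA nodes 0 and 1, that numactl is installed, and that your mpirun
exports the OMPI_COMM_WORLD_LOCAL_RANK environment variable (the
wrapper script and its name are hypothetical):

#!/bin/sh
# bind.sh -- place alternating local ranks on alternating sockets by
# binding each rank's CPUs and memory to one NUMA node.
rank=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
socket=$((rank % 2))
exec numactl --cpunodebind=$socket --membind=$socket "$@"

You would then launch through the wrapper, e.g.:

mpirun -np 4 --mca pml cm --mca mtl mx ./bind.sh ./simpleFoam -parallel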



On Aug 27, 2009, at 11:13 PM, bastil2...@yahoo.de wrote:



Dear openmpi-ers,

I recently installed Open MPI 1.3.3 to run OpenFOAM 1.5 on our Myrinet
cluster. I saw great performance improvements compared to Open MPI
1.2.6, but it is still a little behind the commercial HP MPI.
Are there any further tips for fine-tuning the mpirun parameters for
this particular application? From my experience the MX MTL should be
the quicker one, so I currently use:

mpirun --mca mtl mx --mca pml cm ...


as given in the FAQ.

I also think that processor affinity might be worth trying; I will do
that. Any other tips? Are there particular reasons why HP MPI still
outperforms Open MPI for this kind of task? Thanks and regards.

BastiL




--
Jeff Squyres
jsquy...@cisco.com



[OMPI users] openmpi 1.3.3 with OpenFOAM

2009-08-27 Thread bastil2...@yahoo.de
Dear openmpi-ers,

I recently installed Open MPI 1.3.3 to run OpenFOAM 1.5 on our Myrinet
cluster. I saw great performance improvements compared to Open MPI
1.2.6, but it is still a little behind the commercial HP MPI.
Are there any further tips for fine-tuning the mpirun parameters for
this particular application? From my experience the MX MTL should be
the quicker one, so I currently use:

mpirun --mca mtl mx --mca pml cm ...


as given in the FAQ.
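
To double-check that the MX components are actually present in a given
Open MPI install, ompi_info can be used, e.g.:

ompi_info | grep mx

If MX support was built in, the output should include lines such as
"MCA mtl: mx" and "MCA btl: mx".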

I also think that processor affinity might be worth trying; I will do
that. Any other tips? Are there particular reasons why HP MPI still
outperforms Open MPI for this kind of task? Thanks and regards.

BastiL