Yes, we have never really optimized Open MPI for TCP. That is hopefully 
changing soon. 

Regardless, what is the communication pattern of your app?  Are you sending a 
lot of data frequently?  Even the MPICH performance difference is surprising -- 
it suggests a lot of data transfer, potentially with small messages...?
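
If it is not obvious from the code, a quick ping-pong sweep over message 
sizes will tell you whether you are in the latency-bound small-message regime 
or the bandwidth-bound large-message regime on your Ethernet. A minimal 
sketch (the sizes and iteration count below are arbitrary, not tuned to your 
app):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 1; }  /* needs 2+ ranks */

    const int iters = 1000;
    /* Sweep message sizes from 8 B up to 1 MB. */
    for (int bytes = 8; bytes <= (1 << 20); bytes *= 8) {
        char *buf = malloc(bytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("%8d bytes: %10.2f us per round trip\n",
                   bytes, (t1 - t0) / iters * 1e6);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}

If the per-round-trip time barely moves as the message grows, you are 
latency-bound, and per-message overhead in the TCP stack is what hurts.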

Another option to try is to install the Open-MX drivers on your system and run 
Open MPI with MX support. That should give much better performance than TCP. 
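
Once Open-MX is installed, selecting the MX transport is just an MCA 
parameter on the mpirun command line. The exact component names depend on 
your Open MPI version, but roughly (the process count and binary are 
placeholders):

    mpirun --mca btl mx,sm,self -np 16 ./your_app

or, going through the MX MTL instead of the BTL:

    mpirun --mca pml cm --mca mtl mx -np 16 ./your_app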

Sent from my PDA. No type good. 

On Dec 3, 2010, at 3:11 AM, "Mathieu Gontier" <mathieu.gont...@gmail.com> wrote:

> 
> Dear Open MPI users,
> 
> I am dealing with an arithmetic problem. In fact, I have two variants of my 
> code: one in single precision, one in double precision. When I compare the 
> two executables built with MPICH, one can observe an expected performance 
> difference: 115.7 sec in single precision against 178.68 sec in double 
> precision (+54%).
> 
> The thing is, when I use Open MPI, the difference is much bigger: 238.5 sec 
> in single precision against 403.19 sec in double precision (+69%).
> 
> Our experience has already shown that Open MPI is less efficient than MPICH 
> on Ethernet with a small number of processes. This explains the difference 
> between the first set of results with MPICH and the second set with Open MPI. 
> (But if someone has more information about that, or even a solution, I am of 
> course interested.)
> But using Open MPI also widens the gap between the two precisions. Is this an 
> accentuation of the Open MPI over Ethernet performance loss, is it another 
> issue in Open MPI, or is there an option I can use?
> 
> Thank you for your help.
> Mathieu.
> 
> -- 
> Mathieu Gontier
> 
> 
