I would be interested in what others have to say about this as well.

We have been doing a bit of performance testing, since we are deploying a
new cluster and it is our first InfiniBand-based setup.

In our experience so far, OpenMPI is coming out faster than MVAPICH.
Comparisons were made with different compilers, PGI and PathScale. We do
not yet have a working build of OpenMPI with the Sun Studio compilers.

Our tests so far have been with actual user codes running on up to 600
processors.
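
For anyone trying to make this kind of comparison fair, here is a minimal
sketch of the sort of mpirun invocation we use. The hostfile and binary
name are placeholders, and it assumes an OpenMPI build that includes the
openib BTL and the mpi_paffinity_alone MCA parameter (present in the 1.2
series and later):

    # Restrict OpenMPI to InfiniBand plus shared memory, and pin each
    # process to a core, so a silent TCP fallback or process migration
    # does not skew the timings. Hostfile and binary are placeholders.
    mpirun -np 8 --hostfile ./hosts \
        --mca btl self,sm,openib \
        --mca mpi_paffinity_alone 1 \
        ./your_mpi_app

    # Check which BTLs the OpenMPI build actually provides:
    ompi_info | grep btl

With default settings, an OpenMPI build that lacks the openib BTL will
quietly fall back to TCP, which by itself can explain a large gap in a
benchmark.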


Sangamesh B wrote:
> Hi All,
> 
>        I wanted to switch from mpich2/mvapich2 to OpenMPI, since OpenMPI
> supports both Ethernet and InfiniBand. Before doing that, I tested the
> application GROMACS to compare the performance of MPICH2 and OpenMPI. Both
> were compiled with the GNU compilers.
> 
> From this benchmark, I found that OpenMPI is slower than MPICH2.
> 
> This benchmark was run on an AMD dual-core, dual-Opteron machine. Both
> libraries were compiled with their default configurations.
> 
> The job was run on 2 nodes (8 cores total).
> 
> OpenMPI - 25 m 39 s.
> MPICH2  - 15 m 53 s.
> 
> Any comments?
> 
> Thanks,
> Sangamesh
> 

-Ray Muno
 Aerospace Engineering.
