You're doing this on just one node? That would be using the OpenMPI SM
(shared-memory) transport. Last I knew it wasn't that optimized, though it
should still be much faster than TCP.
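If you want to see how much the sm transport is actually buying you, you
can force each BTL explicitly; something along these lines should work
with OpenMPI (./mdrun_mpi is just a placeholder for your GROMACS binary):

    # shared memory within the node (plus self for loopback)
    mpirun -np 4 --mca btl sm,self ./mdrun_mpi

    # the same run forced over TCP, for comparison
    mpirun -np 4 --mca btl tcp,self ./mdrun_mpi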
I am surprised at your result, though. I don't have MPICH2 on the cluster
right now, so I don't have time to compare.
How did you run the job?
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
Hi All,
I wanted to switch from MPICH2/MVAPICH2 to OpenMPI, as OpenMPI supports
both Ethernet and InfiniBand. Before doing that, I tested an application,
GROMACS, to compare the performance of MPICH2 and OpenMPI. Both were
compiled with GNU compilers. This benchmark showed that OpenMPI is slower
than MPICH2.
This benchmark was run on dual-core, dual-socket AMD Opteron nodes. Both
MPI libraries were built with their default configurations. The job was
run on 2 nodes - 8 cores.
OpenMPI - 25 m 39 s.
MPICH2 - 15 m 53 s.
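For reference, a typical way to launch this kind of run with each MPI
stack looks roughly like the following (the hostfiles, ./mdrun_mpi binary,
and topol.tpr input are placeholders, not the exact commands used):

    # OpenMPI
    mpirun -np 8 -hostfile hosts ./mdrun_mpi -s topol.tpr

    # MPICH2 (mpd-based startup)
    mpdboot -n 2 -f mpd.hosts
    mpiexec -n 8 ./mdrun_mpi -s topol.tpr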
Any comments ..?
Thanks,
Sangamesh