Sangamesh B wrote:

I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports both Ethernet and InfiniBand. Before doing that, I benchmarked an application, GROMACS, to compare the performance of MPICH2 and OpenMPI. Both were compiled with GNU compilers.

From this benchmark, I found that OpenMPI is slower than MPICH2.

The benchmark was run on AMD dual-core, dual-Opteron nodes. Both libraries were built with their default configurations.

The job was run on 2 nodes (8 cores in total).

OpenMPI - 25m 39s
MPICH2  - 15m 53s

I agree with Samuel that this difference is strikingly large.

I had a thought that might not apply to your case, but I figured I'd share it anyhow.

I don't understand MPICH very well, but it seems as though some of the flags used when building MPICH are supposed to be folded automatically into the mpicc/etc. compiler wrappers. That is, if you specified CFLAGS=-O to build MPICH, then compiling an application with mpicc would automatically get you -O. At least that was my impression; maybe I misunderstood the documentation. (If you want certain flags used only for building MPICH itself, without users getting them automatically through mpicc, you're supposed to pass flags like MPICH2LIB_CFLAGS instead of plain CFLAGS when you run "configure".)
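For concreteness, this is what I mean, assuming I'm reading the docs right (the install prefix is just a placeholder):

    # If CFLAGS is folded into the wrappers, applications built with
    # mpicc would inherit -O2 here:
    ./configure CFLAGS=-O2 --prefix=/opt/mpich2

    # MPICH2LIB_CFLAGS should apply only to building MPICH2 itself,
    # not leak into the wrappers:
    ./configure MPICH2LIB_CFLAGS=-O2 --prefix=/opt/mpich2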

Not only may this theory not apply to your case, but I'm not even sure it holds water. I just tried building MPICH2 with --enable-fast turned on. The "configure" output indicates I'm getting CFLAGS=-O2, but when I run "mpicc -show" it seems to invoke gcc without any optimization flags by default.
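For whatever it's worth, both implementations let you ask the wrapper what it would actually run, which is how I checked (the output will of course vary with your install):

    # MPICH2: print the underlying compiler command instead of running it
    mpicc -show

    # Open MPI's equivalent wrapper option
    mpicc --showme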

So, I guess I'm sending this mail less to help you and more as a request that someone might improve my understanding.

With regard to your issue: when you get that 25m 39s timing, do you have any indication whether a grotesque amount of time is being spent in MPI calls? Or is the slowdown in the non-MPI portions?
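If you need a quick way to answer that, one option is to relink against an MPI profiling layer such as mpiP, which reports per-call MPI time when the job exits. A rough sketch only: $MPIP_HOME is a placeholder for wherever mpiP is installed, and the extra libraries on the link line depend on how your mpiP was built:

    # Relink only (no rebuild of MPICH2/OpenMPI needed); mpiP intercepts
    # MPI calls through the standard PMPI profiling interface.
    # "your_objects.o" stands in for the application's object files.
    mpicc -g -o mdrun_mpi your_objects.o \
        -L$MPIP_HOME/lib -lmpiP -lbfd -liberty -lm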
