On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both Ethernet and InfiniBand. Before doing that, I tested the
>> application GROMACS to compare the performance of MPICH2 and OpenMPI.
>> Both were compiled with the GNU compilers.
>>
>> After running this benchmark, I found that OpenMPI is slower than MPICH2.
>>
>> The benchmark was run on AMD dual-core, dual-Opteron nodes. Both MPI
>> libraries were compiled with their default configurations.
>>
>> The job was run on 2 nodes (8 cores in total).
>>
>> OpenMPI - 25 m 39 s.
>> MPICH2  -  15 m 53 s.
>>
>
>
> A few things:
>
> - What version of Open MPI are you using?  Please send the information
> listed here:
>
1.2.7

>
>    http://www.open-mpi.org/community/help/
>
> - Did you specify to use mpi_leave_pinned?

No

> Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don't know if
> leave pinned behavior benefits Gromacs or not, but it likely won't hurt)
>

> - Did you enable processor affinity?

No

>  Use "--mca mpi_paffinity_alone 1" on your mpirun command line.
>
I will use these options in the next benchmark run; a sketch of the planned command line is below.
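
For reference, something like this is what I plan to run (a sketch only; the
hostfile name and the GROMACS binary/input names are placeholders for
whatever is actually used on this cluster):

  mpirun --hostfile hosts -np 8 \
      --mca mpi_leave_pinned 1 \
      --mca mpi_paffinity_alone 1 \
      mdrun_mpi -s benchmark.tpr

The two MCA flags are the ones suggested above; as far as I understand they
can also be set via environment variables (e.g. OMPI_MCA_mpi_leave_pinned=1)
instead of on the command line.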

>
> - Are you sure that Open MPI didn't fall back to ethernet (and not use IB)?
>  Use "--mca btl openib,self" on your mpirun command line.
>
I'm using TCP; there is no InfiniBand support on this cluster. Can the
results still be meaningfully compared in that case?
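
To make sure Open MPI sticks to Ethernet, I suppose I could also pin it to
the TCP transport explicitly (a sketch; the sm and self BTLs handle on-node
and loopback traffic, and <application> is a placeholder):

  mpirun -np 8 --mca btl tcp,sm,self <application>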

>
> - Have you tried compiling Open MPI with something other than GCC?

No.

>  Just this week, we've gotten some reports from an OMPI member that they
> are sometimes seeing *huge* performance differences with OMPI compiled with
> GCC vs. any other compiler (Intel, PGI, Pathscale).  We are working to
> figure out why; no root cause has been identified yet.
>
I'll try a compiler other than GCC and get back to you.
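
If the Intel compilers are available here, I would rebuild roughly like this
(a sketch; the install prefix is just an example path):

  ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
      --prefix=/opt/openmpi-1.2.7-intel
  make all install

and then recompile GROMACS against that installation before rerunning the
benchmark.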

>
> --
> Jeff Squyres
> Cisco Systems
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
