On Wed, Oct 8, 2008 at 7:09 PM, Brock Palen <bro...@umich.edu> wrote:

> You're doing this on just one node?  That would be using the OpenMPI SM
> transport. Last I knew it wasn't that optimized, though it should still be
> much faster than TCP.
>

It's on 2 nodes. I'm using TCP only; there is no InfiniBand hardware.
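
If it helps to rule transports in or out: as far as I understand the MCA
options, the BTLs can be checked and pinned down explicitly (ompi_info ships
with OpenMPI; the paths are the same ones as in the runs below):

$ /opt/ompi127/bin/ompi_info | grep btl     # list the BTL components this build has
$ /opt/ompi127/bin/mpirun --mca btl tcp,self -machinefile ./mach -np 8 \
    /opt/apps/gromacs333_ompi/bin/mdrun_mpi     # force TCP only, no shared memory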

>
> I am surprised at your result, though. I do not have MPICH2 on the cluster
> right now, and I don't have time to compare.
>
> How did you run the job?


MPICH2:

$ time /opt/mpich2/gnu/bin/mpirun -machinefile ./mach -np 8 \
    /opt/apps/gromacs333/bin/mdrun_mpi | tee gro_bench_8p

OpenMPI:

$ time /opt/ompi127/bin/mpirun -machinefile ./mach -np 8 \
    /opt/apps/gromacs333_ompi/bin/mdrun_mpi | tee gromacs_openmpi_8process
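
One thing I have not tried yet is processor binding; if I read the OpenMPI
FAQ correctly, the 1.2 series does not bind processes to cores by default,
and setting mpi_paffinity_alone should pin one process per core. A sketch
(the tee filename is just an example; I have not verified whether this
changes the GROMACS numbers):

$ time /opt/ompi127/bin/mpirun --mca mpi_paffinity_alone 1 -machinefile ./mach -np 8 \
    /opt/apps/gromacs333_ompi/bin/mdrun_mpi | tee gro_pinned     # output name is arbitrary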


>
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
>
>
>
>
> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
>  Hi All,
>>
>>       I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both Ethernet and InfiniBand. Before doing that, I tested an
>> application, GROMACS, to compare the performance of MPICH2 and OpenMPI.
>> Both have been compiled with GNU compilers.
>>
>> After this benchmark, I found that OpenMPI is slower than MPICH2.
>>
>> This benchmark was run on nodes with two dual-core AMD Opteron processors.
>> Both libraries were compiled with their default configurations.
>>
>> The job was run on 2 nodes (8 cores total).
>>
>> OpenMPI - 25 m 39 s.
>> MPICH2  -  15 m 53 s.
>>
>> Any comments?
>>
>> Thanks,
>> Sangamesh
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
