On Thu, Oct 9, 2008 at 7:30 PM, Brock Palen <bro...@umich.edu> wrote:

> Which benchmark did you use?
>
Out of the four benchmarks, I used the d.dppc benchmark.
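
For reference, a typical way to drive d.dppc from the gmxbench suite, using the
same command line for both MPI builds (the binary and file names below are just
the usual defaults, so adjust them to the local install):

$ cd gmxbench/d.dppc
$ grompp                                # preprocess the benchmark inputs into topol.tpr
$ mpirun -np 1 mdrun_mpi -s topol.tpr   # single-process run used for the timings below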

>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
>
>
>
> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>
>
>>
>> On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres <jsquy...@cisco.com> wrote:
>> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>>
>> Make sure you don't use a "debug" build of Open MPI. If you use trunk, the
>> build system detects it and turns on debug by default. It really kills
>> performance. --disable-debug will remove all those nasty printfs from the
>> critical path.
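
For a non-debug build, that just means the standard autotools sequence with
--disable-debug added. A minimal sketch, assuming a source tarball and reusing
the install prefix that appears below:

shell$ ./configure --prefix=/opt/ompi127 --disable-debug
shell$ make all install
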
>>
>> You can easily tell if you have a debug build of OMPI with the ompi_info
>> command:
>>
>> shell$ ompi_info | grep debug
>>  Internal debug support: no
>> Memory debugging support: no
>> shell$
>> Yes, it is "no" for both:
>> $ /opt/ompi127/bin/ompi_info -all | grep debug
>>  Internal debug support: no
>> Memory debugging support: no
>>
>> I've tested GROMACS for a single process (mpirun -np 1):
>> Here are the results:
>>
>> Open MPI: 120m 6s
>> MPICH2:   67m 44s
>>
>> I'm trying to build the codes with PGI, but I'm running into problems
>> compiling GROMACS.
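
In case it helps, the GROMACS autotools build is usually pointed at a PGI-built
MPI through the compiler wrapper. A sketch only (the prefix is made up, and it
assumes Open MPI was itself rebuilt with pgcc so that its mpicc wraps PGI):

$ export CC=/opt/ompi127/bin/mpicc   # wrapper from an Open MPI tree built with pgcc
$ ./configure --enable-mpi --prefix=$HOME/gromacs-pgi
$ make && make install
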
>>
>> You want to see "no" for both of those.
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>>
>>
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
