On Oct 8, 2008, at 10:26 AM, Sangamesh B wrote:
- What version of Open MPI are you using? Please send the
information listed here:
http://www.open-mpi.org/community/help/
1.2.7
- Did you specify to use mpi_leave_pinned?
No
Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don't
know if leave pinned behavior benefits Gromacs or not, but it likely
won't hurt)
I see from your other mail that you are not using IB. If you're only
using TCP, then mpi_leave_pinned will have little/no effect.
- Did you enable processor affinity?
No
Use "--mca mpi_paffinity_alone 1" on your mpirun command line.
Will use these options in the next benchmark
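For example, both MCA parameters can go on the same command line; the
process count, hostfile, and Gromacs executable name below are just
placeholders for whatever you actually run:

    mpirun -np 8 --hostfile myhosts \
        --mca mpi_leave_pinned 1 \
        --mca mpi_paffinity_alone 1 \
        mdrun_mpi

As noted above, over TCP the leave_pinned setting will make little
difference, but processor affinity can still help on multi-core nodes.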
- Are you sure that Open MPI didn't fall back to ethernet (and not
use IB)? Use "--mca btl openib,self" on your mpirun command line.
I'm using TCP. There is no InfiniBand support. But can the results
still be compared?
Yes, they should be comparable. We've always known that our TCP
support is "ok" but not "great" (truthfully: we've not tuned it nearly
as extensively as we've tuned our other transports). But such a huge
performance difference is surprising.
Is this on 1 or more nodes? It might be useful to delineate between
the TCP and shared memory performance differences. I believe that MPICH2's
shmem performance is likely to be better than OMPI v1.2's, but like
TCP, it shouldn't be *that* huge.
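A quick way to separate the two is to restrict the BTLs explicitly
(process count and executable name here are placeholders):

    # within a single node, shared memory only
    mpirun -np 4 --mca btl sm,self mdrun_mpi

    # same node, but forcing TCP, for comparison
    mpirun -np 4 --mca btl tcp,self mdrun_mpi

Comparing those two runs against the equivalent MPICH2 runs should show
whether the gap is in the shared memory path, the TCP path, or both.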
- Have you tried compiling Open MPI with something other than GCC?
No.
Just this week, we've gotten some reports from an OMPI member that
they are sometimes seeing *huge* performance differences with OMPI
compiled with GCC vs. any other compiler (Intel, PGI, Pathscale).
We are working to figure out why; no root cause has been identified
yet.
I'll try a compiler other than gcc and come back to you
That would be most useful; thanks.
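For reference, selecting another compiler is done at configure time; a
line along these lines would do it (Intel compiler names shown here as
an example, and the install prefix is just a placeholder):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        --prefix=/opt/openmpi-1.2.7-intel
    make all install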
--
Jeff Squyres
Cisco Systems