Re: [OMPI users] ompi-restart issue : ompi-restart doesn't work across nodes - possible installation problem or environment setting problem??

2008-10-08 Thread arun dhakne
I have configured with the additional flags (--enable-ft-thread --enable-mpi-threads), but there is no change in behaviour; it still gives a seg fault. Open MPI version: 1.3a1r19685. BLCR version: 0.7.3. The core file is attached; hello.c is the sample MPI program whose core is dumped
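For context, a working BLCR checkpoint/restart setup on the 1.3 branch usually follows the pattern below; the BLCR install prefix, process count, and program name are placeholders, not values from the original report:

    # configure Open MPI with checkpoint/restart support (hypothetical BLCR prefix)
    ./configure --with-ft=cr --enable-ft-thread --enable-mpi-threads \
        --with-blcr=/usr/local/blcr
    make all install

    # launch the job with the checkpoint/restart framework enabled
    mpirun -np 4 -am ft-enable-cr ./hello

    # checkpoint it (argument is the PID of mpirun), then restart from the snapshot
    ompi-checkpoint <PID_of_mpirun>
    ompi-restart ompi_global_snapshot_<PID>.ckpt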

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote: Make sure you don't use a "debug" build of Open MPI. If you use trunk, the build system detects it and turns on debug by default. It really kills performance. --disable-debug will remove all those nasty printfs from the critical path.

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Eugene Loh
Eugene Loh wrote: Sangamesh B wrote: The job is run on 2 nodes - 8 cores. OpenMPI - 25 m 39 s. MPICH2 - 15 m 53 s. I don't understand MPICH very well, but it seemed as though some of the flags used in building MPICH are supposed to be added automatically to the mpicc/etc. compiler wrappers
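For what it's worth, both MPIs can print exactly what their wrapper compilers add, which is the quickest way to check this; these are the standard wrapper options, shown here against generic installs:

    # MPICH2: show the full compile/link command the wrapper would execute
    mpicc -show

    # Open MPI: the equivalent introspection options
    mpicc --showme
    mpicc --showme:compile
    mpicc --showme:link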

Re: [OMPI users] compilation error about Open Macro when building the code with OpenMPI on Mac OS 10.5.5

2008-10-08 Thread Sudhakar Mahalingam
Jed, You are correct. I found an "Open" macro defined in another of our header files, which was included before the MPI header files. (Actually, this order was working fine with MPICH-1.2.7, but both openmpi-1.2.7 and MPICH-2 complained and threw errors at me.) Now when I change the order of
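For anyone hitting the same thing, a quick way to track down such a clash is to ask the preprocessor directly; the file and directory names below are hypothetical:

    # find which project header defines the conflicting macro
    grep -rn 'define Open' include/ src/

    # inspect what the compiler actually sees after preprocessing
    mpicxx -E main.cpp | grep -n 'Open'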

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Aurélien Bouteiller
Make sure you don't use a "debug" build of Open MPI. If you use trunk, the build system detects it and turns on debug by default. It really kills performance. --disable-debug will remove all those nasty printfs from the critical path. You can also run a simple ping-pong test (Netpipe is a g
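Concretely, that means rebuilding without debug support and then sanity-checking point-to-point performance, along these lines (install prefix, hostfile, and the location of the NetPIPE binary are placeholders):

    # rebuild an optimized (non-debug) copy of the trunk
    ./configure --disable-debug --prefix=/opt/openmpi-trunk
    make all install

    # two-process ping-pong over the interconnect using NetPIPE's MPI driver
    mpirun -np 2 --hostfile hosts ./NPmpi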

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread George Bosilca
One thing to look for is the process distribution. Based on the application's communication pattern, the process distribution can have a tremendous impact on the execution time. Imagine that the application splits the processes into two equal groups based on rank and only communicates within each
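With the mpirun options available in these releases you can compare the two extreme placements directly; the hostfile contents and process count here are only an illustration, assuming two nodes with four slots each:

    # fill each node's slots first (ranks 0-3 on the first node)
    mpirun -np 8 --byslot --hostfile hosts ./app

    # round-robin ranks across nodes (even ranks on one node, odd on the other)
    mpirun -np 8 --bynode --hostfile hosts ./app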

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brian Dobbins
Hi guys, [From Eugene Loh:] >> OpenMPI - 25 m 39 s. >> MPICH2 - 15 m 53 s. > With regards to your issue, do you have any indication when you get that > 25m39s timing if there is a grotesque amount of time being spent in MPI > calls? Or is the slowdown due to non-MPI portions? Just to ad

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Eugene Loh
Sangamesh B wrote: I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports both Ethernet and InfiniBand. Before doing that I tested an application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both have been compiled with GNU compilers. After this benchmark, I came

Re: [OMPI users] Problem building OpenMPi with SunStudio compilers

2008-10-08 Thread Ethan Mallove
On Mon, Oct/06/2008 12:24:48PM, Ray Muno wrote: > Ethan Mallove wrote: > > >> Now I get farther along but the build fails at (small excerpt) > >> > >> mutex.c:(.text+0x30): multiple definition of `opal_atomic_cmpset_32' > >> asm/.libs/libasm.a(asm.o):asm.c:(.text+0x30): first defined here > >> thr

Re: [OMPI users] compilation error about Open Macro when building the code with OpenMPI on Mac OS 10.5.5

2008-10-08 Thread Jed Brown
On Wed, Oct 8, 2008 at 21:19, Sudhakar Mahalingam wrote: > I am having a problem with the number of arguments of the "Open" macro when I try > to build a C++ code with openmpi-1.2.7 on my Mac OS 10.5.5 machine. The > error message is given below. When I look at the file.h and file_inln.h > header fil

[OMPI users] compilation error about Open Macro when building the code with OpenMPI on Mac OS 10.5.5

2008-10-08 Thread Sudhakar Mahalingam
Hi, I am having a problem with the number of arguments of the "Open" macro when I try to build a C++ code with openmpi-1.2.7 on my Mac OS 10.5.5 machine. The error message is given below. When I look at the file.h and file_inln.h header files in the cxx folder, I see that the "Open" function

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 10:58 AM, Ashley Pittman wrote: You probably already know this, but the obvious candidate here is the memcpy() function; icc sticks in its own, which in some cases is much better than the libc one. It's unusual for compilers to have *huge* differences from code optimisations alone

Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Terry Dontje
Yann, Well, when you use f90 to link, it passes the linker the -t option, which is described in the manpage as follows: "Turns off the warning for multiply-defined symbols that have different sizes or different alignments." That's why :-) To your original question: should y
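The difference is easy to see by asking each Open MPI wrapper what it silently adds at link time (standard wrapper option; output trimmed here):

    # compare the hidden link-time arguments of the two wrappers
    mpicc  --showme:link
    mpif90 --showme:link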

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brock Palen
Jeff, You probably already know this, but the obvious candidate here is the memcpy() function; icc sticks in its own, which in some cases is much better than the libc one. It's unusual for compilers to have *huge* differences from code optimisations alone. I know this is off topic, but I was i

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Ashley Pittman
On Wed, 2008-10-08 at 09:46 -0400, Jeff Squyres wrote: > - Have you tried compiling Open MPI with something other than GCC? > Just this week, we've gotten some reports from an OMPI member that > they are sometimes seeing *huge* performance differences with OMPI > compiled with GCC vs. any ot
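For reference, pointing the Open MPI build at a different toolchain is just a configure-line change; the Intel compilers and install prefix below are purely illustrative:

    # rebuild Open MPI with a non-GCC toolchain
    ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-icc
    make all install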

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 10:26 AM, Sangamesh B wrote: >> - What version of Open MPI are you using? Please send the information listed here: http://www.open-mpi.org/community/help/ > 1.2.7 >> - Did you specify to use mpi_leave_pinned? > No Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don
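That is, something along these lines (process count, hostfile, and binary name are placeholders; the setting mainly matters on RDMA-capable interconnects such as InfiniBand):

    # enable leave-pinned behavior for large-message performance
    mpirun --mca mpi_leave_pinned 1 -np 8 --hostfile hosts ./app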

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
FYI: OpenMPI install details are attached. On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B wrote: > On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote: >> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: >>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI >>> supports bot

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote: > On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: >> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI >> supports both Ethernet and InfiniBand. Before doing that I tested an >> application 'GROMACS' to compare the performanc

Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Yann JOBIC
Hello, I just tried to link with mpif90, and that's working! I don't have the warning. (The small change from your command: -PIC, not -fPIC.) I'm now trying to compile PETSc with the new linker. How come we don't get the warning? Thanks, Yann Terry Dontje wrote: Yann, Your whole compile pro

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:09 PM, Brock Palen wrote: > You're doing this on just one node? That would be using the Open MPI SM > transport. Last I knew it wasn't that optimized, though it should still be much > faster than TCP. It's on 2 nodes. I'm using TCP only. There is no InfiniBand hardware.
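For a setup like this it can help to pin down the transports explicitly on the mpirun command line; the hostfile and interface name below are placeholders:

    # two nodes over Ethernet: TCP between nodes, shared memory within each node
    mpirun -np 8 --hostfile hosts --mca btl tcp,sm,self ./app

    # optionally restrict TCP traffic to a specific interface
    mpirun -np 8 --hostfile hosts --mca btl tcp,sm,self \
        --mca btl_tcp_if_include eth0 ./app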

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Samuel Sarholz
Hi, my experience is that OpenMPI has slightly lower latency and lower bandwidth than Intel MPI (which is based on MPICH2) over InfiniBand. I don't remember the numbers for shared memory. As you are seeing a huge difference, I would suspect that either something with your compilation is strange

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports both Ethernet and InfiniBand. Before doing that I tested an application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both have been compiled with GNU co

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brock Palen
You're doing this on just one node? That would be using the Open MPI SM transport. Last I knew it wasn't that optimized, though it should still be much faster than TCP. I am surprised at your result, though I do not have MPICH2 on the cluster right now and I don't have time to compare. How did you r

Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Terry Dontje
Yann, Your whole compile process in your email below shows you using mpicc to link your executable. Can you please try and do the following for linkage instead? mpif90 -fPIC -m64 -o solv_ksp solv_ksp.o -R/opt/lib/petsc/lib/amd-64-openmpi_no_debug -L/opt/lib/petsc/lib/amd-64-openmpi_no_de

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Ray Muno
I would be interested in what others have to say about this as well. We have been doing a bit of performance testing, since we are deploying a new cluster and it is our first InfiniBand-based setup. In our experience so far, OpenMPI is coming out faster than MVAPICH. Comparisons were made with d

[OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
Hi All, I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports both Ethernet and InfiniBand. Before doing that, I tested an application, 'GROMACS', to compare the performance of MPICH2 & OpenMPI. Both have been compiled with GNU compilers. After this benchmark, I came to know
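For a comparison like this, the main variable to hold fixed is the launch itself; with these versions that looks roughly as follows (hostfile, rank count, and the GROMACS binary name are placeholders):

    # Open MPI
    mpirun -np 8 --hostfile hosts ./mdrun_mpi

    # MPICH2 (mpd-based launcher of that era)
    mpdboot -n 2 -f hosts
    mpiexec -n 8 ./mdrun_mpi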

Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Yann JOBIC
Hello, I used cc to compile. I tried to use mpicc/mpif90 to compile PETSc, but it changed nothing; I still have the same error. I'm giving you the whole compile process: 4440p-jobic% gmake solv_ksp mpicc -o solv_ksp.o -c -fPIC -m64 -I/opt/lib/petsc -I/opt/lib/petsc/bmake/amd-64-openmpi_no_d