Re: [OMPI users] cputype (7) does not match previous archive members cputype

2011-05-04 Thread Cizmas, Paul
Life is much better after "make clean" :)

Thank you,
Paul

Re: [OMPI users] cputype (7) does not match previous archive members cputype

2011-05-04 Thread Ralph Castain
Did you make clean first? configure won't clean out the prior installation, so you may be picking up stale libs.

On May 4, 2011, at 11:27 AM, Cizmas, Paul wrote:
> I added LDFLAGS=-m64, such that the command is now
>
> ./configure --prefix=/opt/openmpi1.4.3GFm64 CC=/sw/bin/gcc-fsf-4.5

Re: [OMPI users] cputype (7) does not match previous archive members cputype

2011-05-04 Thread Cizmas, Paul
I added LDFLAGS=-m64, such that the command is now

    ./configure --prefix=/opt/openmpi1.4.3GFm64 CC=/sw/bin/gcc-fsf-4.5 CFLAGS=-m64 CXX=/sw/bin/g++-fsf-4.5 CXXFLAGS=-m64 F77=gfortran FFLAGS=-m64 FC=gfortran FCFLAGS=-m64 LDFLAGS=-m64

but it did not work. It still dies when I do make all
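Summarizing the advice that resolved this thread: a previously configured build tree can reuse stale (non -m64) objects, so clean before reconfiguring. A minimal sketch of the rebuild sequence, reusing the prefix and compiler paths quoted above; the source directory name openmpi-1.4.3 is only an assumption:

    # start from a clean tree so no stale objects with the wrong cputype are reused
    cd openmpi-1.4.3
    make distclean        # or "make clean" if distclean is not available

    # reconfigure with -m64 passed to the compilers and to the linker
    ./configure --prefix=/opt/openmpi1.4.3GFm64 \
        CC=/sw/bin/gcc-fsf-4.5  CFLAGS=-m64 \
        CXX=/sw/bin/g++-fsf-4.5 CXXFLAGS=-m64 \
        F77=gfortran FFLAGS=-m64 \
        FC=gfortran  FCFLAGS=-m64 \
        LDFLAGS=-m64

    make all install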

Re: [OMPI users] cputype (7) does not match previous archive members cputype

2011-05-04 Thread Jeff Squyres
On May 4, 2011, at 12:39 PM, Paul Cizmas wrote:
> ./configure --prefix=/opt/openmpi1.4.3GFm64 CC=/sw/bin/gcc-fsf-4.5
> CFLAGS=-m64 CXX=/sw/bin/g++-fsf-4.5 CXXFLAGS=-m64 F77=gfortran FFLAGS=-m64
> FC=gfortran FCFLAGS=-m64

Oops -- sorry, you probably also need to include LDFLAGS=-m64

[OMPI users] configure: mpi-threads disabled by default

2011-05-04 Thread Mark Dixon
I've been asked about mixed-mode MPI/OpenMP programming with OpenMPI, so I have been digging through the past list messages on MPI_THREAD_*, etc. Interesting stuff :) Before I go ahead and add "--enable-mpi-threads" to our standard configure flags, is there any reason it's disabled by default,
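For anyone following up: a minimal sketch of enabling thread support at configure time and checking what was compiled in, using the flag name quoted above (the 1.4-era option); the install prefix is only an example:

    # build with MPI thread support enabled
    ./configure --prefix=/opt/openmpi-threads --enable-mpi-threads
    make all install

    # afterwards, ompi_info reports the thread-support levels built in
    /opt/openmpi-threads/bin/ompi_info | grep -i thread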

Re: [OMPI users] Error occurred in MPI_Allreduce on communicator MPI_COMM_WORLD

2011-05-04 Thread Peter Kjellström
On Wednesday, May 04, 2011 04:04:37 PM hi wrote:
> Greetings !!!
>
> I am observing following error messages when executing attached test
> program...
>
> C:\test>mpirun mar_f.exe ...
> [vbgyor:9920] *** An error occurred in MPI_Allreduce
> [vbgyor:9920] *** on communicator MPI_COMM_WORLD

[OMPI users] Error occurred in MPI_Allreduce on communicator MPI_COMM_WORLD

2011-05-04 Thread hi
Greetings !!!

I am observing the following error messages when executing the attached test program...

C:\test>mpirun mar_f.exe
 0 0 0
 size= 1 , rank= 0
 start -- a= 2.002.002.00

Re: [OMPI users] [Wrf-users] WRF Problem running in Parallel on multiple nodes(cluster)

2011-05-04 Thread Ralph Castain
You still have to set the PATH and LD_LIBRARY_PATH on your remote nodes to include where you installed OMPI. Alternatively, use the absolute path name to mpirun in your cmd - we'll pick up the path and propagate it.

On May 3, 2011, at 9:14 PM, Ahsan Ali wrote:
> Dear Bart,
>
> I think
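A minimal sketch of the two options described above, assuming a hypothetical install prefix /opt/openmpi-1.4.3 and example hostfile/executable names (substitute your own):

    # Option 1: make the install visible on every remote node,
    # e.g. in ~/.bashrc so non-interactive ssh logins pick it up
    export PATH=/opt/openmpi-1.4.3/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi-1.4.3/lib:$LD_LIBRARY_PATH

    # Option 2: launch via the absolute path to mpirun, so the prefix
    # is picked up and propagated to the remote daemons
    /opt/openmpi-1.4.3/bin/mpirun -np 16 -hostfile hosts ./wrf.exe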

Re: [OMPI users] is there an equiv of iprove for bcast?

2011-05-04 Thread Randolph Pullen
Sorry, I meant to say:
- on each node there is 1 listener and 1 worker.
- all workers act together when any of the listeners send them a request.
- currently I must use an extra clearinghouse process to receive from any of the listeners and bcast to workers, this is unfortunate because of the

Re: [OMPI users] [Wrf-users] WRF Problem running in Parallel on multiple nodes(cluster)

2011-05-04 Thread Ahsan Ali
Dear Bart,

I think OpenMPI doesn't need to be installed on all machines, because they are NFS-shared with the master node. I don't know how to check the output of "which orted"; it is running just on the master node. I have another application which runs similarly, but I am having a problem with WRF.
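A minimal sketch of the check being asked about (the hostname node02 is only an example): run "which orted" over ssh on each compute node to confirm that the NFS-mounted install is actually on the remote PATH for non-interactive logins.

    # verify that orted resolves on a remote node, not just on the master
    ssh node02 which orted

    # if nothing is printed, the remote (non-interactive) PATH does not
    # include the Open MPI bin directory shared over NFS
    ssh node02 'echo $PATH'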