Life is much better after "make clean" :)
Thank you,
Paul
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of
Ralph Castain [r...@open-mpi.org]
Sent: Wednesday, May 04, 2011 12:29 PM
To: Open MPI Users
Subject: Re: [OMPI users]
Did you make clean first?
configure won't clean out the prior installation, so you may be picking up
stale libs.
On May 4, 2011, at 11:27 AM, Cizmas, Paul wrote:
> I added LDFLAGS=-m64, such that the command is now
>
> ./configure --prefix=/opt/openmpi1.4.3GFm64 CC=/sw/bin/gcc-fsf-4.5
>
I added LDFLAGS=-m64, such that the command is now
./configure --prefix=/opt/openmpi1.4.3GFm64 CC=/sw/bin/gcc-fsf-4.5 CFLAGS=-m64
CXX=/sw/bin/g++-fsf-4.5 CXXFLAGS=-m64 F77=gfortran FFLAGS=-m64 FC=gfortran
FCFLAGS=-m64 LDFLAGS=-m64
but it did not work.
It still dies when I do
make all
On May 4, 2011, at 12:39 PM, Paul Cizmas wrote:
> ./configure --prefix=/opt/openmpi1.4.3GFm64 CC=/sw/bin/gcc-fsf-4.5
> CFLAGS=-m64 CXX=/sw/bin/g++-fsf-4.5 CXXFLAGS=-m64 F77=gfortran FFLAGS=-m64
> FC=gfortran FCFLAGS=-m64
Oops -- sorry, you probably need to include LDFLAGS=-m64, too
I've been asked about mixed-mode MPI/OpenMP programming with OpenMPI, so I
have been digging through the past list messages on MPI_THREAD_*, etc.
Interesting stuff :)
Before I go ahead and add "--enable-mpi-threads" to our standard configure
flags, is there any reason it's disabled by default,
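For reference, the hybrid pattern in question looks roughly like this (a
minimal sketch, not code from this thread; it assumes MPI_THREAD_FUNNELED is
sufficient, i.e. only the master thread makes MPI calls):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request FUNNELED: only the thread that called MPI_Init_thread
       will make MPI calls; OpenMP worker threads do pure computation. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library does not provide MPI_THREAD_FUNNELED\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

#pragma omp parallel
    {
        /* compute-only region: no MPI calls from worker threads */
        printf("rank %d, thread %d\n", rank, omp_get_thread_num());
    }

    MPI_Finalize();
    return 0;
}

If the library was built without thread support, "provided" can come back
lower than requested, which the check above catches.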
On Wednesday, May 04, 2011 04:04:37 PM hi wrote:
> Greetings !!!
>
> I am observing the following error messages when executing the attached test
> program...
>
>
> C:\test>mpirun mar_f.exe
...
> [vbgyor:9920] *** An error occurred in MPI_Allreduce
> [vbgyor:9920] *** on communicator MPI_COMM_WORLD
>
Greetings !!!
I am observing the following error messages when executing the attached test program...
C:\test>mpirun mar_f.exe
0
0
0
size= 1 , rank= 0
start --
a= 2.002.002.00
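The attached Fortran test isn't reproduced here, but for comparison, a minimal
MPI_Allreduce call that should succeed on MPI_COMM_WORLD looks like this in C
(a hypothetical stand-in, not the attached program):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double a[3] = { 1.0, 1.0, 1.0 };
    double sum[3];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Send and receive buffers must be distinct (or use MPI_IN_PLACE),
       and count/datatype/op must match on every rank. */
    MPI_Allreduce(a, sum, 3, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    printf("size= %d , rank= %d, sum= %.2f %.2f %.2f\n",
           size, rank, sum[0], sum[1], sum[2]);

    MPI_Finalize();
    return 0;
}

A common cause of an MPI_Allreduce error on MPI_COMM_WORLD is passing the same
buffer as both the send and receive argument without MPI_IN_PLACE, or
mismatched counts/datatypes across ranks.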
You still have to set the PATH and LD_LIBRARY_PATH on your remote nodes to
include where you installed OMPI.
Alternatively, use the absolute path name to mpirun in your cmd - we'll pick up
the path and propagate it.
On May 3, 2011, at 9:14 PM, Ahsan Ali wrote:
> Dear Bart,
>
> I think
Sorry, I meant to say:
- on each node there is 1 listener and 1 worker.
- all workers act together when any of the listeners sends them a request.
- currently I must use an extra clearinghouse process to receive from any of
the listeners and bcast to the workers (sketched below); this is unfortunate
because of the
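In rough C terms, the clearinghouse step described above looks something like
this (just a sketch; the communicator names and message type are made up):

#include <mpi.h>

/* Hypothetical clearinghouse: receive a request from whichever
   listener sends first, then broadcast it to all workers. */
void clearinghouse(MPI_Comm listeners, MPI_Comm workers)
{
    int request;
    MPI_Status status;

    /* accept from any listener (MPI_ANY_SOURCE) */
    MPI_Recv(&request, 1, MPI_INT, MPI_ANY_SOURCE, 0, listeners, &status);

    /* forward to the workers; assumes this process is rank 0 of 'workers' */
    MPI_Bcast(&request, 1, MPI_INT, 0, workers);
}

The sketch assumes the clearinghouse process belongs to both communicators.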
Dear Bart,
I think OpenMPI doesn't need to be installed on all machines because they are
NFS-shared with the master node. I don't know how to check the output of
"which orted"; it is running just on the master node. I have another
application that runs similarly, but I am having problems with WRF.