Attached are the two library files you requested, along with the output from
ompi_info.
I tried the work-around procedure you suggested, and it worked. I had to
apply it in 'ompi/mca/mpool/gm' and 'ompi/mca/ptl/gm' as well, but I got a
successful make. Then, on a hunch, I went back and added
setenv LDFLAGS -L/usr/lib64
to my environment, did a 'make clean', reran configure (with the MPI2
support), and did another 'make all install'. It worked. The ompi_info
output is attached.
I see 'gm' entries in the list, so I assume things are as expected. I now
need my sysadmin to copy the installation to the compute nodes, but I hope
that will be routine.
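For the record, the full rebuild described above, collected into one sequence (a sketch only — it must be run from the top of the Open MPI source tree; csh syntax, matching the setenv usage in this thread, and the prefix/path values are the ones from this report):

```shell
# csh/tcsh; bash users would use:  export LDFLAGS=-L/usr/lib64
setenv LDFLAGS -L/usr/lib64
make clean
# re-run configure with the same options as before
# (the exact MPI2-support flags are not shown in this thread)
./configure --prefix=/users/rosmond/ompi --with-gm
make all install
```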
Thanks for the help.
Brian Barrett wrote:
On Mar 10, 2006, at 8:35 AM, Brian Barrett wrote:
On Mar 9, 2006, at 11:37 PM, Tom Rosmond wrote:
Attached are output files from a build with the adjustments you
suggested.
setenv FC pgf90
setenv F77 pgf90
setenv CPPFLAGS -I/usr/include/gm
./configure --prefix=/users/rosmond/ompi --with-gm
The results are the same.
Yes, I figured the failure would still be there. Sorry to make you
do the extra work, but I needed a build without the extra issues so
that I could try to get a clearer picture of what is going on.
Unfortunately, it looks like libtool (the GNU project to build
portable libraries) is doing something I didn't expect and causing
issues. I'm passing this on to a friend of Open MPI who works on
the Libtool project and is extremely good at figuring these issues
out. I'll relay back what he recommends, but it might not be until
Monday.
The Libtool expert was wondering if you could send the contents of
the files /usr/lib/libgm.la and /usr/lib64/libgm.la. They should
both be (fairly short) text files.
Also, as a possible work-around, he suggests compiling from the top
level like normal (just "make" or "make all") until the failure,
changing directories into ompi/mca/btl/gm (where the failure
occurred) and running "make LDFLAGS=-L/usr/lib64", then changing
directories back to the top level of the Open MPI source code and
running make (without the extra LDFLAGS option) again. Let me know
if that works.
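The suggested work-around, written out as a command sequence (a sketch of the steps above, not runnable outside an Open MPI source tree; it assumes you start at the top of the tree):

```shell
make                        # build as normal until it fails in ompi/mca/btl/gm
cd ompi/mca/btl/gm
make LDFLAGS=-L/usr/lib64   # re-link this one component against the 64-bit libgm
cd ../../../..              # back to the top of the source tree
make                        # resume the normal build, without the extra flag
```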
Thanks,
Brian
----- contents of /usr/lib/libgm.la -----
# libgm.la - a libtool library file
# Generated by ltmain.sh - GNU libtool 1.4.2a (1.922.2.100 2002/06/26 07:25:14)
#
# Please DO NOT delete this file!
# It is necessary for linking the library.
# The name that we can dlopen(3).
dlname='libgm.so.0'
# Names of this library.
library_names='libgm.so.0.0.0 libgm.so.0 libgm.so'
# The name of the static archive.
old_library='libgm.a'
# Libraries that this one depends upon.
dependency_libs=''
# Version information for libgm.
current=0
age=0
revision=0
# Is this an already installed library?
installed=yes
# Files to dlopen/dlpreopen
dlopen=''
dlpreopen=''
# Directory that this library needs to be installed in:
libdir='/opt/gm/lib'
----- contents of /usr/lib64/libgm.la -----
# libgm.la - a libtool library file
# Generated by ltmain.sh - GNU libtool 1.4.2a (1.922.2.100 2002/06/26 07:25:14)
#
# Please DO NOT delete this file!
# It is necessary for linking the library.
# The name that we can dlopen(3).
dlname='libgm.so.0'
# Names of this library.
library_names='libgm.so.0.0.0 libgm.so.0 libgm.so'
# The name of the static archive.
old_library='libgm.a'
# Libraries that this one depends upon.
dependency_libs=''
# Version information for libgm.
current=0
age=0
revision=0
# Is this an already installed library?
installed=yes
# Files to dlopen/dlpreopen
dlopen=''
dlpreopen=''
# Directory that this library needs to be installed in:
libdir='/opt/gm/lib64'
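One detail worth noting about the two attachments above: they are identical except for the final libdir line ('/opt/gm/lib' vs. '/opt/gm/lib64'), and neither lists any dependency_libs. A .la file is just shell-style variable assignments, so its fields can be inspected by sourcing it. A minimal sketch (it recreates a stripped-down copy under /tmp so the demo is self-contained; the real files live in /usr/lib and /usr/lib64):

```shell
# Write a minimal copy of the fields from the attached /usr/lib/libgm.la
cat > /tmp/libgm.la <<'EOF'
dlname='libgm.so.0'
library_names='libgm.so.0.0.0 libgm.so.0 libgm.so'
old_library='libgm.a'
dependency_libs=''
installed=yes
libdir='/opt/gm/lib'
EOF

# .la files are valid POSIX shell, so sourcing one loads its fields
. /tmp/libgm.la
echo "$libdir"   # -> /opt/gm/lib
echo "$dlname"   # -> libgm.so.0
```

libtool consults fields like libdir and dependency_libs when relinking, which is why a stale or mismatched .la file can steer a link step at the wrong directory.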
Open MPI: 1.0.1r8453
Open MPI SVN revision: r8453
Open RTE: 1.0.1r8453
Open RTE SVN revision: r8453
OPAL: 1.0.1r8453
OPAL SVN revision: r8453
Prefix: /users/rosmond/ompi
Configured architecture: x86_64-unknown-linux-gnu
Configured by: rosmond
Configured on: Fri Mar 10 09:55:13 PST 2006
Configure host: cluster0
Built by: rosmond
Built on: Fri Mar 10 10:11:17 PST 2006
Built host: cluster0
C bindings: yes
C++ bindings: yes
Fortran77 bindings: yes (all)
Fortran90 bindings: yes
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fortran77 compiler: pgf90
Fortran77 compiler abs: /usr/pgi/linux86-64/6.1/bin/pgf90
Fortran90 compiler: pgf90
Fortran90 compiler abs: /usr/pgi/linux86-64/6.1/bin/pgf90
C profiling: yes
C++ profiling: yes
Fortran77 profiling: yes
Fortran90 profiling: yes
C++ exceptions: no
Thread support: posix (mpi: no, progress: no)
Internal debug support: no
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
libltdl support: 1
MCA memory: malloc_hooks (MCA v1.0, API v1.0, Component v1.0.1)
MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.0.1)
MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.0.1)
MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.0.1)
MCA timer: linux (MCA v1.0, API v1.0, Component v1.0.1)
MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
MCA coll: basic (MCA v1.0, API v1.0, Component v1.0.1)
MCA coll: self (MCA v1.0, API v1.0, Component v1.0.1)
MCA coll: sm (MCA v1.0, API v1.0, Component v1.0.1)
MCA io: romio (MCA v1.0, API v1.0, Component v1.0.1)
MCA mpool: gm (MCA v1.0, API v1.0, Component v1.0.1)
MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0.1)
MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.0.1)
MCA pml: teg (MCA v1.0, API v1.0, Component v1.0.1)
MCA ptl: gm (MCA v1.0, API v1.0, Component v1.0.1)
MCA ptl: self (MCA v1.0, API v1.0, Component v1.0.1)
MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0.1)
MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0.1)
MCA btl: gm (MCA v1.0, API v1.0, Component v1.0.1)
MCA btl: self (MCA v1.0, API v1.0, Component v1.0.1)
MCA btl: sm (MCA v1.0, API v1.0, Component v1.0.1)
MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA topo: unity (MCA v1.0, API v1.0, Component v1.0.1)
MCA gpr: null (MCA v1.0, API v1.0, Component v1.0.1)
MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.0.1)
MCA gpr: replica (MCA v1.0, API v1.0, Component v1.0.1)
MCA iof: proxy (MCA v1.0, API v1.0, Component v1.0.1)
MCA iof: svc (MCA v1.0, API v1.0, Component v1.0.1)
MCA ns: proxy (MCA v1.0, API v1.0, Component v1.0.1)
MCA ns: replica (MCA v1.0, API v1.0, Component v1.0.1)
MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.0.1)
MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.0.1)
MCA ras: localhost (MCA v1.0, API v1.0, Component v1.0.1)
MCA ras: slurm (MCA v1.0, API v1.0, Component v1.0.1)
MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.0.1)
MCA rds: resfile (MCA v1.0, API v1.0, Component v1.0.1)
MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.0.1)
MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.0.1)
MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.0.1)
MCA rml: oob (MCA v1.0, API v1.0, Component v1.0.1)
MCA pls: daemon (MCA v1.0, API v1.0, Component v1.0.1)
MCA pls: fork (MCA v1.0, API v1.0, Component v1.0.1)
MCA pls: proxy (MCA v1.0, API v1.0, Component v1.0.1)
MCA pls: rsh (MCA v1.0, API v1.0, Component v1.0.1)
MCA pls: slurm (MCA v1.0, API v1.0, Component v1.0.1)
MCA sds: env (MCA v1.0, API v1.0, Component v1.0.1)
MCA sds: pipe (MCA v1.0, API v1.0, Component v1.0.1)
MCA sds: seed (MCA v1.0, API v1.0, Component v1.0.1)
MCA sds: singleton (MCA v1.0, API v1.0, Component v1.0.1)
MCA sds: slurm (MCA v1.0, API v1.0, Component v1.0.1)