and then fails to finish the rest of the cleanup.
The reason is due to our specific systems and our use of the configure
argument --disable-dlopen, so nothing (including the Makefile) gets created in
/user/openmpi-1.4.3/opal/libltd.
Is there a workaround for this?
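For reference, the configure invocation is along these lines (the install
prefix below is only a placeholder, not our exact path):

  ./configure --prefix=/path/to/install --disable-dlopen
  make all install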
Thanks,
david
--
David Gunter
..
Should it have been a higher number or is there another param that
should be set?
Thanks,
david
--
David Gunter
HPC-3: Infrastructure Team
Los Alamos National Laboratory
I believe you still must add "--enable-f77" and "--enable-f90" to the
OMPI configure line in addition to setting the FC and F77 env variables.
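In other words, something along these lines (ifort is only an example;
substitute whatever Fortran compilers you are actually using):

  export F77=ifort
  export FC=ifort
  ./configure --prefix=/path/to/install --enable-f77 --enable-f90
  make all install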
-david
--
David Gunter
HPC-3: Parallel Tools Team
Los Alamos National Laboratory
On Jun 16, 2008, at 10:25 AM, Weirs, V Gregory wrote:
remove these warnings?
Here is some info about the build:
I issued a "make distclean" prior to running the configure/make/make
install steps and the final install directory was completely erased
beforehand. The output from ompi_info is appended below.
-david
--
David Gunter
H
d be someone to volunteer to actually
| spend the cycles to maintain ROMIO in Open MPI (I am pretty sure that
| Brian simply does not have them)...
|
| --
| Jeff Squyres
| Cisco Systems
Since Brian no longer works on these issues, I'm wondering if and how
this is possible.
Thanks,
david
--
Do I need to do anything special to enable multi-path routing on
InfiniBand networks? For example, are there command-line arguments to
mpiexec or the like?
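To be concrete, I am thinking of something like the following, where the
particular MCA parameter to tweak (if one even exists) is exactly what I am
unsure of; my_app is just a placeholder program:

  # list the openib BTL parameters to look for a multi-path setting
  ompi_info --param btl openib

  # generic form for passing an MCA parameter on the command line
  mpiexec --mca btl openib,self -np 16 ./my_app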
Thanks,
david
--
David Gunter
HPC-3: Parallel Tools Team
Los Alamos National Laboratory
.
Thanks,
david
--
David Gunter
HPC-3: Parallel Tools Team
Los Alamos National Laboratory
A quick reading of this thread makes it sound to me as if you are
using icc to compile C++ code. The correct compiler to use is icpc.
This has been the case since at least the version 9 release of the
Intel compilers; icc will not compile C++ code.
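For example, if you are building with the Intel compilers, point configure
at icpc for the C++ compiler explicitly (the prefix and Fortran compilers
below are only illustrative):

  ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-intel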
Hope this is useful.
-david
--
David Gunter
Is it possible to view message queues inside TotalView with OpenMPI?
Thanks,
david
--
David Gunter
HPC-4: Parallel Tools Team
Los Alamos National Laboratory
You can eliminate the "[n17:30019] odls_bproc: openpty failed, using
pipes instead" message by configuring OMPI with the --disable-pty-support
flag, as there is a bug in BProc that causes that to happen.
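That is, reconfigure with something like this (the install prefix is just a
placeholder):

  ./configure --prefix=/path/to/install --disable-pty-support
  make all install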
-david
--
David Gunter
HPC-4: HPC Environments: Parallel Tools Team
Los Alamos National Laboratory
software environments are controlled via the Module package;
thus a user will load her favorite flavor of compiler and Open-MPI
module at login or some other time, and LD_LIBRARY_PATH, PATH, etc.,
are set accordingly.
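For example (the module names here are site-specific placeholders, not our
actual ones):

  module load intel/9.1
  module load openmpi-intel/1.1.1
  module list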
-david
--
David Gunter
HPC-4: HPC Environments: Parallel Tools Team
Los Alamos National Laboratory
checking for C type corresponding to LOGICAL... int
checking alignment of Fortran LOGICAL... 4
...
-david
--
David Gunter
HPC-4: HPC Environments: Parallel Tools Team
Los Alamos National Laboratory
On Dec 14, 2006, at 12:01 PM, Michael Galloway wrote:
good day all, i've been trying to build ompi with the 6.2-X version
of
openpty failed, using pipes instead
.
.
.
The code then runs fine with no other errors.
What is the meaning of those 3 lines?
-david
--
David Gunter
HPC-4: HPC Environments: Parallel Tools Team
Los Alamos National Laboratory
If I configure and build OpenMPI 1.1.1 with only --enable-shared, the
files listed below are created. If I build with both --enable-shared
and --enable-static, these same files do not appear in the final
install. Is this the correct behavior?
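For reference, the two configure lines are roughly the following (same
install prefix in both cases):

  ./configure --prefix=/opt/OpenMPI/openmpi-pgi-1.1 --enable-shared
  ./configure --prefix=/opt/OpenMPI/openmpi-pgi-1.1 --enable-shared --enable-static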
Thanks,
david
> /opt/OpenMPI/openmpi-pgi-1.1
What machine is this on, Daryl? I have a conference call with Intel
re: compiler problems today. If I can verify this is an Intel
problem I can bring it up to them.
Thanks,
david
On Jul 11, 2006, at 9:53 AM, Daryl W. Grunau wrote:
I'm trying to build version 1.1 with Intel 9.0 compilers a
This is how we were told to implement it originally by the BProc
folks. However, that means that shared libraries have problems, for
obvious reasons.
We have to reimplement the bproc launcher using a different
approach - will take a little time.
Ralph
David Gunter wrote:
Unfortunately s
Unfortunately static-only will create binaries that will overwhelm
our machines. This is not a realistic option.
-david
On Apr 11, 2006, at 1:04 PM, Ralph Castain wrote:
Also, remember that you must configure for static operation for
bproc - use the configuration options "--enable-static -
This is what I have just discovered - mpicc didn't have -m32 in it.
Thanks for the other info (config list)!
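For anyone following the thread, a quick way to see what the wrapper
actually passes is (output will obviously vary by install):

  mpicc --showme
  mpicc --showme:compile

If I understand Jeff correctly, the fix on our end is to put -m32 into the
wrapper flags explicitly at configure time, e.g. via --with-wrapper-cflags
(I am going from memory on that option name).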
-david
On Apr 10, 2006, at 8:56 AM, Jeff Squyres ((jsquyres)) wrote:
The extra "-m32" was necessary because the wrapper compiler did not
include the CFLAGS from the configure line (we
sure that's not the
issue...
Brian
On Apr 10, 2006, at 10:24 AM, David Gunter wrote:
The problem with doing it that way is that it disallows our in-house
code teams from using their compilers of choice. Prior to open-mpi
we had been using LA-MPI. LA-MPI has always been compiled in such
really need to use "mpicc" to do so.
I think that might be the source of your errors.
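For instance (hello.c standing in for whatever you are actually building):

  mpicc -o hello hello.c
  mpirun -np 4 ./hello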
Ralph
David Gunter wrote:
After much fiddling around, I managed to create a version of open-mpi
that would actually build. Unfortunately, I can't run the simplest
of applications with it. Here
Linux
-david
config.log
Description: Binary data
flash64_conifig_5.out
Description: Binary data
On Apr 10, 2006, at 7:55 AM, Brian Barrett wrote:
On Apr 10, 2006, at 9:43 AM, David Gunter wrote:
After much fiddling around, I managed to create a version of open-mpi
that would actually
still open to suggestions.
-david
On Apr 10, 2006, at 7:11 AM, David R. (Chip) Kent IV wrote:
When running the tests, is the LD_LIBRARY_PATH getting set to lib64
instead of lib or something like that?
Chip
On Sat, Apr 08, 2006 at 02:45:01AM -0600, David Gunter wrote:
I am trying to build a 32-bit co
--build=i686-pc-linux-gnu"
configure halts with errors when trying to run the Fortran 77 tests.
If I remove those env settings and just use the --build option,
configure will proceed to the end but the make will eventually halt
with errors due to a mix of lib64 libs being accessed at some point.
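For what it's worth, the kind of invocation I have been trying looks roughly
like this (the flag set is a sketch and the prefix a placeholder, not my
exact line):

  ./configure --build=i686-pc-linux-gnu \
      CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 \
      --prefix=/path/to/install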
I would like to see more of such results. In particular it would be
nice to see a comparison of OpenMPI to the newer MPICH2.
Thanks, Glen.
-david
--
David Gunter
CCN-8: HPC Environments - Parallel Tools
On Feb 2, 2006, at 6:55 AM, Glen Kaukola wrote:
Hi everyone,
I recently took Open MPI