run with MPICH2 (also PGI
8.0-2) and the problem does not occur.
If I have forgotten any details needed to debug this issue, please let
me know.
Thanks,
David Robertson
e compiled
statically. I have also tried this with gfortran 4.2.5 and ifort
10.1.018 with the same results.
Thanks,
David Robertson
ompi_f90_problem.tar.gz
Description: GNU Zip compressed data
I don't know how helpful this code will be unless you happen to have
HDF5/NetCDF4 already installed. I looked at the code NetCDF4 uses to
test parallel IO but it is all in C so it wasn't very helpful. If you
have the NetCDF4 source code the parallel IO tests are in the nc_test4
directory.
Hi,
One question: you *are* using different HDF5/NetCDF4 installations for
Open MPI and MPICH2, right? I.e., all the software that uses MPI needs
to be separately compiled/installed against different MPI
implementations. Case in point: if you have HDF5 compiled against
MPICH2, it will not work
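A minimal sketch of what "separately compiled against each MPI" can look like in practice. The install prefixes and source directory below are assumptions for illustration, not paths from this thread:

```shell
# Build HDF5 once per MPI implementation, each into its own prefix,
# using that implementation's compiler wrapper.
cd hdf5-src
./configure CC=/opt/openmpi/bin/mpicc --enable-parallel \
    --prefix=/opt/hdf5-ompi && make install

make distclean
./configure CC=/opt/mpich2/bin/mpicc --enable-parallel \
    --prefix=/opt/hdf5-mpich2 && make install
```

NetCDF4 then has to be configured against the matching HDF5 prefix, and the application linked against the matching NetCDF4/MPI pair.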
Hi,
MPI_COMM_WORLD is set to a large integer (1140850688) in MPICH2 so I
wonder if there is something in HDF5 and/or NetCDF4 that doesn't
like 0 for the communicator handle. At any rate, you have given me
some ideas of things to check in the debugger tomorrow. Is there a
safe way to change
Hello,
I am trying to build Open MPI 1.3.2 with ifort 11.0.074 and icc/icpc
11.0.083 (the Intel compilers) on a quad-core AMD Opteron workstation
running CentOS 4.4. I have no problems on this same machine if I use
ifort with gcc/g++ instead of icc/icpc. Configure seems to work ok even
though
Paul H. Hargrove wrote:
Jeff Squyres wrote:
[snip]
Erm -- that's weird. So when you extract the tarballs,
atomic-amd64-linux.s is non-empty (as it should be), but after a
failed build, its length is 0?
Notice that during the build process, we sym link atomic-amd64-linux.s
to atomic-asm.S
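The clobbering mechanism this hints at is easy to reproduce: writing through a symlink modifies its target, so anything that truncates atomic-asm.S also empties atomic-amd64-linux.s. A small shell illustration (the file contents are placeholders):

```shell
# Stand-in for the real assembly source.
echo "stand-in contents" > atomic-amd64-linux.s
# The build symlinks the per-platform file to the generic name.
ln -s atomic-amd64-linux.s atomic-asm.S
# A failed build step that truncates the generic name...
: > atomic-asm.S
# ...empties the real file too, because the write follows the symlink.
wc -c < atomic-amd64-linux.s    # 0
```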
Hi all,
I have compiled Open MPI 1.3.2 with Intel Fortran and C/C++ 11.0
compilers. Fortran Real*16 seems to be working except for MPI_Allreduce.
I have attached a simple program to show what I mean. I am not an MPI
programmer but I work for one and he actually wrote the attached
program. The
Hi Jeff,
Jeff Squyres wrote:
Greetings David.
I think we should have a more explicit note about MPI_REAL16 support in
the README.
This issue has come up before; see
https://svn.open-mpi.org/trac/ompi/ticket/1603.
If you read through that ticket, you'll see that I was unable to find a
C e
(e.g., the test codes I put near the
end) and see what Intel has to say about it. Perhaps we're doing
something wrong...?
I hate to pass the buck here, but I unfortunately have a whole pile of
higher-priority items that I need to work on...
On Jun 19, 2009, at 1:32 PM, David Robertson
Hi all,
We use both the PGI and Intel compilers over an Infiniband cluster and I
was trying to find a way to have both orteruns in the path (in separate
directories) at the same time. I decided to use the --program-suffix
option. However, all the symlinks in the resulting bin directory point
Perhaps it should be taken out of the help message in the configure
script then.
Dave
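One common alternative to --program-suffix for keeping two MPI stacks usable on the same cluster, sketched here with hypothetical prefixes (not paths from this thread): install each compiler's build under its own prefix and switch the environment instead of renaming the executables.

```shell
# Configure each Open MPI build with its own prefix, e.g.:
#   ./configure --prefix=/opt/openmpi-pgi   CC=pgcc FC=pgf90 ...
#   ./configure --prefix=/opt/openmpi-intel CC=icc  FC=ifort ...

# Then select an MPI by prepending the matching directories:
export PATH=/opt/openmpi-intel/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-intel/lib:$LD_LIBRARY_PATH
```

Because each bin directory keeps the standard names (orterun, mpicc, mpif90), this sidesteps the symlink problem that --program-suffix runs into.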
Jeff Squyres wrote:
On Sep 3, 2009, at 9:55 PM, David Robertson wrote:
We use both the PGI and Intel compilers over an Infiniband cluster and I
was trying to find a way to have both orteruns in the path