I would like to start implementing the Open MPI Extensions
infrastructure on a branch, and eventually bring it into the trunk
when it is ready. The sketch of the design can be found on the wiki
page below:
https://svn.open-mpi.org/trac/ompi/wiki/MPIExtensions
It seems that the vast major
On Jan 23, 2009, at 1:20 PM, N.M. Maclaren wrote:
FWIW, ABI is not necessarily a bad thing; it has its benefits and
drawbacks (and enablers and limitations). Some people want it and
some people don't (most don't care, I think). We'll see where
that effort goes in the Forum and elsewhere.
On Jan 23 2009, Jeff Squyres wrote:
FWIW, ABI is not necessarily a bad thing; it has its benefits and
drawbacks (and enablers and limitations). Some people want it and
some people don't (most don't care, I think). We'll see where that
effort goes in the Forum and elsewhere.
Right. But
On Jan 23, 2009, at 10:07 AM, N.M. Maclaren wrote:
I'm assuming what you're really referring to is the fact that there
is currently no binary compatibility between different MPI
implementations (forgive me if my assumption is wrong). ...
Good Lord, no! You don't know me, but that's
On Thu, 22 Jan 2009, Scott Atchley wrote:
Can you try a run with:
-mca btl_mx_free_list_max 100
Still hangs in Gather on 128 ranks.
After that, try additional runs without the above but with:
--mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_gather_algorithm N
where N is 0, 1, 2
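For concreteness, the two suggestions above translate into mpirun invocations like the following (the executable name is illustrative; the MCA parameters and the 128-rank count are the ones from this thread):

```shell
# Run 1: cap the MX BTL free list at 100 entries
mpirun -np 128 --mca btl_mx_free_list_max 100 ./gather_test

# Runs 2-4: force each tuned gather algorithm in turn
for N in 0 1 2; do
  mpirun -np 128 --mca coll_tuned_use_dynamic_rules 1 \
         --mca coll_tuned_gather_algorithm "$N" ./gather_test
done
```

Forcing each algorithm separately helps isolate whether the hang is specific to one gather implementation.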
On Jan 23 2009, Jeff Squyres wrote:
No. Open MPI's Fortran MPI_COMM_WORLD is pretty much hard-wired to 0.
That's a mistake. But probably non-trivial to fix.
Could you explain what you meant by that? There is no "fix"; Open
MPI's Fortran MPI_COMM_WORLD has always been 0. More specifica
On Jan 23, 2009, at 7:45 AM, N.M. Maclaren wrote:
MPI_COMM_WORLD is set to a large integer (1140850688) in MPICH2 so
I wonder if there is something in HDF5 and/or NetCDF4 that
doesn't like 0 for the communicator handle. At any rate, you have
given me some ideas of things to check in the debugger tomorrow.
Hi,
MPI_COMM_WORLD is set to a large integer (1140850688) in MPICH2 so I
wonder if there is something in HDF5 and/or NetCDF4 that doesn't
like 0 for the communicator handle. At any rate, you have given me
some ideas of things to check in the debugger tomorrow. Is there a
safe way to chang
On 23 January 2009 at 09:09, Christophe Prud'homme wrote:
| It means that, indeed, we _must_ recompile/relink all libs and
| programs in Debian depending on openmpi
|
| Dirk, what do we do? That's quite a job to do. Perhaps put back 1.2.8
| in unstable with an epoch and upload 1.3 to experimental.
Hi,
One question: you *are* using different HDF5/NetCDF4 installations for
Open MPI and MPICH2, right? I.e., all the software that uses MPI needs
to be separately compiled/installed against different MPI
implementations. Case in point: if you have HDF5 compiled against
MPICH2, it will not work with Open MPI.
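One way to keep the two stacks separate is a distinct HDF5 install prefix per MPI implementation, selecting the matching compiler wrapper at configure time. A sketch (the install prefixes and wrapper paths are hypothetical; --enable-parallel is HDF5's configure switch for parallel I/O):

```shell
# Build HDF5 against Open MPI (hypothetical prefixes)
CC=/opt/openmpi/bin/mpicc ./configure --enable-parallel --prefix=/opt/hdf5-openmpi
make && make install

# Clean and rebuild the same source tree against MPICH2
make distclean
CC=/opt/mpich2/bin/mpicc ./configure --enable-parallel --prefix=/opt/hdf5-mpich2
make && make install
```

NetCDF4 (and any application linking it) then has to be configured against the HDF5 tree that matches the MPI implementation in use.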
On Jan 23 2009, Jeff Squyres wrote:
On Jan 23, 2009, at 12:30 AM, David Robertson wrote:
I have looked for both MPI_COMM_WORLD and mpi_comm_world but neither
can be found by totalview (the parallel debugger we use) when I
compile with "USE mpi". When I use "include 'mpif.h'" both
MPI_COMM_
On Jan 23, 2009, at 12:30 AM, David Robertson wrote:
I don't know how helpful this code will be unless you happen to
have HDF5/NetCDF4 already installed. I looked at the code NetCDF4
uses to test parallel IO but it is all in C so it wasn't very
helpful. If you have the NetCDF4 source code the parallel IO tests
are in the nc_test4 directory.
On Jan 23, 2009, at 3:09 AM, Christophe Prud'homme wrote:
FWIW, "drop in replacement" in this context means recompile and
relink. We did not provide binary compatibility between the 1.2
series and the 1.3 series.
that would mean that all libs and programs in Debian depending on
openmpi must be recompiled/relinked.
I don't know how helpful this code will be unless you happen to have
HDF5/NetCDF4 already installed. I looked at the code NetCDF4 uses to
test parallel IO but it is all in C so it wasn't very helpful. If you
have the NetCDF4 source code the parallel IO tests are in the nc_test4
directory.