Dear Open MPI developers,
We (as in the Debian maintainer for Open MPI) got this bug report from
Uwe, who sees MPI apps segfault on Debian systems with the FreeBSD
kernel. Is there anybody here familiar with any BSD peculiarities that
may play a role?
Any input would be greatly appreciated!
Dirk
BTW, I totally forgot to mention a notable C++ MPI bindings project
that is the next-generation/successor to OOMPI: the Boost C++ MPI
bindings (boost.mpi).
http://www.generic-programming.org/~dgregor/boost.mpi/doc/
I believe there are also Python bindings included...?
On Aug 1, 2007, at
On Jul 31, 2007, at 6:43 PM, Lisandro Dalcin wrote:
I am working on the development of MPI for Python, a port of MPI to
Python, a high-level language with automatic memory management. That
said, in such an environment, having to call XXX.Free() for every
object I get from a call like XXX.Get_so
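The tension described here is that Python objects are normally cleaned up by the garbage collector, while MPI handles require an explicit Free(). A common way to reconcile the two is to tie the free to the wrapper object's lifetime. The following is a toy Python sketch of that idea only (the Handle class and its methods are invented for illustration; this is not mpi4py's actual implementation):

```python
# Toy sketch (not mpi4py): a wrapper that mimics an MPI object which
# must be explicitly freed, and frees itself automatically when
# Python's garbage collector reclaims it.

class Handle:
    """Stands in for an MPI object returned by some Get_* call."""
    freed_handles = []  # records which handles were freed, for illustration

    def __init__(self, name):
        self.name = name
        self._freed = False

    def Free(self):
        # Explicit free, as the MPI standard requires for real handles.
        if not self._freed:
            self._freed = True
            Handle.freed_handles.append(self.name)

    def __del__(self):
        # Automatic cleanup: if the user forgot Free(), do it when the
        # object is collected.
        self.Free()


h = Handle("group")
h.Free()            # explicit free works...
g = Handle("errhandler")
del g               # ...and so does relying on the garbage collector
print(Handle.freed_handles)
```

In CPython the `del g` drops the last reference, so `__del__` runs immediately; on other interpreters the automatic free may be deferred until collection, which is exactly why explicit Free() remains the safe choice for real MPI handles.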
FYI, I just introduced a new debugging MCA parameter:
mpi_show_mpi_alloc_mem_leaks
When activated, MPI_FINALIZE displays a list of memory allocations
from MPI_ALLOC_MEM that were not freed by MPI_FREE_MEM (in each MPI
process).
* If set to a positive integer, display only that many leaks.
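The mechanism described above can be modeled with a small registry: each allocation is recorded, each free removes its entry, and finalize reports whatever is left, capped at the requested count. The following is a toy Python model of that idea only; the names alloc_mem, free_mem, and finalize are illustrative and this is not Open MPI's actual implementation:

```python
# Toy model of MPI_ALLOC_MEM leak reporting at MPI_FINALIZE.
# A sketch of the bookkeeping idea, not Open MPI code.

_allocations = {}   # handle -> size of every outstanding allocation
_next_handle = 0

def alloc_mem(size):
    """Record an allocation and return its handle (stands in for MPI_ALLOC_MEM)."""
    global _next_handle
    _next_handle += 1
    _allocations[_next_handle] = size
    return _next_handle

def free_mem(handle):
    """Remove the allocation from the registry (stands in for MPI_FREE_MEM)."""
    del _allocations[handle]

def finalize(show_leaks=0):
    """Return one report line per unfreed allocation.

    A positive show_leaks caps how many lines are shown, mirroring the
    positive-integer behavior of the MCA parameter described above.
    """
    leaks = [f"handle={h} size={s}" for h, s in sorted(_allocations.items())]
    if show_leaks > 0:
        leaks = leaks[:show_leaks]
    return leaks

a = alloc_mem(100)
b = alloc_mem(200)
free_mem(a)                    # b is never freed, so finalize reports it
print(finalize(show_leaks=10))
```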
Hi,
since yesterday I have noticed that NetPIPE and sometimes IMB are hanging. As far
as I can tell, both processes are stuck in a receive. The weird thing is that if I run
it in a debugger, everything works fine.
Cheers,
Sven
On Tuesday 31 July 2007 23:47, Jeff Squyres wrote:
> I'm getting a pile of test