Jeff Squyres wrote:
There's no synchronization *guarantee* in MPI collectives except for
MPI_BARRIER. [...] BCAST *can* synchronize; I'm not saying it has to.
I fully agree with Jeff and would even go a step further.
As has already been noted, there are also some implicit data
dependencies
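To make the quoted point concrete, here is a minimal sketch (my own, not
from the quoted mails) of code that would be wrong if it relied on
MPI_Bcast for synchronization:

    /* Sketch (not from the quoted mails): the root may leave MPI_Bcast
     * before any other rank has even entered it, so nothing here orders
     * the printf calls across ranks -- only MPI_Barrier would.         */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        /* wrong assumption: "all ranks have passed the Bcast by now" */
        printf("rank %d: value = %d\n", rank, value);

        MPI_Finalize();
        return 0;
    }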
Hi Edgar,
CIDs are in fact not recycled in the block algorithm. The problem
is that comm_free is not collective, so you cannot make any
assumptions about whether other procs have also released that communicator.
Well, that's not quite correct. The MPI standard says the following
about MPI_Comm_free
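The quotation is cut off here; a small sketch of the point being made (my
own, not from the thread): every process must eventually call
MPI_Comm_free on the communicator, but the call merely marks the object
for deallocation and may return before the other ranks have freed it.

    /* Sketch (not from the thread): MPI_Comm_free is collective in the
     * sense that all ranks must call it, but a returning call does not
     * imply that the other ranks have already done so.                 */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm dup;
        MPI_Init(&argc, &argv);
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);

        /* ... communicate over dup ... */

        MPI_Comm_free(&dup);  /* marks for deallocation; no barrier semantics */

        MPI_Finalize();
        return 0;
    }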
Hi,
as you all have noticed already, ftruncate() does NOT extend the size
of a file on all systems. Instead, the preferred way to extend a file to
a specific size is to call lseek() and then write() one byte (see e.g.
[1]).
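A minimal sketch of that lseek()/write() idiom (the helper name
set_file_size is mine, not from the mail; it assumes a POSIX file
descriptor that is already open for writing):

    /* Extend an open file descriptor to new_size bytes by seeking just
     * past the desired end and writing a single byte.                  */
    #include <sys/types.h>
    #include <unistd.h>

    static int set_file_size(int fd, off_t new_size)
    {
        if (new_size <= 0)
            return 0;                                  /* nothing to extend */
        if (lseek(fd, new_size - 1, SEEK_SET) == (off_t)-1)
            return -1;                                 /* seek failed */
        if (write(fd, "", 1) != 1)                     /* writes one '\0' byte */
            return -1;
        return 0;
    }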
Best regards,
Christian
[1] Richard Stevens: Advanced Programming in the UNIX Environment
Hi,
I just gave the new release 1.3.1 a go. While Ethernet and InfiniBand
seem to work properly, I noticed that Myrinet/GM compiles fine but
gives a segmentation violation on the first attempt to communicate
(MPI_Send in a simple "hello world" application). Is GM not supported
anymore or
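For reference, a reproducer along the lines of the "hello world" test
mentioned above could look like this (my reconstruction, not the original
program):

    /* Sketch of a minimal test: rank 0 sends one integer to rank 1;
     * per the report above, the crash occurs on the first MPI_Send
     * when running over Myrinet/GM.                                   */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, msg = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("hello from rank %d\n", rank);
        MPI_Finalize();
        return 0;
    }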
to contact me [1] if you have any comments. I'm especially
interested in more applications that can benefit from this implementation.
"Now it is possible to use MPI_BCAST for large-scaling applications."
Yours sincerely
Christian Siebert
[1] This e-mail address will become invalid at the end of March 2007 too.
Hi,
recently I've discovered a strange bug, which occurs when you try to
communicate within mca_coll_*_comm_query() or mca_coll_*_module_init().
The interesting thing is that it only fails for larger communicators.
Until now, I wasn't sure whether this was a problem with my own collective
component,
Hi again,
there is a tiny portability problem located in ompi/tools/ompi_info/param.cc.
This code uses the asprintf() function, which is a GNU extension and
therefore not very portable. Fortunately, it is not hard to replace
the line
asprintf(&value_string, "%d", value_int);
with a separa
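The mail breaks off here; one portable alternative along those lines might
look like the fragment below (my sketch, not necessarily the fix that was
applied; snprintf() and strdup() come from <stdio.h> and <string.h>):

    /* Format into a fixed-size buffer and duplicate it instead of
     * calling the non-portable asprintf().                           */
    char value_buffer[64];                 /* ample room for any int */
    snprintf(value_buffer, sizeof(value_buffer), "%d", value_int);
    value_string = strdup(value_buffer);   /* caller free()s it as before */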
Hi,
I stumbled across a serious bug in the tuned component of Open MPI,
which crashes, for example, the well-known HPL benchmark in conjunction
with the "native MPI_Bcast() patch" [1].
The problem is within the function ompi_coll_tuned_bcast_intra_chain(),
which mainly does the following:
o
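For readers not familiar with that routine, here is a generic
chain-broadcast sketch written with plain point-to-point calls (my own
illustration; it is NOT the Open MPI internal code and omits the
segmentation/pipelining that the tuned component adds):

    /* Each rank receives the data from its predecessor in the chain and
     * forwards it to its successor; the root sits at the head.          */
    #include <mpi.h>

    static void chain_bcast(void *buf, int count, MPI_Datatype dtype,
                            int root, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        /* shift ranks so that the root sits at position 0 of the chain */
        int vrank = (rank - root + size) % size;

        if (vrank > 0) {                       /* receive from predecessor */
            int prev = (rank - 1 + size) % size;
            MPI_Recv(buf, count, dtype, prev, 0, comm, MPI_STATUS_IGNORE);
        }
        if (vrank < size - 1) {                /* forward to successor */
            int next = (rank + 1) % size;
            MPI_Send(buf, count, dtype, next, 0, comm);
        }
    }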