N.M. Maclaren wrote:
On May 10 2010, Kawashima wrote:
> If we use OpenMP with MPI, we need at least MPI_THREAD_FUNNELED even
> if MPI functions are called only outside of the OpenMP parallel region,
> like below.
>
>   #pragma omp parallel for
>   for (...) {
>       /* computation */
>   }
>   MPI_Allreduce(...);
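For concreteness, here is a minimal, self-contained sketch of the hybrid
pattern described above (not code taken from the mail; names and sizes are
illustrative). MPI is called only by the initial thread, outside the OpenMP
parallel region, so the program requests MPI_THREAD_FUNNELED at
initialization and checks what it actually got:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            /* The library cannot guarantee even funneled use. */
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        double local = 0.0, global = 0.0;

        /* The computation is multi-threaded ... */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000; i++) {
            local += (double)i;
        }

        /* ... but MPI is called only from the initial thread, outside the
         * parallel region -- exactly the MPI_THREAD_FUNNELED case. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        printf("global sum = %f\n", global);
        MPI_Finalize();
        return 0;
    }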
Hi Nick and all,
> >> On others, they use a completely different (and seriously incompatible,
> >> at both the syntactic and semantic levels) set of libraries. E.g. AIX.
> >
> > Sorry, I don't know these issues well.
> > Do you mean the case I wrote above about malloc?
>
> No. You have to compile [...]
On May 10 2010, Kawashima wrote:
> Because MPI_THREAD_FUNNELED/SERIALIZED do not restrict other threads from
> calling functions other than those of the MPI library, the code below is
> not thread-safe if malloc is not thread-safe and MPI_Allreduce calls malloc.
>
>   #pragma omp parallel for private(is_master)
>   [...]
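The example was cut off in the archived copy. The sketch below is a
reconstruction of the kind of code being described, not the original (only
is_master comes from the fragment above; the function name and buffer sizes
are invented). Its MPI usage is FUNNELED-compliant, yet if malloc is not
thread-safe and MPI_Allreduce allocates memory internally, the workers'
malloc calls race with the master's hidden one:

    #include <mpi.h>
    #include <omp.h>
    #include <stdlib.h>

    void step(double *in, double *out)
    {
        int is_master;
        #pragma omp parallel private(is_master)
        {
            is_master = (omp_get_thread_num() == 0);
            if (is_master) {
                /* Only the master thread calls MPI: allowed under
                 * MPI_THREAD_FUNNELED. */
                MPI_Allreduce(in, out, 1, MPI_DOUBLE, MPI_SUM,
                              MPI_COMM_WORLD);
            } else {
                /* The other threads keep computing and may call malloc
                 * at the same time. */
                double *scratch = malloc(1024 * sizeof(double));
                /* ... computation using scratch ... */
                free(scratch);
            }
        }
    }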
On May 10 2010, Sylvain Jeaugey wrote:
> > That is definitely the correct action. Unless an application or library
> > has been built with thread support, or can be guaranteed to be called
> > only from a single thread, using threads is catastrophic.
> I personally see that as a bug, but I certainly lack so [...]
Thanks Nick,
> > Though Sylvain's original mail (*1) was sent 4 months ago and nobody
> > replied to it, I'm interested in this issue and strongly agree with
> > Sylvain.
> >
> > *1 http://www.open-mpi.org/community/lists/devel/2010/01/7275.php
> >
> > As explained by Sylvain, the current Open MPI implementation always
> > returns MPI_THREAD_SINGLE as the provided thread level if neither
> > --enable-mpi-threads nor --enable-progress-threads was specified at
> > configure time (v1.4).
On Mon, 10 May 2010, N.M. Maclaren wrote:
> > As explained by Sylvain, the current Open MPI implementation always
> > returns MPI_THREAD_SINGLE as the provided thread level if neither
> > --enable-mpi-threads nor --enable-progress-threads was specified at
> > configure time (v1.4).
> That is definitely the correct action.
Hi Open MPI developers,
Though Sylvain's original mail (*1) was sent 4 months ago and nobody
replied to it, I'm interested in this issue and strongly agree with
Sylvain.
*1 http://www.open-mpi.org/community/lists/devel/2010/01/7275.php
As explained by Sylvain, the current Open MPI implementation always returns
MPI_THREAD_SINGLE as the provided thread level if neither --enable-mpi-threads
nor --enable-progress-threads was specified at configure time (v1.4).
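A small check program (a sketch, not something posted in the thread) makes
the behaviour easy to observe: on a v1.4 build configured without
--enable-mpi-threads or --enable-progress-threads, the provided level comes
back as MPI_THREAD_SINGLE even though MPI_THREAD_FUNNELED was requested:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        /* Map the provided level onto its name for printing. */
        const char *name =
            provided == MPI_THREAD_SINGLE     ? "MPI_THREAD_SINGLE"     :
            provided == MPI_THREAD_FUNNELED   ? "MPI_THREAD_FUNNELED"   :
            provided == MPI_THREAD_SERIALIZED ? "MPI_THREAD_SERIALIZED" :
                                                "MPI_THREAD_MULTIPLE";
        printf("requested MPI_THREAD_FUNNELED, provided %s\n", name);

        MPI_Finalize();
        return 0;
    }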