It's the same situation as with any of the bindings - we don't really support 
it at this time. All the bindings ultimately funnel down into the C 
implementation, so when that becomes fully thread safe, then so will the rest 
of the bindings.
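
For reference, the portable way to see what a given build actually provides,
regardless of which binding you go through, is to check the level returned at
init time. A minimal C sketch of that check (everything bottoms out in the C
library anyway):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Request full thread support and see what the library grants. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            if (rank == 0) {
                printf("MPI_THREAD_MULTIPLE not available; got level %d\n",
                       provided);
            }
        }

        MPI_Finalize();
        return 0;
    }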


On Oct 26, 2013, at 6:39 PM, Saliya Ekanayake <esal...@gmail.com> wrote:

> Hi,
> 
> Since this is a conversation on thread support, I have a quick related 
> question: is MPI_THREAD_MULTIPLE supported with Open MPI's Java bindings?
> 
> Thank you,
> Saliya
> 
> 
> On Wed, Oct 23, 2013 at 2:40 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote:
> And in practice the difference between FUNNELED and SERIALIZED will be
> very small.  The differences might emerge from thread-local state and
> thread-specific network registration, but I don't see this being
> required.  Hence, for most purposes SINGLE=FUNNELED=SERIALIZED is
> equivalent to NOMUTEX and MULTIPLE is MUTEX, where MUTEX refers to the
> internal mutex required to make MPI reentrant.
> 
> Jeff
> 
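
To make the NOMUTEX/MUTEX distinction above concrete, here is a minimal
pthreads sketch (illustrative only): under SERIALIZED the application supplies
the mutual exclusion around its MPI calls itself, while under MULTIPLE the
library's internal locking lets threads call MPI concurrently.

    #include <mpi.h>
    #include <pthread.h>

    /* With MPI_THREAD_SERIALIZED, the application guarantees that MPI is
     * never entered by two threads at once, e.g. with its own mutex: */
    static pthread_mutex_t mpi_lock = PTHREAD_MUTEX_INITIALIZER;

    void serialized_send(void *buf, int count, int dest)
    {
        pthread_mutex_lock(&mpi_lock);      /* app-level serialization */
        MPI_Send(buf, count, MPI_BYTE, dest, 0, MPI_COMM_WORLD);
        pthread_mutex_unlock(&mpi_lock);
    }

    /* With MPI_THREAD_MULTIPLE, threads may call MPI concurrently and the
     * library's internal locking (the "MUTEX" above) provides the safety: */
    void multiple_send(void *buf, int count, int dest)
    {
        MPI_Send(buf, count, MPI_BYTE, dest, 0, MPI_COMM_WORLD);
    }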
> On Wed, Oct 23, 2013 at 1:18 PM, Tim Prince <n...@aol.com> wrote:
> > On 10/23/2013 01:02 PM, Barrett, Brian W wrote:
> >
> > On 10/22/13 10:23 AM, "Jai Dayal" <dayals...@gmail.com> wrote:
> >
> > I, for the life of me, can't understand the difference between these two
> > init_thread modes.
> >
> > MPI_THREAD_SINGLE states that "only one thread will execute", but
> > MPI_THREAD_FUNNELED states "The process may be multi-threaded, but only the
> > main thread will make MPI calls (all MPI calls are funneled to the main
> > thread)."
> >
> > If I use MPI_THREAD_SINGLE, and just create a bunch of pthreads that dumbly
> > loop in the background, the MPI library will have no way of detecting this,
> > nor should this have any effect on the machine.
> >
> > This is exactly the same as MPI_THREAD_FUNNELED. What exactly does it mean
> > by "only one thread will execute"? The Open MPI library has absolutely no
> > way of knowing I've spawned other pthreads, and since these pthreads aren't
> > actually doing MPI communication, I fail to see how this would interfere.
> >
> >
> > Technically, if you call MPI_INIT_THREAD with MPI_THREAD_SINGLE, you have
> > made a promise that you will not create any other threads in your
> > application.  There was a time when OSes shipped threaded and non-threaded
> > malloc, for example, so knowing that might be important for that last bit of
> > performance.  There are also some obscure corner cases in the memory model
> > of some architectures where you might get unexpected results if you made an
> > MPI receive call in one thread and accessed that buffer later from another
> > thread; handling those may require memory barriers inside the
> > implementation, so there could be some differences between SINGLE and
> > FUNNELED due to those barriers.
> >
> > In Open MPI, we'll handle those corner cases whether you init for SINGLE or
> > FUNNELED, so there's really no practical difference for Open MPI, but you're
> > then slightly less portable.
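
So the portable pattern for the program described above, where helper pthreads
never touch MPI, is to request FUNNELED rather than SINGLE and check what was
granted. A rough sketch (the worker function and the fallback behavior are
just placeholders):

    #include <mpi.h>
    #include <pthread.h>

    /* Compute-only worker: it never calls MPI, but its mere existence is
     * what FUNNELED (as opposed to SINGLE) makes a promise about. */
    static void *worker(void *arg)
    {
        /* ... pure computation, no MPI calls ... */
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        pthread_t tid;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            /* Fall back or bail out; here we simply abort. */
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        pthread_create(&tid, NULL, worker, NULL);

        /* All MPI communication stays on the main (init-calling) thread. */
        MPI_Barrier(MPI_COMM_WORLD);

        pthread_join(tid, NULL);
        MPI_Finalize();
        return 0;
    }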
> >
> > I'm asking because I'm using an Open MPI build on top of InfiniBand, and
> > the maximum thread mode is MPI_THREAD_SINGLE.
> >
> >
> > That doesn't seem right; which version of Open MPI are you using?
> >
> > Brian
> >
> >
> >
> > As Brian said, you aren't likely to be running on a system like Windows 98
> > where non-thread-safe libraries were preferred.  My colleagues at NASA
> > insist that any properly built MPI will support MPI_THREAD_FUNNELED by
> > default, even when the documentation says an explicit setting in
> > MPI_Init_thread() is mandatory.  The statement I see in the Open MPI
> > documentation says all MPI calls must be made by the thread which calls
> > MPI_Init_thread.  Apparently it will also work if plain MPI_Init is used
> > instead.  This theory appears to hold up for all the MPI implementations of
> > interest.  The additional threads referred to are "inside the MPI rank,"
> > although I suppose additional application threads not involved with MPI are
> > possible.
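
One easy way to test that theory on a given installation is to initialize
with plain MPI_Init and then ask the library which level it is actually
operating at, via MPI_Query_thread. A small sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int level;

        MPI_Init(&argc, &argv);       /* no explicit thread level requested */
        MPI_Query_thread(&level);     /* what did we actually get? */

        printf("provided thread level: %s\n",
               level == MPI_THREAD_MULTIPLE   ? "MPI_THREAD_MULTIPLE"   :
               level == MPI_THREAD_SERIALIZED ? "MPI_THREAD_SERIALIZED" :
               level == MPI_THREAD_FUNNELED   ? "MPI_THREAD_FUNNELED"   :
                                                "MPI_THREAD_SINGLE");

        MPI_Finalize();
        return 0;
    }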
> >
> >
> 
> 
> 
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> 
> 
> 
> -- 
> Saliya Ekanayake esal...@gmail.com 
> Cell 812-391-4914 Home 812-961-6383
> http://saliya.org
