Hello Mark,

You may want to check out this package:

https://github.com/lanl/libquo

Another option would be to use an MPI_Ibarrier in the application: all MPI
processes except rank 0 post the barrier and then loop, testing for its
completion and sleeping between tests. Once rank 0 has completed the OpenMP
work, it enters the barrier itself and waits for completion.
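
A minimal sketch of that pattern in C, assuming hypothetical placeholders
do_openmp_work() and do_mpi_package_work() for your OpenMP phase and the
external MPI package:

#include <mpi.h>
#include <unistd.h>   /* usleep */

void run_package_phase(MPI_Comm comm)
{
    int rank;
    MPI_Request req;
    MPI_Comm_rank(comm, &rank);

    if (rank == 0) {
        do_openmp_work();                 /* the OpenMP-parallel part of the program */
        MPI_Ibarrier(comm, &req);         /* release the waiting ranks */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else {
        int done = 0;
        MPI_Ibarrier(comm, &req);
        while (!done) {                   /* test-and-sleep instead of spinning on a core */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            if (!done)
                usleep(1000);
        }
    }
    do_mpi_package_work(comm);            /* every rank now joins the MPI-parallel package */
}

The sleep interval is a trade-off: longer sleeps free up more cycles for the
OpenMP threads but add a little latency before the idle ranks wake up.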

This type of problem may be helped by a future MPI that supports the notion
of MPI Sessions. With that approach, you would initialize one MPI session
with normal messaging behavior, i.e. polling for fast processing of
messages; the MPI library would use this for its existing messaging. You
could then initialize a second MPI session that uses blocking methods for
message receipt, and use a communicator derived from that second session
for the Ibarrier-with-sleep loop described above.
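
As a rough illustration only, loosely following the interfaces in the MPI
Sessions proposal: the pset name comes from the proposal, but the
"mpi_progress_mode" info key is purely hypothetical, since no standard way
to request blocking progress has been defined.

#include <mpi.h>

/* Sketch only: the Sessions interface is not yet standardized, and the
 * "mpi_progress_mode" info key below is purely hypothetical. */
MPI_Comm make_blocking_comm(void)
{
    MPI_Session session;
    MPI_Group   group;
    MPI_Comm    comm;
    MPI_Info    info;

    MPI_Info_create(&info);
    MPI_Info_set(info, "mpi_progress_mode", "blocking");  /* hypothetical key */

    MPI_Session_init(info, MPI_ERRORS_RETURN, &session);
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
    MPI_Comm_create_from_group(group, "example.blocking-comm",
                               MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

    MPI_Group_free(&group);
    MPI_Info_free(&info);
    return comm;   /* use this communicator for the Ibarrier/sleep loop above */
}

The rest of your messaging would keep using the first (polling) session, so
only the ranks parked in the waiting loop would pay the latency cost of
blocking message receipt.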

Good luck,

Howard


On Thu, 21 Feb 2019 at 11:25, Mark McClure <mark.w.m...@gmail.com> wrote:

> I have the following, rather unusual, scenario...
>
> I have a program running with OpenMP on a multicore computer. At one point
> in the program, I want to use an external package that is written to
> exploit MPI, not OpenMP, parallelism. So a (rather awkward) solution could
> be to launch the program in MPI, but most of the time, everything is being
> done in a single MPI process, which is using OpenMP (i.e., run my current
> program in a single MPI process). Then, when I get to the part where I need
> to use the external package, distribute out the information to all the MPI
> processes, run it across all, and then pull them back to the master
> process. This is awkward, but probably better than my current approach,
> which is running the external package on a single processor (i.e., not
> exploiting parallelism in this time-consuming part of the code).
>
> If I use this strategy, I fear that the idle MPI processes may be
> consuming clock cycles while I am running the rest of the program on the
> master process with OpenMP. Thus, they may compete with the OpenMP threads.
> OpenMP does not close threads between every pragma, but OMP_WAIT_POLICY can
> be set to sleep idle threads (actually, this is the default behavior). I
> have not been able to find any equivalent documentation regarding the
> behavior of idle threads in MPI.
>
> Best regards,
> Mark
>