Justin,
one more question...
if you want to run n>1 MPI tasks per node, would you have:
- 1 container with one orted and n MPI tasks
- n containers with one orted and one MPI task per container
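For reference, outside of containers the "n tasks per node" case is usually expressed directly on the mpirun command line; a minimal sketch, assuming a hostfile that lists hosts B and C with 4 slots each (all names here are placeholders):

    $ cat hosts
    B slots=4
    C slots=4
    $ mpirun --hostfile hosts -np 8 --npernode 4 a.out

In that layout a single orted per node manages the 4 local a.out ranks, which corresponds to the first option above.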
And btw, did you configure ompi with --disable-dlopen?
If not, I strongly encourage you to do so.
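For what it's worth, a minimal sketch of such a build (the install prefix is only an example):

    $ ./configure --prefix=/opt/openmpi --disable-dlopen
    $ make -j all install

--disable-dlopen links the MCA components directly into the Open MPI libraries instead of dlopen'ing them as plugins at run time, which avoids loading dozens of plugin .so files when the job starts.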
C
Let me follow up on this...
IOF is but one of the frameworks / plugins involved in launching and monitoring
processes.
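As a side note, the components actually built for each framework (IOF included) can be listed with ompi_info, assuming the install's bin directory is in the PATH:

    $ ompi_info | grep ' iof'

This typically reports the hnp, orted and tool IOF components; the exact list depends on the release and on the configure options.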
It might actually be easier to get on a webex and give you an overview (Ralph
would be the best person for this; he's the one who does most of the work in
the ORTE layer); I
Thank you. At least it's clear now that, for the immediate problem, I have
to look at the IOF code.
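In case it helps with the code reading, the IOF framework lives under orte/mca/iof/ in the Open MPI source tree (1.10-era layout), with one subdirectory per component plus the framework glue in base/:

    $ ls orte/mca/iof
    base  hnp  orted  tool  ...

(the exact component list varies between releases).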
On 16. 10. 2015 03:32, Gilles Gouaillardet wrote:
Justin,
IOF stands for Input/Output (aka I/O) Forwarding
here is a very high level overview of a quite simple case.
on host A, you run
mpirun -host B,C -np 2 a.out
without any batch manager, and with the TCP interconnect
first, mpirun will fork&exec
ssh B orted ...
ssh C orted ...
the orted daemons will
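To make the forwarding part concrete: once the orteds are up, the stdout/stderr of the processes they launch on B and C is captured by the orteds and routed back through the IOF framework to mpirun, so it appears at the terminal on host A. A quick way to see this, assuming passwordless ssh to both hosts:

    $ mpirun -host B,C -np 2 hostname

prints the hostnames of B and C at mpirun's terminal, even though the output is produced on the remote nodes.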
I'm trying to run OpenMPI in an OSv container
(https://github.com/cloudius-systems/osv). It's a single-process, single-address-space
VM, without the fork, exec, and openpty functions. With some butchering of OSv
and OpenMPI I was able to compile orted.so and run it inside OSv via mpirun
(mpirun is on remote m