e startup of the daemons.
>
>
> > On May 3, 2017, at 6:26 AM, r...@open-mpi.org wrote:
> >
> > The orte routed framework does that for you - there is an API for that
> > purpose.
> >
> >
> >> On May 3, 2017, at 12:17 AM, Justin Cinkelj
> >
Important detail first: I get this message from significantly modified
Open MPI code, so the problem exists solely due to my mistake.
Orterun on 192.168.122.90 starts orted on the remote node 192.168.122.91,
then orted figures out it has nothing to do.
If I request to start workers on the same 192.168.
> I’m pretty sure you can by simply enclosing the entire launch proxy command
> in quotes, but I can take a look a little later today
>
> > On Feb 5, 2016, at 7:17 AM, Justin Cinkelj wrote:
> >
> > I'm starting mpi program via --launch-proxy, and would like to pa
I'm starting an MPI program via --launch-proxy and would like to pass some
additional parameters to it, but I'm not able to figure out how to do
that (or whether it is possible at all). Attempts to use environment variables failed:
OMPI_VAR1=aa mpirun program
mpirun -x VAR2=bb program
The program will get set OMPI_
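As a quick sanity check (my own sketch, not from the thread), the launched
program can print which of these variables actually reached its environment;
OMPI_VAR1 and VAR2 are the names from the attempts above:

    #include <stdio.h>
    #include <stdlib.h>

    /* Print the variables the two attempts above try to export;
     * "(unset)" means the value never reached this process's environment. */
    int main(void)
    {
        const char *v1 = getenv("OMPI_VAR1");  /* from OMPI_VAR1=aa mpirun program */
        const char *v2 = getenv("VAR2");       /* from mpirun -x VAR2=bb program */

        printf("OMPI_VAR1=%s\n", v1 ? v1 : "(unset)");
        printf("VAR2=%s\n",      v2 ? v2 : "(unset)");
        return 0;
    }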
, this is for performance only, using the tcp btl only
should be enough to get things working.
Cheers,
Gilles
On Wednesday, December 16, 2015, Justin Cinkelj wrote:
Vader is for intra-node communication only, right? So for
inter-node communication some
ot so painful, and I would have expected some issues with the
global variables, and some race conditions with the environment.
Did you already solve these issues?
Cheers,
Gilles
On Tuesday, December 15, 2015, Justin Cinkelj wrote:
I'm trying to port Open MPI to an OS with threads instead of processes.
Currently, during MPI_Finalize, I get an attempt to call munmap, first with
address 0x20c0 and later 0x20c8.
mca_btl_vader_component_close():
munmap (mca_btl_vader_component.my_segment,
mca_btl_vader_component.
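Not from the thread, but this is the kind of guard I would reach for when the
component structure is shared between threads instead of being per-process:
unmap once, then clear the pointer so a second pass through the close path
cannot hand munmap a stale address. The segment_size argument is a stand-in,
since the quoted call is cut off:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Illustrative guard only: unmap the segment once and clear the
     * pointer, so a second call into the close path (e.g. from another
     * thread sharing the same component globals) cannot call munmap
     * again with a stale or bogus address. */
    static void close_segment_once(void **segment, size_t segment_size)
    {
        if (*segment != NULL && *segment != MAP_FAILED) {
            (void) munmap(*segment, segment_size);
            *segment = NULL;
        }
    }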
- Original Message -
> From: "Justin Cinkelj"
> To: "Open MPI Developers"
> Sent: Friday, October 23, 2015 5:59:43 PM
> Subject: Re: [OMPI devel] How is session dir used?
>
> Shared memory file is used by mpi_program only, and not by orted, I gue
Normally, mpirun starts the orted process on the remote node via ssh, and orted
starts mpi_program via fork+exec.
orted and mpi_program communicate via:
- environment variables (ok, that's one-time setup only, but still)
- pipes (only one, right? - it is close-on-exec in the child; see the sketch below).
- file descriptors, mpi_prog
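Purely as an illustration (my own, not code from the thread), the close-on-exec
behaviour mentioned for the pipe is what you get by setting FD_CLOEXEC on the
descriptor, roughly like this:

    #include <fcntl.h>
    #include <unistd.h>

    /* Create a pipe and mark its read end close-on-exec, so that
     * descriptor disappears automatically when the child exec()s the
     * MPI program.  Error handling is reduced to the return codes. */
    static int make_cloexec_pipe(int fds[2])
    {
        if (pipe(fds) != 0)
            return -1;
        return fcntl(fds[0], F_SETFD, FD_CLOEXEC);
    }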
ing said, I do not think that should be needed ... just make
> sure there is no firewall running on your system, and you should be fine.
> If some hosts have several interfaces, you can restrict to the one
> that should work (e.g. eth0) with
> mpirun --mca oob_tcp_if_include eth0 --mca btl
I'm trying to run Open MPI in an OSv container
(https://github.com/cloudius-systems/osv). It's a single-process,
single-address-space VM, without the fork, exec, and openpty functions. With
some butchering of OSv and Open MPI I was able to compile orted.so and run it
inside OSv via mpirun (mpirun is on remote m
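For what it's worth, a minimal sketch (my own illustration, not code from the
thread) of what "threads instead of processes" means in practice: the worker
entry point runs via pthread_create inside the same address space, rather than
in a fork+exec'd child. worker_main here is a hypothetical stand-in for the MPI
program's main:

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical worker entry point standing in for the MPI program's
     * main(); in a single-address-space OS like OSv it runs as a thread
     * in the same process instead of a fork+exec'd child. */
    static void *worker_main(void *arg)
    {
        (void) arg;
        printf("worker running as a thread\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        /* Stand-in for fork+exec in a threaded port: no new address
         * space, so globals and file descriptors are shared with the
         * launcher. */
        if (pthread_create(&tid, NULL, worker_main, NULL) != 0)
            return 1;
        pthread_join(tid, NULL);
        return 0;
    }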