I've updated the wiki to include a map of three hotels near DFW that offer a shuttle both to/from DFW and to the IBM Innovation Center, for those who wish to go without a car.
https://github.com/open-mpi/ompi/wiki/Meeting-2016-02
---
Geoffrey Paulsen
Software Engineer, IBM Platform MPI
Thanks for the clarification. I will go via the new btl module path.
I used --mca btl self,tcp in the past to get things working (when dealing
with exec and fork problems). So at the moment, Open MPI runs fine; we
were able to run some test jobs and get some preliminary performance
measurements, etc. Only the c
Justin,
knem allows a process to write into the address space of another process,
to do zero copy.
In the case of OSv, threads can simply do a memcpy(), and I doubt knem is
even available.
So a new btl that uses memcpy would be optimal on OSv.
One option is to start from the vader btl, and repl
Justin,
Rewriting a btl is for intra-node performance purposes only.
To get things working, you can force TCP connections for intra-node
communication:
mpirun --mca btl tcp,self ...
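As a side note, a sketch of how to confirm which BTLs actually get selected (assuming a standard Open MPI installation with mpirun on PATH, and some MPI binary ./a.out):

```shell
# Force tcp+self and raise BTL verbosity so the selection is printed.
mpirun --mca btl tcp,self --mca btl_base_verbose 100 -np 2 ./a.out
```

The btl_base_verbose output shows which components were considered and which were chosen, which is useful when checking that vader (or a new OSv btl) is really out of the picture.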
Cheers,
Gilles
Justin Cinkelj wrote:
>Vader is for intra-node communication only, right? So for inter-node
>co
Vader is for intra-node communication only, right? So for inter-node
communication some other mechanism will be used anyway.
Why would it be even better to write a new btl? To avoid memcpy (knem would
use it, if I understand you correctly; I guess the code assumes that
multiple processes on the same node h