So long as both binaries use the same OMPI version, I can’t see why there would be an issue. It sounds like you are thinking of running an MPI process on the GPU itself (instead of using an offload library)? People have done that before - IIRC, the only issue is trying to launch a process onto a GPU that doesn’t have a globally visible TCP address. I wrote a backend spawn capability to resolve that problem, and it should still work, though I am not aware of it being exercised recently.
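For reference, launching two different binaries onto two sets of nodes as a single MPI job uses mpirun's colon-separated MPMD syntax. A minimal sketch - the hostnames (gpu01, cpu01), process counts, and binary names here are hypothetical placeholders, not anything from your setup:

```shell
# MPMD launch: each colon-separated section is its own app context,
# with its own binary, process count, and host placement.
# All ranks share a single MPI_COMM_WORLD.
mpirun -np 2 -host gpu01 ./gpu_binary : -np 2 -host cpu01 ./cpu_binary
```

A hostfile with per-node slot counts can be used instead of `-host` if you want the resource manager to handle placement.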
> On May 15, 2017, at 8:02 AM, Kumar, Amit <ahku...@mail.smu.edu> wrote:
>
> Dear Open MPI,
>
> I would like to gain a better understanding of running two different binaries on two different types of nodes (GPU nodes and non-GPU nodes) as a single job.
>
> I have run two different binaries with the mpirun command and that works fine for us. But my question is: if I have a binary-1 that uses Intel MKL and is compiled with gcc via the Open MPI wrapper compiler, and then another binary-2 that also uses Intel MKL and is compiled with gcc via the Open MPI wrapper, should they have any MPI communication or launch issues? What ABI compatibilities should I be aware of when launching tasks that need to communicate over Open MPI? Or does this question have no relevance?
>
> Thank you,
> Amit
>
>
> _______________________________________________
> devel mailing list
> devel@lists.open-mpi.org <mailto:devel@lists.open-mpi.org>
> https://rfd.newmexicoconsortium.org/mailman/listinfo/devel
> <https://rfd.newmexicoconsortium.org/mailman/listinfo/devel>
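On the ABI question above: the practical check is that both binaries were built and run against the same Open MPI installation. A quick sketch of how one might verify that (the binary paths here are hypothetical):

```shell
# Report the version of the Open MPI installation on the PATH.
ompi_info --version

# Confirm which libmpi each binary actually links against -
# both should resolve to the same installation's library.
ldd ./binary-1 | grep libmpi
ldd ./binary-2 | grep libmpi
```

If both binaries resolve `libmpi` to the same library, version mismatch is ruled out; the compiler wrapper (gcc in both cases here) is not itself an ABI concern as long as the underlying Open MPI matches.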