Hello,
FWIW: the reason you have to use PML_CALL() is by design. The MPI
API contains all the error-checking machinery: verifying that MPI_INIT
has completed, checking parameters, etc. We never invoke the
top-level MPI API from elsewhere in the OMPI code base (except from
within ROMIO).
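As an illustration, calling the PML directly from inside a component looks roughly like the sketch below. This is not a standalone program: it assumes Open MPI's internal headers (ompi/mca/pml/pml.h etc.), and the exact argument list of the pml_send interface has varied between Open MPI versions, so treat the signatures as approximate.

```c
/* Sketch only -- depends on Open MPI's internal source tree, not on mpi.h alone. */
#include "ompi/mca/pml/pml.h"
#include "ompi/communicator/communicator.h"

/* Inside a component we dispatch to the selected PML module directly
 * via MCA_PML_CALL(), instead of calling MPI_Send(); this skips the
 * top-level MPI layer, which would otherwise check that MPI_INIT has
 * completed and validate the parameters. */
static int my_internal_send(void *buf, int count, ompi_datatype_t *dtype,
                            int dest, int tag, ompi_communicator_t *comm)
{
    return MCA_PML_CALL(send(buf, count, dtype, dest, tag,
                             MCA_PML_BASE_SEND_STANDARD, comm));
}
```

The trade-off is exactly the one described above: the component author takes responsibility for passing valid arguments, since none of the MPI-level checks run.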
Hi,
I assume you mean something like mca_coll_foo_init_query() for your
initialization function. And I'm guessing you're exchanging some sort
of address information for your network here?
correct.
What I actually did in my collective component was use the PML's modex
(module exchange) facility.
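A rough sketch of the modex pattern follows. Caveats: the modex API has moved around between Open MPI versions (in older trees it was ompi_modex_send()/ompi_modex_recv() from ompi/runtime/ompi_module_exchange.h), and the component name (mca_coll_foo_component), its field layout, and the address payload here are hypothetical placeholders, not real identifiers from the code base.

```c
/* Sketch only -- assumes Open MPI's internal module-exchange (modex) API.
 * During component init, each process publishes a small address blob;
 * the runtime exchanges the blobs out-of-band, so peers can read them
 * later without any MPI point-to-point communication. */
#include "ompi/runtime/ompi_module_exchange.h"

struct my_addr { uint32_t lid; uint32_t qpn; };  /* hypothetical payload */

/* At init time: publish this process's address information,
 * keyed by the component's version descriptor. */
static int publish_my_address(const struct my_addr *addr)
{
    return ompi_modex_send(&mca_coll_foo_component.super.collm_version,
                           addr, sizeof(*addr));
}

/* Later: look up the blob a given peer published. */
static int lookup_peer_address(ompi_proc_t *peer, struct my_addr **addr)
{
    size_t size;
    return ompi_modex_recv(&mca_coll_foo_component.super.collm_version,
                           peer, (void **)addr, &size);
}
```

The point of the pattern is that no MPI communication happens during component initialization at all: publishing and lookup go through the runtime environment, which is exactly what makes it usable before the collectives are wired up.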
with InfiniBand, MP-MPICH
with SCI).
I hope my information suffices to reproduce the problem.
Best regards,
Georg Wassen.
PS: I know that I could transmit the array in one MPI_Send, but this is
extracted from my real problem.
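For completeness, the single-send variant mentioned in the PS would look like the minimal sketch below (using only the standard MPI API; the array length and tag are made up for illustration, and it needs an MPI installation to compile):

```c
#include <mpi.h>

#define N 1024

/* Send the whole array in one MPI_Send instead of element by element:
 * a single message of N ints, described by the count and datatype. */
void send_array(int *array, int dest)
{
    MPI_Send(array, N, MPI_INT, dest, /* tag = */ 0, MPI_COMM_WORLD);
}
```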
--- 1st node ---
wassen@pd-01:~$
(here the error "MPI not yet initialized" occurs)
Long story short: is there a way to communicate during
mca_coll_*_module_init between different processes?
(I don't want to use TCP/IP sockets, since Open MPI should be able to do
this more portably.)
Thanks for your help,
Georg Wassen.