I checked this with a fresh clone and everything works fine, so I suspect 
this is a stale submodule issue again. I've asked John to check.
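For reference, the usual way to rule out a stale submodule after pulling a new 
master is to resync the submodule checkouts against what the superproject 
records (a sketch; run from the top of the ompi clone, and note that exact 
submodule paths may differ in your tree):

```shell
# Pull the latest master, then bring the PRRTE/OpenPMIx submodules up to
# the commits the superproject now records:
git pull
git submodule update --init --recursive

# Sanity check: any line whose hash is prefixed with '+' means that
# submodule is checked out at a different commit than the one recorded,
# i.e. a stale submodule. No prefix means everything is in sync.
git submodule status
```

If `git submodule status` shows a '+' prefix even after the update, a fresh 
clone with `git clone --recursive` is the quickest way to get a known-good 
tree.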

> On Mar 4, 2020, at 8:05 AM, John DelSignore via devel 
> <devel@lists.open-mpi.org> wrote:
> 
> Hi,
> 
> I've been working with Ralph to try to get the PMIx debugging interfaces 
> working with OMPI v5 master. I've been periodically pulling new versions to 
> try to pickup the changes Ralph has been pushing into PRRTE/OpenPMIx. After 
> pulling this morning, I'm getting the following error. This all worked OK 
> yesterday with a pull from late last week, so it seems to me that something 
> got broken in the last few days. Is this a known problem, or am I doing 
> something wrong?
> 
> Thanks, John D.
> 
> mic:/amd/home/jdelsign/PMIx>prun -x MESSAGE=name -n 1 --map-by node 
> --personality ompi ./tx_basic_mpi
> --------------------------------------------------------------------------
> It looks like MPI runtime init failed for some reason; your parallel process 
> is
> likely to abort.  There are many reasons that a parallel process can
> fail during RTE init; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>  local size
>  --> Returned "Not found" (-13) instead of "Success" (0)
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>  ompi_mpi_init: ompi_rte_init failed
>  --> Returned "Not found" (-13) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [microway3:110345] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> mic:/amd/home/jdelsign/PMIx>
> 
> 
