Hi everybody,

I don't know if I'm in the correct forum, sorry.

I am using:
- Python 3.8.0.2
- Open MPI 4.0.3
- mpi4py
- cx_Freeze

I am using cx_Freeze to freeze my Python 3.8 script.
I am also running everything on a single Raspberry Pi.

I am using Python 3.8's multiprocessing.shared_memory in myScript.py.
If I run:  mpiexec -n 2 python3.8 myScript.py    ==> it runs OK, without
problems (with shared_memory).
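
To give an idea of the pattern, here is a minimal sketch of the kind of
mpi4py + shared_memory code involved (a simplified, illustrative example,
not my actual script; it assumes exactly 2 ranks):

    # minimal illustrative sketch, run with: mpiexec -n 2 python3.8 myScript.py
    from mpi4py import MPI
    from multiprocessing import shared_memory

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # rank 0 creates a named shared-memory block and fills it
        shm = shared_memory.SharedMemory(create=True, size=16)
        shm.buf[:5] = b"hello"
        comm.send(shm.name, dest=1)   # pass the block name to rank 1
        comm.recv(source=1)           # wait until rank 1 has read it
        shm.close()
        shm.unlink()                  # the creator unlinks the block
    else:
        name = comm.recv(source=0)
        shm = shared_memory.SharedMemory(name=name)  # attach by name
        print("rank 1 read:", bytes(shm.buf[:5]))
        shm.close()
        comm.send(None, dest=0)       # tell rank 0 we are done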

If I use cx_Freeze to build ./myScript and then run:
   mpiexec -n 2 ./myScript       ==> ERROR (below), but the program keeps
running after the error.

If I use cx_Freeze to build ./myScript but remove the shared_memory code
from the Python file:
    mpiexec -n 2 ./myScript       ==> it runs OK, without problems.

So the error only appears when I use Python 3.8 shared_memory in my script
together with cx_Freeze.
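
For context, the freeze is done with a setup.py along these lines (a
simplified sketch, built with "python3.8 setup.py build"; the names and
package list are illustrative, not my exact file):

    # setup.py -- simplified, illustrative cx_Freeze configuration
    from cx_Freeze import setup, Executable

    setup(
        name="myScript",
        version="0.1",
        description="mpi4py + shared_memory test",
        options={"build_exe": {"packages": ["mpi4py", "multiprocessing"]}},
        executables=[Executable("myScript.py")],
    )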

Appreciate any help,

Ivo Wolff Gersberg
PEC-COPPE/Federal University of Rio de Janeiro
Brazil


The error:
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  getting local rank failed
  --> Returned value No permission (-17) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_init failed
  --> Returned value No permission (-17) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "No permission" (-17) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[raspberrypi:01077] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages,
and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[raspberrypi:01080] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not able to
guarantee that all other processes were killed!
[raspberrypi:01064] 1 more process has sent help message
help-orte-runtime.txt / orte_init:startup:internal-failure
[raspberrypi:01064] Set MCA parameter "orte_base_help_aggregate" to 0 to
see all help / error messages
[raspberrypi:01064] 1 more process has sent help message help-orte-runtime
/ orte_init:startup:internal-failure
[raspberrypi:01064] 1 more process has sent help message
help-mpi-runtime.txt / mpi_init:startup:internal-failure

