Source: dune-common
Version: 2.9.0-6
Severity: serious
Justification: FTBFS
Tags: trixie sid ftbfs
User: lu...@debian.org
Usertags: ftbfs-20240420 ftbfs-trixie ftbfs-t64-armhf
Hi,

During a rebuild of all packages in sid, your package failed to build
on armhf.

Relevant part (hopefully):
> make[5]: Entering directory '/<<PKGBUILDDIR>>/build'
> make[5]: Nothing to be done for 'CMakeFiles/build_tests.dir/build'.
> make[5]: Leaving directory '/<<PKGBUILDDIR>>/build'
> [100%] Built target build_tests
> make[4]: Leaving directory '/<<PKGBUILDDIR>>/build'
> /usr/bin/cmake -E cmake_progress_start /<<PKGBUILDDIR>>/build/CMakeFiles 0
> make[3]: Leaving directory '/<<PKGBUILDDIR>>/build'
> make[2]: Leaving directory '/<<PKGBUILDDIR>>/build'
> cd build; PATH=/<<PKGBUILDDIR>>/debian/tmp-test:$PATH /<<PKGBUILDDIR>>/bin/dune-ctest
> Site: ip-10-84-234-171
> Build name: Linux-c++
> Create new tag: 20240420-0332 - Experimental
> Test project /<<PKGBUILDDIR>>/build
>         Start  1: communicationtest
>  1/115 Test  #1: communicationtest ......................***Failed    0.02 sec
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
>     pmix_init:startup:internal-failure
> But I couldn't open the help file:
>     /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-171:1976535] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
> [ip-10-84-234-171:1976534] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-10-84-234-171:1976534] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-10-84-234-171:1976534] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
>         Start  2: communicationtest-mpi-2
>  2/115 Test  #2: communicationtest-mpi-2 ................***Failed    0.02 sec
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
>     pmix_init:startup:internal-failure
> But I couldn't open the help file:
>     /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976536] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 3: indexsettest > 3/115 Test #3: indexsettest ........................... Passed 0.00 > sec > Start 4: remoteindicestest > 4/115 Test #4: remoteindicestest ......................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! > -------------------------------------------------------------------------- > [ip-10-84-234-171:1976539] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976538] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976538] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. 
This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976538] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 5: remoteindicestest-mpi-2 > 5/115 Test #5: remoteindicestest-mpi-2 ................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976540] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 6: selectiontest > 6/115 Test #6: selectiontest .......................... Passed 0.17 > sec > Start 7: syncertest > 7/115 Test #7: syncertest .............................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! > -------------------------------------------------------------------------- > [ip-10-84-234-171:1976543] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976542] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976542] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. 
This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976542] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 8: syncertest-mpi-2 > 8/115 Test #8: syncertest-mpi-2 .......................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976544] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 9: variablesizecommunicatortest > 9/115 Test #9: variablesizecommunicatortest ...........***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! > -------------------------------------------------------------------------- > [ip-10-84-234-171:1976546] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976545] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976545] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. 
This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976545] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 10: variablesizecommunicatortest-mpi-2 > 10/115 Test #10: variablesizecommunicatortest-mpi-2 .....***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976547] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 11: mpidatatest-mpi-2 > 11/115 Test #11: mpidatatest-mpi-2 ......................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! > -------------------------------------------------------------------------- > [ip-10-84-234-171:1976548] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 12: mpifuturetest > 12/115 Test #12: mpifuturetest ..........................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976550] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976549] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976549] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. 
This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976549] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 13: mpifuturetest-mpi-2 > 13/115 Test #13: mpifuturetest-mpi-2 ....................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! > -------------------------------------------------------------------------- > [ip-10-84-234-171:1976551] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 14: mpipacktest-mpi-2 > 14/115 Test #14: mpipacktest-mpi-2 ......................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976552] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 15: mpigatherscattertest-mpi-2 > 15/115 Test #15: mpigatherscattertest-mpi-2 .............***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! > -------------------------------------------------------------------------- > [ip-10-84-234-171:1976553] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > > Start 16: looptest > 16/115 Test #16: looptest ............................... Passed 0.03 > sec > Start 17: standardtest > 17/115 Test #17: standardtest ........................... Passed 0.00 > sec > Start 18: vcarraytest > 18/115 Test #18: vcarraytest ............................***Skipped 0.00 > sec > Start 19: vcvectortest > 19/115 Test #19: vcvectortest ...........................***Skipped 0.00 > sec > Start 20: arithmetictestsuitetest > 20/115 Test #20: arithmetictestsuitetest ................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976559] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976558] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976558] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. 
This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976558] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 21: arraylisttest > 21/115 Test #21: arraylisttest .......................... Passed 0.00 > sec > Start 22: arraytest > 22/115 Test #22: arraytest .............................. Passed 0.00 > sec > Start 23: assertandreturntest > 23/115 Test #23: assertandreturntest ....................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976563] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976562] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976562] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. 
This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976562] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 24: assertandreturntest_compiletime_fail > 24/115 Test #24: assertandreturntest_compiletime_fail ... Passed 1.08 > sec > Start 25: assertandreturntest_ndebug > 25/115 Test #25: assertandreturntest_ndebug .............***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> -------------------------------------------------------------------------- > [ip-10-84-234-171:1976588] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976587] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976587] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. 
This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976587] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 26: autocopytest > 26/115 Test #26: autocopytest ........................... Passed 0.00 > sec > Start 27: bigunsignedinttest > 27/115 Test #27: bigunsignedinttest ..................... Passed 0.00 > sec > Start 28: bitsetvectortest > 28/115 Test #28: bitsetvectortest ....................... Passed 0.00 > sec > Start 29: boundscheckingtest > 29/115 Test #29: boundscheckingtest ..................... Passed 0.00 > sec > Start 30: boundscheckingmvtest > 30/115 Test #30: boundscheckingmvtest ................... Passed 0.00 > sec > Start 31: boundscheckingoptest > 31/115 Test #31: boundscheckingoptest ................... Passed 0.00 > sec > Start 32: calloncetest > 32/115 Test #32: calloncetest ........................... Passed 0.00 > sec > Start 33: check_fvector_size > 33/115 Test #33: check_fvector_size ..................... Passed 0.00 > sec > Start 34: check_fvector_size_fail1 > 34/115 Test #34: check_fvector_size_fail1 ............... Passed 0.72 > sec > Start 35: check_fvector_size_fail2 > 35/115 Test #35: check_fvector_size_fail2 ............... Passed 0.73 > sec > Start 36: classnametest-demangled > 36/115 Test #36: classnametest-demangled ................ 
Passed 0.01 > sec > Start 37: classnametest-fallback > 37/115 Test #37: classnametest-fallback ................. Passed 0.01 > sec > Start 38: concept > 38/115 Test #38: concept ................................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! > -------------------------------------------------------------------------- > [ip-10-84-234-171:1976634] PMIX ERROR: NOT-FOUND in file > ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at > line 237 > [ip-10-84-234-171:1976633] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-10-84-234-171:1976633] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to > start a daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. 
There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-171:1976633] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 39: constexprifelsetest > 39/115 Test #39: constexprifelsetest .................... Passed 0.00 > sec > Start 40: debugaligntest > 40/115 Test #40: debugaligntest .........................***Failed 0.02 > sec > -------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> --------------------------------------------------------------------------
> [ip-10-84-234-171:1976637] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
> [remaining ORTE_ERROR_LOG / orte_init / MPI_INIT failure output identical to test 38 above, PIDs 1976636/1976637; trimmed]
>
> Start 41: debugalignsimdtest
> 41/115 Test #41: debugalignsimdtest .....................***Failed 0.02 sec
> [PMIX/ORTE/MPI_INIT failure output identical to test 38 above, PIDs 1976638/1976639; trimmed]
>
> Start 42: densematrixassignmenttest
> 42/115 Test #42: densematrixassignmenttest .............. Passed 0.00 sec
> Start 43: densematrixassignmenttest_fail0
> 43/115 Test #43: densematrixassignmenttest_fail0 ........ Passed 1.21 sec
> Start 44: densematrixassignmenttest_fail1
> 44/115 Test #44: densematrixassignmenttest_fail1 ........ Passed 1.20 sec
> Start 45: densematrixassignmenttest_fail2
> 45/115 Test #45: densematrixassignmenttest_fail2 ........ Passed 1.20 sec
> Start 46: densematrixassignmenttest_fail3
> 46/115 Test #46: densematrixassignmenttest_fail3 ........ Passed 1.21 sec
> Start 47: densematrixassignmenttest_fail4
> 47/115 Test #47: densematrixassignmenttest_fail4 ........ Passed 1.22 sec
> Start 48: densematrixassignmenttest_fail5
> 48/115 Test #48: densematrixassignmenttest_fail5 ........ Passed 1.23 sec
> Start 49: densematrixassignmenttest_fail6
> 49/115 Test #49: densematrixassignmenttest_fail6 ........ Passed 1.21 sec
> Start 50: densevectorassignmenttest
> 50/115 Test #50: densevectorassignmenttest .............. Passed 0.00 sec
> Start 51: diagonalmatrixtest
> 51/115 Test #51: diagonalmatrixtest ..................... Passed 0.00 sec
> Start 52: dynmatrixtest
> 52/115 Test #52: dynmatrixtest .......................... Passed 0.00 sec
> Start 53: dynvectortest
> 53/115 Test #53: dynvectortest .......................... Passed 0.00 sec
> Start 54: densevectortest
> 54/115 Test #54: densevectortest ........................ Passed 0.00 sec
> Start 55: enumsettest
> 55/115 Test #55: enumsettest ............................ Passed 0.00 sec
> Start 56: filledarraytest
> 56/115 Test #56: filledarraytest ........................ Passed 0.00 sec
> Start 57: fmatrixtest
> 57/115 Test #57: fmatrixtest ............................ Passed 0.00 sec
> Start 58: functiontest
> 58/115 Test #58: functiontest ........................... Passed 0.00 sec
> Start 59: fvectortest
> 59/115 Test #59: fvectortest ............................ Passed 0.00 sec
> Start 60: fvectorconversion1d
> 60/115 Test #60: fvectorconversion1d .................... Passed 0.00 sec
> Start 61: genericiterator_compile_fail
> 61/115 Test #61: genericiterator_compile_fail ........... Passed 0.76 sec
> Start 62: hybridutilitiestest
> 62/115 Test #62: hybridutilitiestest .................... Passed 0.00 sec
> Start 63: indicestest
> 63/115 Test #63: indicestest ............................ Passed 0.00 sec
> Start 64: iscallabletest
> 64/115 Test #64: iscallabletest ......................... Passed 0.00 sec
> Start 65: iteratorfacadetest2
> 65/115 Test #65: iteratorfacadetest2 .................... Passed 0.00 sec
> Start 66: iteratorfacadetest
> 66/115 Test #66: iteratorfacadetest ..................... Passed 0.00 sec
> Start 67: lrutest
> 67/115 Test #67: lrutest ................................ Passed 0.00 sec
> Start 68: mathclassifierstest
> 68/115 Test #68: mathclassifierstest .................... Passed 0.00 sec
> Start 69: metistest
> 69/115 Test #69: metistest ..............................***Skipped 0.00 sec
> Start 70: mpicommunicationtest
> 70/115 Test #70: mpicommunicationtest ...................***Failed 0.02 sec
> [PMIX/ORTE/MPI_INIT failure output identical to test 38 above, PIDs 1976838/1976839; trimmed]
>
> Start 71: mpicommunicationtest-mpi-2
> 71/115 Test #71: mpicommunicationtest-mpi-2 .............***Failed 0.01 sec
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
> pmix_init:startup:internal-failure
> But I couldn't open the help file:
> /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-171:1976840] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
>
> Start 72: mpiguardtest
> 72/115 Test #72: mpiguardtest ...........................***Failed 0.02 sec
> [PMIX/ORTE/MPI_INIT failure output identical to test 38 above, PIDs 1976841/1976842; trimmed]
>
> Start 73: mpiguardtest-mpi-2
> 73/115 Test #73: mpiguardtest-mpi-2 .....................***Failed 0.01 sec
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
> pmix_init:startup:internal-failure
> But I couldn't open the help file:
> /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-171:1976843] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
>
> Start 74: mpihelpertest
> 74/115 Test #74: mpihelpertest ..........................***Failed 0.02 sec
> [PMIX/ORTE/MPI_INIT failure output identical to test 38 above, PIDs 1976844/1976845; trimmed]
>
> Start 75: mpihelpertest-mpi-2
> 75/115 Test #75: mpihelpertest-mpi-2 ....................***Failed 0.01 sec
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
> pmix_init:startup:internal-failure
> But I couldn't open the help file:
> /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-171:1976846] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
>
> Start 76: mpihelpertest2
> 76/115 Test #76: mpihelpertest2 .........................***Failed 0.02 sec
> [PMIX/ORTE/MPI_INIT failure output identical to test 38 above, PIDs 1976847/1976848; trimmed]
>
> Start 77: mpihelpertest2-mpi-2
> 77/115 Test #77: mpihelpertest2-mpi-2 ...................***Failed 0.01 sec
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
> pmix_init:startup:internal-failure
> But I couldn't open the help file:
> /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-171:1976849] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
>
> Start 78: overloadsettest
> 78/115 Test #78: overloadsettest ........................ Passed 0.00 sec
> Start 79: parameterizedobjecttest
> 79/115 Test #79: parameterizedobjecttest ................ Passed 0.00 sec
> Start 80: parametertreelocaletest
> 80/115 Test #80: parametertreelocaletest ................***Skipped 0.00 sec
> Start 81: parametertreetest
> 81/115 Test #81: parametertreetest ...................... Passed 0.00 sec
> Start 82: pathtest
> 82/115 Test #82: pathtest ............................... Passed 0.00 sec
> Start 83: poolallocatortest
> 83/115 Test #83: poolallocatortest ...................... Passed 0.00 sec
> Start 84: powertest
> 84/115 Test #84: powertest .............................. Passed 0.00 sec
> Start 85: quadmathtest
> 85/115 Test #85: quadmathtest ...........................***Skipped 0.00 sec
> Start 86: rangeutilitiestest
> 86/115 Test #86: rangeutilitiestest ..................... Passed 0.00 sec
> Start 87: referencehelpertest
> 87/115 Test #87: referencehelpertest ....................***Failed 0.02 sec
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
> pmix_init:startup:internal-failure
> But I couldn't open the help file:
> /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-171:1976860] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
> [remaining ORTE_ERROR_LOG / orte_init / MPI_INIT failure output identical to test 38 above, PIDs 1976859/1976860; trimmed]
>
> Start 88: reservedvectortest
> 88/115 Test #88: reservedvectortest ..................... Passed 0.00 sec
> Start 89: scotchtest
> 89/115 Test #89: scotchtest .............................***Skipped 0.00 sec
> Start 90: shared_ptrtest
> 90/115 Test #90: shared_ptrtest ......................... Passed 0.00 sec
> Start 91: singletontest
> 91/115 Test #91: singletontest .......................... Passed 0.00 sec
> Start 92: sllisttest
> 92/115 Test #92: sllisttest ............................. Passed 0.00 sec
> Start 93: stdidentity
> 93/115 Test #93: stdidentity ............................ Passed 0.00 sec
> Start 94: stdapplytest
> 94/115 Test #94: stdapplytest ........................... Passed 0.00 sec
> Start 95: stdchecktypes
> 95/115 Test #95: stdchecktypes .......................... Passed 0.00 sec
> Start 96: streamoperatorstest
> 96/115 Test #96: streamoperatorstest .................... Passed 0.00 sec
> Start 97: streamtest
> 97/115 Test #97: streamtest ............................. Passed 0.00 sec
> Start 98: stringutilitytest
> 98/115 Test #98: stringutilitytest ...................... Passed 0.00 sec
> Start 99: testdebugallocator
> 99/115 Test #99: testdebugallocator ..................... Passed 0.00 sec
> Start 100: testdebugallocator_fail1
> 100/115 Test #100: testdebugallocator_fail1 ............... Passed 0.00 sec
> Start 101: testdebugallocator_fail2
> 101/115 Test #101: testdebugallocator_fail2 ............... Passed 0.00 sec
> Start 102: testdebugallocator_fail3
> 102/115 Test #102: testdebugallocator_fail3 ............... Passed 0.00 sec
> Start 103: testdebugallocator_fail4
> 103/115 Test #103: testdebugallocator_fail4 ............... Passed 0.00 sec
> Start 104: testdebugallocator_fail5
> 104/115 Test #104: testdebugallocator_fail5 ............... Passed 0.00 sec
> Start 105: testfloatcmp
> 105/115 Test #105: testfloatcmp ........................... Passed 0.00 sec
> Start 106: transposetest
> 106/115 Test #106: transposetest .......................... Passed 0.00 sec
> Start 107: tupleutilitytest
> 107/115 Test #107: tupleutilitytest ....................... Passed 0.00 sec
> Start 108: typeutilitytest
> 108/115 Test #108: typeutilitytest ........................ Passed 0.00 sec
> Start 109: typelisttest
> 109/115 Test #109: typelisttest ........................... Passed 0.00 sec
> Start 110: utilitytest
> 110/115 Test #110: utilitytest ............................ Passed 0.00 sec
> Start 111: eigenvaluestest
> 111/115 Test #111: eigenvaluestest ........................ Passed 0.66 sec
> Start 112: versiontest
> 112/115 Test #112: versiontest ............................ Passed 0.00 sec
> Start 113: mathtest
> 113/115 Test #113: mathtest ............................... Passed 0.00 sec
> Start 114: vcexpectedimpltest
> 114/115 Test #114: vcexpectedimpltest .....................***Skipped 0.00 sec
> Start 115: alignedallocatortest
> 115/115 Test #115: alignedallocatortest ................... Passed 0.00 sec
>
> 76% tests passed, 28 tests failed out of 115
>
> Label Time Summary:
> quick = 13.46 sec*proc (107 tests)
>
> Total Test time (real) = 13.35 sec
>
> The following tests did not run:
> 18 - vcarraytest (Skipped)
> 19 - vcvectortest (Skipped)
> 69 - metistest (Skipped)
> 80 - parametertreelocaletest (Skipped)
> 85 - quadmathtest (Skipped)
> 89 - scotchtest (Skipped)
> 114 - vcexpectedimpltest (Skipped)
>
> The following tests FAILED:
> 1 - communicationtest (Failed)
> 2 - communicationtest-mpi-2 (Failed)
> 4 - remoteindicestest (Failed)
> 5 - remoteindicestest-mpi-2 (Failed)
> 7 - syncertest (Failed)
> 8 - syncertest-mpi-2 (Failed)
> 9 - variablesizecommunicatortest (Failed)
> 10 - variablesizecommunicatortest-mpi-2 (Failed)
> 11 - mpidatatest-mpi-2 (Failed)
> 12 - mpifuturetest (Failed)
> 13 - mpifuturetest-mpi-2 (Failed)
> 14 - mpipacktest-mpi-2 (Failed)
> 15 - mpigatherscattertest-mpi-2 (Failed)
> 20 - arithmetictestsuitetest (Failed)
> 23 - assertandreturntest (Failed)
> 25 - assertandreturntest_ndebug (Failed)
> 38 - concept (Failed)
> 40 - debugaligntest (Failed)
> 41 - debugalignsimdtest (Failed)
> 70 - mpicommunicationtest (Failed)
> 71 - mpicommunicationtest-mpi-2 (Failed)
> 72 - mpiguardtest (Failed)
> 73 - mpiguardtest-mpi-2 (Failed)
> 74 - mpihelpertest (Failed)
> 75 - mpihelpertest-mpi-2 (Failed)
> 76 - mpihelpertest2 (Failed)
> 77 - mpihelpertest2-mpi-2 (Failed)
> 87 - referencehelpertest (Failed)
> Errors while running CTest
> ======================================================================
> Name: communicationtest
> FullName: ./dune/common/parallel/test/communicationtest
> Status: FAILED
>
> ======================================================================
> Name: communicationtest-mpi-2
> FullName: ./dune/common/parallel/test/communicationtest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: remoteindicestest
> FullName: ./dune/common/parallel/test/remoteindicestest
> Status: FAILED
>
> ======================================================================
> Name: remoteindicestest-mpi-2
> FullName: ./dune/common/parallel/test/remoteindicestest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: syncertest
> FullName: ./dune/common/parallel/test/syncertest
> Status: FAILED
>
> ======================================================================
> Name: syncertest-mpi-2
> FullName: ./dune/common/parallel/test/syncertest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: variablesizecommunicatortest
> FullName: ./dune/common/parallel/test/variablesizecommunicatortest
> Status: FAILED
>
> ======================================================================
> Name: variablesizecommunicatortest-mpi-2
> FullName: ./dune/common/parallel/test/variablesizecommunicatortest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpidatatest-mpi-2
> FullName: ./dune/common/parallel/test/mpidatatest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpifuturetest
> FullName: ./dune/common/parallel/test/mpifuturetest
> Status: FAILED
>
> ======================================================================
> Name: mpifuturetest-mpi-2
> FullName: ./dune/common/parallel/test/mpifuturetest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpipacktest-mpi-2
> FullName: ./dune/common/parallel/test/mpipacktest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpigatherscattertest-mpi-2
> FullName: ./dune/common/parallel/test/mpigatherscattertest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: arithmetictestsuitetest
> FullName: ./dune/common/test/arithmetictestsuitetest
> Status: FAILED
>
> ======================================================================
> Name: assertandreturntest
> FullName: ./dune/common/test/assertandreturntest
> Status: FAILED
>
> ======================================================================
> Name: assertandreturntest_ndebug
> FullName: ./dune/common/test/assertandreturntest_ndebug
> Status: FAILED
>
> ======================================================================
> Name: concept
> FullName: ./dune/common/test/concept
> Status: FAILED
>
> ======================================================================
> Name: debugaligntest
> FullName: ./dune/common/test/debugaligntest
> Status: FAILED
>
> ======================================================================
> Name: debugalignsimdtest
> FullName: ./dune/common/test/debugalignsimdtest
> Status: FAILED
>
> ======================================================================
> Name: mpicommunicationtest
> FullName: ./dune/common/test/mpicommunicationtest
> Status: FAILED
>
> ======================================================================
> Name: mpicommunicationtest-mpi-2
> FullName: ./dune/common/test/mpicommunicationtest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpiguardtest
> FullName: ./dune/common/test/mpiguardtest
> Status: FAILED
>
> ======================================================================
> Name: mpiguardtest-mpi-2
> FullName: ./dune/common/test/mpiguardtest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpihelpertest
> FullName: ./dune/common/test/mpihelpertest
> Status: FAILED
>
> ======================================================================
> Name: mpihelpertest-mpi-2
> FullName: ./dune/common/test/mpihelpertest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name:
mpihelpertest2 > FullName: ./dune/common/test/mpihelpertest2 > Status: FAILED > > ====================================================================== > Name: mpihelpertest2-mpi-2 > FullName: ./dune/common/test/mpihelpertest2-mpi-2 > Status: FAILED > > ====================================================================== > Name: referencehelpertest > FullName: ./dune/common/test/referencehelpertest > Status: FAILED > > JUnit report for CTest results written to > /<<PKGBUILDDIR>>/build/junit/cmake.xml > make[1]: *** [debian/dune-debian.mk:39: override_dh_auto_test] Error 1 The full build log is available from: http://qa-logs.debian.net/2024/04/20/dune-common_2.9.0-6_unstable-armhf.log All bugs filed during this archive rebuild are listed at: https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=ftbfs-20240420;users=lu...@debian.org or: https://udd.debian.org/bugs/?release=na&merged=ign&fnewerval=7&flastmodval=7&fusertag=only&fusertagtag=ftbfs-20240420&fusertaguser=lu...@debian.org&allbugs=1&cseverity=1&ctags=1&caffected=1#results A list of current common problems and possible solutions is available at http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute! If you reassign this bug to another package, please mark it as 'affects'-ing this package. See https://www.debian.org/Bugs/server-control#affects If you fail to reproduce this, please provide a build log and diff it with mine so that we can identify if something relevant changed in the meantime. -- debian-science-maintainers mailing list debian-science-maintainers@alioth-lists.debian.net https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/debian-science-maintainers