Source: abinit
Version: 9.10.4-3
Severity: serious
Justification: FTBFS
Tags: trixie sid ftbfs
User: lu...@debian.org
Usertags: ftbfs-20240420 ftbfs-trixie ftbfs-t64-armhf
Hi,

During a rebuild of all packages in sid, your package failed to build on armhf.

Relevant part (hopefully):
> make[1]: Entering directory '/<<PKGBUILDDIR>>'
> (cd tests; ./runtests.py -t 1800 fast)
> stty: 'standard input': Inappropriate ioctl for device
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t00.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t00/t00.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t01.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t01/t01.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t02.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t02/t02.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> Running on ip-10-84-234-172 -- system Linux -- ncpus 4 -- Python 3.11.9 -- runtests.py-0.6.0
> Regenerating database...
> Saving database to /<<PKGBUILDDIR>>/tests/test_suite.cpkl
> Running 26 test(s) with MPI_procs: 1, py_nprocs: 1
> [TIP] runtests.py is using 1 CPUs but your architecture has 4 CPUs (including Hyper-Threading)
> You may want to use python processes to speed up the execution
> Use `runtests -jNUM` to run with NUM processes
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t00/t00.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t00/t00.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t00/t00.stderr
> returned exit_code: 1
>
> [fast][t00][np=1][run_etime: 0.04 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t00/t00.abo' [file=t00.abo]
> [fast][t00][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> --------------------------------------------------------------------------
> Sorry!  You were supposed to get help about:
>     pmix_init:startup:internal-failure
> But I couldn't open the help file:
>     /usr/share/pmix/help-pmix-runtime.txt: No such file or directory.  Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-172:3674301] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
> [ip-10-84-234-172:3674300] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-10-84-234-172:3674300] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-10-84-234-172:3674300] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
> No YAML Error found in: [fast][t00][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t01/t01.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t01/t01.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t01/t01.stderr
> returned exit_code: 1
>
> [fast][t01][np=1][run_etime: 0.03 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t01/t01.abo' [file=t01.abo]
> [fast][t01][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> [... identical PMIX / ORTE / MPI_Init startup failure, shown in full for t00 above ...]
> No YAML Error found in: [fast][t01][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t02/t02.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t02/t02.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t02/t02.stderr
> returned exit_code: 1
>
> [fast][t02][np=1][run_etime: 0.04 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t02/t02.abo' [file=t02.abo]
> [fast][t02][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t03.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t03.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t05.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> [... identical PMIX / ORTE / MPI_Init startup failure, shown in full for t00 above ...]
> No YAML Error found in: [fast][t02][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t03.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t03.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t03.stderr
> returned exit_code: 1
>
> [fast][t03][np=1][run_etime: 0.03 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t03.abo' [file=t03.abo]
> [fast][t03][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> [... identical PMIX / ORTE / MPI_Init startup failure, shown in full for t00 above ...]
> No YAML Error found in: [fast][t03][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05.stderr
> returned exit_code: 1
>
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05o_WFK does not exist, will try netcdf version
> [fast][t05][np=1][run_etime: 0.03 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05.abo' [file=t05.abo]
> [fast][t05][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t06.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t06.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning:
> [t07.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t07.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> [... identical PMIX / ORTE / MPI_Init startup failure, shown in full for t00 above ...]
> No YAML Error found in: [fast][t05][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t06.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t06.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t06.stderr
> returned exit_code: 1
>
> [fast][t06][np=1][run_etime: 0.03 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t06.abo' [file=t06.abo]
> [fast][t06][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> [... identical PMIX / ORTE / MPI_Init startup failure, shown in full for t00 above ...]
> No YAML Error found in: [fast][t06][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t07.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t07.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t07.stderr
> returned exit_code: 1
>
> [fast][t07][np=1][run_etime: 0.03 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t07.abo' [file=t07.abo]
> [fast][t07][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t08.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t09.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09.abo'
>   warnings.warn(('[{}] Something went wrong with this test:\n'
> [... identical PMIX / ORTE / MPI_Init startup failure, shown in full for t00 above ...]
> No YAML Error found in: [fast][t07][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08.stderr
> returned exit_code: 1
>
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08o_DEN does not exist, will try netcdf version
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08o_DEN does not exist, will try netcdf version
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08o_DEN does not exist, will try netcdf version
> [fast][t08][np=1][run_etime: 0.03 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08.abo' [file=t08.abo]
> [fast][t08][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> [... identical PMIX / ORTE / MPI_Init startup failure, shown in full for t00 above ...]
> No YAML Error found in: [fast][t08][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09.abi > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09.stdout 2> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09.stderr
> returned exit_code: 1
>
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09o_WFK does not exist, will try netcdf version
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09o_WFK does not exist, will try netcdf version
> [fast][t09][np=1][run_etime: 0.03 s]: Internal error: FileNotFoundError: [Errno 2] No such file or directory: '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09.abo' [file=t09.abo]
> [fast][t09][np=1] Test was not expected to fail but subprocesses returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t11.abo] Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t11.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t12.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t12.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
> pmix_init:startup:internal-failure
> But I couldn't open the help file:
> /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-172:3674336] PMIX ERROR: NOT-FOUND in file
> ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at
> line 237
> [ip-10-84-234-172:3674335] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to
> start a daemon on the local node in file
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-10-84-234-172:3674335] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to
> start a daemon on the local node in file
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127)
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-10-84-234-172:3674335] Local abort before MPI_INIT completed completed
> successfully, but am not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
>
> No YAML Error found in: [fast][t09][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t11.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t11.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t11.stderr
> returned exit_code: 1
>
> [fast][t11][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t11.abo'
> [file=t11.abo]
> [fast][t11][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t11][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t12.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t12.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t12.stderr
> returned exit_code: 1
>
> [fast][t12][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t12.abo'
> [file=t12.abo]
> [fast][t12][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t14.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t14.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t16.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t16.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t12][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t14.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t14.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t14.stderr
> returned exit_code: 1
>
> [fast][t14][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t14.abo'
> [file=t14.abo]
> [fast][t14][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t14][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t16.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t16.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t16.stderr
> returned exit_code: 1
>
> [fast][t16][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t16.abo'
> [file=t16.abo]
> [fast][t16][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t04.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t04/t04.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t17.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t16][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t04/t04.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t04/t04.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t04/t04.stderr
> returned exit_code: 1
>
> [fast][t04][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t04/t04.abo' [file=t04.abo]
> [fast][t04][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t04][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17.stderr
> returned exit_code: 1
>
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17o_WFK does
> not exist, will try netcdf version
> File /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17o_DEN does
> not exist, will try netcdf version
> [fast][t17][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17.abo'
> [file=t17.abo]
> [fast][t17][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t19.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t19.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t20.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t20.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t17][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t19.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t19.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t19.stderr
> returned exit_code: 1
>
> [fast][t19][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t19.abo'
> [file=t19.abo]
> [fast][t19][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t19][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t20.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t20.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t20.stderr
> returned exit_code: 1
>
> [fast][t20][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t20.abo'
> [file=t20.abo]
> [fast][t20][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t21.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t21.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t23.abo]
> Something went wrong with this test:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t23.abo'
>
> warnings.warn(('[{}] Something went wrong with this test:\n'
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
>
> No YAML Error found in: [fast][t20][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t21.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t21.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t21.stderr
> returned exit_code: 1
>
> [fast][t21][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t21.abo'
> [file=t21.abo]
> [fast][t21][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> [... identical PMIX/ORTE/MPI_Init error block elided ...]
This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-10-84-234-172:3674382] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > No YAML Error found in: [fast][t21][np=1] > Command /<<PKGBUILDDIR>>/src/98_main/abinit > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t23.abi > > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t23.stdout 2> > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t23.stderr > returned exit_code: 1 > > [fast][t23][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t23.abo' > [file=t23.abo] > [fast][t23][np=1] Test was not expected to fail but subprocesses > returned retcode: 1 > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t24.abo] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t24/t24.abo' > > warnings.warn(('[{}] Something went wrong with this test:\n' > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t25.abo] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t25/t25.abo' > > warnings.warn(('[{}] Something went wrong with this test:\n' >
No YAML Error found in: [fast][t23][np=1] > Command /<<PKGBUILDDIR>>/src/98_main/abinit > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t24/t24.abi > > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t24/t24.stdout 2> > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t24/t24.stderr > returned exit_code: 1 > > [fast][t24][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t24/t24.abo' [file=t24.abo] > [fast][t24][np=1] Test was not expected to fail but subprocesses > returned retcode: 1 >
No YAML Error found in: [fast][t24][np=1] > Command /<<PKGBUILDDIR>>/src/98_main/abinit > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t25/t25.abi > > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t25/t25.stdout 2> > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t25/t25.stderr > returned exit_code: 1 > > [fast][t25][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t25/t25.abo' [file=t25.abo] > [fast][t25][np=1] Test was not expected to fail but subprocesses > returned retcode: 1 > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t26.abo] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t26/t26.abo' > > warnings.warn(('[{}] Something went wrong with this test:\n' > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t27.abo] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t27.abo' > > warnings.warn(('[{}] Something went wrong with this test:\n' >
No YAML Error found in: [fast][t25][np=1] > Command /<<PKGBUILDDIR>>/src/98_main/abinit > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t26/t26.abi > > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t26/t26.stdout 2> > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t26/t26.stderr > returned exit_code: 1 > > [fast][t26][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t26/t26.abo' [file=t26.abo] > [fast][t26][np=1] Test was not expected to fail but subprocesses > returned retcode: 1 >
No YAML Error found in: [fast][t26][np=1] > Command /<<PKGBUILDDIR>>/src/98_main/abinit > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t27.abi > > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t27.stdout 2> > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t27.stderr > returned exit_code: 1 > > [fast][t27][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t27.abo' > [file=t27.abo] > [fast][t27][np=1] Test was not expected to fail but subprocesses > returned retcode: 1 > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t28.abo] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28.abo' > > warnings.warn(('[{}] Something went wrong with this test:\n' > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t28o_TIM2_GEO] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28o_TIM2_GEO' > > warnings.warn(('[{}] Something went wrong with this test:\n' > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t29.abo] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29.abo' > > warnings.warn(('[{}] Something went wrong with this test:\n' > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t29o_TIM8_GEO] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29o_TIM8_GEO' > > warnings.warn(('[{}] Something went wrong with this test:\n' >
No YAML Error found in: [fast][t27][np=1] > Command /<<PKGBUILDDIR>>/src/98_main/abinit > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28.abi > > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28.stdout 2> > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28.stderr > returned exit_code: 1 > > [fast][t28][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28.abo' > [file=t28.abo] > [fast][t28][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28o_TIM2_GEO' > [file=t28o_TIM2_GEO] > [fast][t28][np=1] Test was not expected to fail but subprocesses > returned retcode: 1 >
No YAML Error found in: [fast][t28][np=1] > Command /<<PKGBUILDDIR>>/src/98_main/abinit > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29.abi > > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29.stdout 2> > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29.stderr > returned exit_code: 1 > > [fast][t29][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29.abo' > [file=t29.abo] > [fast][t29][np=1][run_etime: 0.03 s]: Internal error: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29o_TIM8_GEO' > [file=t29o_TIM8_GEO] > [fast][t29][np=1] Test was not expected to fail but subprocesses > returned retcode: 1 > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:400: UserWarning: [t30.abo] > Something went wrong with this test: > FileNotFoundError: [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t30/t30.abo' > > warnings.warn(('[{}] Something went wrong with this test:\n' > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t00/t00.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t00/t00.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t01/t01.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t01/t01.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t02/t02.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t02/t02.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding
/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t14.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t14.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t03.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t03.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t08.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t12.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t12.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t06.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t06.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t16.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t16.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: 
UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t11.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t11.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t07.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t07.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t05.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding > /<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09.abo > to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t03-t05-t06-t07-t08-t09-t11-t12-t14-t16/t09.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t04/t04.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t04/t04.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t20.abo to > tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t20.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding 
/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t21.abo to > tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t21.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t23.abo to > tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t23.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17.abo to > tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t17.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t19.abo to > tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t17-t19-t20-t21-t23/t19.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t24/t24.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t24/t24.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t25/t25.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t25/t25.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t26/t26.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t26/t26.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding 
/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29o_TIM8_GEO to > tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29o_TIM8_GEO' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t29.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t27.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t27.abo' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28o_TIM2_GEO to > tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t27-t28-t29/t28o_TIM2_GEO' > warnings.warn( > /<<PKGBUILDDIR>>/tests/pymods/testsuite.py:3780: UserWarning: exception while > adding /<<PKGBUILDDIR>>/tests/Test_suite/fast_t30/t30.abo to tarball: > [Errno 2] No such file or directory: > '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t30/t30.abo' > warnings.warn( > [31m-------------------------------------------------------------------------- > Sorry! You were supposed to get help about: > pmix_init:startup:internal-failure > But I couldn't open the help file: > /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry! 
> --------------------------------------------------------------------------
> [ip-10-84-234-172:3674419] PMIX ERROR: NOT-FOUND in file
> ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at
> line 237
> [ip-10-84-234-172:3674418] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to
> start a daemon on the local node in file
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-10-84-234-172:3674418] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to
> start a daemon on the local node in file
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127)
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.
This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-10-84-234-172:3674418] Local abort before MPI_INIT completed completed
> successfully, but am not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
> No YAML Error found in: [fast][t29][np=1]
> Command /<<PKGBUILDDIR>>/src/98_main/abinit
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t30/t30.abi >
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t30/t30.stdout 2>
> /<<PKGBUILDDIR>>/tests/Test_suite/fast_t30/t30.stderr
> returned exit_code: 1
>
> [fast][t30][np=1][run_etime: 0.03 s]: Internal error:
> FileNotFoundError: [Errno 2] No such file or directory:
> '/<<PKGBUILDDIR>>/tests/Test_suite/fast_t30/t30.abo' [file=t30.abo]
> [fast][t30][np=1] Test was not expected to fail but subprocesses
> returned retcode: 1
> --------------------------------------------------------------------------
> Sorry! You were supposed to get help about:
> pmix_init:startup:internal-failure
> But I couldn't open the help file:
> /usr/share/pmix/help-pmix-runtime.txt: No such file or directory. Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-172:3674426] PMIX ERROR: NOT-FOUND in file
> ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at
> line 237
> [ip-10-84-234-172:3674425] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to
> start a daemon on the local node in file
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-10-84-234-172:3674425] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to
> start a daemon on the local node in file
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127)
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.
This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-10-84-234-172:3674425] Local abort before MPI_INIT completed completed
> successfully, but am not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
> No YAML Error found in: [fast][t30][np=1]
>
> Suite   failed  passed  succeeded  skipped  disabled  run_etime  tot_etime
> fast        11       0          0        0         0       0.90       0.92
>
> Completed in 1.04 [s]. Average time for test=0.08 [s], stdev=0.09 [s]
> Summary: failed=11, succeeded=0, passed=0, skipped=0, disabled=0
>
> Execution completed.
> Results in HTML format are available in Test_suite/suite_report.html
> make[1]: *** [debian/rules:91: override_dh_auto_test-arch] Error 11

The full build log is available from:
http://qa-logs.debian.net/2024/04/20/abinit_9.10.4-3_unstable-armhf.log

All bugs filed during this archive rebuild are listed at:
https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=ftbfs-20240420;users=lu...@debian.org
or:
https://udd.debian.org/bugs/?release=na&merged=ign&fnewerval=7&flastmodval=7&fusertag=only&fusertagtag=ftbfs-20240420&fusertaguser=lu...@debian.org&allbugs=1&cseverity=1&ctags=1&caffected=1#results

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as 'affects'-ing
this package.
See https://www.debian.org/Bugs/server-control#affects

If you fail to reproduce this, please provide a build log and diff it with
mine so that we can identify if something relevant changed in the meantime.