This is RHEL 8 with Open MPI 4.1.5a1 on an HPC cluster compute node,
running under Singularity 3.7.1. The same error ("Data unpack would read
past end of buffer") is mentioned in an issue on the Open MPI GitHub page
<https://github.com/open-mpi/ompi/issues/4437> and on Stack Overflow
<https://stackoverflow.com/questions/65634590/data-unpack-would-read-past-end-of-buffer-in-file-util-show-help-c-at-line-501>,
where the suggestion is that setting --mca orte_base_help_aggregate 0
suppresses it.
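
If I understand that suggestion correctly, it would be applied to the
command below roughly like this (untested on my side; as far as I can
tell the parameter only disables the aggregation of help messages, which
is the code path where the show_help.c unpack happens):

mpirun --mca orte_base_help_aggregate 0 -np 8 snappyHexMesh -overwrite -parallel > snappyHexMesh.out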

ompi_info | head

                 Package: Open MPI root@c-141-88-1-005 Distribution
                Open MPI: 4.1.5a1
  Open MPI repo revision: v4.1.4-32-g5abd86c
   Open MPI release date: Sep 05, 2022
                Open RTE: 4.1.5a1
  Open RTE repo revision: v4.1.4-32-g5abd86c
   Open RTE release date: Sep 05, 2022
                    OPAL: 4.1.5a1
      OPAL repo revision: v4.1.4-32-g5abd86c
       OPAL release date: Sep 05, 2022
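
Since the help text further down says the intent is to use UCX for these
InfiniBand devices, I assume a quick way to check whether this Open MPI
build has UCX support compiled in would be something like:

ompi_info | grep -i ucx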



mpirun -np 8 -debug-devel -v snappyHexMesh -overwrite -parallel > snappyHexMesh.out

[g279:2750943] procdir: /tmp/ompi.g279.547289/pid.2750943/0/0

[g279:2750943] jobdir: /tmp/ompi.g279.547289/pid.2750943/0

[g279:2750943] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750943] top: /tmp/ompi.g279.547289

[g279:2750943] tmp: /tmp

[g279:2750943] sess_dir_cleanup: job session dir does not exist

[g279:2750943] sess_dir_cleanup: top session dir not empty - leaving

[g279:2750943] procdir: /tmp/ompi.g279.547289/pid.2750943/0/0

[g279:2750943] jobdir: /tmp/ompi.g279.547289/pid.2750943/0

[g279:2750943] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750943] top: /tmp/ompi.g279.547289

[g279:2750943] tmp: /tmp

[g279:2750943] [[20506,0],0] Releasing job data for [INVALID]

--------------------------------------------------------------------------

By default, for Open MPI 4.0 and later, infiniband ports on a device

are not used by default.  The intent is to use UCX for these devices.

You can override this policy by setting the btl_openib_allow_ib MCA
parameter

to true.


  Local host:              g279

  Local adapter:           mlx5_0

  Local port:              1


--------------------------------------------------------------------------
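
Following that help text, I see two obvious ways to react (both untested
here): either allow the openib BTL to use the InfiniBand port, or disable
openib and let UCX handle the device, assuming UCX support is actually
built into the container's Open MPI:

mpirun --mca btl_openib_allow_ib true -np 8 snappyHexMesh -overwrite -parallel
mpirun --mca pml ucx --mca btl ^openib -np 8 snappyHexMesh -overwrite -parallel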

[g279:2750947] procdir: /tmp/ompi.g279.547289/pid.2750943/1/0

[g279:2750947] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750947] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750947] top: /tmp/ompi.g279.547289

[g279:2750947] tmp: /tmp

[g279:2750949] procdir: /tmp/ompi.g279.547289/pid.2750943/1/2

[g279:2750949] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750949] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750949] top: /tmp/ompi.g279.547289

[g279:2750949] tmp: /tmp

--------------------------------------------------------------------------

WARNING: There was an error initializing an OpenFabrics device.


  Local host:   g279

  Local device: mlx5_0

--------------------------------------------------------------------------

[g279:2750948] procdir: /tmp/ompi.g279.547289/pid.2750943/1/1

[g279:2750948] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750948] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750948] top: /tmp/ompi.g279.547289

[g279:2750948] tmp: /tmp

[g279:2750953] procdir: /tmp/ompi.g279.547289/pid.2750943/1/4

[g279:2750953] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750953] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750953] top: /tmp/ompi.g279.547289

[g279:2750953] tmp: /tmp

[g279:2750950] procdir: /tmp/ompi.g279.547289/pid.2750943/1/3

[g279:2750950] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750950] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750950] top: /tmp/ompi.g279.547289

[g279:2750950] tmp: /tmp

[g279:2750954] procdir: /tmp/ompi.g279.547289/pid.2750943/1/5

[g279:2750954] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750954] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750954] top: /tmp/ompi.g279.547289

[g279:2750954] tmp: /tmp

[g279:2750955] procdir: /tmp/ompi.g279.547289/pid.2750943/1/6

[g279:2750955] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750955] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750955] top: /tmp/ompi.g279.547289

[g279:2750955] tmp: /tmp

  MPIR_being_debugged = 0

  MPIR_debug_state = 1

  MPIR_partial_attach_ok = 1

  MPIR_i_am_starter = 0

  MPIR_forward_output = 0

  MPIR_proctable_size = 8

  MPIR_proctable:

    (i, host, exe, pid) = (0, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750947)

    (i, host, exe, pid) = (1, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750948)

    (i, host, exe, pid) = (2, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750949)

    (i, host, exe, pid) = (3, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750950)

    (i, host, exe, pid) = (4, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750953)

    (i, host, exe, pid) = (5, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750954)

    (i, host, exe, pid) = (6, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750955)

    (i, host, exe, pid) = (7, g279,
/usr/lib/openfoam/openfoam2212/platforms/linux64GccDPInt32Opt/bin/snappyHexMesh,
2750956)

MPIR_executable_path: NULL

MPIR_server_arguments: NULL

[g279:2750956] procdir: /tmp/ompi.g279.547289/pid.2750943/1/7

[g279:2750956] jobdir: /tmp/ompi.g279.547289/pid.2750943/1

[g279:2750956] top: /tmp/ompi.g279.547289/pid.2750943

[g279:2750956] top: /tmp/ompi.g279.547289

[g279:2750956] tmp: /tmp

[g279:2750943] 7 more processes have sent help message
help-mpi-btl-openib.txt / ib port not selected

[g279:2750943] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
help / error messages

[g279:2750943] 7 more processes have sent help message
help-mpi-btl-openib.txt / error in device init

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: jobfam session dir not empty - leaving

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] Job UNKNOWN has launched

[g279:2750943] [[20506,0],0] Releasing job data for [20506,1]

[g279:2750943] sess_dir_finalize: proc session dir does not exist

[g279:2750943] sess_dir_finalize: job session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir does not exist

[g279:2750943] sess_dir_finalize: jobfam session dir does not exist

[g279:2750943] sess_dir_finalize: top session dir not empty - leaving

[g279:2750943] sess_dir_cleanup: job session dir does not exist

[g279:2750943] sess_dir_cleanup: top session dir not empty - leaving

[g279:2750943] [[20506,0],0] Releasing job data for [20506,0]

[g279:2750943] sess_dir_cleanup: job session dir does not exist

[g279:2750943] sess_dir_cleanup: top session dir not empty - leaving
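
Because this runs inside a Singularity container, I assume the same MCA
settings could also be passed through the environment instead of on the
mpirun line, via the OMPI_MCA_<name> variables (and, if mpirun is invoked
from outside the container, possibly with the SINGULARITYENV_ prefix so
they are visible inside), e.g.:

export OMPI_MCA_btl=^openib
export OMPI_MCA_pml=ucx
export OMPI_MCA_orte_base_help_aggregate=0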


Running mpirun without the debug and verbose options produces this:

[g279:2738279] [[24994,0],0] ORTE_ERROR_LOG: Data unpack would read past
end of buffer in file util/show_help.c at line 501

[g279:2738279] 7 more processes have sent help message
help-mpi-btl-openib.txt / ib port not selected

[g279:2738279] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
help / error messages

[g279:2738279] 6 more processes have sent help message
help-mpi-btl-openib.txt / error in device init

Mesh created. It is advised to check in paraview to confirm mesh of
porespace is reasonable before running flow


Are these just warnings?
