Aziz,

When launching directly (e.g. with srun), Open MPI has to interact with SLURM.
This is typically achieved via PMI2 or PMIx.

You can run
srun --mpi=list
to list the options available on your system.
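
On a typical installation the output looks something like this (the exact
list depends on how SLURM was built; pmix only shows up if SLURM itself was
compiled against a PMIx library):

srun: MPI types are...
srun: none
srun: pmi2
srun: pmix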

If PMIx is available, you can run
srun --mpi=pmix ...
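
For example, applied to your SU2 case (just a sketch, reusing the partition
and resources from your interactive job):

module load su2/7.5.1
srun --mpi=pmix -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 SU2_CFD config.cfg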

If only PMI2 is available, you need to make sure Open MPI was built with
SLURM support (e.g. configure --with-slurm ...) and then run
srun --mpi=pmi2 ...
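
You can also check what your Open MPI build supports with ompi_info (just a
sketch; the exact component names differ between Open MPI versions):

# SLURM-related MCA components (plm, ras, ...) should be listed
# if Open MPI was configured with --with-slurm
ompi_info | grep -i slurm

# PMI/PMIx-related components hint at which srun --mpi= option will work
ompi_info | grep -i pmi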


Cheers,

Gilles

On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users <
users@lists.open-mpi.org> wrote:

> Hi there all,
> We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with OpenMPI 4.0.3 +
> gcc 8.5.0.
> When we run the commands below to call SU2, we get an error message:
>
> $ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i
> $ module load su2/7.5.1
> $ SU2_CFD config.cfg
>
> *** An error occurred in MPI_Init_thread
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [cnode003.hpc:17534] Local abort before MPI_INIT completed completed
> successfully, but am not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
>
> --
> Best regards,
> Aziz Öğütlü
>
> Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.  www.eduline.com.tr
> Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
> Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
> Tel : +90 212 324 60 61     Cep: +90 541 350 40 72
>
>
