Hi John,

We're using this software on an HPC system, so I have to compile it from source.
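
For reference, the configure options that matter here seem to be the
Slurm/PMI ones discussed further down this thread; a minimal sketch of
such a build (the prefix is a placeholder, not my actual path):

    # sketch: Open MPI 4.0.3 built with Slurm + PMI2 support
    ./configure --prefix=/opt/openmpi/4.0.3 \
                --with-slurm \
                --with-pmi2
    make -j 8 && make install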

On 7/26/23 17:16, John Hearns wrote:
Another idiot question... Is there a Spack or EasyBuild recipe for this software?
Should help you get it built.
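
For example, a hypothetical Spack workflow (the recipe name is a guess,
so check that it exists first):

    spack list su2     # search for an SU2 recipe
    spack install su2  # build SU2 and its dependency chain, MPI included

EasyBuild has a similar search: eb --search SU2.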

On Wed, 26 Jul 2023, 10:27 Aziz Ogutlu via users, <users@lists.open-mpi.org> wrote:

    Hi Howard,

    I'm using the salloc+mpirun approach explained on the FAQ page
    you sent; this time I'm getting the error below:

    Caught signal 11 (Segmentation fault: address not mapped to object
    at address 0x30)
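
    For reference, the salloc+mpirun pattern I mean is roughly this
    (the node/task counts are only examples):

        # allocate resources, then let mpirun pick up the Slurm allocation
        salloc -p defq -N 1 --ntasks-per-node=4 -t 01:00:00
        mpirun -np 4 SU2_CFD config.cfg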

    On 7/25/23 19:34, Pritchard Jr., Howard wrote:

    Hi Aziz,

    Oh, I see you referenced the FAQ. That section of the FAQ
    discusses how to make the Open MPI 4 series (and older) job
    launcher “know” about the batch scheduler you are using.

    The relevant section for launching with srun is covered by this
    FAQ: https://www-lb.open-mpi.org/faq/?category=slurm

    Howard

    *From: *"Pritchard Jr., Howard" <howa...@lanl.gov>
    <mailto:howa...@lanl.gov>
    *Date: *Tuesday, July 25, 2023 at 8:26 AM
    *To: *Open MPI Users <users@lists.open-mpi.org>
    <mailto:users@lists.open-mpi.org>
    *Cc: *Aziz Ogutlu <aziz.ogu...@eduline.com.tr>
    <mailto:aziz.ogu...@eduline.com.tr>
    *Subject: *Re: [EXTERNAL] Re: [OMPI users] MPI_Init_thread error

    Hi Aziz,

    Did you include --with-pmi2 on your Open MPI configure line?
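
    One quick way to check is to look at the configure line recorded
    in the build itself; ompi_info prints it:

        # show how this Open MPI install was configured
        ompi_info | grep -i "configure command line"

        # list the PMI-related components that were actually built
        ompi_info --parsable | grep -i pmi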

    Howard

    *From: *users <users-boun...@lists.open-mpi.org> on behalf of
    Aziz Ogutlu via users <users@lists.open-mpi.org>
    *Organization: *Eduline Bilisim
    *Reply-To: *Open MPI Users <users@lists.open-mpi.org>
    *Date: *Tuesday, July 25, 2023 at 8:18 AM
    *To: *Open MPI Users <users@lists.open-mpi.org>
    *Cc: *Aziz Ogutlu <aziz.ogu...@eduline.com.tr>
    *Subject: *[EXTERNAL] Re: [OMPI users] MPI_Init_thread error

    Hi Gilles,

    Thank you for your response.

    When I run srun --mpi=list, I get only pmi2.

    When I run the command with the --mpi=pmi2 parameter, I get the same error.

    Open MPI supports Slurm automatically as of the 4.x series:
    https://www.open-mpi.org/faq/?category=building#build-rte

    On 7/25/23 12:55, Gilles Gouaillardet via users wrote:

        Aziz,

        When using direct launch (e.g. srun), Open MPI has to
        interact with SLURM.

        This is typically achieved via PMI2 or PMIx.

        You can run

        srun --mpi=list

        to list the available options on your system

        if PMIx is available, you can

        srun --mpi=pmix ...

        if only PMI2 is available, you need to make sure Open MPI was
        built with SLURM support (e.g. configure --with-slurm ...)

        and then

        srun --mpi=pmi2 ...
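
        You can also check what Slurm uses when --mpi is not given
        explicitly; the key below is standard Slurm config, the
        value is site-specific:

            scontrol show config | grep MpiDefault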

        Cheers,

        Gilles

        On Tue, Jul 25, 2023 at 5:07 PM Aziz Ogutlu via users
        <users@lists.open-mpi.org> wrote:

            Hi there all,

            We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster
            with Open MPI 4.0.3 + GCC 8.5.0.

            When we run the commands below to call SU2, we get the
            error message:

            $ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00 --pty bash -i

            $ module load su2/7.5.1

            $ SU2_CFD config.cfg

            *** An error occurred in MPI_Init_thread
            *** on a NULL communicator
            *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
            ***    and potentially your MPI job)

            [cnode003.hpc:17534] Local abort before MPI_INIT completed
            completed successfully, but am not able to aggregate error
            messages, and not able to guarantee that all other
            processes were killed!


--
Best regards,
Aziz Öğütlü

Eduline Bilişim Sanayi ve Ticaret Ltd. Şti.  www.eduline.com.tr
Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane -  İstanbul - Türkiye 34406
Tel : +90 212 324 60 61     Cep: +90 541 350 40 72
