On Jun 8, 2018, at 11:38 AM, Bennet Fauber wrote:
> Hmm. Maybe I had insufficient error checking in our installation process.
>
> Can you make and make install after the configure fails? I somehow got an
> installation, despite the configure status, perhaps?

If it's a fresh tarball expansion [...]
Jeff,

Hmm. Maybe I had insufficient error checking in our installation process.

Can you make and make install after the configure fails? I somehow got an
installation, despite the configure status, perhaps?

-- bennet

On Fri, Jun 8, 2018 at 11:32 AM Jeff Squyres (jsquyres) via users <users [...]
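Since the question above is whether an install could slip through after a failed configure, here is a minimal sketch of a build script that stops at the first error; the tarball name and prefix are placeholders, not taken from the thread:

  #!/bin/sh
  set -e                                   # abort on the first failing command
  tar xf openmpi-3.1.0.tar.bz2             # tarball name is an assumption
  cd openmpi-3.1.0
  ./configure --prefix=/opt/openmpi/3.1.0  # prefix is a placeholder
  make
  make install

With set -e, a failed configure can no longer be followed by a make install that picks up leftovers from an earlier, successful configure.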
Hmm. I'm confused -- can we clarify?

I just tried configuring Open MPI v3.1.0 on a RHEL 7.4 system with the RHEL
hwloc RPM installed, but *not* the hwloc-devel RPM. Hence, no hwloc.h (for
example).

When specifying an external hwloc, configure did fail, as expected:

$ ./configure --with[...]
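A rough sketch of the scenario Jeff describes; the --with-hwloc form and the path below are assumptions, not his actual command line:

  # confirm which hwloc packages are present; hwloc-devel is what provides hwloc.h
  rpm -q hwloc hwloc-devel
  # point Open MPI at the system (external) hwloc; with hwloc.h missing,
  # configure is expected to fail
  ./configure --with-hwloc=/usr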
On Jun 8, 2018, at 8:10 AM, Bennet Fauber wrote:
Further testing shows that it was the failure to find the hwloc-devel files
that seems to be the cause of the failure. I compiled and ran without the
additional configure flags, and it still seems to work.

I think it issued a two-line warning about this. Is that something that
should result in a [...]
>> [...], run the test and provide the content of slurmd.log?
>
> I will reply separately with this, as I have to coordinate with the
> cluster administrator, who is not in yet.
>
> Please note, also, that I was able to build this successfully after
> installing the hwloc-devel package [...]
I will reply separately with this, as I have to coordinate with the cluster
administrator, who is not in yet.

Please note, also, that I was able to build this successfully after installing
the hwloc-devel package and adding the --disable-dlopen and
--enable-shared options to configure.

Thanks, -- bennet
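For concreteness, a minimal sketch of the rebuild Bennet describes; the install prefix is a placeholder, and installing the package requires root:

  # install the hwloc development headers named in the thread
  yum install hwloc-devel
  # prefix is a placeholder; the two extra flags are the ones from the thread
  ./configure --prefix=/opt/openmpi/3.1.0 --disable-dlopen --enable-shared
  make -j 8 && make install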
-----------------------------
Date: Thu, 7 Jun 2018 08:05:30 -0700
From: "r...@open-mpi.org"
To: Open MPI Users
Subject: Re: [OMPI users] Fwd: OpenMPI 3.1.0 on aarch64

Odd - Artem, do [...]
I rebuilt and examined the logs more closely. There was a warning
about a failure with the external hwloc, and that led to finding that
the CentOS hwloc-devel package was not installed.

I also added the options that we have been using for a while,
--disable-dlopen and --enable-shared, to the configure [...]
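If the question is what configure actually decided about hwloc, one low-tech check (assuming the build directory still holds the autoconf-generated config.log) is:

  # look for the hwloc-related checks and the warning in the configure log
  grep -i hwloc config.log | head -n 20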
Odd - Artem, do you have any suggestions?

> On Jun 7, 2018, at 7:41 AM, Bennet Fauber wrote:
>
> Thanks, Ralph,
>
> I just tried it with
>
>    srun --mpi=pmix_v2 ./test_mpi
>
> and got these messages
>
> srun: Step created for job 89
> [cav02.arc-ts.umich.edu:92286] PMIX ERROR: OUT-OF-RESOURCE [...]
Thanks, Ralph,

I just tried it with

    srun --mpi=pmix_v2 ./test_mpi

and got these messages

srun: Step created for job 89
[cav02.arc-ts.umich.edu:92286] PMIX ERROR: OUT-OF-RESOURCE in file
client/pmix_client.c at line 234
[cav02.arc-ts.umich.edu:92286] OPAL ERROR: Error in file
pmix2x_client.c [...]
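One way to cross-check what PMIx support this Open MPI build contains (ompi_info ships with Open MPI; the grep is only a convenience):

  # list the PMIx-related MCA components compiled into this installation
  ompi_info | grep -i pmix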
I think you need to set your MPIDefault to pmix_v2 since you are using a PMIx
v2 library.

> On Jun 7, 2018, at 6:25 AM, Bennet Fauber wrote:
>
> Hi, Ralph,
>
> Thanks for the reply, and sorry for the missing information. I hope
> this fills in the picture better.
>
> $ srun --version
> slurm 17.11.7 [...]
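A minimal sketch of the two ways to act on that suggestion; the slurm.conf location varies by site and is shown only as a comment:

  # cluster-wide default, set by the administrator in slurm.conf:
  #   MpiDefault=pmix_v2
  # or per job, without changing the cluster default:
  srun --mpi=pmix_v2 ./test_mpi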
Hi, Ralph,

Thanks for the reply, and sorry for the missing information. I hope
this fills in the picture better.

$ srun --version
slurm 17.11.7

$ srun --mpi=list
srun: MPI types are...
srun: pmix_v2
srun: openmpi
srun: none
srun: pmi2
srun: pmix

We have pmix configured as the default in /opt/s[...]
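For completeness, the effective default can also be read back from the running slurmctld rather than from the configuration file (scontrol is part of Slurm):

  # print the MpiDefault value the cluster is actually using
  scontrol show config | grep -i MpiDefault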
You didn’t show your srun direct launch cmd line or what version of Slurm is
being used (and how it was configured), so I can only provide some advice. If
you want to use PMIx, then you have to do two things:

1. Slurm must be configured to use PMIx - depending on the version, that might
be there [...]
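As a rough sketch of what "Slurm configured to use PMIx" usually involves; every path below is a placeholder, and the exact options depend on the Slurm and PMIx versions in use:

  # build Slurm against an external PMIx installation
  ./configure --prefix=/opt/slurm --with-pmix=/opt/pmix/2.1
  # build Open MPI against the same PMIx so srun and the MPI library agree
  ./configure --prefix=/opt/openmpi/3.1.0 --with-pmix=/opt/pmix/2.1
  # afterwards the plugin should show up in:
  srun --mpi=list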