Hi Prentice,
We are using OpenMPI and HWLOC on POWER9 servers. The topology information
looks correct based on our initial use.
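For reference, our "initial use" is little more than a sanity check of what
hwloc reports on each node. A minimal sketch of that kind of check (assuming
hwloc >= 1.11 headers are available; illustrative only, not our exact code):

/* check_topo.c: print object counts from the hwloc topology.
 * Build with something like: cc check_topo.c -lhwloc -o check_topo */
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topo;

    hwloc_topology_init(&topo);   /* create the topology context */
    hwloc_topology_load(&topo);   /* discover the machine */

    /* Count packages, cores, and hardware threads (PUs). */
    printf("packages: %d\n", hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PACKAGE));
    printf("cores:    %d\n", hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE));
    printf("PUs:      %d\n", hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU));

    hwloc_topology_destroy(topo);
    return 0;
}

hwloc's lstopo tool reports the same information if you would rather not
build anything.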
Let me know if you need anything specifically.
S.
—
Si Hammond
Scalable Computer Architectures
Sandia National Laboratories, NM
> On Apr 16, 2019, at 11:28 AM, Prentice Bis
ks/pull/11.patch
Cheers,
Gilles
On 12/4/2018 4:41 AM, Hammond, Simon David via users wrote:
> Hi Open MPI Users,
>
> Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0
when using the Intel 2019 Update 1 compi
Hi Open MPI Users,
Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0 when
using the Intel 2019 Update 1 compilers on our Skylake/OmniPath-1 cluster. The
bug occurs when running the Github master src_c variant of the Intel MPI
Benchmarks.
Configuration:
./configure --prefi
Hi OpenMPI Users,
I wonder if you can help us with a problem we are having when trying to force
OpenMPI to use specific cores. We want to supply an initial CPU affinity list
to mpirun and then have it select its appropriate binding from within that set.
So for instance, to provide it with two c
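To make the question concrete, below is a rough sketch of a binding-report
program that shows where each rank actually lands, which is how we check
whether the binding stayed inside the supplied list (Linux-only,
illustrative, not part of OpenMPI):

/* report_binding.c: each rank prints the cores it is currently allowed to
 * run on, so we can check that the binding stayed inside the affinity list
 * supplied to mpirun. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    cpu_set_t mask;
    char cores[8192] = "";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Query this process's current CPU affinity mask. */
    CPU_ZERO(&mask);
    sched_getaffinity(0, sizeof(mask), &mask);

    /* Build a printable list of the cores present in the mask. */
    for (int c = 0; c < CPU_SETSIZE; c++) {
        if (CPU_ISSET(c, &mask)) {
            char buf[16];
            snprintf(buf, sizeof(buf), " %d", c);
            strncat(cores, buf, sizeof(cores) - strlen(cores) - 1);
        }
    }

    printf("rank %d allowed on cores:%s\n", rank, cores);

    MPI_Finalize();
    return 0;
}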
Hi OpenMPI Users,
I am compiling OpenMPI 2.1.4 with the ARM 18.4.0 HPC Compiler on our ARM
ThunderX2 system. Configuration options below. For now, I am using the simplest
configuration test we can use on our system.
If I use the OpenMPI 2.1.4 that I have compiled and run a simple 4-rank run of
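For reference, the simple test in question is nothing more elaborate than a
hello-world; a sketch along those lines (illustrative, not the exact source):

/* hello.c: each rank reports its rank, the world size, and its host. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}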
> On Jun 30, 2018, at 3:18 PM, Hammond, Simon David via users
> <users@lists.open-mpi.org> wrote:
>
> Nathan,
>
> Same issue with OpenMPI 3.1.1 on POWER9 with GCC 7.2.0 and CUDA9.2.
>
> S.
>
> --
> Si Hammond
> Scalable Computer Architectures
>
ly tarball for v3.1.x. Should be fixed.
> On Jun 16, 2018, at 5:48 PM, Hammond, Simon David via users wrote:
>
> The output from the test in question is:
>
> Single thread test. Time: 0 s 10182 us 10 nsec/poppush
> Atomics thread finished. Time: 0
The output from the test in question is:
Single thread test. Time: 0 s 10182 us 10 nsec/poppush
Atomics thread finished. Time: 0 s 169028 us 169 nsec/poppush
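(For anyone reading along: the figures are presumably nanoseconds per
pop+push pair on a lock-free list. A rough single-threaded C11 sketch of
that style of measurement, purely illustrative and not the actual Open MPI
test code:)

/* poppush_demo.c: time a tight pop+push loop on a Treiber-style stack and
 * report nanoseconds per iteration. Single-threaded, illustrative only. */
#define _POSIX_C_SOURCE 199309L
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

typedef struct node { struct node *next; } node_t;

static _Atomic(node_t *) top;

static void push(node_t *n)
{
    node_t *old = atomic_load(&top);
    do {
        n->next = old;
    } while (!atomic_compare_exchange_weak(&top, &old, n));
}

static node_t *pop(void)
{
    node_t *old = atomic_load(&top);
    while (old && !atomic_compare_exchange_weak(&top, &old, old->next))
        ;
    return old;
}

int main(void)
{
    enum { ITERS = 1000000 };
    node_t item;
    struct timespec t0, t1;

    push(&item);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        push(pop());                    /* one pop+push pair per iteration */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double nsec = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f nsec/poppush\n", nsec / ITERS);
    return 0;
}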
S.
--
Si Hammond
Scalable Computer Architectures
Sandia National Laboratories, NM, USA
[Sent from remote connection, excuse typos]
Hi OpenMPI Team,
We have recently updated an install of OpenMPI on a POWER9 system (configuration
details below). We migrated from OpenMPI 2.1 to OpenMPI 3.1. We seem to have a
symptom where code that ran before is now locking up and making no progress,
getting stuck in wait-all operations. While
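For context, the wait-all calls in question sit at the end of an ordinary
nonblocking exchange. A rough sketch of that general shape of communication
(heavily simplified; the ring pattern below is purely illustrative, not the
real application code):

/* ring.c: each rank posts a nonblocking send/receive to its neighbours
 * and then blocks in MPI_Waitall until both requests complete. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, sendbuf, recvbuf;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    sendbuf = rank;

    /* Post the exchange, then wait for both requests to finish. */
    MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d received %d\n", rank, recvbuf);
    MPI_Finalize();
    return 0;
}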