Re: [OMPI users] [EXTERNAL] hwloc support for Power9/IBM AC922 servers

2019-04-16 Thread Hammond, Simon David via users
Hi Prentice, We are using OpenMPI and HWLOC on POWER9 servers. The topology information looks good from our initial use. Let me know if you need anything specifically. S. — Si Hammond Scalable Computer Architectures Sandia National Laboratories, NM > On Apr 16, 2019, at 11:28 AM, Prentice Bis…
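The usual quick check on such a system is lstopo, but for verifying the topology programmatically, here is a minimal sketch using the hwloc C API (assuming hwloc >= 1.11 for HWLOC_OBJ_PACKAGE; compile with -lhwloc; not taken from the original post):

    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;

        /* Discover the topology of the local machine. */
        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        /* Report package, core, and PU (hardware thread) counts. */
        printf("packages: %d\n",
               hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PACKAGE));
        printf("cores:    %d\n",
               hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE));
        printf("PUs:      %d\n",
               hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU));

        hwloc_topology_destroy(topology);
        return 0;
    }

On an SMT-4 POWER9 part the PU count should be four times the core count, which is an easy sanity check against what lstopo reports.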

Re: [OMPI users] [EXTERNAL] Re: MPI_Reduce_Scatter Segmentation Fault with Intel 2019 Update 1 Compilers on OPA-1

2018-12-05 Thread Hammond, Simon David via users
…ks/pull/11.patch Cheers, Gilles On 12/4/2018 4:41 AM, Hammond, Simon David via users wrote: > Hi Open MPI Users, > > Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0 when using the Intel 2019 Update 1 compi…

[OMPI users] MPI_Reduce_Scatter Segmentation Fault with Intel 2019 Update 1 Compilers on OPA-1

2018-12-03 Thread Hammond, Simon David via users
Hi Open MPI Users, Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0 when using the Intel 2019 Update 1 compilers on our Skylake/OmniPath-1 cluster. The bug occurs when running the GitHub master src_c variant of the Intel MPI Benchmarks. Configuration: ./configure --prefi…
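For reference, a minimal sketch of the MPI_Reduce_scatter call pattern that the Reduce_scatter benchmark exercises (this is not the IMB source; the counts, datatype, and buffer sizes are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes 'count' doubles per destination rank
           and receives back the reduced block destined for it. */
        const int count = 1024;
        int *recvcounts = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            recvcounts[i] = count;

        double *sendbuf = malloc((size_t)size * count * sizeof(double));
        double *recvbuf = malloc(count * sizeof(double));
        for (int i = 0; i < size * count; i++)
            sendbuf[i] = (double)rank;

        MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts,
                           MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("recvbuf[0] = %f\n", recvbuf[0]);

        free(sendbuf);
        free(recvbuf);
        free(recvcounts);
        MPI_Finalize();
        return 0;
    }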

[OMPI users] Providing an Initial CPU Affinity List to mpirun

2018-11-20 Thread Hammond, Simon David via users
Hi OpenMPI Users, I wonder if you can help us with a problem we are having when trying to force OpenMPI to use specific cores. We want to supply an initial CPU affinity list to mpirun and then have it select its appropriate binding from within that set. So for instance, to provide it with two c…
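When experimenting with this, it helps to see what affinity mask each rank actually ends up with. Below is a small Linux-only sketch (assuming glibc's sched_getaffinity and the CPU_* macros; not part of the original post) that each rank can run; mpirun's --report-bindings option gives a similar view from the launcher's side:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Query the affinity mask the launcher actually gave this process. */
        cpu_set_t mask;
        CPU_ZERO(&mask);
        sched_getaffinity(0, sizeof(mask), &mask);

        char cpus[4096] = "";
        for (int c = 0; c < CPU_SETSIZE; c++) {
            if (CPU_ISSET(c, &mask)) {
                char buf[16];
                snprintf(buf, sizeof(buf), " %d", c);
                strncat(cpus, buf, sizeof(cpus) - strlen(cpus) - 1);
            }
        }
        printf("rank %d bound to cpus:%s\n", rank, cpus);

        MPI_Finalize();
        return 0;
    }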

[OMPI users] ARM HPC Compiler 18.4.0 / OpenMPI 2.1.4 Hang for IMB All Reduce Test on 4 Ranks

2018-08-15 Thread Hammond, Simon David via users
Hi OpenMPI Users, I am compiling OpenMPI 2.1.4 with the ARM 18.4.0 HPC Compiler on our ARM ThunderX2 system. Configuration options below. For now, I am using the simplest configuration test we can use on our system. If I use the OpenMPI 2.1.4 which I have compiled and run a simple 4 rank run of…
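For the hang itself, the collective at issue reduces to the following shape; a minimal 4-rank reproducer sketch (not the IMB code; the message length and iteration count are illustrative):

    #include <stdio.h>
    #include <mpi.h>

    #define N 4096

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double in[N], out[N];
        for (int i = 0; i < N; i++)
            in[i] = (double)rank;

        /* Repeat the collective so an intermittent hang has a chance to show up. */
        for (int iter = 0; iter < 1000; iter++)
            MPI_Allreduce(in, out, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("done: out[0] = %f (expected %f)\n",
                   out[0], (double)(size * (size - 1)) / 2.0);

        MPI_Finalize();
        return 0;
    }

Running the compiled binary with mpirun -np 4 mirrors the 4-rank case described above.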

Re: [OMPI users] [EXTERNAL] Re: OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-07-02 Thread Hammond, Simon David via users
> On Jun 30, 2018, at 3:18 PM, Hammond, Simon David via users <users@lists.open-mpi.org> wrote: > > Nathan, > > Same issue with OpenMPI 3.1.1 on POWER9 with GCC 7.2.0 and CUDA9.2. > > S. > > -- > Si Hammond > Scalable Computer Architectures > …

Re: [OMPI users] [EXTERNAL] Re: OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-07-01 Thread Hammond, Simon David via users
…ly tarball for v3.1.x. Should be fixed. > On Jun 16, 2018, at 5:48 PM, Hammond, Simon David via users wrote: > > The output from the test in question is: > > Single thread test. Time: 0 s 10182 us 10 nsec/poppush > Atomics thread finished. Time: 0…

Re: [OMPI users] OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-06-16 Thread Hammond, Simon David via users
The output from the test in question is: Single thread test. Time: 0 s 10182 us 10 nsec/poppush Atomics thread finished. Time: 0 s 169028 us 169 nsec/poppush S. -- Si Hammond Scalable Computer Architectures Sandia National Laboratories, NM, USA [Sent from remote connection, excuse typos] 

[OMPI users] OpenMPI 3.1.0 Lock Up on POWER9 w/ CUDA9.2

2018-06-16 Thread Hammond, Simon David via users
Hi OpenMPI Team, We have recently updated an install of OpenMPI on a POWER9 system (configuration details below). We migrated from OpenMPI 2.1 to OpenMPI 3.1. We seem to have a symptom where code that ran before is now locking up and making no progress, getting stuck in wait-all operations. While…
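The wait-all pattern described here boils down to posting nonblocking operations and then blocking in MPI_Waitall; a minimal sketch of that shape (ring neighbours and message size are illustrative, not the application's actual communication):

    #include <stdio.h>
    #include <mpi.h>

    #define N 1024

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;

        double sendbuf[N], recvbuf[N];
        for (int i = 0; i < N; i++)
            sendbuf[i] = (double)rank;

        /* Post nonblocking send/recv to ring neighbours, then block in Waitall;
           this is the step where the application was reported to stop progressing. */
        MPI_Request reqs[2];
        MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        if (rank == 0)
            printf("ring exchange complete\n");

        MPI_Finalize();
        return 0;
    }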