Re: [OMPI users] Segmentation Fault when using OpenMPI 1.10.6 and PGI 17.1.0 on POWER8

2017-02-21 Thread r...@open-mpi.org
Can you provide a backtrace with line numbers from a debug build? We don’t get much testing with LSF, so it is quite possible there is a bug in there. On Feb 21, 2017, at 7:39 PM, Hammond, Simon David (-EXP) wrote: Hi OpenMPI Users, Has anyone successfully…
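A minimal sketch of one way to produce such a backtrace (the prefix, application name, and process count are placeholders, not from the thread): rebuild Open MPI with --enable-debug so line-number information is kept, reproduce the crash with core dumps enabled, and read the core file in gdb.

    # Rebuild Open MPI with debug symbols (paths are illustrative).
    ./configure --enable-debug --with-lsf=.. --prefix=$HOME/ompi-debug
    make -j install

    ulimit -c unlimited      # allow core files in the job's shell
    mpirun -np 2 ./a.out     # reproduce the segfault at MPI_Finalize
    gdb ./a.out core         # load the core file
    (gdb) bt                 # backtrace with file:line information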

[OMPI users] Segmentation Fault when using OpenMPI 1.10.6 and PGI 17.1.0 on POWER8

2017-02-21 Thread Hammond, Simon David (-EXP)
Hi OpenMPI Users, Has anyone successfully tested OpenMPI 1.10.6 with PGI 17.1.0 on POWER8 with the LSF scheduler (--with-lsf=..)? I am getting this error when the code hits MPI_Finalize. It causes the job to abort (i.e. exit the LSF session) when I am running interactively. Are there any…

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-02-21 Thread Jeff Hammond
This is fine if each thread interacts with a different window, no? Jeff. On Sun, Feb 19, 2017 at 5:32 PM Nathan Hjelm wrote: You cannot perform synchronization at the same time as communication on the same target. This means if one thread is in…
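A minimal sketch of the pattern Jeff describes (buffer sizes, the passive-target locking, and the thread count are illustrative assumptions, not code from the thread): each thread owns its own window, so its MPI_Win_flush never overlaps another thread's communication on the same window.

    /* Sketch: one MPI window per thread under MPI_THREAD_MULTIPLE.
     * Compile with a threading-capable MPI, e.g.:
     *   mpicc -fopenmp per_thread_windows.c -o per_thread_windows */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define NTHREADS 2

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* One window (and one exposed double) per thread. */
        double buf[NTHREADS];
        MPI_Win win[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            buf[t] = 0.0;
            MPI_Win_create(&buf[t], sizeof(double), sizeof(double),
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win[t]);
            MPI_Win_lock_all(0, win[t]);   /* passive target on all ranks */
        }

        int target = (rank + 1) % nranks;
    #pragma omp parallel num_threads(NTHREADS)
        {
            int t = omp_get_thread_num();
            double val = rank * 100.0 + t;
            /* Each thread puts and flushes only on its own window, so no
             * two threads mix synchronization and communication on the
             * same window. */
            MPI_Put(&val, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win[t]);
            MPI_Win_flush(target, win[t]);
        }

        for (int t = 0; t < NTHREADS; t++) {
            MPI_Win_unlock_all(win[t]);
            MPI_Win_free(&win[t]);
        }
        MPI_Finalize();
        return 0;
    }

With a single shared window, one thread could be inside MPI_Win_flush while another issues MPI_Put to the same target, which is the combination Nathan says is not allowed.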

Re: [OMPI users] Severe performance issue with PSM2 and single-node CP2K jobs

2017-02-21 Thread Iliev, Hristo
Hi Jingchao, My bad, I should have read your thread more closely. The problem is indeed that CP2K calls MPI_Alloc_mem to allocate memory for practically everything, all the time. This somehow managed to escape our earlier profiling runs, perhaps because we were too focused on finding…
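An illustrative sketch of why this pattern can hurt (not CP2K code; the sizes and loop counts are made up): MPI_Alloc_mem may pin and register the memory with the network layer, so paying that cost inside a hot loop can dominate runtime, whereas allocating once and reusing the buffer pays it a single time.

    /* Sketch: reuse one MPI_Alloc_mem buffer vs. allocating per iteration. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        const MPI_Aint nbytes = 1 << 20;   /* 1 MiB, arbitrary for the example */
        double *buf;

        /* Preferred: one allocation reused across iterations. */
        MPI_Alloc_mem(nbytes, MPI_INFO_NULL, &buf);
        for (int iter = 0; iter < 1000; iter++) {
            /* ... work with buf ... */
        }
        MPI_Free_mem(buf);

        /* Anti-pattern resembling the reported behavior: allocate and free
         * inside the loop, paying any registration cost 1000 times. */
        for (int iter = 0; iter < 1000; iter++) {
            MPI_Alloc_mem(nbytes, MPI_INFO_NULL, &buf);
            MPI_Free_mem(buf);
        }

        MPI_Finalize();
        return 0;
    }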

Re: [OMPI users] Problem with MPI_Comm_spawn using openmpi 2.0.x + sbatch

2017-02-21 Thread Jing Gong
Hi, this email is intended to follow up on the thread about "Problem with MPI_Comm_spawn using openmpi 2.0.x + sbatch": https://mail-archive.com/users@lists.open-mpi.org/msg30650.html We have installed the latest version, v2.0.2, on the cluster that…
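For context, a minimal MPI_Comm_spawn sketch (the child binary name "./worker" and the process count are hypothetical, not from the thread). Under sbatch the spawned processes must fit inside the existing SLURM allocation, which is where such setups commonly fail.

    /* Sketch: spawn 2 child processes from all ranks of MPI_COMM_WORLD.
     * The "./worker" binary (hypothetical) must itself call MPI_Init and
     * MPI_Comm_get_parent to talk back to the parent job. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Comm intercomm;
        int errcodes[2];
        /* Collective over MPI_COMM_WORLD; root rank 0 launches the children.
         * errcodes reports per-process launch failures. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &intercomm, errcodes);

        MPI_Comm_disconnect(&intercomm);
        MPI_Finalize();
        return 0;
    }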