Can you provide a backtrace with line numbers from a debug build? We don't get
much testing with LSF, so it is quite possible there is a bug in there.
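For readers who want to produce such a backtrace, a debug build and core inspection would look roughly like the following. The install prefix, application name, and LSF path are assumptions for illustration, not details from this thread:

```shell
# Configure Open MPI 1.10.6 with debugging symbols (paths are hypothetical).
./configure --prefix="$HOME/ompi-1.10.6-debug" \
    --enable-debug \
    --with-lsf=/path/to/lsf    # hypothetical LSF install location
make -j 8 && make install

# Rebuild the application with -g, allow core dumps, and reproduce the
# MPI_Finalize failure; then print the backtrace with line numbers.
ulimit -c unlimited
mpirun -np 2 ./my_app
gdb --batch -ex bt ./my_app core
```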
> On Feb 21, 2017, at 7:39 PM, Hammond, Simon David (-EXP) wrote:
>
> Hi OpenMPI Users,
>
> Has anyone successfully
Hi OpenMPI Users,
Has anyone successfully tested OpenMPI 1.10.6 with PGI 17.1.0 on POWER8 with
the LSF scheduler (--with-lsf=..)?
I am getting this error when the code hits MPI_Finalize. It causes the job to
abort (i.e. exit the LSF session) when I am running interactively.
Are there any
This is fine if each thread interacts with a different window, no?
Jeff
On Sun, Feb 19, 2017 at 5:32 PM Nathan Hjelm wrote:
> You can not perform synchronization at the same time as communication on
> the same target. This means if one thread is in
>
Hi Jingchao,
My bad, I should have read your thread more closely. The problem is indeed
that CP2K calls MPI_Alloc_mem to allocate memory for practically everything,
and does so all the time. This somehow managed to escape our earlier profiling
runs, perhaps because we were too concentrated on finding
Hi,
This email follows up on the thread
"Problem with MPI_Comm_spawn using openmpi 2.0.x + sbatch":
https://mail-archive.com/users@lists.open-mpi.org/msg30650.html
We have installed the latest version v2.0.2 on the cluster that