Ralph, attached please find a file with the results of
$ mpirun -d singularity exec ./mpi_test.img /usr/bin/ring
I pressed Ctrl-C at about line 200 of the output file.
I hope there is something useful in it.
My nodefile looks like this:
nyx6219
nyx6219
nyx6145
nyx6145
nyx6191
nyx6191
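In case it's useful context: each host appears twice in the nodefile, so this run places two ranks per node, six in total. The ring test itself is just the usual token-around-the-ring program, something along these lines (my sketch from memory, not the actual source of /usr/bin/ring in the image):

/* Typical MPI ring test: rank 0 starts a token, each rank passes it
 * on to rank+1, and it ends back at rank 0. A sketch only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        MPI_Send(&token, 1, MPI_INT, 1 % size, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 received token back from rank %d\n", size - 1);
    } else {
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}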
If you can send us some more info on how it breaks, that would be helpful. I’ll
file it as an issue so we can track things.
Thanks
Ralph
> On Feb 20, 2017, at 9:13 AM, Bennet Fauber wrote:
>
I got mixed results when bringing a container that doesn't have the IB
and Torque libraries compiled into the OMPI inside the container to a
cluster where it does.
The short summary is that multinode communication seems unreliable. I
can mostly get up to 8 procs, two per node, to run, but beyond
Inside the tarball that you downloaded, there's
doc/doxygen-doc/hwloc-a4.pdf; chapter 18 covers Netloc with Scotch.
Beware that this code is still under development.
Brice
On 19/02/2017 20:20, Михаил Халилов wrote:
> Okay, but what configure options for Scotch should I use? I didn't
>
Nathan,
Thanks for your clarification. Just so that I understand where my
misunderstanding of this matter comes from: can you please point me to
the place in the standard that prohibits thread-concurrent window
synchronization using MPI_Win_flush[_all]? I can neither seem to find
such a
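For concreteness, the pattern I have in mind is roughly the sketch below (my own illustration, assuming MPI_THREAD_MULTIPLE and a passive-target epoch opened with MPI_Win_lock_all; whether the two concurrent flushes are permitted is exactly what I am asking about):

/* Two threads each Put into the same window and then each call
 * MPI_Win_flush on the same target concurrently. Sketch only. */
#include <mpi.h>
#include <pthread.h>

static MPI_Win win;
static int target;
static int sendbuf[2] = {1, 2};

static void *worker(void *arg)
{
    int idx = *(int *)arg;
    MPI_Put(&sendbuf[idx], 1, MPI_INT, target, idx, 1, MPI_INT, win);
    MPI_Win_flush(target, win);   /* thread-concurrent flush */
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank, size, ids[2] = {0, 1};
    int *base;
    pthread_t t[2];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    target = (rank + 1) % size;

    MPI_Win_allocate(2 * sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    MPI_Win_lock_all(0, win);     /* passive-target access epoch */

    pthread_create(&t[0], NULL, worker, &ids[0]);
    pthread_create(&t[1], NULL, worker, &ids[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}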