Hi,
I have a question regarding the mpool_hugepage_page_size MCA parameter.
Would varying this parameter affect performance over the InfiniBand network for
large messages (messages greater than the eager limit)?
I have set the parameter to 4K and 4MB (the default on the machine is 2MB) and ran
P… on a private/unused interface.
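If it helps anyone reproduce this, the parameter can be set per run on the command
line or via the environment; a minimal sketch (the osu_bw benchmark and the process
count here are placeholders, not what was actually run):

mpirun --mca mpool_hugepage_page_size 4096 -np 2 ./osu_bw      # 4K pages
mpirun --mca mpool_hugepage_page_size 4194304 -np 2 ./osu_bw   # 4MB pages
export OMPI_MCA_mpool_hugepage_page_size=2097152               # or via the environment (the 2MB default)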
---
>
> I suggest you explicitly restrict the interface Open MPI should be using.
> For example, you can
>
> mpirun --mca btl_tcp_if_include eth0 ...
>
> Cheers,
>
> Gilles
>
> On Fri, Nov 27, 2020 at 7:36 PM CHESTER, DEAN (PGR) via users wrote:
Hi,
I am trying to set up some machines with Open MPI connected over Ethernet to
expand a batch system we already have in use.
This is already controlled with Slurm, and we are able to get a basic MPI
program running across 2 of the machines, but when I compile and run something that
actually pe…
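Expanding on Gilles's suggestion above, a sketch of the kind of invocation that
usually works for this (eth0, the subnet, and ./mpi_hello are placeholders for
your own NIC, network, and binary):

mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 2 ./mpi_hello
mpirun --mca btl tcp,self --mca btl_tcp_if_include 192.168.0.0/24 -np 2 ./mpi_hello
mpirun --mca btl_tcp_if_exclude lo,docker0 -np 2 ./mpi_hello

btl_tcp_if_include also accepts CIDR notation (second line), which is handy when
interface names differ across nodes; note that if_include and if_exclude are
mutually exclusive, so set only one of them.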
---
The permissions were incorrect!
For our old installation of OMPI 1.10.6 it didn’t complain, which is strange.
Thanks for the help.
Dean
> On 2 Jul 2020, at 11:01, Peter Kjellström wrote:
>
> On Thu, 2 Jul 2020 08:38:51 +
> "CHESTER, DEAN (PGR) via users" wrote:
---
I tried this again and it resulted in the same error:
nymph3.29935PSM can't open /dev/ipath for reading and writing (err=23)
nymph3.29937PSM can't open /dev/ipath for reading and writing (err=23)
nymph3.29936PSM can't open /dev/ipath for reading and writing (err=23)
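Given the permissions fix mentioned earlier in the thread, a quick thing to check
on each node (assuming the PSM device node is the /dev/ipath the error complains
about):

ls -l /dev/ipath*            # should be readable and writable by the job's user
sudo chmod 666 /dev/ipath*   # blunt fix for testing; a udev rule is the cleaner one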
---
Hi,
I’m having some difficulties building a working Open MPI configuration for an
InfiniBand cluster.
My installation was built with GCC 9.3.0 and configured like so:
'--prefix=/opt/mpi/openmpi/4.0.4/gnu/9.3.0' '--with-slurm' '--enable-shared'
'--with-pmi' 'CC=/opt/gnu/gcc/9.3.0/bi
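Once it is installed, a quick way to confirm that the Slurm/PMI and fabric
support actually made it into the build (the install prefix below is taken from
the configure line above):

/opt/mpi/openmpi/4.0.4/gnu/9.3.0/bin/ompi_info | grep -i -E 'slurm|pmi'
/opt/mpi/openmpi/4.0.4/gnu/9.3.0/bin/ompi_info | grep -i -E 'ucx|openib|psm'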