Hi,
I have a small addition to my last email. The command works again with
openmpi-v3.0.x-201711130242-40522ff and openmpi-master-201711140242-97d0469.
How can I avoid the binding messages?
loki fd1026 107 which mpiexec
/usr/local/openmpi-3.0.1_64_gcc/bin/mpiexec
loki fd1026 108 mpiexec --host
Hi,
I've installed openmpi-v3.0.0 on my "SUSE Linux Enterprise Server 12.3 (x86_64)"
with gcc-6.4.0. Today I discovered that I get an error for --map-by that I don't
get with older versions.
loki fd1026 115 which mpiexec
/usr/local/openmpi-2.0.3_64_gcc/bin/mpiexec
loki fd1026 116 mpiexec --ho
> Or one could tell OMPI to do what you really want it to do using map-by and
> bind-to options, perhaps putting them in the default MCA param file.
Nod. Agreed, but far too complicated for 98% of our users.
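For example, site-wide defaults can go in the default MCA parameter file
($prefix/etc/openmpi-mca-params.conf) so users never have to type the
options themselves. A minimal sketch; the policy values below are
illustrative, not a recommendation:

    # $prefix/etc/openmpi-mca-params.conf
    # equivalent to "mpiexec --map-by socket --bind-to core"
    rmaps_base_mapping_policy = socket
    hwloc_base_binding_policy = core

Users can still override these per job on the mpiexec command line.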
>
> Or you could enable cgroups in slurm so that OMPI sees the binding envelope - i
> On Dec 19, 2017, at 8:46 AM, Charles A Taylor wrote:
>
> Hi All,
>
> I’m glad to see this come up. We’ve used OpenMPI for a long time and
> switched to SLURM (from torque+moab) about 2.5 years ago. At the time, I had
> a lot of questions about running MPI jobs under SLURM and good inform
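Regarding the cgroups suggestion quoted above: in Slurm this is normally
configured in slurm.conf and cgroup.conf. A hedged sketch for a Slurm of
that era (check your version's documentation before copying):

    # slurm.conf
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup,task/affinity

    # cgroup.conf
    ConstrainCores=yes

With ConstrainCores=yes each job step is confined to the cores Slurm
allocated, so OMPI sees that binding envelope and binds within it.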
Hi All,
I’m glad to see this come up. We’ve used OpenMPI for a long time and switched
to SLURM (from torque+moab) about 2.5 years ago. At the time, I had a lot of
questions about running MPI jobs under SLURM and good information seemed to be
scarce - especially regarding “srun”. I’ll just b
Ralph,
Thank you very much for your response. I'll pass this along to my
users. Sounds like we might need to do some testing of our own. We're
still using Slurm 15.08, but planning to upgrade to 17.11 soon, so it
sounds like we'll get some performance benefits from doing so.
Prentice
On 12/
Benjamin,
unfortunately, the compiler wrappers (mpicc and friends) will be riscv64
binaries.
fwiw, they will (try to) use the cross compilers on the riscv64 machines (!)
but you can configure with the '--enable-script-wrapper-compilers'
option in order to generate scripts
that can be invo
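For completeness, a hedged sketch of such a cross-compile configure line;
the riscv64 toolchain triplet below is an assumption, not from the
original message:

    # script wrappers avoid building mpicc and friends as riscv64 binaries
    ./configure --host=riscv64-unknown-linux-gnu \
                --enable-script-wrapper-compilers \
                CC=riscv64-unknown-linux-gnu-gcc \
                CXX=riscv64-unknown-linux-gnu-g++

The generated wrapper scripts can then be invoked on the build host as
well as on the target.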