From “man mpirun” - note that not specifying “slots=N” in a hostfile defaults
to slots=#cores on that node (as it states in the text):
Specifying Host Nodes
Host nodes can be identified on the mpirun command line with the -host
option or in a hostfile.
For example,
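The man-page example itself is cut off above; a minimal sketch of a hostfile (using the hypothetical node names b1-b3 that appear later in this thread) might look like this, with slots made explicit so the default of slots = #cores does not apply:

```shell
# Hypothetical hostfile: one slot per node. Without "slots=N",
# Open MPI defaults each node to slots = number of cores on it.
cat > myhosts <<'EOF'
b1 slots=1
b2 slots=1
b3 slots=1
EOF

# Launch one rank per listed slot (assumes passwordless ssh to b1..b3):
# mpirun -np 3 --hostfile myhosts hostname
grep -c 'slots=1' myhosts   # prints 3
```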
I gather you are using OMPI 2.x, yes? And you configured it
--with-pmi=, then moved the executables/libs to your workstation?
I suppose I could state the obvious and say “don’t do that - just rebuild it”,
and I fear that (after checking the 2.x code) you really have no choice. OMPI
v3.0 will
On Thu, Jun 22, 2017 at 10:43 AM, John Hearns via users wrote:
> Having had some problems with ssh launching (a few minutes ago) I can
> confirm that this works:
>
> --mca plm_rsh_agent "ssh -v"
This doesn't do anything for me.
If I set OMPI_MCA_sec=^munge, I can clear
I am just learning to use openmpi 1.8.4 that is installed on our cluster. I am
running into a baffling issue. If I run:
mpirun -np 3 --host b1,b2,b3 hostname
I get the expected output:
b1
b2
b3
But if I do:
mpirun -np 3 --hostfile hostfile hostname
I get:
b1
b1
b1
Where hostfile contains:
I may have asked this recently (if so sorry).
If anyone has worked with QoS settings with OpenMPI, please ping me off list,
e.g.
mpirun --mca btl_openib_ib_service_level N
Having had some problems with ssh launching (a few minutes ago) I can
confirm that this works:
--mca plm_rsh_agent "ssh -v"
Stupidly I thought there was a major problem - when it turned out I could
not ssh into a host... ahem.
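Putting John's workaround together with the three-host example earlier in this digest, a full invocation might look like the following (the hostnames are illustrative, and passwordless ssh to each host is assumed):

```shell
# Sketch: force the rsh/ssh launcher and make ssh verbose, so launch
# failures (e.g. not being able to ssh into a host) show up immediately.
mpirun --mca plm_rsh_agent "ssh -v" -np 3 --host b1,b2,b3 hostname
```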
On 22 June 2017 at 16:35, r...@open-mpi.org wrote:
That took care of one of the errors, but I missed a re-type on the second error:
mca_base_component_repository_open: unable to open mca_pmix_pmix112:
libmunge missing
And the opal_pmix_base_select error is still there (which is what's
actually halting my job).
On Thu, Jun 22, 2017 at 10:35 AM,
You can add "OMPI_MCA_plm=rsh OMPI_MCA_sec=^munge" to your environment
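A minimal sketch of what that suggestion means in practice: any MCA parameter can be set as an environment variable by prefixing its name with `OMPI_MCA_`, equivalent to passing `--mca` on the mpirun command line.

```shell
# Hypothetical session: steer Open MPI away from SLURM/munge components
# via environment variables instead of --mca flags.
export OMPI_MCA_plm=rsh        # use the rsh/ssh launcher instead of SLURM's
export OMPI_MCA_sec='^munge'   # exclude the munge security component
env | grep '^OMPI_MCA_'        # shows both settings in the environment
```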
> On Jun 22, 2017, at 7:28 AM, John Hearns via users
> wrote:
>
> Michael, try
> --mca plm_rsh_agent ssh
>
> I've been fooling with this myself recently, in the context of a PBS cluster
>
> On
Michael, try
--mca plm_rsh_agent ssh
I've been fooling with this myself recently, in the context of a PBS cluster
On 22 June 2017 at 16:16, Michael Di Domenico wrote:
> is it possible to disable slurm/munge/psm/pmi(x) from the mpirun
> command line or (better) using
Is it possible to disable slurm/munge/psm/pmi(x) from the mpirun
command line or (better) using environment variables?
I'd like to use the installed version of OpenMPI I have on a
workstation, but it's linked with SLURM from one of my clusters.
MPI/SLURM work just fine on the cluster, but when I
Dear all,
I have written a program with gfortran/Fortran and Open MPI.
I would like to profile it.
Can someone suggest an open-source tool to profile it?
I have done some Internet research but do not have enough information to
choose the best one.
Thanks in advance to all of you.
Diego