"mpirun takes the #slots for each node from the slurm allocation."
Yes this is my issue and what I was not expected. But I will stick with
--bynode solution.
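
For reference, a minimal sketch of the difference (node names, rank count
and the application name are placeholders, not my actual setup):

  # default mapping: fill the slots reported for each node before moving on
  mpirun -np 6 -H node01,node02,node03 ./my_mpi_app
  # --bynode mapping: place ranks round-robin, one rank per node per pass
  mpirun --bynode -np 6 -H node01,node02,node03 ./my_mpi_app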

Thanks a lot for your help.
Regards,
Nicolas


2018-05-17 14:33 GMT+02:00 r...@open-mpi.org <r...@open-mpi.org>:

> mpirun takes the #slots for each node from the slurm allocation. Your
> hostfile (at least, what you provided) retained that information and shows
> 2 slots on each node. So the original allocation _and_ your constructed
> hostfile are both telling mpirun to assign 2 slots on each node.
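>
> As an illustration (the node names here are placeholders, not taken from
> your setup), a hostfile carrying that slot information would look
> something like:
>
>     # Open MPI hostfile: one line per node; slots = processes mpirun may place there
>     node01 slots=2
>     node02 slots=2
>     node03 slots=2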
>
> Like I said before, on this old version, -H doesn’t say anything about
> #slots - that information is coming solely from the original allocation and
> your hostfile.
>
>
> On May 17, 2018, at 5:11 AM, Nicolas Deladerriere <
> nicolas.deladerri...@gmail.com> wrote:
>
> About "-H" option and using --bynode option:
>
> In my case, I do not specify number of slots by node to openmpi (see
> mpirun command just above). From what I see the only place I define number
> of slots in this case is actually through SLURM configuration
> (SLURM_JOB_CPUS_PER_NODE=4(x3)). And I was not expected this to be taken
> when running mpi processes.
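>
> For reference, a quick way to see what SLURM has exported into the job
> environment (the value shown is the one from the allocation above):
>
>     # inside the allocation, list the SLURM variables mpirun may consult
>     env | grep ^SLURM_
>     # e.g. SLURM_JOB_CPUS_PER_NODE=4(x3)  -> 4 CPUs on each of 3 nodes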
>
> Using --bynode is probably the easiest solution in my case, even if I am
> afraid it will not necessarily fit all my running configurations. A better
> solution would be to review my management script for tighter integration
> with the SLURM resource manager, but this is another story.
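>
> A minimal sketch of what such SLURM integration might look like (script
> contents and resource values are illustrative only, not my actual setup):
>
>     #!/bin/bash
>     #SBATCH --nodes=3
>     #SBATCH --ntasks-per-node=2
>     # under a SLURM allocation, mpirun picks up the node list and per-node
>     # slot counts directly, so no hostfile or -H list is needed here
>     mpirun ./my_mpi_app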
>
>
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users