over `mpirun ./my_mpi_program'? For me, both seem to do exactly the same thing. No? Did I miss something?

no, the issue is whether your mpirun is slurm-aware or not.
you can get exactly the same behavior if your mpi is linked with the slurm hooks.

the main thing is that slurm communicates the resources for the job:
the set of nodes, number of processes, threads, etc.
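
for instance, inside an allocation you can see what slurm passes along. these are standard slurm environment variables; the allocation and the values shown are just an illustrative two-node example:

    $ salloc -N 2 --ntasks-per-node=4 --cpus-per-task=2
    $ env | grep '^SLURM_' | grep -E 'NNODES|NTASKS|CPUS_PER_TASK|NODELIST'
    SLURM_NNODES=2
    SLURM_NTASKS=8
    SLURM_CPUS_PER_TASK=2
    SLURM_JOB_NODELIST=node[01-02]

a slurm-aware mpirun (or srun itself) reads these directly, so you never have to repeat -np or a hostfile by hand.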

if you run a non-slurm-aware mpi, then yes, you can see differences in how processes get laid out onto the allocated resources. and obviously, if your mpi is sshing to nodes on its own, things can get quite screwed up (wrong placement, no accounting, stray processes left behind).
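if you're stuck with a non-slurm-aware mpirun, a common workaround is to hand it a machinefile built from the allocation. a minimal sketch (flag spellings like -machinefile vary between mpich and open mpi, and my_mpi_program is just the placeholder from the question):

    #!/bin/bash
    #SBATCH -N 2
    #SBATCH --ntasks-per-node=8

    # build a machinefile from the nodes slurm actually gave us
    scontrol show hostnames "$SLURM_JOB_NODELIST" > hosts.$SLURM_JOB_ID

    # non-slurm-aware launcher: tell it explicitly how many ranks and where
    mpirun -np "$SLURM_NTASKS" -machinefile hosts.$SLURM_JOB_ID ./my_mpi_program

that at least keeps the ranks on the right nodes, though slurm still can't track or clean up the remote processes properly.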

it's a really good idea to use the pam_slurm_adopt plugin,
which adopts ssh-launched processes into the job and makes even a scheduler-oblivious mpirun behave better.
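
for reference, the compute-node side of that is just a pam entry plus the matching slurm.conf setting; exact paths and module options are distro-dependent, so treat this as a sketch:

    # /etc/pam.d/sshd on the compute nodes (near the end of the account stack)
    account    sufficient   pam_slurm_adopt.so

    # slurm.conf: needed so there's a job container (cgroup) to adopt ssh sessions into
    PrologFlags=Contain

with that in place, an ssh launched by a scheduler-oblivious mpirun lands inside the job's cgroup, so it gets the right cpu/memory limits and gets cleaned up when the job ends.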

regards, mark hahn
