Open MPI should just be *using* SLURM and should be agnostic of whatever scheduler you choose to use (indeed, OMPI doesn't even have visibility into which scheduler you are using). OMPI's mpirun will use "srun" to launch the MPI processes in a SLURM job -- it may be helpful to check out what is happening differently with Maui in the sub-srun that mpirun invokes...?
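
One way to compare the two cases is a diagnostic batch script like the sketch below (a rough sketch with some assumptions: the SLURM_* variable names and the pls_base_verbose MCA parameter are what I'd expect from the 1.2-era releases, and "./my_mpi_app" is a placeholder). Submit it once under SLURM's builtin scheduler and once under Maui, then diff the results:

    #!/bin/sh
    # Dump everything SLURM hands to the job -- Maui may populate the
    # allocation variables (SLURM_NNODES, SLURM_TASKS_PER_NODE,
    # SLURM_NODELIST, ...) differently than the builtin scheduler.
    env | grep '^SLURM' | sort > slurm-env.$SLURM_JOBID.txt

    # Ask mpirun's launcher to log what it does with the sub-srun;
    # "pls_base_verbose" is the assumed verbosity knob for the OMPI
    # 1.2 series' process-launch framework.
    mpirun --mca pls_base_verbose 10 ./my_mpi_app

Diffing the two slurm-env files should show whether Maui changes the allocation information that mpirun reads before invoking srun.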

The SLURM development list might be able to provide more insight here.


On May 22, 2008, at 11:22 AM, Romaric David wrote:

Hello,

I am trying to use Maui 1.3.6p19 + SLURM 1.2.29 + Open MPI 1.2.6 together.

I am currently trying to get SLURM's --ntasks-per-node process-placement specification to work with Open MPI.

I submit a simple mpirun job with:

sbatch -N 2 --ntasks-per-node=1 myscript

where myscript only contains an mpirun command.
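
For example, a minimal sketch of what such a script might look like (the executable name "./my_mpi_app" is just a placeholder, not the real program):

    #!/bin/sh
    # myscript: nothing but the mpirun command, as described above.
    # "./my_mpi_app" stands in for the actual MPI executable.
    mpirun ./my_mpi_app

With no -np argument, mpirun should take its process count from the SLURM allocation (here, 2 nodes x 1 task per node).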

When I submit this script using SLURM's builtin scheduler, everything runs perfectly and the processes get dispatched, one per node, as expected.

When using the Maui scheduler, the MPI program does not start: the MPI executable never even gets read.

Could mpirun be confused by the environment transmitted by SLURM/Maui?

Do you have any clue about this?

        Regards,
        Romaric


--
Jeff Squyres
Cisco Systems
