> On Apr 8, 2017, at 8:44 PM, Doug Meyer <dameye...@gmail.com> wrote:
>
> Running 15.x and have run into a next step that is probably us tripping
> over our feet.
>
> Engineers were happy as clams with SGE but it was time to move on. We have
> adopted Slurm and are moving users forward. So far, much joy. As we work
> more with MPI apps we are getting foggy.
>
> Slurm support is automatically included in OpenMPI builds. Excellent.
>
> Must add a build flag for PMI in the OpenMPI build. Got it.

Not necessarily. If you are running Slurm 15.x, then you can use the PMIx support in OpenMPI v2.x; you'll find jobs start significantly faster that way. Or, if you decide (see below) to launch via mpirun, you also don't need to set the flag. (A configure sketch is at the end of this note.)

> MpiDefault and MpiFlags??
>
> Should MpiDefault be "none"? If all our MPI work is OpenMPI with PMI-1,
> are we better off setting it to openmpi?

I'm not sure the openmpi plugin actually does much of anything. We certainly don't rely on it doing anything.

> Understand we can use srun to change the MPI.
>
> From reading, it sounds like MpiDefault can be ignored if we are running
> OpenMPI 1.5 or newer and PMI. Is that correct?

Yes, as I said above, though I would strongly suggest you start with OMPI v2.x.

> Finally, from reading it seems Slurm is very mature and supplants the need
> for mpirun unless we are using salloc or running from an external script.
> Using srun or sbatch, forget about mpirun. Is this correct?

Not completely. While it is true that you can do a lot with srun, there are a number of features that mpirun supports and srun does not, so it really is a matter of looking at the options each provides and deciding which meets your needs. You won't find any performance difference regardless of which launch method you use, and with OMPI v2.x both start in the same amount of time.
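To make the build-flag point concrete, here is a rough sketch of the configure line typically used when you want srun to launch OpenMPI jobs directly through Slurm's PMI library. The install prefixes are placeholders, not anything from your site:

    # Build Open MPI with Slurm support plus Slurm's PMI library
    # (prefixes are examples only; adjust to your installation)
    ./configure --prefix=/opt/openmpi \
                --with-slurm \
                --with-pmi=/opt/slurm    # directory where Slurm installed pmi.h/libpmi
    make -j8 && make install

If you launch with mpirun instead, or rely on the PMIx support that ships inside OMPI v2.x, you can leave --with-pmi off entirely.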
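On the MpiDefault question, a common pattern is to leave the default alone and select the plugin per job. A small sketch; the task count and application name are made up for illustration:

    # slurm.conf: "none" is the stock default, so this line is effectively a no-op
    MpiDefault=none

    # Select the plugin at submit time instead, e.g. PMI-2 for an OpenMPI
    # build configured with --with-pmi:
    srun --mpi=pmi2 -n 64 ./my_mpi_app

    # "srun --mpi=list" shows which plugins your Slurm build actually provides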
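And for the srun-versus-mpirun question, both launch styles drop into a batch script the same way. Again just a sketch; node/task counts and the binary name are placeholders:

    #!/bin/bash
    #SBATCH -N 2
    #SBATCH --ntasks=32
    #SBATCH -t 00:30:00

    # Direct launch by Slurm (uses the PMI/PMIx support discussed above)
    srun ./my_mpi_app

    # Or launch via OpenMPI's mpirun, which picks up the node list and task
    # count from the Slurm allocation (no hostfile needed). Use one or the other.
    #mpirun ./my_mpi_app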
Ralph

> Thank you,
> Doug Meyer