An MPI library that is tightly integrated with Slurm (e.g. Intel MPI, Open MPI) 
can use "srun" to start the remote processes.  In some cases "srun" can be used 
directly for MPI startup (i.e. "srun" in place of "mpirun").
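
For example, with a Slurm-integrated MPI a batch script along these lines 
starts the ranks with no ssh involved at all (the application name and 
resource counts below are just placeholders):

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=4

    # srun launches the ranks through slurmd on the allocated nodes,
    # so no ssh connection (and no ssh key) is needed
    srun ./my_mpi_app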


Other/older MPI libraries that start remote processes using "ssh" would, 
naturally, require keyless ssh logins to work across all compute nodes in the 
cluster.
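
Under the hood such a library does something roughly equivalent to the 
following for every host in the job, which is why each of those logins has to 
succeed without a password prompt (the hostname and hostfile here are made up, 
and the exact option spellings vary between MPI implementations):

    # the launcher ssh'es into each listed host and starts the ranks there
    mpirun -np 8 -hostfile ./hosts ./my_mpi_app

    # ...so this kind of login must work non-interactively on every node:
    ssh node001 hostname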


When we provision user accounts on our Slurm cluster we still create .ssh, 
generate .ssh/id_rsa (also needed for older X11 tunneling via libssh2), and add 
the public key to .ssh/authorized_keys.  All officially-supported MPIs on the 
cluster are tightly integrated with Slurm, but there are commercial products 
and older software our clients use that are not, so having keyless access ready 
ahead of time helps those users get their workflows running more quickly.
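
In shell terms that provisioning amounts to roughly the following (a sketch, 
assuming home directories are shared across the compute nodes so a single 
authorized_keys entry covers the whole cluster; the key type is just an 
example):

    mkdir -p ~/.ssh && chmod 700 ~/.ssh

    # passphrase-less key pair for intra-cluster use
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys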





> On Jun 8, 2020, at 11:16 , Durai Arasan <arasan.du...@gmail.com> wrote:
> 
> Hi,
> 
> we are setting up a Slurm cluster and are at the stage of adding the users' 
> ssh keys to the nodes.
> 
> I thought it would be sufficient to add the users' ssh keys only to the 
> designated login nodes. But I heard that it is also necessary to add them to 
> the compute nodes for Slurm to be able to submit the users' jobs 
> successfully. Apparently this is especially true for MPI jobs.
> 
> So is it true that ssh keys of the users must be added to the 
> ~/.ssh/authorized_keys of *all* nodes and not just the login nodes?
> 
> Thanks,
> Durai
> 

