Anyone know if the new GPU support allows having a different number of GPUs per 
node?

I found:
https://www.ch.cam.ac.uk/computing/slurm-usage

Which mentions "SLURM does not support having varying numbers of GPUs per node 
in a job yet."

I have a user with a particularly flexible code who would like to run a single
MPI job across multiple nodes, some with 8 GPUs each and some with 2.
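
For what it's worth, newer Slurm releases have heterogeneous job support, which
may cover this. A rough sketch of what the batch script might look like (the job
name, node/task counts, and application name are made up, and this assumes a
Slurm version with "hetjob" support plus an MPI stack that can span het-job
components):

#!/bin/bash
# Hypothetical sketch: two het-job components with different GPU counts per node.
#SBATCH --job-name=hetgpu
#SBATCH --nodes=2 --ntasks-per-node=8 --gres=gpu:8
#SBATCH hetjob
#SBATCH --nodes=2 --ntasks-per-node=2 --gres=gpu:2

# Launch one MPI application spanning both components
# (requires MPI support for crossing het-group boundaries).
srun --het-group=0,1 ./my_mpi_app

Whether a single MPI_COMM_WORLD actually spans both components depends on the
site's MPI and Slurm configuration, so treat the above as a starting point rather
than a confirmed recipe.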


