On 9/5/19 3:49 PM, Bill Broadley wrote:
I have a user with a particularly flexible code who would like to run a single MPI job across multiple nodes, some with 8 GPUs each and some with 2 GPUs.
Perhaps they could just specify a number of tasks (`--ntasks`) along with `--cpus-per-task`, `--mem-per-cpu`, and `--gpus-per-task`, and let Slurm balance it out across the nodes?
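A minimal sketch of what that submission might look like, assuming the cluster runs a Slurm version with the cons_tres select plugin (19.05 or later, which `--gpus-per-task` requires). The counts and the application name `my_mpi_app` are hypothetical:

```shell
#!/bin/bash
#SBATCH --job-name=flex-mpi      # hypothetical job name
#SBATCH --ntasks=16              # total MPI ranks; Slurm packs them onto nodes
#SBATCH --cpus-per-task=4        # CPUs for each rank
#SBATCH --mem-per-cpu=4G         # request memory per CPU rather than per node
#SBATCH --gpus-per-task=1        # bind one GPU to each rank (needs cons_tres)

srun ./my_mpi_app                # srun launches the ranks with their GPU bindings
```

Because everything is requested per task rather than per node, Slurm is free to place more ranks on the 8-GPU nodes and fewer on the 2-GPU nodes.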
All the best,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Berkeley, CA, USA