I think you want the srun --exclusive option.

-- Sent from my Android phone. Please excuse my brevity and typos.
Bill Broadley <[email protected]> wrote:
>
> I'm migrating a few clusters from SGE -> SLURM. Two are working fine,
> but the 3rd has brought up some new use cases.
>
> I've tried to allocate CPUs with salloc or sbatch and have srun use
> those resources, but it seems like each srun asks for new resources.
> I looked at srun -Z, but it seems intentionally limited.
>
> Is it possible to set up SLURM to use CPUs as a consumable resource and
> then have srun use no more than was allocated with salloc or sbatch?
>
> I've looked at sarray and arrayrun. Both seem a bit awkward. Wouldn't
> it be cleaner to add a --background option to srun, so that:
>    if srun successfully allocates resources, it is put in the
>    background and returns to the prompt;
>    if srun fails to allocate resources, it blocks until resources
>    are available.
>
> Something like:
>
> $ cat bigrun.sh
> for i in `seq 1 100000`; do
>    srun --background ./MyJob $i
> done
> $
>
> Seems much cleaner than having a script submit jobs, then poll for each
> job's status, then submit more jobs. So if you have 64 CPUs available,
> you have 64 sruns actually running and one blocking. The instant one
> job finishes, another would start.
>
> Ideally, if you didn't want to monopolize a 256-CPU cluster, you could:
>
> $ sbatch -n 64 bigrun
>
> The above seems like a more SLURM-like way to handle array jobs.
>
> Better ideas?
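To spell out the --exclusive suggestion: a sketch of how the desired behavior can be approximated today, by submitting the whole loop with sbatch and backgrounding each srun step with the shell's `&`. With --exclusive, each step draws CPUs from the existing allocation rather than requesting a new one, and steps beyond the allocation block until a CPU frees up, which is roughly the proposed --background semantics. (Script name and ./MyJob are carried over from the quoted example; exact flag behavior may vary by SLURM version.)

```shell
#!/bin/sh
# bigrun.sh -- submit with: sbatch -n 64 bigrun.sh
# Each srun step takes one CPU from the sbatch allocation (--exclusive);
# once all 64 CPUs are busy, further steps block until one is released.
for i in `seq 1 100000`; do
    srun --exclusive -n 1 ./MyJob $i &
done
wait    # keep the batch script alive until every step has finished
```

This avoids the submit-then-poll pattern entirely: the shell backgrounds the steps, and SLURM's step scheduler does the blocking.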
