Re: [slurm-users] good practices

2019-11-26 Thread Eli V
Inline below

On Tue, Nov 26, 2019 at 5:50 AM Loris Bennett wrote:
>
> Hi Nigella,
>
> Nigella Sanders writes:
>
> > Thank you all for such interesting replies.
> >
> > The --dependency option is quite useful but in practice it has some
> > inconveniences. Firstly, all 20 jobs are instantly

Re: [slurm-users] good practices

2019-11-26 Thread Loris Bennett
Hi Nigella,

Nigella Sanders writes:

> Thank you all for such interesting replies.
>
> The --dependency option is quite useful but in practice it has some
> inconveniences. Firstly, all 20 jobs are instantly queued, which some
> users may interpret as an abusive use of common resources.

Re: [slurm-users] good practices

2019-11-26 Thread Nigella Sanders
Thank you all for such interesting replies.

The --dependency option is quite useful, but in practice it has some inconveniences. Firstly, all 20 jobs are *instantly queued*, which some users may interpret as an abusive use of common resources. Even worse, if a job fails, the remaining ones will stay
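On the second point: when a job in an afterok chain fails, its dependents can never run and sit pending with the reason DependencyNeverSatisfied. Slurm's sbatch has a --kill-on-invalid-dep=yes flag that asks the controller to cancel such jobs automatically instead of leaving them queued. A minimal sketch (sbatch is stubbed with a shell function so the line can be tried anywhere; 12345 is a placeholder for the previous step's real job ID):

```shell
#!/bin/sh
# Dry run: stub sbatch so the command can be exercised without a cluster.
# On a real system, delete this stub and the real sbatch is used instead.
sbatch() { echo "sbatch $*"; }

# Dependent job that Slurm cancels automatically if its dependency
# (job 12345, a placeholder ID) can never be satisfied.
cmd=$(sbatch --dependency=afterok:12345 --kill-on-invalid-dep=yes job.sh)
echo "$cmd"
```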

Re: [slurm-users] good practices

2019-11-25 Thread Yair Yarom
Hi,

I'm not sure what a queue time limit of 10 hours means. If you can't have jobs waiting for more than 10 hours, then that seems very tight for 8-hour jobs. Generally, a few options:

a. The --dependency option (either afterok or singleton)
b. The --array option of sbatch with a limit of 1 job at
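Option (b) can be sketched as a single batch script: a job array of 20 tasks with the standard %N throttle suffix, so the whole chain occupies one queue record and at most one task runs at a time. This is a sketch only; the per-step script names are hypothetical, and note that unlike an afterok chain, a failed array task does not by itself stop the later tasks.

```shell
#!/bin/bash
#SBATCH --job-name=chain20
#SBATCH --time=08:30:00
#SBATCH --array=0-19%1   # 20 tasks, %1 = at most one running concurrently

# Hypothetical per-step work, selected by the array index Slurm exports.
# (Option (a)'s singleton variant would instead submit 20 separate jobs
# sharing --job-name with --dependency=singleton.)
./step_"${SLURM_ARRAY_TASK_ID}".sh
```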

[slurm-users] good practices

2019-11-25 Thread Nigella Sanders
Hi all,

I guess this is a simple matter, but I still find it confusing. I have to run 20 jobs on our supercomputer. Each job takes about 8 hours, and each one needs the previous one to be completed. The queue time limit for jobs is 10 hours. So my first approach is serially launching them in a
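The serial approach described here (each sbatch waiting on the previous job) can be sketched as an afterok chain. In the sketch below, sbatch is stubbed with a function that prints a fake job ID the way `sbatch --parsable` prints a real one, so the loop runs anywhere; job.sh and the IDs are placeholders, while --parsable and --dependency=afterok are standard sbatch flags.

```shell
#!/bin/sh
# Dry-run stub: prints a fake job ID, as `sbatch --parsable` would.
# On a real cluster, delete this stub to call the actual sbatch.
sbatch() { echo "$FAKE_ID"; }

FAKE_ID=100
prev=""
for i in $(seq 1 20); do
    if [ -z "$prev" ]; then
        # First step has nothing to depend on.
        jobid=$(sbatch --parsable job.sh)
    else
        # Each later step starts only if the previous one succeeded.
        jobid=$(sbatch --parsable --dependency=afterok:"$prev" job.sh)
    fi
    prev=$jobid
    FAKE_ID=$((FAKE_ID + 1))
done
echo "chain tail: job $prev"
```

Note this submits all 20 jobs up front, which is exactly the drawback discussed later in the thread: the whole chain is visible in the queue immediately, and a failure strands the dependents.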