Re: [slurm-users] Strange error, submission denied

2019-02-19 Thread Chris Samuel
On Tuesday, 19 February 2019 10:14:21 PM PST Marcus Wagner wrote:
> sbatch -N 1 --ntasks-per-node=48 --wrap hostname
> submission denied, got jobid 199805
On one of our 40 core nodes with 2 hyperthreads:
$ srun -C gpu -N 1 --ntasks-per-node=80 hostname | uniq -c
80 nodename02
The spec is:

Re: [slurm-users] Strange error, submission denied

2019-02-19 Thread Marcus Wagner
I just did a bit of debugging, setting the debug level to debug5 during submission. I submitted (or at least tried to submit) two jobs:
sbatch -n 48 --wrap hostname
got submitted, got jobid 199801
sbatch -N 1 --ntasks-per-node=48 --wrap hostname
submission denied, got jobid 199805
The only
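
[A minimal sketch of that comparison for anyone reproducing it; the post does not say how the debug level was raised, so scontrol setdebug here is an assumption and needs admin rights:]
scontrol setdebug debug5                             # raise slurmctld logging for the test
sbatch -n 48 --wrap hostname                         # accepted, jobid 199801
sbatch -N 1 --ntasks-per-node=48 --wrap hostname     # denied, jobid 199805
scontrol setdebug info                               # restore the usual log level afterwards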

Re: [slurm-users] Strange error, submission denied

2019-02-19 Thread Marcus Wagner
Hi Prentice,
On 2/19/19 2:58 PM, Prentice Bisbal wrote:
> --ntasks-per-node is meant to be used in conjunction with the --nodes option. From https://slurm.schedmd.com/sbatch.html:
> --ntasks-per-node=<ntasks>  Request that ntasks be invoked on each node. If used with the --ntasks option, the
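
[To illustrate the documented pairing, a minimal batch script sketch; the 48-core node size is assumed from the thread:]
#!/bin/bash
#SBATCH --nodes=1              # pair --ntasks-per-node with --nodes, as the docs suggest
#SBATCH --ntasks-per-node=48   # one task per core on a 48-core node (assumed node size)
#SBATCH --time=00:05:00
srun hostname | sort | uniq -c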

Re: [slurm-users] Priority access for a group of users

2019-02-19 Thread Prentice Bisbal
I just set this up a couple of weeks ago myself. Creating two partitions is definitely the way to go. I created one partition, "general" for normal, general-access jobs, and another, "interruptible" for general-access jobs that can be interrupted, and then set PriorityTier accordingly in my
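
[A minimal slurm.conf sketch of such a two-tier setup; partition names beyond those mentioned, node lists, and the preemption settings are illustrative assumptions, not the poster's exact configuration:]
# Jobs in a higher PriorityTier partition may preempt jobs in a lower one
# when partition-based preemption is enabled.
PreemptType=preempt/partition_prio
PreemptMode=REQUEUE
PartitionName=general       Nodes=node[01-10] PriorityTier=2 Default=YES
PartitionName=interruptible Nodes=node[01-10] PriorityTier=1 PreemptMode=REQUEUE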

Re: [slurm-users] allocate last MPI-rank to an exclusive node?

2019-02-19 Thread Jing Gong
Hi Chris, Thanks for the information. Regards, Jing Gong
From: slurm-users on behalf of Chris Samuel
Sent: Tuesday, February 19, 2019 03:47
To: slurm-users@lists.schedmd.com
Subject: Re: [slurm-users] allocate last MPI-rank to an exclusive node?
On

Re: [slurm-users] allocate last MPI-rank to an exclusive node?

2019-02-19 Thread Hendryk Bockelmann
Hi, we had the same issue and solved it by using the 'plane' distribution in combination with MPMD-style srun, e.g. in your example:
#SBATCH -N 3                 # 3 nodes with 10 cores each
#SBATCH -n 21                # 21 MPI tasks in sum
#SBATCH --cpus-per-task=1    # if you do not want hyperthreading
cat > mpmd.conf <<
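
[The preview cuts off before the mpmd.conf contents; below is a hypothetical sketch of how such a plane/MPMD combination can look. The program names and plane size are assumptions, not the poster's actual file:]
#!/bin/bash
#SBATCH -N 3
#SBATCH -n 21
#SBATCH --cpus-per-task=1

# Hypothetical MPMD config: ranks 0-19 run one binary, rank 20 runs the task
# that should sit on its own node.
cat > mpmd.conf <<EOF
0-19  ./solver
20    ./io_server
EOF

# plane=10 lays ranks out in blocks of 10 per node, so with 21 tasks on three
# 10-core nodes rank 20 ends up alone on the third node.
srun --distribution=plane=10 --multi-prog mpmd.conf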