Re: [slurm-users] Backfill Scheduling

2023-06-27 Thread Loris Bennett
Hi Reed,

Reed Dier writes:

> On Jun 27, 2023, at 1:10 AM, Loris Bennett wrote:
> > Hi Reed,
> > Reed Dier writes:
> > Is this an issue with the relative FIFO nature of the priority scheduling currently with all of the other factors disabled, or since my queue is fairly deep, is

Re: [slurm-users] Backfill Scheduling

2023-06-27 Thread Reed Dier
> On Jun 27, 2023, at 1:10 AM, Loris Bennett wrote:
>
> Hi Reed,
>
> Reed Dier <reed.d...@focusvq.com> writes:
>
>> Is this an issue with the relative FIFO nature of the priority scheduling currently with all of the other factors disabled, or since my queue is fairly deep, is
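
For readers skimming the archive: the depth of the backfill pass at issue here is controlled by SchedulerParameters in slurm.conf. A minimal sketch of the relevant knobs, with purely illustrative values that are not taken from this thread:

    # slurm.conf (excerpt) -- illustrative values only
    SchedulerType=sched/backfill
    # bf_window: how far ahead (in minutes) backfill plans reservations
    # bf_max_job_test: how many queued jobs a backfill cycle will examine
    # bf_continue: let an interrupted backfill cycle resume where it left off
    SchedulerParameters=bf_window=4320,bf_max_job_test=1000,bf_continue

sdiag reports how deep each backfill cycle actually got, which helps tell a priority/FIFO problem apart from a backfill-depth problem.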

[slurm-users] Unconfigured GPUs being allocated

2023-06-27 Thread Wilson, Steven M
Hi, I manually configure the GPUs in our Slurm configuration (AutoDetect=off in gres.conf) and everything works fine when all the GPUs in a node are configured in gres.conf and available to Slurm. But we have some nodes where a GPU is reserved for running the display and is specifically not
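
A minimal gres.conf sketch for the setup described, with AutoDetect=off and the display GPU simply left out of the File= list; the node names, GPU type, and device paths are placeholders, not values from this message:

    # gres.conf (excerpt) -- hypothetical node and device names
    AutoDetect=off
    # node whose four GPUs are all available to Slurm
    NodeName=gpu-node01 Name=gpu Type=a100 File=/dev/nvidia[0-3]
    # node where /dev/nvidia0 drives the display and is deliberately omitted
    NodeName=gpu-node02 Name=gpu Type=a100 File=/dev/nvidia[1-3]

The matching NodeName lines in slurm.conf would then declare Gres=gpu:a100:4 and Gres=gpu:a100:3. Note that leaving a device out of gres.conf does not by itself hide it from jobs unless device constraining is enforced (e.g. ConstrainDevices=yes in cgroup.conf), which may be exactly the behaviour being reported here.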

[slurm-users] cpu-bind=MASK at output files

2023-06-27 Thread Gestió Servidors
Hello,

Running this simple script:

#!/bin/bash
#
#SBATCH --job-name=mega_job
#SBATCH --output=mega_job.out
#SBATCH --tasks=3
#SBATCH --array=0-5
#SBATCH --partition=cuda.q
echo "STARTING"
srun echo "hello world" >> file_${SLURM_ARRAY_TASK_ID}.out
echo "ENDING"

I always get this output: STARTING
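
The "cpu-bind=MASK ..." lines named in the subject are typically the report srun prints when verbose CPU binding is in effect (for example via TaskPluginParam=Verbose in slurm.conf). Below is a sketch of the same script asking for quiet binding instead; it assumes the verbose report really is the source of the extra lines, which this thread does not confirm:

    #!/bin/bash
    #
    #SBATCH --job-name=mega_job
    #SBATCH --output=mega_job.out
    #SBATCH --tasks=3
    #SBATCH --array=0-5
    #SBATCH --partition=cuda.q
    echo "STARTING"
    # --cpu-bind=quiet suppresses the per-task binding report;
    # the shell redirection still appends all task output to the per-array file
    srun --cpu-bind=quiet echo "hello world" >> file_${SLURM_ARRAY_TASK_ID}.out
    echo "ENDING"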

Re: [slurm-users] Backfill Scheduling

2023-06-27 Thread Loris Bennett
Hi Reed,

Reed Dier writes:

> Hoping this will be an easy one for the community.
>
> The priority schema was recently reworked for our cluster, with only PriorityWeightQOS and PriorityWeightAge contributing to the priority value, while PriorityWeightAssoc, PriorityWeightFairshare,
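
For context, the rework described here corresponds to a multifactor priority setup along these lines in slurm.conf; the weights shown are placeholders for illustration, not values quoted in the thread:

    # slurm.conf (excerpt) -- illustrative weights only
    PriorityType=priority/multifactor
    PriorityWeightQOS=10000
    PriorityWeightAge=1000
    PriorityWeightAssoc=0
    PriorityWeightFairshare=0
    PriorityWeightPartition=0
    PriorityWeightJobSize=0

With everything but QOS and age zeroed, jobs of equal QOS effectively sort by age, which is what gives the queue the near-FIFO feel discussed earlier in the thread.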