Hi Reed,

Reed Dier writes:
> On Jun 27, 2023, at 1:10 AM, Loris Bennett wrote:
>
> Hi Reed,
>
> Reed Dier <reed.d...@focusvq.com> writes:
>
>> Is this an issue with the relative FIFO nature of the priority scheduling
>> currently with all of the other factors disabled,
>> or since my queue is fairly deep, is this
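
For context, "all of the other factors disabled" presumably means a
multifactor priority setup with the other weights zeroed out, along these
lines; the slurm.conf values below are illustrative only, not the actual
configuration from the thread:

PriorityType=priority/multifactor
# With every weight except age set to zero, job priority is driven almost
# entirely by queue wait time, so ordering is roughly FIFO by submit time.
PriorityWeightAge=1000
PriorityWeightFairshare=0
PriorityWeightJobSize=0
PriorityWeightPartition=0
PriorityWeightQOS=0
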
Hi,

I manually configure the GPUs in our Slurm configuration (AutoDetect=off in
gres.conf) and everything works fine when all the GPUs in a node are configured
in gres.conf and available to Slurm. But we have some nodes where a GPU is
reserved for running the display and is specifically not configured in gres.conf.
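To make that concrete, the gres.conf entries I have in mind look something like
this (the node name, GPU type, and device paths are made-up examples):

# gres.conf on a hypothetical node with four physical GPUs, where
# /dev/nvidia0 runs the display and is deliberately left out, so only
# the remaining three GPUs are exposed to Slurm as gres/gpu.
AutoDetect=off
NodeName=gpunode01 Name=gpu Type=a100 File=/dev/nvidia[1-3]

The matching NodeName line in slurm.conf would then presumably advertise
Gres=gpu:a100:3 for that node.
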
Hello,

Running this simple script:

#!/bin/bash
#
#SBATCH --job-name=mega_job
#SBATCH --output=mega_job.out
#SBATCH --tasks=3
#SBATCH --array=0-5
#SBATCH --partition=cuda.q
echo "STARTING"
srun echo "hello world" >> file_${SLURM_ARRAY_TASK_ID}.out
echo "ENDING"
I always get this output:
STARTING