Daniel,
I don't think you need the CPUs= entries in gres.conf at all.
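
As a sketch, using the device names from your message, the gres.conf
could be as simple as:

    NodeName=mynode Name=gpu Type=GeForceGTX680 File=/dev/nvidia0
    NodeName=mynode Name=gpu Type=GeForceGTX1080 File=/dev/nvidia1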

Also look at specifying the use of cgroups. Then when you run a job and
request one GPU, that GPU will be made available to you via
CUDA_VISIBLE_DEVICES.
The other GPU will not be visible to you - but it can be used by another
batch job.
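
As a rough sketch, assuming the cgroup plugins are available in your
build, that usually means something like:

    In slurm.conf:
        ProctrackType=proctrack/cgroup
        TaskPlugin=task/cgroup

    In cgroup.conf:
        ConstrainDevices=yes

With that in place, a quick check such as

    srun --gres=gpu:1 env | grep CUDA_VISIBLE_DEVICES

should report a single device (e.g. CUDA_VISIBLE_DEVICES=0), while the
other GPU stays free for other jobs.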

On 4 May 2017 at 13:34, Daniel Ruiz Molina <daniel.r...@caos.uab.es> wrote:

> Hello,
>
> I have reconfigured slurm:
>
>    - slurm.conf: NodeName=mynode CPUs=8 SocketsPerBoard=1
>    CoresPerSocket=4 ThreadsPerCore=2 RealMemory=7812 TmpDisk=50268 Gres=gpu:2
>    (without specifying the GPU model)
>
>
>    - gres.conf: two separate lines:
>
> NodeName=mynode Name=gpu Count=1 Type=GeForceGTX680 File=/dev/nvidia0 CPUs=0-3
> NodeName=mynode Name=gpu Count=1 Type=GeForceGTX1080 File=/dev/nvidia1 CPUs=4-7
>
>
> With this configuration, Slurm starts OK... but wouldn't both lines also
> be correct with "CPUs=0-7"? If not, how could I use all the CPUs with
> only one GPU?
>
> Thanks.
>
>
