On 4/1/19 5:48 am, Marcin Stolarek wrote:

> I think that the main reason is the lack of access to some /dev "files" in your docker container. For Singularity the nvidia plugin is required; maybe there is something similar for docker...

That's unlikely. The problem isn't that nvidia-smi is failing in Docker for lack of device files; the problem is that it's seeing all 4 GPUs, which means the container is no longer being constrained by the device cgroup that Slurm creates for the job.
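One way to see why: `docker run` asks the Docker daemon to start the container, and the daemon runs outside the Slurm job, so the container inherits dockerd's cgroups rather than the job step's. A quick sketch of how you might check this (the Slurm cgroup path shown in the comment is illustrative, not guaranteed on every site):

```shell
# Print the cgroup membership of the current process (inside a Slurm
# job step this would typically show a path containing
# .../slurm/uid_<uid>/job_<jobid>/... for the devices controller).
cat /proc/self/cgroup

# If dockerd is running, print its cgroup membership for comparison;
# containers it launches live under dockerd's hierarchy, not the
# job's, so Slurm's device cgroup never applies to them.
pid="$(pidof dockerd || true)"
[ -n "$pid" ] && cat /proc/"$pid"/cgroup || true
```

If the container's cgroup path has no Slurm job component, the GPU confinement you get from `--gres=gpu:N` simply isn't in effect inside it.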

--
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC
