On 7/14/23 1:10 pm, Wilson, Steven M wrote:

It's not so much whether a job may or may not access the GPU, but rather which GPU(s) are included in $CUDA_VISIBLE_DEVICES. That is what controls what our CUDA jobs can see and therefore use (within any cgroups constraints, of course). In my case, Slurm is sometimes setting $CUDA_VISIBLE_DEVICES to a GPU that is not in the Slurm configuration because it is intended only for driving the display and not for GPU computation.
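
(For anyone wanting to check the same thing on their own nodes, here's a minimal sketch that prints what a job step actually received in $CUDA_VISIBLE_DEVICES and lists the GPUs nvidia-smi sees, so a display-only GPU leaking into an allocation stands out. It assumes a typical NVIDIA node with nvidia-smi on the PATH; it is not from the original posts.)

    # Minimal sketch: run inside a Slurm job step (e.g. via srun) to see
    # which GPU indices Slurm exposed to this job.
    import os
    import subprocess

    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    print(f"CUDA_VISIBLE_DEVICES = {visible!r}")

    # Cross-check against everything nvidia-smi reports on the node, so an
    # unexpected (display-only) GPU in the allocation is easy to spot.
    # Assumes nvidia-smi is available; the query fields are standard ones.
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,pci.bus_id",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.strip().splitlines():
        print("nvidia-smi:", line.strip())
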

Sorry I didn't see this before! Yeah, that does sound different; I wouldn't expect that. :-(

All the best,
Chris
--
Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA