On 09/11/16 12:15, Ran Du wrote:
> Thanks a lot for your reply. However, it's not what I want to
> get. For the example of Job 6449483, it is allocated with only one node,
> what if it was allocated with multiple nodes? I'd like to get the
> accounting statistics about how many CPUs/GPUs separately on each node,
> but not the sum number on all nodes.
Oh sorry, that's my fault, I completely misread what you were after
and managed to invert your request!
I don't know if that information is included in the accounting data.
I believe the allocation is uniform across the nodes, for instance:
$ sbatch --gres=mic:1 --mem=4g --nodes=2 --wrap /bin/true
resulted in:
$ sacct -j 6449484 -o jobid%20,jobname,alloctres%20,allocnodes,allocgres
               JobID    JobName            AllocTRES AllocNodes    AllocGRES
-------------------- ---------- -------------------- ---------- ------------
             6449484       wrap  cpu=2,mem=8G,node=2          2        mic:2
       6449484.batch      batch  cpu=1,mem=4G,node=1          1        mic:2
      6449484.extern     extern  cpu=2,mem=8G,node=2          2        mic:2
The only oddity there is that the batch step of course runs
only on the first node, yet it is shown as allocated 2 GRES.
I suspect that's just a symptom of Slurm keeping only a
per-job total rather than a per-node count.
I don't think Slurm can give you an uneven GRES allocation
across nodes, but the SchedMD folks would need to confirm that,
I'm afraid.
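One other thought: if you only need the breakdown while the job is
still running (not from accounting afterwards), scontrol's detail
flag prints one line per allocated node, including the CPU IDs and
GRES index on that node. As far as I know that detail is not kept
in the accounting database. Something along these lines (node names
and IDs below are illustrative, not from a real run):

$ scontrol show job 6449484 -d
   ...
     Nodes=node01 CPU_IDs=0 Mem=4096 GRES_IDX=mic(IDX:0)
     Nodes=node02 CPU_IDs=0 Mem=4096 GRES_IDX=mic(IDX:0)
   ...

So for a running job you could parse those Nodes= lines to get the
per-node CPU/GRES counts you're after.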
All the best,
Chris
--
Christopher Samuel Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: [email protected] Phone: +61 (0)3 903 55545
http://www.vlsci.org.au/ http://twitter.com/vlsci