I use seff all the time as a first-order approximation. It's a good hint at
what's going on with a job, but it doesn't give much detail.
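
If you want that same first-order number across lots of jobs instead of one at
a time, something like the sketch below can be a starting point. It just redoes
the basic seff arithmetic (CPU efficiency = TotalCPU / (Elapsed * AllocCPUS))
on top of sacct; the seven-day window and the field list are only examples, so
adjust for your site:

#!/usr/bin/env python3
# Minimal sketch: seff-style CPU efficiency for recently completed jobs,
# computed from sacct's parsable output. Window and fields are examples.
import subprocess

def to_seconds(t):
    # sacct durations look like [DD-]HH:MM:SS or MM:SS[.fff]
    days = "0"
    if "-" in t:
        days, t = t.split("-", 1)
    parts = [int(float(x)) for x in t.split(":")]
    while len(parts) < 3:
        parts.insert(0, 0)
    h, m, s = parts
    return int(days) * 86400 + h * 3600 + m * 60 + s

out = subprocess.run(
    ["sacct", "--allusers", "-X", "--noheader", "--parsable2",
     "--state=COMPLETED", "--starttime=now-7days",
     "--format=JobID,User,AllocCPUS,Elapsed,TotalCPU"],
    capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    jobid, user, cpus, elapsed, totalcpu = line.split("|")
    if not totalcpu:
        continue
    alloc_cpu_seconds = int(cpus) * to_seconds(elapsed)
    if alloc_cpu_seconds > 0:
        eff = 100.0 * to_seconds(totalcpu) / alloc_cpu_seconds
        print(f"{jobid}\t{user}\t{eff:.1f}% CPU efficiency")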

We are in the process of integrating the SUPReMM node-utilization capture
tool with our clusters and with our local XDMoD installation. Plain old
XDMoD can ingest the Slurm logs and give you some great information on
utilization, but it generally takes a high-level, summary view of the stats.
To help users see their personal job efficiency, you really need to give
them time-series data, and we're expecting to get that with the SUPReMM
components.
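
For the plain-XDMoD piece, the ingestion is basically the stock Open XDMoD
helper/ingestor flow, roughly like the following (the resource name is a
placeholder, and exact commands/flags vary a bit between XDMoD versions, so
check the docs for yours):

xdmod-slurm-helper -r mycluster   # pulls accounting records from Slurm via sacct
xdmod-ingestor                    # ingests and aggregates them for the portal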

The other angle, which I've recently asked our eng/admin team to try to
implement on our newest cluster (yet to be released), is to turn on the
job-profiling support that Slurm has built in. With this properly
configured, users can turn on job profiling with a Slurm job option and
it will produce that time-series data. Look for the AcctGatherProfileType
config stuff for slurm.conf.
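
For reference, the moving parts look roughly like this; the plugin choice
(hdf5 here, influxdb is another option), the path, and the sampling interval
are just examples, so check the slurm.conf and acct_gather.conf man pages for
your Slurm version:

# slurm.conf
AcctGatherProfileType=acct_gather_profile/hdf5
JobAcctGatherFrequency=task=30

# acct_gather.conf
ProfileHDF5Dir=/path/on/shared/storage   # placeholder, typically shared storage
ProfileHDF5Default=None

# then the user opts in per job:
sbatch --profile=task job.sh

# and the resulting per-node HDF5 files can be merged/extracted with sh5util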

Best,

Matt

Matthew Brown
Computational Scientist
Advanced Research Computing
Virginia Tech


On Mon, Jul 24, 2023 at 10:39 AM Will Furnell - STFC UKRI <
will.furn...@stfc.ac.uk> wrote:

> Hello,
>
>
>
> I am aware of ‘seff’, which allows you to check the efficiency of a single
> job, which is good for users, but as a cluster administrator I would like
> to be able to track the efficiency of all jobs from all users on the
> cluster, so I am able to ‘re-educate’ users that may be running jobs that
> have terrible resource usage efficiency.
>
>
>
> What do other cluster administrators use for this task? Is there anything
> you use and recommend (or don’t recommend) or have heard of that is able to
> do this? Even if it’s something like a Grafana dashboard that hooks up to
> the SLURM database.
>
>
>
> Thank you,
>
>
>
> Will.
>
