I typoed the example command: --gres should have been --tres.
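
For reference, the corrected invocation from the earlier message would read as below (usersYouCareAbout is the same placeholder as before, a comma-separated list of user names):

    sreport user top users=<usersYouCareAbout> --tres=cpu,mem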

----
Doug Jacobsen, Ph.D.
NERSC Computer Systems Engineer
National Energy Research Scientific Computing Center <http://www.nersc.gov>
dmjacob...@lbl.gov

------------- __o
---------- _ '\<,_
----------(_)/  (_)__________________________


On Thu, May 4, 2017 at 7:02 AM, Douglas Jacobsen <dmjacob...@lbl.gov> wrote:

> You can also use sreport to get summaries (though it is limited)
>
> sreport user top users=<usersYouCareAbout> --gres=cpu,mem
>
> You can include other filters like cluster, start, and end, and group by
> account, and so on.  The limitation is that the TopUsers report only ever
> shows the top 10 users.  Would be nice to get the top N users.
>
>
> On Thu, May 4, 2017 at 6:58 AM, Swindelles, Ed <ed.swindel...@uconn.edu>
> wrote:
>
>> Hi Mahmood -
>>
>> I don’t believe SLURM has a direct way to generate that report. You can
>> collect all of the data necessary to create that report with the “sacct”
>> command, though. Then, use your favorite data analysis tools (Bash, Excel,
>> R, etc.) to aggregate rows and format it appropriately. Here’s an example
>> sacct command to dump all jobs for the last seven days, including columns
>> for some of the metrics you asked for:
>>
>> $ sacct -aXS $(date -d "-7 days" +%F) -oUser,JobID,State,Start,Elapsed,AllocCPUS,ReqMem,MaxDiskWrite
>>
>> (Note that this really is ALL jobs, so consider filtering by user or
>> account if you’ve got thousands/millions/etc. The man page for sacct is
>> very friendly.)
>>
>> I’ll also put in a plug for XDMoD. It is a powerful web app for getting
>> useful aggregate data from SLURM (and others). We use it quite a bit to
>> generate usage reports, mostly for administration. http://open.xdmod.org
>>
>> Best of luck,
>>
>> --
>> Ed Swindelles
>> Manager of Advanced Computing
>> University of Connecticut
>>
>> On May 4, 2017, at 9:08 AM, Mahmood Naderan <mahmood...@gmail.com> wrote:
>>
>> Hi,
>> I read the accounting page https://slurm.schedmd.com/accounting.html
>> however since it is quite large, I didn't get my answer!
>> I want to know the user stats for their jobs. For example, something like
>> this
>>
>> <user> <jobs submitted> <jobs completed successfully> <total wall clock
>> time for all jobs including successful and not> <total number of cores
>> used> <total memory used> <total disk used> ...
>>
>> Assume a user has submitted 2 jobs with the following specs:
>> job1: 10 minutes, 2.4GB memory, 4 cores, 1GB disk, success
>> job2: 15 minutes, 6GB memory, 2 cores, 2 GB disk, failed (due to his code
>> and not the system error)
>>
>> So the report looks like
>>
>> user, 2, 1, 25 min, 6, 8.4GB, 3GB, ...
>>
>> How can I get that?
>>
>> Regards,
>> Mahmood
>>
>
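
Tying the thread together: once sacct has dumped the job rows, the per-user summary Mahmood described is a small aggregation exercise. A minimal sketch in Python, assuming '|'-separated output from sacct's --parsable2 option; the sample lines and the crude ReqMem unit handling below are illustrative assumptions, not real cluster output:

```python
# Aggregate sacct --parsable2 output into a per-user report:
# jobs submitted, jobs completed, total wall-clock minutes,
# total allocated cores, total requested memory (GB).
from collections import defaultdict

# Hypothetical sample data standing in for:
#   sacct -aXS <date> -oUser,JobID,State,Elapsed,AllocCPUS,ReqMem --parsable2
SAMPLE = """\
User|JobID|State|Elapsed|AllocCPUS|ReqMem
alice|101|COMPLETED|00:10:00|4|2.4Gn
alice|102|FAILED|00:15:00|2|6Gn
"""

def elapsed_minutes(s):
    """Convert Slurm's [D-]HH:MM:SS Elapsed field to minutes."""
    days, _, rest = s.rpartition("-")
    h, m, sec = (int(x) for x in rest.split(":"))
    return (int(days) if days else 0) * 1440 + h * 60 + m + sec / 60

def summarize(text):
    stats = defaultdict(
        lambda: {"jobs": 0, "ok": 0, "min": 0.0, "cpus": 0, "mem_gb": 0.0}
    )
    lines = text.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        user, _jobid, state, elapsed, cpus, reqmem = line.split("|")
        s = stats[user]
        s["jobs"] += 1
        s["ok"] += state == "COMPLETED"
        s["min"] += elapsed_minutes(elapsed)
        s["cpus"] += int(cpus)
        # Crude: strips Slurm's per-node/per-cpu suffix (n/c) and the unit
        # letter, and assumes every value is already in GB.
        s["mem_gb"] += float(reqmem.rstrip("nc").rstrip("KMGT"))
    return dict(stats)

print(summarize(SAMPLE))
```

In practice you would feed the script real sacct --parsable2 output instead of the embedded sample, and extend the ReqMem parsing to handle K/M/T units properly.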
