To emphasize what Thomas wrote: backfill will only be useful if users submit 
jobs with realistic runtime limits. If every job is submitted with a default 
runtime limit of, for example, 7 days, then Slurm will not backfill your small 
jobs while it waits for the resources needed by the highest-priority large job. 
It will only backfill if it can do so without delaying the start of the 
highest-priority job:


  1.  Slurm needs resources to run Job A. It looks at the currently running 
jobs; based on their limits, they will all finish in < 7 days.
  2.  Slurm looks at the runtime limits of jobs B, C, etc. queued behind Job A. 
They all request 7 days.
  3.  Slurm figures that starting any of jobs B, C, etc. would push back the 
start of Job A to at least 7 days from now; if it just waits for the current 
jobs to finish, Job A will start in < 7 days. So it never backfills jobs B, C, 
etc.

Training users to submit jobs with realistic runtime limits is a User Education 
Opportunity.
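
As a rough sketch (the script name, time value, and date below are placeholders, 
not from this thread), a user can give a realistic limit at submission time, and 
sacct lets you show users how much of a requested limit their jobs actually used:

    # submit with a realistic 4-hour limit instead of the partition default
    sbatch --time=04:00:00 myjob.sh

    # compare requested limits to actual elapsed time for recent jobs
    sacct -X -S 2019-07-01 -o JobID,User,Timelimit,Elapsed,State

Jobs where Elapsed is consistently a small fraction of Timelimit are the ones 
worth talking to users about.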

John


From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of "Thomas M. Payerle" <paye...@umd.edu>
Reply-To: Slurm User Community List <slurm-users@lists.schedmd.com>
Date: Tuesday, July 9, 2019 at 10:23 AM
To: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: Re: [slurm-users] Jobs waiting while plenty of cpu and memory available

You can use squeue to see the priority of jobs.  I believe it normally shows 
jobs in order of priority, even though it does not display the priority value.  
If you want to see the actual priority, you need to request it in the format 
field.  I typically use

    squeue -o "%.18i %.12a %.6P %.8u %.2t %.8m %.4D %.4C %12l %12p %Q %b %R" <any other squeue options>
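
If you mainly care about the pending jobs, a shorter variant (just a sketch -- 
adjust the fields and widths to taste) sorts them by priority and adds the 
pending reason:

    # pending jobs only, highest priority first, with the pending reason
    squeue -t PD --sort=-p -o "%.18i %.9P %.8u %.10Q %.12l %r"

For a single job, "scontrol show job <jobid>" prints the pending Reason along 
with everything the job requested.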

Do you have backfill enabled?  This can help in many cases.
If the job with the highest priority is quite wide, Slurm will reserve resources 
for it.  E.g., if it requests all of your nodes, Slurm will reserve nodes for 
the wide job as they become idle, until no other jobs are running and it can 
finally start.  Without backfill, no other jobs will run before it.  With 
backfill, Slurm estimates when all the nodes needed by the highest-priority job 
will be available (based on the walltime limits of the running jobs), and allows 
other jobs to run on the reserved nodes (backfill) as long as they are expected 
to complete (based on their own walltime limits) before the remaining nodes for 
the top-priority job become available.  This can greatly improve utilization of 
the cluster --- I suspect a large percentage of our jobs run as backfill.
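
For reference, backfill is just a scheduler plugin choice in slurm.conf; the 
parameter values below are only illustrative, not a recommendation for your site:

    # backfill scheduler; bf_window is in minutes (10080 = 7 days)
    SchedulerType=sched/backfill
    SchedulerParameters=bf_window=10080,bf_continue,bf_max_job_test=1000

You can check what you are currently running with 
"scontrol show config | grep -i scheduler".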


On Tue, Jul 9, 2019 at 10:10 AM Edward Ned Harvey (slurm) 
<sl...@nedharvey.com> wrote:
> From: slurm-users <slurm-users-boun...@lists.schedmd.com> On Behalf Of Ole Holm Nielsen
> Sent: Tuesday, July 9, 2019 2:36 AM
>
> When some jobs are pending with Reason=Priority, this means that other
> jobs with a higher priority are waiting for the same resources (CPUs) to
> become available, and they will have Reason=Resources in the squeue
> output.

Yeah, that's exactly the problem. There are plenty of CPU and memory resources 
available, yet jobs are waiting. Is there any way to know what resources, 
specifically, the jobs are waiting for, or which jobs are ahead of a particular 
job in the queue, so I can then look at what resources the first job requires? 
"scontrol show partition" doesn't reveal any clear problems:

    PartitionName=batch
       AllowGroups=ALL AllowAccounts=ALL DenyQos=foo,bar,baz
       AllocNodes=ALL Default=YES QoS=N/A
       DefaultTime=00:15:00 DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
       MaxNodes=UNLIMITED MaxTime=3-00:00:00 MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED
       Nodes=alpha[003-068],omega[003-068]
       PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
       OverTimeLimit=NONE PreemptMode=REQUEUE
       State=UP TotalCPUs=4321 TotalNodes=123 SelectTypeParameters=NONE
       DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED

The QoS policies are not new and have not changed recently, yet these pending 
jobs are a new problem. I can't seem to get any information about why they're 
pending.



--
Tom Payerle
DIT-ACIGS/Mid-Atlantic Crossroads        paye...@umd.edu
5825 University Research Park               (301) 405-6135
University of Maryland
College Park, MD 20740-3831
