Ale, if I understand your desired outcome, you could take a look at LLN=YES
in the partition definition.
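For instance, a hypothetical partition line (the partition and node names are placeholders):

    PartitionName=batch Nodes=node[01-16] LLN=YES

LLN=YES tells Slurm to allocate jobs to the least-loaded nodes first, i.e. those with the largest count of idle CPUs.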
Regards,
Lyn
On Wed, Aug 31, 2022 at 10:31 AM Alejandro Acuña <
alejandro.acu...@iflp.unlp.edu.ar> wrote:
> Hi all.
> Under Slurm 19.05, is there a way to configure partition nodes to submit
Mike, it feels like there may be other PriorityWeight terms that are
non-zero in your config. QoS or partition-related, perhaps?
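As a quick sanity check, something like:

    scontrol show config | grep -i PriorityWeight
    sprio -l

will show all of the weights in effect, and sprio -l breaks out how each factor contributes to every pending job's priority.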
Regards,
Lyn
On Mon, Jun 13, 2022 at 5:55 AM wrote:
>
> Dear all,
>
> I noticed different priority calculations by running a pipe, the
> settings are for example:
>
>
Jake, my hunch is that your jobs are getting hung up on memory allocation,
such that Slurm is assigning all of a node's memory to each job as it runs;
you can verify w/scontrol show job. If that's what's happening, try setting a
DefMemPerCPU value for your partition(s).
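For example (the partition name, node list, and the 4000 MB figure are all placeholders):

    PartitionName=batch Nodes=node[01-16] DefMemPerCPU=4000

When jobs don't request memory and no default is set, some configurations end up treating each job as wanting all of a node's memory; DefMemPerCPU gives such jobs a sane per-CPU default instead.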
Best of luck,
Lyn
On Thu, May 26, 2022
Hey, Brian,
Neither I nor you are going to like what I'm about to say (but I think it's
where you're headed). :)
We have an equivalent use case, where we're trying to keep long work off of
a certain number of nodes. Since we already have used "long" as a QoS name,
to keep from overloading "long,"
David, take a look at the various instances of the string "LLN" throughout
slurm.conf, as well as pack_serial_at_end. (I suspect you may want LLN=no
on your partition definition.)
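In other words, something along these lines (names are placeholders):

    PartitionName=serial Nodes=node[01-16] LLN=NO

and, if you want serial jobs packed onto the last of the available nodes:

    SchedulerParameters=pack_serial_at_end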
Best,
Lyn
On Tue, Jun 8, 2021 at 11:51 AM David Chaffin
wrote:
> replying to myself as I can't quite figure out how
Hi Matteo,
Hard to say without seeing your priority config values, but I'm guessing
you want to take a look at
https://slurm.schedmd.com/priority_multifactor.html.
Regards,
Lyn
On Tue, Apr 14, 2020 at 12:02 AM Matteo F wrote:
> Hello there,
> I am having problems understanding the slurm schedu
James, you might take a look at CompleteWait and KillWait.
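For example, in slurm.conf (values are illustrative; KillWait defaults to 30 seconds):

    KillWait=30
    CompleteWait=32

A nonzero CompleteWait, set a bit longer than KillWait, makes the scheduler hold off on nodes with jobs still in the completing (CG) state, which should keep new jobs from landing there while your epilog runs.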
Regards,
Lyn
On Fri, Jan 3, 2020 at 12:27 PM Erwin, James wrote:
> Hello,
>
>
>
> I’ve recently updated a cluster to SLURM 19.05.4 and notice that new jobs
> are starting on nodes still in the CG state. In an epilog I am running node
>
Hey Brian,
I think the discussion was in the context of suspend/resume,
and it was the Reserved value that effectively represents that time.
Regards,
Lyn
On Sat, Sep 21, 2019 at 9:15 AM Brian Andrus wrote:
> There was a command shared at the SLUG that showed how long it took a
> node to go fro
Hi Sven,
You'll probably be better served by switching your purge time units to
hours instead of months; this will cause the purge to remove much smaller
amounts of data, much more frequently (once per hour instead of once per month). Also,
depending on your job throughput, and how long your DB has been sto
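For example, in slurmdbd.conf (the retention lengths are illustrative; roughly three months either way):

    PurgeJobAfter=2160hours
    PurgeStepAfter=2160hours

Specifying the units as hours makes slurmdbd run the purge hourly, in much smaller batches, rather than in one large monthly sweep.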
Hi Jean-Mathieu,
I'd also recommend that you update to 17.11.12. I had issues w/job arrays
in 17.11.7,
such as tasks erroneously being held as "DependencyNeverSatisfied" that,
I'm
pleased to report, I have not seen in .12.
Best,
Lyn
On Fri, Jan 11, 2019 at 8:13 AM Jean-mathieu CHANTREIN <
jean-m
Thanks for helping out. No I have not enforced any other limits, including
> AccountingStorageEnforce.
>
> Thanks
> Sid
>
> On Thu, Jul 26, 2018 at 5:06 PM Lyn Gerner
> wrote:
>
>> Hi,
>>
>> Have you enforced other limits successfully? What is the value of
Hi,
Have you enforced other limits successfully? What is the value of
AccountingStorageEnforce?
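As a quick check:

    scontrol show config | grep AccountingStorageEnforce
    sacctmgr show qos

The first shows whether limit/QoS enforcement is enabled at all; the second shows the limits actually attached to each QoS.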
Regards,
Lyn
On Thu, Jul 26, 2018 at 1:45 PM, Siddharth Dalmia
wrote:
>
> Hi all, We wanted to try make 2 different qos (priority and normal). For
> priority QOS - 1) Each user is only allowed 1 JOB
Hi Dmitri,
You might check the value you have for AccountingStorageEnforce in
slurm.conf to make sure it's one that enables QoS enforcement.
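For QoS limits to be enforced, slurm.conf needs something like (illustrative):

    AccountingStorageEnforce=associations,limits,qos

The limits and qos tokens together are what enable enforcement of QoS-based limits; a value without them will let jobs sail past MaxSubmitJobsPerUser.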
Regards,
Lyn
On Sat, Apr 7, 2018 at 9:32 AM, Dmitri Chebotarov wrote:
>
> The MaxSubmitJobsPerUser seems to be working when QOS where
> MaxSubmitJobsPerU