[slurm-users] squeue: unrecognized option '--array-unique'

2023-09-12 Thread Loris Bennett
Hi,

Since upgrading to 23.02.5, I am seeing the following error

  $ squeue --array-unique
  squeue: unrecognized option '--array-unique'
  Try "squeue --help" for more information

The help for 'squeue' implies it is still a valid option:

  $ squeue --help | grep array-unique
--array-unique  display one unique pending job array

Is this a regression or is something else going on?

Regards

Loris Bennett

-- 
Dr. Loris Bennett (Herr/Mr)
ZEDAT, Freie Universität Berlin



Re: [slurm-users] squeue: unrecognized option '--array-unique'

2023-09-12 Thread Tommi Tervo
> Since upgrading to 23.02.5, I am seeing the following error
> 
>  $ squeue --array-unique
>  squeue: unrecognized option '--array-unique'
> 
> Is this a regression or is something else going on?

The 23.05 NEWS file reveals that it has been removed; unfortunately bug 4346 is private.

commit e7fbb89b810cb7aa3316bcb6b1b43d9dabd1e645
Author: Tim Wickberg 
Date:   Tue Jan 3 15:30:46 2023 -0700

squeue - remove --array-unique option.

Bug 4346.

diff --git a/NEWS b/NEWS
index 3488e745c7..3d48b83f2e 100644
--- a/NEWS
+++ b/NEWS
@@ -211,6 +211,7 @@ documents those changes that are of interest to users and 
administrators.
  -- helpers.conf - Allow specification of node specific features.
  -- helpers.conf - Allow many features to one helper script.
  -- Allow nodefeatures plugin features to work with cloud nodes.
+ -- squeue - removed --array-unique option.


BR,
Tommi



Re: [slurm-users] squeue: unrecognized option '--array-unique'

2023-09-12 Thread Loris Bennett
Loris Bennett  writes:

> Hi,
>
> Since upgrading to 23.02.5, I am seeing the following error
>
>   $ squeue --array-unique
>   squeue: unrecognized option '--array-unique'
>   Try "squeue --help" for more information
>
> The help for 'squeue' implies it is still a valid option:
>
>   $ squeue --help | grep array-unique
> --array-unique  display one unique pending job array
>
> Is this a regression or is something else going on?

I see that

  https://github.com/SchedMD/slurm/blob/slurm-23-02-5-1/NEWS

contains the line

  -- squeue - removed --array-unique option.

which makes sense, since 

  --array

does the same thing.  A bug report about '--help' still listing the
removed option is here:

  https://bugs.schedmd.com/show_bug.cgi?id=17669
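
For completeness, an untested sketch of an equivalent invocation, assuming
'--array' does indeed cover the same use case:

  $ squeue --array --states=PENDING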

Cheers,

Loris

-- 
Dr. Loris Bennett (Herr/Mr)
ZEDAT, Freie Universität Berlin



[slurm-users] Guarantee minimum amount of GPU resources to a Slurm account

2023-09-12 Thread Stephan Roth

Dear Slurm users,

I'm looking to fulfill the requirement of guaranteeing availability of 
GPU resources to a Slurm account, while allowing this account to use 
other available GPU resources as well.


The guaranteed GPU resources should be of at least 1 type, optionally up 
to 3 types, as in:

Gres=gpu:type_1:N,gpu:type_2:P,gpu:type_3:Q

The version of Slurm I'm using is 20.11.9.


Ideas I came up with so far:

Placing a reservation seems like the simplest solution. But this forces 
users of the account to decide whether to submit their jobs within the 
reservation or outside, based on a manual check of currently available 
GPU resources in the cluster.


Changing the partition setup by moving nodes into a new partition for 
exclusive use of the account is an overhead I'd like to avoid, as this 
is a time-limited scenario, even though it looks like a workable solution 
when combined with an extension to job_submit.lua that prioritizes 
partitions for users of said account.



I haven't looked at QOS yet; I'm hoping for a short-cut from anyone who 
already has a working solution to my problem.


If you have such a solution, would you mind sharing it?

Thanks,
Stephan



Re: [slurm-users] Guarantee minimum amount of GPU resources to a Slurm account

2023-09-12 Thread Bernstein, Noam CIV USN NRL (6393) Washington DC (USA)
Is this what you want?
Magnetic Reservations

The default behavior for reservations is that jobs must request a reservation 
in order to run in it. The MAGNETIC flag allows you to create a reservation 
that will allow jobs to run in it without requiring that they specify the name 
of the reservation. The reservation will only "attract" jobs that meet the 
access control requirements.

(from https://slurm.schedmd.com/reservations.html)
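
A rough, untested sketch of creating such a reservation (account name and
node list are placeholders; on 20.11 I'd reserve whole GPU nodes, since
reserving individual GRES within a reservation may not be supported there):

  $ scontrol create reservation ReservationName=acct_gpu_guarantee \
        Accounts=my_account StartTime=now Duration=infinite \
        Flags=MAGNETIC Nodes=gpu-node[01-02]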

On Sep 12, 2023, at 10:14 AM, Stephan Roth <stephan.r...@ee.ethz.ch> wrote:

Dear Slurm users,

I'm looking to fulfill the requirement of guaranteeing availability of GPU 
resources to a Slurm account, while allowing this account to use other 
available GPU resources as well.

Noam Bernstein, Ph.D.
Center for Materials Physics and Technology
U.S. Naval Research Laboratory
T +1 202 404 8628 F +1 202 404 7546
https://www.nrl.navy.mil



Re: [slurm-users] Guarantee minimum amount of GPU resources to a Slurm account

2023-09-12 Thread Stephan Roth

Thanks Noam, this looks promising!

I'll have to test whether a job allowed to use such a reservation will 
run outside of it when the reservation's resources are all occupied, or 
whether it will queue up waiting to run inside the reservation.



On 12.09.23 16:28, Bernstein, Noam CIV USN NRL (6393) Washington DC 
(USA) wrote:

Is this what you want?

Magnetic Reservations

The default behavior for reservations is that jobs must request a
reservation in order to run in it. The MAGNETIC flag allows you to
create a reservation that will allow jobs to run in it without
requiring that they specify the name of the reservation. The
reservation will only "attract" jobs that meet the access control
requirements.


(from https://slurm.schedmd.com/reservations.html)


On Sep 12, 2023, at 10:14 AM, Stephan Roth wrote:


Dear Slurm users,

I'm looking to fulfill the requirement of guaranteeing availability of 
GPU resources to a Slurm account, while allowing this account to use 
other available GPU resources as well.





Re: [slurm-users] Guarantee minimum amount of GPU resources to a Slurm account

2023-09-12 Thread Chris Samuel

On 12/9/23 9:22 am, Stephan Roth wrote:


Thanks Noam, this looks promising!


I would suggest that as well as the "magnetic" flag you may want the 
"flex" flag on the reservation too, in order to let jobs that match it 
run on GPUs outside of the reservation.
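
A rough, untested sketch of adding both flags to the reservation
(placeholder reservation name):

  $ scontrol update ReservationName=acct_gpu_guarantee Flags=MAGNETIC,FLEX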


All the best,
Chris



Re: [slurm-users] Guarantee minimum amount of GPU resources to a Slurm account

2023-09-12 Thread Stephan Roth

Thanks Chris, this completes what I was looking for.

Should have had a better look at the scontrol man page.

Best,
Stephan

On 13.09.23 02:24, Chris Samuel wrote:

On 12/9/23 9:22 am, Stephan Roth wrote:


Thanks Noam, this looks promising!


I would suggest that as well as the "magnetic" flag you may want the 
"flex" flag on the reservation too, in order to let jobs that match it 
run on GPUs outside of the reservation.


All the best,
Chris