Lyn,

I am confused then.
In the man page for slurm.conf:

       Shared  The Shared configuration parameter has been replaced by the
       OverSubscribe parameter described above.


I have exactly the settings you list for SelectType and SelectTypeParameters as 
well.
I had already tried the Shared=No setting for the partition, but it seemed to 
be ignored, which is why I looked into the OverSubscribe option.
This is running Slurm 16.05.
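
For reference, the partition line I had been experimenting with looked roughly 
like this (partition and node names here are placeholders, not our real config):

PartitionName=compute Nodes=node[001-010] Default=YES State=UP Shared=NO
# per the man page note above, the 16.05 spelling I tried instead:
PartitionName=compute Nodes=node[001-010] Default=YES State=UP OverSubscribe=NO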


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238



From: Lyn Gerner [mailto:schedulerqu...@gmail.com]
Sent: Tuesday, August 09, 2016 2:29 PM
To: slurm-dev
Subject: [slurm-dev] Re: Fully utilizing nodes

Hi Brian,

You'll need Shared=No in the partition definition (Oversubscribe not required). 
That will cap your allocations to one user task per core.

To allocate >1 job per node, you can use something like these two values:

SelectType              = select/cons_res
SelectTypeParameters    = CR_CORE_MEMORY

(See the other CR_* options; CR_LLN is what you *don't* want.)
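
Putting those together with the partition setting, the relevant slurm.conf 
lines would look something like this sketch (partition and node names are 
placeholders):

SelectType=select/cons_res
SelectTypeParameters=CR_CORE_MEMORY
PartitionName=compute Nodes=node[001-010] Default=YES State=UP Shared=NO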

With the above, your users should be able to specify and obtain --exclusive as 
desired.

Best,
Lyn

On Tue, Aug 9, 2016 at 11:06 AM, Andrus, Brian Contractor <bdand...@nps.edu> wrote:
All,

I am trying to figure out the bits required to allow users to use part of a 
node and not block others from using remaining resources.

It looks like the "OverSubscribe" option is what I need, but that doesn't seem 
to be all of it.

I would like users to be able to request --exclusive if needed.
However, when users don't, I would like Slurm to prefer packing their jobs onto 
as few nodes as possible when they start.
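
In other words, something like the following (myjob.sh is just a hypothetical 
script name):

sbatch --exclusive myjob.sh    # whole node, only when really needed
sbatch --ntasks=1 myjob.sh     # one core; should share the node with other jobs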

I suspect it may be a combination of the settings in slurm.conf as well as how 
users are requesting nodes.

Currently, I have a user running a job array where each task needs only one 
core. His script uses:
#SBATCH --time=00:10:00
#SBATCH --ntasks=1

But Slurm is allocating one node per task instead of putting multiple tasks on 
a node. Additionally, it appears nobody else is allowed to use that node until 
his job completes.

Could someone point me to the proper settings, for both users and Slurm, to 
accomplish this?

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238

