Dear slurm users,
I would like to know if it is possible to prepare a slurm submission
script in a way that initially CPU resources are requested (let's say 30
CPUs), and afterwards the assigned resources are used to launch an
array of 30 single-CPU jobs? I would greatly appreciate any help
All,
Does anyone have an example of setting features (if not set) in the Lua job
submission scripts?
job_desc.features
There was a discussion here, but it appears to be for the case where it is
checked and rejected
https://groups.google.com/d/topic/slurm-users/C-oYERITK9c/discussion
-Kevin
Hello,
We do this, it works like most of the other string-based fields, e.g.,
function job_submit(job_request, partinfo, submit_uid)
    job_request['features'] = 'special'
    return slurm.SUCCESS
end
Is there something detailed you are looking for?
-Doug
Doug Jacobsen, Ph.D.
NERSC Comp
Literal job arrays are built into Slurm:
https://slurm.schedmd.com/job_array.html
Alternatively, if you wanted to allocate a set of CPUs for a parallel task, and
then run a set of single-CPU tasks in the same job, something like:
#!/bin/bash
#SBATCH --ntasks=30
srun --ntasks=${SLURM_NTASKS}
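To flesh that out: a sketch of a job script that runs a parallel step first and then fans out single-CPU steps inside the same allocation. The program names (./parallel_program, ./single_cpu_task) are placeholders, not from the thread, and --exclusive here is the step-level CPU fencing behavior of srun in the 18.08-era releases discussed below:
#!/bin/bash
#SBATCH --ntasks=30

# Step 1: a tightly coupled step using all 30 tasks.
srun --ntasks="${SLURM_NTASKS}" ./parallel_program

# Step 2: 30 independent single-CPU steps in the same allocation.
# --exclusive keeps each step on its own CPU; wait for all of them.
for i in $(seq 1 "${SLURM_NTASKS}"); do
    srun --ntasks=1 --exclusive ./single_cpu_task "$i" &
done
wait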
Is it safe to assume the value is nil if not set?
if (job_desc['partition'] == "parallel" and job_desc['features'] == nil) then
    job_desc['features'] = "[haswell|broadwell|skylake]"
end
-Kevin
From: slurm-users on behalf of Douglas Jacobsen
Reply-To: Slurm User Community List
Date: Wed
thank you Michael for the feedback, my scenario is the following: I want
to run a job array of (let's say) 30 jobs. So I set the slurm input as
follows:
#SBATCH --array=1-104%30
#SBATCH --ntasks=1
however only 4 jobs within the array are launched at a time due to the
allowed max number of jobs
Alfredo,
I’m assuming the resources are used initially in some sort of tightly-coupled
parallel task, or at least some workload where all the tasks finish at about
the same time. I’m wondering, and also assuming, that the tasks you’re looking to
run afterwards as part of an array are less tightly coupled
Literal job arrays are built into Slurm:
https://slurm.schedmd.com/job_array.html
yes, and the best way to describe these is "job generators".
that is, you submit one and it sits in the pending queue, while
the array elements kind of "bud" off the parent job. each of
the array jobs is a full
Thank you Aaron for the reply,
Specifically I am trying to run what in chemistry is known as an
Umbrella Sampling simulation, in which independent simulation windows are
run. The total number of windows for the whole simulation is 104, but
allocating 104 cores to perform the simulation would si
Yes. We use something like this
if job_desc.features == nil then
    job_desc.features = "special"
else
    job_desc.features = job_desc.features .. ",special"
end
Bill
On 12/19/2018 09:27 AM, Kevin Manalo wrote:
Is it safe to assume the value is nil if not set?
Hi Alfredo,
You can have a look at using https://github.com/eth-cscs/GREASY . It was
developed before array-jobs were supported in slurm and it will do exactly
what you want.
Regards,
Carlos
On Wed, Dec 19, 2018 at 3:33 PM Alfredo Quevedo
wrote:
> thank you Michael for the feedback, my scenario
Does slurm remove job completion info from its memory after a while?
Might explain why I'm seeing jobs getting canceled when their
dependent predecessor step finished ok. Below is the egrep
'352209(1|2)_11' from slurmctld.log. The 3522092 job array was created
with -d aftercorr:3522091. Looks like
Yesterday I upgraded from 18.08.3 to 18.08.4. After the upgrade, I found
that batch scripts named "batch" are being rejected. Simply changing the
script name fixes the problem. For example:
$ sbatch batch
sbatch: error: ERROR: A time limit must be specified
sbatch: error: Batch job submission failed
Hi;
We upgraded from 18.08.3 to 18.08.4, and we also have a job_submit.lua
script. We see nearly the same issue at our cluster:
$ sbatch batch
sbatch: error: Batch job submission failed: Unspecified error
$ mv batch nobatchy
$ sbatch nobatchy
Submitted batch job 172174
I hope this helps.
Ahmet M.
thank you very much Carlos for the info,
regards
Alfredo
Sent from BlueMail
On 19 December 2018 at 13:36, Carlos Fenoy wrote:
>Hi Alfredo,
>
>You can have a look at using https://github.com/eth-cscs/GREASY . It
>was
>developed before array-jobs were supported in slurm and
Looking through the slurm.conf docs and grepping around the source code,
it looks like MinJobAge might be what I need to adjust. I changed it
by three orders of magnitude, 300 -> 300000, on our dev cluster. I'll see
how things go.
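For reference, a minimal slurm.conf sketch of that change (the value is the one quoted in this message; MinJobAge is in seconds and defaults to 300, i.e. 5 minutes):
# slurm.conf fragment: keep completed job records in slurmctld
# memory for ~3.5 days instead of the default 5 minutes, so
# dependent jobs can still see the predecessor's completion state.
MinJobAge=300000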
On Wed, Dec 19, 2018 at 1:14 PM Eli V wrote:
>
> Does slurm remove job completion info from its memory after a while?
Hi Alfredo,
Beyond what is already suggested, I have used the following script to run a set
number of jobs simultaneously within a batch script.
for thing in "${arrayOfThings[@]}"; do echo "$thing"; done | (
  srun -J JobName xargs -I{} --max-procs "${SLURM_JOB_CPUS_PER_NODE}" bash -c '{
    someCommand {}
  }' )
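Outside of an allocation, the same fan-out can be tried with plain xargs. This is a minimal stand-in sketch (the item list and the echo command are placeholders) that falls back to 4 workers when SLURM_JOB_CPUS_PER_NODE is unset:

```shell
#!/bin/bash
# Feed a list of items to xargs, which runs up to N commands in parallel.
# N comes from SLURM_JOB_CPUS_PER_NODE inside a job, else defaults to 4.
arrayOfThings=(alpha beta gamma delta)
for thing in "${arrayOfThings[@]}"; do echo "$thing"; done |
  xargs -I{} -P "${SLURM_JOB_CPUS_PER_NODE:-4}" bash -c 'echo "processed {}"'
```

With -I{}, xargs substitutes each input line for {} and, with -P, keeps up to N of those commands running at once, which is what bounds the concurrency in the srun version above.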
Thank you Sam for this example. I will try to apply this procedure to my study
case,
regards
Alfredo
Sent from BlueMail
On 20 December 2018 at 00:11, Sam Hawarden wrote:
>Hi Alfredo,
>
>
>Beyond what is already suggested, I have used the following script to
>run a set