-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 13/11/13 11:43, Leith Bade wrote:
> OK that will likely be the best.
>
> I assume I will need to install master on the nodes too if the
> data structures have changed.
You'll want the same everywhere.
Good luck!
Chris
--
Christopher Samuel
According to the docs, builtin is supposed to be the simplest scheduler that
does not provide any scheduling services.
Looking at builtin's code it seems to do quite a bit of work as it creates a
thread that appears to do something with computing 'backfill' times etc.
The docs don't mention ho
OK that will likely be the best.
I assume I will need to install master on the nodes too if the data structures
have changed.
Thanks,
Leith Bade
leith.b...@anu.edu.au
-----Original Message-----
From: Christopher Samuel [mailto:sam...@unimelb.edu.au]
Sent: Wednesday, 13 November 2013 10:34 AM
T
On 13/11/13 10:20, Leith Bade wrote:
> Is master still compatible with a 2.6 install?
I don't think so; master is what will become 13.12, which is a new
feature release.
Given that's due to be out (as the name suggests) in December I'd
think you'll
v2.6 is stable. The master branch is still under active development and
there are changes in data structures, plugin calls, etc. You can upgrade
and keep using the old configuration file. Which is best to use comes
down to a judgement call.
Quoting Leith Bade :
Hi,
I have now managed to set up a small cluster inside a few VMs with SLURM 2.6.
Hi,
I have now managed to set up a small cluster inside a few VMs with SLURM
2.6.
I want to start creating the new scheduler plugin but am not sure if I
should work on the 2.6 branch or master.
Is master still compatible with a 2.6 install? Can I just overwrite the
existing setup with
oh you made me feel like a solution was here until I read
"but unfortunately, that doesn't work either".
Ok.
Thanks
Jackie
On Tue, Nov 12, 2013 at 2:19 PM, Eckert, Phil wrote:
> Jackie,
>
> it looks like in 2.5.7, according to the scontrol man page, the correct
> syntax would be:
>
> scontrol create reservation flags=PART_NODES,IGNORE_JOBS nodes=ALL
> starttime=now endtime=tomorrow partitionname=pbatch user=eckert
Jackie,
it looks like in 2.5.7, according to the scontrol man page, the correct syntax
would be:
scontrol create reservation flags=PART_NODES,IGNORE_JOBS nodes=ALL
starttime=now endtime=tomorrow partitionname=pbatch user=eckert
but unfortunately, that doesn't work either.
Phil
From: Jacqu
Here is my problem Phil,
My node name is not like n00[00-91]; instead we have a suffix added to our
hostname, like n.jackie0. Since we have multiple n nodes we had to
add the cluster they were associated to on the FQDN. And when I tried
nodes='n00[00-91].jackie0' I got a message that the n
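For what it's worth, the bracket syntax itself is straightforward to expand by hand. The sketch below is an illustrative Python re-implementation of the expansion, not SLURM's own hostlist parser (which in 2.5.x reportedly rejects a domain suffix after the bracket, as described above); it just shows what 'n00[00-91].jackie0' would expand to if the suffix were accepted:

```python
import re

def expand_hostlist(expr):
    """Expand a SLURM-style bracketed hostlist expression such as
    'n00[00-91].jackie0' into individual host names.

    Illustrative only; SLURM's real hostlist parser lives in the
    slurm source and handles more forms than this sketch does."""
    m = re.match(r'^(.*?)\[(\d+)-(\d+)\](.*)$', expr)
    if m is None:
        return [expr]  # no bracketed range: already a single host
    prefix, lo, hi, suffix = m.groups()
    width = len(lo)  # preserve zero padding, e.g. 00 -> n0000
    return ['%s%0*d%s' % (prefix, width, i, suffix)
            for i in range(int(lo), int(hi) + 1)]

hosts = expand_hostlist('n00[00-91].jackie0')
print(hosts[0], hosts[-1], len(hosts))  # n0000.jackie0 n0091.jackie0 92
```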
Thanks for your work. The contribution has been checked into the master
branch. Commit dc0c4e293caccaa.
On 11/07/2013 12:17 PM, Troy Baer wrote:
The attached patch to contribs/torque/pbsnodes.pl adds the following to
make the script act more like pbsnodes on a TORQUE system:
* adds the node o
Ok great. Attached is a patch adding the ability to create job_arrays to
slurm_drmaa. My e-mail to the author bounced as spam so I thought I'd put
it here. A bit ugly having ifdefs based on slurm version, but not sure how
else to do it.
On Tue, Nov 12, 2013 at 12:44 PM, Moe Jette wrote:
>
> The
Jackie,
I was trying this with an earlier version of SLURM. I just built a 2.5.7 test
system and tried it again, and I am seeing the same failures that you do when
any of the nodes in the partition are allocated. A workaround is to use the
"nodes=" option, ie:
scontrol create reservation flags
I also believe I tried that one, as well as the other two, and each time I
got the Nodes busy message. If the nodes are in the alloc state, will either
of these flags work? From what I saw they would not work in this case.
Jackie
On Tue, Nov 12, 2013 at 9:34 AM, Eckert, Phil wrote:
> I see the nodes busy message only if I am trying to create a reservation
> on top of another reservation that includes the same nodes.
The array specification is just a string in the job create RPCs:
typedef struct job_descriptor {   /* For submit, allocate, and update requests */
        ...
        char *array_inx;          /* job array index values */
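Since array_inx is just a string, a C client only has to format the index list the same way sbatch --array does (ranges, comma lists, and an optional ':step') before submitting. As a hedged illustration of that syntax, here is a small Python sketch of the expansion; the real parsing happens inside slurmctld and may accept more forms:

```python
def expand_array_inx(spec):
    """Expand a job-array index string of the kind passed in the
    job_descriptor's array_inx field (same syntax as sbatch --array),
    e.g. '0-3', '1,5,9' or '0-15:4'.  Illustrative sketch only."""
    ids = []
    for part in spec.split(','):
        step = 1
        if ':' in part:            # optional step suffix, e.g. 0-15:4
            part, step = part.split(':')
            step = int(step)
        if '-' in part:            # inclusive range, e.g. 7-9
            lo, hi = part.split('-')
            ids.extend(range(int(lo), int(hi) + 1, step))
        else:                      # single index
            ids.append(int(part))
    return ids

print(expand_array_inx('0-15:4'))   # [0, 4, 8, 12]
print(expand_array_inx('1,3,7-9'))  # [1, 3, 7, 8, 9]
```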
Quoting E V :
Is there a C api for job array creation in slurm? I'd like to get the slurm
drmaa code to do array submission on a run_bulk job and just wondering the
best way to do it.
[root@hpca ~]# scontrol show config | grep PriorityWeightAge
PriorityWeightAge       = 1000
[root@hpca ~]# sprio -w
          JOBID   PRIORITY        AGE    JOBSIZE
        Weights                  1000       1000
[root@hpca ~]# sprio -l
          JOBID     USER   PRIORITY        AGE  FAIRSHARE    JOBSIZE  PARTITION
I see the nodes busy message only if I am trying to create a reservation on top
of another reservation that includes the same nodes. You might try adding the
overlap flag if this is the case.
Phil Eckert
LLNL
From: Jacqueline Scoggins <jscogg...@lbl.gov>
Reply-To: slurm-dev
Is there a C api for job array creation in slurm? I'd like to get the slurm
drmaa code to do array submission on a run_bulk job and just wondering the
best way to do it.
I tried that and it stated that the nodes were busy.
Jackie
On Tue, Nov 12, 2013 at 9:16 AM, Paul Edmon wrote:
> Include the ignore_jobs flag. That will force the reservation.
>
> -Paul Edmon-
>
>
> On 11/12/2013 12:11 PM, Jacqueline Scoggins wrote:
>
> Running slurm 2.5.7 and tried to reser
Include the ignore_jobs flag. That will force the reservation.
-Paul Edmon-
On 11/12/2013 12:11 PM, Jacqueline Scoggins wrote:
Admin reservation on busy nodes
Running slurm 2.5.7 and tried to reserve the nodes of the cluster
because of hardware issues that needed to be repaired. Some of the nodes
were allocated with jobs and others were not.
Running slurm 2.5.7 and tried to reserve the nodes of the cluster because
of hardware issues that needed to be repaired. Some of the nodes were
allocated with jobs and others were not. Tried to do the following but got
an error that the Nodes were busy and the reservation was not set.
scontrol cr
There are two ways to confirm your PriorityWeightAge setting has been read in:
$ scontrol show conf | grep PriorityWeightAge
or
$ sprio -w
Once you confirm that the system has recognized the values you set in your
slurm.conf file, if you still see the problem, I suggest you turn up debugging
an
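The check described above can also be scripted. Below is a minimal Python sketch that parses the 'Key = Value' lines printed by `scontrol show config` and looks up the priority weight; the sample text is hypothetical, and the real output contains many more keys:

```python
def parse_scontrol_config(text):
    """Parse 'Key = Value' lines as printed by `scontrol show config`.
    Minimal sketch for checking values such as PriorityWeightAge."""
    conf = {}
    for line in text.splitlines():
        if '=' in line:
            key, _, value = line.partition('=')
            conf[key.strip()] = value.strip()
    return conf

# Hypothetical excerpt of `scontrol show config` output:
sample = """PriorityMaxAge          = 02:00:00
PriorityWeightAge       = 1000"""
conf = parse_scontrol_config(sample)
print(conf['PriorityWeightAge'])  # 1000
```

In practice you would feed it the output of `scontrol show config` via subprocess and verify the value matches what you set in slurm.conf.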
You should check that the parameters of the configuration file are being loaded.
For example:
scontrol show conf | grep Age | grep Priority
PriorityMaxAge = 02:00:00
PriorityWeightAge = 1
and check that the values are the same as in the file.
Regards.
Juan Pancorbo Armada
juan.