Hi folks,
Does anyone know if there's a way to restrict a frontend node to a
particular partition in Slurm 2.6.x?
We're running 2.6.5+patches (upgrade to 14.x due in 1 month's time) and
I can't see a way to do this in the documentation at present (you can
restrict to users and groups, neither
Wow, well spotted. I came here to see if anyone had reported this same
issue with environment modules, as I noticed several of my jobs failing
on our cluster this morning. Turns out, I'm probably the only one who
had failed jobs, as I have a long-running tmux session open on the head
node, and
2014-09-29 11:10 GMT+02:00 Alan Orth alan.o...@gmail.com:
Wow, well spotted. I came here to see if anyone had reported this same
issue with environment modules, as I noticed several of my jobs failing
on our cluster this morning. Turns out, I'm probably the only one who
had failed jobs, as
Slurm does not support this today.
Quoting Christopher Samuel sam...@unimelb.edu.au:
Hi folks,
Does anyone know if there's a way to restrict a frontend node to a
particular partition in Slurm 2.6.x?
We're running 2.6.5+patches (upgrade to 14.x due in 1 month's time) and
I can't see a way
On Mon, 29 Sep 2014 02:10:07 AM Alan Orth wrote:
Other users wouldn't have noticed because we updated all of our
infrastructure in one go using ansible[0] last Friday.
We use xCAT to manage our clusters, and whilst we could have done that if we
had wished, it would have caused any jobs queued
True. We're lucky, our queue is very short! Also, to be honest, I was
mainly thinking of my web servers etc. when I ran the updates, as the
list of shellshock vectors is quite expansive and covers bash releases
from 1994 to 2014! I didn't realize until afterwards that modules were
implemented as:
Can I submit a RFE for the partition_prio preemption plugin?
Looking through the partition_prio plugin's source code, it does not
appear to be topology aware.
At least not in the way that the consumable resources selection plugin
is; that one has comment blocks
On 09/23/2014 11:27 AM, Trey Dockendorf wrote:
Has anyone used the Lua job_submit plugin while also allowing multiple partitions? I'm not
even sure what the partition value would be in the Lua code when a job is submitted with
--partition=general,background, for example.
We do. We use the
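For anyone finding this thread later, here is a minimal, untested sketch of how a job_submit.lua might inspect such a request. It assumes (an assumption on my part, not confirmed above) that --partition=general,background reaches the plugin as a single comma-separated string in job_desc.partition:

```lua
-- Hypothetical job_submit.lua sketch (untested): split the comma-separated
-- partition string before doing any per-partition checks.
function slurm_job_submit(job_desc, part_list, submit_uid)
   local partitions = job_desc.partition
   if partitions ~= nil then
      for p in string.gmatch(partitions, "([^,]+)") do
         slurm.log_info("job_submit: job requested partition %s", p)
         -- per-partition policy checks would go here
      end
   end
   return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
   return slurm.SUCCESS
end
```

A nil check matters because jobs submitted without --partition leave job_desc.partition unset until the controller fills in the default.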
On Sun, Sep 28, 2014 at 5:53 AM, Marcin Stolarek stolarek.mar...@gmail.com
wrote:
Did I understand you correctly that in your configuration it's possible
to start an interactive shell with:
srun --pty bash, and because this is a non-login shell
the environment has to be set on the submit host?
We
On Mon, Sep 29, 2014 at 1:27 AM, Christopher Samuel
sam...@unimelb.edu.au wrote:
B) If you update a compute node when there are jobs queued under the
previous bash then they will fail when they run there (also cannot find
modules, even though a prologue of ours sets BASH_ENV to force the env
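For context, the sort of prologue trick mentioned here could look roughly like the sketch below. This is a guess at the shape under stated assumptions, not the actual script: a Slurm task prolog can print "export NAME=value" lines that Slurm injects into the task's environment, and BASH_ENV makes non-login, non-interactive bash shells source the named file, which is how the module function can get defined for batch steps. The modules init path is a common default and purely illustrative.

```shell
#!/bin/bash
# Hypothetical TaskProlog sketch (not the script referenced above).
# Lines printed as "export NAME=value" are injected into the task's
# environment by Slurm; pointing BASH_ENV at the modules init file means
# even non-login, non-interactive bash shells define the `module` function.
echo "export BASH_ENV=/usr/share/Modules/init/bash"
```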
About 70 people attended the Slurm User Group Meeting last week in
Lugano, Switzerland. There were a lot of good presentations and
discussions. Copies of the presentations are now available online at
http://slurm.schedmd.com/publications.html
NOTE: A few of the presentations are missing,
Ryan,
Thanks for the information. Is your Lua script something you would be
willing to share with me, either via the mailing list or privately? I'm
able to stumble my way around Lua and am curious how others are defining
available resources, conditions, allowed partitions, etc., in Lua. I've so
On 29/09/14 21:33, je...@schedmd.com wrote:
Slurm does not support this today.
Thanks Moe, we'll see if we can figure another way around it.
cheers!
Chris
--
Christopher Samuel
Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email:
On 30/09/14 02:39, je...@schedmd.com wrote:
About 70 people attended the Slurm User Group Meeting last week in
Lugano, Switzerland.
Thanks so much to you and everyone who organised the meeting and to
everyone who came, it was well worth attending.
All the best,
Chris
--
Christopher Samuel