I've added `cgroup_enable=memory swapaccount=1` to the kernel command line, but that doesn't help.
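(A quick sanity check, assuming a cgroup-v1 memory controller: verify that the kernel actually picked the flags up; the memory.memsw.* files only exist when swap accounting is active.)

    $ cat /proc/cmdline
    $ ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes

If the second file is missing even though the flags appear in /proc/cmdline, the kernel may have been built without swap accounting support.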
I've seen people having this kind of problem, but no one seems to be able to solve it while keeping the cgroups.
Thanks a lot,
Arthur
Kind regards,
Heitor
--
Dr. Christoph Brüning
Universität Würzburg
HPC & DataManagement @ ct.qmat & RZUW
Am Hubland
D-97074 Würzburg
Tel.: +49 931 31-80499
searching tangents and parts of the Slurm source just gave me some directions. I'm guessing Slurm only knows cgroup v1, so it fails when it tries to interact with cgroup v2. Am I correct, or am I barking up the wrong tree?
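(A quick way to check which hierarchy a node is actually running, assuming the cgroup tree is mounted at the usual place: the filesystem type of /sys/fs/cgroup tells them apart.)

    $ stat -fc %T /sys/fs/cgroup

"cgroup2fs" means the unified cgroup-v2 hierarchy; "tmpfs" means the traditional v1 layout with per-controller mounts underneath.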
Thanks for your feedback in advance!
Cheers,
Richard
jobs were successfully allocated to the same node and ran concurrently.
Does anyone know why this behaviour is seen? Why does including memory as a consumable resource lead to node-exclusive behaviour?
Thanks,
Durai
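(A side note, in case it helps: with memory tracked as a consumable resource, a job that does not request memory explicitly is, by default, allocated all of the node's memory, which looks exactly like node-exclusive behaviour; setting a default memory request avoids that. A slurm.conf sketch with made-up values:)

    # memory tracked as a consumable resource alongside cores
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory
    # without a default, jobs that omit --mem are given the whole node's memory
    DefMemPerCPU=2048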
$ scontrol show config | grep NEXT_JOB_ID
NEXT_JOB_ID = 2488059
The next jobid is presumably in the Slurm database.
/Ole
Christoph
On 20/05/2020 12.00, Christoph Brüning wrote:
Dear all,
we set up a floating partition as described in SLURM's QoS documentation
to allow for jobs with a longer than usual walltime on a part of our
cluster: QoS with GrpCPUs and GrpNodes limits attached to the
longer-walltime parti
to "N/A".
Did any of you observe this or similar behaviour?
FWIW, we are running SLURM 17.11 on Debian; an upgrade to 19.05 is scheduled for the next couple of weeks.
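(For illustration, such a floating-partition setup boils down to something like the following, with invented names and limits; 17.11 already accepts the GrpTRES form:)

    $ sacctmgr add qos long
    $ sacctmgr modify qos long set GrpTRES=cpu=256,node=8

    # slurm.conf: attach the QoS to the long-walltime partition
    PartitionName=long Nodes=node[001-032] QOS=long MaxTime=14-00:00:00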
Best,
Christoph
Can Slurm be used to schedule containers?
If someone has any experience using Docker in HPC clusters, please let me know.
Regards,
Mahmood
_Memory
TaskPlugin=task/cgroup
ProctrackType=proctrack/cgroup
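(For completeness, the cgroup.conf that usually accompanies those task/cgroup and proctrack/cgroup settings; the values here are illustrative, not a recommendation:)

    ConstrainCores=yes
    ConstrainRAMSpace=yes
    ConstrainSwapSpace=yes
    AllowedSwapSpace=0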
I would be grateful for any ideas.
Best regards,
René
usage which I guess is
ultimately what you want.
---
Sam Gallop
-----Original Message-----
From: slurm-users On Behalf Of
Christoph Brüning
Sent: 12 June 2019 10:58
To: slurm-users@lists.schedmd.com
Subject: [slurm-users] Rename account or move user from one account to another
Hi everyone,
with the underlying MariaDB, it does
not exactly appear to be a convenient or elegant solution...
Best,
Christoph
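(For the record: moving a user between accounts can be done with sacctmgr alone, no direct MariaDB surgery needed; renaming an account itself is, as far as I can tell, not possible that way. A sketch with made-up names:)

    $ sacctmgr add user alice account=newacct
    $ sacctmgr modify user alice set defaultaccount=newacct
    $ sacctmgr remove user alice where account=oldacct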
don't overcommit space).
Our epilog then cleans up both the per-job temporary space and the per-job /dev/shm at the end.
All the best,
Chris