[slurm-users] sacct cache?

2023-08-15 Thread Danny Marc Rotscher
Dear all, does any of you know whether there is an sacct cache? We see differences for some jobs between the database and the sacct output for the same job ID. Kind regards, Danny Rotscher
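
A minimal sketch of how such a discrepancy might be checked; the job ID and field list are hypothetical, and the direct query assumes MariaDB with the default slurm_acct_db schema (the job table name is prefixed with the cluster name):

  # Accounting as reported by sacct for the job in question
  sacct -j 123456 --format=JobID,State,Elapsed,ExitCode
  # Compare with the row stored in the accounting database
  mysql slurm_acct_db -e "SELECT id_job, state, time_start, time_end FROM <cluster>_job_table WHERE id_job=123456;"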

Re: [slurm-users] Job step does not take the whole allocation

2023-06-30 Thread Danny Marc Rotscher
Hi, thank you very much for your help! Best wishes, Danny > On 30.06.2023, 08:41, Tommi Tervo wrote:

[slurm-users] Job step does not take the whole allocation

2023-06-30 Thread Danny Marc Rotscher
Dear all, we are currently seeing a change in the default behavior of a job step. On our old cluster (Slurm 20.11.9), a job step takes all the resources of the allocation. rotscher@tauruslogin5:~> salloc --partition=interactive --nodes=1 --ntasks=1 --cpus-per-task=24 --hint=nomultithread salloc: Pending
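
One possible explanation (an assumption here, since the referenced reply is not included) is the change in recent Slurm releases where srun no longer inherits --cpus-per-task from the allocation; passing it to the step explicitly restores the old behavior. A sketch, with ./my_app as a hypothetical binary:

  salloc --partition=interactive --nodes=1 --ntasks=1 --cpus-per-task=24 --hint=nomultithread
  # Request the step resources explicitly instead of relying on inheritance
  srun --cpus-per-task=24 ./my_app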

Re: [slurm-users] SlurmDBD 20.02.7

2022-01-06 Thread Danny Marc Rotscher
regards, Danny Rotscher > On 06.01.2022, 16:26, Bas van der Vlies wrote: > > Hi Danny, > > We had the same issue when we upgraded Slurm to 20.11, but maybe the solution > also works for you: > * https://bugs.schedmd.com/show_bug.cgi?id=12947 > > On 06/01/2022

[slurm-users] SlurmDBD 20.02.7

2022-01-06 Thread Danny Marc Rotscher
Hello everyone, today we updated our Slurm database daemon from 20.02.2 to 20.02.7 and everything works, except that deleting a user fails. sacctmgr -i delete user name=xyz account=xyz sacctmgr: slurmdbd: No error Nothing deleted slurmdbd.log: slurmdbd_1 | slurmdbd: error:
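
A minimal sketch for inspecting the state before retrying the delete; the user and account names mirror the placeholders above:

  # Show the association that the delete is supposed to remove
  sacctmgr show assoc where user=xyz account=xyz format=Cluster,Account,User,Partition
  # Then retry the deletion (same command as above)
  sacctmgr -i delete user name=xyz account=xyz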

[slurm-users] Slurm 20.02.5 problems with --gres=gpu:1 and -c >1

2020-11-06 Thread Danny Marc Rotscher
Hello, yesterday we upgraded our cluster from Slurm 20.02.2 to 20.02.5 and noticed some problems with the use of GPUs and more than one CPU per task. I could reproduce the problem in a small Docker container, a description of which can be found at the following link.
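
The original Docker recipe is not included here; a sketch of the kind of request that combines the two options in question, with hypothetical partition and binary names:

  # A GPU job step that also asks for more than one CPU per task
  srun --partition=gpu --gres=gpu:1 -c 2 ./my_gpu_app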