Hello community,

Here is the log from the commit of package slurm for openSUSE:Factory checked in at 2022-10-22 14:13:18
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/slurm (Old)
 and      /work/SRC/openSUSE:Factory/.slurm.new.2275 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "slurm"

Sat Oct 22 14:13:18 2022 rev:79 rq:1030432 version:22.05.5

Changes:
--------
--- /work/SRC/openSUSE:Factory/slurm/slurm.changes      2022-09-26 18:48:46.848119661 +0200
+++ /work/SRC/openSUSE:Factory/.slurm.new.2275/slurm.changes    2022-10-22 14:13:52.616850316 +0200
@@ -1,0 +2,29 @@
+Fri Oct 14 08:49:24 UTC 2022 - Christian Goll <cg...@suse.com>
+
+- updated to 22.05.5
+- NOTE: Slurm validates that libraries are of the same version. Unfortunately,
+  due to an oversight, we failed to notice that the slurmstepd loads the
+  hash_k12 library only after a job has completed. This means that if the
+  hash_k12 library is upgraded before a job finishes, the slurmstepd will load
+  the new library when the job finishes, and will fail due to a mismatch of
+  versions.  This results in nodes with slurmstepd processes stuck
+  indefinitely. These processes require manual intervention to clean up. There
+  is no clean way to resolve these hung slurmstepd processes.
+  The only recommended way to upgrade between minor versions of 22.05 with
+  RPMs, or with upgrades that replace the current binaries and libraries, is
+  to drain the nodes of running jobs first.
+- Fixes a number of moderate-severity issues; notable among them:
+  * Load hash plugin at slurmstepd launch time to prevent issues loading the
+    plugin at step completion if the Slurm installation is upgraded.
+  * Update nvml plugin to match the unique id format for MIG devices in new
+    Nvidia drivers.
+  * Fix multi-node step launch failure when nodes in the controller aren't in
+    natural order. This can happen with inconsistent node naming (such as
+    node15 and node052) or with dynamic nodes which can register in any order.
+  * job_container/tmpfs - cleanup containers even when the .ns file isn't
+    mounted anymore.
+  * Wait up to PrologEpilogTimeout before shutting down slurmd to allow prolog
+    and epilog scripts to complete or timeout. Previously, slurmd waited 120
+    seconds before timing out and killing prolog and epilog scripts.
+
+-------------------------------------------------------------------
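The upgrade note above recommends draining nodes of running jobs before replacing Slurm binaries and libraries, so that no slurmstepd is left running across the library swap. A minimal sketch of that step is below; the node names are placeholders, and the `RUN=echo` guard makes it a dry run by default (the `scontrol update ... State=DRAIN` command itself is standard Slurm administration):

```shell
#!/bin/sh
# Sketch: drain nodes before upgrading Slurm between 22.05 minor versions.
# NODES is a placeholder list; RUN=echo makes this a dry run that only
# prints the commands. Set RUN= (empty) to actually execute them.
NODES="node01 node02"
RUN=${RUN:-echo}

for n in $NODES; do
    # Mark the node as draining so no new job steps start there.
    $RUN scontrol update NodeName="$n" State=DRAIN Reason="slurm 22.05.5 upgrade"
done

# After running jobs have finished, upgrade the packages and resume, e.g.:
#   zypper update 'slurm*'                                   (openSUSE)
#   scontrol update NodeName=<node> State=RESUME
```

Draining first avoids the stuck-slurmstepd condition described in the note, since no step completes against a mismatched hash_k12 library.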

Old:
----
  slurm-22.05.2.tar.bz2

New:
----
  slurm-22.05.5.tar.bz2

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ slurm.spec ++++++
--- /var/tmp/diff_new_pack.08FSmd/_old  2022-10-22 14:13:53.212851728 +0200
+++ /var/tmp/diff_new_pack.08FSmd/_new  2022-10-22 14:13:53.224851757 +0200
@@ -18,7 +18,7 @@
 
 # Check file META in sources: update so_version to (API_CURRENT - API_AGE)
 %define so_version 38
-%define ver 22.05.2
+%define ver 22.05.5
 %define _ver _22_05
 %define dl_ver %{ver}
 # so-version is 0 and seems to be stable

++++++ slurm-22.05.2.tar.bz2 -> slurm-22.05.5.tar.bz2 ++++++
/work/SRC/openSUSE:Factory/slurm/slurm-22.05.2.tar.bz2 /work/SRC/openSUSE:Factory/.slurm.new.2275/slurm-22.05.5.tar.bz2 differ: char 11, line 1
