Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package slurm for openSUSE:Factory checked 
in at 2024-10-15 15:01:34
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/slurm (Old)
 and      /work/SRC/openSUSE:Factory/.slurm.new.19354 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "slurm"

Tue Oct 15 15:01:34 2024 rev:106 rq:1208086 version:24.05.3

Changes:
--------
--- /work/SRC/openSUSE:Factory/slurm/slurm.changes      2024-03-26 
19:32:11.555807228 +0100
+++ /work/SRC/openSUSE:Factory/.slurm.new.19354/slurm.changes   2024-10-15 
15:02:13.990240560 +0200
@@ -1,0 +2,412 @@
+Mon Oct 14 10:40:10 UTC 2024 - Egbert Eich <[email protected]>
+
+- Update to version 24.05.3
+  * `data_parser/v0.0.40` - Added field descriptions.
+  * `slurmrestd` - Avoid creating new slurmdbd connection per request
+    to `* /slurm/slurmctld/*/*` endpoints.
+  * Fix compilation issue with `switch/hpe_slingshot` plugin.
+  * Fix gres per task allocation with threads-per-core.
+  * `data_parser/v0.0.41` - Added field descriptions.
+  * `slurmrestd` - Change back generated OpenAPI schema for
+    `DELETE /slurm/v0.0.40/jobs/` to `RequestBody` instead of using
+    parameters for the request. `slurmrestd` will continue to accept
+    endpoint requests via `RequestBody` or HTTP query.
+  * `topology/tree` - Fix issues with switch distance optimization.
+  * Fix potential segfault of secondary `slurmctld` when falling back
+    to the primary when running with a `JobComp` plugin.
+  * Enable `--json`/`--yaml=v0.0.39` options on client commands to
+    dump data using `data_parser/v0.0.39` instead of outputting
+    nothing.
+  * `switch/hpe_slingshot` - Fix issue that could result in a 0 length
+    state file.
+  * Fix unnecessary message protocol downgrade for unregistered nodes.
+  * Fix unnecessarily packing alias addrs when terminating jobs with
+    a mix of non-cloud/dynamic nodes and powered down cloud/dynamic
+    nodes.
+  * `accounting_storage/mysql` - Fix issue when deleting a qos that
+    could remove too many commas from the qos and/or delta_qos fields
+    of the assoc table.
+  * `slurmctld` - Fix memory leak when using RestrictedCoresPerGPU.
+  * Fix allowing access to reservations without `MaxStartDelay` set.
+  * Fix regression introduced in 24.05.0rc1 breaking
+    `srun --send-libs` parsing.
+  * Fix slurmd vsize memory leak when using job submission/allocation
+    commands that implicitly or explicitly use `--get-user-env`.
+  * `slurmd` - Fix node going into invalid state when using
+    `CPUSpecList` and setting CPUs to the # of cores on a
+    multithreaded node.
+  * Fix reboot asap nodes being considered in backfill after a restart.
+  * Fix `--clusters`/`-M` queries for clusters outside of a
+    federation when `fed_display` is configured.
+  * Fix `scontrol` allowing updating job with bad cpus-per-task value.
+  * `sattach` - Fix regression from 24.05.2 security fix leading to
+    crash.
+  * `mpi/pmix` - Fix assertion when built under `--enable-debug`.
+- Changes from Slurm 24.05.2
+  * Fix energy gathering rpc counter underflow in
+    `_rpc_acct_gather_energy` when more than 10 threads try to get
+    energy at the same time. This prevented any step from getting
+    energy from slurmd until slurmd was restarted, losing energy
+    accounting metrics on the node.
+  * `accounting_storage/mysql` - Fix issue where new user with `wckey`
+    did not have a default wckey sent to the slurmctld.
+  * `slurmrestd` - Prevent slurmrestd segfault when handling the
+    following endpoints when none of the optional parameters are
+    specified:
+      `DELETE /slurm/v0.0.40/jobs`
+      `DELETE /slurm/v0.0.41/jobs`
+      `GET /slurm/v0.0.40/shares`
+      `GET /slurm/v0.0.41/shares`
+      `GET /slurmdb/v0.0.40/instance`
+      `GET /slurmdb/v0.0.41/instance`
+      `GET /slurmdb/v0.0.40/instances`
+      `GET /slurmdb/v0.0.41/instances`
+      `POST /slurm/v0.0.40/job/{job_id}`
+      `POST /slurm/v0.0.41/job/{job_id}`
+  * Fix IPMI energy gathering when no IPMIPowerSensors are specified
+    in `acct_gather.conf`. This situation resulted in an accounted
+    energy of 0 for job steps.
+  * Fix a minor memory leak in slurmctld when updating a job dependency.
+  * `scontrol`,`squeue` - Fix regression that caused incorrect values
+    for multisocket nodes at `.jobs[].job_resources.nodes.allocation`
+    for `scontrol show jobs --(json|yaml)` and `squeue --(json|yaml)`.
+  * `slurmrestd` - Fix regression that caused incorrect values for
+    multisocket nodes at `.jobs[].job_resources.nodes.allocation` to
+    be dumped with endpoints:
+      `GET /slurm/v0.0.41/job/{job_id}`
+      `GET /slurm/v0.0.41/jobs`
+  * `jobcomp/filetxt` - Fix truncation of job record lines > 1024
+    characters.
+  * `switch/hpe_slingshot` - Drain node on failure to delete CXI
+    services.
+  * Fix a performance regression from 23.11.0 in cpu frequency
+    handling when no `CpuFreqDef` is defined.
+  * Fix one-task-per-sharing not working across multiple nodes.
+  * Fix inconsistent number of cpus when creating a reservation
+    using the TRESPerNode option.
+  * `data_parser/v0.0.40+` - Fix job state parsing which could
+    break filtering.
+  * Prevent `cpus-per-task` from being modified in jobs where a
+    `-c` value has been explicitly specified and the requested
+    memory constraints implicitly increase the number of CPUs to
+    allocate.
+  * `slurmrestd` - Fix regression where args `-s v0.0.39,dbv0.0.39`
+    and `-d v0.0.39` would result in `GET /openapi/v3` not
+    registering as a valid possible query resulting in 404 errors.
+  * `slurmrestd` - Fix memory leak for dbv0.0.39 jobs query which
+    occurred if the query parameters specified account, association,
+    cluster, constraints, format, groups, job_name, partition, qos,
+    reason, reservation, state, users, or wckey. This affects the
+    following endpoints:
+      `GET /slurmdb/v0.0.39/jobs`
+  * `slurmrestd` - In the case the slurmdbd does not respond to a
+    persistent connection init message, prevent the closed fd from
+    being used, and instead emit an error or warning depending on
+    if the connection was required.
+  * Fix 24.05.0 regression that caused the slurmdbd not to send back
+    an error message if there is an error initializing a persistent
+    connection.
+  * Reduce latency of forwarded x11 packets.
+  * Add `curr_dependency` (representing the current dependency of
+    the job) and `orig_dependency` (representing the original
+    requested dependency of the job) fields to the job record in
+    `job_submit.lua` (for job update) and `jobcomp.lua`.
+  * Fix potential segfault of slurmctld configured with
+    `SlurmctldParameters=enable_rpc_queue` from happening on
+    reconfigure.
+  * Fix potential segfault of slurmctld on its shutdown when rate
+    limiting is enabled.
+  * `slurmrestd` - Fix missing job environment for `SLURM_JOB_NAME`,
+    `SLURM_OPEN_MODE`, `SLURM_JOB_DEPENDENCY`, `SLURM_PROFILE`,
+    `SLURM_ACCTG_FREQ`, `SLURM_NETWORK` and `SLURM_CPU_FREQ_REQ` to
+    match sbatch.
+  * Fix GRES environment variable indices being incorrect when only
+    using a subset of all GPUs on a node and the
+    `--gres-flags=allow-task-sharing` option.
+  * Prevent `scontrol` from segfaulting when requesting
+    `scontrol show reservation --json` or `--yaml` if there is an
+    error retrieving reservations from the `slurmctld`.
+  * `switch/hpe_slingshot` - Fix security issue around managing VNI
+    access. CVE-2024-42511.
+  * `switch/nvidia_imex` - Fix security issue managing IMEX channel
+    access. CVE-2024-42511.
+  * `switch/nvidia_imex` - Allow for compatibility with
+    `job_container/tmpfs`.
+- Changes in Slurm 24.05.1
+  * Fix `slurmctld` and `slurmdbd` potentially stopping instead of
+    performing a logrotate when receiving `SIGUSR2` when using
+    `auth/slurm`.
+  * `switch/hpe_slingshot` - Fix slurmctld crash when upgrading
+    from 23.02.
+  * Fix "Could not find group" errors from `validate_group()` when
+    using `AllowGroups` with large `/etc/group` files.
+  * Add `AccountingStoreFlags=no_stdio`, which, when set, avoids
+    recording the stdio paths of the job.
+  * `slurmrestd` - Prevent a slurmrestd segfault when parsing the
+    `crontab` field, which was never usable. Now it explicitly
+    ignores the value and emits a warning if it is used for the
+    following endpoints:
+      `POST /slurm/v0.0.39/job/{job_id}`
+      `POST /slurm/v0.0.39/job/submit`
+      `POST /slurm/v0.0.40/job/{job_id}`
+      `POST /slurm/v0.0.40/job/submit`
+      `POST /slurm/v0.0.41/job/{job_id}`
+      `POST /slurm/v0.0.41/job/submit`
+      `POST /slurm/v0.0.41/job/allocate`
+  * `mpi/pmi2` - Fix communication issue leading to task launch
+    failure with "`invalid kvs seq from node`".
+  * Fix getting user environment when using sbatch with
+    `--get-user-env` or `--export=` when there is a user profile
+    script that reads `/proc`.
+  * Prevent slurmd from crashing if `acct_gather_energy/gpu` is
+    configured but `GresTypes` is not configured.
+  * Do not log the following errors when `AcctGatherEnergyType`
+    plugins are used but a node does not have or cannot find sensors:
+    "`error: _get_joules_task: can't get info from slurmd`"
+    "`error: slurm_get_node_energy: Zero Bytes were transmitted or
+     received`"
+    However, the following error will continue to be logged:
+    "`error: Can't get energy data. No power sensors are available.
+     Try later`"
+  * `sbatch`, `srun` - Set `SLURM_NETWORK` environment variable if
+    `--network` is set.
+  * Fix cloud nodes not being able to forward to nodes that restarted
+    with new IP addresses.
+  * Fix cwd not being set correctly when running a SPANK plugin with a
+    `spank_user_init()` hook and the new "`contain_spank`" option set.
+  * `slurmctld` - Avoid deadlock during shutdown when `auth/slurm`
+    is active.
+  * Fix segfault in `slurmctld` with `topology/block`.
+  * `sacct` - Fix printing of job group for job steps.
+  * `scrun` - Log when an invalid environment variable causes the
+    job submission to be rejected.
+  * `accounting_storage/mysql` - Fix problem where listing or
+    modifying an association when specifying a qos list could hang
+    or take a very long time.
+  * `gpu/nvml` - Fix `gpuutil/gpumem` only tracking last GPU in step.
+    Now, `gpuutil/gpumem` will record sums of all GPUs in the step.
+  * Fix error in `scrontab` jobs when using
+    `slurm.conf:PropagatePrioProcess=1`.
+  * Fix `slurmctld` crash on a batch job submission with
+    `--nodes 0,...`.
+  * Fix dynamic IP address fanout forwarding when using `auth/slurm`.
+  * Restrict listening sockets in the `mpi/pmix` plugin and `sattach`
+    to the `SrunPortRange`.
+  * `slurmrestd` - Limit mime types returned from query to
+    `GET /openapi/v3` to only return one mime type per serializer
+    plugin to fix issues with OpenAPI client generators that are
+    unable to handle multiple mime type aliases.
+  * Fix many commands possibly reporting an "`Unexpected Message
+    Received`" when in reality the connection timed out.
+  * Prevent slurmctld from starting if there is not a json
+    serializer present and the `extra_constraints` feature is enabled.
+  * Fix heterogeneous job components not being signaled with
+    `scancel --ctld` and `DELETE slurm/v0.0.40/jobs` if the job ids
+    are not explicitly given, the heterogeneous job components match
+    the given filters, and the heterogeneous job leader does not
+    match the given filters.
+  * Fix regression from 23.02 impeding job licenses from being cleared.
+  * Move to a `log_flag` the `_get_joules_task` error that was
+    logged to the user when too many RPCs were queued in slurmd
+    for gathering energy.
+  * For `scancel --ctld` and the associated rest api endpoints:
+      `DELETE /slurm/v0.0.40/jobs`
+      `DELETE /slurm/v0.0.41/jobs`
+    Fix canceling the final array task in a job array when the task
+    is pending and all array tasks have been split into separate job
+    records. Previously this task was not canceled.
+  * Fix `power_save` operation after recovering from a failed
+    reconfigure.
+  * `slurmctld` - Skip removing the pidfile when running under
+    systemd. In that situation it is never created in the first place.
+  * Fix issue where, after altering the flags on a Slurm account
+    (`UsersAreCoords`), several limits on the account's association
+    would be set to 0 in Slurm's internal cache.
+  * Fix memory leak in the controller when relaying `stepmgr` step
+    accounting to the dbd.
+  * Fix segfault when submitting stepmgr jobs within an existing
+    allocation.
+  * Added `disable_slurm_hydra_bootstrap` as a possible `MpiParams`
+    parameter in `slurm.conf`. Using this will disable env variable
+    injection to allocations for the following variables:
+    `I_MPI_HYDRA_BOOTSTRAP`, `I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS`,
+    `HYDRA_BOOTSTRAP`, `HYDRA_LAUNCHER_EXTRA_ARGS`.
+  * `scrun` - Delay shutdown until after start is requested.
+    Previously `scrun` would never start or shut down and hung
+    forever when using `--tty`.
+  * Fix backup `slurmctld` potentially not running the agent when
+    taking over as the primary controller.
+  * Fix primary controller not running the agent when a reconfigure
+    of the `slurmctld` fails.
+  * `slurmd` - Fix premature timeout waiting for
+    `REQUEST_LAUNCH_PROLOG` with large array jobs causing node to
+    drain.
+  * `jobcomp/{elasticsearch,kafka}` - Avoid sending fields with
+    invalid date/time.
+  * `jobcomp/elasticsearch` - Fix `slurmctld` memory leak from
+    curl usage.
+  * `acct_gather_profile/influxdb` - Fix slurmstepd memory leak
+    from curl usage.
+  * Fix 24.05.0 regression not deleting job hash dirs after
+    `MinJobAge`.
+  * Fix filtering arguments being ignored when using squeue `--json`.
+  * `switch/nvidia_imex` - Move setup call after `spank_init()` to
+    allow namespace manipulation within the SPANK plugin.
+  * `switch/nvidia_imex` - Skip plugin operation if
+    `nvidia-caps-imex-channels` device is not present rather than
+    preventing slurmd from starting.
+  * `switch/nvidia_imex` - Skip plugin operation if
+    `job_container/tmpfs` is configured due to incompatibility.
+  * `switch/nvidia_imex` - Remove any pre-existing channels when
+    `slurmd` starts.
+  * `rpc_queue` - Add support for an optional `rpc_queue.yaml`
+    configuration file.
+  * `slurmrestd` - Add new +prefer_refs flag to `data_parser/v0.0.41`
+    plugin. This flag will avoid inlining single referenced schemas
+    in the OpenAPI schema.
+
+-------------------------------------------------------------------
+Tue Jun  4 09:36:54 UTC 2024 - Christian Goll <[email protected]>
+
+- Updated to new release 24.05.0 with the following major changes:
+  * Important Notes:
+    If using the slurmdbd (Slurm DataBase Daemon) you must update
+    this first.  NOTE: If using a backup DBD, you must start the
+    primary first to do any database conversion; the backup will not
+    start until this has happened.  The 24.05 slurmdbd will work
+    with Slurm daemons of version 23.02 and above.  You will not
+    need to update all clusters at the same time, but it is very
+    important to update slurmdbd first and have it running before
+    updating any other clusters making use of it.
+  * Highlights
+    + Federation - allow client command operation when slurmdbd is
+      unavailable.
+    + `burst_buffer/lua` - Added two new hooks: `slurm_bb_test_data_in`
+      and `slurm_bb_test_data_out`. The syntax and use of the new hooks
+      are documented in `etc/burst_buffer.lua.example`. These are
+      required to exist. slurmctld now checks on startup if the
+      `burst_buffer.lua` script loads and contains all required hooks;
+      `slurmctld` will exit with a fatal error if this is not
+      successful. Added `PollInterval` to `burst_buffer.conf`. Removed
+      the arbitrary limit of 512 copies of the script running
+      simultaneously.
+    + Add QOS limit `MaxTRESRunMinsPerAccount`.
+    + Add QOS limit `MaxTRESRunMinsPerUser`.
+    + Add `ELIGIBLE` environment variable to `jobcomp/script` plugin.
+    + Always use the QOS name for `SLURM_JOB_QOS` environment variables.
+      Previously the batch environment would use the description field,
+      which was usually equivalent to the name.
++++ 499 more lines (skipped)
++++ between /work/SRC/openSUSE:Factory/slurm/slurm.changes
++++ and /work/SRC/openSUSE:Factory/.slurm.new.19354/slurm.changes

Old:
----
  Fix-test-21.41.patch
  slurm-23.11.5.tar.bz2

New:
----
  slurm-24.05.3.tar.bz2

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ slurm.spec ++++++
--- /var/tmp/diff_new_pack.5NiKYG/_old  2024-10-15 15:02:14.946280378 +0200
+++ /var/tmp/diff_new_pack.5NiKYG/_new  2024-10-15 15:02:14.950280544 +0200
@@ -1,5 +1,5 @@
 #
-# spec file
+# spec file for package slurm
 #
 # Copyright (c) 2024 SUSE LLC
 #
@@ -17,10 +17,10 @@
 
 
 # Check file META in sources: update so_version to (API_CURRENT - API_AGE)
-%define so_version 40
+%define so_version 41
 # Make sure to update `upgrades` as well!
-%define ver 23.11.5
-%define _ver _23_11
+%define ver 24.05.3
+%define _ver _24_05
 %define dl_ver %{ver}
 # so-version is 0 and seems to be stable
 %define pmi_so 0
@@ -59,6 +59,9 @@
 %if 0%{?sle_version} == 150500 || 0%{?sle_version} == 150600
 %define base_ver 2302
 %endif
+%if 0%{?sle_version} == 150500 || 0%{?sle_version} == 150600
+%define base_ver 2302
+%endif
 
 %define ver_m %{lua:x=string.gsub(rpm.expand("%ver"),"%.[^%.]*$","");print(x)}
 # Keep format_spec_file from botching the define below:
@@ -170,8 +173,6 @@
 Source21:       README_Testsuite.md
 Patch0:         Remove-rpath-from-build.patch
 Patch2:         pam_slurm-Initialize-arrays-and-pass-sizes.patch
-Patch10:        Fix-test-21.41.patch
 #Patch14:       Keep-logs-of-skipped-test-when-running-test-cases-sequentially.patch
 Patch15:        Fix-test7.2-to-find-libpmix-under-lib64-as-well.patch
 
 %{upgrade_dep %pname}
@@ -406,19 +407,6 @@
 %description plugins
 This package contains the SLURM plugins (loadable shared objects)
 
-%package plugin-ext-sensors-rrd
-Summary:        SLURM ext_sensors/rrd Plugin (loadable shared objects)
-Group:          Productivity/Clustering/Computing
-Requires:       %{name}-plugins = %{version}
-%{upgrade_dep %{pname}-plugin-ext-sensors-rrd}
-# file was moved from slurm-plugins to here
-Conflicts:      %{pname}-plugins < %{version}
-
-%description plugin-ext-sensors-rrd
-This package contains the ext_sensors/rrd plugin used to read data
-using RRD, a tool that creates and manages a linear database for
-sampling and logging data.
-
 %package torque
 Summary:        Wrappers for transition from Torque/PBS to SLURM
 Group:          Productivity/Clustering/Computing
@@ -762,9 +750,15 @@
        %{buildroot}/%{_libdir}/*.la \
        %{buildroot}/%_lib/security/*.la
 
-rm %{buildroot}/%{perl_archlib}/perllocal.pod \
-   %{buildroot}/%{perl_vendorarch}/auto/Slurm/.packlist \
-   %{buildroot}/%{perl_vendorarch}/auto/Slurmdb/.packlist
+# Fix perl
+rm %{buildroot}%{perl_archlib}/perllocal.pod \
+   %{buildroot}%{perl_sitearch}/auto/Slurm/.packlist \
+   %{buildroot}%{perl_sitearch}/auto/Slurmdb/.packlist
+
+mkdir -p %{buildroot}%{perl_vendorarch}
+
+mv %{buildroot}%{perl_sitearch}/* \
+   %{buildroot}%{perl_vendorarch}
 
 # Remove Cray specific binaries
 rm -f %{buildroot}/%{_sbindir}/capmc_suspend \
@@ -1086,7 +1080,6 @@
 %{?have_netloc:%{_bindir}/netloc_to_topology}
 %{_sbindir}/sackd
 %{_sbindir}/slurmctld
-%{_sbindir}/slurmsmwd
 %dir %{_libdir}/slurm/src
 %{_unitdir}/slurmctld.service
 %{_sbindir}/rcslurmctld
@@ -1164,9 +1157,10 @@
 %files -n perl-%{name}
 %{perl_vendorarch}/Slurm.pm
 %{perl_vendorarch}/Slurm
-%{perl_vendorarch}/auto/Slurm
 %{perl_vendorarch}/Slurmdb.pm
+%{perl_vendorarch}/auto/Slurm
 %{perl_vendorarch}/auto/Slurmdb
+%dir %{perl_vendorarch}/auto
 %{_mandir}/man3/Slurm*.3pm.*
 
 %files slurmdbd
@@ -1189,6 +1183,7 @@
 %dir %{_libdir}/slurm
 %{_libdir}/slurm/libslurmfull.so
 %{_libdir}/slurm/accounting_storage_slurmdbd.so
+%{_libdir}/slurm/accounting_storage_ctld_relay.so
 %{_libdir}/slurm/acct_gather_energy_pm_counters.so
 %{_libdir}/slurm/acct_gather_energy_gpu.so
 %{_libdir}/slurm/acct_gather_energy_ibmaem.so
@@ -1197,6 +1192,7 @@
 %{_libdir}/slurm/acct_gather_filesystem_lustre.so
 %{_libdir}/slurm/burst_buffer_lua.so
 %{_libdir}/slurm/burst_buffer_datawarp.so
+%{_libdir}/slurm/data_parser_v0_0_41.so
 %{_libdir}/slurm/data_parser_v0_0_40.so
 %{_libdir}/slurm/data_parser_v0_0_39.so
 %{_libdir}/slurm/cgroup_v1.so
@@ -1214,12 +1210,13 @@
 %{_libdir}/slurm/gres_nic.so
 %{_libdir}/slurm/gres_shard.so
 %{_libdir}/slurm/hash_k12.so
+%{_libdir}/slurm/hash_sha3.so
+%{_libdir}/slurm/tls_none.so
 %{_libdir}/slurm/jobacct_gather_cgroup.so
 %{_libdir}/slurm/jobacct_gather_linux.so
 %{_libdir}/slurm/jobcomp_filetxt.so
 %{_libdir}/slurm/jobcomp_lua.so
 %{_libdir}/slurm/jobcomp_script.so
-%{_libdir}/slurm/job_container_cncu.so
 %{_libdir}/slurm/job_container_tmpfs.so
 %{_libdir}/slurm/job_submit_all_partitions.so
 %{_libdir}/slurm/job_submit_defaults.so
@@ -1253,6 +1250,7 @@
 %{_libdir}/slurm/serializer_url_encoded.so
 %{_libdir}/slurm/serializer_yaml.so
 %{_libdir}/slurm/site_factor_example.so
+%{_libdir}/slurm/switch_nvidia_imex.so
 %{_libdir}/slurm/task_affinity.so
 %{_libdir}/slurm/task_cgroup.so
 %{_libdir}/slurm/topology_3d_torus.so
@@ -1272,9 +1270,6 @@
 %{_libdir}/slurm/acct_gather_profile_influxdb.so
 %{_libdir}/slurm/jobcomp_elasticsearch.so
 
-%files plugin-ext-sensors-rrd
-%{_libdir}/slurm/ext_sensors_rrd.so
-
 %files lua
 %{_libdir}/slurm/job_submit_lua.so
 
@@ -1310,8 +1305,6 @@
 %{_libdir}/slurm/openapi_slurmdbd.so
 %{_libdir}/slurm/openapi_dbv0_0_39.so
 %{_libdir}/slurm/openapi_v0_0_39.so
-%{_libdir}/slurm/openapi_dbv0_0_38.so
-%{_libdir}/slurm/openapi_v0_0_38.so
 %{_libdir}/slurm/rest_auth_local.so
 %endif
 
@@ -1348,12 +1341,10 @@
 %files config-man
 %{_mandir}/man5/acct_gather.conf.*
 %{_mandir}/man5/burst_buffer.conf.*
-%{_mandir}/man5/ext_sensors.conf.*
 %{_mandir}/man5/slurm.*
 %{_mandir}/man5/cgroup.*
 %{_mandir}/man5/gres.*
 %{_mandir}/man5/helpers.*
-#%%{_mandir}/man5/nonstop.conf.5.*
 %{_mandir}/man5/oci.conf.5.gz
 %{_mandir}/man5/topology.*
 %{_mandir}/man5/knl.conf.5.*
@@ -1368,17 +1359,7 @@
 %endif
 
 %files cray
-# do not remove cray specific packages from SLES update
-# Only for Cray
-%{_libdir}/slurm/core_spec_cray_aries.so
-%{_libdir}/slurm/job_submit_cray_aries.so
-%{_libdir}/slurm/select_cray_aries.so
-%{_libdir}/slurm/switch_cray_aries.so
-%{_libdir}/slurm/task_cray_aries.so
-%{_libdir}/slurm/proctrack_cray_aries.so
 %{_libdir}/slurm/mpi_cray_shasta.so
-%{_libdir}/slurm/node_features_knl_cray.so
-%{_libdir}/slurm/power_cray_aries.so
 
 %if 0%{?slurm_testsuite}
 %files testsuite
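
The spec above derives the major.minor macro `%ver_m` from `%ver` by stripping the last dot-separated component (via the embedded Lua `string.gsub`). An equivalent sketch in plain POSIX shell, for orientation only (the variable names here are illustrative, not spec macros):

```shell
# Mirror of the spec's %ver_m derivation: drop the trailing
# ".<patch>" component from the full version string.
ver=24.05.3
ver_m=${ver%.*}   # POSIX suffix-strip, like Lua's gsub("%.[^%.]*$","")
echo "$ver_m"     # prints 24.05
```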

++++++ slurm-23.11.5.tar.bz2 -> slurm-24.05.3.tar.bz2 ++++++
/work/SRC/openSUSE:Factory/slurm/slurm-23.11.5.tar.bz2 
/work/SRC/openSUSE:Factory/.slurm.new.19354/slurm-24.05.3.tar.bz2 differ: char 
11, line 1
