Bug#1014506: slurm-wlm: flaky autopkgtest: sbatch fails without
Control: tag -1 patch pending

Hi,

On Thu, 7 Jul 2022 11:22:44 +0200 Paul Gevers wrote:
> I looked at the results of the autopkgtest of your package on armhf
> because it was showing up as a regression for the upload of perl. I
> noticed that the test regularly fails and I saw failures on other
> architectures too.
>
> Because the unstable-to-testing migration software now blocks on
> regressions in testing, flaky tests, i.e. tests that flip between
> passing and failing without changes to the list of installed packages,
> are causing people unrelated to your package to spend time on these
> tests.

I took a look at how long the test needs if I let it run a bit longer,
and on armhf it took 17 seconds in several attempts. Apparently 5
seconds is just a bit tight.

I am going to upload an NMU to DELAYED/10 with the changes in MR5 [1].
Please let me know if I should delay further or should cancel the
upload.

Paul

[1] https://salsa.debian.org/hpc-team/slurm-wlm/-/merge_requests/5
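[Editorial sketch] Rather than raising a fixed sleep from 5 seconds to some larger value, the test could poll until the submitted job leaves the queue, bounded by a generous deadline. A minimal POSIX sh sketch of such a helper follows; the `wait_for` name and the `squeue` check in the usage comment are illustrative assumptions, not the actual contents of MR5:

```shell
# wait_for CMD TIMEOUT: run CMD once per second until it succeeds or
# TIMEOUT seconds have elapsed; returns 0 on success, 1 on timeout.
wait_for() {
    _cmd=$1
    _deadline=$2
    while [ "$_deadline" -gt 0 ]; do
        if eval "$_cmd"; then
            return 0
        fi
        sleep 1
        _deadline=$((_deadline - 1))
    done
    return 1
}

# Hypothetical use in the sbatch test: wait up to 60 s for the job to
# leave the queue instead of sleeping a fixed 5 s ($jobid assumed to
# hold the id printed by sbatch):
#   wait_for "! squeue -h -j \"$jobid\" | grep -q ." 60
```

A loop like this passes as soon as the job finishes on fast architectures while still tolerating the roughly 17 seconds observed on armhf.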
Processed: Re: Bug#1014506: slurm-wlm: flaky autopkgtest: sbatch fails without
Processing control commands:

> affects 1014506 src:adduser
Bug #1014506 [src:slurm-wlm] slurm-wlm: flaky autopkgtest: sbatch fails without
Added indication that 1014506 affects src:adduser

-- 
1014506: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014506
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
Processed: Re: Bug#1014506: slurm-wlm: flaky autopkgtest: sbatch fails without
Processing control commands:

> affects 1014506 src:adduser
Bug #1014506 [src:slurm-wlm] slurm-wlm: flaky autopkgtest: sbatch fails without
Ignoring request to set affects of bug 1014506 to the same value previously set

-- 
1014506: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014506
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
Bug#1014506: slurm-wlm: flaky autopkgtest: sbatch fails without
Control: affects 1014506 src:adduser
thanks

On Sun, Jul 17, 2022 at 11:56:08AM +0200, Florian Ernst wrote:
> On Thu, Jul 07, 2022 at 11:22:44AM +0200, Paul Gevers wrote:
> > [...]
> > I looked at the results of the autopkgtest of your package on armhf
> > because it was showing up as a regression for the upload of perl. I
> > noticed that the test regularly fails and I saw failures on other
> > architectures too.
> >
> > Because the unstable-to-testing migration software now blocks on
> > regressions in testing, flaky tests, i.e. tests that flip between
> > passing and failing without changes to the list of installed packages,
> > are causing people unrelated to your package to spend time on these
> > tests.
>
> JFTR, this now also affects the unstable-to-testing migration of libyaml,
> cf. https://tracker.debian.org/pkg/libyaml. Thus marking this issue here
> as such.

JFTR, adduser is now also affected.

Greetings
Marc
Bug#1014506: slurm-wlm: flaky autopkgtest: sbatch fails without
Control: affects 1014506 src:libyaml

On Thu, Jul 07, 2022 at 11:22:44AM +0200, Paul Gevers wrote:
> [...]
> I looked at the results of the autopkgtest of your package on armhf
> because it was showing up as a regression for the upload of perl. I
> noticed that the test regularly fails and I saw failures on other
> architectures too.
>
> Because the unstable-to-testing migration software now blocks on
> regressions in testing, flaky tests, i.e. tests that flip between
> passing and failing without changes to the list of installed packages,
> are causing people unrelated to your package to spend time on these
> tests.

JFTR, this now also affects the unstable-to-testing migration of libyaml,
cf. https://tracker.debian.org/pkg/libyaml. Thus marking this issue here
as such.

Cheers,
Flo
Processed: Re: Bug#1014506: slurm-wlm: flaky autopkgtest: sbatch fails without
Processing control commands:

> affects 1014506 src:libyaml
Bug #1014506 [src:slurm-wlm] slurm-wlm: flaky autopkgtest: sbatch fails without
Added indication that 1014506 affects src:libyaml

-- 
1014506: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014506
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
Bug#1014506: slurm-wlm: flaky autopkgtest: sbatch fails without
Source: slurm-wlm
Version: 21.08.8.2-1
Severity: serious
X-Debbugs-CC: debian...@lists.debian.org
User: debian...@lists.debian.org
Usertags: flaky

Dear maintainer(s),

I looked at the results of the autopkgtest of your package on armhf
because it was showing up as a regression for the upload of perl. I
noticed that the test regularly fails and I saw failures on other
architectures too.

Because the unstable-to-testing migration software now blocks on
regressions in testing, flaky tests, i.e. tests that flip between
passing and failing without changes to the list of installed packages,
are causing people unrelated to your package to spend time on these
tests.

Don't hesitate to reach out if you need help and some more information
from our infrastructure.

Paul

https://ci.debian.net/packages/s/slurm-wlm/
https://ci.debian.net/data/autopkgtest/testing/armhf/s/slurm-wlm/23115549/log.gz

autopkgtest [01:29:11]: test sbatch: [---
● slurmctld.service - Slurm controller daemon
     Loaded: loaded (/lib/systemd/system/slurmctld.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-06-27 01:29:11 UTC; 10s ago
       Docs: man:slurmctld(8)
   Main PID: 3099 (slurmctld)
      Tasks: 10
     Memory: 1.4M
        CPU: 4.907s
     CGroup: /system.slice/slurmctld.service
             ├─3099 /usr/sbin/slurmctld -D -s
             └─3103 "slurmctld: slurmscriptd"

Jun 27 01:29:11 ci-178-32c04b32 systemd[1]: Started Slurm controller daemon.
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: error: High latency for 1000 calls to gettimeofday(): 288 microseconds
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: Recovered state of 1 nodes
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: Recovered JobId=1 Assoc=0
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: Recovered information about 1 jobs
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: Recovered state of 0 reservations
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: read_slurm_conf: backup_controller not specified
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: Running as primary controller
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: No parameter for mcs plugin, default values set
Jun 27 01:29:11 ci-178-32c04b32 slurmctld[3099]: slurmctld: mcs: MCSParameters = (null). ondemand set.

● slurmd.service - Slurm node daemon
     Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-06-27 01:29:11 UTC; 10s ago
       Docs: man:slurmd(8)
   Main PID: 3106 (slurmd)
      Tasks: 1
     Memory: 1.1M
        CPU: 106ms
     CGroup: /system.slice/slurmd.service
             └─3106 /usr/sbin/slurmd -D -s

Jun 27 01:29:11 ci-178-32c04b32 systemd[1]: Started Slurm node daemon.
Jun 27 01:29:11 ci-178-32c04b32 slurmd[3106]: slurmd: slurmd version 21.08.8-2 started
Jun 27 01:29:11 ci-178-32c04b32 slurmd[3106]: slurmd: slurmd started on Mon, 27 Jun 2022 01:29:11 +
Jun 27 01:29:11 ci-178-32c04b32 slurmd[3106]: slurmd: CPUs=1 Boards=1 Sockets=1 Cores=1 Threads=1 Memory=513603 TmpDisk=2806 Uptime=1228661 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
test*        up   infinite      1   idle localhost

NODELIST   NODES PARTITION STATE
localhost      1     test*  idle

Submitted batch job 2
autopkgtest [01:29:27]: test sbatch: ---]