Bug#1084025: RM: golang-procfs-dev -- ROM; obsolete transitional package

2024-10-04 Thread Daniel Swarbrick

Package: ftp.debian.org
Severity: normal

Please remove the transitional package golang-procfs-dev. This was 
removed from debian/control in golang-github-prometheus-procfs 0.11.0-1, 
but for reasons unclear to me still exists in the archive.


OpenPGP_signature.asc
Description: OpenPGP digital signature


Bug#1083100: please bundle the more modern React UI

2024-10-03 Thread Daniel Swarbrick

On 03.10.24 23:21, Antoine Beaupré wrote:

Yeah, I'm kind of hoping they stop doing that already with the v3. But I
will note that people *are* trying to keep up with a lot of modules like
this for other projects, and I think it's actually possible to give it a
try already.


The v3.0.0 UI uses a totally different (and presumably quite new) framework 
called Mantine (https://mantine.dev/), so it's anybody's guess when that 
will be usable within Debian. I don't see any relevant ITP or RFP bugs 
for it. Given that the prometheus package is struggling to advance 
beyond a "LTS-1" version due to missing Go build-deps, it could be a 
while before we can even contemplate packaging v3.



I would also argue *for* vendoring the stuff that's not practical to
package. We have a few special precedents in Debian for important
packages that have bent the rules a little bit to ship vendored copies
of source code (firefox and kubernetes, for example), and I think we
could get away with shipping a couple javascript libraries like this.


If the Prometheus package gets a pass in that respect, it would surely 
make things a lot simpler, as we could stop de-blobbing the embedded 
assets, and basically not have to give any thought whatsoever to 
building the UI from scratch. I'm not sure how likely this is though.



You may have noticed that I recently added an install-ui.sh helper
script to the prometheus package, to fetch the React web UI tarball from
upstream, similar to the script that is bundled with
prometheus-alertmanager. So you /can/ use the React web UI now, if you want.


Yes, I've seen that! I have even tried it out, but it didn't work out so
well for me, something which could perhaps be filed as a separate bug.


It only works with the sid / trixie package (>= 2.45.6+ds-2), since 
previous versions also had the backend Go code used by the React UI 
stripped out. You also need to explicitly request the "/graph" URL, 
since landing on the root "/" URL will redirect you to "/classic/graph". 
I will reinstate a link to the "new" UI in the classic UI, which 
upstream versions included whilst the two variants were still officially 
supported. Unless you have run the "install-ui.sh" script however, 
landing on the new UI URL will display an index.html explaining the 
situation (à la prometheus-alertmanager).



I will be honest and admit this is a request more than an offer, but
I *might* get time to deal with this in the coming *year* (aka "2025",
definitely not until January), so I wanted to see if there was at least
*some* openness in dealing with it.

Also, we're not alone with this: there's a bunch of people working on
JavaScript stuff in Debian, and we could get some help there.


Yes, and it would be enormously appreciated if they could at least get 
the ball rolling in terms of what is needed to start building the React 
UI with the package. I have never done any JavaScript packaging in 
Debian, but if somebody suitably qualified lays the groundwork, I can 
probably maintain it in the long run.



So the question is: do we, as a golang team (and you, as an uploader for
this package), want to do this? Or do we want to fundamentally object to
even trying to fix this issue?

Because this is essentially why I filed this issue: I don't expect
anyone to just drop everything and do this. I want to see if people are
okay with us trying to do this.


I have no objection to this, and it would be awesome if people can come 
together to make it happen. I don't have the time (or expertise) to do 
this single-handedly. It's been /years/ since I did any frontend 
development, back in the heyday of jQuery. I never got into npm / 
nodejs, and have not worked with React (or TypeScript in general). So I 
would be totally outside of my comfort zone if I attempted this.


I will at least paste this list of dependencies [1], with what I _think_ 
are the Debian packages alongside in square brackets.


  "dependencies": {
    "@codemirror/autocomplete": "^6.17.0",  [node-codemirror-autocomplete]
    "@codemirror/commands": "^6.6.0",  [node-codemirror-commands]
    "@codemirror/language": "^6.10.2",  [node-codemirror-language]
    "@codemirror/lint": "^6.8.1",  [node-codemirror-lint]
    "@codemirror/search": "^6.5.6",  [node-codemirror-search]
    "@codemirror/state": "^6.3.3",  [node-codemirror-state]
    "@codemirror/view": "^6.29.1",  [node-codemirror-view]
    "@forevolve/bootstrap-dark": "^4.0.2",  [???]
    "@fortawesome/fontawesome-svg-core": "6.5.2",  [node-fortawesome-fontawesome-svg-core]
    "@fortawesome/free-solid-svg-icons": "6.5.2",  [node-fortawesome-free-solid-svg-icons]
    "@fortawesome/react-fontawesome": "0.2.0",  [???]
    "@lezer/common": "^1.2.1",  [node-lezer-common]
    "@lezer/highlight": "^1.2.0",  [???]
    "@lezer/lr": "^1.4.2",  [???]
    "@nexucis/fuzzy": "^0.4.1",  [???]
    "@nexucis/kvsearch": "^0.8.1",  [???]
    "@prometheus-io/codemirror-promql": "0.55.0-rc.0",  [???]
    "bootstrap": "^4.6.2",
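The bracketed guesses above mostly follow the usual Debian naming pattern for npm modules. That mechanical mapping can be sketched as a small helper; note this is only a hypothetical heuristic for generating candidate names, and each result still has to be verified against the actual archive (e.g. with apt-cache):

```shell
# Hypothetical helper: guess the Debian binary package name for an
# npm module, following the "@scope/name" -> node-scope-name pattern
# used by the bracketed annotations above. Results are only candidate
# names and must be checked against the real archive.
npm_to_deb() {
  echo "node-$(printf '%s' "$1" | sed -e 's%^@%%' -e 's%/%-%g')"
}

npm_to_deb "@codemirror/autocomplete"   # node-codemirror-autocomplete
npm_to_deb "@lezer/common"              # node-lezer-common
```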

Bug#1083100: please bundle the more modern React UI

2024-10-03 Thread Daniel Swarbrick

Antoine,

I think you know as well as I do that the likelihood of this being 
feasible is pretty remote. The fact that sufficient versions of npm, 
nodejs and React are already available in Debian does not help much if 
the web app uses a ton of bleeding edge modules which are not even on 
anybody's radar to package for Debian. Combine that with the fact that 
the Prometheus web UI is getting yet another major overhaul for v3.0.0. 
It seems to be a fast-moving target.


The classic web UI was actually removed upstream in v2.34.0, and I have 
forward-ported / reinstated the necessary Go code to keep supporting it. 
This is a dead end however, since Prometheus has newer functionality 
like exemplars and native (sparse) histograms, which are not supported 
by the legacy web UI. I would prefer to see the Debian package finally 
drop the classic UI as well, since it is increasingly starting to 
resemble a /fork/ of Prometheus.


You may have noticed that I recently added an install-ui.sh helper 
script to the prometheus package, to fetch the React web UI tarball from 
upstream, similar to the script that is bundled with 
prometheus-alertmanager. So you /can/ use the React web UI now, if you want.


If you're willing to pitch in and help package the React UI, by all 
means, please do.






Bug#1059083: RFS: updated golang-github-azure-azure-sdk-for-go

2024-09-24 Thread Daniel Swarbrick

On 24.09.24 16:41, Shengjing Zhu wrote:


The reason that I propose to use 0.0~gitYYYYMMDD is to just track the
latest commit in the mono repo. It stops caring how the sub-modules
are tagged and avoids confusion in the single package version.
This is what I have done for the golang-github-moby-sys-dev package,
which has a similar mono repo that contains many sub-modules.



At the end of the day, I don't really care what the Debian package 
version number is. If it's 0.0~gitYYYYMMDD, then it is super easy to 
adopt a different scheme later (i.e., if the upstream developers come to 
their senses), as virtually any other version number style will be 
greater than 0.0~git...
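The claim about sort order is easy to check: a tilde sorts before every other character (and before end of string) in Debian version comparison, so a 0.0~git snapshot compares lower than practically anything. GNU sort -V applies the same tilde rule as dpkg --compare-versions, so the effect can be demonstrated without dpkg (the version strings below are made up for illustration):

```shell
# The tilde in 0.0~git... sorts before everything else, so almost any
# later versioning scheme compares greater. GNU sort -V follows the
# same tilde rule that dpkg --compare-versions uses.
printf '%s\n' \
  '0.0-1' \
  '0.1-1' \
  '0.0~git20240924.abcdef0-1' \
  | sort -V
# The snapshot version is printed first, i.e. it is the lowest.
```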


What's more important is that it contains the specific upstream code 
that other packages are urgently depending on to move forward. I would 
however suggest that we at least record the upstream versions somewhere 
obvious, e.g. as a d/changelog entry, to make our jobs as packagers a 
little easier.


I intensely dislike mono repos.




Bug#1059083: RFS: updated golang-github-azure-azure-sdk-for-go

2024-09-23 Thread Daniel Swarbrick

On 21.06.24 23:15, Shengjing Zhu wrote:

I think people are hesitant to sponsor because of the reconstruction
of the upstream repository.





So I propose to create a new package called
golang-github-azure-azure-sdk-for-go-sdk, which contains all the
modules in the /sdk directory. And the package version uses
0.0~gitYYYYMMDD scheme. This package doesn't conflict with the
previous legacy version, and can be co-installed.


This is starting to get more urgent as we inch closer to the first 
trixie freeze.


Today I tried building the currently packaged 
golang-github-azure-azure-sdk-for-go 68.0.0-2, with the intention of 
doing some ratt tests. I was shocked when it oom-killed my terminal (and 
thus sbuild) after eating 32 GB RAM plus 10 GB swap. Why does this 
package require such an absurd amount of memory to build? I can add some 
more swap, but I'm not sure how much more this thing is going to gobble up.


As already mentioned by Maytham, Prometheus is just one package that 
urgently needs a newer version of azure-sdk-for-go. Debian currently has 
Prometheus v2.45.6, which is (was?) an LTS release, but since it has 
been superseded by a newer LTS v2.53.x, it's unlikely to get further 
updates. Without wanting to hijack the topic too much, Prometheus 2.53 
would require these Azure packages:

  github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1
  github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.2
  github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.7.0
  github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/network/armnetwork/v4 v4.3.0
  github.com/Azure/azure-sdk-for-go/sdk/internal v1.6.0 // indirect

I expect that there are other applications which require other modules 
from the github.com/Azure/azure-sdk-for-go repo too.


How do we go about packaging these? If we package each module as a 
separate Debian source package, that means we need to essentially clone 
the upstream repo n times in Salsa, each containing an 
almost-identical debian/control, just to produce a plethora of split 
packages - and believe me, there are a LOT of modules in the upstream repo.


Or do we produce a single package that tracks the upstream "sdk/azcore" 
tags (and its associated version number scheme), but also include all 
the other sub-modules in that package? Maytham pointed out in another 
email thread that these other modules get interim updates independently 
of the "sdk/azcore" releases, and this might mean that packages which 
depend on such updates are stuck. However, given how rarely the existing 
golang-github-azure-azure-sdk-for-go package was updated, is the 
situation really going to be that different? We /could/ always release a 
"1.23.4-1+git20240923.coffeee" package to cater for these interim updates.


I really hope we can make a decision and start to move forward on this.




Bug#855145: prometheus: Console templates don't work out of the box

2024-09-22 Thread Daniel Swarbrick
The sample console templates have been removed from the upstream 
release-3.0 branch:


https://github.com/prometheus/prometheus/pull/14807

It might be a while before we can package Prometheus 3.0 in Debian, 
because even the latest Prometheus 2.x version has a sprawling 
dependency tree. But I have no interest in maintaining an ever-growing 
patchset delta in the Debian package. So if upstream removes it, so will 
the Debian package.


I honestly think that for a lot of custom console use cases, Grafana 
could probably fulfill the requirements.






Bug#1082244: prometheus-node-exporter-collectors: upgradable packages (apt_upgrades_pending) are not printed but appears as held packages (apt_upgrades_held)

2024-09-19 Thread Daniel Swarbrick

I'm pretty sure that this is a duplicate of #1077694.





Bug#1055115: bookworm-pu: package prometheus-node-exporter-collectors/0.0~git20230203.6f710f8-1

2024-08-21 Thread Daniel Swarbrick

On 21.08.24 21:36, Georg Faerber wrote:

Thanks -- there is also #1077694 which asks for more fixes in regards to
apt_info.py to land in bookworm.

Any objections, Antoine, Daniel?


No objections from me. I usually don't get involved in backports (and 
only recently got upload access to it), since I spend what little time I 
have available keeping unstable abreast of upstream changes / breakage.




Bug#1071260: prometheus-postgres-exporter: pg_replication_slots metrics query fails on standbys with replication slots

2024-06-08 Thread Daniel Swarbrick
No sooner had I posted my previous reply than I discovered that there 
is in fact already an upstream PR for the aforementioned issue #547 [1]; 
unsurprisingly, PR #548 [2] claims to fix it, but has been awaiting 
approval since 2021 :(


Perhaps you could give that PR a nudge; it might also make sense to 
simply cherry-pick it as the Debian patch.


[1]: https://github.com/prometheus-community/postgres_exporter/issues/547
[2]: https://github.com/prometheus-community/postgres_exporter/pull/548




Bug#1071260: prometheus-postgres-exporter: pg_replication_slots metrics query fails on standbys with replication slots

2024-06-08 Thread Daniel Swarbrick

Hi Michael,

As this would seem to also affect the latest upstream release (v0.15.0), 
can you please forward this patch upstream and make a DEP-3 reference to 
it in your patch? There is little point in only patching this in Debian 
when it in fact affects the wider community.


Thanks





Bug#1069346: please don't auto-enable the shipped timers

2024-05-07 Thread Daniel Swarbrick

Hi Evgeni,

This is possibly behaviour that has been carried over from when the 
textfile collectors were part of the prometheus-node-exporter package, 
prior to upstream splitting them out into their own git repo.


If you look closely, you will see that the systemd timers (with the 
exception of the apt collector) contain Condition... clauses, which will 
prevent them from starting if the relevant hardware is not found on the 
host. So yes, they are /enabled/ in the sense that systemd will process 
them at boot, but they won't /start/ if not applicable (even if you try 
to start them manually) - and obviously if the timers do not start, the 
service units won't be automatically triggered either. 
At most, you should get one log entry per timer, stating that it was not 
started, e.g.:


May 08 07:50:53 vega systemd[1]: prometheus-node-exporter-ipmitool-sensor.timer - Run ipmitool sensor metrics collection every minute was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/ipmi).
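For illustration, the gating mechanism described above looks roughly like this in a timer unit. This is a hypothetical sketch, not the actual unit file shipped by the package: the Description and ConditionDirectoryNotEmpty values are taken from the log line above, while the OnCalendar value and the rest of the layout are assumptions.

```
# Hypothetical sketch of a gated collector timer; the real unit shipped
# by prometheus-node-exporter-collectors may differ.
[Unit]
Description=Run ipmitool sensor metrics collection every minute
# The timer never starts unless an IPMI device is actually present:
ConditionDirectoryNotEmpty=/sys/class/ipmi

[Timer]
OnCalendar=minutely

[Install]
WantedBy=timers.target
```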


Regards,
Daniel





Bug#1068732: prometheus-ipmi-exporter: debian path patch breaks local collection with sudo

2024-04-11 Thread Daniel Swarbrick
In the upstream bug report, it is suggested that one should "complain to 
[Debian] to get this fixed".


I don't see this as a Debian-specific bug however. It would affect any 
distro with freeipmi-utils installed in /usr/sbin and sudo installed in 
/usr/bin, on which the user sets a non-empty --freeipmi.path flag - 
because that is merely what the Debian patch does - pre-populate that 
flag value.


As such, I don't see any way for Debian to "fix" this without a much 
more invasive patch, which would effectively require "fixing" the issue 
upstream.


Have you tried overriding the --freeipmi.path flag back to an empty 
string (e.g. --freeipmi.path="") so that ipmi_exporter falls back to 
searching on the PATH?






Bug#1067405: ERROR: permission denied for function pg_ls_waldir

2024-03-31 Thread Daniel Swarbrick
The instructions in README.Debian are outdated and need to be refreshed 
for currently supported Postgres versions (>= 10).


Please consult the /upstream/ documentation, specifically 
https://github.com/prometheus-community/postgres_exporter?tab=readme-ov-file#running-as-non-superuser, 
i.e. grant the "pg_monitor" role to your user.







Bug#1064765: prometheus: FTBFS: dh_auto_test error

2024-03-28 Thread Daniel Swarbrick

On 28.03.24 23:33, Santiago Vila wrote:

If you prefer I could report this build failure in a new report
(or you can also use the clone command so that the bug has a new number,
then close the old bug).


Please report a new bug, with just the relevant info regarding the new 
build failure.


We already override the default test timeout for arm, mips64el and 
riscv64 to 60 minutes, as well as set "-short", because otherwise those 
archs simply take too long to grind through all the tests.
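The per-architecture override described above can be sketched roughly like this. This is an illustrative shell sketch, not the actual debian/rules contents; the variable names and the exact architecture patterns are assumptions:

```shell
# Illustrative sketch (not the real debian/rules): choose a longer Go
# test timeout and -short mode on architectures known to be slow.
arch="${DEB_HOST_ARCH:-amd64}"
case "$arch" in
  arm*|mips64el|riscv64) TEST_FLAGS="-timeout 60m -short" ;;
  *)                     TEST_FLAGS="-timeout 20m" ;;
esac
echo "go test $TEST_FLAGS ./..."
```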


If you expect these tests to pass on a host with only one or two cores, 
we will certainly need to raise the test timeout, even for fast amd64 hosts.




Bug#1064765: prometheus: FTBFS: dh_auto_test error

2024-03-28 Thread Daniel Swarbrick

As expected:

=== RUN   TestQuerierIndexQueriesRace/[m!="0"___name__="metric"]
panic: test timed out after 20m0s
...
FAIL    github.com/prometheus/prometheus/tsdb   1200.016s

On 28.03.24 23:13, Santiago Vila wrote:

Ok, I'm attaching one of my build logs for version 2.45.3+ds-3.
This one was tried on a m6a.large instance from AWS, which has 2 CPUs.

Thanks.




Bug#1064765: prometheus: FTBFS: dh_auto_test error

2024-03-28 Thread Daniel Swarbrick

On 28.03.24 15:00, Santiago Vila wrote:

In either case, this is still happening for me in the current version:

Lucas Nussbaum  wrote:

   FAILED:
1:48: parse error: unexpected character inside braces: '0'


I think you are taking the "FAILED" out of context and misinterpreting 
the test output. Those are TestRulesUnitTest/* subtests, which are 
expected to fail. The summary at the end shows the expected results:


=== RUN   TestRulesUnitTest
...
=== RUN   TestRulesUnitTest/Bad_input_series
Unit Testing:  ./testdata/bad-input-series.yml
  FAILED:
1:48: parse error: unexpected character inside braces: '0'
...
--- PASS: TestRulesUnitTest (0.38s)
--- PASS: TestRulesUnitTest/Passing_Unit_Tests (0.22s)
--- PASS: TestRulesUnitTest/Long_evaluation_interval (0.13s)
--- PASS: TestRulesUnitTest/Bad_input_series (0.00s)
--- PASS: TestRulesUnitTest/Bad_PromQL (0.00s)
--- PASS: TestRulesUnitTest/Bad_rules_(syntax_error) (0.00s)
--- PASS: TestRulesUnitTest/Bad_rules_(error_evaluating) (0.00s)
--- PASS: TestRulesUnitTest/Simple_failing_test (0.01s)
--- PASS: TestRulesUnitTest/Disabled_feature_(@_modifier) (0.00s)
--- PASS: TestRulesUnitTest/Enabled_feature_(@_modifier) (0.00s)
--- PASS: TestRulesUnitTest/Disabled_feature_(negative_offset) (0.00s)
--- PASS: TestRulesUnitTest/Enabled_feature_(negative_offset) (0.00s)

You will see this in the output of _passing_ debci test runs.

Please can you find in your logs the _actual_ failing test or tests, 
because it is not TestRulesUnitTest.


If you are running tests on a VM with a single core, it's quite likely 
that you're hitting the test timeout, which I would consider a more 
reasonable explanation for the dh_auto_test error, since the Prometheus 
tests are quite extensive. They will take more than an hour on an 11th 
gen Intel Core i7 if I set GOMAXPROCS=1. Since debian/rules is setting a 
test timeout of 20m by default, this would fail.




Bug#1064765: prometheus: FTBFS: dh_auto_test error

2024-03-28 Thread Daniel Swarbrick

On 28.03.24 15:00, Santiago Vila wrote:

Daniel Swarbrick  wrote:

* Add new 0022-Support-prometheus-common-0.47.0.patch (Closes: #1064765)


Hello. I don't quite understand how the above fix is related to
the bug itself (but maybe it's because I don't know prometheus internals).


As described in the patch:

This cherry-picks part of a commit relating to negotiation of the
"Accept" header, which became more complex with prometheus/common
0.47.0. See upstream commit a28d786.

This resolved the original FTBFS for which this bug was opened, as far 
as I could see, which was this test failure:


> === RUN   TestFederationWithNativeHistograms
> federate_test.go:417:
> Error Trace: /<>/.build/src/github.com/prometheus/prometheus/web/federate_test.go:417
> Error:  Not equal:
> expected: 4
> actual  : 1
> Test:   TestFederationWithNativeHistograms
> --- FAIL: TestFederationWithNativeHistograms (0.01s)

I was able to reliably reproduce that failure by rolling forward / back 
the prometheus/common dependency in go.mod on a local git clone.



In either case, this is still happening for me in the current version:

Lucas Nussbaum  wrote:

   FAILED:
1:48: parse error: unexpected character inside braces: '0'


This sounds like a _new_ bug.


Note: I'm currently using virtual machines with 1 CPU and with 2 CPUs
for archive rebuilds. On systems with 2 CPUs, the package FTBFS randomly.
On systems with 1 CPU, the package FTBFS always.

Therefore, to reproduce, please try GRUB_CMDLINE_LINUX="nr_cpus=1"
in /etc/default/grub first.


I'm struggling to see how a different number of CPU cores would elicit 
the aforementioned new bug. It doesn't seem to have the typical 
characteristics of a race condition. I'll have to try to find some time 
to set up a VM and try to reproduce it.




Bug#1067469: script for generating the UI does not work

2024-03-22 Thread Daniel Swarbrick
The generate-ui.sh script was substantially refactored in June 2023. The 
patch you have supplied would not apply cleanly, and I suspect that some 
of the issues may have already been resolved anyway [1].


Can you try the script from the latest package from sid?

[1]: 
https://salsa.debian.org/go-team/packages/prometheus-alertmanager/-/commits/debian/sid/debian/generate-ui.sh?ref_type=heads






Bug#1067187: ITP: golang-github-lmittmann-tint -- slog.Handler that writes tinted (colorized) logs

2024-03-20 Thread Daniel Swarbrick

I think you're missing the point of this package.

Firstly, it is a _library_, not a daemon, so it is intended to be 
compiled / linked into other Go applications. It provides an easy 
jumping-off point for developers to customize the output of logs, 
particularly with respect to color and syntax highlighting, which is 
quite helpful during the development and debugging phase of writing 
software.


I doubt that it is of much interest to production deployments, which 
will typically not be logging to a console. However, it is trivial to 
configure tint to detect whether it is logging to a tty or not, and 
disable color accordingly.
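tint itself is a Go library, but the tty check underlying "disable color when not logging to a terminal" is generic. A minimal shell equivalent of the idea (this is not tint's API, just an illustration of the mechanism):

```shell
# Shell equivalent of the "is stdout a terminal?" check used to
# decide whether colorized output is appropriate. When stdout is a
# pipe or file, color is disabled.
if [ -t 1 ]; then
  color=always
else
  color=never
fi
echo "color=$color"
```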


The key-value functionality is not inherent to tint - that is part of 
Go's standard library log/slog package.


For comparison's sake, the older sirupsen/logrus package also featured 
color output by default on ttys. That package is no longer actively 
developed (and predates log/slog by many years), so developers are 
likely to be looking for alternatives.


tint is currently imported by 164 other Go packages. It's only a matter 
of time before one of those Go packages needs to be packaged in Debian.


On 21.03.24 00:31, Salvo Tomaselli wrote:

journalctl does this, assuming that the journald or syslog protocols are used.

If stdout is used to log everything then it won't work.

Personally I prefer when software uses syslog, then I can filter by severity
directly, and the colours work too, of course.

journald protocol lets software define arbitrary keys and values.

Just a suggestion.




Bug#1067187: ITP: golang-github-lmittmann-tint -- slog.Handler that writes tinted (colorized) logs

2024-03-19 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-lmittmann-tint
  Version : 1.0.4
  Upstream Contact: lmittmann 
* URL : https://github.com/lmittmann/tint
* License : Expat
  Programming Lang: Go
  Description : slog.Handler that writes tinted (colorized) logs

tint implements a zero-dependency slog.Handler that writes tinted
(colorized) logs. Its output format is inspired by the
zerolog.ConsoleWriter and slog.TextHandler.

I am packaging this primarily for my own selfish reasons, however I can
see it being useful to other Go packages which may wish to import it in
future. I will maintain it as a member of the Debian Go Packaging team.



Bug#1066815: ITP: golang-github-woblerr-pgbackrest-exporter -- Prometheus exporter for pgBackRest

2024-03-14 Thread Daniel Swarbrick
The package name "golang-github-woblerr-pgbackrest-exporter" gives the 
impression that this is a library package.


Please consider naming it as per the more or less adopted convention 
used by the approximately three dozen other Prometheus exporter 
packages, e.g. "prometheus-pgbackrest-exporter".






Bug#1064925: RFP: fail2ban-prometheus-exporter - collect and export Prometheus metrics on Fail2Ban)

2024-03-14 Thread Daniel Swarbrick
Should the package name perhaps instead be 
"prometheus-fail2ban-exporter", so that it aligns with the approximately 
three dozen other exporters already packaged by Debian?


In case you're wondering, there /are/ other examples where the upstream 
name has been munged to conform with the "prometheus-foo-exporter" 
pattern, e.g. prometheus-mqtt-exporter (upstream name mqtt2prometheus).






Bug#1063489: ITP: golang-github-rluisr-mysqlrouter-exporter -- Prometheus exporter for MySQL router

2024-03-14 Thread Daniel Swarbrick
The package name "golang-github-rluisr-mysqlrouter-exporter" gives the 
impression that this is a library package.


Perhaps consider naming it as per the more or less adopted convention 
used by other Prometheus exporter packages, e.g. 
"prometheus-mysqlrouter-exporter".






Bug#1059083: golang-github-azure-azure-sdk-for-go: package outdated, upstream now versions components independently

2024-02-29 Thread Daniel Swarbrick

Hi Maytham,

On 29.02.24 12:16, Maytham Alsudany wrote:
Could we avoid bumping the epoch by suffixing the version with the git 
commit? e.g. for the case of azure-sdk-for-go, the next version would be 
68.0.0+git20240229.d33ad0-1


I did this kind of thing previously for golang-github-azure-go-autorest 
(14.2.0+git20220726.711dde1-1) since it was in the same situation. 
However, it has a bad smell IMHO, as there is little useful difference 
between 68.0.0+git20240229.d33ad0-1 and 0.0~git20240229.d33ad0-1. It 
completely throws semantic versioning out the window.


Since the azure-go-autorest upstream seems to have more or less resumed 
vX.Y.Z tagging (albeit lower version numbers than before - most recent 
tag is "autorest/v0.11.29"), I would be more inclined to bump the epoch 
and make the Debian version number e.g. 1:0.11.29-1
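The reason an epoch works here is that dpkg compares the numeric epoch before anything else, so 1:0.11.29-1 outranks 14.2.0+git20220726.711dde1-1 despite the smaller upstream part. On a Debian system the authoritative check is `dpkg --compare-versions "1:0.11.29-1" gt "14.2.0+git20220726.711dde1-1"`; the sketch below just shows the epoch-first rule in portable shell:

```shell
# Minimal illustration of why an epoch "resets" the comparison:
# the epoch (default 0 when absent) is compared numerically before
# the rest of the version string is even looked at.
epoch() { case "$1" in *:*) echo "${1%%:*}" ;; *) echo 0 ;; esac; }

a="1:0.11.29-1"
b="14.2.0+git20220726.711dde1-1"
if [ "$(epoch "$a")" -gt "$(epoch "$b")" ]; then
  echo "$a sorts after $b"
fi
```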


All that remains is to determine which series of tags in 
azure-sdk-for-go is the "main" version number - and my gut feeling is 
that it's the "sdk/azcore/vX.Y.Z" series.


Have you checked what other distros are doing, assuming that they 
package that project?




Bug#1059083: golang-github-azure-azure-sdk-for-go: package outdated, upstream now versions components independently

2024-02-28 Thread Daniel Swarbrick
The outdated golang-github-azure-azure-sdk-for-go is also now a blocker 
for updated versions of Prometheus, which requires newer 
azure-sdk-for-go since v2.48.0. We currently package v2.45.3, which is 
an LTS release; current upstream Prometheus version is 2.50.1.


I am also pretty baffled by some projects' newfangled release / tag 
naming conventions, and how to make them fit the Debian packaging process.


In the case of azure-sdk-for-go, I see that upstream has tags in the 
form of "sdk/azcore/vX.Y.Z" (among many others). Can we assume that 
"azcore" is as close an analog as we are going to get to the former 
simple "vX.Y.Z" tags?
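If azcore is indeed the "main" series, the relevant tags can at least be filtered mechanically out of the mono repo's tag list. A sketch of that filtering, run here on a hypothetical three-tag sample; the real input would come from `git ls-remote --tags https://github.com/Azure/azure-sdk-for-go`:

```shell
# Pick the highest sdk/azcore release out of the mono repo's tags.
# The tag list below is a made-up sample for illustration.
tags='refs/tags/sdk/azcore/v1.9.0
refs/tags/sdk/azidentity/v1.5.2
refs/tags/sdk/azcore/v1.11.1'

printf '%s\n' "$tags" \
  | sed -n 's%^refs/tags/sdk/azcore/v%%p' \
  | sort -V | tail -n1
# -> 1.11.1
```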


Several large libraries (e.g. aws-sdk-go-v2) seem to now publish 
releases almost daily, which simply isn't rational or feasible for 
Debian packagers to track. We need to identify which releases are actual 
major milestone releases, and not merely a dependency bump in go.mod due 
to some overly eager dependabot.


Obviously we will need to bump the epoch on such packages where the new 
upstream tagging convention would result in a lower version number than 
what is currently packaged.


On 20.12.23 04:19, Maytham Alsudany wrote:

Source: golang-github-azure-azure-sdk-for-go
Severity: important
X-Debbugs-Cc: debian...@lists.debian.org

Hi Go team,

The golang-github-azure-azure-sdk-for-go package is outdated, as
upstream have stopped versioning the SDK as a whole (last version was
68.0.0, 11 months ago), but are now independently versioning components
of the SDK (subdirectories of sdk/). e.g. sdk/to/v0.1.5 and
sdk/storage/azfile/v1.1.1

There are several questions to answer in order to determine how this
problem is dealt with:

   - Should it all be kept into one source package?

 - Should versioning in d/changelog follow HEAD?
   e.g. 68.0.0+git20231220.f497335-1

 - How should imports in the source code be dealt with?
   Imports references the versions in upstream's tags
   (sdk/storage/azfile/v1.1.1), which means either a find+replace of
   all versioned imports, or a lot of symlinking.

 - One binary package, that contains all the components?

 - Or separate binary packages per component?
   e.g. golang-github-azure-azure-sdk-for-go-sdk-to-dev

 - Should we only package what is currently needed?
   e.g. if sdk/storage/azblob isn't used in any packages, should
   we bother to package it?

   - Or should each component be split into their own source package?
 e.g. golang-github-azure-azure-sdk-for-go-sdk-storage-azfile

 - How will new versions be imported? (What will be put into
   d/watch?)

   - Maybe write our own sh script that does a sparse checkout of the
 subdir needed and generates an orig tarball?
 (New uscan feature perhaps?)

 - Should we only package what is currently needed? i.e. should
   anything without rdeps be packaged at all?


What are your thoughts on this?

Kind regards,
Maytham






Bug#1064765: prometheus: FTBFS: dh_auto_test error

2024-02-26 Thread Daniel Swarbrick
It appears that bumping prometheus/common to 0.47.0 in the prometheus 
go.mod will reproduce the test failure.


prometheus/common 0.46.0 and earlier does not provoke the test failure.





Bug#1055326: prometheus-postfix-exporter: systemd support is missing

2023-11-04 Thread Daniel Swarbrick
systemd support is broken upstream. See 
https://github.com/kumina/postfix_exporter/issues/55


This is mentioned in the Debian changelog:

prometheus-postfix-exporter (0.3.0-2) unstable; urgency=medium

  [ Daniel Swarbrick ]
  ...
  * Disable systemd journal support until fixed upstream





Bug#1053243: prometheus-alertmanager: Please package the gui

2023-10-06 Thread Daniel Swarbrick

Hi Bastien,

Although the Elm compiler is now in Debian, the issue preventing the 
packaging of the Alertmanager web UI is the lack of the Elm dependencies 
used by the UI (see ui/app/elm.json in the Alertmanager source). This 
problem also afflicts the Prometheus package, where we are unable to 
include the modern React UI, for essentially the same reason.


Note that the generate-ui.sh script that is bundled with 
prometheus-alertmanager does at least now use the elm-compiler provided 
by Debian, rather than fetching it from Github (see 
https://salsa.debian.org/go-team/packages/prometheus-alertmanager/-/commit/51802d88957fc08bf13daab426e59718fadcf66e)


Regards,
Daniel Swarbrick





Bug#1042336: moment-timezone.js: FTBFS: cp: cannot stat '/usr/share/zoneinfo/posix/*': No such file or directory

2023-09-09 Thread Daniel Swarbrick

On Wed, 26 Jul 2023 22:08:15 +0200 Lucas Nussbaum  wrote:
> Relevant part (hopefully):
> > mkdir -p temp/zic/2023c temp/zdump/2023c
> > cp -RL /usr/share/zoneinfo/posix/* temp/zic/2023c/
> > cp: cannot stat '/usr/share/zoneinfo/posix/*': No such file or 
directory

> > make[1]: *** [debian/rules:51: data/unpacked/2023c.json] Error 1

This appears to have broken due to a change in the tzdata package which 
landed in both Debian and Ubuntu[1].


The Ubuntu moment-timezone.js package was adapted[2] to accommodate this 
change, but the Debian package was not.


[1]: https://bugs.launchpad.net/ubuntu/+source/tzdata/+bug/2008076
[2]: 
https://git.launchpad.net/ubuntu/+source/moment-timezone.js/commit/debian?h=applied/ubuntu/lunar






Bug#1050558: prometheus-alertmanager: CVE-2023-40577

2023-08-26 Thread Daniel Swarbrick

Disregard my previous comment - I was mistaken.

prometheus-alertmanager ships with a generate-ui.sh script which in the 
past fetched the Elm compiler from upstream (since it was not available 
in Debian), but the script has always used the Alertmanager web UI 
sources as shipped in the package.




Bug#1050558: prometheus-alertmanager: CVE-2023-40577

2023-08-26 Thread Daniel Swarbrick
Note that the Debian prometheus-alertmanager package strips out the web 
UI, so the fix in 0.25.1 would actually result in no changes to this 
package.






Bug#1050523: dh-make-golang: fails to determine dependencies

2023-08-25 Thread Daniel Swarbrick

On 25.08.23 19:40, Mathias Gibbens wrote:

   Something has changed in sid's golang environment since August 4
which is causing dh-make-golang to fail to determine a package's
dependencies and generate a correct d/control. For example, this worked
fine on August 4 but now fails:


It's probably also worth noting that dh-make-golang is now FTBFS 
(#1043070) due to golang.org/x/tools/go/vcs having been deprecated and 
removed from golang-golang-x-tools-dev as of version 0.11.0.




Bug#1049711: prometheus-frr-exporter: Fails to build binary packages again after successful build

2023-08-16 Thread Daniel Swarbrick
I am not able to reproduce this using one of the suggested methods at 
https://wiki.debian.org/qa.debian.org/FTBFS/DoubleBuild:


sbuild -A -d unstable -v --no-run-lintian \
--finished-build-commands="cd %SBUILD_PKGBUILD_DIR && runuser -u $(id 
-un) -- dpkg-buildpackage --sanitize-env -us -uc -rfakeroot -b" \

prometheus-frr-exporter

Both builds and dh_auto_test are successful.





Bug#1041745: smartd[…]: Device: /dev/nvme0, number of Error Log entries increased from … to …

2023-08-02 Thread Daniel Swarbrick
This sounds quite similar to this: 
https://github.com/linux-nvme/libnvme/issues/550


Even prior to that bug, I noticed that the SMART error log counter would 
increment by one with every reboot. This was not too concerning, but 
when nvme-cli 2.x started to result in (albeit innocent) errors being 
logged each time a "nvme list" command was executed, it became an 
annoyance. As I understand it, it was due to the SSD being fairly old, 
and the firmware only supporting a fairly outdated version of the NVMe 
spec (< 1.2).


At least the _kernel_ should have fixed this, with commit 
https://github.com/torvalds/linux/commit/d7ac8dca938cd60cf7bd9a89a229a173c6bcba87


A fix for nvme-cli (via libnvme) is still being worked on, AFAIK.





Bug#1017564: (no subject)

2023-07-20 Thread Daniel Swarbrick
Snipping the test failure output from the build log so it does not get 
archived:


--- FAIL: TestJWT_ClaimsFromJWT_NotBeforeClaims (0.00s)
    --- FAIL: TestJWT_ClaimsFromJWT_NotBeforeClaims/static (0.01s)
    --- PASS: 
TestJWT_ClaimsFromJWT_NotBeforeClaims/static/custom_nbf_leeway_using_exp_with_no_clock_skew_leeway 
(0.02s)
    --- PASS: 
TestJWT_ClaimsFromJWT_NotBeforeClaims/static/not_yet_valid_custom_nbf_leeway_using_exp_with_auto_clock_skew_leeway 
(0.02s)
    --- PASS: 
TestJWT_ClaimsFromJWT_NotBeforeClaims/static/auto_nbf_leeway_using_exp_with_custom_clock_skew_leeway 
(0.02s)
    --- PASS: 
TestJWT_ClaimsFromJWT_NotBeforeClaims/static/no_nbf_leeway_using_iat_with_auto_clock_skew_leeway 
(0.02s)
    --- FAIL: 
TestJWT_ClaimsFromJWT_NotBeforeClaims/static/not_yet_valid_custom_nbf_leeway_using_exp_with_no_clock_skew_leeway_with_default_leeway 
(0.02s)
    --- FAIL: 
TestJWT_ClaimsFromJWT_NotBeforeClaims/static/not_yet_valid_custom_nbf_leeway_using_exp_with_no_clock_skew_leeway 
(0.02s)
    --- FAIL: 
TestJWT_ClaimsFromJWT_NotBeforeClaims/static/not_yet_valid_auto_nbf_leeway_using_exp_with_no_clock_skew_leeway 
(0.02s)


These look quite likely to be timing-sensitive tests, which are often a 
headache on Debian CI.


I have just uploaded consul 1.9.17+dfsg2-1 and as of now, it builds 
successfully on armel. That does not rule out the possibility however 
that these are intermittent test failures.






Bug#1037005: prometheus-smokeping-prober: package is missing an /etc/init.d script

2023-07-15 Thread Daniel Swarbrick

Hello Tim,

Despite SysVinit systems being something of a rarity these days, and 
Debian policy no longer requiring package maintainers to ship init 
scripts, I am willing to entertain your request.


However, may I ask what template you have based your script on? It would 
require adding a runtime dependency on "daemon", which is essentially 
superfluous since start-stop-daemon can achieve the same result, and is 
included in dpkg (i.e., an _essential_ package). I would prefer not to 
bring in any new dependencies, especially considering how few systems 
will actually make use of this script.


Daniel





Bug#1041093: ITP: golang-github-go-zookeeper-zk -- native ZooKeeper client for Go

2023-07-14 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-go-zookeeper-zk
  Version : 1.0.3-1
  Upstream Contact: The Go-ZooKeeper Developers
* URL : https://github.com/go-zookeeper/zk
* License : BSD-3-clause
  Programming Lang: Go
  Description : native ZooKeeper client for Go

Native Go ZooKeeper Client Library.

This is (yet another) fork of the original g/samuel/go-zookeeper
package, and is a build-dep of Prometheus (which currently uses a patch
to build against the original g/samuel/go-zookeeper).

Since the original g/samuel/go-zookeeper is now abandoned / archived,
g-g-samuel-go-zookeeper-dev could eventually become a dummy transitional
package for g-g-go-zookeeper-zk-dev, with symlinks.

I will co-maintain this package as a member of the Debian Go Packaging
Team.



Bug#1040866: ITP: golang-github-linode-linodego -- Go client for Linode REST v4 API

2023-07-11 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org, debian...@lists.debian.org

* Package name: golang-github-linode-linodego
  Version : 1.18.0-1
  Upstream Contact: Linode
* URL : https://github.com/linode/linodego
* License : Expat
  Programming Lang: Go
  Description : Go client for Linode REST v4 API

Go client for Linode REST v4 API (https://developers.linode.com/api/v4).

This is a dependency of Linode service discovery in Prometheus, which is
currently patched out. I will co-maintain this as a member of the Debian
Go Packaging Team.



Bug#1040861: ITP: golang-github-prometheus-community-pro-bing -- library for creating continuous probers

2023-07-11 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org, debian...@lists.debian.org

* Package name: golang-github-prometheus-community-pro-bing
  Version : 0.3.0-1
  Upstream Contact: Prometheus Monitoring Community
* URL : https://github.com/prometheus-community/pro-bing
* License : Expat
  Programming Lang: Go
  Description : library for creating continuous probers

A simple but powerful ICMP echo (ping) library for Go, inspired by
go-ping and go-fastping.

This package is a new dependency of prometheus-smokeping-prober. I will
co-maintain it as a member of the Debian Go Packaging Team.



Bug#1031858: prometheus-node-exporter-collectors: locale issue leads to an unparseable ipmitool_sensor textfile

2023-05-19 Thread Daniel Swarbrick
On Fri, 24 Feb 2023 11:30:10 +0100 Florian Schlichting 
 wrote:


> The hotfix is a
> /etc/systemd/system/prometheus-node-exporter-ipmitool-sensor.service.d/override.conf
> file setting
>
> [Service]
> Environment=LC_NUMERIC=C
>
> but this should preferably be set in
> /lib/systemd/system/prometheus-node-exporter-ipmitool-sensor.service or
> even fixed in the ipmitools awk script upstream..

Setting LC_NUMERIC in the upstream awk script is not viable, since the 
awk script does not actually _execute_ the ipmitool command. It merely 
expects the output of the ipmitool command to be piped to the awk script.


So, setting the environment in the systemd service file is the best that 
we can do.






Bug#1032842: Your mail

2023-04-22 Thread Daniel Swarbrick

On 22.04.23 00:01, Christoph Anton Mitterer wrote:


Are all these strict dependencies, or also optionals?



I haven't checked them individually, but it's pretty rare for a 
dependency to be optional. Maybe some of the tracing stuff might be 
non-essential, but I think the majority will be fundamentally required 
for core functionality.




Bug#1032842: (no subject)

2023-04-02 Thread Daniel Swarbrick
A sample run of dh-make-golang against github.com/thanos-io/thanos 
reveals the following missing build-deps:


 * github.com/chromedp/chromedp
 * github.com/efficientgo/core
 * github.com/efficientgo/e2e
 * github.com/efficientgo/tools
 * github.com/fatih/structtag
 * github.com/GoogleCloudPlatform/opentelemetry-operations-go
 * github.com/lightstep/lightstep-tracer-go
 * github.com/lovoo/gcloud-opentracing
 * github.com/prometheus/prometheus
 * github.com/rueian/rueidis
 * github.com/sony/gobreaker
 * github.com/thanos-community/promql-engine
 * github.com/thanos-io/objstore
 * github.com/uber/jaeger-client-go
 * github.com/vimeo/galaxycache
 * github.com/weaveworks/common
 * go4.org/intern
 * go.elastic.co/apm
 * gopkg.in/fsnotify.v1

A few of them are bogus, e.g. gopkg.in/fsnotify.v1 is packaged as 
golang-github-fsnotify-fsnotify-dev. Prometheus is obviously already 
packaged in Debian, but does not however produce a -dev package (such as 
prometheus-alertmanager-dev does). Certain parts of Prometheus /are/ 
reused by other projects, so having such a -dev package would be desirable.






Bug#1032419: (no subject)

2023-03-08 Thread Daniel Swarbrick

jq is already a Recommends.

Due to the potentially very diverse nature of this package, making 
everything that is referenced by any / all scripts in the package a 
Depends is going to pollute systems. For example, the ipmi textfile 
collector requires ipmitool, but it's also only a Recommends.






Bug#1032329: prometheus-node-exporter: --collector.netdev.device-include cannot be used

2023-03-04 Thread Daniel Swarbrick
You can use --collector.netdev.device-include if you simply ensure that 
--collector.netdev.device-exclude is empty, e.g.:


prometheus-node-exporter --collector.netdev.device-exclude= 
--collector.netdev.device-include=eno1






Bug#1032138: prometheus-snmp-exporter: generator doesn't honour snmp.conf, so it misses site-specific MIBs

2023-03-01 Thread Daniel Swarbrick

On 02.03.23 14:32, наб wrote:

I had gotten p-s-g to work with just "orno" after posting, yes,
but only because I was reading netsnmp_mib_api(3),
and its "ENVIRONMENT VARIABLES" sexion notes MIBDIRS and MIBS,
which appear to funxion à la /e/s/s.c mibdirs and mibs,
so the invocation that I've gotten to work is
   
MIBDIRS=/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/local/share/snmp/mibs
 \
   MIBS=+ORNO-MIB prometheus-snmp-generator generate


That also doesn't align with my experience with p-s-g (up until about 18 
months ago). I had the "standard" MIBs on the system (via 
snmp-mibs-downloader), and some third-party MIBs in ~/.mibs. I commented 
out the "mibs :" line in /etc/snmp/snmp.conf, so that the tools would 
use the MIBs in the default locations.


I was able to run p-s-g without any special environment variables, and 
it just "magically" found all necessary MIBs that I referenced in 
generator.yml.



I didn't really see anything in the changelog that would imply anything
has changed in this regard, and my only sid system is x32, which you
don't build for, apparently.


Unfortunately that's beyond my control. The Build-Deps are not available 
on x32 - 
https://buildd.debian.org/status/package.php?p=prometheus-snmp-exporter



I did manage to generate, p-s-e is not the right tool for my use-case,
so it doesn't really matter either way.


Please close this bug if you don't wish to pursue it further.




Bug#1032138: prometheus-snmp-exporter: generator doesn't honour snmp.conf, so it misses site-specific MIBs

2023-03-01 Thread Daniel Swarbrick
Ah, my mistake, I did not notice that you already included your 
generator.yml:


modules:
  orno_or_we_504_505_507:
    walk:
  - ORNO-MIB::orno

It looks kinda odd to me. I don't recall ever including the MIB name in 
the list of objects to walk. Have you tried simply:


modules:
  orno_or_we_504_505_507:
    walk:
  - orno

(cf: 
https://github.com/prometheus/snmp_exporter/tree/main/generator#file-format)






Bug#1032138: prometheus-snmp-exporter: generator doesn't honour snmp.conf, so it misses site-specific MIBs

2023-03-01 Thread Daniel Swarbrick
I don't currently use prometheus-snmp-exporter, however I have used it 
extensively in the past, and never encountered any problems with the 
generator loading third-party MIBs.


Most maintainers (including myself) are currently focused on fixing bugs 
in the upcoming bookworm release, so unless you can also demonstrate 
that this is an issue with prometheus-snmp-exporter 0.21.0-1, this bug 
is not likely to get a very prompt resolution. If the bug _is_ still 
present in the latest upstream version, you may have more success 
reporting the bug upstream, especially since it is not likely to be 
Debian-specific.


Can you at least include your generator.yml in this bug report?






Bug#1031463: golang-github-prometheus-exporter-toolkit: FTBFS: dh_auto_test: error: cd _build && go test -vet=off -v -p 8 github.com/prometheus/exporter-toolkit/web github.com/prometheus/exporter-tool

2023-02-18 Thread Daniel Swarbrick
Aha, I overlooked the fact that a new test (TestByPassBasicAuthVuln) had 
been added to web/handler_test.go since the 02-Avoid_race_in_test.patch 
was added by Tina.


I patched in the same time.Sleep workaround for the new 
TestByPassBasicAuthVuln, and it seems to fix the failures. Hooray for 
racy tests!






Bug#1031463: golang-github-prometheus-exporter-toolkit: FTBFS: dh_auto_test: error: cd _build && go test -vet=off -v -p 8 github.com/prometheus/exporter-toolkit/web github.com/prometheus/exporter-tool

2023-02-18 Thread Daniel Swarbrick
This seems to be a resurrection of #1013578, i.e. 
https://github.com/prometheus/exporter-toolkit/issues/108.


With unpatched sources, I can get various tests in handler_test.go to 
intermittently fail simply by running tests on an upstream git clone, on 
a 16-core host. Running tests with GOMAXPROCS=1 makes the failures 
consistently reproducible.


Applying 02-Avoid_race_in_test.patch doesn't seem to have that much of 
an impact, regardless of GOMAXPROCS setting, i.e., the tests still 
intermittently fail. Even increasing the time.Sleep to a whole second 
doesn't help, so I wonder if the patch takes the wrong approach.


It's surprising that this hasn't been more of an issue for the upstream 
developers.






Bug#1031463: golang-github-prometheus-exporter-toolkit: FTBFS: dh_auto_test: error: cd _build && go test -vet=off -v -p 8 github.com/prometheus/exporter-toolkit/web github.com/prometheus/exporter-tool

2023-02-18 Thread Daniel Swarbrick

I am unable to reproduce this in a clean sid chroot.




Bug#1025241: prometheus: Please increase timeout of tests for "riscv64" arch

2023-02-05 Thread Daniel Swarbrick
The riscv64 build still fails with a test timeout of 30m. Since there 
appears to be quite a substantial difference in performance between a 
riscv64 porterbox, and the actual riscv64 buildd, I think we have no 
option other than to skip the test completely.


=== RUN   TestTombstoneCleanRetentionLimitsRace/iteration12
panic: test timed out after 30m0s
...
FAIL    github.com/prometheus/prometheus/tsdb    1800.416s

This test is skipped on arm, due to debian/rules:

ifeq ($(DEB_HOST_ARCH_CPU),arm)
# Tests in armel take way too long, and run out of memory in armhf.
TESTFLAGS  := -timeout 60m -short

and:

func TestTombstoneCleanRetentionLimitsRace(t *testing.T) {
    if testing.Short() {
    t.Skip("skipping test in short mode.")
    }

I will implement something similar in d/rules for riscv64.
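
A sketch of that d/rules addition (hypothetical until actually committed; it mirrors the existing arm block, and assumes DEB_HOST_ARCH is already available, e.g. via /usr/share/dpkg/architecture.mk):

```make
# Hypothetical: skip slow tests on riscv64, mirroring the arm special case.
# -short makes TestTombstoneCleanRetentionLimitsRace skip itself.
ifeq ($(DEB_HOST_ARCH),riscv64)
TESTFLAGS := -timeout 60m -short
endif
```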





Bug#1030092: nvme-cli: nvme list json output contains wrapped-around negative integers

2023-02-01 Thread Daniel Swarbrick
Just following up on this; I see that nvme-cli 2.3-2 fixes the build, 
and I can confirm that the numeric values no longer wrap around to 
negative values.


As an aside, I also noticed that the JSON output "string" numeric values 
(e.g. the 128-bit NVMe counters which would lose accuracy if rendered as 
float64 JSON numbers) are now localized, i.e. include thousands 
separators. This is a bit peculiar to see in machine-readable JSON 
output, but easily worked around by setting LC_NUMERIC=C.






Bug#1030092: nvme-cli: nvme list json output contains wrapped-around negative integers

2023-01-31 Thread Daniel Swarbrick

Hi Daniel,

On 01.02.23 09:48, Daniel Baumann wrote:
I've reproduced the bug with 2.2 and can confirm that 2.3 does fix it. 
Did you notice that nvme-cli 2.3-1 is FTBFS on the buildds? The build 
appears to be failing due to a missing "libnvme-mi".




Bug#1030092: nvme-cli: nvme list json output contains wrapped-around negative integers

2023-01-30 Thread Daniel Swarbrick
Package: nvme-cli
Version: 2.2.1-4
Severity: normal

Dear Maintainer,

Since the update of nvme-cli to v2.x, the JSON output of an "nvme list"
command contains wrapped-around negative integers for various fields,
e.g.:

{
  "Devices":[
{
  "NameSpace":1,
  "DevicePath":"/dev/nvme0n1",
  "Firmware":"2B0QBXX7",
  "ModelNumber":"Samsung SSD 950 PRO 256GB",
  "SerialNumber":"",
  "UsedBytes":-2147483648,
  "MaximumLBA":500118192,
  "PhysicalSize":-2147483648,
  "SectorSize":512
}
  ]
}

Compare with the output of the previous nvme-cli v1.2:

{
  "Devices" : [
{
  "NameSpace" : 1,
  "DevicePath" : "/dev/nvme0n1",
  "Firmware" : "2B0QBXX7",
  "Index" : 0,
  "ModelNumber" : "Samsung SSD 950 PRO 256GB",
  "ProductName" : "Non-Volatile memory controller: Samsung Electronics Co 
Ltd VMe SSD Controller SM951/PM951 PM963 2.5\" NVMe PCIe SSD",
  "SerialNumber" : "",
  "UsedBytes" : 93811310592,
  "MaximumLBA" : 500118192,
  "PhysicalSize" : 256060514304,
  "SectorSize" : 512
}
  ]
}

The PhysicalSize item appears to be an overflowed int32 in the v2.2.x
output.

Another example of bogus values for a 1TB drive with nvme-cli 2.2.1

{
  "Devices":[
{
  "NameSpace":1,
  "DevicePath":"/dev/nvme0n1",
  "Firmware":"41001131",
  "ModelNumber":"PC711 NVMe SK hynix 1TB",
  "SerialNumber":"",
  "UsedBytes":-2147483648,
  "MaximumLBA":2000409264,
  "PhysicalSize":-2147483648,
  "SectorSize":512
}
  ]
}

Upstream has published new releases of nvme-cli (v2.3) and libnvme
(v1.3) in the last 24 hours, and skimming through the changelog I get
the feeling that this bug may have been resolved by those releases.

-- System Information:
Debian Release: bookworm/sid
  APT prefers testing
  APT policy: (500, 'testing'), (400, 'unstable')
Architecture: amd64 (x86_64)

Kernel: Linux 6.1.0-2-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_FIRMWARE_WORKAROUND
Locale: LANG=en_NZ.UTF-8, LC_CTYPE=en_NZ.UTF-8 (charmap=UTF-8) (ignored: LC_ALL 
set to en_US.UTF-8), LANGUAGE=en_NZ:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages nvme-cli depends on:
ii  libc6 2.36-8
ii  libjson-c50.16-2
ii  libnvme1  1.2-3
ii  uuid-runtime  2.38.1-4
ii  zlib1g1:1.2.13.dfsg-1

Versions of packages nvme-cli recommends:
ii  pci.ids  0.0~2023.01.18-1

nvme-cli suggests no packages.

-- no debconf information



Bug#1028212: prometheus-node-exporter-collectors: APT update deadlock - prevents unattended security upgrades

2023-01-08 Thread Daniel Swarbrick

Hi Eric,

Thanks for the detailed bug report. As this is something which can 
theoretically affect _any_ apt-based distribution (i.e., derivatives of 
Debian), I feel that it should ideally be reported upstream.


I personally run this textfile collector on a Debian bookworm system, as 
well as apticron - so this is (I think) a similar scenario where two 
independent processes are periodically updating the apt cache, and I 
wondered whether that was wise or not. I have seen the textfile 
collector block only once so far.


The apt.sh script which apt_info.py replaces only executed "apt-get 
--just-print" - so even if executed as root, it never tried to update 
the apt cache. In fact, unless you had something else like apticron to 
periodically update the apt cache, apt.sh would return stale information.


I guess that a simple workaround would be to tweak the systemd service 
so that apt_info.py is executed as an unprivileged user, which would be 
unable to update the cache, and theoretically avoid any potential for a 
deadlock. Perhaps a recommendation to the upstream developer could be 
made, e.g. to add a command-line argument to the script so that it 
wouldn't try to update the cache even when executed as root.
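
A minimal sketch of such an override (the unit name prometheus-node-exporter-apt.service is an assumption; check which unit actually runs apt_info.py):

```ini
# Hypothetical drop-in:
# /etc/systemd/system/prometheus-node-exporter-apt.service.d/override.conf
# Run the collector unprivileged so it can only read the apt cache,
# never lock or update it.
[Service]
User=nobody
Group=nogroup
```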


Best,
Daniel





Bug#1025234: prometheus: flaky autopkgtest (on 32 bit archs?)

2023-01-06 Thread Daniel Swarbrick

On 07.01.23 12:40, Adrian Bunk wrote:

Does running the autopkgtests on 32-bit bring more benefits than hassle,
or should they be run only on 64-bit architectures?


As troublesome as the tests are on 32-bit, and as much as it would 
probably be simpler to just blanket disable them in d/rules, I have seen 
other dubious code land occasionally, which would fail on 32-bit.


On several occasions in the past, I have had to patch tests which 
attempted to read numeric values into an untyped int / uint, which would 
overflow on 32-bit. For this reason, I think we still need to keep 
testing on 32-bit, to keep the upstream developers on their toes ;-)




Bug#1027367: (no subject)

2023-01-06 Thread Daniel Swarbrick
Paraphrasing myself from #1027365, this package's tests will pass (even 
without tzdata present) if "-tags timetzdata" is used, e.g. by  
overriding dh_auto_test.
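
Such an override is a short debian/rules addition; a sketch, assuming the package uses the dh sequencer with the Go buildsystem (which passes extra arguments through to "go test"):

```make
# Pass -tags timetzdata so the Go time package falls back to its
# embedded timezone database when the system has no tzdata files.
override_dh_auto_test:
	dh_auto_test -- -tags timetzdata
```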






Bug#1027366: (no subject)

2023-01-06 Thread Daniel Swarbrick
Paraphrasing myself from #1027365, this package's tests will pass (even 
without tzdata present) if "-tags timetzdata" is used, e.g. by 
overriding dh_auto_test.






Bug#1027365: golang-github-go-playground-locales: FTBFS in bullseye (missing build-depends on tzdata)

2023-01-06 Thread Daniel Swarbrick
Aha, reading the docs for the Go tzdata package more thoroughly sheds 
some light on the topic:


> Package tzdata provides an embedded copy of the timezone database. If
> this package is imported anywhere in the program, then if the time
> package cannot find tzdata files on the system, it will use this
> embedded information.
>
> This package should normally be imported by a program's main package,
> not by a library. ...
>
> This package will be automatically imported if you build with -tags
> timetzdata.


Since golang-github-go-playground-locales neither imports time/tzdata, 
nor specifies "-tags timetzdata" anywhere, it stands to reason that this 
FTBFS without the tzdata package available on the system.


All tests pass if "-tags timetzdata" is specified.

So, in keeping with Go's recommendation that time/tzdata should not be 
imported by a library, does it actually make sense to make this package 
Build-Depend on tzdata, when this can be as easily resolved as passing 
"tags timetzdata" to dh_auto_test? It seems like it's the responsibility 
of the app which _uses_ this library to import tzdata - not the 
responsibility of the library itself.






Bug#1027365: golang-github-go-playground-locales: FTBFS in bullseye (missing build-depends on tzdata)

2023-01-06 Thread Daniel Swarbrick

I am able to reproduce the FTBFS in a schroot, sans tzdata package.

However, this is weird, because Go ships with an embedded copy of tzdata 
(https://pkg.go.dev/time/tzdata). AFAICT, this is not stripped out in 
Debian golang-go packages.


Only selected tests fail, and only those which reference certain timezones:

--- FAIL: TestFmtTimeFull (0.00s)
    en_test.go:673: Expected '' Got 'unknown time zone 
America/Toronto'


--- FAIL: TestFmtTimeFull (0.00s)
    ja_test.go:641: Expected '' Got 'unknown time zone Asia/Tokyo'

--- FAIL: TestFmtTimeFull (0.00s)
    ja_JP_test.go:641: Expected '' Got 'unknown time zone Asia/Tokyo'

--- FAIL: TestFmtTimeFull (0.00s)
    ru_RU_test.go:834: Expected '' Got 'unknown time zone 
America/Toronto'


I don't quite understand what America/Toronto has got to do with the 
ru_RU locale; however, that zone appears in virtually all other 
"unrelated" tests (albeit commented out). It looks like an oversight 
that it was uncommented in ru_RU_test.go.


In fact, the _only_ two timezones referred to in this package are 
"America/Toronto" (always commented out except in en_test and 
ru_RU_test.go), and "Asia/Tokyo" - so perhaps it's not just something 
that affects certain timezones, but rather the mere fact that the 
package only refers to a tiny subset of the available timezones.






Bug#1027734: prometheus-blackbox-exporter: FTBFS: inconsistent test failures

2023-01-04 Thread Daniel Swarbrick

I think I just found the smoking gun, so to speak.

In the reproducible builds log, I spotted this:

=== RUN   TestDNSProtocol
    dns_test.go:490: "localhost" doesn't resolve to ::1.
--- SKIP: TestDNSProtocol (0.00s)

This is due to this check in TestDNSProtocol:
    _, err := net.ResolveIPAddr("ip6", "localhost")
    if err != nil {
    t.Skip("\"localhost\" doesn't resolve to ::1.")
    }

The failure of "localhost" resolving to ::1 (as well as 127.0.0.1) 
suggests that the reproducible builds environment does not conform to a 
typical Debian environment (where localhost resolves to both 127.0.0.1 
_and_ ::1), and will cause the GRPC tests to fail, since the upstream 
developer sets "IPProtocolFallback: false", and blackbox_exporter's 
resolver defaults to ip6[1].


I will prepare a patch to set "IPProtocolFallback: true" in the GRPC 
tests, as it is in all other tests in the repo (and as per the 
documented runtime config defaults).


[1]: 
https://github.com/prometheus/blackbox_exporter/issues/969#issuecomment-1370452161






Bug#1027734: prometheus-blackbox-exporter: FTBFS: inconsistent test failures

2023-01-03 Thread Daniel Swarbrick
Studying the test failure with the panic more closely, I think it is due 
to the inherent raciness caused by tests which spin up http servers, tcp 
servers etc in goroutines within the same test.


I think that what's happening is that the grpc server in the goroutine 
is not ready in time, so ProbeGRPC() fails, and the deferred 
s.GracefulStop() is called as the test exits. However, the goroutine is 
still in the s.Serve() loop, so... panic.


It's surprising how often a small delay is needed after spinning up such 
a test server in a goroutine. Such tests seem to pretty regularly fail 
on Debian CI infrastructure due to this raciness.






Bug#1027734: prometheus-blackbox-exporter: FTBFS: inconsistent test failures

2023-01-02 Thread Daniel Swarbrick
In general, the gRPC tests (in a pristine v0.23.0 checkout) seem to be 
utterly broken.


What's more, isolating the tests to just the GRPC tests fails in a 
completely different way:


$ go test ./...
ok  github.com/prometheus/blackbox_exporter (cached)
ok  github.com/prometheus/blackbox_exporter/config  (cached)
--- FAIL: TestGRPCConnection (0.00s)
    grpc_test.go:73: GRPC probe failed
panic: Fail in goroutine after TestGRPCConnection has completed

goroutine 612 [running]:
testing.(*common).Fail(0xc00023ad00)
    /usr/lib/go-1.19/src/testing/testing.go:824 +0xe5
testing.(*common).Errorf(0xc00023ad00, {0xc68a18?, 0xc000314390?}, 
{0xc00030dfc0?, 0x0?, 0xc000394ea0?})

    /usr/lib/go-1.19/src/testing/testing.go:941 +0x65
github.com/prometheus/blackbox_exporter/prober.TestGRPCConnection.func1()
    /tmp/blackbox_exporter/prober/grpc_test.go:56 +0x6e
created by github.com/prometheus/blackbox_exporter/prober.TestGRPCConnection
    /tmp/blackbox_exporter/prober/grpc_test.go:54 +0x325
FAIL    github.com/prometheus/blackbox_exporter/prober  0.013s
FAIL

vs.

$ go test -run TestGRPC ./...
ok  github.com/prometheus/blackbox_exporter (cached) [no tests to run]
ok  github.com/prometheus/blackbox_exporter/config  (cached) [no 
tests to run]

--- FAIL: TestGRPCConnection (0.00s)
    grpc_test.go:73: GRPC probe failed
--- FAIL: TestGRPCTLSConnection (0.05s)
    grpc_test.go:237: GRPC probe failed
--- FAIL: TestGRPCServiceNotFound (0.00s)
    utils_test.go:48: Expected: probe_grpc_status_code: 5, got: probe_grpc_status_code: 0
--- FAIL: TestGRPCHealthCheckUnimplemented (0.00s)
    utils_test.go:48: Expected: probe_grpc_status_code: 12, got: probe_grpc_status_code: 0

FAIL
FAIL    github.com/prometheus/blackbox_exporter/prober  0.058s
FAIL



OpenPGP_signature
Description: OpenPGP digital signature


Bug#1027734: prometheus-blackbox-exporter: FTBFS: inconsistent test failures

2023-01-02 Thread Daniel Swarbrick

Hi Mathias,

Given that a pristine, upstream checkout fails on that test for the last 
two releases, I think we will have to just skip the test.


See https://github.com/prometheus/blackbox_exporter/issues/969



OpenPGP_signature
Description: OpenPGP digital signature


Bug#1026696: golang-github-prometheus-client-model: FTBFS: make: *** [debian/rules:6: binary] Error 25

2022-12-22 Thread Daniel Swarbrick

On 22.12.22 20:52, Shengjing Zhu wrote:


Hmm, this works for me, the generated pb.go uses old timestamp type.
I have added above change and built the package, then checked the result.


My mistake, I think I must have looked at a stale build. The suggested 
.proto mapping workaround seems to do what we need.


I'll upload a new release shortly.


OpenPGP_signature
Description: OpenPGP digital signature


Bug#1026696: golang-github-prometheus-client-model: FTBFS: make: *** [debian/rules:6: binary] Error 25

2022-12-21 Thread Daniel Swarbrick
Updating the 01-Use_go_generate.patch as follows results in a successful 
build (without needing to add golang-google-protobuf-dev as a dependency):


diff --git a/debian/patches/01-Use_go_generate.patch b/debian/patches/01-Use_go_generate.patch
index cafa5e2..ffa83cf 100644
--- a/debian/patches/01-Use_go_generate.patch
+++ b/debian/patches/01-Use_go_generate.patch
@@ -6,4 +6,4 @@ Description: Use go generate to avoid depending on special make rules in

 @@ -0,0 +1,3 @@
 +package io_prometheus_client
 +
-+//go:generate protoc --proto_path=../io/prometheus/client --go_out=paths=source_relative:. metrics.proto
++//go:generate protoc --proto_path=../io/prometheus/client --go_out=paths=source_relative,Mgoogle/protobuf/timestamp.proto=github.com/golang/protobuf/ptypes/timestamp:. metrics.proto


However, the generated metrics.pb.go still uses *timestamppb.Timestamp 
for the timestamp fields, which will cause undesirable side-effects on 
downstream packages.


I am not aware of any way to influence protoc to use the old timestamp type.


OpenPGP_signature
Description: OpenPGP digital signature


Bug#1026696: golang-github-prometheus-client-model: FTBFS: make: *** [debian/rules:6: binary] Error 25

2022-12-21 Thread Daniel Swarbrick

Hi,

On 22.12.22 00:41, Shengjing Zhu wrote:

Hi,

The workaroud could be like this:
https://salsa.debian.org/go-team/packages/notary/-/commit/b0a072faa72857f7523c8245ecaa8814d5a60051


Fixing the build failure in golang-github-prometheus-client-model is a 
simple matter of including golang-google-protobuf-dev in the build-deps.


However, as the resulting metrics.pb.go now has a different type for the 
timestamp fields, and downstream packages that use this will likely need 
patching.


I already had to patch golang-github-prometheus-common[1] and 
golang-github-prometheus-client-golang[2] for similar issues not long 
ago. With those patches in place, and the new metrics.pb.go, those 
packages FTBFS. Dropping those patches fixes the build, but 
prometheus-common then panics with a Go reflect error in one of the tests.


So I'm not really sure of the best course of action at the moment.

[1]: 
https://salsa.debian.org/go-team/packages/golang-github-prometheus-common/-/blob/debian/sid/debian/patches/01-support-outdated-protobuf-build-deps.patch
[2]: 
https://salsa.debian.org/go-team/packages/golang-github-prometheus-client-golang/-/blob/debian/sid/debian/patches/02-support-outdated-protobuf-build-deps.patch


OpenPGP_signature
Description: OpenPGP digital signature


Bug#1026696: golang-github-prometheus-client-model: FTBFS: make: *** [debian/rules:6: binary] Error 25

2022-12-20 Thread Daniel Swarbrick
After a fair amount of head scratching, I tracked this down to a change 
in behaviour of the protobuf compiler. Version 3.14.0+ generates 
slightly different pb.go files with respect to the timestamp type (and 
possibly others):


--- metrics.pb.go.old   2022-11-08 23:31:00.0 +1300
+++ metrics.pb.go.new   2022-11-08 23:31:00.0 +1300
@@ -6,7 +6,7 @@
 import (
    fmt "fmt"
    proto "github.com/golang/protobuf/proto"
-   timestamp "github.com/golang/protobuf/ptypes/timestamp"
+   timestamppb "google.golang.org/protobuf/types/known/timestamppb"
    math "math"
 )

@@ -629,12 +629,12 @@
 }

 type Exemplar struct {
-   Label    []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"`
-   Value    *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"`
-   Timestamp    *timestamp.Timestamp `protobuf:"bytes,3,opt,name=timestamp" json:"timestamp,omitempty"`
-   XXX_NoUnkeyedLiteral struct{} `json:"-"`
-   XXX_unrecognized []byte   `json:"-"`
-   XXX_sizecache    int32    `json:"-"`
+   Label    []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"`
+   Value    *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"`
+   Timestamp    *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=timestamp" json:"timestamp,omitempty"`
+   XXX_NoUnkeyedLiteral struct{}   `json:"-"`
+   XXX_unrecognized []byte `json:"-"`
+   XXX_sizecache    int32  `json:"-"`
 }

 func (m *Exemplar) Reset() { *m = Exemplar{} }
@@ -676,7 +676,7 @@
    return 0
 }

-func (m *Exemplar) GetTimestamp() *timestamp.Timestamp {
+func (m *Exemplar) GetTimestamp() *timestamppb.Timestamp {
    if m != nil {
    return m.Timestamp
    }

I was surprised that the protobuf _compiler_ was responsible for this 
change, but verified with an older snapshot of the package (3.12.4-1+b5) 
that the previous behaviour was restored. Snapshot versions 3.14.0-1 and 
later produce the newer style generated pb.go file, referencing 
timestamppb.Timestamp.


The upstream changelogs for the protobuf compiler do not list any 
Go-related changes in version 3.13.x, however for version 3.14.0 the 
following is mentioned:


  Go:
  * Update go_package options to reference google.golang.org/protobuf module.


I strongly suspect that this resulted in the change in the generated 
pb.go file. The Makefile in prometheus/client_model also pins the 
protobuf compiler version to 3.13.0:


# Need to be on a previous version that doesn't cause the updated WKT go_package values to be added.

PROTOC_VERSION := 3.13.0



OpenPGP_signature
Description: OpenPGP digital signature


Bug#1025241: prometheus: Please increase timeout of tests for "riscv64" arch

2022-12-20 Thread Daniel Swarbrick

For the record, the following test also just timed out on i386:

=== RUN   TestRulesUnitTest/Long_evaluation_interval
Unit Testing:  ./testdata/long-period.yml
panic: test timed out after 20m0s

So perhaps we need to increase the baseline test timeout for _all_ archs 
to at least e.g. 30 mins.




OpenPGP_signature
Description: OpenPGP digital signature


Bug#855145: [pkg-go] Bug#855145: prometheus: Console templates don't work out of the box

2022-12-18 Thread Daniel Swarbrick

On Wed, 22 Jun 2022 17:01:51 -0700 Rob Leslie  wrote:
> Attached is at least one patch needed to make the sample consoles usable.

Unfortunately it requires a slightly more extensive patch than that. See 
https://salsa.debian.org/go-team/packages/prometheus/-/commit/d95f2bf1764710e4583be69340f4c1502361b57a




OpenPGP_signature
Description: OpenPGP digital signature


Bug#1025241: prometheus: Please increase timeout of tests for "riscv64" arch

2022-12-18 Thread Daniel Swarbrick
60 minutes is a big jump up from 20 minutes, especially if the test 
duration is only just on the border of the current 20 minute timeout. I 
would suggest a slightly more conservative increase, e.g. 30 minutes, so 
as not to unnecessarily tie up the s390x hosts if some test blocks 
indefinitely.


Additionally, there are _other_ reasons for the failing tests on s390x 
which are unrelated to test duration / timeout.




OpenPGP_signature
Description: OpenPGP digital signature


Bug#1026123: ITP: golang-github-mdlayher-packet -- Go library for Linux packet sockets (AF_PACKET)

2022-12-14 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-mdlayher-packet
  Version : 1.1.0-1
  Upstream Contact: Matt Layher 
* URL : https://github.com/mdlayher/packet
* License : Expat
  Programming Lang: Go
  Description : Go library for Linux packet sockets (AF_PACKET)

 github.com/mdlayher/packet is a successor to github.com/mdlayher/raw,
 but exclusively focused on Linux and AF_PACKET sockets. The APIs are
 nearly identical, but with a few changes which take into account some
 of the lessons learned while working on raw.

This package is a dependency of the one and only tagged release of
github.com/mdlayher/raw, which has been deprecated. Users of raw are
encouraged to migrate to packet.

I will co-maintain this package as a member of the Debian Go Packaging
Team.



Bug#1025234: prometheus: flaky autopkgtest (on 32 bit archs?)

2022-12-01 Thread Daniel Swarbrick

Hi Paul,

I have also noticed the fairly frequent failures of the memory-intensive 
tests on 32-bit, and I am doing my best to keep on top of them with 
t.Skip() patches where appropriate. Several of the tests result in the 4 
GiB memory footprint threshold being exceeded.


Prometheus itself is still usable on 32-bit, but obviously only up to a 
certain size. The upstream developers don't seem to consider 32-bit when 
writing unit tests, thus the regular addition of new tests which fail.


Daniel.



OpenPGP_signature
Description: OpenPGP digital signature


Bug#1025134: ITP: golang-github-mdlayher-ethtool -- Go library to control the Linux ethtool generic netlink interface

2022-11-29 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-mdlayher-ethtool
  Version : 0.0~git20220830.0e16326-1
  Upstream Author : Matt Layher 
* URL : https://github.com/mdlayher/ethtool
* License : Expat
  Programming Lang: Go
  Description : Go library to control the Linux ethtool generic netlink 
interface

Go library to control the Linux ethtool generic netlink interface. For more
information, see https://docs.kernel.org/networking/ethtool-netlink.html

This package is a required build-dependency of prometheus-node-exporter >=
1.5.0.

I will co-maintain this package as a member of the Debian Go Packaging
Team.



Bug#1024967: ITP: golang-github-grafana-regexp -- Faster version of the Go regexp package

2022-11-27 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-grafana-regexp
  Version : 0.0~git20221122.6b5c0a4-1
  Upstream Author : Grafana Labs
* URL : https://github.com/grafana/regexp
* License : BSD-3-clause
  Programming Lang: Go
  Description : Faster version of the Go regexp package

Fork of the upstream Go regexp package, with some code optimisations to
make it run faster.
.
All semantics are the same, and the optimised code passes all tests from
upstream.

This package is a required build-dep of Prometheus. Whilst patching
Prometheus to use the original Go standard library regexp package is
fairly trivial, the grafana/regexp package is used extensively
throughout Prometheus, and likely has performance advantages.

I will co-maintain this package as a member of the Debian Go Packaging
Team.



Bug#1023790: (no subject)

2022-11-10 Thread Daniel Swarbrick
The email template was split out of default.tmpl by upstream commit 
https://github.com/prometheus/alertmanager/commit/c0a7b75c9cfb0772bdf5ec7362775f5f7798a3a0, 
into email.tmpl.


The Debian package does not install email.tmpl, and even if that file is 
copied manually into /usr/share/prometheus/alertmanager, it does not 
resolve the issue. Concatenating the contents of email.tmpl to 
default.tmpl restores the former functionality, and can be considered a 
workaround for now.


I guess 02-Do_not_embed_blobs.patch is going to need to be refactored to 
handle multiple templates, or alternatively the package build process 
will need to concatenate email.tmpl into default.tmpl.




OpenPGP_signature
Description: OpenPGP digital signature


Bug#1023790: (no subject)

2022-11-10 Thread Daniel Swarbrick

I am able to reproduce the reported error with 0.24.0-4.

A vanilla upstream build does not exhibit the error. It appears to be 
caused by 02-Do_not_embed_blobs.patch, as omitting that also results in 
a working build.




OpenPGP_signature
Description: OpenPGP digital signature


Bug#1010404: (no subject)

2022-11-06 Thread Daniel Swarbrick
This bug report is pretty difficult to make sense of, due to the 
spelling mistakes and lack of punctuation.


Why is docker.io mentioned in the bug report? The Debian package of 
prometheus-node-exporter is not intended to be run in docker.


If you run a system with sid / unstable configured in apt sources, you 
have to expect a certain amount of breakage from time to time, and 
package maintainers are unlikely to take bug reports seriously unless 
the issue can be reproduced on a stable or testing install.


Please write a clearer description of the bug. I don't know where to 
begin with this one.




OpenPGP_signature
Description: OpenPGP digital signature


Bug#962476: (no subject)

2022-11-06 Thread Daniel Swarbrick
Such a change is unlikely to be met with enthusiasm by the vast majority 
of users, and would likely be the source of many subsequent bug reports 
requesting the change to be reverted.


Whilst I acknowledge that node_exporter provides a wealth of information 
which could potentially be useful to attackers, the main purpose of the 
daemon is to be queried via the network by a Prometheus instance.


Many other network-based services will bind to the wildcard address by 
default, since they are functionally pretty useless if they don't do that.


The upstream Prometheus developers have long maintained the position 
that security is out of scope for Prometheus and its related exporters, 
since there is no "one size fits all", and end users are encouraged to 
weigh up what security precautions make sense in their specific environment.


If you are concerned about drive-by probes of node_exporter or other 
services for that matter, I suggest that you look into running a 
firewall on your host.




OpenPGP_signature
Description: OpenPGP digital signature


Bug#1022741: ITP: golang-github-scaleway-scaleway-sdk-go -- Scaleway API SDK for Go

2022-10-24 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-scaleway-scaleway-sdk-go
  Version : 1.0.0~beta9-1
  Upstream Author : Scaleway
* URL : https://github.com/scaleway/scaleway-sdk-go
* License : Apache-2.0
  Programming Lang: Go
  Description : Scaleway API SDK for Go

Scaleway is a European cloud computing company offering a complete and
simple public cloud ecosystem, bare-metal servers, and private datacenter
infrastructures.
.
This package provides an SDK to programmatically interact with the
Scaleway API from Go.

This is a build-dep of the Scaleway service discovery mechanism in
Prometheus. I will co-maintain this package as a member of the Debian Go
Packaging team.



Bug#1022529: ITP: golang-github-ionos-cloud-sdk-go -- Go API client for IONOS Cloud

2022-10-23 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-ionos-cloud-sdk-go
  Version : 6.1.3-1
  Upstream Author : IONOS Cloud
* URL : https://github.com/ionos-cloud/sdk-go
* License : Apache-2.0
  Programming Lang: Go
  Description : Go API client for IONOS Cloud

IONOS enterprise-grade Infrastructure as a Service (IaaS) solutions can
be managed through the Cloud API, in addition to or as an alternative to
the "Data Center Designer" (DCD) browser-based tool.
.
The IONOS Cloud SDK for Go provides programmatic access to the IONOS
Cloud API.

This package is required to support IONOS Cloud service discovery in
Prometheus. I will co-maintain this package as part of the Debian Go
Packaging Team.



Bug#1022055: ITP: prometheus-ui-classic -- classic web user interface for Prometheus

2022-10-19 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: prometheus-ui-classic
  Version : 2.33.5+ds-1
  Upstream Author : The Prometheus Authors
* URL : https://prometheus.io/
* License : Apache-2.0, Expat
  Programming Lang: JavaScript
  Description : classic web user interface for Prometheus

This package contains the classic web UI for Prometheus, which shipped
with versions up until 2.33.5, after which it was removed upstream.
Splitting out the classic UI from the last Prometheus version to include
it was necessary due to the new React UI being prohibitively complex to
package (without violating Debian policy).

This package will effectively be frozen in time at version 2.33.5, until
such time as the React UI can be included in the prometheus package.
It is unlikely that the prometheus-ui-classic package will need any
updating, so long as Prometheus provides the v1 API.



Bug#1021118: ITP: golang-github-dennwc-btrfs -- btrfs library for Go

2022-10-02 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-dennwc-btrfs
  Version : 0.0~git20220403.b3db0b2
  Upstream Author : Denys Smirnov 
* URL : https://github.com/dennwc/btrfs
* License : Apache-2.0
  Programming Lang: Go
  Description : btrfs library for Go

btrfs library for Go, providing access to low-level management functions
of btrfs filesystems.

This package is a new build-dep for the btrfs collector in Prometheus
node_exporter >= v1.4.0. I will co-maintain this package as part of the
Debian Go packaging team.



Bug#1021116: ITP: golang-github-dennwc-ioctl -- ioctl library for Go

2022-10-02 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-dennwc-ioctl
  Version : 1.0.0
  Upstream Author : Denys Smirnov 
* URL : https://github.com/dennwc/ioctl
* License : MIT
  Programming Lang: Go
  Description : ioctl library for Go

Lightweight ioctl library for Go, providing functions for performing
read/write ioctl operations.

This package is a dependency of github.com/dennwc/btrfs, and thus an
indirect dependency of Prometheus node_exporter 1.4.0+. It appears
fairly inactive, as it is functionally complete and mature. I don't
expect it to require much maintenance going forward, however I will
co-maintain it as part of the Debian Go packaging team.



Bug#1019256: ITP: golang-gopkg-telebot.v3 -- bot framework for the Telegram Bot API

2022-09-06 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-gopkg-telebot.v3
  Version : 3.0.0
  Upstream Author : Ilya Kowalewski 
* URL : https://gopkg.in/telebot.v3
* License : Expat
  Programming Lang: Go
  Description : bot framework for the Telegram Bot API

Telebot is a Go framework for the Telegram Bot API, providing an easy to use
API for command routing, inline query requests and keyboards, as well as
callbacks.

This package is a build-dep for Prometheus Alertmanager 0.24.0+. I will
co-maintain this package as a member of the Go packaging team.



Bug#1010054:

2022-05-10 Thread Daniel Swarbrick
Looking at the changes for prometheus/common v0.26.0, I see:

- Replace go-kit/kit/log with go-kit/log

This just jogged my memory as to what the issue is. For whatever reason,
the log level is not set correctly if the exporter still uses
go-kit/kit/log (instead of the newer go-kit/log) and is built with
prometheus/common v0.26.0 or later.

blackbox_exporter v0.19.0 still uses go-kit/kit/log, so the options are
either to patch that in Debian to use go-kit/log, or update to
blackbox_exporter v0.20.0, which uses the latter.
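For reference, the go-kit/log migration is essentially an import path 
change, since the extracted module kept the same API (sketch based on the 
go-kit/log migration notes; exact imports vary per exporter):

```diff
 import (
-    "github.com/go-kit/kit/log"
-    "github.com/go-kit/kit/log/level"
+    "github.com/go-kit/log"
+    "github.com/go-kit/log/level"
 )
```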

On Tue, May 10, 2022 at 8:08 PM Daniel Swarbrick 
wrote:

> Several months ago I encountered the same bug with
> prometheus-smokeping-prober, and fixed it, but my brain is a bit foggy
> right now as to what the root cause was.
>
> However, for prometheus-blackbox-exporter at least, the --log.level flag
> is respected so long as you build / run it with the dependency versions
> specified by go.mod - up until using prometheus/common v0.26.0 or later -
> at which point the --log.level flag is no longer honoured. This version
> number sticks in my mind from the smokeping_prober bug too.
>
> I will follow up with more information shortly.
>


Bug#1010054:

2022-05-10 Thread Daniel Swarbrick
Several months ago I encountered the same bug with
prometheus-smokeping-prober, and fixed it, but my brain is a bit foggy
right now as to what the root cause was.

However, for prometheus-blackbox-exporter at least, the --log.level flag is
respected so long as you build / run it with the dependency versions
specified by go.mod - up until using prometheus/common v0.26.0 or later -
at which point the --log.level flag is no longer honoured. This version
number sticks in my mind from the smokeping_prober bug too.

I will follow up with more information shortly.


Bug#997736:

2021-12-20 Thread Daniel Swarbrick
I noticed in the debian/rules that rebuilding the timezone JS data from the
tzdata package is *optional*, and only occurs if the moment-timezone
package has a +x suffix, e.g. +2021e. If not, the timezone JS data from
the upstream moment-timezone source will be used.

I just tried building v0.5.34 *without* the tzdata suffix, and all tests
passed. I suspect that building against tzdata 2021e, despite it
allegedly already being included in the upstream v0.5.34 source, would
require some manipulation of the tests. I suppose the question is how far
down the rabbit hole we want to go. I'm sure that this issue will arise
again as soon as tzdata 2022a is released, and we will need another patch
in moment-timezone, such as was contributed by Martina Ferrari in the past.

Perhaps we could instead just move forward with moment-timezone v0.5.34,
sans tzdata 2021e, in the interests of closing this bug and unblocking a
bunch of testing package transitions.


Bug#997736:

2021-12-19 Thread Daniel Swarbrick
This bug is now blocking Prometheus (which I co-maintain) from
transitioning to testing.

In the meantime, I see that there is a new release of moment-timezone.js
upstream (0.5.34) which updates the IANA TZDB to 2021e, and would thus
perhaps resolve the test failures that I see when trying to build the
current 0.5.33 package:

> Warning: 35/1394782 assertions failed (26711ms) Use --force to continue.

The tweak to debian/control, which allegedly closes this bug, could then be
made a more elegant "tzdata (>= 2021e)"

My JS is pretty rusty these days, and I mostly only touch Go packages in
Debian, so I'm hesitant to upload a 0.5.34 package (but may do so if this
bug is going to stagnate).


Bug#998231:

2021-12-16 Thread Daniel Swarbrick
A git bisect seems to suggest that
https://github.com/go-ping/ping/commit/30a8f08ad2a9d0b88ca9c1978114d253f63748c3
is the commit which resulted in the regression in go-ping. Sadly that made
it into the package that shipped with bullseye.

I can update the go-ping package to a newer version that resolves this bug,
and bump a new build of smokeping_prober. Perhaps somebody with more free
time can backport it to bullseye.


Bug#998231:

2021-12-16 Thread Daniel Swarbrick
I tested some upstream tags, i.e. "go run ..." to build with the exact
module versions specified by their go.mod:

v0.3.1 - OK
v0.4.0 - counters latch at 65536
v0.4.1 - counters latch at 65534
v0.4.2 - OK
master (as of writing) - OK

Between v0.4.1 and v0.4.2, the go-ping version specified by go.mod changed
from v0.0.0-20210201192233-b6486c6f1f2d to
v0.0.0-20210417180633-a50937ef0888. Theoretically, an updated
golang-github-go-ping-ping package, built from
v0.0.0-20210417180633-a50937ef0888 or later, should resolve this bug.

The go-ping commit responsible for fixing this bug is
https://github.com/go-ping/ping/commit/38182647687148eb7c963dc57dc62456b12aa0ae,
which was committed a little while after the currently packaged
0.0~git20210312.d90f377-1 version.


Bug#998231:

2021-12-16 Thread Daniel Swarbrick
Unable to reproduce with an earlier, in-house build of smokeping_prober
0.3.1 (prior to first debian upload):

# HELP smokeping_response_duration_seconds A histogram of latencies for ping responses.
# TYPE smokeping_response_duration_seconds histogram
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="5e-05"} 474
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0001"} 67789
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0002"} 72798
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0004"} 72861
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0008"} 72917
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0016"} 72938
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0032"} 72949
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0064"} 72966
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0128"} 72973
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0256"} 72978
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0512"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.1024"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.2048"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.4096"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.8192"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="1.6384"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="3.2768"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="6.5536"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="13.1072"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="26.2144"} 72979
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="+Inf"} 72979
smokeping_response_duration_seconds_sum{host="127.0.0.1",ip="127.0.0.1"} 6.126716514000115
smokeping_response_duration_seconds_count{host="127.0.0.1",ip="127.0.0.1"} 72979

According to the go.mod for smokeping_prober v0.3.1, it was presumably
built with github.com/sparrc/go-ping v0.0.0-20190613174326-4e5b6552494c
(which predates the go-ping project moving to a new organization in github).

So despite the fact that the currently open github issues suggest that the
bug is not yet resolved upstream, it is also beginning to smell like a
regression in go-ping, somewhere between v0.0.0-20190613174326-4e5b6552494c
and v0.0.0-20210312.d90f377.


Bug#998231:

2021-12-16 Thread Daniel Swarbrick
Reproduced with prometheus-smokeping-prober 0.4.2-2, built with go-ping
0.0~git20210312.d90f377-1.

# HELP smokeping_response_duration_seconds A histogram of latencies for ping responses.
# TYPE smokeping_response_duration_seconds histogram
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="5e-05"} 49514
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0001"} 65308
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0002"} 65498
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0004"} 65507
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0008"} 65530
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0016"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0032"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0064"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0128"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0256"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.0512"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.1024"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.2048"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.4096"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="0.8192"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="1.6384"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="3.2768"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="6.5536"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="13.1072"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="26.2144"} 65535
smokeping_response_duration_seconds_bucket{host="127.0.0.1",ip="127.0.0.1",le="+Inf"} 65535
smokeping_response_duration_seconds_sum{host="127.0.0.1",ip="127.0.0.1"} 3.102824054205
smokeping_response_duration_seconds_count{host="127.0.0.1",ip="127.0.0.1"} 65535


Bug#1001408: ITP: golang-github-nginxinc-nginx-plus-go-client -- Go client for NGINX Plus API

2021-12-09 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: golang-github-nginxinc-nginx-plus-go-client
  Version : 0.9.0
  Upstream Author : NGINX, Inc.
* URL : https://github.com/nginxinc/nginx-plus-go-client
* License : Apache-2.0
  Programming Lang: Go
  Description : Go client for NGINX Plus API

Client library for working with the NGINX Plus API from Go. Compatible
with version 5 of NGINX Plus API, which was introduced in NGINX Plus
R19.

This package is a build-dependency of newer versions of
prometheus-nginx-exporter, and may also find use in other projects.

I will co-maintain this package with the Debian Go Packaging Team.



Bug#992409:

2021-12-05 Thread Daniel Swarbrick
Tags: unreproducible upstream

Sadly I am not able to reproduce this (in a de_DE.UTF-8 locale). However,
the issue has also been mentioned upstream, and since this is (in theory)
not a Debian-specific issue, I suggested that it should be fixed upstream.
See
https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/issues/97

Upon further consideration, it's probably advisable for all of these
scripts to set LC_ALL=C, since that is the most universal locale for
parsing output of third-party executables - not only numeric formats, but
also dates / times and localized strings.


Bug#990715:

2021-12-04 Thread Daniel Swarbrick
Hi Tim,

Thanks for raising these valid points.

prometheus-smokeping-prober is still quite a young package, and was first
uploaded not long before the bullseye freeze, so it perhaps lacked a bit of
polish. The cap_net_raw debconf db setup is identical to how it is
implemented in prometheus-blackbox-exporter, which also defaults to false.
In the case of blackbox_exporter, I think this is *probably* justified,
since it probes targets on demand, and accepts targets in the scrape URL.

smokeping_prober is a little different however, since the targets are
predefined, and are pinged on a schedule determined by smokeping_prober
itself, so the opportunity for abuse is greatly diminished. I think we
could safely enable cap_net_raw by default for this package.
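For reference, granting the capability by hand looks roughly like the following (the binary path is an assumption about the package layout; this is what the debconf option would automate, not a command the package necessarily runs verbatim):

```shell
# Allow the prober to open raw ICMP sockets without running as root.
sudo setcap cap_net_raw+ep /usr/bin/prometheus-smokeping-prober

# Verify the capability was applied.
getcap /usr/bin/prometheus-smokeping-prober
```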

On the subject of logging errors when unable to send packets, I don't
believe this would be best implemented as a Debian patch, since such errors
are not Debian-specific. I recommend that you open an issue upstream on
GitHub to address that.

Daniel Swarbrick


Bug#998231:

2021-12-04 Thread Daniel Swarbrick
Hello Tim,

I have just uploaded prometheus-smokeping-prober 0.4.2-1; however, I have
not marked this bug as solved, since the only difference between 0.4.1 and
0.4.2 is that the upstream developer bumped the go-ping build dependency
git snapshot in go.mod to a slightly later version.

As far as I can tell, the real bug (which is actually in go-ping) has not
had a fix merged yet: see https://github.com/go-ping/ping/issues/142

I have older packaged versions of prometheus-smokeping-prober (0.3.0,
0.3.1) which were definitely built with much older versions of go-ping, yet
they have bucket counts that are over 200K, so this is a little bit
bewildering. Note that the Debian go-ping build-dep has been updated since
the previous prometheus-smokeping-prober 0.4.1 package was built; however,
it is still not as new as the git snapshot referred to in
smokeping_prober's go.mod.

Perhaps you could take the new 0.4.2-1 package for a test drive.

Daniel Swarbrick


Bug#1000397: ITP: prometheus-frr-exporter -- Prometheus exporter for the FRR daemon

2021-11-22 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 
X-Debbugs-Cc: debian-de...@lists.debian.org

* Package name: prometheus-frr-exporter
  Version : 0.2.20
  Upstream Author : Tynan Young 
* URL : https://github.com/tynany/frr_exporter
* License : MIT
  Programming Lang: Go
  Description : Prometheus exporter for the FRR daemon

Prometheus exporter for FRR version 3.0+ that collects metrics by using
vtysh and exposes them via HTTP, ready for collection by Prometheus.

My employer currently uses this exporter, and since I have already
packaged it in-house, I wanted to also share it with the Debian
community. I feel that the package would be useful to service providers
and network operators who wish to monitor the health of e.g. BGP peering
with Prometheus.

I will maintain this package with the help of the Debian Go Packaging and
Prometheus teams.



Bug#981741: ITP: prometheus-smokeping-prober -- Prometheus style "smokeping" prober

2021-02-03 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 

* Package name: prometheus-smokeping-prober
  Version : 0.4.1
  Upstream Author : Ben Kochie 
* URL : https://github.com/SuperQ/smokeping_prober
* License : Apache-2.0
  Programming Lang: Go
  Description : Prometheus style "smokeping" prober

This prober sends a series of ICMP (or UDP) pings to a target and records
the responses in Prometheus histogram metrics. The resulting metrics are
useful for detecting changes in network latency (or round trip time), as
well as packet loss over a network path.

I have been using smokeping_prober for a little over a year, and think
it would be of interest to others as a Debian package. I will
co-maintain this package as part of the Debian Go Team. Benjamin Drung
(bdrung) has kindly offered to sponsor the upload.



Bug#981733: ITP: golang-github-go-ping-ping -- simple but powerful ICMP echo (ping) library for Go

2021-02-03 Thread Daniel Swarbrick
Package: wnpp
Severity: wishlist
Owner: Daniel Swarbrick 

* Package name: golang-github-go-ping-ping
  Version : 0+git20201106.b6486c6
  Upstream Author : Ben Kochie , et al
* URL : https://github.com/go-ping/ping
* License : MIT
  Programming Lang: Go
  Description : simple but powerful ICMP echo (ping) library for Go

Library for sending ICMP Echo Request packet(s) and waiting for their
Echo Reply responses. Both traditional ICMP ping (requiring raw socket
access) and unprivileged UDP ping are supported.
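A minimal sketch of the library's basic usage, based on the upstream README (the target address, count, and timeout are arbitrary; building this requires the github.com/go-ping/ping module and, for privileged mode, cap_net_raw or root):

```go
package main

import (
	"fmt"
	"log"
	"time"

	ping "github.com/go-ping/ping"
)

func main() {
	pinger, err := ping.NewPinger("127.0.0.1")
	if err != nil {
		log.Fatal(err)
	}
	pinger.Count = 3
	pinger.Timeout = 5 * time.Second
	// false selects the unprivileged UDP ping mode; true uses a raw
	// ICMP socket, which needs cap_net_raw or root.
	pinger.SetPrivileged(false)

	if err := pinger.Run(); err != nil {
		log.Fatal(err)
	}
	stats := pinger.Statistics()
	fmt.Printf("sent=%d received=%d avg-rtt=%v\n",
		stats.PacketsSent, stats.PacketsRecv, stats.AvgRtt)
}
```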

This package would be a required build-dep of smokeping_prober, which is
a Prometheus exporter for smokeping-style metrics. The go-ping library
will also likely be useful for other projects.

I will co-maintain this package as part of the Debian Go Team. Benjamin
Drung (bdrung) has kindly offered to sponsor the upload.


