[prometheus-users] Re: prometheus snmp-exporter stack trace: invalid memory address

2022-03-18 Thread Brian Candler
Can you share the content of your "pxdlrouternxos" module?

Does "all the time" mean on every scrape?

Meanwhile: here is your stack trace reformatted.

error getting target 10.14.25.10: recover: runtime error: invalid memory 
address or nil pointer dereference
Stack:goroutine 358625 [running]:
github.com/gosnmp/gosnmp.(*GoSNMP).send.func1(0xc000e155c8)
/go/pkg/mod/github.com/gosnmp/gosnmp@v1.29.0/marshal.go:326 +0xa5
panic(0x9e4880, 0xe89790)
/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/gosnmp/gosnmp.(*GoSNMP).send(0xc0004f6c60, 0xc0004c42a0, 
0xcde601, 0x0, 0xb34ee0, 0xc000b609c0)
/go/pkg/mod/github.com/gosnmp/gosnmp@v1.29.0/marshal.go:372 +0x20e
github.com/gosnmp/gosnmp.(*GoSNMP).Get(0xc0004f6c60, 0xc00012e300, 0x6, 
0x6, 0x0, 0x0, 0xc000522b10)
/go/pkg/mod/github.com/gosnmp/gosnmp@v1.29.0/gosnmp.go:363 +0x159
main.ScrapeTarget(0xb40660, 0xc0004b41c0, 0xc00045ca67, 0xe, 0xc000161770, 
0xb34f40, 0xc000522840, 0x0, 0x0, 0x0, ...)
/app/collector.go:133 +0x5be
main.collector.Collect(0xb40660, 0xc0004b41c0, 0xc00045ca67, 0xe, 
0xc000161770, 0xb34f40, 0xc000522840, 0xc000281020)
/app/collector.go:222 +0xbc
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
/go/pkg/mod/github.com/prometheus/client_golang@v1.9.0/prometheus/registry.go:446 +0x1a2
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
/go/pkg/mod/github.com/prometheus/client_golang@v1.9.0/prometheus/registry.go:457 +0x5ce

goroutine 1 [IO wait, 216 minutes]:
internal/poll.runtime_pollWait(0x7f0f18245708, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc0004c6018, 0x72, 0x0, 0x0, 0xa85b68)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc0004c6000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:394 +0x1fc
net.(*netFD).accept(0xc0004c6000, 0x1eadb908595c5b95, 0x0, 0x0)
/usr/local/go/src/net/fd_unix.go:172 +0x45
net.(*TCPListener).accept(0xc0004a8300, 0x6234102b, 0xc00052dad0, 0x48e3a6)
/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0004a8300, 0xc00052db20, 0x18, 0xc00180, 
0x7f8dec)
/usr/local/go/src/net/tcpsock.go:261 +0x65
net/http.(*Server).Serve(0xc0004c4000, 0xb3ee20, 0xc0004a8300, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:2937 +0x266
github.com/prometheus/exporter-toolkit/web.Serve(0xb3ee20, 0xc0004a8300, 0xc0004c4000, 0x0, 0x0, 0xb34f40, 0xc000286420, 0x0, 0xe0)
/go/pkg/mod/github.com/prometheus/exporter-toolkit@v0.5.1/web/tls_config.go:192 +0x1b0
github.com/prometheus/exporter-toolkit/web.ListenAndServe(0xc0004c4000, 0x0, 0x0, 0xb34f40, 0xc000286420, 0x0, 0x0)
/go/pkg/mod/github.com/prometheus/exporter-toolkit@v0.5.1/web/tls_config.go:184 +0xfd
main.main()
/app/main.go:248 +0xb71

goroutine 45 [syscall, 3871 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:147 +0x9d
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
/usr/local/go/src/os/signal/signal.go:150 +0x45

goroutine 46 [select, 3871 minutes]:
main.main.func1(0xc00012e360, 0xb34f40, 0xc000286420)
/app/main.go:179 +0xe5
created by main.main
/app/main.go:177 +0x7ce

goroutine 50 [IO wait]:
internal/poll.runtime_pollWait(0x7f0f18245620, 0x72, 0xb36220)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc0004c6098, 0x72, 0xb36200, 0xe444f0, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc0004c6080, 0xc0004d8000, 0x1000, 0x1000, 0x0, 
0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc0004c6080, 0xc0004d8000, 0x1000, 0x1000, 0x92a41b, 
0xc00052f7f8, 0x7f2b36)
/usr/local/go/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc0004ae050, 0xc0004d8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:182 +0x8e
net/http.(*connReader).Read(0xc0004ac750, 0xc0004d8000, 0x1000, 0x1000, 
0x10012864e, 0x0, 0x7f0f18281db8)
/usr/local/go/src/net/http/server.go:798 +0x1ad
bufio.(*Reader).fill(0xc0002806c0)
/usr/local/go/src/bufio/bufio.go:101 +0x105
bufio.(*Reader).ReadSlice(0xc0002806c0, 0xc00025460a, 0x7f0f18281db8, 
0xc00052f988, 0x40d950, 0xc00022a100, 0x100)
/usr/local/go/src/bufio/bufio.go:360 +0x3d
bufio.(*Reader).ReadLine(0xc0002806c0, 0xc00022a100, 0x479294, 0xea09c0, 
0x0, 0xa5b4c0, 0xc000730a80)
/usr/local/go/src/bufio/bufio.go:389 +0x34
net/textproto.(*Reader).readLineSlice(0xc000730a80, 0xc00022a100, 0x4d788d, 
0xc0004c6080, 0x467500, 

[prometheus-users] Re: Latest metric value from multipe Unicorn workers

2022-03-18 Thread Brian Candler
On Friday, 18 March 2022 at 09:45:59 UTC nia...@gmail.com wrote:

> It would also mean that we would need to support both systems - Prometheus 
> and StatsD (we use Prometheus for our infra monitoring, so it won't go 
> away any time soon), which is not ideal.


I think you misunderstand me.  statsd_exporter does *not* use statsd; it's 
standalone.  But you send messages to it in statsd format.  It's kind of 
like pushgateway, except it knows how to increment counters.

With statsd_exporter, you would be using only Prometheus.
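
To make that concrete, here is a minimal sketch (in Go, though any language 
that can write a UDP packet will do) of what "sending messages in statsd 
format" looks like. It assumes a statsd_exporter running locally with its 
default UDP listener on port 9125, and the metric names are invented for the 
example; the exporter aggregates these updates and exposes the resulting 
counter and gauge on its own /metrics endpoint for Prometheus to scrape.

```
package main

import (
	"fmt"
	"net"
)

func main() {
	// Assumes statsd_exporter is listening on its default UDP port 9125.
	conn, err := net.Dial("udp", "localhost:9125")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// StatsD line protocol: <name>:<value>|<type>  (c = counter, g = gauge).
	// Any number of processes or threads can send lines like these; the
	// exporter sums the counter increments into a single series.
	fmt.Fprintf(conn, "jobs_processed_total:1|c\n")
	fmt.Fprintf(conn, "queue_depth:42|g\n")
}
```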



Re: [prometheus-users] Prometheus storage wrt Pushgateway metrics

2022-03-18 Thread anuj tyagi
Yes. So Prometheus is scraping at every scrape interval. Is it adding to 
Prometheus storage at every scrape interval even if there is no change in the 
values of the Pushgateway metrics?


--
Regards,
Anuj Tyagi
On Friday, March 18, 2022 at 6:00:03 AM UTC-4 Stuart Clark wrote:

> On 18/03/2022 03:38, anuj tyagi wrote:
> > Hi All,
> >
> > I have a question about a use case where we are pushing batch job 
> > metrics to Pushgateway.
> >
> > Some job groups push their metrics to Pushgateway every 24 hours, so 
> > those metric values update once a day.
> >
> > Other job groups push to Pushgateway every 15 seconds, updating their 
> > metric values every 15 seconds.
> >
> > E.g.
> > Backup_timestamp: x
> > Backup_files_count: 
> > So the same metrics keep getting updated; each push overwrites the 
> > previous value, so Pushgateway itself does not grow much over time.
> >
> >
> > Now, Prometheus is scraping all the jobs every 30 seconds. Even the job 
> > groups whose metrics are pushed to Pushgateway only once every 24 hours 
> > are being scraped every 30 seconds.
> >
> > Do you think scraping Pushgateway at such a short interval adds storage 
> > even though the metric values stay the same for 24 hours?
> >
> > For this reason, one option would be to delete Pushgateway job groups 
> > older than, say, 50 seconds, so Prometheus would not scrape them at all. 
> > Would that save Prometheus storage and scraping effort?
> >
> > Consider that I'm pushing 10k metrics in total across the different job 
> > groups, and half of those are pushed/updated to Pushgateway only once a 
> > day.
> >
> > So the question is: how much does it impact Prometheus storage if 
> > Prometheus scrapes metrics from Pushgateway every 30 seconds with no 
> > change in value for a day?
>
> The storage usage for a metric that isn't changing is next to nothing, 
> so I wouldn't worry about it. What you describe would be exactly how I'd 
> expect Pushgateway to be behaving - some metrics are updated more 
> frequently and others less, but they are always there and being scraped 
> at the same frequency.
>
> -- 
> Stuart Clark
>
>



[prometheus-users] prometheus snmp-exporter stack trace: invalid memory address

2022-03-18 Thread ohey...@gmail.com
Hi,

we are running snmp_exporter 0.20.0, and scraping devices that are not 
configured correctly (probably an ACL issue on the switch side) produces 
these errors all the time. This device, for example, is a Cisco N7k.
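
(For reference, a single scrape like this can usually be reproduced directly 
against snmp_exporter, bypassing Prometheus, via its /snmp endpoint, e.g. 
http://<exporter-host>:9116/snmp?target=10.14.25.10&module=pxdlrouternxos - 
9116 being the exporter's default port, so adjust host and port to your 
setup.)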

```
level=info ts=2022-03-18T08:29:09.347Z caller=collector.go:224 
module=pxdlrouternxos target=10.14.25.10 msg="Error scraping target" 
err="error getting target 10.14.25.10: recover: runtime error: invalid 
memory address or nil pointer dereference\nStack:goroutine 358625 
[running]:\ngithub.com/gosnmp/gosnmp.(*GoSNMP).send.func1(0xc000e155c8)\n\t/go/pkg/mod/github.com/gosnmp/gosnmp@v1.29.0/marshal.go:326
 
+0xa5\npanic(0x9e4880, 0xe89790)\n\t/usr/local/go/src/runtime/panic.go:969 
+0x1b9\ngithub.com/gosnmp/gosnmp.(*GoSNMP).send(0xc0004f6c60, 0xc0004c42a0, 
0xcde601, 0x0, 0xb34ee0, 
0xc000b609c0)\n\t/go/pkg/mod/github.com/gosnmp/gosnmp@v1.29.0/marshal.go:372 
+0x20e\ngithub.com/gosnmp/gosnmp.(*GoSNMP).Get(0xc0004f6c60, 0xc00012e300, 
0x6, 0x6, 0x0, 0x0, 
0xc000522b10)\n\t/go/pkg/mod/github.com/gosnmp/gosnmp@v1.29.0/gosnmp.go:363 
+0x159\nmain.ScrapeTarget(0xb40660, 0xc0004b41c0, 0xc00045ca67, 0xe, 
0xc000161770, 0xb34f40, 0xc000522840, 0x0, 0x0, 0x0, 
...)\n\t/app/collector.go:133 +0x5be\nmain.collector.Collect(0xb40660, 
0xc0004b41c0, 0xc00045ca67, 0xe, 0xc000161770, 0xb34f40, 0xc000522840, 
0xc000281020)\n\t/app/collector.go:222 
+0xbc\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()\n\t/go/pkg/mod/github.com/prometheus/client_golang@v1.9.0/prometheus/registry.go:446
 
+0x1a2\ncreated by 
github.com/prometheus/client_golang/prometheus.(*Registry).Gather\n\t/go/pkg/mod/github.com/prometheus/client_golang@v1.9.0/prometheus/registry.go:457
 
+0x5ce\n\ngoroutine 1 [IO wait, 216 
minutes]:\ninternal/poll.runtime_pollWait(0x7f0f18245708, 0x72, 
0x0)\n\t/usr/local/go/src/runtime/netpoll.go:222 
+0x55\ninternal/poll.(*pollDesc).wait(0xc0004c6018, 0x72, 0x0, 0x0, 
0xa85b68)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 
+0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Accept(0xc0004c6000,
 
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 
0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:394 
+0x1fc\nnet.(*netFD).accept(0xc0004c6000, 0x1eadb908595c5b95, 0x0, 
0x0)\n\t/usr/local/go/src/net/fd_unix.go:172 
+0x45\nnet.(*TCPListener).accept(0xc0004a8300, 0x6234102b, 0xc00052dad0, 
0x48e3a6)\n\t/usr/local/go/src/net/tcpsock_posix.go:139 
+0x32\nnet.(*TCPListener).Accept(0xc0004a8300, 0xc00052db20, 0x18, 
0xc00180, 0x7f8dec)\n\t/usr/local/go/src/net/tcpsock.go:261 
+0x65\nnet/http.(*Server).Serve(0xc0004c4000, 0xb3ee20, 0xc0004a8300, 0x0, 
0x0)\n\t/usr/local/go/src/net/http/server.go:2937 
+0x266\ngithub.com/prometheus/exporter-toolkit/web.Serve(0xb3ee20, 
0xc0004a8300, 0xc0004c4000, 0x0, 0x0, 0xb34f40, 0xc000286420, 0x0, 
0xe0)\n\t/go/pkg/mod/github.com/prometheus/exporter-toolkit@v0.5.1/web/tls_config.go:192
 
+0x1b0\ngithub.com/prometheus/exporter-toolkit/web.ListenAndServe(0xc0004c4000, 
0x0, 0x0, 0xb34f40, 0xc000286420, 0x0, 
0x0)\n\t/go/pkg/mod/github.com/prometheus/exporter-toolkit@v0.5.1/web/tls_config.go:184
 
+0xfd\nmain.main()\n\t/app/main.go:248 +0xb71\n\ngoroutine 45 [syscall, 
3871 
minutes]:\nos/signal.signal_recv(0x0)\n\t/usr/local/go/src/runtime/sigqueue.go:147
 
+0x9d\nos/signal.loop()\n\t/usr/local/go/src/os/signal/signal_unix.go:23 
+0x25\ncreated by 
os/signal.Notify.func1.1\n\t/usr/local/go/src/os/signal/signal.go:150 
+0x45\n\ngoroutine 46 [select, 3871 
minutes]:\nmain.main.func1(0xc00012e360, 0xb34f40, 
0xc000286420)\n\t/app/main.go:179 +0xe5\ncreated by 
main.main\n\t/app/main.go:177 +0x7ce\n\ngoroutine 50 [IO 
wait]:\ninternal/poll.runtime_pollWait(0x7f0f18245620, 0x72, 
0xb36220)\n\t/usr/local/go/src/runtime/netpoll.go:222 
+0x55\ninternal/poll.(*pollDesc).wait(0xc0004c6098, 0x72, 0xb36200, 
0xe444f0, 0x0)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 
+0x45\ninternal/poll.(*pollDesc).waitRead(...)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:92\ninternal/poll.(*FD).Read(0xc0004c6080,
 
0xc0004d8000, 0x1000, 0x1000, 0x0, 0x0, 
0x0)\n\t/usr/local/go/src/internal/poll/fd_unix.go:159 
+0x1a5\nnet.(*netFD).Read(0xc0004c6080, 0xc0004d8000, 0x1000, 0x1000, 
0x92a41b, 0xc00052f7f8, 0x7f2b36)\n\t/usr/local/go/src/net/fd_posix.go:55 
+0x4f\nnet.(*conn).Read(0xc0004ae050, 0xc0004d8000, 0x1000, 0x1000, 0x0, 
0x0, 0x0)\n\t/usr/local/go/src/net/net.go:182 
+0x8e\nnet/http.(*connReader).Read(0xc0004ac750, 0xc0004d8000, 0x1000, 
0x1000, 0x10012864e, 0x0, 
0x7f0f18281db8)\n\t/usr/local/go/src/net/http/server.go:798 
+0x1ad\nbufio.(*Reader).fill(0xc0002806c0)\n\t/usr/local/go/src/bufio/bufio.go:101
 
+0x105\nbufio.(*Reader).ReadSlice(0xc0002806c0, 0xc00025460a, 
0x7f0f18281db8, 0xc00052f988, 0x40d950, 0xc00022a100, 
0x100)\n\t/usr/local/go/src/bufio/bufio.go:360 
+0x3d\nbufio.(*Reader).ReadLine(0xc0002806c0, 0xc00022a100, 0x479294, 
0xea09c0, 0x0, 0xa5b4c0, 

Re: [prometheus-users] Prometheus storage wrt Pushgateway metrics

2022-03-18 Thread Stuart Clark

On 18/03/2022 03:38, anuj tyagi wrote:

Hi All,

I have a question about a use case where we are pushing batch job metrics 
to Pushgateway.

Some job groups push their metrics to Pushgateway every 24 hours, so those 
metric values update once a day.

There are other job groups pushing to Pushgateway every 15 seconds, 
updating their metric values every 15 seconds.

E.g.
Backup_timestamp: x
Backup_files_count: 
So the same metrics keep getting updated; each push overwrites the 
previous value, so Pushgateway itself does not grow much over time.

Now, Prometheus is scraping all the jobs every 30 seconds. Even the job 
groups whose metrics are pushed to Pushgateway only once every 24 hours 
are being scraped every 30 seconds.

Do you think scraping Pushgateway at such a short interval adds storage 
even though the metric values stay the same for 24 hours?

For this reason, one option would be to delete Pushgateway job groups 
older than, say, 50 seconds, so Prometheus would not scrape them at all. 
Would that save Prometheus storage and scraping effort?

Consider that I'm pushing 10k metrics in total across the different job 
groups, and half of those are pushed/updated to Pushgateway only once a 
day.

So the question is: how much does it impact Prometheus storage if 
Prometheus scrapes metrics from Pushgateway every 30 seconds with no 
change in value for a day?


The storage usage for a metric that isn't changing is next to nothing, 
so I wouldn't worry about it. What you describe would be exactly how I'd 
expect Pushgateway to be behaving - some metrics are updated more 
frequently and others less, but they are always there and being scraped 
at the same frequency.
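
As a rough back-of-the-envelope illustration of "next to nothing" (assuming 
the commonly quoted figure of roughly 1-2 bytes per sample after TSDB 
compression): 10,000 series scraped every 30 seconds is about 28.8 million 
samples per day, i.e. somewhere around 30-60 MB of storage per day in total, 
and series whose values never change compress at the very low end of that 
range. Dropping the once-a-day job groups from Pushgateway would therefore 
save comparatively little.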


--
Stuart Clark



[prometheus-users] Re: Latest metric value from multipe Unicorn workers

2022-03-18 Thread Mindaugas Niaura
It would also mean that we would need to support both systems - Prometheus 
and StatsD (we use Prometheus for our infra monitoring, so it won't go away 
any time soon), which is not ideal.

Maybe some solution can be implemented by exposing metrics with timestamps 
(https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-example), 
which Prometheus supports. Each thread would expose its metric with a 
timestamp, and we would rewrite the timestamp on scrape. Then we could 
probably use last_over_time() to find the latest value.
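
To illustrate that idea, here is a minimal sketch of an exposition endpoint 
that hand-writes the Prometheus text format with explicit timestamps 
(milliseconds since the epoch). It is in Go just to keep the example 
self-contained; the metric name, label and values are invented, and a real 
implementation would read the per-worker values (and the time each was last 
written) from the shared mmap files.

```
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		// Pretend these came from two Unicorn workers, each recording when
		// it last set its gauge.
		samples := []struct {
			worker string
			value  float64
			ts     time.Time
		}{
			{"0", 10, time.Now().Add(-3 * time.Minute)},
			{"1", 5, time.Now()},
		}
		fmt.Fprintln(w, "# TYPE my_app_queue_size gauge")
		for _, s := range samples {
			// Text exposition format: <name>{<labels>} <value> <timestamp_ms>
			fmt.Fprintf(w, "my_app_queue_size{worker=%q} %g %d\n",
				s.worker, s.value, s.ts.UnixMilli())
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A query such as last_over_time(my_app_queue_size[10m]) would then return, per 
worker series, the most recently exposed value. Note the caveats: Prometheus 
only honours exposed timestamps when honor_timestamps is true (the default), 
and samples whose timestamps lag too far behind the current head block can be 
rejected as out of order or out of bounds.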

On Thursday, March 17, 2022 at 2:00:05 PM UTC+2 Brian Candler wrote:

> On Thursday, 17 March 2022 at 10:28:18 UTC nia...@gmail.com wrote:
>
>> I work as an SRE at a company which runs a Ruby on Rails application 
>> deployed with Unicorn. Not so long ago, we started to migrate from StatsD 
>> (https://github.com/statsd/statsd) based metrics to Prometheus. We wrote a 
>> wrapper library which uses 
>> https://gitlab.com/gitlab-org/prometheus-client-mmap (which is also a fork 
>> of https://github.com/prometheus/client_ruby), and so far so good; 
>> everything went as planned during the PoC.
>>
>> But recently we found a limitation which we are banging our heads on how 
>> to solve. Many people are used to StatsD, where in application code you 
>> just increase gauge values (from multiple locations) and aggregating those 
>> values gives you the "current" value. 
>>
>> Now with Prometheus, we have multiple multi-threaded workers which update 
>> their own gauge values (workers are labelled e.g. with an id). So let's say 
>> one process stores 10 in a gauge and some minutes later another worker 
>> stores 5. How do we know which value is the latest?
>>
>
> What do you want to happen? Do you want the scraped result to be 15? In 
> that case it's a counter not a gauge, and you need to add to it.  Or do you 
> want two separate timeseries showing values 5 and 10? Then you need 
> separate gauges in each thread with their own labels.
>  
>
>>
>> Maybe someone has had similar issues and come up with some different 
>> solutions?
>>
>
> There is statsd_exporter, 
> which takes the same protocol messages as statsd, but exposes gauges and 
> counters for scraping.  This is an easy way to aggregate counters across 
> multiple processes and/or threads, since update messages from multiple 
> sources can update the same counter, and it doesn't matter whether they are 
> separate processes or threads.
>
> I suspect it means you'll have to revert your code back to using the statsd 
> client though :-) 
>



[prometheus-users] Prometheus storage wrt Pushgateway metrics

2022-03-18 Thread anuj tyagi
Hi All,

I have a question about a use case where we are pushing batch job metrics to 
Pushgateway.

Some job groups push their metrics to Pushgateway every 24 hours, so those 
metric values update once a day. 

There are other job groups pushing to Pushgateway every 15 seconds, updating 
their metric values every 15 seconds. 

E.g.
Backup_timestamp: x
Backup_files_count: 
So the same metrics keep getting updated; each push overwrites the previous 
value, so Pushgateway itself does not grow much over time. 


Now, Prometheus is scraping all the jobs every 30 seconds. Even the job 
groups whose metrics are pushed to Pushgateway only once every 24 hours are 
being scraped every 30 seconds. 

Do you think scraping Pushgateway at such a short interval adds storage even 
though the metric values stay the same for 24 hours? 

For this reason, one option would be to delete Pushgateway job groups older 
than, say, 50 seconds, so Prometheus would not scrape them at all. Would 
that save Prometheus storage and scraping effort? 

Consider that I'm pushing 10k metrics in total across the different job 
groups, and half of those are pushed/updated to Pushgateway only once a day. 

So the question is: how much does it impact Prometheus storage if Prometheus 
scrapes metrics from Pushgateway every 30 seconds with no change in value 
for a day?
