[prometheus-developers] prometheus_client gauge metrics are not saved in Prometheus

2022-09-07 Thread Tommy b
```
import random
import time

import prometheus_client as prom


pool = {'pool_name': 'testing-prom-tool', 'member_name': 'promtest',
        'mem_port': '443', 'mem_address': 'xx.xx.xx.xx', 'mem_state': 'down'}

# Create a metric to track time spent and requests made.
REQUEST_TIME = prom.Summary('request_processing_seconds',
                            'Time spent processing request')


# Decorate function with metric.
@REQUEST_TIME.time()
def process_request():
    time.sleep(1)


if __name__ == '__main__':
    # Gauge(name, documentation, labelnames) -- I was able to populate
    # all the label names from the dictionary using pool.keys()
    f5_prom_test = prom.Gauge(
        'f5_test', 'f5_node_status',
        ('pool_name', 'member_name', 'mem_port', 'mem_address', 'mem_state'))
    prom.start_http_server(1234)
    while True:
        process_request()
        # Creating the labelled child; its value stays at the default 0
        # until .set() is called on it.
        f5_prom_test.labels(pool.get('pool_name'), pool.get('member_name'),
                            pool.get('mem_port'), pool.get('mem_address'),
                            pool.get('mem_state'))
        # f5_prom_test.labels(**pool) works as well


```

I can see the metrics are registered when I curl http://localhost:1234.
Somehow the metrics are not saved in Prometheus: as soon as I stop the
Python script I can't view the data in Grafana, and there is no historical
data held in the Prometheus TSDB to view in the Prometheus web UI.
```
curl http://localhost:1234

f5_test{mem_address="xx.xx.xx.xxx",mem_name="test-server",pool_name="testpool",mem_port="5443",mem_state="down"}
 

```
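If it helps to double-check the exposition programmatically rather than eyeballing the curl output, prometheus_client ships a text-format parser. A minimal sketch, assuming the port and metric name from the script above (everything else is illustrative):

```
import urllib.request

from prometheus_client.parser import text_string_to_metric_families

# Fetch the exposition text served by start_http_server(1234).
text = urllib.request.urlopen('http://localhost:1234/metrics').read().decode()

# Walk the parsed metric families and print the f5_test samples, so the
# label names/values and the current gauge value are visible.
for family in text_string_to_metric_families(text):
    if family.name == 'f5_test':
        for sample in family.samples:
            print(sample.name, sample.labels, sample.value)
```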





Here is the scrape config in my `prometheus.yml` for the custom Python
Prometheus collector for the pool data:

```
  - job_name: 'python-exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['hostname:1234']
```
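For what it's worth, with a job like this, a quick way to check whether Prometheus is actually scraping and storing the samples (independently of Grafana) is to query its HTTP API for the `up` series of the job, or for the metric itself. A sketch, assuming Prometheus is reachable on localhost:9090:

```
import json
import urllib.parse
import urllib.request

# Assumes Prometheus itself is reachable on localhost:9090 (adjust as needed).
PROM = 'http://localhost:9090'

def instant_query(expr):
    # /api/v1/query is Prometheus' instant-query endpoint.
    url = PROM + '/api/v1/query?' + urllib.parse.urlencode({'query': expr})
    return json.load(urllib.request.urlopen(url))['data']['result']

# up == 1 means the python-exporter target was scraped successfully.
print(instant_query('up{job="python-exporter"}'))

# If the scrape works, the gauge itself should be queryable as well.
print(instant_query('f5_test'))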

I can view the data only while the Python script is running; afterwards the
data is not kept in Prometheus. I'm not using any custom registry. How do I
get the data registered in /metrics saved into Prometheus using
prometheus_client? I have already increased the retention period of the
Prometheus TSDB.



Re: [prometheus-developers] Inf buckets in Native Histograms

2022-09-07 Thread Bjoern Rabenstein
On 06.09.22 02:50, 'Fabian Stäber' via Prometheus Developers wrote:
> 
> Looking at client_golang, it seems you can observe math.Inf(), and bucket 
> index math.MaxInt32 is used to represent the Inf bucket.
> 
> https://github.com/prometheus/client_golang/blob/95cf173f1965388665dcb2a28971f35af280e3a5/prometheus/histogram.go#L589-L590
> 
> I'm wondering how to represent the Inf bucket as a BucketSpan in protobuf.
> Initially I set the offset to current index minus previous index, but 
> obviously that doesn't work if the current index is MaxInt32.
> 
> Any ideas?

Yeah, very good question. And definitely something that needs to get
ironed out before coming up with a final spec for Native Histograms.

In practice, I think, observations of ±Inf will be irrelevant. They set
the sum of observations to ±Inf, too (or even to NaN if it was +Inf
before and then -Inf is observed, or vice versa), thereby rendering the
sum useless.

My idea so far was to put observations of ±Inf and even NaN in no
bucket at all, let them "ruin" the sum of observations (setting it to
±Inf or NaN as appropriate), and increment the count of observations
as usual. In that way, the difference between observations in buckets
and observations in the count would account for all those
observations. The downside is that you cannot distinguish between the
three types of "weird" observations (+Inf, -Inf, NaN). On the other
hand, I don't think we should add a whole lot of costly plumbing
throughout the stack to store them separately.
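To make that concrete, here is a toy sketch in Python (not client_golang's actual code) of the bookkeeping described above: non-finite observations touch only the count and the sum, so the gap between the total count and the sum of all bucket counts accounts for them.

```
import math

class ToyHistogram:
    """Toy model of the proposed handling of ±Inf/NaN observations."""

    def __init__(self):
        self.count = 0      # total number of observations
        self.sum = 0.0      # sum of observations (may become ±Inf or NaN)
        self.buckets = {}   # bucket index -> count, finite observations only

    def observe(self, value):
        self.count += 1
        self.sum += value   # ±Inf/NaN "ruin" the sum as described
        if math.isfinite(value):
            # Toy bucketing only; real native histograms use exponential
            # buckets plus a zero bucket.
            idx = math.floor(value)
            self.buckets[idx] = self.buckets.get(idx, 0) + 1

    def unbucketed(self):
        # Observations not in any bucket, i.e. the ±Inf/NaN ones.
        return self.count - sum(self.buckets.values())

h = ToyHistogram()
for v in (0.5, 2.0, math.inf, math.nan):
    h.observe(v)
print(h.count, h.sum, h.unbucketed())  # 4 nan 2
```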

From a completionist's perspective, observations of very large
positive or negative numbers should be treated similarly to very small
observations, i.e. by adding an "overflow bucket" (or even two, for
negative and positive observations separately), similar to the zero
bucket we already have.

The reason for not doing it so far is mainly pragmatic: While it is
easy to accidentally create values close to zero (whether they come from
some calculation or from actual physical measurements), it is far less
likely (but not impossible, of course) to accidentally create numbers
with a very large absolute value of up to ±Inf.

This assumption might not hold, and that's exactly why the Native
Histograms are marked as experimental. We can still correct those
things if needed.

> Not sure if this is covered in client_golang either 
> https://github.com/prometheus/client_golang/blob/95cf173f1965388665dcb2a28971f35af280e3a5/prometheus/histogram.go#L1272-L1280

Yeah, that's weird. I filed
https://github.com/prometheus/client_golang/issues/1131 to investigate
more closely.

-- 
Björn Rabenstein
[PGP-ID] 0x851C3DA17D748D03
[email] bjo...@rabenste.in
