[prometheus-users] Re: How to push metrics to prometheus api/v1/write API endpoint with CURL

2022-11-23 Thread Brian Candler

*The read and write protocols both use a snappy-compressed protocol buffer 
encoding over HTTP.*
I think you have tried to use plain-text metrics, when they need to be 
represented in protobuf binary format (and *then* snappy-compressed).
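For reference, here is a minimal sketch of what a correct push would involve, using only the Python standard library (both wire formats are simple enough to hand-encode). This assumes a Prometheus started with the remote-write receiver enabled (`--web.enable-remote-write-receiver`); the label values just mirror the metrics from the question:

```python
# Hand-encode a remote-write payload: prompb.WriteRequest (protobuf)
# wrapped in snappy *block* format, per the remote-write spec.
import struct
import urllib.request

def varint(n):
    # Unsigned varint, shared by protobuf and the snappy length header.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def snappy_block(data):
    # Valid snappy *block format*: varint length + literal elements
    # (stored, uncompressed). Tools like snzip default to a framed
    # container format, which the receiver rejects as corrupt.
    out = bytearray(varint(len(data)))
    for i in range(0, len(data), 60):
        chunk = data[i:i + 60]
        out.append((len(chunk) - 1) << 2)  # tag 0b00 = literal, len <= 60
        out += chunk
    return bytes(out)

def pb_bytes(field, payload):
    # Length-delimited protobuf field (wire type 2).
    return varint((field << 3) | 2) + varint(len(payload)) + payload

def pb_label(name, value):
    # prompb.Label: string name = 1; string value = 2;
    return pb_bytes(1, name.encode()) + pb_bytes(2, value.encode())

def pb_sample(value, ts_ms):
    # prompb.Sample: double value = 1; int64 timestamp = 2;
    return b"\x09" + struct.pack("<d", value) + b"\x10" + varint(ts_ms)

def write_request(labels, samples):
    # prompb.WriteRequest { repeated TimeSeries timeseries = 1; }
    # prompb.TimeSeries  { repeated Label labels = 1;
    #                      repeated Sample samples = 2; }
    series = b"".join(pb_bytes(1, pb_label(n, v)) for n, v in labels)
    series += b"".join(pb_bytes(2, pb_sample(v, t)) for v, t in samples)
    return pb_bytes(1, series)

body = snappy_block(write_request(
    [("__name__", "http_requests_total"), ("method", "post"), ("code", "200")],
    [(1027.0, 1395066363000)],
))
req = urllib.request.Request(
    "http://localhost:9090/api/v1/write", data=body, method="POST",
    headers={"Content-Type": "application/x-protobuf",
             "Content-Encoding": "snappy",
             "X-Prometheus-Remote-Write-Version": "0.1.0"})
# urllib.request.urlopen(req)  # uncomment against a live receiver
```

Note the Content-Type is application/x-protobuf, not application/openmetrics-text: the remote-write endpoint does not accept the text exposition format at all.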

On Wednesday, 23 November 2022 at 13:14:02 UTC ihor.pi...@gmail.com wrote:

> Here is a set of correct metrics 
>
> cat metrics.prom
> # HELP http_requests_total The total number of HTTP requests.
> # TYPE http_requests_total counter
> http_requests_total{method="post",code="200"} 1027 1395066363000
> http_requests_total{method="post",code="400"} 3 1395066363000
>
> cat metrics.prom | promtool check metrics
>
>
> Then it is supposed to be compressed by snappy as the manual said 
>
> The read and write protocols both use a snappy-compressed protocol buffer 
> encoding over HTTP.
>
>
> So,
>
> snzip metrics.prom
>
>
> Then 
>
> curl  --header "Content-Type: application/openmetrics-text" \
> --header "Content-Encoding: snappy" \
> --request POST \
> --data-binary "@metrics.prom.sz" \
> "http://localhost:9090/api/v1/write"
>
> but unfortunately, the  result is 
>
>
> snappy: corrupt input
>
> Why is it corrupt?
>
> snzip -d  metrics.prom.sz
>
>
> gives perfectly fine result.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/114d9848-b0ba-45bb-9e10-7cb779b3a85dn%40googlegroups.com.


Re: [prometheus-users] Prometheus reaction time

2022-11-23 Thread Stuart Clark

On 23/11/2022 15:28, Nenad Rakonjac wrote:

Hello,

Does anyone have a clue how long it takes for Prometheus metrics to 
go from the application to Alertmanager? Can this time be longer than 
one minute?


Metrics don't go to Alertmanager. Instead you create alerting rules 
which query metrics to produce alerts.


How long it takes from something changing to an alert being fired 
depends on many different factors:


- How often you scrape the application (so Prometheus has the latest metrics)
- The query you are using in your alerting rule (for example, you might be 
alerting based on an average rate over the last few minutes, so a sudden 
spike wouldn't immediately trigger an alert)
- Whether you have a "for" clause in the alert rule (which is generally 
recommended so as not to send an alert for something that goes away very 
quickly, such as a transient spike)
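To illustrate the "for" clause point, a hedged sketch of such a rule (the metric name and threshold are made up for the example):

```yaml
groups:
  - name: example
    rules:
      - alert: HighMemoryUsage
        # hypothetical expression; substitute a real metric
        expr: instance_memory_usage_ratio > 0.9
        # the condition must hold for 5 minutes before the alert
        # fires, so a transient spike does not page anyone
        for: 5m
        labels:
          severity: warning
```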


In general the "speed" of alerting isn't particularly critical. Instead 
what is important is producing useful, actionable alerts. Sending an 
alert as soon as a resource goes above 90% isn't particularly useful if 
a second later it drops to 10% - nothing bad happened and there is 
nothing to be addressed. In general it would actually be more useful to 
alert if that threshold is breached for over, say, 5 minutes, or even more 
usefully when an SLO is failed or is projected to fail within the next 
30 minutes.


--
Stuart Clark

--
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/ef6ed871-c01c-56c1-b7f3-68074fffb745%40Jahingo.com.


[prometheus-users] Prometheus reaction time

2022-11-23 Thread Nenad Rakonjac
Hello,

Does anyone have a clue how long it takes for Prometheus metrics to go 
from the application to Alertmanager? Can this time be longer than one minute?

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/4fb68e5e-edd9-4a88-bd0f-c493be4a7a43n%40googlegroups.com.


Re: [prometheus-users] Prometheus correlation between memory usage and timeseries

2022-11-23 Thread Ben Kochie
That is a different question than you asked.

The number of time series depends on the jobs you are scraping. This varies greatly from
one user to the next.

You can graph avg_over_time(prometheus_tsdb_head_series[$__interval]) in
Grafana to see the trend over time.
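A sketch of the kind of queries this trend analysis might use (the rule-of-thumb ratio from this thread, plus a linear projection; the job label is an assumption about how Prometheus scrapes itself):

```promql
# Resident memory per head series (the rule of thumb discussed here)
process_resident_memory_bytes{job="prometheus"}
  / prometheus_tsdb_head_series{job="prometheus"}

# Rough projection of the head series count 90 days out (7776000 s),
# extrapolated linearly from the last 30 days of data
predict_linear(prometheus_tsdb_head_series{job="prometheus"}[30d], 7776000)
```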

On Wed, Nov 23, 2022 at 1:41 PM Julio Leal  wrote:

> Ben, thank you so much for your answer.
> My problem is that prometheus_tsdb_head_series changes a lot over time.
> Is there any way to see how much my time series count has been increasing
> over time (for example, over the last 3 months)?
>
> On Wed, Nov 23, 2022 at 7:36 AM Ben Kochie  wrote:
>
>> I usually recommend looking at `process_resident_memory_bytes /
>> prometheus_tsdb_head_series`.
>>
>> The current typical use is around 8KiB per series, mainly due to the
>> indexing of series.
>>
>> On Wed, Nov 23, 2022 at 2:14 AM Julio Leal 
>> wrote:
>>
>>> Hi everyone
>>> I'm trying to understand and predict the end of life of my Prometheus
>>> instance.
>>> I think that my Prometheus will die as my number of time series increases
>>> and I need more RAM.
>>> How can I create a correlation between my time series growth and my
>>> memory growth?
>>>
>>> I have already tried to use:
>>>
>>>1. container_memory_working_set_bytes with
>>>prometheus_tsdb_head_series
>>>2. go_memstats_alloc_bytes with prometheus_tsdb_head_series
>>>3. go_memstats_heap_inuse_bytes with prometheus_tsdb_head_series
>>>4. process_resident_memory_bytes with prometheus_tsdb_head_series
>>>
>>> The closest combination I could get was number 3, with a correlation
>>> of 0.56.
>>>
>>> Is there another way to correlate RAM with the number of time series, to
>>> measure the end of life or the growth of the Prometheus instance?
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/CABbyFmqUK8Ucb1uVAJ3fEjXVz37qPk-y8jKtgkYjcGvLJ%2BK%2BXA%40mail.gmail.com.


[prometheus-users] How to push metrics to prometheus api/v1/write API endpoint with CURL

2022-11-23 Thread ihor.pi...@gmail.com
Here is a set of correct metrics 

cat metrics.prom
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000

cat metrics.prom | promtool check metrics


Then it is supposed to be compressed by snappy as the manual said 

The read and write protocols both use a snappy-compressed protocol buffer 
encoding over HTTP.


So,

snzip metrics.prom


Then 

curl  --header "Content-Type: application/openmetrics-text" \
--header "Content-Encoding: snappy" \
--request POST \
--data-binary "@metrics.prom.sz" \
"http://localhost:9090/api/v1/write"

but unfortunately, the  result is 


snappy: corrupt input

Why is it corrupt?

snzip -d  metrics.prom.sz


gives perfectly fine result.

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/aebacc17-b38b-4c96-9b03-9df5219c8859n%40googlegroups.com.


Re: [prometheus-users] Prometheus correlation between memory usage and timeseries

2022-11-23 Thread Julio Leal
Ben, thank you so much for your answer.
My problem is that prometheus_tsdb_head_series changes a lot over time.
Is there any way to see how much my time series count has been increasing over
time (for example, over the last 3 months)?

On Wed, Nov 23, 2022 at 7:36 AM Ben Kochie  wrote:

> I usually recommend looking at `process_resident_memory_bytes /
> prometheus_tsdb_head_series`.
>
> The current typical use is around 8KiB per series, mainly due to the
> indexing of series.
>
> On Wed, Nov 23, 2022 at 2:14 AM Julio Leal  wrote:
>
>> Hi everyone
>> I'm trying to understand and predict the end of life of my Prometheus
>> instance.
>> I think that my Prometheus will die as my number of time series increases
>> and I need more RAM.
>> How can I create a correlation between my time series growth and my memory
>> growth?
>>
>> I have already tried to use:
>>
>>1. container_memory_working_set_bytes with prometheus_tsdb_head_series
>>2. go_memstats_alloc_bytes with prometheus_tsdb_head_series
>>3. go_memstats_heap_inuse_bytes with prometheus_tsdb_head_series
>>4. process_resident_memory_bytes with prometheus_tsdb_head_series
>>
>> The closest combination I could get was number 3, with a correlation
>> of 0.56.
>>
>> Is there another way to correlate RAM with the number of time series, to
>> measure the end of life or the growth of the Prometheus instance?
>>
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/CADALxso0Ed1bf2vfZNfHR351u%2Bsb5DgpDZX0KTemdbO7hhDezg%40mail.gmail.com.


Re: [prometheus-users] Prometheus correlation between memory usage and timeseries

2022-11-23 Thread Ben Kochie
I usually recommend looking at `process_resident_memory_bytes /
prometheus_tsdb_head_series`.

The current typical use is around 8KiB per series, mainly due to the
indexing of series.

On Wed, Nov 23, 2022 at 2:14 AM Julio Leal  wrote:

> Hi everyone
> I'm trying to understand and predict the end of life of my Prometheus instance.
> I think that my Prometheus will die as my number of time series increases
> and I need more RAM.
> How can I create a correlation between my time series growth and my memory
> growth?
>
> I have already tried to use:
>
>1. container_memory_working_set_bytes with prometheus_tsdb_head_series
>2. go_memstats_alloc_bytes with prometheus_tsdb_head_series
>3. go_memstats_heap_inuse_bytes with prometheus_tsdb_head_series
>4. process_resident_memory_bytes with prometheus_tsdb_head_series
>
> The closest combination I could get was number 3, with a correlation
> of 0.56.
>
> Is there another way to correlate RAM with the number of time series, to
> measure the end of life or the growth of the Prometheus instance?
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/CABbyFmogmzj8ffh%3DsLQn8fVzCYsAO%2Bbvz6G-9qm%3DWO%2B2nCXQBA%40mail.gmail.com.


[prometheus-users] how can I get rule group name in alert_relabel_configs

2022-11-23 Thread chen sr
I want to define the severity level by rule group name, for example:

rule file:

groups:
  - name: critical_for_xxx
    rules:
      - alert: 
  - name: warning_for_xxx

alerting:
  alert_relabel_configs:
    - source_labels: [ ... ]
      regex: (critical).+
      target_label: severity
      replacement: critical

What does it need in source_labels [ ] so the regex can match the rule group name?
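For what it's worth, alert relabeling can only see labels carried by the alert itself, and as far as I know the rule group name is not attached to alerts as a label. One hedged workaround is to encode the level in the alert name and match on alertname - a sketch, assuming alert names follow a Critical/Warning naming convention:

```yaml
alerting:
  alert_relabel_configs:
    - source_labels: [alertname]
      regex: "Critical.*"
      target_label: severity
      replacement: critical
```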

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/7a235498-d3a6-4fb6-9847-3488edc31f04n%40googlegroups.com.


[prometheus-users] Email Subject for alerts

2022-11-23 Thread sri L
Hi all,
I want to add a label called "name" to the subject header. This is not a 
common label that we generally define at the target level; the label "name" 
comes from the metrics only, and I want to see that "name" in the email 
subject. Currently I am using this header value:
[{{ .Status | toUpper }}] {{.CommonLabels.app_name}} 
{{.CommonLabels.hostname}}

can anyone please suggest here

Thanks
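For context, the Subject header is an ordinary Go template over the alert data, so the label can be referenced once it actually exists on the alert (a label that only appears in the metric must be preserved by the alerting rule's query, i.e. not aggregated away). A hedged sketch of the receiver config, with made-up receiver name and address:

```yaml
receivers:
  - name: email
    email_configs:
      - to: team@example.com
        headers:
          Subject: '[{{ .Status | toUpper }}] {{ .CommonLabels.app_name }} {{ .CommonLabels.name }}'
```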

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/dc3ebdaf-a027-4ed7-a5d2-0a77ad552807n%40googlegroups.com.