Re: [prometheus-users] Debugging OOM issue.

2020-11-24 Thread Ben Kochie
No, concurrency only affects how many queries are running at the same time.

On Wed, Nov 25, 2020 at 8:45 AM Yagyansh S. Kumar wrote:

> Thanks, Ben. I was thinking of doing the same because a single query is
> occasionally taking my Prometheus down.
> One question, though: will limiting the concurrency slow down the overall
> evaluation process?
>
> On Wed, Nov 25, 2020 at 1:07 PM Ben Kochie  wrote:
>
>> Maybe set a lower `--query.max-samples` flag. The default is 50
>> million samples; I typically lower this to 20 million to avoid overly
>> heavy queries. You can also lower the default `--query.max-concurrency=20`
>> to avoid overloading.
>>
>> More likely, if you need to run large queries, you should allocate more
>> memory to Prometheus.
>>
>> On Wed, Nov 25, 2020 at 6:49 AM yagyans...@gmail.com <
>> yagyanshsku...@gmail.com> wrote:
>>
>>> Thanks, Christian.
>>>
>>> Today I noticed something entirely new to me. Prometheus went down
>>> and I identified the query that caused it, but strangely the server
>>> did not go OOM: memory usage dropped directly from a steady 77% to
>>> zero. Usually when a query runs long, memory usage spikes and
>>> Prometheus crashes with an OOM. This time there was no sudden spike
>>> in either CPU or memory utilization.
>>>
>>> Any thoughts on this?
>>>
>>> On Monday, November 9, 2020 at 5:31:18 PM UTC+5:30 Christian Hoffmann
>>> wrote:
>>>
 Hi,

 On 11/9/20 10:56 AM, yagyans...@gmail.com wrote:
 > Hi. I am using Prometheus v2.20.1, and suddenly it crashed
 > because of a memory overshoot. How do I pinpoint what caused
 > Prometheus to go OOM, or which query caused it?

 Prometheus writes the currently active queries to a file which is read
 upon restart; it will then print all unfinished queries. See:


 https://www.robustperception.io/what-queries-were-running-when-prometheus-died

 This should help pinpoint the relevant queries.

 Often it's some combination of queries over long time ranges and/or
 high-cardinality metrics.

 Kind regards,
 Christian



Re: [prometheus-users] Debugging OOM issue.

2020-11-24 Thread Yagyansh S. Kumar
Thanks, Ben. I was thinking of doing the same because a single query is
occasionally taking my Prometheus down.
One question, though: will limiting the concurrency slow down the overall
evaluation process?

On Wed, Nov 25, 2020 at 1:07 PM Ben Kochie  wrote:

> Maybe set a lower `--query.max-samples` flag. The default is 50
> million samples; I typically lower this to 20 million to avoid overly
> heavy queries. You can also lower the default `--query.max-concurrency=20`
> to avoid overloading.
>
> More likely, if you need to run large queries, you should allocate more
> memory to Prometheus.
>
> On Wed, Nov 25, 2020 at 6:49 AM yagyans...@gmail.com <
> yagyanshsku...@gmail.com> wrote:
>
>> Thanks, Christian.
>>
>> Today I noticed something entirely new to me. Prometheus went down
>> and I identified the query that caused it, but strangely the server did
>> not go OOM: memory usage dropped directly from a steady 77% to zero.
>> Usually when a query runs long, memory usage spikes and Prometheus
>> crashes with an OOM. This time there was no sudden spike in either CPU
>> or memory utilization.
>>
>> Any thoughts on this?
>>
>> On Monday, November 9, 2020 at 5:31:18 PM UTC+5:30 Christian Hoffmann
>> wrote:
>>
>>> Hi,
>>>
>>> On 11/9/20 10:56 AM, yagyans...@gmail.com wrote:
>>> > Hi. I am using Prometheus v2.20.1, and suddenly it crashed
>>> > because of a memory overshoot. How do I pinpoint what caused
>>> > Prometheus to go OOM, or which query caused it?
>>>
>>> Prometheus writes the currently active queries to a file which is read
>>> upon restart; it will then print all unfinished queries. See:
>>>
>>>
>>> https://www.robustperception.io/what-queries-were-running-when-prometheus-died
>>>
>>> This should help pinpoint the relevant queries.
>>>
>>> Often it's some combination of queries over long time ranges and/or
>>> high-cardinality metrics.
>>>
>>> Kind regards,
>>> Christian
>>>


Re: [prometheus-users] Debugging OOM issue.

2020-11-24 Thread Ben Kochie
Maybe set a lower `--query.max-samples` flag. The default is 50
million samples; I typically lower this to 20 million to avoid overly
heavy queries. You can also lower the default `--query.max-concurrency=20`
to avoid overloading.

More likely, if you need to run large queries, you should allocate more
memory to Prometheus.
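
For reference, both knobs are plain command-line flags. A sketch of a
startup line with the values discussed above (the config path is
illustrative, and 10 is just an example of a lowered concurrency):

prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --query.max-samples=20000000 \
  --query.max-concurrency=10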

On Wed, Nov 25, 2020 at 6:49 AM yagyans...@gmail.com <
yagyanshsku...@gmail.com> wrote:

> Thanks, Christian.
>
> Today I noticed something entirely new to me. Prometheus went down
> and I identified the query that caused it, but strangely the server did
> not go OOM: memory usage dropped directly from a steady 77% to zero.
> Usually when a query runs long, memory usage spikes and Prometheus
> crashes with an OOM. This time there was no sudden spike in either CPU
> or memory utilization.
>
> Any thoughts on this?
>
> On Monday, November 9, 2020 at 5:31:18 PM UTC+5:30 Christian Hoffmann
> wrote:
>
>> Hi,
>>
>> On 11/9/20 10:56 AM, yagyans...@gmail.com wrote:
>> > Hi. I am using Prometheus v2.20.1, and suddenly it crashed
>> > because of a memory overshoot. How do I pinpoint what caused
>> > Prometheus to go OOM, or which query caused it?
>>
>> Prometheus writes the currently active queries to a file which is read
>> upon restart; it will then print all unfinished queries. See:
>>
>>
>> https://www.robustperception.io/what-queries-were-running-when-prometheus-died
>>
>> This should help pinpoint the relevant queries.
>>
>> Often it's some combination of queries over long time ranges and/or
>> high-cardinality metrics.
>>
>> Kind regards,
>> Christian
>>


[prometheus-users] Selectors in kubernetes_sd_configs don't restrict services based on labels

2020-11-24 Thread Shubham Shrivastav
Hello,
I'm trying to scrape all services that have the label app: core. However,
after adding the section below to kubernetes_sd_configs, the Prometheus
targets list is not limited to those services. In fact, the configuration
does not load: Prometheus reports that it is unable to parse the YAML
config file.

After I remove the section below, it works fine. I'm placing it in the
config file as shown below:

kubernetes_sd_configs:
  - role: service
    namespaces:
      names:
        - ns-1
selectors:
  - role: service
    label: "app=core"

Can anyone help me out with this?
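
For comparison, a sketch of the nesting described in the
kubernetes_sd_config documentation, where selectors is a field of the same
list entry as role and namespaces (the namespace and label values are kept
from the post above):

kubernetes_sd_configs:
  - role: service
    namespaces:
      names:
        - ns-1
    selectors:
      - role: service
        label: "app=core"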



[prometheus-users] Templating

2020-11-24 Thread Chaitanya Bayana
Hi All,

We have configured multiple Prometheus alerts with a single template in
the Alertmanager configuration, and every alert is triggered with a
Runbook and a Grafana link.
The problem is that my alerts refer to different Grafana dashboards, but
whenever we receive an alert it always shows the same Grafana dashboard,
which is not correct: we have hardcoded the Grafana link in the template.
We want something like the example below.
Is there a way to achieve this? If yes, where exactly do I need to make
the changes: in Alertmanager or in the alerts?

Eg: ALERT1 >>> GRAFANA1 DASHBOARD LINK
    ALERT2 >>> GRAFANA2 DASHBOARD LINK

Current sample templating:

{{ define "pagerduty.default.description_runbooklink" }} *Runbook:* 
https://dev.runbook.com/confluence/promalerts {{ end }} {{ define 
"pagerduty.default.description_datacenter" }} *Datacenter:* us-india-1 {{ end 
}} {{ define "pagerduty.default.description_grafanalink" }} *Grafana 
Dashboard:* https://grafana.dev.com/services-dashboard {{ end }}  
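
One common approach (a sketch, untested here): put the dashboard URL in an
annotation on each alerting rule, and have the template read that
annotation instead of a hardcoded link. The annotation name
grafana_dashboard and the URLs are illustrative. Rule side:

groups:
  - name: example
    rules:
      - alert: ALERT1
        expr: up == 0
        annotations:
          grafana_dashboard: https://grafana.dev.com/grafana1-dashboard

Template side, reading the annotation from the notification data:

{{ define "pagerduty.default.description_grafanalink" }} Grafana Dashboard: {{ .CommonAnnotations.grafana_dashboard }} {{ end }}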



Thanks & Regards,
Chaitanya



Re: [prometheus-users] Debugging OOM issue.

2020-11-24 Thread yagyans...@gmail.com
Thanks, Christian.

Today I noticed something entirely new to me. Prometheus went down and I
identified the query that caused it, but strangely the server did not go
OOM: memory usage dropped directly from a steady 77% to zero. Usually when
a query runs long, memory usage spikes and Prometheus crashes with an OOM.
This time there was no sudden spike in either CPU or memory utilization.

Any thoughts on this?

On Monday, November 9, 2020 at 5:31:18 PM UTC+5:30 Christian Hoffmann wrote:

> Hi,
>
> On 11/9/20 10:56 AM, yagyans...@gmail.com wrote:
> > Hi. I am using Prometheus v2.20.1, and suddenly it crashed
> > because of a memory overshoot. How do I pinpoint what caused
> > Prometheus to go OOM, or which query caused it?
>
> Prometheus writes the currently active queries to a file which is read
> upon restart; it will then print all unfinished queries. See:
>
>
> https://www.robustperception.io/what-queries-were-running-when-prometheus-died
>
> This should help pinpoint the relevant queries.
>
> Often it's some combination of queries over long time ranges and/or
> high-cardinality metrics.
>
> Kind regards,
> Christian
>
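
For reference, the file Christian mentions is queries.active in the
Prometheus data directory (the path below assumes the default
--storage.tsdb.path=data/). After a crash you can inspect it directly:

cat data/queries.active

Each unfinished query is recorded there as a JSON fragment with the query
text and its start time.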



[prometheus-users] Re: what's the best practice to collect metrics and write to different remote storage with different bear_token_file

2020-11-24 Thread jun min
BTW, someone made another suggestion that may be useful for people with
the same scenario. Instead of having an agent scrape data for every
tenant, we can use one Prometheus to scrape the data and write a
remote-write adapter that receives the data, splits it, and routes it to a
different remote storage per tenant. It also seems very lightweight and
simple.
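
A minimal Go sketch of the receiving side of such a splitting adapter,
assuming the standard remote-write wire format (snappy-compressed
protobuf). The tenant label name and the forwarding step are placeholders:

package main

import (
    "io/ioutil"
    "log"
    "net/http"

    "github.com/gogo/protobuf/proto"
    "github.com/golang/snappy"
    "github.com/prometheus/prometheus/prompb"
)

func main() {
    http.HandleFunc("/receive", func(w http.ResponseWriter, r *http.Request) {
        // Remote-write request bodies are snappy-compressed protobuf.
        compressed, err := ioutil.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        raw, err := snappy.Decode(nil, compressed)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        var req prompb.WriteRequest
        if err := proto.Unmarshal(raw, &req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        for _, ts := range req.Timeseries {
            tenant := "default"
            for _, l := range ts.Labels {
                if l.Name == "tenant" { // placeholder label name
                    tenant = l.Value
                }
            }
            _ = tenant // route the series to this tenant's remote storage here
        }
        w.WriteHeader(http.StatusNoContent)
    })
    log.Fatal(http.ListenAndServe(":9201", nil))
}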

On Monday, November 23, 2020 at 10:33:28 PM UTC+8 wrote:

> Awesome, I'll give it a shot.
>
> On Monday, November 23, 2020 at 7:28:22 PM UTC+8 wrote:
>
>> Having one Prometheus server writing to thousands of different
>> remote-write endpoints doesn't sound like a sensible way to work.
>>
>> Maybe you want a proper multi-tenant solution, like Cortex, or the
>> cluster/multi-tenant version of VictoriaMetrics.
>>
>> A simpler option would be a separate Prometheus instance per tenant doing
>> the scraping. Even more lightweight, look at the vmagent part of
>> VictoriaMetrics, which can be used for scraping and remote write without
>> a local TSDB.
>>
>



[prometheus-users] what is the right way to setup alertmanager in Federation?

2020-11-24 Thread radhamani...@gmail.com

If we enable Alertmanagers in dedicated clusters, how do we route the
alerts to receivers? Do we send the alerts to receivers directly from the
dedicated clusters, or should we route the alerts from the dedicated
clusters to a central federation Alertmanager? If the latter, how do we
chain the Alertmanagers?



Re: [prometheus-users] Re: How to enable STS to address CWE-693: Protection Mechanism Failure in node_exporter?

2020-11-24 Thread Stuart Clark

On 24/11/2020 17:30, b.ca...@pobox.com wrote:

I'm guessing what's happened is:
1. You've run an (unnamed) security scanner against node_exporter
2. The scanner has come back with this message, telling you that 
node_exporter should return an STS header.


I'm saying that the scanner's conclusion is wrong.

Firstly, node_exporter isn't a web server, and you don't connect to it 
with a web browser.


Secondly, I don't know how you have configured node_exporter, but it 
can either serve HTTP (default) or HTTPS (*), on one port that you 
select.  STS only makes sense for a website which has both HTTP and 
HTTPS endpoints, usually on the standard ports 80 and 443.  It tells 
the browser always to select the HTTPS endpoint, and to remember this 
fact.


Technically it does still offer advantages for HTTPS-only websites, as it 
would prevent access entirely if HTTP somehow became enabled (either the 
site switched from HTTPS-only to dual or HTTP-only, or something else 
started using the HTTP port [assuming 80/443 for a normal website]) and 
you tried to access the site. It therefore prevents some future (possibly 
nefarious) change from tripping you up.


But as you say, that is pretty much irrelevant, as Prometheus doesn't read 
or obey STS headers anyway, and access from a normal web browser is 
fairly unusual or short-lived (e.g. temporary tests and debugging).





[prometheus-users] Metrics API not registered when using promethus adapter

2020-11-24 Thread kumar k
Hi


I have installed the Prometheus adapter using
https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-adapter.
I can see v1beta1.custom.metrics.k8s.io registered, but not
v1beta1.metrics.k8s.io. Is a config change needed to register
v1beta1.metrics.k8s.io? Can both APIs be registered with the same Helm
chart?

v1beta1.custom.metrics.k8s.io   dock/custom-metrics-prometheus-adapter   True   106s




Thanks,

Kumar



Re: [prometheus-users] Re: How to enable STS to address CWE-693: Protection Mechanism Failure in node_exporter?

2020-11-24 Thread Selvam Elangovan
Perfect, you are spot on. Thanks for your input; it helps us.

Thanks & Regards,
Selvam E.

On Tue, 24 Nov 2020, 23:00 b.ca...@pobox.com,  wrote:

> I'm guessing what's happened is:
> 1. You've run an (unnamed) security scanner against node_exporter
> 2. The scanner has come back with this message, telling you that
> node_exporter should return an STS header.
>
> I'm saying that the scanner's conclusion is wrong.
>
> Firstly, node_exporter isn't a web server, and you don't connect to it
> with a web browser.
>
> Secondly, I don't know how you have configured node_exporter, but it can
> either serve HTTP (default) or HTTPS (*), on one port that you select.  STS
> only makes sense for a website which has both HTTP and HTTPS endpoints,
> usually on the standard ports 80 and 443.  It tells the browser always to
> select the HTTPS endpoint, and to remember this fact.
>
> node_exporter only provides one or the other, so (1) STS is meaningless,
> and (2) this is not a vulnerability in node_exporter.
>
> If you've configured node_exporter on HTTP, then there's no HTTPS port for
> STS to prefer.  If you've configured node_exporter on HTTPS (and of course
> configured prometheus to scrape it on HTTPS), then there's no HTTP port for
> STS to stop you using.
>
> Regards,
>
> Brian.
>
> (*) TLS is available in node_exporter 1.0.0+: you need to set --web.config
> to point to a file which contains the tlsConfig settings. See
> https://github.com/prometheus/node_exporter#tls-endpoint
>
> A sample web.config file would look like this:
>
> tlsConfig:
>   tlsCertPath: /etc/prometheus/ssl/prom_node_cert.pem
>   tlsKeyPath: /etc/prometheus/ssl/prom_node_key.pem
>


Re: [prometheus-users] Re: How to enable STS to address CWE-693: Protection Mechanism Failure in node_exporter?

2020-11-24 Thread b.ca...@pobox.com
I'm guessing what's happened is:
1. You've run an (unnamed) security scanner against node_exporter
2. The scanner has come back with this message, telling you that 
node_exporter should return an STS header.

I'm saying that the scanner's conclusion is wrong. 

Firstly, node_exporter isn't a web server, and you don't connect to it with 
a web browser.

Secondly, I don't know how you have configured node_exporter, but it can 
either serve HTTP (default) or HTTPS (*), on one port that you select.  STS 
only makes sense for a website which has both HTTP and HTTPS endpoints, 
usually on the standard ports 80 and 443.  It tells the browser always to 
select the HTTPS endpoint, and to remember this fact.

node_exporter only provides one or the other, so (1) STS is meaningless, 
and (2) this is not a vulnerability in node_exporter.

If you've configured node_exporter on HTTP, then there's no HTTPS port for 
STS to prefer.  If you've configured node_exporter on HTTPS (and of course 
configured prometheus to scrape it on HTTPS), then there's no HTTP port for 
STS to stop you using.

Regards,

Brian.

(*) TLS is available in node_exporter 1.0.0+: you need to set --web.config 
to point to a file which contains the tlsConfig settings. 
See https://github.com/prometheus/node_exporter#tls-endpoint

A sample web.config file would look like this:

tlsConfig:
  tlsCertPath: /etc/prometheus/ssl/prom_node_cert.pem
  tlsKeyPath: /etc/prometheus/ssl/prom_node_key.pem
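
On the Prometheus side, the matching scrape job is then pinned to HTTPS.
A sketch (the CA path and target are illustrative):

scrape_configs:
  - job_name: node
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/ssl/ca.pem
    static_configs:
      - targets: ['node1.example.com:9100']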



Re: [prometheus-users] Re: How to enable STS to address CWE-693: Protection Mechanism Failure in node_exporter?

2020-11-24 Thread Selvam Elangovan
Thanks. I am still confused.

Can we just configure HTTPS in Prometheus for the scrape to address the
CWE-693 STS finding?

Or

do we need to enable TLS in node_exporter to fix the STS vulnerability?

Kindly clarify.

Thanks

Selvam E.

On Tue, 24 Nov 2020, 21:49 b.ca...@pobox.com,  wrote:

> node_exporter isn't accessed via a browser - it's accessed only from
> prometheus scrapes.
>
> If you configure prometheus to scrape using https, then it will only use
> https. STS won't make any difference.
>
> Furthermore, if you configure node_exporter to use TLS, then it will
> *only* serve TLS.  It doesn't provide separate http and https ports (like
> port 80 and port 443).  So STS doesn't make any sense.
>


Re: [prometheus-users] Prometheus using AWS Timestream

2020-11-24 Thread Stuart Clark

On 24/11/2020 14:06, 'ellis...@googlemail.com' via Prometheus Users wrote:
the guy that wrote the adapter suggests that a Grafana plugin would be 
used to read the information from Timestream in AWS. 


Yes, but that doesn't help for alerting, recording rules, etc., which live 
in Prometheus.




[prometheus-users] Re: How to enable STS to address CWE-693: Protection Mechanism Failure in node_exporter?

2020-11-24 Thread b.ca...@pobox.com
node_exporter isn't accessed via a browser - it's accessed only from 
prometheus scrapes.

If you configure prometheus to scrape using https, then it will only use 
https. STS won't make any difference.

Furthermore, if you configure node_exporter to use TLS, then it will *only* 
serve TLS.  It doesn't provide separate http and https ports (like port 80 
and port 443).  So STS doesn't make any sense.



[prometheus-users] How to enable STS to address CWE-693: Protection Mechanism Failure in node_exporter?

2020-11-24 Thread Selvam Elangovan
How to enable STS to address CWE-693: Protection Mechanism Failure in 
node_exporter?  



Re: [prometheus-users] Prometheus using AWS Timestream

2020-11-24 Thread 'ellis...@googlemail.com' via Prometheus Users
the guy that wrote the adapter suggests that a Grafana plugin would be used 
to read the information from Timestream in AWS. 

On Tuesday, 24 November 2020 at 12:47:12 UTC Stuart Clark wrote:

> On 24/11/2020 09:53, 'ellis...@googlemail.com' via Prometheus Users wrote:
> > Hi all,
> >
> > Is anyone using Prometheus in AWS to monitor and if so have you 
> > thought about using Timestream as a remote storage solution?
>
>
> I can see that there is a remote write adapter available at 
> https://github.com/dpattmann/prometheus-timestream-adapter but is anyone 
> aware of a remote read adapter?
>
>



Re: [prometheus-users] Prometheus using AWS Timestream

2020-11-24 Thread Stuart Clark

On 24/11/2020 09:53, 'ellis...@googlemail.com' via Prometheus Users wrote:

Hi all,

Is anyone using Prometheus in AWS to monitor and if so have you 
thought about using Timestream as a remote storage solution?



I can see that there is a remote write adapter available at 
https://github.com/dpattmann/prometheus-timestream-adapter but is anyone 
aware of a remote read adapter?




[prometheus-users] Prometheus using AWS Timestream

2020-11-24 Thread 'ellis...@googlemail.com' via Prometheus Users
Hi all, 

Is anyone using Prometheus in AWS to monitor and if so have you thought 
about using Timestream as a remote storage solution?



[prometheus-users] Re: Timescale DB setup

2020-11-24 Thread Harkishen Singh

Hey Sreehari,

I had sent you a follow-up mail with the details. Is it resolved by now? 
If not, feel free to ask on the TimescaleDB Slack in the promscale channel 
(invite link).

Thank you.
On Wednesday, November 4, 2020 at 4:30:24 PM UTC+5:30 sreeha...@gmail.com 
wrote:

>
> Hello Team,
>
> I need to set up TimescaleDB and integrate it with Prometheus
> monitoring.
>
> In the current setup, I am using Prometheus, Grafana and exporters.
> Grafana uses Prometheus as a data source. After implementing TimescaleDB,
> I want to read the data from TimescaleDB using the same Prometheus data
> source.
>
> Can someone please help me with the installation steps for TimescaleDB
> on RHEL7?
>
> Thanks and regards,
> Sreehari
>



Re: [prometheus-users] Sample Code in Go/C++ to Publish Metrics to a Remote Write Endpoint

2020-11-24 Thread Harkishen Singh

Hey Al,

We (the TimescaleDB team) are in the process of adding support for easily 
pushing Prometheus metric data to Promscale/TimescaleDB. The PR is already 
merged, and we are writing examples and docs so that users can understand 
in simple terms how to use this feature.
For more information, feel free to drop a message in the TimescaleDB 
community Slack (link) and we will answer your queries ASAP.

Thank you
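
For the Go side of Al's question, a minimal sketch of encoding one sample
as a remote-write request and POSTing it. The endpoint URL and the metric
are placeholders; Promscale's exact write path and auth should be checked
against its docs:

package main

import (
    "bytes"
    "log"
    "net/http"
    "time"

    "github.com/gogo/protobuf/proto"
    "github.com/golang/snappy"
    "github.com/prometheus/prometheus/prompb"
)

func main() {
    // One sample of one series; remote write uses millisecond timestamps.
    wr := &prompb.WriteRequest{
        Timeseries: []prompb.TimeSeries{{
            Labels: []prompb.Label{
                {Name: "__name__", Value: "daily_total"}, // placeholder metric
                {Name: "job", Value: "backfill"},
            },
            Samples: []prompb.Sample{{
                Value:     42,
                Timestamp: time.Now().UnixNano() / int64(time.Millisecond),
            }},
        }},
    }
    raw, err := proto.Marshal(wr)
    if err != nil {
        log.Fatal(err)
    }
    req, err := http.NewRequest("POST", "http://localhost:9201/write", // placeholder endpoint
        bytes.NewReader(snappy.Encode(nil, raw)))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Encoding", "snappy")
    req.Header.Set("Content-Type", "application/x-protobuf")
    req.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("status:", resp.Status)
}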
On Monday, November 9, 2020 at 8:43:47 PM UTC+5:30 bwpl...@gmail.com wrote:

> I think this question should go to the TimescaleDB support list instead, 
> because it's up to Timescale exactly what semantics they allow on their 
> APIs.
>
> In terms of official Prometheus remote write, it's hard to get exactly 
> the same semantics as the Prometheus client. Think about staleness 
> marking, continuity of series, etc.
>
> Something happening in the community is the OpenTelemetry remote-write 
> exporter, which might give you some examples: 
> https://github.com/open-telemetry/opentelemetry-collector/tree/master/exporter/prometheusremotewriteexporter
> (I personally did not look at how valid it is, though).
>
> Kind Regards,
> Bartek Płotka (@bwplotka)
>
>
> On Mon, 9 Nov 2020 at 15:55, Al  wrote:
>
>> I have a specific use case where I'm backfilling metrics into a Postgres 
>> TimescaleDB instance. As backfilling implies, the metrics are not scraped 
>> at regular intervals: the individual data points are calculated at the 
>> end of each day and then pushed to TimescaleDB. I know we can accomplish 
>> this directly via SQL inserts, and I know we can also do it by encoding 
>> the individual data points with the protobuf definitions included in the 
>> prometheus repo and posting them to the promscale connector. Are there 
>> any code samples out there showing how this can be done in Go and in 
>> C++, directly from the application which generates the metrics?
>>
>> I appreciate any help.
>>
>>
>> Al
>>


[prometheus-users] Re: Security on Prometheus target

2020-11-24 Thread b.ca...@pobox.com
1. basic_auth is HTTP basic authentication: a standard and 
well-documented HTTP mechanism. The exporter itself has to implement 
this mechanism, of course (or you can put the exporter behind a proxy 
which implements it).

2. the authentication mechanisms which Prometheus can use during a scrape 
are documented.

If you want some other form of authentication, like the one you 
described, you could write an HTTP proxy which does it (passing a query 
parameter for the target).

Alternatively, you could use basic_auth with password_file, or 
bearer_token_file, and have an external program which writes to that file. 
However, I haven't tested whether Prometheus reads that file on every 
scrape, or whether you'd have to signal Prometheus when it changes.
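
A sketch of the basic_auth + password_file variant in prometheus.yml (the
username, file path and target are illustrative):

scrape_configs:
  - job_name: secured-exporter
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/scrape_password
    static_configs:
      - targets: ['target.example.com:9100']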
