[prometheus-users] Re: Prometheus HA different metrics

2023-09-05 Thread Brian Candler
On Tuesday, 5 September 2023 at 14:26:07 UTC+1 Анастасия Зель wrote:

I only have the pod IP, and I can't reach it from the Prometheus node because they are 
in different subnets.


Hosts on different subnets *could* talk to each other - that's what routers 
are for.

It's quite possible that you have a routing or network reachability issue, 
but you'll have to work out why you can reach some pods but not others.  
That will be down to how your particular k8s cluster(s) have been built and 
configured.

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/8a88f431-07c4-4b2f-873b-3152f06db064n%40googlegroups.com.


Re: [prometheus-users] Re: Prometheus HA different metrics

2023-09-05 Thread Stuart Clark

On 2023-09-05 14:26, Анастасия Зель wrote:

Yeah, I think scraping manually would be useful, but remember that these are
k8s pods :)
I only have the pod IP, and I can't reach it from the Prometheus node because
they are in different subnets. The pod subnet doesn't have access to the
outside network.
So I don't know how I can manually scrape a particular pod target from the
Prometheus server.



That would explain why it isn't working. You need network connectivity from 
the Prometheus server to all of your scrape targets. So if you have 
configured Prometheus to scrape every pod (via the Kubernetes SD, for 
example), the Prometheus server will either need to run inside the cluster 
or be connected to the same pod network as the pods.
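
As a minimal sketch (assuming the usual Kubernetes SD pod role; the job 
name and any relabelling are placeholders), such a pod-scraping job looks 
roughly like:

scrape_configs:
  - job_name: kubernetes-pods        # hypothetical job name
    kubernetes_sd_configs:
      - role: pod                    # discovers every pod IP and port
    # The discovered __address__ is a pod IP, so the scrape only succeeds if
    # the Prometheus server can route to the pod network (e.g. in-cluster).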


--
Stuart Clark

--
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/4eb0b62f043f84563619eecb8ba0c307%40Jahingo.com.


[prometheus-users] Re: Prometheus HA different metrics

2023-09-05 Thread Анастасия Зель
Yeah, I think scraping manually would be useful, but remember that these are k8s 
pods :)
I only have the pod IP, and I can't reach it from the Prometheus node because they are 
in different subnets. The pod subnet doesn't have access to the outside network. 
So I don't know how I can manually scrape a particular pod target from the 
Prometheus server.

But thank you for your guesses, I will check it out.
On Tuesday, 5 September 2023 at 15:06:30 UTC+4 Brian Candler wrote: 

> > the fail 100% of the time on that prometheus where its down
>
> Then you're lucky: in principle it's straightforward to debug.
> - get a shell on the affected prometheus server
> - use "curl" to do a manual scrape of the target which is down (using the 
> same URL that the Targets list shows)
> - if it fails, then you've taken Prometheus out of the equation.
>
> My best guesses would be (1) Network connectivity between the Prometheus 
> server and the affected pods, or (2) service discovery is giving wrong 
> information (i.e. you're scraping the wrong URL in the first place)
>
> In case (2), I note that you're getting the targets to scrape from pod 
> annotations. Look carefully at the values of those annotations, and how 
> they are mapped into scrape address/port/path for the affected pods.
>
> On Tuesday, 5 September 2023 at 11:45:04 UTC+1 Анастасия Зель wrote:
>
>> Actually its targets on different k8s nodes, but the fail 100% of the 
>> time on that prometheus where its down. 
>> I get list of all down pods targets and noticed that number of down pods 
>> its the same on both prometheus nodes - 306 down pods targets. But its 
>> different targets :D
>> Yes, they scrape same urls of pods.
>> On Tuesday, 5 September 2023 at 10:32:15 UTC+4 Brian Candler wrote: 
>>
>>> Note that setting the scrape timeout longer than the scrape interval 
>>> won't achieve anything.
>>>
>>> I'd suggest you investigate by looking at the history of the "up" 
>>> metric: this will go to zero on scrape failures.  Can you discern a 
>>> pattern?  Is it only on a certain type of target, or targets running on a 
>>> particular k8s node?  Is it intermittent across all targets, or some 
>>> targets which fail 100% of the time?
>>>
>>> If you compare the Targets page on both servers, are they scraping 
>>> exactly the same URLs?  (That is, check whether service discovery is giving 
>>> different results)
>>>
>>> On Tuesday, 5 September 2023 at 06:09:55 UTC+1 Анастасия Зель wrote:
>>>
 yes, i see errors on targets page in web interface.
 I tried to increase timeout to 5 minutes and it changes nothing. 
 Its strange because prometheus 2 always get this error on similar pods. 
 And prometheus 1 never get this errors on this pods. 
 On Monday, 4 September 2023 at 19:00:32 UTC+4 Brian Candler wrote: 

> On Monday, 4 September 2023 at 15:49:25 UTC+1 Анастасия Зель wrote:
>
> Hello, we use HA prometheus with two servers.
>
> You mean, two Prometheus servers with the same config, both scraping 
> the same targets?
>
>  
>
> The problem is we get different metrics in dashboards from this two 
> servers.
>
> Small differences are to be expected.  That's because the two servers 
> won't be scraping the targets at the same points in time.  If you see 
> more 
> significant differences, then please provide some examples.
>
>  
>
> And we also scrape metrics from k8s, and some pods are not scraping 
> because of error context deadline exceeded
>
> That basically means "scrape timed out".  The scrape hadn't completed 
> within the "scrape_timeout:" value that you've set.  You'll need to look 
> at 
> your individual exporters and the failing scrape URLs: either the target 
> is 
> not reachable at all (e.g. firewalling or network configuration issue), 
> or 
> the target is taking too long to respond.
>  
>
> Its differents pods on each server. In prometheus logs we dont see any 
> of errors.
>
> Where *do* you see the "context deadline exceeded" errors then?
>


-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/494ada91-c4b8-4ea5-bdbc-4db440c4a40en%40googlegroups.com.


Re: [prometheus-users] Re: Need help with mysql exporter

2023-09-05 Thread Ben Kochie
The exporter is meant to run as a "sidecar", installed on every node of
your cluster.

What you are doing is not following best practices.

On Tue, Sep 5, 2023 at 2:42 PM Y.G Kumar  wrote:

> Brian,
>
> This is what I am trying to achieve. I have a three node galera mysql
> cluster on three nodes A, B and C with respective IPs.
> As of now, I have installed mysql exporter in all the above three nodes
> and trying to access it using haproxy external IP using the above three
> nodes as backends.  It is working fine.
>
> Now , I  want to keep the mysql exporter only on node A and remove it on
> nodes B and C and start the  multi target approach from node A only. So in
> the /etc/.mysqld_exporter.cnf   file , how do I mention each section of the
> other two nodes ?  This is my present file content in each of the nodes
> above:
> 
> [client]
> user=mysqld_exporter
> password=StrongPassword
> ---
>
> Please suggest..
>
> Appreciate your time.
>
> Good day
>
>
> On Tuesday, August 29, 2023 at 12:46:50 PM UTC+5:30 Brian Candler wrote:
>
>> On Tuesday, 29 August 2023 at 07:34:58 UTC+1 Y.G Kumar wrote:
>>
>> Please help ...
>>
>>
>> Please start by reading the documentation at the link I posted earlier in
>> this thread:
>> https://github.com/prometheus/mysqld_exporter/#multi-target-support
>>
>> Then if you are unable to make this work, you can post a specific
>> question :
>> show exactly what you configured, what you expected to happen, and what
>> actually happened (with any logs or error messages).
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Prometheus Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to prometheus-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/prometheus-users/97eb8eb4-1ac6-4d47-9f7f-a07978cd4e6an%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/CABbyFmoXkBnOQU2j%3DxW7V7Niu87C52Jb_KRPTBLaHOHnF_%3DzNA%40mail.gmail.com.


[prometheus-users] Re: Need help with mysql exporter

2023-09-05 Thread Brian Candler
Have you tried it? With curl?

If all servers have the same user/password, then I think just
curl nodeA:9104/probe?target=nodeB:3306
should work, using the credentials from the [client] section of my.cnf.

If they have different usernames/passwords, then as the documentation says, 
you can add additional sections to my.cnf:

[client]
user=mysqld_exporter
password=StrongPassword

[client.secret]
user=mysqld_exporter
password=VeryStrongPassword

and call /probe?target=nodeB:3306&auth_module=client.secret

If you don't provide auth_module then the "[client]" section will be used, as 
the documentation I linked to before says: "Will match value to child in 
config file. Default value is `client`".

Of course, you will have to arrange that nodes B and C accept mysql 
connections from node A. You can test this using the 'mysql' command line 
tool on node A, to see if you can establish connections to B/C.
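
For reference, here is a sketch of the matching Prometheus scrape job for 
this multi-target setup, along the lines of the linked documentation (the 
hostnames and the default exporter port 9104 are assumptions; adjust to 
your environment):

scrape_configs:
  - job_name: mysql
    metrics_path: /probe
    params:
      auth_module: [client]            # or client.secret, matching my.cnf
    static_configs:
      - targets:
          - nodeB:3306                 # MySQL servers to probe, not the exporter
          - nodeC:3306
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target   # becomes ?target=nodeB:3306
      - source_labels: [__param_target]
        target_label: instance         # keep the DB host as the instance label
      - target_label: __address__
        replacement: nodeA:9104        # actually scrape the exporter on node A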

On Tuesday, 5 September 2023 at 13:42:41 UTC+1 Y.G Kumar wrote:

> Brian,
>
> This is what I am trying to achieve. I have a three node galera mysql 
> cluster on three nodes A, B and C with respective IPs.
> As of now, I have installed mysql exporter in all the above three nodes 
> and trying to access it using haproxy external IP using the above three 
> nodes as backends.  It is working fine. 
>
> Now , I  want to keep the mysql exporter only on node A and remove it on 
> nodes B and C and start the  multi target approach from node A only. So in 
> the /etc/.mysqld_exporter.cnf   file , how do I mention each section of the 
> other two nodes ?  This is my present file content in each of the nodes 
> above:
> 
> [client]
> user=mysqld_exporter
> password=StrongPassword
> ---
>
> Please suggest..
>
> Appreciate your time.
>
> Good day
>
>
> On Tuesday, August 29, 2023 at 12:46:50 PM UTC+5:30 Brian Candler wrote:
>
>> On Tuesday, 29 August 2023 at 07:34:58 UTC+1 Y.G Kumar wrote:
>>
>> Please help ...
>>
>>
>> Please start by reading the documentation at the link I posted earlier in 
>> this thread:
>> https://github.com/prometheus/mysqld_exporter/#multi-target-support
>>
>> Then if you are unable to make this work, you can post a specific 
>> question : 
>> show exactly what you configured, what you expected to happen, and what 
>> actually happened (with any logs or error messages).
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/f7cc5d50-e6de-4ed5-9e45-775f8f55411en%40googlegroups.com.


[prometheus-users] Re: Need help with mysql exporter

2023-09-05 Thread Y.G Kumar
Brian,

This is what I am trying to achieve. I have a three-node Galera MySQL 
cluster on nodes A, B and C with their respective IPs.
As of now, I have installed the mysql exporter on all three nodes and am 
accessing it through an HAProxy external IP with those three nodes as 
backends. It is working fine.

Now I want to keep the mysql exporter only on node A, remove it from 
nodes B and C, and use the multi-target approach from node A only. So in 
the /etc/.mysqld_exporter.cnf file, how do I add a section for each of the 
other two nodes? This is my present file content on each of the nodes 
above:

[client]
user=mysqld_exporter
password=StrongPassword
---

Please suggest..

Appreciate your time.

Good day


On Tuesday, August 29, 2023 at 12:46:50 PM UTC+5:30 Brian Candler wrote:

> On Tuesday, 29 August 2023 at 07:34:58 UTC+1 Y.G Kumar wrote:
>
> Please help ...
>
>
> Please start by reading the documentation at the link I posted earlier in 
> this thread:
> https://github.com/prometheus/mysqld_exporter/#multi-target-support
>
> Then if you are unable to make this work, you can post a specific question 
> : show 
> exactly what you configured, what you expected to happen, and what actually 
> happened (with any logs or error messages).
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/97eb8eb4-1ac6-4d47-9f7f-a07978cd4e6an%40googlegroups.com.


[prometheus-users] Prometheus Traces to Elastic APM using operator

2023-09-05 Thread Maniraj Periasamy
Hi team, 

We are trying to monitor the performance of Prometheus and to send its 
traces to Elastic APM. 

Is there any documentation for doing this with the Prometheus Operator? We 
are using the kube-prometheus-stack Helm chart.

Regards,
Mani


-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/f6c6721e-1ef9-4778-9af2-473a70b9a1ffn%40googlegroups.com.


[prometheus-users] Re: Prometheus HA different metrics

2023-09-05 Thread Brian Candler
> the fail 100% of the time on that prometheus where its down

Then you're lucky: in principle it's straightforward to debug.
- get a shell on the affected prometheus server
- use "curl" to do a manual scrape of the target which is down (using the 
same URL that the Targets list shows)
- if it fails, then you've taken Prometheus out of the equation.

My best guesses would be (1) Network connectivity between the Prometheus 
server and the affected pods, or (2) service discovery is giving wrong 
information (i.e. you're scraping the wrong URL in the first place)

In case (2), I note that you're getting the targets to scrape from pod 
annotations. Look carefully at the values of those annotations, and how 
they are mapped into scrape address/port/path for the affected pods.
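
For reference, the widely used annotation-driven relabelling looks roughly 
like the sketch below (the de-facto prometheus.io/* convention, not 
necessarily your exact config); a wrong annotation value feeds straight 
into the scrape address, port or path:

relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"                      # only scrape pods that opt in
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__     # annotation overrides the metrics path
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)      # swap in the annotated port
    replacement: "$1:$2"
    target_label: __address__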

On Tuesday, 5 September 2023 at 11:45:04 UTC+1 Анастасия Зель wrote:

> Actually its targets on different k8s nodes, but the fail 100% of the time 
> on that prometheus where its down. 
> I get list of all down pods targets and noticed that number of down pods 
> its the same on both prometheus nodes - 306 down pods targets. But its 
> different targets :D
> Yes, they scrape same urls of pods.
> On Tuesday, 5 September 2023 at 10:32:15 UTC+4 Brian Candler wrote: 
>
>> Note that setting the scrape timeout longer than the scrape interval 
>> won't achieve anything.
>>
>> I'd suggest you investigate by looking at the history of the "up" metric: 
>> this will go to zero on scrape failures.  Can you discern a pattern?  Is it 
>> only on a certain type of target, or targets running on a particular k8s 
>> node?  Is it intermittent across all targets, or some targets which fail 
>> 100% of the time?
>>
>> If you compare the Targets page on both servers, are they scraping 
>> exactly the same URLs?  (That is, check whether service discovery is giving 
>> different results)
>>
>> On Tuesday, 5 September 2023 at 06:09:55 UTC+1 Анастасия Зель wrote:
>>
>>> yes, i see errors on targets page in web interface.
>>> I tried to increase timeout to 5 minutes and it changes nothing. 
>>> Its strange because prometheus 2 always get this error on similar pods. 
>>> And prometheus 1 never get this errors on this pods. 
>>> On Monday, 4 September 2023 at 19:00:32 UTC+4 Brian Candler wrote: 
>>>
 On Monday, 4 September 2023 at 15:49:25 UTC+1 Анастасия Зель wrote:

 Hello, we use HA prometheus with two servers.

 You mean, two Prometheus servers with the same config, both scraping 
 the same targets?

  

 The problem is we get different metrics in dashboards from this two 
 servers.

 Small differences are to be expected.  That's because the two servers 
 won't be scraping the targets at the same points in time.  If you see more 
 significant differences, then please provide some examples.

  

 And we also scrape metrics from k8s, and some pods are not scraping 
 because of error context deadline exceeded

 That basically means "scrape timed out".  The scrape hadn't completed 
 within the "scrape_timeout:" value that you've set.  You'll need to look 
 at 
 your individual exporters and the failing scrape URLs: either the target 
 is 
 not reachable at all (e.g. firewalling or network configuration issue), or 
 the target is taking too long to respond.
  

 Its differents pods on each server. In prometheus logs we dont see any 
 of errors.

 Where *do* you see the "context deadline exceeded" errors then?

>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/24575a81-2302-4d4c-8b6b-e24075ddaa98n%40googlegroups.com.


[prometheus-users] Re: Prometheus HA different metrics

2023-09-05 Thread Анастасия Зель
Actually, the targets are on different k8s nodes, but they fail 100% of the 
time on the Prometheus where they are down. 
I got the list of all down pod targets and noticed that the number of down 
pods is the same on both Prometheus nodes - 306 down pod targets. But they 
are different targets :D
Yes, they scrape the same pod URLs.
On Tuesday, 5 September 2023 at 10:32:15 UTC+4 Brian Candler wrote: 

> Note that setting the scrape timeout longer than the scrape interval won't 
> achieve anything.
>
> I'd suggest you investigate by looking at the history of the "up" metric: 
> this will go to zero on scrape failures.  Can you discern a pattern?  Is it 
> only on a certain type of target, or targets running on a particular k8s 
> node?  Is it intermittent across all targets, or some targets which fail 
> 100% of the time?
>
> If you compare the Targets page on both servers, are they scraping exactly 
> the same URLs?  (That is, check whether service discovery is giving 
> different results)
>
> On Tuesday, 5 September 2023 at 06:09:55 UTC+1 Анастасия Зель wrote:
>
>> yes, i see errors on targets page in web interface.
>> I tried to increase timeout to 5 minutes and it changes nothing. 
>> Its strange because prometheus 2 always get this error on similar pods. 
>> And prometheus 1 never get this errors on this pods. 
>> On Monday, 4 September 2023 at 19:00:32 UTC+4 Brian Candler wrote: 
>>
>>> On Monday, 4 September 2023 at 15:49:25 UTC+1 Анастасия Зель wrote:
>>>
>>> Hello, we use HA prometheus with two servers.
>>>
>>> You mean, two Prometheus servers with the same config, both scraping the 
>>> same targets?
>>>
>>>  
>>>
>>> The problem is we get different metrics in dashboards from this two 
>>> servers.
>>>
>>> Small differences are to be expected.  That's because the two servers 
>>> won't be scraping the targets at the same points in time.  If you see more 
>>> significant differences, then please provide some examples.
>>>
>>>  
>>>
>>> And we also scrape metrics from k8s, and some pods are not scraping 
>>> because of error context deadline exceeded
>>>
>>> That basically means "scrape timed out".  The scrape hadn't completed 
>>> within the "scrape_timeout:" value that you've set.  You'll need to look at 
>>> your individual exporters and the failing scrape URLs: either the target is 
>>> not reachable at all (e.g. firewalling or network configuration issue), or 
>>> the target is taking too long to respond.
>>>  
>>>
>>> Its differents pods on each server. In prometheus logs we dont see any 
>>> of errors.
>>>
>>> Where *do* you see the "context deadline exceeded" errors then?
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/805a2feb-d0ab-4f70-a308-2a2e8a58cee6n%40googlegroups.com.


Re: [prometheus-users] HELP: GaugeVec memory keeps increasing

2023-09-05 Thread Monkey92t
I'm sorry, this is my first time using the Google forum. After I posted my 
question for the first time, it didn't appear in the list, so I thought the 
operation had failed, and that's why I submitted the question again. I 
apologize for the confusion.

On Tuesday, 5 September 2023 at 16:03:23 UTC+8 Ben Kochie wrote:

> Please do not spam the list for an issue you just opened.
>
> Repeated abuse of this list will result in your removal.
>
> On Tue, Sep 5, 2023 at 9:58 AM Monkey92t  wrote:
>
>> See: https://github.com/prometheus/client_golang/issues/1340 
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Prometheus Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to prometheus-use...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/prometheus-users/5d84446d-cedb-4cc1-bbb5-6905beeba412n%40googlegroups.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/ea033373-b01b-4e64-89b6-c78ff8c41e78n%40googlegroups.com.


Re: [prometheus-users] HELP: GaugeVec memory keeps increasing

2023-09-05 Thread Ben Kochie
Please do not spam the list for an issue you just opened.

Repeated abuse of this list will result in your removal.

On Tue, Sep 5, 2023 at 9:58 AM Monkey92t  wrote:

> See: https://github.com/prometheus/client_golang/issues/1340
>
> --
> You received this message because you are subscribed to the Google Groups
> "Prometheus Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to prometheus-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/prometheus-users/5d84446d-cedb-4cc1-bbb5-6905beeba412n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/CABbyFmq75yCVHQ0UNR%2BQB1g3k3VZ0y27zwaYt6i_gE5q1BbBpQ%40mail.gmail.com.


[prometheus-users] HELP: GaugeVec memory keeps increasing

2023-09-05 Thread Monkey92t
See: https://github.com/prometheus/client_golang/issues/1340

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/5d84446d-cedb-4cc1-bbb5-6905beeba412n%40googlegroups.com.


[prometheus-users] GaugeVec memory keeps increasing

2023-09-05 Thread 邵华
see: https://github.com/prometheus/client_golang/issues/1340

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/18c8da82-923e-4ff3-b52c-0552e66b3e0an%40googlegroups.com.


[prometheus-users] Re: Prometheus HA different metrics

2023-09-05 Thread Brian Candler
Note that setting the scrape timeout longer than the scrape interval won't 
achieve anything.
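
In config terms, the timeout has to be less than or equal to the interval; 
for example (the values below are purely illustrative, and Prometheus will 
refuse to load a config where scrape_timeout exceeds scrape_interval):

scrape_configs:
  - job_name: kubernetes-pods    # hypothetical
    scrape_interval: 30s
    scrape_timeout: 25s          # must be <= scrape_interval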

I'd suggest you investigate by looking at the history of the "up" metric: 
this will go to zero on scrape failures.  Can you discern a pattern?  Is it 
only on a certain type of target, or targets running on a particular k8s 
node?  Is it intermittent across all targets, or some targets which fail 
100% of the time?
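
One way to look at that history is a recording rule along these lines (the 
group and rule names are made up): avg_over_time(up[1h]) is 1 for targets 
that are always up, 0 for targets that are always down, and somewhere in 
between for intermittent failures.

groups:
  - name: scrape-health                       # hypothetical rule group
    rules:
      - record: instance:up:avg_over_time_1h  # hypothetical rule name
        expr: avg_over_time(up[1h])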

If you compare the Targets page on both servers, are they scraping exactly 
the same URLs?  (That is, check whether service discovery is giving 
different results)

On Tuesday, 5 September 2023 at 06:09:55 UTC+1 Анастасия Зель wrote:

> yes, i see errors on targets page in web interface.
> I tried to increase timeout to 5 minutes and it changes nothing. 
> Its strange because prometheus 2 always get this error on similar pods. 
> And prometheus 1 never get this errors on this pods. 
> On Monday, 4 September 2023 at 19:00:32 UTC+4 Brian Candler wrote: 
>
>> On Monday, 4 September 2023 at 15:49:25 UTC+1 Анастасия Зель wrote:
>>
>> Hello, we use HA prometheus with two servers.
>>
>> You mean, two Prometheus servers with the same config, both scraping the 
>> same targets?
>>
>>  
>>
>> The problem is we get different metrics in dashboards from this two 
>> servers.
>>
>> Small differences are to be expected.  That's because the two servers 
>> won't be scraping the targets at the same points in time.  If you see more 
>> significant differences, then please provide some examples.
>>
>>  
>>
>> And we also scrape metrics from k8s, and some pods are not scraping 
>> because of error context deadline exceeded
>>
>> That basically means "scrape timed out".  The scrape hadn't completed 
>> within the "scrape_timeout:" value that you've set.  You'll need to look at 
>> your individual exporters and the failing scrape URLs: either the target is 
>> not reachable at all (e.g. firewalling or network configuration issue), or 
>> the target is taking too long to respond.
>>  
>>
>> Its differents pods on each server. In prometheus logs we dont see any of 
>> errors.
>>
>> Where *do* you see the "context deadline exceeded" errors then?
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/ff7ed768-c75b-462d-be60-7c2d47773751n%40googlegroups.com.