[prometheus-users] Extracting long queries from multiple histograms

2022-04-19 Thread Victor Sudakov
Dear Colleagues,

There is a web app which exports its metrics as multiple histograms,
one histogram per web endpoint, so each set of histogram buckets is also
labelled with the {endpoint} label. There are about 50 endpoints, hence
about 50 histograms.

I would like to detect and graph slow endpoints, that is, I would like
to know which values of {endpoint} have observations landing in buckets
with {le} over 1s, or something like that.

Can you please help with a relevant PromQL query and an idea of how to
represent it in Grafana?

I don't actually want 50 heatmaps; there must be a clever way to get an
overview of all the slow endpoints, or of all the endpoints with a
particular status code, etc.
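
Something like the query below is roughly what I am after, if it even makes
sense (the metric name http_request_duration_seconds_bucket is made up, just
a placeholder for whatever the app really exposes):

  # 95th-percentile latency per endpoint, keeping only endpoints above 1s
  histogram_quantile(
    0.95,
    sum by (endpoint, le) (rate(http_request_duration_seconds_bucket[5m]))
  ) > 1

A single Grafana table or bar gauge panel fed by such a query could then list
just the slow endpoints instead of 50 heatmaps.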

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet



Re: [prometheus-users] Facing 5m staleness issue even with 2.x

2022-04-19 Thread Brian Candler
It depends on:
1. How often Gatling sends its graphite metrics
2. How often Prometheus scrapes graphite-exporter

If Prometheus is scraping graphite-exporter every 15 seconds, then you'll 
need to keep --graphite.sample-expiry at 15 seconds or more; otherwise you 
may lose the last metric value written by Gatling.
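
As a rough sketch, that scenario corresponds to a scrape job along these
lines (the job name and port are assumptions; 9108 is graphite_exporter's
usual default web port):

  scrape_configs:
    - job_name: graphite_exporter
      scrape_interval: 15s
      static_configs:
        - targets:
            - localhost:9108

With a 15-second scrape interval, a sample-expiry shorter than 15 seconds
could let the final value expire before Prometheus ever scrapes it.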




Re: [prometheus-users] Facing 5m staleness issue even with 2.x

2022-04-19 Thread Aniket Kulkarni
Thanks a lot, Brian.
Setting the --graphite.sample-expiry flag solved the issue.
For now I have kept it at 15 seconds; any guidance on how to choose the
right value would be appreciated.




Re: [prometheus-users] Facing 5m staleness issue even with 2.x

2022-04-19 Thread Brian Candler
This is an issue with graphite-exporter, not Prometheus or staleness.

The problem is this: if your application simply stops sending data to 
graphite-exporter, then graphite-exporter has no idea whether the time 
series has finished or not, so it keeps exporting it for a while.
See https://github.com/prometheus/graphite_exporter#usage
*"To avoid using unbounded memory, metrics will be garbage collected five 
minutes after they are last pushed to. This is configurable with the *
--graphite.sample-expiry* flag."*

Once graphite-exporter stops exporting the metric, then on the next scrape 
Prometheus will see that the time series has gone and will immediately 
mark it as stale (i.e. it has no more values), and everything is fine.

Therefore, reducing --graphite.sample-expiry may help, although you need to 
know how often your application sends graphite data; if you set this too 
short, then you'll get gaps in your graphs.
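
As a sketch (the intervals here are assumptions, not taken from your setup):
if Gatling flushes to graphite-exporter every 10 seconds and Prometheus
scrapes every 15 seconds, an expiry of around 30 seconds survives one missed
flush or scrape while still letting the series go stale quickly once the
test ends:

  graphite_exporter --graphite.sample-expiry=30s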

Another option you could try is to get your application to send a "NaN" 
value at the end of the run.  Technically this is a real NaN value, 
not a staleness marker (staleness markers are internally represented as a 
special kind of NaN, but that's an implementation detail you can't rely 
on).  Still, a NaN may be enough to stop Grafana from showing any values 
from that point onwards.




Re: [prometheus-users] Facing 5m staleness issue even with 2.x

2022-04-19 Thread Aniket Kulkarni
Thanks for the response, Stuart.

To explain a bit more: I am load testing an application with Gatling scripts
(similar to JMeter).

I want real-time monitoring of this load test.

For this, Gatling supports the Graphite writer protocol (it can't talk to
Prometheus directly, hence I have put graphite-exporter in between).

Prometheus then collects the metrics sent by Gatling and provides them to
Grafana to plot the graphs.

The problem is that I do get graphs, but even after my load test is finished,
I see the last value repeated in the graph for 5 minutes.

This seems to be the known Prometheus staleness issue, so I am confused about
how to resolve it. Does any configuration need to be added to the
prometheus.yml file?

Please let me know if you need any further details.





Re: [prometheus-users] Facing 5m staleness issue even with 2.x

2022-04-19 Thread Stuart Clark


Could you describe a bit more of the problem you are seeing and what you 
are wanting to do?


All time series will be marked as stale if they have not been scraped 
for a while, which causes data to stop being returned by queries. This is 
important because things like labels change over time (especially with 
things like Kubernetes, where labels include pod names). It is expected that 
targets will be regularly scraped, so series shouldn't otherwise 
disappear (unless there is an error, which should be visible via 
something like the "up" metric).


As the standard staleness interval is 5 minutes, it is recommended that 
the maximum scrape period be no more than 2 minutes (to allow for 
a failed scrape without the time series being marked as stale).
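
In prometheus.yml terms that just means keeping the (global or per-job)
scrape interval well under the staleness window, for example (the values
here are only illustrative):

  global:
    scrape_interval: 1m
    scrape_timeout: 10s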


--
Stuart Clark



[prometheus-users] Facing 5m staleness issue even with 2.x

2022-04-19 Thread Aniket Kulkarni
Hi,

I have referred to the links below.

I understand this was a problem with 1.x:
https://github.com/prometheus/prometheus/issues/398

I also found this link as a solution:
https://promcon.io/2017-munich/talks/staleness-in-prometheus-2-0/

No doubt it's a great session, but I am still not clear on what change I
have to make, and where.

I also couldn't find the Prometheus docs useful for this.

I am using the following tech stack:
Gatling -> graphite-exporter -> Prometheus -> Grafana.

I am still facing the staleness issue. Please guide me on the solution or
any extra configuration needed.

I am using the default Prometheus storage and not an external one.



Re: [prometheus-users] Re: Prometheus inner metrics can't be remote write

2022-04-19 Thread Yawhua Wong
Thanks a lot.
My mistake.

On Mon, Apr 18, 2022 at 8:19 PM Brian Candler  wrote:

> By "inner metrics" I guess you mean "internal metrics".  These *are*
> written to the remote storage.  For example: I use VictoriaMetrics for
> remote storage, and the "up" metrics *do* propagate there.
>
> So I think your problem must be at the receiving side.  You haven't
> specified what system it is that you're writing to.
>
> On Monday, 18 April 2022 at 12:59:32 UTC+1 yawhu...@gmail.com wrote:
>
>> Sorry for the  'screenshot images' and  thanks for your reply.
>>
>> There are some internal metrics in Prometheus that expose target status.
>> Refer to this:
>> https://github.com/prometheus/prometheus/blob/ec3d02019e84d9d793d2e137891dd7ea6d19ea60/scrape/scrape.go#L1662
>>
>> I can query them in Prometheus, but with the above configuration there is
>> no data in remote storage.
>>
>> I want to write these internal metrics to our remote storage.
>>
>>
>> On Monday, April 18, 2022 at 6:58:45 PM UTC+8 Brian Candler wrote:
>>
>>> (Please don't post screenshot images - they are hard to read, and
>>> impossible to copy-paste from)
>>>
>>> What are you trying to do? Your rewriting config says:
>>>
>>> 1. If the label name is "up", then keep the metric.
>>> 2. Otherwise, keep the metric. (This is the default if you reach the end
>>> of the ruleset)
>>>
>>> Hence it does nothing: it keeps all metrics.
>>>
>>> On Monday, 18 April 2022 at 09:56:35 UTC+1 yawhu...@gmail.com wrote:
>>>
 The "up" metric is an internal metric.
 But why can't it be rewritten?
 Is there any mistake?
 [image: 微信图片_20220418164419.png]

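
For the record, a minimal remote_write sketch that forwards only the internal
"up" series (the receiver URL is just a placeholder) needs an explicit keep
rule, so that everything else is dropped rather than falling through:

  remote_write:
    - url: http://remote-storage.example:8428/api/v1/write
      write_relabel_configs:
        - source_labels: [__name__]
          regex: up
          action: keep

Without a rule like this (or with rules that all fall through), every series,
including "up", is forwarded unchanged.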



[prometheus-users] Re: HTTP status 503 service unavailable: json_exporter with basic_auth

2022-04-19 Thread Brian Candler
Also, you should realise that if you set "basic_auth" in prometheus.yml, 
this only sets basic auth for the HTTP request from Prometheus to 
json_exporter, not from json_exporter to the target.

Does the target 
endpoint http://localhost:9015/services/v2/mpoints/E_TIP5/statistics 
require authentication? If so, you'll need an HTTPClientConfig section in 
your json_exporter configuration.

Admittedly I couldn't find this documented in the json_exporter config 
examples, but you can find what you need here:
https://github.com/prometheus-community/json_exporter/blob/v0.4.0/config/config.go#L40-L46
https://github.com/prometheus/common/blob/main/config/http_config.go
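
As a sketch only (the JSONPath is a placeholder, and the exact top-level key
should be checked against the config.go linked above for your json_exporter
version), the exporter-side configuration would look roughly like this:

  metrics:
    - name: mpoint_statistics_value
      path: '{ .someField }'   # placeholder JSONPath into the statistics response
      help: Example value scraped from the statistics endpoint

  http_client_config:          # HTTPClientConfig per the linked config.go; verify the key name
    basic_auth:
      username: XX
      password: X

That way json_exporter itself presents the credentials when it fetches the
target URL, independently of any basic_auth between Prometheus and
json_exporter.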




[prometheus-users] Re: HTTP status 503 service unavailable: json_exporter with basic_auth

2022-04-19 Thread Brian Candler
The configuration of Prometheus isn't really of interest, because it's 
json_exporter that's returning the error.

Scrape the exporter by hand:

curl -vg 
'http://localhost:7979/probe?target=http:%2f%2flocalhost%3a9015%2fservices%2fv2%2fmpoints%2fE_TIP5%2fstatistics'

I suspect you'll see the 503 error there too, but you may get a more 
detailed error message that may help understand what's going on.  Also try 
scraping the JSON target directly:

curl -v 'http://localhost:9015/services/v2/mpoints/E_TIP5/statistics'

If the latter doesn't work, or doesn't return JSON, then obviously the 
former won't work either.

Likely problems are:
- the .../statistics endpoint isn't working or isn't returning JSON
- the configuration of json_exporter is bad

On Tuesday, 19 April 2022 at 03:24:40 UTC+1 sivap...@gmail.com wrote:

>
> Hi,
> I'm getting the error below in the Prometheus console log while scraping
> metrics from json_exporter. The target endpoint is available and I am able
> to get a response with curl using basic authentication.
>
> ts=2022-04-18T20:58:07.156Z caller=scrape.go:1292 level=debug
> component="scrape manager" scrape_pool=json
> target="http://localhost:7979/probe?target=http%3A%2F%2Flocalhost%3A9015%2Fservices%2Fv2%2Fmpoints%2FE_TIP5%2Fstatistics"
> msg="Scrape failed" err="server returned HTTP status 503 Service
> Unavailable"
>
>
> Here is the prometheus.yaml
>
> scrape_configs:
>
>   ## gather metrics of prometheus itself
>   - job_name: prometheus
>     static_configs:
>       - targets:
>           - localhost:9090
>
>   ## gather the metrics of json_exporter application itself
>   - job_name: json_exporter
>     static_configs:
>       - targets:
>           - localhost:7979 ## Location of the json exporter's real <hostname>:<port>
>
>   ## gather the metrics from third party json sources, via the json exporter
>   - job_name: json
>     scrape_interval: 15s
>     scrape_timeout: 10s
>     metrics_path: /probe
>     basic_auth:
>       username: XX
>       password: X
>     static_configs:
>       - targets: ['http://localhost:9015/services/v2/mpoints/E_TIP5/statistics']
>     relabel_configs:
>       - source_labels: [__address__]
>         target_label: __param_target
>       - source_labels: [__param_target]
>         target_label: instance
>       - target_label: __address__
>         replacement: localhost:7979 ## Location of the json exporter's real <hostname>:<port>
>
