to Julius
well in that case, I made a custom metric in the web server. Could this be
a reason for the probe failing?
On Tuesday, March 23, 2021 at 2:07:13 AM UTC+9, marcelo@grafana.com wrote:
> I see there's an artifact in the logs when reporting the error, "Get "
> plus some non-ascii character.
>
> Is t
Thanks Stuart. I'll need to think about whether it's doable in my case to
run node_exporter on each EC2 instance. I'm on an infra team, and doing
that would have a lot of impact, which I need to evaluate. But thanks for
your suggestions.
One more question regarding cloudwatch exporter: for my case, ano
I have one more question for node_exporter: say if I want to get ec2
instance cpu metrics for *lots* of clusters, do I need to run node_exporter
on every node in all clusters? From the doc of node_exporter, it looks like
one exporter will only collect metrics for the node it's running on, which
Thanks Stuart. I didn't know node exporter can also collect metrics at the
instance level. If it can get per-instance CPU metrics faster than
cloudwatch exporter, that should satisfy my requirements. I'll take a look
at node exporter then.
On Monday, March 22, 2021 at 4:03:48 PM UTC-
On 22/03/2021 22:53, chuanjia xing wrote:
Thanks. The reason I am using cloudwatch exporter is that I want to get
CPUUtilization metrics per cluster/service, not at the node level.
I haven't used node_exporter before, so I'm not sure whether I can get
CPUUtilization metrics per cluster/service.
You should gather CPU utilization from the node_exporter, not cloudwatch.
This is much more scalable and won't run into these problems.
On Mon, Mar 22, 2021 at 11:22 PM chuanjia xing
wrote:
> Thanks for your quick response Stuart!
> The reason I increase the scrape_interval to be longer than 2
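To sketch what per-cluster aggregation could look like with node_exporter data: assuming each scraped instance carries a "cluster" label (which you would have to attach yourself, e.g. via relabel_configs; the label name here is illustrative), average CPU utilization per cluster can be computed at query time:

```promql
# Average non-idle CPU percentage per cluster, assuming every instance
# is scraped by node_exporter and carries a "cluster" label.
100 * (1 - avg by (cluster) (rate(node_cpu_seconds_total{mode="idle"}[5m])))
```

The aggregation happens in PromQL, so the exporter itself never needs to know about clusters or services.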
Thanks for your quick response Stuart!
The reason I increased the scrape_interval to be longer than 2 mins is that
I have several AWS regions to query for EC2 CPUUtilization metrics, and for
some regions the exporter took ~3 mins to return the cloudwatch metrics.
Let's say it took 3 mins,
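For reference, when the exporter is slow, scrape_timeout has to be raised alongside scrape_interval (it may not exceed it, and defaults to only 10s). A sketch of the relevant prometheus.yml fragment; the job name and interval values are illustrative, 9106 is the cloudwatch_exporter default port:

```yaml
scrape_configs:
  - job_name: cloudwatch          # illustrative name
    scrape_interval: 5m           # longer than the exporter's ~3 min worst case
    scrape_timeout: 4m            # must be <= scrape_interval; default is 10s
    static_configs:
      - targets: ['localhost:9106']
```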
On 22/03/2021 21:48, chuanjia xing wrote:
Hi there,
I recently hit a missing data point issue using prometheus.
Want to get some help here. Thanks.
*Issue:*
Increasing scrape_interval in prometheus resulted in missing data points.
*My scenario:*
I am using prometheus CloudWatch Exp
blackbox_exporter applies the provided regular expression against the
entire body.
This means in particular that if your body is something like ['O', 'K',
'\n'] (O, followed by K, followed by a newline), the regular expression
'^OK$' WILL NOT match, because '$' anchors it to the end of the entire
body, and the body here ends with a newline rather than with 'K'.
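A small Python sketch of both behaviours discussed in this thread (Python's `re` is used only for illustration; blackbox_exporter uses Go's regexp package, where an unflagged `$` anchors to the end of the whole text, which `re.fullmatch` approximates here):

```python
import re

body = "OK\n"  # what the probed endpoint actually returns

# Substring search: "OK" occurs somewhere, so both of these match,
# including the unwanted "NOK" case from earlier in the thread.
print(bool(re.search("OK", body)))   # True
print(bool(re.search("OK", "NOK")))  # True

# Anchoring to the whole body: the trailing newline makes the match fail.
print(bool(re.fullmatch("OK", body)))      # False
# Tolerating an optional trailing newline fixes it:
print(bool(re.fullmatch("OK\n?", body)))   # True
```

So a pattern along the lines of `^OK\n?$` (or stripping the newline server-side) reconciles the two failure modes.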
Hi Julius,
Using the expression "^OK$" causes all the checks whose response was OK to
fail, which seems weird to me; ideally it should have worked.
Any more workarounds or suggestions to achieve this?
On Mon, Mar 22, 2021 at 9:38 PM Yagyansh S. Kumar
wrote:
> Thanks, Julius.
>
> On Mo
Hello,
I have some metrics that change infrequently, is there a way to query these
metrics for distinct values over time? (i.e. change the “staircase” like
graph to one drawing lines between the points of change). I’m not sure if
there is a way to do this through PromQL, or if I’m approaching
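Not a full answer to the question above, but one related PromQL building block: changes() reports how many times a series changed value within a window, which can help locate the points of change (the metric name below is made up):

```promql
# Number of value changes of the (hypothetical) metric over the last day
changes(my_infrequent_metric[1d])
```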
I see there's an artifact in the logs when reporting the error: "Get " plus
some non-ASCII character.
Is the domain name for the target by any chance an internationalized domain
name (using characters outside the ASCII range)?
There's an old open PR in blackbox_exporter related to that which I have
be
On 20.03.21 14:45, Stuart Clark wrote:
Personally I try to keep things more separated, so using different
storage for each team, which makes maintenance easier and allows each
team to control their own configuration, at the cost of a bit more
complexity/infrastructure.
We do the same here. Bu
Prometheus extrapolates `increase()` results - see
https://github.com/prometheus/prometheus/issues/3746 for more details.
There is an implementation, which returns exact results from increase()
without extrapolation - https://victoriametrics.github.io/MetricsQL.html .
On Fri, Mar 19, 2021 at 4:55
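As a rough illustration of the extrapolation point above: increase() is essentially rate() multiplied by the range in seconds, and both extrapolate the samples out to the window boundaries, which is why increase() on an integer counter can return non-integer results:

```promql
# These two expressions are equivalent; both extrapolate to the full
# 5-minute window, so an integer counter can yield e.g. 2.04 instead of 2.
increase(http_requests_total[5m])
rate(http_requests_total[5m]) * 300
```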
Thanks, Julius.
On Mon, Mar 22, 2021 at 6:50 PM Julius Volz
wrote:
> Hi,
>
> the Blackbox exporter doesn't do full-string matching for these regexes,
> but substring matching, so it also matches if the body contains "OK"
> anywhere (which is true for "NOK"). Try `^OK$` instead?
>
> Also, in case
On 2021-03-22 14:27, dc3o wrote:
Considering that the scrape interval is controlled by the Prometheus
configuration, how do we configure how often an exporter pulls and exposes
its metrics? For instance, say the scrape interval is 60s: how do we ensure
that the metrics provided represent the values at the exact moment?
Hi,
the Blackbox exporter doesn't do full-string matching for these regexes,
but substring matching, so it also matches if the body contains "OK"
anywhere (which is true for "NOK"). Try `^OK$` instead?
Also, in case your HTTP endpoint returns a status code other than 2xx, you
will have to set the
So EOF ("end of file") in the context of a failed HTTP request probably
means that the remote end closed the connection for some reason. This can
also happen if the web server sends the response, but closes the connection
before the client reads the response. I would check your custom web server
co
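A minimal Python sketch of that failure mode: a TCP "server" that accepts the connection and then closes it without ever responding, so the client's read just returns end-of-file. This illustrates the mechanism only, not the poster's actual web server:

```python
import socket
import threading

# Listen on an ephemeral localhost port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def close_without_responding():
    conn, _ = srv.accept()
    conn.close()  # FIN is sent; no response bytes ever arrive

t = threading.Thread(target=close_without_responding)
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
data = client.recv(1024)  # blocks until the peer's FIN, then returns b''
t.join()
print(repr(data))  # b'' -- an empty read is what HTTP clients report as EOF
```

An HTTP client layered on top of such a connection has no status line to parse, so it surfaces the empty read as an EOF error.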
Sorry for the CC, Julius.
These are my debugging logs, please help me...
On Monday, March 22, 2021 at 3:19:59 PM UTC+9, juliu...@promlabs.com wrote:
> Try going to your Blackbox exporter manually and probing your HTTPS server
> through it, but with "debug=true" appended to the probe request. That
> should show you
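For reference, the manual debug probe suggested above looks roughly like this; the host, target URL, and module name are whatever your setup uses (9115 is blackbox_exporter's default port, and http_2xx is the conventional example module):

```shell
curl 'http://localhost:9115/probe?module=http_2xx&target=https://example.com&debug=true'
```

The response includes the probe's logs and the full transcript of the HTTP exchange, which usually pinpoints where the probe failed.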
Can someone please help! I am confused here.
On Monday, March 15, 2021 at 12:20:03 PM UTC+5:30 yagyans...@gmail.com
wrote:
>
> Hi. I am using blackbox_exporter version 0.18.0 and I am using http prober
> to check if the response by my URL is "OK" or not. Below is the
> configuration of the mod