I use Cortex to store the metrics. Queries go through Cortex, so Prometheus
doesn't need to process the read requests itself.
So is there any way to disable TSDB isolation to reduce memory usage?
--
You received this message because you are subscribed to the Google Groups
"Prometheus Users" group.
On 31/01/2021 05:10, Amit Dubey wrote:
Thanks Stuart,
I am collecting metrics from the targeted pods and I can see the
different labels in the Prometheus UI (graph), and I want to remove
some labels from all the collected ones.
Actually I don't want to display the labels (instance, job &
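Dropping labels at scrape time is usually done with `metric_relabel_configs`; a minimal sketch, assuming an illustrative job name and label (neither is from the original thread):

```yaml
scrape_configs:
  - job_name: "pods"   # illustrative job name
    metric_relabel_configs:
      # Drop the "pod_template_hash" label from every scraped series.
      - action: labeldrop
        regex: pod_template_hash
```

Note that `instance` and `job` are target labels Prometheus attaches itself and uses to identify series, so dropping those two is generally not recommended.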
Did you set read_recent: true on the remote read ?
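For reference, `read_recent` lives in the `remote_read` block of the Prometheus configuration; a minimal sketch, with a placeholder URL:

```yaml
remote_read:
  - url: "http://cortex.example.com/api/prom/read"   # placeholder URL
    # Also forward queries for recent data (which would otherwise be
    # answered from local storage alone) to the remote endpoint.
    read_recent: true
```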
On 31 Jan 22:30, Julius Volz wrote:
> That sounds like the remote read adapter is probably not evaluating /
> translating label matchers correctly for the remote system to send back the
> correct data? Or am I misunderstanding something?
>
> On S
That sounds like the remote read adapter is probably not evaluating /
translating label matchers correctly for the remote system to send back the
correct data? Or am I misunderstanding something?
On Sun, Jan 31, 2021 at 9:07 PM Johny wrote:
> No, I haven't tried from local storage yet. The remot
No, I haven't tried from local storage yet. The remote system is a
proprietary storage backend exposed via a Prometheus adapter. I have metrics
exported from the adapter process about failure rate, latency, etc., and I
see no issues there. I also never see the issue when I query directly.
On Sunday, January
On Sun, Jan 31, 2021 at 7:08 PM Johny wrote:
> Please note that I am doing a remote read to fetch metrics in my rules. Per
> my metrics, there are no issues with consistency in remote reads.
Oh, that's interesting. Have you been able to reproduce it with just the
local storage yet?
What system are you r
This seems to be an issue with the server. BBE is connecting to the target (
https://www2.trustnet.com) and it's getting a 502. Is the server trying to
enforce some TLS parameters? Headers? Something along those lines?
Marcelo
On Tue, Jan 12, 2021 at 11:06 AM RBFE wrote:
> *Host operating syste
Please note that I am doing a remote read to fetch metrics in my rules. Per
my metrics, there are no issues with consistency in remote reads.
On Sunday, January 31, 2021 at 1:04:08 PM UTC-5 Johny wrote:
> Unfortunately, I am still seeing the issue even after removing all the
> filters!
> Almost 5% of ti
Unfortunately, I am still seeing the issue even after removing all the
filters!
Almost 5% of time series are not recorded. I am writing to local disk with
sufficient volume. No errors in log.
Reproduced issue with a single recording rule.
Prometheus version 2.17.2. Are there any known bugs caus
See also:
https://prometheus.io/docs/prometheus/latest/configuration/template_reference/#numbers
On Sun, Jan 31, 2021 at 10:07 AM Julius Volz
wrote:
> In your annotation template, instead of "{{ $value }}", you can use "{{
> $value | printf "%.2f" }}" to cut the value off after 2 decimal places.
In your annotation template, instead of "{{ $value }}", you can use "{{
$value | printf "%.2f" }}" to cut the value off after 2 decimal places. Or
you could try "{{ humanize $value }}".
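As a sketch, the template goes in the rule's annotations; the alert name, metric, and threshold below are illustrative, not from the original thread:

```yaml
groups:
  - name: example   # illustrative group name
    rules:
      - alert: HighMemoryUsage
        expr: memory_usage_percent > 70   # illustrative metric and threshold
        annotations:
          # Renders e.g. 70.25266596686059 as "70.25".
          summary: 'Memory usage is {{ $value | printf "%.2f" }}%'
```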
On Sun, Jan 31, 2021 at 9:12 AM Auggie Yang
wrote:
> guys,
>
> Configuration rules.yml with promethues, and al
Yep! What you want is an alerting heartbeat:
https://www.youtube.com/watch?v=RsigFUMUHZ0
That is, an external service like https://deadmanssnitch.com/ that sends you
a notification when it does *not* receive a message within a given time window.
If you are using the Prometheus Operator to deploy Prometheus to Kub
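The Prometheus side of such a heartbeat is an always-firing alert routed to the external service; a minimal sketch (the "Watchdog" name follows a common convention, but any name works):

```yaml
groups:
  - name: meta
    rules:
      # vector(1) always returns a sample, so this alert fires constantly.
      # If the external service stops receiving it, the alerting pipeline
      # itself is broken.
      - alert: Watchdog
        expr: vector(1)
        labels:
          severity: none
        annotations:
          summary: "Always-firing heartbeat for the dead man's switch"
```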
guys,
I configured rules.yml with Prometheus, and Alertmanager always shows some
very long values,
like memory usage: 70.25266596686059, disk usage: 65.58277282.
Is there a good way to format Prometheus values as 70.xx or 65.xx rather
than the full precision? Thanks.
Hi,
Cool, thanks.
I will test it in my lab.
On Saturday, January 30, 2021 at 6:56:11 PM UTC+8, the following was written:
> Hi,
>
> On 2021-01-30 10:47, Auggie Yang wrote:
> > I was confused with alertmanager inhibit_rules; details as following:
> >
> > Examples:
> > 8 servers, one of the servers always has high memory (50%) or disk us