local as well as Prometheus. The Flink
JobManager and TaskManager are in *UP* status in the Prometheus targets, but
checking the list of metrics from the explorer, the "myCounter" custom
metric cannot be found.
metrics.reporter.prom.factory.class:
org.apache.flink.metrics.prometheus.Pro
`{get|set}RuntimeContext` methods. More or less the same thing we already
do with, for example, the `RichFunction`. This unfortunately requires some
work on the Flink side.
cc @Arvid
On Thu, Dec 2, 2021 at 5:52 PM <lars.bachm...@posteo.de> wrote:
Hi,
is there a way to expose custom metrics within an Elasticsearch failure
handler (ActionRequestFailureHandler)? To register custom metrics I need
access to the runtime context, but I don't see a way to access the
context in the failure handler.
Thanks and regards,
Lars
We would like a counter of exceptions so we can alert if there's an
anomalous increase in them. I realize a counter in the JobManager would not
capture anywhere close to all exceptions but even capturing a count of a
subset that we're able to track would be helpful.
On Thu, Jul 15, 2021 at 3:47
This is currently not possible. What metric are you interested in?
On 15/07/2021 21:16, Jeff Charles wrote:
Hi,
I would like to register a custom metric on the JobManager as opposed to a
TaskManager. I cannot seem to locate any documentation that indicates how
to do this or even if it's currently possible or not.
Does anyone have any guidance on how to register a custom metric on the
JobManager?
Jeff
Hello Cliff,
You are right; indeed, defining custom metrics is not supported at the
moment.
I will file a JIRA issue so we can track this, and we will try to
prioritize this feature.
Meanwhile, there are a lot of metrics that StateFun defines, like
invocation rates etc. Perhaps you can find
We think Embedded Statefun is a nicer fit than Datastream for some problem
domains, but one thing we miss is support for custom metrics/counters. Is
there a way to access the Flink support? It looks like if we want custom
metrics we'll need to roll our own.
Do you mind sharing the code for how you register your metrics with the
TriggerContext? It could help us identify where the name collisions
come from. As far as I am aware, it should be fine to use the
TriggerContext for registering metrics.
Best,
Dawid
On 16/03/2021 17:35, Aleksander Sumowski
>>> Is there anything in the logs (ideally on debug)?
>>> Have you debugged the execution and followed the counter() calls all the
>>> way to the reporter?
>>> Do you only see JobManager metrics, or is there somewhere also something
>>> about the TaskManager?
Hi all,
I'd like to measure how many events arrive within the allowed lateness,
grouped by a particular feature of the event. We assume particular types of
events have way more late arrivals and would like to verify this. The natural
place to make the measurement would be our custom trigger within
>> Be also aware that there will be 2 reporter instances; one for the JM and
>> one for the TM.
>> To remedy this, I would recommend creating a factory that returns a
>> static reporter instance instead; overall this tends to be cleaner.
>> Alternatively, when using the testing harnesses, IIRC you can also
>> set a custom MetricGroup implementation.
>> On 3/16/2021 4:13 AM, Rion Williams wrote:
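The factory-returning-a-static-instance pattern described in this reply can be sketched as follows. This is a hand-rolled illustration, not Flink's actual `MetricReporterFactory` API: the `Reporter` class and method names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class StaticReporterFactory {

    /** Stand-in for a metric reporter; the real one would push to an external system. */
    public static class Reporter {
        private final List<String> reported = new ArrayList<>();

        public void report(String metricName, long value) {
            reported.add(metricName + "=" + value);
        }

        public List<String> getReported() {
            return reported;
        }
    }

    // A single shared instance, so the JM-side and TM-side instantiations
    // hand out the same reporter instead of two independent ones.
    private static final Reporter INSTANCE = new Reporter();

    public static Reporter createMetricReporter() {
        return INSTANCE;
    }
}
```

Because `createMetricReporter()` always hands back the same object, the JobManager-side and TaskManager-side instantiations share one reporter, which avoids the two-instance surprise.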
Hi all,
Recently, I was working on adding some custom metrics to a Flink job that
required the use of dynamic labels (i.e. capturing various counters that
were "slicable" by things like tenant / source, etc.).
I ended up handling it in a very naive fashion that would just keep a
dictionary of metrics that had already been
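The "dictionary of already-registered metrics" idea can be sketched with a map keyed by label value. This is a simplified stand-in: `Counter` below is a trivial class rather than Flink's `org.apache.flink.metrics.Counter`, and the map plays the role of remembering which `addGroup(...).counter(...)` registrations have already happened.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class DynamicCounters {

    /** Minimal stand-in for a metrics counter. */
    public static class Counter {
        private final AtomicLong count = new AtomicLong();
        public void inc() { count.incrementAndGet(); }
        public long getCount() { return count.get(); }
    }

    // One counter per label value (e.g. per tenant), created lazily on first
    // use so each distinct label is only registered once.
    private final Map<String, Counter> countersByTenant = new ConcurrentHashMap<>();

    public void incFor(String tenant) {
        countersByTenant.computeIfAbsent(tenant, t -> new Counter()).inc();
    }

    public long countFor(String tenant) {
        Counter c = countersByTenant.get(tenant);
        return c == null ? 0 : c.getCount();
    }
}
```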
er");
in order to register a custom counter. For more information please take a
look at [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/metrics.html#user-scope
Cheers,
Till
On Tue, Oct 6, 2020 at 1:05 AM Piper Piper wrote:
Hi
I have questions regarding making my own custom metrics.
When exactly is the class RichMapFunction’s map(value) method
called/invoked, and what “value” will be passed/expected as an argument to
this map(value) method?
Does the RichMapFunction’s map() method have any relation
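For what it's worth, the contract being asked about is: `map(value)` is invoked once for every record that flows through the operator, with that record as the argument, while `open()` runs once beforehand (which is where metrics are typically registered). The toy classes below imitate that lifecycle so the example runs without Flink on the classpath; none of these types are Flink's own.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class MapInvocationDemo {

    /** Toy version of a rich map function: open() once, then map() per record. */
    public static class CountingMapper {
        private AtomicLong recordsIn;   // stand-in for a registered Counter

        public void open() {
            // In Flink this is where getRuntimeContext().getMetricGroup()
            // would be used to register the counter.
            recordsIn = new AtomicLong();
        }

        public String map(String value) {
            recordsIn.incrementAndGet();   // one increment per invocation
            return value.toUpperCase();
        }

        public long getRecordsIn() {
            return recordsIn.get();
        }
    }

    public static long run(List<String> records, CountingMapper mapper) {
        mapper.open();                    // called once, before any record
        records.forEach(mapper::map);     // map() called once per record
        return mapper.getRecordsIn();
    }
}
```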
Hi Joris,
I don't think that the approach of "add methods in operator class code that
can be called from the main Flink program" will work.
The most efficient approach would be implementing a ProcessFunction that
counts in 1-min time buckets (using event-time semantics) and updates the
metrics.
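The 1-minute-bucket counting this reply suggests boils down to aligning each record's own timestamp to a minute boundary. A plain-Java sketch, not an actual Flink `ProcessFunction`; in a real job this state would live in keyed state, with event-time timers flushing finished buckets to the metrics:

```java
import java.util.Map;
import java.util.TreeMap;

public class MinuteBucketCounter {

    private static final long MINUTE_MS = 60_000L;

    // bucket start (epoch millis, aligned to the minute) -> event count
    private final Map<Long, Long> countsByBucket = new TreeMap<>();

    /** Assign the event to a bucket based on its embedded event timestamp. */
    public void add(long eventTimestampMs) {
        long bucketStart = (eventTimestampMs / MINUTE_MS) * MINUTE_MS;
        countsByBucket.merge(bucketStart, 1L, Long::sum);
    }

    public long countFor(long bucketStartMs) {
        return countsByBucket.getOrDefault(bucketStartMs, 0L);
    }
}
```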
Hi,
We want to collect metrics for stream processing, typically counts aggregated
over 1-minute buckets. However, we want these 1-minute boundaries determined by
timestamps within the data records. Flink metrics do not handle this so we want
to roll our own. How to proceed ? Some of our team
Hi,
I have integrated prometheus and flink for monitoring custom flink metrics.
But I am not able to get those metrics displayed on the Prometheus dashboard
or on the Grafana dashboard. I can see the default Flink metrics, but not the
custom ones. I have put the details here: SO question
>> FlinkKafkaConsumer in itself is a RichParallelSourceFunction, and you could
>> call the function below to register your metrics group:
>>
>> getRuntimeContext().getMetricGroup().addGroup("MyMetrics").counter("myCounter")
>>
>> Best
>> Yun Tang
>> From: David Magalhães <speeddra...@gmail.com>
>> Sent: Tuesday, January 21, 2020 3:45
>> To: user <user@flink.apache.org>
>> Subject: Custom Metrics outside RichFunctions
Hi, I want to create a custom metric that shows the number of messages that
couldn't be deserialized using a custom deserializer inside
FlinkKafkaConsumer.
Looking into the Metrics page (
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/metrics.html
)
that doesn't seem to be possible,
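One commonly suggested workaround is to do the counting inside the deserializer itself and expose the count from there. The sketch below is invented for illustration and does not use Flink's `DeserializationSchema` interface; it just shows the count-and-skip pattern:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

public class FailureCountingDeserializer {

    private final AtomicLong failures = new AtomicLong();   // stand-in for a Counter

    /**
     * Try to parse the raw bytes as a decimal number; count (and swallow)
     * anything that cannot be parsed, returning null for bad records.
     */
    public Long deserialize(byte[] raw) {
        try {
            return Long.parseLong(new String(raw, StandardCharsets.UTF_8).trim());
        } catch (NumberFormatException e) {
            failures.incrementAndGet();   // this is the custom metric we wanted
            return null;                  // skip the poisoned record
        }
    }

    public long getFailureCount() {
        return failures.get();
    }
}
```

Returning null for poisoned records mirrors the convention where a deserialization schema may skip a record; the counter is what would be surfaced as the custom metric.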
That was my first alternative actually :)
That works well for per window metrics. And though it fits perfectly for
smaller windows, it might not be frequent enough for larger window sizes.
Thanks,
Chirag
On Wednesday, 19 December, 2018, 4:15:41 PM IST, Dawid Wysakowicz
wrote:
Hi Chirag,
I am afraid you are right: you cannot access metrics from within an
AggregateFunction in a WindowedStream. You can, though, use the rich variant
of WindowFunction, which is invoked for every window with the result of the
AggregateFunction. Would that be enough for your use case to use
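The suggested shape, as a self-contained toy (not Flink's real `AggregateFunction`/`WindowFunction` types): the aggregate does the per-element work with no metric access, and the "rich" window-level function, invoked once per window with the aggregate's result, is where the metric updates happen:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class WindowMetricsDemo {

    /** Toy aggregate: sums elements; has no access to metrics, as in WindowedStream. */
    public static long aggregate(List<Long> windowElements) {
        return windowElements.stream().mapToLong(Long::longValue).sum();
    }

    /** Toy "rich" window function: called once per window with the aggregate result. */
    public static class RichWindowFunction {
        private final AtomicLong windowsFired = new AtomicLong();  // the custom metric

        public long apply(long aggregateResult) {
            windowsFired.incrementAndGet();  // metric update happens here, per window
            return aggregateResult;
        }

        public long getWindowsFired() {
            return windowsFired.get();
        }
    }
}
```

As the reply notes, this fits per-window metrics; for large windows the updates are correspondingly infrequent.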
by creating a Jira issue.
Thanks,
Fabian
On Mon, 17 Dec 2018 at 21:29, Gaurav Luthra wrote:
Hi,
I am writing a Flink job for aggregating events in a window.
I am trying to use the AggregateFunction implementation for this.
Now, since WindowedStream does not allow a RichAggregateFunction for
aggregation, I can't use the RuntimeContext to get the metric group.
I don't even see any other
Hi,
I need to know the way to implement custom metrics in my flink program.
Currently, I know we can create custom metrics with the help of
RuntimeContext.
But in my aggregate() I do not have RuntimeContext. I am using window
operator and applying aggregate() method on it. And I am passing
we want to have custom
> metrics. Unfortunately, the operator wrapping this assigner interface is
> hidden from the user API. What do you think about adding an optional API to
> give users the possibility to register custom metrics in the watermark
> assigner?
>
> This feature
Thank you Reza. I will try your repo first :)
Regards,
Averell
Hello Averell
Based on my experience, using out-of-the-box reporters & collectors needs a
little more effort!
Of course I haven't experienced all of them, but after reviewing some of
them I tried my own way:
writing custom reporters to push metrics into Elasticsearch (the available
component in our
Hi everyone,
I am trying to publish some counters and meters from my Flink job, to be
scraped by a Prometheus server. It seems to me that all the metrics that I
am publishing are done at the task level, so that my Prometheus server needs
to be configured to scrape from many targets (the number
On 11.07.2018 14:59, Gyula Fóra wrote:
Hi all,
I have run into the following problem and I want to double check whether
this is intended behaviour.
I have a custom metrics reporter that pushes things to Kafka (so it creates
a KafkaProducer in the open() method, etc.) for my streaming job.
Naturally, as my Flink job consumes from
Hi,
> I have a couple more questions related to metrics. I use the InfluxDB
> reporter to report Flink metrics and I see a lot of metrics are being
> reported. Is there a way to select only a subset of metrics that we need
> to monitor the application?
At this point it is up to either the reporter, or up
Thanks Piotr.
I have a couple more questions related to metrics. I use the InfluxDB
reporter to report Flink metrics and I see a lot of metrics are being
reported. Is there a way to select only a subset of metrics that we need to
monitor the application?
Also, is there a way to specify custom metrics
Hi,
Reporting once per 10 seconds shouldn’t create problems. Best to try it out.
Let us know if you get into some troubles :)
Piotrek
> On 11 Dec 2017, at 18:23, Navneeth Krishnan wrote:
Thanks Piotr.
Yes, passing the metric group should be sufficient. The subcomponents will
not be able to provide the list of metrics to register since the metrics
are created based on incoming data by tenant. Also I am planning to have
the metrics reported every 10 seconds and hope it shouldn't be
Hi,
I’m not sure if I completely understand your issue.
1.
- You don’t have to pass RuntimeContext, you can always pass just the
MetricGroup or ask your components/subclasses “what metrics do you want to
register” and register them at the top level.
- Reporting tens/hundreds/thousands of
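The first point above (pass just the MetricGroup, or ask subcomponents what they want to register), sketched with invented types: a minimal `MetricRegistry` interface stands in for Flink's `MetricGroup`, and the subcomponent registers what it needs through it without ever seeing a RuntimeContext:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class SubcomponentMetrics {

    /** Stand-in for the MetricGroup: just name -> counter registration. */
    public interface MetricRegistry {
        AtomicLong counter(String name);
    }

    /** A subcomponent that registers its own metrics through the registry it is given. */
    public static class TenantScorer {
        private final AtomicLong scored;

        public TenantScorer(MetricRegistry metrics) {
            // The subcomponent decides what it wants to register;
            // it never sees the RuntimeContext.
            this.scored = metrics.counter("scored");
        }

        public void score() {
            scored.incrementAndGet();
        }
    }

    // Top level: owns the registry and passes it down.
    private final Map<String, AtomicLong> counters = new HashMap<>();
    public final MetricRegistry registry =
        name -> counters.computeIfAbsent(name, n -> new AtomicLong());

    public long valueOf(String name) {
        return counters.get(name).get();
    }
}
```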
Hi,
I have a streaming pipeline running on Flink and I need to collect metrics
to identify how my algorithm is performing. The entire pipeline is
multi-tenanted and I also need metrics per tenant. Let's say there would be
around 20 metrics to be captured per tenant. I have the following ideas for
Does it throw an exception, log a warning, does the metric
not get registered at all, or does the value not change?
On 06.07.2017 08:10, wyphao.2007 wrote:
Hi, all
I want to know an element's latency before writing to Elasticsearch, so I
am registering a custom metric as follows:
class CustomElasticsearchSinkFunction extends
ElasticsearchSinkFunction[EventEntry] {
private var metricGroup: Option[MetricGroup] = None
private var latency: Long = _
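The latency measurement itself is independent of Elasticsearch: it is the difference between "now" and the element's embedded timestamp, captured just before the write. A Java sketch with invented names (in the real sink the value would back a registered gauge rather than a getter; the injectable clock exists only to make the example testable):

```java
import java.util.function.LongSupplier;

public class SinkLatencyTracker {

    private final LongSupplier clock;   // injectable "current time" for testability
    private long lastLatencyMs;         // in Flink this would back a Gauge<Long>

    public SinkLatencyTracker(LongSupplier clock) {
        this.clock = clock;
    }

    /** Called right before handing the element to the sink client. */
    public void recordElement(long eventTimestampMs) {
        lastLatencyMs = clock.getAsLong() - eventTimestampMs;
    }

    public long getLastLatencyMs() {
        return lastLatencyMs;
    }
}
```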