Hi,

I’m not sure if I completely understand your issue.

1.
- You don’t have to pass the RuntimeContext; you can pass just the 
MetricGroup, or ask your components/subclasses “what metrics do you want to 
register” and register them at the top level (see the first sketch below).
- Reporting tens/hundreds/thousands of metrics shouldn’t be an issue for Flink, 
as long as you have a reasonable reporting interval. However, keep in mind that 
Flink only reports your metrics; you still need something to 
read/handle/process/aggregate them.
2.
I don’t think that reporting per node/JVM is possible with Flink’s metric 
system. For that you would need some other solution, such as reporting your 
metrics via JMX by directly registering MBeans from your code (see the second 
sketch below).
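
As a rough sketch of the first point (the class names, tenant id and metric 
name are made up for illustration), a component can take just a MetricGroup in 
its constructor and register whatever it needs there, while the operator keeps 
the RuntimeContext to itself:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

// Hypothetical per-tenant component: it never sees the RuntimeContext,
// only the MetricGroup it was handed.
class TenantScorer {
    private final Counter processed;

    TenantScorer(MetricGroup tenantGroup) {
        // Register whatever this component needs on the group it was given.
        this.processed = tenantGroup.counter("recordsProcessed");
    }

    double score(String record) {
        processed.inc();
        return record.length(); // placeholder logic
    }
}

public class ScoringFunction extends RichMapFunction<String, Double> {
    private transient TenantScorer scorer;

    @Override
    public void open(Configuration parameters) {
        // The RuntimeContext stays in the operator; subcomponents only get a (sub-)group.
        MetricGroup group = getRuntimeContext()
                .getMetricGroup()
                .addGroup("tenant")
                .addGroup("tenantA");
        scorer = new TenantScorer(group);
    }

    @Override
    public Double map(String value) {
        return scorer.score(value);
    }
}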
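
And a minimal sketch of the JMX idea from point 2, assuming a hand-rolled 
singleton and a made-up ObjectName/domain. The MBean is registered once per 
JVM against the platform MBeanServer, so an external collector scraping JMX 
sees it once per TaskManager process, regardless of how many subtasks run in it:

import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// The management interface (would normally live in its own file).
interface TenantMetricsMXBean {
    long getRecordsProcessed();
}

// One instance per JVM, registered lazily by whichever operator instance gets there first.
class TenantMetrics implements TenantMetricsMXBean {
    private static volatile TenantMetrics instance;
    private final AtomicLong recordsProcessed = new AtomicLong();

    static TenantMetrics getOrRegister() {
        if (instance == null) {
            synchronized (TenantMetrics.class) {
                if (instance == null) {
                    TenantMetrics m = new TenantMetrics();
                    try {
                        // Registering on the platform MBeanServer makes this visible once per JVM.
                        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                        server.registerMBean(m,
                                new ObjectName("com.example:type=TenantMetrics,tenant=tenantA"));
                    } catch (Exception e) {
                        throw new RuntimeException("Could not register MBean", e);
                    }
                    instance = m;
                }
            }
        }
        return instance;
    }

    void recordProcessed() {
        recordsProcessed.incrementAndGet();
    }

    @Override
    public long getRecordsProcessed() {
        return recordsProcessed.get();
    }
}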

Piotrek

> On 10 Dec 2017, at 18:51, Navneeth Krishnan <reachnavnee...@gmail.com> wrote:
> 
> Hi,
> 
> I have a streaming pipeline running on Flink and I need to collect metrics to 
> identify how my algorithm is performing. The entire pipeline is 
> multi-tenanted and I also need metrics per tenant. Let's say there would be 
> around 20 metrics to be captured per tenant. I have the following ideas for 
> implementation, but any suggestions on which one might be better will help.
> 
> 1. Use the Flink metric group and register a group per tenant at the operator 
> level. The disadvantage of this approach for me is that I need the RuntimeContext 
> parameter to register a metric, and I have various subclasses to which I need 
> to pass this object to limit the metric scope within the operator. Also, there 
> will be too many metrics reported if there is a higher number of subtasks. 
> How is everyone accessing Flink state/metrics from other classes where you 
> don't have access to the RuntimeContext?
> 
> 2. Use a custom singleton metric registry to add and send these metrics using a 
> custom sink. Instead of using the Flink metric group to collect metrics per 
> operator/subtask, collect per JVM and use an InfluxDB sink to send the metric 
> data. What I'm not sure about in this case is how to collect only once per node/JVM.
> 
> Thanks a bunch in advance.
