[ 
https://issues.apache.org/jira/browse/FLINK-24542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17430043#comment-17430043
 ] 

Qingsheng Ren commented on FLINK-24542:
---------------------------------------

Thanks [~zlzhang0122]! I checked [the documentation of 
Kafka|https://kafka.apache.org/documentation/#consumer_monitoring] but didn't 
find a metric named "freshness". Judging from the blog post, I assume this is a 
derived metric rather than a standard metric exposed by the Kafka client. I 
think monitoring Kafka out of band with other tools such as Burrow would be a 
better choice.
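
For reference, here is a rough sketch of how such a derived "freshness" value could be computed in user code, assuming freshness is defined as the wall-clock delay between now and the timestamp of the last record handed to the job (the time-based lag described in the blog post, as opposed to offset-based lag). The class and metric names below (FreshnessTrackingDeserializationSchema, "recordFreshnessMs") are illustrative only and are not part of the connector or of this proposal.

{code:java}
import java.io.IOException;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.consumer.ConsumerRecord;

/**
 * Illustrative wrapper that remembers the timestamp of the last record it saw
 * and exposes the difference to the current wall-clock time as a gauge. The
 * name "recordFreshnessMs" is a placeholder, not an agreed-on metric name.
 */
public class FreshnessTrackingDeserializationSchema<T>
        implements KafkaRecordDeserializationSchema<T> {

    private final KafkaRecordDeserializationSchema<T> inner;
    private volatile long lastRecordTimestamp = Long.MIN_VALUE;

    public FreshnessTrackingDeserializationSchema(KafkaRecordDeserializationSchema<T> inner) {
        this.inner = inner;
    }

    @Override
    public void open(DeserializationSchema.InitializationContext context) throws Exception {
        inner.open(context);
        // Derived metric: milliseconds between "now" and the last record's timestamp.
        context.getMetricGroup().gauge(
                "recordFreshnessMs",
                (Gauge<Long>) () -> lastRecordTimestamp == Long.MIN_VALUE
                        ? -1L
                        : System.currentTimeMillis() - lastRecordTimestamp);
    }

    @Override
    public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<T> out)
            throws IOException {
        // Kafka record timestamp (create time or log-append time, depending on topic config).
        lastRecordTimestamp = record.timestamp();
        inner.deserialize(record, out);
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return inner.getProducedType();
    }
}
{code}

Wrapping the existing deserialization schema like this keeps the gauge purely derived from record timestamps and the task's wall clock; whether such a derived value belongs inside the connector or in an external tool like Burrow is exactly the question raised above.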

> Expose the freshness metrics for kafka connector
> ------------------------------------------------
>
>                 Key: FLINK-24542
>                 URL: https://issues.apache.org/jira/browse/FLINK-24542
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka
>    Affects Versions: 1.12.2, 1.14.0, 1.13.1
>            Reporter: zlzhang0122
>            Priority: Major
>             Fix For: 1.15.0
>
>
> When we start a Flink job to consume from Apache Kafka, we usually rely on 
> offsetLag, which can be calculated as current-offsets minus committed-offsets. 
> However, the offsetLag value is sometimes hard to interpret, and it is 
> difficult to judge whether it is normal or not. A new metric, freshness, has 
> been proposed (see 
> [a-guide-to-kafka-consumer-freshness|https://www.jesseyates.com/2019/11/04/kafka-consumer-freshness-a-guide.html?trk=article_share_wechat&from=timeline&isappinstalled=0]).
> We could also expose this freshness metric in the Kafka connector to improve 
> the user experience. From the freshness metric, users can easily tell whether 
> Kafka messages are backlogged and need to be dealt with.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
