Thank you, Ufuk & Shannon. Since my Kafka consumer is
UnboundedKafkaSource from Beam, I'm not sure if the records-lag-max metric is
exposed. Let me research further.
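If records-lag-max turns out not to be exposed through the Beam source, consumer lag can also be derived externally by comparing the broker's log-end offsets with the offsets the pipeline has committed. A minimal sketch of that arithmetic (the function name and the offset values here are illustrative, not part of any Beam or Kafka API):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag: log-end offset minus last committed offset.
    Partitions with no committed offset count as lagging from offset 0."""
    return {p: end_offsets[p] - committed_offsets.get(p, 0)
            for p in end_offsets}

# Hypothetical snapshot: what the broker holds vs. what the job committed.
end = {0: 1500, 1: 980}
committed = {0: 1500, 1: 700}
lag = consumer_lag(end, committed)
max_lag = max(lag.values())  # rough stand-in for records-lag-max
```

A monitoring job could poll these two offset sets periodically and alert when the maximum lag keeps growing, which is exactly the "not keeping up" condition described below.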
Thanks,
Jins George
On 01/08/2018 10:11 AM, Shannon Carey wrote:
Right, backpressure only measures backpressure on the inside of the Flink job.
I.e., between Flink tasks.
Therefore, it’s up to you to monitor whether your Flink job is “keeping up”
with the source stream. If you’re using Kafka, there’s a metric that the
consumer library makes available. For
Hey Jins,
our current back pressure tracking mechanism does not work with Kafka
sources. To gather back pressure indicators, we sample the main task
thread of a subtask. For most tasks, this is the thread that emits
records downstream (e.g. if you have a map function) and everything
works as
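The sampling idea described above can be illustrated with a toy sketch: repeatedly capture the task thread's stack trace and report the fraction of samples in which the thread was blocked waiting for an output buffer. Everything here (the frame names, the marker string) is made up for illustration; it is not Flink's actual implementation:

```python
def backpressure_ratio(stack_samples, blocked_marker="requestBufferBlocking"):
    """Fraction of sampled stack traces in which the task thread was
    blocked waiting for an output buffer. Each sample is a list of
    frame names, innermost last."""
    if not stack_samples:
        return 0.0
    blocked = sum(1 for frames in stack_samples if blocked_marker in frames)
    return blocked / len(stack_samples)

# Three hypothetical samples of one subtask's main thread:
samples = [
    ["run", "processElement", "emit", "requestBufferBlocking"],
    ["run", "processElement", "map"],
    ["run", "processElement", "emit", "requestBufferBlocking"],
]
ratio = backpressure_ratio(samples)  # 2 of 3 samples were blocked
```

This also shows why the mechanism misses a Kafka source: the sampled main thread of a source task is not the thread doing the emitting, so its samples rarely land on the blocking call.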
I have a Beam pipeline consuming records from Kafka, doing some
transformations, and writing them to HBase. I faced an issue in which
records were being written to HBase at a slower rate than messages were
arriving in Kafka, due to a temporary surge in the incoming traffic.
From the Flink UI, if I check