Hi, I'm using a RichMapFunction to enrich a stream read from a Kafka topic and write the enriched records back to HBase. When there is a failure on the HBase side, I see in Flink's log that the HBase client retries the lookup several times (I believe `hbase.client.retries.number` times), and once the retry count is exceeded the record is simply lost: the Flink job moves on to the next record in the stream. So the question is: how can I avoid this data loss? Just increasing `hbase.client.retries.number` doesn't seem like an ideal solution.
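One approach I've been considering, sketched below under assumptions: instead of letting the exhausted lookup be swallowed (so the record is dropped), rethrow the failure from the map function. With checkpointing enabled, a thrown exception fails the task and Flink restarts from the last checkpoint, replaying the record instead of losing it. The `RetryingLookup` wrapper and the lambda standing in for the HBase `Get` are hypothetical, not real Flink or HBase API:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Sketch: wrap the enrichment lookup so that exhausted retries surface as
// an exception instead of a dropped record. Called from inside a Flink
// RichMapFunction.map(), the thrown exception fails the task; with
// checkpointing enabled, Flink restarts from the last checkpoint and the
// record is replayed rather than lost. (RetryingLookup is a hypothetical
// helper, not part of Flink or the HBase client.)
public class RetryingLookup<K, V> {
    private final Function<K, V> lookup; // stands in for an HBase Get
    private final int maxAttempts;

    public RetryingLookup(Function<K, V> lookup, int maxAttempts) {
        this.lookup = lookup;
        this.maxAttempts = maxAttempts;
    }

    public V get(K key) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return lookup.apply(key);
            } catch (RuntimeException e) {
                last = e; // remember the failure, try again
            }
        }
        // Propagate instead of returning null: inside map() this fails
        // the task so the record is replayed after recovery.
        throw new RuntimeException(
                "lookup failed after " + maxAttempts + " attempts", last);
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // Simulated flaky backend: fails twice, succeeds on attempt 3.
        RetryingLookup<String, String> r = new RetryingLookup<>(k -> {
            if (calls.incrementAndGet() < 3)
                throw new RuntimeException("HBase unavailable");
            return k + "-enriched";
        }, 5);
        System.out.println(r.get("row1")); // prints "row1-enriched"
    }
}
```

The point is that the bounded client-side retry stays, but exhausting it becomes a job failure (recoverable via checkpoint) rather than silent record loss.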
