1. Like a regular Kafka client, so it depends on how you configure it.
2. Yes, the reconnect attempts will show up in the logs.
3. It depends on the failure of course, but you could create something like
a Dead Letter Queue in case deserialization fails for your incoming messages.
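To make the Dead Letter Queue idea concrete, here is a minimal, Flink-free sketch in plain Java. The class and method names (`DlqSketch`, `deserializeOrPark`) are made up for illustration; in a real job you would put this catch-and-park logic inside a custom deserializer (e.g. an implementation of `KafkaRecordDeserializationSchema` for `KafkaSource`) and write the parked records to a separate Kafka topic or sink instead of an in-memory list:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: instead of letting a deserialization failure kill the
// job, route the bad record to a side channel (the "dead letter queue")
// and keep processing.
public class DlqSketch {
    static final List<byte[]> deadLetters = new ArrayList<>();

    // Try to deserialize; on failure, park the raw bytes in the DLQ
    // and return null so the caller can skip the record.
    static Long deserializeOrPark(byte[] raw) {
        try {
            return Long.parseLong(new String(raw));
        } catch (NumberFormatException e) {
            deadLetters.add(raw); // survive the bad record
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(deserializeOrPark("42".getBytes()));           // 42
        System.out.println(deserializeOrPark("not-a-number".getBytes())); // null
        System.out.println(deadLetters.size());                           // 1
    }
}
```

As for point 1: the reconnect behavior itself is governed by the underlying Kafka consumer client's settings (for example `reconnect.backoff.ms` and `reconnect.backoff.max.ms`), which you can pass through via `KafkaSource.builder().setProperty(...)`.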

Best regards,

Martijn

On Sat, Jun 10, 2023 at 2:03 PM Anirban Dutta Gupta <
anir...@indicussoftware.com> wrote:

> Hello,
>
> Thanks for the guidance. We will surely think of moving to a newer version
> of Flink.
> Just a few follow-up questions about using KafkaSource (sorry if I am being
> naive in my questions):
>
> 1. How does KafkaSource handle disruptions with the Kafka broker? Will it
> keep trying to connect and subscribe to the broker indefinitely, or will
> it fail the Flink source after a certain number of retries?
>
> 2. Will there be some log output in the Flink logs while the KafkaSource
> is retrying the connection to the broker after a disruption?
>
> 3. In case of the source failing, is there a way in the Flink program
> using the KafkaSource to detect the error and add some error-handling
> mechanism, e.g. sending an alert mail to the stakeholders in case the
> source fails completely? (Something similar to
> "ActionRequestFailureHandler" for ElasticsearchSink.)
>
> Many thanks in advance,
> Anirban
>
> On 09-06-2023 20:01, Martijn Visser wrote:
>
> Hi,
>
> This consumer should not be used. It only exists in really old and no
> longer supported Flink versions. You should really upgrade to a newer
> version of Flink and use the KafkaSource.
>
> Best regards,
>
> Martijn
>
> On Fri, Jun 9, 2023 at 11:05 AM Anirban Dutta Gupta <
> anir...@indicussoftware.com> wrote:
>
>> Hello,
>>
>> We are using "FlinkKafkaConsumer011" as a Kafka source consumer for
>> Flink. Please guide us on how to implement an error-handling mechanism
>> for the following:
>> 1. If the subscription to the Kafka topic gets lost or the Kafka
>> connection gets disconnected:
>> in this case, is there any mechanism for re-subscribing to the Kafka
>> topic automatically in the program?
>>
>> 2. If there is any error in the fetch-records step of the consumer.
>>
>> Thanks and Regards,
>> Anirban
>>
>
>