Hello,

Thank you. I think this is a problem caused by the Kafka configuration, not
Flink. I'll take a look and let you know if there turns out to be an issue in
Flink.
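
For reference, here is a minimal sketch of the setup as I understand it from
the thread below (topic names, addresses, and property values are illustrative
assumptions, and the exact FlinkKafkaProducer constructor varies slightly
between Flink versions). With EXACTLY_ONCE, two Kafka-side settings are worth
checking: the producer's 'transaction.timeout.ms', which must not exceed the
broker's 'transaction.max.timeout.ms' (15 minutes by default), and the
downstream consumer's 'isolation.level', since with the default
'read_uncommitted' a consumer also sees records from aborted or restarted
transactions, which can look like the same message arriving repeatedly:

```java
// Sketch only: illustrative values, Flink 1.14-era FlinkKafkaProducer API.
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

// Transactions commit only when a checkpoint completes, so EXACTLY_ONCE
// needs checkpointing enabled to make progress at all.
env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

Properties producerProps = new Properties();
producerProps.setProperty("bootstrap.servers", "kafka-b:9092"); // assumed
// Must be <= the broker's transaction.max.timeout.ms (15 min by default),
// or the EXACTLY_ONCE producer fails to initialize its transactions.
producerProps.setProperty("transaction.timeout.ms", "600000");

FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
        "topic-b",                        // assumed target topic
        new SimpleStringSchema(),
        producerProps,
        null,                             // default partitioner
        FlinkKafkaProducer.Semantic.EXACTLY_ONCE,
        FlinkKafkaProducer.DEFAULT_KAFKA_PRODUCERS_POOL_SIZE);

// Any consumer reading topic-b should use read_committed; otherwise it
// also sees records from uncommitted/aborted transactions.
Properties consumerProps = new Properties();
consumerProps.setProperty("isolation.level", "read_committed");
```
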

BR,
Jung


On Tue, Aug 15, 2023 at 9:40 PM, Hector Rios <oj.r...@gmail.com> wrote:

> Hi there
>
> It would be helpful if you could include the code for your pipeline. One
> suggestion: can you disable the "EXACTLY_ONCE" semantic on the producer?
> Using EXACTLY_ONCE leverages Kafka transactions and thus adds overhead.
> I would disable it to see whether you still get the same behavior.
>
> Also, can you look in the Flink UI for this job and see if checkpoints are
> in fact being taken?
>
> Hope that helps
> -Hector
>
> On Tue, Aug 15, 2023 at 11:36 AM Dennis Jung <inylov...@gmail.com> wrote:
>
>> Sorry, I forgot to put a title on my last mail, so I'm sending it again.
>>
>> On Tue, Aug 15, 2023 at 6:27 PM, Dennis Jung <inylov...@gmail.com> wrote:
>>
>>> (this is an issue with Flink 1.14)
>>>
>>> Hello,
>>>
>>> I've set up the following pipeline to consume messages from Kafka and
>>> produce them to another Kafka broker. For the producer, I've configured
>>> `Semantics.EXACTLY_ONCE` to send messages exactly once (and also set
>>> 'StreamExecutionEnvironment::enableCheckpointing' to
>>> 'CheckpointingMode.EXACTLY_ONCE').
>>>
>>>
>>> --------------------------------------------------------------------------------------------
>>> kafka A -> FlinkKafkaConsumer -> ... -> FlinkKafkaProducer -> kafka B
>>>
>>> --------------------------------------------------------------------------------------------
>>>
>>> However, even though I produced only one message to 'kafka A', the
>>> consumer consumes the same message repeatedly.
>>>
>>> When I remove the `FlinkKafkaProducer` part and make the job read-only,
>>> this does not happen.
>>>
>>> Could someone suggest a way to debug or fix this?
>>>
>>> Thank you.
>>>
>>