Hi,

I think this is unexpected. The generated transactional ids should not
clash.
Looking at the FlinkKafkaProducer code, it seems like the id generation
is only a function of the subtask id of the FlinkKafkaProducer, which
could be the same for the producers of two different jobs.
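To illustrate the suspected problem (a hypothetical sketch, not the
actual Flink code): if the generated id depends only on values that are
identical across the two jobs, both jobs end up using the same
transactional ids.

    // Hypothetical sketch, NOT the real FlinkKafkaProducer code: if the
    // id is derived only from per-job-local values such as the subtask
    // index, two jobs running the same jar generate identical ids, and
    // the broker fences the older producer (ProducerFencedException).
    static String transactionalId(int subtaskIndex, long counter) {
        return "flink-kafka-producer-" + subtaskIndex + "-" + counter;
    }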

I'm not completely certain about this, though. Piotr (in CC) might have
more insight here.

Cheers,
Gordon

On Wed, Feb 13, 2019 at 9:15 AM Slotterback, Chris <
chris_slotterb...@comcast.com> wrote:

> Hey all,
>
>
>
> I am running into an issue where, if I run two Flink jobs (same jar,
> different configuration) that produce to different Kafka topics on the
> same broker using the Flink 1.7 FlinkKafkaProducer with EXACTLY_ONCE
> semantics, both jobs go into a checkpoint exception loop every 15 seconds
> or so:
>
>
>
> Caused by: org.apache.kafka.common.errors.ProducerFencedException:
> Producer attempted an operation with an old epoch. Either there is a newer
> producer with the same transactionalId, or the producer's transaction has
> been expired by the broker.
>
>
>
> As soon as one of the jobs is cancelled, things go back to normal for the
> other job. I tried manually setting the TRANSACTIONAL_ID_CONFIG property
> on the producer to be unique for each job. My producer transaction
> timeout is set to 5 minutes, and Flink checkpointing is set to 1 minute
> (a simplified sketch of the setup is below). Is there some way to prevent
> these jobs from tripping over each other while retaining exactly-once
> processing?
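>
> A simplified sketch of the setup (broker address, topic name, and the
> transactional id value are illustrative; "stream" stands in for the
> job's actual pipeline):
>
>     import java.util.Properties;
>     import org.apache.flink.api.common.serialization.SimpleStringSchema;
>     import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>     import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
>     import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;
>     import org.apache.kafka.clients.producer.ProducerConfig;
>
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>     env.enableCheckpointing(60_000); // Flink checkpoints every 1 minute
>
>     Properties props = new Properties();
>     props.setProperty("bootstrap.servers", "broker:9092"); // same broker for both jobs
>     props.setProperty("transaction.timeout.ms", "300000"); // 5 minute transaction timeout
>     // set to a unique value per job, as described above:
>     props.setProperty(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "job-a-tx");
>
>     // each job writes to its own topic ("topic-a" / "topic-b")
>     stream.addSink(new FlinkKafkaProducer<>(
>             "topic-a",
>             new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
>             props,
>             FlinkKafkaProducer.Semantic.EXACTLY_ONCE));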
>
