Hey all,

I am running into an issue where, if I run 2 Flink jobs (same jar, different
configuration) that produce to different Kafka topics on the same broker,
using the Flink 1.7 FlinkKafkaProducer configured with EXACTLY_ONCE semantics,
both jobs go into a checkpoint exception loop every 15 seconds or so:

Caused by: org.apache.kafka.common.errors.ProducerFencedException: Producer 
attempted an operation with an old epoch. Either there is a newer producer with 
the same transactionalId, or the producer's transaction has been expired by the 
broker.

As soon as one of the jobs is cancelled, the other goes back to normal. I
tried manually setting the producer's TRANSACTIONAL_ID_CONFIG to a unique
value for each job. The producer transaction timeout is set to 5 minutes, and
Flink checkpointing to 1 minute; a sketch of the setup is below. Is there some
way to prevent these jobs from tripping over each other in execution while
retaining exactly-once processing?
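
For reference, roughly how each job wires up its producer. The broker
address, topic, source, and transactional ID value are placeholders; the
second job uses the same code with a different topic and ID:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;
import org.apache.kafka.clients.producer.ProducerConfig;

public class JobA {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);  // checkpoint every 1 minute

        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "broker:9092");                                   // placeholder
        // 5-minute transaction timeout (must stay within the broker's
        // transaction.max.timeout.ms)
        props.setProperty(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, "300000");
        // set to a unique value per job, attempting to keep the two jobs'
        // transactions apart
        props.setProperty(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "job-a");

        DataStream<String> records =
                env.socketTextStream("localhost", 9999);          // placeholder source

        records.addSink(new FlinkKafkaProducer<>(
                "topic-a",                                        // placeholder topic
                new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE));

        env.execute("job-a");
    }
}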
