[…]ucible in Flink 1.15.0, but not in Flink 1.14.4.

Kind regards,
Christian
From: Alexander Fedulov
Date: Monday, 13 June 2022, 23:42
To: Christian Lorenz
Cc: "user@flink.apache.org"
Subject: Re: Kafka Consumer commit error
Hi Christian,
thanks for the reply. We use AT_
Date: Monday, 13 June 2022, 13:06
To: "user@flink.apache.org"
Cc: Christian Lorenz
Subject: Re: Kafka Consumer commit error
[…] to verify this behavior, as we also use
flink-avro-confluent-registry, which makes it harder to understand the root of
the issue.

Best regards,
Christian
From: Martijn Visser
Date: Monday, 13 June 2022, 12:05
To: Christian Lorenz
Cc: "user@flink.apache.org"
Subject: Re: Kafka Consumer commit error
Hi Christian,
you should check if the exceptions that you see after the broker is back
from maintenance are the same as the ones you posted here. If you are using
EXACTLY_ONCE, it could be that the later errors are caused by Kafka purging
transactions that Flink attempts to commit [1].
Best,
Alex
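The transaction purging mentioned in [1] usually comes down to two timeouts: the producer-side `transaction.timeout.ms` and the broker-side cap `transaction.max.timeout.ms` (Kafka's default cap is 15 minutes). A transaction left open past its timeout is aborted by the broker, so after an outage Flink may try to commit a transaction that no longer exists. As a minimal sketch (the helper class and method are hypothetical; only the Kafka property names are real), the constraint can be expressed like this:

```java
import java.util.Properties;

public class TxTimeoutCheck {
    // Kafka broker default for transaction.max.timeout.ms (15 minutes).
    static final long BROKER_MAX_TX_TIMEOUT_MS = 15 * 60 * 1000L;

    /**
     * Hypothetical sanity check: the producer's transaction.timeout.ms must
     * not exceed the broker's transaction.max.timeout.ms, and should
     * comfortably exceed the checkpoint interval, or the broker may abort
     * ("purge") transactions that Flink later tries to commit on recovery.
     */
    static boolean isSane(Properties producerProps, long checkpointIntervalMs) {
        long txTimeout = Long.parseLong(
                producerProps.getProperty("transaction.timeout.ms", "60000"));
        return txTimeout <= BROKER_MAX_TX_TIMEOUT_MS
                && txTimeout > checkpointIntervalMs;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("transaction.timeout.ms", "900000"); // 15 min
        System.out.println(isSane(props, 60_000)); // prints "true"
    }
}
```

The same properties would be passed to the Kafka producer (or Flink's Kafka sink) directly; the check only makes the timeout relationship explicit.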
Hi Christian,
I would expect that after the broker comes back up and recovers completely,
these error messages would disappear automagically. It should not require a
restart (only time). Flink doesn't rely on Kafka's offset-commit mechanism
for fault tolerance.
Best regards,
Martijn
On Wed, 8 Jun […]
Hi,
we have some issues with a job using the flink-sql-connector-kafka (Flink
1.15.0, standalone cluster). If one broker is restarted, e.g. for maintenance
(replication-factor=2), the taskmanagers executing the job constantly log
errors on each checkpoint creation:
Failed to commit cons
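If these warnings are the KafkaSource's best-effort offset commit failing, they are noisy but harmless, since (as noted earlier in the thread) Flink restores positions from its own checkpoints. A minimal sketch, assuming Flink's `commit.offsets.on.checkpoint` KafkaSource option, of the extra consumer properties one could pass to the connector to silence the commit attempts (the helper class is hypothetical):

```java
import java.util.Properties;

public class KafkaSourceProps {
    /**
     * Sketch: extra properties one could pass to
     * KafkaSource.builder().setProperties(...) so the source stops
     * committing offsets back to Kafka on each checkpoint. The commit
     * back to Kafka exists only for lag monitoring, so disabling it
     * silences the warnings without affecting fault tolerance.
     */
    static Properties withoutOffsetCommit() {
        Properties props = new Properties();
        props.setProperty("commit.offsets.on.checkpoint", "false");
        return props;
    }

    public static void main(String[] args) {
        // prints "false"
        System.out.println(
                withoutOffsetCommit().getProperty("commit.offsets.on.checkpoint"));
    }
}
```

The trade-off is that external tools watching consumer-group lag in Kafka will no longer see progress for this group.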