Hi Anil,

Since you are never throwing an exception out of your process function,
your Flink job is not restarting because of a failure (the job would only
restart if an exception escaped your user code).

If you can rule out a job restart (check the logs for that), then I assume
your data contains duplicates or something is wrong in your logic.
The only case where Flink re-reads data is on recovery after a failure.
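
For illustration, here is a minimal sketch of that catch-and-route pattern,
written as plain Java rather than against Flink's API (the class and method
names below are made up for the example): as long as the validation
exception is caught inside the user code, nothing propagates to the
framework, so no restart is triggered and the invalid records simply land
in the side output.

```java
import java.util.ArrayList;
import java.util.List;

public class SideOutputSketch {
    // Hypothetical stand-in for a custom validation exception.
    static class InvalidRecordException extends Exception {
        InvalidRecordException(String msg) { super(msg); }
    }

    // Routes each record to a main or side output. The exception is
    // caught inside the user code, so it never escapes to the caller --
    // analogous to catching it inside a process function so the job
    // does not fail and restart.
    static List<List<String>> route(List<String> records) {
        List<String> mainOut = new ArrayList<>();
        List<String> sideOut = new ArrayList<>();
        for (String r : records) {
            try {
                if (r.startsWith("bad")) {
                    throw new InvalidRecordException(r);
                }
                mainOut.add(r);
            } catch (InvalidRecordException e) {
                sideOut.add(r); // invalid record captured as side output
            }
        }
        return List.of(mainOut, sideOut);
    }

    public static void main(String[] args) {
        // prints [[ok-1, ok-2], [bad-1]]
        System.out.println(route(List.of("ok-1", "bad-1", "ok-2")));
    }
}
```

If each record really is processed through such a path exactly once and
the exception never escapes, there is no framework-level retry; seeing the
same messages repeatedly would then point to duplicates in the input or a
restart visible in the logs.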

I hope this helps. If not, it would be good if you could share a minimal
example to reproduce the problem.

Best,
Robert


On Tue, Mar 17, 2020 at 7:36 PM Anil Alfons K <anilalf...@gmail.com> wrote:

> Hi Community,
> I am reading data from Kafka with a FlinkKafkaConsumer, followed by some
> application-specific logic in a process function. If I receive any
> invalid data, I throw a custom exception and handle it within the process
> function itself, emitting the invalid data as a side output. But the
> problem is that Flink tries to read the same invalid messages again and
> again, several times.
>
> Can anyone let me know how error/exception handling can be done without
> the Flink job breaking?
>
> The plan is to process all the events only once through the process
> function without any retry.
>
> Regards
> Anil
>