Hi all,

My jobs consume data from Kafka for processing. Unfortunately, the job
receives unexpected data from time to time, and I'm trying to find the
root cause of the issue.

Ideally, I'd like a way to know when the job has failed due to an
exception, and then log to a file the last message it was consuming at
the time, to help track down the offending record.  Is this possible
within Flink?
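
To make it concrete, the closest I've come up with is wrapping my
deserialization so the raw payload is logged before a parse failure
takes the job down.  A rough sketch (MyEvent and MyEvent.parse() are
just placeholders for my actual record type and parser):

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingEventSchema extends AbstractDeserializationSchema<MyEvent> {

    private static final Logger LOG =
            LoggerFactory.getLogger(LoggingEventSchema.class);

    @Override
    public MyEvent deserialize(byte[] message) throws IOException {
        try {
            return MyEvent.parse(message);
        } catch (Exception e) {
            // Log the raw payload of the offending record before the
            // exception propagates and fails the job.
            LOG.error("Could not deserialize Kafka record: {}",
                    new String(message, StandardCharsets.UTF_8), e);
            throw new IOException("Bad Kafka record", e);
        }
    }
}

But that only covers failures in deserialization itself.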

Thinking about this more, it may not be the consumed message itself
that killed the job; a transformation within the job could have failed
in a downstream operator.  In that case, is there a way to log to a
file the element an operator was processing when the exception was
thrown?
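
The only approach I can think of here is wrapping each transformation
so the input element is logged before the exception is rethrown,
roughly like this (LoggingMap is a made-up name, and the delegate is
whatever MapFunction I already have):

import org.apache.flink.api.common.functions.MapFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingMap<IN, OUT> implements MapFunction<IN, OUT> {

    private static final Logger LOG =
            LoggerFactory.getLogger(LoggingMap.class);

    private final MapFunction<IN, OUT> delegate;

    public LoggingMap(MapFunction<IN, OUT> delegate) {
        this.delegate = delegate;
    }

    @Override
    public OUT map(IN value) throws Exception {
        try {
            return delegate.map(value);
        } catch (Exception e) {
            // Log the exact element that caused the failure, then
            // rethrow so the job still fails/restarts as before.
            LOG.error("Transformation failed on element: {}", value, e);
            throw e;
        }
    }
}

used as stream.map(new LoggingMap<>(new MyTransform())).  But wrapping
every operator by hand feels heavy-handed, so I'm hoping there is a
better built-in way.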


Thanks in advance!



