Thanks for the approach, but a few updates w.r.t. the query I sent:
a Parquet file is a binary file, so when I said "corrupt record", I meant
that the complete file itself can't be processed, right?
So Flink is not counting corrupt records, but rather corrupt files (or
splits)?
From: Ken Krugler
Sent: 16
Hi,
I was wondering if it would be safe for me to make use of
reinterpretAsKeyedStream on a Kafka source in order to have an
"embarrassingly parallel" job without any .keyBy().
My Kafka topic is partitioned by the same id that I then send through a
session window operator. Therefore there's in th
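The safety concern in the question above is that reinterpretAsKeyedStream assumes the incoming stream is already partitioned exactly as Flink's own keyBy() would partition it, while a Kafka producer generally uses a different hash. A minimal sketch of why that matters, using two hypothetical hash functions as stand-ins (these are NOT the real Kafka murmur2 or Flink key-group hashes):

```python
# Toy sketch: two different hash-based partitioners can assign the same
# key to different partitions. Both hash functions below are hypothetical
# stand-ins chosen only for illustration.

NUM_PARTITIONS = 4

def producer_partition(key: str) -> int:
    """Stand-in for the partitioner used when writing to Kafka."""
    return sum(ord(c) for c in key) % NUM_PARTITIONS

def keyby_partition(key: str) -> int:
    """Stand-in for the partitioning keyBy() would compute downstream."""
    return (sum(ord(c) for c in key) * 31 + len(key)) % NUM_PARTITIONS

key = "user-42"
print(producer_partition(key))  # -> 2: where the producer's hash puts the key
print(keyby_partition(key))     # -> 1: where keyBy()'s hash would put it
```

If the two assignments disagree like this, records for a key can arrive on a different parallel subtask than the one Flink's keyed state expects, so reinterpreting the stream as keyed could silently produce wrong session windows.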