Hi Teena,

Could you tell us a bit more about your job? Are you using event-time semantics?

Regards,
Timo

On 1/2/18 at 6:14 AM, Teena K wrote:
Hi,

I am using Flink 1.4 along with Kafka 0.11. My stream job has 4 Kafka consumers, each subscribing to a different topic. The stream from each consumer gets processed in 3 to 4 different ways, writing to a total of 12 sinks (Cassandra tables). When the job runs, the first 8 to 10 records get processed correctly, but after that the consumers stop picking up new records. I have tried this with Flink 1.3.2 + Kafka 0.10 and with Flink 1.4 + Kafka 0.10, and both gave the same results.
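Roughly, the topology looks like the sketch below (this is not my actual code; the topic names, keyspace/table names, group id, hosts, and the map transformations are placeholders just to show the shape of the job):

import java.util.Properties;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class MultiTopicJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder

        // One Kafka 0.11 consumer per topic (topic names are placeholders).
        for (String topic : new String[]{"topic-a", "topic-b", "topic-c", "topic-d"}) {
            DataStream<String> stream = env.addSource(
                    new FlinkKafkaConsumer011<>(topic, new SimpleStringSchema(), props));

            // Fan the same stream out into three independent transformations,
            // each ending in its own Cassandra sink (3 sinks x 4 topics = 12 tables).
            for (int i = 1; i <= 3; i++) {
                final int variant = i;
                DataStream<Tuple2<String, Integer>> processed = stream
                        .map(value -> Tuple2.of(value, variant))
                        .returns(Types.TUPLE(Types.STRING, Types.INT));

                CassandraSink.addSink(processed)
                        .setQuery("INSERT INTO ks." + topic.replace('-', '_')
                                + "_t" + variant + " (value, variant) VALUES (?, ?);")
                        .setHost("127.0.0.1") // placeholder
                        .build();
            }
        }

        env.execute("multi-topic fan-out job");
    }
}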

