Using the FlinkRunner, if I cancel the job, the pipeline blows up and exceptions stream rapidly across my screen.

```
java.lang.InterruptedException: null
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:944) ~[?:1.8.0_282]
        at org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.nextBatch(KafkaUnboundedReader.java:584) ~[blob_p-b0501fca08dc4506cf9cffe14466df74e4d010e9-d1cc5178ec372501d7c1e755b90de159:?]
```
How can I gracefully stop my Flink+Beam job that uses an unbounded KafkaIO
source?

If this is not explicitly supported, are there any workarounds?
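
For example, one workaround I have been considering is to bound the otherwise-unbounded Kafka read so the pipeline can drain and finish on its own rather than being interrupted by a cancel. A rough sketch of what I mean (the broker address, topic, deserializers, and the ten-minute limit are placeholders of mine; I am assuming `withMaxReadTime` effectively turns the source into a bounded one):

```
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.joda.time.Duration;

public class BoundedKafkaReadSketch {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    pipeline.apply(
        KafkaIO.<Long, String>read()
            .withBootstrapServers("localhost:9092")         // placeholder broker
            .withTopic("my-topic")                          // placeholder topic
            .withKeyDeserializer(LongDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            // Assumption: bounding the read lets the source (and the job)
            // finish on its own instead of relying on an external cancel.
            .withMaxReadTime(Duration.standardMinutes(10))
            .withoutMetadata());

    // ... rest of the pipeline would go here ...

    pipeline.run().waitUntilFinish();
  }
}
```

Would something along those lines be a reasonable substitute for a graceful stop, or is there a cleaner way to drain the source on the FlinkRunner?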

Please and thank you,
Marco.
