boyuanzz commented on a change in pull request #13710:
URL: https://github.com/apache/beam/pull/13710#discussion_r560414782
##########
File path:
sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/ReadFromKafkaDoFn.java
##########
@@ -288,6 +316,19 @@ public ProcessContinuation processElement(
Optional.ofNullable(watermarkEstimator.currentWatermark()));
}
try (Consumer<byte[], byte[]> consumer =
consumerFactoryFn.apply(updatedConsumerConfig)) {
+ // Check whether current TopicPartition is still available to read.
+ Set<TopicPartition> existingTopicPartitions = new HashSet<>();
+ for (List<PartitionInfo> topicPartitionList : consumer.listTopics().values()) {
Review comment:
Within one bundle, `listTopics()` is called once per `KafkaSourceDescriptor`. That is a
compromise made for performance reasons.
If we wanted fully accurate behavior (stop emitting records as soon as the
TopicPartition is stopped or removed), we would have to call `listTopics()` per record,
but we know that would hurt performance significantly.
We could also cache the result, but that would reduce accuracy even further.
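For reference, a minimal sketch of the per-bundle check (run once at the start of
`processElement`, i.e. once per `KafkaSourceDescriptor`). It assumes the surrounding
`ReadFromKafkaDoFn` context (`consumer`, `kafkaSourceDescriptor`) and uses only standard
Kafka consumer APIs; the exact names here are illustrative, not the final implementation.
```java
// Build the set of TopicPartitions that currently exist on the cluster.
Set<TopicPartition> existingTopicPartitions = new HashSet<>();
for (List<PartitionInfo> partitionInfoList : consumer.listTopics().values()) {
  for (PartitionInfo partitionInfo : partitionInfoList) {
    existingTopicPartitions.add(
        new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
  }
}
// If the TopicPartition for this descriptor is gone, stop processing this
// restriction instead of emitting further records.
if (!existingTopicPartitions.contains(kafkaSourceDescriptor.getTopicPartition())) {
  return ProcessContinuation.stop();
}
```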
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]