chia7712 commented on a change in pull request #10022:
URL: https://github.com/apache/kafka/pull/10022#discussion_r569196050
##########
File path:
clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
##########
@@ -1236,7 +1236,7 @@ public void assign(Collection<TopicPartition> partitions)
{
}
final FetchedRecords<K, V> records = pollForFetches(timer);
- if (!records.isEmpty()) {
+ if (!records.records().isEmpty()) {
Review comment:
> At the moment, I think we could change the javadoc along with the PR
> if so far we've only seen our tests being broken because it relies on this
> guarantee; if you have any common use cases that may be impacted by this
> behavior change, I'd love to hear and revisit.
just imagine a use case :)
Imagine that users try to randomly access records at a given offset. Hence, the
consumer is NOT called in a loop; instead, `poll` works like a getter method.
```scala
import scala.jdk.CollectionConverters._

def randomAccess(offset: Long, duration: Duration): Seq[ConsumerRecord[String, String]] = {
  consumer.seek(tp, offset)
  consumer.poll(duration).asScala.toSeq
}
```
In this case, users expect the method to return an empty result only if there
are no available records. With the new behavior, users may receive an empty
result before the timeout elapses, so they would have to rewrite the code as below.
```scala
def randomAccess(offset: Long, duration: Duration): Seq[ConsumerRecord[String, String]] = {
  consumer.seek(tp, offset)
  val endTime = System.currentTimeMillis() + duration.toMillis
  while (System.currentTimeMillis() <= endTime) {
    val records = consumer.poll(duration).asScala.toSeq
    if (records.nonEmpty) return records
  }
  Seq.empty
}
```
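To make the retry pattern above self-contained, here is a minimal sketch of the loop users would need, with `poll` abstracted into a function parameter (an assumption standing in for `consumer.poll`, not the Kafka API itself) and the remaining time recomputed on each iteration so the overall deadline is respected:

```scala
import java.time.Duration

// Hypothetical helper: keep invoking `poll` with the remaining time
// until it yields a non-empty result or the overall deadline passes.
// `poll` here is a stand-in for consumer.poll, not real Kafka API.
def pollUntilNonEmpty[A](duration: Duration)(poll: Duration => Seq[A]): Seq[A] = {
  val endTime = System.currentTimeMillis() + duration.toMillis
  var remaining = duration.toMillis
  while (remaining > 0) {
    val records = poll(Duration.ofMillis(remaining))
    if (records.nonEmpty) return records
    // shrink the budget so repeated empty polls cannot exceed the deadline
    remaining = endTime - System.currentTimeMillis()
  }
  Seq.empty
}
```

Polling with the remaining time, rather than the full `duration` each iteration, keeps the total wait bounded by the caller's timeout.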
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]