[ https://issues.apache.org/jira/browse/KAFKA-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983858#comment-16983858 ]

ASF GitHub Bot commented on KAFKA-9233:
---------------------------------------

noslowerdna commented on pull request #7755: KAFKA-9233: Fix IllegalStateException in Fetcher retrieval of beginning or end offsets for duplicate TopicPartition values
URL: https://github.com/apache/kafka/pull/7755
   
   Minor bug fix. The issue was introduced in Kafka 2.3.0, likely by 
[KAFKA-7831](https://issues.apache.org/jira/browse/KAFKA-7831).
   
   Tested by:
   `./gradlew clients:test --tests FetcherTest`
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka consumer throws undocumented IllegalStateException
> --------------------------------------------------------
>
>                 Key: KAFKA-9233
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9233
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 2.3.0
>            Reporter: Andrew Olson
>            Priority: Minor
>
> If the provided collection of TopicPartition instances contains any 
> duplicates, an IllegalStateException not documented in the javadoc is thrown 
> by internal Java stream code when calling KafkaConsumer#beginningOffsets or 
> KafkaConsumer#endOffsets.
> The stack traces look like this:
> {noformat}
> java.lang.IllegalStateException: Duplicate key -2
>       at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133)
>       at java.util.HashMap.merge(HashMap.java:1254)
>       at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
>       at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
>       at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>       at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>       at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>       at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>       at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>       at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>       at org.apache.kafka.clients.consumer.internals.Fetcher.beginningOrEndOffset(Fetcher.java:555)
>       at org.apache.kafka.clients.consumer.internals.Fetcher.beginningOffsets(Fetcher.java:542)
>       at org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2054)
>       at org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2031)
> {noformat}
> {noformat}
> java.lang.IllegalStateException: Duplicate key -1
>       at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133)
>       at java.util.HashMap.merge(HashMap.java:1254)
>       at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
>       at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
>       at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>       at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>       at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>       at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>       at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>       at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>       at org.apache.kafka.clients.consumer.internals.Fetcher.beginningOrEndOffset(Fetcher.java:559)
>       at org.apache.kafka.clients.consumer.internals.Fetcher.endOffsets(Fetcher.java:550)
>       at org.apache.kafka.clients.consumer.KafkaConsumer.endOffsets(KafkaConsumer.java:2109)
>       at org.apache.kafka.clients.consumer.KafkaConsumer.endOffsets(KafkaConsumer.java:2081)
> {noformat}
> Looking at the code, it appears this was likely introduced by KAFKA-7831. 
> The exception is not thrown in Kafka 2.2.1, where the duplicated 
> TopicPartition values are silently ignored. We should either document the 
> possibility of this exception (probably wrapping it in a Kafka exception 
> class indicating invalid client API usage), or restore the previous 
> behavior where the duplicates were harmless.
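
The failure described above comes from the JDK's `Collectors.toMap`, which throws `IllegalStateException` when the same key appears twice and no merge function is supplied. A minimal sketch of that behavior, and of the merge-function variant that would restore the pre-2.3.0 tolerance for duplicates (the `DuplicateKeyDemo` class name and the `-2` placeholder value are illustrative assumptions, not Kafka code):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class DuplicateKeyDemo {
    public static void main(String[] args) {
        // Stand-in for a caller-supplied collection containing
        // duplicate TopicPartition values: "topic-0" appears twice.
        List<String> partitions = List.of("topic-0", "topic-1", "topic-0");

        // Without a merge function, Collectors.toMap throws
        // IllegalStateException on the second "topic-0". (The exact
        // "Duplicate key ..." message text varies by JDK version.)
        try {
            Map<String, Integer> offsets = partitions.stream()
                    .collect(Collectors.toMap(Function.identity(), p -> -2));
            System.out.println("no exception: " + offsets);
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e);
        }

        // With a merge function, duplicates are silently collapsed,
        // matching the tolerant behavior reported for Kafka 2.2.1.
        Map<String, Integer> tolerant = partitions.stream()
                .collect(Collectors.toMap(Function.identity(), p -> -2, (a, b) -> a));
        System.out.println("tolerant: " + tolerant);
    }
}
```

Until the fix lands, callers can avoid the exception by de-duplicating the collection (e.g. passing it through a `LinkedHashSet`) before invoking beginningOffsets or endOffsets.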



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
