[
https://issues.apache.org/jira/browse/KAFKA-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14195417#comment-14195417
]
Ewen Cheslack-Postava commented on KAFKA-1749:
----------------------------------------------
This looks like it's probably a duplicate of KAFKA-1196 -- that is the stack
trace I described in that issue, which I get when sending a FetchResponse
that exceeds 2GB.
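For reference, a minimal Scala sketch (not broker code; the ~3 GB size is hypothetical)
of why a response past the 2GB mark breaks Int-based size bookkeeping:
{noformat}
// Minimal sketch, not Kafka code: a response larger than Int.MaxValue
// (~2 GB) cannot be held in an Int, so Int-based size/offset bookkeeping
// in the send path wraps around.
object TwoGbLimit {
  def main(args: Array[String]): Unit = {
    val responseBytes: Long = 3L * 1024 * 1024 * 1024  // hypothetical ~3 GB FetchResponse
    println(responseBytes > Int.MaxValue)               // true: past the 2 GB Int limit
    println(responseBytes.toInt)                        // -1073741824: wraps negative
  }
}
{noformat}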
> Brokers continually throw exceptions when there are hundreds of topic being
> fetched by mirrormaker
> --------------------------------------------------------------------------------------------------
>
> Key: KAFKA-1749
> URL: https://issues.apache.org/jira/browse/KAFKA-1749
> Project: Kafka
> Issue Type: Bug
> Affects Versions: 0.8.1.1, 0.8.2
> Reporter: Min Zhou
>
> Here is one of the millions of such exceptions.
> {noformat}
> kafka.common.KafkaException: This operation cannot be completed on a complete
> request.
> at kafka.network.Transmission$class.expectIncomplete(Transmission.scala:34)
> at kafka.api.FetchResponseSend.expectIncomplete(FetchResponse.scala:191)
> at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:214)
> at kafka.network.Processor.write(SocketServer.scala:375)
> at kafka.network.Processor.run(SocketServer.scala:247)
> at java.lang.Thread.run(Thread.java:662)
> {noformat}
> We used tooling to hook the function kafka.api.FetchResponseSend.writeTo and found
> that fetchResponse.sizeInBytes had overflowed, which means the code below can produce
> a result that exceeds the limit of the integer type (see the sketch after the snippet).
> {noformat}
> val sizeInBytes =
>   FetchResponse.headerSize +
>     dataGroupedByTopic.foldLeft(0) ((folded, curr) => {
>       val topicData = TopicData(curr._1, curr._2.map {
>         case (topicAndPartition, partitionData) =>
>           (topicAndPartition.partition, partitionData)
>       })
>       folded + topicData.sizeInBytes
>     })
> {noformat}
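> A standalone sketch of the same fold shape (hypothetical per-topic sizes, not the real
> TopicData or FetchResponse classes) shows the wrap-around once the running total passes
> Int.MaxValue:
> {noformat}
> // Sketch only: the Int accumulator wraps past Int.MaxValue, while a
> // Long accumulator keeps the true size.
> object SizeFoldSketch {
>   def main(args: Array[String]): Unit = {
>     val topicSizes = Seq.fill(100)(30 * 1024 * 1024)          // 100 topics x 30 MB each
>     val intTotal   = topicSizes.foldLeft(0)((folded, s) => folded + s)
>     val longTotal  = topicSizes.foldLeft(0L)((folded, s) => folded + s.toLong)
>     println(intTotal)    // -1149239296: overflowed, like fetchResponse.sizeInBytes
>     println(longTotal)   // 3145728000: the actual size, ~2.93 GB
>   }
> }
> {noformat}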
> If mirrormaker fetches only a few topics, the brokers run fine.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)