[ https://issues.apache.org/jira/browse/KAFKA-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15312377#comment-15312377 ]

Hannu Valtonen commented on KAFKA-3764:
---------------------------------------

Hi, I wanted to chime in: I'm seeing the same behaviour in our test 
environment with Kafka 0.10.0 when telegraf-0.12.1_55_g671b40d-1.x86_64 
(https://github.com/influxdata/telegraf/) produces Kafka messages. 
Internally, telegraf uses Shopify's Sarama Go client 
(https://github.com/Shopify/sarama). Our setup also uses SSL 
authentication, if that makes any difference.

I'd be happy to test patches or to provide more logs for the issue if 
you need any (with additional logging patches applied if needed).
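For what it's worth, the `Caused by: java.io.IOException: failed to read chunk` at the bottom of the trace is what xerial's SnappyInputStream throws when it can't fully read a compressed chunk, e.g. because the payload is truncated or framed differently than it expects. A minimal stand-in sketch of that failure mode (using gzip from the Python stdlib purely as an illustration, since snappy itself isn't in the stdlib):

```python
# Illustrative stand-in only: gzip instead of snappy. Decompressing a
# truncated compressed stream raises an error, analogous to
# SnappyInputStream's "failed to read chunk" IOException on the broker.
import gzip

payload = gzip.compress(b"kafka message payload " * 50)
truncated = payload[: len(payload) // 2]  # simulate a short/corrupt chunk

try:
    gzip.decompress(truncated)
    outcome = "decompressed"
except EOFError:
    # gzip reports the stream ended before the end-of-stream marker
    outcome = "failed to read chunk"

print(outcome)
```

So the interesting question is on which side the chunk gets cut short or mis-framed: the producing client (Sarama / ruby-kafka) or the broker's revalidation path.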


> Error processing append operation on partition
> ----------------------------------------------
>
>                 Key: KAFKA-3764
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3764
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.0.0
>            Reporter: Martin Nowak
>
> After updating Kafka from 0.9.0.1 to 0.10.0.0 I'm getting plenty of `Error 
> processing append operation on partition` errors. This happens with 
> ruby-kafka as producer and enabled snappy compression.
> {noformat}
> [2016-05-27 20:00:11,074] ERROR [Replica Manager on Broker 2]: Error 
> processing append operation on partition m2m-0 (kafka.server.ReplicaManager)
> kafka.common.KafkaException: 
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>         at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>         at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
>         at kafka.log.Log.liftedTree1$1(Log.scala:339)
>         at kafka.log.Log.append(Log.scala:338)
>         at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
>         at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
>         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
>         at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
>         at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>         at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
>         at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
>         at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: failed to read chunk
>         at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
>         at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
>         at java.io.DataInputStream.readFully(DataInputStream.java:195)
>         at java.io.DataInputStream.readLong(DataInputStream.java:416)
>         at kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
