[ https://issues.apache.org/jira/browse/KAFKA-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582084#comment-16582084 ]

Alastair Munro commented on KAFKA-7282:
---------------------------------------

We set the options below on the kafka* glusterfs volumes in our OpenShift 
project (Kubernetes namespace), which fixed the issue.

This is heavy-handed and will hurt performance, since it disables ALL 
caching, so we still need to work through the options to determine which 
ones are actually required:
{code:java}
cluster.post-op-delay-secs: 0
performance.client-io-threads: off
performance.open-behind: off
performance.readdir-ahead: off
performance.read-ahead: off
performance.stat-prefetch: off
performance.write-behind: off
performance.io-cache: off
cluster.consistent-metadata: on
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: on{code}
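For reference, options like these are normally applied per-volume with the gluster CLI. A minimal sketch, assuming a volume named kafka-vol (hypothetical; substitute the actual kafka* volume names) and that the commands run on a node in the trusted storage pool. Note that cluster.brick-multiplex is, to my knowledge, a cluster-wide option and is set on the special volume name "all" rather than per volume:

```shell
#!/bin/sh
# Hypothetical volume name; replace with the actual kafka* volume(s).
VOL=kafka-vol

# Per-volume options from the list above.
gluster volume set "$VOL" cluster.post-op-delay-secs 0
gluster volume set "$VOL" performance.client-io-threads off
gluster volume set "$VOL" performance.open-behind off
gluster volume set "$VOL" performance.readdir-ahead off
gluster volume set "$VOL" performance.read-ahead off
gluster volume set "$VOL" performance.stat-prefetch off
gluster volume set "$VOL" performance.write-behind off
gluster volume set "$VOL" performance.io-cache off
gluster volume set "$VOL" cluster.consistent-metadata on
gluster volume set "$VOL" performance.quick-read off
gluster volume set "$VOL" transport.address-family inet
gluster volume set "$VOL" nfs.disable on

# Brick multiplexing is cluster-wide; it is set on the special name "all".
gluster volume set all cluster.brick-multiplex on

# Confirm which options are now in effect.
gluster volume info "$VOL"
```

In a heketi-provisioned OpenShift setup the same keys can instead go into the StorageClass/volume options, which is where the list above came from.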

> Failed to read `log header` from file channel
> ---------------------------------------------
>
>                 Key: KAFKA-7282
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7282
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.11.0.2, 1.1.1, 2.0.0
>         Environment: Linux
>            Reporter: Alastair Munro
>            Priority: Major
>
> Full stack trace:
> {code:java}
> [2018-08-13 11:22:01,635] ERROR [ReplicaManager broker=2] Error processing fetch operation on partition segmenter-evt-v1-14, offset 96745 (kafka.server.ReplicaManager)
> org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read `log header` from file channel `sun.nio.ch.FileChannelImpl@6e6d8ddd`. Expected to read 17 bytes, but reached end of file after reading 0 bytes. Started read from position 25935.
> at org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40)
> at org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24)
> at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
> at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
> at org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:286)
> at kafka.log.LogSegment.translateOffset(LogSegment.scala:254)
> at kafka.log.LogSegment.read(LogSegment.scala:277)
> at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159)
> at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114)
> at kafka.log.Log.maybeHandleIOException(Log.scala:1837)
> at kafka.log.Log.read(Log.scala:1114)
> at kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912)
> at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974)
> at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973)
> at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:678)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:107)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
> at java.lang.Thread.run(Thread.java:748)
> {code}
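The EOFException in the trace says the broker tried to read a fixed-size record-batch header at position 25935 but the file, as served by the filesystem, ended there — consistent with a stale-cache view of the segment on glusterfs. A minimal sketch of that failure mode (hypothetical helper, not Kafka's actual code; the 17-byte header size is taken from the error text):

```python
import io

# Bytes Kafka reads up front to identify a record batch (from the error text above).
HEADER_SIZE = 17

def read_log_header(channel, position):
    """Read a fixed-size batch header at `position`; fail like the trace if the file ends first."""
    channel.seek(position)
    header = channel.read(HEADER_SIZE)
    if len(header) < HEADER_SIZE:
        raise EOFError(
            f"Failed to read `log header`. Expected to read {HEADER_SIZE} bytes, "
            f"but reached end of file after reading {len(header)} bytes. "
            f"Started read from position {position}."
        )
    return header

# The offset index points at a batch starting at position 25935,
# but the (cached, stale) file view ends exactly there.
segment = io.BytesIO(b"\x00" * 25935)
try:
    read_log_header(segment, 25935)
except EOFError as e:
    print(e)
```

The point is that nothing is wrong with the reader: the index and the data file simply disagree about the file's length, which is what aggressive client-side caching on a distributed filesystem can produce.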



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
