Hi, team. 

I’m using Kafka 0.8.1.
I found a strange log message repeating in server.log on one of my brokers, and 
it is still being logged now.

server.log 
======================================================================================
...
[2014-06-09 10:41:47,402] ERROR Conditional update of path 
/brokers/topics/topicTRACE/partitions/1/state with data 
{"controller_epoch":19,"leader":2,"version":1,"leader_epoch":43,"isr":[4,2]} 
and expected version 439 failed due to 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion for /brokers/topics/topicTRACE/partitions/1/state 
(kafka.utils.ZkUtils$)
[2014-06-09 10:41:47,402] INFO Partition [topicTRACE,1] on broker 2: Cached 
zkVersion [439] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2014-06-09 10:41:47,402] INFO Partition [topicDEBUG,0] on broker 2: Shrinking 
ISR for partition [topicDEBUG,0] from 1,3,2 to 2 (kafka.cluster.Partition)
[2014-06-09 10:41:47,416] ERROR Conditional update of path 
/brokers/topics/topicDEBUG/partitions/0/state with data 
{"controller_epoch":19,"leader":2,"version":1,"leader_epoch":43,"isr":[2]} and 
expected version 1424 failed due to 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion for /brokers/topics/topicDEBUG/partitions/0/state 
(kafka.utils.ZkUtils$)
[2014-06-09 10:41:47,432] INFO Partition [topicDEBUG,0] on broker 2: Cached 
zkVersion [1424] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2014-06-09 10:41:47,432] INFO Partition [topicCDR,3] on broker 2: Shrinking 
ISR for partition [topicCDR,3] from 4,1,2 to 2 (kafka.cluster.Partition)
[2014-06-09 10:41:47,435] ERROR Conditional update of path 
/brokers/topics/topicCDR/partitions/3/state with data 
{"controller_epoch":19,"leader":2,"version":1,"leader_epoch":46,"isr":[2]} and 
expected version 541 failed due to 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion for /brokers/topics/topicCDR/partitions/3/state 
(kafka.utils.ZkUtils$)
[2014-06-09 10:41:47,435] INFO Partition [topicCDR,3] on broker 2: Cached 
zkVersion [541] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2014-06-09 10:41:48,426] INFO Partition [topicTRACE,1] on broker 2: Shrinking 
ISR for partition [topicTRACE,1] from 4,3,2 to 4,2 (kafka.cluster.Partition)
...
=================================================================================================
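Before doing anything, I was planning to compare the cached zkVersion from the log with the actual version of the znode in ZooKeeper, along these lines (a sketch; `zk-host:2181` is a placeholder for our ZooKeeper connect string, and the path is taken from the log above):

```shell
# Read the partition state znode directly from ZooKeeper using the
# zookeeper-shell tool that ships with Kafka.
bin/zookeeper-shell.sh zk-host:2181 \
  get /brokers/topics/topicTRACE/partitions/1/state
# The dataVersion field in the stat output is the version that the broker's
# conditional update (and its cached zkVersion, 439 here) is checked against.
```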

I also found some errors and warnings in controller.log:


controller.log 
======================================================================================
...
[2014-06-09 10:42:03,962] WARN [Controller-3-to-broker-1-send-thread], 
Controller 3 fails to send a request to broker 
id:1,host:c-ccp-tk1-a58,port:9091 (kafka.controller.RequestSendThread)
java.net.SocketTimeoutException
        at 
sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
        at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
        at 
java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
        at kafka.utils.Utils$.read(Utils.scala:375)
        at 
kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
        at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
        at 
kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
        at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
        at 
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:146)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
[2014-06-09 10:42:03,964] ERROR [Controller-3-to-broker-1-send-thread], 
Controller 3 epoch 21 failed to send UpdateMetadata request with correlation id 
1 to broker id:1,host:c-ccp-tk1-a58,port:9091. Reconnecting to broker. 
(kafka.controller.RequestSendThread)
java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:89)
        at 
kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
        at 
kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)

...

[2014-06-09 10:42:38,064] WARN [OfflinePartitionLeaderSelector]: No broker in 
ISR is alive for [topicTRACE,0]. Elect leader 3 from live brokers 3. There's 
potential data loss. (kafka.controller.OfflinePartitionLeaderSelector)
...
=================================================================================================
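Since the cached-zkVersion errors usually mean the broker disagrees with the current controller, I was also going to check which broker currently holds the controller role (sketch, same placeholder ZooKeeper address as above):

```shell
# The /controller znode records the broker id of the active controller,
# and /controller_epoch the current controller epoch (19/21 in the logs above).
bin/zookeeper-shell.sh zk-host:2181 get /controller
bin/zookeeper-shell.sh zk-host:2181 get /controller_epoch
```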

Why is this happening? Is there any possibility of data loss?
What do I have to do to get my brokers back to normal? Do I have to restart 
this broker?
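Before restarting, I wanted to see how widespread the ISR shrinkage is, roughly like this (a sketch; again `zk-host:2181` is a placeholder):

```shell
# List all partitions whose ISR is smaller than the full replica set,
# to gauge the scope of the problem before touching any broker.
bin/kafka-topics.sh --describe --under-replicated-partitions \
  --zookeeper zk-host:2181
```

Is that a reasonable first step, or should I go straight to restarting broker 2?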


Thanks in advance.
