[jira] [Updated] (KAFKA-7488) Controller not recovering after disconnection to zookeeper

2018-10-05 Thread Luigi Tagliamonte (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luigi Tagliamonte updated KAFKA-7488:
-
Description: 
This issue seems related to https://issues.apache.org/jira/browse/KAFKA-2729, 
which has already been resolved.

The issue still exists in Kafka 1.1.

Cluster details:
 * 3 Kafka nodes cluster running 1.1
 * 3 Zookeeper node cluster running 3.4.10

Today, while I was replacing a ZooKeeper server (10.48.208.70), the leader 
among the brokers experienced this issue:
{code:java}
[2018-10-05 21:03:02,799] INFO [GroupMetadataManager brokerId=1] Removed 0 
expired offsets in 0 milliseconds. 
(kafka.coordinator.group.GroupMetadataManager)
[2018-10-05 21:08:20,060] INFO Unable to read additional data from server 
sessionid 0x34663b434985000e, likely server has closed socket, closing socket 
connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,001] INFO Opening socket connection to server 
10.48.208.70/10.48.208.70:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,003] WARN Session 0x34663b434985000e for server null, 
unexpected error, closing socket connection and attempting reconnect 
(org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-10-05 21:08:21,797] INFO Opening socket connection to server 
10.48.210.44/10.48.210.44:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,799] INFO Socket connection established to 
10.48.210.44/10.48.210.44:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,802] INFO Session establishment complete on server 
10.48.210.44/10.48.210.44:2181, sessionid = 0x34663b434985000e, negotiated 
timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition){code}
The only way to recover was to restart the broker.
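
For anyone hitting the same NODEEXISTS/owner-mismatch error, here is a minimal 
diagnostic sketch (not Kafka code) that uses the plain ZooKeeper Java client to 
dump the /controller znode and its ephemeral owner, so the decimal owner logged 
by KafkaZkClient can be matched against the hex session id in the broker log. 
The class name and connection string are placeholders.
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ControllerZnodeInspector {

    public static void main(String[] args) throws Exception {
        // Placeholder connection string; point it at the real ensemble.
        String connect = "10.48.208.70:2181,10.48.210.44:2181";

        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(connect, 6000, (WatchedEvent e) -> {
            if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        Stat stat = new Stat();
        byte[] data = zk.getData("/controller", false, stat);

        // /controller holds JSON naming the broker that believes it is the
        // controller; getEphemeralOwner() is the session id that created the
        // znode (decimal here, hex in the broker log).
        System.out.println("data           = " + new String(data, StandardCharsets.UTF_8));
        System.out.println("ephemeralOwner = " + stat.getEphemeralOwner()
                + " (0x" + Long.toHexString(stat.getEphemeralOwner()) + ")");
        System.out.println("version        = " + stat.getVersion());

        zk.close();
    }
}
{code}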

[jira] [Created] (KAFKA-7488) Controller not recovering after disconnection to zookeeper

2018-10-05 Thread Luigi Tagliamonte (JIRA)
Luigi Tagliamonte created KAFKA-7488:


 Summary: Controller not recovering after disconnection to zookeeper
 Key: KAFKA-7488
 URL: https://issues.apache.org/jira/browse/KAFKA-7488
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 1.1.0
Reporter: Luigi Tagliamonte


This issue seems related to https://issues.apache.org/jira/browse/KAFKA-2729, 
which has already been resolved.

The issue still exists in Kafka 1.1.

Cluster details:
 * 3 Kafka nodes cluster running 1.1
 * 3 Zookeeper node cluster running 3.4.10

Today, while I was replacing a ZooKeeper server, the leader among the brokers 
experienced this issue:
{code:java}
[2018-10-05 21:03:02,799] INFO [GroupMetadataManager brokerId=1] Removed 0 
expired offsets in 0 milliseconds. 
(kafka.coordinator.group.GroupMetadataManager)
[2018-10-05 21:08:20,060] INFO Unable to read additional data from server 
sessionid 0x34663b434985000e, likely server has closed socket, closing socket 
connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,001] INFO Opening socket connection to server 
10.48.208.70/10.48.208.70:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,003] WARN Session 0x34663b434985000e for server null, 
unexpected error, closing socket connection and attempting reconnect 
(org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-10-05 21:08:21,797] INFO Opening socket connection to server 
10.48.210.44/10.48.210.44:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,799] INFO Socket connection established to 
10.48.210.44/10.48.210.44:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,802] INFO Session establishment complete on server 
10.48.210.44/10.48.210.44:2181, sessionid = 0x34663b434985000e, negotiated 
timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition){code}
The only way to recover was to restart the broker.
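
Since the ticket is about not recovering after a ZooKeeper disconnection, it may 
help to recall the client-side session states involved. The sketch below is not 
the broker's actual handling, only a plain ZooKeeper watcher that logs the 
Disconnected / SyncConnected / Expired transitions; a reconnect within the 
negotiated timeout keeps the same session id, so ephemeral znodes such as 
/controller stay owned by that session. The ensemble address is a placeholder.
{code:java}
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class SessionStateLogger implements Watcher {

    @Override
    public void process(WatchedEvent event) {
        switch (event.getState()) {
            case SyncConnected:
                // (Re)connected within the session timeout: the session id is
                // unchanged and ephemeral znodes created by it (e.g. /controller)
                // are still owned by this session.
                System.out.println("connected / reconnected");
                break;
            case Disconnected:
                // Transient network problem; the client library keeps retrying
                // the other servers in the connect string on its own.
                System.out.println("disconnected, waiting for reconnect");
                break;
            case Expired:
                // The session and all of its ephemeral znodes are gone; the only
                // way forward is to create a brand new ZooKeeper handle.
                System.out.println("session expired, handle must be rebuilt");
                break;
            default:
                break;
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder ensemble address and the 6000 ms timeout seen in the log.
        ZooKeeper zk = new ZooKeeper("10.48.210.44:2181", 6000, new SessionStateLogger());
        Thread.sleep(60_000);   // observe state changes for a minute, then exit
        zk.close();
    }
}
{code}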



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2018-10-05 Thread Luigi Tagliamonte (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16640382#comment-16640382
 ] 

Luigi Tagliamonte commented on KAFKA-2729:
--

This issue does not seem to be fixed in 1.1.

Cluster details:
 * 3 Kafka nodes cluster running 1.1
 * 3 Zookeeper node cluster running 3.4.10

Today, while I was replacing a ZooKeeper server, the leader among the brokers 
experienced this issue:
{code:java}
[2018-10-05 21:03:02,799] INFO [GroupMetadataManager brokerId=1] Removed 0 
expired offsets in 0 milliseconds. 
(kafka.coordinator.group.GroupMetadataManager)
[2018-10-05 21:08:20,060] INFO Unable to read additional data from server 
sessionid 0x34663b434985000e, likely server has closed socket, closing socket 
connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,001] INFO Opening socket connection to server 
10.48.208.70/10.48.208.70:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,003] WARN Session 0x34663b434985000e for server null, 
unexpected error, closing socket connection and attempting reconnect 
(org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-10-05 21:08:21,797] INFO Opening socket connection to server 
10.48.210.44/10.48.210.44:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,799] INFO Socket connection established to 
10.48.210.44/10.48.210.44:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,802] INFO Session establishment complete on server 
10.48.210.44/10.48.210.44:2181, sessionid = 0x34663b434985000e, negotiated 
timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition){code}
The only way to recover was to restart the broker.
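
The "Cached zkVersion [...] not equal to that in zookeeper" messages come from a 
conditional (versioned) write being rejected. As background only, here is a 
small sketch of that primitive with the plain ZooKeeper client: setData() with 
an explicit expected version fails with BadVersionException when the znode was 
modified since that version was cached. The path, payload, and ensemble address 
below are illustrative, not the broker's real partition-state data.
{code:java}
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class VersionedUpdateExample {

    /**
     * Conditionally overwrite a znode: the write succeeds only if the znode's
     * current version matches the version read earlier (a compare-and-set).
     */
    static boolean updateIfUnchanged(ZooKeeper zk, String path, byte[] newData,
                                     int expectedVersion) throws Exception {
        try {
            // setData() with an explicit version is rejected with BADVERSION
            // if someone else (e.g. the controller) modified the znode since
            // that version was cached.
            zk.setData(path, newData, expectedVersion);
            return true;
        } catch (KeeperException.BadVersionException e) {
            // The situation the broker logs as
            // "Cached zkVersion [...] not equal to that in zookeeper".
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // Illustrative ensemble address and znode path.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 6000, event -> { });
        String path = "/brokers/topics/example/partitions/0/state";

        Stat stat = new Stat();
        zk.getData(path, false, stat);            // read current data and version
        byte[] update = "{\"isr\":[1]}".getBytes(StandardCharsets.UTF_8);  // illustrative payload

        boolean ok = updateIfUnchanged(zk, path, update, stat.getVersion());
        System.out.println(ok ? "updated" : "lost the race, re-read and retry");
        zk.close();
    }
}
{code}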

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.10.1.0, 0.11.0.0
>Reporter: Danil Serdyuchenko
>Assignee: Onur Karaman
>Priority: Major
> Fix For: 1.1.0
>
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a