[ https://issues.apache.org/jira/browse/KAFKA-2082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482042#comment-14482042 ]
Sriharsha Chintalapani commented on KAFKA-2082:
-----------------------------------------------

[~eapache] I think the problem here is how the test is operating. In general, users configure each broker with a ZooKeeper connection string that lists multiple hosts, e.g. {{localhost:2181,localhost:2182,localhost:2183}}. If there is a connection loss caused by an issue on the broker itself (a long JVM garbage-collection pause, for example), a new controller is elected, and when that broker comes back up it goes through onControllerResignation and receives an UpdateMetadataRequest that refreshes its metadata cache, so subsequent requests are served with the latest metadata. The other common case is a single ZooKeeper node going down, which is fine because the remaining ZooKeeper nodes in the connection string keep serving the Kafka brokers.

But in this particular test each broker is connected to a single ZooKeeper host out of the quorum. When you disable zk1, broker 1 disconnects, which causes /brokers/ids/1 to be deleted because it is an ephemeral node. A new controller is then elected and runs KafkaController.onPartitionReassignment, whose step 12 is:

{code}
// 12. After electing leader, the replicas and isr information changes, so resend the update metadata request to every broker
sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq, Set(topicAndPartition))
{code}

In this specific case, however, liveOrShuttingDownBrokerIds does not contain all of the brokers: the /brokers/ids entries are ephemeral nodes, so they are deleted when the owning client disconnects. The producer is still connected to broker 1, which never received an UpdateMetadataRequest, so it keeps serving stale metadata.
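To make the ephemeral-node behaviour concrete, here is a minimal standalone Scala sketch (not Kafka code) that registers a znode the way a broker registers itself and shows it disappearing once the creating session ends. It assumes a ZooKeeper server on localhost:2181, uses the stock org.apache.zookeeper Java client, and a made-up path /demo-broker-1 in place of /brokers/ids/1:

{code:scala}
import org.apache.zookeeper.{CreateMode, WatchedEvent, Watcher, ZooDefs, ZooKeeper}

object EphemeralNodeSketch extends App {
  // Watcher that ignores events; a real broker reacts to session state changes.
  // For brevity this sketch also skips waiting for the session-connected event.
  val noOpWatcher = new Watcher { override def process(event: WatchedEvent): Unit = () }

  // Session playing the role of broker 1 registering itself.
  val brokerSession = new ZooKeeper("localhost:2181", 6000, noOpWatcher)
  brokerSession.create("/demo-broker-1", Array.emptyByteArray,
    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL)

  // Second session playing the role of the controller looking up registrations.
  val controllerSession = new ZooKeeper("localhost:2181", 6000, noOpWatcher)
  println("registered: " + (controllerSession.exists("/demo-broker-1", false) != null))        // true

  // Once broker 1's session ends (closed here; in the test it expires after the
  // proxied ZooKeeper connection is cut), ZooKeeper deletes the ephemeral node.
  brokerSession.close()
  Thread.sleep(1000)
  println("still registered: " + (controllerSession.exists("/demo-broker-1", false) != null))  // false

  controllerSession.close()
}
{code}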
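And a second, self-contained sketch of the controller-side consequence. This is only a simulation of the behaviour described above, not the real KafkaController: the broker ids and partition name are made up and sendUpdateMetadataRequest is reduced to a println, but it shows why a broker whose ephemeral registration is gone never hears about the new leader and keeps answering with stale metadata:

{code:scala}
object StaleMetadataSketch extends App {
  // Five brokers, matching the vagrant cluster used in the test.
  val allBrokers = Set(1, 2, 3, 4, 5)

  // What the controller reads back from /brokers/ids after zk1 is disabled:
  // broker 1's ephemeral node is gone, so it is no longer considered live.
  val liveOrShuttingDownBrokerIds = allBrokers - 1

  // Stand-in for the controller pushing new metadata to each broker's cache.
  def sendUpdateMetadataRequest(brokers: Seq[Int], partitions: Set[String]): Unit =
    brokers.sorted.foreach(id => println(s"UpdateMetadataRequest(${partitions.mkString(",")}) -> broker $id"))

  // Roughly what step 12 of onPartitionReassignment does after electing a new leader.
  sendUpdateMetadataRequest(liveOrShuttingDownBrokerIds.toSeq, Set("many_partition-1"))

  // Broker 1 is still running and still reachable by the producer, but it was
  // never sent the request above, so its metadata cache is stale.
  val staleBrokers = allBrokers -- liveOrShuttingDownBrokerIds
  println(s"brokers never updated (serving stale metadata): ${staleBrokers.mkString(",")}")
}
{code}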
> Kafka Replication ends up in a bad state
> ----------------------------------------
>
>                 Key: KAFKA-2082
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2082
>             Project: Kafka
>          Issue Type: Bug
>          Components: replication
>    Affects Versions: 0.8.2.1
>            Reporter: Evan Huus
>            Assignee: Sriharsha Chintalapani
>            Priority: Critical
>         Attachments: KAFKA-2082.patch
>
>
> While running integration tests for Sarama (the go client) we came across a pattern of connection losses that reliably puts kafka into a bad state: several of the brokers start spinning, chewing ~30% CPU and spamming the logs with hundreds of thousands of lines like:
> {noformat}
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111094 from client ReplicaFetcherThread-0-9093 on partition [many_partition,1] failed due to Leader not local for partition [many_partition,1] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111094 from client ReplicaFetcherThread-0-9093 on partition [many_partition,6] failed due to Leader not local for partition [many_partition,6] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,070] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on partition [many_partition,21] failed due to Leader not local for partition [many_partition,21] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on partition [many_partition,26] failed due to Leader not local for partition [many_partition,26] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on partition [many_partition,1] failed due to Leader not local for partition [many_partition,1] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,071] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111095 from client ReplicaFetcherThread-0-9093 on partition [many_partition,6] failed due to Leader not local for partition [many_partition,6] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,072] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111096 from client ReplicaFetcherThread-0-9093 on partition [many_partition,21] failed due to Leader not local for partition [many_partition,21] on broker 9093 (kafka.server.ReplicaManager)
> [2015-04-01 13:08:40,072] WARN [Replica Manager on Broker 9093]: Fetch request with correlation id 111096 from client ReplicaFetcherThread-0-9093 on partition [many_partition,26] failed due to Leader not local for partition [many_partition,26] on broker 9093 (kafka.server.ReplicaManager)
> {noformat}
> This can be easily and reliably reproduced using the {{toxiproxy-final}} branch of https://github.com/Shopify/sarama which includes a vagrant script for provisioning the appropriate cluster:
> - {{git clone https://github.com/Shopify/sarama.git}}
> - {{git checkout test-jira-kafka-2082}}
> - {{vagrant up}}
> - {{TEST_SEED=1427917826425719059 DEBUG=true go test -v}}
> After the test finishes (it fails because the cluster ends up in a bad state), you can log into the cluster machine with {{vagrant ssh}} and inspect the bad nodes. The vagrant script provisions five zookeepers and five brokers in {{/opt/kafka-9091/}} through {{/opt/kafka-9095/}}.
> Additional context: the test produces continually to the cluster while randomly cutting and restoring zookeeper connections (all connections to zookeeper are run through a simple proxy on the same vm to make this easy). The majority of the time this works very well and does a good job exercising our producer's retry and failover code. However, under certain patterns of connection loss (the {{TEST_SEED}} in the instructions is important), kafka gets confused. The test never cuts more than two connections at a time, so zookeeper should always have quorum, and the topic (with three replicas) should always be writable.
> Completely restarting the cluster via {{vagrant reload}} seems to put it back into a sane state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)