[
https://issues.apache.org/jira/browse/KAFKA-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347063#comment-14347063
]
Alexey Ozeritskiy commented on KAFKA-1908:
------------------------------------------
Hi all, the following is our scenario.
We use a custom consumer which runs on the broker hosts and always consumes leader
partitions from localhost. The consumer reads data and pushes it to a third-party
system. We send a metadata request to localhost and don't use the ZooKeeper data.
We use ZooKeeper locks to guarantee that each partition is read by a single process.
Sometimes we release the locks and a consumer can begin consuming data from a
"broken" host and reset its offsets.
> Split brain
> -----------
>
> Key: KAFKA-1908
> URL: https://issues.apache.org/jira/browse/KAFKA-1908
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 0.8.2.0
> Reporter: Alexey Ozeritskiy
>
> In some cases, there may be two leaders for one partition.
> Steps to reproduce:
> # We have 3 brokers, 1 partition with 3 replicas:
> {code}
> TopicAndPartition: [partition,0] Leader: 1 Replicas: [2,1,3]
> ISR: [1,2,3]
> {code}
> # The controller runs on broker 3
> # Let the Kafka port be 9092. On broker 1, execute:
> {code}
> iptables -A INPUT -p tcp --dport 9092 -j REJECT
> {code}
> # Initiate a preferred replica election
> # As a result:
> Broker 1:
> {code}
> TopicAndPartition: [partition,0] Leader: 1 Replicas: [2,1,3]
> ISR: [1,2,3]
> {code}
> Broker 2:
> {code}
> TopicAndPartition: [partition,0] Leader: 2 Replicas: [2,1,3]
> ISR: [1,2,3]
> {code}
> # Flush the iptables rules on broker 1
> Now we can produce messages to {{[partition,0]}}. Replica-1 will not
> receive the new data. A consumer can read data from either replica-1 or
> replica-2. When it reads from replica-1 it resets its offsets and can then
> read duplicates from replica-2 (see the sketch below).
> We saw this situation in our production cluster when it had network problems.
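>
> For illustration, a minimal sketch of the offset-reset path that produces the
> duplicates; it uses the standard 0.8 SimpleConsumer API, and the host name,
> client id, fetch size and saved offset are placeholders:
> {code}
> // The committed offset (taken while reading from replica-2) is past the end
> // of replica-1's log, so the fetch fails with OffsetOutOfRange; a typical
> // client then resets to the earliest (or latest) available offset, and
> // subsequent reads overlap data that was already consumed.
> import java.util.Collections;
> import java.util.Map;
>
> import kafka.api.FetchRequest;
> import kafka.api.FetchRequestBuilder;
> import kafka.api.PartitionOffsetRequestInfo;
> import kafka.common.ErrorMapping;
> import kafka.common.TopicAndPartition;
> import kafka.javaapi.FetchResponse;
> import kafka.javaapi.OffsetResponse;
> import kafka.javaapi.consumer.SimpleConsumer;
>
> public class OffsetResetSketch {
>     public static void main(String[] args) {
>         String topic = "partition";
>         int part = 0;
>         long offset = 1000L; // last offset committed while reading replica-2
>
>         // Here "broker1" stands for the host of replica-1.
>         SimpleConsumer consumer =
>                 new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "reset-demo");
>
>         FetchRequest req = new FetchRequestBuilder()
>                 .clientId("reset-demo")
>                 .addFetch(topic, part, offset, 100000)
>                 .build();
>         FetchResponse resp = consumer.fetch(req);
>
>         if (resp.hasError()
>                 && resp.errorCode(topic, part) == ErrorMapping.OffsetOutOfRangeCode()) {
>             // Reset to the earliest offset this broker has.
>             Map<TopicAndPartition, PartitionOffsetRequestInfo> info =
>                     Collections.singletonMap(
>                             new TopicAndPartition(topic, part),
>                             new PartitionOffsetRequestInfo(
>                                     kafka.api.OffsetRequest.EarliestTime(), 1));
>             OffsetResponse offsets = consumer.getOffsetsBefore(
>                     new kafka.javaapi.OffsetRequest(
>                             info, kafka.api.OffsetRequest.CurrentVersion(), "reset-demo"));
>             offset = offsets.offsets(topic, part)[0];
>             // Re-reading from this offset delivers messages that were
>             // already consumed from replica-2, i.e. duplicates.
>         }
>         consumer.close();
>     }
> }
> {code}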