[ https://issues.apache.org/jira/browse/KAFKA-286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13217050#comment-13217050 ]

Neha Narkhede commented on KAFKA-286:
-------------------------------------

Wondering if the patch can get the consumer into the following state:

Say there are 2 consumers in a group, c1 and c2, both consuming topic1, which 
has partitions 0-0, 0-1 and 1-0. Say c1 owns 0-0 and 0-1, and c2 owns 1-0. 

1. Broker 1 goes down. This triggers a rebalancing attempt in c1 and c2. 
2. c1 releases its partition ownership, but fails to complete the rebalance. 
3. Meanwhile, c2 completes rebalancing successfully, now owns partition 0-1, and 
starts consuming data. 
4. c1 starts the next rebalancing attempt and releases partition 0-1 (since 0-1 
is still part of its topicRegistry). It then owns partition 0-0 again and starts 
consuming data. 
5. Effectively, rebalancing has completed successfully, but there is no owner 
for partition 0-1 registered in ZooKeeper. 

I think using the topicRegistry cache is dangerous, since it has to be kept in 
sync with the ownership information in ZooKeeper. How about reading the 
ownership information from ZK along with the other data, and only releasing the 
partitions this consumer actually owns there?
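
As a rough sketch of that suggestion, the release step could be driven by the 
owner znodes themselves rather than by the local topicRegistry cache. This 
assumes the I0Itec ZkClient API the consumer already uses and the 
/consumers/<group>/owners/<topic>/<partition> registry it writes during 
rebalance; the helper name releaseOwnershipFromZk and the group/consumerId/ 
topics parameters are only illustrative, not the actual patch.

{code}
import org.I0Itec.zkclient.ZkClient
import scala.collection.JavaConverters._

// Release only the partitions whose owner znodes still record this consumer,
// instead of deleting everything listed in the local topicRegistry cache.
def releaseOwnershipFromZk(zkClient: ZkClient,
                           group: String,
                           consumerId: String,
                           topics: Iterable[String]): Unit = {
  for (topic <- topics) {
    val ownerDir = "/consumers/" + group + "/owners/" + topic
    if (zkClient.exists(ownerDir)) {
      for (partition <- zkClient.getChildren(ownerDir).asScala) {
        val ownerPath = ownerDir + "/" + partition
        // the owner znode's data is the id of the consumer that claimed it
        val owner = zkClient.readData[String](ownerPath, true)
        // delete the node only if it is still ours, so a slow consumer can
        // never remove a partition that another consumer (c2 in step 3 above)
        // has already re-claimed
        if (owner == consumerId)
          zkClient.delete(ownerPath)
      }
    }
  }
}
{code}

Guarding the delete on the znode's current owner is what closes the gap in 
step 4: even if c1's cached topicRegistry is stale, it can no longer wipe out 
c2's ownership of 0-1.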

                
> consumer sometimes don't release partition ownership properly in ZK during 
> rebalance
> ------------------------------------------------------------------------------------
>
>                 Key: KAFKA-286
>                 URL: https://issues.apache.org/jira/browse/KAFKA-286
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>            Reporter: Jun Rao
>            Assignee: Jun Rao
>             Fix For: 0.7.1
>
>         Attachments: kafka-286.patch
>
>

