[ https://issues.apache.org/jira/browse/KAFKA-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16612441#comment-16612441 ]
Matthias J. Sax commented on KAFKA-6699:
----------------------------------------

If you want to ensure consistency and still be able to write while one broker is down, you need 3 brokers. If you only have 2 brokers and one goes down, you either lose the ability to write, or you trade off correctness guarantees and might lose data. This is independent of ZooKeeper, because those guarantees depend on the data replication factor you get, and the data is stored on the brokers.

Let's phrase it differently: to guarantee consistent writes and not lose any data, you need to ensure that a write reaches at least 2 brokers before it is acked to the producer. Thus, if you only have 2 brokers and one goes down, you can no longer get 2 replicas of your data, and hence you either sacrifice replication (resulting in potential data loss) or you disallow writes until the second broker comes back.

> When one of two Kafka nodes are dead, streaming API cannot handle messaging
> ---------------------------------------------------------------------------
>
>                 Key: KAFKA-6699
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6699
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 0.11.0.2
>            Reporter: Seweryn Habdank-Wojewodzki
>            Priority: Major
>
> Dears,
> I am observing quite often that when a Kafka broker is partly dead (*), applications that use the Streams API do nothing.
> (*) Partly dead in my case means that one of two Kafka nodes is out of order.
> Especially when the disk is full on one machine, the broker goes into some strange state in which the Streams API goes on vacation. The regular producer/consumer API seems to have no problem in such a case.
> Can you have a look at that matter?

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
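The "write must reach at least 2 brokers before the ack" guarantee described in the comment maps onto the `min.insync.replicas` topic setting combined with `acks=all` on the producer. A minimal sketch follows, using the CLI flags of the 0.11 line mentioned in the ticket; the topic name `events` and the localhost addresses are assumptions for illustration:

```shell
# Create a topic with 3 replicas, requiring 2 in-sync replicas for a write
# to be acknowledged. With 3 brokers, one broker can fail and writes still
# succeed; with only 2 brokers, losing one drops the ISR below 2 and writes
# are rejected (NotEnoughReplicas) rather than silently under-replicated.
kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic events \
  --partitions 3 \
  --replication-factor 3 \
  --config min.insync.replicas=2

# The producer must request acks=all so the ack is only sent once the write
# has reached all in-sync replicas (at least min.insync.replicas of them).
kafka-console-producer.sh --broker-list localhost:9092 \
  --topic events \
  --producer-property acks=all
```

This is a configuration sketch, not a runnable test: it requires a live ZooKeeper/Kafka cluster at the assumed addresses.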