[ https://issues.apache.org/jira/browse/KAFKA-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997445#comment-13997445 ]
Simon Cooper commented on KAFKA-1182:
-------------------------------------

The current behaviour is causing problems for us: we have an automated install script that creates several topics, and when creating many topics in sequence Kafka has a tendency to bounce replicas, causing spurious failures in the install script. Ideally, creation of an under-replicated topic (both ad hoc and via the kafka-topics.sh script) could be forced, so that an automated install can complete and the under-replicated topics can be dealt with afterwards.

> Topic not created if number of live brokers less than # replicas
> ----------------------------------------------------------------
>
>                 Key: KAFKA-1182
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1182
>             Project: Kafka
>          Issue Type: Improvement
>          Components: producer
>    Affects Versions: 0.8.0
>         Environment: Centos 6.3
>            Reporter: Hanish Bansal
>            Assignee: Jun Rao
>
> We have a Kafka cluster of 2 nodes (using Kafka version 0.8.0).
> Replication factor: 2
> Number of partitions: 2
>
> Actual behaviour:
> Out of the two nodes, if either node goes down then the topic is not created in Kafka.
>
> Steps to reproduce:
> 1. Create a 2-node Kafka cluster with replication factor 2
> 2. Start the Kafka cluster
> 3. Kill either node
> 4. Start the producer to write to a new topic
> 5. Observe the exception stated below:
>
> 2013-12-12 19:37:19 0 [WARN ] ClientUtils$ - Fetching topic metadata with correlation id 3 for topics [Set(test-topic)] from broker [id:0,host:122.98.12.11,port:9092] failed
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.Net.connect(Native Method)
>         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
>         at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
>         at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
>         at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
>         at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>         at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>         at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>         at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>         at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
>         at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:186)
>         at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
>         at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:149)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>         at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:149)
>         at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:95)
>         at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
>         at kafka.producer.Producer.send(Producer.scala:76)
>         at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>
> Expected behaviour:
> In case the number of live brokers is less than the number of replicas, the topic should still be created so that at least the live brokers can receive the data. They can replicate the data to the other broker once any down broker comes back up. As it stands, when there are fewer live brokers than replicas there is complete loss of data.

--
This message was sent by Atlassian JIRA (v6.2#6252)
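As a stopgap for the install-script failures described in the comment, one workaround is to wrap each topic-creation call in a generic retry loop, so transient replica bounces do not abort the whole install. Below is a minimal sketch; the commented kafka-topics.sh invocation (with the ZooKeeper address `zk:2181` and topic name `my-topic`) is purely illustrative of the 0.8-era CLI and should be adjusted to your deployment:

```shell
#!/bin/sh
# retry <max_attempts> <delay_seconds> <command...>
# Runs the command until it succeeds, sleeping between attempts;
# returns 1 once the attempt budget is exhausted.
retry() {
  max=$1; delay=$2; shift 2
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts: $*" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
  return 0
}

# Illustrative usage (hypothetical addresses/names; flags per the 0.8-era CLI):
# retry 5 10 bin/kafka-topics.sh --create --zookeeper zk:2181 \
#   --topic my-topic --partitions 2 --replication-factor 2
```

This only papers over transient failures; it does not help when a broker stays down longer than the retry budget, which is why the comment asks for a way to force creation of an under-replicated topic instead.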