[ 
https://issues.apache.org/jira/browse/FLINK-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027450#comment-15027450
 ] 

ASF GitHub Bot commented on FLINK-3061:
---------------------------------------

Github user StephanEwen commented on the pull request:

    https://github.com/apache/flink/pull/1395#issuecomment-159710764
  
    I think this looks good.
    
    It only blocks job deployment if all brokers are down. As soon as at least 
one broker is reachable during deployment, the remaining consumers will keep 
recovering and retrying, if I understand it correctly.


> Kafka Consumer is not failing if broker is not available
> --------------------------------------------------------
>
>                 Key: FLINK-3061
>                 URL: https://issues.apache.org/jira/browse/FLINK-3061
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>            Reporter: Robert Metzger
>            Assignee: Robert Metzger
>             Fix For: 1.0.0
>
>
> It seems that the FlinkKafkaConsumer just logs the errors when trying to get 
> the initial list of partitions for the topic, but it is not failing.
> The following code ALWAYS runs, even if there is no broker or zookeeper 
> running.
> {code}
> import java.util.Properties
>
> import org.apache.flink.streaming.api.scala._
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082
> import org.apache.flink.streaming.util.serialization.SimpleStringSchema
>
> object KafkaExample {
>   def main(args: Array[String]) {
>     val env = StreamExecutionEnvironment.getExecutionEnvironment
>     val properties = new Properties()
>     properties.setProperty("bootstrap.servers", "localhost:9092")
>     properties.setProperty("zookeeper.connect", "localhost:2181")
>     properties.setProperty("group.id", "test")
>     val stream = env
>       .addSource(new FlinkKafkaConsumer082[String]("topic", new SimpleStringSchema(), properties))
>       .print
>     env.execute("Flink Kafka Example")
>   }
> }
> {code}
> The runtime consumers are designed to idle when they have no partitions 
> assigned, but there is no check for the case where no partitions could be 
> retrieved at all.
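
A minimal sketch of the fail-fast check the description asks for, assuming the
partition list is fetched once up front. The class and method names here
(`PartitionCheck`, `checkedPartitions`) are hypothetical illustrations, not the
actual connector code:

```java
import java.util.Collections;
import java.util.List;

public class PartitionCheck {

    // Hypothetical helper: if the initial partition discovery returned
    // nothing (e.g. no broker was reachable), throw instead of letting
    // the runtime consumers silently idle with zero partitions assigned.
    static List<Integer> checkedPartitions(String topic, List<Integer> partitions) {
        if (partitions == null || partitions.isEmpty()) {
            throw new RuntimeException(
                "Unable to retrieve any partitions for topic '" + topic
                + "'. Please check previous log entries for broker errors.");
        }
        return partitions;
    }

    public static void main(String[] args) {
        // With a non-empty discovery result, the list passes through unchanged.
        System.out.println(checkedPartitions("topic", List.of(0, 1, 2)));

        // With an empty result (no broker reachable), we fail the job eagerly.
        try {
            checkedPartitions("topic", Collections.emptyList());
        } catch (RuntimeException e) {
            System.out.println("failed fast: " + e.getMessage());
        }
    }
}
```

The point is only that the emptiness check happens during job setup, so the
failure surfaces at deployment rather than as a silently idle source.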



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
