[ https://issues.apache.org/jira/browse/FLINK-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027170#comment-15027170 ]

ASF GitHub Bot commented on FLINK-3061:
---------------------------------------

Github user rmetzger commented on the pull request:

    https://github.com/apache/flink/pull/1395#issuecomment-159676808
  
    > Could it happen that a broker is currently not available but will be in the future?
    
    Yes. With this PR, the consumer fails only if NO broker is available. As 
long as we can get the partition IDs of the topic, the consumer will start.
    
    > With the behaviour before this PR, could Flink then idle and start consuming once the broker becomes available?
    
    No. The parallel consuming threads deployed to the cluster are initialized 
with a list of partition IDs for the topic. If we cannot get the partition 
IDs when creating the consumer, we can never assign the partitions to the 
consumers.
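
The fail-fast behaviour described above can be sketched as follows. This is a minimal illustration, not Flink's actual code; `fetchPartitions` is a hypothetical stand-in for querying the brokers for a topic's partition IDs:

```java
import java.util.Collections;
import java.util.List;

// Sketch (not Flink's actual code) of the fail-fast behaviour added by the PR:
// consumer creation aborts unless at least one partition ID can be fetched.
public class PartitionCheckSketch {
    // Hypothetical stand-in for asking the Kafka brokers for a topic's partitions;
    // when no broker is reachable, no partition IDs come back.
    static List<Integer> fetchPartitions(String topic, boolean brokerReachable) {
        return brokerReachable ? List.of(0, 1, 2) : Collections.emptyList();
    }

    static List<Integer> getPartitionsOrFail(String topic, boolean brokerReachable) {
        List<Integer> partitions = fetchPartitions(topic, brokerReachable);
        if (partitions.isEmpty()) {
            // Fail immediately instead of deploying consumers that can never get work.
            throw new RuntimeException("Unable to retrieve any partitions for topic " + topic);
        }
        return partitions;
    }

    public static void main(String[] args) {
        System.out.println(getPartitionsOrFail("topic", true)); // [0, 1, 2]
        try {
            getPartitionsOrFail("topic", false);
        } catch (RuntimeException e) {
            System.out.println("failed fast: " + e.getMessage());
        }
    }
}
```

Note that this check only requires one reachable broker at startup; a single broker missing from an otherwise healthy cluster does not prevent the consumer from starting.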



> Kafka Consumer is not failing if broker is not available
> --------------------------------------------------------
>
>                 Key: FLINK-3061
>                 URL: https://issues.apache.org/jira/browse/FLINK-3061
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>            Reporter: Robert Metzger
>            Assignee: Robert Metzger
>             Fix For: 1.0.0
>
>
> It seems that the FlinkKafkaConsumer is just logging the errors when trying 
> to get the initial list of partitions for the topic, but it's not failing.
> The following code ALWAYS runs, even if there is no broker or zookeeper 
> running.
> {code}
> import java.util.Properties
>
> import org.apache.flink.streaming.api.scala._
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082
> import org.apache.flink.streaming.util.serialization.SimpleStringSchema
>
> object KafkaExample {
>   def main(args: Array[String]) {
>     val env = StreamExecutionEnvironment.getExecutionEnvironment
>     val properties = new Properties()
>     properties.setProperty("bootstrap.servers", "localhost:9092")
>     properties.setProperty("zookeeper.connect", "localhost:2181")
>     properties.setProperty("group.id", "test")
>     val stream = env
>       .addSource(new FlinkKafkaConsumer082[String]("topic", new SimpleStringSchema(), properties))
>       .print
>     env.execute("Flink Kafka Example")
>   }
> }
> {code}
> The runtime consumers are designed to idle when they have no partitions 
> assigned, but there is no check for the case where the topic has no 
> partitions at all.
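
To see why missing partition metadata makes the job idle forever, here is a sketch of round-robin partition assignment across parallel consumer subtasks (an illustration of the general scheme, not Flink's exact code): with an empty partition list, every subtask receives an empty assignment and idles.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: distributing a topic's partitions among parallel consumer subtasks.
public class PartitionAssignmentSketch {
    // Hypothetical round-robin assignment: subtask i takes every parallelism-th partition.
    static List<Integer> assignedPartitions(List<Integer> allPartitions,
                                            int subtaskIndex, int parallelism) {
        List<Integer> mine = new ArrayList<>();
        for (int i = 0; i < allPartitions.size(); i++) {
            if (i % parallelism == subtaskIndex) {
                mine.add(allPartitions.get(i));
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        // With partitions discovered, work is spread across subtasks.
        System.out.println(assignedPartitions(List.of(0, 1, 2, 3), 0, 2)); // [0, 2]
        // With an empty list (no broker reachable at startup), every subtask
        // gets nothing and idles forever -- hence the fail-fast check in the PR.
        System.out.println(assignedPartitions(List.of(), 0, 2)); // []
    }
}
```

The assignment is computed once at consumer creation, which is why a broker that comes up later cannot rescue a job started with an empty partition list.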



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
