Yes, it gives an error with Kafka 0.7 and works fine with Kafka 0.8.
Maybe there is something I am missing; I will run more tests.
BTW, the error is pasted below.
Any help would be greatly appreciated.
Thank you.
08 Mar 2016 11:33:28,189 INFO [lifecycleSupervisor-1-0] (kafka.utils.Logging$class.info:76) - [flume_qacrbshive01-1457404401976-9136aead], exception during rebalance
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: 10.10.22.24-1435180967245:10.10.22.24:9092
    at kafka.cluster.Broker$.createBroker(Broker.scala:34)
    at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:506)
    at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:504)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at kafka.utils.ZkUtils$.getCluster(ZkUtils.scala:504)
    at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:407)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:402)
    at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:722)
    at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:212)
    at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:80)
    at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:92)
    at org.apache.flume.source.kafka.KafkaSource.start(KafkaSource.java:216)
    at org.apache.flume.source.PollableSourceRunner.start(PollableSourceRunner.java:74)
    at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: kafka.common.BrokerNotAvailableException: Broker id 19 does not exist
    at kafka.cluster.Broker$.createBroker(Broker.scala:42)
    ... 24 more
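For anyone hitting the same trace: the broker string in the exception, "10.10.22.24-1435180967245:10.10.22.24:9092", looks like the plain "creatorId:host:port" string that Kafka 0.7 brokers registered in ZooKeeper, while the 0.8 consumer (Broker$.createBroker in the trace above) expects a JSON payload under /brokers/ids/<id>. A minimal sketch of the difference, with a hypothetical parse_broker_info helper (illustration only, not the actual Kafka parsing code):

```python
import json

def parse_broker_info(raw):
    # Kafka 0.8 writes broker registrations as JSON, e.g.
    # {"host": "10.10.22.24", "port": 9092, ...}
    try:
        info = json.loads(raw)
        return info["host"], int(info["port"])
    except (ValueError, TypeError, KeyError):
        # Kafka 0.7 wrote a plain "creatorId:host:port" string; the
        # creatorId itself contains '-' and '.', so split from the right.
        _creator, host, port = raw.rsplit(":", 2)
        return host, int(port)

# The string from the exception only parses as the 0.7 format:
print(parse_broker_info("10.10.22.24-1435180967245:10.10.22.24:9092"))
# -> ('10.10.22.24', 9092)

# An 0.8-style JSON registration:
print(parse_broker_info('{"host": "10.10.22.24", "port": 9092}'))
# -> ('10.10.22.24', 9092)
```

If that is what is happening here, the exception would be expected behavior: an 0.8-based Flume Kafka source reading an 0.7 cluster's ZooKeeper nodes cannot parse the registration, regardless of configuration.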
On Tue, Mar 8, 2016 at 3:54 AM, Justin Ryan <[email protected]> wrote:
> The ticket says it should be in Flume 1.6; are you running into an error
> related to the Kafka version?
>
> On 3/6/16, 5:51 PM, "Mungeol Heo" <[email protected]> wrote:
>
>>Hi, all.
>>
>>As mentioned in the subject, my question is: are there a source and sink
>>for Kafka 0.7?
>>AFAIK, there are a source and sink for Kafka 0.8 in Flume 1.6.
>>And the URL below mentions support for Kafka 0.7.
>>
>>https://issues.apache.org/jira/browse/FLUME-2242
>>
>>Where are these Kafka 0.7 source and sink?
>>
>>Thank you.
>
>