broker disconnected from cluster

2016-12-28 Thread Alessandro De Maria
Hello, I would like to get some help/advice on some issues I am having with my Kafka cluster. I am running Kafka (kafka_2.11-0.10.1.0) on a 5-broker cluster (Ubuntu 16.04); the configuration is here: http://pastebin.com/cPch8Kd7 Today one of the 5 brokers (id: 1) appeared to disconnect from the

Re: Processing time series data in order

2016-12-28 Thread Ali Akhtar
This will only ensure the order of delivery though, not the actual order of the events, right? I.e. if, due to network lag or any other reason, the producer sends A, then B, but B arrives before A, then B will be returned before A even if they both went to the same partition. Am I correct about
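Kafka only guarantees ordering by offset within a partition, which reflects the order in which records reached the broker, not the order in which the events happened at the source. A common workaround, sketched below in Java, is to stamp each record with its event time (and key related events so they land in the same partition), then reorder by that timestamp on the consuming side; the topic name, key, and broker address here are illustrative placeholders, not anything from the thread.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TimestampedSend {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            // Keep sends to a partition strictly ordered even when retries happen.
            props.put("max.in.flight.requests.per.connection", "1");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                long eventTimeA = System.currentTimeMillis();   // when event A actually happened
                long eventTimeB = eventTimeA + 5;               // B happened later at the source
                // Same key -> same partition; the broker stores records in arrival order,
                // so the event time is carried explicitly for downstream reordering.
                producer.send(new ProducerRecord<>("events", null, eventTimeA, "device-42", "A"));
                producer.send(new ProducerRecord<>("events", null, eventTimeB, "device-42", "B"));
            }
        }
    }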

Re: [ANNOUNCE] Apache Kafka 0.10.1.1 Released

2016-12-28 Thread Ismael Juma
The website has now been updated with the 0.10.1.1 release details. Ismael On Sat, Dec 24, 2016 at 9:29 AM, Ismael Juma wrote: > Hi Guozhang and Allen, > > I filed an INFRA ticket about this: > > https://issues.apache.org/jira/browse/INFRA-13172 > > This has happened to me

Re: How to get and reset Kafka stream application's offsets

2016-12-28 Thread Matthias J. Sax
Exactly. On 12/28/16 10:47 AM, Sachin Mittal wrote: > Understood. So if we want it to start consuming from earliest we should add > props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); > > So when we start the app for the first time it will start from earliest. Later when > we stop this app
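For reference, a minimal Streams configuration along the lines discussed in this thread might look like the sketch below; the application id and broker address are placeholders. The auto.offset.reset setting only takes effect when no committed offsets exist for the application's consumer group, i.e. on the very first start or after the committed offsets have been removed.

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsOffsetResetConfig {
        public static Properties buildConfig() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // hypothetical app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            // Start from the beginning of the input topics when no committed offsets are found.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            return props;
        }
    }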

Re: Unable to create new topics, "no brokers available"

2016-12-28 Thread Stevo Slavić
Hello Alex, The ZooKeeper nodes that Kafka brokers create under the /brokers/ids path, to register themselves as available, are ephemeral, not persistent. Ephemeral ZooKeeper nodes live only as long as the ZooKeeper session that created them is active. The /brokers/ids child nodes are gone by
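A quick way to verify which brokers are currently registered is to list the children of /brokers/ids with a plain ZooKeeper client, as in the sketch below; the connect string and session timeout are placeholders for whatever the cluster actually uses.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ListBrokerIds {
        public static void main(String[] args) throws Exception {
            // Connect to the same ZooKeeper ensemble the brokers use.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, event -> { });
            try {
                // Each live broker registers an ephemeral child node here, e.g. /brokers/ids/1.
                List<String> ids = zk.getChildren("/brokers/ids", false);
                System.out.println("Registered broker ids: " + ids);
            } finally {
                zk.close();
            }
        }
    }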

Re: Unable to create new topics, "no brokers available"

2016-12-28 Thread Alex Eftimie
Quick update: digging into the ZooKeeper data, we observed that the /brokers/ids path was empty. Restarting the Kafka nodes repopulated the ZooKeeper data, and now the error is gone (we are able to create new topics). We didn’t alter data in ZooKeeper manually, but recently we added 2 nodes to a 3

Re: How to get and reset Kafka stream application's offsets

2016-12-28 Thread Sachin Mittal
Understood. So if we want it to start consuming from earliest we should add props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); So when we start the app for the first time it will start from earliest. Later when we stop this app and restart it, it will start from the point where we had last committed

Re: How to get and reset Kafka stream application's offsets

2016-12-28 Thread Matthias J. Sax
Hi Sachin, What do you mean by "with this commented"? Did you set auto.offset.reset to "earliest" or not? The default value is "latest", and if you do not set it to "earliest", the application will start consuming from the end of the topic if no committed offsets are found. For default values of Kafka

Unable to create new topics, "no brokers available"

2016-12-28 Thread Alex Eftimie
Hello, We recently migrated from Kafka 0.8 to 0.10, while keeping the log format and inter-broker communication at the 0.9 version [1]. We have a cluster of two nodes that is working correctly for 10 topics (producers/consumers work fine). Trying to create a new topic raises: kafka-topics.sh
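The migration described above typically corresponds to broker settings along these lines; this is a hypothetical excerpt of server.properties, and the exact version strings depend on the versions actually deployed.

    # Brokers run 0.10 code but still speak the 0.9 inter-broker protocol
    # and write messages in the 0.9 on-disk format.
    inter.broker.protocol.version=0.9.0.1
    log.message.format.version=0.9.0.1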