Re: Kafka rebalancing message lost

2018-12-18 Thread Manoj Khangaonkar
Rebalancing of partitions among consumers does not necessarily mean message loss, but I understand it can be annoying. If Kafka is rebalancing between consumers frequently, it means your consumer code is not polling within the expected timeout, as a result of which Kafka thinks the consumer is gone.
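
A minimal sketch of the consumer settings involved, assuming the Java consumer API; the broker address, group id, and values are placeholders, not recommendations:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "my-group");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    // Maximum gap allowed between poll() calls before the group considers this
    // consumer failed and rebalances its partitions to other members.
    props.put("max.poll.interval.ms", "300000");
    // Fetch fewer records per poll() if processing a full batch takes too long.
    props.put("max.poll.records", "100");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);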

Re: How to pull the data from mobile app directly into Kafka broker

2018-12-04 Thread Manoj Khangaonkar
There is an open source Kafka REST API from Confluent. You could use that to POST to your broker. regards On Tue, Dec 4, 2018 at 5:32 AM Satendra Pratap Singh wrote: > Hi Team, can you help me out? In general I wanted to know: I developed an app and installed it on my mobile; when I logged in
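
A minimal sketch of posting a record over HTTP, assuming the Confluent REST Proxy v2 conventions; the host, port, topic name, and payload are placeholders, and the exact path and content type should be checked against the proxy version in use:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestProxyPost {
        public static void main(String[] args) throws Exception {
            // One record wrapped in the "records" envelope expected by the proxy.
            String body = "{\"records\":[{\"value\":{\"user\":\"alice\",\"event\":\"login\"}}]}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://rest-proxy-host:8082/topics/mobile-events"))
                    .header("Content-Type", "application/vnd.kafka.json.v2+json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }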

Re: Kafka ingestion data not equally distribute among brokers

2018-11-07 Thread Manoj Khangaonkar
> I am new to Kafka, is it something mentioned on this site? > https://kafka.apache.org/0100/javadoc/org/apache/kafka/streams/state/KeyValueStore.html > Where can we set the key and value in Kafka? > Thanks! - Original Message - From: "Manoj Khangao

Re: Kafka ingestion data not equally distribute among brokers

2018-11-05 Thread Manoj Khangaonkar
Hi. In Kafka, topic data is split into partitions, and partitions are assigned to brokers. I assume what you are trying to say is that the distribution of messages across partitions is not balanced. Messages are written to topics as key/value pairs, and they are distributed across partitions based on the key.
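
A minimal sketch of keyed writes with the Java producer API; broker address, topic, and key are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    // With the default partitioner, records that share a key hash to the same
    // partition, so a skewed key distribution produces a skewed partition load.
    producer.send(new ProducerRecord<>("my-topic", "customer-42", "order placed"));
    producer.close();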

Re: Consumer Pause & Scheduled Resume

2018-10-26 Thread Manoj Khangaonkar
for two partitions and gets the assignment for another two partitions (because of a pod termination), how can I pause the consumption if it is not the scheduled time to process the records? Thanks, Pradeep. On Thu, Oct 25, 2018 at 5:48 PM Manoj Khangaonkar wrote:

Re: Consumer Pause & Scheduled Resume

2018-10-25 Thread Manoj Khangaonkar
One item to be aware of with pause and resume is that it applies to the partitions currently assigned to the consumer. But partitions can get revoked, or additional partitions can get assigned to the consumer. After a reassignment, you might be expecting the consumer to be paused but suddenly start getting messages from the newly assigned partitions.
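
A minimal sketch of carrying the paused state across a rebalance, assuming the Java consumer API; the class name and the paused flag are illustrative application code:

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class PauseAwareListener implements ConsumerRebalanceListener {
        private final KafkaConsumer<?, ?> consumer;
        private volatile boolean paused;

        PauseAwareListener(KafkaConsumer<?, ?> consumer) { this.consumer = consumer; }

        void setPaused(boolean paused) { this.paused = paused; }

        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Nothing to track here; revoked partitions are no longer ours.
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Re-apply pause() so newly assigned partitions do not start flowing
            // while the application still expects the consumer to be paused.
            if (paused) {
                consumer.pause(partitions);
            }
        }
    }

Register the listener with consumer.subscribe(topics, listener) so the pause state survives partition reassignment.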

Re: Kafka consumer producer logs

2018-10-16 Thread Manoj Khangaonkar
Producer and consumer logs would be in the respective client applications. To enable them, you would enable logging for the Kafka packages in the client application. For example, if you were using log4j, you would add something like org.apache.kafka.clients=INFO. regards On Tue, Oct 16, 2018
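
A minimal sketch of what that looks like in a log4j 1.x properties file; the levels shown are illustrative and the file lives in the client application, not the broker:

    # log4j.properties in the client application
    log4j.logger.org.apache.kafka.clients=INFO
    # or, to capture everything from the Kafka client libraries:
    log4j.logger.org.apache.kafka=INFO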

Re: Optimal Message Size for Kafka

2018-09-07 Thread Manoj Khangaonkar
Best Kafka performance is with message sizes on the order of a few KB. Larger messages put a heavy load on brokers and are very inefficient. They are inefficient for producers and consumers as well. regards On Thu, Sep 6, 2018 at 11:16 PM SenthilKumar K wrote: > Hello Experts, We are planning to use

Re: Looking for help with a question on the consumer API

2018-08-09 Thread Manoj Khangaonkar
Hi, Yes, if you don't call poll within the configured timeouts, the broker thinks the consumer is gone. But increasing the timeout is not a sustainable design. In general the code in the consumer poll loop should be very fast and do minimal work. Any heavy duty work should be done by handing it off to another thread.
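
A minimal sketch of handing work off, assuming the Java consumer API; consumer configuration is omitted and process() stands in for the application's handler:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    ExecutorService workers = Executors.newFixedThreadPool(4);
    consumer.subscribe(Collections.singletonList("my-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            // Keep the loop thin: submit the heavy work and get back to poll().
            workers.submit(() -> process(record));
        }
        // Note: with hand-off, commit offsets only after the submitted work completes.
    }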

KafkaConsumer pause method not working as expected

2018-07-31 Thread Manoj Khangaonkar
Hi, I am implementing flow control by (1) using the pause(partitions) method on the consumer to stop consumer.poll from returning messages, and (2) using the resume(partitions) method on the consumer to let consumer.poll return messages again. This works well for a while. Several sets of pause-resume
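
A minimal sketch of the pause/resume calls, assuming the Java consumer API and a consumer that has already joined the group (so assignment() is populated); as the reply above on pause/resume and rebalancing notes, a reassignment can reset this state:

    import java.time.Duration;
    import java.util.Set;
    import org.apache.kafka.common.TopicPartition;

    // Pause everything currently assigned: poll() keeps the group membership
    // alive but returns no records until resume() is called.
    Set<TopicPartition> assigned = consumer.assignment();
    consumer.pause(assigned);
    consumer.poll(Duration.ofMillis(100)); // returns empty while paused

    // Later, when it is time to process again:
    consumer.resume(consumer.assignment());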

Re: Kafka consumer commit behaviour on rebalance

2018-02-13 Thread Manoj Khangaonkar
Yes. If a consumer goes down, all the polled messages that were not committed will be redelivered to another consumer. regards On Tue, Feb 13, 2018 at 9:31 AM, pradeep s wrote: > Hi All, > I am running a Kafka consumer (single threaded) on Kubernetes. Application
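
A minimal sketch of the at-least-once pattern behind this, assuming the Java consumer API with enable.auto.commit=false; consumer configuration is omitted and process() is a placeholder handler:

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            process(record); // placeholder for the application's handling
        }
        // Commit only after the batch is processed; anything polled but not yet
        // committed is redelivered to another consumer after a crash or rebalance.
        consumer.commitSync();
    }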

Re: can't feed remote broker with producer demo

2018-01-23 Thread Manoj Khangaonkar
In your server.properties, in either the listeners or advertised.listeners property, replace localhost with the IP address. regards On Mon, Jan 22, 2018 at 7:16 AM, Rotem Jacobi wrote: > Hi, > When running the quickstart guide (producer, broker (with zookeeper) and > consumer
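
A minimal sketch of the relevant server.properties lines; the IP address shown is a placeholder for the broker's externally reachable address:

    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://192.168.1.100:9092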

Re: Capturing and storing these Kafka events for query.

2018-01-11 Thread Manoj Khangaonkar
Hi, If I understood the question correctly, then the better approach is to consume events from the topic and store them in your favorite database, then query the database as needed. Querying the topic for messages in Kafka is not recommended, as that will be a linear search. regards On Thu, Jan 11,

Re: kafka configure problem

2017-12-28 Thread Manoj Khangaonkar
The advertised.listeners property in server.properties needs to have the IP address that the broker is listening on. For example: advertised.listeners=PLAINTEXT://192.168.52.194:9092 regards On Mon, Dec 25, 2017 at 12:20 AM, 刘闯 wrote: > I'm a beginner with Kafka. I used

Re: Seeking advice on Kafka Streams and Kafka Connect

2017-12-21 Thread Manoj Khangaonkar
Hi, I am not a big fan of Kafka Connect. I had a use case where Kafka messages needed to be written to MongoDB. The available third party connectors were less than ideal. To me, a well written Kafka consumer is a simpler and better long-term solution, instead of an additional moving part and
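
A minimal sketch of that kind of consumer, assuming the Java consumer API; the topic name and saveToMongo() are placeholders for the application's own topic and persistence call:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    consumer.subscribe(Collections.singletonList("orders"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
            saveToMongo(record.key(), record.value()); // hypothetical persistence helper
        }
        consumer.commitSync(); // commit after the batch is safely persisted
    }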

Re: How to get the start and end of partition from kafka

2017-12-17 Thread Manoj Khangaonkar
Hi, Not sure about AdminClient, but if you are programming in Java this should be possible using the KafkaConsumer class (org.apache.kafka.clients.consumer.KafkaConsumer). It has beginningOffsets and endOffsets methods that can give you that information. regards On Thu, Dec 14, 2017 at 11:10 PM, 懒羊羊
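
A minimal sketch of looking up those offsets, assuming the Java consumer API (0.10.1 or later); topic name and partition numbers are placeholders and consumer configuration is omitted:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.common.TopicPartition;

    List<TopicPartition> partitions = Arrays.asList(
            new TopicPartition("my-topic", 0),
            new TopicPartition("my-topic", 1));
    // Earliest available offset and the offset of the next message to be written.
    Map<TopicPartition, Long> start = consumer.beginningOffsets(partitions);
    Map<TopicPartition, Long> end = consumer.endOffsets(partitions);
    partitions.forEach(tp ->
            System.out.println(tp + " start=" + start.get(tp) + " end=" + end.get(tp)));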

Re: Installing and Running Kafka

2017-12-17 Thread Manoj Khangaonkar
Hi, Did you use the binary download, or are you trying to build the source code and then run it? With the binary downloads, I have never had an issue. Another possibility is that you have Scala installed and it is getting in the way. regards On Fri, Dec 15, 2017 at 1:54 PM, Karl Keller

Re: scaling kafka brokers

2015-05-21 Thread Manoj Khangaonkar
See the link http://kafka.apache.org/documentation.html#basic_ops_cluster_expansion After you add the new broker, you have to run the partition reassignment tool to reassign partitions. regards On Thu, May 21, 2015 at 3:55 PM, Dillian Murphey crackshotm...@gmail.com wrote: What's out there in

Re: Optimal number of partitions for topic

2015-05-20 Thread Manoj Khangaonkar
Without knowing the actual implementation details, I would guess more partitions implies more parallelism, more concurrency, more threads, and more files to write to, all of which will contribute to more CPU load. Partitions allow you to scale by partitioning the topic across multiple brokers.

Re: Differences between new and legacy scala producer API

2015-05-08 Thread Manoj Khangaonkar
On Thu, May 7, 2015 at 10:01 PM, Rendy Bambang Junior rendy.b.jun...@gmail.com wrote: Hi - The legacy Scala API for the producer has a keyed message with topic, key, partKey, and message. Meanwhile the new API has no partKey. What's the difference between key and partKey? In the new API, the key is also used for partitioning.
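
A minimal sketch of how the new producer API covers both cases; topic, key, partition number, and values are placeholders:

    import org.apache.kafka.clients.producer.ProducerRecord;

    // 1) Let the default partitioner hash the key to choose the partition.
    ProducerRecord<String, String> byKey =
            new ProducerRecord<>("my-topic", "user-42", "payload");

    // 2) Route explicitly by passing a partition number, independent of the key
    //    (roughly the role the old partKey played).
    ProducerRecord<String, String> byPartition =
            new ProducerRecord<>("my-topic", 3, "user-42", "payload");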

Re: Regarding key to b sent as part of producer message Please help

2015-04-25 Thread Manoj Khangaonkar
Hi, Your key seems to be a String, so key.serializer.class might need to be set to StringEncoder. regards On Sat, Apr 25, 2015 at 10:43 AM, Gaurav Agarwal gaurav130...@gmail.com wrote: Hello, I am sending a message from the producer like this with DefaultEncoder: KeyedMessage<String, byte[]>
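
A minimal sketch of the matching legacy (0.8.x) producer configuration; broker list, topic, and payload are placeholders:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    Properties props = new Properties();
    props.put("metadata.broker.list", "localhost:9092");
    props.put("serializer.class", "kafka.serializer.DefaultEncoder");    // byte[] values
    props.put("key.serializer.class", "kafka.serializer.StringEncoder"); // String keys
    Producer<String, byte[]> producer = new Producer<>(new ProducerConfig(props));
    producer.send(new KeyedMessage<>("my-topic", "some-key", "payload".getBytes()));
    producer.close();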

Re: Fetch API Offset

2015-04-21 Thread Manoj Khangaonkar
Hi, I suspect that if some messages from the given offset have expired, then they will not be returned. regards On Tue, Apr 21, 2015 at 5:14 AM, Piotr Husiatyński p...@optiopay.com wrote: According to documentation, sending a fetch request with an offset value results in messages starting with the given

Re: Fetch API Offset

2015-04-21 Thread Manoj Khangaonkar
is saying it will be included in the response. On Tue, Apr 21, 2015 at 3:35 PM, Manoj Khangaonkar khangaon...@gmail.com wrote: Hi, I suspect that if some messages from the given offset have expired, then they will not be returned. regards On Tue, Apr 21, 2015 at 5:14 AM, Piotr Husiatyński p

Re: SimpleConsumer.getOffsetsBefore problem

2015-04-16 Thread Manoj Khangaonkar
Hi, Earliest and Latest are like enums that denote the first and last messages in the partition (or the offsets for those positions). My understanding is that you can only go by offsets, not by timestamps. regards On Thu, Apr 16, 2015 at 7:35 AM, Alexey Borschenko

Re: Design questions related to kafka

2015-04-15 Thread Manoj Khangaonkar
# I looked at the documents of Kafka and I see that there is no way a consumer instance can read specific messages from a partition. With Kafka you can read messages from the beginning multiple times. Since you say later that you do not have many messages per topic, you can iterate over the
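
A minimal sketch of re-reading a topic from the start, using the newer Java consumer API (which postdates this thread); consumer configuration and topic name are placeholders:

    import java.time.Duration;
    import java.util.Collections;

    consumer.subscribe(Collections.singletonList("my-topic"));
    consumer.poll(Duration.ofMillis(0));              // join the group and get an assignment
    consumer.seekToBeginning(consumer.assignment());  // rewind every assigned partition
    // Subsequent poll() calls then return the messages again from the earliest offset.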

Re: Design questions related to kafka

2015-04-15 Thread Manoj Khangaonkar
09:31, Manoj Khangaonkar wrote: # I looked at the documents of Kafka and I see that there is no way a consumer instance can read specific messages from a partition. With Kafka you can read messages from the beginning multiple times. Since you say later that you do not have many

Re: Some queries about java api for kafka producer

2015-04-12 Thread Manoj Khangaonkar
Clarification: my answer applies to the new producer API in 0.8.2. regards On Sun, Apr 12, 2015 at 4:00 PM, Manoj Khangaonkar khangaon...@gmail.com wrote: Hi, For (1), from the javadocs: the producer is *thread safe* and should generally be shared among all threads for best performance. (2

Re: Some queries about java api for kafka producer

2015-04-12 Thread Manoj Khangaonkar
Hi, For (1), from the javadocs: the producer is *thread safe* and should generally be shared among all threads for best performance. (2) The answer to (1) implies no pool is necessary. regards On Sun, Apr 12, 2015 at 12:38 AM, dhiraj prajapati dhirajp...@gmail.com wrote: Hi, I want to send data to Apache
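
A minimal sketch of sharing one producer across threads, assuming the Java producer API; broker address, topic, and the thread pool size are placeholders:

    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // One thread-safe producer instance shared by all threads; no pool of producers needed.
    KafkaProducer<String, String> shared = new KafkaProducer<>(props);

    ExecutorService pool = Executors.newFixedThreadPool(8);
    for (int i = 0; i < 100; i++) {
        final int n = i;
        pool.submit(() -> shared.send(new ProducerRecord<>("my-topic", "key-" + n, "value-" + n)));
    }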

Re: Message routing, Kafka-to-REST and HTTP API tools/frameworks for Kafka?

2015-03-24 Thread Manoj Khangaonkar
Hi, For (1), and perhaps even for (2) where distribution/filtering at scale is required, I would look at using Apache Storm with Kafka. For (3), it seems you just need REST services wrapping Kafka consumers/producers. I would start with the usual suspects like Jersey. regards On Tue, Mar 24, 2015

Storm Kafka spout

2014-04-23 Thread Manoj Khangaonkar
Hi, What is the open source Kafka spout for Storm that people are using? What is the experience with https://github.com/nathanmarz/storm-contrib/tree/master/storm-kafka ? regards --

Re: Storm Kafka spout

2014-04-23 Thread Manoj Khangaonkar
On Wed, Apr 23, 2014 at 7:09 PM, Manoj Khangaonkar khangaon...@gmail.com wrote: Hi, What is the open source Kafka spout for Storm that people are using? What is the experience with https://github.com/nathanmarz/storm-contrib/tree/master/storm-kafka

Re: Unable to get off the ground following the quick start section

2014-04-17 Thread Manoj Khangaonkar
Hi, I was able to get the quickstart instructions working recently, but (1) I used the binary download and (2) I did not use the ZooKeeper packaged with Kafka; I installed ZooKeeper using a download from the ZooKeeper project. (I did get a lot of exceptions with the packaged ZooKeeper.) regards

Kafka API docs

2014-04-10 Thread Manoj Khangaonkar
Hi, The API description at http://kafka.apache.org/documentation.html#api is rather thin when you are used to the API docs of other Apache projects like Hadoop, Cassandra, Tomcat, etc. Is there a comprehensive API description somewhere (like javadocs)? Besides looking at the source