Rebalancing of partitions across consumers does not necessarily mean loss of
messages.
But I understand it can be annoying.
If Kafka is rebalancing between consumers frequently, it means your
consumer code is not polling within the expected timeout, as a result of which
Kafka thinks the consumer is gone.
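The consumer settings involved look roughly like this (illustrative values only; this is a sketch, not a recommendation, and tuning them is no substitute for a fast poll loop):

# maximum time allowed between calls to poll() before the consumer is considered failed
max.poll.interval.ms=300000
# how long the group coordinator waits for heartbeats before dropping the member
session.timeout.ms=10000
# fewer records per poll() means less work per loop iteration
max.poll.records=500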
There is an open-source Kafka REST API (the REST Proxy) from Confluent. You could
use that to POST messages to your cluster.
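A rough sketch of such a request (assuming the REST Proxy is running on its default port 8082 in front of the cluster, and a topic named test exists; both are placeholders):

curl -X POST http://localhost:8082/topics/test \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -d '{"records":[{"value":{"greeting":"hello"}}]}'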
regards
On Tue, Dec 4, 2018 at 5:32 AM Satendra Pratap Singh
wrote:
> Hi Team,
>
> Can you help me out? In general, I wanted to know: I developed an app and
> installed it on my mobile; when I logged in
> I am new to Kafka; is it something mentioned on this site?
> https://kafka.apache.org/0100/javadoc/org/apache/kafka/streams/state/KeyValueStore.html
>
> Where can we set the key and value in Kafka?
>
> Thanks!
>
>
> - Original Message -
> From: "Manoj Khangao
Hi
In Kafka, topic data is split into partitions. Partitions are assigned to
brokers.
I assume what you are trying to say is that the distribution of messages
across partitions is not balanced.
Messages are written to topics as key,value pairs. The messages are distributed
across partitions based on the key (the default partitioner hashes the key to
pick a partition).
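As a small illustration with the Java producer (a sketch; the topic name, key, and bootstrap server are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // With the default partitioner, records sharing a key land on the same partition.
        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("orders", "customer-42", "order created"));
        producer.close();
    }
}

Messages with a null key are spread across partitions instead of being pinned to one.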
for two partitions and gets the assignment for another
> two partitions (because of a pod termination), how can I pause the
> consumption if it's not the scheduled time to process the records.
> Thanks
> Pradeep
>
> On Thu, Oct 25, 2018 at 5:48 PM Manoj Khangaonkar
> wrote:
>
One item to be aware of with pause and resume is that they apply to the
partitions currently assigned to the consumer.
But partitions can get revoked, or additional partitions can get assigned to the
consumer.
After a reassignment, you might be expecting the consumer to be paused but it can
suddenly start getting messages from the newly assigned partitions.
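A minimal sketch of guarding against that with a rebalance listener (the class name, topic, and paused flag are made up for illustration):

import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PauseAwareSubscriber {
    // Application-level flag: true while consumption should stay paused.
    private final AtomicBoolean paused = new AtomicBoolean(false);

    public void subscribe(KafkaConsumer<String, String> consumer, String topic) {
        consumer.subscribe(Collections.singletonList(topic), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Nothing to do here: pause state for revoked partitions is simply lost.
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                if (paused.get()) {
                    // Newly assigned partitions come back unpaused, so pause them again.
                    consumer.pause(partitions);
                }
            }
        });
    }
}

Whenever the application pauses or resumes, it would also flip the flag so a later rebalance restores the intended state.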
Producer and consumer logs would be in the respective client applications.
To enable them, turn on logging for the kafka packages in the client
application.
For example, if you were using log4j, you would add something like
org.apache.kafka.clients=INFO
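In a log4j 1.x properties file that entry would look something like this (a sketch; pick the level and packages you need):

# log4j.properties in the client application (log4j 1.x syntax)
log4j.logger.org.apache.kafka.clients=INFO
# or DEBUG on the producer/consumer sub-packages for more detail
log4j.logger.org.apache.kafka.clients.producer=DEBUG
log4j.logger.org.apache.kafka.clients.consumer=DEBUG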
regards
On Tue, Oct 16, 2018
Best Kafka performance is with message sizes on the order of a few KB.
Larger messages put a heavy load on brokers and are very inefficient. They are
inefficient for producers and consumers as well.
regards
On Thu, Sep 6, 2018 at 11:16 PM SenthilKumar K
wrote:
> Hello Experts, We are planning to use
Hi,
Yes, if you don't call poll within the configured timeouts, the broker
thinks the consumer is gone.
But increasing the timeout is not a sustainable design.
In general, the code in the consumer poll loop should be very fast and do
minimal work.
Any heavy-duty work should be done by handing it off to a separate thread or
worker pool.
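A minimal sketch of that hand-off pattern (the pool size and the process method are illustrative; offset handling is simplified):

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class HandOffLoop {
    public static void run(KafkaConsumer<String, String> consumer) {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        while (true) {
            // poll() stays fast because no heavy work happens on this thread.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                workers.submit(() -> process(record)); // heavy work runs on the pool
            }
            // Simplified: commits may run ahead of the workers (see note below).
            consumer.commitAsync();
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder for the application's heavy-duty work
    }
}

Note that this sketch commits offsets before the workers finish, so a crash could skip records; a real implementation would track completed offsets before committing.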
Hi,
I am implementing flow control by
(1) using the pause(partitions) method on the consumer to stop
consumer.poll from returning messages.
(2) using the resume(partitions) method on the consumer to let
consumer.poll return messages.
This works well for a while. Several sets of pause-resume
Yes. If a consumer went down, all the polled messages that
were not committed will be redelivered to another consumer.
regards
On Tue, Feb 13, 2018 at 9:31 AM, pradeep s
wrote:
> Hi All,
> I am running a Kafka consumer (single threaded) on Kubernetes. Application
In your server.properties, in either the listeners or advertised.listeners
property, replace localhost with the IP address.
regards
On Mon, Jan 22, 2018 at 7:16 AM, Rotem Jacobi wrote:
> Hi,
> When running the quickstart guide (producer, broker (with zookeeper) and
> consumer
Hi,
If I understood the question correctly, then the better approach is to
consume events from the topic and store them in
your favorite database. Then query the database as needed.
Querying the topic for messages in Kafka is not recommended, as that would be
a linear search.
regards
On Thu, Jan 11,
The advertised.listeners property in server.properties needs to have the IP
address that the broker is listening on.
For example:
advertised.listeners=PLAINTEXT://192.168.52.194:9092
regards
On Mon, Dec 25, 2017 at 12:20 AM, 刘闯 wrote:
> I'm a beginner with Kafka. I used
Hi
I am not a big fan of Kafka Connect.
I had a use case where Kafka messages needed to be written to MongoDB. The
available third-party connectors were less than ideal.
To me, a well-written Kafka consumer is a simpler and better long-term
solution than an additional moving part and another system to operate.
Hi
Not sure about the AdminClient, but if you are programming in Java this should be
possible by using the KafkaConsumer class
org.apache.kafka.clients.consumer.KafkaConsumer
It has beginningOffsets and endOffsets methods that can give you the
information.
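A sketch of using them (the topic, partition number, and consumer construction are assumed):

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetLookup {
    public static void print(KafkaConsumer<String, String> consumer) {
        TopicPartition tp = new TopicPartition("my-topic", 0); // illustrative topic/partition
        Map<TopicPartition, Long> earliest = consumer.beginningOffsets(Collections.singletonList(tp));
        Map<TopicPartition, Long> latest = consumer.endOffsets(Collections.singletonList(tp));
        System.out.println("earliest=" + earliest.get(tp) + " latest=" + latest.get(tp));
    }
}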
regards
On Thu, Dec 14, 2017 at 11:10 PM, 懒羊羊
Hi
Did you use the binary download, or are you trying to build the source
code and then run it?
With binary downloads, I have never had an issue.
Another possibility is that you have Scala installed and it is getting in the way.
regards
On Fri, Dec 15, 2017 at 1:54 PM, Karl Keller
see the link
http://kafka.apache.org/documentation.html#basic_ops_cluster_expansion
After you add the new broker, you have to run the partition reassignment
tool to reassign partitions.
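Roughly, for the versions of that era (file names and the broker list are placeholders; --generate prints a proposed plan that you save and then apply with --execute):

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics-to-move.json --broker-list "0,1,2" --generate
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --execute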
regards
On Thu, May 21, 2015 at 3:55 PM, Dillian Murphey crackshotm...@gmail.com
wrote:
What's out there in
Without knowing the actual implementation details, I would guess more
partitions imply more parallelism, more concurrency, more threads, more
files to write to, all of which will contribute to more CPU load.
Partitions allow you to scale by partitioning the topic across multiple
brokers.
On Thu, May 7, 2015 at 10:01 PM, Rendy Bambang Junior
rendy.b.jun...@gmail.com wrote:
Hi
- The legacy Scala API for the producer has KeyedMessage with topic,
key, partKey, and message. Meanwhile the new API has no partKey. What's the
difference between key and partKey?
In the new API, the key is used both to pick the partition and is sent along
with the message.
Hi
Your key seems to be String.
key.serializer.class might need to be set to StringEncoder.
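With the old Scala producer the relevant properties would be something like:

# values sent as raw bytes, keys as strings
serializer.class=kafka.serializer.DefaultEncoder
key.serializer.class=kafka.serializer.StringEncoder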
regards
On Sat, Apr 25, 2015 at 10:43 AM, Gaurav Agarwal gaurav130...@gmail.com
wrote:
Hello
I am sending messages from the producer like this with DefaultEncoder:
KeyedMessage<String, byte[]>
Hi,
I suspect that if some messages after the given offset have expired, they
will not be returned.
regards
On Tue, Apr 21, 2015 at 5:14 AM, Piotr Husiatyński p...@optiopay.com wrote:
According to the documentation, sending a fetch request with an offset value
results in messages starting with the given
is saying it will be included in response.
On Tue, Apr 21, 2015 at 3:35 PM, Manoj Khangaonkar
khangaon...@gmail.com wrote:
Hi,
I suspect that if some messages after the given offset have expired, they
will not be returned.
regards
On Tue, Apr 21, 2015 at 5:14 AM, Piotr Husiatyński p
Hi,
Earliest and Latest are like enums that denote the first and last messages
in the partition (or the offsets for those positions).
My understanding is that you can only fetch based on offsets, not on timestamps.
regards
On Thu, Apr 16, 2015 at 7:35 AM, Alexey Borschenko
# I looked at the documents of Kafka and I see that there is no way a
consumer instance can
read specific messages from a partition.
With Kafka you can read messages from the beginning multiple times. Since you
say later that
you do not have many messages per topic, you can iterate over the messages
from the beginning.
Clarification. My answer applies to the new producer API in 0.8.2.
regards
On Sun, Apr 12, 2015 at 4:00 PM, Manoj Khangaonkar khangaon...@gmail.com
wrote:
Hi,
For (1), from the java docs: "The producer is *thread safe* and should
generally be shared among all threads for best performance."
(2
Hi,
For (1), from the java docs: "The producer is *thread safe* and should
generally be shared among all threads for best performance."
(2) Point (1) implies no pool is necessary.
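A minimal sketch of sharing a single instance across threads (class name and configuration are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedProducer {
    // One producer instance shared by every thread in the application; no pool needed.
    private static final Producer<String, String> PRODUCER = new KafkaProducer<>(props());

    private static Properties props() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // placeholder
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return p;
    }

    public static void send(String topic, String key, String value) {
        PRODUCER.send(new ProducerRecord<>(topic, key, value));
    }
}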
regards
On Sun, Apr 12, 2015 at 12:38 AM, dhiraj prajapati dhirajp...@gmail.com
wrote:
Hi,
I want to send data to apache
Hi,
For (1), and perhaps even for (2) where distribution/filtering at scale is
required, I would look at using Apache Storm with Kafka.
For (3), it seems you just need REST services wrapping Kafka
consumers/producers. I would start with the usual suspects like Jersey.
regards
On Tue, Mar 24, 2015
Hi,
What is the open-source Kafka spout for Storm that people are using?
What is the experience with
https://github.com/nathanmarz/storm-contrib/tree/master/storm-kafka ?
regards
Hi
I was able to get the quickstart instructions working recently, but note:
(1) I used the binary download.
(2) I did not use the ZooKeeper packaged with Kafka. I installed ZooKeeper
using a download from the ZooKeeper project.
(I did get a lot of exceptions with the packaged ZooKeeper.)
regards
Hi,
The API description at http://kafka.apache.org/documentation.html#api is
rather thin when you are used to the API docs of other Apache projects
like Hadoop, Cassandra, Tomcat, etc.
Is there a comprehensive API description somewhere (like javadocs)?
Besides looking at the source