We have a 9-node cluster running 0.8.2.1 that does around 545 thousand
messages (kafka-messages-in) per second. Each of our brokers has 30 GB of
memory and 16 cores. We give the brokers themselves 2 GB of heap. Each broker
ranges from around 33-40% CPU utilization. The values for both
kafka-bytes-in
Thanks Todd. I figured out the problem on my client end (independent of
these messages being kind of noisy). My SimpleConsumer was querying every
broker (instead of only the brokers it needed to talk to) for offset
requests every minute. Given I have more than 50 clients, every server
received a
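For illustration, here is a minimal sketch of the fix described above (broker ids and partition numbers are hypothetical, and the leader map would in practice come from a topic metadata request): group partitions by their leader so that one offset request goes to each leader, rather than one to every broker in the cluster.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LeaderRouting {
    // Group partitions by the broker id that leads them, so offset
    // requests are sent only to the brokers that own the partitions.
    public static Map<Integer, List<Integer>> groupByLeader(Map<Integer, Integer> leaderFor) {
        Map<Integer, List<Integer>> byLeader = new HashMap<>();
        for (Map.Entry<Integer, Integer> e : leaderFor.entrySet()) {
            byLeader.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        return byLeader;
    }

    public static void main(String[] args) {
        // Hypothetical leader assignment: partition -> leader broker id.
        Map<Integer, Integer> leaderFor = new HashMap<>();
        leaderFor.put(0, 101);
        leaderFor.put(1, 102);
        leaderFor.put(2, 101);
        // Two leaders to contact instead of every broker in the cluster.
        System.out.println("brokers to contact: " + groupByLeader(leaderFor).size());
    }
}
```

With 50+ clients each polling every broker once a minute, the per-broker request load is the client count times the broker count; routing by leader drops that to roughly the client count.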
This message is regarding a normal connection close. You see it in the logs
for any connection - consumers, producers, replica fetchers. It can be
particularly noisy because metadata requests often happen on their own
connection.
The log message has been moved to debug level in recent commits (it
My broker logs are full of log messages of the following type:
INFO [kafka-network-thread-9092-1] [kafka.network.Processor]: Closing
socket connection to /some_ip_that_I_know.
I see at least one every 4-5 seconds. Something I identified was that the
IP of the closed cl
The selector is probably not the issue. If there is no incoming traffic,
selector.select(300) won't return until after 300ms.
Thanks,
Jun
On Thu, Sep 17, 2015 at 1:13 PM, Jaikiran Pai
wrote:
> Sending this to the dev list since the Kafka dev team might have more
> inputs on this one. Can someo
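Jun's point about the selector can be demonstrated with plain Java NIO: with no channels registered and no pending wakeup, select(300) blocks for roughly the full timeout before returning 0, so the loop is not spinning.

```java
import java.nio.channels.Selector;

public class SelectTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (Selector selector = Selector.open()) {
            long start = System.nanoTime();
            // Nothing is registered on this selector and wakeup() has not
            // been called, so select(300) waits ~300 ms and returns 0.
            int ready = selector.select(300);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("ready=" + ready + ", waited ~" + elapsedMs + " ms");
        }
    }
}
```

The same holds for the broker's network threads: an idle selector parks the thread in the kernel rather than burning CPU.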
Consumer offsets in Zookeeper are not handled by the Kafka brokers at all;
the consumer writes them directly to Zookeeper. Most likely, what you are
seeing is the interval at which the consumer commits offsets.
Assuming that you are using the auto.commit.enable setting (it defaults to
true
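As a sketch of the relevant old (ZooKeeper-based) consumer settings, with hypothetical connection and group values: lowering auto.commit.interval.ms narrows the lag between the offsets a consumer has processed and what is visible in ZooKeeper, at the cost of more ZooKeeper writes.

```java
import java.util.Properties;

public class ConsumerOffsetConfig {
    // Hypothetical configuration for the old high-level consumer; the
    // consumer itself writes offsets to ZooKeeper on this interval when
    // auto-commit is enabled (the default).
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");   // assumed ZK address
        props.put("group.id", "example-group");       // assumed group name
        props.put("auto.commit.enable", "true");
        props.put("auto.commit.interval.ms", "1000"); // commit every second
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("auto.commit.interval.ms"));
    }
}
```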
Has anyone faced this issue?
Regards,
Nitin Kumar Sharma.
On Wed, Sep 2, 2015 at 5:32 PM, nitin sharma
wrote:
> Hi All,
>
> I have run into a weird issue with my Kafka setup.
>
> I see that it takes around 5-6 sec for Zookeeper to update the offset for
> Kafka topics.
>
> I am running "ConsumerOff
Hey Jun,
Should we also include https://issues.apache.org/jira/browse/KAFKA-2390 in
0.9.0? Becket told me that it is one of the patches (
https://issues.apache.org/jira/browse/KAFKA-2387) needed for the new
consumer API.
Thanks,
Dong
On Tue, Sep 15, 2015 at 11:01 PM, Jason Rosenberg wrote:
> I
Looking at the docs here (
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka),
it's possible to attach metadata as a string to each partition for the
consumer group using the PartitionData constructor
(http://apache.osuosl.org/kafka/0.8.2-beta/java-
Sending this to the dev list since the Kafka dev team might have more
inputs on this one. Can someone please take a look at the issue noted
below and whether the suggested change makes sense?
-Jaikiran
On Tuesday 15 September 2015 12:03 AM, Jaikiran Pai wrote:
We have been using Kafka for a whi
Hi all,
can I add custom metadata to the new commit topic which I could use to
recover my app? This would give me some level of consistency if I could
commit my additional data at once instead of writing them to another topic.
Many thanks,
Petr
It will pull the available section of the log into the new replica. In other
words, yes, it will copy the "entire" log, where "entire" means everything
from the earliest available offset, which is probably not 0.
In the case of a compacted log the replicated log may or may not have the same
structure (I think
Hi,
If we add a new broker and then assign it as a new replica for a topic,
does the entire log for the topic get copied to that new node or does the
new node just get new data?
Thanks
--Scott Thibault
Oh great! Thanks for that tip - that looks exactly like what I described.
Surprised/embarrassed I didn't find that myself whilst searching.
-Original Message-
From: tao xiao [mailto:xiaotao...@gmail.com]
Sent: 17 September 2015 02:20
To: users@kafka.apache.org
Subject: Re: topics, partiti