num.consumer.fetchers is the maximum number of fetcher threads that can be spawned; it doesn't guarantee that you actually get as many fetcher threads as you specify.
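For context, this knob lives in the consumer config that mirror maker is pointed at; a minimal sketch assuming 0.8.x property names (the values are illustrative, not recommendations):

```properties
# consumer.properties passed to mirror maker (hypothetical values)
# The effective number of fetchers also depends on how partitions are
# spread across brokers, so you may see fewer threads than configured.
num.consumer.fetchers=4
# Fetch size often matters more for throughput than thread count.
fetch.message.max.bytes=1048576
```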
To me the metrics suggest a very slow consumption rate: only 18.21 bytes/minute. Here is the benchmark LinkedIn does
Hi Xiao,
I have finally got JMX monitoring enabled for my Kafka nodes in the test environment, and here is what I observed.
I was monitoring MBeans under the kafka.consumer domain of the JVM running the Kafka Mirror Maker process.
AllTopicsBytes == 18.21 bytes/minute
Hi Vamsi,
You can also see the example here
https://github.com/madhukarbharti/kafka-8.2.1-test/blob/master/src/com/bharti/kafka/offset/OffsetHandler.java
if you want to use the Java API to get the offset from a topic.
Regards,
Madhukar
On Mon, Apr 13, 2015 at 10:25 PM, 4mayank 4may...@gmail.com
Yeah, the current ConsumerOffsetChecker has this issue (maybe a bug too) if the offset storage is Kafka and no offset has been committed: it will throw a ZK exception, which is very confusing. KAFKA-1951 was opened for this but was not checked in.
Thanks.
Jiangjie (Becket) Qin
On 4/13/15, 9:55 AM,
Here's the algorithm as described in AdminUtils.scala:
/**
 * There are 2 goals of replica assignment:
 * 1. Spread the replicas evenly among brokers.
 * 2. For partitions assigned to a particular broker, their other replicas are spread over the other brokers.
 *
 * To achieve this
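The two goals in that comment can be modeled with a short Python sketch (a simplified reading of the Scala code, ignoring the random start index and start shift Kafka actually uses; `assign_replicas` is a hypothetical name, not a Kafka API):

```python
def assign_replicas(n_brokers, n_partitions, replication_factor,
                    start_index=0, start_shift=0):
    """Simplified model of AdminUtils-style round-robin replica assignment."""
    assignment = {}
    shift = start_shift
    for p in range(n_partitions):
        # Bump the shift each time we wrap around the broker list, so
        # follower replicas of later partitions land on different brokers.
        if p > 0 and p % n_brokers == 0:
            shift += 1
        first = (p + start_index) % n_brokers  # goal 1: spread leaders evenly
        replicas = [first]
        for r in range(replication_factor - 1):
            # goal 2: place the other replicas at an increasing offset
            # from the first, never colliding with it
            replicas.append((first + 1 + (shift + r) % (n_brokers - 1)) % n_brokers)
        assignment[p] = replicas
    return assignment
```

With 5 brokers and 10 partitions at replication factor 3, each broker leads exactly 2 partitions and every replica set contains 3 distinct brokers.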
how about the consumer lag of mirror maker?
On Mon, Apr 13, 2015 at 1:33 PM, nitin sharma kumarsharma.ni...@gmail.com
wrote:
I just tested that too, and below are the stats. It is clear that with kafka-consumer-perf-test.sh I am able to get a high throughput, around 44.0213 MB/sec.
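For a sense of scale, the gap between the two observed rates works out to roughly eight orders of magnitude (a back-of-the-envelope check in Python):

```python
# Numbers taken from the thread: perf test vs. mirror maker JMX metric.
perf_test_bytes_per_min = 44.0213 * 1024 * 1024 * 60  # kafka-consumer-perf-test.sh
mirror_maker_bytes_per_min = 18.21                    # AllTopicsBytes from JMX
ratio = perf_test_bytes_per_min / mirror_maker_bytes_per_min
print(f"perf test is about {ratio:.2e}x faster")
```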
Seriously, some configuration needs to be tweaked in the MirrorMaker configuration for speedy processing... Can you think of something?
The mailing list is self-service, you can find ways to unsubscribe here
http://kafka.apache.org/contact.html
On Mon, Apr 13, 2015 at 12:16 PM, Orelowitz, David david.orelow...@baml.com
wrote:
BTW, it seems you are referring to the producer, not the consumer as claimed in the title.
On Mon, Apr 13, 2015 at 5:54 PM, François Méthot fmetho...@gmail.com
wrote:
It could be that the maximum number of connections was reached for the client IP you are using. The default is 10, but it can be
I highly recommend https://github.com/airbnb/kafkat, which will simplify
your partition management tasks. Use it with
https://github.com/airbnb/kafkat/pull/3 for partition specific reassignment.
Chi
On Mon, Apr 13, 2015 at 4:08 AM, Jan Filipiak jan.filip...@trivago.com
wrote:
Hey,
try to
It could be that the maximum number of connections was reached for the client IP you are using. The default is 10, but it can be changed. I had similar intermittent issues because of that.
The property is max.connections.per.ip
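If that cap is the problem, it is a broker-side setting; a sketch of the relevant server.properties line, assuming the property name mentioned above (the value is illustrative):

```properties
# server.properties (broker side, hypothetical value)
# Raise the per-client-IP connection cap if many clients share one host.
max.connections.per.ip=50
```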
Le 2015-04-12 11:20 PM, kaybin wong kaybinw...@gmail.com a écrit :
Kafka version is 0.8.1.1. On taking a thread dump against one of our servers in the Kafka cluster, I see a lot of threads with the message below:
SOMEID-9 id=67784 idx=0x75c tid=24485 prio=5 alive, parked, native_blocked,
daemon
-- Parking to wait for:
"Parking to wait for" just means the thread has been put to sleep while waiting for some synchronized resource. In this case, ConditionObject indicates it's probably await()ing on a condition variable. This almost always means the thread is just waiting for a notification from another thread that
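The pattern being described can be reproduced in a few lines; Python is used here only for illustration (the dumped threads are Java, parked in java.util.concurrent's ConditionObject):

```python
import threading

cond = threading.Condition()
ready = False
events = []

def worker():
    # This thread "parks" inside wait() until another thread signals it,
    # which is the state the thread-dump frames above describe.
    with cond:
        while not ready:
            cond.wait()
        events.append("woke")

t = threading.Thread(target=worker)
t.start()
with cond:
    ready = True
    cond.notify()  # the notification the parked thread is waiting for
t.join()
```

A dump taken while the worker sits in wait() would show it as parked but otherwise healthy; such threads are idle, not stuck.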
On Mon, Apr 13, 2015 at 10:10 PM, bit1...@163.com bit1...@163.com wrote:
Hi, Kafka experts:
I got several questions about auto.offset.reset. This configuration parameter governs how the consumer reads messages from Kafka when there is no initial offset in ZooKeeper or an offset is out of
Hi, Kafka experts:
I got several questions about auto.offset.reset. This configuration parameter governs how the consumer reads messages from Kafka when there is no initial offset in ZooKeeper or an offset is out of range.
Q1. Does "no initial offset in ZooKeeper" mean that there isn't any
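For reference, a consumer.properties sketch showing the two values this parameter accepts in the 0.8.x high-level consumer (property names assumed from that era):

```properties
# consumer.properties (0.8.x high-level consumer)
# smallest: start from the earliest available offset when there is no
#           committed offset, or when the stored offset is out of range.
# largest:  jump to the end of the log (the default).
auto.offset.reset=smallest
```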
Thanks for the reference. But it doesn't seem to cover how a particular topic is assigned to a particular broker, or how the replicas are chosen. For example, if I have brokers A, B, C, D and E, what algorithm is used to assign topic X, partition 1 to brokers B, D and E if the chosen replication
Hi Guys
How do topics get assigned to brokers? I mean, if I were to create a topic X and publish to it, how does Kafka assign the topic and the message to a particular broker? If I create a topic with, say, 10 partitions, how does Kafka assign each partition to a different broker?
--
Cheers
Thanks a lot
On 13 Apr 2015 06:08, Manoj Khangaonkar khangaon...@gmail.com wrote:
Clarification. My answer applies to the new producer API in 0.8.2.
regards
On Sun, Apr 12, 2015 at 4:00 PM, Manoj Khangaonkar khangaon...@gmail.com
wrote:
Hi,
For (1), from the Java docs: The producer is
I think you need to rebalance the cluster,
something like:
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181
--topics-to-move-json-file topics-to-move.json --broker-list 5,6
--generate
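The file passed via --topics-to-move-json-file is a small JSON document; a hypothetical sketch for a topic named test:

```json
{"version": 1,
 "topics": [{"topic": "test"}]}
```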
On Mon, Apr 13, 2015 at 11:22 AM, shadyxu shad...@gmail.com wrote:
I added several new brokers to
Hi,
Simply adding the brokers to the cluster will not reassign or redistribute topic partitions to the newly added brokers, as is also mentioned in the documentation:
*However these new servers will not automatically be assigned any data
partitions, so unless partitions are moved to them they won't
Thanks guys. You are right, and here comes another problem:
I added new brokers 4, 5 and 6. Now I want to move partitions 3, 4 and 5 (currently on brokers 1, 2 and 3) of topic test to these brokers. I wrote the r.json file like this:
{"partitions":
[{"topic": "test", "partition": 3, "replicas": [4]},
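A well-formed reassignment file along the lines described might look like this (the partition-to-broker mapping here is a hypothetical illustration, not from the original post):

```json
{"version": 1,
 "partitions": [
   {"topic": "test", "partition": 3, "replicas": [4]},
   {"topic": "test", "partition": 4, "replicas": [5]},
   {"topic": "test", "partition": 5, "replicas": [6]}
 ]}
```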
Probably you should first try to generate the proposed plan using the --generate option and then edit it if needed.
thanks
On Mon, Apr 13, 2015 at 3:12 PM, shadyxu shad...@gmail.com wrote:
Thanks guys. You are right, and here comes another problem:
I added new brokers 4, 5 and 6. Now I want
Hey,
try not to have newlines (\n) in your JSON file. I think the parser dies on those and then claims the file is empty.
Best
Jan
On 13.04.2015 12:06, Ashutosh Kumar wrote:
Probably you should first try to generate the proposed plan using the --generate option and then edit it if needed.
thanks
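If you want to keep an editable multi-line file around, one workaround is to re-serialize it onto a single line before handing it to the tool; a small sketch (`minify_json_file` is a hypothetical helper, not part of Kafka):

```python
import json

def minify_json_file(path_in, path_out):
    # Re-serialize the reassignment JSON onto a single line, since some
    # versions of the CLI parser choke on embedded newlines.
    with open(path_in) as f:
        data = json.load(f)
    with open(path_out, "w") as f:
        f.write(json.dumps(data, separators=(",", ":")))
```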
I did a similar change: moved from the High Level Consumer to the Simple Consumer. However, kafka-consumer-offset-checker.sh throws an exception. It's searching the zk path /consumers/group/, which does not exist on any of my zk nodes.
Is there any other tool for getting the offset lag when using Simple