Is your network shared? If so, another possibility is that some other apps
are consuming the bandwidth.
Thanks,
Jun
On Sun, Apr 21, 2013 at 12:23 PM, Andrew Neilson wrote:
> Thanks very much for the reply Neha! So I swapped out the consumer that
> processes the messages with one that just prints them
In the producer, we have JMX beans (based on the Metrics library) that report
byte/message/request rates.
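As a rough sketch of how to read those beans from inside the same JVM: the snippet below queries the platform MBeanServer for Kafka-registered beans. The `kafka*:*` domain pattern is an assumption for the 0.8 producer beans; check jconsole for the exact ObjectNames your build registers.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ProducerMetrics {
    public static void main(String[] args) throws Exception {
        // Query the local MBean server for Kafka beans. "kafka*:*" is a
        // wildcard pattern (domain starting with "kafka", any key/value
        // properties); it is an assumption here, not a documented name.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> beans = server.queryNames(new ObjectName("kafka*:*"), null);
        for (ObjectName name : beans) {
            System.out.println(name);  // e.g. byte/message/request rate beans
        }
        System.out.println("found " + beans.size() + " kafka beans");
    }
}
```

Run in a JVM that hosts a producer; in a plain JVM the query simply returns an empty set. For remote monitoring the same query works over a JMXConnector instead of the platform server.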
Thanks,
Jun
On Fri, Apr 19, 2013 at 9:54 AM, Drew Daugherty <
drew.daughe...@returnpath.com> wrote:
> I would second this request. We would like to gather metrics on the
> SyncProducer (bytes in/out, messa
Could you try the 0.8 branch instead of trunk? You can follow the 0.8 quick
start wiki (
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.8+Quick+Start).
Thanks,
Jun
On Fri, Apr 19, 2013 at 5:38 AM, Withers, Robert wrote:
> Hi Jun,
>
> It seems I failed to respond to this post. My ap
Thanks very much for the reply Neha! So I swapped out the consumer that
processes the messages with one that just prints them. It does indeed
achieve a much better rate at peaks but can still nearly (if not
completely) zero out. I plotted the messages printed in Graphite to show
the behavior
OK, if you want each consumer to process the same data, then simply
point each consumer at your Kafka cluster and have each consumer
consume all the data. There is no synchronization required between the
two consumers.
In other words, what you want to do is fine. Please read the Kafka
design doc if
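Concretely, the two independent consumers are just two consumer groups. A minimal sketch, assuming the 0.7-era high-level consumer property names `zk.connect` and `groupid` (verify against your version; 0.8 renamed several of these) and hypothetical ZooKeeper hosts:

```java
import java.util.Properties;

public class TwoGroups {
    // Build a consumer config for a given group. Each group with a
    // distinct groupid independently receives the full stream for a
    // topic and tracks its own offsets in ZooKeeper, so no coordination
    // between the HDFS and Storm consumers is needed.
    static Properties consumerConfig(String groupId) {
        Properties props = new Properties();
        props.put("zk.connect", "zk1:2181,zk2:2181,zk3:2181");  // hypothetical hosts
        props.put("groupid", groupId);
        return props;
    }

    public static void main(String[] args) {
        Properties hdfs = consumerConfig("hdfs-loader");  // hourly batch consumer
        Properties storm = consumerConfig("storm-feed");  // near-real-time consumer
        System.out.println(hdfs.getProperty("groupid") + " / " + storm.getProperty("groupid"));
    }
}
```

The data is stored once on the brokers; the retention window just has to be long enough to cover the slower (hourly) consumer's lag.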
I am at the POC stage, so I can configure the producer to write to different
partitions.
But how will that help me process the same data with two consumers?
I am trying to get the following effect:
I get the data and store it in Kafka.
I have 2 consumers:
1) for real time which consumes the data for example
On Sun, Apr 21, 2013 at 8:53 AM, Oleg Ruchovets wrote:
> Hi Philip.
> Does that mean storing the same data twice, each time in a different
> partition? I tried to save the data only once. Does using two partitions mean
> storing the data twice?
No, I mean spreading the data across the two partitions, s
Hi Philip.
Does that mean storing the same data twice, each time in a different
partition? I tried to save the data only once. Does using two partitions mean
storing the data twice?
By the way, I am using Kafka 0.7.2.
Thanks
Oleg.
On Sun, Apr 21, 2013 at 11:30 AM, Philip O'Toole wrote:
> Read the design doc
Some of the reasons a consumer is slow are:
1. Small fetch size
2. Expensive message processing
Are you processing the received messages in the consumer? Have you
tried running the console consumer for this topic to see how it
performs?
Thanks,
Neha
On Sun, Apr 21, 2013 at 1:59 AM, Andrew Neilson wrote:
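To illustrate point 1 above: raising the fetch size means each fetch request pulls more data per round trip. A sketch of the consumer config, assuming the 0.7-era property name `fetch.size` (in 0.8 the equivalent knob is `fetch.message.max.bytes`; verify against your version) and hypothetical hosts:

```java
import java.util.Properties;

public class FetchSize {
    // Consumer config with a larger fetch size. Property names are
    // 0.7-era assumptions -- check your version's configuration docs.
    static Properties config() {
        Properties props = new Properties();
        props.put("zk.connect", "zk1:2181");  // hypothetical host
        props.put("groupid", "test-group");   // hypothetical group
        // 2 MB per fetch request instead of the (roughly 1 MB) default.
        props.put("fetch.size", String.valueOf(2 * 1024 * 1024));
        return props;
    }

    public static void main(String[] args) {
        System.out.println("fetch.size=" + config().getProperty("fetch.size"));
    }
}
```

If the console consumer keeps up but your consumer does not, the bottleneck is point 2: move expensive processing out of the fetch loop (e.g. hand messages to a worker pool).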
Read the design doc on the Kafka site.
The short answer is to use two partitions for your topic.
Philip
On Apr 21, 2013, at 12:37 AM, Oleg Ruchovets wrote:
> Hi,
> I have one producer for Kafka and 2 consumers.
> I want to consume the produced events into HDFS and Storm. I will copy to HDFS
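For reference, in the 0.7-era broker the partition count is a server setting rather than a per-topic create command. A hedged sketch of the relevant `server.properties` fragment, expressed as `Properties` for illustration (the name `num.partitions` and the per-broker semantics should be verified against your version's docs):

```java
import java.util.Properties;

public class BrokerPartitions {
    // Fragment of a hypothetical 0.7-era broker config. "num.partitions"
    // sets how many partitions each topic gets on this broker; topics
    // are auto-created on first use in that release.
    static Properties config() {
        Properties broker = new Properties();
        broker.put("brokerid", "0");        // hypothetical broker id
        broker.put("num.partitions", "2");  // two partitions per topic
        return broker;
    }

    public static void main(String[] args) {
        System.out.println("num.partitions=" + config().getProperty("num.partitions"));
    }
}
```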
Since there is only one chroot for a ZK cluster, if you specified it for each server
there would be potential for error/mismatch.
Things would probably go really bad if you had mismatched chroots :)
Sent from my iPhone
On Apr 21, 2013, at 1:34 AM, Ryan Chan wrote:
> Thanks, this solved the problem.
I am currently running a deployment with 3 brokers, 3 ZK, 3 producers, 2
consumers, and 15 topics. I should first point out that this is my first
project using Kafka ;). The issue I'm seeing is that the consumers are only
processing about 15 messages per second from what should be the largest
topic
Thanks, this solved the problem.
But the connection string "Zk1:2181,zk2:2181,zk3:2181/Kafka" seems
unintuitive?
On Sun, Apr 21, 2013 at 2:29 AM, Scott Clasen wrote:
> Afaik you only put the chroot on the end of the zk conn str...
>
> Zk1:2181,zk2:2181,zk3:2181/Kafka
>
> Not
>
> Zk1:2181/k
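The rule Scott describes: a ZooKeeper connect string takes one chroot path, appended once after the whole host list, never per host. A small sketch (hypothetical ensemble hosts):

```java
public class ChrootCheck {
    // Append a chroot to a ZooKeeper connection string. The chroot goes
    // once, at the end of the entire host list -- not after every host.
    static String withChroot(String hosts, String chroot) {
        return hosts + chroot;
    }

    public static void main(String[] args) {
        String hosts = "zk1:2181,zk2:2181,zk3:2181";  // hypothetical ensemble
        String conn = withChroot(hosts, "/kafka");
        System.out.println(conn);  // zk1:2181,zk2:2181,zk3:2181/kafka
        // Wrong: "zk1:2181/kafka,zk2:2181/kafka,..." -- per-host chroots
        // are not a valid connect string and invite mismatch bugs.
    }
}
```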
Hi,
I have one producer for Kafka and 2 consumers.
I want to consume the produced events into HDFS and Storm. I will copy to HDFS
every hour but to Storm every 10 seconds.
Question: Is this supported by Kafka? Where can I read how to organize 1
producer and 2 consumers?
Thanks
Oleg.