The only other thing being written to these disks is the log4j output (kafka.out), so technically they are not dedicated to the data logs. The disks are 250GB SATA.
On Fri, Apr 26, 2013 at 6:35 PM, Neha Narkhede wrote:
> - Decreased num.partitions and log.flush.interval on the brokers from
> 64/10k to 32/100 in order to lower the average flush time (we were
> previously always hitting the default flush interval since no
> partitions
Hmm, that is a pretty low value for the flush interval, leading to higher disk I/O.
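For reference, the two broker settings being discussed live in the broker's server.properties; a sketch with the values from the thread (property names as in the 0.7/0.8-era configs, cluster details hypothetical):

```properties
# Number of log partitions per topic (reduced from 64 to 32 above).
num.partitions=32

# Force an fsync to disk after this many messages are appended to a log.
# Flushing every 100 messages keeps each flush fast, but as Neha notes,
# such frequent flushes increase disk I/O.
log.flush.interval=100
```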
Thanks Jun, your suggestion helped me quite a bit.
Since earlier this week I've been able to work out the issues (at least it seems like it for now). My consumer is now roughly processing messages at the rate they are being produced, with an acceptable amount of end-to-end lag. Here is an overview:
You can run kafka.tools.ConsumerOffsetChecker to check the consumer lag. If
the consumer is lagging, this indicates a problem on the consumer side.
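The tool ships with the Kafka distribution and is run via kafka-run-class.sh. A sketch of an invocation (the ZooKeeper connect string and group name here are hypothetical, and the exact flag names varied across Kafka versions of this era):

```shell
# Prints, per partition: the consumer's committed offset, the broker's
# log end offset, and the difference between them (the lag).
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
    --zkconnect zk1:2181,zk2:2181,zk3:2181 \
    --group my-consumer-group
```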
Thanks,
Jun
On Mon, Apr 22, 2013 at 9:13 PM, Andrew Neilson wrote:
> Hmm it is highly unlikely that that is the culprit... There is lots of
> bandwidth available for me to use. [...]
Oh... and at this point I'm talking about consumers that do no processing
and don't even produce any output. They simply send udp packets to graphite.
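A no-op consumer of this sort can be sketched as follows (a minimal illustration, not the actual code from this thread; the metric path, host, and port are hypothetical):

```python
import socket
import time


def format_metric(path, value, ts=None):
    # Render one datapoint in Graphite's plaintext protocol:
    # a metric path, a value, and a unix timestamp, newline-terminated.
    if ts is None:
        ts = int(time.time())
    return "%s %s %d\n" % (path, value, ts)


def send_udp(line, host="graphite.example.com", port=2003):
    # Carbon accepts the same plaintext lines over UDP as over TCP;
    # fire-and-forget UDP keeps the consumer from blocking on graphite.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode("ascii"), (host, port))
    sock.close()


# e.g. report a messages-consumed counter once per interval:
# send_udp(format_metric("kafka.consumer.messages", 15))
```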
On Mon, Apr 22, 2013 at 9:13 PM, Andrew Neilson wrote:
> Hmm it is highly unlikely that that is the culprit... There is lots of
> bandwidth available for me to use. [...]
Hmm it is highly unlikely that that is the culprit... There is lots of
bandwidth available for me to use. I will definitely keep that in mind
though. I was working on this today and have some tidbits of additional
information and thoughts that you might be able to shed some light on:
- I mentio
Is your network shared? If so, another possibility is that some other apps are consuming the bandwidth.
Thanks,
Jun
On Sun, Apr 21, 2013 at 12:23 PM, Andrew Neilson wrote:
> Thanks very much for the reply Neha! So I swapped out the consumer that
> processes the messages with one that just prints them. [...]
Thanks very much for the reply Neha! So I swapped out the consumer that
processes the messages with one that just prints them. It does indeed
achieve a much better rate at peaks but can still nearly zero out (if not completely zero out). I plotted the messages printed in graphite to show the behavior.
Some of the reasons a consumer is slow are -
1. Small fetch size
2. Expensive message processing
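For the first point, the fetch size is set in the consumer's configuration; a sketch (the property name is from the 0.7-era consumer config, and the value shown is illustrative, not a recommendation):

```properties
# Maximum number of bytes fetched from a broker in one request.
# Too small a value caps consumer throughput, since each round trip
# to the broker can return at most this many bytes.
fetch.size=1048576
```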
Are you processing the received messages in the consumer? Have you tried running the console consumer for this topic to see how it performs?
Thanks,
Neha
On Sun, Apr 21, 2013 at 1:59 AM, Andrew Neilson wrote:
I am currently running a deployment with 3 brokers, 3 ZK, 3 producers, 2 consumers, and 15 topics. I should first point out that this is my first project using Kafka ;). The issue I'm seeing is that the consumers are only processing about 15 messages per second from what should be the largest topic.