You can see that with no compression, 80% of the time goes to FileChannel.write.
But with snappy enabled, only 5% goes to writing data, 50% of the time goes
to byte copying and allocation, and only about 22% goes to actual compression.
I had a similar problem with MapDB; it was solved by using memory-mapped files.
The more accurate formula is the following, since the fetch size is per
partition:
#consumer threads * queuedchunks.max * fetch size * #partitions
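As a rough illustration of the bound that formula gives, here is the arithmetic with purely hypothetical values (none of these are Kafka defaults; plug in your own configured fetch size, queuedchunks.max, thread count, and partition count):

```python
# Worst-case consumer memory per the formula above.
# All four values are illustrative assumptions, not Kafka defaults.
consumer_threads = 2
queued_chunks_max = 10        # queuedchunks.max
fetch_size = 1024 * 1024      # fetch size in bytes (1 MB)
partitions = 8

max_memory = consumer_threads * queued_chunks_max * fetch_size * partitions
print(max_memory)  # 167772160 bytes, i.e. 160 MB
```

Even with modest per-partition fetch sizes, the product grows quickly with thread and partition counts, which is why an undersized heap can OOM here.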
Thanks,
Jun
On Thu, Aug 15, 2013 at 9:40 PM, Drew Daugherty
drew.daughe...@returnpath.com wrote:
Thank you, Jun. It turned out an OOME was thrown, which would explain why I
was running out of memory.
-drew
From: Jun Rao [jun...@gmail.com]
Sent: Friday, August 16, 2013 8:37 AM
To: users@kafka.apache.org
Subject: Re: Kafka Consumer Threads Stalled
The more accurate formula is the following
Hello Paul,
1. Yes it is OK. Actually each MirrorMaker process may use multiple
ConsumerConnectors.
2. Yes it is OK.
Guozhang
On Fri, Aug 16, 2013 at 8:29 AM, Paul Mackles pmack...@adobe.com wrote:
Hi - I am making a few assumptions about the 0.8 high-level consumer API
that I am looking
Just to clarify, are the consumer threads you are referring to the number
passed into the map along with the topic when instantiating the connector, or
the fetcher thread count? This formula must specify a maximum memory usage
rather than a working usage, or we would still be getting OOMEs.
Ok,
I didn't realize the write to disk was immediate (is that new in 0.8, with
requested acks enabled?).
I do think the OS will indeed reserve space in advance for data not yet
flushed to disk. This seems to be true, at least, for xfs, with which I have
had more experience lately.
Jason
On Thu, Aug
According to the Kafka 0.8 documentation under broker configuration, there
are these parameters and their definitions:
log.retention.bytes (default -1): the maximum size of the log before deleting it
log.retention.bytes.per.topic: the maximum size of the log for a specific
topic before deleting it
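A minimal broker-config sketch using these two properties might look like the following (the topic names and sizes are illustrative assumptions, and the topic:size map syntax for the per-topic property is assumed from the documentation's description of it as a map):

```properties
# server.properties fragment -- sizes and topics are hypothetical
# Default cap for any topic not listed in the per-topic map: 10 GB
log.retention.bytes=10737418240
# Per-topic overrides, written as a comma-separated topic:size map
log.retention.bytes.per.topic=clickstream:53687091200,audit:1073741824
```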
I'm
It should be the number of fetcher threads. Yes, this is the max memory usage.
You will only hit it if all partitions have fetch.size bytes to give, which
typically only happens when the consumer was stopped and restarted after
some time.
Thanks,
Jun
On Fri, Aug 16, 2013 at 9:36 AM, Drew Daugherty
log.retention.bytes applies to all topics that are not included in
log.retention.bytes.per.topic
(which defines a map of topic -> size).
Currently, we don't have a total size limit across all topics.
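A small sketch of the lookup semantics described above (topic names and sizes are hypothetical): the per-topic map overrides the global default for topics it lists, every other topic falls back to log.retention.bytes, and nothing caps the aggregate across topics:

```python
# Effective per-topic retention limit, per the semantics above.
# Topic names and sizes are hypothetical.
DEFAULT_RETENTION_BYTES = 10 * 1024**3               # log.retention.bytes (10 GB)
PER_TOPIC_RETENTION = {"clickstream": 50 * 1024**3}  # log.retention.bytes.per.topic

def effective_retention(topic):
    # Per-topic entry wins; otherwise the broker-wide default applies.
    # (A value of -1 would mean no size-based retention limit.)
    return PER_TOPIC_RETENTION.get(topic, DEFAULT_RETENTION_BYTES)

print(effective_retention("clickstream"))  # 53687091200
print(effective_retention("audit"))        # 10737418240
```

Note that each topic's limit is enforced independently, so total disk usage can still grow with the number of topics.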
Thanks,
Jun
On Fri, Aug 16, 2013 at 2:00 PM, Paul Christian
pchrist...@salesforce.com wrote: