Not sure why the re-registration fails. Are you using ZK 3.3.4 or above?
It seems that your consumer is still hitting long GC pauses, which is the root
cause. So, you will need to tune the GC settings further. Another way to avoid
ZK session timeouts is to increase the session timeout config.
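As a sketch of that second option, the ZK session timeout can be raised on the consumer side (the property names below match Kafka 0.8's consumer config; the 30-second value is only an illustrative assumption, not a recommendation):

```properties
# consumer.properties (Kafka 0.8 consumer config)
# Raise the ZK session timeout so that a long GC pause does not
# expire the session and force re-registration.
zookeeper.session.timeout.ms=30000
zookeeper.connection.timeout.ms=30000
```

The trade-off is that a longer timeout also delays detection of genuinely dead consumers.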
Thanks,
Jun
On Wed, Mar 27
Howdy,
I'm considering the use of Kafka in the rewrite of a big legacy product. A
good chunk of the back-end code is going to be written in C++ (large
in-memory data structures). The two possible options available to me for
clients appear to be:
https://github.com/edenhill/librdkafka
and
https:
Now I use GC settings like this:
-server -Xms1536m -Xmx1536m -XX:NewSize=128m -XX:MaxNewSize=128m
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=70
But it still happened. It seems the Kafka server reconnected to ZK, but the
old node was still there, so the Kafka server stopped.
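One way to confirm whether GC pauses are long enough to expire the ZK session is to enable GC logging. A minimal sketch, assuming the 0.8 shell scripts pick up the KAFKA_OPTS environment variable (they do in kafka-run-class.sh); the log path is an arbitrary choice, and the flags are standard HotSpot options of the JDK 6/7 era:

```shell
# Append GC-logging flags to whatever options are already set,
# then look for long pauses in /tmp/kafka-gc.log.
export KAFKA_OPTS="$KAFKA_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -Xloggc:/tmp/kafka-gc.log"
bin/kafka-server-start.sh config/server.properties
```

Any single pause longer than the configured ZK session timeout will cause exactly the symptom described: the session expires, and the stale ephemeral node blocks re-registration.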
Can
I know this is a really old thread, but it looked like the only pertinent
one that came up when searching for ‘exactly once’ in the archives. I just
want to confirm my understanding of the 0.8 version in that it still
doesn’t completely support exactly once semantics. With the producer
configured
Hi All,
I am having some trouble using Kafka 0.8 with CDH 4.1.2.
I was able to add the Cloudera repository and get the code to compile with
Kafka 0.8 and Cloudera 4.1.2; it also pulled in the required jars from the
Cloudera repo and created the hadoop-consumer jar. The problem I am
facing is that whe
We've seen big performance degradation when we tested 1024 topics, so we've
opted for a much smaller topic count (< 100).
On the read side, I think performance is largely driven by the operating
system's ability to effectively cache access to #partitions * #topics
files. Clearly if you divid
With more topics, you may hit one of these limits: (1) the number of
directories allowed in a filesystem; (2) open file handles (we keep all log
segments open in the broker); (3) ZK nodes.
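To put limit (2) in numbers, a hypothetical back-of-envelope check (the topic, partition, and segment counts below are made-up assumptions for illustration, not measurements):

```shell
# The broker keeps every log segment open, so open files scale as
# roughly: topics x partitions per topic x segments per partition.
topics=5000
partitions_per_topic=2
segments_per_partition=4
open_segments=$((topics * partitions_per_topic * segments_per_partition))
echo "$open_segments"   # 40000 with these assumed counts
# Compare against the per-process file-descriptor limit:
ulimit -n
```

If the estimate approaches the descriptor limit, either raise the limit or reduce the topic/partition count.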
Thanks,
Jun
On Wed, Mar 27, 2013 at 8:36 AM, Jason Huang wrote:
> Just curious - I don't see any limit in the # of topics that you
That will be awesome.
Thanks,
Soby Chacko
On Wed, Mar 27, 2013 at 11:51 AM, Neha Narkhede wrote:
> We are looking into the possibility of changing Kafka 0.8 to depend on
> metrics 2.2.0 instead. This will allow us to mavenize Kafka.
>
> Thanks,
> Neha
>
> On Wed, Mar 27, 2013 at 8:45 AM, Soby Ch
We are looking into the possibility of changing Kafka 0.8 to depend on
metrics 2.2.0 instead. This will allow us to mavenize Kafka.
Thanks,
Neha
On Wed, Mar 27, 2013 at 8:45 AM, Soby Chacko wrote:
> Hello,
>
> What's the likelihood of more changes to these two Yammer metrics libraries?
> Or are t
Hello,
What's the likelihood of more changes to these two Yammer metrics libraries?
Or are they going to stay the same for Kafka 0.8?
Regards,
Soby Chacko
On Tue, Mar 26, 2013 at 7:34 PM, Soby Chacko wrote:
> Thanks Dragos!!
>
> Is the zkclient 0.2 change pushed?
>
> I will see what I can do
Just curious - I don't see any limit in the # of topics that you can
have in a Kafka cluster.
So in principle, you could have as many topics as you want, so long as
your hardware can keep up, right?
Jason
On Wed, Mar 27, 2013 at 11:33 AM, Jun Rao wrote:
> At LinkedIn, our largest cluster has mo
At LinkedIn, our largest cluster has more than 2K topics. 5K topics should
be fine.
Thanks,
Jun
On Tue, Mar 26, 2013 at 11:52 PM, Suyog Rao wrote:
> Hello,
>
> Wanted to check if there is any known limit on the # of topics in a Kafka
> cluster? I wanted to design a system which has say 5k topi
The kafka-server-start.sh script doesn't have the mentioned GC settings or
heap size configured. However, doing that is probably a good idea.
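A sketch of one way to do it without editing the script, assuming kafka-run-class.sh honors the KAFKA_OPTS environment variable as the 0.8 scripts do (the flag values below are the ones quoted earlier in the thread, not recommendations):

```shell
export KAFKA_OPTS="-server -Xms1536m -Xmx1536m \
  -XX:NewSize=128m -XX:MaxNewSize=128m \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70"
bin/kafka-server-start.sh config/server.properties
```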
Thanks,
Neha
On Tue, Mar 26, 2013 at 9:47 AM, Yonghui Zhao wrote:
> The Kafka server is started by bin/kafka-server-start.sh. No GC settings.
> On 2013-3-26, in the afternoon