Hi folks,
I'm a bit new to the operational side of G1, but pretty familiar with its
basic concept. We recently set up a Kafka cluster to support a new product,
and are seeing some suboptimal GC performance. We're using the parameters
suggested in the docs, except for having switched to Java
We've had no problems with G1 in all of our clusters with varying load
levels. I think we've seen an occasional long GC here and there, but
nothing recurring at this point.
What's the full command line that you're using with all the options?
-Todd
On Wed, Oct 14, 2015 at 2:18 PM, Scott Clasen
Hi
I want to create an async producer so I can buffer messages in a queue and
send them every 5 seconds.
My Kafka version is 0.8.2.0,
and I am using kafka-clients 0.8.2.0 to create the Kafka producer in Java.
Below is my sample code:
package com.intel.labs.ive.cloud.testKafkaProducerJ;
import
Hi
I want to create a producer in async mode so I can send messages at 5-second
intervals.
Below is my code:
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, metadataBroker);
//
My current theory, which I haven't dug into the source to confirm, is that
said buffers are being pre-allocated. Because the Kafka instance is
relatively bored, they end up living long enough to see a few collections
and be promoted. I could be way off base, though.
Command line, broken out for a
Looks like you may be mixing the new producer with old producer configs.
See the new config documentation here:
http://kafka.apache.org/documentation.html#newproducerconfigs. You will
likely want to set the "batch.size" and "linger.ms" to achieve your goal.
Thanks,
Grant
On Wed, Oct 14, 2015 at
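For concreteness, a minimal sketch of what Grant is suggesting, using only the config-building step (the broker address, serializer class names, and sizes below are illustrative, not from the thread):

```java
import java.util.Properties;

public class AsyncBatchingConfig {
    // New-producer configs: batch.size caps how many bytes accumulate in a
    // per-partition batch, while linger.ms introduces the time-based delay.
    // Raising batch.size alone never delays a send by wall-clock time.
    static Properties build(String brokers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", "65536");  // up to 64 KB per partition batch
        props.put("linger.ms", "5000");    // wait up to 5 s before sending
        return props;
    }

    public static void main(String[] args) {
        // These props would then be passed to new KafkaProducer<>(props).
        System.out.println(build("localhost:9092").getProperty("linger.ms"));
    }
}
```

Note that a batch is sent when either batch.size fills or linger.ms elapses, whichever comes first, so a huge batch.size with the default linger.ms of 0 still sends immediately.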
Hi
Thanks for the help,
but I see the same behavior even after changing batch.size.
I have changed the batch.size value to 33554432:
props.put("batch.size", "33554432");
On Wed, Oct 14, 2015 at 11:09 AM, Zakee wrote:
> Hi Prateek,
>
> Looks like you are using default batch.size which
Hi,
I've seen pauses using G1 in other applications and have found that
-XX:+UseParallelGC
-XX:+UseParallelOldGC works best if you're having GC issues in general on
the JVM.
Regards,
Gerrit
On Wed, Oct 14, 2015 at 4:28 PM, Cory Kolbeck wrote:
> Hi folks,
>
> I'm a bit
Hi,
According to the wiki:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design,
allowing manual partition and topic access is a design goal.
Also the new API has functions to seek and subscribe at the partition level:
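As a sketch of that partition-level control, here is roughly what it looks like with the new (0.9-era) consumer client; this requires kafka-clients on the classpath and a running broker, and the topic name, partition, and offset are placeholders:

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PartitionLevelAccess {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "manual-group");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Arrays.asList(tp)); // manual partition assignment
            consumer.seek(tp, 42L);             // jump to a specific offset
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%d: %s%n", r.offset(), r.value());
            }
        }
    }
}
```

With assign() (rather than subscribe()) the consumer opts out of group rebalancing entirely, which is what makes the manual seek meaningful.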
Hi All,
I am in the process of performance testing an application that
produces messages to Kafka. I am testing the application with
increasing loads.
I would like to get some statistics from Kafka so that I know that
more Kafka nodes should be added to handle a specific load. I have
connected to
A number of our developers are seeing errors like the one below in their
console when running a consumer on their laptop. The error is always
followed by logging indicating that the local consumer is rebalancing, and
in the meantime we are not making much progress.
I'm reading this as the
You can try this plugin: https://github.com/apakulov/kafka-graphite
On Sun, Oct 11, 2015 at 3:19 AM, sunil kalva wrote:
> How do I configure Kafka to emit broker metrics to Graphite?
>
> t
> SunilKalva
>
--
Best Regards, Alexander
Thanks,
it works.
Regards
Prateek
On Wed, Oct 14, 2015 at 11:46 AM, Grant Henke wrote:
> Looks like you may be mixing the new producer with old producer configs.
> See the new config documentation here:
> http://kafka.apache.org/documentation.html#newproducerconfigs.
Gwen,
that helps. Thank you,
Julio
On 10/13/15 13:57, Gwen Shapira wrote:
Hi,
We normally run 1 broker per 1 physical server, and up to around 1000
partitions per broker (although that depends on the specific machine the
broker is on and specific configuration).
In order to enjoy
It is not strange; it means that one of the consumers lost connectivity to
ZooKeeper, its session timed out, and this caused ephemeral ZK nodes (like
/consumers/real-time-updates/ids/real-time-updates_infra-
buildagent-06-1444854764478-4dd4d6af) to be removed, ultimately causing
the rebalance.
Thanks Gwen.
So am I right in deducing that any consumer in the same group dropping will
cause a rebalance, regardless of which topics they are subscribed to?
On Wed, Oct 14, 2015 at 3:52 PM Gwen Shapira wrote:
> It is not strange, it means that one of the consumers lost
Yes. The rebalance is on consumers in the group and does not take topics
into account.
On Wed, Oct 14, 2015 at 1:59 PM, noah wrote:
> Thanks Gwen.
>
> So am I right in deducing that any consumer in the same group dropping will
> cause a rebalance, regardless of which topics
Just assigned reviewers for all the blocker issues. Please always feel free
to reassign; the purpose is to drive the review process toward the release.
On Tue, Oct 13, 2015 at 2:29 PM, Guozhang Wang wrote:
> If time permits, it would be great to have both KAFKA-2397 and
You can also use -Xmn with that gc to size the new gen such that those
buffers don't get tenured
I don't think that's an option with G1
On Wednesday, October 14, 2015, Cory Kolbeck wrote:
> I'm not sure that will help here, you'll likely have the same
> medium-lifetime
Hello
I have one query related to Kafka data flow toward the consumer:
does Kafka use a push or pull technique to send data to a consumer
via the high-level API?
Kafka is a pull mechanism.
Thanks,
Mayuresh
On Wed, Oct 14, 2015 at 9:47 PM, Kiran Singh wrote:
> Hello
>
> I have one query related to Kafka data flow towards consumer.
>
> Means whether kafka used push or pull technic to send data to a consumer
> using high level API?
What are the major advantages of storing offsets on the Kafka server instead
of ZooKeeper?
Please share any link on this.
So it's the consumer's responsibility to keep checking whether new data is
available on the Kafka server or not.
Am I right?
On Thu, Oct 15, 2015 at 10:21 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:
> Kafka is a pull mechanism.
>
> Thanks,
>
> Mayuresh
>
> On Wed, Oct 14, 2015 at 9:47 PM,
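To illustrate the pull semantics in question, here is a toy sketch (not the Kafka API, just the control flow): the consumer drives the data flow by repeatedly issuing fetches, and the broker never initiates a transfer.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class PullSketch {
    // Toy stand-in for pull-based consumption: brokerLog plays the broker's
    // partition log, and each poll() is one fetch request from the consumer.
    static List<String> consume(Queue<String> brokerLog, int maxEmptyFetches) {
        List<String> received = new ArrayList<>();
        int emptyFetches = 0;
        while (emptyFetches < maxEmptyFetches) {
            String record = brokerLog.poll();   // consumer asks; broker answers
            if (record == null) {
                emptyFetches++;                 // nothing new; ask again later
            } else {
                received.add(record);
            }
        }
        return received;
    }

    public static void main(String[] args) {
        Queue<String> log = new ArrayDeque<>(List.of("m1", "m2", "m3"));
        System.out.println(consume(log, 1)); // prints [m1, m2, m3]
    }
}
```

In the real client this repeated asking is hidden behind the high-level consumer's blocking iterator, but the underlying fetch requests are still consumer-initiated.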