acks=all ... having only one Kafka broker
On Tue, Jul 19, 2016, 9:22 AM David Garcia wrote:
> Ah ok. Another dumb question: what about acks? Are you using auto-ack?
>
> On 7/19/16, 10:00 AM, "Abhinav Solan" wrote:
>
> If I add 2 more nodes and make it a cluster
If I add 2 more nodes and make it a cluster .. would that help? I have
searched the forums and this kind of thing is not covered there ... If we
have a cluster, then maybe the Kafka server has a backup option and it
self-heals from this behavior ... Just a theory
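(For reference: with a single broker the topic has replication factor 1,
so acks=all waits on an in-sync replica set that contains only the leader
and behaves like acks=1; replication only starts helping once the topic is
created with replication.factor > 1 on a multi-broker cluster. A minimal
producer sketch with acks=all, where the broker list and topic name are
placeholders, not values from this thread:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder
        props.put("acks", "all"); // wait for the full ISR before acknowledging
        props.put("retries", 3);  // retry transient failures instead of dropping
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
        producer.close();
    }
})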
On Tue, Jul 19, 2016, 7:57 AM Abhinav Solan wrote:
No, I was monitoring the app at that time .. it was just sitting idle
On Tue, Jul 19, 2016, 7:32 AM David Garcia wrote:
> Is it possible that your app is thrashing (i.e. FullGC’ing too much and
> not processing messages)?
>
> -David
>
> On 7/19/16, 9:16 AM, "Abhinav So
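(One way to check the FullGC theory from inside the app is to sample the
JVM's garbage-collector MBeans and compare the deltas over time; a minimal
sketch, where the class and method names are my own:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Logs cumulative GC counts/times; call periodically and compare deltas.
// Rapidly growing numbers while the app sits "idle" would support the
// thrashing theory.
public class GcStats {
    public static void log() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %s: collections=%d, totalTime=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
})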
Hi everyone, can anyone help me with this?
Thanks,
Abhinav
On Mon, Jul 18, 2016, 6:19 PM Abhinav Solan wrote:
> Hi Everyone,
>
> Here are my settings
> Using Kafka 0.9.0.1, 1 instance (as we are testing things on a staging
> environment)
> Subscribing to 4 topics from
Hi Everyone,
Here are my settings
Using Kafka 0.9.0.1, 1 instance (as we are testing things on a staging
environment)
Subscribing to 4 topics from a single Consumer application with 4 threads
Now the consumer keeps working fine for a while, then after about 3-4 hrs
or so, it stops consuming at a
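(A common first step when a 0.9 consumer goes quiet is to log partition
assignment changes, since spending too long between poll() calls can
trigger a rebalance that silently strips the consumer of its partitions.
A minimal sketch along those lines, where the broker address, group id,
and topic names are placeholders:

import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LoggingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("topic1", "topic2", "topic3", "topic4"),
                new ConsumerRebalanceListener() {
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("Partitions revoked: " + parts);
                    }
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        System.out.println("Partitions assigned: " + parts);
                    }
                });
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.offset() + ": " + record.value());
            }
        }
    }
}

If "Partitions revoked" shows up right before the consumer goes idle,
the stall is a rebalance problem rather than a broker problem.)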
Hi Everyone,
Is there a way to start Kafka Connector via JMX?
Thanks,
Abhinav
Hi Everyone,
I wanted to know what the best and most secure way of error handling for
KafkaConsumer is. I am using Confluent's recommended consumer
implementation. My delivery semantics are at-least-once, and I am
switching off auto commit as well.
Or should I just switch on auto commit?
The thing is I a
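(For at-least-once delivery with auto commit switched off, the usual
pattern is to commit offsets only after a batch has been fully processed,
so a crash replays the uncommitted records instead of losing them. A
minimal sketch of that loop, where process() is a placeholder for your
own logic:

import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Assumes a consumer configured with enable.auto.commit=false.
public class AtLeastOnceLoop {
    public static void run(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                process(record); // placeholder: your own processing
            }
            try {
                consumer.commitSync(); // commit only after the whole batch succeeded
            } catch (CommitFailedException e) {
                // A rebalance happened mid-batch; the new owner re-reads the
                // uncommitted records, which is safe under at-least-once.
                System.err.println("Commit failed, batch will be reprocessed: " + e);
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value()); // placeholder
    }
})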
Hi Sahitya,
Try reducing max.partition.fetch.bytes in your consumer.
Then also increase heartbeat.interval.ms; this might help to delay the
consumer rebalance if your inbound process is taking more time than this.
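Something like the following, where the values are only illustrative and
session.timeout.ms is shown as well because heartbeat.interval.ms has to
stay below it:

// Illustrative values only:
props.put("max.partition.fetch.bytes", "65536"); // smaller fetch per partition
props.put("heartbeat.interval.ms", "10000");     // heartbeats sent less often
props.put("session.timeout.ms", "30000");        // must exceed heartbeat.interval.ms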
- Abhinav
On Fri, May 13, 2016 at 5:42 AM sahitya agrawal
wrote:
> Hi,
>
> I a
> What does your code look like, where you
> are verifying/measuring this consumed size?
>
> -Jaikiran
> On Thursday 05 May 2016 03:00 AM, Abhinav Solan wrote:
> > Thanks a lot Jens for the reply.
> > One thing is still unclear: is this happening only when we set the
> > max.partition.fetch.
Thanks a lot Jens for the reply.
One thing is still unclear: is this happening only when we set
max.partition.fetch.bytes to a higher value? I am setting it quite low, at
only 8192, because I can control the size of the data coming into Kafka,
so even after setting this value wh
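(For what it's worth, on 0.9 one direct way to measure what a poll
actually returned is to consume with byte-array deserializers and sum the
payload lengths; a small sketch, where the helper name is my own and the
count excludes protocol and record overhead:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Assumes key.deserializer/value.deserializer are the ByteArrayDeserializer.
public class FetchSizeProbe {
    public static long bytesInPoll(KafkaConsumer<byte[], byte[]> consumer) {
        long bytes = 0;
        ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
        for (ConsumerRecord<byte[], byte[]> record : records) {
            if (record.key() != null) bytes += record.key().length;
            if (record.value() != null) bytes += record.value().length;
        }
        return bytes; // payload bytes only
    }
})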
Hi,
I am using kafka-0.9.0.1 and have configured the Kafka consumer to fetch
8192 bytes by setting max.partition.fetch.bytes
Here are the properties I am using
props.put("bootstrap.servers", servers);
props.put("group.id", "perf-test");
props.put("offset.storage", "kafka");
props.put("enable.au