Hi All,
One of the partitions is showing a huge lag (21K), and I see the below error in
the kafkaServer.out log of one of the Kafka nodes:
Current offset 43294 for partition [PROD_TASK_TOPIC_120,10] out of range; reset
offset to 43293 (kafka.server.ReplicaFetcherThread)
What is the quick solution?
Hi there,
Any idea why the log.retention setting is not working? We set
log.retention.hours=6 in server.properties, but old data is not getting
deleted; we see that Dec 9th data/log files are still there.
We are running this on production boxes, and if it does not delete the old files
our stor
partition key?
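On the retention question above: retention is enforced per log segment, not per message, so a segment is only eligible for deletion after it has been rolled. A server.properties sketch (the roll and check-interval values are assumptions for illustration):

```properties
log.retention.hours=6
# Retention only deletes closed segments; roll them often enough to age out
log.roll.hours=1
# How often the broker checks for deletable segments (default is 5 minutes)
log.retention.check.interval.ms=300000
# Deletion requires the delete policy (the default); compacted topics never age out
log.cleanup.policy=delete
```

If the active segment still contains the Dec 9th data, it will not be deleted until that segment rolls, regardless of the retention window.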
On Wed, Nov 23, 2016 at 12:33 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:
> Hi there,
>
> We are doing the load test in Kafka with 25tps and first 9 hours it
> went fine almost 80K/hr messages were processed after that we see a
> lot o
Hi there,
We are doing a load test in Kafka at 25 TPS. For the first 9 hours it went
fine, with almost 80K messages/hr processed; after that we saw a lot of lag and
stopped the incoming load.
Currently we see 15K messages/hr being processed. We have 40 consumer instances
with concurrency 4 and
arch
Consulting Support Training - http://sematext.com/
On Mon, Nov 14, 2016 at 5:16 PM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:
> Hi there,
> What is the best open source tool for Kafka monitoring, mainly to check
> the offset lag? We tried the following tools:
>
Hi there,
What is the best open source tool for Kafka monitoring, mainly to check the
offset lag? We tried the following tools:
1. Burrow
2. KafkaOffsetMonitor
3. Prometheus and Grafana
4. Kafka Manager
But none of them works perfectly. Please help us with this.
Thanks
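For a quick check without any of the tools above, the script shipped with 0.9/0.10-era Kafka can print per-partition lag for a group (the broker address and group name below are placeholders):

```
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server broker1:9092 \
  --describe --group my-consumer-group
```

The --new-consumer flag applies to groups using the Java consumer with Kafka-stored offsets; for the old ZooKeeper-based consumer, use --zookeeper instead of --bootstrap-server.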
Hi there,
Can anyone please help us? We are getting a SendFailedException when the Kafka
consumer starts, and it is not able to consume any messages.
Thanks
Achintya
Hi there,
I see the Kafka consumer receiving the same offset value many times, which
creates a lot of duplicate messages. What could be the reason, and how can we
solve this issue?
Thanks
Achintya
than they are consumed, you will get
a backlog of messages. In that case, you may need to grow your cluster so that
more messages are processed in parallel.
Best regards / Mit freundlichen Grüßen / Sincères salutations M. Lohith Samaga
-Original Message-
From: Ghosh, Achintya
Hi there,
We have a use case where we do a lot of business logic to process each message,
and it sometimes takes 1-2 seconds, so will Kafka fit our use case?
Thanks
Achintya
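Kafka itself can handle 1-2 s of processing per message; the thing to watch is that the consumer keeps calling poll() before the group coordinator's deadline, or it will be evicted and its partitions rebalanced. A sketch of consumer settings that budget for slow handlers (the numbers are assumptions: 10 records x 2 s = 20 s per loop, well under the 5-minute cap):

```properties
# Fewer records per poll() so each processing loop finishes quickly
max.poll.records=10
# Max time between poll() calls before the consumer is evicted
# (available from Kafka 0.10.1; older clients use session.timeout.ms for this)
max.poll.interval.ms=300000
session.timeout.ms=30000
```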
fetcher threads on the broker failing
which makes perfect sense since some of the partitions were bound to have
leaders in the failed datacenter. I'd actually like to see the consumer logs at
DEBUG level if possible.
Thanks,
Jason
On Wed, Aug 31, 2016 at 7:48 PM, Ghosh, Achintya (Contra
er datacenter's zookeeper server? I tried
> to increase the zookeeper session timeout and connection timeout, but no luck.
>
> Please help on this.
> Thanks
> Achintya
>
>
> -Original Message-
> From: Jason Gustafson [mailto:ja...@confluent.io]
> Sent: Wedn
on time out but no luck.
Please help on this.
Thanks
Achintya
-Original Message-
From: Jason Gustafson [mailto:ja...@confluent.io]
Sent: Wednesday, August 31, 2016 4:05 PM
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: Re: Kafka consumers unable to process message
Hi A
Hi there,
The Kafka consumer gets stuck in the consumer.poll() method if my current
datacenter is down and the replicated messages are in the remote datacenter.
How can we solve that issue?
Thanks
Achintya
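One way to keep the application responsive is a bounded poll loop; a sketch against the 0.10-era consumer API (broker addresses, group id, and topic name are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch: a bounded poll loop so an unreachable datacenter cannot hang
// the thread forever. All addresses and names are placeholders.
public class BoundedPollSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // List brokers from both datacenters so the client can fail over
        props.put("bootstrap.servers", "dc1-broker:9092,dc2-broker:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("PROD_TASK_TOPIC_120"));
            while (true) {
                // poll(long) returns after ~1 s even when no data arrives
                ConsumerRecords<String, String> records = consumer.poll(1000);
                if (records.isEmpty()) {
                    continue; // no data or brokers unreachable; retry or alert here
                }
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.offset() + ": " + record.value());
                }
            }
        }
    }
}
```

Caveat: in 0.9/0.10 clients, poll() can still block inside coordinator lookup or rebalance regardless of the timeout, which matches the behavior described here; listing brokers from both datacenters in bootstrap.servers and upgrading the client mitigate it.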
Hi there,
What does the below error mean, and how can we avoid it? I see this error in
the kafkaServer.out file of one node when the other broker is down.
We are also not able to process any messages, and we see "o.a.k.c.c.i.AbstractCoordinator -
Issuing group metadata request to broker 5" in the application log.
[2016-08
t is a pretty big timeout.
However, I noticed that even if no connection is made to the broker, you can
still get batch expiry.
On Fri, Aug 26, 2016 at 6:32 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:
> Hi there,
>
> What is the recommended Producer setting for Pro
Hi there,
What is the recommended configuration for the Producer? I see a lot of Batch
Expired exceptions even though I set request.timeout=6.
Producer settings:
acks=1
retries=3
batch.size=16384
linger.ms=5
buffer.memory=33554432
request.timeout.ms=6
timeout.ms=6
Thanks
Achintya
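For reference, the producer timeout properties are in milliseconds, so a literal request.timeout.ms=6 (if that is not a rendering of a larger value such as 60000) would expire batches almost immediately. A config sketch with explicit values (the numbers are assumptions, not recommendations):

```properties
acks=1
retries=3
batch.size=16384
linger.ms=5
buffer.memory=33554432
# Milliseconds: allow 60 s for a response before the request (and its batches) fails
request.timeout.ms=60000
# Max time send() may block waiting for metadata or buffer space (0.9+)
max.block.ms=60000
```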
Hi there,
I created a standby broker using Kafka MirrorMaker, but the same messages get
consumed from both the source broker and the mirror broker.
Ex:
I send 1000 messages, let's say offset values 1 to 1000, and consume 500
messages from the source broker. Now my broker goes down and I want to read the rest
Can anyone please check this one?
Thanks
Achintya
-Original Message-
From: Ghosh, Achintya (Contractor)
Sent: Monday, August 08, 2016 9:44 AM
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: RE: Kafka consumer getting duplicate message
Thank you, Ewen, for your response.
that probably means shutdown/failover is
not being handled correctly. If you can provide more info about your setup, we
might be able to suggest tweaks that will avoid these situations.
-Ewen
On Fri, Aug 5, 2016 at 8:15 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrot
Hi there,
We are using Kafka 1.0.0.M2 with Spring, and we see a lot of duplicate
messages being received by the Listener onMessage() method.
We configured :
enable.auto.commit=false
session.timeout.ms=15000
factory.getContainerProperties().setSyncCommits(true);
factory.setConcurrency(5);
So
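With enable.auto.commit=false, duplicates typically appear when a rebalance (e.g. after a session timeout during slow processing) hands partitions to another consumer before their offsets were committed. A sketch of the equivalent pattern with the plain 0.10-era consumer API (process() and running are placeholders for the application's logic):

```java
// Sketch: manual, synchronous commits, assuming enable.auto.commit=false
while (running) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records) {
        process(record); // placeholder for the onMessage() business logic
    }
    // Commit only after the whole batch is processed; on a rebalance,
    // at most the current uncommitted batch is replayed
    consumer.commitSync();
}
```

Even with synchronous commits, Kafka provides at-least-once delivery, so the downstream processing should be idempotent (for example, dedupe on a message key).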