will compress somewhat since text doesn't
> use all 256 possible byte values and so it can use less than 8 bits per
> character in the encoding.
>
>
>
> On Wed, May 12, 2021, 22:35 Shantanu Deshmukh
> wrote:
>
> > I have some updates on this.
> > I tried thi
0
200.760078 records/sec (19.61 MB/sec)
0.635
In short, snappy = uncompressed!! Why is this happening?
On Wed, May 12, 2021 at 11:40 AM Shantanu Deshmukh
wrote:
> Hey Nitin,
>
> I have already done that. I used dump-log-segments option. And I can see
> the codec used is s
ata from the disk and see compression type.
> https://thehoard.blog/how-kafkas-storage-internals-work-3a29b02e026
>
> Thanks,
> Nitin
>
> On Wed, May 12, 2021 at 11:10 AM Shantanu Deshmukh
> wrote:
>
> > I am trying snappy compression on my producer. Here's my setup
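Nitin's suggestion above (read the segment data from disk and check the codec) can be sketched with the DumpLogSegments tool; the segment path below is a placeholder and must point at a real log file on the broker.

```shell
# Dump a log segment from the broker's data dir; each printed batch shows
# the compression codec actually written (e.g. "compresscodec: SNAPPY").
# The path is a placeholder -- use a real segment for your topic-partition.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --deep-iteration --print-data-log \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log
```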
I am trying snappy compression on my producer. Here's my setup
Kafka - 2.0.0
Spring-Kafka - 2.1.2
Here's my producer config
compressed producer ==
configProps.put( ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
bootstrapServer);
configProps.put(
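For reference, a minimal sketch of a snappy producer configuration using plain java.util.Properties; the bootstrap address is a placeholder, and the string keys mirror the ProducerConfig constants used above. Compression in Kafka is applied per batch, so tiny batches can make snappy look like no compression at all; batch.size/linger.ms below are illustrative values to let batches fill.

```java
import java.util.Properties;

public class SnappyProducerConfig {
    public static Properties build(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers); // placeholder address
        props.put("compression.type", "snappy");          // the codec under test
        // Compression is per batch -- give batches a chance to fill up:
        props.put("batch.size", "65536");
        props.put("linger.ms", "20");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build("localhost:9092").getProperty("compression.type"));
    }
}
```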
<1095193...@qq.com> wrote:
>
>
> On 2019/04/09 11:21:10, Shantanu Deshmukh wrote:
> > That was a blooper. But even after correcting, it still isn't working.
> > Still getting the same error.
> > Here are the configs again:
> >
&
Well,
from your own synopsis it is clear that the message you want to send is much
larger than the max.message.bytes setting on the broker. You can modify it.
However, do keep in mind that if you find yourself constantly increasing this
limit then you have to look at your message itself. Does it really need to
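Message size is bounded at several layers, not just the broker's max.message.bytes. A sketch of the related client-side settings (the property names are real Kafka configs; the 2 MB value is purely illustrative):

```java
import java.util.Properties;

public class MessageSizeLimits {
    // Illustrative limit only -- 2 MB; size it to your actual payloads.
    static final int MAX_BYTES = 2 * 1024 * 1024;

    public static Properties producerSide() {
        Properties props = new Properties();
        // The producer must be allowed to send a request this large:
        props.put("max.request.size", String.valueOf(MAX_BYTES));
        return props;
    }

    public static Properties consumerSide() {
        Properties props = new Properties();
        // The consumer must be able to fetch a message this large per partition:
        props.put("max.partition.fetch.bytes", String.valueOf(MAX_BYTES));
        return props;
    }
    // Broker side: message.max.bytes (or per-topic max.message.bytes) and
    // replica.fetch.max.bytes must be at least as large, or replication
    // of the oversized messages will fail.
}
```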
nLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
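For context, the snippet above is the tail of a JAAS stanza; a complete PLAIN-mechanism server stanza usually looks like the following (credentials are the sample values from above, not real ones):

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
```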
On Mon, Apr 8, 2019 at 2:11 PM 1095193...@qq.com <1095193...@qq.com> wrote:
>
>
> On 2019/04/03 13:08:45, Shantanu Deshmukh wrote:
> > Hello e
-03 16:32:31,268] WARN [Controller id=0, targetBrokerId=0]
Connection to node 0 (localhost/127.0.0.1:9092) terminated during
authentication. This may indicate that authentication failed due to
invalid credentials. (org.apache.kafka.clients.NetworkClient)
Please help. Unable to understand this problem.
Thanks & Regards,
Shantanu Deshmukh
at 2:36 PM Manikumar wrote:
> Hi,
> Instead of trying the PR, make sure you are setting a valid security protocol
> and connecting to a valid broker port.
> Also look for any errors in producer logs.
>
> Thanks,
>
>
>
>
>
> On Fri, Sep 21, 2018 at 12:35 PM Shantanu D
guide me here?
On Wed, Sep 19, 2018 at 1:02 PM Manikumar wrote:
> Similar issue reported here: KAFKA-7304, but on broker side. Maybe you can
> create a JIRA and upload the heap dump for analysis.
>
> On Wed, Sep 19, 2018 at 11:59 AM Shantanu Deshmukh
> wrote:
>
> > Any
Any thoughts on this matter? Someone, please help.
On Tue, Sep 18, 2018 at 6:05 PM Shantanu Deshmukh
wrote:
> Additionally, here's the producer config
>
> kafka.bootstrap.servers=x.x.x.x:9092,x.x.x.x:9092,x.x.x.x:9092
> kafka.acks=0
> kafka
, Sep 18, 2018 at 5:36 PM Shantanu Deshmukh
wrote:
> Hello,
>
> We have a 3 broker Kafka 0.10.1.0 deployment in production. There are some
> applications which have Kafka Producers embedded in them which send
> application logs to a topic. This topic has 10 partitions with replicatio
Hello,
We have a 3 broker Kafka 0.10.1.0 deployment in production. There are some
applications which have Kafka Producers embedded in them which send
application logs to a topic. This topic has 10 partitions with replication
factor of 3.
We are observing that memory usage on some of these
session.timeout.ms to any value above the default, consumers start very
slowly. Has anyone seen such behaviour, or can someone explain why this is happening?
On Wed, Aug 29, 2018 at 12:04 PM Shantanu Deshmukh
wrote:
> Hi Ryanne,
>
> Thanks for your response. I had even tried with 5 records and session
> ti
duce max.poll.records.
>
> Ryanne
>
> On Tue, Aug 28, 2018 at 6:34 AM Shantanu Deshmukh
> wrote:
>
> > Someone, please help me. Only 1 or 2 out of 7 consumer groups keep
> > rebalancing every 5-10 mins. One topic is constantly receiving 10-20
> > msg/sec. The oth
, Aug 22, 2018 at 5:47 PM Shantanu Deshmukh
wrote:
> I know the average time of processing one record, it is about 70-80 ms. I have
> set session.timeout.ms so high that the total processing time for one poll
> invocation should be well within it.
>
> On Wed, Aug 22, 2018 at 5:04 PM Stev
ons and the size
> of returned `ConsumerRecords`?
>
> On Wed, Aug 22, 2018, 7:00 PM Shantanu Deshmukh
> wrote:
>
> > Ohh sorry, my bad. Kafka version is 0.10.1.0 indeed and so is the client.
> >
> > On Wed, Aug 22, 2018 at 4:26 PM Steve Tian
> > wrote:
> &
il thread.
>
> On Wed, Aug 22, 2018, 6:51 PM Shantanu Deshmukh
> wrote:
>
> > How do I check for GC pausing?
> >
> > On Wed, Aug 22, 2018 at 4:12 PM Steve Tian
> > wrote:
> >
> > > Did you observe any GC pausing?
> > >
> > > On
How do I check for GC pausing?
On Wed, Aug 22, 2018 at 4:12 PM Steve Tian wrote:
> Did you observe any GC pausing?
>
> On Wed, Aug 22, 2018, 6:38 PM Shantanu Deshmukh
> wrote:
>
> > Hi Steve,
> >
> > Application is just sending mails. Every record is just a
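To the question above about checking for GC pauses, two common approaches (the PID and log path are placeholders):

```shell
# Option 1: sample GC activity of the running JVM every second.
# The GCT column is cumulative GC time; large jumps indicate long pauses.
jstat -gcutil <pid> 1000

# Option 2: enable GC logging at JVM startup (pre-JDK9 flags shown) and
# look for long "Total time for which application threads were stopped".
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime \
     -Xloggc:/path/to/gc.log -jar consumer-app.jar
```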
did it take to process 50 `ConsumerRecord`s?
>
> On Wed, Aug 22, 2018, 5:16 PM Shantanu Deshmukh
> wrote:
>
> > Hello,
> >
> > We have Kafka 0.10.0.1 running on a 3 broker cluster. We have an
> > application which consumes from a topic having 10 partitions. 10
> c
Hello,
We have Kafka 0.10.0.1 running on a 3 broker cluster. We have an
application which consumes from a topic having 10 partitions. 10 consumers
are spawned from this process; they belong to one consumer group.
What we have observed is that very frequently we are observing such
messages in
Can anyone help me understand how to debug this issue? I tried setting the log
level to trace in the consumer logback configuration. But at such times nothing
appears in the log, even at trace level. It is like the entire code is frozen.
On Thu, Aug 16, 2018 at 6:32 PM Shantanu Deshmukh
wrote:
> I saw a
Firstly, a record size of 150 MB is too big. I am quite sure your timeout
exceptions are due to such large records. There is a setting in producer
and broker config which allows you to specify the max message size in bytes.
But still, records each of size 150 MB might lead to problems with increasing
How many brokers are there in your cluster? This error usually comes when
the broker that is leader for a partition dies and you are trying to
access it.
On Fri, Aug 17, 2018 at 9:23 PM Harish K wrote:
> Hi,
>I have installed Kafka and created a topic, but during data ingestion I get
>
try. Some of these
> configs may or may not be applicable at runtime, so a rolling restart may
> be required before all changes take place.
>
> On 9 August 2018 at 13:48, Shantanu Deshmukh
> wrote:
>
> > Hi,
> >
> > Yes my consumer application works like below
>
egards,
>
> Regards,
> On Thu, 16 Aug 2018 at 06:55, Shantanu Deshmukh
> wrote:
>
> > I am also facing the same issue. Whenever I am restarting my consumers it
> > is taking upto 10 minutes to start consumption. Also some of the
> consumers
> > randomly rebalance
I am also facing the same issue. Whenever I restart my consumers it
takes up to 10 minutes to start consumption. Also some of the consumers
randomly rebalance, and it again takes the same amount of time to complete the
rebalance.
I haven't been able to figure out any solution for this issue, nor
uired before all changes take place.
>
> On 9 August 2018 at 13:48, Shantanu Deshmukh
> wrote:
>
> > Hi,
> >
> > Yes my consumer application works like below
> >
> >1. Reads how many workers are required to process each topics from
> >properties f
A consumer gets kicked out if it fails to send a heartbeat in the designated time
period. Every call to poll sends one heartbeat to the consumer group
coordinator.
You need to look at *how much time it takes to process your single
record*. *Maybe it is exceeding session.timeout.ms?*
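The trade-off described above can be sketched as consumer settings (values are illustrative only). Note that on the 0.10.0.x clients discussed in this thread, heartbeats are only sent from poll(), so records-per-poll times per-record processing time must stay under session.timeout.ms:

```java
import java.util.Properties;

public class ConsumerPollTuning {
    public static Properties build() {
        Properties props = new Properties();
        // If one record takes ~80 ms, 50 records per poll means ~4 s of
        // processing; keep that comfortably below session.timeout.ms.
        props.put("max.poll.records", "50");
        props.put("session.timeout.ms", "30000");
        // On newer clients (0.10.1+) heartbeats run on a background thread,
        // and max.poll.interval.ms bounds the processing time instead.
        props.put("max.poll.interval.ms", "300000");
        return props;
    }
}
```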
> 2) Have you configured the session timeouts for client and zookeeper
> accordingly?
>
> Regards,
>
> On 9 August 2018 at 08:00, Shantanu Deshmukh
> wrote:
>
> > I am facing too many problems these days. Now one of our consumer groups
> > is rebalancing every now
I am facing too many problems these days. Now one of our consumer groups
is rebalancing every now and then. And a rebalance takes very long, more than
5-10 minutes. Even after re-balancing I see that only half of the consumers
are active/receive an assignment. It's all going haywire.
I am seeing these
We have a cluster of 3 kafka+zookeeper. Only on one of our zookeeper
servers we are seeing these logs infinitely getting written in
zookeeper.out log file
WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCxn@1033] -
Exception causing close of session 0x0 due to java.io.IOException
INFO
;
> > Try reducing below timer
> > metadata.max.age.ms = 30
> >
> >
> > On Fri, Jul 6, 2018 at 5:55 AM Shantanu Deshmukh
> > wrote:
> >
> > > Hello everyone,
> > >
> > > We are running a 3 broker Kafka 0.10.0.1 cluster. W
Kind people on this group, please help me!
On Fri, Jul 6, 2018 at 3:24 PM Shantanu Deshmukh
wrote:
> Hello everyone,
>
> We are running a 3 broker Kafka 0.10.0.1 cluster. We have a java app which
> spawns many consumer threads consuming from different topics. For every
> topic we
value.deserializer = class
org.apache.kafka.common.serialization.StringDeserializer
Please help.
*Thanks & Regards,*
*Shantanu Deshmukh*
mitted offsets expire after a period of time.
>
> On Wed, 20 Jun. 2018, 5:46 pm Shantanu Deshmukh,
> wrote:
>
> > It is happening via auto-commit. Frequency is 3000 ms
> >
> > On Wed, Jun 20, 2018 at 10:31 AM Liam Clarke
> > wrote:
> >
> >
It is happening via auto-commit. Frequency is 3000 ms
On Wed, Jun 20, 2018 at 10:31 AM Liam Clarke
wrote:
> How frequently are your consumers committing offsets?
>
> On Wed, 20 Jun. 2018, 4:52 pm Shantanu Deshmukh,
> wrote:
>
> > I desperately need help. Facing this issu
I desperately need help. I have been facing this issue in production for a while
now. Someone please help me out.
On Fri, Jun 15, 2018 at 2:02 AM Lawrence Weikum wrote:
> unsubscribe
>
>
Any help please.
On Thu, Jun 14, 2018 at 2:39 PM Shantanu Deshmukh
wrote:
> We have a consumer application which has a single consumer group
> connecting to multiple topics. We are seeing strange behaviour in consumer
> logs. With these lines
>
> Fetch offset 1109143
= JKS
value.deserializer = class
org.apache.kafka.common.serialization.StringDeserializer
The topic which went orphan has 10 partitions, retention.ms=180,
segment.ms=180.
Please help.
Thanks & Regards,
Shantanu Deshmukh
?
>
> Thanks !
>
> --
> Sent from my iPhone
>
> On May 28, 2018, at 10:44 PM, Shantanu Deshmukh
> wrote:
>
> Can anyone here help me please? I am at my wit's end. I now have
> max.poll.records set to just 2. Still I am getting Auto offset commit
> failed w
Do you want to avoid rebalancing in such a way that if a consumer exits then
its previously owned partition is left disowned? But then who will
consume from the partition that was deserted by an exiting consumer? In such a
case you can go for manual partition assignment. Then there is no question
of
Hey,
You should try setting a topic-level config by doing kafka-topics.sh --alter
--topic <topic> --config <key>=<value> --zookeeper <zk-host:port>
Make sure you also set segment.ms for topics which are not that populous.
This setting specifies the amount of time after which a new segment is rolled.
So Kafka deletes only those
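The advice above can be sketched as a single command for the Kafka versions in this thread (placeholders in angle brackets; the 300000 ms values are illustrative). A segment is only eligible for deletion once it has been rolled, so a short retention.ms without a short segment.ms has no visible effect on a low-traffic topic:

```shell
# Set both retention.ms and segment.ms on an existing topic.
bin/kafka-topics.sh --zookeeper <zk-host:2181> --alter --topic <topic> \
  --config retention.ms=300000 --config segment.ms=300000
```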
ause a segment can only be dropped if
> _all_ messages in a segment passed the retention time.
>
> Does this make sense?
>
> Of course, we are always happy to improve the docs. Feel free to do a PR :)
>
>
> -Matthias
>
>
> On 5/29/18 3:01 AM, Shantanu Deshmukh wrot
read https://www.mail-archive.com/dev@kafka.apache.org/msg67224.html
> > >
> > > -Jaikiran
> > >
> > >
> > > On 29/05/18 5:21 PM, Shantanu Deshmukh wrote:
> > > > Hello,
> > > >
> > > > We have 3 broker Kafka 0.10.0.1 clus
ied increase the poll time higher, e.g. 4000 and see if that
> helps matters?
>
> On 29 May 2018 at 13:44, Shantanu Deshmukh wrote:
>
> > Here is the code which consuming messages
> >
> > >>>>>>>>
> > while(true && startShutdown == false)
No, no dynamic topic creation.
On Tue, May 29, 2018 at 6:38 PM Jaikiran Pai
wrote:
> Are your topics dynamically created? If so, see this
> thread https://www.mail-archive.com/dev@kafka.apache.org/msg67224.html
>
> -Jaikiran
>
>
> On 29/05/18 5:21 PM, Shantanu Desh
y.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
ssl.keystore.location=/opt/kafka/certificates/kafka.keystore.jks
ssl.keystore.password=
ssl.key.password=
ssl.truststore.location=/opt/kafka/certificates/kafka.truststore.jks
ssl.truststore.password
> On 29 May 2018 at 12:51, Shantanu Deshmukh wrote:
>
> > Hello,
> >
> > We have 3 broker Kafka 0.10.0.1 cluster. We have 5 topics, each with 10
> > partitions. I have an application which consumes from all these topics by
> > creating multiple consumer processes. All
Hello,
We have 3 broker Kafka 0.10.0.1 cluster. We have 5 topics, each with 10
partitions. I have an application which consumes from all these topics by
creating multiple consumer processes. All of these consumers are under a
same consumer group. I am noticing that every time we restart this
>
> On 29 May 2018 at 08:26, Shantanu Deshmukh wrote:
>
> > Hello,
> >
> > Is it wise to use a single consumer group for multiple consumers who
> > consume from many different topics? Can this lead to frequent rebalance
> > issues?
> >
>
Hello,
Is it wise to use a single consumer group for multiple consumers who
consume from many different topics? Can this lead to frequent rebalance
issues?
ted after the bound passed.
>
> However, client side, you can always check the record timestamp and just
> drop older data that is still in the topic.
>
> Hope this helps.
>
>
> -Matthias
>
>
> On 5/28/18 9:18 PM, Shantanu Deshmukh wrote:
> > Please help.
PM Shantanu Deshmukh
wrote:
>
> Hello,
>
> We have a 3 broker Kafka 0.10.0.1 cluster. There we have 3 topics with 10
> partitions each. We have an application which spawns threads as consumers.
> We spawn 5 consumers for each topic. I am observing that the consumer group
> rando
Which Kafka version?
On Mon, May 28, 2018 at 9:09 PM Dinesh Subramanian <
dsubraman...@apptivo.co.in> wrote:
> Hi,
>
> Whenever we bounce the consumer in tomcat node, I am facing duplication.
> It is consumed from the beginning. I have this property in consumer
> "auto.offset.reset" =
Please help.
On Mon, May 28, 2018 at 5:18 PM Shantanu Deshmukh
wrote:
> I have a topic otp-sms. I want that retention of this topic should be 5
> minutes as OTPs are invalid post that amount of time. So I set
> retention.ms=30. However, this was not working. So reading more i
I have a topic otp-sms. I want that retention of this topic should be 5
minutes as OTPs are invalid post that amount of time. So I set
retention.ms=30.
However, this was not working. So reading more in depth in Kafka
configuration document I found another topic level setting that can be
tuned
Duplication can happen if your producer or consumer exits uncleanly.
For example, if the producer crashes before it receives the ack from the broker,
your logic will fail to register that the message got produced, and when it
comes back up it will try to send that batch again. Same with the consumer: if it crashes
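One way to reduce (though not fully eliminate) producer-side duplicates on retry is idempotent produce. This requires Kafka 0.11+ clients and brokers, unlike the 0.10.x versions discussed in this thread, so it is offered only as a sketch:

```java
import java.util.Properties;

public class IdempotentProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        // The broker de-duplicates retried batches within a producer session:
        props.put("enable.idempotence", "true");
        // Settings implied/required by idempotence:
        props.put("acks", "all");
        props.put("retries", Integer.toString(Integer.MAX_VALUE));
        return props;
    }
}
```

Consumer-side duplicates (crash after processing but before commit) still need to be handled by making the processing itself idempotent.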
current stage.
Thanks & Regards,
Shantanu Deshmukh
On Fri 25 May, 2018, 1:30 PM Vincent Maurin, <vincent.mau...@glispa.com>
wrote:
> What is the end results done by your consumers ?
> From what I understand, having the need for no duplicates means that these
> duplicates ca
s a case (error to purge etc) and then it is still
> replicated.
> > You may reduce the probability but it will never be impossible.
> >
> > Your application should be able to handle duplicated messages.
> >
> > > On 25. May 2018, at 08:54, Shantanu Deshmukh <
Hello,
We have cross data center replication. Using Kafka MirrorMaker we are
replicating data from our primary cluster to a backup cluster. The problem arises
when we start operating from the backup cluster, in case of a drill or an actual
outage. Data gathered at the backup cluster needs to be reverse-replicated
o call poll(long)
> <
> https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long)
> >
> for a period of time longer than session.timeout.ms then it will be
> considered dead and its partitions will be assigned to another process."
-consumer
Then nothing. After 5-6 minutes activity starts.
On Thu, May 24, 2018 at 6:49 PM Shantanu Deshmukh <shantanu...@gmail.com>
wrote:
> Hi Vincent,
>
> Yes I reduced max.poll.records to get that same effect. I reduced it all
> the way down to 5 records still I am seeing sam
sed on those:
>
> 1) https://www.confluent.io/blog/transactions-apache-kafka/
> 2)
>
> https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
>
> Also, please don't forget to read the Javadoc on KafkaConsumer.java
>
> Regards,
>
> On
ce.backoff.ms=1 and zookeeper.session.timeout.ms
> =3
> > in addition to what Manikumar said.
> >
> >
> >
> > On 24 May 2018 at 12:41, Shantanu Deshmukh <shantanu...@gmail.com>
> wrote:
> >
> > > Hello,
> > >
> > >
Hi M. Manna,
Thanks I will try these settings.
On Thu, May 24, 2018 at 5:15 PM M. Manna <manme...@gmail.com> wrote:
> Set your rebalance.backoff.ms=1 and zookeeper.session.timeout.ms=3
> in addition to what Manikumar said.
>
>
>
> On 24 May 2018 at 12:41, Sha
st machine or
network etc? Is there a better optimized method of manual commit? Or better
yet, how to avoid "auto commit failed" error?
*Thanks & Regards,*
*Shantanu Deshmukh*
ck here:
> http://kafka.apache.org/0101/documentation.html#newconsumerconfigs
>
>
> On Thu, May 24, 2018 at 2:39 PM, Shantanu Deshmukh <shantanu...@gmail.com>
> wrote:
>
> > Someone please help me. I am suffering due to this issue since a long
> time
> > and
Someone please help me. I have been suffering from this issue for a long time
and have not found any solution.
On Wed, May 23, 2018 at 3:48 PM Shantanu Deshmukh <shantanu...@gmail.com>
wrote:
> We have a 3 broker Kafka 0.10.0.1 cluster. There we have 3 topics with 10
> partitions e
specified different CGs for different topics' consumers. Even this is not
helping.
I am trying to search over the web, checked my code, tried many
combinations of configuration but still no luck. Please help me.
Thanks & Regards,
Shantanu Deshmukh