Hi Gowtham,
You could check whether there are any quota throttles on the broker. The
metric kafka_server_quota_throttle_time might help here.
Cheers
On Fri, 18 Sep 2020 at 07:20, Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:
> Hi, what is the output of kafka-consumer-groups.sh --describe for that
> consumer's group please?
Hi,
Are you using the official Java client or any third party clients?
On Thu, 17 Sep 2020 at 22:53, M. Manna wrote:
> Hey Shaohan,
>
> Thanks for your reply, much appreciated. We are using Kafka 2.4.1. Although
> we use the Confluent platform under Confluent Licence, I don't think this
> matters.
Hi, what is the output of kafka-consumer-groups.sh --describe for that
consumer's group please?
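For reference, a typical invocation looks like the following (the group name
and bootstrap address are placeholders for your setup):

```shell
# Describe the consumer group: shows per-partition current offset,
# log-end offset, and lag for each member.
bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group my-group
```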
On Thu, 17 Sep. 2020, 7:37 pm Gowtham S, wrote:
> Hello All,
> We are consuming a topic with a single partition. consumer.poll(1000)
> mostly returns 0 records even if we have more than 1 record in lag.
Hey Shaohan,
Thanks for your reply, much appreciated. We are using Kafka 2.4.1. Although
we use the Confluent platform under Confluent Licence, I don't think this
matters.
We are using compression.type=GZIP for all our producers.
We create all topics with compression.type=GZIP, i.e. at the per-topic level.
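If it helps, topic-level compression can be inspected and changed with
kafka-configs.sh (the topic name here is a placeholder):

```shell
# Show any per-topic config overrides, including compression.type
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe

# Set (or change) topic-level compression
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config compression.type=gzip
```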
Hi,
Could you specify your version and the protocol of your broker?
From the 2.5.1 client I didn't see any changes that could affect
client-side compression.
Maybe you could check if the compression.type is set on the topic level or
the broker side?
Cheers
On Thu, 17 Sep 2020 at 20:19, M. Manna wrote:
Great, thanks for the update Manoj and Andrey! I recommend the simple
upgrade from 2.5.0 to 2.5.1 to get the fix for
https://issues.apache.org/jira/browse/KAFKA-9835.
Ismael
On Thu, Sep 3, 2020 at 9:01 PM wrote:
> We also upgraded Kafka 2.2.1 to Kafka 2.5.0 and kept the same ZooKeeper. No
> issues.
Hello,
I am trying to understand the compression.type settings and their impact on
topic growth. With a valid compression.type setting, messages in the topic
stay compressed and have to be decompressed by the consumer.
However, it may be possible that the compression will n
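As a sketch of the knobs involved (values here are illustrative, not a
recommendation):

```properties
# producer.properties - the producer compresses batches before sending
compression.type=gzip

# topic/broker level - 'producer' (the default) retains whatever codec the
# producer used; naming a specific codec instead can force the broker to
# recompress batches that arrive with a different codec
compression.type=producer
```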
Hello All,
We are consuming a topic with a single partition. consumer.poll(1000)
mostly returns 0 records even if we have more than 1 record in lag.
In which cases will it behave like this?
We are using a Kafka 2.4.0 client and a 2.4.0 broker. A single record is
about 100 KB.
Consumer conf
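For context, a few consumer settings that influence how much data a single
poll() returns; the values shown are the Kafka defaults, worth checking
against your actual configuration:

```properties
# consumer configuration (defaults shown)
fetch.min.bytes=1                  # broker replies as soon as any data is available
fetch.max.wait.ms=500              # how long the broker waits to satisfy fetch.min.bytes
max.partition.fetch.bytes=1048576  # per-partition fetch cap; must exceed the ~100 KB record size
max.poll.records=500               # upper bound on records returned per poll()
```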