Hi All,
We are using the Confluent distribution of Kafka, version 5.5.0, deployed as
pods on Kubernetes.
We run 3 broker pods and have set the pod memory request/limit to
512Mi/2Gi respectively, and we observed that all pods were almost touching
the limit or going slightly over it.
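For context, the resources section of the broker pod spec looks roughly like
the sketch below (simplified and illustrative; field names follow standard
Kubernetes container spec conventions, actual manifest may differ):

  containers:
    - name: kafka-broker
      resources:
        requests:
          memory: "512Mi"   # memory request per broker pod
        limits:
          memory: "2Gi"     # memory limit the pods are approaching/exceeding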
Hi
Below is the script I used to create the table in MySQL:
CREATE TABLE `sample` (
`id` varchar(45) NOT NULL,
`a` decimal(10,3) DEFAULT NULL,
`b` decimal(10,3) DEFAULT NULL,
`c` decimal(10,3) DEFAULT NULL,
`d` decimal(10,3) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
Hi Team,
Today, in our production cluster, we faced an issue with Kafka (old offsets were
being pulled by a Spark Streaming application) and we couldn't debug the issue
using the kafka-consumer-groups.sh CLI.
Whenever we execute the command below to list the consumer groups, it works
fine.