Hi Luke,
The purpose of the count is to know how many producers and consumers are
connected at any given time, and to verify that the count is not crossing any
threshold. A misbehaving client can open many connections, affecting all others.
Thanks,
Dhirendra.
On Fri, Feb 4, 2022 at 9:29 AM Luke Chen wrote:
> Hi Dhirendra,
>
>
Hi all, I'm working on a client library for Kafka, and I can't seem to find
documentation on the difference between truncation of a RecordBatch due to log
compaction and truncation of a RecordBatch due to the MaxBytes requested by the
reader. Is there a flag set in the message that I can reference?
https://cwiki.ap
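For what it's worth, a common way to tell the two cases apart (assuming the standard v2 RecordBatch wire format): a fetch response capped by MaxBytes may end with a partial batch whose declared length exceeds the bytes actually returned, and clients are expected to discard that trailing fragment; log compaction never produces partial batches, it only leaves gaps in the offset sequence. A minimal sketch of the length check (the function name and framing are my own, not from any client library):

```python
import struct

def split_complete_batches(buf: bytes):
    """Split a fetch-response record set into complete RecordBatches,
    discarding a trailing partial batch (an artifact of MaxBytes,
    not of log compaction)."""
    batches, pos = [], 0
    while pos + 12 <= len(buf):
        # v2 RecordBatch header begins: baseOffset (int64), batchLength (int32);
        # batchLength counts the bytes that follow the batchLength field itself.
        base_offset, batch_length = struct.unpack_from(">qi", buf, pos)
        total = 12 + batch_length
        if pos + total > len(buf):
            break  # trailing partial batch: cut short by MaxBytes, drop it
        batches.append(buf[pos:pos + total])
        pos += total
    return batches
```

A batch cut short by MaxBytes fails the length check and is dropped; a compacted log never yields such a fragment, so on the client side there is no flag to read, only this structural difference.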
To my knowledge, such policy/practice does not exist for the Apache Kafka
project.
From time to time, certain environments and tool support, such as Java and
Scala versions, have been deprecated and dropped, but I don't think this
applies to Kafka versions.
Bug and security fixes are typically applied t
I have found the Confluent EOL schedule, but I have not been able to find the
EOL schedule for Apache Kafka. Does such a policy exist?
Best Regards,
--
Doug Whitfield | Enterprise Architect, OpenLogic
Hello,
I have an issue. I hope you are able to resolve it.
I attached my test code.

Environment:
3-node clustered Kafka (2.12-2.1.1)
ZooKeeper (3.4)

Kafka topic description:
Topic:test_topic PartitionCount:5 ReplicationFactor:2
Configs:min.insync.replicas=2
Topic: test_topic Partit
Hi All,
Can the Config Provider be specified at the connector level, or is the
configuration only available at the worker level (Docker configuration)?
Here are our observations:
1. When we specify the config provider at the connector level, then when
adding the connector, the validation for
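In case it helps: in Apache Kafka Connect, config providers themselves are declared in the worker configuration; connector configs can only reference a provider that the worker has already registered, using the ${provider:path:key} syntax. An illustrative worker snippet using the built-in FileConfigProvider (the file path below is a placeholder, not from your setup):

```properties
# Worker configuration (e.g. connect-distributed.properties)
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
```

A connector config submitted to that worker could then refer to a secret indirectly, e.g. "connection.password": "${file:/opt/secrets/db.properties:password}", and the worker resolves the value at (re)configuration time.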