Just following up, I realized I forgot to add some information.
This is using Kafka 3.5.1.
> I am in the process of setting up a kafka cluster which is configured to
> use KRaft. There is a set of three controller nodes and a set of six
> brokers. Both the controllers and the brokers are
Thanks Luke, this helps for our use case. It does not cover the buildout
of a new cluster where there are no brokers yet, but that should be remedied
by KIP-919, which looks to be resolved in 3.7.0.
ttyl
Dima
On Sun, Apr 21, 2024 at 9:06 PM Luke Chen wrote:
> Hi Frank,
>
> About your question:
>
Hello,
I am in the process of setting up a Kafka cluster which is configured to
use KRaft. There is a set of three controller nodes and a set of six
brokers. Both the controllers and the brokers are configured to use mTLS
(mutual TLS). So the relevant part of the controller config looks like:
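A minimal sketch of what a KRaft controller's mTLS listener section typically looks like; hostnames, ports, and keystore paths below are placeholder assumptions, not the poster's actual values:

```properties
# KRaft controller role and quorum (placeholder hostnames)
process.roles=controller
node.id=1
controller.quorum.voters=1@ctrl1:9093,2@ctrl2:9093,3@ctrl3:9093

# mTLS on the controller listener
listeners=CONTROLLER://ctrl1:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:SSL
ssl.client.auth=required
ssl.keystore.location=/etc/kafka/ssl/controller.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/kafka/ssl/truststore.jks
ssl.truststore.password=changeit
```

With `ssl.client.auth=required`, peers connecting to the controller listener must present a certificate trusted by the configured truststore, which is what makes the TLS mutual.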
Hello,
Is there a Git issue with 3.5.2? When I look at GitHub I see the 3.5.2
tag, but if I add the repo as an upstream remote I don't see 3.5.2.
Any ideas what could be up?
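One common cause, sketched below: a plain `git fetch` on a newly added remote does not always pick up every tag, and fetching with `--tags` usually surfaces them. The remote name `upstream` is an assumption:

```shell
# add the Apache Kafka repo as a remote (if not already added)
git remote add upstream https://github.com/apache/kafka.git

# fetch branches AND all tags explicitly
git fetch upstream --tags

# check whether the tag is now visible locally
git tag -l '3.5.2'
```

If the tag still does not appear, `git ls-remote --tags upstream` shows exactly which tags the remote advertises, which helps tell a fetch problem apart from a missing tag.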
Thanks!
ttyl
Dima
On Mon, Dec 11, 2023 at 3:36 AM Luke Chen wrote:
> The Apache Kafka community is pleased to
Hello,
Would the following configuration be valid in a Kafka KRaft cluster?
So let's say we had the following configs for a controller and a broker:
=== controller -
https://github.com/apache/kafka/blob/6d1d68617ecd023b787f54aafc24a4232663428d/config/kraft/controller.properties
Hello, a question:
If I have my Kafka cluster behind a VIP for bootstrapping, is it possible
to have the controllers participate in the bootstrap process, or can only
brokers?
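For context, a hedged sketch (hostnames are placeholders): regular clients can bootstrap only from brokers, so a bootstrap VIP should front the brokers; KIP-919 (resolved in 3.7.0, as noted above) adds a way for admin clients to bootstrap from the controllers directly:

```properties
# client side: bootstrap.servers must point at brokers
# (a VIP that fronts the brokers is fine)
bootstrap.servers=kafka-vip.example.com:9092

# since Kafka 3.7.0 (KIP-919), AdminClient can instead target
# the controller quorum directly:
# bootstrap.controllers=ctrl1:9093,ctrl2:9093,ctrl3:9093
```

Note that `bootstrap.controllers` applies to the admin client only; producers and consumers still need broker endpoints.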
Thanks!
ttyl
Dima
--
ddbrod...@gmail.com
"The price of reliability is the pursuit of the utmost simplicity.
It is a price which the very rich find most hard to pay."
Hello,
I was wondering if anybody has seen the following:
I have a topic with about 200 partitions and a replication factor of
3. Once in a while I am seeing a partition or two lose a replica; that is,
the replica list for those partitions goes from 3 to 2 brokers. Kafka does
not see
the data is completely stale.
I would think the data should be deleted, but we are seeing that it is not.
On Tue, Sep 15, 2020 at 8:04 PM wrote:
> It should delete the old log data based on the topic's retention.
> What Kafka version are you using?
>
> On 9/15/20, 7:48 PM, "Dima
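On the retention question above, a sketch of the topic-level settings that govern time-based deletion; the values are illustrative, not the poster's actual configuration:

```properties
cleanup.policy=delete     # time/size-based deletion (vs. 'compact')
retention.ms=604800000    # delete data older than 7 days
segment.ms=86400000       # roll segments daily; only closed (rolled)
                          # segments are eligible for retention deletion
```

One point worth checking when old data seems to linger: retention is applied only to closed log segments, so data in the active segment can outlive `retention.ms` until the segment rolls.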
Hi,
I have a question: when you start Kafka on a node, if there is a stray
replica log, should it be deleted on startup? Here is an example: assume
you have a 4-node cluster. Topic X has 3 replicas and is replicated on
nodes 1, 2, and 3. Now you shut down node 3 and you place the replica
Hi,
I am using Kafka client 2.0.1 and once in a while I see the following in
the logs:
2020-03-20 09:42:57.960 INFO 160813 --- [pool-1-thread-1]
o.a.kafka.clients.FetchSessionHandler: [Consumer clientId=consumer-1,
groupId=version-grabber-ajna0-mgmt1-1-prd] Error sending fetch request
Hi,
I was just wondering if the following article:
https://docs.confluent.io/current/kafka/incremental-security-upgrade.html
is still valid when using ZooKeeper 3.5.5 with mTLS rather than Kerberos?
If it is still valid, what principal is used for the ACL?
Thanks!
ttyl
Dima