Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1325

2022-10-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1324

2022-10-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.3 #112

2022-10-28 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-866 ZooKeeper to KRaft Migration

2022-10-28 Thread Jun Rao
Hi, David,

Thanks for the reply.

20/21. Sounds good.

Could you update the doc with all the changes being discussed?

Thanks,

Jun

On Fri, Oct 28, 2022 at 10:11 AM David Arthur
 wrote:

> Jun,
>
> 20/21. I was also wondering about a "migration" record. In addition to the
> scenario you mentioned, we also need a way to prevent the cluster from
> re-entering the dual write mode after the migration has been finalized. I
> could see this happening inadvertently via a change in some configuration
> management system. How about we add a record that marks the beginning and
> end of the dual-write mode? The first occurrence of the record could be
> included in the metadata transaction when we migrate data from ZK.
>
> With this, the active controller would decide whether to enter dual-write
> mode, finalize the migration, or fail based on:
>
> * Metadata log state
> * Its own configuration ("kafka.metadata.migration.enable",
> "zookeeper.connect", etc)
> * The other controllers' configurations (via ApiVersionsResponse)
>
> WDYT?
>
> 22. Since we will need the fencing anyway as a safeguard, I agree we could
> skip the registration of KRaft brokers in ZK to simplify things a bit.
>
> Thanks,
> David
>
>
>
> On Thu, Oct 27, 2022 at 5:11 PM Jun Rao  wrote:
>
> > Hi, David,
> >
> > Thanks for the reply.
> >
> > 20/21. Relying upon the presence of ZK configs to determine whether the
> > KRaft controller is in a dual write mode seems a bit error prone. If
> > someone accidentally adds a ZK configuration to a brand new KRaft
> cluster,
> > ideally it shouldn't cause the controller to get into a weird state. Have
> > we considered storing the migration state in a metadata record?
> >
> > 22. If we have the broker fencing logic, do we need to write the broker
> > registration path in ZK for KRaft brokers at all?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Thu, Oct 27, 2022 at 1:02 PM David Arthur
> >  wrote:
> >
> > > Jun,
> > >
> > > 20/21. A KRaft controller will recover the migration state by reading
> the
> > > "/migration" ZNode. If the migration enable config is set, and the ZK
> > > migration is complete, it will enter the dual-write mode. Before an
> > > operator can decommission ZK, they will need to finalize the migration
> > > which involves removing the migration config and the ZK config. I'll
> > > clarify this in the KIP.
> > >
> > > 22. Yea, we could see an incorrect broker ID during that window.  If we
> > > ended up with a state where we saw a ZK broker ID that conflicted with
> a
> > > KRaft broker ID, we would need to fence one of them. I would probably
> opt
> > > to fence the KRaft broker in that case since broker registration and
> > > fencing is more robust in KRaft. Hopefully this is a rare case.
> > >
> > > 26. Sounds good.
> > >
> > > Thanks!
> > > David
> > >
> > >
> > > On Thu, Oct 27, 2022 at 1:34 PM Jun Rao 
> > wrote:
> > >
> > > > Hi, David,
> > > >
> > > > Thanks for the reply. A few more comments.
> > > >
> > > > 20/21. Using a tagged field in ApiVersionsRequest could work. Related
> to
> > > > this, how does a KRaft controller know that it's in the dual write
> > mode?
> > > > Does it need to read the /controller path from ZK? After the
> migration,
> > > > people may have the ZK cluster decommissioned, but still have the ZK
> > > > configs left in the KRaft controller. Will this cause the KRaft
> > > controller
> > > > to be stuck because it doesn't know which mode it is in?
> > > >
> > > > 22. Using the ephemeral node matches the current ZK-based broker
> > behavior
> > > > better. However, it leaves a window for incorrect broker registration
> > to
> > > > sneak in during KRaft controller failover.
> > > >
> > > > 26. Then, we could just remove Broker Registration in that section.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Wed, Oct 26, 2022 at 2:21 PM David Arthur
> > > >  wrote:
> > > >
> > > > > Jun
> > > > >
> > > > > 20/21 It could definitely cause problems if we fail over to a
> > controller
> > > > > without "kafka.metadata.migration.enable". The only mechanism I
> know
> > of
> > > > for
> > > > > controllers to learn things about one another is ApiVersions. We
> > > > currently
> > > > > use this for checking support for "metadata.version" (in KRaft
> mode).
> > > We
> > > > > could add a "zk.migration" feature flag that's enabled on a
> > controller
> > > > only
> > > > > if the config is set. Another possibility would be a tagged field
> on
> > > > > ApiVersionsResponse that indicated if the config was set. Both seem
> > > > somewhat
> > > > > inelegant. I think a tagged field would be a bit simpler (and
> > arguably
> > > > less
> > > > > hacky).
> > > > >
> > > > > For 20, we could avoid entering the migration state unless the
> whole
> > > > quorum
> > > > > had the field present in their NodeApiVersions. For 21, we could
> > avoid
> > > > > leaving the migration state unless the whole quorum did not have
> the
> > > > field
> > > > > in 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1323

2022-10-28 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-866 ZooKeeper to KRaft Migration

2022-10-28 Thread David Arthur
Jun,

20/21. I was also wondering about a "migration" record. In addition to the
scenario you mentioned, we also need a way to prevent the cluster from
re-entering the dual write mode after the migration has been finalized. I
could see this happening inadvertently via a change in some configuration
management system. How about we add a record that marks the beginning and
end of the dual-write mode? The first occurrence of the record could be
included in the metadata transaction when we migrate data from ZK.

With this, the active controller would decide whether to enter dual-write
mode, finalize the migration, or fail based on:

* Metadata log state
* Its own configuration ("kafka.metadata.migration.enable",
"zookeeper.connect", etc)
* The other controllers' configurations (via ApiVersionsResponse)

WDYT?
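
To make that concrete, here is a rough sketch of the kind of check the
active controller could perform. The class, method and parameter names are
illustrative only, not APIs defined by the KIP:

// Illustrative sketch only: MigrationState, decide() and its inputs are
// made-up names for this example, not APIs from KIP-866.
final class MigrationDecisionSketch {

    enum MigrationState { NONE, DUAL_WRITE, FINALIZED }

    static MigrationState decide(MigrationState stateFromMetadataLog,
                                 boolean migrationEnabled,          // kafka.metadata.migration.enable
                                 boolean zkConfigured,              // zookeeper.connect present
                                 boolean quorumSupportsMigration) { // learned via ApiVersionsResponse
        if (stateFromMetadataLog == MigrationState.FINALIZED) {
            // A record already marked the end of dual-write mode: never re-enter
            // it, even if ZK configs reappear through a config management system.
            return MigrationState.FINALIZED;
        }
        if (migrationEnabled && zkConfigured && quorumSupportsMigration) {
            return MigrationState.DUAL_WRITE;
        }
        if (stateFromMetadataLog == MigrationState.DUAL_WRITE) {
            // The log says dual-write is active but this controller is not set up
            // for it: fail loudly rather than silently dropping ZK writes.
            throw new IllegalStateException("Metadata log indicates dual-write mode "
                + "but this controller is not configured for ZK migration");
        }
        return MigrationState.NONE;
    }
}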

22. Since we will need the fencing anyway as a safeguard, I agree we could
skip the registration of KRaft brokers in ZK to simplify things a bit.

Thanks,
David



On Thu, Oct 27, 2022 at 5:11 PM Jun Rao  wrote:

> Hi, David,
>
> Thanks for the reply.
>
> 20/21. Relying upon the presence of ZK configs to determine whether the
> KRaft controller is in a dual write mode seems a bit error prone. If
> someone accidentally adds a ZK configuration to a brand new KRaft cluster,
> ideally it shouldn't cause the controller to get into a weird state. Have
> we considered storing the migration state in a metadata record?
>
> 22. If we have the broker fencing logic, do we need to write the broker
> registration path in ZK for KRaft brokers at all?
>
> Thanks,
>
> Jun
>
>
> On Thu, Oct 27, 2022 at 1:02 PM David Arthur
>  wrote:
>
> > Jun,
> >
> > 20/21. A KRaft controller will recover the migration state by reading the
> > "/migration" ZNode. If the migration enable config is set, and the ZK
> > migration is complete, it will enter the dual-write mode. Before an
> > operator can decommission ZK, they will need to finalize the migration
> > which involves removing the migration config and the ZK config. I'll
> > clarify this in the KIP.
> >
> > 22. Yea, we could see an incorrect broker ID during that window.  If we
> > ended up with a state where we saw a ZK broker ID that conflicted with a
> > KRaft broker ID, we would need to fence one of them. I would probably opt
> > to fence the KRaft broker in that case since broker registration and
> > fencing is more robust in KRaft. Hopefully this is a rare case.
> >
> > 26. Sounds good.
> >
> > Thanks!
> > David
> >
> >
> > On Thu, Oct 27, 2022 at 1:34 PM Jun Rao 
> wrote:
> >
> > > Hi, David,
> > >
> > > Thanks for the reply. A few more comments.
> > >
> > > 20/21. Using a tagged field in ApiVersionsRequest could work. Related to
> > > this, how does a KRaft controller know that it's in the dual write
> mode?
> > > Does it need to read the /controller path from ZK? After the migration,
> > > people may have the ZK cluster decommissioned, but still have the ZK
> > > configs left in the KRaft controller. Will this cause the KRaft
> > controller
> > > to be stuck because it doesn't know which mode it is in?
> > >
> > > 22. Using the ephemeral node matches the current ZK-based broker
> behavior
> > > better. However, it leaves a window for incorrect broker registration
> to
> > > sneak in during KRaft controller failover.
> > >
> > > 26. Then, we could just remove Broker Registration in that section.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Oct 26, 2022 at 2:21 PM David Arthur
> > >  wrote:
> > >
> > > > Jun
> > > >
> > > > 20/21 It could definitely cause problems if we fail over to a
> controller
> > > > without "kafka.metadata.migration.enable". The only mechanism I know
> of
> > > for
> > > > controllers to learn things about one another is ApiVersions. We
> > > currently
> > > > use this for checking support for "metadata.version" (in KRaft mode).
> > We
> > > > could add a "zk.migration" feature flag that's enabled on a
> controller
> > > only
> > > > if the config is set. Another possibility would be a tagged field on
> > > > ApiVersionsResponse that indicated if the config was set. Both seem
> > > somewhat
> > > > inelegant. I think a tagged field would be a bit simpler (and
> arguably
> > > less
> > > > hacky).
> > > >
> > > > For 20, we could avoid entering the migration state unless the whole
> > > quorum
> > > > had the field present in their NodeApiVersions. For 21, we could
> avoid
> > > > leaving the migration state unless the whole quorum did not have the
> > > field
> > > > in their NodeApiVersions. Do you think this would be sufficient?
> > > >
> > > > 22. Right, we need to write the broker info back to ZK just as a
> > > safeguard
> > > > against incorrect broker IDs getting registered into ZK. I was
> thinking
> > > > these would be persistent nodes, but it's probably fine to make them
> > > > ephemeral and have the active KRaft controller keep them up to date.
> > > >
> > > > 23. Right. When the broker comes up as a KRaft broker, 

Re: [VOTE] KIP-848: The Next Generation of the Consumer Rebalance Protocol

2022-10-28 Thread David Jacot
Hi all,

I am pleased to announce that the KIP is accepted with 4 binding votes
from Guozhang, Luke, Jun and Jason, and 1 non-binding vote from
Magnus.

Thank you all for the great discussion and feedback!

Best,
David

On Fri, Oct 28, 2022 at 6:33 PM Jason Gustafson
 wrote:
>
> +1 Thanks for all the hard work.
>
> -Jason
>
> On Tue, Oct 25, 2022 at 7:17 AM David Jacot 
> wrote:
>
> > Hi all,
> >
> > The vote has been open for a while. I plan to close it on Friday if
> > there are no further comments in the discussion thread.
> >
> > Best,
> > David
> >
> > On Wed, Oct 19, 2022 at 6:10 PM Jun Rao  wrote:
> > >
> > > Hi, David,
> > >
> > > Thanks for the KIP. +1
> > >
> > > Jun
> > >
> > > On Wed, Oct 19, 2022 at 2:21 AM Magnus Edenhill 
> > wrote:
> > >
> > > > Great work on the KIP, David.
> > > >
> > > > +1 (nonbinding)
> > > >
> > > > On Fri, Oct 14, 2022 at 11:50, Luke Chen  wrote:
> > > >
> > > > > Hi David,
> > > > >
> > > > > I made a final pass and LGTM now.
> > > > > +1 from me.
> > > > >
> > > > > Luke
> > > > >
> > > > > On Wed, Oct 5, 2022 at 12:32 AM Guozhang Wang 
> > > > wrote:
> > > > >
> > > > > > Hello David,
> > > > > >
> > > > > > I've made my final pass on the doc and I think it looks good now.
> > +1.
> > > > > >
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Wed, Sep 14, 2022 at 1:37 PM Guozhang Wang 
> > > > > wrote:
> > > > > >
> > > > > > > Thanks David,
> > > > > > >
> > > > > > > There are a few minor comments pending in the discussion thread,
> > and
> > > > > one
> > > > > > > is about whether we should merge PreparePartitionAssignment with
> > HB.
> > > > > But
> > > > > > I
> > > > > > > think the KIP itself is in pretty good shape now. Thanks!
> > > > > > >
> > > > > > >
> > > > > > > Guozhang
> > > > > > >
> > > > > > > On Fri, Sep 9, 2022 at 1:32 AM David Jacot
> > > >  > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Hi all,
> > > > > > >>
> > > > > > >> Thank you all for the very positive discussion about KIP-848. It
> > > > looks
> > > > > > >> like folks are very positive about it overall.
> > > > > > >>
> > > > > > >> I would like to start a vote on KIP-848, which introduces a
> > brand
> > > > new
> > > > > > >> consumer rebalance protocol.
> > > > > > >>
> > > > > > >> The KIP is here: https://cwiki.apache.org/confluence/x/HhD1D.
> > > > > > >>
> > > > > > >> Best,
> > > > > > >> David
> > > > > > >>
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > -- Guozhang
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > >
> >


Re: [VOTE] KIP-848: The Next Generation of the Consumer Rebalance Protocol

2022-10-28 Thread Jason Gustafson
+1 Thanks for all the hard work.

-Jason

On Tue, Oct 25, 2022 at 7:17 AM David Jacot 
wrote:

> Hi all,
>
> The vote has been open for a while. I plan to close it on Friday if
> there are no further comments in the discussion thread.
>
> Best,
> David
>
> On Wed, Oct 19, 2022 at 6:10 PM Jun Rao  wrote:
> >
> > Hi, David,
> >
> > Thanks for the KIP. +1
> >
> > Jun
> >
> > On Wed, Oct 19, 2022 at 2:21 AM Magnus Edenhill 
> wrote:
> >
> > > Great work on the KIP, David.
> > >
> > > +1 (nonbinding)
> > >
> > > On Fri, Oct 14, 2022 at 11:50, Luke Chen  wrote:
> > >
> > > > Hi David,
> > > >
> > > > I made a final pass and LGTM now.
> > > > +1 from me.
> > > >
> > > > Luke
> > > >
> > > > On Wed, Oct 5, 2022 at 12:32 AM Guozhang Wang 
> > > wrote:
> > > >
> > > > > Hello David,
> > > > >
> > > > > I've made my final pass on the doc and I think it looks good now.
> +1.
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Wed, Sep 14, 2022 at 1:37 PM Guozhang Wang 
> > > > wrote:
> > > > >
> > > > > > Thanks David,
> > > > > >
> > > > > > There are a few minor comments pending in the discussion thread,
> and
> > > > one
> > > > > > is about whether we should merge PreparePartitionAssignment with
> HB.
> > > > But
> > > > > I
> > > > > > think the KIP itself is in pretty good shape now. Thanks!
> > > > > >
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Fri, Sep 9, 2022 at 1:32 AM David Jacot
> > >  > > > >
> > > > > > wrote:
> > > > > >
> > > > > >> Hi all,
> > > > > >>
> > > > > >> Thank you all for the very positive discussion about KIP-848. It
> > > looks
> > > > > >> like folks are very positive about it overall.
> > > > > >>
> > > > > >> I would like to start a vote on KIP-848, which introduces a
> brand
> > > new
> > > > > >> consumer rebalance protocol.
> > > > > >>
> > > > > >> The KIP is here: https://cwiki.apache.org/confluence/x/HhD1D.
> > > > > >>
> > > > > >> Best,
> > > > > >> David
> > > > > >>
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
>


[jira] [Resolved] (KAFKA-14314) MirrorSourceConnector throwing NPE during `isCycle` check

2022-10-28 Thread Mickael Maison (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mickael Maison resolved KAFKA-14314.

Fix Version/s: 3.4.0
   Resolution: Fixed

> MirrorSourceConnector throwing NPE during `isCycle` check
> -
>
> Key: KAFKA-14314
> URL: https://issues.apache.org/jira/browse/KAFKA-14314
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.3.1
>Reporter: John Krupka
>Assignee: John Krupka
>Priority: Blocker
> Fix For: 3.4.0
>
>
> We are using MirrorMaker to replicate topics across clusters in AWS. As the 
> process is starting up, we are getting a NullPointerException when 
> MirrorSourceConnector is calling `isCycle`.
> Retrieving the `upstreamTopic` on [this line of 
> code|https://github.com/apache/kafka/blob/cc582897bfb237572131369a598f7869220b43dc/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceConnector.java#L497]
>  is returning null, which causes the NPE on the next line. 
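
A simplified sketch of the recursive check and one possible null guard
follows. This is an illustration based on the linked code, not necessarily
the exact fix that was merged:

import org.apache.kafka.connect.mirror.ReplicationPolicy;

// Simplified, illustrative version of the cycle check; the real method lives
// in MirrorSourceConnector and uses the connector's own fields.
final class CycleCheckSketch {
    static boolean isCycle(ReplicationPolicy policy, String targetAlias, String topic) {
        String source = policy.topicSource(topic);
        if (source == null) {
            return false;                      // not a remote topic, so no cycle
        } else if (source.equals(targetAlias)) {
            return true;                       // topic already originates from the target cluster
        } else {
            String upstream = policy.upstreamTopic(topic);
            // upstream can be null here, which is what triggers the NPE described
            // above; guarding on null (as well as the identity case) avoids it.
            if (upstream == null || upstream.equals(topic)) {
                return false;
            }
            return isCycle(policy, targetAlias, upstream);
        }
    }
}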



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-872: Add Serializer#serializeToByteBuffer() to reduce memory copying

2022-10-28 Thread ShunKang Lin
Bump this thread to see if there are any comments/thoughts.
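
To make point 3 in the quoted thread below easier to follow, here is a
minimal, self-contained illustration of the java.nio behaviour being
discussed. This is plain JDK behaviour, not Kafka code:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ReadOnlyBufferDemo {
    public static void main(String[] args) {
        ByteBuffer original = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));

        // A heap buffer exposes its backing array, so writers can use bulk copies.
        System.out.println(original.hasArray());             // true

        // A read-only view protects the caller's data, but hasArray() is false,
        // so code writing it out has to fall back to byte-by-byte access.
        ByteBuffer readOnly = original.asReadOnlyBuffer();
        System.out.println(readOnly.hasArray());             // false

        // The view has its own position/limit, so reading it does not move the
        // original's position; the underlying content is still shared, though.
        readOnly.get();
        System.out.println(original.position());             // 0
    }
}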

Best,
ShunKang


On Fri, Sep 30, 2022 at 23:58, ShunKang Lin  wrote:

> Hi Divij Vaidya,
>
> 3. Sounds good, but `ByteBuffer#asReadOnlyBuffer()` returns a read-only
> `ByteBuffer` for which `ByteBuffer#hasArray()` returns false, which will make
> `Utils#writeTo(DataOutput, ByteBuffer, int)` (called in
> `DefaultRecord#writeTo(DataOutputStream, int, long, ByteBuffer, ByteBuffer,
> Header[])`) perform less efficiently. By the way,
> `ByteBufferSerializer#serialize(String, ByteBuffer)` calls
> `ByteBuffer#flip()`, which modifies the position and limit of the input buffer.
>
> In my opinion, it is acceptable to modify the offset of the input buffer.
> After all, serialization means reading data and `ByteBuffer` needs to
> modify the position and limit before reading the data. We just need to
> assure the user that the input data will not be modified by the Kafka
> library.
>
> On Thu, Sep 29, 2022 at 19:07, Divij Vaidya  wrote:
>
>> 1. You are right. We append the message to the `DefaultRecord` and append
>> is a copy operation. Hence, the ByteBuffer would be released at the end of
>> the KafkaProducer#doSend() method. This comment is resolved.
>> 2. I don't foresee any compatibility issues since #1 is not a problem
>> anymore. This comment is resolved.
>>
>> New comments:
>>
>> 3. In the ByteBufferSerializer#serializeToByteBuffer, could we take the
>> input ByteBuffer from the user application and return a
>> `data.asReadOnlyBuffer()`? As I understand, it does not involve any data
>> copy, hence no extra memory cost. On the upside, it would help provide the
>> guarantee to the user that the data (and the pointers such as position,
>> capacity, etc.) in the input ByteBuffer is not modified by the Kafka library.
>>
>> 4. Please change the documentation of the ByteBufferSerializer to clarify
>> that Kafka code will not modify the buffer (neither the data of the
>> provided input buffer nor the pointers).
>>
>> --
>> Divij Vaidya
>>
>>
>>
>> On Wed, Sep 28, 2022 at 5:35 PM ShunKang Lin 
>> wrote:
>>
>> > Hi Divij Vaidya,
>> >
>> > Thanks for your comments.
>> >
>> > 1. I checked the code of KafkaProducer#doSend()
>> > and RecordAccumulator#append(), if KafkaProducer#doSend() returns it
>> means
>> > serializedKey and serializedValue have been appended to
>> > ProducerBatch#recordsBuilder and we don't keep reference of
>> serializedKey
>> > and serializedValue.
>> >
>> > 2. According to 1, the user application can reuse the ByteBuffer to send
>> > consecutive KafkaProducer#send() requests without breaking the user
>> > application. If we are concerned about compatibility, we can provide
>> > another Serializer, such as ZeroCopyByteBufferSerializer, and keep the
>> > original ByteBufferSerializer unchanged.
>> >
>> > In my opinion, kafka-clients should provide some way for users who want
>> to
>> > improve application performance, if users want to improve application
>> > performance, they should use lower level code and understand the
>> underlying
>> > implementation of these codes.
>> >
>> > Best,
>> > ShunKang
>> >
>> > On Wed, Sep 28, 2022 at 19:58, Divij Vaidya  wrote:
>> >
>> > > Hello
>> > >
>> > > I believe that the current behaviour of creating a copy of the user
>> > > provided input is the correct approach because of the following
>> reasons:
>> > >
>> > > 1. In the existing implementation (considering cases when T is
>> ByteBuffer
>> > > in Serializer#serialize(String,Headers,T)) we copy the data (T) into a
>> > new
>> > > byte[]. In the new approach, we would continue to re-use the
>> ByteBuffer
>> > > even after doSend() which means the `ProducerRecord` object cannot go
>> out
>> > > of scope from a GC perspective at the end of doSend(). Hence, the new
>> > > approach may lead to increased heap memory usage for a greater period
>> of
>> > > time.
>> > >
>> > > 2. The new approach may break certain user applications e.g. consider
>> an
>> > > user application which re-uses the ByteBuffer (maybe it's a memory
>> mapped
>> > > byte buffer) to send consecutive Producer.send() requests. Prior to
>> this
>> > > change, they could do that because we copy the data from user provided
>> > > input before storing it in the accumulator but after this change, they
>> > will
>> > > have to allocate a new ByteBuffer for every ProduceRecord.
>> > >
>> > > In general, I am of the opinion that any user provided data should be
>> > > copied to internal data structures at the interface of an opaque
>> library
>> > > (like client) so that the user doesn't have to guess about memory
>> > lifetime
>> > > of the objects they provided to the opaque library.
>> > >
>> > > What do you think?
>> > >
>> > > --
>> > > Divij Vaidya
>> > >
>> > >
>> > >
>> > > On Sun, Sep 25, 2022 at 5:59 PM ShunKang Lin <
>> linshunkang@gmail.com>
>> > > wrote:
>> > >
>> > > > Hi all, I'd like to start a new discussion thread on KIP-872 (Kafka
>> > > Client)
>> > > > which proposes adding Serializer#serializeToByteBuffer() to reduce
>> > 

[DISCUSS] KIP-879: Multi-level Rack Awareness

2022-10-28 Thread Viktor Somogyi-Vass
Hey all,

I'd like to propose a new broker side replica assignment strategy and an
interface that generalizes replica assignment on brokers and makes them
pluggable.

Briefly, the motivation for the new replica assignment strategy is that
more and more of our customers want to run their clusters in a stretched
environment, where for instance a cluster runs over multiple regions (and
multiple racks inside a region). Since this seems like an increasingly
common need, we'd like to contribute back our implementation and also
define a generalized interface, so that new strategies people come up with
can be supported more easily.
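
To give a rough idea of what "pluggable" means here, something along these
lines could serve as the hook. This is purely illustrative; the interface
actually proposed is described in the KIP linked below:

import java.util.List;
import java.util.Map;

// Purely illustrative sketch of a pluggable replica-assignment hook; the
// names here are not taken from the KIP and the real interface will differ.
public interface ReplicaAssignorSketch {
    /**
     * @param partitions        number of partitions to place
     * @param replicationFactor replicas per partition
     * @param brokerRacks       broker id -> rack path, e.g. "region-1/rack-2"
     *                          for a multi-level (stretched) topology
     * @return partition index -> ordered list of broker ids (leader first)
     */
    Map<Integer, List<Integer>> assignReplicas(int partitions,
                                               short replicationFactor,
                                               Map<Integer, String> brokerRacks);
}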

I welcome any feedback on this KIP.

The link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-879%3A+Multi-level+Rack+Awareness

Best to all,
Viktor


RE: Supported Kafka/Zookeeper Version with ELK 8.4.3

2022-10-28 Thread Kumar, Sudip
Hi Team,

We are still waiting for a reply. Please let us know which version of
Kafka is compatible with ELK 8.4.

So far, no one has replied on either the user or dev community portal.



Thanks
Sudip


From: Kumar, Sudip
Sent: Monday, October 17, 2022 5:23 PM
To: us...@kafka.apache.org; dev@kafka.apache.org
Cc: Rajendra Bangal, Nikhil ; Verma, 
Harshit ; Verma, Deepak Kumar 
; Arkal, Dinesh Balaji 
; Saurabh, Shobhit 

Subject: Supported Kafka/Zookeeper Version with ELK 8.4.3
Importance: High

Hi Kafka Team,

We are currently planning to upgrade ELK from 7.16 to 8.4.3. In our
ecosystem we use Kafka as middleware to ingest data coming from different
sources, where a publisher (Logstash shipper) publishes data to different
Kafka topics and a subscriber (Logstash indexer) consumes the data.

We have an integration of ELK 7.16 with Kafka 2.5.1 and ZooKeeper 3.5.8.
If we upgrade to ELK 8.4.3, which Kafka and ZooKeeper versions will be
supported? Please also point us to any relevant documentation.

Let me know if you have any further questions.

Thanks
Sudip Kumar
Capgemini-India

