Re: GlobalKTable with RocksDB - queries before state RUNNING?

2023-11-21 Thread Sophie Blee-Goldman
Just to make sure I understand the logs, you're saying the "new file
processed" lines represent store queries, and presumably the
com.osr.serKafkaStreamsService is your service that's issuing these queries?

You need to wait for the app to finish restoring state before querying it.
Based on this message -- "KafkaStreams has not been started, you can retry
after calling start()" -- I assume you're kicking off the querying service
right away and blocking queries until after KafkaStreams#start is called.
But you need to wait for it to actually finish starting up, not just for
start() to be called. The best way to do this is to set a state listener
via KafkaStreams#setStateListener, use it to monitor the KafkaStreams.State,
and block queries until the state has transitioned to RUNNING.
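
For example, a minimal sketch (the class and method names here are just illustrative, not from your app):

```java
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

public class StreamsStartup {

    // Start the app and block until the client has transitioned to RUNNING,
    // i.e. global state restoration has completed.
    public static KafkaStreams startAndAwaitRunning(Topology topology, Properties props)
            throws InterruptedException {
        KafkaStreams streams = new KafkaStreams(topology, props);
        CountDownLatch running = new CountDownLatch(1);
        // Must be registered before start(); fires on every state transition.
        streams.setStateListener((newState, oldState) -> {
            if (newState == KafkaStreams.State.RUNNING) {
                running.countDown();
            }
        });
        streams.start();
        running.await(); // only serve queries after this returns
        return streams;
    }
}
```

Only hand the KafkaStreams instance to the querying service after startAndAwaitRunning returns. Note the client can later leave RUNNING again (e.g. during a rebalance), so a long-lived service may also want to react to subsequent transitions instead of waiting only once.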

In case you're curious about why this seems to work with in-memory stores
but not with rocksdb, it seems like in the in-memory case, the queries that
are attempted during restoration are blocked due to the store being closed
(according to "(Quarkus Main Thread) the state store, store-name, is not
open.")

So why is the store closed for most of the restoration in the in-memory
case only? This gets a bit into the weeds, but it has to do with the
sequence of events in starting up a state store. When the global thread
starts up, it'll first loop over all its state stores and call #init on
them. Two things have to happen inside #init: the store is opened, and the
store registers itself with the ProcessorContext. The #register involves
various things, including a call to fetch the end offsets of the topic for
global state stores. This is a blocking call, so the store might stay
inside the #register call for a relatively long while.

For RocksDB stores, we open the store first and then call #register, so by
the time the GlobalStreamThread is sitting around waiting on the end
offsets response, the store is open and your queries are getting through to
it. However the in-memory store actually registers itself *first*, before
marking itself as open, and so it remains closed for most of the time it
spends in restoration and blocks any query attempts during this time.
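
To illustrate the ordering difference in rough pseudocode (a simplified sketch of the sequence described above, not the actual Kafka Streams source):

```
// RocksDB-backed store: opened first, then registered, so it is
// queryable while the blocking end-offset fetch inside register() runs.
init(context):
    openRocksDb()                     // store now reports itself as open
    context.register(store, callback) // blocks fetching end offsets

// In-memory store: registered first, marked open only afterwards, so it
// rejects queries for most of the restoration.
init(context):
    context.register(store, callback) // blocks fetching end offsets
    open = true                       // only now do queries get through
```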

I suppose it would make sense to align the two store implementations to
have the same behavior, and the in-memory store is probably technically
more correct. But in the end you really should just wait for the
KafkaStreams.State to get to RUNNING before querying the state store, as
that's the only true guarantee.

Hope this helps!

-Sophie

On Tue, Nov 21, 2023 at 6:44 AM Christian Zuegner
 wrote:

> Hi,
>
> we have the following problem: a Kafka topic (~20 MB) is made
> available as a GlobalKTable for queries. When using RocksDB, the GlobalKTable is
> ready for queries instantly, even before the data has been read completely;
> all get() requests return null. After a few seconds the data is queryable
> correctly, but this is too late for our application. Once we switch to
> IN_MEMORY we get the expected behavior: the store is only ready after all
> data has been read from the topic.
>
> How can we achieve the same behavior with the RocksDB setup?
>
> Snippet to build the KafkaStreams topology:
>
> builder.globalTable(
>     "topic-name",
>     Consumed.with(Serdes.String(), Serdes.String()),
>     Materialized.as(STORE_NAME).withStoreType(Materialized.StoreType.ROCKS_DB)
> );
>
> Query the Table
>
> while (true) {
>     try {
>         return streams.store(
>             StoreQueryParameters.fromNameAndType(
>                 FileCrawlerKafkaTopologyProducer.STORE_NAME,
>                 QueryableStoreTypes.keyValueStore()));
>     } catch (InvalidStateStoreException e) {
>         logger.warn(e.getMessage());
>         try {
>             Thread.sleep(3000);
>         } catch (InterruptedException ignored) {
>         }
>     }
> }
>
> The store is queried with getStore().get(key); <- here we get the null
> values.
>
> This is the Log Output when RocksDB - first query before state RUNNING
>
> ...
> 2023-11-21 15:15:40,629 INFO  [com.osr.serKafkaStreamsService] (Quarkus
> Main Thread) wait for kafka streams store to get ready: KafkaStreams has
> not been started, you can retry after calling start()
> 2023-11-21 15:15:41,781 INFO  [org.apa.kaf.str.KafkaStreams]
> (pool-10-thread-1) stream-client
> [topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2] State transition from
> CREATED to REBALANCING
> 2023-11-21 15:15:41,819 INFO  
> [org.apa.kaf.str.sta.int.RocksDBTimestampedStore]
> (topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread)
> Opening store store-name in regular mode
> 2023-11-21 15:15:41,825 INFO  [org.apa.kaf.str.pro.int.GlobalStateManagerImpl]
> (topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread)
> global-stream-thread
> [topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread]
> Restoring state for global store store-name
> 2023-11-21 15:15:43,753 INFO  [io.quarkus] (Quarkus Main Threa

GlobalKTable with RocksDB - queries before state RUNNING?

2023-11-21 Thread Christian Zuegner
Hi,

we have the following problem: a Kafka topic (~20 MB) is made available as a
GlobalKTable for queries. When using RocksDB, the GlobalKTable is ready for
queries instantly, even before the data has been read completely; all get()
requests return null. After a few seconds the data is queryable correctly, but
this is too late for our application. Once we switch to IN_MEMORY we get the
expected behavior: the store is only ready after all data has been read from the topic.

How can we achieve the same behavior with the RocksDB setup?

Snippet to build the KafkaStreams topology:

builder.globalTable(
  "topic-name",
  Consumed.with(Serdes.String(), Serdes.String()),
  Materialized.as(STORE_NAME).withStoreType(Materialized.StoreType.ROCKS_DB)
);

Query the Table

while (true) {
    try {
        return streams.store(
            StoreQueryParameters.fromNameAndType(
                FileCrawlerKafkaTopologyProducer.STORE_NAME,
                QueryableStoreTypes.keyValueStore()));
    } catch (InvalidStateStoreException e) {
        logger.warn(e.getMessage());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException ignored) {
        }
    }
}

The store is queried with getStore().get(key); <- here we get the null values.

This is the Log Output when RocksDB - first query before state RUNNING

...
2023-11-21 15:15:40,629 INFO  [com.osr.serKafkaStreamsService] (Quarkus Main 
Thread) wait for kafka streams store to get ready: KafkaStreams has not been 
started, you can retry after calling start()
2023-11-21 15:15:41,781 INFO  [org.apa.kaf.str.KafkaStreams] (pool-10-thread-1) 
stream-client [topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2] State 
transition from CREATED to REBALANCING
2023-11-21 15:15:41,819 INFO  [org.apa.kaf.str.sta.int.RocksDBTimestampedStore] 
(topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread) Opening 
store store-name in regular mode
2023-11-21 15:15:41,825 INFO  [org.apa.kaf.str.pro.int.GlobalStateManagerImpl] 
(topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread) 
global-stream-thread 
[topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread] Restoring 
state for global store store-name
2023-11-21 15:15:43,753 INFO  [io.quarkus] (Quarkus Main Thread) demo 
1.0-SNAPSHOT on JVM (powered by Quarkus 3.2.8.Final) started in 5.874s.
2023-11-21 15:15:43,754 INFO  [io.quarkus] (Quarkus Main Thread) Profile dev 
activated. Live Coding activated.
2023-11-21 15:15:43,756 INFO  [io.quarkus] (Quarkus Main Thread) Installed 
features: [apicurio-registry-avro, cdi, config-yaml, kafka-client, 
kafka-streams, logging-gelf, smallrye-context-propagation, 
smallrye-fault-tolerance, smallrye-reactive-messaging, 
smallrye-reactive-messaging-kafka, vertx]
2023-11-21 15:15:44,195 INFO  [com.osr.ser.KafkaStreamsService] 
(vert.x-worker-thread-1) new file processed
2023-11-21 15:15:44,629 INFO  [org.apa.kaf.str.pro.int.GlobalStreamThread] 
(topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread) 
global-stream-thread 
[topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread] State 
transition from CREATED to RUNNING
2023-11-21 15:15:44,631 INFO  [org.apa.kaf.str.KafkaStreams] 
(topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2-GlobalStreamThread) 
stream-client [topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2] State 
transition from REBALANCING to RUNNING
2023-11-21 15:15:44,631 INFO  [org.apa.kaf.str.KafkaStreams] (pool-10-thread-1) 
stream-client [topic-name-7c35d436-f18c-4cb9-9d87-80855df5d1a2] Started 0 
stream threads
...

Once I configure the store with StoreType.IN_MEMORY, no queries get through
before the state is RUNNING:

2023-11-21 15:28:25,511 WARN  [com.osr.serKafkaStreamsService] (Quarkus Main 
Thread) KafkaStreams has not been started, you can retry after calling start()
2023-11-21 15:28:26,730 INFO  [org.apa.kaf.str.KafkaStreams] (pool-10-thread-1) 
stream-client [topic-name-e459f74c-cd36-4595-a8c6-bd0aad9ae0a7] State 
transition from CREATED to REBALANCING
2023-11-21 15:28:26,752 INFO  [org.apa.kaf.str.pro.int.GlobalStateManagerImpl] 
(topic-name-e459f74c-cd36-4595-a8c6-bd0aad9ae0a7-GlobalStreamThread) 
global-stream-thread 
[topic-name-e459f74c-cd36-4595-a8c6-bd0aad9ae0a7-GlobalStreamThread] Restoring 
state for global store store-name
2023-11-21 15:28:29,834 WARN  [com.osr.serKafkaStreamsService] (Quarkus Main 
Thread) the state store, store-name, is not open.
2023-11-21 15:28:33,670 WARN  [com.osr.serKafkaStreamsService] (Quarkus Main 
Thread) the state store, store-name, is not open.
2023-11-21 15:28:33,763 INFO  [org.apa.kaf.str.pro.int.GlobalStreamThread] 
(topic-name-e459f74c-cd36-4595-a8c6-bd0aad9ae0a7-GlobalStreamThread) 
global-stream-thread 
[topic-name-e459f74c-cd36-4595-a8c6-bd0aad9ae0a7-GlobalStreamThread] State 
transition from CREATED to RUNNING
2023-11-21 15:28:33,765 INFO  [org.apa.kaf.str.KafkaStreams] 
(topic-name-e459f74c-cd36-4595-a8c6-bd0aad9ae0a7

Change super.users at runtime

2023-11-21 Thread Artem Timchenko
Hi,

I'm trying to change super.users config at runtime, without broker restart.
The following command is used (I use the same approach for all brokers, but
am posting output from only one broker here to keep this short):

> /opt/confluent/bin/kafka-configs --bootstrap-server localhost:9093
> --command-config kafka.properties --entity-type brokers --entity-name 2
> --alter --add-config 'super.users=User:kafka;'

I can see in the kafka.log it was applied successfully:

> {"short_message":"Processing override for entityPath: brokers/2 with
> config: HashMap(super.users -> User:kafka;)","full_message":"Processing
> override for entityPath: brokers/2 with config: HashMap(super.users ->
> User:kafka;)","timestamp":1.700571334002E9,"level":6,"facility":"logstash-gelf","LoggerName":"kafka.server.DynamicConfigManager","SourceSimpleClassName":"Logging","SourceClassName":"kafka.utils.Logging","Time":"2023-11-21
> 12:55:34,002","Severity":"INFO","SourceLineNumber":66,"Thread":"/config/changes-event-process-thread","SourceMethodName":"info"}

But in kafka-authorizer.log, with the log level set to DEBUG, I can still see
another user being treated as a super user:

> {"short_message":"principal = User:management is a super user, allowing
> operation without checking acls.","full_message":"principal =
> User:management is a super user, allowing operation without checking
> acls.","timestamp":1.700572220417E9,"level":7,"facility":"logstash-gelf","LoggerName":"kafka.authorizer.logger","SourceSimpleClassName":"AclAuthorizer","SourceClassName":"kafka.security.authorizer.AclAuthorizer","Time":"2023-11-21
> 13:10:20,417","Severity":"DEBUG","SourceLineNumber":493,"Thread":"data-plane-kafka-request-handler-31","SourceMethodName":"isSuperUser"}


kafka-configs seems to show the new config, but I suspect the value is
displayed as null because it is marked sensitive=true:

>  super.users=null sensitive=true
> synonyms={DYNAMIC_BROKER_CONFIG:super.users=null,
> STATIC_BROKER_CONFIG:super.users=null}


So the question is whether super.users can be updated at runtime at all, or
whether it is a read-only config that can only be changed via a cluster restart?

Thanks


RE: Messages streaming from topic to topic

2023-11-21 Thread Alexander Shapiro (ashapiro)
As far as I know, MM2 cannot sync topics on the same cluster,
since topics on the source and target are supposed to have the same name or to
come with a prefix (prefix.sameName).

Please correct me if I'm wrong: there is no topicA -> topicB replication.

-Original Message-
From: Anders Engström 
Sent: Tuesday, November 21, 2023 12:58
To: users@kafka.apache.org
Subject: Re: Messages streaming from topic to topic


Mirror Maker (2) might be a good solution.. It's "mostly" Kafka Connect, so it 
should be possible to do filtering etc.
https://developers.redhat.com/articles/2023/11/13/demystifying-kafka-mirrormaker-2-use-cases-and-architecture#mirrormaker_2_internal_topics
/Anders

On Tue, Nov 21, 2023 at 11:55 AM Alexander Shapiro (ashapiro) 
 wrote:

> Thanks for instant reply
>
> Yes, I did
>
> I have checked till now the below:
> Apache Storm
> Apache Heron
> Apache Samza
> Apache Spark
> Apache Flink
> Apache NiFi
> Confluent replicator
> StreamSets Data collector
> Envoy Filters
>
>
>
>
>
> -Original Message-
> From: megh vidani 
> Sent: Tuesday, November 21, 2023 12:46
> To: users@kafka.apache.org
> Cc: dev ; kafka-clients <
> kafka-clie...@googlegroups.com>
> Subject: Re: Messages streaming from topic to topic
>
>
> Hi Alexander,
>
> Have you explored Spark or Flink?
>
> Thanks,
> Megh
>
> On Tue, Nov 21, 2023, 15:50 Alexander Shapiro (ashapiro) <
> alexander.shap...@amdocs.com.invalid> wrote:
>
> > Hi team!
> >
> >
> > I am looking for an open-source tool with the capability to stream Kafka
> > messages from topic A to topic B with the below conditions:
> > a. A and B can be on same cluster.
> > b. A and B can be on different clusters.
> > c. Aggregation of messages
> > d. Filtration of messages
> > e. Customization options for payload manipulation
> > before sending to topic B - preferably Java.
> > f. Optional - streaming message to other
> > technologies
> >
> > For example,
> > such capabilities can be implemented with "StreamSets Data
> > Collector", but this tool looks like overkill for my needs.
> >
> >
>
>



Re: Messages streaming from topic to topic

2023-11-21 Thread megh vidani
Okay. For us both of them are working well for the use case mentioned by
you.

Thanks,
Megh

On Tue, Nov 21, 2023, 16:25 Alexander Shapiro (ashapiro)
 wrote:

> Thanks for instant reply
>
> Yes, I did
>
> I have checked till now the below:
> Apache Storm
> Apache Heron
> Apache Samza
> Apache Spark
> Apache Flink
> Apache NiFi
> Confluent replicator
> StreamSets Data collector
> Envoy Filters
>
>
>
>
>
> -Original Message-
> From: megh vidani 
> Sent: Tuesday, November 21, 2023 12:46
> To: users@kafka.apache.org
> Cc: dev ; kafka-clients <
> kafka-clie...@googlegroups.com>
> Subject: Re: Messages streaming from topic to topic
>
>
> Hi Alexander,
>
> Have you explored Spark or Flink?
>
> Thanks,
> Megh
>
> On Tue, Nov 21, 2023, 15:50 Alexander Shapiro (ashapiro) <
> alexander.shap...@amdocs.com.invalid> wrote:
>
> > Hi team!
> >
> >
> > I am looking for an open-source tool with the capability to stream Kafka
> > messages from topic A to topic B with the below conditions:
> > a. A and B can be on same cluster.
> > b. A and B can be on different clusters.
> > c. Aggregation of messages
> > d. Filtration of messages
> > e. Customization options for payload manipulation
> > before sending to topic B - preferably Java.
> > f. Optional - streaming message to other technologies
> >
> > For example,
> > such capabilities can be implemented with "StreamSets Data Collector",
> > but this tool looks like overkill for my needs.
> >
> >
>
>


Re: Messages streaming from topic to topic

2023-11-21 Thread Anders Engström
Mirror Maker (2) might be a good solution.. It's "mostly" Kafka Connect, so
it should be possible to do filtering etc.
https://developers.redhat.com/articles/2023/11/13/demystifying-kafka-mirrormaker-2-use-cases-and-architecture#mirrormaker_2_internal_topics
/Anders

On Tue, Nov 21, 2023 at 11:55 AM Alexander Shapiro (ashapiro)
 wrote:

> Thanks for instant reply
>
> Yes, I did
>
> I have checked till now the below:
> Apache Storm
> Apache Heron
> Apache Samza
> Apache Spark
> Apache Flink
> Apache NiFi
> Confluent replicator
> StreamSets Data collector
> Envoy Filters
>
>
>
>
>
> -Original Message-
> From: megh vidani 
> Sent: Tuesday, November 21, 2023 12:46
> To: users@kafka.apache.org
> Cc: dev ; kafka-clients <
> kafka-clie...@googlegroups.com>
> Subject: Re: Messages streaming from topic to topic
>
>
> Hi Alexander,
>
> Have you explored Spark or Flink?
>
> Thanks,
> Megh
>
> On Tue, Nov 21, 2023, 15:50 Alexander Shapiro (ashapiro) <
> alexander.shap...@amdocs.com.invalid> wrote:
>
> > Hi team!
> >
> >
> > I am looking for an open-source tool with the capability to stream Kafka
> > messages from topic A to topic B with the below conditions:
> > a. A and B can be on same cluster.
> > b. A and B can be on different clusters.
> > c. Aggregation of messages
> > d. Filtration of messages
> > e. Customization options for payload manipulation
> > before sending to topic B - preferably Java.
> > f. Optional - streaming message to other technologies
> >
> > For example,
> > such capabilities can be implemented with "StreamSets Data Collector",
> > but this tool looks like overkill for my needs.
> >
> >
>
>


RE: Messages streaming from topic to topic

2023-11-21 Thread Alexander Shapiro (ashapiro)
Thanks for instant reply

Yes, I did

I have checked till now the below:
Apache Storm
Apache Heron
Apache Samza
Apache Spark
Apache Flink
Apache NiFi
Confluent replicator
StreamSets Data collector
Envoy Filters





-Original Message-
From: megh vidani  
Sent: Tuesday, November 21, 2023 12:46
To: users@kafka.apache.org
Cc: dev ; kafka-clients 
Subject: Re: Messages streaming from topic to topic


Hi Alexander,

Have you explored Spark or Flink?

Thanks,
Megh

On Tue, Nov 21, 2023, 15:50 Alexander Shapiro (ashapiro) 
 wrote:

> Hi team!
>
>
> I am looking for an open-source tool with the capability to stream Kafka
> messages from topic A to topic B with the below conditions:
> a. A and B can be on same cluster.
> b. A and B can be on different clusters.
> c. Aggregation of messages
> d. Filtration of messages
> e. Customization options for payload manipulation 
> before sending to topic B - preferably Java.
> f. Optional - streaming message to other technologies
>
> For example,
> such capabilities can be implemented with "StreamSets Data Collector", 
> but this tool looks like overkill for my needs.
>
>




Re: Messages streaming from topic to topic

2023-11-21 Thread megh vidani
Hi Alexander,

Have you explored Spark or Flink?

Thanks,
Megh

On Tue, Nov 21, 2023, 15:50 Alexander Shapiro (ashapiro)
 wrote:

> Hi team!
>
>
> I am looking for an open-source tool with the capability to stream Kafka
> messages from topic A to topic B with the below conditions:
> a. A and B can be on same cluster.
> b. A and B can be on different clusters.
> c. Aggregation of messages
> d. Filtration of messages
> e. Customization options for payload manipulation before
> sending to topic B - preferably Java.
> f. Optional - streaming message to other technologies
>
> For example,
> such capabilities can be implemented with "StreamSets Data Collector", but
> this tool looks like overkill for my needs
>
>


Messages streaming from topic to topic

2023-11-21 Thread Alexander Shapiro (ashapiro)
Hi team!


I am looking for an open-source tool with the capability to stream Kafka messages 
from topic A to topic B with the below conditions:
a. A and B can be on the same cluster. 
b. A and B can be on different clusters.
c. Aggregation of messages
d. Filtration of messages
e. Customization options for payload manipulation before 
sending to topic B - preferably Java.
f. Optional - streaming messages to other technologies

For example, 
such capabilities can be implemented with "StreamSets Data Collector", but this 
tool looks like overkill for my needs
This email and the information contained herein is proprietary and confidential 
and subject to the Amdocs Email Terms of Service, which you may review at 
https://www.amdocs.com/about/email-terms-of-service 




[VOTE] 3.5.2 RC1

2023-11-21 Thread Luke Chen
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 3.5.2.

This is a bugfix release with several fixes since the release of 3.5.1,
including dependency version bumps for CVEs.

Release notes for the 3.5.2 release:
https://home.apache.org/~showuon/kafka-3.5.2-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Nov. 28.

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~showuon/kafka-3.5.2-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~showuon/kafka-3.5.2-rc1/javadoc/

* Tag to be voted upon (off 3.5 branch) is the 3.5.2 tag:
https://github.com/apache/kafka/releases/tag/3.5.2-rc1

* Documentation:
https://kafka.apache.org/35/documentation.html

* Protocol:
https://kafka.apache.org/35/protocol.html

* Successful Jenkins builds for the 3.5 branch:
Unit/integration tests:
https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.5/98/
There are some flaky tests, including the testSingleIP test failure. It
failed because of an infra change, and we fixed it recently.

System tests: running, will update the results later.



Thank you.
Luke