[jira] [Created] (KAFKA-16343) Improve tests of streams foreignkey package

2024-03-04 Thread Ayoub Omari (Jira)
Ayoub Omari created KAFKA-16343:
---

 Summary: Improve tests of streams foreignkey package
 Key: KAFKA-16343
 URL: https://issues.apache.org/jira/browse/KAFKA-16343
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 3.7.0
Reporter: Ayoub Omari
Assignee: Ayoub Omari


Some classes in the streams foreignkey package are not tested, such as 
SubscriptionSendProcessorSupplier and ForeignTableJoinProcessorSupplier. 
Corresponding tests should be added.

The class ForeignTableJoinProcessorSupplierTest should be renamed, as it is not 
testing ForeignTableJoinProcessor but rather SubscriptionJoinProcessorSupplier.
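One low-risk starting point (a sketch only: the topic names, serdes, and the 
"fk|payload" value encoding below are made up for illustration and are not taken 
from the existing test suite) is an end-to-end check of the foreign-key join 
through TopologyTestDriver, which exercises the subscription and response 
processors indirectly:
{code:java}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.*;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ForeignKeyJoinSmokeTest {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, String> left =
            builder.table("left-topic", Consumed.with(Serdes.String(), Serdes.String()));
        final KTable<String, String> right =
            builder.table("right-topic", Consumed.with(Serdes.String(), Serdes.String()));

        // Foreign-key join: the left value is assumed to be encoded as "<fk>|<payload>".
        left.join(right, value -> value.split("\\|")[0], (lv, rv) -> lv + "+" + rv)
            .toStream()
            .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fk-join-smoke-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            final TestInputTopic<String, String> leftIn =
                driver.createInputTopic("left-topic", new StringSerializer(), new StringSerializer());
            final TestInputTopic<String, String> rightIn =
                driver.createInputTopic("right-topic", new StringSerializer(), new StringSerializer());
            final TestOutputTopic<String, String> out =
                driver.createOutputTopic("output-topic", new StringDeserializer(), new StringDeserializer());

            rightIn.pipeInput("fk1", "rhs-value");
            leftIn.pipeInput("k1", "fk1|lhs-value");

            // Expected: {k1=fk1|lhs-value+rhs-value}
            System.out.println(out.readKeyValuesToMap());
        }
    }
}
{code}
Dedicated unit tests for the individual processor suppliers would still be the 
goal of this ticket; the sketch above only covers the overall wiring.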

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16341) Fix un-compressed records

2024-03-04 Thread Luke Chen (Jira)
Luke Chen created KAFKA-16341:
-

 Summary: Fix un-compressed records
 Key: KAFKA-16341
 URL: https://issues.apache.org/jira/browse/KAFKA-16341
 Project: Kafka
  Issue Type: Sub-task
Reporter: Luke Chen






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16342) Fix compressed records

2024-03-04 Thread Luke Chen (Jira)
Luke Chen created KAFKA-16342:
-

 Summary: Fix compressed records
 Key: KAFKA-16342
 URL: https://issues.apache.org/jira/browse/KAFKA-16342
 Project: Kafka
  Issue Type: Sub-task
Reporter: Luke Chen






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2695

2024-03-04 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16340) Replication factor: 3 larger than available brokers: 1.

2024-03-04 Thread Jianbin Chen (Jira)
Jianbin Chen created KAFKA-16340:


 Summary:  Replication factor: 3 larger than available brokers: 1.
 Key: KAFKA-16340
 URL: https://issues.apache.org/jira/browse/KAFKA-16340
 Project: Kafka
  Issue Type: Wish
Affects Versions: 3.7.0
Reporter: Jianbin Chen
 Attachments: image-2024-03-05-09-31-35-058.png

While testing tiered storage, I ran into the problem that setting 
remote.log.metadata.topic.replication.factor appears to have no effect.
{code:java}
broker.id=1
log.cleanup.policy=delete
log.cleaner.enable=true
log.cleaner.delete.retention.ms=30
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
message.max.bytes=5242880
replica.fetch.max.bytes=5242880
log.dirs=/data01/kafka110-logs
num.partitions=2
default.replication.factor=1
delete.topic.enable=true
auto.create.topics.enable=true
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
offsets.retention.minutes=1440
log.retention.minutes=10
log.local.retention.ms=30
log.segment.bytes=104857600
log.retention.check.interval.ms=30
remote.log.metadata.topic.replication.factor=1
remote.log.storage.system.enable=true
remote.log.metadata.topic.retention.ms=-1{code}
!image-2024-03-05-09-31-35-058.png!

 

 
{code:java}
[2024-03-05 09:27:49,672] ERROR Encountered error while creating 
__remote_log_metadata topic. 
(org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager)
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication 
factor: 3 larger than available brokers: 1.
    at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
    at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
    at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
    at 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager.createTopic(TopicBasedRemoteLogMetadataManager.java:509)
    at 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager.initializeResources(TopicBasedRemoteLogMetadataManager.java:396)
    at java.base/java.lang.Thread.run(Thread.java:1589)
Caused by: org.apache.kafka.common.errors.InvalidReplicationFactorException: 
Replication factor: 3 larger than available brokers: 1.{code}
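A possible explanation (an assumption, not confirmed in this report) is that 
TopicBasedRemoteLogMetadataManager only receives the properties passed through 
the RLMM config prefix (remote.log.metadata.manager.impl.prefix, assumed to 
default to "rlmm.config."), so the unprefixed 
remote.log.metadata.topic.replication.factor=1 above would be ignored and the 
__remote_log_metadata topic falls back to its default replication factor of 3. 
A sketch of the prefixed form to try:
{code:java}
# Illustrative only; assumes the default RLMM property prefix "rlmm.config."
remote.log.storage.system.enable=true
rlmm.config.remote.log.metadata.topic.replication.factor=1
rlmm.config.remote.log.metadata.topic.retention.ms=-1
{code}
If remote.log.metadata.manager.impl.prefix is configured to something else, the 
keys above would need to use that prefix instead.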
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14747) FK join should record discarded subscription responses

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-14747.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> FK join should record discarded subscription responses
> --
>
> Key: KAFKA-14747
> URL: https://issues.apache.org/jira/browse/KAFKA-14747
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Ayoub Omari
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 3.8.0
>
>
> FK-joins are subject to a race condition: if the left-hand side record is 
> updated, a subscription is sent to the right-hand side (including a hash 
> value of the left-hand side record), and the right-hand side might send back 
> join responses (also including the original hash). The left-hand side only 
> processes the responses if the returned hash matches the current hash of the 
> left-hand side record, because a different hash implies that the left-hand 
> side record was updated in the meantime (including sending a new 
> subscription to the right-hand side), and thus the data is stale and the 
> response should not be processed (joining the response to the new record 
> could lead to incorrect results).
> A similar thing can happen on a right-hand side update that triggers a 
> response, which might be dropped if the left-hand side record was updated in 
> parallel.
> While the behavior is correct, we don't record when this happens. We should 
> consider recording this using the existing "dropped records" sensor or maybe 
> add a new sensor.
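For reference, a minimal sketch of the drop-and-record path described above; the 
class and member names below are hypothetical and this is not the actual Streams 
code, only an illustration of the recorded behavior:
{code:java}
import java.util.Arrays;

// Illustrative only: discard a stale subscription response and record the drop.
final class SubscriptionResponseCheck {
    interface Sensor { void record(); }          // stand-in for the "dropped records" sensor

    private final Sensor droppedRecordsSensor;

    SubscriptionResponseCheck(final Sensor droppedRecordsSensor) {
        this.droppedRecordsSensor = droppedRecordsSensor;
    }

    // Returns true if the response should be joined, false if it is stale and dropped.
    boolean shouldProcess(final byte[] currentLeftValueHash, final byte[] responseHash) {
        if (Arrays.equals(currentLeftValueHash, responseHash)) {
            return true;                         // hashes match: response belongs to the current record
        }
        droppedRecordsSensor.record();           // stale response: record the drop instead of ignoring it silently
        return false;
    }
}
{code}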



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-03-04 Thread José Armando García Sancio
Hi Jun,

Thanks for the feedback. See my comments below.

On Fri, Mar 1, 2024 at 11:36 AM Jun Rao  wrote:
> 30. Historically, we used MV to gate the version of Fetch request. Are you
> saying that voters will ignore MV and only depend on raft.version when
> choosing the version of Fetch request?

Between Kafka servers/nodes (brokers and controllers) there are two
implementations for the Fetch RPC.

One is the implementation traditionally used between brokers to
replicate ISR-based topic partitions. As you point out, Kafka
negotiates those versions using the IBP for ZK-based clusters and the
MV for KRaft-based clusters. This KIP doesn't change that. There have
been offline conversations about potentially using ApiVersions to
negotiate those RPC versions, but that is outside the scope of this KIP.

Two is the KRaft implementation. As of today, only the controller
listeners (controller.listener.names) implement the request handlers
for this version of the Fetch RPC. KafkaRaftClient implements the
client side of this RPC. This version of the Fetch RPC is negotiated
using ApiVersions.

I hope that clarifies the two implementations. On a similar note,
Jason and I did have a brief conversation about whether KRaft should
use a different RPC from Fetch to replicate the log of the KRaft topic
partition. This could be a long-term option to make these two
implementations clearer and allow them to diverge. I am not ready to
tackle that problem in this KIP.

> 35. Upgrading the controller listeners.
> 35.1 So, the protocol is that each controller will pick the first listener
> in controller.listener.names to initiate a connection?

Yes. The negative of this solution is that it requires 3 rolls of
voters (controllers) and 1 roll of observers (brokers) to replace a
voter endpoint. In the future, we can have a solution that initiates
the connection based on the state of the VotersRecord for voter RPCs.
That solution can replace an endpoint with 2 rolls of voters and 1
roll of observers.

> 35.2 Should we include the new listeners in the section "Change the
> controller listener in the brokers"?

Yes. We need to. The observers (brokers) need to know what security
protocol to use to connect to the endpoint(s) in
controller.quorum.bootstrap.servers. This is also how connections to
controller.quorum.voters work today.

> 35.3 For every RPC that returns the controller leader, do we need to
> return multiple endpoints?

KRaft only needs to return the endpoint associated with the listener
used to send the RPC request. This is similar to how the Metadata RPC
works. The Brokers field in the Metadata response only returns the
endpoints that match the listener used to receive the Metadata
request.

This is the main reason why KRaft needs to initiate connections using
a security protocol (listener name) that is supported by all of the
replicas. All of the clients (voters and observers) need to know
(security protocol) how to connect to the redirection endpoint. All of
the voters need to be listening on that listener name so that
redirection works no matter the leader.

> 35.4 The controller/observer can now get the endpoint from both records and
> RPCs. Which one takes precedence? For example, suppose that a voter is down
> for a while. It's started and gets the latest listener for the leader from
> the initial fetch response. When fetching the records, it could see an
> outdated listener. If it picks up this listener, it may not be able to
> connect to the leader.

Yeah. This is where connection and endpoint management gets tricky.
This is my implementation strategy:

1. For the Vote, BeginQuorumEpoch and EndQuorumEpoch RPCs, the replicas
(voters) will always initiate connections using the endpoints described
in the VotersRecord (or controller.quorum.voters for kraft.version 0).
2. For the Fetch RPC when the leader is not known, the replicas will
use the endpoints in controller.quorum.bootstrap.servers (or
controller.quorum.voters for kraft.version 0). This is how the
replicas (observers) normally discover the latest leader.
3. For the Fetch and FetchSnapshot RPCs when the leader is known, the
replicas use the endpoint that was discovered through previous RPC
response(s) or the endpoint in the BeginQuorumEpoch request (a rough
sketch of this selection logic follows below).

I have been thinking a lot about this and this is the most consistent
and deterministic algorithm that I can think of. We should be able to
implement a different algorithm in the future without changing the
protocol or KIP.
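
To make the selection rules above concrete, here is a rough sketch of the
intended lookup order. The type and method names (EndpointSelector,
votersRecordEndpoint, and so on) are placeholders for illustration, not the
actual KafkaRaftClient API:

import java.util.Optional;

// Placeholder types; not the real KafkaRaftClient code.
enum ApiKey { VOTE, BEGIN_QUORUM_EPOCH, END_QUORUM_EPOCH, FETCH, FETCH_SNAPSHOT }

final class EndpointSelector {
    Optional<String> votersRecordEndpoint(int replicaId) { return Optional.empty(); } // VotersRecord / controller.quorum.voters
    Optional<String> discoveredLeaderEndpoint() { return Optional.empty(); }          // from prior responses or BeginQuorumEpoch
    String bootstrapEndpoint() { return "controller.quorum.bootstrap.servers"; }      // or controller.quorum.voters for kraft.version 0

    // Chooses the endpoint for an outbound RPC, following rules 1-3 above.
    String endpointFor(final ApiKey api, final int targetReplicaId) {
        switch (api) {
            case VOTE:
            case BEGIN_QUORUM_EPOCH:
            case END_QUORUM_EPOCH:
                // Rule 1: voter-to-voter RPCs always use the VotersRecord endpoints.
                return votersRecordEndpoint(targetReplicaId)
                        .orElseThrow(() -> new IllegalStateException("unknown voter " + targetReplicaId));
            case FETCH:
            case FETCH_SNAPSHOT:
                // Rules 2 and 3: use the discovered leader endpoint if known, otherwise bootstrap.
                return discoveredLeaderEndpoint().orElseGet(this::bootstrapEndpoint);
            default:
                throw new IllegalArgumentException("unsupported api " + api);
        }
    }
}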

> 36. Bootstrapping with multiple voters: How does a user get the replica
> uuid? In that case, do we use the specified replica uuid instead of a
> randomly generated one in the meta.properties file in metadata.log.dir?

There are two options:
1. They generate the directory.id for all of the voters using
something like "kafka-storage random-uuid" and specify those in
"kafka-storage format --controller-quorum-voters". This is the safest
option as it can detect disk replacement from bootstrap.

2. They only specify the 

[jira] [Resolved] (KAFKA-10603) Re-design KStream.process() and K*.transform*() operations

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10603.
-
Resolution: Fixed

> Re-design KStream.process() and K*.transform*() operations
> --
>
> Key: KAFKA-10603
> URL: https://issues.apache.org/jira/browse/KAFKA-10603
> Project: Kafka
>  Issue Type: New Feature
>Reporter: John Roesler
>Priority: Major
>  Labels: needs-kip
>
> After the implementation of KIP-478, we have the ability to reconsider all 
> these APIs, and maybe just replace them with
> {code:java}
> // KStream
> KStream process(ProcessorSupplier) 
> // KTable
> KTable process(ProcessorSupplier){code}
>  
> but it needs more thought and a KIP for sure.
>  
> This ticket probably supersedes 
> https://issues.apache.org/jira/browse/KAFKA-8396



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16339) Remove Deprecated "transformer" methods and classes

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16339:
---

 Summary: Remove Deprecated "transformer" methods and classes
 Key: KAFKA-16339
 URL: https://issues.apache.org/jira/browse/KAFKA-16339
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Cf 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-820%3A+Extend+KStream+process+with+new+Processor+API]
 * KStream#transform
 * KStream#flatTransform
 * KStream#transformValues
 * KStream#flatTransformValues
 * and the corresponding Scala methods

Related to https://issues.apache.org/jira/browse/KAFKA-12829, and both tickets 
should be worked on together.
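
Since these removals point users to the KIP-820 replacements, a short migration 
sketch may help the upgrade notes. The topic names and the length-computing 
processor below are illustrative only:
{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.ProcessorSupplier;
import org.apache.kafka.streams.processor.api.Record;

public class TransformMigrationExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> input = builder.stream("input-topic");

        // Instead of the removed KStream#transform(TransformerSupplier), the per-record
        // logic moves into a Processor from org.apache.kafka.streams.processor.api.
        final ProcessorSupplier<String, String, String, Integer> lengthSupplier =
            () -> new Processor<String, String, String, Integer>() {
                private ProcessorContext<String, Integer> context;

                @Override
                public void init(final ProcessorContext<String, Integer> context) {
                    this.context = context;
                }

                @Override
                public void process(final Record<String, String> record) {
                    // Forward a new record downstream, here the length of the value.
                    final int length = record.value() == null ? 0 : record.value().length();
                    context.forward(record.withValue(length));
                }
            };

        final KStream<String, Integer> lengths = input.process(lengthSupplier);
        lengths.to("output-topic");
        builder.build();
    }
}
{code}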



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Incremental build for scala tests

2024-03-04 Thread Pavel Pozdeev

Hi team,
I'm a new member who just joined the community. I'm trying to add KRaft support 
to some existing unit tests.
I noticed that when I make any change to a unit test, e.g. 
"kafka.admin.AclCommandTest.scala", and then run:
 
"./gradlew core:compileTestScala"
 
the entire folder "core/build/classes/scala/test" is cleared and Gradle 
re-compiles ALL tests. It takes quite a long time.
It looks like this issue affects only tests. If I change main Scala code, e.g. 
"kafka.admin.AclCommand.scala", and then run:
 
"./gradlew core:compileScala"
 
only a single file is re-compiled, and it takes only a few seconds.
Has anybody noticed the same issue before?
 
Best,
Pavel Pozdeev

[jira] [Created] (KAFKA-16338) Removed Deprecated configs from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16338:
---

 Summary: Removed Deprecated configs from StreamsConfig
 Key: KAFKA-16338
 URL: https://issues.apache.org/jira/browse/KAFKA-16338
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 5.0.0


* "buffered.records.per.partition" were deprecated via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
(KIP not fully implemented yet, so move this from the 4.0 into this 5.0 ticket)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16337) Remove Deprecates APIs of Kafka Streams in 5.0

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16337:
---

 Summary: Remove Deprecates APIs of Kafka Streams in 5.0
 Key: KAFKA-16337
 URL: https://issues.apache.org/jira/browse/KAFKA-16337
 Project: Kafka
  Issue Type: Task
  Components: streams, streams-test-utils
Reporter: Matthias J. Sax
 Fix For: 5.0.0


This is an umbrella ticket that collects all Kafka Streams APIs that were 
deprecated in 3.6 or later. When the release schedule for 5.0 is set, we might 
need to remove sub-tasks if they don't hit the one-year deprecation threshold.

Each subtask will focus on a specific API, so it's easy to discuss whether it 
should be removed in 5.0.0 or maybe at a later point.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16336) Remove Deprecated metric standby-process-ratio

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16336:
---

 Summary: Remove Deprecated metric standby-process-ratio
 Key: KAFKA-16336
 URL: https://issues.apache.org/jira/browse/KAFKA-16336
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Metric "standby-process-ratio" was deprecated in 3.5 release via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-869%3A+Improve+Streams+State+Restoration+Visibility



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16335) Remove Deprecated method on StreamPartitioner

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16335:
---

 Summary: Remove Deprecated method on StreamPartitioner
 Key: KAFKA-16335
 URL: https://issues.apache.org/jira/browse/KAFKA-16335
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in 3.4 release via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883356]
 * StreamPartitioner#partition (singular)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16334) Remove Deprecated command line option from reset tool

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16334:
---

 Summary: Remove Deprecated command line option from reset tool
 Key: KAFKA-16334
 URL: https://issues.apache.org/jira/browse/KAFKA-16334
 Project: Kafka
  Issue Type: Sub-task
  Components: streams, tools
Reporter: Matthias J. Sax
 Fix For: 4.0.0


--bootstrap-servers (plural) was deprecated in the 3.4 release via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-865%3A+Support+--bootstrap-server+in+kafka-streams-application-reset]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16333) Removed Deprecated methods KTable#join

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16333:
---

 Summary: Removed Deprecated methods KTable#join
 Key: KAFKA-16333
 URL: https://issues.apache.org/jira/browse/KAFKA-16333
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


KTable#join() methods taking a `Named` parameter got deprecated in 3.1 release 
via https://issues.apache.org/jira/browse/KAFKA-13813 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16332:
---

 Summary: Remove Deprecated builder methods for 
Time/Session/Join/SlidingWindows
 Key: KAFKA-16332
 URL: https://issues.apache.org/jira/browse/KAFKA-16332
 Project: Kafka
  Issue Type: Sub-task
Reporter: Matthias J. Sax


Deprecated in 3.0: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
 
 * TimeWindows#of
 * TimeWindows#grace
 * SessionWindows#with
 * SessionWindows#grace
 * JoinWindows#of
 * JoinWindows#grace
 * SlidingWindows#withTimeDifferenceAndGrace

We might want to hold off on cleaning up JoinWindows due to 
https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion).
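
For reference, a quick sketch of the KIP-633 replacements (window sizes below 
are arbitrary examples):
{code:java}
import java.time.Duration;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.SlidingWindows;
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowBuilderMigration {
    public static void main(final String[] args) {
        // Old: TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1))
        final TimeWindows tumbling = TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1));

        // Old: SessionWindows.with(Duration.ofMinutes(10))
        final SessionWindows sessions = SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(10));

        // Old: JoinWindows.of(Duration.ofMinutes(5))
        final JoinWindows join = JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5));

        // Old: SlidingWindows.withTimeDifferenceAndGrace(Duration.ofSeconds(30), Duration.ZERO)
        final SlidingWindows sliding = SlidingWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(30));

        System.out.println(tumbling + " " + sessions + " " + join + " " + sliding);
    }
}
{code}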



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16331) Remove Deprecated EOSv1

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16331:
---

 Summary: Remove Deprecated EOSv1
 Key: KAFKA-16331
 URL: https://issues.apache.org/jira/browse/KAFKA-16331
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


EOSv1 was deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2]
 * remove config
 * remove Producer#sendOffsetsToTransaction
 * cleanup code
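
For the upgrade notes, a minimal sketch of the EOSv2 replacements; the constants 
and method signatures are the public ones, but the helper class itself is 
illustrative only:
{code:java}
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.StreamsConfig;

public class EosV2Migration {

    // Streams side: "exactly_once" (EOSv1) goes away; EOSv2 is the replacement.
    public static void configureStreams(final Properties props) {
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
    }

    // Clients side: instead of the removed sendOffsetsToTransaction(offsets, groupId)
    // overload, pass the consumer's group metadata (the KIP-447 pattern used by EOSv2).
    public static void commitOffsets(final KafkaProducer<byte[], byte[]> producer,
                                     final KafkaConsumer<byte[], byte[]> consumer,
                                     final Map<TopicPartition, OffsetAndMetadata> offsets) {
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
    }
}
{code}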



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16330) Remove Deprecated methods/variables from TaskId

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16330:
---

 Summary: Remove Deprecated methods/variables from TaskId
 Key: KAFKA-16330
 URL: https://issues.apache.org/jira/browse/KAFKA-16330
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Cf [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16329) Remove Deprecated Task/ThreadMetadata classes and related methods

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16329:
---

 Summary: Remove Deprecated Task/ThreadMetadata classes and related 
methods
 Key: KAFKA-16329
 URL: https://issues.apache.org/jira/browse/KAFKA-16329
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12831) Remove Deprecated method StateStore#init

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12831.
-
Resolution: Fixed

> Remove Deprecated method StateStore#init
> 
>
> Key: KAFKA-12831
> URL: https://issues.apache.org/jira/browse/KAFKA-12831
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The method 
> org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
>  org.apache.kafka.streams.processor.StateStore) was deprecated in version 2.7.
>  
> See KAFKA-10562 and KIP-478
>  
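
For reference when hunting down remaining implementations, a minimal sketch of a 
store that only implements the replacement StateStoreContext variant (store 
internals elided; the class name is illustrative):
{code:java}
import org.apache.kafka.streams.processor.StateStore;
import org.apache.kafka.streams.processor.StateStoreContext;

// Sketch of a custom store using only the non-deprecated init variant.
public abstract class MyCustomStore implements StateStore {

    @Override
    public void init(final StateStoreContext context, final StateStore root) {
        // Register the store and its restore callback with the new context type.
        context.register(root, (key, value) -> { /* restore a single record; elided */ });
    }

    // name(), flush(), close(), persistent(), isOpen(), etc. elided for brevity.
}
{code}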



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12825) Remove Deprecated method StreamsBuilder#addGlobalStore

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12825.
-
Resolution: Fixed

> Remove Deprecated method StreamsBuilder#addGlobalStore
> --
>
> Key: KAFKA-12825
> URL: https://issues.apache.org/jira/browse/KAFKA-12825
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Methods:
> org.apache.kafka.streams.scala.StreamsBuilder#addGlobalStore
> org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
> org.apache.kafka.streams.processor.ProcessorSupplier)
> were deprecated in 2.7
>  
> See KAFKA-10379 and KIP-478



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16328) Remove deprecated config StreamsConfig#retries

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16328:
---

 Summary: Remove deprecated config StreamsConfig#retries
 Key: KAFKA-16328
 URL: https://issues.apache.org/jira/browse/KAFKA-16328
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in AK 2.7 – already unused – via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16327) Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16327:
---

 Summary: Remove Deprecated variable 
StreamsConfig#TOPOLOGY_OPTIMIZATION 
 Key: KAFKA-16327
 URL: https://issues.apache.org/jira/browse/KAFKA-16327
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax


Deprecated in 2.7 release via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-626%3A+Rename+StreamsConfig+config+variable+name]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10582) Mirror Maker 2 not replicating new topics until restart

2024-03-04 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-10582.
-
Fix Version/s: 3.5.0
   Resolution: Fixed

> Mirror Maker 2 not replicating new topics until restart
> ---
>
> Key: KAFKA-10582
> URL: https://issues.apache.org/jira/browse/KAFKA-10582
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.5.1
> Environment: RHEL 7 Linux.
>Reporter: Robert Martin
>Priority: Minor
> Fix For: 3.5.0
>
>
> We are using Mirror Maker 2 from the 2.5.1 release for replication on some 
> clusters.  Replication is working as expected for existing topics.  When we 
> create a new topic, however, Mirror Maker 2 creates the replicated topic as 
> expected but never starts replicating it.  If we restart Mirror Maker 2 
> within 2-3 minutes the topic starts replicating as expected.  From the 
> documentation we have seen, it appears this should start replicating without 
> a restart based on the settings we have.
> *Example:*
> Create topic "mytesttopic" on source cluster
> MirrorMaker 2 creates "source.mytesttopic" on the target cluster with no issue
> MirrorMaker 2 does not replicate "mytesttopic" -> "source.mytesttopic"
> Restart MirrorMaker 2 and now replication works for "mytesttopic" -> 
> "source.mytesttopic"
> *Example config:*
> name = source->target
> group.id = source-to-target
> clusters = source, target
> source.bootstrap.servers = sourcehosts:9092
> target.bootstrap.servers = targethosts:9092
> source->target.enabled = true
> source->target.topics = .*
> target->source = false
> target->source.topics = .*
> replication.factor=3
> checkpoints.topic.replication.factor=3
> heartbeats.topic.replication.factor=3
> offset-syncs.topic.replication.factor=3
> offset.storage.replication.factor=3
> status.storage.replication.factor=3
> config.storage.replication.factor=3
> tasks.max = 16
> refresh.topics.enabled = true
> sync.topic.configs.enabled = true
> refresh.topics.interval.seconds = 300
> refresh.groups.interval.seconds = 300
> readahead.queue.capacity = 100
> emit.checkpoints.enabled = true
> emit.checkpoints.interval.seconds = 5



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-956: Tiered Storage Quotas

2024-03-04 Thread Jun Rao
Hi, Abhijeet,

Thanks for the reply. Sounds good to me.

Jun


On Sat, Mar 2, 2024 at 7:40 PM Abhijeet Kumar 
wrote:

> Hi Jun,
>
> Thanks for pointing it out. It makes sense to me. We can have the following
> metrics instead. What do you think?
>
>- remote-(fetch|copy)-throttle-time-avg (the average time in ms that remote
>fetches/copies were throttled by a broker)
>- remote-(fetch|copy)-throttle-time-max (the maximum time in ms that remote
>fetches/copies were throttled by a broker)
>
> These are similar to the fetch-throttle-time-avg and fetch-throttle-time-max
> metrics we have for Kafka consumers.
> The Avg and Max are computed over the (sliding) window as defined by the
> configuration metrics.sample.window.ms on the server.
>
> (Also, I will update the config and metric names to be consistent)
>
> Regards.
>
> On Thu, Feb 29, 2024 at 2:51 AM Jun Rao  wrote:
>
> > Hi, Abhijeet,
> >
> > Thanks for the reply.
> >
> > The issue with recording the throttle time as a gauge is that it's
> > transient. If the metric is not read immediately, the recorded value
> could
> > be reset to 0. The admin won't realize that throttling has happened.
> >
> > For client quotas, the throttle time is tracked as the average
> > throttle-time per user/client-id. This makes the metric less transient.
> >
> > Also, the configs use read/write whereas the metrics use fetch/copy.
> Could
> > we make them consistent?
> >
> > Jun
> >
> > On Wed, Feb 28, 2024 at 6:49 AM Abhijeet Kumar <
> abhijeet.cse@gmail.com
> > >
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Clarified the meaning of the two metrics. Also updated the KIP.
> > >
> > > kafka.log.remote:type=RemoteLogManager, name=RemoteFetchThrottleTime ->
> > The
> > > duration of time required at a given moment to bring the observed fetch
> > > rate within the allowed limit, by preventing further reads.
> > > kafka.log.remote:type=RemoteLogManager, name=RemoteCopyThrottleTime ->
> > The
> > > duration of time required at a given moment to bring the observed
> remote
> > > copy rate within the allowed limit, by preventing further copies.
> > >
> > > Regards.
> > >
> > > On Wed, Feb 28, 2024 at 12:28 AM Jun Rao 
> > wrote:
> > >
> > > > Hi, Abhijeet,
> > > >
> > > > Thanks for the explanation. Makes sense to me now.
> > > >
> > > > Just a minor comment. Could you document the exact meaning of the
> > > following
> > > > two metrics? For example, is the time accumulated? If so, is it from
> > the
> > > > start of the broker or over some window?
> > > >
> > > > kafka.log.remote:type=RemoteLogManager, name=RemoteFetchThrottleTime
> > > > kafka.log.remote:type=RemoteLogManager, name=RemoteCopyThrottleTime
> > > >
> > > > Jun
> > > >
> > > > On Tue, Feb 27, 2024 at 1:39 AM Abhijeet Kumar <
> > > abhijeet.cse@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > The existing quota system for consumers is designed to throttle the
> > > > > consumer if it exceeds the allowed fetch rate.
> > > > > The additional quota we want to add works on the broker level. If
> the
> > > > > broker-level remote read quota is being
> > > > > exceeded, we prevent additional reads from the remote storage but
> do
> > > not
> > > > > prevent local reads for the consumer.
> > > > > If the consumer has specified other partitions to read, which can
> be
> > > > served
> > > > > from local, it can continue to read those
> > > > > partitions. To elaborate more, we make a check for quota exceeded
> if
> > we
> > > > > know a segment needs to be read from
> > > > > remote. If the quota is exceeded, we simply skip the partition and
> > move
> > > > to
> > > > > other segments in the fetch request.
> > > > > This way consumers can continue to read the local data as long as
> > they
> > > > have
> > > > > not exceeded the client-level quota.
> > > > >
> > > > > Also, when we choose the appropriate consumer-level quota, we would
> > > > > typically look at what kind of local fetch
> > > > > throughput is supported. If we were to reuse the same consumer
> quota,
> > > we
> > > > > should also consider the throughput
> > > > > the remote storage supports. The throughput supported by remote may
> > be
> > > > > less/more than the throughput supported
> > > > > by local (when using a cloud provider, it depends on the plan opted
> > by
> > > > the
> > > > > user). The consumer quota has to be carefully
> > > > > set considering both local and remote throughput. Instead, if we
> > have a
> > > > > separate quota, it makes things much simpler
> > > > > for the user, since they already know what throughput their remote
> > > > storage
> > > > > supports.
> > > > >
> > > > > (Also, thanks for pointing out. I will update the KIP based on the
> > > > > discussion)
> > > > >
> > > > > Regards,
> > > > > Abhijeet.
> > > > >
> > > > > On Tue, Feb 27, 2024 at 2:49 AM Jun Rao 
> > > > wrote:
> > > > >
> > > > > > Hi, Abhijeet,
> > > > > >
> > > > > > Sorry for the late reply. It seems that you haven't u

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2694

2024-03-04 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 452776 lines...]
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testMigrateTopicConfigs() PASSED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testNonIncreasingKRaftEpoch() STARTED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testNonIncreasingKRaftEpoch() PASSED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testMigrateEmptyZk() STARTED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testMigrateEmptyZk() PASSED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testTopicAndBrokerConfigsMigrationWithSnapshots() 
STARTED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testTopicAndBrokerConfigsMigrationWithSnapshots() 
PASSED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAndReleaseExistingController() STARTED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAndReleaseExistingController() PASSED
[2024-03-04T17:37:27.707Z] 
[2024-03-04T17:37:27.707Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAbsentController() STARTED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAbsentController() PASSED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testIdempotentCreateTopics() STARTED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testIdempotentCreateTopics() PASSED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testCreateNewTopic() STARTED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testCreateNewTopic() PASSED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
STARTED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
PASSED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED
[2024-03-04T17:37:29.300Z] 
[2024-03-04T17:37:29.300Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() STARTED
[2024-03-04T17:37:30.815Z] 
[2024-03-04T17:37:30.815Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() PASSED
[2024-03-04T17:37:30.815Z] 
[2024-03-04T17:37:30.815Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testGetChildrenExistingZNode() STARTED
[2024-03-04T17:37:30.815Z] 
[2024-03-04T17:37:30.815Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testGetChildrenExistingZNode() PASSED
[2024-03-04T17:37:30.815Z] 
[2024-03-04T17:37:30.815Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testConnection() STARTED
[2024-03-04T17:37:30.815Z] 
[2024-03-04T17:37:30.815Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testConnection() PASSED
[2024-03-04T17:37:30.815Z] 
[2024-03-04T17:37:30.815Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClient

[jira] [Created] (KAFKA-16326) Kafka Connect unable to find javax dependency on Quarkus update to 3.X

2024-03-04 Thread Pau Ortega Puig (Jira)
Pau Ortega Puig created KAFKA-16326:
---

 Summary: Kafka Connect unable to find javax dependency on Quarkus 
update to 3.X
 Key: KAFKA-16326
 URL: https://issues.apache.org/jira/browse/KAFKA-16326
 Project: Kafka
  Issue Type: Bug
  Components: connect
Affects Versions: 3.6.0
Reporter: Pau Ortega Puig


We have a repository that uses both Quarkus and Kafka Connect. We're trying to 
update Quarkus to version 3.X but we're finding an error when configuring Kafka 
Connect:
{code:java}
java.lang.ClassNotFoundException: javax.ws.rs.core.Configurable{code}
We are aware of the _javax_ to _jakarta_ libraries change and indeed we have 
changed all our direct dependencies to use {_}jakarta{_}. It looks like Kafka 
Connect still uses _javax_ dependencies and at runtime it is unable to find 
them.

We attach a minimal repo that reproduces the issue here: 
[https://github.com/pauortegathoughtworks/quarkus-kafka-connect-bug]

Also we provide the full stack trace here:

 
{code:java}
java.lang.RuntimeException: Failed to start quarkus
        at io.quarkus.runner.ApplicationImpl.doStart(Unknown Source)
        at io.quarkus.runtime.Application.start(Application.java:101)
        at 
io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:111)
        at io.quarkus.runtime.Quarkus.run(Quarkus.java:71)
        at io.quarkus.runtime.Quarkus.run(Quarkus.java:44)
        at io.quarkus.runtime.Quarkus.run(Quarkus.java:124)
        at io.quarkus.runner.GeneratedMain.main(Unknown Source)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:568)
        at 
io.quarkus.runner.bootstrap.StartupActionImpl$1.run(StartupActionImpl.java:113)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NoClassDefFoundError: javax/ws/rs/core/Configurable
        at 
org.apache.kafka.connect.cli.AbstractConnectCli.startConnect(AbstractConnectCli.java:128)
        at org.acme.KafkaConnectRunner.start(KafkaConnectRunner.java:88)
        at org.acme.KafkaConnectRunner.onStart(KafkaConnectRunner.java:73)
        at 
org.acme.KafkaConnectRunner_Observer_onStart_1_-42pHN04Og1MUKiGWhJM7NweE.notify(Unknown
 Source)
        at 
io.quarkus.arc.impl.EventImpl$Notifier.notifyObservers(EventImpl.java:346)
        at io.quarkus.arc.impl.EventImpl$Notifier.notify(EventImpl.java:328)
        at io.quarkus.arc.impl.EventImpl.fire(EventImpl.java:82)
        at 
io.quarkus.arc.runtime.ArcRecorder.fireLifecycleEvent(ArcRecorder.java:157)
        at 
io.quarkus.arc.runtime.ArcRecorder.handleLifecycleEvents(ArcRecorder.java:108)
        at 
io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy_0(Unknown
 Source)
        at 
io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy(Unknown
 Source)
        ... 13 more
Caused by: java.lang.ClassNotFoundException: javax.ws.rs.core.Configurable
        at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
        at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
        at 
io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:518)
        at 
io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:468)
        ... 24 more
 {code}
 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16325) Add missing producer metrics to documentation

2024-03-04 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-16325:


 Summary: Add missing producer metrics to documentation
 Key: KAFKA-16325
 URL: https://issues.apache.org/jira/browse/KAFKA-16325
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, website
Reporter: Divij Vaidya


Some producer metrics, such as buffer-exhausted-rate [1], are missing from the 
documentation at 
[https://kafka.apache.org/documentation.html#producer_monitoring] 

Hence, users of Kafka sometimes don't know about these metrics at all.

This task will add these (and possibly any other missing) metrics to the 
documentation. An example of a similar PR where metrics were added to the 
documentation is at [https://github.com/apache/kafka/pull/12934] 

[1] 
[https://github.com/apache/kafka/blob/c254b22a4877e70617b2710b95ef44b8cc55ce97/clients/src/main/java/org/apache/kafka/clients/producer/internals/BufferPool.java#L91]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16324) Move BrokerApiVersionsCommand to tools

2024-03-04 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16324:
--

 Summary: Move BrokerApiVersionsCommand to tools
 Key: KAFKA-16324
 URL: https://issues.apache.org/jira/browse/KAFKA-16324
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai


https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/BrokerApiVersionsCommand.scala



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Subscribe to Developer mailing list

2024-03-04 Thread Bruno Cadonna

Hi LoÏC,

subscription to the mailing lists is self-service. See details under 
https://kafka.apache.org/contact


Best,
Bruno

On 2/29/24 9:48 AM, Loic Greffier wrote:

Hi @dev@kafka.apache.org ,

I am working as a Software Engineer at Michelin, and would like to 
subscribe to the Developer mailing list to be able to open a KIP and 
contribute to Apache Kafka.


LoÏC GREFFIER

GROUPE MICHELIN – Development Technology Specialist « DOTI/BS/SMI »

8, Rue de la Grolière | R1-2 | 63100 Clermont-Ferrand 
Cedex 09 | France


loic.greff...@michelin.com 



Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2693

2024-03-04 Thread Apache Jenkins Server
See 




Subscribe to Developer mailing list

2024-03-04 Thread Loic Greffier
Hi @dev@kafka.apache.org,

I am working as a Software Engineer at Michelin, and would like to subscribe to 
the Developer mailing list to be able to open a KIP and contribute to Apache 
Kafka.

LoÏC GREFFIER
GROUPE MICHELIN - Development Technology Specialist « DOTI/BS/SMI »
8, Rue de la Grolière | R1-2 | 63100 Clermont-Ferrand Cedex 
09 | France
loic.greff...@michelin.com



Re: [ANNOUNCE] Apache Kafka 3.7.0

2024-03-04 Thread Josep Prat
Thanks Stanislav for running the release! And thanks to all members of the
community who were part of the release!

Best,

On Fri, Mar 1, 2024 at 8:27 AM Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> Thanks Stanislav for running the release!
>
> On Thu, Feb 29, 2024, 21:03 Guozhang Wang 
> wrote:
>
> > Thanks Stan for running the release!
> >
> > On Thu, Feb 29, 2024 at 5:39 AM Boudjelda Mohamed Said
> >  wrote:
> > >
> > > Thanks Stanislav for running the release!
> > >
> > > On Wed, Feb 28, 2024 at 10:36 PM Kirk True  wrote:
> > >
> > > > Thanks Stanislav
> > > >
> > > > > On Feb 27, 2024, at 10:01 AM, Stanislav Kozlovski <
> > > > stanislavkozlov...@apache.org> wrote:
> > > > >
> > > > > The Apache Kafka community is pleased to announce the release of
> > > > > Apache Kafka 3.7.0
> > > > >
> > > > > This is a minor release that includes new features, fixes, and
> > > > > improvements from 296 JIRAs
> > > > >
> > > > > An overview of the release and its notable changes can be found in
> > the
> > > > > release blog post:
> > > > >
> https://kafka.apache.org/blog#apache_kafka_370_release_announcement
> > > > >
> > > > > All of the changes in this release can be found in the release
> notes:
> > > > > https://www.apache.org/dist/kafka/3.7.0/RELEASE_NOTES.html
> > > > >
> > > > > You can download the source and binary release (Scala 2.12, 2.13)
> > from:
> > > > > https://kafka.apache.org/downloads#3.7.0
> > > > >
> > > > >
> > > >
> >
> ---
> > > > >
> > > > >
> > > > > Apache Kafka is a distributed streaming platform with four core
> APIs:
> > > > >
> > > > >
> > > > > ** The Producer API allows an application to publish a stream of
> > records
> > > > to
> > > > > one or more Kafka topics.
> > > > >
> > > > > ** The Consumer API allows an application to subscribe to one or
> more
> > > > > topics and process the stream of records produced to them.
> > > > >
> > > > > ** The Streams API allows an application to act as a stream
> > processor,
> > > > > consuming an input stream from one or more topics and producing an
> > > > > output stream to one or more output topics, effectively
> transforming
> > the
> > > > > input streams to output streams.
> > > > >
> > > > > ** The Connector API allows building and running reusable producers
> > or
> > > > > consumers that connect Kafka topics to existing applications or
> data
> > > > > systems. For example, a connector to a relational database might
> > > > > capture every change to a table.
> > > > >
> > > > >
> > > > > With these APIs, Kafka can be used for two broad classes of
> > application:
> > > > >
> > > > > ** Building real-time streaming data pipelines that reliably get
> data
> > > > > between systems or applications.
> > > > >
> > > > > ** Building real-time streaming applications that transform or
> react
> > > > > to the streams of data.
> > > > >
> > > > >
> > > > > Apache Kafka is in use at large and small companies worldwide,
> > including
> > > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > Rabobank,
> > > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > > >
> > > > > A big thank you to the following 146 contributors to this release!
> > > > > (Please report an unintended omission)
> > > > >
> > > > > Abhijeet Kumar, Akhilesh Chaganti, Alieh, Alieh Saeedi, Almog
> Gavra,
> > > > > Alok Thatikunta, Alyssa Huang, Aman Singh, Andras Katona, Andrew
> > > > > Schofield, Anna Sophie Blee-Goldman, Anton Agestam, Apoorv Mittal,
> > > > > Arnout Engelen, Arpit Goyal, Artem Livshits, Ashwin Pankaj,
> > > > > ashwinpankaj, atu-sharm, bachmanity1, Bob Barrett, Bruno Cadonna,
> > > > > Calvin Liu, Cerchie, chern, Chris Egerton, Christo Lolov, Colin
> > > > > Patrick McCabe, Colt McNealy, Crispin Bernier, David Arthur, David
> > > > > Jacot, David Mao, Deqi Hu, Dimitar Dimitrov, Divij Vaidya, Dongnuo
> > > > > Lyu, Eaugene Thomas, Eduwer Camacaro, Eike Thaden, Federico Valeri,
> > > > > Florin Akermann, Gantigmaa Selenge, Gaurav Narula, gongzhongqiang,
> > > > > Greg Harris, Guozhang Wang, Gyeongwon, Do, Hailey Ni, Hanyu Zheng,
> > Hao
> > > > > Li, Hector Geraldino, hudeqi, Ian McDonald, Iblis Lin, Igor Soarez,
> > > > > iit2009060, Ismael Juma, Jakub Scholz, James Cheng, Jason
> Gustafson,
> > > > > Jay Wang, Jeff Kim, Jim Galasyn, John Roesler, Jorge Esteban
> Quilcate
> > > > > Otoya, Josep Prat, José Armando García Sancio, Jotaniya Jeel, Jouni
> > > > > Tenhunen, Jun Rao, Justine Olshan, Kamal Chandraprakash, Kirk True,
> > > > > kpatelatwork, kumarpritam863, Laglangyue, Levani Kokhreidze, Lianet
> > > > > Magrans, Liu Zeyu, Lucas Brutschy, Lucia Cerchie, Luke Chen,
> > maniekes,
> > > > > Manikumar Reddy, mannoopj, Maros Orsak, Matthew de Detrich,
> Matthias
> > > > > J. Sax, Max Riedel, Mayank Shekhar Narula, Mehari Beyene, Michael
> > > > > Westerby, Mickael Maison, Nick Telford, Ni