[jira] [Resolved] (KAFKA-17679) Remove kafka.security.authorizer.AclAuthorizer from AclCommand

2024-10-04 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17679.

Resolution: Duplicate

> Remove kafka.security.authorizer.AclAuthorizer from AclCommand
> --
>
> Key: KAFKA-17679
> URL: https://issues.apache.org/jira/browse/KAFKA-17679
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17692) Remove KafkaServer references in streams tests

2024-10-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17692:
--

 Summary: Remove KafkaServer references in streams tests
 Key: KAFKA-17692
 URL: https://issues.apache.org/jira/browse/KAFKA-17692
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17691) Remove KafkaServer references in tools tests

2024-10-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17691:
--

 Summary: Remove KafkaServer references in tools tests
 Key: KAFKA-17691
 URL: https://issues.apache.org/jira/browse/KAFKA-17691
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17679) Remove kafka.security.authorizer.AclAuthorizer from AclCommand

2024-10-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17679:
--

 Summary: Remove kafka.security.authorizer.AclAuthorizer from 
AclCommand
 Key: KAFKA-17679
 URL: https://issues.apache.org/jira/browse/KAFKA-17679
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17662) config.providers configuration missing from the docs

2024-09-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17662:
--

 Summary: config.providers configuration missing from the docs
 Key: KAFKA-17662
 URL: https://issues.apache.org/jira/browse/KAFKA-17662
 Project: Kafka
  Issue Type: Bug
Reporter: Mickael Maison


The config.providers configuration is only listed in the Connect configuration 
documentation. Since it's usable by all components, it should be listed in all 
the configuration sections.
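
For reference, a minimal sketch of what such a config.providers entry looks like in any component's properties file (the provider alias is an illustrative assumption):
{noformat}
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
{noformat}
The same two keys work in broker, client and Connect configurations, which is why the setting should appear in each component's configuration section.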



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16188) Delete deprecated kafka.common.MessageReader

2024-09-05 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16188.

Resolution: Fixed

> Delete deprecated kafka.common.MessageReader
> 
>
> Key: KAFKA-16188
> URL: https://issues.apache.org/jira/browse/KAFKA-16188
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 4.0.0
>
>
> [KIP-641|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866569]
>  introduced org.apache.kafka.tools.api.RecordReader and deprecated 
> kafka.common.MessageReader in Kafka 3.5.0.
> We should delete kafka.common.MessageReader in Kafka 4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17468) Move kafka.log.remote.quota to storage module

2024-09-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17468:
--

 Summary: Move kafka.log.remote.quota to storage module
 Key: KAFKA-17468
 URL: https://issues.apache.org/jira/browse/KAFKA-17468
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17430) Move RequestChannel.Metrics and RequestChannel.RequestMetrics to server module

2024-09-03 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17430.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Move RequestChannel.Metrics and RequestChannel.RequestMetrics to server module
> --
>
> Key: KAFKA-17430
> URL: https://issues.apache.org/jira/browse/KAFKA-17430
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17449) Move Quota classes to server module

2024-08-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17449:
--

 Summary: Move Quota classes to server module
 Key: KAFKA-17449
 URL: https://issues.apache.org/jira/browse/KAFKA-17449
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17430) Move RequestChannel.Metrics and RequestChannel.RequestMetrics to server module

2024-08-27 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17430:
--

 Summary: Move RequestChannel.Metrics and 
RequestChannel.RequestMetrics to server module
 Key: KAFKA-17430
 URL: https://issues.apache.org/jira/browse/KAFKA-17430
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17353) Separate unsupported releases in downloads page on website

2024-08-27 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17353.

Resolution: Fixed

> Separate unsupported releases in downloads page on website
> --
>
> Key: KAFKA-17353
> URL: https://issues.apache.org/jira/browse/KAFKA-17353
> Project: Kafka
>  Issue Type: Task
>  Components: website
>Reporter: Mickael Maison
>Assignee: Federico Valeri
>Priority: Major
>
> Currently we list all releases on https://kafka.apache.org/downloads
> We should clearly identify the supported releases at the top and list 
> archived releases below. As per 
> https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy?
>  we support the last 3 releases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17193) Pin external GitHub actions to specific git hash

2024-08-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17193.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Pin external GitHub actions to specific git hash
> 
>
> Key: KAFKA-17193
> URL: https://issues.apache.org/jira/browse/KAFKA-17193
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 4.0.0
>
>
> As per [https://infra.apache.org/github-actions-policy.html] we must pin any 
> GitHub action that is not from the apache/*, github/* and actions/* 
> namespaces to a specific git hash.
> We are currently using actions from aquasecurity and docker and these are not 
> pinned to specific git hashes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17353) Separate unsupported releases in downloads page on website

2024-08-16 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17353:
--

 Summary: Separate unsupported releases in downloads page on website
 Key: KAFKA-17353
 URL: https://issues.apache.org/jira/browse/KAFKA-17353
 Project: Kafka
  Issue Type: Task
  Components: website
Reporter: Mickael Maison


Currently we list all releases on https://kafka.apache.org/downloads

We should clearly identify the supported releases at the top and list 
archived releases below. As per 
https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy?
 we support the last 3 releases.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17301) lz4-java is not maintained anymore

2024-08-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17301:
--

 Summary: lz4-java is not maintained anymore
 Key: KAFKA-17301
 URL: https://issues.apache.org/jira/browse/KAFKA-17301
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


lz4-java has not made a release since June 2021. It still depends on lz4 1.9.3, 
which has a critical CVE (though it does not seem exploitable in our case): 
[CVE-2021-3520|https://nvd.nist.gov/vuln/detail/CVE-2021-3520].





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17246) Simplify the process of building a test docker image

2024-08-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17246:
--

 Summary: Simplify the process of building a test docker image
 Key: KAFKA-17246
 URL: https://issues.apache.org/jira/browse/KAFKA-17246
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison


The docker_build_test.py script requires a URL and a signature file. This makes 
it hard to build a test image locally with a custom Kafka binary.

It would be nice to have a way to point it to a local distribution artifact 
instead and ignore the signature check.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15469) Document built-in configuration providers

2024-07-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15469.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Document built-in configuration providers
> -
>
> Key: KAFKA-15469
> URL: https://issues.apache.org/jira/browse/KAFKA-15469
> Project: Kafka
>  Issue Type: Task
>  Components: documentation
>Reporter: Mickael Maison
>Assignee: Paul Mellor
>Priority: Major
> Fix For: 4.0.0
>
>
> Kafka has 3 built-in ConfigProvider implementations:
> * DirectoryConfigProvider
> * EnvVarConfigProvider
> * FileConfigProvider
> These don't appear anywhere in the documentation. We should at least mention 
> them and probably even demonstrate how to use them.
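
As an illustration of the kind of example the docs could include, a minimal sketch using FileConfigProvider (the file path and key names are hypothetical):
{noformat}
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
ssl.keystore.password=${file:/etc/kafka/secrets.properties:keystore.password}
{noformat}
DirectoryConfigProvider and EnvVarConfigProvider use the same placeholder mechanism.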



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14614) Missing cluster tool script for Windows

2024-07-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14614.

Fix Version/s: 3.6.0
   Resolution: Fixed

> Missing cluster tool script for Windows
> ---
>
> Key: KAFKA-14614
> URL: https://issues.apache.org/jira/browse/KAFKA-14614
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mickael Maison
>Priority: Major
> Fix For: 3.6.0
>
>
> We have the kafka-cluster.sh script to run ClusterTool but there's no 
> matching script for Windows.
> We should check if other scripts are missing too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17193) Pin external GitHub actions to specific git hash

2024-07-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17193:
--

 Summary: Pin external GitHub actions to specific git hash
 Key: KAFKA-17193
 URL: https://issues.apache.org/jira/browse/KAFKA-17193
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


As per [https://infra.apache.org/github-actions-policy.html] we must pin any 
GitHub action that is not from the apache/*, github/* and actions/* namespaces 
to a specific git hash.

We are currently using actions from aquasecurity and docker and these are not 
pinned to specific git hashes.
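
For illustration, a hedged sketch of the change in a workflow file, assuming the aquasecurity/trivy-action used by the CVE scan (the commit hash shown is a placeholder, not a real pin):
{noformat}
# before: floating reference
uses: aquasecurity/trivy-action@master
# after: pinned to a full commit SHA
uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567
{noformat}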



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17137) Ensure Admin APIs are properly tested

2024-07-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17137:
--

 Summary: Ensure Admin APIs are properly tested
 Key: KAFKA-17137
 URL: https://issues.apache.org/jira/browse/KAFKA-17137
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Mickael Maison


A number of Admin client APIs don't have integration tests. While testing 3.8.0 
RC0 we discovered the Admin.describeTopics() API hung. This should have been 
caught by tests.

I suggest creating subtasks for each API that needs tests.
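
A minimal sketch of the sort of integration check that would have caught the hang (topic name and timeout are arbitrary; this is not the actual test code):
{code:java}
Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (Admin admin = Admin.create(props)) {
    DescribeTopicsResult result = admin.describeTopics(Collections.singletonList("test-topic"));
    // bounding the future forces a timeout instead of hanging forever
    Map<String, TopicDescription> descriptions = result.allTopicNames().get(30, TimeUnit.SECONDS);
    assertTrue(descriptions.containsKey("test-topic"));
}
{code}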



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16254) Allow MM2 to fully disable offset sync feature

2024-07-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16254.

Resolution: Fixed

> Allow MM2 to fully disable offset sync feature
> --
>
> Key: KAFKA-16254
> URL: https://issues.apache.org/jira/browse/KAFKA-16254
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.5.0, 3.6.0, 3.7.0
>Reporter: Omnia Ibrahim
>Assignee: Omnia Ibrahim
>Priority: Major
>  Labels: need-kip
> Fix For: 3.9.0
>
>
> *Background:* 
> At the moment the offset syncing feature in MM2 is split into two parts:
>  # The first is in `MirrorSourceTask`, where we store the new record's offset on the 
> target cluster in the {{offset_syncs}} internal topic after mirroring the record. 
> Before KAFKA-14610 in 3.5, MM2 used to just queue the offsets and publish them 
> later, but since 3.5 this behaviour changed: we now publish any offset syncs 
> that have been queued up but not yet published when 
> `MirrorSourceTask.commit` gets invoked. This introduced overhead to the commit 
> process.
>  # The second part is in the checkpoints source task, where we use the new record 
> offsets from {{offset_syncs}} to update the {{checkpoints}} and 
> {{__consumer_offsets}} topics.
> *Problem:*
> Customers who only use MM2 for mirroring data and are not interested in the 
> offset syncing feature can already disable the second part of this feature 
> by setting {{emit.checkpoints.enabled}} and/or 
> {{sync.group.offsets.enabled}} to stop emitting to the {{__consumer_offsets}} 
> topic, but nothing disables the first part of the feature. 
> The problem gets worse if they prevented MM2 from creating the offset syncs 
> internal topic, as: 
> 1. this increases overhead, because MM2 will keep trying to update 
> the offset with every mirrored batch, which impacts the performance of 
> MM2.
> 2. they get many error logs because the offset syncs topic does not exist, as 
> they don't use the feature.
> *Possible solution:*
> Allow customers to fully disable the feature if they don't really need it, 
> similar to how other MM2 features like the heartbeat feature can be fully disabled, 
> by adding a new config.
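
For reference, a minimal sketch of the two pre-existing flags mentioned above, assuming a replication flow named A->B (the new config added by the fix is not shown here):
{noformat}
A->B.emit.checkpoints.enabled = false
A->B.sync.group.offsets.enabled = false
{noformat}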



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17072) Document broker decommissioning process with KRaft

2024-07-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17072:
--

 Summary: Document broker decommissioning process with KRaft
 Key: KAFKA-17072
 URL: https://issues.apache.org/jira/browse/KAFKA-17072
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Mickael Maison


When decommissioning a broker in KRaft mode, the broker also has to be 
explicitly unregistered. This is not mentioned anywhere in the documentation.

A broker that has not been unregistered stays eligible for new partition assignments and 
will prevent bumping the metadata version if the remaining brokers are upgraded.
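
A minimal sketch of the missing step, assuming the ClusterTool interface and a broker id of 5 (arguments to be confirmed against the docs being added):
{noformat}
bin/kafka-cluster.sh unregister --bootstrap-server localhost:9092 --id 5
{noformat}
The same operation is also exposed programmatically through Admin#unregisterBroker.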



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14109) Clean up JUnit 4 test infrastructure

2024-06-27 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14109.

Resolution: Duplicate

> Clean up JUnit 4 test infrastructure
> 
>
> Key: KAFKA-14109
> URL: https://issues.apache.org/jira/browse/KAFKA-14109
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
>
> We need to clean up the setup in 
> https://issues.apache.org/jira/browse/KAFKA-14108 once the JUnit 4 to JUnit 5 
> migration is complete.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7342) Migrate streams modules to JUnit 5

2024-06-27 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7342.
---
Resolution: Duplicate

> Migrate streams modules to JUnit 5
> --
>
> Key: KAFKA-7342
> URL: https://issues.apache.org/jira/browse/KAFKA-7342
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Reporter: Ismael Juma
>Assignee: Christo Lolov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17027) Inconsistent casing in Selector metrics tags

2024-06-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17027:
--

 Summary: Inconsistent casing in Selector metrics tags
 Key: KAFKA-17027
 URL: https://issues.apache.org/jira/browse/KAFKA-17027
 Project: Kafka
  Issue Type: Improvement
  Components: core, metrics
Reporter: Mickael Maison


When creating metric tags for a Selector instance, we use "broker-id" in 
ControllerChannelManager, BrokerBlockingSender and ReplicaFetcherBlockingSend 
but we use "BrokerId" in NodeToControllerChannelManagerImpl.

Not only are these casings inconsistent for metric tags of the same component 
(Selector), but neither seems to match the casing used for other broker 
metrics!

We seem to always use lower camel case for tags for broker metrics. For 
example, we have "networkProcessor", "clientId", "delayedOperation", 
"clientSoftwareName", "clientSoftwareVersion" as tags on other metrics.

Fixing this will require a KIP.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17008.

Resolution: Duplicate

> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944
> 
>
> Key: KAFKA-17008
> URL: https://issues.apache.org/jira/browse/KAFKA-17008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arushi Helms
>Priority: Major
>
> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
> I could not find an existing ticket for this; if there is one then please mark 
> this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16998) Fix warnings in our Github actions

2024-06-19 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16998:
--

 Summary: Fix warnings in our Github actions
 Key: KAFKA-16998
 URL: https://issues.apache.org/jira/browse/KAFKA-16998
 Project: Kafka
  Issue Type: Task
  Components: build
Reporter: Mickael Maison


Most of our GitHub actions produce warnings, see 
[https://github.com/apache/kafka/actions/runs/9572915509] 
for example.


It looks like we need to bump the version we use for actions/checkout, 
actions/setup-python, actions/upload-artifact to v4.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15752) KRaft support in SaslSslAdminIntegrationTest

2024-06-17 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15752.

Fix Version/s: 3.9.0
   Resolution: Fixed

> KRaft support in SaslSslAdminIntegrationTest
> 
>
> Key: KAFKA-15752
> URL: https://issues.apache.org/jira/browse/KAFKA-15752
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Gantigmaa Selenge
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.9.0
>
>
> The following tests in SaslSslAdminIntegrationTest in 
> core/src/test/scala/integration/kafka/api/SaslSslAdminIntegrationTest.scala 
> need to be updated to support KRaft
> 95 : def testAclOperations(): Unit = {
> 116 : def testAclOperations2(): Unit = {
> 142 : def testAclDescribe(): Unit = {
> 169 : def testAclDelete(): Unit = {
> 219 : def testLegacyAclOpsNeverAffectOrReturnPrefixed(): Unit = {
> 256 : def testAttemptToCreateInvalidAcls(): Unit = {
> 351 : def testAclAuthorizationDenied(): Unit = {
> 383 : def testCreateTopicsResponseMetadataAndConfig(): Unit = {
> Scanned 527 lines. Found 0 KRaft tests out of 8 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15751) KRaft support in BaseAdminIntegrationTest

2024-06-17 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15751.

Fix Version/s: 3.9.0
   Resolution: Fixed

> KRaft support in BaseAdminIntegrationTest
> -
>
> Key: KAFKA-15751
> URL: https://issues.apache.org/jira/browse/KAFKA-15751
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Gantigmaa Selenge
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.9.0
>
>
> The following tests in BaseAdminIntegrationTest in 
> core/src/test/scala/integration/kafka/api/BaseAdminIntegrationTest.scala need 
> to be updated to support KRaft
> 70 : def testCreateDeleteTopics(): Unit = {
> 163 : def testAuthorizedOperations(): Unit = {
> Scanned 259 lines. Found 0 KRaft tests out of 2 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16974) KRaft support in SslAdminIntegrationTest

2024-06-17 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16974:
--

 Summary: KRaft support in SslAdminIntegrationTest
 Key: KAFKA-16974
 URL: https://issues.apache.org/jira/browse/KAFKA-16974
 Project: Kafka
  Issue Type: Task
  Components: core
Reporter: Mickael Maison


This class needs to be updated to support KRaft



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16865) Admin.describeTopics behavior change after KIP-966

2024-06-12 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16865.

Resolution: Fixed

> Admin.describeTopics behavior change after KIP-966
> --
>
> Key: KAFKA-16865
> URL: https://issues.apache.org/jira/browse/KAFKA-16865
> Project: Kafka
>  Issue Type: Task
>  Components: admin, clients
>Affects Versions: 3.8.0
>Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
> Fix For: 3.9.0
>
>
> Running the following code produces different behavior between ZooKeeper and 
> KRaft:
> {code:java}
> DescribeTopicsOptions options = new 
> DescribeTopicsOptions().includeAuthorizedOperations(false);
> TopicCollection topics = 
> TopicCollection.ofTopicNames(Collections.singletonList(topic));
> DescribeTopicsResult describeTopicsResult = admin.describeTopics(topics, 
> options);
> TopicDescription topicDescription = 
> describeTopicsResult.topicNameValues().get(topic).get();
> System.out.println(topicDescription.authorizedOperations());
> {code}
> With ZooKeeper this prints null, and with KRaft it prints [ALTER, READ, 
> DELETE, ALTER_CONFIGS, CREATE, DESCRIBE_CONFIGS, WRITE, DESCRIBE].
> The Admin.getTopicDescriptionFromDescribeTopicsResponseTopic method does not take 
> into account the options provided to describeTopics() and always populates 
> the authorizedOperations field.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-5261) Performance improvement of SimpleAclAuthorizer

2024-06-07 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-5261.
---
Resolution: Won't Do

> Performance improvement of SimpleAclAuthorizer
> --
>
> Key: KAFKA-5261
> URL: https://issues.apache.org/jira/browse/KAFKA-5261
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Stephane Maarek
>Priority: Major
>
> Currently, looking at the KafkaApis class, it seems that every request going 
> through Kafka is also going through an authorize check:
> {code}
>   private def authorize(session: Session, operation: Operation, resource: 
> Resource): Boolean =
> authorizer.forall(_.authorize(session, operation, resource))
> {code}
> The SimpleAclAuthorizer logic runs through checks which all look to be done 
> in linear time (except on first run) proportional to the number of acls on a 
> specific resource. This operation is re-run every time a client tries to use 
> a Kafka API, especially in the very frequently called `handleProducerRequest` and 
> `handleFetchRequest`.
> I believe a cache could be built to store the result of the authorize call, 
> possibly allowing more expensive authorize() calls to happen, and greatly 
> reducing the CPU usage in the long run. The cache would be invalidated every 
> time a change happens to aclCache.
> Thoughts before I try giving it a go with a PR? 
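
A purely illustrative sketch of the caching idea described above (never implemented; the issue was closed as Won't Do, and the class, key format and invalidation hook are hypothetical):
{code:java}
// Hypothetical cache for authorize() results, invalidated whenever aclCache changes.
final class AuthorizeResultCache {
    private final java.util.concurrent.ConcurrentHashMap<String, Boolean> cache =
        new java.util.concurrent.ConcurrentHashMap<>();

    // key encodes principal, operation and resource; underlying runs the real ACL check
    boolean authorize(String key, java.util.function.Supplier<Boolean> underlying) {
        return cache.computeIfAbsent(key, k -> underlying.get());
    }

    // to be called from every code path that mutates aclCache
    void invalidateAll() {
        cache.clear();
    }
}
{code}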



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16881) InitialState type leaks into the Connect REST API OpenAPI spec

2024-06-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16881:
--

 Summary: InitialState type leaks into the Connect REST API OpenAPI 
spec
 Key: KAFKA-16881
 URL: https://issues.apache.org/jira/browse/KAFKA-16881
 Project: Kafka
  Issue Type: Task
  Components: connect
Affects Versions: 3.7.0
Reporter: Mickael Maison


In our [OpenAPI spec 
file|https://kafka.apache.org/37/generated/connect_rest.yaml] we have the 
following:
{noformat}
CreateConnectorRequest:
      type: object
      properties:
        config:
          type: object
          additionalProperties:
            type: string
        initialState:
          type: string
          enum:
          - RUNNING
          - PAUSED
          - STOPPED
        initial_state:
          type: string
          enum:
          - RUNNING
          - PAUSED
          - STOPPED
          writeOnly: true
        name:
          type: string{noformat}
Only initial_state is a valid field; initialState should not be present.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16865) Admin.describeTopics behavior change after KIP-966

2024-05-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16865:
--

 Summary: Admin.describeTopics behavior change after KIP-966
 Key: KAFKA-16865
 URL: https://issues.apache.org/jira/browse/KAFKA-16865
 Project: Kafka
  Issue Type: Task
  Components: admin, clients
Reporter: Mickael Maison


Running the following code produces different behavior between ZooKeeper and 
KRaft:


{code:java}
DescribeTopicsOptions options = new 
DescribeTopicsOptions().includeAuthorizedOperations(false);
TopicCollection topics = 
TopicCollection.ofTopicNames(Collections.singletonList(topic));
DescribeTopicsResult describeTopicsResult = admin.describeTopics(topics, 
options);
TopicDescription topicDescription = 
describeTopicsResult.topicNameValues().get(topic).get();
System.out.println(topicDescription.authorizedOperations());
{code}

With ZooKeeper this prints null, and with KRaft it prints [ALTER, READ, DELETE, 
ALTER_CONFIGS, CREATE, DESCRIBE_CONFIGS, WRITE, DESCRIBE].

The Admin.getTopicDescriptionFromDescribeTopicsResponseTopic method does not take into 
account the options provided to describeTopics() and always populates the 
authorizedOperations field.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16859) Cleanup check if tiered storage is enabled

2024-05-28 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16859:
--

 Summary: Cleanup check if tiered storage is enabled
 Key: KAFKA-16859
 URL: https://issues.apache.org/jira/browse/KAFKA-16859
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


We have 2 ways to detect whether tiered storage is enabled:
- KafkaConfig.isRemoteLogStorageSystemEnabled
- KafkaConfig.remoteLogManagerConfig().enableRemoteStorageSystem()

We use both in various files. We should stick with one way to do it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16825) CVE vulnerabilities in Jetty and netty

2024-05-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16825.

Fix Version/s: 3.8.0
   Resolution: Fixed

> CVE vulnerabilities in Jetty and netty
> --
>
> Key: KAFKA-16825
> URL: https://issues.apache.org/jira/browse/KAFKA-16825
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.7.0
>Reporter: mooner
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> There is a vulnerability (CVE-2024-29025) in Netty, a transitive dependency 
> of Kafka, which has been fixed in version 4.1.108.Final.
> There is also a vulnerability (CVE-2024-22201) in the transitive dependency 
> Jetty, which has been fixed in version 9.4.54.v20240208.
> When will Kafka upgrade the versions of Netty and Jetty to fix these two 
> vulnerabilities?
> Reference website:
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025
> https://nvd.nist.gov/vuln/detail/CVE-2024-22201



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12399) Deprecate Log4J Appender KIP-719

2024-05-22 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12399.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Deprecate Log4J Appender KIP-719
> 
>
> Key: KAFKA-12399
> URL: https://issues.apache.org/jira/browse/KAFKA-12399
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Dongjin Lee
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> As a follow-up to KAFKA-9366, we have to entirely remove the log4j 1.2.7 
> dependency from the classpath by removing dependencies on log4j-appender.
> KIP-719: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Deprecate+Log4J+Appender



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7632) Support Compression Level

2024-05-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7632.
---
Fix Version/s: 3.8.0
 Assignee: Mickael Maison  (was: Dongjin Lee)
   Resolution: Fixed

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a producer, broker, and topic configuration setting 
> will allow the user to adjust the level.
> Since this also applies to the other compression codecs, we should add the same 
> functionality to them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16771) First log directory printed twice when formatting storage

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16771:
--

 Summary: First log directory printed twice when formatting storage
 Key: KAFKA-16771
 URL: https://issues.apache.org/jira/browse/KAFKA-16771
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 3.7.0
Reporter: Mickael Maison


If multiple log directories are set, when running bin/kafka-storage.sh format, 
the first directory is printed twice. For example:

{noformat}
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties --release-version 3.6
metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, 
dirs={/tmp/kraft-combined-logs: EMPTY, /tmp/kraft-combined-logs2: EMPTY})
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs2 with metadata.version 3.6-IV2.
{noformat}






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16769) Delete deprecated add.source.alias.to.metrics configuration

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16769:
--

 Summary: Delete deprecated add.source.alias.to.metrics 
configuration
 Key: KAFKA-16769
 URL: https://issues.apache.org/jira/browse/KAFKA-16769
 Project: Kafka
  Issue Type: Task
  Components: mirrormaker
Reporter: Mickael Maison
Assignee: Mickael Maison
 Fix For: 4.0.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16646) Consider only running the CVE scanner action on apache/kafka and not in forks

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16646:
--

 Summary: Consider only running the CVE scanner action on 
apache/kafka and not in forks
 Key: KAFKA-16646
 URL: https://issues.apache.org/jira/browse/KAFKA-16646
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison


Currently the CVE scanner action is failing due to CVEs in the base image. It 
seems that anyone who has a fork is getting daily emails about it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16645) CVEs in 3.7.0 docker image

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16645:
--

 Summary: CVEs in 3.7.0 docker image
 Key: KAFKA-16645
 URL: https://issues.apache.org/jira/browse/KAFKA-16645
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.7.0
Reporter: Mickael Maison


Our Docker Image CVE Scanner GitHub action reports 2 high CVEs in our base 
image:

apache/kafka:3.7.0 (alpine 3.19.1)
==
Total: 2 (HIGH: 2, CRITICAL: 0)

Library: libexpat | Vulnerability: CVE-2023-52425 | Severity: HIGH | Status: fixed |
Installed Version: 2.5.0-r2 | Fixed Version: 2.6.0-r0 |
Title: expat: parsing large tokens can trigger a denial of service
https://avd.aquasec.com/nvd/cve-2023-52425

Library: libexpat | Vulnerability: CVE-2024-28757 | Severity: HIGH | Status: fixed |
Installed Version: 2.5.0-r2 | Fixed Version: 2.6.2-r0 |
Title: expat: XML Entity Expansion
https://avd.aquasec.com/nvd/cve-2024-28757

Looking at the 
[KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka#KIP975:DockerImageforApacheKafka-WhatifweobserveabugoracriticalCVEinthereleasedApacheKafkaDockerImage?]
 that introduced the docker images, it seems we should release a bugfix when 
high CVEs are detected. It would be good to investigate and assess whether 
Kafka is impacted or not.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16478) Links for Kafka 3.5.2 release are broken

2024-04-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16478.

Resolution: Fixed

> Links for Kafka 3.5.2 release are broken
> 
>
> Key: KAFKA-16478
> URL: https://issues.apache.org/jira/browse/KAFKA-16478
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 3.5.2
>Reporter: Philipp Trulson
>Assignee: Mickael Maison
>Priority: Major
>
> While trying to update our setup, I noticed that the download links for the 
> 3.5.2 release are broken. They all point to a different host and also contain 
> an additional `/kafka` in their URL. Compare:
> not working:
> [https://downloads.apache.org/kafka/kafka/3.5.2/RELEASE_NOTES.html]
> working:
> [https://archive.apache.org/dist/kafka/3.5.1/RELEASE_NOTES.html]
> [https://downloads.apache.org/kafka/3.6.2/RELEASE_NOTES.html]
> This goes for all links in the release - archives, checksums, signatures.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15882) Scheduled nightly github actions workflow for CVE reports on published docker images

2024-03-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15882.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Scheduled nightly github actions workflow for CVE reports on published docker 
> images
> 
>
> Key: KAFKA-15882
> URL: https://issues.apache.org/jira/browse/KAFKA-15882
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Vedarth Sharma
>Assignee: Vedarth Sharma
>Priority: Major
> Fix For: 3.8.0
>
>
> This scheduled github actions workflow will check supported published docker 
> images for CVEs and generate reports.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16206) KRaftMigrationZkWriter tries to delete deleted topic configs twice

2024-03-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16206.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaftMigrationZkWriter tries to delete deleted topic configs twice
> --
>
> Key: KAFKA-16206
> URL: https://issues.apache.org/jira/browse/KAFKA-16206
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft, migration
>Reporter: David Arthur
>Assignee: Alyssa Huang
>Priority: Minor
> Fix For: 3.8.0
>
>
> When deleting a topic, we see spurious ERROR logs from 
> kafka.zk.migration.ZkConfigMigrationClient:
>  
> {code:java}
> Did not delete ConfigResource(type=TOPIC, name='xxx') since the node did not 
> exist. {code}
> This seems to happen because ZkTopicMigrationClient#deleteTopic is deleting 
> the topic, partitions, and config ZNodes in one shot. Subsequent calls from 
> KRaftMigrationZkWriter to delete the config encounter a NO_NODE since the 
> ZNode is already gone.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16355) ConcurrentModificationException in InMemoryTimeOrderedKeyValueBuffer.evictWhile

2024-03-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16355:
--

 Summary: ConcurrentModificationException in 
InMemoryTimeOrderedKeyValueBuffer.evictWhile
 Key: KAFKA-16355
 URL: https://issues.apache.org/jira/browse/KAFKA-16355
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 3.5.1
Reporter: Mickael Maison


While a Streams application was restoring its state after an outage, it hit the 
following:

org.apache.kafka.streams.errors.StreamsException: Exception caught in process. 
taskId=0_16, processor=KSTREAM-SOURCE-00, topic=, partition=16, 
offset=454875695, stacktrace=java.util.ConcurrentModificationException
at java.base/java.util.TreeMap$PrivateEntryIterator.remove(TreeMap.java:1507)
at 
org.apache.kafka.streams.state.internals.InMemoryTimeOrderedKeyValueBuffer.evictWhile(InMemoryTimeOrderedKeyValueBuffer.java:423)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.enforceConstraints(KTableSuppressProcessorSupplier.java:178)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.process(KTableSuppressProcessorSupplier.java:165)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$setFlushListener$4(MeteredWindowStore.java:181)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:124)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:99)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:158)
at 
org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:252)
at 
org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:302)
at 
org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:179)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:173)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:47)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$put$5(MeteredWindowStore.java:201)
at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:200)
at 
org.apache.kafka.streams.processor.internals.AbstractReadWriteDecorator$WindowStoreReadWriteDecorator.put(AbstractReadWriteDecorator.java:201)
at 
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:138)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamPeek$KStreamPeekProcessor.process(KStreamPeek.java:42)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.

[jira] [Created] (KAFKA-16347) Bump ZooKeeper to 3.8.4

2024-03-06 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16347:
--

 Summary: Bump ZooKeeper to 3.8.4
 Key: KAFKA-16347
 URL: https://issues.apache.org/jira/browse/KAFKA-16347
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.1, 3.7.0
Reporter: Mickael Maison
Assignee: Mickael Maison


ZooKeeper 3.8.4 was released and contains a few CVE fixes: 
https://zookeeper.apache.org/doc/r3.8.4/releasenotes.html

We should update 3.6, 3.7 and trunk to use this new ZooKeeper release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16318) Add javadoc to KafkaMetric

2024-03-01 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16318:
--

 Summary: Add javadoc to KafkaMetric
 Key: KAFKA-16318
 URL: https://issues.apache.org/jira/browse/KAFKA-16318
 Project: Kafka
  Issue Type: Bug
  Components: docs
Reporter: Mickael Maison


KafkaMetric is part of the public API but it's missing javadoc describing the 
class and several of its methods.
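
A sketch of the level of documentation the ticket asks for; the interface name and wording are illustrative, not proposed final javadoc:
{code:java}
/** Illustrative only: the kind of javadoc KafkaMetric's public methods are missing. */
public interface MetricJavadocSketch {

    /** The name, group, description and tags that uniquely identify this metric. */
    MetricName metricName();

    /** The current value of the metric: a double for measurable metrics, or an arbitrary gauge value. */
    Object metricValue();
}
{code}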



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16292) Revamp upgrade.html page

2024-02-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16292:
--

 Summary: Revamp upgrade.html page 
 Key: KAFKA-16292
 URL: https://issues.apache.org/jira/browse/KAFKA-16292
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Mickael Maison


At the moment we keep adding to this page for each release. The upgrade.html 
file is now over 2000 lines long. It still contains steps for upgrading from 0.8 
to 0.9! These steps are already in the docs for each version, which can be 
accessed via //documentation.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13566) producer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13566.

Resolution: Duplicate

> producer exponential backoff implementation
> ---
>
> Key: KAFKA-13566
> URL: https://issues.apache.org/jira/browse/KAFKA-13566
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13567) adminClient exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13567.

Resolution: Duplicate

> adminClient exponential backoff implementation
> --
>
> Key: KAFKA-13567
> URL: https://issues.apache.org/jira/browse/KAFKA-13567
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13565) consumer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13565.

Fix Version/s: 3.7.0
   Resolution: Duplicate

> consumer exponential backoff implementation
> ---
>
> Key: KAFKA-13565
> URL: https://issues.apache.org/jira/browse/KAFKA-13565
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14576) Move ConsoleConsumer to tools

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14576.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Move ConsoleConsumer to tools
> -
>
> Key: KAFKA-14576
> URL: https://issues.apache.org/jira/browse/KAFKA-14576
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14822) Allow restricting File and Directory ConfigProviders to specific paths

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14822.

Fix Version/s: 3.8.0
 Assignee: Gantigmaa Selenge  (was: Mickael Maison)
   Resolution: Fixed

> Allow restricting File and Directory ConfigProviders to specific paths
> --
>
> Key: KAFKA-14822
> URL: https://issues.apache.org/jira/browse/KAFKA-14822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
>  Labels: need-kip
> Fix For: 3.8.0
>
>
> In sensitive environments, it would be interesting to be able to restrict the 
> files that can be accessed by the built-in configuration providers.
> For example:
> config.providers=directory
> config.providers.directory.class=org.apache.kafka.connect.configs.DirectoryConfigProvider
> config.providers.directory.path=/var/run
> Then if a caller tries to access another path, for example
> ssl.keystore.password=${directory:/etc/passwd:keystore-password}
> it would be rejected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16246) Cleanups in ConsoleConsumer

2024-02-13 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16246:
--

 Summary: Cleanups in ConsoleConsumer
 Key: KAFKA-16246
 URL: https://issues.apache.org/jira/browse/KAFKA-16246
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Mickael Maison


When rewriting ConsoleConsumer in Java, in order to keep the conversion and 
review process simple we mimicked the logic flow and types used in the Scala 
implementation.

Once the rewrite is merged, we should refactor some of the logic to make it 
more Java-like. This includes removing Optional where it makes sense and moving 
all the argument checking logic into ConsoleConsumerOptions.


See https://github.com/apache/kafka/pull/15274 for pointers.

  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16238) ConnectRestApiTest broken after KIP-1004

2024-02-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16238.

Fix Version/s: 3.8.0
   Resolution: Fixed

> ConnectRestApiTest broken after KIP-1004
> 
>
> Key: KAFKA-16238
> URL: https://issues.apache.org/jira/browse/KAFKA-16238
> Project: Kafka
>  Issue Type: Improvement
>  Components: connect
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> KIP-1004 introduced a new configuration for connectors: 'tasks.max.enforce'.
> The ConnectRestApiTest system test needs to be updated to expect the new 
> configuration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16238) ConnectRestApiTest broken after KIP-1004

2024-02-09 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16238:
--

 Summary: ConnectRestApiTest broken after KIP-1004
 Key: KAFKA-16238
 URL: https://issues.apache.org/jira/browse/KAFKA-16238
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Reporter: Mickael Maison
Assignee: Mickael Maison


KIP-1004 introduced a new configuration for connectors: 'tasks.max.enforce'.

The ConnectRestApiTest system test needs to be updated to expect the new 
configuration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12937) Mirrormaker2 can only start from the beginning of a topic

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12937.

Resolution: Duplicate

> Mirrormaker2  can only start from the beginning of a topic
> --
>
> Key: KAFKA-12937
> URL: https://issues.apache.org/jira/browse/KAFKA-12937
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.8.0
> Environment: Dockerized environment
>Reporter: Daan Bosch
>Priority: Major
>
> *Goal*:
>  I want to replace Mirrormaker version 1 with Mirrormaker2.
>  To do this I want to:
>  start Mirrormaker2 from the latest offset of every topic
>  stop Mirrormaker1 
>  There should only be a couple of double messages.
> What happened:
>  Mirrormaker2 starts replicating from the start of all topics
> *How to reproduce:*
>  Start two Kafka clusters, A and B
> I produce 3000 messages to cluster A on a topic (TOPIC1)
>  Kafka Connect is running and connected to cluster B
>  Start a Mirrormaker2 task in Connect to replicate messages from cluster A, 
> with the option:
>  consumer auto.offset.reset to latest
>  Produce another 3000 messages to cluster A on the same topic (TOPIC1)
> *Expected result:*
>  Cluster B will only contain the messages produced the second time (3000 in 
> total) on TOPIC1
> Actual result:
>  The mirror picks up all messages from the start (6000 in total) and 
> replicates them to cluster B
> *Additional logs:*
>  Logs from the consumer of the Mirrormaker task:
> mirrormaker.log:7581:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO [Consumer 
> clientId=consumer-null-4, groupId=null] Seeking to offset 0 for partition 
> perf-test-8 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7583:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-3 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7585:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-2 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7587:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-1 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7589:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7591:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-7 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7593:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-6 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7595:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-5 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7597:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-4 
> (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
> You can see they are trying to seek to a position and thus overriding the latest offset.
>  
> You can see it is doing a seek to position 0 for every partition, which is 
> not what I expected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8259) Build RPM for Kafka

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8259.
---
Resolution: Won't Do

> Build RPM for Kafka
> ---
>
> Key: KAFKA-8259
> URL: https://issues.apache.org/jira/browse/KAFKA-8259
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Patrick Dignan
>Priority: Minor
>
> RPM packaging eases the installation and deployment of Kafka to make it much 
> more standard.
> I noticed in https://issues.apache.org/jira/browse/KAFKA-1324 [~jkreps] 
> closed the issue because other sources provide packaging.  I think it's 
> worthwhile for the standard, open source project to provide this as a base to 
> reduce redundant work and provide this functionality for users.  Other 
> similar open source software like Elasticsearch creates an RPM 
> [https://github.com/elastic/elasticsearch/blob/0ad3d90a36529bf369813ea6253f305e11aff2e9/distribution/packages/build.gradle].
>   This also makes forking internally more maintainable by reducing the amount 
> of work to be done for each version upgrade.
> I have a patch to add this functionality that I will clean up and PR on 
> Github.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-9094) Validate the replicas for partition reassignments triggered through the /admin/reassign_partitions zNode

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9094.
---
Resolution: Won't Do

> Validate the replicas for partition reassignments triggered through the 
> /admin/reassign_partitions zNode
> 
>
> Key: KAFKA-9094
> URL: https://issues.apache.org/jira/browse/KAFKA-9094
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
>
> As was mentioned by [~jsancio] in 
> [https://github.com/apache/kafka/pull/7574#discussion_r337621762] , it would 
> make sense to apply the same replica validation we apply to the KIP-455 
> reassignments API.
> Namely, validate that the replicas:
> * are not empty (e.g. [])
> * are not negative (e.g. [1,2,-1])
> * do not reference brokers that are not part of the cluster or are otherwise 
> unhealthy (e.g. not in the /brokers zNode)
> The last liveness validation is subject to comments. We are re-evaluating 
> whether we want to enforce it for the API.
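> A minimal sketch of the kind of validation described above (illustrative only; 
> the class and method names are assumptions, not the actual controller code):
> {code:java}
> import java.util.List;
> import java.util.Set;
>
> public final class ReplicaValidation {
>     // Rejects replica lists that are empty, contain negative broker ids,
>     // or reference brokers that are not live members of the cluster.
>     public static void validate(List<Integer> replicas, Set<Integer> liveBrokerIds) {
>         if (replicas.isEmpty()) {
>             throw new IllegalArgumentException("Replica list must not be empty");
>         }
>         for (int brokerId : replicas) {
>             if (brokerId < 0) {
>                 throw new IllegalArgumentException("Invalid broker id " + brokerId);
>             }
>             if (!liveBrokerIds.contains(brokerId)) {
>                 throw new IllegalArgumentException("Broker " + brokerId + " is not a live broker");
>             }
>         }
>     }
> }
> {code}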



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15717) KRaft support in LeaderEpochIntegrationTest

2024-02-05 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15717.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in LeaderEpochIntegrationTest
> ---
>
> Key: KAFKA-15717
> URL: https://issues.apache.org/jira/browse/KAFKA-15717
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in LeaderEpochIntegrationTest in 
> core/src/test/scala/unit/kafka/server/epoch/LeaderEpochIntegrationTest.scala 
> need to be updated to support KRaft
> 67 : def shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader(): 
> Unit = {
> 99 : def shouldSendLeaderEpochRequestAndGetAResponse(): Unit = {
> 144 : def shouldIncreaseLeaderEpochBetweenLeaderRestarts(): Unit = {
> Scanned 305 lines. Found 0 KRaft tests out of 3 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15728) KRaft support in DescribeUserScramCredentialsRequestNotAuthorizedTest

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15728.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DescribeUserScramCredentialsRequestNotAuthorizedTest
> -
>
> Key: KAFKA-15728
> URL: https://issues.apache.org/jira/browse/KAFKA-15728
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DescribeUserScramCredentialsRequestNotAuthorizedTest 
> in 
> core/src/test/scala/unit/kafka/server/DescribeUserScramCredentialsRequestNotAuthorizedTest.scala
>  need to be updated to support KRaft
> 39 : def testDescribeNotAuthorized(): Unit = {
> Scanned 52 lines. Found 0 KRaft tests out of 1 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10047) Unnecessary widening of (int to long) scope in FloatSerializer

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10047.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Unnecessary widening of (int to long) scope in FloatSerializer
> --
>
> Key: KAFKA-10047
> URL: https://issues.apache.org/jira/browse/KAFKA-10047
> Project: Kafka
>  Issue Type: Task
>  Components: clients
>Reporter: Guru Tahasildar
>Priority: Trivial
> Fix For: 3.8.0
>
>
> The following code is present in FloatSerializer:
> {code}
> long bits = Float.floatToRawIntBits(data);
> return new byte[] {
> (byte) (bits >>> 24),
> (byte) (bits >>> 16),
> (byte) (bits >>> 8),
> (byte) bits
> };
> {code}
> {{Float.floatToRawIntBits()}} returns an {{int}}, but the result is assigned 
> to a {{long}}, so there is a widening of scope. This is not needed for any 
> subsequent operations, hence it can be changed to use {{int}}.
> I would like to volunteer to make this change.
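> For illustration, the change would simply keep the bits in an {{int}} (a sketch 
> of the proposed fix, not necessarily the final patch); the shifted byte values 
> are identical:
> {code}
> int bits = Float.floatToRawIntBits(data);
> return new byte[] {
> (byte) (bits >>> 24),
> (byte) (bits >>> 16),
> (byte) (bits >>> 8),
> (byte) bits
> };
> {code}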



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-5561) Java based TopicCommand to use the Admin client

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-5561.
---
Resolution: Duplicate

> Java based TopicCommand to use the Admin client
> ---
>
> Key: KAFKA-5561
> URL: https://issues.apache.org/jira/browse/KAFKA-5561
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
>
> Hi, 
> as suggested in https://issues.apache.org/jira/browse/KAFKA-3331, it would 
> be great to have the TopicCommand use the new Admin client instead of 
> the way it works today.
> As pushed by [~gwenshap] in the above JIRA, I'm going to work on it.
> Thanks,
> Paolo
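> A minimal sketch of what the Admin-client-based approach looks like 
> (illustrative only; this is not the actual TopicCommand implementation):
> {code:java}
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.clients.admin.AdminClientConfig;
> import org.apache.kafka.clients.admin.NewTopic;
>
> public class TopicCommandSketch {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         try (Admin admin = Admin.create(props)) {
>             // --create: replaces the direct ZooKeeper writes the old command used
>             admin.createTopics(Collections.singleton(new NewTopic("my-topic", 3, (short) 1)))
>                  .all().get();
>             // --list
>             admin.listTopics().names().get().forEach(System.out::println);
>         }
>     }
> }
> {code}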



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16204) Stray file core/00000000000000000001.snapshot created when running core tests

2024-01-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16204.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Stray file core/00000000000000000001.snapshot created when running core tests
> -
>
> Key: KAFKA-16204
> URL: https://issues.apache.org/jira/browse/KAFKA-16204
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, unit tests
>Reporter: Mickael Maison
>Assignee: Gaurav Narula
>Priority: Major
>  Labels: newbie, newbie++
> Fix For: 3.8.0
>
>
> When running the core tests I often get a file called 
> core/00000000000000000001.snapshot created in my kafka folder. It looks like 
> one of the tests does not clean up its resources properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16204) Stray file core/00000000000000000001.snapshot created when running core tests

2024-01-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16204:
--

 Summary: Stray file core/00000000000000000001.snapshot created 
when running core tests
 Key: KAFKA-16204
 URL: https://issues.apache.org/jira/browse/KAFKA-16204
 Project: Kafka
  Issue Type: Improvement
  Components: core, unit tests
Reporter: Mickael Maison


When running the core tests I often get a file called 
core/00000000000000000001.snapshot created in my kafka folder. It looks like 
one of the tests does not clean up its resources properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16202) Extra dot in error message in producer

2024-01-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16202:
--

 Summary: Extra dot in error message in producer
 Key: KAFKA-16202
 URL: https://issues.apache.org/jira/browse/KAFKA-16202
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison


If the broker hits a StorageException while handling a record from the 
producer, the producer prints the following warning:

[2024-01-29 15:33:30,722] WARN [Producer clientId=console-producer] Received 
invalid metadata error in produce request on partition topic1-0 due to 
org.apache.kafka.common.errors.KafkaStorageException: Disk error when trying to 
access log file on the disk.. Going to request metadata update now 
(org.apache.kafka.clients.producer.internals.Sender)

There's an extra dot between disk and Going.
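
The extra dot appears because the exception message itself ends with a period and 
the warning template appends its own sentence after it. A tiny illustration of the 
formatting issue (this is not the actual Sender code):

{code:java}
public class ExtraDotExample {
    public static void main(String[] args) {
        String cause = "org.apache.kafka.common.errors.KafkaStorageException: "
                + "Disk error when trying to access log file on the disk.";
        // The surrounding template terminates the clause with ". ", so an exception
        // message that already ends with a period yields "disk.." in the output.
        String warning = "Received invalid metadata error in produce request on partition "
                + "topic1-0 due to " + cause + ". Going to request metadata update now";
        System.out.println(warning);
    }
}
{code}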



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16003) The znode /config/topics is not updated during KRaft migration in "dual-write" mode

2024-01-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16003.

Fix Version/s: 3.8.0
   Resolution: Fixed

> The znode /config/topics is not updated during KRaft migration in 
> "dual-write" mode
> ---
>
> Key: KAFKA-16003
> URL: https://issues.apache.org/jira/browse/KAFKA-16003
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 3.6.1
>Reporter: Paolo Patierno
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> I tried the following scenario ...
> I have a ZooKeeper-based cluster and create a my-topic-1 topic (without 
> specifying any specific configuration for it). The correct znodes are created 
> under /config/topics and /brokers/topics.
> I start a migration to KRaft but not moving forward from "dual write" mode. 
> While in this mode, I create a new my-topic-2 topic (still without any 
> specific config). I see that a new znode is created under /brokers/topics but 
> NOT under /config/topics. It seems that the KRaft controller is not updating 
> this information in ZooKeeper during the dual-write. The controller log shows 
> that the write to ZooKeeper was done but, I would say, not everything:
> {code:java}
> 2023-12-13 10:23:26,229 TRACE [KRaftMigrationDriver id=3] Create Topic 
> my-topic-2, ID Macbp8BvQUKpzmq2vG_8dA. Transitioned migration state from 
> ZkMigrationLeadershipState{kraftControllerId=3, kraftControllerEpoch=7, 
> kraftMetadataOffset=445, kraftMetadataEpoch=7, 
> lastUpdatedTimeMs=1702462785587, migrationZkVersion=236, controllerZkEpoch=3, 
> controllerZkVersion=3} to ZkMigrationLeadershipState{kraftControllerId=3, 
> kraftControllerEpoch=7, kraftMetadataOffset=445, kraftMetadataEpoch=7, 
> lastUpdatedTimeMs=1702462785587, migrationZkVersion=237, controllerZkEpoch=3, 
> controllerZkVersion=3} 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-3-migration-driver-event-handler]
> 2023-12-13 10:23:26,229 DEBUG [KRaftMigrationDriver id=3] Made the following 
> ZK writes when handling KRaft delta: {CreateTopic=1} 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-3-migration-driver-event-handler] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7957) Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate

2024-01-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7957.
---
Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate
> -
>
> Key: KAFKA-7957
> URL: https://issues.apache.org/jira/browse/KAFKA-7957
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Mickael Maison
>Priority: Blocker
>  Labels: flaky-test
> Fix For: 3.8.0
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/18/]
> {quote}java.lang.AssertionError: Messages not sent at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:356) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766) at 
> kafka.server.DynamicBrokerReconfigurationTest.startProduceConsume(DynamicBrokerReconfigurationTest.scala:1270)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testMetricsReporterUpdate(DynamicBrokerReconfigurationTest.scala:650){quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16188) Delete deprecated kafka.common.MessageReader

2024-01-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16188:
--

 Summary: Delete deprecated kafka.common.MessageReader
 Key: KAFKA-16188
 URL: https://issues.apache.org/jira/browse/KAFKA-16188
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison
Assignee: Mickael Maison
 Fix For: 4.0.0


[KIP-641|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866569]
 introduced org.apache.kafka.tools.api.RecordReader and deprecated 
kafka.common.MessageReader in Kafka 3.5.0.

We should delete kafka.common.MessageReader in Kafka 4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16170) Continuous never ending logs observed when running single node kafka in kraft mode with default KRaft properties in 3.7.0 RC2

2024-01-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16170.

Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/KAFKA-16144

> Continuous never ending logs observed when running single node kafka in kraft 
> mode with default KRaft properties in 3.7.0 RC2
> -
>
> Key: KAFKA-16170
> URL: https://issues.apache.org/jira/browse/KAFKA-16170
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Vedarth Sharma
>Priority: Major
> Attachments: kafka_logs.txt
>
>
> After Kafka server startup, endless logs are observed, even when the server is 
> sitting idle. This behaviour was not observed in previous versions.
> It is easy to reproduce this issue:
>  * Download the RC tarball for 3.7.0
>  * Follow the [quickstart guide|https://kafka.apache.org/quickstart] to run 
> kafka in KRaft mode i.e. execute following commands
>  ** KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
>  ** bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
> config/kraft/server.properties
>  ** bin/kafka-server-start.sh config/kraft/server.properties
>  * Once the Kafka server is started, wait for a few seconds and you should see 
> endless logs coming in.
> I have attached a small section of the logs in the ticket, just after the Kafka 
> startup line, to showcase the nature of the endless logs observed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16163) Constant resignation/reelection of controller when starting a single node in combined mode

2024-01-18 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16163:
--

 Summary: Constant resignation/reelection of controller when 
starting a single node in combined mode
 Key: KAFKA-16163
 URL: https://issues.apache.org/jira/browse/KAFKA-16163
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.7.0
Reporter: Mickael Maison


When starting a single node in combined mode:
{noformat}
$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties
$ bin/kafka-server-start.sh config/kraft/server.properties{noformat}
 

it's constantly spamming the logs with:
{noformat}
[2024-01-18 17:37:09,065] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:11,967] INFO [RaftManager id=1] Did not receive fetch request 
from the majority of the voters within 3000ms. Current fetched voters are []. 
(org.apache.kafka.raft.LeaderState)
[2024-01-18 17:37:11,967] INFO [RaftManager id=1] Completed transition to 
ResignedState(localId=1, epoch=138, voters=[1], electionTimeoutMs=1864, 
unackedVoters=[], preferredSuccessors=[]) from Leader(localId=1, epoch=138, 
epochStartOffset=829, highWatermark=Optional[LogOffsetMetadata(offset=835, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=62788)])], 
voterStates={1=ReplicaState(nodeId=1, 
endOffset=Optional[LogOffsetMetadata(offset=835, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=62788)])], 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) 
(org.apache.kafka.raft.QuorumState)
[2024-01-18 17:37:13,072] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,072] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,123] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,124] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,124] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,175] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,176] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,176] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,227] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,229] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,229] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,279] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread){noformat}
This did not happen in 3.6.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16153) kraft_upgrade_test system test is broken

2024-01-17 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16153:
--

 Summary: kraft_upgrade_test system test is broken
 Key: KAFKA-16153
 URL: https://issues.apache.org/jira/browse/KAFKA-16153
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Mickael Maison


I get the following failure from all `from_kafka_version` versions:


Command '/opt/kafka-dev/bin/kafka-features.sh --bootstrap-server 
ducker05:9092,ducker06:9092,ducker07:9092 upgrade --metadata 3.8' returned 
non-zero exit status 1. Remote error message: b'SLF4J: Class path contains 
multiple SLF4J bindings.\nSLF4J: Found binding in 
[jar:file:/opt/kafka-dev/tools/build/dependant-libs-2.13.12/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 Found binding in 
[jar:file:/opt/kafka-dev/trogdor/build/dependant-libs-2.13.12/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.\nSLF4J: Actual binding is of type 
[org.slf4j.impl.Reload4jLoggerFactory]\nUnsupported metadata version 3.8. 
Supported metadata versions are 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 
3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 
3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0\n'



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15740) KRaft support in DeleteOffsetsConsumerGroupCommandIntegrationTest

2024-01-15 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15740.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DeleteOffsetsConsumerGroupCommandIntegrationTest
> -
>
> Key: KAFKA-15740
> URL: https://issues.apache.org/jira/browse/KAFKA-15740
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DeleteOffsetsConsumerGroupCommandIntegrationTest in 
> core/src/test/scala/unit/kafka/admin/DeleteOffsetsConsumerGroupCommandIntegrationTest.scala
>  need to be updated to support KRaft
> 49 : def testDeleteOffsetsNonExistingGroup(): Unit = {
> 59 : def testDeleteOffsetsOfStableConsumerGroupWithTopicPartition(): Unit = {
> 64 : def testDeleteOffsetsOfStableConsumerGroupWithTopicOnly(): Unit = {
> 69 : def testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition(): 
> Unit = {
> 74 : def testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly(): Unit = 
> {
> 79 : def testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition(): Unit = {
> 84 : def testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly(): Unit = {
> 89 : def testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition(): 
> Unit = {
> 94 : def testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicOnly(): Unit = {
> Scanned 198 lines. Found 0 KRaft tests out of 9 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16130) Test migration rollback

2024-01-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16130:
--

 Summary: Test migration rollback
 Key: KAFKA-16130
 URL: https://issues.apache.org/jira/browse/KAFKA-16130
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
 Fix For: 3.8.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16119) kraft_upgrade_test system test is broken

2024-01-12 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16119.

Resolution: Invalid

After rebuilding my env from scratch I don't see this error anymore

> kraft_upgrade_test system test is broken
> 
>
> Key: KAFKA-16119
> URL: https://issues.apache.org/jira/browse/KAFKA-16119
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 3.6.0, 3.7.0, 3.6.1
>Reporter: Mickael Maison
>Priority: Major
>
> When the test attempts to restart brokers after the upgrade, brokers fail 
> with:
> [2024-01-12 13:43:40,144] ERROR Exiting Kafka due to fatal exception 
> (kafka.Kafka$)
> java.lang.NoClassDefFoundError: 
> org/apache/kafka/image/loader/MetadataLoaderMetrics
> at kafka.server.KafkaRaftServer.(KafkaRaftServer.scala:68)
> at kafka.Kafka$.buildServer(Kafka.scala:83)
> at kafka.Kafka$.main(Kafka.scala:91)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.image.loader.MetadataLoaderMetrics
> at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> ... 4 more
> MetadataLoaderMetrics was moved from org.apache.kafka.image.loader to 
> org.apache.kafka.image.loader.metrics in 
> https://github.com/apache/kafka/commit/c7de30f38bfd6e2d62a0b5c09b5dc9707e58096b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16119) kraft_upgrade_test system test is broken

2024-01-12 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16119:
--

 Summary: kraft_upgrade_test system test is broken
 Key: KAFKA-16119
 URL: https://issues.apache.org/jira/browse/KAFKA-16119
 Project: Kafka
  Issue Type: New Feature
Affects Versions: 3.6.1, 3.6.0, 3.7.0
Reporter: Mickael Maison


When the test attempts to restart brokers after the upgrade, brokers fail with:

[2024-01-12 13:43:40,144] ERROR Exiting Kafka due to fatal exception 
(kafka.Kafka$)
java.lang.NoClassDefFoundError: 
org/apache/kafka/image/loader/MetadataLoaderMetrics
at kafka.server.KafkaRaftServer.(KafkaRaftServer.scala:68)
at kafka.Kafka$.buildServer(Kafka.scala:83)
at kafka.Kafka$.main(Kafka.scala:91)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.image.loader.MetadataLoaderMetrics
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 4 more

MetadataLoaderMetrics was moved from org.apache.kafka.image.loader to 
org.apache.kafka.image.loader.metrics in 
https://github.com/apache/kafka/commit/c7de30f38bfd6e2d62a0b5c09b5dc9707e58096b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15747) KRaft support in DynamicConnectionQuotaTest

2024-01-10 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15747.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DynamicConnectionQuotaTest
> ---
>
> Key: KAFKA-15747
> URL: https://issues.apache.org/jira/browse/KAFKA-15747
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DynamicConnectionQuotaTest in 
> core/src/test/scala/integration/kafka/network/DynamicConnectionQuotaTest.scala
>  need to be updated to support KRaft
> 77 : def testDynamicConnectionQuota(): Unit = {
> 104 : def testDynamicListenerConnectionQuota(): Unit = {
> 175 : def testDynamicListenerConnectionCreationRateQuota(): Unit = {
> 237 : def testDynamicIpConnectionRateQuota(): Unit = {
> Scanned 416 lines. Found 0 KRaft tests out of 4 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15741) KRaft support in DescribeConsumerGroupTest

2024-01-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15741.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DescribeConsumerGroupTest
> --
>
> Key: KAFKA-15741
> URL: https://issues.apache.org/jira/browse/KAFKA-15741
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DescribeConsumerGroupTest in 
> core/src/test/scala/unit/kafka/admin/DescribeConsumerGroupTest.scala need to 
> be updated to support KRaft
> 39 : def testDescribeNonExistingGroup(): Unit = {
> 55 : def testDescribeWithMultipleSubActions(): Unit = {
> 76 : def testDescribeWithStateValue(): Unit = {
> 97 : def testDescribeOffsetsOfNonExistingGroup(): Unit = {
> 113 : def testDescribeMembersOfNonExistingGroup(): Unit = {
> 133 : def testDescribeStateOfNonExistingGroup(): Unit = {
> 151 : def testDescribeExistingGroup(): Unit = {
> 169 : def testDescribeExistingGroups(): Unit = {
> 194 : def testDescribeAllExistingGroups(): Unit = {
> 218 : def testDescribeOffsetsOfExistingGroup(): Unit = {
> 239 : def testDescribeMembersOfExistingGroup(): Unit = {
> 272 : def testDescribeStateOfExistingGroup(): Unit = {
> 291 : def testDescribeStateOfExistingGroupWithRoundRobinAssignor(): Unit = {
> 310 : def testDescribeExistingGroupWithNoMembers(): Unit = {
> 334 : def testDescribeOffsetsOfExistingGroupWithNoMembers(): Unit = {
> 366 : def testDescribeMembersOfExistingGroupWithNoMembers(): Unit = {
> 390 : def testDescribeStateOfExistingGroupWithNoMembers(): Unit = {
> 417 : def testDescribeWithConsumersWithoutAssignedPartitions(): Unit = {
> 436 : def testDescribeOffsetsWithConsumersWithoutAssignedPartitions(): Unit = 
> {
> 455 : def testDescribeMembersWithConsumersWithoutAssignedPartitions(): Unit = 
> {
> 480 : def testDescribeStateWithConsumersWithoutAssignedPartitions(): Unit = {
> 496 : def testDescribeWithMultiPartitionTopicAndMultipleConsumers(): Unit = {
> 517 : def testDescribeOffsetsWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 539 : def testDescribeMembersWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 565 : def testDescribeStateWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 583 : def testDescribeSimpleConsumerGroup(): Unit = {
> 601 : def testDescribeGroupWithShortInitializationTimeout(): Unit = {
> 618 : def testDescribeGroupOffsetsWithShortInitializationTimeout(): Unit = {
> 634 : def testDescribeGroupMembersWithShortInitializationTimeout(): Unit = {
> 652 : def testDescribeGroupStateWithShortInitializationTimeout(): Unit = {
> 668 : def testDescribeWithUnrecognizedNewConsumerOption(): Unit = {
> 674 : def testDescribeNonOffsetCommitGroup(): Unit = {
> Scanned 699 lines. Found 0 KRaft tests out of 32 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15719) KRaft support in OffsetsForLeaderEpochRequestTest

2024-01-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15719.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in OffsetsForLeaderEpochRequestTest
> -
>
> Key: KAFKA-15719
> URL: https://issues.apache.org/jira/browse/KAFKA-15719
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in OffsetsForLeaderEpochRequestTest in 
> core/src/test/scala/unit/kafka/server/OffsetsForLeaderEpochRequestTest.scala 
> need to be updated to support KRaft
> 37 : def testOffsetsForLeaderEpochErrorCodes(): Unit = {
> 60 : def testCurrentEpochValidation(): Unit = {
> Scanned 127 lines. Found 0 KRaft tests out of 2 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15725) KRaft support in FetchRequestTest

2024-01-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15725.

Fix Version/s: 3.7.0
   Resolution: Fixed

> KRaft support in FetchRequestTest
> -
>
> Key: KAFKA-15725
> URL: https://issues.apache.org/jira/browse/KAFKA-15725
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.7.0
>
>
> The following tests in FetchRequestTest in 
> core/src/test/scala/unit/kafka/server/FetchRequestTest.scala need to be 
> updated to support KRaft
> 45 : def testBrokerRespectsPartitionsOrderAndSizeLimits(): Unit = {
> 147 : def testFetchRequestV4WithReadCommitted(): Unit = {
> 165 : def testFetchRequestToNonReplica(): Unit = {
> 195 : def testLastFetchedEpochValidation(): Unit = {
> 200 : def testLastFetchedEpochValidationV12(): Unit = {
> 247 : def testCurrentEpochValidation(): Unit = {
> 252 : def testCurrentEpochValidationV12(): Unit = {
> 295 : def testEpochValidationWithinFetchSession(): Unit = {
> 300 : def testEpochValidationWithinFetchSessionV12(): Unit = {
> 361 : def testDownConversionWithConnectionFailure(): Unit = {
> 428 : def testDownConversionFromBatchedToUnbatchedRespectsOffset(): Unit = {
> 509 : def testCreateIncrementalFetchWithPartitionsInErrorV12(): Unit = {
> 568 : def testFetchWithPartitionsWithIdError(): Unit = {
> 610 : def testZStdCompressedTopic(): Unit = {
> 657 : def testZStdCompressedRecords(): Unit = {
> Scanned 783 lines. Found 0 KRaft tests out of 15 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15980) Add KIP-1001 CurrentControllerId metric

2023-12-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15980.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Add KIP-1001 CurrentControllerId metric
> ---
>
> Key: KAFKA-15980
> URL: https://issues.apache.org/jira/browse/KAFKA-15980
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15000) High vulnerability PRISMA-2023-0067 reported in jackson-core

2023-12-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15000.

Resolution: Fixed

> High vulnerability PRISMA-2023-0067 reported in jackson-core
> 
>
> Key: KAFKA-15000
> URL: https://issues.apache.org/jira/browse/KAFKA-15000
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.0, 3.3.2, 3.5.1
>Reporter: Arushi Rai
>Assignee: Said BOUDJELDA
>Priority: Critical
> Fix For: 3.7.0
>
>
> Kafka is using jackson-core version 2.13.4, which has a high-severity 
> vulnerability reported: 
> [PRISMA-2023-0067|https://github.com/FasterXML/jackson-core/pull/827]
> This vulnerability is fixed in jackson-core 2.15.0 and Kafka should upgrade to 
> the same. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16005) ZooKeeper to KRaft migration rollback missing disabling controller and migration configuration on brokers

2023-12-14 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16005.

Fix Version/s: 3.7.0
   Resolution: Fixed

> ZooKeeper to KRaft migration rollback missing disabling controller and 
> migration configuration on brokers
> -
>
> Key: KAFKA-16005
> URL: https://issues.apache.org/jira/browse/KAFKA-16005
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.6.1
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
> Fix For: 3.7.0
>
>
> I was following the latest documentation additions to try the rollback 
> process of a ZK cluster migrating to KRaft, while it's still in dual-write 
> mode: 
> [https://github.com/apache/kafka/pull/14160/files#diff-e4e8d893dc2a4e999c96713dd5b5857203e0756860df0e70fb0cb041aa4d347bR3786]
> The first point is just about stopping the broker, deleting the 
> __cluster_metadata folder and restarting the broker.
> I think it's missing at least the following steps:
>  * removing/disabling the ZooKeeper migration flag
>  * removing all properties related to controllers configuration (i.e. 
> controller.quorum.voters, controller.listener.names, ...)
> Without those steps, when the broker restarts, the broker re-creates 
> the __cluster_metadata folder (because it syncs with the controllers while they 
> are still running).
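> For illustration, the kind of broker properties that would need to be removed or 
> disabled before that restart (a sketch based on the migration configuration 
> names; not the complete rollback procedure):
> {noformat}
> # disable (or remove) the migration flag
> zookeeper.metadata.migration.enable=false
> # and remove the KRaft controller settings, e.g.
> # controller.quorum.voters=...
> # controller.listener.names=...
> {noformat}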
> Also, when the controllers stop, the broker starts to raise exceptions like this:
> {code:java}
> [2023-12-13 15:22:28,437] DEBUG [BrokerToControllerChannelManager id=0 
> name=quorum] Connection with localhost/127.0.0.1 (channelId=1) disconnected 
> (org.apache.kafka.common.network.Selector)java.net.ConnectException: 
> Connection refusedat 
> java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)at 
> java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
> at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
> at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 
>at org.apache.kafka.common.network.Selector.poll(Selector.java:481)at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:571)at 
> org.apache.kafka.server.util.InterBrokerSendThread.pollOnce(InterBrokerSendThread.java:109)
> at 
> kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:421)
> at 
> org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)[2023-12-13
>  15:22:28,438] INFO [BrokerToControllerChannelManager id=0 name=quorum] Node 
> 1 disconnected. (org.apache.kafka.clients.NetworkClient)[2023-12-13 
> 15:22:28,438] WARN [BrokerToControllerChannelManager id=0 name=quorum] 
> Connection to node 1 (localhost/127.0.0.1:9093) could not be established. 
> Broker may not be available. (org.apache.kafka.clients.NetworkClient) {code}
> (where I have controller locally on port 9093)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15995) Mechanism for plugins and connectors to register metrics

2023-12-12 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15995:
--

 Summary: Mechanism for plugins and connectors to register metrics
 Key: KAFKA-15995
 URL: https://issues.apache.org/jira/browse/KAFKA-15995
 Project: Kafka
  Issue Type: New Feature
Reporter: Mickael Maison
Assignee: Mickael Maison


Ticket for 
[KIP-877|https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15714) KRaft support in DynamicNumNetworkThreadsTest

2023-12-10 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15714.

Fix Version/s: 3.7.0
   Resolution: Fixed

> KRaft support in DynamicNumNetworkThreadsTest
> -
>
> Key: KAFKA-15714
> URL: https://issues.apache.org/jira/browse/KAFKA-15714
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.7.0
>
>
> The following tests in DynamicNumNetworkThreadsTest in 
> core/src/test/scala/integration/kafka/network/DynamicNumNetworkThreadsTest.scala
>  need to be updated to support KRaft
> 58 : def testDynamicNumNetworkThreads(): Unit = {
> Scanned 103 lines. Found 0 KRaft tests out of 1 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15973) quota_test.py system tests are flaky

2023-12-05 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15973:
--

 Summary: quota_test.py system tests are flaky
 Key: KAFKA-15973
 URL: https://issues.apache.org/jira/browse/KAFKA-15973
 Project: Kafka
  Issue Type: Bug
  Components: core, system tests
Reporter: Mickael Maison


Stacktrace:
{noformat}
    TimeoutError("Kafka server didn't finish startup in 60 seconds")
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
186, in _do_run
    data = self.run_test()
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
246, in run_test
    return self.test_context.function(self.test)
  File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
433, in wrapper
    return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/opt/kafka-dev/tests/kafkatest/tests/client/quota_test.py", line 139, 
in test_quota
    self.kafka.start()
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 654, in 
start
    self.wait_for_start(node, monitor, timeout_sec)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 879, in 
wait_for_start
    monitor.wait_until("Kafka\s*Server.*started", timeout_sec=timeout_sec, 
backoff_sec=.25,
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
line 753, in wait_until
    return wait_until(lambda: self.acct.ssh("tail -c +%d %s | grep '%s'" % 
(self.offset + 1, self.log, pattern),
  File "/usr/local/lib/python3.9/dist-packages/ducktape/utils/util.py", line 
58, in wait_until
    raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from 
last_exception
ducktape.errors.TimeoutError: Kafka server didn't finish startup in 60 
seconds{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15645) Move ReplicationQuotasTestRig to tools

2023-12-05 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15645.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Move ReplicationQuotasTestRig to tools
> --
>
> Key: KAFKA-15645
> URL: https://issues.apache.org/jira/browse/KAFKA-15645
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Minor
> Fix For: 3.7.0
>
>
> The ReplicationQuotasTestRig class is used for measuring performance.
> It contains dependencies on the `ReassignPartitionCommand` API.
> To move all commands to tools, ReplicationQuotasTestRig must be moved to tools 
> as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15912) Parallelize conversion and transformation steps in Connect

2023-11-28 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15912:
--

 Summary: Parallelize conversion and transformation steps in Connect
 Key: KAFKA-15912
 URL: https://issues.apache.org/jira/browse/KAFKA-15912
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Reporter: Mickael Maison


In busy Connect pipelines, the conversion and transformation steps can 
sometimes have a very significant impact on performance. This is especially 
true with large records with complex schemas, for example with CDC connectors.

Today, in order to always preserve ordering, converters and transformations are 
called on one record at a time in a single thread in the Connect worker. As 
Connect usually handles records in batches (up to max.poll.records in sink 
pipelines; for source pipelines it depends on the connector), it could be 
highly beneficial to run the converters and transformation chain in 
parallel on a pool of processing threads.

It should be possible to do some of these steps in parallel and still keep 
exact ordering. I'm even considering whether an option to lose ordering but 
allow even faster processing would make sense.
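
A minimal sketch of one order-preserving approach (illustrative only, not the 
actual Connect worker code; class and method names are made up): fan the 
per-record conversion/transformation out to a thread pool, then join the futures 
in submission order so the batch keeps its original ordering.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public final class ParallelTransformSketch {

    // Applies the conversion/transformation chain to a batch in parallel and
    // returns the results in the original record order.
    public static <R, T> List<T> processBatch(List<R> batch,
                                              Function<R, T> convertAndTransform,
                                              ExecutorService pool)
            throws InterruptedException, ExecutionException {
        List<Future<T>> futures = new ArrayList<>(batch.size());
        for (R record : batch) {
            Callable<T> task = () -> convertAndTransform.apply(record);
            futures.add(pool.submit(task));
        }
        List<T> results = new ArrayList<>(batch.size());
        for (Future<T> future : futures) {
            results.add(future.get()); // join in submission order, preserving ordering
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        System.out.println(processBatch(List.of("a", "b", "c"), String::toUpperCase, pool));
        pool.shutdown();
    }
}
{code}

If ordering is allowed to be lost (the option mentioned above), the join step 
could instead emit each result as soon as its future completes.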



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15464) Allow dynamic reloading of certificates with different DN / SANs

2023-11-24 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15464.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Allow dynamic reloading of certificates with different DN / SANs
> 
>
> Key: KAFKA-15464
> URL: https://issues.apache.org/jira/browse/KAFKA-15464
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jakub Scholz
>Assignee: Jakub Scholz
>Priority: Major
> Fix For: 3.7.0
>
>
> Kafka currently doesn't allow dynamic reloading of keystores when the new key 
> has a different DN or removes some of the SANs. While it might help to 
> prevent users from breaking their cluster, in some cases it would be great to 
> be able to bypass this validation when desired.
> More details are in the [KIP-978: Allow dynamic reloading of certificates 
> with different DN / 
> SANs|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263429128]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15793) Flaky test ZkMigrationIntegrationTest.testMigrateTopicDeletions

2023-11-17 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15793.

Resolution: Fixed

> Flaky test ZkMigrationIntegrationTest.testMigrateTopicDeletions
> ---
>
> Key: KAFKA-15793
> URL: https://issues.apache.org/jira/browse/KAFKA-15793
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Divij Vaidya
>Assignee: David Arthur
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.7.0, 3.6.1
>
> Attachments: Screenshot 2023-11-06 at 11.30.06.png
>
>
> The tests have been flaky since they were introduced in 
> [https://github.com/apache/kafka/pull/14545] (see picture attached).
> The stack traces for the flakiness can be found at 
> [https://ge.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.tags=trunk&search.timeZoneId=Europe%2FBerlin&tests.container=kafka.zk.ZkMigrationIntegrationTest]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15644) Fix CVE-2023-4586 in netty:handler

2023-10-26 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15644.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Fix CVE-2023-4586 in netty:handler
> --
>
> Key: KAFKA-15644
> URL: https://issues.apache.org/jira/browse/KAFKA-15644
> Project: Kafka
>  Issue Type: Bug
>Reporter: Atul Sharma
>Assignee: Atul Sharma
>Priority: Major
> Fix For: 3.7.0
>
>
> Need to remediate CVE-2023-4586 
> Ref: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-4586



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15093) Add 3.5.0 to broker/client and streams upgrade/compatibility tests

2023-10-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15093.

Fix Version/s: 3.5.2
   3.7.0
   3.6.1
   Resolution: Fixed

> Add 3.5.0 to broker/client and streams upgrade/compatibility tests
> --
>
> Key: KAFKA-15093
> URL: https://issues.apache.org/jira/browse/KAFKA-15093
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.5.2, 3.7.0, 3.6.1
>
>
> Per the penultimate bullet on the [release 
> checklist|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-Afterthevotepasses],
>  Kafka v3.5.0 is released. We should add this version to the system tests.
> Example PRs:
>  * Broker and clients: [https://github.com/apache/kafka/pull/6794]
>  * Streams: [https://github.com/apache/kafka/pull/6597/files]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15664) Add 3.4.0 streams upgrade/compatibility tests

2023-10-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15664.

Resolution: Fixed

> Add 3.4.0 streams upgrade/compatibility tests
> -
>
> Key: KAFKA-15664
> URL: https://issues.apache.org/jira/browse/KAFKA-15664
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 3.5.0
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Critical
> Fix For: 3.5.2, 3.7.0, 3.6.1
>
>
> Per the penultimate bullet on the release checklist, Kafka v3.4.0 is 
> released. We should add this version to the system tests.
> Example PR: https://github.com/apache/kafka/pull/6597/files



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15664) Add 3.4.0 streams upgrade/compatibility tests

2023-10-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15664:
--

 Summary: Add 3.4.0 streams upgrade/compatibility tests
 Key: KAFKA-15664
 URL: https://issues.apache.org/jira/browse/KAFKA-15664
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison
Assignee: Mickael Maison


Per the penultimate bullet on the release checklist, Kafka v3.4.0 is released. 
We should add this version to the system tests.

Example PR: https://github.com/apache/kafka/pull/6597/files



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15630) Improve documentation of offset.lag.max

2023-10-18 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15630:
--

 Summary: Improve documentation of offset.lag.max
 Key: KAFKA-15630
 URL: https://issues.apache.org/jira/browse/KAFKA-15630
 Project: Kafka
  Issue Type: Improvement
  Components: docs, mirrormaker
Reporter: Mickael Maison


It would be good to expand on the role of this configuration in offset 
translation and mention that it can be set to a smaller value, or even 0, to 
help in scenarios where records may not flow constantly.
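
For example, a per-flow override in an mm2.properties file could look like this 
(a sketch; the A and B cluster aliases are illustrative):

{noformat}
# Translate offsets eagerly so checkpoints stay close to the consumers'
# actual positions even when records do not flow constantly.
A->B.offset.lag.max = 0
{noformat}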



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15622) Delete configs deprecated by KIP-629

2023-10-17 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15622:
--

 Summary: Delete configs deprecated by KIP-629
 Key: KAFKA-15622
 URL: https://issues.apache.org/jira/browse/KAFKA-15622
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 4.0.0
Reporter: Mickael Maison
Assignee: Mickael Maison


[KIP-629|https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase]
 deprecated a bunch of configurations. We should delete them in the next major 
release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14684) Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskThreadedTest

2023-10-16 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14684.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskThreadedTest
> -
>
> Key: KAFKA-14684
> URL: https://issues.apache.org/jira/browse/KAFKA-14684
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15596) Upgrade ZooKeeper to 3.8.3

2023-10-12 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15596.

Fix Version/s: 3.7.0
   3.6.1
   Resolution: Fixed

> Upgrade ZooKeeper to 3.8.3
> --
>
> Key: KAFKA-15596
> URL: https://issues.apache.org/jira/browse/KAFKA-15596
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.7.0, 3.6.1
>
>
> ZooKeeper 3.8.3 fixes 
> [CVE-2023-44981|https://www.cve.org/CVERecord?id=CVE-2023-44981] as described 
> in https://lists.apache.org/thread/7o6cch0gm7hzz0zcj2zs16hnl1dxm6oy



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15596) Upgrade ZooKeeper to 3.8.3

2023-10-12 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15596:
--

 Summary: Upgrade ZooKeeper to 3.8.3
 Key: KAFKA-15596
 URL: https://issues.apache.org/jira/browse/KAFKA-15596
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison
Assignee: Mickael Maison


ZooKeeper 3.8.3 fixes 
[CVE-2023-44981|https://www.cve.org/CVERecord?id=CVE-2023-44981] as described 
in https://lists.apache.org/thread/7o6cch0gm7hzz0zcj2zs16hnl1dxm6oy



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15521) Refactor build.gradle to align gradle swagger plugin with swagger dependencies

2023-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15521.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Refactor build.gradle to align gradle swagger plugin with swagger dependencies
> --
>
> Key: KAFKA-15521
> URL: https://issues.apache.org/jira/browse/KAFKA-15521
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Mickael Maison
>Assignee: Atul Sharma
>Priority: Major
> Fix For: 3.7.0
>
>
> We use both the Swagger Gradle plugin 
> "io.swagger.core.v3.swagger-gradle-plugin" and 2 Swagger dependencies 
> swaggerAnnotations and swaggerJaxrs2. The version for the Gradle plugin is in 
> build.gradle while the version for the dependency is in 
> gradle/dependencies.gradle.
> When we upgrade the version of one or the other it sometimes cause build 
> breakages, for example https://github.com/apache/kafka/pull/13387 and 
> https://github.com/apache/kafka/pull/14464
> We should try to have the version defined in a single place to avoid breaking 
> the build again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

