[jira] [Commented] (KAFKA-17014) ScramFormatter should not use String for password.

2024-06-20 Thread dujian0068 (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856670#comment-17856670
 ] 

dujian0068 commented on KAFKA-17014:


Hello,

This is an interesting issue. May I take it?

> ScramFormatter should not use String for password.
> --
>
> Key: KAFKA-17014
> URL: https://issues.apache.org/jira/browse/KAFKA-17014
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Priority: Major
>
> Since String is immutable, there is no easy way to erase a String password 
> after use.  We should not use String for password.  See also  
> https://stackoverflow.com/questions/8881291/why-is-char-preferred-over-string-for-passwords
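The rationale is that a char[] can be zeroed after use, while an immutable String lingers in the heap until garbage collection. A minimal illustrative sketch (editor's note; class and method names are hypothetical, not from the Kafka codebase):

```java
import java.util.Arrays;

class PasswordDemo {
    // Hypothetical helper: derive something from the password, then wipe
    // the caller's buffer so the secret does not linger in memory.
    static byte[] usePassword(char[] password) {
        try {
            byte[] derived = new byte[password.length]; // stand-in for real key derivation
            for (int i = 0; i < password.length; i++) {
                derived[i] = (byte) password[i];
            }
            return derived;
        } finally {
            Arrays.fill(password, '\0'); // erasure is impossible with String
        }
    }
}
```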



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17014) ScramFormatter should not use String for password.

2024-06-20 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1785#comment-1785
 ] 

Tsz-wo Sze commented on KAFKA-17014:


I suggest the following changes:
{code}
@@ -88,11 +92,11 @@ public class ScramFormatter {
         return result;
     }
 
-    public static byte[] normalize(String str) {
-        return toBytes(str);
+    public static byte[] normalize(char[] chars) {
+        return toBytes(chars);
     }
 
-    public byte[] saltedPassword(String password, byte[] salt, int iterations) throws InvalidKeyException {
+    public byte[] saltedPassword(char[] password, byte[] salt, int iterations) throws InvalidKeyException {
         return hi(normalize(password), salt, iterations);
     }
 
@@ -168,11 +172,20 @@ public class ScramFormatter {
         return toBytes(secureRandomString(random));
     }
 
+    public static byte[] toBytes(char[] chars) {
+        final CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder();
+        try {
+            return encoder.encode(CharBuffer.wrap(chars)).array();
+        } catch (CharacterCodingException e) {
+            throw new IllegalStateException("Failed to encode " + Arrays.toString(chars), e);
+        }
+    }
+
     public static byte[] toBytes(String str) {
         return str.getBytes(StandardCharsets.UTF_8);
     }
 
-    public ScramCredential generateCredential(String password, int iterations) {
+    public ScramCredential generateCredential(char[] password, int iterations) {
         try {
             byte[] salt = secureRandomBytes();
             byte[] saltedPassword = saltedPassword(password, salt, iterations);
{code}
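One caveat with the proposed toBytes(char[]) (an editor's note, not raised in the thread): `ByteBuffer.array()` returns the whole backing array, which the encoder may over-allocate, so the result can carry trailing zero bytes. A hedged variant that copies only the encoded range might look like:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

class CharsToBytes {
    // Copy exactly [position, limit) of the encoded buffer instead of
    // exposing the (possibly larger) backing array.
    static byte[] toBytes(char[] chars) {
        try {
            ByteBuffer buf = StandardCharsets.UTF_8.newEncoder()
                    .encode(CharBuffer.wrap(chars));
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            return bytes;
        } catch (CharacterCodingException e) {
            throw new IllegalStateException("Failed to encode char[]", e);
        }
    }
}
```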


> ScramFormatter should not use String for password.
> --
>
> Key: KAFKA-17014
> URL: https://issues.apache.org/jira/browse/KAFKA-17014
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Priority: Major
>
> Since String is immutable, there is no easy way to erase a String password 
> after use.  We should not use String for password.  See also  
> https://stackoverflow.com/questions/8881291/why-is-char-preferred-over-string-for-passwords



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17014) ScramFormatter should not use String for password.

2024-06-20 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created KAFKA-17014:
--

 Summary: ScramFormatter should not use String for password.
 Key: KAFKA-17014
 URL: https://issues.apache.org/jira/browse/KAFKA-17014
 Project: Kafka
  Issue Type: Improvement
  Components: security
Reporter: Tsz-wo Sze


Since String is immutable, there is no easy way to erase a String password 
after use.  We should not use String for password.  See also  
https://stackoverflow.com/questions/8881291/why-is-char-preferred-over-string-for-passwords



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s

2024-06-20 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 updated KAFKA-17013:
---
Issue Type: Bug  (was: Improvement)

> RequestManager#ConnectionState#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Bug
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> RequestManager#ConnectionState#toString() should use %s
> [https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s

2024-06-20 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 updated KAFKA-17013:
---
Description: 
RequestManager#ConnectionState#toString() should use %s

[https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375]

  was:
RequestManager#toString() should use %s

https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375


> RequestManager#ConnectionState#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> RequestManager#ConnectionState#toString() should use %s
> [https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s

2024-06-20 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 updated KAFKA-17013:
---
Summary: RequestManager#ConnectionState#toString() should use %s  (was: 
RequestManager#toString() should use %s)

> RequestManager#ConnectionState#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> RequestManager#toString() should use %s
> https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16531) Fix check-quorum calculation to not assume that the leader is in the voter set

2024-06-20 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-16531.
---
Resolution: Fixed

> Fix check-quorum calculation to not assume that the leader is in the voter set
> --
>
> Key: KAFKA-16531
> URL: https://issues.apache.org/jira/browse/KAFKA-16531
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kraft
>Reporter: José Armando García Sancio
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.9.0
>
>
> In the check-quorum calculation, the leader should not assume that it is part 
> of the voter set. This may happen when the leader is removing itself from the 
> voter set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17013) RequestManager#toString() should use %s

2024-06-20 Thread dujian0068 (Jira)
dujian0068 created KAFKA-17013:
--

 Summary: RequestManager#toString() should use %s
 Key: KAFKA-17013
 URL: https://issues.apache.org/jira/browse/KAFKA-17013
 Project: Kafka
  Issue Type: Improvement
Reporter: dujian0068


RequestManager#toString() should use %s

https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375
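For context, the fix matters because a format specifier that does not match the argument type fails at runtime: `%d` only accepts integral arguments, while `%s` works for any Object. A minimal sketch (editor's note; the class and method names here are hypothetical):

```java
class FormatDemo {
    // A toString() formatting a non-numeric field must use %s;
    // %d would throw IllegalFormatConversionException at runtime.
    static String describe(Object state) {
        return String.format("ConnectionState(state=%s)", state);
    }
}
```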



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17013) RequestManager#toString() should use %s

2024-06-20 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 reassigned KAFKA-17013:
--

Assignee: dujian0068

> RequestManager#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> RequestManager#toString() should use %s
> https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17012) Enable testMeasureCommitSyncDuration, testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, testMeasureCommittedDuration, testOffsetsForTimesTimeout, testBe

2024-06-20 Thread xuanzhang gong (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856648#comment-17856648
 ] 

xuanzhang gong commented on KAFKA-17012:


Hello, I will handle this issue. Please assign it to me.

> Enable testMeasureCommitSyncDuration, testMeasureCommittedDurationOnFailure, 
> testInvalidGroupMetadata, testMeasureCommittedDuration, 
> testOffsetsForTimesTimeout, testBeginningOffsetsTimeout and 
> testEndOffsetsTimeout for AsyncConsumer
> 
>
> Key: KAFKA-17012
> URL: https://issues.apache.org/jira/browse/KAFKA-17012
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> just test my fingers - it seems "testMeasureCommitSyncDuration, 
> testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, 
> testMeasureCommittedDuration, testOffsetsForTimesTimeout, 
> testBeginningOffsetsTimeout, testEndOffsetsTimeout" can work with 
> AsyncConsumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17012) Enable testMeasureCommitSyncDuration, testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, testMeasureCommittedDuration, testOffsetsForTimesTimeout, testBegi

2024-06-20 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17012:
--

 Summary: Enable testMeasureCommitSyncDuration, 
testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, 
testMeasureCommittedDuration, testOffsetsForTimesTimeout, 
testBeginningOffsetsTimeout and testEndOffsetsTimeout for AsyncConsumer
 Key: KAFKA-17012
 URL: https://issues.apache.org/jira/browse/KAFKA-17012
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


just test my fingers - it seems "testMeasureCommitSyncDuration, 
testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, 
testMeasureCommittedDuration, testOffsetsForTimesTimeout, 
testBeginningOffsetsTimeout, testEndOffsetsTimeout" can work with AsyncConsumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17007) Fix SourceAndTarget#equal

2024-06-20 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 reassigned KAFKA-17007:
--

Assignee: PoAn Yang  (was: dujian0068)

> Fix SourceAndTarget#equal
> -
>
> Key: KAFKA-17007
> URL: https://issues.apache.org/jira/browse/KAFKA-17007
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Minor
>
> In reviewing https://github.com/apache/kafka/pull/16404 I noticed that 
> SourceAndTarget is part of the public API. Hence, we should fix `equals`, 
> which does not check the class type [0].
> [0] 
> https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/SourceAndTarget.java#L49
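A standard equals() guards against comparisons with unrelated types by checking the runtime class before casting. A hedged re-creation of such a pair class (editor's note; this is an illustrative sketch, not the actual Kafka source):

```java
import java.util.Objects;

class SourceAndTargetDemo {
    private final String source;
    private final String target;

    SourceAndTargetDemo(String source, String target) {
        this.source = source;
        this.target = target;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        // The class-type check the issue asks for: reject nulls and
        // instances of any other class before casting.
        if (o == null || getClass() != o.getClass()) return false;
        SourceAndTargetDemo other = (SourceAndTargetDemo) o;
        return Objects.equals(source, other.source)
                && Objects.equals(target, other.target);
    }

    @Override
    public int hashCode() {
        return Objects.hash(source, target);
    }
}
```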



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17007) Fix SourceAndTarget#equal

2024-06-20 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 reassigned KAFKA-17007:
--

Assignee: dujian0068  (was: PoAn Yang)

> Fix SourceAndTarget#equal
> -
>
> Key: KAFKA-17007
> URL: https://issues.apache.org/jira/browse/KAFKA-17007
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: dujian0068
>Priority: Minor
>
> In reviewing https://github.com/apache/kafka/pull/16404 I noticed that 
> SourceAndTarget is part of the public API. Hence, we should fix `equals`, 
> which does not check the class type [0].
> [0] 
> https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/SourceAndTarget.java#L49



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16823) Extract LegacyConsumer-specific unit tests from generic KafkaConsumerTest

2024-06-20 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856641#comment-17856641
 ] 

Chia-Ping Tsai commented on KAFKA-16823:


The following comment is copied from the PR:

{quote}
It would be nice to review all tests before separating them. Maybe we can do a 
bit of rewriting to make them work with both consumers.
{quote}

I have filed KAFKA-16957 to address part of it, and I will dig into the other 
tests later.

> Extract LegacyConsumer-specific unit tests from generic KafkaConsumerTest 
> --
>
> Key: KAFKA-16823
> URL: https://issues.apache.org/jira/browse/KAFKA-16823
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: PoAn Yang
>Priority: Major
>  Labels: kip-848-client-support
>
> Currently the KafkaConsumerTest file contains unit tests that apply to both 
> consumer implementations, but also tests that apply to the legacy consumer 
> only. We should consider splitting the tests that apply to the legacy only 
> into their own LegacyConsumerTest file (aligning with the existing 
> AsyncKafkaConsumerTest). End result would be: 
> KafkaConsumerTest -> unit tests that apply to both consumers. 
> LegacyKafkaConsumerTest -> unit tests that apply only to the 
> LegacyKafkaConsumer, either because of the logic they test, or the way they 
> are written (file to be created with this task)
> AsyncKafkaConsumerTest -> unit tests that apply only to the 
> AsyncKafkaConsumer (this file already exist)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16957) Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to work with CLASSIC and CONSUMER

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-16957:
---
Parent: KAFKA-16823
Issue Type: Sub-task  (was: Test)

> Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to 
> work with CLASSIC and CONSUMER 
> -
>
> Key: KAFKA-16957
> URL: https://issues.apache.org/jira/browse/KAFKA-16957
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, unit tests
>Reporter: Chia-Ping Tsai
>Assignee: Chia Chuan Yu
>Priority: Minor
> Fix For: 3.9.0
>
>
> The `CLIENT_IDS` is a static variable, so the latter one will see previous 
> test results. We should clear it before testing.
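Clearing shared static state before each test keeps tests independent of execution order. A minimal sketch of the pattern (editor's note; `ClientIdTracker` and its members are hypothetical stand-ins for the static `CLIENT_IDS` described above; in JUnit the reset would typically live in a `@BeforeEach` method):

```java
import java.util.ArrayList;
import java.util.List;

class ClientIdTracker {
    // Static collector, analogous to the CLIENT_IDS field in the issue:
    // without a reset, one test observes IDs recorded by earlier tests.
    static final List<String> CLIENT_IDS = new ArrayList<>();

    static void record(String id) {
        CLIENT_IDS.add(id);
    }

    // Call before each test to discard state left by previous tests.
    static void reset() {
        CLIENT_IDS.clear();
    }
}
```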



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17009) Add unit test to query nonexistent replica by describeReplicaLogDirs

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-17009:
--

Assignee: 黃竣陽  (was: Chia-Ping Tsai)

> Add unit test to query nonexistent replica by describeReplicaLogDirs
> 
>
> Key: KAFKA-17009
> URL: https://issues.apache.org/jira/browse/KAFKA-17009
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: 黃竣陽
>Priority: Minor
>
> Our docs [0] say that "currentReplicaLogDir will be null if the replica is 
> not found", which means `describeReplicaLogDirs` can accept queries for 
> nonexistent replicas. However, the current UT [1] only verifies that the 
> replica is not found due to a storage error. We should add a UT to verify 
> the nonexistent-replica case.
> [0] 
> https://github.com/apache/kafka/blob/391778b8d737f4af074422ffe61bc494b21e6555/clients/src/main/java/org/apache/kafka/clients/admin/DescribeReplicaLogDirsResult.java#L71
> [1] 
> https://github.com/apache/kafka/blob/trunk/clients/src/test/java/org/apache/kafka/clients/admin/KafkaAdminClientTest.java#L2356



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17010) Remove DescribeLogDirsResponse#ReplicaInfo

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-17010:
--

Assignee: Chia Chuan Yu  (was: Chia-Ping Tsai)

> Remove DescribeLogDirsResponse#ReplicaInfo
> --
>
> Key: KAFKA-17010
> URL: https://issues.apache.org/jira/browse/KAFKA-17010
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia Chuan Yu
>Priority: Minor
> Fix For: 4.0.0
>
>
> It was deprecated by KAFKA-10120 in 2.7
> https://github.com/apache/kafka/blob/64702bcf6f883d266ccffcec458b4c3c0706ad75/clients/src/main/java/org/apache/kafka/common/requests/DescribeLogDirsResponse.java#L113



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16995) The listeners broker parameter incorrect documentation

2024-06-20 Thread dujian0068 (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856636#comment-17856636
 ] 

dujian0068 commented on KAFKA-16995:


If you want to bind the same port on both the IPv4 wildcard address (0.0.0.0)
and the IPv6 wildcard address ([::]),
configure the listener list as `PLAINTEXT://[::]:9092`,
not `PLAINTEXT://0.0.0.0:9092,PLAINTEXT://[::]:9092`.
Tip: this is only needed for wildcard addresses.




Sergey (Jira) wrote on Friday, June 21, 2024 at 05:00:



> The listeners broker parameter incorrect documentation 
> ---
>
> Key: KAFKA-16995
> URL: https://issues.apache.org/jira/browse/KAFKA-16995
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.1
> Environment: Kafka 3.6.1
>Reporter: Sergey
>Assignee: dujian0068
>Priority: Minor
>
> We are using Kafka 3.6.1 and the 
> [KIP-797|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]
>  describes configuring listeners with the same port and name for supporting 
> IPv4/IPv6 dual-stack. 
> Documentation link: 
> [https://kafka.apache.org/36/documentation.html#brokerconfigs_listeners]
> As I understand it, Kafka should allow us to set the listener name and 
> listener port to the same value if we configure dual-stack. 
> But in reality, the broker returns an error if we set the listener name to 
> the same value.
> Error example:
> {code:java}
> java.lang.IllegalArgumentException: requirement failed: Each listener must 
> have a different name, listeners: 
> CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
>         at scala.Predef$.require(Predef.scala:337)
>         at kafka.utils.CoreUtils$.validate$1(CoreUtils.scala:214)
>         at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:268)
>         at kafka.server.KafkaConfig.listeners(KafkaConfig.scala:2120)
>         at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1807)
>         at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1604)
>         at kafka.Kafka$.buildServer(Kafka.scala:72)
>         at kafka.Kafka$.main(Kafka.scala:91)
>         at kafka.Kafka.main(Kafka.scala) {code}
> I've tried to set the listeners to: "SSL://0.0.0.0:9093,SSL://[::]:9093"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15146) Flaky test ConsumerBounceTest.testConsumptionWithBrokerFailures

2024-06-20 Thread Chia Chuan Yu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856634#comment-17856634
 ] 

Chia Chuan Yu commented on KAFKA-15146:
---

Hi [~divijvaidya] 

I'm interested in this one. May I take it? Thanks!

> Flaky test ConsumerBounceTest.testConsumptionWithBrokerFailures
> ---
>
> Key: KAFKA-15146
> URL: https://issues.apache.org/jira/browse/KAFKA-15146
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Reporter: Divij Vaidya
>Priority: Major
>  Labels: flaky-test
>
> Flaky test that fails with the following error. Example build - 
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13953/2] 
> {noformat}
> Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
> ConsumerBounceTest > testConsumptionWithBrokerFailures() FAILED
> org.apache.kafka.clients.consumer.CommitFailedException: Offset commit 
> cannot be completed since the consumer is not part of an active group for 
> auto partition assignment; it is likely that the consumer was kicked out of 
> the group.
> at 
> app//org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1351)
> at 
> app//org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1188)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1518)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1417)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1374)
> at 
> app//kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:109)
> at 
> app//kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:81){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17009) Add unit test to query nonexistent replica by describeReplicaLogDirs

2024-06-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856631#comment-17856631
 ] 

黃竣陽 commented on KAFKA-17009:
-

I'm interested in this issue. Please assign it to me.

> Add unit test to query nonexistent replica by describeReplicaLogDirs
> 
>
> Key: KAFKA-17009
> URL: https://issues.apache.org/jira/browse/KAFKA-17009
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> Our docs [0] say that "currentReplicaLogDir will be null if the replica is 
> not found", which means `describeReplicaLogDirs` can accept queries for 
> nonexistent replicas. However, the current UT [1] only verifies that the 
> replica is not found due to a storage error. We should add a UT to verify 
> the nonexistent-replica case.
> [0] 
> https://github.com/apache/kafka/blob/391778b8d737f4af074422ffe61bc494b21e6555/clients/src/main/java/org/apache/kafka/clients/admin/DescribeReplicaLogDirsResult.java#L71
> [1] 
> https://github.com/apache/kafka/blob/trunk/clients/src/test/java/org/apache/kafka/clients/admin/KafkaAdminClientTest.java#L2356



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17010) Remove DescribeLogDirsResponse#ReplicaInfo

2024-06-20 Thread Chia Chuan Yu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856629#comment-17856629
 ] 

Chia Chuan Yu commented on KAFKA-17010:
---

Hi, [~chia7712] 

Can I have this one please? thanks!

> Remove DescribeLogDirsResponse#ReplicaInfo
> --
>
> Key: KAFKA-17010
> URL: https://issues.apache.org/jira/browse/KAFKA-17010
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 4.0.0
>
>
> It was deprecated by KAFKA-10120 in 2.7
> https://github.com/apache/kafka/blob/64702bcf6f883d266ccffcec458b4c3c0706ad75/clients/src/main/java/org/apache/kafka/common/requests/DescribeLogDirsResponse.java#L113



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-20 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856622#comment-17856622
 ] 

Justine Olshan commented on KAFKA-16990:


new Jira here: https://issues.apache.org/jira/browse/KAFKA-17011

> Unrecognised flag passed to kafka-storage.sh in system test
> ---
>
> Key: KAFKA-16990
> URL: https://issues.apache.org/jira/browse/KAFKA-16990
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.8.0
>Reporter: Gaurav Narula
>Assignee: Justine Olshan
>Priority: Blocker
> Fix For: 3.8.0
>
>
> Running 
> {{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade"
>  bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the 
> following:
> {code:java}
> [INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
> [INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
> 'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
> 'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': 
> '3.1.2', 'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
> [INFO:2024-06-18 09:16:03,151]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  on run 1/1
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Setting up...
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Running...
> [INFO:2024-06-18 09:16:05,999]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Tearing down...
> [INFO:2024-06-18 09:16:12,366]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
> 'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
> '/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
> 'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
> 'ducker10', '_logger':  kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT-2
>  (DEBUG)>, 'os': 'linux', '_ssh_client':  0x85bccc70>, '_sftp_client':  0x85bccdf0>, '_custom_ssh_exception_checks': None}, 
> '/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
> /mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
> group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
> --cluster-id CLUSTER_ID\n                     
> [--ignore-formatted]\nkafka-storage: error: unrecognized arguments: '-f'\n")
> Traceback (most recent call last):
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run
>     data = self.run_test()
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
> 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 132, in test_isolated_mode_upgrade
>     self.run_upgrade(from_kafka_version, group_protocol)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 96, in run_upgrade
>     self.kafka.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 669, in 
> start
>     self.isolated_controller_quorum.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 671, in 
> start
>     Service.start(self, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", 
> line 265, in start
>     self.start_node(node, **kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 902, in 
> start_node
>     node.account.ssh(cmd)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
> line 35, in wrapper
>     return 

[jira] [Created] (KAFKA-17011) SupportedFeatures.MinVersion incorrectly blocks v0

2024-06-20 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-17011:


 Summary: SupportedFeatures.MinVersion incorrectly blocks v0
 Key: KAFKA-17011
 URL: https://issues.apache.org/jira/browse/KAFKA-17011
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.8.0
Reporter: Colin McCabe
Assignee: Colin McCabe


SupportedFeatures.MinVersion incorrectly blocks v0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-20 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856616#comment-17856616
 ] 

Justine Olshan commented on KAFKA-16990:


While fixing this bug I encountered a new one :) 


With this change: 
[https://github.com/apache/kafka/commit/bfe81d622979809325c31d549943c40f6f0f7337],
 we made it possible for the minimum version to be 0. However, when we upgrade, 
we can send to an old node and get this response:
{{java.lang.IllegalArgumentException: Expected minValue >= 1, maxValue >= 1 and 
maxValue >= minValue, but received minValue: 0, maxValue: 0
at org.apache.kafka.common.feature.BaseVersionRange.<init>(BaseVersionRange.java:65)
at org.apache.kafka.common.feature.SupportedVersionRange.<init>(SupportedVersionRange.java:32)
at org.apache.kafka.clients.NodeApiVersions.<init>(NodeApiVersions.java:112)
at org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse(NetworkClient.java:920)
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:888)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:569)
at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74)
at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)}}
I think maybe we should either:
# Not allow the min version to be 0 (and just assume that if a value is not 
set it is zero). I believe this is the approach group.version took.
# Have some gating on the ApiVersions request so we don't send 0 if the 
version is too low (can't support the change).

I can file a new Jira for this if we think it is easier to track, but I can't 
fully test the other tests until it is.
 

> Unrecognised flag passed to kafka-storage.sh in system test
> ---
>
> Key: KAFKA-16990
> URL: https://issues.apache.org/jira/browse/KAFKA-16990
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.8.0
>Reporter: Gaurav Narula
>Assignee: Justine Olshan
>Priority: Blocker
> Fix For: 3.8.0
>
>
> Running 
> {{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade"
>  bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the 
> following:
> {code:java}
> [INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
> [INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
> 'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
> 'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': 
> '3.1.2', 'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
> [INFO:2024-06-18 09:16:03,151]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  on run 1/1
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Setting up...
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Running...
> [INFO:2024-06-18 09:16:05,999]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Tearing down...
> [INFO:2024-06-18 09:16:12,366]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
> 'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
> '/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
> 'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
> 'ducker10', '_logger':  kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT-2
>  (DEBUG)>, 'os': 'linux', '_ssh_client':  0x85bccc70>, '_sftp_client':  0x85bccdf0>, '_custom_ssh_exception_checks': None}, 
> '/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
> /mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
> group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
> --cluster-id CLUSTER_ID\n                     
> [--ignore-forma

[jira] [Comment Edited] (KAFKA-16995) The listeners broker parameter incorrect documentation

2024-06-20 Thread Sergey (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856591#comment-17856591
 ] 

Sergey edited comment on KAFKA-16995 at 6/20/24 8:59 PM:
-

I see the error message, but the documentation says otherwise:
{code:java}
Listener names and port numbers must be unique unless
one listener is an IPv4 address and the other listener is
an IPv6 address (for the same port){code}
as well as examples from KIP 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]

I used the following config:
listeners=CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
and according to the documentation it should be legitimate because the SSL 
listeners have the same name and port, but one uses IPv4 and the other uses 
IPv6.


was (Author: JIRAUSER305864):
I see the error message, but the documentation says otherwise:
{code:java}
Listener names and port numbers must be unique unless
one listener is an IPv4 address and the other listener is
an IPv6 address (for the same port){code}
as well as examples from KIP 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]

I used the following config:
listeners=CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
and according to the documentation it should be legitimate because the SSL 
listeners have the same name and port, but one uses IPv4 and the other uses 
IPv6.

> The listeners broker parameter incorrect documentation 
> ---
>
> Key: KAFKA-16995
> URL: https://issues.apache.org/jira/browse/KAFKA-16995
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.1
> Environment: Kafka 3.6.1
>Reporter: Sergey
>Assignee: dujian0068
>Priority: Minor
>
> We are using Kafka 3.6.1 and the 
> [KIP-797|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]
>  describes configuring listeners with the same port and name for supporting 
> IPv4/IPv6 dual-stack. 
> Documentation link: 
> [https://kafka.apache.org/36/documentation.html#brokerconfigs_listeners]
> As I understand it, Kafka should allow us to set the listener name and 
> listener port to the same value if we configure dual-stack. 
> But in reality, the broker returns an error if we set the listener name to 
> the same value.
> Error example:
> {code:java}
> java.lang.IllegalArgumentException: requirement failed: Each listener must 
> have a different name, listeners: 
> CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
>         at scala.Predef$.require(Predef.scala:337)
>         at kafka.utils.CoreUtils$.validate$1(CoreUtils.scala:214)
>         at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:268)
>         at kafka.server.KafkaConfig.listeners(KafkaConfig.scala:2120)
>         at kafka.server.KafkaConfig.(KafkaConfig.scala:1807)
>         at kafka.server.KafkaConfig.(KafkaConfig.scala:1604)
>         at kafka.Kafka$.buildServer(Kafka.scala:72)
>         at kafka.Kafka$.main(Kafka.scala:91)
>         at kafka.Kafka.main(Kafka.scala) {code}
> I've tried to set the listeners to: "SSL://0.0.0.0:9093,SSL://[::]:9093"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16995) The listeners broker parameter incorrect documentation

2024-06-20 Thread Sergey (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856591#comment-17856591
 ] 

Sergey edited comment on KAFKA-16995 at 6/20/24 8:59 PM:
-

I see the error message, but the documentation says otherwise:
{code:java}
Listener names and port numbers must be unique unless
one listener is an IPv4 address and the other listener is
an IPv6 address (for the same port){code}
as well as examples from KIP 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]

I used the following config:
listeners=CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
and according to the documentation it should be legitimate because the SSL 
listeners have the same name and port, but one uses IPv4 and the other uses 
IPv6.


was (Author: JIRAUSER305864):
I see the error message, but the documentation says otherwise:
{code:java}
Listener names and port numbers must be unique unless
one listener is an IPv4 address and the other listener is
an IPv6 address (for the same port){code}
as well as examples from KIP 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]

I used the following config according to the documentation:
listeners=CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
It should be legitimate because the SSL listeners have the same name and port, 
but one uses IPv4 and the other uses IPv6.

> The listeners broker parameter incorrect documentation 
> ---
>
> Key: KAFKA-16995
> URL: https://issues.apache.org/jira/browse/KAFKA-16995
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.1
> Environment: Kafka 3.6.1
>Reporter: Sergey
>Assignee: dujian0068
>Priority: Minor
>
> We are using Kafka 3.6.1 and the 
> [KIP-797|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]
>  describes configuring listeners with the same port and name for supporting 
> IPv4/IPv6 dual-stack. 
> Documentation link: 
> [https://kafka.apache.org/36/documentation.html#brokerconfigs_listeners]
> As I understand it, Kafka should allow us to set the listener name and 
> listener port to the same value if we configure dual-stack. 
> But in reality, the broker returns an error if we set the listener name to 
> the same value.
> Error example:
> {code:java}
> java.lang.IllegalArgumentException: requirement failed: Each listener must 
> have a different name, listeners: 
> CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
>         at scala.Predef$.require(Predef.scala:337)
>         at kafka.utils.CoreUtils$.validate$1(CoreUtils.scala:214)
>         at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:268)
>         at kafka.server.KafkaConfig.listeners(KafkaConfig.scala:2120)
>         at kafka.server.KafkaConfig.(KafkaConfig.scala:1807)
>         at kafka.server.KafkaConfig.(KafkaConfig.scala:1604)
>         at kafka.Kafka$.buildServer(Kafka.scala:72)
>         at kafka.Kafka$.main(Kafka.scala:91)
>         at kafka.Kafka.main(Kafka.scala) {code}
> I've tried to set the listeners to: "SSL://0.0.0.0:9093,SSL://[::]:9093"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17010) Remove DescribeLogDirsResponse#ReplicaInfo

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-17010:
---
Description: 
It was deprecated by KAFKA-10120 in 2.7

https://github.com/apache/kafka/blob/64702bcf6f883d266ccffcec458b4c3c0706ad75/clients/src/main/java/org/apache/kafka/common/requests/DescribeLogDirsResponse.java#L113

  was:It was deprecated by KAFKA-10120 in 2.7


> Remove DescribeLogDirsResponse#ReplicaInfo
> --
>
> Key: KAFKA-17010
> URL: https://issues.apache.org/jira/browse/KAFKA-17010
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 4.0.0
>
>
> It was deprecated by KAFKA-10120 in 2.7
> https://github.com/apache/kafka/blob/64702bcf6f883d266ccffcec458b4c3c0706ad75/clients/src/main/java/org/apache/kafka/common/requests/DescribeLogDirsResponse.java#L113



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17010) Remove ReplicaInfo

2024-06-20 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17010:
--

 Summary: Remove ReplicaInfo
 Key: KAFKA-17010
 URL: https://issues.apache.org/jira/browse/KAFKA-17010
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
 Fix For: 4.0.0


It was deprecated by KAFKA-10120 in 2.7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17010) Remove DescribeLogDirsResponse#ReplicaInfo

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-17010:
---
Summary: Remove DescribeLogDirsResponse#ReplicaInfo  (was: Remove 
ReplicaInfo)

> Remove DescribeLogDirsResponse#ReplicaInfo
> --
>
> Key: KAFKA-17010
> URL: https://issues.apache.org/jira/browse/KAFKA-17010
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 4.0.0
>
>
> It was deprecated by KAFKA-10120 in 2.7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17009) Add unit test to query nonexistent replica by describeReplicaLogDirs

2024-06-20 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17009:
--

 Summary: Add unit test to query nonexistent replica by 
describeReplicaLogDirs
 Key: KAFKA-17009
 URL: https://issues.apache.org/jira/browse/KAFKA-17009
 Project: Kafka
  Issue Type: Test
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Our docs [0] say that "currentReplicaLogDir will be null if the replica is not 
found", which means `describeReplicaLogDirs` can accept queries for 
nonexistent replicas. However, the current UT [1] only verifies that the 
replica is not found due to a storage error. We should add a UT to verify the 
behavior for a nonexistent replica.

[0] 
https://github.com/apache/kafka/blob/391778b8d737f4af074422ffe61bc494b21e6555/clients/src/main/java/org/apache/kafka/clients/admin/DescribeReplicaLogDirsResult.java#L71
[1] 
https://github.com/apache/kafka/blob/trunk/clients/src/test/java/org/apache/kafka/clients/admin/KafkaAdminClientTest.java#L2356
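A minimal sketch of the documented contract, using hypothetical names rather than the real admin client API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the real AdminClient API) of the documented
// contract: querying a nonexistent replica yields a null currentReplicaLogDir
// instead of an error.
class ReplicaLogDirInfo {
    final String currentReplicaLogDir; // null when the replica is not found
    ReplicaLogDirInfo(String dir) { this.currentReplicaLogDir = dir; }
}

class LogDirLookup {
    private final Map<String, String> replicaToDir = new HashMap<>();

    void addReplica(String replica, String dir) {
        replicaToDir.put(replica, dir);
    }

    ReplicaLogDirInfo describe(String replica) {
        // Unknown replicas simply map to a null log dir.
        return new ReplicaLogDirInfo(replicaToDir.get(replica));
    }
}
```

The missing UT would exercise the second case: a query for a replica the broker has never seen, as opposed to one lost to a storage error.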



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-20 Thread Arushi Helms (Jira)
Arushi Helms created KAFKA-17008:


 Summary: Update zookeeper to 3.8.4 or 3.9.2 to address 
CVE-2024-23944
 Key: KAFKA-17008
 URL: https://issues.apache.org/jira/browse/KAFKA-17008
 Project: Kafka
  Issue Type: Bug
Reporter: Arushi Helms


Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
I could not find an existing ticket for this; if there is one, please mark 
this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16987) Release connector resources when entering FAILED state

2024-06-20 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris updated KAFKA-16987:

Description: 
Connectors will enter the FAILED state if an unexpected exception appears. 
Connectors may also have resources associated with them that must be explicitly 
cleaned up by calling stop().

Currently, this cleanup only takes place when the connectors are requested to 
shut down and the state transitions to UNASSIGNED. This cleanup can be done 
earlier, so that resources are not held while the connector or task is in the 
FAILED state.

  was:
Connectors and Tasks will enter the FAILED state if an unexpected exception 
appears. Connectors and tasks also have many resources associated with them 
that must be explicitly cleaned up by calling close().

Currently, this cleanup only takes place when the connectors are requested to 
shut down and the state transitions to UNASSIGNED. This cleanup can be done 
earlier, so that resources are not held while the connector or task is in the 
FAILED state.


> Release connector resources when entering FAILED state
> --
>
> Key: KAFKA-16987
> URL: https://issues.apache.org/jira/browse/KAFKA-16987
> Project: Kafka
>  Issue Type: Improvement
>  Components: connect
>Reporter: Greg Harris
>Priority: Minor
>
> Connectors will enter the FAILED state if an unexpected exception appears. 
> Connectors may also have resources associated with them that must be 
> explicitly cleaned up by calling stop().
> Currently, this cleanup only takes place when the connectors are requested to 
> shut down and the state transitions to UNASSIGNED. This cleanup can be done 
> earlier, so that resources are not held while the connector or task is in the 
> FAILED state.
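The proposed lifecycle change can be sketched roughly as follows. The class names are hypothetical; the real Connect runtime is considerably more involved:

```java
// Hedged sketch of the idea above: release resources when entering FAILED,
// instead of waiting for the transition to UNASSIGNED at shutdown.
enum ConnectorState { RUNNING, FAILED, UNASSIGNED }

class ConnectorRunner {
    ConnectorState state = ConnectorState.RUNNING;
    boolean resourcesReleased = false;

    void onFailure(Throwable cause) {
        state = ConnectorState.FAILED;
        stop(); // eager cleanup while FAILED, the change proposed above
    }

    void shutdown() {
        stop(); // previously the only place cleanup happened
        state = ConnectorState.UNASSIGNED;
    }

    void stop() {
        if (!resourcesReleased) {
            resourcesReleased = true; // e.g. close clients, threads, buffers
        }
    }
}
```

The idempotent stop() means the later shutdown path still works unchanged after an eager cleanup.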



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16987) Release connector resources when entering FAILED state

2024-06-20 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris updated KAFKA-16987:

Summary: Release connector resources when entering FAILED state  (was: 
Release connector & task resources when entering FAILED state)

> Release connector resources when entering FAILED state
> --
>
> Key: KAFKA-16987
> URL: https://issues.apache.org/jira/browse/KAFKA-16987
> Project: Kafka
>  Issue Type: Improvement
>  Components: connect
>Reporter: Greg Harris
>Priority: Minor
>
> Connectors and Tasks will enter the FAILED state if an unexpected exception 
> appears. Connectors and tasks also have many resources associated with them 
> that must be explicitly cleaned up by calling close().
> Currently, this cleanup only takes place when the connectors are requested to 
> shut down and the state transitions to UNASSIGNED. This cleanup can be done 
> earlier, so that resources are not held while the connector or task is in the 
> FAILED state.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16995) The listeners broker parameter incorrect documentation

2024-06-20 Thread Sergey (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856591#comment-17856591
 ] 

Sergey commented on KAFKA-16995:


I see the error message, but the documentation says otherwise:
{code:java}
Listener names and port numbers must be unique unless
one listener is an IPv4 address and the other listener is
an IPv6 address (for the same port){code}
as well as examples from KIP 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]

I used the following config according to the documentation:
listeners=CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
It should be legitimate because the SSL listeners have the same name and port, 
but one uses IPv4 and the other uses IPv6.
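The check that produces the reported error can be sketched as follows. This is an illustrative reimplementation, not the actual CoreUtils code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hedged sketch of the name-uniqueness validation behind the reported error:
// it rejects two listeners with the same name even when one endpoint is IPv4
// and the other IPv6, which is what contradicts the documented dual-stack
// behavior.
class ListenerValidation {
    static void requireDistinctNames(String listeners) {
        List<String> names = Arrays.stream(listeners.split(","))
            .map(listener -> listener.substring(0, listener.indexOf("://")))
            .collect(Collectors.toList());
        if (names.stream().distinct().count() != names.size()) {
            throw new IllegalArgumentException(
                "Each listener must have a different name, listeners: " + listeners);
        }
    }
}
```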

> The listeners broker parameter incorrect documentation 
> ---
>
> Key: KAFKA-16995
> URL: https://issues.apache.org/jira/browse/KAFKA-16995
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.1
> Environment: Kafka 3.6.1
>Reporter: Sergey
>Assignee: dujian0068
>Priority: Minor
>
> We are using Kafka 3.6.1 and the 
> [KIP-797|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]
>  describes configuring listeners with the same port and name for supporting 
> IPv4/IPv6 dual-stack. 
> Documentation link: 
> [https://kafka.apache.org/36/documentation.html#brokerconfigs_listeners]
> As I understand it, Kafka should allow us to set the listener name and 
> listener port to the same value if we configure dual-stack. 
> But in reality, the broker returns an error if we set the listener name to 
> the same value.
> Error example:
> {code:java}
> java.lang.IllegalArgumentException: requirement failed: Each listener must 
> have a different name, listeners: 
> CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
>         at scala.Predef$.require(Predef.scala:337)
>         at kafka.utils.CoreUtils$.validate$1(CoreUtils.scala:214)
>         at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:268)
>         at kafka.server.KafkaConfig.listeners(KafkaConfig.scala:2120)
>         at kafka.server.KafkaConfig.(KafkaConfig.scala:1807)
>         at kafka.server.KafkaConfig.(KafkaConfig.scala:1604)
>         at kafka.Kafka$.buildServer(Kafka.scala:72)
>         at kafka.Kafka$.main(Kafka.scala:91)
>         at kafka.Kafka.main(Kafka.scala) {code}
> I've tried to set the listeners to: "SSL://0.0.0.0:9093,SSL://[::]:9093"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-4230) HistogramSample needs override Sample's reset method

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-4230.
---
Resolution: Duplicate

It was fixed by KAFKA-5900.

> HistogramSample needs override Sample's reset method
> 
>
> Key: KAFKA-4230
> URL: https://issues.apache.org/jira/browse/KAFKA-4230
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Xing Huang
>Priority: Minor
>
> HistogramSample should reset its inner Histogram, so it should override its 
> parent's reset method.
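The override described above might look like this; Sample and HistogramSample here are simplified stand-ins for the real metrics classes:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the real metrics classes, sketching the fix:
// the subclass must call super.reset() and also clear its own state.
class Sample {
    protected long eventCount;
    public void reset() { eventCount = 0; }
}

class HistogramSample extends Sample {
    private final List<Double> values = new ArrayList<>();

    public void record(double value) {
        eventCount++;
        values.add(value);
    }

    public int valueCount() { return values.size(); }

    @Override
    public void reset() {
        super.reset();   // reset the parent's counters
        values.clear();  // and the histogram state the parent knows nothing about
    }
}
```

Without the override, reset() would zero eventCount but leave the recorded values behind, skewing later measurements.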



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-20 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856563#comment-17856563
 ] 

Josep Prat commented on KAFKA-16990:


Thanks [~jolshan] 

> Unrecognised flag passed to kafka-storage.sh in system test
> ---
>
> Key: KAFKA-16990
> URL: https://issues.apache.org/jira/browse/KAFKA-16990
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.8.0
>Reporter: Gaurav Narula
>Assignee: Justine Olshan
>Priority: Blocker
> Fix For: 3.8.0
>
>
> Running 
> {{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade"
>  bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the 
> following:
> {code:java}
> [INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
> [INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
> 'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
> 'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': 
> '3.1.2', 'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
> [INFO:2024-06-18 09:16:03,151]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  on run 1/1
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Setting up...
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Running...
> [INFO:2024-06-18 09:16:05,999]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Tearing down...
> [INFO:2024-06-18 09:16:12,366]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
> 'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
> '/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
> 'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
> 'ducker10', '_logger':  kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT-2
>  (DEBUG)>, 'os': 'linux', '_ssh_client':  0x85bccc70>, '_sftp_client':  0x85bccdf0>, '_custom_ssh_exception_checks': None}, 
> '/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
> /mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
> group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
> --cluster-id CLUSTER_ID\n                     
> [--ignore-formatted]\nkafka-storage: error: unrecognized arguments: '-f'\n")
> Traceback (most recent call last):
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run
>     data = self.run_test()
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
> 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 132, in test_isolated_mode_upgrade
>     self.run_upgrade(from_kafka_version, group_protocol)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 96, in run_upgrade
>     self.kafka.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 669, in 
> start
>     self.isolated_controller_quorum.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 671, in 
> start
>     Service.start(self, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", 
> line 265, in start
>     self.start_node(node, **kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 902, in 
> start_node
>     node.account.ssh(cmd)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
> line 35, in wrapper
>     return method(self, *args, **kwargs)
>   File 
> "/usr/local/

[jira] [Assigned] (KAFKA-16998) Fix warnings in our Github actions

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16998:
--

Assignee: Kuan Po Tseng

> Fix warnings in our Github actions
> --
>
> Key: KAFKA-16998
> URL: https://issues.apache.org/jira/browse/KAFKA-16998
> Project: Kafka
>  Issue Type: Task
>  Components: build
>Reporter: Mickael Maison
>Assignee: Kuan Po Tseng
>Priority: Major
>
> Most of our Github actions produce warnings, see 
> [https://github.com/apache/kafka/actions/runs/9572915509] for example.
> It looks like we need to bump the version we use for actions/checkout, 
> actions/setup-python, actions/upload-artifact to v4.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15411) DelegationTokenEndToEndAuthorizationWithOwnerTest is Flaky

2024-06-20 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur updated KAFKA-15411:
-
Fix Version/s: (was: 3.9.0)

> DelegationTokenEndToEndAuthorizationWithOwnerTest is Flaky 
> ---
>
> Key: KAFKA-15411
> URL: https://issues.apache.org/jira/browse/KAFKA-15411
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Major
>  Labels: flaky-test
>
> DelegationTokenEndToEndAuthorizationWithOwnerTest has become flaky since the 
> merge of delegation token support for KRaft (PR - 
> https://github.com/apache/kafka/pull/14083).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16998) Fix warnings in our Github actions

2024-06-20 Thread Kuan Po Tseng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856552#comment-17856552
 ] 

Kuan Po Tseng commented on KAFKA-16998:
---

I can help on this one! :)

> Fix warnings in our Github actions
> --
>
> Key: KAFKA-16998
> URL: https://issues.apache.org/jira/browse/KAFKA-16998
> Project: Kafka
>  Issue Type: Task
>  Components: build
>Reporter: Mickael Maison
>Priority: Major
>
> Most of our Github actions produce warnings, see 
> [https://github.com/apache/kafka/actions/runs/9572915509] for example.
> It looks like we need to bump the version we use for actions/checkout, 
> actions/setup-python, actions/upload-artifact to v4.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17007) Fix SourceAndTarget#equal

2024-06-20 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-17007:
--

Assignee: PoAn Yang  (was: Chia-Ping Tsai)

> Fix SourceAndTarget#equal
> -
>
> Key: KAFKA-17007
> URL: https://issues.apache.org/jira/browse/KAFKA-17007
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Minor
>
> In reviewing https://github.com/apache/kafka/pull/16404 I noticed that 
> SourceAndTarget is a public class. Hence, we should fix its `equals` method, 
> which does not check the class type [0].
> [0] 
> https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/SourceAndTarget.java#L49
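A sketch of an `equals` that does check the class type. The field names are assumptions based on the class name, not copied from the Kafka source:

```java
import java.util.Objects;

// Hedged sketch of SourceAndTarget#equals with a proper class-type check.
// Field names are assumed from the class name, not copied from Kafka.
final class SourceAndTarget {
    private final String source;
    private final String target;

    SourceAndTarget(String source, String target) {
        this.source = source;
        this.target = target;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        // Without this check, equals could match objects of unrelated types.
        if (other == null || getClass() != other.getClass()) return false;
        SourceAndTarget that = (SourceAndTarget) other;
        return Objects.equals(source, that.source) && Objects.equals(target, that.target);
    }

    @Override
    public int hashCode() {
        return Objects.hash(source, target);
    }
}
```

The getClass() comparison is what the linked line is missing: an equals that only compares fields can report equality against an arbitrary object.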



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-3346) Rename "Mode" to "SslMode"

2024-06-20 Thread Ksolves (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ksolves reassigned KAFKA-3346:
--

Assignee: Ksolves

> Rename "Mode" to "SslMode"
> --
>
> Key: KAFKA-3346
> URL: https://issues.apache.org/jira/browse/KAFKA-3346
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Ksolves
>Priority: Major
>
> In the channel builders, the Mode enum is undocumented, so it is unclear that 
> it is used to signify whether the connection is for SSL client or SSL server.
> I suggest renaming to SslMode (although adding documentation will be ok too, 
> if people object to the rename)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-20 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16990:

Fix Version/s: 3.8.0

> Unrecognised flag passed to kafka-storage.sh in system test
> ---
>
> Key: KAFKA-16990
> URL: https://issues.apache.org/jira/browse/KAFKA-16990
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.8.0
>Reporter: Gaurav Narula
>Assignee: Justine Olshan
>Priority: Blocker
> Fix For: 3.8.0
>
>
> Running 
> {{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade"
>  bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the 
> following:
> {code:java}
> [INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
> [INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
> 'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
> 'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': 
> '3.1.2', 'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
> [INFO:2024-06-18 09:16:03,151]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  on run 1/1
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Setting up...
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Running...
> [INFO:2024-06-18 09:16:05,999]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Tearing down...
> [INFO:2024-06-18 09:16:12,366]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
> 'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
> '/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
> 'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
> 'ducker10', '_logger':  kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT-2
>  (DEBUG)>, 'os': 'linux', '_ssh_client':  0x85bccc70>, '_sftp_client':  0x85bccdf0>, '_custom_ssh_exception_checks': None}, 
> '/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
> /mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
> group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
> --cluster-id CLUSTER_ID\n                     
> [--ignore-formatted]\nkafka-storage: error: unrecognized arguments: '-f'\n")
> Traceback (most recent call last):
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run
>     data = self.run_test()
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
> 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 132, in test_isolated_mode_upgrade
>     self.run_upgrade(from_kafka_version, group_protocol)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 96, in run_upgrade
>     self.kafka.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 669, in 
> start
>     self.isolated_controller_quorum.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 671, in 
> start
>     Service.start(self, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", 
> line 265, in start
>     self.start_node(node, **kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 902, in 
> start_node
>     node.account.ssh(cmd)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
> line 35, in wrapper
>     return method(self, *args, **kwargs)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remot

[jira] [Commented] (KAFKA-17007) Fix SourceAndTarget#equal

2024-06-20 Thread PoAn Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856541#comment-17856541
 ] 

PoAn Yang commented on KAFKA-17007:
---

Hi [~chia7712], I'm interested in this. If you're not working on it, may I take 
it? Thank you.

> Fix SourceAndTarget#equal
> -
>
> Key: KAFKA-17007
> URL: https://issues.apache.org/jira/browse/KAFKA-17007
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> In reviewing https://github.com/apache/kafka/pull/16404 I noticed that 
> SourceAndTarget is part of the public API. Hence, we should fix `equals` so 
> that it checks the class type [0].
> [0] 
> https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/SourceAndTarget.java#L49



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16906) Add consistent error handling for Transactions

2024-06-20 Thread Kaushik Raina (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kaushik Raina updated KAFKA-16906:
--
Description: 
Apache Kafka supports a variety of client SDKs for different programming 
languages.
We want to group error handling into 5 types, which will help keep error 
handling consistent across all client SDKs and Producer APIs.

KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1050%3A+Consistent+error+handling+for+Transactions

  was:
Apache Kafka supports a variety of client SDKs for different programming 
languages.
We want to group error handling into 5 types, which will help keep error 
handling consistent across all client SDKs and Producer APIs.


> Add consistent error handling for Transactions
> --
>
> Key: KAFKA-16906
> URL: https://issues.apache.org/jira/browse/KAFKA-16906
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, producer, streams
>Reporter: Kaushik Raina
>Priority: Major
>
> Apache Kafka supports a variety of client SDKs for different programming 
> languages.
> We want to group error handling into 5 types, which will help keep error 
> handling consistent across all client SDKs and Producer APIs.
> KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1050%3A+Consistent+error+handling+for+Transactions





[jira] [Commented] (KAFKA-16906) Add consistent error handling for Transactions

2024-06-20 Thread Kaushik Raina (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856536#comment-17856536
 ] 

Kaushik Raina commented on KAFKA-16906:
---

[~ksolves.kafka] These 5 types are defined in the detailed KIP section: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-1050%3A+Consistent+error+handling+for+Transactions#KIP1050:ConsistenterrorhandlingforTransactions-ProposedChanges]

where *+Producer-Retriable+* is divided into 2 subtypes.

> Add consistent error handling for Transactions
> --
>
> Key: KAFKA-16906
> URL: https://issues.apache.org/jira/browse/KAFKA-16906
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, producer, streams
>Reporter: Kaushik Raina
>Priority: Major
>
> Apache Kafka supports a variety of client SDKs for different programming 
> languages.
> We want to group error handling into 5 types, which will help keep error 
> handling consistent across all client SDKs and Producer APIs.





[jira] [Created] (KAFKA-17007) Fix SourceAndTarget#equal

2024-06-20 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17007:
--

 Summary: Fix SourceAndTarget#equal
 Key: KAFKA-17007
 URL: https://issues.apache.org/jira/browse/KAFKA-17007
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


In reviewing https://github.com/apache/kafka/pull/16404 I noticed that 
SourceAndTarget is part of the public API. Hence, we should fix `equals` so 
that it checks the class type [0].


[0] 
https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/SourceAndTarget.java#L49
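A minimal sketch of the suggested fix: an `equals` that guards on the runtime class before casting. The field names and constructor below are assumptions for illustration, not the actual `org.apache.kafka.connect.mirror.SourceAndTarget` code.

```java
import java.util.Objects;

// Illustrative sketch only: field names and constructor are assumptions,
// not the actual Kafka SourceAndTarget implementation.
final class SourceAndTarget {
    private final String source;
    private final String target;

    SourceAndTarget(String source, String target) {
        this.source = source;
        this.target = target;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other)
            return true;
        // Check the class type first, so comparing against null or an
        // unrelated type returns false instead of throwing ClassCastException.
        if (other == null || getClass() != other.getClass())
            return false;
        SourceAndTarget that = (SourceAndTarget) other;
        return Objects.equals(source, that.source) && Objects.equals(target, that.target);
    }

    @Override
    public int hashCode() {
        return Objects.hash(source, target);
    }
}
```

An `instanceof` check would also satisfy the contract; comparing `getClass()` is the stricter choice when subclasses should never compare equal.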





[jira] [Updated] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-20 Thread Justine Olshan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justine Olshan updated KAFKA-16990:
---
Priority: Blocker  (was: Major)

> Unrecognised flag passed to kafka-storage.sh in system test
> ---
>
> Key: KAFKA-16990
> URL: https://issues.apache.org/jira/browse/KAFKA-16990
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.8.0
>Reporter: Gaurav Narula
>Assignee: Justine Olshan
>Priority: Blocker
>
> Running 
> {{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade"
>  bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the 
> following:
> {code:java}
> [INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
> [INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
> 'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
> 'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': 
> '3.1.2', 'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
> [INFO:2024-06-18 09:16:03,151]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  on run 1/1
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Setting up...
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Running...
> [INFO:2024-06-18 09:16:05,999]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Tearing down...
> [INFO:2024-06-18 09:16:12,366]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
> 'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
> '/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
> 'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
> 'ducker10', '_logger':  kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT-2
>  (DEBUG)>, 'os': 'linux', '_ssh_client':  0x85bccc70>, '_sftp_client':  0x85bccdf0>, '_custom_ssh_exception_checks': None}, 
> '/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
> /mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
> group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
> --cluster-id CLUSTER_ID\n                     
> [--ignore-formatted]\nkafka-storage: error: unrecognized arguments: '-f'\n")
> Traceback (most recent call last):
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run
>     data = self.run_test()
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
> 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 132, in test_isolated_mode_upgrade
>     self.run_upgrade(from_kafka_version, group_protocol)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 96, in run_upgrade
>     self.kafka.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 669, in 
> start
>     self.isolated_controller_quorum.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 671, in 
> start
>     Service.start(self, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", 
> line 265, in start
>     self.start_node(node, **kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 902, in 
> start_node
>     node.account.ssh(cmd)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
> line 35, in wrapper
>     return method(self, *args, **kwargs)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 

[jira] [Updated] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-20 Thread Justine Olshan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justine Olshan updated KAFKA-16990:
---
Affects Version/s: 3.8.0

> Unrecognised flag passed to kafka-storage.sh in system test
> ---
>
> Key: KAFKA-16990
> URL: https://issues.apache.org/jira/browse/KAFKA-16990
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.8.0
>Reporter: Gaurav Narula
>Assignee: Justine Olshan
>Priority: Major
>
> Running 
> {{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade"
>  bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the 
> following:
> {code:java}
> [INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
> [INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
> 'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
> 'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': 
> '3.1.2', 'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
> [INFO:2024-06-18 09:16:03,151]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  on run 1/1
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Setting up...
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Running...
> [INFO:2024-06-18 09:16:05,999]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Tearing down...
> [INFO:2024-06-18 09:16:12,366]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
> 'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
> '/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
> 'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
> 'ducker10', '_logger':  kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT-2
>  (DEBUG)>, 'os': 'linux', '_ssh_client':  0x85bccc70>, '_sftp_client':  0x85bccdf0>, '_custom_ssh_exception_checks': None}, 
> '/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
> /mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
> group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
> --cluster-id CLUSTER_ID\n                     
> [--ignore-formatted]\nkafka-storage: error: unrecognized arguments: '-f'\n")
> Traceback (most recent call last):
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run
>     data = self.run_test()
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
> 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 132, in test_isolated_mode_upgrade
>     self.run_upgrade(from_kafka_version, group_protocol)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 96, in run_upgrade
>     self.kafka.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 669, in 
> start
>     self.isolated_controller_quorum.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 671, in 
> start
>     Service.start(self, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", 
> line 265, in start
>     self.start_node(node, **kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 902, in 
> start_node
>     node.account.ssh(cmd)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
> line 35, in wrapper
>     return method(self, *args, **kwargs)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
> line 3

[jira] [Commented] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-20 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856535#comment-17856535
 ] 

Justine Olshan commented on KAFKA-16990:


[~jlprat] I believe this is a 3.8 blocker since the tests will fail. [~dajac] 
and I will take a look and fix as soon as we can.

> Unrecognised flag passed to kafka-storage.sh in system test
> ---
>
> Key: KAFKA-16990
> URL: https://issues.apache.org/jira/browse/KAFKA-16990
> Project: Kafka
>  Issue Type: Test
>Reporter: Gaurav Narula
>Assignee: Justine Olshan
>Priority: Major
>
> Running 
> {{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade"
>  bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the 
> following:
> {code:java}
> [INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
> [INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
> 'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
> 'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': 
> '3.1.2', 'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
> [INFO:2024-06-18 09:16:03,151]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  on run 1/1
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Setting up...
> [INFO:2024-06-18 09:16:03,153]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Running...
> [INFO:2024-06-18 09:16:05,999]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  Tearing down...
> [INFO:2024-06-18 09:16:12,366]: RunnerClient: 
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
>  FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
> 'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
> '/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
> 'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
> 'ducker10', '_logger':  kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT-2
>  (DEBUG)>, 'os': 'linux', '_ssh_client':  0x85bccc70>, '_sftp_client':  0x85bccdf0>, '_custom_ssh_exception_checks': None}, 
> '/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
> /mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
> group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
> --cluster-id CLUSTER_ID\n                     
> [--ignore-formatted]\nkafka-storage: error: unrecognized arguments: '-f'\n")
> Traceback (most recent call last):
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run
>     data = self.run_test()
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
> 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 132, in test_isolated_mode_upgrade
>     self.run_upgrade(from_kafka_version, group_protocol)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", 
> line 96, in run_upgrade
>     self.kafka.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 669, in 
> start
>     self.isolated_controller_quorum.start()
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 671, in 
> start
>     Service.start(self, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", 
> line 265, in start
>     self.start_node(node, **kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 902, in 
> start_node
>     node.account.ssh(cmd)
>   File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
> line 35, in wrapper
>     retur

[jira] [Commented] (KAFKA-16943) Synchronously verify Connect worker startup failure in InternalTopicsIntegrationTest

2024-06-20 Thread Chris Egerton (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856531#comment-17856531
 ] 

Chris Egerton commented on KAFKA-16943:
---

[~ksolves.kafka] the idea here isn't to assert that no workers have started 
after a given timeout; it's to assert that one or more workers have attempted, 
failed, and aborted startup. We don't want to just wait for 30 seconds, see 
that no workers have started up, and then call that good enough. Startup 
may take longer than 30 seconds on our CI infrastructure (which can be pretty 
slow), and if startup does fail before the 30 seconds are up, a fixed wait 
still forces us to wait the full time, adding bloat to test runtime.
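The approach described above can be sketched as condition polling with a deadline: the test returns as soon as the terminal condition (e.g. "a worker attempted, failed, and aborted startup") holds, and only errors out once the deadline passes. This standalone helper is illustrative; Kafka's own test utilities provide similar wait methods, and the condition supplier would come from the test.

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch: poll a terminal condition instead of sleeping for a
// fixed 30 seconds. Returns as soon as the condition holds; fails only
// after the full timeout has elapsed.
final class WaitUtil {
    static void waitForCondition(BooleanSupplier condition, long timeoutMs, String message) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(message); // condition never held within the timeout
            }
            try {
                Thread.sleep(100); // short poll interval; no fixed-length wait on success
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new AssertionError("interrupted while waiting", e);
            }
        }
    }
}
```

This avoids both failure modes from the comment: a slow CI run gets the full timeout to fail startup, and a fast failure does not force the test to sit out the remainder of the wait.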

> Synchronously verify Connect worker startup failure in 
> InternalTopicsIntegrationTest
> 
>
> Key: KAFKA-16943
> URL: https://issues.apache.org/jira/browse/KAFKA-16943
> Project: Kafka
>  Issue Type: Improvement
>  Components: connect
>Reporter: Chris Egerton
>Priority: Minor
>  Labels: newbie
> Attachments: code-diff.png
>
>
> Created after PR discussion 
> [here|https://github.com/apache/kafka/pull/16288#discussion_r1636615220].
> In some of our integration tests, we want to verify that a Connect worker 
> cannot start under poor conditions (such as when its internal topics do not 
> yet exist and it is configured to create them with a higher replication 
> factor than the number of available brokers, or when its internal topics 
> already exist but they do not have the compaction cleanup policy).
> This is currently not possible, and presents a possible gap in testing 
> coverage, especially for the test cases 
> {{testFailToCreateInternalTopicsWithMoreReplicasThanBrokers}} and 
> {{{}testFailToStartWhenInternalTopicsAreNotCompacted{}}}. It'd be nice if we 
> could have some way of synchronously awaiting the completion or failure of 
> worker startup in our integration tests in order to guarantee that worker 
> startup fails under sufficiently adverse conditions.





[jira] [Updated] (KAFKA-17006) Kafka Metrics showing type as "Untyped"

2024-06-20 Thread Dharani (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dharani updated KAFKA-17006:

Description: 
Hi,

Most of the Kafka metric types are showing as 'untyped'.

Some of the metrics are shown below.
{code:java}
HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for management 
kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
java.lang:name=CodeHeap 'non-profiled 
nmethods',type=MemoryPool,attribute=UsageThreshold
TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
number of successful re-authentication of connections per second 
kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate untyped
{code}
But the supported Prometheus metric types are Counter | Gauge | Histogram | 
Summary as per [https://prometheus.io/docs/concepts/metric_types/]

Is it possible to change type of metrics from "untyped" to the Prometheus 
supported types?

  was:
Hi,

Most of the Kafka metric types are showing as 'untyped'.

Some of the metrics are shown below.
 # HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for 
management 
kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
 # TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
 # HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
java.lang:name=CodeHeap 'non-profiled 
nmethods',type=MemoryPool,attribute=UsageThreshold
 # TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
 # HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
number of successful re-authentication of connections per second 
kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
 # TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
untyped


But the supported Prometheus metric types are Counter | Gauge | Histogram | 
Summary as per [https://prometheus.io/docs/concepts/metric_types/]

Is it possible to change type of metrics from "untyped" to the Prometheus 
supported types?


> Kafka Metrics showing type as "Untyped"
> ---
>
> Key: KAFKA-17006
> URL: https://issues.apache.org/jira/browse/KAFKA-17006
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dharani
>Priority: Major
> Attachments: metrics.txt
>
>
> Hi,
> Most of the Kafka metric types are showing as 'untyped'.
> Some of the metrics are shown below.
> {code:java}
> HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for management 
> kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
> TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
> HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
> java.lang:name=CodeHeap 'non-profiled 
> nmethods',type=MemoryPool,attribute=UsageThreshold
> TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
> HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
> number of successful re-authentication of connections per second 
> kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
> TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
> untyped
> {code}
> But the supported Prometheus metric types are Counter | Gauge | Histogram | 
> Summary as per [https://prometheus.io/docs/concepts/metric_types/]
> Is it possible to change type of metrics from "untyped" to the Prometheus 
> supported types?





[jira] [Updated] (KAFKA-17006) Kafka Metrics showing type as "Untyped"

2024-06-20 Thread Dharani (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dharani updated KAFKA-17006:

Description: 
Hi,

Most of the Kafka metric types are showing as 'untyped'.

Some of the metrics are shown below.
 # HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for 
management 
kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
 # TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
 # HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
java.lang:name=CodeHeap 'non-profiled 
nmethods',type=MemoryPool,attribute=UsageThreshold
 # TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
 # HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
number of successful re-authentication of connections per second 
kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
 # TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
untyped


But the supported Prometheus metric types are Counter | Gauge | Histogram | 
Summary as per [https://prometheus.io/docs/concepts/metric_types/]

Is it possible to change type of metrics from "untyped" to the Prometheus 
supported types?

  was:
Hi,

Most of the Kafka metric types are showing as 'untyped'.

Some of the metrics are shown below.
 # HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for 
management 
kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
 # TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
 # HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
java.lang:name=CodeHeap 'non-profiled 
nmethods',type=MemoryPool,attribute=UsageThreshold
 # TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
 # HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
number of successful re-authentication of connections per second 
kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
 # TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
untyped
But the supported Prometheus metric types are Counter | Gauge | Histogram | 
Summary as per [https://prometheus.io/docs/concepts/metric_types/]


Is it possible to change type of metrics from "untyped" to the Prometheus 
supported types?


> Kafka Metrics showing type as "Untyped"
> ---
>
> Key: KAFKA-17006
> URL: https://issues.apache.org/jira/browse/KAFKA-17006
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dharani
>Priority: Major
> Attachments: metrics.txt
>
>
> Hi,
> Most of the Kafka metric types are showing as 'untyped'.
> Some of the metrics are shown below.
>  # HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for 
> management 
> kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
>  # TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
>  # HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
> java.lang:name=CodeHeap 'non-profiled 
> nmethods',type=MemoryPool,attribute=UsageThreshold
>  # TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
>  # HELP kafka_server_socket_server_metrics_successful_reauthentication_rate 
> The number of successful re-authentication of connections per second 
> kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
>  # TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
> untyped
> But the supported Prometheus metric types are Counter | Gauge | Histogram | 
> Summary as per [https://prometheus.io/docs/concepts/metric_types/]
> Is it possible to change type of metrics from "untyped" to the Prometheus 
> supported types?





[jira] [Assigned] (KAFKA-14460) In-memory store iterators can return results with null values

2024-06-20 Thread Ayoub Omari (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayoub Omari reassigned KAFKA-14460:
---

Assignee: Ayoub Omari

> In-memory store iterators can return results with null values
> -
>
> Key: KAFKA-14460
> URL: https://issues.apache.org/jira/browse/KAFKA-14460
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Ayoub Omari
>Priority: Major
>
> Due to the thread-safety model we adopted in our in-memory stores to avoid 
> scaling issues, we synchronize all read/write methods and then during range 
> scans, copy the keyset of all results rather than returning a direct iterator 
> over the underlying map. When users call #next to read out the iterator 
> results, we issue a point lookup on the next key and then simply return a new 
> KeyValue<>(key, get(key))
> This lets the range scan return results without blocking access to the store 
> by other threads and without risk of ConcurrentModification, as a writer can 
> modify the real store without affecting the keyset copy of the iterator. This 
> also means that those changes won't be reflected in what the iterator sees or 
> returns, which in itself is fine as we don't guarantee consistency semantics 
> of any kind.
> However, we _do_ guarantee that range scans "must not return null values" – 
> and this contract may be violated if the StreamThread deletes a record that 
> the iterator was going to return.
> tl;dr we should check get(key) for null and skip to the next result if 
> necessary in the in-memory store iterators. See for example 
> InMemoryKeyValueIterator (note that we'll probably need to buffer one record 
> in advance before we return true from #hasNext)
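The buffering approach described in the tl;dr can be sketched as below (a minimal standalone sketch, not Kafka's actual InMemoryKeyValueIterator; class and field names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentSkipListMap;

public class SkippingIteratorSketch {
    // Iterates over a copied keyset, looking values up lazily and skipping
    // keys whose record was deleted after the keyset snapshot was taken.
    static final class NullSkippingIterator implements Iterator<Map.Entry<String, String>> {
        private final Iterator<String> keys;
        private final Map<String, String> store;
        private Map.Entry<String, String> next; // buffered one record ahead

        NullSkippingIterator(Collection<String> keySnapshot, Map<String, String> store) {
            this.keys = keySnapshot.iterator();
            this.store = store;
            advance();
        }

        // Buffer the next non-null record so hasNext() can answer truthfully.
        private void advance() {
            next = null;
            while (keys.hasNext()) {
                String key = keys.next();
                String value = store.get(key); // point lookup; may now be null
                if (value != null) {           // skip records deleted since the snapshot
                    next = Map.entry(key, value);
                    return;
                }
            }
        }

        @Override
        public boolean hasNext() {
            return next != null;
        }

        @Override
        public Map.Entry<String, String> next() {
            if (next == null) throw new NoSuchElementException();
            Map.Entry<String, String> result = next;
            advance();
            return result;
        }
    }

    public static void main(String[] args) {
        Map<String, String> store = new ConcurrentSkipListMap<>();
        store.put("a", "1");
        store.put("b", "2");
        store.put("c", "3");
        List<String> snapshot = new ArrayList<>(store.keySet());
        store.remove("b"); // concurrent delete after the snapshot
        NullSkippingIterator it = new NullSkippingIterator(snapshot, store);
        while (it.hasNext()) {
            Map.Entry<String, String> e = it.next();
            System.out.println(e.getKey() + "=" + e.getValue()); // "b" is skipped
        }
    }
}
```

Note the constructor already calls advance(), i.e. one record is buffered before the first hasNext() call, exactly as the ticket suggests.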



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17006) Kafka Metrics showing type as "Untyped"

2024-06-20 Thread Dharani (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dharani updated KAFKA-17006:

Description: 
Hi,

Most of the Kafka metric types are showing as 'untyped'.

Some of the metrics are shown below.
 # HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for 
management 
kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
 # TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
 # HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
java.lang:name=CodeHeap 'non-profiled 
nmethods',type=MemoryPool,attribute=UsageThreshold
 # TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
 # HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
number of successful re-authentication of connections per second 
kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
 # TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
untyped
But the supported Prometheus metric types are Counter | Gauge | Histogram | 
Summary as per [https://prometheus.io/docs/concepts/metric_types/]


Is it possible to change the type of these metrics from "untyped" to the 
Prometheus-supported types?

  was:
Hi,

Most of the Kafka metric types are showing as 'untyped'.

Some of the metrics are shown below.
# HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for management 
kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
# TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
# HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
java.lang:name=CodeHeap 'non-profiled 
nmethods',type=MemoryPool,attribute=UsageThreshold
# TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
# HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
number of successful re-authentication of connections per second 
kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
# TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
untyped
But the supported Prometheus metric types are Counter | Gauge | Histogram | 
Summary as per [https://prometheus.io/docs/concepts/metric_types/]
Is it possible to change the type of these metrics from "untyped" to the 
Prometheus-supported types?


> Kafka Metrics showing type as "Untyped"
> ---
>
> Key: KAFKA-17006
> URL: https://issues.apache.org/jira/browse/KAFKA-17006
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dharani
>Priority: Major
> Attachments: metrics.txt
>
>
> Hi,
> Most of the Kafka metric types are showing as 'untyped'.
> Some of the metrics are shown below.
>  # HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for 
> management 
> kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
>  # TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
>  # HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
> java.lang:name=CodeHeap 'non-profiled 
> nmethods',type=MemoryPool,attribute=UsageThreshold
>  # TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
>  # HELP kafka_server_socket_server_metrics_successful_reauthentication_rate 
> The number of successful re-authentication of connections per second 
> kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
>  # TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
> untyped
> But the supported Prometheus metric types are Counter | Gauge | Histogram | 
> Summary as per [https://prometheus.io/docs/concepts/metric_types/]
> Is it possible to change the type of these metrics from "untyped" to the 
> Prometheus-supported types?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17006) Kafka Metrics showing type as "Untyped"

2024-06-20 Thread Dharani (Jira)
Dharani created KAFKA-17006:
---

 Summary: Kafka Metrics showing type as "Untyped"
 Key: KAFKA-17006
 URL: https://issues.apache.org/jira/browse/KAFKA-17006
 Project: Kafka
  Issue Type: Bug
Reporter: Dharani
 Attachments: metrics.txt

Hi,

Most of the Kafka metric types are showing as 'untyped'.

Some of the metrics are shown below.
# HELP kafka_server_ZooKeeperClientMetrics_Min Attribute exposed for management 
kafka.server:name=ZooKeeperRequestLatencyMs,type=ZooKeeperClientMetrics,attribute=Min
# TYPE kafka_server_ZooKeeperClientMetrics_Min untyped
# HELP java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold 
java.lang:name=CodeHeap 'non-profiled 
nmethods',type=MemoryPool,attribute=UsageThreshold
# TYPE java_lang_CodeHeap_non_profiled_nmethods_UsageThreshold untyped
# HELP kafka_server_socket_server_metrics_successful_reauthentication_rate The 
number of successful re-authentication of connections per second 
kafka.server:name=null,type=socket-server-metrics,attribute=successful-reauthentication-rate
# TYPE kafka_server_socket_server_metrics_successful_reauthentication_rate 
untyped
But the supported Prometheus metric types are Counter | Gauge | Histogram | 
Summary as per [https://prometheus.io/docs/concepts/metric_types/]
Is it possible to change the type of these metrics from "untyped" to the 
Prometheus-supported types?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17005) Online protocol migration integration tests

2024-06-20 Thread Dongnuo Lyu (Jira)
Dongnuo Lyu created KAFKA-17005:
---

 Summary: Online protocol migration integration tests
 Key: KAFKA-17005
 URL: https://issues.apache.org/jira/browse/KAFKA-17005
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dongnuo Lyu
Assignee: Dongnuo Lyu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14460) In-memory store iterators can return results with null values

2024-06-20 Thread Lucia Cerchie (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856496#comment-17856496
 ] 

Lucia Cerchie commented on KAFKA-14460:
---

[~ayoubomari] sure! thank you 

> In-memory store iterators can return results with null values
> -
>
> Key: KAFKA-14460
> URL: https://issues.apache.org/jira/browse/KAFKA-14460
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Major
>
> Due to the thread-safety model we adopted in our in-memory stores to avoid 
> scaling issues, we synchronize all read/write methods and then during range 
> scans, copy the keyset of all results rather than returning a direct iterator 
> over the underlying map. When users call #next to read out the iterator 
> results, we issue a point lookup on the next key and then simply return a new 
> KeyValue<>(key, get(key))
> This lets the range scan return results without blocking access to the store 
> by other threads and without risk of ConcurrentModification, as a writer can 
> modify the real store without affecting the keyset copy of the iterator. This 
> also means that those changes won't be reflected in what the iterator sees or 
> returns, which in itself is fine as we don't guarantee consistency semantics 
> of any kind.
> However, we _do_ guarantee that range scans "must not return null values" – 
> and this contract may be violated if the StreamThread deletes a record that 
> the iterator was going to return.
> tl;dr we should check get(key) for null and skip to the next result if 
> necessary in the in-memory store iterators. See for example 
> InMemoryKeyValueIterator (note that we'll probably need to buffer one record 
> in advance before we return true from #hasNext)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14460) In-memory store iterators can return results with null values

2024-06-20 Thread Lucia Cerchie (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucia Cerchie reassigned KAFKA-14460:
-

Assignee: (was: Lucia Cerchie)

> In-memory store iterators can return results with null values
> -
>
> Key: KAFKA-14460
> URL: https://issues.apache.org/jira/browse/KAFKA-14460
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Major
>
> Due to the thread-safety model we adopted in our in-memory stores to avoid 
> scaling issues, we synchronize all read/write methods and then during range 
> scans, copy the keyset of all results rather than returning a direct iterator 
> over the underlying map. When users call #next to read out the iterator 
> results, we issue a point lookup on the next key and then simply return a new 
> KeyValue<>(key, get(key))
> This lets the range scan return results without blocking access to the store 
> by other threads and without risk of ConcurrentModification, as a writer can 
> modify the real store without affecting the keyset copy of the iterator. This 
> also means that those changes won't be reflected in what the iterator sees or 
> returns, which in itself is fine as we don't guarantee consistency semantics 
> of any kind.
> However, we _do_ guarantee that range scans "must not return null values" – 
> and this contract may be violated if the StreamThread deletes a record that 
> the iterator was going to return.
> tl;dr we should check get(key) for null and skip to the next result if 
> necessary in the in-memory store iterators. See for example 
> InMemoryKeyValueIterator (note that we'll probably need to buffer one record 
> in advance before we return true from #hasNext)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17004) MINOR: Remove extra synchronized blocks in SharePartitionManager

2024-06-20 Thread Abhinav Dixit (Jira)
Abhinav Dixit created KAFKA-17004:
-

 Summary: MINOR: Remove extra synchronized blocks in 
SharePartitionManager
 Key: KAFKA-17004
 URL: https://issues.apache.org/jira/browse/KAFKA-17004
 Project: Kafka
  Issue Type: Sub-task
Reporter: Abhinav Dixit
Assignee: Abhinav Dixit






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17003) Implement SharePartitionManager close functionality

2024-06-20 Thread Abhinav Dixit (Jira)
Abhinav Dixit created KAFKA-17003:
-

 Summary: Implement SharePartitionManager close functionality
 Key: KAFKA-17003
 URL: https://issues.apache.org/jira/browse/KAFKA-17003
 Project: Kafka
  Issue Type: Sub-task
Reporter: Abhinav Dixit
Assignee: Abhinav Dixit






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17002) Integrate partition leader epoch in Share Partition

2024-06-20 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-17002:
-

 Summary: Integrate partition leader epoch in Share Partition
 Key: KAFKA-17002
 URL: https://issues.apache.org/jira/browse/KAFKA-17002
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal
Assignee: Apoorv Mittal






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16463) Automatically delete metadata log directory on ZK brokers

2024-06-20 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur updated KAFKA-16463:
-
Fix Version/s: 3.8.0
   3.7.1
   (was: 3.9.0)

> Automatically delete metadata log directory on ZK brokers
> -
>
> Key: KAFKA-16463
> URL: https://issues.apache.org/jira/browse/KAFKA-16463
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Minor
> Fix For: 3.8.0, 3.7.1
>
>
> Throughout the process of a ZK to KRaft migration, the operator has the 
> choice to revert back to ZK mode. Once this is done, there will be a copy of 
> the metadata log on each broker in the cluster.
> In order to re-attempt the migration in the future, this metadata log needs 
> to be deleted. This can be pretty burdensome to the operator for large 
> clusters, especially since the log deletion must be done while the broker is 
> offline.
> To improve this, we can automatically delete any metadata log present during 
> startup of a ZK broker. In general, it is always safe to remove the metadata 
> log from a KRaft or migrating ZK broker. The main impact is that this will 
> delay the time it takes for the broker to be unfenced by the controller since 
> it has to re-replicate the log. In the case of hybrid mode ZK brokers, there 
> will be a delay in them receiving their first UpdateMetadataRequest from the 
> controller (for the same reason -- delay in getting unfenced).
> The delayed startup should not affect the performance of the cluster, though 
> it would increase the overall time required to do a rolling restart of the 
> cluster. 
> Once a broker restarts as KRaft, we will stop doing this automatic deletion. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16463) Automatically delete metadata log directory on ZK brokers

2024-06-20 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur resolved KAFKA-16463.
--
Resolution: Fixed

> Automatically delete metadata log directory on ZK brokers
> -
>
> Key: KAFKA-16463
> URL: https://issues.apache.org/jira/browse/KAFKA-16463
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Minor
> Fix For: 3.8.0, 3.7.1
>
>
> Throughout the process of a ZK to KRaft migration, the operator has the 
> choice to revert back to ZK mode. Once this is done, there will be a copy of 
> the metadata log on each broker in the cluster.
> In order to re-attempt the migration in the future, this metadata log needs 
> to be deleted. This can be pretty burdensome to the operator for large 
> clusters, especially since the log deletion must be done while the broker is 
> offline.
> To improve this, we can automatically delete any metadata log present during 
> startup of a ZK broker. In general, it is always safe to remove the metadata 
> log from a KRaft or migrating ZK broker. The main impact is that this will 
> delay the time it takes for the broker to be unfenced by the controller since 
> it has to re-replicate the log. In the case of hybrid mode ZK brokers, there 
> will be a delay in them receiving their first UpdateMetadataRequest from the 
> controller (for the same reason -- delay in getting unfenced).
> The delayed startup should not affect the performance of the cluster, though 
> it would increase the overall time required to do a rolling restart of the 
> cluster. 
> Once a broker restarts as KRaft, we will stop doing this automatic deletion. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14186) Add unit tests for BatchFileWriter

2024-06-20 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur updated KAFKA-14186:
-
Fix Version/s: (was: 3.9.0)

> Add unit tests for BatchFileWriter
> --
>
> Key: KAFKA-14186
> URL: https://issues.apache.org/jira/browse/KAFKA-14186
> Project: Kafka
>  Issue Type: Test
>Reporter: David Arthur
>Assignee: Alexandre Dupriez
>Priority: Minor
>
> We have integration tests that cover this class, but no direct unit tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16725) Add broker configurations

2024-06-20 Thread Andrew Schofield (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Schofield resolved KAFKA-16725.
--
Resolution: Fixed

> Add broker configurations
> -
>
> Key: KAFKA-16725
> URL: https://issues.apache.org/jira/browse/KAFKA-16725
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17001) Consider using another class to replace `AbstractConfig` to be a class which always returns the up-to-date configs

2024-06-20 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17001:
--

 Summary: Consider using another class to replace `AbstractConfig` 
to be a class which always returns the up-to-date configs
 Key: KAFKA-17001
 URL: https://issues.apache.org/jira/browse/KAFKA-17001
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


from https://github.com/apache/kafka/pull/16394#discussion_r1647321514

We are starting to have separate config classes (i.e. RemoteLogManagerConfig), 
and those configs are initialized with an AbstractConfig. By calling the 
`AbstractConfig` getters, those individual config classes can always return the 
up-to-date configs. Behind this magic behavior is the instance of 
`AbstractConfig`: we use `KafkaConfig` to construct those config classes, and 
we call `KafkaConfig#updateCurrentConfig` to update the inner configs, so the 
config classes built on `AbstractConfig` can see the latest configs too.

However, this mechanism is not readable from `AbstractConfig`. Maybe we should 
add enough docs for it, or move `KafkaConfig#updateCurrentConfig` into a new 
class with a better name.
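The pattern described above can be sketched in miniature as follows (class and method names are hypothetical stand-ins, not Kafka's actual classes): a sub-config keeps a reference to a shared mutable holder instead of copying values, so its getters always observe the latest update.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class DynamicConfigSketch {
    // Stand-in for the KafkaConfig/AbstractConfig role: a holder whose
    // current config map can be swapped atomically at runtime.
    static final class ConfigHolder {
        private final AtomicReference<Map<String, Object>> current = new AtomicReference<>();

        ConfigHolder(Map<String, Object> initial) {
            current.set(initial);
        }

        // Analogue of KafkaConfig#updateCurrentConfig.
        void updateCurrentConfig(Map<String, Object> updated) {
            current.set(updated);
        }

        Object get(String key) {
            return current.get().get(key);
        }
    }

    // Stand-in for a separate config class like RemoteLogManagerConfig:
    // it holds a reference to the shared holder, not a snapshot of values,
    // so every getter call reads the up-to-date config.
    static final class RemoteLogConfigView {
        private final ConfigHolder holder;

        RemoteLogConfigView(ConfigHolder holder) {
            this.holder = holder;
        }

        long retentionMs() {
            return (Long) holder.get("retention.ms");
        }
    }

    public static void main(String[] args) {
        ConfigHolder holder = new ConfigHolder(Map.of("retention.ms", 1000L));
        RemoteLogConfigView view = new RemoteLogConfigView(holder);
        System.out.println(view.retentionMs());
        holder.updateCurrentConfig(Map.of("retention.ms", 2000L));
        System.out.println(view.retentionMs()); // sees the update without reconstruction
    }
}
```

The readability complaint in the ticket is visible even here: nothing in RemoteLogConfigView's API signals that its values can change underneath it, which is why a dedicated, well-named holder class would document the contract better.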



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16962) kafkatest.tests.core.upgrade_test system tests failed in v3.7.1 RC1

2024-06-20 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856470#comment-17856470
 ] 

Luke Chen commented on KAFKA-16962:
---

This test covered the upgrade from versions [0.9 ~ 3.6] to v3.7, and only v0.9 
failed; all the others passed. Given that v0.9 is a very old version, this issue 
should not be a blocker for v3.7.1.

> kafkatest.tests.core.upgrade_test system tests failed in v3.7.1 RC1
> ---
>
> Key: KAFKA-16962
> URL: https://issues.apache.org/jira/browse/KAFKA-16962
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.7.1
>Reporter: Luke Chen
>Priority: Major
>
>  
> {code:java}
>  name="test_upgrade_compression_types=lz4_from_kafka_version=0_9_0_1_to_message_format_version=0_9_0_1"
>  classname="kafkatest.tests.core.upgrade_test" time="212.312"> message="AssertionError()" type="exception">AssertionError() Traceback (most 
> recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File "/opt/kafka-dev/tests/kafkatest/tests/core/upgrade_test.py", line 240, 
> in test_upgrade assert self.kafka.check_protocol_errors(self) AssertionError 
> AssertionError() Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File "/opt/kafka-dev/tests/kafkatest/tests/core/upgrade_test.py", line 240, 
> in test_upgrade assert self.kafka.check_protocol_errors(self) AssertionError 
> ... name="test_transactions_bounce_target=brokers_check_order=False_failure_mode=hard_bounce_metadata_quorum=COMBINED_KRAFT_use_group_metadata=False_use_new_coordinator=False"
>  classname="kafkatest.tests.core.transactions_test" time="185.422"/> name="test_upgrade_compression_types=none_from_kafka_version=0_9_0_1_to_message_format_version=0_9_0_1"
>  classname="kafkatest.tests.core.upgrade_test" time="208.997"/> name="test_transactions_bounce_target=brokers_check_order=False_failure_mode=hard_bounce_metadata_quorum=ISOLATED_KRAFT_use_group_metadata=False_use_new_coordinator=False"
>  classname="kafkatest.tests.core.transactions_test" time="169.953"/> name="test_upgrade_compression_types=lz4_from_kafka_version=0_9_0_1_to_message_format_version=None"
>  classname="kafkatest.tests.core.upgrade_test" time="227.220"/> name="test_transactions_bounce_target=brokers_check_order=False_failure_mode=hard_bounce_metadata_quorum=ZK_use_group_metadata=False_use_new_coordinator=False"
>  classname="kafkatest.tests.core.transactions_test" time="160.104"/>
>  name="test_upgrade_compression_types=snappy_from_kafka_version=0_9_0_1_to_message_format_version=None"
>  classname="kafkatest.tests.core.upgrade_test" time="227.571"> message="AssertionError()" type="exception">AssertionError() Traceback (most 
> recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File "/opt/kafka-dev/tests/kafkatest/tests/core/upgrade_test.py", line 240, 
> in test_upgrade assert self.kafka.check_protocol_errors(self) AssertionError 
> AssertionError() Traceback (most recent call last): File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 186, in _do_run data = self.run_test() File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", 
> line 246, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File "/opt/kafka-dev/tests/kafkatest/tests/core/upgrade_test.py", line 240, 
> in test_upgrade assert self.kafka.check_protocol_errors(self) AssertionError 
> ...{code}

[jira] [Updated] (KAFKA-17000) Occasional AuthorizerTest thread leak

2024-06-20 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona updated KAFKA-17000:
--
Description: 
h2. error during AclAuthorizer.configure
{noformat}
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /kafka-acl/DelegationToken
at 
app//org.apache.zookeeper.KeeperException.create(KeeperException.java:134)
at 
app//org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at 
app//kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:570)
at app//kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1883)
at 
app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2(KafkaZkClient.scala:1294)
at 
app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2$adapted(KafkaZkClient.scala:1294)
at app//scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
at 
app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1(KafkaZkClient.scala:1294)
at 
app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1$adapted(KafkaZkClient.scala:1292)
at app//scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
at 
app//scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
at app//scala.collection.AbstractIterable.foreach(Iterable.scala:933)
at app//kafka.zk.KafkaZkClient.createAclPaths(KafkaZkClient.scala:1292)
at 
app//kafka.security.authorizer.AclAuthorizer.configure(AclAuthorizer.scala:212)
at 
app//kafka.security.authorizer.AuthorizerTest.createAclAuthorizer(AuthorizerTest.scala:1182)
at 
app//kafka.security.authorizer.AuthorizerTest.createAuthorizer(AuthorizerTest.scala:1175)
at 
app//kafka.security.authorizer.AuthorizerTest.setUp(AuthorizerTest.scala:95)
at jdk.internal.reflect.GeneratedMethodAccessor138.invoke(Unknown 
Source)
at 
java.base@11.0.15/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base@11.0.15/java.lang.reflect.Method.invoke(Method.java:566)
at 
app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
at 
app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachMethods(TestMethodTestD

[jira] [Created] (KAFKA-17000) Occasional AuthorizerTest thread leak

2024-06-20 Thread Andras Katona (Jira)
Andras Katona created KAFKA-17000:
-

 Summary: Occasional AuthorizerTest thread leak
 Key: KAFKA-17000
 URL: https://issues.apache.org/jira/browse/KAFKA-17000
 Project: Kafka
  Issue Type: Test
Reporter: Andras Katona


h2. error
{noformat}
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /kafka-acl/DelegationToken
at 
app//org.apache.zookeeper.KeeperException.create(KeeperException.java:134)
at 
app//org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at 
app//kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:570)
at app//kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1883)
at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2(KafkaZkClient.scala:1294)
at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2$adapted(KafkaZkClient.scala:1294)
at app//scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1(KafkaZkClient.scala:1294)
at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1$adapted(KafkaZkClient.scala:1292)
at app//scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
at app//scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
at app//scala.collection.AbstractIterable.foreach(Iterable.scala:933)
at app//kafka.zk.KafkaZkClient.createAclPaths(KafkaZkClient.scala:1292)
at app//kafka.security.authorizer.AclAuthorizer.configure(AclAuthorizer.scala:212)
at app//kafka.security.authorizer.AuthorizerTest.createAclAuthorizer(AuthorizerTest.scala:1182)
at app//kafka.security.authorizer.AuthorizerTest.createAuthorizer(AuthorizerTest.scala:1175)
at app//kafka.security.authorizer.AuthorizerTest.setUp(AuthorizerTest.scala:95)
at jdk.internal.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
at java.base@11.0.15/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base@11.0.15/java.lang.reflect.Method.invoke(Method.java:566)
at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
at app//org.junit.jupite

[jira] [Assigned] (KAFKA-17000) Occasional AuthorizerTest thread leak

2024-06-20 Thread Andras Katona (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andras Katona reassigned KAFKA-17000:
-

Assignee: Andras Katona

> Occasional AuthorizerTest thread leak
> -
>
> Key: KAFKA-17000
> URL: https://issues.apache.org/jira/browse/KAFKA-17000
> Project: Kafka
>  Issue Type: Test
>Reporter: Andras Katona
>Assignee: Andras Katona
>Priority: Major
>
> h2. error
> {noformat}
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /kafka-acl/DelegationToken
>   at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:134)
>   at app//org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>   at app//kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:570)
>   at app//kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1883)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2(KafkaZkClient.scala:1294)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2$adapted(KafkaZkClient.scala:1294)
>   at app//scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1(KafkaZkClient.scala:1294)
>   at app//kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1$adapted(KafkaZkClient.scala:1292)
>   at app//scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
>   at app//scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
>   at app//scala.collection.AbstractIterable.foreach(Iterable.scala:933)
>   at app//kafka.zk.KafkaZkClient.createAclPaths(KafkaZkClient.scala:1292)
>   at app//kafka.security.authorizer.AclAuthorizer.configure(AclAuthorizer.scala:212)
>   at app//kafka.security.authorizer.AuthorizerTest.createAclAuthorizer(AuthorizerTest.scala:1182)
>   at app//kafka.security.authorizer.AuthorizerTest.createAuthorizer(AuthorizerTest.scala:1175)
>   at app//kafka.security.authorizer.AuthorizerTest.setUp(AuthorizerTest.scala:95)
>   at jdk.internal.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
>   at java.base@11.0.15/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base@11.0.15/java.lang.reflect.Method.invoke(Method.java:566)
>   at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>   at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>   at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
>   at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
>   at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>   at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>   at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>   at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
>   at app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
>   at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
>   at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor

[jira] [Resolved] (KAFKA-16973) Fix caught-up condition

2024-06-20 Thread David Jacot (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-16973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Jacot resolved KAFKA-16973.
-
Fix Version/s: 3.9.0
   Resolution: Fixed

> Fix caught-up condition
> ---
>
> Key: KAFKA-16973
> URL: https://issues.apache.org/jira/browse/KAFKA-16973
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.9.0
>
>
> When a write operation does not have any records, the coordinator runtime 
> checks whether the state machine is caught up in order to decide whether the 
> operation should wait until the state machine has committed up to the 
> operation's point, or whether it can be completed immediately. The current 
> implementation assumes that a pending write operation is always waiting in 
> the deferred queue when the state machine is not fully caught up. This holds 
> except when the state machine has just been loaded and is not yet caught up. 
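To make the gap concrete, here is a minimal, hypothetical sketch of the two conditions. The names (`deferredWrites`, `lastWrittenOffset`, `lastCommittedOffset`) and the structure are illustrative only, not Kafka's actual coordinator runtime API: a check based solely on the deferred queue reports "caught up" immediately after a load, even though commits still lag, while additionally comparing offsets covers that case.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of a coordinator's caught-up decision (not Kafka code).
class CaughtUpCheckSketch {
    long lastWrittenOffset;   // offset of the last record the state machine must replay
    long lastCommittedOffset; // offset up to which the log is known committed
    final Deque<Runnable> deferredWrites = new ArrayDeque<>();

    // Assumption described in the ticket: if nothing is deferred, we must be
    // caught up. This fails right after loading, when no write has been
    // deferred yet but the state machine still lags the log.
    boolean caughtUpAssumingPendingWrites() {
        return deferredWrites.isEmpty();
    }

    // Safer condition: also require that commits have reached the last
    // written offset, which does not depend on a pending write existing.
    boolean caughtUp() {
        return deferredWrites.isEmpty() && lastCommittedOffset >= lastWrittenOffset;
    }
}
```

With `lastWrittenOffset = 10` and `lastCommittedOffset = 5` and an empty queue, the first check wrongly answers true while the second answers false, matching the failure mode the ticket describes.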



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14508) Add ConsumerGroupInstallAssignment API

2024-06-20 Thread David Jacot (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856403#comment-17856403 ]

David Jacot commented on KAFKA-14508:
-

We are not quite ready to do this one. I recommend waiting a bit.

> Add ConsumerGroupInstallAssignment API
> --
>
> Key: KAFKA-14508
> URL: https://issues.apache.org/jira/browse/KAFKA-14508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)