[jira] [Commented] (KAFKA-17164) Consider to enforce application.server : format at config level
[ https://issues.apache.org/jira/browse/KAFKA-17164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868547#comment-17868547 ]

dujian0068 commented on KAFKA-17164:

Hello: I agree that the validation should live in StreamsConfig, and that it is necessary; scattering validation across classes makes the code harder to modify and maintain. We also do not need to worry much about `new StreamsConfig(...)` throwing an exception, because if the user sets a badly formatted value, an exception is thrown anyway when the value is used. If this needs to be fixed, I'd be happy to fix it. Thank you.

> Consider to enforce application.server : format at config level
> ----------------------------------------------------------------
>
> Key: KAFKA-17164
> URL: https://issues.apache.org/jira/browse/KAFKA-17164
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Reporter: Matthias J. Sax
> Priority: Minor
> Labels: needs-kip
>
> KafkaStreams supports the configuration parameter `application.server`, which must be of the format `<host>:<port>`.
> However, `StreamsConfig` accepts any `String` without validation; the format is only validated inside the `KafkaStreams` constructor.
> It might be better to add an `AbstractConfig.Validator` and move this validation into `StreamsConfig` directly.
> This would be a semantic change, because `new StreamsConfig(...)` might now throw an exception. Thus we need a KIP for this change, and it's technically backward incompatible... (So not sure if we can do this at all -- except for a major release? -- But 4.0 is close...)
> The ROI is unclear, to be fair. Filing this ticket mainly for documentation and to collect feedback on whether people think it would be a worthwhile thing to do or not.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
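A validator along the lines the ticket proposes could look like the sketch below. This is a hypothetical illustration, not Kafka's actual implementation: the class name and the standalone boolean helper are invented here, and a real fix would implement `org.apache.kafka.common.config.ConfigDef.Validator` and throw a `ConfigException` from `ensureValid` instead of returning a boolean.

```java
// Hypothetical sketch of the host:port check an application.server
// validator might perform. Standalone (no Kafka dependency) so the
// parsing rule is easy to see; all names are illustrative.
public class ApplicationServerCheck {

    // Accepts the empty default, otherwise requires host:port with a
    // numeric port in [0, 65535]. IPv6 literals like [::1]:9092 also
    // pass, because lastIndexOf(':') finds the final port separator.
    static boolean isValid(String value) {
        if (value == null || value.isEmpty()) {
            return true; // unset is allowed; only explicit values are checked
        }
        int sep = value.lastIndexOf(':');
        if (sep <= 0 || sep == value.length() - 1) {
            return false; // missing host or missing port
        }
        try {
            int port = Integer.parseInt(value.substring(sep + 1));
            return port >= 0 && port <= 65535;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("localhost:8080")); // true
        System.out.println(isValid("localhost"));      // false: no port
        System.out.println(isValid("host:notaport"));  // false: port not numeric
    }
}
```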
[jira] [Commented] (KAFKA-17070) perf: consider to use ByteBufferOutputstream to append records
[ https://issues.apache.org/jira/browse/KAFKA-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17865999#comment-17865999 ]

dujian0068 commented on KAFKA-17070:

Hello: For records that do not require compression, I tried using ByteBufferOutputStream instead of DataOutputStream, but the test results showed no significant performance change.
My test code: [^MyProducerTest.java]
ByteBufferOutputStream test result: [^ByteBufferOutpuStream-Testresult.txt]
DataOutputStream test result: [^DataOutputStream-TestResult.txt]
Code modification record: [https://github.com/apache/kafka/pull/16595]
Can anyone give me some advice? Thank you.

> perf: consider to use ByteBufferOutputstream to append records
> ---------------------------------------------------------------
>
> Key: KAFKA-17070
> URL: https://issues.apache.org/jira/browse/KAFKA-17070
> Project: Kafka
> Issue Type: Improvement
> Reporter: Luke Chen
> Assignee: dujian0068
> Priority: Major
> Attachments: ByteBufferOutpuStream-Testresult.txt, DataOutputStream-TestResult.txt, MyProducerTest.java
>
> Consider using ByteBufferOutputStream to append records, instead of a DataOutputStream. We should add a JMH test to confirm this indeed improves performance before merging it.
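Since the comment reports no significant difference, it may help to see why that outcome is plausible: both append paths are pure in-memory byte copies. The sketch below is hypothetical (it is not the JMH benchmark from the PR and uses no Kafka classes); it contrasts writing record fields through a `DataOutputStream` with writing directly into a `ByteBuffer`, which is essentially what routing writes through Kafka's `ByteBufferOutputStream` enables. Both paths emit identical big-endian bytes.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Arrays;

// Illustrative comparison of the two append paths. The "record" here is
// a hypothetical 12-byte (long offset + int payload) stand-in.
public class AppendPaths {
    static final int RECORDS = 1_000;

    static byte[] viaDataOutputStream() throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream(RECORDS * 12);
        DataOutputStream out = new DataOutputStream(baos);
        for (int i = 0; i < RECORDS; i++) {
            out.writeLong(i);  // stand-in for a record offset
            out.writeInt(42);  // stand-in for a payload field
        }
        out.flush();
        return baos.toByteArray();
    }

    static byte[] viaByteBuffer() {
        ByteBuffer buf = ByteBuffer.allocate(RECORDS * 12);
        for (int i = 0; i < RECORDS; i++) {
            buf.putLong(i);
            buf.putInt(42);
        }
        return buf.array();
    }

    public static void main(String[] args) throws IOException {
        // Both APIs are big-endian by default, so the bytes must match;
        // any speedup therefore comes only from fewer copies/virtual calls.
        System.out.println(Arrays.equals(viaDataOutputStream(), viaByteBuffer()));
    }
}
```

Under this framing, a large gap would only be expected where the stream path forces extra buffering or copying, which matches the observation that uncompressed appends showed little change.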
[jira] [Updated] (KAFKA-17070) perf: consider to use ByteBufferOutputstream to append records
[ https://issues.apache.org/jira/browse/KAFKA-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dujian0068 updated KAFKA-17070:
---
Attachment: MyProducerTest.java
            ByteBufferOutpuStream-Testresult.txt
            DataOutputStream-TestResult.txt
[jira] [Assigned] (KAFKA-17070) perf: consider to use ByteBufferOutputstream to append records
[ https://issues.apache.org/jira/browse/KAFKA-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-17070: Assignee: dujian0068
[jira] [Commented] (KAFKA-17070) perf: consider to use ByteBufferOutputstream to append records
[ https://issues.apache.org/jira/browse/KAFKA-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17862773#comment-17862773 ] dujian0068 commented on KAFKA-17070: Hello: This is a valuable question. May I try it?
[jira] [Commented] (KAFKA-16995) The listeners broker parameter incorrect documentation
[ https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17860833#comment-17860833 ]

dujian0068 commented on KAFKA-16995:

Hello [~chia7712], could you please review the merge request for me?
PR: https://github.com/apache/kafka/pull/16395

> The listeners broker parameter incorrect documentation
> -------------------------------------------------------
>
> Key: KAFKA-16995
> URL: https://issues.apache.org/jira/browse/KAFKA-16995
> Project: Kafka
> Issue Type: Bug
> Affects Versions: 3.6.1
> Environment: Kafka 3.6.1
> Reporter: Sergey
> Assignee: dujian0068
> Priority: Minor
>
> We are using Kafka 3.6.1, and [KIP-797|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330] describes configuring listeners with the same port and name for supporting IPv4/IPv6 dual-stack.
> Documentation link: https://kafka.apache.org/36/documentation.html#brokerconfigs_listeners
> As I understand it, Kafka should allow us to set the listener name and listener port to the same value if we configure dual-stack. But in reality, the broker returns an error if we set the listener name to the same value.
> Error example:
> {code:java}
> java.lang.IllegalArgumentException: requirement failed: Each listener must have a different name, listeners: CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
>     at scala.Predef$.require(Predef.scala:337)
>     at kafka.utils.CoreUtils$.validate$1(CoreUtils.scala:214)
>     at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:268)
>     at kafka.server.KafkaConfig.listeners(KafkaConfig.scala:2120)
>     at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1807)
>     at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1604)
>     at kafka.Kafka$.buildServer(Kafka.scala:72)
>     at kafka.Kafka$.main(Kafka.scala:91)
>     at kafka.Kafka.main(Kafka.scala) {code}
> I've tried to set the listeners to: "SSL://0.0.0.0:9093,SSL://[::]:9093"
[jira] [Commented] (KAFKA-17041) Add pagination when describe large set of metadata via Admin API
[ https://issues.apache.org/jira/browse/KAFKA-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17860140#comment-17860140 ]

dujian0068 commented on KAFKA-17041:

Hello: Can I take this task?

> Add pagination when describe large set of metadata via Admin API
> -----------------------------------------------------------------
>
> Key: KAFKA-17041
> URL: https://issues.apache.org/jira/browse/KAFKA-17041
> Project: Kafka
> Issue Type: Task
> Components: admin
> Reporter: Omnia Ibrahim
> Priority: Major
>
> Some requests via the Admin API time out on large clusters or clusters with too much metadata. For example, OffsetFetchRequest and DescribeLogDirsRequest time out due to the large number of partitions on a cluster. Also, DescribeProducersRequest and ListTransactionsRequest time out due to too many short-lived PIDs or too many hanging transactions.
> [KIP-1062: Introduce Pagination for some requests used by Admin API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]
[jira] [Resolved] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
[ https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 resolved KAFKA-17015. Resolution: Fixed

> ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
> --------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-17015
> URL: https://issues.apache.org/jira/browse/KAFKA-17015
> Project: Kafka
> Issue Type: Improvement
> Reporter: dujian0068
> Assignee: dujian0068
> Priority: Minor
>
> When reviewing PR#16970, I found that `ContextualRecord#hashCode()` and `ProcessorRecordContext#hashCode()` are deprecated because they depend on a mutable attribute, which causes the hashCode to change.
> I don't think hashCode should be discarded just because it is mutable. HashCode is a very important property of an object; it just shouldn't be used for hash addressing, like ArrayList.
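The hazard that motivated deprecating these methods can be reproduced in a few lines. The class below is a hypothetical stand-in, not ContextualRecord itself: once a field that feeds hashCode() mutates, hash-based collections can no longer find the object, which is exactly the "shouldn't be used for hash addressing" caveat in the description.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical record-like class whose hashCode() depends on a mutable
// field. Demonstrates why such a hashCode must not drive hash addressing.
public class MutableKey {
    long offset; // mutable, and it feeds hashCode()

    MutableKey(long offset) {
        this.offset = offset;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).offset == offset;
    }

    @Override
    public int hashCode() {
        return Objects.hash(offset);
    }

    public static void main(String[] args) {
        Set<MutableKey> set = new HashSet<>();
        MutableKey key = new MutableKey(1L);
        set.add(key);          // stored in the bucket for hash(1L)
        key.offset = 2L;       // mutate after insertion
        // Lookup now probes the bucket for hash(2L) and misses,
        // even though the object is still inside the set.
        System.out.println(set.contains(key)); // false
    }
}
```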
[jira] [Commented] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
[ https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17859473#comment-17859473 ]

dujian0068 commented on KAFKA-17015:

Thank you for your reply. When I studied the Kafka source code, I found that the class `ContextualRecord` has a `ProcessorRecordContext` attribute and that `ProcessorRecordContext#toString()` is deprecated. I considered also deprecating `ContextualRecord#toString()`, but it seemed unnecessary, so I raised this issue.
[jira] [Updated] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
[ https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dujian0068 updated KAFKA-17015:
---
Description:
When reviewing PR#16970, I found that `ContextualRecord#hashCode()` and `ProcessorRecordContext#hashCode()` are deprecated because they depend on a mutable attribute, which causes the hashCode to change.
I don't think hashCode should be discarded just because it is mutable. HashCode is a very important property of an object; it just shouldn't be used for hash addressing, like ArrayList.

was:
When reviewing PR#16416, I found that `ContextualRecord#hashCode()` and `ProcessorRecordContext#hashCode()` are deprecated because they depend on a mutable attribute, which causes the hashCode to change.
I don't think hashCode should be discarded just because it is mutable. HashCode is a very important property of an object; it just shouldn't be used for hash addressing, like ArrayList.
[jira] [Commented] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
[ https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856714#comment-17856714 ] dujian0068 commented on KAFKA-17015: Hello [~chia7712], could you evaluate whether this should be changed? Thank you.
[jira] [Assigned] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
[ https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-17015: Assignee: dujian0068
[jira] [Created] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
dujian0068 created KAFKA-17015:
---
Summary: ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception
Key: KAFKA-17015
URL: https://issues.apache.org/jira/browse/KAFKA-17015
Project: Kafka
Issue Type: Improvement
Reporter: dujian0068

When reviewing PR#16416, I found that `ContextualRecord#hashCode()` and `ProcessorRecordContext#hashCode()` are deprecated because they depend on a mutable attribute, which causes the hashCode to change.
I don't think hashCode should be discarded just because it is mutable. HashCode is a very important property of an object; it just shouldn't be used for hash addressing, like ArrayList.
[jira] [Commented] (KAFKA-17014) ScramFormatter should not use String for password.
[ https://issues.apache.org/jira/browse/KAFKA-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856670#comment-17856670 ]

dujian0068 commented on KAFKA-17014:

Hello: This is an interesting question. May I handle it?

> ScramFormatter should not use String for password.
> ---------------------------------------------------
>
> Key: KAFKA-17014
> URL: https://issues.apache.org/jira/browse/KAFKA-17014
> Project: Kafka
> Issue Type: Improvement
> Components: security
> Reporter: Tsz-wo Sze
> Priority: Major
>
> Since String is immutable, there is no easy way to erase a String password after use. We should not use String for passwords. See also https://stackoverflow.com/questions/8881291/why-is-char-preferred-over-string-for-passwords
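The char[] alternative the ticket points to can be sketched as follows. The class and the consumer method are hypothetical stand-ins for wherever ScramFormatter would hash the secret; the pattern itself (use, then zero in a finally block) is the standard one.

```java
import java.util.Arrays;

// Sketch of the pattern the ticket suggests: keep the secret in a
// char[] and zero it after use. A String cannot be erased because it
// is immutable and may linger on the heap until garbage collection.
public class PasswordErasure {

    // Hypothetical consumer standing in for the SCRAM salting/hashing step.
    static int useSecret(char[] secret) {
        return secret.length;
    }

    public static void main(String[] args) {
        char[] password = {'s', 'e', 'c', 'r', 'e', 't'};
        try {
            useSecret(password);
        } finally {
            Arrays.fill(password, '\0'); // erase as soon as we are done
        }
        System.out.println((int) password[0]); // 0: contents are gone
    }
}
```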
[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s
[ https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 updated KAFKA-17013: Issue Type: Bug (was: Improvement)

> RequestManager#ConnectionState#toString() should use %s
> --------------------------------------------------------
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
> Issue Type: Bug
> Reporter: dujian0068
> Assignee: dujian0068
> Priority: Minor
>
> RequestManager#ConnectionState#toString() should use %s
> https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375
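The reasoning behind preferring %s can be shown in isolation. The field names below are hypothetical, not the actual ConnectionState fields: %s calls toString() on any argument, while a mismatched conversion such as %d against a non-numeric field only fails at runtime, when toString() is finally invoked.

```java
// Illustrative sketch (not the RequestManager code itself): %s accepts
// any argument type, while a mismatched numeric conversion throws.
public class ToStringFormat {
    public static void main(String[] args) {
        Object state = "CONNECTED"; // hypothetical non-numeric field
        long lastAttemptMs = 100L;  // hypothetical numeric field

        // %s works uniformly for both fields:
        System.out.println(String.format(
            "ConnectionState(state=%s, lastAttemptMs=%s)", state, lastAttemptMs));

        // A mismatched conversion compiles fine but fails at runtime:
        try {
            String.format("ConnectionState(state=%d)", state);
        } catch (java.util.IllegalFormatConversionException e) {
            System.out.println("throws " + e.getClass().getSimpleName());
        }
    }
}
```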
[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s
[ https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dujian0068 updated KAFKA-17013:
---
Description: RequestManager#ConnectionState#toString() should use %s
https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375
(was: RequestManager#toString() should use %s)
[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s
[ https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dujian0068 updated KAFKA-17013:
---
Summary: RequestManager#ConnectionState#toString() should use %s (was: RequestManager#toString() should use %s)
[jira] [Created] (KAFKA-17013) RequestManager#toString() should use %s
dujian0068 created KAFKA-17013: -- Summary: RequestManager#toString() should use %s Key: KAFKA-17013 URL: https://issues.apache.org/jira/browse/KAFKA-17013 Project: Kafka Issue Type: Improvement Reporter: dujian0068 RequestManager#toString() should use %s https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-17013) RequestManager#toString() should use %s
[ https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-17013: Assignee: dujian0068
[jira] [Assigned] (KAFKA-17007) Fix SourceAndTarget#equal
[ https://issues.apache.org/jira/browse/KAFKA-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-17007: Assignee: PoAn Yang (was: dujian0068)

> Fix SourceAndTarget#equal
> --------------------------
>
> Key: KAFKA-17007
> URL: https://issues.apache.org/jira/browse/KAFKA-17007
> Project: Kafka
> Issue Type: Bug
> Reporter: Chia-Ping Tsai
> Assignee: PoAn Yang
> Priority: Minor
>
> In reviewing https://github.com/apache/kafka/pull/16404 I noticed that SourceAndTarget is part of the public API. Hence, we should fix `equals` so that it checks the class type [0].
> [0] https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/SourceAndTarget.java#L49
[jira] [Assigned] (KAFKA-17007) Fix SourceAndTarget#equal
[ https://issues.apache.org/jira/browse/KAFKA-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-17007: Assignee: dujian0068 (was: PoAn Yang)
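The fix described in the ticket amounts to the standard equals() shape with a runtime-class check. A minimal sketch, with fields mirroring the source/target pair but otherwise hypothetical (the real class lives in connect/mirror-client):

```java
import java.util.Objects;

// Minimal sketch of an equals() that checks the runtime class before
// comparing fields, as the ticket requests. Illustrative only.
public class SourceAndTarget {
    private final String source;
    private final String target;

    public SourceAndTarget(String source, String target) {
        this.source = source;
        this.target = target;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        // The type check the ticket says is missing: a non-SourceAndTarget
        // argument (or null) can never be equal.
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        SourceAndTarget other = (SourceAndTarget) o;
        return Objects.equals(source, other.source)
            && Objects.equals(target, other.target);
    }

    @Override
    public int hashCode() {
        return Objects.hash(source, target);
    }

    public static void main(String[] args) {
        SourceAndTarget a = new SourceAndTarget("primary", "backup");
        System.out.println(a.equals(new SourceAndTarget("primary", "backup"))); // true
        System.out.println(a.equals("primary->backup")); // false: different class
    }
}
```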
[jira] [Commented] (KAFKA-16995) The listeners broker parameter incorrect documentation
[ https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856636#comment-17856636 ]

dujian0068 commented on KAFKA-16995:

If you want to bind the same port on both the IPv4 wildcard address (0.0.0.0) and the IPv6 wildcard address ([::]), configure the listener list as `PLAINTEXT://[::]:9092`, not `PLAINTEXT://0.0.0.0:9092,PLAINTEXT://[::]:9092`.
Tip: this is only needed for wildcard addresses.

Sergey (Jira) wrote on Friday, June 21, 2024 at 05:00:
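The advice in the comment above can be written out as a broker config sketch; the listener name and port are illustrative, not taken from the reporter's setup:

```properties
# Dual-stack wildcard binding: bind only the IPv6 wildcard. On a
# dual-stack host, [::] accepts IPv4 connections as well.
listeners=PLAINTEXT://[::]:9092

# Listing both wildcards under one listener name is rejected with
# "Each listener must have a different name":
# listeners=PLAINTEXT://0.0.0.0:9092,PLAINTEXT://[::]:9092
```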
[jira] [Commented] (KAFKA-16995) The listeners broker parameter incorrect documentation
[ https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856114#comment-17856114 ]

dujian0068 commented on KAFKA-16995:

Hello: From the error message, it seems that there is something wrong with your broker configuration.
Format: listeners = listener_name://host_name:port
Each listener_name must be different.
[jira] [Assigned] (KAFKA-16995) The listeners broker parameter incorrect documentation
[ https://issues.apache.org/jira/browse/KAFKA-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-16995: Assignee: dujian0068
[jira] [Assigned] (KAFKA-16956) Broker-side ability to subscribe to record delete events
[ https://issues.apache.org/jira/browse/KAFKA-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-16956: Assignee: dujian0068

> Broker-side ability to subscribe to record delete events
> ---------------------------------------------------------
>
> Key: KAFKA-16956
> URL: https://issues.apache.org/jira/browse/KAFKA-16956
> Project: Kafka
> Issue Type: Improvement
> Reporter: Luke Chen
> Assignee: dujian0068
> Priority: Major
> Labels: need-kip
>
> In some cases it would be useful for systems outside Kafka to have the ability to know when Kafka deletes records (tombstoning or retention). In general the use-case is where there is a desire to link the lifecycle of a record in a third-party system (database or filesystem etc.) to the lifecycle of the record in Kafka.
> A concrete use-case: a system using Kafka to distribute video clips plus metadata. The binary content is too big to store in Kafka, so the publishing application caches the content in cloud storage and publishes a record containing an S3 URL to the video clip. The desire is to have a mechanism to remove the clip from cloud storage at the same time the record is expunged from Kafka by retention or tombstoning. Currently there is no practical way to achieve this.
> h2. Desired solution
> A pluggable broker-side mechanism that is informed as records are being compacted away or deleted. The API would expose the topic from which the record is being deleted, the record key, record headers, timestamp and (possibly) record value.
[jira] [Commented] (KAFKA-16958) add `STRICT_STUBS` to `EndToEndLatencyTest`, `OffsetCommitCallbackInvokerTest`, `ProducerPerformanceTest`, and `TopologyTest`
[ https://issues.apache.org/jira/browse/KAFKA-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855012#comment-17855012 ]

dujian0068 commented on KAFKA-16958:

Hello: Could you assign this task to me?

> add `STRICT_STUBS` to `EndToEndLatencyTest`, `OffsetCommitCallbackInvokerTest`, `ProducerPerformanceTest`, and `TopologyTest`
> -----------------------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-16958
> URL: https://issues.apache.org/jira/browse/KAFKA-16958
> Project: Kafka
> Issue Type: Test
> Reporter: Chia-Ping Tsai
> Assignee: Chia-Ping Tsai
> Priority: Minor
>
> They all need `@MockitoSettings(strictness = Strictness.STRICT_STUBS)`
[jira] [Assigned] (KAFKA-14507) Add ConsumerGroupPrepareAssignment API
[ https://issues.apache.org/jira/browse/KAFKA-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-14507: -- Assignee: dujian0068 > Add ConsumerGroupPrepareAssignment API > -- > > Key: KAFKA-14507 > URL: https://issues.apache.org/jira/browse/KAFKA-14507 > Project: Kafka > Issue Type: Sub-task >Reporter: David Jacot >Assignee: dujian0068 >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-14705) Remove deprecated options and redirections
[ https://issues.apache.org/jira/browse/KAFKA-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dujian0068 reassigned KAFKA-14705: -- Assignee: dujian0068 > Remove deprecated options and redirections > -- > > Key: KAFKA-14705 > URL: https://issues.apache.org/jira/browse/KAFKA-14705 > Project: Kafka > Issue Type: Sub-task >Reporter: Federico Valeri >Assignee: dujian0068 >Priority: Major > Fix For: 4.0.0 > > > We can use this task to track tools cleanup for the next major release > (4.0.0). > 1. Redirections to be removed: > - core/src/main/scala/kafka/tools/JmxTool > - core/src/main/scala/kafka/tools/ClusterTool > - core/src/main/scala/kafka/tools/StateChangeLogMerger > - core/src/main/scala/kafka/tools/EndToEndLatency > - core/src/main/scala/kafka/admin/FeatureCommand > - core/src/main/scala/kafka/tools/StreamsResetter > 2. Deprecated tools to be removed: > - tools/src/main/java/org/apache/kafka/tools/StateChangeLogMerger > 3. TopicFilter, PartitionFilter and TopicPartitionFilter in "server-common" > should be moved to "tools" once we get rid of MirrorMaker1 dependency. > 4. We should also get rid of many deprecated options across all tools, > including not migrated tools. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-16944) Range assignor doesn't co-partition with stickiness
[ https://issues.apache.org/jira/browse/KAFKA-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854593#comment-17854593 ] dujian0068 commented on KAFKA-16944: Hello, does this problem need to be fixed quickly? If not, could it be assigned to me? It is not easy for me to find a problem to work on, and it may take me some time to deal with this one > Range assignor doesn't co-partition with stickiness > --- > > Key: KAFKA-16944 > URL: https://issues.apache.org/jira/browse/KAFKA-16944 > Project: Kafka > Issue Type: Sub-task >Reporter: Ritika Reddy >Assignee: Ritika Reddy >Priority: Major > > When stickiness is considered during range assignments, it is possible that > in certain cases where co-partitioning is guaranteed we fail. > An example would be: > Consider two topics T1, T2 with 3 partitions each and three members A, B, C. > Let's say the existing assignment (for whatever reason) is: > {quote}A -> T1P0 || B -> T1P1, T2P0, T2P1, T2P2 || C -> T1P2 > {quote} > Now we trigger a rebalance with the following subscriptions where all members > are subscribed to both topics everything else is the same > {quote}A -> T1, T2 || B -> T1, T2 || C -> T1, T2 > {quote} > Since all the topics have an equal number of partitions and all the members > are subscribed to the same set of topics we would expect co-partitioning > right so would we want the final assignment returned to be > {quote}A -> T1P0, T2P0 || B -> T1P1, T2P1 || C -> T1P2, T2P2 > {quote} > So currently the client side assignor returns the following, but only because > it doesn't assign sticky partitions: > {{C=[topic1-2, topic2-2], B=[topic1-1, topic2-1], A=[topic1-0, topic2-0]}} > Our server side assignor returns (the partitions in bold are the sticky partitions): > A=MemberAssignment(targetPartitions={topic2=[1], *topic1=[0]*}), > B=MemberAssignment(targetPartitions={*topic2=[0]*, *topic1=[1]*}), > C=MemberAssignment(targetPartitions={topic2=[2], *topic1=[2]*}) > *As seen above, co-partitioning is expected but not returned.* -- This message was sent by Atlassian Jira (v8.20.10#820010)
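The co-partitioned outcome the ticket expects can be illustrated with a plain (non-sticky) range assignment: with co-subscribed topics of equal partition counts, member i receives partition i of every topic. This is an illustrative sketch only, not Kafka's actual assignor:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative range assignment over co-subscribed topics with equal
// partition counts. Member order is fixed, so member i gets partition i of
// every topic -- the co-partitioned result KAFKA-16944 expects.
class RangeCoPartitionSketch {
    static Map<String, List<String>> assign(List<String> members,
                                            Map<String, Integer> topicPartitions) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String m : members)
            out.put(m, new ArrayList<>());
        for (Map.Entry<String, Integer> e : topicPartitions.entrySet()) {
            // Assumes partitions divide evenly among members, as in the
            // ticket's 3-partition / 3-member example.
            int perMember = e.getValue() / members.size();
            for (int p = 0; p < e.getValue(); p++)
                out.get(members.get(p / perMember)).add(e.getKey() + "-" + p);
        }
        return out;
    }
}
```

For topic1 and topic2 with 3 partitions each and members A, B, C, this yields A=[topic1-0, topic2-0], B=[topic1-1, topic2-1], C=[topic1-2, topic2-2] — the expected final assignment quoted above. The bug is that adding stickiness on top of this breaks the alignment.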
[jira] [Commented] (KAFKA-16937) Consider inlineing Time#waitObject to ProducerMetadata#awaitUpdate
[ https://issues.apache.org/jira/browse/KAFKA-16937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854251#comment-17854251 ] dujian0068 commented on KAFKA-16937: Hello: The developer of {{Time#waitObject()}} seems to have wanted to provide a unified synchronization method based on {{Time}}; I don't think it is necessary to remove it > Consider inlineing Time#waitObject to ProducerMetadata#awaitUpdate > -- > > Key: KAFKA-16937 > URL: https://issues.apache.org/jira/browse/KAFKA-16937 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: PoAn Yang >Priority: Minor > > Time#waitObject is implemented by while-loop and it is used by > `ProducerMetadata` only. Hence, this jira can include following changes: > 1. move `Time#waitObject` to `ProducerMetadata#awaitUpdate` > 2. ProducerMetadata#awaitUpdate can throw "exact" TimeoutException [0] > [0] > https://github.com/apache/kafka/blob/23fe71d579f84d59ebfe6d5a29e688315cec1285/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1176 -- This message was sent by Atlassian Jira (v8.20.10#820010)
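For reference, the pattern under discussion — a monitor wait-loop that throws a precise TimeoutException once the deadline passes — can be sketched as below. This is a simplified stand-in: the real ProducerMetadata tracks a metadata version and uses Kafka's Time abstraction, both elided here:

```java
import java.util.concurrent.TimeoutException;

// Simplified sketch of a wait-loop like Time#waitObject inlined into an
// awaitUpdate-style method; a boolean flag stands in for the metadata version.
class AwaitUpdateSketch {
    private final Object lock = new Object();
    private boolean updated = false;

    // Called when a metadata update arrives.
    void markUpdated() {
        synchronized (lock) {
            updated = true;
            lock.notifyAll();
        }
    }

    // Waits for an update, throwing an "exact" TimeoutException (one that
    // names what timed out) once the deadline passes.
    void awaitUpdate(long timeoutMs) throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        synchronized (lock) {
            while (!updated) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0)
                    throw new TimeoutException(
                        "Metadata update not received within " + timeoutMs + " ms");
                lock.wait(remaining);
            }
        }
    }
}
```

The loop re-checks the condition after every wake-up, which is what makes spurious wakeups harmless; inlining it would simply move this logic from the shared Time utility into ProducerMetadata itself.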
[jira] [Commented] (KAFKA-16929) Conside defining kafka-specified assertion to unify testing style
[ https://issues.apache.org/jira/browse/KAFKA-16929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853863#comment-17853863 ] dujian0068 commented on KAFKA-16929: I agree with your point that we need to unify the usage. But I think Kafka's own assertions alone cannot achieve this goal; someone will always use JUnit and Hamcrest in the future, unless they are disabled or code checks are added to prevent the JUnit and Hamcrest assertion classes from being used. Therefore, we could try to add some code checks and disable one of Hamcrest and JUnit. Chia-Ping Tsai (Jira) wrote on Tue, Jun 11, 2024, at 09:42: > Conside defining kafka-specified assertion to unify testing style > - > > Key: KAFKA-16929 > URL: https://issues.apache.org/jira/browse/KAFKA-16929 > Project: Kafka > Issue Type: Wish >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > There are many contributors who trying to fix chaos of kafka testing. That > includes following huge works: > # replace powermock/easymock by mockito (KAFKA-7438) > # replace junit 4 assertion by junit 5 (KAFKA-7339) > We take 6 years to complete the migration for task 1. The second task is in > progress and I hope it can be addressed in 4.0.0 > When reviewing I noticed there are many different tastes in code base. That > is why the task 1 is such difficult to rewrite. Now, the rewriting of > "assertion" is facing the same issue, and I feel the usage of "assertion" is > even more awkward than "mockito" due to following reason. 
> # there are two "different" assertion style in code base - hamcrest and > junit - that is confused to developers > # > https://github.com/apache/kafka/pull/15730#discussion_r1567676845 > # third-party assertion does not offer good error message, so we need to use > non-common style to get useful output > https://github.com/apache/kafka/pull/16253#discussion_r1633406693 > IMHO, we should consider having our kafka-specified assertion style. Than can > bring following benefit. > # unify the assertion style of whole project > # apply customized assertion. for example: > ## assertEqual(List, List, F)) > ## assertTrue(Supplier, Duration) - equal to `TestUtils.waitForCondition` > # auto-generate useful error message. For example: assertEqual(0, list) -> > print the list > In short, I'd like to add a new module to define common assertions, and then > apply it to code base slowly. > All feedback/responses/objections are welcomed :) > -- This message was sent by Atlassian Jira (v8.20.10#820010)
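A minimal sketch of what such a kafka-specified assertion might look like, following the ticket's assertEqual(List, List, F) example. The class name and the error-message format are assumptions; the point is that the failure message prints the offending values automatically:

```java
import java.util.List;
import java.util.function.BiPredicate;

// Hypothetical kafka-specified assertion helper, per the ticket's examples.
final class KafkaAssertions {
    private KafkaAssertions() { }

    // Compares two lists element-wise with a caller-supplied predicate and,
    // on failure, auto-generates a message showing both lists.
    static <T> void assertEqual(List<T> expected, List<T> actual, BiPredicate<T, T> eq) {
        if (expected.size() != actual.size())
            throw new AssertionError(
                "size mismatch: expected " + expected + " but was " + actual);
        for (int i = 0; i < expected.size(); i++)
            if (!eq.test(expected.get(i), actual.get(i)))
                throw new AssertionError(
                    "mismatch at index " + i + ": expected " + expected + " but was " + actual);
    }
}
```

A dedicated module like this would give one assertion entry point regardless of whether JUnit or Hamcrest sits underneath, which is the unification the ticket is after.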
[jira] [Commented] (KAFKA-16929) Conside defining kafka-specified assertion to unify testing style
[ https://issues.apache.org/jira/browse/KAFKA-16929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853851#comment-17853851 ] dujian0068 commented on KAFKA-16929: Hello: Unifying the way assertions are written is a valuable thing, but is it really necessary to develop an assertion component? If you develop Kafka-specific assertions, do you intend to disable Hamcrest and JUnit? > Conside defining kafka-specified assertion to unify testing style > - > > Key: KAFKA-16929 > URL: https://issues.apache.org/jira/browse/KAFKA-16929 > Project: Kafka > Issue Type: Wish >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > There are many contributors who trying to fix chaos of kafka testing. That > includes following huge works: > # replace powermock/easymock by mockito (KAFKA-7438) > # replace junit 4 assertion by junit 5 (KAFKA-7339) > We take 6 years to complete the migration for task 1. The second task is in > progress and I hope it can be addressed in 4.0.0 > When reviewing I noticed there are many different tastes in code base. That > is why the task 1 is such difficult to rewrite. Now, the rewriting of > "assertion" is facing the same issue, and I feel the usage of "assertion" is > even more awkward than "mockito" due to following reason. > # there are two "different" assertion style in code base - hamcrest and > junit - that is confused to developers > # > https://github.com/apache/kafka/pull/15730#discussion_r1567676845 > # third-party assertion does not offer good error message, so we need to use > non-common style to get useful output > https://github.com/apache/kafka/pull/16253#discussion_r1633406693 > IMHO, we should consider having our kafka-specified assertion style. Than can > bring following benefit. > # unify the assertion style of whole project > # apply customized assertion. 
for example: > ## assertEqual(List, List, F)) > ## assertTrue(Supplier, Duration) - equal to `TestUtils.waitForCondition` > # auto-generate useful error message. For example: assertEqual(0, list) -> > print the list > In short, I'd like to add a new module to define common assertions, and then > apply it to code base slowly. > All feedback/responses/objections are welcomed :) > -- This message was sent by Atlassian Jira (v8.20.10#820010)