[jira] [Commented] (KAFKA-12708) Rewrite org.apache.kafka.test.Microbenchmarks by JMH
[ https://issues.apache.org/jira/browse/KAFKA-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849793#comment-17849793 ] GeordieMai commented on KAFKA-12708: [~brandboat] Sure, you can take it :) > Rewrite org.apache.kafka.test.Microbenchmarks by JMH > > > Key: KAFKA-12708 > URL: https://issues.apache.org/jira/browse/KAFKA-12708 > Project: Kafka > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Minor > > The benchmark code is a bit obsolete and it would be better to rewrite it by > JMH -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-12708) Rewrite org.apache.kafka.test.Microbenchmarks by JMH
[ https://issues.apache.org/jira/browse/KAFKA-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-12708: -- Assignee: Kuan Po Tseng (was: GeordieMai) > Rewrite org.apache.kafka.test.Microbenchmarks by JMH > > > Key: KAFKA-12708 > URL: https://issues.apache.org/jira/browse/KAFKA-12708 > Project: Kafka > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: Kuan Po Tseng >Priority: Minor > > The benchmark code is a bit obsolete and it would be better to rewrite it by > JMH -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-12708) Rewrite org.apache.kafka.test.Microbenchmarks by JMH
[ https://issues.apache.org/jira/browse/KAFKA-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-12708: -- Assignee: GeordieMai > Rewrite org.apache.kafka.test.Microbenchmarks by JMH > > > Key: KAFKA-12708 > URL: https://issues.apache.org/jira/browse/KAFKA-12708 > Project: Kafka > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Minor > > The benchmark code is a bit obsolete and it would be better to rewrite it by > JMH -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-12337) provide full scala api for operators naming
[ https://issues.apache.org/jira/browse/KAFKA-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai resolved KAFKA-12337. Resolution: Duplicate > provide full scala api for operators naming > > > Key: KAFKA-12337 > URL: https://issues.apache.org/jira/browse/KAFKA-12337 > Project: Kafka > Issue Type: Improvement > Components: streams >Affects Versions: 2.7.0 >Reporter: Ramil Israfilov >Priority: Major > > Kafka Streams Java DSL provides possibility to do custom naming for all > operators via Named, Grouped, Consumed objects (there is a separate dev guide > page about it > [https://kafka.apache.org/27/documentation/streams/developer-guide/dsl-topology-naming.html] > ) > But Scala api for Kafka Streams provide only partial support. > For example following API's are missing custom naming: > filter,selectKey, map, mapValues, flatMap... > Probably there is same issue for other scala objects. > As workaround I have to do quite ugly calls to inner KStream java class and > perform scala2java and back conversions. > Would be really handy if all custom naming API's will be also supported on > Scala Kafka Streams DSL -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-12337) provide full scala api for operators naming
[ https://issues.apache.org/jira/browse/KAFKA-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17286994#comment-17286994 ] GeordieMai commented on KAFKA-12337: [~ramazanyich] Hello ~ I found the following in the Scala KStream: {code:java} def filter(predicate: (K, V) => Boolean, named: Named) def selectKey[KR](mapper: (K, V) => KR, named: Named) def mapValues[VR](mapper: V => VR, named: Named) ...{code} Is this what you want? > provide full scala api for operators naming > > > Key: KAFKA-12337 > URL: https://issues.apache.org/jira/browse/KAFKA-12337 > Project: Kafka > Issue Type: Improvement > Components: streams >Affects Versions: 2.7.0 >Reporter: Ramil Israfilov >Priority: Major > > Kafka Streams Java DSL provides possibility to do custom naming for all > operators via Named, Grouped, Consumed objects (there is a separate dev guide > page about it > [https://kafka.apache.org/27/documentation/streams/developer-guide/dsl-topology-naming.html] > ) > But Scala api for Kafka Streams provide only partial support. > For example following API's are missing custom naming: > filter,selectKey, map, mapValues, flatMap... > Probably there is same issue for other scala objects. > As workaround I have to do quite ugly calls to inner KStream java class and > perform scala2java and back conversions. > Would be really handy if all custom naming API's will be also supported on > Scala Kafka Streams DSL -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (KAFKA-12336) custom stream naming does not work while calling stream[K, V](topicPattern: Pattern) API with named Consumed parameter
[ https://issues.apache.org/jira/browse/KAFKA-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-12336: -- Assignee: GeordieMai > custom stream naming does not work while calling stream[K, V](topicPattern: > Pattern) API with named Consumed parameter > --- > > Key: KAFKA-12336 > URL: https://issues.apache.org/jira/browse/KAFKA-12336 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 2.7.0 >Reporter: Ramil Israfilov >Assignee: GeordieMai >Priority: Minor > Labels: easy-fix, newbie > > In our Scala application I am trying to implement custom naming for Kafka > Streams application nodes. > We are using topicPattern for our stream source. > Here is the API which I am calling: > > {code:java} > val topicsPattern="t-[A-Za-z0-9-].suffix" > val operations: KStream[MyKey, MyValue] = > builder.stream[MyKey, MyValue](Pattern.compile(topicsPattern))( > Consumed.`with`[MyKey, MyValue].withName("my-fancy-name") > ) > {code} > Despite the fact that I am providing Consumed with a custom name, the topology > describe still shows "KSTREAM-SOURCE-00" as the name for our stream source. > It is not a problem if I just use a name for a topic. But our application needs > to get messages from a set of topics based on topic-name pattern matching. > After checking the kafka code I see that > org.apache.kafka.streams.kstream.internals.InternalStreamBuilder (on line > 103) has a bug: > {code:java} > public KStream stream(final Pattern topicPattern, >final ConsumedInternal consumed) { > final String name = newProcessorName(KStreamImpl.SOURCE_NAME); > final StreamSourceNode streamPatternSourceNode = new > StreamSourceNode<>(name, topicPattern, consumed); > {code} > node name construction does not take into account the name of the consumed > parameter. 
> For example code for another stream api call with topic name does it > correctly: > {code:java} > final String name = new > NamedInternal(consumed.name()).orElseGenerateWithPrefix(this, > KStreamImpl.SOURCE_NAME); > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
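The correct call site the reporter quotes boils down to a simple preference rule: use the name carried by `Consumed` if one was supplied, otherwise generate a prefixed processor name. A minimal, hypothetical Java stand-in for that rule (`NodeNamer` and its method are illustrative stand-ins, not Kafka's actual `NamedInternal` class):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the name-resolution rule behind Kafka's
// NamedInternal.orElseGenerateWithPrefix: prefer the user-supplied name,
// otherwise generate "PREFIX-000000000N" from an internal counter.
final class NodeNamer {
    private final AtomicInteger index = new AtomicInteger();

    String orElseGenerateWithPrefix(String providedName, String prefix) {
        if (providedName != null) {
            return providedName; // custom name, e.g. from Consumed.withName(...)
        }
        return String.format("%s%010d", prefix, index.getAndIncrement());
    }
}
```

The pattern-based `stream()` overload skips this preference step and always generates a name, which is exactly the bug being reported.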
[jira] [Comment Edited] (KAFKA-12306) Avoid using plaintext/hard-coded key while generating secret key
[ https://issues.apache.org/jira/browse/KAFKA-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17280744#comment-17280744 ] GeordieMai edited comment on KAFKA-12306 at 2/8/21, 4:21 AM: - [~Vicky Zhang] hello . I think the hard-coded text `Password` is just a hint message . you can see here . https://github.com/apache/kafka/blob/42a9355e606bd2bbdb7fd0dd348805edc189/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientCallbackHandler.java#L68 was (Author: geordie): [~Vicky Zhang] hello . I think the hard-coded text `Password` is just a hint message . you can see here . https://github.com/a0x8o/kafka/blob/88ad7d1b7f816ddce65c3b4fa188c4781fe75b67/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientCallbackHandler.java#L68 > Avoid using plaintext/hard-coded key while generating secret key > - > > Key: KAFKA-12306 > URL: https://issues.apache.org/jira/browse/KAFKA-12306 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Vicky Zhang >Priority: Major > > We are a security research team at Virginia Tech. We are doing an empirical > study about the usefulness of the existing security vulnerability detection > tools. The following is a reported vulnerability by certain tools. We'll so > appreciate it if you can give any feedback on it. > *Security Location:* > in file > kafka/clients/src/main/java/org/apache/kafka/common/security/scram/internals/ScramFormatter.java > line 58 and 76, new SecretKeySpec(key, algorithm) is invoked with hard-code > key, which is defined in file > kafka/clients/src/main/java/org/apache/kafka/common/security/scram/internals/ScramSaslClient.java > line 127 -> 189. > *Security Impact:* > Cryptographic keys should not be kept in the source code. The source code can > be widely shared in an enterprise environment and is certainly shared in open > source. 
The use of a hard-coded cryptographic key significantly increases the > possibility that encrypted data may be recovered. > *suggestions:* > To be managed safely, passwords and secret keys should be stored in separate > configuration files. > Useful link: > [https://cwe.mitre.org/data/definitions/321.html] > [https://www.appmarq.com/public/tqi,1039028,CWE-327-Avoid-weak-encryption-providing-not-sufficient-key-size-JEE] > *Please share with us your opinions/comments if there is any:* > Is the bug report helpful? -- This message was sent by Atlassian Jira (v8.3.4#803005)
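As a general illustration of the remediation the report suggests (this is not Kafka's actual code; note that `ScramFormatter` derives salted SCRAM credentials rather than using a long-lived static key), a key can be derived at runtime from an externally supplied password instead of being embedded in source. A sketch using only the JDK's `javax.crypto`; all class and method names here are hypothetical:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;

// Hypothetical example of CWE-321 remediation: the secret comes from
// configuration (e.g. an environment variable), and the SecretKeySpec is
// derived at runtime with PBKDF2 instead of hard-coding key bytes in source.
final class KeyFromConfig {

    // Derive a 256-bit HMAC key from a configured password and a salt.
    static SecretKeySpec deriveKey(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 65_536, 256);
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] keyBytes = factory.generateSecret(spec).getEncoded();
            return new SecretKeySpec(keyBytes, "HmacSHA256");
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("key derivation failed", e);
        }
    }

    // A fresh random salt, stored alongside the derived credential.
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}
```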
[jira] [Commented] (KAFKA-12306) Avoid using plaintext/hard-coded key while generating secret key
[ https://issues.apache.org/jira/browse/KAFKA-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17280744#comment-17280744 ] GeordieMai commented on KAFKA-12306: [~Vicky Zhang] hello . I think the hard-coded text `Password` is just a hint message . you can see here . https://github.com/a0x8o/kafka/blob/88ad7d1b7f816ddce65c3b4fa188c4781fe75b67/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientCallbackHandler.java#L68 > Avoid using plaintext/hard-coded key while generating secret key > - > > Key: KAFKA-12306 > URL: https://issues.apache.org/jira/browse/KAFKA-12306 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Vicky Zhang >Priority: Major > > We are a security research team at Virginia Tech. We are doing an empirical > study about the usefulness of the existing security vulnerability detection > tools. The following is a reported vulnerability by certain tools. We'll so > appreciate it if you can give any feedback on it. > *Security Location:* > in file > kafka/clients/src/main/java/org/apache/kafka/common/security/scram/internals/ScramFormatter.java > line 58 and 76, new SecretKeySpec(key, algorithm) is invoked with hard-code > key, which is defined in file > kafka/clients/src/main/java/org/apache/kafka/common/security/scram/internals/ScramSaslClient.java > line 127 -> 189. > *Security Impact:* > Cryptographic keys should not be kept in the source code. The source code can > be widely shared in an enterprise environment and is certainly shared in open > source. The use of a hard-coded cryptographic key significantly increases the > possibility that encrypted data may be recovered. > *suggestions:* > To be managed safely, passwords and secret keys should be stored in separate > configuration files. 
> Useful link: > [https://cwe.mitre.org/data/definitions/321.html] > [https://www.appmarq.com/public/tqi,1039028,CWE-327-Avoid-weak-encryption-providing-not-sufficient-key-size-JEE] > *Please share with us your opinions/comments if there is any:* > Is the bug report helpful? -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-12196) Migrate connect:api module to JUnit 5
[ https://issues.apache.org/jira/browse/KAFKA-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17264728#comment-17264728 ] GeordieMai commented on KAFKA-12196: [~chia7712] I assigned this to myself. If you are already working on it, reassign it to yourself. Thank you. > Migrate connect:api module to JUnit 5 > - > > Key: KAFKA-12196 > URL: https://issues.apache.org/jira/browse/KAFKA-12196 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (KAFKA-12196) Migrate connect:api module to JUnit 5
[ https://issues.apache.org/jira/browse/KAFKA-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-12196: -- Assignee: GeordieMai > Migrate connect:api module to JUnit 5 > - > > Key: KAFKA-12196 > URL: https://issues.apache.org/jira/browse/KAFKA-12196 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (KAFKA-12191) SslTransportTls12Tls13Test can replace 'assumeTrue' by (junit 5) conditional test
[ https://issues.apache.org/jira/browse/KAFKA-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-12191: -- Assignee: GeordieMai (was: Chia-Ping Tsai) > SslTransportTls12Tls13Test can replace 'assumeTrue' by (junit 5) conditional > test > - > > Key: KAFKA-12191 > URL: https://issues.apache.org/jira/browse/KAFKA-12191 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Minor > > the conditional test is more readable than assumeTest in testing code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-12191) SslTransportTls12Tls13Test can replace 'assumeTrue' by (junit 5) conditional test
[ https://issues.apache.org/jira/browse/KAFKA-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17264299#comment-17264299 ] GeordieMai commented on KAFKA-12191: [~chia7712] Thank you > SslTransportTls12Tls13Test can replace 'assumeTrue' by (junit 5) conditional > test > - > > Key: KAFKA-12191 > URL: https://issues.apache.org/jira/browse/KAFKA-12191 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > > the conditional test is more readable than assumeTest in testing code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-12191) SslTransportTls12Tls13Test can replace 'assumeTrue' by (junit 5) conditional test
[ https://issues.apache.org/jira/browse/KAFKA-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17264295#comment-17264295 ] GeordieMai commented on KAFKA-12191: [~chia7712] can I also take this ? > SslTransportTls12Tls13Test can replace 'assumeTrue' by (junit 5) conditional > test > - > > Key: KAFKA-12191 > URL: https://issues.apache.org/jira/browse/KAFKA-12191 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > > the conditional test is more readable than assumeTest in testing code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (KAFKA-12189) ShellTest can replace 'assumeTrue' by (junit 5) conditional test
[ https://issues.apache.org/jira/browse/KAFKA-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-12189: -- Assignee: GeordieMai (was: Chia-Ping Tsai) > ShellTest can replace 'assumeTrue' by (junit 5) conditional test > > > Key: KAFKA-12189 > URL: https://issues.apache.org/jira/browse/KAFKA-12189 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Minor > > from https://github.com/apache/kafka/pull/9874#discussion_r556664433 > the conditional test is more readable than assumeTest in testing code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-12189) ShellTest can replace 'assumeTrue' by (junit 5) conditional test
[ https://issues.apache.org/jira/browse/KAFKA-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17264259#comment-17264259 ] GeordieMai commented on KAFKA-12189: [~chia7712] Hello, are you working on this? If not, can I take it? > ShellTest can replace 'assumeTrue' by (junit 5) conditional test > > > Key: KAFKA-12189 > URL: https://issues.apache.org/jira/browse/KAFKA-12189 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > > from https://github.com/apache/kafka/pull/9874#discussion_r556664433 > the conditional test is more readable than assumeTest in testing code. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (KAFKA-10885) Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of (unnecessary) ignored test cases
[ https://issues.apache.org/jira/browse/KAFKA-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai updated KAFKA-10885: --- Description: {quote}private void assumeAtLeastV2OrNotZstd(byte magic) { assumeTrue(compressionType != CompressionType.ZSTD || magic >= MAGIC_VALUE_V2); }{quote} MemoryRecordsBuilderTest/MemoryRecordsTest use aforementioned method to avoid testing zstd on unsupported magic code. However, it produces some unnecessary ignored test cases. Personally, it could be separated to different test classes for each magic code as we do assign specify magic code to each test cases. was: {quote} private void assumeAtLeastV2OrNotZstd(byte magic) { assumeTrue(compressionType != CompressionType.ZSTD || magic >= MAGIC_VALUE_V2); } {quote} MemoryRecordsBuilderTest/MemoryRecordsTest use aforementioned method to avoid testing zstd on unsupported magic code. However, it produces some unnecessary ignored test cases. Personally, it could be separated to different test classes for each magic code as we do assign specify magic code to each test cases. > Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of > (unnecessary) ignored test cases > -- > > Key: KAFKA-10885 > URL: https://issues.apache.org/jira/browse/KAFKA-10885 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Major > Labels: newbie > > {quote}private void assumeAtLeastV2OrNotZstd(byte magic) > { assumeTrue(compressionType != CompressionType.ZSTD || magic > >= MAGIC_VALUE_V2); }{quote} > MemoryRecordsBuilderTest/MemoryRecordsTest use aforementioned method to avoid > testing zstd on unsupported magic code. However, it produces some unnecessary > ignored test cases. Personally, it could be separated to different test > classes for each magic code as we do assign specify magic code to each test > cases. > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
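The refactoring idea in the description can be illustrated without JUnit: rather than entering every (compression, magic) pair and skipping invalid ones with assumeTrue at runtime, build the parameter list so that unsupported pairs are never generated in the first place. A hypothetical sketch; the enum values and constant mirror the constraint in the quoted method but are not Kafka's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the refactoring idea: generate only valid (compression, magic)
// combinations up front, so no test case is ever created just to be skipped.
final class MagicParams {
    enum Compression { NONE, GZIP, SNAPPY, LZ4, ZSTD }

    static final byte MAGIC_VALUE_V2 = 2;

    static List<Object[]> validCombinations() {
        List<Object[]> params = new ArrayList<>();
        for (byte magic = 0; magic <= MAGIC_VALUE_V2; magic++) {
            for (Compression c : Compression.values()) {
                if (c == Compression.ZSTD && magic < MAGIC_VALUE_V2) {
                    continue; // ZSTD requires magic >= 2, so never emit this pair
                }
                params.add(new Object[]{c, magic});
            }
        }
        return params;
    }
}
```

With this shape, test reports contain no ignored cases at all, which is the complaint in the ticket.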
[jira] [Comment Edited] (KAFKA-10887) Migrate log4j-appender module to JUnit 5
[ https://issues.apache.org/jira/browse/KAFKA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254369#comment-17254369 ] GeordieMai edited comment on KAFKA-10887 at 12/24/20, 2:30 AM: --- [~chia7712] Can I take this? It seems like a good fit for a newbie like me. was (Author: geordie): [~chia7712] can I take this ? It seems to be able to give newbie . likes me > Migrate log4j-appender module to JUnit 5 > > > Key: KAFKA-10887 > URL: https://issues.apache.org/jira/browse/KAFKA-10887 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-10887) Migrate log4j-appender module to JUnit 5
[ https://issues.apache.org/jira/browse/KAFKA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254369#comment-17254369 ] GeordieMai commented on KAFKA-10887: [~chia7712] can I take this ? It seems to be able to give newbie . likes me > Migrate log4j-appender module to JUnit 5 > > > Key: KAFKA-10887 > URL: https://issues.apache.org/jira/browse/KAFKA-10887 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (KAFKA-10885) Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of (unnecessary) ignored test cases
[ https://issues.apache.org/jira/browse/KAFKA-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-10885: -- Assignee: GeordieMai > Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of > (unnecessary) ignored test cases > -- > > Key: KAFKA-10885 > URL: https://issues.apache.org/jira/browse/KAFKA-10885 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Major > Labels: newbie > > {quote} > private void assumeAtLeastV2OrNotZstd(byte magic) { > assumeTrue(compressionType != CompressionType.ZSTD || magic >= > MAGIC_VALUE_V2); > } > {quote} > MemoryRecordsBuilderTest/MemoryRecordsTest use aforementioned method to avoid > testing zstd on unsupported magic code. However, it produces some unnecessary > ignored test cases. Personally, it could be separated to different test > classes for each magic code as we do assign specify magic code to each test > cases. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-10885) Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of (unnecessary) ignored test cases
[ https://issues.apache.org/jira/browse/KAFKA-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253905#comment-17253905 ] GeordieMai commented on KAFKA-10885: [~chia7712] can I assign this to myself? > Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of > (unnecessary) ignored test cases > -- > > Key: KAFKA-10885 > URL: https://issues.apache.org/jira/browse/KAFKA-10885 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Priority: Major > Labels: newbie > > {quote} > private void assumeAtLeastV2OrNotZstd(byte magic) { > assumeTrue(compressionType != CompressionType.ZSTD || magic >= > MAGIC_VALUE_V2); > } > {quote} > MemoryRecordsBuilderTest/MemoryRecordsTest use aforementioned method to avoid > testing zstd on unsupported magic code. However, it produces some unnecessary > ignored test cases. Personally, it could be separated to different test > classes for each magic code as we do assign specify magic code to each test > cases. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-10878) testReadWhenOptionalDataMissingAtTheEndIsNotTolerated/testReadWithMissingNonOptionalExtraDataAtTheEnd should check the error message
[ https://issues.apache.org/jira/browse/KAFKA-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253259#comment-17253259 ] GeordieMai commented on KAFKA-10878: [~chia7712] thank you > testReadWhenOptionalDataMissingAtTheEndIsNotTolerated/testReadWithMissingNonOptionalExtraDataAtTheEnd > should check the error message > > > Key: KAFKA-10878 > URL: https://issues.apache.org/jira/browse/KAFKA-10878 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Major > Labels: newbie > > {quote} > SchemaException e = assertThrows(SchemaException.class, () -> > newSchema.read(buffer)); > e.getMessage().contains("Missing value for field 'field2' which has no > default value"); > {quote} > It does not use the assert check. -- This message was sent by Atlassian Jira (v8.3.4#803005)
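The problem in the quoted snippet is that `e.getMessage().contains(...)` computes a boolean and throws it away, so the test can never fail on a wrong message; the fix is to wrap the check in an assertion. A self-contained sketch using tiny stand-ins for JUnit's assertThrows/assertTrue (SchemaException here is a simplified placeholder, not Kafka's class):

```java
// Minimal stand-ins for the JUnit pattern, to show why the bare
// contains(...) call is a no-op and what the asserted version looks like.
final class MessageCheck {
    static class SchemaException extends RuntimeException {
        SchemaException(String msg) { super(msg); }
    }

    // Runs the action and returns the thrown exception of the expected type.
    static <T extends Throwable> T assertThrows(Class<T> type, Runnable action) {
        try {
            action.run();
        } catch (Throwable t) {
            if (type.isInstance(t)) return type.cast(t);
            throw new AssertionError("unexpected exception: " + t, t);
        }
        throw new AssertionError("expected " + type.getSimpleName() + " but nothing was thrown");
    }

    // The missing piece in the quoted test: actually fail on a false condition.
    static void assertTrue(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }
}
```

The corrected test then reads `assertTrue(e.getMessage().contains("Missing value for field 'field2' which has no default value"), ...)` instead of discarding the boolean.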
[jira] [Assigned] (KAFKA-10878) testReadWhenOptionalDataMissingAtTheEndIsNotTolerated/testReadWithMissingNonOptionalExtraDataAtTheEnd should check the error message
[ https://issues.apache.org/jira/browse/KAFKA-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GeordieMai reassigned KAFKA-10878: -- Assignee: GeordieMai > testReadWhenOptionalDataMissingAtTheEndIsNotTolerated/testReadWithMissingNonOptionalExtraDataAtTheEnd > should check the error message > > > Key: KAFKA-10878 > URL: https://issues.apache.org/jira/browse/KAFKA-10878 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Major > Labels: newbie > > {quote} > SchemaException e = assertThrows(SchemaException.class, () -> > newSchema.read(buffer)); > e.getMessage().contains("Missing value for field 'field2' which has no > default value"); > {quote} > It does not use the assert check. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-10878) testReadWhenOptionalDataMissingAtTheEndIsNotTolerated/testReadWithMissingNonOptionalExtraDataAtTheEnd should check the error message
[ https://issues.apache.org/jira/browse/KAFKA-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253253#comment-17253253 ] GeordieMai commented on KAFKA-10878: [~chia7712] can I take this ? > testReadWhenOptionalDataMissingAtTheEndIsNotTolerated/testReadWithMissingNonOptionalExtraDataAtTheEnd > should check the error message > > > Key: KAFKA-10878 > URL: https://issues.apache.org/jira/browse/KAFKA-10878 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Priority: Major > Labels: newbie > > {quote} > SchemaException e = assertThrows(SchemaException.class, () -> > newSchema.read(buffer)); > e.getMessage().contains("Missing value for field 'field2' which has no > default value"); > {quote} > It does not use the assert check. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-10874) Fix flaky ClientQuotasRequestTest.testAlterIpQuotasRequest
[ https://issues.apache.org/jira/browse/KAFKA-10874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253220#comment-17253220 ] GeordieMai commented on KAFKA-10874: [~chia7712] can I take this? :) > Fix flaky ClientQuotasRequestTest.testAlterIpQuotasRequest > -- > > Key: KAFKA-10874 > URL: https://issues.apache.org/jira/browse/KAFKA-10874 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-9748/8/testReport/junit/kafka.server/ClientQuotasRequestTest/Build___JDK_15___testAlterIpQuotasRequest/?cloudbees-analytics-link=scm-reporting%2Ftests%2Ffailed] > > {quote} > Build / JDK 15 / kafka.server.ClientQuotasRequestTest.testAlterIpQuotasRequest > {quote} > > {quote} > java.lang.AssertionError: expected:<150.0> but was:<100.0> > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:685) > at > kafka.server.ClientQuotasRequestTest.$anonfun$testAlterIpQuotasRequest$1(ClientQuotasRequestTest.scala:215) > at > kafka.server.ClientQuotasRequestTest.$anonfun$testAlterIpQuotasRequest$1$adapted(ClientQuotasRequestTest.scala:206) > at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563) > at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561) > at scala.collection.AbstractIterable.foreach(Iterable.scala:919) > at > kafka.server.ClientQuotasRequestTest.verifyIpQuotas$1(ClientQuotasRequestTest.scala:206) > at > kafka.server.ClientQuotasRequestTest.testAlterIpQuotasRequest(ClientQuotasRequestTest.scala:228) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64) > at > 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58) > at > 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38) > at > org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62) > at > org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:
[jira] [Commented] (KAFKA-10841) LogReadResult should be able to converted to FetchPartitionData
[ https://issues.apache.org/jira/browse/KAFKA-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17248495#comment-17248495 ] GeordieMai commented on KAFKA-10841: Hello, I'm a newbie too, so please let me do it. I have already written it and am trying to work out whether the failed test is related or not. > LogReadResult should be able to converted to FetchPartitionData > > > Key: KAFKA-10841 > URL: https://issues.apache.org/jira/browse/KAFKA-10841 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: GeordieMai >Priority: Minor > Labels: newbie > > There are duplicate code which try to convert LogReadResult to > FetchPartitionData. It seems to me the duplicate code can be eliminated by > moving the conversion to LogReadResult. > occurrence 1: > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L1076 > occurrence 2: > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/DelayedFetch.scala#L189 > -- This message was sent by Atlassian Jira (v8.3.4#803005)
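The deduplication the ticket proposes can be sketched as follows. The real classes are Scala (ReplicaManager.scala, DelayedFetch.scala), so this is a heavily simplified Java stand-in with hypothetical fields, showing only the shape of the change: the conversion lives on LogReadResult, and both call sites invoke it instead of repeating the mapping inline.

```java
// Hypothetical, simplified shapes; Kafka's real LogReadResult and
// FetchPartitionData carry many more fields. The point is the single
// conversion method replacing two inlined copies of the same mapping.
final class LogReadResult {
    final long highWatermark;
    final long logStartOffset;

    LogReadResult(long highWatermark, long logStartOffset) {
        this.highWatermark = highWatermark;
        this.logStartOffset = logStartOffset;
    }

    // One conversion point, usable from both ReplicaManager and DelayedFetch.
    FetchPartitionData toFetchPartitionData() {
        return new FetchPartitionData(highWatermark, logStartOffset);
    }
}

final class FetchPartitionData {
    final long highWatermark;
    final long logStartOffset;

    FetchPartitionData(long highWatermark, long logStartOffset) {
        this.highWatermark = highWatermark;
        this.logStartOffset = logStartOffset;
    }
}
```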
[jira] [Commented] (KAFKA-10841) LogReadResult should be able to converted to FetchPartitionData
[ https://issues.apache.org/jira/browse/KAFKA-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247655#comment-17247655 ] GeordieMai commented on KAFKA-10841: [~chia7712] Can I take this? > LogReadResult should be able to converted to FetchPartitionData > > > Key: KAFKA-10841 > URL: https://issues.apache.org/jira/browse/KAFKA-10841 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Priority: Minor > Labels: newbie > > There is duplicated code that converts LogReadResult to > FetchPartitionData. It seems to me the duplication can be eliminated by > moving the conversion into LogReadResult. > occurrence 1: > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L1076 > occurrence 2: > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/DelayedFetch.scala#L189 > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (KAFKA-10790) Detect/Prevent Deadlock on Producer Network Thread
[ https://issues.apache.org/jira/browse/KAFKA-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17244252#comment-17244252 ] GeordieMai edited comment on KAFKA-10790 at 12/4/20, 7:23 PM: -- [~chia7712] Can I take this issue? When flush() is called in a callback, should we throw an exception to warn the user, make flush() a no-op, or make flush() work properly? What do you think? was (Author: geordie): [~chia7712] Can I take this issue? when flush method is called in callback , throw a exception to notify user to prevent it or just make flush method not working or make flush method work fun what do you think ? > Detect/Prevent Deadlock on Producer Network Thread > -- > > Key: KAFKA-10790 > URL: https://issues.apache.org/jira/browse/KAFKA-10790 > Project: Kafka > Issue Type: Improvement > Components: clients >Affects Versions: 2.6.0, 2.7.0 >Reporter: Gary Russell >Priority: Major > > I realize this is contrived, but I stumbled across the problem while testing > some library code with 2.7.0 RC3 (although the issue is not limited to 2.7). > For example, calling flush() in the producer callback deadlocks the network > thread (and any attempt to close the producer thereafter). > {code:java} > producer.send(new ProducerRecord("foo", "bar"), (rm, ex) -> { > producer.flush(); > }); > Thread.sleep(1000); > producer.close(); > {code} > It took some time to figure out why the close was blocked. > There is existing logic in close() to avoid blocking if it is called from the > callback; perhaps similar logic could be added to flush() (and any other > methods that might block), even if it means throwing an exception to make it > clear that you can't call flush() from the callback. > These stack traces are with the 2.6.0 client. 
> {noformat}
"main" #1 prio=5 os_prio=31 cpu=1333.10ms elapsed=13.05s tid=0x7ff259012800 nid=0x2803 in Object.wait() [0x7fda5000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(java.base@14.0.2/Native Method)
        - waiting on <0x000700d0> (a org.apache.kafka.common.utils.KafkaThread)
        at java.lang.Thread.join(java.base@14.0.2/Thread.java:1297)
        - locked <0x000700d0> (a org.apache.kafka.common.utils.KafkaThread)
        at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1205)
        at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1182)
        at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1158)
        at com.example.demo.Rk1Application.lambda$2(Rk1Application.java:55)

"kafka-producer-network-thread | producer-1" #24 daemon prio=5 os_prio=31 cpu=225.80ms elapsed=11.64s tid=0x7ff256963000 nid=0x7103 waiting on condition [0x700011d04000]
   java.lang.Thread.State: WAITING (parking)
        at jdk.internal.misc.Unsafe.park(java.base@14.0.2/Native Method)
        - parking to wait for <0x0007020b27e0> (a java.util.concurrent.CountDownLatch$Sync)
        at java.util.concurrent.locks.LockSupport.park(java.base@14.0.2/LockSupport.java:211)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@14.0.2/AbstractQueuedSynchronizer.java:714)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@14.0.2/AbstractQueuedSynchronizer.java:1046)
        at java.util.concurrent.CountDownLatch.await(java.base@14.0.2/CountDownLatch.java:232)
        at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
        at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:712)
        at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:)
        at com.example.demo.Rk1Application.lambda$3(Rk1Application.java:52)
        at com.example.demo.Rk1Application$$Lambda$528/0x000800e28840.onCompletion(Unknown Source)
        at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1363)
        at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:228)
        at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197)
        at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:653)
        at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:634)
        at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:554)
        at org.apache.kafka.cl
[jira] [Commented] (KAFKA-10790) Detect/Prevent Deadlock on Producer Network Thread
[ https://issues.apache.org/jira/browse/KAFKA-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17244252#comment-17244252 ] GeordieMai commented on KAFKA-10790: [~chia7712] Can I take this issue? When flush() is called in a callback, should we throw an exception to warn the user, make flush() a no-op, or make flush() work properly? What do you think? > Detect/Prevent Deadlock on Producer Network Thread > -- > > Key: KAFKA-10790 > URL: https://issues.apache.org/jira/browse/KAFKA-10790 > Project: Kafka > Issue Type: Improvement > Components: clients >Affects Versions: 2.6.0, 2.7.0 >Reporter: Gary Russell >Priority: Major > > I realize this is contrived, but I stumbled across the problem while testing > some library code with 2.7.0 RC3 (although the issue is not limited to 2.7). > For example, calling flush() in the producer callback deadlocks the network > thread (and any attempt to close the producer thereafter). > {code:java} > producer.send(new ProducerRecord("foo", "bar"), (rm, ex) -> { > producer.flush(); > }); > Thread.sleep(1000); > producer.close(); > {code} > It took some time to figure out why the close was blocked. > There is existing logic in close() to avoid blocking if it is called from the > callback; perhaps similar logic could be added to flush() (and any other > methods that might block), even if it means throwing an exception to make it > clear that you can't call flush() from the callback. > These stack traces are with the 2.6.0 client. 
> {noformat}
"main" #1 prio=5 os_prio=31 cpu=1333.10ms elapsed=13.05s tid=0x7ff259012800 nid=0x2803 in Object.wait() [0x7fda5000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(java.base@14.0.2/Native Method)
        - waiting on <0x000700d0> (a org.apache.kafka.common.utils.KafkaThread)
        at java.lang.Thread.join(java.base@14.0.2/Thread.java:1297)
        - locked <0x000700d0> (a org.apache.kafka.common.utils.KafkaThread)
        at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1205)
        at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1182)
        at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1158)
        at com.example.demo.Rk1Application.lambda$2(Rk1Application.java:55)

"kafka-producer-network-thread | producer-1" #24 daemon prio=5 os_prio=31 cpu=225.80ms elapsed=11.64s tid=0x7ff256963000 nid=0x7103 waiting on condition [0x700011d04000]
   java.lang.Thread.State: WAITING (parking)
        at jdk.internal.misc.Unsafe.park(java.base@14.0.2/Native Method)
        - parking to wait for <0x0007020b27e0> (a java.util.concurrent.CountDownLatch$Sync)
        at java.util.concurrent.locks.LockSupport.park(java.base@14.0.2/LockSupport.java:211)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@14.0.2/AbstractQueuedSynchronizer.java:714)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@14.0.2/AbstractQueuedSynchronizer.java:1046)
        at java.util.concurrent.CountDownLatch.await(java.base@14.0.2/CountDownLatch.java:232)
        at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
        at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:712)
        at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:)
        at com.example.demo.Rk1Application.lambda$3(Rk1Application.java:52)
        at com.example.demo.Rk1Application$$Lambda$528/0x000800e28840.onCompletion(Unknown Source)
        at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1363)
        at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:228)
        at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197)
        at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:653)
        at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:634)
        at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:554)
        at org.apache.kafka.clients.producer.internals.Sender.lambda$sendProduceRequest$0(Sender.java:743)
        at org.apache.kafka.clients.producer.internals.Sender$$Lambda$642/0x000800ea2040.onComplete(Unknown Source)
        at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
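The detection suggested in the issue can be sketched as a block-on-self check: the producer remembers its I/O thread and fails fast when a blocking call arrives on that same thread. This is not Kafka's implementation; the class, its fields, and the callback mechanism below are hypothetical simplifications that only demonstrate the guard:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// Hypothetical producer-like class: user callbacks run on a single I/O
// thread, and flush() refuses to run on that thread, turning a silent
// deadlock into an immediate, explanatory error.
class FlushGuardSketch {
    private final BlockingQueue<Runnable> callbacks = new ArrayBlockingQueue<>(16);
    private final Thread ioThread;
    private volatile boolean running = true;

    FlushGuardSketch() {
        ioThread = new Thread(() -> {
            while (running) {
                try {
                    callbacks.take().run(); // callbacks execute on the I/O thread
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "io-thread");
        ioThread.start();
    }

    void send(Runnable callback) {
        callbacks.add(callback);
    }

    void flush() {
        // Fail fast instead of deadlocking: the I/O thread can never wait
        // for work that it is itself responsible for completing.
        if (Thread.currentThread() == ioThread) {
            throw new IllegalStateException(
                    "flush() called from the I/O thread; this would deadlock");
        }
        // Real flushing (waiting for in-flight records) elided in this sketch.
    }

    void close() {
        running = false;
        ioThread.interrupt();
    }

    public static void main(String[] args) throws InterruptedException {
        FlushGuardSketch producer = new FlushGuardSketch();
        CountDownLatch done = new CountDownLatch(1);
        producer.send(() -> {
            try {
                producer.flush(); // the misuse from the issue's example
            } catch (IllegalStateException e) {
                System.out.println("caught: " + e.getMessage());
            } finally {
                done.countDown();
            }
        });
        done.await(); // terminates instead of hanging, unlike the reported deadlock
        producer.close();
    }
}
```

The same identity check is essentially what KafkaProducer.close() already does for its callback-safe path, which is why the issue suggests extending that idea to flush().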
[jira] [Commented] (KAFKA-9926) Flaky test PlaintextAdminIntegrationTest.testCreatePartitions
[ https://issues.apache.org/jira/browse/KAFKA-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243819#comment-17243819 ] GeordieMai commented on KAFKA-9926: --- Can I take this issue? I want to check whether this test case still fails in the current version. > Flaky test PlaintextAdminIntegrationTest.testCreatePartitions > - > > Key: KAFKA-9926 > URL: https://issues.apache.org/jira/browse/KAFKA-9926 > Project: Kafka > Issue Type: Bug >Reporter: Wang Ge >Priority: Major > > Flaky test: kafka.api.PlaintextAdminIntegrationTest.testCreatePartitions > [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/6007/testReport/junit/kafka.api/PlaintextAdminIntegrationTest/testCreatePartitions/] -- This message was sent by Atlassian Jira (v8.3.4#803005)