[GitHub] [kafka] chia7712 commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
chia7712 commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805532637 The error can be reproduced on either the 2.8 or trunk branch by deleting the `.git` folder. We apply `rat.gradle` only if the `.git` folder exists (see https://github.com/apache/kafka/blob/trunk/build.gradle#L135), and `rat.gradle` applies the java plugin (https://github.com/apache/kafka/blob/trunk/gradle/rat.gradle#L101). Hence, `build.gradle` does not need to apply the java plugin explicitly when there is a `.git` folder, but it fails without one. In short, +1 to applying the java plugin explicitly. BTW, could you revise the comments to explain why we have to apply the java plugin? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] wenbingshen commented on pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
wenbingshen commented on pull request #10383: URL: https://github.com/apache/kafka/pull/10383#issuecomment-805499083 Since topic describe output is now ordered by topic name, I have gone through all the TopicCommandWithAdminClientTest test cases and fixed the problems this could cause.
[GitHub] [kafka] junrao commented on a change in pull request #10388: KAFKA-12520: Ensure log loading does not truncate producer state unless required
junrao commented on a change in pull request #10388: URL: https://github.com/apache/kafka/pull/10388#discussion_r600174178

## File path: core/src/main/scala/kafka/log/Log.scala

```
@@ -700,11 +700,13 @@ class Log(@volatile private var _dir: File,
         case _: NoSuchFileException =>
           error(s"Could not find offset index file corresponding to log file ${segment.log.file.getAbsolutePath}, " +
             "recovering segment and rebuilding index files...")
-          recoverSegment(segment)
+          if (segment.validateSegmentAndRebuildIndices() > 0)
```

Review comment: Another thing is that it's possible for a segment after the recovery point to have no index file and also be corrupted. In that case, we want to truncate the data instead of failing with an error.
[GitHub] [kafka] guozhangwang merged pull request #10379: KAFKA-12524: Remove deprecated segments()
guozhangwang merged pull request #10379: URL: https://github.com/apache/kafka/pull/10379
[GitHub] [kafka] vvcephei commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
vvcephei commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805470895 Absolutely, I'd appreciate it!
[GitHub] [kafka] chia7712 commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
chia7712 commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805466192 @vvcephei Thanks for sharing. Could you give me a moment to verify this patch locally?
[GitHub] [kafka] vvcephei commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
vvcephei commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805463999

Oh, sorry, I should have been more specific. It's super weird. The symptom looks like this user's report: https://lists.apache.org/thread.html/rdbf942295aa41f3a4852b46ad0d16144c5a3516a1fe9400921af7137%40%3Cdev.kafka.apache.org%3E

The task works just fine on my local fork of Kafka, 2.8 branch, but while verifying the 2.8.0 RC0 I'm trying to publish (candidate artifact: https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/kafka-2.8.0-src.tgz), I see:

```
[john@arcturus kafka-2.8.0-src]$ ./gradlew clean install

> Configure project :
Building project 'core' with Scala version 2.13.5
Building project 'streams-scala' with Scala version 2.13.5

FAILURE: Build failed with an exception.

* Where:
Build file '/tmp/2.8/kafka-2.8.0-src/build.gradle' line: 2282

* What went wrong:
A problem occurred evaluating root project 'kafka-2.8.0-src'.
> Could not get unknown property 'compileJava' for root project 'kafka-2.8.0-src' of type org.gradle.api.Project.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 523ms
```

The failing line (2282) is in the aggregatedJavadoc task. I really can't understand how I'm getting a different result on this extracted tarball than I get on my primary clone of Kafka.

I checked the build.gradle file, and it's identical:

```
[john@arcturus kafka-2.8.0-src]$ diff build.gradle /home/repos/kafka/build.gradle
[john@arcturus kafka-2.8.0-src]$ sha1sum build.gradle
3d6dbaeb00a25dc4b52a39a7912b463bd3cf5203  build.gradle
[john@arcturus kafka-2.8.0-src]$ sha1sum /home/repos/kafka/build.gradle
3d6dbaeb00a25dc4b52a39a7912b463bd3cf5203  /home/repos/kafka/build.gradle
```

As well as gradlew:

```
[john@arcturus kafka-2.8.0-src]$ diff gradlew /home/repos/kafka/gradlew
[john@arcturus kafka-2.8.0-src]$ sha1sum gradlew
3d1f1466c838a14e44053e78c259196da7dae45f  gradlew
[john@arcturus kafka-2.8.0-src]$ sha1sum /home/repos/kafka/gradlew
3d1f1466c838a14e44053e78c259196da7dae45f  /home/repos/kafka/gradlew
```

And I also verified that the actual version of Gradle is the same:

```
[john@arcturus kafka-2.8.0-src]$ ./gradlew -v

Gradle 6.8.1

Build time:   2021-01-22 13:20:08 UTC
Revision:     31f14a87d93945024ab7a78de84102a3400fa5b2
Kotlin:       1.4.20
Groovy:       2.5.12
Ant:          Apache Ant(TM) version 1.10.9 compiled on September 27 2020
JVM:          1.8.0_282 (Oracle Corporation 25.282-b08)
OS:           Linux 5.11.6-arch1-1 amd64

[john@arcturus kafka-2.8.0-src]$ cd /home/repos/kafka
[john@arcturus kafka]$ ./gradlew -v

Gradle 6.8.1

Build time:   2021-01-22 13:20:08 UTC
Revision:     31f14a87d93945024ab7a78de84102a3400fa5b2
Kotlin:       1.4.20
Groovy:       2.5.12
Ant:          Apache Ant(TM) version 1.10.9 compiled on September 27 2020
JVM:          1.8.0_282 (Oracle Corporation 25.282-b08)
OS:           Linux 5.11.6-arch1-1 amd64
```

In some sense, what is actually confusing is why it passes for anyone, not so much why it fails... It is true that the `java` plugin is only applied to the subprojects, not the root, so it does seem to make sense that we couldn't create a task in the root project that depends on `compileJava` (which comes from that `java` plugin).
[GitHub] [kafka] wenbingshen commented on pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
wenbingshen commented on pull request #10383: URL: https://github.com/apache/kafka/pull/10383#issuecomment-805460470 > Should we add a unit or integration test for this? Many thanks for your comments. I added a unit test that verifies the sorted output; please review it again. :)
[GitHub] [kafka] wenbingshen commented on pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
wenbingshen commented on pull request #10383: URL: https://github.com/apache/kafka/pull/10383#issuecomment-805459914 > @wenbingshen thanks for this patch. LGTM Many thanks for your review. I submitted the latest code; please review it again. :)
[GitHub] [kafka] wenbingshen commented on a change in pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
wenbingshen commented on a change in pull request #10383: URL: https://github.com/apache/kafka/pull/10383#discussion_r600135590

## File path: core/src/main/scala/kafka/admin/TopicCommand.scala

```
@@ -320,9 +320,10 @@ object TopicCommand extends Logging {
     val allConfigs = adminClient.describeConfigs(topics.map(new ConfigResource(Type.TOPIC, _)).asJavaCollection).values()
     val liveBrokers = adminClient.describeCluster().nodes().get().asScala.map(_.id())
     val topicDescriptions = adminClient.describeTopics(topics.asJavaCollection).all().get().values().asScala
+      .toSeq.sortBy(td => td.name())
```

Review comment: I have added a comment.
[GitHub] [kafka] wenbingshen commented on a change in pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
wenbingshen commented on a change in pull request #10383: URL: https://github.com/apache/kafka/pull/10383#discussion_r600134761

## File path: core/src/main/scala/kafka/admin/TopicCommand.scala

```
@@ -320,9 +320,10 @@ object TopicCommand extends Logging {
     val allConfigs = adminClient.describeConfigs(topics.map(new ConfigResource(Type.TOPIC, _)).asJavaCollection).values()
     val liveBrokers = adminClient.describeCluster().nodes().get().asScala.map(_.id())
     val topicDescriptions = adminClient.describeTopics(topics.asJavaCollection).all().get().values().asScala
+      .toSeq.sortBy(td => td.name())
     val describeOptions = new DescribeOptions(opts, liveBrokers.toSet)
     val topicPartitions = topicDescriptions
-      .flatMap(td => td.partitions.iterator().asScala.map(p => new TopicPartition(td.name(), p.partition())))
+      .flatMap(td => td.partitions.iterator().asScala.map(p => new TopicPartition(td.name(), p.partition())).toSeq)
```

Review comment: It doesn't need it; I have deleted it. :)
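The fix discussed above boils down to sorting the `describeTopics` result by topic name before printing. A minimal standalone sketch in Java (using a toy record in place of the real admin-client `TopicDescription` class):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class SortedDescribe {
    // Toy stand-in for org.apache.kafka.clients.admin.TopicDescription (illustration only).
    record TopicDescription(String name, int partitions) {}

    // Values from describeTopics(...).all().get() come back as a map whose iteration
    // order is not guaranteed; copying to a list and sorting by name makes the
    // describe output deterministic (which is what the test in this PR verifies).
    static List<TopicDescription> sortedByName(Map<String, TopicDescription> byName) {
        List<TopicDescription> out = new ArrayList<>(byName.values());
        out.sort(Comparator.comparing(TopicDescription::name));
        return out;
    }

    public static void main(String[] args) {
        Map<String, TopicDescription> described = Map.of(
            "topic-b", new TopicDescription("topic-b", 3),
            "topic-a", new TopicDescription("topic-a", 1));
        for (TopicDescription td : sortedByName(described)) {
            System.out.println(td.name()); // prints topic-a, then topic-b
        }
    }
}
```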
[GitHub] [kafka] feyman2016 commented on pull request #10377: KAFKA-12515 ApiVersionManager should create response based on request version
feyman2016 commented on pull request #10377: URL: https://github.com/apache/kafka/pull/10377#issuecomment-805459141 @abbccdda @kowshik Could you please help to review? Thanks!
[GitHub] [kafka] chia7712 commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
chia7712 commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805457892 > What I was facing before was a complete failure to load the gradle project Just curious: how can I reproduce that error?
[GitHub] [kafka] vvcephei commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
vvcephei commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805456926 Thanks, @chia7712. That does seem like an orthogonal issue, so I think I'll go ahead and merge this fix. I just submitted a fix for those errors at https://github.com/apache/kafka/pull/10392
[GitHub] [kafka] vvcephei commented on a change in pull request #10392: KAFKA-12435: Fix javadoc errors
vvcephei commented on a change in pull request #10392: URL: https://github.com/apache/kafka/pull/10392#discussion_r600124984

## File path: clients/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogMetadataManager.java

```
@@ -71,7 +71,7 @@
 *
 * +-++--+
- * |COPY_SEGMENT_STARTED |--->|COPY_SEGMENT_FINISHED |
+ * |COPY_SEGMENT_STARTED |---&gt;|COPY_SEGMENT_FINISHED |
```

Review comment: These were just warnings, but they were the only two warnings in the project, so I fixed them. We just need to XML-escape the `>` character.

## File path: streams/test-utils/src/main/java/org/apache/kafka/streams/TopologyTestDriver.java

```
@@ -41,8 +41,8 @@
 import org.apache.kafka.common.utils.Time;
 import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
 import org.apache.kafka.streams.errors.TopologyException;
-import org.apache.kafka.streams.internals.KeyValueStoreFacade;
-import org.apache.kafka.streams.internals.WindowStoreFacade;
+import org.apache.kafka.streams.test.internal.KeyValueStoreFacade;
+import org.apache.kafka.streams.test.internal.WindowStoreFacade;
```

Review comment: Javadoc is unable to generate docs for this (TopologyTestDriver) file because it depends on these classes, which are also in test-utils but are excluded in the build.gradle spec. I was unable to override the exclusion with a more specific inclusion, so instead I just moved these classes to a different package that matches only the "include" patterns in `:streams:test-utils:javadoc`.

## File path: streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java

```
@@ -237,7 +237,7 @@ private static boolean isRecoverable(final KafkaException uncaughtException) {
  * @throws IllegalStateException if EOS is disabled
  * @throws TaskMigratedException
  */
-    void commitTransaction(final Map offsets,
+    protected void commitTransaction(final Map offsets,
```

Review comment: Needed this so I could move the subclass to a different package. I think it's still just as obviously inappropriate for users to subclass this class, since it's in the `internals` package.
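The escaping fix in the first hunk can be illustrated with a tiny helper (a sketch; `escapeForJavadoc` is a hypothetical name, not a Kafka utility):

```java
public class JavadocEscape {
    // Javadoc HTML treats <, >, and & specially, so they break ASCII diagrams
    // like the state-transition arrows above. Escape '&' first so the entities
    // produced by the other replacements are not themselves double-escaped.
    static String escapeForJavadoc(String raw) {
        return raw.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.println(escapeForJavadoc("|COPY_SEGMENT_STARTED |--->|COPY_SEGMENT_FINISHED |"));
    }
}
```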
[GitHub] [kafka] chia7712 commented on pull request #10389: MINOR: stabilize ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
chia7712 commented on pull request #10389: URL: https://github.com/apache/kafka/pull/10389#issuecomment-805454465 @dengziming thanks for your review! > should we trigger the jenkins build multiple times to verify that the flaky test is fixed. Sure. I also looped `ListOffsetsRequestTest` 300 times locally; all runs passed.
[GitHub] [kafka] vvcephei opened a new pull request #10392: KAFKA-12435: Fix javadoc errors
vvcephei opened a new pull request #10392: URL: https://github.com/apache/kafka/pull/10392

Fixes errors while generating javadoc.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
[jira] [Commented] (KAFKA-12435) Several streams-test-utils classes missing from javadoc
[ https://issues.apache.org/jira/browse/KAFKA-12435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307511#comment-17307511 ] John Roesler commented on KAFKA-12435: -- Thanks for the report, [~ijuma] . The errors are unintuitive, but it seems to be because we're publishing javadocs for public APIs in the test-utils module that import internal classes from the main Streams module, whose javadocs we exclude. The include/exclude precedence rules are not documented, so it's taking me a little while to work through the fix. > Several streams-test-utils classes missing from javadoc > --- > > Key: KAFKA-12435 > URL: https://issues.apache.org/jira/browse/KAFKA-12435 > Project: Kafka > Issue Type: Bug > Components: docs, streams-test-utils >Reporter: Ismael Juma >Assignee: John Roesler >Priority: Blocker > Fix For: 2.8.0 > > Attachments: image-2021-03-05-14-22-45-891.png > > > !image-2021-03-05-14-22-45-891.png! > Only 3 of them show up currently ^. Source: > https://kafka.apache.org/27/javadoc/index.html -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (KAFKA-12435) Several streams-test-utils classes missing from javadoc
[ https://issues.apache.org/jira/browse/KAFKA-12435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Roesler reassigned KAFKA-12435: Assignee: John Roesler
[jira] [Updated] (KAFKA-12435) Several streams-test-utils classes missing from javadoc
[ https://issues.apache.org/jira/browse/KAFKA-12435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Roesler updated KAFKA-12435: - Priority: Blocker (was: Major)
[jira] [Updated] (KAFKA-12435) Several streams-test-utils classes missing from javadoc
[ https://issues.apache.org/jira/browse/KAFKA-12435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Roesler updated KAFKA-12435: - Fix Version/s: 2.8.0
[GitHub] [kafka] jiameixie commented on pull request #8836: KAFKA-10124:Wrong rebalance.time.ms
jiameixie commented on pull request #8836: URL: https://github.com/apache/kafka/pull/8836#issuecomment-805406840 > That's fine. Maybe you can close this PR, and update the JIRA ticket as well so others can pick up? Ok, I have closed the PR and unassigned the JIRA ticket.
[GitHub] [kafka] jiameixie closed pull request #8836: KAFKA-10124:Wrong rebalance.time.ms
jiameixie closed pull request #8836: URL: https://github.com/apache/kafka/pull/8836
[jira] [Assigned] (KAFKA-10124) ConsumerPerformance output wrong rebalance.time.ms
[ https://issues.apache.org/jira/browse/KAFKA-10124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] jiamei xie reassigned KAFKA-10124: -- Assignee: (was: jiamei xie) > ConsumerPerformance output wrong rebalance.time.ms > > > Key: KAFKA-10124 > URL: https://issues.apache.org/jira/browse/KAFKA-10124 > Project: Kafka > Issue Type: Bug > Components: clients, consumer >Reporter: jiamei xie >Priority: Major > > When running consumer performance benchmark, negative fetch.time.ms and > fetch.MB.sec, fetch.nMsg.sec are got, which must be wrong. > bin/kafka-consumer-perf-test.sh --topic test1 --bootstrap-server > localhost:9092 --messages 10 > start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, > nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec > 2020-06-07 05:08:52:393, 2020-06-07 05:09:46:815, 19073.6132, 350.4762, > 2133, 367500.8820, 1591477733263, -1591477678841, -0., -0.0126
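The negative values in the report above are consistent with the rebalance time being recorded as an absolute wall-clock timestamp rather than a duration. A toy illustration of the symptom (hypothetical helper, not the actual ConsumerPerformance code):

```java
public class FetchTimeBug {
    // Fetch time is derived as (total elapsed) - (rebalance time). If the
    // "rebalance time" is mistakenly an absolute epoch timestamp (~1.59e12 ms
    // in the report) instead of a duration, the result goes hugely negative,
    // matching the -1591477678841 fetch.time.ms seen in the benchmark output.
    static long fetchTimeMs(long startMs, long endMs, long rebalanceTimeMs) {
        return (endMs - startMs) - rebalanceTimeMs;
    }

    public static void main(String[] args) {
        long elapsedMs = 54_422L;                  // ~54s run, per the report timestamps
        long bogusRebalanceMs = 1_591_477_733_263L; // epoch timestamp from the report
        System.out.println(fetchTimeMs(0L, elapsedMs, bogusRebalanceMs)); // large negative number
    }
}
```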
[GitHub] [kafka] dengziming commented on a change in pull request #10389: MINOR: stabilize ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
dengziming commented on a change in pull request #10389: URL: https://github.com/apache/kafka/pull/10389#discussion_r600079948

## File path: core/src/test/scala/unit/kafka/server/ListOffsetsRequestTest.scala

```
@@ -139,10 +139,17 @@ class ListOffsetsRequestTest extends BaseRequestTest {
     val request = if (version == -1) builder.build() else builder.build(version)
-    val response = sendRequest(serverId, request)
-    val partitionData = response.topics.asScala.find(_.name == topic).get
+    sendRequest(serverId, request).topics.asScala.find(_.name == topic).get
       .partitions.asScala.find(_.partitionIndex == partition.partition).get
   }

+  // -1 indicate "latest"
+  private[this] def fetchOffsetAndEpoch(serverId: Int,
+                                        timestamp: Long,
+                                        version: Short): (Long, Int) = {
+    val partitionData = sendRequest(serverId, timestamp, version)
+
+    println(s"[CHIA] fetchOffsetAndEpoch version: $version partitionData: $partitionData")
```

Review comment: nit: was this checked in by accident?

## File path: core/src/test/scala/unit/kafka/server/ListOffsetsRequestTest.scala

```
@@ -166,6 +176,9 @@ class ListOffsetsRequestTest extends BaseRequestTest {
     // Kill the first leader so that we can verify the epoch change when fetching the latest offset
     killBroker(firstLeaderId)
     val secondLeaderId = TestUtils.awaitLeaderChange(servers, partition, firstLeaderId)
+    // make sure high watermark of new leader has not caught up
```

Review comment: should this be "has caught up"?
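Stabilizing a test like this usually means polling for the asynchronous condition (here, the new leader's high watermark) instead of asserting immediately. A self-contained sketch in the spirit of Kafka's `TestUtils.waitUntilTrue` (names and defaults are illustrative, not the real signature):

```java
import java.util.function.BooleanSupplier;

public class WaitUntil {
    // Polls the condition until it holds or the timeout elapses; returns whether
    // it eventually held. Flaky assertions on asynchronous state (leader change,
    // high-watermark advancement) become deterministic bounded waits.
    static boolean waitUntilTrue(BooleanSupplier condition, long timeoutMs, long pauseMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            try {
                Thread.sleep(pauseMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return condition.getAsBoolean(); // one final check at/after the deadline
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition flips to true after ~50ms, well within the 2s timeout.
        boolean ok = waitUntilTrue(() -> System.currentTimeMillis() - start > 50, 2_000, 10);
        System.out.println(ok);
    }
}
```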
[GitHub] [kafka] junrao commented on a change in pull request #10388: KAFKA-12520: Ensure log loading does not truncate producer state unless required
junrao commented on a change in pull request #10388: URL: https://github.com/apache/kafka/pull/10388#discussion_r600057668

## File path: core/src/main/scala/kafka/log/Log.scala

```
@@ -700,11 +700,13 @@ class Log(@volatile private var _dir: File,
         case _: NoSuchFileException =>
           error(s"Could not find offset index file corresponding to log file ${segment.log.file.getAbsolutePath}, " +
             "recovering segment and rebuilding index files...")
-          recoverSegment(segment)
+          if (segment.validateSegmentAndRebuildIndices() > 0)
+            throw new KafkaStorageException("Found invalid or corrupted messages in segment " + segment.log.file);
```

Review comment: Perhaps we could report the number of invalid bytes in the exception? Ditto below and in `completeSwapOperations()`.

## File path: core/src/main/scala/kafka/log/LogSegment.scala

```
@@ -322,17 +323,14 @@ class LogSegment private[log] (val log: FileRecords,
     offsetIndex.fetchUpperBoundOffset(startOffsetPosition, fetchSize).map(_.offset)

   /**
-   * Run recovery on the given segment. This will rebuild the index from the log file and lop off any invalid bytes
-   * from the end of the log and index.
+   * Ensure batches in the segment are valid and rebuild all corresponding indices.
    *
-   * @param producerStateManager Producer state corresponding to the segment's base offset. This is needed to recover
-   *                             the transaction index.
-   * @param leaderEpochCache Optionally a cache for updating the leader epoch during recovery.
-   * @return The number of bytes truncated from the log
+   * @param batchCallbackOpt Optional callback invoked for all valid batches in segment
+   * @return The number of invalid bytes at the end of the segment
    * @throws LogSegmentOffsetOverflowException if the log segment contains an offset that causes the index offset to overflow
    */
   @nonthreadsafe
-  def recover(producerStateManager: ProducerStateManager, leaderEpochCache: Option[LeaderEpochFileCache] = None): Int = {
+  def validateSegmentAndRebuildIndices(batchCallbackOpt: Option[FileChannelRecordBatch => Unit] = None): Int = {
```

Review comment: It seems this method needs the logic to trim the indexes at the end?
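The "number of invalid bytes at the end of the segment" return value discussed above can be pictured with a toy model of in-order batch validation (hypothetical types; the real `LogSegment` iterates `FileChannelRecordBatch`es and also rebuilds indexes):

```java
import java.util.List;

public class SegmentValidation {
    // Toy batch: size in bytes plus whether its checksum/structure validated.
    record Batch(int sizeBytes, boolean valid) {}

    // Batches are scanned in order; the first invalid batch and everything after
    // it count as invalid trailing bytes, the value a caller could either use to
    // truncate the segment or report in a KafkaStorageException message.
    static int invalidTrailingBytes(List<Batch> batches) {
        int total = 0;
        int validPrefix = 0;
        boolean stillValid = true;
        for (Batch b : batches) {
            total += b.sizeBytes();
            if (stillValid && b.valid()) {
                validPrefix += b.sizeBytes();
            } else {
                stillValid = false;
            }
        }
        return total - validPrefix;
    }

    public static void main(String[] args) {
        List<Batch> batches = List.of(
            new Batch(100, true), new Batch(50, true),
            new Batch(30, false), new Batch(20, true)); // last two are an unreadable tail
        System.out.println(invalidTrailingBytes(batches)); // prints 50
    }
}
```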
[GitHub] [kafka] ableegoldman commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600060976

## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java

```
@@ -492,22 +492,18 @@ private void handleStreamsUncaughtException(final Throwable throwable,
                 closeToError();
                 break;
             case SHUTDOWN_APPLICATION:
+                if (getNumLiveStreamThreads() <= 1) {
+                    log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread");
+                    addStreamThread();
+                }
                 if (throwable instanceof Error) {
                     log.error("This option requires running threads to shut down the application." +
                         "but the uncaught exception was an Error, which means this runtime is no " +
                         "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable);
                 }
-
-                if (Thread.currentThread().equals(globalStreamThread) && getNumLiveStreamThreads() == 0) {
-                    log.error("Exception in global thread caused the application to attempt to shutdown." +
```

Review comment: I'm not sure there's a good way to check if it's a global-only topology at the moment, so I'm fine with not putting in a check for that case. It was just a suggestion.
[GitHub] [kafka] ableegoldman commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600059442

## File path: streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java

```
@@ -631,6 +625,14 @@ public void setStreamsUncaughtExceptionHandler(final java.util.function.Consumer
         this.streamsUncaughtExceptionHandler = streamsUncaughtExceptionHandler;
     }

+    public void maybeSendShutdown() {
+        if (assignmentErrorCode.get() == AssignorError.SHUTDOWN_REQUESTED.code()) {
+            log.warn("Detected that shutdown was requested. " +
+                    "All clients in this app will now begin to shutdown");
+            mainConsumer.enforceRebalance();
```

Review comment: But do we even need to invoke `maybeSendShutdown` at all from the two catch blocks? The thread that we start up should handle this.
[GitHub] [kafka] ableegoldman commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600055319

## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java (same hunk as earlier in this thread)

Review comment: Well, who knows what the exception is; it could be some local disk error or corruption, or they're using a remote state store and god-knows-what happened. I agree that it's probably rare for a global-only app to hit an exception, fail to shut down the application, and not end up shutting down anyway due to hitting the same exception elsewhere. But anything's possible.
[GitHub] [kafka] ableegoldman commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600052508 ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -492,22 +492,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (getNumLiveStreamThreads() <= 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); +addStreamThread(); +} if (throwable instanceof Error) { log.error("This option requires running threads to shut down the application." + "but the uncaught exception was an Error, which means this runtime is no " + "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable); } - -if (Thread.currentThread().equals(globalStreamThread) && getNumLiveStreamThreads() == 0) { -log.error("Exception in global thread caused the application to attempt to shutdown." + Review comment: Whoops, should've scrolled up a bit to the `replaceThread()` method. Thanks for filing the ticket -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] ableegoldman commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600041390 ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -492,22 +492,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (getNumLiveStreamThreads() <= 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); +addStreamThread(); +} if (throwable instanceof Error) { log.error("This option requires running threads to shut down the application." + "but the uncaught exception was an Error, which means this runtime is no " + "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable); } - -if (Thread.currentThread().equals(globalStreamThread) && getNumLiveStreamThreads() == 0) { -log.error("Exception in global thread caused the application to attempt to shutdown." + Review comment: Ah, nice that this solves the global thread issue as well! I guess technically this will still fail to communicate the shutdown if the application only ever runs the global thread and literally never started up any StreamThreads, but I think that's fine. Apparently running a global-only Streams app is a thing, as some users have reported in the past, but I would imagine this use case would almost certainly prefer the `REPLACE_THREAD` option. ~Ooh, wait. Do we need to add this check in the `REPLACE_THREAD` handling so we don't start up a StreamThread if it was the global thread that was killed?~ edit: we already do exactly this, but it's in `replaceThread()` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] wcarlson5 commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
wcarlson5 commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600051169 ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -492,22 +492,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (getNumLiveStreamThreads() <= 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); +addStreamThread(); +} if (throwable instanceof Error) { log.error("This option requires running threads to shut down the application." + "but the uncaught exception was an Error, which means this runtime is no " + "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable); } - -if (Thread.currentThread().equals(globalStreamThread) && getNumLiveStreamThreads() == 0) { -log.error("Exception in global thread caused the application to attempt to shutdown." + Review comment: If all the clients are global-thread-only, shouldn't they hit the same error?
[GitHub] [kafka] wcarlson5 commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
wcarlson5 commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600049583 ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -492,22 +492,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (getNumLiveStreamThreads() <= 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); +addStreamThread(); +} if (throwable instanceof Error) { log.error("This option requires running threads to shut down the application." + "but the uncaught exception was an Error, which means this runtime is no " + "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable); } - -if (Thread.currentThread().equals(globalStreamThread) && getNumLiveStreamThreads() == 0) { -log.error("Exception in global thread caused the application to attempt to shutdown." + Review comment: We do check for the replaced thread in the replace-thread option, so we kind of already take care of it. If they do choose to shut down the client, we don't need to log a warning, so it's probably unnecessary. We don't have the option to replace the global threads yet. I will make a ticket for that.
[GitHub] [kafka] wcarlson5 commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
wcarlson5 commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600050203 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java ## @@ -631,6 +625,14 @@ public void setStreamsUncaughtExceptionHandler(final java.util.function.Consumer this.streamsUncaughtExceptionHandler = streamsUncaughtExceptionHandler; } +public void maybeSendShutdown() { +if (assignmentErrorCode.get() == AssignorError.SHUTDOWN_REQUESTED.code()) { +log.warn("Detected that shutdown was requested. " + +"All clients in this app will now begin to shutdown"); +mainConsumer.enforceRebalance(); Review comment: This is also how we trigger the rebalance for the other threads, so we can't just remove it.
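The shape of the `maybeSendShutdown()` check being debated — act only when the shared assignment-error code carries a shutdown request, and use that action as the trigger that reaches the other threads — might look like this self-contained sketch. `ShutdownCheck`, the integer constants, and the `rebalanceEnforced` flag are stand-ins for illustration, not Kafka's actual classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a maybeSendShutdown-style check: the shutdown request travels in a
// shared, atomically updated error code; when a thread observes it, it forces
// the group-wide action (here just a flag, standing in for enforceRebalance()).
class ShutdownCheck {
    static final int NONE = 0;
    static final int SHUTDOWN_REQUESTED = 1; // stand-in for AssignorError.SHUTDOWN_REQUESTED.code()

    private final AtomicInteger assignmentErrorCode;
    private boolean rebalanceEnforced = false;

    ShutdownCheck(final AtomicInteger assignmentErrorCode) {
        this.assignmentErrorCode = assignmentErrorCode;
    }

    // Only acts when a shutdown was actually requested; returns whether it fired.
    boolean maybeSendShutdown() {
        if (assignmentErrorCode.get() == SHUTDOWN_REQUESTED) {
            rebalanceEnforced = true; // would be mainConsumer.enforceRebalance()
            return true;
        }
        return false;
    }

    boolean rebalanceEnforced() {
        return rebalanceEnforced;
    }
}
```

This makes the point of the comment above concrete: the enforced rebalance is not just cleanup on the dying thread, it is the mechanism by which every other member of the group learns about the shutdown.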
[jira] [Created] (KAFKA-12538) Global Threads should be able to be replaced like stream threads
Walker Carlson created KAFKA-12538: -- Summary: Global Threads should be able to be replaced like stream threads Key: KAFKA-12538 URL: https://issues.apache.org/jira/browse/KAFKA-12538 Project: Kafka Issue Type: Improvement Components: streams Reporter: Walker Carlson We should be able to replace global threads from the streams uncaught exception handler just like we replace stream threads. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (KAFKA-12537) Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION
[ https://issues.apache.org/jira/browse/KAFKA-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Walker Carlson updated KAFKA-12537: --- Component/s: streams > Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION > > > Key: KAFKA-12537 > URL: https://issues.apache.org/jira/browse/KAFKA-12537 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 3.0.0, 2.8.0 >Reporter: Walker Carlson >Assignee: Walker Carlson >Priority: Major > > Single Threaded EOS applications will not work with the streams uncaught > exception handler option SHUTDOWN_APPLICATION. This is because the EOS thread > needs to close and clean up, but to send the shutdown signal it needs to have > at least one thread running.
[jira] [Updated] (KAFKA-9168) Integrate JNI direct buffer support to RocksDBStore
[ https://issues.apache.org/jira/browse/KAFKA-9168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] A. Sophie Blee-Goldman updated KAFKA-9168: -- Priority: Major (was: Blocker) > Integrate JNI direct buffer support to RocksDBStore > --- > > Key: KAFKA-9168 > URL: https://issues.apache.org/jira/browse/KAFKA-9168 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: Sagar Rao >Priority: Major > Labels: perfomance > > There has been a PR created on rocksdb Java client to support direct > ByteBuffers in Java. We can look at integrating it whenever it gets merged. > Link to PR: [https://github.com/facebook/rocksdb/pull/2283]
[jira] [Updated] (KAFKA-9168) Integrate JNI direct buffer support to RocksDBStore
[ https://issues.apache.org/jira/browse/KAFKA-9168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] A. Sophie Blee-Goldman updated KAFKA-9168: -- Issue Type: Improvement (was: Task) > Integrate JNI direct buffer support to RocksDBStore > --- > > Key: KAFKA-9168 > URL: https://issues.apache.org/jira/browse/KAFKA-9168 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: Sagar Rao >Priority: Blocker > Labels: perfomance > > There has been a PR created on rocksdb Java client to support direct > ByteBuffers in Java. We can look at integrating it whenever it gets merged. > Link to PR: [https://github.com/facebook/rocksdb/pull/2283]
[jira] [Updated] (KAFKA-9168) Integrate JNI direct buffer support to RocksDBStore
[ https://issues.apache.org/jira/browse/KAFKA-9168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] A. Sophie Blee-Goldman updated KAFKA-9168: -- Fix Version/s: (was: 3.0.0) > Integrate JNI direct buffer support to RocksDBStore > --- > > Key: KAFKA-9168 > URL: https://issues.apache.org/jira/browse/KAFKA-9168 > Project: Kafka > Issue Type: Task > Components: streams >Reporter: Sagar Rao >Priority: Blocker > Labels: perfomance > > There has been a PR created on rocksdb Java client to support direct > ByteBuffers in Java. We can look at integrating it whenever it gets merged. > Link to PR: [https://github.com/facebook/rocksdb/pull/2283]
[GitHub] [kafka] ableegoldman commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600044638 ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -492,22 +492,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (getNumLiveStreamThreads() <= 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); +addStreamThread(); +} if (throwable instanceof Error) { log.error("This option requires running threads to shut down the application." + "but the uncaught exception was an Error, which means this runtime is no " + "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable); } - -if (Thread.currentThread().equals(globalStreamThread) && getNumLiveStreamThreads() == 0) { -log.error("Exception in global thread caused the application to attempt to shutdown." + Review comment: WDYT about just checking for the case of a global-only topology before the `switch` statement, and just automatically invoking `closeToError()` with a warning that the other options are not supported in this case? We should also file an improvement ticket for the "restart the global thread" feature, if we don't already have one -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] ableegoldman commented on a change in pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r600042316 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java ## @@ -631,6 +625,14 @@ public void setStreamsUncaughtExceptionHandler(final java.util.function.Consumer this.streamsUncaughtExceptionHandler = streamsUncaughtExceptionHandler; } +public void maybeSendShutdown() { +if (assignmentErrorCode.get() == AssignorError.SHUTDOWN_REQUESTED.code()) { +log.warn("Detected that shutdown was requested. " + +"All clients in this app will now begin to shutdown"); +mainConsumer.enforceRebalance(); Review comment: Since this thread is going to immediately shut down anyways, I think we can skip the `mainConsumer.enforceRebalance()` ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -492,22 +492,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (getNumLiveStreamThreads() <= 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); +addStreamThread(); +} if (throwable instanceof Error) { log.error("This option requires running threads to shut down the application." + "but the uncaught exception was an Error, which means this runtime is no " + "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable); } - -if (Thread.currentThread().equals(globalStreamThread) && getNumLiveStreamThreads() == 0) { -log.error("Exception in global thread caused the application to attempt to shutdown." + Review comment: Ah, nice that this solves the global thread issue as well! I guess technically this will still fail to communicate the shutdown if the application only ever runs the global thread and literally never started up any StreamThreads, but I think that's fine. 
Apparently running a global-only Streams app is a thing, as some users have reported in the past, but I would imagine this use case would almost certainly prefer the `REPLACE_THREAD` option. Ooh, wait. Do we need to add this check in the `REPLACE_THREAD` handling so we don't start up a StreamThread if it was the global thread that was killed? ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -493,22 +493,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (countStreamThread(StreamThread::isRunning) == 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); Review comment: ```suggestion log.warn("Attempt to shut down the application requires adding a thread to communicate the shutdown. No processing will be done on this thread"); ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] wcarlson5 commented on pull request #10387: KAFKA-12537: get EOS corner case
wcarlson5 commented on pull request #10387: URL: https://github.com/apache/kafka/pull/10387#issuecomment-805354345 @ableegoldman fixed
[GitHub] [kafka] guozhangwang commented on a change in pull request #10331: KAFKA-10847: Add a RocksDBTimeOrderedWindowStore to hold records using their timestamp as key prefix
guozhangwang commented on a change in pull request #10331: URL: https://github.com/apache/kafka/pull/10331#discussion_r600028281 ## File path: streams/src/main/java/org/apache/kafka/streams/state/internals/SegmentIterator.java ## @@ -82,9 +93,9 @@ public boolean hasNext() { } } else { if (forward) { -currentIterator = currentSegment.range(from, to); +currentIterator = currentSegment.range(from, to, prefixScan); Review comment: I should clarify: I did not mean the existing `RocksDBStore#prefixScan`, since it only allows a single prefix, but rather adding another function.
[GitHub] [kafka] guozhangwang commented on pull request #10331: KAFKA-10847: Add a RocksDBTimeOrderedWindowStore to hold records using their timestamp as key prefix
guozhangwang commented on pull request #10331: URL: https://github.com/apache/kafka/pull/10331#issuecomment-805344527 cc @cadonna to take a look around prefix scan, and @vcrfxia for the bytes layout.
[GitHub] [kafka] guozhangwang commented on a change in pull request #10331: KAFKA-10847: Add a RocksDBTimeOrderedWindowStore to hold records using their timestamp as key prefix
guozhangwang commented on a change in pull request #10331: URL: https://github.com/apache/kafka/pull/10331#discussion_r599952559 ## File path: streams/src/main/java/org/apache/kafka/streams/state/Stores.java ## @@ -37,6 +37,7 @@ import static org.apache.kafka.streams.internals.ApiUtils.prepareMillisCheckFailMsgPrefix; import static org.apache.kafka.streams.internals.ApiUtils.validateMillisecondDuration; +import static org.apache.kafka.streams.state.internals.RocksDbWindowBytesStoreSupplier.WindowStoreTypes; Review comment: Since `Stores` are public APIs, we would need to file a KIP in order to change it. On the other hand, `Stores` is used by users to customize their materialized state stores, while for KAFKA-10847 we can just hard-code which types of stores to use not through `Stores` factory. ## File path: streams/src/main/java/org/apache/kafka/streams/state/internals/Segment.java ## @@ -25,4 +26,35 @@ void destroy() throws IOException; +/** + * INTERNAL USE ONLY - Move this method to ReadOnlyKeyValueStore to make it a public API + * + * Get an iterator over a given range of keys. This iterator must be closed after use. + * The returned iterator must be safe from {@link java.util.ConcurrentModificationException}s + * and must not return null values. + * Order is not guaranteed as bytes lexicographical ordering might not represent key order. + * + * @param from The first key that could be in the range, where iteration starts from. + * @param to The last key that could be in the range, where iteration ends. + * @param prefixScan If true, then it iterates using the common key prefixes. + * @return The iterator for this range, from smallest to largest bytes. + * @throws NullPointerException If null is used for from or to.* + */ +KeyValueIterator range(Bytes from, Bytes to, boolean prefixScan); + +/** + * INTERNAL USE ONLY - - Move this method to ReadOnlyKeyValueStore to make it a public API + * + * Get a reverse iterator over a given range of keys. 
This iterator must be closed after use. + * The returned iterator must be safe from {@link java.util.ConcurrentModificationException}s + * and must not return null values. + * Order is not guaranteed as bytes lexicographical ordering might not represent key order. + * + * @param from The first key that could be in the range, where iteration ends. + * @param to The last key that could be in the range, where iteration starts from. + * @param prefixScan If true, then it iterates using the common key prefixes. + * @return The reverse iterator for this range, from largest to smallest key bytes. + * @throws NullPointerException If null is used for from or to. + */ +KeyValueIterator reverseRange(Bytes from, Bytes to, boolean prefixScan); Review comment: Related to the other comment: since in stream-stream join we do not really need reverse-prefixScan, just adding a forward `prefixScan(..)` interface may be better. ## File path: streams/src/main/java/org/apache/kafka/streams/state/internals/TimeOrderedWindowStoreBuilder.java ## @@ -0,0 +1,80 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.kafka.streams.state.internals; + +import org.apache.kafka.common.serialization.Serde; +import org.apache.kafka.common.utils.Bytes; +import org.apache.kafka.common.utils.Time; +import org.apache.kafka.streams.state.WindowBytesStoreSupplier; +import org.apache.kafka.streams.state.WindowStore; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Objects; + +public class TimeOrderedWindowStoreBuilder extends AbstractStoreBuilder> { Review comment: If we do not allow such stores to be created from `Stores`, maybe we could remove this class as well. ## File path: streams/src/main/java/org/apache/kafka/streams/state/internals/SegmentIterator.java ## @@ -82,9 +93,9 @@ public boolean hasNext() { } } else { if (forward) { -currentIterator = curre
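A single-prefix scan over a lexicographically ordered byte store, as discussed in the review comments above, can be expressed as an ordinary range scan whose exclusive upper bound is the prefix's successor. The sketch below shows that bound computation as a general technique under the assumption of unsigned lexicographic byte ordering; `PrefixRange` is a hypothetical name, not the actual `RocksDBStore#prefixScan` implementation:

```java
import java.util.Arrays;

// Sketch: a scan for prefix P over keys ordered as unsigned bytes is the range
// [P, successor(P)), where successor(P) increments the last byte that is not
// 0xFF and truncates everything after it.
class PrefixRange {
    // Returns the exclusive upper bound for a prefix scan, or null when the
    // prefix is all 0xFF bytes (the scan is then unbounded above).
    static byte[] upperBound(final byte[] prefix) {
        final byte[] bound = Arrays.copyOf(prefix, prefix.length);
        for (int i = bound.length - 1; i >= 0; i--) {
            if (bound[i] != (byte) 0xFF) {
                bound[i]++;
                return Arrays.copyOf(bound, i + 1); // drop bytes after the incremented one
            }
        }
        return null; // no finite upper bound exists
    }
}
```

For example, the bound for prefix `{0x01, 0x02, 0x03}` is `{0x01, 0x02, 0x04}`, and for `{0x01, 0xFF}` it collapses to `{0x02}` — every key sharing the prefix sorts strictly below the bound.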
[GitHub] [kafka] mjsax opened a new pull request #10391: MINOR: disable flaky system test
mjsax opened a new pull request #10391: URL: https://github.com/apache/kafka/pull/10391 Call for review @abbccdda @guozhangwang @vvcephei
[GitHub] [kafka] ableegoldman removed a comment on pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman removed a comment on pull request #10387: URL: https://github.com/apache/kafka/pull/10387#issuecomment-805303422 Also, since we have a fix for this, can we modify the `catch` blocks in the StreamThread loop to return false regardless of the processing mode? Also now that you explained how the default exception handler interacts with the StreamThread loop under ALOS, it seems like KAFKA-12537 probably affects both EOS and ALOS
[GitHub] [kafka] ableegoldman commented on pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on pull request #10387: URL: https://github.com/apache/kafka/pull/10387#issuecomment-805303422 Also, since we have a fix for this, can we modify the `catch` blocks in the StreamThread loop to return false regardless of the processing mode? Also now that you explained how the default exception handler interacts with the StreamThread loop under ALOS, it seems like KAFKA-12537 probably affects both EOS and ALOS
[GitHub] [kafka] ableegoldman commented on pull request #10387: KAFKA-12537: get EOS corner case
ableegoldman commented on pull request #10387: URL: https://github.com/apache/kafka/pull/10387#issuecomment-805301703 @wcarlson5 heads up, looks like you need to rebase this PR
[GitHub] [kafka] vvcephei commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
vvcephei commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805289415 Thanks, @chia7712! Ah, I thought that was just a warning; now I see it's an error. What I was facing before was a complete failure to load the gradle project, so this is an improvement. Still, I'll check out those errors.
[GitHub] [kafka] abbccdda commented on a change in pull request #10142: KAFKA-12294: forward auto topic request within envelope on behalf of clients
abbccdda commented on a change in pull request #10142: URL: https://github.com/apache/kafka/pull/10142#discussion_r599962199 ## File path: core/src/test/scala/unit/kafka/server/AutoTopicCreationManagerTest.scala ## @@ -263,7 +263,7 @@ class AutoTopicCreationManagerTest { topicName: String, isInternal: Boolean): Unit = { val topicResponses = autoTopicCreationManager.createTopics( - Set(topicName), UnboundedControllerMutationQuota) + Set(topicName), UnboundedControllerMutationQuota, None) Review comment: Added
[GitHub] [kafka] wcarlson5 commented on a change in pull request #10387: HOTFIX: get EOS corner case
wcarlson5 commented on a change in pull request #10387: URL: https://github.com/apache/kafka/pull/10387#discussion_r599886841 ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -493,22 +493,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (countStreamThread(StreamThread::isRunning) == 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); +addStreamThread(); +} if (throwable instanceof Error) { log.error("This option requires running threads to shut down the application." + "but the uncaught exception was an Error, which means this runtime is no " + "longer in a well-defined state. Attempting to send the shutdown command anyway.", throwable); } - -if (Thread.currentThread().equals(globalStreamThread) && countStreamThread(StreamThread::isRunning) == 0) { Review comment: Since we are adding a thread we won't have this issue anymore ## File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java ## @@ -493,22 +493,18 @@ private void handleStreamsUncaughtException(final Throwable throwable, closeToError(); break; case SHUTDOWN_APPLICATION: +if (countStreamThread(StreamThread::isRunning) == 1) { +log.warn("Adding thread to communicate the shutdown. No processing will be done on this thread"); Review comment: If we add a thread before shutting down the application the thread with the exception can shutdown -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (KAFKA-12537) Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION
[ https://issues.apache.org/jira/browse/KAFKA-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Walker Carlson reassigned KAFKA-12537: -- Assignee: Walker Carlson > Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION > > > Key: KAFKA-12537 > URL: https://issues.apache.org/jira/browse/KAFKA-12537 > Project: Kafka > Issue Type: Bug >Affects Versions: 3.0.0, 2.8.0 >Reporter: Walker Carlson >Assignee: Walker Carlson >Priority: Major > > Single Threaded EOS applications will not work with the streams uncaught > exception handler option SHUTDOWN_APPLICATION. This is because the EOS thread > needs to close and clean up, but to send the shutdown signal it needs to have > at least one thread running. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [kafka] hachikuji merged pull request #10376: MINOR: Remove duplicate `createKafkaMetricsContext`
hachikuji merged pull request #10376: URL: https://github.com/apache/kafka/pull/10376
[GitHub] [kafka] abbccdda merged pull request #10374: (Cherry-pick) KAFKA-9274: handle TimeoutException on task reset (#10000)
abbccdda merged pull request #10374: URL: https://github.com/apache/kafka/pull/10374
[jira] [Commented] (KAFKA-12537) Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION
[ https://issues.apache.org/jira/browse/KAFKA-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307370#comment-17307370 ] Walker Carlson commented on KAFKA-12537: It may be possible to add a thread, have it send the shutdown signal, and then the whole thing will come down > Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION > > > Key: KAFKA-12537 > URL: https://issues.apache.org/jira/browse/KAFKA-12537 > Project: Kafka > Issue Type: Bug >Affects Versions: 3.0.0, 2.8.0 >Reporter: Walker Carlson >Priority: Major > > Single Threaded EOS applications will not work with the streams uncaught > exception handler option SHUTDOWN_APPLICATION. This is because the EOS thread > needs to close and clean up, but to send the shutdown signal it needs to have > at least one thread running.
[jira] [Created] (KAFKA-12537) Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION
Walker Carlson created KAFKA-12537: -- Summary: Single Threaded EOS applications will not work with SHUTDOWN_APPLICATION Key: KAFKA-12537 URL: https://issues.apache.org/jira/browse/KAFKA-12537 Project: Kafka Issue Type: Bug Affects Versions: 3.0.0, 2.8.0 Reporter: Walker Carlson Single Threaded EOS applications will not work with the streams uncaught exception handler option SHUTDOWN_APPLICATION. This is because the EOS thread needs to close and clean up, but to send the shutdown signal it needs to have at least one thread running.
[jira] [Updated] (KAFKA-12536) Add Instant-based methods to ReadOnlySessionStore
[ https://issues.apache.org/jira/browse/KAFKA-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jorge Esteban Quilcate Otoya updated KAFKA-12536: - Description: [KIP-666|https://cwiki.apache.org/confluence/display/KAFKA/KIP-666%3A+Add+Instant-based+methods+to+ReadOnlySessionStore] implementation. (was: KIP-666 implementation.) > Add Instant-based methods to ReadOnlySessionStore > - > > Key: KAFKA-12536 > URL: https://issues.apache.org/jira/browse/KAFKA-12536 > Project: Kafka > Issue Type: Sub-task >Reporter: Jorge Esteban Quilcate Otoya >Assignee: Jorge Esteban Quilcate Otoya >Priority: Major > > [KIP-666|https://cwiki.apache.org/confluence/display/KAFKA/KIP-666%3A+Add+Instant-based+methods+to+ReadOnlySessionStore] > implementation.
[GitHub] [kafka] jeqo opened a new pull request #10390: KAFKA-12536: Add Instant-based methods to ReadOnlySessionStore
jeqo opened a new pull request #10390: URL: https://github.com/apache/kafka/pull/10390 KIP-666 implementation. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
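The shape of a KIP-666-style change can be sketched with a default-method overload: an `Instant`-based method that delegates to an existing epoch-millisecond method. The interface and method names below are illustrative stand-ins, not the actual `ReadOnlySessionStore` API.

```java
import java.time.Instant;

public class InstantOverloadSketch {
    // Hypothetical store interface: one long-based abstract method plus an
    // Instant-based convenience overload added as a default method.
    interface SessionStoreLike<K> {
        String fetchSession(K key, long earliestSessionEndTime, long latestSessionStartTime);

        // The Instant overload just converts to epoch millis and delegates,
        // so existing implementations pick it up for free.
        default String fetchSession(K key, Instant earliestSessionEndTime, Instant latestSessionStartTime) {
            return fetchSession(key,
                earliestSessionEndTime.toEpochMilli(),
                latestSessionStartTime.toEpochMilli());
        }
    }

    public static void main(String[] args) {
        // Trivial implementation that echoes its arguments.
        SessionStoreLike<String> store = (key, from, to) -> key + "[" + from + "," + to + "]";
        System.out.println(store.fetchSession("k", Instant.ofEpochMilli(10), Instant.ofEpochMilli(20)));
    }
}
```

Adding the overload as a `default` method keeps the change source- and binary-compatible for existing implementors, which is the usual reason KIPs introduce `Instant` variants this way.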
[jira] [Updated] (KAFKA-12536) Add Instant-based methods to ReadOnlySessionStore
[ https://issues.apache.org/jira/browse/KAFKA-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jorge Esteban Quilcate Otoya updated KAFKA-12536: - Description: KIP-666 implementation. > Add Instant-based methods to ReadOnlySessionStore > - > > Key: KAFKA-12536 > URL: https://issues.apache.org/jira/browse/KAFKA-12536 > Project: Kafka > Issue Type: Sub-task >Reporter: Jorge Esteban Quilcate Otoya >Assignee: Jorge Esteban Quilcate Otoya >Priority: Major > > KIP-666 implementation.
[jira] [Assigned] (KAFKA-12536) Add Instant-based methods to ReadOnlySessionStore
[ https://issues.apache.org/jira/browse/KAFKA-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jorge Esteban Quilcate Otoya reassigned KAFKA-12536: Assignee: Jorge Esteban Quilcate Otoya > Add Instant-based methods to ReadOnlySessionStore > - > > Key: KAFKA-12536 > URL: https://issues.apache.org/jira/browse/KAFKA-12536 > Project: Kafka > Issue Type: Sub-task >Reporter: Jorge Esteban Quilcate Otoya >Assignee: Jorge Esteban Quilcate Otoya >Priority: Major >
[jira] [Created] (KAFKA-12536) Add Instant-based methods to ReadOnlySessionStore
Jorge Esteban Quilcate Otoya created KAFKA-12536: Summary: Add Instant-based methods to ReadOnlySessionStore Key: KAFKA-12536 URL: https://issues.apache.org/jira/browse/KAFKA-12536 Project: Kafka Issue Type: Sub-task Reporter: Jorge Esteban Quilcate Otoya
[jira] [Created] (KAFKA-12535) Consider Revising Document Anchors for Properties
Gary Russell created KAFKA-12535: Summary: Consider Revising Document Anchors for Properties Key: KAFKA-12535 URL: https://issues.apache.org/jira/browse/KAFKA-12535 Project: Kafka Issue Type: Improvement Components: documentation Affects Versions: 2.7.0 Reporter: Gary Russell Anchors for ToC entries work fine: https://kafka.apache.org/documentation/#producerconfigs With the section title appearing below the "floating" banner. However, anchors for properties, e.g. https://kafka.apache.org/documentation/#producerconfigs_max.block.ms don't render properly; the first part of the property description is "hidden" under the floating banner.
[GitHub] [kafka] swiedenfeld closed pull request #10384: Update ops.html
swiedenfeld closed pull request #10384: URL: https://github.com/apache/kafka/pull/10384
[GitHub] [kafka] swiedenfeld commented on pull request #10384: Update ops.html
swiedenfeld commented on pull request #10384: URL: https://github.com/apache/kafka/pull/10384#issuecomment-805142080 Yes, I obviously got it wrong. Thanks for explaining.
[GitHub] [kafka] dajac commented on pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
dajac commented on pull request #10383: URL: https://github.com/apache/kafka/pull/10383#issuecomment-805105838 Should we add a unit or integration test for this?
[GitHub] [kafka] guozhangwang commented on pull request #8836: KAFKA-10124:Wrong rebalance.time.ms
guozhangwang commented on pull request #8836: URL: https://github.com/apache/kafka/pull/8836#issuecomment-805104290 That's fine. Maybe you can close this PR, and update the JIRA ticket as well so others can pick up?
[GitHub] [kafka] chia7712 commented on a change in pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
chia7712 commented on a change in pull request #10383: URL: https://github.com/apache/kafka/pull/10383#discussion_r599787628 ## File path: core/src/main/scala/kafka/admin/TopicCommand.scala ## @@ -320,9 +320,10 @@ object TopicCommand extends Logging { val allConfigs = adminClient.describeConfigs(topics.map(new ConfigResource(Type.TOPIC, _)).asJavaCollection).values() val liveBrokers = adminClient.describeCluster().nodes().get().asScala.map(_.id()) val topicDescriptions = adminClient.describeTopics(topics.asJavaCollection).all().get().values().asScala + .toSeq.sortBy(td => td.name()) Review comment: Could you add a comment for this `sortBy`? ## File path: core/src/main/scala/kafka/admin/TopicCommand.scala ## @@ -320,9 +320,10 @@ object TopicCommand extends Logging { val allConfigs = adminClient.describeConfigs(topics.map(new ConfigResource(Type.TOPIC, _)).asJavaCollection).values() val liveBrokers = adminClient.describeCluster().nodes().get().asScala.map(_.id()) val topicDescriptions = adminClient.describeTopics(topics.asJavaCollection).all().get().values().asScala + .toSeq.sortBy(td => td.name()) val describeOptions = new DescribeOptions(opts, liveBrokers.toSet) val topicPartitions = topicDescriptions - .flatMap(td => td.partitions.iterator().asScala.map(p => new TopicPartition(td.name(), p.partition( + .flatMap(td => td.partitions.iterator().asScala.map(p => new TopicPartition(td.name(), p.partition())).toSeq) Review comment: Why is `toSeq` required?
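The effect of the `sortBy` under review is simply to make the output of `describeTopics` deterministic by topic name (the admin client returns descriptions in map order, which is not guaranteed). A minimal illustration, using a hypothetical `TopicDescriptionLike` stand-in rather than the real `org.apache.kafka.clients.admin.TopicDescription`:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortByTopicNameSketch {
    // Stand-in for TopicDescription; only the name matters for the sort.
    static class TopicDescriptionLike {
        final String name;
        TopicDescriptionLike(String name) { this.name = name; }
        String name() { return name; }
    }

    public static void main(String[] args) {
        // Simulated describeTopics() result in arbitrary order.
        List<TopicDescriptionLike> descriptions = new ArrayList<>(List.of(
            new TopicDescriptionLike("topic-c"),
            new TopicDescriptionLike("topic-a"),
            new TopicDescriptionLike("topic-b")));

        // Equivalent of the Scala `.toSeq.sortBy(td => td.name())`:
        descriptions.sort(Comparator.comparing(TopicDescriptionLike::name));

        descriptions.forEach(d -> System.out.println(d.name()));
    }
}
```

Sorted output lets tests assert on `--describe` output line by line without flaking on map iteration order.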
[GitHub] [kafka] abbccdda commented on a change in pull request #10142: KAFKA-12294: forward auto topic request within envelope on behalf of clients
abbccdda commented on a change in pull request #10142: URL: https://github.com/apache/kafka/pull/10142#discussion_r599787253 ## File path: core/src/main/scala/kafka/server/AutoTopicCreationManager.scala ## @@ -166,7 +179,34 @@ class DefaultAutoTopicCreationManager( debug(s"Auto topic creation completed for ${creatableTopics.keys}.") clearInflightRequests(creatableTopics) } -}) +} + +val channelManager = this.channelManager.getOrElse { + throw new IllegalStateException("Channel manager must be defined in order to send CreateTopic requests.") +} + +metadataRequestContext match { + case Some(context) => +val requestVersion = + channelManager.controllerApiVersions() match { +case None => + ApiKeys.CREATE_TOPICS.latestVersion() Review comment: I guess we could rely on the client to retry the Metadata request for simplicity.
[GitHub] [kafka] chia7712 opened a new pull request #10389: MINOR: stabilize ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
chia7712 opened a new pull request #10389: URL: https://github.com/apache/kafka/pull/10389 There are 2 root causes. 1. The min in-sync replicas setting is 1, so we could lose data when force-removing the current leader. 2. We don't wait for the new leader to sync the high watermark with the follower, so a request to get the offset could encounter an `OFFSET_NOT_AVAILABLE` error. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
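Fix #2 is the classic poll-until-condition pattern Kafka's test utilities use (e.g. `TestUtils.waitUntilTrue` in the Scala test code): rather than querying the new leader immediately and racing the high-watermark sync, the test polls until the condition holds or a deadline passes. A generic, self-contained sketch of that helper — the names and the simulated "follower sync" are illustrative, not the actual test code:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class WaitUntilTrueSketch {
    // Poll the condition until it holds or the timeout elapses.
    static boolean waitUntilTrue(BooleanSupplier condition, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            Thread.sleep(10); // back off between polls
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate the high watermark being advanced asynchronously,
        // like a new leader syncing with its follower.
        AtomicInteger highWatermark = new AtomicInteger(0);
        new Thread(() -> highWatermark.set(5)).start();

        boolean synced = waitUntilTrue(() -> highWatermark.get() == 5, 5_000);
        System.out.println(synced);
    }
}
```

Sending the ListOffsets request only after such a wait succeeds avoids the transient `OFFSET_NOT_AVAILABLE` window described above.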
[GitHub] [kafka] chia7712 commented on pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
chia7712 commented on pull request #10386: URL: https://github.com/apache/kafka/pull/10386#issuecomment-805076137 I ran `./gradlew aggregatedJavadoc` with JDK 11 and this patch, and it still produces the following error message. ``` > Task :aggregatedJavadoc FAILED /home/chia7712/kafka/streams/test-utils/src/main/java/org/apache/kafka/streams/TopologyTestDriver.java:44: error: cannot find symbol import org.apache.kafka.streams.internals.KeyValueStoreFacade; ^ symbol: class KeyValueStoreFacade location: package org.apache.kafka.streams.internals /home/chia7712/kafka/streams/test-utils/src/main/java/org/apache/kafka/streams/TopologyTestDriver.java:45: error: cannot find symbol import org.apache.kafka.streams.internals.WindowStoreFacade; ^ symbol: class WindowStoreFacade location: package org.apache.kafka.streams.internals /home/chia7712/kafka/streams/test-utils/src/main/java/org/apache/kafka/streams/TopologyTestDriver.java:69: error: cannot find symbol import org.apache.kafka.streams.processor.internals.TestDriverProducer; ^ symbol: class TestDriverProducer location: package org.apache.kafka.streams.processor.internals /home/chia7712/kafka/streams/test-utils/src/main/java/org/apache/kafka/streams/TopologyTestDriver.java:226: error: cannot find symbol private final TestDriverProducer testDriverProducer; ^ symbol: class TestDriverProducer location: class TopologyTestDriver 4 errors ``` BTW, there is an issue related to this error (https://issues.apache.org/jira/browse/KAFKA-12435)
[GitHub] [kafka] dhruvilshah3 opened a new pull request #10388: KAFKA-12520: Ensure log loading does not truncate producer state unless required
dhruvilshah3 opened a new pull request #10388: URL: https://github.com/apache/kafka/pull/10388 When we find a `.swap` file on startup, we typically want to rename and replace it as `.log`, `.index`, `.timeindex`, etc. as a way to complete any ongoing replace operations. These swap files are usually known to have been flushed to disk before the replace operation begins. One flaw in the current logic is that we recover these swap files on startup and, as part of that, end up truncating the producer state and rebuilding it from scratch. This is unneeded, as the replace operation does not mutate the producer state by itself. It is only meant to replace the `.log` file along with corresponding indices. Because of this unneeded producer state rebuild operation, we have seen multi-hour startup times for clusters that have large compacted topics. This patch fixes the issue by doing a sanity check of all records in the segment to swap and rebuilding corresponding indices without mutating the producer state. Similarly, we also rebuild indices without truncating the producer state when we find a missing or corrupted index in the middle of the log. The patch also adds an extra sanity check to detect invalid bytes at the end of swap segments. Before this patch, we would truncate invalid bytes from the swap segment, which could leave us with holes in the log. Because this is an unexpected scenario, we now raise an exception in such cases, which will fail the broker on startup.
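The key point in the description — completing the replace by renaming `.swap` over `.log` does not by itself touch producer state — can be illustrated with a rough, hypothetical sketch (this is not Kafka's actual log-loading code; `completeSwap` and the filenames are stand-ins):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SwapCompletionSketch {
    // Complete an interrupted replace operation: a flushed segment.swap file
    // is renamed over its target. Note this is a pure metadata operation —
    // nothing here reads or rewrites producer state.
    static void completeSwap(Path swapFile) throws IOException {
        String name = swapFile.getFileName().toString();
        if (!name.endsWith(".swap")) {
            throw new IllegalArgumentException("not a swap file: " + name);
        }
        Path target = swapFile.resolveSibling(name.substring(0, name.length() - ".swap".length()));
        try {
            Files.move(swapFile, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Fall back to a non-atomic replace on filesystems without atomic rename.
            Files.move(swapFile, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("log-dir");
        Path swap = Files.write(dir.resolve("00000000000000000000.log.swap"), new byte[]{1, 2, 3});
        completeSwap(swap);
        System.out.println(Files.exists(dir.resolve("00000000000000000000.log")));
    }
}
```

Since the rename leaves record data untouched, rebuilding the producer state from scratch at this point (the behavior the patch removes) is pure overhead — hence the multi-hour startups on large compacted topics.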
[jira] [Updated] (KAFKA-12533) Migrate KStream stateless operators to new Processor API
[ https://issues.apache.org/jira/browse/KAFKA-12533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jorge Esteban Quilcate Otoya updated KAFKA-12533: - Description: Including these operators: * KStream#branch * KStream#filter * KStream#flatMap * KStream#flatMapValues * KStream#map * KStream#mapValues * KStream#peek * KStream#print * KStream#passthrough These operators are left out, waiting for a new Transformer API (https://issues.apache.org/jira/browse/KAFKA-8396): * KStream#flatMapTransform * KStream#flatMapTransformValues > Migrate KStream stateless operators to new Processor API > > > Key: KAFKA-12533 > URL: https://issues.apache.org/jira/browse/KAFKA-12533 > Project: Kafka > Issue Type: Sub-task > Components: streams >Reporter: Jorge Esteban Quilcate Otoya >Assignee: Jorge Esteban Quilcate Otoya >Priority: Major > > Including these operators: > > * KStream#branch > * KStream#filter > * KStream#flatMap > * KStream#flatMapValues > * KStream#map > * KStream#mapValues > * KStream#peek > * KStream#print > * KStream#passthrough > > These operators are left out, waiting for a new Transformer API > (https://issues.apache.org/jira/browse/KAFKA-8396): > * KStream#flatMapTransform > * KStream#flatMapTransformValues
[GitHub] [kafka] spena commented on pull request #10331: KAFKA-10847: Add a RocksDBTimeOrderedWindowStore to hold records using their timestamp as key prefix
spena commented on pull request #10331: URL: https://github.com/apache/kafka/pull/10331#issuecomment-805050202 The failing test is `kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch()`, which is unrelated to this PR.
[jira] [Commented] (KAFKA-12468) Initial offsets are copied from source to target cluster
[ https://issues.apache.org/jira/browse/KAFKA-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307208#comment-17307208 ] Bart De Neuter commented on KAFKA-12468: I forgot to mention that we use a custom replication policy to keep the topic names the same in both source and target cluster. > Initial offsets are copied from source to target cluster > > > Key: KAFKA-12468 > URL: https://issues.apache.org/jira/browse/KAFKA-12468 > Project: Kafka > Issue Type: Bug > Components: mirrormaker >Affects Versions: 2.7.0 >Reporter: Bart De Neuter >Priority: Major > > We have an active-passive setup where the 3 connectors from mirror maker 2 > (heartbeat, checkpoint and source) are running on a dedicated Kafka connect > cluster on the target cluster. > Offset syncing is enabled as specified by KIP-545. But when activated, it > seems the offsets from the source cluster are initially copied to the target > cluster without translation. This causes a negative lag for all synced > consumer groups. Only when we reset the offsets for each topic/partition on > the target cluster and produce a record on the topic/partition in the source, > the sync starts working correctly. > I would expect that the consumer groups are synced but that the current > offsets of the source cluster are not copied to the target cluster. 
> This is the configuration we are currently using: > Heartbeat connector > > {code:xml} > { > "name": "mm2-mirror-heartbeat", > "config": { > "name": "mm2-mirror-heartbeat", > "connector.class": > "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector", > "source.cluster.alias": "eventador", > "target.cluster.alias": "msk", > "source.cluster.bootstrap.servers": "", > "target.cluster.bootstrap.servers": "", > "topics": ".*", > "groups": ".*", > "tasks.max": "1", > "replication.policy.class": "CustomReplicationPolicy", > "sync.group.offsets.enabled": "true", > "sync.group.offsets.interval.seconds": "5", > "emit.checkpoints.enabled": "true", > "emit.checkpoints.interval.seconds": "30", > "emit.heartbeats.interval.seconds": "30", > "key.converter": " > org.apache.kafka.connect.converters.ByteArrayConverter", > "value.converter": > "org.apache.kafka.connect.converters.ByteArrayConverter" > } > } > {code} > Checkpoint connector: > {code:xml} > { > "name": "mm2-mirror-checkpoint", > "config": { > "name": "mm2-mirror-checkpoint", > "connector.class": > "org.apache.kafka.connect.mirror.MirrorCheckpointConnector", > "source.cluster.alias": "eventador", > "target.cluster.alias": "msk", > "source.cluster.bootstrap.servers": "", > "target.cluster.bootstrap.servers": "", > "topics": ".*", > "groups": ".*", > "tasks.max": "40", > "replication.policy.class": "CustomReplicationPolicy", > "sync.group.offsets.enabled": "true", > "sync.group.offsets.interval.seconds": "5", > "emit.checkpoints.enabled": "true", > "emit.checkpoints.interval.seconds": "30", > "emit.heartbeats.interval.seconds": "30", > "key.converter": " > org.apache.kafka.connect.converters.ByteArrayConverter", > "value.converter": > "org.apache.kafka.connect.converters.ByteArrayConverter" > } > } > {code} > Source connector: > {code:xml} > { > "name": "mm2-mirror-source", > "config": { > "name": "mm2-mirror-source", > "connector.class": > "org.apache.kafka.connect.mirror.MirrorSourceConnector", > 
"source.cluster.alias": "eventador", > "target.cluster.alias": "msk", > "source.cluster.bootstrap.servers": "", > "target.cluster.bootstrap.servers": "", > "topics": ".*", > "groups": ".*", > "tasks.max": "40", > "replication.policy.class": "CustomReplicationPolicy", > "sync.group.offsets.enabled": "true", > "sync.group.offsets.interval.seconds": "5", > "emit.checkpoints.enabled": "true", > "emit.checkpoints.interval.seconds": "30", > "emit.heartbeats.interval.seconds": "30", > "key.converter": " > org.apache.kafka.connect.converters.ByteArrayConverter", > "value.converter": > "org.apache.kafka.connect.converters.ByteArrayConverter" > } > } > {code} >
[GitHub] [kafka] wcarlson5 opened a new pull request #10387: HOTFIX: get EOS corner case
wcarlson5 opened a new pull request #10387: URL: https://github.com/apache/kafka/pull/10387 When running with EOS, the run loop terminates on the thread that hit the exception before the shutdown can be called. This is a problem for single-threaded EOS applications using the application shutdown feature. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[GitHub] [kafka] kkonstantine commented on pull request #10375: KAFKA-12522: Cast SMT should allow null value records to pass through
kkonstantine commented on pull request #10375: URL: https://github.com/apache/kafka/pull/10375#issuecomment-805025698 Thanks for checking @dosvath. That means that this fix probably won't cut it for a release blocker at this late stage in the release process of 2.8 (or other branches that are in the process of generating release candidates). We'll either merge to trunk and wait before we backport, or wait a bit more altogether. Finally, in this project we always target `trunk` on our PRs and then we cherry-pick (unless we explicitly need to backport only to a release branch). So your target was correctly pointing to `trunk`. I think @dongjinleekr was suggesting that we should cherry-pick.
[jira] [Comment Edited] (KAFKA-12420) Kafka network Selector class has many constructors; use a Builder pattern instead
[ https://issues.apache.org/jira/browse/KAFKA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307171#comment-17307171 ] Nick Dekker edited comment on KAFKA-12420 at 3/23/21, 3:47 PM: --- Seems like a straightforward implementation. We have created a PR here: [https://github.com/apache/kafka/pull/10382|https://github.com/apache/kafka/pull/10382] A validation and a second opinion would be appreciated. was (Author: ikdekker): Seems like a straightforward implementation. We have created a PR here: [https://github.com/apache/kafka/pull/10385|https://github.com/apache/kafka/pull/10382] A validation and a second opinion would be appreciated. > Kafka network Selector class has many constructors; use a Builder pattern > instead > - > > Key: KAFKA-12420 > URL: https://issues.apache.org/jira/browse/KAFKA-12420 > Project: Kafka > Issue Type: Improvement > Components: network >Affects Versions: 2.7.0 >Reporter: Steve Rodrigues >Assignee: Steve Rodrigues >Priority: Minor > > The Kafka network Selector has a myriad of constructor parameters and to deal > with its multiple use cases this class has 6 distinct constructors taking up > to 12 parameters (or various combinations thereof). The proposal for this > small task is to have a builder pattern to consolidate to a simple path going > forward.
[jira] [Comment Edited] (KAFKA-12420) Kafka network Selector class has many constructors; use a Builder pattern instead
[ https://issues.apache.org/jira/browse/KAFKA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307171#comment-17307171 ] Nick Dekker edited comment on KAFKA-12420 at 3/23/21, 3:46 PM: --- Seems like a straightforward implementation. We have created a PR here: [https://github.com/apache/kafka/pull/10385|https://github.com/apache/kafka/pull/10382] A validation and a second opinion would be appreciated. was (Author: ikdekker): Seems like a straightforward implementation. We have created a PR here: [https://github.com/apache/kafka/pull/10382] A validation and a second opinion would be appreciated. > Kafka network Selector class has many constructors; use a Builder pattern > instead > - > > Key: KAFKA-12420 > URL: https://issues.apache.org/jira/browse/KAFKA-12420 > Project: Kafka > Issue Type: Improvement > Components: network >Affects Versions: 2.7.0 >Reporter: Steve Rodrigues >Assignee: Steve Rodrigues >Priority: Minor > > The Kafka network Selector has a myriad of constructor parameters and to deal > with its multiple use cases this class has 6 distinct constructors taking up > to 12 parameters (or various combinations thereof). The proposal for this > small task is to have a builder pattern to consolidate to a simple path going > forward.
[GitHub] [kafka] vvcephei opened a new pull request #10386: MINOR: fix aggregatedJavadoc dep on compileJava
vvcephei opened a new pull request #10386: URL: https://github.com/apache/kafka/pull/10386 It seems like gradle is inconsistently failing to build the project with the message that "compileJava" isn't defined on the root project. Applying the Java plugin to the root project (as opposed to only the subprojects) seems to fix it.
[GitHub] [kafka] feyman2016 commented on pull request #10377: KAFKA-12515 ApiVersionManager should create response based on request version
feyman2016 commented on pull request #10377: URL: https://github.com/apache/kafka/pull/10377#issuecomment-805010605 Checked locally, test failure is not related~
[GitHub] [kafka] julian9499 opened a new pull request #10385: KAFKA-12420: Kafka network Selector class has many constructors; use a Builder pattern instead
julian9499 opened a new pull request #10385: URL: https://github.com/apache/kafka/pull/10385 Replace the 6 network selector constructors with a builder that can be used to specify the needed parameters instead of using one of the 6 defined constructors. This pull request updates all usages of the network selector and does not add any tests. Existing tests already cover the correctness of the network selector, and our solution only changes the instance creation of the selector. All tests ran correctly. ### Committer Checklist (excluded from commit message) - [x] Verify design and implementation - [ ] Verify test coverage and CI build status - [x] Verify documentation (including upgrade notes)
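The shape of the proposed change is the standard Builder pattern: one private constructor fed by a builder with sensible defaults, replacing a family of telescoping constructors. The sketch below is generic and hypothetical — the three fields are illustrative stand-ins, not the real `Selector`'s 12-parameter constructor list.

```java
public class SelectorBuilderSketch {
    static final class Selector {
        final long connectionMaxIdleMs;
        final String metricGrpPrefix;
        final int maxReceiveSize;

        // Single private constructor: all construction goes through the Builder.
        private Selector(Builder b) {
            this.connectionMaxIdleMs = b.connectionMaxIdleMs;
            this.metricGrpPrefix = b.metricGrpPrefix;
            this.maxReceiveSize = b.maxReceiveSize;
        }

        static final class Builder {
            // Defaults stand in for what the 6 overloaded constructors hard-coded.
            long connectionMaxIdleMs = 9 * 60 * 1000L;
            String metricGrpPrefix = "";
            int maxReceiveSize = -1; // unbounded

            Builder connectionMaxIdleMs(long v) { this.connectionMaxIdleMs = v; return this; }
            Builder metricGrpPrefix(String v) { this.metricGrpPrefix = v; return this; }
            Builder maxReceiveSize(int v) { this.maxReceiveSize = v; return this; }

            Selector build() { return new Selector(this); }
        }
    }

    public static void main(String[] args) {
        // Callers name only the parameters they care about; the rest default.
        Selector s = new Selector.Builder()
            .metricGrpPrefix("broker")
            .maxReceiveSize(1024)
            .build();
        System.out.println(s.metricGrpPrefix + ":" + s.maxReceiveSize + ":" + s.connectionMaxIdleMs);
    }
}
```

The win over 6 overloads × up to 12 positional parameters is that each call site documents itself, and adding a new optional parameter no longer multiplies the constructor count.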
[GitHub] [kafka] julian9499 closed pull request #10382: KAFKA-12420: Kafka network Selector class has many constructors; use a Builder pattern instead
julian9499 closed pull request #10382: URL: https://github.com/apache/kafka/pull/10382
[GitHub] [kafka] julian9499 removed a comment on pull request #10382: KAFKA-12420: Kafka network Selector class has many constructors; use a Builder pattern instead
julian9499 removed a comment on pull request #10382: URL: https://github.com/apache/kafka/pull/10382#issuecomment-805007894 @vvcephei could you rerun the ci? If I should not
[GitHub] [kafka] julian9499 commented on pull request #10382: KAFKA-12420: Kafka network Selector class has many constructors; use a Builder pattern instead
julian9499 commented on pull request #10382: URL: https://github.com/apache/kafka/pull/10382#issuecomment-805007894 @vvcephei could you rerun the ci? If I should not
[jira] [Issue Comment Deleted] (KAFKA-12420) Kafka network Selector class has many constructors; use a Builder pattern instead
[ https://issues.apache.org/jira/browse/KAFKA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Dekker updated KAFKA-12420:
    Comment: was deleted (was: https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-12420?filter=allissues)

> Kafka network Selector class has many constructors; use a Builder pattern instead
> ---------------------------------------------------------------------------------
>
>                 Key: KAFKA-12420
>                 URL: https://issues.apache.org/jira/browse/KAFKA-12420
>             Project: Kafka
>          Issue Type: Improvement
>          Components: network
>    Affects Versions: 2.7.0
>            Reporter: Steve Rodrigues
>            Assignee: Steve Rodrigues
>            Priority: Minor
>
> The Kafka network Selector has a myriad of constructor parameters, and to deal with its multiple use cases this class has 6 distinct constructors taking up to 12 parameters (or various combinations thereof). The proposal for this small task is to have a builder pattern to consolidate to a simple path going forward.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [kafka] wenbingshen commented on pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
wenbingshen commented on pull request #10383: URL: https://github.com/apache/kafka/pull/10383#issuecomment-805001602 ping @chia7712 @dajac Can you take a look at this minor PR? Thanks. :)
[jira] [Commented] (KAFKA-12420) Kafka network Selector class has many constructors; use a Builder pattern instead
[ https://issues.apache.org/jira/browse/KAFKA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307171#comment-17307171 ]

Nick Dekker commented on KAFKA-12420:

Seems like a straightforward implementation. We have created a PR here: [https://github.com/apache/kafka/pull/10382] A validation and a second opinion would be appreciated.
[GitHub] [kafka] swiedenfeld opened a new pull request #10384: Update ops.html
swiedenfeld opened a new pull request #10384: URL: https://github.com/apache/kafka/pull/10384 A ZooKeeper ensemble of 7 servers tolerates 4 servers down, because 3 is minimum. At least, that is what I understood.
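For reference, ZooKeeper availability is governed by majority quorum rather than a fixed minimum of three servers: an ensemble of 7 needs 4 servers up to form a majority, so it tolerates 3 failures, not 4. The arithmetic can be sketched as:

```java
// Majority-quorum arithmetic for a ZooKeeper ensemble: the ensemble stays
// available while a strict majority of servers is up, so an ensemble of n
// servers tolerates n - (n/2 + 1) failures.
class ZkQuorum {
    static int quorum(int ensembleSize) {
        return ensembleSize / 2 + 1;          // smallest strict majority
    }

    static int toleratedFailures(int ensembleSize) {
        return ensembleSize - quorum(ensembleSize);
    }

    public static void main(String[] args) {
        for (int n : new int[] {3, 5, 7}) {
            System.out.println(n + " servers: quorum=" + quorum(n)
                + ", tolerated failures=" + toleratedFailures(n));
        }
    }
}
```

This is also why even-sized ensembles buy nothing: 7 and 8 servers both tolerate only 3 failures.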
[GitHub] [kafka] wenbingshen opened a new pull request #10383: MINOR: Query topic describe and sort output by topic name when using adminClient
wenbingshen opened a new pull request #10383: URL: https://github.com/apache/kafka/pull/10383

As the title says: when using the zk client, topic describe results are output in order of topic name. This change makes the adminClient path output in the same topic-name order.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
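The ordering change amounts to collecting the describe results and emitting them sorted by topic name. Kafka's real TopicCommand is Scala; this Java stand-in (the name `describeInNameOrder` is hypothetical) only illustrates the idea:

```java
// Sketch of the ordering change: collect describe results into a map keyed by
// topic name and emit them sorted, so adminClient output matches the zk-client
// ordering. This is an illustrative stand-in, not Kafka's TopicCommand.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

class SortedDescribe {
    static List<String> describeInNameOrder(Map<String, String> descriptionsByTopic) {
        List<String> names = new ArrayList<>(descriptionsByTopic.keySet());
        Collections.sort(names);                       // order output by topic name
        List<String> out = new ArrayList<>();
        for (String name : names) {
            out.add(name + ": " + descriptionsByTopic.get(name));
        }
        return out;
    }
}
```

A deterministic order also makes test assertions on the command output stable, which is why the PR touches the TopicCommandWithAdminClientTest cases mentioned above.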
[GitHub] [kafka] julian9499 opened a new pull request #10382: KAFKA 12420 - Kafka network Selector class has many constructors; use a Builder pattern instead
julian9499 opened a new pull request #10382: URL: https://github.com/apache/kafka/pull/10382

Replace the 6 network selector constructors with a builder that can be used to specify the needed parameters instead of choosing among the 6 defined constructors. This pull request updates all usages of the network selector and does not add any tests. Existing tests already cover the correctness of the network selector, and our change only touches how the selector instance is created. All tests ran correctly.

### Committer Checklist (excluded from commit message)
- [x] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [x] Verify documentation (including upgrade notes)
[GitHub] [kafka] satishd commented on pull request #10271: KAFKA-12429: Added serdes for the default implementation of RLMM based on an internal topic as storage.
satishd commented on pull request #10271: URL: https://github.com/apache/kafka/pull/10271#issuecomment-804975395 @kowshik Classes introduced in this PR are located in the new module created as part of https://github.com/apache/kafka/pull/10218. You can omit the first 2 commits and review from the third commit.
[GitHub] [kafka] dosvath commented on pull request #10375: KAFKA-12522: Cast SMT should allow null value records to pass through
dosvath commented on pull request #10375: URL: https://github.com/apache/kafka/pull/10375#issuecomment-804951684

> LGTM.
>
> @vvcephei @ewencp @ijuma Could you have a look? IMHO this PR can be included in 2.8.

Thanks @dongjinleekr, I will update the branch.
[GitHub] [kafka] dosvath commented on pull request #10375: KAFKA-12522: Cast SMT should allow null value records to pass through
dosvath commented on pull request #10375: URL: https://github.com/apache/kafka/pull/10375#issuecomment-804950443

> To be more equipped to say whether this issue can be considered a release blocker or not (which is something that should be called out on the dev mailing list for any releases that are in progress) it would be good to know whether this is a regression or a bug that has escaped several releases. @dosvath do you happen to know?

It seems it's a bug that has escaped several releases; I didn't see any point in the history where null was handled, so this is not a regression.
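The fix under discussion follows the common SMT null-handling pattern: if the record's value is null (e.g. a tombstone), return the record unchanged instead of attempting the cast. A minimal self-contained stand-in (`Record` and `CastSketch` are simplifications, not Connect's actual `ConnectRecord`/`Cast` classes):

```java
// Minimal stand-in for the null pass-through pattern in a Connect SMT.
// Record is a hypothetical simplification of ConnectRecord.
class Record {
    final Object value;
    Record(Object value) { this.value = value; }
}

class CastSketch {
    static Record apply(Record record) {
        if (record.value == null) {
            return record;              // tombstone/null value: pass through untouched
        }
        return new Record(cast(record.value));
    }

    static Object cast(Object v) {
        return String.valueOf(v);       // placeholder for the real cast logic
    }
}
```

Passing tombstones through unchanged matters for compacted topics, where a null value is the delete marker and must not be rejected by the transform.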
[jira] [Created] (KAFKA-12534) kafka-configs does not work with ssl enabled kafka broker.
kaushik srinivas created KAFKA-12534:

Summary: kafka-configs does not work with ssl enabled kafka broker.
Key: KAFKA-12534
URL: https://issues.apache.org/jira/browse/KAFKA-12534
Project: Kafka
Issue Type: Bug
Affects Versions: 2.6.1
Reporter: kaushik srinivas

We are trying to change the trust store password on the fly using the kafka-configs script for an ssl enabled kafka broker. Below is the command used:

kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 1001 --alter --add-config 'ssl.truststore.password=xxx'

But we see the below error in the broker logs when the command is run:

{"type":"log", "host":"kf-2-0", "level":"INFO", "neid":"kafka-cfd5ccf2af7f47868e83473408", "system":"kafka", "time":"2021-03-23T12:14:40.055", "timezone":"UTC", "log":{"message":"data-plane-kafka-network-thread-1002-ListenerName(SSL)-SSL-2 - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1002] Failed authentication with /127.0.0.1 (SSL handshake failed)"}}

How can anyone configure ssl certs for the kafka-configs script and succeed with the ssl handshake in this case?

Note: we are trying with a single listener, i.e. SSL:
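For what it's worth, the usual resolution for this kind of handshake failure is to give the admin tool its own client-side SSL configuration via `--command-config`, since the tool is itself a Kafka client connecting to the SSL listener. A sketch, with placeholder paths and passwords:

```shell
# Client-side SSL settings for the admin tools
# (truststore path and passwords are placeholders).
cat > client-ssl.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=xxx
EOF

# Pass the client config so the tool can complete the SSL handshake
# before altering the broker config.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --command-config client-ssl.properties \
  --entity-type brokers --entity-name 1001 --alter \
  --add-config 'ssl.truststore.password=xxx'
```

Without `--command-config`, the tool connects in plaintext, which the SSL-only listener rejects, producing exactly the "SSL handshake failed" log shown above.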
[GitHub] [kafka] jeqo opened a new pull request #10381: KAFKA-12533: Migrating KStream Stateless operators to new Processor API
jeqo opened a new pull request #10381: URL: https://github.com/apache/kafka/pull/10381

Migrating KStream stateless operators to the new Processor API first. Following PRs will complete the migration of KStream stateful operators and KTable. Testing strategy: keep the current tests green.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
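For context, KIP-478 makes the Processor API generic over both input and output types, so migrating a stateless operator is mostly a matter of implementing the typed interface and forwarding typed records. A self-contained mimic of the shape (these are not the actual `org.apache.kafka.streams.processor.api` interfaces):

```java
// Self-contained mimic of the KIP-478 typed-processor shape; the real
// interfaces live in org.apache.kafka.streams.processor.api.
interface TypedProcessor<KIn, VIn, KOut, VOut> {
    void process(KIn key, VIn value, java.util.function.BiConsumer<KOut, VOut> forward);
}

// A stateless mapValues-style operator: the output types appear in the
// signature, so the compiler checks what downstream processors receive.
class UpperCaseValues implements TypedProcessor<String, String, String, String> {
    @Override
    public void process(String key, String value,
                        java.util.function.BiConsumer<String, String> forward) {
        forward.accept(key, value == null ? null : value.toUpperCase());
    }
}
```

With the old untyped `Processor<K, V>`, a mismatch between what an operator forwarded and what the next node expected only failed at runtime; the typed signature turns that into a compile error.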
[jira] [Created] (KAFKA-12533) Migrate KStream stateless operators to new Processor API
Jorge Esteban Quilcate Otoya created KAFKA-12533:

Summary: Migrate KStream stateless operators to new Processor API
Key: KAFKA-12533
URL: https://issues.apache.org/jira/browse/KAFKA-12533
Project: Kafka
Issue Type: Sub-task
Components: streams
Reporter: Jorge Esteban Quilcate Otoya
Assignee: Jorge Esteban Quilcate Otoya
[jira] [Created] (KAFKA-12532) Migrate Stream operators to new Processor API
Jorge Esteban Quilcate Otoya created KAFKA-12532:

Summary: Migrate Stream operators to new Processor API
Key: KAFKA-12532
URL: https://issues.apache.org/jira/browse/KAFKA-12532
Project: Kafka
Issue Type: Improvement
Components: streams
Reporter: Jorge Esteban Quilcate Otoya
Assignee: Jorge Esteban Quilcate Otoya

To continue adoption of [KIP-478|https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API], KStream and KTable operators need to be migrated to the new Processor API.