[GitHub] [kafka-site] sadatrafsan closed pull request #439: bs-23.png logo added

2022-08-31 Thread GitBox


sadatrafsan closed pull request #439: bs-23.png logo added
URL: https://github.com/apache/kafka-site/pull/439


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-13990) Update features will fail in KRaft mode

2022-08-31 Thread dengziming (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-13990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dengziming resolved KAFKA-13990.

Resolution: Fixed

> Update features will fail in KRaft mode
> ---
>
> Key: KAFKA-13990
> URL: https://issues.apache.org/jira/browse/KAFKA-13990
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: dengziming
>Assignee: dengziming
>Priority: Blocker
> Fix For: 3.3.0
>
>
> We return empty supported features in Controller ApiVersionResponse, so 
> {{quorumSupportedFeature}} will always return empty; we should return 
> Map(metadata.version -> latest) instead.
> 
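> A minimal sketch of the intended direction ({{quorumSupportedFeature}} is the 
> method named above; everything else here is illustrative, not the actual 
> controller code):
> {code:java}
> import java.util.Map;
> 
> class QuorumFeaturesSketch {
>     // Advertise the supported metadata.version instead of an empty map,
>     // so quorumSupportedFeature has something to return.
>     static Map<String, Short> quorumSupportedFeatures(short latestMetadataVersion) {
>         return Map.of("metadata.version", latestMetadataVersion);
>     }
> }
> {code}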



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1192

2022-08-31 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 504392 lines...]
[2022-09-01T00:19:43.478Z] KStreamAggregationIntegrationTest > 
shouldCountSessionWindows() STARTED
[2022-09-01T00:19:44.171Z] 
[2022-09-01T00:19:44.171Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-09-01T00:19:44.171Z] 
[2022-09-01T00:19:44.171Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-09-01T00:19:45.722Z] 
[2022-09-01T00:19:45.722Z] KStreamAggregationIntegrationTest > 
shouldCountSessionWindows() PASSED
[2022-09-01T00:19:45.722Z] 
[2022-09-01T00:19:45.722Z] KStreamAggregationIntegrationTest > 
shouldAggregateWindowed(TestInfo) STARTED
[2022-09-01T00:19:48.399Z] 
[2022-09-01T00:19:48.399Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-09-01T00:19:48.399Z] 
[2022-09-01T00:19:48.399Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-09-01T00:19:48.914Z] 
[2022-09-01T00:19:48.915Z] KStreamAggregationIntegrationTest > 
shouldAggregateWindowed(TestInfo) PASSED
[2022-09-01T00:19:50.468Z] 
[2022-09-01T00:19:50.468Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-09-01T00:19:54.966Z] 
[2022-09-01T00:19:54.966Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2022-09-01T00:19:54.966Z] 
[2022-09-01T00:19:54.966Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2022-09-01T00:19:54.966Z] 
[2022-09-01T00:19:54.966Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() STARTED
[2022-09-01T00:19:54.966Z] 
[2022-09-01T00:19:54.966Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() PASSED
[2022-09-01T00:19:54.966Z] 
[2022-09-01T00:19:54.966Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() STARTED
[2022-09-01T00:19:54.966Z] 
[2022-09-01T00:19:54.966Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() PASSED
[2022-09-01T00:19:54.966Z] 
[2022-09-01T00:19:54.966Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets() STARTED
[2022-09-01T00:19:57.613Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2022-09-01T00:19:57.613Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-09-01T00:19:57.613Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-09-01T00:19:57.613Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-09-01T00:19:57.613Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2022-09-01T00:19:57.613Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-09-01T00:20:02.591Z] 
[2022-09-01T00:20:02.591Z] FAILURE: Build failed with an exception.
[2022-09-01T00:20:02.591Z] 
[2022-09-01T00:20:02.591Z] * What went wrong:
[2022-09-01T00:20:02.591Z] Execution failed for task ':storage:unitTest'.
[2022-09-01T00:20:02.591Z] > Process 'Gradle Test Executor 137' finished with 
non-zero exit value 1
[2022-09-01T00:20:02.591Z]   This problem might be caused by incorrect test 
process configuration.
[2022-09-01T00:20:02.591Z]   Please refer to the test execution section in the 
User Manual at 
https://docs.gradle.org/7.5.1/userguide/java_testing.html#sec:test_execution
[2022-09-01T00:20:02.591Z] 
[2022-09-01T00:20:02.591Z] * Try:
[2022-09-01T00:20:02.591Z] > Run with --stacktrace option to get the stack 
trace.
[2022-09-01T00:20:02.591Z] > Run with --info or --debug option to get more log 
output.
[2022-09-01T00:20:02.591Z] > Run with --scan to get full insights.
[2022-09-01T00:20:02.591Z] 
[2022-09-01T00:20:02.591Z] * Get more help at https://help.gradle.org
[2022-09-01T00:20:02.591Z] 
[2022-09-01T00:20:02.591Z] BUILD FAILED in 2h 33m 25s
[2022-09-01T00:20:02.591Z] 212 actionable tasks: 115 executed, 97 up-to-date
[2022-09-01T00:20:02.591Z] 
[2022-09-01T00:20:02.591Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2022-08-31-21-46-41.html
[2022-09-01T00:20:02.591Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 17 and Scala 2.13
[2022-09-01T00:20:45.570Z] 
[2022-09-01T00:20:45.570Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets() PASSED
[2022-09-01T00:20:45.570Z] 
[2022-09-01T00:20:45.570Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest()

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #54

2022-08-31 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 570835 lines...]
[2022-08-31T22:00:25.398Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-08-31T22:00:25.398Z] 
[2022-08-31T22:00:25.398Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-08-31T22:00:25.398Z] 
[2022-08-31T22:00:25.398Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-08-31T22:00:25.398Z] 
[2022-08-31T22:00:25.398Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-08-31T22:00:25.398Z] 
[2022-08-31T22:00:25.398Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-08-31T22:00:26.340Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:26.340Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:26.340Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:26.340Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:26.340Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:26.340Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:26.824Z] 
[2022-08-31T22:00:26.824Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = false] PASSED
[2022-08-31T22:00:26.824Z] 
[2022-08-31T22:00:26.824Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = false] STARTED
[2022-08-31T22:00:29.765Z] 
[2022-08-31T22:00:29.765Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-08-31T22:00:29.765Z] 
[2022-08-31T22:00:29.765Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-08-31T22:00:30.973Z] 
[2022-08-31T22:00:30.973Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-08-31T22:00:30.973Z] 
[2022-08-31T22:00:30.973Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-08-31T22:00:31.666Z] 
[2022-08-31T22:00:31.666Z] BUILD SUCCESSFUL in 2h 59m 11s
[2022-08-31T22:00:31.666Z] 212 actionable tasks: 115 executed, 97 up-to-date
[2022-08-31T22:00:31.666Z] 
[2022-08-31T22:00:31.666Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_3.3/build/reports/profile/profile-2022-08-31-19-01-27.html
[2022-08-31T22:00:31.666Z] A fine-grained performance profile is available: use 
the --scan option.
[2022-08-31T22:00:32.182Z] 
[2022-08-31T22:00:32.182Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-08-31T22:00:32.182Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:32.182Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:32.182Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:32.182Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:32.182Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-08-31T22:00:32.182Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[Pipeline] junit
[2022-08-31T22:00:32.699Z] Recording test results
[2022-08-31T22:00:33.830Z] 
[2022-08-31T22:00:33.830Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = false] PASSED
[2022-08-31T22:00:33.830Z] 
[2022-08-31T22:00:33.830Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectEndOffsetInformation STARTED
[2022-08-31T22:00:34.765Z] 
[2022-08-31T22:00:34.765Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectEndOffsetInformation PASSED
[2022-08-31T22:00:34.765Z] 
[2022-08-31T22:00:34.765Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectCommittedOffsetInformation STARTED
[2022-08-31T22:00:35.700Z] 
[2022-08-31T22:00:35.701Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectCommittedOffsetInformation PASSED
[2022-08-31T22:00:36.637Z] 
[2022-08-31T22:00:36.637Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted STARTED
[2022-08-31T22:00:38.055Z] 
[2022-08-31T22:00:38.055Z] BUILD SUCCESSFUL in 2h 57m 48s
[2022-08-31T22:00:38.055Z] 212 actionable tasks: 115 executed, 97 up-to-date
[2022-08-31T22:00:38.055Z] 
[2022-08-31T22:00:38.055Z] See the profiling report at: 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1191

2022-08-31 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 420946 lines...]
[2022-08-31T21:29:13.718Z] 
[2022-08-31T21:29:13.718Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2022-08-31T21:29:13.718Z] 
[2022-08-31T21:29:13.718Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2022-08-31T21:29:40.842Z] 
[2022-08-31T21:29:40.842Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2022-08-31T21:29:40.842Z] 
[2022-08-31T21:29:40.842Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2022-08-31T21:30:08.845Z] 
[2022-08-31T21:30:08.845Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-08-31T21:30:08.845Z] 
[2022-08-31T21:30:08.845Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-08-31T21:30:12.557Z] 
[2022-08-31T21:30:12.557Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-08-31T21:30:12.557Z] 
[2022-08-31T21:30:12.557Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2022-08-31T21:30:39.630Z] 
[2022-08-31T21:30:39.630Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-08-31T21:30:39.630Z] 
[2022-08-31T21:30:39.630Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-08-31T21:30:40.644Z] 
[2022-08-31T21:30:40.644Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-08-31T21:30:40.644Z] 
[2022-08-31T21:30:40.644Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-08-31T21:30:42.669Z] 
[2022-08-31T21:30:42.669Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-08-31T21:30:42.669Z] 
[2022-08-31T21:30:42.669Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-08-31T21:30:45.876Z] 
[2022-08-31T21:30:45.876Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-08-31T21:30:45.876Z] 
[2022-08-31T21:30:45.876Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-08-31T21:30:50.098Z] 
[2022-08-31T21:30:50.098Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-08-31T21:30:50.098Z] 
[2022-08-31T21:30:50.098Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-08-31T21:31:02.269Z] 
[2022-08-31T21:31:02.269Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-08-31T21:31:02.269Z] 
[2022-08-31T21:31:02.269Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-08-31T21:31:04.220Z] 
[2022-08-31T21:31:04.220Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() PASSED
[2022-08-31T21:31:04.220Z] 
[2022-08-31T21:31:04.220Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() STARTED
[2022-08-31T21:31:26.417Z] 
[2022-08-31T21:31:26.417Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() PASSED
[2022-08-31T21:31:26.417Z] 
[2022-08-31T21:31:26.417Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() STARTED
[2022-08-31T21:31:28.466Z] 
[2022-08-31T21:31:28.466Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() PASSED
[2022-08-31T21:31:28.466Z] 
[2022-08-31T21:31:28.466Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2022-08-31T21:31:32.555Z] 
[2022-08-31T21:31:32.555Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-08-31T21:31:32.555Z] 
[2022-08-31T21:31:32.555Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-08-31T21:31:37.032Z] 
[2022-08-31T21:31:37.032Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-08-31T21:31:37.032Z] 
[2022-08-31T21:31:37.032Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-08-31T21:31:40.038Z] 
[2022-08-31T21:31:40.038Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-08-31T21:31:50.109Z] 
[2022-08-31T21:31:50.110Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2022-08-31T21:31:50.110Z] 
[2022-08-31T21:31:50.110Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2022-08-31T21:31:50.110Z] 
[2022-08-31T21:31:50.110Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() STARTED
[2022-08-31T21:31:50.110Z] 
[2022-08-31T21:31:50.110Z] FineGrainedAutoResetIntegrationTest 

RE: Re: [DISCUSS] KIP-821: Connect Transforms support for nested structures

2022-08-31 Thread Chris Egerton
Hi Robert and Jorge,

I think the backtick/backslash proposal works, but I'm a little unclear on
some of the details:

1. Are backticks only given special treatment when they immediately follow
a non-escaped dot? E.g., "foo.b`ar.ba`z" would refer to "foo" -> "b`ar" ->
"ba`z" instead of "foo" -> "bar.baz"? Based on the example where the name
"a.b`.c" refers to "a" -> "b`" -> "c", it seems like this is the case, but
I'm worried this might cause confusion since the role of the backtick and
the need to escape it becomes context-dependent.

2. In the example table near the beginning of the KIP, the name "a.`b\`.c`"
refers to "a" -> "b`c". What happened to the dot in the second part of the
name? Should it refer to "a" -> "b`.c" instead?

3. Is it ever necessary to escape backslashes themselves? If so, when?
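
To make my reading concrete, here is a rough sketch of the splitting logic I
have in mind (this is only my interpretation of the proposal, not code from
the KIP):

import java.util.ArrayList;
import java.util.List;

class FieldPathSplitSketch {
    // Assumes: a backtick opens a quoted segment only at the start of a
    // segment, dots are literal inside a quoted segment, and a backslash
    // escapes a backtick.
    static List<String> split(String path) {
        List<String> parts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean quoted = false;
        boolean segmentStart = true;
        for (int i = 0; i < path.length(); i++) {
            char c = path.charAt(i);
            if (quoted) {
                if (c == '\\' && i + 1 < path.length() && path.charAt(i + 1) == '`') {
                    cur.append('`'); // escaped backtick becomes a literal backtick
                    i++;
                } else if (c == '`') {
                    quoted = false;  // closing backtick ends the quoted segment
                } else {
                    cur.append(c);   // dots stay literal while quoted
                }
            } else if (c == '`' && segmentStart) {
                quoted = true;       // opening backtick at segment start
            } else if (c == '.') {
                parts.add(cur.toString());
                cur.setLength(0);
                segmentStart = true;
                continue;
            } else {
                cur.append(c);       // mid-segment backticks stay literal (question 1)
            }
            segmentStart = false;
        }
        parts.add(cur.toString());
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(split("foo.b`ar.ba`z")); // [foo, b`ar, ba`z]
        System.out.println(split("a.`b\\`.c`"));    // [a, b`.c] (question 2)
    }
}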

Overall, I wish we could come up with a prettier/simpler approach, but the
benefits provided by the dual backtick/dot syntax are too great to deny:
there are no correctness issues like the ones posed with double-dot
escaping that would lead to ambiguity, the most common cases are still very
simple to work with, and there's no risk of interfering with JSON escape
mechanisms (in most cases) or single-quote shell quoting (which may be
relevant when connector configurations are defined on the command line).
Thanks for the suggestion, Robert!

Cheers,

Chris


Re: Hosting Kafka Videos on ASF YouTube channel

2022-08-31 Thread Bill Bejeck
This thread has been open for 22 days, so I will close the vote now.

The question of hosting the four Kafka Streams videos passes:

+1 votes
PMC Members:
* Mickael Maison
* John Roesler
* Bill Bejeck

Vote thread:
https://www.mail-archive.com/dev@kafka.apache.org/msg126019.html

Joe,
Provided this vote is sufficient, what are the next steps?

Thanks,
Bill

On Thu, Aug 25, 2022 at 12:48 PM John Roesler  wrote:

> Thanks all,
>
> I’m also +1 on the Kafka Streams videos.
>
> Thanks,
> John
>
> On Tue, Aug 9, 2022, at 03:54, Mickael Maison wrote:
> > Hi,
> >
> > I checked the four Streams videos
> > (https://kafka.apache.org/32/documentation/streams/), they are good
> > and don't mention any vendors.
> > +1 (binding) for these four videos
> >
> > For the last video (https://kafka.apache.org/intro and
> > https://kafka.apache.org/quickstart) we will have to wait till the
> > intro is edited.
> >
> > Thanks,
> > Mickael
> >
> >
> > On Mon, Aug 8, 2022 at 11:12 PM Joe Brockmeier  wrote:
> >>
> >> Repurpose away. Thanks!
> >>
> >> On Mon, Aug 8, 2022 at 4:55 PM Bill Bejeck  wrote:
> >> >
> >> > Hi Joe,
> >> >
> >> > Thanks that works for me. As for you watching the videos, they are
> about 10 minutes each, and you can watch them at 1.5 - 1.75 playback speed.
> >> >
> >> > If it's ok with you, I'm going to repurpose this thread as a voting
> thread for the videos.
> >> >
> >> > I watched the Kafka Streams videos on
> https://kafka.apache.org/32/documentation/streams/, and I can confirm
> they are vendor-neutral.
> >> > The other videos and logo that show up at the end are coming from
> >> > YouTube, so once we move the videos to the ASF channel, that should go away.
> >> >
> >> > +1(binding).
> >> >
> >> > Thanks,
> >> > Bill
> >> >
> >> >
> >> >
> >> > On Mon, Aug 8, 2022 at 9:46 AM Joe Brockmeier  wrote:
> >> >>
> >> >> If we can get a +1 from the PMC on each video that they're happy that
> >> >> the videos are vendor neutral I think we can do that. I'll also need
> >> >> to view them as well. I hope they're not long videos. :-)
> >> >>
> >> >> On Tue, Aug 2, 2022 at 3:38 PM Bill Bejeck 
> wrote:
> >> >> >
> >> >> > Hi Joe,
> >> >> >
> >> >> > Yes, that is correct.  Sorry, I should have mentioned that in the
> original email.  That is the only video where Tim says that.
> >> >> > The Kafka Streams videos do not mention Confluent.
> >> >> >
> >> >> > We're currently pursuing editing the video to remove the "from
> Confluent" part.
> >> >> > Note that the site also uses the same video on the "quickstart"
> page, so both places will be fixed when editing is completed.
> >> >> >
> >> >> > Can we pursue hosting the Kafka Streams videos for now, then
> revisit the "What is Apache Kafka?" when the editing is done?
> >> >> >
> >> >> > Thanks,
> >> >> > Bill
> >> >> >
> >> >> >
> >> >> > On Tue, Aug 2, 2022 at 3:12 PM Joe Brockmeier 
> wrote:
> >> >> >>
> >> >> >> Hi Bill,
> >> >> >>
> >> >> >> I'm not sure changing hosting would quite solve the problem. The
> first
> >> >> >> video I see on this page:
> >> >> >>
> >> >> >> https://kafka.apache.org/intro
> >> >> >>
> >> >> >> Starts with "Hi, I'm Tim Berglund from *Confluent*" rather than "Hi,
> >> >> >> I'm Tim from Apache Kafka" -- so moving to the ASF Youtube channel
> >> >> >> wouldn't completely solve the problem.
> >> >> >>
> >> >> >> On Tue, Aug 2, 2022 at 3:05 PM Bill Bejeck 
> wrote:
> >> >> >> >
> >> >> >> > Hi,
> >> >> >> >
> >> >> >> > I am an Apache Kafka® committer and PMC member, and I'm working
> on our site to address some issues around our embedded videos and branding.
> >> >> >> >
> >> >> >> > The Kafka site has six embedded videos:
> https://kafka.apache.org/intro, https://kafka.apache.org/quickstart, and
> four videos on https://kafka.apache.org/32/documentation/streams/.
> >> >> >> >
> >> >> >> > The videos are hosted on the Confluent YouTube channel, so the
> branding on the video is from Confluent.  Since it's coming from YouTube,
> there's no way to change it.
> >> >> >> >
> >> >> >> > Would it be possible to upload these videos to the Apache
> Foundation YouTube channel (
> https://www.youtube.com/c/TheApacheFoundation/featured)?  Doing this
> would automatically change the branding to Apache.
> >> >> >> >
> >> >> >> > Thanks, and I look forward to working with you on this matter.
> >> >> >> >
> >> >> >> > Bill Bejeck
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> --
> >> >> >> Joe Brockmeier
> >> >> >> Vice President Marketing & Publicity
> >> >> >> j...@apache.org
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Joe Brockmeier
> >> >> Vice President Marketing & Publicity
> >> >> j...@apache.org
> >>
> >>
> >>
> >> --
> >> Joe Brockmeier
> >> Vice President Marketing & Publicity
> >> j...@apache.org
>


Re: Re: [DISCUSS] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2022-08-31 Thread Chris Egerton
Hi Daniel,

I've taken a look at the KIP in detail. Here are my complete thoughts
(minus the aforementioned sections that may be affected by changes to an
internal-only REST API):

1. Why introduce new mm.host.name and mm.rest.protocol properties instead
of using the properties that are already used by Kafka Connect: listeners,
rest.advertised.host.name, rest.advertised.port, and
rest.advertised.listener? We used to have the rest.host.name and rest.port
properties in Connect but deprecated and eventually removed them in favor
of the listeners property in KIP-208 [1]; I'm hoping we can keep things as
similar as possible between MM2 and Connect in order to make it easier for
users to work with both. I'm also hoping that we can allow users to
configure the port that their MM2 nodes listen on instead of hardcoding MM2
to bind to port 0.

2. Do we still need to change the worker IDs that get used in the status
topic?

Everything else looks good, or should change once the KIP is updated with
the internal-only REST API alternative.
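
For reference, the kind of worker-level REST setup I have in mind for point 1,
reusing the existing Connect properties, would be something like this (the
keys are the current Connect worker configs; the values, and their use in an
MM2 config, are purely illustrative):

import java.util.Map;

class Mm2RestConfigSketch {
    static final Map<String, String> REST_PROPS = Map.of(
        "listeners", "https://0.0.0.0:8083",              // bind address and port
        "rest.advertised.host.name", "mm2-1.example.com", // host advertised to other workers
        "rest.advertised.port", "8083",                   // port advertised to other workers
        "rest.advertised.listener", "https"               // which listener to advertise
    );
}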

Cheers,

Chris

[1] -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface

On Mon, Aug 29, 2022 at 1:55 PM Chris Egerton  wrote:

> Hi Daniel,
>
> Yeah, I think that's the way to go. Adding multiple servers for each
> herder seems like it'd be too much of a pain for users to configure, and if
> we keep the API strictly internal for now, we shouldn't be painting
> ourselves into too much of a corner if/when we decide to expose a
> public-facing REST API for dedicated MM2 clusters.
>
> I plan to take a look at the rest of the KIP and provide a complete review
> sometime this week; I'll hold off on commenting on anything that seems like
> it'll be affected by switching to an internal-only REST API until those
> changes are published, but should be able to review everything else.
>
> Cheers,
>
> Chris
>
> On Mon, Aug 29, 2022 at 6:57 AM Dániel Urbán 
> wrote:
>
>> Hi Chris,
>>
>> I understand your point, sounds good to me.
>> So in short, we should opt for an internal-only API, and preferably a
>> single server solution. Is that right?
>>
>> Thanks
>> Daniel
>>
>> Chris Egerton  ezt írta (időpont: 2022. aug.
>> 26.,
>> P, 17:36):
>>
>> > Hi Daniel,
>> >
>> > Glad to hear from you!
>> >
>> > With regards to the stripped-down REST API alternative, I don't see how
>> > this would prevent us from introducing the fully-fledged Connect REST
>> API,
>> > or even an augmented variant of it, at some point down the road. If we
>> go
>> > with the internal-only API now, and want to expand later, can't we gate
>> the
>> > expansion behind a feature flag configuration property that by default
>> > disables the new feature?
>> >
>> > I'm also not sure that we'd ever want to expose the raw Connect REST API
>> > for dedicated MM2 clusters. If people want that capability, they can
>> > already spin up a vanilla Connect cluster and run as many MM2
>> connectors as
>> > they'd like on it, and as of KIP-458 [1], it's even possible to use a
>> > single Connect cluster to replicate between any two Kafka clusters
>> instead
>> > of only targeting the Kafka cluster that the vanilla Connect cluster
>> > operates on top of. I do agree that it'd be great to be able to
>> dynamically
>> > adjust things like topic filters without having to restart a dedicated
>> MM2
>> > node; I'm just not sure that the vanilla Connect REST API is the
>> > appropriate way to do that, especially since the exact mechanisms that
>> make
>> > a single Connect cluster viable for replicating across any two Kafka
>> > clusters could be abused and cause a dedicated MM2 cluster to start
>> writing
>> > to a completely different Kafka cluster that's not even defined in its
>> > config file.
>> >
>> > Finally, as far as security goes--since this is essentially a bug fix,
>> I'm
>> > inclined to make it as easy as possible for users to adopt it. MTLS is a
>> > great start for securing a REST API, but it's not sufficient on its own
>> > since anyone who could issue an authenticated REST request against the
>> MM2
>> > cluster would still be able to make any changes they want (with the
>> > exception of accessing internal endpoints, which were secured with
>> > KIP-507). If we were to bring up the fully-fledged Connect REST API,
>> > cluster administrators would also likely have to add some kind of
>> > authorization layer to prevent people from using the REST API to mess
>> with
>> > the configurations of the connectors that MM2 brought up. One way of
>> doing
>> > that is to add a REST extension to your Connect cluster, but
>> implementing
>> > and configuring one in order to be able to run a multi-node MM2 cluster
>> > without hitting this bug seems like too much work to be worth it.
>> >
>> > I think if we had a better picture of what a REST API for dedicated MM2
>> > clusters would/should look like, then it would be easier to go along
>> with
>> > this, and we could even just add 

Re: [DISCUSS] KIP-844: Transactional State Stores

2022-08-31 Thread Guozhang Wang
Hello Alex,

Thanks for the detailed replies, I think that makes sense, and in the long
run we would need some public indicators from StateStore to determine if
checkpoints can really be used to indicate clean snapshots.
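
To make that concrete, the recovery-time decision Alex lays out below boils
down to something like the following (a sketch only: everything except the
transactional() flag is a made-up name, and the real StateStore interface has
many more methods):

// Local stub standing in for the relevant part of StateStore.
interface TxnAwareStore {
    boolean transactional();
}

class RecoverySketch {
    // Whether local state can be kept at startup under EOS.
    static boolean keepLocalState(TxnAwareStore store,
                                  boolean checkpointExists,
                                  boolean cleanShutdown) {
        if (store.transactional()) {
            // A txn store rolls back uncommitted writes, so a checkpoint
            // written on commit is trustworthy even after a crash (#3).
            return checkpointExists;
        }
        // A non-txn store cannot tell committed from uncommitted writes: a
        // checkpoint from a graceful shutdown is safe (#1), but after a
        // crash we must wipe local data and restore (#2).
        return checkpointExists && cleanShutdown;
    }
}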

As for the @Evolving label, I think we can still keep it, but for a
different reason: as we add more state management functionality in the near
future, we may need to revisit the public APIs again, and keeping the label
as @Evolving would allow us to modify them if necessary through an easier
path than deprecate -> delete over several minor releases.

Besides that, I have no further comments about the KIP.


Guozhang

On Fri, Aug 26, 2022 at 1:51 AM Alexander Sorokoumov
 wrote:

> Hey Guozhang,
>
>
> I think that we will have to keep StateStore#transactional() because
> post-commit checkpointing of non-txn state stores will break the guarantees
> we want in ProcessorStateManager#initializeStoreOffsetsFromCheckpoint for
> correct recovery. Let's consider checkpoint-recovery behavior under EOS
> that we want to support:
>
> 1. Non-txn state stores should checkpoint on graceful shutdown and restore
> from that checkpoint.
>
> 2. Non-txn state stores should delete local data during recovery after a
> crash failure.
>
> 3. Txn state stores should checkpoint on commit and on graceful shutdown.
> These stores should roll back uncommitted changes instead of deleting all
> local data.
>
>
> #1 and #2 are already supported; this proposal adds #3. Essentially, we
> have two parties at play here - the post-commit checkpointing in
> StreamTask#postCommit and recovery in ProcessorStateManager#
> initializeStoreOffsetsFromCheckpoint. Together, these methods must allow
> all three workflows and prevent invalid behavior, e.g., non-txn stores
> should not checkpoint post-commit to avoid keeping uncommitted data on
> recovery.
>
>
> In the current state of the prototype, we checkpoint only txn state stores
> post-commit under EOS using StateStore#transactional(). If we remove
> StateStore#transactional() and always checkpoint post-commit,
> ProcessorStateManager#initializeStoreOffsetsFromCheckpoint will have to
> determine whether to delete local data. Non-txn implementation of
> StateStore#recover can't detect if it has uncommitted writes. Since its
> default implementation must always return either true or false, signaling
> whether it is restored into a valid committed-only state. If
> StateStore#recover always returns true, we preserve uncommitted writes and
> violate correctness. Otherwise, ProcessorStateManager#
> initializeStoreOffsetsFromCheckpoint would always delete local data even
> after
> a graceful shutdown.
>
>
> With StateStore#transactional we avoid checkpointing non-txn state stores
> and prevent that problem during recovery.
>
>
> Best,
>
> Alex
>
> On Fri, Aug 19, 2022 at 1:05 AM Guozhang Wang  wrote:
>
> > Hello Alex,
> >
> > Thanks for the replies!
> >
> > > As long as we allow custom user implementations of that interface, we
> > should
> > probably either keep that flag to distinguish between transactional and
> > non-transactional implementations or change the contract behind the
> > interface. What do you think?
> >
> > Regarding this question, I thought that in the long run, we may always
> > write checkpoints regardless of txn vs. non-txn stores, in which case we
> > would not need that `StateStore#transactional()`. But for now in order
> for
> > backward compatibility edge cases we still need to distinguish on whether
> > or not to write checkpoints. Maybe I was mis-reading its purposes? If
> yes,
> > please let me know.
> >
> >
> > On Mon, Aug 15, 2022 at 7:56 AM Alexander Sorokoumov
> >  wrote:
> >
> > > Hey Guozhang,
> > >
> > > Thank you for elaborating! I like your idea to introduce a
> StreamsConfig
> > > specifically for the default store APIs. You mentioned Materialized,
> but
> > I
> > > think changes in StreamJoined follow the same logic.
> > >
> > > I updated the KIP and the prototype according to your suggestions:
> > > * Add a new StoreType and a StreamsConfig for transactional RocksDB.
> > > * Decide whether Materialized/StreamJoined are transactional based on
> the
> > > configured StoreType.
> > > * Move RocksDBTransactionalMechanism to
> > > org.apache.kafka.streams.state.internals to remove it from the proposal
> > > scope.
> > > * Add a flag in new Stores methods to configure a state store as
> > > transactional. Transactional state stores use the default transactional
> > > mechanism.
> > > * The changes above allowed to remove all changes to the StoreSupplier
> > > interface.
> > >
> > > I am not sure about marking StateStore#transactional() as evolving. As
> > long
> > > as we allow custom user implementations of that interface, we should
> > > probably either keep that flag to distinguish between transactional and
> > > non-transactional implementations or change the contract behind the
> > > interface. What do you think?
> > >
> > > Best,
> > > Alex
> > >
> 

Re: [DISCUSS] KIP-862: Implement self-join optimization

2022-08-31 Thread Guozhang Wang
Thanks Vicky,

I do not have any further comments about the KIP.


Guozhang

On Tue, Aug 30, 2022 at 8:21 AM Vasiliki Papavasileiou
 wrote:

> Hi Guozhang,
>
> That's an excellent idea, I will make the changes. I was also going back
> and forth with having a specific config for each optimization or not but I
> feel your approach has the best of both worlds.
>
> Thank you,
> Vicky
>
> On Sun, Aug 28, 2022 at 6:20 AM Guozhang Wang  wrote:
>
> > Hello Vicky,
> >
> > I made a quick pass on your WIP PR and now I understand and agree that
> > compatibility is indeed preserved since we get the optimized topology in a
> > second pass, and hence we already "used and burned" the original
> > topology's naming suffixes in the first pass.
> >
> > Regarding the configuration patterns, I still have a bit of a concern
> > about it: primarily, if we follow this pattern of introducing a new config
> > for each optimization rule, in the future we would have a lot of configs
> > --- one per rule --- inside the StreamsConfig. I thought about this back
> > and forth again and still feel that this may not be what we want. I think
> > instead we can change the existing `TOPOLOGY_OPTIMIZATION_CONFIG` to
> > accept a list of strings, separated by commas --- this aligns with other
> > similar configs as well --- so that for different scenarios users can
> > choose either fine-grained or coarse-grained controls, e.g.:
> >
> > * I just want to enable all rules, or none: "all", "none".
> > * I know my app was created with Kafka version X, and I just want to only
> > apply all rules that are already there since version X: "versionX" --- I
> > just made it up for future use cases since we discussed it in the
> > original KIP when we introduced "TOPOLOGY_OPTIMIZATION_CONFIG"; we do not
> > need to include it in this KIP.
> > * I know my app is compatible with specific rules A/B/C, and I just want
> to
> > always enable those and not others: "ruleA,ruleB,ruleC".
> >
> > So far we only have a few rules: a) reuse source topic as changelog topic
> > for KTable, b) merge duplicate repartition topics, c) self-join (this
> > KIP), so I suggest in this KIP we just make the
> > `TOPOLOGY_OPTIMIZATION_CONFIG` accept a list of strings, but 1) check
> > that some strings cannot coexist (e.g. `none` and `all`), and 2) add a new
> > string value for the self-join rule itself. In this way:
> >
> > * People who chose `none` before will not be impacted.
> > * People who chose `all` before will get this optimization by default,
> and
> > it's backward compatible so it's okay; they also get what they meant: I
> > just want "all" :)
> > * Advanced users who read about this KIP and just what it but not others:
> > they will change their config from `none` to `self-join`.
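> >
> > For example (the rule-name strings below are placeholders until the KIP
> > pins down the exact values):
> >
> > import java.util.Properties;
> > import org.apache.kafka.streams.StreamsConfig;
> >
> > class OptimizationConfigSketch {
> >     static Properties fineGrained() {
> >         Properties props = new Properties();
> >         // opt in to specific rules instead of the coarse "all" / "none"
> >         props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG,
> >             "merge.repartition.topics,self-join");
> >         return props;
> >     }
> > }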
> >
> > WDYT?
> >
> >
> > Guozhang
> >
> >
> >
> >
> > On Fri, Aug 12, 2022 at 7:25 PM John Roesler 
> wrote:
> >
> > > Thanks for the KIP, Vicky!
> > >
> > > Re 1/2, I agree with what you both worked out.
> > >
> > > Re 3: It sounds like you were able to preserve backward compatibility,
> so
> > > I don’t think you need to add any new configs. I think you can just
> > switch
> > > it on if people specify “all”.
> > >
> > > Thanks!
> > > -John
> > >
> > >
> > > On Thu, Aug 11, 2022, at 11:27, Guozhang Wang wrote:
> > > > Thanks Vicky for your reply!
> > > >
> > > > Re 1/2): I think you have a great point here to adhere with the
> > existing
> > > > implementation, I'm convinced. In that case we do not need to
> consider
> > > > left/outer-joins, and hence do not need to worry about the extra
> store
> > in
> > > > the impl.
> > > >
> > > > Re 3): I'm curious how the compatibility is preserved since with
> > > > optimizations turned on, we would use fewer stores and hence the
> store
> > > name
> > > > suffixes would change. In your experiment did you specifically
> specify
> > > the
> > > > store names, e.g. via Materialized? I'd be glad if it turns out to
> > really
> > > > be conveniently backward compatible, and rest with my concerns :)
> > > >
> > > >
> > > > Guozhang
> > > >
> > > > On Thu, Aug 11, 2022 at 4:44 AM Vasiliki Papavasileiou
> > > >  wrote:
> > > >
> > > >> Hi Guozhang,
> > > >>
> > > >> Thank you very much for your comments.
> > > >>
> > > >> Regarding 1: the extra state store is only needed in outer joins
> since
> > > >> that's the only case we have non-joining records that would need to
> > get
> > > >> emitted when the window closes, right? If we do decide to go with an
> > > >> outer-join implementation, I will make sure to have the extra state
> > > store
> > > >> as well. Thank you for pointing it out.
> > > >>
> > > >> Regarding 2: As the self-join is only a physical optimization over
> an
> > > inner
> > > >> join whose two arguments are the same entity, it should return the
> > same
> > > >> results as the inner join. We wouldn't want a user upgrading and
> > > enabling
> > > >> the optimization to suddenly see that their joins behave differently
> > and

Re: [VOTE] KIP-837 Allow MultiCasting a Result Record.

2022-08-31 Thread Sagar
Thanks Bruno for the great points.

I see 2 options here =>

1) As Chris suggested, drop the support for dropping records in the
partitioner. That way, an empty list could signify the usage of the default
partitioner. Also, if the deprecated partition() method returns null,
thereby signifying the default partitioner, then partitions() can return an
empty list, i.e., the default partitioner.

2) Or we treat a null return value from the partitions() method as
signifying the usage of the default partitioner. In the default
implementation of the partitions() method, if partition() returns null,
then partitions() can also return null (instead of an empty list). The
RecordCollectorImpl code can be modified accordingly. @Chris, to your
point, we can even drop the support for dropping records. It came up during
the KIP discussion, and I thought it might be a useful feature. Let me know
what you think.

3) Lastly, about the partition number check: I wanted to avoid throwing an
exception, so I thought adding it might be a useful feature. But as you
pointed out, if it can break backward compatibility, it's better to remove
it.
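
To make option 1 concrete, a custom partitioner under that scheme might look
like the following (the partitions() signature follows the current KIP draft
and may still change; the class itself is just an illustration):

import java.util.Collections;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.apache.kafka.streams.processor.StreamPartitioner;

// Sketch under option 1: an empty set means "use the default partitioner".
class EvenKeyMulticaster implements StreamPartitioner<Integer, String> {

    @Override
    public Integer partition(String topic, Integer key, String value, int numPartitions) {
        return null; // existing API: null means default partitioning
    }

    // Proposed in this KIP; not part of any released API yet.
    public Set<Integer> partitions(String topic, Integer key, String value, int numPartitions) {
        if (key != null && key % 2 == 0) {
            // multicast records with even keys to every partition
            return IntStream.range(0, numPartitions).boxed().collect(Collectors.toSet());
        }
        return Collections.emptySet(); // default partitioning under option 1
    }
}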

Thanks!
Sagar.


On Tue, Aug 30, 2022 at 6:32 PM Chris Egerton 
wrote:

> +1 to Bruno's concerns about backward compatibility. Do we actually need
> support for dropping records in the partitioner? It doesn't seem necessary
> based on the motivation for the KIP. If we remove that feature, we could
> handle null and/or empty lists by using the default partitioning,
> equivalent to how we handle null return values from the existing partition
> method today.
>
> On Tue, Aug 30, 2022 at 8:55 AM Bruno Cadonna  wrote:
>
> > Hi Sagar,
> >
> > Thank you for the updates!
> >
> > I do not intend to prolong this vote thread more than needed, but I
> > still have some points.
> >
> > The deprecated partition method can return null if the default
> > partitioning logic of the producer should be used.
> > With the new method partitions(), it seems that it is no longer possible
> > to use the default partitioning logic.
> >
> > Also, in the default implementation of method partitions(), a record
> > that would use the default partitioning logic in method partition()
> > would be dropped, which would break backward compatibility since Streams
> > would always call the new method partitions() even though the users
> > still implement the deprecated method partition().
> >
> > I have a last point that we should probably discuss on the PR and not on
> > the KIP but since you added the code in the KIP I need to mention it. I
> > do not think you should check the validity of the partition number since
> > the ProducerRecord does the same check and throws an exception. If
> > Streams adds the same check but does not throw, the behavior is not
> > backward compatible.
> >
> > Best,
> > Bruno
> >
> >
> > On 30.08.22 12:43, Sagar wrote:
> > > Thanks Bruno/Chris,
> > >
> > > Even I agree that it might be better to keep it simple the way Chris
> > > suggested. I have updated the KIP accordingly. I made a couple of minor
> > > changes to the KIP:
> > >
> > > 1) One of them is the change of the return type of the partitions
> > > method from List to Set. This is to ensure that in case the implementation of
> > > StreamPartitioner is buggy and ends up returning duplicate
> > > partition numbers, we won't have duplicates thereby not trying to send
> to
> > > the same partition multiple times due to this.
> > > 2) I also added a check to send the record only to valid partition
> > numbers
> > > and log and drop when the partition number is invalid. This is again to
> > > prevent errors for cases when the StreamPartitioner implementation has
> > some
> > > bugs (since there are no validations as such).
> > > 3) I also updated the Test Plan section based on the suggestion from
> > Bruno.
> > > 4) I updated the default implementation of partitions method based on
> the
> > > great catch from Chris!
> > >
> > > Let me know if it looks fine now.
> > >
> > > Thanks!
> > > Sagar.
> > >
> > >
> > > On Tue, Aug 30, 2022 at 3:00 PM Bruno Cadonna 
> > wrote:
> > >
> > >> Hi,
> > >>
> > >> I am in favour of discarding the sugar for broadcasting and leaving the
> > >> broadcasting to the implementation as Chris suggests. I think that is
> > >> the cleanest option.
> > >>
> > >> Best,
> > >> Bruno
> > >>
> > >> On 29.08.22 19:50, Chris Egerton wrote:
> > >>> Hi all,
> > >>>
> > >>> I think it'd be useful to be more explicit about broadcasting to all
> > >> topic
> > >>> partitions rather than add implicit behavior for empty cases (empty
> > >>> optional, empty list, etc.). The suggested enum approach would
> address
> > >> that
> > >>> nicely.
> > >>>
> > >>> It's also worth noting that there's no hard requirement to add sugar
> > for
> > >>> broadcasting to all topic partitions since the API already provides
> the
> > >>> number of topic partitions available when calling a stream
> partitioner.
> > >> If
> > >>> we can't find a clean way to add this support, it 

[GitHub] [kafka-site] bbejeck commented on pull request #437: HelloSafe Kafka

2022-08-31 Thread GitBox


bbejeck commented on PR #437:
URL: https://github.com/apache/kafka-site/pull/437#issuecomment-1233091319

   Thanks @SimonRenault86 for the addition to the Powered By page!





[GitHub] [kafka-site] bbejeck commented on pull request #437: HelloSafe Kafka

2022-08-31 Thread GitBox


bbejeck commented on PR #437:
URL: https://github.com/apache/kafka-site/pull/437#issuecomment-1233090781

   merged #437 into asf-site





[GitHub] [kafka-site] bbejeck merged pull request #437: HelloSafe Kafka

2022-08-31 Thread GitBox


bbejeck merged PR #437:
URL: https://github.com/apache/kafka-site/pull/437





[GitHub] [kafka-site] bbejeck commented on pull request #431: Brain Station 23 adopted Kafka

2022-08-31 Thread GitBox


bbejeck commented on PR #431:
URL: https://github.com/apache/kafka-site/pull/431#issuecomment-1233076438

   > image added to the instructed folder
   
   Hi @sadatrafsan - did you forget to commit the image? I still don't see 
`bs-23.png` on your branch





Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1190

2022-08-31 Thread Apache Jenkins Server
See 




[DISCUSSION] KIP-864: Support --bootstrap-server in kafka-streams-application-reset

2022-08-31 Thread Николай Ижиков
Hello.

I would like to start a discussion on a small KIP [1].
The goal of the KIP is to add the same --bootstrap-server parameter to the 
`kafka-streams-application-reset.sh` tool that other tools use.
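
For example, after the change the tool would be invoked the same way as the
other tools (host and application id below are placeholders):

bin/kafka-streams-application-reset.sh --bootstrap-server localhost:9092 --application-id my-streams-app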
Please share your feedback.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-864%3A+Support+--bootstrap-server+in+kafka-streams-application-reset



[jira] [Created] (KAFKA-14193) Connect system test ConnectRestApiTest is failing

2022-08-31 Thread Yash Mayya (Jira)
Yash Mayya created KAFKA-14193:
--

 Summary: Connect system test ConnectRestApiTest is failing
 Key: KAFKA-14193
 URL: https://issues.apache.org/jira/browse/KAFKA-14193
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Yash Mayya
Assignee: Yash Mayya


[ConnectRestApiTest|https://github.com/apache/kafka/blob/trunk/tests/kafkatest/tests/connect/connect_rest_test.py]
 is currently failing on `trunk` and `3.3` with the following assertion error:

 

 
{code:java}
AssertionError()
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
183, in _do_run
    data = self.run_test()
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
243, in run_test
    return self.test_context.function(self.test)
  File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
433, in wrapper
    return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_rest_test.py", 
line 106, in test_rest_api
    self.verify_config(self.FILE_SOURCE_CONNECTOR, self.FILE_SOURCE_CONFIGS, 
configs)
  File "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_rest_test.py", 
line 219, in verify_config
    assert config_def == set(config_names){code}
On closer inspection, this is because of the new source connector EOS-related 
configs added in [https://github.com/apache/kafka/pull/11775]. Adding the 
following new configs -
{code:java}
offsets.storage.topic, transaction.boundary, exactly.once.support, 
transaction.boundary.interval.ms{code}
in the expected config defs 
[here|https://github.com/apache/kafka/blob/6f4778301b1fcac1e2750cc697043d674eaa230d/tests/kafkatest/tests/connect/connect_rest_test.py#L35]
 fixes the tests on the 3.3 branch. However, the tests still fail on trunk due 
to the changes from [https://github.com/apache/kafka/pull/12450].

 

The plan to fix this is to raise two PRs against trunk patching 
connect_rest_test.py: the first one fixes the EOS-configs-related issue and 
can be backported to 3.3; the second one fixes the issue related to the 
propagation of full connector configs to tasks and shouldn't be backported to 
3.3 (because the commit from https://github.com/apache/kafka/pull/12450 is 
only on trunk and not on 3.3).

 

 





[jira] [Created] (KAFKA-14192) Move registering and unregistering changelogs to state updater

2022-08-31 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-14192:
-

 Summary: Move registering and unregistering changelogs to state 
updater
 Key: KAFKA-14192
 URL: https://issues.apache.org/jira/browse/KAFKA-14192
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bruno Cadonna


Currently, we register and unregister changelogs when we initialize and 
close/recycle a task.
When we remove the old code path for restoration and only use the state 
updater, we should consider moving registering and unregistering changelogs 
inside the state updater. That way, registering and unregistering changelogs 
would happen in one place, and a changelog would only be registered when it is 
actually needed, i.e., during restoration of active tasks and updating of 
standby tasks, and not during the complete life of a task.


