Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #440

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: log 2min processing summary of StreamThread loop (#9941)


--
[...truncated 7.13 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7b588efb, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7b588efb, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b898b4b, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b898b4b, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@79d24a0c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@79d24a0c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23ea5d0c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23ea5d0c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@342a1707, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@342a1707, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5a53a0d6, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5a53a0d6, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5ec80009, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5ec80009, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2121bcb3, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2121bcb3, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@52389edf, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@52389edf, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2e0132b1, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2e0132b1, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fcdf39b, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fcdf39b, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7f8b90, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7f8b90, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@17d7090, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@17d7090, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder 

Unsubscribe from Notifications

2021-01-21 Thread Hiringuru
Hi,

Kindly unsubscribe our email from your daily notifications. We don't want to 
receive any further notifications.

It would be highly appreciated if you could check and remove our email from 
your list.

Thanks
Team HirinGuru

[GitHub] [kafka-site] mimaison commented on pull request #322: KAFKA-6223: Use archive.apache.org for older releases

2021-01-21 Thread GitBox


mimaison commented on pull request #322:
URL: https://github.com/apache/kafka-site/pull/322#issuecomment-764846591


   @mjsax can you take a look? Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #392

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: log 2min processing summary of StreamThread loop (#9941)


--
[...truncated 3.55 MB...]

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@33cfbe66, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@74648473, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@74648473, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@597051e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@597051e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@cf80b47, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@cf80b47, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e64fb37, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e64fb37, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b4d09e3, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b4d09e3, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2ee095a2, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2ee095a2, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@51a02553, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@51a02553, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@638b8543, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@638b8543, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@716a590b, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@716a590b, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2fee75b3, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2fee75b3, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@54a9eea0, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@54a9eea0, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@458ab6c7, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@458ab6c7, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@10d09df9, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > build

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #421

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: log 2min processing summary of StreamThread loop (#9941)


--
[...truncated 3.58 MB...]
OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
STARTED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
PASSED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() STARTED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() 
STARTED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
STARTED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
PASSED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 PASSED

KeyValueStoreFacadeTest > shouldReturnIsOpen() STARTED

KeyValueStoreFacadeTest > shouldReturnIsOpen() PASSED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() STARTED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() PASSED

KeyValueStoreFacadeTest > shouldReturnName() STARTED

KeyValueStoreFacadeTest > shouldReturnName() PASSED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() STARTED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() PASSED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() PASSED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldForwardClose() STARTED

KeyValueStoreFacadeTest > shouldForwardClose() PASSED

KeyValueStoreFacadeTest > shouldForwardFlush() STARTED

KeyValueStoreFacadeTest > shouldForwardFlush() PASSED

KeyValueStoreFacadeTest > shouldForwardInit() STARTED

KeyValueStoreFacadeTest > shouldForwardInit() PASSED

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #439

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Drop enable.metadata.quorum config (#9934)


--
[...truncated 3.58 MB...]

TopologyTestDriverAtLeastOnceTest > 
shouldNotCreateStateDirectoryForStatelessTopology() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() PASSED

TopologyTestDriverAtLeastOnceTest > shouldReturnAllStoresNames() STARTED

TopologyTestDriverAtLeastOnceTest > shouldReturnAllStoresNames() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() PASSED

TopologyTestDriverAtLeastOnceTest > shouldProcessConsumerRecordList() STARTED

TopologyTestDriverAtLeastOnceTest > shouldProcessConsumerRecordList() PASSED

TopologyTestDriverAtLeastOnceTest > shouldUseSinkSpecificSerializers() STARTED

TopologyTestDriverAtLeastOnceTest > shouldUseSinkSpecificSerializers() PASSED

TopologyTestDriverAtLeastOnceTest > shouldFlushStoreForFirstInput() STARTED

TopologyTestDriverAtLeastOnceTest > shouldFlushStoreForFirstInput() PASSED

TopologyTestDriverAtLeastOnceTest > shouldProcessFromSourceThatMatchPattern() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldProcessFromSourceThatMatchPattern() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldCaptureSinkTopicNamesIfWrittenInto() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldCaptureSinkTopicNamesIfWrittenInto() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldUpdateStoreForNewKey() STARTED

TopologyTestDriverAtLeastOnceTest > shouldUpdateStoreForNewKey() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTime() STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTime() PASSED

TopologyTestDriverAtLeastOnceTest > shouldSetRecordMetadata() STARTED

TopologyTestDriverAtLeastOnceTest > shouldSetRecordMetadata() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForLargerValue() STARTED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForLargerValue() PASSED

TopologyTestDriverAtLeastOnceTest > shouldReturnCorrectInMemoryStoreTypeOnly() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldReturnCorrectInMemoryStoreTypeOnly() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldThrowForMissingTime() STARTED

TopologyTestDriverAtLeastOnceTest > shouldThrowForMissingTime() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldCaptureInternalTopicNamesIfWrittenInto() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldCaptureInternalTopicNamesIfWrittenInto() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTimeDeprecated() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTimeDeprecated() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldProcessRecordForTopic() STARTED

TopologyTestDriverAtLeastOnceTest > shouldProcessRecordForTopic() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldForwardRecordsFromSubtopologyToSubtopology() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldForwardRecordsFromSubtopologyToSubtopology() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForSmallerValue() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForSmallerValue() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldCreateStateDirectoryForStatefulTopology() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldCreateStateDirectoryForStatefulTopology() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotRequireParameters() STARTED

TopologyTestDriverAtLeastOnceTest > shouldNotRequireParameters() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #391

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Drop enable.metadata.quorum config (#9934)


--
[...truncated 3.55 MB...]
TestTopicsTest > testEmptyTopic() STARTED

TestTopicsTest > testEmptyTopic() PASSED

TestTopicsTest > testStartTimestamp() STARTED

TestTopicsTest > testStartTimestamp() PASSED

TestTopicsTest > testNegativeAdvance() STARTED

TestTopicsTest > testNegativeAdvance() PASSED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() PASSED

TestTopicsTest > testDuration() STARTED

TestTopicsTest > testDuration() PASSED

TestTopicsTest > testOutputToString() STARTED

TestTopicsTest > testOutputToString() PASSED

TestTopicsTest > testValue() STARTED

TestTopicsTest > testValue() PASSED

TestTopicsTest > testTimestampAutoAdvance() STARTED

TestTopicsTest > testTimestampAutoAdvance() PASSED

TestTopicsTest > testOutputWrongSerde() STARTED

TestTopicsTest > testOutputWrongSerde() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() PASSED

TestTopicsTest > testWrongSerde() STARTED

TestTopicsTest > testWrongSerde() PASSED

TestTopicsTest > testKeyValuesToMapWithNull() STARTED

TestTopicsTest > testKeyValuesToMapWithNull() PASSED

TestTopicsTest > testNonExistingOutputTopic() STARTED

TestTopicsTest > testNonExistingOutputTopic() PASSED

TestTopicsTest > testMultipleTopics() STARTED

TestTopicsTest > testMultipleTopics() PASSED

TestTopicsTest > testKeyValueList() STARTED

TestTopicsTest > testKeyValueList() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() PASSED

TestTopicsTest > testValueList() STARTED

TestTopicsTest > testValueList() PASSED

TestTopicsTest > testRecordList() STARTED

TestTopicsTest > testRecordList() PASSED

TestTopicsTest > testNonExistingInputTopic() STARTED

TestTopicsTest > testNonExistingInputTopic() PASSED

TestTopicsTest > testKeyValuesToMap() STARTED

TestTopicsTest > testKeyValuesToMap() PASSED

TestTopicsTest > testRecordsToList() STARTED

TestTopicsTest > testRecordsToList() PASSED

TestTopicsTest > testKeyValueListDuration() STARTED

TestTopicsTest > testKeyValueListDuration() PASSED

TestTopicsTest > testInputToString() STARTED

TestTopicsTest > testInputToString() PASSED

TestTopicsTest > testTimestamp() STARTED

TestTopicsTest > testTimestamp() PASSED

TestTopicsTest > testWithHeaders() STARTED

TestTopicsTest > testWithHeaders() PASSED

TestTopicsTest > testKeyValue() STARTED

TestTopicsTest > testKeyValue() PASSED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain NO

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #420

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Tweak IBM i support in "stop" scripts (#9810)

[github] MINOR: Import RaftConfig config definitions into KafkaConfig (#9916)

[github] MINOR: Drop enable.metadata.quorum config (#9934)


--
[...truncated 3.58 MB...]
TopologyTestDriverAtLeastOnceTest > shouldRespectTaskIdling() STARTED

TopologyTestDriverAtLeastOnceTest > shouldRespectTaskIdling() PASSED

TopologyTestDriverAtLeastOnceTest > shouldUseSourceSpecificDeserializers() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldUseSourceSpecificDeserializers() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldReturnAllStores() STARTED

TopologyTestDriverAtLeastOnceTest > shouldReturnAllStores() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldNotCreateStateDirectoryForStatelessTopology() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldNotCreateStateDirectoryForStatelessTopology() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() PASSED

TopologyTestDriverAtLeastOnceTest > shouldReturnAllStoresNames() STARTED

TopologyTestDriverAtLeastOnceTest > shouldReturnAllStoresNames() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() PASSED

TopologyTestDriverAtLeastOnceTest > shouldProcessConsumerRecordList() STARTED

TopologyTestDriverAtLeastOnceTest > shouldProcessConsumerRecordList() PASSED

TopologyTestDriverAtLeastOnceTest > shouldUseSinkSpecificSerializers() STARTED

TopologyTestDriverAtLeastOnceTest > shouldUseSinkSpecificSerializers() PASSED

TopologyTestDriverAtLeastOnceTest > shouldFlushStoreForFirstInput() STARTED

TopologyTestDriverAtLeastOnceTest > shouldFlushStoreForFirstInput() PASSED

TopologyTestDriverAtLeastOnceTest > shouldProcessFromSourceThatMatchPattern() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldProcessFromSourceThatMatchPattern() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldCaptureSinkTopicNamesIfWrittenInto() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldCaptureSinkTopicNamesIfWrittenInto() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldUpdateStoreForNewKey() STARTED

TopologyTestDriverAtLeastOnceTest > shouldUpdateStoreForNewKey() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTime() STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTime() PASSED

TopologyTestDriverAtLeastOnceTest > shouldSetRecordMetadata() STARTED

TopologyTestDriverAtLeastOnceTest > shouldSetRecordMetadata() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForLargerValue() STARTED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForLargerValue() PASSED

TopologyTestDriverAtLeastOnceTest > shouldReturnCorrectInMemoryStoreTypeOnly() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldReturnCorrectInMemoryStoreTypeOnly() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldThrowForMissingTime() STARTED

TopologyTestDriverAtLeastOnceTest > shouldThrowForMissingTime() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldCaptureInternalTopicNamesIfWrittenInto() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldCaptureInternalTopicNamesIfWrittenInto() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTimeDeprecated() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateOnWallClockTimeDeprecated() 
PASSED

TopologyTestDriverAtLeastOnceTest > shouldProcessRecordForTopic() STARTED

TopologyTestDriverAtLeastOnceTest > shouldProcessRecordForTopic() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldForwardRecordsFromSubtopologyToSubtopology() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldForwardRecordsFromSubtopologyToSubtopology() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForSmallerValue() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldNotUpdateStoreForSmallerValue() PASSED

TopologyTestDriverAtLeastOnceTest > 
shouldCreateStateDirectoryForStatefulTopology() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldCreateStateDirectoryForStatefulTopology() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotRequireParameters() STARTED

TopologyTestDriverAtLeastOnceTest > shouldNotRequireParameters() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tes

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #390

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Tweak IBM i support in "stop" scripts (#9810)

[github] MINOR: Import RaftConfig config definitions into KafkaConfig (#9916)


--
[...truncated 3.55 MB...]

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@33cfbe66, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@74648473, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@74648473, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@597051e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@597051e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@cf80b47, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@cf80b47, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e64fb37, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e64fb37, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b4d09e3, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b4d09e3, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2ee095a2, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2ee095a2, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@51a02553, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@51a02553, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@638b8543, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@638b8543, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@716a590b, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@716a590b, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2fee75b3, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2fee75b3, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@54a9eea0, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@54a9eea0, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@458ab6c7, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@458ab6c7, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@10d09df9, 
timestamped = false, caching = false

Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #419

2021-01-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #438

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Tweak IBM i support in "stop" scripts (#9810)

[github] MINOR: Import RaftConfig config definitions into KafkaConfig (#9916)


--
[...truncated 3.57 MB...]

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2cbd1b4, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2cbd1b4, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@39b51ac0, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@39b51ac0, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6c7ebae9, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6c7ebae9, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3635bb00, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3635bb00, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7b588efb, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7b588efb, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b898b4b, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b898b4b, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@79d24a0c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@79d24a0c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23ea5d0c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23ea5d0c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@342a1707, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@342a1707, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5a53a0d6, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5a53a0d6, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5ec80009, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5ec80009, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2121bcb3, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2121bcb3, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@52389edf, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@52389edf, 
timestamped = false, caching = true, logging

Re: [DISCUSS] KIP-690 Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-01-21 Thread Omnia Ibrahim
Hi Mickael,
Thanks for taking another look at the KIP. Regarding your questions:

- I believe we need both "isMM2InternalTopic" and
`ReplicationPolicy.isInternalTopic`, as `ReplicationPolicy.isInternalTopic`
checks whether a topic is a Kafka internal topic, while `isMM2InternalTopic`
focuses only on whether a topic is an MM2 internal topic (i.e.
heartbeat/checkpoint/offset-sync). The assumption that the default MM2
internal topics match "ReplicationPolicy.isInternalTopic" will no longer be
accurate once we implement this KIP.

- "isCheckpointTopic" will detect all checkpoint topics for all MM2
instances this is needed for "MirrorClient.checkpointTopics" which
originally check if the topic name ends with CHECKPOINTS_TOPIC_SUFFIX. So
this method just to keep the same functionality that originally exists in
MM2

- "checkpointTopic" is used in two places 1. At topic creation in
"MirrorCheckpointConnector.createInternalTopics" which use
"sourceClusterAlias() + CHECKPOINTS_TOPIC_SUFFIX" and 2. At
"MirrorClient.remoteConsumerOffsets" which is called by
"RemoteClusterUtils.translateOffsets"  the cluster alias here referred to
as "remoteCluster" where the topic name is "remoteClusterAlias +
CHECKPOINTS_TOPIC_SUFFIX"  (which is an argument in RemoteClusterUtils, not
a config) This why I called the variable cluster instead of source and
instead of using the config to figure out the cluster aliases from config
as we use checkpoints to keep `RemoteClusterUtils` compatible for existing
users. I see a benefit of just read the config a find out the cluster
aliases but on the other side, I'm not sure why "RemoteClusterUtils"
doesn't get the name of the cluster from the properties instead of an
argument, so I decided to keep it just for compatibility.
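
To make the shape of the API we're discussing concrete, here is a minimal,
purely illustrative Java sketch of such a naming policy. The interface name,
method signatures, and default topic names below are my assumptions based
only on the methods mentioned in this thread; they are not the actual KIP-690
definitions.

// Illustrative sketch only -- names, signatures, and default topic names are
// assumptions drawn from this thread, not the actual KIP-690 API.
import java.util.Map;
import org.apache.kafka.common.Configurable;

public interface InternalTopicNamingPolicy extends Configurable {
    String heartbeatsTopic();                          // heartbeats topic name
    String offsetSyncTopic(String targetClusterAlias); // offset-syncs topic name
    String checkpointTopic(String clusterAlias);       // checkpoints topic name
    boolean isMM2InternalTopic(String topic);          // heartbeat/checkpoint/offset-sync
    boolean isCheckpointTopic(String topic);           // checkpoints topic of any MM2 instance
}

// A default implementation following the suffix convention described above
// (the literal suffixes here approximate MM2's current defaults).
class DefaultInternalTopicNamingPolicy implements InternalTopicNamingPolicy {
    private static final String CHECKPOINTS_TOPIC_SUFFIX = ".checkpoints.internal";
    private static final String OFFSET_SYNCS_TOPIC_PREFIX = "mm2-offset-syncs.";

    @Override public void configure(Map<String, ?> configs) { }

    @Override public String heartbeatsTopic() { return "heartbeats"; }

    @Override public String offsetSyncTopic(String targetClusterAlias) {
        return OFFSET_SYNCS_TOPIC_PREFIX + targetClusterAlias + ".internal";
    }

    @Override public String checkpointTopic(String clusterAlias) {
        return clusterAlias + CHECKPOINTS_TOPIC_SUFFIX;
    }

    @Override public boolean isMM2InternalTopic(String topic) {
        return topic.equals(heartbeatsTopic())
            || topic.startsWith(OFFSET_SYNCS_TOPIC_PREFIX)
            || isCheckpointTopic(topic);
    }

    @Override public boolean isCheckpointTopic(String topic) {
        return topic.endsWith(CHECKPOINTS_TOPIC_SUFFIX);
    }
}

With something along these lines, topic creation would call
checkpointTopic(sourceClusterAlias()) while RemoteClusterUtils would call
checkpointTopic(remoteClusterAlias), which is exactly the compatibility
concern described above.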

Hope these answer some of your concerns.
Best
Omnia

On Thu, Jan 21, 2021 at 3:37 PM Mickael Maison 
wrote:

> Hi Omnia,
>
> Thanks for the updates. Sorry for the delay but I have a few more
> small questions about the API:
> - Do we really need "isMM2InternalTopic()"? There's already
> "ReplicationPolicy.isInternalTopic()". If so, we need to explain the
> difference between these 2 methods.
>
> - Is "isCheckpointTopic()" expected to detect all checkpoint topics
> (for all MM2 instances) or only the ones for this connector instance.
> If it's the later, I wonder if we could do without the method. As this
> interface is only called by MM2, we could first call
> "checkpointTopic()" and check if that's equal to the topic we're
> checking. If it's the former, we don't really know topic names other
> MM2 instances may be using!
>
> - The 3 methods returning topic names have different APIs:
> "heartbeatsTopic()" takes no arguments, "offsetSyncTopic()" takes the
> target cluster alias and "checkpointTopic()" takes "clusterAlias"
> (which one is it? source or target?). As the interface extends
> Configurable, maybe we could get rid of all the arguments and use the
> config to find the cluster aliases. WDYT?
>
> These are minor concerns, just making sure I fully understand how the
> API is expected to be used. Once these are cleared, I'll be happy to
> vote for this KIP.
>
> Thanks
>
> On Fri, Jan 8, 2021 at 12:06 PM Omnia Ibrahim 
> wrote:
> >
> > Hi Mickael,
> > Did you get time to review the changes to the KIP? If you're okay with it,
> > could you vote for the KIP here:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg113575.html?
> > Thanks
> >
> > On Thu, Dec 10, 2020 at 2:19 PM Omnia Ibrahim 
> wrote:
> >>
> >> Hi Mickael,
> >> 1) That's right, the interface and default implementation will be in
> >> mirror-connect.
> >> 2) Renaming the interface should be fine too, especially if you're planning
> >> to move other functionality related to the creation there; I can edit this.
> >>
> >> If you are okay with that, please vote for the KIP here:
> https://www.mail-archive.com/dev@kafka.apache.org/msg113575.html
> >>
> >>
> >> Thanks
> >> Omnia
> >> On Thu, Dec 10, 2020 at 12:58 PM Mickael Maison <
> mickael.mai...@gmail.com> wrote:
> >>>
> >>> Hi Omnia,
> >>>
> >>> Thank you for the reply, it makes sense.
> >>>
> >>> A couple more comments:
> >>>
> >>> 1) I'm assuming the new interface and default implementation will be
> >>> in the mirror-client project? as the names of some of these topics are
> >>> needed by RemoteClusterUtils on the client-side.
> >>>
> >>> 2) I'm about to open a KIP to specify where the offset-syncs topic is
> >>> created by MM2. In restricted environments, we'd prefer MM2 to only
> >>> have read access to the source cluster and have the offset-syncs on
> >>> the target cluster. I think allowing to specify the cluster where to
> >>> create that topic would be a natural extension of the interface you
> >>> propose here.
> >>>
> >>> So I wonder if your interface could be named InternalTopicsPolicy?
> >>> That's a bit more generic than InternalTopicNamingPolicy. That would
> >>> also match the configuration setting, internal.topics.policy.class,
> >>> you're proposing

Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2021-01-21 Thread Randall Hauch
Tom, et al.,

I'm really late to the party and mistakenly thought the scope of this KIP
included only the broker. But I now see in the KIP-676 [1] text the
following claim:

Kafka exposes a number of APIs for describing and changing logger levels:
>
> * The Kafka broker exposes the DescribeConfigs RPC with the BROKER_LOGGER
> config resource.
> * Broker logger levels can also be configured using the
> Log4jControllerMBean MBean, exposed through JMX as
> kafka:type=kafka.Log4jController.
> * Kafka Connect exposes the /admin/loggers REST API endpoint for
> describing and changing logger levels.
>
> When accessing a logger's level these APIs do not respect the logger
> hierarchy.
> Instead, if the logger's level is not explicitly set the level of the root
> logger is used, even when an intermediate logger is configured with a
> different level.
>

Regarding Connect, the third bullet is accurate: Kafka
Connect's `/admin/loggers/` REST API introduced via KIP-495 [2] does
describe and change logger levels.

But the first sentence after the bullets is inaccurate, because per KIP-495
the Kafka Connect `/admin/loggers/` REST API does respect the hierarchy. In
fact, this case is explicitly called out in KIP-495:

Setting the log level of an ancestor (for example,
> `org.apache.kafka.connect` as opposed to a classname) will update the
> levels of all child classes.
>

and there are even unit tests in Connect for that case [3].
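
As an aside, a small log4j 1.x snippet (illustrative only, not Kafka or
Connect code) shows the distinction at stake here: the level explicitly
configured on a logger versus the effective level it inherits from the
closest configured ancestor.

// Illustrative log4j 1.x example -- not Kafka/Connect code.
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class LoggerHierarchyDemo {
    public static void main(String[] args) {
        Logger.getRootLogger().setLevel(Level.INFO);
        Logger.getLogger("kafka.foo").setLevel(Level.TRACE);

        Logger child = Logger.getLogger("kafka.foo.bar");
        // No level is set explicitly on kafka.foo.bar:
        System.out.println(child.getLevel());          // null
        // log4j resolves its effective level from the closest configured
        // ancestor ("kafka.foo"), not from the root logger:
        System.out.println(child.getEffectiveLevel()); // TRACE
        // An API that reports INFO (the root level) for kafka.foo.bar is
        // therefore not respecting the hierarchy, which is the behaviour
        // KIP-676 fixes for the broker-side APIs.
    }
}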

Can we modify this sentence in KIP-676 to reflect this? One way to minimize
the changes to the already-approved KIP is to change:

> When accessing a logger's level these APIs do not respect the logger
> hierarchy.

to

> When accessing a logger's level the first two of these APIs do not respect
> the logger hierarchy.


Thoughts?

Randall

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
[2]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-495%3A+Dynamically+Adjust+Log+Levels+in+Connect
[3]
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/rest/resources/LoggingResourceTest.java#L118-L124

On Thu, Oct 8, 2020 at 9:56 AM Dongjin Lee  wrote:

> Hi Tom,
>
> I also agree that the current behavior is clearly wrong and I think it was
> mistakenly omitted in the KIP-412 discussion process. The current
> implementation does not reflect log4j's logger hierarchy.
>
> Regards,
> Dongjin
>
> On Thu, Oct 8, 2020 at 1:27 AM John Roesler  wrote:
>
> > Ah, thanks Tom,
> >
> > My only concern was that we might silently start logging a
> > lot more or less after the upgrade, but if the logging
> > behavior won't change at all, then the concern is moot.
> >
> > Since the KIP is only to make the APIs return an accurrate
> > representation of the actual log level, I have no concerns
> > at all.
> >
> > Thanks,
> > -John
> >
> > On Wed, 2020-10-07 at 17:00 +0100, Tom Bentley wrote:
> > > Hi John,
> > >
> > > You're right, but note that this affects the level the broker/connect
> > > worker was _reporting_ for that logger, not the level at which the
> logger
> > > was actually logging, which would be TRACE both before and after
> > upgrading.
> > >
> > > I've added more of an explanation to the KIP, since it wasn't very
> clear.
> > >
> > > Thanks for taking a look.
> > >
> > > Tom
> > >
> > > On Wed, Oct 7, 2020 at 4:29 PM John Roesler 
> wrote:
> > >
> > > > Thanks for this KIP Tom,
> > > >
> > > > Just to clarify the impact: In your KIP you described a
> > > > situation in which the root logger is configured at INFO, a
> > > > "kafka.foo" logger is configured at TRACE, and then "kafka.foo.bar"
> > > > is resolved to INFO.
> > > >
> > > > Assuming this goes into 3.0, would it be the case that if I
> > > > had the above configuration, after upgrade, "kafka.foo.bar"
> > > > would just switch from INFO to TRACE on its own?
> > > >
> > > > It seems like it must, since it's not configured explicitly,
> > > > and we are changing the inheritance rule from "inherit
> > > > directly from root" to "inherit from the closest configured
> > > > ancestor in the hierarchy".
> > > >
> > > > Am I thinking about this right?
> > > >
> > > > Thanks,
> > > > -John
> > > >
> > > > On Wed, 2020-10-07 at 15:42 +0100, Tom Bentley wrote:
> > > > > Hi all,
> > > > >
> > > > > I would like to start discussion on a small KIP which seeks to
> > rectify an
> > > > > inconsistency between how Kafka reports logger levels and how
> logger
> > > > > configuration is inherited hierarchically in log4j.
> > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> > > > > <
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy?moved=true
> > > > >
> > > > > If you have a few minutes to have a look I'd be grateful for any
> > > > feedback.
> > > > > Many thanks,
> > > > >
> > > > > Tom
> >
> >
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathemati

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #389

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Refactor DescribeAuthorizedOperationsTest (#9938)

[github] KAFKA-12185: fix ConcurrentModificationException in newly added Tasks 
container class (#9940)


--
[...truncated 3.56 MB...]

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@28663e51, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fb73970, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fb73970, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4812e23a, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4812e23a, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5a233e05, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5a233e05, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@20b01353, 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #437

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Refactor DescribeAuthorizedOperationsTest (#9938)

[github] KAFKA-12185: fix ConcurrentModificationException in newly added Tasks 
container class (#9940)


--
[...truncated 3.57 MB...]
TestTopicsTest > testEmptyTopic() STARTED

TestTopicsTest > testEmptyTopic() PASSED

TestTopicsTest > testStartTimestamp() STARTED

TestTopicsTest > testStartTimestamp() PASSED

TestTopicsTest > testNegativeAdvance() STARTED

TestTopicsTest > testNegativeAdvance() PASSED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() PASSED

TestTopicsTest > testDuration() STARTED

TestTopicsTest > testDuration() PASSED

TestTopicsTest > testOutputToString() STARTED

TestTopicsTest > testOutputToString() PASSED

TestTopicsTest > testValue() STARTED

TestTopicsTest > testValue() PASSED

TestTopicsTest > testTimestampAutoAdvance() STARTED

TestTopicsTest > testTimestampAutoAdvance() PASSED

TestTopicsTest > testOutputWrongSerde() STARTED

TestTopicsTest > testOutputWrongSerde() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() PASSED

TestTopicsTest > testWrongSerde() STARTED

TestTopicsTest > testWrongSerde() PASSED

TestTopicsTest > testKeyValuesToMapWithNull() STARTED

TestTopicsTest > testKeyValuesToMapWithNull() PASSED

TestTopicsTest > testNonExistingOutputTopic() STARTED

TestTopicsTest > testNonExistingOutputTopic() PASSED

TestTopicsTest > testMultipleTopics() STARTED

TestTopicsTest > testMultipleTopics() PASSED

TestTopicsTest > testKeyValueList() STARTED

TestTopicsTest > testKeyValueList() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() PASSED

TestTopicsTest > testValueList() STARTED

TestTopicsTest > testValueList() PASSED

TestTopicsTest > testRecordList() STARTED

TestTopicsTest > testRecordList() PASSED

TestTopicsTest > testNonExistingInputTopic() STARTED

TestTopicsTest > testNonExistingInputTopic() PASSED

TestTopicsTest > testKeyValuesToMap() STARTED

TestTopicsTest > testKeyValuesToMap() PASSED

TestTopicsTest > testRecordsToList() STARTED

TestTopicsTest > testRecordsToList() PASSED

TestTopicsTest > testKeyValueListDuration() STARTED

TestTopicsTest > testKeyValueListDuration() PASSED

TestTopicsTest > testInputToString() STARTED

TestTopicsTest > testInputToString() PASSED

TestTopicsTest > testTimestamp() STARTED

TestTopicsTest > testTimestamp() PASSED

TestTopicsTest > testWithHeaders() STARTED

TestTopicsTest > testWithHeaders() PASSED

TestTopicsTest > testKeyValue() STARTED

TestTopicsTest > testKeyValue() PASSED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> T

[jira] [Created] (KAFKA-12232) Distinguish API scope by broker/controller

2021-01-21 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12232:
---

 Summary: Distinguish API scope by broker/controller
 Key: KAFKA-12232
 URL: https://issues.apache.org/jira/browse/KAFKA-12232
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


After KIP-500, not all APIs will be available on all listeners. Specifically, 
there are controller-only APIs which are only accessible on the controller 
listener (e.g. the Raft APIs). In general, we have three API scopes:

client: must be exposed on client listener
broker: must be exposed on inter-broker listener
controller: must be exposed on controller listener

These categories are not mutually exclusive. For example, the `Fetch` API is 
required on all listeners, so we need a way to represent the scope as a set in 
`ApiKeys`.

We should also put some thought into how this scope is reflected through the 
ApiVersions API. I think it makes sense to only advertise APIs that can be 
handled. For example, if the controller does not have a handler for the 
`FindCoordinator` API, then it doesn't make sense to advertise it. 

Potentially we could be even more restrictive when it comes to the inter-broker 
APIs. For example, we might not need to advertise `WriteTxnMarkers` on 
client-only listeners since a client should never use this API. Alternatively, 
we can make it simple and just identify APIs by controller, broker, or both.
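
To make the set-based scope idea concrete, a rough sketch of what it could look 
like (this is not the actual `ApiKeys` code; the enum, constants, and method 
names here are illustrative only):

{code:java}
// Sketch only: illustrates representing API scope as a set, as suggested above.
import java.util.EnumSet;
import java.util.Set;

enum ApiScope {
    CLIENT,      // must be exposed on the client listener
    BROKER,      // must be exposed on the inter-broker listener
    CONTROLLER   // must be exposed on the controller listener
}

enum SketchApiKeys {
    FETCH(EnumSet.allOf(ApiScope.class)),            // needed on all listeners
    WRITE_TXN_MARKERS(EnumSet.of(ApiScope.BROKER)),  // inter-broker only
    VOTE(EnumSet.of(ApiScope.CONTROLLER));           // Raft/controller only

    private final Set<ApiScope> scopes;

    SketchApiKeys(Set<ApiScope> scopes) {
        this.scopes = scopes;
    }

    // A listener would advertise only the APIs whose scope it serves.
    boolean exposedOn(ApiScope listenerScope) {
        return scopes.contains(listenerScope);
    }
}
{code}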



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12231) Consumer Lag increases linearly until a Consumer-Group Rebalance is initiated

2021-01-21 Thread Scott Kidder (Jira)
Scott Kidder created KAFKA-12231:


 Summary: Consumer Lag increases linearly until a Consumer-Group 
Rebalance is initiated
 Key: KAFKA-12231
 URL: https://issues.apache.org/jira/browse/KAFKA-12231
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.6.0
 Environment: Kubernetes 1.12
Reporter: Scott Kidder
 Attachments: Consumer Lag by Partition.png, Consumer Lag on a Single 
Partition.png, Lag drop on rebalance.png, max-consumer-lag.png

I observed a linear increase in consumer lag while reading from a single topic 
(480 partitions) across multiple consumers for multiple hours. The increase in 
lag stopped when a consumer-group rebalance was initiated by replacing one of 
the consumers (this was in Kubernetes, so deleting a consumer pod and letting 
its replacement pod join) at 07:46 UTC on the chart below.

!max-consumer-lag.png!

 

The lag was observed across all topic partitions, but only briefly on each:

!Consumer Lag by Partition.png!

 

!Consumer Lag on a Single Partition.png!

 

For additional context, this was a Golang consumer using v1.27.2 of the Shopify 
Sarama Kafka client. Consumers used the Sticky Partition Assignor to plan 
assignments. So, even after the consumer-group rebalance, the majority of 
consumers kept their original assignments. Nothing about the data being 
consumed & processed from Kafka could explain these punctuated spikes in 
consumer lag. There were no errors or significant messages in the Kafka broker 
logs before or after the rebalance.

 

The lag dropped within 2 minutes of the consumer-group rebalance (initiated at 
07:46, lag fell at 07:48):

!Lag drop on rebalance.png!
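
For reference, the lag in these charts is the usual difference between each 
partition's log-end offset and the group's committed offset. A minimal sketch 
of measuring it with the Java AdminClient (the bootstrap server and group name 
are placeholders, not taken from this environment):

{code:java}
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Committed offsets for the consumer group (group name is a placeholder).
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("my-consumer-group")
                     .partitionsToOffsetAndMetadata().get();

            // Log-end offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> logEnd =
                admin.listOffsets(latestSpec).all().get();

            // Lag per partition = log-end offset minus committed offset.
            committed.forEach((tp, offsetAndMetadata) -> {
                if (offsetAndMetadata != null) {
                    long lag = logEnd.get(tp).offset() - offsetAndMetadata.offset();
                    System.out.printf("%s lag=%d%n", tp, lag);
                }
            });
        }
    }
}
{code}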



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] mimaison commented on pull request #322: KAFKA-6223: Use archive.apache.org for older releases

2021-01-21 Thread GitBox


mimaison commented on pull request #322:
URL: https://github.com/apache/kafka-site/pull/322#issuecomment-764846591


   @mjsax can you take a look? Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-12230) Some Kafka TopologyTestDriver-based unit tests can't be run on Windows file system

2021-01-21 Thread Ivan Ponomarev (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Ponomarev resolved KAFKA-12230.

Resolution: Duplicate

Duplicate of KAFKA-12190

> Some Kafka TopologyTestDriver-based unit tests can't be run on Windows file 
> system
> --
>
> Key: KAFKA-12230
> URL: https://issues.apache.org/jira/browse/KAFKA-12230
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ivan Ponomarev
>Assignee: Ivan Ponomarev
>Priority: Minor
>
> While developing Kafka on a Windows machine, I get some false 
> TopologyTestDriver-based test failures because `Files.setPosixFilePermissions` 
> fails with UnsupportedOperationException; see e.g. the stack trace below.
>  
> Simply catching UnsupportedOperationException together with IOException in 
> StateDirectory solves this issue.
>  
>  
> {noformat}
> java.lang.UnsupportedOperationException
>   at 
> java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2078)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.(StateDirectory.java:118)
>   at 
> org.apache.kafka.streams.TopologyTestDriver.setupTopology(TopologyTestDriver.java:431)
>   at 
> org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:335)
>   at 
> org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:306)
>   at 
> org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:265)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamImplTest.shouldSupportForeignKeyTableTableJoinWithKTableFromKStream(KStreamImplTest.java:2751)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:

[jira] [Created] (KAFKA-12230) Some Kafka TopologyTestDriver-based unit tests can't be run on Windows file system

2021-01-21 Thread Ivan Ponomarev (Jira)
Ivan Ponomarev created KAFKA-12230:
--

 Summary: Some Kafka TopologyTestDriver-based unit tests can't be 
run on Windows file system
 Key: KAFKA-12230
 URL: https://issues.apache.org/jira/browse/KAFKA-12230
 Project: Kafka
  Issue Type: Bug
Reporter: Ivan Ponomarev
Assignee: Ivan Ponomarev


While developing Kafka on a Windows machine, I get some false 
TopologyTestDriver-based test failures because `Files.setPosixFilePermissions` 
fails with UnsupportedOperationException; see e.g. the stack trace below.

 

Simply catching UnsupportedOperationException together with IOException in 
StateDirectory solves this issue.
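
A minimal sketch of that guard (simplified, not the actual StateDirectory code):

{code:java}
// Sketch only: UnsupportedOperationException handled alongside IOException so
// non-POSIX file systems (e.g. Windows/NTFS) don't fail the test driver.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

final class StateDirectoryPermissionsSketch {
    static void configurePermissions(Path stateDir) {
        try {
            Files.setPosixFilePermissions(stateDir,
                PosixFilePermissions.fromString("rwxr-x---"));
        } catch (IOException | UnsupportedOperationException e) {
            // Log and continue; the file system does not support POSIX permissions.
            System.err.printf("Failed to set permissions on %s: %s%n", stateDir, e);
        }
    }
}
{code}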

 

 
{noformat}
java.lang.UnsupportedOperationException
at 
java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2078)
at 
org.apache.kafka.streams.processor.internals.StateDirectory.(StateDirectory.java:118)
at 
org.apache.kafka.streams.TopologyTestDriver.setupTopology(TopologyTestDriver.java:431)
at 
org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:335)
at 
org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:306)
at 
org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:265)
at 
org.apache.kafka.streams.kstream.internals.KStreamImplTest.shouldSupportForeignKeyTableTableJoinWithKTableFromKStream(KStreamImplTest.java:2751)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
at 
java.base/jdk.internal.reflect.NativeMethodAc

Re: [Connect] Different validation requirements for connector creation and update

2021-01-21 Thread Chris Egerton
Hi Gunnar,

It's not possible to do this in a generalized fashion with the API provided
by the framework today. Trying to hack your way around things by setting a
flag or storing the connector name in some shared JVM state wouldn't work
in a cluster with more than one worker since that state would obviously not
be available across workers.

With the specific case of the Debezium PostgreSQL connector, I'm wondering
if you might be able to store the name of the connector in some external
system (likely either the database itself or a Kafka topic, as I seem to
recall that Debezium connectors create and consume from topics outside of
the framework) after successfully claiming the replication slot. Then,
during config validation, you could skip the replication slot validation if
that stored name matched the name of the connector being validated. There
are obviously some edge cases that'd need to be addressed such as sudden
death of connectors after claiming the replication slot but before storing
their name; just wanted to share the thought in case it leads somewhere
useful.
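
A rough sketch of that idea, assuming a hypothetical helper that looks up which
connector name last claimed the slot (nothing here is an existing Debezium or
Connect API beyond Connector#validate):

import java.util.Map;
import org.apache.kafka.common.config.Config;
import org.apache.kafka.common.config.ConfigValue;
import org.apache.kafka.connect.source.SourceConnector;

public abstract class SlotAwareSourceConnector extends SourceConnector {

    // Hypothetical helpers: look up which connector name last claimed the slot
    // (e.g. from the database or a Kafka topic) and whether the slot is active.
    protected abstract String lookUpSlotOwner(Map<String, String> props);
    protected abstract boolean slotIsActive(Map<String, String> props);

    @Override
    public Config validate(Map<String, String> connectorConfigs) {
        Config result = super.validate(connectorConfigs);
        String connectorName = connectorConfigs.get("name");
        boolean updatingOwnConfig =
            connectorName != null && connectorName.equals(lookUpSlotOwner(connectorConfigs));

        // Skip the "slot is already active" check when this very connector owns
        // the slot, i.e. we are validating an update rather than a creation.
        if (!updatingOwnConfig && slotIsActive(connectorConfigs)) {
            for (ConfigValue value : result.configValues()) {
                if ("slot.name".equals(value.name())) {
                    value.addErrorMessage("Replication slot is already in use by another client");
                }
            }
        }
        return result;
    }
}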

Either way, I think a small, simple KIP for this would be fine, as long as
we could maintain backwards compatibility for existing connectors and allow
connectors that use this new API to work on older versions of Connect that
don't have support for it.

Cheers,

Chris

On Thu, Jan 21, 2021 at 6:00 AM Gunnar Morling  wrote:

> Hi,
>
> In the Debezium community, we ran into an interesting corner case of
> connector config validation [1].
>
> The Debezium Postgres connector requires a database resource called a
> "replication slot", which identifies this connector to the database and
> tracks progress it has made reading the TX log. This replication slot must
> not be shared between multiple clients (Debezium connectors, or others), so
> we added a validation to make sure that the slot configured by the user
> isn't active, i.e. no client is connected to it already. This works as
> expected when setting up, or restarting a connector, but when trying to
> update the connector configuration, the connector still is running when the
> configuration is validated, so the slot is active and validation hence
> fails.
>
> Is there a way we can distinguish during config validation whether the
> connector is (re-)started or whether it's a validation upon
> re-configuration (allowing us to skip this particular validation in the
> re-configuration case)?
>
> If that's not the case, would there be interest for a KIP for adding such
> capability to the Kafka Connect API?
>
> Thanks for any feedback,
>
> --Gunnar
>
> [1] https://issues.redhat.com/browse/DBZ-2952
>


Re: [DISCUSS] KIP-690 Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-01-21 Thread Mickael Maison
Hi Omnia,

Thanks for the updates. Sorry for the delay but I have a few more
small questions about the API:
- Do we really need "isMM2InternalTopic()"? There's already
"ReplicationPolicy.isInternalTopic()". If so, we need to explain the
difference between these 2 methods.

- Is "isCheckpointTopic()" expected to detect all checkpoint topics
(for all MM2 instances) or only the ones for this connector instance.
If it's the later, I wonder if we could do without the method. As this
interface is only called by MM2, we could first call
"checkpointTopic()" and check if that's equal to the topic we're
checking. If it's the former, we don't really know topic names other
MM2 instances may be using!

- The 3 methods returning topic names have different APIs:
"heartbeatsTopic()" takes no arguments, "offsetSyncTopic()" takes the
target cluster alias and "checkpointTopic()" takes "clusterAlias"
(which one is it? source or target?). As the interface extends
Configurable, maybe we could get rid of all the arguments and use the
config to find the cluster aliases. WDYT?
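
To make the questions above concrete, this is how I currently read the proposed
interface (the signatures below are my assumption based on the method names
mentioned in this thread, not necessarily what the KIP specifies):

import org.apache.kafka.common.Configurable;

public interface InternalTopicNamingPolicy extends Configurable {

    String heartbeatsTopic();                       // takes no arguments

    String offsetSyncTopic(String targetClusterAlias);

    String checkpointTopic(String clusterAlias);    // source or target alias?

    boolean isMM2InternalTopic(String topic);       // overlap with ReplicationPolicy.isInternalTopic()?

    boolean isCheckpointTopic(String topic);        // all MM2 instances, or only this one?
}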

These are minor concerns, just making sure I fully understand how the
API is expected to be used. Once these are cleared, I'll be happy to
vote for this KIP.

Thanks

On Fri, Jan 8, 2021 at 12:06 PM Omnia Ibrahim  wrote:
>
> Hi Mickael,
> Did you get time to review the changes to the KIP? If you're okay with it, could 
> you vote for the KIP here: 
> https://www.mail-archive.com/dev@kafka.apache.org/msg113575.html?
> Thanks
>
> On Thu, Dec 10, 2020 at 2:19 PM Omnia Ibrahim  wrote:
>>
>> Hi Mickael,
>> 1) That's right, the interface and default implementation will be in 
>> mirror-connect
>> 2) Renaming the interface should be fine too, especially if you're planning to 
>> move other functionality related to topic creation there; I can edit this
>>
>> If you are okay with that, please vote for the KIP here 
>> https://www.mail-archive.com/dev@kafka.apache.org/msg113575.html
>>
>>
>> Thanks
>> Omnia
>> On Thu, Dec 10, 2020 at 12:58 PM Mickael Maison  
>> wrote:
>>>
>>> Hi Omnia,
>>>
>>> Thank you for the reply, it makes sense.
>>>
>>> A couple more comments:
>>>
>>> 1) I'm assuming the new interface and default implementation will be
>>> in the mirror-client project? as the names of some of these topics are
>>> needed by RemoteClusterUtils on the client-side.
>>>
>>> 2) I'm about to open a KIP to specify where the offset-syncs topic is
>>> created by MM2. In restricted environments, we'd prefer MM2 to only
>>> have read access to the source cluster and have the offset-syncs on
>>> the target cluster. I think allowing to specify the cluster where to
>>> create that topic would be a natural extension of the interface you
>>> propose here.
>>>
>>> So I wonder if your interface could be named InternalTopicsPolicy?
>>> That's a bit more generic than InternalTopicNamingPolicy. That would
>>> also match the configuration setting, internal.topics.policy.class,
>>> you're proposing.
>>>
>>> Thanks
>>>
>>> On Thu, Dec 3, 2020 at 10:15 PM Omnia Ibrahim  
>>> wrote:
>>> >
>>> > Hi Mickael,
>>> > Thanks for your feedback!
>>> > Regarding your question about having more configurations, I considered
>>> > adding a configuration for each topic; however, this meant adding more
>>> > configurations for MM2, which already has so many. Also, the more
>>> > complicated and advanced the replication pattern between your clusters,
>>> > the more configuration lines will be added to your MM2 config, which isn't
>>> > going to be pretty if you don't have the same topic names across your
>>> > clusters.
>>> >
>>> > Also, it added more complexity to the implementation, as MM2 needs to
>>> > 1- identify whether a topic is a checkpoints topic, so we can list the
>>> > checkpoints topics in MirrorMaker 2 utils, as one cluster could have X
>>> > checkpoints topics if it's connected to X clusters; this is done right now
>>> > by listing any topic with the suffix `.checkpoints.internal`. This could be
>>> > done by adding a `checkpoints.topic.suffix` config, but that assumes
>>> > checkpoints topics will always have a suffix, and having a suffix means we
>>> > may also need a separator to concatenate this suffix with a prefix that
>>> > identifies the source cluster name.
>>> > 2- identify whether a topic is internal, so it shouldn't be replicated or
>>> > have checkpoints tracked for it; right now this relies on disallowing
>>> > topics with the `.internal` suffix from being replicated or tracked in
>>> > checkpoints, but with topic names becoming configurable we need a way to
>>> > define what an internal topic is. This could be done by using a list of
>>> > all internal topics entered into the configuration.
>>> >
>>> > So having an interface seemed easier and also gives more flexibility for
>>> > users to define their own topic names, define what an internal topic
>>> > means, and define how to find checkpoints topics, and it will be a
>>> > one-line config for each herder; it's also more consistent with MM2 code as MM

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #418

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12152; Idempotent Producer does not reset the sequence number of 
partitions without in-flight batches (#9832)


--
[...truncated 7.14 MB...]
TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() PASSED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() STARTED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() PASSED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
STARTED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() STARTED

TopologyTestDriverAtLeastOnceTest > 
shouldCreateStateDirectoryForStatefulTopology() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotRequireParameters() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() PASSED

TopologyTestDriverAtLeastOnceTest > shouldNotRequireParameters() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
STARTED

TopologyTestDriverEosTest > shouldSetRecordMetadata() STARTED

TopologyTestDriverEosTest > shouldSetRecordMetadata() PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() PASSED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() STARTED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() PASSED

TopologyTestDriverEosTest > shouldThrowForMissingTime() STARTED

TopologyTestDriverEosTest > shouldThrowForMissingTime() PASSED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
STARTED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() PASSED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() STARTED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() PASSED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
STARTED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() PASSED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
STARTED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
PASSED

TopologyTestDriverEosTest > shouldNotRequireParameters() STARTED

TopologyTestDriverEosTest > shouldNotRequireParameters() PASSED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() STARTED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() PASSED

WindowStoreFacadeTest > shouldReturnIsOpen() STARTED

WindowStoreFacadeTest > shouldReturnIsOpen() PASSED

WindowStoreFacadeTest > shouldReturnName() STARTED

WindowStoreFacadeTest > shouldReturnName() PASSED

WindowStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

WindowStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp() 
STARTED

WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp() 
PASSED

WindowStoreFacadeTest > shouldReturnIsPersistent() STARTED

WindowStoreFacadeTest > shouldReturnIsPersistent() PASSED

WindowStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

WindowStoreFacadeTest > shouldForwardDeprecatedInit() PASSED

WindowStoreFacadeTest > shouldForwardClose() STARTED

WindowStoreFacadeTest > shouldForwardClose() PASSED

WindowStoreFacadeTest > shouldForwardFlush() STARTED

WindowStoreFacadeTest > shouldForwardFlush() PASSED

WindowStoreFacadeTest > shouldForwardInit() STARTED

WindowStoreFacadeTest > shouldForwardInit() PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE

Re: Kafka Advisory Topic

2021-01-21 Thread Christopher Shannon
Hi,

I am on the ActiveMQ PMC and I think having a way to do
advisories/notifications/events (whatever you want to call it) is a very good
idea. In ActiveMQ Classic you have advisories and in Artemis you have
notifications. Having management messages that can be subscribed to in real
time is a major feature that many other brokers have and that is missing from
Kafka.

The idea here would be to publish notifications of different configurable
events when something important happens so a consumer can listen in on
things it cares about and be able to do something instead of having to poll
the admin API. There are many events that happen in a broker that would be
useful to be notified about. Events such as new connections to the cluster,
new topics created or destroyed, consumer group creation, authorization
errors, new leader election, etc. The list is pretty much endless.
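
For comparison, the polling fallback that consumers are left with today looks
roughly like this (a sketch; the poll interval and handling are illustrative):

import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;

public class TopicChangePoller {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            Set<String> known = new HashSet<>(admin.listTopics().names().get());
            while (true) {
                Thread.sleep(10_000L);
                Set<String> current = new HashSet<>(admin.listTopics().names().get());
                for (String topic : current) {
                    if (!known.contains(topic)) {
                        System.out.println("Topic created: " + topic);
                    }
                }
                for (String topic : known) {
                    if (!current.contains(topic)) {
                        System.out.println("Topic deleted: " + topic);
                    }
                }
                known = current;
            }
        }
    }
}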

The metadata topic that will exist is probably not going to have all of
this information so some other mechanism would be needed to handle
publishing these messages to a specific management topic that would be
useful for a consumer.

Chris


On Wed, Jan 20, 2021 at 4:12 PM Boyang Chen 
wrote:

> Hey Knowles,
>
> In Kafka, people normally use admin clients to get that metadata. I'm not
> sure why you specifically mentioned that having a topic to manage this
> information is useful, but the good news is that in KIP-500
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum
> >
> we are trying to deprecate ZooKeeper and migrate to a self-managed metadata
> topic quorum. Once this feature is fully done, you should be able to
> use consumers to pull the metadata log.
>
> Best,
> Boyang
>
> On Wed, Jan 20, 2021 at 11:22 AM Knowles Atchison Jr <
> katchiso...@gmail.com>
> wrote:
>
> > Good afternoon all,
> >
> > In our Kafka clusters we have a need to know when certain activities are
> > performed, mainly topics being created, but brokers coming up/down is also
> > useful. This would be akin to what ActiveMQ does via advisory messages
> > (https://activemq.apache.org/advisory-message).
> >
> > Since there did not appear to be anything in the ecosystem currently, I
> > wrote a standalone Java program that watches the various ZooKeeper
> > locations that the Kafka broker writes to; deltas tell us about
> > topic/broker actions etc., and the results are written to a Kafka topic
> > for downstream consumption.
> >
> > Ideally, we would have the broker handle this internally rather than
> > stand up yet another service in our systems. I began digging through
> > the broker source (my Scala is basically hello world level) and there does
> > not appear to be any mechanism in which this could be easily patched into
> > the broker.
> >
> > Specifically, a producer or consumer acting upon a nonexistent topic or a
> > manual CreateTopic would trigger a Produce to this advisory topic, and the
> > KafkaApis framework would handle it like any other request. However, by the
> > time we are inside the getTopicMetadata call there doesn't seem to be a
> > clean way to fire off another message that would make its way through
> > KafkaApis. Perhaps another XManager-type object is required?
> >
> > Looking for alternative ideas or guidance (or I missed something in the
> > broker).
> >
> > Thank you.
> >
> > Knowles
> >
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #436

2021-01-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-01-21 Thread Leah Thomas
Hi John,

KIP-659 was just accepted as well, can it be added to the release plan?
https://cwiki.apache.org/confluence/display/KAFKA/KIP-659%3A+Improve+TimeWindowedDeserializer+and+TimeWindowedSerde+to+handle+window+size

Thanks,
Leah

On Thu, Jan 14, 2021 at 9:36 AM John Roesler  wrote:

> Hi David,
>
> Thanks for the heads-up; it's added.
>
> -John
>
> On Thu, 2021-01-14 at 08:43 +0100, David Jacot wrote:
> > Hi John,
> >
> > KIP-700 just got accepted. Can we add it to the release plan?
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-700%3A+Add+Describe+Cluster+API
> >
> > Thanks,
> > David
> >
> > On Wed, Jan 13, 2021 at 11:22 PM John Roesler 
> wrote:
> >
> > > Thanks, Gary! Sorry for the oversight.
> > > -John
> > >
> > > On Wed, 2021-01-13 at 21:25 +, Gary Russell wrote:
> > > > Can you add a link to the summary page [1]?
> > > >
> > > > I always start there.
> > > >
> > > > Thanks
> > > >
> > > > [1]:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> > > >
> > > > 
> > > > From: John Roesler 
> > > > Sent: Wednesday, January 13, 2021 4:11 PM
> > > > To: dev@kafka.apache.org 
> > > > Subject: Re: [DISCUSS] Apache Kafka 2.8.0 release
> > > >
> > > > Hello again, all,
> > > >
> > > > I have published a release plan at
> > > >
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173081737
> > > >
> > > > I have included all the KIPs that are currently approved,
> > > > but I am happy to make adjustments as necessary.
> > > >
> > > > The KIP freeze is Jan 27th.
> > > >
> > > > Please let me know if you have any objections.
> > > >
> > > > Thanks,
> > > > -John
> > > >
> > > > On Wed, 2021-01-06 at 23:30 -0600, John Roesler wrote:
> > > > > Hello All,
> > > > >
> > > > > I'd like to volunteer to be the release manager for our next
> > > > > feature release, 2.8.0. If there are no objections, I'll
> > > > > send out the release plan soon.
> > > > >
> > > > > Thanks,
> > > > > John Roesler
> > > > >
> > > >
> > > >
> > >
> > >
> > >
>
>
>


Re: [VOTE] KIP-659: Improve TimeWindowedDeserializer and TimeWindowedSerde to handle window size

2021-01-21 Thread Leah Thomas
Thanks everyone, with that I'll go ahead and close this KIP with 4 binding
votes (Sophie, Guozhang, John, Matthias). Thanks for the lively discussion!

Leah

On Wed, Jan 20, 2021 at 6:00 PM Matthias J. Sax  wrote:

> Thanks for reviving this KIP, Leah.
>
> I agree that we should not extend the scope of this KIP to potentially
> deprecate/rename the `default.windowed.[key|value].serde.inner` configs.
>
> @Sophie: if you feel strongly about it, let's do a separate KIP.
>
>
> +1 (binding)
>
>
> -Matthias
>
> On 1/19/21 3:00 PM, John Roesler wrote:
> > Hi all,
> >
> > I've just caught up on the thread, and FWIW, I'm still +1.
> >
> > Thanks,
> > -John
> >
> > On Mon, 2021-01-18 at 21:53 -0800, Guozhang Wang wrote:
> >> Read the above updates and the KIP's scope. Makes sense to me. +1 still
> >> counts :)
> >>
> >> On Wed, Jan 13, 2021 at 2:04 PM Sophie Blee-Goldman <
> sop...@confluent.io>
> >> wrote:
> >>
> >>> That sounds good to me. Thanks for reviving this
> >>>
> >>> Sophie
> >>>
> >>> On Wed, Jan 13, 2021 at 7:47 AM Leah Thomas 
> wrote:
> >>>
>  Hey all,
> 
>  Bringing this back up for discussion.
> 
>  It seems like the next steps are to:
>  1. rename the config "window.size.ms"
>  2. ensure that users set window size EITHER through the config OR through
>  the constructor. On this note, it may make sense to remove the default for
>  the `window.size.ms` config, so that there won't be a fallback if the
>  window size isn't set in either spot. WDYT? This could also address the
>  issue of multiple window sizes within a streams app.
> 
>  I see what Sophie is saying about the `default.windowed.key.serde.inner`
>  config, but I do think deprecating and moving those configs would require a
>  larger discussion. I'm open to looping them into this KIP if we feel like
>  it's vital (or incredibly convenient with low cost to users), but my
>  initial reaction is to leave that out for now and work within the current
>  set-up for window size.
> 
>  Thanks for all the comments so far,
>  Leah
> 
>  On Tue, Sep 29, 2020 at 10:44 PM Sophie Blee-Goldman <
> >>> sop...@confluent.io>
>  wrote:
> 
> > There are two cases where you need to specify the window size -- directly
> > using a Consumer (eg the console consumer) or reading as an input topic
> > within Streams. We need a config for the first case, since you can't pass a
> > Deserializer object to the console consumer. In the Streams case, the
> > reverse is true, and you have to pass in an actual Serde object.
> >
> > Imo we should keep these two cases separate and not use the config for the
> > Streams case at all. But that's hard to enforce (we'd have to strip the
> > config out of the user's StreamsConfig if they tried to use it within
> > Streams, for example) and it also puts us in an awkward position due to the
> > default.windowed.inner.serde.class configs. If they can specify the inner
> > serde class through their Streams app config, they should be able to
> > specify the window size through config as well. Otherwise we either force a
> > mix-and-match as Matthias described, or you just always have to specify
> > both the inner class and the window size in the constructor, at which
> > point, why even have the default.windowed.inner.serde.class config at all?
> >
> > ...
> > that's not a rhetorical question, actually. Personally I do think we should
> > deprecate the default.windowed.serde.inner.class and replace it with
> > separate windowed.serializer.inner.class/windowed.deserializer.inner.class
> > configs. This way it's immediately obvious that the configs are only for
> > the Consumer/Producer, and you should construct your own TimeWindowedSerde
> > with all the necessary parameters for use in your Streams app.
> >
> > That might be too radical, and maybe the problem isn't worth the burden of
> > forcing users to change their code and replace the config with actual Serde
> > objects. But it should be an easy change to make, and if it isn't, that's
> > probably a good sign that you're using the serde incorrectly somewhere.
> >
> > If we don't deprecate the default.windowed.serde.inner.class, then it's
> > less clear to me how to proceed. The only really consistent thing to do
> > seems to be to name and position the new window size config as a default
> > config and allow it to be used similar to the default inner class configs.
> > Which, as established throughout this discussion, seems very very wrong
> >
> >
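
For readers following along, a minimal sketch of the two paths being contrasted
in this thread, assuming the proposed window.size.ms config is adopted for the
plain-consumer case (the config name comes from this discussion; the
constructor path uses the existing Streams API):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;

public class WindowedSerdeSketch {
    public static void main(String[] args) {
        long windowSizeMs = 60_000L;

        // Path 1 (Streams): window size supplied through the constructor.
        TimeWindowedDeserializer<String> deserializer =
            new TimeWindowedDeserializer<>(Serdes.String().deserializer(), windowSizeMs);

        // Path 2 (plain consumer, e.g. console consumer): window size supplied
        // through the proposed config; the name follows this discussion and may
        // change in the final KIP.
        Properties consumerProps = new Properties();
        consumerProps.put("window.size.ms", Long.toString(windowSizeMs));

        System.out.println(deserializer + " / " + consumerProps);
    }
}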

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #388

2021-01-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12152; Idempotent Producer does not reset the sequence number of 
partitions without in-flight batches (#9832)


--
[...truncated 3.55 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c9041bd, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c9041bd, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20af095b, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20af095b, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e5e1d4c, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e5e1d4c, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@28663e51, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@28663e51, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true PAS

[Connect] Different validation requirements for connector creation and update

2021-01-21 Thread Gunnar Morling
Hi,

In the Debezium community, we ran into an interesting corner case of
connector config validation [1].

The Debezium Postgres connector requires a database resource called a
"replication slot", which identifies this connector to the database and
tracks progress it has made reading the TX log. This replication slot must
not be shared between multiple clients (Debezium connectors, or others), so
we added a validation to make sure that the slot configured by the user
isn't active, i.e. no client is connected to it already. This works as
expected when setting up or restarting a connector, but when trying to
update the connector configuration, the connector is still running when the
configuration is validated, so the slot is active and validation hence
fails.
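
For context, the validation in question boils down to a check along these lines
(a simplified sketch against PostgreSQL's pg_replication_slots view; the actual
connector code differs):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SlotCheckSketch {
    static boolean slotIsActive(String jdbcUrl, String user, String password, String slotName)
            throws SQLException {
        String sql = "SELECT active FROM pg_replication_slots WHERE slot_name = ?";
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, slotName);
            try (ResultSet rs = stmt.executeQuery()) {
                // No row means the slot does not exist yet; it will be created later.
                return rs.next() && rs.getBoolean("active");
            }
        }
    }
}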

Is there a way we can distinguish during config validation whether the
connector is (re-)started or whether it's a validation upon
re-configuration (allowing us to skip this particular validation in the
re-configuration case)?

If that's not the case, would there be interest for a KIP for adding such
capability to the Kafka Connect API?

Thanks for any feedback,

--Gunnar

[1] https://issues.redhat.com/browse/DBZ-2952


Re: [VOTE] KIP-690: Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-01-21 Thread Omnia Ibrahim
Hi
Can I get a vote on this, please?

Best
Omnia

On Tue, Dec 15, 2020 at 12:16 PM Omnia Ibrahim 
wrote:

> If anyone is interested in reading the discussion, you can find it here:
> https://www.mail-archive.com/dev@kafka.apache.org/msg113373.html
>
> On Tue, Dec 8, 2020 at 4:01 PM Omnia Ibrahim 
> wrote:
>
>> Hi everyone,
>> I’m proposing a new KIP for MirrorMaker 2 to add the ability to control
>> internal topics naming convention. The proposal details are here
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention
>>
>> Please vote in this thread.
>> Thanks
>> Omnia
>>
>


[jira] [Resolved] (KAFKA-12152) Idempotent Producer does not reset the sequence number of partitions without in-flight batches

2021-01-21 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-12152.
-
Fix Version/s: 2.7.1
   2.8.0
 Reviewer: Jason Gustafson
   Resolution: Fixed

> Idempotent Producer does not reset the sequence number of partitions without 
> in-flight batches
> --
>
> Key: KAFKA-12152
> URL: https://issues.apache.org/jira/browse/KAFKA-12152
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.0, 2.6.0, 2.7.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.8.0, 2.7.1
>
>
> When an `OutOfOrderSequenceException` error is received by an idempotent 
> producer for a partition, the producer bumps its epoch and adjusts the sequence 
> number and the epoch of the in-flight batches of the partitions affected by 
> the `OutOfOrderSequenceException` error. This happens in 
> `TransactionManager#bumpIdempotentProducerEpoch`.
> The remaining partitions are treated separately. When the last in-flight 
> batch of a given partition is completed, the sequence number is reset. This 
> happens in `TransactionManager#handleCompletedBatch`.
> However, when a given partition has no in-flight batches when the 
> producer epoch is bumped, its sequence number is not reset. This results in 
> subsequent produce requests using the new producer epoch with the old 
> sequence number and being rejected by the broker.
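
A toy model of the failure mode described above (this is not Kafka's
TransactionManager; it only illustrates why the stale sequence number gets
rejected under the new epoch):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class EpochBumpSketch {
    static final Map<String, Integer> sequences = new HashMap<>();

    // Mirrors the buggy behavior described above: only partitions that had
    // in-flight batches get their sequence number reset on the epoch bump.
    static void bumpEpochOnlyForInFlight(Set<String> partitionsWithInFlight) {
        for (String partition : partitionsWithInFlight) {
            sequences.put(partition, 0);
        }
    }

    public static void main(String[] args) {
        sequences.put("topic-0", 5);  // had in-flight batches at bump time
        sequences.put("topic-1", 7);  // idle at bump time
        bumpEpochOnlyForInFlight(Set.of("topic-0"));
        // topic-0 restarts at 0 as the broker expects for the new epoch, but
        // topic-1 would send sequence 7 with the new epoch and be rejected.
        System.out.println("topic-0 -> " + sequences.get("topic-0"));
        System.out.println("topic-1 -> " + sequences.get("topic-1"));
    }
}
{code}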



--
This message was sent by Atlassian Jira
(v8.3.4#803005)