[jira] [Resolved] (KAFKA-12206) o.a.k.common.Uuid should implement Comparable

2021-01-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12206.

Fix Version/s: 2.8.0
 Assignee: Colin McCabe
   Resolution: Fixed

> o.a.k.common.Uuid should implement Comparable 
> --
>
> Key: KAFKA-12206
> URL: https://issues.apache.org/jira/browse/KAFKA-12206
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 2.8.0
>
>
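
For illustration, a minimal sketch of what the ticket asks for, ordering a 128-bit id
by its two long halves (this is not the actual Kafka implementation; the class and
field names below are assumed):

{code:java}
// Sketch only: a Uuid-like 128-bit identifier made comparable by ordering on the
// most significant bits first, then the least significant bits.
public class ComparableUuidSketch implements Comparable<ComparableUuidSketch> {
    private final long mostSignificantBits;
    private final long leastSignificantBits;

    public ComparableUuidSketch(long mostSignificantBits, long leastSignificantBits) {
        this.mostSignificantBits = mostSignificantBits;
        this.leastSignificantBits = leastSignificantBits;
    }

    @Override
    public int compareTo(ComparableUuidSketch other) {
        int cmp = Long.compare(mostSignificantBits, other.mostSignificantBits);
        return cmp != 0 ? cmp : Long.compare(leastSignificantBits, other.leastSignificantBits);
    }
}
{code}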




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #107

2021-01-26 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #441

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10793: move handling of FindCoordinatorFuture to fix race 
condition (#9671)

[github] MINOR: Fix meaningless message in assertNull validation (#9965)


--
[...truncated 3.57 MB...]
[test output omitted: repetitive STARTED/PASSED lines for OutputVerifierTest and KeyValueStoreFacadeTest; the message is cut off mid-line in the archive]

[jira] [Resolved] (KAFKA-12233) Align the length passed to FileChannel by `FileRecords.writeTo`

2021-01-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12233.

Fix Version/s: 2.8.0
   Resolution: Fixed

> Align the length passed to FileChannel by `FileRecords.writeTo`
> ---
>
> Key: KAFKA-12233
> URL: https://issues.apache.org/jira/browse/KAFKA-12233
> Project: Kafka
>  Issue Type: Bug
>Reporter: dengziming
>Assignee: dengziming
>Priority: Major
> Fix For: 2.8.0
>
>
> [https://github.com/apache/kafka/pull/2140/files#r563471404]
> we set `int count = Math.min(length, oldSize)`, but we are expected to write 
> from `offset`, so the count should be `Math.min(length, oldSize - offset)`.
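
A minimal sketch of the clamping described above (the method and parameter names are
assumed; this is not the actual FileRecords code):

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

// Sketch only: write `length` bytes of a file slice [start, start + sliceSize)
// to a destination channel, beginning `offset` bytes into the slice.
final class SliceWriteSketch {
    static long writeTo(FileChannel source, long start, int sliceSize,
                        WritableByteChannel dest, int offset, int length) throws IOException {
        long position = start + offset;
        // Clamping against sliceSize alone ignores the starting offset and can try to
        // transfer past the end of the slice; clamp against the bytes remaining after
        // the offset instead, as the ticket suggests.
        int count = Math.min(length, sliceSize - offset);
        return source.transferTo(position, count, dest);
    }
}
{code}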



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10793) Race condition in FindCoordinatorFuture permanently severs connection to group coordinator

2021-01-26 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-10793.

Fix Version/s: 2.7.1
   Resolution: Fixed

> Race condition in FindCoordinatorFuture permanently severs connection to 
> group coordinator
> --
>
> Key: KAFKA-10793
> URL: https://issues.apache.org/jira/browse/KAFKA-10793
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, streams
>Affects Versions: 2.5.0
>Reporter: A. Sophie Blee-Goldman
>Assignee: A. Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.8.0, 2.7.1
>
>
> Pretty much as soon as we started actively monitoring the 
> _last-rebalance-seconds-ago_ metric in our Kafka Streams test environment, we 
> started seeing something weird. Every so often one of the StreamThreads (i.e. a 
> single Consumer instance) would appear to permanently fall out of the group, 
> as evidenced by a monotonically increasing _last-rebalance-seconds-ago._ We 
> inject artificial network failures every few hours at most, so the group 
> rebalances quite often. But the one consumer never rejoins, with no other 
> symptoms (besides a slight drop in throughput since the remaining threads had 
> to take over this member's work). We're confident that the problem exists in 
> the client layer, since the logs confirmed that the unhealthy consumer was 
> still calling poll. It was also calling Consumer#committed in its main poll 
> loop, which was consistently failing with a TimeoutException.
> When I attached a remote debugger to an instance experiencing this issue, the 
> network client's connection to the group coordinator (the one that uses 
> MAX_VALUE - node.id as the coordinator id) was in the DISCONNECTED state. But 
> for some reason it never tried to re-establish this connection, although it 
> did successfully connect to that same broker through the "normal" connection 
> (i.e. the one that just uses node.id).
> The tl;dr is that the AbstractCoordinator's FindCoordinatorRequest has failed 
> (presumably due to a disconnect), but the _findCoordinatorFuture_ is non-null 
> so a new request is never sent. This shouldn't be possible since the 
> FindCoordinatorResponseHandler is supposed to clear the 
> _findCoordinatorFuture_ when the future is completed. But somehow that didn't 
> happen, so the consumer continues to assume there's still a FindCoordinator 
> request in flight and never even notices that it's dropped out of the group.
> These are the only confirmed findings so far, however we have some guesses 
> which I'll leave in the comments. Note that we only noticed this due to the 
> newly added _last-rebalance-seconds-ago_ metric, and there's no reason to 
> believe this bug hasn't been flying under the radar since the Consumer's 
> inception.
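
A simplified sketch of the lookup guard described above (field and method names are
assumed; this is not the actual AbstractCoordinator code):

{code:java}
import java.util.concurrent.CompletableFuture;

// Sketch only: a new FindCoordinator lookup is issued only while the cached future
// is null. If a completed or failed future is somehow never cleared, the client
// never sends another lookup and never re-establishes the coordinator connection.
final class CoordinatorLookupSketch {
    private CompletableFuture<Void> findCoordinatorFuture;

    synchronized CompletableFuture<Void> lookupCoordinator() {
        if (findCoordinatorFuture == null) {
            findCoordinatorFuture = sendFindCoordinatorRequest()
                // The response handler is expected to clear the cached future on both
                // success and failure; the report above describes a path where that
                // clearing apparently did not happen.
                .whenComplete((ok, err) -> clearFindCoordinatorFuture());
        }
        return findCoordinatorFuture;
    }

    private synchronized void clearFindCoordinatorFuture() {
        findCoordinatorFuture = null;
    }

    private CompletableFuture<Void> sendFindCoordinatorRequest() {
        return new CompletableFuture<>(); // placeholder for the real network call
    }
}
{code}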



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #410

2021-01-26 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #440

2021-01-26 Thread Apache Jenkins Server
See 




[GitHub] [kafka-site] showuon opened a new pull request #325: MINOR: Remove redundant apostrophe in doc

2021-01-26 Thread GitBox


showuon opened a new pull request #325:
URL: https://github.com/apache/kafka-site/pull/325


   Found a redundant apostrophe in the doc.
   
   
![image](https://user-images.githubusercontent.com/43372967/105932560-ff006700-6087-11eb-90df-f350412256b2.png)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Jenkins build is back to normal : Kafka » kafka-2.3-jdk8 #15

2021-01-26 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-708: Rack aware Kafka Streams with pluggable StandbyTask assignor

2021-01-26 Thread Levani Kokhreidze
Hello all,

I’d like to start a discussion on KIP-708 [1], which aims to introduce rack-aware 
standby task distribution in Kafka Streams.
In addition to the changes mentioned in the KIP, I’d like to get some ideas on an 
additional change I have in mind.
Assuming the KIP moves forward, I was wondering if it makes sense to configure the 
Kafka Streams consumer instances with the rack ID passed via the new 
StreamsConfig#RACK_ID_CONFIG property.
In practice, that would mean that when “rack.id” is configured in Kafka Streams, it 
would automatically translate into the ConsumerConfig#CLIENT_RACK_CONFIG 
(“client.rack”) config for all the KafkaConsumer clients that are used by Kafka 
Streams internally (see the sketch below).
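
A sketch of how this could look from the application side (StreamsConfig#RACK_ID_CONFIG
is only proposed in this KIP and not part of any release; the literal key name and the
forwarding behaviour below are assumptions based on this thread):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class RackIdConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");

        // Proposed by KIP-708 (key name assumed from this thread, not yet released):
        props.put("rack.id", "eu-west-1a");

        // Today the same effect requires setting the consumer-level config explicitly;
        // the idea above is that "rack.id" would be forwarded to the internal consumers
        // as "client.rack" automatically:
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.CLIENT_RACK_CONFIG), "eu-west-1a");
    }
}
{code}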

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+aware+Kafka+Streams+with+pluggable+StandbyTask+assignor
 


P.S.
I have a draft PR ready; if it helps move the discussion forward, I can provide 
the draft PR link in this thread.

Regards, 
Levani

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #106

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] KAFKA-10763: Fix incomplete cooperative rebalances 
preventing connector/task revocations (#9765)


--
[...truncated 3.44 MB...]
[test output omitted: repetitive STARTED/PASSED lines for org.apache.kafka.streams.test.OutputVerifierTest; the message is cut off mid-line in the archive]

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #76

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] KAFKA-10763: Fix incomplete cooperative rebalances 
preventing connector/task revocations (#9765)


--
[...truncated 3.17 MB...]

[test output omitted: repetitive STARTED/PASSED lines for org.apache.kafka.streams.TestTopicsTest and Gradle task lines for streams:upgrade-system-tests; the message is cut off mid-line in the archive]

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #439

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: set initial capacity of ArrayList for all json converters 
(#9962)

[github] KAFKA-10763: Fix incomplete cooperative rebalances preventing 
connector/task revocations (#9765)


--
[...truncated 3.58 MB...]
[test output omitted: repetitive STARTED/PASSED lines for OutputVerifierTest and KeyValueStoreFacadeTest; the message is cut off mid-line in the archive]

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #409

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: set initial capacity of ArrayList for all json converters 
(#9962)

[github] KAFKA-10763: Fix incomplete cooperative rebalances preventing 
connector/task revocations (#9765)


--
[...truncated 3.55 MB...]
[test output omitted: repetitive STARTED/PASSED lines for MockProcessorContextStateStoreTest across WindowStoreBuilder and SessionStoreBuilder parameterizations; the message is cut off mid-line in the archive]

Re: 2.6.1 Critical Regression

2021-01-26 Thread Ismael Juma
Hi Gary,

Thanks for raising this. Looking at the PR, the change is in kafka streams,
not clients?

Ismael

On Tue, Jan 26, 2021 at 12:39 PM Gary Russell  wrote:

> A critical regression was included in kafka-clients 2.6.1, which is now
> fixed and cherry picked [1].
>
> For some devs, it is not easy (or they are not allowed by their
> enterprise) to move up to a new release, even a minor release (2.7). Devs
> are often allowed to upgrade to patch releases without approval, though.
>
> There seems to be some reticence to getting 2.6.2 out ASAP, due to the
> effort needed; I understand that a full release of everything is not
> trivial, but could consideration be given to releasing, say, 2.6.1.1 for
> just the kafka-clients jar with this fix?
>
> Thanks.
>
>
> [1]: https://github.com/apache/kafka/pull/9947#issuecomment-767798579
>
>
>
>
>
>


2.6.1 Critical Regression

2021-01-26 Thread Gary Russell
A critical regression was included in kafka-clients 2.6.1, which is now fixed 
and cherry picked [1].

For some devs, it is not easy (or they are not allowed by their enterprise) to 
move up to a new release, even a minor release (2.7). Devs are often allowed to 
upgrade to patch releases without approval, though.

There seems to be some reticence to getting 2.6.2 out ASAP, due to the effort 
needed; I understand that a full release of everything is not trivial, but 
could consideration be given to releasing, say, 2.6.1.1 for just the 
kafka-clients jar with this fix?

Thanks.


[1]: https://github.com/apache/kafka/pull/9947#issuecomment-767798579







[jira] [Created] (KAFKA-12243) Add toString methods to some of the classes introduced by this Epic

2021-01-26 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12243:
--

 Summary: Add toString methods to some of the classes introduced by 
this Epic
 Key: KAFKA-12243
 URL: https://issues.apache.org/jira/browse/KAFKA-12243
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10763) Task already exists error on same worker due to skip removal of tasks

2021-01-26 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-10763.

Resolution: Fixed

> Task already exists error on same worker due to skip removal of tasks
> -
>
> Key: KAFKA-10763
> URL: https://issues.apache.org/jira/browse/KAFKA-10763
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Shao Wang
>Assignee: Greg Harris
>Priority: Major
> Fix For: 2.3.2, 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> In our production environment, upon starting two Kafka Connect workers, during 
> the first couple of minutes the leader bounces between worker1 and worker2, 
> and a lot of tasks throw a "Task already exists in this worker" exception on 
> worker2.
> The sequence of events:
> worker2(hostname:sinkdp2)
> gen3 assign
>  Start task 1
> gen4 assign task 1
> gen5 assign task 1
> gen6 skip stopping task 1 and removal due to rebalance unresolved
>  revoke
> gen7 assign task 1
>  Start task 1 (Task already exists error)
>  
> Worker1(hostname: sinkdp1)
> {code:java}
> 03:36:07,340 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Rebalance started   
> [DistributedHerder-connect-1][WorkerCoordinator.java:233]
> 03:36:10,460 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Joined group at generation 1 with protocol 
> version 1 and got assignment: Assignment{error=0, 
> leader='connect-1-a5790b31-6890-4958-905d-d44be9e18842', 
> leaderUrl='http://sinkdp1:8083/', offset=6457, 
> connectorIds=[dp-hive-sink-connector-dptask_475_22, 
> dp-hive-sink-connector-dptask_366_20, dp-tidb-connector-dpta
> 03:36:10,694 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Starting task 
> dp-hive-sink-connector-dptask_475_22-0   
> [pool-9-thread-5][DistributedHerder.java:1073]
> 03:36:10,979 [INFO ] Instantiated task dp-hive-sink-connector-dptask_475_22-0 
> with version 0.14.0-SNAPSHOT of type 
> com.datapipeline.sink.connector.hive.HiveConnectorTask   
> [pool-9-thread-5][Worker.java:426]
> 03:36:37,692 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Rebalance started   
> [DistributedHerder-connect-1][WorkerCoordinator.java:233]
> 03:36:37,806 [INFO ] Stopping task dp-hive-sink-connector-dptask_475_22-0   
> [pool-9-thread-5][Worker.java:702]
> 03:40:09,721 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Finished stopping tasks in preparation for 
> rebalance   [DistributedHerder-connect-1][DistributedHerder.java:1502]
> 03:40:09,722 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Joined group at generation 2 with protocol 
> version 1 and got assignment: Assignment{error=0, 
> leader='connect-1-a5790b31-6890-4958-905d-d44be9e18842', 
> leaderUrl='http://sinkdp1:8083/', offset=6457, 
> connectorIds=[dp-hive-sink-connector-dptask_599_20, 
> dp-tidb-connector-dptask_580_13, dp-hive-sink-connector-dpta
> 03:40:09,722 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Rebalance started   
> [DistributedHerder-connect-1][WorkerCoordinator.java:233]
> 03:41:10,650 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Wasn't unable to resume work after last 
> rebalance, can skip stopping connectors and tasks   
> [DistributedHerder-connect-1][DistributedHerder.java:1517]
> 03:41:10,650 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Joined group at generation 4 with protocol 
> version 1 and got assignment: Assignment{error=0, 
> leader='connect-1-ff52e9fd-c96c-4e50-93cf-25cf2ae430a4', 
> leaderUrl='http://sinkdp2:8083/', offset=6457, connectorIds=[], taskIds=[], 
> revokedConnectorIds=[dp-hive-sink-connector-dptask_599_20, 
> dp-tidb-connector-dptask
> 03:41:10,651 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Rebalance started   
> [DistributedHerder-connect-1][WorkerCoordinator.java:233]
> 03:42:10,815 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Joined group at generation 5 with protocol 
> version 1 and got assignment: Assignment{error=0, 
> leader='connect-1-5a0dbfd9-10d8-42dd-8efd-52ac6ba81e17', 
> leaderUrl='http://sinkdp1:8083/', offset=6457, 
> connectorIds=[dp-hive-sink-connector-dptask_475_22, 
> dp-hive-sink-connector-dptask_366_20, dp-tidb-connector-dpta
> 03:42:10,953 [INFO ] [Worker clientId=connect-1, 
> groupId=group_connect_sink_dp] Starting task 
> dp-hive-sink-connector-dptask_475_22-0   
> [pool-9-thread-8][DistributedHerder.java:1073]
> 03:42:10,953 [INFO ] Instantiated task dp-hive-sink-connector-dptask_475_22-0 
> with version 0.14.0-SNAPSHOT of type 
> com.datapipeline.sink.connector.hive.HiveConnectorTask   
> [pool-

[jira] [Created] (KAFKA-12242) Decouple state store materialization enforcement from name/serde provider

2021-01-26 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-12242:
-

 Summary: Decouple state store materialization enforcement from 
name/serde provider
 Key: KAFKA-12242
 URL: https://issues.apache.org/jira/browse/KAFKA-12242
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang


Many users of Streams would want the following: let the Streams runtime decide 
whether or not to materialize a state store; if it decides to do so, use the 
store name / serdes provided ahead of time, and if not, nothing happens (the 
provided store name and serdes can just be dropped).

However, Streams today takes `Materialized` as an indicator to enforce 
materialization. We should think of a way for users to optionally decouple 
materialization enforcement from the name/serde provider.
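
For illustration, a typical DSL call today where passing `Materialized` (even if only to
name the store and supply serdes) is taken as a request to materialize (a sketch using
the public Streams API; topic and store names are illustrative):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class MaterializedSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input-topic");

        // Providing Materialized here currently enforces materialization; the proposal
        // above is to let users supply the name/serdes without that implication.
        KTable<String, Long> counts = stream
            .groupByKey()
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.Long()));
    }
}
{code}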



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12241) Partition offline when ISR shrinks to leader and LogDir goes offline

2021-01-26 Thread Noa Resare (Jira)
Noa Resare created KAFKA-12241:
--

 Summary: Partition offline when ISR shrinks to leader and LogDir 
goes offline
 Key: KAFKA-12241
 URL: https://issues.apache.org/jira/browse/KAFKA-12241
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.2
Reporter: Noa Resare


This is a long standing issue that we haven't previously tracked in a JIRA. We 
experience this maybe once per month on average and we see the following 
sequence of events:
 # A broker shrinks ISR to just itself for a partition. However, the followers 
are at highWatermark:{{ [Partition PARTITION broker=601] Shrinking ISR from 
1501,601,1201,1801 to 601. Leader: (highWatermark: 432385279, endOffset: 
432385280). Out of sync replicas: (brokerId: 1501, endOffset: 432385279) 
(brokerId: 1201, endOffset: 432385279) (brokerId: 1801, endOffset: 432385279).}}
 # Around this time (in the case I have in front of me, 20ms earlier according 
to the logging subsystem) LogDirFailureChannel captures an Error while 
appending records to PARTITION due to a readonly filesystem.
 # ~20 ms after the ISR shrink, LogDirFailureHandler offlines the partition: 
Logs for partitions LIST_OF_PARTITIONS are offline and logs for future 
partitions are offline due to failure on log directory /kafka/d6/data 
 # ~50ms later the controller marks the replicas as offline from 601: message: 
[Controller id=901] Mark replicas LIST_OF_PARTITIONS on broker 601 as offline 
 # ~2ms later the controller offlines the partition: [Controller id=901 
epoch=4] Changed partition PARTITION state from OnlinePartition to 
OfflinePartition 

To resolve this someone needs to manually enable unclean leader election, which 
is obviously not ideal. Since the leader knows that all the followers that are 
removed from the ISR are at highWatermark, maybe it could convey that to the 
controller in the LeaderAndIsr response so that the controller could pick a new 
leader without having to resort to unclean leader election.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10694) Implement zero copy for FetchSnapshot

2021-01-26 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10694.
-
Resolution: Fixed

> Implement zero copy for FetchSnapshot
> -
>
> Key: KAFKA-10694
> URL: https://issues.apache.org/jira/browse/KAFKA-10694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: replication
>Reporter: Jose Armando Garcia Sancio
>Assignee: dengziming
>Priority: Major
>
> Change the _RawSnapshotWriter_ and _RawSnapshotReader_ interfaces to allow 
> sending and receiving _FetchSnapshotResponse_ with minimal memory copies.
> This could be implemented by making the following changes
> {code:java}
> interface RawSnapshotWriter {
>   ...
>   public void append(MemoryRecords records) throws IOException;
> } {code}
> {code:java}
> interface RawSnapshotReader {
>   ...
>   public BaseRecords slice(long position) throws IOException;
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Kafka Advisory Topic

2021-01-26 Thread Knowles Atchison Jr
Gwen,

Yes, Chris and I work together; we can put together a KIP.

If someone would please grant me wiki permissions, username is katchison.

Thank you.

On Mon, Jan 25, 2021 at 2:58 PM Gwen Shapira  wrote:

> Agree that this sounds like a good idea.
>
> Would be good to have a more formal proposal (i.e. a KIP) with the details.
> I can think of about 100 different questions (will there be "levels"
> like in logs, what type of events are in or out of scope, rate
> limiting, data formats, etc).
> I am also curious on whether the notifications are intended for
> humans, automated processes or even the Kafka client applications
> themselves. I hope the proposal can include a few example scenarios to
> help us reason about the experience.
>
> Knowlton, is this something you want to pick up?
>
> Gwen
>
> On Thu, Jan 21, 2021 at 6:05 AM Christopher Shannon
>  wrote:
> >
> > Hi,
> >
> > I am on the ActiveMQ PMC and I think this is a very good idea to have a
> way
> > to do advisories/notifications/events (whatever you want to call it). In
> > ActiveMQ classic you have advisories and in Artemis you have
> notifications.
> > Having management messages that can be subscribed to in real time is
> > actually a major feature that is missing from Kafka that many other
> brokers
> > have.
> >
> > The idea here would be to publish notifications of different configurable
> > events when something important happens so a consumer can listen in on
> > things it cares about and be able to do something instead of having to
> poll
> > the admin API. There are many events that happen in a broker that would
> be
> > useful to be notified about. Events such as new connections to the
> cluster,
> > new topics created or destroyed, consumer group creation, authorization
> > errors, new leader election, etc. The list is pretty much endless.
> >
> > The metadata topic that will exist is probably not going to have all of
> > this information so some other mechanism would be needed to handle
> > publishing these messages to a specific management topic that would be
> > useful for a consumer.
> >
> > Chris
> >
> >
> > On Wed, Jan 20, 2021 at 4:12 PM Boyang Chen 
> > wrote:
> >
> > > Hey Knowles,
> > >
> > > in Kafka people normally use admin clients to get those metadata. I'm
> not
> > > sure why you mentioned specifically that having a topic to manage these
> > > information is useful, but a good news is that in KIP-500
> > > <
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum
> > > >
> > > we
> > > are trying to deprecate Zookeeper and migrate to a self-managed
> metadata
> > > topic quorum. At the time this feature is fully done, you should be
> able to
> > > use consumers to pull the metadata log.
> > >
> > > Best,
> > > Boyang
> > >
> > > On Wed, Jan 20, 2021 at 11:22 AM Knowles Atchison Jr <
> > > katchiso...@gmail.com>
> > > wrote:
> > >
> > > > Good afternoon all,
> > > >
> > > > In our Kafka clusters we have a need to know when certain activities
> are
> > > > performed, mainly topics being created, but brokers coming up/down is
> > > also
> > > > useful. This would be akin to what ActiveMQ does via advisory
> messages (
> > > > https://activemq.apache.org/advisory-message).
> > > >
> > > > Since there did not appear to be anything in the ecosystem
> currently, I
> > > > wrote a standalone Java program that watches the various ZooKeeper
> > > > locations that the Kafka broker writes to and deltas can tell us
> > > > topic/broker actions etc... and writes to a kafka topic for
> downstream
> > > > consumption.
> > > >
> > > > Ideally, we would rather have the broker handle this internally
> rather
> > > > than yet another service stood up in our systems. I began digging
> through
> > > > the broker source (my Scala is basically hello world level) and there
> > > does
> > > > not appear to be any mechanism in which this could be easily patched
> into
> > > > the broker.
> > > >
> > > > Specifically, a producer or consumer acting upon an nonexistent
> topic or
> > > a
> > > > manual CreateTopic would trigger a Produce to this advisory topic
> and the
> > > > KafkaApis framework would handle it like any other request. However,
> by
> > > the
> > > > time we are inside the getTopicMetadata call there doesn't seem to
> be a
> > > > clean way to fire off another message that would make its way through
> > > > KafkaApis. Perhaps another XManager type object is required?
> > > >
> > > > Looking for alternative ideas or guidance (or I missed something in
> the
> > > > broker).
> > > >
> > > > Thank you.
> > > >
> > > > Knowles
> > > >
> > >
>
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


[jira] [Created] (KAFKA-12240) Proposal for Log layer refactoring

2021-01-26 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12240:


 Summary: Proposal for Log layer refactoring
 Key: KAFKA-12240
 URL: https://issues.apache.org/jira/browse/KAFKA-12240
 Project: Kafka
  Issue Type: Improvement
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


The document containing the proposed idea for the Log layer refactor for 
KIP-405 can be found here: 
[https://docs.google.com/document/d/1dQJL4MCwqQJSPmZkVmVzshFZKuFy_bCPtubav4wBfHQ/edit?usp=sharing]
 .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #408

2021-01-26 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12239) Unclear warning message in JmxReporter, when getting missing JMX attribute

2021-01-26 Thread Jira
Gérald Quintana created KAFKA-12239:
---

 Summary: Unclear warning message in JmxReporter, when getting 
missing JMX attribute
 Key: KAFKA-12239
 URL: https://issues.apache.org/jira/browse/KAFKA-12239
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.4.1
Reporter: Gérald Quintana


When collecting bulk metrics, this warning message in the logs is unhelpful: it is 
impossible to determine which MBean is missing the attribute and fix the 
metric collector:

 
{noformat}
[2021-01-26T15:43:41,078][WARN ][org.apache.kafka.common.metrics.JmxReporter] 
Error getting JMX attribute 'records-lag-max'
javax.management.AttributeNotFoundException: Could not find attribute 
records-lag-max
 at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:192)
 ~[?:?]
 at 
org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
 ~[?:?]
 at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
 ~[?:1.8.0_202]
{noformat}
It would be very useful to have the MBean object name in the error message.
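
A sketch of the kind of message that would help (illustrative only; not the actual
JmxReporter code, and the field/method names are assumed):

{code:java}
import javax.management.AttributeNotFoundException;
import javax.management.ObjectName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: log the MBean's ObjectName together with the missing attribute so the
// offending metric can be identified from the warning alone.
final class MBeanAttributeWarningSketch {
    private static final Logger log = LoggerFactory.getLogger(MBeanAttributeWarningSketch.class);

    static void warnMissingAttribute(ObjectName mbeanName, String attribute,
                                     AttributeNotFoundException cause) {
        log.warn("Error getting JMX attribute '{}' of MBean '{}'", attribute, mbeanName, cause);
    }
}
{code}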

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2021-01-26 Thread Dániel Urbán
Hello everyone,

I would like to start a discussion on KIP-710, which addresses some missing
features in MM2 dedicated mode.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-710%3A+Full+support+for+distributed+mode+in+dedicated+MirrorMaker+2.0+clusters

Currently, the dedicated mode of MM2 does not fully support running in a
cluster. The core issue is that the Connect REST Server is not included in
the dedicated mode, which makes follower->leader communication impossible.
In some cases, this results in the cluster not being able to react to
dynamic configuration changes (e.g. dynamic topic filter changes).
Another smaller detail is that MM2 dedicated mode eagerly resolves config
provider references in the Connector configurations, which is undesirable
and a breaking change compared to vanilla Connect. This can cause an issue
for example when there is an environment variable reference, which contains
some host-specific information, like a file path. The leader resolves the
reference eagerly, and the resolved value is propagated to other MM2 nodes
instead of the reference being resolved locally, separately on each node.
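
For context, a config provider reference in Connect looks like the following sketch
(FileConfigProvider and the ${provider:path:key} syntax are standard Connect features;
the property names and file path are illustrative):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class ConfigProviderReferenceSketch {
    public static void main(String[] args) {
        // Worker-level configuration registering the standard file-based provider.
        Map<String, String> workerProps = new HashMap<>();
        workerProps.put("config.providers", "file");
        workerProps.put("config.providers.file.class",
            "org.apache.kafka.common.config.provider.FileConfigProvider");

        // A connector property holding a reference. The point above is that such a
        // reference should be resolved locally on each MM2 node (the path may differ
        // per host), not resolved eagerly by the leader and propagated as a plain value.
        Map<String, String> connectorProps = new HashMap<>();
        connectorProps.put("database.password",
            "${file:/etc/kafka/mm2-secrets.properties:db.password}");
    }
}
{code}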

The KIP addresses these by adding the Connect REST Server to the MM2
dedicated mode for each replication flow, and postponing the config
provider reference resolution.

Please discuss, I know this is a major change, but also an important
feature for MM2 users.

Daniel


Re: [DISCUSS] KIP-709: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2021-01-26 Thread Dániel Urbán
Hi Tom,
Sure, will increment. Sorry for the confusion, I just started using the
next number, but did not check the KIP list.
Thanks,
Daniel

Thomas Scott  ezt írta (időpont: 2021. jan. 26., K,
10:58):

> Hi Daniel,
>
>   It seems we have duplicate KIP-709s. Can we move this one to KIP-710?
>
> Thanks
>
>   Tom
>
>
> On Tue, Jan 26, 2021 at 8:35 AM Dániel Urbán 
> wrote:
>
> > Hello everyone,
> >
> > I would like to start a discussion on KIP-709, which addresses some
> missing
> > features in MM2 dedicated mode.
> >
> > Currently, the dedicated mode of MM2 does not fully support running in a
> > cluster. The core issue is that the Connect REST Server is not included
> in
> > the dedicated mode, which makes follower->leader communication
> impossible.
> > In some cases, this results in the cluster not being able to react to
> > dynamic configuration changes (e.g. dynamic topic filter changes).
> > Another smaller detail is that MM2 dedicated mode eagerly resolves config
> > provider references in the Connector configurations, which is undesirable
> > and a breaking change compared to vanilla Connect. This can cause an
> issue
> > for example when there is an environment variable reference, which
> contains
> > some host-specific information, like a file path. The leader resolves the
> > reference eagerly, and the resolved value is propagated to other MM2
> nodes
> > instead of the reference being resolved locally, separately on each node.
> >
> > The KIP addresses these by adding the Connect REST Server to the MM2
> > dedicated mode for each replication flow, and postponing the config
> > provider reference resolution.
> >
> > Please discuss, I know this is a major change, but also an important
> > feature for MM2 users.
> >
> > Daniel
> >
>


Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-26 Thread Thomas Scott
Thanks David I've updated it.

On Tue, Jan 26, 2021 at 1:55 PM David Jacot  wrote:

> Great. That answers my question!
>
> Thomas, I suggest adding a Related/Future Work section in the
> KIP to link KIP-699 more explicitly.
>
> Thanks,
> David
>
> On Tue, Jan 26, 2021 at 1:30 PM Thomas Scott  wrote:
>
> > Hi Mickael/David,
> >
> >   I feel like the combination of these 2 KIPs gives the complete solution
> > but they can be implemented independently. I have added a description and
> > links to KIP-699 to KIP-709 to this effect.
> >
> > Thanks
> >
> >   Tom
> >
> >
> > On Tue, Jan 26, 2021 at 11:44 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> > > Hi Thomas,
> > > Thanks, the KIP looks good.
> > >
> > > David,
> > > I started working on exactly that a few weeks ago:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+FindCoordinators
> > > I hope to complete my draft and start a discussion later on this week.
> > >
> > > Thanks
> > >
> > > On Tue, Jan 26, 2021 at 10:06 AM David Jacot 
> > wrote:
> > > >
> > > > Hi Thomas,
> > > >
> > > > Thanks for the KIP. Overall, the KIP looks good to me.
> > > >
> > > > I have only one question: The FindCoordinator API only supports
> > > > resolving one group id at the time. If we want to get the offsets for
> > > > say N groups, that means that we have to first issue N
> FindCoordinator
> > > > requests, wait for the responses, group by coordinators, and then
> > > > send a OffsetFetch request per coordinator. I wonder if we should
> > > > also extend the FindCoordinator API to support resolving multiple
> > > > groups as well. This would make the implementation in the admin
> > > > client a bit easier and would ensure that we can handle multiple
> > > > groups end-to-end. Have you thought about this?
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Tue, Jan 26, 2021 at 10:13 AM Rajini Sivaram <
> > rajinisiva...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi Thomas,
> > > > >
> > > > > Thanks for the KIP, this is a useful addition for admin use cases.
> It
> > > may
> > > > > be worth starting the voting thread soon if we want to get this
> into
> > > 2.8.0.
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > >
> > > > > On Mon, Jan 25, 2021 at 1:52 PM Thomas Scott 
> > wrote:
> > > > >
> > > > > > Thanks Ismael, that's a lot better. I've updated the KIP with
> this
> > > > > > behaviour instead.
> > > > > >
> > > > > > On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma 
> > > wrote:
> > > > > >
> > > > > > > Thanks for the KIP, Thomas. One question below:
> > > > > > >
> > > > > > > Should an Admin client with this new functionality be used
> > against
> > > an
> > > > > old
> > > > > > > > broker that cannot handle these requests then the methods
> will
> > > throw
> > > > > > > > UnsupportedVersionException as per the usual pattern.
> > > > > > >
> > > > > > >
> > > > > > > Did we consider automatically falling back to the single group
> id
> > > > > request
> > > > > > > if the more efficient one is not supported?
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott  >
> > > wrote:
> > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I'm starting this thread to discuss KIP-709 to extend
> > OffsetFetch
> > > > > > > requests
> > > > > > > > to accept multiple group ids. Please check out the KIP here:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > > > > > > >
> > > > > > > > Any comments much appreciated.
> > > > > > > >
> > > > > > > > thanks,
> > > > > > > >
> > > > > > > > Tom
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > >
> >
>


Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-26 Thread David Jacot
Great. That answers my question!

Thomas, I suggest adding a Related/Future Work section in the
KIP to link KIP-699 more explicitly.

Thanks,
David

On Tue, Jan 26, 2021 at 1:30 PM Thomas Scott  wrote:

> Hi Mickael/David,
>
>   I feel like the combination of these 2 KIPs gives the complete solution
> but they can be implemented independently. I have added a description of,
> and links to, KIP-699 to KIP-709 to reflect this.
>
> Thanks
>
>   Tom
>
>
> On Tue, Jan 26, 2021 at 11:44 AM Mickael Maison 
> wrote:
>
> > Hi Thomas,
> > Thanks, the KIP looks good.
> >
> > David,
> > I started working on exactly that a few weeks ago:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+FindCoordinators
> > I hope to complete my draft and start a discussion later on this week.
> >
> > Thanks
> >
> > On Tue, Jan 26, 2021 at 10:06 AM David Jacot 
> wrote:
> > >
> > > Hi Thomas,
> > >
> > > Thanks for the KIP. Overall, the KIP looks good to me.
> > >
> > > I have only one question: The FindCoordinator API only supports
> > > resolving one group id at a time. If we want to get the offsets for
> > > say N groups, that means that we have to first issue N FindCoordinator
> > > requests, wait for the responses, group by coordinators, and then
> > > send an OffsetFetch request per coordinator. I wonder if we should
> > > also extend the FindCoordinator API to support resolving multiple
> > > groups as well. This would make the implementation in the admin
> > > client a bit easier and would ensure that we can handle multiple
> > > groups end-to-end. Have you thought about this?
> > >
> > > Best,
> > > David
> > >
> > > On Tue, Jan 26, 2021 at 10:13 AM Rajini Sivaram <
> rajinisiva...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Thomas,
> > > >
> > > > Thanks for the KIP, this is a useful addition for admin use cases. It
> > may
> > > > be worth starting the voting thread soon if we want to get this into
> > 2.8.0.
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > >
> > > > On Mon, Jan 25, 2021 at 1:52 PM Thomas Scott 
> wrote:
> > > >
> > > > > Thanks Ismael, that's a lot better. I've updated the KIP with this
> > > > > behaviour instead.
> > > > >
> > > > > On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma 
> > wrote:
> > > > >
> > > > > > Thanks for the KIP, Thomas. One question below:
> > > > > >
> > > > > > Should an Admin client with this new functionality be used
> against
> > an
> > > > old
> > > > > > > broker that cannot handle these requests then the methods will
> > throw
> > > > > > > UnsupportedVersionException as per the usual pattern.
> > > > > >
> > > > > >
> > > > > > Did we consider automatically falling back to the single group id
> > > > request
> > > > > > if the more efficient one is not supported?
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott 
> > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > I'm starting this thread to discuss KIP-709 to extend
> OffsetFetch
> > > > > > requests
> > > > > > > to accept multiple group ids. Please check out the KIP here:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > > > > > >
> > > > > > > Any comments much appreciated.
> > > > > > >
> > > > > > > thanks,
> > > > > > >
> > > > > > > Tom
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #438

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: remove unused code from MessageTest (#9961)


--
[...truncated 3.58 MB...]
TestTopicsTest > testEmptyTopic() STARTED

TestTopicsTest > testEmptyTopic() PASSED

TestTopicsTest > testStartTimestamp() STARTED

TestTopicsTest > testStartTimestamp() PASSED

TestTopicsTest > testNegativeAdvance() STARTED

TestTopicsTest > testNegativeAdvance() PASSED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() PASSED

TestTopicsTest > testDuration() STARTED

TestTopicsTest > testDuration() PASSED

TestTopicsTest > testOutputToString() STARTED

TestTopicsTest > testOutputToString() PASSED

TestTopicsTest > testValue() STARTED

TestTopicsTest > testValue() PASSED

TestTopicsTest > testTimestampAutoAdvance() STARTED

TestTopicsTest > testTimestampAutoAdvance() PASSED

TestTopicsTest > testOutputWrongSerde() STARTED

TestTopicsTest > testOutputWrongSerde() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() PASSED

TestTopicsTest > testWrongSerde() STARTED

TestTopicsTest > testWrongSerde() PASSED

TestTopicsTest > testKeyValuesToMapWithNull() STARTED

TestTopicsTest > testKeyValuesToMapWithNull() PASSED

TestTopicsTest > testNonExistingOutputTopic() STARTED

TestTopicsTest > testNonExistingOutputTopic() PASSED

TestTopicsTest > testMultipleTopics() STARTED

TestTopicsTest > testMultipleTopics() PASSED

TestTopicsTest > testKeyValueList() STARTED

TestTopicsTest > testKeyValueList() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() PASSED

TestTopicsTest > testValueList() STARTED

TestTopicsTest > testValueList() PASSED

TestTopicsTest > testRecordList() STARTED

TestTopicsTest > testRecordList() PASSED

TestTopicsTest > testNonExistingInputTopic() STARTED

TestTopicsTest > testNonExistingInputTopic() PASSED

TestTopicsTest > testKeyValuesToMap() STARTED

TestTopicsTest > testKeyValuesToMap() PASSED

TestTopicsTest > testRecordsToList() STARTED

TestTopicsTest > testRecordsToList() PASSED

TestTopicsTest > testKeyValueListDuration() STARTED

TestTopicsTest > testKeyValueListDuration() PASSED

TestTopicsTest > testInputToString() STARTED

TestTopicsTest > testInputToString() PASSED

TestTopicsTest > testTimestamp() STARTED

TestTopicsTest > testTimestamp() PASSED

TestTopicsTest > testWithHeaders() STARTED

TestTopicsTest > testWithHeaders() PASSED

TestTopicsTest > testKeyValue() STARTED

TestTopicsTest > testKeyValue() PASSED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain 

Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-26 Thread Thomas Scott
Hi Mickael/David,

  I feel like the combination of these 2 KIPs gives the complete solution
but they can be implemented independently. I have added a description of,
and links to, KIP-699 to KIP-709 to reflect this.

Thanks

  Tom


On Tue, Jan 26, 2021 at 11:44 AM Mickael Maison 
wrote:

> Hi Thomas,
> Thanks, the KIP looks good.
>
> David,
> I started working on exactly that a few weeks ago:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+FindCoordinators
> I hope to complete my draft and start a discussion later on this week.
>
> Thanks
>
> On Tue, Jan 26, 2021 at 10:06 AM David Jacot  wrote:
> >
> > Hi Thomas,
> >
> > Thanks for the KIP. Overall, the KIP looks good to me.
> >
> > I have only one question: The FindCoordinator API only supports
> > resolving one group id at a time. If we want to get the offsets for
> > say N groups, that means that we have to first issue N FindCoordinator
> > requests, wait for the responses, group by coordinators, and then
> > send an OffsetFetch request per coordinator. I wonder if we should
> > also extend the FindCoordinator API to support resolving multiple
> > groups as well. This would make the implementation in the admin
> > client a bit easier and would ensure that we can handle multiple
> > groups end-to-end. Have you thought about this?
> >
> > Best,
> > David
> >
> > On Tue, Jan 26, 2021 at 10:13 AM Rajini Sivaram  >
> > wrote:
> >
> > > Hi Thomas,
> > >
> > > Thanks for the KIP, this is a useful addition for admin use cases. It
> may
> > > be worth starting the voting thread soon if we want to get this into
> 2.8.0.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Mon, Jan 25, 2021 at 1:52 PM Thomas Scott  wrote:
> > >
> > > > Thanks Ismael, that's a lot better. I've updated the KIP with this
> > > > behaviour instead.
> > > >
> > > > On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma 
> wrote:
> > > >
> > > > > Thanks for the KIP, Thomas. One question below:
> > > > >
> > > > > Should an Admin client with this new functionality be used against
> an
> > > old
> > > > > > broker that cannot handle these requests then the methods will
> throw
> > > > > > UnsupportedVersionException as per the usual pattern.
> > > > >
> > > > >
> > > > > Did we consider automatically falling back to the single group id
> > > request
> > > > > if the more efficient one is not supported?
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott 
> wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I'm starting this thread to discuss KIP-709 to extend OffsetFetch
> > > > > requests
> > > > > > to accept multiple group ids. Please check out the KIP here:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > > > > >
> > > > > > Any comments much appreciated.
> > > > > >
> > > > > > thanks,
> > > > > >
> > > > > > Tom
> > > > > >
> > > > >
> > > >
> > >
>


Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-26 Thread Mickael Maison
Hi Thomas,
Thanks, the KIP looks good.

David,
I started working on exactly that a few weeks ago:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+FindCoordinators
I hope to complete my draft and start a discussion later on this week.

Thanks

On Tue, Jan 26, 2021 at 10:06 AM David Jacot  wrote:
>
> Hi Thomas,
>
> Thanks for the KIP. Overall, the KIP looks good to me.
>
> I have only one question: The FindCoordinator API only supports
> resolving one group id at a time. If we want to get the offsets for
> say N groups, that means that we have to first issue N FindCoordinator
> requests, wait for the responses, group by coordinators, and then
> send an OffsetFetch request per coordinator. I wonder if we should
> also extend the FindCoordinator API to support resolving multiple
> groups as well. This would make the implementation in the admin
> client a bit easier and would ensure that we can handle multiple
> groups end-to-end. Have you thought about this?
>
> Best,
> David
>
> On Tue, Jan 26, 2021 at 10:13 AM Rajini Sivaram 
> wrote:
>
> > Hi Thomas,
> >
> > Thanks for the KIP, this is a useful addition for admin use cases. It may
> > be worth starting the voting thread soon if we want to get this into 2.8.0.
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Mon, Jan 25, 2021 at 1:52 PM Thomas Scott  wrote:
> >
> > > Thanks Ismael, that's a lot better. I've updated the KIP with this
> > > behaviour instead.
> > >
> > > On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma  wrote:
> > >
> > > > Thanks for the KIP, Thomas. One question below:
> > > >
> > > > Should an Admin client with this new functionality be used against an
> > old
> > > > > broker that cannot handle these requests then the methods will throw
> > > > > UnsupportedVersionException as per the usual pattern.
> > > >
> > > >
> > > > Did we consider automatically falling back to the single group id
> > request
> > > > if the more efficient one is not supported?
> > > >
> > > > Ismael
> > > >
> > > > On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott  wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'm starting this thread to discuss KIP-709 to extend OffsetFetch
> > > > requests
> > > > > to accept multiple group ids. Please check out the KIP here:
> > > > >
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > > > >
> > > > > Any comments much appreciated.
> > > > >
> > > > > thanks,
> > > > >
> > > > > Tom
> > > > >
> > > >
> > >
> >


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #407

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix visibility of Log.{unflushedMessages, addSegment} methods 
(#9966)


--
[...truncated 3.55 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@593cb11b, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@593cb11b, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@650c2fd7, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@650c2fd7, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48fa3a32, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48fa3a32, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48c3e117, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48c3e117, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@57d8cf16, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@57d8cf16, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3886430b, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3886430b, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7be52b82, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7be52b82, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@658e0b6b, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@658e0b6b, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@41e1eded, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@41e1eded, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@54a44662, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@54a44662, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b4d100d, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b4d100d, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@509174f4, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@509174f4, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@419d2ece, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@419d2ece, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest

Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-26 Thread David Jacot
Hi Thomas,

Thanks for the KIP. Overall, the KIP looks good to me.

I have only one question: The FindCoordinator API only supports
resolving one group id at a time. If we want to get the offsets for
say N groups, that means that we have to first issue N FindCoordinator
requests, wait for the responses, group by coordinators, and then
send an OffsetFetch request per coordinator. I wonder if we should
also extend the FindCoordinator API to support resolving multiple
groups as well. This would make the implementation in the admin
client a bit easier and would ensure that we can handle multiple
groups end-to-end. Have you thought about this?

Best,
David
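
For illustration only (not from the KIP): a minimal Java sketch of the flow
described above using the existing Admin API, where fetching offsets for N
groups costs one listConsumerGroupOffsets call per group id, and hence one
FindCoordinator plus one OffsetFetch round trip per group. The bootstrap
address and group ids are placeholders.

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PerGroupOffsetFetch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            List<String> groupIds = Arrays.asList("group-a", "group-b", "group-c");
            for (String groupId : groupIds) {
                // Each call resolves its own coordinator (FindCoordinator) and then
                // issues a separate OffsetFetch against that coordinator.
                Map<TopicPartition, OffsetAndMetadata> offsets =
                        admin.listConsumerGroupOffsets(groupId)
                             .partitionsToOffsetAndMetadata()
                             .get();
                System.out.println(groupId + " -> " + offsets);
            }
        }
    }
}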

On Tue, Jan 26, 2021 at 10:13 AM Rajini Sivaram 
wrote:

> Hi Thomas,
>
> Thanks for the KIP, this is a useful addition for admin use cases. It may
> be worth starting the voting thread soon if we want to get this into 2.8.0.
>
> Regards,
>
> Rajini
>
>
> On Mon, Jan 25, 2021 at 1:52 PM Thomas Scott  wrote:
>
> > Thanks Ismael, that's a lot better. I've updated the KIP with this
> > behaviour instead.
> >
> > On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma  wrote:
> >
> > > Thanks for the KIP, Thomas. One question below:
> > >
> > > Should an Admin client with this new functionality be used against an
> old
> > > > broker that cannot handle these requests then the methods will throw
> > > > UnsupportedVersionException as per the usual pattern.
> > >
> > >
> > > Did we consider automatically falling back to the single group id
> request
> > > if the more efficient one is not supported?
> > >
> > > Ismael
> > >
> > > On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott  wrote:
> > >
> > > > Hi,
> > > >
> > > > I'm starting this thread to discuss KIP-709 to extend OffsetFetch
> > > requests
> > > > to accept multiple group ids. Please check out the KIP here:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > > >
> > > > Any comments much appreciated.
> > > >
> > > > thanks,
> > > >
> > > > Tom
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-709: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2021-01-26 Thread Thomas Scott
Hi Daniel,

  It seems we have duplicate KIP-709s. Can we move this one to KIP-710?

Thanks

  Tom


On Tue, Jan 26, 2021 at 8:35 AM Dániel Urbán  wrote:

> Hello everyone,
>
> I would like to start a discussion on KIP-709, which addresses some missing
> features in MM2 dedicated mode.
>
> Currently, the dedicated mode of MM2 does not fully support running in a
> cluster. The core issue is that the Connect REST Server is not included in
> the dedicated mode, which makes follower->leader communication impossible.
> In some cases, this results in the cluster not being able to react to
> dynamic configuration changes (e.g. dynamic topic filter changes).
> Another smaller detail is that MM2 dedicated mode eagerly resolves config
> provider references in the Connector configurations, which is undesirable
> and a breaking change compared to vanilla Connect. This can cause an issue
> for example when there is an environment variable reference, which contains
> some host-specific information, like a file path. The leader resolves the
> reference eagerly, and the resolved value is propagated to other MM2 nodes
> instead of the reference being resolved locally, separately on each node.
>
> The KIP addresses these by adding the Connect REST Server to the MM2
> dedicated mode for each replication flow, and postponing the config
> provider reference resolution.
>
> Please discuss, I know this is a major change, but also an important
> feature for MM2 users.
>
> Daniel
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #437

2021-01-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix visibility of Log.{unflushedMessages, addSegment} methods 
(#9966)


--
[...truncated 3.57 MB...]
OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue() PASSED

OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp() 
STARTED

OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp() 
PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
STARTED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
PASSED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() STARTED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() 
STARTED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
STARTED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
PASSED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 PASSED

KeyValueStoreFacadeTest > shouldReturnIsOpen() STARTED

KeyValueStoreFacadeTest > shouldReturnIsOpen() PASSED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() STARTED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() PASSED

KeyValueStoreFacadeTest > shouldReturnName() STARTED

KeyValueStoreFacadeTest > shouldReturnName() PASSED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() STARTED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() PASSED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

[VOTE] KIP-709: Extend OffsetFetch requests to accept multiple group ids.

2021-01-26 Thread Thomas Scott
Hey all,

 I'd like to start the vote for
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258

Thanks

  Tom


Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-26 Thread Rajini Sivaram
Hi Thomas,

Thanks for the KIP, this is a useful addition for admin use cases. It may
be worth starting the voting thread soon if we want to get this into 2.8.0.

Regards,

Rajini


On Mon, Jan 25, 2021 at 1:52 PM Thomas Scott  wrote:

> Thanks Ismael, that's a lot better. I've updated the KIP with this
> behaviour instead.
>
> On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma  wrote:
>
> > Thanks for the KIP, Thomas. One question below:
> >
> > Should an Admin client with this new functionality be used against an old
> > > broker that cannot handle these requests then the methods will throw
> > > UnsupportedVersionException as per the usual pattern.
> >
> >
> > Did we consider automatically falling back to the single group id request
> > if the more efficient one is not supported?
> >
> > Ismael
> >
> > On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott  wrote:
> >
> > > Hi,
> > >
> > > I'm starting this thread to discuss KIP-709 to extend OffsetFetch
> > requests
> > > to accept multiple group ids. Please check out the KIP here:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > >
> > > Any comments much appreciated.
> > >
> > > thanks,
> > >
> > > Tom
> > >
> >
>
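
The fallback behaviour discussed in the quoted exchange above could look
roughly like the following sketch. Note that batchListConsumerGroupOffsets is
a hypothetical placeholder (the multi-group request proposed by KIP-709 does
not exist in the Admin client, and the updated KIP has the Admin client fall
back internally rather than requiring caller code like this); only the
per-group listConsumerGroupOffsets call is existing API.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.UnsupportedVersionException;

public class OffsetFetchFallback {

    // Try the (hypothetical) multi-group request first; if the broker is too old
    // to understand it, fall back to one single-group request per group id.
    static Map<String, Map<TopicPartition, OffsetAndMetadata>> fetchOffsets(
            Admin admin, List<String> groupIds) throws Exception {
        try {
            return batchListConsumerGroupOffsets(admin, groupIds);
        } catch (UnsupportedVersionException e) {
            Map<String, Map<TopicPartition, OffsetAndMetadata>> result = new HashMap<>();
            for (String groupId : groupIds) {
                result.put(groupId,
                        admin.listConsumerGroupOffsets(groupId)
                             .partitionsToOffsetAndMetadata()
                             .get());
            }
            return result;
        }
    }

    // Placeholder for the KIP-709 batch request; not a real Admin client method.
    private static Map<String, Map<TopicPartition, OffsetAndMetadata>> batchListConsumerGroupOffsets(
            Admin admin, List<String> groupIds) {
        throw new UnsupportedVersionException("multi-group OffsetFetch not supported");
    }
}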


Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #75

2021-01-26 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-709: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2021-01-26 Thread Dániel Urbán
And I guess providing the link wouldn't hurt either:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-709%3A+Full+support+for+distributed+mode+in+dedicated+MirrorMaker+2.0+clusters

On Tue, Jan 26, 2021 at 9:35 AM Dániel Urbán  wrote:

> Hello everyone,
>
> I would like to start a discussion on KIP-709, which addresses some missing
> features in MM2 dedicated mode.
>
> Currently, the dedicated mode of MM2 does not fully support running in a
> cluster. The core issue is that the Connect REST Server is not included in
> the dedicated mode, which makes follower->leader communication impossible.
> In some cases, this results in the cluster not being able to react to
> dynamic configuration changes (e.g. dynamic topic filter changes).
> Another smaller detail is that MM2 dedicated mode eagerly resolves config
> provider references in the Connector configurations, which is undesirable
> and a breaking change compared to vanilla Connect. This can cause an issue
> for example when there is an environment variable reference, which contains
> some host-specific information, like a file path. The leader resolves the
> reference eagerly, and the resolved value is propagated to other MM2 nodes
> instead of the reference being resolved locally, separately on each node.
>
> The KIP addresses these by adding the Connect REST Server to the MM2
> dedicated mode for each replication flow, and postponing the config
> provider reference resolution.
>
> Please discuss, I know this is a major change, but also an important
> feature for MM2 users.
>
> Daniel
>


[DISCUSS] KIP-709: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2021-01-26 Thread Dániel Urbán
Hello everyone,

I would like to start a discussion on KIP-709, which addresses some missing
features in MM2 dedicated mode.

Currently, the dedicated mode of MM2 does not fully support running in a
cluster. The core issue is that the Connect REST Server is not included in
the dedicated mode, which makes follower->leader communication impossible.
In some cases, this results in the cluster not being able to react to
dynamic configuration changes (e.g. dynamic topic filter changes).
Another smaller detail is that MM2 dedicated mode eagerly resolves config
provider references in the Connector configurations, which is undesirable
and a breaking change compared to vanilla Connect. This can cause an issue
for example when there is an environment variable reference, which contains
some host-specific information, like a file path. The leader resolves the
reference eagerly, and the resolved value is propagated to other MM2 nodes
instead of the reference being resolved locally, separately on each node.

The KIP addresses these by adding the Connect REST Server to the MM2
dedicated mode for each replication flow, and postponing the config
provider reference resolution.
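
For illustration only, a hypothetical excerpt of an MM2 properties file using
a config provider reference (the KIP-297 ${provider:path:key} syntax with the
built-in FileConfigProvider); the file path and key names are placeholders.
Today the leader resolves such a reference eagerly and propagates the resolved
value to the other nodes, whereas the proposal is for each node to resolve it
locally.

# Config provider wiring (KIP-297)
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# A reference that should be resolved locally on every MM2 node rather than once
# by the leader; /etc/mm2/local-secrets.properties may differ from host to host.
ssl.keystore.password=${file:/etc/mm2/local-secrets.properties:keystore.password}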

Please discuss, I know this is a major change, but also an important
feature for MM2 users.

Daniel