Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #236

2020-11-04 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #203

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: RequestContext constructor change (#9559)


--
[...truncated 3.43 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 

[jira] [Created] (KAFKA-10685) --to-datetime passed to kafka-consumer-groups getting interpreted as a timezone

2020-11-04 Thread Russell Sayers (Jira)
Russell Sayers created KAFKA-10685:
--

 Summary: --to-datetime passed to kafka-consumer-groups getting 
interpreted as a timezone
 Key: KAFKA-10685
 URL: https://issues.apache.org/jira/browse/KAFKA-10685
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Russell Sayers


If you pass more than 3 decimal places for the fractional seconds of the 
datetime, the microseconds get interpreted as milliseconds.

{{kafka-consumer-groups --bootstrap-server kafka:9092 \}}
{{ --reset-offsets \}}
{{ --group webserver-avro \}}
{{ --topic driver-positions-avro \}}
{{ --to-datetime "2020-11-05T00:46:48.002237400" \}}
{{ --dry-run}}

Relevant code 
[here|https://github.com/apache/kafka/blob/2.7/clients/src/main/java/org/apache/kafka/common/utils/Utils.java#L1304].

Experimenting with getDateTime:
 * getDateTime("2020-11-05T00:46:48.000") -> 1604537208000
 * getDateTime("2020-11-05T00:46:48.000+0800") -> 1604508408000 - correct, 
since the format string allows ZZZ time zones
 * getDateTime("2020-11-05T00:46:48.000123") -> 1604537208123 - note this ends 
with 123 milliseconds.

The pattern string is "yyyy-MM-dd'T'HH:mm:ss.SSS", so SimpleDateFormat 
interprets "000123" as 123 milliseconds. See the stackoverflow answer 
[here|https://stackoverflow.com/a/21235602/109102].
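
The behavior is easy to reproduce directly against SimpleDateFormat; a minimal 
sketch (the printed epoch values assume a UTC default time zone):

{code:java}
import java.text.SimpleDateFormat;

public class FractionalSecondsDemo {
    public static void main(String[] args) throws Exception {
        // In lenient mode, SimpleDateFormat consumes every digit after the
        // dot for the SSS field and treats the whole number as milliseconds.
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS");
        System.out.println(fmt.parse("2020-11-05T00:46:48.000").getTime());
        // -> 1604537208000
        System.out.println(fmt.parse("2020-11-05T00:46:48.000123").getTime());
        // -> 1604537208123: the six "microsecond" digits became 123 milliseconds
    }
}
{code}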

The fix? Truncate any digits beyond three places after the decimal point, or 
raise an exception. The code would still need to allow the RFC 822 timezone, 
i.e. Sign TwoDigitHours Minutes.

 





[jira] [Created] (KAFKA-10684) Avoid additional copies in envelope path when transmitting over network

2020-11-04 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10684:
---

 Summary: Avoid additional copies in envelope path when 
transmitting over network 
 Key: KAFKA-10684
 URL: https://issues.apache.org/jira/browse/KAFKA-10684
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


When we send an envelope request or response, we first allocate a buffer for 
the embedded data. When we are ready to transmit the data, we allocate a new 
buffer for the full envelope and copy the embedded data to it. We can skip the 
second copy if we are a little smarter when translating the envelope data to 
the network `Send` object.
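
A generic sketch of the idea (illustrative only, not Kafka's actual `Send` 
implementation): keep the header and the already-serialized embedded data as 
separate buffers and hand both to a gathering write, instead of allocating a 
second envelope-sized buffer and copying:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.GatheringByteChannel;

public final class EnvelopeWriteSketch {

    // Copying approach: a second allocation plus a copy of the embedded bytes.
    static ByteBuffer copyIntoEnvelope(ByteBuffer header, ByteBuffer embedded) {
        ByteBuffer out = ByteBuffer.allocate(header.remaining() + embedded.remaining());
        out.put(header).put(embedded);
        out.flip();
        return out;
    }

    // Gathering approach: the channel writes both buffers in order, so the
    // embedded bytes are never copied into a second buffer.
    static long writeEnvelope(GatheringByteChannel ch, ByteBuffer header,
                              ByteBuffer embedded) throws IOException {
        return ch.write(new ByteBuffer[] { header, embedded });
    }
}
{code}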





Looking for PR review for small doc update

2020-11-04 Thread Kowshik Prakasam
Hi all,

I'm looking for a review of a small doc update
in FinalizedFeatureChangeListener. Could you please help review it?
Link to PR: https://github.com/apache/kafka/pull/9562


Cheers,
Kowshik


question

2020-11-04 Thread ????
Hello, 

I'm working on Kafka development and I have a question: does Kafka support an 
Android client?

KAFKA-10624: Looking for PR review

2020-11-04 Thread Kowshik Prakasam
Hi all,

I'm looking for a review of a small PR to address KAFKA-10624:
https://github.com/apache/kafka/pull/9561. Could you please help review it?


Cheers,
Kowshik


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #235

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10181: Use Envelope RPC to do redirection for 
(Incremental)AlterConfig, AlterClientQuota and CreateTopics (#9103)


--
[...truncated 1.64 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaUnsupportedReplacementType STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaUnsupportedReplacementType PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessWithReplacement STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessWithReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testSchemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testSchemaless PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testReplacementTypeMismatch 
STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testReplacementTypeMismatch 
PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testEmptyStringReplacementValue STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testEmptyStringReplacementValue PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testWithSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testWithSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaAndReplacement STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaAndReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessUnsupportedReplacementType STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessUnsupportedReplacementType PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #234

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add back section taken out by mistake (#9544)

[github] KAFKA-7987: Reinitialize ZookeeperClient after auth failures (#7751)


--
[...truncated 6.91 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED


[jira] [Resolved] (KAFKA-10181) Create Envelope RPC and redirection template for configuration change RPCs

2020-11-04 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-10181.
-
Resolution: Fixed

> Create Envelope RPC and redirection template for configuration change RPCs
> --
>
> Key: KAFKA-10181
> URL: https://issues.apache.org/jira/browse/KAFKA-10181
> Project: Kafka
>  Issue Type: Sub-task
>  Components: admin
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.8.0
>
>
> In the bridge release broker, 
> AlterConfig/IncrementalAlterConfig/CreateTopics/AlterClientQuota should be 
> redirected to the active controller. This ticket will ensure those RPCs get 
> redirected.





Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #210

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add back section taken out by mistake (#9544)

[github] KAFKA-7987: Reinitialize ZookeeperClient after auth failures (#7751)

[github] KAFKA-10181: Use Envelope RPC to do redirection for 
(Incremental)AlterConfig, AlterClientQuota and CreateTopics (#9103)


--
[...truncated 1.64 MB...]

org.apache.kafka.connect.transforms.FlattenTest > testOptionalStruct PASSED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct STARTED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessWithReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testSchemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testSchemaless PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testReplacementTypeMismatch 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldWithSchemaShouldFail STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldWithSchemaShouldFail PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldSchemalessShouldReturnNull STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldSchemalessShouldReturnNull PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testReplacementTypeMismatch 
PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testEmptyStringReplacementValue STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testEmptyStringReplacementValue PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testWithSchema STARTED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > 
testNameRequiredInConfig STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testWithSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaAndReplacement STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaAndReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessUnsupportedReplacementType STARTED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > 
testNameRequiredInConfig PASSED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > testTest 
STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessUnsupportedReplacementType PASSED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > testTest 
STARTED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > testTest 
PASSED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testPatternIsValidRegexInConfig STARTED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testPatternIsValidRegexInConfig PASSED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testConfig STARTED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testConfig PASSED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testPatternMayNotBeEmptyInConfig STARTED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testPatternMayNotBeEmptyInConfig PASSED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testPatternRequiredInConfig STARTED

org.apache.kafka.connect.transforms.predicates.TopicNameMatchesTest > 
testPatternRequiredInConfig PASSED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > testTest 
PASSED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > 
testNameMayNotBeEmptyInConfig STARTED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > 
testNameMayNotBeEmptyInConfig PASSED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > testConfig 
STARTED

org.apache.kafka.connect.transforms.predicates.HasHeaderKeyTest > testConfig 
PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #202

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10181: Use Envelope RPC to do redirection for 
(Incremental)AlterConfig, AlterClientQuota and CreateTopics (#9103)


--
[...truncated 1.61 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaUnsupportedReplacementType STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaUnsupportedReplacementType PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessWithReplacement STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessWithReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testSchemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testSchemaless PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testReplacementTypeMismatch 
STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testReplacementTypeMismatch 
PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testEmptyStringReplacementValue STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testEmptyStringReplacementValue PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > testWithSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > testWithSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaAndReplacement STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testWithSchemaAndReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessUnsupportedReplacementType STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > 
testSchemalessUnsupportedReplacementType PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #201

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add back section taken out by mistake (#9544)

[github] KAFKA-7987: Reinitialize ZookeeperClient after auth failures (#7751)


--
[...truncated 3.43 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #233

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[Bill Bejeck] update version in quickstart to current


--
[...truncated 6.91 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@16c53130,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2672ba3e,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2672ba3e,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@167bf20a,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@167bf20a,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4860a31f,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4860a31f,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@77134792,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@77134792,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4403b004,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4403b004,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@d21f12c, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@d21f12c, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4094c3cc, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4094c3cc, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2dd6339b, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2dd6339b, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@740c25b5, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@740c25b5, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 

[jira] [Created] (KAFKA-10683) Consumer.position() Ignores Transaction Marker with read_uncommitted

2020-11-04 Thread Gary Russell (Jira)
Gary Russell created KAFKA-10683:


 Summary: Consumer.position() Ignores Transaction Marker with 
read_uncommitted
 Key: KAFKA-10683
 URL: https://issues.apache.org/jira/browse/KAFKA-10683
 Project: Kafka
  Issue Type: Bug
  Components: clients, core
Affects Versions: 2.6.0
Reporter: Gary Russell


The workaround for https://issues.apache.org/jira/browse/KAFKA-6607 says:

{quote}
or use `consumer.position()` that takes the commit marker into account and 
would "step over it")
{quote}

Note that this problem occurs with all consumers, not just Streams. We have 
implemented this solution in our project (as an option for those users 
concerned about the pseudo lag).

We have discovered that this technique only works with 
{code}isolation.level=read_committed{code}. Otherwise, the 
{code}position(){code} call does not include the marker "record".
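
A sketch of how the workaround can be applied (broker address, group id, and 
topic are placeholders):

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class PositionWorkaroundSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        // Required for position() to account for the transaction marker.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(
                props, new ByteArrayDeserializer(), new ByteArrayDeserializer())) {
            TopicPartition tp = new TopicPartition("some-topic", 0);          // placeholder
            consumer.assign(Collections.singletonList(tp));
            consumer.poll(Duration.ofMillis(500));
            // position() steps over a trailing commit marker, while
            // lastRecord.offset() + 1 would leave an apparent lag of one.
            System.out.println("next position: " + consumer.position(tp));
        }
    }
}
{code}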

https://github.com/spring-projects/spring-kafka/issues/1587#issuecomment-721899560





[jira] [Resolved] (KAFKA-7987) a broker's ZK session may die on transient auth failure

2020-11-04 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7987.
---
Fix Version/s: 2.8.0
 Reviewer: Jun Rao
   Resolution: Fixed

> a broker's ZK session may die on transient auth failure
> ---
>
> Key: KAFKA-7987
> URL: https://issues.apache.org/jira/browse/KAFKA-7987
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Priority: Critical
> Fix For: 2.8.0
>
>
> After a transient network issue, we saw the following log in a broker.
> {code:java}
> [23:37:02,102] ERROR SASL authentication with Zookeeper Quorum member failed: 
> javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7))]) occurred when 
> evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client 
> will go to AUTH_FAILED state. (org.apache.zookeeper.ClientCnxn)
> [23:37:02,102] ERROR [ZooKeeperClient] Auth failed. 
> (kafka.zookeeper.ZooKeeperClient)
> {code}
> The network issue prevented the broker from communicating to ZK. The broker's 
> ZK session then expired, but the broker didn't know that yet since it 
> couldn't establish a connection to ZK. When the network was back, the broker 
> tried to establish a connection to ZK, but failed due to auth failure (likely 
> due to a transient KDC issue). The current logic just ignores the auth 
> failure without trying to create a new ZK session. Then the broker will be 
> permanently in a state that it's alive, but not registered in ZK.
>  





Re: [DISCUSS] KIP-679 Producer will enable the strongest delivery guarantee by default

2020-11-04 Thread Cheng Tan
I made the following changes since I sent out the last discussion message:

1. Renamed org.apache.kafka.server.authorizer.Authorizer#authorizeAny to 
org.apache.kafka.server.authorizer.Authorizer#authorizeByResourceType.
2. Optimized the default implementation of 
org.apache.kafka.server.authorizer.Authorizer#authorizeByResourceType; it is 
no longer coupled with org.apache.kafka.server.authorizer.Authorizer#authorize 
and performs better.

Please feel free to comment and leave any thoughts. Any feedback will be 
appreciated. Thanks.

Best, - Cheng 

> On Oct 19, 2020, at 9:15 PM, Cheng Tan  wrote:
> 
> Hi all,
> 
> I'm proposing a new KIP for enabling the strongest delivery guarantee by 
> default. Today Kafka supports EOS and N-1 concurrent failure tolerance, but 
> the default settings don't bring them out of the box. The proposal discusses 
> the best approach to change the producer defaults to `acks=all` and 
> `enable.idempotence=true`. Please join the discussion here: 
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default
>  
> 
> 
> Thanks
> 
> - Cheng Tan
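
For reference, the KIP amounts to making today's opt-in settings the defaults; 
with the current producer you would enable them explicitly, e.g. (a sketch; 
assumes the usual ProducerConfig/KafkaProducer/StringSerializer imports, and 
the bootstrap address is a placeholder):

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(ProducerConfig.ACKS_CONFIG, "all");                // proposed default
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // proposed default
KafkaProducer<String, String> producer =
    new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());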



Re: [VOTE] 2.7.0 RC1

2020-11-04 Thread Bill Bejeck
* Successful Jenkins builds for the 2.7 branches:
Unit/integration tests:
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/detail/kafka-2.7-jdk8/53/

On Wed, Nov 4, 2020 at 11:34 AM Bill Bejeck  wrote:

> Hello Kafka users, developers, and client-developers,
>
> This is the second candidate for the release of Apache Kafka 2.7.0.
>
> Some blockers were discovered after I cut the first RC but before I had
> announced it, which is why this is the first email you're seeing even though
> it's the second RC.
>
> This is a major release that includes many new features, including:
>
> * Configurable TCP connection timeout and improve the initial metadata
> fetch
> * Enforce broker-wide and per-listener connection creation rate (KIP-612,
> part 1)
> * Throttle Create Topic, Create Partition and Delete Topic Operations
> * Add TRACE-level end-to-end latency metrics to Streams
> * Add Broker-side SCRAM Config API
> * Support PEM format for SSL certificates and private key
> * Add RocksDB Memory Consumption to RocksDB Metrics
> * Add Sliding-Window support for Aggregations
>
> This release also includes a few other features, 53 improvements, and 84
> bug fixes.
>
> Release notes for the 2.7.0 release:
> https://home.apache.org/~bbejeck/kafka-2.7.0-rc1/RELEASE_NOTES.html
>
> *** Please download, test, and vote *by Wednesday, November 18, 5 pm* ET
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~bbejeck/kafka-2.7.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~bbejeck/kafka-2.7.0-rc1/javadoc/
>
> * Tag to be voted upon (off 2.7 branch) is the 2.7.0 tag:
> https://github.com/apache/kafka/releases/tag/2.7.0-rc1
>
> * Documentation:
> https://kafka.apache.org/27/documentation.html
>
> * Protocol:
> https://kafka.apache.org/27/protocol.html
>
> * Successful Jenkins builds for the 2.7 branches:
> Unit/integration tests: (link to follow)
> System tests: (link to follow)
>
> Thanks,
> Bill Bejeck
>


Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #53

2020-11-04 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-10682) Windows Kafka cluster not reachable via Azure data Bricks

2020-11-04 Thread navin (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

navin resolved KAFKA-10682.
---
Resolution: Fixed

Fixed by adding the following to the broker configuration under kafka\config:

listeners = PLAINTEXT://10.53.56.140:9092
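
For context, the UnknownHostException in the quoted log below shows the client 
failing to resolve the broker's advertised hostname (Navin.us.corp.tim.com). 
The usual fix is to make the broker advertise a routable address; a sketch of 
the relevant broker properties (exact file location depends on the install):

listeners=PLAINTEXT://10.53.56.140:9092
advertised.listeners=PLAINTEXT://10.53.56.140:9092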

> Windows Kafka cluster not reachable via Azure data Bricks
> -
>
> Key: KAFKA-10682
> URL: https://issues.apache.org/jira/browse/KAFKA-10682
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.6.0
>Reporter: navin
>Priority: Minor
>
> We have windows Kafka cluster,
>  * We enabled inbound and outbound for port 9092/9093
>  * The topic returns results on the local Windows cmd using
>  ** ./kafka-console-consumer.bat --topic SIP.SIP.SHIPMENT --from-beginning 
> --bootstrap-server 10.53.56.140:9092
>  * We are trying to consume the topic from Azure Databricks
>  ** Simple ping and telnet works fine and connects to underlying server 
>  *** %sh telnet 10.53.56.140 9092
>  *** %sh ping 10.53.56.140
>  ** df = spark \
>  .readStream \
>  .format("kafka") \
>  .option("kafka.bootstrap.servers", "10.53.56.140:9092") \
>  .option("subscribe", "SIP.SIP.SHIPMENT") \
>  .option("minPartitions", "10") \
>  .option("startingOffsets", "earliest") \
>  .load()
>  #df.isStreaming() # Returns True for DataFrames that have streaming sources
> df.printSchema()
>  ** display(df)
> On using the display command, after some amount of time we got the below error:
> Lost connection to cluster. The notebook may have been detached or the 
> cluster may have been terminated due to an error in the driver such as an 
> OutOfMemoryError.
>   What we see in Logs is below error
> 20/11/04 18:23:52 WARN NetworkClient: [Consumer 
> clientId=consumer-spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0-5,
>  
> groupId=spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0]
>  Error connecting to node Navin.us.corp.tim.com:9092 (id: 0 rack: 
> null)20/11/04 18:23:52 WARN NetworkClient: [Consumer 
> clientId=consumer-spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0-5,
>  
> groupId=spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0]
>  Error connecting to node Navin.us.corp.tim.com:9092 (id: 0 rack: 
> null)java.net.UnknownHostException: Navin.us.corp.tim.com at 
> java.net.InetAddress.getAllByName0(InetAddress.java:1281) at 
> java.net.InetAddress.getAllByName(InetAddress.java:1193) at 
> java.net.InetAddress.getAllByName(InetAddress.java:1127) at 
> kafkashaded.org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
>  at 
> kafkashaded.org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
>  at 
> kafkashaded.org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
>  at 
> kafkashaded.org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
>  at 
> kafkashaded.org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:949)
>  at 
> kafkashaded.org.apache.kafka.clients.NetworkClient.access$500(NetworkClient.java:71)
>  at 
> kafkashaded.org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1122)
>  at 
> kafkashaded.org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1010)
>  at 
> kafkashaded.org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:545)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:161)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:240)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1235)
>  at 
> kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1168)
>  at 
> org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$partitionsAssignedToConsumer$2(KafkaOffsetReader.scala:540)
>  at 
> 

[jira] [Created] (KAFKA-10682) Windows Kafka cluster not reachable via Azure data Bricks

2020-11-04 Thread navin (Jira)
navin created KAFKA-10682:
-

 Summary: Windows Kafka cluster not reachable via Azure data Bricks
 Key: KAFKA-10682
 URL: https://issues.apache.org/jira/browse/KAFKA-10682
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.6.0
Reporter: navin


We have windows Kafka cluster,
 * We enabled inbound and outbound for port 9092/9093
 * The topic returns results on the local Windows cmd using
 ** ./kafka-console-consumer.bat --topic SIP.SIP.SHIPMENT --from-beginning 
--bootstrap-server 10.53.56.140:9092
 * We are trying to consume the topic from Azure Databricks
 ** df = spark \
 .readStream \
 .format("kafka") \
 .option("kafka.bootstrap.servers", "10.53.56.140:9092") \
 .option("subscribe", "SIP.SIP.SHIPMENT") \
 .option("minPartitions", "10") \
 .option("startingOffsets", "earliest") \
 .load()
#df.isStreaming() # Returns True for DataFrames that have streaming sources

df.printSchema()

 ** display(df)

On using the display command, after some amount of time we got the below error:

Lost connection to cluster. The notebook may have been detached or the cluster 
may have been terminated due to an error in the driver such as an 
OutOfMemoryError.

  What we see in Logs is below error

20/11/04 18:23:52 WARN NetworkClient: [Consumer 
clientId=consumer-spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0-5,
 
groupId=spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0]
 Error connecting to node Navin.us.corp.tim.com:9092 (id: 0 rack: null)20/11/04 
18:23:52 WARN NetworkClient: [Consumer 
clientId=consumer-spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0-5,
 
groupId=spark-kafka-source-515ba67c-f265-4577-935b-5c7ba954a31d--1012371861-driver-0]
 Error connecting to node Navin.us.corp.tim.com:9092 (id: 0 rack: 
null)java.net.UnknownHostException: Navin.us.corp.tim.com at 
java.net.InetAddress.getAllByName0(InetAddress.java:1281) at 
java.net.InetAddress.getAllByName(InetAddress.java:1193) at 
java.net.InetAddress.getAllByName(InetAddress.java:1127) at 
kafkashaded.org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104) 
at 
kafkashaded.org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
 at 
kafkashaded.org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
 at 
kafkashaded.org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
 at 
kafkashaded.org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:949)
 at 
kafkashaded.org.apache.kafka.clients.NetworkClient.access$500(NetworkClient.java:71)
 at 
kafkashaded.org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1122)
 at 
kafkashaded.org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1010)
 at 
kafkashaded.org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:545) 
at 
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
 at 
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
 at 
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
 at 
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:161)
 at 
kafkashaded.org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:240)
 at 
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444)
 at 
kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
 at 
kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1235)
 at 
kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1168)
 at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$partitionsAssignedToConsumer$2(KafkaOffsetReader.scala:540)
 at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$withRetriesWithoutInterrupt$1(KafkaOffsetReader.scala:602)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
 at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.withRetriesWithoutInterrupt(KafkaOffsetReader.scala:601)
 at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$partitionsAssignedToConsumer$1(KafkaOffsetReader.scala:538)
 at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.runUninterruptibly(KafkaOffsetReader.scala:569)
 at 

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-04 Thread Jun Rao
Hi, Satish,

Thanks for the updated KIP. A few more comments below.

605.2 "Build the local leader epoch cache by cutting the leader epoch
sequence received from remote storage to [LSO, ELO]." I mentioned an issue
earlier. Suppose the leader's local start offset is 100. The follower finds
a remote segment covering offset range [80, 120). The producerState with
this remote segment is up to offset 120. To trim the producerState to
offset 100 requires more work since one needs to download the previous
producerState up to offset 80 and then replay the messages from 80 to 100.
It seems that it's simpler in this case for the follower just to take the
remote segment as it is and start fetching from offset 120.

5016. Just to echo what Kowshik was saying. It seems that
RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
partition, not on the replicas for the __remote_log_segment_metadata
partition. It's not clear how the leader of __remote_log_segment_metadata
obtains the metadata for remote segments for deletion.

5100. KIP-516 has been accepted and is being implemented now. Could you
update the KIP based on topicID?

5101. RLMM: It would be useful to clarify how the following two APIs are
used. According to the wiki, the former is used for topic deletion and the
latter is used for retention. It seems that retention should use the former
since remote segments without a matching epoch in the leader (potentially
due to unclean leader election) also need to be garbage collected. The
latter seems to be used for the new leader to determine the last tiered
segment.
default Iterator<RemoteLogSegmentMetadata> listRemoteLogSegments(TopicPartition topicPartition)
Iterator<RemoteLogSegmentMetadata> listRemoteLogSegments(TopicPartition topicPartition, long leaderEpoch);
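To make the distinction concrete, a hedged sketch of which caller would use
which overload (stub types only; the mapping of callers to overloads reflects
the reading above, not KIP text):

{code:java}
import java.util.Iterator;
import org.apache.kafka.common.TopicPartition;

// Stand-ins so the snippet is self-contained; RlmmStub is not the KIP's
// RLMM interface, and Object stands in for RemoteLogSegmentMetadata.
interface RlmmStub {
    Iterator<Object> listRemoteLogSegments(TopicPartition tp);                   // all epochs
    Iterator<Object> listRemoteLogSegments(TopicPartition tp, long leaderEpoch); // one epoch
}

final class ListingUsageSketch {
    // Retention (and topic deletion) must see every tiered segment,
    // including ones whose epoch is absent from the current leader lineage.
    static void cleanup(RlmmStub rlmm, TopicPartition tp) {
        Iterator<Object> all = rlmm.listRemoteLogSegments(tp);
        while (all.hasNext()) { all.next(); /* check expiry, delete */ }
    }

    // A new leader could use the per-epoch listing to find the last
    // segment tiered under a given epoch.
    static Object lastTieredSegment(RlmmStub rlmm, TopicPartition tp, long epoch) {
        Object last = null;
        Iterator<Object> it = rlmm.listRemoteLogSegments(tp, epoch);
        while (it.hasNext()) last = it.next();
        return last;
    }
}
{code}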

5102. RSM:
5102.1 For methods like fetchLogSegmentData(), it seems that they can
use RemoteLogSegmentId instead of RemoteLogSegmentMetadata.
5102.2 In fetchLogSegmentData(), should we use long instead of Long?
5102.3 Why only some of the methods have default implementation and others
don't?
5102.4 Could we define RemoteLogSegmentMetadataUpdate
and DeletePartitionUpdate?
5102.5 LogSegmentData: It seems that it's easier to pass
in leaderEpochIndex as a ByteBuffer or byte array than a file since it will
be generated in memory.
5102.6 RemoteLogSegmentMetadata: It seems that it needs both baseOffset and
startOffset. For example, deleteRecords() could move the startOffset to the
middle of a segment. If we copy the full segment to remote storage, the
baseOffset and the startOffset will be different (see the sketch after this list).
5102.7 Could we define all the public methods for RemoteLogSegmentMetadata
and LogSegmentData?
5102.8 Could we document whether endOffset in RemoteLogSegmentMetadata is
inclusive/exclusive?

5103. configs:
5103.1 Could we define the default values of non-required configs (e.g. the
sizes of the new thread pools)?
5103.2 It seems that local.log.retention.ms should default to retention.ms,
instead of remote.log.retention.minutes. Similarly, it seems
that local.log.retention.bytes should default to segment.bytes (see the
sketch after this list).
5103.3 remote.log.manager.thread.pool.size: The description says "used in
scheduling tasks to copy segments, fetch remote log indexes and clean up
remote log segments". However, there is a separate
config remote.log.reader.threads for fetching remote data. It's weird to
fetch remote index and log in different thread pools since both are used
for serving fetch requests.
5103.4 remote.log.manager.task.interval.ms: Is that the amount of time to
back off when there is no work to do? If so, perhaps it can be renamed as
backoff.ms.
5103.5 Are rlm_process_interval_ms and rlm_retry_interval_ms configs? If
so, they need to be listed in this section.
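A minimal sketch of the defaulting suggested in 5103.2 (method and parameter
names are ours, not Kafka's config machinery):

{code:java}
// Hedged sketch: the local retention limits fall back to the total
// retention limits when unset, rather than to remote.log.* values.
final class LocalRetentionDefaults {
    // local.log.retention.ms -> retention.ms when unset (null).
    static long effectiveLocalRetentionMs(Long localLogRetentionMs, long retentionMs) {
        return localLogRetentionMs != null ? localLogRetentionMs : retentionMs;
    }

    // local.log.retention.bytes -> segment.bytes when unset (null).
    static long effectiveLocalRetentionBytes(Long localLogRetentionBytes, long segmentBytes) {
        return localLogRetentionBytes != null ? localLogRetentionBytes : segmentBytes;
    }
}
{code}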

5104. "RLM maintains a bounded cache(possibly LRU) of the index files of
remote log segments to avoid multiple index fetches from the remote
storage." Is this cache in memory or on disk? If on disk, where is it stored?
Do we need a configuration to bound its size?
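If the cache ends up in memory, a bounded LRU is a one-class affair in Java;
a hedged sketch (purely an assumption, since the KIP does not yet specify
memory vs. disk or the bound):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// In-memory bounded LRU sketch for remote index files; illustrative only.
final class RemoteIndexCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    RemoteIndexCacheSketch(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true => LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used entry
    }
}
{code}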

5105. The KIP uses local-log-start-offset and Earliest Local Offset in
different places. It would be useful to standardize the terminology.

5106. The section on "In BuildingRemoteLogAux state" lists two options
without saying which one is chosen.

5107. Follower to leader transition: It has step 2, but not step 1.

5108. If a consumer fetches from the remote data and the remote storage is
not available, what error code is used in the fetch response?

5109. "ListOffsets: For timestamps >= 0, it returns the first message
offset whose timestamp is >= to the given timestamp in the request. That
means it checks in remote log time indexes first, after which local log
time indexes are checked." Could you document which method in RLMM is used
for this?
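A minimal sketch of the lookup order quoted in 5109 (the two lookup functions
are stand-ins for the remote and local time-index searches, not RLMM API):

{code:java}
import java.util.OptionalLong;
import java.util.function.LongFunction;

// Hedged sketch: remote log time indexes are consulted before local ones.
final class TimestampLookupSketch {
    static OptionalLong offsetForTimestamp(
            long timestamp,
            LongFunction<OptionalLong> remoteTimeIndexLookup,
            LongFunction<OptionalLong> localTimeIndexLookup) {
        OptionalLong fromRemote = remoteTimeIndexLookup.apply(timestamp);
        if (fromRemote.isPresent()) {
            return fromRemote; // first match in the remote time indexes wins
        }
        return localTimeIndexLookup.apply(timestamp); // then check local
    }
}
{code}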

5110. StopReplica: "it sets all the remote log segment metadata of that
partition with a delete marker and publishes them to RLMM." This seems
outdated given the new topic deletion logic.

5111. "RLM follower fetches the earliest offset for the 

[jira] [Created] (KAFKA-10681) MM2 translateOffsets returns wrong offsets

2020-11-04 Thread Carlo Bongiovanni (Jira)
Carlo Bongiovanni created KAFKA-10681:
-

 Summary: MM2 translateOffsets returns wrong offsets
 Key: KAFKA-10681
 URL: https://issues.apache.org/jira/browse/KAFKA-10681
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.5.0
 Environment: GKE, strimzi release
Reporter: Carlo Bongiovanni


Hi all,

we'd like to make use of MM2's ability to mirror checkpoints of consumer
offsets, in order to have a graceful failover from an active cluster to a
standby one.

For this reason we have created the following setup (FYI, all done with Strimzi
on k8s):
 * an active Kafka cluster (2.5.0) used by a few producers/consumers
 * a standby Kafka cluster (2.5.0)
 * MM2 set up in one direction only, to mirror from active to standby

We let MM2 run for some time and verified that messages are
effectively mirrored.

At this point we started developing tooling to create consumer groups
in the consumer-offsets topic of the standby cluster, by reading the internal
checkpoints topic.

The following is an extract of our code to read the translated offsets:


{code:java}
Map<String, Object> mm2Props = new HashMap<>();
mm2Props.put(BOOTSTRAP_SERVERS_CONFIG, "bootstrap_servers");
mm2Props.put("source.cluster.alias", "euwe");
mm2Props.put(SASL_MECHANISM, "SCRAM-SHA-512");
mm2Props.put(SASL_JAAS_CONFIG,
    "org.apache.kafka.common.security.scram.ScramLoginModule required "
        + "username=\"user\" password=\"password\";");
mm2Props.put(SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
mm2Props.put(SSL_TRUSTSTORE_LOCATION_CONFIG,
    "/usr/local/lib/jdk/lib/security/cacerts");
mm2Props.put(SSL_TRUSTSTORE_PASSWORD_CONFIG, "some-password");

Map<TopicPartition, OffsetAndMetadata> translatedOffsets = RemoteClusterUtils
    .translateOffsets(mm2Props, (String) mm2Props.get("source.cluster.alias"), cgi,
        Duration.ofSeconds(60L));
{code}
 

Before persisting the translated offsets with 
{code:java}
AlterConsumerGroupOffsetsResult alterConsumerGroupOffsetsResult = kafkaClient
    .alterConsumerGroupOffsets(cgi, offsets);
{code}
we filter them because we don't want to create consumer groups for all 
retrieved offsets.

During the filtering, we compare the value of the translated offset for each
topic partition (as coming from the checkpoint topic) with the current offset
value for the same topic partition (as mirrored by MM2).

While running this check, we found that for some topics there is a big
difference between those values, while for other topics the update seems
realistic.
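For illustration, this is the shape of the comparison (class and variable
names are ours; listOffsets and OffsetSpec are the standard Admin API):

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Sketch of the filtering check: flag any translated offset that exceeds
// the partition's current end offset on the target cluster, which should
// never happen for a correct translation.
final class TranslatedOffsetCheck {
    static void reportSuspect(Admin admin,
            Map<TopicPartition, OffsetAndMetadata> translated) throws Exception {
        Map<TopicPartition, OffsetSpec> latest = new HashMap<>();
        translated.keySet().forEach(tp -> latest.put(tp, OffsetSpec.latest()));
        ListOffsetsResult ends = admin.listOffsets(latest);
        for (Map.Entry<TopicPartition, OffsetAndMetadata> e : translated.entrySet()) {
            long end = ends.partitionResult(e.getKey()).get().offset();
            if (e.getValue().offset() > end) {
                System.out.printf("suspect %s: translated=%d > end-of-log=%d%n",
                        e.getKey(), e.getValue().offset(), end);
            }
        }
    }
}
{code}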

For example, a given target partition has an offset of 100 (after mirroring by
MM2), yet from the checkpoint topic for the same consumer group id we receive
offset 200, and later 150.

The issues are that:
 * both consumer group offsets exceed the real offset of the partition
 * the consumer group offset from the checkpoint topic goes down over time, not up

We haven't been able to explain this. The wrong numbers come from
*RemoteClusterUtils.translateOffsets()*, and we're wondering whether this is a
misconfiguration on our side or a bug in MM2.

Thanks, best
 C.





[VOTE] 2.7.0 RC1

2020-11-04 Thread Bill Bejeck
Hello Kafka users, developers, and client-developers,

This is the second candidate for the release of Apache Kafka 2.7.0.

Some blockers were discovered after I cut the first RC but before I
announced it. That is why this is the first email you're seeing,
even though it's the second RC.

This is a major release that includes many new features, including:

* Configurable TCP connection timeout and improved initial metadata fetch
* Enforce broker-wide and per-listener connection creation rate (KIP-612,
part 1)
* Throttle Create Topic, Create Partition and Delete Topic Operations
* Add TRACE-level end-to-end latency metrics to Streams
* Add Broker-side SCRAM Config API
* Support PEM format for SSL certificates and private key
* Add RocksDB Memory Consumption to RocksDB Metrics
* Add Sliding-Window support for Aggregations

This release also includes a few other features, 53 improvements, and 84
bug fixes.

Release notes for the 2.7.0 release:
https://home.apache.org/~bbejeck/kafka-2.7.0-rc1/RELEASE_NOTES.html

*** Please download, test, and vote *by Wednesday, November 18, 5 pm* ET

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~bbejeck/kafka-2.7.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~bbejeck/kafka-2.7.0-rc1/javadoc/

* Tag to be voted upon (off 2.7 branch) is the 2.7.0 tag:
https://github.com/apache/kafka/releases/tag/2.7.0-rc1

* Documentation:
https://kafka.apache.org/27/documentation.html

* Protocol:
https://kafka.apache.org/27/protocol.html

* Successful Jenkins builds for the 2.7 branches:
Unit/integration tests: (link to follow)
System tests: (link to follow)

Thanks,
Bill Bejeck


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #232

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10679: [Streams] migrate kafka-site updated docs to kafka/docs 
(#9554)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7d39db01,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@365075cb,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@365075cb,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@542ff923,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@542ff923,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2482f803,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2482f803,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@17d0de41,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@17d0de41,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@221b0f5d,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@221b0f5d,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@9d60cdc, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@9d60cdc, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3a27af05, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3a27af05, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2b944cd6, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2b944cd6, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@510fcf1e, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@510fcf1e, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #208

2020-11-04 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10679: [Streams] migrate kafka-site updated docs to kafka/docs 
(#9554)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@35d5d1fe, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2fd43e3, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2fd43e3, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@22af8066, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@22af8066, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@11a98cc4, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@11a98cc4, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@a68e52e, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@a68e52e, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@62625ddb, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@62625ddb, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c905d41, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c905d41, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@618229bb, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@618229bb, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c8ef130, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c8ef130, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5cf2fdff, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5cf2fdff, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@62b6c975, 
timestamped = false, caching = false, logging = false] STARTED