Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #146

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Move `RaftRequestHandler` to `tools` package (#9377)

[github] KAFKA-10439: Connect's Values to parse BigInteger as Decimal with zero 
scale. (#9320)


--
[...truncated 6.71 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForC

Build failed in Jenkins: Kafka » kafka-2.4-jdk8 #11

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] KAFKA-10439: Connect's Values to parse BigInteger as 
Decimal with zero scale. (#9320)


--
[...truncated 2.90 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierT

Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #115

2020-10-05 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10577) StreamThread should be able to process any processible tasks regardless of its state

2020-10-05 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-10577:
-

 Summary: StreamThread should be able to process any processible 
tasks regardless of its state
 Key: KAFKA-10577
 URL: https://issues.apache.org/jira/browse/KAFKA-10577
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang


After KAFKA-10199 is done, we should allow active tasks to be processed even if 
the thread is not yet in RUNNING. More generally, we would no longer rely on the 
thread's RUNNING state to start processing tasks; we would simply always process 
any processible tasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Kafka/Zookeeper

2020-10-05 Thread Scott, Thomas G
I have a quick question regarding Kafka 2.4.0 and its compatibility with 
ZooKeeper.  According to Red Hat, it looks like Kafka 2.4.0 may require 
ZooKeeper 3.5.7 on RHEL 7.  Could you please confirm? According to the release 
notes, it appears that ZooKeeper 3.5.7 is not required until Kafka 2.4.1.


Thanks,
Tom


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #27

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] KAFKA-10439: Connect's Values to parse BigInteger as 
Decimal with zero scale. (#9320)


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:check

Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #14

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] KAFKA-10439: Connect's Values to parse BigInteger as 
Decimal with zero scale. (#9320)


--
[...truncated 3.10 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.intern

Build failed in Jenkins: Kafka » kafka-2.3-jdk8 #7

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] KAFKA-10439: Connect's Values to parse BigInteger as 
Decimal with zero scale. (#9320)


--
[...truncated 2.57 MB...]

kafka.api.AdminClientIntegrationTest > testLogStartOffsetAfterDeleteRecords 
STARTED

kafka.api.AdminClientIntegrationTest > testLogStartOffsetAfterDeleteRecords 
PASSED

kafka.api.AdminClientIntegrationTest > testValidIncrementalAlterConfigs STARTED

kafka.api.AdminClientIntegrationTest > testValidIncrementalAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testInvalidIncrementalAlterConfigs 
STARTED

kafka.api.AdminClientIntegrationTest > testInvalidIncrementalAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testSeekAfterDeleteRecords STARTED

kafka.api.AdminClientIntegrationTest > testSeekAfterDeleteRecords PASSED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.AdminClientIntegrationTest > testDescribeConfigsForTopic STARTED

kafka.api.AdminClientIntegrationTest > testDescribeConfigsForTopic PASSED

kafka.api.AdminClientIntegrationTest > testConsumerGroups STARTED

kafka.api.AdminClientIntegrationTest > testConsumerGroups PASSED

kafka.api.AdminClientIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException STARTED

kafka.api.AdminClientIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark 
STARTED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark 
PASSED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization 
STARTED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization 
PASSED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark 
STARTED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark 
PASSED

kafka.cluster.PartitionTest > 
testMakeLeaderDoesNotUpdateEpochCacheForOldFormats STARTED

kafka.cluster.PartitionTest > 
testMakeLeaderDoesNotUpdateEpochCacheForOldFormats PASSED

kafka.cluster.PartitionTest > testReadRecordEpochValidationForLeader STARTED

kafka.cluster.PartitionTest > testReadRecordEpochValidationForLeader PASSED

kafka.cluster.PartitionTest > 
testFetchOffsetForTimestampEpochValidationForFollower STARTED

kafka.cluster.PartitionTest > 
testFetchOffsetForTimestampEpochValidationForFollower PASSED

kafka.cluster.PartitionTest > testListOffsetIsolationLevels STARTED

kafka.cluster.PartitionTest > testListOffsetIsolationLevels PASSED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset 
STARTED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset 
PASSED

kafka.cluster.PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch STARTED

kafka.cluster.PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch PASSED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower 
STARTED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower 
PASSED

kafka.cluster.PartitionTest > testMonotonicOffsetsAfterLeaderChange STARTED

kafka.cluster.PartitionTest > testMonotonicOffsetsAfterLeaderChange PASSED

kafka.cluster.PartitionTest > testMakeFollowerWithNoLeaderIdChange STARTED

kafka.cluster.PartitionTest > testMakeFollowerWithNoLeaderIdChange PASSED

kafka.cluster.PartitionTest > 
testAppendRecordsToFollowerWithNoReplicaThrowsException STARTED

kafka.cluster.PartitionTest > 
testAppendRecordsToFollowerWithNoReplicaThrowsException PASSED

kafka.cluster.PartitionTest > 
testFollowerDoesNotJoinISRUntilCaughtUpToOffsetWithinCurrentLeaderEpoch STARTED

kafka.cluster.PartitionTest > 
testFollowerDoesNotJoinISRUntilC

[jira] [Created] (KAFKA-10576) Different behavior of commitSync and commitAsync

2020-10-05 Thread Yuriy Badalyantc (Jira)
Yuriy Badalyantc created KAFKA-10576:


 Summary: Different behavior of commitSync and commitAsync
 Key: KAFKA-10576
 URL: https://issues.apache.org/jira/browse/KAFKA-10576
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Yuriy Badalyantc


It looks like the consumer's {{commitSync}} and {{commitAsync}} methods have 
different semantics.
{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TestKafka {
    public static void main(String[] args) {
        String id = "dev_test";
        Map<String, Object> settings = new HashMap<>();
        settings.put("bootstrap.servers", "localhost:9094");
        settings.put("key.deserializer", StringDeserializer.class);
        settings.put("value.deserializer", StringDeserializer.class);
        settings.put("client.id", id);
        settings.put("group.id", id);

        String topic = "test";
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        offsets.put(new TopicPartition(topic, 0), new OffsetAndMetadata(1));

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(settings)) {
            consumer.commitSync(offsets);
        }
    }
}
{code}
In the example above I create a consumer and use {{commitSync}} to commit 
offsets. This code works as expected: all offsets are committed to Kafka.

But in the case of {{commitAsync}} it will not work:
{code:java}
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(settings)) {
    CompletableFuture<Boolean> result = new CompletableFuture<>();
    consumer.commitAsync(offsets, new OffsetCommitCallback() {
        @Override
        public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets,
                               Exception exception) {
            if (exception != null) {
                result.completeExceptionally(exception);
            } else {
                result.complete(true);
            }
        }
    });
    result.get(15L, TimeUnit.SECONDS);
}
{code}
The {{result}} future failed with a timeout.

This behavior is pretty surprising. From the naming and documentation, it looks 
like the {{commitSync}} and {{commitAsync}} methods should behave identically, 
apart from the blocking/non-blocking aspect. But in reality there are some 
differences.

I can only assume that the {{commitAsync}} method somehow depends on {{poll}} 
calls. But I didn't find any explicit information about this in 
{{KafkaConsumer}}'s javadoc or on the Kafka documentation page.
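
If that assumption holds, driving the consumer with {{poll}} while waiting 
should give the async commit a chance to complete. A minimal sketch under that 
assumption (the poll-driven behavior is not documented, and {{assign}} is added 
only so that {{poll}} can be called without a subscription):
{code:java}
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(settings)) {
    // Assign the partition explicitly so poll() is legal without subscribe().
    consumer.assign(Collections.singletonList(new TopicPartition(topic, 0)));

    AtomicBoolean done = new AtomicBoolean(false);
    consumer.commitAsync(offsets, (committedOffsets, exception) -> done.set(true));

    long deadline = System.currentTimeMillis() + 15_000L;
    while (!done.get() && System.currentTimeMillis() < deadline) {
        // poll() drives the consumer's network I/O, which is what sends the
        // commit request and eventually invokes the callback.
        consumer.poll(Duration.ofMillis(100));
    }
}
{code}
(Uses {{java.time.Duration}}, {{java.util.Collections}}, and 
{{java.util.concurrent.atomic.AtomicBoolean}} in addition to the imports above.)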

So I believe there are two options:
# It's a bug and not expected behavior. {{commitSync}} and {{commitAsync}} 
should have identical semantics.
# It's expected but not well-documented behavior. In that case, the behavior 
should be explicitly documented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.2-jdk8 #4

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] KAFKA-10439: Connect's Values to parse BigInteger as 
Decimal with zero scale. (#9320)


--
[...truncated 2.07 MB...]

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload PASSED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch STARTED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch PASSED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica STARTED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica PASSED

kafka.log.ProducerStateManagerTest > testProducerStateAfterFencingAbortMarker 
STARTED

kafka.log.ProducerStateManagerTest > testProducerStateAfterFencingAbortMarker 
PASSED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround PASSED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout STARTED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout PASSED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
STARTED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
PASSED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
STARTED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
PASSED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord STARTED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots STARTED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
PASSED

kafka.log.ProducerStateManagerTest > testTruncateHead STARTED

kafka.log.ProducerStateManagerTest > testTruncateHead PASSED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshotEmptyTransaction 
STARTED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshotEmptyTransaction 
PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTes

[jira] [Created] (KAFKA-10575) StateRestoreListener#onRestoreEnd should always be triggered

2020-10-05 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-10575:
-

 Summary: StateRestoreListener#onRestoreEnd should always be 
triggered
 Key: KAFKA-10575
 URL: https://issues.apache.org/jira/browse/KAFKA-10575
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang


Today we only trigger `StateRestoreListener#onRestoreEnd` when we complete the 
restoration of an active task and transition it to the RUNNING state. However, 
restoration can also be stopped when the restoring task gets closed (because it 
gets migrated to another client, for example). We should also trigger the 
callback, indicating its progress, whenever restoration stops, in any scenario.
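
For context, the callback lives on the public `StateRestoreListener` interface, 
which applications register via `KafkaStreams#setGlobalStateRestoreListener`. A 
minimal sketch of such a listener (the class name is illustrative):
{code:java}
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

public class RestoreProgressListener implements StateRestoreListener {
    @Override
    public void onRestoreStart(TopicPartition tp, String storeName,
                               long startingOffset, long endingOffset) {
        System.out.printf("restore start: %s %s [%d..%d]%n",
                          storeName, tp, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition tp, String storeName,
                                long batchEndOffset, long numRestored) {
        // Per-batch progress; not relevant to this ticket.
    }

    @Override
    public void onRestoreEnd(TopicPartition tp, String storeName,
                             long totalRestored) {
        // Today this fires only when restoration completes and the task
        // transitions to RUNNING; the proposal is to also fire it when a
        // restoring task is closed (e.g. migrated to another client).
        System.out.printf("restore end: %s %s (%d records)%n",
                          storeName, tp, totalRestored);
    }
}
{code}
It is registered on a built `KafkaStreams` instance with 
`streams.setGlobalStateRestoreListener(new RestoreProgressListener())`.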



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #113

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Move `RaftRequestHandler` to `tools` package (#9377)

[github] KAFKA-10439: Connect's Values to parse BigInteger as Decimal with zero 
scale. (#9320)


--
[...truncated 3.32 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcess

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #145

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add proper checks to KafkaConsumer.groupMetadata (#9349)

[github] MINOR: Update doc for raft state metrics (#9342)


--
[...truncated 3.35 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.a

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #114

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update doc for raft state metrics (#9342)


--
[...truncated 3.35 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.a

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #112

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update doc for raft state metrics (#9342)


--
[...truncated 3.33 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = fal

[jira] [Resolved] (KAFKA-10439) Connect's Values class loses precision for integers, larger than 64 bits

2020-10-05 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-10439.

Resolution: Fixed

Merged and backported to the branches listed under "Fix version".

> Connect's Values class loses precision for integers, larger than 64 bits
> 
>
> Key: KAFKA-10439
> URL: https://issues.apache.org/jira/browse/KAFKA-10439
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.3, 2.3.2, 2.4.2, 2.7.0, 2.5.2, 2.6.1
>
>
> The `org.apache.kafka.connect.data.Values#parse` method parses integers 
> that are larger than `Long.MAX_VALUE` as `double` with 
> `Schema.FLOAT64_SCHEMA`.
> 
> That means it loses precision for these larger integers.
> For example:
> {code:java}
> SchemaAndValue schemaAndValue = Values.parseString("9223372036854775808");
> {code}
> returns:
> {code:java}
> SchemaAndValue{schema=Schema{FLOAT64}, value=9.223372036854776E18}
> {code}
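
Per the fix's commit message ("Connect's Values to parse BigInteger as Decimal 
with zero scale"), such values should now come back as Connect's `Decimal` 
logical type instead of a lossy `FLOAT64`. A hedged sketch of the expected 
post-fix behavior:
{code:java}
import java.math.BigDecimal;
import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.Values;

public class ParseBigIntegerExample {
    public static void main(String[] args) {
        // Long.MAX_VALUE is 9223372036854775807; this value is one larger.
        SchemaAndValue parsed = Values.parseString("9223372036854775808");

        // Expected after the fix (assumption based on the commit message):
        // a Decimal logical type with scale 0, preserving full precision.
        System.out.println(parsed.schema().name()); // org.apache.kafka.connect.data.Decimal
        System.out.println(parsed.value());         // 9223372036854775808
        boolean ok = parsed.value() instanceof BigDecimal
                && Decimal.LOGICAL_NAME.equals(parsed.schema().name());
        System.out.println("decimal with full precision: " + ok);
    }
}
{code}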



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.4-jdk8 #10

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] MINOR: remove stream simple benchmark suite (#8353)


--
[...truncated 2.90 MB...]

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.st

Re: [DISCUSS] KIP-660: Pluggable ReplicaAssignor

2020-10-05 Thread Efe Gencer
Hi Mickael,

Thanks for the KIP!
A call to an external system, e.g. Cruise Control, in the implementation of the 
provided interface can indeed help with the initial assignment of partitions.

I am curious why the proposed `ReplicaAssignor#assignReplicasToBrokers` 
receives a list of partition ids as opposed to the number of partitions to 
create the topic with?

Would you clarify if this API is expected to be used (1) only for new topics or 
(2) also for existing topics?

Best,
Efe

From: Mickael Maison 
Sent: Thursday, October 1, 2020 9:43 AM
To: dev 
Subject: Re: [DISCUSS] KIP-660: Pluggable ReplicaAssignor

Thanks Tom for the feedback!

1. If the data returned by the ReplicaAssignor implementation does not
match what was requested, we'll also throw a ReplicaAssignorException.

2. Good point, I'll update the KIP

3. The KIP mentions an error code associated with
ReplicaAssignorException: REPLICA_ASSIGNOR_FAILED

4. (I'm naming your last question 4.) I spent some time looking at it.
Initially I wanted to follow the model from the topic policies. But as
you said, computing assignments for the whole batch may be more
desirable and also avoids incrementally updating the cluster state.
The logic in AdminManager is very much centered around doing 1 topic
at a time but as far as I can tell we should be able to update it to
compute assignments for the whole batch.

I'll play a bit more with 4. and I'll update the KIP in the next few days.
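
For readers following the thread, here is a rough sketch of the interface
shape under discussion, folding in Tom's suggestion (2) of a single request
type. All names below (NewTopicAssignmentRequest, etc.) are illustrative,
not the KIP's actual API:

{code:java}
import java.util.List;
import java.util.Map;

// Hypothetical request object: new fields (e.g. clientId) could be added
// later without breaking implementors of the interface.
final class NewTopicAssignmentRequest {
    final String topic;
    final List<Integer> partitionIds;
    final short replicationFactor;

    NewTopicAssignmentRequest(String topic, List<Integer> partitionIds, short replicationFactor) {
        this.topic = topic;
        this.partitionIds = partitionIds;
        this.replicationFactor = replicationFactor;
    }
}

class ReplicaAssignorException extends RuntimeException {
    ReplicaAssignorException(String message) { super(message); }
}

interface ReplicaAssignor {
    // Returns, per partition id, the ordered list of broker ids (replicas).
    // Implementations throw ReplicaAssignorException if no valid assignment
    // can be computed, which the broker would map to REPLICA_ASSIGNOR_FAILED.
    Map<Integer, List<Integer>> assignReplicasToBrokers(NewTopicAssignmentRequest request)
            throws ReplicaAssignorException;
}
{code}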

On Mon, Sep 21, 2020 at 10:29 AM Tom Bentley  wrote:
>
> Hi Mickael,
>
> A few thoughts about the ReplicaAssignor contract:
>
> 1. What happens if a ReplicaAssignor impl returns a Map where some
> assignments don't meet the given replication factor?
> 2. Fixing the signature of assignReplicasToBrokers() as you have would make
> it hard to pass extra information in the future (e.g. maybe someone comes
> up with a use case where passing the clientId would be needed) because it
> would require the interface be changed. If you factored all the parameters
> into some new type then the signature could be
> assignReplicasToBrokers(RequiredReplicaAssignment) and adding any new
> properties to RequiredReplicaAssignment wouldn't break the contract.
> 3. When an assignor throws ReplicaAssignorException, what error code will be
> returned to the client?
>
> Also, this sentence got me thinking:
>
> > If multiple topics are present in the request, AdminManager will update
> the Cluster object so the ReplicaAssignor class has access to the up to
> date cluster metadata.
>
> Previously I've looked at how we can improve Kafka's pluggable policy
> support to pass more of the cluster state to policy implementations. A
> similar problem exists there, but the more cluster state you pass the
> harder it is to incrementally change it as you iterate through the topics
> to be created/modified. This likely isn't a problem here and now, but it
> could limit any future changes to the pluggable assignors. Did you consider
> the alternative of the assignor just being passed a Set of assignments?
> That means you can just pass the cluster state as it exists at the time. It
> also gives the implementation more information to work with to find more
> optimal assignments. For example, it could perform a bin packing type
> assignment which found a better optimum for the whole collection of topics
> than one which was only told about all the topics in the request
> sequentially.
>
> Otherwise this looks like a valuable feature to me.
>
> Kind regards,
>
> Tom
>
>
>
>
>
> On Fri, Sep 11, 2020 at 6:19 PM Robert Barrett 
> wrote:
>
> > Thanks Mickael, I think adding the new Exception resolves my concerns.
> >
> > On Thu, Sep 3, 2020 at 9:47 AM Mickael Maison 
> > wrote:
> >
> > > Thanks Robert and Ryanne for the feedback.
> > >
> > > ReplicaAssignor implementations can throw an exception to indicate an
> > > assignment can't be computed. This is already what the current round
> > > robin assignor does. Unfortunately at the moment, there are no generic
> > > error codes if it fails; it's either INVALID_PARTITIONS,
> > > INVALID_REPLICATION_FACTOR or, worse, UNKNOWN_SERVER_ERROR.
> > >
> > > So I think it would be nice to introduce a new Exception/Error code to
> > > cover any failures in the assignor and avoid UNKNOWN_SERVER_ERROR.
> > >
> > > I've updated the KIP accordingly, let me know if you have more questions.
> > >
> > > On Fri, Aug 28, 2020 at 4:49 PM Ryanne Dolan 
> > > wrote:
> > > >
> > > > Thanks Mickael, the KIP makes sense to me, esp for cases where an
> > > external
> > > > system (like cruise control or an operator) knows more about the target
> > > > cluster state than the broker does.
> > > >
> > > > Ryanne
> > > >
> > > > On Thu, Aug 20, 2020, 10:46 AM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I've created KIP-660 to make the replica assignment logic pluggable.
> > > > >
> > > > >
> > >
> > https://nam06

Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #13

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10531: Check for negative values to Thread.sleep call 
(#9347)

[John Roesler] MINOR: remove stream simple benchmark suite (#8353)


--
[...truncated 3.09 MB...]

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustom

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #144

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10531: Check for negative values to Thread.sleep call (#9347)


--
[...truncated 6.71 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #113

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add proper checks to KafkaConsumer.groupMetadata (#9349)


--
[...truncated 3.35 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #111

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add proper checks to KafkaConsumer.groupMetadata (#9349)


--
[...truncated 3.33 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[

Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #25

2020-10-05 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-598: Augment TopologyDescription with store and source / sink serde information

2020-10-05 Thread Sophie Blee-Goldman
Agreed, I was proposing those as two possible options we can consider.
If we add the store type enum first, then we could leverage that for this;
if not (which seems most likely), then we should just use the metricsScope,
which should be just as good (although not identical).

We can always revisit this part of the API if/when we tackle KIP-591 and
consider migrating the topology description to using the new store enum.
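
As a concrete illustration of the delegation idea (not the actual Kafka API;
the method name storeType() and the *Like types are hypothetical):

{code:java}
// A supplier exposes a metrics scope string such as "rocksdb-window".
interface StoreSupplierLike {
    String metricsScope();
}

// The builder surfaces it so the topology description can report a store
// type without building the store itself.
interface StoreBuilderLike {
    String name();
    String storeType();
}

final class WindowStoreBuilderSketch implements StoreBuilderLike {
    private final String name;
    private final StoreSupplierLike supplier;

    WindowStoreBuilderSketch(String name, StoreSupplierLike supplier) {
        this.name = name;
        this.supplier = supplier;
    }

    @Override
    public String name() { return name; }

    @Override
    public String storeType() { return supplier.metricsScope(); }
}
{code}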

On Mon, Oct 5, 2020 at 3:45 PM Guozhang Wang  wrote:

> That's a good idea, I think StoreBuilder#metricsScope() is not a very
> intrusive API to add. But note that the metricsScope() is not identical to
> KIP-591's store type enum, e.g. the former's possible values include
> "rocksdb-session" and "rocksdb-window".
>
>
> Guozhang
>
> On Mon, Oct 5, 2020 at 2:21 PM Sophie Blee-Goldman 
> wrote:
>
> > I suppose we could add a method to the StoreBuilder interface that calls
> > through
> > to the metricsScope() method of the StoreSupplier, similar to what we do
> > for the store
> > name.
> >
> > It feels a bit indirect but the metricsScope() should be an accurate
> > description of
> > the underlying store type. The whole point of metricsScope() is to
> identify
> > the store
> > type for use in metrics, so it seems like a reasonable extension to use
> it
> > to identify
> > the store type in the topology description as well.
> >
> > Or, if KIP-591 ever gets resurrected, maybe we will have a new store type
> > enum or
> > other public API to identify the stores that we can leverage here. But
> that
> > KIP seems
> > to have gone dormant as well :)
> >
> > On Fri, Oct 2, 2020 at 6:18 PM Guozhang Wang  wrote:
> >
> > > Hey Sophie,
> > >
> > > I've thought about this as well. But the tricky thing is that the
> > topology
> > > description's state store input is from the `StoreBuilder` class, which
> > > does not include type information. If we want to peek into such info we
> > > could call its `build` function, get the actual store and dig it out,
> but
> > > this would build the actual store even before the tasks are assigned
> etc.
> > >
> > > We can, however, extend the API of StoreBuilder to expose its store
> type
> > > information but we need to be careful here: the interface is a public
> API
> > > and information too specific like `RocksDBWindow` may be leaking too
> much
> > > here. WDYT?
> > >
> > >
> > > Guozhang
> > >
> > > On Tue, Sep 29, 2020 at 8:12 PM Sophie Blee-Goldman <
> sop...@confluent.io
> > >
> > > wrote:
> > >
> > > > Hey Guozhang, what's the status of this KIP?
> > > >
> > > > I was recently digging through a particularly opaque Streams
> > application
> > > > and
> > > > it occurred to me that it might also be useful to print the kind of
> > store
> > > > attached
> > > > to each node (eg RocksDBWindowStore, InMemoryKeyValueStore, custom,
> > > > etc). That made me think of this KIP so I was just wondering where it
> > > ended
> > > > up. And if you want to pick it up again, WDYT about including some
> > minor
> > > > store information in the augmented description?
> > > >
> > > > On Tue, May 19, 2020 at 1:22 PM Guozhang Wang 
> > > wrote:
> > > >
> > > > > We already have a Serdes actually, which is a factory class. What we
> > > > really
> > > > > need is to add new functions to `Serde`, `Serializer` and
> > > `Deserializer`
> > > > > interfaces, but since we already dropped Java7 backward compatibility
> > > > > it may not be a big issue anyway; let me think about it a bit more.
> > > > >
> > > > > On Tue, May 19, 2020 at 12:01 PM Matthias J. Sax  >
> > > > wrote:
> > > > >
> > > > > > Thanks Guozhang.
> > > > > >
> > > > > > This makes sense. I am still wondering about wrapped serdes:
> > > > > >
> > > > > > > and if it is a wrapper serde, also print its inner
> > > > > > >>> serde name
> > > > > >
> > > > > > How can our default implementation of `TopologyDescriber` know if
> > > it's
> > > > a
> > > > > > wrapped serde or not? Furthermore, how do wrapped serdes expose
> > their
> > > > > > inner serdes?
> > > > > >
> > > > > > I am also not sure what the purpose of TopologyDescriber is? Would
> > > > > > it maybe be better to add a new interface `Serdes` can implement
> > > > > > instead?
> > > > > >
> > > > > >
> > > > > > -Matthias
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 5/18/20 9:24 PM, Guozhang Wang wrote:
> > > > > > > Bruno, Matthias:
> > > > > > >
> > > > > > > Thanks for your inputs. After some thoughts I've decided to
> > > > > > > update my proposal in the following way:
> > > > > > >
> > > > > > > 1. Store#serdes() would return a "Map"
> > > > > > >
> > > > > > > 2. Topology's description would be independent of whether it is
> > > > > generated
> > > > > > > from `StreamsBuilder#build(props)` or `StreamsBuilder#build()`,
> > and
> > > > if
> > > > > > the
> > > > > > > serde is not known we would use "" as the default
> value.
> > > > > > >
> > > > > > > 3. Add `List TopologyDescription#sourceTopics() /
> > > > sinkTopics()
> > > > 

Build failed in Jenkins: Kafka » kafka-2.3-jdk8 #6

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] MINOR: remove stream simple benchmark suite (#8353)


--
[...truncated 1.71 MB...]
kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

2430 tests completed, 1 failed, 2 skipped

> Task :kafka-2.3-jdk8:core:test FAILED
> Task :testScala_2_12 FAILED
> Task :kafka-2.3-jdk8:generator:compileJava UP-TO-DATE
> Task :kafka-2.3-jdk8:generator:processResources NO-SOURCE
> Task :kafka-2.3-jdk8:generator:classes UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:processMessages UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:compileJava UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:processResources UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:classes UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:determineCommitId UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:createVersionFile
> Task :kafka-2.3-jdk8:clients:jar UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:compileTestJava UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:processTestResources UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:testClasses UP-TO-DATE
> Task :kafka-2.3-jdk8:core:compileJava NO-SOURCE

> Task :kafka-2.3-jdk8:core:compileScala
Pruning sources from previous analysis, due to incompatible CompileSetup.
:1154:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  
offsetAndMetadata.expireTimestamp.getOrElse(OffsetCommitRequest.DEFAULT_TIMESTAMP

Re: [DISCUSS] KIP-598: Augment TopologyDescription with store and source / sink serde information

2020-10-05 Thread Guozhang Wang
That's a good idea, I think StoreBuilder#metricsScope() is not a very
intrusive API to add. But note that the metricsScope() is not identical to
KIP-591's store type enum, e.g. the former's possible values include
"rocksdb-session" and "rocksdb-window".


Guozhang
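
To make the distinction concrete, a description layer could coarsen the
fine-grained scope strings into a store-type enum. This is only a sketch: the
enum names are hypothetical, and only the two scope values quoted above are
taken from the thread:

{code:java}
enum StoreKind { ROCKS_DB, IN_MEMORY, CUSTOM }

final class MetricsScopes {
    // "rocksdb-session" and "rocksdb-window" both collapse to ROCKS_DB;
    // the "in-memory" prefix is an assumption, not a confirmed scope value.
    static StoreKind kindOf(String metricsScope) {
        if (metricsScope.startsWith("rocksdb")) {
            return StoreKind.ROCKS_DB;
        }
        if (metricsScope.startsWith("in-memory")) {
            return StoreKind.IN_MEMORY;
        }
        return StoreKind.CUSTOM;
    }
}
{code}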

On Mon, Oct 5, 2020 at 2:21 PM Sophie Blee-Goldman 
wrote:

> I suppose we could add a method to the StoreBuilder interface that calls
> through
> to the metricsScope() method of the StoreSupplier, similar to what we do
> for the store
> name.
>
> It feels a bit indirect but the metricsScope() should be an accurate
> description of
> the underlying store type. The whole point of metricsScope() is to identify
> the store
> type for use in metrics, so it seems like a reasonable extension to use it
> to identify
> the store type in the topology description as well.
>
> Or, if KIP-591 ever gets resurrected, maybe we will have a new store type
> enum or
> other public API to identify the stores that we can leverage here. But that
> KIP seems
> to have gone dormant as well :)
>
> On Fri, Oct 2, 2020 at 6:18 PM Guozhang Wang  wrote:
>
> > Hey Sophie,
> >
> > I've thought about this as well. But the tricky thing is that the
> topology
> > description's state store input is from the `StoreBuilder` class, which
> > does not include type information. If we want to peek into such info we
> > could call its `build` function, get the actual store and dig it out, but
> > this would build the actual store even before the tasks are assigned etc.
> >
> > We can, however, extend the API of StoreBuilder to expose its store type
> > information but we need to be careful here: the interface is a public API
> > and information too specific like `RocksDBWindow` may be leaking too much
> > here. WDYT?
> >
> >
> > Guozhang
> >
> > On Tue, Sep 29, 2020 at 8:12 PM Sophie Blee-Goldman  >
> > wrote:
> >
> > > Hey Guozhang, what's the status of this KIP?
> > >
> > > I was recently digging through a particularly opaque Streams
> application
> > > and
> > > it occurred to me that it might also be useful to print the kind of
> store
> > > attached
> > > to each node (eg RocksDBWindowStore, InMemoryKeyValueStore, custom,
> > > etc). That made me think of this KIP so I was just wondering where it
> > ended
> > > up. And if you want to pick it up again, WDYT about including some
> minor
> > > store information in the augmented description?
> > >
> > > On Tue, May 19, 2020 at 1:22 PM Guozhang Wang 
> > wrote:
> > >
> > > > We already have a Serdes actually, which is a factory class. What we
> > > really
> > > > need is to add new functions to `Serde`, `Serializer` and
> > `Deserializer`
> > > > interfaces, but since we already dropped Java7 backward compatibility
> > > > it may not be a big issue anyway; let me think about it a bit more.
> > > >
> > > > On Tue, May 19, 2020 at 12:01 PM Matthias J. Sax 
> > > wrote:
> > > >
> > > > > Thanks Guozhang.
> > > > >
> > > > > This makes sense. I am still wondering about wrapped serdes:
> > > > >
> > > > > > and if it is a wrapper serde, also print its inner
> > > > > >>> serde name
> > > > >
> > > > > How can our default implementation of `TopologyDescriber` know if
> > it's
> > > a
> > > > > wrapped serde or not? Furthermore, how do wrapped serdes expose
> their
> > > > > inner serdes?
> > > > >
> > > > > I am also not sure what the purpose of TopologyDescriber is? Would
> > > > > it maybe be better to add a new interface `Serdes` can implement
> > > > > instead?
> > > > >
> > > > >
> > > > > -Matthias
> > > > >
> > > > >
> > > > >
> > > > > On 5/18/20 9:24 PM, Guozhang Wang wrote:
> > > > > > Bruno, Matthias:
> > > > > >
> > > > > > Thanks for your inputs. After some thoughts I've decided to update
> > > > > > my proposal in the following way:
> > > > > >
> > > > > > 1. Store#serdes() would return a "Map"
> > > > > >
> > > > > > 2. Topology's description would be independent of whether it is
> > > > generated
> > > > > > from `StreamsBuilder#build(props)` or `StreamsBuilder#build()`,
> and
> > > if
> > > > > the
> > > > > > serde is not known we would use "" as the default value.
> > > > > >
> > > > > > 3. Add `List TopologyDescription#sourceTopics() /
> > > sinkTopics()
> > > > /
> > > > > > repartitionTopics() / changelogTopics()` and for pattern /
> > > > > topic-extractor
> > > > > > we would use fixed format of "" and
> > > > > > "".
> > > > > >
> > > > > >
> > > > > > I will try to implement this in my existing PR and after I've
> > > confirmed
> > > > > it
> > > > > > works, I will update the final wiki for voting.
> > > > > >
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > >
> > > > > > On Mon, May 18, 2020 at 9:13 PM Guozhang Wang <
> wangg...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > >> Hello Andy,
> > > > > >>
> > > > > >> Thanks a lot for your comments! I do not mind at all :)
> > > > > >>
> > > > > >> I think that's a valid point, what I have in mind is to expose
> an
> > > > > >> interface which can be optionally

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-05 Thread Bill Bejeck
Hi John,

I've updated the list of expected KIPs for 2.7.0 with KIP-478.

Thanks,
Bill

On Mon, Oct 5, 2020 at 11:26 AM John Roesler  wrote:

> Hi Bill,
>
> Sorry about this, but I've just noticed that KIP-478 is
> missing from the list. The url is:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
>
> The KIP was accepted a long time ago, and the implementation
> has been trickling in since 2.6 branch cut. However, most of
> the public API implementation is done now, so I think at
> this point, we can call it "released in 2.7.0". I'll make
> sure it's done by feature freeze.
>
> Thanks,
> -John
>
> On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
> > All,
> >
> > With the KIP acceptance deadline passing yesterday, I've updated the
> > planned KIP content section of the 2.7.0 release plan
> > <
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> >
> > .
> >
> > Removed proposed KIPs for 2.7.0 not getting approval
> >
> >1. KIP-653
> ><
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> >
> >2. KIP-608
> ><
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer
> >
> >3. KIP-508
> ><
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
> >
> >
> > KIPs added
> >
> >1. KIP-671
> ><
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
> >
> >
> >
> > Please let me know if I've missed anything.
> >
> > Thanks,
> > Bill
> >
> > On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck  wrote:
> >
> > > Hi All,
> > >
> > > Just a reminder that the KIP freeze is next Wednesday, September 30th.
> > > Any KIP aiming to go in the 2.7.0 release needs to be accepted by this
> date.
> > >
> > > Thanks,
> > > BIll
> > >
> > > On Tue, Sep 22, 2020 at 12:11 PM Bill Bejeck 
> wrote:
> > >
> > > > Boyan,
> > > >
> > > > Done. Thanks for the heads up.
> > > >
> > > > -Bill
> > > >
> > > > On Mon, Sep 21, 2020 at 6:36 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hey Bill,
> > > > >
> > > > > unfortunately KIP-590 will not be in 2.7 release, could you move
> it to
> > > > > postponed KIPs?
> > > > >
> > > > > Best,
> > > > > Boyang
> > > > >
> > > > > On Thu, Sep 10, 2020 at 2:41 PM Bill Bejeck 
> wrote:
> > > > >
> > > > > > Hi Gary,
> > > > > >
> > > > > > It's been added.
> > > > > >
> > > > > > Regards,
> > > > > > Bill
> > > > > >
> > > > > > On Thu, Sep 10, 2020 at 4:14 PM Gary Russell <
> gruss...@vmware.com>
> > > > > wrote:
> > > > > > > Can someone add a link to the release plan page [1] to the
> Future
> > > > > > Releases
> > > > > > > page [2]?
> > > > > > >
> > > > > > > I have the latter bookmarked.
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > [1]:
> > > > > > >
> > > > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > > > > > > [2]:
> > > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> > > > > > > 
> > > > > > > From: Bill Bejeck 
> > > > > > > Sent: Wednesday, September 9, 2020 4:35 PM
> > > > > > > To: dev 
> > > > > > > Subject: Re: [DISCUSS] Apache Kafka 2.7.0 release
> > > > > > >
> > > > > > > Hi Dongjin,
> > > > > > >
> > > > > > > I've moved both KIPs to the release plan.
> > > > > > >
> > > > > > > Keep in mind the cutoff for KIP acceptance is September 30th.
> If the
> > > > > KIP
> > > > > > > discussions are completed, I'd recommend starting a vote for
> them.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Bill
> > > > > > >
> > > > > > > On Wed, Sep 9, 2020 at 8:39 AM Dongjin Lee  >
> > > > > wrote:
> > > > > > > > Hi Bill,
> > > > > > > >
> > > > > > > > Could you add the following KIPs to the plan?
> > > > > > > >
> > > > > > > > - KIP-508: Make Suppression State Queriable
> > > > > > > > <
> > > > > > > >
> > > > >
> https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcwiki.apache.org%2Fconfluence%2Fdisplay%2FKAFKA%2FKIP-508%253A%2BMake%2BSuppression%2BState%2BQueriable&data=02%7C01%7Cgrussell%40vmware.com%7Cf9f4193557084c2f746508d854ffe7d2%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637352805378806436&sdata=CkJill9%2FuBqp2HdVQrIjElj2z1nMgQXRaUyWrvY94dk%3D&reserved=0
> > > > > > > > - KIP-653: Upgrade log4j to log4j2
> > > > > > > > <
> > > > > > > >
> > > > >
> https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcwiki.apache.org%2Fconfluence%2Fdisplay%2FKAFKA%2FKIP-653%253A%2BUpgrade%2Blog4j%2Bto%2Blog4j2&data=02%7C01%7Cgrussell%40vmware.com%7Cf9f4193557084c2f746508d854ffe7d2%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637352805378806436&sdata=nHbw6WiQpkWT3KgPfanEtDCh3sWcL0O%2By8Fu0Bl4ivc%3D&reserved=0
> > > > > > > >
> > > > > > > > Both KIPs are completely implemented with passing all tests,
> but
> > > > > not
> > > > > > got
> >

Re: [VOTE] KIP-673: Emit JSONs with new auto-generated schema

2020-10-05 Thread Jason Gustafson
+1 Thanks for the KIP!

On Fri, Oct 2, 2020 at 4:37 PM Colin McCabe  wrote:

> Thanks, Anastasia!  This will be a lot easier to maintain.
>
> +1 (binding)
>
> best,
> Colin
>
> On Wed, Sep 30, 2020, at 23:57, David Jacot wrote:
> > Thanks for the KIP, Anastasia.
> >
> > +1 (non-binding)
> >
> > On Thu, Oct 1, 2020 at 8:06 AM Tom Bentley  wrote:
> >
> > > Thanks Anastasia,
> > >
> > > +1 (non-binding)
> > >
> > >
> > > On Thu, Oct 1, 2020 at 6:30 AM Gwen Shapira  wrote:
> > >
> > > > Thank you, this will be quite helpful.
> > > >
> > > > +1 (binding)
> > > >
> > > > On Wed, Sep 30, 2020 at 11:19 AM Anastasia Vela 
> > > > wrote:
> > > > >
> > > > > Hi everyone,
> > > > >
> > > > > Thanks again for the discussion. I'd like to start the vote for
> this
> > > KIP.
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-673%3A+Emit+JSONs+with+new+auto-generated+schema
> > > > >
> > > > > Thanks,
> > > > > Anastasia
> > > >
> > > >
> > > >
> > > > --
> > > > Gwen Shapira
> > > > Engineering Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-10-05 Thread Sophie Blee-Goldman
It seems a little misleading, but I actually have no real qualms about
transitioning to the
REBALANCING state *after* RESTORING. One of the side effects of KIP-429 was
that in
most cases we actually don't transition to REBALANCING at all until the
very end of the
rebalance, so REBALANCING doesn't really mean all that much any more. These
days
the majority of the time an instance spends in the REBALANCING state is
actually spent
on restoration anyways.

If users are listening in on the REBALANCING -> RUNNING transition, then
they might
also be listening for the RUNNING -> REBALANCING transition, so we may need
to actually
go RUNNING -> REBALANCING -> RESTORING -> REBALANCING -> RUNNING. This
feels a bit unwieldy but I don't think there's anything specifically wrong
with it.

That said, it makes me question the value of having a REBALANCING state at
all. In the
pre-KIP-429 days it made sense, because all tasks were paused and
unavailable for IQ
for the duration of the rebalance. But these days, the threads can continue
processing
any tasks they own during a rebalance, so the only time that tasks are
truly unavailable
is during the restoration phase.

So, I find the idea of getting rid of the REBALANCING state altogether to
be pretty
appealing, in which case we'd probably need to introduce a new state
listener and
deprecate the current one as John proposed. I also wonder if this is the
sort of thing
we can just swallow as a breaking change in the upcoming 3.0 release.
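
For context, a minimal sketch of how an application observes these transitions
today through the public API. Note that the RESTORING state discussed above is
only a proposal and does not exist in the released KafkaStreams.State enum:

{code:java}
import org.apache.kafka.streams.KafkaStreams;

public class StateTransitionLogger {
    // Must be called before streams.start().
    public static void attach(KafkaStreams streams) {
        streams.setStateListener((newState, oldState) -> {
            if (oldState == KafkaStreams.State.REBALANCING
                    && newState == KafkaStreams.State.RUNNING) {
                // Today this is the signal that tasks are assigned and
                // restored; with a separate RESTORING state, listeners
                // keyed on this transition would need updating.
                System.out.println("Rebalance (and restoration) complete");
            }
        });
    }
}
{code}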

On Sat, Oct 3, 2020 at 11:02 PM Navinder Brar
 wrote:

>
>
>
> Thanks a lot, Matthias for detailed feedback. I tend to agree with
> changing the state machine
>
> itself if required. I think at the end of the day InvalidOffsetException
> is a rare event and is not
>
> as frequent as rebalancing. So, pausing all tasks once in a while should
> be ok from a processing
>
> standpoint.
>
>
>
>
>
>
>
> I was also wondering if instead of adding RESTORING state b/w REBALANCING
> & RUNNING
>
> we could add it before REBALANCING. Whenever an application starts anyway,
> there is no need for
>
> active/replica tasks to be present there for us to build global stores
> there. We can restore global stores first
>
> and then trigger a rebalancing to get the tasks assigned. This might help
> us in shielding the users
>
> from changing what they listen to currently (which is REBALANCING ->
> RUNNING). So, we go
>
> RESTORING -> REBALANCING -> RUNNING. The only drawback here might be that
> replicas would
>
> also be paused while we are restoring global stores but as Matthias said
> we would want to give
>
> complete bandwidth to restoring global stores in such a case and
> considering it is a rare event this
>
> should be ok. On the plus side, this would not lead to any race condition
> and we would not need to
>
> change the behavior of any stores. But this also means that this RESTORING
> state is only for global stores
>
> like the GLOBAL_RESTORING state we discussed before :) as regular tasks
> still restore inside REBALANCING.
>
> @John, @Sophie do you think this would work?
>
>
>
>
>
>
>
> Regards,
>
>
>
>
> Navinder
>
>
>
>
> On Wednesday, 30 September, 2020, 09:39:07 pm IST, Matthias J. Sax <
> mj...@apache.org> wrote:
>
>  I guess we need to have some cleanup mechanism for this case anyway,
> because the global thread can enter RESTORING state at any point in
> time, and thus, even if we set a flag to pause processing on the
> StreamThreads we are subject to a race condition.
>
> Besides that, on a high level I am fine with either "busy waiting" (i.e.,
> just lock the global-store and retry) or setting a flag. However, there
> are some trade-offs to consider:
>
> As we need a cleanup mechanism anyway, it might be ok to just use a
> single mechanism. -- We should consider the impact in EOS though, as we
> might need to wipe out the store of regular tasks for this case. Thus,
> setting a flag might actually help to prevent repeatedly wiping
> the store on retries... On the other hand, we plan to avoid wiping the
> store in case of error for EOS anyway, and if we get this improvement,
> we might not need the flag.
>
> For the client state machine: I would actually prefer to have a
> RESTORING state and I would also prefer to pause _all_ tasks. This might
> imply that we want a flag. In the past, we allowed interleaving restore
> and processing in StreamThread (for regular tasks), which slowed down
> restoring, and we changed it back to not process any tasks until all
> tasks are restored. Of course, in our case we have two different
> threads (not a single one). However, the network is still shared, so it
> might be desirable to give the full network bandwidth to the global
> consumer to restore as fast as possible (maybe an improvement we could
> add to `StreamThreads` too, if we have multiple threads)? And as a side
> effect, it does not muddy the waters about what each client state means.
>
> Thus, overall, I tend to prefer a flag on `StreamThread` as 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #112

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10531: Check for negative values to Thread.sleep call (#9347)


--
[...truncated 3.35 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #110

2020-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10531: Check for negative values to Thread.sleep call (#9347)


--
[...truncated 3.33 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.Topo

Re: [DISCUSS] KIP-598: Augment TopologyDescription with store and source / sink serde information

2020-10-05 Thread Sophie Blee-Goldman
I suppose we could add a method to the StoreBuilder interface that calls
through to the metricsScope() method of the StoreSupplier, similar to what
we do for the store name.

It feels a bit indirect, but the metricsScope() should be an accurate
description of the underlying store type. The whole point of metricsScope()
is to identify the store type for use in metrics, so it seems like a
reasonable extension to use it to identify the store type in the topology
description as well.

Or, if KIP-591 ever gets resurrected, maybe we will have a new store type
enum or other public API to identify the stores that we can leverage here.
But that KIP seems to have gone dormant as well :)
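
A minimal sketch of this suggestion, using simplified local types rather
than the real Streams interfaces: the builder exposes the supplier's
metricsScope() the same way it already exposes the store name.

interface StoreSupplierSketch {
    String name();
    String metricsScope(); // e.g. "rocksdb-window", "in-memory"
}

interface StoreBuilderSketch {
    StoreSupplierSketch supplier();

    default String name() {
        return supplier().name();
    }

    // New accessor: would let TopologyDescription report the store type
    // without calling build() and materializing the actual store.
    default String storeType() {
        return supplier().metricsScope();
    }
}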

On Fri, Oct 2, 2020 at 6:18 PM Guozhang Wang  wrote:

> Hey Sophie,
>
> I've thought about this as well. But the tricky thing is that the topology
> description's state store input is from the `StoreBuilder` class, which
> does not include type information. If we want to peek into such info we
> could call its `build` function, get the actual store and dig it out, but
> this would build the actual store even before the tasks are assigned etc.
>
> We can, however, extend the API of StoreBuilder to expose its store type
> information but we need to be careful here: the interface is a public API
> and information too specific like `RocksDBWindow` may be leaking too much
> here. WDYT?
>
>
> Guozhang
>
> On Tue, Sep 29, 2020 at 8:12 PM Sophie Blee-Goldman 
> wrote:
>
> > Hey Guozhang, what's the status of this KIP?
> >
> > I was recently digging through a particularly opaque Streams application
> > and
> > it occurred to me that it might also be useful to print the kind of store
> > attached
> > to each node (eg RocksDBWindowStore, InMemoryKeyValueStore, custom,
> > etc). That made me think of this KIP so I was just wondering where it
> ended
> > up. And if you want to pick it up again, WDYT about including some minor
> > store information in the augmented description?
> >
> > On Tue, May 19, 2020 at 1:22 PM Guozhang Wang 
> wrote:
> >
> > > We already have a Serdes class actually, which is a factory class. What
> > > we really need is to add new functions to the `Serde`, `Serializer` and
> > > `Deserializer` interfaces, but since we already dropped Java 7 backward
> > > compatibility, it may not be a big issue anyway; let me think about it a
> > > bit more.
> > >
> > > On Tue, May 19, 2020 at 12:01 PM Matthias J. Sax 
> > wrote:
> > >
> > > > Thanks Guozhang.
> > > >
> > > > This makes sense. I am still wondering about wrapped serdes:
> > > >
> > > > > and if it is a wrapper serde, also print its inner
> > > > >>> serde name
> > > >
> > > > How can our default implementation of `TopologyDescriber` know if
> it's
> > a
> > > > wrapped serde or not? Furthermore, how do wrapped serdes expose their
> > > > inner serdes?
> > > >
> > > > I am also not sure what the purpose of TopologyDescriber is? Would it
> > > > maybe be better to add a new interface `Serdes` can implement instead?
> > > >
> > > >
> > > > -Matthias
> > > >
> > > >
> > > >
> > > > On 5/18/20 9:24 PM, Guozhang Wang wrote:
> > > > > Bruno, Matthias:
> > > > >
> > > > > Thanks for your inputs. After some thoughts I've decide to update
> my
> > > > > proposal in the following way:
> > > > >
> > > > > 1. Store#serdes() would return a "Map"
> > > > >
> > > > > 2. Topology's description would be independent of whether it is
> > > generated
> > > > > from `StreamsBuilder#build(props)` or `StreamsBuilder#build()`, and
> > if
> > > > the
> > > > > serde is not known we would use "" as the default value.
> > > > >
> > > > > 3. Add `List TopologyDescription#sourceTopics() /
> > sinkTopics()
> > > /
> > > > > repartitionTopics() / changelogTopics()` and for pattern /
> > > > topic-extractor
> > > > > we would use fixed format of "" and
> > > > > "".
> > > > >
> > > > >
> > > > > I will try to implement this in my existing PR and after I've
> > confirmed
> > > > it
> > > > > works, I will update the final wiki for voting.
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Mon, May 18, 2020 at 9:13 PM Guozhang Wang 
> > > > wrote:
> > > > >
> > > > >> Hello Andy,
> > > > >>
> > > > >> Thanks a lot for your comments! I do not mind at all :)
> > > > >>
> > > > >> I think that's a valid point, what I have in mind is to expose an
> > > > >> interface which can be optionally overridden in the overridden
> > > > describe()
> > > > >> call:
> > > > >>
> > > > >> Topology#describe(final TopologyDescriber)
> > > > >>
> > > > >> Interface TopologyDescriber {
> > > > >>
> > > > >> default describeSerde(final Serde);
> > > > >>
> > > > >> default describeSerializer(final Serializer);
> > > > >>
> > > > >> default describeDeserializer(final Deserializer);
> > > > >> }
> > > > >>
> > > > >> And we would expose a DefaultTopologyDescriber class that just
> > > > >> prints the serde class names -- and if it is a wrapper serde, also
> > > > >> prints its inner serde name.
> > > > 
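
For readability, here is one self-contained reading of the TopologyDescriber
sketch quoted above. The String return types and the generics are
assumptions (the thread leaves them open); only kafka-clients is needed on
the classpath.

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

interface TopologyDescriber {
    default String describeSerde(final Serde<?> serde) {
        return serde == null ? "unknown" : serde.getClass().getName();
    }

    default String describeSerializer(final Serializer<?> serializer) {
        return serializer == null ? "unknown" : serializer.getClass().getName();
    }

    default String describeDeserializer(final Deserializer<?> deserializer) {
        return deserializer == null ? "unknown" : deserializer.getClass().getName();
    }
}

// The proposed default would just print class names; a wrapper serde could
// additionally be unwrapped to append its inner serde's name.
class DefaultTopologyDescriber implements TopologyDescriber { }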

Re: [VOTE] KIP-630: Kafka Raft Snapshot

2020-10-05 Thread Jun Rao
Hi, Jose,

Thanks for the KIP. +1. A couple of minor comments below.

1. The new configuration names suggested by Ron sound reasonable.
2. It seems that OFFSET_OUT_OF_RANGE in the wiki needs to be changed to
POSITION_OUT_OF_RANGE.

Jun

On Mon, Oct 5, 2020 at 9:46 AM Jason Gustafson  wrote:

> +1 Thanks for the KIP!
>
> -Jason
>
> On Mon, Oct 5, 2020 at 9:03 AM Ron Dagostino  wrote:
>
> > Thanks for the KIP, Jose.  +1 (non-binding) from me.
> >
> > I do have one comment/confusion.
> >
> > Upon re-reading the latest version, I am confused about the name of
> > the proposed "metadata.snapshot.min.records" config.  Is this a size,
> > or is it a count?  I think it is about a size but want to be sure.  I
> > also wonder if it is about changes (updates/deletes) rather than just
> > additions/accretions, or is it independent of that?
> >
> > I'm also unclear about the definition of the
> > "metadata.snapshot.min.cleanable.ratio" config -- is that a ratio of a
> > *number* of new records to the number of snapshot records?  Or is it a
> > *size* ratio?  I think it is a ratio of numbers of records rather than
> > a ratio of sizes.  I think this one is also about changes
> > (updates/deletes) rather than just additions/accretions.
> >
> > I'm wondering if we can be clearer with the names of these two configs
> > to make their definitions more apparent.  For example, assuming
> > certain definitions as mentioned above:
> >
> > metadata.snapshot.min.new_records.size -- the minimum size of new
> > records required before a snapshot can occur
> > metadata.snapshot.min.change_records.ratio -- the minimum ratio of the
> > number of change (i.e. not simply accretion) records to the number of
> > records in the last snapshot (if any) that must be achieved before a
> > snapshot can occur.
> >
> > For example, if there is no snapshot yet, then ".new_records.size"
> > must be written before a snapshot is allowed.  If there is a snapshot
> > with N records, then before a snapshot is allowed both
> > ".new_records.size" must be written and ".change_records.ratio" must
> > be satisfied such that the number of changes (not accretions) divided
> > by N meets the ratio.
> >
> > Ron
> >
> >
> >
> >
> >
> > On Fri, Oct 2, 2020 at 8:14 PM Lucas Bradstreet 
> > wrote:
> > >
> > > Thanks for the KIP! Non-binding +1
> > >
> > > On Fri, Oct 2, 2020 at 3:30 PM Guozhang Wang 
> wrote:
> > >
> > > > Thanks Jose! +1 from me.
> > > >
> > > > On Fri, Oct 2, 2020 at 3:18 PM Jose Garcia Sancio <
> > jsan...@confluent.io>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start a vote on KIP-630.
> > > > >
> > > > > KIP: https://cwiki.apache.org/confluence/x/exV4CQ
> > > > > Discussion Thread:
> > > > >
> > > > >
> > > >
> >
> https://lists.apache.org/thread.html/r9468d1f276385695a2d6d48f6dfbdc504c445fc5745aaa606d138fed%40%3Cdev.kafka.apache.org%3E
> > > > >
> > > > > Thank you
> > > > > --
> > > > > -Jose
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> >
>
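
To make Ron's proposed semantics concrete, here is a hypothetical sketch of
the snapshot trigger under his suggested names (nothing here is the final
KIP-630 design; the config names and exact semantics were still under
discussion in this thread):

final class SnapshotPolicySketch {
    private final long minNewRecordsBytes;  // metadata.snapshot.min.new_records.size
    private final double minChangeRatio;    // metadata.snapshot.min.change_records.ratio

    SnapshotPolicySketch(final long minNewRecordsBytes, final double minChangeRatio) {
        this.minNewRecordsBytes = minNewRecordsBytes;
        this.minChangeRatio = minChangeRatio;
    }

    boolean shouldSnapshot(final long newRecordBytesSinceSnapshot,
                           final long changeRecordsSinceSnapshot,
                           final long recordsInLastSnapshot) {
        if (newRecordBytesSinceSnapshot < minNewRecordsBytes) {
            return false; // not enough new data written yet
        }
        if (recordsInLastSnapshot == 0) {
            return true;  // no snapshot yet: the size threshold alone decides
        }
        // Ratio of change records (updates/deletes, not accretions) to the
        // number of records in the previous snapshot.
        final double ratio =
            (double) changeRecordsSinceSnapshot / recordsInLastSnapshot;
        return ratio >= minChangeRatio;
    }
}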


Re: Create KIP permission

2020-10-05 Thread Jun Rao
Hi, Javier,

Thanks for your interest. Just gave you the wiki permission.

Jun

On Sat, Oct 3, 2020 at 8:59 AM Javier Freire Riobo 
wrote:

> Hi,
>
> My UID is javier.freire. I wanted to create a KIP to add to Kafka Streams
> the ability to convert a changelog stream to a record stream, computing
> each value by comparing the old and new values of the record.
>
> These are the changes:
>
>
> https://github.com/javierfreire/kafka/commit/d32169f06452388800ceb2a9e1ef86d1921d1345
>
> Thank you
>


[jira] [Resolved] (KAFKA-9585) Flaky Test: LagFetchIntegrationTest#shouldFetchLagsDuringRebalancingWithOptimization

2020-10-05 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-9585.
--
Resolution: Cannot Reproduce

> Flaky Test: 
> LagFetchIntegrationTest#shouldFetchLagsDuringRebalancingWithOptimization
> 
>
> Key: KAFKA-9585
> URL: https://issues.apache.org/jira/browse/KAFKA-9585
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: flaky-test
>
> Failed for me locally with 
> {noformat}
> java.lang.AssertionError: Condition not met within timeout 12. Should 
> obtain non-empty lag information eventually
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10530) kafka-streams-application-reset misses some internal topics

2020-10-05 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10530.
--
Resolution: Duplicate

Closing now, since this seems like a duplicate report, and visual code 
inspection indicates it should have been fixed.

If you do still see this [~oweiler] , please feel free to re-open the ticket.

> kafka-streams-application-reset misses some internal topics
> ---
>
> Key: KAFKA-10530
> URL: https://issues.apache.org/jira/browse/KAFKA-10530
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Affects Versions: 2.6.0
>Reporter: Oliver Weiler
>Priority: Major
>
> While the {{kafka-streams-application-reset}} tool works in most cases, it 
> misses some internal topics when using {{Foreign Key Table-Table Joins}}.
> After execution, there are still two internal topics left which were not 
> deleted
> {code}
> bv4-indexer-KTABLE-FK-JOIN-SUBSCRIPTION-REGISTRATION-06-topic
> bbv4-indexer-717e6cc5-acb2-498d-9d08-4814aaa71c81-StreamThread-1-consumer 
> bbv4-indexer-KTABLE-FK-JOIN-SUBSCRIPTION-RESPONSE-14-topic
> {code}
> The reason seems to be the {{StreamsResetter.isInternalTopic}} which requires 
> the internal topic to end with {{-changelog}} or {{-repartition}} (which the 
> mentioned topics don't).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
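
A simplified reading of that suffix check (not the exact {{StreamsResetter}} 
code), showing why the FK-join subscription topics are skipped:

{code}
public class InternalTopicCheckSketch {
    static boolean looksInternal(String topic, String appId) {
        return topic.startsWith(appId + "-")
            && (topic.endsWith("-changelog") || topic.endsWith("-repartition"));
    }

    public static void main(String[] args) {
        String appId = "bbv4-indexer";
        // Prints "false": the FK-join subscription topic is not matched,
        // so the reset tool leaves it behind.
        System.out.println(looksInternal(
            appId + "-KTABLE-FK-JOIN-SUBSCRIPTION-RESPONSE-14-topic", appId));
    }
}
{code}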


[jira] [Resolved] (KAFKA-10535) KIP-478: Implement StateStoreContext and Record

2020-10-05 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10535.
--
Resolution: Fixed

> KIP-478: Implement StateStoreContext and Record
> ---
>
> Key: KAFKA-10535
> URL: https://issues.apache.org/jira/browse/KAFKA-10535
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-10-05 Thread Sagar
Thanks Guozhang,

For the PR review, the PR is already there in the KIP.

I am adding it here:

https://github.com/confluentinc/kafka/pull/242

It's not complete by any means but maybe one pass would be possible?

Thanks!
Sagar.

On Mon, Oct 5, 2020 at 10:26 PM Guozhang Wang  wrote:

> Hey Sagar, since the KIP is accepted, the next step would be to completing
> the PR review and finally merge it :)
>
> On Mon, Oct 5, 2020 at 9:46 AM Sagar  wrote:
>
> > Hi All,
> >
> > Just wanted to know what the next steps are w.r.t. the KIP?
> >
> > Thanks!
> > Sagar.
> >
> >
> > On Thu, Sep 3, 2020 at 10:17 PM Sagar  wrote:
> >
> > > Hi John,
> > >
> > > Thank you! I have marked the KIP as Accepted :)
> > >
> > > Regarding the point on InMemoryKeyValueStore, in the PR I had added the
> > > implementation for InMemoryKeyValueStore as well. I hadn't mentioned
> > about
> > > it in the KIP which I have done now as you suggested.
> > >
> > > Thanks!
> > > Sagar.
> > >
> > > On Thu, Sep 3, 2020 at 8:10 PM John Roesler 
> wrote:
> > >
> > >> Hi Sagar,
> > >>
> > >> Yes! Congratulations :)
> > >>
> > >> Now, you can mark the status of the KIP as "Accepted" and we
> > >> can move on to reviewing your PRs.
> > >>
> > >> One quick note: Matthias didn't have time to review the KIP
> > >> in full, but he did point out to me that there's a lot of
> > >> information about the RocksDB implementation and no mention
> > >> of the InMemory store. We both agree that we should
> > >> implement the new method also for the InMemory store.
> > >> Assuming you agree, note that we don't need to discuss any
> > >> implementation details, so you could just update the KIP
> > >> document to also mention, "We will also implement the new
> > >> method in the InMemoryKeyValueStore."
> > >>
> > >> Thanks for your contribution to Apache Kafka!
> > >> -John
> > >>
> > >> On Thu, 2020-09-03 at 09:30 +0530, Sagar wrote:
> > >> > Thanks All!
> > >> >
> > >> > I see 3 binding +1 votes and 2 non-binding +1s. Does it mean this
> KIP
> > >> has
> > >> > gained a lazy majority?
> > >> >
> > >> > Thanks!
> > >> > Sagar.
> > >> >
> > >> > On Thu, Sep 3, 2020 at 6:51 AM Guozhang Wang 
> > >> wrote:
> > >> >
> > >> > > Thanks for the KIP Sagar. I'm +1 (binding) too.
> > >> > >
> > >> > >
> > >> > > Guozhang
> > >> > >
> > >> > > On Tue, Sep 1, 2020 at 1:24 PM Bill Bejeck 
> > wrote:
> > >> > >
> > >> > > > Thanks for the KIP! This is a great addition to the streams API.
> > >> > > >
> > >> > > > +1 (binding)
> > >> > > >
> > >> > > > -Bill
> > >> > > >
> > >> > > > On Tue, Sep 1, 2020 at 12:33 PM Sagar <
> sagarmeansoc...@gmail.com>
> > >> wrote:
> > >> > > >
> > >> > > > > Hi All,
> > >> > > > >
> > >> > > > > Bumping the thread again !
> > >> > > > >
> > >> > > > > Thanks!
> > >> > > > > Sagar.
> > >> > > > >
> > >> > > > > On Wed, Aug 5, 2020 at 12:08 AM Sophie Blee-Goldman <
> > >> > > sop...@confluent.io
> > >> > > > > wrote:
> > >> > > > >
> > >> > > > > > Thanks Sagar! +1 (non-binding)
> > >> > > > > >
> > >> > > > > > Sophie
> > >> > > > > >
> > >> > > > > > On Sun, Aug 2, 2020 at 11:37 PM Sagar <
> > >> sagarmeansoc...@gmail.com>
> > >> > > > wrote:
> > >> > > > > > > Hi All,
> > >> > > > > > >
> > >> > > > > > > Just thought of bumping this voting thread again to see if
> > we
> > >> can
> > >> > > > form
> > >> > > > > > any
> > >> > > > > > > consensus around this.
> > >> > > > > > >
> > >> > > > > > > Thanks!
> > >> > > > > > > Sagar.
> > >> > > > > > >
> > >> > > > > > >
> > >> > > > > > > On Mon, Jul 20, 2020 at 4:21 AM Adam Bellemare <
> > >> > > > > adam.bellem...@gmail.com
> > >> > > > > > > wrote:
> > >> > > > > > >
> > >> > > > > > > > LGTM
> > >> > > > > > > > +1 non-binding
> > >> > > > > > > >
> > >> > > > > > > > On Sun, Jul 19, 2020 at 4:13 AM Sagar <
> > >> sagarmeansoc...@gmail.com
> > >> > > > > > wrote:
> > >> > > > > > > > > Hi All,
> > >> > > > > > > > >
> > >> > > > > > > > > Bumping this thread to see if there are any feedbacks.
> > >> > > > > > > > >
> > >> > > > > > > > > Thanks!
> > >> > > > > > > > > Sagar.
> > >> > > > > > > > >
> > >> > > > > > > > > On Tue, Jul 14, 2020 at 9:49 AM John Roesler <
> > >> > > > vvcep...@apache.org>
> > >> > > > > > > > wrote:
> > >> > > > > > > > > > Thanks for the KIP, Sagar!
> > >> > > > > > > > > >
> > >> > > > > > > > > > I’m +1 (binding)
> > >> > > > > > > > > >
> > >> > > > > > > > > > -John
> > >> > > > > > > > > >
> > >> > > > > > > > > > On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
> > >> > > > > > > > > > > Hi All,
> > >> > > > > > > > > > >
> > >> > > > > > > > > > > I would like to start a new voting thread for the
> > >> below KIP
> > >> > > > to
> > >> > > > > > add
> > >> > > > > > > > > prefix
> > >> > > > > > > > > > > scan support to state stores:
> > >> > > > > > > > > > >
> > >> > > > > > > > > > >
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > > > > > > > > > > 614%3A+Add+Prefix+Scan+support+for+State+Stores
> > >> > > > > > > > > > > <
> > >> > >
> 
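
As background on what the KIP enables, here is an illustrative in-memory
prefix scan over sorted String keys, mirroring the semantics KIP-614
proposes for state stores (the real API operates on serialized bytes and
returns a KeyValueIterator; this simplification is not the KIP's API):

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class PrefixScanSketch {
    public static void main(String[] args) {
        NavigableMap<String, String> store = new TreeMap<>();
        store.put("user-1", "alice");
        store.put("user-2", "bob");
        store.put("order-1", "book");

        // All keys starting with "user-": everything in the half-open range
        // ["user-", "user.") where the upper bound bumps the last prefix char.
        String prefix = "user-";
        String end = prefix.substring(0, prefix.length() - 1)
            + (char) (prefix.charAt(prefix.length() - 1) + 1);
        for (Map.Entry<String, String> e : store.subMap(prefix, end).entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}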

Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-10-05 Thread Guozhang Wang
Hey Sagar, since the KIP is accepted, the next step would be to completing
the PR review and finally merge it :)

On Mon, Oct 5, 2020 at 9:46 AM Sagar  wrote:

> Hi All,
>
> Just wanted to know what the next steps are w.r.t. the KIP?
>
> Thanks!
> Sagar.
>
>
> On Thu, Sep 3, 2020 at 10:17 PM Sagar  wrote:
>
> > Hi John,
> >
> > Thank you! I have marked the KIP as Accepted :)
> >
> > Regarding the point on InMemoryKeyValueStore, in the PR I had added the
> > implementation for InMemoryKeyValueStore as well. I hadn't mentioned
> about
> > it in the KIP which I have done now as you suggested.
> >
> > Thanks!
> > Sagar.
> >
> > On Thu, Sep 3, 2020 at 8:10 PM John Roesler  wrote:
> >
> >> Hi Sagar,
> >>
> >> Yes! Congratulations :)
> >>
> >> Now, you can mark the status of the KIP as "Accepted" and we
> >> can move on to reviewing your PRs.
> >>
> >> One quick note: Matthias didn't have time to review the KIP
> >> in full, but he did point out to me that there's a lot of
> >> information about the RocksDB implementation and no mention
> >> of the InMemory store. We both agree that we should
> >> implement the new method also for the InMemory store.
> >> Assuming you agree, note that we don't need to discuss any
> >> implementation details, so you could just update the KIP
> >> document to also mention, "We will also implement the new
> >> method in the InMemoryKeyValueStore."
> >>
> >> Thanks for your contribution to Apache Kafka!
> >> -John
> >>
> >> On Thu, 2020-09-03 at 09:30 +0530, Sagar wrote:
> >> > Thanks All!
> >> >
> >> > I see 3 binding +1 votes and 2 non-binding +1s. Does it mean this KIP
> >> has
> >> > gained a lazy majority?
> >> >
> >> > Thanks!
> >> > Sagar.
> >> >
> >> > On Thu, Sep 3, 2020 at 6:51 AM Guozhang Wang 
> >> wrote:
> >> >
> >> > > Thanks for the KIP Sagar. I'm +1 (binding) too.
> >> > >
> >> > >
> >> > > Guozhang
> >> > >
> >> > > On Tue, Sep 1, 2020 at 1:24 PM Bill Bejeck 
> wrote:
> >> > >
> >> > > > Thanks for the KIP! This is a great addition to the streams API.
> >> > > >
> >> > > > +1 (binding)
> >> > > >
> >> > > > -Bill
> >> > > >
> >> > > > On Tue, Sep 1, 2020 at 12:33 PM Sagar 
> >> wrote:
> >> > > >
> >> > > > > Hi All,
> >> > > > >
> >> > > > > Bumping the thread again !
> >> > > > >
> >> > > > > Thanks!
> >> > > > > Sagar.
> >> > > > >
> >> > > > > On Wed, Aug 5, 2020 at 12:08 AM Sophie Blee-Goldman <
> >> > > sop...@confluent.io
> >> > > > > wrote:
> >> > > > >
> >> > > > > > Thanks Sagar! +1 (non-binding)
> >> > > > > >
> >> > > > > > Sophie
> >> > > > > >
> >> > > > > > On Sun, Aug 2, 2020 at 11:37 PM Sagar <
> >> sagarmeansoc...@gmail.com>
> >> > > > wrote:
> >> > > > > > > Hi All,
> >> > > > > > >
> >> > > > > > > Just thought of bumping this voting thread again to see if
> we
> >> can
> >> > > > form
> >> > > > > > any
> >> > > > > > > consensus around this.
> >> > > > > > >
> >> > > > > > > Thanks!
> >> > > > > > > Sagar.
> >> > > > > > >
> >> > > > > > >
> >> > > > > > > On Mon, Jul 20, 2020 at 4:21 AM Adam Bellemare <
> >> > > > > adam.bellem...@gmail.com
> >> > > > > > > wrote:
> >> > > > > > >
> >> > > > > > > > LGTM
> >> > > > > > > > +1 non-binding
> >> > > > > > > >
> >> > > > > > > > On Sun, Jul 19, 2020 at 4:13 AM Sagar <
> >> sagarmeansoc...@gmail.com
> >> > > > > > wrote:
> >> > > > > > > > > Hi All,
> >> > > > > > > > >
> >> > > > > > > > > Bumping this thread to see if there are any feedbacks.
> >> > > > > > > > >
> >> > > > > > > > > Thanks!
> >> > > > > > > > > Sagar.
> >> > > > > > > > >
> >> > > > > > > > > On Tue, Jul 14, 2020 at 9:49 AM John Roesler <
> >> > > > vvcep...@apache.org>
> >> > > > > > > > wrote:
> >> > > > > > > > > > Thanks for the KIP, Sagar!
> >> > > > > > > > > >
> >> > > > > > > > > > I’m +1 (binding)
> >> > > > > > > > > >
> >> > > > > > > > > > -John
> >> > > > > > > > > >
> >> > > > > > > > > > On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
> >> > > > > > > > > > > Hi All,
> >> > > > > > > > > > >
> >> > > > > > > > > > > I would like to start a new voting thread for the
> >> below KIP
> >> > > > to
> >> > > > > > add
> >> > > > > > > > > prefix
> >> > > > > > > > > > > scan support to state stores:
> >> > > > > > > > > > >
> >> > > > > > > > > > >
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > > > > > > > > > > 614%3A+Add+Prefix+Scan+support+for+State+Stores
> >> > > > > > > > > > > <
> >> > >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-614%3A+Add+Prefix+Scan+support+for+State+Stores
> >> > > > > > > > > > >
> >> > > > > > > > > > > Thanks!
> >> > > > > > > > > > > Sagar.
> >> > > > > > > > > > >
> >> > >
> >> > > --
> >> > > -- Guozhang
> >> > >
> >>
> >>
>


-- 
-- Guozhang


Re: [VOTE] KIP-630: Kafka Raft Snapshot

2020-10-05 Thread Jason Gustafson
+1 Thanks for the KIP!

-Jason

On Mon, Oct 5, 2020 at 9:03 AM Ron Dagostino  wrote:

> Thanks for the KIP, Jose.  +1 (non-binding) from me.
>
> I do have one comment/confusion.
>
> Upon re-reading the latest version, I am confused about the name of
> the proposed "metadata.snapshot.min.records" config.  Is this a size,
> or is it a count?  I think it is about a size but want to be sure.  I
> also wonder if it is about changes (updates/deletes) rather than just
> additions/accretions, or is it independent of that?
>
> I'm also unclear about the definition of the
> "metadata.snapshot.min.cleanable.ratio" config -- is that a ratio of a
> *number* of new records to the number of snapshot records?  Or is it a
> *size* ratio?  I think it is a ratio of numbers of records rather than
> a ratio of sizes.  I think this one is also about changes
> (updates/deletes) rather than just additions/accretions.
>
> I'm wondering if we can be clearer with the names of these two configs
> to make their definitions more apparent.  For example, assuming
> certain definitions as mentioned above:
>
> metadata.snapshot.min.new_records.size -- the minimum size of new
> records required before a snapshot can occur
> metadata.snapshot.min.change_records.ratio -- the minimum ratio of the
> number of change (i.e. not simply accretion) records to the number of
> records in the last snapshot (if any) that must be achieved before a
> snapshot can occur.
>
> For example, if there is no snapshot yet, then ".new_records.size"
> must be written before a snapshot is allowed.  If there is a snapshot
> with N records, then before a snapshot is allowed both
> ".new_records.size" must be written and ".change_records.ratio" must
> be satisfied such that the number of changes (not accretions) divided
> by N meets the ratio.
>
> Ron
>
>
>
>
>
> On Fri, Oct 2, 2020 at 8:14 PM Lucas Bradstreet 
> wrote:
> >
> > Thanks for the KIP! Non-binding +1
> >
> > On Fri, Oct 2, 2020 at 3:30 PM Guozhang Wang  wrote:
> >
> > > Thanks Jose! +1 from me.
> > >
> > > On Fri, Oct 2, 2020 at 3:18 PM Jose Garcia Sancio <
> jsan...@confluent.io>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I would like to start a vote on KIP-630.
> > > >
> > > > KIP: https://cwiki.apache.org/confluence/x/exV4CQ
> > > > Discussion Thread:
> > > >
> > > >
> > >
> https://lists.apache.org/thread.html/r9468d1f276385695a2d6d48f6dfbdc504c445fc5745aaa606d138fed%40%3Cdev.kafka.apache.org%3E
> > > >
> > > > Thank you
> > > > --
> > > > -Jose
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
>


Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-10-05 Thread Sagar
Hi All,

Just wanted to know what the next steps are w.r.t. the KIP?

Thanks!
Sagar.


On Thu, Sep 3, 2020 at 10:17 PM Sagar  wrote:

> Hi John,
>
> Thank you! I have marked the KIP as Accepted :)
>
> Regarding the point on InMemoryKeyValueStore, in the PR I had added the
> implementation for InMemoryKeyValueStore as well. I hadn't mentioned about
> it in the KIP which I have done now as you suggested.
>
> Thanks!
> Sagar.
>
> On Thu, Sep 3, 2020 at 8:10 PM John Roesler  wrote:
>
>> Hi Sagar,
>>
>> Yes! Congratulations :)
>>
>> Now, you can mark the status of the KIP as "Accepted" and we
>> can move on to reviewing your PRs.
>>
>> One quick note: Matthias didn't have time to review the KIP
>> in full, but he did point out to me that there's a lot of
>> information about the RocksDB implementation and no mention
>> of the InMemory store. We both agree that we should
>> implement the new method also for the InMemory store.
>> Assuming you agree, note that we don't need to discuss any
>> implementation details, so you could just update the KIP
>> document to also mention, "We will also implement the new
>> method in the InMemoryKeyValueStore."
>>
>> Thanks for your contribution to Apache Kafka!
>> -John
>>
>> On Thu, 2020-09-03 at 09:30 +0530, Sagar wrote:
>> > Thanks All!
>> >
>> > I see 3 binding +1 votes and 2 non-binding +1s. Does it mean this KIP
>> has
>> > gained a lazy majority?
>> >
>> > Thanks!
>> > Sagar.
>> >
>> > On Thu, Sep 3, 2020 at 6:51 AM Guozhang Wang 
>> wrote:
>> >
>> > > Thanks for the KIP Sagar. I'm +1 (binding) too.
>> > >
>> > >
>> > > Guozhang
>> > >
>> > > On Tue, Sep 1, 2020 at 1:24 PM Bill Bejeck  wrote:
>> > >
>> > > > Thanks for the KIP! This is a great addition to the streams API.
>> > > >
>> > > > +1 (binding)
>> > > >
>> > > > -Bill
>> > > >
>> > > > On Tue, Sep 1, 2020 at 12:33 PM Sagar 
>> wrote:
>> > > >
>> > > > > Hi All,
>> > > > >
>> > > > > Bumping the thread again !
>> > > > >
>> > > > > Thanks!
>> > > > > Sagar.
>> > > > >
>> > > > > On Wed, Aug 5, 2020 at 12:08 AM Sophie Blee-Goldman <
>> > > sop...@confluent.io
>> > > > > wrote:
>> > > > >
>> > > > > > Thanks Sagar! +1 (non-binding)
>> > > > > >
>> > > > > > Sophie
>> > > > > >
>> > > > > > On Sun, Aug 2, 2020 at 11:37 PM Sagar <
>> sagarmeansoc...@gmail.com>
>> > > > wrote:
>> > > > > > > Hi All,
>> > > > > > >
>> > > > > > > Just thought of bumping this voting thread again to see if we
>> can
>> > > > form
>> > > > > > any
>> > > > > > > consensus around this.
>> > > > > > >
>> > > > > > > Thanks!
>> > > > > > > Sagar.
>> > > > > > >
>> > > > > > >
>> > > > > > > On Mon, Jul 20, 2020 at 4:21 AM Adam Bellemare <
>> > > > > adam.bellem...@gmail.com
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > > > LGTM
>> > > > > > > > +1 non-binding
>> > > > > > > >
>> > > > > > > > On Sun, Jul 19, 2020 at 4:13 AM Sagar <
>> sagarmeansoc...@gmail.com
>> > > > > > wrote:
>> > > > > > > > > Hi All,
>> > > > > > > > >
>> > > > > > > > > Bumping this thread to see if there are any feedbacks.
>> > > > > > > > >
>> > > > > > > > > Thanks!
>> > > > > > > > > Sagar.
>> > > > > > > > >
>> > > > > > > > > On Tue, Jul 14, 2020 at 9:49 AM John Roesler <
>> > > > vvcep...@apache.org>
>> > > > > > > > wrote:
>> > > > > > > > > > Thanks for the KIP, Sagar!
>> > > > > > > > > >
>> > > > > > > > > > I’m +1 (binding)
>> > > > > > > > > >
>> > > > > > > > > > -John
>> > > > > > > > > >
>> > > > > > > > > > On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
>> > > > > > > > > > > Hi All,
>> > > > > > > > > > >
>> > > > > > > > > > > I would like to start a new voting thread for the
>> below KIP
>> > > > to
>> > > > > > add
>> > > > > > > > > prefix
>> > > > > > > > > > > scan support to state stores:
>> > > > > > > > > > >
>> > > > > > > > > > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > > > > > > > > 614%3A+Add+Prefix+Scan+support+for+State+Stores
>> > > > > > > > > > > <
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-614%3A+Add+Prefix+Scan+support+for+State+Stores
>> > > > > > > > > > >
>> > > > > > > > > > > Thanks!
>> > > > > > > > > > > Sagar.
>> > > > > > > > > > >
>> > >
>> > > --
>> > > -- Guozhang
>> > >
>>
>>


[jira] [Created] (KAFKA-10574) Infinite loop in SimpleHeaderConverter and Values classes

2020-10-05 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-10574:
-

 Summary: Infinite loop in SimpleHeaderConverter and Values classes
 Key: KAFKA-10574
 URL: https://issues.apache.org/jira/browse/KAFKA-10574
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.1, 2.6.0, 2.4.1, 2.5.0, 2.3.1, 2.4.0, 2.2.2, 2.2.1, 
2.3.0, 2.1.1, 2.2.0, 2.1.0, 2.0.1, 2.0.0, 1.1.1, 1.1.0, 1.1.2, 2.0.2
Reporter: Chris Egerton
Assignee: Chris Egerton


A header value with the byte sequence {{0xEF, 0xBF, 0xBF}} will cause an 
infinite loop in the {{Values::parseString}} method. Since that method is 
invoked by the default header converter ({{SimpleHeaderConverter}}), any sink 
record with that byte array will, by default, cause a sink task reading that 
record to stall forever.

This occurs because that byte sequence, when parsed as a UTF-8 string and then 
read by a 
[StringCharacterIterator|https://docs.oracle.com/javase/8/docs/api/java/text/StringCharacterIterator.html],
 causes the 
[CharacterIterator.DONE|https://docs.oracle.com/javase/8/docs/api/java/text/CharacterIterator.html#DONE]
 character to be returned from 
[StringCharacterIterator::current|https://docs.oracle.com/javase/8/docs/api/java/text/StringCharacterIterator.html#current--],
 
[StringCharacterIterator::next|https://docs.oracle.com/javase/8/docs/api/java/text/StringCharacterIterator.html#next--],
 etc., and a check for that character is used by the {{Values}} class for its 
parsing logic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
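
A small demonstration of the root cause described above: U+FFFF (UTF-8 bytes 
0xEF 0xBF 0xBF) has the same char value as {{CharacterIterator.DONE}}, so a 
parser that uses DONE as its end-of-input sentinel cannot distinguish this 
character from the actual end of the string:

{code}
import java.text.CharacterIterator;
import java.text.StringCharacterIterator;

public class DoneSentinelDemo {
    public static void main(String[] args) {
        String value = "a\uFFFFb"; // U+FFFF embedded mid-string
        StringCharacterIterator it = new StringCharacterIterator(value);
        it.setIndex(1); // position on the embedded U+FFFF character
        // Prints "true" even though we are at index 1 of 3, not at the end.
        System.out.println(it.current() == CharacterIterator.DONE);
    }
}
{code}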


Re: [VOTE] KIP-630: Kafka Raft Snapshot

2020-10-05 Thread Ron Dagostino
Thanks for the KIP, Jose.  +1 (non-binding) from me.

I do have one comment/confusion.

Upon re-reading the latest version, I am confused about the name of
the proposed "metadata.snapshot.min.records" config.  Is this a size,
or is it a count?  I think it is about a size but want to be sure.  I
also wonder if it is about changes (updates/deletes) rather than just
additions/accretions, or is it independent of that?

I'm also unclear about the definition of the
"metadata.snapshot.min.cleanable.ratio" config -- is that a ratio of a
*number* of new records to the number of snapshot records?  Or is it a
*size* ratio?  I think it is a ratio of numbers of records rather than
a ratio of sizes.  I think this one is also about changes
(updates/deletes) rather than just additions/accretions.

I'm wondering if we can be clearer with the names of these two configs
to make their definitions more apparent.  For example, assuming
certain definitions as mentioned above:

metadata.snapshot.min.new_records.size -- the minimum size of new
records required before a snapshot can occur
metadata.snapshot.min.change_records.ratio -- the minimum ratio of the
number of change (i.e. not simply accretion) records to the number of
records in the last snapshot (if any) that must be achieved before a
snapshot can occur.

For example, if there is no snapshot yet, then ".new_records.size"
must be written before a snapshot is allowed.  If there is a snapshot
with N records, then before a snapshot is allowed both
".new_records.size" must be written and ".change_records.ratio" must
be satisfied such that the number of changes (not accretions) divided
by N meets the ratio.

Ron





On Fri, Oct 2, 2020 at 8:14 PM Lucas Bradstreet  wrote:
>
> Thanks for the KIP! Non-binding +1
>
> On Fri, Oct 2, 2020 at 3:30 PM Guozhang Wang  wrote:
>
> > Thanks Jose! +1 from me.
> >
> > On Fri, Oct 2, 2020 at 3:18 PM Jose Garcia Sancio 
> > wrote:
> >
> > > Hi all,
> > >
> > > I would like to start a vote on KIP-630.
> > >
> > > KIP: https://cwiki.apache.org/confluence/x/exV4CQ
> > > Discussion Thread:
> > >
> > >
> > https://lists.apache.org/thread.html/r9468d1f276385695a2d6d48f6dfbdc504c445fc5745aaa606d138fed%40%3Cdev.kafka.apache.org%3E
> > >
> > > Thank you
> > > --
> > > -Jose
> > >
> >
> >
> > --
> > -- Guozhang
> >


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-05 Thread Colin McCabe
On Mon, Sep 28, 2020, at 11:41, Jun Rao wrote:
> Hi, Colin,
> 
> Thanks for the reply. A few more comments below.
> 
> 62.
> 62.1 controller.listener.names: So, is this used for the controller or the
> broker trying to connect to the controller?
>

Hi Jun,

It's used by both.  The broker tries to connect to controllers by using the 
first listener in this list.  The controller uses the list to determine which 
listeners it should bind to.

>
> 62.2 If we want to take the approach to share the configs that are common
> between the broker and the controller, should we share the id too?
>

On nodes where we are running both the controller and broker roles, we need two 
IDs: one for the controller, and one for the broker.  So we can't share the 
same configuration.

>
> 62.3 We added some configs in KIP-595 prefixed with "quorum" and we plan to
> add some controller specific configs prefixed with "controller". KIP-630
> plans to add some other controller specific configs with no prefix. Should
> we standardize all controller specific configs with the same prefix?
> 

That's a good question.  As Jose said elsewhere in the thread, let's use 
"metadata" as the prefix.

>
> 70. Could you explain the impact of process.roles a bit more? For example,
> if process.roles=controller, does the node still maintain metadata cache as
> described in KIP-630? Do we still return the host/port for those nodes in
> the metadata response?
> 

No, the node does not need any broker data structures if it is not a broker.  
Controllers also don't handle MetadataRequests.  Clients will still go to the 
brokers for those.

best,
Colin
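
For illustration, a hypothetical configuration for a node running both roles,
consistent with the semantics described above (key names were still in flux
in this thread, so treat these as placeholders rather than final names):

# Node acts as both broker and controller.
process.roles=broker,controller
# The controller binds to the listeners named here; brokers connect to a
# controller via the first entry in this list.
controller.listener.names=CONTROLLER
listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT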


> Thanks,
> 
> Jun
> 
> On Mon, Sep 28, 2020 at 9:24 AM Colin McCabe  wrote:
> 
> > On Fri, Sep 25, 2020, at 17:35, Jun Rao wrote:
> > > Hi, Colin,
> > >
> > > Thanks for the reply.
> > >
> > > 62. Thinking about this more, I am wondering what's our overall strategy
> > > for configs shared between the broker and the controller. For example,
> > both
> > > the broker and the controller have to define listeners. So both need
> > > configs like listeners/advertised.listeners. Both the new controller and
> > > the broker replicate data. So both need to define some replication
> > related
> > > configurations (replica.fetch.min.bytes, replica.fetch.wait.max.ms,
> > etc).
> > > Should we let the new controller share those configs with brokers or
> > should
> > > we define new configs just for the controller? It seems that we want to
> > > apply the same strategy consistently for all shared configs?
> > >
> >
> > Hi Jun,
> >
> > That's a good question.  I think that we should share as many
> > configurations as possible.  There will be a few cases where this isn't
> > possible, and we need to create a new configuration key that is
> > controller-only, but I think that will be relatively rare.
> >
> > >
> > > 63. If a node has process.roles set to controller, does one still need to
> > > set broker.id on this node?
> > >
> >
> > No, broker.id does not need to be set in that case.
> >
> > best,
> > Colin
> >
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > >
> > > On Fri, Sep 25, 2020 at 2:17 PM Colin McCabe  wrote:
> > >
> > > > On Fri, Sep 25, 2020, at 10:17, Jun Rao wrote:
> > > > > Hi, Colin,
> > > > >
> > > > > Thanks for the reply.
> > > > >
> > > > > 60. Yes, I think you are right. We probably need the controller id
> > when a
> > > > > broker starts up. A broker only stores the Raft leader id in the
> > metadata
> > > > > file. To do the initial fetch to the Raft leader, it needs to know
> > the
> > > > > host/port associated with the leader id.
> > > > >
> > > > > 62. It seems there are 2 parts to this : (1) which listener a client
> > > > should
> > > > > use to initiate a connection to the controller and (2) which listener
> > > > > should a controller use to accept client requests. For (1), at any
> > point
> > > > of
> > > > > time, a client only needs to use one listener. I think
> > > > > controller.listener.name is meant for the client.
> > > >
> > > > Hi Jun,
> > > >
> > > > controller.listener.names is also used by the controllers.  In the case
> > > > where we have a broker and a controller in the same JVM, we have a
> > single
> > > > config file.  Then we need to know which listeners belong to the
> > controller
> > > > and which belong to the broker component.  That's why it's a list.
> > > >
> > > > > So, a single value seems
> > > > > to make more sense. Currently, we don't have a configuration for
> > (2). We
> > > > > could add a new one for that and support a list. I am wondering how
> > > > useful
> > > > > it will be. One example that I can think of is that we can reject
> > > > > non-controller related requests if accepted on controller-only
> > listeners.
> > > > > However, we already support separate authentication for the
> > controller
> > > > > listener. So, not sure how useful it is.
> > > >
> > > > The controller always has a separate listener and does not share
> > liste

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-05 Thread John Roesler
Hi Bill,

Sorry about this, but I've just noticed that KIP-478 is
missing from the list. The url is:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API

The KIP was accepted a long time ago, and the implementation
has been trickling in since 2.6 branch cut. However, most of
the public API implementation is done now, so I think at
this point, we can call it "released in 2.7.0". I'll make
sure it's done by feature freeze.

Thanks,
-John
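
For reference, a minimal sketch of what the strongly typed Processor API
looks like in use, assuming the package and class names from the KIP
(org.apache.kafka.streams.processor.api); treat it as illustrative rather
than the final shipped surface:

import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Input and output key/value types are separate generic parameters, so
// forwarding a record of the wrong type no longer compiles.
public class UppercaseProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(final Record<String, String> record) {
        // Record#withValue keeps the key, timestamp, and headers intact.
        context.forward(record.withValue(record.value().toUpperCase()));
    }
}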

On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
> All,
> 
> With the KIP acceptance deadline passing yesterday, I've updated the
> planned KIP content section of the 2.7.0 release plan
> 
> .
> 
> Removed proposed KIPs for 2.7.0 not getting approval
> 
>1. KIP-653
>
> 
>2. KIP-608
>
> 
>3. KIP-508
>
> 
> 
> KIPs added
> 
>1. KIP-671
>
> 
> 
> 
> Please let me know if I've missed anything.
> 
> Thanks,
> Bill
> 
> On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck  wrote:
> 
> > Hi All,
> > 
> > Just a reminder that the KIP freeze is next Wednesday, September 30th.
> > Any KIP aiming to go in the 2.7.0 release needs to be accepted by this date.
> > 
> > Thanks,
> > BIll
> > 
> > On Tue, Sep 22, 2020 at 12:11 PM Bill Bejeck  wrote:
> > 
> > > Boyan,
> > > 
> > > Done. Thanks for the heads up.
> > > 
> > > -Bill
> > > 
> > > On Mon, Sep 21, 2020 at 6:36 PM Boyang Chen 
> > > wrote:
> > > 
> > > > Hey Bill,
> > > > 
> > > > unfortunately KIP-590 will not be in 2.7 release, could you move it to
> > > > postponed KIPs?
> > > > 
> > > > Best,
> > > > Boyang
> > > > 
> > > > On Thu, Sep 10, 2020 at 2:41 PM Bill Bejeck  wrote:
> > > > 
> > > > > Hi Gary,
> > > > > 
> > > > > It's been added.
> > > > > 
> > > > > Regards,
> > > > > Bill
> > > > > 
> > > > > On Thu, Sep 10, 2020 at 4:14 PM Gary Russell 
> > > > wrote:
> > > > > > Can someone add a link to the release plan page [1] to the Future
> > > > > Releases
> > > > > > page [2]?
> > > > > > 
> > > > > > I have the latter bookmarked.
> > > > > > 
> > > > > > Thanks.
> > > > > > 
> > > > > > [1]:
> > > > > > 
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > > > > > [2]:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> > > > > > 
> > > > > > From: Bill Bejeck 
> > > > > > Sent: Wednesday, September 9, 2020 4:35 PM
> > > > > > To: dev 
> > > > > > Subject: Re: [DISCUSS] Apache Kafka 2.7.0 release
> > > > > > 
> > > > > > Hi Dongjin,
> > > > > > 
> > > > > > I've moved both KIPs to the release plan.
> > > > > > 
> > > > > > Keep in mind the cutoff for KIP acceptance is September 30th. If the
> > > > KIP
> > > > > > discussions are completed, I'd recommend starting a vote for them.
> > > > > > 
> > > > > > Regards,
> > > > > > Bill
> > > > > > 
> > > > > > On Wed, Sep 9, 2020 at 8:39 AM Dongjin Lee 
> > > > wrote:
> > > > > > > Hi Bill,
> > > > > > > 
> > > > > > > Could you add the following KIPs to the plan?
> > > > > > > 
> > > > > > > - KIP-508: Make Suppression State Queriable
> > > > > > > <
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable>
> > > > > > > - KIP-653: Upgrade log4j to log4j2
> > > > > > > <
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2>
> > > > > > > 
> > > > > > > Both KIPs are completely implemented with passing all tests, but
> > > > not
> > > > > got
> > > > > > > reviewed by the committers. Could anyone have a look?
> > > > > > > 
> > > > > > > Thanks,
> > > > > > > Dongjin
> > > > > > > 
> > > > > > > On Wed, Sep 9, 2020 at 8:38 AM Leah Thomas 
> > > > > wrote:
> > > > > > > > Hi Bill,
> > > > > > > > 
> > > > > > > > Could you also add KIP-450 to the release plan? It's been 
> > > > > > > > merged.
> > > > > > > > 
> > > > > > > > 
> > > > https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2