Build failed in Jenkins: kafka-2.1-jdk8 #266

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8104: Consumer cannot rejoin to the group after rebalancing


--
[...truncated 2.82 MB...]
org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeReduced STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeReduced PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowIsWithinThisWindow STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowIsWithinThisWindow PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldNotOverlapIfOtherWindowIsBeforeThisWindow STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldNotOverlapIfOtherWindowIsBeforeThisWindow PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
cannotCompareSessionWindowWithDifferentWindowType STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
cannotCompareSessionWindowWithDifferentWindowType PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowContainsThisWindow STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowContainsThisWindow PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 

Build failed in Jenkins: kafka-2.6-jdk8 #6

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-9494; Include additional metadata information in 
DescribeConfig


--
[...truncated 6.25 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED


Build failed in Jenkins: kafka-2.3-jdk8 #209

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8104; Consumer cannot rejoin to the group after rebalancing


--
[...truncated 2.83 MB...]
kafka.coordinator.group.GroupMetadataManagerTest > 
testDoNotLoadAbortedTransactionalOffsetCommits STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testDoNotLoadAbortedTransactionalOffsetCommits PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentTxnGoodPathSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentTxnGoodPathSequence PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentRandomSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentRandomSequence PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption STARTED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption PASSED

kafka.tools.ConsumerPerformanceTest > testConfig STARTED

kafka.tools.ConsumerPerformanceTest > testConfig PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigs STARTED

kafka.tools.ConsoleProducerTest > testValidConfigs PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
STARTED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit STARTED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest PASSED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
STARTED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
PASSED

kafka.tools.ConsoleConsumerTest > 

[jira] [Created] (KAFKA-10072) KafkaConsumer configured with different client.id parameters obtains different results

2020-05-29 Thread victor (Jira)
victor created KAFKA-10072:
--

 Summary: KafkaConsumer configured with different client.id parameters 
obtains different results
 Key: KAFKA-10072
 URL: https://issues.apache.org/jira/browse/KAFKA-10072
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.0
 Environment: centos7.6 8C 32G
Reporter: victor


kafka-console-consumer.sh --bootstrap-server host1:port --consumer-property 
client.id=aa --from-beginning --topic topicA
There's no data.

kafka-console-consumer.sh --bootstrap-server host1:port --consumer-property 
clientid=bb --from-beginning --topic topicA
Successfully consumes data.
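One plausible factor in the report above: `clientid` is not a recognized consumer configuration key (the valid name is `client.id`), so the second invocation effectively runs without the intended client id. A minimal sketch of detecting such a typo, using a hypothetical subset of the recognized key names (the real list lives in `org.apache.kafka.clients.consumer.ConsumerConfig`, which logs rather than fails on unknown keys):

```java
import java.util.Set;

// Sketch only: a tiny stand-in for the key validation done by the real
// consumer config; the set below is an illustrative subset, not the full list.
public class ConsumerPropCheck {
    static final Set<String> KNOWN_KEYS =
            Set.of("bootstrap.servers", "group.id", "client.id", "auto.offset.reset");

    static boolean isRecognized(String key) {
        return KNOWN_KEYS.contains(key);
    }

    public static void main(String[] args) {
        System.out.println(isRecognized("client.id")); // the valid key
        System.out.println(isRecognized("clientid"));  // typo: silently dropped
    }
}
```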



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #145

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9501: convert between active and standby without closing stores

[github] KAFKA-9130; KIP-518 Allow listing consumer groups per state (#8238)

[github] KAFKA-10061; Fix flaky


--
[...truncated 1.91 MB...]
kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl STARTED
[3309.281s][warning][os,thread] Failed to start thread - pthread_create failed 
(EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
STARTED
kafka.api.SaslScramSslEndToEndAuthorizationTest.testProduceConsumeWithPrefixedAcls
 failed, log available in 


kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls FAILED
java.lang.OutOfMemoryError: unable to create native thread: possibly out of 
memory or process/resource limits reached
at java.base/java.lang.Thread.start0(Native Method)
at java.base/java.lang.Thread.start(Thread.java:799)
at 
kafka.common.ZkNodeChangeNotificationListener.init(ZkNodeChangeNotificationListener.scala:67)
at 
kafka.server.DynamicConfigManager.startup(DynamicConfigManager.scala:162)
at kafka.server.KafkaServer.startup(KafkaServer.scala:366)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:156)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:147)
at 
kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
at 
kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:93)
at 
kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:84)
at 
kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:192)
at 
kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:47)
at 
kafka.api.SaslScramSslEndToEndAuthorizationTest.setUp(SaslScramSslEndToEndAuthorizationTest.scala:43)

kafka.api.SaslScramSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreateTopicAuthorizationWithClusterCreate STARTED
[3315.418s][warning][os,thread] Failed to start thread - pthread_create failed 
(EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.

kafka.api.AuthorizerIntegrationTest > 
testCreateTopicAuthorizationWithClusterCreate PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED
[3316.976s][warning][os,thread] Failed to start thread - pthread_create failed 
(EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
kafka.api.AuthorizerIntegrationTest.testOffsetFetchWithTopicAndGroupRead 
failed, log available in 


kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
FAILED
java.lang.OutOfMemoryError: unable to create native thread: possibly out of 
memory or process/resource limits reached
at java.base/java.lang.Thread.start0(Native Method)
at java.base/java.lang.Thread.start(Thread.java:799)
at 
kafka.server.KafkaRequestHandlerPool.createHandler(KafkaRequestHandler.scala:116)
at 
kafka.server.KafkaRequestHandlerPool.$anonfun$new$1(KafkaRequestHandler.scala:111)
at 
kafka.server.KafkaRequestHandlerPool.&lt;init&gt;(KafkaRequestHandler.scala:110)
at kafka.server.KafkaServer.startup(KafkaServer.scala:342)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:156)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:147)
at 
kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)

Build failed in Jenkins: kafka-trunk-jdk11 #1515

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10056; Ensure consumer metadata contains new topics on

[github] KAFKA-9501: convert between active and standby without closing stores

[github] KAFKA-9130; KIP-518 Allow listing consumer groups per state (#8238)

[github] KAFKA-10061; Fix flaky


--
[...truncated 1.73 MB...]
kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationOnControlPlane STARTED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationOnControlPlane PASSED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicDeletion 
STARTED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicDeletion 
PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicCreation 
STARTED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicCreation 
PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections STARTED

kafka.controller.ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange STARTED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange PASSED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationForOfflineReplicas STARTED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationForOfflineReplicas PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
STARTED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion PASSED


Build failed in Jenkins: kafka-2.2-jdk8 #46

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8104; Consumer cannot rejoin to the group after rebalancing


--
[...truncated 1.33 MB...]

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter PASSED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED


[jira] [Created] (KAFKA-10071) TopicCommand tool should make more efficient metadata calls to Kafka Servers

2020-05-29 Thread Vinoth Chandar (Jira)
Vinoth Chandar created KAFKA-10071:
--

 Summary: TopicCommand tool should make more efficient metadata 
calls to Kafka Servers
 Key: KAFKA-10071
 URL: https://issues.apache.org/jira/browse/KAFKA-10071
 Project: Kafka
  Issue Type: Improvement
Reporter: Vinoth Chandar
Assignee: Vinoth Chandar


This is a follow-up from the discussion of KAFKA-9945:

[https://github.com/apache/kafka/pull/8737] 

Today, alter, describe, and delete all pull down the entire topic list in order 
to support regex matching. We need to make these commands much more efficient 
(there is also the issue that the regex treats a period as a wildcard, so we may 
need two different switches going forward).
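The inefficiency and the period ambiguity described above can be sketched as follows: the tool fetches every topic name and filters client-side with a regex, so a literal dot in a topic argument also matches any character. Names here are illustrative, not the actual TopicCommand code:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch of the client-side filtering discussed above.
public class TopicRegexFilter {
    // The tool effectively lists ALL topic names, then filters by regex.
    static List<String> matchTopics(List<String> allTopics, String topicArg) {
        Pattern p = Pattern.compile(topicArg);
        return allTopics.stream()
                .filter(t -> p.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // A literal topic name containing a period also matches other topics,
        // because '.' is a regex wildcard -- the ambiguity noted above.
        System.out.println(
                matchTopics(List.of("my.topic", "myxtopic", "other"), "my.topic"));
    }
}
```

Under this sketch, asking for `my.topic` returns both `my.topic` and `myxtopic`, which is why separate switches for literal names versus patterns may be needed.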



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk8 #4589

2020-05-29 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.6-jdk8 #5

2020-05-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.5-jdk8 #138

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-10056; Ensure consumer metadata contains new topics on


--
[...truncated 5.91 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] 

Build failed in Jenkins: kafka-2.3-jdk8 #208

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-10056; Ensure consumer metadata contains new topics on


--
[...truncated 2.83 MB...]

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.AdminClientIntegrationTest > testDescribeConfigsForTopic STARTED

kafka.api.AdminClientIntegrationTest > testDescribeConfigsForTopic PASSED

kafka.api.AdminClientIntegrationTest > testConsumerGroups STARTED

kafka.api.AdminClientIntegrationTest > testConsumerGroups PASSED

kafka.api.AdminClientIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException STARTED

kafka.api.AdminClientIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark 
STARTED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark 
PASSED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization 
STARTED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization 
PASSED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark 
STARTED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark 
PASSED

kafka.cluster.PartitionTest > 
testMakeLeaderDoesNotUpdateEpochCacheForOldFormats STARTED

kafka.cluster.PartitionTest > 
testMakeLeaderDoesNotUpdateEpochCacheForOldFormats PASSED

kafka.cluster.PartitionTest > testReadRecordEpochValidationForLeader STARTED

kafka.cluster.PartitionTest > testReadRecordEpochValidationForLeader PASSED

kafka.cluster.PartitionTest > 
testFetchOffsetForTimestampEpochValidationForFollower STARTED

kafka.cluster.PartitionTest > 
testFetchOffsetForTimestampEpochValidationForFollower PASSED

kafka.cluster.PartitionTest > testListOffsetIsolationLevels STARTED

kafka.cluster.PartitionTest > testListOffsetIsolationLevels PASSED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset 
STARTED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset 
PASSED

kafka.cluster.PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch STARTED

kafka.cluster.PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch PASSED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower 
STARTED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower 
PASSED

kafka.cluster.PartitionTest > testMonotonicOffsetsAfterLeaderChange STARTED

kafka.cluster.PartitionTest > testMonotonicOffsetsAfterLeaderChange PASSED

kafka.cluster.PartitionTest > testMakeFollowerWithNoLeaderIdChange STARTED

kafka.cluster.PartitionTest > testMakeFollowerWithNoLeaderIdChange PASSED

kafka.cluster.PartitionTest > 
testAppendRecordsToFollowerWithNoReplicaThrowsException STARTED

kafka.cluster.PartitionTest > 
testAppendRecordsToFollowerWithNoReplicaThrowsException PASSED

kafka.cluster.PartitionTest > 
testFollowerDoesNotJoinISRUntilCaughtUpToOffsetWithinCurrentLeaderEpoch STARTED

kafka.cluster.PartitionTest > 
testFollowerDoesNotJoinISRUntilCaughtUpToOffsetWithinCurrentLeaderEpoch PASSED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForLeader 
STARTED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForLeader 
PASSED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForLeader 
STARTED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForLeader PASSED

kafka.cluster.PartitionTest > testAtMinIsr STARTED

kafka.cluster.PartitionTest > testAtMinIsr PASSED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForFollower 
STARTED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForFollower 
PASSED

kafka.cluster.PartitionTest > testDelayedFetchAfterAppendRecords STARTED

kafka.cluster.PartitionTest 

[jira] [Resolved] (KAFKA-9494) Include data type of the config in ConfigEntry

2020-05-29 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-9494.
---
Fix Version/s: 2.6.0
 Reviewer: Rajini Sivaram
   Resolution: Fixed

> Include data type of the config in ConfigEntry
> --
>
> Key: KAFKA-9494
> URL: https://issues.apache.org/jira/browse/KAFKA-9494
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Reporter: Shailesh Panwar
>Priority: Minor
> Fix For: 2.6.0
>
>
> Why this request?
> To provide better validation. Including the data type can significantly 
> improve validation on the client side (be it web, CLI, or any other client). 
> In the absence of `type`, the only way to know whether a user-specified value 
> is correct is to make an `alter` call and check that it returns no error.
>  
>  
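A sketch of the client-side validation this enables: once each config's type is exposed alongside its value, a bad value can be rejected locally before any `alterConfigs` round trip. The `CONFIG_TYPES` table below is illustrative, not Kafka's actual schema, and the validator is a hypothetical helper, not Admin client API.

```python
# Hypothetical type table, as a typed ConfigEntry would expose per key.
CONFIG_TYPES = {
    "retention.ms": "LONG",
    "cleanup.policy": "LIST",
    "min.insync.replicas": "INT",
    "unclean.leader.election.enable": "BOOLEAN",
}

def validate(name, value):
    """Check a config value against its declared type, without any broker call."""
    ctype = CONFIG_TYPES.get(name)
    if ctype is None:
        return False, f"unknown config: {name}"
    if ctype in ("LONG", "INT"):
        try:
            int(value)
        except ValueError:
            return False, f"{name} expects {ctype}, got {value!r}"
    elif ctype == "BOOLEAN" and value not in ("true", "false"):
        return False, f"{name} expects true/false, got {value!r}"
    return True, "ok"

print(validate("retention.ms", "86400000"))  # (True, 'ok')
print(validate("retention.ms", "one-day"))   # rejected locally, no alter call
```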





Build failed in Jenkins: kafka-trunk-jdk8 #4588

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10056; Ensure consumer metadata contains new topics on

[github] KAFKA-9501: convert between active and standby without closing stores


--
[...truncated 6.25 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources 

Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-29 Thread Kowshik Prakasam
Hi Randall,

We have to remove KIP-584 from the release plan, as this item will not be
completed for the 2.6 release (although the KIP is accepted). We plan to
include it in a future release.


Cheers,
Kowshik


On Fri, May 29, 2020 at 11:43 AM Maulin Vasavada 
wrote:

> Hi Randall Hauch
>
> Can we add KIP-519 to 2.6? It was merged to Trunk already in April -
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> .
>
> Thanks
> Maulin
>
> On Fri, May 29, 2020 at 11:01 AM Randall Hauch  wrote:
>
> > Here's an update on the AK 2.6.0 release.
> >
> > Feature freeze was Wednesday, and the release plan [1] has been updated to
> > reflect all of the KIPs that made the release. We've also cut the `2.6`
> > branch that we'll use for the release; see separate email announcing the
> > new branch.
> >
> > The next important date for the 2.6.0 release is CODE FREEZE on JUNE 10,
> > and until that date all bug fixes are still welcome on the release
> branch.
> > But after that, only blocker bugs can be merged to the release branch.
> >
> > If you have any questions or concerns, please contact me or (better yet)
> > reply to this thread.
> >
> > Thanks, and best regards!
> >
> > Randall
> >
> > [1] AK 2.6.0 Release Plan:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> >
> >
> > On Wed, May 27, 2020 at 5:53 PM Matthias J. Sax 
> wrote:
> >
> > > Thanks Randall!
> > >
> > > I added missing KIP-594.
> > >
> > >
> > > For the postponed KIP section: I removed KIP-441 and KIP-444 as both
> are
> > > completed.
> > >
> > >
> > > -Matthias
> > >
> > > On 5/27/20 2:31 PM, Randall Hauch wrote:
> > > > Hey everyone, just a quick update on the 2.6.0 release.
> > > >
> > > > Based on the release plan (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ),
> > > > today (May 27) is feature freeze. Any major feature work that is not
> > > > already complete will need to push out to the next release (either
> 2.7
> > or
> > > > 3.0). There are a few PRs for KIPs that are nearing completion, and
> > we're
> > > > having some Jenkins build issues. I will send another email later
> today
> > > or
> > > > early tomorrow with an update, and I plan to cut the release branch
> > > shortly
> > > > thereafter.
> > > >
> > > > I have also updated the list of planned KIPs on the release plan
> page (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ),
> > > > and I've moved to the "Postponed" table any KIP that looks like it is
> > not
> > > > going to be complete today. If any KIP is in the wrong table, please
> > let
> > > me
> > > > know.
> > > >
> > > > If you have any questions or concerns, please feel free to reply to
> > this
> > > > thread.
> > > >
> > > > Thanks, and best regards!
> > > >
> > > > Randall
> > > >
> > > > On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman <
> > sop...@confluent.io
> > > >
> > > > wrote:
> > > >
> > > >> Hey Randall,
> > > >>
> > > >> Can you also add KIP-613 which was accepted yesterday?
> > > >>
> > > >> Thanks!
> > > >> Sophie
> > > >>
> > > >> On Wed, May 20, 2020 at 6:47 AM Randall Hauch 
> > wrote:
> > > >>
> > > >>> Hi, Tom. I saw last night that the KIP had enough votes before
> > today’s
> > > >>> deadline and I will add it to the roadmap today. Thanks for driving
> > > this!
> > > >>>
> > > >>> On Wed, May 20, 2020 at 6:18 AM Tom Bentley 
> > > wrote:
> > > >>>
> > >  Hi Randall,
> > > 
> > >  Can we add KIP-585? (I'm not quite sure of the protocol here, but
> > > >> thought
> > >  it better to ask than to just add it myself).
> > > 
> > >  Thanks,
> > > 
> > >  Tom
> > > 
> > >  On Tue, May 5, 2020 at 6:54 PM Randall Hauch 
> > > >> wrote:
> > > 
> > > > Greetings!
> > > >
> > > > I'd like to volunteer to be release manager for the next
> time-based
> > >  feature
> > > > release which will be 2.6.0. I've published a release plan at
> > > >
> > > 
> > > >>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > > ,
> > > > and have included all of the KIPs that are currently approved or
> > > >>> actively
> > > > in discussion (though I'm happy to adjust as necessary).
> > > >
> > > > To stay on our time-based cadence, the KIP freeze is on May 20
> > with a
> > > > target release date of June 24.
> > > >
> > > > Let me know if there are any objections.
> > > >
> > > > Thanks,
> > > > Randall Hauch
> > > >
> > > 
> > > >>>
> > > >>
> > > >
> > >
> > >
> >
>


Build failed in Jenkins: kafka-2.6-jdk8 #4

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9501: convert between active and standby without closing stores

[jason] KAFKA-9130; KIP-518 Allow listing consumer groups per state (#8238)

[jason] KAFKA-10061; Fix flaky


--
[...truncated 3.12 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

[jira] [Resolved] (KAFKA-10061) Flaky Test `ReassignPartitionsIntegrationTest.testCancellation`

2020-05-29 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10061.
-
Fix Version/s: 2.6.0
   Resolution: Fixed

> Flaky Test `ReassignPartitionsIntegrationTest.testCancellation`
> 
>
> Key: KAFKA-10061
> URL: https://issues.apache.org/jira/browse/KAFKA-10061
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.6.0
>
>
> We have seen this a few times:
> {code}
> org.scalatest.exceptions.TestFailedException: Timed out waiting for 
> verifyAssignment result VerifyAssignmentResult(Map(foo-0 -> 
> PartitionReassignmentState(List(0, 1, 3, 2),List(0, 1, 3),false), baz-1 -> 
> PartitionReassignmentState(List(0, 2, 3, 1),List(0, 2, 
> 3),false)),true,Map(),false).  The latest result was 
> VerifyAssignmentResult(Map(foo-0 -> PartitionReassignmentState(ArrayBuffer(0, 
> 1, 3),List(0, 1, 3),true), baz-1 -> PartitionReassignmentState(ArrayBuffer(0, 
> 2, 3),List(0, 2, 3),true)),false,HashMap(),false)
> {code}
> It looks like the reassignment is completing earlier than the test expects. 
> See the following from the log:
> {code}
> Successfully started partition reassignments for baz-1,foo-0
> ==> verifyAssignment(adminClient, 
> jsonString={"version":1,"partitions":[{"topic":"foo","partition":0,"replicas":[0,1,3],"log_dirs":["any","any","any"]},{"topic":"baz","partition":1,"replicas":[0,2,3],"log_dirs":["any","any","any"]}]})
> Status of partition reassignment:
> Reassignment of partition baz-1 is still in progress.
> Reassignment of partition foo-0 is complete.
> {code}
> A successful run looks like this:
> {code}
> Successfully started partition reassignments for baz-1,foo-0
> ==> verifyAssignment(adminClient, 
> jsonString={"version":1,"partitions":[{"topic":"foo","partition":0,"replicas":[0,1,3],"log_dirs":["any","any","any"]},{"topic":"baz","partition":1,"replicas":[0,2,3],"log_dirs":["any","any","any"]}]})
> Status of partition reassignment:
> Reassignment of partition baz-1 is still in progress.
> Reassignment of partition foo-0 is still in progress.
> {code}
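The race above is the classic deflaking case: the test asserts on an intermediate "still in progress" snapshot that can legally disappear before the check runs. One common remedy (sketched below as a generic pattern, not the actual fix in the PR) is to poll for the terminal state instead of asserting an intermediate one. `wait_until` is a hypothetical helper.

```python
import time

def wait_until(condition, timeout_s=5.0, interval_s=0.05):
    """Poll `condition` until it returns truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return condition()  # one last check at the deadline

# Simulated reassignment that completes after a few polls: asserting an
# intermediate "in progress" snapshot would race; waiting for the
# terminal state does not.
polls = {"n": 0}
def reassignment_done():
    polls["n"] += 1
    return polls["n"] >= 3

assert wait_until(reassignment_done)
```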





[jira] [Resolved] (KAFKA-9130) Allow listing consumer groups per state

2020-05-29 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9130.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Allow listing consumer groups per state
> ---
>
> Key: KAFKA-9130
> URL: https://issues.apache.org/jira/browse/KAFKA-9130
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 2.6.0
>
>
> Ticket for KIP-518: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-518%3A+Allow+listing+consumer+groups+per+state
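The behaviour KIP-518 adds can be sketched as a state filter applied when listing groups, so clients no longer need to describe every group just to find, say, the Empty ones (in the Java client this surfaces as `ListConsumerGroupsOptions.inStates`). The group names and state strings below are illustrative only.

```python
# Toy registry of consumer groups and their current states.
GROUPS = {
    "billing": "Stable",
    "audit": "Empty",
    "etl": "PreparingRebalance",
    "legacy": "Empty",
}

def list_consumer_groups(states=None):
    """Return group names, optionally restricted to the given states."""
    if not states:
        return sorted(GROUPS)
    wanted = set(states)
    return sorted(g for g, s in GROUPS.items() if s in wanted)

print(list_consumer_groups())            # all groups
print(list_consumer_groups({"Empty"}))   # ['audit', 'legacy']
```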





Build failed in Jenkins: kafka-trunk-jdk14 #144

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10056; Ensure consumer metadata contains new topics on


--
[...truncated 3.14 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos 

Build failed in Jenkins: kafka-trunk-jdk11 #1514

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Log the reason for coordinator discovery failure (#8747)


--
[...truncated 8.64 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString 

[jira] [Created] (KAFKA-10070) Parameterize Connect unit tests to remove code duplication

2020-05-29 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-10070:
-

 Summary: Parameterize Connect unit tests to remove code duplication
 Key: KAFKA-10070
 URL: https://issues.apache.org/jira/browse/KAFKA-10070
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Chris Egerton
Assignee: Konstantine Karantasis


[https://github.com/apache/kafka/pull/8722] added two new unit tests, 
{{WorkerWithTopicCreationTest}} and {{WorkerSourceTaskWithTopicCreationTest}} 
that are almost entirely clones of the existing {{WorkerTest}} and 
{{WorkerSourceTaskTest}} unit tests. There are a couple of comments 
([1|https://github.com/apache/kafka/blob/fe948d39e54cf2d57e9b9e0e2a203890dc8ce86d/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerWithTopicCreationTest.java#L149],
 
[2|https://github.com/apache/kafka/blob/fe948d39e54cf2d57e9b9e0e2a203890dc8ce86d/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerSourceTaskWithTopicCreationTest.java#L162])
 in them about plans to parameterize the tests; we should do this sooner rather 
than later to avoid expending developer time on copy+pasting code changes 
across two tests when making modifications to the {{Worker}} and/or 
{{WorkerSourceTask}} classes.
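
The tests in question are Java/JUnit, but the parameterization being proposed
(one shared test body driven by a list of configurations instead of cloned
test classes) can be sketched with Python's stdlib `unittest.subTest`; the
class and config names below are illustrative, not the actual Connect test
code:

```python
import unittest

class WorkerTestSketch(unittest.TestCase):
    # Each entry stands in for a Worker configuration that previously
    # required its own cloned test class.
    CONFIGS = (
        {"topic.creation.enable": "true"},
        {"topic.creation.enable": "false"},
    )

    def test_worker_config(self):
        for config in self.CONFIGS:
            # subTest runs the same body once per configuration and
            # reports each failure separately, replacing the clones.
            with self.subTest(config=config):
                # A real test would start a Worker with this config;
                # here we only check the parameter is well-formed.
                self.assertIn(config["topic.creation.enable"],
                              ("true", "false"))
```

On the Java side, JUnit 4's `Parameterized` runner plays the same role of
feeding one test class a list of configurations.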



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-29 Thread Maulin Vasavada
Hi Randall Hauch

Can we add KIP-519 to 2.6? It was merged to Trunk already in April -
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952.

Thanks
Maulin

On Fri, May 29, 2020 at 11:01 AM Randall Hauch  wrote:

> Here's an update on the AK 2.6.0 release.
>
> Code freeze was Wednesday, and the release plan [1] has been updated to
> reflect all of the KIPs that made the release. We've also cut the `2.6`
> branch that we'll use for the release; see separate email announcing the
> new branch.
>
> The next important date for the 2.6.0 release is CODE FREEZE on JUNE 10,
> and until that date all bug fixes are still welcome on the release branch.
> But after that, only blocker bugs can be merged to the release branch.
>
> If you have any questions or concerns, please contact me or (better yet)
> reply to this thread.
>
> Thanks, and best regards!
>
> Randall
>
> [1] AK 2.6.0 Release Plan:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
>
>
> On Wed, May 27, 2020 at 5:53 PM Matthias J. Sax  wrote:
>
> > Thanks Randall!
> >
> > I added missing KIP-594.
> >
> >
> > For the postponed KIP section: I removed KIP-441 and KIP-444 as both are
> > completed.
> >
> >
> > -Matthias
> >
> > On 5/27/20 2:31 PM, Randall Hauch wrote:
> > > Hey everyone, just a quick update on the 2.6.0 release.
> > >
> > > Based on the release plan (
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > ),
> > > today (May 27) is feature freeze. Any major feature work that is not
> > > already complete will need to push out to the next release (either 2.7
> or
> > > 3.0). There are a few PRs for KIPs that are nearing completion, and
> we're
> > > having some Jenkins build issues. I will send another email later today
> > or
> > > early tomorrow with an update, and I plan to cut the release branch
> > shortly
> > > thereafter.
> > >
> > > I have also updated the list of planned KIPs on the release plan page (
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > ),
> > > and I've moved to the "Postponed" table any KIP that looks like it is
> not
> > > going to be complete today. If any KIP is in the wrong table, please
> let
> > me
> > > know.
> > >
> > > If you have any questions or concerns, please feel free to reply to
> this
> > > thread.
> > >
> > > Thanks, and best regards!
> > >
> > > Randall
> > >
> > > On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman <
> sop...@confluent.io
> > >
> > > wrote:
> > >
> > >> Hey Randall,
> > >>
> > >> Can you also add KIP-613 which was accepted yesterday?
> > >>
> > >> Thanks!
> > >> Sophie
> > >>
> > >> On Wed, May 20, 2020 at 6:47 AM Randall Hauch 
> wrote:
> > >>
> > >>> Hi, Tom. I saw last night that the KIP had enough votes before
> today’s
> > >>> deadline and I will add it to the roadmap today. Thanks for driving
> > this!
> > >>>
> > >>> On Wed, May 20, 2020 at 6:18 AM Tom Bentley 
> > wrote:
> > >>>
> >  Hi Randall,
> > 
> >  Can we add KIP-585? (I'm not quite sure of the protocol here, but
> > >> thought
> >  it better to ask than to just add it myself).
> > 
> >  Thanks,
> > 
> >  Tom
> > 
> >  On Tue, May 5, 2020 at 6:54 PM Randall Hauch 
> > >> wrote:
> > 
> > > Greetings!
> > >
> > > I'd like to volunteer to be release manager for the next time-based
> >  feature
> > > release which will be 2.6.0. I've published a release plan at
> > >
> > 
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ,
> > > and have included all of the KIPs that are currently approved or
> > >>> actively
> > > in discussion (though I'm happy to adjust as necessary).
> > >
> > > To stay on our time-based cadence, the KIP freeze is on May 20
> with a
> > > target release date of June 24.
> > >
> > > Let me know if there are any objections.
> > >
> > > Thanks,
> > > Randall Hauch
> > >
> > 
> > >>>
> > >>
> > >
> >
> >
>


Re: [DISCUSS] 2.5.1 Bug Fix Release

2020-05-29 Thread Israel Ekpo
Thanks John for managing the release!

On Fri, May 29, 2020 at 11:56 AM John Roesler  wrote:

> Hello all,
>
> I'd like to volunteer as release manager for the 2.5.1 bugfix release.
>
> Kafka 2.5.0 was released on 15 April 2020, and 40 issues have been fixed
> since then.
>
> The release plan is documented here:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.5.1
>
> Thanks,
> -John
>


Re: [DISCUSS] 2.5.1 Bug Fix Release

2020-05-29 Thread Bill Bejeck
Thanks for volunteering John, +1.

On Fri, May 29, 2020 at 1:58 PM Ismael Juma  wrote:

> Thanks for volunteering! +1
>
> Ismael
>
> On Fri, May 29, 2020 at 8:56 AM John Roesler  wrote:
>
> > Hello all,
> >
> > I'd like to volunteer as release manager for the 2.5.1 bugfix release.
> >
> > Kafka 2.5.0 was released on 15 April 2020, and 40 issues have been fixed
> > since then.
> >
> > The release plan is documented here:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.5.1
> >
> > Thanks,
> > -John
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #4587

2020-05-29 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-29 Thread Randall Hauch
Here's an update on the AK 2.6.0 release.

Code freeze was Wednesday, and the release plan [1] has been updated to
reflect all of the KIPs that made the release. We've also cut the `2.6`
branch that we'll use for the release; see separate email announcing the
new branch.

The next important date for the 2.6.0 release is CODE FREEZE on JUNE 10,
and until that date all bug fixes are still welcome on the release branch.
But after that, only blocker bugs can be merged to the release branch.

If you have any questions or concerns, please contact me or (better yet)
reply to this thread.

Thanks, and best regards!

Randall

[1] AK 2.6.0 Release Plan:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430


On Wed, May 27, 2020 at 5:53 PM Matthias J. Sax  wrote:

> Thanks Randall!
>
> I added missing KIP-594.
>
>
> For the postponed KIP section: I removed KIP-441 and KIP-444 as both are
> completed.
>
>
> -Matthias
>
> On 5/27/20 2:31 PM, Randall Hauch wrote:
> > Hey everyone, just a quick update on the 2.6.0 release.
> >
> > Based on the release plan (
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> ),
> > today (May 27) is feature freeze. Any major feature work that is not
> > already complete will need to push out to the next release (either 2.7 or
> > 3.0). There are a few PRs for KIPs that are nearing completion, and we're
> > having some Jenkins build issues. I will send another email later today
> or
> > early tomorrow with an update, and I plan to cut the release branch
> shortly
> > thereafter.
> >
> > I have also updated the list of planned KIPs on the release plan page (
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> ),
> > and I've moved to the "Postponed" table any KIP that looks like it is not
> > going to be complete today. If any KIP is in the wrong table, please let
> me
> > know.
> >
> > If you have any questions or concerns, please feel free to reply to this
> > thread.
> >
> > Thanks, and best regards!
> >
> > Randall
> >
> > On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman  >
> > wrote:
> >
> >> Hey Randall,
> >>
> >> Can you also add KIP-613 which was accepted yesterday?
> >>
> >> Thanks!
> >> Sophie
> >>
> >> On Wed, May 20, 2020 at 6:47 AM Randall Hauch  wrote:
> >>
> >>> Hi, Tom. I saw last night that the KIP had enough votes before today’s
> >>> deadline and I will add it to the roadmap today. Thanks for driving
> this!
> >>>
> >>> On Wed, May 20, 2020 at 6:18 AM Tom Bentley 
> wrote:
> >>>
>  Hi Randall,
> 
>  Can we add KIP-585? (I'm not quite sure of the protocol here, but
> >> thought
>  it better to ask than to just add it myself).
> 
>  Thanks,
> 
>  Tom
> 
>  On Tue, May 5, 2020 at 6:54 PM Randall Hauch 
> >> wrote:
> 
> > Greetings!
> >
> > I'd like to volunteer to be release manager for the next time-based
>  feature
> > release which will be 2.6.0. I've published a release plan at
> >
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > ,
> > and have included all of the KIPs that are currently approved or
> >>> actively
> > in discussion (though I'm happy to adjust as necessary).
> >
> > To stay on our time-based cadence, the KIP freeze is on May 20 with a
> > target release date of June 24.
> >
> > Let me know if there are any objections.
> >
> > Thanks,
> > Randall Hauch
> >
> 
> >>>
> >>
> >
>
>


Re: [DISCUSS] 2.5.1 Bug Fix Release

2020-05-29 Thread Ismael Juma
Thanks for volunteering! +1

Ismael

On Fri, May 29, 2020 at 8:56 AM John Roesler  wrote:

> Hello all,
>
> I'd like to volunteer as release manager for the 2.5.1 bugfix release.
>
> Kafka 2.5.0 was released on 15 April 2020, and 40 issues have been fixed
> since then.
>
> The release plan is documented here:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.5.1
>
> Thanks,
> -John
>


Jenkins build is back to normal : kafka-trunk-jdk14 #143

2020-05-29 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.5-jdk8 #137

2020-05-29 Thread Apache Jenkins Server
See 




Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-05-29 Thread Jun Rao
Hi, David, Anna,

Thanks for the response. Sorry for the late reply.

10. Regarding exposing rate or usage as quota. Your argument is that usage
is not very accurate anyway and is harder to implement. So, let's just be a
bit loose and expose rate. I am sort of neutral on that. (1) It seems to me
that overall usage will be relatively more accurate than rate. All the
issues that Anna brought up seem to exist for rate too. Rate has the
additional problem that the cost of each request may not be uniform. (2) In
terms of implementation, a usage based approach requires tracking the user
info through the life cycle of an operation. However, as you mentioned,
things like topic creation can generate additional
LeaderAndIsr/UpdateMetadata requests. Longer term, we probably want to
associate those cost to the user who initiated the operation. If we do
that, we sort of need to track the user for the full life cycle of the
processing of an operation anyway. (3) If you prefer rate strongly, I don't
have strong objections. However, I do feel that the new quota name should
be able to cover all controller related cost longer term. This KIP
currently only covers topic creation/deletion. It would not be ideal if in
the future, we have to add yet another type of quota for some other
controller related operations. The quota name in the KIP has partition
mutation. In the future, if we allow things like topic renaming, it may not
be related to partition mutation directly and it would be trickier to fit
those operations in the current quota. So, maybe something more general like
controller_operations_quota will be more future proof.

11. Regarding the difference between the token bucket algorithm and our
current quota mechanism. That's a good observation. It seems that we can
make a slight change to our current quota mechanism to achieve a similar
thing. As you said, the issue was that we allocate all 7 * 80 requests in
the last 1 sec window in our current mechanism. This is a bit unintuitive
since each sec only has a quota capacity of 5. An alternative way is to
allocate the 7 * 80 requests to all previous windows, each up to the
remaining quota capacity left. So, you will fill the current 1 sec window
and all previous ones, each with 5. Then, it seems this will give the same
behavior as token bucket? The reason that I keep asking this is that from
an operational perspective, it's simpler if all types of quotas work in the
same way.
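
For reference, the token bucket being compared with the windowed quota can be
sketched in a few lines of Python (a minimal sketch; `rate` and `burst` are
illustrative names, not KIP-599 configuration names):

```python
class TokenBucket:
    """Minimal token bucket: refill at `rate` tokens/sec, cap at `burst`.

    A request costing `cost` tokens is admitted only if enough tokens
    remain, which is what lets a short burst (e.g. the 7 * 80 mutations
    above) drain the bucket at once and then be paced at the steady rate.
    """

    def __init__(self, rate, burst, now=0.0):
        self.rate = rate      # steady-state tokens per second
        self.burst = burst    # maximum accumulated tokens
        self.tokens = burst   # the bucket starts full
        self.last = now       # timestamp of the last refill

    def try_acquire(self, cost, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

The sliding-window variant sketched above (spreading the burst over previous
windows, each up to its remaining capacity) would, if correct, yield the same
admit/reject decisions, which is the equivalence being asked about.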

Jun

On Tue, May 26, 2020 at 9:52 AM David Jacot  wrote:

> Hi folks,
>
> I have updated the KIP. As mentioned by Jun, I have made the
> quota per principal/clientid similarly to the other quotas. I have
> also explained how this will work in conjunction with the auto
> topics creation.
>
> Please, take a look at it. I plan to call a vote for it in the next few
> days if there are no comments in this thread.
>
> Best,
> David
>
> On Wed, May 13, 2020 at 10:57 AM Tom Bentley  wrote:
>
> > Hi David,
> >
> > Thanks for the explanation and confirmation that evolving the APIs is not
> > off the table in the longer term.
> >
> > Kind regards,
> >
> > Tom
> >
>


Build failed in Jenkins: kafka-2.6-jdk8 #3

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[vvcephei] MINOR: regression test for task assignor config (#8743)

[vvcephei] MINOR: Relax Percentiles test (#8748)


--
[...truncated 3.13 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED


[DISCUSS] 2.5.1 Bug Fix Release

2020-05-29 Thread John Roesler
Hello all,

I'd like to volunteer as release manager for the 2.5.1 bugfix release.

Kafka 2.5.0 was released on 15 April 2020, and 40 issues have been fixed since 
then.

The release plan is documented here:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.5.1

Thanks,
-John


[jira] [Created] (KAFKA-10069) The user-defined "predicate" and "negate" are not removed from Transformation

2020-05-29 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10069:
--

 Summary: The user-defined "predicate" and "negate" are not removed 
from Transformation
 Key: KAFKA-10069
 URL: https://issues.apache.org/jira/browse/KAFKA-10069
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


There are official ConfigDef entries for both "predicate" and "negate", so the 
user-defined ones should be removed. However, the current behavior does an 
incorrect comparison, so the duplicate key corrupts the embedded ConfigDef that 
follows.





[VOTE] KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-05-29 Thread David Jacot
Hi folks,

I'd like to start the vote for KIP-599 which proposes a new quota to
throttle create topic, create partition, and delete topics operations to
protect the Kafka controller:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations

Please, let me know what you think.

Cheers,
David


Kafka _Querys

2020-05-29 Thread Csk Raju
Hi Kafka team,

We have now deployed Kafka successfully in our environment, but we are stuck
on the questions below; any assistance would be appreciated.

1) What are the distinct consumer group names currently consuming messages
from the same topic?
2) What is the total number of messages consumed from a topic by a given
consumer group over a given time interval (from/to date-time)?
3) For a given consumer group and topic, how many new messages arrived in the
topic after the previously committed offset position?
(Example: a consumer application is down and an admin wants to know how many
new messages arrived after that specific consuming app went down.)
4) How do we explicitly move the offset to a different position for a given
topic and consumer group?
5) How do we replay messages from a failure topic to a replay topic?
6) How do we monitor a topic, i.e. alert when a new message arrives or when a
threshold of 5 or 10 new messages is reached?

Hoping your response clarifies all of the questions above :)

Thanks
Sudhir Raju
+919666432888
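
Regarding question 3 above: once the group's committed offsets and the topic's
current end offsets are known (the `kafka-consumer-groups.sh --describe` tool
reports both, as CURRENT-OFFSET and LOG-END-OFFSET), the count of new messages
is a per-partition subtraction. A minimal sketch, with an illustrative
function name and plain dicts standing in for the tool's output:

```python
def new_messages_since_commit(end_offsets, committed_offsets):
    """Per-partition count of records that arrived after the last commit.

    Both arguments map partition -> offset; a partition with no committed
    offset is counted from offset 0. This difference is the LAG column
    that `kafka-consumer-groups.sh --describe` computes.
    """
    return {
        partition: end - committed_offsets.get(partition, 0)
        for partition, end in end_offsets.items()
    }
```

For question 4, the same CLI offers `--reset-offsets` on a stopped group to
move a group's position explicitly.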


[jira] [Created] (KAFKA-10068) Verify HighAvailabilityTaskAssignor performance with large clusters and topologies

2020-05-29 Thread John Roesler (Jira)
John Roesler created KAFKA-10068:


 Summary: Verify HighAvailabilityTaskAssignor performance with 
large clusters and topologies
 Key: KAFKA-10068
 URL: https://issues.apache.org/jira/browse/KAFKA-10068
 Project: Kafka
  Issue Type: Task
  Components: streams
Affects Versions: 2.6.0
Reporter: John Roesler


While reviewing [https://github.com/apache/kafka/pull/8668/files], I realized 
that we should have a similar test to make sure that the new task assignor 
completes well within the default assignment deadline. 30 seconds is a good 
upper bound.





Build failed in Jenkins: kafka-trunk-jdk14 #142

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10029; Don't update completedReceives when channels are closed to


--
[...truncated 6.28 MB...]

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #4586

2020-05-29 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10029; Don't update completedReceives when channels are closed to


--
[...truncated 3.13 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTimeDeprecated[Eos enabled 

[jira] [Created] (KAFKA-10067) Provide API to detect cluster nodes version

2020-05-29 Thread Yanming Zhou (Jira)
Yanming Zhou created KAFKA-10067:


 Summary: Provide API to detect cluster nodes version
 Key: KAFKA-10067
 URL: https://issues.apache.org/jira/browse/KAFKA-10067
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.5.0
Reporter: Yanming Zhou


{code:java}
try (AdminClient ac = AdminClient.create(conf)) {
    for (Node node : ac.describeCluster().nodes().get()) {
        System.out.println(node.host());
        System.out.println(node.version()); // missing feature
    }
}
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Vote] KIP-569: Update DescribeConfigsResponse to include additional metadata information

2020-05-29 Thread Shailesh Panwar
 Based on the PR feedback, I have updated the KIP

to include the update needed in the AbstractConfig class. Please let me know if
there are any questions/concerns.

Shailesh



On Fri, Mar 20, 2020 at 8:49 AM Shailesh Panwar 
wrote:

> I have 3 +1s and 1 +1 (non-binding) vote for this KIP. Thank you all for
> the feedback. I'll start working on the PR.
>
> Thanks
> Shailesh
>
>
> On Fri, Mar 20, 2020 at 8:35 AM Brian Byrne  wrote:
>
>> +1 (non-binding) - thanks!
>>
>> My only suggestion would be to make the enum-to-int conversion explicit
>> for
>> the new ConfigType, with a surrounding comment, to ensure that no
>> accidental reordering occurs and for easier readability should the response
>> message be read.
>>
>> Brian
>>
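Brian's suggestion above (making the enum-to-int conversion explicit for the new ConfigType) can be sketched as follows. This is a minimal illustration only: the constant names and id values here are hypothetical, not the KIP's actual schema.

```java
// Sketch only: constants and ids are hypothetical. Each type carries an
// explicit wire id, so reordering the enum constants cannot silently change
// the value serialized into the response message.
public class ConfigTypeDemo {
    enum ConfigType {
        UNKNOWN((byte) 0),
        BOOLEAN((byte) 1),
        STRING((byte) 2),
        INT((byte) 3);

        private final byte id;

        ConfigType(byte id) {
            this.id = id;
        }

        byte id() {
            return id;
        }

        // Reverse lookup when reading the response; unknown ids degrade safely.
        static ConfigType forId(byte id) {
            for (ConfigType t : values()) {
                if (t.id == id) {
                    return t;
                }
            }
            return UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(ConfigType.STRING.id());     // 2
        System.out.println(ConfigType.forId((byte) 1)); // BOOLEAN
    }
}
```

Because the id is stored explicitly, inserting or reordering constants later leaves existing wire values stable.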
>> On Fri, Mar 20, 2020 at 8:13 AM David Arthur  wrote:
>>
>> > +1 binding. Thanks for the KIP 
>> >
>> > -David
>> >
>> > On Tue, Mar 17, 2020 at 4:44 AM Rajini Sivaram > >
>> > wrote:
>> >
>> > > Hi Shailesh,
>> > >
>> > > +1 (binding)
>> > >
>> > > Thanks for the KIP!
>> > >
>> > > Regards,
>> > >
>> > > Rajini
>> > >
>> > >
>> > > On Tue, Mar 10, 2020 at 2:37 AM Gwen Shapira 
>> wrote:
>> > >
>> > > > +1
>> > > > Looks great. Thanks for the proposal, Shailesh.
>> > > >
>> > > > Gwen Shapira
>> > > > Engineering Manager | Confluent
>> > > > 650.450.2760 | @gwenshap
>> > > > Follow us: Twitter | blog
>> > > >
>> > > > On Mon, Mar 09, 2020 at 6:00 AM, Shailesh Panwar <
>> > span...@confluent.io
>> > > >
>> > > > wrote:
>> > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > > Hi All,
>> > > > > I would like to start a vote on KIP-569: Update
>> > > > > DescribeConfigsResponse to include additional metadata information
>> > > > >
>> > > > >
>> > > > >
>> > > > > The KIP is here:
>> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-569%3A+DescribeConfigsResponse+-+Update+the+schema+to+include+additional+metadata+information+of+the+field
>> > > > >
>> > > > >
>> > > > >
>> > > > > Thanks,
>> > > > > Shailesh
>> > > > >
>> > > > >
>> > > > >
>> > >
>> >
>> >
>> > --
>> > David Arthur
>> >
>>
>


Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-05-29 Thread feyman2009
Hi, team
I updated KIP-571 since we took a slightly different implementation
in the PR https://github.com/apache/kafka/pull/8589, basically:
In RemoveMembersFromConsumerGroupOptions, we leverage an empty members
collection, rather than introducing a new field, to imply the removeAll scenario.
Please let me know if you have any concerns, thanks a lot!

Feyman
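A minimal stdlib sketch of the semantics described above. The class and method here are a hypothetical model, not the actual AdminClient code: an empty member set in the options implies the removeAll scenario, while a non-empty set removes only the named members.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical model of the KIP-571 request semantics: empty set => remove
// every current member; non-empty set => remove only the named members.
public class RemoveMembersModel {
    static List<String> membersToRemove(Set<String> requested, List<String> currentMembers) {
        if (requested.isEmpty()) {
            return new ArrayList<>(currentMembers); // the removeAll scenario
        }
        List<String> result = new ArrayList<>();
        for (String member : currentMembers) {
            if (requested.contains(member)) {
                result.add(member);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> current = List.of("member-1", "member-2", "member-3");
        System.out.println(membersToRemove(Set.of(), current));           // all three
        System.out.println(membersToRemove(Set.of("member-2"), current)); // [member-2]
    }
}
```

Encoding removeAll as "empty members" avoids a new request field, at the cost of making the empty collection a special case callers must be aware of.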


--
From: feyman2009 
Sent: Monday, April 13, 2020, 08:47
To: dev 
Subject: Re: [Vote] KIP-571: Add option to force remove members in
StreamsResetter

Thanks, John and Guozhang!
--
From: Guozhang Wang 
Sent: Saturday, April 11, 2020, 03:07
To: dev 
Subject: Re: [Vote] KIP-571: Add option to force remove members in
StreamsResetter

Thanks Feyman,

I've looked at the update that you incorporated from Matthias and that LGTM
too. I'm still +1 :)

Guozhang

On Fri, Apr 10, 2020 at 11:18 AM John Roesler  wrote:

> Hey Feyman,
>
> Just to remove any ambiguity, I've been casually following the discussion,
> I've just looked at the KIP document again, and I'm still +1 (binding).
>
> Thanks,
> -John
>
> On Fri, Apr 10, 2020, at 01:44, feyman2009 wrote:
> > Hi, all
> > KIP-571 has already collected 4 binding +1s (John, Guozhang, Bill,
> > Matthias) and 3 non-binding +1s (Boyang, Sophie), so I will mark it as
> > approved and create a PR shortly.
> > Thanks!
> >
> > Feyman
> > --
> > From: feyman2009 
> > Sent: Wednesday, April 8, 2020, 14:21
> > To: dev ; Boyang Chen 
> > Subject: Re: [Vote] KIP-571: Add option to force remove
> > members in StreamsResetter
> >
> > Hi Boyang,
> > Thanks for reminding me of that!
> > I'm not sure about the convention; I thought we would need to
> > re-collect votes if the KIP has changed.
> > Let's leave the vote thread here for 2 days; if there is no objection, I
> > will take it as approved and update the PR accordingly.
> >
> > Thanks!
> > Feyman
> >
> >
> >
> > --
> > From: Boyang Chen 
> > Sent: Wednesday, April 8, 2020, 12:42
> > To: dev ; feyman2009 
> > Subject: Re: [Vote] KIP-571: Add option to force remove
> > members in StreamsResetter
> >
> > You should already have enough votes if I'm counting correctly
> > (Guozhang, John, Matthias)
> > On Tue, Apr 7, 2020 at 6:59 PM feyman2009
> >  wrote:
> > Hi, Boyang
> >  I think Matthias's proposal makes sense, but we can use the admin
> > tool for this scenario, as Boyang mentioned, or follow up later, so I
> > prefer to keep this KIP unchanged to minimize the scope.
> >  Calling for vote ~
> >
> >  Thanks!
> >  Feyman
> >
> >  --
> >  From: Boyang Chen 
> >  Sent: Wednesday, April 8, 2020, 02:15
> >  To: dev 
> >  Subject: Re: [Vote] KIP-571: Add option to force remove
> > members in StreamsResetter
> >
> >  Hey Feyman,
> >
> >  I think Matthias' suggestion is optional, and we could just use admin
> tool
> >  to remove single static members as well.
> >
> >  Boyang
> >
> >  On Tue, Apr 7, 2020 at 11:00 AM Matthias J. Sax 
> wrote:
> >
> >  > > Would you mind to elaborate why we still need that
> >  >
> >  > Sure.
> >  >
> >  > For static membership, the session timeout is usually set quite high.
> >  > This makes scaling in an application tricky: if you shut down one
> >  > instance, no rebalance would happen until `session.timeout.ms` hits.
> >  > This is specific to Kafka Streams, because when a Kafka Streams
> > client is
> >  > closed, it does _not_ send a `LeaveGroupRequest`. Hence, the
> >  > corresponding partitions would not be processed for a long time and
> >  > thus, fall behind.
> >  >
> >  > Given that each instance will have a unique `instance.id` provided by
> >  > the user, we could allow users to remove the instance they want to
> >  > decommission from the consumer group without the need to wait for
> >  > `session.timeout.ms`.
> >  >
> >  > Hence, it's not an application reset scenario for which one wants to
> >  > remove all members, but a scaling-in scenario. For dynamic
> > membership,
> >  > this issue usually does not occur because the `session.timeout.ms` is
> >  > set to a fairly low value and a rebalance would happen quickly after
> > an
> >  > instance is decommissioned.
> >  >
> >  > Does this make sense?
> >  >
> >  > As said before, we may or may not include this in this KIP. It's up
> > to
> >  > you if you want to address it or not.
> >  >
> >  >
> >  > -Matthias
> >  >
> >  >
> >  >
> >  > On 4/7/20 7:12 AM, feyman2009 wrote:
> >  > > Hi, Matthias
> >  > > Thanks a lot!
> >  > > So you do not plan so support removing a _single static_
> > member via
> >  > `StreamsResetter`?
> >  > > =>
> >  > > Would you mind to elaborate why we still need that if we
> > are
> >  > able to batch remove active 

Re: [VOTE] KIP-518: Allow listing consumer groups per state

2020-05-29 Thread Mickael Maison
Quick update:
While implementing the KIP, we made a couple of changes:
- Use regular fields instead of flexible fields. We had to bump the
protocol version anyway to detect compatibility, so flexible fields were not
helping much for this low-volume API.
- If no states are specified in the request, return all groups instead
of nothing. This allows clients to not necessarily know all the state
names but still retrieve them all.

More details about these 2 points can be found in the PR discussion:
https://github.com/apache/kafka/pull/8238
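The second point above can be modeled with a small stdlib sketch. The class and method names are illustrative, not the actual broker code: if the request names no states, the server returns every group rather than none.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical model of the server-side filtering rule: an empty set of
// requested states means "no filter", so all groups are returned.
public class GroupStateFilter {
    static Map<String, String> filter(Map<String, String> groupToState, Set<String> states) {
        if (states.isEmpty()) {
            return groupToState; // no states requested => return all groups
        }
        Map<String, String> matched = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : groupToState.entrySet()) {
            if (states.contains(e.getValue())) {
                matched.put(e.getKey(), e.getValue());
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        Map<String, String> groups = new LinkedHashMap<>();
        groups.put("group-a", "Stable");
        groups.put("group-b", "Empty");
        System.out.println(filter(groups, Set.of()).size());           // 2
        System.out.println(filter(groups, Set.of("Stable")).keySet()); // [group-a]
    }
}
```

This default lets a client that does not know every state name still retrieve the full group list.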

On Mon, Mar 16, 2020 at 10:39 AM Mickael Maison
 wrote:
>
> Hi all,
>
> The vote has passed with 3 binding votes (Gwen, Colin and Rajini) and
> 5 non-binding votes (Kevin, Ryanne, Dongjin, Tom and David).
> Thanks everybody
>
> On Fri, Mar 13, 2020 at 11:09 AM Rajini Sivaram  
> wrote:
> >
> > +1 (binding)
> >
> > Thanks for the KIP, Mickael!
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Thu, Mar 12, 2020 at 11:06 PM Colin McCabe  wrote:
> >
> > > Thanks, Mickael.  +1 (binding)
> > >
> > > best,
> > > Colin
> > >
> > > On Fri, Mar 6, 2020, at 02:05, Mickael Maison wrote:
> > > > Thanks David and Gwen for the votes
> > > > Colin, I believe I've answered all your questions, can you take another
> > > look?
> > > >
> > > > So far we have 1 binding and 5 non binding votes.
> > > >
> > > > On Mon, Mar 2, 2020 at 4:56 PM Gwen Shapira  wrote:
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > > Gwen Shapira
> > > > > Engineering Manager | Confluent
> > > > > 650.450.2760 | @gwenshap
> > > > > Follow us: Twitter | blog
> > > > >
> > > > > On Mon, Mar 02, 2020 at 8:32 AM, David Jacot < dja...@confluent.io >
> > > wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > +1 (non-binding). Thanks for the KIP!
> > > > > >
> > > > > >
> > > > > >
> > > > > > David
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Thu, Feb 6, 2020 at 10:45 PM Colin McCabe < cmcc...@apache.org > wrote:
> > > > > >
> > > > > >
> > > > > >>
> > > > > >>
> > > > > >> Hi Mickael,
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> Thanks for the KIP. I left a comment on the DISCUSS thread as well.
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> best,
> > > > > >> Colin
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> On Thu, Feb 6, 2020, at 08:58, Mickael Maison wrote:
> > > > > >>
> > > > > >>
> > > > > >>>
> > > > > >>>
> > > > > >>> Hi Manikumar,
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> I believe I've answered David's comments in the DISCUSS thread.
> > > Thanks
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> On Wed, Jan 15, 2020 at 10:15 AM Manikumar < manikumar.re...@gmail.com >
> > > > > >>>
> > > > > >>>
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> wrote:
> > > > > >>
> > > > > >>
> > > > > >>>
> > > > > 
> > > > > 
> > > > >  Hi Mickael,
> > > > > 
> > > > > 
> > > > > 
> > > > >  Thanks for the KIP. Can you respond to the comments from David on
> > > > > 
> > > > > 
> > > > > >>>
> > > > > >>>
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> discuss
> > > > > >>
> > > > > >>
> > > > > >>>
> > > > > 
> > > > > 
> > > > >  thread?
> > > > > 
> > > > > 
> > > > > 
> > > > >  Thanks,
> > > > > 
> > > > > 
> > > > > >>>
> > > > > >>>
> > > > > >>
> > > > > >>
> > > > > >
> > > > > >
> > > > > >
> > > >
> > >


[jira] [Resolved] (KAFKA-10029) Selector.completedReceives should not be modified when channel is closed

2020-05-29 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-10029.

  Reviewer: Ismael Juma
Resolution: Fixed

> Selector.completedReceives should not be modified when channel is closed
> 
>
> Key: KAFKA-10029
> URL: https://issues.apache.org/jira/browse/KAFKA-10029
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 2.5.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.6.0, 2.5.1
>
>
> Selector.completedReceives are processed using `forEach` by SocketServer and 
> NetworkClient when processing receives from a poll. Since we may close 
> channels while processing receives, changes to the map while closing channels 
> can result in ConcurrentModificationException. We clear the entire map after 
> each poll anyway, so we don't need to remove the channel from the map while
> closing channels.
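The failure mode and the fix described above can be reproduced with plain java.util collections. This is a minimal sketch, not the actual Selector code:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

// Minimal reproduction: removing a map entry inside forEach throws
// ConcurrentModificationException, while clearing the whole map after the
// iteration (as the fix does, once per poll) is safe.
public class CompletedReceivesDemo {
    // Returns true if removing entries during forEach threw CME.
    static boolean removeDuringForEach(Map<String, String> completedReceives) {
        try {
            completedReceives.forEach((channel, receive) ->
                    completedReceives.remove(channel)); // "close channel" mid-iteration
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Map<String, String> completedReceives = new HashMap<>();
        completedReceives.put("channel-1", "receive-1");
        completedReceives.put("channel-2", "receive-2");

        System.out.println(removeDuringForEach(completedReceives)); // true

        completedReceives.clear(); // the safe pattern: clear after the poll
        System.out.println(completedReceives.isEmpty());            // true
    }
}
```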



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10066) TopologyTestDriver.createOutputTopic isn't taking record headers into account during deserialization

2020-05-29 Thread Stefaan Dutry (Jira)
Stefaan Dutry created KAFKA-10066:
-

 Summary: TopologyTestDriver.createOutputTopic isn't taking record 
headers into account during deserialization
 Key: KAFKA-10066
 URL: https://issues.apache.org/jira/browse/KAFKA-10066
 Project: Kafka
  Issue Type: Bug
  Components: streams-test-utils
Affects Versions: 2.5.0
Reporter: Stefaan Dutry


When testing a Kafka Streams topology, we need TopologyTestDriver.createOutputTopic to
take record headers into account.

Is it possible to use the record headers during deserialization when using
TopologyTestDriver.createOutputTopic?

The only thing that needs to change is:
{code:java}
final K key = keyDeserializer.deserialize(record.topic(), record.key());
final V value = valueDeserializer.deserialize(record.topic(), record.value());{code}
into:
{code:java}
final K key = keyDeserializer.deserialize(record.topic(), record.headers(), record.key());
final V value = valueDeserializer.deserialize(record.topic(), record.headers(), record.value());{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSSION] KIP-619: Add internal topic creation support

2020-05-29 Thread Cheng Tan
Hello developers,


I’m proposing KIP-619 to add internal topic creation support. 

Kafka and its upstream applications treat internal topics differently from 
non-internal topics. For example:

• Kafka handles topic creation response errors differently for internal 
topics
• Internal topic partitions cannot be added to a transaction
• Internal topic records cannot be deleted
• Appending to internal topics might get rejected
• ……

Clients and upstream applications may define their own internal topics. For
example, Kafka Connect defines `connect-configs`, `connect-offsets`, and
`connect-statuses`. Clients fetch internal topics by sending a
MetadataRequest (ApiKeys.METADATA).

However, clients and upstream applications cannot register their own internal
topics with the servers. As a result, servers have no knowledge of
client-defined internal topics. They can only test whether a given topic is
internal by checking it against a static set of internal topic names,
consisting of `__consumer_offsets` and `__transaction_state`. Consequently,
MetadataRequest cannot provide any information about client-created
internal topics. 
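A sketch of the status quo described above. The class and method names are illustrative, not the actual broker code: the broker recognizes only its own two internal topics via a static set, so client-defined internal topics (such as Kafka Connect's) are invisible to this check.

```java
import java.util.Set;

// Illustrative model of the pain point: internal-topic detection is a lookup
// in a fixed set, with no way for clients to register additional names.
public class InternalTopicCheck {
    private static final Set<String> INTERNAL_TOPICS =
            Set.of("__consumer_offsets", "__transaction_state");

    static boolean isInternal(String topic) {
        return INTERNAL_TOPICS.contains(topic);
    }

    public static void main(String[] args) {
        System.out.println(isInternal("__consumer_offsets")); // true
        System.out.println(isInternal("connect-configs"));    // false: client-defined
    }
}
```

Under this model, `connect-configs` is treated like any ordinary topic even though Kafka Connect relies on it being internal, which is exactly what the KIP proposes to fix.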

To solve this pain point, I'm proposing support for clients to register and 
query their own internal topics. 

Please feel free to join the discussion. Thanks in advance.


Best,
- Cheng Tan

[jira] [Created] (KAFKA-10065) One of the Kafka broker processes in the cluster suddenly hangs and becomes unresponsive

2020-05-29 Thread akihiro kumabe (Jira)
akihiro kumabe created KAFKA-10065:
--

 Summary: One of the Kafka broker processes in the cluster suddenly 
hangs and becomes unresponsive
 Key: KAFKA-10065
 URL: https://issues.apache.org/jira/browse/KAFKA-10065
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: akihiro kumabe


Kafka version: 1.1.1


I have 3 brokers and 3 ZooKeeper nodes.
One of the Kafka broker processes in the cluster hung and became unresponsive
three times in the last month.
I investigated a Java thread dump. Although some threads such as the Kafka
Fetcher threads were parked, I could not find any deadlock.
Server resources and heap memory seemed fine, and the Kafka logs had no
suspicious output.
Many TCP connections remained in CLOSE_WAIT state.
The broker recovered after simply restarting the process.

Similar to KAFKA-5778, but it's unclear whether a version upgrade will solve it:
https://issues.apache.org/jira/browse/KAFKA-5778



--
This message was sent by Atlassian Jira
(v8.3.4#803005)