Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #681

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove unnecessary suppress (#10434)


--
[...truncated 3.69 MB...]
UserQuotaTest > testThrottledProducerConsumer() STARTED

UserQuotaTest > testThrottledProducerConsumer() PASSED

UserQuotaTest > testQuotaOverrideDelete() STARTED

UserQuotaTest > testQuotaOverrideDelete() PASSED

UserQuotaTest > testThrottledRequest() STARTED

UserQuotaTest > testThrottledRequest() PASSED

LinuxIoMetricsCollectorTest > testReadProcFile() STARTED

LinuxIoMetricsCollectorTest > testReadProcFile() PASSED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() STARTED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() PASSED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(), removing=List(), original=List(), isUnderReplicated=false STARTED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(), removing=List(), original=List(), isUnderReplicated=false PASSED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), adding=List(), removing=List(), original=List(), isUnderReplicated=true STARTED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), adding=List(), removing=List(), original=List(), isUnderReplicated=true PASSED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), isUnderReplicated=false STARTED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), isUnderReplicated=false PASSED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), isUnderReplicated=false STARTED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), isUnderReplicated=false PASSED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(), removing=List(102), original=List(101, 102, 103), isUnderReplicated=false STARTED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 103), adding=List(), removing=List(102), original=List(101, 102, 103), isUnderReplicated=false PASSED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), adding=List(101), removing=List(), original=List(102, 103), isUnderReplicated=false STARTED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), adding=List(101), removing=List(), original=List(102, 103), isUnderReplicated=false PASSED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), isUnderReplicated=false STARTED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), isUnderReplicated=false PASSED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), isUnderReplicated=false STARTED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), isUnderReplicated=false PASSED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), isUnderReplicated=true STARTED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), isUnderReplicated=true PASSED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() STARTED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() PASSED

PartitionTest > testIsrExpansion() STARTED

PartitionTest > testIsrExpansion() PASSED

PartitionTest > testReadRecordEpochValidationForLeader() STARTED

PartitionTest > testReadRecordEpochValidationForLeader() PASSED

PartitionTest > testAlterIsrUnknownTopic() STARTED

PartitionTest > testAlterIsrUnknownTopic() PASSED

PartitionTest > testIsrNotShrunkIfUpdateFails() STARTED

PartitionTest > testIsrNotShrunkIfUpdateFails() PASSED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() STARTED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() PASSED

PartitionTest > testIsrNotExpandedIfUpdateFails() STARTED

PartitionTest > testIsrNotExpandedIfUpdateFails() PASSED

PartitionTest > testLogConfigDirtyAsBrokerUpdated() STARTED

PartitionTest > testLogConfigDirtyAsBroker

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #620

2021-03-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #649

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove unnecessary suppress (#10434)


--
[...truncated 3.70 MB...]
LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV1Compressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV1Compressed() PASSED

LogValidatorTest > testInvalidTimestampExceptionHasBatchIndex() STARTED

LogValidatorTest > testInvalidTimestampExceptionHasBatchIndex() PASSED

LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1() STARTED

LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1() PASSED

LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2() STARTED

LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2() PASSED

LogValidatorTest > testControlRecordsNotAllowedFromClients() STARTED

LogValidatorTest > testControlRecordsNotAllowedFromClients() PASSED

LogValidatorTest > testRelativeOffsetAssignmentCompressedV1() STARTED

LogValidatorTest > testRelativeOffsetAssignmentCompressedV1() PASSED

LogValidatorTest > testRelativeOffsetAssignmentCompressedV2() STARTED

LogValidatorTest > testRelativeOffsetAssignmentCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed() PASSED

LogValidatorTest > testMisMatchMagic() STARTED

LogValidatorTest > testMisMatchMagic() PASSED

LogValidatorTest > testLogAppendTimeNonCompressedV1() STARTED

LogValidatorTest > testLogAppendTimeNonCompressedV1() PASSED

LogValidatorTest > testLogAppendTimeNonCompressedV2() STARTED

LogValidatorTest > testLogAppendTimeNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed() PASSED

LogValidatorTest > testControlRecordsNotCompressed() STARTED

LogValidatorTest > testControlRecordsNotCompressed() PASSED

LogValidatorTest > testInvalidCreateTimeNonCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeNonCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeNonCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeNonCompressedV2() PASSED

LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOnlyOneBatch() STARTED

LogValidatorTest > testOnlyOneBatch() PASSED

LogValidatorTest > testBatchWithInvalidRecordsAndInvalidTimestamp() STARTED

LogValidatorTest > testBatchWithInvalidRecordsAndInvalidTimestamp() PASSED

LogValidatorTest > testAllowMultiBatch() STARTED

LogValidatorTest > testAllowMultiBatch() PASSED

LogValidatorTest > testInvalidOffsetRangeAndRecordCount() STARTED

LogValidatorTest > testInvalidOffsetRangeAndRecordCount() PASSED

LogValidatorTest > testLogAppendTimeWithoutRecompressionV1() STARTED

LogValidatorTest > testLogAppendTimeWithoutRecompressionV1() PASSED

LogValidatorTest > testLogAppendTimeWithoutRecompressionV2() STARTED

LogValidatorTest > testLogAppendTimeWithoutRecompressionV2() PASSED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffset

[jira] [Created] (KAFKA-12587) Remove KafkaPrincipal#fromString

2021-03-30 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12587:
--

 Summary: Remove KafkaPrincipal#fromString
 Key: KAFKA-12587
 URL: https://issues.apache.org/jira/browse/KAFKA-12587
 Project: Kafka
  Issue Type: Task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #86

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[A. Sophie Blee-Goldman] HOTFIX: move rebalanceInProgress check to skip commit during handleCorrupted (#10444)


--
[...truncated 7.26 MB...]
ControllerIntegrationTest > testTopicCreation() STARTED

ControllerIntegrationTest > testTopicCreation() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithEnabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithEnabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testControllerMoveOnTopicDeletion() STARTED

ControllerIntegrationTest > testControllerMoveOnTopicDeletion() PASSED

ControllerIntegrationTest > testPartitionReassignment() STARTED

ControllerIntegrationTest > testPartitionReassignment() PASSED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerRestart() STARTED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerRestart() PASSED

ControllerIntegrationTest > testTopicPartitionExpansion() STARTED

ControllerIntegrationTest > testTopicPartitionExpansion() PASSED

ControllerIntegrationTest > testTopicIdsAreNotAdded() STARTED

ControllerIntegrationTest > testTopicIdsAreNotAdded() PASSED

ControllerIntegrationTest > testControllerMoveIncrementsControllerEpoch() STARTED

ControllerIntegrationTest > testControllerMoveIncrementsControllerEpoch() PASSED

ControllerIntegrationTest > testIdempotentAlterIsr() STARTED

ControllerIntegrationTest > testIdempotentAlterIsr() PASSED

ControllerIntegrationTest > testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled() STARTED

ControllerIntegrationTest > testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled() PASSED

ControllerIntegrationTest > testControllerMoveOnPartitionReassignment() STARTED

ControllerIntegrationTest > testControllerMoveOnPartitionReassignment() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithEnabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithEnabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testControllerMoveOnTopicCreation() STARTED

ControllerIntegrationTest > testControllerMoveOnTopicCreation() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithDisabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithDisabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch() STARTED

ControllerIntegrationTest > testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch() PASSED

ControllerIntegrationTest > testBackToBackPreferredReplicaLeaderElections() STARTED

ControllerIntegrationTest > testBackToBackPreferredReplicaLeaderElections() PASSED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerReelection() STARTED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerReelection() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode() PASSED

ControllerIntegrationTest > testEmptyCluster() STARTED

ControllerIntegrationTest > testEmptyCluster() PASSED

ControllerIntegrationTest > testControllerMoveOnPreferredReplicaElection() STARTED

ControllerIntegrationTest > testControllerMoveOnPreferredReplicaElection() PASSED

ControllerIntegrationTest > testPreferredReplicaLeaderElection() STARTED

ControllerIntegrationTest > testPreferredReplicaLeaderElection() PASSED

ControllerIntegrationTest > testMetadataPropagationOnBrokerChange() STARTED

ControllerIntegrationTest > testMetadataPropagationOnBrokerChange() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testMetadataPropagationForOfflineReplicas() STARTED

ControllerIntegrationTest > testMetadataPropagationForOfflineReplicas() PASSED

ControllerIntegrationTest > testTopicIdCreatedOnUpgrade() STARTED

ControllerIntegrationTest > testTopicIdCreatedOnUpgrade() PASSED

ControllerIntegrationTest > testTopicIdMigrationAndHandling() STARTED

ControllerIntegrationTest > testTopicIdMigrationAndHandling() PASSED

ControllerChannelManagerTest > testStopReplicaRequestWithAlreadyDefinedDeletedPartition() STARTED

ControllerChannelManagerTest > testStopReplicaRequestWithAlreadyDefinedDeleted

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #680

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: move rebalanceInProgress check to skip commit during handleCorrupted (#10444)


--
[...truncated 3.69 MB...]
ControllerIntegrationTest > testMetadataPropagationOnControlPlane() STARTED

ControllerIntegrationTest > testMetadataPropagationOnControlPlane() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithNonExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithNonExistingFeatureZNode() PASSED

ControllerIntegrationTest > testAutoPreferredReplicaLeaderElection() STARTED

ControllerIntegrationTest > testAutoPreferredReplicaLeaderElection() PASSED

ControllerIntegrationTest > testTopicCreation() STARTED

ControllerIntegrationTest > testTopicCreation() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithEnabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithEnabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testControllerMoveOnTopicDeletion() STARTED

ControllerIntegrationTest > testControllerMoveOnTopicDeletion() PASSED

ControllerIntegrationTest > testPartitionReassignment() STARTED

ControllerIntegrationTest > testPartitionReassignment() PASSED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerRestart() STARTED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerRestart() PASSED

ControllerIntegrationTest > testTopicPartitionExpansion() STARTED

ControllerIntegrationTest > testTopicPartitionExpansion() PASSED

ControllerIntegrationTest > testTopicIdsAreNotAdded() STARTED

ControllerIntegrationTest > testTopicIdsAreNotAdded() PASSED

ControllerIntegrationTest > testControllerMoveIncrementsControllerEpoch() STARTED

ControllerIntegrationTest > testControllerMoveIncrementsControllerEpoch() PASSED

ControllerIntegrationTest > testIdempotentAlterIsr() STARTED

ControllerIntegrationTest > testIdempotentAlterIsr() PASSED

ControllerIntegrationTest > testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled() STARTED

ControllerIntegrationTest > testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled() PASSED

ControllerIntegrationTest > testControllerMoveOnPartitionReassignment() STARTED

ControllerIntegrationTest > testControllerMoveOnPartitionReassignment() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithEnabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithEnabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testControllerMoveOnTopicCreation() STARTED

ControllerIntegrationTest > testControllerMoveOnTopicCreation() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithDisabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithDisabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch() STARTED

ControllerIntegrationTest > testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch() PASSED

ControllerIntegrationTest > testBackToBackPreferredReplicaLeaderElections() STARTED

ControllerIntegrationTest > testBackToBackPreferredReplicaLeaderElections() PASSED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerReelection() STARTED

ControllerIntegrationTest > testTopicIdPersistsThroughControllerReelection() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode() PASSED

ControllerIntegrationTest > testEmptyCluster() STARTED

ControllerIntegrationTest > testEmptyCluster() PASSED

ControllerIntegrationTest > testControllerMoveOnPreferredReplicaElection() STARTED

ControllerIntegrationTest > testControllerMoveOnPreferredReplicaElection() PASSED

ControllerIntegrationTest > testPreferredReplicaLeaderElection() STARTED

ControllerIntegrationTest > testPreferredReplicaLeaderElection() PASSED

ControllerIntegrationTest > testMetadataPropagationOnBrokerChange() STARTED

ControllerIntegrationTest > testMetadataPropagationOnBrokerChange() PASSED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode() STARTED

ControllerIntegrationTest > testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode() PASSED

ControllerIntegrationTest > testMetadataPropagationForOffli

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #648

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: move rebalanceInProgress check to skip commit during handleCorrupted (#10444)


--
[...truncated 3.69 MB...]
KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() STARTED

L

Re: [DISCUSS] KIP-726: Make the CooperativeStickyAssignor as the default assignor

2021-03-30 Thread Sophie Blee-Goldman
Ok, I'm still fleshing out all the details of KAFKA-12477, but I think we can simplify some things a bit and avoid any kind of "fail-fast" which will require user intervention. In fact, I think we can avoid requiring the user to make any changes at all for KIP-726, so we don't have to worry about whether they actually read our documentation:

Instead of making ["cooperative-sticky"] the default, we change the default to ["cooperative-sticky", "range"]. Since "range" is the old default, this is equivalent to the first rolling bounce of the safe upgrade path in KIP-429.
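(For illustration, the proposed default corresponds to what a user can already configure by hand today: listing both assignors in the consumer's `partition.assignment.strategy`, cooperative first, with "range" kept as the compatibility fallback. This is a sketch of that equivalent config, using the fully qualified assignor class names from the Apache Kafka clients library; it is not a committed change.)

```properties
# Equivalent explicit consumer config for the proposed ["cooperative-sticky", "range"] default.
# Keeping RangeAssignor in the list means a rolling bounce stays compatible with
# members that still only support "range", per the KIP-429 upgrade path.
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor,org.apache.kafka.clients.consumer.RangeAssignor
```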

Of course, this also means that under the current protocol selection mechanism we won't actually upgrade to cooperative rebalancing with the default assignor. But that's where KAFKA-12477 will come in.

@Guozhang Wang I'll get back to you with a concrete proposal and answer your questions; I just want to point out that it's possible to side-step the risk of users shooting themselves in the foot (well, at least in this one specific case; obviously they always find a way).

On Tue, Mar 30, 2021 at 10:37 AM Guozhang Wang  wrote:

> Hi Sophie,
>
> My question is more related to KAFKA-12477, but since your latest replies
> are on this thread I figured we can follow-up on the same venue. Just so I
> understand your latest comments above about the approach:
>
> * I think, we would need to persist this decision so that the group would
> never go back to the eager protocol, this bit would be written to the
> internal topic's assignment message. Is that correct?
> * Maybe you can describe the steps, after the group has decided to move
> forward with cooperative protocols, when:
> 1) a new member joins the group with the old version, and hence only
> recognizes the eager protocol and executes it in its first rebalance:
> what would happen?
> 2) in addition to 1), the new member joins the group with the old version,
> only recognizes the old subscription format, and is selected as the
> leader: what would happen?
>
> Guozhang
>
>
>
>
> On Mon, Mar 29, 2021 at 10:30 PM Luke Chen  wrote:
>
> > Hi Sophie & Ismael,
> > Thank you for your feedback.
> > No problem, let's pause this KIP and wait for this improvement: KAFKA-12477.
> >
> > Stay tuned :)
> >
> > Thank you.
> > Luke
> >
> > On Tue, Mar 30, 2021 at 3:14 AM Ismael Juma  wrote:
> >
> > > Hi Sophie,
> > >
> > > I didn't analyze the KIP in detail, but the two suggestions you mentioned
> > > sound like great improvements.
> > >
> > > A bit more context: breaking changes for a widely used product like Kafka
> > > are costly, and hence we try as hard as we can to avoid them. When it
> > > comes to the brokers, they are often managed by a central group (or
> > > they're in the Cloud), so they're a bit easier to manage. Even so, it's
> > > still possible to upgrade from 0.8.x directly to 2.7 since all protocol
> > > versions are still supported. When it comes to the basic clients
> > > (producer, consumer, admin client), they're often embedded in
> > > applications so we have to be even more conservative.
> > >
> > > Ismael
> > >
> > > On Mon, Mar 29, 2021 at 10:50 AM Sophie Blee-Goldman
> > >  wrote:
> > >
> > > > Ismael,
> > > >
> > > > It seems like given 3.0 is a breaking release, we have to rely on users
> > > > being aware of this and responsible enough to read the upgrade guide.
> > > > Otherwise we could never ever make any breaking changes beyond just
> > > > removing deprecated APIs or other compilation-breaking errors that
> > > > would be immediately visible, no?
> > > >
> > > > That said, obviously it's better to have a circuit-breaker that will
> > > > fail fast in case of a user misconfiguration rather than silently
> > > > corrupting the consumer group state -- eg for two consumers to overlap
> > > > in their ownership of the same partition(s). We could definitely
> > > > implement this, and now that I think about it this might solve a
> > > > related problem in KAFKA-12477. We just add a new field to the
> > > > Assignment in which the group leader indicates whether it's on a
> > > > recent enough version to understand cooperative rebalancing. If an
> > > > upgraded member joins the group, it'll only be allowed to start
> > > > following the new rebalancing protocol after receiving the go-ahead
> > > > from the group leader.
> > > >
> > > > If we do go ahead and add this new field in the Assignment then I'm
> > > > pretty confident we can reduce the number of required rolling bounces
> > > > to just one with KAFKA-12477. In that case we should be in much better
> > > > shape to feel good about changing the default to the
> > > > CooperativeStickyAssignor. How does that sound?
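The version-gating idea quoted above can be sketched as a toy model. All names below are hypothetical and do not correspond to actual Kafka protocol code; in the real proposal the flag would travel inside the serialized Assignment handed out by the group leader.

```python
# Toy sketch of leader-gated protocol selection (names are hypothetical,
# not the real Kafka classes): an upgraded member keeps using the eager
# protocol until the group leader signals it understands cooperative
# rebalancing, so mixed-version groups never run cooperative unilaterally.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Assignment:
    partitions: List[int]
    leader_supports_cooperative: bool = False  # hypothetical new field


def choose_protocol(assignment: Assignment, member_upgraded: bool) -> str:
    """A member switches to cooperative only if it is upgraded AND the
    leader has opted in via the new Assignment field."""
    if member_upgraded and assignment.leader_supports_cooperative:
        return "cooperative"
    return "eager"


# An upgraded member with an old leader stays on the eager protocol:
print(choose_protocol(Assignment([0, 1], leader_supports_cooperative=False), True))
# Once the leader opts in, the upgraded member may go cooperative:
print(choose_protocol(Assignment([0, 1], leader_supports_cooperative=True), True))
```

Under this sketch, an old-version member (which never sets `member_upgraded`) always computes "eager", which is why a single rolling bounce could suffice: the switch happens group-wide only after the leader itself has been upgraded.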
> > 

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #109

2021-03-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 3.18 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldRet

Re: Re: [DISCUSS] KIP-706: Add method "Producer#produce" to return CompletionStage instead of Future

2021-03-30 Thread Chia-Ping Tsai
Hi,

I have updated the KIP according to my latest response. I will start a vote 
thread next week if there are no more comments :)

Best Regards,
Chia-Ping

On 2021/01/31 05:39:17, Chia-Ping Tsai  wrote: 
> It seems to me that changing the input type might complicate the migration 
> from the deprecated send method to the new API.
> 
> Personally, I prefer to introduce an interface called “SendRecord” to replace 
> ProducerRecord. Hence, the new API/classes are shown below.
> 
> 1. CompletionStage send(SendRecord)
> 2. class ProducerRecord implement SendRecord
> 3. Introduce builder pattern for SendRecord
> 
> That brings the following benefits.
> 
> 1. Kafka users who use neither the return type nor the callback do not need 
> to change code even after we remove the deprecated send methods. (Of course, 
> they still need to compile their code against the new Kafka.)
> 
> 2. Kafka users who need a Future can easily migrate to the new API by regex 
> replacement. (Cast ProducerRecord to SendRecord and add toCompletableFuture.)
> 
> 3. It is easy to support topic ids in the future. We can add new methods to 
> the SendRecord builder. For example:
> 
> Builder topicName(String)
> Builder topicId(UUID)
> 
> 4. The builder pattern can make code more readable, especially since 
> ProducerRecord has many fields that users can define.
> —
> Chia-Ping
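To make the builder idea above concrete, here is a minimal, hypothetical sketch; the names (`SendRecord`, `SendRecordBuilder`, `topicName`, `topicId`) are illustrative only and not the final KIP-706 API:

```java
import java.util.UUID;

// Hypothetical interface replacing ProducerRecord as the send() input.
interface SendRecord {
    String topic();
    String key();
    String value();
}

// Builder sketch: topicName(...) today, topicId(...) later, per point 3 above.
final class SendRecordBuilder {
    private String topic;
    private UUID topicId;      // reserved for future topic-id support
    private String key;
    private String value;

    SendRecordBuilder topicName(String topic) { this.topic = topic; return this; }
    SendRecordBuilder topicId(UUID id) { this.topicId = id; return this; }
    SendRecordBuilder key(String key) { this.key = key; return this; }
    SendRecordBuilder value(String value) { this.value = value; return this; }

    SendRecord build() {
        final String t = topic, k = key, v = value;
        return new SendRecord() {
            public String topic() { return t; }
            public String key()   { return k; }
            public String value() { return v; }
        };
    }
}
```

A caller would then write `new SendRecordBuilder().topicName("events").key("k").value("v").build()` and pass the result to the new send method.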
> 
> On 2021/01/30 22:50:36 Ismael Juma wrote:
> > Another thing to think about: the consumer api currently has
> > `subscribe(String|Pattern)` and a number of methods that accept
> > `TopicPartition`. A similar approach could be used for the Consumer to work
> > with topic ids or topic names. The consumer side also has to support
> > regexes so it probably makes sense to have a separate interface.
> > 
> > Ismael
> > 
> > On Sat, Jan 30, 2021 at 2:40 PM Ismael Juma  wrote:
> > 
> > > I think this is a promising idea. I'd personally avoid the overload and
> > > simply have a `Topic` type that implements `SendTarget`. It's a mix of 
> > > both
> > > proposals: strongly typed, no overloads and general class names that
> > > implement `SendTarget`.
> > >
> > > Ismael
> > >
> > > On Sat, Jan 30, 2021 at 2:22 PM Jason Gustafson 
> > > wrote:
> > >
> > >> Giving this a little more thought, I imagine sending to a topic is the
> > >> most
> > >> common case, so maybe it's an overload worth having. Also, if 
> > >> `SendTarget`
> > >> is just a marker interface, we could let `TopicPartition` implement it
> > >> directly. Then we have:
> > >>
> > >> interface SendTarget;
> > >> class TopicPartition implements SendTarget;
> > >>
> > >> CompletionStage send(String topic, Record record);
> > >> CompletionStage send(SendTarget target, Record record);
> > >>
> > >> The `SendTarget` would give us a lot of flexibility in the future. It
> > >> would
> > >> give us a couple options for topic ids. We could either have an overload
> > >> of
> > >> `send` which accepts `Uuid`, or we could add a `TopicId` type which
> > >> implements `SendTarget`.
> > >>
> > >> -Jason
> > >>
> > >>
> > >> On Sat, Jan 30, 2021 at 1:11 PM Jason Gustafson 
> > >> wrote:
> > >>
> > >> > Yeah, good question. I guess we always tend to regret using lower-level
> > >> > types in these APIs. Perhaps there should be some kind of interface:
> > >> >
> > >> > interface SendTarget
> > >> > class TopicIdTarget implements SendTarget
> > >> > class TopicTarget implements SendTarget
> > >> > class TopicPartitionTarget implements SendTarget
> > >> >
> > >> > Then we just have:
> > >> >
> > >> > CompletionStage send(SendTarget target, Record record);
> > >> >
> > >> > Not sure if we could reuse `Record` in the consumer though. We do have
> > >> > some state in `ConsumerRecord` which is not present in `ProducerRecord`
> > >> > (e.g. offset). Perhaps we could provide a `Record` view from
> > >> > `ConsumerRecord` for convenience. That would be useful for use cases
> > >> which
> > >> > involve reading from one topic and writing to another.
> > >> >
> > >> > -Jason
> > >> >
> > >> > On Sat, Jan 30, 2021 at 12:29 PM Ismael Juma  wrote:
> > >> >
> > >> >> Interesting idea. A couple of things to consider:
> > >> >>
> > >> >> 1. Would we introduce the Message concept to the Consumer too? I think
> > >> >> that's what .NET does.
> > >> >> 2. If we eventually allow a send to a topic id instead of topic name,
> > >> >> would
> > >> >> that result in two additional overloads?
> > >> >>
> > >> >> Ismael
> > >> >>
> > >> >> On Sat, Jan 30, 2021 at 11:38 AM Jason Gustafson 
> > >> >> wrote:
> > >> >>
> > >> >> > For the sake of having another option to shoot down, we could take a
> > >> >> page
> > >> >> > from the .net client and separate the message data from the
> > >> destination
> > >> >> > (i.e. topic or partition). This would get around the need to use a
> > >> new
> > >> >> > verb. For example:
> > >> >> >
> > >> >> > CompletionStage send(String topic, Message message);
> > >> >> > CompletionStage send(TopicPartition topicPartition,
> > >> >> Message
> > >> >> > message);
> > >> >> >
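As a rough illustration of the marker-interface design discussed in this thread (all class names hypothetical, shown only to make the shape of the proposed API concrete):

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Marker interface: anything a record can be sent to.
interface SendTarget {}

final class Topic implements SendTarget {
    final String name;
    Topic(String name) { this.name = name; }
}

// A later addition could route by topic id instead of name.
final class TopicIdTarget implements SendTarget {
    final UUID id;
    TopicIdTarget(UUID id) { this.id = id; }
}

final class Record {
    final String key, value;
    Record(String key, String value) { this.key = key; this.value = value; }
}

// One send() signature covers topics, topic ids, and (if TopicPartition
// implemented SendTarget) partitions -- no overload per destination type.
final class SketchProducer {
    CompletionStage<String> send(SendTarget target, Record record) {
        String dest = (target instanceof Topic) ? ((Topic) target).name
                                                : target.toString();
        return CompletableFuture.completedFuture(dest + ":" + record.key);
    }
}
```

The design choice here is that adding a new destination kind means adding a new `SendTarget` implementation, not a new `send` overload.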

Re: [VOTE] KIP-707: The future of KafkaFuture

2021-03-30 Thread Chia-Ping Tsai
Thanks for this KIP. +1 (binding)

On 2021/03/29 15:34:55, Tom Bentley  wrote: 
> Hi,
> 
> I'd like to start a vote on KIP-707, which proposes to add
> KafkaFuture.toCompletionStage(), deprecate KafkaFuture.Function and make a
> couple of other minor cosmetic changes.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-707%3A+The+future+of+KafkaFuture
> 
> Many thanks,
> 
> Tom
> 
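For readers following the vote, a toy model of the core idea: `MiniKafkaFuture` below is a stand-in, not the real `org.apache.kafka.common.KafkaFuture`, but it shows the chaining style that a `toCompletionStage()` method is meant to enable:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Toy stand-in for KafkaFuture; the real class predates CompletableFuture
// and cannot simply extend it without breaking compatibility.
final class MiniKafkaFuture<T> {
    private final CompletableFuture<T> inner = new CompletableFuture<>();

    void complete(T value) { inner.complete(value); }

    // The KIP-707 shape: expose a read-only CompletionStage view so callers
    // can use modern combinators (thenApply, thenCompose, ...) instead of
    // KafkaFuture.Function, which the KIP deprecates.
    CompletionStage<T> toCompletionStage() { return inner.minimalCompletionStage(); }
}
```

A caller would then write e.g. `f.toCompletionStage().thenApply(String::toUpperCase)` rather than implementing `KafkaFuture.Function`.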


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #619

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12288: remove task-level filesystem locks (#10342)

[github] KAFKA-12571: Eliminate LeaderEpochFileCache constructor dependency on 
logEndOffset (#10426)


--
[...truncated 3.67 MB...]
LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #647

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12288: remove task-level filesystem locks (#10342)

[github] KAFKA-12571: Eliminate LeaderEpochFileCache constructor dependency on 
logEndOffset (#10426)


--
[...truncated 3.69 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAcl

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #679

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12288: remove task-level filesystem locks (#10342)

[github] KAFKA-12571: Eliminate LeaderEpochFileCache constructor dependency on 
logEndOffset (#10426)


--
[...truncated 3.69 MB...]
UserQuotaTest > testThrottledProducerConsumer() STARTED

UserQuotaTest > testThrottledProducerConsumer() PASSED

UserQuotaTest > testQuotaOverrideDelete() STARTED

UserQuotaTest > testQuotaOverrideDelete() PASSED

UserQuotaTest > testThrottledRequest() STARTED

UserQuotaTest > testThrottledRequest() PASSED

LinuxIoMetricsCollectorTest > testReadProcFile() STARTED

LinuxIoMetricsCollectorTest > testReadProcFile() PASSED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() STARTED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() PASSED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
STARTED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
PASSED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true STARTED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true PASSED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true STARTED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true PASSED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() STARTED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() PASSED

PartitionTest > testIsrExpansion() STARTED

PartitionTest > testIsrExpansion() PASSED

PartitionTest > testReadRecordEpochValidationForLeader() STARTED

PartitionTest > testReadRecordEpochValidationForLeader() PASSED

PartitionTest > testAlterIsrUnknownTopic() STARTED

PartitionTest > testAlterIsrUnknownTopic() PASSED

PartitionTest > testIsrNotShrunkIfUpdateFails() STARTED

PartitionTest > testIsrNotShrunkIfUpdateFails() PASSED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() STARTED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() PASSED

PartitionTest > testIsrNotExpandedIfUpdateFails() STARTED

PartitionTest > testIsrNotExpandedIfUpdateFai

[jira] [Created] (KAFKA-12586) Admin API for DescribeTransactions

2021-03-30 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12586:
---

 Summary: Admin API for DescribeTransactions
 Key: KAFKA-12586
 URL: https://issues.apache.org/jira/browse/KAFKA-12586
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Add the Admin API for DescribeTransactions documented on KIP-664: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-664%3A+Provide+tooling+to+detect+and+abort+hanging+transactions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #618

2021-03-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #646

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix newly added client side timeout tests in 
`KafkaAdminClientTest` (#10398)


--
[...truncated 3.69 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shou

[jira] [Resolved] (KAFKA-12571) Eliminate LeaderEpochFileCache constructor dependency on LogEndOffset

2021-03-30 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-12571.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

merged the PR to trunk

> Eliminate LeaderEpochFileCache constructor dependency on LogEndOffset
> -
>
> Key: KAFKA-12571
> URL: https://issues.apache.org/jira/browse/KAFKA-12571
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Kowshik Prakasam
>Assignee: Kowshik Prakasam
>Priority: Major
> Fix For: 3.0.0
>
>
> *This is a precursor to KAFKA-12553.*
> Before refactoring the recovery logic (KAFKA-12553), we would like to move 
> the logic to [initialize 
> LeaderEpochFileCache|https://github.com/apache/kafka/blob/d9bb2ef596343da9402bff4903b129cff1f7c22b/core/src/main/scala/kafka/log/Log.scala#L579]
>  outside the Log class.  Once we suitably initialize LeaderEpochFileCache 
> outside Log, we will be able to pass it as a dependency into both the Log class 
> constructor and the recovery module. However, the LeaderEpochFileCache 
> constructor takes a 
> [dependency|https://github.com/apache/kafka/blob/d9bb2ef596343da9402bff4903b129cff1f7c22b/core/src/main/scala/kafka/server/epoch/LeaderEpochFileCache.scala#L42]
>  on logEndOffset (via a callback). This dependency prevents the instantiation 
> of LeaderEpochFileCache outside Log class.
> This situation blocks the recovery logic (KAFKA-12553) refactor. So this 
> constructor dependency on logEndOffset needs to be eliminated.
> It turns out the logEndOffset dependency is used only in 1 of the 
> LeaderEpochFileCache methods: LeaderEpochFileCache.endOffsetFor, and just for 
> 1 particular [case|#L201]. Therefore, it should be possible to modify this so 
> that we only pass the logEndOffset as a parameter into endOffsetFor 
> whenever the method is called.
>  
>  
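The refactor described above can be sketched as a before/after (illustrative Java; the actual code is Scala in kafka.server.epoch.LeaderEpochFileCache, and these class names are hypothetical):

```java
import java.util.function.LongSupplier;

// Before: the cache captures a logEndOffset callback at construction time,
// so it cannot be created until the Log (and its end offset) exists.
final class EpochCacheBefore {
    private final LongSupplier logEndOffset;
    EpochCacheBefore(LongSupplier logEndOffset) { this.logEndOffset = logEndOffset; }
    // Only endOffsetFor ever consulted the callback.
    long endOffsetFor(int requestedEpoch) { return logEndOffset.getAsLong(); }
}

// After: no constructor dependency on the Log; callers pass the current log
// end offset into the one method that needs it, so the cache can be
// instantiated outside the Log class and shared with the recovery module.
final class EpochCacheAfter {
    long endOffsetFor(int requestedEpoch, long logEndOffset) { return logEndOffset; }
}
```

Moving the dependency from the constructor to the call site is what unblocks constructing the cache before the Log in the KAFKA-12553 recovery refactor.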



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #678

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix newly added client side timeout tests in 
`KafkaAdminClientTest` (#10398)


--
[...truncated 3.69 MB...]

ConsumerTopicCreationTest > [3] brokerAutoTopicCreationEnable=false, 
consumerAllowAutoCreateTopics=true PASSED

ConsumerTopicCreationTest > [4] brokerAutoTopicCreationEnable=false, 
consumerAllowAutoCreateTopics=false STARTED

ConsumerTopicCreationTest > [4] brokerAutoTopicCreationEnable=false, 
consumerAllowAutoCreateTopics=false PASSED

ConsumerTopicCreationTest > [1] brokerAutoTopicCreationEnable=true, 
consumerAllowAutoCreateTopics=true STARTED

ConsumerTopicCreationTest > [1] brokerAutoTopicCreationEnable=true, 
consumerAllowAutoCreateTopics=true PASSED

ConsumerTopicCreationTest > [2] brokerAutoTopicCreationEnable=true, 
consumerAllowAutoCreateTopics=false STARTED

ConsumerTopicCreationTest > [2] brokerAutoTopicCreationEnable=true, 
consumerAllowAutoCreateTopics=false PASSED

ConsumerTopicCreationTest > [3] brokerAutoTopicCreationEnable=false, 
consumerAllowAutoCreateTopics=true STARTED

ConsumerTopicCreationTest > [3] brokerAutoTopicCreationEnable=false, 
consumerAllowAutoCreateTopics=true PASSED

ConsumerTopicCreationTest > [4] brokerAutoTopicCreationEnable=false, 
consumerAllowAutoCreateTopics=false STARTED

ConsumerTopicCreationTest > [4] brokerAutoTopicCreationEnable=false, 
consumerAllowAutoCreateTopics=false PASSED

MetricsTest > testMetrics() STARTED

MetricsTest > testMetrics() PASSED

PlaintextEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe() STARTED

PlaintextEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe() PASSED

PlaintextEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls() 
STARTED

PlaintextEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls() PASSED

PlaintextEndToEndAuthorizationTest > testProduceConsumeViaAssign() STARTED

PlaintextEndToEndAuthorizationTest > testProduceConsumeViaAssign() PASSED

PlaintextEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign() 
STARTED

PlaintextEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign() 
PASSED

PlaintextEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() STARTED

PlaintextEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() PASSED

PlaintextEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls() 
STARTED

PlaintextEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls() PASSED

PlaintextEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe() 
STARTED

PlaintextEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe() 
PASSED

PlaintextEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaAssign() 
STARTED

PlaintextEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaAssign() 
PASSED

PlaintextEndToEndAuthorizationTest > testNoGroupAcl() STARTED

PlaintextEndToEndAuthorizationTest > testNoGroupAcl() PASSED

PlaintextEndToEndAuthorizationTest > testNoProduceWithDescribeAcl() STARTED

PlaintextEndToEndAuthorizationTest > testNoProduceWithDescribeAcl() PASSED

PlaintextEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() STARTED

PlaintextEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() PASSED

PlaintextEndToEndAuthorizationTest > testProduceConsumeViaSubscribe() STARTED

PlaintextEndToEndAuthorizationTest > testProduceConsumeViaSubscribe() PASSED

PlaintextEndToEndAuthorizationTest > testListenerName() STARTED

PlaintextEndToEndAuthorizationTest > testListenerName() PASSED

SslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaSubscribe() 
STARTED

SslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaSubscribe() 
PASSED

SslEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls() STARTED

SslEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls() PASSED

SslEndToEndAuthorizationTest > testProduceConsumeViaAssign() STARTED

SslEndToEndAuthorizationTest > testProduceConsumeViaAssign() PASSED

SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign() STARTED

SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign() PASSED

SslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() STARTED

SslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() PASSED

SslEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls() STARTED

SslEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls() PASSED

SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe() 
STARTED

SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe() PASSED

SslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaAssign() 
STARTED

SslEndToEndAuthorizationTest

[jira] [Created] (KAFKA-12585) FencedInstanceIdException can cause heartbeat thread to never be closed

2021-03-30 Thread Brian Hawkins (Jira)
Brian Hawkins created KAFKA-12585:
-

 Summary: FencedInstanceIdException can cause heartbeat thread to 
never be closed
 Key: KAFKA-12585
 URL: https://issues.apache.org/jira/browse/KAFKA-12585
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.7.0, 2.5.1
Reporter: Brian Hawkins


The bug has been there since static consumers were introduced.

The problem is entirely within AbstractCoordinator.java.

If a FencedInstanceIdException is thrown and onFailure (line 1406) is called 
by a thread other than the heartbeat thread, this will occur.

In the onFailure callback, heartbeatThread.failed is set and the 
heartbeatThread is disabled, but the actual thread is waiting on line 1350 
(AbstractCoordinator.this.wait()).

Sometime later, pollHeartbeat is called (line 316). The check for hasFailed 
is true, so it sets heartbeatThread = null without waking the waiting 
thread, and now the thread will never be closed.
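The failure mode described is a classic lost-wakeup: a thread parked in wait() stays parked forever if the owner merely discards its reference instead of notifying (or interrupting) it under the same monitor. A minimal, self-contained sketch of the pattern (hypothetical names; this is not the actual AbstractCoordinator code):

```java
// Sketch of the lost-wakeup pattern from the report: a "heartbeat" thread
// blocks in wait(); closing it correctly requires a notify (or interrupt)
// under the same monitor before the reference is dropped.
public class LostWakeupSketch {
    private final Object lock = new Object();
    private volatile boolean enabled = false;

    boolean runAndClose() {
        Thread heartbeat = new Thread(() -> {
            synchronized (lock) {
                while (!enabled) {
                    try {
                        lock.wait(); // parked until notified under `lock`
                    } catch (InterruptedException e) {
                        return; // interruption is the other valid exit
                    }
                }
            }
        });
        heartbeat.start();
        try {
            Thread.sleep(100); // give the thread time to reach wait()

            // Buggy close (as in the report): setting the reference to null
            // here would leave the thread parked in wait() forever.
            // Correct close: wake the thread before discarding it.
            synchronized (lock) {
                enabled = true;
                lock.notifyAll();
            }
            heartbeat.join(1000);
        } catch (InterruptedException e) {
            return false;
        }
        return !heartbeat.isAlive(); // true iff the thread actually exited
    }

    public static void main(String[] args) {
        System.out.println(new LostWakeupSketch().runAndClose() ? "closed" : "still blocked");
    }
}
```

Commenting out the synchronized notify block and only nulling the reference reproduces the hang: join(1000) times out and the thread remains alive.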

 

I have verified this in a debugger using two clients that create, read, and 
close over and over again using the same group and instance id. I tested 
this with 2.5.1 but found the same bug in the latest master branch; the 
line numbers above are for the latest code on GitHub.



--


Re: [VOTE] 2.8.0 RC0

2021-03-30 Thread John Roesler
Hello again, all,

I just wanted to mention that I am aware of Justin's
concerns in the 2.6.2 thread:
https://lists.apache.org/thread.html/r2df54c11c10d3d38443054998bc7dd92d34362641733c2fb7c579b50%40%3Cdev.kafka.apache.org%3E

I plan to make sure we address these concerns before the
actual 2.8.0 release, but wanted to get RC0 out asap for
testing.

Thank you,
John

On Tue, 2021-03-30 at 16:37 -0500, John Roesler wrote:
> Hello Kafka users, developers and client-developers,
> 
> This is the first candidate for release of Apache Kafka
> 2.8.0. This is a major release that includes many new
> features, including:
> 
> * Early-access release of replacing Zookeeper with a self-
> managed quorum
> * Add Describe Cluster API
> * Support mutual TLS authentication on SASL_SSL listeners
> * Ergonomic improvements to Streams TopologyTestDriver
> * Logger API improvement to respect the hierarchy
> * Request/response trace logs are now JSON-formatted
> * New API to add and remove Streams threads while running
> * New REST API to expose Connect task configurations
> * Fixed the TimeWindowDeserializer to be able to deserialize
> keys outside of Streams (such as in the console consumer)
> * Streams resilience improvement: new uncaught exception
> handler
> * Streams resilience improvement: automatically recover from
> transient timeout exceptions
> 
> 
> 
> Release notes for the 2.8.0 release:
> https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/RELEASE_NOTES.html
> 
> *** Please download, test and vote by 6 April 2021 ***
> 
> Kafka's KEYS file containing PGP keys we use to sign the
> release:
> https://kafka.apache.org/KEYS
> 
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/
> 
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> 
> * Javadoc:
> https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/javadoc/
> 
> * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
> https://github.com/apache/kafka/releases/tag/2.8.0-rc0
> 
> * Documentation:
> https://kafka.apache.org/28/documentation.html
> 
> * Protocol:
> https://kafka.apache.org/28/protocol.html
> 
> 
> /**
> 
> Thanks,
> John
> 
> 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #645

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12557: Fix hanging KafkaAdminClientTest (#10404)

[github] KAFKA-12552: Introduce LogSegments class abstracting the segments map 
(#10401)

[github] KAFKA-12509 Tighten up StateDirectory thread locking (#10418)


--
[...truncated 3.70 MB...]

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() STARTED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() PASSED

LiteralAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

LiteralAclSto

Re: [VOTE] KIP-516: Topic Identifiers

2021-03-30 Thread Justine Olshan
Hi all,
Another quick update. After some offline discussion with KIP-500 folks, I'm
making a small tweak to one of the configs in KIP-516.
Instead of delete.stale.topics.ms, KIP-516 will introduce
delete.topic.delay.ms, which is defined as "The minimum amount of time to
wait before removing a deleted topic's data on every broker."
The idea behind this config is to give a configurable window before the
data is fully deleted and removed from the brokers. This config will apply
to all topic deletions, not just the "stale topic" case described in
KIP-516.

Let me know if there are any questions,
Justine

On Thu, Feb 18, 2021 at 10:16 AM Justine Olshan 
wrote:

> Hi all,
> I realized that the DISCUSS thread got very long, so I'll be posting
> updates to this thread from now on.
> Just a quick update to the KIP. As a part of
> https://issues.apache.org/jira/browse/KAFKA-12332 and
> https://github.com/apache/kafka/pull/10143, I'm proposing adding a new
> error.
> INCONSISTENT_TOPIC_ID will be returned on partitions in
> LeaderAndIsrResponses where the topic ID in the request did not match the
> topic ID in the log. This will only occur when a valid topic ID is provided
> in the request.
>
> I've also updated the KIP to reflect this change.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers#KIP516:TopicIdentifiers-LeaderAndIsrRequestv5
>
>
> Please let me know if you have any thoughts or concerns with this change.
>
> Thanks,
> Justine
>
> On Mon, Oct 19, 2020 at 8:50 AM Justine Olshan 
> wrote:
>
>> Thanks everyone for the votes. KIP-516 has been accepted.
>>
>> Binding: Jun, Rajini, David
>> Non-binding: Lucas, Satish, Tom
>>
>> Justine
>>
>> On Sat, Oct 17, 2020 at 3:22 AM Tom Bentley  wrote:
>>
>>> +1 non-binding. Thanks!
>>>
>>> On Sat, Oct 17, 2020 at 7:55 AM David Jacot 
>>> wrote:
>>>
>>> > Hi Justine,
>>> >
>>> > Thanks for the KIP! This is a great and long awaited improvement.
>>> >
>>> > +1 (binding)
>>> >
>>> > Best,
>>> > David
>>> >
>>> > Le ven. 16 oct. 2020 à 17:36, Rajini Sivaram 
>>> a
>>> > écrit :
>>> >
>>> > > Hi Justine,
>>> > >
>>> > > +1 (binding)
>>> > >
>>> > > Thanks for all the work you put into this KIP!
>>> > >
>>> > > btw, there is a typo in the DeleteTopics Request/Response schema in
>>> the
>>> > > KIP, it says Metadata request.
>>> > >
>>> > > Regards,
>>> > >
>>> > > Rajini
>>> > >
>>> > >
>>> > > On Fri, Oct 16, 2020 at 4:06 PM Satish Duggana <
>>> satish.dugg...@gmail.com
>>> > >
>>> > > wrote:
>>> > >
>>> > > > Hi Justine,
>>> > > > Thanks for the KIP,  +1 (non-binding)
>>> > > >
>>> > > > On Thu, Oct 15, 2020 at 10:48 PM Lucas Bradstreet <
>>> lu...@confluent.io>
>>> > > > wrote:
>>> > > > >
>>> > > > > Hi Justine,
>>> > > > >
>>> > > > > +1 (non-binding). Thanks for all your hard work on this KIP!
>>> > > > >
>>> > > > > Lucas
>>> > > > >
>>> > > > > On Wed, Oct 14, 2020 at 8:59 AM Jun Rao 
>>> wrote:
>>> > > > >
>>> > > > > > Hi, Justine,
>>> > > > > >
>>> > > > > > Thanks for the updated KIP. +1 from me.
>>> > > > > >
>>> > > > > > Jun
>>> > > > > >
>>> > > > > > On Tue, Oct 13, 2020 at 2:38 PM Jun Rao 
>>> wrote:
>>> > > > > >
>>> > > > > > > Hi, Justine,
>>> > > > > > >
>>> > > > > > > Thanks for starting the vote. Just a few minor comments.
>>> > > > > > >
>>> > > > > > > 1. It seems that we should remove the topic field from the
>>> > > > > > > StopReplicaResponse below?
>>> > > > > > > StopReplica Response (Version: 4) => error_code [topics]
>>> > > > > > >   error_code => INT16
>>> > > > > > > topics => topic topic_id* [partitions]
>>> > > > > > >
>>> > > > > > > 2. "After controller election, upon receiving the result,
>>> assign
>>> > > the
>>> > > > > > > metadata topic its unique topic ID". Will the UUID for the
>>> > metadata
>>> > > > topic
>>> > > > > > > be written to the metadata topic itself?
>>> > > > > > >
>>> > > > > > > 3. The vote request is designed to support multiple topics,
>>> each
>>> > of
>>> > > > them
>>> > > > > > > may require a different sentinel ID. Should we reserve more
>>> than
>>> > > one
>>> > > > > > > sentinel ID for future usage?
>>> > > > > > >
>>> > > > > > > 4. UUID.randomUUID(): Could we clarify whether this method
>>> > returns
>>> > > > any
>>> > > > > > > sentinel ID? Also, how do we expect the user to use it?
>>> > > > > > >
>>> > > > > > > Thanks,
>>> > > > > > >
>>> > > > > > > Jun
>>> > > > > > >
>>> > > > > > > On Mon, Oct 12, 2020 at 9:54 AM Justine Olshan <
>>> > > jols...@confluent.io
>>> > > > >
>>> > > > > > > wrote:
>>> > > > > > >
>>> > > > > > >> Hi all,
>>> > > > > > >>
>>> > > > > > >> After further discussion and changes to this KIP, I think
>>> we are
>>> > > > ready
>>> > > > > > to
>>> > > > > > >> restart this vote.
>>> > > > > > >>
>>> > > > > > >> Again, here is the KIP:
>>> > > > > > >>
>>> > > > > > >>
>>> > > > > >
>>> > > >
>>> > >
>>> >
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
>>> > > > > > >>
>>> > > > > > >> The discus

[VOTE] 2.8.0 RC0

2021-03-30 Thread John Roesler
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka
2.8.0. This is a major release that includes many new
features, including:

* Early-access release of replacing Zookeeper with a self-
managed quorum
* Add Describe Cluster API
* Support mutual TLS authentication on SASL_SSL listeners
* Ergonomic improvements to Streams TopologyTestDriver
* Logger API improvement to respect the hierarchy
* Request/response trace logs are now JSON-formatted
* New API to add and remove Streams threads while running
* New REST API to expose Connect task configurations
* Fixed the TimeWindowDeserializer to be able to deserialize
keys outside of Streams (such as in the console consumer)
* Streams resilience improvement: new uncaught exception
handler
* Streams resilience improvement: automatically recover from
transient timeout exceptions



Release notes for the 2.8.0 release:
https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/RELEASE_NOTES.html

*** Please download, test and vote by 6 April 2021 ***

Kafka's KEYS file containing PGP keys we use to sign the
release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/javadoc/

* Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
https://github.com/apache/kafka/releases/tag/2.8.0-rc0

* Documentation:
https://kafka.apache.org/28/documentation.html

* Protocol:
https://kafka.apache.org/28/protocol.html


/**

Thanks,
John




Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #677

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12557: Fix hanging KafkaAdminClientTest (#10404)

[github] KAFKA-12552: Introduce LogSegments class abstracting the segments map 
(#10401)

[github] KAFKA-12509 Tighten up StateDirectory thread locking (#10418)


--
[...truncated 3.70 MB...]

AclAuthorizerTest > testDeleteAclOnWildcardResource() STARTED

AclAuthorizerTest > testDeleteAclOnWildcardResource() PASSED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() STARTED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() PASSED

AclAuthorizerTest > testChangeListenerTiming() STARTED

AclAuthorizerTest > testChangeListenerTiming() PASSED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() STARTED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() STARTED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() PASSED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() STARTED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() PASSED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() STARTED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() PASSED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() STARTED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() PASSED

AclAuthorizerTest > testAddAclsOnLiteralResource() STARTED

AclAuthorizerTest > testAddAclsOnLiteralResource() PASSED

AclAuthorizerTest > testGetAclsPrincipal() STARTED

AclAuthorizerTest > testGetAclsPrincipal() PASSED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() STARTED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() PASSED

AclAuthorizerTest > testLoadCache() STARTED

AclAuthorizerTest > testLoadCache() PASSED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AuthorizerInterfaceDefaultTest > testAuthorzeByResourceTypeSuperUserHasAccess() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorzeByResourceTypeSuperUserHasAccess() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AuthorizerInterfaceD

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #617

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12557: Fix hanging KafkaAdminClientTest (#10404)

[github] KAFKA-12552: Introduce LogSegments class abstracting the segments map 
(#10401)

[github] KAFKA-12509 Tighten up StateDirectory thread locking (#10418)


--
[...truncated 3.69 MB...]

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
PASSED

ProducerStateManagerTest > testBasicIdMapping() STARTED

ProducerStateManagerTest > testBasicIdMapping() PASSED

ProducerStateManagerTest > updateProducerTransactionState() STARTED

ProducerStateManagerTest > updateProducerTransactionState() PASSED

ProducerStateManagerTest > testPrepareUpdateDoesNotMutate() STARTED

ProducerStateManagerTest > testPrepareUpdateDoesNotMutate() PASSED

ProducerStateManagerTest > 

Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #85

2021-03-30 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-12557: Fix hanging KafkaAdminClientTest (#10404)


--
[...truncated 3.61 MB...]

PasswordEncoderTest > testEncodeDecodeAlgorithms() STARTED

PasswordEncoderTest > testEncodeDecodeAlgorithms() PASSED

PasswordEncoderTest > testEncodeDecode() STARTED

PasswordEncoderTest > testEncodeDecode() PASSED

ThrottlerTest > testThrottleDesiredRate() STARTED

ThrottlerTest > testThrottleDesiredRate() PASSED

LoggingTest > testLoggerLevelIsResolved() STARTED

LoggingTest > testLoggerLevelIsResolved() PASSED

LoggingTest > testLog4jControllerIsRegistered() STARTED

LoggingTest > testLog4jControllerIsRegistered() PASSED

LoggingTest > testTypeOfGetLoggers() STARTED

LoggingTest > testTypeOfGetLoggers() PASSED

LoggingTest > testLogName() STARTED

LoggingTest > testLogName() PASSED

LoggingTest > testLogNameOverride() STARTED

LoggingTest > testLogNameOverride() PASSED

TimerTest > testAlreadyExpiredTask() STARTED

TimerTest > testAlreadyExpiredTask() PASSED

TimerTest > testTaskExpiration() STARTED

TimerTest > testTaskExpiration() PASSED

ReplicationUtilsTest > testUpdateLeaderAndIsr() STARTED

ReplicationUtilsTest > testUpdateLeaderAndIsr() PASSED

TopicFilterTest > testIncludeLists() STARTED

TopicFilterTest > testIncludeLists() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED

RaftManagerTest > testUncaughtExceptionInIoThread() PASSED

RequestChannelTest > testNonAlterRequestsNotTransformed() STARTED

RequestChannelTest > testNonAlterRequestsNotTransformed() PASSED

RequestChannelTest > testAlterRequests() STARTED

RequestChannelTest > testAlterRequests() PASSED

RequestChannelTest > testJsonRequests() STARTED

RequestChannelTest > testJsonRequests() PASSED

RequestChannelTest > testIncrementalAlterRequests() STARTED

RequestChannelTest > testIncrementalAlterRequests() PASSED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 STARTED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
PASSED

ControllerContextTest > testPreferredReplicaImbalanceMetric() STARTED

ControllerContextTest > testPreferredReplicaImbalanceMetric() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() PASSED

ControllerContextTest > testReassignTo() STARTED

ControllerContextTest > testReassignTo() PASSED

ControllerContextTest > testPartitionReplicaAssignment() STARTED

ControllerContextTest > testPartitionReplicaAssignment() PASSED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() STARTED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() PASSED

ControllerContextTest > testReassignToIdempotence() STARTED

ControllerContextTest > testReassignToIdempotence() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
PASSED

TransactionsWithMaxInFlightOneTest > 
testTransactionalProducerSingleBrokerMaxInFlightOne() STARTED

DynamicConnectionQuotaTest > testDynamicIpConnectionRateQuota() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@7a391909, value = [B@562e0cc0), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@7a391909, value = [B@562e0cc0), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, he

Re: [DISCUSS] KIP-712: Shallow Mirroring

2021-03-30 Thread Henry Cai
Tom,

Thanks for your comments.  Yes, it's a bit clumsy to use the existing
consumer and producer APIs to carry the underlying record batch, but
creating a new set of APIs would also mean other use cases (e.g. MM2)
wouldn't be able to use the feature easily.  We can throw exceptions if we
see clients setting serializer/compression in the consumer config
options.

The consumer essentially gets back a collection of
RecordBatchByteBuffer records and passes them to the producer.  Most of
the internal APIs inside the consumer and producer code paths already
take ByteBuffer as an argument, so it's not too much work to get the
byte buffer through.

As for the worry that the client might see the inside of that byte buffer, we
can create a RecordBatchByteBufferRecord class to wrap the underlying byte
buffer, so hopefully callers will not drill too deep into that object.  Java's
ByteBuffer also has an asReadOnlyBuffer() method that returns a read-only
buffer; that can be explored as well.
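A minimal sketch of what such an opaque wrapper might look like (the class name comes from this thread; the constructor and accessor shapes are my assumptions, not the KIP's actual API):

```java
import java.nio.ByteBuffer;

/**
 * Illustrative sketch: an opaque holder for a raw record batch.
 * Client code only ever sees a read-only view of the bytes, while
 * consumer/producer internals (same package) can reach the real buffer.
 */
public final class RecordBatchByteBufferRecord {
    private final ByteBuffer buffer;

    public RecordBatchByteBufferRecord(ByteBuffer buffer) {
        this.buffer = buffer;
    }

    /** Read-only view for code outside the client internals. */
    public ByteBuffer asReadOnlyBuffer() {
        return buffer.asReadOnlyBuffer();
    }

    /** Package-private: only the consumer/producer path gets the raw buffer. */
    ByteBuffer underlying() {
        return buffer;
    }
}
```

Any write attempt through the read-only view throws ReadOnlyBufferException, which gives the "don't drill into it" guarantee without copying the batch.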

On Tue, Mar 30, 2021 at 4:24 AM Tom Bentley  wrote:

> Hi Henry and Ryanne,
>
> Related to Ismael's point about the producer & consumer configs being
> dangerous, I can see two parts to this:
>
> 2a. Both the proposed configs seem to be fundamentally incompatible with
> the Producer's existing key.serializer, value.serializer and
> compression.type configs, likewise the consumers key.deserializer and
> value.deserializer. I don't see a way to avoid this, since those existing
> configs are already separate things. (I did consider whether using
> special-case Deserializer and Serializer could be used instead, but that
> doesn't work nicely; in this use case they're necessarily all configured
> together). I think all we could do would be to reject configs which tried
> to set those existing client configs in conjunction with fetch.raw.bytes
> and send.raw.bytes.
>
> 2b. That still leaves a public Java API which would allow access to the raw
> byte buffers. AFAICS we don't actually need user code to have access to the
> raw buffers. It would be enough to get an opaque object that wrapped the
> ByteBuffer from the consumer and pass it to the producer. It's only the
> consumer and producer code which needs to be able to obtain the wrapped
> buffer.
>
> Kind regards,
>
> Tom
>
> On Tue, Mar 30, 2021 at 8:41 AM Ismael Juma  wrote:
>
> > Hi Henry,
> >
> > Can you clarify why this "network performance" issue is only related to
> > shallow mirroring? Generally, we want the protocol to be generic and not
> > have a number of special cases. The more special cases you have, the
> > tougher it becomes to test all the edge cases.
> >
> > Ismael
> >
> > On Mon, Mar 29, 2021 at 9:51 PM Henry Cai 
> > wrote:
> >
> > > It's interesting this VOTE thread finally becomes a DISCUSS thread.
> > >
> > > For MM2 concern, I will take a look to see whether I can add the
> support
> > > for MM2.
> > >
> > > For Ismael's concern on multiple batches in the ProduceRequest
> > (conflicting
> > > with KIP-98), here is my take:
> > >
> > > 1. We do need to group multiple batches in the same request otherwise
> the
> > > network performance will suffer.
> > > 2. For the concern on transactional message support as in KIP-98, since
> > MM1
> > > and MM2 currently don't support transactional messages, KIP-712 will
> not
> > > attempt to support transactions either.  I will add a config option on
> > > producer config: allowMultipleBatches.  By default this option will be
> > off
> > > and the user needs to explicitly turn on this option to use the shallow
> > > mirror feature.  And if we detect both this option and transaction is
> > > turned on we will throw an exception to protect current transaction
> > > processing.
> > > 3. In the future, when MM2 starts to support exactly-once and
> transactional
> > > messages (is that KIP-656?), we can revisit this code.  The current
> > > transactional message already makes the compromise that the messages in
> > the
> > > same RecordBatch (MessageSet) are sharing the same
> > > sequence-id/transaction-id, so those messages need to be committed all
> > > together.  I think when we support the shallow mirror with
> transactional
> > > semantics, we will group all batches in the same ProduceRequest in the
> > same
> > > transaction boundary, they need to be committed all together.  On the
> > > broker side, all batches coming from ProduceRequest (or FetchResponse)
> > are
> > > committed in the same log segment file as one unit (current behavior).
> > >
> > > On Mon, Mar 29, 2021 at 8:46 AM Ryanne Dolan 
> > > wrote:
> > >
> > > > Ah, I see, thanks Ismael. Now I understand your concern.
> > > >
> > > > From KIP-98, re this change in v3:
> > > >
> > > > "This allows us to remove the message set size since each message set
> > > > already contains a field for the size. More importantly, since there
> is
> > > > only one message set to be written to the log, partial produce
> failures
> > > are
> > > > no longer possible. The full

Re: [DISCUSS] KIP-712: Shallow Mirroring

2021-03-30 Thread Henry Cai
Ismael,

On the network performance side, the issue is throughput.  For
networking purposes, you gain throughput by combining data into bigger
batches; otherwise you pay higher overhead on network
handshaking/round-trip delay and waste space in the underlying network packet
buffers.  MirrorMaker fetches data inbound through FetchResponses from the
source broker, which can return data in MBytes (comprising multiple
batches in the same response); however, the outbound ProduceRequests to the
target broker are in the KByte batch-size range if we don't optimize.  (The
reason those batches are in KBytes is that when the application client
originally produced to the first broker, it tended to select smaller batch
sizes to achieve low latency.)  The outbound throughput is thus not going to
be able to match the inbound throughput.
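The throughput argument above can be made concrete with a back-of-the-envelope model; the round-trip time and link bandwidth below are illustrative assumptions, not measurements from the thread:

```java
/**
 * Illustrative model: effective throughput when each ProduceRequest
 * carries one batch and pays a fixed per-request round-trip cost.
 */
public class BatchThroughputSketch {
    public static void main(String[] args) {
        double rttMs = 5.0;       // assumed round-trip time per request
        double linkMBps = 1000.0; // assumed link bandwidth, MB/s

        for (double batchKB : new double[] {16, 64, 1024}) {
            double transferMs = (batchKB / 1024.0) / linkMBps * 1000.0;
            double mbps = (batchKB / 1024.0) / ((rttMs + transferMs) / 1000.0);
            System.out.printf("batch=%6.0f KB -> ~%.0f MB/s%n", batchKB, mbps);
        }
        // A 16 KB outbound batch is dominated by the round trip, while a
        // 1 MB multi-batch request approaches a much larger fraction of the
        // link; this is why packing multiple batches per request matters.
    }
}
```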

In order to achieve networking parity between the inbound FetchResponse
(which allows packing multiple batches) and the outbound ProduceRequest, we
propose restoring the multiple-batch packing capability of ProduceRequest.  I
think KIP-98 took a shortcut by removing that capability
to save the extra work needed to support transactions across
multiple batches.

For our use case, we view MM as a mere replication pipe that gets data
from one data center into another; the remote broker is almost
like a follower of the broker in the source cluster.  Broker-to-broker
replication uses FetchRequest/FetchResponse, which does pack multiple
batches to achieve optimal network throughput.  On the broker
code path, FetchResponse and ProduceRequest go through the same handling code,
and the broker simply appends the MemoryRecords (which contain multiple
batches) to the same log segment file as one unit.  So it's not
particularly hard to restore the multiple-batch packing feature for
ProduceRequest.



On Tue, Mar 30, 2021 at 12:34 AM Ismael Juma  wrote:

> Hi Henry,
>
> Can you clarify why this "network performance" issue is only related to
> shallow mirroring? Generally, we want the protocol to be generic and not
> have a number of special cases. The more special cases you have, the
> tougher it becomes to test all the edge cases.
>
> Ismael
>
> On Mon, Mar 29, 2021 at 9:51 PM Henry Cai 
> wrote:
>
> > It's interesting this VOTE thread finally becomes a DISCUSS thread.
> >
> > For MM2 concern, I will take a look to see whether I can add the support
> > for MM2.
> >
> > For Ismael's concern on multiple batches in the ProduceRequest
> (conflicting
> > with KIP-98), here is my take:
> >
> > 1. We do need to group multiple batches in the same request otherwise the
> > network performance will suffer.
> > 2. For the concern on transactional message support as in KIP-98, since
> MM1
> > and MM2 currently don't support transactional messages, KIP-712 will not
> > attempt to support transactions either.  I will add a config option on
> > producer config: allowMultipleBatches.  By default this option will be
> off
> > and the user needs to explicitly turn on this option to use the shallow
> > mirror feature.  And if we detect both this option and transaction is
> > turned on we will throw an exception to protect current transaction
> > processing.
> > 3. In the future, when MM2 starts to support exactly-once and transactional
> > messages (is that KIP-656?), we can revisit this code.  The current
> > transactional message already makes the compromise that the messages in
> the
> > same RecordBatch (MessageSet) are sharing the same
> > sequence-id/transaction-id, so those messages need to be committed all
> > together.  I think when we support the shallow mirror with transactional
> > semantics, we will group all batches in the same ProduceRequest in the
> same
> > transaction boundary, they need to be committed all together.  On the
> > broker side, all batches coming from ProduceRequest (or FetchResponse)
> are
> > committed in the same log segment file as one unit (current behavior).
> >
> > On Mon, Mar 29, 2021 at 8:46 AM Ryanne Dolan 
> > wrote:
> >
> > > Ah, I see, thanks Ismael. Now I understand your concern.
> > >
> > > From KIP-98, re this change in v3:
> > >
> > > "This allows us to remove the message set size since each message set
> > > already contains a field for the size. More importantly, since there is
> > > only one message set to be written to the log, partial produce failures
> > are
> > > no longer possible. The full message set is either successfully written
> > to
> > > the log (and replicated) or it is not."
> > >
> > > The schema and size field don't seem to be an issue, as KIP-712 already
> > > addresses.
> > >
> > > The partial produce failure issue is something I don't understand. I
> > can't
> > > tell if this was done out of convenience at the time or if there is
> > > something incompatible with partial produce success/failure and EOS.
> Does
> > > anyone know?
> > >
> > > Ryanne
> > >
> > > On Mon, Mar 29, 2021, 1:41 AM Ismael Juma  

[jira] [Resolved] (KAFKA-12509) Tighten up StateDirectory thread locking

2021-03-30 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-12509.

Resolution: Fixed

> Tighten up StateDirectory thread locking
> 
>
> Key: KAFKA-12509
> URL: https://issues.apache.org/jira/browse/KAFKA-12509
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Ketul Gupta
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
>
> The StateDirectory class is responsible for tracking the ownership of task 
> directories, which may be locked by StreamThreads, the cleaner thread, or a 
> user thread via KafkaStreams.cleanup(). It maintains a map from TaskId to the 
> name of the owning thread. 
> We should consider tightening this up a bit and using a reference to the 
> actual Thread instead of just the name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-690 Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-03-30 Thread Omnia Ibrahim
(Sorry, forgot to Reply All)
Hi Ryanne,
It's a valid concern.  I was trying to separate the concerns of internal and
replication policy away from each other and to make the code readable, as
extending ReplicationPolicy to manage both internal and replicated topics is
a bit odd.  I'm not against simplifying things and having ReplicationPolicy
handle both; at the end of the day, if an MM2 user has a special naming
convention for topics, it will affect both replicated and MM2 internal
topics.

For simplifying things we can extend `ReplicationPolicy` to the following
instead of adding an extra class

> public interface ReplicationPolicy {
>     String topicSource(String topic);
>     String upstreamTopic(String topic);
>
>     /** Returns heartbeats topic name. */
>     String heartbeatsTopic();
>
>     /** Returns the offset-syncs topic for given cluster alias. */
>     String offsetSyncTopic(String targetAlias);
>
>     /** Returns the name of the checkpoint topic for given cluster alias. */
>     String checkpointTopic(String sourceAlias);
>
>     default String originalTopic(String topic) {
>         String upstream = upstreamTopic(topic);
>         if (upstream == null) {
>             return topic;
>         } else {
>             return originalTopic(upstream);
>         }
>     }
>
>     /** Internal topics are never replicated. */
>     boolean isInternalTopic(String topic); // the implementation will be moved to
>     // `DefaultReplicationPolicy` to handle both Kafka topics and MM2 internal topics
> }
>

On Fri, Mar 26, 2021 at 3:05 PM Ryanne Dolan  wrote:

> Omnia, have we considered just adding methods to ReplicationPolicy? I'm
> reluctant to add a new class because, as Mickael points out, we'd need to
> carry it around in client code.
>
> Ryanne
>
> On Fri, Feb 19, 2021 at 8:31 AM Mickael Maison 
> wrote:
>
>> Hi Omnia,
>>
>> Thanks for the clarifications.
>>
>> - I'm still a bit uneasy with the overlap between these 2 methods as
>> currently `ReplicationPolicy.isInternalTopic` already handles MM2
>> internal topics. Should we make it only handle Kafka internal topics
>> and `isMM2InternalTopic()` only handle MM2 topics?
>>
>> - I'm not sure I understand what this method is used for. There are no
>> such methods for the other 2 topics (offset-sync and heartbeat). Also
>> what happens if there are other MM2 instances using different naming
>> schemes in the same cluster. Do all instances have to know about the
>> other naming schemes? What are the expected issues if they don't?
>>
>> - RemoteClusterUtils is a client-side utility so it does not have
>> access to the MM2 configuration. Since this new API can affect the
>> name of the checkpoint topic, it will need to be used client-side too
>> so users can find the checkpoint topic name. I hadn't realized this
>> was the case.
>>
>> Thanks
>>
>> On Mon, Feb 15, 2021 at 9:33 PM Omnia Ibrahim 
>> wrote:
>> >
>> > Hi Mickael, did you have some time to check my answer?
>> >
>> > On Thu, Jan 21, 2021 at 10:10 PM Omnia Ibrahim 
>> wrote:
>> >>
>> >> Hi Mickael,
>> >> Thanks for taking another look into the KIP, regards your questions
>> >>
>> >> - I believe we need both "isMM2InternalTopic" and
>> `ReplicationPolicy.isInternalTopic`  as `ReplicationPolicy.isInternalTopic`
>> does check if a topic is Kafka internal topic, while `isMM2InternalTopic`
>> is just focusing if a topic is MM2 internal topic or not(which is
>> heartbeat/checkpoint/offset-sync). The fact that the default for MM2
>> internal topics matches "ReplicationPolicy.isInternalTopic" will not be an
>> accurate assumption anymore once we implement this KIP.
>> >>
>> >> - "isCheckpointTopic" will detect all checkpoint topics for all MM2
>> instances this is needed for "MirrorClient.checkpointTopics" which
>> originally check if the topic name ends with CHECKPOINTS_TOPIC_SUFFIX. So
>> this method just to keep the same functionality that originally exists in
>> MM2
>> >>
>> >> - "checkpointTopic" is used in two places 1. At topic creation in
>> "MirrorCheckpointConnector.createInternalTopics" which use
>> "sourceClusterAlias() + CHECKPOINTS_TOPIC_SUFFIX" and 2. At
>> "MirrorClient.remoteConsumerOffsets" which is called by
>> "RemoteClusterUtils.translateOffsets"  the cluster alias here referred to
>> as "remoteCluster" where the topic name is "remoteClusterAlias +
>> CHECKPOINTS_TOPIC_SUFFIX"  (which is an argument in RemoteClusterUtils, not
>> a config) This why I called the variable cluster instead of source and
>> instead of using the config to figure out the cluster aliases from config
>> as we use checkpoints to keep `RemoteClusterUtils` compatible for existing
>> users. I see a benefit of just read the config a find out the cluster
>> aliases but on the other side, I'm not sure why "RemoteClusterUtils"
>> doesn't get the name of the cluster from the properties instead of an
>> argument, so I decided to keep it just for compatibility.
>> >>
>> >> Hope these answer some of your concerns.
>> >> Best
>> >> O

Re: [DISCUSS] KIP-726: Make the CooperativeStickyAssignor as the default assignor

2021-03-30 Thread Guozhang Wang
Hi Sophie,

My question is more related to KAFKA-12477, but since your latest replies
are on this thread I figured we could follow up in the same venue. Just so I
understand your latest comments above about the approach:

* I think we would need to persist this decision so that the group would
never go back to the eager protocol; this bit would be written to the
internal topic's assignment message. Is that correct?
* Maybe you can describe the steps, after the group has decided to move
forward with the cooperative protocol, when:
1) a new member joins the group with the old version, hence only
recognizes the eager protocol and executes it on its first
rebalance: what would happen?
2) in addition to 1), the new member joins the group with the old version,
only recognizes the old subscription format, and is selected as the
leader: what would happen?

Guozhang




On Mon, Mar 29, 2021 at 10:30 PM Luke Chen  wrote:

> Hi Sophie & Ismael,
> Thank you for your feedback.
> No problem, let's pause this KIP and wait for this improvement: KAFKA-12477
> .
>
> Stay tuned :)
>
> Thank you.
> Luke
>
> On Tue, Mar 30, 2021 at 3:14 AM Ismael Juma  wrote:
>
> > Hi Sophie,
> >
> > I didn't analyze the KIP in detail, but the two suggestions you mentioned
> > sound like great improvements.
> >
> > A bit more context: breaking changes for a widely used product like Kafka
> > are costly and hence why we try as hard as we can to avoid them. When it
> > comes to the brokers, they are often managed by a central group (or
> they're
> > in the Cloud), so they're a bit easier to manage. Even so, it's still
> > possible to upgrade from 0.8.x directly to 2.7 since all protocol
> versions
> > are still supported. When it comes to the basic clients (producer,
> > consumer, admin client), they're often embedded in applications so we
> have
> > to be even more conservative.
> >
> > Ismael
> >
> > On Mon, Mar 29, 2021 at 10:50 AM Sophie Blee-Goldman
> >  wrote:
> >
> > > Ismael,
> > >
> > > It seems like given 3.0 is a breaking release, we have to rely on users
> > > being aware of this and responsible
> > > enough to read the upgrade guide. Otherwise we could never ever make
> any
> > > breaking changes beyond just
> > > removing deprecated APIs or other compilation-breaking errors that
> would
> > be
> > > immediately visible, no?
> > >
> > > That said, obviously it's better to have a circuit-breaker that will
> fail
> > > fast in case of a user misconfiguration
> > > rather than silently corrupting the consumer group state -- eg for two
> > > consumers to overlap in their ownership
> > > of the same partition(s). We could definitely implement this, and now
> > that
> > > I think about it this might solve a
> > > related problem in KAFKA-12477
> > > . We just add a new
> > > field to the Assignment in which the group leader
> > > indicates whether it's on a recent enough version to understand
> > cooperative
> > > rebalancing. If an upgraded member
> > > joins the group, it'll only be allowed to start following the new
> > > rebalancing protocol after receiving the go-ahead
> > > from the group leader.
> > >
> > > If we do go ahead and add this new field in the Assignment then I'm
> > pretty
> > > confident we can reduce the number
> > > of required rolling bounces to just one with KAFKA-12477
> > > . In that case we
> > > should
> > > be in much better shape to
> > > feel good about changing the default to the CooperativeStickyAssignor.
> > How
> > > does that sound?
> > >
> > > To be clear, I'm not proposing we do this as part of KIP-726. Here's my
> > > take:
> > >
> > > Let's pause this KIP while I work on making these two improvements in
> > > KAFKA-12477 . Once
> I
> > > can
> > > confirm the
> > > short-circuit and single rolling bounce will be available for 3.0, I'll
> > > report back on this thread. Then we can move
> > > forward with this KIP again.
> > >
> > > Thoughts?
> > > Sophie
> > >
> > > On Mon, Mar 29, 2021 at 12:01 AM Luke Chen  wrote:
> > >
> > > > Hi Ismael,
> > > > Thanks for your good question. Answer them below:
> > > > *1. Are we saying that every consumer upgraded would have to follow
> the
> > > > complex path described in the KIP? *
> > > > --> We suggest that every consumer did these 2 steps of rolling
> > upgrade.
> > > > And after KAFKA-12477 <
> > https://issues.apache.org/jira/browse/KAFKA-12477
> > > >
> > > > is completed, it can be reduced to 1 rolling upgrade.
> > > >
> > > > *2. what happens if they don't read the instructions and upgrade as
> > they
> > > > have in the past?*
> > > > --> The reason we want 2 steps of rolling upgrade is that we want to
> > > avoid
> > > > the situation where leader is on old byte-code and only recognize
> > > "eager",
> > > > but due to compatibi

Re: [DISCUSS] KIP-690 Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-03-30 Thread Omnia Ibrahim
Hi Mickael,
> - I'm still a bit uneasy with the overlap between these 2 methods as
currently `ReplicationPolicy.isInternalTopic` already handles MM2 internal
topics. Should we make it only handle Kafka internal topics and
`isMM2InternalTopic()` only handle MM2 topics?
We can resolve the overlap by making `ReplicationPolicy.isInternalTopic`
handle only Kafka internal topics (topics starting with __), while
`InternalTopicPolicy.isMM2InternalTopic` will handle any internal topic
created by MM2.
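A rough sketch of that split; the default MM2 topic names assumed below (heartbeats, checkpoints, offset-syncs) are illustrative, not the actual MM2 implementation:

```java
/** Illustrative sketch of splitting Kafka-internal vs MM2-internal checks. */
public class TopicPolicySketch {

    /** Kafka internal topics only (topics starting with "__"). */
    static boolean isInternalTopic(String topic) {
        return topic.startsWith("__");
    }

    /** MM2 internal topics only: heartbeats, checkpoints, offset-syncs. */
    static boolean isMM2InternalTopic(String topic) {
        return topic.equals("heartbeats")
                || topic.endsWith(".checkpoints.internal")
                || topic.startsWith("mm2-offset-syncs.");
    }
}
```

With this split, `__consumer_offsets` is flagged only by the first check and `primary.checkpoints.internal` only by the second, so neither method overlaps the other's responsibility.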

>- I'm not sure I understand what this method is used for. There are no
such methods for the other 2 topics (offset-sync and heartbeat). Also what
happens if there are other MM2 instances using different naming schemes in
the same cluster. Do all instances have to know about the other naming
schemes? What are the expected issues if they don't?
Your first comment was right: it will only detect the checkpoint topic
configured by this MM2 instance, so we can work without it. (Sorry for
the confusion.)

>- RemoteClusterUtils is a client-side utility so it does not have access
to the MM2 configuration. Since this new API can affect the name of the
checkpoint topic, it will need to be used client-side too so users can find
the checkpoint topic name. I hadn't realized this was the case.
I'm not sure I follow: RemoteClusterUtils uses `MirrorClient`, which
has a `MirrorClientConfig` containing `REPLICATION_POLICY_CLASS`; how is
this different from adding `INTERNAL_TOPIC_NAMING_POLICY_CLASS`?

Thanks

On Fri, Feb 19, 2021 at 2:31 PM Mickael Maison 
wrote:

> Hi Omnia,
>
> Thanks for the clarifications.
>
> - I'm still a bit uneasy with the overlap between these 2 methods as
> currently `ReplicationPolicy.isInternalTopic` already handles MM2
> internal topics. Should we make it only handle Kafka internal topics
> and `isMM2InternalTopic()` only handle MM2 topics?
>
> - I'm not sure I understand what this method is used for. There are no
> such methods for the other 2 topics (offset-sync and heartbeat). Also
> what happens if there are other MM2 instances using different naming
> schemes in the same cluster. Do all instances have to know about the
> other naming schemes? What are the expected issues if they don't?
>
> - RemoteClusterUtils is a client-side utility so it does not have
> access to the MM2 configuration. Since this new API can affect the
> name of the checkpoint topic, it will need to be used client-side too
> so users can find the checkpoint topic name. I hadn't realized this
> was the case.
>
> Thanks
>
> On Mon, Feb 15, 2021 at 9:33 PM Omnia Ibrahim 
> wrote:
> >
> > Hi Mickael, did you have some time to check my answer?
> >
> > On Thu, Jan 21, 2021 at 10:10 PM Omnia Ibrahim 
> wrote:
> >>
> >> Hi Mickael,
> >> Thanks for taking another look into the KIP, regards your questions
> >>
> >> - I believe we need both "isMM2InternalTopic" and
> `ReplicationPolicy.isInternalTopic`  as `ReplicationPolicy.isInternalTopic`
> does check if a topic is Kafka internal topic, while `isMM2InternalTopic`
> is just focusing if a topic is MM2 internal topic or not(which is
> heartbeat/checkpoint/offset-sync). The fact that the default for MM2
> internal topics matches "ReplicationPolicy.isInternalTopic" will not be an
> accurate assumption anymore once we implement this KIP.
> >>
> >> - "isCheckpointTopic" will detect all checkpoint topics for all MM2
> instances this is needed for "MirrorClient.checkpointTopics" which
> originally check if the topic name ends with CHECKPOINTS_TOPIC_SUFFIX. So
> this method just to keep the same functionality that originally exists in
> MM2
> >>
> >> - "checkpointTopic" is used in two places 1. At topic creation in
> "MirrorCheckpointConnector.createInternalTopics" which use
> "sourceClusterAlias() + CHECKPOINTS_TOPIC_SUFFIX" and 2. At
> "MirrorClient.remoteConsumerOffsets" which is called by
> "RemoteClusterUtils.translateOffsets"  the cluster alias here referred to
> as "remoteCluster" where the topic name is "remoteClusterAlias +
> CHECKPOINTS_TOPIC_SUFFIX"  (which is an argument in RemoteClusterUtils, not
> a config) This why I called the variable cluster instead of source and
> instead of using the config to figure out the cluster aliases from config
> as we use checkpoints to keep `RemoteClusterUtils` compatible for existing
> users. I see a benefit of just read the config a find out the cluster
> aliases but on the other side, I'm not sure why "RemoteClusterUtils"
> doesn't get the name of the cluster from the properties instead of an
> argument, so I decided to keep it just for compatibility.
> >>
> >> Hope these answer some of your concerns.
> >> Best
> >> Omnia
> >>
> >> On Thu, Jan 21, 2021 at 3:37 PM Mickael Maison <
> mickael.mai...@gmail.com> wrote:
> >>>
> >>> Hi Omnia,
> >>>
> >>> Thanks for the updates. Sorry for the delay but I have a few more
> >>> small questions about the API:
> >>> - Do we really need "isMM2InternalTopic()"? There's already
> >>> "ReplicationPolicy.is

[jira] [Resolved] (KAFKA-12552) Extract segments map out of Log class into separate class

2021-03-30 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-12552.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

Merged the PR to trunk.

> Extract segments map out of Log class into separate class
> -
>
> Key: KAFKA-12552
> URL: https://issues.apache.org/jira/browse/KAFKA-12552
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Kowshik Prakasam
>Assignee: Kowshik Prakasam
>Priority: Major
> Fix For: 3.0.0
>
>
> *This is a precursor to KAFKA-12553.*
> Extract segments map out of Log class into separate class. This will improve 
> the testability and maintainability of the Log layer, and also will be useful 
> to subsequently refactor the recovery logic (see KAFKA-12553).





[jira] [Resolved] (KAFKA-12557) org.apache.kafka.clients.admin.KafkaAdminClientTest#testClientSideTimeoutAfterFailureToReceiveResponse intermittently hangs indefinitely

2021-03-30 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-12557.
--
Resolution: Fixed

> org.apache.kafka.clients.admin.KafkaAdminClientTest#testClientSideTimeoutAfterFailureToReceiveResponse
>  intermittently hangs indefinitely
> 
>
> Key: KAFKA-12557
> URL: https://issues.apache.org/jira/browse/KAFKA-12557
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 3.0.0, 2.8.0
>
>
> While running tests for [https://github.com/apache/kafka/pull/10397,] I got a 
> test timeout under Java 8.
> I ran it locally via `./gradlew clean -PscalaVersion=2.12 :clients:unitTest 
> --profile --no-daemon --continue 
> -PtestLoggingEvents=started,passed,skipped,failed -PignoreFailures=true 
> -PmaxTestRetries=1 -PmaxTestRetryFailures=5` (copied from the Jenkins log) 
> and was able to determine that the hanging test is:
> org.apache.kafka.clients.admin.KafkaAdminClientTest#testClientSideTimeoutAfterFailureToReceiveResponse
> It's odd, but it hangs most times on my branch, and I haven't seen it hang on 
> trunk, despite the fact that my PR doesn't touch the client or core code at 
> all.
> Some debugging reveals that when the client is hanging, it's because the 
> listTopics request is still sitting in its pendingRequests queue, and if I 
> understand the test setup correctly, it would never be completed, since we 
> will never advance time or queue up a metadata response for it.
> I figure a reasonable blanket response to this is just to make sure that the 
> test harness will close the admin client eagerly instead of lazily.





[jira] [Resolved] (KAFKA-12435) Several streams-test-utils classes missing from javadoc

2021-03-30 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-12435.
--
Resolution: Fixed

> Several streams-test-utils classes missing from javadoc
> ---
>
> Key: KAFKA-12435
> URL: https://issues.apache.org/jira/browse/KAFKA-12435
> Project: Kafka
>  Issue Type: Bug
>  Components: docs, streams-test-utils
>Reporter: Ismael Juma
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: image-2021-03-05-14-22-45-891.png
>
>
> !image-2021-03-05-14-22-45-891.png!
> Only 3 of them show up currently ^. Source: 
> https://kafka.apache.org/27/javadoc/index.html





[jira] [Created] (KAFKA-12584) Remove deprecated `Sum` and `Total` classes

2021-03-30 Thread David Jacot (Jira)
David Jacot created KAFKA-12584:
---

 Summary: Remove deprecated `Sum` and `Total` classes
 Key: KAFKA-12584
 URL: https://issues.apache.org/jira/browse/KAFKA-12584
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot
Assignee: David Jacot


`Sum` and `Total` classes were deprecated in 2.4. As they are not really 
supposed to be used by external parties (even if they are part of AK's public 
API), it seems that we could remove them for 3.0.
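For code still referencing them, the replacements introduced alongside the 2.4 deprecation live in the same `org.apache.kafka.common.metrics.stats` package (migration note as a sketch, not verified against every release):

```java
// Deprecated stat -> replacement (since 2.4):
//   org.apache.kafka.common.metrics.stats.Sum   -> WindowedSum
//   org.apache.kafka.common.metrics.stats.Total -> CumulativeSum
// Example:
//   sensor.add(metricName, new Total());         // before
//   sensor.add(metricName, new CumulativeSum()); // after
```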





[jira] [Created] (KAFKA-12583) Upgrade of netty-codec due to CVE-2021-21295

2021-03-30 Thread Dominique Mongelli (Jira)
Dominique Mongelli created KAFKA-12583:
--

 Summary: Upgrade of netty-codec due to CVE-2021-21295
 Key: KAFKA-12583
 URL: https://issues.apache.org/jira/browse/KAFKA-12583
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.0
Reporter: Dominique Mongelli
Assignee: Dongjin Lee
 Fix For: 2.8.0, 2.7.1, 2.6.2


Our security tool raised the following security flaw on kafka 2.7: 
[https://nvd.nist.gov/vuln/detail/CVE-2021-21290]

It is a vulnerability related to jar *netty-codec-4.1.51.Final.jar*.

Looking at source code, the netty-codec in trunk and 2.7.0 branches are still 
vulnerable.

Based on netty issue tracker, the vulnerability is fixed in 4.1.59.Final: 
https://github.com/netty/netty/security/advisories/GHSA-5mcr-gq6c-3hq2
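The remediation is a one-line dependency bump; in Kafka's build that would land in `gradle/dependencies.gradle` (exact key name assumed here):

```groovy
// gradle/dependencies.gradle -- bump netty past the vulnerable release
versions["netty"] = "4.1.59.Final"  // was 4.1.51.Final; first release with the fix
```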





Re: [VOTE] KIP-707: The future of KafkaFuture

2021-03-30 Thread David Jacot
Thanks for the KIP, Tom. I agree that this is a good step forward.

+1 (binding)

David

On Mon, Mar 29, 2021 at 5:58 PM Ryanne Dolan  wrote:

> +1, thanks!
>
> On Mon, Mar 29, 2021, 10:54 AM Ismael Juma  wrote:
>
> > Thanks for the KIP, +1 (binding).
> >
> > Ismael
> >
> > On Mon, Mar 29, 2021 at 8:35 AM Tom Bentley  wrote:
> >
> > > Hi,
> > >
> > > I'd like to start a vote on KIP-707, which proposes to add
> > > KafkaFuture.toCompletionStage(), deprecate KafkaFuture.Function and
> make
> > a
> > > couple of other minor cosmetic changes.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-707%3A+The+future+of+KafkaFuture
> > >
> > > Many thanks,
> > >
> > > Tom
> > >
> >
>
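The shape of the proposed API can be modeled with the standard library alone; this sketch does not use the Kafka client jar, and `CompletableFuture.minimalCompletionStage()` stands in for what `KafkaFuture.toCompletionStage()` would return:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class ToCompletionStageSketch {
    public static void main(String[] args) {
        CompletableFuture<Integer> inner = new CompletableFuture<>();
        // A CompletionStage view of the future: callers can chain standard
        // CompletionStage combinators but cannot complete it themselves.
        CompletionStage<Integer> stage = inner.minimalCompletionStage();
        stage.thenApply(v -> v * 2).thenAccept(System.out::println);
        inner.complete(21); // the chain fires and prints "42"
    }
}
```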


[jira] [Created] (KAFKA-12582) Remove deprecated `ConfigEntry` constructor

2021-03-30 Thread David Jacot (Jira)
David Jacot created KAFKA-12582:
---

 Summary: Remove deprecated `ConfigEntry` constructor
 Key: KAFKA-12582
 URL: https://issues.apache.org/jira/browse/KAFKA-12582
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot
Assignee: David Jacot


ConfigEntry's constructor was deprecated in 1.1.0.





Re: [DISCUSS] KIP-712: Shallow Mirroring

2021-03-30 Thread Tom Bentley
Hi Henry and Ryanne,

Related to Ismael's point about the producer & consumer configs being
dangerous, I can see two parts to this:

2a. Both the proposed configs seem to be fundamentally incompatible with
the Producer's existing key.serializer, value.serializer and
compression.type configs, likewise the consumers key.deserializer and
value.deserializer. I don't see a way to avoid this, since those existing
configs are already separate things. (I did consider whether using
special-case Deserializer and Serializer could be used instead, but that
doesn't work nicely; in this use case they're necessarily all configured
together). I think all we could do would be to reject configs which tried
to set those existing client configs in conjunction with fetch.raw.bytes
and send.raw.bytes.
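A sketch of the rejection described in 2a (config names are the ones proposed in KIP-712; the validation itself is hypothetical, not existing Kafka code):

```java
import java.util.Map;

public class RawConfigValidation {
    // Reject fetch.raw.bytes / send.raw.bytes when explicit (de)serializers
    // are also configured, since the two are fundamentally incompatible.
    static void validate(Map<String, String> configs) {
        boolean raw = configs.containsKey("send.raw.bytes")
                || configs.containsKey("fetch.raw.bytes");
        boolean serdes = configs.containsKey("key.serializer")
                || configs.containsKey("value.serializer")
                || configs.containsKey("key.deserializer")
                || configs.containsKey("value.deserializer");
        if (raw && serdes) {
            throw new IllegalArgumentException(
                "raw-bytes mode is incompatible with explicit (de)serializers");
        }
    }

    public static void main(String[] args) {
        validate(Map.of("send.raw.bytes", "true")); // accepted
        try {
            validate(Map.of("send.raw.bytes", "true",
                            "key.serializer", "StringSerializer"));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected"); // prints "rejected"
        }
    }
}
```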

2b. That still leaves a public Java API which would allow access to the raw
byte buffers. AFAICS we don't actually need user code to have access to the
raw buffers. It would be enough to get an opaque object that wrapped the
ByteBuffer from the consumer and pass it to the producer. It's only the
consumer and producer code which needs to be able to obtain the wrapped
buffer.

Kind regards,

Tom

On Tue, Mar 30, 2021 at 8:41 AM Ismael Juma  wrote:

> Hi Henry,
>
> Can you clarify why this "network performance" issue is only related to
> shallow mirroring? Generally, we want the protocol to be generic and not
> have a number of special cases. The more special cases you have, the
> tougher it becomes to test all the edge cases.
>
> Ismael
>
> On Mon, Mar 29, 2021 at 9:51 PM Henry Cai 
> wrote:
>
> > It's interesting this VOTE thread finally becomes a DISCUSS thread.
> >
> > For MM2 concern, I will take a look to see whether I can add the support
> > for MM2.
> >
> > For Ismael's concern on multiple batches in the ProduceRequest
> (conflicting
> > with KIP-98), here is my take:
> >
> > 1. We do need to group multiple batches in the same request otherwise the
> > network performance will suffer.
> > 2. For the concern on transactional message support as in KIP-98, since
> MM1
> > and MM2 currently don't support transactional messages, KIP-712 will not
> > attempt to support transactions either.  I will add a config option on
> > producer config: allowMultipleBatches.  By default this option will be
> off
> > and the user needs to explicitly turn on this option to use the shallow
> > mirror feature.  And if we detect both this option and transaction is
> > turned on we will throw an exception to protect current transaction
> > processing.
> > 3. In the future, when MM2 starts to support exactly-once and transactional
> > messages (is that KIP-656?), we can revisit this code.  The current
> > transactional message already makes the compromise that the messages in
> the
> > same RecordBatch (MessageSet) are sharing the same
> > sequence-id/transaction-id, so those messages need to be committed all
> > together.  I think when we support the shallow mirror with transactional
> > semantics, we will group all batches in the same ProduceRequest in the
> same
> > transaction boundary, they need to be committed all together.  On the
> > broker side, all batches coming from ProduceRequest (or FetchResponse)
> are
> > committed in the same log segment file as one unit (current behavior).
> >
> > On Mon, Mar 29, 2021 at 8:46 AM Ryanne Dolan 
> > wrote:
> >
> > > Ah, I see, thanks Ismael. Now I understand your concern.
> > >
> > > From KIP-98, re this change in v3:
> > >
> > > "This allows us to remove the message set size since each message set
> > > already contains a field for the size. More importantly, since there is
> > > only one message set to be written to the log, partial produce failures
> > are
> > > no longer possible. The full message set is either successfully written
> > to
> > > the log (and replicated) or it is not."
> > >
> > > The schema and size field don't seem to be an issue, as KIP-712 already
> > > addresses.
> > >
> > > The partial produce failure issue is something I don't understand. I
> > can't
> > > tell if this was done out of convenience at the time or if there is
> > > something incompatible with partial produce success/failure and EOS.
> Does
> > > anyone know?
> > >
> > > Ryanne
> > >
> > > On Mon, Mar 29, 2021, 1:41 AM Ismael Juma  wrote:
> > >
> > > > Ryanne,
> > > >
> > > > You misunderstood the referenced comment. It is about the produce
> > request
> > > > change to have multiple batches:
> > > >
> > > > "Up to ProduceRequest V2, a ProduceRequest can contain multiple
> batches
> > > of
> > > > messages stored in the record_set field, but this was disabled in V3.
> > We
> > > > are proposing to bring the multiple batches feature back to improve
> the
> > > > network throughput of the mirror maker producer when the original
> batch
> > > > size from source broker is too small."
> > > >
> > > > This is unrelated to shallow iteration.
> > > >
> > > > Ismael
> > > >
> > > > On Sun, Mar 28, 2021, 10:15 PM Ryanne Dolan

Re: [DISCUSS] KIP-727 Add --under-preferred-replica-partitions option to describe topics command

2021-03-30 Thread Tom Bentley
Hi Wenbing,

Thanks for the KIP, I can see this being useful.

1. Could you include an example command line and sample output of the
command in the KIP?

2. I think a better name for the option would be --non-preferred-leader
rather than --under-preferred-replica-partitions

3. One of the things a user might do after running this command would be to
use `kafka-preferred-replica-election.sh` to try electing the preferred
leader for (some of) those partitions.
`kafka-preferred-replica-election.sh` reads a JSON file containing the
partitions which should have the preferred leader elected. It would be
tedious if the user had to reformat the output of `kafka-topics.sh
--non-preferred-leader` in order to pass it to
`kafka-preferred-replica-election.sh`. The tools would need to use a common
format to do that, most obviously by `kafka-topics.sh
--non-preferred-leader` emitting the existing JSON format. It's a little

awkward because the other outputs from kafka-topics.sh don't naturally get
used as inputs to the other tools, so I don't think a top level option,
such as `--output=json` would be a good fit. I did wonder if
`--non-preferred-leader=json` could be used for the JSON output, and plain
`--non-preferred-leader` for the more human readable output, but maybe
`--non-preferred-leader-json` (instead of --non-preferred-leader) or
`--non-preferred-leader-format=json` (as well as --non-preferred-leader) is
a more obvious way to do it.
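For reference, the JSON that `kafka-preferred-replica-election.sh` already accepts looks like the following, so a JSON-emitting option would presumably target the same shape:

```json
{
  "partitions": [
    {"topic": "my-topic", "partition": 0},
    {"topic": "my-topic", "partition": 1}
  ]
}
```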

Kind regards,

Tom

On Sat, Mar 27, 2021 at 2:10 PM wenbing shen 
wrote:

> Hi everyone,
>
> I'd like to discuss the following proposal to add
> --under-preferred-replica-partitions option to describe topics command.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-727%3A+Add+--under-preferred-replica-partitions+option+to+describe+topics+command
>
> Many thanks,
>
> Wenbing
>
>


[jira] [Created] (KAFKA-12581) Remove deprecated Admin.electPreferredLeaders

2021-03-30 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-12581:
---

 Summary: Remove deprecated Admin.electPreferredLeaders
 Key: KAFKA-12581
 URL: https://issues.apache.org/jira/browse/KAFKA-12581
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 3.0.0








[jira] [Resolved] (KAFKA-12580) Remove deprecated close(long, TimeUnit)

2021-03-30 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-12580.
-
Fix Version/s: (was: 3.0.0)
   Resolution: Duplicate

Duplicate of KAFKA-12579.

> Remove deprecated close(long, TimeUnit)
> ---
>
> Key: KAFKA-12580
> URL: https://issues.apache.org/jira/browse/KAFKA-12580
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
>






[jira] [Created] (KAFKA-12580) Remove deprecated close(long, TimeUnit)

2021-03-30 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-12580:
---

 Summary: Remove deprecated close(long, TimeUnit)
 Key: KAFKA-12580
 URL: https://issues.apache.org/jira/browse/KAFKA-12580
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 3.0.0








[jira] [Created] (KAFKA-12579) Remove deprecated ExtendedSerializer/Deserializer

2021-03-30 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-12579:
---

 Summary: Remove deprecated ExtendedSerializer/Deserializer
 Key: KAFKA-12579
 URL: https://issues.apache.org/jira/browse/KAFKA-12579
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 3.0.0








[jira] [Created] (KAFKA-12578) Remove deprecated PrincipalBuilder

2021-03-30 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-12578:
---

 Summary: Remove deprecated PrincipalBuilder
 Key: KAFKA-12578
 URL: https://issues.apache.org/jira/browse/KAFKA-12578
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 3.0.0








[jira] [Created] (KAFKA-12577) Remove deprecated `quota.producer.default` and `quota.consumer.default` configurations

2021-03-30 Thread David Jacot (Jira)
David Jacot created KAFKA-12577:
---

 Summary: Remove deprecated `quota.producer.default` and 
`quota.consumer.default` configurations
 Key: KAFKA-12577
 URL: https://issues.apache.org/jira/browse/KAFKA-12577
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot
Assignee: David Jacot


`quota.producer.default` and `quota.consumer.default` were deprecated in AK 
0.11.0.0. I propose to remove them in AK 3.0.
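The replacement for these static broker-side defaults is the dynamic default quota, set per entity type with `kafka-configs.sh`, e.g.:

```
# Set default producer/consumer byte-rate quotas for all clients
# (replaces static quota.producer.default / quota.consumer.default)
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type clients --entity-default \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152'
```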





Re: [DISCUSS] KIP-712: Shallow Mirroring

2021-03-30 Thread Ismael Juma
Hi Henry,

Can you clarify why this "network performance" issue is only related to
shallow mirroring? Generally, we want the protocol to be generic and not
have a number of special cases. The more special cases you have, the
tougher it becomes to test all the edge cases.

Ismael

On Mon, Mar 29, 2021 at 9:51 PM Henry Cai 
wrote:

> It's interesting this VOTE thread finally becomes a DISCUSS thread.
>
> For MM2 concern, I will take a look to see whether I can add the support
> for MM2.
>
> For Ismael's concern on multiple batches in the ProduceRequest (conflicting
> with KIP-98), here is my take:
>
> 1. We do need to group multiple batches in the same request otherwise the
> network performance will suffer.
> 2. For the concern on transactional message support as in KIP-98, since MM1
> and MM2 currently don't support transactional messages, KIP-712 will not
> attempt to support transactions either.  I will add a config option on
> producer config: allowMultipleBatches.  By default this option will be off
> and the user needs to explicitly turn on this option to use the shallow
> mirror feature.  And if we detect both this option and transaction is
> turned on we will throw an exception to protect current transaction
> processing.
> 3. In the future, when MM2 starts to support exactly-once and transactional
> messages (is that KIP-656?), we can revisit this code.  The current
> transactional message already makes the compromise that the messages in the
> same RecordBatch (MessageSet) are sharing the same
> sequence-id/transaction-id, so those messages need to be committed all
> together.  I think when we support the shallow mirror with transactional
> semantics, we will group all batches in the same ProduceRequest in the same
> transaction boundary, they need to be committed all together.  On the
> broker side, all batches coming from ProduceRequest (or FetchResponse) are
> committed in the same log segment file as one unit (current behavior).
>
> On Mon, Mar 29, 2021 at 8:46 AM Ryanne Dolan 
> wrote:
>
> > Ah, I see, thanks Ismael. Now I understand your concern.
> >
> > From KIP-98, re this change in v3:
> >
> > "This allows us to remove the message set size since each message set
> > already contains a field for the size. More importantly, since there is
> > only one message set to be written to the log, partial produce failures
> are
> > no longer possible. The full message set is either successfully written
> to
> > the log (and replicated) or it is not."
> >
> > The schema and size field don't seem to be an issue, as KIP-712 already
> > addresses.
> >
> > The partial produce failure issue is something I don't understand. I
> can't
> > tell if this was done out of convenience at the time or if there is
> > something incompatible with partial produce success/failure and EOS. Does
> > anyone know?
> >
> > Ryanne
> >
> > On Mon, Mar 29, 2021, 1:41 AM Ismael Juma  wrote:
> >
> > > Ryanne,
> > >
> > > You misunderstood the referenced comment. It is about the produce
> request
> > > change to have multiple batches:
> > >
> > > "Up to ProduceRequest V2, a ProduceRequest can contain multiple batches
> > of
> > > messages stored in the record_set field, but this was disabled in V3.
> We
> > > are proposing to bring the multiple batches feature back to improve the
> > > network throughput of the mirror maker producer when the original batch
> > > size from source broker is too small."
> > >
> > > This is unrelated to shallow iteration.
> > >
> > > Ismael
> > >
> > > On Sun, Mar 28, 2021, 10:15 PM Ryanne Dolan 
> > wrote:
> > >
> > > > Ismael, I don't think KIP-98 is related. Shallow iteration was
> removed
> > in
> > > > KAFKA-732, which predates KIP-98 by a few years.
> > > >
> > > > Ryanne
> > > >
> > > > On Sun, Mar 28, 2021, 11:25 PM Ismael Juma 
> wrote:
> > > >
> > > > > Thanks for the KIP. I have a few high level comments:
> > > > >
> > > > > 1. Like Tom, I'm not convinced about the proposal to make this
> change
> > > to
> > > > > MirrorMaker 1 if we intend to deprecate it and remove it. I would
> > > rather
> > > > us
> > > > > focus our efforts on the implementation we intend to support going
> > > > forward.
> > > > > 2. The producer/consumer configs seem pretty dangerous for general
> > > usage,
> > > > > but the KIP doesn't address the potential downsides.
> > > > > 3. How does the ProducerRequest change impact exactly-once (if at
> > all)?
> > > > The
> > > > > change we are reverting was done as part of KIP-98. Have we
> > considered
> > > > the
> > > > > original reasons for the change?
> > > > >
> > > > > Thanks,
> > > > > Ismael
> > > > >
> > > > > On Wed, Feb 10, 2021 at 12:58 PM Vahid Hashemian <
> > > > > vahid.hashem...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Retitled the thread to conform to the common format.
> > > > > >
> > > > > > On Fri, Feb 5, 2021 at 4:00 PM Ning Zhang <
> ning2008w...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > Hello Henry,
> > > > > > >
> > > > > > 

[jira] [Resolved] (KAFKA-12573) Removed deprecated `Metric#value`

2021-03-30 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-12573.
-
Fix Version/s: 3.0.0
 Reviewer: Ismael Juma
   Resolution: Fixed

> Removed deprecated `Metric#value`
> -
>
> Key: KAFKA-12573
> URL: https://issues.apache.org/jira/browse/KAFKA-12573
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Minor
> Fix For: 3.0.0
>
>
> The `Metric#value` method was deprecated in AK 1.0. It makes sense to remove 
> it in AK 3.0.
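Callers migrate to `metricValue()`, which returns `Object` rather than `double` (migration sketch):

```java
// Before (removed in 3.0):  double v = metric.value();
// After:                    Object o = metric.metricValue();
//                           double v = (double) o;  // when the metric is numeric
```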


