Build failed in Jenkins: kafka-trunk-jdk8 #3978

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-8950: Fix KafkaConsumer Fetcher breaking on concurrent 
disconnect

[github] MINOR: AbstractRequestResponse should be an interface (#7513)

[jason] KAFKA-9004; Prevent older clients from fetching from a follower (#7531)

[github] MINOR: Upgrade zk to 3.5.6 (#7544)

[matthias] MINOR: Add ability to wait for all instances in an application to be

[matthias] KAFKA-9058: Lift queriable and materialized restrictions on FK Join

[bbejeck] MINOR: Fix JavaDoc warning (#7546)


--
[...truncated 8.05 MB...]

org.apache.kafka.connect.util.ConnectUtilsTest > 
testLookupKafkaClusterIdTimeout PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
STARTED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
STARTED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
returnNullWithTopicAuthorizationFailure STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
returnNullWithTopicAuthorizationFailure PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWhenItDoesNotExist STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWhenItDoesNotExist PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldReturnFalseWhenSuppliedNullTopicDescription STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldReturnFalseWhenSuppliedNullTopicDescription PASSED

org.apache.kafka.connect.util.TopicAdminTest > returnNullWithApiVersionMismatch 
STARTED

org.apache.kafka.connect.util.TopicAdminTest > returnNullWithApiVersionMismatch 
PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
returnNullWithClusterAuthorizationFailure STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
returnNullWithClusterAuthorizationFailure PASSED

org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicyTest
 > testPrincipalPlusOtherConfigs STARTED

org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicyTest
 > testPrincipalPlusOtherConfigs PASSED

org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicyTest
 > testPrincipalOnly STARTED

org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicyTest
 > testPrincipalOnly PASSED

org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicyTest
 > testNoOverrides STARTED

org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicyTest
 > testNoOverrides PASSED

org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicyTest
 > testWithOverrides STARTED

org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicyTest
 > testWithOverrides PASSED

org.apache.kafka.connect.integration.SessionedProtocolIntegrationTest > 
ensureInternalEndpointIsSecured STARTED

org.apache.kafka.connect.integration.SessionedProtocolIntegrationTest > 
ensureInternalEndpointIsSecured SKIPPED

org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest > 
testAddAndRemoveWorker STARTED

org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest > 
testAddAndRemoveWorker PASSED

org.apache.kafka.connect.integration.RestExtensionIntegrationTest > 
testRestExtensionApi STARTED

org.apache.kafka.connect.integration.RestExtensionIntegrationTest > 
testRestExtensionApi PASSED

org.apache.kafka.connect.integration.StartAndStopLatchTest > 
shouldReturnFalseWhenAwaitingForDependentLatchToComplete STARTED

org.apache.kafka.connect.integration.StartAndStopLatchTest > 
shouldReturnFalseWhenAwaitingForDependentLatchToComplete PASSED

org.apache.kafka.connect.integration.StartAndStopLatchTest > 
shouldReturnTrueWhenAwaitingForStartAndStopAndDependentLatch STARTED

org.apache.kafka.connect.integration.StartAndStopLatchTest > 
shouldReturnTrueWhenAwaitingForStartAndStopAndDependentLatch PASSED

org.apache.kafka.connect.integration.StartAndStopLatchTest > 
shouldReturnFalseWhenAwaitingForStartToNeverComplete STARTED

org.apache.kafka.connect.integration.StartAndStopLatchTest > 
shouldReturnFalseWhenAwaitingForStartToNeverComplete PASSED

org.apache.kafka.connect.integration.StartAndStopLatchTest > 
shouldReturnFalseWhenAwaitingForStopToNeverComplete STARTED

org.apache.k

[jira] [Resolved] (KAFKA-8962) KafkaAdminClient#describeTopics always goes through the controller

2019-10-17 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8962.

Fix Version/s: 2.4.0
   Resolution: Fixed

> KafkaAdminClient#describeTopics always goes through the controller
> --
>
> Key: KAFKA-8962
> URL: https://issues.apache.org/jira/browse/KAFKA-8962
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dhruvil Shah
>Priority: Major
> Fix For: 2.4.0
>
>
> KafkaAdminClient#describeTopics makes a MetadataRequest against the 
> controller. We should consider routing the request to any broker in the 
> cluster using `LeastLoadedNodeProvider` instead, so that we don't overwhelm 
> the controller with these requests.
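
For context, the call that ends up issuing this MetadataRequest is the standard admin lookup; a minimal sketch (bootstrap address and topic name are placeholders):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeTopicsExample {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Before this fix, the MetadataRequest behind this call always went to
            // the controller; per the ticket it can now go to the least-loaded broker.
            final TopicDescription description = admin
                .describeTopics(Collections.singleton("my-topic")) // placeholder topic
                .all().get().get("my-topic");
            System.out.println(description.partitions().size() + " partitions");
        }
    }
}
{code}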



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9064) Observe transient issue with kinit command

2019-10-17 Thread Pradeep Bansal (Jira)
Pradeep Bansal created KAFKA-9064:
-

 Summary: Observe transient issue with kinit command
 Key: KAFKA-9064
 URL: https://issues.apache.org/jira/browse/KAFKA-9064
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.2.1
Reporter: Pradeep Bansal


I have specified the kinit command to be skinit. While this works fine most of 
the time, sometimes I see the exception below, where it doesn't respect the 
provided kinit command and uses the default value. Can this be handled?

 
{code}
[2019-02-19 10:20:07,862] WARN [Principal=null]: Could not renew TGT due to problem running shell command: '/usr/bin/kinit -R'. Exiting refresh thread. (org.apache.kafka.common.security.kerberos.KerberosLogin)
org.apache.kafka.common.utils.Shell$ExitCodeException: kinit: Matching credential not found (filename: /tmp/krb5cc_25012_76850_sshd_w6VpLC8R0Y) while renewing credentials
	at org.apache.kafka.common.utils.Shell.runCommand(Shell.java:130)
	at org.apache.kafka.common.utils.Shell.run(Shell.java:76)
	at org.apache.kafka.common.utils.Shell$ShellCommandExecutor.execute(Shell.java:204)
	at org.apache.kafka.common.utils.Shell.execCommand(Shell.java:268)
	at org.apache.kafka.common.utils.Shell.execCommand(Shell.java:255)
	at org.apache.kafka.common.security.kerberos.KerberosLogin.lambda$login$10(KerberosLogin.java:212)
	at java.base/java.lang.Thread.run(Thread.java:834)
{code}
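
For reference, the command the Kerberos login refresh thread shells out to is set by the sasl.kerberos.kinit.cmd client property; a minimal sketch of overriding it as the reporter describes (bootstrap address, protocol, and service name are placeholder assumptions):

{code}
import java.util.Properties;
import org.apache.kafka.common.config.SaslConfigs;

public class KinitConfigSketch {
    public static Properties clientProps() {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");    // placeholder
        props.put("security.protocol", "SASL_PLAINTEXT"); // placeholder
        props.put("sasl.kerberos.service.name", "kafka"); // placeholder
        // Default is /usr/bin/kinit; the report above overrides it with skinit.
        props.put(SaslConfigs.SASL_KERBEROS_KINIT_CMD, "/usr/bin/skinit");
        return props;
    }
}
{code}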



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9063) Kafka performance drops with number of topics even when producer is producing on one topic

2019-10-17 Thread Pradeep Bansal (Jira)
Pradeep Bansal created KAFKA-9063:
-

 Summary: Kafka performance drops with number of topics even when 
producer is producing on one topic
 Key: KAFKA-9063
 URL: https://issues.apache.org/jira/browse/KAFKA-9063
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.2.1
Reporter: Pradeep Bansal
 Attachments: image-2019-10-18-10-22-40-372.png

# We started a throughput test with 1 topic; the number of topics present in 
the cluster at that time was 1 (excluding the roughly 1,000 topics that 
already existed)
 # We left it running for about 2 hours (the mean throughput we observed during 
this period was 54k msgs/sec)
 # After this, we started creating 10,000 topics one by one using a script
 # We noted throughput values after creating 100 topics, and again after 
200, 300, 400… and so on until all 10,000 were created
 # After all 10,000 topics were created, we left the test running for another 1 hr.

 During the entire duration, we were producing only on a single topic.

!image-2019-10-18-10-22-40-372.png!
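
A minimal sketch of the kind of measurement loop involved (bootstrap address, topic name, payload, and record count are placeholder assumptions):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Produce to one fixed topic and report the achieved rate, while topics
// are being created elsewhere in the cluster.
public class SingleTopicThroughput {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            final long start = System.currentTimeMillis();
            final long numRecords = 1_000_000L; // placeholder count
            for (long i = 0; i < numRecords; i++) {
                producer.send(new ProducerRecord<>("perf-topic", "payload")); // fixed topic
            }
            producer.flush();
            final double secs = (System.currentTimeMillis() - start) / 1000.0;
            System.out.printf("%.0f msgs/sec%n", numRecords / secs);
        }
    }
}
{code}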



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.3-jdk8 #130

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] Fix bug in AssignmentInfo#encode and add additional logging (#7545)


--
[...truncated 2.93 MB...]

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
STARTED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion PASSED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers STARTED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers PASSED

kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
STARTED

kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr PASSED

kafka.controller.ControllerChannelManagerTest > 
testMixedDeleteAndNotDeleteStopReplicaRequests STARTED

kafka.controller.ControllerChannelManagerTest > 
testMixedDeleteAndNotDeleteStopReplicaRequests PASSED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > testUpdateMetadataRequestSent 
STARTED

kafka.controller.ControllerChannelManagerTest > testUpdateMetadataRequestSent 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataRequestDuringTopicDeletion STARTED

kafka.controller.ControllerChannelMana

Build failed in Jenkins: kafka-2.4-jdk8 #25

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Upgrade zk to 3.5.6 (#7544)

[jason] KAFKA-9004; Prevent older clients from fetching from a follower (#7531)

[jason] MINOR: Augment log4j to add generation number in performAssign (#7451)

[matthias] MINOR: Add ability to wait for all instances in an application to be

[matthias] KAFKA-9058: Lift queriable and materialized restrictions on FK Join


--
[...truncated 4.97 MB...]

kafka.admin.AclCommandTest > testAclsOnPrefixedResourcesWithAuthorizer STARTED

kafka.admin.AclCommandTest > testAclsOnPrefixedResourcesWithAuthorizer PASSED

kafka.admin.AclCommandTest > testProducerConsumerCliWithAuthorizer STARTED

kafka.admin.AclCommandTest > testProducerConsumerCliWithAuthorizer PASSED

kafka.admin.AclCommandTest > testAclCliWithAdminAPI STARTED

kafka.admin.AclCommandTest > testAclCliWithAdminAPI PASSED

kafka.admin.ListConsumerGroupTest > testListWithUnrecognizedNewConsumerOption 
STARTED

kafka.admin.ListConsumerGroupTest > testListWithUnrecognizedNewConsumerOption 
PASSED

kafka.admin.ListConsumerGroupTest > testListConsumerGroups STARTED

kafka.admin.ListConsumerGroupTest > testListConsumerGroups PASSED

kafka.admin.TimeConversionTests > testDateTimeFormats STARTED

kafka.admin.TimeConversionTests > testDateTimeFormats PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldRemoveThrottleReplicaListBasedOnProposedAssignment STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldRemoveThrottleReplicaListBasedOnProposedAssignment PASSED

kafka.admin.ReassignPartitionsCommandTest > testReassigningNonExistingPartition 
STARTED

kafka.admin.ReassignPartitionsCommandTest > testReassigningNonExistingPartition 
PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultipleTopics STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultipleTopics PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldNotOverwriteExistingPropertiesWhenLimitIsAdded STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldNotOverwriteExistingPropertiesWhenLimitIsAdded PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultipleTopicsAndPartitions STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultipleTopicsAndPartitions PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldRemoveThrottleLimitFromAllBrokers STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldRemoveThrottleLimitFromAllBrokers PASSED

kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicas STARTED

kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicas PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultiplePartitions STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultiplePartitions PASSED

kafka.admin.ReassignPartitionsCommandTest > 
testPartitionReassignmentNonOverlappingReplicas STARTED

kafka.admin.ReassignPartitionsCommandTest > 
testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.ReassignPartitionsCommandTest > 
testPartitionReassignmentWithLeaderNotInNewReplicas STARTED

kafka.admin.ReassignPartitionsCommandTest > 
testPartitionReassignmentWithLeaderNotInNewReplicas PASSED

kafka.admin.ReassignPartitionsCommandTest > 
testResumePartitionReassignmentThatWasCompleted STARTED

kafka.admin.ReassignPartitionsCommandTest > 
testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.ReassignPartitionsCommandTest > shouldSetQuotaLimit STARTED

kafka.admin.ReassignPartitionsCommandTest > shouldSetQuotaLimit PASSED

kafka.admin.ReassignPartitionsCommandTest > 
testPartitionReassignmentWithLeaderInNewReplicas STARTED

kafka.admin.ReassignPartitionsCommandTest > 
testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasWhenProposedIsSubsetOfExisting STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasWhenProposedIsSubsetOfExisting PASSED

kafka.admin.ReassignPartitionsCommandTest > shouldUpdateQuotaLimit STARTED

kafka.admin.ReassignPartitionsCommandTest > shouldUpdateQuotaLimit PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindTwoMovingReplicasInSamePartition STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindTwoMovingReplicasInSamePartition PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldNotOverwriteEntityConfigsWhenUpdatingThrottledReplicas STARTED

kafka.admin.ReassignPartitionsCommandTest > 
shouldNotOverwriteEntityConfigsWhenUpdatingThrottledReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testMissingPartition0 STARTED

kafka.admin.AddPartitionsTest > testMissingPartition0 PASSED

kafka.admin.AddPartitionsTest 

[jira] [Created] (KAFKA-9062) RocksDB writes may stall after bulk loading lots of state

2019-10-17 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9062:
--

 Summary: RocksDB writes may stall after bulk loading lots of state
 Key: KAFKA-9062
 URL: https://issues.apache.org/jira/browse/KAFKA-9062
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman


RocksDB may stall writes at times when background compactions or flushes are 
having trouble keeping up. This means we can effectively end up blocking 
indefinitely during a StateStore#put call within Streams, and may get kicked 
from the group if the throttling does not ease up within the max poll interval.

Example: when restoring large amounts of state from scratch, we use the 
strategy recommended by RocksDB of turning off automatic compactions and 
dumping everything into L0. We do batch somewhat, but do not sort these small 
batches before loading into the db, so we end up with a large number of 
unsorted L0 files.

When restoration is complete and we toggle the db back to normal (not bulk 
loading) settings, a background compaction is triggered to merge all these into 
the next level. This background compaction can take a long time to merge 
unsorted keys, especially when the amount of data is quite large.

Any new writes while the number of L0 files exceeds the max will be stalled 
until the compaction can finish, and processing after restoring from scratch 
can block beyond the polling interval.
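
One possible user-side mitigation, sketched under the assumption that loosening RocksDB's L0 stall thresholds is acceptable for the workload (the class name and trigger values are illustrative, not the fix ultimately adopted):

{code}
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Raises the L0 file-count triggers at which RocksDB first slows down and
// then stops writes, so the post-restore compaction does not stall puts.
public class StallAvoidingConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        options.setLevel0SlowdownWritesTrigger(1_000_000); // illustrative value
        options.setLevel0StopWritesTrigger(1_000_000);     // illustrative value
    }
}
{code}

Such a setter would be plugged in via the rocksdb.config.setter Streams config.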



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #891

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: AbstractRequestResponse should be an interface (#7513)

[jason] KAFKA-9004; Prevent older clients from fetching from a follower (#7531)

[github] MINOR: Upgrade zk to 3.5.6 (#7544)

[matthias] MINOR: Add ability to wait for all instances in an application to be

[matthias] KAFKA-9058: Lift queriable and materialized restrictions on FK Join

[bbejeck] MINOR: Fix JavaDoc warning (#7546)


--
[...truncated 2.70 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.a

[DISCUSS] KIP-539: Implement mechanism to flush out records in low volume suppression buffers

2019-10-17 Thread Richard Yu
Hi all,

I wish to discuss this KIP, which would help us resolve some issues we
have with suppression buffers.
Below is the link:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-539%3A+Implement+mechanism+to+flush+out+records+in+low+volume+suppression+buffers

@John Roesler if you have time, would be great if we could get your input.

Cheers,
Richard


Build failed in Jenkins: kafka-2.2-jdk8 #183

2019-10-17 Thread Apache Jenkins Server
See 

Changes:


--
Started by user rhauch
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H37 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 75d1c0b131d567d24dfae0b9dcf08872377985ce 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 75d1c0b131d567d24dfae0b9dcf08872377985ce
Commit message: "Fix bug in AssignmentInfo#encode and add additional logging 
(#7545)"
 > git rev-list --no-walk 75d1c0b131d567d24dfae0b9dcf08872377985ce # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.2-jdk8] $ /bin/bash -xe /tmp/jenkins6138474637694951898.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins6138474637694951898.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=75d1c0b131d567d24dfae0b9dcf08872377985ce, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Not sending mail to unregistered user b...@confluent.io


Build failed in Jenkins: kafka-2.1-jdk8 #238

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[bill] Fix bug in AssignmentInfo#encode and add additional logging (#7545)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.1^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.1^{commit} # timeout=10
Checking out Revision 16e3c52f1d8b12106add2db87c6c622bbc932ac0 
(refs/remotes/origin/2.1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 16e3c52f1d8b12106add2db87c6c622bbc932ac0
Commit message: "Fix bug in AssignmentInfo#encode and add additional logging 
(#7545)"
 > git rev-list --no-walk f884bc4f76fa9ee7e6f33086bdcc22e64a5ce911 # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.1-jdk8] $ /bin/bash -xe /tmp/jenkins2964005714626128610.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins2964005714626128610.sh: line 4: 
/home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=16e3c52f1d8b12106add2db87c6c622bbc932ac0, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #230
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user b...@confluent.io


Build failed in Jenkins: kafka-2.2-jdk8 #182

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[bill] Fix bug in AssignmentInfo#encode and add additional logging (#7545)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 75d1c0b131d567d24dfae0b9dcf08872377985ce 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 75d1c0b131d567d24dfae0b9dcf08872377985ce
Commit message: "Fix bug in AssignmentInfo#encode and add additional logging 
(#7545)"
 > git rev-list --no-walk 3a29f334e8720d27de7227d3522e0dbc80e293ce # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.2-jdk8] $ /bin/bash -xe /tmp/jenkins7873283122310053126.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins7873283122310053126.sh: line 4: 
/home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=75d1c0b131d567d24dfae0b9dcf08872377985ce, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Not sending mail to unregistered user b...@confluent.io


Re: [VOTE] KIP-534: Retain tombstones for approximately delete.retention.ms milliseconds

2019-10-17 Thread Guozhang Wang
Thanks for the reply Jason.

On Thu, Oct 17, 2019 at 10:40 AM Jason Gustafson  wrote:

> Hi Guozhang,
>
> It's a fair point. For control records, I think it's a non-issue since they
> are tiny and not batched. So the only case where this might matter is large
> batch deletions. I think the size difference is not a major issue itself,
> but I think it's worth mentioning in the KIP the risk of exceeding the max
> message size. I think the code should probably make this more of a soft
> limit when cleaning. We have run into scenarios in the past as well where
> recompression has actually increased message size. We may also want to be
> able to upconvert messages to the new format in the future in the cleaner.
>
> -Jason
>
>
>
> On Thu, Oct 17, 2019 at 9:08 AM Guozhang Wang  wrote:
>
> > Here's my understanding: when log compaction kicks in, the system time at
> > the moment would be larger than the message timestamp to be compacted, so
> > the modification on the batch timestamp would practically be increasing
> its
> > value, and hence the deltas for each inner message would be negative to
> > maintain their actual timestamp. Depending on the time diff between the
> > actual timestamp of the message and the time when log compaction happens,
> > this negative delta can be large or small since it not only depends on
> the
> > cleaner thread wakeup frequency but also dirty ratio etc.
> >
> > With varInt encoding, the num.bytes needed to encode an int varies from
> 1
> > to 5 bytes; before compaction, the deltas should be relatively small
> > positive values compared with the base timestamp, and hence most likely 1
> > or 2 bytes needed to encode, after compaction, the deltas could be
> > relatively large negative values that may take more bytes to encode.
> With a
> > record batch of 512 in practice, and suppose after compaction each record
> > would take 2 more bytes for encoding deltas, that would be 1K more per
> > batch. Usually it would not be too big of an issue with reasonable sized
> > message, but I just wanted to point out this as a potential regression.
> >
> >
> > Guozhang
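
To make the sizing point above concrete, here is a stand-alone sketch of zigzag + varint byte counting of the kind the record format uses for timestamp deltas (illustrative only; the real encoder in ByteUtils operates on varlongs):

{code}
// Zigzag-encode, then count varint bytes (7 payload bits per byte).
static int varintSize(final int value) {
    int zigzag = (value << 1) ^ (value >> 31); // small values of either sign stay small
    int bytes = 1;
    while ((zigzag & 0xFFFFFF80) != 0) {
        zigzag >>>= 7;
        bytes++;
    }
    return bytes;
}
{code}

Deltas in [-64, 63] fit in one byte, while a negative delta larger than roughly a day and a half of milliseconds already needs all five bytes of an int varint, which is the growth being flagged above.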
> >
> > On Wed, Oct 16, 2019 at 9:36 PM Richard Yu 
> > wrote:
> >
> > > Hi Guozhang,
> > >
> > > Your understanding basically is on point.
> > >
> > > I haven't looked into the details for what happens if we change the
> base
> > > timestamp and how its calculated, so I'm not clear on how small the
> delta
> > > (or big) is.
> > > To be fair, would the delta size pose a big problem if it takes up
> > more
> > > bytes to encode?
> > >
> > > Cheers,
> > > Richard
> > >
> > > On Wed, Oct 16, 2019 at 7:36 PM Guozhang Wang 
> > wrote:
> > >
> > > > Hello Richard,
> > > >
> > > > Thanks for the KIP, I just have one clarification regarding "So the
> > idea
> > > is
> > > > to set the base timestamp to the delete horizon and adjust the deltas
> > > > accordingly." My understanding is that during compaction, for each
> > > > compacted new segment, we would set its base offset of each batch as
> > the
> > > > delete horizon, which is the "current system time that cleaner has
> seen
> > > so
> > > > far", and adjust the delta timestamps of each of the inner records of
> > the
> > > > batch (and practically the deltas will be all negative)?
> > > >
> > > > If that's case, could we do some back of the envelope calculation on
> > > what's
> > > > the possible smallest case of deltas? Note that since we use varInt
> for
> > > > delta values for each record, the smaller the negative delta, that
> > would
> > > > take more bytes to encode.
> > > >
> > > > Guozhang
> > > >
> > > > On Wed, Oct 16, 2019 at 6:48 PM Jason Gustafson 
> > > > wrote:
> > > >
> > > > > +1. Thanks Richard.
> > > > >
> > > > > On Wed, Oct 16, 2019 at 10:04 AM Richard Yu <
> > > yohan.richard...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Want to try to get this KIP wrapped up. So it would be great if
> we
> > > can
> > > > > get
> > > > > > some votes.
> > > > > >
> > > > > > Cheers,
> > > > > > Richard
> > > > > >
> > > > > > On Tue, Oct 15, 2019 at 12:58 PM Jun Rao 
> wrote:
> > > > > >
> > > > > > > Hi, Richard,
> > > > > > >
> > > > > > > Thanks for the updated KIP. +1 from me.
> > > > > > >
> > > > > > > Jun
> > > > > > >
> > > > > > > On Tue, Oct 15, 2019 at 12:46 PM Richard Yu <
> > > > > yohan.richard...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Jun,
> > > > > > > >
> > > > > > > > I've updated the link accordingly. :)
> > > > > > > > Here is the updated KIP link:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-534%3A+Retain+tombstones+and+transaction+markers+for+approximately+delete.retention.ms+milliseconds
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > > Richard
> > > > > > > >
> > > > > > > > On Mon, Oct 14, 2019 at 5:12 PM Jun Rao 
> > > wrote:
> > > > > > > >
> > > > > > > > > Hi, Richard,
> > > >

Re: [VOTE] KIP-534: Retain tombstones for approximately delete.retention.ms milliseconds

2019-10-17 Thread Richard Yu
Hi Guozhang, Jason,

I've updated the KIP to include this warning as well.
If there is anything else that we need for it, let me know. :)
Otherwise, we should vote this KIP in.

Cheers,
Richard

On Thu, Oct 17, 2019 at 10:41 AM Jason Gustafson  wrote:

> Hi Guozhang,
>
> It's a fair point. For control records, I think it's a non-issue since they
> are tiny and not batched. So the only case where this might matter is large
> batch deletions. I think the size difference is not a major issue itself,
> but I think it's worth mentioning in the KIP the risk of exceeding the max
> message size. I think the code should probably make this more of a soft
> limit when cleaning. We have run into scenarios in the past as well where
> recompression has actually increased message size. We may also want to be
> able to upconvert messages to the new format in the future in the cleaner.
>
> -Jason
>
>
>
> On Thu, Oct 17, 2019 at 9:08 AM Guozhang Wang  wrote:
>
> > Here's my understanding: when log compaction kicks in, the system time at
> > the moment would be larger than the message timestamp to be compacted, so
> > the modification on the batch timestamp would practically be increasing
> its
> > value, and hence the deltas for each inner message would be negative to
> > maintain their actual timestamp. Depending on the time diff between the
> > actual timestamp of the message and the time when log compaction happens,
> > this negative delta can be large or small since it not long depends on
> the
> > cleaner thread wakeup frequency but also dirty ratio etc.
> >
> > With varInt encoding, the num.bytes needed to encode an int varies from
> 1
> > to 5 bytes; before compaction, the deltas should be relatively small
> > positive values compared with the base timestamp, and hence most likely 1
> > or 2 bytes needed to encode, after compaction, the deltas could be
> > relatively large negative values that may take more bytes to encode.
> With a
> > record batch of 512 in practice, and suppose after compaction each record
> > would take 2 more bytes for encoding deltas, that would be 1K more per
> > batch. Usually it would not be too big of an issue with reasonable sized
> > message, but I just wanted to point out this as a potential regression.
> >
> >
> > Guozhang
> >
> > On Wed, Oct 16, 2019 at 9:36 PM Richard Yu 
> > wrote:
> >
> > > Hi Guozhang,
> > >
> > > Your understanding basically is on point.
> > >
> > > I haven't looked into the details for what happens if we change the
> base
> > > timestamp and how it's calculated, so I'm not clear on how small the
> delta
> > > (or big) is.
> > > To be fair, would the delta size pose a big problem if it takes up
> > more
> > > bytes to encode?
> > >
> > > Cheers,
> > > Richard
> > >
> > > On Wed, Oct 16, 2019 at 7:36 PM Guozhang Wang 
> > wrote:
> > >
> > > > Hello Richard,
> > > >
> > > > Thanks for the KIP, I just have one clarification regarding "So the
> > idea
> > > is
> > > > to set the base timestamp to the delete horizon and adjust the deltas
> > > > accordingly." My understanding is that during compaction, for each
> > > > compacted new segment, we would set its base offset of each batch as
> > the
> > > > delete horizon, which is the "current system time that cleaner has
> seen
> > > so
> > > > far", and adjust the delta timestamps of each of the inner records of
> > the
> > > > batch (and practically the deltas will be all negative)?
> > > >
> > > > If that's case, could we do some back of the envelope calculation on
> > > what's
> > > > the possible smallest case of deltas? Note that since we use varInt
> for
> > > > delta values for each record, the smaller the negative delta, that
> > would
> > > > take more bytes to encode.
> > > >
> > > > Guozhang
> > > >
> > > > On Wed, Oct 16, 2019 at 6:48 PM Jason Gustafson 
> > > > wrote:
> > > >
> > > > > +1. Thanks Richard.
> > > > >
> > > > > On Wed, Oct 16, 2019 at 10:04 AM Richard Yu <
> > > yohan.richard...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Want to try to get this KIP wrapped up. So it would be great if
> we
> > > can
> > > > > get
> > > > > > some votes.
> > > > > >
> > > > > > Cheers,
> > > > > > Richard
> > > > > >
> > > > > > On Tue, Oct 15, 2019 at 12:58 PM Jun Rao 
> wrote:
> > > > > >
> > > > > > > Hi, Richard,
> > > > > > >
> > > > > > > Thanks for the updated KIP. +1 from me.
> > > > > > >
> > > > > > > Jun
> > > > > > >
> > > > > > > On Tue, Oct 15, 2019 at 12:46 PM Richard Yu <
> > > > > yohan.richard...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Jun,
> > > > > > > >
> > > > > > > > I've updated the link accordingly. :)
> > > > > > > > Here is the updated KIP link:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-534%3A+Retain+tombstones+and+transaction+markers+for+approximately+delete.retention.ms+milliseconds
> > > > > > > >
> > > > > > > > Cheers

[jira] [Resolved] (KAFKA-9053) AssignmentInfo#encode hardcodes the LATEST_SUPPORTED_VERSION

2019-10-17 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9053.

Resolution: Fixed

> AssignmentInfo#encode hardcodes the LATEST_SUPPORTED_VERSION 
> -
>
> Key: KAFKA-9053
> URL: https://issues.apache.org/jira/browse/KAFKA-9053
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 2.1.2, 2.2.2, 2.4.0, 2.3.2
>
>
> We should instead encode the commonlySupportedVersion field. This affects 
> version probing with a subscription change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.2-jdk8 #181

2019-10-17 Thread Apache Jenkins Server
See 

Changes:


--
Started by user rhauch
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H33 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 3a29f334e8720d27de7227d3522e0dbc80e293ce 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3a29f334e8720d27de7227d3522e0dbc80e293ce
Commit message: "MINOR: log reason for fatal error in locking state dir (#7534)"
 > git rev-list --no-walk 3a29f334e8720d27de7227d3522e0dbc80e293ce # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.2-jdk8] $ /bin/bash -xe /tmp/jenkins3596505928563876798.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins3596505928563876798.sh: line 4: 
/home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=3a29f334e8720d27de7227d3522e0dbc80e293ce, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1


Re: [DISCUSS] KIP-150 - Kafka-Streams Cogroup

2019-10-17 Thread Walker Carlson
Good catch, I updated that.

I have made a PR for this KIP.

I am then splitting it into 3 parts: first cogroup for a key-value store 
(here ), then for a timeWindowedStore, and then a sessionWindowedStore plus 
ensuring partitioning; a rough sketch of the API shape follows below.

Walker
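
For readers following the thread, a rough sketch of the API shape under discussion (clickStream, viewStream, and the Stats aggregate type are placeholders; exact signatures are assumptions pending the final KIP):

{code}
// Sketch: two grouped streams co-aggregated into a single store,
// with one aggregator per input stream.
final KGroupedStream<String, Long> clicks = clickStream.groupByKey();
final KGroupedStream<String, Long> views = viewStream.groupByKey();

final KTable<String, Stats> stats = clicks
    .cogroup((key, click, agg) -> agg.addClick(click))       // first input
    .cogroup(views, (key, view, agg) -> agg.addView(view))   // additional input
    .aggregate(Stats::new, Materialized.as("stats-store"));  // shared initializer/store
{code}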

On Tue, Oct 15, 2019 at 12:47 PM Matthias J. Sax 
wrote:

> Walker,
>
> thanks for picking up the KIP and reworking it for the changed API.
>
> Overall, the updated API suggestions make sense to me. The seem to align
> quite nicely with our current API design.
>
> One nit: In `CogroupedKStream#aggregate(...)` the type parameter of
> `Materialized` should be `V`, not `VR`?
>
>
> -Matthias
>
>
>
> On 10/14/19 2:57 PM, Walker Carlson wrote:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup
> > here
> > is a link
> >
> > On Mon, Oct 14, 2019 at 2:52 PM Walker Carlson 
> > wrote:
> >
> >> Hello all,
> >>
> >> I have picked up and updated KIP-150. Due to changes to the project
> since
> >> KIP #150 was written there are a few items that need to be updated.
> >>
> >> First item that changed is the adoption of the Materialized parameter.
> >>
> >> The second item is the WindowedBy. How the old KIP handles windowing is
> >> that it overloads the aggregate function to take in a Window object as
> well
> >> as the other parameters. The current practice to window grouped-streams
> is
> >> to call windowedBy and receive a windowed stream object. The existing
> >> interface for a windowed stream made from a grouped stream will not work
> >> for cogrouped streams. Hence, we have to make new interfaces for
> cogrouped
> >> windowed streams.
> >>
> >> Please take a look, I would like to hear your feedback,
> >>
> >> Walker
> >>
> >
>
>


Re: [DISCUSS] KIP-399: Extend ProductionExceptionHandler to cover serialization exceptions

2019-10-17 Thread Walker Carlson
I have read through the logs, and I am of the opinion that we should not let
the store and the changelog diverge. However, it is not obvious how we could
do that while still allowing the custom serializer to affect that topic. To
get around this, we can create a shell around the handler that does not
trigger the custom handler if the record is headed for a changelog topic; a
sketch of such a shell follows below. This is also a problem that we can fix
in the general handler, so I think we can add that fix to this KIP.

As for the repartition topics, I cannot see any reason not to apply the
custom logic. By default it fails in any case and does not have to be changed
from that.

For the DSL, I think that can come later, as this change would not affect the
status quo. EOS and this handler can be made mutually exclusive, or we could
give users the option to use both with a warning. I would be interested in
hearing other people's suggestions about that.

Walker
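
A minimal sketch of that shell, assuming the existing ProductionExceptionHandler interface; the delegate wiring, class name, and the "-changelog" suffix check are illustrative assumptions:

{code}
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

// Wraps a user-supplied handler but never swallows errors for changelog
// topics, so a store and its changelog cannot silently diverge.
public class ChangelogGuardingHandler implements ProductionExceptionHandler {
    private final ProductionExceptionHandler delegate;

    public ChangelogGuardingHandler(final ProductionExceptionHandler delegate) {
        this.delegate = delegate;
    }

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        if (record.topic().endsWith("-changelog")) { // naive check, illustrative only
            return ProductionExceptionHandlerResponse.FAIL;
        }
        return delegate.handle(record, exception);
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        delegate.configure(configs);
    }
}
{code}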

On Tue, Oct 15, 2019 at 11:50 PM Matthias J. Sax 
wrote:

> Walker,
>
> thanks for picking up this KIP. Did you read the previous discussion?
> It's still unclear if we want to apply the handler to repartition topics
> or not, and how errors for stores and changelog topic should be handled.
> For the Processor API, users could catch `SerializationException` but
> for the DSL this is not possible. Even if there is no serialization
> problem, it's questionable if we should allow to swallow error when
> writing into the changelog topic, because the store content and the
> topic content could diverge, what is especially critical for the EOS case.
>
>
> -Matthias
>
> On 10/15/19 10:25 AM, Walker Carlson wrote:
> > Hello all,
> >
> > I would like to restart the discussion of this KIP 399
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-399%3A+Extend+ProductionExceptionHandler+to+cover+serialization+exceptions
> >.
> > I think it is some low hanging fruit that could be quite beneficial.
> >
> > Thanks,
> > Walker
> >
>
>


[jira] [Created] (KAFKA-9061) StreamStreamJoinIntegrationTest flaky test failures

2019-10-17 Thread Chris Pettitt (Jira)
Chris Pettitt created KAFKA-9061:


 Summary: StreamStreamJoinIntegrationTest flaky test failures
 Key: KAFKA-9061
 URL: https://issues.apache.org/jira/browse/KAFKA-9061
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Chris Pettitt
 Attachments: stacktraces-3269.txt

It looks like one test timed out during cleanup and all other tests failed due 
to lack of cleanup.

See attachment for detailed stack traces.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.2-jdk8 #180

2019-10-17 Thread Apache Jenkins Server
See 

Changes:


--
Started by user rhauch
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 3a29f334e8720d27de7227d3522e0dbc80e293ce 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3a29f334e8720d27de7227d3522e0dbc80e293ce
Commit message: "MINOR: log reason for fatal error in locking state dir (#7534)"
 > git rev-list --no-walk 3a29f334e8720d27de7227d3522e0dbc80e293ce # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.2-jdk8] $ /bin/bash -xe /tmp/jenkins5614304462531581295.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins5614304462531581295.sh: line 4: 
/home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=3a29f334e8720d27de7227d3522e0dbc80e293ce, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1


[jira] [Resolved] (KAFKA-8728) Flaky Test KTableSourceTopicRestartIntegrationTest #shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled

2019-10-17 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-8728.

Resolution: Cannot Reproduce

> Flaky Test KTableSourceTopicRestartIntegrationTest 
> #shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled
> --
>
> Key: KAFKA-8728
> URL: https://issues.apache.org/jira/browse/KAFKA-8728
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/719/tests]
> {quote}java.lang.AssertionError: Condition not met within timeout 3. 
> Table did not read all values
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353)
> at 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.assertNumberValuesRead(KTableSourceTopicRestartIntegrationTest.java:187)
> at 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled(KTableSourceTopicRestartIntegrationTest.java:113){quote}
> STDOUT
> {quote}[2019-07-29 04:08:45,009] ERROR [Controller id=2 epoch=3] Controller 2 
> epoch 3 failed to change state for partition __transaction_state-23 from 
> OnlinePartition to OnlinePartition (state.change.logger:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> __transaction_state-23 under strategy 
> ControlledShutdownPartitionLeaderElectionStrategy
> at 
> kafka.controller.ZkPartitionStateMachine.$anonfun$doElectLeaderForPartitions$7(PartitionStateMachine.scala:424)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at 
> kafka.controller.ZkPartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:421)
> at 
> kafka.controller.ZkPartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:335)
> at 
> kafka.controller.ZkPartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:236)
> at 
> kafka.controller.ZkPartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:157)
> at 
> kafka.controller.KafkaController.doControlledShutdown(KafkaController.scala:1091)
> at 
> kafka.controller.KafkaController.$anonfun$processControlledShutdown$1(KafkaController.scala:1053)
> at scala.util.Try$.apply(Try.scala:213)
> at 
> kafka.controller.KafkaController.processControlledShutdown(KafkaController.scala:1053)
> at kafka.controller.KafkaController.process(KafkaController.scala:1624)
> at kafka.controller.QueuedEvent.process(ControllerEventManager.scala:53)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:137)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:137)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8902) Benchmark cooperative vs eager rebalancing

2019-10-17 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-8902.
-
Resolution: Fixed

I wrote a simple Streams application using the Processor API to update 10 
stores with every record seen.

{code}
final int numStores = 10;
final Topology topology = new Topology();

topology.addSource(
    "source",
    new StringDeserializer(),
    new StringDeserializer(),
    "table-in"
);

topology.addProcessor(
    "processor",
    (ProcessorSupplier<String, String>) () -> new Processor<String, String>() {
        private final List<KeyValueStore<String, String>> stores =
            new ArrayList<>(numStores);

        @SuppressWarnings("unchecked")
        @Override
        public void init(final ProcessorContext context) {
            for (int i = 0; i < numStores; i++) {
                stores.add(
                    i,
                    (KeyValueStore<String, String>) context.getStateStore("store" + i)
                );
            }
        }

        @Override
        public void process(final String key, final String value) {
            for (final KeyValueStore<String, String> store : stores) {
                store.put(key, value);
            }
        }

        @Override
        public void close() {
            stores.clear();
        }
    },
    "source"
);
{code}
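
The snippet above assumes ten stores named "store0" through "store9" are
already registered on the topology. A sketch of that registration,
reconstructed to match the setup described below rather than taken from the
original code:

{code}
// Reconstruction (not the original code): register the stores the processor
// looks up via context.getStateStore("store" + i). The persistent variant is
// shown; an in-memory run would use Stores.inMemoryKeyValueStore instead.
// Assumes imports of Stores, Serdes, and java.util.Collections.
for (int i = 0; i < numStores; i++) {
    topology.addStateStore(
        Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("store" + i),
            Serdes.String(),
            Serdes.String()
        ).withCachingEnabled()
         .withLoggingEnabled(Collections.emptyMap()),
        "processor"
    );
}
{code}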

I tested this topology using both in-memory and on-disk stores, with caching 
and logging enabled.

My benchmark consisted of running one KafkaStreams instance and measuring its 
metrics, while simulating other nodes joining and leaving the cluster (by 
constructing the simulated nodes to participate in the consumer group protocol 
without actually doing any work). I tested three cluster rebalance scenarios:
* scale up: 100 partitions / 10 nodes = 10 tasks per node starting, run 4 
minutes, add one node (each node loses one task), run 2 minutes, add one node 
(each node loses another task), run two minutes, add two nodes (each node loses 
one task), end the test at the 10 minute mark
* rolling bounce: 100 partitions / 10 nodes = 10 tasks per node starting, run 4 
minutes, bounce each node in the cluster (waiting for it to join and all nodes 
to return to RUNNING before proceeding), end the test at the 10 minute mark
* full bounce: 100 partitions / 10 nodes = 10 tasks per node starting, run 4 
minutes, bounce each node in the cluster (without waiting, so they all leave 
and join at once), end the test at the 10 minute mark

For input data, I randomly generated a dataset of 10,000 keys, and another with 
100,000 keys, all with 1kB values. This data was pre-loaded into the broker, 
with compaction and retention disabled (so that every test iteration would get 
the same sequence of updates).

I ran all the benchmarks on AWS i3.large instances, with a dedicated broker 
node running on a separate i3.large instance.

For each test configuration and scenario, I ran 20 independent trials and 
discarded the high and low results (to exclude outliers), for 18 total data 
points. The key metric was the overall throughput of a single node during the 
test.

I compared the above results from:
* 2.3.1-SNAPSHOT (the current head of the 2.3 branch) - Eager protocol
* 2.4.0-SNAPSHOT (the current head of the 2.4 branch) - Cooperative protocol
* a modified 2.4.0-SNAPSHOT with cooperative rebalancing disabled - Eager 
protocol

What I found is that under all scenarios, all three versions performed the same 
(within a 99.9% significance threshold) under the same data sets and the same 
configurations.

I didn't see any marked improvement as a result of cooperative rebalancing 
alone, but this is only the foundation for several follow-on improvements. What 
is very good to know is that I also didn't find any regression as a result of 
the new protocol implementation.

> Benchmark cooperative vs eager rebalancing
> --
>
> Key: KAFKA-8902
> URL: https://issues.apache.org/jira/browse/KAFKA-8902
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.4.0
>
>
> Cause rebalance and measure:
> * overall throughput
> * paused time
> * (also look at the metrics from 
> (https://issues.apache.org/jira/browse/KAFKA-8609)):
> ** accumulated rebalance time
> Cluster/topic sizing:
> ** 10 instances
> ** 100 tasks (each instance gets 10 tasks)
> ** 1000 stores (each task gets 10 stores)
> * standbys = [0 and 1]
> Rolling bounce:
> * with and without state loss
> * shorter and faster than session timeout (shorter in particular should be 
> interesting)
> Expand (from 9 to 10)
> Contract (fro

Build failed in Jenkins: kafka-trunk-jdk8 #3977

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] MINOR: Improve FK Join docs and optimize null-fk case (#7536)

[wangguoz] KAFKA-9053: AssignmentInfo#encode hardcodes the 
LATEST_SUPPORTED_VERSION

[wangguoz] KAFKA-8496: System test for KIP-429 upgrades and compatibility 
(#7529)

[wangguoz] KAFKA-9000: fix flaky FK join test by using TTD (#7517)

[wangguoz] KAFKA-8104: Consumer cannot rejoin to the group after rebalancing

[matthias] KAFKA-8884: class cast exception improvement (#7309)

[matthias] MINOR: log reason for fatal error in locking state dir (#7534)

[manikumar] KAFKA-8874: Add consumer metrics to observe user poll behavior 
(KIP-517)


--
[...truncated 8.13 MB...]
org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCusto

Build failed in Jenkins: kafka-2.4-jdk8 #24

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] MINOR: AbstractRequestResponse should be an interface (#7513)


--
[...truncated 2.70 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache

[jira] [Reopened] (KAFKA-8700) Flaky Test QueryableStateIntegrationTest#queryOnRebalance

2019-10-17 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-8700:
--

Re-opening for further investigation. cc [~cpettitt-confluent]


> Flaky Test QueryableStateIntegrationTest#queryOnRebalance
> -
>
> Key: KAFKA-8700
> URL: https://issues.apache.org/jira/browse/KAFKA-8700
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3807/tests]
> {quote}java.lang.AssertionError: Condition not met within timeout 12. 
> waiting for metadata, store and value to be non null
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:292)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:382){quote}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.4.0 release

2019-10-17 Thread Matthias J. Sax
Just FYI:

There was also https://issues.apache.org/jira/browse/KAFKA-9058 that I
just merged.


-Matthias

On 10/17/19 7:59 AM, Manikumar wrote:
> Hi all,
> 
> The code freeze deadline has now passed and at this point only blockers
> will be allowed.
> We have three blockers for 2.4.0. I will move out most of the JIRAs that
> aren't currently
> being worked on. If you think any of the other JIRAs are critical to
> include in 2.4.0,
> please update the fix version, mark as blocker and ensure a PR is ready to
> merge.
> I will create the first RC as soon as we close the blockers.
> Please help to close out the 2.4.0 JIRAs.
> 
> current blockers:
> https://issues.apache.org/jira/browse/KAFKA-8943
> https://issues.apache.org/jira/browse/KAFKA-8992
> https://issues.apache.org/jira/browse/KAFKA-8972
> 
> Thank you!
> 
> On Tue, Oct 8, 2019 at 8:27 PM Manikumar  wrote:
> 
>> Thanks Bruno. We will mark KIP-471 as complete.
>>
>> On Tue, Oct 8, 2019 at 2:39 PM Bruno Cadonna  wrote:
>>
>>> Hi Manikumar,
>>>
>>> It is technically true that KIP-471 is not completed, but the only
>>> aspect missing is two metrics that I could not add due to the RocksDB
>>> version currently used in Streams. Adding those two metrics once the
>>> RocksDB version has been increased will be a minor effort. So, I would
>>> consider KIP-471 complete, with those two metrics blocked.
>>>
>>> Best,
>>> Bruno
>>>
>>> On Mon, Oct 7, 2019 at 8:44 PM Manikumar 
>>> wrote:

 Hi all,

 I have moved a couple of accepted KIPs without a PR to the next release.
>>> We
 still have quite a few KIPs
 with PRs that are being reviewed, but haven't yet been merged. I have
>>> left
 all of these in assuming these
 PRs are ready and not risky to merge.  Please update your assigned
 KIPs/JIRAs, if they are not ready and
  if you know they cannot make it to 2.4.0.

 Please ensure that all KIPs for 2.4.0 have been merged by Oct 16th. Any
 remaining KIPs
 will be moved to the next release.

 The KIPs still in progress are:

 - KIP-517: Add consumer metrics to observe user poll behavior
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior
>

 - KIP-511: Collect and Expose Client's Name and Version in the Brokers
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
>

 - KIP-474: To deprecate WindowStore#put(key, value)
  

 - KIP-471: Expose RocksDB Metrics in Kafka Streams
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471%3A+Expose+RocksDB+Metrics+in+Kafka+Streams
>

 - KIP-466: Add support for List serialization and deserialization
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List
 +serialization+and+deserialization>

 - KIP-455: Create an Administrative API for Replica Reassignment
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
>

 - KIP-446: Add changelog topic configuration to KTable suppress
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
>

 - KIP-444: Augment metrics for Kafka Streams
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams
>

 - KIP-434: Add Replica Fetcher and Log Cleaner Count Metrics
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-434%3A+Add+Replica+Fetcher+and+Log+Cleaner+Count+Metrics
>

 - KIP-401: TransformerSupplier/ProcessorSupplier StateStore connecting
  <
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97553756


 - KIP-396: Add Reset/List Offsets Operations to AdminClient
   <
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484
>

 - KIP-221: Enhance DSL with Connecting Topic Creation and Repartition
>>> Hint
  <

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+DSL+with+Connecting+Topic+Creation+and+Repartition+Hint
>


 Thanks,
 Manikumar

 On Thu, Oct 3, 2019 at 8:54 AM Manikumar 
>>> wrote:

> Hi all,
>
> Let's extend the feature freeze deadline to Friday to merge the
>>> current
> work in-progress PRs.
> Please ensure that all major features have been merged and that any
>>> minor
> features have PRs by EOD Friday.
> We will be cutting the 2.4 release branch early on Monday morning.
>
> Thanks,
> Manikumar
>
> On Thu, Sep 26, 2019 at 7:30 PM Manikumar 
> wrote:
>
>> Hi Gwen,
>>
>> As sugg

Re: [VOTE] - KIP-213 (new vote) - Simplified and revised.

2019-10-17 Thread Adam Bellemare
Awesome. Thanks John for fixing this!

On Thu, Oct 17, 2019 at 3:07 PM John Roesler  wrote:

> Hello all,
>
> While writing some new test cases for foreign key joins (as accepted
> in KIP-213), I realized that there was an oversight in the review
> process: we only proposed to add join methods that take a Materialized
> parameter.
>
> This poses an unnecessary burden on users who don't _need_ to
> materialize the join result, which could be quite a lot of state and
> changelog topic data.
>
> I filed https://issues.apache.org/jira/browse/KAFKA-9058 and then
> https://github.com/apache/kafka/pull/7541 to fix it. I've updated the
> KIP page accordingly as well.
>
> This change consists solely of adding the missing method overloads,
> and doesn't change the behavior or semantics of the operation at all.
>
> Thanks,
> -John
>


[jira] [Created] (KAFKA-9060) Publish BOMs for Kafka

2019-10-17 Thread Michael Holler (Jira)
Michael Holler created KAFKA-9060:
-

 Summary: Publish BOMs for Kafka
 Key: KAFKA-9060
 URL: https://issues.apache.org/jira/browse/KAFKA-9060
 Project: Kafka
  Issue Type: Improvement
Reporter: Michael Holler


Hey there! Love the project, but I would love it if there was a BOM file 
published for each version. If you're not familiar with a BOM, it stands for 
"Bill of Materials"; it helps your Gradle file (in my case, but it's originally 
a Maven thing) look like this (using JDBI's implementation as an example):

dependencies {
    implementation(platform("org.jdbi:jdbi3-bom:3.10.1"))
    implementation("org.jdbi:jdbi3-core")
    implementation("org.jdbi:jdbi3-kotlin")
    implementation("org.jdbi:jdbi3-kotlin-sqlobject")
    implementation("org.jdbi:jdbi3-jackson2")
}

Instead of this:

val jdbiVersion by extra { "3.10.1" }

dependencies {
    implementation("org.jdbi:jdbi3-core:$jdbiVersion")
    implementation("org.jdbi:jdbi3-kotlin:$jdbiVersion")
    implementation("org.jdbi:jdbi3-kotlin-sqlobject:$jdbiVersion")
    implementation("org.jdbi:jdbi3-jackson2:$jdbiVersion")
}

Notice how you just leave the versions off when you use a BOM. This can help 
reduce the number of dependency compatibility surprises one can encounter, 
especially if a transitive dependency brings in a newer version of one of the 
components (it'll be reduced to the BOM's version). Note also that you still 
have to list dependencies you want with a BOM, just not the versions.

Here's a deeper dive into how a BOM works:

https://howtodoinjava.com/maven/maven-bom-bill-of-materials-dependency/

 The Maven help site also has a section on it (Ctrl+F for "BOM"):

https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html

I think BOMs would be a great addition for users of the Kafka project because 
there are lots of Kafka libraries (streams, connect-api, connect-json, etc.) 
that require the same version as other Kafka dependencies to work correctly. 
BOMs were designed for exactly this use case. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] - KIP-213 (new vote) - Simplified and revised.

2019-10-17 Thread John Roesler
Hello all,

While writing some new test cases for foreign key joins (as accepted
in KIP-213), I realized that there was an oversight in the review
process: we only proposed to add join methods that take a Materialized
parameter.

This poses an unnecessary burden on users who don't _need_ to
materialize the join result, which could be quite a lot of state and
changelog topic data.

I filed https://issues.apache.org/jira/browse/KAFKA-9058 and then
https://github.com/apache/kafka/pull/7541 to fix it. I've updated the
KIP page accordingly as well.

This change consists solely of adding the missing method overloads,
and doesn't change the behavior or semantics of the operation at all.

Thanks,
-John
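
For illustration, the shape of the added overload looks roughly like this
(topic names and the key extractor below are invented; the point is only the
absence of the Materialized argument):

StreamsBuilder builder = new StreamsBuilder();
KTable<String, String> orders = builder.table("orders");        // hypothetical topic
KTable<String, String> customers = builder.table("customers");  // hypothetical topic

// Overload without Materialized: the join result is not forced into a
// queriable store, and no extra changelog topic is allocated for it.
KTable<String, String> enriched = orders.join(
    customers,
    orderValue -> orderValue.split(",")[0],                     // foreign-key extractor
    (orderValue, customerValue) -> orderValue + "|" + customerValue
);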


[jira] [Resolved] (KAFKA-9058) Foreign Key Join should not require a queriable store

2019-10-17 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9058.

Fix Version/s: 2.4.0
   Resolution: Fixed

> Foreign Key Join should not require a queriable store
> -
>
> Key: KAFKA-9058
> URL: https://issues.apache.org/jira/browse/KAFKA-9058
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
>  Labels: streams
> Fix For: 2.4.0
>
>
> While resolving KAFKA-9000, I uncovered a significant flaw in the 
> implementation of foreign key joins. The join only works if supplied with a 
> queriable store. I think this was simply an oversight during implementation 
> and review.
> It would be better to fix this now before the release, since the restriction 
> it places on users could represent a significant burden. If they don't 
> otherwise need the store to be queriable, then they shouldn't be forced to 
> allocate enough storage for a full copy of the join result, or add the 
> changelog topic for it either.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-535: Allow state stores to serve stale reads during rebalance

2019-10-17 Thread Vinoth Chandar
Looks like we are covering ground :)

>> Only if it is within a permissible range (say 1) we will serve from the
Restoring state of active.
+1 on having a knob like this.. My reasoning is as follows.

Looking at the Streams state as a read-only distributed kv store: with
num_standby = f, we should be able to tolerate f failures, and on the
(f+1)-th failure the system should be unavailable.

A) So with num_standby=0, the system should be unavailable even with a
single failure. That's my argument for not allowing querying in the
restoration state, especially since in this case it will be a total rebuild
of the state (which IMO cannot be considered a normal fault-free
operational state).

B) Even when there are standbys, say num_standby=2, if the user decides to
shut down all 3 instances, the only outcome should be unavailability until
all of them come back or the state is rebuilt on other nodes in the cluster.
In normal operations, f <= 2, and when a failure does happen we can either
choose C over A (consistency over availability) and fail IQs until
replication is fully caught up, or choose A over C by serving in the
restoring state as long as the lag is minimal. If, even with f=1 say, all
the standbys are lagging a lot due to some issue, that should be considered
a failure, since it differs from the normal/expected operational mode.
Serving reads with unbounded replication lag and calling it "available" may
not be very usable or even desirable :) since IMHO it gives the user no way
to reason about the app that is going to query this store.

So there is definitely a need to distinguish between replication catch-up
while in a fault-free state vs. restoration of state when we lose more than
f standbys. This knob is a great starting point towards this.
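
To make the knob concrete, a minimal sketch of the gating rule (names here
are invented, not from the KIP):

static boolean mayServeWhileRestoring(final long changelogEndOffset,
                                      final long restoredOffset,
                                      final long maxPermittedLag) {
    // Serve IQ from a restoring active only when it is nearly caught up;
    // otherwise report unavailable rather than serve arbitrarily stale data.
    final long lag = changelogEndOffset - restoredOffset;
    return lag >= 0 && lag <= maxPermittedLag;
}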

If you agree with some of the explanation above, please feel free to
include it in the KIP as well since this is sort of our design principle
here..

Small nits :

- Let's standardize on "standby" instead of "replica", in the KIP and the
code, to be consistent with the rest of the Streams code/docs?
- Can we merge KAFKA-8994 into KAFKA-6144 now and close the former? We
eventually need to consolidate KAFKA-6555 as well.
- In the new API, "StreamsMetadataState::allMetadataForKey(boolean
enableReplicaServing, String storeName, K key, Serializer<K> keySerializer)":
do we really need a per-key configuration, or is a new StreamsConfig good
enough?

On Wed, Oct 16, 2019 at 8:31 PM Navinder Brar
 wrote:

> @Vinoth, I have incorporated a few of the discussions we have had in the
> KIP.
>
> In the current code, t0 and t1 serve queries from the Active (Running)
> partition. For case t2, we are planning to return a List<StreamsMetadata>
> ordered such that if IQ fails on A, the replica on B can serve the data by
> enabling serving from replicas. This still does not solve cases t3 and t4,
> since B has been promoted to active but is in the Restoring state to catch
> up to A’s last committed position (as we don’t serve from the Restoring
> state in Active) and the new replica on R is building itself from scratch.
> Both these cases can be solved if we start serving from the Restoring
> state of active as well, since it is almost equivalent to the previous
> Active.
>
> There could be a case where all replicas of a partition become unavailable
> and the active and all replicas of that partition are building themselves
> from scratch; in this case, the state in Active is far behind even though
> it is in the Restoring state. To make sure we don’t serve from this state,
> we can either add another state before Restoring or check the difference
> between the last committed offset and the current position. Only if it is
> within a permissible range (say 1) will we serve from the Restoring state
> of Active.
>
>
> On Wednesday, 16 October, 2019, 10:01:35 pm IST, Vinoth Chandar <
> vchan...@confluent.io> wrote:
>
>  Thanks for the updates on the KIP, Navinder!
>
> Few comments
>
> - AssignmentInfo is not a public API, right? But we will change it and thus
> need to increment the version and test for version_probing etc. Good to
> separate that from the StreamsMetadata changes (which are public API).
> - From what I see, there is going to be a choice between the following:
>
>   A) introducing a new *KafkaStreams::allMetadataForKey()* API that
> potentially returns a List<StreamsMetadata> ordered from most up-to-date to
> least up-to-date replicas. Today we cannot fully implement this ordering,
> since all we know is which hosts are active and which are standbys.
> However, this aligns well with the future: KIP-441 adds lag information to
> the rebalancing protocol, so we could eventually also sort replicas based
> on the reported lags. This is fully backwards compatible with existing
> clients. The only drawback I see is the naming of the existing method
> KafkaStreams::metadataForKey, which does not convey the distinction that it
> simply returns the active replica, i.e. allMetadataForKey.get(0).
>  B) Change KafkaStreams::metadataForKey() to return a List<StreamsMetadata>.
> It's a breaking change.
>
> I prefer A, since none of the semantic

Re: [VOTE] KIP-534: Retain tombstones for approximately delete.retention.ms milliseconds

2019-10-17 Thread Jason Gustafson
Hi Guozhang,

It's a fair point. For control records, I think it's a non-issue since they
are tiny and not batched. So the only case where this might matter is large
batch deletions. I think the size difference is not a major issue itself,
but I think it's worth mentioning in the KIP the risk of exceeding the max
message size. I think the code should probably make this more of a soft
limit when cleaning. We have run into scenarios in the past as well where
recompression has actually increased message size. We may also want to be
able to upconvert messages to the new format in the future in the cleaner.

-Jason



On Thu, Oct 17, 2019 at 9:08 AM Guozhang Wang  wrote:

> Here's my understanding: when log compaction kicks in, the system time at
> the moment would be larger than the message timestamp to be compacted, so
> the modification on the batch timestamp would practically be increasing its
> value, and hence the deltas for each inner message would be negative to
> maintain their actual timestamp. Depending on the time diff between the
> actual timestamp of the message and the time when log compaction happens,
> this negative delta can be large or small, since it depends not only on the
> cleaner thread wakeup frequency but also on the dirty ratio, etc.
>
> With varInt encoding, the number of bytes needed to encode an int varies
> from 1 to 5; before compaction, the deltas should be relatively small
> positive values compared with the base timestamp, and hence most likely
> need 1 or 2 bytes to encode. After compaction, the deltas could be
> relatively large negative values that may take more bytes to encode. With
> a record batch of 512 records in practice, supposing after compaction each
> record takes 2 more bytes for encoding deltas, that would be 1K more per
> batch. Usually it would not be too big of an issue with reasonably sized
> messages, but I just wanted to point this out as a potential regression.
>
>
> Guozhang
>
> On Wed, Oct 16, 2019 at 9:36 PM Richard Yu 
> wrote:
>
> > Hi Guozhang,
> >
> > Your understanding basically is on point.
> >
> > I haven't looked into the details of what happens if we change the base
> > timestamp and how it's calculated, so I'm not clear on how small (or big)
> > the delta is.
> > To be fair, would the delta size pose a big problem if it takes up more
> > bytes to encode?
> >
> > Cheers,
> > Richard
> >
> > On Wed, Oct 16, 2019 at 7:36 PM Guozhang Wang 
> wrote:
> >
> > > Hello Richard,
> > >
> > > Thanks for the KIP, I just have one clarification regarding "So the idea
> > > is to set the base timestamp to the delete horizon and adjust the deltas
> > > accordingly." My understanding is that during compaction, for each
> > > compacted new segment, we would set the base timestamp of each batch to
> > > the delete horizon, which is the "current system time that cleaner has
> > > seen so far", and adjust the delta timestamps of each of the inner
> > > records of the batch (and practically the deltas will be all negative)?
> > >
> > > If that's the case, could we do some back-of-the-envelope calculation on
> > > the possible smallest case of deltas? Note that since we use varInt for
> > > delta values for each record, the smaller (more negative) the delta, the
> > > more bytes it takes to encode.
> > >
> > > Guozhang
> > >
> > > On Wed, Oct 16, 2019 at 6:48 PM Jason Gustafson 
> > > wrote:
> > >
> > > > +1. Thanks Richard.
> > > >
> > > > On Wed, Oct 16, 2019 at 10:04 AM Richard Yu <
> > yohan.richard...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Want to try to get this KIP wrapped up. So it would be great if we
> > can
> > > > get
> > > > > some votes.
> > > > >
> > > > > Cheers,
> > > > > Richard
> > > > >
> > > > > On Tue, Oct 15, 2019 at 12:58 PM Jun Rao  wrote:
> > > > >
> > > > > > Hi, Richard,
> > > > > >
> > > > > > Thanks for the updated KIP. +1 from me.
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Tue, Oct 15, 2019 at 12:46 PM Richard Yu <
> > > > yohan.richard...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Jun,
> > > > > > >
> > > > > > > I've updated the link accordingly. :)
> > > > > > > Here is the updated KIP link:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-534%3A+Retain+tombstones+and+transaction+markers+for+approximately+delete.retention.ms+milliseconds
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Richard
> > > > > > >
> > > > > > > On Mon, Oct 14, 2019 at 5:12 PM Jun Rao 
> > wrote:
> > > > > > >
> > > > > > > > Hi, Richard,
> > > > > > > >
> > > > > > > > Thanks for the KIP. Looks good to me overall. A few minor
> > > comments
> > > > > > below.
> > > > > > > >
> > > > > > > > 1. Could you change the title from "Retain tombstones" to
> > "Retain
> > > > > > > > tombstones and transaction markers" to make it more general?
> > > > > > > >
> > > > > > > > 2. Could you document which bit in the batc

Build failed in Jenkins: kafka-2.3-jdk8 #129

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAKFA-8950: Fix KafkaConsumer Fetcher breaking on concurrent 
disconnect


--
[...truncated 2.93 MB...]

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed STARTED

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed PASSED

kafka.log.LogValidatorTest > testInvalidInnerMagicVersion STARTED

kafka.log.LogValidatorTest > testInvalidInnerMagicVersion PASSED

kafka.log.LogValidatorTest > testInvalidOf

[jira] [Resolved] (KAFKA-9004) Fetch from follower unintentionally allowed for old consumers

2019-10-17 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9004.

Resolution: Fixed

> Fetch from follower unintentionally allowed for old consumers
> -
>
> Key: KAFKA-9004
> URL: https://issues.apache.org/jira/browse/KAFKA-9004
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: David Arthur
>Priority: Major
>
> With KIP-392, we allow consumers to fetch from followers. This capability is 
> enabled when a replica selector has been provided in the configuration. When 
> not in use, the intent is to preserve current behavior of fetching only from 
> leader. The leader epoch is the mechanism that keeps us honest. When there is 
> a leader change, the epoch gets bumped, consumer fetches fail due to the 
> fenced epoch, and we find the new leader.
> However, for old consumers, there is no similar protection. The leader epoch 
> was not available to clients until recently. If there is a preferred leader 
> election (for example), the old consumer will happily continue fetching from 
> the demoted leader until a periodic metadata fetch causes us to discover the 
> new leader. This does not create any problems from a correctness 
> perspective–fetches are still bound by the high watermark–but it is 
> unexpected and may cause unexpected performance characteristics.
> To fix this (assuming we think it should be fixed), we could be stricter 
> about fetches and require the leader check if the fetch request has no epoch. 
> Or maybe just require the leader check for older versions of the fetch 
> request.
>  
>  
>  
>  
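
A minimal sketch of the stricter rule suggested above (invented names, not
the actual broker code):

static boolean mayServeFetch(final boolean brokerIsLeader,
                             final boolean requestCarriesLeaderEpoch) {
    // Fetches that carry a leader epoch are already fenced by epoch
    // validation; a fetch without one (older clients) should only be
    // served by the current leader.
    return requestCarriesLeaderEpoch || brokerIsLeader;
}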



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-534: Retain tombstones for approximately delete.retention.ms milliseconds

2019-10-17 Thread Guozhang Wang
Here's my understanding: when log compaction kicks in, the system time at
the moment would be larger than the message timestamp to be compacted, so
the modification on the batch timestamp would practically be increasing its
value, and hence the deltas for each inner message would be negative to
maintain their actual timestamp. Depending on the time diff between the
actual timestamp of the message and the time when log compaction happens,
this negative delta can be large or small, since it depends not only on the
cleaner thread wakeup frequency but also on the dirty ratio, etc.

With varInt encoding, the number of bytes needed to encode an int varies
from 1 to 5; before compaction, the deltas should be relatively small
positive values compared with the base timestamp, and hence most likely
need 1 or 2 bytes to encode. After compaction, the deltas could be
relatively large negative values that may take more bytes to encode. With
a record batch of 512 records in practice, supposing after compaction each
record takes 2 more bytes for encoding deltas, that would be 1K more per
batch. Usually it would not be too big of an issue with reasonably sized
messages, but I just wanted to point this out as a potential regression.
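
For a back-of-the-envelope check, a small sketch (an illustration of the
zigzag varint sizing used for record timestamp deltas, not Kafka's actual
encoder):

static int varintSize(final int value) {
    int v = (value << 1) ^ (value >> 31); // zigzag: small magnitudes stay small
    int bytes = 1;
    while ((v & 0xFFFFFF80) != 0) {       // more than 7 significant bits remain
        bytes++;
        v >>>= 7;
    }
    return bytes;
}
// e.g. varintSize(60000) == 3 for a small positive delta within a fresh batch,
// while varintSize(-604800000) == 5 for a delta of minus one week after
// compaction.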


Guozhang

On Wed, Oct 16, 2019 at 9:36 PM Richard Yu 
wrote:

> Hi Guozhang,
>
> Your understanding basically is on point.
>
> I haven't looked into the details of what happens if we change the base
> timestamp and how it's calculated, so I'm not clear on how small (or big)
> the delta is.
> To be fair, would the delta size pose a big problem if it takes up more
> bytes to encode?
>
> Cheers,
> Richard
>
> On Wed, Oct 16, 2019 at 7:36 PM Guozhang Wang  wrote:
>
> > Hello Richard,
> >
> > Thanks for the KIP, I just have one clarification regarding "So the idea
> > is to set the base timestamp to the delete horizon and adjust the deltas
> > accordingly." My understanding is that during compaction, for each
> > compacted new segment, we would set the base timestamp of each batch to
> > the delete horizon, which is the "current system time that cleaner has
> > seen so far", and adjust the delta timestamps of each of the inner records
> > of the batch (and practically the deltas will be all negative)?
> >
> > If that's the case, could we do some back-of-the-envelope calculation on
> > the possible smallest case of deltas? Note that since we use varInt for
> > delta values for each record, the smaller (more negative) the delta, the
> > more bytes it takes to encode.
> >
> > Guozhang
> >
> > On Wed, Oct 16, 2019 at 6:48 PM Jason Gustafson 
> > wrote:
> >
> > > +1. Thanks Richard.
> > >
> > > On Wed, Oct 16, 2019 at 10:04 AM Richard Yu <
> yohan.richard...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > Want to try to get this KIP wrapped up. So it would be great if we
> can
> > > get
> > > > some votes.
> > > >
> > > > Cheers,
> > > > Richard
> > > >
> > > > On Tue, Oct 15, 2019 at 12:58 PM Jun Rao  wrote:
> > > >
> > > > > Hi, Richard,
> > > > >
> > > > > Thanks for the updated KIP. +1 from me.
> > > > >
> > > > > Jun
> > > > >
> > > > > On Tue, Oct 15, 2019 at 12:46 PM Richard Yu <
> > > yohan.richard...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Jun,
> > > > > >
> > > > > > I've updated the link accordingly. :)
> > > > > > Here is the updated KIP link:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-534%3A+Retain+tombstones+and+transaction+markers+for+approximately+delete.retention.ms+milliseconds
> > > > > >
> > > > > > Cheers,
> > > > > > Richard
> > > > > >
> > > > > > On Mon, Oct 14, 2019 at 5:12 PM Jun Rao 
> wrote:
> > > > > >
> > > > > > > Hi, Richard,
> > > > > > >
> > > > > > > Thanks for the KIP. Looks good to me overall. A few minor
> > comments
> > > > > below.
> > > > > > >
> > > > > > > 1. Could you change the title from "Retain tombstones" to
> "Retain
> > > > > > > tombstones and transaction markers" to make it more general?
> > > > > > >
> > > > > > > 2. Could you document which bit in the batch attribute will be
> > used
> > > > for
> > > > > > the
> > > > > > > new flag? The current format of the batch attribute is the
> > > following.
> > > > > > >
> > > > > > > *
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> -
> > > > > > > *  | Unused (6-15) | Control (5) | Transactional (4) |
> Timestamp
> > > Type
> > > > > > > (3) | Compression Type (0-2) |
> > > > > > >
> > > > > > > *
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> -
> > > > > > >
> > > > > > > 3. Could you provide the reasons for the rejected proposals?
> For
> > > > > > > proposal 1, one reason is that it doesn't cover the transaction
> > > > > > > markers. For proposal 2, one reason is that the interval record
> > > > header
> > > >

Need to subscribe to dev group

2019-10-17 Thread Sriram Ganesh
Hi,

I have been using Kafka for the last 3 years. I want to contribute to Kafka.
Please add me to "dev@kafka.apache.org".

Thanks,

-- 
*Sriram G*
*Tech*


Re: [DISCUSS] Apache Kafka 2.4.0 release

2019-10-17 Thread Manikumar
Hi all,

The code freeze deadline has now passed and at this point only blockers
will be allowed.
We have three blockers for 2.4.0. I will move out most of the JIRAs that
aren't currently
being worked on. If you think any of the other JIRAs are critical to
include in 2.4.0,
please update the fix version, mark as blocker and ensure a PR is ready to
merge.
I will create the first RC as soon as we close the blockers.
Please help to close out the 2.4.0 JIRAs.

current blockers:
https://issues.apache.org/jira/browse/KAFKA-8943
https://issues.apache.org/jira/browse/KAFKA-8992
https://issues.apache.org/jira/browse/KAFKA-8972

Thank you!

On Tue, Oct 8, 2019 at 8:27 PM Manikumar  wrote:

> Thanks Bruno. We will mark KIP-471 as complete.
>
> On Tue, Oct 8, 2019 at 2:39 PM Bruno Cadonna  wrote:
>
>> Hi Manikumar,
>>
>> It is technically true that KIP-471 is not completed, but the only
>> aspect missing is two metrics that I could not add due to the RocksDB
>> version currently used in Streams. Adding those two metrics once the
>> RocksDB version has been increased will be a minor effort. So, I would
>> consider KIP-471 complete, with those two metrics blocked.
>>
>> Best,
>> Bruno
>>
>> On Mon, Oct 7, 2019 at 8:44 PM Manikumar 
>> wrote:
>> >
>> > Hi all,
>> >
>> > I have moved a couple of accepted KIPs without a PR to the next release.
>> We
>> > still have quite a few KIPs
>> > with PRs that are being reviewed, but haven't yet been merged. I have
>> left
>> > all of these in assuming these
>> > PRs are ready and not risky to merge.  Please update your assigned
>> > KIPs/JIRAs, if they are not ready and
>> >  if you know they cannot make it to 2.4.0.
>> >
>> > Please ensure that all KIPs for 2.4.0 have been merged by Oct 16th. Any
>> > remaining KIPs
>> > will be moved to the next release.
>> >
>> > The KIPs still in progress are:
>> >
>> > - KIP-517: Add consumer metrics to observe user poll behavior
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior
>> > >
>> >
>> > - KIP-511: Collect and Expose Client's Name and Version in the Brokers
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
>> > >
>> >
>> > - KIP-474: To deprecate WindowStore#put(key, value)
>> >  
>> >
>> > - KIP-471: Expose RocksDB Metrics in Kafka Streams
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471%3A+Expose+RocksDB+Metrics+in+Kafka+Streams
>> > >
>> >
>> > - KIP-466: Add support for List serialization and deserialization
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List
>> > +serialization+and+deserialization>
>> >
>> > - KIP-455: Create an Administrative API for Replica Reassignment
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
>> > >
>> >
>> > - KIP-446: Add changelog topic configuration to KTable suppress
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
>> > >
>> >
>> > - KIP-444: Augment metrics for Kafka Streams
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams
>> > >
>> >
>> > - KIP-434: Add Replica Fetcher and Log Cleaner Count Metrics
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-434%3A+Add+Replica+Fetcher+and+Log+Cleaner+Count+Metrics
>> > >
>> >
>> > - KIP-401: TransformerSupplier/ProcessorSupplier StateStore connecting
>> >  <
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97553756
>> >
>> >
>> > - KIP-396: Add Reset/List Offsets Operations to AdminClient
>> >   <
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484
>> > >
>> >
>> > - KIP-221: Enhance DSL with Connecting Topic Creation and Repartition
>> Hint
>> >  <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+DSL+with+Connecting+Topic+Creation+and+Repartition+Hint
>> > >
>> >
>> >
>> > Thanks,
>> > Manikumar
>> >
>> > On Thu, Oct 3, 2019 at 8:54 AM Manikumar 
>> wrote:
>> >
>> > > Hi all,
>> > >
>> > > Let's extend the feature freeze deadline to Friday to merge the
>> current
>> > > work in-progress PRs.
>> > > Please ensure that all major features have been merged and that any
>> minor
>> > > features have PRs by EOD Friday.
>> > > We will be cutting the 2.4 release branch early on Monday morning.
>> > >
>> > > Thanks,
>> > > Manikumar
>> > >
>> > > On Thu, Sep 26, 2019 at 7:30 PM Manikumar 
>> > > wrote:
>> > >
>> > >> Hi Gwen,
>> > >>
>> > >> As suggested by Ismael, If required, we can extend the feature
>> freeze to
>> > >> end of the week.
>> > >>
>> > >> Thanks,
>> > >>
>> > >> On Thu, Sep 26, 2019 at 7:32 AM Ismael Juma 
>> wrote:
>> > >>
>>

Build failed in Jenkins: kafka-2.4-jdk8 #23

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9000: fix flaky FK join test by using TTD (#7517)

[wangguoz] KAFKA-8104: Consumer cannot rejoin to the group after rebalancing

[matthias] KAFKA-8884: class cast exception improvement (#7309)

[matthias] MINOR: log reason for fatal error in locking state dir (#7534)

[manikumar] KAFKA-8874: Add consumer metrics to observe user poll behavior 
(KIP-517)

[rajinisivaram] KAKFA-8950: Fix KafkaConsumer Fetcher breaking on concurrent 
disconnect


--
[...truncated 2.69 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacad

[jira] [Created] (KAFKA-9059) Implement MaxReassignmentLag

2019-10-17 Thread Viktor Somogyi-Vass (Jira)
Viktor Somogyi-Vass created KAFKA-9059:
--

 Summary: Implement MaxReassignmentLag
 Key: KAFKA-9059
 URL: https://issues.apache.org/jira/browse/KAFKA-9059
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.5.0
Reporter: Viktor Somogyi-Vass
Assignee: Viktor Somogyi-Vass


Some more thinking is required for the implementation of MaxReassignmentLag 
proposed by 
[KIP-352|https://cwiki.apache.org/confluence/display/KAFKA/KIP-352:+Distinguish+URPs+caused+by+reassignment],
as it's not so straightforward (we need to maintain each partition's lag), so 
we separated it off into this JIRA.
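
For illustration, here is a minimal Java sketch of how such a gauge might be
maintained, assuming a per-partition lag map updated from follower fetches.
The class and method names are hypothetical and do not reflect the broker's
actual internals.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a MaxReassignmentLag gauge: track the lag of each
// partition currently being reassigned and report the maximum.
public final class ReassignmentLagTracker {
    private final Map<String, Long> lagByPartition = new ConcurrentHashMap<>();

    // Update on each fetch by a reassigning follower:
    // lag = leader log end offset - follower fetch offset.
    public void recordLag(String topicPartition, long leaderEndOffset, long followerOffset) {
        lagByPartition.put(topicPartition, Math.max(0L, leaderEndOffset - followerOffset));
    }

    // Drop the entry once the reassignment for the partition completes.
    public void remove(String topicPartition) {
        lagByPartition.remove(topicPartition);
    }

    // Gauge value: maximum lag across all in-flight reassignments.
    public long maxReassignmentLag() {
        return lagByPartition.values().stream()
                .mapToLong(Long::longValue)
                .max()
                .orElse(0L);
    }
}
```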



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk11 #889

2019-10-17 Thread Apache Jenkins Server
See 




Re: [VOTE] 2.3.1 RC1

2019-10-17 Thread Rajini Sivaram
Hi David,

I have merged the fix for https://issues.apache.org/jira/browse/KAFKA-8950 to
the 2.3 branch as well, since it is a critical consumer issue. I set the fix
version to 2.3.2 & 2.4.0 in the JIRA since I wasn't sure we would have
another RC for 2.3.1, but it would be good to get it in if possible.

Thanks,

Rajini


On Mon, Oct 14, 2019 at 7:11 PM Mickael Maison 
wrote:

> Hi David,
>
> Probably not a blocker, but I noticed the authorization section in the
> security docs was not updated in 2.3.0 to include authorization
> details for ElectPreferredLeaders and IncrementalAlterConfigs.
> I just opened a PR to address that in 2.3:
> https://github.com/apache/kafka/pull/7508 (and also another PR for
> 2.4/trunk: https://github.com/apache/kafka/pull/7509)
>
> On Mon, Oct 14, 2019 at 4:59 PM Gwen Shapira  wrote:
> >
> > David,
> >
> > Why do we have two site-doc packages, one for each Scala version? It
> > is just HTML, right? IIRC, in previous releases we only packaged the
> > docs once?
> >
> > Gwen
> >
> > On Fri, Oct 4, 2019 at 6:52 PM David Arthur  wrote:
> > >
> > > Hello all, we identified a few bugs and a dependency update we wanted
> > > to get fixed for 2.3.1. In particular, there was a problem with rolling
> > > upgrades of streams applications (KAFKA-8649).
> > >
> > > Check out the release notes for a complete list.
> > >
> > > https://home.apache.org/~davidarthur/kafka-2.3.1-rc1/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Wednesday October 9th, 9pm PST
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~davidarthur/kafka-2.3.1-rc1/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~davidarthur/kafka-2.3.1-rc1/javadoc/
> > >
> > > * Tag to be voted upon (off 2.3 branch) is the 2.3.1 tag:
> > > https://github.com/apache/kafka/releases/tag/2.3.1-rc1
> > >
> > > * Documentation:
> > > https://kafka.apache.org/23/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/23/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.3 branch are TBD but will be
> > > located:
> > >
> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.3-jdk8/
> > >
> > > System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.3/
> > >
> > >
> > > Thanks!
> > > David Arthur
>


[jira] [Resolved] (KAFKA-8950) KafkaConsumer stops fetching

2019-10-17 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8950.
---
Fix Version/s: 2.3.2
   2.4.0
   Resolution: Fixed

> KafkaConsumer stops fetching
> 
>
> Key: KAFKA-8950
> URL: https://issues.apache.org/jira/browse/KAFKA-8950
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0
> Environment: linux
>Reporter: Will James
>Priority: Major
> Fix For: 2.4.0, 2.3.2
>
>
> We have a KafkaConsumer consuming from a single partition with 
> enable.auto.commit set to true.
> Very occasionally, the consumer goes into a broken state. It returns no 
> records from the broker with every poll, and from most of the Kafka metrics 
> in the consumer it looks like it is fully caught up to the end of the log. 
> We see that we are long polling for the max poll timeout, and that there is 
> zero lag. In addition, we see that the heartbeat rate stays unchanged from 
> before the issue begins (so the consumer stays a part of the consumer group).
> In addition, from looking at the __consumer_offsets topic, it is possible to 
> see that the consumer is committing the same offset on the auto commit 
> interval, however, the offset does not move, and the lag from the broker's 
> perspective continues to increase.
> The issue is only resolved by restarting our application (which restarts the 
> KafkaConsumer instance).
> From a heap dump of an application in this state, I can see that the Fetcher 
> is in a state where it believes there are nodesWithPendingFetchRequests.
> However, I can see the state of the fetch latency sensor, specifically, the 
> fetch rate, and see that the samples were not updated for a long period of 
> time (actually, precisely the amount of time that the problem in our 
> application was occurring, around 50 hours - we have alerting on other 
> metrics but not the fetch rate, so we didn't notice the problem until a 
> customer complained).
> In this example, the consumer was processing around 40 messages per second, 
> with an average size of about 10kb, although most of the other examples of 
> this have happened with higher volume (250 messages / second, around 23kb per 
> message on average).
> I have spent some time investigating the issue on our end, and will continue 
> to do so as time allows, however I wanted to raise this as an issue because 
> it may be affecting other people.
> Please let me know if you have any questions or need additional information. 
> I doubt I can provide heap dumps unfortunately, but I can provide further 
> information as needed.
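
Since the report notes that the fetch-rate sensor stops updating while lag
and heartbeat metrics still look healthy, one practical mitigation is to
alert when fetch-rate sits at zero. Below is a minimal sketch that reads the
metric from the Java client's metrics map; the group and metric names
("consumer-fetch-manager-metrics", "fetch-rate") are taken from the Java
consumer and should be verified against your client version.

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

// Sketch: poll the consumer's fetch-rate metric so an external alert can
// catch the stuck-Fetcher state described above, where fetch-rate goes
// quiet while other metrics look healthy.
public final class FetchRateCheck {
    public static double fetchRate(KafkaConsumer<?, ?> consumer) {
        for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
            MetricName name = e.getKey();
            if ("consumer-fetch-manager-metrics".equals(name.group())
                    && "fetch-rate".equals(name.name())) {
                Object value = e.getValue().metricValue();
                return value instanceof Double ? (Double) value : Double.NaN;
            }
        }
        return Double.NaN; // metric not registered (yet)
    }
}
```

An alert firing when fetchRate(consumer) stays at zero for a sustained
window would have caught the roughly 50-hour stall described above.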



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.3-jdk8 #128

2019-10-17 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-352: Distinguish URPs caused by reassignment

2019-10-17 Thread Viktor Somogyi-Vass
Hi Ismael,
Following Jason's suggestion, we finally went with the originally voted
proposal, that is, to include reassignment bytes in/out in replication bytes
in/out, and to revisit this when throttling is changed. Sorry for not
updating this thread; I was busy with the code review. I'll get the KIP in
shape to reflect the final version.

Thanks,
Viktor
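
As a rough illustration of the agreed semantics, here is a sketch in which
reassignment traffic still counts toward the replication meter, with a
separate reassignment counter layered on top. The class and counter names
are illustrative only, not the broker's actual metric registry.

```java
import java.util.concurrent.atomic.LongAdder;

// Sketch: reassignment fetches are counted in the replication meter as
// well (the outcome of this thread), so existing dashboards keep their
// meaning; a dedicated reassignment meter can be added on top later.
public final class ReplicationMetricsSketch {
    private final LongAdder replicationBytesIn = new LongAdder();
    private final LongAdder reassignmentBytesIn = new LongAdder();

    public void recordFollowerFetch(long bytes, boolean isReassigning) {
        replicationBytesIn.add(bytes);      // always counted as replication
        if (isReassigning) {
            reassignmentBytesIn.add(bytes); // additionally tagged as reassignment
        }
    }

    public long replicationBytesIn() { return replicationBytesIn.sum(); }
    public long reassignmentBytesIn() { return reassignmentBytesIn.sum(); }
}
```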

On Wed, Oct 16, 2019 at 10:06 PM Ismael Juma  wrote:

> The current proposal says that replication throughput would change not to
> include reassignment though.
>
> Ismael
>
> On Wed, Oct 16, 2019, 11:53 AM Colin McCabe  wrote:
>
> > Hi Ismael,
> >
> > I think every replica is doing replication, by definition.  But not every
> > replica is undergoing reassignment.
> >
> > If the broker that died was in the set of new replicas being added, its
> > death will not add a new under-replicated partition.  Otherwise, it will
> > add a new URP.
> >
> > best,
> > Colin
> >
> >
> > On Wed, Oct 16, 2019, at 11:26, Ismael Juma wrote:
> > > If a broker dies and loses the disk, is it replication or reassignment?
> > >
> > > Ismael
> > >
> > > On Thu, Oct 10, 2019 at 3:30 AM Viktor Somogyi-Vass <
> > viktorsomo...@gmail.com>
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > During the code review it came up that we shouldn't count replication
> > > > bytes together with reassignment bytes, so they are counted in
> > > > different metrics. This is a change in the semantics of the
> > > > ReplicationBytesInPerSec and ReplicationBytesOutPerSec metrics, but
> > > > since we plan to separate reassignment from replication in terms of
> > > > throttling, it makes sense to record the metrics separately as well.
> > > > If there are no objections we'll proceed with this interpretation,
> > > > but I wanted to send a shoutout here as well.
> > > >
> > > > Also the ReassignmentMaxLag will go in a different JIRA as it
> requires
> > more
> > > > discussion.
> > > >
> > > > Thanks,
> > > > Viktor
> > > >
> > > > On Mon, Aug 26, 2019 at 6:30 PM Jason Gustafson 
> > > > wrote:
> > > >
> > > > > Closing this vote. The final result is +9 with 4 binding votes.
> > > > >
> > > > > @Satish Sorry, I missed your question above. Good point about
> > > > > updating documentation. I will create a separate JIRA to make sure
> > > > > this gets done.
> > > > >
> > > > > -Jason
> > > > >
> > > > > On Fri, Aug 23, 2019 at 11:23 AM Jason Gustafson <
> ja...@confluent.io
> > >
> > > > > wrote:
> > > > >
> > > > > > Thanks Stan, good catch. I have updated the KIP. I will plan to
> > > > > > close the vote Monday if there are no objections.
> > > > > >
> > > > > > -Jason
> > > > > >
> > > > > > On Fri, Aug 23, 2019 at 11:14 AM Colin McCabe <
> cmcc...@apache.org>
> > > > > wrote:
> > > > > >
> > > > > >> On Fri, Aug 23, 2019, at 11:08, Stanislav Kozlovski wrote:
> > > > > >> > Thanks for the KIP, this is very helpful
> > > > > >> >
> > > > > >> > I had an offline discussion with Jason and we discussed the
> > > > > >> > semantics of the underMinIsr/atMinIsr metrics. The current
> > > > > >> > proposal would expose a gap where we could report URP but no
> > > > > >> > MinIsr.
> > > > > >> > A brief example:
> > > > > >> > original replica set = [0,1,2]
> > > > > >> > new replica set = [3,4,5]
> > > > > >> > isr = [0, 3, 4]
> > > > > >> > config.minIsr = 3
> > > > > >> >
> > > > > >> > As the KIP said
> > > > > >> > > In other words, we will subtract the AddingReplica from both
> > > > > >> > > the total replicas and the current ISR when determining URP
> > > > > >> > > satisfaction.
> > > > > >> > We would report URP=2 (1 and 2 are not in ISR) but not
> > > > > >> > underMinIsr, as we have an ISR of 3.
> > > > > >> > Technically, any produce requests with acks=all would succeed,
> > > > > >> > so it would be false to report `underMinIsr`. We thought it'd
> > > > > >> > be good to keep both metrics consistent, so a new proposal is
> > > > > >> > to use the following algorithm:
> > > > > >> > ```
> > > > > >> > isUrp == size(original replicas) - size(isr) > 0
> > > > > >> > ```
> > > > > >>
> > > > > >> Hi Stan,
> > > > > >>
> > > > > >> That's a good point.  Basically we should regard the size of
> > > > > >> the original replica set as the desired replication factor, and
> > > > > >> calculate the URPs based on that.  +1 for this.  (I assume Jason
> > > > > >> will update the KIP...)
> > > > > >>
> > > > > >> best,
> > > > > >> Colin
> > > > > >>
> > > > > >>
> > > > > >> >
> > > > > >> > Taking that into account, +1 from me! (non-binding)
> > > > > >> >
> > > > > >> > On Fri, Aug 23, 2019 at 7:00 PM Colin McCabe <
> > cmcc...@apache.org>
> > > > > >> wrote:
> > > > > >> >
> > > > > >> > > +1 (binding).
> > > > > >> > >
> > > > > >> > > cheers,
> > > > > >> > > Colin
> > > > > >> > >
> > > > > >> > > On Tue, Aug 20, 2019, at 10:55, Jason Gustafson wrote:
> > > > > >> > > > Hi All,
> > > > > >> > > >
> > > > > >> > > > I'
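
For reference, the URP rule proposed in the quoted discussion
(isUrp == size(original replicas) - size(isr) > 0) can be sketched as
follows; the inputs are hypothetical and this is not broker code.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the proposed URP rule: treat the size of the original replica
// set as the desired replication factor and ignore adding replicas.
public final class UrpCheck {
    static boolean isUnderReplicated(Set<Integer> originalReplicas, Set<Integer> isr) {
        return originalReplicas.size() - isr.size() > 0;
    }

    public static void main(String[] args) {
        // The example from the thread: original=[0,1,2], new=[3,4,5], isr=[0,3,4].
        Set<Integer> original = new HashSet<>(Arrays.asList(0, 1, 2));
        Set<Integer> isr = new HashSet<>(Arrays.asList(0, 3, 4));
        // Prints false: the ISR size (3) matches the desired replication
        // factor (3), consistent with acks=all produce requests succeeding,
        // so neither URP nor underMinIsr is reported.
        System.out.println(isUnderReplicated(original, isr));
    }
}
```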

Build failed in Jenkins: kafka-2.4-jdk8 #22

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7981; Add fetcher and log cleaner thread count metrics (#6514)

[rhauch] KAFKA-8340, KAFKA-8819: Use PluginClassLoader while statically

[bill] MINOR: Improve FK Join docs and optimize null-fk case (#7536)

[wangguoz] KAFKA-9053: AssignmentInfo#encode hardcodes the 
LATEST_SUPPORTED_VERSION

[wangguoz] KAFKA-8496: System test for KIP-429 upgrades and compatibility 
(#7529)


--
[...truncated 5.01 MB...]
kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED

kafka.admin.AclCommandTest > testInvalidJAuthorizerProperty STARTED

kafka.admin.AclCommandTest > testInvalidJAuthorizerProperty PASSED

kafka.admin.AclCommandTest > testAclsOnPrefixedResourcesWithAdminAPI STARTED

kafka.admin.AclCommandTest > testAclsOnPrefixedResourcesWithAdminAPI PASSED

kafka.admin.AclCommandTest > testPatternTypes STARTED

kafka.admin.AclCommandTest > testPatternTypes PASSED

kafka.admin.AclCommandTest > testProducerConsumerCliWithAdminAPI STARTED

kafka.admin.AclCommandTest > testProducerConsumerCliWithAdminAPI PASSED

kafka.admin.AclCommandTest > testAclsOnPrefixedResourcesWithAuthorizer STARTED

kafka.admin.AclCommandTest > testAclsOnPrefixedResourcesWithAuthorizer PASSED

kafka.admin.AclCommandTest > testProducerConsumerCliWithAuthorizer STARTED

kafka.admin.AclCommandTest > testProducerConsumerCliWithAuthorizer PASSED

kafka.admin.AclCommandTest > testAclCliWithAdminAPI STARTED

kafka.admin.AclCommandTest > testAclCliWithAdminAPI PASSED

kafka.admin.TopicCommandWithAdminClientTest > testAlterPartitionCount STARTED

kafka.admin.TopicCommandWithAdminClientTest > testAlterPartitionCount PASSED

kafka.admin.TopicCommandWithAdminClientTest > testCreateWithDefaultReplication 
STARTED

kafka.admin.TopicCommandWithAdminClientTest > testCreateWithDefaultReplication 
PASSED

kafka.admin.TopicCommandWithAdminClientTest > testDescribeAtMinIsrPartitions 
STARTED

kafka.admin.TopicCommandWithAdminClientTest > testDescribeAtMinIsrPartitions 
PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testParseAssignmentDuplicateEntries STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testParseAssignmentDuplicateEntries PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithNegativeReplicationFactor STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithNegativeReplicationFactor PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithInvalidReplicationFactor STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithInvalidReplicationFactor PASSED

kafka.admin.TopicCommandWithAdminClientTest > testCreateIfItAlreadyExists 
STARTED
Build timed out (after 360 minutes). Marking the build as failed.
Build was aborted
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
No credentials specified
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=f38d47bf074ec5c23576b0124068d3b8206c903e, 
workspace=
Recording test results
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3

kafka.admin.TopicCommandWithAdminClientTest > testCreateIfItAlreadyExists 
SKIPPED

> Task :core:test FAILED
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user rajinisiva...@googlemail.com
Not sending mail to unregistered user b...@confluent.io

> Task :connect:api:test

org.apache.kafka.connect.header.ConnectHeaderTest > shouldGetSchemaFromStruct 
STARTED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldGetSchemaFromStruct 
PASSED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldAllowNonNullValue 
STARTED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldAllowNonNullValue 
PASSED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldSatisfyEquals STARTED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldSatisfyEquals PASSED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldAllowNullSchema 
STARTED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldAllowNullSchema PASSED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldAllowNullValues 
STARTED

org.apache.kafka.connect.header.ConnectHeaderTest > shouldAllowNullValues PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldNotAllowNullKey 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldNotAllowNullKey 
PASSED

org.apache.kafka.connect.header.ConnectHeaders

Build failed in Jenkins: kafka-trunk-jdk8 #3976

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Kafka Streams upgrade docs for KIP-444, KIP-470, KIP-471,

[wangguoz] Explain the separate  upgrade paths for consumer groups and Streams

[rhauch] KAFKA-8340, KAFKA-8819: Use PluginClassLoader while statically


--
[...truncated 5.42 MB...]

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PA

Re: KAFKA-8584: Support of ByteBuffer for bytes field implemented [Convert Kafka RPCs to use automatically generated code]

2019-10-17 Thread Nikolay Izhikov
Hello.

Is there something wrong with the PR?
Do we still need this ticket to be done? [2]
If not, let's close both the PR [1] and the ticket.

Were the design or implementation details changed?
If so, could you please send a link where I can find the details?

[1] https://github.com/apache/kafka/pull/7342
[2] https://issues.apache.org/jira/browse/KAFKA-8885

Mon, Oct 7, 2019 at 10:08, Nikolay Izhikov :

> Hello.
>
> Please review my changes [1].
> I fixed all the conflicts after the KAFKA-8885 [2] merge [3].
>
> [1] https://github.com/apache/kafka/pull/7342
> [2] https://issues.apache.org/jira/browse/KAFKA-8885
> [3]
> https://github.com/apache/kafka/commit/0de61a4683b92bdee803c51211c3277578ab3edf
>
> On Fri, 20/09/2019 at 09:18 -0700, Colin McCabe wrote:
> > Hi Nikolay,
> >
> > Thanks for working on this.  I think everyone agrees that we should have
> byte buffer support in the generator.  We just haven't had a lot of time
> for reviewing it lately.   I don't really mind which PR we use :)  I will
> take a look at your PR today and see if we can get it into shape for what
> we need.
> >
> > best,
> > Colin
> >
> > On Fri, Sep 20, 2019, at 09:18, Nikolay Izhikov wrote:
> > > Hello, all.
> > >
> > > Any feedback on this?
> > > Do we need ByteBuffer support in the auto-generated RPC code?
> > >
> > > Which PR should be reviewed and merged?
> > >
> > > On Thu, 19/09/2019 at 10:11 +0300, Nikolay Izhikov wrote:
> > > > Hello, guys.
> > > >
> > > > Looks like we have duplicate tickets and PRs here.
> > > >
> > > > One from me:
> > > >
> > > > KAFKA-8584: Support of ByteBuffer for bytes field implemented.
> > > > ticket - https://issues.apache.org/jira/browse/KAFKA-8584
> > > > pr - https://github.com/apache/kafka/pull/7342
> > > >
> > > > and one from Colin McCabe:
> > > >
> > > > KAFKA-8628: Auto-generated Kafka RPC code should be able to use
> zero-copy ByteBuffers
> > > > ticket - https://issues.apache.org/jira/browse/KAFKA-8628
> > > > pr - https://github.com/apache/kafka/pull/7032
> > > >
> > > > I want to continue working on my PR and get it merged.
> > > > But it is up to the community to decide which changes are best for
> > > > the product.
> > > >
> > > > Please let me know what you think.
> > > >
> > > >
> > > > > On Tue, 17/09/2019 at 01:52 +0300, Nikolay Izhikov wrote:
> > > > > Hello, Kafka team.
> > > > >
> > > > > I implemented KAFKA-8584 [1].
> > > > > PR - [2]
> > > > > Please do the review.
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/KAFKA-8584
> > > > > [2] https://github.com/apache/kafka/pull/7342
> > >
> > > Attachments:
> > > * signature.asc
>
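
For context, the zero-copy idea behind both PRs can be sketched as follows:
instead of copying a bytes field into a fresh byte[], a generated accessor
could return a read-only slice over the receive buffer. This is an
illustration of the technique only, not the generated code from either PR.

```java
import java.nio.ByteBuffer;

// Sketch: copying vs. zero-copy reads of a "bytes" field from a wire buffer.
public final class ZeroCopyBytesField {

    // Copying accessor: allocates and fills a fresh byte[] (the old behavior).
    static byte[] readBytesCopy(ByteBuffer buf, int length) {
        byte[] out = new byte[length];
        buf.get(out);
        return out;
    }

    // Zero-copy accessor: returns a read-only slice sharing the underlying
    // memory, then advances the source buffer past the field.
    static ByteBuffer readBytesZeroCopy(ByteBuffer buf, int length) {
        ByteBuffer slice = buf.slice();
        slice.limit(length);
        buf.position(buf.position() + length);
        return slice.asReadOnlyBuffer();
    }

    public static void main(String[] args) {
        ByteBuffer wire = ByteBuffer.wrap(new byte[] {1, 2, 3, 4, 5});
        ByteBuffer field = readBytesZeroCopy(wire, 3);
        System.out.println(field.remaining()); // 3, with no copy made
    }
}
```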


Build failed in Jenkins: kafka-2.2-jdk8 #179

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: log reason for fatal error in locking state dir (#7534)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 3a29f334e8720d27de7227d3522e0dbc80e293ce 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3a29f334e8720d27de7227d3522e0dbc80e293ce
Commit message: "MINOR: log reason for fatal error in locking state dir (#7534)"
 > git rev-list --no-walk da39763a31cab9a2c177a33a7e3ae6457a223154 # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.2-jdk8] $ /bin/bash -xe /tmp/jenkins443901774948737165.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins443901774948737165.sh: line 4: 
/home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=3a29f334e8720d27de7227d3522e0dbc80e293ce, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1


Build failed in Jenkins: kafka-2.3-jdk8 #127

2019-10-17 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8945/KAFKA-8947: Fix bugs in Connect REST extension API (#7392)

[rhauch] KAFKA-8340, KAFKA-8819: Use PluginClassLoader while statically


--
[...truncated 2.93 MB...]
kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.AdminClientIntegrationTest > testDescribeConfigsForTopic STARTED

kafka.api.AdminClientIntegrationTest > testDescribeConfigsForTopic PASSED

kafka.api.AdminClientIntegrationTest > testConsumerGroups STARTED

kafka.api.AdminClientIntegrationTest > testConsumerGroups PASSED

kafka.api.AdminClientIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException STARTED

kafka.api.AdminClientIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark 
STARTED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark 
PASSED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization 
STARTED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization 
PASSED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark 
STARTED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark 
PASSED

kafka.cluster.PartitionTest > 
testMakeLeaderDoesNotUpdateEpochCacheForOldFormats STARTED

kafka.cluster.PartitionTest > 
testMakeLeaderDoesNotUpdateEpochCacheForOldFormats PASSED

kafka.cluster.PartitionTest > testReadRecordEpochValidationForLeader STARTED

kafka.cluster.PartitionTest > testReadRecordEpochValidationForLeader PASSED

kafka.cluster.PartitionTest > 
testFetchOffsetForTimestampEpochValidationForFollower STARTED

kafka.cluster.PartitionTest > 
testFetchOffsetForTimestampEpochValidationForFollower PASSED

kafka.cluster.PartitionTest > testListOffsetIsolationLevels STARTED

kafka.cluster.PartitionTest > testListOffsetIsolationLevels PASSED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset 
STARTED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset 
PASSED

kafka.cluster.PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch STARTED

kafka.cluster.PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch PASSED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower 
STARTED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower 
PASSED

kafka.cluster.PartitionTest > testMonotonicOffsetsAfterLeaderChange STARTED

kafka.cluster.PartitionTest > testMonotonicOffsetsAfterLeaderChange PASSED

kafka.cluster.PartitionTest > testMakeFollowerWithNoLeaderIdChange STARTED

kafka.cluster.PartitionTest > testMakeFollowerWithNoLeaderIdChange PASSED

kafka.cluster.PartitionTest > 
testAppendRecordsToFollowerWithNoReplicaThrowsException STARTED

kafka.cluster.PartitionTest > 
testAppendRecordsToFollowerWithNoReplicaThrowsException PASSED

kafka.cluster.PartitionTest > 
testFollowerDoesNotJoinISRUntilCaughtUpToOffsetWithinCurrentLeaderEpoch STARTED

kafka.cluster.PartitionTest > 
testFollowerDoesNotJoinISRUntilCaughtUpToOffsetWithinCurrentLeaderEpoch PASSED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForLeader 
STARTED

kafka.cluster.PartitionTest > testFetchOffsetSnapshotEpochValidationForLeader 
PASSED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForLeader 
STARTED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForLeader PASSED

kafka.cluster.PartitionTest > testAtMinIsr STARTED

kafka.cluster.PartitionTest > testAtMinIsr PASSED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForFollower 
STARTED

kafka.cluster.PartitionTest > testOffsetForLeaderEpochValidationForFollower 
PASSED

kafka.cluster.PartitionTest >