Build failed in Jenkins: kafka-trunk-jdk8 #4554

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9409: Supplement immutability of ClusterConfigState class in


--
[...truncated 3.08 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> 

[jira] [Resolved] (KAFKA-9950) MirrorMaker2 sharing of ConfigDef can lead to ConcurrentModificationException

2020-05-20 Thread Konstantine Karantasis (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantine Karantasis resolved KAFKA-9950.
---
Resolution: Fixed

> MirrorMaker2 sharing of ConfigDef can lead to ConcurrentModificationException
> -
>
> Key: KAFKA-9950
> URL: https://issues.apache.org/jira/browse/KAFKA-9950
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.4.0, 2.5.0, 2.4.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.4.2, 2.5.1
>
>
> The 
> [MirrorConnectorConfig::CONNECTOR_CONFIG_DEF|https://github.com/apache/kafka/blob/34824b7bff64ba387a04466d74ac6bbbd10bf37c/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java#L397]
>  object is reused across multiple MirrorMaker2 classes, which is fine for the 
> most part since it's a constant. However, the actual {{ConfigDef}} object 
> itself is mutable, and is mutated when the {{MirrorTaskConfig}} class 
> [statically constructs its own 
> ConfigDef|https://github.com/apache/kafka/blob/34824b7bff64ba387a04466d74ac6bbbd10bf37c/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorTaskConfig.java#L62].
> This has two unintended effects:
>  # Since the two {{ConfigDef}} objects for the {{MirrorConnectorConfig}} and 
> {{MirrorTaskConfig}} classes are actually the same object, the additional 
> properties that the {{MirrorTaskConfig}} class defines for its {{ConfigDef}} 
> are also added to the {{MirrorConnectorConfig}} class's {{ConfigDef}}. The 
> impact of this isn't huge since both additional properties have default 
> values, but this does cause those properties to appear in the 
> {{/connectors/{name}/config/validate}} endpoint once the 
> {{MirrorTaskConfig}} class is loaded for the first time.
>  # It's possible that, if a config for a MirrorMaker2 connector is submitted 
> at approximately the same time that the {{MirrorTaskConfig}} class is loaded, 
> a {{ConcurrentModificationException}} will be thrown by the 
> {{AbstractHerder}} class when it tries to [iterate over all of the keys of 
> the connector's 
> ConfigDef|https://github.com/apache/kafka/blob/34824b7bff64ba387a04466d74ac6bbbd10bf37c/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L357].
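
For illustration, a minimal sketch of a defensive-copy fix, assuming
ConfigDef's copy constructor; the class and property names below are
stand-ins, not the actual MirrorMaker2 code or the patch that resolved this
issue:

```java
import org.apache.kafka.common.config.ConfigDef;

public class ConfigDefCopySketch {
    // Stand-in for the shared MirrorConnectorConfig.CONNECTOR_CONFIG_DEF constant.
    static final ConfigDef CONNECTOR_CONFIG_DEF = new ConfigDef()
            .define("topics", ConfigDef.Type.LIST, "",
                    ConfigDef.Importance.HIGH, "Connector-level property (illustrative).");

    // The fix: build the task-level ConfigDef from a *copy* of the shared one,
    // so defining extra task properties never mutates CONNECTOR_CONFIG_DEF.
    static final ConfigDef TASK_CONFIG_DEF = new ConfigDef(CONNECTOR_CONFIG_DEF)
            .define("task.assigned.partitions", ConfigDef.Type.LIST, "",
                    ConfigDef.Importance.LOW, "Task-level property (illustrative).");
}
```

With separate ConfigDef instances, the task-only properties no longer leak
into the connector's validation output, and iteration over the connector's
ConfigDef can no longer race with the task class's static initializer.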



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-545 support automated consumer offset sync across clusters in MM 2.0

2020-05-20 Thread Maulin Vasavada
Thank you for the KIP. I sincerely hope we get enough votes on this KIP. I
was thinking of similar changes while working on DR capabilities; offsets
are the Achilles' heel there, and this KIP addresses it.

On Mon, May 18, 2020 at 6:10 PM Maulin Vasavada 
wrote:

> +1 (non-binding)
>
> On Mon, May 18, 2020 at 9:41 AM Ryanne Dolan 
> wrote:
>
>> Bump. Looks like we've got 6 non-binding votes and 1 binding.
>>
>> On Thu, Feb 20, 2020 at 11:25 AM Ning Zhang 
>> wrote:
>>
>> > Hello committers,
>> >
>> > I am the author of KIP-545. If we are still missing votes from the
>> > committers, please review the KIP and vote for it, so that the
>> > corresponding PR can be reviewed soon.
>> >
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
>> >
>> > Thank you
>> >
>> > On 2020/02/06 17:05:41, Edoardo Comar  wrote:
>> > > +1 (non-binding)
>> > > thanks for the KIP !
>> > >
>> > > On Tue, 14 Jan 2020 at 13:57, Navinder Brar > > .invalid>
>> > > wrote:
>> > >
>> > > > +1 (non-binding)
>> > > > Navinder
>> > > > On Tuesday, 14 January, 2020, 07:24:02 pm IST, Ryanne Dolan <
>> > > > ryannedo...@gmail.com> wrote:
>> > > >
>> > > >  Bump. We've got 4 non-binding and one binding vote.
>> > > >
>> > > > Ryanne
>> > > >
>> > > > On Fri, Dec 13, 2019, 1:44 AM Tom Bentley 
>> wrote:
>> > > >
>> > > > > +1 (non-binding)
>> > > > >
>> > > > > On Thu, Dec 12, 2019 at 6:33 PM Andrew Schofield <
>> > > > > andrew_schofi...@live.com>
>> > > > > wrote:
>> > > > >
>> > > > > > +1 (non-binding)
>> > > > > >
>> > > > > > On 12/12/2019, 14:20, "Mickael Maison" <
>> mickael.mai...@gmail.com>
>> > > > > wrote:
>> > > > > >
>> > > > > >+1 (binding)
>> > > > > >Thanks for the KIP!
>> > > > > >
>> > > > > >On Thu, Dec 5, 2019 at 12:56 AM Ryanne Dolan <
>> > ryannedo...@gmail.com
>> > > > >
>> > > > > > wrote:
>> > > > > >>
>> > > > > >> Bump. We've got 2 non-binding votes so far.
>> > > > > >>
>> > > > > >> On Wed, Nov 13, 2019 at 3:32 PM Ning Zhang <
>> > > > ning2008w...@gmail.com
>> > > > > >
>> > > > > > wrote:
>> > > > > >>
>> > > > > >> > My current plan is to implement this in
>> > "MirrorCheckpointTask"
>> > > > > >> >
>> > > > > >> > On 2019/11/02 03:30:11, Xu Jianhai > >
>> > > > wrote:
>> > > > >> > > I think this KIP will implement a task in SinkTask, right?
>> > > > > >> > >
>> > > > > >> > > On Sat, Nov 2, 2019 at 1:06 AM Ryanne Dolan <
>> > > > > > ryannedo...@gmail.com>
>> > > > > >> > wrote:
>> > > > > >> > >
>> > > > > >> > > > Hey y'all, Ning Zhang and I would like to start the
>> > vote for
>> > > > > > the
>> > > > > >> > following
>> > > > > >> > > > small KIP:
>> > > > > >> > > >
>> > > > > >> > > >
>> > > > > >> > > >
>> > > > > >> >
>> > > > > >
>> > > > >
>> > > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
>> > > > > >> > > >
>> > > > > >> > > > This is an elegant way to automatically write
>> consumer
>> > group
>> > > > > > offsets to
>> > > > > >> > > > downstream clusters without breaking existing use
>> cases.
>> > > > > > Currently, we
>> > > > > >> > rely
>> > > > > >> > > > on external tooling based on RemoteClusterUtils and
>> > > > > >> > kafka-consumer-groups
>> > > > > >> > > > command to write offsets. This KIP bakes this
>> > functionality
>> > > > > > into MM2
>> > > > > >> > > > itself, reducing the effort required to
>> > failover/failback
>> > > > > > workloads
>> > > > > >> > between
>> > > > > >> > > > clusters.
>> > > > > >> > > >
>> > > > > >> > > > Thanks for the votes!
>> > > > > >> > > >
>> > > > > >> > > > Ryanne
>> > > > > >> > > >
>> > > > > >> > >
>> > > > > >> >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > >
>> > >
>> >
>>
>
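
For context on the external tooling mentioned in the vote thread above, a
minimal sketch of offset translation via RemoteClusterUtils, assuming its
translateOffsets(properties, clusterAlias, groupId, timeout) signature; the
bootstrap address, cluster alias, and group name are illustrative:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class OffsetTranslationSketch {
    public static Map<TopicPartition, OffsetAndMetadata> translate() throws Exception {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "target-cluster:9092"); // illustrative address
        // Translate the committed offsets of "my-group" from the cluster
        // aliased "source" into offsets valid on the target cluster.
        return RemoteClusterUtils.translateOffsets(
                props, "source", "my-group", Duration.ofMinutes(1));
    }
}
```

Today the translated offsets still have to be committed separately (for
example via the kafka-consumer-groups command); KIP-545 moves the equivalent
logic into MM2 itself so the sync happens automatically.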


[jira] [Resolved] (KAFKA-8869) Map taskConfigs in KafkaConfigBackingStore grows monotonically despite of task removals

2020-05-20 Thread Konstantine Karantasis (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantine Karantasis resolved KAFKA-8869.
---
Resolution: Fixed

> Map taskConfigs in KafkaConfigBackingStore grows monotonically despite of 
> task removals
> ---
>
> Key: KAFKA-8869
> URL: https://issues.apache.org/jira/browse/KAFKA-8869
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Konstantine Karantasis
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Investigation of https://issues.apache.org/jira/browse/KAFKA-8676 revealed 
> another issue: 
> a map in {{KafkaConfigBackingStore}} keeps growing even as connectors and 
> tasks are eventually removed.
> This bug does not directly affect the rebalancing protocols, but it'd be good 
> to resolve it and maintain the map in a way similar to how 
> {{connectorConfigs}} is maintained.
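
A minimal sketch of the kind of remedy implied here, assuming a
tombstone-style update path; the names are illustrative, not the actual
KafkaConfigBackingStore code:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.util.ConnectorTaskId;

public class TaskConfigMapSketch {
    // Stand-in for the monotonically growing map in KafkaConfigBackingStore.
    private final Map<ConnectorTaskId, Map<String, String>> taskConfigs = new HashMap<>();

    // Mirror how connectorConfigs is maintained: drop the entry when a task's
    // config is deleted (a null/tombstone update), so the map shrinks as
    // tasks are removed instead of growing without bound.
    public void onTaskConfigUpdate(ConnectorTaskId taskId, Map<String, String> newConfig) {
        if (newConfig == null) {
            taskConfigs.remove(taskId);
        } else {
            taskConfigs.put(taskId, newConfig);
        }
    }
}
```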



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9409) Increase Immutability of ClusterConfigState

2020-05-20 Thread Konstantine Karantasis (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantine Karantasis resolved KAFKA-9409.
---
Resolution: Fixed

> Increase Immutability of ClusterConfigState
> ---
>
> Key: KAFKA-9409
> URL: https://issues.apache.org/jira/browse/KAFKA-9409
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Mollitor
>Priority: Minor
> Fix For: 2.6.0
>
>
> The class claims that it is immutable, but there are some mutable aspects of 
> this class.
>  
> Increase its immutability and apply a little cleanup (a rough sketch follows 
> the list below):
>  * Pre-initialize size of ArrayList
>  * Remove superfluous syntax
>  * Use ArrayList instead of LinkedList since the list is created once
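
A rough sketch of those cleanups on a simplified class shape (an assumption
for illustration, not the actual ClusterConfigState code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class ImmutabilitySketch {
    private final Map<String, Map<String, String>> connectorConfigs;

    // Wrap the incoming state in an unmodifiable view so callers cannot
    // mutate what the class advertises as immutable.
    public ImmutabilitySketch(Map<String, Map<String, String>> connectorConfigs) {
        this.connectorConfigs = Collections.unmodifiableMap(connectorConfigs);
    }

    // Pre-initialize the ArrayList to its final size; a list that is built
    // once and then only read gains nothing from a LinkedList.
    public List<String> connectors() {
        List<String> names = new ArrayList<>(connectorConfigs.size());
        names.addAll(connectorConfigs.keySet());
        return names;
    }
}
```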



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4553

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Increase gradle daemon’s heap size to 2g (#8700)


--
[...truncated 1.78 MB...]

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
PASSED

kafka.server.FetchRequestTest > testZStdCompressedTopic STARTED

kafka.server.FetchRequestTest > testZStdCompressedTopic PASSED

kafka.server.FetchRequestTest > 
testDownConversionFromBatchedToUnbatchedRespectsOffset STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
logsShouldNotDivergeOnUncleanLeaderElections STARTED

kafka.server.FetchRequestTest > 
testDownConversionFromBatchedToUnbatchedRespectsOffset PASSED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage STARTED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage PASSED

kafka.server.FetchRequestTest > testEpochValidationWithinFetchSession STARTED

kafka.server.FetchRequestTest > testEpochValidationWithinFetchSession PASSED

kafka.server.FetchRequestTest > testDownConversionWithConnectionFailure STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
logsShouldNotDivergeOnUncleanLeaderElections PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.KafkaMetricReporterClusterIdTest > testClusterIdPresent STARTED

kafka.server.KafkaMetricReporterClusterIdTest > testClusterIdPresent PASSED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequestValidationV0 STARTED

kafka.server.FetchRequestTest > testDownConversionWithConnectionFailure PASSED

kafka.server.FetchRequestTest > testPartitionDataEquals STARTED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequestValidationV0 PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequestValidationV3 STARTED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequestValidationV3 PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest STARTED

kafka.server.FetchRequestTest > testPartitionDataEquals PASSED

kafka.server.FetchRequestTest > testCurrentEpochValidation STARTED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest PASSED

kafka.server.ListOffsetsRequestTest > testListOffsetsErrorCodes STARTED

kafka.server.ListOffsetsRequestTest > testListOffsetsErrorCodes PASSED

kafka.server.ListOffsetsRequestTest > testCurrentEpochValidation STARTED

kafka.server.FetchRequestTest > testCurrentEpochValidation PASSED

kafka.server.FetchRequestTest > testCreateIncrementalFetchWithPartitionsInError 
STARTED

kafka.server.FetchRequestTest > testCreateIncrementalFetchWithPartitionsInError 
PASSED

kafka.server.FetchRequestTest > testFetchRequestV4WithReadCommitted STARTED

kafka.server.ListOffsetsRequestTest > testCurrentEpochValidation PASSED

kafka.server.ListOffsetsRequestTest > testResponseIncludesLeaderEpoch STARTED

kafka.server.FetchRequestTest > testFetchRequestV4WithReadCommitted PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.server.ListOffsetsRequestTest > testResponseIncludesLeaderEpoch PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForCaughtUpFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForCaughtUpFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade STARTED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade PASSED

kafka.api.SslAdminIntegrationTest > 
testAsynchronousAuthorizerAclUpdatesDontBlockRequestThreads STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 

Build failed in Jenkins: kafka-trunk-jdk14 #110

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Increase gradle daemon’s heap size to 2g (#8700)


--
[...truncated 1.83 MB...]

kafka.api.SslAdminIntegrationTest > testAttemptToCreateInvalidAcls STARTED

kafka.api.SslAdminIntegrationTest > testAttemptToCreateInvalidAcls PASSED

kafka.api.SslAdminIntegrationTest > testAclAuthorizationDenied STARTED

kafka.api.SslAdminIntegrationTest > testAclAuthorizationDenied PASSED

kafka.api.SslAdminIntegrationTest > testAclOperations STARTED

kafka.api.SslAdminIntegrationTest > testAclOperations PASSED

kafka.api.SslAdminIntegrationTest > testAclOperations2 STARTED

kafka.api.DescribeAuthorizedOperationsTest > 
testConsumerGroupAuthorizedOperations PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SslAdminIntegrationTest > testAclOperations2 PASSED

kafka.api.SslAdminIntegrationTest > testAclDelete STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslAdminIntegrationTest > testAclDelete PASSED

kafka.api.SslAdminIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls 
STARTED

kafka.api.SslAdminIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.SslAdminIntegrationTest > testAuthorizedOperations STARTED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls 
PASSED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslAdminIntegrationTest > testAuthorizedOperations PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled STARTED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls 
PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.GroupEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.GroupEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.GroupEndToEndAuthorizationTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1482

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Increase gradle daemon’s heap size to 2g (#8700)


--
[...truncated 1.40 MB...]
org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalWithSslPrincipalMapper STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalWithSslPrincipalMapper PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalIfSSLPeerIsNotAuthenticated STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalIfSSLPeerIsNotAuthenticated PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalBuilderGssapi STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalBuilderGssapi PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseSessionPeerPrincipalForSsl STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseSessionPeerPrincipalForSsl PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseOldPrincipalBuilderForSslIfProvided STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseOldPrincipalBuilderForSslIfProvided PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testReturnAnonymousPrincipalForPlaintext STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testReturnAnonymousPrincipalForPlaintext PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testKeyStoreTrustStoreValidation[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testKeyStoreTrustStoreValidation[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testReconfiguration[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testReconfiguration[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testReconfigurationWithoutTruststore[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testReconfigurationWithoutTruststore[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testReconfigurationWithoutKeystore[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testReconfigurationWithoutKeystore[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testKeystoreVerifiableUsingTruststore[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testKeystoreVerifiableUsingTruststore[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testServerSpecifiedSslEngineFactoryUsed[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testServerSpecifiedSslEngineFactoryUsed[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testUntrustedKeyStoreValidationFails[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testUntrustedKeyStoreValidationFails[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithIncorrectProviderClassConfiguration[tlsProtocol=TLSv1.2] 
STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithIncorrectProviderClassConfiguration[tlsProtocol=TLSv1.2] 
PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testCertificateEntriesValidation[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testCertificateEntriesValidation[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testInvalidSslEngineFactory[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testInvalidSslEngineFactory[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithCustomKeyManagerConfiguration[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithCustomKeyManagerConfiguration[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testClientMode[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testClientMode[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 

Re: [VOTE]: KIP-604: Remove ZooKeeper Flags from the Administrative Tools

2020-05-20 Thread Jason Gustafson
Sounds good. +1 from me.

On Tue, May 19, 2020 at 5:41 PM Colin McCabe  wrote:

> On Tue, May 19, 2020, at 09:31, Jason Gustafson wrote:
> > Hi Colin,
> >
> > Looks good. I just had one question. It sounds like your intent is to
> > change kafka-configs.sh so that the --zookeeper flag is only supported
> for
> > bootstrapping. I assume in the case of SCRAM that we will only make this
> > change after the broker API is available?
> >
> > Thanks,
> > Jason
>
> Hi Jason,
>
> Yes, that's correct.  We will have the SCRAM API ready by the Kafka 3.0
> release.
>
> best,
> Colin
>
>
> >
> > On Tue, May 19, 2020 at 5:22 AM Mickael Maison  >
> > wrote:
> >
> > > +1 (binding)
> > > Thanks Colin
> > >
> > > On Tue, May 19, 2020 at 10:57 AM Manikumar 
> > > wrote:
> > > >
> > > > +1 (binding)
> > > >
> > > > Thanks for the KIP
> > > >
> > > > On Tue, May 19, 2020 at 12:29 PM David Jacot 
> > > wrote:
> > > >
> > > > > +1 (non-binding).
> > > > >
> > > > > Thanks for the KIP.
> > > > >
> > > > > On Fri, May 15, 2020 at 12:41 AM Guozhang Wang  >
> > > wrote:
> > > > >
> > > > > > +1.
> > > > > >
> > > > > > Thanks Colin!
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Tue, May 12, 2020 at 3:45 PM Colin McCabe  >
> > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I'd like to start a vote on KIP-604: Remove ZooKeeper Flags
> from
> > > the
> > > > > > > Administrative Tools.
> > > > > > >
> > > > > > > As a reminder, this KIP is for the next major release of Kafka,
> > > the 3.0
> > > > > > > release.   So it won't go into the upcoming 2.6 release.  It's
> a
> > > pretty
> > > > > > > small change that just removes the --zookeeper flags from some
> > > tools
> > > > > and
> > > > > > > removes a deprecated tool.  We haven't decided exactly when
> we'll
> > > do
> > > > > 3.0
> > > > > > > but I believe we will certainly want this change in that
> release.
> > > > > > >
> > > > > > > The KIP does contain one small change relevant to Kafka 2.6:
> adding
> > > > > > > support for --if-exists and --if-not-exists in combination
> with the
> > > > > > > --bootstrap-server flag.
> > > > > > >
> > > > > > > best,
> > > > > > > Colin
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-1056) Evenly Distribute Intervals in OffsetIndex

2020-05-20 Thread Guozhang Wang (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang resolved KAFKA-1056.
--
Resolution: Fixed

> Evenly Distribute Intervals in OffsetIndex
> --
>
> Key: KAFKA-1056
> URL: https://issues.apache.org/jira/browse/KAFKA-1056
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: newbie++
>
> Today a new entry will be created in OffsetIndex for each produce request 
> regardless of the number of messages it contains. It is better to evenly 
> distribute the intervals between index entries for index search efficiency.
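
A sketch of the even-interval idea, assuming a byte-based threshold in the
spirit of the index.interval.bytes setting; the class below is made up for
illustration (the real log-indexing code is Scala):

```java
public class IndexIntervalSketch {
    private final int indexIntervalBytes;
    private long bytesSinceLastEntry = 0;

    public IndexIntervalSketch(int indexIntervalBytes) {
        this.indexIntervalBytes = indexIntervalBytes;
    }

    // Decide per batch instead of per produce request: an index entry is
    // written only after at least indexIntervalBytes of log data has
    // accumulated since the last entry, which spaces entries evenly.
    public boolean maybeIndex(int batchSizeInBytes) {
        boolean shouldIndex = bytesSinceLastEntry >= indexIntervalBytes;
        if (shouldIndex) {
            bytesSinceLastEntry = 0;
        }
        bytesSinceLastEntry += batchSizeInBytes;
        return shouldIndex;
    }
}
```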



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #109

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9603: Do not turn on bulk loading for segmented stores on 
stand-by


--
[...truncated 1.55 MB...]

kafka.utils.ExitTest > shouldNotInvokeShutdownHookImmediately STARTED

kafka.utils.ExitTest > shouldNotInvokeShutdownHookImmediately PASSED

kafka.utils.TopicFilterTest > testWhitelists STARTED

kafka.utils.TopicFilterTest > testWhitelists PASSED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered STARTED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered PASSED

kafka.utils.LoggingTest > testLogName STARTED

kafka.utils.LoggingTest > testLogName PASSED

kafka.utils.LoggingTest > testLogNameOverride STARTED

kafka.utils.LoggingTest > testLogNameOverride PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric PASSED

kafka.zookeeper.ZooKeeperClientTest > testExceptionInBeforeInitializingSession 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testExceptionInBeforeInitializingSession 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChangeNotTriggered STARTED

kafka.zookeeper.ZooKeeperClientTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4552

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9603: Do not turn on bulk loading for segmented stores on 
stand-by


--
[...truncated 1.60 MB...]

kafka.admin.ReassignPartitionsIntegrationTest > testCancellation STARTED

kafka.api.PlaintextAdminIntegrationTest > testDescribeConfigsForTopic PASSED

kafka.api.PlaintextAdminIntegrationTest > testConsumerGroups STARTED

kafka.admin.ReassignPartitionsIntegrationTest > testCancellation PASSED

kafka.admin.ReassignPartitionsIntegrationTest > testThrottledReassignment 
STARTED

kafka.api.PlaintextAdminIntegrationTest > testConsumerGroups PASSED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersWhenNoLiveBrokers STARTED

kafka.admin.ReassignPartitionsIntegrationTest > testThrottledReassignment PASSED

kafka.admin.ReassignPartitionsIntegrationTest > 
testHighWaterMarkAfterPartitionReassignment STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersWhenNoLiveBrokers PASSED

kafka.api.PlaintextAdminIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException STARTED

kafka.admin.ReassignPartitionsIntegrationTest > 
testHighWaterMarkAfterPartitionReassignment PASSED

kafka.admin.ReassignPartitionsIntegrationTest > testReplicaDirectoryMoves 
STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException PASSED

kafka.api.PlaintextAdminIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.PlaintextAdminIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.PlaintextAdminIntegrationTest > testAuthorizedOperations STARTED

kafka.api.PlaintextAdminIntegrationTest > testAuthorizedOperations PASSED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets STARTED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions STARTED

kafka.admin.ReassignPartitionsIntegrationTest > testReplicaDirectoryMoves PASSED

kafka.admin.BrokerApiVersionsCommandTest > checkBrokerApiVersionCommandOutput 
STARTED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs STARTED

kafka.admin.BrokerApiVersionsCommandTest > checkBrokerApiVersionCommandOutput 
PASSED

kafka.admin.TopicCommandTest > testIsNotUnderReplicatedWhenAdding STARTED

kafka.admin.TopicCommandTest > testIsNotUnderReplicatedWhenAdding PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testMissingPartition0 STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs PASSED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes STARTED

kafka.admin.AddPartitionsTest > testMissingPartition0 PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount STARTED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes PASSED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription STARTED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers STARTED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadMetricsCleanUpWithAssign 
STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadMetricsCleanUpWithAssign 
PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithCreateTime STARTED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas STARTED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithCreateTime PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit STARTED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED

kafka.api.PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition 
STARTED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > 
testBasicPreferredReplicaElection STARTED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > 
testBasicPreferredReplicaElection PASSED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > testInvalidBrokerGiven 
STARTED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > testInvalidBrokerGiven 
PASSED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > 
testPreferredReplicaJsonData STARTED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > 
testPreferredReplicaJsonData PASSED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > testTimeout STARTED


Build failed in Jenkins: kafka-trunk-jdk11 #1481

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9603: Do not turn on bulk loading for segmented stores on 
stand-by


--
[...truncated 1.40 MB...]

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testAssignOnEmptyTopicPartition PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testFetchStableOffsetThrowInCommitted STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testFetchStableOffsetThrowInCommitted PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testUnsubscribeShouldTriggerPartitionsLostWithNoGeneration STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testUnsubscribeShouldTriggerPartitionsLostWithNoGeneration PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnNullTopic STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnNullTopic PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testSubscription STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testSubscription PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testInterceptorConstructorClose STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testInterceptorConstructorClose PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnEmptyTopic STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnEmptyTopic PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testWakeupWithFetchDataAvailable STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testWakeupWithFetchDataAvailable PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testReturnRecordsDuringRebalance STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testReturnRecordsDuringRebalance PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testLeaveGroupTimeout 
STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testLeaveGroupTimeout 
PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testFetchStableOffsetThrowInPoll STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testFetchStableOffsetThrowInPoll PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testPartitionsForAuthenticationFailure STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testPartitionsForAuthenticationFailure PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnNullTopicCollection STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testSubscriptionOnNullTopicCollection PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testCloseWithTimeUnit 
STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testCloseWithTimeUnit 
PASSED

org.apache.kafka.clients.CommonClientConfigsTest > 
testExponentialBackoffDefaults STARTED

org.apache.kafka.clients.CommonClientConfigsTest > 
testExponentialBackoffDefaults PASSED

org.apache.kafka.clients.FetchSessionHandlerTest > testSessionless STARTED

org.apache.kafka.clients.FetchSessionHandlerTest > testSessionless PASSED

org.apache.kafka.clients.FetchSessionHandlerTest > 
testVerifyFullFetchResponsePartitions STARTED

org.apache.kafka.clients.FetchSessionHandlerTest > 
testVerifyFullFetchResponsePartitions PASSED

org.apache.kafka.clients.FetchSessionHandlerTest > testIncrementals STARTED

org.apache.kafka.clients.FetchSessionHandlerTest > testIncrementals PASSED

org.apache.kafka.clients.FetchSessionHandlerTest > 
testIncrementalPartitionRemoval STARTED

org.apache.kafka.clients.FetchSessionHandlerTest > 
testIncrementalPartitionRemoval PASSED

org.apache.kafka.clients.FetchSessionHandlerTest > testFindMissing STARTED

org.apache.kafka.clients.FetchSessionHandlerTest > testFindMissing PASSED

org.apache.kafka.clients.FetchSessionHandlerTest > testDoubleBuild STARTED

org.apache.kafka.clients.FetchSessionHandlerTest > testDoubleBuild PASSED

org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptionsTest > 
testConstructor STARTED

org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptionsTest > 
testConstructor PASSED

org.apache.kafka.clients.admin.internals.AdminMetadataManagerTest > 
testAuthenticationFailure STARTED

org.apache.kafka.clients.admin.internals.AdminMetadataManagerTest > 
testAuthenticationFailure PASSED

org.apache.kafka.clients.admin.internals.AdminMetadataManagerTest > 
testMetadataReady STARTED

org.apache.kafka.clients.admin.internals.AdminMetadataManagerTest > 
testMetadataReady PASSED

org.apache.kafka.clients.admin.internals.AdminMetadataManagerTest > 
testMetadataRefreshBackoff STARTED

org.apache.kafka.clients.admin.internals.AdminMetadataManagerTest > 
testMetadataRefreshBackoff PASSED

org.apache.kafka.clients.admin.DeleteConsumerGroupOffsetsResultTest > 
testPartitionMissingInResponseErrorConstructor STARTED


Re: [DISCUSS] KIP-609: Use Pre-registration and Blocking Calls for Better Transaction Efficiency

2020-05-20 Thread Boyang Chen
Thanks for the check! Makes sense to me.

On Wed, May 20, 2020 at 3:27 PM Guozhang Wang  wrote:

> Just to clarify on the implementation details: today inside
> Sender#runOnce(), we have the following:
>
> ```
>
> if (maybeSendAndPollTransactionalRequest()) {
>     return;
> }
>
> ```
>
>
> Which basically means that we return early as long as we still have any
> in-flight txn request, so we are not really blocking on just the first sent
> request but on all of the types. For add-partitions requests, that makes
> sense, since once unblocked, all requests would unblock at the same time
> anyway; but we do not necessarily need to block on AddOffsetsToTxnRequest
> and TxnOffsetCommitRequest.
>
> So we can relax the above restriction when 1) the txn coordinator is known,
> 2) the producer has a valid PID, and 3) all partitions are registered in the
> txn.
>
>
>
> Guozhang
>
> On Wed, May 20, 2020 at 3:13 PM Boyang Chen 
> wrote:
>
> > Thanks Guozhang for the new proposal. I agree that we could deliver
> > https://issues.apache.org/jira/browse/KAFKA-9878 first and
> > measure the following metrics:
> >
> > 1. The total volume of AddPartitionsToTxn requests
> > 2. The time used in propagating the transaction state updates during
> > transaction
> > 3. The time used in transaction marker propagation
> >
> > If those metrics suggest that we are doing a pretty good job already,
> then
> > the improvement of delivering the entire KIP-609 is minimal. In the
> > meantime, I updated 9878 with more details. Additionally, I realized that
> > we should change the logic for the AddPartitionsToTxn call so that we could
> > maintain a future queue and wait for all the delta change completions,
> > instead of blocking on the first one sent out. Does that make sense?
> >
> > On Wed, May 20, 2020 at 2:28 PM Guozhang Wang 
> wrote:
> >
> > > Hello Matthias,
> > >
> > > I have a quick question regarding the motivation of the long-blocking
> and
> > > batch-add-partitions behavior: do we think the latency primarily comes
> > from
> > > the network round-trips, or from the coordinator-side transaction-log
> > > appends? If we believe it is coming from the latter, then perhaps we
> can
> > > first consider optimizing that without making any public changes,
> > > specifically:
> > >
> > > 1) We block on the add-partitions in a purgatory, as proposed in your
> > KIP.
> > >
> > > 2) When try-completing the parked add-partitions requests in a
> purgatory,
> > > we consolidate them as a single txn state transition with a single
> append
> > > to transaction log.
> > >
> > > 3) *Optionally* on the client side, we can further optimize the behavior:
> > > instead of blocking on sending any batches as long as there are any txn
> > > requests in flight, we would just query which partitions have successfully
> > > "registered" as part of the txn from the add-partitions response and then
> > > send records for those partitions. By doing this we can reduce the
> > > end-to-end blocking time.
> > >
> > > None of the above changes actually requires any public API or protocol
> > > changes. In addition, it would not make things worse even in edge cases
> > > whereas with the proposed API change, if the producer pre-registered a
> > > bunch of partitions but then timed out, the coordinator needs to write
> > > abort markers to those pre-registered partitions unnecessarily. We can
> measure
> > > the avg. number of txn log appends per transaction on the broker side
> and
> > > see if it can be reduced to close to 1 already.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Tue, May 19, 2020 at 10:33 AM Boyang Chen <
> reluctanthero...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hey John,
> > > >
> > > > thanks for the insights! Replied inline.
> > > >
> > > > On Tue, May 19, 2020 at 7:48 AM John Roesler 
> > > wrote:
> > > >
> > > > > Thanks for the KIP, Boyang!
> > > > >
> > > > > This looks good and reasonable to me overall.
> > > > >
> > > > > J1: One clarification: you mention that the blocking behavior
> depends
> > > on
> > > > > a new broker version, which sounds good to me, but I didn't see why
> > > > > we would need to throw any UnsupportedVersionExceptions. It sounds
> > > > > a little like you just want to implement a kind of long polling on
> > the
> > > > > AddPartitionToTxn API, such that the broker would optimistically
> > block
> > > > > for a while when there is a pending prior transaction.
> > > > >
> > > > > Can this just be a behavior change on the broker side, such that
> both
> > > > > old and new clients would be asked to retry when the broker is
> older,
> > > > > and both old and new clients would instead see the API call block
> for
> > > > > longer (but be successful more often) when the broker is newer?
> > > > >
> > > > > Related: is it still possible to get back the "please retry" error
> > from
> > > > the
> > > > > broker, or is it guaranteed to block until the call 

Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-05-20 Thread Jorge Quilcate
Thank you both for the great feedback.

I like the "fancy" proposal :), and how it removes the need for
additional API methods. And with a feature flag on `StateStore`,
disabled by default, it should not break current users.

The only side effect I can think of is that, by moving the flag upwards,
all later operations become affected, which might be OK for most (all?)
cases. I can't think of a scenario where this would be an issue; I just
want to point it out.

If we move to this approach, I'd like to check that I got it right before
updating the KIP (see the sketch after this list):

- only `StateStore` will change by gaining a new method,
`backwardIteration()`, which returns `false` by default to keep things
compatible.
- then all `*Stores` will have to update their implementations based on
this flag.
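
If it helps the discussion, a minimal sketch of that summary, assuming a
Java default method as the compatibility mechanism; only the method name
`backwardIteration()` comes from this thread, the rest is illustrative:

```java
// Sketch of the feature-flag idea on StateStore (illustrative shape, not
// the actual org.apache.kafka.streams.processor.StateStore interface).
public interface StateStoreSketch {

    // ...the existing StateStore methods (name(), init(), flush(), close(),
    // persistent(), isOpen(), ...) would remain unchanged.

    // Stores that support reverse iteration override this to return true;
    // returning false by default keeps existing implementations compatible.
    default boolean backwardIteration() {
        return false;
    }
}
```

Under this shape, built-in stores would override the flag once their reverse
iterators are implemented, while custom stores are unaffected until they opt
in.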


On 20/05/2020 21:02, Sophie Blee-Goldman wrote:
>> There's no possibility that someone could be relying
>> on iterating over that range in increasing order, because that's not what
>> happens. However, they could indeed be relying on getting an empty
> iterator
>
> I just meant that they might be relying on the assumption that the range
> query
> will never return results with decreasing keys. The empty iterator wouldn't
> break that contract, but of course a surprise reverse iterator would.
>
> FWIW I actually am in favor of automatically converting to a reverse
> iterator,
> I just thought we should consider whether this should be off by default or
> even possible to disable at all.
>
> On Tue, May 19, 2020 at 7:42 PM John Roesler  wrote:
>
>> Thanks for the response, Sophie,
>>
>> I wholeheartedly agree we should take as much into account as possible
>> up front, rather than regretting our decisions later. I actually do share
>> your vague sense of worry, which was what led me to say initially that I
>> thought my counterproposal might be "too fancy". Sometimes, it's better
>> to be explicit instead of "elegant", if we think more people will be
>> confused
>> than not.
>>
>> I really don't think that there's any danger of "relying on a bug" here,
>> although
>> people certainly could be relying on current behavior. One thing to be
>> clear
>> about (which I just left a more detailed comment in KAFKA-8159 about) is
>> that
>> when we say something like key1 > key2, this ordering is defined by the
>> serde's output and nothing else.
>>
>> Currently, thanks to your fix in https://github.com/apache/kafka/pull/6521,
>> the store contract is that for range scans, if from > to, then the store
>> must
>> return an empty iterator. There's no possibility that someone could be
>> relying
>> on iterating over that range in increasing order, because that's not what
>> happens. However, they could indeed be relying on getting an empty
>> iterator.
>>
>> My counterproposal was to actually change this contract to say that the
>> store
>> must return an iterator over the keys in that range, but in the reverse
>> order.
>> So, in addition to considering whether this idea is "too fancy" (aka
>> confusing),
>> we should also consider the likelihood of breaking an existing program with
>> this behavior/contract change.
>>
>> To echo your clarification, I'm also not advocating strongly in favor of my
>> proposal. I just wanted to present it for consideration alongside Jorge's
>> original one.
>>
>> Thanks for raising these very good points,
>> -John
>>
>> On Tue, May 19, 2020, at 20:49, Sophie Blee-Goldman wrote:
>>>> Rather than working around it, I think we should just fix it
>>> Now *that's* a "fancy" idea :P
>>>
>>> That was my primary concern, although I do have a vague sense of worry
>>> that we might be allowing users to get into trouble without realizing it.
>>> For example if their custom serdes suffer a similar bug as the above,
>>> and/or
>>> they rely on getting results in increasing order (of the keys) even when
>>> to < from. Maybe they're relying on the fact that the range query returns
>>> nothing in that case.
>>>
>>> Not sure if that qualifies as relying on a bug or not, but in the past
>>> we've
>>> taken the stance that we should not break compatibility even if the user
>>> was relying on bugs or unintentional behavior.
>>>
>>> Just to clarify I'm not advocating strongly against this proposal, just
>>> laying
>>> out some considerations we should take into account. At the end of the
>> day
>>> we should do what's right rather than maintain compatibility with
>> existing
>>> bugs, but sometimes there's a reasonable middle ground.
>>>
>>> On Tue, May 19, 2020 at 6:15 PM John Roesler wrote:
>>>> Thanks Sophie,
>>>>
>>>> Woah, that’s a nasty bug. Rather than working around it, I think we
>>>> should just fix it. I’ll leave some comments on the Jira.
>>>>
>>>> It doesn’t seem like it should be this KIP’s concern that some serdes
>>>> might be incorrectly written.
>>>>
>>>> Were there other practical concerns that you had in mind?
>>>>
>>>> Thanks,
>>>> John
>>>>
>>>> On Tue, May 19, 2020, at 19:10, Sophie Blee-Goldman wrote:
> I like this 

Re: [DISCUSS] KIP-609: Use Pre-registration and Blocking Calls for Better Transaction Efficiency

2020-05-20 Thread Guozhang Wang
Just to clarify on the implementation details: today inside
Sender#runOnce(), we have the following:

```

if (maybeSendAndPollTransactionalRequest()) {
    return;
}

```


Which basically means that we return early, without sending any data, as long
as we still have any in-flight txn request; so we are not really blocking on
the first sent request but on all request types. For add-partitions requests,
that makes sense since once unblocked, all requests would unblock at the same
time anyway; but we do not necessarily need to block on AddOffsetsToTxnRequest
and TxnOffsetCommitRequest.

So we can relax the above restriction to only block until 1) the txn
coordinator is known, 2) the producer has a valid PID, and 3) all partitions
are registered in the txn.
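
To make the relaxed gating concrete, here is a minimal sketch; the names
below are illustrative stand-ins, not the actual Sender/TransactionManager
internals:

```
import java.util.Set;

// Illustrative stand-in for the relaxed gating condition: block only while
// the three preconditions are unmet. Offset-commit-related requests
// (AddOffsetsToTxn / TxnOffsetCommit) no longer gate record sends.
final class TxnGate {
    static boolean mustBlock(boolean coordinatorKnown,
                             boolean hasValidPid,
                             Set<String> partitionsPendingAdd) {
        return !coordinatorKnown || !hasValidPid || !partitionsPendingAdd.isEmpty();
    }
}
```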



Guozhang

On Wed, May 20, 2020 at 3:13 PM Boyang Chen 
wrote:

> Thanks Guozhang for the new proposal. I agree that we could deliver
> https://issues.apache.org/jira/browse/KAFKA-9878
>  first and
> measure the following metrics:
>
> 1. The total volume of AddPartitionToTxn requests
> 2. The time used in propagating the transaction state updates during
> transaction
> 3. The time used in transaction marker propagation
>
> If those metrics suggest that we are doing a pretty good job already, then
> the improvement of delivering the entire KIP-609 is minimal. In the
> meantime, I updated 9878 with more details. Additionally, I realized that
> we should change the logic for the AddPartitionToTxn call so that we could
> maintain a future queue and wait for all the delta change completions,
> instead of blocking on the first sent out one. Does that make sense?
>
> On Wed, May 20, 2020 at 2:28 PM Guozhang Wang  wrote:
>
> > Hello Matthias,
> >
> > I have a quick question regarding the motivation of the long-blocking and
> > batch-add-partitions behavior: do we think the latency primarily comes
> from
> > the network round-trips, or from the coordinator-side transaction-log
> > appends? If we believe it is coming from the latter, then perhaps we can
> > first consider optimizing that without making any public changes,
> > specifically:
> >
> > 1) We block on the add-partitions in a purgatory, as proposed in your
> KIP.
> >
> > 2) When try-completing the parked add-partitions requests in a purgatory,
> > we consolidate them as a single txn state transition with a single append
> > to transaction log.
> >
> > 3) *Optionally* on the client side, we can further optimize the behavior:
> > instead of blocking on sending any batches as long as there are any txn
> > requests in flight, we would just query which partitions have successfully
> > "registered" as part of the txn from add-partitions response and then
> > send
> > records for those partitions. By doing this we can reduce the end-to-end
> > blocking time.
> >
> > None of the above changes actually requires any public API or protocol
> > changes. In addition, it would not make things worse even in edge cases
> > whereas with the proposed API change, if the producer pre-registered a
> > bunch of partitions but then timed out, the coordinator need to write
> abort
> > markers to those pre-registered partitions unnecessarily. We can measure
> > the avg. number of txn log appends per transaction on the broker side and
> > see if it can be reduced to close to 1 already.
> >
> >
> > Guozhang
> >
> >
> > On Tue, May 19, 2020 at 10:33 AM Boyang Chen  >
> > wrote:
> >
> > > Hey John,
> > >
> > > thanks for the insights! Replied inline.
> > >
> > > On Tue, May 19, 2020 at 7:48 AM John Roesler 
> > wrote:
> > >
> > > > Thanks for the KIP, Boyang!
> > > >
> > > > This looks good and reasonable to me overall.
> > > >
> > > > J1: One clarification: you mention that the blocking behavior depends
> > on
> > > > a new broker version, which sounds good to me, but I didn't see why
> > > > we would need to throw any UnsupportedVersionExceptions. It sounds
> > > > a little like you just want to implement a kind of long polling on
> the
> > > > AddPartitionToTxn API, such that the broker would optimistically
> block
> > > > for a while when there is a pending prior transaction.
> > > >
> > > > Can this just be a behavior change on the broker side, such that both
> > > > old and new clients would be asked to retry when the broker is older,
> > > > and both old and new clients would instead see the API call block for
> > > > longer (but be successful more often) when the broker is newer?
> > > >
> > > > Related: is it still possible to get back the "please retry" error
> from
> > > the
> > > > broker, or is it guaranteed to block until the call completes?
> > > >
> > > > This is a good observation. I agree the blocking behavior could
> benefit
> > > all the producer
> > > versions older than 0.11, which could be retried. Added to the KIP.
> > >
> > >
> > > > J2: Please forgive my ignorance, but is there any ill effect if a
> > > producer
> > > > adds a partition to a transaction and then commits without having
> added
> > > > any data 

Re: [DISCUSS] KIP-609: Use Pre-registration and Blocking Calls for Better Transaction Efficiency

2020-05-20 Thread Boyang Chen
Thanks Guozhang for the new proposal. I agree that we could deliver
https://issues.apache.org/jira/browse/KAFKA-9878
 first and
measure the following metrics:

1. The total volume of AddPartitionToTxn requests
2. The time used in propagating the transaction state updates during
transaction
3. The time used in transaction marker propagation

If those metrics suggest that we are doing a pretty good job already, then
the improvement of delivering the entire KIP-609 is minimal. In the
meantime, I updated 9878 with more details. Additionally, I realized that
we should change the logic for the AddPartitionToTxn call so that we could
maintain a future queue and wait for all the delta change completions,
instead of blocking on the first sent out one. Does that make sense?
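
A rough sketch of that future-queue idea, with stand-in types rather than the
actual producer internals:

```
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Stand-in sketch: collect one future per AddPartitionToTxn delta sent out,
// then wait for all of them instead of only the first in-flight request.
final class AddPartitionsFutureQueue {
    static void awaitAll(List<CompletableFuture<Void>> pendingAdds) {
        CompletableFuture
            .allOf(pendingAdds.toArray(new CompletableFuture[0]))
            .join(); // unblocks once every delta change has completed
    }
}
```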

On Wed, May 20, 2020 at 2:28 PM Guozhang Wang  wrote:

> Hello Matthias,
>
> I have a quick question regarding the motivation of the long-blocking and
> batch-add-partitions behavior: do we think the latency primarily comes from
> the network round-trips, or from the coordinator-side transaction-log
> appends? If we believe it is coming from the latter, then perhaps we can
> first consider optimizing that without making any public changes,
> specifically:
>
> 1) We block on the add-partitions in a purgatory, as proposed in your KIP.
>
> 2) When try-completing the parked add-partitions requests in a purgatory,
> we consolidate them as a single txn state transition with a single append
> to transaction log.
>
> 3) *Optionally* on the client side, we can further optimize the behavior:
> instead of blocking on sending any batches as long as there are any txn
> requests in flight, we would just query which partitions have successfully
> "registered" as part of the txn from add-partitions response and then send
> records for those partitions. By doing this we can reduce the end-to-end
> blocking time.
>
> None of the above changes actually requires any public API or protocol
> changes. In addition, it would not make things worse even in edge cases
> whereas with the proposed API change, if the producer pre-registered a
> bunch of partitions but then timed out, the coordinator needs to write abort
> markers to those pre-registered partitions unnecessarily. We can measure
> the avg. number of txn log appends per transaction on the broker side and
> see if it can be reduced to close to 1 already.
>
>
> Guozhang
>
>
> On Tue, May 19, 2020 at 10:33 AM Boyang Chen 
> wrote:
>
> > Hey John,
> >
> > thanks for the insights! Replied inline.
> >
> > On Tue, May 19, 2020 at 7:48 AM John Roesler 
> wrote:
> >
> > > Thanks for the KIP, Boyang!
> > >
> > > This looks good and reasonable to me overall.
> > >
> > > J1: One clarification: you mention that the blocking behavior depends
> on
> > > a new broker version, which sounds good to me, but I didn't see why
> > > we would need to throw any UnsupportedVersionExceptions. It sounds
> > > a little like you just want to implement a kind of long polling on the
> > > AddPartitionToTxn API, such that the broker would optimistically block
> > > for a while when there is a pending prior transaction.
> > >
> > > Can this just be a behavior change on the broker side, such that both
> > > old and new clients would be asked to retry when the broker is older,
> > > and both old and new clients would instead see the API call block for
> > > longer (but be successful more often) when the broker is newer?
> > >
> > > Related: is it still possible to get back the "please retry" error from
> > the
> > > broker, or is it guaranteed to block until the call completes?
> > >
> > > This is a good observation. I agree the blocking behavior could benefit
> > all the producer
> > versions older than 0.11, which could be retried. Added to the KIP.
> >
> >
> > > J2: Please forgive my ignorance, but is there any ill effect if a
> > producer
> > > adds a partition to a transaction and then commits without having added
> > > any data to the transaction?
> > >
> > > I can see this happening, e.g., if I know that my application generally
> > > sends to 5 TopicPartitions, I would use the new beginTransaction call
> > > and just always give it the same list of partitions, and _then_ do the
> > > processing, which may or may not send data to all five potential
> > > partitions.
> > >
> >
> > Yes, that's possible, which is the reason why we discussed bumping the
> > EndTxn request
> > to only include the partitions actually being written to, so that the
> > transaction coordinator will only send markers
> > to the actually-written partitions. The worst case for mis-used
> > pre-registration API
> > is to write out more transaction markers than necessary. For one, I do
> see
> > the benefit of doing that,
> > which is a life-saver for a "lazy user" who doesn't want to infer the
> > output partitions it would write to, but always
> > registers the full set of output partitions. With this 

Re: [DISCUSS] KIP-609: Use Pre-registration and Blocking Calls for Better Transaction Efficiency

2020-05-20 Thread Guozhang Wang
Hello Matthias,

I have a quick question regarding the motivation of the long-blocking and
batch-add-partitions behavior: do we think the latency primarily comes from
the network round-trips, or from the coordinator-side transaction-log
appends? If we believe it is coming from the latter, then perhaps we can
first consider optimizing that without making any public changes,
specifically:

1) We block on the add-partitions in a purgatory, as proposed in your KIP.

2) When try-completing the parked add-partitions requests in a purgatory,
we consolidate them as a single txn state transition with a single append
to transaction log.

3) *Optionally* on the client side, we can further optimize the behavior:
instead of blocking on sending any batches as long as there are any txn
requests in flight, we would just query which partitions have successfully
"registered" as part of the txn from add-partitions response and then send
records for those partitions. By doing this we can reduce the end-to-end
blocking time.
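
To illustrate points 1 and 2, a hedged sketch of parking add-partitions
requests and consolidating them into a single transaction-log append; all
names are hypothetical, and the real coordinator purgatory is more involved:

```
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: park add-partitions requests while the previous txn
// completes, then fold all of them into ONE transaction-log append.
final class ParkedAddPartitions {
    interface TxnLog { void appendStateTransition(Set<String> partitions); }

    static final class Pending {
        final Set<String> partitions;
        final CompletableFuture<Void> done = new CompletableFuture<>();
        Pending(Set<String> partitions) { this.partitions = partitions; }
    }

    private final List<Pending> parked = new ArrayList<>();

    synchronized CompletableFuture<Void> park(Set<String> partitions) {
        Pending p = new Pending(partitions);
        parked.add(p);
        return p.done;
    }

    // Called once the previous transaction finishes.
    synchronized void tryCompleteAll(TxnLog log) {
        Set<String> union = new HashSet<>();
        parked.forEach(p -> union.addAll(p.partitions));
        log.appendStateTransition(union);           // a single consolidated append
        parked.forEach(p -> p.done.complete(null)); // unblock every caller
        parked.clear();
    }
}
```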

None of the above changes actually requires any public API or protocol
changes. In addition, it would not make things worse even in edge cases
whereas with the proposed API change, if the producer pre-registered a
bunch of partitions but then timed out, the coordinator needs to write abort
markers to those pre-registered partitions unnecessarily. We can measure
the avg. number of txn log appends per transaction on the broker side and
see if it can be reduced to close to 1 already.


Guozhang


On Tue, May 19, 2020 at 10:33 AM Boyang Chen 
wrote:

> Hey John,
>
> thanks for the insights! Replied inline.
>
> On Tue, May 19, 2020 at 7:48 AM John Roesler  wrote:
>
> > Thanks for the KIP, Boyang!
> >
> > This looks good and reasonable to me overall.
> >
> > J1: One clarification: you mention that the blocking behavior depends on
> > a new broker version, which sounds good to me, but I didn't see why
> > we would need to throw any UnsupportedVersionExceptions. It sounds
> > a little like you just want to implement a kind of long polling on the
> > AddPartitionToTxn API, such that the broker would optimistically block
> > for a while when there is a pending prior transaction.
> >
> > Can this just be a behavior change on the broker side, such that both
> > old and new clients would be asked to retry when the broker is older,
> > and both old and new clients would instead see the API call block for
> > longer (but be successful more often) when the broker is newer?
> >
> > Related: is it still possible to get back the "please retry" error from
> the
> > broker, or is it guaranteed to block until the call completes?
> >
> > This is a good observation. I agree the blocking behavior could benefit
> all the producer
> versions older than 0.11, which could be retried. Added to the KIP.
>
>
> > J2: Please forgive my ignorance, but is there any ill effect if a
> producer
> > adds a partition to a transaction and then commits without having added
> > any data to the transaction?
> >
> > I can see this happening, e.g., if I know that my application generally
> > sends to 5 TopicPartitions, I would use the new beginTransaction call
> > and just always give it the same list of partitions, and _then_ do the
> > processing, which may or may not send data to all five potential
> > partitions.
> >
>
> Yes, that's possible, which is the reason why we discussed bumping the
> EndTxn request
> to only include the partitions actually being written to, so that the
> transaction coordinator will only send markers
> to the actually-written partitions. The worst case for mis-used
> pre-registration API
> is to write out more transaction markers than necessary. For one, I do see
> the benefit of doing that,
> which is a life-saver for a "lazy user" who doesn't want to infer the
> output partitions it would write to, but always
> registers the full set of output partitions. With this observation in mind,
> bumping EndTxn makes sense.
>
> >
> > Thanks again!
> > -John
> >
> > On Mon, May 18, 2020, at 10:25, Boyang Chen wrote:
> > > Oh, I see your point! Will add that context to the KIP.
> > >
> > > Boyang
> > >
> > > On Sun, May 17, 2020 at 11:39 AM Guozhang Wang 
> > wrote:
> > >
> > > > My point here is only for the first AddPartitionToTxn request of the
> > > > transaction, since only that request would potentially be blocked on
> > the
> > > > previous txn to complete. By deferring it we reduce the blocking
> time.
> > > >
> > > > I think StreamsConfigs override the linger.ms to 100ms not 10ms, so
> > in the
> > > > best case we can defer the first AddPartitionToTxn of the transaction
> > by
> > > > 100ms.
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Sat, May 16, 2020 at 12:20 PM Boyang Chen <
> > reluctanthero...@gmail.com>
> > > > wrote:
> > > >
> > > > > Thanks Guozhang for the context. The producer batch is either
> > bounded by
> > > > > the size or the linger time. For the default 10ms linger and 100ms
> > > > > transaction 
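
For reference, the pre-registration call debated in this thread would look
roughly like the sketch below; a beginTransaction overload taking partitions
was only proposed in KIP-609 and is not part of the released producer API:

```
import java.util.Arrays;
import java.util.List;
import org.apache.kafka.common.TopicPartition;

// Hypothetical API surface only: beginTransaction(List<TopicPartition>) was
// proposed in KIP-609 and does NOT exist in the released producer.
final class PreRegistrationSketch {
    interface TxnProducer {
        void beginTransaction(List<TopicPartition> preRegistered);
        void commitTransaction(); // EndTxn would carry only written partitions
    }

    static void run(TxnProducer producer) {
        List<TopicPartition> usual = Arrays.asList(
            new TopicPartition("output", 0), new TopicPartition("output", 1));
        producer.beginTransaction(usual); // may over-register partitions
        // ... produce to some subset of `usual` ...
        producer.commitTransaction();
    }
}
```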

Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-05-20 Thread Sophie Blee-Goldman
> There's no possibility that someone could be relying
> on iterating over that range in increasing order, because that's not what
> happens. However, they could indeed be relying on getting an empty
> iterator

I just meant that they might be relying on the assumption that the range
query
will never return results with decreasing keys. The empty iterator wouldn't
break that contract, but of course a surprise reverse iterator would.

FWIW I actually am in favor of automatically converting to a reverse
iterator,
I just thought we should consider whether this should be off by default or
even possible to disable at all.
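
As a sketch of that auto-conversion, here is a minimal illustration using a
TreeMap as a stand-in for a state store; the helper name and signature are
hypothetical, not the KIP-617 API:

```
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.TreeMap;

// Hypothetical helper: if from > to, run the scan as (to, from) and hand the
// results back in reverse order; a TreeMap stands in for a real state store.
final class RangeScans {
    static <V> Iterator<V> range(TreeMap<String, V> store, String from, String to) {
        boolean reversed = from.compareTo(to) > 0;
        String lo = reversed ? to : from;
        String hi = reversed ? from : to;
        List<V> values = new ArrayList<>(store.subMap(lo, true, hi, true).values());
        if (reversed) {
            Collections.reverse(values);
        }
        return values.iterator();
    }
}
```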

On Tue, May 19, 2020 at 7:42 PM John Roesler  wrote:

> Thanks for the response, Sophie,
>
> I wholeheartedly agree we should take as much into account as possible
> up front, rather than regretting our decisions later. I actually do share
> your vague sense of worry, which was what led me to say initially that I
> thought my counterproposal might be "too fancy". Sometimes, it's better
> to be explicit instead of "elegant", if we think more people will be
> confused
> than not.
>
> I really don't think that there's any danger of "relying on a bug" here,
> although
> people certainly could be relying on current behavior. One thing to be
> clear
> about (which I just left a more detailed comment in KAFKA-8159 about) is
> that
> when we say something like key1 > key2, this ordering is defined by the
> serde's output and nothing else.
>
> Currently, thanks to your fix in https://github.com/apache/kafka/pull/6521
> ,
> the store contract is that for range scans, if from > to, then the store
> must
> return an empty iterator. There's no possibility that someone could be
> relying
> on iterating over that range in increasing order, because that's not what
> happens. However, they could indeed be relying on getting an empty
> iterator.
>
> My counterproposal was to actually change this contract to say that the
> store
> must return an iterator over the keys in that range, but in the reverse
> order.
> So, in addition to considering whether this idea is "too fancy" (aka
> confusing),
> we should also consider the likelihood of breaking an existing program with
> this behavior/contract change.
>
> To echo your clarification, I'm also not advocating strongly in favor of my
> proposal. I just wanted to present it for consideration alongside Jorge's
> original one.
>
> Thanks for raising these very good points,
> -John
>
> On Tue, May 19, 2020, at 20:49, Sophie Blee-Goldman wrote:
> > > Rather than working around it, I think we should just fix it
> >
> > Now *that's* a "fancy" idea :P
> >
> > That was my primary concern, although I do have a vague sense of worry
> > that we might be allowing users to get into trouble without realizing it.
> > For example if their custom serdes suffer a similar bug as the above,
> > and/or
> > they rely on getting results in increasing order (of the keys) even when
> > to < from. Maybe they're relying on the fact that the range query returns
> > nothing in that case.
> >
> > Not sure if that qualifies as relying on a bug or not, but in the past
> > we've
> > taken the stance that we should not break compatibility even if the user
> > was relying on bugs or unintentional behavior.
> >
> > Just to clarify I'm not advocating strongly against this proposal, just
> > laying
> > out some considerations we should take into account. At the end of the
> day
> > we should do what's right rather than maintain compatibility with
> existing
> > bugs, but sometimes there's a reasonable middle ground.
> >
> > On Tue, May 19, 2020 at 6:15 PM John Roesler 
> wrote:
> >
> > > Thanks Sophie,
> > >
> > > Woah, that’s a nasty bug. Rather than working around it, I think we
> should
> > > just fix it. I’ll leave some comments on the Jira.
> > >
> > > It doesn’t seem like it should be this KIP’s concern that some serdes
> > > might be incorrectly written.
> > >
> > > Were there other practical concerns that you had in mind?
> > >
> > > Thanks,
> > > John
> > >
> > > On Tue, May 19, 2020, at 19:10, Sophie Blee-Goldman wrote:
> > > > I like this "fancy idea" to just flip the to/from bytes but I think
> there
> > > > are some practical limitations to implementing this. In particular
> > > > I'm thinking about this issue
> > > >  with the built-in
> > > signed
> > > > number serdes.
> > > >
> > > > This trick would actually fix the problem for negative-negative
> queries
> > > > (ie where to & from are negative) but would cause undetectable
> > > > incorrect results for negative-positive queries. For example, say you
> > > > call #range with from = -1 and to = 1, using the Short serdes. The
> > > > serialized bytes for that are
> > > >
> > > > from = 1111111111111111
> > > > to = 0000000000000001
> > > >
> > > > so we would end up flipping those and iterating over all keys from
> > > > 0000000000000001 to 1111111111111111. Iterating in lexicographical
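
To make that example concrete: a minimal, self-contained demonstration that
the big-endian two's-complement bytes of -1 compare greater than those of 1
under the unsigned lexicographic order that byte-based stores use; the class
and helper below are illustrative, not Kafka code:

```
import java.nio.ByteBuffer;

// Not Kafka code: demonstrates that the big-endian two's-complement bytes of
// -1 (0xFFFF) compare GREATER than those of 1 (0x0001) under the unsigned
// lexicographic order that byte-based stores use for range scans.
public final class SignedShortOrdering {
    static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int cmp = Integer.compare(a[i] & 0xff, b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        byte[] from = ByteBuffer.allocate(2).putShort((short) -1).array();
        byte[] to = ByteBuffer.allocate(2).putShort((short) 1).array();
        // Prints a positive value: -1 sorts AFTER 1, so a naive byte-wise
        // range (flipped or not) covers the wrong set of keys.
        System.out.println(compareUnsigned(from, to));
    }
}
```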

Re: [VOTE] KIP-572: Improve timeouts and retries in Kafka Streams

2020-05-20 Thread Guozhang Wang
Thanks Matthias,

I agree with you on all the bullet points above. Regarding the admin-client
outer-loop retries inside the partition assignor, I think we should treat the
error codes differently for those two blocking calls:

Describe-topic:
* unknown-topic (3): add this topic to the to-be-created topic list.
* leader-not-available (5): do not try to create, retry in the outer loop.
* request-timeout: break the current loop and retry in the outer loop.
* others: fatal error.

Create-topic:
* topic-already-exists: retry in the outer loop to validate the
num.partitions match expectation.
* request-timeout: break the current loop and retry in the outer loop.
* others: fatal error.

And in the outer loop, I think we can have a global timer for the whole
"assign()" function, not only for the internal-topic-manager, and the timer
can be hard-coded to, e.g., half of the rebalance.timeout to get rid of
the `retries`; if we cannot complete the assignment before the timeout runs
out, we can return just the partial assignment (e.g. if there are two
tasks, but we can only get the topic metadata for one of them, then just do
the assignment for that one only) while encoding a request for another
rebalance in the error-code field.
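
To make that classification concrete, here is a compact sketch using
exception types from the public clients API; the class, fields, and method
names are illustrative, not the actual assignor code:

```
import java.util.HashSet;
import java.util.Set;
import org.apache.kafka.common.errors.LeaderNotAvailableException;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

// Illustrative classifier, not the actual assignor code.
final class InternalTopicErrorClassifier {
    final Set<String> toBeCreated = new HashSet<>();
    final Set<String> retryInOuterLoop = new HashSet<>();

    void onDescribeError(String topic, Throwable error) {
        if (error instanceof UnknownTopicOrPartitionException) {
            toBeCreated.add(topic);          // unknown-topic (3): create it
        } else if (error instanceof LeaderNotAvailableException
                || error instanceof TimeoutException) {
            retryInOuterLoop.add(topic);     // do not create; retry outer loop
        } else {
            throw new IllegalStateException("fatal describe error", error);
        }
    }

    void onCreateError(String topic, Throwable error) {
        if (error instanceof TopicExistsException
                || error instanceof TimeoutException) {
            retryInOuterLoop.add(topic);     // validate num.partitions / retry
        } else {
            throw new IllegalStateException("fatal create error", error);
        }
    }
}
```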

Guozhang



On Mon, May 18, 2020 at 7:26 PM Matthias J. Sax  wrote:

> No worries Guozhang, any feedback is always very welcome! My reply is
> going to be a little longer... Sorry.
>
>
>
> > 1) There are some inconsistent statements in the proposal regarding what
> > to deprecate:
>
> The proposal of the KIP is to deprecate `retries` for producer, admin,
> and Streams. Maybe the confusion is about the dependency of those
> settings within Streams and that we handle the deprecation somewhat
> different for plain clients vs Streams:
>
> For plain producer/admin the default `retries` is set to MAX_VALUE. The
> config will be deprecated but still be respected.
>
> For Streams, the default `retries` is set to zero, however, this default
> retry does _not_ affect the embedded producer/admin clients -- both
> clients stay on their own default of MAX_VALUE.
>
> Currently, this introduces the issue that if a user wants to increase
> Streams retries, she might by accident reduce the embedded client
> retries, too. To avoid this issue, she would need to set
>
> retries=100
> producer.retries=MAX_VALUE
> admin.retries=MAX_VALUE
>
> This KIP will fix this issue only in the long term though, ie, when
> `retries` is finally removed. Short term, using `retries` in
> StreamsConfig would still affect the embedded clients, but Streams, as
> well as both clients would log a WARN message. This preserves backward
> compatibility.
>
> Within Streams, `retries` is ignored and the new `task.timeout.ms` is
> used instead. This increases the default resilience of Kafka Streams
> itself. We could also achieve this by still respecting `retries` and
> changing its default value. However, because we deprecate `retries` it
> seems better to just ignore it and switch to the new config directly.
>
> I updated the KIPs with some more details.
>
>
>
> > 2) We should also document the related behavior change in
> PartitionAssignor
> > that uses AdminClient.
>
> This is actually a good point. Originally, I looked into this only
> briefly, but it raised an interesting question on how to handle it.
>
> Note that `TimeoutExceptions` are currently not handled in this retry
> loop. Also note that the default retries value for other errors would be
> MAX_VALUE by default (inherited from `AdminClient#retries` as mentioned
> already by Guozhang).
>
> Applying the new `task.timeout.ms` config does not seem to be
> appropriate because the AdminClient is used during a rebalance in the
> leader. We could introduce a new config just for this case, but it seems
> to be a little bit too much. Furthermore, the group-coordinator applies
> a timeout on the leader anyway: if the assignment is not computed within
> the timeout, the leader is removed from the group and another rebalance
> is triggered.
>
> Overall, we make multiple admin client calls and thus we should keep
> some retry logic and not just rely on the admin client internal retries,
> as those would fall short to retry different calls interleaved. We could
> just retry infinitely and rely on the group coordinator to remove the
> leader eventually. However, this does not seem to be ideal because the
> removed leader might be stuck forever.
>
> The question though is: if topic metadata cannot be obtained or internal
> topics cannot be created, what should we do? We can't compute an
> assignment anyway. We already have a rebalance error code to shut down
> all instances for this case. Maybe we could break the retry loop before
> the leader is kicked out of the group and send this error code? This way
> we don't need a new config, but piggy-back on the existing timeout to
> compute the assignment. To be conservative, we could use a 50% threshold?
>
>
>
> > BTW as I mentioned in the 

Build failed in Jenkins: kafka-trunk-jdk14 #108

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9859 / kafka-streams-application-reset tool doesn't take into


--
[...truncated 1.88 MB...]

org.apache.kafka.streams.integration.StoreQueryIntegrationTest > 
shouldQueryAllStalePartitionStores STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreStateFromSourceTopic PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreStateFromChangelogTopic STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreStateFromChangelogTopic PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testShouldAutoShutdownOnIncompleteMetadata[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testShouldAutoShutdownOnIncompleteMetadata[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testShouldAutoShutdownOnIncompleteMetadata[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testShouldAutoShutdownOnIncompleteMetadata[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StoreQueryIntegrationTest > 
shouldQueryAllStalePartitionStores PASSED

org.apache.kafka.streams.integration.StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStores STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin[exactly_once] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin[exactly_once] PASSED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldNotRestoreAbortedMessages[exactly_once] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldNotRestoreAbortedMessages[exactly_once] PASSED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldRestoreTransactionalMessages[exactly_once] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldRestoreTransactionalMessages[exactly_once] PASSED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableJoin[exactly_once] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableJoin[exactly_once] PASSED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin[exactly_once_beta] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin[exactly_once_beta] PASSED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldNotRestoreAbortedMessages[exactly_once_beta] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldNotRestoreAbortedMessages[exactly_once_beta] PASSED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldRestoreTransactionalMessages[exactly_once_beta] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldRestoreTransactionalMessages[exactly_once_beta] PASSED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableJoin[exactly_once_beta] STARTED

org.apache.kafka.streams.integration.GlobalKTableEOSIntegrationTest > 
shouldKStreamGlobalKTableJoin[exactly_once_beta] PASSED

org.apache.kafka.streams.integration.ResetIntegrationWithSslTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStores PASSED


[VOTE] KIP-589: Add API to update Replica state in Controller

2020-05-20 Thread David Arthur
Hello, all. I'd like to start the vote for KIP-589 which proposes to add a
new AlterReplicaState RPC.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-589+Add+API+to+update+Replica+state+in+Controller

Cheers,
David


Build failed in Jenkins: kafka-trunk-jdk8 #4551

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9859 / kafka-streams-application-reset tool doesn't take into


--
[...truncated 3.08 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1480

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9859 / kafka-streams-application-reset tool doesn't take into


--
[...truncated 3.04 MB...]
> Task :streams:upgrade-system-tests-11:testClasses
> Task :streams:streams-scala:spotbugsMain
> Task :streams:upgrade-system-tests-11:checkstyleTest
> Task :streams:test-utils:spotbugsMain

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a 

Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-20 Thread Sophie Blee-Goldman
Hey Randall,

Can you also add KIP-613 which was accepted yesterday?

Thanks!
Sophie

On Wed, May 20, 2020 at 6:47 AM Randall Hauch  wrote:

> Hi, Tom. I saw last night that the KIP had enough votes before today’s
> deadline and I will add it to the roadmap today. Thanks for driving this!
>
> On Wed, May 20, 2020 at 6:18 AM Tom Bentley  wrote:
>
> > Hi Randall,
> >
> > Can we add KIP-585? (I'm not quite sure of the protocol here, but thought
> > it better to ask than to just add it myself).
> >
> > Thanks,
> >
> > Tom
> >
> > On Tue, May 5, 2020 at 6:54 PM Randall Hauch  wrote:
> >
> > > Greetings!
> > >
> > > I'd like to volunteer to be release manager for the next time-based
> > feature
> > > release which will be 2.6.0. I've published a release plan at
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ,
> > > and have included all of the KIPs that are currently approved or
> actively
> > > in discussion (though I'm happy to adjust as necessary).
> > >
> > > To stay on our time-based cadence, the KIP freeze is on May 20 with a
> > > target release date of June 24.
> > >
> > > Let me know if there are any objections.
> > >
> > > Thanks,
> > > Randall Hauch
> > >
> >
>


[jira] [Resolved] (KAFKA-9859) kafka-streams-application-reset tool doesn't take into account topics generated by KTable foreign key join operation

2020-05-20 Thread Levani Kokhreidze (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Levani Kokhreidze resolved KAFKA-9859.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

fixed with PR [https://github.com/apache/kafka/pull/8671]

> kafka-streams-application-reset tool doesn't take into account topics 
> generated by KTable foreign key join operation
> 
>
> Key: KAFKA-9859
> URL: https://issues.apache.org/jira/browse/KAFKA-9859
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Reporter: Levani Kokhreidze
>Assignee: Levani Kokhreidze
>Priority: Major
>  Labels: newbie, newbie++
> Fix For: 2.6.0
>
>
> Steps to reproduce:
>  * Create Kafka Streams application which uses foreign key join operation 
> (without a Named parameter overload)
>  * Stop Kafka streams application
>  * Perform `kafka-topics-list` and verify that foreign key operation internal 
> topics are generated
>  * Use `kafka-streams-application-reset` to perform the cleanup of your kafka 
> streams application: `kafka-streams-application-reset --application-id 
> <application-id> --input-topics <input-topics> --bootstrap-servers 
> <bootstrap-servers> --to-datetime 2019-04-13T00:00:00.000`
>  * Perform `kafka-topics-list` again, you'll see that topics generated by the 
> foreign key operation are still there.
> [kafka-streams-application-reset|#L679-L680] uses 
> `-subscription-registration-topic` and `-subscription-response-topic` 
> suffixes to match topics generated by the foreign key operation. While in 
> reality, internal topics are generated in this format:
> {code:java}
> <application-id>-KTABLE-FK-JOIN-SUBSCRIPTION-REGISTRATION-<operation number>-topic 
> <application-id>-KTABLE-FK-JOIN-SUBSCRIPTION-RESPONSE-<operation number>-topic{code}
> Please note that this problem only happens when the `Named` parameter is not 
> used. When the `Named` parameter is used, topics are generated with the same pattern 
> as specified in StreamsResetter.
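
A hedged sketch of matching on the generated pattern above instead of the old
suffixes; the helper and regex are illustrative, not the actual
StreamsResetter fix:

```
import java.util.regex.Pattern;

// Illustrative only; not the actual StreamsResetter change.
final class FkJoinTopicMatcher {
    static boolean isFkJoinInternalTopic(String applicationId, String topic) {
        Pattern p = Pattern.compile(
            Pattern.quote(applicationId)
                + "-KTABLE-FK-JOIN-SUBSCRIPTION-(REGISTRATION|RESPONSE)-\\d+-topic");
        return p.matcher(topic).matches();
    }
}
```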



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-587 Suppress detailed responses for handled exceptions in security-sensitive environments

2020-05-20 Thread Connor Penhale
Hi Chris,

I agree that Connect shouldn't needlessly withhold information that helps 
operators diagnose known, handled exceptions. The situation you describe here 
feels like a great example. In cases like this, I would suggest printing the 
message and, in the field that would contain the stack trace, returning the 
pre-defined string that says Connect is running in a mode that doesn't print 
stack traces. This way, the schema is not changed. In places where the message 
would include the stack trace along with information about the fault, omitting 
the stack trace or replacing it with the pre-defined string would work just 
fine for me.
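
As a sketch of that behavior, assuming a JAX-RS ExceptionMapper similar to
the one Connect registers; the class name, response schema, and wiring here
are hypothetical:

```
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Hypothetical mapper: keep the exception message, substitute the fixed
// string where the stack trace would have gone. The response schema below
// is illustrative; JSON escaping of the message is elided for brevity.
@Provider
public class RedactingExceptionMapper implements ExceptionMapper<Exception> {
    private static final String REDACTED =
        "Stack trace suppressed: this worker is configured not to return stack traces";

    @Override
    public Response toResponse(Exception e) {
        String body = String.format(
            "{\"error_code\":500,\"message\":\"%s\",\"trace\":\"%s\"}",
            e.getMessage(), REDACTED);
        return Response.status(500).entity(body).type("application/json").build();
    }
}
```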

Thanks!
Connor

On 5/13/20, 12:30 PM, "Christopher Egerton"  wrote:

Hi Connor,

I think this is really close but have one more thought. Uncaught exceptions
in the REST API are different from exceptions that come about when tasks or
connectors fail, and can be used for different purposes. Stack traces in
500 errors are probably only useful for the administrator of the Connect
cluster. However, if a user has tried to create a connector and sees that
it or one of its tasks has failed, a brief message about the cause of
failure might actually be pretty helpful, and if they can't get any
information on why the connector or task failed, then they're essentially
at the mercy of the Connect cluster administrator for figuring out what
caused the failure. Would it be alright to include the exception message,
but not the entire stack trace, in the response for requests to view the
status of a connector or task?

Cheers,

Chris

On Wed, May 6, 2020 at 12:07 PM Connor Penhale 
wrote:

> Hi Chris,
>
> Apologies for the name confusion! I've been working with my customer
> sponsor over the last few weeks, and we finally have an answer regarding
> "only exceptions or all responses." This organization is really interested
> in removing stack traces from all responses, which will expand the scope 
> of
> this KIP a bit. I'm going to update the wiki entry, and then would it be
> reasonable to call for a vote?
>
> Thanks!
> Connor
>
> On 4/17/20, 3:53 PM, "Christopher Egerton"  wrote:
>
> Hi Connor,
>
> That's great, but I think you may have mistaken Colin for me :)
>
One more thing that should be addressed--the "public interfaces" section
isn't just for Java interfaces, it's for any changes to any public part of
Kafka that users and external developers interact with. As far as Connect
is concerned, this includes (but is not limited to) the REST API and worker
configuration properties, so it might be worth briefly summarizing the
scope of your proposed changes in that section (something like "We plan on
adding a new worker config named <config name> that will affect the REST API
under <endpoint>").
>
> Cheers,
>
> Chris
>
> On Wed, Apr 15, 2020 at 1:00 PM Connor Penhale 
> wrote:
>
> > Hi Chris,
> >
> > I can ask the customer if they can disclose any additional
> information. I
> > provided the information around "PCI-DSS" to give the community a
> flavor of
> > the type of environment the customer was operating in. The current
> mode is
> > /not/ insecure, I would agree with this. I would be willing to agree
> that
> > my customer has particular security audit requirements that go above
> and
> > beyond what most environments would consider reasonable. Are you
> > comfortable with that language?
> >
> > " enable.rest.response.stack.traces" works great for me!
> >
> > I created a new class in the example PR because I wanted the highest
> > chance of not gunking up the works by stepping on toes in an
> important
> > class. I figured I'd be reducing risk by creating an alternative
> > implementing class. In retrospect, and now that I'm getting a
> first-hand
> > look at Kafka's community process, that is probably unnecessary.
> > Additionally, I would agree with your statement that we should
> modify the
> > existing ExceptionMapper to avoid behavior divergence in subsequent
> > releases and ensure this feature's particular scope is easy to
> maintain.
> >
> > Thanks!
> > Connor
> >
> > On 4/15/20, 1:17 PM, "Colin McCabe"  wrote:
> >
> > Hi Connor,
> >
> > I still would like to hear more about whether this feature is
> required
> > for PCI-DSS or any other security certification.  Nobody I talked to
> seemed
> > to think that it was-- if there are certifications that would
> require this,
> > it 

Build failed in Jenkins: kafka-trunk-jdk11 #1479

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix redundant typos in comments and javadocs (#8693)


--
[...truncated 1.77 MB...]
kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicDeletion 
STARTED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicDeletion 
PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicCreation 
STARTED

kafka.controller.ControllerIntegrationTest > testControllerMoveOnTopicCreation 
PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections STARTED

kafka.controller.ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange STARTED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange PASSED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationForOfflineReplicas STARTED

kafka.controller.ControllerIntegrationTest > 
testMetadataPropagationForOfflineReplicas PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
STARTED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion PASSED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers STARTED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers PASSED

kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
STARTED

kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr PASSED

kafka.controller.ControllerChannelManagerTest > 
testMixedDeleteAndNotDeleteStopReplicaRequests STARTED

kafka.controller.ControllerChannelManagerTest > 

Re: [DISCUSS] KIP-602 - Change default value for client.dns.lookup

2020-05-20 Thread Rajini Sivaram
Deprecating for removal in 3.0 sounds good.

On Wed, May 20, 2020 at 3:33 PM Ismael Juma  wrote:

> Is there any reason to use "use_first_dns_ip"? Should we remove it
> completely? Or at least deprecate it for removal in 3.0?
>
> Ismael
>
>
> On Wed, May 20, 2020, 1:39 AM Rajini Sivaram 
> wrote:
>
> > Hi Badai,
> >
> > Thanks for the KIP, sounds like a useful change. Perhaps we should call
> the
> > new option `use_first_dns_ip` (not `_ips` since it refers to one). We
> > should also mention in the KIP that only one type of address (ipv4 or
> ipv6,
> > based on the first one) will be used - that is the current behaviour for
> > `use_all_dns_ips`.  Since we are changing `default` to be exactly the
> same
> > as `use_all_dns_ips`, it will be good to mention that explicitly under
> > Public Interfaces.
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Mon, May 18, 2020 at 1:44 AM Badai Aqrandista 
> > wrote:
> >
> > > Ismael
> > >
> > > What do you think of the PR and the explanation regarding the issue
> > raised
> > > in KIP-235?
> > >
> > > Should I go ahead and build a proper PR?
> > >
> > > Thanks
> > > Badai
> > >
> > > On Mon, May 11, 2020 at 8:53 AM Badai Aqrandista 
> > > wrote:
> > >
> > > > Ismael
> > > >
> > > > PR created: https://github.com/apache/kafka/pull/8644/files
> > > >
> > > > Also, as this is my first PR, please let me know if I missed
> anything.
> > > >
> > > > Thanks
> > > > Badai
> > > >
> > > > On Mon, May 11, 2020 at 8:19 AM Badai Aqrandista  >
> > > > wrote:
> > > >
> > > >> Ismael
> > > >>
> > > >> Thank you for responding.
> > > >>
> > > >> KIP-235 modified ClientUtils#parseAndValidateAddresses [1] to
> resolve
> > an
> > > >> address alias (i.e. bootstrap server) into multiple addresses. This
> is
> > > why
> > > >> it would break SSL hostname verification when the bootstrap server
> is
> > > an IP
> > > >> address, i.e. it will resolve the IP address to an FQDN and use that
> > > FQDN
> > > >> in the SSL handshake.
> > > >>
> > > >> However, what I am proposing is to modify ClientUtils#resolve [2],
> > which
> > > >> is only used in ClusterConnectionStates#currentAddress [3], to get
> the
> > > >> resolved InetAddress of the address to connect to. And
> > > >> ClusterConnectionStates#currentAddress is only used by
> > > >> NetworkClient#initiateConnect [4] to create InetSocketAddress to
> > > establish
> > > >> the socket connection to the broker.
> > > >>
> > > >> Therefore, as far as I know, this change will not affect higher
> level
> > > >> protocol like SSL or SASL.
> > > >>
> > > >> PR coming after this.
> > > >>
> > > >> Thanks
> > > >> Badai
> > > >>
> > > >> [1]
> > > >>
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L51
> > > >> [2]
> > > >>
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L111
> > > >> [3]
> > > >>
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java#L403
> > > >> [4]
> > > >>
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/NetworkClient.java#L955
> > > >>
> > > >> On Sun, May 10, 2020 at 10:18 AM Ismael Juma 
> > wrote:
> > > >>
> > > >>> Hi Badai,
> > > >>>
> > > >>> I think this is a good change. Can you please address the issues
> > raised
> > > >>> by KIP-235? That was the reason why we did not do it previously.
> > > >>>
> > > >>> Ismael
> > > >>>
> > > >>> On Mon, Apr 27, 2020 at 5:46 PM Badai Aqrandista <
> ba...@confluent.io
> > >
> > > >>> wrote:
> > > >>>
> > >  Hi everyone
> > > 
> > >  I have opened this KIP to have client.dns.lookup default value
> > changed
> > >  to
> > >  "use_all_dns_ips".
> > > 
> > > 
> > > 
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-602%3A+Change+default+value+for+client.dns.lookup
> > > 
> > >  Feedback appreciated.
> > > 
> > >  PS: I'm new here so please let me know if I miss anything.
> > > 
> > >  --
> > >  Thanks,
> > >  Badai
> > > 
> > > >>>
> > > >>
> > > >> --
> > > >> Thanks,
> > > >> Badai
> > > >>
> > > >>
> > > >
> > > > --
> > > > Thanks,
> > > > Badai
> > > >
> > > >
> > >
> > > --
> > > Thanks,
> > > Badai
> > >
> >
>


Re: [DISCUSS] KIP-602 - Change default value for client.dns.lookup

2020-05-20 Thread Ismael Juma
Is there any reason to use "use_first_dns_ip"? Should we remove it
completely? Or at least deprecate it for removal in 3.0?

Ismael


On Wed, May 20, 2020, 1:39 AM Rajini Sivaram 
wrote:

> Hi Badai,
>
> Thanks for the KIP, sounds like a useful change. Perhaps we should call the
> new option `use_first_dns_ip` (not `_ips` since it refers to one). We
> should also mention in the KIP that only one type of address (ipv4 or ipv6,
> based on the first one) will be used - that is the current behaviour for
> `use_all_dns_ips`.  Since we are changing `default` to be exactly the same
> as `use_all_dns_ips`, it will be good to mention that explicitly under
> Public Interfaces.
>
> Regards,
>
> Rajini
>
>
> On Mon, May 18, 2020 at 1:44 AM Badai Aqrandista 
> wrote:
>
> > Ismael
> >
> > What do you think of the PR and the explanation regarding the issue
> raised
> > in KIP-235?
> >
> > Should I go ahead and build a proper PR?
> >
> > Thanks
> > Badai
> >
> > On Mon, May 11, 2020 at 8:53 AM Badai Aqrandista 
> > wrote:
> >
> > > Ismael
> > >
> > > PR created: https://github.com/apache/kafka/pull/8644/files
> > >
> > > Also, as this is my first PR, please let me know if I missed anything.
> > >
> > > Thanks
> > > Badai
> > >
> > > On Mon, May 11, 2020 at 8:19 AM Badai Aqrandista 
> > > wrote:
> > >
> > >> Ismael
> > >>
> > >> Thank you for responding.
> > >>
> > >> KIP-235 modified ClientUtils#parseAndValidateAddresses [1] to resolve
> an
> > >> address alias (i.e. bootstrap server) into multiple addresses. This is
> > why
> > >> it would break SSL hostname verification when the bootstrap server is
> > an IP
> > >> address, i.e. it will resolve the IP address to an FQDN and use that
> > FQDN
> > >> in the SSL handshake.
> > >>
> > >> However, what I am proposing is to modify ClientUtils#resolve [2],
> which
> > >> is only used in ClusterConnectionStates#currentAddress [3], to get the
> > >> resolved InetAddress of the address to connect to. And
> > >> ClusterConnectionStates#currentAddress is only used by
> > >> NetworkClient#initiateConnect [4] to create InetSocketAddress to
> > establish
> > >> the socket connection to the broker.
> > >>
> > >> Therefore, as far as I know, this change will not affect higher-level
> > >> protocols such as SSL or SASL.
> > >>
> > >> PR coming after this.
> > >>
> > >> Thanks
> > >> Badai
> > >>
> > >> [1]
> > >>
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L51
> > >> [2]
> > >>
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L111
> > >> [3]
> > >>
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java#L403
> > >> [4]
> > >>
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/NetworkClient.java#L955
> > >>
> > >> On Sun, May 10, 2020 at 10:18 AM Ismael Juma 
> wrote:
> > >>
> > >>> Hi Badai,
> > >>>
> > >>> I think this is a good change. Can you please address the issues
> raised
> > >>> by KIP-235? That was the reason why we did not do it previously.
> > >>>
> > >>> Ismael
> > >>>
> > >>> On Mon, Apr 27, 2020 at 5:46 PM Badai Aqrandista  >
> > >>> wrote:
> > >>>
> >  Hi everyone
> > 
> >  I have opened this KIP to have client.dns.lookup default value
> changed
> >  to
> >  "use_all_dns_ips".
> > 
> > 
> > 
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-602%3A+Change+default+value+for+client.dns.lookup
> > 
> >  Feedback appreciated.
> > 
> >  PS: I'm new here so please let me know if I miss anything.
> > 
> >  --
> >  Thanks,
> >  Badai
> > 
> > >>>
> > >>
> > >> --
> > >> Thanks,
> > >> Badai
> > >>
> > >>
> > >
> > > --
> > > Thanks,
> > > Badai
> > >
> > >
> >
> > --
> > Thanks,
> > Badai
> >
>


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-20 Thread Randall Hauch
Hi, Tom. I saw last night that the KIP had enough votes before today’s
deadline and I will add it to the roadmap today. Thanks for driving this!

On Wed, May 20, 2020 at 6:18 AM Tom Bentley  wrote:

> Hi Randall,
>
> Can we add KIP-585? (I'm not quite sure of the protocol here, but thought
> it better to ask than to just add it myself).
>
> Thanks,
>
> Tom
>
> On Tue, May 5, 2020 at 6:54 PM Randall Hauch  wrote:
>
> > Greetings!
> >
> > I'd like to volunteer to be release manager for the next time-based
> feature
> > release which will be 2.6.0. I've published a release plan at
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > ,
> > and have included all of the KIPs that are currently approved or actively
> > in discussion (though I'm happy to adjust as necessary).
> >
> > To stay on our time-based cadence, the KIP freeze is on May 20 with a
> > target release date of June 24.
> >
> > Let me know if there are any objections.
> >
> > Thanks,
> > Randall Hauch
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4550

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix redundant typos in comments and javadocs (#8693)


--
[...truncated 3.08 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED


Re: Want to remove the archive

2020-05-20 Thread Satya Kotni
Any update on this?

Thanks & Regards
Satya Kotni

On Mon, 11 May 2020 at 10:54, Satya Kotni  wrote:

>
> Hi,
>> Please help me remove this from the archive below:
>>
>> https://www.mail-archive.com/dev@kafka.apache.org/msg104541.html
>>
>> Best Regards
>> Satya Kotni
>>
>


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-20 Thread Tom Bentley
Hi Randall,

Can we add KIP-585? (I'm not quite sure of the protocol here, but thought
it better to ask than to just add it myself).

Thanks,

Tom

On Tue, May 5, 2020 at 6:54 PM Randall Hauch  wrote:

> Greetings!
>
> I'd like to volunteer to be release manager for the next time-based feature
> release which will be 2.6.0. I've published a release plan at
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> ,
> and have included all of the KIPs that are currently approved or actively
> in discussion (though I'm happy to adjust as necessary).
>
> To stay on our time-based cadence, the KIP freeze is on May 20 with a
> target release date of June 24.
>
> Let me know if there are any objections.
>
> Thanks,
> Randall Hauch
>


Re: [VOTE] KIP 585: Filter and conditional SMTs

2020-05-20 Thread Tom Bentley
Hi,

The vote has passed with 3 binding votes (Konstantine, Mickael and Bill)
and 3 non-binding votes (Andrew, Gunnar and Edoardo).

Thanks to everyone who participated in the discussion and vote.

Tom
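
For anyone catching up on what was just voted in, connector configuration
under KIP-585 looks roughly like the following (an illustration based on the
KIP page; the `dropTombstones` and `isTombstone` instance names are made up):

transforms=dropTombstones
transforms.dropTombstones.type=org.apache.kafka.connect.transforms.Filter
transforms.dropTombstones.predicate=isTombstone
predicates=isTombstone
predicates.isTombstone.type=org.apache.kafka.connect.transforms.predicates.RecordIsTombstone

Records matching the predicate are dropped by the Filter transformation, and
any transformation can be applied conditionally, or negated via
`transforms.<name>.negate`, using the same mechanism.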

On Tue, May 19, 2020 at 9:45 PM Bill Bejeck  wrote:

> Thanks for the KIP Tom, this will be a useful addition.
>
> +1(binding)
>
> -Bill
>
> On Tue, May 19, 2020 at 1:14 PM Tom Bentley  wrote:
>
> > It would be nice to get this into Kafka 2.6. There are 2 binding and 3
> > non-binding votes so far. If you've not looked at it already, now would be
> > a great time!
> >
> > Many thanks,
> >
> > Tom
> >
> > On Tue, May 19, 2020 at 1:27 PM Mickael Maison  >
> > wrote:
> >
> > > +1 (binding)
> > > Thanks Tom for leading this KIP and steering the syntax discussion
> > > towards a consensus
> > >
> > > On Tue, May 19, 2020 at 11:29 AM Edoardo Comar 
> > wrote:
> > > >
> > > > +1 (non-binding)
> > > > Thanks Tom
> > > > --
> > > >
> > > > Edoardo Comar
> > > >
> > > > Event Streams for IBM Cloud
> > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > >
> > > >
> > > >
> > > >
> > > > From:   Gunnar Morling 
> > > > To: dev@kafka.apache.org
> > > > Date:   19/05/2020 10:35
> > > > Subject:[EXTERNAL] Re: [VOTE] KIP 585: Filter and conditional
> > > SMTs
> > > >
> > > >
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Thanks for working on this, Tom! This KIP will be very useful for
> > > > connectors like Debezium.
> > > >
> > > > --Gunnar
> > > >
> > > > Am Fr., 15. Mai 2020 um 20:02 Uhr schrieb Konstantine Karantasis
> > > > :
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > > Thanks Tom.
> > > > >
> > > > > Konstantine
> > > > >
> > > > > On Fri, May 15, 2020 at 5:03 AM Andrew Schofield
> > > > 
> > > > > wrote:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > Thanks for the KIP. This will be very useful.
> > > > > >
> > > > > > Andrew Schofield
> > > > > >
> > > > > > On 13/05/2020, 10:14, "Tom Bentley"  wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I'd like to start a vote on KIP-585: Filter and conditional
> > SMTs
> > > > > >
> > > > > >
> > > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Filter+and+Conditional+SMTs
> > > >
> > > > > >
> > > > > > Those involved in the discussion seem to be positively
> disposed
> > > to
> > > > the
> > > > > > idea, but in the absence of any committer participation it's
> > been
> > > > > > difficult
> > > > > > to find a consensus on how these things should be configured.
> > > > What's
> > > > > > presented here seemed to be the option which people preferred
> > > > overall.
> > > > > >
> > > > > > Kind regards,
> > > > > >
> > > > > > Tom
> > > > > >
> > > > > >
> > > >
> > > >
> > > >
> > > >
> > > > Unless stated otherwise above:
> > > > IBM United Kingdom Limited - Registered in England and Wales with
> > number
> > > > 741598.
> > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6
> > > 3AU
> > > >
> > >
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4549

2020-05-20 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H44 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
FATAL: java.nio.file.FileSystemException: 
: Input/output error
java.nio.file.FileSystemException: 
: Input/output error
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.asIOException(UnixException.java:111)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:171)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.deleteRecursive(FilePath.java:1268)
at 
hudson.plugins.ws_cleanup.Wipeout.performDelete(Wipeout.java:82)
at hudson.plugins.ws_cleanup.Wipeout.perform(Wipeout.java:78)
at 
hudson.plugins.ws_cleanup.PreBuildCleanup.preCheckout(PreBuildCleanup.java:107)
at 
jenkins.scm.SCMCheckoutStrategy.preCheckout(SCMCheckoutStrategy.java:76)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:498)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 
hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused: java.nio.file.DirectoryIteratorException
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:172)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.hasNext(UnixDirectoryStream.java:201)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:225)
at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at jenkins.util.io.PathRemover.forceRemoveRecursive(PathRemover.java:96)
at hudson.Util.deleteRecursive(Util.java:293)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1274)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1270)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[FINDBUGS] Collecting findbugs analysis files...
ERROR: Step '[Deprecated] Publish FindBugs analysis results' aborted due to 
exception: 
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at 
hudson.plugins.findbugs.FindBugsPublisher.perform(FindBugsPublisher.java:144)
at 
hudson.plugins.analysis.core.HealthAwarePublisher.perform(HealthAwarePublisher.java:69)
at 
hudson.plugins.analysis.core.HealthAwareRecorder.perform(HealthAwareRecorder.java:298)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:79)
at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
 

[jira] [Resolved] (KAFKA-10026) KIP-584: Implement read path for versioning scheme for features

2020-05-20 Thread Kowshik Prakasam (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kowshik Prakasam resolved KAFKA-10026.
--
Resolution: Duplicate

Duplicate of KAFKA-10027

> KIP-584: Implement read path for versioning scheme for features
> ---
>
> Key: KAFKA-10026
> URL: https://issues.apache.org/jira/browse/KAFKA-10026
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Kowshik Prakasam
>Priority: Major
>
> Goal is to implement various classes and integration for the read path of the 
> feature versioning system 
> ([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
>  The ultimate plan is that the cluster-wide *finalized* features information 
> is going to be stored in ZK under the node {{/feature}}. The read path 
> implemented in this PR is centered around reading this *finalized* features 
> information from ZK, and, processing it inside the Broker.
>  
> Here is a summary of what's needed for this Jira (a lot of it is *new* 
> classes):
>  * A facility is provided in the broker to declare its supported features, 
> and advertise its supported features via its own {{BrokerIdZNode}} under a 
> {{features}} key.
>  * A facility is provided in the broker to listen to and propagate 
> cluster-wide *finalized* feature changes from ZK.
>  * When new *finalized* features are read from ZK, feature incompatibilities 
> are detected by comparing against the broker's own supported features.
>  * {{ApiVersionsResponse}} is now served containing supported and finalized 
> feature information (using the newly added tagged fields).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10027) KIP-584: Implement read path for versioning scheme for features

2020-05-20 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-10027:


 Summary: KIP-584: Implement read path for versioning scheme for 
features
 Key: KAFKA-10027
 URL: https://issues.apache.org/jira/browse/KAFKA-10027
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam


Goal is to implement various classes and integration for the read path of the 
feature versioning system 
([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
 The ultimate plan is that the cluster-wide *finalized* features information is 
going to be stored in ZK under the node {{/feature}}. The read path implemented 
in this PR is centered around reading this *finalized* features information 
from ZK and processing it inside the Broker.

 

Here is a summary of what's needed for this Jira (a lot of it is *new* classes):
 * A facility is provided in the broker to declare its supported features, and 
advertise its supported features via its own {{BrokerIdZNode}} under a 
{{features}} key.
 * A facility is provided in the broker to listen to and propagate cluster-wide 
*finalized* feature changes from ZK.
 * When new *finalized* features are read from ZK, feature incompatibilities 
are detected by comparing against the broker's own supported features (a 
sketch of this check follows the list).
 * {{ApiVersionsResponse}} is now served containing supported and finalized 
feature information (using the newly added tagged fields).
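
As an illustration of that incompatibility check (names here are made up for 
the sketch; they are not the actual KIP-584 classes), a finalized version 
range read from ZK is rejected when it does not fit inside the broker's own 
supported range:

import java.util.Map;

public class FeatureCompatSketch {
    // A [min, max] version range for one named feature.
    static class VersionRange {
        final short min;
        final short max;
        VersionRange(short min, short max) { this.min = min; this.max = max; }
    }

    // A finalized range is incompatible when the broker does not support the
    // feature at all, or the finalized range falls outside the broker's own
    // supported range.
    static boolean isIncompatible(Map<String, VersionRange> supportedFeatures,
                                  String feature, VersionRange finalized) {
        VersionRange supported = supportedFeatures.get(feature);
        return supported == null
            || finalized.min < supported.min
            || finalized.max > supported.max;
    }
}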



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10026) KIP-584: Implement read path for versioning scheme for features

2020-05-20 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-10026:


 Summary: KIP-584: Implement read path for versioning scheme for 
features
 Key: KAFKA-10026
 URL: https://issues.apache.org/jira/browse/KAFKA-10026
 Project: Kafka
  Issue Type: New Feature
Reporter: Kowshik Prakasam


Goal is to implement various classes and integration for the read path of the 
feature versioning system 
([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
 The ultimate plan is that the cluster-wide *finalized* features information is 
going to be stored in ZK under the node {{/feature}}. The read path implemented 
in this PR is centered around reading this *finalized* features information 
from ZK and processing it inside the Broker.

 

Here is a summary of what's needed for this Jira (a lot of it is *new* classes):
 * A facility is provided in the broker to declare its supported features, and 
advertise its supported features via its own {{BrokerIdZNode}} under a 
{{features}} key.
 * A facility is provided in the broker to listen to and propagate cluster-wide 
*finalized* feature changes from ZK.
 * When new *finalized* features are read from ZK, feature incompatibilities 
are detected by comparing against the broker's own supported features.
 * {{ApiVersionsResponse}} is now served containing supported and finalized 
feature information (using the newly added tagged fields).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4548

2020-05-20 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H44 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
FATAL: java.nio.file.FileSystemException: 
: Input/output error
java.nio.file.FileSystemException: 
: Input/output error
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.asIOException(UnixException.java:111)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:171)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.deleteRecursive(FilePath.java:1268)
at 
hudson.plugins.ws_cleanup.Wipeout.performDelete(Wipeout.java:82)
at hudson.plugins.ws_cleanup.Wipeout.perform(Wipeout.java:78)
at 
hudson.plugins.ws_cleanup.PreBuildCleanup.preCheckout(PreBuildCleanup.java:107)
at 
jenkins.scm.SCMCheckoutStrategy.preCheckout(SCMCheckoutStrategy.java:76)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:498)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 
hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused: java.nio.file.DirectoryIteratorException
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:172)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.hasNext(UnixDirectoryStream.java:201)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:225)
at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at jenkins.util.io.PathRemover.forceRemoveRecursive(PathRemover.java:96)
at hudson.Util.deleteRecursive(Util.java:293)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1274)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1270)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[FINDBUGS] Collecting findbugs analysis files...
ERROR: Step '[Deprecated] Publish FindBugs analysis results' aborted due to 
exception: 
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at 
hudson.plugins.findbugs.FindBugsPublisher.perform(FindBugsPublisher.java:144)
at 
hudson.plugins.analysis.core.HealthAwarePublisher.perform(HealthAwarePublisher.java:69)
at 
hudson.plugins.analysis.core.HealthAwareRecorder.perform(HealthAwareRecorder.java:298)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:79)
at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
 

Build failed in Jenkins: kafka-trunk-jdk14 #107

2020-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix redundant typos in comments and javadocs (#8693)


--
[...truncated 3.09 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [DISCUSS] KIP-602 - Change default value for client.dns.lookup

2020-05-20 Thread Rajini Sivaram
Hi Badai,

Thanks for the KIP, sounds like a useful change. Perhaps we should call the
new option `use_first_dns_ip` (not `_ips` since it refers to one). We
should also mention in the KIP that only one type of address (ipv4 or ipv6,
based on the first one) will be used - that is the current behaviour for
`use_all_dns_ips`.  Since we are changing `default` to be exactly the same
as `use_all_dns_ips`, it will be good to mention that explicitly under
Public Interfaces.
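
To make the address-type rule concrete, here is a minimal sketch of the
filtering (the helper name is made up; the real logic lives in the client's
ClientUtils):

import java.net.Inet4Address;
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;

public class PreferredAddressSketch {
    // Keep only the addresses whose family (IPv4 vs IPv6) matches the first
    // DNS answer -- the behaviour described above for `use_all_dns_ips`.
    static List<InetAddress> filterByFirstAddressType(InetAddress[] resolved) {
        boolean firstIsIpv4 = resolved[0] instanceof Inet4Address;
        List<InetAddress> preferred = new ArrayList<>();
        for (InetAddress address : resolved) {
            if ((address instanceof Inet4Address) == firstIsIpv4) {
                preferred.add(address);
            }
        }
        return preferred;
    }
}

Under this rule, `use_first_dns_ip` would use only `resolved[0]`, while
`use_all_dns_ips` would use everything the filter returns.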

Regards,

Rajini


On Mon, May 18, 2020 at 1:44 AM Badai Aqrandista  wrote:

> Ismael
>
> What do you think of the PR and the explanation regarding the issue raised
> in KIP-235?
>
> Should I go ahead and build a proper PR?
>
> Thanks
> Badai
>
> On Mon, May 11, 2020 at 8:53 AM Badai Aqrandista 
> wrote:
>
> > Ismael
> >
> > PR created: https://github.com/apache/kafka/pull/8644/files
> >
> > Also, as this is my first PR, please let me know if I missed anything.
> >
> > Thanks
> > Badai
> >
> > On Mon, May 11, 2020 at 8:19 AM Badai Aqrandista 
> > wrote:
> >
> >> Ismael
> >>
> >> Thank you for responding.
> >>
> >> KIP-235 modified ClientUtils#parseAndValidateAddresses [1] to resolve an
> >> address alias (i.e. bootstrap server) into multiple addresses. This is
> why
> >> it would break SSL hostname verification when the bootstrap server is
> an IP
> >> address, i.e. it will resolve the IP address to an FQDN and use that
> FQDN
> >> in the SSL handshake.
> >>
> >> However, what I am proposing is to modify ClientUtils#resolve [2], which
> >> is only used in ClusterConnectionStates#currentAddress [3], to get the
> >> resolved InetAddress of the address to connect to. And
> >> ClusterConnectionStates#currentAddress is only used by
> >> NetworkClient#initiateConnect [4] to create InetSocketAddress to
> establish
> >> the socket connection to the broker.
> >>
> >> Therefore, as far as I know, this change will not affect higher-level
> >> protocols such as SSL or SASL.
> >>
> >> PR coming after this.
> >>
> >> Thanks
> >> Badai
> >>
> >> [1]
> >>
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L51
> >> [2]
> >>
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L111
> >> [3]
> >>
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java#L403
> >> [4]
> >>
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/NetworkClient.java#L955
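
A minimal sketch of the connection path being described, with made-up names
and port (this is not the actual NetworkClient code): the host is resolved to
all of its addresses and the socket is opened against one of them, so nothing
above the socket layer is involved:

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectSketch {
    // Resolve the host to every address DNS returns, then try each in turn
    // until a socket connects -- what `use_all_dns_ips` gives at connect time.
    static Socket connectToAny(String host, int port) throws IOException {
        IOException lastFailure = new IOException("no addresses for " + host);
        for (InetAddress address : InetAddress.getAllByName(host)) {
            Socket socket = new Socket();
            try {
                // Only the raw IP is used here; SSL/SASL negotiation would
                // happen later against the original hostname.
                socket.connect(new InetSocketAddress(address, port), 5_000);
                return socket; // the caller is responsible for closing it
            } catch (IOException e) {
                lastFailure = e;
                socket.close();
            }
        }
        throw lastFailure;
    }
}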
> >>
> >> On Sun, May 10, 2020 at 10:18 AM Ismael Juma  wrote:
> >>
> >>> Hi Badai,
> >>>
> >>> I think this is a good change. Can you please address the issues raised
> >>> by KIP-235? That was the reason why we did not do it previously.
> >>>
> >>> Ismael
> >>>
> >>> On Mon, Apr 27, 2020 at 5:46 PM Badai Aqrandista 
> >>> wrote:
> >>>
>  Hi everyone
> 
>  I have opened this KIP to have client.dns.lookup default value changed
>  to
>  "use_all_dns_ips".
> 
> 
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-602%3A+Change+default+value+for+client.dns.lookup
> 
>  Feedback appreciated.
> 
>  PS: I'm new here so please let me know if I miss anything.
> 
>  --
>  Thanks,
>  Badai
> 
> >>>
> >>
> >> --
> >> Thanks,
> >> Badai
> >>
> >>
> >
> > --
> > Thanks,
> > Badai
> >
> >
>
> --
> Thanks,
> Badai
>


Build failed in Jenkins: kafka-trunk-jdk8 #4547

2020-05-20 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H44 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
FATAL: java.nio.file.FileSystemException: 
: Input/output error
java.nio.file.FileSystemException: 
: Input/output error
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.asIOException(UnixException.java:111)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:171)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.deleteRecursive(FilePath.java:1268)
at 
hudson.plugins.ws_cleanup.Wipeout.performDelete(Wipeout.java:82)
at hudson.plugins.ws_cleanup.Wipeout.perform(Wipeout.java:78)
at 
hudson.plugins.ws_cleanup.PreBuildCleanup.preCheckout(PreBuildCleanup.java:107)
at 
jenkins.scm.SCMCheckoutStrategy.preCheckout(SCMCheckoutStrategy.java:76)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:498)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 
hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused: java.nio.file.DirectoryIteratorException
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:172)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.hasNext(UnixDirectoryStream.java:201)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:225)
at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at jenkins.util.io.PathRemover.forceRemoveRecursive(PathRemover.java:96)
at hudson.Util.deleteRecursive(Util.java:293)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1274)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1270)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[FINDBUGS] Collecting findbugs analysis files...
ERROR: Step '[Deprecated] Publish FindBugs analysis results' aborted due to 
exception: 
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at 
hudson.plugins.findbugs.FindBugsPublisher.perform(FindBugsPublisher.java:144)
at 
hudson.plugins.analysis.core.HealthAwarePublisher.perform(HealthAwarePublisher.java:69)
at 
hudson.plugins.analysis.core.HealthAwareRecorder.perform(HealthAwareRecorder.java:298)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:79)
at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
 

Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-05-20 Thread Nikolay Izhikov
Hello, Ismael.

> What I meant to ask is if we changed the configuration so that TLS 1.3 is 
> exercised in the system tests by default.

Are you suggesting we just use TLSv1.3 instead of TLSv1.2 whenever the newer 
version is supported?
Or are you suggesting we introduce one more parameter for the applicable 
tests, like `ssl_protocol_version=[TLSv1.2, TLSv1.3]`?

The second option doubles the number of test cases, so the run will be slower.
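
For context, the settings being exercised are the standard `ssl.protocol` and
`ssl.enabled.protocols` configs. A run pinned to the newer protocol might use
(values illustrative, not the proposed defaults):

ssl.protocol=TLSv1.3
ssl.enabled.protocols=TLSv1.3

while the KIP itself proposes `ssl.enabled.protocols=TLSv1.2,TLSv1.3`.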

> On 24 Apr 2020, at 17:34, Ismael Juma wrote:
> 
> Right, some companies run them nightly. What I meant to ask is if we
> changed the configuration so that TLS 1.3 is exercised in the system tests
> by default.
> 
> Ismael
> 
> On Fri, Apr 24, 2020 at 7:32 AM Nikolay Izhikov  wrote:
> 
>> Hello, Ismael.
>> 
>> AFAIK we don’t run system tests nightly.
>> Do we have resources to run system tests periodically?
>> 
>> When I did the testing I used servers my employer gave me.
>> 
>>> On 24 Apr 2020, at 08:05, Ismael Juma wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> Seems like we have been able to run the system tests with TLS 1.3. Do we
>>> run them nightly?
>>> 
>>> Ismael
>>> 
>>> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov 
>> wrote:
>>> 
 Hello, Kafka team.
 
 I ran the system tests that use SSL with TLSv1.3.
 You can find the results of the tests in the Jira ticket [1], [2], [3],
 [4].
 
 I also needed changes [5] in `security_config.py` to execute the system
 tests with TLSv1.3 (more info in the PR description).
 Please, take a look.
 
 Test environment:
   • openjdk11
   • trunk + changes from my PR [5].
 
 The full system test results take up 15 GB.
 Should I share the full logs with you?
 
 What else should be done before we can enable TLSv1.3 by default?
 
 [1]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
 
 [2]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928
 
 [3]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929
 
 [4]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930
 
 [5]
 
>> https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51
 
> On 29 Jan 2020, at 15:27, Nikolay Izhikov wrote:
> 
> Hello, Rajini.
> 
> Thanks for the feedback.
> 
> I’ve searched the tests for the «ssl» keyword and found the following:
> 
> ./test/kafkatest/services/kafka_log4j_appender.py
> ./test/kafkatest/services/listener_security_config.py
> ./test/kafkatest/services/security/security_config.py
> ./test/kafkatest/tests/core/security_test.py
> 
> Are these all the tests that need to be run with TLSv1.3 to ensure we can
 enable it by default?
> 
>> On 28 Jan 2020, at 14:58, Rajini Sivaram wrote:
>> 
>> Hi Nikolay,
>> 
>> Not sure of the total space required. But you can run a collection of
 tests at a time instead of running them all together. That way, you
>> could
 just run all the tests that enable SSL. Details of running a subset of
 tests are in the README in tests.
>> 
>> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov 
 wrote:
>> Hello, Rajini.
>> 
>> I tried to run all the system tests but have not succeeded yet.
>> It turns out that the system tests generate a lot of logs.
>> I had 250 GB of free space, but it was all taken up by the logs from
 half of the system tests.
>> 
>> Do you have any idea how much total disk space I need to run all the
 system tests?
>> 
>>> On 7 Jan 2020, at 14:49, Rajini Sivaram wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> There are a couple of things you could do:
>>> 
>>> 1) Run all system tests that use SSL with TLSv1.3. I had run a
>> subset,
 but
>>> it will be good to run all of them. You can do this locally using
 docker
>>> with JDK 11 by updating the files in tests/docker. You will need to
 update
>>> tests/kafkatest/services/security/security_config.py to enable only
>>> TLSv1.3. Instructions for running system tests using docker are in
>>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
>>> 2) For integration tests, we run a small number of tests using
>> TLSv1.3
 if
>>> the tests are run using JDK 11 and above. We need to do this for
>> system
>>> tests as well. There is an open JIRA:
>>> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to
>> assign
 this
>>> to 

Build failed in Jenkins: kafka-trunk-jdk8 #4546

2020-05-20 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H44 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
FATAL: java.nio.file.FileSystemException: 
: Input/output error
java.nio.file.FileSystemException: 
: Input/output error
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.asIOException(UnixException.java:111)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:171)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.deleteRecursive(FilePath.java:1268)
at 
hudson.plugins.ws_cleanup.Wipeout.performDelete(Wipeout.java:82)
at hudson.plugins.ws_cleanup.Wipeout.perform(Wipeout.java:78)
at 
hudson.plugins.ws_cleanup.PreBuildCleanup.preCheckout(PreBuildCleanup.java:107)
at 
jenkins.scm.SCMCheckoutStrategy.preCheckout(SCMCheckoutStrategy.java:76)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:498)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 
hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused: java.nio.file.DirectoryIteratorException
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:172)
at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.hasNext(UnixDirectoryStream.java:201)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:225)
at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at jenkins.util.io.PathRemover.forceRemoveRecursive(PathRemover.java:96)
at hudson.Util.deleteRecursive(Util.java:293)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1274)
at hudson.FilePath$DeleteRecursive.invoke(FilePath.java:1270)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[FINDBUGS] Collecting findbugs analysis files...
ERROR: Step '[Deprecated] Publish FindBugs analysis results' aborted due to 
exception: 
java.nio.file.FileSystemException: 
/home/jenkins/jenkins-slave/remoting/jarCache/0A/AF18AA284B5204780B01834B961E45.jar:
 Read-only file system
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.setTimes(UnixFileAttributeViews.java:109)
at java.nio.file.Files.setLastModifiedTime(Files.java:2306)
at 
hudson.remoting.FileSystemJarCache.lookInCache(FileSystemJarCache.java:77)
at hudson.remoting.JarCacheSupport.resolve(JarCacheSupport.java:46)
at 
hudson.remoting.ResourceImageInJar._resolveJarURL(ResourceImageInJar.java:90)
at 
hudson.remoting.ResourceImageInJar.resolve(ResourceImageInJar.java:43)
at 
hudson.remoting.RemoteClassLoader.findClass(RemoteClassLoader.java:304)
Caused: java.lang.ClassNotFoundException: hudson.plugins.analysis.Messages
at 
hudson.remoting.RemoteClassLoader.findClass(RemoteClassLoader.java:317)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H44