Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Kamal Chandraprakash
Congrats, Tom!

On Tue, Mar 16, 2021 at 8:32 AM Konstantine Karantasis
 wrote:

> Congratulations Tom!
> Well deserved.
>
> Konstantine
>
> On Mon, Mar 15, 2021 at 4:52 PM Luke Chen  wrote:
>
> > Congratulations!
> >
> > > Federico Valeri  wrote on Tue, Mar 16, 2021 at 4:11 AM:
> >
> > > Congrats, Tom!
> > >
> > > Well deserved.
> > >
> > > On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno 
> wrote:
> > >
> > > > Congratulations Tom!
> > > >
> > > > Get Outlook for Android
> > > >
> > > > 
> > > > From: Guozhang Wang 
> > > > Sent: Monday, March 15, 2021 8:02:44 PM
> > > > To: dev 
> > > > Subject: Re: [ANNOUNCE] New committer: Tom Bentley
> > > >
> > > > Congratulations Tom!
> > > >
> > > > Guozhang
> > > >
> > > > On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck
>  > >
> > > > wrote:
> > > >
> > > > > Congratulations, Tom!
> > > > >
> > > > > -Bill
> > > > >
> > > > > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
> > >  > > > >
> > > > > wrote:
> > > > >
> > > > > > Congrats, Tom!
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > > The PMC for Apache Kafka has invited Tom Bentley as a
> committer,
> > > and
> > > > > > > we are excited to announce that he accepted!
> > > > > > >
> > > > > > > Tom first contributed to Apache Kafka in June 2017 and has been
> > > > > > > actively contributing since February 2020.
> > > > > > > He has accumulated 52 commits and worked on a number of KIPs.
> > Here
> > > > are
> > > > > > > some of the most significant ones:
> > > > > > > KIP-183: Change PreferredReplicaLeaderElectionCommand to
> use
> > > > > > AdminClient
> > > > > > > KIP-195: AdminClient.createPartitions
> > > > > > > KIP-585: Filter and Conditional SMTs
> > > > > > > KIP-621: Deprecate and replace DescribeLogDirsResult.all()
> > and
> > > > > > .values()
> > > > > > > KIP-707: The future of KafkaFuture (still in discussion)
> > > > > > >
> > > > > > > In addition, he is very active on the mailing list and has
> helped
> > > > > > > review many KIPs.
> > > > > > >
> > > > > > > Congratulations Tom and thanks for all the contributions!
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Chia-Ping Tsai

2021-03-15 Thread Konstantine Karantasis
Congratulations Chia-Ping!

Konstantine

On Mon, Mar 15, 2021 at 4:31 AM Rajini Sivaram 
wrote:

> Congratulations, Chia-Ping, well deserved!
>
> Regards,
>
> Rajini
>
> On Mon, Mar 15, 2021 at 9:59 AM Bruno Cadonna 
> wrote:
>
> > Congrats, Chia-Ping!
> >
> > Best,
> > Bruno
> >
> > On 15.03.21 09:22, David Jacot wrote:
> > > Congrats Chia-Ping! Well deserved.
> > >
> > > On Mon, Mar 15, 2021 at 5:39 AM Satish Duggana <
> satish.dugg...@gmail.com
> > >
> > > wrote:
> > >
> > >> Congrats Chia-Ping!
> > >>
> > >> On Sat, 13 Mar 2021 at 13:34, Tom Bentley 
> wrote:
> > >>
> > >>> Congratulations Chia-Ping!
> > >>>
> > >>> On Sat, Mar 13, 2021 at 7:31 AM Kamal Chandraprakash <
> > >>> kamal.chandraprak...@gmail.com> wrote:
> > >>>
> >  Congratulations, Chia-Ping!!
> > 
> >  On Sat, Mar 13, 2021 at 11:38 AM Ismael Juma 
> > >> wrote:
> > 
> > > Congratulations Chia-Ping! Well deserved.
> > >
> > > Ismael
> > >
> > > On Fri, Mar 12, 2021, 11:14 AM Jun Rao 
> > >>> wrote:
> > >
> > >> Hi, Everyone,
> > >>
> > >> Chia-Ping Tsai has been a Kafka committer since Oct. 15,  2020. He
> > >>> has
> > > been
> > >> very instrumental to the community since becoming a committer.
> It's
> > >>> my
> > >> pleasure to announce that Chia-Ping  is now a member of Kafka PMC.
> > >>
> > >> Congratulations Chia-Ping!
> > >>
> > >> Jun
> > >> on behalf of Apache Kafka PMC
> > >>
> > >
> > 
> > >>>
> > >>
> > >
> >
>


[jira] [Created] (KAFKA-12477) Smart rebalancing with dynamic protocol selection

2021-03-15 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12477:
--

 Summary: Smart rebalancing with dynamic protocol selection
 Key: KAFKA-12477
 URL: https://issues.apache.org/jira/browse/KAFKA-12477
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: A. Sophie Blee-Goldman
 Fix For: 3.0.0


Users who want to upgrade their applications and enable the COOPERATIVE 
rebalancing protocol in their consumer apps are required to follow a double 
rolling bounce upgrade path. The reason for this is laid out in the [Consumer 
Upgrades|https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP429:KafkaConsumerIncrementalRebalanceProtocol-Consumer]
 section of KIP-429. Basically, the ConsumerCoordinator picks a rebalancing 
protocol in its constructor based on the list of supported partition assignors. 
The protocol is selected as the highest protocol that is commonly supported by 
all assignors in the list, and never changes after that.

This is a bit unfortunate because it may end up using an older protocol even 
after every member in the group has been updated to support the newer protocol. 
After the first rolling bounce of the upgrade, all members will have two 
assignors: "cooperative-sticky" and "range" (or sticky/round-robin/etc). At 
this point the EAGER protocol will still be selected due to the presence of the 
"range" assignor, but it's the "cooperative-sticky" assignor that will 
ultimately be selected for use in rebalances if that assignor is preferred (i.e.
positioned first in the list). The only reason for the second rolling bounce is 
to strip off the "range" assignor and allow the upgraded members to switch over 
to COOPERATIVE. We can't allow them to use cooperative rebalancing until 
everyone has been upgraded, but once they have it's safe to do so.

And there is already a way for the client to detect that everyone is on the new 
byte code: if the CooperativeStickyAssignor is selected by the group 
coordinator, then that means it is supported by all consumers in the group and 
therefore everyone must be upgraded. 

We may be able to save the second rolling bounce by dynamically updating the 
rebalancing protocol inside the ConsumerCoordinator as "the highest protocol 
supported by the assignor chosen by the group coordinator". This means we'll 
still be using EAGER at the first rebalance, since we of course need to wait 
for this initial rebalance to get the response from the group coordinator. But 
we should take the hint from the chosen assignor rather than dropping this 
information on the floor and sticking with the original protocol.
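The selection logic described above can be sketched as a small model. This is illustrative only: the enum mirrors ConsumerPartitionAssignor.RebalanceProtocol, but the Assignor type and the method names here are hypothetical stand-ins, not Kafka's actual API.

```java
import java.util.List;

public class ProtocolSelection {
    public enum RebalanceProtocol { EAGER, COOPERATIVE }

    // Hypothetical assignor descriptor: a name plus its supported protocols.
    public record Assignor(String name, List<RebalanceProtocol> supported) {
        RebalanceProtocol highest() {
            RebalanceProtocol best = RebalanceProtocol.EAGER;
            for (RebalanceProtocol p : supported) {
                if (p.compareTo(best) > 0) best = p;
            }
            return best;
        }
    }

    // Current behavior (fixed once in the ConsumerCoordinator constructor):
    // the highest protocol commonly supported by ALL configured assignors.
    public static RebalanceProtocol commonProtocol(List<Assignor> assignors) {
        RebalanceProtocol common = RebalanceProtocol.COOPERATIVE;
        for (Assignor a : assignors) {
            if (a.highest().compareTo(common) < 0) common = a.highest();
        }
        return common;
    }

    // Proposed behavior: once the group coordinator has chosen an assignor,
    // upgrade to the highest protocol that assignor supports (never downgrade).
    public static RebalanceProtocol afterSelection(RebalanceProtocol current,
                                                   Assignor chosen) {
        return chosen.highest().compareTo(current) > 0 ? chosen.highest() : current;
    }
}
```

With both "cooperative-sticky" and "range" configured, commonProtocol yields EAGER; once the coordinator picks the cooperative-sticky assignor, afterSelection would upgrade the member to COOPERATIVE without a second bounce.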



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #631

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: timeout issue in removeStreamThread() (#10321)


--
[...truncated 3.69 MB...]
AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction()
 STARTED

AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction()
 PASSED

AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn()
 STARTED

AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn()
 PASSED

AuthorizerIntegrationTest > testCommitWithNoGroupAccess() STARTED

AuthorizerIntegrationTest > testCommitWithNoGroupAccess() PASSED

AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl() STARTED

AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl() PASSED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
STARTED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() PASSED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
STARTED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
PASSED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() STARTED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
PASSED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() STARTED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() PASSED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
STARTED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
PASSED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
STARTED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
PASSED

AuthorizerIntegrationTest > testListTransactionsAuthorization() STARTED

AuthorizerIntegrationTest > testListTransactionsAuthorization() PASSED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() STARTED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() PASSED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() STARTED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() PASSED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
STARTED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
PASSED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testClose() STARTED

SslProducerSendTest > testClose() PASSED

SslProducerSendTest > testFlush() STARTED

SslProducerSendTest > testFlush() PASSED

SslProducerSendTest > testSendToPartition() STARTED

SslProducerSendTest > testSendToPartition() PASSED

SslProducerSendTest > testSendOffset() STARTED

SslProducerSendTest > testSendOffset() PASSED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() PASSED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() STARTED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() PASSED

ProducerCompressionTest > [1] compression=none STARTED

ProducerCompressionTest > [1] compression=none PASS

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #573

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: timeout issue in removeStreamThread() (#10321)


--
[...truncated 3.67 MB...]
LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
PASSED

ProducerStateMana

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #602

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: timeout issue in removeStreamThread() (#10321)


--
[...truncated 3.69 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

[jira] [Created] (KAFKA-12476) Worker can block for longer than scheduled rebalance delay and/or session key TTL

2021-03-15 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-12476:
-

 Summary: Worker can block for longer than scheduled rebalance 
delay and/or session key TTL
 Key: KAFKA-12476
 URL: https://issues.apache.org/jira/browse/KAFKA-12476
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 3.0.0, 2.3.2, 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
Reporter: Chris Egerton
Assignee: Chris Egerton


Near the end of a distributed worker's herder tick loop, it calculates how long 
it should poll for rebalance activity before beginning a new loop. See 
[here|https://github.com/apache/kafka/blob/8da65936d7fc53d24c665c0d01893d25a430933b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L399-L409]
 and 
[here|https://github.com/apache/kafka/blob/8da65936d7fc53d24c665c0d01893d25a430933b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L459].

In between then and when it begins polling for rebalancing activity, some 
connector and task (re-)starts take place. While this normally completes in at 
most a minute or two, an overloaded cluster or one in the midst of garbage 
collection may take longer. See 
[here|https://github.com/apache/kafka/blob/8da65936d7fc53d24c665c0d01893d25a430933b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L411-L452].

The worker should calculate the time to poll for rebalance activity as closely 
as possible to when it actually begins that polling.
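A minimal sketch of the timing issue, with hypothetical names rather than the actual DistributedHerder code: the difference is simply whether the poll timeout is computed before or after the potentially slow connector/task (re-)starts.

```java
public class TickLoopTiming {
    // Buggy shape: the poll timeout is fixed before the slow (re-)starts
    // run, so the subsequent poll still blocks for the full timeout and the
    // tick can overshoot the scheduled rebalance delay by workMs.
    public static long buggyPollMs(long timeoutMs, long workMs) {
        return timeoutMs;
    }

    // Proposed shape: recompute the remaining time just before polling, so
    // time already spent on (re-)starts is subtracted from the poll.
    public static long fixedPollMs(long timeoutMs, long workMs) {
        return Math.max(0, timeoutMs - workMs);
    }
}
```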



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12475) Kafka Streams breaks EOS with remote state stores

2021-03-15 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12475:
--

 Summary: Kafka Streams breaks EOS with remote state stores
 Key: KAFKA-12475
 URL: https://issues.apache.org/jira/browse/KAFKA-12475
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: A. Sophie Blee-Goldman


Currently in Kafka Streams, exactly-once semantics (EOS) require that the state 
stores be completely erased and restored from the changelog from scratch in 
case of an error. This erasure is implemented by closing the state store and 
then simply wiping out the local state directory. This works fine for the two 
store implementations provided OOTB, in-memory and rocksdb, but fails when the 
application includes a custom StateStore based on remote storage, such as 
Redis. In this case Streams will fail to erase any of the data before 
reinserting data from the changelog, resulting in possible duplicates and 
breaking the guarantee of EOS.
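One possible direction (purely a sketch, not an existing Kafka interface) is to let the store itself own the erasure, so a remote implementation can delete its remote data instead of relying on the local state-directory wipe:

```java
import java.util.HashMap;
import java.util.Map;

public class RemoteStoreReset {
    // Hypothetical reset hook; Kafka's real StateStore interface has no such
    // method today, which is exactly the gap this ticket describes.
    public interface ResettableStateStore {
        String name();
        // Erase ALL state this store owns, local or remote, before the
        // changelog restore begins. A Redis-backed store would delete its keys.
        void reset();
    }

    public static class InMemoryStore implements ResettableStateStore {
        private final Map<String, String> data = new HashMap<>();
        public String name() { return "in-memory"; }
        public void reset() { data.clear(); }  // a local wipe is trivial
        public void put(String k, String v) { data.put(k, v); }
        public int size() { return data.size(); }
    }
}
```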



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12474) Worker can die if unable to write new session key

2021-03-15 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-12474:
-

 Summary: Worker can die if unable to write new session key
 Key: KAFKA-12474
 URL: https://issues.apache.org/jira/browse/KAFKA-12474
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
Reporter: Chris Egerton
Assignee: Chris Egerton


If a distributed worker is unable to write (and then read back) a new session 
key to the config topic, an uncaught exception will be thrown from its herder's 
tick thread, killing the worker.

See 
[https://github.com/apache/kafka/blob/8da65936d7fc53d24c665c0d01893d25a430933b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L366-L369]

One way we can handle this case is by forcing a read to the end of the config 
topic whenever an attempt to write a new session key to the config topic fails.
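The proposed handling can be sketched as follows; the ConfigLog interface and method names are hypothetical stand-ins, not the real herder or config-backing-store API:

```java
public class SessionKeyRotation {
    public interface ConfigLog {
        void writeSessionKey(byte[] key) throws Exception;
        void readToEnd();   // pick up a key another worker may have written
    }

    // Rather than letting a failed write propagate out of the tick thread
    // and kill the worker, fall back to reading the config topic to its end.
    public static boolean rotateKey(ConfigLog log, byte[] newKey) {
        try {
            log.writeSessionKey(newKey);
            return true;            // this worker distributed the new key
        } catch (Exception e) {
            log.readToEnd();        // another worker's key may take effect
            return false;           // rotation deferred; worker stays alive
        }
    }
}
```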



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Konstantine Karantasis
Congratulations Tom!
Well deserved.

Konstantine

On Mon, Mar 15, 2021 at 4:52 PM Luke Chen  wrote:

> Congratulations!
>
> Federico Valeri  wrote on Tue, Mar 16, 2021 at 4:11 AM:
>
> > Congrats, Tom!
> >
> > Well deserved.
> >
> > On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno  wrote:
> >
> > > Congratulations Tom!
> > >
> > > Get Outlook for Android
> > >
> > > 
> > > From: Guozhang Wang 
> > > Sent: Monday, March 15, 2021 8:02:44 PM
> > > To: dev 
> > > Subject: Re: [ANNOUNCE] New committer: Tom Bentley
> > >
> > > Congratulations Tom!
> > >
> > > Guozhang
> > >
> > > On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck  >
> > > wrote:
> > >
> > > > Congratulations, Tom!
> > > >
> > > > -Bill
> > > >
> > > > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
> >  > > >
> > > > wrote:
> > > >
> > > > > Congrats, Tom!
> > > > >
> > > > > Best,
> > > > > Bruno
> > > > >
> > > > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > The PMC for Apache Kafka has invited Tom Bentley as a committer,
> > and
> > > > > > we are excited to announce that he accepted!
> > > > > >
> > > > > > Tom first contributed to Apache Kafka in June 2017 and has been
> > > > > > actively contributing since February 2020.
> > > > > > He has accumulated 52 commits and worked on a number of KIPs.
> Here
> > > are
> > > > > > some of the most significant ones:
> > > > > > KIP-183: Change PreferredReplicaLeaderElectionCommand to use
> > > > > AdminClient
> > > > > > KIP-195: AdminClient.createPartitions
> > > > > > KIP-585: Filter and Conditional SMTs
> > > > > > KIP-621: Deprecate and replace DescribeLogDirsResult.all()
> and
> > > > > .values()
> > > > > > KIP-707: The future of KafkaFuture (still in discussion)
> > > > > >
> > > > > > In addition, he is very active on the mailing list and has helped
> > > > > > review many KIPs.
> > > > > >
> > > > > > Congratulations Tom and thanks for all the contributions!
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: Request for JIRA permissions

2021-03-15 Thread Konstantine Karantasis
Added. Welcome to the project Nick!

Looking forward to your contributions.

Konstantine

On Mon, Mar 15, 2021 at 5:21 PM Nick Dekker  wrote:

> I would like to request JIRA issue editing permission, as I would like to
> contribute to the Kafka project (PRs require issue status changes). My JIRA
> username is ikdekker.
>
>
>
> Thank you and I look forward to producing some useful PRs with and for
> Kafka.
>


[GitHub] [kafka-site] chia7712 merged pull request #336: add chia7712 as PMC

2021-03-15 Thread GitBox


chia7712 merged pull request #336:
URL: https://github.com/apache/kafka-site/pull/336


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12473) Make the CooperativeStickyAssignor the default assignor

2021-03-15 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12473:
--

 Summary: Make the CooperativeStickyAssignor the default assignor
 Key: KAFKA-12473
 URL: https://issues.apache.org/jira/browse/KAFKA-12473
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: A. Sophie Blee-Goldman
 Fix For: 3.0.0


Now that 3.0 is coming up, we can change the default ConsumerPartitionAssignor 
to something better than the RangeAssignor. The original plan was to switch 
over to the StickyAssignor, but now that we have incremental cooperative 
rebalancing we should consider using the new CooperativeStickyAssignor 
instead: this will enable the consumer group to follow the COOPERATIVE 
protocol, improving the rebalancing experience OOTB.

Note that this will require users to follow the upgrade path laid out in 
KIP-429 to safely perform a rolling upgrade. When we change the default 
assignor we need to make sure this is clearly documented in the upgrade guide 
for 3.0.
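For reference, the first rolling bounce of the KIP-429 upgrade path amounts to a consumer configuration like the sketch below. The assignor class names and the `partition.assignment.strategy` key are real Kafka identifiers; the wrapper class is only for illustration. The second bounce then removes the RangeAssignor entry.

```java
import java.util.Properties;

public class UpgradeConfig {
    // First rolling bounce: list the cooperative assignor first (so it is
    // preferred once every member supports it), but keep the old assignor so
    // a mixed-version group can still agree on a common protocol.
    public static Properties firstBounceConfig() {
        Properties props = new Properties();
        props.put("partition.assignment.strategy",
            "org.apache.kafka.clients.consumer.CooperativeStickyAssignor,"
          + "org.apache.kafka.clients.consumer.RangeAssignor");
        return props;
    }
}
```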



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Kafka logos

2021-03-15 Thread Justin Mclean
Hi,

I'm still trying to track down an up-to-date hi-res (preferably vector) Kafka 
logo.

These are out of date:
http://svn.apache.org/repos/asf/kafka/site/logos/originals/

Can anyone help?

Thanks,
Justin


[jira] [Created] (KAFKA-12472) Add a Consumer / Streams metric to indicate the current rebalance status

2021-03-15 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-12472:
-

 Summary: Add a Consumer / Streams metric to indicate the current 
rebalance status
 Key: KAFKA-12472
 URL: https://issues.apache.org/jira/browse/KAFKA-12472
 Project: Kafka
  Issue Type: Improvement
  Components: consumer, streams
Reporter: Guozhang Wang


Today, to troubleshoot a rebalance issue, operators need to perform a lot of 
manual steps: locating the problematic members, searching the log entries, and 
looking for related metrics. It would be great to add a single metric that 
covers all these manual steps, so that operators would only need to check this 
single signal to determine the root cause. A concrete idea is to expose two 
enum gauge metrics, on the consumer and on Streams respectively:

* Consumer level (the order below is by design; see the Streams level for 
details):
  0. None => there is no rebalance ongoing.
  1. CoordinatorRequested => any coordinator response contains a 
RebalanceInProgress error code.
  2. NewMember => the join group response has a MemberIdRequired error code.
  3. UnknownMember => any coordinator response contains an UnknownMember error 
code, indicating this member has already been kicked out of the group.
  4. StaleMember => any coordinator response contains an IllegalGeneration 
error code.
  5. DroppedGroup => the heartbeat thread decides to call leaveGroup because 
the heartbeat expired.
  6. UserRequested => leaveGroup is called upon shutdown or the unsubscribeAll 
API, as well as upon calling the enforceRebalance API.
  7. MetadataChanged => requestRejoin is triggered because metadata has changed.
  8. SubscriptionChanged => requestRejoin is triggered because the subscription 
has changed.
  9. RetryOnError => the join/syncGroup response contains a retriable error, 
which causes the consumer to back off and retry.
 10. RevocationNeeded => requestRejoin is triggered because the set of revoked 
partitions is not empty.

The transition rule is that a non-zero status code can only transition to zero
or to a higher code, never to a lower one (the same holds for Streams; see the
rationale below).

* Streams level: today a Streams client can have multiple consumers. We
introduce some new enum states as well as aggregation rules across consumers:
if no Streams-layer event (listed below) transitions the client's status (i.e.
the Streams layer considers it 0), then we aggregate across all the embedded
consumers and take the largest status code as the Streams metric; if a
Streams-layer event determines the status should be 10 or above, it overrides
all embedded consumer-level status codes. In addition, when creating an
aggregated metric across the Streams instances of an app, we follow the same
rule, e.g. if one instance's status code is 1 and the other's is 10, then the
app's status is 10.

 10. RevocationNeeded => the definition is extended from 10) in the consumer
list above: additionally, the leader decides to revoke either active or standby
tasks and hence schedules a follow-up rebalance.
 11. AssignmentProbing => the leader schedules a follow-up rebalance because
the current assignment is unstable.
 12. VersionProbing => the leader schedules a follow-up rebalance due to version
probing.
 13. EndpointUpdate => any member schedules a follow-up rebalance due to
endpoint updates.


The main motivations for the proposed precedence order are the following:
1. When a rebalance is triggered by one member, all other members only know it
is due to CoordinatorRequested from the coordinator error codes; hence
CoordinatorRequested should be overridden by any other status when aggregating
across clients.
2. DroppedGroup can cause unknown/stale members that fail and retry
immediately, and hence should take higher precedence.
3. The definition of revocation is extended in Streams, so it needs the
highest precedence among all consumer-only statuses so that it is not
overridden by any of them.
4. In general, rarer events get higher precedence.
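To make the rules above concrete, here is a minimal sketch (Python, hypothetical helper names; the codes follow the enumeration above) of how the aggregation and override could work:

```python
# Hypothetical sketch of the proposed status aggregation; codes follow the
# enumeration above (0 = None, 1 = CoordinatorRequested, ...,
# 10 = RevocationNeeded, ..., 13 = EndpointUpdate).
NONE = 0
REVOCATION_NEEDED = 10  # lowest Streams-level code

def client_status(consumer_codes, streams_code=NONE):
    """One Streams client's status, aggregated from its embedded consumers.

    A Streams-layer event (code 10+) overrides all embedded consumer
    codes; otherwise the largest consumer code wins.
    """
    if streams_code >= REVOCATION_NEEDED:
        return streams_code
    return max(consumer_codes, default=NONE)

def app_status(client_codes):
    """Across Streams instances the same rule applies: take the max."""
    return max(client_codes, default=NONE)

# Example from the description: one instance at 1 (CoordinatorRequested),
# another at 10 (RevocationNeeded) => the app's status is 10.
assert app_status([1, 10]) == 10
```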

Any comments on the precedence rules / categorization are more than welcome!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Request for JIRA permissions

2021-03-15 Thread Nick Dekker
I would like to request JIRA issue editing permission, as I would like to
contribute to the Kafka project (PRs require issue status changes). My JIRA
username is ikdekker.



Thank you and I look forward to producing some useful PRs with and for
Kafka.


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Luke Chen
Congratulations!

Federico Valeri  於 2021年3月16日 週二 上午4:11 寫道:

> Congrats, Tom!
>
> Well deserved.
>
> On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno  wrote:
>
> > Congratulations Tom!
> >
> > Get Outlook for Android
> >
> > 
> > From: Guozhang Wang 
> > Sent: Monday, March 15, 2021 8:02:44 PM
> > To: dev 
> > Subject: Re: [ANNOUNCE] New committer: Tom Bentley
> >
> > Congratulations Tom!
> >
> > Guozhang
> >
> > On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck 
> > wrote:
> >
> > > Congratulations, Tom!
> > >
> > > -Bill
> > >
> > > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
>  > >
> > > wrote:
> > >
> > > > Congrats, Tom!
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > > Hi all,
> > > > >
> > > > > The PMC for Apache Kafka has invited Tom Bentley as a committer,
> and
> > > > > we are excited to announce that he accepted!
> > > > >
> > > > > Tom first contributed to Apache Kafka in June 2017 and has been
> > > > > actively contributing since February 2020.
> > > > > He has accumulated 52 commits and worked on a number of KIPs. Here
> > are
> > > > > some of the most significant ones:
> > > > > KIP-183: Change PreferredReplicaLeaderElectionCommand to use
> > > > AdminClient
> > > > > KIP-195: AdminClient.createPartitions
> > > > > KIP-585: Filter and Conditional SMTs
> > > > > KIP-621: Deprecate and replace DescribeLogDirsResult.all() and
> > > > .values()
> > > > > KIP-707: The future of KafkaFuture (still in discussion)
> > > > >
> > > > > In addition, he is very active on the mailing list and has helped
> > > > > review many KIPs.
> > > > >
> > > > > Congratulations Tom and thanks for all the contributions!
> > > > >
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #601

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] revert stream logging level back to ERROR (#10320)


--
[...truncated 7.38 MB...]
KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() STARTED

LiteralAclStoreTest > shouldThrowFrom

Request for JIRA permissions

2021-03-15 Thread Nick Dekker
ikdekker


[jira] [Created] (KAFKA-12471) Implement createPartitions in KIP-500 mode

2021-03-15 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-12471:


 Summary: Implement createPartitions in KIP-500 mode
 Key: KAFKA-12471
 URL: https://issues.apache.org/jira/browse/KAFKA-12471
 Project: Kafka
  Issue Type: New Feature
Reporter: Colin McCabe






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-15 Thread Sophie Blee-Goldman
Hey Levani, thanks for the KIP! This looks great.

A few comments/questions about the "task.assignment.rack.awareness" config:
since this is used to determine the encoding of the client tags, we should
make sure to specify that this list must be identical in contents and order
across all clients in the application. Unfortunately there doesn't seem to
be a good way to actually enforce this, so we should call it out in the
config doc string at the least.

On that note, should it be possible for users to upgrade or evolve their
tags over time? For example if a user wants to leverage this feature for an
existing app, or add new tags to an app that already has some configured. I
think we would need to either enforce that you can only add new tags to the
end but never remove/change/reorder the existing ones, or else adopt a
strategy similar to version probing and force all clients to remain on
the old protocol until everyone in the group has been updated to use the
new tags. It's fine with me if you'd prefer to leave this out of scope for
the time being, as long as we design this to be forwards-compatible as best
we can. Have you considered adding a "ClientTagsVersion" to the
SubscriptionInfo so that we're set up to extend this feature in the future?
In my experience so far, any time we *don't* version a protocol like this
we end up regretting it later.

Whatever you decide, it should be documented clearly -- in the KIP and in
the actual docs, ie upgrade guide and/or the config doc string -- so that
users know whether they can ever change the client tags on a running
application or not. (I think this is hinted at in the KIP, but not called
out explicitly)

By the way, I feel the "task.assignment.rack.awareness" config should have
the words "clients" and/or "tags" somewhere in it, otherwise it's a bit
unclear what it actually means. And maybe it should specify that it applies
to standby task placement only? Obviously we don't need to cover every
possible detail in the config name alone, but it could be a little more
specific. What about "standby.task.rack.aware.assignment.tags" or something
like that?
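For what it's worth, Guozhang's encoding idea in the quoted message below can be sketched roughly like this (Python, hypothetical function names): every client derives the same key-to-short mapping from the shared, ordered config value, so only the compact key id plus the tag value bytes need to go on the wire.

```python
# Rough sketch of the key-encoding idea (hypothetical names): all clients
# must configure the tag key list identically, so the index of each key in
# that list can serve as a compact wire key.
def encode_tags(configured_keys, tag_values):
    """Map each tag key to its index in the shared config order."""
    key_ids = {key: idx for idx, key in enumerate(configured_keys)}
    # wire-format sketch: (short key id, value bytes)
    return [(key_ids[k], v.encode("utf-8")) for k, v in tag_values.items()]

# As in Guozhang's example: zone -> 0, cluster -> 1
tags = encode_tags(["zone", "cluster"], {"zone": "eu-1a", "cluster": "k8s-a"})
```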

On Mon, Mar 15, 2021 at 2:12 PM Levani Kokhreidze 
wrote:

> Hello all,
>
> Bumping this thread as we are one binding vote short accepting this KIP.
> Please let me know if you have any extra concerns and/or suggestions.
>
> Regards,
> Levani
>
> > On 12. Mar 2021, at 13:14, Levani Kokhreidze 
> wrote:
> >
> > Hi Guozhang,
> >
> > Thanks for the feedback. I think it makes sense.
> > I updated the KIP with your proposal [1], it’s a nice optimisation.
> > I do agree that having the same configuration across Kafka Streams
> instances is the reasonable requirement.
> >
> > Best,
> > Levani
> >
> > [1] -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
> >
> >
> >
> >> On 12. Mar 2021, at 03:36, Guozhang Wang  wangg...@gmail.com>> wrote:
> >>
> >> Hello Levani,
> >>
> >> Thanks for the great write-up! I think this proposal makes sense,
> though I
> >> have one minor suggestion regarding the protocol format change: note the
> >> subscription info is part of the group metadata message that we need to
> >> write into the internal topic, and hence it's always better if we can
> save
> >> on the number of bytes written there. For this, I'm wondering if we can
> >> encode the key part instead of writing raw bytes based on the
> >> configurations, i.e.:
> >>
> >> 1. streams will look at the `task.assignment.rack.awareness` values, and
> >> encode them in a deterministic manner, e.g. in your example zone = 0,
> >> cluster = 1. This assumes that all instances will configure this value
> in
> >> the same way and then with a deterministic manner all instances will
> have
> >> the same encodings, which I think is a reasonable requirement.
> >> 2. the sent protocol would be "key => short, value => bytes" instead.
> >>
> >>
> >> WDYT?
> >>
> >> Otherwise, I'm +1 on the KIP!
> >>
> >> Guozhang
> >>
> >>
> >>
> >>
> >> On Thu, Mar 11, 2021 at 8:29 AM John Roesler  > wrote:
> >>
> >>> Thanks for the KIP!
> >>>
> >>> I'm +1 (binding)
> >>>
> >>> -John
> >>>
> >>> On Wed, 2021-03-10 at 13:13 +0200, Levani Kokhreidze wrote:
>  Hello all,
> 
>  I’d like to start the voting on KIP-708 [1]
> 
>  Best,
>  Levani
> 
>  [1] -
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
> >
> 
> >>>
> >>>
> >>>
> >>
> >> --
> >> -- Guozhang
> >
>
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #572

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] revert stream logging level back to ERROR (#10320)


--
[...truncated 3.65 MB...]
KafkaTest > testZkSslTrustStoreLocation() PASSED

KafkaTest > testZkSslEnabledProtocols() STARTED

KafkaTest > testZkSslEnabledProtocols() PASSED

KafkaTest > testKafkaSslPasswords() STARTED

KafkaTest > testKafkaSslPasswords() PASSED

KafkaTest > testGetKafkaConfigFromArgs() STARTED

KafkaTest > testGetKafkaConfigFromArgs() PASSED

KafkaTest > testZkSslClientEnable() STARTED

KafkaTest > testZkSslClientEnable() PASSED

KafkaTest > testZookeeperTrustStorePassword() STARTED

KafkaTest > testZookeeperTrustStorePassword() PASSED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd() STARTED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd() PASSED

KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly() STARTED

KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly() PASSED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging() STARTED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging() PASSED

KafkaTest > testZkSslKeyStoreLocation() STARTED

KafkaTest > testZkSslKeyStoreLocation() PASSED

KafkaTest > testZkSslCrlEnable() STARTED

KafkaTest > testZkSslCrlEnable() PASSED

KafkaTest > testZkSslEndpointIdentificationAlgorithm() STARTED

KafkaTest > testZkSslEndpointIdentificationAlgorithm() PASSED

KafkaTest > testZkSslTrustStoreType() STARTED

KafkaTest > testZkSslTrustStoreType() PASSED

KafkaMetadataLogTest > testTruncateBelowHighWatermark() STARTED

KafkaMetadataLogTest > testTruncateBelowHighWatermark() PASSED

KafkaMetadataLogTest > testMaxBatchSize() STARTED

KafkaMetadataLogTest > testMaxBatchSize() PASSED

KafkaMetadataLogTest > testCleanupOlderSnapshots() STARTED

KafkaMetadataLogTest > testCleanupOlderSnapshots() PASSED

KafkaMetadataLogTest > testTruncateWillRemoveOlderSnapshot() STARTED

KafkaMetadataLogTest > testTruncateWillRemoveOlderSnapshot() PASSED

KafkaMetadataLogTest > testFailToIncreaseLogStartPastHighWatermark() STARTED

KafkaMetadataLogTest > testFailToIncreaseLogStartPastHighWatermark() PASSED

KafkaMetadataLogTest > testCreateReplicatedLogTruncatesFully() STARTED

KafkaMetadataLogTest > testCreateReplicatedLogTruncatesFully() PASSED

KafkaMetadataLogTest > testCreateSnapshot() STARTED

KafkaMetadataLogTest > testCreateSnapshot() PASSED

KafkaMetadataLogTest > testUpdateLogStartOffset() STARTED

KafkaMetadataLogTest > testUpdateLogStartOffset() PASSED

KafkaMetadataLogTest > testCleanupPartialSnapshots() STARTED

KafkaMetadataLogTest > testCleanupPartialSnapshots() PASSED

KafkaMetadataLogTest > testUnexpectedAppendOffset() STARTED

KafkaMetadataLogTest > testUnexpectedAppendOffset() PASSED

KafkaMetadataLogTest > testTruncateFullyToLatestSnapshot() STARTED

KafkaMetadataLogTest > testTruncateFullyToLatestSnapshot() PASSED

KafkaMetadataLogTest > testDoesntTruncateFully() STARTED

KafkaMetadataLogTest > testDoesntTruncateFully() PASSED

KafkaMetadataLogTest > testUpdateLogStartOffsetWillRemoveOlderSnapshot() STARTED

KafkaMetadataLogTest > testUpdateLogStartOffsetWillRemoveOlderSnapshot() PASSED

KafkaMetadataLogTest > testUpdateLogStartOffsetWithMissingSnapshot() STARTED

KafkaMetadataLogTest > testUpdateLogStartOffsetWithMissingSnapshot() PASSED

KafkaMetadataLogTest > testReadMissingSnapshot() STARTED

KafkaMetadataLogTest > testReadMissingSnapshot() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED

RaftManagerTest > testUncaughtExceptionInIoThread() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@76bcf6fb, value = [B@10d32ad0), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@76bcf6fb, value = [B@10d32ad0), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false

[jira] [Resolved] (KAFKA-12470) The topic names in the metrics do not retain their format when extracting through JMX.

2021-03-15 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rafał Chmielewski resolved KAFKA-12470.
---
Resolution: Duplicate

> The topic names in the metrics do not retain their format when extracting 
> through JMX.
> --
>
> Key: KAFKA-12470
> URL: https://issues.apache.org/jira/browse/KAFKA-12470
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Reporter: Rafał Chmielewski
>Priority: Major
>
> I have topic names that have a period in the name:
> product.order
> product.offering.price
>  
> However, for the metrics issued by JMX by a program that is a consumer of 
> Kafka messages, the dots are replaced with an underscore:
> kafka.consumer client-id=consumer-export-4, topic=product_offering_price, 
> partition=1><>records-lead
>  
> But for the producer, this problem doesn't occur:
> kafka.producer client-id=bss.data.verification.pi_1, 
> topic=product.offering.price><>record-send-total
>  
> As a consumer I use Akka Alpakka. But I think it's using Apache library to 
> connect to Kafka and report metrics via JMX.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12470) The topic names in the metrics do not retain their format when extracting through JMX.

2021-03-15 Thread Jira
Rafał Chmielewski created KAFKA-12470:
-

 Summary: The topic names in the metrics do not retain their format 
when extracting through JMX.
 Key: KAFKA-12470
 URL: https://issues.apache.org/jira/browse/KAFKA-12470
 Project: Kafka
  Issue Type: Bug
  Components: metrics
Reporter: Rafał Chmielewski


I have topic names that have a period in the name:


product.order
product.offering.price

 

However, for the metrics issued via JMX by a program that consumes Kafka
messages, the dots are replaced with underscores:

kafka.consumer<>records-lead

 

But for the producer, this problem doesn't occur:

kafka.producer<>record-send-total

 

As a consumer I use Akka Alpakka, but I believe it uses the Apache Kafka client
library to connect to Kafka and report metrics via JMX.
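Purely to illustrate the reported symptom (a hypothetical helper, not the actual Kafka code path): the tag value appears to go through a sanitization step like this on the consumer side, while the producer side leaves it untouched.

```python
# Hypothetical illustration of the symptom: a JMX-tag sanitizer that
# replaces '.' in a tag value with '_', as observed on the consumer metrics.
def sanitize_tag(value: str) -> str:
    return value.replace(".", "_")

assert sanitize_tag("product.offering.price") == "product_offering_price"
```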



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12469) The topic names in the metrics do not retain their format when extracting through JMX.

2021-03-15 Thread Jira
Rafał Chmielewski created KAFKA-12469:
-

 Summary: The topic names in the metrics do not retain their format 
when extracting through JMX.
 Key: KAFKA-12469
 URL: https://issues.apache.org/jira/browse/KAFKA-12469
 Project: Kafka
  Issue Type: Bug
  Components: metrics
Reporter: Rafał Chmielewski


I have topic names that have a period in the name:


product.order
product.offering.price

 

However, for the metrics issued via JMX by a program that consumes Kafka
messages, the dots are replaced with underscores:

kafka.consumer<>records-lead

 

But for the producer, this problem doesn't occur:

kafka.producer<>record-send-total

 

As a consumer I use Akka Alpakka, but I believe it uses the Apache Kafka client
library to connect to Kafka and report metrics via JMX.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-15 Thread Levani Kokhreidze
Hello all,

Bumping this thread as we are one binding vote short accepting this KIP.
Please let me know if you have any extra concerns and/or suggestions.

Regards,
Levani

> On 12. Mar 2021, at 13:14, Levani Kokhreidze  wrote:
> 
> Hi Guozhang,
> 
> Thanks for the feedback. I think it makes sense.
> I updated the KIP with your proposal [1], it’s a nice optimisation.
> I do agree that having the same configuration across Kafka Streams instances 
> is the reasonable requirement.
> 
> Best,
> Levani
> 
> [1] - 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
>  
> 
> 
> 
>> On 12. Mar 2021, at 03:36, Guozhang Wang > > wrote:
>> 
>> Hello Levani,
>> 
>> Thanks for the great write-up! I think this proposal makes sense, though I
>> have one minor suggestion regarding the protocol format change: note the
>> subscription info is part of the group metadata message that we need to
>> write into the internal topic, and hence it's always better if we can save
>> on the number of bytes written there. For this, I'm wondering if we can
>> encode the key part instead of writing raw bytes based on the
>> configurations, i.e.:
>> 
>> 1. streams will look at the `task.assignment.rack.awareness` values, and
>> encode them in a deterministic manner, e.g. in your example zone = 0,
>> cluster = 1. This assumes that all instances will configure this value in
>> the same way and then with a deterministic manner all instances will have
>> the same encodings, which I think is a reasonable requirement.
>> 2. the sent protocol would be "key => short, value => bytes" instead.
>> 
>> 
>> WDYT?
>> 
>> Otherwise, I'm +1 on the KIP!
>> 
>> Guozhang
>> 
>> 
>> 
>> 
>> On Thu, Mar 11, 2021 at 8:29 AM John Roesler > > wrote:
>> 
>>> Thanks for the KIP!
>>> 
>>> I'm +1 (binding)
>>> 
>>> -John
>>> 
>>> On Wed, 2021-03-10 at 13:13 +0200, Levani Kokhreidze wrote:
 Hello all,
 
 I’d like to start the voting on KIP-708 [1]
 
 Best,
 Levani
 
 [1] -
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
>>>  
>>> 
 
>>> 
>>> 
>>> 
>> 
>> -- 
>> -- Guozhang
> 



Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #630

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] revert stream logging level back to ERROR (#10320)


--
[...truncated 3.70 MB...]
PlaintextConsumerTest > testShrinkingTopicSubscriptions() STARTED

PlaintextConsumerTest > testShrinkingTopicSubscriptions() PASSED

PlaintextConsumerTest > testMaxPollIntervalMs() STARTED

PlaintextConsumerTest > testMaxPollIntervalMs() PASSED

PlaintextConsumerTest > testOffsetsForTimes() STARTED

PlaintextConsumerTest > testOffsetsForTimes() PASSED

PlaintextConsumerTest > testSubsequentPatternSubscription() STARTED

PlaintextConsumerTest > testSubsequentPatternSubscription() PASSED

PlaintextConsumerTest > testPerPartitionLeadMetricsCleanUpWithAssign() STARTED

PlaintextConsumerTest > testPerPartitionLeadMetricsCleanUpWithAssign() PASSED

PlaintextConsumerTest > testConsumeMessagesWithCreateTime() STARTED

PlaintextConsumerTest > testConsumeMessagesWithCreateTime() PASSED

PlaintextConsumerTest > testAsyncCommit() STARTED

PlaintextConsumerTest > testAsyncCommit() PASSED

PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition() STARTED

PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition() PASSED

PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling() STARTED

PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling() PASSED

PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation() STARTED

PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation() PASSED

PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithAssign() STARTED

PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithAssign() PASSED

PlaintextConsumerTest > testPartitionsForInvalidTopic() STARTED

PlaintextConsumerTest > testPartitionsForInvalidTopic() PASSED

PlaintextConsumerTest > testMultiConsumerStickyAssignment() STARTED

PlaintextConsumerTest > testMultiConsumerStickyAssignment() PASSED

PlaintextConsumerTest > testPauseStateNotPreservedByRebalance() STARTED

PlaintextConsumerTest > testPauseStateNotPreservedByRebalance() PASSED

PlaintextConsumerTest > testFetchHonoursFetchSizeIfLargeRecordNotFirst() STARTED

PlaintextConsumerTest > testFetchHonoursFetchSizeIfLargeRecordNotFirst() PASSED

PlaintextConsumerTest > testSeek() STARTED

PlaintextConsumerTest > testSeek() PASSED

PlaintextConsumerTest > testConsumingWithNullGroupId() STARTED

PlaintextConsumerTest > testConsumingWithNullGroupId() PASSED

PlaintextConsumerTest > testPositionAndCommit() STARTED

PlaintextConsumerTest > testPositionAndCommit() PASSED

PlaintextConsumerTest > testFetchRecordLargerThanMaxPartitionFetchBytes() 
STARTED

PlaintextConsumerTest > testFetchRecordLargerThanMaxPartitionFetchBytes() PASSED

PlaintextConsumerTest > testUnsubscribeTopic() STARTED

PlaintextConsumerTest > testUnsubscribeTopic() PASSED

PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose() STARTED

PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose() PASSED

PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes() STARTED

PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes() PASSED

PlaintextConsumerTest > testMultiConsumerDefaultAssignment() STARTED

PlaintextConsumerTest > testMultiConsumerDefaultAssignment() PASSED

PlaintextConsumerTest > testAutoCommitOnClose() STARTED

PlaintextConsumerTest > testAutoCommitOnClose() PASSED

PlaintextConsumerTest > testListTopics() STARTED

PlaintextConsumerTest > testListTopics() PASSED

PlaintextConsumerTest > testExpandingTopicSubscriptions() STARTED

PlaintextConsumerTest > testExpandingTopicSubscriptions() PASSED

PlaintextConsumerTest > testInterceptors() STARTED

PlaintextConsumerTest > testInterceptors() PASSED

PlaintextConsumerTest > testConsumingWithEmptyGroupId() STARTED

PlaintextConsumerTest > testConsumingWithEmptyGroupId() PASSED

PlaintextConsumerTest > testPatternUnsubscription() STARTED

PlaintextConsumerTest > testPatternUnsubscription() PASSED

PlaintextConsumerTest > testGroupConsumption() STARTED

PlaintextConsumerTest > testGroupConsumption() PASSED

PlaintextConsumerTest > testPartitionsFor() STARTED

PlaintextConsumerTest > testPartitionsFor() PASSED

PlaintextConsumerTest > testAutoCommitOnRebalance() STARTED

PlaintextConsumerTest > testAutoCommitOnRebalance() PASSED

PlaintextConsumerTest > testInterceptorsWithWrongKeyValue() STARTED

PlaintextConsumerTest > testInterceptorsWithWrongKeyValue() PASSED

PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords() STARTED

PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords() PASSED

PlaintextConsumerTest > testHeaders() STARTED

PlaintextConsumerTest > testHeaders() PASSED

PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment() STARTED

PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment() PASSED

PlaintextConsumerTest > testHeadersSerializerDeserializer() STARTED

PlaintextCo

[jira] [Created] (KAFKA-12468) Initial offsets are copied from source to target cluster

2021-03-15 Thread Bart De Neuter (Jira)
Bart De Neuter created KAFKA-12468:
--

 Summary: Initial offsets are copied from source to target cluster
 Key: KAFKA-12468
 URL: https://issues.apache.org/jira/browse/KAFKA-12468
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.7.0
Reporter: Bart De Neuter


We have an active-passive setup where the three connectors from MirrorMaker 2
(heartbeat, checkpoint and source) are running on a dedicated Kafka Connect
cluster on the target cluster.

Offset syncing is enabled as specified by KIP-545. But when activated, the
offsets from the source cluster seem to be copied to the target cluster
initially, without translation. This causes a negative lag for all synced
consumer groups. Only after we reset the offsets for each topic/partition on
the target cluster and produce a record on the topic/partition in the source
does the sync start working correctly.

I would expect the consumer groups to be synced without the current offsets of
the source cluster being copied to the target cluster.

This is the configuration we are currently using:

Heartbeat connector

 
{code:xml}
{
  "name": "mm2-mirror-heartbeat",
  "config": {
"name": "mm2-mirror-heartbeat",
"connector.class": 
"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector",
"source.cluster.alias": "eventador",
"target.cluster.alias": "msk",
"source.cluster.bootstrap.servers": "",
"target.cluster.bootstrap.servers": "",
"topics": ".*",
"groups": ".*",
"tasks.max": "1",
"replication.policy.class": "CustomReplicationPolicy",
"sync.group.offsets.enabled": "true",
"sync.group.offsets.interval.seconds": "5",
"emit.checkpoints.enabled": "true",
"emit.checkpoints.interval.seconds": "30",
"emit.heartbeats.interval.seconds": "30",
"key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
  }
}
{code}
Checkpoint connector:
{code:json}
{
  "name": "mm2-mirror-checkpoint",
  "config": {
"name": "mm2-mirror-checkpoint",
"connector.class": 
"org.apache.kafka.connect.mirror.MirrorCheckpointConnector",
"source.cluster.alias": "eventador",
"target.cluster.alias": "msk",
"source.cluster.bootstrap.servers": "",
"target.cluster.bootstrap.servers": "",
"topics": ".*",
"groups": ".*",
"tasks.max": "40",
"replication.policy.class": "CustomReplicationPolicy",
"sync.group.offsets.enabled": "true",
"sync.group.offsets.interval.seconds": "5",
"emit.checkpoints.enabled": "true",
"emit.checkpoints.interval.seconds": "30",
"emit.heartbeats.interval.seconds": "30",
"key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
  }
}
{code}
 Source connector:
{code:json}
{
  "name": "mm2-mirror-source",
  "config": {
"name": "mm2-mirror-source",
"connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
"source.cluster.alias": "eventador",
"target.cluster.alias": "msk",
"source.cluster.bootstrap.servers": "",
"target.cluster.bootstrap.servers": "",
"topics": ".*",
"groups": ".*",
"tasks.max": "40",
"replication.policy.class": "CustomReplicationPolicy",
"sync.group.offsets.enabled": "true",
"sync.group.offsets.interval.seconds": "5",
"emit.checkpoints.enabled": "true",
"emit.checkpoints.interval.seconds": "30",
"emit.heartbeats.interval.seconds": "30",
"key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
  }
}
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12467) Disabled implementation for generating controller snapshots

2021-03-15 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12467:
--

 Summary: Disabled implementation for generating controller 
snapshots
 Key: KAFKA-12467
 URL: https://issues.apache.org/jira/browse/KAFKA-12467
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio


The controller will have an implementation for generating snapshots, but the 
default configuration won't generate them. The default configuration won't 
generate snapshots because, at this point, neither the Controller nor the 
Broker knows how to load them.
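The gating described above (implementation present, but inert by default) can be sketched as a config-guarded generator. The config key and function names below are hypothetical placeholders for illustration, not actual Kafka configuration:

```python
# Hypothetical default: snapshot generation exists but is switched off so
# that controllers/brokers that cannot yet load snapshots never see one.
DEFAULT_CONFIG = {"metadata.snapshot.generation.enable": False}

def maybe_generate_snapshot(config, committed_records):
    if not config.get("metadata.snapshot.generation.enable", False):
        return None  # implementation present but disabled by default
    return {"last_contained_offset": len(committed_records) - 1,
            "records": list(committed_records)}

print(maybe_generate_snapshot(DEFAULT_CONFIG, ["r0", "r1"]))  # None
```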



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Federico Valeri
Congrats, Tom!

Well deserved.

On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno  wrote:

> Congratulations Tom!
>
> Get Outlook for Android
>
> 
> From: Guozhang Wang 
> Sent: Monday, March 15, 2021 8:02:44 PM
> To: dev 
> Subject: Re: [ANNOUNCE] New committer: Tom Bentley
>
> Congratulations Tom!
>
> Guozhang
>
> On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck 
> wrote:
>
> > Congratulations, Tom!
> >
> > -Bill
> >
> > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna  >
> > wrote:
> >
> > > Congrats, Tom!
> > >
> > > Best,
> > > Bruno
> > >
> > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > Hi all,
> > > >
> > > > The PMC for Apache Kafka has invited Tom Bentley as a committer, and
> > > > we are excited to announce that he accepted!
> > > >
> > > > Tom first contributed to Apache Kafka in June 2017 and has been
> > > > actively contributing since February 2020.
> > > > He has accumulated 52 commits and worked on a number of KIPs. Here
> are
> > > > some of the most significant ones:
> > > > KIP-183: Change PreferredReplicaLeaderElectionCommand to use
> > > AdminClient
> > > > KIP-195: AdminClient.createPartitions
> > > > KIP-585: Filter and Conditional SMTs
> > > > KIP-621: Deprecate and replace DescribeLogDirsResult.all() and
> > > .values()
> > > > KIP-707: The future of KafkaFuture (still in discussion)
> > > >
> > > > In addition, he is very active on the mailing list and has helped
> > > > review many KIPs.
> > > >
> > > > Congratulations Tom and thanks for all the contributions!
> > > >
> > >
> >
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-12466) Controller and Broker Metadata Snapshots

2021-03-15 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12466:
--

 Summary: Controller and Broker Metadata Snapshots
 Key: KAFKA-12466
 URL: https://issues.apache.org/jira/browse/KAFKA-12466
 Project: Kafka
  Issue Type: New Feature
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #571

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10348: Share client channel between forwarding and auto creation 
manager (#10135)

[github] KAFKA-12352: Make sure all rejoin group and reset state has a reason 
(#10232)


--
[...truncated 7.32 MB...]
GetOffsetShellParsingTest > [1] excludeInternal=true PASSED

GetOffsetShellParsingTest > [2] excludeInternal=false STARTED

GetOffsetShellParsingTest > [2] excludeInternal=false PASSED

GetOffsetShellParsingTest > testPartitionFilterForInvalidUpperBound() STARTED

GetOffsetShellParsingTest > testPartitionFilterForInvalidUpperBound() PASSED

GetOffsetShellParsingTest > testPartitionFilterForSingleIndex() STARTED

GetOffsetShellParsingTest > testPartitionFilterForSingleIndex() PASSED

GetOffsetShellParsingTest > [1] excludeInternal=true STARTED

GetOffsetShellParsingTest > [1] excludeInternal=true PASSED

GetOffsetShellParsingTest > [2] excludeInternal=false STARTED

GetOffsetShellParsingTest > [2] excludeInternal=false PASSED

GetOffsetShellParsingTest > [1] excludeInternal=true STARTED

GetOffsetShellParsingTest > [1] excludeInternal=true PASSED

GetOffsetShellParsingTest > [2] excludeInternal=false STARTED

GetOffsetShellParsingTest > [2] excludeInternal=false PASSED

GetOffsetShellParsingTest > testPartitionFilterForInvalidRange() STARTED

GetOffsetShellParsingTest > testPartitionFilterForInvalidRange() PASSED

DumpLogSegmentsTest > testDumpEmptyIndex() STARTED

DumpLogSegmentsTest > testDumpEmptyIndex() PASSED

DumpLogSegmentsTest > testPrintDataLog() STARTED

DumpLogSegmentsTest > testPrintDataLog() PASSED

DumpLogSegmentsTest > testDumpIndexMismatches() STARTED

DumpLogSegmentsTest > testDumpIndexMismatches() PASSED

DumpLogSegmentsTest > testDumpMetadataRecords() STARTED

DumpLogSegmentsTest > testDumpMetadataRecords() PASSED

DumpLogSegmentsTest > testDumpTimeIndexErrors() STARTED

DumpLogSegmentsTest > testDumpTimeIndexErrors() PASSED

ClusterToolTest > testClusterTooOldToHaveId() STARTED

ClusterToolTest > testClusterTooOldToHaveId() PASSED

ClusterToolTest > testPrintClusterId() STARTED

ClusterToolTest > testPrintClusterId() PASSED

ClusterToolTest > testLegacyModeClusterCannotUnregisterBroker() STARTED

ClusterToolTest > testLegacyModeClusterCannotUnregisterBroker() PASSED

ClusterToolTest > testUnregisterBroker() STARTED

ClusterToolTest > testUnregisterBroker() PASSED

CustomDeserializerTest > checkDeserializerTopicIsNotNull() STARTED

CustomDeserializerTest > checkDeserializerTopicIsNotNull() PASSED

CustomDeserializerTest > checkFormatterCallDeserializerWithHeaders() STARTED

CustomDeserializerTest > checkFormatterCallDeserializerWithHeaders() PASSED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 STARTED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
PASSED

ControllerContextTest > testPreferredReplicaImbalanceMetric() STARTED

ControllerContextTest > testPreferredReplicaImbalanceMetric() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() PASSED

ControllerContextTest > testReassignTo() STARTED

ControllerContextTest > testReassignTo() PASSED

ControllerContextTest > testPartitionReplicaAssignment() STARTED

ControllerContextTest > testPartitionReplicaAssignment() PASSED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() STARTED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() PASSED

ControllerContextTest > testReassignToIdempotence() STARTED

ControllerContextTest > testReassignToIdempotence() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@4be6ce03, value = [B@2c32bdc1), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTim

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #600

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10348: Share client channel between forwarding and auto creation 
manager (#10135)

[github] KAFKA-12352: Make sure all rejoin group and reset state has a reason 
(#10232)


--
[...truncated 7.38 MB...]
BrokerEndPointTest > testFromJsonV1() STARTED

BrokerEndPointTest > testFromJsonV1() PASSED

BrokerEndPointTest > testFromJsonV2() STARTED

BrokerEndPointTest > testFromJsonV2() PASSED

BrokerEndPointTest > testFromJsonV3() STARTED

BrokerEndPointTest > testFromJsonV3() PASSED

BrokerEndPointTest > testFromJsonV5() STARTED

BrokerEndPointTest > testFromJsonV5() PASSED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() STARTED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() PASSED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() STARTED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() PASSED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
STARTED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
PASSED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() STARTED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() PASSED

JsonValueTest > testJsonObjectIterator() STARTED

JsonValueTest > testJsonObjectIterator() PASSED

JsonValueTest > testDecodeLong() STARTED

JsonValueTest > testDecodeLong() PASSED

JsonValueTest > testAsJsonObject() STARTED

JsonValueTest > testAsJsonObject() PASSED

JsonValueTest > testDecodeDouble() STARTED

JsonValueTest > testDecodeDouble() PASSED

JsonValueTest > testDecodeOption() STARTED

JsonValueTest > testDecodeOption() PASSED

JsonValueTest > testDecodeString() STARTED

JsonValueTest > testDecodeString() PASSED

JsonValueTest > testJsonValueToString() STARTED

JsonValueTest > testJsonValueToString() PASSED

JsonValueTest > testAsJsonObjectOption() STARTED

JsonValueTest > testAsJsonObjectOption() PASSED

JsonValueTest > testAsJsonArrayOption() STARTED

JsonValueTest > testAsJsonArrayOption() PASSED

JsonValueTest > testAsJsonArray() STARTED

JsonValueTest > testAsJsonArray() PASSED

JsonValueTest > testJsonValueHashCode() STARTED

JsonValueTest > testJsonValueHashCode() PASSED

JsonValueTest > testDecodeInt() STARTED

JsonValueTest > testDecodeInt() PASSED

JsonValueTest > testDecodeMap() STARTED

JsonValueTest > testDecodeMap() PASSED

JsonValueTest > testDecodeSeq() STARTED

JsonValueTest > testDecodeSeq() PASSED

JsonValueTest > testJsonObjectGet() STARTED

JsonValueTest > testJsonObjectGet() PASSED

JsonValueTest > testJsonValueEquals() STARTED

JsonValueTest > testJsonValueEquals() PASSED

JsonValueTest > testJsonArrayIterator() STARTED

JsonValueTest > testJsonArrayIterator() PASSED

JsonValueTest > testJsonObjectApply() STARTED

JsonValueTest > testJsonObjectApply() PASSED

JsonValueTest > testDecodeBoolean() STARTED

JsonValueTest > testDecodeBoolean() PASSED

PasswordEncoderTest > testEncoderConfigChange() STARTED

PasswordEncoderTest > testEncoderConfigChange() PASSED

PasswordEncoderTest > testEncodeDecodeAlgorithms() STARTED

PasswordEncoderTest > testEncodeDecodeAlgorithms() PASSED

PasswordEncoderTest > testEncodeDecode() STARTED

PasswordEncoderTest > testEncodeDecode() PASSED

ThrottlerTest > testThrottleDesiredRate() STARTED

ThrottlerTest > testThrottleDesiredRate() PASSED

LoggingTest > testLoggerLevelIsResolved() STARTED

LoggingTest > testLoggerLevelIsResolved() PASSED

LoggingTest > testLog4jControllerIsRegistered() STARTED

LoggingTest > testLog4jControllerIsRegistered() PASSED

LoggingTest > testTypeOfGetLoggers() STARTED

LoggingTest > testTypeOfGetLoggers() PASSED

LoggingTest > testLogName() STARTED

LoggingTest > testLogName() PASSED

LoggingTest > testLogNameOverride() STARTED

LoggingTest > testLogNameOverride() PASSED

TimerTest > testAlreadyExpiredTask() STARTED

TimerTest > testAlreadyExpiredTask() PASSED

TimerTest > testTaskExpiration() STARTED

TimerTest > testTaskExpiration() PASSED

ReplicationUtilsTest > testUpdateLeaderAndIsr() STARTED

ReplicationUtilsTest > testUpdateLeaderAndIsr() PASSED

TopicFilterTest > testIncludeLists() STARTED

TopicFilterTest > testIncludeLists() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED

RaftManagerTest > testUncaughtExceptionInIoThread() PASSED

RequestChannelTest > testNonAlterRequestsNotTransformed() STARTED

RequestChannelTest > testNonAlterRequestsNotTransformed() PASSED

RequestChannelTest > testAlterRequests() STARTED

RequestChannelTest > testAlterRequests() PASSED

RequestChannelTest > testJsonRequests() STARTED

RequestChannelTest > testJsonRequests() PASSED

RequestChannelTest > testIncrementalAlterRequests() STARTED

RequestChannelTest > testIncrementalAlterRequests() PASSED

ControllerContextTes

Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Paolo Patierno
Congratulations Tom!

Get Outlook for Android


From: Guozhang Wang 
Sent: Monday, March 15, 2021 8:02:44 PM
To: dev 
Subject: Re: [ANNOUNCE] New committer: Tom Bentley

Congratulations Tom!

Guozhang

On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck 
wrote:

> Congratulations, Tom!
>
> -Bill
>
> On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna 
> wrote:
>
> > Congrats, Tom!
> >
> > Best,
> > Bruno
> >
> > On 15.03.21 18:59, Mickael Maison wrote:
> > > Hi all,
> > >
> > > The PMC for Apache Kafka has invited Tom Bentley as a committer, and
> > > we are excited to announce that he accepted!
> > >
> > > Tom first contributed to Apache Kafka in June 2017 and has been
> > > actively contributing since February 2020.
> > > He has accumulated 52 commits and worked on a number of KIPs. Here are
> > > some of the most significant ones:
> > > KIP-183: Change PreferredReplicaLeaderElectionCommand to use
> > AdminClient
> > > KIP-195: AdminClient.createPartitions
> > > KIP-585: Filter and Conditional SMTs
> > > KIP-621: Deprecate and replace DescribeLogDirsResult.all() and
> > .values()
> > > KIP-707: The future of KafkaFuture (still in discussion)
> > >
> > > In addition, he is very active on the mailing list and has helped
> > > review many KIPs.
> > >
> > > Congratulations Tom and thanks for all the contributions!
> > >
> >
>


--
-- Guozhang


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Guozhang Wang
Congratulations Tom!

Guozhang

On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck 
wrote:

> Congratulations, Tom!
>
> -Bill
>
> On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna 
> wrote:
>
> > Congrats, Tom!
> >
> > Best,
> > Bruno
> >
> > On 15.03.21 18:59, Mickael Maison wrote:
> > > Hi all,
> > >
> > > The PMC for Apache Kafka has invited Tom Bentley as a committer, and
> > > we are excited to announce that he accepted!
> > >
> > > Tom first contributed to Apache Kafka in June 2017 and has been
> > > actively contributing since February 2020.
> > > He has accumulated 52 commits and worked on a number of KIPs. Here are
> > > some of the most significant ones:
> > > KIP-183: Change PreferredReplicaLeaderElectionCommand to use
> > AdminClient
> > > KIP-195: AdminClient.createPartitions
> > > KIP-585: Filter and Conditional SMTs
> > > KIP-621: Deprecate and replace DescribeLogDirsResult.all() and
> > .values()
> > > KIP-707: The future of KafkaFuture (still in discussion)
> > >
> > > In addition, he is very active on the mailing list and has helped
> > > review many KIPs.
> > >
> > > Congratulations Tom and thanks for all the contributions!
> > >
> >
>


-- 
-- Guozhang


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #629

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12352: Make sure all rejoin group and reset state has a reason 
(#10232)


--
[...truncated 3.67 MB...]
BrokerCompressionTest > [11] messageCompression=none, brokerCompression=lz4 
PASSED

BrokerCompressionTest > [12] messageCompression=gzip, brokerCompression=lz4 
STARTED

BrokerCompressionTest > [12] messageCompression=gzip, brokerCompression=lz4 
PASSED

BrokerCompressionTest > [13] messageCompression=snappy, brokerCompression=lz4 
STARTED

BrokerCompressionTest > [13] messageCompression=snappy, brokerCompression=lz4 
PASSED

BrokerCompressionTest > [14] messageCompression=lz4, brokerCompression=lz4 
STARTED

BrokerCompressionTest > [14] messageCompression=lz4, brokerCompression=lz4 
PASSED

BrokerCompressionTest > [15] messageCompression=zstd, brokerCompression=lz4 
STARTED

BrokerCompressionTest > [15] messageCompression=zstd, brokerCompression=lz4 
PASSED

BrokerCompressionTest > [16] messageCompression=none, brokerCompression=snappy 
STARTED

BrokerCompressionTest > [16] messageCompression=none, brokerCompression=snappy 
PASSED

BrokerCompressionTest > [17] messageCompression=gzip, brokerCompression=snappy 
STARTED

BrokerCompressionTest > [17] messageCompression=gzip, brokerCompression=snappy 
PASSED

BrokerCompressionTest > [18] messageCompression=snappy, 
brokerCompression=snappy STARTED

BrokerCompressionTest > [18] messageCompression=snappy, 
brokerCompression=snappy PASSED

BrokerCompressionTest > [19] messageCompression=lz4, brokerCompression=snappy 
STARTED

BrokerCompressionTest > [19] messageCompression=lz4, brokerCompression=snappy 
PASSED

BrokerCompressionTest > [20] messageCompression=zstd, brokerCompression=snappy 
STARTED

BrokerCompressionTest > [20] messageCompression=zstd, brokerCompression=snappy 
PASSED

BrokerCompressionTest > [21] messageCompression=none, brokerCompression=gzip 
STARTED

BrokerCompressionTest > [21] messageCompression=none, brokerCompression=gzip 
PASSED

BrokerCompressionTest > [22] messageCompression=gzip, brokerCompression=gzip 
STARTED

BrokerCompressionTest > [22] messageCompression=gzip, brokerCompression=gzip 
PASSED

BrokerCompressionTest > [23] messageCompression=snappy, brokerCompression=gzip 
STARTED

BrokerCompressionTest > [23] messageCompression=snappy, brokerCompression=gzip 
PASSED

BrokerCompressionTest > [24] messageCompression=lz4, brokerCompression=gzip 
STARTED

BrokerCompressionTest > [24] messageCompression=lz4, brokerCompression=gzip 
PASSED

BrokerCompressionTest > [25] messageCompression=zstd, brokerCompression=gzip 
STARTED

BrokerCompressionTest > [25] messageCompression=zstd, brokerCompression=gzip 
PASSED

BrokerCompressionTest > [26] messageCompression=none, 
brokerCompression=producer STARTED

BrokerCompressionTest > [26] messageCompression=none, 
brokerCompression=producer PASSED

BrokerCompressionTest > [27] messageCompression=gzip, 
brokerCompression=producer STARTED

BrokerCompressionTest > [27] messageCompression=gzip, 
brokerCompression=producer PASSED

BrokerCompressionTest > [28] messageCompression=snappy, 
brokerCompression=producer STARTED

BrokerCompressionTest > [28] messageCompression=snappy, 
brokerCompression=producer PASSED

BrokerCompressionTest > [29] messageCompression=lz4, brokerCompression=producer 
STARTED

BrokerCompressionTest > [29] messageCompression=lz4, brokerCompression=producer 
PASSED

BrokerCompressionTest > [30] messageCompression=zstd, 
brokerCompression=producer STARTED

BrokerCompressionTest > [30] messageCompression=zstd, 
brokerCompression=producer PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@286d9e3d, value = [B@690faba), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@286d9e3d, value = [B@690faba), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@631

Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Bill Bejeck
Congratulations, Tom!

-Bill

On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna 
wrote:

> Congrats, Tom!
>
> Best,
> Bruno
>
> On 15.03.21 18:59, Mickael Maison wrote:
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Tom Bentley as a committer, and
> > we are excited to announce that he accepted!
> >
> > Tom first contributed to Apache Kafka in June 2017 and has been
> > actively contributing since February 2020.
> > He has accumulated 52 commits and worked on a number of KIPs. Here are
> > some of the most significant ones:
> > KIP-183: Change PreferredReplicaLeaderElectionCommand to use
> AdminClient
> > KIP-195: AdminClient.createPartitions
> > KIP-585: Filter and Conditional SMTs
> > KIP-621: Deprecate and replace DescribeLogDirsResult.all() and
> .values()
> > KIP-707: The future of KafkaFuture (still in discussion)
> >
> > In addition, he is very active on the mailing list and has helped
> > review many KIPs.
> >
> > Congratulations Tom and thanks for all the contributions!
> >
>


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Bruno Cadonna

Congrats, Tom!

Best,
Bruno

On 15.03.21 18:59, Mickael Maison wrote:

Hi all,

The PMC for Apache Kafka has invited Tom Bentley as a committer, and
we are excited to announce that he accepted!

Tom first contributed to Apache Kafka in June 2017 and has been
actively contributing since February 2020.
He has accumulated 52 commits and worked on a number of KIPs. Here are
some of the most significant ones:
KIP-183: Change PreferredReplicaLeaderElectionCommand to use AdminClient
KIP-195: AdminClient.createPartitions
KIP-585: Filter and Conditional SMTs
KIP-621: Deprecate and replace DescribeLogDirsResult.all() and .values()
KIP-707: The future of KafkaFuture (still in discussion)

In addition, he is very active on the mailing list and has helped
review many KIPs.

Congratulations Tom and thanks for all the contributions!



[ANNOUNCE] New committer: Tom Bentley

2021-03-15 Thread Mickael Maison
Hi all,

The PMC for Apache Kafka has invited Tom Bentley as a committer, and
we are excited to announce that he accepted!

Tom first contributed to Apache Kafka in June 2017 and has been
actively contributing since February 2020.
He has accumulated 52 commits and worked on a number of KIPs. Here are
some of the most significant ones:
   KIP-183: Change PreferredReplicaLeaderElectionCommand to use AdminClient
   KIP-195: AdminClient.createPartitions
   KIP-585: Filter and Conditional SMTs
   KIP-621: Deprecate and replace DescribeLogDirsResult.all() and .values()
   KIP-707: The future of KafkaFuture (still in discussion)

In addition, he is very active on the mailing list and has helped
review many KIPs.

Congratulations Tom and thanks for all the contributions!


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #628

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10348: Share client channel between forwarding and auto creation 
manager (#10135)


--
[...truncated 3.68 MB...]
AclAuthorizerTest > testAuthorizeByResourceTypeWildcardResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWildcardResourceDenyDominate() 
PASSED

AclAuthorizerTest > testEmptyAclThrowsException() STARTED

AclAuthorizerTest > testEmptyAclThrowsException() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeNoAclFoundOverride() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeNoAclFoundOverride() PASSED

AclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() STARTED

AclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() STARTED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() PASSED

AclAuthorizerTest > testDeleteAclOnWildcardResource() STARTED

AclAuthorizerTest > testDeleteAclOnWildcardResource() PASSED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() STARTED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() PASSED

AclAuthorizerTest > testChangeListenerTiming() STARTED

AclAuthorizerTest > testChangeListenerTiming() PASSED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() STARTED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() STARTED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() PASSED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() STARTED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() PASSED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() STARTED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() PASSED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() STARTED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() PASSED

AclAuthorizerTest > testAddAclsOnLiteralResource() STARTED

AclAuthorizerTest > testAddAclsOnLiteralResource() PASSED

AclAuthorizerTest > testGetAclsPrincipal() STARTED

AclAuthorizerTest > testGetAclsPrincipal() PASSED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() STARTED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() PASSED

AclAuthorizerTest > testLoadCache() STARTED

AclAuthorizerTest > testLoadCache() PASSED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AuthorizerInterfaceDe

Re: [DISCUSS] KIP-405 + KAFKA-7739 - Implementation of Tiered Storage Integration with Azure Storage

2021-03-15 Thread Satish Duggana
Hi Israel,
Thanks for your interest in tiered storage. As mentioned by Jun earlier, we
decided not to host any implementations in the Apache Kafka repo, similar to
how Kafka connectors are handled. We plan to have RSM implementations for
HDFS, S3, GCP, and Azure storage in a separate repo, and will let you know
once they are ready for review.

Best,
Satish.

On Sat, 13 Mar 2021 at 01:27, Israel Ekpo  wrote:

> Thanks @Jun for the prompt response.
>
> That's ok and I think it is a great strategy just like the Connect
> ecosystem.
>
> However, I am still in search for repos and samples that demonstrate
> implementation for the KIP.
>
> I will keep searching but was just wondering if there were sample
> implementations for S3 or HDFS I could take a look at.
>
> Thanks.
>
> On Fri, Mar 12, 2021 at 2:19 PM Jun Rao  wrote:
>
> > Hi, Israel,
> >
> > Thanks for your interest. As part of KIP-405, we have made the decision
> not
> > to host any plugins for external remote storage directly in Apache Kafka.
> > Those plugins could be hosted outside of Apache Kafka.
> >
> > Jun
> >
> > On Thu, Mar 11, 2021 at 5:15 PM Israel Ekpo 
> wrote:
> >
> > > Thanks Satish, Sriharsha, Suresh and Ying for authoring this KIP and
> > thanks
> > > to everyone that participated in the review and discussion to take it
> to
> > > where it is today.
> > >
> > > I would like to contribute by working on integrating Azure Storage
> (Blob
> > > and ADLS) with Tiered Storage for this KIP
> > >
> > > I have created this issue to track this work
> > > https://issues.apache.org/jira/browse/KAFKA-12458
> > >
> > > Are there any sample implementations for HDFS/S3 that I can reference
> to
> > > get started?
> > >
> > > When you have a moment, please share.
> > >
> > > Thanks.
> > >
> >
>
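For readers looking for a starting point while the official RSM implementations are pending, here is a minimal stand-in sketch of what a KIP-405-style remote storage plugin does, written in Python for illustration (the real plugin is a Java interface in Apache Kafka). A local directory plays the role of Azure Blob/S3/HDFS; the method names mirror the KIP's copy/fetch/delete operations but are not the actual Java signatures:

```python
import os
import tempfile

class DirectoryRemoteStorageManager:
    """Toy remote storage: one file per uploaded log segment."""

    def __init__(self, root):
        self.root = root

    def _path(self, topic, partition, base_offset):
        return os.path.join(self.root, f"{topic}-{partition}-{base_offset}.log")

    def copy_log_segment(self, topic, partition, base_offset, data: bytes):
        # Upload a sealed local segment to remote storage.
        with open(self._path(topic, partition, base_offset), "wb") as f:
            f.write(data)

    def fetch_log_segment(self, topic, partition, base_offset, start=0, end=None):
        # Serve a byte range of a tiered segment back to a fetching broker.
        with open(self._path(topic, partition, base_offset), "rb") as f:
            f.seek(start)
            return f.read() if end is None else f.read(end - start)

    def delete_log_segment(self, topic, partition, base_offset):
        # Remove a segment that fell out of the retention window.
        os.remove(self._path(topic, partition, base_offset))

root = tempfile.mkdtemp()
rsm = DirectoryRemoteStorageManager(root)
rsm.copy_log_segment("payments", 0, 0, b"segment-bytes")
print(rsm.fetch_log_segment("payments", 0, 0))  # b'segment-bytes'
rsm.delete_log_segment("payments", 0, 0)
```

A real Azure/S3 implementation would swap the file operations for SDK calls but keep the same copy/fetch/delete shape.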


[GitHub] [kafka-site] tombentley opened a new pull request #337: Add tombentley to committers

2021-03-15 Thread GitBox


tombentley opened a new pull request #337:
URL: https://github.com/apache/kafka-site/pull/337


   As title.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #599

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update year in NOTICE (#10308)


--
[...truncated 7.38 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

LiteralAclStore

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #627

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update year in NOTICE (#10308)


--
[...truncated 3.71 MB...]

AclAuthorizerTest > testChangeListenerTiming() PASSED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() STARTED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() STARTED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() PASSED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() STARTED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() PASSED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() STARTED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() PASSED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() STARTED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() PASSED

AclAuthorizerTest > testAddAclsOnLiteralResource() STARTED

AclAuthorizerTest > testAddAclsOnLiteralResource() PASSED

AclAuthorizerTest > testGetAclsPrincipal() STARTED

AclAuthorizerTest > testGetAclsPrincipal() PASSED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() STARTED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() PASSED

AclAuthorizerTest > testLoadCache() STARTED

AclAuthorizerTest > testLoadCache() PASSED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AuthorizerInterfaceDefaultTest > testAuthorzeByResourceTypeSuperUserHasAccess() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorzeByResourceTypeSuperUserHasAccess() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllPrincipalAce() PASSED

AclEntryTest > testAclJsonConversion() STARTED

AclEntryTest > testAclJsonConversion() PASSED

AuthorizerWrapperTest > 
testAuthorizeByResourceTypeDisableAllowEveryoneOverride() STARTED

AuthorizerWrapperTest > 
testAuthorizeByResourceTypeDisableAllowEveryoneOverride() PASSED

AuthorizerWrapperTest > testAuthorizeByResourceTypeWithA

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #570

2021-03-15 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update year in NOTICE (#10308)


--
[...truncated 3.67 MB...]
LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
PASSED

ProducerStateManagerTest > testBas

Re: [ANNOUNCE] New Kafka PMC Member: Chia-Ping Tsai

2021-03-15 Thread Rajini Sivaram
Congratulations, Chia-Ping, well deserved!

Regards,

Rajini

On Mon, Mar 15, 2021 at 9:59 AM Bruno Cadonna 
wrote:

> Congrats, Chia-Ping!
>
> Best,
> Bruno
>
> On 15.03.21 09:22, David Jacot wrote:
> > Congrats Chia-Ping! Well deserved.
> >
> > On Mon, Mar 15, 2021 at 5:39 AM Satish Duggana  >
> > wrote:
> >
> >> Congrats Chia-Ping!
> >>
> >> On Sat, 13 Mar 2021 at 13:34, Tom Bentley  wrote:
> >>
> >>> Congratulations Chia-Ping!
> >>>
> >>> On Sat, Mar 13, 2021 at 7:31 AM Kamal Chandraprakash <
> >>> kamal.chandraprak...@gmail.com> wrote:
> >>>
>  Congratulations, Chia-Ping!!
> 
>  On Sat, Mar 13, 2021 at 11:38 AM Ismael Juma 
> >> wrote:
> 
> > Congratulations Chia-Ping! Well deserved.
> >
> > Ismael
> >
> > On Fri, Mar 12, 2021, 11:14 AM Jun Rao 
> >>> wrote:
> >
> >> Hi, Everyone,
> >>
> >> Chia-Ping Tsai has been a Kafka committer since Oct. 15,  2020. He
> >>> has
> > been
> >> very instrumental to the community since becoming a committer. It's
> >>> my
> >> pleasure to announce that Chia-Ping  is now a member of Kafka PMC.
> >>
> >> Congratulations Chia-Ping!
> >>
> >> Jun
> >> on behalf of Apache Kafka PMC
> >>
> >
> 
> >>>
> >>
> >
>


[GitHub] [kafka-site] chia7712 opened a new pull request #336: add chia7712 as PMC

2021-03-15 Thread GitBox


chia7712 opened a new pull request #336:
URL: https://github.com/apache/kafka-site/pull/336


   as title



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [ANNOUNCE] New Kafka PMC Member: Chia-Ping Tsai

2021-03-15 Thread Bruno Cadonna

Congrats, Chia-Ping!

Best,
Bruno

On 15.03.21 09:22, David Jacot wrote:
> Congrats Chia-Ping! Well deserved.
>
> On Mon, Mar 15, 2021 at 5:39 AM Satish Duggana 
> wrote:
>
>> Congrats Chia-Ping!
>>
>> On Sat, 13 Mar 2021 at 13:34, Tom Bentley  wrote:
>>
>>> Congratulations Chia-Ping!
>>>
>>> On Sat, Mar 13, 2021 at 7:31 AM Kamal Chandraprakash <
>>> kamal.chandraprak...@gmail.com> wrote:
>>>
>>>> Congratulations, Chia-Ping!!
>>>>
>>>> On Sat, Mar 13, 2021 at 11:38 AM Ismael Juma 
>>>> wrote:
>>>>
>>>>> Congratulations Chia-Ping! Well deserved.
>>>>>
>>>>> Ismael
>>>>>
>>>>> On Fri, Mar 12, 2021, 11:14 AM Jun Rao 
>>>>> wrote:
>>>>>
>>>>>> Hi, Everyone,
>>>>>>
>>>>>> Chia-Ping Tsai has been a Kafka committer since Oct. 15, 2020. He has
>>>>>> been very instrumental to the community since becoming a committer. It's
>>>>>> my pleasure to announce that Chia-Ping is now a member of Kafka PMC.
>>>>>>
>>>>>> Congratulations Chia-Ping!
>>>>>>
>>>>>> Jun
>>>>>> on behalf of Apache Kafka PMC

Re: [ANNOUNCE] New Kafka PMC Member: Chia-Ping Tsai

2021-03-15 Thread David Jacot
Congrats Chia-Ping! Well deserved.

On Mon, Mar 15, 2021 at 5:39 AM Satish Duggana 
wrote:

> Congrats Chia-Ping!
>
> On Sat, 13 Mar 2021 at 13:34, Tom Bentley  wrote:
>
> > Congratulations Chia-Ping!
> >
> > On Sat, Mar 13, 2021 at 7:31 AM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > Congratulations, Chia-Ping!!
> > >
> > > On Sat, Mar 13, 2021 at 11:38 AM Ismael Juma 
> wrote:
> > >
> > > > Congratulations Chia-Ping! Well deserved.
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Mar 12, 2021, 11:14 AM Jun Rao 
> > wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > Chia-Ping Tsai has been a Kafka committer since Oct. 15,  2020. He
> > has
> > > > been
> > > > > very instrumental to the community since becoming a committer. It's
> > my
> > > > > pleasure to announce that Chia-Ping  is now a member of Kafka PMC.
> > > > >
> > > > > Congratulations Chia-Ping!
> > > > >
> > > > > Jun
> > > > > on behalf of Apache Kafka PMC
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-12465) Decide whether inconsistent cluster id error are fatal

2021-03-15 Thread dengziming (Jira)
dengziming created KAFKA-12465:
--

 Summary: Decide whether inconsistent cluster id error are fatal
 Key: KAFKA-12465
 URL: https://issues.apache.org/jira/browse/KAFKA-12465
 Project: Kafka
  Issue Type: Sub-task
Reporter: dengziming


Currently, we just log an error when an inconsistent cluster id occurs. We
should define a window during startup in which these errors are fatal; after
that window, we should no longer treat them as fatal.
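The startup-window policy described in the issue can be sketched as follows. This is an illustrative sketch only; the class and method names are hypothetical, not Kafka code, and a fake injectable clock is used so the window behavior is easy to exercise.

```java
import java.time.Duration;
import java.util.function.LongSupplier;

// Sketch of the KAFKA-12465 idea: an inconsistent cluster-id error is fatal
// only within a fixed window after startup; afterwards it is merely logged.
class ClusterIdErrorPolicy {
    private final long startupMs;
    private final long windowMs;
    private final LongSupplier clock;  // injectable clock, for testability

    ClusterIdErrorPolicy(Duration window, LongSupplier clock) {
        this.clock = clock;
        this.startupMs = clock.getAsLong();  // remember when startup happened
        this.windowMs = window.toMillis();
    }

    /** Fatal only while we are still inside the startup window. */
    boolean isFatal() {
        return clock.getAsLong() - startupMs <= windowMs;
    }
}

public class ClusterIdPolicyDemo {
    public static void main(String[] args) {
        long[] now = {0L};  // mutable fake clock
        ClusterIdErrorPolicy policy =
                new ClusterIdErrorPolicy(Duration.ofSeconds(30), () -> now[0]);
        System.out.println(policy.isFatal());  // true: inside the window, treat as fatal
        now[0] = 60_000L;                      // advance past the 30s window
        System.out.println(policy.isFatal());  // false: log-only after the window
    }
}
```

The open design question the ticket raises is how long the window should be and whether it should be configurable.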



--
This message was sent by Atlassian Jira
(v8.3.4#803005)