Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #579

2021-03-17 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #609

2021-03-17 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New Kafka PMC Member: Chia-Ping Tsai

2021-03-17 Thread Kowshik Prakasam
Congrats Chia-Ping!

On Tue, Mar 16, 2021, 6:16 PM Dongjin Lee  wrote:

> 
>
> Best,
> Dongjin
>
> On Tue, Mar 16, 2021 at 2:20 PM Konstantine Karantasis
>  wrote:
>
> > Congratulations Chia-Ping!
> >
> > Konstantine
> >
> > On Mon, Mar 15, 2021 at 4:31 AM Rajini Sivaram 
> > wrote:
> >
> > > Congratulations, Chia-Ping, well deserved!
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Mon, Mar 15, 2021 at 9:59 AM Bruno Cadonna
>  > >
> > > wrote:
> > >
> > > > Congrats, Chia-Ping!
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On 15.03.21 09:22, David Jacot wrote:
> > > > > Congrats Chia-Ping! Well deserved.
> > > > >
> > > > > On Mon, Mar 15, 2021 at 5:39 AM Satish Duggana <
> > > satish.dugg...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > >> Congrats Chia-Ping!
> > > > >>
> > > > >> On Sat, 13 Mar 2021 at 13:34, Tom Bentley 
> > > wrote:
> > > > >>
> > > > >>> Congratulations Chia-Ping!
> > > > >>>
> > > > >>> On Sat, Mar 13, 2021 at 7:31 AM Kamal Chandraprakash <
> > > > >>> kamal.chandraprak...@gmail.com> wrote:
> > > > >>>
> > > >  Congratulations, Chia-Ping!!
> > > > 
> > > >  On Sat, Mar 13, 2021 at 11:38 AM Ismael Juma  >
> > > > >> wrote:
> > > > 
> > > > > Congratulations Chia-Ping! Well deserved.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Fri, Mar 12, 2021, 11:14 AM Jun Rao
>  > >
> > > > >>> wrote:
> > > > >
> > > > >> Hi, Everyone,
> > > > >>
> > > > >> Chia-Ping Tsai has been a Kafka committer since Oct. 15,
> 2020.
> > He
> > > > >>> has
> > > > > been
> > > > >> very instrumental to the community since becoming a committer.
> > > It's
> > > > >>> my
> > > > >> pleasure to announce that Chia-Ping is now a member of Kafka
> > PMC.
> > > > >>
> > > > >> Congratulations Chia-Ping!
> > > > >>
> > > > >> Jun
> > > > >> on behalf of Apache Kafka PMC
> > > > >>
> > > > >
> > > > 
> > > > >>>
> > > > >>
> > > > >
> > > >
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>


Re: Inquiry about usage of Kafka Compression

2021-03-17 Thread Dongjin Lee
Hi Loren,


This error occurs when your application fails to load the libzstd-jni
library file from the temporary directory.



   - zstd-jni v1.3.5-4 (included in Kafka 2.1.0):
   
https://github.com/luben/zstd-jni/blob/v1.3.5-4/src/main/java/com/github/luben/zstd/util/Native.java#L101
   - zstd-jni latest:
   
https://github.com/luben/zstd-jni/blob/master/src/main/java/com/github/luben/zstd/util/Native.java#L137


In short, zstd-jni works as follows:



   1. Its jar bundles a shared library for every supported platform.
   2. When initializing, it copies the appropriate library into the temp
   directory.
   3. It loads the extracted library into memory.
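As a rough illustration, the unpack-and-load steps above can be sketched like
this. (This is a simplified, hypothetical sketch, not the actual zstd-jni
source; the real implementation is in com.github.luben.zstd.util.Native,
linked above, and the class and method names here are made up.)

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the unpack step described above.
public class NativeLoaderSketch {

    static Path unpackToTemp(InputStream bundledLib, String prefix) throws IOException {
        // The temp directory defaults to java.io.tmpdir.
        Path tmpDir = Paths.get(System.getProperty("java.io.tmpdir"));
        Path target = Files.createTempFile(tmpDir, prefix, ".so");
        // Copy the platform-specific library out of the jar. A full disk or
        // an unwritable temp directory makes this step fail -- which is the
        // kind of condition behind "Cannot unpack libzstd-jni" errors.
        Files.copy(bundledLib, target, StandardCopyOption.REPLACE_EXISTING);
        return target;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for reading the bundled library from the jar, e.g.
        // getResourceAsStream("/linux/amd64/libzstd-jni.so").
        InputStream fake = new ByteArrayInputStream(new byte[] {1, 2, 3});
        Path extracted = unpackToTemp(fake, "libzstd-jni");
        System.out.println(Files.size(extracted));
        // The real loader would now call:
        // System.load(extracted.toAbsolutePath().toString());
        Files.deleteIfExists(extracted);
    }
}
```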


So, it would be good to check:



   1. What platform are you running on?
   2. Do you have enough space and permissions for the temp directory?
   3. Does the error occur repeatedly?


Please check and let me know. (Disclaimer: I added zstd support
to Apache Kafka.)


Thanks,

Dongjin

On Thu, Mar 18, 2021 at 12:17 AM Loren Abigail Sion 
wrote:

> Good day,
>
>
> We're currently in the process of implementing our application with kafka
> compression type ZStandard (zstd).
>
> However during the testing process the consumer encountered this error:
>
>  [ERROR] (consumer-1)
>
> org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer
>- Container exception
> org.apache.kafka.common.KafkaException: Received exception when fetching
> the next record from dp.---. If needed, please seek past
> the record to continue consumption.
> at
>
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1228)
> at
>
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:1096)
> at
>
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:544)
> at
>
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:505)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1225)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1188)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1123)
> at
>
> org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:532)
> at
>
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.kafka.common.KafkaException:
> java.lang.ExceptionInInitializerError: Cannot unpack libzstd-jni
>
> *Here's the version for the producer and consumer:*
>
> Producer Kafka Client Version (using ZStandard compression): 2.5.1
> Consumer Kafka Client Version: 2.1.0
>
> Could you help us identify what caused this error? Do we need to upgrade
> the version on the consumer side?
>
>
> Best Regards,
>
> Loren Sion
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


[GitHub] [kafka-site] Alee4738 opened a new pull request #338: MINOR quickstart.html remove extra closing paren, fix syntax

2021-03-17 Thread GitBox


Alee4738 opened a new pull request #338:
URL: https://github.com/apache/kafka-site/pull/338


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12494) Broker raise InternalError after disk sector medium error without marking dir to offline

2021-03-17 Thread iBlackeyes (Jira)
iBlackeyes created KAFKA-12494:
--

 Summary: Broker raise InternalError after disk sector medium error 
without marking dir to offline
 Key: KAFKA-12494
 URL: https://issues.apache.org/jira/browse/KAFKA-12494
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.7.0, 2.5.1, 2.6.0, 2.4.0, 1.1.2
 Environment: Kafka Version: 1.1.0
Jdk Version:  jdk1.8
Reporter: iBlackeyes


In our production environment, we encountered a case where the Kafka broker only raises errors like 

 `_*2021-02-16 23:24:24,965 | ERROR | [data-plane-kafka-request-handler-19] | 
[ReplicaManager broker=7] Error processing append operation on partition 
xxx-0 | kafka.server.ReplicaManager (Logging.scala:76)*_ 
_*java.lang.InternalError: a fault occurred in a recent unsafe memory access 
operation in compiled Java code*_` 

when the broker appends to a bad disk sector, and it doesn't mark the log dir 
on this disk offline.

This leaves many partitions with replicas assigned on this disk in an 
under-replicated state.

Here is the logs:

*os messages log:*
{code:java}
Feb 16 23:24:24 hd-node109 kernel: blk_update_request: critical medium error, 
dev sds, sector 2308010408
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] FAILED Result: 
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Sense Key : Medium Error 
[current] 
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Add. Sense: Unrecovered 
read error
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] CDB: Read(10) 28 00 89 
91 71 a8 00 00 08 00
Feb 16 23:24:24 hd-node109 kernel: blk_update_request: critical medium error, 
dev sds, sector 2308010408
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] FAILED Result: 
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Sense Key : Medium Error 
[current] 
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Add. Sense: Unrecovered 
read error
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] CDB: Read(10) 28 00 89 
91 71 a8 00 00 08 00
Feb 16 23:24:24 hd-node109 kernel: blk_update_request: critical medium error, 
dev sds, sector 2308010408
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] FAILED Result: 
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Sense Key : Medium Error 
[current] 
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Add. Sense: Unrecovered 
read error
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] CDB: Read(10) 28 00 89 
91 71 a8 00 00 08 00
Feb 16 23:24:24 hd-node109 kernel: blk_update_request: critical medium error, 
dev sds, sector 2308010408
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] FAILED Result: 
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Sense Key : Medium Error 
[current] 
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] Add. Sense: Unrecovered 
read error
Feb 16 23:24:24 hd-node109 kernel: sd 14:1:0:18: [sds] CDB: Read(10) 28 00 89 
91 71 a8 00 00 08 00
Feb 16 23:24:24 hd-node109 kernel: blk_update_request: critical medium error, 
dev sds, sector 2308010408{code}
*broker server.log:*
{code:java}
2021-02-16 23:24:24,965 | ERROR | [data-plane-kafka-request-handler-19] | [ReplicaManager broker=7] Error processing append operation on x-0 | kafka.server.ReplicaManager (Logging.scala:76)
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
	at java.util.zip.Inflater.<init>(Inflater.java:102)
	at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:77)
	at org.apache.kafka.common.record.CompressionType$2.wrapForInput(CompressionType.java:69)
	at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:265)
	at org.apache.kafka.common.record.DefaultRecordBatch.iterator(DefaultRecordBatch.java:332)
	at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:54)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at kafka.log.LogValidator$$anonfun$validateMessagesAndAssignOffsetsCompressed$1.apply(LogValidator.scala:267)
	at kafka.log.LogValidator$$anonfun$validateMessagesAndAssignOffsetsCompressed$1.apply(LogValidator.scala:259)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at 

[jira] [Created] (KAFKA-12493) The controller should handle the consistency between the controllerContext and the partition replicas assignment on zookeeper

2021-03-17 Thread Wenbing Shen (Jira)
Wenbing Shen created KAFKA-12493:


 Summary: The controller should handle the consistency between the 
controllerContext and the partition replicas assignment on zookeeper
 Key: KAFKA-12493
 URL: https://issues.apache.org/jira/browse/KAFKA-12493
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 2.7.0, 2.6.0, 2.5.0, 2.4.0, 2.3.0, 2.2.0, 2.1.0, 2.0.0
Reporter: Wenbing Shen
 Fix For: 3.0.0


This question can be linked to this email: 
[https://lists.apache.org/thread.html/redf5748ec787a9c65fc48597e3d2256ffdd729de14afb873c63e6c5b%40%3Cusers.kafka.apache.org%3E]

 

This problem reproduces 100% of the time.

Problem description:

In our customer's production environment, code maintained by colleagues in 
other departments redistributed the replicas of existing partitions and wrote 
the new assignment into zookeeper. When processing the resulting partition 
modification event, the controller only handled the newly added partitions: 
it ran only the new partitions and replicas through the partition state 
machine and replica state machine, and issued LeaderAndIsr and other control 
requests for them.

But the controller did not verify whether the existing partition replica 
assignment in the controllerContext still matched the partition assignment on 
the znode in zookeeper. This seems harmless at first, but when we have to 
restart the broker for some reason, such as a configuration update or an 
upgrade, the affected topics in real-time production become abnormal: the 
controller cannot complete the assignment of a new leader, and the original 
leader cannot correctly identify the replicas currently assigned in 
zookeeper. The real-time business in our customer's on-site environment was 
interrupted and some data was lost.

This problem can be reproduced reliably in the following way:

Adding partitions or modifying replicas of an existing topic through the 
following code causes the original partition replicas to be reallocated and 
finally written to zookeeper. Because the controller does not process this 
event accurately, restarting the brokers related to this topic leaves the 
topic unable to be produced to or consumed from.

 
{code:java}
public void updateKafkaTopic(KafkaTopicVO kafkaTopicVO) {
    ZkUtils zkUtils = ZkUtils.apply(ZK_LIST, SESSION_TIMEOUT,
            CONNECTION_TIMEOUT, JaasUtils.isZkSecurityEnabled());
    try {
        if (kafkaTopicVO.getPartitionNum() >= 0
                && kafkaTopicVO.getReplicationNum() >= 0) {
            // Get the original broker metadata information
            Seq<BrokerMetadata> brokerMetadata = AdminUtils.getBrokerMetadatas(zkUtils,
                    RackAwareMode.Enforced$.MODULE$,
                    Option.apply(null));
            // Generate a new partition replica allocation plan
            scala.collection.Map<Object, Seq<Object>> replicaAssign =
                    AdminUtils.assignReplicasToBrokers(brokerMetadata,
                            kafkaTopicVO.getPartitionNum(),    // number of partitions
                            kafkaTopicVO.getReplicationNum(),  // number of replicas per partition
                            -1,
                            -1);
            // Modify the partition replica allocation plan
            AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkUtils,
                    kafkaTopicVO.getTopicNameList().get(0),
                    replicaAssign,
                    null,
                    true);
        }
    } catch (Exception e) {
        System.out.println("Adjust partition abnormal");
        System.exit(0);
    } finally {
        zkUtils.close();
    }
}
{code}
 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12459) Improve raft simulation tests

2021-03-17 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12459.
-
Resolution: Fixed

> Improve raft simulation tests
> -
>
> Key: KAFKA-12459
> URL: https://issues.apache.org/jira/browse/KAFKA-12459
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> A couple of small suggestions to improve the event simulation tests (courtesy 
> of [~enether]):
> 1. When a test fails, ensure that the random number seed is displayed in the 
> output so that it can be reproduced.
> 2. Once we have done the first one, then using non-deterministic seeds would 
> be a good idea since we can get a bigger benefit out of repeated builds.
> 3. It is a bit painful today to reproduce failures since each test case runs 
> multiple random seeds. It would be helpful to have a convenient way to run a 
> test with a specific seed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12492) Formatting of example RocksDBConfigSetter is messed up

2021-03-17 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12492:
--

 Summary: Formatting of example RocksDBConfigSetter is messed up
 Key: KAFKA-12492
 URL: https://issues.apache.org/jira/browse/KAFKA-12492
 Project: Kafka
  Issue Type: Bug
  Components: documentation, streams
Reporter: A. Sophie Blee-Goldman


See the example implementation class CustomRocksDBConfig in the docs for the 
rocksdb.config.setter

https://kafka.apache.org/documentation/streams/developer-guide/config-streams.html#rocksdb-config-setter



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


RE: MirrorMaker 2.0 - Offset Sync - Questions/Improvements

2021-03-17 Thread Georg Friedrich
Hi Ryanne,

thank you for your response.

1) Yes, right - I'd missed the condition that is part of the PartitionState 
class. Thanks for pointing that out. :)

2) OK, how should I proceed? Shall I create a ticket, a KIP, or even both? 
From my point of view this is not a major change, but people relying on those 
topics always being created may see it differently.

Kind regards
Georg Friedrich

-Original Message-
From: Ryanne Dolan  
Sent: Wednesday, March 17, 2021 4:57 AM
To: dev 
Subject: Re: MirrorMaker 2.0 - Offset Sync - Questions/Improvements

Georg, sorry for the delay, but hopefully I can still help.

1) I think the detail you're missing is that the offset syncs are very sparse. 
Normally, you only get a new sync when the Connector first starts running. You 
are right that it is possible for a consumer to lag behind the most recent 
offset sync, but that will be a rare, transient condition, e.g.
when the Connector first starts running.
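The translation rule being discussed can be sketched like this. (A
hypothetical simplification, not MirrorMaker 2's actual code; the real logic
lives in OffsetSyncStore in Kafka's connect/mirror module, linked in the
original question.)

```java
// Hypothetical sketch: an offset sync maps one upstream (source-cluster)
// offset to its downstream (target-cluster) counterpart. Consumer offsets
// at or past the sync can be translated; offsets behind it cannot.
final class OffsetSyncSketch {
    final long upstreamOffset;   // source-cluster offset at sync time
    final long downstreamOffset; // matching target-cluster offset

    OffsetSyncSketch(long upstreamOffset, long downstreamOffset) {
        this.upstreamOffset = upstreamOffset;
        this.downstreamOffset = downstreamOffset;
    }

    // Returns -1 when the consumer lags behind the last sync (the rare,
    // transient condition discussed above); otherwise the translated offset.
    long translateDownstream(long consumerUpstreamOffset) {
        if (consumerUpstreamOffset < upstreamOffset) {
            return -1L; // no translation until the consumer catches up
        }
        return downstreamOffset + (consumerUpstreamOffset - upstreamOffset);
    }

    public static void main(String[] args) {
        OffsetSyncSketch sync = new OffsetSyncSketch(100L, 40L);
        System.out.println(sync.translateDownstream(110L)); // 50
        System.out.println(sync.translateDownstream(90L));  // -1
    }
}
```

Because syncs are sparse, a consumer can briefly sit below the last sync and
get no checkpoint, which is exactly the transient condition described above.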

2) I think you are right -- disabling checkpoints probably should also prevent 
those topics from being created. I'd support that change.

Ryanne

On Fri, Feb 26, 2021, 4:24 PM Georg Friedrich 
wrote:

> Hi,
>
> recently I've started to look deeper into the code of MirrorMaker 2.0 
> and was faced with some confusing details. Maybe you can point me into 
> a right direction here.
>
>
>   *   The line at
> https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/OffsetSyncStore.java#L52
> checks whether the offsets that get translated are
> smaller than the last offset sync.
> If this is the case, no translation happens. But I'm confused here: 
> isn't this a potential issue? What if some consumers are slow at 
> processing messages from Kafka and fall behind the offset sync process 
> of the MirrorMaker? In this case the MirrorMaker would stop translating 
> their offsets. Am I missing something here, or is this really broken?
>   *   I'm wondering: one is able to deactivate emitting checkpoints to the
> target cluster. But when this happens, the offset sync topic is still 
> written to the source cluster. Why is that? As far as I can see, the 
> only consumer of the offset sync topic is the checkpoint connector, so 
> the whole offset sync production could be deactivated entirely when 
> emitting checkpoints is disabled. Or is there again something that I 
> miss? If not, is this worth a KIP?
>
> Thanks in advance for your answers and help.
>
> Kind regards
> Georg Friedrich
>


[jira] [Created] (KAFKA-12491) RocksDB not being pulled in as a transitive dependency

2021-03-17 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12491:
--

 Summary: RocksDB not being pulled in as a transitive dependency
 Key: KAFKA-12491
 URL: https://issues.apache.org/jira/browse/KAFKA-12491
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: A. Sophie Blee-Goldman






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #608

2021-03-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix failing ZooKeeper system tests (#10297)


--
[...truncated 7.38 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #636

2021-03-17 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12490) Forwarded requests should use timeout from request when possible

2021-03-17 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12490:
---

 Summary: Forwarded requests should use timeout from request when 
possible
 Key: KAFKA-12490
 URL: https://issues.apache.org/jira/browse/KAFKA-12490
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Boyang Chen


Currently, forwarded requests time out according to the broker configuration 
`request.timeout.ms`. However, some requests, such as CreateTopics and 
DeleteTopics, have their own timeouts as part of the request object. We 
should try to use these timeouts when possible.
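A minimal sketch of the proposed behavior (a hypothetical helper, not the
actual broker code):

```java
import java.util.OptionalInt;

// Hypothetical illustration of the fix: prefer a timeout carried in the
// request itself (CreateTopics/DeleteTopics requests carry one) and fall
// back to the broker-wide request.timeout.ms only when the request has none.
public class ForwardTimeoutSketch {

    static int effectiveTimeoutMs(OptionalInt requestTimeoutMs, int brokerRequestTimeoutMs) {
        return requestTimeoutMs.orElse(brokerRequestTimeoutMs);
    }

    public static void main(String[] args) {
        // A CreateTopics-style request carrying its own 5s timeout wins
        System.out.println(effectiveTimeoutMs(OptionalInt.of(5000), 30000));
        // A request without its own timeout falls back to the broker default
        System.out.println(effectiveTimeoutMs(OptionalInt.empty(), 30000));
    }
}
```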



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #578

2021-03-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix client_compatibility_features_test.py (#10292)

[github] MINOR: fix failing ZooKeeper system tests (#10297)


--
[...truncated 3.67 MB...]
LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > 

Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-17 Thread Guozhang Wang
SGTM for going back to encoding the full names too --- flexibility wins
here, and if users do hit the byte limits, they'd probably consider giving
some shorter names anyway :)

Guozhang

On Wed, Mar 17, 2021 at 11:25 AM Levani Kokhreidze 
wrote:

> Hi Sophie,
>
> No worries! And thanks for taking a look.
> I’ve updated the KIP.
>
> Will wait some time for any additional feedback that might arise.
>
> Best,
> Levani
>
> > On 17. Mar 2021, at 19:11, Sophie Blee-Goldman
>  wrote:
> >
> > Ah, sorry for hijacking the VOTE thread :(
> >
> > Limiting the tag length and total amount of tags specified are already
> part
> >> of the implementation I work on. Assuming that
> >
> > encoding a limited number of strings is acceptable, I think it's the most
> >> straightforward way to move forward. Any objections?
> >
> >
> > This sounds good to me -- I imagine most users probably only need a
> handful
> > of tags anyway. If someone is bumping up
> > against the limit and has a valid use case, we can always increase it.
> >
> > One last minor thing -- if we're going to encode the full tag names,
> then I
> > think we can leave out the "version" field from the
> > ClientTag struct. If we ever want to modify this struct, we should do so
> by
> > bumping the overall SubscriptionInfo protocol version.
> > This way we have one fewer version in the mix, and we get all the
> benefits
> > of version probing already baked in -- which
> > means we can modify the protocol however we like without worrying about
> > compatibility. For example this gives us the flexibility
> > to go back to some kind of encoding if need be (although I don't expect
> to
> > need to).
> >
> > If everyone else is on board with the current KIP, I'm +1 (binding) --
> > thanks for the proposal Levani!
> >
> > Cheers,
> > Sophie
> >
> >
> > On Wed, Mar 17, 2021 at 7:54 AM Levani Kokhreidze <
> levani.co...@gmail.com>
> > wrote:
> >
> >> Hi Sophie and Bruno,
> >>
> >> Thanks for the questions and suggestions.
> >> Not sure if it's best to discuss this in the DISCUSSION thread, but I
> will
> >> write it here first, and if it needs more discussion, we can move to the
> >> DISCUSSION thread.
> >> Actually, in the implementation, I have a version field in the ClientTag
> >> struct. I assumed that all structs must have versions and that it's an
> >> implicit requirement; therefore, I left it out of the KIP (fixed).
> >> I'm okay with changing the name to "rack.aware.assignment.tags" (fixed).
> >> As for upgrades and evolving tags, good question; we must try to make it
> >> as flexible as possible. Good catch that with encoding, changing the tags
> >> may be problematic, especially changing the tags' order. One other way
> >> around it may be to change the "rack.aware.assignment.tags" config so
> >> that users can specify the tag index, for instance:
> >> rack.aware.assignment.tags.0: cluster, rack.aware.assignment.tags.1: zone.
> >> But that configuration is uglier and more complicated (and easier to get
> >> wrong). Limiting the tag length and the total number of tags specified is
> >> already part of the implementation I'm working on. Assuming that encoding
> >> a limited number of strings is acceptable, I think it's the most
> >> straightforward way to move forward. Any objections?
> >> I've updated KIP [1] with the latest discussion points and reverted the
> >> "encoding tag keys" part (sorry Guozhang, I haven't really thought about
> >> this potential edge-case, and thanks, Sophie, for catching it).
> >>
> >> I am looking forward to your feedback.
> >>
> >> Best,
> >> Levani
> >>
> >> [1] -
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
> >>
> >>> On 16. Mar 2021, at 23:30, Bruno Cadonna 
> >> wrote:
> >>>
> >>> Sophie
> >>
> >>
>
>

-- 
-- Guozhang
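The configuration under vote can be sketched with plain client properties. A minimal sketch, assuming the `client.tag.` key prefix, the tag values, and the `MAX_TAGS` cap shown here (the final names and limits are up to the KIP):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class RackAwareTagsSketch {
    // Illustrative cap reflecting "limit the total number of tags"; assumed value.
    static final int MAX_TAGS = 5;

    public static void main(String[] args) {
        Properties props = new Properties();
        // Tag key/value pairs; the "client.tag." prefix and values are assumptions.
        props.put("client.tag.cluster", "k8s-cluster-1");
        props.put("client.tag.zone", "eu-central-1a");
        // Config name from the KIP discussion: lists which tags drive assignment.
        props.put("rack.aware.assignment.tags", "cluster,zone");

        List<String> tags =
            Arrays.asList(props.getProperty("rack.aware.assignment.tags").split(","));
        if (tags.size() > MAX_TAGS) {
            throw new IllegalArgumentException("too many tags: " + tags.size());
        }
        System.out.println(tags); // prints [cluster, zone]
    }
}
```

Encoding the full names (rather than indices) keeps the config order-independent, which is the flexibility point above.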


Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #67

2021-03-17 Thread Apache Jenkins Server
See 


Changes:

[Colin McCabe] MINOR: fix client_compatibility_features_test.py (#10292)

[Colin McCabe] MINOR: fix failing ZooKeeper system tests (#10297)


--
[...truncated 3.60 MB...]

KafkaTest > testZkSslKeyStoreType() PASSED

KafkaTest > testZkSslOcspEnable() STARTED

KafkaTest > testZkSslOcspEnable() PASSED

KafkaTest > testConnectionsMaxReauthMsDefault() STARTED

KafkaTest > testConnectionsMaxReauthMsDefault() PASSED

KafkaTest > testZkSslTrustStoreLocation() STARTED

KafkaTest > testZkSslTrustStoreLocation() PASSED

KafkaTest > testZkSslEnabledProtocols() STARTED

KafkaTest > testZkSslEnabledProtocols() PASSED

KafkaTest > testKafkaSslPasswords() STARTED

KafkaTest > testKafkaSslPasswords() PASSED

KafkaTest > testGetKafkaConfigFromArgs() STARTED

KafkaTest > testGetKafkaConfigFromArgs() PASSED

KafkaTest > testZkSslClientEnable() STARTED

KafkaTest > testZkSslClientEnable() PASSED

KafkaTest > testZookeeperTrustStorePassword() STARTED

KafkaTest > testZookeeperTrustStorePassword() PASSED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd() STARTED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd() PASSED

KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly() STARTED

KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly() PASSED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging() STARTED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging() PASSED

KafkaTest > testZkSslKeyStoreLocation() STARTED

KafkaTest > testZkSslKeyStoreLocation() PASSED

KafkaTest > testZkSslCrlEnable() STARTED

KafkaTest > testZkSslCrlEnable() PASSED

KafkaTest > testZkSslEndpointIdentificationAlgorithm() STARTED

KafkaTest > testZkSslEndpointIdentificationAlgorithm() PASSED

KafkaTest > testZkSslTrustStoreType() STARTED

KafkaTest > testZkSslTrustStoreType() PASSED

KafkaMetadataLogTest > testMaxBatchSize() STARTED

KafkaMetadataLogTest > testMaxBatchSize() PASSED

KafkaMetadataLogTest > testFailToIncreaseLogStartPastHighWatermark() STARTED

KafkaMetadataLogTest > testFailToIncreaseLogStartPastHighWatermark() PASSED

KafkaMetadataLogTest > testCreateReplicatedLogTruncatesFully() STARTED

KafkaMetadataLogTest > testCreateReplicatedLogTruncatesFully() PASSED

KafkaMetadataLogTest > testCreateSnapshot() STARTED

KafkaMetadataLogTest > testCreateSnapshot() PASSED

KafkaMetadataLogTest > testUpdateLogStartOffset() STARTED

KafkaMetadataLogTest > testUpdateLogStartOffset() PASSED

KafkaMetadataLogTest > testUnexpectedAppendOffset() STARTED

KafkaMetadataLogTest > testUnexpectedAppendOffset() PASSED

KafkaMetadataLogTest > testCleanupSnapshots() STARTED

KafkaMetadataLogTest > testCleanupSnapshots() PASSED

KafkaMetadataLogTest > testTruncateFullyToLatestSnapshot() STARTED

KafkaMetadataLogTest > testTruncateFullyToLatestSnapshot() PASSED

KafkaMetadataLogTest > testDoesntTruncateFully() STARTED

KafkaMetadataLogTest > testDoesntTruncateFully() PASSED

KafkaMetadataLogTest > testUpdateLogStartOffsetWithMissingSnapshot() STARTED

KafkaMetadataLogTest > testUpdateLogStartOffsetWithMissingSnapshot() PASSED

KafkaMetadataLogTest > testReadMissingSnapshot() STARTED

KafkaMetadataLogTest > testReadMissingSnapshot() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED

RaftManagerTest > testUncaughtExceptionInIoThread() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@438d3de4, value = [B@41d3d54c), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@438d3de4, value = [B@41d3d54c), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@1570c9a9, value = [B@62bb0941), properties=Map(print.key -> true, 
print.value -> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] name=print key, 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #607

2021-03-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix client_compatibility_features_test.py (#10292)


--
[...truncated 3.69 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() 

Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-17 Thread Levani Kokhreidze
Hi Sophie,

No worries! And thanks for taking a look.
I’ve updated the KIP.

Will wait some time for any additional feedback that might arise.

Best,
Levani

> On 17. Mar 2021, at 19:11, Sophie Blee-Goldman  
> wrote:
> 
> Ah, sorry for hijacking the VOTE thread :(
> 
> Limiting the tag length and total amount of tags specified are already part
>> of the implementation I work on. Assuming that
> 
> encoding a limited number of strings is acceptable, I think it's the most
>> straightforward way to move forward. Any objections?
> 
> 
> This sounds good to me -- I imagine most users probably only need a handful
> of tags anyway. If someone is bumping up
> against the limit and has a valid use case, we can always increase it.
> 
> One last minor thing -- if we're going to encode the full tag names, then I
> think we can leave out the "version" field from the
> ClientTag struct. If we ever want to modify this struct, we should do so by
> bumping the overall SubscriptionInfo protocol version.
> This way we have one fewer version in the mix, and we get all the benefits
> of version probing already baked in -- which
> means we can modify the protocol however we like without worrying about
> compatibility. For example this gives us the flexibility
> to go back to some kind of encoding if need be (although I don't expect to
> need to).
> 
> If everyone else is on board with the current KIP, I'm +1 (binding) --
> thanks for the proposal Levani!
> 
> Cheers,
> Sophie
> 
> 
> On Wed, Mar 17, 2021 at 7:54 AM Levani Kokhreidze 
> wrote:
> 
>> Hi Sophie and Bruno,
>> 
>> Thanks for the questions and suggestions.
>> Not sure if it's best to discuss this in the DISCUSSION thread, but I will
>> write it here first, and if it needs more discussion, we can move to the
>> DISCUSSION thread.
>> Actually, in the implementation, I have a version field in the ClientTag
>> struct. I assumed that all structs must have versions and that it's an
>> implicit requirement; therefore, I left it out of the KIP (fixed).
>> I'm okay with changing the name to "rack.aware.assignment.tags" (fixed).
>> As for upgrades and evolving tags, good question; we must try to make it as
>> flexible as possible. Good catch that with encoding, changing the tags may
>> be problematic, especially changing the tags' order. One other way around
>> it may be to change the "rack.aware.assignment.tags" config so that users
>> can specify the tag index, for instance: rack.aware.assignment.tags.0:
>> cluster, rack.aware.assignment.tags.1: zone. But that configuration is
>> uglier and more complicated (and easier to get wrong). Limiting the tag
>> length and the total number of tags specified is already part of the
>> implementation I'm working on. Assuming that encoding a limited number of
>> strings is acceptable, I think it's the most straightforward way to move
>> forward. Any objections?
>> I've updated KIP [1] with the latest discussion points and reverted the
>> "encoding tag keys" part (sorry Guozhang, I haven't really thought about
>> this potential edge-case, and thanks, Sophie, for catching it).
>> 
>> I am looking forward to your feedback.
>> 
>> Best,
>> Levani
>> 
>> [1] -
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
>> 
>>> On 16. Mar 2021, at 23:30, Bruno Cadonna 
>> wrote:
>>> 
>>> Sophie
>> 
>> 



Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-17 Thread Sophie Blee-Goldman
Ah, sorry for hijacking the VOTE thread :(

Limiting the tag length and total amount of tags specified are already part
> of the implementation I work on. Assuming that

encoding a limited number of strings is acceptable, I think it's the most
> straightforward way to move forward. Any objections?


This sounds good to me -- I imagine most users probably only need a handful
of tags anyway. If someone is bumping up
against the limit and has a valid use case, we can always increase it.

One last minor thing -- if we're going to encode the full tag names, then I
think we can leave out the "version" field from the
ClientTag struct. If we ever want to modify this struct, we should do so by
bumping the overall SubscriptionInfo protocol version.
This way we have one fewer version in the mix, and we get all the benefits
of version probing already baked in -- which
means we can modify the protocol however we like without worrying about
compatibility. For example this gives us the flexibility
to go back to some kind of encoding if need be (although I don't expect to
need to).

If everyone else is on board with the current KIP, I'm +1 (binding) --
thanks for the proposal Levani!

Cheers,
Sophie


On Wed, Mar 17, 2021 at 7:54 AM Levani Kokhreidze 
wrote:

> Hi Sophie and Bruno,
>
> Thanks for the questions and suggestions.
> Not sure if it's best to discuss this in the DISCUSSION thread, but I will
> write it here first, and if it needs more discussion, we can move to the
> DISCUSSION thread.
> Actually, in the implementation, I have a version field in the ClientTag
> struct. I assumed that all structs must have versions and that it's an
> implicit requirement; therefore, I left it out of the KIP (fixed).
> I'm okay with changing the name to "rack.aware.assignment.tags" (fixed).
> As for upgrades and evolving tags, good question; we must try to make it as
> flexible as possible. Good catch that with encoding, changing the tags may
> be problematic, especially changing the tags' order. One other way around
> it may be to change the "rack.aware.assignment.tags" config so that users
> can specify the tag index, for instance: rack.aware.assignment.tags.0:
> cluster, rack.aware.assignment.tags.1: zone. But that configuration is
> uglier and more complicated (and easier to get wrong). Limiting the tag
> length and the total number of tags specified is already part of the
> implementation I'm working on. Assuming that encoding a limited number of
> strings is acceptable, I think it's the most straightforward way to move
> forward. Any objections?
> I've updated KIP [1] with the latest discussion points and reverted the
> "encoding tag keys" part (sorry Guozhang, I haven't really thought about
> this potential edge-case, and thanks, Sophie, for catching it).
>
> I am looking forward to your feedback.
>
> Best,
> Levani
>
> [1] -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams
>
> > On 16. Mar 2021, at 23:30, Bruno Cadonna 
> wrote:
> >
> > Sophie
>
>
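Sophie's suggestion — carry the full tag names on the wire and rely on the single overall SubscriptionInfo protocol version instead of a per-struct version — can be sketched with plain Java I/O. The field layout, version number, and tag values below are illustrative assumptions, not the actual Streams wire format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class TagEncodingSketch {
    // Assumed overall protocol version; bumping this evolves the whole format.
    static final int PROTOCOL_VERSION = 10;

    static byte[] encode(Map<String, String> tags) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(PROTOCOL_VERSION); // one overall version, no per-struct versions
        out.writeInt(tags.size());
        for (Map.Entry<String, String> e : tags.entrySet()) {
            out.writeUTF(e.getKey());   // full tag name, as agreed in the thread
            out.writeUTF(e.getValue());
        }
        return bos.toByteArray();
    }

    static Map<String, String> decode(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int version = in.readInt();
        if (version != PROTOCOL_VERSION) {
            throw new IOException("unsupported protocol version: " + version);
        }
        int n = in.readInt();
        Map<String, String> tags = new LinkedHashMap<>();
        for (int i = 0; i < n; i++) {
            tags.put(in.readUTF(), in.readUTF());
        }
        return tags;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("cluster", "k8s-cluster-1");
        tags.put("zone", "eu-central-1a");
        System.out.println(decode(encode(tags))); // round-trips the tag map
    }
}
```

Because keys and values are self-describing strings, reordering or renaming tags between releases does not break decoding — the property that motivated going back to full names.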


[DISCUSS] KIP-723: Add socket.tcp.no.delay property to Kafka Config

2021-03-17 Thread Andrei Iatsuk
Hello everyone, 

I would like to start a discussion on KIP-723, which proposes adding a Kafka 
config property for the TCP_NODELAY socket option flag, which is currently 
hardcoded to true.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-723%3A+Add+socket.tcp.no.delay+property+to+Kafka+Config

Best regards,
Andrei Iatsuk
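For context, TCP_NODELAY disables Nagle's algorithm, trading batching efficiency for lower per-packet latency. A minimal plain-Java illustration of the socket option in question (independent of Kafka's networking code):

```java
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

public class TcpNoDelayDemo {
    public static void main(String[] args) throws IOException {
        try (SocketChannel ch = SocketChannel.open()) {
            // Kafka currently enables the equivalent of this unconditionally;
            // KIP-723 proposes exposing it as a broker/client config.
            ch.setOption(StandardSocketOptions.TCP_NODELAY, true);
            System.out.println(ch.getOption(StandardSocketOptions.TCP_NODELAY)); // true
        }
    }
}
```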

Inquiry about usage of Kafka Compression

2021-03-17 Thread Loren Abigail Sion
Good day,


We're currently in the process of implementing our application with the Kafka
compression type ZStandard (zstd).

However, during testing, the consumer encountered this error:

 [ERROR] (consumer-1)
org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer
   - Container exception
org.apache.kafka.common.KafkaException: Received exception when fetching
the next record from dp.---. If needed, please seek past
the record to continue consumption.
at
org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1228)
at
org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:1096)
at
org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:544)
at
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:505)
at
org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1225)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1188)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1123)
at
org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:532)
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.KafkaException:
java.lang.ExceptionInInitializerError: Cannot unpack libzstd-jni

*Here's the version for the producer and consumer:*

Producer Kafka Client Version (using ZStandard compression): 2.5.1
Consumer Kafka Client Version: 2.1.0

Could you help us identify what caused this error? Do we need to upgrade
the version on the consumer side?


Best Regards,

Loren Sion
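Enabling zstd on the producer is a single configuration change; the sketch below uses a placeholder broker address and only standard-library types. Zstd support was added in Apache Kafka 2.1.0 (KIP-110), so brokers and consumers older than that cannot read such batches; the `Cannot unpack libzstd-jni` error above, though, suggests the bundled native zstd library could not be extracted at runtime, which may be an environment issue rather than a version mismatch.

```java
import java.util.Properties;

public class ZstdProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder address
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        // zstd compression (KIP-110); requires brokers/consumers on 2.1.0+
        props.put("compression.type", "zstd");
        System.out.println(props.getProperty("compression.type")); // zstd
    }
}
```

These properties would normally be passed to `new KafkaProducer<>(props)`; that call is omitted here so the sketch stays self-contained.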


Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-17 Thread Levani Kokhreidze
Hi Sophie and Bruno,

Thanks for the questions and suggestions.
Not sure if it's best to discuss this in the DISCUSSION thread, but I will 
write it here first, and if it needs more discussion, we can move to the 
DISCUSSION thread.
Actually, in the implementation, I have a version field in the ClientTag 
struct. I assumed that all structs must have versions and that it's an 
implicit requirement; therefore, I left it out of the KIP (fixed).
I'm okay with changing the name to "rack.aware.assignment.tags" (fixed).
As for upgrades and evolving tags, good question; we must try to make it as 
flexible as possible. Good catch that with encoding, changing the tags may be 
problematic, especially changing the tags' order. One other way around it may 
be to change the "rack.aware.assignment.tags" config so that users can 
specify the tag index, for instance: rack.aware.assignment.tags.0: cluster, 
rack.aware.assignment.tags.1: zone. But that configuration is uglier and more 
complicated (and easier to get wrong). Limiting the tag length and the total 
number of tags specified is already part of the implementation I'm working 
on. Assuming that encoding a limited number of strings is acceptable, I think 
it's the most straightforward way to move forward. Any objections?
I've updated KIP [1] with the latest discussion points and reverted the 
"encoding tag keys" part (sorry Guozhang, I haven't really thought about this 
potential edge-case, and thanks, Sophie, for catching it).

I am looking forward to your feedback.

Best,
Levani

[1] - 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams

> On 16. Mar 2021, at 23:30, Bruno Cadonna  wrote:
> 
> Sophie



Re: Kafka logos and branding

2021-03-17 Thread Jun Rao
Hi, Justin,

Thanks for the suggestions. Will follow up.

Jun

On Tue, Mar 16, 2021 at 5:49 PM Justin Mclean  wrote:

> Hi,
>
> As I mentioned on the dev list I can see the logos for the Kafka project
> are out of date. [1] Just a suggestion - which you are free to ignore. From
> a branding and trademark perspective it's going to help the project if 3rd
> parties have access to the correct logos. Having something on your
> trademark page [2] might help people use the right logos.
>
> Thanks,
> Justin
>
> 1. https://apache.org/logos/#kafka
> 2. https://kafka.apache.org/trademark
>


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-17 Thread Chia-Ping Tsai
Congratulations!!!

On 2021/03/15 17:59:56, Mickael Maison  wrote: 
> Hi all,
> 
> The PMC for Apache Kafka has invited Tom Bentley as a committer, and
> we are excited to announce that he accepted!
> 
> Tom first contributed to Apache Kafka in June 2017 and has been
> actively contributing since February 2020.
> He has accumulated 52 commits and worked on a number of KIPs. Here are
> some of the most significant ones:
>KIP-183: Change PreferredReplicaLeaderElectionCommand to use AdminClient
>KIP-195: AdminClient.createPartitions
>KIP-585: Filter and Conditional SMTs
>KIP-621: Deprecate and replace DescribeLogDirsResult.all() and .values()
>KIP-707: The future of KafkaFuture (still in discussion)
> 
> In addition, he is very active on the mailing list and has helped
> review many KIPs.
> 
> Congratulations Tom and thanks for all the contributions!
> 


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #606

2021-03-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: remove some specifying types in tool command (#10329)

[github] MINOR: Remove redundant allows in import-control.xml (#10339)


--
[...truncated 7.38 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > 

Re: [VOTE] KIP-717: Deprecate batch-size config from console producer

2021-03-17 Thread Manikumar
Hi Kamal,

It looks like we just forgot this config when we removed the old producer
code. I think we don't require a KIP for this;
we can fix it directly with a minor PR.

Thanks.

On Wed, Mar 17, 2021 at 7:02 PM Dongjin Lee  wrote:

> +1. (non-binding)
>
> Thanks,
> Dongjin
>
> On Thu, Mar 11, 2021 at 5:52 PM Manikumar 
> wrote:
>
> > +1 (binding). Thanks for the KIP
> > I think we can remove the config option as the config option is unused.
> >
> > On Wed, Mar 10, 2021 at 3:06 PM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I'd like to start a vote on KIP-717 to remove batch-size config from
> the
> > > console producer.
> > >
> > > https://cwiki.apache.org/confluence/x/DB1RCg
> > >
> > > Thanks,
> > > Kamal
> > >
> >
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>


Re: [VOTE] KIP-717: Deprecate batch-size config from console producer

2021-03-17 Thread Dongjin Lee
+1. (non-binding)

Thanks,
Dongjin

On Thu, Mar 11, 2021 at 5:52 PM Manikumar  wrote:

> +1 (binding). Thanks for the KIP
> I think we can remove the config option as the config option is unused.
>
> On Wed, Mar 10, 2021 at 3:06 PM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > Hi,
> >
> > I'd like to start a vote on KIP-717 to remove batch-size config from the
> > console producer.
> >
> > https://cwiki.apache.org/confluence/x/DB1RCg
> >
> > Thanks,
> > Kamal
> >
>
-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Re: [DISCUSS] KIP-720: Deprecate MirrorMaker v1

2021-03-17 Thread Dongjin Lee
+1. Thanks for the proposal.

Best,
Dongjin

On Wed, Mar 17, 2021 at 9:59 PM Ryanne Dolan  wrote:

> Ben, the documentation was recently updated to include a new geo-replication
> section which covers MM2. Actually, I think MM1 is no longer mentioned
> anywhere, and the documentation refers to MM2 as just Mirror Maker now.
>
> Ryanne
>
> On Wed, Mar 17, 2021, 6:08 AM Ben Stopford 
> wrote:
>
> > What about the documentation? Currently, there doesn't seem to be much
> > documentation around Mirror Maker 2. Is there a plan to address that?
> >
> > On Wed, 17 Mar 2021 at 10:56, Manikumar 
> wrote:
> >
> > > +1. Thanks for the KIP.
> > >
> > >
> > > On Sun, Mar 14, 2021 at 12:24 PM Ryanne Dolan 
> > > wrote:
> > >
> > > > Hey y'all, I'd like to start the discussion on KIP-720, which
> proposes
> > to
> > > > deprecate the original MirrorMaker in the upcoming 3.0 major release.
> > > >
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-720%3A+Deprecate+MirrorMaker+v1
> > > >
> > > > Thanks!
> > > > Ryanne
> > > >
> > >
> >
> >
> > --
> >
> > Ben Stopford
> >
>
-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #635

2021-03-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: remove some specifying types in tool command (#10329)

[github] MINOR: Remove redundant allows in import-control.xml (#10339)


--
[...truncated 3.69 MB...]

AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl() PASSED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
STARTED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() PASSED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
STARTED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
PASSED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() STARTED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
PASSED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() STARTED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() PASSED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
STARTED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
PASSED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
STARTED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
PASSED

AuthorizerIntegrationTest > testListTransactionsAuthorization() STARTED

AuthorizerIntegrationTest > testListTransactionsAuthorization() PASSED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() STARTED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() PASSED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() STARTED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() PASSED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
STARTED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
PASSED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testClose() STARTED

SslProducerSendTest > testClose() PASSED

SslProducerSendTest > testFlush() STARTED

SslProducerSendTest > testFlush() PASSED

SslProducerSendTest > testSendToPartition() STARTED

SslProducerSendTest > testSendToPartition() PASSED

SslProducerSendTest > testSendOffset() STARTED

SslProducerSendTest > testSendOffset() PASSED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() PASSED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() STARTED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() PASSED

ProducerCompressionTest > [1] compression=none STARTED

ProducerCompressionTest > [1] compression=none PASSED

ProducerCompressionTest > [2] compression=gzip STARTED

ProducerCompressionTest > [2] compression=gzip PASSED

ProducerCompressionTest > [3] compression=snappy STARTED

ProducerCompressionTest > [3] compression=snappy PASSED

ProducerCompressionTest > [4] compression=lz4 STARTED

ProducerCompressionTest > [4] compression=lz4 PASSED

ProducerCompressionTest > [5] compression=zstd STARTED

ProducerCompressionTest > [5] compression=zstd PASSED

MetricsTest > testMetrics() STARTED

MetricsTest > testMetrics() PASSED

ProducerFailureHandlingTest > testCannotSendToInternalTopic() STARTED

ProducerFailureHandlingTest > testCannotSendToInternalTopic() PASSED


Re: [DISCUSS] KIP-720: Deprecate MirrorMaker v1

2021-03-17 Thread Ryanne Dolan
Ben, the documentation was recently updated to include a new geo-replication
section which covers MM2. Actually, I think MM1 is no longer mentioned
anywhere, and the documentation refers to MM2 as just Mirror Maker now.

Ryanne

On Wed, Mar 17, 2021, 6:08 AM Ben Stopford  wrote:

> What about the documentation? Currently, there doesn't seem to be much
> documentation around Mirror Maker 2. Is there a plan to address that?
>
> On Wed, 17 Mar 2021 at 10:56, Manikumar  wrote:
>
> > +1. Thanks for the KIP.
> >
> >
> > On Sun, Mar 14, 2021 at 12:24 PM Ryanne Dolan 
> > wrote:
> >
> > > Hey y'all, I'd like to start the discussion on KIP-720, which proposes
> to
> > > deprecate the original MirrorMaker in the upcoming 3.0 major release.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-720%3A+Deprecate+MirrorMaker+v1
> > >
> > > Thanks!
> > > Ryanne
> > >
> >
>
>
> --
>
> Ben Stopford
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #577

2021-03-17 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: remove some specifying types in tool command (#10329)

[github] MINOR: Remove redundant allows in import-control.xml (#10339)


--
[...truncated 3.67 MB...]
LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED


Re: Why are Javadocs published on downloads.apache.org?

2021-03-17 Thread sebb
On Wed, 17 Mar 2021 at 11:56, Tom Bentley  wrote:
>
> Hi Sebb,
>
> I think this has been addressed for future releases by
> https://github.com/apache/kafka/pull/10203.

Thanks for the info. I hope that fixes it.

> AFAIU removing the Javadocs for
> previous releases would need to be done by someone on the PMC.

Yes, normally only PMC members can update the source at:

https://dist.apache.org/repos/dist/release/kafka/

If anyone from the PMC is available, could they please invoke:

svn rm https://dist.apache.org/repos/dist/release/kafka/2.6.1/javadoc/
and
svn rm https://dist.apache.org/repos/dist/release/kafka/2.7.0/javadoc/

Thanks,
Sebb.

> Kind regards,
>
> Tom
>
> On Wed, Mar 17, 2021 at 11:47 AM sebb  wrote:
>
> > PING?
> >
> > On Sat, 13 Mar 2021 at 15:40, sebb  wrote:
> > >
> > > Is anyone there?
> > >
> > > On Sat, 6 Mar 2021 at 11:33, sebb  wrote:
> > > >
> > > > As the subject says: AFAICT the Kafka project is the only project
> > > > which publishes Javadocs as individual files on downloads.a.o.
> > > >
> > > > What is the use case for this?
> > > > The Javadocs are already published on the Kafka website.
> > > >
> > > > The mirror system relies on volunteers, and not all of them have lots
> > > > of disk space, so it is important to minimise what is published.
> > > > Additional files also increase the network traffic when synching (this
> > > > is a minor consideration, but it's still a waste of resources if the
> > > > files are unnecessary.)
> > > >
> > > > Sebb.
> >
> >


Re: Why are Javadocs published on downloads.apache.org?

2021-03-17 Thread Tom Bentley
Hi Sebb,

I think this has been addressed for future releases by
https://github.com/apache/kafka/pull/10203. AFAIU removing the Javadocs for
previous releases would need to be done by someone on the PMC.

Kind regards,

Tom

On Wed, Mar 17, 2021 at 11:47 AM sebb  wrote:

> PING?
>
> On Sat, 13 Mar 2021 at 15:40, sebb  wrote:
> >
> > Is anyone there?
> >
> > On Sat, 6 Mar 2021 at 11:33, sebb  wrote:
> > >
> > > As the subject says: AFAICT the Kafka project is the only project
> > > which publishes Javadocs as individual files on downloads.a.o.
> > >
> > > What is the use case for this?
> > > The Javadocs are already published on the Kafka website.
> > >
> > > The mirror system relies on volunteers, and not all of them have lots
> > > of disk space, so it is important to minimise what is published.
> > > Additional files also increase the network traffic when synching (this
> > > is a minor consideration, but it's still a waste of resources if the
> > > files are unnecessary.)
> > >
> > > Sebb.
>
>


Re: Why are Javadocs published on downloads.apache.org?

2021-03-17 Thread sebb
PING?

On Sat, 13 Mar 2021 at 15:40, sebb  wrote:
>
> Is anyone there?
>
> On Sat, 6 Mar 2021 at 11:33, sebb  wrote:
> >
> > As the subject says: AFAICT the Kafka project is the only project
> > which publishes Javadocs as individual files on downloads.a.o.
> >
> > What is the use case for this?
> > The Javadocs are already published on the Kafka website.
> >
> > The mirror system relies on volunteers, and not all of them have lots
> > of disk space, so it is important to minimise what is published.
> > Additional files also increase the network traffic when synching (this
> > is a minor consideration, but it's still a waste of resources if the
> > files are unnecessary.)
> >
> > Sebb.


Re: request permission to create KIP

2021-03-17 Thread Andrei Iatsuk
Thanks, Manikumar!

> On 17 Mar 2021, at 14:00, Manikumar  wrote:
> 
> Hi,
> 
> I have given you the wiki permissions to create KIP. Thanks for your
> interest in the Kafka Project.
> 
> Thanks,
> 
> On Wed, Mar 17, 2021 at 6:40 AM Andrei Iatsuk  wrote:
> 
>> Hello!
>> 
>> I created improvement task
>> https://issues.apache.org/jira/browse/KAFKA-12481 and opened pull
>> request https://github.com/apache/kafka/pull/10333 that solves it.
>> ijuma (https://github.com/ijuma) says that according to
>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>> I should create a KIP.
>> 
>> So could you give me (https://cwiki.apache.org/confluence/display/~a.iatsuk)
>> permission to create a KIP?
>> 
>> Best regards,
>> Andrei Iatsuk.



[jira] [Created] (KAFKA-12489) Flaky test ControllerIntegrationTest.testPartitionReassignmentToBrokerWithOfflineLogDir

2021-03-17 Thread dengziming (Jira)
dengziming created KAFKA-12489:
--

 Summary: Flaky test 
ControllerIntegrationTest.testPartitionReassignmentToBrokerWithOfflineLogDir
 Key: KAFKA-12489
 URL: https://issues.apache.org/jira/browse/KAFKA-12489
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: dengziming


org.opentest4j.AssertionFailedError: expected:  but was: 
	at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
	at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
	at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:35)
	at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:162)
	at kafka.utils.TestUtils$.causeLogDirFailure(TestUtils.scala:1251)
	at kafka.controller.ControllerIntegrationTest.testPartitionReassignmentToBrokerWithOfflineLogDir(ControllerIntegrationTest.scala:329)

details:

[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10289/2/testReport/junit/kafka.controller/ControllerIntegrationTest/Build___JDK_11___testPartitionReassignmentToBrokerWithOfflineLogDir__/]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-720: Deprecate MirrorMaker v1

2021-03-17 Thread Ben Stopford
What about the documentation? Currently, there doesn't seem to be much
documentation around Mirror Maker 2. Is there a plan to address that?

On Wed, 17 Mar 2021 at 10:56, Manikumar  wrote:

> +1. Thanks for the KIP.
>
>
> On Sun, Mar 14, 2021 at 12:24 PM Ryanne Dolan 
> wrote:
>
> > Hey y'all, I'd like to start the discussion on KIP-720, which proposes to
> > deprecate the original MirrorMaker in the upcoming 3.0 major release.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-720%3A+Deprecate+MirrorMaker+v1
> >
> > Thanks!
> > Ryanne
> >
>


-- 

Ben Stopford


Re: request permission to create KIP

2021-03-17 Thread Manikumar
Hi,

I have given you the wiki permissions to create KIP. Thanks for your
interest in the Kafka Project.

Thanks,

On Wed, Mar 17, 2021 at 6:40 AM Andrei Iatsuk  wrote:

> Hello!
>
> I created improvement task
> https://issues.apache.org/jira/browse/KAFKA-12481 and opened pull
> request https://github.com/apache/kafka/pull/10333 that solves it.
> ijuma (https://github.com/ijuma) says that according to
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> I should create a KIP.
> 
> So could you give me (https://cwiki.apache.org/confluence/display/~a.iatsuk)
> permission to create a KIP?
>
> Best regards,
> Andrei Iatsuk.


Re: [DISCUSS] KIP-720: Deprecate MirrorMaker v1

2021-03-17 Thread Manikumar
+1. Thanks for the KIP.


On Sun, Mar 14, 2021 at 12:24 PM Ryanne Dolan  wrote:

> Hey y'all, I'd like to start the discussion on KIP-720, which proposes to
> deprecate the original MirrorMaker in the upcoming 3.0 major release.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-720%3A+Deprecate+MirrorMaker+v1
>
> Thanks!
> Ryanne
>
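
[Editorial aside: for readers unfamiliar with how a deprecation like this
lands in code, here is a minimal Java sketch. The class and method names
are invented for illustration; this is not the actual MirrorMaker source.]

```java
// Illustrative sketch only: neither class below exists in Kafka. It shows
// the mechanics a deprecation such as KIP-720 relies on: marking the legacy
// entry point @Deprecated so compiling against it emits a warning, while
// the tool keeps working until its eventual removal.
class LegacyMirrorTool {
    @Deprecated // hypothetical: users would be pointed to the Connect-based MirrorMaker 2
    static String run() {
        return "legacy mirroring path";
    }
}

public class DeprecationSketch {
    @SuppressWarnings("deprecation") // callers that must keep using it can silence the warning
    public static void main(String[] args) {
        System.out.println(LegacyMirrorTool.run());
    }
}
```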


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-17 Thread Tom Bentley
Thanks folks!

On Wed, Mar 17, 2021 at 1:21 AM Dongjin Lee  wrote:

> Congratulations, Tom! Your contributions are always great!!
>
> +1. Thanks for supporting KIP-653: Upgrade log4j to log4j2 again.
>
> Best,
> Dongjin
>
> On Tue, Mar 16, 2021 at 8:24 PM Rajini Sivaram 
> wrote:
>
> > Congratulations, Tom!
> >
> > Regards,
> >
> > Rajini
> >
> > On Tue, Mar 16, 2021 at 10:39 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Congratulations Tom!!
> > >
> > > On Tue, 16 Mar 2021 at 13:30, David Jacot  >
> > > wrote:
> > >
> > > > Congrats, Tom!
> > > >
> > > > On Tue, Mar 16, 2021 at 7:40 AM Kamal Chandraprakash <
> > > > kamal.chandraprak...@gmail.com> wrote:
> > > >
> > > > > Congrats, Tom!
> > > > >
> > > > > On Tue, Mar 16, 2021 at 8:32 AM Konstantine Karantasis
> > > > >  wrote:
> > > > >
> > > > > > Congratulations Tom!
> > > > > > Well deserved.
> > > > > >
> > > > > > Konstantine
> > > > > >
> > > > > > On Mon, Mar 15, 2021 at 4:52 PM Luke Chen 
> > wrote:
> > > > > >
> > > > > > > Congratulations!
> > > > > > >
> > > > > > > Federico Valeri  於 2021年3月16日 週二 上午4:11
> > 寫道:
> > > > > > >
> > > > > > > > Congrats, Tom!
> > > > > > > >
> > > > > > > > Well deserved.
> > > > > > > >
> > > > > > > > On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno <
> > ppatie...@live.com
> > > >
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Congratulations Tom!
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > 
> > > > > > > > > From: Guozhang Wang 
> > > > > > > > > Sent: Monday, March 15, 2021 8:02:44 PM
> > > > > > > > > To: dev 
> > > > > > > > > Subject: Re: [ANNOUNCE] New committer: Tom Bentley
> > > > > > > > >
> > > > > > > > > Congratulations Tom!
> > > > > > > > >
> > > > > > > > > Guozhang
> > > > > > > > >
> > > > > > > > > On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck
> > > > > >  > > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Congratulations, Tom!
> > > > > > > > > >
> > > > > > > > > > -Bill
> > > > > > > > > >
> > > > > > > > > > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
> > > > > > > >  > > > > > > > > >
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Congrats, Tom!
> > > > > > > > > > >
> > > > > > > > > > > Best,
> > > > > > > > > > > Bruno
> > > > > > > > > > >
> > > > > > > > > > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > > > > > > > > > Hi all,
> > > > > > > > > > > >
> > > > > > > > > > > > The PMC for Apache Kafka has invited Tom Bentley as a
> > > > > > committer,
> > > > > > > > and
> > > > > > > > > > > > we are excited to announce that he accepted!
> > > > > > > > > > > >
> > > > > > > > > > > > Tom first contributed to Apache Kafka in June 2017
> and
> > > has
> > > > > been
> > > > > > > > > > > > actively contributing since February 2020.
> > > > > > > > > > > > He has accumulated 52 commits and worked on a number
> of
> > > > KIPs.
> > > > > > > Here
> > > > > > > > > are
> > > > > > > > > > > > some of the most significant ones:
> > > > > > > > > > > > KIP-183: Change
> > PreferredReplicaLeaderElectionCommand
> > > > to
> > > > > > use
> > > > > > > > > > > AdminClient
> > > > > > > > > > > > KIP-195: AdminClient.createPartitions
> > > > > > > > > > > > KIP-585: Filter and Conditional SMTs
> > > > > > > > > > > > KIP-621: Deprecate and replace
> > > > > DescribeLogDirsResult.all()
> > > > > > > and
> > > > > > > > > > > .values()
> > > > > > > > > > > > KIP-707: The future of KafkaFuture (still in
> > > > discussion)
> > > > > > > > > > > >
> > > > > > > > > > > > In addition, he is very active on the mailing list
> and
> > > has
> > > > > > helped
> > > > > > > > > > > > review many KIPs.
> > > > > > > > > > > >
> > > > > > > > > > > > Congratulations Tom and thanks for all the
> > contributions!
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > -- Guozhang
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>