[jira] [Resolved] (KAFKA-12655) CVE-2021-28165 - Upgrade jetty to 9.4.39

2021-04-12 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12655.

Fix Version/s: 3.0.0
   Resolution: Fixed

> CVE-2021-28165 - Upgrade jetty to 9.4.39
> 
>
> Key: KAFKA-12655
> URL: https://issues.apache.org/jira/browse/KAFKA-12655
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Edwin Hobor
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
> Fix For: 3.0.0
>
>
> The *CVE-2021-28165* vulnerability affects Jetty versions up to *9.4.38*. For 
> more information see 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28165]. 
> Upgrading to Jetty version *9.4.39* should address this issue 
> (https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.39.v20210325).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12611) Fix using random payload in ProducerPerformance incorrectly

2021-04-12 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12611.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Fix using random payload in ProducerPerformance incorrectly
> ---
>
> Key: KAFKA-12611
> URL: https://issues.apache.org/jira/browse/KAFKA-12611
> Project: Kafka
>  Issue Type: Bug
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Major
> Fix For: 3.0.0
>
>
> In ProducerPerformance, the random payload is always the same, which has a 
> great impact when the compression.type option is used.
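
As an illustration of the impact (a hypothetical sketch, not the actual ProducerPerformance code): reusing one random payload makes every record identical and therefore unrealistically compressible, while filling fresh random bytes per record does not.

{code:java}
import java.util.Random;

public class PayloadSketch {
    public static void main(String[] args) {
        Random random = new Random();
        int recordSize = 1024;

        // Reusing one random payload: every record carries identical bytes,
        // which compresses unrealistically well when compression.type is set.
        byte[] fixedPayload = new byte[recordSize];
        random.nextBytes(fixedPayload);

        // Regenerating the payload per record keeps records distinct, giving a
        // more realistic measurement when compression is enabled.
        for (int i = 0; i < 3; i++) {
            byte[] perRecordPayload = new byte[recordSize];
            random.nextBytes(perRecordPayload);
            // producer.send(...) omitted in this sketch
        }
    }
}
{code}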



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12662) add unit test for ProducerPerformance

2021-04-12 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12662:
--

 Summary: add unit test for ProducerPerformance
 Key: KAFKA-12662
 URL: https://issues.apache.org/jira/browse/KAFKA-12662
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai


ProducerPerformance is a useful tool that offers an official way to test 
producer performance. Hence, it would be good to add adequate tests for it. (In 
fact, it currently has no unit tests.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #33

2021-04-12 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12661) ConfigEntry#equal does not compare other fields when value is NOT null

2021-04-12 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12661:
--

 Summary: ConfigEntry#equal does not compare other fields when 
value is NOT null 
 Key: KAFKA-12661
 URL: https://issues.apache.org/jira/browse/KAFKA-12661
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai


{code:java}
return this.name.equals(that.name) &&
    this.value != null ? this.value.equals(that.value) : that.value == null &&
    this.isSensitive == that.isSensitive &&
    this.isReadOnly == that.isReadOnly &&
    this.source == that.source &&
    Objects.equals(this.synonyms, that.synonyms);
{code}

Due to operator precedence, the else branch of the ternary operator is the 
whole expression "that.value == null && this.isSensitive == that.isSensitive && 
this.isReadOnly == that.isReadOnly && this.source == that.source && 
Objects.equals(this.synonyms, that.synonyms)" rather than just 
"that.value == null". Hence, the other fields are not compared when value is 
not null.
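
A minimal sketch of the corrected comparison (illustrative only; the actual fix may differ), adding parentheses so the ternary covers only the value check:

{code:java}
return this.name.equals(that.name) &&
    (this.value != null ? this.value.equals(that.value) : that.value == null) &&
    this.isSensitive == that.isSensitive &&
    this.isReadOnly == that.isReadOnly &&
    this.source == that.source &&
    Objects.equals(this.synonyms, that.synonyms);
{code}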



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #32

2021-04-12 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 405796 lines...]
[2021-04-13T03:15:47.636Z] > Task :clients:testJar
[2021-04-13T03:15:47.636Z] > Task :clients:testSrcJar
[2021-04-13T03:15:47.636Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2021-04-13T03:15:47.636Z] > Task :clients:publishToMavenLocal
[2021-04-13T03:15:47.636Z] 
[2021-04-13T03:15:47.636Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 7.0.
[2021-04-13T03:15:47.636Z] Use '--warning-mode all' to show the individual 
deprecation warnings.
[2021-04-13T03:15:47.636Z] See 
https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings
[2021-04-13T03:15:47.636Z] 
[2021-04-13T03:15:47.636Z] BUILD SUCCESSFUL in 25s
[2021-04-13T03:15:47.636Z] 68 actionable tasks: 33 executed, 35 up-to-date
[Pipeline] sh
[2021-04-13T03:15:49.981Z] 
[2021-04-13T03:15:49.982Z] GetOffsetShellTest > testPartitionsArg() PASSED
[2021-04-13T03:15:49.982Z] 
[2021-04-13T03:15:49.982Z] GetOffsetShellTest > 
testTopicPartitionsArgWithInternalExcluded() STARTED
[2021-04-13T03:15:50.316Z] + grep ^version= gradle.properties
[2021-04-13T03:15:50.316Z] + cut -d= -f 2
[Pipeline] dir
[2021-04-13T03:15:51.014Z] Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/streams/quickstart
[Pipeline] {
[Pipeline] sh
[2021-04-13T03:15:53.172Z] + mvn clean install -Dgpg.skip
[2021-04-13T03:15:54.113Z] [INFO] Scanning for projects...
[2021-04-13T03:15:54.881Z] 
[2021-04-13T03:15:54.881Z] GetOffsetShellTest > 
testTopicPartitionsArgWithInternalExcluded() PASSED
[2021-04-13T03:15:54.881Z] 
[2021-04-13T03:15:54.881Z] GetOffsetShellTest > testNoFilterOptions() STARTED
[2021-04-13T03:15:55.053Z] [INFO] 

[2021-04-13T03:15:55.053Z] [INFO] Reactor Build Order:
[2021-04-13T03:15:55.053Z] [INFO] 
[2021-04-13T03:15:55.053Z] [INFO] Kafka Streams :: Quickstart   
 [pom]
[2021-04-13T03:15:55.053Z] [INFO] streams-quickstart-java   
 [maven-archetype]
[2021-04-13T03:15:55.053Z] [INFO] 
[2021-04-13T03:15:55.053Z] [INFO] < 
org.apache.kafka:streams-quickstart >-
[2021-04-13T03:15:55.053Z] [INFO] Building Kafka Streams :: Quickstart 
3.0.0-SNAPSHOT[1/2]
[2021-04-13T03:15:55.053Z] [INFO] [ pom 
]-
[2021-04-13T03:15:55.053Z] [INFO] 
[2021-04-13T03:15:55.053Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart ---
[2021-04-13T03:15:55.053Z] [INFO] 
[2021-04-13T03:15:55.053Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart ---
[2021-04-13T03:15:55.053Z] [INFO] 
[2021-04-13T03:15:55.053Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart ---
[2021-04-13T03:15:55.994Z] [INFO] 
[2021-04-13T03:15:55.994Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart ---
[2021-04-13T03:15:55.994Z] [INFO] 
[2021-04-13T03:15:55.994Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart ---
[2021-04-13T03:15:55.994Z] [INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/streams/quickstart/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.0.0-SNAPSHOT/streams-quickstart-3.0.0-SNAPSHOT.pom
[2021-04-13T03:15:57.099Z] [INFO] 
[2021-04-13T03:15:57.099Z] [INFO] --< 
org.apache.kafka:streams-quickstart-java >--
[2021-04-13T03:15:57.099Z] [INFO] Building streams-quickstart-java 
3.0.0-SNAPSHOT[2/2]
[2021-04-13T03:15:57.099Z] [INFO] --[ maven-archetype 
]---
[2021-04-13T03:15:57.099Z] [INFO] 
[2021-04-13T03:15:57.099Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart-java ---
[2021-04-13T03:15:57.099Z] [INFO] 
[2021-04-13T03:15:57.099Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart-java ---
[2021-04-13T03:15:57.099Z] [INFO] 
[2021-04-13T03:15:57.099Z] [INFO] --- maven-resources-plugin:2.7:resources 
(default-resources) @ streams-quickstart-java ---
[2021-04-13T03:15:57.099Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-04-13T03:15:57.099Z] [INFO] Copying 6 resources
[2021-04-13T03:15:57.099Z] [INFO] Copying 3 resources
[2021-04-13T03:15:57.099Z] [INFO] 
[2021-04-13T03:15:57.099Z] [INFO] --- maven-resources-plugin:2.7:testResources 
(default-testResources) @ streams-quickstart-java ---
[2021-04-13T03:15:57.099Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-04-13T03:15:57.099Z] [INFO] Copying 2 resources
[2021-04-13T03:15:57.099Z] [INFO] Copying 3 resou

[jira] [Created] (KAFKA-12660) Do not update offset commit sensor after append failure

2021-04-12 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12660:
---

 Summary: Do not update offset commit sensor after append failure
 Key: KAFKA-12660
 URL: https://issues.apache.org/jira/browse/KAFKA-12660
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


In the append callback after writing an offset to the log in 
`GroupMetadataManager`, it seems wrong to update the offset commit sensor prior 
to checking for errors: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L394.
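
A rough sketch of the intended ordering (illustrative Java with hypothetical names; the actual GroupMetadataManager code is Scala):

{code:java}
import org.apache.kafka.common.protocol.Errors;

// Hypothetical illustration of the ordering; not the actual (Scala) GroupMetadataManager code.
public class OffsetCommitCallbackSketch {
    interface CommitMetric { void record(double value); }

    private final CommitMetric offsetCommitsSensor;

    OffsetCommitCallbackSketch(CommitMetric offsetCommitsSensor) {
        this.offsetCommitsSensor = offsetCommitsSensor;
    }

    void onOffsetAppendComplete(Errors appendError, int committedOffsetCount) {
        if (appendError == Errors.NONE) {
            // update the offset commit sensor only after a successful append
            offsetCommitsSensor.record(committedOffsetCount);
        } else {
            // the failure path should not touch the success sensor
            System.err.println("Offset append failed: " + appendError);
        }
    }
}
{code}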
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12637) Remove deprecated PartitionAssignor interface

2021-04-12 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-12637.

Resolution: Fixed

> Remove deprecated PartitionAssignor interface
> -
>
> Key: KAFKA-12637
> URL: https://issues.apache.org/jira/browse/KAFKA-12637
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: A. Sophie Blee-Goldman
>Assignee: dengziming
>Priority: Blocker
>  Labels: newbie, newbie++
> Fix For: 3.0.0
>
>
> In KIP-429, we deprecated the existing PartitionAssignor interface in order 
> to move it out of the internals package and better align the name with other 
> pluggable Consumer interfaces. We added an adapter to convert from existing 
> o.a.k.clients.consumer.internals.PartitionAssignor to the new 
> o.a.k.clients.consumer.ConsumerPartitionAssignor and support the deprecated 
> interface. This was deprecated in 2.4, so we should be ok to remove it and 
> the PartitionAssignorAdaptor in 3.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-8391) Flaky Test RebalanceSourceConnectorsIntegrationTest#testDeleteConnector

2021-04-12 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman reopened KAFKA-8391:
---

[~rhauch] [~kkonstantine] this test failed again -- based on the error message 
it looks like it may be a "real" failure this time, not environmental.

Stacktrace
java.lang.AssertionError: Tasks are imbalanced: 
localhost:35163=[seq-source11-0, seq-source11-3, seq-source10-1, seq-source12-1]
localhost:36961=[seq-source11-1, seq-source10-2, seq-source12-2]
localhost:39023=[seq-source11-2, seq-source10-0, seq-source10-3, 
seq-source12-0, seq-source12-3]
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.assertConnectorAndTasksAreUniqueAndBalanced(RebalanceSourceConnectorsIntegrationTest.java:365)
at 
org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:319)
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:367)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:316)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:300)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:290)
at 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testDeleteConnector(RebalanceSourceConnectorsIntegrationTest.java:213)


https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10512/5/testReport/org.apache.kafka.connect.integration/RebalanceSourceConnectorsIntegrationTest/Build___JDK_15_and_Scala_2_13___testDeleteConnector/

> Flaky Test RebalanceSourceConnectorsIntegrationTest#testDeleteConnector
> ---
>
> Key: KAFKA-8391
> URL: https://issues.apache.org/jira/browse/KAFKA-8391
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Randall Hauch
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
> Attachments: 100-gradle-builds.tar
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/4747/testReport/junit/org.apache.kafka.connect.integration/RebalanceSourceConnectorsIntegrationTest/testDeleteConnector/]
> {quote}java.lang.AssertionError: Condition not met within timeout 3. 
> Connector tasks did not stop in time. at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:375) at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:352) at 
> org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testDeleteConnector(RebalanceSourceConnectorsIntegrationTest.java:166){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12659) Mirrormaker 2 - seeking to wrong offsets on restart

2021-04-12 Thread stuart (Jira)
stuart created KAFKA-12659:
--

 Summary: Mirrormaker 2 - seeking to wrong offsets on restart
 Key: KAFKA-12659
 URL: https://issues.apache.org/jira/browse/KAFKA-12659
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.7.0
 Environment: Docker container based on openjdk11:alpine-slim, running 
on Amazon ECS
Reporter: stuart
 Attachments: partitions.png

We are running a dedicated mirror maker 2 cluster with three tasks, and have 
been trialing it for a few weeks on a single topic. It's been going fine, so we 
attempted to add a second topic, changing the MM2 config file from 

topics = sports

to 

topics = sports|translations 

 

We noticed the following day that replication of the new topic was not 
working. Reading online, it seems others have had similar issues, perhaps 
related to the config stored in the internal mm2-configs topic not refreshing 
from the file, so, following the recommendations in that discussion, we stopped 
the tasks for 10 minutes and eventually it started replicating.

However, we also noticed later that MM2 had started re-replicating about 5 
million records from earlier that day (from the original topic), which was 
concerning. A few hours later I restarted the MM2 tasks and the same thing 
happened: it started re-replicating the same old messages.

Looking into the mm2-offsets-\{source}.internal topic, I could see that the 
records which track offsets had switched partitions; for example, the records 
for the sports-7 topic-partition went from being written to partition 5 (in 
mm2-offsets) to partition 8. The same occurred for most, but not all, other 
partitions.

Following the task restarts, in the MM2 logs I can see that MM2 is always 
seeking to offset 42741034 for sports-7. This value matches the oldest offset 
record on mm2-offsets partition 5, so it looks like MM2 is ignoring the more 
recent offset records on partition 8 and therefore not seeking to the correct 
latest offsets.

This also appears to affect compaction of the offsets internal topic: while 
the older records on partition 8 for the sports-7 key are being cleaned up, the 
even older records for that same key on partition 5 are not.

I can't be certain that introducing the second topic into the MM2 config was 
the trigger for that change in partitioning behaviour. I am not sure why it 
would be, unless adding more topics to the replication list caused MM2 to 
automatically scale the number of partitions on the 
mm2-offsets-\{source}.internal topic, which I guess might affect partitioning 
behaviour. It was, however, the only noteworthy thing that we consciously 
changed within the same rough timeframe.

Attached is a screenshot to try and help illustrate the issue.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-12284) Flaky Test MirrorConnectorsIntegrationSSLTest#testOneWayReplicationWithAutoOffsetSync

2021-04-12 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman reopened KAFKA-12284:

  Assignee: (was: Luke Chen)

Failed again, on both the SSL and plain version of this test:

https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10512/5/testReport/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationSSLTest/Build___JDK_15_and_Scala_2_13___testOneWayReplicationWithAutoOffsetSync__/

https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10512/5/testReport/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationTest/Build___JDK_15_and_Scala_2_13___testOneWayReplicationWithAutoOffsetSync__/
 
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out.
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:365)
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:340)
at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.createTopics(MirrorConnectorsIntegrationBaseTest.java:609)

> Flaky Test 
> MirrorConnectorsIntegrationSSLTest#testOneWayReplicationWithAutoOffsetSync
> -
>
> Key: KAFKA-12284
> URL: https://issues.apache.org/jira/browse/KAFKA-12284
> Project: Kafka
>  Issue Type: Test
>  Components: mirrormaker, unit tests
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 3.0.0
>
>
> [https://github.com/apache/kafka/pull/9997/checks?check_run_id=1820178470]
> {quote} {{java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:366)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:341)
>   at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.testOneWayReplicationWithAutoOffsetSync(MirrorConnectorsIntegrationBaseTest.java:419)}}
> [...]
>  
> {{Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:364)
>   ... 92 more
> Caused by: org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'primary.test-topic-2' already exists.}}
> {quote}
> STDOUT
> {quote} {{2021-02-03 04:19:15,975] ERROR [MirrorHeartbeatConnector|task-0] 
> WorkerSourceTask\{id=MirrorHeartbeatConnector-0} failed to send record to 
> heartbeats:  (org.apache.kafka.connect.runtime.WorkerSourceTask:354)
> org.apache.kafka.common.KafkaException: Producer is closed forcefully.
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:750)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:737)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:282)
>   at java.lang.Thread.run(Thread.java:748)}}{quote}
> {quote} {{[2021-02-03 04:19:36,767] ERROR Could not check connector state 
> info. 
> (org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions:420)
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: Could not 
> read connector state. Error response: \{"error_code":404,"message":"No status 
> found for connector MirrorSourceConnector"}
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.connectorStatus(EmbeddedConnectCluster.java:466)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.checkConnectorState(EmbeddedConnectClusterAssertions.java:413)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectClusterAssertions.lambda$assertConnectorAndAtLeastNumTasksAreRunning$16(EmbeddedConnectClusterAssertions.java:286)
>   at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:303)
>   at 
> org.apa

Re: [kafka-clients] Re: Subject: [VOTE] 2.8.0 RC1

2021-04-12 Thread Israel Ekpo
No problem, I will assign it to you shortly.

https://issues.apache.org/jira/browse/KAFKA-12658

On Mon, Apr 12, 2021 at 8:47 PM John Roesler  wrote:

> Good catch, Israel!
>
> I’ll make sure that gets fixed.
>
> Thanks,
> John
>
> On Mon, Apr 12, 2021, at 19:30, Israel Ekpo wrote:
> > I just noticed that with the latest release candidate, the binaries from
> > the Scala 2.13 and 2.12 tarballs are not finding the class for the metadata
> > shell
> >
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/
> >
> > It looks like kafka-run-class.sh is not able to load it.
> >
> > Is this a known issue? Should I open an issue to track it?
> >
> > isekpo@MININT-5RPA920:/mnt/c/Users/isekpo/kafka_2.12-2.8.0$
> > bin/kafka-metadata-shell.sh --help
> > Error: Could not find or load main class
> > org.apache.kafka.shell.MetadataShell
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.kafka.shell.MetadataShell
> >
> > isekpo@MININT-5RPA920:/mnt/c/Users/isekpo/kafka_2.12-2.8.0$ cd
> > ../kafka_2.13-2.8.0/
> >
> > isekpo@MININT-5RPA920:/mnt/c/Users/isekpo/kafka_2.13-2.8.0$
> > bin/kafka-metadata-shell.sh --help
> > Error: Could not find or load main class
> > org.apache.kafka.shell.MetadataShell
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.kafka.shell.MetadataShell
> >
> >
> >
> > On Fri, Apr 9, 2021 at 4:52 PM Bill Bejeck  wrote:
> >
> > > Hi John,
> > >
> > > Thanks for running the 2.8.0 release!
> > >
> > > I've started to validate it and noticed the site-docs haven't been
> > > installed to https://kafka.apache.org/28/documentation.html yet.
> > >
> > > Thanks again!
> > >
> > > -Bill
> > >
> > > On Tue, Apr 6, 2021 at 5:37 PM John Roesler 
> wrote:
> > >
> > >> Hello Kafka users, developers and client-developers,
> > >>
> > >> This is the second candidate for release of Apache Kafka
> > >> 2.8.0. This is a major release that includes many new
> > >> features, including:
> > >>
> > >> * Early-access release of replacing Zookeeper with a self-
> > >> managed quorum
> > >> * Add Describe Cluster API
> > >> * Support mutual TLS authentication on SASL_SSL listeners
> > >> * Ergonomic improvements to Streams TopologyTestDriver
> > >> * Logger API improvement to respect the hierarchy
> > >> * Request/response trace logs are now JSON-formatted
> > >> * New API to add and remove Streams threads while running
> > >> * New REST API to expose Connect task configurations
> > >> * Fixed the TimeWindowDeserializer to be able to deserialize
> > >> keys outside of Streams (such as in the console consumer)
> > >> * Streams resilient improvement: new uncaught exception
> > >> handler
> > >> * Streams resilience improvement: automatically recover from
> > >> transient timeout exceptions
> > >>
> > >>
> > >>
> > >>
> > >> Release notes for the 2.8.0 release:
> > >> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/RELEASE_NOTES.html
> > >>
> > >>
> > >> *** Please download, test and vote by 6 April 2021 ***
> > >>
> > >> Kafka's KEYS file containing PGP keys we use to sign the
> > >> release:
> > >> https://kafka.apache.org/KEYS
> > >>
> > >> * Release artifacts to be voted upon (source and binary):
> > >>
> > >> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/
> > >>
> > >> * Maven artifacts to be voted upon:
> > >>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >>
> > >> * Javadoc:
> > >>
> > >> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/javadoc/
> > >>
> > >> * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
> > >>
> > >> https://github.com/apache/kafka/releases/tag/2.8.0-rc1
> > >>
> > >> * Documentation:
> > >> https://kafka.apache.org/28/documentation.html
> > >>
> > >> * Protocol:
> > >> https://kafka.apache.org/28/protocol.html
> > >>
> > >>
> > >> /**
> > >>
> > >> Thanks,
> > >> John
> > >>
> > >>
> > >>
> > >> --
> > > You received this message because you are subscribed to the Google
> Groups
> > > "kafka-clients" group.
> > > To unsubscribe from this group and stop receiving emails from it, send
> an
> > > email to kafka-clients+unsubscr...@googlegroups.com.
> > > To view this discussion on the web visit
> > >
> https://groups.google.com/d/msgid/kafka-clients/CAF7WS%2BrK%3DWMyM3bamNoxa9L-onZbw6UnJFASx0ZO5ywzj38WvA%40mail.gmail.com
> > > <
> https://groups.google.com/d/msgid/kafka-clients/CAF7WS%2BrK%3DWMyM3bamNoxa9L-onZbw6UnJFASx0ZO5ywzj38WvA%40mail.gmail.com?utm_medium=email&utm_source=footer
> >
> > > .
> > >
> >
>


[jira] [Created] (KAFKA-12658) bin/kafka-metadata-shell.sh cannot find or load main class org.apache.kafka.shell.MetadataShell

2021-04-12 Thread Israel Ekpo (Jira)
Israel Ekpo created KAFKA-12658:
---

 Summary: bin/kafka-metadata-shell.sh cannot find or load main 
class org.apache.kafka.shell.MetadataShell
 Key: KAFKA-12658
 URL: https://issues.apache.org/jira/browse/KAFKA-12658
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 3.0.0, 2.8.0
 Environment: Ubuntu, Java 11
Reporter: Israel Ekpo
Assignee: John Roesler


With the latest release candidate for 2.8.0, the binaries from the Scala 2.13 
and 2.12 tarballs are not finding the class for the metadata shell on the 
classpath:
[https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/]
 
kafka-run-class.sh is not able to load it.
 
cd ../kafka_2.12-2.8.0$
 
 bin/kafka-metadata-shell.sh --help
Error: Could not find or load main class org.apache.kafka.shell.MetadataShell
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.shell.MetadataShell

cd ../kafka_2.13-2.8.0/


bin/kafka-metadata-shell.sh --help
Error: Could not find or load main class org.apache.kafka.shell.MetadataShell
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.shell.MetadataShell
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-665 Kafka Connect Hash SMT

2021-04-12 Thread bran...@bbrownsound.com
It’s been a while, so I thought I’d give this one another friendly bump. 

> On Sep 21, 2020, at 9:38 AM, Brandon Brown  wrote:
> 
> Hi Tom,
> 
> The reason I went with a fixed set was so that we could simplify the 
> configuration; for example, you can say sha256 instead of having to remember 
> that it’s SHA-256. Admittedly, if other formats are implemented then this 
> would require updating as well. 
> 
> I’m flexible on changing it to a string and letting it be configured with the 
> exact name. What do you think Mickael?
> 
> Brandon Brown
> 
>> On Sep 21, 2020, at 3:42 AM, Tom Bentley  wrote:
>> 
>> Hi Brandon and Mickael,
>> 
>> Is it necessary to fix the supported digest? We could just support whatever
>> the JVM's MessageDigest supports?
>> 
>> Kind regards,
>> 
>> Tom
>> 
>>> On Fri, Sep 18, 2020 at 6:00 PM Brandon Brown 
>>> wrote:
>>> 
>>> Thanks Mickael! So the proposed hash functions would be MD5, SHA1, SHA256.
>>> 
>>> I can expand the motivation on the KIP, but here’s where my head is at.
>>> MaskField would completely remove the value by setting it to an equivalent
>>> null value. One problem with this is that, in the case of say a password
>>> going through the mask transform, it would become “”, which could mean
>>> either that no password was present in the message or that it was removed.
>>> This hash transformer would remove that ambiguity.
>>> 
>>> Do you think there are other hash functions that should be supported as
>>> well?
>>> 
>>> Thanks,
>>> Brandon Brown
>>> 
 On Sep 18, 2020, at 12:00 PM, Mickael Maison 
>>> wrote:
 
 Thanks Brandon for the KIP.
 
 There's already a built-in transformation (MaskField) that can
 obfuscate fields. In the motivation section, it would be nice to
 explain the use cases when MaskField is not suitable and when users
 would need the proposed transformation.
 
 The KIP exposes a "function" configuration to select the hash function
 to use. Which hash functions do you propose supporting?
 
> On Thu, Aug 27, 2020 at 10:43 PM  wrote:
> 
> 
> 
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-665%3A+Kafka+Connect+Hash+SMT
> 
> The current pr with the proposed changes
> https://github.com/apache/kafka/pull/9057 and the original 3rd party
> contribution which initiated this change
> 
>>> https://github.com/aiven/aiven-kafka-connect-transforms/issues/9#issuecomment-662378057
>>> .
> 
> I'm interested in any suggestions for ways to improve this as I think
> it would make a nice addition to the existing SMTs provided by Kafka
> Connect out of the box.
> 
> Thanks,
> Brandon
> 
> 
> 
>>> 
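
For context, a minimal sketch of the kind of digest step such an SMT could perform using the JVM's built-in MessageDigest, as Tom suggests (class and method names below are hypothetical, not from the KIP or PR):

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class HashFieldSketch {
    // Hash a field value with a configurable algorithm (e.g. "SHA-256", "MD5").
    static String hash(String algorithm, String value) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance(algorithm);
        byte[] hashed = digest.digest(value.getBytes(StandardCharsets.UTF_8));
        // Base64-encode so the result stays a printable string in the record.
        return Base64.getEncoder().encodeToString(hashed);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // An empty password and a real one now produce distinct, non-empty values,
        // removing the ambiguity described above for MaskField.
        System.out.println(hash("SHA-256", ""));
        System.out.println(hash("SHA-256", "hunter2"));
    }
}
{code}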



Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #31

2021-04-12 Thread Apache Jenkins Server
See 




Re: Subject: [VOTE] 2.8.0 RC1

2021-04-12 Thread John Roesler
Thanks for the catch, Bill!

 I was mistaken about the order of operations. I will install them ASAP. 

Thanks,
John

On Fri, Apr 9, 2021, at 15:52, Bill Bejeck wrote:
> Hi John,
> 
> Thanks for running the 2.8.0 release!
> 
> I've started to validate it and noticed the site-docs haven't been
> installed to https://kafka.apache.org/28/documentation.html yet.
> 
> Thanks again!
> 
> -Bill
> 
> On Tue, Apr 6, 2021 at 5:37 PM John Roesler  wrote:
> 
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka
> > 2.8.0. This is a major release that includes many new
> > features, including:
> >
> > * Early-access release of replacing Zookeeper with a self-
> > managed quorum
> > * Add Describe Cluster API
> > * Support mutual TLS authentication on SASL_SSL listeners
> > * Ergonomic improvements to Streams TopologyTestDriver
> > * Logger API improvement to respect the hierarchy
> > * Request/response trace logs are now JSON-formatted
> > * New API to add and remove Streams threads while running
> > * New REST API to expose Connect task configurations
> > * Fixed the TimeWindowDeserializer to be able to deserialize
> > keys outside of Streams (such as in the console consumer)
> > * Streams resilient improvement: new uncaught exception
> > handler
> > * Streams resilience improvement: automatically recover from
> > transient timeout exceptions
> >
> >
> >
> >
> > Release notes for the 2.8.0 release:
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/RELEASE_NOTES.html
> >
> >
> > *** Please download, test and vote by 6 April 2021 ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the
> > release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> >
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> >
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/javadoc/
> >
> > * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
> >
> > https://github.com/apache/kafka/releases/tag/2.8.0-rc1
> >
> > * Documentation:
> > https://kafka.apache.org/28/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/28/protocol.html
> >
> >
> > /**
> >
> > Thanks,
> > John
> >
> >
> >
> >
>


Re: [kafka-clients] Re: Subject: [VOTE] 2.8.0 RC1

2021-04-12 Thread Israel Ekpo
I just noticed that with the latest release candidate, the binaries from
the Scala 2.13 and 2.12 tarballs are not finding the class for the metadata
shell

https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/

It looks like kafka-run-class.sh is not able to load it.

Is this a known issue? Should I open an issue to track it?

isekpo@MININT-5RPA920:/mnt/c/Users/isekpo/kafka_2.12-2.8.0$
bin/kafka-metadata-shell.sh --help
Error: Could not find or load main class
org.apache.kafka.shell.MetadataShell
Caused by: java.lang.ClassNotFoundException:
org.apache.kafka.shell.MetadataShell

isekpo@MININT-5RPA920:/mnt/c/Users/isekpo/kafka_2.12-2.8.0$ cd
../kafka_2.13-2.8.0/

isekpo@MININT-5RPA920:/mnt/c/Users/isekpo/kafka_2.13-2.8.0$
bin/kafka-metadata-shell.sh --help
Error: Could not find or load main class
org.apache.kafka.shell.MetadataShell
Caused by: java.lang.ClassNotFoundException:
org.apache.kafka.shell.MetadataShell



On Fri, Apr 9, 2021 at 4:52 PM Bill Bejeck  wrote:

> Hi John,
>
> Thanks for running the 2.8.0 release!
>
> I've started to validate it and noticed the site-docs haven't been
> installed to https://kafka.apache.org/28/documentation.html yet.
>
> Thanks again!
>
> -Bill
>
> On Tue, Apr 6, 2021 at 5:37 PM John Roesler  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the second candidate for release of Apache Kafka
>> 2.8.0. This is a major release that includes many new
>> features, including:
>>
>> * Early-access release of replacing Zookeeper with a self-
>> managed quorum
>> * Add Describe Cluster API
>> * Support mutual TLS authentication on SASL_SSL listeners
>> * Ergonomic improvements to Streams TopologyTestDriver
>> * Logger API improvement to respect the hierarchy
>> * Request/response trace logs are now JSON-formatted
>> * New API to add and remove Streams threads while running
>> * New REST API to expose Connect task configurations
>> * Fixed the TimeWindowDeserializer to be able to deserialize
>> keys outside of Streams (such as in the console consumer)
>> * Streams resilient improvement: new uncaught exception
>> handler
>> * Streams resilience improvement: automatically recover from
>> transient timeout exceptions
>>
>>
>>
>>
>> Release notes for the 2.8.0 release:
>> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/RELEASE_NOTES.html
>>
>>
>> *** Please download, test and vote by 6 April 2021 ***
>>
>> Kafka's KEYS file containing PGP keys we use to sign the
>> release:
>> https://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>>
>> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>
>> * Javadoc:
>>
>> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/javadoc/
>>
>> * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
>>
>> https://github.com/apache/kafka/releases/tag/2.8.0-rc1
>>
>> * Documentation:
>> https://kafka.apache.org/28/documentation.html
>>
>> * Protocol:
>> https://kafka.apache.org/28/protocol.html
>>
>>
>> /**
>>
>> Thanks,
>> John
>>
>>
>>
>> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAF7WS%2BrK%3DWMyM3bamNoxa9L-onZbw6UnJFASx0ZO5ywzj38WvA%40mail.gmail.com
> 
> .
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #8

2021-04-12 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-12643) Kafka Streams 2.7 with Kafka Broker 2.6.x regression: bad timestamp in transform/process (this.context.schedule function)

2021-04-12 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-12643.
---
Resolution: Duplicate

Thanks for confirming!

> Kafka Streams 2.7 with Kafka Broker 2.6.x regression: bad timestamp in 
> transform/process (this.context.schedule function)
> -
>
> Key: KAFKA-12643
> URL: https://issues.apache.org/jira/browse/KAFKA-12643
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: David EVANO
>Priority: Major
> Attachments: Capture d’écran 2021-04-09 à 17.50.05.png
>
>
> During a transform() or a process() method:
> Define a scheduled task:
> this.context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, 
> timestamp -> \{...}
> store.put(...) or context.forward(...) produces a record with an invalid 
> timestamp.
> For the forward, a workaround is to define the timestamp:
> context.forward(entry.key, entry.value.toString(), 
> To.all().withTimestamp(timestamp));
> But for the state.put(...) and state.delete(...) functions there is no workaround.
> Is it mandatory to have the Kafka broker version aligned with the Kafka 
> Streams version?
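
A minimal, self-contained sketch of the pattern described above (illustrative Java with generic names): a Transformer that schedules a wall-clock punctuator and applies the forward() workaround with an explicit timestamp.

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.To;

public class ForwardingTransformer implements Transformer<String, String, KeyValue<String, String>> {
    private ProcessorContext context;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;
        context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            // Workaround from the report: pass the punctuation timestamp explicitly
            // instead of relying on the context's record timestamp.
            context.forward("some-key", "some-value", To.all().withTimestamp(timestamp));
        });
    }

    @Override
    public KeyValue<String, String> transform(final String key, final String value) {
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() {}
}
{code}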



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12657) Flaky Tests BlockingConnectorTest.testWorkerRestartWithBlockInConnectorStop

2021-04-12 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-12657:
---

 Summary: Flaky Tests 
BlockingConnectorTest.testWorkerRestartWithBlockInConnectorStop
 Key: KAFKA-12657
 URL: https://issues.apache.org/jira/browse/KAFKA-12657
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect
Reporter: Matthias J. Sax


[https://github.com/apache/kafka/pull/10506/checks?check_run_id=2327377745]
{quote} {{org.opentest4j.AssertionFailedError: Condition not met within timeout 
6. Worker did not complete startup in time ==> expected:  but was: 

at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:193)
at 
org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:319)
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:367)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:316)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:300)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:290)
at 
org.apache.kafka.connect.integration.BlockingConnectorTest.setup(BlockingConnectorTest.java:133)}}
{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12656) JMX exporter is leaking a lot of file descriptors

2021-04-12 Thread Liang Xia (Jira)
Liang Xia created KAFKA-12656:
-

 Summary: JMX exporter is leaking a lot of file descriptors
 Key: KAFKA-12656
 URL: https://issues.apache.org/jira/browse/KAFKA-12656
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Liang Xia


The JMX exporter doesn't close the connections successfully after reporting the 
metrics.

They are stuck in the CLOSE_WAIT state:
java  2351  kcbq  *385u  IPv6  3660408  0t0  TCP 
example.internal:9404->x.x.x.x:39470 (CLOSE_WAIT)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-12 Thread Randall Hauch
Congratulations, Bill!

On Mon, Apr 12, 2021 at 11:02 AM Guozhang Wang  wrote:

> Congratulations Bill !
>
> Guozhang
>
> On Wed, Apr 7, 2021 at 6:16 PM Matthias J. Sax  wrote:
>
> > Hi,
> >
> > It's my pleasure to announce that Bill Bejeck is now a member of the
> > Kafka PMC.
> >
> > Bill has been a Kafka committer since Feb 2019. He has remained
> > active in the community since becoming a committer.
> >
> >
> >
> > Congratulations Bill!
> >
> >  -Matthias, on behalf of Apache Kafka PMC
> >
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-693: Client-side Circuit Breaker for Partition Write Errors

2021-04-12 Thread Guozhang Wang
Hello Guoqiang,

This is another interesting ticket that may be also related to the issues
you observed and fixed in your production, if you used sticky partitioner
in producer clients:

https://issues.apache.org/jira/browse/KAFKA-10888


Guozhang


On Wed, Apr 7, 2021 at 11:00 AM Jun Rao  wrote:

> Hi, George,
>
> A few more comments on the KIP.
>
> 1. It would be useful to motivate the problem a bit more. For example, is
> the KIP trying to solve a transient broker problem (if so, for how long) or
> a permanent broker problem? It would also be useful to list some common
> causes that can slow the broker down.
>
> 2. It would be useful to discuss a bit more on the high level approach
> (e.g. in the rejected section). This KIP proposes to fix the issue on the
> client side by having a pluggable component to redirect the traffic to
> other brokers. One potential issue with this is that it requires all
> clients to opt in (assuming this is not the default) for the plugin to see
> the benefit. In some environments with a large number of clients,
> coordinating all those clients may not be easy. Another potential solution
> is to fix the issue on the server side. For example, if a broker is slow
> because it has noisy neighbors in a virtual environment, we could
> proactively bring down the broker and restart it somewhere else. This has
> the benefit that it requires less client side coordination.
>
> 3. Regarding how to detect broker slowness in the client. The proposal is
> based on the error in the produce response. Typically, if the broker is
> just slow, the only type of error the client gets is the timeout exception.
> Since the default timeout is 30 seconds, it may not be triggered all the
> time and it may be too late to reflect a broker side issue. I am wondering
> if there are other better indicators. For example, another potential option
> is to use the number of pending batches per partition (or broker) in the
> Accumulator. Intuitively, if a broker is slow, all partitions with the
> leader on it will gradually accumulate more batches.
>
> 4. It would be useful to have a solution that works with keyed messages so
> that they can still be distributed to the partition based on the hash of
> the key.
>
> Thanks,
>
> Jun
>
>
> On Wed, Mar 24, 2021 at 4:05 AM Guoqiang Shu 
> wrote:
>
> >
> > In our current proposal it can be configured via
> > producer.circuit.breaker.mute.retry.interval (defaulted to 10 mins), but
> > perhaps 'interval' is a confusing name.
> >
> > On 2021/03/23 00:45:23, Guozhang Wang  wrote:
> > > Thanks for the updated KIP! Some more comments inlined.
> > > >
> > > > I'm still not sure if, in your proposal, the muting length is a
> > > customizable value (and if yes, through which config) or it is always
> > hard
> > > coded as 10 minutes?
> > >
> > >
> > > > > Guozhang
> >
> >
>


-- 
-- Guozhang


Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-12 Thread Guozhang Wang
Congratulations Bill !

Guozhang

On Wed, Apr 7, 2021 at 6:16 PM Matthias J. Sax  wrote:

> Hi,
>
> It's my pleasure to announce that Bill Bejeck is now a member of the
> Kafka PMC.
>
> Bill has been a Kafka committer since Feb 2019. He has remained
> active in the community since becoming a committer.
>
>
>
> Congratulations Bill!
>
>  -Matthias, on behalf of Apache Kafka PMC
>


-- 
-- Guozhang


[jira] [Resolved] (KAFKA-7249) Provide an official Docker Hub image for Kafka

2021-04-12 Thread Timothy Higinbottom (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Higinbottom resolved KAFKA-7249.

Resolution: Not A Problem

> Provide an official Docker Hub image for Kafka
> --
>
> Key: KAFKA-7249
> URL: https://issues.apache.org/jira/browse/KAFKA-7249
> Project: Kafka
>  Issue Type: New Feature
>  Components: build, documentation, packaging, tools, website
>Affects Versions: 1.0.1, 1.1.0, 1.1.1, 2.0.0
>Reporter: Timothy Higinbottom
>Priority: Major
>  Labels: build, distribution, docker, packaging
>
> It would be great if there was an official Docker Hub image for Kafka, 
> supported by the Kafka community, so we knew that the image was trusted and 
> stable for use in production. Many organizations and teams are now using 
> Docker, Kubernetes, and other container systems that make deployment easier. 
> I think Kafka should move into this space and encourage this as an easy way 
> for beginners to get started, but also as a portable and effective way to 
> deploy Kafka in production. 
>  
> Currently there are only Kafka images maintained by third parties, which 
> seems like a shame for a big Apache project like Kafka. Hope you all consider 
> this.
>  
> Thanks,
> Tim



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12408) Document omitted ReplicaManager metrics

2021-04-12 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-12408.
-
Fix Version/s: 3.0.0
 Reviewer: Tom Bentley
   Resolution: Fixed

> Document omitted ReplicaManager metrics
> ---
>
> Key: KAFKA-12408
> URL: https://issues.apache.org/jira/browse/KAFKA-12408
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
> Fix For: 3.0.0
>
>
> There are several problems in ReplicaManager metrics documentation:
>  * kafka.server:type=ReplicaManager,name=OfflineReplicaCount is omitted.
>  * kafka.server:type=ReplicaManager,name=FailedIsrUpdatesPerSec is omitted.
>  * kafka.server:type=ReplicaManager,name=[PartitionCount|LeaderCount]'s 
> descriptions are omitted: 'mostly even across brokers'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #29

2021-04-12 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-726: Make the CooperativeStickyAssignor as the default assignor

2021-04-12 Thread Chris Egerton
Whoops, small correction--meant to say
ConsumerRebalanceListener::onPartitionsLost, not Consumer::onPartitionsLost
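
For anyone following along, a minimal generic sketch of overriding that callback on a ConsumerRebalanceListener (an illustration only, not Connect's actual implementation):

{code:java}
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

public class LossAwareListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // initialize state for newly assigned partitions
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // commit offsets / flush state for partitions being handed off cleanly
    }

    @Override
    public void onPartitionsLost(Collection<TopicPartition> partitions) {
        // Partitions were lost without a clean revocation (e.g. the member fell
        // out of the group); clean up without committing, instead of failing.
    }
}
{code}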

On Mon, Apr 12, 2021 at 8:17 AM Chris Egerton  wrote:

> Hi Sophie,
>
> This sounds fantastic. I've made a note on KAFKA-12487 about being sure to
> implement Consumer::onPartitionsLost to avoid unnecessary task failures on
> consumer protocol downgrade, but besides that, I don't think things could
> get any smoother for Connect users or developers. The automatic protocol
> upgrade/downgrade behavior appears safe, intuitive, and pain-free.
>
> Really excited for this development and hoping we can see it come to
> fruition in time for the 3.0 release!
>
> Cheers,
>
> Chris
>
> On Fri, Apr 9, 2021 at 2:43 PM Sophie Blee-Goldman
>  wrote:
>
>> 1) Yes, all of the above will be part of KAFKA-12477 (not KIP-726)
>>
>> 2) No, KAFKA-12638 would be nice to have but I don't think it's
>> appropriate
>> to remove
>> the default implementation of #onPartitionsLost in 3.0 since we never gave
>> any indication
>> yet that we intend to remove it
>>
>> 3) Yes, this would be similar to when a Consumer drops out of the group.
>> It's always been
>> possible for a member to miss a rebalance and have its partition be
>> reassigned to another
>> member, during which time both members would claim to own said partition.
>> But this is safe
>> because the member who dropped out is blocked from committing offsets on
>> that partition.
>>
>> On Fri, Apr 9, 2021 at 2:46 AM Luke Chen  wrote:
>>
>> > Hi Sophie,
>> > That sounds great to take care of each case I can think of.
>> > Questions:
>> > 1. Do you mean the short-Circuit will also be implemented in
>> KAFKA-12477?
>> > 2. I don't think KAFKA-12638 is the blocker of this KIP-726, Am I right?
>> > 3. So, does that mean we still have possibility to have multiple
>> consumer
>> > owned the same topic partition? And in this situation, we avoid them
>> doing
>> > committing, and waiting for next rebalance (should be soon). Is my
>> > understanding correct?
>> >
>> > Thank you very much for finding this great solution.
>> >
>> > Luke
>> >
>> > On Fri, Apr 9, 2021 at 11:37 AM Sophie Blee-Goldman
>> >  wrote:
>> >
>> > > Alright, here's the detailed proposal for KAFKA-12477. This assumes we
>> > will
>> > > change the default assignor to ["cooperative-sticky", "range"] in
>> > KIP-726.
>> > > It also acknowledges that users may attempt any kind of upgrade
>> without
>> > > reading the docs, and so we need to put in safeguards against data
>> > > corruption rather than assume everyone will follow the safe upgrade
>> path.
>> > >
>> > > With this proposal,
>> > > 1) New applications on 3.0 will enable cooperative rebalancing by
>> default
>> > > 2) Existing applications which don’t set an assignor can safely
>> upgrade
>> > to
>> > > 3.0 using a single rolling bounce with no extra steps, and will
>> > > automatically transition to cooperative rebalancing
>> > > 3) Existing applications which do set an assignor that uses EAGER can
>> > > likewise upgrade their applications to COOPERATIVE with a single
>> rolling
>> > > bounce
>> > > 4) Once on 3.0, applications can safely go back and forth between
>> EAGER
>> > and
>> > > COOPERATIVE
>> > > 5) Applications can safely downgrade away from 3.0
>> > >
>> > > The high-level idea for dynamic protocol upgrades is that the group
>> will
>> > > leverage the assignor selected by the group coordinator to determine
>> when
>> > > it’s safe to upgrade to COOPERATIVE, and trigger a fail-safe to
>> protect
>> > the
>> > > group in case of rare events or user misconfiguration. The group
>> > > coordinator selects the most preferred assignor that’s supported by
>> all
>> > > members of the group, so we know that all members will support
>> > COOPERATIVE
>> > > once we receive the “cooperative-sticky” assignor after a rebalance.
>> At
>> > > this point, each member can upgrade their own protocol to COOPERATIVE.
>> > > However, there may be situations in which an EAGER member may join the
>> > > group even after upgrading to COOPERATIVE. For example, during a
>> rolling
>> > > upgrade if the last remaining member on the old bytecode misses a
>> > > rebalance, the other members will be allowed to upgrade to
>> COOPERATIVE.
>> > If
>> > > the old member rejoins and is chosen to be the group leader before
>> it’s
>> > > upgraded to 3.0, it won’t be aware that the other members of the group
>> > have
>> > > not yet revoked their partitions when computing the assignment.
>> > >
>> > > Short Circuit:
>> > > The risk of mixing the cooperative and eager rebalancing protocols is
>> > that
>> > > a partition may be assigned to one member while it has yet to be
>> revoked
>> > > from its previous owner. The danger is that the new owner may begin
>> > > processing and committing offsets for this partition while the
>> previous
>> > > owner is also committing offsets in its #onPartitionsRevoked callback,
>> > > which is invoked at the end of the rebalance i

Re: [DISCUSS] KIP-726: Make the CooperativeStickyAssignor as the default assignor

2021-04-12 Thread Chris Egerton
Hi Sophie,

This sounds fantastic. I've made a note on KAFKA-12487 about being sure to
implement Consumer::onPartitionsLost to avoid unnecessary task failures on
consumer protocol downgrade, but besides that, I don't think things could
get any smoother for Connect users or developers. The automatic protocol
upgrade/downgrade behavior appears safe, intuitive, and pain-free.

Really excited for this development and hoping we can see it come to
fruition in time for the 3.0 release!

Cheers,

Chris

On Fri, Apr 9, 2021 at 2:43 PM Sophie Blee-Goldman
 wrote:

> 1) Yes, all of the above will be part of KAFKA-12477 (not KIP-726)
>
> 2) No, KAFKA-12638 would be nice to have but I don't think it's appropriate
> to remove
> the default implementation of #onPartitionsLost in 3.0 since we never gave
> any indication
> yet that we intend to remove it
>
> 3) Yes, this would be similar to when a Consumer drops out of the group.
> It's always been
> possible for a member to miss a rebalance and have its partition be
> reassigned to another
> member, during which time both members would claim to own said partition.
> But this is safe
> because the member who dropped out is blocked from committing offsets on
> that partition.
>
> On Fri, Apr 9, 2021 at 2:46 AM Luke Chen  wrote:
>
> > Hi Sophie,
> > That sounds great to take care of each case I can think of.
> > Questions:
> > 1. Do you mean the short-Circuit will also be implemented in KAFKA-12477?
> > 2. I don't think KAFKA-12638 is the blocker of this KIP-726, Am I right?
> > 3. So, does that mean we still have possibility to have multiple consumer
> > owned the same topic partition? And in this situation, we avoid them
> doing
> > committing, and waiting for next rebalance (should be soon). Is my
> > understanding correct?
> >
> > Thank you very much for finding this great solution.
> >
> > Luke
> >
> > On Fri, Apr 9, 2021 at 11:37 AM Sophie Blee-Goldman
> >  wrote:
> >
> > > Alright, here's the detailed proposal for KAFKA-12477. This assumes we
> > will
> > > change the default assignor to ["cooperative-sticky", "range"] in
> > KIP-726.
> > > It also acknowledges that users may attempt any kind of upgrade without
> > > reading the docs, and so we need to put in safeguards against data
> > > corruption rather than assume everyone will follow the safe upgrade
> path.
> > >
> > > With this proposal,
> > > 1) New applications on 3.0 will enable cooperative rebalancing by
> default
> > > 2) Existing applications which don’t set an assignor can safely upgrade
> > to
> > > 3.0 using a single rolling bounce with no extra steps, and will
> > > automatically transition to cooperative rebalancing
> > > 3) Existing applications which do set an assignor that uses EAGER can
> > > likewise upgrade their applications to COOPERATIVE with a single
> rolling
> > > bounce
> > > 4) Once on 3.0, applications can safely go back and forth between EAGER
> > and
> > > COOPERATIVE
> > > 5) Applications can safely downgrade away from 3.0
> > >
> > > The high-level idea for dynamic protocol upgrades is that the group
> will
> > > leverage the assignor selected by the group coordinator to determine
> when
> > > it’s safe to upgrade to COOPERATIVE, and trigger a fail-safe to protect
> > the
> > > group in case of rare events or user misconfiguration. The group
> > > coordinator selects the most preferred assignor that’s supported by all
> > > members of the group, so we know that all members will support
> > COOPERATIVE
> > > once we receive the “cooperative-sticky” assignor after a rebalance. At
> > > this point, each member can upgrade their own protocol to COOPERATIVE.
> > > However, there may be situations in which an EAGER member may join the
> > > group even after upgrading to COOPERATIVE. For example, during a
> rolling
> > > upgrade if the last remaining member on the old bytecode misses a
> > > rebalance, the other members will be allowed to upgrade to COOPERATIVE.
> > If
> > > the old member rejoins and is chosen to be the group leader before it’s
> > > upgraded to 3.0, it won’t be aware that the other members of the group
> > have
> > > not yet revoked their partitions when computing the assignment.
> > >
> > > Short Circuit:
> > > The risk of mixing the cooperative and eager rebalancing protocols is
> > that
> > > a partition may be assigned to one member while it has yet to be
> revoked
> > > from its previous owner. The danger is that the new owner may begin
> > > processing and committing offsets for this partition while the previous
> > > owner is also committing offsets in its #onPartitionsRevoked callback,
> > > which is invoked at the end of the rebalance in the cooperative
> protocol.
> > > This can result in these consumers overwriting each other’s offsets and
> > > getting a corrupted view of the partition. Note that it’s not possible
> to
> > > commit during a rebalance, so we can protect against offset corruption
> by
> > > blocking further commits after we detect that the gro

Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-12 Thread Bill Bejeck
Thank you all for the kind words!

-Bill

On Mon, Apr 12, 2021 at 6:58 AM Bruno Cadonna 
wrote:

> Congrats Bill! Well deserved!
>
> Best,
> Bruno
>
> On 12.04.21 11:19, Satish Duggana wrote:
> > Congratulations Bill!!
> >
> > On Thu, 8 Apr 2021 at 13:24, Tom Bentley  wrote:
> >
> >> Congratulations Bill!
> >>
> >> On Thu, Apr 8, 2021 at 2:36 AM Luke Chen  wrote:
> >>
> >>> Congratulations Bill!
> >>>
> >>> Luke
> >>>
> >>> On Thu, Apr 8, 2021 at 9:17 AM Matthias J. Sax 
> wrote:
> >>>
>  Hi,
> 
>  It's my pleasure to announce that Bill Bejeck is now a member of the
>  Kafka PMC.
> 
>  Bill has been a Kafka committer since Feb 2019. He has remained
>  active in the community since becoming a committer.
> 
> 
> 
>  Congratulations Bill!
> 
>    -Matthias, on behalf of Apache Kafka PMC
> 
> >>>
> >>
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-12 Thread Bill Bejeck
Thank you all for the kind words!


-Bill

On Mon, Apr 12, 2021 at 7:24 AM Dongjin Lee  wrote:

> Congratulations, Bill!
>
> Thanks,
> Dongjin
>
> On Mon, Apr 12, 2021 at 7:58 PM Bruno Cadonna 
> wrote:
>
> > Congrats Bill! Well deserved!
> >
> > Best,
> > Bruno
> >
> > On 12.04.21 11:19, Satish Duggana wrote:
> > > Congratulations Bill!!
> > >
> > > On Thu, 8 Apr 2021 at 13:24, Tom Bentley  wrote:
> > >
> > >> Congratulations Bill!
> > >>
> > >> On Thu, Apr 8, 2021 at 2:36 AM Luke Chen  wrote:
> > >>
> > >>> Congratulations Bill!
> > >>>
> > >>> Luke
> > >>>
> > >>> On Thu, Apr 8, 2021 at 9:17 AM Matthias J. Sax 
> > wrote:
> > >>>
> >  Hi,
> > 
> >  It's my pleasure to announce that Bill Bejeck is now a member of the
> >  Kafka PMC.
> > 
> >  Bill has been a Kafka committer since Feb 2019. He has remained
> >  active in the community since becoming a committer.
> > 
> > 
> > 
> >  Congratulations Bill!
> > 
> >    -Matthias, on behalf of Apache Kafka PMC
> > 
> > >>>
> > >>
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin
> *
>


Re: [DISCUSS] KIP-618: Atomic commit of source connector records and offsets

2021-04-12 Thread Chris Egerton
Hi Randall,

After thinking things over carefully, I've done some reworking of the
design. Instead of performing zombie fencing during rebalance, the leader
will expose an internal REST endpoint that will allow workers to request a
round of zombie fencing on demand, at any time. Workers will then hit this
endpoint after starting connectors and after task config updates for
connectors are detected; the precise details of this are outlined in the
KIP. If a round of fencing should fail for any reason, the worker will be
able to mark its Connector failed and, if the user wants to retry, they can
simply restart the Connector via the REST API (specifically, the POST
/connectors/{connector}/restart endpoint).
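
For what it's worth, that retry path is just a plain call to the existing REST API; a
rough sketch with the JDK HTTP client (the worker URL and connector name are
placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestartConnectorSketch {

        public static void main(String[] args) throws Exception {
            String workerUrl = "http://localhost:8083";       // placeholder Connect worker URL
            String connectorName = "my-source-connector";     // placeholder connector name

            // POST /connectors/{connector}/restart, i.e. the endpoint mentioned above.
            HttpRequest restart = HttpRequest.newBuilder()
                    .uri(URI.create(workerUrl + "/connectors/" + connectorName + "/restart"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(restart, HttpResponse.BodyHandlers.ofString());
            System.out.println("Restart request returned HTTP " + response.statusCode());
        }
    }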

The idea I'd been playing with to allow workers to directly write to the
config topic seemed promising at first, but it allowed things to get pretty
hairy for users if any kind of rebalancing bug took place and two workers
believed they owned the same Connector object.

I hope this answers any outstanding questions and look forward to your
thoughts.

Cheers,

Chris

On Mon, Mar 22, 2021 at 4:38 PM Chris Egerton  wrote:

> Hi Randall,
>
> No complaints about email size from me. Let's dive in!
>
> 1. Sure! Especially important in my mind is that this is already possible
> with Connect as it is today, and users can benefit from this with or
> without the expanded exactly-once source support we're trying to add with
> this KIP. I've added that info to the "Motivation" section and included a
> brief overview of the idempotent producer in the "Background and
> References" section.
>
> 2. I actually hadn't considered enabling exactly-once source support by
> default. Thinking over it now, I'm a little hesitant to do so just because,
> even with the best testing out there, it's a pretty large change and it
> seems safest to try to make it opt-in in case there's unanticipated
> fallout. Then again, when incremental cooperative rebalancing was
> introduced, it was made opt-out instead of opt-in. However, ICR came with
> lower known risk of breaking existing users' setups; we know for a fact
> that, if you haven't granted your worker or connector principals some ACLs
> on Kafka, your connectors will fail. In an ideal world people would
> carefully read upgrade notes and either grant those permissions or disable
> the feature before upgrading their Connect cluster to 3.0, but if they
> don't, they'll be in for a world of hurt. Overall I'd be more comfortable
> letting this feature incubate for a little bit to let everyone get familiar
> with it before possibly enabling it in 4.0 by default; what do you think?
>
> 3. I didn't think too long about the name for the offsets topic property;
> it seemed pretty unlikely that it'd conflict with existing connector
> property names. One alternative could be to follow the idiom established by
> KIP-458 and include the word "override" in there somewhere, but none of the
> resulting names seem very intuitive ("offsets.storage.override.topic" seems
> the best to me but even that isn't super clear). Happy to field suggestions
> if anyone has any alternatives they'd like to propose.
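
(Purely as a strawman for the naming discussion, here is how one of the candidate names
would read in a connector config; every value below is a placeholder and the property
name is not final.)

    import java.util.Map;

    public class OffsetsTopicNamingStrawman {

        public static void main(String[] args) {
            // "offsets.storage.override.topic" is just one of the candidate names
            // floated above; whatever the KIP settles on may differ.
            Map<String, String> connectorConfig = Map.of(
                    "name", "my-source-connector",                         // placeholder
                    "connector.class", "org.example.MySourceConnector",    // placeholder
                    "offsets.storage.override.topic", "my-source-offsets"  // candidate name only
            );
            connectorConfig.forEach((k, v) -> System.out.println(k + " = " + v));
        }
    }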
>
> 4. I _really_ wanted to enable per-connector toggling of this feature for
> the exact reasons you've outlined. There are already a couple of cases where
> some piece of functionality was introduced that users could only control at
> the worker config level and, later, effort had to be made in order to add
> per-connector granularity: key/value converters and connector Kafka clients
> are the two at the top of my head, and there may be others. So if at all
> possible, it'd be great if we could support this. The only thing standing
> in the way is that it allows exactly-once delivery to be compromised, which
> in my estimation was unacceptable. I'm hoping we can make this feature
> great enough that it'll work with most if not all source connectors out
> there, and users won't even want to toggle it on a per-connector basis.
> Otherwise, we'll have to decide between forcing users to split their
> connectors across two Connect clusters (one with exactly-once source
> enabled, one with it disabled), which would be awful, or potentially seeing
> duplicate record delivery for exactly-once connectors, which is also awful.
>
> 5. Existing source connectors won't necessarily have to be configured with
> "consumer.override" properties, but there are a couple of cases where that
> will be necessary:
>
> a. Connector is running against a secured Kafka cluster and the
> principal configured in the worker via the various "consumer." properties
> (if there is one) doesn't have permission to access the offsets topic that
> the connector will use. If no "consumer.override." properties are specified
> in this case, the connector and its tasks will fail.
>
> b. Connector is running against a separate Kafka cluster from the one
> specified in the worker's config with the "bootstrap.servers" property (or
> the "consumer.

Re: [ANNOUNCE] New Committer: Bruno Cadonna

2021-04-12 Thread Dongjin Lee
Congratulations, Bruno!!

Best,
Dongjin

On Mon, Apr 12, 2021 at 8:05 PM Bruno Cadonna 
wrote:

> Thank you all for the kind words!
>
> Best,
> Bruno
>
> On 08.04.21 00:34, Guozhang Wang wrote:
> > Hello all,
> >
> > I'm happy to announce that Bruno Cadonna has accepted his invitation to
> > become an Apache Kafka committer.
> >
> > Bruno has been contributing to Kafka since Jan. 2019 and has made 99
> > commits and more than 80 PR reviews so far:
> >
> > https://github.com/apache/kafka/commits?author=cadonna
> >
> > He worked on a few key KIPs on Kafka Streams:
> >
> > * KIP-471: Expose RocksDB Metrics in Kafka Streams
> > * KIP-607: Add Metrics to Kafka Streams to Report Properties of RocksDB
> > * KIP-662: Throw Exception when Source Topics of a Streams App are
> Deleted
> >
> > Besides all the code contributions and reviews, he's also done a handful of
> > things for the community: multiple Kafka meetup talks in Berlin and Kafka
> > Summit talks, an introductory class on Kafka at Humboldt-Universität zu Berlin
> > seminars, and has co-authored a paper on Kafka's stream processing
> > semantics at this year's SIGMOD conference (
> > https://en.wikipedia.org/wiki/SIGMOD). Bruno has also been quite active on
> > SO channels and AK mailing lists.
> >
> > Please join me to congratulate Bruno for all the contributions!
> >
> > -- Guozhang
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-12 Thread Dongjin Lee
Congratulations, Bill!

Thanks,
Dongjin

On Mon, Apr 12, 2021 at 7:58 PM Bruno Cadonna 
wrote:

> Congrats Bill! Well deserved!
>
> Best,
> Bruno
>
> On 12.04.21 11:19, Satish Duggana wrote:
> > Congratulations Bill!!
> >
> > On Thu, 8 Apr 2021 at 13:24, Tom Bentley  wrote:
> >
> >> Congratulations Bill!
> >>
> >> On Thu, Apr 8, 2021 at 2:36 AM Luke Chen  wrote:
> >>
> >>> Congratulations Bill!
> >>>
> >>> Luke
> >>>
> >>> On Thu, Apr 8, 2021 at 9:17 AM Matthias J. Sax 
> wrote:
> >>>
>  Hi,
> 
>  It's my pleasure to announce that Bill Bejeck is now a member of the
>  Kafka PMC.
> 
>  Bill has been a Kafka committer since Feb 2019. He has remained
>  active in the community since becoming a committer.
> 
> 
> 
>  Congratulations Bill!
> 
>    -Matthias, on behalf of Apache Kafka PMC
> 
> >>>
> >>
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #28

2021-04-12 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New Committer: Bruno Cadonna

2021-04-12 Thread Bruno Cadonna

Thank you all for the kind words!

Best,
Bruno

On 08.04.21 00:34, Guozhang Wang wrote:

Hello all,

I'm happy to announce that Bruno Cadonna has accepted his invitation to
become an Apache Kafka committer.

Bruno has been contributing to Kafka since Jan. 2019 and has made 99
commits and more than 80 PR reviews so far:

https://github.com/apache/kafka/commits?author=cadonna

He worked on a few key KIPs on Kafka Streams:

* KIP-471: Expose RocksDB Metrics in Kafka Streams
* KIP-607: Add Metrics to Kafka Streams to Report Properties of RocksDB
* KIP-662: Throw Exception when Source Topics of a Streams App are Deleted

Besides all the code contributions and reviews, he's also done a handful of
things for the community: multiple Kafka meetup talks in Berlin and Kafka
Summit talks, an introductory class on Kafka at Humboldt-Universität zu Berlin
seminars, and has co-authored a paper on Kafka's stream processing
semantics at this year's SIGMOD conference (
https://en.wikipedia.org/wiki/SIGMOD). Bruno has also been quite active on
SO channels and AK mailing lists.

Please join me to congratulate Bruno for all the contributions!

-- Guozhang



Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-12 Thread Bruno Cadonna

Congrats Bill! Well deserved!

Best,
Bruno

On 12.04.21 11:19, Satish Duggana wrote:

Congratulations Bill!!

On Thu, 8 Apr 2021 at 13:24, Tom Bentley  wrote:


Congratulations Bill!

On Thu, Apr 8, 2021 at 2:36 AM Luke Chen  wrote:


Congratulations Bill!

Luke

On Thu, Apr 8, 2021 at 9:17 AM Matthias J. Sax  wrote:


Hi,

It's my pleasure to announce that Bill Bejeck is now a member of the
Kafka PMC.

Bill has been a Kafka committer since Feb 2019. He has remained
active in the community since becoming a committer.



Congratulations Bill!

  -Matthias, on behalf of Apache Kafka PMC









Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-12 Thread Satish Duggana
Congratulations Bill!!

On Thu, 8 Apr 2021 at 13:24, Tom Bentley  wrote:

> Congratulations Bill!
>
> On Thu, Apr 8, 2021 at 2:36 AM Luke Chen  wrote:
>
> > Congratulations Bill!
> >
> > Luke
> >
> > On Thu, Apr 8, 2021 at 9:17 AM Matthias J. Sax  wrote:
> >
> > > Hi,
> > >
> > > It's my pleasure to announce that Bill Bejeck is now a member of the
> > > Kafka PMC.
> > >
> > > Bill has been a Kafka committer since Feb 2019. He has remained
> > > active in the community since becoming a committer.
> > >
> > >
> > >
> > > Congratulations Bill!
> > >
> > >  -Matthias, on behalf of Apache Kafka PMC
> > >
> >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #27

2021-04-12 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12655) CVE-2021-28165 - Upgrade jetty to 9.4.39

2021-04-12 Thread Edwin Hobor (Jira)
Edwin Hobor created KAFKA-12655:
---

 Summary: CVE-2021-28165 - Upgrade jetty to 9.4.39
 Key: KAFKA-12655
 URL: https://issues.apache.org/jira/browse/KAFKA-12655
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.6.1, 2.7.0
Reporter: Edwin Hobor


*CVE-2021-28165* vulnerability affects Jetty versions up to *9.4.38*. For more 
information see [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28165] 

Upgrading to Jetty version *9.4.39* should address this issue 
([https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.39.v20210325]).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)