[GitHub] [kafka] ijuma commented on a diff in pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-03-03 Thread via GitHub


ijuma commented on code in PR #13116:
URL: https://github.com/apache/kafka/pull/13116#discussion_r1125408496


##
metadata/src/main/java/org/apache/kafka/controller/ControllerRequestContext.java:
##
 @@ -42,19 +45,39 @@ public static OptionalLong requestTimeoutMsToDeadlineNs(
     private final OptionalLong deadlineNs;
     private final RequestHeaderData requestHeader;
 
+    private final Optional> requestedPartitionCountRecorder;
+

Review Comment:
   We could have an interface in server-common that the scala classes implement.
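   
   As a rough sketch of that idea (purely hypothetical names, not code from this PR), the interface could live in server-common and be implemented by the Scala quota classes in core:
   
       // Hypothetical sketch only: a small callback interface in server-common,
       // so controller/metadata code does not depend on Scala classes in core.
       package org.apache.kafka.server.common;
   
       public interface PartitionCountRecorder {
           // Record the number of partitions a request asked to create or mutate.
           void recordPartitionCount(int requestedPartitionCount);
       }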






[GitHub] [kafka] jeffkbkim commented on a diff in pull request #13329: KAFKA-14462; [2/N] Add ConsumerGroupHeartbeart to GroupCoordinator interface

2023-03-03 Thread via GitHub


jeffkbkim commented on code in PR #13329:
URL: https://github.com/apache/kafka/pull/13329#discussion_r1125367811


##
core/src/test/scala/unit/kafka/server/ConsumerGroupHeartbeatRequestTest.scala:
##
 @@ -54,6 +54,20 @@ class ConsumerGroupHeartbeatRequestTest(cluster: ClusterInstance) {
     assertEquals(expectedResponse, consumerGroupHeartbeatResponse.data)
   }
 
+  @ClusterTest(serverProperties = Array(
+    new ClusterConfigProperty(key = "unstable.api.versions.enable", value = "true"),
+    new ClusterConfigProperty(key = "group.coordinator.new.enable", value = "true")
+  ))
+  def testConsumerGroupHeartbeatIsAccessibleWhenNewGroupCoordinatorIsEnabled(): Unit = {
+    val consumerGroupHeartbeatRequest = new ConsumerGroupHeartbeatRequest.Builder(
+      new ConsumerGroupHeartbeatRequestData()
+    ).build()
+
+    val consumerGroupHeartbeatResponse = connectAndReceive(consumerGroupHeartbeatRequest)
+    val expectedResponse = new ConsumerGroupHeartbeatResponseData().setErrorCode(Errors.UNSUPPORTED_VERSION.code)

Review Comment:
   This is where we would update the test once we have the new group coordinator handling heartbeat requests, right?



##
core/src/main/scala/kafka/server/KafkaApis.scala:
##
@@ -3573,9 +3573,27 @@ class KafkaApis(val requestChannel: RequestChannel,
 
   def handleConsumerGroupHeartbeat(request: RequestChannel.Request): CompletableFuture[Unit] = {
     val consumerGroupHeartbeatRequest = request.body[ConsumerGroupHeartbeatRequest]
-    // KIP-848 is not implemented yet so return UNSUPPORTED_VERSION.
-    requestHelper.sendMaybeThrottle(request, consumerGroupHeartbeatRequest.getErrorResponse(Errors.UNSUPPORTED_VERSION.exception))
-    CompletableFuture.completedFuture[Unit](())
+
+    if (!config.isNewGroupCoordinatorEnabled) {
+      // The API is not supported by the "old" group coordinator (the default). If the

Review Comment:
   just to confirm, we will be defaulting to the new coordinator once it's 
available, correct?






[GitHub] [kafka] ijuma commented on pull request #13339: MINOR: version upgrades: zinc 1.7.2 -->> 1.8.0, gradle 8.0.1 -->> 8.0.2 and gradle plugins

2023-03-03 Thread via GitHub


ijuma commented on PR #13339:
URL: https://github.com/apache/kafka/pull/13339#issuecomment-1454394562

   Thanks for the PR. I think the following workaround where we use `:` is no 
longer required:
   
   https://github.com/apache/kafka/blob/trunk/build.gradle#L685
   
   This is the PR where it should have been fixed 
https://github.com/gradle/gradle/pull/24059





[GitHub] [kafka] satishd merged pull request #13304: KAFKA-14726 Move/rewrite of LogReadInfo, LogOffsetSnapshot, LogStartOffsetIncrementReason to storage module

2023-03-03 Thread via GitHub


satishd merged PR #13304:
URL: https://github.com/apache/kafka/pull/13304





[GitHub] [kafka] vcrfxia opened a new pull request, #13340: KAFKA-14491: [15/N] Add integration tests for versioned stores

2023-03-03 Thread via GitHub


vcrfxia opened a new pull request, #13340:
URL: https://github.com/apache/kafka/pull/13340

   (This PR is stacked on https://github.com/apache/kafka/pull/13292 -- only 
the last commit needs to be reviewed separately.)
   
   Adds integration tests for the new versioned stores introduced in 
[KIP-889](https://cwiki.apache.org/confluence/display/KAFKA/KIP-889%3A+Versioned+State+Stores).
 The specific tests are:
   * validate put, get, get-with-timestamp, and delete operations
   * changelog topic configs are as expected
   * restore from changelog
   * custom IQv2 queries to a custom store implementation
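   
   As a minimal sketch of the operations listed above, assuming the VersionedKeyValueStore interface introduced by KIP-889 (store/topology wiring omitted; this is not code from the PR):
   
       import org.apache.kafka.streams.state.VersionedKeyValueStore;
       import org.apache.kafka.streams.state.VersionedRecord;
   
       class VersionedStoreExample {
           static void exercise(VersionedKeyValueStore<String, String> store) {
               store.put("k", "v1", 10L);                          // put at timestamp 10
               VersionedRecord<String> latest = store.get("k");    // latest-version get
               VersionedRecord<String> asOf = store.get("k", 15L); // get-with-timestamp
               store.delete("k", 20L);                             // delete (tombstone at ts 20)
           }
       }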
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] dejan2609 opened a new pull request, #13339: MINOR: version upgrades: zinc 1.7.2 -->> 1.8.0, gradle 8.0.1 -->> 8.0.2 and gradle plugins

2023-03-03 Thread via GitHub


dejan2609 opened a new pull request, #13339:
URL: https://github.com/apache/kafka/pull/13339

   Related links:
   - zinc release notes: https://github.com/sbt/zinc/releases/tag/v1.8.0
   - gradle release notes: https://github.com/gradle/gradle/releases/tag/v8.0.2
   - gradle diff: https://github.com/gradle/gradle/compare/v8.0.1...v8.0.2
   
   Plugins version upgrade details:
   - 'com.github.ben-manes.versions' 0.44.0 -->> 0.46.0
   - 'org.owasp.dependencycheck' 8.0.2 -->> 8.1.2
   - 'io.swagger.core.v3.swagger-gradle-plugin' 2.2.0 -->> 2.2.8
   - 'org.gradle.test-retry' 1.5.1 -->> 1.5.2
   - 'com.github.johnrengelman.shadow' 7.1.2 -->> 8.1.0
   
   
   





[GitHub] [kafka] philipnee commented on a diff in pull request #12813: KAFKA-14317: ProduceRequest timeouts are logged as network exceptions

2023-03-03 Thread via GitHub


philipnee commented on code in PR #12813:
URL: https://github.com/apache/kafka/pull/12813#discussion_r1125051927


##
clients/src/main/java/org/apache/kafka/clients/NetworkClient.java:
##
 @@ -753,7 +756,8 @@ public static AbstractResponse parseResponse(ByteBuffer responseBuffer, RequestH
     private void processDisconnection(List<ClientResponse> responses,

Review Comment:
   I'm thinking: instead of passing boolean flags around, would it be better to have another function that explicitly calls this out, e.g. `processTimeoutDisconnection`?
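   
   As a sketch of what I mean (signature assumed, not the PR's actual code):
   
       // Hypothetical: an explicit entry point for the timeout case, so callers
       // don't have to thread a boolean flag through processDisconnection.
       private void processTimeoutDisconnection(List<ClientResponse> responses, String nodeId, long now) {
           processDisconnection(responses, nodeId, now, true /* timedOut */);
       }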



##
clients/src/main/java/org/apache/kafka/clients/NetworkClient.java:
##
 @@ -324,11 +324,14 @@ public void disconnect(String nodeId) {
         log.info("Client requested disconnect from node {}", nodeId);
         selector.close(nodeId);
         long now = time.milliseconds();
-        cancelInFlightRequests(nodeId, now, abortedSends);
+        cancelInFlightRequests(nodeId, now, abortedSends, false);
         connectionStates.disconnected(nodeId, now);
     }
 
-    private void cancelInFlightRequests(String nodeId, long now, Collection<ClientResponse> responses) {
+    private void cancelInFlightRequests(String nodeId,

Review Comment:
   nit: mark all params `final`



##
clients/src/main/java/org/apache/kafka/clients/NetworkClient.java:
##
 @@ -1279,9 +1283,10 @@ public ClientResponse completed(AbstractResponse response, long timeMs) {
             false, null, null, response);
     }
 
-    public ClientResponse disconnected(long timeMs, AuthenticationException authenticationException) {
+    public ClientResponse disconnected(long timeMs, AuthenticationException authenticationException, boolean timedOut) {

Review Comment:
   And maybe we could introduce `timeouted(long timeMs, AuthenticationException authenticationException)`, if that makes the timeout situation more explicit everywhere.






[GitHub] [kafka] jolshan commented on a diff in pull request #13231: KAFKA-14402: Update AddPartitionsToTxn protocol to batch and handle verifyOnly requests

2023-03-03 Thread via GitHub


jolshan commented on code in PR #13231:
URL: https://github.com/apache/kafka/pull/13231#discussion_r1125017580


##
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java:
##
 @@ -2598,12 +2604,37 @@ private OffsetsForLeaderEpochResponse createLeaderEpochResponse() {
     }
 
     private AddPartitionsToTxnRequest createAddPartitionsToTxnRequest(short version) {
-        return new AddPartitionsToTxnRequest.Builder("tid", 21L, (short) 42,
-            singletonList(new TopicPartition("topic", 73))).build(version);
+        if (version < 4) {
+            return AddPartitionsToTxnRequest.Builder.forClient("tid", 21L, (short) 42,
+                singletonList(new TopicPartition("topic", 73))).build(version);
+        } else {
+            AddPartitionsToTxnTransactionCollection transactions = new AddPartitionsToTxnTransactionCollection(
+                singletonList(new AddPartitionsToTxnTransaction()

Review Comment:
   It's a bit too long for the previous line though. I made it 4 instead of 8 
spaces.






[jira] [Commented] (KAFKA-13882) Dockerfile for previewing website

2023-03-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696324#comment-17696324
 ] 

ASF GitHub Bot commented on KAFKA-13882:


mjsax commented on PR #410:
URL: https://github.com/apache/kafka-site/pull/410#issuecomment-1454128799

   I just checked out the PR locally and it worked for me.

> Dockerfile for previewing website
> -
>
> Key: KAFKA-13882
> URL: https://issues.apache.org/jira/browse/KAFKA-13882
> Project: Kafka
>  Issue Type: Task
>  Components: docs, website
>Reporter: Tom Bentley
>Assignee: Lim Qing Wei
>Priority: Trivial
>  Labels: newbie
>
> Previewing changes to the website/documentation is rather difficult because 
> you either have to [hack with the 
> HTML|https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes#ContributingWebsiteDocumentationChanges-KafkaWebsiteRepository]
>  or [install 
> httpd|https://cwiki.apache.org/confluence/display/KAFKA/Setup+Kafka+Website+on+Local+Apache+Server].
>  This is a barrier to contribution.
> Having a Dockerfile for previewing the Kafka website (i.e. with httpd 
> properly set up) would make it easier for people to contribute website/docs 
> changes.





[jira] [Commented] (KAFKA-14748) Relax non-null FK left-join requirement

2023-03-03 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696313#comment-17696313
 ] 

Matthias J. Sax commented on KAFKA-14748:
-

Yes, the behavior would change. But making this change is the goal, isn't it? – That is also why I brought up the KIP question: if we apply a change in behavior, we might need a KIP.

The other question was: if we apply this change of behavior (even if we do a KIP), can users who want the old behavior still get it? I think the answer is yes, via upstream filtering.
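
For example, something like this upstream filter (a sketch only; Order, Customer, Enriched, extractFk and enrich are application placeholders) would preserve today's behavior:

    KTable<String, Order> nonNullFk =
        orders.filter((orderId, order) -> extractFk(order) != null); // drop null-FK records before the join
    KTable<String, Enriched> joined =
        nonNullFk.leftJoin(customers,
            order -> extractFk(order),                   // foreign-key extractor
            (order, customer) -> enrich(order, customer));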

> Relax non-null FK left-join requirement
> ---
>
> Key: KAFKA-14748
> URL: https://issues.apache.org/jira/browse/KAFKA-14748
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>
> Kafka Streams enforces a strict non-null-key policy in the DSL across all 
> key-dependent operations (like aggregations and joins).
> This also applies to FK-joins, in particular to the ForeignKeyExtractor. If 
> it returns `null`, it's treated as invalid. For left-joins, it might make 
> sense to still accept a `null`, and add the left-hand record with an empty 
> right-hand-side to the result.





[jira] [Commented] (KAFKA-14765) Support SCRAM for brokers at bootstrap

2023-03-03 Thread Proven Provenzano (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696311#comment-17696311
 ] 

Proven Provenzano commented on KAFKA-14765:
---

PR [13114|https://github.com/apache/kafka/pull/13114] is merged.

The PR for this update is now out for review.

> Support SCRAM for brokers at bootstrap
> --
>
> Key: KAFKA-14765
> URL: https://issues.apache.org/jira/browse/KAFKA-14765
> Project: Kafka
>  Issue Type: Improvement
>  Components: kraft
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Major
>  Labels: KIP-900
>
> We want to add SCRAM support for brokers at bootstrap.
> We will support bootstrap as described in 
> [KIP-900|https://cwiki.apache.org/confluence/display/KAFKA/KIP-900%3A+KRaft+kafka-storage.sh+API+additions+to+support+SCRAM+for+Kafka+Brokers]
>  





[jira] [Commented] (KAFKA-14084) Support SCRAM when using KRaft mode

2023-03-03 Thread Proven Provenzano (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696309#comment-17696309
 ] 

Proven Provenzano commented on KAFKA-14084:
---

Base support without bootstrap support has been pushed with [GitHub Pull 
Request #13114|https://github.com/apache/kafka/pull/13114]

> Support SCRAM when using KRaft mode
> ---
>
> Key: KAFKA-14084
> URL: https://issues.apache.org/jira/browse/KAFKA-14084
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Proven Provenzano
>Priority: Major
>  Labels: kip-500
>
> Support SCRAM when using KRaft mode, as specified in KIP-631





[jira] [Assigned] (KAFKA-14084) Support SCRAM when using KRaft mode

2023-03-03 Thread Proven Provenzano (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Proven Provenzano reassigned KAFKA-14084:
-

Assignee: Proven Provenzano  (was: Colin McCabe)

> Support SCRAM when using KRaft mode
> ---
>
> Key: KAFKA-14084
> URL: https://issues.apache.org/jira/browse/KAFKA-14084
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Proven Provenzano
>Priority: Major
>  Labels: kip-500
>
> Support SCRAM when using KRaft mode, as specified in KIP-631





[jira] [Commented] (KAFKA-9234) Consider using @Nullable and @Nonnull annotations

2023-03-03 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696306#comment-17696306
 ] 

Matthias J. Sax commented on KAFKA-9234:


Thanks for the pointer – I am not familiar with JSpecify – let me take a look.

Overall, this ticket has broader impact, and while we don't need a KIP, we 
should make a broader decision as it affects Kafka holistically. \cc [~ijuma] 
[~guozhang] [~hachikuji] [~ChrisEgerton] 

Should we maybe have a discussion on the dev mailing list about it?

> Consider using @Nullable and @Nonnull annotations
> -
>
> Key: KAFKA-9234
> URL: https://issues.apache.org/jira/browse/KAFKA-9234
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, clients, consumer, KafkaConnect, producer , 
> streams, streams-test-utils
>Reporter: Matthias J. Sax
>Assignee: Ganesh Sahu
>Priority: Minor
>  Labels: beginner, newbie
>
> Java 7 was dropped some time ago, and we might want to consider using Java 8 
> `@Nullable` and `@Nonnull` annotations for all public-facing APIs instead of 
> documenting it in JavaDocs only.
> This ticket should be broken down into a series of smaller PRs to keep the 
> scope of each PR contained, allowing for more effective reviews.





[GitHub] [kafka] pprovenzano opened a new pull request, #13338: KAFKA-14765: Support SCRAM for brokers at bootstrap

2023-03-03 Thread via GitHub


pprovenzano opened a new pull request, #13338:
URL: https://github.com/apache/kafka/pull/13338

   Implement KIP-900
   
   Update kafka-storage to be able to add SCRAM records to the bootstrap 
metadata file at format time so that SCRAM is enabled at initial start of KRaft 
cluster.
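   
   For example, something along these lines (flag name and syntax as proposed in KIP-900; treat this invocation as an assumption, not verified output of this PR):
   
       bin/kafka-storage.sh format -t <cluster-id> -c config/kraft/server.properties \
           --add-scram 'SCRAM-SHA-256=[name=alice,password=alice-secret]'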
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Comment Edited] (KAFKA-9234) Consider using @Nullable and @Nonnull annotations

2023-03-03 Thread Adam (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696300#comment-17696300
 ] 

Adam edited comment on KAFKA-9234 at 3/3/23 8:05 PM:
-

I'd like to chip in: Since the last PR there has been an effort to implement a 
standardised library of nullability annotations: 
[JSpecify|https://jspecify.dev/docs/start-here]. It has the backing of [many 
big players in the JVM world|https://jspecify.dev/about#who-are-we], and I 
think it would be a good, forward-thinking decision if Kafka uses it.

While JSpecify hasn't had a 1.0 release, [they do say that it's ready for 
adoption|https://github.com/jspecify/jspecify/wiki/adoption]:

bq. As of 0.3, it's now extremely unlikely we would change any of the four 
annotation interfaces in any incompatible way (that actually breaks your 
build). So once your tools recognize them at all, then switching to them from 
your current ones couldn't really hurt much.

But any sort of nullability annotations would be better than nothing!


was (Author: catchdepthheightlight):
I'd like to chip in: Since the last PR there has been an effort to implement a 
standardised library of nullability annotations: 
[JSpecify|https://jspecify.dev/docs/start-here]. It has the backing of [many 
big players in the JVM world|https://jspecify.dev/about#who-are-we], and I 
think it would be a good, forward-thinking decision if Kafka uses it.

While JSpecify hasn't had a 1.0 release, [they do say that it's ready for 
adoption|https://github.com/jspecify/jspecify/wiki/adoption]:

bq. As of 0.3, it's now extremely unlikely we would change any of the four 
annotation interfaces in any incompatible way (that actually breaks your 
build). So once your tools recognize them at all, then switching to them from 
your current ones couldn't really hurt much.


> Consider using @Nullable and @Nonnull annotations
> -
>
> Key: KAFKA-9234
> URL: https://issues.apache.org/jira/browse/KAFKA-9234
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, clients, consumer, KafkaConnect, producer , 
> streams, streams-test-utils
>Reporter: Matthias J. Sax
>Assignee: Ganesh Sahu
>Priority: Minor
>  Labels: beginner, newbie
>
> Java 7 was dropped some time ago, and we might want to consider using Java 8 
> `@Nullable` and `@Nonnull` annotations for all public-facing APIs instead of 
> documenting it in JavaDocs only.
> This ticket should be broken down into a series of smaller PRs to keep the 
> scope of each PR contained, allowing for more effective reviews.





[jira] [Commented] (KAFKA-9234) Consider using @Nullable and @Nonnull annotations

2023-03-03 Thread Adam (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696300#comment-17696300
 ] 

Adam commented on KAFKA-9234:
-

I'd like to chip in: Since the last PR there has been an effort to implement a 
standardised library of nullability annotations: 
[JSpecify|https://jspecify.dev/docs/start-here]. It has the backing of [many 
big players in the JVM world|https://jspecify.dev/about#who-are-we], and I 
think it would be a good, forward-thinking decision if Kafka uses it.

While JSpecify hasn't had a 1.0 release, [they do say that it's ready for 
adoption|https://github.com/jspecify/jspecify/wiki/adoption]:

> As of 0.3, it's now extremely unlikely we would change any of the four 
> annotation interfaces in any incompatible way (that actually breaks your 
> build). So once your tools recognize them at all, then switching to them from 
> your current ones couldn't really hurt much.
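
As a concrete illustration (hypothetical code, just to show the annotations):

    // org.jspecify.annotations is the package as of JSpecify 0.3.
    import org.jspecify.annotations.NullMarked;
    import org.jspecify.annotations.Nullable;

    @NullMarked // parameters and return values are non-null unless annotated
    interface HeaderAccessor {
        // May return null when the key is absent.
        @Nullable String lastHeaderValue(String key);
    }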


> Consider using @Nullable and @Nonnull annotations
> -
>
> Key: KAFKA-9234
> URL: https://issues.apache.org/jira/browse/KAFKA-9234
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, clients, consumer, KafkaConnect, producer , 
> streams, streams-test-utils
>Reporter: Matthias J. Sax
>Assignee: Ganesh Sahu
>Priority: Minor
>  Labels: beginner, newbie
>
> Java 7 was dropped some time ago, and we might want to consider using Java 8 
> `@Nullable` and `@Nonnull` annotations for all public-facing APIs instead of 
> documenting it in JavaDocs only.
> This ticket should be broken down into a series of smaller PRs to keep the 
> scope of each PR contained, allowing for more effective reviews.





[jira] [Comment Edited] (KAFKA-9234) Consider using @Nullable and @Nonnull annotations

2023-03-03 Thread Adam (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696300#comment-17696300
 ] 

Adam edited comment on KAFKA-9234 at 3/3/23 8:04 PM:
-

I'd like to chip in: Since the last PR there has been an effort to implement a 
standardised library of nullability annotations: 
[JSpecify|https://jspecify.dev/docs/start-here]. It has the backing of [many 
big players in the JVM world|https://jspecify.dev/about#who-are-we], and I 
think it would be a good, forward-thinking decision if Kafka uses it.

While JSpecify hasn't had a 1.0 release, [they do say that it's ready for 
adoption|https://github.com/jspecify/jspecify/wiki/adoption]:

bq. As of 0.3, it's now extremely unlikely we would change any of the four 
annotation interfaces in any incompatible way (that actually breaks your 
build). So once your tools recognize them at all, then switching to them from 
your current ones couldn't really hurt much.



was (Author: catchdepthheightlight):
I'd like to chip in: Since the last PR there has been an effort to implement a 
standardised library of nullability annotations: 
[JSpecify|https://jspecify.dev/docs/start-here]. It has the backing of [many 
big players in the JVM world|https://jspecify.dev/about#who-are-we], and I 
think it would be a good, forward-thinking decision if Kafka uses it.

While JSpecify hasn't had a 1.0 release, [they do say that it's ready for 
adoption|https://github.com/jspecify/jspecify/wiki/adoption]:

> As of 0.3, it's now extremely unlikely we would change any of the four 
> annotation interfaces in any incompatible way (that actually breaks your 
> build). So once your tools recognize them at all, then switching to them from 
> your current ones couldn't really hurt much.


> Consider using @Nullable and @Nonnull annotations
> -
>
> Key: KAFKA-9234
> URL: https://issues.apache.org/jira/browse/KAFKA-9234
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, clients, consumer, KafkaConnect, producer , 
> streams, streams-test-utils
>Reporter: Matthias J. Sax
>Assignee: Ganesh Sahu
>Priority: Minor
>  Labels: beginner, newbie
>
> Java 7 was dropped some time ago, and we might want to consider using Java 8 
> `@Nullable` and `@Nonnull` annotations for all public-facing APIs instead of 
> documenting it in JavaDocs only.
> This ticket should be broken down into a series of smaller PRs to keep the 
> scope of each PR contained, allowing for more effective reviews.





[GitHub] [kafka] jolshan commented on a diff in pull request #13231: KAFKA-14402: Update AddPartitionsToTxn protocol to batch and handle verifyOnly requests

2023-03-03 Thread via GitHub


jolshan commented on code in PR #13231:
URL: https://github.com/apache/kafka/pull/13231#discussion_r1124937664


##
clients/src/main/java/org/apache/kafka/common/requests/AddPartitionsToTxnResponse.java:
##
 @@ -80,45 +105,44 @@ topicName, new AddPartitionsToTxnPartitionResultCollection()
         AddPartitionsToTxnTopicResultCollection topicCollection = new AddPartitionsToTxnTopicResultCollection();
         for (Map.Entry<String, AddPartitionsToTxnPartitionResultCollection> entry : resultMap.entrySet()) {
             topicCollection.add(new AddPartitionsToTxnTopicResult()
-                .setName(entry.getKey())
-                .setResults(entry.getValue()));
+                .setName(entry.getKey())
+                .setResultsByPartition(entry.getValue()));
         }
-
-        this.data = new AddPartitionsToTxnResponseData()
-            .setThrottleTimeMs(throttleTimeMs)
-            .setResults(topicCollection);
+        return topicCollection;
     }
 
-    @Override
-    public int throttleTimeMs() {
-        return data.throttleTimeMs();
+    public static AddPartitionsToTxnResult resultForTransaction(String transactionalId, Map<TopicPartition, Errors> errors) {
+        return new AddPartitionsToTxnResult().setTransactionalId(transactionalId).setTopicResults(topicCollectionForErrors(errors));
     }
 
-    @Override
-    public void maybeSetThrottleTimeMs(int throttleTimeMs) {
-        data.setThrottleTimeMs(throttleTimeMs);
+    public AddPartitionsToTxnTopicResultCollection getTransactionTopicResults(String transactionalId) {
+        return data.resultsByTransaction().find(transactionalId).topicResults();
     }
 
-    public Map<TopicPartition, Errors> errors() {
-        if (cachedErrorsMap != null) {
-            return cachedErrorsMap;
-        }
-
-        cachedErrorsMap = new HashMap<>();
-
-        for (AddPartitionsToTxnTopicResult topicResult : this.data.results()) {
-            for (AddPartitionsToTxnPartitionResult partitionResult : topicResult.results()) {
-                cachedErrorsMap.put(new TopicPartition(
-                    topicResult.name(), partitionResult.partitionIndex()),
-                    Errors.forCode(partitionResult.errorCode()));
+    public static Map<TopicPartition, Errors> errorsForTransaction(AddPartitionsToTxnTopicResultCollection topicCollection) {
+        Map<TopicPartition, Errors> topicResults = new HashMap<>();
+        for (AddPartitionsToTxnTopicResult topicResult : topicCollection) {
+            for (AddPartitionsToTxnPartitionResult partitionResult : topicResult.resultsByPartition()) {
+                topicResults.put(
+                    new TopicPartition(topicResult.name(), partitionResult.partitionIndex()),
+                    Errors.forCode(partitionResult.partitionErrorCode()));
             }
         }
-        return cachedErrorsMap;
+        return topicResults;
     }
 
     @Override
     public Map<Errors, Integer> errorCounts() {
-        return errorCounts(errors().values());
+        List<Errors> allErrors = new ArrayList<>();
+
+        // If we are not using this field, we have request 4 or later
+        if (this.data.resultsByTopicV3AndBelow().size() == 0) {
+            allErrors.add(Errors.forCode(data.errorCode()));

Review Comment:
   I create allErrors because I use addAll for the individual transactions. I 
can place this code after I create error counts, but it doesn't really seem to 
accomplish much. 






[GitHub] [kafka] C0urante commented on pull request #10566: KAFKA-12694 Avoid schema mismatch DataException when validating default values

2023-03-03 Thread via GitHub


C0urante commented on PR #10566:
URL: https://github.com/apache/kafka/pull/10566#issuecomment-1454040953

   Ah thanks @urbandan, that helps. It's worth noting that the latest example 
test case is probably missing calls to actually set the default value for the 
schema builder instances? (Things pass without those.)
   
   This has me thinking more and more about KAFKA-3910, which deals with the 
lack of support for recursive schemas in Connect. IMO this is a deficiency not 
in the API we provide, but in the implementation. And if we address that flaw, 
then we should probably be able to kill two birds with one stone--both the lack 
of support for recursive schemas, and the current issues with specifying a 
default value for struct schemas.
   
   I've prototyped a solution that seeks to address both, which can be found on 
a personal branch here: 
https://github.com/C0urante/kafka/commit/e4a595c5dc4ee5ab3185fb7603130dcc0773677d
   
   Not only does it pass with [every test case discussed here so 
far](https://github.com/C0urante/kafka/blob/e4a595c5dc4ee5ab3185fb7603130dcc0773677d/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaBuilderTest.java#L41-L102),
 it also passes with a [rudimentary test case for recursive 
structs](https://github.com/C0urante/kafka/blob/e4a595c5dc4ee5ab3185fb7603130dcc0773677d/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaBuilderTest.java#L126-L161),
 and for [more-involved combinations of comparisons between built and unbuilt 
schemas that come with default struct 
values](https://github.com/C0urante/kafka/blob/e4a595c5dc4ee5ab3185fb7603130dcc0773677d/connect/api/src/test/java/org/apache/kafka/connect/data/SchemaBuilderTest.java#L104-L123).
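   
   For reference, a minimal snippet of the default-value mismatch as I understand it (illustrative only, not code from the branch):
   
       SchemaBuilder builder = SchemaBuilder.struct().field("f", Schema.STRING_SCHEMA);
       Struct defaultValue = new Struct(builder); // built against the unbuilt SchemaBuilder
       defaultValue.put("f", "x");
       builder.defaultValue(defaultValue).build(); // historically throws DataException: schema mismatch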
   
   There's at least one `TODO` item in it and we'd certainly want to add more 
test coverage, but I'm optimistic that this can satisfy everyone's use cases 
without requiring changes to the Connect API or breaking existing setups.
   
   What do you think?





[GitHub] [kafka] jolshan commented on a diff in pull request #13231: KAFKA-14402: Update AddPartitionsToTxn protocol to batch and handle verifyOnly requests

2023-03-03 Thread via GitHub


jolshan commented on code in PR #13231:
URL: https://github.com/apache/kafka/pull/13231#discussion_r1124925652


##
core/src/test/scala/unit/kafka/coordinator/transaction/TransactionCoordinatorTest.scala:
##
 @@ -1185,4 +1213,8 @@ class TransactionCoordinatorTest {
   def errorsCallback(ret: Errors): Unit = {
     error = ret
   }
+
+  def verifyPartitionsInTxnCallback(result: AddPartitionsToTxnResult): Unit = {
+    errors = AddPartitionsToTxnResponse.errorsForTransaction(result.topicResults()).asScala.toMap

Review Comment:
   I just followed the pattern of the errorsCallback. I'm not going to redo that, but I can redo this if it's an issue.






[GitHub] [kafka] lucasbru commented on a diff in pull request #13318: KAFKA-14533: Do not interrupt state-updater thread during shutdown

2023-03-03 Thread via GitHub


lucasbru commented on code in PR #13318:
URL: https://github.com/apache/kafka/pull/13318#discussion_r1124916956


##
streams/src/main/java/org/apache/kafka/streams/processor/internals/DefaultStateUpdater.java:
##
 @@ -312,16 +305,20 @@ private void addToExceptionsAndFailedTasksThenClearUpdatingTasks(final Exception
         updatingTasks.clear();
     }
 
-    private void waitIfAllChangelogsCompletelyRead() throws InterruptedException {
-        if (isRunning.get() && changelogReader.allChangelogsCompleted()) {
-            tasksAndActionsLock.lock();
-            try {
-                while (tasksAndActions.isEmpty() && !isTopologyResumed.get()) {
-                    tasksAndActionsCondition.await();
-                }
-            } finally {
-                tasksAndActionsLock.unlock();
+    private void waitIfAllChangelogsCompletelyRead() {
+        tasksAndActionsLock.lock();

Review Comment:
   Nit: the name `tasksAndActionsLock`/`Condition` is probably no longer accurate. But we don't need to change it in this PR.






[GitHub] [kafka] guozhangwang commented on a diff in pull request #13330: Prototyping Rebalance Protocol

2023-03-03 Thread via GitHub


guozhangwang commented on code in PR #13330:
URL: https://github.com/apache/kafka/pull/13330#discussion_r1124915057


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ProtocolRequestManager.java:
##
@@ -0,0 +1,118 @@
+package org.apache.kafka.clients.consumer.internals;
+
+import org.apache.kafka.clients.consumer.internals.events.ApplicationEvent;
+
+import java.util.Optional;
+import java.util.concurrent.BlockingQueue;
+
+public class ProtocolRequestManager implements RequestManager {
+    GroupState groupState;
+    HeartbeatRequestManager heartbeatRequestManager;
+    RebalanceProtocol protocol;
+    BlockingQueue<ApplicationEvent> eventQueue;
+    Optional<Long> callbackInvokedMs;
+
+    @Override
+    public NetworkClientDelegate.PollResult poll(long currentTimeMs) {
+        switch (groupState.state) {
+            case UNJOINED:
+                return null;
+            case PREPARE:
+                return protocol.onPrepare(currentTimeMs);
+            case ASSIGNING:
+                invokeRebalance(currentTimeMs);
+                return protocol.onAssign(currentTimeMs);
+            case COMPLETE:
+                return protocol.onComplete(currentTimeMs);
+            case STABLE:
+                return protocol.onStable(currentTimeMs);
+        }
+        return null;
+    }
+
+    public void ackCallbackInvocation(long currentTimeMs) {

Review Comment:
   Not clear when this func would be called?



##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/GroupState.java:
##
 @@ -27,6 +27,15 @@ public class GroupState {
     public final String groupId;
     public final Optional<String> groupInstanceId;
     public Generation generation = Generation.NO_GENERATION;
+    public State state = State.UNJOINED;
+
+    enum State {
+        UNJOINED,
+        PREPARE,
+        ASSIGNING,
+        COMPLETE,

Review Comment:
   What's the difference between COMPLETE and STABLE?






[GitHub] [kafka] cmccabe commented on pull request #13331: MINOR: fix fault handling in ControllerServer and KafkaServer

2023-03-03 Thread via GitHub


cmccabe commented on PR #13331:
URL: https://github.com/apache/kafka/pull/13331#issuecomment-1454006628

   Fix checkstyle





[GitHub] [kafka] cmccabe commented on pull request #13332: KAFKA-14057: Support dynamic reconfiguration in KRaft remote controllers

2023-03-03 Thread via GitHub


cmccabe commented on PR #13332:
URL: https://github.com/apache/kafka/pull/13332#issuecomment-1454000670

   Let me split out the MetadataPublisher refactoring part.
   https://github.com/apache/kafka/pull/13337





[GitHub] [kafka] cmccabe opened a new pull request, #13337: MINOR: Refactor the MetadataPublisher interface

2023-03-03 Thread via GitHub


cmccabe opened a new pull request, #13337:
URL: https://github.com/apache/kafka/pull/13337

   This PR refactors MetadataPublisher's interface a bit. There is now a 
handleControllerChange callback. This is something that some publishers might 
want. A good example is ZkMigrationClient.
   
   There is now only a single publish() function rather than a separate 
function for publishing snapshots and log deltas. Most publishers didn't want 
to do anything different in those two cases. The ones that do want to do 
something different for snapshots can always check the manifest type.
   
   The close function now has a default empty implementation, since most 
publishers didn't need to do anything there.
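   
   In rough outline, the refactored interface has this shape (a sketch only; names and parameter types are approximate, not the PR's literal code):
   
       public interface MetadataPublisher extends AutoCloseable {
           String name();
   
           // New callback, useful for publishers like ZkMigrationClient.
           default void handleControllerChange(int newActiveControllerId) {}
   
           // Single entry point for both snapshots and log deltas; publishers
           // that care about the difference can inspect the manifest type.
           void publish(MetadataDelta delta, MetadataImage newImage, LoaderManifest manifest);
   
           // Default no-op close(), since most publishers need no cleanup.
           @Override
           default void close() throws Exception {}
       }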





[GitHub] [kafka] cmccabe commented on pull request #13114: KAFKA-14084: SCRAM support in KRaft.

2023-03-03 Thread via GitHub


cmccabe commented on PR #13114:
URL: https://github.com/apache/kafka/pull/13114#issuecomment-1453998511

   follow-ons: 
   https://issues.apache.org/jira/browse/KAFKA-14765
   https://issues.apache.org/jira/browse/KAFKA-14775
   https://issues.apache.org/jira/browse/KAFKA-14776





[GitHub] [kafka] guozhangwang commented on pull request #13301: KAFKA-14758: Extract inner classes from Fetcher for reuse in refactoring

2023-03-03 Thread via GitHub


guozhangwang commented on PR #13301:
URL: https://github.com/apache/kafka/pull/13301#issuecomment-1453981950

   Got it, thanks for confirming. The rule of thumb is that any code potentially in production should have test coverage. So as long as we can cover the test gaps before the 3.5 release, I think we can skip the test in this PR.





[GitHub] [kafka] guozhangwang merged pull request #13082: MINOR: Clarify docs for Streams config max.warmup.replicas.

2023-03-03 Thread via GitHub


guozhangwang merged PR #13082:
URL: https://github.com/apache/kafka/pull/13082





[jira] [Commented] (KAFKA-14748) Relax non-null FK left-join requirement

2023-03-03 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696279#comment-17696279
 ] 

Guozhang Wang commented on KAFKA-14748:
---

For table-table FK left-joins, if we have a record B whose extractKey function returns null, today we would not emit any record; if we allow it to return null, we would emit a join record. And if there are no further records for this join key, that record would be the last one, and hence the emitting behavior has indeed changed.

> Relax non-null FK left-join requirement
> ---
>
> Key: KAFKA-14748
> URL: https://issues.apache.org/jira/browse/KAFKA-14748
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>
> Kafka Streams enforces a strict non-null-key policy in the DSL across all 
> key-dependent operations (like aggregations and joins).
> This also applies to FK-joins, in particular to the ForeignKeyExtractor. If 
> it returns `null`, it's treated as invalid. For left-joins, it might make 
> sense to still accept a `null`, and add the left-hand record with an empty 
> right-hand-side to the result.





[GitHub] [kafka] guozhangwang commented on pull request #13318: KAFKA-14533: Do not interrupt state-updater thread during shutdown

2023-03-03 Thread via GitHub


guozhangwang commented on PR #13318:
URL: https://github.com/apache/kafka/pull/13318#issuecomment-1453957529

   > We had a failure of Build / JDK 11 and Scala 2.13 / 
org.apache.kafka.streams.integration.PauseResumeIntegrationTest.[1] true. I'm 
pretty sure that one was not flaky before. Any chance this PR breaks it? Also 
there are more failures due to the logging changes.
   
   I fixed the test utils for all the `StreamsPartitionAssignorTest` cases, but I cannot find the failure of `PauseResumeIntegrationTest` in run 15; which run did you see it in?





[GitHub] [kafka] philipnee closed pull request #13269: KAFKA-12634 enforce checkpoint after restoration

2023-03-03 Thread via GitHub


philipnee closed pull request #13269: KAFKA-12634 enforce checkpoint after 
restoration
URL: https://github.com/apache/kafka/pull/13269





[GitHub] [kafka] cmccabe merged pull request #13114: KAFKA-14084: SCRAM support in KRaft.

2023-03-03 Thread via GitHub


cmccabe merged PR #13114:
URL: https://github.com/apache/kafka/pull/13114





[GitHub] [kafka] philipnee commented on pull request #13269: KAFKA-12634 enforce checkpoint after restoration

2023-03-03 Thread via GitHub


philipnee commented on PR #13269:
URL: https://github.com/apache/kafka/pull/13269#issuecomment-1453921659

   I took a peek at it because @mjsax mentioned it is easy to work on.





[GitHub] [kafka] philipnee commented on pull request #13269: KAFKA-12634 enforce checkpoint after restoration

2023-03-03 Thread via GitHub


philipnee commented on PR #13269:
URL: https://github.com/apache/kafka/pull/13269#issuecomment-1453919620

   Oh yeah, I was wondering about that, too - it's up to you. We can abandon 
this and wait for 10199 to be completed.





[GitHub] [kafka] guozhangwang merged pull request #13336: MINOR: update RocksDBMetricsRecorder test to JUnit5 and fix memory leak

2023-03-03 Thread via GitHub


guozhangwang merged PR #13336:
URL: https://github.com/apache/kafka/pull/13336





[GitHub] [kafka] philipnee commented on pull request #12038: KAFKA-13421 - fix flaky ConsumerBounceTest#testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize

2023-03-03 Thread via GitHub


philipnee commented on PR #12038:
URL: https://github.com/apache/kafka/pull/12038#issuecomment-1453914893

   Well, not "hard to reproduce", I have not had luck to reproduce it.





[GitHub] [kafka] guozhangwang commented on pull request #13269: KAFKA-12634 enforce checkpoint after restoration

2023-03-03 Thread via GitHub


guozhangwang commented on PR #13269:
URL: https://github.com/apache/kafka/pull/13269#issuecomment-1453914588

   Sorry for jumping into this PR late @philipnee. For this ticket, we have fixed it in the new state updater thread ([KAFKA-10199](https://issues.apache.org/jira/browse/KAFKA-10199)), and we are expecting to enable the state updater soon. So I'd suggest we not try to fix it inside the main stream thread any more. WDYT?





[jira] [Commented] (KAFKA-9234) Consider using @Nullable and @Nonnull annotations

2023-03-03 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696249#comment-17696249
 ] 

Matthias J. Sax commented on KAFKA-9234:


{quote}I just joined the community this week.
{quote}
Welcome!
{quote} I noticed there hasn't been any activity on this, so thought of taking 
it
{quote}
Thanks!
{quote}Can you please advise on how I can reassign the task to myself? Do I 
need to obtain contributor access?
{quote}
Fixed. I just added you as a "contributor" in Jira and assigned the ticket to you. You can now also self-assign tickets.

> Consider using @Nullable and @Nonnull annotations
> -
>
> Key: KAFKA-9234
> URL: https://issues.apache.org/jira/browse/KAFKA-9234
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, clients, consumer, KafkaConnect, producer , 
> streams, streams-test-utils
>Reporter: Matthias J. Sax
>Assignee: Ganesh Sahu
>Priority: Minor
>  Labels: beginner, newbie
>
> Java 7 was dropped some time ago, and we might want to consider using Java 8 
> `@Nullable` and `@Nonnull` annotations for all public-facing APIs instead of 
> documenting it in JavaDocs only.
> This ticket should be broken down into a series of smaller PRs to keep the 
> scope of each PR contained, allowing for more effective reviews.





[jira] [Assigned] (KAFKA-9234) Consider using @Nullable and @Nonnull annotations

2023-03-03 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-9234:
--

Assignee: Ganesh Sahu  (was: Manasvi Gupta)

> Consider using @Nullable and @Nonnull annotations
> -
>
> Key: KAFKA-9234
> URL: https://issues.apache.org/jira/browse/KAFKA-9234
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, clients, consumer, KafkaConnect, producer , 
> streams, streams-test-utils
>Reporter: Matthias J. Sax
>Assignee: Ganesh Sahu
>Priority: Minor
>  Labels: beginner, newbie
>
> Java 7 was dropped some time ago, and we might want to consider using Java 8 
> `@Nullable` and `@Nonnull` annotations for all public-facing APIs instead of 
> documenting it in JavaDocs only.
> This ticket should be broken down into a series of smaller PRs to keep the 
> scope of each PR contained, allowing for more effective reviews.





[GitHub] [kafka] jolshan commented on a diff in pull request #13231: KAFKA-14402: Update AddPartitionsToTxn protocol to batch and handle verifyOnly requests

2023-03-03 Thread via GitHub


jolshan commented on code in PR #13231:
URL: https://github.com/apache/kafka/pull/13231#discussion_r1124831869


##
core/src/main/scala/kafka/coordinator/transaction/TransactionCoordinator.scala:
##
 @@ -317,6 +322,34 @@ class TransactionCoordinator(txnConfig: TransactionConfig,
       }
     }
   }
+
+  def handleVerifyPartitionsInTransaction(transactionalId: String,
+                                          producerId: Long,
+                                          producerEpoch: Short,
+                                          partitions: collection.Set[TopicPartition],
+                                          responseCallback: VerifyPartitionsCallback): Unit = {
+    if (transactionalId == null || transactionalId.isEmpty) {
+      debug(s"Returning ${Errors.INVALID_REQUEST} error code to client for $transactionalId's AddPartitions request")
+      responseCallback(AddPartitionsToTxnResponse.resultForTransaction(transactionalId, partitions.map(_ -> Errors.INVALID_REQUEST).toMap.asJava))
+    } else {
+      val result: ApiResult[(Int, TransactionMetadata)] = getTransactionMetadata(transactionalId, producerId, producerEpoch, partitions)
+
+      result match {
+        case Left(err) =>
+          debug(s"Returning $err error code to client for $transactionalId's AddPartitions request")
+          responseCallback(AddPartitionsToTxnResponse.resultForTransaction(transactionalId, partitions.map(_ -> err).toMap.asJava))
+
+        case Right((_, txnMetadata)) =>
+          val txnMetadataPartitions = txnMetadata.topicPartitions
+          val addedPartitions = partitions.intersect(txnMetadataPartitions)
+          val nonAddedPartitions = partitions.diff(txnMetadataPartitions)
+          val errors = mutable.Map[TopicPartition, Errors]()
+          addedPartitions.foreach(errors.put(_, Errors.NONE))
+          nonAddedPartitions.foreach(errors.put(_, Errors.INVALID_TXN_STATE))

Review Comment:
   sure






[GitHub] [kafka] jolshan commented on a diff in pull request #13231: KAFKA-14402: Update AddPartitionsToTxn protocol to batch and handle verifyOnly requests

2023-03-03 Thread via GitHub


jolshan commented on code in PR #13231:
URL: https://github.com/apache/kafka/pull/13231#discussion_r1124831349


##
clients/src/test/java/org/apache/kafka/common/requests/AddPartitionsToTxnResponseTest.java:
##
@@ -58,42 +65,75 @@ public void setUp() {
 errorsMap.put(tp2, errorTwo);
 }
 
-@Test
-public void testConstructorWithErrorResponse() {
-AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(throttleTimeMs, errorsMap);
-
-assertEquals(expectedErrorCounts, response.errorCounts());
-assertEquals(throttleTimeMs, response.throttleTimeMs());
-}
-
-@Test
-public void testParse() {
-
+@ParameterizedTest
+@ApiKeyVersionsSource(apiKey = ApiKeys.ADD_PARTITIONS_TO_TXN)
+public void testParse(short version) {
 AddPartitionsToTxnTopicResultCollection topicCollection = new 
AddPartitionsToTxnTopicResultCollection();
 
 AddPartitionsToTxnTopicResult topicResult = new 
AddPartitionsToTxnTopicResult();
 topicResult.setName(topicOne);
 
-topicResult.results().add(new AddPartitionsToTxnPartitionResult()
-  .setErrorCode(errorOne.code())
+topicResult.resultsByPartition().add(new 
AddPartitionsToTxnPartitionResult()
+  .setPartitionErrorCode(errorOne.code())
   .setPartitionIndex(partitionOne));
 
-topicResult.results().add(new AddPartitionsToTxnPartitionResult()
-  .setErrorCode(errorTwo.code())
+topicResult.resultsByPartition().add(new 
AddPartitionsToTxnPartitionResult()
+  .setPartitionErrorCode(errorTwo.code())
   .setPartitionIndex(partitionTwo));
 
 topicCollection.add(topicResult);
+
+if (version < 4) {
+AddPartitionsToTxnResponseData data = new 
AddPartitionsToTxnResponseData()
+.setResultsByTopicV3AndBelow(topicCollection)
+.setThrottleTimeMs(throttleTimeMs);
+AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(data);
 
-AddPartitionsToTxnResponseData data = new 
AddPartitionsToTxnResponseData()
-  .setResults(topicCollection)
-  
.setThrottleTimeMs(throttleTimeMs);
-AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(data);
-
-for (short version : ApiKeys.ADD_PARTITIONS_TO_TXN.allVersions()) {
 AddPartitionsToTxnResponse parsedResponse = 
AddPartitionsToTxnResponse.parse(response.serialize(version), version);
 assertEquals(expectedErrorCounts, parsedResponse.errorCounts());
 assertEquals(throttleTimeMs, parsedResponse.throttleTimeMs());
 assertEquals(version >= 1, 
parsedResponse.shouldClientThrottle(version));
+} else {
+AddPartitionsToTxnResultCollection results = new 
AddPartitionsToTxnResultCollection();
+results.add(new 
AddPartitionsToTxnResult().setTransactionalId("txn1").setTopicResults(topicCollection));
+
+// Create another transaction with new name and errorOne for a 
single partition.
+Map txnTwoExpectedErrors = 
Collections.singletonMap(tp2, errorOne);
+
results.add(AddPartitionsToTxnResponse.resultForTransaction("txn2", 
txnTwoExpectedErrors));
+
+AddPartitionsToTxnResponseData data = new 
AddPartitionsToTxnResponseData()
+.setResultsByTransaction(results)
+.setThrottleTimeMs(throttleTimeMs);
+AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(data);
+
+Map newExpectedErrorCounts = new HashMap<>();
+newExpectedErrorCounts.put(Errors.NONE, 1); // top level error
+newExpectedErrorCounts.put(errorOne, 2);
+newExpectedErrorCounts.put(errorTwo, 1);
+
+AddPartitionsToTxnResponse parsedResponse = 
AddPartitionsToTxnResponse.parse(response.serialize(version), version);
+assertEquals(txnTwoExpectedErrors, 
errorsForTransaction(response.getTransactionTopicResults("txn2")));
+assertEquals(newExpectedErrorCounts, parsedResponse.errorCounts());
+assertEquals(throttleTimeMs, parsedResponse.throttleTimeMs());
+assertTrue(parsedResponse.shouldClientThrottle(version));
 }
 }
+
+@Test
+public void testBatchedErrors() {
+Map<TopicPartition, Errors> txn1Errors = Collections.singletonMap(tp1, 
errorOne);
+Map<TopicPartition, Errors> txn2Errors = Collections.singletonMap(tp1, 
errorOne);
+
+AddPartitionsToTxnResult transaction1 = 
AddPartitionsToTxnResponse.resultForTransaction("txn1", txn1Errors);
+AddPartitionsToTxnResult transaction2 = 

[GitHub] [kafka] cmccabe commented on a diff in pull request #13114: KAFKA-14084: SCRAM support in KRaft.

2023-03-03 Thread via GitHub


cmccabe commented on code in PR #13114:
URL: https://github.com/apache/kafka/pull/13114#discussion_r1124830672


##
metadata/src/test/java/org/apache/kafka/image/ScramImageTest.java:
##
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.image;
+
+import org.apache.kafka.clients.admin.ScramMechanism;
+import org.apache.kafka.common.metadata.RemoveUserScramCredentialRecord;
+import org.apache.kafka.common.metadata.UserScramCredentialRecord;
+import org.apache.kafka.image.writer.ImageWriterOptions;
+import org.apache.kafka.image.writer.RecordListWriter;
+import org.apache.kafka.server.common.MetadataVersion;
+import org.apache.kafka.server.util.MockRandom;
+import org.apache.kafka.metadata.RecordTestUtils;
+import org.apache.kafka.server.common.ApiMessageAndVersion;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.Timeout;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+import static org.apache.kafka.clients.admin.ScramMechanism.SCRAM_SHA_256;
+import static org.apache.kafka.clients.admin.ScramMechanism.SCRAM_SHA_512;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.fail;
+
+
+@Timeout(value = 40)
+public class ScramImageTest {
+final static ScramImage IMAGE1;
+
+final static List<ApiMessageAndVersion> DELTA1_RECORDS;
+
+final static ScramDelta DELTA1;
+
+final static ScramImage IMAGE2;
+
+static byte[] randomBuffer(Random random, int length) {
+byte[] buf = new byte[length];
+random.nextBytes(buf);
+return buf;
+}
+
+static ScramCredentialData randomScramCredentialData(Random random) {
+return new ScramCredentialData(
+randomBuffer(random, 1024),
+randomBuffer(random, 1024),
+1024 + random.nextInt(1024));
+}
+
+static {
+MockRandom random = new MockRandom();
+
+Map<ScramMechanism, Map<String, ScramCredentialData>> image1mechanisms = new HashMap<>();
+
+Map<String, ScramCredentialData> image1sha256 = new HashMap<>();
+image1sha256.put("alpha", randomScramCredentialData(random));
+image1sha256.put("beta", randomScramCredentialData(random));
+image1mechanisms.put(SCRAM_SHA_256, image1sha256);
+
+Map<String, ScramCredentialData> image1sha512 = new HashMap<>();
+image1sha512.put("alpha", randomScramCredentialData(random));
+image1sha512.put("gamma", randomScramCredentialData(random));
+image1mechanisms.put(SCRAM_SHA_512, image1sha512);
+
+IMAGE1 = new ScramImage(image1mechanisms);
+
+DELTA1_RECORDS = new ArrayList<>();
+DELTA1_RECORDS.add(new ApiMessageAndVersion(new 
RemoveUserScramCredentialRecord().
+setName("gamma").
+setMechanism(SCRAM_SHA_512.type()), (short) 0));
+ScramCredentialData secondAlpha256Credential = 
randomScramCredentialData(random);
+DELTA1_RECORDS.add(new ApiMessageAndVersion(new 
UserScramCredentialRecord().
+setName("alpha").
+setMechanism(SCRAM_SHA_256.type()).
+setIterations(secondAlpha256Credential.iterations()).
+setSalt(secondAlpha256Credential.salt()).
+setSaltedPassword(secondAlpha256Credential.saltedPassword()), 
(short) 0));
+DELTA1 = new ScramDelta(IMAGE1);
+RecordTestUtils.replayAll(DELTA1, DELTA1_RECORDS);
+
+Map<ScramMechanism, Map<String, ScramCredentialData>> image2mechanisms = new HashMap<>();
+
+Map<String, ScramCredentialData> image2sha256 = new HashMap<>();
+image2sha256.put("alpha", secondAlpha256Credential);
+image2sha256.put("beta", image1sha256.get("beta"));
+image2mechanisms.put(SCRAM_SHA_256, image2sha256);
+
+Map<String, ScramCredentialData> image2sha512 = new HashMap<>();
+image2sha512.put("alpha", image1sha512.get("alpha"));
+image2mechanisms.put(SCRAM_SHA_512, image2sha512);
+
+IMAGE2 = new ScramImage(image2mechanisms);
+}
+
+@Test
+public void testEmptyImageRoundTrip() throws Throwable {
+testToImageAndBack(ScramImage.EMPTY);
+}
+
+@Test
+public void testImage1RoundTrip() throws Throwable {
+testToImageAndBack(IMAGE1);
+}
+
+@Test
+public void 

[GitHub] [kafka] jolshan commented on a diff in pull request #13231: KAFKA-14402: Update AddPartitionsToTxn protocol to batch and handle verifyOnly requests

2023-03-03 Thread via GitHub


jolshan commented on code in PR #13231:
URL: https://github.com/apache/kafka/pull/13231#discussion_r1124830314


##
clients/src/test/java/org/apache/kafka/common/requests/AddPartitionsToTxnResponseTest.java:
##
@@ -58,42 +65,75 @@ public void setUp() {
 errorsMap.put(tp2, errorTwo);
 }
 
-@Test
-public void testConstructorWithErrorResponse() {
-AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(throttleTimeMs, errorsMap);
-
-assertEquals(expectedErrorCounts, response.errorCounts());
-assertEquals(throttleTimeMs, response.throttleTimeMs());
-}
-
-@Test
-public void testParse() {
-
+@ParameterizedTest
+@ApiKeyVersionsSource(apiKey = ApiKeys.ADD_PARTITIONS_TO_TXN)
+public void testParse(short version) {
 AddPartitionsToTxnTopicResultCollection topicCollection = new 
AddPartitionsToTxnTopicResultCollection();
 
 AddPartitionsToTxnTopicResult topicResult = new 
AddPartitionsToTxnTopicResult();
 topicResult.setName(topicOne);
 
-topicResult.results().add(new AddPartitionsToTxnPartitionResult()
-  .setErrorCode(errorOne.code())
+topicResult.resultsByPartition().add(new 
AddPartitionsToTxnPartitionResult()
+  .setPartitionErrorCode(errorOne.code())
   .setPartitionIndex(partitionOne));
 
-topicResult.results().add(new AddPartitionsToTxnPartitionResult()
-  .setErrorCode(errorTwo.code())
+topicResult.resultsByPartition().add(new 
AddPartitionsToTxnPartitionResult()
+  .setPartitionErrorCode(errorTwo.code())
   .setPartitionIndex(partitionTwo));
 
 topicCollection.add(topicResult);
+
+if (version < 4) {
+AddPartitionsToTxnResponseData data = new 
AddPartitionsToTxnResponseData()
+.setResultsByTopicV3AndBelow(topicCollection)
+.setThrottleTimeMs(throttleTimeMs);
+AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(data);
 
-AddPartitionsToTxnResponseData data = new 
AddPartitionsToTxnResponseData()
-  .setResults(topicCollection)
-  
.setThrottleTimeMs(throttleTimeMs);
-AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(data);
-
-for (short version : ApiKeys.ADD_PARTITIONS_TO_TXN.allVersions()) {
 AddPartitionsToTxnResponse parsedResponse = 
AddPartitionsToTxnResponse.parse(response.serialize(version), version);
 assertEquals(expectedErrorCounts, parsedResponse.errorCounts());
 assertEquals(throttleTimeMs, parsedResponse.throttleTimeMs());
 assertEquals(version >= 1, 
parsedResponse.shouldClientThrottle(version));
+} else {
+AddPartitionsToTxnResultCollection results = new 
AddPartitionsToTxnResultCollection();
+results.add(new 
AddPartitionsToTxnResult().setTransactionalId("txn1").setTopicResults(topicCollection));
+
+// Create another transaction with new name and errorOne for a 
single partition.
+Map<TopicPartition, Errors> txnTwoExpectedErrors = 
Collections.singletonMap(tp2, errorOne);
+
results.add(AddPartitionsToTxnResponse.resultForTransaction("txn2", 
txnTwoExpectedErrors));
+
+AddPartitionsToTxnResponseData data = new 
AddPartitionsToTxnResponseData()
+.setResultsByTransaction(results)
+.setThrottleTimeMs(throttleTimeMs);
+AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(data);
+
+Map<Errors, Integer> newExpectedErrorCounts = new HashMap<>();
+newExpectedErrorCounts.put(Errors.NONE, 1); // top level error
+newExpectedErrorCounts.put(errorOne, 2);
+newExpectedErrorCounts.put(errorTwo, 1);
+
+AddPartitionsToTxnResponse parsedResponse = 
AddPartitionsToTxnResponse.parse(response.serialize(version), version);
+assertEquals(txnTwoExpectedErrors, 
errorsForTransaction(response.getTransactionTopicResults("txn2")));
+assertEquals(newExpectedErrorCounts, parsedResponse.errorCounts());
+assertEquals(throttleTimeMs, parsedResponse.throttleTimeMs());
+assertTrue(parsedResponse.shouldClientThrottle(version));
 }
 }
+
+@Test
+public void testBatchedErrors() {
+Map<TopicPartition, Errors> txn1Errors = Collections.singletonMap(tp1, 
errorOne);
+Map<TopicPartition, Errors> txn2Errors = Collections.singletonMap(tp1, 
errorOne);
+
+AddPartitionsToTxnResult transaction1 = 
AddPartitionsToTxnResponse.resultForTransaction("txn1", txn1Errors);
+AddPartitionsToTxnResult transaction2 = 

[GitHub] [kafka] dajac commented on a diff in pull request #13231: KAFKA-14402: Update AddPartitionsToTxn protocol to batch and handle verifyOnly requests

2023-03-03 Thread via GitHub


dajac commented on code in PR #13231:
URL: https://github.com/apache/kafka/pull/13231#discussion_r1124792127


##
clients/src/main/java/org/apache/kafka/common/requests/AddPartitionsToTxnRequest.java:
##
@@ -34,22 +43,38 @@ public class AddPartitionsToTxnRequest extends 
AbstractRequest {
 
 private final AddPartitionsToTxnRequestData data;
 
-private List<TopicPartition> cachedPartitions = null;
-
 public static class Builder extends 
AbstractRequest.Builder<AddPartitionsToTxnRequest> {
 public final AddPartitionsToTxnRequestData data;
+
+public static Builder forClient(String transactionalId,
+long producerId,
+short producerEpoch,
+List<TopicPartition> partitions) {
+
+AddPartitionsToTxnTopicCollection topics = 
buildTxnTopicCollection(partitions);
+
+return new Builder(ApiKeys.ADD_PARTITIONS_TO_TXN.oldestVersion(),
+(short) 3, 

Review Comment:
   nit: Should we put `(short) 3` on the previous line to be consistent with 
how you did it at L66?



##
clients/src/main/java/org/apache/kafka/common/requests/AddPartitionsToTxnRequest.java:
##
@@ -34,22 +43,38 @@ public class AddPartitionsToTxnRequest extends 
AbstractRequest {
 
 private final AddPartitionsToTxnRequestData data;
 
-private List<TopicPartition> cachedPartitions = null;
-
 public static class Builder extends 
AbstractRequest.Builder<AddPartitionsToTxnRequest> {
 public final AddPartitionsToTxnRequestData data;
+
+public static Builder forClient(String transactionalId,
+long producerId,
+short producerEpoch,
+List<TopicPartition> partitions) {
+
+AddPartitionsToTxnTopicCollection topics = 
buildTxnTopicCollection(partitions);
+
+return new Builder(ApiKeys.ADD_PARTITIONS_TO_TXN.oldestVersion(),
+(short) 3, 
+new AddPartitionsToTxnRequestData()
+.setV3AndBelowTransactionalId(transactionalId)
+.setV3AndBelowProducerId(producerId)
+.setV3AndBelowProducerEpoch(producerEpoch)
+.setV3AndBelowTopics(topics));
+}
+
+public static Builder 
forBroker(AddPartitionsToTxnTransactionCollection transactions) {
+return new Builder((short) 4, 
ApiKeys.ADD_PARTITIONS_TO_TXN.latestVersion(),
+new AddPartitionsToTxnRequestData()
+.setTransactions(transactions));
+}
+
+public Builder(short minVersion, short maxVersion, 
AddPartitionsToTxnRequestData data) {

Review Comment:
   nit: Do we still use this constructor anywhere? It may be good to make it 
private or package private to ensure that `forClient` or `forBroker` is used.
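
   For illustration, a minimal self-contained sketch of the suggested pattern (hypothetical class names, not the actual Kafka Builder): the static factories are the only public entry points, and the constructor stays package-private.

    public final class RequestBuilderSketch {
        private final short minVersion;
        private final short maxVersion;

        public static RequestBuilderSketch forClient() {
            return new RequestBuilderSketch((short) 0, (short) 3);
        }

        public static RequestBuilderSketch forBroker() {
            return new RequestBuilderSketch((short) 4, (short) 4);
        }

        // package-private: code outside this package must go through the factories
        RequestBuilderSketch(short minVersion, short maxVersion) {
            this.minVersion = minVersion;
            this.maxVersion = maxVersion;
        }
    }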



##
clients/src/main/java/org/apache/kafka/common/requests/AddPartitionsToTxnResponse.java:
##
@@ -48,29 +51,51 @@ public class AddPartitionsToTxnResponse extends 
AbstractResponse {
 
 private final AddPartitionsToTxnResponseData data;
 
-private Map<TopicPartition, Errors> cachedErrorsMap = null;
+public static final String V3_AND_BELOW_TXN_ID = "";
 
 public AddPartitionsToTxnResponse(AddPartitionsToTxnResponseData data) {
 super(ApiKeys.ADD_PARTITIONS_TO_TXN);
 this.data = data;
 }
 
-public AddPartitionsToTxnResponse(int throttleTimeMs, Map<TopicPartition, Errors> errors) {
-super(ApiKeys.ADD_PARTITIONS_TO_TXN);
+@Override
+public int throttleTimeMs() {
+return data.throttleTimeMs();
+}
+
+@Override
+public void maybeSetThrottleTimeMs(int throttleTimeMs) {
+data.setThrottleTimeMs(throttleTimeMs);
+}
+
+public Map<String, Map<TopicPartition, Errors>> errors() {
+Map<String, Map<TopicPartition, Errors>> errorsMap = new HashMap<>();
+
+if (this.data.resultsByTopicV3AndBelow().size() != 0) {

Review Comment:
   nit: I think that we usually prefer using `isEmpty()`.



##
clients/src/test/java/org/apache/kafka/common/requests/AddPartitionsToTxnResponseTest.java:
##
@@ -58,42 +65,75 @@ public void setUp() {
 errorsMap.put(tp2, errorTwo);
 }
 
-@Test
-public void testConstructorWithErrorResponse() {
-AddPartitionsToTxnResponse response = new 
AddPartitionsToTxnResponse(throttleTimeMs, errorsMap);
-
-assertEquals(expectedErrorCounts, response.errorCounts());
-assertEquals(throttleTimeMs, response.throttleTimeMs());
-}
-
-@Test
-public void testParse() {
-
+@ParameterizedTest
+@ApiKeyVersionsSource(apiKey = ApiKeys.ADD_PARTITIONS_TO_TXN)
+public void testParse(short version) {
 AddPartitionsToTxnTopicResultCollection topicCollection = new 
AddPartitionsToTxnTopicResultCollection();
 
 AddPartitionsToTxnTopicResult topicResult = new 
AddPartitionsToTxnTopicResult();
 topicResult.setName(topicOne);
 
-

[GitHub] [kafka] pprovenzano commented on pull request #13114: KAFKA-14084: SCRAM support in KRaft.

2023-03-03 Thread via GitHub


pprovenzano commented on PR #13114:
URL: https://github.com/apache/kafka/pull/13114#issuecomment-1453878000

   I've added the MetadataVersion fixes with tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] viktorsomogyi commented on a diff in pull request #12992: KIP-887: Add ConfigProvider to make use of environment variables

2023-03-03 Thread via GitHub


viktorsomogyi commented on code in PR #12992:
URL: https://github.com/apache/kafka/pull/12992#discussion_r1124689076


##
clients/src/main/java/org/apache/kafka/common/config/provider/EnvVarConfigProvider.java:
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.config.provider;
+
+import org.apache.kafka.common.config.ConfigData;
+import org.apache.kafka.common.config.ConfigException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
+public class EnvVarConfigProvider implements ConfigProvider {
+private final Map<String, String> envVarMap;
+
+public EnvVarConfigProvider() {
+envVarMap = getEnvVars();
+}
+
+public EnvVarConfigProvider(Map<String, String> envVarsAsArgument) {
+envVarMap = envVarsAsArgument;
+}
+
+private static final Logger log = 
LoggerFactory.getLogger(EnvVarConfigProvider.class);
+
+@Override
+public void configure(Map<String, ?> configs) {
+}

Review Comment:
   Recently we had 
[CVE-2023-25194](https://nvd.nist.gov/vuln/detail/CVE-2023-25194) which also 
assumes an authorized user. In this sense I think it's good not to assume that 
users with certain levels of access are non-malicious, although this case indeed 
seems less severe: the configuration isn't enabled by default and requires very 
high-level access (like modifying the Connect configuration), but it still allows 
some exploitation by trial and error.
   So if it's already implemented and we default to the most sensible values, I 
don't mind this addition, although we should notify the KIP thread too because 
of the extra config (also because someone else might object).
   @omkreddy what do you think of the security implications? (as you were 
involved in the fix of the CVE above)
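
   For context, a minimal usage sketch of the provider under discussion, assuming the KIP-887 class name and that get("") returns the full environment as ConfigData:

    import java.io.IOException;
    import org.apache.kafka.common.config.ConfigData;
    import org.apache.kafka.common.config.provider.EnvVarConfigProvider;

    public class EnvVarExample {
        public static void main(String[] args) throws IOException {
            try (EnvVarConfigProvider provider = new EnvVarConfigProvider()) {
                ConfigData data = provider.get("");      // all environment variables
                String home = data.data().get("HOME");   // read a single variable
                System.out.println("HOME=" + home);
            }
        }
    }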



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] viktorsomogyi commented on a diff in pull request #12992: KIP-887: Add ConfigProvider to make use of environment variables

2023-03-03 Thread via GitHub


viktorsomogyi commented on code in PR #12992:
URL: https://github.com/apache/kafka/pull/12992#discussion_r1124689076


##
clients/src/main/java/org/apache/kafka/common/config/provider/EnvVarConfigProvider.java:
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.config.provider;
+
+import org.apache.kafka.common.config.ConfigData;
+import org.apache.kafka.common.config.ConfigException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
+public class EnvVarConfigProvider implements ConfigProvider {
+private final Map<String, String> envVarMap;
+
+public EnvVarConfigProvider() {
+envVarMap = getEnvVars();
+}
+
+public EnvVarConfigProvider(Map<String, String> envVarsAsArgument) {
+envVarMap = envVarsAsArgument;
+}
+
+private static final Logger log = 
LoggerFactory.getLogger(EnvVarConfigProvider.class);
+
+@Override
+public void configure(Map<String, ?> configs) {
+}

Review Comment:
   Recently we had 
[CVE-2023-25194](https://nvd.nist.gov/vuln/detail/CVE-2023-25194) which also 
assumes an authorized user. In this sense I think it's good not to assume that 
users with certain levels of access are non-malicious, although this case indeed 
seems less severe: the configuration isn't enabled by default and requires very 
high-level access (like modifying the Connect configuration), but it still allows 
some exploitation by trial and error.
   So if it's already implemented and we default to the most sensible values, I 
don't mind this addition, although we should notify the KIP thread too (also 
because someone else might object).
   @omkreddy what do you think of the security implications? (as you were 
involved in the fix of the CVE above)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] philipnee commented on pull request #13269: KAFKA-12634 enforce checkpoint after restoration

2023-03-03 Thread via GitHub


philipnee commented on PR #13269:
URL: https://github.com/apache/kafka/pull/13269#issuecomment-1453776408

   @cadonna - Thanks for the feedback there.  I added the EOS check before the 
checkpoint. I also made a bunch of changes to the tests.  To make things 
obvious, I added a comment to these altered tests to let people know about the 
checkpointing.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] viktorsomogyi commented on a diff in pull request #12992: KIP-887: Add ConfigProvider to make use of environment variables

2023-03-03 Thread via GitHub


viktorsomogyi commented on code in PR #12992:
URL: https://github.com/apache/kafka/pull/12992#discussion_r1124689076


##
clients/src/main/java/org/apache/kafka/common/config/provider/EnvVarConfigProvider.java:
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.config.provider;
+
+import org.apache.kafka.common.config.ConfigData;
+import org.apache.kafka.common.config.ConfigException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
+public class EnvVarConfigProvider implements ConfigProvider {
+private final Map<String, String> envVarMap;
+
+public EnvVarConfigProvider() {
+envVarMap = getEnvVars();
+}
+
+public EnvVarConfigProvider(Map<String, String> envVarsAsArgument) {
+envVarMap = envVarsAsArgument;
+}
+
+private static final Logger log = 
LoggerFactory.getLogger(EnvVarConfigProvider.class);
+
+@Override
+public void configure(Map<String, ?> configs) {
+}

Review Comment:
   Recently we had 
[CVE-2023-25194](https://nvd.nist.gov/vuln/detail/CVE-2023-25194) which also 
assumes an authorized user. In this sense I think it's good not to assume that 
users with certain levels of access are non-malicious, although this case indeed 
seems less severe: the configuration isn't enabled by default and requires very 
high-level access (like modifying the Connect configuration), but it still allows 
some exploitation by trial and error.
   So if it's already implemented and we default to the most sensible values, I 
don't mind this change.



##
clients/src/main/java/org/apache/kafka/common/config/provider/EnvVarConfigProvider.java:
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.config.provider;
+
+import org.apache.kafka.common.config.ConfigData;
+import org.apache.kafka.common.config.ConfigException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
+public class EnvVarConfigProvider implements ConfigProvider {
+private final Map<String, String> envVarMap;
+
+public EnvVarConfigProvider() {
+envVarMap = getEnvVars();
+}
+
+public EnvVarConfigProvider(Map<String, String> envVarsAsArgument) {
+envVarMap = envVarsAsArgument;
+}
+
+private static final Logger log = 
LoggerFactory.getLogger(EnvVarConfigProvider.class);
+
+@Override
+public void configure(Map<String, ?> configs) {
+}

Review Comment:
   Recently we had 
[CVE-2023-25194](https://nvd.nist.gov/vuln/detail/CVE-2023-25194) which also 
assumes an authorized user. In this sense I think it's good not to assume that 
users with certain levels of access are non-malicious, although this case indeed 
seems less severe: the configuration isn't enabled by default and requires very 
high-level access (like modifying the Connect configuration), but it still allows 
some exploitation by trial and error.
   So if it's already implemented and we default to the most sensible values, I 
don't mind this addition.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [kafka] philipnee commented on pull request #13269: KAFKA-12634 enforce checkpoint after restoration

2023-03-03 Thread via GitHub


philipnee commented on PR #13269:
URL: https://github.com/apache/kafka/pull/13269#issuecomment-1453770724

   There are a bunch of DedicatedMirrorIntegrationTest failures - most likely 
unrelated.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] vvcephei commented on a diff in pull request #13274: KAFKA-14491: [13/N] Add versioned store builder and materializer

2023-03-03 Thread via GitHub


vvcephei commented on code in PR #13274:
URL: https://github.com/apache/kafka/pull/13274#discussion_r1124670079


##
streams/src/main/java/org/apache/kafka/streams/kstream/internals/KeyValueStoreMaterializer.java:
##
@@ -48,20 +55,30 @@ public StoreBuilder<TimestampedKeyValueStore<K, V>> materialize() {
 }
 }
 
-final StoreBuilder<TimestampedKeyValueStore<K, V>> builder = Stores.timestampedKeyValueStoreBuilder(
-supplier,
-materialized.keySerde(),
-materialized.valueSerde());
+final StoreBuilder builder;
+if (supplier instanceof VersionedBytesStoreSupplier) {
+builder = new VersionedKeyValueStoreBuilder<>(
+(VersionedBytesStoreSupplier) supplier,
+materialized.keySerde(),
+materialized.valueSerde(),
+Time.SYSTEM);
+} else {
+builder = Stores.timestampedKeyValueStoreBuilder(
+supplier,
+materialized.keySerde(),
+materialized.valueSerde());
+}
 
 if (materialized.loggingEnabled()) {
 builder.withLoggingEnabled(materialized.logConfig());
 } else {
 builder.withLoggingDisabled();
 }
 
-if (materialized.cachingEnabled()) {
+// versioned stores do not support caching

Review Comment:
   +1 on an INFO log (or no log at all).
   
   Caching is non-functional behavior, so it makes sense for a store that 
doesn't support it (yet) just to treat it as a request and ignore it.
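
   To make the endorsed behavior concrete, a simplified, hypothetical sketch (not the actual store builder code): the caching request is recorded at INFO and otherwise ignored, since caching is non-functional behavior.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    final class VersionedStoreBuilderSketch {
        private static final Logger log = LoggerFactory.getLogger(VersionedStoreBuilderSketch.class);
        private final String name;

        VersionedStoreBuilderSketch(String name) {
            this.name = name;
        }

        VersionedStoreBuilderSketch withCachingEnabled() {
            // versioned stores don't support caching (yet): note it and ignore it
            log.info("Caching is not supported for versioned store {}; ignoring the request.", name);
            return this;
        }
    }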



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] pprovenzano commented on a diff in pull request #13114: KAFKA-14084: SCRAM support in KRaft.

2023-03-03 Thread via GitHub


pprovenzano commented on code in PR #13114:
URL: https://github.com/apache/kafka/pull/13114#discussion_r1124583449


##
core/src/main/scala/kafka/server/metadata/BrokerMetadataPublisher.scala:
##
@@ -221,6 +223,21 @@ class BrokerMetadataPublisher(
   s"quotas in ${deltaName}", t)
   }
 
+  // Apply changes to SCRAM credentials.
+  Option(delta.scramDelta()).foreach { scramDelta =>

Review Comment:
   https://issues.apache.org/jira/browse/KAFKA-14775



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-14649) Failures instantiating Connect plugins hides other plugins from REST API, or crash worker

2023-03-03 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton updated KAFKA-14649:
--
Fix Version/s: 3.4.1

> Failures instantiating Connect plugins hides other plugins from REST API, or 
> crash worker
> -
>
> Key: KAFKA-14649
> URL: https://issues.apache.org/jira/browse/KAFKA-14649
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.1.0, 3.0.0, 3.2.0, 3.3.0, 3.4.0
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Minor
> Fix For: 3.5.0, 3.4.1
>
>
> Connect plugin path scanning evaluates the version() method of plugins to 
> determine which version of a plugin to load, and what version to advertise as 
> part of the REST API. This process involves reflectively constructing an 
> instance of the class and calling the version method, which can fail in the 
> following scenarios:
> 1. If a plugin throws an exception from a static initialization block
> 2. If a plugin does not have a default constructor (such as a non-static 
> inner class)
> 3. If a plugin's default constructor is not public
> 4. If a plugin throws an exception from the default constructor
> 5. If a plugin's version method throws an exception
> If any of the above is true for any single connector or rest extension on the 
> classpath or plugin.path, the plugin path scanning will exit early, and 
> potentially hide other unrelated plugins. This is primarily an issue in 
> development and test environments, because they are easy-to-make code 
> mistakes that would generally not make it to a release. Exceptions from the 
> version method, however, can cause the worker to fail to start up as they are 
> uncaught.
> It is desirable for the worker to instead log these exceptions and continue. 
> This will prevent one mis-implemented plugin from affecting other plugins, 
> while still causing integration tests to fail against the plugin itself. We 
> can augment logging to make it clear how to correct these failures, where 
> before it was rather opaque and difficult to debug.
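
The defensive pattern the ticket describes could look roughly like the following sketch (a hypothetical helper, not the actual Connect code): every failure mode listed above is caught, logged, and turned into a placeholder version instead of aborting the scan.

    import java.lang.reflect.Method;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public final class PluginVersionProbe {
        private static final Logger log = LoggerFactory.getLogger(PluginVersionProbe.class);

        // Reflectively instantiate the plugin and call version(); static-initializer
        // errors, missing or non-public constructors, and exceptions thrown from the
        // constructor or version() are all logged and swallowed.
        public static String safeVersion(Class<?> pluginClass) {
            try {
                Object instance = pluginClass.getDeclaredConstructor().newInstance();
                Method version = pluginClass.getMethod("version");
                return (String) version.invoke(instance);
            } catch (ReflectiveOperationException | RuntimeException | LinkageError e) {
                log.warn("Could not determine version of plugin {}", pluginClass.getName(), e);
                return "undefined";
            }
        }
    }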



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] Hangleton commented on a diff in pull request #13240: KAFKA-14690: Add topic IDs to OffsetCommit API and propagate for request version >= 9

2023-03-03 Thread via GitHub


Hangleton commented on code in PR #13240:
URL: https://github.com/apache/kafka/pull/13240#discussion_r1124559198


##
clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java:
##
@@ -167,8 +213,24 @@ public <P> Builder addPartitions(
 }
 
 public Builder merge(
-OffsetCommitResponseData newData
+OffsetCommitResponseData newData,
+Logger logger
 ) {
+if (version >= 9) {

Review Comment:
   Not strictly needed. We can remove the condition and the logger as well.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] Hangleton commented on a diff in pull request #13240: KAFKA-14690: Add topic IDs to OffsetCommit API and propagate for request version >= 9

2023-03-03 Thread via GitHub


Hangleton commented on code in PR #13240:
URL: https://github.com/apache/kafka/pull/13240#discussion_r1124559198


##
clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java:
##
@@ -167,8 +213,24 @@ public <P> Builder addPartitions(
 }
 
 public Builder merge(
-OffsetCommitResponseData newData
+OffsetCommitResponseData newData,
+Logger logger
 ) {
+if (version >= 9) {

Review Comment:
   Not strictly needed. We can remove it and the logger as well.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-14776) Update SCRAM system tests to run with KRaft

2023-03-03 Thread Proven Provenzano (Jira)
Proven Provenzano created KAFKA-14776:
-

 Summary: Update SCRAM system tests to run with KRaft
 Key: KAFKA-14776
 URL: https://issues.apache.org/jira/browse/KAFKA-14776
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Reporter: Proven Provenzano
Assignee: Proven Provenzano


I will update the SCRAM system tests to run under both ZK and KRaft quorum mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] Hangleton commented on pull request #13240: KAFKA-14690: Add topic IDs to OffsetCommit API and propagate for request version >= 9

2023-03-03 Thread via GitHub


Hangleton commented on PR #13240:
URL: https://github.com/apache/kafka/pull/13240#issuecomment-1453619279

   Used v8 for AlterConsumerGroupOffsets in the admin client and added 
corresponding unit and integration tests. `MirrorConnectorsIntegrationBaseTest` 
is now successful.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-14775) Support SCRAM for broker to controller authentication

2023-03-03 Thread Proven Provenzano (Jira)
Proven Provenzano created KAFKA-14775:
-

 Summary: Support SCRAM for broker to controller authentication
 Key: KAFKA-14775
 URL: https://issues.apache.org/jira/browse/KAFKA-14775
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Reporter: Proven Provenzano
Assignee: Proven Provenzano


We need to apply SCRAM changes to controller nodes.

We need to handle DescribeUserScramCredentialsRequest in the controller nodes.

As part of this update I will split out the code for applying the SCRAM changes 
from {{BrokerMetadataPublisher.scala}} into a separate {{MetadataPublisher}}, as 
we did with {{DynamicConfigPublisher}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] edoardocomar commented on pull request #13113: KAFKA-5238: BrokerTopicMetrics can be recreated after topic is deleted

2023-03-03 Thread via GitHub


edoardocomar commented on PR #13113:
URL: https://github.com/apache/kafka/pull/13113#issuecomment-1453510691

   bump @mimaison @rajinisivaram @tombentley @ijuma 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] Hangleton commented on pull request #13240: KAFKA-14690: Add topic IDs to OffsetCommit API and propagate for request version >= 9

2023-03-03 Thread via GitHub


Hangleton commented on PR #13240:
URL: https://github.com/apache/kafka/pull/13240#issuecomment-1453452130

   The integration test `MirrorConnectorsIntegrationBaseTest` is failing due to 
an unknown topic id detected by the admin client. I am looking into it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-14745) MirrorSourceConnector keeps creating ReplicationPolicy instances

2023-03-03 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14745.

Fix Version/s: 3.5.0
   Resolution: Fixed

> MirrorSourceConnector keeps creating ReplicationPolicy instances
> 
>
> Key: KAFKA-14745
> URL: https://issues.apache.org/jira/browse/KAFKA-14745
> Project: Kafka
>  Issue Type: Improvement
>  Components: mirrormaker
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.5.0
>
>
> In MirrorSourceConnector.findTargetTopicPartitions() we call 
> MirrorSourceConfig.checkpointsTopic() for each remote topic or all topics 
> when using IdentityReplicationPolicy.
> The issue is that checkpointsTopic() calls 
> MirrorSourceConfig.replicationPolicy() which always creates a new instance of 
> the ReplicationPolicy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] mimaison merged pull request #13328: KAFKA-14745: Cache the ReplicationPolicy instance in MirrorConnectorC…

2023-03-03 Thread via GitHub


mimaison merged PR #13328:
URL: https://github.com/apache/kafka/pull/13328


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] mimaison commented on pull request #13328: KAFKA-14745: Cache the ReplicationPolicy instance in MirrorConnectorC…

2023-03-03 Thread via GitHub


mimaison commented on PR #13328:
URL: https://github.com/apache/kafka/pull/13328#issuecomment-1453366283

   None of the test failures are related to this change; all tests pass locally, 
so merging to trunk.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] lucasbru commented on pull request #13318: KAFKA-14533: Do not interrupt state-updater thread during shutdown

2023-03-03 Thread via GitHub


lucasbru commented on PR #13318:
URL: https://github.com/apache/kafka/pull/13318#issuecomment-1453303374

   We had a failure of `Build / JDK 11 and Scala 2.13 / 
org.apache.kafka.streams.integration.PauseResumeIntegrationTest.[1] true`. I'm 
pretty sure that one was not flaky before. Any chance this PR breaks it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] lucasbru commented on pull request #13336: MINOR: update RocksDBMetricsRecorder test to JUnit5 and fix memory leak

2023-03-03 Thread via GitHub


lucasbru commented on PR #13336:
URL: https://github.com/apache/kafka/pull/13336#issuecomment-1453283942

   @cadonna Could you have a look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] lucasbru opened a new pull request, #13336: MINOR: update RocksDBMetricsRecorder test to JUnit5 and fix memory leak

2023-03-03 Thread via GitHub


lucasbru opened a new pull request, #13336:
URL: https://github.com/apache/kafka/pull/13336

   The test was leaking memory via Mockito internals. Piggybacking an update to 
JUnit5.
   
   ### Test strategy
   
   Ran the tests 10,000 times.
   
   Before: 
   
![image](https://user-images.githubusercontent.com/1628637/222692563-fe8d9725-eca4-4143-915c-39e1fa2cacb0.png)
   
   After:
   
![image](https://user-images.githubusercontent.com/1628637/222692684-3b17e3e9-4cd3-4e65-9567-73e9a9378b2f.png)
   
   ### Committer Checklist (excluded from commit message)
   - [x] Verify design and implementation 
   - [x] Verify test coverage and CI build status
   - [x] Verify documentation (including upgrade notes)
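
   The description doesn't show what leaked or how it was fixed; a common remedy for Mockito-related leaks in long test runs, sketched here as an assumption rather than the actual change, is to clear the inline mock maker's retained state after each test:

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Test;
    import org.mockito.Mockito;

    class LeakFreeMockitoTest {
        @AfterEach
        void cleanUp() {
            // releases state retained by Mockito's inline mock maker
            Mockito.framework().clearInlineMocks();
        }

        @Test
        void usesMocks() {
            Runnable task = Mockito.mock(Runnable.class);
            task.run();
            Mockito.verify(task).run();
        }
    }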
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] urbandan commented on pull request #10566: KAFKA-12694 Avoid schema mismatch DataException when validating default values

2023-03-03 Thread via GitHub


urbandan commented on PR #10566:
URL: https://github.com/apache/kafka/pull/10566#issuecomment-1453223698

   @C0urante Yes, you are right about the short-circuit, but it is caused by 
the defaultValue reference equality. That means my previous code works with your 
latest proposal, but this one doesn't (infinite loop):
   
   SchemaBuilder builder1 = SchemaBuilder.struct()
   .field("f1", Schema.BOOLEAN_SCHEMA);
   Struct defaultValue1 = new Struct(builder1);
   defaultValue1.put("f1", true);
   
   SchemaBuilder builder2 = SchemaBuilder.struct()
   .field("f1", Schema.BOOLEAN_SCHEMA);
   Struct defaultValue2 = new Struct(builder2);
   defaultValue2.put("f1", true);
   
   assertEquals(builder1, builder2);
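
   To spell out why this loops, a simplified sketch of the mutual recursion (hypothetical stand-ins for ConnectSchema and Struct, hashCode omitted for brevity): schema equality compares default values, and struct equality compares schemas, so two equal-but-not-identical defaults recurse forever.

    import java.util.Objects;

    class SchemaLike {
        Object defaultValue;

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof SchemaLike)) return false;
            // comparing default values may invoke StructLike.equals ...
            return Objects.equals(defaultValue, ((SchemaLike) o).defaultValue);
        }
    }

    class StructLike {
        SchemaLike schema;

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof StructLike)) return false;
            // ... which compares schemas and calls SchemaLike.equals again,
            // unless the two schemas share the same defaultValue reference
            return Objects.equals(schema, ((StructLike) o).schema);
        }
    }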


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] chia7712 commented on a diff in pull request #13327: MINOR: tweak the doc of "num.network.threads"

2023-03-03 Thread via GitHub


chia7712 commented on code in PR #13327:
URL: https://github.com/apache/kafka/pull/13327#discussion_r1124201822


##
core/src/main/scala/kafka/server/KafkaConfig.scala:
##
@@ -678,7 +678,7 @@ object KafkaConfig {
   "start from " + MaxReservedBrokerIdProp + " + 1."
   val MessageMaxBytesDoc = TopicConfig.MAX_MESSAGE_BYTES_DOC +
 s"This can be set per topic with the topic level 
${TopicConfig.MAX_MESSAGE_BYTES_CONFIG} config."
-  val NumNetworkThreadsDoc = "The number of threads that the server uses for 
receiving requests from the network and sending responses to the network"
+  val NumNetworkThreadsDoc = s"The number of threads that the server uses for 
receiving requests from the network and sending responses to the network. 
Note: each listener (except for the control plane listener) creates its own 
thread pool."

Review Comment:
   will copy that
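
   For readers of the doc tweak, an illustrative broker config (hypothetical listener names, not from the PR): with two data plane listeners, each listener gets its own pool of num.network.threads threads, i.e. 2 x 3 = 6 network threads in total here.

    listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    num.network.threads=3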
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (KAFKA-14768) proposal to reduce the first message's send time cost and max block time for safety

2023-03-03 Thread fujian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696089#comment-17696089
 ] 

fujian edited comment on KAFKA-14768 at 3/3/23 9:13 AM:


Hi  [~showuon]:

I created another PR: [KAFKA-14768: add new configure to reduce the 
max.block.ms safely by jiafu1115 · Pull Request #13335 · apache/kafka 
(github.com)|https://github.com/apache/kafka/pull/13335/files] for your 
reference.

The PR is solution 2 as I mentioned. Though it can't solve issue 1, it can 
solve issue 2.

WDYT? Which one is better? I think combining the two is the better option, 
since it solves all of the issues, but handling them one by one is also OK. 
(Maybe I should create two KIPs, one per issue, to make this clear.)

Thanks

 

 


was (Author: fujian1115):
Hi  [~showuon]:

I created another PR: [KAFKA-14768: add new configure to reduce the 
max.block.ms safely by jiafu1115 · Pull Request #13335 · apache/kafka 
(github.com)|https://github.com/apache/kafka/pull/13335/files] for your 
reference.

The PR is solution 2 as I mentioned. Though it can't solve issue 1, it can 
solve issue 2.

WDYT? Which one is better? I think combining the two is the better option, 
since it solves all of the issues, but handling them one by one is also OK.

Thanks

 

 

> proposal to reduce the first message's send time cost and max block time for 
> safety 
> 
>
> Key: KAFKA-14768
> URL: https://issues.apache.org/jira/browse/KAFKA-14768
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 3.3.1, 3.3.2
>Reporter: fujian
>Assignee: hzh0425
>Priority: Major
>  Labels: needs-kip, performance
>
> Hi, Team:
>  
> Nice to meet you!
>  
> In our business, we found two types of issues that need improvement:
>  
> *(1) Take much time to send the first message*
> Sometimes we found that the users' functional interactions take a lot of 
> time. Eventually we figured out the root cause: after we deploy or restart 
> the servers, the first message's delivery on each application server by the 
> kafka client takes a long time.
> So, we tried to find a solution to improve it.
>  
> We analyzed the source code of the first send's logic: the time cost comes 
> from fetching the metadata before sending. Later sends don't take as much 
> time thanks to the cached metadata. The logic is right and necessary, but we 
> still want to improve the experience for the first message's send / the 
> user's first interaction.
>  
> *(2) can't reduce the send message's block time to wanted value.*
> Sometimes our application's thread will block for max.block.ms to send a 
> message. When we try to reduce max.block.ms to reduce the blocking time, it 
> sometimes can't meet the metadata fetch's time requirement. The root 
> cause is that the configured max.block.ms is shared by the "get metadata" operation 
> and "send message" operation. We can refer to follow tables:
> |*where to block*
>  |*when it is blocked*
>  |*how long it will be blocked?*
>  |
> |org.apache.kafka.clients.producer.KafkaProducer#waitOnMetadata|the first 
> request which need to load the metadata from kafka| |org.apache.kafka.clients.producer.internals.RecordAccumulator#append|at peak 
> time for business, if the network can’t send message in short 
> time.|  
> What's the solution for the above two issues:
> I thought about the current logic and figured out the following possible solutions:
> (1) send one "warmup" message; but then we would be sending a fake message.
> (2) provide one extra timeout configuration dedicated to getting 
> metadata; but it will break the definition of max.block.ms
> (3) add one method to call waitOnMetadata with one timeout setting without 
> using the max.block.ms  (PR: [KAFKA-14768: provide new method to warmup first 
> record's sending and reduce the max.block.ms safely by jiafu1115 · Pull 
> Request #13320 · apache/kafka 
> (github.com)|https://github.com/apache/kafka/pull/13320])
>  
> _note:  org.apache.kafka.clients.producer.KafkaProducer#waitOnMetadata_
> ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long 
> nowMs, long maxWaitMs)
>  
>  __ 
> after the change, we can call it before the service is marked as ready. After 
> that, it won't block to get metadata thanks to the cache, and then we can 
> safely reduce max.block.ms to a lower value to reduce the threads' blocking 
> time.
>  
> After adopting solution 3, we solved the above issues. For example, we 
> reduced the first message's send time by about 4 seconds. The log below 
> shows this:
> _warmup test_topic at phase phase 2: get metadata from mq start_
> _warmup test_topic at phase phase 2: get metadata from mq 

[GitHub] [kafka] jiafu1115 commented on pull request #13335: KAFKA-14768: add new configure to reduce the max.block.ms safely

2023-03-03 Thread via GitHub


jiafu1115 commented on PR #13335:
URL: https://github.com/apache/kafka/pull/13335#issuecomment-1453191584

   @showuon here is another proposal that solves part of the issues. Thanks 
for your comments on this proposal.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (KAFKA-14768) proposal to reduce the first message's send time cost and max block time for safety

2023-03-03 Thread fujian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695956#comment-17695956
 ] 

fujian edited comment on KAFKA-14768 at 3/3/23 8:56 AM:


Hi  [~showuon] 

Thanks for your feedback. 

For 1, send one "warm up" message :  the solution is to send one fake business 
record/message before real business records. it will trigger the metadata's 
fetching.  I think it can solve the issue , but it is bad solution which 
involve one fake message. So I don't adopt it.

For 2, provide one extra timeout configuration dedicated to getting metadata: 
this will break the definition of max.block.ms a little and doesn't solve 
issue 1, so I didn't adopt it, but I can write one PR showing my thought for 
your reference. Maybe it is part of the whole solution.

For 3, my proposal is this one: add one method to *provide the possibility* to 
call waitOnMetadata with a timeout setting without using max.block.ms 
(PR: [KAFKA-14768: provide new method to warmup first record's sending and 
reduce the max.block.ms safely by jiafu1115 · Pull Request #13320 · 
apache/kafka (github.com)|https://github.com/apache/kafka/pull/13320])

I will check the KIP in detail and give you feedback ASAP. 

Thanks.


was (Author: fujian1115):
Hi  [~showuon] 

Thanks for your feedback. 

For 1, send one "warm up" message :  the solution is to send one fake business 
record/message before real business records. it will trigger the metadata's 
fetching.  I think it can solve the issue , but it is bad solution which 
involve one fake message. So I don't adopt it.

For 2, provide one extra timeout configuration dedicated to getting metadata: 
this will break the definition of max.block.ms a little, so I didn't adopt it.

For 3, my proposal is this one: add one method to *provide the possibility* to 
call waitOnMetadata with a timeout setting without using max.block.ms 
(PR: [KAFKA-14768: provide new method to warmup first record's sending and 
reduce the max.block.ms safely by jiafu1115 · Pull Request #13320 · 
apache/kafka (github.com)|https://github.com/apache/kafka/pull/13320])

I will check the KIP in detail and give you feedback ASAP. 

Thanks.

> proposal to reduce the first message's send time cost and max block time for 
> safety 
> 
>
> Key: KAFKA-14768
> URL: https://issues.apache.org/jira/browse/KAFKA-14768
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 3.3.1, 3.3.2
>Reporter: fujian
>Assignee: hzh0425
>Priority: Major
>  Labels: needs-kip, performance
>
> Hi, Team:
>  
> Nice to meet you!
>  
> In our business, we found two types of issues that need improvement:
>  
> *(1) Take much time to send the first message*
> Sometimes we found that the users' functional interactions take a lot of 
> time. Eventually we figured out the root cause: after we deploy or restart 
> the servers, the first message's delivery on each application server by the 
> kafka client takes a long time.
> So, we tried to find a solution to improve it.
>  
> We analyzed the source code of the first send's logic: the time cost comes 
> from fetching the metadata before sending. Later sends don't take as much 
> time thanks to the cached metadata. The logic is right and necessary, but we 
> still want to improve the experience for the first message's send / the 
> user's first interaction.
>  
> *(2) can't reduce the send message's block time to wanted value.*
> Sometimes our application's thread will block for max.block.ms to send a 
> message. When we try to reduce max.block.ms to reduce the blocking time, it 
> sometimes can't meet the metadata fetch's time requirement. The root 
> cause is that the configured max.block.ms is shared by the "get metadata" operation 
> and "send message" operation. We can refer to follow tables:
> |*where to block*
>  |*when it is blocked*
>  |*how long it will be blocked?*
>  |
> |org.apache.kafka.clients.producer.KafkaProducer#waitOnMetadata|the first 
> request which need to load the metadata from kafka| |org.apache.kafka.clients.producer.internals.RecordAccumulator#append|at peak 
> time for business, if the network can’t send message in short 
> time.|  
> What's the solution for the above two issues:
> I thought about the current logic and figured out the following possible solutions:
> (1) send one "warmup" message; but then we would be sending a fake message.
> (2) provide one extra timeout configuration dedicated to getting 
> metadata; but it will break the definition of max.block.ms
> (3) add one method to call waitOnMetadata with one timeout setting without 
> using the max.block.ms  (PR: [KAFKA-14768: 

[jira] [Commented] (KAFKA-14768) proposal to reduce the first message's send time cost and max block time for safety

2023-03-03 Thread fujian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696089#comment-17696089
 ] 

fujian commented on KAFKA-14768:


Hi  [~showuon]:

I created another PR: [KAFKA-14768: add new configure to reduce the 
max.block.ms safely by jiafu1115 · Pull Request #13335 · apache/kafka 
(github.com)|https://github.com/apache/kafka/pull/13335/files] for your 
reference.

The PR is solution 2 as I mentioned. Though it can't solve issue 1, it can 
solve issue 2.

WDYT? Which one is better? I think combining the two is the better option, 
since it solves all of the issues, but handling them one by one is also OK.

Thanks

 

 

> proposal to reduce the first message's send time cost and max block time for 
> safety 
> 
>
> Key: KAFKA-14768
> URL: https://issues.apache.org/jira/browse/KAFKA-14768
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 3.3.1, 3.3.2
>Reporter: fujian
>Assignee: hzh0425
>Priority: Major
>  Labels: needs-kip, performance
>
> Hi, Team:
>  
> Nice to meet you!
>  
> In our business, we found two types of issues that need improvement:
>  
> *(1) Take much time to send the first message*
> Sometimes we found that the users' functional interactions take a lot of 
> time. Eventually we figured out the root cause: after we deploy or restart 
> the servers, the first message's delivery on each application server by the 
> kafka client takes a long time.
> So, we tried to find a solution to improve it.
>  
> After analyzing the source code about the first time's sending logic. The 
> time cost is caused by the getting metadata before the sending. The latter's 
> sending won't take the much time due to the cached metadata. The logic is 
> right and necessary. Thus, we still want to improve the experience for the 
> first message's send/user first interaction.
>  
> *(2) The send's blocking time can't be reduced to the desired value.*
> Sometimes our application's thread blocks for up to max.block.ms to send a 
> message. When we try to lower max.block.ms to reduce the blocking time, the 
> metadata fetch sometimes can't complete within it. The root cause is that 
> the configured max.block.ms is shared between the "get metadata" operation 
> and the "send message" operation. Refer to the following table:
> |*where to block*|*when it is blocked*|*how long it will be blocked?*|
> |org.apache.kafka.clients.producer.KafkaProducer#waitOnMetadata|on the first request, which must load the metadata from Kafka|up to max.block.ms|
> |org.apache.kafka.clients.producer.internals.RecordAccumulator#append|at peak time for the business, when the network can't send messages quickly|up to whatever remains of max.block.ms|
> What's the solution for the above two issues?
> Considering the current logic, I see the following possible solutions:
> (1) Send one "warmup" message; however, we can't send any fake message.
> (2) Provide one extra timeout configuration dedicated to fetching metadata; 
> however, that would break the definition of max.block.ms.
> (3) Add one method that calls waitOnMetadata with its own timeout setting 
> instead of max.block.ms (PR: [KAFKA-14768: provide new method to warmup 
> first record's sending and reduce the max.block.ms safely by jiafu1115 · 
> Pull Request #13320 · apache/kafka 
> (github.com)|https://github.com/apache/kafka/pull/13320])
>  
> _note: org.apache.kafka.clients.producer.KafkaProducer#waitOnMetadata_
> ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long 
> nowMs, long maxWaitMs)
>  
> After this change, we can call the method before the service is marked as 
> ready. Once ready, sending won't block on metadata because of the cache, 
> and we can then safely reduce max.block.ms to a lower value to shorten the 
> thread's blocking time. (A warmup sketch follows this quoted description.)
>  
> After adopting solution 3, we solved the above issues. For example, we cut 
> about 4 seconds from the first message's send. See the following log:
> _warmup test_topic at phase 2: get metadata from mq start_
> _warmup test_topic at phase 2: get metadata from mq end, consumed *4669ms*_
> And after the change, we reduced max.block.ms from 10s to 2s without 
> worrying that we can't get metadata in time.
>  
> *So, what are your thoughts on these two issues and the solutions I 
> proposed?* I hope to get your feedback on the issues.
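
For illustration, a minimal warmup sketch using only today's public producer 
API: partitionsFor() also triggers and caches a metadata fetch, though it is 
bounded by max.block.ms rather than by the caller-supplied timeout that PR 
#13320 proposes for waitOnMetadata. The topic name and timeout values are 
illustrative:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class ProducerWarmup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Generous bound so the warmup fetch can finish; the JIRA proposal
            // decouples this from the steady-state send bound.
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                long start = System.currentTimeMillis();
                // Blocks until metadata for the topic is fetched (or max.block.ms
                // elapses) and populates the producer's metadata cache, so the
                // first real send() does not pay this cost.
                producer.partitionsFor("test_topic");
                System.out.printf("warmup test_topic: metadata fetched in %dms%n",
                        System.currentTimeMillis() - start);
                // Mark the service ready only after the warmup completes.
            }
        }
    }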



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] jiafu1115 opened a new pull request, #13335: KAFKA-14768: add new configure to reduce the max.block.ms safely

2023-03-03 Thread via GitHub


jiafu1115 opened a new pull request, #13335:
URL: https://github.com/apache/kafka/pull/13335

   Hi Team:
   
   https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-14768
   
   This PR is a draft that adopts proposal 2, which solves issue 2 as 
described in the JIRA:
   (2) the send's blocking time can't be reduced to the desired value.
   
   Thanks for the review.
   
   Also refer to #13320, which is another proposal to solve these issues.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] pierDipi commented on pull request #13325: KAFKA-14771: Include threads info in ConcurrentModificationException message

2023-03-03 Thread via GitHub


pierDipi commented on PR #13325:
URL: https://github.com/apache/kafka/pull/13325#issuecomment-1453176497

   It seems that the test failures are not related.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] vcrfxia commented on a diff in pull request #13274: KAFKA-14491: [13/N] Add versioned store builder and materializer

2023-03-03 Thread via GitHub


vcrfxia commented on code in PR #13274:
URL: https://github.com/apache/kafka/pull/13274#discussion_r1124151220


##
streams/src/main/java/org/apache/kafka/streams/kstream/internals/KeyValueStoreMaterializer.java:
##
@@ -48,20 +55,30 @@ public StoreBuilder<TimestampedKeyValueStore<K, V>> materialize() {
         }
     }
 
-        final StoreBuilder<TimestampedKeyValueStore<K, V>> builder = Stores.timestampedKeyValueStoreBuilder(
-            supplier,
-            materialized.keySerde(),
-            materialized.valueSerde());
+        final StoreBuilder<?> builder;
+        if (supplier instanceof VersionedBytesStoreSupplier) {
+            builder = new VersionedKeyValueStoreBuilder<>(
+                (VersionedBytesStoreSupplier) supplier,
+                materialized.keySerde(),
+                materialized.valueSerde(),
+                Time.SYSTEM);
+        } else {
+            builder = Stores.timestampedKeyValueStoreBuilder(
+                supplier,
+                materialized.keySerde(),
+                materialized.valueSerde());
+        }
 
         if (materialized.loggingEnabled()) {
             builder.withLoggingEnabled(materialized.logConfig());
         } else {
             builder.withLoggingDisabled();
         }
 
-        if (materialized.cachingEnabled()) {
+        // versioned stores do not support caching

Review Comment:
   OK, I've added an INFO log statement for now, while we wait for additional 
opinions.
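
For context, a sketch of the topology code that would reach the new versioned 
branch, assuming the Stores.persistentVersionedKeyValueStore factory being 
added in this PR series (KAFKA-14491); the factory name and signature may 
differ in the merged API:

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.Stores;

    StreamsBuilder builder = new StreamsBuilder();
    // Materializing with a versioned supplier routes KeyValueStoreMaterializer
    // into the VersionedKeyValueStoreBuilder branch shown in the diff; caching
    // is ignored for versioned stores (hence the INFO log mentioned above).
    KTable<String, String> table = builder.table(
        "input-topic",
        Materialized.<String, String>as(
                Stores.persistentVersionedKeyValueStore("versioned-store", Duration.ofHours(1)))
            .withKeySerde(Serdes.String())
            .withValueSerde(Serdes.String()));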



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org