[jira] (KAFKA-14453) Flaky test suite MirrorConnectorsWithCustomForwardingAdminIntegrationTest

2023-01-13 Thread Kamalesh Palanisamy (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-14453 ]


Kamalesh Palanisamy deleted comment on KAFKA-14453:
-

was (Author: JIRAUSER298672):
[~ChrisEgerton] is this test still failing? I am a newbie and I can look into 
this. Thanks!

> Flaky test suite MirrorConnectorsWithCustomForwardingAdminIntegrationTest
> -
>
> Key: KAFKA-14453
> URL: https://issues.apache.org/jira/browse/KAFKA-14453
> Project: Kafka
>  Issue Type: Test
>  Components: mirrormaker
>Reporter: Chris Egerton
>Priority: Major
>  Labels: flaky-test
>
> We've been seeing some integration test failures lately for the 
> MirrorConnectorsWithCustomForwardingAdminIntegrationTest test suite. A 
> couple of examples:
> org.opentest4j.AssertionFailedError: Condition not met within timeout 6. Topic: mm2-offset-syncs.backup.internal didn't get created in the FakeLocalMetadataStore ==> expected: <true> but was: <false>
>     at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>     at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>     at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>     at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>     at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)
>     at app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)
>     at app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
>     at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)
>     at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)
>     at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:308)
>     at app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.waitForTopicToPersistInFakeLocalMetadataStore(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:326)
>     at app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.testReplicationIsCreatingTopicsUsingProvidedForwardingAdmin(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:217)
>  
> And:
>  
> org.opentest4j.AssertionFailedError: Condition not met within timeout 6. Topic: primary.test-topic-1's configs don't have partitions:11 ==> expected: <true> but was: <false>
>     at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>     at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>     at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>     at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>     at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)
>     at org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)
>     at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:308)
>     at org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.waitForTopicConfigPersistInFakeLocalMetaDataStore(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:334)
>     at org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.testCreatePartitionsUseProvidedForwardingAdmin(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:255)





[GitHub] [kafka] smjn commented on a diff in pull request #13119: KAFKA-14623: OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging

2023-01-13 Thread GitBox


smjn commented on code in PR #13119:
URL: https://github.com/apache/kafka/pull/13119#discussion_r1070212730


##
clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java:
##
@@ -240,6 +240,9 @@ static String handleOutput(final HttpURLConnection con) throws IOException {
         int responseCode = con.getResponseCode();
         log.debug("handleOutput - responseCode: {}", responseCode);
 
+        // NOTE: the contents of the response should not be logged so that we don't leak any
+        // sensitive data.
+        // TODO: is it OK to log the error response body and/or its formatted version?

Review Comment:
   Hi, I was referring to the RFC, which describes the error response: 
https://www.ietf.org/rfc/rfc6749.txt, section 5.2.
   
   P.S. Has there been any instance of an info leak? There are only 3 things that 
could leak - client_id, secret and scope. Of these, only the secret is problematic, 
and as per the RFC the value of that attribute is not printed.






[GitHub] [kafka] jolshan commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jolshan commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1070186362


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -470,4 +470,45 @@ class GroupCoordinatorAdapter(
   expireTimestamp = expireTimestamp
 )
   }
+
+  override def deleteOffsets(
+    context: RequestContext,
+    request: OffsetDeleteRequestData,
+    bufferSupplier: BufferSupplier
+  ): CompletableFuture[OffsetDeleteResponseData] = {
+    val future = new CompletableFuture[OffsetDeleteResponseData]()
+
+    val partitions = mutable.ArrayBuffer[TopicPartition]()
+    request.topics.forEach { topic =>
+      topic.partitions.forEach { partition =>
+        partitions += new TopicPartition(topic.name, partition.partitionIndex)
+      }
+    }
+
+    val (groupError, topicPartitionResults) = coordinator.handleDeleteOffsets(
+      request.groupId,
+      partitions,
+      RequestLocal(bufferSupplier)
+    )
+
+    if (groupError != Errors.NONE) {
+      future.completeExceptionally(groupError.exception)
+    } else {
+      val response = new OffsetDeleteResponseData()
+      topicPartitionResults.forKeyValue { (topicPartition, error) =>
+        var topic = response.topics.find(topicPartition.topic)

Review Comment:
   The map key is defined in the JSON for the request and response objects. Because 
it is defined there, the message generator code gives this typing.






[GitHub] [kafka] jolshan commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jolshan commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1070186244


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -470,4 +470,45 @@ class GroupCoordinatorAdapter(
   expireTimestamp = expireTimestamp
 )
   }
+
+  override def deleteOffsets(
+    context: RequestContext,
+    request: OffsetDeleteRequestData,
+    bufferSupplier: BufferSupplier
+  ): CompletableFuture[OffsetDeleteResponseData] = {
+    val future = new CompletableFuture[OffsetDeleteResponseData]()
+
+    val partitions = mutable.ArrayBuffer[TopicPartition]()
+    request.topics.forEach { topic =>
+      topic.partitions.forEach { partition =>
+        partitions += new TopicPartition(topic.name, partition.partitionIndex)
+      }
+    }
+
+    val (groupError, topicPartitionResults) = coordinator.handleDeleteOffsets(
+      request.groupId,
+      partitions,
+      RequestLocal(bufferSupplier)
+    )
+
+    if (groupError != Errors.NONE) {
+      future.completeExceptionally(groupError.exception)
+    } else {
+      val response = new OffsetDeleteResponseData()
+      topicPartitionResults.forKeyValue { (topicPartition, error) =>
+        var topic = response.topics.find(topicPartition.topic)

Review Comment:
   It was part of the protocol from the start. It basically means that when we 
deserialize, it builds a collection where the elements can be looked up via 
hashing. That's how we can use "find" here. (Not a new change, but I haven't 
seen it used often.)
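A rough sketch of what the "map key" buys here, using a plain HashMap instead of the generated ImplicitLinkedHashMultiCollection (the type and field names below are illustrative only, not the generated code):

```
import java.util.HashMap;
import java.util.Map;

public final class MapKeyLookupSketch {
    // Stand-in for the generated OffsetDeleteResponseTopic, whose "name" field
    // plays the role of the map key declared in the message JSON.
    record ResponseTopic(String name) { }

    public static void main(String[] args) {
        // The generated collection keeps an internal hash index on the map key,
        // so find(key) is a hash lookup rather than a linear scan. A HashMap
        // keyed by topic name gives the same effect in this sketch.
        Map<String, ResponseTopic> topics = new HashMap<>();
        topics.put("topic-1", new ResponseTopic("topic-1"));

        // Rough equivalent of response.topics.find(topicPartition.topic)
        ResponseTopic found = topics.get("topic-1");
        System.out.println("found by map key: " + found);
    }
}
```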






[GitHub] [kafka] jeffkbkim commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jeffkbkim commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1070170859


##
core/src/test/scala/unit/kafka/server/KafkaApisTest.scala:
##
@@ -2764,37 +2763,170 @@ class KafkaApisTest {
     val request = buildRequest(offsetDeleteRequest)
 
     val requestLocal = RequestLocal.withThreadConfinedCaching
-    when(clientRequestQuotaManager.maybeRecordAndGetThrottleTimeMs(any[RequestChannel.Request](),
-      any[Long])).thenReturn(0)
-    when(groupCoordinator.handleDeleteOffsets(
-      ArgumentMatchers.eq(group),
-      ArgumentMatchers.eq(Seq(
-        new TopicPartition("topic-1", 0),
-        new TopicPartition("topic-1", 1),
-        new TopicPartition("topic-2", 0),
-        new TopicPartition("topic-2", 1)
-      )),
-      ArgumentMatchers.eq(requestLocal)
-    )).thenReturn((Errors.NONE, Map(
-      new TopicPartition("topic-1", 0) -> Errors.NONE,
-      new TopicPartition("topic-1", 1) -> Errors.NONE,
-      new TopicPartition("topic-2", 0) -> Errors.NONE,
-      new TopicPartition("topic-2", 1) -> Errors.NONE,
-    )))
+    val future = new CompletableFuture[OffsetDeleteResponseData]()
+    when(newGroupCoordinator.deleteOffsets(
+      request.context,
+      offsetDeleteRequest.data,
+      requestLocal.bufferSupplier
+    )).thenReturn(future)
 
     createKafkaApis().handleOffsetDeleteRequest(request, requestLocal)
 
+    val offsetDeleteResponseData = new OffsetDeleteResponseData()
+      .setTopics(new OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection(List(
+        new OffsetDeleteResponseData.OffsetDeleteResponseTopic()
+          .setName("topic-1")

Review Comment:
   shouldn't we expect 
   topic-1 (partition-0, partition-1)
   topic-2 (partition-0, partition-1)
   
   in the response? why are we expecting
   topic-1 (partition-0, partition-0)?



##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -470,4 +470,45 @@ class GroupCoordinatorAdapter(
   expireTimestamp = expireTimestamp
 )
   }
+
+  override def deleteOffsets(
+    context: RequestContext,
+    request: OffsetDeleteRequestData,
+    bufferSupplier: BufferSupplier
+  ): CompletableFuture[OffsetDeleteResponseData] = {
+    val future = new CompletableFuture[OffsetDeleteResponseData]()
+
+    val partitions = mutable.ArrayBuffer[TopicPartition]()
+    request.topics.forEach { topic =>
+      topic.partitions.forEach { partition =>
+        partitions += new TopicPartition(topic.name, partition.partitionIndex)
+      }
+    }
+
+    val (groupError, topicPartitionResults) = coordinator.handleDeleteOffsets(
+      request.groupId,
+      partitions,
+      RequestLocal(bufferSupplier)
+    )
+
+    if (groupError != Errors.NONE) {
+      future.completeExceptionally(groupError.exception)
+    } else {
+      val response = new OffsetDeleteResponseData()
+      topicPartitionResults.forKeyValue { (topicPartition, error) =>
+        var topic = response.topics.find(topicPartition.topic)

Review Comment:
   I'm still confused about the reason behind using ImplicitLinkedHashMultiCollection - 
what do you mean by "map key"? 






[GitHub] [kafka] rondagostino commented on a diff in pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


rondagostino commented on code in PR #13116:
URL: https://github.com/apache/kafka/pull/13116#discussion_r1070162021


##
core/src/test/scala/unit/kafka/server/ControllerApisTest.scala:
##
@@ -523,8 +523,23 @@ class ControllerApisTest {
 new ListPartitionReassignmentsRequestData()).build(
   }
 
-  @Test
-  def testCreateTopics(): Unit = {
+  object AlwaysExceededControllerMutationQuota extends ControllerMutationQuota {
+    override def isExceeded: Boolean = true
+    override def record(permits: Double): Unit = throw new ThrottlingQuotaExceededException(throttleTime, "quota exceeded in test")
+    override def throttleTime: Int = 1000
+  }
+
+  @ParameterizedTest(name = "testCreateTopics with mutationQuotaExceeded: {0}, validateOnly: {1}")
+  @CsvSource(value = Array(
+    "no,no",
+    "no,yes",
+    "yes,no",
+    "yes,yes"
+  ))

Review Comment:
   Instead of `yes/no` for the quota maybe this should be an int so that we can 
test that we are actually sending the correct partition count. We are 
requesting 6 potentially-valid partitions, so it might be good to ensure that a 
limit of 6 is throttled but a limit of 7 is not.
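For illustration, a parameterized test along the lines suggested above might look roughly like this in JUnit 5 (shown in Java for brevity; the real test is Scala, and the quota logic, names, and CSV rows here are assumptions, not the PR's implementation):

```
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CreateTopicsQuotaSketchTest {
    // Hypothetical stand-in: the request asks for 6 partitions, and the quota
    // throttles once the requested count reaches the configured limit.
    private static final int REQUESTED_PARTITIONS = 6;

    private boolean wouldThrottle(int partitionLimit) {
        return REQUESTED_PARTITIONS >= partitionLimit;
    }

    @ParameterizedTest(name = "testCreateTopics with partitionLimit: {0}, expectThrottle: {1}")
    @CsvSource({
        "6, true",   // per the suggestion above: a limit of 6 should throttle the 6-partition request
        "7, false"   // while a limit of 7 should not
    })
    void testCreateTopicsQuota(int partitionLimit, boolean expectThrottle) {
        assertEquals(expectThrottle, wouldThrottle(partitionLimit));
    }
}
```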






[GitHub] [kafka] rondagostino commented on a diff in pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


rondagostino commented on code in PR #13116:
URL: https://github.com/apache/kafka/pull/13116#discussion_r1070162021


##
core/src/test/scala/unit/kafka/server/ControllerApisTest.scala:
##
@@ -523,8 +523,23 @@ class ControllerApisTest {
 new ListPartitionReassignmentsRequestData()).build(
   }
 
-  @Test
-  def testCreateTopics(): Unit = {
+  object AlwaysExceededControllerMutationQuota extends ControllerMutationQuota {
+    override def isExceeded: Boolean = true
+    override def record(permits: Double): Unit = throw new ThrottlingQuotaExceededException(throttleTime, "quota exceeded in test")
+    override def throttleTime: Int = 1000
+  }
+
+  @ParameterizedTest(name = "testCreateTopics with mutationQuotaExceeded: {0}, validateOnly: {1}")
+  @CsvSource(value = Array(
+    "no,no",
+    "no,yes",
+    "yes,no",
+    "yes,yes"
+  ))

Review Comment:
   Instead of `yes/no` for the quota, maybe this should be an int so that we can 
test that we are actually sending the correct partition count. We are 
requesting 6 potentially-valid partitions, so it might be good to ensure that 5 
is not throttled and 6 is throttled.






[GitHub] [kafka] rondagostino commented on pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


rondagostino commented on PR #13116:
URL: https://github.com/apache/kafka/pull/13116#issuecomment-1382580751

   Thanks, Colin.  I moved the minor stuff into 
https://github.com/apache/kafka/pull/13118 and put the check into 
`ControllerApis`.  The only difference from what you suggested is that I 
consider the partitions requested (except for the ones with obvious issues like 
lack of authorization or duplicate topic names) rather than the partitions that 
the controller creates.  When a quota is exceeded, the request is not accepted, 
so we can't let the controller mutate the partition count and then decide -- we 
have to decide before letting the controller do its thing.
   
   Again, the create-topics case is the only one I've done so far.  I've added a 
test as well.





[jira] [Commented] (KAFKA-14622) Create a junit test which would have caught KAFKA-14618

2023-01-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-14622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676788#comment-17676788
 ] 

José Armando García Sancio commented on KAFKA-14622:


One suggestion for doing this is to implement 
https://issues.apache.org/jira/browse/KAFKA-14619 and catch this kind of issue 
in a snapshot integration test for both the controllers and brokers.

> Create a junit test which would have caught KAFKA-14618
> ---
>
> Key: KAFKA-14622
> URL: https://issues.apache.org/jira/browse/KAFKA-14622
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin McCabe
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] jsancio commented on pull request #13108: KAFKA-14618; Fix off by one error in snapshot id

2023-01-13 Thread GitBox


jsancio commented on PR #13108:
URL: https://github.com/apache/kafka/pull/13108#issuecomment-1382578491

   > Can you file a JIRA for creating a test that would have caught this?
   
   Yes. When I implement https://issues.apache.org/jira/browse/KAFKA-14619, we 
can reliably catch this kind of issue in JUnit integration tests.





[GitHub] [kafka] jsancio commented on a diff in pull request #13108: KAFKA-14618; Fix off by one error in snapshot id

2023-01-13 Thread GitBox


jsancio commented on code in PR #13108:
URL: https://github.com/apache/kafka/pull/13108#discussion_r1070156218


##
metadata/src/main/java/org/apache/kafka/image/MetadataProvenance.java:
##
@@ -28,30 +29,30 @@
 public final class MetadataProvenance {
     public static final MetadataProvenance EMPTY = new MetadataProvenance(-1L, -1, -1L);
 
-    private final long offset;
-    private final int epoch;
+    private final long lastContainedOffset;
+    private final int lastContainedEpoch;
     private final long lastContainedLogTimeMs;
 
     public MetadataProvenance(
-        long offset,
-        int epoch,
+        long lastContainedOffset,
+        int lastContainedEpoch,
         long lastContainedLogTimeMs
     ) {
-        this.offset = offset;
-        this.epoch = epoch;
+        this.lastContainedOffset = lastContainedOffset;
+        this.lastContainedEpoch = lastContainedEpoch;
         this.lastContainedLogTimeMs = lastContainedLogTimeMs;
     }
 
-    public OffsetAndEpoch offsetAndEpoch() {
-        return new OffsetAndEpoch(offset, epoch);
+    public OffsetAndEpoch snapshotId() {
+        return new OffsetAndEpoch(lastContainedOffset + 1, lastContainedEpoch);

Review Comment:
   Yes. I'll do this when I implement 
https://issues.apache.org/jira/browse/KAFKA-14620.






[GitHub] [kafka] jolshan commented on a diff in pull request #12972: KAFKA-14391; Add ConsumerGroupHeartbeat API

2023-01-13 Thread GitBox


jolshan commented on code in PR #12972:
URL: https://github.com/apache/kafka/pull/12972#discussion_r1070132569


##
clients/src/main/java/org/apache/kafka/common/requests/ConsumerGroupHeartbeatRequest.java:
##
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.requests;
+
+import org.apache.kafka.common.message.ConsumerGroupHeartbeatRequestData;
+import org.apache.kafka.common.message.ConsumerGroupHeartbeatResponseData;
+import org.apache.kafka.common.protocol.ApiKeys;
+import org.apache.kafka.common.protocol.ByteBufferAccessor;
+import org.apache.kafka.common.protocol.Errors;
+
+import java.nio.ByteBuffer;
+
+public class ConsumerGroupHeartbeatRequest extends AbstractRequest {
+
+    public static class Builder extends AbstractRequest.Builder<ConsumerGroupHeartbeatRequest> {
+        private final ConsumerGroupHeartbeatRequestData data;
+
+        public Builder(ConsumerGroupHeartbeatRequestData data) {
+            super(ApiKeys.CONSUMER_GROUP_HEARTBEAT);
+            this.data = data;
+        }
+

Review Comment:
   nit: double space here






[GitHub] [kafka] jolshan commented on a diff in pull request #12972: KAFKA-14391; Add ConsumerGroupHeartbeat API

2023-01-13 Thread GitBox


jolshan commented on code in PR #12972:
URL: https://github.com/apache/kafka/pull/12972#discussion_r1070126393


##
clients/src/main/java/org/apache/kafka/common/requests/ApiVersionsResponse.java:
##
@@ -185,11 +215,11 @@ public static ApiVersionsResponse createApiVersionsResponse(
     }
 
     public static ApiVersionCollection filterApis(
-        RecordVersion minRecordVersion,
-        ApiMessageType.ListenerType listenerType
+        Set enabledApi,

Review Comment:
   nit: Do we want to make this plural `enabledApis`






[jira] [Commented] (KAFKA-14623) OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging

2023-01-13 Thread Kirk True (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676778#comment-17676778
 ] 

Kirk True commented on KAFKA-14623:
---

cc [~smjn] [~omkreddy] 

> OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging  
> ---
>
> Key: KAFKA-14623
> URL: https://issues.apache.org/jira/browse/KAFKA-14623
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, security
>Affects Versions: 3.3.1
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The OAuth code that communicates via HTTP with the IdP 
> (HttpAccessTokenRetriever.java) includes logging that outputs the request and 
> response payloads. Among them are:
>  * 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L265]
>  * 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L274]
>  * 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L320]
> It should be determined if there are other places sensitive information might 
> be inadvertently exposed.





[GitHub] [kafka] jolshan commented on pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jolshan commented on PR #12902:
URL: https://github.com/apache/kafka/pull/12902#issuecomment-1382451476

   Things are starting to fall into a pretty common pattern here. :) Looks 
pretty good, but I'll give Jeff a chance to take a look as well. 





[GitHub] [kafka] kirktrue commented on a diff in pull request #13119: KAFKA-14623: OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging

2023-01-13 Thread GitBox


kirktrue commented on code in PR #13119:
URL: https://github.com/apache/kafka/pull/13119#discussion_r1070115905


##
clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java:
##
@@ -240,6 +240,9 @@ static String handleOutput(final HttpURLConnection con) throws IOException {
         int responseCode = con.getResponseCode();
         log.debug("handleOutput - responseCode: {}", responseCode);
 
+        // NOTE: the contents of the response should not be logged so that we don't leak any
+        // sensitive data.
+        // TODO: is it OK to log the error response body and/or its formatted version?

Review Comment:
   @smjn @omkreddy: open question. Is it safe to log the `errorResponseBody` in 
whole or in part?
   
   The `formatErrorMessage` method attempts to extract the core error message 
out of the `errorResponseBody`, but it falls back to returning the whole string 
verbatim if it can't parse or find the error message in the response.
   
   Thoughts?
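To make the trade-off concrete, here is a rough sketch of the kind of extraction being discussed: pull only the RFC 6749 error code out of the response body and log that, never the raw body. This is illustrative Java, not the actual formatErrorMessage implementation, and the regex-based parsing is an assumption:

```
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class ErrorResponseLoggingSketch {
    // Per RFC 6749 section 5.2 the token endpoint returns a JSON body such as
    // {"error":"invalid_client","error_description":"..."}. Only the "error"
    // code is extracted here; the raw body, which could echo secrets, is never logged.
    private static final Pattern ERROR_CODE = Pattern.compile("\"error\"\\s*:\\s*\"([^\"]+)\"");

    static Optional<String> extractErrorCode(String errorResponseBody) {
        Matcher m = ERROR_CODE.matcher(errorResponseBody);
        return m.find() ? Optional.of(m.group(1)) : Optional.empty();
    }

    public static void main(String[] args) {
        String body = "{\"error\":\"invalid_client\",\"error_description\":\"bad credentials\"}";
        // Log only the extracted code, or a generic message if parsing fails.
        System.out.println("token request failed: "
            + extractErrorCode(body).orElse("(unparseable error response)"));
    }
}
```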






[GitHub] [kafka] dajac commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


dajac commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1070115003


##
core/src/test/scala/unit/kafka/server/KafkaApisTest.scala:
##
@@ -2817,14 +2949,18 @@ class KafkaApisTest {
   .setTopics(topics)
   ).build()
   val request = buildRequest(offsetDeleteRequest)
-  when(clientRequestQuotaManager.maybeRecordAndGetThrottleTimeMs(any[RequestChannel.Request](),
-    any[Long])).thenReturn(0)
 
-  val requestLocal = RequestLocal.withThreadConfinedCaching
-  when(groupCoordinator.handleDeleteOffsets(ArgumentMatchers.eq(group), ArgumentMatchers.eq(Seq.empty),
-    ArgumentMatchers.eq(requestLocal))).thenReturn((Errors.NONE, Map.empty[TopicPartition, Errors]))
+  // The group coordinator is called even if there are no
+  // topic-partitions left after the validation.
+  when(newGroupCoordinator.deleteOffsets(
+    request.context,
+    new OffsetDeleteRequestData().setGroupId(group),
+    RequestLocal.NoCaching.bufferSupplier
+  )).thenReturn(CompletableFuture.completedFuture(
+    new OffsetDeleteResponseData()
+  ))
 
-  createKafkaApis().handleOffsetDeleteRequest(request, requestLocal)
+  createKafkaApis().handleOffsetDeleteRequest(request, RequestLocal.NoCaching)

Review Comment:
   Just to make things simpler. The test basically verifies that whatever is 
passed here is received by the group coordinator.






[GitHub] [kafka] kirktrue commented on pull request #13119: KAFKA-14623: OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging

2023-01-13 Thread GitBox


kirktrue commented on PR #13119:
URL: https://github.com/apache/kafka/pull/13119#issuecomment-1382446967

   @smjn @omkreddy - can you take a look at this fix for OAuth logging 
potentially exposing secrets?





[GitHub] [kafka] kirktrue opened a new pull request, #13119: KAFKA-14623: OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging

2023-01-13 Thread GitBox


kirktrue opened a new pull request, #13119:
URL: https://github.com/apache/kafka/pull/13119

   Removed logging of the HTTP response directly in all known cases to prevent 
potentially logging access tokens.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] jolshan commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jolshan commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1070097639


##
core/src/test/scala/unit/kafka/server/KafkaApisTest.scala:
##
@@ -2817,14 +2949,18 @@ class KafkaApisTest {
   .setTopics(topics)
   ).build()
   val request = buildRequest(offsetDeleteRequest)
-  when(clientRequestQuotaManager.maybeRecordAndGetThrottleTimeMs(any[RequestChannel.Request](),
-    any[Long])).thenReturn(0)
 
-  val requestLocal = RequestLocal.withThreadConfinedCaching
-  when(groupCoordinator.handleDeleteOffsets(ArgumentMatchers.eq(group), ArgumentMatchers.eq(Seq.empty),
-    ArgumentMatchers.eq(requestLocal))).thenReturn((Errors.NONE, Map.empty[TopicPartition, Errors]))
+  // The group coordinator is called even if there are no
+  // topic-partitions left after the validation.
+  when(newGroupCoordinator.deleteOffsets(
+    request.context,
+    new OffsetDeleteRequestData().setGroupId(group),
+    RequestLocal.NoCaching.bufferSupplier
+  )).thenReturn(CompletableFuture.completedFuture(
+    new OffsetDeleteResponseData()
+  ))
 
-  createKafkaApis().handleOffsetDeleteRequest(request, requestLocal)
+  createKafkaApis().handleOffsetDeleteRequest(request, RequestLocal.NoCaching)

Review Comment:
   Was this changed for correctness or just to make things simpler?






[GitHub] [kafka] bbejeck merged pull request #13115: Update doc entry in ProducerConfig.java to be consistent with behavior described in max.in.flight.requests.per.connection

2023-01-13 Thread GitBox


bbejeck merged PR #13115:
URL: https://github.com/apache/kafka/pull/13115





[GitHub] [kafka] bbejeck commented on pull request #13115: Update doc entry in ProducerConfig.java to be consistent with behavior described in max.in.flight.requests.per.connection

2023-01-13 Thread GitBox


bbejeck commented on PR #13115:
URL: https://github.com/apache/kafka/pull/13115#issuecomment-1382419745

   merged #13115 into trunk





[jira] [Assigned] (KAFKA-14623) OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging

2023-01-13 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True reassigned KAFKA-14623:
-

Assignee: Kirk True

> OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging  
> ---
>
> Key: KAFKA-14623
> URL: https://issues.apache.org/jira/browse/KAFKA-14623
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, security
>Affects Versions: 3.3.1
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The OAuth code that communicates via HTTP with the IdP 
> (HttpAccessTokenRetriever.java) includes logging that outputs the request and 
> response payloads. Among them are:
>  * 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L265]
>  * 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L274]
>  * 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L320]
> It should be determined if there are other places sensitive information might 
> be inadvertently exposed.





[jira] [Created] (KAFKA-14623) OAuth's HttpAccessTokenRetriever potentially leaks secrets in logging

2023-01-13 Thread Kirk True (Jira)
Kirk True created KAFKA-14623:
-

 Summary: OAuth's HttpAccessTokenRetriever potentially leaks 
secrets in logging  
 Key: KAFKA-14623
 URL: https://issues.apache.org/jira/browse/KAFKA-14623
 Project: Kafka
  Issue Type: Bug
  Components: clients, security
Affects Versions: 3.3.1
Reporter: Kirk True


The OAuth code that communicates via HTTP with the IdP 
(HttpAccessTokenRetriever.java) includes logging that outputs the request and 
response payloads. Among them are:
 * 
[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L265]
 * 
[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L274]
 * 
[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/secured/HttpAccessTokenRetriever.java#L320]

It should be determined if there are other places sensitive information might 
be inadvertently exposed.





[GitHub] [kafka] rondagostino commented on a diff in pull request #13118: MINOR: fix some typos in comments/docs/variable names

2023-01-13 Thread GitBox


rondagostino commented on code in PR #13118:
URL: https://github.com/apache/kafka/pull/13118#discussion_r1070083056


##
docs/ops.html:
##
@@ -1197,7 +1197,7 @@ 
 
   
-  Client quotas: Kafka supports different types of (per-user 
principal) client quotas. Because a client's quotas apply irrespective of which 
topics the client is writing to or reading from, they are a convenient and 
effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a 
user's impact on broker CPU usage by limiting the time a broker spends on the 
request handling path for that user, after which 
throttling kicks in. In many situations, isolating users with request rate 
quotas has a bigger impact in multi-tenant clusters than setting 
incoming/outgoing network bandwidth quotas, because excessive broker CPU usage 
for processing requests reduces the effective bandwidth the broker can serve. 
Furthermore, administrators can also define quotas on topic operations—such as 
create, delete, and alter—to prevent Kafka clusters from being overwhelmed by 
highly concurrent topic operations (see KIP-599, https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations, and the quota type controller_mutations_rate).
+  Client quotas: Kafka supports different types of (per-user 
principal) client quotas. Because a client's quotas apply irrespective of which 
topics the client is writing to or reading from, they are a convenient and 
effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a 
user's impact on broker CPU usage by limiting the time a broker spends on the 
request handling path for that user, after which 
throttling kicks in. In many situations, isolating users with request rate 
quotas has a bigger impact in multi-tenant clusters than setting 
incoming/outgoing network bandwidth quotas, because excessive broker CPU usage 
for processing requests reduces the effective bandwidth the broker can serve. 
Furthermore, administrators can also define quotas on topic operations—such as 
create, delete, and alter—to prevent Kafka clusters from being overwhelmed by 
highly concurrent topic operations (see KIP-599, https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations, and the quota type controller_mutation_rate).

Review Comment:
   Hard to see the change, which is on the last line: changes 
`controller_mutations_rate` to `controller_mutation_rate` since the latter is 
what was implemented despite what the KIP says.






[GitHub] [kafka] rondagostino opened a new pull request, #13118: MINOR: fix some typos in comments/docs/variable names

2023-01-13 Thread GitBox


rondagostino opened a new pull request, #13118:
URL: https://github.com/apache/kafka/pull/13118

   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] cmccabe commented on pull request #13108: KAFKA-14618; Fix off by one error in snapshot id

2023-01-13 Thread GitBox


cmccabe commented on PR #13108:
URL: https://github.com/apache/kafka/pull/13108#issuecomment-1382409158

   I filed a follow-up jira here 
https://issues.apache.org/jira/browse/KAFKA-14622





[jira] [Created] (KAFKA-14622) Create a junit test which would have caught KAFKA-14618

2023-01-13 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-14622:


 Summary: Create a junit test which would have caught KAFKA-14618
 Key: KAFKA-14622
 URL: https://issues.apache.org/jira/browse/KAFKA-14622
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe








[jira] [Resolved] (KAFKA-14618) Off by one error in generated snapshot IDs causes misaligned fetching

2023-01-13 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-14618.
--
Resolution: Fixed

> Off by one error in generated snapshot IDs causes misaligned fetching
> -
>
> Key: KAFKA-14618
> URL: https://issues.apache.org/jira/browse/KAFKA-14618
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: José Armando García Sancio
>Priority: Blocker
> Fix For: 3.4.0
>
>
> We implemented new snapshot generation logic here: 
> [https://github.com/apache/kafka/pull/12983]. A few days prior to this patch 
> getting merged, we had changed the `RaftClient` API to pass the _exclusive_ 
> offset when generating snapshots instead of the inclusive offset: 
> [https://github.com/apache/kafka/pull/12981]. Unfortunately, the new snapshot 
> generation logic was not updated accordingly. The consequence of this is that 
> the state on replicas can get out of sync. In the best case, the followers 
> fail replication because the offset after loading a snapshot is no longer 
> aligned on a batch boundary.
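The off-by-one is easiest to see with the snapshot id convention spelled out: a snapshot id carries the exclusive end offset, i.e. one past the last offset contained in the snapshot. A small illustrative sketch of that relationship, with types simplified rather than the actual raft API:

```
public final class SnapshotIdSketch {
    // Simplified stand-in for OffsetAndEpoch; the real class lives in the raft module.
    record OffsetAndEpoch(long offset, int epoch) { }

    // The snapshot id uses the exclusive end offset: one past the last record
    // the snapshot contains. Mixing this up with the inclusive offset shifts
    // every follower fetch by one record, which is the misalignment described above.
    static OffsetAndEpoch snapshotId(long lastContainedOffset, int lastContainedEpoch) {
        return new OffsetAndEpoch(lastContainedOffset + 1, lastContainedEpoch);
    }

    public static void main(String[] args) {
        // A snapshot whose last contained record is offset 99 in epoch 3
        // is identified by (offset=100, epoch=3).
        System.out.println(snapshotId(99L, 3));
    }
}
```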





[GitHub] [kafka] cmccabe commented on pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


cmccabe commented on PR #13116:
URL: https://github.com/apache/kafka/pull/13116#issuecomment-1382406862

   Thanks very much for this, @rondagostino ! 
   
   Anyway, what I’d like to do here is two things:
   1. Split all the spelling corrections and docs changes into a separate PR, 
which we can do today. They look like no-brainers.
   
   2. In a second PR, do everything in Scala and see if that is cleaner. I think it will 
be. It would be better not to involve the core controller in this; we should be 
able to see which partition(s) were successfully created from ControllerServer 
(check the controller response).
   
   Also, we'll need a test. As you said, we need to get a bit of dynamic config 
code in first, but that is pretty small.





[GitHub] [kafka] cmccabe commented on a diff in pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


cmccabe commented on code in PR #13116:
URL: https://github.com/apache/kafka/pull/13116#discussion_r1070071187


##
docs/ops.html:
##
@@ -1197,7 +1197,7 @@ 
 
   
-  Client quotas: Kafka supports different types of (per-user 
principal) client quotas. Because a client's quotas apply irrespective of which 
topics the client is writing to or reading from, they are a convenient and 
effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a 
user's impact on broker CPU usage by limiting the time a broker spends on the 
request handling path for that user, after which 
throttling kicks in. In many situations, isolating users with request rate 
quotas has a bigger impact in multi-tenant clusters than setting 
incoming/outgoing network bandwidth quotas, because excessive broker CPU usage 
for processing requests reduces the effective bandwidth the broker can serve. 
Furthermore, administrators can also define quotas on topic operations—such as 
create, delete, and alter—to prevent Kafka clusters from being overwhelmed by 
highly concurrent topic operations (see KIP-599, https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations, and the quota type controller_mutations_rate).
+  Client quotas: Kafka supports different types of (per-user 
principal) client quotas. Because a client's quotas apply irrespective of which 
topics the client is writing to or reading from, they are a convenient and 
effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a 
user's impact on broker CPU usage by limiting the time a broker spends on the 
request handling path for that user, after which 
throttling kicks in. In many situations, isolating users with request rate 
quotas has a bigger impact in multi-tenant clusters than setting 
incoming/outgoing network bandwidth quotas, because excessive broker CPU usage 
for processing requests reduces the effective bandwidth the broker can serve. 
Furthermore, administrators can also define quotas on topic operations—such as 
create, delete, and alter—to prevent Kafka clusters from being overwhelmed by 
highly concurrent topic operations (see KIP-599, https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations, and the quota type controller_mutation_rate).

Review Comment:
   ack






[GitHub] [kafka] rondagostino commented on pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


rondagostino commented on PR #13116:
URL: https://github.com/apache/kafka/pull/13116#issuecomment-1382381390

   Only implemented/tested the create-topics case so far.





[GitHub] [kafka] rondagostino commented on a diff in pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


rondagostino commented on code in PR #13116:
URL: https://github.com/apache/kafka/pull/13116#discussion_r1070011382


##
metadata/src/main/java/org/apache/kafka/controller/ControllerRequestContext.java:
##
@@ -42,19 +45,39 @@ public static OptionalLong requestTimeoutMsToDeadlineNs(
 private final OptionalLong deadlineNs;
 private final RequestHeaderData requestHeader;
 
+private final Optional> requestedPartitionCountRecorder;
+

Review Comment:
   We add this here because `class ControllerMutationQuotaManager` and `trait 
ControllerMutationQuota` are written in Scala while the KRaft controller and 
such are written in Java with no access to the Scala class/trait -- we can't 
pass in anything without wrapping it in Java somehow.  All we need to do is 
record a request for some number of partitions, and this matches that 
requirement precisely.
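A minimal sketch of the wrapping idea described here, assuming illustrative names rather than the exact fields in the PR: the Java controller code only needs a callback it can invoke with the requested partition count, and the Scala quota manager can supply that callback.

```
import java.util.Optional;
import java.util.function.Consumer;

public final class PartitionCountRecorderSketch {
    // Hypothetical request context: it optionally carries a recorder so the
    // controller-side code never needs to see the Scala ControllerMutationQuota type.
    static final class RequestContextSketch {
        final Optional<Consumer<Integer>> requestedPartitionCountRecorder;

        RequestContextSketch(Optional<Consumer<Integer>> recorder) {
            this.requestedPartitionCountRecorder = recorder;
        }

        void recordRequestedPartitions(int count) {
            // No-op when no quota is configured; otherwise hand the count to the quota.
            requestedPartitionCountRecorder.ifPresent(r -> r.accept(count));
        }
    }

    public static void main(String[] args) {
        // The Scala side would pass something that forwards to quota.record(permits);
        // a simple lambda stands in for it here.
        Consumer<Integer> recorder = count -> System.out.println("recording " + count + " partitions");
        RequestContextSketch context = new RequestContextSketch(Optional.of(recorder));
        context.recordRequestedPartitions(6);
    }
}
```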






[GitHub] [kafka] rondagostino commented on a diff in pull request #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


rondagostino commented on code in PR #13116:
URL: https://github.com/apache/kafka/pull/13116#discussion_r107035


##
docs/ops.html:
##
@@ -1197,7 +1197,7 @@ 
 
   
-  Client quotas: Kafka supports different types of (per-user 
principal) client quotas. Because a client's quotas apply irrespective of which 
topics the client is writing to or reading from, they are a convenient and 
effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a 
user's impact on broker CPU usage by limiting the time a broker spends on the 
request handling path for that user, after which 
throttling kicks in. In many situations, isolating users with request rate 
quotas has a bigger impact in multi-tenant clusters than setting 
incoming/outgoing network bandwidth quotas, because excessive broker CPU usage 
for processing requests reduces the effective bandwidth the broker can serve. 
Furthermore, administrators can also define quotas on topic operations—such as 
create, delete, and alter—to prevent Kafka clusters from being overwhelmed by 
highly concurrent topic operations (see KIP-599, https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations, and the quota type controller_mutations_rate).
+  Client quotas: Kafka supports different types of (per-user 
principal) client quotas. Because a client's quotas apply irrespective of which 
topics the client is writing to or reading from, they are a convenient and 
effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a 
user's impact on broker CPU usage by limiting the time a broker spends on the 
request handling path for that user, after which 
throttling kicks in. In many situations, isolating users with request rate 
quotas has a bigger impact in multi-tenant clusters than setting 
incoming/outgoing network bandwidth quotas, because excessive broker CPU usage 
for processing requests reduces the effective bandwidth the broker can serve. 
Furthermore, administrators can also define quotas on topic operations—such as 
create, delete, and alter—to prevent Kafka clusters from being overwhelmed by 
highly concurrent topic operations (see KIP-599, https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations, and the quota type controller_mutation_rate).

Review Comment:
   Hard to see the change, which is on the last line: changes 
`controller_mutations_rate` to `controller_mutation_rate` since the latter is 
what was implemented despite what the KIP says.






[GitHub] [kafka] rondagostino opened a new pull request, #13116: KAFKA-14351: Controller Mutation Quota for KRaft

2023-01-13 Thread GitBox


rondagostino opened a new pull request, #13116:
URL: https://github.com/apache/kafka/pull/13116

   Adds hooks to check controller mutation quotas in KRaft.
   
   It won't actually work in practice until controllers support dynamic 
reconfiguration (https://issues.apache.org/jira/browse/KAFKA-14350), but this 
adds the appropriate logic for when that happens.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Assigned] (KAFKA-14351) Implement controller mutation quotas in KRaft

2023-01-13 Thread Ron Dagostino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Dagostino reassigned KAFKA-14351:
-

Assignee: Ron Dagostino

> Implement controller mutation quotas in KRaft
> -
>
> Key: KAFKA-14351
> URL: https://issues.apache.org/jira/browse/KAFKA-14351
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Ron Dagostino
>Priority: Major
>  Labels: kip-500
>






[GitHub] [kafka] cherylws opened a new pull request, #13115: Update doc entry in ProducerConfig.java to be consistent with behavior described in max.in.flight.requests.per.connection

2023-01-13 Thread GitBox


cherylws opened a new pull request, #13115:
URL: https://github.com/apache/kafka/pull/13115

   This should be greater than 1 to be consistent with behavior described in 
max.in.flight.requests.per.connection property.
   
   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Created] (KAFKA-14621) Don't startup in migration mode if an authorizer is enabled

2023-01-13 Thread David Arthur (Jira)
David Arthur created KAFKA-14621:


 Summary: Don't startup in migration mode if an authorizer is 
enabled
 Key: KAFKA-14621
 URL: https://issues.apache.org/jira/browse/KAFKA-14621
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 3.4.0
Reporter: David Arthur
Assignee: David Arthur


In 3.4, we do not yet support migrating ACLs from ZK to KRaft. To avoid 
potential confusion and security problems, we should just disallow authorizers 
during the migration.





[GitHub] [kafka] dajac commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


dajac commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1069967097


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -470,4 +470,45 @@ class GroupCoordinatorAdapter(
   expireTimestamp = expireTimestamp
 )
   }
+
+  override def deleteOffsets(
+    context: RequestContext,
+    request: OffsetDeleteRequestData,
+    bufferSupplier: BufferSupplier
+  ): CompletableFuture[OffsetDeleteResponseData] = {
+    val future = new CompletableFuture[OffsetDeleteResponseData]()
+
+    val partitions = mutable.ArrayBuffer[TopicPartition]()
+    request.topics.forEach { topic =>
+      topic.partitions.forEach { partition =>
+        partitions += new TopicPartition(topic.name, partition.partitionIndex)
+      }
+    }
+
+    val (groupError, topicPartitionResults) = coordinator.handleDeleteOffsets(
+      request.groupId,
+      partitions,
+      RequestLocal(bufferSupplier)
+    )
+
+    if (groupError != Errors.NONE) {
+      future.completeExceptionally(groupError.exception)
+    } else {
+      val response = new OffsetDeleteResponseData()
+      topicPartitionResults.forKeyValue { (topicPartition, error) =>
+        var topic = response.topics.find(topicPartition.topic)

Review Comment:
   That’s correct. The « map key » does that.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] artemlivshits commented on a diff in pull request #12901: KAFKA-14367; Add `TxnOffsetCommit` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


artemlivshits commented on code in PR #12901:
URL: https://github.com/apache/kafka/pull/12901#discussion_r1069963790


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -397,19 +389,24 @@ class GroupCoordinatorAdapter(
 request: TxnOffsetCommitRequestData,
 bufferSupplier: BufferSupplier
   ): CompletableFuture[TxnOffsetCommitResponseData] = {
+val currentTimeMs = time.milliseconds
 val future = new CompletableFuture[TxnOffsetCommitResponseData]()
 
 def callback(results: Map[TopicPartition, Errors]): Unit = {
   val response = new TxnOffsetCommitResponseData()
-  val byTopics = new util.HashMap[String, 
TxnOffsetCommitResponseData.TxnOffsetCommitResponseTopic]()
+  val byTopics = new mutable.HashMap[String, 
TxnOffsetCommitResponseData.TxnOffsetCommitResponseTopic]()
 
   results.forKeyValue { (tp, error) =>
-var topic = byTopics.get(tp.topic)
-if (topic == null) {
-  topic = new 
TxnOffsetCommitResponseData.TxnOffsetCommitResponseTopic().setName(tp.topic)
-  byTopics.put(tp.topic, topic)
-  response.topics.add(topic)
+val topic = byTopics.get(tp.topic) match {

Review Comment:
   The downside is double-lookup (one for get and another for insert).
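
   For illustration, a minimal, self-contained Java sketch of that trade-off, using made-up 
topic-partition strings (the PR itself is Scala, where `mutable.Map#getOrElseUpdate` is the 
usual single-call alternative):

   ```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ByTopicGrouping {
    public static void main(String[] args) {
        List<String> partitions = List.of("foo-0", "foo-1", "bar-0");

        // get-then-put: on a miss this touches the map twice (one lookup, one insert).
        Map<String, List<String>> twoLookups = new HashMap<>();
        for (String tp : partitions) {
            String topic = tp.substring(0, tp.lastIndexOf('-'));
            List<String> bucket = twoLookups.get(topic);
            if (bucket == null) {
                bucket = new ArrayList<>();
                twoLookups.put(topic, bucket);
            }
            bucket.add(tp);
        }

        // computeIfAbsent: a single map call that inserts the bucket on a miss.
        Map<String, List<String>> oneLookup = new HashMap<>();
        for (String tp : partitions) {
            String topic = tp.substring(0, tp.lastIndexOf('-'));
            oneLookup.computeIfAbsent(topic, k -> new ArrayList<>()).add(tp);
        }

        System.out.println(twoLookups.equals(oneLookup)); // prints: true
    }
}
   ```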



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rondagostino commented on pull request #12587: [WIP] MINOR: Use named class for ExpiringCredential to improve `principalLogText()` output

2023-01-13 Thread GitBox


rondagostino commented on PR #12587:
URL: https://github.com/apache/kafka/pull/12587#issuecomment-1382319525

   Ok, I looked at this again, and I think there was a mistake combined with a 
poorly named variable that was causing confusion.
   
   The `private String principalName = null;` variable that you want to remove 
should have originally been named `private String lastKnownPrincipalName = 
null;`
   
   The reason this variable exists is because of the potential for this case to 
occur when trying to invoke `reLogin()`:
   ```
   if (!hasExpiringCredential) {
   /*
* Re-login has failed because our login() invocation has 
not generated a
* credential but has also not generated an exception. We 
won't exit here;
* instead we will allow login retries in case we can 
somehow fix the issue (it
* seems likely to be a bug, but it doesn't hurt to keep 
trying to refresh).
*/
   log.error("No Expiring Credential after a 
supposedly-successful re-login");
   principalName = null;
   ```
   
   The bug is the line `principalName = null;` -- that line should not exist.  
If we had named the variable correctly we would have seen that we are blanking 
out the last-known principal name, which we should not do.  This case *should* 
result in the expiring credential being null but the last known principal 
reflecting the principal that we last had, but with this line it doesn't happen 
-- we get a null credential and a null last-known name, which is pointless (and 
therefore I see why with the bad name you wanted to remove it).  But keeping 
this last-known principal name around is why the logging-related method exists 
as follows (with the rename from `principalName` to `lastKnownPrincipalName` 
included):
   
   ```
   private String principalLogText() {
   return expiringCredential == null ? lastKnownPrincipalName
   : expiringCredential.getClass().getSimpleName() + ":" + 
lastKnownPrincipalName;
   }
   ```
   
   Apologies for this morass -- two mistakes compounded into something quite 
confusing.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] (KAFKA-14603) Move KafkaMetricsGroup to server-common module.

2023-01-13 Thread Ivan Yurchenko (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-14603 ]


Ivan Yurchenko deleted comment on KAFKA-14603:


was (Author: ivanyu):
https://github.com/apache/kafka/pull/13067

> Move KafkaMetricsGroup to server-common module.
> ---
>
> Key: KAFKA-14603
> URL: https://issues.apache.org/jira/browse/KAFKA-14603
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Satish Duggana
>Assignee: Ivan Yurchenko
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] ivanyu commented on pull request #13067: KAFKA-14524: Rewrite KafkaMetricsGroup in Java

2023-01-13 Thread GitBox


ivanyu commented on PR #13067:
URL: https://github.com/apache/kafka/pull/13067#issuecomment-1382303264

   I've finished porting `KafkaMetricsGroup` to Java and I'm opening this for 
review.
   
   I decided not to implement the performance improvements @lbownik proposed 
above and stick to verbatim porting to limit the review surface. The methods 
are far from the hot path, so their performance doesn't matter much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] kamalcph commented on pull request #13060: KAFKA-14559: Fix JMX tool to handle the object names with wild cards and optional attributes

2023-01-13 Thread GitBox


kamalcph commented on PR #13060:
URL: https://github.com/apache/kafka/pull/13060#issuecomment-1382263830

   @ijuma @showuon @mimaison 
   
   Ping for review!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] kamalcph commented on pull request #13059: MINOR: KafkaConfig should not expose internal config when queried for non-internal values

2023-01-13 Thread GitBox


kamalcph commented on PR #13059:
URL: https://github.com/apache/kafka/pull/13059#issuecomment-1382262128

   @ijuma @showuon 
   Can you merge this patch? Rebased with the trunk.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] ivanyu commented on a diff in pull request #13067: KAFKA-14524: Rewrite KafkaMetricsGroup in Java

2023-01-13 Thread GitBox


ivanyu commented on code in PR #13067:
URL: https://github.com/apache/kafka/pull/13067#discussion_r1069827409


##
core/src/main/scala/kafka/server/metadata/BrokerMetadataListener.scala:
##
@@ -42,7 +43,7 @@ class BrokerMetadataListener(
   val snapshotter: Option[MetadataSnapshotter],
   brokerMetrics: BrokerServerMetrics,
   _metadataLoadingFaultHandler: FaultHandler
-) extends RaftClient.Listener[ApiMessageAndVersion] with KafkaMetricsGroup {
+) extends RaftClient.Listener[ApiMessageAndVersion] with Logging {

Review Comment:
   `KafkaMetricsGroup` wasn't used here.



##
core/src/main/scala/kafka/server/ZkAdminManager.scala:
##
@@ -69,7 +68,7 @@ object ZkAdminManager {
 class ZkAdminManager(val config: KafkaConfig,
  val metrics: Metrics,
  val metadataCache: MetadataCache,
- val zkClient: KafkaZkClient) extends Logging with 
KafkaMetricsGroup {
+ val zkClient: KafkaZkClient) extends Logging {

Review Comment:
   `KafkaMetricsGroup` wasn't used here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] ivanyu commented on a diff in pull request #13067: KAFKA-14524: Rewrite KafkaMetricsGroup in Java

2023-01-13 Thread GitBox


ivanyu commented on code in PR #13067:
URL: https://github.com/apache/kafka/pull/13067#discussion_r1069826661


##
core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala:
##
@@ -34,7 +33,7 @@ import scala.jdk.CollectionConverters._
 /**
   * This is an integration test that tests the fully integrated log cleaner
   */
-class LogCleanerIntegrationTest extends AbstractLogCleanerIntegrationTest with 
KafkaMetricsGroup {
+class LogCleanerIntegrationTest extends AbstractLogCleanerIntegrationTest {

Review Comment:
   `KafkaMetricsGroup` wasn't used here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] ivanyu commented on a diff in pull request #13067: KAFKA-14524: Rewrite KafkaMetricsGroup in Java

2023-01-13 Thread GitBox


ivanyu commented on code in PR #13067:
URL: https://github.com/apache/kafka/pull/13067#discussion_r1069825536


##
core/src/main/scala/kafka/log/remote/RemoteLogManager.scala:
##
@@ -48,7 +47,7 @@ import scala.jdk.CollectionConverters._
  */
 class RemoteLogManager(rlmConfig: RemoteLogManagerConfig,
brokerId: Int,
-   logDir: String) extends Logging with Closeable with 
KafkaMetricsGroup {

Review Comment:
   `KafkaMetricsGroup` wasn't used here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jolshan commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jolshan commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1069791363


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -470,4 +470,45 @@ class GroupCoordinatorAdapter(
   expireTimestamp = expireTimestamp
 )
   }
+
+  override def deleteOffsets(
+context: RequestContext,
+request: OffsetDeleteRequestData,
+bufferSupplier: BufferSupplier
+  ): CompletableFuture[OffsetDeleteResponseData] = {
+val future = new CompletableFuture[OffsetDeleteResponseData]()
+
+val partitions = mutable.ArrayBuffer[TopicPartition]()
+request.topics.forEach { topic =>
+  topic.partitions.forEach { partition =>
+partitions += new TopicPartition(topic.name, partition.partitionIndex)
+  }
+}
+
+val (groupError, topicPartitionResults) = coordinator.handleDeleteOffsets(
+  request.groupId,
+  partitions,
+  RequestLocal(bufferSupplier)
+)
+
+if (groupError != Errors.NONE) {
+  future.completeExceptionally(groupError.exception)
+} else {
+  val response = new OffsetDeleteResponseData()
+  topicPartitionResults.forKeyValue { (topicPartition, error) =>
+var topic = response.topics.find(topicPartition.topic)

Review Comment:
   TIL that topics in this response are 
ImplicitLinkedHashMultiCollection.Elements.  
   I'm curious why this specific response gets this typing compared to similar 
request/response jsons -- I guess it's maybe the "map key". But good to know 
that this find operation works and is fairly efficient.
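
   For reference, a rough Java rendering of the find-or-create pattern under discussion, 
assuming the generated `OffsetDeleteResponseData` classes expose `topics().find(name)`, 
`setName`, and `add` the way the Scala diff above uses them (the helper name is made up):

   ```java
import org.apache.kafka.common.message.OffsetDeleteResponseData;

public class OffsetDeleteResponseSketch {
    // Look up the topic entry by its map key ("name"), creating it on first use.
    static OffsetDeleteResponseData.OffsetDeleteResponseTopic topicEntry(
            OffsetDeleteResponseData response, String topicName) {
        OffsetDeleteResponseData.OffsetDeleteResponseTopic topic = response.topics().find(topicName);
        if (topic == null) {
            topic = new OffsetDeleteResponseData.OffsetDeleteResponseTopic().setName(topicName);
            response.topics().add(topic);
        }
        return topic;
    }
}
   ```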



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe merged pull request #13108: KAFKA-14618; Fix off by one error in snapshot id

2023-01-13 Thread GitBox


cmccabe merged PR #13108:
URL: https://github.com/apache/kafka/pull/13108


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jolshan commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jolshan commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1069791363


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -470,4 +470,45 @@ class GroupCoordinatorAdapter(
   expireTimestamp = expireTimestamp
 )
   }
+
+  override def deleteOffsets(
+context: RequestContext,
+request: OffsetDeleteRequestData,
+bufferSupplier: BufferSupplier
+  ): CompletableFuture[OffsetDeleteResponseData] = {
+val future = new CompletableFuture[OffsetDeleteResponseData]()
+
+val partitions = mutable.ArrayBuffer[TopicPartition]()
+request.topics.forEach { topic =>
+  topic.partitions.forEach { partition =>
+partitions += new TopicPartition(topic.name, partition.partitionIndex)
+  }
+}
+
+val (groupError, topicPartitionResults) = coordinator.handleDeleteOffsets(
+  request.groupId,
+  partitions,
+  RequestLocal(bufferSupplier)
+)
+
+if (groupError != Errors.NONE) {
+  future.completeExceptionally(groupError.exception)
+} else {
+  val response = new OffsetDeleteResponseData()
+  topicPartitionResults.forKeyValue { (topicPartition, error) =>
+var topic = response.topics.find(topicPartition.topic)

Review Comment:
   TIL that topics in this response are 
ImplicitLinkedHashMultiCollection.Elements.  
   I'm curious why this specific response gets this typing compared to similar 
request/response jsons. But good to know that this find operation works and is 
fairly efficient.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jolshan commented on a diff in pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


jolshan commented on code in PR #12902:
URL: https://github.com/apache/kafka/pull/12902#discussion_r1069791363


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinatorAdapter.scala:
##
@@ -470,4 +470,45 @@ class GroupCoordinatorAdapter(
   expireTimestamp = expireTimestamp
 )
   }
+
+  override def deleteOffsets(
+context: RequestContext,
+request: OffsetDeleteRequestData,
+bufferSupplier: BufferSupplier
+  ): CompletableFuture[OffsetDeleteResponseData] = {
+val future = new CompletableFuture[OffsetDeleteResponseData]()
+
+val partitions = mutable.ArrayBuffer[TopicPartition]()
+request.topics.forEach { topic =>
+  topic.partitions.forEach { partition =>
+partitions += new TopicPartition(topic.name, partition.partitionIndex)
+  }
+}
+
+val (groupError, topicPartitionResults) = coordinator.handleDeleteOffsets(
+  request.groupId,
+  partitions,
+  RequestLocal(bufferSupplier)
+)
+
+if (groupError != Errors.NONE) {
+  future.completeExceptionally(groupError.exception)
+} else {
+  val response = new OffsetDeleteResponseData()
+  topicPartitionResults.forKeyValue { (topicPartition, error) =>
+var topic = response.topics.find(topicPartition.topic)

Review Comment:
   TIL that topics are ImplicitLinkedHashMultiCollection.Elements.  
   I'm curious why this specific response gets this typing compared to similar 
request/response jsons. But good to know that this find operation works and is 
fairly efficient.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] fvaleri commented on a diff in pull request #13095: KAFKA-14580: Moving EndToEndLatency from core to tools module

2023-01-13 Thread GitBox


fvaleri commented on code in PR #13095:
URL: https://github.com/apache/kafka/pull/13095#discussion_r1069628843


##
tools/src/main/java/org/apache/kafka/tools/EndToEndLatency.java:
##
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.tools;
+
+
+import net.sourceforge.argparse4j.ArgumentParsers;
+import net.sourceforge.argparse4j.inf.ArgumentParser;
+import net.sourceforge.argparse4j.inf.ArgumentParserException;
+import net.sourceforge.argparse4j.inf.Namespace;
+import org.apache.kafka.clients.admin.Admin;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.utils.Exit;
+import org.apache.kafka.common.utils.Utils;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.time.Duration;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Properties;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.ExecutionException;
+import java.util.stream.Collectors;
+
+import static net.sourceforge.argparse4j.impl.Arguments.store;
+
+/**
+ * This class records the average end to end latency for a single message to 
travel through Kafka
+ *
+ * broker_list = location of the bootstrap broker for both the producer and 
the consumer
+ * num_messages = # messages to send
+ * producer_acks = See ProducerConfig.ACKS_DOC
+ * message_size_bytes = size of each message in bytes
+ *
+ * e.g. [localhost:9092 test 1 1 20]
+ */
+public class EndToEndLatency {
+
+public static void main(String... args) {
+Exit.exit(mainNoExit(args));
+}
+private final static long TIMEOUT = 6;
+
+static int mainNoExit(String... args) {
+try {
+execute(args);
+return 0;
+} catch (TerseException e) {
+System.err.println(e.getMessage());
+return 1;
+} catch (Throwable e) {
+System.err.println(e.getMessage());
+System.err.println(Utils.stackTrace(e));
+return 1;
+}
+}
+
+// Visible for testing
+static void execute(String... args) throws Exception {
+
+ArgumentParser parser = addArguments();
+
+Namespace res = null;
+try {
+res = parser.parseArgs(args);
+} catch (ArgumentParserException e) {
+if (args.length == 0) {
+parser.printHelp();
+Exit.exit(0);
+} else {
+parser.handleError(e);
+Exit.exit(1);
+}
+}
+
+String brokers = res.getString("broker_list");
+String topic = res.getString("topic");
+int numMessages = res.getInt("num_messages");
+String acks = res.getString("producer_acks");
+int messageSizeBytes = res.getInt("message_size_bytes");
+String propertiesFile = res.getString("properties_file");
+
+if (!Arrays.asList("1", "all").contains(acks)) {
+throw new IllegalArgumentException("Latency testing requires 
synchronous acknowledgement. Please use 1 or all");
+}
+
+Properties props;
+try {
+props = loadPropsWithBootstrapServers(propertiesFile);
+} catch (FileNotFoundException e) {
+throw new IllegalArgumentException("Properties file not found");
+} catch (IOException e) {
+throw new RuntimeException(e);
+}
+
+try (KafkaConsumer consumer = 
createKafkaConsumer(props, brokers);
+ KafkaProducer producer = 
createKafkaProducer(props, acks, brokers)) {

Review Comment:
   Here you have consumer properties passed to the producer instance with 
warnings printed out to the console. That's not what the original code does.
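
   One way to keep the two client configs apart is to copy the shared file-based 
properties into separate consumer and producer `Properties` objects before adding 
role-specific keys. A minimal sketch (the group id value is made up, and this is not 
necessarily how this PR will resolve the comment):

   ```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ClientPropsSketch {
    // Consumer-only keys never reach the producer (and vice versa), so neither
    // client logs "supplied but isn't a known config" warnings at startup.
    static Properties consumerProps(Properties base, String brokers) {
        Properties props = new Properties();
        props.putAll(base);
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "end-to-end-latency-sketch");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        return props;
    }

    static Properties producerProps(Properties base, String brokers, String acks) {
        Properties props = new Properties();
        props.putAll(base);
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ProducerConfig.ACKS_CONFIG, acks);
        return props;
    }
}
   ```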



##

[GitHub] [kafka] pprovenzano opened a new pull request, #13114: KAFKA-14084: SCRAM support in KRaft.

2023-01-13 Thread GitBox


pprovenzano opened a new pull request, #13114:
URL: https://github.com/apache/kafka/pull/13114

   This commit adds support to store the SCRAM credentials in a cluster with 
KRaft quorum servers and no ZK cluster backing the metadata.
   
   This commit does not include changes to allow bootstrap of a KRaft cluster 
with brokers using SCRAM at bootstrap time. The bootstrap support will come 
with a separate KIP and commit.
   
   Tests pass along with manual testing of credential operations against a 
cluster with a separate advertised listener for SASL_SCRAM
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] woja commented on pull request #11743: KAFKA-13660: Switch log4j12 to reload4j

2023-01-13 Thread GitBox


woja commented on PR #11743:
URL: https://github.com/apache/kafka/pull/11743#issuecomment-1382118315

   Great! I'll take a look then
   
   On Fri, 13 Jan 2023 at 10:18, Bruno Cadonna ***@***.***>
   wrote:
   
   > @woja  Thank you for your interest!
   >
   > You can find the cve list here:
   > https://github.com/apache/kafka-site/blob/asf-site/cve-list.html
   >
   > Please also read the contribution guidelines for the website here:
   > https://kafka.apache.org/contributing
   >
   > —
   > Reply to this email directly, view it on GitHub
   > , or
   > unsubscribe
   > 

   > .
   > You are receiving this because you were mentioned.Message ID:
   > ***@***.***>
   >
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-14453) Flaky test suite MirrorConnectorsWithCustomForwardingAdminIntegrationTest

2023-01-13 Thread Kamalesh Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676683#comment-17676683
 ] 

Kamalesh Palanisamy commented on KAFKA-14453:
-

[~ChrisEgerton] is this test still failing? I am a newbie and I can look into 
this. Thanks!

> Flaky test suite MirrorConnectorsWithCustomForwardingAdminIntegrationTest
> -
>
> Key: KAFKA-14453
> URL: https://issues.apache.org/jira/browse/KAFKA-14453
> Project: Kafka
>  Issue Type: Test
>  Components: mirrormaker
>Reporter: Chris Egerton
>Priority: Major
>  Labels: flaky-test
>
> We've been seeing some integration test failures lately for the 
> {{MirrorConnectorsWithCustomForwardingAdminIntegrationTest}} test suite. A 
> couple examples:
> {{org.opentest4j.AssertionFailedError: Condition not met within timeout 
> 6. Topic: mm2-offset-syncs.backup.internal didn't get created in the 
> FakeLocalMetadataStore ==> expected:  but was: }}
> {{    at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)}}
> {{    at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)}}
> {{    at 
> app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)}}
> {{    at 
> app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)}}
> {{    at 
> app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)}}
> {{    at 
> app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)}}
> {{    at 
> app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)}}
> {{    at 
> app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)}}
> {{    at 
> app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)}}
> {{    at 
> app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:308)}}
> {{    at 
> app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.waitForTopicToPersistInFakeLocalMetadataStore(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:326)}}
> {{    at 
> app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.testReplicationIsCreatingTopicsUsingProvidedForwardingAdmin(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:217)}}
> {{}}
>  
> And:
>  
> {{org.opentest4j.AssertionFailedError: Condition not met within timeout 
> 6. Topic: primary.test-topic-1's configs don't have partitions:11 ==> 
> expected:  but was: }}
> {{    }}{{at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)}}
> {{    }}{{at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)}}
> {{    }}{{at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)}}
> {{    }}{{at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)}}
> {{    }}{{at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)}}
> {{    }}{{at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)}}
> {{    }}{{at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)}}
> {{    }}{{at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)}}
> {{    }}{{at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)}}
> {{    }}{{at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:308)}}
> {{    }}{{at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.waitForTopicConfigPersistInFakeLocalMetaDataStore(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:334)}}
> {{    }}{{at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.testCreatePartitionsUseProvidedForwardingAdmin(MirrorConnectorsWithCustomForwardingAdminIntegrationTest.java:255)}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14218) replace temp file handler with JUnit 5 Temporary Directory Support

2023-01-13 Thread Kamalesh Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676682#comment-17676682
 ] 

Kamalesh Palanisamy commented on KAFKA-14218:
-

[~divijvaidya] [~showuon] is this issue still open and in scope for the current 
version? I can look into it.

> replace temp file handler with JUnit 5 Temporary Directory Support
> --
>
> Key: KAFKA-14218
> URL: https://issues.apache.org/jira/browse/KAFKA-14218
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Luke Chen
>Priority: Major
>  Labels: Newbie, newbie
>
> We created many temp files in tests, and sometimes we forgot to delete them 
> after usage. Instead of polluting @AfterEach for each test, we should 
> consider using the JUnit 5 TempDirectory Extension.
>  
> REF: 1. [https://github.com/apache/kafka/pull/12591#issuecomment-1243001431]
> 2. [https://www.baeldung.com/junit-5-temporary-directory]
>  
>  
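
A minimal sketch of the JUnit 5 approach described above (assuming JUnit Jupiter 5.4+ on 
the test classpath; the file name is illustrative):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

import static org.junit.jupiter.api.Assertions.assertTrue;

class TempDirUsageTest {

    // JUnit injects a fresh directory per test and deletes it recursively afterwards,
    // so no @AfterEach cleanup code is needed.
    @Test
    void writesIntoManagedTempDir(@TempDir Path tempDir) throws IOException {
        Path segment = Files.write(tempDir.resolve("segment.log"), new byte[] {1, 2, 3});
        assertTrue(Files.exists(segment));
    }
}
{code}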



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-5238) BrokerTopicMetrics can be recreated after topic is deleted

2023-01-13 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-5238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676681#comment-17676681
 ] 

Edoardo Comar commented on KAFKA-5238:
--

Retrying with https://github.com/apache/kafka/pull/13113 
many years after the old PR, as this patch has been adopted and used for more 
than 5 years in my organisation, where before this patch we were seeing metrics 
leak in long-lived clusters.

> BrokerTopicMetrics can be recreated after topic is deleted
> --
>
> Key: KAFKA-5238
> URL: https://issues.apache.org/jira/browse/KAFKA-5238
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Edoardo Comar
>Priority: Major
>
> As part of KAFKA-3258, we added code to remove metrics during topic deletion. 
> This works fine as long as there are no fetch requests in the purgatory. If 
> there are, however, we'll recreate the metrics when we call 
> `ReplicaManager.appendToLocalLog`.
> This can be reproduced by updating 
> MetricsTest.testBrokerTopicMetricsUnregisteredAfterDeletingTopic() in the 
> following way:
> {code}
> @Test
>   def testBrokerTopicMetricsUnregisteredAfterDeletingTopic() {
> val topic = "test-broker-topic-metric"
> AdminUtils.createTopic(zkUtils, topic, 2, 1)
> // Produce a few messages and consume them to create the metrics
> TestUtils.produceMessages(servers, topic, nMessages)
> TestUtils.consumeTopicRecords(servers, topic, nMessages)
> assertTrue("Topic metrics don't exist", topicMetricGroups(topic).nonEmpty)
> assertNotNull(BrokerTopicStats.getBrokerTopicStats(topic))
> AdminUtils.deleteTopic(zkUtils, topic)
> TestUtils.verifyTopicDeletion(zkUtils, topic, 1, servers)
> Thread.sleep(1)
> assertEquals("Topic metrics exists after deleteTopic", Set.empty, 
> topicMetricGroups(topic))
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] edoardocomar opened a new pull request, #13113: KAFKA-5238: BrokerTopicMetrics can be recreated after topic is deleted

2023-01-13 Thread GitBox


edoardocomar opened a new pull request, #13113:
URL: https://github.com/apache/kafka/pull/13113

   Do not recreate the BrokerTopicMetrics on creating the fetch response to a 
DelayedFetch request executed after a topic was deleted
   
   Add a variant of 
MetricsTest.testBrokerTopicMetricsUnregisteredAfterDeletingTopic
   where messages are consumed 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] edoardocomar commented on pull request #13113: KAFKA-5238: BrokerTopicMetrics can be recreated after topic is deleted

2023-01-13 Thread GitBox


edoardocomar commented on PR #13113:
URL: https://github.com/apache/kafka/pull/13113#issuecomment-1382078154

   This PR comes adapted to current trunk many many years after 
https://github.com/apache/kafka/pull/4204 
   thanks to @mimaison @ijuma @rajinisivaram who had reviewed it. 
   That was never merged, as there is still the possibility of a race between 
topic deletion and the metric being recreated,
   although within a very small timing window. But a consensus appeared to have 
been reached that it was worth considering.
   
   The rationale for coming back to this is heuristic, as this patch has been 
adopted and used for more than 5 years in my organisation, where before this 
patch we were seeing metrics leak in long-lived clusters.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-13709) Document exactly-once support for source connectors

2023-01-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676676#comment-17676676
 ] 

ASF GitHub Bot commented on KAFKA-13709:


C0urante commented on PR #478:
URL: https://github.com/apache/kafka-site/pull/478#issuecomment-1382075393

   @mjsax Would you mind taking a look when you have a chance? Want to make 
sure I've got the flow for backporting docs changes straight.




> Document exactly-once support for source connectors
> ---
>
> Key: KAFKA-13709
> URL: https://issues.apache.org/jira/browse/KAFKA-13709
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> Add documentation for the support for exactly-once source connectors 
> introduced in 
> [KIP-618|https://cwiki.apache.org/confluence/display/KAFKA/KIP-618%3A+Exactly-Once+Support+for+Source+Connectors].
>  This includes but is not limited to:
>  * How to safely perform a rolling upgrade to enable exactly-once source 
> support for an existing cluster
>  * Any new APIs that connector authors can/should leverage for their source 
> connectors that need clarification beyond what can be included in a Javadoc 
> (for example, how to know what to return from 
> {{{}SourceConnector::exactlyOnceSupport{}}}, and an example on how to define 
> custom transaction boundaries for a connector)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13709) Document exactly-once support for source connectors

2023-01-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676675#comment-17676675
 ] 

ASF GitHub Bot commented on KAFKA-13709:


C0urante opened a new pull request, #478:
URL: https://github.com/apache/kafka-site/pull/478

   Backports the docs updates from https://github.com/apache/kafka/pull/12941 
and https://github.com/apache/kafka/pull/13106 to 3.3, the version in which 
KIP-618 was released and exactly-once source connectors became possible.




> Document exactly-once support for source connectors
> ---
>
> Key: KAFKA-13709
> URL: https://issues.apache.org/jira/browse/KAFKA-13709
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> Add documentation for the support for exactly-once source connectors 
> introduced in 
> [KIP-618|https://cwiki.apache.org/confluence/display/KAFKA/KIP-618%3A+Exactly-Once+Support+for+Source+Connectors].
>  This includes but is not limited to:
>  * How to safely perform a rolling upgrade to enable exactly-once source 
> support for an existing cluster
>  * Any new APIs that connector authors can/should leverage for their source 
> connectors that need clarification beyond what can be included in a Javadoc 
> (for example, how to know what to return from 
> {{{}SourceConnector::exactlyOnceSupport{}}}, and an example on how to define 
> custom transaction boundaries for a connector)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14610) Publish Mirror Maker 2 offset syncs in task commit method

2023-01-13 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton reassigned KAFKA-14610:
-

Assignee: Chris Egerton

> Publish Mirror Maker 2 offset syncs in task commit method
> -
>
> Key: KAFKA-14610
> URL: https://issues.apache.org/jira/browse/KAFKA-14610
> Project: Kafka
>  Issue Type: Improvement
>  Components: mirrormaker
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> Mirror Maker 2 periodically publishes offset sync messages to a Kafka topic 
> that contains the corresponding upstream and downstream offsets for a 
> replicated topic partition.
>  
> Currently, this publishing takes place inside the [commitRecord 
> method|https://github.com/apache/kafka/blob/e38526e375389868664c8977c7a2125e5da2388c/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L192],
>  which is invoked by the Kafka Connect framework after a source record has 
> been successfully sent by its producer (i.e., ack'd by the requested number 
> of brokers).
>  
> Mirror Maker 2 also has logic to limit the number of in-flight offset sync 
> messages. Once ten messages have been dispatched to the producer used for 
> offset syncs (which is a separate producer from the one that the Kafka 
> Connect framework uses for sending records received from the [poll 
> method|https://github.com/apache/kafka/blob/e38526e375389868664c8977c7a2125e5da2388c/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L134])
>  that have not yet been ack'd by the requested number of brokers, Mirror 
> Maker 2 begins to skip sending offset sync messages, and will only resume 
> sending messages once the number of in-flight offset syncs goes below 10, and 
> new calls to the {{commitRecord}} method take place.
>  
> When bursts of throughput occur in replicated topic partitions, this can 
> cause offset syncs to be dropped for long periods of time if an offset sync 
> is skipped for some topic partition due to a high number of in-flight 
> messages and then no further messages are read from that same topic partition 
> for a while.
>  
> Instead, the task should cache offset syncs in its {{{}commitRecord 
> method{}}}, and only actually send offset sync messages in its [commit 
> method|https://github.com/apache/kafka/blob/e38526e375389868664c8977c7a2125e5da2388c/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L108],
>  which is invoked periodically by the Kafka Connect framework. Any offset 
> syncs that are skipped due to too many in-flight messages will then be 
> automatically retried later when {{commit}} is re-invoked, regardless of 
> whether any more records are read from the corresponding topic partition.
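
A rough sketch of the cache-and-retry idea described above (the class and method names 
are illustrative, not the actual MirrorSourceTask change):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiPredicate;

import org.apache.kafka.common.TopicPartition;

// Illustrative cache of the latest upstream->downstream offset pair per partition.
// commitRecord() only records the pair; commit() retries the publish for every
// pending partition, even if no new records arrived for it since the last attempt.
public class OffsetSyncCacheSketch {

    public static final class Sync {
        public final long upstreamOffset;
        public final long downstreamOffset;

        public Sync(long upstreamOffset, long downstreamOffset) {
            this.upstreamOffset = upstreamOffset;
            this.downstreamOffset = downstreamOffset;
        }
    }

    private final Map<TopicPartition, Sync> pendingSyncs = new ConcurrentHashMap<>();

    // Called from commitRecord(): remember the latest pair, but do not send yet.
    public void onRecordCommitted(TopicPartition partition, long upstream, long downstream) {
        pendingSyncs.put(partition, new Sync(upstream, downstream));
    }

    // Called from commit(): try to publish everything still pending. The sender is a
    // stand-in for the real producer call; it returns false when the sync was skipped
    // (for example because too many offset syncs are already in flight).
    public void onCommit(BiPredicate<TopicPartition, Sync> sender) {
        pendingSyncs.forEach((partition, sync) -> {
            if (sender.test(partition, sync)) {
                pendingSyncs.remove(partition, sync);
            }
        });
    }
}
{code}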



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] edoardocomar closed pull request #4204: KAFKA-5238: BrokerTopicMetrics can be recreated after topic is deleted

2023-01-13 Thread GitBox


edoardocomar closed pull request #4204: KAFKA-5238: BrokerTopicMetrics can be 
recreated after topic is deleted
URL: https://github.com/apache/kafka/pull/4204


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] dajac closed pull request #13110: KAFKA-14367; Add `partitionFor` and `handleTxnCompletion` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


dajac closed pull request #13110: KAFKA-14367; Add `partitionFor` and 
`handleTxnCompletion` to the new `GroupCoordinator` interface
URL: https://github.com/apache/kafka/pull/13110


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] dajac commented on pull request #13110: KAFKA-14367; Add `partitionFor` and `handleTxnCompletion` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


dajac commented on PR #13110:
URL: https://github.com/apache/kafka/pull/13110#issuecomment-1382015201

   Included in https://github.com/apache/kafka/pull/13112.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] dajac opened a new pull request, #13112: KAFKA-14367; Finalize migration to the new GroupCoordinator interface

2023-01-13 Thread GitBox


dajac opened a new pull request, #13112:
URL: https://github.com/apache/kafka/pull/13112

   WIP - This includes https://github.com/apache/kafka/pull/13110 at the 
moment.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-14615) Use sizeCompare(Iterable[_]) to compare two iterables

2023-01-13 Thread Divij Vaidya (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676660#comment-17676660
 ] 

Divij Vaidya commented on KAFKA-14615:
--

[~kamalesh0420] 

You can either start with the list of items labelled as "newbie" here: 
https://issues.apache.org/jira/browse/KAFKA-14218?jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Unresolved%20AND%20labels%20in%20(Newbie%2C%20newbie%2C%20%22newbie%2C%22%2C%20%22newbie%2B%2B%22%2C%20newbiee%2C%20newbiew)%20AND%20assignee%20in%20(EMPTY)%20ORDER%20BY%20created%20DESC%2C%20priority%20DESC%2C%20updated%20DESC
 

OR you could start by looking at and fixing flaky tests here: 
https://issues.apache.org/jira/browse/KAFKA-14533?jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Unresolved%20AND%20labels%20%3D%20flaky-test%20AND%20assignee%20in%20(EMPTY)%20ORDER%20BY%20created%20DESC%2C%20priority%20DESC%2C%20updated%20DESC
 

OR if you want to look at items that are targeted to be in 3.5, you can check: 
https://issues.apache.org/jira/browse/KAFKA-14305?jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Unresolved%20AND%20fixVersion%20%3D%203.5.0%20AND%20assignee%20in%20(EMPTY)%20ORDER%20BY%20created%20DESC%2C%20priority%20DESC%2C%20updated%20DESC
 

Let me know if you are still not able to find an item that you like after going 
through the above list. I can point you to some low hanging items.

> Use sizeCompare(Iterable[_]) to compare two iterables
> -
>
> Key: KAFKA-14615
> URL: https://issues.apache.org/jira/browse/KAFKA-14615
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Divij Vaidya
>Assignee: Kamalesh Palanisamy
>Priority: Minor
>  Labels: Newbie, newbie
> Fix For: 4.0.0
>
>
> Since Scala 2.12 is being deprecated in the 4.x line, we can utilize some 
> improved methods for comparing the size of Scala collections which were 
> introduced starting with 2.13.
> This task is to find and replace code paths where we use 
> (IterableA.size == IterableB.size), which has a complexity of O(IterableA size + 
> IterableB size), with the sizeCompare() method, which has a complexity of 
> O(min(IterableA size, IterableB size)).
> Some examples where sizeCompare() could be used are:
> 1. 
> [https://github.com/apache/kafka/blob/78d4458b94e585bc602a4ae307d3de54fcedf2af/core/src/main/scala/kafka/server/KafkaApis.scala#L1177]
> 2. 
> [https://github.com/apache/kafka/blob/78d4458b94e585bc602a4ae307d3de54fcedf2af/core/src/main/scala/kafka/tools/JmxTool.scala#L215]
>  
> 3. 
>  [1] [https://github.com/scala/scala/pull/6758]
>  [2] [https://github.com/scala/scala/pull/6950] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] calmera commented on a diff in pull request #12740: Update of the PAPI testing classes to the latest implementation

2023-01-13 Thread GitBox


calmera commented on code in PR #12740:
URL: https://github.com/apache/kafka/pull/12740#discussion_r1069563020


##
streams/src/test/java/org/apache/kafka/test/MockProcessor.java:
##
@@ -41,16 +44,16 @@ public MockProcessor() {
 delegate = new MockApiProcessor<>();
 }
 
-@SuppressWarnings("unchecked")
 @Override
-public void init(final ProcessorContext context) {
-super.init(context);
-
delegate.init((org.apache.kafka.streams.processor.api.ProcessorContext) context);
+public void init(ProcessorContext context) {
+Processor.super.init(context);

Review Comment:
   I think this was IDEA trying to be smart and me looking over it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] calmera commented on a diff in pull request #12740: Update of the PAPI testing classes to the latest implementation

2023-01-13 Thread GitBox


calmera commented on code in PR #12740:
URL: https://github.com/apache/kafka/pull/12740#discussion_r1069562574


##
streams/src/test/java/org/apache/kafka/test/MockProcessor.java:
##
@@ -28,9 +29,11 @@
 import java.util.List;
 import java.util.Map;
 
-@SuppressWarnings("deprecation") // Old PAPI. Needs to be migrated.
-public class MockProcessor extends 
org.apache.kafka.streams.processor.AbstractProcessor {
-private final MockApiProcessor delegate;
+public class MockProcessor implements Processor {
+private final MockApiProcessor delegate;
+
+private ProcessorContext context;

Review Comment:
   yup, should use the delegate. Will change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] C0urante merged pull request #13106: KAFKA-13709 (follow-up): Fix wording around exactly-once in Connect

2023-01-13 Thread GitBox


C0urante merged PR #13106:
URL: https://github.com/apache/kafka/pull/13106


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] C0urante commented on pull request #13106: KAFKA-13709 (follow-up): Fix wording around exactly-once in Connect

2023-01-13 Thread GitBox


C0urante commented on PR #13106:
URL: https://github.com/apache/kafka/pull/13106#issuecomment-1381923555

    Thanks Matthias! Will backport to 3.4 and 3.3.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (KAFKA-14565) Improve Interceptor Resource Leakage Prevention

2023-01-13 Thread Terry Beard (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676359#comment-17676359
 ] 

Terry Beard edited comment on KAFKA-14565 at 1/13/23 1:31 PM:
--

[~ChrisEgerton]  I understood you the first time.  However, my response to both 
your suggestions was not so clear.  So when you suggested altering 
getConfiguredInstances, my immediate thought was that it conflicts with the 
Open/Closed Principle (OCP).  Likewise, when you suggested duplicating logic from 
getConfiguredInstances into the various clients with the ability to loop 
through interceptors, calling their respective close methods in the event of an 
exception, my thought was that it conflicts with the Don't Repeat Yourself 
(DRY) principle, as both the Consumer and Producer constructors already call their 
respective interceptors' close methods via the catch block.  Also, 
getConfiguredInstances already performs the create/configure.  Its only 
problem is that it is so abstract/generic that it does not make assumptions 
regarding exceptions, leaving it up to the caller to handle them.  

NOTE:  The notable exception (pun intended) being not expecting configure 
implementations  to acquire I/O or thread resources.  

Finally,  IMO, Consumer/Producer interceptor constructors are currently 
saturated with logic.  :P

Regarding your other points:
{quote}IMO the proposed changes to the interceptor interfaces aren't 
significantly clearer to developers and are really only convenient as a 
workaround to issues with the AbstractConfig API. 
{quote}
Yes, my approach may look like a workaround but I think that is because of my 
OCP decisions which eventually led to the addition of the default open() method 
on the interceptor interfaces.  Although we could change terse open() to 
something like configureWithResources() or acquireResources() etc. to make the 
intentions clearer.
{quote}And to reiterate a point raised above, they will require changes to 
existing interceptor classes in order to provide any benefit to users, which 
means that every existing interceptor release out there would still cause 
leaked resources in the failure scenario described by this issue.
{quote}
I agree that anyone with a design requirement for accessing I/O resources, 
threads, etc. will have to change their code to get the benefit.  But IMO, this 
should be considered a feature versus a bug fix, as the designers of the current 
client/interceptor may not have envisioned my use case.   But without any 
history, notes, etc. to get inside their heads, I could be way off.

Also, anecdotally speaking, we've had roughly six years of interceptor 
development and AFAIK no one has reported running into this issue.  Now I 
suspect that if anyone did, they likely developed a workaround, as I did.  But 
even if we followed your KIPless approach, developers could opt to eventually 
refactor their workaround to fully enjoy the benefits of your approach minus 
their bloated workaround.  At least I would.  :)

 

But, in the spirit of your suggestion, what if we replaced 
AbstractConfig.getConfiguredInstances with a custom, yet-to-be-developed 
InterceptorLoader object?  See the example below:

 

 
{code:java}
Loader interceptorLoader = new Loader();
LoadConfiguredInstanceResult loadConfiguredInstanceResult =
    interceptorLoader.loadConfiguredInstances(
        ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
        ConsumerInterceptor.class,
        Collections.singletonMap(ConsumerConfig.CLIENT_ID_CONFIG, clientId));

List<ConsumerInterceptor<K, V>> interceptorList =
    loadConfiguredInstanceResult.getInstances();
loadConfiguredInstanceResult.throwWhenAnyConfigurationFailed();
{code}
 

 

Lastly, the one thing I do not like about the current interceptor interface is 
that in implementing the Configurable interface's configure method, I have to 
wrap exceptions within a RuntimeException, which is a pain when 
reading/splunking log files.  Whereas 
configureWithResources()/acquireResources() etc. would define a checked exception, 
which can be more specific.  
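
For illustration, a small sketch of the contrast (the ResourceAcquiringInterceptor 
interface and its configureWithResources method are only the proposed shape, not an 
existing Kafka API):

{code:java}
import java.io.IOException;
import java.util.Map;

import org.apache.kafka.common.Configurable;

public class InterceptorHooksSketch {

    // Today: configure() cannot declare a checked exception, so an I/O failure
    // during resource acquisition has to be wrapped in a RuntimeException.
    static class CurrentStyle implements Configurable {
        @Override
        public void configure(Map<String, ?> configs) {
            try {
                openConnection(configs);
            } catch (IOException e) {
                throw new RuntimeException("failed to open connection", e);
            }
        }

        private void openConnection(Map<String, ?> configs) throws IOException {
            // acquire sockets, files, threads, ...
        }
    }

    // Proposed shape: a separate hook for resource acquisition that can declare
    // a more specific checked exception, paired with close() for cleanup.
    interface ResourceAcquiringInterceptor {
        void configureWithResources(Map<String, ?> configs) throws IOException;

        void close();
    }
}
{code}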

 

 

 

 

 

 

 


was (Author: JIRAUSER298607):
[~ChrisEgerton]  I understood you the first time.  However, my response to both 
your suggestions was not so clear.  So when you suggested altering 
getConfiguredInstances, my immediate thought was that it conflicts with 
Open/Close Principle (OCP).  Likewise, you suggested  duplicating logic from 
the  getConfiguredInstances into the various clients with an ability to loop 
through interceptors, calling their respective close method in the event of an 
exception, my thought was that it conflicts with the Don't Repeat Yourself 
(DRY) principle as both Consumer/Producer constructors already call their 
respective interceptors close method via the catch block.  Also, 
getConfiguredInstances already performs the create/configure.  It's only 
problem is that it's so abstract/generic that it 

[GitHub] [kafka] divijvaidya commented on pull request #13111: KAFKA-14190: Update Zk TopicId from locally stored cache in controller

2023-01-13 Thread GitBox


divijvaidya commented on PR #13111:
URL: https://github.com/apache/kafka/pull/13111#issuecomment-1381853980

   @jolshan, since you are our topic Id expert, please take a look at this 
change when you get a chance. I would request some urgency (if possible) since 
I would preferably like to have this bug fix added in 3.4, due to the growing number 
of users facing this problem with each passing day.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] divijvaidya opened a new pull request, #13111: KAFKA-14190: Update Zk TopicId from locally stored cache in controller

2023-01-13 Thread GitBox


divijvaidya opened a new pull request, #13111:
URL: https://github.com/apache/kafka/pull/13111

   ## Change
   Controller should update Zk with locally cached TopicId (when available) 
instead of assigning a new one when Zk doesn't have a TopicId.
   
   ## Motivation for this change
   This problem was highlighted in KAFKA-14190 and since then, multiple users 
have complained about the problem 
[HERE](https://lists.apache.org/thread/jzk4tyd1xs1wwj0bpkdnxpw0m152qw1f) 
(mailing list), 
[HERE](https://lists.apache.org/thread/4rqrd6kqd0nrc248fv1tmdn0088947hv) 
(mailing list) and 
[HERE](https://the-asf.slack.com/archives/CE7HWJPHA/p1671529649633529) (ASF 
slack channel).
   
   ## Description of the problem
   In certain situations, it is possible that the TopicId stored locally on a 
broker for a topic differs from the topicId stored for that topic on Zk. 
Currently, such a situation arises when users use a <2.8 client to 
alterPartitions for a topic on >=2.8 brokers (including the latest 3.4) AND they 
use the `--zookeeper` flag from the client. Note that `--zookeeper` has been marked 
deprecated for a long time and has been replaced by `--bootstrap-server`, which 
doesn't face this problem.
   
   The topic Id discrepancy leads to availability loss for the topic 
until the user performs the mitigation steps listed in KAFKA-14190.
   
   The exact sequence of steps is:
   
   1. User uses a pre-2.8 client to create a new topic in Zookeeper directly.
   2. No TopicId is generated in Zookeeper.
   3. KafkaController listens to the ZNode, and a `TopicChange` event is 
created. While handling this event, the controller notices that there is no 
TopicId, generates a new one, and updates Zk.
   4. At this stage, Zk has a TopicId.
   5. User uses the pre-2.8 client to increase the number of partitions for 
this topic.
   6. The client replaces/overwrites the entire existing ZNode with new 
placement information. **This deletes the existing TopicId in Zk (the one 
created by the controller in step 3).**
   7. The next time KafkaController interacts with this ZNode, it will generate 
a new TopicId.
   8. Note that we now have two different TopicIds for this topic name.
   9. Brokers may have a different (older) TopicId in their metadata file and 
will complain about the mismatch when they encounter the new TopicId.
   
   ## Testing
   I have added a test with this change which asserts that the TopicId for a 
topic is immutable, i.e. once assigned, it does not change. This test fails 
before this change and passes after it.
   
   ## Side effects of this fix
   There are no additional side effects of this change. No additional calls to 
Zk. We are only updating the TopicId from a locally cached value instead of 
assigning a new one.
   
   ## Caveats
   This code change does not fix the problem completely. It assumes that the 
controller has the TopicId locally so that it can update Zk, but in situations 
such as controller failover, that may not be true. More specifically, we will 
still end up with two different TopicIds when a controller failover takes place 
between the time the Zk TopicId was overwritten/removed and the time the 
controller could update it with the local value.
   
   However, this code change should fix the majority of the scenarios impacted 
by this bug, and a separate PR will be filed to cover the minority scenario of 
a controller failover during that exact window.
   
   ## Release
   Due to the simple nature of the fix and the number of user requests, I would 
ask that we consider adding this to 3.4.0 and backporting it to as many previous 
versions as we can.
   





[jira] [Comment Edited] (KAFKA-14565) Improve Interceptor Resource Leakage Prevention

2023-01-13 Thread Terry Beard (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676359#comment-17676359
 ] 

Terry Beard edited comment on KAFKA-14565 at 1/13/23 12:48 PM:
---

[~ChrisEgerton]  I understood you the first time.  However, my response to both 
your suggestions was not so clear.  So when you suggested altering 
getConfiguredInstances, my immediate thought was that it conflicts with the 
Open/Closed Principle (OCP).  Likewise, when you suggested duplicating logic 
from getConfiguredInstances into the various clients, with the ability to loop 
through interceptors and call their respective close methods in the event of an 
exception, my thought was that it conflicts with the Don't Repeat Yourself 
(DRY) principle, as both Consumer/Producer constructors already call their 
respective interceptors' close methods via the catch block.  Also, 
getConfiguredInstances already performs the create/configure.  Its only problem 
is that it's so abstract/generic that it does not make assumptions about 
exceptions, leaving handling up to the caller.  

NOTE:  The notable exception (pun intended) being that configure 
implementations are not expected to acquire I/O or thread resources.  

Finally, IMO, Consumer/Producer interceptor constructors are currently 
saturated with logic.  :P

Regarding your other points:
{quote}IMO the proposed changes to the interceptor interfaces aren't 
significantly clearer to developers and are really only convenient as a 
workaround to issues with the AbstractConfig API. 
{quote}
Yes, my approach may look like a workaround, but I think that is because of my 
OCP decisions, which eventually led to the addition of the default open() method 
on the interceptor interfaces.  Although we could rename the terse open() to 
something like configureWithResources() or acquireResources() etc. to make the 
intention clearer.
{quote}And to reiterate a point raised above, they will require changes to 
existing interceptor classes in order to provide any benefit to users, which 
means that every existing interceptor release out there would still cause 
leaked resources in the failure scenario described by this issue.
{quote}
I agree that anyone with a design requirement for accessing I/O resources, 
threads, etc. will have to change their code to get the benefit.  But IMO, this 
should be considered a feature rather than a bug fix, as the designers of the 
current client/interceptor may not have envisioned my use case.  But without 
any history, notes, etc. to get inside their heads, I could be way off.

Also, anecdotally speaking, we've had roughly six years of interceptor 
development and AFAIK no one has reported running into this issue.  I suspect 
that if anyone did, they likely developed a workaround, as I did.  But even if 
we followed your KIP-less approach, developers could opt to eventually refactor 
their workaround to fully enjoy the benefits of your approach minus their 
bloated workaround.  At least I would.  :)

 

But, in the spirit of your suggestion, what if we replaced 
AbstractConfig.getConfiguredInstances with a custom, yet-to-be-developed 
InterceptorLoader object?  See the example below:

 

 
{code:java}
Loader interceptorLoader = new Loader(config);
LoadConfiguredInstanceResult loadConfiguredInstanceResult =
        interceptorLoader.loadConfiguredInstances(
                ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
                ConsumerInterceptor.class,
                Collections.singletonMap(ConsumerConfig.CLIENT_ID_CONFIG, clientId));

List<ConsumerInterceptor<K, V>> interceptorList = loadConfiguredInstanceResult.getInstances();
loadConfiguredInstanceResult.throwWhenAnyConfigurationFailed();



{code}
 

 

Lastly, the one thing I do not like about the current interceptor interface is 
that in implementing the Configurable interface's configure method, I have to 
wrap exceptions within a RuntimeException, which is a pain when 
reading/splunking log files.  Whereas configureWithResources()/acquireResources() 
etc. defines a checked Exception which can be more specific.  
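
For illustration, a minimal sketch of what such a method with a checked 
exception could look like (the interface name and InterceptorConfigException 
are hypothetical, not part of the current API):
{code:java}
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerInterceptor;

// Hypothetical sketch only: a default method that may acquire I/O or thread
// resources and is allowed to throw a checked, interceptor-specific exception.
public interface ConsumerInterceptorWithResources<K, V> extends ConsumerInterceptor<K, V> {

    class InterceptorConfigException extends Exception {
        public InterceptorConfigException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // Called after configure(); the client would close any interceptors that were
    // already opened if one of them throws, so nothing leaks.
    default void configureWithResources(Map<String, ?> configs) throws InterceptorConfigException {
        // no-op by default; implementations open connections, threads, etc. here
    }
}
{code}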

 

 

 

 

 

 

 


was (Author: JIRAUSER298607):
[~ChrisEgerton]  I understood you the first time.  However, my response to both 
your suggestions was not so clear.  So when you suggested altering 
getConfiguredInstances, my immediate thought was that it conflicts with the 
Open/Closed Principle (OCP).  Likewise, when you suggested duplicating logic 
from getConfiguredInstances into the various clients, with the ability to loop 
through interceptors and call their respective close methods in the event of an 
exception, my thought was that it conflicts with the Don't Repeat Yourself 
(DRY) principle, as both Consumer/Producer constructors already call their 
respective interceptors' close methods via the catch block.  Also, 
getConfiguredInstances already performs the create/configure.  Its only 
problem is that it's so abstract/generic 

[jira] [Assigned] (KAFKA-14586) Move StreamsResetter to tools

2023-01-13 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao reassigned KAFKA-14586:
-

Assignee: Sagar Rao

> Move StreamsResetter to tools
> -
>
> Key: KAFKA-14586
> URL: https://issues.apache.org/jira/browse/KAFKA-14586
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Sagar Rao
>Priority: Major
>






[jira] [Assigned] (KAFKA-14583) Move ReplicaVerificationTool to tools

2023-01-13 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao reassigned KAFKA-14583:
-

Assignee: Sagar Rao

> Move ReplicaVerificationTool to tools
> -
>
> Key: KAFKA-14583
> URL: https://issues.apache.org/jira/browse/KAFKA-14583
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Sagar Rao
>Priority: Major
>






[jira] [Commented] (KAFKA-14490) Consider using UncheckedIOException instead of IOException in the log layer

2023-01-13 Thread Sagar Rao (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676593#comment-17676593
 ] 

Sagar Rao commented on KAFKA-14490:
---

[~ijuma] , can I take this one up? 

> Consider using UncheckedIOException instead of IOException in the log layer
> --
>
> Key: KAFKA-14490
> URL: https://issues.apache.org/jira/browse/KAFKA-14490
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Priority: Major
>
> IOException is a checked exception, which makes it difficult to use with 
> lambdas. We should consider using UncheckedIOException instead.
> I'll add notes below with code that could be simplified if we did this:
> 1. The private constructor of LazyIndex could take a factory method instead 
> of all parameters and IndexType (like the Scala code used to). This would 
> avoid some repetition and some unsafe casts.
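
For context, a minimal, hypothetical example of the lambda friction described 
above (not code from the Kafka log layer):
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

// A checked IOException cannot escape Stream.map, so it has to be wrapped in
// UncheckedIOException for the lambda to compile.
public class UncheckedIoExample {
    static List<Long> fileSizes(List<Path> paths) {
        return paths.stream()
                .map(p -> {
                    try {
                        return Files.size(p);              // throws checked IOException
                    } catch (IOException e) {
                        throw new UncheckedIOException(e); // wrap for the lambda
                    }
                })
                .collect(Collectors.toList());
    }
}
{code}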





[GitHub] [kafka] vamossagar12 commented on pull request #13095: KAFKA-14580: Moving EndToEndLatency from core to tools module

2023-01-13 Thread GitBox


vamossagar12 commented on PR #13095:
URL: https://github.com/apache/kafka/pull/13095#issuecomment-1381703690

   Looks like there are some more checkstyle failures. Will fix them.





[GitHub] [kafka] vamossagar12 commented on a diff in pull request #13095: KAFKA-14580: Moving EndToEndLatency from core to tools module

2023-01-13 Thread GitBox


vamossagar12 commented on code in PR #13095:
URL: https://github.com/apache/kafka/pull/13095#discussion_r1069260140


##
tests/kafkatest/services/performance/end_to_end_latency.py:
##
@@ -87,7 +87,7 @@ def start_cmd(self, node):
 cmd = "export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%s\"; " % 
EndToEndLatencyService.LOG4J_CONFIG
 if node.version.consumer_supports_bootstrap_server():
 cmd += "KAFKA_OPTS=%(kafka_opts)s %(kafka_run_class)s 
%(java_class_name)s " % args
-cmd += "%(bootstrap_servers)s %(topic)s %(num_records)d %(acks)d 
%(message_bytes)d %(config_file)s" % args
+cmd += "-b %(bootstrap_servers)s -t %(topic)s -n %(num_records)d 
-a %(acks)d -s %(message_bytes)d -f %(config_file)s" % args

Review Comment:
   This change and 
https://github.com/apache/kafka/pull/13095/files#diff-1a3735187400a54aac5a802fb2d2ff6d4fe9cdfbc5fd953a45e2890ad43f58cfR157-R207
 are needed because, in argparse4j, positional arguments (from what I 
understood) can't be made optional. The properties file is an optional argument 
in the Scala code, but that worked there because the main arguments were used 
as-is. 
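
   For reference, an illustrative sketch of the named-flag style this switches 
to (not the code in the PR; a named argument such as `-f` can simply be marked 
optional, unlike a positional one):

```java
import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.Namespace;

// Illustrative sketch: each value gets a named flag, and only --config-file is optional.
public class EndToEndLatencyArgsExample {
    public static void main(String[] args) throws Exception {
        ArgumentParser parser = ArgumentParsers.newFor("end-to-end-latency").build();
        parser.addArgument("-b", "--bootstrap-server").required(true);
        parser.addArgument("-t", "--topic").required(true);
        parser.addArgument("-n", "--num-records").type(Integer.class).required(true);
        parser.addArgument("-a", "--acks").type(Integer.class).required(true);
        parser.addArgument("-s", "--message-bytes").type(Integer.class).required(true);
        parser.addArgument("-f", "--config-file").required(false); // optional named flag
        Namespace ns = parser.parseArgs(args);
        System.out.println(ns);
    }
}
```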






[GitHub] [kafka] vamossagar12 commented on pull request #13095: KAFKA-14580: Moving EndToEndLatency from core to tools module

2023-01-13 Thread GitBox


vamossagar12 commented on PR #13095:
URL: https://github.com/apache/kafka/pull/13095#issuecomment-1381690612

   hi @mimaison , I added a few basic unit tests and updated the system test as 
needed (end_to_end_latency.py). I haven't set it up locally, but I am hoping it 
can be run from here to validate whether the changes work. Thanks!





[GitHub] [kafka] cadonna commented on pull request #11743: KAFKA-13660: Switch log4j12 to reload4j

2023-01-13 Thread GitBox


cadonna commented on PR #11743:
URL: https://github.com/apache/kafka/pull/11743#issuecomment-1381611825

   @woja Thank you for your interest!
   
   You can find the CVE list here:
   https://github.com/apache/kafka-site/blob/asf-site/cve-list.html
   
   Please also read the contribution guidelines for the website here:
   https://kafka.apache.org/contributing





[GitHub] [kafka] woja commented on pull request #11743: KAFKA-13660: Switch log4j12 to reload4j

2023-01-13 Thread GitBox


woja commented on PR #11743:
URL: https://github.com/apache/kafka/pull/11743#issuecomment-1381581124

   I would be happy to have a go, but I cannot see where this part of the site 
is defined. It's not in the docs, and a search shows nothing in this repo (or 
in fact in any part of the Apache org on GitHub).





[jira] [Commented] (KAFKA-14606) Use virtual threads to publish Kafka records

2023-01-13 Thread Alexandre Dupriez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676547#comment-17676547
 ] 

Alexandre Dupriez commented on KAFKA-14606:
---

Got it. Thanks.

> Use virtual threads to publish Kafka records
> 
>
> Key: KAFKA-14606
> URL: https://issues.apache.org/jira/browse/KAFKA-14606
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Bart De Neuter
>Priority: Major
>
> Since JDK 19, virtual threads have been added to the JDK as a preview:
> [https://openjdk.org/jeps/425]
> Virtual threads allow you to use the hardware optimally, as a virtual thread 
> is a lightweight thread that runs on a carrier (OS) thread. When IO happens, 
> the carrier thread is not blocked and can continue doing other work. Currently 
> it doesn't seem to be possible to make 
> `org.apache.kafka.clients.producer.internals.Sender` run on a virtual thread; 
> an instance of `org.apache.kafka.common.utils.KafkaThread` is started instead. 
> Would it be possible to offer the option of using virtual threads for 
> publishing records?
>  
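
For context, a minimal illustration of the JDK 19 preview API referred to above 
(requires --enable-preview; not a proposal for how Sender itself would use 
virtual threads):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Each task runs on its own virtual thread; the carrier OS thread is released
// whenever the virtual thread blocks on I/O.
public class VirtualThreadExample {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().name("demo-sender").start(() ->
                System.out.println("running on " + Thread.currentThread()));
        vt.join();

        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            pool.submit(() -> System.out.println("task on " + Thread.currentThread()));
        }
    }
}
{code}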





[jira] [Comment Edited] (KAFKA-14139) Replaced disk can lead to loss of committed data even with non-empty ISR

2023-01-13 Thread Alexandre Dupriez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676546#comment-17676546
 ] 

Alexandre Dupriez edited comment on KAFKA-14139 at 1/13/23 9:41 AM:


Hi, Jason, thank you for reporting this scenario and the very clear description 
of the issue. Is this something which is still prioritized and are you 
welcoming additional contributors on it?


was (Author: hangleton):
Hi, Jason, thank you for the very clear description of the issue. Is this 
something which is still prioritized and are you welcoming additional 
contributors on it?

> Replaced disk can lead to loss of committed data even with non-empty ISR
> 
>
> Key: KAFKA-14139
> URL: https://issues.apache.org/jira/browse/KAFKA-14139
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 3.5.0
>
>
> We have been thinking about disk failure cases recently. Suppose that a disk 
> has failed and the user needs to restart the disk from an empty state. The 
> concern is whether this can lead to the unnecessary loss of committed data.
> For normal topic partitions, removal from the ISR during controlled shutdown 
> buys us some protection. After the replica is restarted, it must prove its 
> state to the leader before it can be added back to the ISR. And it cannot 
> become a leader until it does so.
> An obvious exception to this is when the replica is the last member in the 
> ISR. In this case, the disk failure itself has compromised the committed 
> data, so some amount of loss must be expected.
> We have been considering other scenarios in which the loss of one disk can 
> lead to data loss even when there are replicas remaining which have all of 
> the committed entries. One such scenario is this:
> Suppose we have a partition with two replicas: A and B. Initially A is the 
> leader and it is the only member of the ISR.
>  # Broker B catches up to A, so A attempts to send an AlterPartition request 
> to the controller to add B into the ISR.
>  # Before the AlterPartition request is received, replica B has a hard 
> failure.
>  # The current controller successfully fences broker B. It takes no action on 
> this partition since B is already out of the ISR.
>  # Before the controller receives the AlterPartition request to add B, it 
> also fails.
>  # While the new controller is initializing, suppose that replica B finishes 
> startup, but the disk has been replaced (all of the previous state has been 
> lost).
>  # The new controller sees the registration from broker B first.
>  # Finally, the AlterPartition from A arrives which adds B back into the ISR 
> even though it has an empty log.
> (Credit for coming up with this scenario goes to [~junrao] .)
> I tested this in KRaft and confirmed that this sequence is possible (even if 
> perhaps unlikely). There are a few ways we could have potentially detected 
> the issue. First, perhaps the leader should have bumped the leader epoch on 
> all partitions when B was fenced. Then the inflight AlterPartition would be 
> doomed no matter when it arrived.
> Alternatively, we could have relied on the broker epoch to distinguish the 
> dead broker's state from that of the restarted broker. This could be done by 
> including the broker epoch in both the `Fetch` request and in 
> `AlterPartition`.
> Finally, perhaps even normal kafka replication should be using a unique 
> identifier for each disk so that we can reliably detect when it has changed. 
> For example, something like what was proposed for the metadata quorum here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Voter+Changes.]
>  
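
For illustration only, a hedged sketch of the broker-epoch fencing idea 
mentioned in the description above (hypothetical names; not the actual 
controller or AlterPartition code):
{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the controller rejects an AlterPartition request whose
// broker epoch is older than the broker's current registration epoch (e.g. a
// request sent before a disk-replaced broker re-registered), so a stale request
// cannot re-add an empty replica to the ISR.
public class AlterPartitionFencingSketch {
    record AlterPartitionRequest(int brokerId, long brokerEpoch) {}

    private final Map<Integer, Long> currentBrokerEpochs = new HashMap<>();

    void registerBroker(int brokerId, long brokerEpoch) {
        currentBrokerEpochs.put(brokerId, brokerEpoch);
    }

    boolean accept(AlterPartitionRequest request) {
        long currentEpoch = currentBrokerEpochs.getOrDefault(request.brokerId(), -1L);
        return request.brokerEpoch() == currentEpoch; // stale epoch => reject
    }
}
{code}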





[jira] [Comment Edited] (KAFKA-14139) Replaced disk can lead to loss of committed data even with non-empty ISR

2023-01-13 Thread Alexandre Dupriez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676546#comment-17676546
 ] 

Alexandre Dupriez edited comment on KAFKA-14139 at 1/13/23 9:40 AM:


Hi, Jason, thank you for the very clear description of the issue. Is this 
something which is still prioritized and are you welcoming additional 
contributors on it?


was (Author: hangleton):
Hi, Jason, thank you for the very clear description of the issue. Is this 
something which is still prioritized and are you welcoming additional 
contributors for it?

> Replaced disk can lead to loss of committed data even with non-empty ISR
> 
>
> Key: KAFKA-14139
> URL: https://issues.apache.org/jira/browse/KAFKA-14139
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 3.5.0
>
>
> We have been thinking about disk failure cases recently. Suppose that a disk 
> has failed and the user needs to restart the disk from an empty state. The 
> concern is whether this can lead to the unnecessary loss of committed data.
> For normal topic partitions, removal from the ISR during controlled shutdown 
> buys us some protection. After the replica is restarted, it must prove its 
> state to the leader before it can be added back to the ISR. And it cannot 
> become a leader until it does so.
> An obvious exception to this is when the replica is the last member in the 
> ISR. In this case, the disk failure itself has compromised the committed 
> data, so some amount of loss must be expected.
> We have been considering other scenarios in which the loss of one disk can 
> lead to data loss even when there are replicas remaining which have all of 
> the committed entries. One such scenario is this:
> Suppose we have a partition with two replicas: A and B. Initially A is the 
> leader and it is the only member of the ISR.
>  # Broker B catches up to A, so A attempts to send an AlterPartition request 
> to the controller to add B into the ISR.
>  # Before the AlterPartition request is received, replica B has a hard 
> failure.
>  # The current controller successfully fences broker B. It takes no action on 
> this partition since B is already out of the ISR.
>  # Before the controller receives the AlterPartition request to add B, it 
> also fails.
>  # While the new controller is initializing, suppose that replica B finishes 
> startup, but the disk has been replaced (all of the previous state has been 
> lost).
>  # The new controller sees the registration from broker B first.
>  # Finally, the AlterPartition from A arrives which adds B back into the ISR 
> even though it has an empty log.
> (Credit for coming up with this scenario goes to [~junrao] .)
> I tested this in KRaft and confirmed that this sequence is possible (even if 
> perhaps unlikely). There are a few ways we could have potentially detected 
> the issue. First, perhaps the leader should have bumped the leader epoch on 
> all partitions when B was fenced. Then the inflight AlterPartition would be 
> doomed no matter when it arrived.
> Alternatively, we could have relied on the broker epoch to distinguish the 
> dead broker's state from that of the restarted broker. This could be done by 
> including the broker epoch in both the `Fetch` request and in 
> `AlterPartition`.
> Finally, perhaps even normal kafka replication should be using a unique 
> identifier for each disk so that we can reliably detect when it has changed. 
> For example, something like what was proposed for the metadata quorum here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Voter+Changes.]
>  





[jira] [Comment Edited] (KAFKA-14139) Replaced disk can lead to loss of committed data even with non-empty ISR

2023-01-13 Thread Alexandre Dupriez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676546#comment-17676546
 ] 

Alexandre Dupriez edited comment on KAFKA-14139 at 1/13/23 9:39 AM:


Hi, Jason, thank you for the very clear description of the issue. Is this 
something which is still prioritized and are you welcoming additional 
contributors for it?


was (Author: hangleton):
Hi, Jason, thank you  for the very clear description of the issue. Is this 
something which is still prioritized and are you welcoming additional 
contributors for it?

> Replaced disk can lead to loss of committed data even with non-empty ISR
> 
>
> Key: KAFKA-14139
> URL: https://issues.apache.org/jira/browse/KAFKA-14139
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 3.5.0
>
>
> We have been thinking about disk failure cases recently. Suppose that a disk 
> has failed and the user needs to restart the disk from an empty state. The 
> concern is whether this can lead to the unnecessary loss of committed data.
> For normal topic partitions, removal from the ISR during controlled shutdown 
> buys us some protection. After the replica is restarted, it must prove its 
> state to the leader before it can be added back to the ISR. And it cannot 
> become a leader until it does so.
> An obvious exception to this is when the replica is the last member in the 
> ISR. In this case, the disk failure itself has compromised the committed 
> data, so some amount of loss must be expected.
> We have been considering other scenarios in which the loss of one disk can 
> lead to data loss even when there are replicas remaining which have all of 
> the committed entries. One such scenario is this:
> Suppose we have a partition with two replicas: A and B. Initially A is the 
> leader and it is the only member of the ISR.
>  # Broker B catches up to A, so A attempts to send an AlterPartition request 
> to the controller to add B into the ISR.
>  # Before the AlterPartition request is received, replica B has a hard 
> failure.
>  # The current controller successfully fences broker B. It takes no action on 
> this partition since B is already out of the ISR.
>  # Before the controller receives the AlterPartition request to add B, it 
> also fails.
>  # While the new controller is initializing, suppose that replica B finishes 
> startup, but the disk has been replaced (all of the previous state has been 
> lost).
>  # The new controller sees the registration from broker B first.
>  # Finally, the AlterPartition from A arrives which adds B back into the ISR 
> even though it has an empty log.
> (Credit for coming up with this scenario goes to [~junrao] .)
> I tested this in KRaft and confirmed that this sequence is possible (even if 
> perhaps unlikely). There are a few ways we could have potentially detected 
> the issue. First, perhaps the leader should have bumped the leader epoch on 
> all partitions when B was fenced. Then the inflight AlterPartition would be 
> doomed no matter when it arrived.
> Alternatively, we could have relied on the broker epoch to distinguish the 
> dead broker's state from that of the restarted broker. This could be done by 
> including the broker epoch in both the `Fetch` request and in 
> `AlterPartition`.
> Finally, perhaps even normal kafka replication should be using a unique 
> identifier for each disk so that we can reliably detect when it has changed. 
> For example, something like what was proposed for the metadata quorum here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Voter+Changes.]
>  





[GitHub] [kafka] dajac opened a new pull request, #13110: KAFKA-14367; Add `partitionFor` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


dajac opened a new pull request, #13110:
URL: https://github.com/apache/kafka/pull/13110

   This patch adds `partitionFor` to the new `GroupCoordinator` interface and 
updates `KafkaApis` to use it.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Commented] (KAFKA-14139) Replaced disk can lead to loss of committed data even with non-empty ISR

2023-01-13 Thread Alexandre Dupriez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676546#comment-17676546
 ] 

Alexandre Dupriez commented on KAFKA-14139:
---

Hi, Jason, thank you  for the very clear description of the issue. Is this 
something which is still prioritized and are you welcoming additional 
contributors for it?

> Replaced disk can lead to loss of committed data even with non-empty ISR
> 
>
> Key: KAFKA-14139
> URL: https://issues.apache.org/jira/browse/KAFKA-14139
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 3.5.0
>
>
> We have been thinking about disk failure cases recently. Suppose that a disk 
> has failed and the user needs to restart the disk from an empty state. The 
> concern is whether this can lead to the unnecessary loss of committed data.
> For normal topic partitions, removal from the ISR during controlled shutdown 
> buys us some protection. After the replica is restarted, it must prove its 
> state to the leader before it can be added back to the ISR. And it cannot 
> become a leader until it does so.
> An obvious exception to this is when the replica is the last member in the 
> ISR. In this case, the disk failure itself has compromised the committed 
> data, so some amount of loss must be expected.
> We have been considering other scenarios in which the loss of one disk can 
> lead to data loss even when there are replicas remaining which have all of 
> the committed entries. One such scenario is this:
> Suppose we have a partition with two replicas: A and B. Initially A is the 
> leader and it is the only member of the ISR.
>  # Broker B catches up to A, so A attempts to send an AlterPartition request 
> to the controller to add B into the ISR.
>  # Before the AlterPartition request is received, replica B has a hard 
> failure.
>  # The current controller successfully fences broker B. It takes no action on 
> this partition since B is already out of the ISR.
>  # Before the controller receives the AlterPartition request to add B, it 
> also fails.
>  # While the new controller is initializing, suppose that replica B finishes 
> startup, but the disk has been replaced (all of the previous state has been 
> lost).
>  # The new controller sees the registration from broker B first.
>  # Finally, the AlterPartition from A arrives which adds B back into the ISR 
> even though it has an empty log.
> (Credit for coming up with this scenario goes to [~junrao] .)
> I tested this in KRaft and confirmed that this sequence is possible (even if 
> perhaps unlikely). There are a few ways we could have potentially detected 
> the issue. First, perhaps the leader should have bumped the leader epoch on 
> all partitions when B was fenced. Then the inflight AlterPartition would be 
> doomed no matter when it arrived.
> Alternatively, we could have relied on the broker epoch to distinguish the 
> dead broker's state from that of the restarted broker. This could be done by 
> including the broker epoch in both the `Fetch` request and in 
> `AlterPartition`.
> Finally, perhaps even normal kafka replication should be using a unique 
> identifier for each disk so that we can reliably detect when it has changed. 
> For example, something like what was proposed for the metadata quorum here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Voter+Changes.]
>  





[GitHub] [kafka] dajac commented on pull request #12902: KAFKA-14367; Add `OffsetDelete` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


dajac commented on PR #12902:
URL: https://github.com/apache/kafka/pull/12902#issuecomment-1381512820

   cc @jolshan @jeffkbkim 





[GitHub] [kafka] dajac merged pull request #12901: KAFKA-14367; Add `TxnOffsetCommit` to the new `GroupCoordinator` interface

2023-01-13 Thread GitBox


dajac merged PR #12901:
URL: https://github.com/apache/kafka/pull/12901





[GitHub] [kafka] fvaleri commented on pull request #13085: KAFKA-14568: Move FetchDataInfo and related to storage module

2023-01-13 Thread GitBox


fvaleri commented on PR #13085:
URL: https://github.com/apache/kafka/pull/13085#issuecomment-1381478406

   > I pushed a few clean-ups and it LGTM now. @fvaleri Are you good with them 
too?
   
   Yes, good. Thanks.





[GitHub] [kafka] drgnchan opened a new pull request, #13109: MINOR: doc: sendfile implement in `TransferableRecords` instead of `MessageSet`

2023-01-13 Thread GitBox


drgnchan opened a new pull request, #13109:
URL: https://github.com/apache/kafka/pull/13109

   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   

