[jira] [Resolved] (KAFKA-17522) Share partition acquire() need not return a future
[ https://issues.apache.org/jira/browse/KAFKA-17522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal resolved KAFKA-17522.
-----------------------------------
    Resolution: Duplicate

> Share partition acquire() need not return a future
> --------------------------------------------------
>
>          Key: KAFKA-17522
>          URL: https://issues.apache.org/jira/browse/KAFKA-17522
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Abhinav Dixit
>     Assignee: Apoorv Mittal
>     Priority: Major
>
> As per the discussions in
> [https://github.com/apache/kafka/pull/16274#discussion_r1700968453] and
> [https://github.com/apache/kafka/pull/16969#discussion_r1752362118], the
> acquire method does not need to return a future, since acquisitions are not
> persisted.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Resolved] (KAFKA-17483) Complete pending fetch request on broker shutdown
[ https://issues.apache.org/jira/browse/KAFKA-17483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal resolved KAFKA-17483.
-----------------------------------
    Resolution: Fixed

> Complete pending fetch request on broker shutdown
> -------------------------------------------------
>
>          Key: KAFKA-17483
>          URL: https://issues.apache.org/jira/browse/KAFKA-17483
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Apoorv Mittal
>     Assignee: Apoorv Mittal
>     Priority: Major
[jira] [Resolved] (KAFKA-17482) Make share partition initialization async
[ https://issues.apache.org/jira/browse/KAFKA-17482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal resolved KAFKA-17482.
-----------------------------------
    Resolution: Fixed

> Make share partition initialization async
> -----------------------------------------
>
>          Key: KAFKA-17482
>          URL: https://issues.apache.org/jira/browse/KAFKA-17482
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Apoorv Mittal
>     Assignee: Apoorv Mittal
>     Priority: Major
[jira] [Updated] (KAFKA-17626) Move common fetch related classes from storage to server-common
[ https://issues.apache.org/jira/browse/KAFKA-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal updated KAFKA-17626:
----------------------------------
    Summary: Move common fetch related classes from storage to server-common  (was: Move common fetch related classes from storage to storage-api)

> Move common fetch related classes from storage to server-common
> ---------------------------------------------------------------
>
>          Key: KAFKA-17626
>          URL: https://issues.apache.org/jira/browse/KAFKA-17626
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Apoorv Mittal
>     Assignee: Apoorv Mittal
>     Priority: Major
[jira] [Created] (KAFKA-17626) Move common fetch related classes from storage to storage-api
Apoorv Mittal created KAFKA-17626:
-------------------------------------

     Summary: Move common fetch related classes from storage to storage-api
         Key: KAFKA-17626
         URL: https://issues.apache.org/jira/browse/KAFKA-17626
     Project: Kafka
  Issue Type: Sub-task
    Reporter: Apoorv Mittal
    Assignee: Apoorv Mittal
[jira] [Created] (KAFKA-17620) Simplify share partition acquire API
Apoorv Mittal created KAFKA-17620:
-------------------------------------

     Summary: Simplify share partition acquire API
         Key: KAFKA-17620
         URL: https://issues.apache.org/jira/browse/KAFKA-17620
     Project: Kafka
  Issue Type: Sub-task
    Reporter: Apoorv Mittal
    Assignee: Apoorv Mittal

Simplify the share partition acquire API by removing the CompletableFuture return type, since no asynchronous calls remain in the acquire path.
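The change KAFKA-17620 describes can be sketched roughly as follows. This is a minimal illustration only: the class and method names below are hypothetical stand-ins, not the actual Kafka `SharePartition` code, and the body is a placeholder for the real acquisition logic.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.LongStream;

// Hypothetical sketch: not the real SharePartition implementation.
class SharePartitionSketch {
    // Before: an async shape, which would only be needed if acquisition
    // state were persisted and the write completed later.
    CompletableFuture<List<Long>> acquireAsync(long firstOffset, int maxRecords) {
        return CompletableFuture.completedFuture(acquire(firstOffset, maxRecords));
    }

    // After: acquisitions live purely in memory, so the result is known
    // synchronously and a direct return suffices.
    List<Long> acquire(long firstOffset, int maxRecords) {
        // Placeholder for real logic: hand out the next batch of offsets.
        return LongStream.range(firstOffset, firstOffset + maxRecords)
                .boxed().toList();
    }
}
```

Dropping the future removes an allocation and a layer of callback plumbing on every fetch, which is the simplification the ticket is after.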
[jira] [Created] (KAFKA-17577) May be record metrics on UnknownSubscriptionIdException
Apoorv Mittal created KAFKA-17577:
-------------------------------------

     Summary: May be record metrics on UnknownSubscriptionIdException
         Key: KAFKA-17577
         URL: https://issues.apache.org/jira/browse/KAFKA-17577
     Project: Kafka
  Issue Type: Sub-task
    Reporter: Apoorv Mittal
    Assignee: Apoorv Mittal

h3. Handling {{UnknownSubscriptionIdException}} (USIE)

When a broker changes telemetry subscriptions and a client attempts to report metrics with an outdated subscription ID, the broker throws {{UnknownSubscriptionIdException}} (USIE). In this case the metrics payload is discarded, leaving a potential data gap. If the client uses delta temporality (incremental metrics), this gap is reflected in external monitoring systems, which observe a dip and may treat the next set of metrics as a new starting point. This can mislead monitoring systems and create inaccuracies.

The client should continue to work as it does today, invoking {{GetTelemetrySubscriptionsRequest}} to renew the metrics subscription when the broker throws {{USIE}}. However, the broker could record the telemetry metrics by validating against the last subscription ID, reducing the chance of data gaps.
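The client-side renewal flow described in the ticket can be sketched as below. All names here are hypothetical stand-ins for the real request/response plumbing; the sketch only shows the control flow: a stale-ID push is dropped (the data gap), the client refreshes its subscription, and the next push succeeds.

```java
// Hypothetical sketch of the USIE handling flow; not real Kafka internals.
class TelemetrySketch {
    int subscriptionId;        // id the client last fetched
    int brokerSubscriptionId;  // id the broker currently expects

    // Simulates a metrics push: with a stale id the broker would throw
    // UnknownSubscriptionIdException, the payload is discarded (the data
    // gap discussed above), and the client renews its subscription.
    boolean pushMetrics(byte[] payload) {
        if (subscriptionId != brokerSubscriptionId) {
            subscriptionId = getTelemetrySubscriptions(); // renew for next interval
            return false; // this payload's metrics are lost
        }
        return true;
    }

    // Simulates GetTelemetrySubscriptionsRequest returning the current id.
    int getTelemetrySubscriptions() {
        return brokerSubscriptionId;
    }
}
```

The ticket's proposal is to shrink the window in which the first branch fires, by letting the broker accept metrics that validate against the last subscription ID.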
[jira] [Commented] (KAFKA-17463) Flaky test: kafka.api.PlaintextAdminIntegrationTest."testShareGroups(String).quorum=kraft+kip932"
[ https://issues.apache.org/jira/browse/KAFKA-17463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879512#comment-17879512 ]

Apoorv Mittal commented on KAFKA-17463:
---------------------------------------

I ran the test repeatedly, 100 times, and it didn't fail. There seems to be nothing wrong with the test itself.

> Flaky test: kafka.api.PlaintextAdminIntegrationTest."testShareGroups(String).quorum=kraft+kip932"
> -------------------------------------------------------------------------------------------------
>
>          Key: KAFKA-17463
>          URL: https://issues.apache.org/jira/browse/KAFKA-17463
>      Project: Kafka
>   Issue Type: Bug
>     Reporter: Apoorv Mittal
>     Priority: Major
>
> Tested locally and the test passed, but it failed on CI.
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16956/5/testReport/kafka.api/PlaintextAdminIntegrationTest/Build___JDK_8_and_Scala_2_12testShareGroups_String__quorum_kraft_kip932_/]
>
> {code:java}
> [2024-08-30 23:28:16,679] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory:309)
> [2024-08-30 23:28:18,306] WARN Setting entity configs without any checks on the controller. (kafka.zk.KafkaZkClient:500)
> [2024-08-30 23:28:19,243] WARN [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-2. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:19,244] WARN [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-0. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:19,249] WARN [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic-1. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:19,437] WARN [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-1. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:19,438] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-0. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:19,440] WARN [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-1. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:19,743] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic-0. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:19,749] WARN [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Received UNKNOWN_TOPIC_OR_PARTITION from the leader for partition mytopic2-2. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
> [2024-08-30 23:28:21,502] WARN [RequestSendThread controllerId=0] Controller 0 epoch 1 fails to send request (type=StopReplicaRequest, controllerId=0, controllerEpoch=1, brokerEpoch=26, deletePartitions=false, topicStates=StopReplicaTopicState(topicName='mytopic', partitionStates=[StopReplicaPartitionState(partitionIndex=1, leaderEpoch=-2, deletePartition=true)])) to broker localhost:38683 (id: 0 rack: null). Reconnecting to broker. (kafka.controller.RequestSendThread:72)
> java.io.IOException: Connection to 0 was disconnected before the response was read
>     at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:100)
>     at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:259)
>     at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136)
> [2024-08-30 23:28:21,503] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=2, maxWa
[jira] [Created] (KAFKA-17482) Make share partition initialization async
Apoorv Mittal created KAFKA-17482:
-------------------------------------

     Summary: Make share partition initialization async
         Key: KAFKA-17482
         URL: https://issues.apache.org/jira/browse/KAFKA-17482
     Project: Kafka
  Issue Type: Sub-task
    Reporter: Apoorv Mittal
    Assignee: Apoorv Mittal
[jira] [Resolved] (KAFKA-17442) Implement persister error handling and make write calls async
[ https://issues.apache.org/jira/browse/KAFKA-17442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal resolved KAFKA-17442.
-----------------------------------
    Resolution: Fixed

> Implement persister error handling and make write calls async
> -------------------------------------------------------------
>
>          Key: KAFKA-17442
>          URL: https://issues.apache.org/jira/browse/KAFKA-17442
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Apoorv Mittal
>     Assignee: Apoorv Mittal
>     Priority: Major
[jira] [Created] (KAFKA-17464) Flaky test: org.apache.kafka.clients.consumer.internals.AbstractCoordinatorTest.testWakeupAfterSyncGroupSentExternalCompletion
Apoorv Mittal created KAFKA-17464:
-------------------------------------

     Summary: Flaky test: org.apache.kafka.clients.consumer.internals.AbstractCoordinatorTest.testWakeupAfterSyncGroupSentExternalCompletion
         Key: KAFKA-17464
         URL: https://issues.apache.org/jira/browse/KAFKA-17464
     Project: Kafka
  Issue Type: Bug
    Reporter: Apoorv Mittal

[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16956/5/testReport/org.apache.kafka.clients.consumer.internals/AbstractCoordinatorTest/Build___JDK_21_and_Scala_2_13___testWakeupAfterSyncGroupSentExternalCompletion__/]

{code:java}
org.apache.kafka.clients.consumer.internals.AbstractCoordinatorTest.testWakeupAfterSyncGroupSentExternalCompletion()
org.opentest4j.AssertionFailedError: Should have woken up from ensureActiveGroup() ==> Expected org.apache.kafka.common.errors.WakeupException to be thrown, but nothing was thrown.
    at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:152)
    at app//org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:73)
    at app//org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:39)
    at app//org.junit.jupiter.api.Assertions.assertThrows(Assertions.java:3131)
    at app//org.apache.kafka.clients.consumer.internals.AbstractCoordinatorTest.testWakeupAfterSyncGroupSentExternalCompletion(AbstractCoordinatorTest.java:1458)
    at java.base@21.0.3/java.lang.reflect.Method.invoke(Method.java:580)
    at java.base@21.0.3/java.util.ArrayList.forEach(ArrayList.java:1596)
    at java.base@21.0.3/java.util.ArrayList.forEach(ArrayList.java:1596)
{code}
[jira] [Reopened] (KAFKA-15896) Flaky test: shouldQuerySpecificStalePartitionStores() – org.apache.kafka.streams.integration.StoreQueryIntegrationTest
[ https://issues.apache.org/jira/browse/KAFKA-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal reopened KAFKA-15896:
-----------------------------------

Failed again: https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16956/5/testReport/org.apache.kafka.streams.integration/StoreQueryIntegrationTest/Build___JDK_8_and_Scala_2_12___shouldQuerySpecificStalePartitionStores__/

> Flaky test: shouldQuerySpecificStalePartitionStores() – org.apache.kafka.streams.integration.StoreQueryIntegrationTest
> ----------------------------------------------------------------------------------------------------------------------
>
>          Key: KAFKA-15896
>          URL: https://issues.apache.org/jira/browse/KAFKA-15896
>      Project: Kafka
>   Issue Type: Bug
>   Components: streams, unit tests
>     Reporter: Apoorv Mittal
>     Priority: Major
>       Labels: flaky-test
>
> Flaky test:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14699/21/tests/
>
> {code:java}
> Error
> org.apache.kafka.streams.errors.InvalidStateStorePartitionException: The specified partition 1 for store source-table does not exist.
> Stacktrace
> org.apache.kafka.streams.errors.InvalidStateStorePartitionException: The specified partition 1 for store source-table does not exist.
>     at app//org.apache.kafka.streams.state.internals.WrappingStoreProvider.stores(WrappingStoreProvider.java:63)
>     at app//org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStore.get(CompositeReadOnlyKeyValueStore.java:53)
>     at app//org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificStalePartitionStores(StoreQueryIntegrationTest.java:347)
>     at java.base@21.0.1/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
>     at java.base@21.0.1/java.lang.reflect.Method.invoke(Method.java:580)
>     at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>     at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>     at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>     at app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
>     at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
>     at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
>     at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
>     at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
>     at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>     at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>     at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>     at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>     at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>     at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
> {code}
[jira] [Created] (KAFKA-17463) Flaky test: kafka.api.PlaintextAdminIntegrationTest."testShareGroups(String).quorum=kraft+kip932"
Apoorv Mittal created KAFKA-17463:
-------------------------------------

     Summary: Flaky test: kafka.api.PlaintextAdminIntegrationTest."testShareGroups(String).quorum=kraft+kip932"
         Key: KAFKA-17463
         URL: https://issues.apache.org/jira/browse/KAFKA-17463
     Project: Kafka
  Issue Type: Bug
    Reporter: Apoorv Mittal

Tested locally and the test passed, but it failed on CI.

[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16956/5/testReport/kafka.api/PlaintextAdminIntegrationTest/Build___JDK_8_and_Scala_2_12testShareGroups_String__quorum_kraft_kip932_/]

{code:java}
[2024-08-30 23:28:16,679] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory:309)
[2024-08-30 23:28:18,306] WARN Setting entity configs without any checks on the controller. (kafka.zk.KafkaZkClient:500)
[2024-08-30 23:28:19,243] WARN [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-2. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:19,244] WARN [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-0. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:19,249] WARN [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic-1. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:19,437] WARN [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-1. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:19,438] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-0. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:19,440] WARN [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic2-1. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:19,743] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition mytopic-0. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:19,749] WARN [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Received UNKNOWN_TOPIC_OR_PARTITION from the leader for partition mytopic2-2. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread:70)
[2024-08-30 23:28:21,502] WARN [RequestSendThread controllerId=0] Controller 0 epoch 1 fails to send request (type=StopReplicaRequest, controllerId=0, controllerEpoch=1, brokerEpoch=26, deletePartitions=false, topicStates=StopReplicaTopicState(topicName='mytopic', partitionStates=[StopReplicaPartitionState(partitionIndex=1, leaderEpoch=-2, deletePartition=true)])) to broker localhost:38683 (id: 0 rack: null). Reconnecting to broker. (kafka.controller.RequestSendThread:72)
java.io.IOException: Connection to 0 was disconnected before the response was read
    at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:100)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:259)
    at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136)
[2024-08-30 23:28:21,503] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=2, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={}, isolationLevel=read_uncommitted, removed=, replaced=, metadata=(sessionId=1256110258, epoch=1), rackId=) (kafka.server.ReplicaFetcherThread:72)
java.io.IOException: Connection to 1 was disconnected before the response was read
    at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:100)
    at kafka.server.BrokerBlockingSender.sendRequest(BrokerBlockingSender.scala:114)
    at kafka.server.RemoteLeaderEndPoint.fet
[jira] [Created] (KAFKA-17442) Implement persister error handling and make write calls async
Apoorv Mittal created KAFKA-17442:
-------------------------------------

     Summary: Implement persister error handling and make write calls async
         Key: KAFKA-17442
         URL: https://issues.apache.org/jira/browse/KAFKA-17442
     Project: Kafka
  Issue Type: Sub-task
    Reporter: Apoorv Mittal
    Assignee: Apoorv Mittal
[jira] [Resolved] (KAFKA-17396) Fix releasing of session on clients final epoch
[ https://issues.apache.org/jira/browse/KAFKA-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal resolved KAFKA-17396.
-----------------------------------
    Resolution: Fixed

> Fix releasing of session on clients final epoch
> -----------------------------------------------
>
>          Key: KAFKA-17396
>          URL: https://issues.apache.org/jira/browse/KAFKA-17396
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Apoorv Mittal
>     Assignee: Apoorv Mittal
>     Priority: Major
[jira] [Resolved] (KAFKA-17283) Add integration test for Share Group heartbeat
[ https://issues.apache.org/jira/browse/KAFKA-17283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal resolved KAFKA-17283.
-----------------------------------
    Resolution: Fixed

> Add integration test for Share Group heartbeat
> ----------------------------------------------
>
>          Key: KAFKA-17283
>          URL: https://issues.apache.org/jira/browse/KAFKA-17283
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Apoorv Mittal
>     Assignee: Apoorv Mittal
>     Priority: Major
[jira] [Created] (KAFKA-17396) Fix releasing of session on clients final epoch
Apoorv Mittal created KAFKA-17396:
-------------------------------------

     Summary: Fix releasing of session on clients final epoch
         Key: KAFKA-17396
         URL: https://issues.apache.org/jira/browse/KAFKA-17396
     Project: Kafka
  Issue Type: Sub-task
    Reporter: Apoorv Mittal
    Assignee: Apoorv Mittal
[jira] [Resolved] (KAFKA-17346) Refactor Share Session Class to Server Module
[ https://issues.apache.org/jira/browse/KAFKA-17346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apoorv Mittal resolved KAFKA-17346.
-----------------------------------
    Resolution: Fixed

> Refactor Share Session Class to Server Module
> ---------------------------------------------
>
>          Key: KAFKA-17346
>          URL: https://issues.apache.org/jira/browse/KAFKA-17346
>      Project: Kafka
>   Issue Type: Sub-task
>     Reporter: Apoorv Mittal
>     Assignee: Apoorv Mittal
>     Priority: Major
[jira] [Created] (KAFKA-17366) Flaky test: org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testAlterReassignmentThrottle
Apoorv Mittal created KAFKA-17366:
-------------------------------------

     Summary: Flaky test: org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testAlterReassignmentThrottle
         Key: KAFKA-17366
         URL: https://issues.apache.org/jira/browse/KAFKA-17366
     Project: Kafka
  Issue Type: Bug
    Reporter: Apoorv Mittal

org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testAlterReassignmentThrottle

[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16890/3/testReport/org.apache.kafka.tools.reassign/ReassignPartitionsCommandTest/Build___JDK_8_and_Scala_2_12___testAlterReassignmentThrottle__2__Type_Raft_Isolated__MetadataVersion_4_0_IV1_Security_PLAINTEXT/]

{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. timed out waiting for broker throttle to become {0={leader.replication.throttled.rate=-1, replica.alter.log.dirs.io.max.bytes.per.second=-1, follower.replication.throttled.rate=-1}, 1={leader.replication.throttled.rate=-1, replica.alter.log.dirs.io.max.bytes.per.second=-1, follower.replication.throttled.rate=-1}, 2={leader.replication.throttled.rate=-1, replica.alter.log.dirs.io.max.bytes.per.second=-1, follower.replication.throttled.rate=-1}, 3={leader.replication.throttled.rate=-1, replica.alter.log.dirs.io.max.bytes.per.second=-1, follower.replication.throttled.rate=-1}}. Latest throttles were {} ==> expected: but was:
    at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
    at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
    at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
    at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
    at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
    at org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:397)
    at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:445)
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:394)
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:378)
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:351)
    at org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.waitForBrokerLevelThrottles(ReassignPartitionsCommandTest.java:618)
    at org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testAlterReassignmentThrottle(ReassignPartitionsCommandTest.java:231)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
    at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
{code}
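The failures above come out of `TestUtils.waitForCondition`, which polls a condition until it holds or a timeout elapses. A minimal sketch of that polling pattern is below; it is an illustration only, not the real `TestUtils` code, and it returns `false` on timeout where the real helper throws an assertion error instead.

```java
import java.util.function.BooleanSupplier;

// Minimal sketch of a poll-until-condition helper, modeled loosely on the
// pattern behind TestUtils.waitForCondition (not the actual implementation).
class WaitSketch {
    static boolean waitForCondition(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // condition met before the deadline
            }
            try {
                Thread.sleep(pollMs); // back off between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        // One final check at the deadline to avoid a race with the last sleep.
        return condition.getAsBoolean();
    }
}
```

The flakiness reported in these tickets is typically the condition taking longer than the fixed 15000 ms budget on a loaded CI machine, not the polling logic itself.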
[jira] [Created] (KAFKA-17365) Flaky test: org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testProduceAndConsumeWithReassignmentInProgress
Apoorv Mittal created KAFKA-17365:
-------------------------------------

     Summary: Flaky test: org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testProduceAndConsumeWithReassignmentInProgress
         Key: KAFKA-17365
         URL: https://issues.apache.org/jira/browse/KAFKA-17365
     Project: Kafka
  Issue Type: Bug
    Reporter: Apoorv Mittal

org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testProduceAndConsumeWithReassignmentInProgress [2] Type=Raft-Isolated, MetadataVersion=4.0-IV1, Security=PLAINTEXT

[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16890/3/testReport/org.apache.kafka.tools.reassign/ReassignPartitionsCommandTest/Build___JDK_8_and_Scala_2_12___testProduceAndConsumeWithReassignmentInProgress__2__Type_Raft_Isolated__MetadataVersion_4_0_IV1_Security_PLAINTEXT/]

{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. Timed out waiting for verifyAssignment result org.apache.kafka.tools.reassign.VerifyAssignmentResult@273c3657 ==> expected: but was:
    at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
    at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
    at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
    at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
    at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
    at org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:397)
    at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:445)
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:394)
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:378)
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:351)
    at org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.waitForVerifyAssignment(ReassignPartitionsCommandTest.java:705)
    at org.apache.kafka.tools.reassign.ReassignPartitionsCommandTest.testProduceAndConsumeWithReassignmentInProgress(ReassignPartitionsCommandTest.java:328)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
    at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
{code}
[jira] [Created] (KAFKA-17364) Flaky test: org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldBeAbleQueryStandbyStateDuringRebalance()
Apoorv Mittal created KAFKA-17364:
-------------------------------------

     Summary: Flaky test: org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldBeAbleQueryStandbyStateDuringRebalance()
         Key: KAFKA-17364
         URL: https://issues.apache.org/jira/browse/KAFKA-17364
     Project: Kafka
  Issue Type: Bug
    Reporter: Apoorv Mittal

org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldBeAbleQueryStandbyStateDuringRebalance()

[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16890/3/testReport/org.apache.kafka.streams.integration/QueryableStateIntegrationTest/Build___JDK_8_and_Scala_2_12___shouldBeAbleQueryStandbyStateDuringRebalance__/]

{code:java}
java.lang.AssertionError: Expected: <8> but: was <4>
    at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
    at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
    at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyOffsetLagFetch(QueryableStateIntegrationTest.java:289)
    at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldBeAbleQueryStandbyStateDuringRebalance(QueryableStateIntegrationTest.java:661)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
{code}
[jira] [Created] (KAFKA-17363) Flaky test: kafka.api.PlaintextConsumerCommitTest.testAutoCommitOnRebalance(String, String).quorum=kraft+kip848.groupProtocol=consumer
Apoorv Mittal created KAFKA-17363: - Summary: Flaky test: kafka.api.PlaintextConsumerCommitTest.testAutoCommitOnRebalance(String, String).quorum=kraft+kip848.groupProtocol=consumer Key: KAFKA-17363 URL: https://issues.apache.org/jira/browse/KAFKA-17363 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal kafka.api.PlaintextConsumerCommitTest.testAutoCommitOnRebalance(String, String).quorum=kraft+kip848.groupProtocol=consumer [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16890/3/testReport/kafka.api/PlaintextConsumerCommitTest/Build___JDK_8_and_Scala_2_12___testAutoCommitOnRebalance_String__String__quorum_kraft_kip848_groupProtocol_consumer/] {code:java} org.opentest4j.AssertionFailedError: Topic [topic2] metadata not propagated after 6 ms at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:38) at org.junit.jupiter.api.Assertions.fail(Assertions.java:138) at kafka.utils.TestUtils$.waitForAllPartitionsMetadata(TestUtils.scala:931) at kafka.utils.TestUtils$.createTopicWithAdmin(TestUtils.scala:474) at kafka.integration.KafkaServerTestHarness.$anonfun$createTopic$1(KafkaServerTestHarness.scala:193) at scala.util.Using$.resource(Using.scala:269) at kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:185) at kafka.api.PlaintextConsumerCommitTest.testAutoCommitOnRebalance(PlaintextConsumerCommitTest.scala:223) at java.lang.reflect.Method.invoke(Method.java:498) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272) at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) at java.util.stream.ForEachOps$ForEachOp.evaluateSequent
[jira] [Created] (KAFKA-17362) Flaky test: kafka.api.AuthorizerIntegrationTest."testDeleteRecordsWithWildCardAuth(String).quorum=kraft"
Apoorv Mittal created KAFKA-17362: - Summary: Flaky test: kafka.api.AuthorizerIntegrationTest."testDeleteRecordsWithWildCardAuth(String).quorum=kraft" Key: KAFKA-17362 URL: https://issues.apache.org/jira/browse/KAFKA-17362 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal kafka.api.AuthorizerIntegrationTest."testDeleteRecordsWithWildCardAuth(String).quorum=kraft" [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16890/3/testReport/kafka.api/AuthorizerIntegrationTest/Build___JDK_8_and_Scala_2_12testDeleteRecordsWithWildCardAuth_String__quorum_kraft_/] {code:java} java.lang.RuntimeException: Received a fatal error while waiting for the controller to acknowledge that we are caught up at org.apache.kafka.server.util.FutureUtils.waitWithLogging(FutureUtils.java:73) at kafka.server.BrokerServer.startup(BrokerServer.scala:532) at kafka.integration.KafkaServerTestHarness.$anonfun$createBrokers$1(KafkaServerTestHarness.scala:401) at kafka.integration.KafkaServerTestHarness.$anonfun$createBrokers$1$adapted(KafkaServerTestHarness.scala:397) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) at scala.collection.IterableLike.foreach(IterableLike.scala:74) at scala.collection.IterableLike.foreach$(IterableLike.scala:73) at scala.collection.AbstractIterable.foreach(Iterable.scala:56) at kafka.integration.KafkaServerTestHarness.createBrokers(KafkaServerTestHarness.scala:397) at kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:130) at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:141) at kafka.api.AbstractAuthorizerIntegrationTest.setUp(AbstractAuthorizerIntegrationTest.scala:126) at java.lang.reflect.Method.invoke(Method.java:498) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272) at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.util.stream.ForEachOps$ForE
[jira] [Commented] (KAFKA-15772) Flaky test TransactionsWithTieredStoreTest
[ https://issues.apache.org/jira/browse/KAFKA-15772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17874829#comment-17874829 ] Apoorv Mittal commented on KAFKA-15772: --- Flaky Test: org.apache.kafka.tiered.storage.integration.TransactionsWithTieredStoreTest."testFencingOnSend(String).quorum=kraft" [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-16890/3/testReport/org.apache.kafka.tiered.storage.integration/TransactionsWithTieredStoreTest/Build___JDK_21_and_Scala_2_13testFencingOnSend_String__quorum_kraft_/] {code:java} java.lang.AssertionError: Got an unexpected exception from a fenced producer. at kafka.api.TransactionsTest.testFencingOnSend(TransactionsTest.scala:551) at java.base/java.lang.reflect.Method.invoke(Method.java:580) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1024) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at 
java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at 
java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.base/java.util.ArrayList.forEach(ArrayList.java:1596) at java.base/java.util.ArrayList.forEach(ArrayList.java:1596) Caused by: org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state at app//org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:1042) at app//org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartition(TransactionManager.java:386) at app//org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1092) at app//org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:993) at app//org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:875) at app//kafka.api.TransactionsTest.testFencingOnSend(TransactionsTest.scala:538) ... 39 more Caused by: org.apache.kafka.common.errors.InvalidTxnStateException:
[jira] [Created] (KAFKA-17351) Validate compacted topics start offset handling in Share Partition
Apoorv Mittal created KAFKA-17351: - Summary: Validate compacted topics start offset handling in Share Partition Key: KAFKA-17351 URL: https://issues.apache.org/jira/browse/KAFKA-17351 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal Currently we handle compacted topics in Share Partition by moving the start offset, but there might be a scenario where the start offset is not present in the cached batch and an NPE can occur. -- This message was sent by Atlassian Jira (v8.20.10#820010)
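The hazard described in KAFKA-17351 can be sketched as follows. This is an illustrative toy, not the actual SharePartition code: the class, method names, and the use of a String batch payload are all assumptions made for the example.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Illustrative sketch only: a cache keyed by batch base offset. After log
// compaction moves the start offset, a lookup may find no covering batch,
// so callers must tolerate a null result instead of dereferencing blindly.
public class CachedBatchLookup {
    private final NavigableMap<Long, String> batchesByBaseOffset = new TreeMap<>();

    public void addBatch(long baseOffset, String batch) {
        batchesByBaseOffset.put(baseOffset, batch);
    }

    // Returns the cached batch covering startOffset, or null when the offset
    // precedes every cached batch (e.g. after compaction). The null case is
    // exactly where an unchecked dereference would raise an NPE.
    public String findBatch(long startOffset) {
        Map.Entry<Long, String> entry = batchesByBaseOffset.floorEntry(startOffset);
        return entry == null ? null : entry.getValue();
    }
}
```

The point of the sketch is only that the lookup's null result is a legal state after compaction and must be handled explicitly.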
[jira] [Created] (KAFKA-17346) Refactor Share Session Class to Server Module
Apoorv Mittal created KAFKA-17346: - Summary: Refactor Share Session Class to Server Module Key: KAFKA-17346 URL: https://issues.apache.org/jira/browse/KAFKA-17346 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-15863) Handle push telemetry throttling with quota manager
[ https://issues.apache.org/jira/browse/KAFKA-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17873042#comment-17873042 ] Apoorv Mittal commented on KAFKA-15863: --- [~junrao] KIP-714 in [throttling and rate-limiting|https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability#KIP714:Clientmetricsandobservability-Throttlingandrate-limiting] mentions that standard throttling on the client's existing quota will mute the channel, which [~sjhajharia] has described above. The other thing which KIP-714 says about rate-limiting is implemented in ClientMetricsManager, which checks PushIntervalMs; if a subsequent telemetry request violates that interval then `THROTTLING_QUOTA_EXCEEDED` is returned. However, the latter doesn't mute the channel as KIP-714 doesn't define that. > Handle push telemetry throttling with quota manager > --- > > Key: KAFKA-15863 > URL: https://issues.apache.org/jira/browse/KAFKA-15863 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Sanskar Jhajharia >Priority: Major > > Details: https://github.com/apache/kafka/pull/14699#discussion_r1399714279 -- This message was sent by Atlassian Jira (v8.20.10#820010)
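The interval-based rate limiting the comment describes can be sketched as below. The class and method names here are hypothetical, not the actual ClientMetricsManager implementation; the sketch only shows the shape of the behaviour: a push arriving before the push interval has elapsed is rejected (the caller would answer with THROTTLING_QUOTA_EXCEEDED), while the connection is never muted.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of interval-based throttling per client instance.
// A rejected push leaves the channel open; only the request is refused.
public class PushIntervalThrottle {
    private final long pushIntervalMs;
    private final Map<String, Long> lastAcceptedMs = new HashMap<>();

    public PushIntervalThrottle(long pushIntervalMs) {
        this.pushIntervalMs = pushIntervalMs;
    }

    // Returns true if the push is accepted; false means the caller should
    // respond with a throttling error code without muting the channel.
    public boolean tryAcceptPush(String clientInstanceId, long nowMs) {
        Long last = lastAcceptedMs.get(clientInstanceId);
        if (last != null && nowMs - last < pushIntervalMs) {
            return false; // too soon since the last accepted push
        }
        lastAcceptedMs.put(clientInstanceId, nowMs);
        return true;
    }
}
```

This contrasts with standard quota throttling, which (per KIP-714) mutes the channel for the throttle duration.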
[jira] [Commented] (KAFKA-17311) testClientInstanceId, testClientInstanceIdInvalidTimeout, and testClientInstanceIdNoTelemetryReporterRegistered should include CONSUMER protocol
[ https://issues.apache.org/jira/browse/KAFKA-17311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17872806#comment-17872806 ] Apoorv Mittal commented on KAFKA-17311: --- Thanks for raising the issue, I might have missed the new consumer protocol in the tests. > testClientInstanceId, testClientInstanceIdInvalidTimeout, and > testClientInstanceIdNoTelemetryReporterRegistered should include CONSUMER > protocol > > > Key: KAFKA-17311 > URL: https://issues.apache.org/jira/browse/KAFKA-17311 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: Ming-Yen Chung >Priority: Minor > > Those test cases don't set `GroupProtocol`, so they always test > `ClassicConsumer` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-15863) Handle push telemetry throttling with quota manager
[ https://issues.apache.org/jira/browse/KAFKA-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-15863: - Assignee: Sanskar Jhajharia (was: Apoorv Mittal) > Handle push telemetry throttling with quota manager > --- > > Key: KAFKA-15863 > URL: https://issues.apache.org/jira/browse/KAFKA-15863 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Sanskar Jhajharia >Priority: Major > > Details: https://github.com/apache/kafka/pull/14699#discussion_r1399714279 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16742) Add ShareGroupDescribe API support in GroupCoordinator
[ https://issues.apache.org/jira/browse/KAFKA-16742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16742. --- Resolution: Fixed > Add ShareGroupDescribe API support in GroupCoordinator > -- > > Key: KAFKA-16742 > URL: https://issues.apache.org/jira/browse/KAFKA-16742 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-17290) Add integration test for ShareGroupFetch/Acknowledge requests
[ https://issues.apache.org/jira/browse/KAFKA-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-17290: - Assignee: Chirag Wadhwa (was: Andrew Schofield) > Add integration test for ShareGroupFetch/Acknowledge requests > - > > Key: KAFKA-17290 > URL: https://issues.apache.org/jira/browse/KAFKA-17290 > Project: Kafka > Issue Type: Sub-task >Reporter: Andrew Schofield >Assignee: Chirag Wadhwa >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-17288) Remove tracking member partition epoch for share groups
Apoorv Mittal created KAFKA-17288: - Summary: Remove tracking member partition epoch for share groups Key: KAFKA-17288 URL: https://issues.apache.org/jira/browse/KAFKA-17288 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-17283) Add integration test for Share Group heartbeat
Apoorv Mittal created KAFKA-17283: - Summary: Add integration test for Share Group heartbeat Key: KAFKA-17283 URL: https://issues.apache.org/jira/browse/KAFKA-17283 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-17230) Kafka consumer client doesn't report node request-latency metrics
[ https://issues.apache.org/jira/browse/KAFKA-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal updated KAFKA-17230: -- Labels: client consumer (was: ) > Kafka consumer client doesn't report node request-latency metrics > - > > Key: KAFKA-17230 > URL: https://issues.apache.org/jira/browse/KAFKA-17230 > Project: Kafka > Issue Type: Bug > Components: clients, consumer, metrics >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > Labels: client, consumer > > The Kafka Consumer client registers node/connection latency metrics in > [Selector.java|https://github.com/apache/kafka/blob/9d634629f29cd402a723d7e7bece18c8099065a4/clients/src/main/java/org/apache/kafka/common/network/Selector.java#L1359] > but the values against the metrics are never recorded. This seems to have been an > issue since inception. The same metrics are also created for the Kafka Producer, > where the values are recorded in > [Sender.java|https://github.com/apache/kafka/blob/9d634629f29cd402a723d7e7bece18c8099065a4/clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java#L1092]. > Hence the node request-latency-max and request-latency-avg metrics appear > correctly for the Kafka Producer but are NaN for the Kafka Consumer. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-17230) Kafka consumer client doesn't report node request-latency metrics
Apoorv Mittal created KAFKA-17230: - Summary: Kafka consumer client doesn't report node request-latency metrics Key: KAFKA-17230 URL: https://issues.apache.org/jira/browse/KAFKA-17230 Project: Kafka Issue Type: Bug Components: clients Reporter: Apoorv Mittal Assignee: Apoorv Mittal The Kafka Consumer client registers node/connection latency metrics in [Selector.java|https://github.com/apache/kafka/blob/9d634629f29cd402a723d7e7bece18c8099065a4/clients/src/main/java/org/apache/kafka/common/network/Selector.java#L1359] but the values against the metrics are never recorded. This seems to have been an issue since inception. The same metrics are also created for the Kafka Producer, where the values are recorded in [Sender.java|https://github.com/apache/kafka/blob/9d634629f29cd402a723d7e7bece18c8099065a4/clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java#L1092]. Hence the node request-latency-max and request-latency-avg metrics appear correctly for the Kafka Producer but are NaN for the Kafka Consumer. -- This message was sent by Atlassian Jira (v8.20.10#820010)
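A minimal sketch of the failure mode reported in KAFKA-17230, using a toy sensor rather than Kafka's actual Metrics/Sensor API: a metric that is registered but never has a value recorded reports NaN, which matches the consumer-side symptom, while recording samples on each response produces a real average, matching the producer.

```java
// Toy stand-in for a latency sensor (not Kafka's Metrics API): registering
// the sensor alone yields NaN; only record() calls produce a real average.
public class LatencySensor {
    private double sumMs = 0.0;
    private long count = 0;

    public void record(double latencyMs) {
        sumMs += latencyMs;
        count++;
    }

    public double avgMs() {
        // No samples recorded -> NaN, the symptom seen on the consumer.
        return count == 0 ? Double.NaN : sumMs / count;
    }
}
```

The bug is thus not in metric creation but in the missing call sites that should feed the sensor on the consumer's network path.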
[jira] [Reopened] (KAFKA-15863) Handle push telemetry throttling with quota manager
[ https://issues.apache.org/jira/browse/KAFKA-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reopened KAFKA-15863: --- Reopening to introduce metric quota. > Handle push telemetry throttling with quota manager > --- > > Key: KAFKA-15863 > URL: https://issues.apache.org/jira/browse/KAFKA-15863 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > > Details: https://github.com/apache/kafka/pull/14699#discussion_r1399714279 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16744) Add support for share group describe in KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-16744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16744. --- Resolution: Fixed > Add support for share group describe in KafkaApis > - > > Key: KAFKA-16744 > URL: https://issues.apache.org/jira/browse/KAFKA-16744 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16743) Add support for share group heartbeat in KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-16743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16743. --- Resolution: Fixed > Add support for share group heartbeat in KafkaApis > -- > > Key: KAFKA-16743 > URL: https://issues.apache.org/jira/browse/KAFKA-16743 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16756) Implement max delivery count functionality in SharePartition
[ https://issues.apache.org/jira/browse/KAFKA-16756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16756. --- Resolution: Done Already merged as part of other code changes in Share Partition. > Implement max delivery count functionality in SharePartition > > > Key: KAFKA-16756 > URL: https://issues.apache.org/jira/browse/KAFKA-16756 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-16756) Implement max delivery count functionality in SharePartition
[ https://issues.apache.org/jira/browse/KAFKA-16756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-16756: - Assignee: Apoorv Mittal (was: Chirag Wadhwa) > Implement max delivery count functionality in SharePartition > > > Key: KAFKA-16756 > URL: https://issues.apache.org/jira/browse/KAFKA-16756 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-15807) Add support for compression/decompression of metrics
[ https://issues.apache.org/jira/browse/KAFKA-15807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal updated KAFKA-15807: -- Fix Version/s: 3.7.1 > Add support for compression/decompression of metrics > > > Key: KAFKA-15807 > URL: https://issues.apache.org/jira/browse/KAFKA-15807 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > Fix For: 3.7.1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-17047) Refactor Consumer group and shared classes with Share to modern package
Apoorv Mittal created KAFKA-17047: - Summary: Refactor Consumer group and shared classes with Share to modern package Key: KAFKA-17047 URL: https://issues.apache.org/jira/browse/KAFKA-17047 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-17040) Unknown telemetry state: TERMINATED thrown when closing AsyncKafkaConsumer
[ https://issues.apache.org/jira/browse/KAFKA-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17860012#comment-17860012 ] Apoorv Mittal commented on KAFKA-17040: --- [~kirktrue] Thanks for reporting, I can take it up. Just a quick question: the error only occurs while closing the consumer, correct? Is it under scenarios where the consumer close takes more time than the next network client poll time? I expect that's the only scenario in which this issue can occur. I am just wondering when specifically this scenario happens. > Unknown telemetry state: TERMINATED thrown when closing AsyncKafkaConsumer > -- > > Key: KAFKA-17040 > URL: https://issues.apache.org/jira/browse/KAFKA-17040 > Project: Kafka > Issue Type: Bug > Components: clients, metrics >Affects Versions: 3.9.0 >Reporter: Kirk True >Assignee: Apoorv Mittal >Priority: Major > > An error is occasionally thrown when closing the {{{}AsyncKafkaConsumer{}}}: > {noformat} > [ERROR] 2024-06-20 17:13:54,121 [consumer_background_thread] > org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread > lambda$configureThread$0 - Uncaught exception in thread > 'consumer_background_thread': > java.lang.IllegalStateException: Unknown telemetry state: TERMINATED > at > org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter$DefaultClientTelemetrySender.timeToNextUpdate(ClientTelemetryReporter.java:363) > at > org.apache.kafka.clients.NetworkClient$TelemetrySender.maybeUpdate(NetworkClient.java:1392) > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:668) > at > org.apache.kafka.clients.consumer.internals.NetworkClientDelegate.poll(NetworkClientDelegate.java:143) > at > org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.sendUnsentRequests(ConsumerNetworkThread.java:299) > at > org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.cleanup(ConsumerNetworkThread.java:318) > at > 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.run(ConsumerNetworkThread.java:105){noformat} > The issue appears to be that the {{TERMINATED}} state is not expected in the > switch statement inside > [timeToNextUpdate()|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/telemetry/internals/ClientTelemetryReporter.java#L307]. > As an aside, the error message might make more sense to be written as > "{_}Unexpected{_} telemetry state" instead of "{_}Unknown{_} telemetry state" > since {{TERMINATED}} is a known state, but heretofore unexpected. -- This message was sent by Atlassian Jira (v8.20.10#820010)
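The failure mode can be illustrated with a simplified switch. The enum below is a reduced, hypothetical subset of the real ClientTelemetryState, and the return values are placeholders; the sketch shows the fixed shape, where TERMINATED has its own case so the default branch no longer throws during consumer close.

```java
// Simplified illustration: without an explicit TERMINATED case, the switch
// falls through to default and throws IllegalStateException at close time.
public class TelemetryStateSwitch {
    enum State { PUSH_NEEDED, TERMINATING, TERMINATED }

    static long timeToNextUpdate(State state) {
        switch (state) {
            case PUSH_NEEDED:
            case TERMINATING:
                return 100L;           // schedule another network poll soon
            case TERMINATED:           // the case the report says was missing
                return Long.MAX_VALUE; // nothing further to send
            default:
                throw new IllegalStateException("Unexpected telemetry state: " + state);
        }
    }
}
```

This also reflects the report's wording suggestion: TERMINATED is a known state, so "Unexpected" reads better than "Unknown" in the default branch.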
[jira] [Assigned] (KAFKA-17040) Unknown telemetry state: TERMINATED thrown when closing AsyncKafkaConsumer
[ https://issues.apache.org/jira/browse/KAFKA-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-17040: - Assignee: Apoorv Mittal > Unknown telemetry state: TERMINATED thrown when closing AsyncKafkaConsumer > -- > > Key: KAFKA-17040 > URL: https://issues.apache.org/jira/browse/KAFKA-17040 > Project: Kafka > Issue Type: Bug > Components: clients, metrics >Affects Versions: 3.9.0 >Reporter: Kirk True >Assignee: Apoorv Mittal >Priority: Major > > An error is occasionally thrown when closing the {{{}AsyncKafkaConsumer{}}}: > {noformat} > [ERROR] 2024-06-20 17:13:54,121 [consumer_background_thread] > org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread > lambda$configureThread$0 - Uncaught exception in thread > 'consumer_background_thread': > java.lang.IllegalStateException: Unknown telemetry state: TERMINATED > at > org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter$DefaultClientTelemetrySender.timeToNextUpdate(ClientTelemetryReporter.java:363) > at > org.apache.kafka.clients.NetworkClient$TelemetrySender.maybeUpdate(NetworkClient.java:1392) > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:668) > at > org.apache.kafka.clients.consumer.internals.NetworkClientDelegate.poll(NetworkClientDelegate.java:143) > at > org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.sendUnsentRequests(ConsumerNetworkThread.java:299) > at > org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.cleanup(ConsumerNetworkThread.java:318) > at > org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.run(ConsumerNetworkThread.java:105){noformat} > The issue appears to be that the {{TERMINATED}} state is not expected in the > switch statement inside > [timeToNextUpdate()|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/telemetry/internals/ClientTelemetryReporter.java#L307]. 
> As an aside, the error message might make more sense to be written as > "{_}Unexpected{_} telemetry state" instead of "{_}Unknown{_} telemetry state" > since {{TERMINATED}} is a known state, but heretofore unexpected. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16749) Implement share fetch messages
[ https://issues.apache.org/jira/browse/KAFKA-16749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16749. --- Resolution: Done > Implement share fetch messages > -- > > Key: KAFKA-16749 > URL: https://issues.apache.org/jira/browse/KAFKA-16749 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16999) Integrate persister read API in Partition leader initialization
[ https://issues.apache.org/jira/browse/KAFKA-16999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16999. --- Resolution: Done > Integrate persister read API in Partition leader initialization > -- > > Key: KAFKA-16999 > URL: https://issues.apache.org/jira/browse/KAFKA-16999 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16748) Implement share response handling in SharePartitionManager
[ https://issues.apache.org/jira/browse/KAFKA-16748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16748. --- Resolution: Fixed > Implement share response handling in SharePartitionManager > -- > > Key: KAFKA-16748 > URL: https://issues.apache.org/jira/browse/KAFKA-16748 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Abhinav Dixit >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-17002) Integrate partition leader epoch in Share Partition
Apoorv Mittal created KAFKA-17002: - Summary: Integrate partition leader epoch in Share Partition Key: KAFKA-17002 URL: https://issues.apache.org/jira/browse/KAFKA-17002 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16999) Integrate persister read API in Partition leader initialization
Apoorv Mittal created KAFKA-16999: - Summary: Integrate persister read API in Partition leader initialization Key: KAFKA-16999 URL: https://issues.apache.org/jira/browse/KAFKA-16999 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16753) Implement acknowledge functionality in SharePartition
[ https://issues.apache.org/jira/browse/KAFKA-16753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16753. --- Resolution: Fixed > Implement acknowledge functionality in SharePartition > - > > Key: KAFKA-16753 > URL: https://issues.apache.org/jira/browse/KAFKA-16753 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16752) Implement acquire functionality in SharePartition
[ https://issues.apache.org/jira/browse/KAFKA-16752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-16752. --- Resolution: Done > Implement acquire functionality in SharePartition > - > > Key: KAFKA-16752 > URL: https://issues.apache.org/jira/browse/KAFKA-16752 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-16950) Define Persister and Share Coordinator RPCs
[ https://issues.apache.org/jira/browse/KAFKA-16950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal updated KAFKA-16950: -- Description: Add Persister interface with schemas for RPCs. The classes which SharePartition needs to integrate are listed below; note that some of them result from the generated JSON schema classes. {code:java} import org.apache.kafka.server.group.share.GroupTopicPartitionData; import org.apache.kafka.server.group.share.PartitionAllData; import org.apache.kafka.server.group.share.PartitionErrorData; import org.apache.kafka.server.group.share.PartitionFactory; import org.apache.kafka.server.group.share.PartitionIdLeaderEpochData; import org.apache.kafka.server.group.share.PartitionStateBatchData; import org.apache.kafka.server.group.share.Persister; import org.apache.kafka.server.group.share.PersisterStateBatch; import org.apache.kafka.server.group.share.ReadShareGroupStateParameters; import org.apache.kafka.server.group.share.ReadShareGroupStateResult; import org.apache.kafka.server.group.share.TopicData; import org.apache.kafka.server.group.share.WriteShareGroupStateParameters; import org.apache.kafka.server.group.share.WriteShareGroupStateResult; {code} > Define Persister and Share Coordinator RPCs > --- > > Key: KAFKA-16950 > URL: https://issues.apache.org/jira/browse/KAFKA-16950 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Priority: Major > > Add Persister interface with schemas for RPCs. The classes which SharePartition > needs to integrate are listed below; note that some of them result from the > generated JSON schema classes. 
> > > {code:java} > import org.apache.kafka.server.group.share.GroupTopicPartitionData; > import org.apache.kafka.server.group.share.PartitionAllData; > import org.apache.kafka.server.group.share.PartitionErrorData; > import org.apache.kafka.server.group.share.PartitionFactory; > import org.apache.kafka.server.group.share.PartitionIdLeaderEpochData; > import org.apache.kafka.server.group.share.PartitionStateBatchData; > import org.apache.kafka.server.group.share.Persister; > import org.apache.kafka.server.group.share.PersisterStateBatch; > import org.apache.kafka.server.group.share.ReadShareGroupStateParameters; > import org.apache.kafka.server.group.share.ReadShareGroupStateResult; > import org.apache.kafka.server.group.share.TopicData; > import org.apache.kafka.server.group.share.WriteShareGroupStateParameters; > import org.apache.kafka.server.group.share.WriteShareGroupStateResult; {code} > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
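For illustration only, a toy persister of roughly this shape shows why the API is asynchronous: callers chain on the returned future instead of blocking a request thread. Every name and signature below is invented for the sketch; the real contracts are the org.apache.kafka.server.group.share classes listed above.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch only: invented types standing in for the real persister RPC
// classes. Async by design, so SharePartition initialization can chain work
// onto the futures rather than block a broker request thread.
public class PersisterSketch {
    public interface Persister {
        CompletableFuture<Long> readStartOffset(String groupId, String topic, int partition);
        CompletableFuture<Void> writeStartOffset(String groupId, String topic, int partition, long offset);
    }

    public static class InMemoryPersister implements Persister {
        private final Map<String, Long> state = new ConcurrentHashMap<>();
        private static String key(String g, String t, int p) { return g + "/" + t + "-" + p; }

        public CompletableFuture<Long> readStartOffset(String g, String t, int p) {
            return CompletableFuture.completedFuture(state.getOrDefault(key(g, t, p), 0L));
        }
        public CompletableFuture<Void> writeStartOffset(String g, String t, int p, long offset) {
            state.put(key(g, t, p), offset);
            return CompletableFuture.completedFuture(null);
        }
    }

    // Write then read back one partition's share-group start offset.
    public static long roundTrip(String group, String topic, int partition, long offset) {
        Persister persister = new InMemoryPersister();
        persister.writeStartOffset(group, topic, partition, offset).join();
        return persister.readStartOffset(group, topic, partition).join();
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("grp", "orders", 0, 42L));
    }
}
```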
[jira] [Created] (KAFKA-16950) Define Persister and Share Coordinator RPCs
Apoorv Mittal created KAFKA-16950: - Summary: Define Persister and Share Coordinator RPCs Key: KAFKA-16950 URL: https://issues.apache.org/jira/browse/KAFKA-16950 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16914) Add Share Group dynamic and broker configs
Apoorv Mittal created KAFKA-16914: - Summary: Add Share Group dynamic and broker configs Key: KAFKA-16914 URL: https://issues.apache.org/jira/browse/KAFKA-16914 Project: Kafka Issue Type: Sub-task Affects Versions: 4.0.0 Reporter: Apoorv Mittal Assignee: Abhinav Dixit Fix For: 4.0.0 Add the configs required for share group in KafkaConfig or equivalent. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-16743) Add support for share group heartbeat in KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-16743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-16743: - Assignee: Apoorv Mittal (was: Abhinav Dixit) > Add support for share group heartbeat in KafkaApis > -- > > Key: KAFKA-16743 > URL: https://issues.apache.org/jira/browse/KAFKA-16743 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16905) Thread block in describe topics API in Admin Client
Apoorv Mittal created KAFKA-16905: - Summary: Thread block in describe topics API in Admin Client Key: KAFKA-16905 URL: https://issues.apache.org/jira/browse/KAFKA-16905 Project: Kafka Issue Type: Bug Components: clients Affects Versions: 3.9.0 Reporter: Apoorv Mittal Assignee: Apoorv Mittal Fix For: 3.9.0 The thread blocks while running the admin client's describe topics API. {code:java} "kafka-admin-client-thread | adminclient-3" #114 daemon prio=5 os_prio=31 cpu=6.57ms elapsed=417.17s tid=0x0001364fc200 nid=0x13403 waiting on condition [0x0002bb419000] java.lang.Thread.State: WAITING (parking) at jdk.internal.misc.Unsafe.park(java.base@17.0.7/Native Method) - parking to wait for <0x000773804828> (a java.util.concurrent.CompletableFuture$Signaller) at java.util.concurrent.locks.LockSupport.park(java.base@17.0.7/LockSupport.java:211) at java.util.concurrent.CompletableFuture$Signaller.block(java.base@17.0.7/CompletableFuture.java:1864) at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.7/ForkJoinPool.java:3463) at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.7/ForkJoinPool.java:3434) at java.util.concurrent.CompletableFuture.waitingGet(java.base@17.0.7/CompletableFuture.java:1898) at java.util.concurrent.CompletableFuture.get(java.base@17.0.7/CompletableFuture.java:2072) at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165) at org.apache.kafka.clients.admin.KafkaAdminClient.handleDescribeTopicsByNamesWithDescribeTopicPartitionsApi(KafkaAdminClient.java:2324) at org.apache.kafka.clients.admin.KafkaAdminClient.describeTopics(KafkaAdminClient.java:2122) at org.apache.kafka.clients.admin.Admin.describeTopics(Admin.java:311) at io.confluent.kafkarest.controllers.TopicManagerImpl.describeTopics(TopicManagerImpl.java:155) at io.confluent.kafkarest.controllers.TopicManagerImpl.lambda$listTopics$2(TopicManagerImpl.java:76) at 
io.confluent.kafkarest.controllers.TopicManagerImpl$$Lambda$1925/0x000800891448.apply(Unknown Source) at java.util.concurrent.CompletableFuture$UniCompose.tryFire(java.base@17.0.7/CompletableFuture.java:1150) at java.util.concurrent.CompletableFuture.postComplete(java.base@17.0.7/CompletableFuture.java:510) at java.util.concurrent.CompletableFuture.complete(java.base@17.0.7/CompletableFuture.java:2147) at io.confluent.kafkarest.common.KafkaFutures.lambda$toCompletableFuture$0(KafkaFutures.java:45) at io.confluent.kafkarest.common.KafkaFutures$$Lambda$1909/0x000800897528.accept(Unknown Source) at org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107) at org.apache.kafka.common.internals.KafkaFutureImpl$$Lambda$1910/0x000800897750.accept(Unknown Source) at java.util.concurrent.CompletableFuture.uniWhenComplete(java.base@17.0.7/CompletableFuture.java:863) at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(java.base@17.0.7/CompletableFuture.java:841) at java.util.concurrent.CompletableFuture.postComplete(java.base@17.0.7/CompletableFuture.java:510) at java.util.concurrent.CompletableFuture.complete(java.base@17.0.7/CompletableFuture.java:2147) at org.apache.kafka.common.internals.KafkaCompletableFuture.kafkaComplete(KafkaCompletableFuture.java:39) at org.apache.kafka.common.internals.KafkaFutureImpl.complete(KafkaFutureImpl.java:122) at org.apache.kafka.clients.admin.KafkaAdminClient$4.handleResponse(KafkaAdminClient.java:2106) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1370) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1523) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1446) at java.lang.Thread.run(java.base@17.0.7/Thread.java:833) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
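The trace shows KafkaFutureImpl.get() being invoked from the admin client's own network thread, the one thread that could complete the future, hence the permanent WAITING state. A minimal sketch of the safe alternative (names are illustrative, not the KafkaAdminClient internals): chain the continuation with thenApply() and only block, if at all, on a caller thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the non-blocking pattern on a single "network" thread. Calling
// response.get() from inside that thread would deadlock, because the same
// thread is the only one able to complete the future.
public class SingleThreadBlockSketch {
    public static String safeCompose() throws Exception {
        ExecutorService networkThread = Executors.newSingleThreadExecutor();
        try {
            CompletableFuture<String> response = new CompletableFuture<>();
            // Non-blocking: register the continuation instead of calling get()
            // on the network thread.
            CompletableFuture<String> result = response.thenApply(r -> r + "-handled");
            networkThread.submit(() -> response.complete("topic-metadata"));
            return result.get(5, TimeUnit.SECONDS); // blocking only the caller thread
        } finally {
            networkThread.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(safeCompose());
    }
}
```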
[jira] [Created] (KAFKA-16822) Abstract consumer group in coordinator to share functionality with share group
Apoorv Mittal created KAFKA-16822: - Summary: Abstract consumer group in coordinator to share functionality with share group Key: KAFKA-16822 URL: https://issues.apache.org/jira/browse/KAFKA-16822 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16756) Implement max delivery count functionality in SharePartition
Apoorv Mittal created KAFKA-16756: - Summary: Implement max delivery count functionality in SharePartition Key: KAFKA-16756 URL: https://issues.apache.org/jira/browse/KAFKA-16756 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Chirag Wadhwa -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16753) Implement acknowledge functionality in SharePartition
Apoorv Mittal created KAFKA-16753: - Summary: Implement acknowledge functionality in SharePartition Key: KAFKA-16753 URL: https://issues.apache.org/jira/browse/KAFKA-16753 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16755) Implement lock timeout functionality in SharePartition
Apoorv Mittal created KAFKA-16755: - Summary: Implement lock timeout functionality in SharePartition Key: KAFKA-16755 URL: https://issues.apache.org/jira/browse/KAFKA-16755 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Abhinav Dixit -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16754) Implement release acquired records functionality in SharePartition
Apoorv Mittal created KAFKA-16754: - Summary: Implement release acquired records functionality in SharePartition Key: KAFKA-16754 URL: https://issues.apache.org/jira/browse/KAFKA-16754 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Abhinav Dixit -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16752) Implement acquire functionality in SharePartition
Apoorv Mittal created KAFKA-16752: - Summary: Implement acquire functionality in SharePartition Key: KAFKA-16752 URL: https://issues.apache.org/jira/browse/KAFKA-16752 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16750) Implement acknowledge API in SharePartitionManager
Apoorv Mittal created KAFKA-16750: - Summary: Implement acknowledge API in SharePartitionManager Key: KAFKA-16750 URL: https://issues.apache.org/jira/browse/KAFKA-16750 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Chirag Wadhwa -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16751) Implement release acquired records in SharePartitionManager
Apoorv Mittal created KAFKA-16751: - Summary: Implement release acquired records in SharePartitionManager Key: KAFKA-16751 URL: https://issues.apache.org/jira/browse/KAFKA-16751 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Abhinav Dixit -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16749) Implement share fetch messages
Apoorv Mittal created KAFKA-16749: - Summary: Implement share fetch messages Key: KAFKA-16749 URL: https://issues.apache.org/jira/browse/KAFKA-16749 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16747) Implement share sessions and context
Apoorv Mittal created KAFKA-16747: - Summary: Implement share sessions and context Key: KAFKA-16747 URL: https://issues.apache.org/jira/browse/KAFKA-16747 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Abhinav Dixit -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16748) Implement share response handling in SharePartitionManager
Apoorv Mittal created KAFKA-16748: - Summary: Implement share response handling in SharePartitionManager Key: KAFKA-16748 URL: https://issues.apache.org/jira/browse/KAFKA-16748 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Abhinav Dixit -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-16743) Add support for share group heartbeat in KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-16743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-16743: - Assignee: Abhinav Dixit > Add support for share group heartbeat in KafkaApis > -- > > Key: KAFKA-16743 > URL: https://issues.apache.org/jira/browse/KAFKA-16743 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Abhinav Dixit >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16746) Add support for share acknowledgement request in KafkaApis
Apoorv Mittal created KAFKA-16746: - Summary: Add support for share acknowledgement request in KafkaApis Key: KAFKA-16746 URL: https://issues.apache.org/jira/browse/KAFKA-16746 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Chirag Wadhwa -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16745) Add support for share fetch request in KafkaApis
Apoorv Mittal created KAFKA-16745: - Summary: Add support for share fetch request in KafkaApis Key: KAFKA-16745 URL: https://issues.apache.org/jira/browse/KAFKA-16745 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Chirag Wadhwa -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16744) Add support for share group describe in KafkaApis
Apoorv Mittal created KAFKA-16744: - Summary: Add support for share group describe in KafkaApis Key: KAFKA-16744 URL: https://issues.apache.org/jira/browse/KAFKA-16744 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16743) Add support for share group heartbeat in KafkaApis
Apoorv Mittal created KAFKA-16743: - Summary: Add support for share group heartbeat in KafkaApis Key: KAFKA-16743 URL: https://issues.apache.org/jira/browse/KAFKA-16743 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16741) Add ShareGroupHeartbeat API support in GroupCoordinator
Apoorv Mittal created KAFKA-16741: - Summary: Add ShareGroupHeartbeat API support in GroupCoordinator Key: KAFKA-16741 URL: https://issues.apache.org/jira/browse/KAFKA-16741 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16742) Add ShareGroupDescribe API support in GroupCoordinator
Apoorv Mittal created KAFKA-16742: - Summary: Add ShareGroupDescribe API support in GroupCoordinator Key: KAFKA-16742 URL: https://issues.apache.org/jira/browse/KAFKA-16742 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16740) Define skeleton for SharePartitionManager and SharePartition
Apoorv Mittal created KAFKA-16740: - Summary: Define skeleton for SharePartitionManager and SharePartition Key: KAFKA-16740 URL: https://issues.apache.org/jira/browse/KAFKA-16740 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal Add high level design for broker side implementation for fetching and acknowledging messages. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-15850) Improve KafkaMetrics APIs to expose Value Provider information
[ https://issues.apache.org/jira/browse/KAFKA-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17839657#comment-17839657 ] Apoorv Mittal commented on KAFKA-15850: --- This has been completed as part of KIP-1019. Resolving the issue. > Improve KafkaMetrics APIs to expose Value Provider information > -- > > Key: KAFKA-15850 > URL: https://issues.apache.org/jira/browse/KAFKA-15850 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > > Today KafkaMetric does not expose the metricValueProvider information, which > makes it hard to determine the provider type. In the KIP-714 implementation we > used reflection to access the information, but we would like to propose a > correct way of exposing the information. > Discussion thread: > https://github.com/apache/kafka/pull/14620#discussion_r1374059672 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16492) Flaky test: testAlterSinkConnectorOffsetsDifferentKafkaClusterTargeted – org.apache.kafka.connect.integration.OffsetsApiIntegrationTest
Apoorv Mittal created KAFKA-16492: - Summary: Flaky test: testAlterSinkConnectorOffsetsDifferentKafkaClusterTargeted – org.apache.kafka.connect.integration.OffsetsApiIntegrationTest Key: KAFKA-16492 URL: https://issues.apache.org/jira/browse/KAFKA-16492 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal Build: [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15680/1/tests/] {code:java} Errororg.opentest4j.AssertionFailedError: Condition not met within timeout 3. Sink connector consumer group offsets should catch up to the topic end offsets ==> expected: but was: Stacktraceorg.opentest4j.AssertionFailedError: Condition not met within timeout 3. Sink connector consumer group offsets should catch up to the topic end offsets ==> expected: but was:at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151) at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132) at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63) at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36) at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)at app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396) at app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444) at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393) at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377) at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:367) at app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.verifyExpectedSinkConnectorOffsets(OffsetsApiIntegrationTest.java:989) at app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.alterAndVerifySinkConnectorOffsets(OffsetsApiIntegrationTest.java:381) at 
app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testAlterSinkConnectorOffsetsDifferentKafkaClusterTargeted(OffsetsApiIntegrationTest.java:362) at java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base@11.0.16.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base@11.0.16.1/java.lang.reflect.Method.invoke(Method.java:566) at app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at app//org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at app//org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at app//org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)at app//org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at app//org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at app//org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at app//org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at app//org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at app//org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at app//org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)at 
app//org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at app//org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:112) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:40) at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:60) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:52)
[jira] [Created] (KAFKA-16491) Flaky test: randomClusterPerturbationsShouldConverge[rackAwareStrategy=balance_subtopology] – org.apache.kafka.streams.processor.internals.assignment.TaskAssignorConverg
Apoorv Mittal created KAFKA-16491: - Summary: Flaky test: randomClusterPerturbationsShouldConverge[rackAwareStrategy=balance_subtopology] – org.apache.kafka.streams.processor.internals.assignment.TaskAssignorConvergenceTest Key: KAFKA-16491 URL: https://issues.apache.org/jira/browse/KAFKA-16491 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal Build: https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15680/1/tests/ {code:java} Errorjava.lang.AssertionError: Assertion failed in randomized test. Reproduce with: `runRandomizedScenario(-3500059697111741230)`.Stacktracejava.lang.AssertionError: Assertion failed in randomized test. Reproduce with: `runRandomizedScenario(-3500059697111741230)`. at org.apache.kafka.streams.processor.internals.assignment.TaskAssignorConvergenceTest.runRandomizedScenario(TaskAssignorConvergenceTest.java:545) at org.apache.kafka.streams.processor.internals.assignment.TaskAssignorConvergenceTest.randomClusterPerturbationsShouldConverge(TaskAssignorConvergenceTest.java:479) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner.run(ParentRunner.java:413)at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413)at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at org.junit.runner.JUnitCore.run(JUnitCore.java:115) at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54){code} -- This message was 
sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16486) Integrate metric measurability changes in metrics collector
Apoorv Mittal created KAFKA-16486: - Summary: Integrate metric measurability changes in metrics collector Key: KAFKA-16486 URL: https://issues.apache.org/jira/browse/KAFKA-16486 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-16485) Fix broker metrics to follow kebab/hyphen case
[ https://issues.apache.org/jira/browse/KAFKA-16485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal updated KAFKA-16485: -- Parent: KAFKA-15601 Issue Type: Sub-task (was: Improvement) > Fix broker metrics to follow kebab/hyphen case > -- > > Key: KAFKA-16485 > URL: https://issues.apache.org/jira/browse/KAFKA-16485 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16485) Fix broker metrics to follow kebab/hyphen case
Apoorv Mittal created KAFKA-16485: - Summary: Fix broker metrics to follow kebab/hyphen case Key: KAFKA-16485 URL: https://issues.apache.org/jira/browse/KAFKA-16485 Project: Kafka Issue Type: Improvement Reporter: Apoorv Mittal Assignee: Apoorv Mittal -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-16397) Use ByteBufferOutputStream to avoid array copy
[ https://issues.apache.org/jira/browse/KAFKA-16397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829519#comment-17829519 ] Apoorv Mittal commented on KAFKA-16397: --- [~chia7712] Yes please, go ahead. > Use ByteBufferOutputStream to avoid array copy > -- > > Key: KAFKA-16397 > URL: https://issues.apache.org/jira/browse/KAFKA-16397 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Priority: Minor > > from https://github.com/apache/kafka/pull/15148#discussion_r1531889679 > source code: > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/telemetry/internals/ClientTelemetryUtils.java#L216 > we can use ByteBufferOutputStream to collect the uncompressed data, and then > return the inner buffer directly instead of copying full array. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-16397) Use ByteBufferOutputStream to avoid array copy
[ https://issues.apache.org/jira/browse/KAFKA-16397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-16397: - Assignee: Chia-Ping Tsai > Use ByteBufferOutputStream to avoid array copy > -- > > Key: KAFKA-16397 > URL: https://issues.apache.org/jira/browse/KAFKA-16397 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > > from https://github.com/apache/kafka/pull/15148#discussion_r1531889679 > source code: > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/telemetry/internals/ClientTelemetryUtils.java#L216 > we can use ByteBufferOutputStream to collect the uncompressed data, and then > return the inner buffer directly instead of copying full array. -- This message was sent by Atlassian Jira (v8.20.10#820010)
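The suggestion can be sketched with a simplified stand-in for ByteBufferOutputStream (not the real Kafka class): after writing, flip and hand out the internal buffer, whereas ByteArrayOutputStream.toByteArray() allocates and copies a fresh array on every call.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Simplified illustration of the proposed saving. The real change would use
// org.apache.kafka.common.utils.ByteBufferOutputStream; these helpers only
// contrast the two handoff styles.
public class BufferSketch {
    // Hand back the buffer we wrote into; flip() only adjusts position/limit,
    // so the caller reads the same backing array with no extra copy.
    public static ByteBuffer collectWithoutCopy(byte[] chunk) {
        ByteBuffer buf = ByteBuffer.allocate(chunk.length);
        buf.put(chunk);
        buf.flip();
        return buf;
    }

    // The pattern being replaced: toByteArray() copies the accumulated bytes
    // into a newly allocated array.
    public static byte[] collectWithCopy(byte[] chunk) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(chunk, 0, chunk.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        ByteBuffer buf = collectWithoutCopy(new byte[] {1, 2, 3});
        System.out.println(buf.remaining() + " bytes handed off without a copy");
    }
}
```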
[jira] [Commented] (KAFKA-16359) kafka-clients-3.7.0.jar published to Maven Central is defective
[ https://issues.apache.org/jira/browse/KAFKA-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826885#comment-17826885 ] Apoorv Mittal commented on KAFKA-16359: --- [~ijuma] here is the fix [https://github.com/apache/kafka/pull/15532]; the PR description explains why the code needs to be extracted from the plugin. > kafka-clients-3.7.0.jar published to Maven Central is defective > --- > > Key: KAFKA-16359 > URL: https://issues.apache.org/jira/browse/KAFKA-16359 > Project: Kafka > Issue Type: Bug > Components: clients >Affects Versions: 3.7.0 >Reporter: Jeremy Norris >Assignee: Apoorv Mittal >Priority: Critical > > The {{kafka-clients-3.7.0.jar}} that has been published to Maven Central is > defective: its {{META-INF/MANIFEST.MF}} bogusly includes a {{Class-Path}} > element: > {code} > Manifest-Version: 1.0 > Class-Path: zstd-jni-1.5.5-6.jar lz4-java-1.8.0.jar snappy-java-1.1.10 > .5.jar slf4j-api-1.7.36.jar > {code} > This bogus {{Class-Path}} element leads to compiler warnings for projects > that utilize it as a dependency: > {code} > [WARNING] [path] bad path element > ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/zstd-jni-1.5.5-6.jar": > no such file or directory > [WARNING] [path] bad path element > ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/lz4-java-1.8.0.jar": > no such file or directory > [WARNING] [path] bad path element > ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/snappy-java-1.1.10.5.jar": > no such file or directory > [WARNING] [path] bad path element > ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/slf4j-api-1.7.36.jar": > no such file or directory > {code} > Either the {{kafka-clients-3.7.0.jar}} needs to be respun and published > without the bogus {{Class-Path}} element in its {{META-INF/MANIFEST.MF}} or > a new release should be published that corrects this defect.
[jira] [Commented] (KAFKA-16359) kafka-clients-3.7.0.jar published to Maven Central is defective
[ https://issues.apache.org/jira/browse/KAFKA-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825772#comment-17825772 ] Apoorv Mittal commented on KAFKA-16359: --- Until a fix is released, the workaround below might suppress the warning (not a good practice, but a stopgap for now):
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    ...
    <compilerArgs>
      ...
      <arg>-Xlint:-path</arg>
      ...
    </compilerArgs>
  </configuration>
</plugin>
{code}
A similar shadow-jar issue was encountered by LaunchDarkly and others; reference: https://github.com/launchdarkly/java-server-sdk/issues/240
[jira] [Assigned] (KAFKA-16359) kafka-clients-3.7.0.jar published to Maven Central is defective
[ https://issues.apache.org/jira/browse/KAFKA-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal reassigned KAFKA-16359: - Assignee: Apoorv Mittal
[jira] [Commented] (KAFKA-16359) kafka-clients-3.7.0.jar published to Maven Central is defective
[ https://issues.apache.org/jira/browse/KAFKA-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825333#comment-17825333 ] Apoorv Mittal commented on KAFKA-16359: --- Hi [~norrisjeremy], thanks for reporting the issue. As [~gnarula] mentioned, this comes from the shadow plugin; I will look into the issue and come back with the right resolution.
[jira] [Created] (KAFKA-16280) Expose method to determine Metric Measurability
Apoorv Mittal created KAFKA-16280: - Summary: Expose method to determine Metric Measurability Key: KAFKA-16280 URL: https://issues.apache.org/jira/browse/KAFKA-16280 Project: Kafka Issue Type: Bug Components: metrics Affects Versions: 3.8.0 Reporter: Apoorv Mittal Assignee: Apoorv Mittal Fix For: 3.8.0 This Jira tracks the development of KIP-1019: [https://cwiki.apache.org/confluence/display/KAFKA/KIP-1019%3A+Expose+method+to+determine+Metric+Measurability]
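The shape of the idea behind KIP-1019 can be sketched with stand-in types. All names below are assumptions for illustration, not the real org.apache.kafka.common.metrics API: a metric exposes whether its value provider is a Measurable, so callers can check measurability instead of attempting a cast and catching the failure.

```java
// Illustrative stand-ins for Kafka's metric types; names and signatures are
// assumptions for this sketch, not the real kafka-clients API.
interface MetricValueProvider<T> { }

interface Gauge<T> extends MetricValueProvider<T> { T value(); }

interface Measurable extends MetricValueProvider<Double> { double measure(); }

final class KafkaMetricSketch {
    private final MetricValueProvider<?> provider;

    KafkaMetricSketch(MetricValueProvider<?> provider) {
        this.provider = provider;
    }

    // The core idea of KIP-1019: let callers ask whether the metric is
    // measurable before requesting its Measurable view.
    boolean isMeasurable() {
        return provider instanceof Measurable;
    }

    Measurable measurable() {
        if (!isMeasurable())
            throw new IllegalStateException("Metric is not a Measurable");
        return (Measurable) provider;
    }
}
```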
[jira] [Commented] (KAFKA-15772) Flaky test TransactionsWithTieredStoreTest
[ https://issues.apache.org/jira/browse/KAFKA-15772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812485#comment-17812485 ] Apoorv Mittal commented on KAFKA-15772: --- Failure of test: `testAbortTransactionTimeout` in `TransactionsWithTieredStoreTest` class https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15251/7/tests {code:java} Errororg.apache.kafka.common.errors.TimeoutException: Timeout expired after 3000ms while awaiting InitProducerIdStacktraceorg.apache.kafka.common.errors.TimeoutException: Timeout expired after 3000ms while awaiting InitProducerIdStandard Output[2024-01-30 16:29:01,250] INFO [LocalTieredStorage Id=0] Creating directory: [/tmp/kafka-remote-tier-transactionswithtieredstoretest11967450412731752897/kafka-tiered-storage] (org.apache.kafka.server.log.remote.storage.LocalTieredStorage:289)[2024-01-30 16:29:01,250] INFO [LocalTieredStorage Id=0] Created local tiered storage manager [0]:[kafka-tiered-storage] (org.apache.kafka.server.log.remote.storage.LocalTieredStorage:301)[2024-01-30 16:29:01,251] INFO Started configuring topic-based RLMM with configs: {remote.log.metadata.topic.replication.factor=3, remote.log.metadata.topic.num.partitions=3, remote.log.metadata.common.client.bootstrap.servers=localhost:40061, broker.id=0, remote.log.metadata.initialization.retry.interval.ms=300, remote.log.metadata.common.client.security.protocol=PLAINTEXT, cluster.id=z_bOu1YoRbKNNIThjztsdA, log.dir=/tmp/kafka-6827936654389854503} (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:358)[2024-01-30 16:29:01,251] INFO Successfully configured topic-based RLMM with config: TopicBasedRemoteLogMetadataManagerConfig{clientIdPrefix='__remote_log_metadata_client_0', metadataTopicPartitionsCount=3, consumeWaitMs=12, metadataTopicRetentionMs=-1, metadataTopicReplicationFactor=3, initializationRetryMaxTimeoutMs=12, initializationRetryIntervalMs=300, 
commonProps={security.protocol=PLAINTEXT, bootstrap.servers=localhost:40061}, consumerProps={security.protocol=PLAINTEXT, key.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, value.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, enable.auto.commit=false, bootstrap.servers=localhost:40061, exclude.internal.topics=false, auto.offset.reset=earliest, client.id=__remote_log_metadata_client_0_consumer}, producerProps={security.protocol=PLAINTEXT, enable.idempotence=true, value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer, acks=all, bootstrap.servers=localhost:40061, key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer, client.id=__remote_log_metadata_client_0_producer}} (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:364)[2024-01-30 16:29:01,252] INFO Initializing topic-based RLMM resources (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:377)[2024-01-30 16:29:01,363] INFO Topic __remote_log_metadata does not exist. Error: This server does not host this topic-partition. (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:466)[2024-01-30 16:29:01,366] ERROR Encountered error while creating __remote_log_metadata topic. (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:528)java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1. 
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165) at org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager.createTopic(TopicBasedRemoteLogMetadataManager.java:509) at org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager.initializeResources(TopicBasedRemoteLogMetadataManager.java:396) at java.base/java.lang.Thread.run(Thread.java:833)Caused by: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.[2024-01-30 16:29:01,366] INFO Sleep for 300 ms before it is retried again. (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:401)[2024-01-30 16:29:01,538] WARN [LocalTieredStorage Id=1] Remote storage with ID [/tmp/kafka-remote-tier-transactionswithtieredstoretest11967450412731752897] already exists on the file system. Any data already in the remote storage will not be deleted and may result in an inconsistent state and/or provide stale data. (org.apache.kafka.server.log.remote
[jira] [Resolved] (KAFKA-15683) Delete subscription from metadata when all configs are deleted
[ https://issues.apache.org/jira/browse/KAFKA-15683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-15683. --- Resolution: Not A Problem Closing the ticket as this change is not required; it works as per the default behaviour of kafka-configs.sh. Moreover, ClientMetricsManager deletes the in-memory subscription/client resources once all properties for the respective resource are removed. > Delete subscription from metadata when all configs are deleted > -- > > Key: KAFKA-15683 > URL: https://issues.apache.org/jira/browse/KAFKA-15683 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > > As of now kafka-configs.sh does not differentiate between a non-existent and a blank metrics subscription. Add support to differentiate the two scenarios and also delete the subscription once all configs are deleted for the respective subscription.
[jira] [Created] (KAFKA-16187) Flaky test: testTopicPatternArg - org.apache.kafka.tools.GetOffsetShellTest
Apoorv Mittal created KAFKA-16187: - Summary: Flaky test: testTopicPatternArg - org.apache.kafka.tools.GetOffsetShellTest Key: KAFKA-16187 URL: https://issues.apache.org/jira/browse/KAFKA-16187 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15234/8/tests/] {code:java} Errororg.opentest4j.AssertionFailedError: expected: <[org.apache.kafka.tools.GetOffsetShellTest$Row@c6f09cc2, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a084, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a0a3, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a446, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a465, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a484, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a808, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a827, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a846, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a865]> but was: <[org.apache.kafka.tools.GetOffsetShellTest$Row@c6f09cc2, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a084, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a446, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a808]>Stacktraceorg.opentest4j.AssertionFailedError: expected: <[org.apache.kafka.tools.GetOffsetShellTest$Row@c6f09cc2, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a084, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a0a3, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a446, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a465, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a484, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a808, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a827, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a846, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a865]> but was: <[org.apache.kafka.tools.GetOffsetShellTest$Row@c6f09cc2, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a084, 
org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a446, org.apache.kafka.tools.GetOffsetShellTest$Row@c6f0a808]> at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151) at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132) at app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197) at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182) at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177) at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141) at app//org.apache.kafka.tools.GetOffsetShellTest.testTopicPatternArg(GetOffsetShellTest.java:154) at java.base@21.0.1/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) at java.base@21.0.1/java.lang.reflect.Method.invoke(Method.java:580)at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728) at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156) at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147) at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86) at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218) at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.
[jira] [Created] (KAFKA-16186) Implement broker metrics for client telemetry usage
Apoorv Mittal created KAFKA-16186: - Summary: Implement broker metrics for client telemetry usage Key: KAFKA-16186 URL: https://issues.apache.org/jira/browse/KAFKA-16186 Project: Kafka Issue Type: Sub-task Reporter: Apoorv Mittal Assignee: Apoorv Mittal KIP-714 lists new broker metrics that record the usage of client telemetry instances and the plugin. Implement the broker metrics as defined in KIP-714.
[jira] [Created] (KAFKA-16176) Flaky test: testSendToPartitionWithFollowerShutdownShouldNotTimeout – kafka.api.PlaintextProducerSendTest
Apoorv Mittal created KAFKA-16176: - Summary: Flaky test: testSendToPartitionWithFollowerShutdownShouldNotTimeout – kafka.api.PlaintextProducerSendTest Key: KAFKA-16176 URL: https://issues.apache.org/jira/browse/KAFKA-16176 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/] {code:java} Errororg.opentest4j.AssertionFailedError: Consumed 0 records before timeout instead of the expected 100 recordsStacktraceorg.opentest4j.AssertionFailedError: Consumed 0 records before timeout instead of the expected 100 records at app//org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:38) at app//org.junit.jupiter.api.Assertions.fail(Assertions.java:134) at app//kafka.utils.TestUtils$.pollUntilAtLeastNumRecords(TestUtils.scala:1142) at app//kafka.utils.TestUtils$.consumeRecords(TestUtils.scala:1869) at app//kafka.api.BaseProducerSendTest.testSendToPartitionWithFollowerShutdownShouldNotTimeout(BaseProducerSendTest.scala:398) at java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base@11.0.16.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base@11.0.16.1/java.lang.reflect.Method.invoke(Method.java:566) at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728) at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156) at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147) at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16177) Flaky test: testBatchSizeZeroNoPartitionNoRecordKey – kafka.api.PlaintextProducerSendTest
Apoorv Mittal created KAFKA-16177: - Summary: Flaky test: testBatchSizeZeroNoPartitionNoRecordKey – kafka.api.PlaintextProducerSendTest Key: KAFKA-16177 URL: https://issues.apache.org/jira/browse/KAFKA-16177 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/] {code:java} Errorjava.lang.RuntimeException: Received a fatal error while waiting for the controller to acknowledge that we are caught upStacktracejava.lang.RuntimeException: Received a fatal error while waiting for the controller to acknowledge that we are caught up at org.apache.kafka.server.util.FutureUtils.waitWithLogging(FutureUtils.java:68) at kafka.server.BrokerServer.startup(BrokerServer.scala:473)at kafka.integration.KafkaServerTestHarness.$anonfun$createBrokers$1(KafkaServerTestHarness.scala:355) at kafka.integration.KafkaServerTestHarness.$anonfun$createBrokers$1$adapted(KafkaServerTestHarness.scala:351) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at kafka.integration.KafkaServerTestHarness.createBrokers(KafkaServerTestHarness.scala:351) at kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:120) at kafka.api.BaseProducerSendTest.setUp(BaseProducerSendTest.scala:70) at jdk.internal.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156) at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16175) Flaky test: testAsynchronousAuthorizerAclUpdatesDontBlockRequestThreads – kafka.api.SslAdminIntegrationTest
Apoorv Mittal created KAFKA-16175: - Summary: Flaky test: testAsynchronousAuthorizerAclUpdatesDontBlockRequestThreads – kafka.api.SslAdminIntegrationTest Key: KAFKA-16175 URL: https://issues.apache.org/jira/browse/KAFKA-16175 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/] {code:java} Errororg.opentest4j.AssertionFailedError: expected: <24> but was: <32>Stacktraceorg.opentest4j.AssertionFailedError: expected: <24> but was: <32> at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151) at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132) at app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197) at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150) at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145) at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:527) at app//kafka.api.SslAdminIntegrationTest.blockedRequestThreads(SslAdminIntegrationTest.scala:243) at app//kafka.api.SslAdminIntegrationTest.$anonfun$waitForNoBlockedRequestThreads$1(SslAdminIntegrationTest.scala:250) at app//kafka.api.SslAdminIntegrationTest.waitForNoBlockedRequestThreads(SslAdminIntegrationTest.scala:250) at app//kafka.api.SslAdminIntegrationTest.testAsynchronousAuthorizerAclUpdatesDontBlockRequestThreads(SslAdminIntegrationTest.scala:176) at java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base@11.0.16.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base@11.0.16.1/java.lang.reflect.Method.invoke(Method.java:566) at 
app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728){code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16174) Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest
Apoorv Mittal created KAFKA-16174: - Summary: Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest Key: KAFKA-16174 URL: https://issues.apache.org/jira/browse/KAFKA-16174 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/] {code:java} Errorjava.util.concurrent.ExecutionException: java.lang.RuntimeException: Received a fatal error while waiting for the controller to acknowledge that we are caught upStacktracejava.util.concurrent.ExecutionException: java.lang.RuntimeException: Received a fatal error while waiting for the controller to acknowledge that we are caught up at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421) at kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:116) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:192) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:191) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:137) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226) at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16173) Flaky test: testTimeoutMetrics – org.apache.kafka.controller.QuorumControllerMetricsIntegrationTest
Apoorv Mittal created KAFKA-16173: - Summary: Flaky test: testTimeoutMetrics – org.apache.kafka.controller.QuorumControllerMetricsIntegrationTest Key: KAFKA-16173 URL: https://issues.apache.org/jira/browse/KAFKA-16173 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/] {code:java} Errororg.apache.kafka.server.fault.FaultHandlerException: fatalFaultHandler: exception while renouncing leadership: Attempt to resign from epoch 1 which is larger than the current epoch 0Stacktraceorg.apache.kafka.server.fault.FaultHandlerException: fatalFaultHandler: exception while renouncing leadership: Attempt to resign from epoch 1 which is larger than the current epoch 0 at app//org.apache.kafka.metalog.LocalLogManager.resign(LocalLogManager.java:808) at app//org.apache.kafka.controller.QuorumController.renounce(QuorumController.java:1270) at app//org.apache.kafka.controller.QuorumController.handleEventException(QuorumController.java:547) at app//org.apache.kafka.controller.QuorumController.access$800(QuorumController.java:179) at app//org.apache.kafka.controller.QuorumController$ControllerWriteEvent.complete(QuorumController.java:881) at app//org.apache.kafka.controller.QuorumController$ControllerWriteEvent.handleException(QuorumController.java:871) at app//org.apache.kafka.queue.KafkaEventQueue$EventContext.completeWithException(KafkaEventQueue.java:148) at app//org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:137) at app//org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210) at app//org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181) at java.base@21.0.1/java.lang.Thread.run(Thread.java:1583)Caused by: java.lang.IllegalArgumentException: Attempt to resign from epoch 1 which is larger than the current epoch 0... 11 more {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16172) Flaky test: testProducerConsumerOverrideLowerQuota – kafka.api.UserQuotaTest
Apoorv Mittal created KAFKA-16172: - Summary: Flaky test: testProducerConsumerOverrideLowerQuota – kafka.api.UserQuotaTest Key: KAFKA-16172 URL: https://issues.apache.org/jira/browse/KAFKA-16172 Project: Kafka Issue Type: Bug Reporter: Apoorv Mittal https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/
{code:java}
Error
org.opentest4j.AssertionFailedError: Client with id=QuotasTestProducer-1 should have been throttled, 0.0 ==> expected: but was:
Stacktrace
org.opentest4j.AssertionFailedError: Client with id=QuotasTestProducer-1 should have been throttled, 0.0 ==> expected: but was:
	at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
	at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
	at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
	at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
	at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)
	at app//kafka.api.QuotaTestClients.verifyThrottleTimeRequestChannelMetric(BaseQuotaTest.scala:261)
	at app//kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:270)
	at app//kafka.api.BaseQuotaTest.testProducerConsumerOverrideLowerQuota(BaseQuotaTest.scala:133)
	at java.base@21.0.1/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base@21.0.1/java.lang.reflect.Method.invoke(Method.java:580)
	at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
	at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
	at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
	at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94)
	at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
	at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
	at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
	at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
	at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218)
	at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:214)
{code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-15807) Add support for compression/decompression of metrics
[ https://issues.apache.org/jira/browse/KAFKA-15807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Apoorv Mittal resolved KAFKA-15807. --- Resolution: Done > Add support for compression/decompression of metrics > > > Key: KAFKA-15807 > URL: https://issues.apache.org/jira/browse/KAFKA-15807 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-15863) Handle push telemetry throttling with quota manager
[ https://issues.apache.org/jira/browse/KAFKA-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807315#comment-17807315 ] Apoorv Mittal commented on KAFKA-15863: --- [~junrao] I looked at the code, your suggestion, and KIP-714. It seems to me that the KIP wants two throttle checks: 1) one at the request level with generic throttling, and 2) one specific to the time interval defined for push. I don't think we need another quota defined specifically for client telemetry, as the intent is to throttle rogue clients, which the default client request quota already handles. I tested the behaviour by setting a low client request quota for the `producer-1` client id, and can see throttling successfully applied to push telemetry requests. I am marking the ticket as resolved; please let me know if I am missing something. cc: [~schofielaj].
{code:java}
[APM] Throttle check for : producer-1, time: 100.0890606
[APM] Checking quota for metric MetricName [name=request-time, group=Request, description=Tracking request-time per user/client-id, tags={user=, client-id=producer-1}] with value 10.00890606
[APM] Non-TokenBucket quota for metric MetricName [name=request-time, group=Request, description=Tracking request-time per user/client-id, tags={user=, client-id=producer-1}] with value 10.00890606
[APM] Quota.acceptable: 10.00890606 1.0 true
[2024-01-16 15:42:39,350] INFO Quota violated for sensor (Request-:producer-1). Delay time: (1000) (kafka.server.ClientRequestQuotaManager)
[2024-01-16 15:42:39,350] INFO Channel throttled for sensor (Request-:producer-1). Delay time: (1000) (kafka.server.ClientRequestQuotaManager)
[APM] Throttling response for client producer-1 on broker is 1000ms, response is PushTelemetryResponseData(throttleTimeMs=1000, errorCode=0)
{code}
> Handle push telemetry throttling with quota manager > --- > > Key: KAFKA-15863 > URL: https://issues.apache.org/jira/browse/KAFKA-15863 > Project: Kafka > Issue Type: Sub-task >Reporter: Apoorv Mittal >Assignee: Apoorv Mittal >Priority: Major > > Details: https://github.com/apache/kafka/pull/14699#discussion_r1399714279 -- This message was sent by Atlassian Jira (v8.20.10#820010)
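For reference, the low client request quota described in the comment above can be set with Kafka's standard quota tooling. A minimal sketch, assuming a broker at localhost:9092 and a client configured with client.id=producer-1 (the quota value of 1% is illustrative, chosen low enough to trigger throttling):

{code}
# Assign a low request-percentage quota to the client id "producer-1";
# request_percentage is the standard per-client request quota config key.
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type clients --entity-name producer-1 \
  --add-config 'request_percentage=1'

# Verify the quota override took effect
bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe \
  --entity-type clients --entity-name producer-1
{code}

Once the quota is violated, responses (including PushTelemetryResponse, as shown in the log above) carry a non-zero throttleTimeMs.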