[jira] [Created] (KAFKA-17136) Added Documentation for Shell Methods

2024-07-14 Thread Arnav Dadarya (Jira)
Arnav Dadarya created KAFKA-17136:
-

 Summary: Added Documentation for Shell Methods
 Key: KAFKA-17136
 URL: https://issues.apache.org/jira/browse/KAFKA-17136
 Project: Kafka
  Issue Type: Task
  Components: docs
Reporter: Arnav Dadarya


Added documentation for the longer (and harder-to-follow) methods in the shell 
programs

GitHub Pull Request:



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] KAFKA-17136 Added Documentation for Shell Methods [kafka]

2024-07-14 Thread via GitHub


ardada2468 opened a new pull request, #16594:
URL: https://github.com/apache/kafka/pull/16594

   Added documentation to make the longer and more complex methods easier to 
understand in case they need to be changed in the future. 
   
   No code was modified; only docs were added.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-16905) Thread block in describe topics API in Admin Client

2024-07-14 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17865897#comment-17865897
 ] 

Josep Prat commented on KAFKA-16905:


Backport PR can be found here: https://github.com/apache/kafka/pull/16593/files

> Thread block in describe topics API in Admin Client
> ---
>
> Key: KAFKA-16905
> URL: https://issues.apache.org/jira/browse/KAFKA-16905
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.8.0, 3.9.0
>Reporter: Apoorv Mittal
>Assignee: Apoorv Mittal
>Priority: Major
> Fix For: 3.9.0
>
>
> The thread blocks while running the admin client's describe topics API.
>  
> {code:java}
> "kafka-admin-client-thread | adminclient-3" #114 daemon prio=5 os_prio=31 
> cpu=6.57ms elapsed=417.17s tid=0x0001364fc200 nid=0x13403 waiting on 
> condition  [0x0002bb419000]
>java.lang.Thread.State: WAITING (parking)
>   at jdk.internal.misc.Unsafe.park(java.base@17.0.7/Native Method)
>   - parking to wait for  <0x000773804828> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at 
> java.util.concurrent.locks.LockSupport.park(java.base@17.0.7/LockSupport.java:211)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(java.base@17.0.7/CompletableFuture.java:1864)
>   at 
> java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.7/ForkJoinPool.java:3463)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.7/ForkJoinPool.java:3434)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(java.base@17.0.7/CompletableFuture.java:1898)
>   at 
> java.util.concurrent.CompletableFuture.get(java.base@17.0.7/CompletableFuture.java:2072)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.handleDescribeTopicsByNamesWithDescribeTopicPartitionsApi(KafkaAdminClient.java:2324)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.describeTopics(KafkaAdminClient.java:2122)
>   at org.apache.kafka.clients.admin.Admin.describeTopics(Admin.java:311)
>   at 
> io.confluent.kafkarest.controllers.TopicManagerImpl.describeTopics(TopicManagerImpl.java:155)
>   at 
> io.confluent.kafkarest.controllers.TopicManagerImpl.lambda$listTopics$2(TopicManagerImpl.java:76)
>   at 
> io.confluent.kafkarest.controllers.TopicManagerImpl$$Lambda$1925/0x000800891448.apply(Unknown
>  Source)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(java.base@17.0.7/CompletableFuture.java:1150)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(java.base@17.0.7/CompletableFuture.java:510)
>   at 
> java.util.concurrent.CompletableFuture.complete(java.base@17.0.7/CompletableFuture.java:2147)
>   at 
> io.confluent.kafkarest.common.KafkaFutures.lambda$toCompletableFuture$0(KafkaFutures.java:45)
>   at 
> io.confluent.kafkarest.common.KafkaFutures$$Lambda$1909/0x000800897528.accept(Unknown
>  Source)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$$Lambda$1910/0x000800897750.accept(Unknown
>  Source)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(java.base@17.0.7/CompletableFuture.java:863)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(java.base@17.0.7/CompletableFuture.java:841)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(java.base@17.0.7/CompletableFuture.java:510)
>   at 
> java.util.concurrent.CompletableFuture.complete(java.base@17.0.7/CompletableFuture.java:2147)
>   at 
> org.apache.kafka.common.internals.KafkaCompletableFuture.kafkaComplete(KafkaCompletableFuture.java:39)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.complete(KafkaFutureImpl.java:122)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$4.handleResponse(KafkaAdminClient.java:2106)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1370)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1523)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1446)
>   at java.lang.Thread.run(java.base@17.0.7/Thread.java:833) {code}
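The trace above shows the admin client's own network thread parked in `KafkaFutureImpl.get`: a response callback running on the single I/O thread blocks on a future that only that same thread could complete. A minimal self-contained sketch of this pattern (illustrative only, not Kafka code):

```java
import java.util.concurrent.*;

// Sketch of the deadlock pattern: a single event-loop thread completes
// future A; A's dependent stage then blocks on future B, but B can only
// be completed by that same thread, which is now stuck.
public class SingleThreadBlockingGet {
    public static void main(String[] args) throws Exception {
        ExecutorService ioThread = Executors.newSingleThreadExecutor();
        CompletableFuture<String> a = new CompletableFuture<>();
        CompletableFuture<String> b = new CompletableFuture<>();

        // The dependent stage runs on the completing (I/O) thread and blocks on b.
        CompletableFuture<String> result = a.thenApply(v -> {
            try {
                return b.get();            // blocks the only I/O thread
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });

        ioThread.submit(() -> a.complete("responseA")); // completes a, then blocks in thenApply
        ioThread.submit(() -> b.complete("responseB")); // never runs: the thread is stuck

        boolean deadlocked = false;
        try {
            result.get(500, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            deadlocked = true;             // the dependent stage never finished
        }
        System.out.println("deadlock detected: " + deadlocked);
        ioThread.shutdownNow();
    }
}
```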





[PR] Backport kafka admin client bug [kafka]

2024-07-14 Thread via GitHub


jlprat opened a new pull request, #16593:
URL: https://github.com/apache/kafka/pull/16593

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] KAFKA-16057 : Change the configuration for adminclient connections.max.idle.ms to 9 minutes [kafka]

2024-07-14 Thread via GitHub


Nancy-ksolves commented on PR #16548:
URL: https://github.com/apache/kafka/pull/16548#issuecomment-2227690511

   @mimaison Confluence account has not yet been created.
   
   Here's the comment link - 
https://issues.apache.org/jira/browse/INFRA-25451?focusedCommentId=17864196&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17864196
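For context, the setting named in the PR title can be sketched as a client configuration fragment (values illustrative; 9 minutes = 540000 ms):

```properties
# Hypothetical admin client configuration illustrating the setting
# discussed in the PR title.
bootstrap.servers=localhost:9092
connections.max.idle.ms=540000
```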





[jira] [Commented] (KAFKA-17076) logEndOffset could be lost due to log cleaning

2024-07-14 Thread Haruki Okada (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17865823#comment-17865823
 ] 

Haruki Okada commented on KAFKA-17076:
--

[~junrao] Is that possible?

At step 2 in your scenario, I think truncation doesn't happen unless at least 
one record is returned from the Fetch response (because of 
https://github.com/apache/kafka/pull/9382), so an empty active segment is not 
possible in my understanding.
refs: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum#KIP595:ARaftProtocolfortheMetadataQuorum-Fetch

> logEndOffset could be lost due to log cleaning
> --
>
> Key: KAFKA-17076
> URL: https://issues.apache.org/jira/browse/KAFKA-17076
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Priority: Major
>
> It's possible for the log cleaner to remove all records in the suffix of the 
> log. If the partition is then reassigned, the new replica won't be able to 
> see the true logEndOffset since there is no record batch associated with it. 
> If this replica becomes the leader, it will assign an already used offset to 
> a newly produced record, which is incorrect.
>  
> It's relatively rare to trigger this issue since the active segment is never 
> cleaned and typically is not empty. However, the following is one possibility.
>  # records with offset 100-110 are produced and fully replicated to all ISR. 
> All those records are delete records for certain keys.
>  # record with offset 111 is produced. It forces the roll of a new segment in 
> broker b1 and is added to the log. The record is not committed and is later 
> truncated from the log, leaving an empty active segment in this log. b1 at 
> some point becomes the leader.
>  # log cleaner kicks in and removes records 100-110.
>  # The partition is reassigned to another broker b2. b2 replicates all 
> records from b1 up to offset 100 and marks its logEndOffset at 100. Since 
> there is no record to replicate after offset 100 in b1, b2's logEndOffset 
> stays at 100 and b2 can join the ISR.
>  # b2 becomes the leader and assigns offset 100 to a new record.





[jira] [Resolved] (KAFKA-17129) Revisit FindCoordinatorResponse in KafkaConsumerTest

2024-07-14 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17129.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Revisit FindCoordinatorResponse in KafkaConsumerTest
> 
>
> Key: KAFKA-17129
> URL: https://issues.apache.org/jira/browse/KAFKA-17129
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: PoAn Yang
>Assignee: PoAn Yang
>Priority: Major
> Fix For: 3.9.0
>
>
> Currently, we have many test cases that put 
> `client.prepareResponseFrom(FindCoordinatorResponse.prepareResponse(...), 
> ...);` after `newConsumer`. If the `FutureResponse` is not in `MockClient`, the 
> request can't get a response. This may cause some flaky tests.
>  
> In our KafkaConsumerTest design, when starting a `newConsumer` for 
> `AsyncKafkaConsumer`, it always sends `FindCoordinatorRequest`.
>  
> In `MockClient#send`, if a `FutureResponse` is missing, the request will be 
> added to `requests`. Even if `client.prepareResponseFrom` adds a new 
> `FutureResponse`, it can't be matched to an existing request, so the request 
> can't get a response.
>  
> It's better to put 
> `client.prepareResponseFrom(FindCoordinatorResponse.prepareResponse(...), 
> ...);` before `newConsumer`, so we don't miss any `FindCoordinatorRequest`.
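The ordering issue can be modeled with a tiny simplified mock (illustrative only, not Kafka's actual `MockClient`): a response prepared before `send()` is matched immediately, while one prepared after lands in a queue that the already-pending request never consults.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simplified model of the MockClient ordering problem described above.
public class MockClientOrdering {
    private final Queue<String> futureResponses = new ArrayDeque<>();
    private final Queue<String> pendingRequests = new ArrayDeque<>();

    void prepareResponse(String response) {
        futureResponses.add(response);
    }

    // Returns the matched response, or null if the request is left pending.
    String send(String request) {
        String response = futureResponses.poll();
        if (response == null) {
            pendingRequests.add(request); // parked; never re-matched later
            return null;
        }
        return response;
    }

    public static void main(String[] args) {
        // Anti-pattern: send first, prepare later -> request never answered.
        MockClientOrdering late = new MockClientOrdering();
        String r1 = late.send("FindCoordinatorRequest");
        late.prepareResponse("FindCoordinatorResponse");
        System.out.println("prepared after send -> " + r1);

        // Fix: prepare the response before the consumer sends the request.
        MockClientOrdering early = new MockClientOrdering();
        early.prepareResponse("FindCoordinatorResponse");
        String r2 = early.send("FindCoordinatorRequest");
        System.out.println("prepared before send -> " + r2);
    }
}
```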





Re: [PR] KAFKA-17129: Revisit FindCoordinatorResponse in KafkaConsumerTest [kafka]

2024-07-14 Thread via GitHub


chia7712 merged PR #16589:
URL: https://github.com/apache/kafka/pull/16589





[jira] [Assigned] (KAFKA-17135) Add unit test for `ProducerStateManager#readSnapshot` and `ProducerStateManager#writeSnapshot`

2024-07-14 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-17135:
--

Assignee: Kuan Po Tseng

> Add unit test for `ProducerStateManager#readSnapshot` and 
> `ProducerStateManager#writeSnapshot`
> --
>
> Key: KAFKA-17135
> URL: https://issues.apache.org/jira/browse/KAFKA-17135
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
>
> We are going to introduce generated code to `ProducerStateManager`, so it 
> would be nice to increase the test coverage beforehand.





Re: [PR] KAFKA-17056 Convert producer state metadata schemas to use generated protocol [kafka]

2024-07-14 Thread via GitHub


chia7712 commented on code in PR #16592:
URL: https://github.com/apache/kafka/pull/16592#discussion_r1677233432


##
storage/src/main/java/org/apache/kafka/storage/internals/log/ProducerStateManager.java:
##
@@ -646,70 +616,69 @@ public Optional 
removeAndMarkSnapshotForDeletion(long snapshotOffs
 }
 
 public static List readSnapshot(File file) throws 
IOException {

Review Comment:
   It seems we don't have a unit test for this method. We should add some tests 
for `readSnapshot` and `writeSnapshot` before introducing the generated code.
   
   I have filed https://issues.apache.org/jira/browse/KAFKA-17135 for you, 
@brandboat; please feel free to un-assign it if you have no bandwidth.



##
storage/src/main/java/org/apache/kafka/storage/internals/log/ProducerStateManager.java:
##
@@ -74,37 +71,10 @@ public class ProducerStateManager {
 
 public static final long LATE_TRANSACTION_BUFFER_MS = 5 * 60 * 1000;
 
-private static final short PRODUCER_SNAPSHOT_VERSION = 1;

Review Comment:
   @brandboat could you add a temporary unit test to verify the compatibility? 
We can move the old struct-based code into the test sources and then generate 
some random data for comparison. 
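The round-trip check suggested above can be sketched generically (names and the on-disk format here are illustrative, not Kafka's actual snapshot schema): write entries, read them back, and assert the decoded result equals the original.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Generic sketch of a write/read round-trip test with random data.
public class SnapshotRoundTrip {
    static void writeSnapshot(File file, List<Long> entries) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(file))) {
            out.writeInt(entries.size());          // entry count header
            for (long e : entries) out.writeLong(e);
        }
    }

    static List<Long> readSnapshot(File file) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(file))) {
            int n = in.readInt();
            List<Long> entries = new ArrayList<>(n);
            for (int i = 0; i < n; i++) entries.add(in.readLong());
            return entries;
        }
    }

    public static void main(String[] args) throws IOException {
        File file = Files.createTempFile("snapshot", ".bin").toFile();
        file.deleteOnExit();
        // Random data, as suggested in the review comment; fixed seed for repeatability.
        Random rnd = new Random(42);
        List<Long> original = new ArrayList<>();
        for (int i = 0; i < 100; i++) original.add(rnd.nextLong());
        writeSnapshot(file, original);
        List<Long> decoded = readSnapshot(file);
        System.out.println("round-trip equal: " + original.equals(decoded));
    }
}
```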






[jira] [Created] (KAFKA-17135) Add unit test for `ProducerStateManager#readSnapshot` and `ProducerStateManager#writeSnapshot`

2024-07-14 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17135:
--

 Summary: Add unit test for `ProducerStateManager#readSnapshot` and 
`ProducerStateManager#writeSnapshot`
 Key: KAFKA-17135
 URL: https://issues.apache.org/jira/browse/KAFKA-17135
 Project: Kafka
  Issue Type: Test
Reporter: Chia-Ping Tsai


We are going to introduce generated code to `ProducerStateManager`, so it would 
be nice to increase the test coverage beforehand.





[jira] [Commented] (KAFKA-8830) KIP-512: Adding headers to RecordMetaData

2024-07-14 Thread Rich (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17865818#comment-17865818
 ] 

Rich commented on KAFKA-8830:
-

Hi [~rmetukuru] , thank you for your original proposal for KIP-512 regarding 
Record Headers. I appreciate the work you've done on this important feature.

I am interested in resurrecting KIP-512 and expanding on it by providing more 
detailed information on why this feature is essential. Would it be okay for me 
to take over and edit the KIP accordingly?

my new discussion thread about WHY: 
[https://lists.apache.org/thread/5pjtk3nykb5l1vx0wc7o11rdrg3h0bg5]

 

> KIP-512: Adding headers to RecordMetaData
> -
>
> Key: KAFKA-8830
> URL: https://issues.apache.org/jira/browse/KAFKA-8830
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Renuka Metukuru
>Priority: Minor
>
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3AAdding+headers+to+RecordMetaData]





Re: [PR] KAFKA-16666: Migrate `TransactionLogMessageFormatter` to tools module [kafka]

2024-07-14 Thread via GitHub


chia7712 commented on code in PR #16019:
URL: https://github.com/apache/kafka/pull/16019#discussion_r1677223413


##
tools/src/test/java/org/apache/kafka/tools/consumer/ConsoleConsumerTest.java:
##
@@ -239,4 +296,130 @@ public void shouldWorkWithoutTopicOption() throws 
IOException {
 verify(mockConsumer).subscribe(any(Pattern.class));
 consumer.cleanup();
 }
+
+@ClusterTest(types = {Type.KRAFT, Type.CO_KRAFT}, brokers = 3)
+public void testTransactionLogMessageFormatter() throws Exception {
+try (Admin admin = cluster.createAdminClient()) {
+
+List testcases = generateTestcases();

Review Comment:
   This test does NOT include the scenario we care about.
   
   1. we should verify the topic `__transaction_state` has data
   2. we should verify the new formatter can parse the data of 
`__transaction_state`
   3. we can use arbitrary data when producing the transaction, since all we 
want to check is the data in `__transaction_state`



##
tools/src/test/java/org/apache/kafka/tools/consumer/ConsoleConsumerTest.java:
##
@@ -239,4 +296,130 @@ public void shouldWorkWithoutTopicOption() throws 
IOException {
 verify(mockConsumer).subscribe(any(Pattern.class));
 consumer.cleanup();
 }
+
+@ClusterTest(types = {Type.KRAFT, Type.CO_KRAFT}, brokers = 3)

Review Comment:
   we should check all types rather than only kraft






[jira] [Created] (KAFKA-17134) Restarting a server (same JVM) configured for OAUTHBEARER fails with RejectedExecutionException

2024-07-14 Thread Keith Wall (Jira)
Keith Wall created KAFKA-17134:
--

 Summary: Restarting a server (same JVM) configured for OAUTHBEARER 
fails with RejectedExecutionException
 Key: KAFKA-17134
 URL: https://issues.apache.org/jira/browse/KAFKA-17134
 Project: Kafka
  Issue Type: Bug
Reporter: Keith Wall


If you programmatically restart a server (3.7.1) configured for OAUTHBEARER 
{*}within the same JVM{*}, the startup attempt fails with the stack trace given 
below.

The issue is that a closed {{VerificationKeyResolver}} gets left behind in 
{{OAuthBearerValidatorCallbackHandler.VERIFICATION_KEY_RESOLVER_CACHE}} after 
the server is shut down. On restart, as the server's config is unchanged, the 
closed {{VerificationKeyResolver}} gets reused. The 
{{ScheduledThreadPoolExecutor}} is already terminated, so the init call fails.
 
A reproducer for this problem is found here: 
[https://github.com/k-wall/oauth_bearer_leak/blob/main/src/main/java/OAuthBearerValidatorLeak.java#L51]

The reproducer can be used with this OAuth Server:

{{docker run --rm -p 8080:8080 ghcr.io/navikt/mock-oauth2-server:2.1.8}}

 

{code:java}
Exception in thread "main" org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: The OAuth validator configuration encountered an error when initializing the VerificationKeyResolver
    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:184)
    at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
    at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
    at kafka.network.Processor.<init>(SocketServer.scala:973)
    at kafka.network.Acceptor.newProcessor(SocketServer.scala:879)
    at kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:849)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
    at kafka.network.Acceptor.addProcessors(SocketServer.scala:848)
    at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:523)
    at kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:251)
    at kafka.network.SocketServer.$anonfun$new$31(SocketServer.scala:175)
    at kafka.network.SocketServer.$anonfun$new$31$adapted(SocketServer.scala:175)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
    at kafka.network.SocketServer.<init>(SocketServer.scala:175)
    at kafka.server.BrokerServer.startup(BrokerServer.scala:255)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
    at scala.Option.foreach(Option.scala:437)
    at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
    at OAuthBearerValidatorLeak.main(OAuthBearerValidatorLeak.java:51)
Caused by: org.apache.kafka.common.KafkaException: The OAuth validator configuration encountered an error when initializing the VerificationKeyResolver
    at org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallbackHandler.init(OAuthBearerValidatorCallbackHandler.java:146)
    at org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallbackHandler.configure(OAuthBearerValidatorCallbackHandler.java:136)
    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:151)
    ... 21 more
Caused by: java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4f66ffc8[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@1bc49bc5[Wrapped task = org.apache.kafka.common.security.oauthbearer.internals.secured.RefreshingHttpsJwks$$Lambda/0x0001373c7c88@7b6e5c12]] rejected from java.util.concurrent.ScheduledThreadPoolExecutor@39e67516[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2081)
    at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:841)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.scheduleAtFixedRate(ScheduledThreadPoolExecutor.java:632)
    at java.base/java.util.concurrent.Executors$DelegatedScheduledExecutorService.scheduleAtFixedRate(Executors.java:870)
    at org.apache.kafka.common.security.oauthbearer.internals.secured.RefreshingHttpsJwks.init(RefreshingHttpsJwks.java:198)
{code}
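The failure mode reduces to reusing a terminated scheduler from a cache. A minimal self-contained sketch (illustrative only, not the Kafka code path):

```java
import java.util.concurrent.*;

// Sketch of the bug: a cached object holds a scheduled executor that was
// shut down on the first server stop; reusing it on "restart" makes
// scheduleAtFixedRate throw RejectedExecutionException.
public class ClosedExecutorReuse {
    public static void main(String[] args) {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.shutdown(); // first shutdown leaves the executor terminated

        try {
            // The "restart" path reuses the cached, already-terminated executor.
            executor.scheduleAtFixedRate(() -> { }, 0, 60, TimeUnit.SECONDS);
            System.out.println("scheduled");
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: executor already terminated");
        }
    }
}
```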

[jira] [Assigned] (KAFKA-17094) Make it possible to list registered KRaft nodes in order to know which nodes should be unregistered

2024-07-14 Thread Rikhi Ram Satnami (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rikhi Ram Satnami reassigned KAFKA-17094:
-

Assignee: Rikhi Ram Satnami

> Make it possible to list registered KRaft nodes in order to know which nodes 
> should be unregistered
> ---
>
> Key: KAFKA-17094
> URL: https://issues.apache.org/jira/browse/KAFKA-17094
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.1
>Reporter: Jakub Scholz
>Assignee: Rikhi Ram Satnami
>Priority: Blocker
> Fix For: 3.9.0
>
>
> Kafka seems to require nodes that are removed from the cluster to be 
> unregistered using the Kafka Admin API. If they are not unregistered, you 
> might run into problems later. For example, after an upgrade, when you try to 
> bump the KRaft metadata version, you might get an error like this:
>  
> {code:java}
> org.apache.kafka.common.errors.InvalidUpdateVersionException: Invalid update 
> version 19 for feature metadata.version. Broker 3002 only supports versions 
> 1-14 {code}
> In this case, 3002 is an old node that was removed before the upgrade and 
> doesn't support the KRaft metadata version 19 and blocks the metadata update.
>  
> However, it seems to be impossible to list the registered nodes in order to 
> unregister them:
>  * The describe cluster metadata request in the Admin API seems to return 
> only the IDs of running brokers
>  * The describe metadata quorum command seems to list the removed nodes in 
> the list of observers. But it does so only until the controller nodes are 
> restarted.
> If Kafka expects the inactive nodes to be registered, it should provide a 
> list of the registered nodes so that users can check which nodes to 
> unregister.





Re: [PR] MINOR: Add more debug logging to EOSUncleanShutdownIntegrationTest [kafka]

2024-07-14 Thread via GitHub


mjsax merged PR #16490:
URL: https://github.com/apache/kafka/pull/16490





[PR] KAFKA-17056 Convert producer state metadata schemas to use generated protocol [kafka]

2024-07-14 Thread via GitHub


brandboat opened a new pull request, #16592:
URL: https://github.com/apache/kafka/pull/16592

   related to https://issues.apache.org/jira/browse/KAFKA-17056,
   
   Convert producer state metadata schemas to use generated protocol.
   
https://github.com/apache/kafka/blob/33f5995ec379f0d18c6981106838c605ee94be7f/storage/src/main/java/org/apache/kafka/storage/internals/log/ProducerStateManager.java#L94
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[PR] MINOR: Fix the typo in RaftEventSimulationTest.java and ControllerNode.java [kafka]

2024-07-14 Thread via GitHub


soravolk opened a new pull request, #16591:
URL: https://github.com/apache/kafka/pull/16591

   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[PR] KAFKA-17122 Change the type of `clusterId` from `UUID` to `String` [kafka]

2024-07-14 Thread via GitHub


soravolk opened a new pull request, #16590:
URL: https://github.com/apache/kafka/pull/16590

   (no comment)





[PR] KAFKA-17129: Revisit FindCoordinatorResponse in KafkaConsumerTest [kafka]

2024-07-14 Thread via GitHub


FrankYang0529 opened a new pull request, #16589:
URL: https://github.com/apache/kafka/pull/16589

   Currently, we have many test cases that put 
`client.prepareResponseFrom(FindCoordinatorResponse.prepareResponse(...), 
...);` after `newConsumer`. If the `FutureResponse` is not in `MockClient`, the 
request can't get a response. This may cause some flaky tests.

   When starting a `newConsumer` for `AsyncKafkaConsumer`, it always sends 
`FindCoordinatorRequest`.

   In `MockClient#send`, if a `FutureResponse` is missing, the request will be 
added to `requests`. Even if `client.prepareResponseFrom` adds a new 
`FutureResponse`, it can't be matched to an existing request, so the request 
can't get a response.

   It's better to put 
`client.prepareResponseFrom(FindCoordinatorResponse.prepareResponse(...), 
...);` before `newConsumer`, so we don't miss any `FindCoordinatorRequest`.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] MINOR: Improve config-streams.html, remove extra [kafka]

2024-07-14 Thread via GitHub


dujian0068 commented on PR #16577:
URL: https://github.com/apache/kafka/pull/16577#issuecomment-2227303598

   > @dujian0068 Could you attach the output after the fix? The fix looks good, but it would be best to see the HTML result
   
   hello:
   I have deployed the fixed HTML page here:
   
http://142.171.156.169:8080/dev/documentation/streams/developer-guide/config-streams.html
   thank you





Re: [PR] KAFKA-16791: Add thread detection to ClusterTestExtensions [kafka]

2024-07-14 Thread via GitHub


FrankYang0529 commented on code in PR #16499:
URL: https://github.com/apache/kafka/pull/16499#discussion_r1677073513


##
core/src/test/java/kafka/test/junit/ClusterTestExtensions.java:
##
@@ -119,7 +131,30 @@ public Stream 
provideTestTemplateInvocationContex
 return generatedContexts.stream();
 }
 
+@Override
+@SuppressWarnings("booleanexpressioncomplexity")
+public void beforeEach(ExtensionContext context) {
+detectThreadLeak = DetectThreadLeak.of(thread -> {
+String name = thread.getName();
+return !name.startsWith(METRICS_METER_TICK_THREAD_PREFIX) &&

Review Comment:
   Updated it. Thanks.






[jira] [Updated] (KAFKA-17110) Enable valid test case in KafkaConsumerTest for AsyncKafkaConsumer

2024-07-14 Thread PoAn Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PoAn Yang updated KAFKA-17110:
--
Description: Enable testResetUsingAutoResetPolicy, 
verifyPollTimesOutDuringMetadataUpdate, 
testPollThrowsInterruptExceptionIfInterrupted, 
fetchResponseWithUnexpectedPartitionIsIgnored, 
testManualAssignmentChangeWithAutoCommitEnabled, 
testManualAssignmentChangeWithAutoCommitDisabled, testOffsetOfPausedPartitions, 
and testCloseShouldBeIdempotent for AsyncKafkaConsumer  (was: Enable 
testResetToCommittedOffset, testResetUsingAutoResetPolicy, 
testCommitsFetchedDuringAssign, verifyPollTimesOutDuringMetadataUpdate, 
testFetchStableOffsetThrowInCommitted, testFetchStableOffsetThrowInPosition, 
testNoCommittedOffsets, testPollThrowsInterruptExceptionIfInterrupted, 
fetchResponseWithUnexpectedPartitionIsIgnored, 
testManualAssignmentChangeWithAutoCommitEnabled, 
testManualAssignmentChangeWithAutoCommitDisabled, testOffsetOfPausedPartitions, 
and testCloseShouldBeIdempotent for AsyncKafkaConsumer)

> Enable valid test case in KafkaConsumerTest for AsyncKafkaConsumer
> --
>
> Key: KAFKA-17110
> URL: https://issues.apache.org/jira/browse/KAFKA-17110
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: PoAn Yang
>Assignee: PoAn Yang
>Priority: Minor
> Fix For: 3.9.0
>
>
> Enable testResetUsingAutoResetPolicy, verifyPollTimesOutDuringMetadataUpdate, 
> testPollThrowsInterruptExceptionIfInterrupted, 
> fetchResponseWithUnexpectedPartitionIsIgnored, 
> testManualAssignmentChangeWithAutoCommitEnabled, 
> testManualAssignmentChangeWithAutoCommitDisabled, 
> testOffsetOfPausedPartitions, and testCloseShouldBeIdempotent for 
> AsyncKafkaConsumer


